I can see where you're coming from with this, Warp, and I tend to be open to everything, but I wouldn't necessarily characterise the trend you describe as "worrying".
The core of the issue with these "meta" game techniques is that video games are, in principle, arbitrary programs running on a general computing platform, and many of the underlying abstractions these games rely on (e.g. (non-)volatile memory, time, and computing performance) leak through. This makes it impossible to say where the game starts and ends: where the boundary lies between the virtual world, in which we think about concepts such as characters, mechanics, rules, abilities and goals, and the incidental facts of its implementation on a general-purpose computer.
As has been stated already, there's no one way to interpret or execute a game (how is "the" memory initialized, how quickly will computations be done, etc.), given how underspecified the environment in which it runs normally is (mostly out of pragmatism). The rigidity of old consoles masked this issue to a certain degree, because even if one broke out of the boundaries of the game world, the underlying system would at least behave identically: ROM would not be writable, the amount of free writable memory would be the same, and computations would run at the same speed (console hardware revisions exist, but I'm not sure how to gauge their influence at the moment).
This certainty is only going to decrease as games running on modern OSes and vastly more varied hardware are opened up to TASing. We're not quite there yet, and there'll be many other issues to conquer, such as the frequently changing nature of modern video games through online delivery of patches (something I guess the unassisted speedrunning community has had to put up with for quite a while now, so there's much to learn from them).
For instance, arbitrary code execution in these environments would be very circumstantial (depending on permissions, security features, and other operating-system abstractions and properties, such as how memory allocation is done), but also very powerful.
So, are you willing to view video games as general-purpose computer programs, or are you constraining yourself to high-level thinking about the virtual world, even though there never was a specification of what that world entails to begin with? We cannot uniquely determine one; we can only conjecture it (which is what you do when you talk about in-game and out-of-game mechanics).
Most speedrunners and followers probably sit somewhere in between: obviously, "accidental" corruption of important game memory by some external cause (e.g. Cheat Engine) would be out of the question, but what about killing a game while it is in the middle of a non-atomic filesystem transaction?
Is process termination a game control? Not if we view the game as a virtual world, but if we view it as a computer program, it is.
What if the game has a chat feature that allows you to corrupt memory by inputting certain character sequences? In that case one breaks out of the notion of a virtual world in which concepts like memory and its layout don't exist, but not out of the notion of the virtual world as a certain execution state on a computer, where suddenly entirely different things matter.
(I'm not even touching on the question of what, in these two views, would count as game completion!)
In the end it is a question of definition: to you it is an unfortunate fact that game makers didn't (or couldn't) make their virtual worlds "watertight"; to others it is an opportunity that opens up new avenues for optimisation.
I think that if we promote and encourage the "category aspect" of speedrunning (the idea that there is more than one valid way to interpret, and by extension to finish, a video game), we can deal with the complications arising from the issues mentioned above. So be vocal about (and active in!) the categories you want to see most, so they don't go away.
All syllogisms have three parts, therefore this is not a syllogism.