Joined: 9/22/2009
Posts: 18
This probably sounds dumb (I don't know much about CS & math), but why is it not viable to use a brute-force backwards-induction method? Doesn't the complexity rise linearly that way? Let's say we consider Mario Bros. We start at the last controllable frame of the game, try every possible input combination (or those that make sense), find the fastest one, then use that and go back one frame - rinse and repeat. That way all the games should be solved relatively quickly... or where do I go wrong?
Player (36)
Joined: 9/11/2004
Posts: 2623
1) It's not possible to emulate backwards. 2) There are multiple possible and equivalent ending states.
Build a man a fire, warm him for a day, Set a man on fire, warm him for the rest of his life.
Blue_Streak
He/Him
Joined: 5/5/2007
Posts: 12
Location: Mesa, Arizona, USA
wuschelbeutel1 wrote:
This probably sounds dumb (I don't know much about CS & math), but why is it not viable to use a brute-force backwards-induction method? Doesn't the complexity rise linearly that way? Let's say we consider Mario Bros. We start at the last controllable frame of the game, try every possible input combination (or those that make sense), find the fastest one, then use that and go back one frame - rinse and repeat. That way all the games should be solved relatively quickly... or where do I go wrong?
I can see where you think the linear complexity comes in. Let's say that there are 20 blue spheres and that each of these 20 could be the final one, but let's just pick one to be the final sphere for now. There are then 19 different paths from a blue sphere to our final blue sphere. We would determine which of these is quickest and then go to the next step back, which would contain 18 possible ways to get to sphere 19. Of these 18, we'd again find one to be the fastest and go back another step along that path... and so on to the beginning. This is identical to solving the problem in the forward direction by always moving to the next closest sphere (which I will henceforth call the nearest-neighbor method).

But the problem is not that simple. There are plenty of instances in TASes where seemingly suboptimal choices are made at one point in order to yield a faster outcome later. In general, there could be a step that is longer than the locally optimal one but leads to a series of much shorter steps afterwards.

The following two images show a sample traveling salesman problem with the starting point at A (we can ignore the need to return to A at the end). The first always takes the path to the next closest point, yet this is suboptimal compared to the second version (even ignoring the last step). If this is hard to see visually, you can try aligning the arrows in a long row.

So we cannot solve the problem optimally by strictly finding the quickest time from one blue sphere to the next and taking that move as the next step. It is possible that the nearest-neighbor method happens to yield the optimal solution, but in general it will not, and therefore we must search a far larger space of possibilities, which makes the complexity scale exponentially rather than linearly.

You may wonder whether the fact that the problem is restricted to a rectangular grid that wraps around simplifies it in a way that makes the nearest-neighbor method always optimal.
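To make the nearest-neighbor pitfall concrete, here is a small sketch in one dimension (the positions are invented for illustration) comparing the greedy tour against a brute-force check of every visiting order:

```python
from itertools import permutations

def path_length(order):
    """Total distance walked when visiting 1-D positions in this order."""
    return sum(abs(b - a) for a, b in zip(order, order[1:]))

def nearest_neighbor(start, points):
    """Greedy: always walk to the closest unvisited point next."""
    tour, remaining = [start], list(points)
    while remaining:
        nxt = min(remaining, key=lambda p: abs(p - tour[-1]))
        tour.append(nxt)
        remaining.remove(nxt)
    return tour

def brute_force(start, points):
    """Try every visiting order; feasible only for a handful of points."""
    return min(([start] + list(p) for p in permutations(points)),
               key=path_length)

# Start at 0 with "spheres" at 1, -2 and 5 (made-up positions).
greedy = nearest_neighbor(0, [1, -2, 5])
best = brute_force(0, [1, -2, 5])
print(greedy, path_length(greedy))  # [0, 1, -2, 5] 11
print(best, path_length(best))      # [0, -2, 1, 5] 9
```

The greedy tour grabs the nearby point at 1 first and pays for it later; the exhaustive search accepts a longer first step (out to -2) and wins overall, which is exactly the trap described above.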
I would conjecture that it does not, although I don't have a proof at the moment.

On a related yet important note, even if the optimal solution were found using the nearest-neighbor method, there are higher-level rules in the game that would make the solution wrong. One example is turning blue spheres into rings. Let's say Sonic is in the position shown above. The shortest move to a blue sphere is to walk straight ahead, so that would be move one. There are then three blue spheres at equal distances that could be used for move two, but because turning takes extra frames we would say the sphere directly ahead is the next fastest one to reach. However, we know that it's quicker in the long run to go to either the left or right sphere and walk around the perimeter, turning them all into rings. A system that had no knowledge of this rule and used the nearest-neighbor method would find the wrong answer.

Thus, we have to build some knowledge of the higher-level game mechanics into a special stage solver. Computationally, this is a real blessing, because we can be reasonably confident that we know the fastest solution to any enclosing pattern of blue spheres, although it's still a problem to figure out where the optimal entry and exit points of a formation occur, since these depend on what happened previously and on what's left to do in the stage.

So in summary: a sequence of local optimizations does not necessarily give us the global minimum, and the presence of higher-level rules requires us to compute optimal solutions for larger substructures of spheres. The first condition makes solving this a pain in the ass, but the second is a real gift, because we can use our complete knowledge of the game's rules to simplify the problem somewhat.
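One way to picture that "gift" is a solver that plans over precomputed substructures instead of individual spheres. Everything below (the pattern names, frame costs, and the stage itself) is invented purely for illustration; a real solver would derive these values from the stage layout and track entry/exit points too:

```python
# Invented for illustration: each known formation is collapsed into one
# "macro-move" whose optimal traversal cost was worked out ahead of time.
# Real costs would come from analyzing the actual special stage.

# pattern name -> precomputed optimal cost in frames (made-up numbers)
MACRO_COSTS = {
    "lone_sphere": 4,    # walk up and collect a single sphere
    "4x4_ring": 60,      # circle the perimeter, converting it all to rings
    "zigzag_row": 35,    # weave through a staggered row of spheres
}

def plan_cost(formations):
    """Total cost of clearing a stage described as a list of formations.

    Searching over a handful of macro-moves is vastly cheaper than
    searching over every per-frame input sequence.
    """
    return sum(MACRO_COSTS[f] for f in formations)

stage = ["lone_sphere", "4x4_ring", "zigzag_row", "lone_sphere"]
print(plan_cost(stage))  # 103
```

The point is only the shape of the idea: once the fastest way through an enclosing pattern is known, the remaining search operates on a much smaller space of formations rather than on raw inputs.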