Sure, I'm willing to share it. It's not the most user-friendly thing ever, but it does its job. Not sure what you mean by sharing or degenerate states, but I do avoid positions that clearly don't have enough moves left to match up the pieces. I haven't tried extremely large depth-first searches but I imagine they'd take forever without some heuristics about what moves to try first.
http://pastebin.com/HRHi4698
Here are a lot of details about the program:
The format of a puzzle is a 256-length string of digits: 0 is empty space, 1 is a wall, 2-4 are colors that need to be paired up to solve the puzzle, and 5-7 are any other colors that don't need to be paired up. The string can contain extra newlines (they're ignored), so I usually break it into the 16 rows for readability. After the string, you can put a | and then a number to give a movecount goal which the program will try to beat; the default is 100. The input file is at least one of these puzzles with newlines between them, and I run the program with "denki.exe < levels.txt".
The basic heuristic is a "minimum distance" function, which puts a lower bound on the number of moves it must take to pair up the necessary colors. This provides some basic pruning, as we can throw out positions where moves so far + minimum distance >= goal. This is only based on the walls, not on other pieces, so it works really well on mazelike puzzles and not as well on puzzles with very large pieces that can only move a little.
There is also a "looksSolvable" function. Normally this just returns 1 (true), but for some puzzles writing a small custom function and recompiling the program can help by forcing the program to avoid positions that I know will make the puzzle unsolvable, like pairing certain pieces too early.
There are a bunch of different solving functions:
- iterativeDeepeningSolve: (old) try a depth-first search with depth 0, depth 1, ...
- singleMapSolve: try a breadth-first search, storing every position in a hashmap, so we can ignore positions we've already seen. If the map gets too big, kill the search.
- moduloMapSolve: same as singleMapSolve, but keep a small rotating list of maps storing the positions in depths i, i+1, i+2, etc. and ignore positions in any of the currently used maps. If the maps get too big in total, kill the search.
- moduloMapSuboptimalSolve: same as moduloMapSolve, but have a size limit on each map, and when it gets too big throw out positions with high (minimum distance + penalty * number of groups) to keep the "best" positions. I use this one a lot, and penalties 0, 1, and 5 seem good - often one will catch a solution the others won't. This one will never run out of memory but of course it will often miss fast solutions because it throws out intermediate states. 10000 states per map works okay; 100k is better but very slow.
- iterativeDeepeningSolve2: try a depth-first search with depth 0, depth 1, etc., caching recently seen positions so we can avoid re-searching them. Shouldn't run out of memory but may take extremely long to find optimal solutions.
Right now I would use moduloMapSolve for optimal solves and moduloMapSuboptimalSolve for suboptimal.
Some things I haven't figured out yet:
- Figure out a minimum-distance function that takes other pieces into account, such as knowing we have to move around the big fixed piece in 6-3 or 3-13.
- Write a fast algorithm for "can this set of pieces still form into this shape". This would make build-a-shape bonus puzzles solvable, as well as some of the harder puzzles where the board/piece geometry means a specific shape must be made to complete the puzzle.
- Handle the case of a piece blocking a hallway, like in the level 2 bonus levels. If something is stuck so that we can't go around it, and we still have to match pieces on each side of the obstruction, prune the position.
- Figure out which pieces can go where and prune positions based on that. For instance, if the only way between two areas is a 1-wide horizontal hallway then we can't match vertical 1x2 blocks on either side of it.