Enumerating all of 3x3x3 cube space - required performance characteristics
Submitted by Jerry Bryan on Tue, 10/07/2008 - 11:24.
I've been playing around with what it would take to enumerate the entire cube space for the standard 3x3x3 cube, using a large number of computers working together. I've come up with what I think is a reasonable approach, a bit past the bare edge of what might be possible with today's technology, but perhaps not too far past. The assumption will be that we want to solve the problem in one year, with all the machines involved in the project fully dedicated to the problem for the whole year.
The following is not a complete description of my proposed approach, but rather an analysis of the performance characteristics that would be required. There is not much advantage in coming up with an approach that would require billions of years to compute.
Cube space contains the well known 4.3E19 positions (approximately). Symmetry and antisymmetry reduce this to about 4.5E17. My proposed algorithm would require 934778 computers (a little less than a million), 934778 being the number of unique corner positions reduced by symmetry and antisymmetry. Each computer would have to store the distance from Start for about 4.9E11 edge positions, where 490497638400 is one half the size of the edge group. At one byte per position, that is about half a terabyte, which is at the level of a large disk drive. Using memory would be better than using disk, but it would surely be easier to find a million machines with a half-terabyte disk than a million machines with a half terabyte of memory. Storage requirements could in principle be reduced to two bits per position. I worry, though, that if this project were to be put in place for real, the full byte per position would be necessary to allow the computers to take checkpoints and to be restartable from a checkpoint.
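The counts above can be reproduced with a short Python sketch (the variable names are mine, purely for illustration): the corner and edge group orders come from the standard 8!*3^7 and 12!*2^11 formulas, with the factor of two linking them via the parity constraint.

```python
from math import factorial

# Corner arrangements: 8 corners permuted, 7 independent twists.
corners = factorial(8) * 3**7           # 88,179,840
# Edge arrangements: 12 edges permuted, 11 independent flips.
edges = factorial(12) * 2**11           # 980,995,276,800
# Corner and edge permutation parities must match, hence the /2.
total = corners * edges // 2            # ~4.3E19

print(f"total cube positions:   {total:.3e}")
print(f"reduced by symmetry+antisymmetry (~/96): {total / 96:.3e}")

# Each machine stores a distance for half the edge group,
# at one byte per position.
half_edges = edges // 2                 # 490,497,638,400
print(f"edge positions per machine: {half_edges}")
print(f"storage at 1 byte/position: {half_edges / 2**40:.2f} TiB")
```

The ~/96 reduction is approximate because some positions are fixed by symmetries; the exact reduced count is slightly larger than total/96, which is why 934778 exceeds 88179840/96.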
There are 31536000 seconds in a year. About 1.43E10 positions would have to be calculated per second overall, and about 15283 positions per second per machine. The figure of about 15000 positions per second per machine certainly seems doable based on my experience through the years in enumerating entire cube spaces. That is a very different speed regime from finding optimal solutions for individual positions, which typically takes much more than one second per position.
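The throughput figures follow directly from the reduced space size, the length of a year, and the number of machines. A quick check (again just an illustrative sketch):

```python
# Reduced cube space after symmetry and antisymmetry (~4.5E17).
reduced_positions = 43252003274489856000 / 96

seconds_per_year = 365 * 24 * 3600      # 31,536,000
machines = 934778                       # unique corner positions mod symmetry

per_second_total = reduced_positions / seconds_per_year
per_second_per_machine = per_second_total / machines

print(f"positions/sec overall:     {per_second_total:.2e}")   # ~1.43e10
print(f"positions/sec per machine: {per_second_per_machine:.0f}")  # ~15283
```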
Finally, all the machines would have to send large amounts of data to each other. Representing an edge position requires log2(490497638400), or 39 bits, rounded up; essentially, each edge position could be stored in five bytes. Each machine would be sending and receiving about 15000*5 = 75000 bytes of data per second to and from other machines. Network bandwidth is usually stated in bits per second rather than bytes per second, so each machine would have to send and receive about 600000 bits of data per second. (Within a machine, what would have to be stored would be distances from Start, so either one byte or maybe even two bits would suffice for each position. The five bytes per position would arise only when information had to be exchanged between machines.)
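The bandwidth estimate can be checked the same way, using the ~15000 positions per second per machine figure from above (the names below are mine, for illustration only):

```python
from math import ceil, log2

half_edges = 490_497_638_400                      # edge positions per machine
bits_per_position = ceil(log2(half_edges))        # 39 bits
bytes_per_position = (bits_per_position + 7) // 8 # 5 bytes on the wire

rate = 15283                                      # positions/sec per machine
bytes_per_sec = rate * bytes_per_position         # ~75,000 bytes/sec each way
bits_per_sec = bytes_per_sec * 8                  # ~600,000 bits/sec each way

print(bits_per_position, bytes_per_position, bytes_per_sec, bits_per_sec)
```

So roughly 600 kbit/s in each direction per machine, sustained for a year.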
For each computing resource - processing speed, memory, disk space, number of computers, and network bandwidth - the requirements would be challenging but not utterly unreasonable in perhaps the next decade or two. Even as all the required computing resources get bigger, faster, and cheaper, I think the key obstacle would be coming up with the million machines in something like the style of seti@home. seti@home runs as a screensaver in the background using otherwise idle resources. The approach I'm talking about would require dedicating many more resources on each machine than the seti@home approach does. It seems to me that to use something like the seti@home approach to enumerate all of cube space, cube space would have to be enumerated by finding optimal solutions one position at a time, which seems prohibitively slow. On the other hand, the seti@home approach allows all the machines to run pretty much independently and asynchronously of each other. The approach I'm talking about would require all million machines to be running cooperatively and somewhat synchronously with each other at all times.