Exhaustive search of the cube space

I have been dabbling with the numbers, and here is what I have come up with. Let's assume that Moore's law will hold for another X years. This is not unreasonable given advances in parallel processor designs and in quantum and biological computing. Under this model, in X years computers could have N = 2^(X/1.5) times the computing power of today's machines. If a respectable computer of today can optimally solve a cube in about 30 minutes (1800 seconds) with the optimal solvers of today, then a computer from that possible future could solve one in about 1800/N seconds.

Now, suppose we hack away at the problem continuously until these "God computers" come about: because the available power doubles every 1.5 years, all the work done along the way (1 + 2 + 4 + ... is roughly twice the last term) nets us only the equivalent of about two doubling periods, or about 36 months, of computing time on the new machines. Covering the reduced cube space of 450,541,810,590,509,978 positions within that window works out to solving around 4.8 billion cubes per second. If we also consider farming this out to many computers of equivalent capability, we can solve for a potential time frame for exhaustively searching the cube space.
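As a rough illustration, here is a minimal Python sketch of that model. The doubling period, today's solve time, and the reduced-space size are the figures quoted above; the 36-month window and the average month length are my own reading of the argument, so treat the output as approximate.

```python
# Sketch of the Moore's-law model described above (approximate figures).
REDUCED_SPACE = 450_541_810_590_509_978   # reduced cube space, from the post
SOLVE_SECONDS_TODAY = 1800                # ~30 minutes per optimal solve today
DOUBLING_YEARS = 1.5                      # assumed Moore's-law doubling period
SECONDS_PER_MONTH = 30.44 * 86_400        # average month length (assumption)

def speedup(years):
    """N = 2^(X/1.5): relative power of a machine `years` from now."""
    return 2 ** (years / DOUBLING_YEARS)

def seconds_per_solve(years):
    """Seconds per optimal solve on a machine `years` from now."""
    return SOLVE_SECONDS_TODAY / speedup(years)

# All the work done while waiting sums to roughly two doubling periods
# (about 36 months) of computing on the final machines.
window_seconds = 36 * SECONDS_PER_MONTH

# Aggregate rate needed to cover the reduced space within that window.
print(f"{REDUCED_SPACE / window_seconds:.2e} cubes/second needed")   # ~4.8e9

for x in (15, 20, 25, 30, 35):
    print(f"{x:>2} years out: {seconds_per_solve(x):.5f} s per optimal solve")
```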

Note: this model does not take into account a logistic growth in the number of computers as time goes on. If someone would like that done, I will do it.

Here are some solved possibilities (target time frame and the number of users needed):

15 years and 166 million users
20 years and 16.5 million users
25 years and 1.6 million users
30 years and 162,000 users
35 years and 16,112 users

It would seem that 35 years could be our target, but knowing the ingenuity of humankind, I would expect the exhaustive search to be completed earlier.

On a side note, this could be extended to another, easier problem: proving a lower bound for the maximal number of moves needed to solve the cube.

Let's say the computer of today can suboptimally solve a random cube in 20 moves in 3 seconds on average. Then the numbers work out much better (a quick sketch of the scaling follows the list below)...

15 years, 277,200 users
20 years, 27,500 users
25 years, 2,730 users
30 years, 270 users
35 years, 27 users!
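The improvement comes straight from the ratio of solve times: 3 seconds suboptimal instead of 1800 seconds optimal means each machine covers 600 times as many positions, so roughly 600 times fewer users are needed for the same target year. A small sketch (small differences from the table above are just rounding):

```python
# Scaling the optimal-solve user counts by the 1800 s / 3 s = 600x speed ratio.
factor = 1800 / 3                                      # 600
optimal_users = {15: 166_000_000, 20: 16_500_000,      # figures from the first table
                 25: 1_600_000, 30: 162_000, 35: 16_112}
for years, users in optimal_users.items():
    print(f"{years} years: about {users / factor:,.0f} users")
```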

Of course, if a position does not solve suboptimally in 3 seconds, we solve it optimally on a separate computer, and if the optimal length comes out to 21 moves or more, we have a new lower bound.

Questions/comments/corrections welcome!
I rounded to remain conservative.

Thank you for your time,
-Joe

I'm very optimistic that the search space for Rubik's cube can eventually be processed in its entirety, but I think it's very unlikely that such a search can ever be processed "one cube at a time".

Present-day "one cube at a time" solvers can take hours to solve one position. On the other hand, algorithms that enumerate cube space as a graph (often described as "searching a tree", although formally it's a graph rather than a tree) can already process thousands of positions per second.

For example, my current solver, which I used to calculate all positions up to 10f from Start and up to 13q from Start, processes about 12,000 equivalence class representatives per second on the fastest machine on which I run it. My equivalence class representatives take symmetry into account, but not anti-symmetry, so in effect the solver processes about 500,000 positions per second, or about 2 microseconds per position. It's a far cry from that speed to a "one at a time" solver that can take hours per position.
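As a quick sanity check on those rates (my arithmetic, not part of the original post): a symmetry class has at most 48 members, and classes of symmetric positions are smaller, which is presumably why 12,000 representatives per second corresponds to roughly 500,000 rather than the full 576,000 positions per second.

```python
# Converting equivalence-class representatives per second into positions per second.
reps_per_second = 12_000
max_class_size = 48                       # size of the symmetry group used here
print(reps_per_second * max_class_size)   # 576,000: an upper bound on positions/second
print(f"{1 / 500_000:.1e} s/position")    # ~2e-06, i.e. about 2 microseconds
```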

My current solver was written over ten years ago, and was designed to minimize memory usage rather than to maximize speed. I'm sure it could be sped up a good bit by optimizing for speed, what with the larger memory on modern machines, and the speed could certainly be doubled by taking anti-symmetry into account. And other solvers that have been described on this forum that operate in the "enumerate all of cube space" mode seem to be at least as fast as mine, if not a good bit faster.

But let's just do a little arithmetic. In round numbers, the size of cube space is 4.3 x 10^19. Reduced by symmetry and anti-symmetry, the size reduces to about 4.3 x 10^17 (these are rounded off, "back of the envelope" type of calculations). Let's suppose we could someday produce 10^6 equivalence class representatives per second on one machine (not unreasonable to suppose -- that's about 100 times faster than my current solver). The calculations would then take about 4.3 x 10^11 seconds on one machine. There are about 3.1 x 10^7 seconds per year. So the calculation of cube space would take about 14,000 years on one machine. If the problem were infinitely decomposable and amenable to running with all of its pieces in parallel, then 14,000 machines could solve the problem in one year.
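Spelling out that back-of-the-envelope arithmetic with the same rounded figures:

```python
# Back-of-the-envelope: how long a full enumeration of cube space would take.
full_space    = 4.3e19   # size of cube space, rounded (for reference only)
reduced_space = 4.3e17   # reduced by symmetry and anti-symmetry, rounded
reps_per_sec  = 1e6      # hypothetical future single-machine rate
secs_per_year = 3.1e7

seconds_on_one_machine = reduced_space / reps_per_sec        # ~4.3e11 seconds
years_on_one_machine = seconds_on_one_machine / secs_per_year
print(f"~{seconds_on_one_machine:.1e} s, i.e. ~{years_on_one_machine:,.0f} years on one machine")
print(f"or about one year on {years_on_one_machine:,.0f} machines, if perfectly parallel")
```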

Another way to look at the problem is more like you did, supposing a certain amount of improvement every few years. I look at it as: we can probably push the search one more level away from Start about every ten years. Looked at that way, we are still about 100 years away from a solution.

Coset approaches

I think the recent coset approaches (the edges coset I did and the new (U,D,F2,B2,R2,L2) approach that Silviu used) may yield something quite usable. For the edges coset, I see runs that take about a day for 44M positions with no symmetry; that's 500 positions a second, and most of these positions are at 18f. (This is M+inv representatives; if you want total positions, it is solving about 50,000 positions a second.) We are still working on the new approach, but it's quite possible it will be several orders of magnitude faster.
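Restating that throughput as a quick check (the factor of 96 -- 48 symmetries times inversion for M+inv representatives -- is my assumption about the bookkeeping):

```python
# One day's run of the edges-coset solver, restated as rates.
reps_per_run = 44_000_000                 # representatives solved in about a day
seconds_per_day = 86_400
reps_per_second = reps_per_run / seconds_per_day      # ~509 per second
positions_per_second = reps_per_second * 96           # each rep stands for up to 96 positions
print(f"{reps_per_second:.0f} representatives/s, ~{positions_per_second:,.0f} positions/s")
```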

The nice thing about the coset approach is that you end up with a totally embarrassingly parallel solution: at least for the edges coset, each coset exploration is completely independent of the others and solves a completely disjoint set of positions.

So I do believe, if there is sufficient interest, we will have explored all of cube space in a relatively short time.

Of course, there are different questions we might want to answer. For instance, if the only thing we want to know is the diameter of the graph, then most coset explorations, which have their deepest positions at 19, also eliminate all neighboring cosets (which can then be no deeper than 20). This provides another factor of ten or more. If, on the other hand, we want an exact count of positions at each depth, then we'd have to run every coset, and that would be quite a bit more expensive.

I suspect what will happen is either that a 21f* position will be found, somehow, at which point it will probably be accepted that that is the diameter, or else a reasonably significant fraction of the space will be explored with no 21f* found, and people will simply lose interest as the "chance" of there being a distance-21 position diminishes.

Between Kociemba's symmetry searches and Silviu's and my coset searches, we've only found several hundred (almost 700) distance-20 positions altogether, and that's with many billions of "candidate" positions, ones that seem more likely to generate deep cubes. As Kociemba has stated, most antipodes of subgroups of the cube exhibit high symmetry; it seems unlikely that of all the distance 21 positions, none of them exhibit symmetry of four or greater. (Of course Kociemba is still finishing up some symmetry classes of symmetry 4, and I am still running edge cosets, so anything can happen.)

Or, in a few years or so, when we end up with systems-on-a-chip with a gigabyte of memory and a good CPU, someone will probably put together a few thousand of these and just set them to work. No hard disk is needed, and very little network I/O (do this coset, do that coset).

Very nice!!!

There is an interesting improvement one can make to this. You can explore each coset only up to depth 18 and save all positions that remain unfound, i.e. the ones at depth 19 and up for that coset. After you have explored every coset, you collect the remaining positions into a set. Then you take the first position in the remaining set and multiply it by each generator: if some multiplication takes you out of the set, then this position is at depth 19; if no multiplication takes you out of the set, then its depth is greater than 19. In this manner one can find all positions at depth 19. Afterwards, continue with the same approach to find the depth-20 positions, and so on.

The main disadvantage of the coset exploration is that at the higher depths you find too many solutions for the same position, and this makes it impossible to run a coset to completion.

The improvement I mention is nothing else but a simple backward-search technique.
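A rough Python sketch of that backward search, under the assumption that the coset runs have already found every position at depth 18 or less; `multiply` and `generators` are placeholders for a real cube representation:

```python
def classify_by_depth(remaining, generators, multiply, start_depth=19):
    """Assign exact depths to the positions a depth-18 coset search left unfound.

    `remaining` must contain exactly the positions of depth >= start_depth.
    A position with some neighbour outside the set has a neighbour of depth
    <= start_depth - 1, so its own depth is exactly start_depth; all other
    positions are deeper and are handled in the next pass.
    """
    depths = {}
    depth = start_depth
    while remaining:
        this_level = {p for p in remaining
                      if any(multiply(p, g) not in remaining for g in generators)}
        for p in this_level:
            depths[p] = depth
        remaining -= this_level
        depth += 1
    return depths
```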

Several comments about Moore's law:

1. The end of Moore's law has been predicted several times before, and each time a technology breakthrough of some sort rescued Moore's law for a few more years. I think we are on a cusp where Moore's law is once again at considerable risk. The fundamental problem at this point is not that electronic components can't be made smaller and faster. The fundamental problem right now is heat: the faster a computer's clock runs, the more heat it generates. Indeed, the next big idea from Intel (for one example) is to make their chips run slower rather than faster. In order to continue making computers more powerful, a single chip will have (and in many cases already has) more than one processor. This technique goes by the name multi-core chip (I don't understand the name multi-core -- I would call it a multi-processor chip -- but Intel seems to identify the concept of a chip with the concept of a processor).

2. Moore's law was never about the speed of the chip. It was always about the density of circuits on a chip -- making transistors and other circuit components smaller, if you will. In that respect, Moore's law may continue for quite a few more years before it runs into some fundamental physical barrier. Of course, making circuits smaller generally allows them to run faster -- until you get to the point where there is more heat than can be dissipated. Maybe we just need to water-cool (or even liquid-nitrogen-cool) our desktop PCs.

3. For many years, Apple had a policy that they would never make a computer that required a fan -- fan noise is uncool, and all devices manufactured by Apple had to be cool (cf. the iPod). But Apple has long since had to abandon that policy, because faster chips couldn't be cooled any other way.

4. The bottom line right now is that Moore's law may continue for quite a few more years in terms of circuit density, but that doesn't mean that individual processors are going to get much faster than they are right now. IMHO.

5. I recently had the opportunity to visit a large supercomputer center. All the machines were roughly the size of a refrigerator (or several refrigerators, for the really big ones). Instead of exotic and super-high-speed designs (e.g., Cray), the machines all had thousands of processors each, and each of those thousands of processors was fairly mundane -- no faster than what most of us have on our desktops. The place reminded me of the old mainframe-type computing centers where I used to work. The computer room floor was many thousands of square feet, and it was dominated by huge air conditioning units. However mundane and low-tech the chips are, when you put many thousands of them in one computer cabinet, they generate enormous amounts of heat. So the main function of this huge computer room was to supply huge amounts of cold air (and in some cases, cold water) to these new-fangled supercomputers. As I said earlier, the current fundamental problem in building powerful computers is dissipating all that heat.

What about memory?

I've never thought seriously about this, but your estimates may well be rather conservative. Remember that the speed of these searches is roughly proportional to the size of the pruning tables that can be used: We shall (probably) be upgrading to computers with much more RAM over the next couple of decades!

Mike

There was a thread on twistypuzzles that briefly touched on this: http://twistypuzzles.com/forum/viewtopic.php?t=2270

Jaap's Puzzle Page:
http://www.geocities.com/jaapsch/puzzles/