Rubik’s Cube’s Math Secret

Teaching a computer to solve the classic brain-teaser yields unexpected insights
Andrew Winslow
“If someone later finds an algorithm to find the shortest sequence to solving a Rubik’s cube, it would solve one of the biggest problems—if not the biggest outstanding problem—in theoretical computer science,” Andrew Winslow says. Photo: Kelvin Ma
August 30, 2011

Solving a Rubik’s cube—the three-dimensional puzzle invented nearly 40 years ago by the Hungarian architect Ernő Rubik—isn’t the hardest thing. After all, the world’s record is just under six seconds. But trying to solve it with a computer is quite a different story. Given that there are 43 quintillion possible starting positions, crunching the numbers gives even the fastest computers a processing headache.

Just ask Andrew Winslow, G14, a doctoral student in theoretical computer science whose concentration is in computational geometry. He and a group of Boston-area researchers decided to figure out how a computer might most efficiently solve the Rubik’s cube—and not just the standard one with three squares per row, but ones with up to 17 squares per row. They came up with some surprising findings that relate to real-world problems.

“This is such a natural, classic, nerd sort of question,” says Winslow. “Everyone’s played with a Rubik’s cube, but how hard is it from a computer-science lens to solve?”

He and his colleagues found that in one sense, at least, solving complex Rubik’s cubes as they grow in size is slightly easier than other computer scientists had predicted. Previously, it was thought that if complex cubes of different sizes were scrambled into their “worst-case”—hardest to solve—state, the maximum number of moves needed to set them aright would rise proportionately as the number of squares per row increased. Scientists in general, including computer scientists, tend to think that problems get proportionately more difficult as they get bigger, Winslow points out.

But in a paper that the group will present at the 19th annual European Symposium on Algorithms in September, they proved that with Rubik’s cubes, the usual way of thinking leads to the wrong conclusion.

“As the cube gets bigger, the number of moves you need to make grows, but not as fast as [the cube] gets bigger,” Winslow says. That’s because as the cube grows, “you can combine lots of little moves into single big moves.” In other words, you can move whole rows of squares at the same time instead of moving individual squares. “We showed essentially that you can always do that.”

For the math junkies among you, that means the maximum number of moves to solve a cube with N squares per row is proportional to N²/log N, not simply N², as originally thought.
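For readers who want to see the gap between the two growth rates, here is a minimal sketch (ignoring constant factors, so the numbers are illustrative ratios rather than actual move counts) comparing the old N² conjecture with the proven N²/log N bound:

```python
import math

# Compare the conjectured bound (n^2) with the proven bound (n^2 / log n)
# for cubes with n squares per row. Constants are ignored, so only the
# growth rates are meaningful, not the absolute values.
for n in [3, 17, 100, 1000]:
    conjectured = n ** 2                  # old guess: proportional to n^2
    proven = n ** 2 / math.log(n)         # actual bound: n^2 / log n
    # The ratio conjectured/proven equals log n, so the savings from
    # combining little moves into big row moves keeps growing with n.
    print(n, conjectured, round(proven, 1), round(conjectured / proven, 2))
```

The ratio between the two bounds is exactly log N, which is the factor saved by moving whole rows of squares at once instead of one square at a time.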

Of course, all this work on what is basically a children’s toy might seem hard to justify, but Winslow says it’s worth the effort. “In the process of trying to solve these problems,” he notes, “you develop new techniques that other people use—proof techniques and ways of thinking about certain problems.”

It was fitting, too, that the group was led by MIT computer science professor Erik Demaine. Demaine—who is co-advisor for Winslow’s doctoral work, along with Diane Souvaine, professor of computer science in the School of Engineering at Tufts—is as well known for his work on the mathematical underpinnings of origami as for his engineering feats.

A Riddle with No Answer?

Once the team dealt with the problem of solving larger cubes, they faced a related one: computing the shortest sequence of moves that solves a cube. What they found was that an efficient algorithm—or set of mathematical rules—for addressing a variation of this problem probably does not exist.

In fact, they showed that if such an algorithm did exist, it would also apply to many other computer science conundrums. The explanation has to do with the classic “P vs. NP” problem in mathematics, which has stumped experts for 40 years. According to the Clay Mathematics Institute, the P vs. NP problem asks whether some questions fall into a special category: they “require an impossibly long time to solve” directly by computers, yet the answers to them can be easily checked to determine if they are correct. Since 2000, the institute has offered a $1 million prize to the whiz who can solve the problem.

Winslow maintains that finding the shortest Rubik’s cube solution is one of those P vs. NP type of problems: no computer can efficiently solve it, even though it’s very easy to tell whether a solution is correct—in this case, when each side of the cube is one solid color.
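The “easy to check, hard to find” pattern isn’t unique to the cube. A classic toy example (not the Rubik’s cube itself, and not from Winslow’s paper—just a standard illustration) is the subset-sum problem: finding a subset of numbers that adds up to a target seems to require trying exponentially many candidates, but checking a proposed answer takes only a single sum.

```python
from itertools import combinations

def find_subset(nums, target):
    # Finding a solution by brute force: try every subset.
    # With n numbers there are 2^n subsets, so this blows up quickly.
    for r in range(len(nums) + 1):
        for combo in combinations(nums, r):
            if sum(combo) == target:
                return combo
    return None

def check_subset(nums, target, candidate):
    # Checking a proposed solution is fast: one sum plus a membership test.
    return sum(candidate) == target and all(x in nums for x in candidate)

solution = find_subset([3, 9, 8, 4, 5, 7], 15)
print(solution, check_subset([3, 9, 8, 4, 5, 7], 15, solution))
```

For the cube, the analogous check is even simpler: play back the proposed move sequence and see whether each side ends up one solid color.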

“We essentially showed that if you could solve this problem efficiently, you could solve lots of other very significant problems efficiently,” he says. That would include real-world questions like how to most efficiently stack boxes in warehouses, how to map out highly efficient airplane travel routes, or how to assign jobs to workers so that all jobs are completed as quickly as possible.

“If someone later finds an algorithm to find the shortest sequence to solving a Rubik’s cube, it would solve one of the biggest problems—if not the biggest outstanding problem—in theoretical computer science,” Winslow says.

And while world-champion speedcubers, as they are called, can solve the Rubik’s cube in under six seconds while juggling the cubes, riding unicycles and yes, even blindfolded, Winslow shyly admits that he had never solved one by hand before the paper came out. “So I learned it,” he says. “You just need to sit down for a couple of hours and read the instructions online about it.”

Taylor McNeil can be reached at taylor.mcneil@tufts.edu.