Game uses fun as incentive to solve hard chip-design problems

A new game prototype called FunSAT, from University of Michigan computer scientists Valeria Bertacco and Andrew DeOrio, marries human intuition with computerized chip design, making it fun for humans to help computers with problems that computers alone are bad at:
By solving challenging problems on the FunSAT board, players can contribute to the design of complex computer systems, but you don't have to be a computer scientist to play. The game is a sort of puzzle that might appeal to Sudoku fans.

The board consists of rows and columns of green, red and gray bubbles in various sizes. Around the perimeter are buttons that players can turn yellow or blue with the click of a mouse. The buttons' color determines the color of bubbles on the board. The goal of the game is to use the perimeter buttons to toggle all the bubbles green...

The game actually unravels so-called satisfiability problems--classic and highly complicated mathematical questions that involve selecting the best arrangement of options. In such quandaries, the solver must assign a set of variables to the right true or false categories so as to fulfill all the constraints of the problem.

In the game, the bubbles represent constraints. They become green when they are satisfied. The perimeter buttons represent the variables. They are assigned to true or false when players click the mouse to make them yellow (true) or blue (false).

Once the puzzle is solved and all the bubbles are green, a computer scientist can simply read the color of each button to recover the solution to that particular problem.
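The mapping the article describes (perimeter buttons are Boolean variables, bubbles are constraints) can be sketched in Python. The clause encoding below is an assumption for illustration, not FunSAT's actual internal format:

```python
# Each clause (bubble) is a list of literals: a positive int i means
# variable i should be True (yellow), a negative int -i means False (blue).
def bubble_colors(clauses, assignment):
    """Return 'green' for each satisfied clause, 'red' otherwise."""
    colors = []
    for clause in clauses:
        satisfied = any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        colors.append("green" if satisfied else "red")
    return colors

# Buttons 1-3 set to yellow (True), blue (False), yellow (True):
buttons = {1: True, 2: False, 3: True}
print(bubble_colors([[1, 2], [-1, 3], [-2, -3]], buttons))  # all green
```

When every entry comes back "green", reading off the `buttons` dictionary gives exactly the kind of solution the article says a computer scientist would gather from the board.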

Game Utilizes Human Intuition To Help Computers Solve Complex Problems

FunSAT: Human Computing for EDA



  1. first, we help them. we lend our knowledge in order to increase their capacities.
    then, they begin to help us out. this was already happening, but up until this point nobody noticed just how much…

    now, it’s too late. they’ve already got a foothold. they know.

  2. Having that guy in the 1950s do crossword puzzles in the paper that turned out to be game-theory problems relating to missile defense worked out OK for us.

  3. Why they chose sudoku as an example beats me, because it is a game that is eminently suited to be solved by computer.

    BJacques gets props for referencing an obscure PKD book.

  4. I think of this a lot – games stimulate the brain in ways that other activities just don’t – accesses more of (the supposed) Einstein’s lost 90% – so much of what we do could be done like this.

    But helping them to learn … helping them to learn … seems foolhardy, no?

    The event horizon will be when they work out what consciousness means to us. Then, they can beguile, fool and hypnotise us.

    They have been practicing, btw, if you care to read the ROTM portion of

  5. They came up with this neat idea, hoped to get a lot of online participation, and implemented it in Java?

    Was there something about this game that made it impossible to implement in Flash?

  6. #8:
    Based on the description (I don’t have Java installed on this machine) the game could be implemented in JS: no need for fancy-shmancy plugins at all.

  7. FoldIt is a fantastic idea, but the UI is horrible. I’m a full-time 3D graphics guy, but I have a really hard time pulling things the way I want to in their interface.

  8. I got up through level five, then started to get bored. So far, the puzzles could have all been done much better by a computer. I’m guessing if I stick with it for a little while, it may get more interesting.

    It’s interesting — I have an MSc in AI, and I’m actually (so far) solving it using a fairly standard AI algorithm: at each step, pick the option that appears to give you the most reward and constrains you the least. In this case, that means starting with the toggles that satisfy the greatest number of lights for one option and the fewest for the other. The reason I’ve found it boring, though, is that I’ve never once been wrong using that algorithm, so I have never had to back-track and really think it through.

    It reminds me of what LennStar noted above. I never really understood that, though, and just felt like I was guessing.

    @johnofjack: You’re surprised? These people are obviously programmers, not design people. The easiest language to bash it out in and stick on the web is Java. Most likely, no one on the team knows Flash, or has ever needed to. But if it starts to get some attention, they’d be well advised to get a Flash person.
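The greedy heuristic described in the comment above (pick the assignment that satisfies the most clauses, with no backtracking) can be sketched like this; the encoding and function names are illustrative assumptions, not anything from FunSAT itself:

```python
def greedy_assign(clauses, num_vars):
    """Greedy, no-backtracking heuristic: at each step, assign the
    variable/value pair that satisfies the most still-unsatisfied
    clauses. Works on easy instances; can dead-end on hard ones."""
    assignment = {}
    unsatisfied = [set(c) for c in clauses]  # literals: i=True, -i=False
    while unsatisfied and len(assignment) < num_vars:
        best = None  # (score, var, value)
        for var in range(1, num_vars + 1):
            if var in assignment:
                continue
            for value in (True, False):
                lit = var if value else -var
                score = sum(1 for c in unsatisfied if lit in c)
                if best is None or score > best[0]:
                    best = (score, var, value)
        _, var, value = best
        assignment[var] = value
        lit = var if value else -var
        unsatisfied = [c for c in unsatisfied if lit not in c]
    # Leftover clauses mean the greedy pass failed and backtracking
    # (which this sketch omits) would be needed.
    return assignment, unsatisfied
```

The commenter's observation that this never had to backtrack on the early levels is exactly the case where `unsatisfied` comes back empty.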

  9. @johnofjack: They’re scientists. Scientists use “serious” languages like java because that’s what they learned in CompSci classes. Flash isn’t the kind of thing that you study as an AI researcher, I don’t think.

    I agree Flash would’ve been better though. The java app pegs my CPU at 100%, for one thing.

  10. I have yet to see why this is something humans are good at but AI would have trouble with. As noted above, a fairly generic algorithm leads to the correct result quite quickly, and for the early puzzles even a brute-force attempt with a decently written application could check every possibility faster than a human could click the buttons to enter a single one.

    If future puzzles took significantly longer to check the states of the bubbles (making brute force take far too long), and standard search algorithms didn’t work, then I suspect the majority of people would find the time needed to work out the right answer frustratingly long. Requiring the app to ‘think about’ whether they’d got the correct answer whenever they changed a value would quickly frustrate them and cause them to give up.
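The brute-force approach the commenter mentions is short to write down. A minimal sketch, assuming the same literal encoding as above (positive int = True, negative = False):

```python
from itertools import product

def brute_force_sat(clauses, num_vars):
    """Try every True/False assignment in turn. Fine for small early
    puzzles; hopeless beyond a few dozen variables (2**n assignments)."""
    for values in product([False, True], repeat=num_vars):
        assignment = dict(enumerate(values, start=1))
        if all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses):
            return assignment
    return None  # unsatisfiable

print(brute_force_sat([[1, -2], [-1, 2], [2]], 2))  # {1: True, 2: True}
```

The exponential blow-up in `product` is precisely why brute force stops being viable on larger instances, which is the commenter's point.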

  11. For those who aren’t familiar with it, 3-SAT is a product-of-sums Boolean expression where each of the terms is the OR of three variables or their inverses.

    For example F(A,B,C,D) = (A+B+C)(!A+C+D)(!B+!C+D)

    Your task is to find an assignment to the variables that makes the whole statement true. To do that each of the parenthesized terms must be true. In this case A=1, D=1 is an assignment that “satisfies” all the terms. There may be others.

    In the funsat game, I assume that each of the blobs in the middle is a term in the overall function. A green ball is one that is satisfied, a red ball is one where all of the variables have been assigned and it is still not true.

    I wish that funsat showed the expressions so that you could have an idea which variables matter to that term. The only way to figure it out is to zoom in on one ball. Then it shows only the inputs that matter for that term.

    -Jeff Bell
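The example formula in the comment above can be checked directly; this is just a literal translation of F(A,B,C,D) into Python booleans:

```python
def F(A, B, C, D):
    # (A+B+C)(!A+C+D)(!B+!C+D): '+' is OR, '!' is NOT, adjacency is AND
    return (A or B or C) and ((not A) or C or D) and ((not B) or (not C) or D)

# The assignment A=1, D=1 from the comment satisfies every term,
# regardless of what B and C are set to:
print(all(F(True, B, C, True) for B in (False, True) for C in (False, True)))
```

Running it confirms the claim: with A and D true, each parenthesized term contains at least one true literal, so the whole product is true.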

  12. After playing with it for a while, it turns out I was missing something. If you turn on the checkbox at the bottom you can see which inputs contribute to each term. Also, the darker the circle, the fewer undecided terms remain.

    For the SAT, remember to do all of the smallest circles first. They have only one variable, so there is only one way to go. Then find all the black circles. They have only one free variable left, so you don’t have to decide anything yet.
