HOWTO build a Linux-based supercomputer out of Playstation 3s

29 Responses to “HOWTO build a Linux-based supercomputer out of Playstation 3s”

  1. WeightedCompanionCube says:

    The Cell processor is nice. It’s a real vector machine with a PowerPC front end.

    You CAN get Cell boards for conventional cluster machines, and they are a lot more powerful than a rack of PS3s, but they ain’t cheap!

  2. Takuan says:

    This is great! Finally I can do those yield calculations without testing!

  3. ian_b says:

    Can GPUs be used similarly, and if so, what effect will OpenCL have on supercomputing?

    Also, there are new PCI-E cards coming out using the SpursEngine, similar to the Cell but with 4 stream processors. Will these cards be open for direct programming like the Cell? Does anyone know what the price point for these will be?

  4. zuzu says:

    http://www.ps3cluster.org/ for the actual HOWTO.

    MPI is one of the most popular open standards for clustering/supercomputing on commodity hardware. While MPI runs on many platforms, we assume that you now have Linux running on your system. To enable it as a cluster ‘node’ you need to add the necessary communications layers, and then MPI itself. We illustrate the use of the ‘yum’ installer in this section, but you can install these packages by whatever method you are most comfortable with. Finally, we include a link to the IBM Cell SDK (currently 3.0). It is helpful to compile your programs using this compiler to get the fastest performance from your MPI grid.
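
    To sanity-check the grid once MPI is up, a minimal test program helps. This is just a sketch, not part of the ps3cluster.org guide, and the hostfile name is made up; compile it with mpicc and launch it with mpirun:

        /* hello_mpi.c -- minimal sanity check for a freshly built MPI grid.
         * Compile: mpicc hello_mpi.c -o hello_mpi
         * Run:     mpirun -np 4 --hostfile ps3_nodes ./hello_mpi
         */
        #include <stdio.h>
        #include <mpi.h>

        int main(int argc, char **argv)
        {
            int rank, size, name_len;
            char name[MPI_MAX_PROCESSOR_NAME];

            MPI_Init(&argc, &argv);                  /* start the MPI runtime */
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* this process's id */
            MPI_Comm_size(MPI_COMM_WORLD, &size);    /* total process count */
            MPI_Get_processor_name(name, &name_len); /* which node we're on */

            printf("node %s: rank %d of %d reporting in\n", name, rank, size);

            MPI_Finalize();
            return 0;
        }

    Each node in the hostfile should report in once per rank; if one is missing, its networking or MPI install needs another look.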

  5. dculberson says:

    Ten PS3s don’t even begin to compare to a modern supercomputer, not when the top two are over the petaflop mark. (That’s over 30,000 PS3s, assuming the earlier GFLOPS rating is accurate, which is debatable.)
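
    Back-of-the-envelope, taking roughly 30 GFLOPS of usable performance per console as an assumed figure:

        \frac{10^{15}\ \text{FLOPS}}{3 \times 10^{10}\ \text{FLOPS per PS3}} \approx 33{,}000\ \text{consoles}

    so “over 30,000” is the right order of magnitude.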

    It’s marketing fluff to call this guy’s toy a supercomputer. Just like Apple calling the G5 a supercomputer when it came out.

  6. jmtd says:

    If the research department had to factor the electricity and cooling costs into the purchasing decision, odds are this supercomputer would not be nearly as cost-effective as buying in computing resources from a centralized source. Most university departments still pay for things like estates bills through a traditional top-sliced funding model.

  7. technogeek says:

    Having worked on a 1980s-vintage supercomputer design (which, alas, never made it to production), I have to say that the Cell processor’s architecture is surprisingly close to what we were looking at: a core high-performance scalar unit surrounded by a group of vector processors. And now that folks have figured out how to use processor clusters effectively (which was an unsolved problem back then), combining a pile of Cell-based systems can indeed yield a pretty powerful system at reasonable cost.

    At this budget, I’m almost tempted to build one just for the geek cred of being able to say I have a supercomputer in the house.

  8. takeshi says:

    No sarcasm intended here. I’m wondering: what exactly is a supercomputer nowadays? How fast does it need to be? My feeling was that all modern laptops would be considered supercomputers by 1960s standards, so somebody please fill me in here.

    I understand that by definition a supercomputer is one pushing the limits of modern technology, but what is the agreed-upon speed at which a modern computer qualifies? I just have no idea how fast the best modern computers are. And regardless of how fast conventional processors can be, are our experiments with quantum computation any faster? Wouldn’t that inevitably make the term “supercomputer” obsolete? Or are we not there yet?

  9. blitz says:

    > Instead, researchers write proposals justifying their request for time on the machines, and these proposals are reviewed and approved by a board of their peers in the computational science community.

    …or I can spend a pittance on a cluster like this and never have to write such a proposal again. Hmm…

  10. Daemon says:

    Not sure why you’d use PS3s though, other than the hacker cred for that particular conversion. I’m fairly certain you can get more processor power for the same cost with PCs.

  11. Coxswain says:

    Daemon @ 4, doesn’t the PS3 get better performance on certain kinds of calculations? I seem to recall that’s one of the reasons Sony touted its unconventional processor setup.

  12. Thiazi says:

    Dr. Vijay Pande runs Folding@home, which was touted as the current fastest supercomputer; it’s used for modeling protein folding. It runs on Linux and uses, among other things, Sony PS3s. They give good FLOPS, apparently.

  13. dculberson says:

    …except your cluster won’t be useful for real scientific work… (unless you really like waiting, that is.)

  14. xdmag says:

    #4 Daemon, U.S. export laws forbid powerful processors from being shipped to some countries around the world. The PS3 is an affordable computing device which, being made outside of the U.S., is not subject to those laws.

  15. rjmccall says:

    Building a supercomputer from PS3s is so cost-effective in part because Sony sells them at a loss — at least, this was true for a long time, and I believe it still is. They’ve also got very specialized hardware of a sort that’s quite useful for many supercomputing applications, but cheap is the dominant concern.

  16. spatulalilacs says:

    Funny that, as far as I know, Sony loses money on the console, intending to make money on the games (the ol’ Gillette strategy). If this is in fact the best budget way to supercompute (whatever that means these days), do you guys think enough people will do it to really screw Sony?

    Also, is there anything we can build out of razors, without the need for any blades beyond the ones that come with them?

  17. Anonymous says:

    Hope they didn’t order NEW (slim) PS3s, which intentionally don’t run Linux.

    http://www.theregister.co.uk/2009/08/28/sony_ps3_slim_linux_install_loss/

    Doh. They’ll only be good for training new recruits with Modern Warfare 2.

  18. Takuan says:

    Clearly, we need a new top level. I propose: “sooperdooperputer”

  19. Takuan says:

    “dinkum-thinkum”?

  20. LaundroMat says:

    I think Sony would still prefer to sell their consoles and recoup at least some of the investment rather than have them sit in a warehouse somewhere unsold (thereby losing 100% of the manufacturing cost of the device).

    On another note: shouldn’t the current US government use this “PS3 as a supercomputer” story to retroactively defend the invasion of Iraq?

    When the PS2 was launched, the West prevented the Iraqi government from getting their hands on the technology, as it was deemed powerful enough to guide nuclear missiles. (Apply your own dose of salt.)

    With the launch of the PS3, no such actions have been undertaken, which leads me to believe Saddam Hussein could have had a whole bunker full of PS3s ready to hasten the downfall of the West.

  21. wellwatch says:

    For sheer floating-point math, the PS3 is the best deal on the block: you get 76.8 GFLOPS (3.2 GHz × 6 SPEs per chip × 4 FLOPs per clock cycle) for only about $400. To get that kind of performance out of Intel chips you’d need two quad-core chips, and I don’t think those boxes are very cheap. What really kinda sucks is the RAM, though: you only get 256 MB in each PS3, so it is very limited as to what computing applications you can use it for. At work I convinced them to buy me 26 PS3s, and they all run Linux with GridSolve, some software from the University of Tennessee at Knoxville, because it was a pain to get Open MPI to work with the Cell.

  22. Bill Barth says:

    It’s pretty misleading to say that most researchers rent time on supercomputers and that they’ll save some money by building one of these.

    > Typically, scientists rent supercomputer time by the hour. A single simulation can cost more than 5,000 hours at $1 per hour on the National Science Foundation’s TeraGrid computing infrastructure. “For the same cost, you can build your own supercomputer and it works just as well if not better,” Khanna said. “Plus, you can use it over and over again, indefinitely.” The cost for his initial Playstation grid was $4,000.

    As a provider of cycles under the TeraGrid program, I’ll say that a) most of the simulations run on TG systems use a lot more than 5k CPU hours, and b) we don’t charge a dime for access*. Instead, researchers write proposals justifying their request for time on the machines, and these proposals are reviewed and approved by a board of their peers in the computational science community.

    I’m a little confused as to where Dr. Khanna got such an idea.

    * There are a small number of non-academic organizations that would traditionally not be eligible for free access and that pay to join partnership programs (e.g., companies in industry or other government-funded labs like NASA). These programs usually involve some system time, but they don’t pay on a dollars-per-hour basis. We have no program, though, through which academic researchers could buy cycles. It’s all free for them.

  23. technogeek says:

    #22: The terminology is fuzzy, but in fact the biggest supercomputers these days are scalable systems, and the top machines are just the largest configurations thereof. A sufficiently large subset of that capability could still be argued to be a supercomputer, since the only real difference is how much money you’ve thrown at it.

    Not everyone needs the best machine in the world. For a lot of things, having a smaller machine whose cycles are all your own, rather than a bigger one you have to share with other researchers, may in fact yield better throughput… and in many cases the performance needed for a particular task is significantly bigger than “a mainframe” or a simple server farm but significantly smaller than something like Blue Gene.

    And as the World Community Grid folks have shown, enough small contributions of cycles can add up to more processing power than you could afford any other way.

    I’m an engineer. I don’t need the ultimate solution, just the most cost-effective solution that meets the actual requirements.

    Song cue: Frank Hayes’ “You Can Build A Mainframe From The Things You Find At Home”

  24. nprnncbl says:

    Unfortunately, Sony intentionally hobbles the PS3 when running Linux: only 6 of the 7 working SPEs are available, and the RSX (essentially the GPU) is unavailable (see Linux for PlayStation 3), to prevent people from using Linux to play “backed up” games.
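
    You can confirm the SPE count from Linux itself. Here’s a minimal sketch using spe_cpu_info_get() from IBM’s libspe2 (assuming the Cell SDK libraries mentioned above are installed):

        /* spe_count.c -- report how many SPEs Linux can see.
         * Compile: gcc spe_count.c -lspe2 -o spe_count
         * On a stock PS3 running Linux this should report 6 usable SPEs,
         * not the 8 that are on the die.
         */
        #include <stdio.h>
        #include <libspe2.h>

        int main(void)
        {
            /* second argument of -1 means "count across all CPU nodes" */
            int physical = spe_cpu_info_get(SPE_COUNT_PHYSICAL_SPES, -1);
            int usable   = spe_cpu_info_get(SPE_COUNT_USABLE_SPES, -1);

            printf("physical SPEs visible: %d\n", physical);
            printf("usable SPEs:           %d\n", usable);
            return 0;
        }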

  25. takeshi says:

    @ dculberson:

    Thanks for answering my question. Petaflops? Jesus. Are we building the Death Star or something?

  26. teh_chris says:

    @daemon/ian B

    The reason to use PS3s is that the PS3 client runs the math not on the main CPU core but on the Cell’s SPEs, which work much like GPU stream processors. Multiple cores and optimized pipelines are relatively new technologies in CPUs, but they are relatively old concepts in GPUs, given the highly parallel nature of computer graphics. Game consoles and high-end video cards have tons of GPU power because they can have many parallel cores and pipelines.

    Since GPUs do pretty much one thing, they can be super specialized for that one purpose. The separation of 3D graphics from the CPU in the late ’90s is what put the PC on the map for 3D imaging. Before that change, high-end 3D graphics (CAD, animation, etc.) was really only feasible on high-end Unix workstations.

    Scientific computing on GPUs is fairly cutting edge, based mostly on Nvidia’s CUDA technology. The BOINC project is just now adding GPU support via CUDA: http://boinc.berkeley.edu/cuda.php

    You don’t need a PS3; you can also use your Nvidia graphics card (see the sketch below): http://www.gpugrid.net
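
    Checking whether your card can pitch in takes only a few lines against the CUDA runtime API. This sketch just enumerates devices (compile with gcc and link against libcudart):

        /* device_query.c -- list CUDA-capable GPUs on this machine.
         * Compile: gcc device_query.c -lcudart -o device_query
         * (add -I/-L flags for your CUDA install location as needed)
         */
        #include <stdio.h>
        #include <cuda_runtime.h>

        int main(void)
        {
            int count = 0;
            cudaError_t err = cudaGetDeviceCount(&count);
            if (err != cudaSuccess) {
                printf("no usable CUDA runtime: %s\n", cudaGetErrorString(err));
                return 1;
            }
            for (int i = 0; i < count; i++) {
                struct cudaDeviceProp prop;
                cudaGetDeviceProperties(&prop, i);
                printf("device %d: %s, %d multiprocessors\n",
                       i, prop.name, prop.multiProcessorCount);
            }
            return 0;
        }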

  27. Earth Man says:

    “…they can build their own for about $4K, saving cash and freeing up time for additional experimentation…”

    And not, of course, for time spent on GTA IV.

  28. hilbertastronaut says:

    @ nprnncbl: Sony doesn’t necessarily hobble the Cell intentionally. One of the reasons the Cell boards and servers are way more expensive than the PS3s is flaws in the silicon.

    Each Cell CPU has 8 vector processing units, called SPEs. About 10% of the Cell CPUs produced have all their SPEs working; these Cells go into the boards and servers. About 90% have at least one of the SPEs broken and unusable: those with at least seven of the eight SPEs functional go into the PS3s. One of the seven that works is claimed by the PS3 for system purposes and the other six are under programmer control.
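
    To put rough numbers on that: if each SPE independently survives fabrication with probability p (a figure I’m making up purely for illustration), then

        P(\text{all 8 work}) = p^{8}, \qquad P(\text{exactly 7 work}) = 8\,p^{7}(1-p)

    and p = 0.75 gives p^8 ≈ 0.10 and 8p^7(1-p) ≈ 0.27, i.e. about 10% perfect dies, which lines up with the split described above.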

    BTW, this is one of the reasons why multicore is a good thing for the chip industry: if one of the cores turns out to be broken, the manufacturer can disable it and sell the chip at a cheaper price. Why else would AMD offer 3-core processors (3 being such an awkward not-power-of-two number)? GPU manufacturers also do this all the time: this is why there are many different boards in a series and why the higher-performing boards cost more (because there are fewer of them).

    It’s just like with diamonds: for diamonds of the same size, the ones without flaws are more expensive, because there are fewer of them.

  29. hilbertastronaut says:

    @ nprnncbl: Oops, I missed your comment about Sony disabling the GPU. But the point about 6 SPEs vs. 8 SPEs still holds.
