Raspberry Pi supercomputer

in Development, R-Pi by the machinegeek | 18 comments


The crew at the University of Southampton have been working on a Raspberry Pi based supercomputer. “The machine, named ‘Iridis-Pi’ after the University’s Iridis supercomputer, runs off a single 13 Amp mains socket and uses MPI (Message Passing Interface) to communicate between nodes using Ethernet. The whole system cost under £2,500 (excluding switches) and has a total of 64 processors and 1Tb of memory (16Gb SD cards for each Raspberry Pi). Professor Cox uses the free plug-in ‘Python Tools for Visual Studio’ to develop code for the Raspberry Pi.” The racks holding the components are made from Lego!
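For readers who haven't seen MPI-style code, the pattern the cluster uses can be sketched without any MPI installation at all. The block below is a hedged stand-in, not the Southampton code: it mimics the scatter/compute/gather shape of an MPI program using Python's standard-library threads and queues instead of mpi4py, and the 4-worker count and all names are purely illustrative.

```python
# Hedged sketch: MPI programs scatter work to ranks, each rank computes,
# and results are gathered back. Here that master/worker message-passing
# shape is mimicked with threads and queues so no MPI install is needed.
import threading
import queue

def worker(rank, tasks, results):
    # A stand-in for one MPI rank: receive a chunk, compute, send back.
    while True:
        chunk = tasks.get()
        if chunk is None:          # sentinel: no more work
            break
        results.put((rank, sum(x * x for x in chunk)))

tasks, results = queue.Queue(), queue.Queue()
workers = [threading.Thread(target=worker, args=(r, tasks, results))
           for r in range(4)]     # 4 "nodes" standing in for 64 Pis
for w in workers:
    w.start()

data = list(range(1000))
chunks = [data[i:i + 100] for i in range(0, len(data), 100)]
for c in chunks:                   # "scatter" the work
    tasks.put(c)
partials = [results.get()[1] for _ in chunks]   # "gather" partial results
for _ in workers:
    tasks.put(None)                # shut the workers down
for w in workers:
    w.join()

total = sum(partials)              # final "reduce"
print(total)                       # 332833500, the sum of squares 0..999
```

The real cluster does the same thing, except each worker is a separate Pi and the queues become MPI messages travelling over Ethernet.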

They’ve published links to their detailed build instructions on the Raspberry Pi at Southampton website.

This entry was posted in Development, R-Pi.

Comments

  1. A.Lizard says:

    Cool, but wouldn’t a Parallella “supercomputer” (32 GFLOPS peak performance for $99) be more cost-effective? http://www.kickstarter.com/projects/adapteva/paralalella-a-supercomputer-for-everyone/posts/378142

  2. Blah says:

    This whole thing is ridiculous:

    - The RPi has 256MB of memory, so total RAM is 16GB – you can’t class the very slow SD cards as ‘memory’
    - The Ethernet connection actually hangs off the USB bus, so the interconnect will be incredibly slow
    - The processor is a relatively slow 700 MHz ARMv6 – a modern i7 will conservatively be 10x the performance per core
    - The GPU is closed source and there is no open programming interface
    - For much less money you could have bought a Core i7 motherboard + GPU/RAM/SSD, which would be massively faster and less effort

    • Kris Lee says:

      I think that you are pretty much right – the main issue here is the closed GPU that cannot be put to use.

      The other ridiculous thing is the attempt to count the SD card space as memory. That is just plain lying.

      Still, I can consider this interesting as a way of playing with the concept of distributed computing. Then again, they could have managed with just a few Pis.

  3. anon says:

    …and 100Mbit Ethernet, not even Gigabit. And a 64-port switch needs to be costed in too. So I guess it is really just a “concept”, but it seems a very expensive concept with little use.

  4. Paresh Mathur says:

    I have often thought about doing distributed computing with just a few ATtiny controllers. I consider it a cool project; some day I’d like to learn enough about distributed computing to implement it. The basic idea is to connect everything over I2C and then implement the interface in such a way that the programmer sees it as one computer and not 4 or 8 controllers. I chose I2C because it is already available on the devices and is relatively fast.

    Any pointers on how to implement this are welcome.
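The single-computer-over-I2C idea above can be sketched in miniature. The block below is a toy Python model, not ATtiny firmware: `WorkerNode`, `ClusterAsOneComputer`, and the bus addresses are all hypothetical names, and real I2C transfers are replaced by plain method calls, but it shows the interface shape where the caller never sees individual nodes.

```python
# Toy model (all names hypothetical): several worker nodes behind one bus,
# wrapped so the caller sees a single "computer" with a map()-style API.
# On real hardware each node would be an ATtiny servicing I2C requests.

class WorkerNode:
    """Stands in for one ATtiny at a fixed I2C address."""
    def __init__(self, address):
        self.address = address

    def compute(self, value):
        # The per-node job; on real hardware this runs on the MCU
        # after the master writes `value` to self.address over I2C.
        return value * value

class ClusterAsOneComputer:
    """Round-robins work across nodes so callers never see addresses."""
    def __init__(self, addresses):
        self.nodes = [WorkerNode(a) for a in addresses]

    def map(self, values):
        # Dispatch each value to the next node in turn and collect results.
        return [self.nodes[i % len(self.nodes)].compute(v)
                for i, v in enumerate(values)]

cluster = ClusterAsOneComputer(addresses=[0x10, 0x11, 0x12, 0x13])
print(cluster.map([1, 2, 3, 4, 5]))   # [1, 4, 9, 16, 25]
```

The hard part on real hardware is exactly what the toy skips: framing requests and responses on the bus, and handling a node that never replies.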

  5. martin robilliard says:

    I think all the above comments (apart from Paresh Mathur’s) are getting this wrong. It’s not about beating a modern i7 – we all know it won’t. It’s about building a cheap supercomputer that most schools can afford; the concept is the same whether you use 64 i7s or 64 Pis.

    • rasz says:

      There is nothing Super about this. Other than a mess of cables and cost.

    • A.Lizard says:

      The idea is cheap, accessible gigaflops and a learning platform for parallel programming. As I said, the Parallella is quite a bit cheaper (a 16-processor array on a single board for <$100). What’s the advantage of a far more expensive and harder-to-build array of Raspberry Pis? I like the Raspberry Pi, but there are no universal solutions.

    • Blah says:

      Except that it isn’t cheap or a super-computer.

      Also, if you want to experiment with developing MPI applications you can use ‘mpirun -np 64’ to create 64 processes on a single machine, so this argument doesn’t stack up either.

  6. Paul says:

    Yeah, now I know why folks around the world still can’t get their hands on Pis – they get hoarded by mad scientists like this.

  7. Mayank Bhatia says:

    I don’t think it is a “supercomputer”. I think that’s why there is no mention of the system’s peak performance. The idea is to learn how to connect many Pis to run distributed workloads. Tomorrow there could be more powerful Pi-like boards, and hopefully the concept and scripts can still be ported.

  8. Scott says:

    Whatever happened to the idea of doing something “just because it can be done”? Sure, this “supercomputer” isn’t of much use as, well, a “supercomputer,” but why not do it just to show what can be done? And if it leads others to experiment with new and different ways of applying this technology, isn’t that the true measure of its success?

  9. randomdude says:

    I think you guys may have missed the point of the experiment. It seems to me he is demonstrating the concept of multiprocessing – using multiple computers to accomplish a task that a single computer can’t. The same concept can be applied to more powerful machines to handle massive tasks that require an immense amount of processing power, for example chaining several hundred i7 machines together. Is there a need for it now? No, absolutely not, but it may become necessary in the future for immensely complex calculations, which is why it’s a good idea to have the concept proven. If you don’t believe me, consider that even the most powerful computers in the world cannot finish the decimal expansion of the constant Pi (3.14…); with enough computers and enough power you could compute far more digits, but not with a single computer system. So I don’t think it’s fair to criticize this man for this project; it is probably the cheapest way to demonstrate the concept.

    • GotNoTime says:

      Calling it a supercomputer is silly due to performance. A cluster is more accurate and would be entirely appropriate for research purposes.

  10. Tang Kin Meng says:

    Your project should be interesting. More so if the Pi had at least 2GB of memory and could support at least 500GB of fast attached storage, while still being a cost-effective board. You could bring array and high-end distributed computing to the forefront. Imagine 500 such systems housed in a cabinet no more than 4 ft × 2 ft. Ideal for private cloud deployment.

  11. Ian Lewis says:

    There are a lot of negative comments here by people taking the purist view. That’s fine, but I think it rather misses the point.

    As I see it, this piece of work isn’t about creating a real supercomputer for real-world applications; it is about using the Raspberry Pi as a prototyping tool to demonstrate some principles of supercomputing.

    An effective demonstration of parallel processing is useful, and having multiple Raspberry Pi computers sharing tasks is an effective way of providing one. It’s simple and visually easy to understand. My 14-year-old son would have no trouble moving from this prototype to ‘getting’ more abstract supercomputing concepts.

    This is a good idea and a good demonstration. Maybe it will lead to more people who appreciate parallel computing in the future – we will need them.

  12. ee says:

    As I see it, this piece of work isn’t about creating a real supercomputer for real-world applications, nor is it an effective demonstration – as many people above have mentioned, it is ineffective even as a demonstration. Creating multiple processes or multiple VMs is hardly an “abstract” concept, especially to university students; give people more credit than that.
    It is more about publicity. The professor sold out.
