New Parallella Boards (parallella.org)
115 points by ungerik on July 14, 2014 | 22 comments



Well, this post has perfect timing for my ends. :-) http://sfbay.craigslist.org/sby/sys/4567451353.html

I backed the original Kickstarter, have a bunch of Parallellas, and due to a cash crunch have to liquidate a bunch of them. Above is the link to my Craigslist ad.

These have equivalent specs to what Adapteva is calling the "embedded" version, but they come with the older, smaller heat sink and thus will still need an external fan.

Note that I'm both selling below Adapteva's list price and that the version I'm selling is marked "out of stock" on Adapteva's site. :-)


Out of curiosity, how hard is it to program them? Do you need to be proficient with C, or are you good to go as long as you understand OpenMPI and the like?

How do you program the FPGA? I'm asking because I come from a heavy OO background, and even though C is not entirely foreign to me, I'm not sure whether it's easy enough to make a weekend project out of it or not.

Your thoughts on this? Did you chain together more than one?

Thanks for your comments!


Right now you basically need to either use C or OpenCL. (I'm not familiar with OpenPI and Google doesn't pull up anything that seems relevant to this conversation.) If you're not proficient in C, I would recommend the OpenCL route.

The crew at Erlang Solutions is working on running Erlang on the Epiphany chip, but I haven't seen any visible progress from them.

I can't tell you anything about programming the FPGA. You don't need to directly mess with the FPGA to use the Epiphany. From what I've seen, the jump from C to VHDL is even bigger than the jump from OO to C, so probably not a weekend project. Also, there's no FOSS toolchain for programming the FPGA--you have to use Xilinx's tools, about which I've heard mixed reviews.

I bought the cluster with the intention of, well, building a cluster, but to date haven't actually tried that--Real Life got in the way of doing that. I do have all the hardware sitting here to connect them over plain old Ethernet.


Sorry, it was my bad. I meant to say OpenMPI :P

Thanks for the comments! It certainly gives me some idea of what it would be like.


Okay, OpenMPI will help you with clustering multiple Parallella units to work on the same task. AFAIK it won't run on the Epiphany itself, so you have to look elsewhere for that.


Still got them? I'd be interested in one .. assuming you're willing to ship to Austria.


Email me at afishionado@gmail.com.


I'd buy one but I'm out of town until the 22nd... let me know if they're still available.


The fact that price goes down as the application goes from server to desktop to embedded is very telling, in several ways.

One way is the commoditization of computing: actual computing power is now a commodity, with very low prices and standardized interfaces. Ethernet is the new pallet, and servers are trucks.

On the other extreme, the embedded landscape requires catering to each application: lots of GPIOs, specific hardware, and hand work. It gets expensive.

Of course, the Parallella shows this so nicely because it keeps the (massive) computing power even across all offerings, which differ only in interfaces. Actual consumer products will try to balance this, but it still tells a story.


     The fact that price goes down as the application goes 
     from server to desktop to embedded is very telling, in 
     several ways.
The fact that you've fallen for a very simple marketing trick? The names "server", "desktop", and "embedded" are marketing terms. They're all the same CPU. The main difference between the boards is the number of GPIO pins.

Basic marketing trick. Always name your products as "good, better, and best" in some form. Unfortunately, it feels slimy to me, and I don't like it.

That said, I do like the idea of the Epiphany chip they're offering. But based on cost and performance alone, it's clear that a $120 AMD R7 260x graphics card will be superior to what they offer here. (With its 14 compute units at 1.1 GHz, it can do 896 SIMD integer or single-precision floating point operations per clock.) The R7 260x also has Windows / Linux drivers and OpenCL support included...

Epiphany is doing a disservice to themselves if they are trying to compete against "desktop" and "server" computers. Their niche is in their performance / watt. GPUs, with their 5GHz+ GDDR5 RAM, ~GHz clock speed, and super-parallel architectures will continue to dominate supercomputing at the ~100W to ~500W levels.

Epiphany IV is a supercomputer design at ~2W. Anything more is settled by the current laptop market. (AMD Kabini hits 150 GFLOPs at ~25W for a ~$60 CPU on a $30 motherboard.)


   The name "server", "desktop", and "embedded" are marketing terms. They're all the same CPU. 
Uh, no. The embedded version features the Zynq 7020, the others use the Zynq 7010.

The server version is cheaper because it leaves off the HDMI port and associated circuitry.


IO also costs money, often more than the actual circuitry.

They certainly have some margin, and it's probably bigger on the embedded version, but it's not only a marketing trick.


I agree... it's not "only" a marketing trick. The Epiphany IV looks like a unique computer architecture that I'd be excited to play around with (if I ever got the time...).

But I think it's a bit of a stretch to call their "Desktop" offering suitable for "A Personal Computer", especially when the Zynq Z7020 is its core CPU. The performance offered is solidly in the "embedded" realm and will barely be more powerful than your cell phone. (1GB of RAM is weaker than most people's cell phones...)


I'm sorry for the mistake in the first phrase. But it seems that you haven't read any further, nor TFA.


The embedded version is the most expensive, which is probably what you meant to say in your first sentence.

The embedded variant comes with a bigger FPGA than the desktop version.


Bingo, you're right. I posted and left the computer, so I couldn't correct it.


I wonder how long it's going to take until someone ports OpenCL natively to Fortran. Lots of Fortran programmers in HPC and they currently can only use NVIDIA GPUs and Intel MIC. ARM platforms could be interesting in the long run. I hope either AMD or Cray take the plunge. If that happens, I'd be happy to integrate Parallella with my parallel computing preprocessor framework[1].

[1] https://github.com/muellermichel/Hybrid-Fortran


Parallella went through a lot of production issues. Happy to see them having momentum now that they've fulfilled their KS deliveries.


So glad these folks haven't died. If I can get one it will be a nice complement to the Zedboard.

Still need to get the bugs worked out of my Xilinx on Ubuntu setup though. Annoys me to have to run Windows in a VM to use the toolchain.


For what it's worth, I haven't had any trouble on RHEL (which is recommended).


Is a 64-core model in sight at all?


In sight? Yes. Shipping to more than the original Kickstarter Backers? Probably not.



