The Design of the Connection Machine (1994) (tamikothiel.com)
99 points by boulos on Nov 13, 2021 | 38 comments



William Daniel Hillis's dissertation about the Connection Machine is available online [1] and is a fascinating read. It generated some interesting past discussion here [2]. I started reading through the dissertation earlier this year with the hope of gathering enough information to reverse engineer Feynman's partial-differential-equation-based method for calculating the number of buffers needed per chip. I had to drop that project for lack of time, but came away really impressed with the considerations that went into the design of the CM.

[1] https://dspace.mit.edu/bitstream/handle/1721.1/14719/1852428...

[2] https://news.ycombinator.com/item?id=12281637


I've always been curious about both the CM and Feynman's PDE-based approach, but never found any clear writing on the topic. If anyone is digging into this, I'm happy to assist in any way I can (I have a theoretical physics background, fwiw). Feel free to hit me up (email in profile).


I was fortunate enough to have been paid to write some Star Lisp code for the CM-1 that was provided to my company via DARPA.

We also received a Butterfly Machine but I never used it.


> Perhaps we need to do exactly the opposite, and look at the so-called primitive or pre-industrial cultures to find out how they use ornament to increase the significance and worth of the objects they produce.

I think that ornamentation and aesthetics are often overlooked or minimized in the pursuit of optimization, typically cost reduction. But this article, and this line in particular, elucidates why we stand to benefit from keeping aesthetics in mind.


The final black hypercube design looks modern even by contemporary standards.


And it probably always will, because it’s so stark and devoid of period references that it’ll never look more or less in context. That’s the point, I think.


I worked as a CPU architect at Intel for a decade. Never once did anyone solve an architecture problem the way Feynman did on CM-1. From wikipedia:

"The engineers had originally calculated that seven buffers per chip would be needed, but this made the chip slightly too large to build. Nobel Prize-winning physicist Richard Feynman had previously calculated that five buffers would be enough, using a differential equation involving the average number of 1 bits in an address."

He determined the buffer size using a differential equation? Modern engineers would run trillions of simulation cycles of an RTL model, sweep the buffer size across thousands of workloads to find the inflection point, and present an Excel chart in a meeting. Then the design engineers would determine whether there was enough die area to implement that buffer size, and even if there wasn't, they would have to implement it anyway if the performance gain was significant.
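For flavor, here's a toy sketch of the kind of sweep I mean. It's a made-up single-queue traffic model in Python, nothing like a real RTL simulation (and nothing like Feynman's analysis), with invented parameters, but the shape of the exercise is the same: simulate, sweep the buffer depth, and eyeball the knee of the drop-rate curve.

    import random

    def simulate(buffer_depth, ports=4, p=0.2, cycles=200_000, seed=0):
        """Toy router-queue model: each of `ports` input ports delivers a
        message with probability p per cycle; one message drains per cycle.
        Returns the fraction of messages dropped because the buffer was full."""
        rng = random.Random(seed)
        occupancy = arrivals = drops = 0
        for _ in range(cycles):
            for _ in range(ports):
                if rng.random() < p:
                    arrivals += 1
                    if occupancy < buffer_depth:
                        occupancy += 1
                    else:
                        drops += 1
            if occupancy > 0:  # drain at most one message per cycle
                occupancy -= 1
        return drops / max(arrivals, 1)

    # Sweep the buffer depth and look for the knee of the drop-rate curve.
    for depth in range(1, 9):
        print(f"depth={depth}: drop rate {simulate(depth):.4%}")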

Engineering now vs engineering then.


>Engineering now vs engineering then.

Your post was great up until this. I don't like the 'things were great in the past, but look at us now' tone in general (which doesn't mean it's never true), but also, that's not what the historical incident shows: the "engineers then" were just as flabbergasted at Feynman's approach as anybody, and didn't trust the results.

Feynman was a physicist, not an engineer. Differentials and integrals had been his breakfast and dinner for ~50 years at that point. You and his colleagues on the CM team saw the outsider effect in action: a talented individual, with a mind free of the local dogmas and established approaches, manages to repurpose the tools practiced and honed in one domain to engage and destroy a target in another domain from a highly unusual angle. Like startups, it doesn't always work. But when it works, it's fantastic.


As much as I admire him, Feynman was also no stranger to "Proof By Intimidation." So, skepticism about his claims was always warranted if you couldn't follow them yourself.

This is the difference between engineering and academia: in engineering, abandoning an incorrect claim or assumption may carry a cost. If you are engineering something, someone might actually hold you to your claims and start allocating money and resources based on them. So your claims had better be accurate.

This is part of where the famous intransigence of engineers comes from. "Yes" and "No" are difficult answers. The world is full of "Maybe". Having the intuition from experience to pull the trigger even when the answer is "Maybe" is what makes you a high-level engineer.


> Feynman was also no stranger to "Proof By Intimidation." So, skepticism about his claims was always warranted if you couldn't follow them yourself.

Any specific example?


Mostly interviews with students or grand-students.

However, I think Carver Mead also mentions this about "Collective Electrodynamics". Feynman got a lot of it right, but there were gaps and he simply bulldozed over people who poked at the gaps--sometimes because he believed it obviously true and sometimes because he wanted to dissuade people from working on something he was already working on himself.

It's kind of a bad habit of lots of academics. Nobody has the time to understand the whole stack of things you stand on, so someone (generally junior) poking at something way down the stack is usually going to get brushed off with "That's obviously true." And, most of the time, that's fine. But sometimes it's not, and in engineering that sometimes has a cost.

A vital part of engineering is communication. For example, I have calculated the resonant frequency of a microprocessor power grid using a Poynting vector formulation. It was very clever and very accurate, but completely indecipherable to most engineers. It was my job to correlate that with something that fellow engineers understood and trusted.
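(For what it's worth, the kind of thing fellow engineers understand and trust is usually something like the textbook lumped-LC resonance estimate. A two-line sketch with made-up package and decap numbers, purely to show the scale, and obviously not the Poynting-vector calculation:)

    import math

    L = 10e-12   # hypothetical package loop inductance, 10 pH
    C = 100e-9   # hypothetical on-die decoupling capacitance, 100 nF
    f = 1.0 / (2 * math.pi * math.sqrt(L * C))   # lumped-LC resonant frequency
    print(f"{f / 1e6:.0f} MHz")                  # ~159 MHz with these numbers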


You read way too much into that.

I never said A was better than B, you just assumed and then got on your soap box to lecture me.

Also, I don't care what you like or if you think my post is great.


This thing has so much soul. Its physical design is intentional; its form reflects its structure and function.

Supercomputers now are just big racks with maybe a cool graphic printed on the side, which makes me kind of sad.


Agreed. Seems The Connection Machine was the last gorgeous supercomputer.


I loved working at TMC. Amazing collection of some of the smartest and most interesting people.


Did/do you work at Ab Initio too, or just TM?


got any cool stories?


Someone should make a mini CM-2 -- desktop size. With the LEDs and everything. And then some kind of emulator (or FPGA) on the inside that could actually run their code. Maybe connect to a desktop via USB or something.

I wonder if the compiler/whatever for the host machine is available anywhere?


There is the *Lisp (StarLisp) simulator at least: http://www.cs.cmu.edu/afs/cs/project/ai-repository/ai/lang/l...

This seems to be a GitHub mirror: https://github.com/LdBeth/star-lisp
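If you just want the flavor of the programming model without running the simulator: a pvar is one value per virtual processor, operators apply elementwise across all processors at once, and conditionals mask processors in and out. A rough NumPy analogy (not actual *Lisp syntax, and obviously none of the real parallelism):

    import numpy as np

    # Pretend each array element lives on its own virtual processor (a "pvar").
    n_procs = 8
    x = np.arange(n_procs)      # one value per processor
    y = np.full(n_procs, 10)

    z = x + y                   # roughly (+!! x y): elementwise, all processors at once
    mask = x % 2 == 0           # roughly (*when (evenp!! x) ...): select processors
    z[mask] = 0                 # operate only on the selected processors

    print(z)                    # [ 0 11  0 13  0 15  0 17]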


I have made a ~10% scale model of one, with lights and all.

The innards do not run CM software, though, but I did try to stay true to the overall spirit of the design. I have a bunch of Intel Edison boards slotted into the cubes that can slide in and out to add processing capacity for my model setup. I copied the overall gist of the original cooling solution too.

I agree that there should be kits for this sort of thing. The machine looks so cool.


if anyone is interested... I'm sure there are other people more qualified, but I did spend quite a bit of time working with Paris and the underlying microcode system CMIS. it really wouldn't be that huge of a project to undertake. (@gmail.com)

one FPGA cell looks a lot like a CM processor, but since it gets reprogrammed every cycle, idk that that direct mapping works. but something would.
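for reference, the CM-1 processing element was basically a 1-bit ALU plus a few kilobits of local memory, with every cell executing the same broadcast instruction and a per-processor context flag deciding who participates. a toy sketch of that bit-serial SIMD model (made-up operation, not the real Paris/CMIS encoding):

    # Toy SIMD model: many bit-serial processors executing one broadcast
    # instruction stream. Made-up operation; not the real CM-1 microcode.
    N_PROCS = 16
    MEM_BITS = 64
    mem = [[0] * MEM_BITS for _ in range(N_PROCS)]   # per-processor bit memory
    context = [True] * N_PROCS                        # participation flags

    def broadcast_add(src_a, src_b, dst, width):
        """Every active processor adds the width-bit field at src_a to the
        field at src_b, one bit per step (LSB first), storing it at dst."""
        for p in range(N_PROCS):
            if not context[p]:
                continue
            carry = 0
            for i in range(width):
                a, b = mem[p][src_a + i], mem[p][src_b + i]
                mem[p][dst + i] = a ^ b ^ carry
                carry = (a & b) | (carry & (a ^ b))

    # Example: store p in field 0, the constant 1 in field 8, add into field 16.
    for p in range(N_PROCS):
        for i in range(8):
            mem[p][i] = (p >> i) & 1
        mem[p][8] = 1
    broadcast_add(0, 8, 16, 8)
    print([sum(mem[p][16 + i] << i for i in range(8)) for p in range(N_PROCS)])
    # -> [1, 2, 3, ..., 16]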

an interesting question though is what the point would be. CM software wasn't anything like 'normal' software... so it took some substantial work to port codes.

would anyone really use such a thing or it is just a curiosity?

a _really_ cool project would be implementing CM-LISP

edit: everything except the router that is. that would be a bit of trouble


@convolvatron Why do you say that implementing CM-LISP would be a really cool project?


unfortunately someone has managed to scrub all the PDFs off the internet.

CM-LISP to me is kind of the holy grail of implicit parallelism. it's kind of a relational paradigm, but organized around recursive distributed mappings.

the only way that I can think to compile it is to build something like a relational query optimizer. except (I'm pretty sure(?)) mappings are first-class objects and the distributions of the underlying sets are fully dynamic.

so that's already a lot to try to come to grips with, but in this case the target is SIMD, which would in theory permit fulfillment of the CM thesis - arbitrarily scalable general computers without the overhead of reactive consistency (cache coherency).

CM programming was actually not that bad - but you needed to lay everything out in 2^n cartesian meshes. CM-LISP is just a free-for-all in terms of domain organization. actually I do really want to find the pdf again because it's a much nicer model than GraphQL.

I kind of suspect that it can't be compiled effectively, but last time I checked I'm only like 0.05 Steeles. I don't _know_ that it would turn out to be as compelling a programming model as I think, but would love to find out.

edit: I did find some cm-lisp related material in Hillis's thesis at https://dspace.mit.edu/bitstream/handle/1721.1/14719/1852428...

[there is a strong implication in the thesis that layout was intended to be dynamically optimized - so maybe the compiler issue isn't as terrible as I think]
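to make "mappings are first-class objects" a little more concrete: the central data structure in CM-LISP is the xapping, essentially a parallel map from arbitrary keys to values, with alpha notation for elementwise application and beta for combining/reduction. a rough sequential sketch of the semantics in Python (just the model, none of the syntax or the parallelism):

    def alpha(fn, *xappings):
        """Elementwise apply: evaluate fn at every key present in all the
        xappings (conceptually one evaluation per processor, all at once)."""
        keys = [k for k in xappings[0] if all(k in x for x in xappings[1:])]
        return {k: fn(*(x[k] for x in xappings)) for k in keys}

    def beta(combine, xapping):
        """Reduction: combine all the values of a xapping into one result."""
        vals = iter(xapping.values())
        acc = next(vals)
        for v in vals:
            acc = combine(acc, v)
        return acc

    prices = {"apple": 3, "pear": 5, "plum": 2}
    counts = {"apple": 4, "pear": 1, "plum": 6}
    totals = alpha(lambda p, c: p * c, prices, counts)   # elementwise multiply
    print(totals)                            # {'apple': 12, 'pear': 5, 'plum': 12}
    print(beta(lambda a, b: a + b, totals))  # 29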


Thinking about it - the Tera MTA (https://en.wikipedia.org/wiki/Cray_MTA) would make a really nice platform for CMLISP.


Anyone know how many CM-1s and CM-2s were made? I spent a few minutes googling and got nothing. Did they sell 10s of them or 100s of them or more?


from my very hazy memory, there were only a few CM-1s made. maybe 10 in total outside TMC?

CM-2s showed up at most supercomputer centers in the US, and even some universities. I think Yale(?) had an 8k. there were several at black sites - at least another 8 machines I guess.

so at least tens, but I don't think 100s


Ok, thanks.

It occurred to me how cool it would be to pick one up on eBay. Obviously, I didn't find one. (Not that I would have actually bought one.) I guess this is why :)


haha - first thing I did when I saw the hypercube/neurons t-shirt was check ebay - no dice!


In the article it's referred to as the "CM-1 tshirt design" so I plugged the first two words into duckduckgo and the first result led me to someone who is selling the tshirts:

http://www.tamikothiel.com/cm/cm-tshirt.html


Which is actually the same site as the article. It's Tamiko Thiel's personal site.


hey thanks! goes to show i need to step out of my ebay bubble from time to time


I have one and the printing and material quality were very good.


top500.org for June 1994 lists 79 entries for Thinking Machines, which is their high-water mark, but it looks like most of those are CM-5s, with about 20 being CM-2s.

So, assuming some of the CM-5s are replacements for CM-1s and CM-2s, I would expect they sold maybe 100 machines total, 150 absolute max, across all three models.


The mid-1980s to mid-1990s was the golden age of computer architecture experimentation. Computer-aided circuit design meant a relatively small group of engineers could design and fabricate a new CPU.

The competition from rapidly evolving commodity CPUs ended this era. It took 3-5 years to make a new generation of special-purpose CPU and financially justify manufacturing it. By that time, commodity CPU price-performance had increased by an order of magnitude and caught up. Few of these designs ever produced a second generation, and I think only Convex made it to gen 3 before folding.


I just saw a nice keynote by David Beazley on how he installed Python on a CM-5 at Los Alamos. Very entertaining.


What OS was it running? Or did he compile CPython from source?


The CM-5 used Solaris front ends; the back ends didn't have an OS.


This is the talk: https://m.youtube.com/watch?v=4RSht_aV7AU; it is a full hour. Quite enjoyable, but light on in-depth technical stuff. I don't recall him mentioning how he got Python running technically; it is more about how he came to be in the position to do so in the first place.



