What Extropic is building (extropic.ai)
185 points by jonbraun on March 11, 2024 | 140 comments


I’m not a fan of Extropic, but I’m seeing a lot of misconceptions here.

They’re not building “a better rng”- they’re building a way to bake probabilistic models into hardware and then run inference on them using random fluctuations. Theoretically this means much faster inference for things like PGMs.

See here for similar things: https://arxiv.org/abs/2108.09836

There’s a company called Normal Computing that did something similar: https://blog.normalcomputing.ai/posts/2023-11-09-thermodynam...
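To make the "inference using random fluctuations" part concrete, here's a toy Gibbs sampler for a tiny Ising-style model in Python. The couplings are made up and nothing here is Extropic-specific - it's just the kind of sampling loop that this class of hardware would aim to do with physical noise instead of a PRNG:

    import numpy as np

    rng = np.random.default_rng(0)

    n = 16
    J = rng.normal(scale=0.3, size=(n, n))
    J = (J + J.T) / 2                   # symmetric couplings between spins
    np.fill_diagonal(J, 0.0)
    h = rng.normal(scale=0.1, size=n)   # per-spin biases
    beta = 1.0                          # inverse temperature

    s = rng.choice([-1, 1], size=n)     # random initial state
    for sweep in range(1000):
        for i in range(n):
            # Flip spin i according to its conditional Boltzmann probability.
            field = J[i] @ s + h[i]
            p_up = 1.0 / (1.0 + np.exp(-2.0 * beta * field))
            s[i] = 1 if rng.random() < p_up else -1

    print(s)   # one approximate sample from the model's distribution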


Skimmed the litepaper. Has the flavor of: you can do "simulated" annealing by literally annealing. I like the idea of using raw physics as a "hardware" accelerator, i.e. analog computing. fwiw, quantum computing can be seen as a form of analog computing.

I do think that a "better rng" can be interesting and useful in and of itself.

Thanks for the Normal Computing post, it felt more substantial.


I make a better RNG right now (https://arbitrand.com).

We experimented with doing ML training with it, but it's not clear that it trains any better than a non-broken PRNG. It might be fun to feed the output into stable diffusion and see how cool the pictures are, though.


Cloud RNG streaming is interesting but costly, no? I did have the idea of serving truly random numbers via a quantum computer (trivial: just prepare the simplest state and measure). Nothing else can be said to be truly random.


You don't need a quantum computer to sample noise from quantum processes.

It's a prohibitively expensive way to go, and depending on how you built the quantum computer, it may be more susceptible to interference and non-quantum noise than using good circuits and custom systems.


Oh sure, it was more of a joke app idea, or in another sense, for those who want philosophically "perfect" randomness. After all, assuming a single-bit sample, anything but a Hadamard state has suboptimal entropy. Of course, it makes almost no sense to sample a bit at a time, since we want pragmatism. But having a virtual coin flip essentially "create" a new, nonexistent bit of information in the universe -- that's funny (well, assuming certain interpretations of QM).


The website has a picture https://arbitrand.com/fpga.png but it is difficult to understand what it represents. Needs clarification.


That looks like the placement of their circuit on the FPGA. Like a screenshot from ChipPlanner in Quartus


with error correction, qc is entirely distinct from analog computing. that is what makes it even remotely viable, theoretically.


I'm using the term "analog computing" to mean computing with non-digital hardware (or even digital hardware used in a nonstandard way). A quantum processor is not digital, as each qubit has an uncountable number of states. A quantum computer would likely have a classical (digital) part to measure the quantum processor's registers, which are a bunch of "analog" states. And even if one wants to use terminology in a way that qc isn't analog, it would still objectively share many qualities with "normal" analog computing (basically all of its differentiators against classical compute).

Then again, I should probably also ask what you mean by "analog computing", and why you think quantum error correction would not allow qc to be classified as analog.

For context, I did research on crafting a high-fidelity (error-correcting) quantum gate (successfully).


It did make me curious however, if we dropped the requirement that operations return correct values in favor of probably correct values - would we see any material computing gains in hardware? Large neural models are intrinsically error correcting and stochastic.

I’m unfortunately not familiar enough with hardware to weigh in.


The trouble is if you use actual randomness then you lose repeatability which is an incredibly useful property of computers. Have fun debugging that!

What you want is low precision with stochastic rounding. Graphcore's IPUs have that and it's a really great feature. It lets you use really low precision number formats but effectively "dithers" the error. Same thing as dithering images or noise shaping audio.
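A minimal numpy sketch of stochastic rounding (not Graphcore's actual implementation, just the textbook trick):

    import numpy as np

    rng = np.random.default_rng(0)

    def stochastic_round(x, step):
        # Round each value to a multiple of `step`, rounding up with probability
        # equal to its fractional position between the two neighbouring grid
        # points. In expectation the result equals x, so the quantization error
        # dithers out instead of biasing every update in the same direction.
        lower = np.floor(x / step) * step
        frac = (x - lower) / step
        return lower + step * (rng.random(np.shape(x)) < frac)

    x = np.full(10_000, 0.30)
    print((np.round(x / 0.25) * 0.25).mean())   # deterministic rounding: 0.25, biased low
    print(stochastic_round(x, 0.25).mean())     # ~0.30 on average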


Yeah, debugging would be a pain, but in the context of inference/training it's arguably unnecessary. There is some set of ops that requires high precision: if I L2-normalize a tensor, I really need it to be normalized. But matmul/addition? Maybe there is wiggle room.
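A quick numpy illustration of why the normalization case is the fussy one (toy numbers, just to show the contrast):

    import numpy as np

    rng = np.random.default_rng(0)
    v = rng.normal(size=1024)
    unit = v / np.linalg.norm(v)                 # normalize in float64

    # Quantize the "normalized" vector to float16 and back: it is no longer
    # exactly unit norm, so anything downstream that assumes ||x|| == 1 drifts.
    quantized = unit.astype(np.float16).astype(np.float64)
    print(np.linalg.norm(quantized))             # close to, but not exactly, 1.0

    # A plain dot product with the same quantized vector barely notices.
    w = rng.normal(size=1024)
    print(np.dot(unit, w), np.dot(quantized, w)) # agree to several digits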

Big challenge would be whether any gains could compete with the economy of scale from NVidia.


So it sounds like this startup is explicitly not using foundation models?

Is there any evidence that such a probabilistic model can run better than a state of the art model?

Or alternatively what would it take to convert an existing model (let's say, an easy one like llama2-7b) into an extropic model?


> Is there any evidence that such a probabilistic model can run better than a state of the art model?

No, but they got $15M in seed funding anyway.


I wouldn't want to write this off, because you get the feeling these guys are on to something that could be hugely important (ignoring the quantum-this, thermodynamic-that). But it surely feels like they need to get to the point a lot faster, e.g.

"We're taking a new approach to building chips for AI because transistors can't get any smaller."

I really don't know what they gain by convoluting the point and it's pretty hard to follow what the CEO is talking about half the time.


Quantum computing people have been selling this exact spiel (including the convoluted talking points) for decades and it keeps working at getting funded. It has not produced any results for the rest of us, though.


One difference is that baking mathematical models into electronic analogs is older than integrated circuits. We deviated from that model because the re-programmability and cost of general-purpose digital computers made them way more economical than expensive and temperamental single-purpose analog computers built as bespoke hardware. The unit economics basically killed analog computing. What Extropic (and others) have identified is that in the case of machine learning, the pendulum might have to swing back, because we do have a large-scale need for bespoke hardware. We'll see if they're right.

Quantum computing has been exploring an entirely new model of computation for which it's hard to even articulate the problems it can solve. Whereas using analog computers in place of digital is already well defined.


A lot of quantum computing companies have the same idea of hard-baked analog computing for a useful algorithm. D-Wave was the biggest one to go bust.


Neither has fusion research produced anything for us yet. Should we stop funding it?


Arguably yes, in a commercial sense. To give the fusion folks some credit, they haven't been promising that commercial products are "just around the corner" for the last 30 years the way QC people have, and the quacks (cold fusion) were excised from the field for making those false promises. I do think that if your field as a whole continually makes huge promises and never delivers, it should probably tarnish that field's reputation.

However, if you're thinking about research grants, no. That's the point of research grants.


The tech could be really cool if e.g. classifiers could be represented within the probability space modeled on their hardware. However their shaman-speak isn't confidence inducing.


Your summary seems to miss a later quote from the article:

> Extropic is also building semiconductor devices that operate at room temperature to extend our reach to a larger market. These devices trade the Josephson junction for the transistor. Doing so sacrifices some energy efficiency compared to superconducting devices. In exchange, it allows one to build them using standard manufacturing processes and supply chains, unlocking massive scale.

So, their mass-market device is going to be based on transistors.

The actual article read like a weird mesh of techno-babble and startup-evangelism to me. I can't judge if what they are suggesting is vaporware or hyperbole. This is one of those cases where they are either way ahead of my own thinking or they are trying to bamboozle me with jargon.

I personally find it hard to categorize a lot of AI hype into "worth actually looking into" vs. "total waste of time". The best I can do in this case is suspend my judgement and if they come up again with something more substantive than a rambling post then I can always readjust.


> trying to bamboozle me with jargon

Am I the only one who thought the article was clear, lucid, and reasonably concise?

The company's success or failure will depend on execution, but the value proposition is quite sound. Maybe I've just spent too much time in the intersection between information theory, thermodynamics, and signal processing...

"Don't splurge on high SNR ('digital') hardware just to re-introduce noise later." == "Don't dig a hole and fill it in again. You waste energy twice!"


> Doing so sacrifices some energy efficiency compared to superconducting devices.

In most applications superconductivity does not actually yield better energy efficiency at system level, since it turns out cooling stuff to negative several hundred degrees is quite energy demanding.


Convolutional neural networks were a huge advancement in their time


I don't disagree. I just come away from the article feeling more confused as opposed to enlightened and excited about what they're building.

It even makes me think that they don't understand what they're talking about which is why they're using complicated terminology to mask it but I'm hopeful I'm wrong and this is an engineering innovation that benefits everyone.


I get that feeling, too.

There may or may not be something there, but the article is mostly buzzword-slinging. They wrote "This will allow us to put an Extropic accelerator in every home, enabling everyone to partake in the thermodynamic AI acceleration." Huh?

If they said something like "We are trying to cut the cost of stable diffusion by a factor of 100", that would sort of make sense. But then people would want to see a demo.


A proof-of-concept would be amazing and that's what I thought they were releasing and would justify the hype (as opposed to a whitepaper). Maybe in a couple of months we'll see a HN post doing exactly this and we can eat our words (which I really hope is the case).


I have no idea about the merits of this approach, but I found this interview with the founders a lot more sensical than the linked article:

https://twitter.com/Extropic_AI/status/1767203839818781085


This was definitely easier to follow.

Since they're building a special-purpose accelerator for a certain class of models, what I'd like to see is some evidence that those models can achieve competitive performance (once the hardware is mature). Namely, simulate these models on conventional hardware to determine how effective they are, then estimate what the cost would be to run the same model on Extropic's future hardware.


Ah, but running an experiment like that risks it returning an answer you don't like.


Much, much better. The first minute or so explains what they are trying to do and why in a way the I can understand.

This interview makes me much more excited and less skeptic than Verdon's usual mumbo-jumbo jargon. He should try using simpler, and more humble language more often.


This interview makes their product seem like BS. First, they literally cannot simply explain the problem or solution. Regardless, their pitch is that they're building a more power efficient probability distribution sampler. No one in AI research thinks that's a bottleneck.

edit: btw the bottleneck in AI algos is matrix multiply and memory bandwidth.
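Back-of-the-envelope version of that claim, with assumed round numbers rather than a real profile:

    # Very rough numbers for a ~7B-parameter decoder LLM generating one token.
    params = 7e9
    matmul_flops_per_token = 2 * params      # ~2 FLOPs per weight per token
    vocab = 32_000
    sampling_ops_per_token = vocab * 5       # softmax over the vocab + one draw, ballpark

    # Matmul work outweighs the sampling work by roughly five orders of magnitude.
    print(matmul_flops_per_token / sampling_ops_per_token)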


My take on the Garry Tan interview (which seems pretty clear, regardless of whether this is snake oil or not) is that Extropic are building low-power analog chips because we're hitting up against the limits of Moore's Law (the limits of physics in reducing transistor size), and at the same time the power consumption for LLM/AI training and inference is starting to get out of hand.

So, their solution is to embrace the stochastic operation of smaller chip geometries where transistors become unreliable, and double down on it by running the chips at low power where the stochasticity is even worse. They are using an analog chip design/architecture of some sort (presumably some sort of matmul equivalent?) and using a "full-stack" design whereby they have custom software to run neural nets on their chips, taking advantage of the fact the neural nets can tolerate, and utilize, randomness.


Just watched a few minutes of the Lex interview, and have to say Verdon gives off a totally different vibe there, and seems to be talking gibberish about quantum computing.

However, the idea of using analog matrix multiply is reasonable, and has already been done by at least one company:

https://mythic.ai/products/m1076-analog-matrix-processor/


I'm sorry this may come off as rude, not my intention: The Gary Tan interview explicitly says those things, I'm not sure that's really your "take".


Fair enough, but others seem to have a different take!


True!


Computationally, yes, those are the bottlenecks. But I would also add supervised training data, as we can never get enough of that, and it is one of the few things that increases in compute are not able to solve (to my mind; you could argue that by scaling unsupervised training further we could do away with it, but I am not yet convinced).


Their startup is addressing computing bottlenecks, so that's what I addressed. Supervised training data isn't a bottleneck for LLMs, diffusion models, or any of the hot areas at the moment.


I think the situation is less clear than that. While I have limited research experience with image generation, I believe I do have a fair understanding of large language models. From the publication of GPT-2 until ChatGPT, the argument was always that supervised training data was not a priority and that it all boiled down to scaling the amount of unsupervised training data. However, this all changed with preference tuning, etc., and I think there is also an argument to be made that the extensive training data curation that we see today (and which is withheld from the "papers" we see for the models) is a form of supervision in its own right. It could be that we will see computational/data scaling dominate again, but I think it is equally possible that the next few years will be dominated by data curation and by exploring forms of supervision to "extract" value out of what was learnt at the unsupervised training stage.

Still, you are correct that Extropic is looking at the computation rather than data. But, I wanted to chime in so as the discussion here would not leave the impression that we are still in the days of pure unsupervised scaling.


My understanding is that the goal of these approaches is to avoid those bottlenecks.


Did they invent new DL algorithms and publish them? If I remember what I heard in the interview correctly, this targets existing architectures.


No, they're using analog computers. They point that out in the interview and the linked article.


To clarify, I meant neural network architectures not chip architectures.


Vaguely though what they are talking about sounds like it might be better for training? (I'm really stretching it here)


Yes that's stretching the truth


And Lex's podcast/interview with Guillaume Verdon, one of said founders.

https://m.youtube.com/watch?v=8fEEbKJoNbU&pp=ygUVbGV4IGZyaWR...


Anyone else get super creepy vibes from the way he talks in this video? I'm calling that it's a fraud.

If it is a fraud, how do people like this get funded?? (And how can I be creepier so that my real ideas get funded)


He gets lots of interesting guests (and some BSers) on his podcast, so people listen.


I'm not talking about lex. Lex is fine (if boring; that's good, it puts the focus on the guest).


The sad truth: get on Twitter and say a lot of weird, "high-minded" things. It's where VCs hang out, and this is the language they get from a lot of people.


People need to read Hamming’s old papers in which he very clearly explains why analog circuits are not viable at scale. This is also why the brain uses spikes rather than continuous signals. The issue is noise, interference, and attenuation. There’s no way to get around this. If they have invented a way, I’d like to see it. But until it’s demonstrated, I’d take such things with a large grain of salt.


You can re-quantize analog signals into a finite number of levels to prevent noise accumulation. That's how TLC (8 levels) and QLC (16 levels) flash memory cells work. The cells store an analog value, but it's forced to a value close to one of N discrete values. The same approach is used in modems.

Deep learning doesn't seem to need that much numerical precision. People started with 32-bit floats, then 16-bit floats, now sometimes 8-bit floats, and recently there are people talking up 2-bit ternary. The number of levels needed may not be too much for analog. If you have a regenerator once in a while to slot values back to the allowed discrete levels, you can clean up the noise. That's an analog to digital to analog conversion, of course.
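Toy sketch of that regeneration step (just the nearest-level snap; made-up numbers, nothing specific to any real device):

    import numpy as np

    rng = np.random.default_rng(0)
    levels = np.linspace(-1.0, 1.0, 8)               # 8 discrete levels, like a TLC cell

    def regenerate(x):
        # Snap each noisy analog value to the nearest allowed level, discarding
        # the accumulated noise as long as it stays under half a level step.
        return levels[np.argmin(np.abs(x[:, None] - levels[None, :]), axis=1)]

    stored = rng.choice(levels, size=5)              # ideal stored values
    noisy = stored + rng.normal(scale=0.05, size=5)  # analog noise creeps in
    print(np.allclose(regenerate(noisy), stored))    # True: the noise was cleaned up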

That's not what these guys are talking about, as far as I can tell.


analog circuits are making a comeback because they are great for simulating the equations of the physical world more efficiently than a digital approach. https://spectrum.ieee.org/not-your-fathers-analog-computer


Sounds interesting. Do you have a link? (or at least a title?)


Not at the moment, but I do recall he has a chapter on this in his book “The Art of Doing Science and Engineering”, which I also recommend. He uses very long transmission lines to explain this, but the same thing applies at the nano scale, and perhaps to an even greater extent due to the much noisier environment and higher frequencies.


I really hope this was an experiment in using gen AI:

“Create a website for a new company that is building the next generation of computing hardware to power AI software. Make sure it sounds science-y but don’t be too specific.”


Why make such a low-effort, pessimistic comment? What happened to HN?


HN has always been a tense standoff between a few cliques, the first two being the ostensibly intended audience;

* competent and curious engineers

* entrepreneurs, who live on a continuum where one end is...

* ...hucksters and snake-oil purveyors, of which there are plenty, and

* (because this is the Internet) conspiracy theorists and other such loons

and recently

* political provocateurs

You can make a thread work (for that group of people) if it self-selects who reads it. Unfortunately, AI is catnip to all five of these groups, so the average thread quality is exceptionally low – it serves all five groups badly.

Whether some of these people _should_ be served well is a separate question.


same thing that is happening everywhere. cognitive effort is getting unbalanced, it used to be necessary to put effort in on write, and on read.

now, it is hard to tell who put effort in at all. read or write.

would you consider your own response to be optimistic or high effort?


Snark has always been part of this website.


I think pointing out BS is an important part of a useful forum.


You might have responded to the wrong comment.

binoct's comment is not "pointing out BS" - that requires that you, y'know, actually point out things. binoct's comment is sneering, without any content or thought. Pure fluff.


^ How to spot a sucker


The use of “full-stack” was the first thing I noticed. Everyone, please stop using that term. I’m pretty sure, with a high degree of certainty, you don’t know what it means. If you do, there’s a merit badge waiting for you. And can we please stop using “hallucinations” to describe output. Yes, it may look like your tool dropped acid, but that’s not what it is.


Rob Pike once said that he was "full-stack": when he worked on Voyager, he understood the system from quantum mechanics to flight software (https://hachyderm.io/@robpike/109763603394772405)


No joke: The guy that coined the term is the same guy that made the merit badge. Enjoy looking that one up.


full-stack means the ic can take any ticket. do the details beyond that matter?


Well, quantity has a quality all its own. Extending the sprint to wait for the person to complete their PhD as part of the "research" part of the ticket would not quite be Scrum.


Buy the ticket, take the ride…


first you have to be told what you’re buying next.

(before that you were given the chance to object to the estimate, but not to change it.)


No sympathy for the Devil, keep that in mind.


I now think of the "stack" of a modern business as starting with physics and ending with making someone happy (unless you are Oracle). Full-stack engineers should then know how to connect physics to peoples' happiness.


> can we please stop using “hallucinations” to describe output.

Right. A better word is confabulation.

I.e. pseudomemories, a replacement of a gap in information with false information that is not recognized as such.


Unimportant, but if you're citing Moore's paper, I feel like you're just trying to pad out the references to make it look like you're serious.


At a high level it is the right answer to the data-center electricity demand problem, which is that we need to make AI hardware more efficient.

Pragmatically, it doesn't make much sense given that it would take years for this approach to have any real-world use cases even in a best-case scenario. It seems way more likely that efficiency gains in digital chips will happen first, making these chips less economically valuable.


This guy spends an extraordinary amount of time posting memes and e/acc silliness.

So much so I wonder what the hell they're doing with this company. Is he a prolific poster and an engineering genius? Or is he just another poster


For the longest time I thought the person behind the account was just some random guy who was probably very into crypto and decided to dabble in AI because of the parallels between e/acc and the whole "to the moon" messaging you find in crypto communities.

Never would have guessed the guy was an actual physicist


Hard time believing this is legit given how much time the CEO spends goofing around on social media. If it were possible to short startups, this would be a top candidate.


Honestly, it's too early to say. Considering the people who invested in this startup, it's better to assume the CEO is capable. If he isn't able to deliver on a reasonable timeline, then we're all free to blame him for posting things on SM. Actually, many people know his company precisely because he's goofing around on SM, especially the e/acc stuff.


It's more interesting to see who passed on it. There isn't a single top tier VC here.

This whole pitch sounds like the usual quantum computing babble.


Exactly!! Most of the large accounts hyping this stuff up are invested in it, don't seem to have a clear idea of what they invested in, and/or are part of the inner clique. Also, there don't seem to be any serious hardware investors in that group - more folks who have bought into e/acc and all the sci-fi mythos around it.


So, basically this seems to be a way to replace PRNGs with real randomness with some knobs so you can adjust the distribution. Let's assume for the sake of argument that this can replace every single PRNG call in inference and training, how much savings in cost/energy/run time would there actually be?


Assuming they're free: Essentially nothing. PRNGs are incredibly cheap.
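For a sense of scale, this is roughly everything a fast non-cryptographic PRNG does per 64-bit sample (standard xorshift64*, written out in Python for readability; in hardware it's a handful of shift/xor/multiply instructions):

    MASK = 0xFFFFFFFFFFFFFFFF

    def xorshift64star(state):
        # One step of xorshift64*: three shift/xor steps plus one multiply.
        state ^= state >> 12
        state ^= (state << 25) & MASK
        state ^= state >> 27
        return state, (state * 0x2545F4914F6CDD1D) & MASK

    s = 0x9E3779B97F4A7C15          # any nonzero seed
    s, sample = xorshift64star(s)
    print(hex(sample))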


This is a quantum computing company, specifically for quantum ML.


Could someone smarter than me explain if this is a big deal or just hype? The work sounds promising, but I wonder how long it would take to build and validate.


After skimming the article, go back to the beginning, and ponder the opening stanza:

> We are very excited to finally share more about what Extropic is building: a full-stack hardware platform to harness matter's natural fluctuations as a computational resource for Generative AI.

This is New Age, dressed up with the latest fashion.


So exciting! We'd be walking amongst our GAI brethren this very day if it weren't for the computational limits of those pesky RNGs!


I can sell you a solution to that in AWS/Azure (or on prem) today if you really want to use a TRNG for your ML training :)

They are very energy efficient (measured in pJ/bit), but non-cryptographic PRNGs, which are typical for ML, are far more efficient.

It's not obviously wrong to think that AI algorithms will pick up bias from "overfitting" to their PRNGs used during training, but I'm not expecting the benefits to be very large.


As far as I can tell, as someone with relevant hardware expertise, this is a quantum machine learning startup.


Sounds like a really convoluted new agey way of talking about some kind of analog computing.

AFAIK there are other efforts to develop analog neural network ASICs. Since neural networks are noise-tolerant this could work and could allow faster computations than conventional must-be-perfect digital circuits. IBM, Intel, and others have experimented with this.

I wouldn't believe there's anything particularly novel here unless a lot more detail or test hardware is given.

I'm not 100% sure this is true but I've heard that this fellow was involved with the NFT craze and made money there, and that sets off alarm bells. I've suspected for a while that e/acc is a marketing thing since it's just repackaging old extropian stuff from the 1990s.

"I want to believe" but have seen enough to be skeptical of extreme claims without hard evidence.


I have a suspicion that a lot of people are nodding along because they don't want to seem like the village idiot.


On the other hand, HN is filled with self-proclaimed critics who dismiss everything and display their utter lack of imagination -- as with AI, the Metaverse, (the success of) Snapchat, and AirPods before.


Did HN dismiss AI? I saw some skepticism and still do but not dismissal.

This site doesn’t get everything right. It tends to miss things that succeed in the consumer space because this is a pro audience not a mainstream audience. But it usually gets hard science right.


nods vigorously


I don't know about the AI aspect, but this sounds perhaps related to probabilistic finance simulations (Black-Scholes, Heston, etc). I've heard rumors that these types of simulations account for an obscene amount of compute at AWS


I'm similarly suspicious, and find it curious that this is the first I'm hearing about this at all. I don't have personal connections in physics or AI circles but I feel like I'd usually expect to have read mention of these ideas before finding this press release.


My takeaway is that the chip's goal is to produce random numbers from a configurable distribution in a way that is faster and more energy efficient.

As far as the feasibility and impact on AI in general, I have no idea.


This is hype.

Someone should tell them about MCMC and the like.

Or if they want to accelerate MCMC for a particular problem, they can build a classical ASIC and scale it.


It sounds like complete BS, unfortunately.


It's a startup with well-credentialed and very technical founders and a fair seed round, focused on accelerating one bottleneck in a newly popular computing paradigm using techniques that are known in research but never yet commercialized.

It might fail for the reasons many startups fail, but it's not prima facie fantasy.


But I don’t see the bottleneck. What are they optimizing that’s worth all this effort? As others have noted, RNGs are not a notable bottleneck in AI.


They're not wrong that sampling a complex, higher-dimensional probability distribution is hard to do efficiently. I'm not sure how useful it is to do it more efficiently, though.

Also, the fact that they're using ultra-cold superconductors makes me wonder how much noise helps and how much it hurts. If your system is all about leveraging noise well, but you can only use super special well-behaved noise, then "bad noise" could easily ruin the quality of your generated solutions.

It's cool to see something so wacky out there, though!


interesting that a company w/ no public repositories has 1.1k github followers https://github.com/extropic-ai


It's led by the e/acc [1] founder, BasedBeffJezos [2]. He has a huge cult following. It's turned into a lot of Twitter memes and shitposting [3].

[1] https://en.wikipedia.org/wiki/Effective_accelerationism

[2] https://twitter.com/BasedBeffJezos

[3] https://knowyourmeme.com/memes/cultures/eacc-effective-accel...


Honestly, as interesting as the chip sounds, I'm admittedly kind of biased against the company's probability of success simply because the founder is basically the #1 e/acc meme account/shitposter on Twitter.

Like, it's hard to take someone seriously when they spend tons of time shitposting on Twitter, it's even harder when it's revealed that they're behind one of the most popular shitposting accounts within a niche, almost cult-like community.


Back in the day, there was a saying that went something like: Steve Jobs was really good at what he did, and also an asshole to people. The former is really hard to replicate, so instead you'll find a lot of people going around and imitating the latter.

Today, it's a very different situation. Now we have Elon Musk, who is really good at what he does, and also tweets a lot...


Yeah, but Elon is one of one.


The founder ('Beff Jezos') has a large twitter presence.


To be fair it isn't very common to detail proprietary hardware in github repos. And any code for such novel processors would be fascinating but useful only for theory rather than practice at the moment. The lack of open code is a missing merit badge rather than a demerit.


Physical learning machines require noise to learn. They are also necessarily dissipative. See https://arxiv.org/abs/2209.11954. The key is to engineer the noise to maximise the learning rate. In classical devices, stochastic switching is controlled by temperature through the Kramers rate. This means kT controls energy loss. If you use dissipative quantum tunnelling this is not the true thermodynamic lower bound. Any quantum nonlinear dissipative system, with a far from equilibrium steady state, is a good case to consider. Dispersive optical bistability, realised in SC quantum circuits, is the way to go. And quantum error correction is unnecessary.
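For anyone who hasn't met it, the Kramers rate referred to above is, in the overdamped limit, an attempt frequency set by the well/barrier curvatures and the friction, suppressed by a Boltzmann factor. A rough sketch with made-up numbers:

    import numpy as np

    k_B = 1.380649e-23   # Boltzmann constant, J/K

    def kramers_rate(omega_well, omega_barrier, gamma, barrier_energy, T):
        # Overdamped (high-friction) Kramers escape rate: prefactor from the
        # potential curvatures and friction, times exp(-E_b / kT). Textbook
        # formula only - nothing here is specific to Extropic's devices.
        return (omega_well * omega_barrier / (2 * np.pi * gamma)) * np.exp(
            -barrier_energy / (k_B * T)
        )

    # Illustrative values: GHz-scale curvatures, heavy damping, barrier of 10 kT at 300 K.
    print(kramers_rate(1e9, 1e9, 1e10, 10 * k_B * 300, 300.0))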


The amount of buzzwords on this page should disqualify this from even getting votes on HN. Anyone who writes like this is trying to confuse and mislead the reader.


Too early to tell what this will be in the future. Either it turns out to be a foundational startup or a flash in the pan.

But at least it is not the 5000th so-called AI-powered SaaS company that is using the OpenAI API, has raised $20M+ from VCs, and is burning hundreds of thousands every month with little to no plan to generate revenue.

Will be watching this one closely, but highly skeptical of this company.


Hear hear, better to see someone go for broke trying something novel.

At best they advance the field massively, at worst the backers lose their money but the tech/knowledge finds a home elsewhere and the knowledge in the field is nudged forward.


Man, I am not a pessimist and I am very bullish on AI-the-field but my spidey sense is tingling that this is BS.

- It is written in a way that sacrifices legibility for supposed precision but because the terms used can't really be applied precisely, it's equivalent to spurious digits in a scientific calculation. The usual reason this occurs is to obfuscate or to overawe the audience.

- It is hard to overstate the difficulty of beating semiconductors with a wholly new branch of technology. They're so insanely good. People have been trying to beat them for decades and there's not even a solid theoretical thesis as to how to do so. Even the theoretical advantage of quantum computing is predicated on error correction being scalable, which is a totally open question even theoretically.


If room-temperature-stable bio-enhanced AI-specific-computer-powered chatbots don't seem like a realistic goal then maybe you should have clicked "play" on the linked spotify widget.


For me it's the dichotomy between how absolutely impenetrable the blog post is, combined with the "Set the tone fam, play 'Entropy' by Noizinski on Spotify :)" widget in the bottom right. Like they're trying to check every box on the engagement-farming list (something, to be sure, Beff Jezos is famous for).

Very bad vibes. Hire someone who can communicate, and demonstrate what you're building.


It feels like serious people would have said something more like "we are going to improve the performance (measured in seconds) of the algorithms/models such as X, Y, Z, which are used in a, b, c."

Can anyone name a company which used such absurd language to describe themselves and then actually delivered something valuable? There must be one.


I'm saddened to see the honorable name of Extropy and Extropianism, which carefully never descended to this level or anything like it, be stolen and captured by this nonsense.


Is this sarcasm? (Genuinely can't tell.)

And also, are you the real Eliezer?


No, not sarcasm, and I am Eliezer Yudkowsky. I was around on the old Extropians mailing list starting in 1996, and their leadership did not talk like this. Max More (the founder of Extropianism) was a careful thinker then, and I haven't heard anything different about him more recently than that.

"Extropy" is a term that was previously coined by a group of fairly nice people to describe themselves, and so far as I know is being stolen here without permission.


Oh wow, hello!

Seems like it was quite a time to be online. I mostly know of it through this version of events. Not sure how accurate you'd find it:

https://aiascendant.substack.com/p/extropias-children-chapte...

The word "extropy" itself seems to go back several decades before the mailing list, if I'm reading correctly here: https://en.wikipedia.org/wiki/Extropianism. Still, I wouldn't be surprised if many/most of the original mailing list members found this usage a corruption.


I've thought for a while that what quantum computing will probably deliver is not going to be magical infinite processing power, but extremely fast, computational access to parameterizable physical processes. That is, a rock can simulate being a rock better than a computer can, but how do you hook it up to the rest of your system? But while I can imagine replacing a simple MCMC model, for example, with a stack of physics-based chips, is there a path all the way to designing, training and executing something LLM sized on top of that technology? I'm not smart enough to know, but as esoteric as it sounds, it feels like it's drawing on the less speculative end of the spectrum, and seems like a noble effort and not an actual scam.


so far the only thing they’ve built is more posts


TBF that's not a bad place to be in the current hype cycle. Better than releasing and being permanently written off as yet-another-ChatGPT-wrapper.


I believe this link is communicating within the family of thought from which this blog post also comes:

https://knowm.org/thermodynamic-computing/

It's a random, unassuming 7-year-old blog post from a DARPA-funded and defense-involved inventor. They happen to work in neuromorphic computing. Their other posts talk about some of that work. A cynical take is that it can seem like just hand-wavey garbage, but then again, it's been quietly getting tons of defense contractor money.

I came across it years ago, and it has greatly accelerated my worldview, and has made me feel ahead of the curve in understanding what is going on in the universe. It's informed my community organizing. It's informed how I understand AI and consciousness and language, and the intersection of all these things.

I'm inclined to believe that the people in this area are clued into something very substantial about how the universe works.

EDIT: oops, shared the wrong link. This one is about thermodynamic evolution


Seems like their "passive" energy chips, which make use of the Josephson effect, are only gonna be targeted ($$$) at big organizations. But if they're targeting transistor technology for the masses, how will they have an advantage against the incumbents?


fund my new simulated annealing accelerator startup where we etch your model onto an aluminum flake and then hit it with a blowtorch


for all the hype around building alien tech, this is a bit underwhelming. the stuff from this startup feels more alien than what extropic is talking about - https://www.emergentia.tech/technology


Not only does this read like pure bullshit, it is bullshit on a website that crashes the Apple Vision Pro (and makes my laptop suffer).

My prediction is that they will raise a nine-figure sum over the next decade, and never release a product that comes close to the performance of an NVIDIA card today.


improve title pls


the engineering alone will be a nightmare


I know everyone is calling BS on this, and I am just a simple web developer so what do I know but there are at least two priors that make me think that what is discussed here could have some validity.

* The stochastic/random nature of processors is already used in cryptography for physically unclonable functions. Dunno if this has any practical uses in industry, and it is crypto, so it is probably also BS, but it is the same phenomenon you get if you log into your BIOS and turn off ECC on your RAM.

* The very first computer capable of MCMC was designed by von Neumann himself and used uranium as a source of randomness as part of the Manhattan project.

Anyway, semiconductors have never been my strong suit, but I guess this is more of an IP play than a consumer product business. Now let me get back to writing unit tests.


Obvious grifty nonsense.


smells like snake oil. will probably end up becoming a cryptocurrency scam? or some other grift? time will tell.

>Extropic is also building semiconductor devices that operate at room temperature to extend our reach to a larger market.

funny stuff


meh, lmk when they actually ship something that's not bs


Comments read like a confessional from out of the loop.


The litepaper discusses Extropic's mission to develop a novel hardware platform that harnesses the natural fluctuations of matter as a computational resource for Generative AI.

Key Points:

* The demand for computing power in AI is increasing exponentially, but Moore's Law is slowing down due to fundamental physical limitations of transistors at the atomic scale.

* Biology hosts more efficient computing circuitry than current human-made devices by leveraging intrinsic randomness in chemical reaction networks.

* Energy-Based Models (EBMs) are a potential solution, as they are optimal for modeling probability distributions and require minimal data. However, sampling from EBMs is difficult on digital hardware.

* Extropic is implementing EBMs directly as parameterized stochastic analog circuits, which can achieve orders-of-magnitude improvements in runtime and energy efficiency compared to digital computers.

* Extropic's first processors are nano-fabricated from aluminum and run at low temperatures where they are superconducting, using Josephson junctions for nonlinearity.

* Extropic is also developing semiconductor devices that operate at room temperature, sacrificing some energy efficiency for scalability and accessibility.

* A software layer is being built to compile abstract specifications of EBMs to the relevant hardware control language, enabling Extropic accelerators to run large programs.

---

Is this real or just theoretical?



