ASML and Samsung seal deal on 2nm chips (koreaherald.com)
145 points by ycdxvjp on Dec 16, 2023 | 91 comments



Everything over the last years tells me we are very close to the end game in chip production.

1. power stopped scaling

2. clock speeds stopped scaling - maybe 4x over 20 years?

3. node names became fiction.

4. EUV now differentiates the top from everyone else.

5. IPC improvements are a trickle.

6. Tons of non-moores law ideas are here:

A) GPUs

B) chiplets

C) stacked die

D) specialized accelerators

7. Governments are now involved in dividing up the tech, since it's finally mature / done.

It has been incredible to watch the progression over the last 50 years.


I used to agree with you until I saw this presentation by Jim Keller, one of the biggest names in chip architecture. It really illustrates how many more options we have to improve performance.

https://www.youtube.com/live/oIG9ztQw2Gc?si=vCBGGm0tkhG-VOyd


I like that one. Building a chip involves many different manufacturing steps, each of which has room for improvement. I liked how he even mentions that advances in sandpaper allow us to make better chips.


Depends what you consider as part of chip production. If you count new instructions and the related software as part of chip production, then the performance improvements have continued quite dramatically. Single-chip inference performance has improved about 1000x over the last 10 years according to Bill Dally from Nvidia [1], at around the 10-minute mark (probably not the original source). The main gains, according to the video, have come from different numeric types (fp16, for example) and more specialised hardware together with more complex instructions. Note that process improvements account for only about 2.5x of that 1000x. He also mentions that Google's TPU has no fundamental benefit over dedicated instructions in GPUs. Quite an interesting video IMO.

[1]: https://youtu.be/kLiwvnr4L80
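Rough arithmetic on that claim (my own back-of-the-envelope split, not slides from the talk): if the total is ~1000x and process contributed ~2.5x, everything else (numeric formats, specialised instructions, architecture) has to account for roughly 400x.

    # Back-of-the-envelope split of the claimed ~1000x single-chip inference gain.
    # Figures from the talk as quoted above; the decomposition is my own.
    total_gain = 1000
    process_gain = 2.5

    other_gain = total_gain / process_gain
    print(f"implied gain from numerics/instructions/architecture: ~{other_gain:.0f}x")
    # ~400x: e.g. fp32 -> fp16/int8 datapaths, tensor-style matrix instructions,
    # sparsity, better data movement, etc.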


So going from 65nm to 3nm only gave us 2.5x speed? If by speed we mean only frequency, that might be true. But speed also depends on IPC, and having more transistors enables higher IPC.


Ah yes, thanks! Makes sense. So more transistors allow for more complex instructions, which allow for more effective work per cycle (such as SIMD).
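A loose illustration of that idea (my own sketch, nothing from this thread): a vectorised call does many element-wise operations per line of your code, which is the same spirit as wide SIMD units doing more work per instruction. On most platforms numpy dispatches this to SIMD kernels under the hood.

    # Loose illustration: one "vectorised" call does the work of many
    # scalar iterations; numpy typically dispatches it to SIMD kernels.
    import numpy as np

    n = 1_000_000
    a = np.random.rand(n)
    b = np.random.rand(n)

    # Scalar-style: one add per Python-level iteration.
    out_scalar = [a[i] + b[i] for i in range(n)]

    # Vectorised: one call, many element-wise adds under the hood.
    out_vec = a + b

    assert np.allclose(out_scalar, out_vec)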


Dennard scaling stopped years ago. There's definitely room at the bottom for different topologies, probably 5-10 generations' worth. But! These are probably 5-year steps, not two-year steps. Also, there's just as much room for improving HW design as there is in peeling off SW abstraction layers above the HW.
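For anyone who wants the back-of-the-envelope version of what "Dennard scaling stopped" means (a rough sketch with normalised numbers, not measurements): shrinking dimensions, voltage and current together kept power density flat; once voltage stopped scaling, it didn't.

    # Classic Dennard scaling, normalised illustrative numbers only.
    # Shrink linear dimensions, voltage and current by 1/k each node;
    # power per transistor (~V*I) then falls by 1/k^2, matching the
    # 1/k^2 drop in area, so power density stays constant.

    def dennard_step(v, i, area, k=1.4):   # k ~ sqrt(2) per node (assumption)
        v, i, area = v / k, i / k, area / k ** 2
        return v, i, area, (v * i) / area  # last value = power density

    v, i, area = 1.0, 1.0, 1.0
    for node in range(3):
        v, i, area, density = dennard_step(v, i, area)
        print(f"node {node + 1}: power density = {density:.2f}")  # stays ~1.00

    # Post-Dennard reality: supply voltage is stuck near ~0.7-1.0 V, so
    # shrinking area without shrinking V pushes power density up instead,
    # which is why clocks flatlined and "dark silicon" became a thing.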


I'm guessing we're not done, we're just reaching the endgame with silicon dies. Next steps will be new tech, often with a long lead time before it's developed to the point of being competitive with silicon. Things like integrated photonics (or a completely photonic CPU), quantum computers, or diamond (or other materials) instead of silicon.

Are we not still getting exponential growth in capability vs time?


It's impossible for growth to be exponential indefinitely. Eventually it'll end up as a sigmoid curve.

Secondly, progress isn't smooth. It comes in phases and leaps, and it's possible for it to stagnate for a long time.


Certainly. We're near the top of the computational capacity for silicon chips. However, the computational capacity of a cubic centimeter of our universe is mind-bogglingly enormous; what we can do now is a drop of water in the ocean (or much, much smaller). There are lots of other ways to compute, and many of them are in active development.

Folks have been saying exponential growth is over for many decades now. It may have slowed in some ways, or growth might be going in different directions, but it's still growth, and I see no signs that the end is near or that we're close to a major long-term slowdown.


I think we'll start with a cubic cm of silicon before space.

However, if you like the topic and want to read some fiction involving it, try the Jean le Flambeur trilogy by Hannu Rajaniemi.


I think we are only limited by the amount of matter and energy in the Universe. We are far from reaching any limit.


Dead wrong. You are limited by the speed of causality. Even if the universe is infinite, your growth would cease to be exponential, since the reachable radius grows only linearly with time and the reachable volume only with its cube.

At 5% growth per year, two thousand years later you'd need to colonize thousands of observable universes. The observable universe is about 90 billion light-years across.

And no, time dilation won't save you unless you think of dumb ideas like cloning yourself so that the humanity that spreads is considered "you". Otherwise you will only get to go to a single planet.
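Quick back-of-the-envelope on the parent's point (my own numbers, nothing rigorous): 5%/year compounding over 2000 years demands a factor of roughly 10^42, while the volume you can physically reach only grows with the cube of elapsed time.

    # Back-of-the-envelope: exponential demand vs cubic light-cone supply.
    import math

    years = 2000
    demanded = 1.05 ** years          # growth factor demanded by 5%/yr compounding
    reachable = years ** 3            # reachable volume grows ~ t^3 (radius ~ c*t)

    print(f"exponential demand: ~1e{math.log10(demanded):.0f}")   # ~1e42
    print(f"cubic supply:       ~1e{math.log10(reachable):.0f}")  # ~1e10
    # The exponential outruns the light cone by ~30 orders of magnitude,
    # so no matter what you compute with, growth can't stay exponential.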


I think there has been a lot of research into using other kinds of materials besides silicon for making chips. One of them is carbon. I've even heard about optical chips, using light instead of electricity.


We have a long way to go before we reach the end of the road in chip design. The real sore spot moving forward is the grind it's going to be to improve price/performance ratios.

Increasingly I'm adjusting by being willing to spend more than I would previously consider reasonable on big ticket tech purchases, rather than waiting for prices to drop.


I think us software types may be more sad about the end of the "free lunch" than the buying public. The 90s in particular had amazing performance increases, but you also needed to buy a new computer (and they were more expensive then) every ~3 years, or upgrade roughly every 18 months if you wanted to stay near the top.

Anyway, my optimization skills will stay in demand :)


I am just an average dev working in python and occasionally c++, but I'd like to improve my skills and learn more about optimization. Would you be able to point me to a structured resource (book, course) where I can learn more about optimization?


I’m not into hardware at all. But my friend is. He works on niche application chip firmware. What he tells me boggles my mind. And I think there’s a ton that can be improved on that side for sure. The whole tool stack is so arcane. Most people working in this field are older and conservative. No one wants to change anything.


Yeah, well. I'm a software guy, not a hardware person. But imagine if the hardware guys were all about "move fast and break things". It's amazing how much space the hardware people (ASICs) have given us to experiment and explore on their incredibly robust platforms.

Maybe some of them needed to be mature and conservative.


I'm not advocating for the other extreme. But the things he told me could certainly be improved a lot without breaking things. In fact, things are breaking already. I'm talking even basic things like tools. He tells me many devs have no intellisense of any sort, which leads to a slow dev pace and bugs. Poor CI setups. Papering over old languages with codegen, but not doing it properly through an AST, literally just string concatenation in Python. That leads to lots of hard-to-track bugs. Lots of examples like this.


Every time someone says we've hit peak microprocessing, we in fact have not hit peak microprocessing.


That can be true while still approaching an asymptote. Unless there is a switch in technology (optical, different materials, whatever), I don't expect much for the time being, at least in CPUs and especially in single-core performance. GPUs and other massively parallel devices are easier to improve with 3D techniques and such.


Point 3 always makes me unreasonably (or perhaps reasonably) angry.


We aren't.

There are materials which could make CPUs e.g. 1000x faster, but their lifespan would be a year or two and the cost would be 10,000x.

>Tons of non-moores law ideas are here:

What?


>> >Tons of non-moores law ideas are here:

What?

Those were all things that don't affect transistor scaling. Well some depend on having high density chips, but they don't enable denser chips. Chiplets and stacked die are the things being done because transistor scaling is near its limit.

15 years ago people were looking at those things but they didn't make sense because we could just move to the next node. So they were deemed future possibilities for when scaling wouldn't be cheap and easy. They are no longer "future" tech, they are present day. Meaning scaling is about over, and we are pulling out all the stops on other ways to go fast.


Just to head off the inevitable comments about how 2nm isn't really 2nm: yes, we know, it's just a convention in the industry to indicate relative performance. It's basically a marketing term, more than anything else.

The numbers VLSI engineers actually care about are things like millions of transistors per square millimeter (MTr/mm^2), power per bit in picojoules (pJ), transistor rise times in nanoseconds, etc...

In those metrics, steady gains have been made, it's just that the actual component sizes haven't matched up to the gains recently. Instead, new types of technologies like "gate-all-around" have been used to eke out more performance instead of simply shrinking everything proportionally.


I really hope they can all agree on a better number soon. It's ridiculous having to look up every time which TSMC node Intel's 10nm actually corresponds to, and all of that. There have been some contenders, but the lure of saying "hey, I have the smallest number, so my process is best!" is just too strong, it seems.


Have you considered that things are complicated enough now it can't be boiled down to single number comparisons?


Sure, but I know that there have been various proposals for a better number. Different companies just couldn't decide on which proposal they like the best.


Since Intel changed their node names you can roughly assume that for a given node TSMC > Intel > Samsung.

This might change in the future if Intel gets their shit together but I'm not buying their stock yet, they look like the Boeing of microchips to me.


>I really hope they can all decide on a better number which everyone agrees on soon.

How about 2nm, 2nm Ultra, 2nm Ultra Pro and 2nm Ultra Pro Max.

Or, like Intel used to name iterations of their 14nm technology: 14nm, 14nm+, 14nm++, 14nm+++ and 14nm++++.


Only CPP (contacted poly pitch) and MMP (minimum metal pitch) really matter. Different foundries offer slightly different numbers for either; usually if they're better at one, they're slightly worse at the other.

This was interesting, though I don't think it's been standardised: https://spectrum.ieee.org/a-better-way-to-measure-progress-i.... It measures a node by the densities of logic, memory, and interconnect together, which is pretty well balanced.
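For anyone curious how pitches like CPP and MMP turn into the "MTr/mm^2" figures you see quoted, here's a crude estimate. All the numbers below are illustrative assumptions of mine, not any foundry's published specs, and real density metrics (like the weighted NAND2 + scan flip-flop metric Intel proposed) are fancier:

    # Crude logic-density estimate from cell pitches. Illustrative numbers
    # only, not any foundry's published figures.
    cpp_nm = 51       # contacted poly pitch (assumed)
    mmp_nm = 30       # minimum metal pitch (assumed)
    tracks = 6        # standard-cell track height (assumed)

    cell_height_nm = tracks * mmp_nm     # cell height = tracks * metal pitch
    nand2_width_nm = 3 * cpp_nm          # a 2-input NAND is roughly 3 poly pitches wide
    nand2_area_nm2 = cell_height_nm * nand2_width_nm
    transistors_per_nand2 = 4

    density = transistors_per_nand2 / nand2_area_nm2 * 1e12   # per nm^2 -> per mm^2
    print(f"~{density / 1e6:.0f} MTr/mm^2")                    # ~145 with these inputs
    # The pitch product dominates: shrink CPP * MMP and density scales with it,
    # which is why those two numbers are the ones that "really matter".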


Just for size reference, the SARS-CoV-2 (covid) virus is 100nm in diameter.

A banana is 200,000,000nm, just in case you were wondering.


Just for completeness, “2nm” is more like 12-45nm actual transistor size. So on average about 5 of these transistors could fit inside a single SARS-CoV-2 virion.


Interesting - 100 nano-bananas - I didn't realize these nominal node sizes had gotten quite so far from actual measurements!


So it's not the vaccine, but Covid itself, that is going to deploy the 5G chips into people's bodies?


It’s insane how small transistors are now. Technically you could have a full 2 bit adder inside a single virion within a generation.


Well, so instead of giving money to Intel, you go get infected by your virus of choice. You still have to manage heat, though.

I wonder how you install software, or does it come in ROM?


That's a lot of banana chips.


Bananas are radioactive, so better keep them away from the chips.


Just read this in Chip War (p. 333) by Chris Miller:

> Gelsinger has cut a deal with ASML to let Intel acquire the first next-generation EUV machine, which is expected to be ready in 2025. If Intel can learn how to use these new tools before rivals, it could provide a technological edge.

Is the book talking about the same tech or different? If the same, does it mean that Intel only gets a year or two of headstart?


Surely ASML would prefer that Samsung and Intel could keep up with TSMC on manufacturing the latest and greatest CPU/GPU/SoCs. So not hard to believe that they'll sell the first next gen machines to Samsung.


I think ASML will be happy to sell to whoever wants to buy. The more, the better.


Exactly, they don't want a runaway winner among their customers.


> ASML is expected to supply 10 units of the High-NA EUV equipment to the market next year, and Intel has reportedly secured six of them

ouch


I believe that's just the pilot (i.e. training) version, with the full version coming a bit later.


A few days ago it was reported that Chinese companies have started producing 5nm chips (the Kirin 9006C), many years earlier than the US government expected they would be able to. With this insane progress I wouldn't be surprised if Chinese companies start mass-producing 2nm chips earlier than anyone else.


That didn't use EUV and wasn't cost-effective; it doesn't mean much of anything.


I would expect another decade until China is on par with the West, and another few years on top of that before they overtake it.


Atomic radius (average distance from the nucleus to the outermost electron of an isolated atom): copper 135pm, silicon 111pm.

2nm is 7 copper atoms or 9 silicon atoms.
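Quick arithmetic behind those counts, using the radii above (rough, since real lattice spacings differ from isolated-atom radii):

    # How many atom diameters fit across 2 nm, using the radii quoted above.
    # Rough counts; real crystal lattice spacings are somewhat different.
    length_pm = 2000        # 2 nm in picometres
    r_cu_pm = 135           # copper atomic radius
    r_si_pm = 111           # silicon atomic radius

    print(f"copper:  ~{length_pm / (2 * r_cu_pm):.1f} atoms")   # ~7.4
    print(f"silicon: ~{length_pm / (2 * r_si_pm):.1f} atoms")   # ~9.0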


And for our next trick ... patterning to drive transistor switching rather than just transistor etching!



Maybe we can construct transistors using subatomic particles. If only we can convince those quarks and bosons to do calculation for us...


Does anyone here know what the benefits are of going from, say, 4 or 3nm to 2nm? ASML mostly talks about energy cost per function, but it's not as clear-cut as the transitions in the 2000s were, if I'm not mistaken.


Concisely, and glossing over a lot: smaller transistors consume less power. Using smaller transistors, you can get the same performance at a lower power budget, or more performance at the same power budget.


You get less dynamic power loss. Static power hasn’t been as forgiving for a while now though.
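The textbook relation behind that, with made-up illustrative numbers (this is the generic CMOS formula, not anything specific to these nodes): dynamic power scales with switched capacitance, frequency, and the square of supply voltage, so even a modest voltage drop pays off quadratically; leakage follows different physics, which is why static power stopped being forgiving.

    # Textbook CMOS dynamic power: P_dyn ~ alpha * C * V^2 * f.
    # Numbers below are illustrative, not tied to any particular process.
    def p_dyn(alpha, c_farads, v_volts, f_hz):
        return alpha * c_farads * v_volts ** 2 * f_hz

    old = p_dyn(alpha=0.1, c_farads=1.0e-9, v_volts=0.90, f_hz=3e9)
    new = p_dyn(alpha=0.1, c_farads=0.8e-9, v_volts=0.75, f_hz=3e9)  # smaller C, lower V

    print(f"dynamic power ratio: {new / old:.2f}")   # ~0.56, mostly from the V^2 term
    # Static (leakage) power depends on threshold voltage and gate/channel
    # physics instead, and it hasn't scaled nearly as nicely.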


How far does a $762 million deal get you? Doesn't seem like such a huge sum for priority access to next gen 2nm machines, but also hard to judge as a layperson...


Just based on some price familiarity with previous generations and extrapolating forward, that does seem like a contract for one machine with a gold-plated service contract or two machines with a good service contract. It seems to me more like a pilot program, but Samsung is a very significant player in semi so idk... I'm not familiar enough with this industry. It could just be because it's very early in the generation.


I like how users talk about hundreds of millions or billions like it's just small change for them.


Lmao that's only like the price of 2 of the new high NA EUV machines. I believe they're slated to be 350-400M€ ea.


> How far does a $762 million deal get you?

Oh, you know, a billion here, a billion there, who counts anymore, right? (The other posters already answered this one; seems like two top-of-the-line machines?)

Anyways... the numbers being invested at the top of this game are completely nuts. A future like Cyberpunk 2077's doesn't seem that crazy anymore! It begins with metaverses, Neuralinks, LLMs, hundreds of billions of spending in the hardware/silicon business (boosted by AI demand) and ends in, well, play Cyberpunk. They may even get the year right.


ASML also recently shipped a high-NA (0.55) EUV machine to Intel. This improves resolution from 13nm to 8nm.
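That jump follows from the Rayleigh criterion, CD = k1 * lambda / NA, with EUV's 13.5nm wavelength. The k1 value below is an assumption of mine to make the numbers line up, not an ASML figure:

    # Rayleigh criterion for minimum printable feature: CD = k1 * lambda / NA.
    wavelength_nm = 13.5     # EUV wavelength
    k1 = 0.3                 # process factor, assumed for illustration

    for na in (0.33, 0.55):  # current EUV optics vs high-NA EUV
        cd = k1 * wavelength_nm / na
        print(f"NA {na}: ~{cd:.0f} nm")
    # ~12 nm and ~7 nm, in line with the 13nm -> 8nm figures quoted above
    # (the exact values shift with whatever k1 the process achieves).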


Samsung still has to solve yield and capacity problems. It wasn't only the process node advantage that made customers form lines in front of TSMC's shop.


I wonder why Nvidia doesn't establish its own fab business. It might be a good fit for them.


AFAIK the costs of producing semiconductors on a leading-edge node are so high that you are forced to aggregate demand across several chip designers. Intel offering foundry services is evidence of this dynamic.

Edit: grammar.


True, but aside from the internal demand, I don't see why they can't get external customers.

I think Samsung did the same.


Economics, I'm guessing. They've had enough money floating around recently that they would've done it by now if it made 'em more money.

Their pockets are flush from crypto and now AI.


I assume they don't want to lose focus on software, which is currently their moat.


Is there a good comparison of how each fab's nanometer metric (aka version number) lines up?


Nigel Tufnel, chief scientist at TSMC, explains the merits of the latest hardware from ASML:

"This is a fab, but it's very special because if you can see, the numbers all go down to 2nm. Look, right across the board. 2nm, 2nm, 2nm, 2nm ..."

"And most of the fabs go down to 3."

"Exactly."

"Does that mean it's ... smaller? Is it any smaller?"

"Well, it's one smaller, isn't it? It's not 3. You see, most fabs are going to be running at 3. You're on 3 on your fab -- where can you go from there? Where?"

"I dunno."

"Nowhere, exactly. And what we do is, if we need that extra push over the cliff, you know what we do?"

"Put it down to 2."

"2. Exactly. One smaller."


Nice Spinal Tap parody...

[1] https://en.wikipedia.org/wiki/Up_to_eleven


Thanks to both! I didn't know about it, so I needed the clarification. The scene: https://youtu.be/4xgx4k83zzc


I know this is a joke, but TSMC is already talking about 1.4nm processes by 2027-2028:

https://www.tomshardware.com/tech-industry/manufacturing/tsm...

It really looks like the current tech could take us to sub 1nm.


The point is that the number of “nanometers” has precious little physical significance anymore. It’s just a version number that counts downwards.


I know you jest, but the numbers being so small makes a full decrement all the more impressive; 3 -> 2 is proportionally equivalent to 6 -> 4, 9 -> 6, 12 -> 8, ...


You joke but it does seem like we’re near the end of the line. We can go from 3 nm to 2 nm to 1 nm, but we can’t go negative nanometers. That’s it.


There's no (natural) law that prescribes that integers must be used, is there? Also, from what I understand, the nanometer count is a bit symbolic, less literal.

EDIT: for instance, IMEC talks about a "sub-1nm" process, i.e. 0.7nm (https://www.tomshardware.com/news/imec-reveals-sub-1nm-trans...)


Going from 1 nm to 0.7 nm is hardly as impressive as going from 3 nm to 2 nm. It’s only 0.3 nm less!


(relative) size (still) matters!


They are joking too


Next up is picometers, femtometers, attometers, zeptometers, yoctometers, rontometers, and quectometers (not to be confused with quesometers). They can just keep going until they hit the Planck length as they continue to divorce their marketing from reality.


Those units sound made up. Nobody will believe them.


They're all made up.


Not too long ago nanometer was made up.


Intel has gone with ångstroms.


And if we run out of prefixes we will add new ones.


Not just # of atoms?


0.5nm, 0.25nm, 0.125nm ...


> Nigel Tufnel

Deep cut


"Nigel gave me a drawing that said 2nm. Your understanding of pitch width and packaging technology is not my problem. I do what I'm told."


Dude, I almost spilled my coffee xD



