Intel is all-in on backside power delivery (ieee.org)
345 points by mfiguiere on June 11, 2023 | 157 comments


I thought the Anandtech coverage did a bit better job of filling in both the background & details. https://www.anandtech.com/show/18894/intel-details-powervia-...


Thanks. Very helpful.

One of the wild things to me is how incredibly elaborate chipmaking has become over the years. Per Wikipedia, the 6502's layout was made with "a very manual process done with color pencils and vellum paper". That's the processor that launched the personal computing revolution, powering the Apple II, Commodore PET and VIC-20, the Acorn, and the BBC Micro. Nearly 50 years later, things have gotten so fiendishly complex that "flip it over and put some stuff on the back" is a major industry change requiring who knows how many billions in R&D.


Chip making is the first industry that was automated by computers. Much like how steam engines were first used in coal mines.


The discussion around steam engines is generally that coal mines were the appropriate place because early steam engines were very inefficient and there wasn’t good transport, so they could only really go in places with a plentiful local supply of coal. (There is also a requirement that coal be in sufficient demand for the steam engines to be worth it. To some extent, because transport was bad, a lot of the price of coal was in the transport, so coal for a steam engine at the mine was effectively cheaper as it didn’t include that transport cost.)

I don’t really think chip making is much like that. I wouldn’t say that chip making was the first industry to be automated by computers either.


Not making nuclear weapons or banking?


"Chip making is automated."

Is not a true statement.


“Automated” != “Fully Automated”/“Autonomous”, IMO. I took one class on this years ago and didn’t pay a lot of attention, so disregard me if I’m super off base, but silicon design does rely heavily on computer-aided design, no? I think a more charitable reading of the parent in that light makes it an actually quite insightful comment.


I can't think of a single aspect of chipmaking that is not automated to some degree. Manufacturing of semiconductors is probably the most automated thing that humans have ever managed to do in the physical world.


I mean, steam engines didn't fully automate mining either, but I still get his point.


Chip making is in a really sad state of automation. There's lots of money poured into very ingenious products, but the major incumbents have no real incentive to standardize, so you still have unstructured file formats (SPICE) with different dialects that are veeery hard to parse correctly, you have ad-hoc APIs depending on vendor and software versions that all try to capture you into their ecosystem, and you have half-assed attempts at standards (à la xkcd/927) that you can't rely on.

And this sad state of affairs shows no sign of evolving favorably. Closed-source software and corporate interests at their finest.


Also just the state of highly sophisticated but niche software (ecosystems) in general. It usually winds up super messy and only just put together enough to function for the small number of users.


Why would better standards be “better”, beyond just platonic affection for more standard things?


Wow so we still design CPUs on silk screens by hand?


I'm writing this comment on a computer, not with pencil and paper. Does that mean it was automated?


Have you ever published anything without using a computer? Comparing that process versus this, I think it's obvious the answer is yes.


Somewhat, IMO. For example, did you use spellchecking or autocorrect (where ‘use’ may mean you spent a tiny bit less time/attention on spelling, trusting your spellchecker to add wiggly lines under words you might misspell, even if you don’t make any typos)?


Are you nailing it to a church door when you're done?


Does Backside Power Delivery mean the silicon area of the chip will be placed upside down on the motherboard to ensure efficient heat dissipation? (Since the power source will be on the opposite side compared to current day silicon)

P.s. in case you never heard of BPD technology, https://semiengineering.com/challenges-in-backside-power-del... is also informative (thanks @rektide for the Anand link!)

Edit:

Relevant quote from TFA which somewhat answers the above question:

> Of note, because the carrier wafer is on the signal side of the chip, this means it presents another layer of material between the transistors and the cooler. Intel’s techniques to improve heat transfer take this into account, but for PC enthusiasts accustomed to transistors at the top of their chip, this is going to be a significant change.

TL;DR: No.


Previously we were using flip chip where silicon was exposed. With backside power delivery, there's now wiring on both sides of the chip. The transistors are buried. The article discusses how debugging is harder since nothing is exposed, and how there's now more thermal resistance.

Intel's implementation has other factors, as discussed in the Anandtech article. Normally there's a fairly thick silicon base underneath the transistor layer, but Intel polishes that away to nearly nothing after the frontside signal wire layers are put down (to make hooking up the backside power easier). That greatly reduces structural strength & is a somewhat taxing process, so before doing the polishing down, they put a carrier wafer atop the frontside signal wire layers, adding some structural strength. And there it stays.

Which means the top of the chip now has a carrier wafer just for structure. Then the signal wire layers. Then the transistor layer with a bunch of PowerVias also in it. Then the backside power layers.

In the first sample chip, there are still quite a few signal layers: 14, vs 15 on other chips. Only down by 1, but with better chip utilization and other benefits (IR droop). The backside has 4 layers. Maybe backside power delivery could someday let them greatly cut down on the number of wire layers, which are all now between the transistors & the heatsink. But not yet. So there's a significant number of layers burying the transistors now.
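For a rough mental model, here's the stack order as described above, written top (heatsink side) to bottom as a little Python sketch. The layer counts are just the ones quoted in this thread, and the structure is heavily simplified, so treat it as illustrative only:

    # Simplified top-to-bottom stack for the PowerVia test chip, per the
    # numbers quoted above (not an official layer count; the heatsink sits
    # above the carrier wafer, the package bumps below the backside metal).
    powervia_stack = [
        ("carrier wafer (structural only)", 1),
        ("frontside signal metal layers", 14),   # vs ~15 on a comparable frontside-power chip
        ("transistor layer + PowerVias", 1),
        ("backside power delivery metal layers", 4),
    ]

    for name, layers in powervia_stack:
        print(f"{layers:>2}  {name}")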

Nonetheless, the thermals here looked fine.

It's a great article.


They must have a way to reach and debug the signal layers, at least in the lab?

If not, I would intuitively think this whole approach would fail?


Why do you think it's necessary to debug signal layers by physically connecting to them (other than the I/O pins of the chip)? The only time you would actually try to connect to a chip other than through the I/O pins is in an adversarial situation.

You might want to read this: https://en.wikipedia.org/wiki/Design_for_testing


It's not usually necessary, but they do in fact have this capability and do use it for testing and failure analysis (I think it is mostly used for debugging fabrication issues, where physically seeing the construction of the chip is important). The other amazing thing they can do is some level of rework: creating or removing connections to test a change or hypothesis without the expensive process of making a new mask for that layer. There's a video from intel's testing lab where the leader brags he can take a chip with a single failed transistor somewhere in it, and give you a picture of it.


Typically failed transistors are identified by the DFT scan chain. After the logic is netlisted (meaning, they create a giant list of flops, gates, and the wires that connect them), the DFT tooling adds additional inputs to each flop that let it be controlled by a sideband signal, as a giant shift register. So you can put the chip in DFT mode and set all the flops to a known pattern. Run the clock a few cycles and you know what state the chip should be in (because you can simulate it). So you scan that giant shift register back out and check for differences.

This process is automated, and for a given chip you might have a few gigabytes of stimulus total, which can collectively identify a single failed transistor (and kick that part out of the batch).
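As a rough software analogy of that scan flow (a toy Python sketch, not real DFT/ATPG tooling; the stand-in logic function, the bit-flip fault model, and all names here are made up):

    import random

    def logic(state):
        # Stand-in combinational logic: each flop's next value is itself XOR its neighbour.
        n = len(state)
        return [state[i] ^ state[(i + 1) % n] for i in range(n)]

    class ScanChainDevice:
        """A chip in DFT mode: all flops behave as one giant shift register."""
        def __init__(self, n_flops, faulty_flop=None):
            self.state = [0] * n_flops
            self.faulty_flop = faulty_flop     # index of a flop fed by a bad transistor, if any

        def scan_in(self, pattern):
            self.state = list(pattern)         # shift a known pattern into every flop

        def clock(self, cycles):
            for _ in range(cycles):
                self.state = logic(self.state)
                if self.faulty_flop is not None:
                    self.state[self.faulty_flop] ^= 1   # model the defect as a bit flip

        def scan_out(self):
            return list(self.state)            # shift the resulting state back out

    def scan_test(device, n_flops, cycles=4, seed=0):
        random.seed(seed)
        pattern = [random.randint(0, 1) for _ in range(n_flops)]
        golden = ScanChainDevice(n_flops)      # simulation of a known-good chip
        for chip in (golden, device):
            chip.scan_in(pattern)
            chip.clock(cycles)
        good, real = golden.scan_out(), device.scan_out()
        return [i for i in range(n_flops) if good[i] != real[i]]

    suspect = ScanChainDevice(64, faulty_flop=17)
    print(scan_test(suspect, 64))   # flop indices where scan-out disagrees with simulation

Real patterns come from ATPG tools run against the actual netlist, but the shape of the flow is the same: shift in, clock, shift out, diff against simulation.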

Now, I have no idea how you take a picture of a bad transistor. That's beyond me.


> I have no idea how you take a picture of a bad transistor.

Or why you would want to.


Pentium 2 cartridge, double-sided in 3, 2 ... 1...


> for PC enthusiasts accustomed to transistors at the top of their chip, this is going to be a significant change.

Seriously? The chip is inside a carrier but enthusiasts care which side the transistors are on? Absurd.


Ads are getting in the way on mobile. Don't know how sites like this are going to exist. For me personally, the content does not outweigh the ads.


I didn't see any ads.

Reading on my phone using Firefox with adblockers. Why aren't you using an adblocker?


Site looks great to me! Perhaps your platform has a way of installing a content blocker like ublock?


Same here, chrome on Android. Luckily the diagrams were still visible and that's all I really needed.


The site looks great on mobile Safari at least. Maybe your system has an adblocker available?


Installed Adblock plus. No change. What did you use?


Adblock Plus like many adblockers gets industry funding for a whitelist of “acceptable ads”. uBlock Origin, 1Blocker, NextDNS, ControlD, and sometimes AdGuard tend to block more. The DNS adblock options are particularly interesting as they can block ads in any app, similar to running PiHole or a hosts file, but in the cloud. There are also extensions to change the website to dark mode (Dark Reader), accept some but not all cookie prompts on your behalf (Consent-O-Matic) and block “Open in App” messages (Banish) or annoyances (Hush) or other annoying behaviours (StopTheMadness). Basically… you can have a browsing experience that’s nice and sane, and just the way you want it.


Thanks. Adguard worked


Ublock Origin on Kiwi browser for Android. Also NextDNS. Zero ads.


I use Firefox Focus as my content blocker on iOS, it works really well imo.


Hmm. I guess I use Firefox Focus. I’d actually forgotten that I’d set it up, and thought I was just using something built in to the OS.


AdGuard has always worked fine for me on iOS.


uBlock origin is the one you want


I don't see any ads there at all. Aren't you using an ad-blocker?


Don't ever allow ads by default. Use uMatrix.


Backside power delivery should help portables and maybe get some of these crazy desktop numbers down. If all goes as planned, Intel looks to have a 2-year jump on TSMC based on current timetables. But I'm worried about them adding nanosheet transistors around the same time. They got into problems before by being too ambitious, and ended up with a stalled roadmap for a very long time.


Assuming they deliver on the timetable they say they will, which Intel has a well-established track record of not doing. With Intel, over the last 5-10 years, I don't believe anything they say until there are chips in people's hands at a meaningful volume.


Gelsinger (re)introduced OKRs a couple of years ago. Everyone's being held accountable for their promises at all levels now. It's probably caused them to stop dreaming so much and get more conservative but predictable.


Nothing about Intel right now is conservative.

They're spending astronomical amounts of money on R&D and capex.

They're "all-in" on multiple strategic directions.


I don't think Intel's problem for the last 10+ years has been dreaming too much.

It's more to do with a deeply dysfunctional and arrogant organization from top to bottom due to many factors, one of them being having the market "cornered" for the previous ~10 years.


Here's to hoping a real engineer (Gelsinger) at the helm will make the difference.


It is hard to nail down exactly when things started going poorly for Intel, but they definitely weren’t going well under Krzanich (engineering background), and I have trouble blaming Swan given the dysfunctional company he inherited. If Gelsinger pulls it all together, IMO Swan deserves some credit for laying a good foundation.

Intel has put out some interesting chips in the last couple of years. Anyway, their dominant years set a pretty high bar; I don’t think we’ll see that again (I guess the aliens decided to start spreading around the technology they send).


I personally don't think that Swan and Krzanich did incredibly stupid things; I think it's just very hard to run a billion dollar corporation. A few mistakes, a few unlucky breaks, and a few bad decisions - yes, they made some - can really turn your fortunes bad fast. Having run a small business myself (two, actually), you realize how much a role luck, fortune, and randomness plays in your success and failure.

That said, if there's a person built for this job, Gelsinger is him. Life's better when this industry has competition.


> IMO Swan deserves some credit for laying a good foundation.

Absolutely, yes. I was surprised how well he did during his time as CEO, considering he is a finance and MBA guy.


> adding nanosheet transistors

This may be to their benefit, as it gives them another 5 years to say “oh we’ll be ahead soon” without actually releasing anything.


Reading stuff like this makes some part of me really happy. I'll probably never even get into this part of the industry, but reading about breakthroughs like this always just feels really amazing.


I worked at Intel for 3 years and I felt similarly. I was in a group writing software to help the R&D physicists figure out the kinks of the low-level devices (next-gen transistors) way before going to silicon. The day-to-day work was relatively mundane, but talking to these people about their problems and needs made me feel like I was doing my little bit to move humankind forward.


I can imagine, you are helping in something so central to the supply chain, you know what you do will last in impact (even if the code doesn’t last)


Soon: "Doing my little bit to move AI-kind forward."


Heh, I remember when this was the norm: Samsung, TSMC, and GloFo waiting for Intel to work out the kinks and tooling before implementing it themselves.


To be fair, most of that research happens pre-competitively (cf. the imec roadmap). That includes working with tool makers to work out the kinks. Of course there is a big hurdle from there to an actual product-scale deployment.


(I don't understand why sibling comment was downvoted so much with no comment.)


I think comments on this site start greying out with just 1-2 downvotes, and we can’t see each other’s scores, right? So it is possible that just a couple people didn’t like it for random/obscure reasons.

It appears to be positive now, or at least it isn’t greyed out anymore. I wouldn’t wonder too much about these temporarily downvoted comments; they usually bounce back pretty quickly if they are any good.


Would it be the case that lower-grade or even waste silicon blanks could be used for the power component?

Can a lower level of 'scale' be used to make the power lines, and so give more assured/fault-tolerant production?

If it has to lock-step with the generational burdens of masking and technology, and is unable to capitalise on otherwise unused silicon, it doubles costs in those inputs. That doesn't mean it can't deliver both an improved power budget and better signals (less interference); it might even be capable of becoming a lower-cost option, if the bonding/positioning/lapping processes aren't too expensive and its input costs and consequences for wiring-plane costs are better than the alternatives to boot!


Wafers are cheap. Bonding is not. And grinding is expensive. (Just ask Harris and Burr-Brown, who introduced this step ages ago for their processes requiring dielectric isolation.) So if that's cheaper than a few EUV masks, that says a lot about EUV!


The silicon is not a critical cost here at all.


I know this might break the rules a little but can we all just agree on one thing: Chip Making technology is fucking cool.


Apparently chip-making uses (or used?) Chlorine trifluoride - the stuff that can set asbestos or sand on fire - to clean chemical vapour deposition chambers.


It sets ashes on fire. The only way to stop a ClF3 fire is to wait it out. Shit is just unstoppable. Reminds me of FOOF. In a bad way.


https://interestingengineering.com/science/chlorine-trifluor...

> ”is, of course, extremely toxic, but that’s the least of the problem. It is hypergolic with every known fuel, and so rapidly hypergolic that no ignition delay has ever been measured. It is also hypergolic with such things as cloth, wood, and test engineers, not to mention asbestos, sand, and water - with which it reacts explosively." He continues, "It can be kept in some of the ordinary structural metals - steel, copper, aluminum, etc. - because of the formation of a thin film of insoluble metal fluoride which protects the bulk of the metal, just as the invisible coat of oxide on aluminum keeps it from burning up in the atmosphere. If, however, this coat is melted or scrubbed off, and has no chance to reform, the operator is confronted with the problem of coping with a metal-fluorine fire. For dealing with this situation, I have always recommended a good pair of running shoes.”


I don't have the link, but Derek Lowe did a great In The Pipeline on this stuff.



Thanks!

I was on my phone, and realized I hadn’t bookmarked it.

I like all the stuff he did in that series, so the second link is great!


And I enjoyed this one, almost as much as the OP: https://www.science.org/content/blog-post/things-i-won-t-wor...

This is my favorite quote:

> And that's at room temperature. At seven hundred freaking degrees, fluorine starts to dissociate into monoatomic radicals, thereby losing its gentle and forgiving nature.


Just FYI, for some reason, the embedded video in this post[0] does not work.

Here it is on YouTube[1].

[0] https://www.science.org/content/blog-post/chlorine-trifluori...

[1] https://www.youtube.com/watch?v=M4l56AfUTnQ&t=1s


The fire diamond on Wikipedia says 4 0 4 or: deadly, explosive, NOT flammable. Weird stuff!


It's not flammable. Check the white part (OX ~W~). It's an oxidizer (that reacts with water).

It's the same with pure oxygen. Oxygen is not flammable; it's not a fuel source. For example, pure hydrogen can be kept in a bottle indefinitely. But if you put something even a bit reactive - like a rusted piece of iron - in pure O2, it will burn like a candle.

ClF3 makes pure O2 look like water. I.e. whatever you put in, chances are it's going to burn and make HF/HCl as a byproduct. Good luck.


Yeah I said not flammable (sorry in all caps, which may have looked like an acronym). That is what makes it interesting. It sounds like "oxygen on steroids" might be a way to describe it. I wonder... can it burn oxygen (with oxygen being the "fuel")?


Not as such, it seems; neither has an electron it can donate. In fact, under normal conditions ClF3 will dissociate into ClF + F2, which are also strong oxidizers on their own.

Maybe under special conditions they would react, but it seems they aren't able to rip each other's electrons.


That SEM cross section image is downright beautiful, IMO.


Hopefully this will be the kick in the backside that Intel needs to start leading again.


Intel, just like Boeing and, to a lesser extent, many other F500 companies, appears to be fully financialized at this point. Their executives probably wake up each day and think about how much stock to buy back and nothing else.

Do they even remember what it is the company used to do for a living?


Yeah, that's why back in Jan 2021 Intel ousted their CEO Bob Swan, who was a finance guy running Intel into the ground.

The replacement CEO, Pat Gelsinger, has an engineering background and looks to be fixing things properly:

https://en.wikipedia.org/wiki/Pat_Gelsinger

The Intel board looks to have recognised and addressed the financialisation problem properly for once. Pretty rare for a board.

Pity the boards of IBM, HP (etc) haven't been as capable.


Well, that's missing a rather important part of the picture. The real culprit is Brian Krzanich, who, with approval from the board I assume, completely destroyed the company. The cardinal sin was being overly aggressive on the process technology, which completely failed and left Intel stuck on 14nm for way too many years. They used an "inappropriate work relationship" as the reason to kick him out, but that obviously wasn't the real reason.

Bob Swan clearly wasn't the guy who could fix all this. Pat finally got the job he always wanted (he left for VMware because he didn't get it originally), but Intel is a ship that's hard to turn (a floating iceberg might be a more fitting analogy).

I'm rooting for Intel; I have friends there and competition is good for us consumers. Also, Intel is a friend of open source.


Ahhh, thanks. I'd wondered whether the CEO prior to Bob Swan did much of the damage, as Intel were known to have fucked up the process side for years by that point.

Sounds like a clear Yes then.

That being said, Bob Swan was Intel's CFO from 2016, prior to becoming CEO. So he was still involved during the later part of Brian Krzanich's tenure.


It's more than process problems though, and really more than BK too. Intel was the proverbial termite-riddled house by the 2010s on multiple levels.

BK was COO from 2012 and CEO from 2013, and intel was already starting to spin its wheels by that point on stuff like modems and atom and wireless and lacked a proper strategic direction. It's hard to know how much internal jank was in the architectures back then but it probably wasn't insignificant, after all Core M ties back to Pentium III and P6/Pentium Pro at least.

He certainly didn't help anything, but the organization that produced BK (he started as an engineer) and put him in the boardroom wasn't going to pick Lisa Su or Jensen Huang as plan B. The organizational forces that gave us BK would have put another suit in the chair and applied the same pressures to them; the problem with historical counterfactuals is always that these forces really matter more than specific individuals being in the chair, in most cases.

People forget, he was literally a process engineer by trade too, it's not like he came in as a beancounter. That was all just natural pressures of the market.

On the other hand, if you count out 5 years from when he became CEO... that's around the time the problems started with 14nm (Broadwell struggled to be born) and the point where uarch performance progression really stalled out etc. And of course 14nm was followed by 10nm and the interminable delays.

But in hindsight a lot of the delays appear to have been "termite problems", yes the process was a mess but the IP teams couldn't get their shit together either, and that's why server products have been running 2+ years behind schedule, Alder Lake has its AVX-512 disabled, Meteor Lake is not happening on desktop, and 2.5GbE is going back for its... sixth stepping? Those teams are underperforming and it has nothing to do with 10nm delays.

I realize 2015-2017 is when shit started really hitting the fan but like, unless BK walked in on day 1 as CEO and was like "alright boys we're making Broadwell shitty and giving Skylake-X the worst AVX-512 implementation known to man" it's not entirely his fault either, just the termite rot was still not structural yet. Both the fab teams and IP teams were having visible problems already not too long after he took the chair.

He's not a great CEO by any means, and he actively made things worse over his tenure but... it's kinda hard to believe that he just actively made Intel shit in 4 years as CEO all by himself. They had to have had problems already, and some things like Pentium 4 and Meltdown (which goes all the way back to P6) point to that. But moore's law was the great equalizer back then... just right the ship and catch up on the next node and you'll be fine. Nodes are an active problem right now and it requires advanced packaging that is placing more emphasis on the architecture to cater around that. Things are just a lot harder now.


> Alder Lake has its AVX-512 disabled

Alder Lake has AVX-512 disabled because 512-bit data paths on Atom cores don't make sense and Microsoft couldn't execute on an IHV-specific scheduling change so quickly. AVX2+ and later Windows scheduler changes will take care of this. Just like P-states, Intel now has hardware scheduling hints for the OS as well.

> Meteor Lake is not happening on desktop

Because arrow lake is tracking closely within 6 months and MTL/LNL are focused on the platform power efficiency.

> Those teams are underperforming and it has nothing to do with 10nm delays.

It very much has to do with process nodes as well, with tight coupling of process to chip area (yield, thermals) and number of transistors.


AVX-512 on atom is cool, KNL was a good idea.

I get that they aren’t making these chips for me, but Alder lake with AVX-512 would have been so cool, it would be like having a Phi in the same package as your main cores.

I’m not sure what exactly killed the Phi, but not having to talk through PCIe might have given them a chance to keep up with the inevitable march of NVIDIA Tesla bandwidth improvements.


Intel was this way, but IMO Pat Gelsinger has turned it around — as per this, their surprisingly competitive GPU perf, and hopefully Intel Foundry Services.

IDK if it's enough to call it a Satya Nadella-like CEO transition, but I've been pretty impressed watching it happen.


I went with the A770 for my new desktop this year and couldn't be happier.

They did one thing in their first gen that ATI/AMD and Nvidia have always completely failed to do: release 1st-party open source Linux drivers that just work without any fuss.

I see no reason to ever even look at an Nvidia or AMD card again, for my own purposes, assuming Intel keeps releasing GPUs. It's laughable how bad their drivers have always been. And Nvidia's pricing is a disgrace.


Aren't AMD drivers open source for linux?


No. Not the fully featured ones.

Also, the way they've dragged their feet on this for decades, and still do, doesn't inspire confidence.


When I buy my next laptop or video card, I'll have a look at Intel GPUs and the Linux driver situation.

On my laptop I have disabled the Nvidia GPU on Linux because of the drivers, and use the AMD integrated GPU. I hope Intel will prioritise open source GPU drivers in the future; if they do that sincerely it will win goodwill from me and hopefully from the community as well.


The open source ones should be fine for a laptop APU. Certainly a lot better than whatever the hell Nvidia think they're doing...

But yeah, for fancier features like OpenCL, Vulkan, and raytracing, you'd have to use the amdgpu "pro" drivers, which include many proprietary parts.

Intel are definitely committed to providing open source Linux drivers for the GPUs, at least based on past behaviour. They've had open source drivers for their iGPUs pretty much since the beginning (2010), IIRC.

My last two desktops actually didn't have dGPUs at all because I couldn't be bothered, and I'm not much of a gamer. But now GPUs are too important for many other things, so I'm very happy that Intel got into it. And they didn't really disappoint with drivers this time either, though it did take some time to get things sorted, mostly because they had a lot of driver tech debt to work through on all platforms. So drivers have been shaky all around, but now they're quite stable.


You don’t need to use the pro drivers for Vulkan; in fact the open source RADV driver typically has better gaming performance and compatibility than the pro package.


If you are on Linux, you are happy that the game runs; you don't care about the 10% more fps you could get with the competitor's GPU.

Since graphics cards do more than rasterization and lighting, it is very important to have a serious GPU with open source drivers, so that all the other open source projects related to ML can do more. I suppose my next GPU will be from Intel (never thought I would say good things about them, after they almost killed the x86 CPUs).


> their surprisingly competitive GPU perf,

I think the only thing Pat Gelsinger deserves credit for when it comes to Arc is firing Raja Koduri.

Whether or not this turns out to actually be good for the company or not remains to be seen. A lot is riding on Battlemage not being a mess like Arc was.


Hope they fire the guy who releases new CPU sockets without reason.

I bought an AMD AM4 socket motherboard 3 years ago for a good price, ~$55 (A320MK), and recently I upgraded the CPU (a 5600G - 12 threads at 3.9GHz with a powerful integrated GPU, for $100) and the motherboard supports it. Very nice!


Intel rested on their laurels for many years, but their processors are neck and neck with AMD in some segments (probably because they had margin to burn, which they now need in order to stay competitive). And they are still adding fabs in the US. When I think of financialized companies, I think of IBM. I wouldn't count Intel out yet.


I don’t know if they have been resting on their laurels. Alder Lake, which was a huge departure from their previous architectures, came out at a very good time - just as their previous design was running out of steam. It takes years to get a new design from drawing board to production.

IMHO Intel’s R&D was humming along just fine but the company as a whole had problems pushing stuff into full production. The new CEO managed to get the company into shape enough to actually ship things again.


IBM is a good example; all that is left is rearranging the deck chairs while the ship goes down.

https://www.macrotrends.net/stocks/charts/IBM/ibm/net-income


Microsoft was able to reverse it, hopefully Intel can as well.


What does "start leading" even mean? Every CPU launch, AMD is ahead and then Intel beats them, or Intel is ahead and then AMD beats them. There is no leading here. There are only wins for people who want CPUs that keep getting faster, cooler, require fewer watts to run, and overall improve generation upon generation.


It's only been perennial leapfrogging for the past several years, when AMD has been doing well with their Zen processors and Intel has been on a recovery path from their 10nm failures. But before that, there were a few stretches of several years each where Intel was solidly in the lead and AMD was always an also-ran. Intel couldn't have retained their dominant market share for decades without having those long periods of undisputed technical leadership to solidify their lead.


> Intel couldn't have retained their dominant market share for decades without having those long periods of undisputed technical leadership ...

That kind of misses where Intel pulled all kinds of illegal business tactics to have AMD excluded from manufacturers, etc.


Sounds like someone either wasn't around for, or forgot about, both the Athlon era (when folks were just as likely to have an Intel as they were AMD), as well as the 486 days, where folks were just as likely to have an Intel as they were a Cyrix (thanks, Quake, for ruining that party).

It's not "only the past few years", it's "we're back to healthy competition" after a period of basically illegal business practices breaking the market.


@wtallis I have a question unrelated to this thread, but if you have a way to contact you privately it's about an article you wrote in 2017. Thanks


But people are pining for the "good old days" where Intel sucked, but AMD sucked more. Now these two companies are competitive and innovating. This is how it's supposed to be.


And meanwhile they're both getting cucked by Apple because the Intel/AMD brand monoliths are too heavy to make any groundbreaking/architectural changes now.


AMD's Zen 4 chips are competing with Apple's M2 on performance and at least closing the gap on efficiency. Intel's Alder Lake is losing on efficiency but is still trading blows for performance. I am generally a fan of Apple but the M2 was just a modest iteration of the M1, it's not a groundbreaking architectural change either. Even the Pro/Max/Ultra die multiplication strategy is unlikely to continue competing much longer, the latest M2 Ultra is losing to AMD and Intel in some aspects right out of the gate.


M2 is still quite far ahead in user-interactive tasks like browser and JVM, as shown by actual user experience in battery life. Cinebench isn't exactly the only efficiency test and Anandtech shows AMD still trailing far behind on the SPEC2017 suite (a variety of real-world applications).

In desktops (Mac Studio etc) where you can run unlimited watts, yeah Apple doesn't do as well there, but the efficiency is still amazing. And in the laptops, the actual user experience shows the cinebench numbers aren't capturing something.


I'm not sure if I even trust Intel/Windows at this point. The M1 Macs are just so much snappier in absolutely every way imaginable than my old 11th gen Dell. My old PC would sometimes take minutes (!) to fully wake up from sleep, the battery drained in a couple of hours when coding, and it was sluggish the whole time compared to plugged-in performance. I plan to see if I can borrow someone's 13th gen PC and use it for a week and see how it compares. I'm not able to find any reviews that cover the user experience on and off wall power.


"My old PC"

Well of course a newer laptop is going to be "snappier" lmao.

"My new car is faster and more fuel efficient than my old car"


11th gen Intel isn't some ancient PC, it released in 2021, before the M1 that user compared it to. Don't be disingenuous.


Aha, it's still a perfectly cromulent point. _Every_ device starts off fast: phones, laptops, whatever.

I've had quite a few MBs and they all slow down after a while, needing an overhaul/refresh in the same way that windows machines, phones etc do.

Get that red memory pressure and kernel desperately swapping pages in & out, woooo. Perhaps things are different with >16GB machines with the SSD mostly empty.


I've never had slowdown issues with PCs or Macs unless hardware is outright failing. You can stop gaslighting us now.


> I'm not sure if I even trust Intel/Windows at this point

It helps that it's Unix on the desktop. If you're comparing against a Windows install then I'm sure yes it will be way snappier, if you're comparing against Linux it really shouldn't be. Linux doesn't always have powersaving down quite right etc though, ofc.

The sales pitch of OS X to powerusers imo is that it's Unix on the desktop that is well-supported by the vendor and has a good ecosystem of professionally made apps. If you want to tinker there's nothing inherently bad about Linux either, but OS X actually does just work fairly well, although it's not without flaws as well. But it's faster than Windows, less amateur than Linux, actually has a user base unlike BSD. So it sits in an interesting spot - the "willing to spend money" niche.

There are a number of downsides and unpleasant aspects to OS X too, of course. But yeah, it's inherently going to be snappier than Windows if that's your reference point, *nix generally is, that's not OSX exclusive either.

People are generally more hostile to vendor hardware-software integration today than they used to be in the past, I think. Amiga, BeOS, SGI, HP-UX, Solaris, Cray, Nonstop, zSystem, CUDA... there is a lot of the computing world that runs on proprietary hardware-software integration and always has.

It has been an interesting sea change that people see open/plays nice as the default, it's an interesting sign of how copyleft has won in the long term that proprietary is seen as greedy/suspicious in general. I've been feeling that's a significant thing for a while. People are hostile to these products when sometimes it's simply paying more for a specific thing or a niche, if you want a premium *nix laptop the M1s are very nice actually.


In most aspects.

The M2 chip is more power-efficient, but it can't compete against Intel and AMD.


I don't know about that either; based on Tom's Hardware's review of the M2 Ultra, it is still pretty competitive and well rounded.

Intel beats Apple at most specs but the margin is too small to say Apple "can't compete", especially for a company that didn't even have a desktop CPU three years ago.


By Apple? Or by ARM?

There's a reason it's an "ARM" processor & not an "Apple" one, even though yes, Apple did contribute a lot of their own design to their specific chip.

Unfortunately, Apple took the shortcut of gaining performance by having memory/SB/NB/kitchen sink all together on the same die, which is very... not scalable. It's a good way to be "top in class" for the class you're in, but it comes with a ceiling; you can't hit the higher levels because where competitors can use the whole die for the processor, Apple are stuck using only part of the die, the rest being used for memory/controllers, etc.

Absolutely stellar architecture for laptops for sure, but not so great for desktops/servers, imo.


> where competitors can use the whole die for processor, Apple are stuck using only part of the die, the rest being used for memory/controllers, etc.

The memory isn’t on the same die in the M-series chips.


It's an Apple M2.


Intel is producing amazing GPUs AND CPUs right now.


They really are not making amazing GPUs. The ones on the market are only decent value because Intel is selling them far below what it cost to make them, because there's no way they could sell them at proper prices. Even still, they have ceased production, and the product is still on the shelves.


Honestly being anywhere on the competitive spectrum is pretty impressive. GPUs aren’t just some commodity product.

They’ve required decades of iteration to get to the flagship products we have today. To achieve even a fraction of that is an achievement in itself.

But yeah I am going to hold on to my NVDA for now.


Haha, yeah, in any reasonable comparison to like “complexity of things that humans have made over all time,” they are definitely pretty amazing. But they appear to be third best among their peers. I’d take an Intel dGPU though.


I'd have it the other way around. Intel's doing OK with CPUs, but man, their P-core is way too big & hot. They've been adding and adding to the same core design for a decade & it's a monster. Also their process is still flagging. They've managed well despite such limitations, but real change has to come, soon.

Meanwhile Arc is kicking ass. Its launch was a dud, but the team has really kept pushing on making the chip run better and better & it's such a great value now. People need to reassess the preconceptions they formed at launch. https://www.digitaltrends.com/computing/intel-arc-graphics-c...


Again, it only looks good because it's sold at such a deep discount.

A770 has a 400mm^2 die and a 256-bit bus to 16GB of GDDR6. It only competes favorably to cards that cost less than half of what it did to manufacture.

You can say that it's a great deal for a consumer, but it is a terrible deal for Intel, and the only reason they are selling them for such a price is that they already have the stock and couldn't sell it for any higher price.


There could definitely be a lot of truth to this, but we don't really know, do we? Do we have any idea how much these chips actually cost various chipmakers?

AMD's RX 580 was a $280 GPU with a 256-bit bus too. It got down near $200 for a while. It was only 240mm^2, though. I simply don't know what chips actually cost these days. I wouldn't be surprised to find out Intel's taking a bath here, but I also would be super unshocked to hear other folks making GPUs have colossal markups.
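For a very rough sense of the arithmetic, the usual back-of-envelope sketch is dies per wafer times yield against wafer cost. The wafer prices and yields below are pure placeholders, not real TSMC/Intel numbers, so the outputs are only illustrative:

    import math

    def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
        # Standard back-of-envelope approximation: gross dies minus edge loss.
        radius = wafer_diameter_mm / 2
        return int(math.pi * radius ** 2 / die_area_mm2
                   - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

    def cost_per_good_die(die_area_mm2, wafer_cost_usd, yield_fraction):
        return wafer_cost_usd / (dies_per_wafer(die_area_mm2) * yield_fraction)

    # Placeholder inputs -- NOT real foundry prices or yields:
    print(round(cost_per_good_die(400, wafer_cost_usd=10_000, yield_fraction=0.7), 2))  # ~400mm^2 A770-class die
    print(round(cost_per_good_die(240, wafer_cost_usd=4_000, yield_fraction=0.8), 2))   # ~240mm^2 RX 580-class die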


Tuna-Fish decides what qualifies as a "discount" simply by shooting out of his ass, unless you have insider knowledge you'd like to share.


Completely disagree.

The new P's are insane.

I have an i5-1240p and it's the best chip I've ever owned.

12 cores, 25w tdp.


The -P SoCs are not the same thing as the P-cores contained within. The i5-1240P SoC has 4 P-cores and 8 E-cores. The -P at the end of the model number denotes it as being in the middle product segment of their mobile lineup, between the lower-power -U models and the higher-power -H models. All three product segments use the same P-cores and E-cores, but in varying quantities and clock speeds.


Apologies, I misunderstood.

In any case, the i5-1240P is fantastic. I wouldn't prefer any other SoC over it, given it's an Intel, an x64 chipset, and an absolute animal in power, compatibility and performance (especially multi-core).

Also, their N100 chipsets are insane for low-end, low-TDP computing. I don't think there is anything comparable on the market. As I understand it, it uses all efficiency cores.

An N100 server could handle any task you could ever throw at it and only pull 6 watts of power.


They're only too hot and inefficient when run at 5+ ghz. You'll see those same P cores perform very admirably in terms of efficiency when they are run in their peak efficiency zone, like in server and mobile chips.


They did it to take market share, and they have, slightly (6% market share against AMD's 9%). If you look closely, their software story is miles ahead of AMD's, so I can see them coming to a comfortable second in the next few years… which is nothing small given the leader is apparently a trillion-dollar company.


Their APUs are not competitive. Somehow the 13th gen has a worse iGPU than previous generations. The CPU is plenty fast, though.

Meanwhile, you can run Cyberpunk on the newest AMD APUs.


I imagine the pioneers of this were the camera-sensor folks (mostly Sony?) doing BSI?


BSI requires that you avoid having wiring on both sides, so that there's a side where no wiring will interfere with incoming light. But it does at least share the idea of polishing down the die so the layer of bulk silicon under the transistors is very thin.


Unfortunate name.


Yeah, maybe PowerBottom was already trademarked.


Reminds me of a few months ago when Nvidia announced their new CuLitho fabrication process, which in Spanish means “cute little ass”.

I hope Nvidia adds backside power delivery to their new CuLitho chips, just so we can get as many jokes as possible out of this fab cycle.


Cannot be worse than Audi's "etron", which means horseshit in French :)


I'm sure just about any product name is going to sound bad in some random language used somewhere on Earth.


CuLitho de Nvidia.


Culitho != Culito


Oh come on we’re allowed to have fun at Intel and Nvidia’s expense. This is like the most niche joke if I can’t make it on HN I’m doomed.

I’ve got the Spanish speaking software engineer joke market locked up!


The H is silent in Spanish, so when spoken, culitho = culito.


Yeah, sadly my first thought was "Intel CPUs are powered by farting now?"

Second thought was "I wonder if it reduces fan noise."



I don’t know, it made me snicker. But then again, I do spend most of my working hours among 14 year olds.


they had to know. someone had to know

I thought maybe it was only the ones in the article who were saying it, but no it's also in the intel advertisement video https://www.youtube.com/watch?v=Rt-7c9Wgnds&t=73s


It feels weird to me that they don't just lay down the power layers first, then some copper, and then the logic layers.

So just build a deeper stack of layers, rather than flipping and grinding.

But probably there is a reason.


I think this might be the end of cool die photos, now that all the cool stuff will be hidden in between two layers of wiring.

Though I'd be curious to see what the power delivery layers look like after this change.


What, exactly, do you think one could see on the back of a flip chip?

If anything, BPD means there are more layers that can be shown.


Interesting! I guess they might also return to FIVR, where the CPU voltage regulators were integrated into the chip silicon (Haswell / Broadwell).


Where does Intel currently stand in the race with AMD?

AMD has been doing extremely well while Intel hasn’t been able to regroup.


https://www.cpubenchmark.net/cpu_value_available.html#xy_sca...

It looks like Intel still has a solid lead in single core perf, which is frankly the biggest factor for me for a general purpose desktop CPU. Of course, other uses have other priorities. The charts are missing one important measure, power efficiency.


PowerVia is more of a fab technology, and AMD is fabless, so it's not really possible to compare them directly in this respect. AMD relies on TSMC or Samsung to make their chips.


This actually looks incredible. Maybe this will be the turning point for Intel.


Hopefully the competition stays healthy and they keep challenging each other for the top position. Intel already proved that sitting in the top spot for too long made them lazy.


The image is a bad photoshop? The left signal should not have been mirrored.





