Is there good coverage of how Intel became so uncompetitive? My instinct is to say this is just the natural result of the presence of MBAs who are trained to focus _exclusively_ on this quarter's results, so they ignore R&D investment and shit on employees with penny-pinching gimmicks like hotdesking.
I'm willing to bet my intuition is wrong, especially given my extremely deep bias against MBAs and 'this quarter' thinking. Any great sources on the full story?
I think they have lost a lot of their best employees because their pay is not that great. They aimed to pay at the 50th percentile in wages, while the top companies can come in and offer double or more their current pay.
It doesn't help that they switched from large cubes with five-foot-high walls, which did a good job of minimizing noise and visual distractions, to much smaller cubes with walls only about 3-4 feet high. Plus they installed sit-stand desks, so you have people standing and making phone calls that can be heard 30 feet away. Not great for concentrating on problems.
This is a routine issue in the semiconductor industry. Pay scales haven't kept up with software across the whole industry. The trade organizations are aware of this, but I haven't seen a lot of push to level the playing field. It's unfortunate, as there are lots of really fascinating problems to solve on the materials and integration side.
The industry trade organization did make a snazzy movie to try to attract young graduates into the industry.
Pay disparity is something that you’ve gotta fix early on. If you wait until the results of losing top level talent are finally being felt by everyone, then chances are your business might not have enough cash left to rectify the problem.
This is generally true; however, Intel still has and makes tons of cash, especially compared to AMD. It will take them some time to recover from this, though; in this industry that can easily mean 5+ years.
When I was younger I looked into getting into this industry but was put off by the low salaries. Broadly across the electronics space I noticed pay is very low, which I find puzzling.
I worked for a tenant in a complex that was mostly populated by semiconductor manufacturing and research people.
You could use the parking lots as a proxy for employee prosperity. The chip guys drove better cars than the students, but the government and other corporate people had better cars on average. The chip bigshots made bank though. The chip people would also have crunch periods where they worked crazy hours.
The workers making bank were the tradesmen building the facilities that housed tools.
Only thing I can add to this is I had an Uber driver in SF area who was a computer engineer for Intel and wasn't doing Uber as a fun aside to meet people...
Yes on both counts. I was contacted a couple years back about a machine learning contracting gig. When I asked what the hourly rate was the recruiter said $45/hour. I laughed and said that the going rate for that kind of skill set is like $120+/hour. That has not been an isolated incident.
Given the company, it's usually long-term contracts that more closely resemble full-time employment. The location is also usually outside of the biggest, most expensive cities.
$120/hr * 40 hrs/wk * 50 wks/yr = $240K before self-employment taxes and paying for insurance. Ends up being comparable to typical compensation in cities that aren't SF, Seattle, or other top-tier markets.
Now if someone was doing more typical ad-hoc freelance work on a project basis, $120/hr might be too low for typical work. However, $90-120/hr as a contractor with a long-term contract and stable pay over 12-18 months is not uncommon.
Unless someone has a lot of new, small contracts knocking at their door nonstop, it can make more financial sense to take a $120/hr long-term contract over sporadic $200+/hr work that changes month to month.
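To make that trade-off concrete, here's a rough back-of-the-envelope sketch in Python; the rates, utilization, and overhead figures are illustrative assumptions, not numbers from anyone in this thread:

    # Rough comparison of a steady long-term contract vs. sporadic higher-rate
    # freelance work. All inputs are illustrative assumptions.

    def annual_gross(rate_per_hr, billable_hrs_per_wk, billable_wks_per_yr):
        """Gross annual billings before taxes, insurance, and other overhead."""
        return rate_per_hr * billable_hrs_per_wk * billable_wks_per_yr

    # Long-term contract: 40 billable hrs/wk, ~50 wks/yr at $120/hr.
    long_term = annual_gross(120, 40, 50)        # $240,000

    # Sporadic project work: higher rate, but assume only ~60% utilization.
    sporadic = annual_gross(200, 40 * 0.6, 50)   # $240,000

    # Self-employment overhead (SE tax, health insurance, unpaid benefits) is
    # often estimated at roughly 30% of gross; assumed here.
    overhead = 0.30
    print(f"long-term contract, net of overhead: ${long_term * (1 - overhead):,.0f}")
    print(f"sporadic freelance, net of overhead: ${sporadic * (1 - overhead):,.0f}")

At those assumed numbers the two come out about even, which is the point: the premium rate only wins if utilization stays high.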
Very true, though it'll depend on the seniority of the position.
$240K/yr, minus self-employment taxes, insurance, and other overhead, not to mention the lack of other non-monetary benefits and comp, is not a lot for someone doing serious machine learning.
I'd imagine that the type of specialized talent Intel needs to catch up to AMD has plenty of options open to it. A comp strategy based on median industry comp might work to maintain their position, but probably won't churn out the new industry-leading tech needed to jump ahead.
> $120 is about a third of what my accountant charges per hour.
Some professions typically require both a related degree and one or more certifications or licenses. Accounting is one of those (CPA, CMA, CFA, CISA, CIA, EA) and this ends up being reflected in the hourly rate.
This is so true. I have friends whose managers told them: "the only way to get more pay is to leave Intel", "our hands are tied, we can't give more pay or a promo", "to get more pay and a promotion, one option is to leave Intel and then come back"... All of these folks are in FAANG now.
Man I hate that attitude. My company pulls that on absolutely every benefit asked about.
"We've done a survey and our pay and benefits are on par for the industry" = we are proud to be a profoundly mediocre place to work in every possible way.
Can't wait for this pandemic to end and the market for my industry/region to pick up again.
Seems like there were years when Intel did not have much competition from AMD. Each year the "big news" from Intel was that their Celeron/Pentium/iX/whatever was now up to 100MHz faster than last year's (during limited situations where temperatures allow, blah blah blah). Or they'd add some new strange trademarked exotic-sounding technology brand name each year, like "The new Intel Celeron/Pentium/iX with..." "Intel MaxT"/"Intel TruEvo"/"Intel Cartara"/"Intel Artuga Plus"/"Intel Versuvia"/"Intel Centurior Max"/"Intel T-Xe" that no one really understands but is basically some sort of mundane enterprise feature no one actually cares about that does something for remote wifi management.
Quick - rush to the store to get a new laptop!!! I totally need an extra 100MHz single-core maximum boost on a 4-core 2GHz CPU (i.e. 5% max...) with Intel TruStep MagrittePro (TM) technology that does something to the colour gamut of my monitor in certain content creation scenarios.</sarcasm>
It strikes me that Intel have been caught napping and resting on their laurels. AMD appear to have come out with a great competitive product, and Intel don't seem to have anything to compete with it because they've been milking the market for the past 5+ years with tiny incremental clock increases and nothing actually "new".
They've allowed AMD to eat their lunch.
Maybe they've got a "real" new product they've been keeping in reserve that they'll bring to market now and surprise everyone with. Maybe they'll bring out something amazing next year. Who knows. Maybe. Maybe not. Seems like they've blown their lead for now regardless.
Don't look for a change: their position was worse in the days of the Netburst dead-end and back then the company was just lucky that a second, independent path had been followed in the Haifa office to target a niche market (yes, laptops were niche in that age).
So has there been organizational dysfunction for decades? I think not: there are things Intel has always been doing very well and post-Netburst improvements after Core 2 have also been significant.
I believe that a big element is basically luck: you invest in a certain progression path, and that investment will yield results, so you keep going. Another path might be sufficiently better to justify writing off the investment in the inferior path, but you don't know that. Perhaps the unsatisfying path taken is the least bad of all. Even Netburst improved over time, a bit, and so did whatever forgettable thing AMD had been building between the Athlon glory days and those of Ryzen. As long as you see progress, it's very hard to just give it up for a fresh start (that may or may not be better). We can count ourselves lucky that a luck/lock-in imbalance has never persisted long enough to make the lucky one the only survivor, because then they would never leave the dead end they'll inevitably run into some day.
I think we've all taken the ability to reliably introduce process node improvements for granted to some extent. Intel has clearly been caught out by its inability to get 10nm - which I understand was overambitious - to work and pretty much everything else follows on from there.
At the same time AMD has been fortunate that TSMC has been able to continue with its node shrinks - supported by demand from and cash generated by high end mobile devices.
None of which is to say that Intel has been well run (cough McAfee cough) or that your criticisms aren't valid to some extent.
> Intel has clearly been caught out by its inability to get 10nm - which I understand was overambitious - to work and pretty much everything else follows on from there.
It's more than process node. Zen 3 now has superior IPC as well; the now unqualified performance lead of AMD devices happens at lower frequency. The process node problem is just the most visible and embarrassing problem Intel has. It is not the only problem. Intel's 10th gen is also still well behind the curve wrt side channel attacks. That gap will persist until Rocket Lake appears.
The rot at Intel is old and runs deep.
The one area that Intel has managed to maintain superiority is quality. Intel is still the thing to buy if you want minimum glitches.
> The one area that Intel has managed to maintain superiority is quality.
There are lots of deliverables, and thus lots of things that can have superior quality, I guess. But my recent experience has been the reverse of yours. Intel's early WiFi modules on a CPU board were so bad I was forced to go down the suck-it-and-see path: buy a whole pile of them from various manufacturers on eBay and build a room of 60 machines so I could test the failure point of the bloody things. Intel fell over at about 10 clients or so; I never did find the fail point of Atheros, and it was 1/4 the price. Then there was the i915, which hardly worked for a year after laptops were being sold with it, followed by a seemingly unending series of bugs in the Management Engine, followed by Apple cancelling their MacBook refresh because of Intel bugs. Then I had the misfortune of using Intel's T265. To name but one bug, it would not re-initialise its USB bus (say, after a CPU reset). Intel's response was "won't fix, suggest you splice a relay into the USB power line".
You must be looking at a different sort of quality to me. Right now, Intel is terrible. In the meantime I've had the pleasure of using a few tens of AMD GX-420CA MPUs, and they are just working after 5 years, zero failures.
> The one area that Intel has managed to maintain superiority is quality. Intel is still the thing to buy if you want minimum glitches.
This is kind of a big one, and why I prefer Intel. I can't see it continuing, though. Intel will feel pressure to:
* spin products to seem even somewhat competitive
* reduce their margins on clock speed (historically, they overclocked well because of conservative margins), and run a little too fast/too hot
* push things out the door before they're ready
I don't care very much about a 50% performance difference, but I do care a lot about stability. If Intel can maintain a quality lead, I'm likely to keep buying Intel.
Few executive teams have that long-term discipline.
Not disagreeing that Intel has deep problems but doesn't greater transistor density support more sophisticated architectures and hence superior IPC? Also the failure to get 10nm working has presumably been problematic for the architecture team who have to design for two processes at the same time rather than focus on just 10nm?
> doesn't greater transistor density support more sophisticated architectures and hence superior IPC?
Intel claims Rocket Lake will deliver a 10% IPC improvement over Comet Lake on the same 14nm node. Clearly 14nm hasn't been tapped out despite the five years and 3-4 (depending on who you ask) previous microarchitectures they've used it for. Yes, smaller devices enable better designs. Ironically, if Rocket Lake actually delivers 10% then it's corroboration that Intel has been slacking on microarchitecture as well.
> for the architecture team who have to design for two processes at the same time rather than focus on just 10nm?
Intel has been designing devices for at least two nodes as SOP for 13 years now; they called it "tick-tock" which is the cycle of developing a new microarchitecture on one node and then porting to a smaller node.
The difference this time is that they're going backwards... In any case the argument that there is some great difficulty with moving cores between nodes is tough to support given the history.
> they called it "tick-tock" which is the cycle of developing a new microarchitecture on one node and then porting to a smaller node.
The whole tick-tock cycle has been interrupted by the 10nm failure, which has almost certainly thrown off their architecture development cycle. It's not about moving cores to a smaller node, it's about knowing whether you're designing for 10nm or 14nm.
Are there any industries where high-end manufacturing doesn't subsidize low-end and low-middle products substantially?
Are we saying this all started to unravel when most of the money moved to mobile and Intel still, after 20 years of warning shots, didn't have a compelling strategy there?
For Intel I think ~90% of the PC market plus ~90% of the server market provides enough volume to easily cover their R&D. The rest of the industry may have needed to pivot to mobile to fill their fabs but that's not Intel's problem.
Is there an electric generator evolution scenario here Intel should be watching for?
Back in the day before the electric grid really caught on, hydro- and coal-powered electric generators could be found in businesses large and small whose needs went well beyond the limited voltage and amperage available in the meager electrified buildings of the time. It made sense for a business to have its own power plant engineering department to feed and maintain the beasties if the productivity from thirsty electrically powered machines far outweighed the enormous cost of an in-house power plant, consumables, and staffing. As the grid evolved, though, this need went away, and these days only really heavy industry that exceeds the delivery capabilities of the grid will run its own on-site power plant (more often they just try to "peer" co-locate next to a "real" power plant, kind of like HFTs with exchanges). Power plant manufacturing is now a heavily specialized industry with margins much lower than Intel has become accustomed to.
Could the "grid" of the Net, cloud and mobile devices evolve to the point where the existing PC and server markets retreats in relative scale into "heavy industrial" applications like cloud infrastructure, and shift out Intel's cash flow from under them? Under such a scenario, the PC and server markets don't shrink as much as grow dramatically slower on a relative basis to B2C-grade gear and infrastructure that explodes in growth comparatively. Intel will still be plenty profitable if well-run, and still be mighty, but won't grab all the headlines, and evolve into a GE-like "mainstream" business.
> Could the "grid" of the Net, cloud and mobile devices evolve to the point where the existing PC and server markets retreats in relative scale into "heavy industrial" applications like cloud infrastructure, and shift out Intel's cash flow from under them?
It could, but the extractable rents of centralized data (eg. the Law of Conservation of Attractive Profits) will continue to push against this for a while yet.
Arguably, the hardware and connectivity we have now is already more than capable of dispensing with data centralization for most use-cases, relegating the cloud to niches like commodity backup-and-restore services, and premium on-demand compute capacity analogous to the use-cases for supercomputers (of which AI/ML is a growing subset).
We're already hitting use-cases and scenarios where centralized synchronization is becoming a hindrance that imposes additional costs, but the loss of control of a locus for extracting attractive profit margins implied by alternative software architectures generally limit investment.
Could the "grid" of the Net, cloud and mobile devices evolve to the point where the existing PC and server markets retreats in relative scale into "heavy industrial" applications like cloud infrastructure, and shift out Intel's cash flow from under them?
Absolutely. Many analysts are predicting that the future is mostly public cloud (heavily ARM) accessed from mobile devices (100% ARM) so they're imploring Intel to get back into mobile before it's too late.
My perspective is that every process node is so critical that missing the mark on 10nm had far-reaching ramifications. This kind of technology has gargantuan inertia. The entire hardware ecosystem is strongly interconnected between different firms, and each firm's future technology depends strongly on its past execution. Failure to deliver on 10nm not only jeopardized the smooth rollout of the subsequent process node, it also hurt Intel's ability to deliver large quantities of full-featured chips to customers.
I don't personally believe that Apple would be going to in-house silicon instead of Intel for its flagship laptops if there were a viable way to avoid doing so. Intel is so hurt right now that I surmise it'd be willing to negotiate a fat discount for a flag-carrying customer, and the loss of Windows functionality is kind of disappointing at the upper end of the market (where some tools are clearly better supported by Windows, or at least x86).
Apple would be doing their own thing anyway. It's not just about performance; there are all the myriad other tweaks and customisations you can do on your own SoC to differentiate yourself. We see this with the mobile SoCs: Secure Enclave, sensors, specialist machine learning accelerators for image processing, exactly the core count and cache you want, optimised big.LITTLE. The T2 chip is actually a modified iPhone SoC design. With Apple Silicon that can just be a sub-unit in the main SoC.
Yes, moving to a new architecture is challenging, but Apple has done it before, and this time it can be for keeps. Never again will they be beholden to another company's priorities, or stuck with me-too processors their competitors have equal access to. A better Intel roadmap might have resulted in putting the transition off, but I think it was inevitable eventually, ever since Apple bought PA Semi.
> My perspective is that every process node is so critical that missing the mark on 10nm had far-reaching ramifications.
The problem with Intel missing on 10nm is not so much that 10nm itself is critical, but that it is critical to their roadmap. Large CPU design is heavily pipelined (like large CPUs), so if you miss on the process node, you've still got a team building the refined next release for the year after, and a third team working on the more refined release for the following year.
Then you have decision-making: it's hard to get a sense of whether you need to go back and make a good new design on 14nm, or whether 10nm is going to be ready soon enough (but it's been several years now of "not ready enough"), so you end up splitting design resources.
Not to be pro-Intel, but I think they suffered from being too far ahead. They picked a path to 10nm when nobody else even cared, and they got stuck on it for too long. They're so invested, and now there's a flock of competitors that can leverage faux-7nm processes that actually sell. If they can get back on track it will be a massive business success.
> My perspective is that every process node is so critical that missing the mark on 10nm had far-reaching ramifications.
I wonder if the migration from "tick-tock" to every third iteration is a case of believing your own PR. "Everything is fine" is what they should have been telling us, while internally it was flashing red lights and klaxons.
Or maybe this started even earlier, with the generations of hardware that gave us Spectre. The target became unreachable, they used smoke and mirrors, and when that blew up they just sort of gave up. Maybe the intervention should have come back then but didn't (cite the "MBAs took over" comment elsewhere in the thread)
From what I've seen, they aren't. At the same node and the same wattage per core, a Ryzen low-power core has better performance per watt and leagues more I/O. That was back in the iPhone X days; I don't think it's gotten any better since.
Personally, I'm very skeptical on Apple beating AMD.
I think it's partly that their desire to add their own IP (the use of "Apple Silicon" as the name is probably revealing of how they think of the new chips) was decisive in making the move from x86.
Plus probably still cheaper than any x86 alternative.
It's probably also a great way to avoid head-to-head competition.
Apple marketing always reminded me of how for decades, Rolls-Royce advertisements never would explicitly say things like the engine horsepower and displacement-- just "ample."
Now it will be that much easier to dodge performance questions. "Our machines are not built to run (mainstream software or game), so of course the performance is sketchy in the emulation penalty box. Just run the seven pieces of native MacOS software and it really flies."
They have some history with this with PowerPC but it really doesn't explain why they would make such a move now as they are already using x86 (if it is superior) - makes no sense to put themselves at a disadvantage just to be able to dodge performance questions.
Apple is all about mobile now, even for computers. So they don’t care that AMD has great desktop CPUs, they need great mobile (laptop) CPUs too. The Ryzen 4750U is a great laptop chip, but I can pretty much guarantee that in perf/watt the new Apple Silicon CPUs will blow them out of the water.
Completely agree, and it's not just about the CPU. It's about having a great power-efficient GPU, and about being able to do things with the neural engine, for example, that would not be possible on an x86 Intel or AMD laptop design.
> Is there good coverage of how Intel became so uncompetitive?
François Piednoël, a performance architect at Intel for 20 years, recently gave a talk, "How to Fix Intel", that covers a lot of the reasons behind Intel's decline: https://www.youtube.com/watch?v=fiKjzeLco6c
I think he is a little overrated, and the only reason he is so well known is that it's actually rare to see former Intel engineers open a YouTube channel with Intel content. His take on how to fix Spectre made no sense to me at all, so either I missed something, or he is actually lacking on some technical subjects...
Anyway, there's still some interesting insight into what he saw happen there.
The situation can be explained in a very boring way in any case. You had the tick-tock cycle, then Intel's 10nm was broken, but they thought it was nothing they couldn't fix with one more year of process debugging, maybe two in the worst case. And they had so much of a lead that they could even have tolerated three. Except 10nm basically never worked, and the new microarchitectures were designed for it... They switched to thinking about what to do with 14nm+++++++++ far too late, and even then I admit the result is suboptimal.
SemiAccurate is... let's say very, very opinionated (as usual), to be polite, and I don't think Rocket Lake will be that much of a disaster. But yes, Zen 3 is solid, although not all that magical and a bit overpriced for now (though I'm sure AMD will be able to adjust if really needed).
The final thing is: Intel really had an insane lead, and medium/high core counts aren't really required in volume by consumers right now. I'm not sure about the situation in datacenters; that could be more of a problem for Intel. But Intel has a level of market reach that is insane and far above AMD's. So it is not that big a deal for Intel, at least if they manage to get back on track in the coming years.
So did Intel become that uncompetitive? I don't really think so. Enthusiasts are just getting a bit carried away now that AMD really is competitive. Choosing Zen 2 was still an (often excellent) compromise. Choosing Zen 3 over Intel would be a no-brainer if it were cheaper (and when you can get it!), and depending on what you do it can often be the right choice even at current prices, at least for some specific models.
"received a bachelor's degree in business administration from the University at Buffalo School of Management in 1983 and his MBA from Binghamton University in 1985."
"January 31, 2019, Swan transitioned from his role as CFO and interim CEO"
A CEO who has only a non-technical education (such as an MBA) is VERY unusual for hardware or software companies that are successful. Often the CEO of this kind of company has at least some technical education, and usually the CEO has lots of it. After all, most of the decisions in such firms will have a technology component to them.
A few examples:
* Lisa Su (CEO of Advanced Micro Devices (AMD)): BS, MS, and PhD in Electrical Engineering from MIT, and is a fellow of IEEE
* Jensen Huang (CEO and founder of NVIDIA): BS in electrical engineering from Oregon State University, master's degree in electrical engineering from Stanford University
* C.C. Wei (CEO of TSMC): Ph.D. in Electrical Engineering from Yale University
* Simon Segars (CEO ARM): Bachelor of Engineering degree in electronic engineering at U of Sussex, Master of Science degree from the School of Computer Science at the University of Manchester
* Sundar Pichai (Alphabet/Google CEO): Has an MBA, but also has an M.S. from Stanford University in materials science and engineering
* Eric Schmidt (former Google CEO): BS in Engineering, M.S. degree for designing and implementing a network, and PhD degree in EECS, with a dissertation about the problems of managing distributed software development and tools for solving these problems
* Jeff Bezos (Amazon CEO): Bachelor of Science in Engineering (BSE) in electrical engineering and computer science from Princeton
* Mark Zuckerberg (Facebook CEO): At Harvard studied Psychology and Computer Science (did not earn a degree, but did study for a few years and implemented the first version of Facebook).
* Tim Cook (Apple CEO): MBA from Duke University and a Bachelor of Science degree in Industrial Engineering from Auburn University.
* Satya Narayana Nadella (Microsoft CEO): Bachelor's in electrical engineering from the Manipal Institute of Technology in Karnataka; M.S. in computer science at the University of Wisconsin–Milwaukee; MBA from the University of Chicago Booth School of Business
* Reed Hastings (Netflix CEO): Bachelor of Arts degree in Mathematics (Bowdoin College), MS Computer Science (Stanford University)
I'm sure there are more examples, but I think that amply demonstrates my point.
The only other example I found of a non-technical CEO leading a tech company was Safra Catz (Oracle CEO), who has a bachelor of arts and a J.D. (law school). My search wasn't exhaustive, but it was illuminating.
Now let's compare this to Bob Swan (Intel CEO), who received a bachelor's degree in business administration from the University at Buffalo School of Management in 1983 and his MBA from Binghamton University in 1985. No tech at all. Maybe Mr. Swan can do well anyway, but his lack of technical education is extremely unusual when compared to most other tech companies.
I don't think it's a coincidence that the two least inspired, lowest quality CEOs on this list (Sundar Pichai and Tim Cook) are exactly the two who have MBAs.
Under Pichai and Cook, their companies increased in value by 2.7x and 8.6x respectively, so they must be doing a very high-quality job at what they were selected for.
Yes, but many people would say (and it's my opinion as well) that they achieved that by resting on what was already working, making minor/logical improvements along previously mapped steps, and that they would be caught like a deer in headlights by the need to make a massive change to adapt.
They're extremely good (better than most) at keeping the machine well oiled and maintained to ensure it performs as well as possible, but when the machine can no longer do the job they will have a hard time foreseeing it or finding the new solution.
The interesting one I think about is if the current CEO is basically their version of Steve Ballmer at Microsoft. Not highly loved by tech, media or finance, but basically held the ship together long enough to enable Microsoft to figure out how to transition away from the sinking ship that was Microsoft Windows and into new markets.
Basically he just needs to keep it afloat long enough for Intel to be able to find its version of Satya Nadella and Azure to unlock the next leg of growth.
The examples you cite are people with operational excellence who were built up internally over many years. As an outsider I have no trust that Intel still has such talent; it seems they are fully run by bean counters now. Who says it's a Microsoft and not a Kodak? Microsoft had the immeasurable advantage of two big markets cornered for themselves: enterprise software and home computer software. Intel, meanwhile, is a market leader that is getting outplayed in every market; the only thing in their favor is their competitors' missing volume, and that's not an advantage that is going to last long.
Kodak gets unfairly picked on. They built their business by taking a cut from every single picture taken. There was nothing in the digital camera model that could replicate that revenue stream.
They did manufacture cameras themselves, but it wasn't enough. Remember that Kodak was making money in film manufacturing, film processing, and printing. They were literally making money from every click of the shutter. Even if they built the best cameras in the world, it wouldn't have saved them. And the ability of any electronics company to make cameras from standardized components made it impossible for them to keep a lead in cameras too.
Nah. Sensors. Sensor tech. It's hugely different from mainstream electronics or putting things together from standardized components. Sony took that lead, with Samsung and a few others close behind. Each time Nikon sells a camera, Sony gets a cut. Same thing for most other camera companies, except Canon.
Everything has sensors now, from manufacturing processes, to cell phones, to your optical mouse.
Imaging is much bigger than film.
Kodak R&D had a lead there too, but blew it bringing it to market.
> They were literally making money from every click of the shutter.
and that's the pivot the software players made in tech: they monetised every click of the mouse (and every tap), while Intel is stuck with the burden of the platform costs.
Let's mash this up with the evil HP business model that's on the front page. Kodak could have sold digital cameras that required Kodak DRMed flash cards that you would have to pay to erase and reuse. And maybe the photos could be further DRMed so you'd have to take the flash card to a certified Kodak lab.
And how exactly would that have competed with five hundred other brands that didn't pull that trick? The market eventually rejected even the soft-lockin of nonstandard memory cards (think of the 12 slots nobody ever uses that come with every SD/CompactFlash reader)
I'm not even sure it was the first wave of digital cameras that did them in-- people bought boatloads of digital Kodaks in the 2000s. I recall they were very big on dock ecosystems-- drop the camera on a base to charge it, and I think they had some printer docks. That should have helped to differentiate them from cheap Olympus/Nikon/Canon/etc. point-and-shoot cameras.
Yeah, there wasn't the residual income but there was a fair bit of an upgrade cycle for a few years to make up for it, which could have bought them time to find a solid place in the market.
I think the problem was the second wave: the point-and-shoot consumer camera disappeared (outside of novelties like Instax). After 2015 or so, you're either looking at interchangeable lens systems or other prosumer/pro-level kit, or you're using a phone. Kodak never made much of an inroad in the pro-level digital market, and I'm not sure they had a business adaptable to cameraphone sales-- I don't think they made their own sensors or optics, so a licensing or subcontracting business would have nothing to offer.
My first digital camera was a Sony compact that would only take Sony Memory Sticks - one of those 12 slots you mentioned. I got tired of paying twice as much for half the memory of standard cards, so I made sure my next camera took SD.
Kodak was always a big player in the camera business, but they did it to increase the film business - they wanted people taking and printing as many pictures as possible, because that's where the real money was. It's hard today to imagine the scale of that business. And that's the problem, if Kodak had sold every single sensor that went into every single digital camera, it wouldn't have been enough to save them.
> And how exactly would that have competed with five hundred other brands that didn't pull that trick?
I think I agree with the point you're making, but I still feel obligated to point out that cheaper and better printers (like the Brother laser mentioned in the other thread) haven't stopped HP from making huge bank on their ink scam for decades.
Yep - no doubt - I'm not placing any actual bets. Mostly the thought is that similar to the Windows franchise, the existing Intel x86 server franchise [1], while for sure not growing, seems boringly steady enough for a good 3-5 year runway to have a decent shot at developing or finding tech oriented leadership talent. It will be interesting to watch.
As an outsider to MS, it's unclear that anyone particularly saw Nadella as the obvious successor to Ballmer until they announced it. He somewhat came out of the woodwork, IIRC. Same with Lisa Su.
Really? Because Ballmer was there from basically the start (employee #30). He held positions of leadership all over the company and quite frankly knew it inside and out. Agree or disagree with how he ran the ship he knew the ship and it was his life.
The current Intel CEO has been there for 4 years as a CFO prior to taking over as CEO. We know he can spell Intel... we know he isn't even a little bit technical (his degrees are business administration and MBA and just about all of his previous positions of note are as CFO). So basically he's going to run the company like a beancounter... which always works out well at companies that need heavy R&D to stay competitive.
Maybe? Intel is in a high-technology field, where research can take a decade to bear fruit. I hope they're already planting the seeds for that growth today, or else that they have plans that will keep them afloat for a very long time.
Why wouldn't they? They probably have contracts that last over that whole time frame already, and their revenue is an order of magnitude higher than AMD.
They could lose a lot of market share and then come up with something better in the end, like in the Athlon days.
Not obvious that it will happen, but not impossible.
I don't know, but I can tell you why it won't get better. Anyone smart enough to get promoted at Intel is smart enough to move to Apple and make 50% more. I know because I did that. The ones I know left at INTC tell me that financial engineering is off the charts.
There are lots of moving parts with an enormous organisation like Intel.
But if you ask me to point the finger at one thing, it would be ex-Intel Chairman Andy Bryant (he retired earlier this year). Raised to CFO in the '90s, promoted to CAO in the '00s, and later headed the process and manufacturing operations while forcing out Patrick Gelsinger. That was, I think, in 2009. The Otellini and Bryant era.
Paul Otellini retired in 2012, and BK (Brian Krzanich) was picked (by Andy Bryant) as Otellini's successor.
There seems to be lots of power play within Intel that we will never know.
The TL;DR is that Intel has always been a vertically integrated shop (meaning that they usually fab and design their own chips), and that is starting to bite them because pure-play foundries are improving their tech at a faster rate.
Intel has been unable to keep up with process advancements in their foundries, and that has led to pure-play foundries like TSMC taking massive market share. As chips get smaller and smaller, Intel has failed to keep up. They can only do 10nm for their mobile stuff and 14nm for their desktop stuff, whereas fabs like TSMC have been in 7nm territory for a while now and are moving into 5nm territory.
Just skimming the first article, the TL;DR that I took away is that TSMC can get revenue from their old foundry nodes for much longer than Intel can. So the issue is not how fast TSMC improves, but more that Intel has to invest proportionally much more to keep up.
Note: the I/O die on AMD Zen 2 / Zen 3 is a 14nm GloFo chip. Only the CPU-core chiplets ("Zeppelin", maybe, or whatever they call them now) are 7nm TSMC.
So AMD's strategy also leads to lower fabrication costs: because they can make a far cheaper 14nm chip to handle the slower portions of I/O (talking to RAM, or PCIe), while the expensive 7nm parts of TSMC are used only for the cores / L1 cache / L2 cache / L3 cache.
Intel has a competitor to chiplets, called EMIB, based off its Altera purchase. EMIB is pretty cool, but has only been deployed in a small number of products so far (there was a Xeon + FPGA chip Intel made, there was the hybrid Intel+AMD chip, and finally the new big.LITTLE-style chip where Intel merged Ice Lake and Atom cores together). I don't know why Intel hasn't invested more heavily into EMIB, Foveros, and other advanced-packaging technologies... but Intel is clearly working on it.
Intel can do it, they just haven't decided to do so yet. They have the tech for sure.
It's simply a matter of priorities. It's not so much that Intel "isn't" investing in it; it's arguable that Intel just hasn't invested "enough" in it.
AMD went all in: they literally bet their entire company on advanced packaging, with AMD GPUs using a silicon interposer with HBM2, and now Zen chips taking a chiplet strategy. And to be fair: AMD had to do something drastic to turn the tide.
We're just at a point where AMD is finally reaping the benefits of a decision they made years ago.
--------
If I were to take a guess: Intel was too confident that they could solve 10nm / 7nm (or more specifically: EUV and/or quad patterning), which would have negated the need for advanced packaging.
AMD, on the other hand, is fabless. They based their designs off of what TSMC was already delivering to Apple. Since TSMC leapfrogged Intel in technology, AMD can now benefit from TSMC (indirectly benefiting from Apple's investments).
Intel's failure bubbled up from the fab level: Without 10nm chips, Intel was unable to keep up with TSMC's performance, and now AMD is advancing.
----
AMD's strategy just works really well for AMD. AMD is forced to keep buying chips from GloFo (which are limited to 14nm or 12nm designs). All of AMD's decisions just lined up marvelously: they fixed a lot of issues with their company by just properly making the right decisions in a lot of little, detailed ways. A happy marriage of tech + business decisions. I dunno if they can keep doing that, it almost feels lucky to me. But they're benefiting for sure.
AMD took something that seemed like a downside (the forced purchase of 12nm or 14nm chips, even when 7nm was available), and turned it into a benefit.
> They based their designs off of what TSMC was already delivering to Apple. Since TSMC leapfrogged Intel in technology, AMD can now benefit from TSMC (indirectly benefiting from from Apple's investments).
Did Apple actually buy a significant stake in TSMC or are you just referring to the fact that Apple is one of their large customers along with Qualcomm, Nvidia, Huawei (until recently) etc.?
> Did Apple actually buy a significant stake in TSMC or are you just referring to the fact that Apple is one of their large customers along with Qualcomm, Nvidia, Huawei (until recently) etc.?
I'm talking more like the latter: all of these companies (Apple, Qualcomm, NVidia, etc., and of course AMD) are effectively pooling their money together to fund TSMC.
I don't mean to single Apple out as if they're the "only" ones funding TSMC's research. (And I can see how my earlier wording could mistakenly be interpreted in that manner; I was careless with it.) It's more of a team effort, although Apple does seem to spend significant amounts of money trying to get first dibs on the process.
For better or worse, AMD has always been much more willing to quickly throw everything out and go with a completely new design paradigm. Sometimes it's a massive bust like the X2 or Bulldozer. Sometimes it's the Athlon or Zen.
The only time Intel really did that was with the end of the P4, and frankly even then they waited as long as they could before doing it. The rest of the time it's all carefully planned, stepped increases and safe design changes.
Both have their advantages, but for your question it means Intel will have to take a leap they clearly don't like taking.
Historically most successful AMD designs came from the outside. 286 was Intel clone, Am386 Intel microcode copy. Am486 lagged by one year, offered lower performance and was still developed using Intel IP. K5 first 100% AMD design, slow and late, competed in the bottom low end, considered a failure. K6 100% external design by NexGen, let AMD move up to middle of the market. K7 designed by DEC Alpha team and manufactured in Motorola partnership, great success.
> The TL;DR is that Intel has always been a vertically integrated shop (meaning that they usually fab and design their own chips), and that is starting to bite them because pure-play foundries are improving their tech at a faster rate.
Are you saying that vertically integrated companies are inherently disadvantaged because pure-play companies have a bigger list of orders and can spend more on evolving technology?
There is an advantage to being vertically integrated, namely better global optimization across domains, like between design and manufacturing. It would be interesting to know why that's not enough - and whether it's always not enough. If not always, then this particular case needs more specific reasons.
The balance likely comes down to volume. Specifically, total global chip sales.
If Intel's fabs build only Intel chips (I believe their contract fabbing is a rounding error?), then they're directly tied to Intel chip sales.
This sets up a potential death spiral. Intel misses a process node deadline, Intel's products are uncompetitive, Intel sales decrease, less demand for Intel fab, less money for Intel fab improvement.
Intel can temporarily paper over this by shifting money from other areas of the company, but it's not a good path to be on.
Conversely, as you might expect, if Intel sales are increasing then the opposite, virtuous cycle holds.
So essentially, Intel's fortune is tied to the Intel_sales : (global_sales / number_of_leading_edge_non-Intel_fabs) ratio.
And with regards to that, two huge things happened in the marketplace recently: (1) mobile chip sales explosion, (2) GlobalFoundries exiting leading process race.
If Intel hadn't been screwed by a process engineering miss, longer term trends would still have hit them hard.
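A toy sketch of that ratio and the feedback loop, in Python; every number here is made up purely for illustration, not real sales or fab data:

    # Toy model of the fab "death spiral" / virtuous cycle described above.
    # All figures are made-up illustrative units, not real market data.

    def fab_health_ratio(intel_sales, global_sales, leading_edge_rival_fabs):
        """Intel's fab volume vs. the average volume feeding a rival leading-edge fab."""
        rival_avg_volume = global_sales / leading_edge_rival_fabs
        return intel_sales / rival_avg_volume

    intel_sales = 100.0    # arbitrary units of chip volume
    global_sales = 1000.0
    rival_fabs = 3         # e.g. TSMC, Samsung, and (formerly) GloFo

    for year in range(5):
        ratio = fab_health_ratio(intel_sales, global_sales, rival_fabs)
        print(f"year {year}: health ratio = {ratio:.2f}")
        # A product miss shrinks Intel's volume while the market keeps growing,
        # which means less money for fab R&D the following year.
        intel_sales *= 0.95
        global_sales *= 1.05

Note how the ratio only drifts one way once the sales decline starts; that one-way drift is the "not a good path to be on" part.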
This is the case with every foundry, though. It’s the reason GloFo isn’t competitive anymore, for example. What’s more interesting is the physics reason that they tripped up the last generation - what did TSMC do right that Intel did wrong? What bets were made? Which ones paid off?
I believe the issue was that Intel leaned harder on EUV, trying to make it work and bury the competition, versus a more cautious approach from TSMC. Zen 3 is finally using some EUV layers, whereas I believe Intel already wanted to use EUV heavily in their "10nm" process.
I think the idea is that the open fab shops just have a hell of a lot more work than Intel and greater economies of scale. Making zillions of mobile chips means big money even if the profit margin per chip is smaller. This in turn means more R&D and eventually they overtake the company that only fabs their own desktop processors and chipsets.
In the long run Intel screwed themselves over by not leasing out fab time to other companies. They put themselves in a niche in an industry that is naturally dominated by the largest player. And it's extra embarrassing that they did so because they knew very well how important it was to be the biggest--they were for a few decades!
Maybe Intel could have held on longer if they had had a successful mobile chip to stuff into billions of smartphones, but their mobile efforts were short-lived and seemed to be treated with disdain by management. The first product kind of sucked, and instead of sticking with it and improving it they just threw in the towel, on both the mobile processor and the baseband chip. An embarrassing misstep for a company as big as Intel.
> In the long run Intel screwed themselves over by not leasing out fab time to other companies.
Sounds like Google, who also treated their cloud/fabric computing as a competitive advantage to be kept to themselves, and as a result they missed the cloud business.
> their mobile efforts were short-lived and seemed to be treated with disdain by management.
Classic. A low-margin, high-volume future usually can't survive in the shadow of yesterday's high-margin cash cow that is still being milked.
I think the point is if TSMC develops a new node they will find customers for it, and they compete with other foundries on node.
With Intel, their foundries have only internal customers, who can only go to their internal foundries. There's no competitive pressure on the foundries, and if they are ready early, that's wasted capacity. So capacity and technology planning is based on a common roadmap to meet the needs of their slowest (er, I mean only) customer.
Intel now and Intel 10 years ago are not that different. If not for the success of TSMC's EUV process, Intel would not be in such a troubled spot. A few years ago, a reasonable assumption in the industry was that EUV would not be production-ready for quite some years, so Intel made a reasonable bet that they would be better off pushing existing technologies further.
But then a company in the Netherlands solved the extreme-ultraviolet generation and optics problems, Intel failed to push their existing process to 10nm, and TSMC and Samsung overtook it as the technology leader. Intel now has to deal with a situation it has had no experience with for the last 40 years, during which the company was the technology leader.
Several of the engineers I worked with a few years ago left Intel (for the company I work at) after they switched to hotdesking to save money on facilities.
Basically the concept was instead of engineers having an assigned office or desk, the office would have a large field of unassigned desks and generic equipment. Every day you would come in, reserve a desk, take any belongings from a locker, and set up shop. At the end of the day, you would tear down, lock up, and leave.
It is beautifully efficient from a top-down perspective, but it turns out employees like the consistency of a known work location each day. It's also nice to be able to put up a picture of family, from what I gather. I suspect the cost of personnel replacement dwarfs the savings of an n% reduction in desks needed.
> Several of the engineers I worked with a few years ago left Intel (for the company I work at) after they switched to hotdesking to save money on facilities.
Throwaway account as I work there, but I'd like to make some minor corrections to this: I don't think most of Intel ever switched to hotdesking. It was a trial program. In my building it was on one or two floors. I can't speak for all the locations, but most of the other buildings at our site did not have this at all. Unfortunately I was on that floor and hotdesked for 2 years.
I'm pretty sure none of the process/fab folks were hotdesked (I used to work in that area and stayed in touch with those folks). Nor did any of the circuit designers/architecture folks get hotdesked. After a few years, the trial was over and everyone reverted back to having their own permanent desk.
It's incredibly unlikely that this is a reason for Intel's decline.
I have a Lenovo P920 at Google. Dual Intel(R) Xeon(R) Gold 6154 CPU @ 3.00GHz for 72 virtual cores. And if I want more cores in the cloud, that's also an option. What does yours look like?
That's not the normal workstation at Google. Only the Chrome, Linux, and maybe Android teams get it, since they need more CPU and memory to compile their projects.
I was careless with my wording. By workstation, I was emphasizing the station portion.
As an example, my current cube at Intel has walls all around (albeit short, but some buildings have walls taller than most people), 2 file storage cabinets with a fair number of drawers, a decent sized desk (enough for 2 people, not that we'd do that). Correspondingly, the cube is enough for 2 people. Also, I have a whiteboard.
From the context, I think the point was about the whole "work"station, not just the computer. It sounds like Intel has cubicles, while Google has smaller desks with no dividers.
Also, this is very un-Googley on your part. Flaunting some superficial thing like this is dumb. I hope you aren't leaking other, more important pieces of info just to win internet arguments.
The worst part of hot-desking in my experience is you always seem to end up next to some guy from marketing who is on their phone all day, and as a result you get absolutely no thinking time.
Same problem comes up with open offices. Some ass from another department constantly talking on the phone 9 to 5. I developed a Pavlovian twitch every time his phone rang.
> Some ass from another department constantly talking on the phone 9 to 5.
Dude, that guy's job probably relied on him being on the phone all day - he's just doing it. It's not his fault that you can hear him, it's your office manager/designer.
Some of us really do have to spend a huge chunk of our day talking to other people on the phone, Zoom, etc. There's no other way to get our jobs done, and if they don't give us private office space to do it in, then we have to do it in the open space they do give us.
Got to wonder - why are you there at all? How is this one particle better than working from home? Clearly you're not there for the environment (proximity to the folks on your team, etc.).
That’s the ultimate sign of disrespect for workers as human beings. Cubes and open office are bad enough but hotdesking is the next step.
I would be miserable in such an environment. I need my keyboard and screens in a certain layout. Books need to be in a certain place. If that was taken away from me I don’t think I could live with that.
> Cubes and open office are bad enough but hotdesking is the next step.
It's not that simple. I worked with hotdesking at Intel. Those workstations, although still small, were bigger than the (assigned) ones I saw at Google and Facebook when I interviewed there, and Intel's offered more privacy. Visiting those companies for an interview was depressing - finding out your crappy cube at Intel was still better than the ones at FAANG. Surreal.
There are a lot of negative comments in this thread relating to hot desking. The perspective is interesting.
From my own experience, at first I felt the same, but once you add noise-cancelling headphones (almost everyone has them), fewer cords (e.g. Bluetooth, USB-C monitors), and the ability to shift ad hoc into smaller teams and groups when required, I find it becomes such a winner.
A lot of people seem to set up or mark a desk as theirs, with books or a custom Sun Station or whatever, and most people seem to respect this and get others' needs.
Hot desking obviously works differently for different types of work and or personalities.
Trying to save money that way always makes me shake my head. Seriously, what does 100 sqft of office space cost? $200/month? Maybe $300 when you include common space? So $3,600/year, or 3% of your engineer's salary? It doesn't take a very big hit in productivity to erase any savings many times over.
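As a rough sanity check, here's the same arithmetic sketched in Python; the rent, salary, and savings figures are illustrative assumptions, not anyone's real numbers:

    # Back-of-the-envelope: what does hot-desking save per engineer, and how
    # much lost productivity wipes that out? All inputs are illustrative.

    sqft_per_desk = 100
    cost_per_sqft_per_month = 3.0   # ~$300/month per 100 sqft incl. common space
    salary = 120_000                # fully loaded cost would be even higher

    desk_cost_per_year = sqft_per_desk * cost_per_sqft_per_month * 12  # $3,600
    savings_fraction = 0.20         # say hot-desking cuts desk count by 20%
    savings_per_head = desk_cost_per_year * savings_fraction           # $720

    # Productivity loss that would erase the savings:
    breakeven_loss = savings_per_head / salary
    print(f"desk cost/yr:  ${desk_cost_per_year:,.0f}")
    print(f"savings/head:  ${savings_per_head:,.0f}")
    print(f"break-even productivity loss: {breakeven_loss:.2%}")

With those assumptions, a productivity hit of well under 1% already wipes out the savings.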
In the Bay Area, it’s more like $60-120/sqft/yr, and that’s triple net. Add in 50-100% because you need walkways and such between offices, bathrooms, security, mail room, IT, and other common areas. Then add more for building operating expenses, property taxes, building insurance, and common area maintenance. Yes, in commercial leases the tenant pays the property taxes, because the leases are for multiple years.
If you take the top-end, shmancy digs near the VC teat, you're still looking at only 10% of your engineer's salary. Meanwhile, in West Berkeley we signed a lease for $22/sqft/year. I share a 20x30 room with another engineer. Everyone else has a 10x10 office with a door. And we don't have rivers of VC money.
Sometimes I think the way big corps do offices is a legacy of mass-production factory thinking. If you have an assembly line making toasters, yeah, you need everyone in the same big building on a fixed schedule. But in any business where people can work from home, you could in theory let individual groups decide where they want an office. Give a group of 12 engineers a stipend of $4,000/mo to rent office space and see what happens.
> It is beautifully efficient from a top-down perspective, but it turns out employees like the consistency of a known work location each day
I worked somewhere once that tried this open-plan hot-desking nonsense. The HR dept kept their nice private office with fixed desks however. They said it was because they had to deal with confidentiality. Whereas us dumb engineers developing the IP of the firm presumably didn’t...
> At the end of the day, you would tear down, lock up, and leave.
...so you would never be able to have a computer on for more than the length of your workday? That sounds insane --- I could never get any real work done that way, because so much time would be spent on "restoring state" to the way it was at the end of the previous day. I wouldn't call that efficient at all.
Efficiency is your problem, not that of the office designer. If they can show that they've squeezed 20% more people into the same number of square feet, they're golden.
Laptops can be suspended rather than shut down, so maybe they're counting on that as the solution.
> ...so you would never be able to have a computer on for more than the length of your workday? That sounds insane --- I could never get any real work done that way, because so much time would be spent on "restoring state" to the way it was at the end of the previous day.
Treating desks like hotel rooms instead of apartments. Instead of having one desk that is "yours," you get assigned to any available desk every day. It seems like a terrible way to work, but it cuts down on the overall number of desks needed since large offices will never have 100% attendance.
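The desk-count savings described here are easy to sketch; the headcount and attendance figures below are made up for illustration:

    import math

    # Hot-desking provisioning sketch: with peak attendance below 100%,
    # an office can provision fewer desks than it has employees.

    def desks_needed(headcount, peak_attendance, buffer=0.05):
        """Desks to provision for a given peak attendance rate, plus a small buffer."""
        return math.ceil(headcount * (peak_attendance + buffer))

    print(desks_needed(1000, 0.75))  # 800 desks for 1000 employees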
That may work for people who constantly move around, like salespeople or managers. Interestingly, managers keep their offices even though they rarely use them. For people like engineers, I think it's super important to have a steady work environment.
Chip architecture isn't Intel's current problem; their process is.
There's some overlap between the two, but 100% of Intel's current struggles can be chalked up to dysfunction on the process side.
Given the lead time of chip arch (2-3 years?), the chip side of the house is arriving at manufacturing start day, and the process and specs they'd been optimizing for just aren't available.
Until Intel's process catches up, other parts of the company have limited options. (TSMC!)
The process side of this industry is in a really scary spot right now. TSMC is killing it. Nvidia is using Samsung to fab the RTX 3xxx chips, and there's some rumblings that low yields are a reason why those are in such short supply (not to mention, whenever Samsung releases a new phone only some regions get the Exynos chips, because historically they couldn't produce enough of them; Samsung used to have a pretty cool relationship with Qualcomm whereby Snapdragons were fabbed with Samsung, but more recently, they've moved to TSMC).
On the high-performance side, everyone is moving to TSMC. If Samsung's and Intel's fabs continue to exhibit issues at these smaller process nodes, the monoculture is only going to get worse. We need Intel to get their act together, not just because it pushes better design innovation from AMD and Nvidia (not that they need it right now), but because their fabs are a critical, independent part of the hardware supply chain. They're the last Western company with any form of high-yield fabrication of high-performance chips. At this point, we shouldn't just be worried about Intel's bottom line; we need to start being legitimately worried about national security (both in tactical cyber-warfare terms, and in more nebulous economic terms with regard to Western manufacturing).
The geopolitical angle is not to be forgotten either.
TSMC is majority-based in a country whose land and government is claimed by another, nearby, much larger country. A country willing to leverage international economic options to further that claim.
TSMC's global importance has direct implications for the world's appetite for intervening in, or preemptively selling arms ahead of, a hypothetical Taiwan Strait invasion.
The interesting question is which company it makes the most sense to try to save. Intel seems to be plagued by various mismanagement and doesn't have a strong record as a contract fab for the other companies that may need a competitor to TSMC. If you're going to save somebody it may make as much sense to try to get one of the other players back into the state of the art game, like GlobalFoundries, which also operates several good (but not as good as TSMC) contract fabs in the US.
Did he get the boot or just decide to leave on his own? It was my impression that he decided to leave to cut his losses since they weren't going to be able to be turned around.
Jim's child has leukemia, but most of Jim's work there "was already done anyway".
I have a close friend, who is also a close friend of Jim's family. Apparently the arrogance of Intel management hindered his ability to actually put together the type of team that he did at AMD. I'm actually concerned about Intel now.
I thought Zen 3 would be maybe 5-8% better than Intel's current offerings, and that Intel's newer chips would be at parity with it, but this is just embarrassing.
I recall Intel intentionally took their foot off the accelerator in the midst of the GFC, forfeiting or skipping a development cycle to wait things out for a while.
Hi, MBA here, not sure what "training" you're referring to. Certainly nothing I learned in school taught me to focus on quarterly results and ignore R&D. I learned to attempt to push ROIC above WACC in order to ensure firms are generating economic profits, and I learned that we probably should capitalize R&D and include it in ROIC to show that it does impact value, among other things.
That's the problem with "MBA thinking". You can't put a dollar value on R&D. You just can't. Anything you do will be a gross approximation, and because the number is so fuzzy, you'll be encouraged (even subconsciously) to fudge it around to make other things look better, which will undoubtedly lead you to undervalue R&D to the point where it negatively impacts your business, but you have no idea why, because you're solely focused on dollar figures and not what really matters. Y'know, like developing the ability to solve actual customer problems in an exceptional manner. This sort of thing is very difficult to quantify, and any approach that starts from a finance perspective is always going to be sub-optimal at best, but often just flat-out wrong.
I see so many people optimizing the things that are quantifiable at the expense of the things that aren't, when the loss of the qualitative things is what's killing their business. MBA types seem to never get this, and the form of your reply is basically a textbook example of that.
What? Sure I can. It’s on the income statement. Then I can show that some firms see more of a return on that R&D spending than others, and that impacts how valuable they are.
It’s the same as how some firms get more return out of spending the same amount on factories as others. They are better factories.
I have to say, you're really walking right into this one. Your approach, that everything about the business can be quantified and then optimized is exactly why MBAs kill companies. The relationship between the employees and the company and the customers and the product is fundamentally emotional and therefore beyond quantification. R&D is fundamentally hopeful and creative and that future potential cannot be quantified either. Accounting is a fine management tool, especially for optimization of companies and products that already have the magic. Don't let those nice cognitive tools turn you into a paperclip maximizer.
It's on the income statement five years from now, not the one you have when you're making the decision today. The R&D paid for five years ago will commonly have been under different market conditions.
> Then I can show that some firms see more of a return on that R&D spending than others, and that impacts how valuable they are.
The question is, how do you cause your company to be the one getting more of a return?
Time being a factor is exactly why you capitalize R&D, which means making it an asset on your balance sheet. The second question is definitely an interesting one; however, I usually look at businesses more from an outside-in view, so it’s not what I think about every day.
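To make the mechanics concrete, here is a back-of-the-envelope sketch with entirely made-up numbers (flat $500/yr of R&D, five-year straight-line amortization starting the year after the spend), not anyone's actual figures:

    /* Back-of-the-envelope: how capitalizing R&D changes ROIC.
       All figures are invented; R&D is flat at $500/yr and amortized
       straight-line over 5 years, starting the year after the spend. */
    #include <stdio.h>

    int main(void) {
        double nopat = 1000.0;             /* operating profit after tax */
        double invested_capital = 8000.0;  /* book invested capital */
        double rd_per_year = 500.0;        /* annual R&D spend, assumed flat */
        int life = 5;                      /* assumed amortization life, years */

        /* R&D expensed (the default treatment) */
        printf("ROIC, R&D expensed:    %.1f%%\n", 100.0 * nopat / invested_capital);

        /* Capitalize: build the unamortized R&D asset and this year's amortization */
        double rd_asset = 0.0, amortization = 0.0;
        for (int age = 0; age < life; age++) {
            rd_asset += rd_per_year * (double)(life - age) / life;  /* unamortized share */
            amortization += rd_per_year / life;
        }
        double adj_nopat = nopat + rd_per_year - amortization;  /* add back spend, expense amortization */
        double adj_capital = invested_capital + rd_asset;

        printf("ROIC, R&D capitalized: %.1f%%\n", 100.0 * adj_nopat / adj_capital);
        return 0;
    }

With flat R&D spend the adjustment leaves earnings unchanged and only grows the capital base, so measured ROIC falls from 12.5% to about 10.5%; comparing that adjusted figure across firms is how you'd argue that some get more return out of their R&D than others.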
I love this website for blindly downvoting an MBA. Proud of y’all.
On the other hand, attempting to quantify R&D seems like a fool’s errand. Maybe quantifying between different paths of R&D to see which could pay off more is beneficial but cutting the R&D budget outright because it’s not profitable? Seems like you’re cutting away from your company’s future at that point.
My main question would be why Intel would even bother releasing Rocket Lake if, balancing between higher IPC and lower clocks, the performance would be _lower_ than the 10 series chips. So I disagree with the article that this will be an unqualified disaster. It's quite likely that they will be a little faster, at least core for core. But it also seems like these are notebook chips hacked into a desktop socket and limited to just 8 cores.
That means the best case scenario for Intel would be (barely) scraping back their "single threaded gaming performance" crown while completely giving up against the multi-threaded performance of AMD's higher core count Zen 3 chips. The only way Rocket Lake would make any sense in the market would be if these are priced less than $400 (probably a lot less), and so Intel's margins will be much lower on what is likely to be a much larger die with more transistors.
I don't think it's possible to call this anything other than a pure desperation move.
You ask: "Why Intel would even bother releasing Rocket Lake if, balancing between higher IPC and lower clocks, the performance would be _lower_ than the 10 series chips?"
My understanding from the article is that the answer to that question is that Intel is unable to produce any quantity of chips at 10nm that makes (enough) money. The author surmises that rather than go a year with AMD essentially unchallenged in the market, Intel back-ported a 10nm design to 14nm, a process they have better margins on, so that they could tout "improvements" on an architectural basis while skipping over the question of improvements on a system basis.
For me, what is most telling is the 500-series chipset, which seems uncharacteristic.
From a purely speculative point of view (that is code for pulling a wild-ass guess out of my butt here), I could see this as having been the design they had done for Apple's next-generation MacBooks before they lost out to Apple's A14X chip. The factors that lead me to that guess are that it really looks to me like a 'point' product (specific changes, not a general set of family changes) that doesn't fit into the PC ecosystem as well as other chip introductions (like Ice Lake) did with respect to their overall roadmap.
14nm+++++++++++++ is still higher performance when power is less of an issue. It clocks higher. It’s a back-port of a higher-performance part to an older but more mature node.
Rocket Lake is desktop. Apple already uses 10nm Ice Lake in its smaller laptops, and 45W Tiger Lake (10nm) is expected to be released alongside Rocket Lake.
TSMC's capacity for 2020-2021 is fully booked. Also, it would take Intel 1-2 years to redesign a processor for the TSMC process, but they thought their own 10nm was coming in less than a year so they never switched. And also pride.
In addition to TSMC being booked and designs being fab-specific, using TSMC would be a horrible sign for Intel, basically showing investors that they've totally given up on their multi-billion-dollar fab investment. It would be a total admission of defeat.
I think one reason is that they need to release a desktop CPU with PCIe Gen4. And if they can't do it in 10nm, they have to backport it to 14nm just to stay competitive.
AMD has had Gen4 out in desktop CPUs for over a year now.
They don't need to release one at all. They can skip PCIe 4 completely and go to 5. Their roadmap lists Sapphire Rapids in about a year (so in about 3 years on calendars everyone else uses).
>> My main question would be why Intel would even bother releasing Rocket Lake...
Pure speculation, but maybe someone's bonus depended on delivering and this technically satisfies the requirement. It's not the design side's problem that the fab can't deliver the right node.
This is just bad all around. Not just for intel, but for the entire industry. I always prefer companies doing well because their products are successful, not because their competitors fall down.
More and more, Apple's switch to in-house ARM designs seems perfectly timed.
Apple should have switched to AMD instead. They would have almost perfect compatibility with Intel hardware and would not need to invest lots of money in developing chips.
Now they either need to halt some Mac lines or develop all kinds of CPUs, from mobile (which they probably can do, because those should be similar enough to the iPhone chips) to server-grade (which they have no experience with at all). And there's no way Mac Pros would sell enough to offset development costs. I just don't understand how they are going to manage that situation.
> They would have almost perfect compatibility with Intel hardware and would not need to invest lots of money in developing chips.
With the ARM transition, Apple has perfect compatibility with the iPhone, which is arguably more important for Apple than Intel compatibility.
Apple is already investing tons of money in CPU design; they are just doing it by proxy, with Intel building the chips for them. Bringing it in-house means they can add the features they want to the CPU on the time frames that meet their needs.
Going with AMD just shifts who they outsource their chip design to, it doesn't give them the sort of control over their architecture they want.
"
With the ARM transition, Apple has perfect compatibility with the iPhone which is arguable more important for Apple than Intel compatibility."
You can emulate x86 on ARM just as well as you can emulate ARM on x86, so the point is moot.
The main point of switching to their own silicon is to unify their entire product range onto the same architecture.
Investing in a switch to AMD would be doing literally the opposite of what they're trying to achieve. Plus, with the big cloud players moving onto their own ARM chips, abandoning x86 moves Apple into a space that's already gaining traction at server grade.
'Now' they don't need to do anything but build the products. You act as if they only started working on the Mac chips after they announced them. They'll have been planning this since the first A-series processors started performing really, really well.
I, on the other hand, hope that Apple will stay away from x86. If they started co-operating with AMD, it would only be a matter of time before AMD would have to produce special versions of processors that only Apple could purchase, end users wouldn't be able to do any repairs, and a lot of manufacturing power would have been wasted on Macs making it more expensive for other people.
> it would only be a matter of time before AMD would have to produce special versions of processors that only Apple could purchase
How is that supposed to affect anybody else who is still buying the regular stuff, or be better for Apple customers than the same thing but with an architecture transition and the inability to natively virtualize the x64 editions of Windows and Linux?
> and a lot of manufacturing power would have been wasted on Macs making it more expensive for other people.
Apple and AMD are both TSMC customers. They come out of the same fabs.
> How is that supposed to affect anybody else who is still buying the regular stuff, or be better for Apple customers than the same thing but with an architecture transition and the inability to natively virtualize the x64 editions of Windows and Linux?
For one, these CPUs would keep the fabs busy, which means AMD would produce fewer CPUs for the "masses", and that would probably make them more expensive for everyone.
I think the inability to virtualize would kind of fit the walled garden Apple is going for.
> Apple and AMD are both TSMC customers. They come out of the same fabs.
It's not like you can produce different CPUs at the same time. It requires setup, and a particular fab can probably only produce one type of CPU in a batch. TSMC's resources are finite as well.
An unusually unfavourable article, even by the standards of semiaccurate.com.
Can this new processor family be interpreted as something less terrible than "palpable desperation" and effectively giving up on the 10nm process? For example, prices might be aggressively low.
Although this is possible, it would be a huge hit to Intel's image.
I think one can state that Zen 3 is higher performing than any Intel architecture/model across the board. What happens when they release Rocket Lake? They'll introduce a new(er), slower architecture to tackle... the budget segment?
I think that, from an engineering point of view, Intel is going to be in deep trouble for a few years, with the hope that they'll manage to pull off a new architecture in 3-4 years.
On the other hand, I've specifically written "engineering", because Intel still has copious amounts of "green persuasion", which shouldn't be underestimated.
This is one of the more oddly axe-grindy articles I've seen in a while. Poorly edited English, and writing that feels angry at some pieces of silicon. Is this just the house style for this site?
Back in the early 2000s, Charlie was a beat reporter for The Inquirer. The Inquirer was written like a British tabloid (in fact, it was split off from The Register).
As for his grudge, it probably stems from Intel's underhanded tactics. AMD had the Athlon XP, which buried Intel's Pentium 4 by every measure.
Intel made some very backhanded and immoral moves:
* Bribed Dell not to use AMD
* Threatened Taiwanese mobo manufacturers into not building Athlon mobos by withholding northbridge chips
* Blackmailed said manufacturers into not even showing up for product conventions held by AMD
* Forced RAMBUS down everyone's throat until the Athlon forced them to release the i815 chipset (which finally supported plain SDRAM again)
As a result, AMD never managed to grow their revenue to a level that could sustain their R&D. When Intel's Core arch arrived 4 years later, it was over for AMD.
The Inquirer had a lot of contacts that suffered greatly, and it must have left quite an impression on Charlie. I remember that in one of his reports, during a trade show, a Taiwanese mobo rep had to meet Charlie in secret in a hotel room to talk about AMD, for fear of Intel catching wind and retaliating.
BTW Charlie also has a grudge against Nvidia for similar reasons, but Nvidia has been competent enough for the last decade that Charlie couldn't find anything to complain about.
S|A is known for these ranty opinion articles - and dislike of Intel. On the other hand they do seem to have access to a lot of industry insiders and break important stories all the time.
I believe S|A's dislike of Intel and access to industry insiders are correlated.
Specifically I know a lot of industry insiders have a dislike of some of Intel's practices. When S|A talks about them, S|A knows to take the "ranty" tone, as a means of signalling up front to readers, "Hey, this isn't going to be a bland slide presentation..."
Yes, this is Charlie's house style and has been for ages. He's always hyperbolic and his tone always favors some companies over others. Everything's either the best or the worst (usually the worst), and so on. It's pretty much exactly like reading a tabloid.
The counterbalance is that he's got one of the absolute best networks of moles in Silicon Valley and always seems to know things, often big things, before anyone else. Sometimes even before it's on anyone else's radar.
So if you're familiar enough with him to back out his style to see the underlying (likely) facts, which isn't too hard, you can learn a lot about the big chip industry. And that's not just interesting to people like us, but very very valuable to a lot of people.
(Disclaimer: I haven't read SemiAccurate much since he paywalled it ~10 years ago, a move that I don't like at all but completely understand from the perspective of his business.)
I was curious and peeked behind the paywall years ago, when the PS4 / Xbox One stuff was leaking. Charlie did good: he knew details of those systems a year in advance.
He's ranty and very aggressive with his writing. His best stuff is kept behind the paywall. It wasn't really worth the money to me (and I only subscribed once as a curiosity). But those "moles" he talks about are clearly the real deal.
Why does “poorly edited English” matter when the content seems to be on point? Not everyone on the Internet is a “native” English speaker. Content matters, not who wrote it or how.
For me, the overly aggressive tone was detrimental while reading the article. I kept thinking: "Is this really necessary?" It reminds me of the rhetoric that we've seen in politics over the last four years.
Just a reminder: Intel bribed, intimidated and blackmailed hardware manufacturers to alienate AMD while AMD was offering the better product. Intel was fined a record 1 billion euros. Intel hasn't paid a single cent of that fine to this day.
Microsoft turned this around in 2009-2010 with one decision: Give everyone in engineering a 10-15% raise to keep up with competition. Intel’s fallen far behind on pay, so they have a brain drain problem and a morale problem. They won’t fix it until they pay people more. I think Intel’s more than 15% behind - they may need to bump by 25% or more.
Depending on which branch of the company you're talking about (fab design, chip design, ...), and based on what info you can actually get, which is less hard fact and more a large amount of anecdotal evidence, Intel is somewhere between 15% and a lot more than 15% behind.
And while I agree with your proposed first step of the solution, and the urgency to do it given the delay between that and any impact on results, their tendency to look only for short term gains and benefits will probably impede them.
They've learned to love easy money, and they didn't need to really pay their people well because there was no serious competition, not in fabs, not in chips. Now they've fallen behind on both, as was easily predicted, but for a short while they extracted massive amounts of money from their margins.
From keeping close tabs on general software engineering pay, Intel is only half what other companies pay for the similar years of experience. This means that anyone who is competitively looking in the job market is skipping over Intel. Anyone in the company who is very competent can easily jump ship and get a 2x pay increase, and what’s left inside Intel are the rest.
The US needs to do whatever it takes to get TSMC to build not only top-tier fabs but centers of excellence here. As it stands TSMC on Taiwan, which is threatened by China, has damn near a monopoly on competitive fabrication. A single hit on Taiwan could terminally stall the entire global supply chain for top-tier chips.
It can be a mutually beneficial arrangement, perhaps letting TSMC US produce some defence-relevant silicon and providing a safe haven and nest egg for Taiwan's elites in case of invasion/unification.
That is not a beneficial arrangement for the Taiwanese population, compared to "if Taiwan falls this capability is lost, so defending the island is absolutely necessary".
Taiwan's international relations are pretty delicate. I’m not sure there is a lot that they want to change right now. Controlling the manufacturing of top-end microchips is exactly the kind of card a small “neutral” country with a scary neighbor wants up its sleeve. Sure, you could have a US embassy, recognize their sovereignty, put a military base there and park some destroyers, but that would be a serious escalation with China.
Think on a decades-long time scale: electron-based Turing machines are asymptoting in design, and there isn’t much room for someone to be ahead until we start making optical computers.
Essentially, if you trap a few photons in little resonator cavities that rely on mostly all of the electrons making up the cavity, the next set of photons can pass through the cavity without interaction.
This means the light interacts with engineered electrons in a specific way to create an optical switch. This is a critical component of photonics on a chip, and was just demonstrated with Si (which is a requirement for something in the mid term).
People have been doing "neat hacks" with nonlinear optics for a long time. None of it is remotely close to being usable as a photon-controlled photon gate.
All-optical switching (i.e. photonic computing) is like fusion power generation. It's been "right around the corner" for 20 years, yet we never seem to round the bend.
It is, however, great for physicists writing grant applications.
Summary: this is converting information from the photonic domain to the thermal domain. Heat diffuses very slowly; you won't be able to put a lot of these devices near each other and run them at usable speeds without them causing each other thermal interference.
> the green light of a standard green laser pointer is generated from the non-linear interaction of pairs of 1064nm photons.
Um, no. It is caused by stimulated emission in the lasing medium, when electrons drop from a higher to lower energy level. Pairs of photons do not interact with each other.
> It is caused by stimulated emission in the lasing medium, when electrons drop from a higher to lower energy level.
In a typical green laser pointer, 1064nm light is produced via ordinary laser mechanisms, and then that 1064nm light is upconverted in a non-linear medium to 532nm.
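The arithmetic, for anyone curious: second-harmonic generation combines two photons into one with double the energy, so the wavelength halves:

    lambda_out = lambda_in / 2 = 1064 nm / 2 = 532 nm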
What's the comparison when it comes to mobile chipsets? I would love to have a laptop (12"-14", light and thin) with an AMD CPU and an actually decent GPU, but they seem to be incredibly uncommon.
I'd be interested in an x86-64 processor that took the 1+3+4 approach of the Snapdragon 875. One big core with a super high clock rate and massive IPC, three smaller ones, and four that are smaller still. A desktop CPU with a single-core performance equal to half a normal 8-core chip would be an absolutely incredible tool for IPC-constrained applications like game console emulation.
The bigger problem is you need operating system support. Linux/android has it. iOS has it. If iOS has it there’s a very good chance that macOS has it (we’ll probably find out on Tuesday).
What about Windows? Does it support heterogeneous cores at all? I don’t think it does. And until that happens, would you be able to sell such a chip to the public?
Surface ARM laptops have big.LITTLE, so there must be some kind of support, but I don't have any confidence that it's good. It will take MS a while to get it refined. People are probably better off disabling the little cores so threads don't slow down by getting accidentally scheduled on them.
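Short of disabling cores outright, on Linux you can at least get the same effect from userspace by pinning a process to the big cores. A minimal sketch, assuming (hypothetically) that cores 0-3 are the big ones; check lscpu or sysfs for your actual topology:

    /* Pin the calling process to an assumed set of "big" cores on Linux.
       Core IDs 0-3 are placeholders, not a real topology. */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    int main(void) {
        cpu_set_t big_cores;
        CPU_ZERO(&big_cores);
        for (int cpu = 0; cpu < 4; cpu++)   /* assumption: cores 0-3 are the big ones */
            CPU_SET(cpu, &big_cores);

        /* pid 0 means "this process"; the scheduler keeps it off the other cores */
        if (sched_setaffinity(0, sizeof(big_cores), &big_cores) != 0) {
            perror("sched_setaffinity");
            return 1;
        }
        printf("pinned to cores 0-3\n");
        return 0;
    }

On Windows the rough equivalent would be SetProcessAffinityMask, same idea with a different API.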
What actually happened to memristors? I read about them nearly a decade ago and we still don't have any on the market if I'm not mistaken. How come Intel didn't invest heavily in those?
And why is it only AMD that has a GPU baked into a CPU? Vega graphics are simply amazing and beat Intel's integrated graphics hands down. It's so nice to have only a single chip without buying an extra graphics card.
Unfortunately, AMD isn't very present. I see Intel sponsoring every eSports event out there and handing out i9s as prizes. I'm not tuned into the gamer scene, but I still have the impression they believe Intel reigns supreme...
With RISC-V about to enter the market and AMD beating Intel on benchmarks, Intel better pull a rabbit out of their behind to stay competitive. But that's just a layman's view on the subject.
I hope so, but they might not have a choice. I think the next AMD chips are rumored to have AVX-512, and being on a smaller process node will help immensely with heat. At 14nm, and being a backport that may not be fully optimized for 14nm, heat management may be a problem and downclocking may still be required.
Despite that, it is probably too late anyway. My industry has moved on to GPUs and FPGAs for real-time compute, but I remember when everyone was waiting for AVX-512 after utilizing 256-bit AVX/AVX2 for some time. Although the thought of a 64-core Threadripper with AVX-512 at 5GHz on all cores is appealing.
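For anyone who hasn't used it, the programming model is just wider intrinsics; a toy sketch of my own (compile with -mavx512f on a chip that actually has AVX-512F):

    /* Add two float arrays 16 lanes (512 bits) at a time with AVX-512F. */
    #include <immintrin.h>
    #include <stddef.h>

    void add_f32(const float *a, const float *b, float *out, size_t n) {
        size_t i = 0;
        for (; i + 16 <= n; i += 16) {
            __m512 va = _mm512_loadu_ps(a + i);   /* unaligned 16-float load */
            __m512 vb = _mm512_loadu_ps(b + i);
            _mm512_storeu_ps(out + i, _mm512_add_ps(va, vb));
        }
        for (; i < n; i++)                        /* scalar tail */
            out[i] = a[i] + b[i];
    }

The part you can't see in the source is exactly what this thread is about: heavy AVX-512 use tends to pull clocks down, so whether the wider vectors pay off depends on the chip's thermal headroom.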
For gamers, I expect these will be priced such that Intel will offer a better price-to-performance ratio than AMD. I've seen so many industry pundits talking about the defeat of Intel for the last 2 generations, but gamers just quietly look at the frames-per-second-to-$ ratio and keep buying Intel and the cheaper Intel-based motherboards. I think Rocket Lake will allow this trend to continue while Intel prepares Alder Lake.
Not sure how much that’ll hold; a lot of the gaming and PC subreddits and the communities there have been strongly favouring AMD chips in builds and advice. With Zen 3 chips I expect that to continue.
There's always a loud contingent of AMD fanboys, but then there's the silent majority who don't care and just buy on raw gaming performance per $, which is reflected in market share.
Intel mentally isn't anywhere near the rock bottom they would need to be to admit inferiority to AMD by actually underpricing them even though they easily have the margin to do so. This might even be the right strategy if they think they can come back in a couple of years and beat AMD, why damage the brand now?
This would not be a change in pricing strategy by Intel. Intel has consistently provided better price-to-performance ratio for gaming than AMD. Since AMD bumped their pricing with this launch, Intel arguably still holds this edge, especially if you account for overclocking and board prices.
It seems like Intel wasted at least 5 years. They thought they were invincible and that the competition would never catch up, plus they thought that if they changed the model name and repackaged it for a new socket, people would still buy it. They didn't consider the fact that people bought these processors because there was nothing else, and also that new people are becoming teens and adults and they need computers as well. Maybe they also counted on brand loyalty? I think brand loyalty ends when you need to waste your time waiting for a task to complete.
I am so happy to pre-order 5950X. I will also buy new Threadripper when it gets released.
Ceding/ignoring the smartphone/ARM market had a double impact on Intel. Obviously they missed a titanically large market.
But that huge market drove investment in fab technology outside of Intel, and allowed those outside fabs to close the gap and offer competitive, now superior, manufacturing to AMD.