Perhaps it is simpler to say that Intel was disrupted (learningbyshipping.com)
238 points by MBCook on July 2, 2018 | 196 comments



One angle missing from this story: Nokia. Nokia bet big on Intel by partnering with them at the worst possible moment. In a nutshell, Intel was looking to get in on the mobile action and Nokia was willing to partner. This was ten years ago.

When platforms failed to come together because Intel was not delivering the goods, Nokia had to scramble to get new phones ramped up around alternative platforms. Nokia lost a lot of valuable time this way right when Apple and Android started kicking its ass. I wouldn't point at it as the root cause for Nokia's rapid collapse but it certainly was a contributing factor. E.g. Meego was years late to the market partly because of this and was ultimately killed off in favor of the deal with Microsoft (another ill-fated bet). And yes, Intel Atom was a big part of that failing as well.

Bottom line is that Intel repeatedly tried and failed to get into the mobile market while being way too comfortable milking desktops and servers. Now that that market finally seems to be drying up, they have a problem and still no viable strategy to deal with it. Atom flopped. Itanium flopped. AMD is back looking stronger than ever. MS retired the Wintel brand years ago and is seemingly open to running on ARM and getting quite cozy with Linux. Apple is widely rumored to also consider switching to their quite credible in house ARM based processors.


After the N9 was a success, they could have bet on both. Buy an OS from MS and offer Meego/SwipeUI in one or two high-end devices. Elop simply seemed more interested in selling.


Meego on the N9 was absolutely amazing. I just wish Alien Dalvik had been made widely available to help bootstrap the ecosystem.


Oh yes, it was amazing. Due to my involvement with the Qt toolkit used by both, I used the N9 and BB10 (Blackberry's QNX based phone OS), in that order, before switching to Android. The N9 OS had a soul somehow, the BB10 OS was pleasantly (very) fluid and efficient. Now I'm on Android, it runs all the apps and doesn't microstutter much anymore. Oh well.


> Apple is widely rumored to also consider switching to their quite credible in house ARM based processors.

Perhaps like many others who follow industry news, I'm wondering if the future simply looks like macOS on ARM, with people moving clang targets to an `arm-apple-darwin` that's macOS compatible.

I wonder how far off this future is and what it means for the valuation of Intel. Will it translate to a small dent in their share price?


In my personal estimation? About two more Apple A-series generations. You can see from their focus on IPC over core count + speed (which is what other ARM manufacturers chase) that they're clearly tuning for desktop-oriented workloads. With where the Bionic is compared to Intel's mobile offerings, and the leaps they're making, it's not too far off. And the only other manufacturers in their realm are Nvidia (with Denver/Denver2) and Cavium (ThunderX2).


> Apple is widely rumored to also consider switching to their quite credible in house ARM based processors.

Does the Mac App Store require submission as source code?

If not, consider the implications.


It wouldn't be the first time Apple changed processor architecture.

They implemented a dynamic binary translator (Rosetta) to ease the transition, which they discontinued a few versions later -- initially released in Tiger and removed in Lion.


They require (I think -- maybe just strongly encourage) submissions to the iOS App Store to include LLVM bitcode. It’s easy to imagine them doing the same for the Mac. That would be a pretty clear signal that they plan to switch architectures.


Though it's not required, they could easily make it fashionable / desirable (anyone remember fat binaries i.e. the ones including both PPC and Intel executables in one .app container?).

Or they could slowly make it required.


> Does the Mac App Store require submission as source code?

No. You do not submit source code.

> They require (I think -- maybe just strongly encourage) submissions to the iOS App Store to include LLVM bitcode.

Apple does not currently require LLVM bitcode for iOS. It does require it for Apple TV and Apple Watch.

Additionally, LLVM bitcode is not architecture agnostic. It would not solve the problem of going from x86 to ARM.

> (anyone remember fat binaries i.e. the ones including both PPC and Intel executables in one .app container?)

This was also used in Apple's transition from 32-bit to 64-bit on Mac. And later again on iOS.

So yes, if Apple were to transition CPU architectures again, fat binaries would be the likely approach.


Bitcode is not required for iOS apps.


With the right abstraction layers, source code is not necessary to change underlying hardware architecture. Windows supporting 32-bit x86 legacy programs on ARM is one such example. Transmeta was another.


Windows for ARM is clearly pushing you into the UWP ecosystem, which means you can say goodbye to any browser that isn't Edge. Software that depends on DRM or anti-cheat, or uses shell extensions, is likely not going to work. There is a large hole caused by the lack of 64-bit support. The cherry on top of all of this is that the emulation is slow and completely negates any energy efficiency benefits of using ARM in the first place.


Wouldn't that be funny to see Apple go from employing JKH to employing Linus.


"In 2006 AMD (struggling) bought ATI for $5.4B. Intel just didn’t even notice. It was super weird."

I think this is the best line in the whole thing. Intel didn't notice, and it was super weird, and it was the best thing they ever did.

Fun fact: the first thing the engineers did with their new toy was bolt then-new R700 pipes to a hypertransport bus, so it used hypertransport to fulfill memory requests instead of a native Radeon GDDR controller.

The purchase of ATI and the Christmas morning giddiness of unwrapping that gift and turning it into what eventually became the prototype of the APU was the most brilliant thing ever.

To put it in perspective, let's count all x86 sales: all the desktops, all the laptops, all the servers, all the weird little things like x86 Chinese tablets, all of it.

Combined Xbox One and PS4 sales (both massive APUs, essentially) dwarf all of the sales both Intel and non-console AMD do; it dwarfs it in per chip and per thread. Intel only claims more sales in dollars than AMD (incl. MS and Sony deals) because of the Intel tax, not for any legitimate reason.

Similarly, let's do the same with Radeon sales vs Nvidia GPUs... take all of the Nvidia GPUs (desktop, laptop, ARM SoCs like the Shield and the Nintendo Switch, server compute, etc), and all the AMD GPUs that aren't XBOne and PS4... console sales dwarf everything else that both companies make.

AMD won both races and no one even paid attention: AMD makes more x86 CPUs (both per chip and per thread), and more GPUs. Performance-software optimization (i.e., games) focuses far more on both of AMD's platforms than on anything else.

Intel and Nvidia are now the underdogs in their own races. This happened silently, and the cheer-leading for Team Blue and Team Green drowned out reality for a bit.


"To put it in perspective, lets count all x86 sales: all the desktops, all the laptops, all the servers, all the weird little things like x86 chinese tablets, all of it.

Combined Xbox One and PS4 sales (both massive APUs, essentially) dwarf all of the sales both Intel and non-console AMD do; it dwarfs it in per chip and per thread."

Ummmm, no...

"PC vendors shipped a total of 263 million computers last year, down from 270 million in 2016." https://www.statista.com/chart/12578/global-pc-shipments/

And that's not counting chips sold to those building their own computers, nor does it count servers.

The PS4 has sold about 80 million consoles while the Xbox One has sold about 40 million, over around 4 years.


Only ~40% [1] of those PCs have discrete graphics. (Updated to 40%: it turns out GPU shipments are 140% of PC shipments, and 39% of them have a dGPU.) But even so, the APUs used in consoles are very small in number compared to what Nvidia sells in cars, CUDA, dGPUs, crypto mining, etc. The AMD numbers already include APUs. Even on a conservative estimate of 300M GPU unit sales and a 10% AMD market share, that is 30M units per year.

So the OP was so wrong I am not even sure where to start. (And to make things worse, HN is upvoting him; talk about a fake news problem.)

[1] https://www.jonpeddie.com/press-releases/gpu-market-increase...


> AMD won both races

AMD's revenue is half of NVIDIA's and about 9% of Intel's. There's something missing from your analysis...



Exactly. I picked revenue precisely because it's the metric that would be comparable if AMD were winning in the "more and cheaper" style of Amazon or whatnot. They're not. Honestly they're mostly noise still, though their recent products have been a lot more competitive.


Feels like Apple vs, say, Samsung.

Margin vs Volume


> Intel and Nvidia are now the underdogs in their own races.

I guess you and I have different definitions of what an 'underdog' means. Intel ships more CPUs (and GPUs) overall than AMD, and Nvidia does more discrete GPUs (at least in PCs):

https://store.steampowered.com/hwsurvey

I'm not sure how valuable and relevant console sales are, since manufacturers make pennies on each unit. It makes sense that AMD, being the more "discount" brand, would be in more consoles than Intel. Other than the first Xbox, has Intel ever been in a console?

AMD has turned around with Ryzen, and by some accounts is selling better lately, but Intel has had bigger numbers for a decade.


> I'm not sure how valuable and relevant console sales are, since manufacturers make pennies on each unit.

Semiconductors are a scale business. The one with most scale wins. The margins are of less relevance.

Intel has always been the unbeaten one, with the most chips out there by an order of magnitude. Now that they've lost this lead, everything (including R&D and margins) is expected to suffer.


I would argue it was the worst decision they could have made. There was a ton of talk in the press that they overvalued ATi at 2-3x its actual valuation ($5.4b), forcing them to take on hefty debt. It also directly led to the sale of Spansion (which is now part of Cypress Semiconductor and valued at 4.5x its sale price) and Imageon (which, after the ATi purchase, became Qualcomm's Adreno), literally right before both of those markets skyrocketed.

On top of that, Nvidia was valued at much less than ATi at the time and was willing to sell to AMD, given the requirement that Jensen Huang be the CEO of the new combined company. A competent CEO was exactly what AMD needed at that time, given their turnover of three inadequate CEOs prior to Lisa Su. And Huang has proven to be quite competent [1].

[1] https://www.marketwatch.com/investing/stock/nvda


AMD's biggest mistake by an order of magnitude was betting everything on CMT only to find out it was a bust, but they were stuck pumping out variants for the next 7 years.


For anyone wondering what CMT stands for, it is Clustered Multithreading, which AMD used before Zen. Evidently it wasn't as effective as SMT. I wasn't aware of it, but this link has a brief explanation. (https://www.quora.com/How-are-physical-cores-in-AMD-better-t...)


Which can be directly attributed to Dirk Meyer’s personal and technical team leadership. I highly doubt Jensen would have pursued the same course.

I’m not an Nvidia evangelist or anything, but one has proven results and the other only has failure after failure under their belt.


Do you have any source to back any of those assertions? Up until now I've heard nothing similar to anything you've mentioned. In fact, IIRC AMD had been hemorrhaging cash until very recently, when the Ryzen product line was introduced, and they still barely break even. That appears to be entirely incompatible with the rosy scenario you've described.


> AMD won both races

What do you mean by "win"? Total number of chips sold? Then why not declare Qualcomm the winner because they have shipped billions of ARM cores with integrated graphics processors?


And the irony of this is that the integrated GPUs running in those Qualcomm chips are old ATi GPUs [1].

The IP was sold off directly due to the debt AMD leveraged in their purchase of ATi.

[1] https://en.wikipedia.org/wiki/Adreno


> Intel and Nvidia are now the underdogs in their own races

For Nvidia specifically I think that's only true of the gaming market (maybe crypto as well?), but my understanding is their massive increase in value over the last half-decade or so is predominantly from the deep learning market, where they continue to dominate by no small margin due to CUDA being a hard dependency of the major DL frameworks.

I don't have any hard numbers on it, but I wouldn't be surprised if the CUDA-tax is more than the Intel-tax ever was.


For my next desktop build I've decided to go with AMD, except for the GPU, where I have to stick with Nvidia because of CUDA. I would gladly drop 15% gaming performance and invest the saved bucks into a better monitor, for example, but CUDA is so much better than OpenCL that I will have to go with Team Green on that front. Too bad, since I would like to support AMD twice...


The CUDA tax is serious. However, in this case it is mostly the fault of competing vendors that there is no CUDA or almost-CUDA implementation, since CUDA is fairly high-level, unlike an ISA with its accompanying treadmill of patented features that moves the expiry date of the "platform" 20 years into the future every couple of years.


> it is mostly the fault of competing vendors that there is no CUDA or almost-CUDA implementation

I very much agree, but I also don't have the skills or knowledge to say what kind of effort it takes to build such a language/toolchain. It seems like such an obvious opportunity for AMD, and has for so long, could it be less incompetence and more engineering difficulty and/or adoption struggles?

I know I've seen talk of CUDA-killers around these parts before, would love to hear more details from people more familiar with this stuff.


> I know I've seen talk of CUDA-killers around these parts before, would love to hear more details from people more familiar with this stuff.

From the market perspective, it seems to me that a "CUDA killer" would not actually help. I think we need a free CUDA toolchain.


There is a free CUDA toolchain: it is called clang. There just isn't a runtime library or backend for any architecture other than NVIDIA at the moment, although I think AMD is working on it.


I think that's roughly what I meant; I'd like to see AMD dump resources into making a fully open, compatible-with-all-modern-GPUs toolchain to directly compete with and replace the CUDA one. For all I know, they already are?

Heck even a non-open, AMD-specific toolchain would probably be great for consumers, but an open, cross-compatible toolchain would be even better for us, and might be better for AMD as well by allowing non-AMD-employed experts to contribute.


> I think that's roughly what I meant; I'd like to see AMD dump resources into making a fully open, compatible-with-all-modern-GPUs toolchain to directly compete with and replace the CUDA one. For all I know, they already are?

There's OpenCL. Except OpenCL performance is worse than CUDA performance on NVidia, so anyone using NVidia's hardware (essentially, everyone) pretty much wants to use CUDA instead.


Exactly. We need something roughly on par with CUDA, though it could be a bit below CUDA on Nvidia cards if it allows near-CUDA performance on non-Nvidia cards. Making all this easier to install/use/deploy could go a long way toward winning users over, too; I'll gladly let a model run longer if I spent a lot less time getting things set up properly. The integration part is no small amount of work; it needs to be so smooth that I don't have to care what kind of card I have, and can run TF/Pytorch/etc. models in the same way regardless.
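To make that concrete, here's a rough sketch of what "not caring which card I have" looks like at the framework level, assuming a PyTorch build with a working GPU backend is installed; as I understand it, AMD's ROCm port of PyTorch reuses the same torch.cuda API, so the vendor-specific pain lives in the toolchain install rather than in user code like this:

```python
# Sketch only: device-agnostic PyTorch usage. Assumes a PyTorch build whose GPU
# backend is installed and working (NVIDIA CUDA; AMD's ROCm port is said to
# reuse the same torch.cuda API); otherwise it silently falls back to CPU.
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10)).to(device)
x = torch.randn(64, 784, device=device)  # dummy batch
logits = model(x)                        # runs on whichever device was picked
print(logits.shape, device)
```

Everything outside a snippet like this (drivers, kernel libraries, a build that actually installs and performs) is the part that needs to become smooth.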


OpenCL made the mistake of being C only in the beginning while CUDA embraced C, C++ and Fortran, plus any language that would bother to write a PTX backend.

SPIR was too little, too late.


Maybe we could have a PTX frontend for SPIR-V. ;- )



Maybe! To be a success, I think it needs to integrate with TF/Pytorch/etc. at least as seamlessly as CUDA does and be clearly competitive in benchmarks of common DL stuff (e.g. training on Imagenet). It's not clear to me it's even possible to take a TF model I have and use this to run it on an AMD card.


I need to see numbers on console sales dwarfing x86 sales because it strains credulity. I don't buy it, not even close.


> Combined Xbox One and PS4 sales dwarf all of the sales both Intel and non-console AMD do;

Margins on console chips are garbage. AMD was the only chip maker desperate enough to take the deal.

Intel has 12x gross revenue and 90x operating income as AMD.

AMD is doing good work. But they’ve got a long, long ways to go.


Incumbent consoles practically don't make any money for AMD. MS and Sony sell them for less than what they really cost.


Any gamers or graphics experts here? I keep hearing that Intel's integrated graphics is terrible, yet being a typical desktop user/non-gamer, I've always avoided dedicated GPUs in favor of Intel on purpose. Intel GMAs always offered better battery life, better Linux compatibility, better operating temperatures (and zero additional fan noise) yet they did everything a non-gamer can ask for: hardware-accelerated video playback, desktop effects like the ones Steven writes about, etc.

Recently I looked into AMD's 2400G APU with much superior graphics but I don't understand what it is for. Sure it can sort of run some games at mediocre frame rates on last-century 1080p monitors, but not higher. But everything else Intel will do just as well... It looks like all these integrated GPUs are starved of memory bandwidth anyway. So what's the point of these APUs then? Where's the AMD graphics advantage? And what's the problem then with Intel's integrated graphics?


You have to remember that the GPUs described in the article were from the 2010-2012 era, and they were iGPUs in Atoms. The key to GPU performance was drivers, and Intel in those days wasn't very active in GPU driver development. At the time, updating GPU drivers for notebooks was problematic. Lots of issues were left unresolved, and Mozilla blacklisted a lot of Intel drivers from GPU acceleration. And they (purposely?) made a mess of dGPU graphics switching.

It wasn't until Broadwell, and later the Iris Graphics era, which I think was 2012/2013, that things started to pick up, with Intel actively optimising their drivers for stability, OpenGL, and performance.

AMD's graphics advantage is that their drivers are better tested. As nearly three decades of history have taught us, GPU hardware is absolutely nothing without top-notch driver support. Intel i740, Matrox, 3dfx, S3 ViRGE, 3Dlabs, PowerVR...


Actually the early Intel GPU efforts were hilariously bad, partly because they licensed someone else's GPU...

https://en.wikipedia.org/wiki/System_Controller_Hub#Poulsbo

There /still/ isn't support for this closed platform on Linux/etc based systems due to a combination of (now) outdated hardware, very few systems in developer hands (even lower interest), and the lack of performance even with proper drivers* (I don't have a good citation for that).

https://en.wikipedia.org/wiki/Bonnell_(microarchitecture)

It didn't help that the power requirements fell very solidly into the uncanny valley between 'crappy, but very low power so we can forgive it' and 'good, but uses lots of power'.


You are not correct that it is all in the drivers.

The Intel GPUs just do not have the FLOPS to compete with the high-end offerings from NVIDIA and AMD. Here is a realistic benchmark of GPUs; notice that Intel isn't even listed on the first page, that is how slow Intel's GPUs are:

http://gpu.userbenchmark.com/


My macbook runs on Intel integrated graphics, and I frequently want just a little bit more performance for day-to-day work.

In particular, if you set a 4k monitor to 1440p effective resolution in macOS, it renders a 5k (1440p doubled) image and then down-scales that to 4k. This seems like it would be ideal but in practice, things are too laggy.

I upgraded from a 1440p monitor to a 4k one, but I ended up going down to 1080p effective in order to get acceptable performance due to intel's under-powered integrated graphics. I'm fairly positive that AMD's APUs would be sufficient in this scenario.

On the gaming side, I also just got a GPD Win 2, which is a little handheld gaming computer that runs on an Intel Core m3 (graphics and all). At 720p, the Intel integrated graphics can actually run a lot of games, especially older ones. But OF COURSE I'd like to see better performance there. Unfortunately, AMD doesn't have anything available yet for that power/TDP range (7W).


> This seems like it would be ideal

This seems lazy. Instead of properly supporting 2160p, you get a sad kludge.


The work required of developers to support arbitrary DPI is more or less impossible. No system anywhere has ever managed it.

Intel GPUs are underpowered and have been for a long time. Their drivers are also full of bugs and odd edge cases.

Intel saw iGPUs as a value-add for makers of cheap systems and treated them as such. That's a major point of Sinofsky's piece: graphics ended up being important and Intel knee-capped themselves and all their partners by failing to understand that. He gives an example: You can't put a discrete GPU into that 10" laptop or Intel will kill you by removing your discounts and kickbacks, making your laptop more expensive than your competitors.


Windows has been attempting to get this working well for years, and for the most part it was horrible. But over the last year or so I've found arbitrary DPI scaling on Windows has worked very well, at least meeting my expectations and being more flexible than Mac's implementation. Mac took the less painful route but at this point Windows might finally start having the advantage here.


Here's an Anandtech benchmark from the relatively recent release of Ryzen+Vega: https://www.anandtech.com/show/12425/marrying-vega-and-zen-t...

Essentially, the current gen Intel GMAs are more or less equivalent to AMD's last gen APUs. Ryzen with Vega is around 3x faster. In the past, AMD tended to focus on the iGPU at the expense of the CPU, but with Ryzen, it's also comparable to Coffee Lake while maintaining the massive GPU lead.


> Any gamers or graphics experts here?

> And what's the problem then with Intel's integrated graphics?

I play video games. The problem with Intel's integrated graphics is simple: they have historically not been powerful enough that you can run many video games at a playable fps.


> Recently I looked into AMD's 2400G APU with much superior graphics but I don't understand what it is for. Sure it can sort of run some games at mediocre frame rates on last-century 1080p monitors, but not higher.

Historically, AMD/ATI integrated graphics chipsets were a budget-friendly alternative to avoid the performance void that Intel integrated graphics provided. Everyone who gamed on the budget end knew this. You looked for AMD/ATI integrated, or if you had the money to spend, NVIDIA had some discrete solutions as well.

Over the years, Intel stepped up their game and closed the gap. However, these days even the Intel Iris platform performs poorly where it's used, because it's typically found in higher-end machines where high-resolution monitors are provided to match.

Because pixel fill rate is a bottleneck for all graphics solutions, the gains that Intel Iris provides are set back by the higher resolutions it tends to be required to drive.


> on last-century 1080p monitors

Were you around in the last century? :)

I don't remember 1080p monitors becoming common anywhere before circa 2005.


1080p is the bane of the 21st century. In the 20th century, we had 1200p monitors (i.e. 1600x1200).

onion, belt, etc.


As of 2013, at least, the Intel chips were pretty good. Not top-of-the-line by any stretch, but 1/2 to 2/3 as fast as the discrete chip in my Macbook: http://archagon.net/blog/2013/12/19/late-2013-15-macbook-pro...

Still, I'd want as much power as humanly possible for gaming on the go. 15-20 extra frames per second matters a lot.


Intel's OpenGL drivers were traditionally quite buggy and tended to lie about what features they actually supported in hardware.

So depending on the execution paths you could even trigger software rendering by relying on wrong assumptions.


This article is missing many critical details. Most notably, it doesn't touch on the core identity crisis Intel has been having since the i386. When Intel cut off third-party "fabs" from producing its chips, it became reliant on high-margin chips that were completely vertically integrated. In a sense this incentivized the company to ignore lower-margin fields like embedded and mobile. Obviously the iPhone changed the game in mobile and showed that there is a space for high-margin mobile parts.

Intel was ill-equipped for this transition because they had spent years optimizing their CPU->factory connection. Their factories were optimized to pump out CPUs, not mixed silicon, to the point where Intel's own wireless group would produce wireless chips at TSMC. Intel faced dwindling consumer demand, which led to factories not being full, which led to less investment in chip fabrication. Keep in mind that in the last several years, Intel's main chip performance advantage was due to the superior manufacturing abilities of its factories. Better factories mean more transistors per inch, and more transistors mean more cores and IPC (assuming competent designers). Intel's inability to produce general silicon hurts them to this day, as Intel's effort to open its fabs to the general public has largely failed (Intel Custom Foundry).

A further issue that plagued Intel was its inability to produce integrated mobile SoCs in a timely manner. Intel acquired Infineon's wireless business when they realized they needed a quick pathway to building an integrated 4G modem. Ultimately the product was not built in time, but at least now this investment is kind of paying off as Intel has won the modem spot in modern iPhones. However, Intel was not able to put together a product competitive enough with Qualcomm due to the time it takes to mobilize enough resources to build a high-end SoC.


> Intel's inability to produce general silicon hurts them to this day, as Intel's effort to open its fabs to the general public has largely failed (Intel Custom Foundry).

Their refusal to fab ARM chips hurts them more than the design-rule complexity of their process.


Intel was disrupted, for sure. Basically, I'm left wondering if there was some key brain-drain that occurred to Intel. Back in grad school in the mid 90's, most of the profs thought that Intel would collapse under the weight of the x86 ISA, but one prof knew people in Intel and told me about their roadmap going out to 2010, with plans for kicking butt. All of that played out! This shows amazing foresight. However, in recent years, Intel seems to have gotten itself painted into a corner with regards to die sizes, while AMD has strategically shifted to combining chips with smaller die sizes to increase yields and increase margins with more competitive pricing.

"Interposers, Chiplets and...ButterDonuts?" -- https://www.youtube.com/watch?v=G3kGSbWFig4

Something has happened with Intel, which has lost its vaunted intelligent "paranoia."


When I worked for Intel in 2005-2007, a lot of the workforce joined near the beginning, and planned to make their career at Intel. Most of those people probably retired in the last few years.

After two years, I was still "the new guy."

They also totally goofed mobile, as the article explains. Everyone knew mobile would be a big deal; it was "obvious." Even Moore's law predicted mobile. (Every 18 months the number of transistors you can fit in a given area doubles = every 18 months you can make the same chip use half the space.)

Why did they goof mobile? I think the article explains the symptoms well, but even I have trouble explaining "why" given that everyone could see mobile coming.


In 2003, Intel was able to reinvent its CPU design center in response to the previous iteration of PC miniaturization (the laptop market was ready to explode, while desktops were slowing down). They did this by running a skunkworks design project in Intel's Israel division (https://en.wikipedia.org/wiki/Pentium_M). After the Israeli engineers were able to show how their design fixed Pentium 4's architectural issues, it became the basis for all successful Intel processors for the next decade.

Intel did great in process technology and the ecosystem around their chips. They were extremely well positioned to grab the smartphone market. Contrary to popular belief, x86 is entirely capable of the TDPs required by mobile phones. The Core m3 has a configurable TDP under 4W while making none of the compromises of Atom. It might have taken another skunkworks project to cut some modules and bring it down to the milliwatt range, but it could have been done. Intel simply allowed itself to be outmaneuvered out of the smartphone market, by complacency and unwillingness to lower the "tax". That worked fine for a decade, but now they're seeing the side effects of the resulting brain drain.


>The Core m3 has a configurable TDP under 4W

No. This is better described as a safety measure: going over 4W for more than 5 seconds? Deep throttling turns on.

The real engineering datasheets you get as a customer state very clearly that the TDP of the Y-line chips is 17W.

Intel chips at such wattages are just a lot of dark silicon.

Your analysis is wrong.


The Core m3 launched nearly 10 years too late, though. They didn't have anything near 4W in 2007.


> Why did they goof mobile? I think the article explains the symptoms well, but even I have trouble explaining "why" given that everyone could see mobile coming.

Okay, so exactly how do you sell:

"Hey, we need to get in on this mobile thing. It's an order of magnitude more chips, but instead of margins of 30 dollars per chip, we'll only receive ... get this ... 30 cents. Isn't that a great idea?"


"30 cent mobile chips will replace most most of your 30 dollar chip market, by not getting in the market you're giving a competitor a massive leg up"

Kinda like saying "why should this big company named Sears care about this little company named Amazon"


If profits are going away, there's no point in cutting your throat prematurely. You might as well milk your customers while you can.


Intel didn't make the transition the last time until it was against the wall:

https://anthonysmoak.com/2016/03/27/andy-grove-and-intels-mo...


Intel hit a huge performance/power bump in the early 00s with the Pentium 4 architecture. If it weren't for Intel Israel and the more pragmatic Core architecture, they might not have recovered as nicely as they did in the latter part of that decade.


> Intel seems to have gotten itself painted into a corner with regards to die sizes, while AMD has strategically shifted to combining chips with smaller die sizes

This line has "Moore's law ended" written all over it. One player expected it, the other didn't.


AMD has basically always been behind Intel in process, so they couldn’t count on that to get them to the top, they had to do something else.

Intel has always been on top, so they had less incentive to prepare for that. As Ben Thompson said in his article, Intel just assumed they could keep going without running into process-shrink problems of the magnitude they have.


For FIFTY YEARS, the corpses of those who predicted the demise of Moore's Law and were wrong have littered the graveyard of semiconductor companies.

Intel survived because they bet on Moore's Law--while DEC and IBM could get 30-40% more performance out of good design, Intel could just wait 6 months. That eventually crushed the companies that could do excellent VLSI design.

Suddenly, now that Moore's Law finally broke, everybody needs VLSI designers back. However, those designers are all gone--dead, retired, or so screwed over by management that they are never coming back.

I know a lot of people in their late 40's to early 60's from that industry and they all have similar sentiment: "I will run a hot dog stand before working in a semiconductor company ever again."


Interesting.

Couldn't younger people learn good VLSI design? I guess they would need mentors though to help them get up to speed quickly.


There is a remarkable amount of tribal knowledge surrounding VLSI design that isn't written down.

Things like: "Why don't you tie a pass transistor to a storage node directly to a long bus?" killed at least 3 microprocessor designs (that I know of).

There are a million things like this, and none of them are worth writing down since they are of zero use to anyone outside of the industry/company and often are specific to design techniques or fabrication lines.

Samsung moved almost 100 engineers to Boston in order to try to second source manufacturing for the Alpha, had DEC engineers who were happy to help them, and still was only marginally successful.

In VLSI, 6 nines means you still have 200 failing transistors on your 200 million transistor chip.


Weren't their "plans for kicking butt" in the mid 90s largely based on Itanium?


They had to do with how Intel was going to turn the inside of their x86 chips into something resembling RISC chips.


Um, they spent a gigabuck and drove DEC and HP out of the market.

I'd say that kicked butt.


HP saw that they couldn't compete with PA-RISC against POWER, MIPS, SPARC, and Alpha. So they threw in with Intel to develop Itanium in order to be competitive. Intel saw the lucrative server market. Too bad Intel took too long to ship Merced (the first Itanic). At least we got the Itanium C++ ABI out of it?


This is a much better piece than the @stratechery one. And my thoughts on @stratechery were the same as S. Sinofsky's: it wasn't the integration that was the problem, it was the go-to-market. I think the term "go to market" is better than what I described as "vision and execution".

Had its execution been perfect (its roadmap of better IPC, more cores, and 10nm/7nm nodes), it would still have had lots of room to adjust. Although I think better execution would only be delaying the problem. Like the article said, it really is multiple things.

Had Intel set out to make the best SoC for netbooks.

Had Intel set out to make the best graphics within that die space.

This is the classic case Steve Jobs described, where companies with a monopoly are run by sales and marketing people [1] and product people (Pat Gelsinger) are driven out of the management decision process.

I still remember one video interview with Pat Gelsinger (I can't remember if it was the EMC or VMware era) where he clearly described how x86 would rule the server space as long as it continued to improve and innovate, and how trying to create a half-hearted x86 SoC for mobile wouldn't work. And it was all too late. (I can no longer find that video.)

And some people might think S. Sinofsky's view on Intel graphics is a little strange; remember, Intel graphics didn't become good until Apple forced them to be better. S. Sinofsky left M$ in 2012, so the design of those Surface devices must have happened around 2010, and to make matters worse, the graphics on Atom were a generation behind what Intel had on the desktop.

[1] https://www.youtube.com/watch?v=-AxZofbMGpM


Intel is resilient, if anything.

They have had a ton of flops and misdirections. The 186. iAPX 432. i740. XScale. Itanium. The FDIV bug. Pentium 4.

AMD has had Intel up against the ropes more times than they can count. Faster and cheaper 386 and 486 clones threatened them. The K6 came around and beat the Pentium II. AMD leapfrogged them with their 64-bit chips. Intel has always come back and stomped them right back into oblivion. They'd probably be close to turning the lights off if it weren't for Ryzen.


> Intel has always come back and stomped them right back into oblivion

Intel illegally abused their monopoly to prevent AMD from getting as much traction as they should have gotten. They've been convicted and paid billions in fines over that behavior.

Intel should have suffered much more at the hands of the Opteron and Athlon 64 chips; if they hadn't been abusing their market position things would have worked out better for AMD and their stumble with Bulldozer wouldn't have hurt as much as it did.

For the curious, Intel would sell CPUs at retail to OEMs, then provide a rebate/kickback to discount those CPUs. Shipping AMD processors would reduce those rebates. Shipping too many AMD processors would eliminate the rebates outright. This made selling AMD-based systems untenable for many OEMs.


Intel hasn't paid a cent of those fines; in fact, they recently won an appeal for the first time.

https://www.ft.com/content/f460ef98-930f-11e7-bdfa-eda243196...


Interesting, thanks for pointing that out. However, your link is paywalled. Do you have another source?


damn FT, it wasn't paywalled when I went there from Google :/

https://www.google.com/search?q=Intel%20wins%20review%20of%2...


This is exactly what I was thinking, too. I remember well the days that AMD was beating Intel to market in 800MHz clock speeds for a fraction of the cost of an Intel chip and lots of people were assuming AMD was going to overtake Intel. But for some reason the chips were either overhyped or just weren't able to gain mindshare of the average PC consumer on a large scale and AMD's stock price has seemingly been frozen in time.

I'm not sure whether or not the latest jabs at Intel and the prophecies of its inevitable decline are accurate this time or rooted in people's hate of older giant tech companies, but this is not the first time I've read of Intel's demise. Just going by earnings, it's at best premature to cede the future to AMD. I will say that BK's abrupt departure did raise some concerns in my mind.


Was the 186 really a flop? Maybe it wasn't successful as the CPU of a PC but it was used in embedded systems and peripheral cards. That seems to be the role it was designed for with the 286 intended as the solution for PCs. It was manufactured for 24 years so clearly somebody found it useful.


IBM was considered resilient, now they are on the way to becoming Unisys.


Sorry to nitpick, but the Pentium 4 was a temporary flop until the HT version came along, for which AMD had no answer. And I don't think the K6 beat the Pentium II. Having bought a $2000 K6-2 computer in 1998 with maxed-out RAM and graphics, I instantly regretted that decision, as all my friends' mediocre Pentium IIs ran applications and games better than my K6-2.


Not even a nitpick, the claim that P4 HT had "no answer" from AMD is just false.

https://www.anandtech.com/show/1517

Intel never definitively reclaimed x86 until Conroe/Core 2.


The original HT implementation was so incredibly bad that people would turn it off to improve performance on multithreaded loads.


So true! It only worked on example code, it was very difficult to get an improvement in a real world app.


While it may be simpler, it may also be inaccurate. The fact that Intel's dominance isn't as pervasive as before does not mean it's no longer #1. AMD has a long way to go.

On a related note, to my knowledge AMD is still essentially absent from the autonomous driving market, with NVIDIA and Intel fighting to be top dogs through different approaches¹. With some estimates for the size of that market (including non-chip portions) going as high as trillions of dollars², I hope for AMD's sake it has a plan to catch up sooner rather than later.

__________

1. https://www.benzinga.com/top-stories/18/02/11148965/which-ch...

2. http://fortune.com/2017/06/03/autonomous-vehicles-market/


They're too busy playing catch-up with their ROCm deep learning library.

I wish they weren't so behind in everything I care about and would give us some competition so that graphics card prices go down already.



And he (Jim Keller) recently left Tesla to be SVP in charge of the Silicon Engineering Group at Intel: https://www.anandtech.com/show/12689/cpu-design-guru-jim-kel...


When will Manchester United hire him?


What Autonomous Driving market? From what I have heard, right now autonomous driving even by leaders like Waymo is incredibly small-potatoes and the rigs running in these cars are mostly prototypes built with commodity hardware.

It's just way too early to start picking winners and losers in the hardware space for autonomous given that nothing is being mass-produced. And there is no price-competition in the space because the major players are willing to pay whatever it takes as part of their R&D efforts to win the race.



All the big automakers in Germany are looking for hardware manufacturers that will deliver them autonomous driving hardware. Their typical lead time is >3 years and they definitely plan to have such hardware in mass produced cars 2-3 years from now.


I remember getting money from Intel to put MMX instructions into a game and later getting money to use hyper-threading. In both cases, at the time, the improvement was at the margins and would not have been worth the work if it hadn't been subsidized. It seemed like the best thing you could say about it was that it didn't work on an AMD chip.


Nowadays, MMX (and newer SIMD instruction sets) are a crucial part of many optimizations. Hyperthreading is provided in all modern processors. Basically, those technologies proved themselves.


At the time, the newer SIMD instruction sets did not exist, and the MMX instruction set was difficult to apply in a way that got us an advantage in our application. When you say all modern processors have Hyperthreading, are you sure you don't mean multithreading? I am not sure every processor has Hyperthreading. I am not knowledgeable enough to argue about the overall value of MMX and Hyperthreading, but I think the point of the article at the top of this page is that, in the end -- although maybe not at first -- it was counter-productive.


As someone who left Intel last year for one of its customers in the Valley, I have to say I'm sad that the future looks pretty dark for them. In the exit interview, I said I might come back if the CEO and my department head left (they both did), but I didn't know the company was such a mess.

That said, Intel is viewed from the outside as sort of an old monopolistic monolith, but a lot of that profit gets plowed into a bunch of truly innovative semiconductor R&D that I do not believe is being replicated by competitors. 3D XPoint is probably the most visible example. I doubt AMD, TSMC, Samsung, etc. will pick up the torch.


So where do you feel their competitive advantage lies right now, by the way?

You make it sound both like they are struggling lately and like they have stuff lined up that cannot be competed with. That's slightly confusing to me; could you elaborate?


Well the two are not mutually exclusive. I worked there for close to 5 years on a few different silicon-related projects that I characterize as unique in the market and not a single one has even made it into an announced product. These were not small endeavors, so either the development times for silicon are extremely long or Intel cannot execute on its ideas. Regarding the former possibility, I heard from people who were around since the beginning that 3D XPoint was in development for a decade before it was announced in 2015.


Oh, I agree, but what you describe seem to be latent / possible-in-the-future breakthroughs. Until they see the light of day they are irrelevant IMO.


From the article it sounds more like Intel kept building up a moat that everyone got sick of crossing.


Ya.. I go back and forth in my head a LOT about the importance of moats. Constantly second guess myself. You sacrifice SO MUCH velocity building a moat, and you cut off any possibility of other companies helping “rise all ships”.

But at the same time, aren’t there competent cloners lurking behind every corner ready to jump on your product market fit and outpace you?

This article pushed me a little further into the “don’t worry about moats; make your partners successful” camp. But I waver.


Reading this makes it very clear why Apple would be moving towards using its own chips in laptops.


Besides everything else going on, they left 68k because it was falling behind and couldn’t compete or speed up enough. They left the G3/G4 line when once again they were stuck and Intel was running away.

So they went to Intel. They’re now on the same treadmill as everyone else. Can’t do better, can’t be totally stuck. But Intel isn’t progressing very fast, and they don’t seem to be focusing on the things Apple wants like ultra-low power.

But Apple’s chip division is kicking ass. The 10.5” iPad Pro is a monster, and could easily power a MacBook faster than an Intel chip. The real question is what to do on the MBP and iMac Pro.

Apple has never been keen on AMD for some reason, not sure if they have a chance on the high end. My impression is they don’t compete on low end/low power but I don’t know how accurate that is.


> The 10.5” iPad Pro is a monster, and could easily power a MacBook faster than an Intel chip.

Very true. I have that exact iPad Pro (10.5") and the MacBook 12" and I can tell you that the iPad Pro is much stronger. The damn thing never stops for breath. Even trendy games with very very detailed graphics take 10 seconds to start on a bad day (usually 5-6) and they run at 60+ FPS at all times.

If we are talking about everyday apps and Safari, the iPad Pro is the last tablet you will ever need in your life. It's simply perfect.

> The real question is what to do on the MBP and iMac Pro.

My only complaint about my other MacBook (Pro 15", 2015) is the loud fan noise, to be fair. I know Apple's thin and beautiful laptops are what people love, but I'd buy a slightly bulkier MacBook if it was quiet.

Speaking of which, the iMac Pro is the quietest machine of its class out there. Definitely gonna gather some cash and buy it. I don't think I'll need another desktop machine for at least 7 years, if not 10-12...

Never been a fanboy in my life but I honestly can't wait to see Apple chip exclusive iPhones and MacBook. (I am mentioning iPhones because they still use Qualcomm's or Intel's radios and chips for their signals stack.)


A Mac Pro with an Epyc CPU seems like it could be pretty sweet.


Is there actually any fundamental reason an ARM chip couldn't be made as powerful as Xeon or that it's harder than it is for x86, or is it just a matter of the (obviously expensive and time consuming) research and development not having been done yet?


Don't forget the full (not i) Mac Pro, which they've committed to updating.


Pretty sure an ARM tablet chip is not going to beat Intel, even on the low end. Otherwise we would have moved on already. Also, Apple does like AMD and uses their GPUs because they have bad blood with Nvidia. Apple primarily makes laptops, and AMD was not competitive until they developed Ryzen.


https://9to5mac.com/2017/09/22/iphone-8-geekbench-test-score...

The iPhone 8 CPU is competitive with a Core i5


That claim would be highly suspect in general, given that a desktop CPU has at least 10 times more headroom in power consumption.

For such a claim to be true, you would have to expect that the phone CPU is somehow 10 times more efficient per Watt, while having the same kind of architecture (general CPU, not GPU/TPU/...).


I think he might be correct.

Apple A11 has 297 GFlops CPU performance. (1)

That “7th gen i5” is apparently i5-7360U. 32 FLOPs/cycle/core (2) * 2 cores * 3.50 GHz (=max turbo) = 224 GFlops.

> the phone CPU is somehow 10 times more efficient per Watt

Not 10 times. The i5's TDP is just 15W. I don't know about the A11, but high-end Qualcomm chips can be up to 5-6W, just a 3x difference.

> while having the same kind of architecture (general CPU, not GPU/TPU).

That's a continuum. They're both CPUs, and they both have SIMD (that's how they achieve that high FLOPs/cycle), but the architectures can be quite different. The A11 has 6 cores. Intel spends many transistors, and therefore energy, doing very complicated things: branch prediction, indirect branch prediction, cache synchronization between cores, speculative execution… Apple has full control over the OS and software, so they can probably get away with not doing some of these. Intel and AMD have to maintain compatibility with all software (including single-threaded), built by all compilers and languages, which is why they have fewer tradeoffs available to them. (Rough numbers spelled out below the links.)

(1) https://forum.beyond3d.com/posts/2021926/

(2) https://stackoverflow.com/a/15657772/126995
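Spelling out the same back-of-envelope comparison in a few lines of Python: the 297 GFLOPS figure for the A11 is the estimate from [1], and the i5 number is the FLOPs/cycle * cores * turbo-clock peak from [2]. Both are theoretical peaks, not measured throughput, so treat the ratio as a sanity check rather than a benchmark.

```python
# Back-of-envelope peak-FLOPS comparison using the figures cited above.
a11_gflops = 297.0                     # Apple A11 CPU estimate from [1]

flops_per_cycle_per_core = 32          # per [2], for an FMA-capable Intel core
cores = 2
turbo_ghz = 3.5                        # i5-7360U max turbo clock
i5_gflops = flops_per_cycle_per_core * cores * turbo_ghz  # = 224 GFLOPS

print(f"A11 ~{a11_gflops:.0f} GFLOPS vs i5-7360U ~{i5_gflops:.0f} GFLOPS, "
      f"ratio {a11_gflops / i5_gflops:.2f}x")
```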


If you give the i5 its full voltage and crank up the clock speed, it'll totally beat an Apple chip. But in order to fit in a MacBook Air's power envelope you need to really throttle things back, and a chip that's designed to operate in that regime has a big advantage over a chip that isn't.


It is true, but at the same time it is not. Geekbench measures the unthrottled performance of a CPU. One thing you have to consider is that ARM SoCs contain a cluster of heterogeneous cores: high-performance, low-efficiency cores for bursty workloads or games, and highly efficient cores for "sleepless" standby.

What happens is that people see a benchmark and they see low power consumption and now conclude that it must have 10 times better performance per watt. x86 is bad, ARM is good

But the thing is, you cannot have your cake and eat it too. The ARM SoC is clearly more efficient, but only during standby or low-intensity workloads, and because everything is integrated into a single chip. As soon as you run a constant CPU/GPU-intensive workload like servers, video games, or even just Google Maps [1], you will have the same energy efficiency as Intel chips.

[1] My phone battery barely lasts 2 hours with google maps


It’s perfectly possible that the iPhone CPU can’t run 4 hours straight due to heat issues in its enclosure while a desktop CPU can. But that could be fixed.

Intel has not been optimizing for TDP/power draw as aggressively as Apple has. Apple started from that position and went up in performance. Intel started more in performance and had tried to keep TDP in check.


That's a myth; Geekbench results aren't comparable cross-platform.


From the linked article:

"Geekbench comparisons between phones and laptops used to be pretty meaningless, as the tests were not directly comparable, but that hasn’t been the case for some time now. Founder John Poole confirmed that it is legitimate to directly compare scores across platforms"

Look at these workloads[0] and tell me what doesn't look comparable across platforms? Parsing a 1.5MB HTML page, running SQLite queries, compiling code using LLVM, running Dijkstra's algorithm on a huge graph, repeatedly rendering PDFs, etc, etc.

[0] https://www.geekbench.com/doc/geekbench4-cpu-workloads.pdf


No, that is no longer true. Since Geekbench 3, results are comparable cross-platform.


Video encoding is a pretty poor benchmark for this.


I know you are wrong because Geekbench scores are showing the A-series chips to be competitive with Intel chips already.


Can't they have an architecture where specialized chips deal with the most common heavy-duty tasks to make up for lower raw CPU power?


I think it was a very smart move of them to hedge against exactly this situation by introducing the A-series chips (and, to this end, building the capabilities in-house). Of course, hindsight is 20/20 and we'll probably never know 100% if that was the core reason to do it, but if it was, it was brilliant.


I’m not sure that was the plan so much as their ruthless search for power efficiency and integration on the phone.

But wow does it look like it will pay off.


The A series is just custom ARM. It's really nothing groundbreaking. They created ARM chips to power phones and tablets. The possibility of moving full computers to ARM was just a bonus, not some genius plan.


"Just".

Apple's mobile CPUs are several years ahead of all the competition in terms of single core performance.


Every successful Apple innovation goes through the same phases.

1. Well that's stupid. Why would they do that? Nobody needs this.

Within a few months...

2. Huh. This is actually really sweet. Nobody else has this technology.

And then a few years pass...

3. It's really nothing groundbreaking.


Except for the things Steve presented... they started at 2.


Like iOS is just custom NetBSD?


I don't think Apple cares enough about a line of products that makes up less than 10% of their sales to go through the engineering effort to create ARM-based laptops.

I think it's more likely that Apple will make iOS more flexible on the iPad and make it a true laptop alternative.


Or Apple could decide that the resources spent tending x86 hardware and dev tools for just 10% of their products would be better spent on one architecture (ARM) across all products. Switching Macs from x86 to ARM would be a one-time investment compared to the ongoing costs of maintaining separate x86 and ARM product lines.


How is it a "one-time investment"? They would still have to create processors specifically for the high end, where thermal limits aren't as big of a concern, for use in their desktops.


Not necessarily, they could run them at a higher clock and perhaps put two or more of them on the motherboard for the high end units. Keep in mind that the iphone 8 is already on par with their fastest 13-inch macbook pro.


How would that compare to the top end Intel chips that Apple is using in the iMac Pro and the forthcoming Mac Pro?


Apple does not keep two products around that fill the same niche. If the iPad were upgraded to serve as a true laptop alternative (surface pro style), they would get rid of the laptops.

I don't think they are though. The current ARM chips in the high-end iphones and ipads are too powerful for mobile. They don't make sense from an investment standpoint (why not spend that hardware effort on rejuvenating the mac line which can make more money?). They make a lot of sense as a midway point to a performance tier where the A series can serve as a laptop chip.


What does a "rejuvenated Mac" look like? The entire market for traditional PCs has been declining for years. What could Apple possibly do to reverse that decline? It could very well be argued that the current processors for PCs are "overserving" the market.


The watch is less than 10% and the HomePod is less than 10% and they seem to manage just fine custom designing chips for them. And at some point the outlier in your example is going to be the Mac, and I don't think Apple would care enough about it to go through the engineering effort of creating x86 laptops when all their expertise is in ARM.


The watch is part of the iPhone ecosystem. You can't use a Watch without buying an iPhone. The watch is an accessory to the iPhone.

From a software side, watchOS is a trimmed-down version of iOS and shares many of the same frameworks. The HomePod is an even further trimmed-down version of iOS.

"Designing x86 based hardware" is relatively easy. Intel does all of the heavy lifting with designing and integrating the components. Apple has shown any expertise in designing chips that are a match for Intel's on the high end.

Every other processor transition that Apple has done - 68K -> PPC -> Intel involved a massive CPU advantage that made emulation viable. That wouldn't be the case with x86 -> ARM. On the other hand, Apple had to migrate processors just to stay competitive with Wintel each time. If they stay with Intel, they don't have to worry about not having competitive performance with other PC manufacturers.

Finally, Apple has shown no expertise in delivering chips that are competitive with Intel on the high end. I'm sure they could do it, but they don't get the benefit of spreading their engineering fixed costs across almost a quarter billion devices a year if they develop a high end ARM desktop chip like they do with iOS devices.


> The watch is part of the iPhone ecosystem. You can't use a Watch without buying an iPhone. The watch is an accessory to the iPhone.

So? The iPhone was part of the PC ecosystem; for years you needed a PC to even set it up. That says nothing about its architecture or the resources required to develop it.


It says a lot. The Watch can either lead to more sales of the iPhone (if you want an Apple Watch, you have to buy an iPhone), or, if you already have an iPhone, it provides additional functionality to the iPhone.

The Mac doesn't benefit enough from the iPhone tie-in to make it worthwhile as an adjunct to the iPhone.


It will be nice not to pay the Intel tax.


This is Tim Cook's Apple we're talking about, where margins are king. The cost savings from switching away from Intel, if and when that happens, will not be passed on to the consumer.


Was there ever an Apple where this was not the case? They’ve been making $5000 computers for decades.


The LC II was a big deal as a cheap color Mac when it was introduced at $1700. Of course, inflation. Today that's $3000.


Largely agree (dropping prices some would help with sales).

But I think Tony’s point was it would be nice for Apple to not have to pay the tax. It would help their bottom line.


The Windows ARM laptops are hardly cheaper than their Intel counterparts, but they have worse specs and inadequate software compatibility.


If Apple does in fact move towards their own AX CPUs for PCs, then they might start with a hybrid approach before completely eliminating their dependency on the x86 instruction set.


I think the Touch Bar MBPs include an ARM chip (Apple's T1 coprocessor), so in a sense they have already started the hybrid approach.


I was thinking something similar. It might also explain the slow progress on the Mac Pro and MacBook Pro as they wait for those lines to be reborn on Apple CPU silicon. Time will tell.


I'd rather have Apple turn the iPad into something that can replace a laptop for most people.

If they don’t do it, expect Microsoft to perfect a future version of the Surface.


I mean, you just described the Surface. Apple has been making god-awful decisions for the past 5 years or so.

MS is beating Apple at its own game.

The iPad Pro is a great example of how out of touch Apple is.


Except Windows 10 is really, really terrible.


Except it really, really isn't.


Depends on the point of view.


True, it’s entirely my opinion.

I still can't get Windows 10 to stop turning down the brightness on my screen, or to keep my SSH sessions open if I undock with a closed lid, and this is on two different laptops.

As for use outside of an enterprise setup, I really dislike how it auto-installs installers for free-to-play, Facebook-style games. I don't want my operating system to do anything remotely like that.


All you have to do to get it to stop automatically installing random games without your permission is set the SilentInstalledAppsEnabled DWORD to 0 under HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\ContentDeliveryManager in the registry.
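
For example, here's a minimal sketch of that edit using Python's standard winreg module, assuming you'd rather script it than click through regedit (the key and value names are the ones mentioned above):

    import winreg

    # ContentDeliveryManager key under the current user's hive
    key_path = r"Software\Microsoft\Windows\CurrentVersion\ContentDeliveryManager"

    with winreg.OpenKey(winreg.HKEY_CURRENT_USER, key_path, 0, winreg.KEY_SET_VALUE) as key:
        # Setting SilentInstalledAppsEnabled to 0 should stop the silent "suggested" app installs
        winreg.SetValueEx(key, "SilentInstalledAppsEnabled", 0, winreg.REG_DWORD, 0)

You may need to sign out and back in for the change to take effect.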

And people say that Linux is the OS that requires bizarre incantations to do basic things.


I never needed to do that on my W10 Pro copies.


Well, lucky you. It clearly happens to a lot of people's Windows 10 Pro copies, as evidenced by all the people complaining about it. It certainly happened to mine, on multiple computers, until I made the registry edit. Did you disable the Windows Store entirely or something? I've gotten my copy of W10 Pro to behave in a semi-sane way, but it took an obnoxious number of group policy edits, registry edits, and random obscure PowerShell commands to get there.

Do your W10 Pro copies also include a way to disable Bing in the Start menu search bar without multiple registry edits? I would love that. It was by far the most obnoxious change in the 1803 update.


Plain W10 Pro OEM releases.

I just removed all the tiles, reverting to a plain old Start menu.

Also, I only use standard Windows accounts, with a special one for testing UWP and Store-related stuff.


I wonder how these W10 copies auto-install software.

My W10 Pro copies surely don't.


Do you mean, essentially, that Apple refusing to make a touch-enabled notebook-style computer shows how out of touch Apple is? Or something else, like not developing a better keyboard?

As Apple tells it, they do not foresee merging macOS and iOS, as the user interfaces are too different.

Microsoft is more willing to ship products with bad UX and power through, relying on its market position and a willingness to stick with it until they get it right.

I cannot see Apple, as it is, releasing something like Windows 8.

The other thing is that iOS is the money maker. Even if the iPad doesn't sell as much as it might, it supports the critical mass of the iOS ecosystem, which is a point of vulnerability for Apple given its much smaller market share.


> The iPad Pro keyboard cover is a great example of how out of touch Apple is.

FTFY. The first time a colleague requested an iPad Pro with a keyboard and got it, I was blown away by how much origami Apple expects you to do to properly use it.


I haven't used the keyboard cover, just the basic Smart Cover, and I constantly feel like I must be using it wrong. A lot of the time I am using it wrong without knowing it, and after a minute or two the whole thing collapses.

Not to mention it's impossible to use the Smart Cover in portrait mode, so holding recipes while I'm cooking means propping it up against the toaster.


I've been using an origami cover for my iPad 12.9" and am reasonably satisfied with its different configurations, including portrait. https://www.amazon.com/gp/product/B072JZ1K55/


That link shows what one might call intuitive origami; Apple's Smart Cover falls in a category all its own. A good 60% of the time you fold it incorrectly when opening or closing it, and it can take up to a minute of finagling before you figure out how to fold it so it stays shut/open.


Everyone is ultimately proprietary even in the Open Source world. Once you invest heavily in any sort of architecture or platform, you’re not moving anywhere with your current investment.

I've argued this point repeatedly, in real life and on HN, with people who don't want to use any of AWS's proprietary infrastructure, or with developers who want to use the repository pattern for the sole purpose of "not locking themselves in" to one vendor.

I've never in 20+ years seen a major organization switch away from a well known vendor to save a few dollars. Even if you theoretically can change your infrastructure, the risks and the amount of regression testing you have to do hardly ever make it worthwhile to switch.

As for open source coming to the rescue: if someone else doesn't take over an open source project, more than likely your company won't either.


I am largely in agreement with you; however, there are many well-documented cases, right here on HN, of companies bleeding a lot of money due to their usage of AWS and a few other providers. Such services are really good when you are starting off -- they eliminate a huge upfront time cost, and for most startups time to market is the difference between life and death. And as you start out, they are pretty cheap as well.

However, once the startup takes off, becomes cash-positive, and has to scale, AWS in particular becomes a huge line item on your balance sheet. It's a very well executed vendor lock-in, I will give them that.

Many companies use European VPS hosting, and once they have a mid-level business contract that guarantees 99.9999% uptime (because the provider can replicate across 3 separate physical data centers), they are happy with both the service and the cost. Granted, they need their own ops teams, but long-term this scales much better and is sustainable.

Managed cloud infrastructure is mostly overrated, and its complexity is by design. AWS in particular is borderline crazy lately: they have dozens and dozens of interconnected services, and "AWS consultant" is now a commonplace title in CVs.


Amazon doesn't "lock you in" with VPS hosting. If that's what is costing you, you can access all of AWS's other services over the internet. You can set up a VPN from your colo center to AWS and still access all of their managed services.

How is it less complex to host your own database servers, load balancers, queuing system, redundant storage, CI/CD servers, ELK stack, redundant memcached or Redis servers, distributed job scheduler (I've used HashiCorp's Nomad in the past), configuration servers, etc.? These are just the services where you don't have to manage the underlying servers and you get redundancy built in.

I’m first and foremost a developer. But the money the company I work for saves by not having to hire dedicated people to manage and babysit servers more than makes up for the cost of AWS.

It just so happens that I know AWS well enough and have experience as an “architect” (and have the certifications to give them a warm and fuzzy) to be competent at the netops and devops side of things.


> How is it less complex to host your own...

It is not less complex. It is definitely more complex and is harder. My point was that financially it is more sustainable long-term. And once your org is bigger, a dedicated ops team gives much more peace of mind. Whoever is on shift flips 3 switches and things are back to normal in 2 minutes, 99% of the time. That is not always the case even with a huge provider like Amazon or Google.

> But the money the company I work for saves by not having to hire dedicated people to manage and babysit servers more than makes up for the cost of AWS.

Disagreed. Periodically, articles pop up here on HN that prove, with numbers and historical timelines, that AWS only saves you time and white hair while you are smaller. Once you start to scale up and/or use more of their services, the bills start to pile up faster than before (people have mostly analyzed the exponential growth of their billing). Apologies that I don't keep the links, but I remember reading at least 5 such articles in the last year, right here on HN.

> It just so happens that I know AWS well enough and have experience as an “architect” (and have the certifications to give them a warm and fuzzy) to be competent at the netops and devops side of things.

Good for you; many of us don't, however. I honestly have no intention to either. It's a very specific vendor cloud system with a huge amount of proprietary tech baked in, and I have no desire to entangle my career prospects with its success. Career preferences. ;)


> It is not less complex. It is definitely more complex and is harder. My point was that financially it is more sustainable long-term.

Can you set up duplicate infrastructure that's close to your international facilities worldwide any cheaper? Even support would be cheaper, because you can have one central team manage your worldwide infrastructure. Netflix moved their entire infrastructure to AWS. You're not taking into account the cost of employing people to babysit and do the "undifferentiated heavy lifting", the cost of over-provisioning just in case, the cost of the red tape to provision backup hardware for failover, etc.

> And once your org is bigger, a dedicated ops team gives much more peace of mind.

You can easily outsource netops to a dozen companies that can do it cheaper, because they manage multiple accounts and can outsource the grunt work to cheaper labor. I know; I've worked at two companies that outsource day-to-day management of netops.

> Whoever is on shift flips 3 switches and things are back to normal in 2 minutes, 99% of the time. That is not always the case even with a huge provider like Amazon or Google.

You’ve never had to deal with a colo center. Have you ever dealt with AWS support as a representative of a large company?

> Disagreed. Periodically, articles pop up here on HN that prove, with numbers and historical timelines, that AWS only saves you time and white hair while you are smaller. Once you start to scale up and/or use more of their services, the bills start to pile up faster than before (people have mostly analyzed the exponential growth of their billing).

If the fixed cost of your infrastructure is growing faster than your revenue... you're doing it wrong. Even if you can get simple VPS hosting cheaper elsewhere (and yes, you can), that's not where you get the win from AWS. You can host your VPS anywhere and still take advantage of all of AWS's services.

> Good for you; many of us don't, however. I honestly have no intention to either. It's a very specific vendor cloud system with a huge amount of proprietary tech baked in, and I have no desire to entangle my career prospects with its success. Career preferences. ;)

That's just what the original article said. You always tie yourself to a specific technology and risk that technology falling out of favor. The trick is to stay nimble and keep learning. Right now, there is a lot more money and opportunity in being a "Cloud Architect" than in knowing how to set the same things up on-prem.


I can't comment on the veracity of the article, but I'd like to add that it was a compelling read.

I am a fan of AMD because I like competition, especially from a spunky competitor. I'd say the same about Nintendo, which keeps finding ways to win by focusing on fun over specs.


> ...key assumptions...:

> ...

> Discrete graphics

I get the impression that Steven is willing to make up the history to match the conclusion.


How so?


Intel was the first to bring reasonable "integrated" accelerated graphics to the mass market (even if it was just part of the northbridge, it was in every platform).


He also totally ignores the i740 - not a major part of the story of Intel and perhaps a misstep, sure, but to act as if they hadn't tried to take a discrete graphics chip to market well before AMD bought ATi is crazy.

Also, he seems to act as if Itanium was developed in response to AMD64:

>Brilliant brilliant choice by NT team was the bet on the AMD 64 bit instructions. Seeing AMD64 all over the code drove them “nuts”.

>That led to Itanium…more proprietary distraction.

It was pretty close to the opposite. IA64 was Intel's plan for the future for more than half of the 90s, and the spec for AMD64 wasn't even published until 2000/2001 or thereabouts. Actual AMD64 processors came out about two years after Itanium. His quote to support the above assertion even mentions this, which confuses me more.


Another angle: BK's 2016 layoffs were handled so badly that experienced engineers began leaving the company in large numbers. I guess that drained the company of a lot of talent.


Thinking about it, I wonder if part of the reason for the unethical tactics Intel used in the mid-2000s was fear that revenue would decline quickly if market share went to, say, 10%.


Am I missing something here? Intel's operating income is higher than it's ever been. They aren't being disrupted by any meaningful measure of disruption.


Was this post written on a phone? The number of abbreviations ("Ppl", "gfx") and emojis is bothersome.


I knew it was going to be a rough read when the first paragraph said "Let's explore in this annotated twitter thread." Maybe it shows my age, but I loathe Twitter threads, be they annotated or not. And if you need to annotate a Twitter thread, maybe it shouldn't have been posted on Twitter in the first place.


If that bothers you, maybe the internet is not for you... :P

Anyway, seeing people communicate and express themselves in different ways is pretty cool to me.


Colloquial expressions and shorthand generally make a written work harder to comprehend for a broad audience. Broad comprehension and dissemination are generally what you want in written work.


Yeah, I also think diversity is damn awesome! :P


zo'o .u'i mu'inai ma xunai .eidai do tavla fo lo jbobau

.ui lojban


What's the deal with all of the random italics disrupting the flow of the article?


I believe they’re quotes from Ben Thompson’s article, but it took me until about halfway through to figure that out.



