This doesn’t affect me, so I won’t comment on the decision or its merits/alternatives. But I’m astonished that in the same article:
- they say that they removed DirectX 12 support to address a security vulnerability
- they say you can restore DirectX 12 support with the older, vulnerable driver
… and don’t warn that this implies reintroducing the vulnerability. I mean, sure, the implication is clear if you’re reading top to bottom to understand the reason for the change. But it’s certainly not going to be clear to people who understandably may find this looking for a “fix”, where skipping to the solution is a common pattern.
That seems exceedingly irresponsible to me. Security fixes are for your users, not for your own checklist to cover your ass.
What do you expect them to do, instead? Simply not mention any alternatives? It's absurdly clear from the article that downgrading will leave you vulnerable, so anyone who actually cares is going to read the one (1) paragraph required to determine that.
I expect them to not explicitly endorse bypassing security measures they deemed appropriate to ship. But, you didn’t need to ask that, I thought it was quite clear that was my expectation.
People were already complaining on Intel's forums in early 2020 about DX12 not working with that driver on Haswell, and Intel's support staff responded with "that chip does not officially support dx12" https://community.intel.com/t5/Graphics/Why-the-DirectX-12-i...
Note that those chips are two years older than DX12, which was released in July 2015 together with Win10.
edit: however, Intel silently changed the list of supported APIs their forum staff is linking. Looking at old patch notes, those chips did officially feature DX12 for some time https://www.intel.com/content/www/us/en/developer/articles/n... - with the small print of "mostly but not completely".
edit2: it seems Intel was under pressure at the time to push drivers that fix game crashes in triple-A games on Win10 on Haswell.
One more thing: the article linked here has a "lastModifiedDate" meta tag of 2020-11-04 - did they set the wrong publishing date in their CMS? hahaha. So some tech journals probably monitor Intel for new articles, and now they are copying from each other with no one checking whether the news is older than moldy bread.
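For anyone who wants to check a page's meta tags themselves, here's a quick stdlib-only sketch (the tag name here is the one mentioned above; feed it HTML you've saved from the article page):

```python
from html.parser import HTMLParser

class MetaFinder(HTMLParser):
    """Collects <meta name=... content=...> pairs from an HTML page."""
    def __init__(self):
        super().__init__()
        self.meta = {}

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            d = dict(attrs)  # attrs arrives as a list of (name, value) pairs
            name = d.get("name") or d.get("property")
            if name and "content" in d:
                self.meta[name.lower()] = d["content"]

# Minimal example with an inline snippet standing in for the saved page:
html = '<html><head><meta name="lastModifiedDate" content="2020-11-04"></head></html>'
finder = MetaFinder()
finder.feed(html)
print(finder.meta.get("lastmodifieddate"))  # -> 2020-11-04
```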
edit: looking at SA-00315, I wonder if DX12 only ever worked on those chips because it gracefully overflowed some buffers
edit2: mmh, the big new thing in DX12 was low-level APIs.
Yeah, it's a questionable move. However, those who care more about DX12 on Haswell iGPUs than about the risk coming from the patched vulnerability have the option of staying on the previous driver, at least.
The way I see it is that the writing has been on the wall for pre-Skylake iGPUs ever since Intel refused to make DCH drivers for them. Ultimately, even the (U)HD series seems to be a stopgap that is lasting longer than Intel wishes it did.
This is a dumb question but how far "back" (regressed) would we have to go to get "open secure" hardware?
i.e. we decide that having open-sourced hardware designs and some way to verify the silicon matches the design is the way to go.
This is not just the CPU but all the chips on a motherboard, GPU included?
IIRC Russia still fabs 286s for its military because they have verified the design and don't have to worry the NSA has added a few extra circuits.
I would guess we could go back a couple of generations before the loss in performance outweighed the presumed extra peace of mind?
RISC-V and MIPS are the only semi-modern architectures I know of that can be implemented with 100% open hardware down to the schematics of every single component on the mainboard/CPU/etc.
And the main problem with those is performance. Even the fastest RISC-V board currently available (which is also proprietary) still runs at about half the speed of a Raspberry Pi 3.
Is that “just” a matter of the process node used to fab them, though? What would the power profile of a RISC-V taped out for TSMC’s 5nm process look like?
Wouldn't the energy efficiency of a (much) smaller process node enable you to greatly overclock the chip (compared to now where it's thermally limited to a low clock), though? And/or just pack many of them per die?
If I had to guess, it's related to how DirectX 12/Vulkan/Metal/Mantle are all about the basic idea that if you have a full MMU on the GPU, you can expose a more direct, console-like API. The idea being that at worst you can only crash your own process's GPU context, and that was always allowed. Maybe Intel found a hole in their MMU's implementation that lets you break out from the GPU side? I feel like it would have to be some hardware issue that fundamentally breaks DX12's value add if patched, for them to decide to just wholesale remove the feature.
It's worth noting that Haswell only ever supported Tier1 resource binding on DX12, so none of the new bindless stuff would even be to blame. I suspect this won't affect any projects too harshly, because anything DX12 exclusive probably requires at least Tier2 resource binding support. But I'm guessing something in the resource binding system is to blame...
As a game dev "you can expose a more direct, console like API. The idea being that at worst you can only crash your own process's GPU context and that was always allowed." is extremely funny to me. Any time I test Vulkan code on my machine I make sure to save all my work first since if the code has any bugs it has a decent chance of trashing kernel data structures and causing a BSOD.
The primitives being in place (virtual addresses in GPU data structures like the command lists, letting user space manage GPU memory) says nothing about there being bugs in the drivers or not.
As I understand it, Windows 11's CPU limits are designed for CPUs that have specific hardware mitigation for Spectre and Meltdown
Given that we're still seeing new variants of those today it doesn't feel like the most crushing requirement
Plus there's frankly nothing "compelling" about Windows 11 yet. All the promised features like SSD <--> GPU data transfers (mimicking the Xbox Series X and PS5) are just "things we're going to add" at some indeterminate date
Microsoft has by their own clock until 2025 to wow people over to Windows 11, and they've not even shown up to the race
> As I understand it, Windows 11's CPU limits are designed for CPUs that have specific hardware mitigation for Spectre and Meltdown
"To run Windows 11, CPUs need to have the hardware virtualisation features to enable virtual secure mode for Virtualisation-Based Security and the Hypervisor-Protected Code Integrity that underlies a range of protections that Microsoft has been building since Windows 8, like Application Guard, Control Flow Guard, Credential Guard, Device Guard and System Guard. Now they'll be on by default for all PCs, not just specially selected devices."
Sounds like they want to turn my PC into something locked down like an iPhone and not risk me being able to “break in” to it via any of those class of vulnerabilities.
As far as I can see, these are all legitimate security features that will help protect the users (e.g. from ransomware attacks)
I don't see anything in Windows 11 that makes it more of a walled garden compared to Windows 10.
All the security measures can be bypassed by a technical user if need be, e.g. running unsigned drivers [1], or installing on a machine without a TPM [2], or setting up an offline account on the Home edition.
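As a concrete example of the TPM point: the setup checks can be skipped with a handful of registry values under the LabConfig key. These names are widely documented, but verify them yourself before relying on them:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\Setup\LabConfig]
"BypassTPMCheck"=dword:00000001
"BypassSecureBootCheck"=dword:00000001
"BypassRAMCheck"=dword:00000001
```

The point being that these are speed bumps for technical users, not hard locks.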
Satya Nadella sees Windows 11 being an open ecosystem as a selling point compared to alternatives [3]
Alongside the launch of Windows 11, the Microsoft Store is more open now than before [4]
For comparison, current Linux boot security is very poor compared to other OSes, because it lacks similar features [5]
I think Windows 11 is a step in the right direction, and I'm saying that as a happy Linux user.
For sure, Windows 11's CPU requirement has effectively ensured that CPUs going forward have some Spectre/Meltdown mitigation in hardware. However, I would argue that the line wasn't drawn purely on technical grounds, but with non-technical considerations as well.
Initially, Windows 11 required an Intel 8th gen or above CPU, but they went back and added the Core i7-7820HQ to ensure that the Surface Studio 2 currently in-market could receive the upgrade. Or consider that some 7th gen and 8th gen CPUs are both based on Kaby Lake, but only the 8th gen ones are supported.
The OEMs are also quite excited about getting people to replace their computers[1]. I am sure that's part of the consideration.
Admittedly, there's a light version of Windows 10 that runs pretty snappily on 2GB RAM tablets. I migrated one of those tablets (an Asus Transformer from a few years ago) to Linux+GNOME because it was the only desktop environment to properly handle screen rotation out of the box (again, this was a few years back), and I found it noticeably slower on 2GB RAM than its original Windows 10. Agree completely on everything else: Microsoft totally borked the user interface from Win 8 onward, then decided that wasn't enough and added telemetry, then ads.
This doesn't impact many people, but WiFi 6E support on the 6GHz band is currently only in Win 11 (won't work on Win 10, I've tried). Otherwise I pretty much agree with you, nothing that compelling.
But AVX512 has been killed on Alder Lake also -- not that I was too excited about it, but it's certainly sapping my interest in Intel in the short and medium term. All because only P cores can execute AVX512 and E cores will fault on those instructions.
Can't let programs using them affinitize themselves to P cores only, oh no. Definitely need to kill the entire instruction set extension, or only allow it if BIOS writers figure out they can use unpublished methods to enable it but only if they disable E cores at boot. /s
(I may be wrong on some details above, I haven't been keeping the closest of eyes on the issues.)
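To illustrate the affinity point: on Linux, a program can already check for AVX-512 and pin itself to a chosen set of cores with nothing but the standard library. The {0, 1} core set below is a placeholder - which core IDs are P cores is machine-specific:

```python
import os

def cpu_flags(path="/proc/cpuinfo"):
    """Return the feature-flag set of the first CPU listed (Linux/x86 only)."""
    with open(path) as f:
        for line in f:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()  # no "flags" line (e.g. non-x86)

if "avx512f" in cpu_flags() and (os.cpu_count() or 1) > 1:
    # Hypothetical: restrict this process to the cores that support AVX-512
    # (on a hybrid part those would be the P cores; IDs here are placeholders).
    os.sched_setaffinity(0, {0, 1})
    print("AVX-512F available, affinity restricted")
else:
    print("AVX-512F not available, leaving affinity alone")
```

Whether the OS scheduler, the runtime, or the application should own that decision is exactly the argument Intel apparently sidestepped by fusing the feature off.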
My understanding is that AVX512 is not all that it's cracked up to be anyway, and that it also clocks the CPU down while it is executing those instructions. So my understanding is that a program (even if not a particularly heavy one) that constantly executed AVX512 instructions could cause a noticeable drop in performance to the entire system.
I haven't been following the Alder Lake situation too closely, but AVX512 is really multiple things that ought to have been orthogonal: (1) 512-bit vectors, (2) a much cleaner vector instruction set, (3) masked vector instructions.
It's mildly understandable that Intel didn't want to implement (1) on the E cores (though it would arguably have been better to just dual-issue them, like AMD did for 256-bit vectors on the original Zens). But there's no reason not to implement (2) and (3) on the E cores.
Perhaps somebody will clarify that (2) and (3) are available on Alder Lake, that would take a lot of the hurt out of this announcement, but it does sound like they're not -- and that's a major bummer.
> It's mildly understandable that Intel didn't want to implement (1) on the E cores (though it would arguably have been better to just dual-issue them, like AMD did for 256-bit vectors on the original Zens). But there's no reason not to implement (2) and (3) on the E cores.
It is very likely that all these aspects are very hard to separate from each other.
Agreed, wasn't too excited about it as a prospect because of reports like that, but now it'll be even harder to test for myself, and if it turns out to have _some_ use case, it's another generation before consumer grade stuff can benefit from it. Completely off my radar now.
DirectX 12 support is getting better with Proton and VKD3D. It's not universal yet, but it works better than I ever expected such efforts to go. Today, I was able to play a game on Linux that I had pretty much given up on. Granted, it probably does not use DirectX 12...
Your actions repeated a few million times = Thanks Microsoft, for your contribution to e-waste and to a 1.5 degree warmer world arriving earlier than 2040.
Can't wait for Windows 10 to be EOL'ed, for this problem to get worse. "Switch to Linux" they say, but hah, the corporate bean counters will say retraining is more expensive, just buy those new PCs, that 1.5 degree problem, that's not our issue!
How about putting the blame where it belongs? Intel knowingly sold CPUs with Meltdown/Spectre for close to a decade before it was disclosed to the public.
Switching to Linux does not fix the issue, as it is part of the hardware. We have mitigated the issue, and done so poorly. You are not required to upgrade to Win 11, which gives your computer roughly 3 more years until you repurpose it for lesser tasks.
Most corporate IT departments lease employee hardware already and have 3-5 year laptop refresh cycles accounted for. I don't see how Windows 11 will affect anything in the big scheme of things for large enterprise IT procurement.
But as far as home users are concerned, I fully agree with you.
Maybe in companies flush with cash - most businesses won't change their hardware unless they have a good reason to. Losing support for getting security patches may be one such reason.
But the average office clerk will certainly not get regular updates to the latest and greatest hardware.
That's an interesting way to resolve an exploit like that, what would have necessitated disabling DirectX 12 in its entirety as opposed to other methods?
Most likely, a security vulnerability in the GPU's MMU that fundamentally cannot be patched on DX12 due to the increase in programmer flexibility that DX12 offers. Could be a race condition somewhere, for example.
I had a 4th gen processor (i5-4200) and it couldn't render video without lag until the OS was upgraded to Windows 10 with DirectX 12, after which it was more than capable of handling video. DX12 probably added a few years to the life of my computer.
Intel has been a spoiler in the GFX market for the longest time.
Every laptop I've had that has discrete GFX has had recurring bugs with both Firefox and Chrome because of the two-GPU situation. I would have the hardest time documenting what was going on, they'd fix the bug, then in the next major release it would break again.
I was initially a little surprised, as someone running a trusty Haswell CPU, but then I remembered that DXVK handles all this for me and I probably won't be affected. Not sure if this is a win or a vulnerability, but I'll take it!
IMHO that's on the very low end of vulnerabilities to be concerned about, but the paranoia-kings would rather disable an entire feature because of it. A sad reflection of what the software/security industry has become. (At the very high end is automatic remote code execution, something which I really hope a GPU driver would never have, but then again, I've been surprised too many times already... )
> but the paranoia-kings would rather disable an entire feature because of it.
Windows is installed on many computers, including many office computers, of which many will inevitably have Haswell CPUs with IGP drivers installed by default.
Given how few people would be using DirectX 12 on Haswell IGPs, the decision is easy IMO.
In penetration tests I often find vulnerabilities that give me command access to a remote Windows host as a low-privileged user. Especially in Windows environments, privilege escalation is super nasty, because once you elevate privileges you have free rein over LSASS and a lot of options for lateral movement on a network and dumping the "secure vault".
GPU drivers have had vulns, but more common are the software wrappers, like GeForce Experience with its poor configuration and implementation (like that time their bundled Node.js allowed you to inject any process into their "web server" exe, allowing for EoP, bypassing AppLocker/whitelisting etc.)