DirectX 12 applications no longer working on 4th gen Intel processor graphics (intel.com)
188 points by pantalaimon on Nov 6, 2021 | 82 comments



This doesn’t affect me, so I won’t comment on the decision or its merits/alternatives. But I’m astonished that in the same article:

- they say that they removed DirectX 12 support to address a security vulnerability

- they say you can restore DirectX 12 support with the older, vulnerable firmware

… and don’t warn that this implies reintroducing the vulnerability. I mean, sure, the implication is clear if you’re reading top to bottom to understand the reason for the change. But it’s certainly not going to be clear to people who understandably may find this looking for a “fix”, where skipping to the solution is a common pattern.

That seems exceedingly irresponsible to me. Security fixes are for your users, not for your own checklist to cover your ass.


What do you expect them to do, instead? Simply not mention any alternatives? It's absurdly clear from the article that downgrading will leave you vulnerable, so anyone who actually cares is going to read the one (1) paragraph required to determine that.


I expect them to not explicitly endorse bypassing security measures they deemed appropriate to ship. But, you didn’t need to ask that, I thought it was quite clear that was my expectation.


It's not clear except to programmers. Non-programmers will assume the new driver version introduced the vulnerability.

A hardware vulnerability is not on most non-programmers' radar; they won't consider it.


Must be a slow news week. Driver 15.40.44.5107 was released January 10th 2020. The bugs it fixed were disclosed some time later in https://www.intel.com/content/www/us/en/security-center/advi...

People have been complaining about DX12 not working with that driver on Haswell on their forums since early 2020. And Intel's support staff responded with "that chip does not officially support DX12" https://community.intel.com/t5/Graphics/Why-the-DirectX-12-i...

Note that those chips are two years older than DX12, which was released in July 2015 together with Win10.

edit: however, Intel silently changed the list of supported APIs their forum staff is linking. Looking at old patch notes, those chips did for some time officially feature DX12 https://www.intel.com/content/www/us/en/developer/articles/n... - with a small print of "mostly but not completely".

edit2: it seems Intel was under pressure at the time to push drivers that fix game crashes in triple-A games on Win10 on Haswell.

One more thing: the article linked here has a "lastModifieddate" meta tag of 2020-11-04 - did they set the wrong publishing date in their CMS? hahaha. So some tech journals probably monitor Intel for new articles, and now they are copying from each other with no one checking if the news is older than moldy bread.

edit: looking at SA-00315, I wonder if DX12 only ever worked on those chips because it gracefully overflowed some buffers

edit2: hmm, the big new thing in DX12 was low-level APIs.


Basically the way Intel got to fix a security vulnerability was to disable DirectX 12 support, great move. /s


Yeah, it's a questionable move. However, those who care more about DX12 on Haswell iGPUs than about the risk coming from the patched vulnerability have the option of staying on the previous driver, at least.

The way I see it is that the writing has been on the wall for pre-Skylake iGPUs ever since Intel refused to make DCH drivers for them. Ultimately, even the (U)HD series seems to be a stopgap that is lasting longer than Intel wishes it did.


This is a dumb question but how far "back" (regressed) would we have to go to get "open secure" hardware?

ie we decide that having open-sourced hardware designs, and some way to verify the silicon matches the design, is the way to go.

This is not just the CPU but all the chips on a motherboard, GPU included?

IIRC Russia still fabs 286s for its military because they have verified the design and don't have to worry the NSA has added a few extra circuits.

I would guess we could go a couple of generations back before the reduction in power outweighed the presumed extra peace of mind?


RISC-V and MIPS are the only semi-modern architectures I know of that can be implemented with 100% open hardware down to the schematics of every single component on the mainboard/CPU/etc.

And the main problem with those is performance. Even the fastest RISC-V board currently available (which is also proprietary) is still only about half as fast as a Raspberry Pi 3.


You can use ARM but it will only be open to the people paying for the NDA.


Is that “just” a matter of the process node used to fab them, though? What would the power profile of a RISC-V taped out for TSMC’s 5nm process look like?


They're still small slow processors.

Sure, it's possible to design a big fast RISC-V processor to rival the M1, but nobody has done it yet.


Wouldn't the energy efficiency of a (much) smaller process node enable you to greatly overclock the chip (compared to now where it's thermally limited to a low clock), though? And/or just pack many of them per die?


>RISC-V and MIPS are the only semi-modern architectures...

OpenPOWER.


GPUs are overrated. Just render things in software to a framebuffer, add more CPU cores if you need them.


> Russia still fabs 286s for its military

You'd think they could at least find some Arm 32-bit design. A 286 is just so cumbersome and slow...


That's probably for maintaining older machines. I doubt it's the bleeding edge of verified Russian CPU clones.


But see, that's why they have a turbo button.


Oof. I'd love to know why.

If I had to guess, it's related to how DirectX 12/Vulkan/Metal/Mantle are all about the basic idea that if you have a full MMU on the GPU, you can expose a more direct, console-like API. The idea being that at worst you can only crash your own process's GPU context, and that was always allowed. Maybe Intel found a hole in their MMU's implementation that lets you break out from the GPU side? I feel like it would have to be some hardware issue that, if patched, fundamentally breaks DX12's value-add, for them to decide to just wholesale remove the feature.


It's worth noting that Haswell only ever supported Tier1 resource binding on DX12, so none of the new bindless stuff would even be to blame. I suspect this won't affect any projects too harshly, because anything DX12 exclusive probably requires at least Tier2 resource binding support. But I'm guessing something in the resource binding system is to blame...


As a game dev "you can expose a more direct, console like API. The idea being that at worst you can only crash your own process's GPU context and that was always allowed." is extremely funny to me. Any time I test Vulkan code on my machine I make sure to save all my work first since if the code has any bugs it has a decent chance of trashing kernel data structures and causing a BSOD.


The primitives being in place (virtual addresses in GPU data structures like the command lists, letting user space manage GPU memory) says nothing about there being bugs in the drivers or not.


Could not be bothered to do a proper fix - Windows 11 and Alder Lake are the new thing.

I am still miffed about Skylake being out of support for Windows 11.


As I understand it, Windows 11's CPU limits are designed for CPUs that have specific hardware mitigation for Spectre and Meltdown.

Given that we're still seeing new variants of those today, it doesn't feel like the most crushing requirement.

Plus there's frankly nothing "compelling" about Windows 11 yet. All the promised features like SSD <--> GPU data transfers (mimicking the Xbox One and PS5) are just "things we're going to add" at some indeterminate date.

Microsoft has by their own clock until 2025 to wow people over to Windows 11, and they've not even shown up to the race


> As I understand it, Windows 11's CPU limits are designed for CPUs that have specific hardware mitigation for Spectre and Meltdown

"To run Windows 11, CPUs need to have the hardware virtualisation features to enable virtual secure mode for Virtualisation-Based Security and the Hypervisor-Protected Code Integrity that underlies a range of protections that Microsoft has been building since Windows 8, like Application Guard, Control Flow Guard, Credential Guard, Device Guard and System Guard. Now they'll be on by default for all PCs, not just specially selected devices."

https://www.techrepublic.com/article/windows-11-understandin...


Sounds like they want to turn my PC into something locked down like an iPhone and not risk me being able to “break in” to it via any of those class of vulnerabilities.


As far as I can see, these are all legitimate security features that will help protect the users (e.g. from ransomware attacks)

I don't see anything in Windows 11 that makes it more of a walled garden compared to Windows 10.

All the security measures can be bypassed by a technical user if need be, e.g. running unsigned drivers [1], or installing on a machine without a TPM [2], or setting up an offline account on the Home edition.

Satya Nadella sees Windows 11 being an open ecosystem as a selling point compared to alternatives [3]

Alongside the launch of Windows 11, the Microsoft Store is more open now than before [4]

For comparison, the current Linux boot security is very poor compared to other OSs, because it lacks similar features [5]

I think Windows 11 is a step in the right direction, and I'm saying that as a happy Linux user.

1. https://gearupwindows.com/how-to-disable-driver-signature-en...

2. https://www.howto-connect.com/install-windows-11-without-tpm...

3. https://www.windowscentral.com/satya-nadella-wants-windows-1...

4. https://blogs.windows.com/windowsdeveloper/2021/09/28/micros...

5. https://0pointer.net/blog/authenticated-boot-and-disk-encryp...


Re: [4] will they re-allow crypto miners in the store? How about Torrent clients?



That first app violates Microsoft's ban on mining cryptocurrency on device: https://docs.microsoft.com/en-us/windows/uwp/publish/store-p...

"Apps that enable the mining of crypto-currency on device are not allowed."


For sure, Windows 11's CPU requirement has effectively ensured that CPUs going forward have some Spectre/Meltdown mitigation in hardware. However, I would argue that the line wasn't drawn purely on technical grounds, but with non-technical considerations as well.

Initially, Windows 11 required an Intel 8th gen or above CPU, but they went back and added the Core i7-7820HQ to ensure that the Surface Studio 2 currently in-market could receive the upgrade. Or consider that some 7th gen and 8th gen CPUs are both based on Kaby Lake, but only the 8th gen ones are supported.

The OEMs are also quite excited about getting people to replace their computers[1]. I am sure that's part of the consideration.

[1]: https://www.cnbc.com/2021/10/05/microsofts-panos-panay-expla...


there wasn't really anything compelling past Windows 7 except "continues to receive drivers and security updates"


Admittedly, there's a light version of Windows 10 that runs pretty snappily on 2GB RAM tablets. I migrated one of those tablets (an Asus Transformer from a few years ago) to Linux+GNOME because it was the only desktop environment to properly handle screen rotation out of the box (again, this was a few years back), and I found it fairly slower on 2GB RAM than its original Windows 10. Agree completely on everything else: Microsoft totally borked the user interface from Win 8 onward, then decided it wasn't enough and added telemetry, then ads.


This doesn't impact many people, but WiFi 6E support on the 6 GHz band is currently only in Win 11 (it won't work on Win 10, I've tried). Otherwise I pretty much agree with you, nothing that compelling.


There is no technical reason a 6E driver for Win10 cannot be written by someone.


If that's a selling point of Win11, I bet they refuse to sign the driver for Win10.


Didn't DirectStorage get its Win11 exclusivity removed? And it'll be on Win10 as well?


They finally fully mitigated Spectre and Meltdown?


But AVX512 has been killed on Alder Lake also -- not that I was too excited about it, but it's certainly sapping my interest in Intel in the short and medium term. All because only P cores can execute AVX512 and E cores will fault on those instructions.

Can't let programs using them affinitize themselves to P cores only, oh no. Definitely need to kill the entire instruction set extension, or only allow it if BIOS writers figure out they can use unpublished methods to enable it but only if they disable E cores at boot. /s

(I may be wrong on some details above, I haven't been keeping the closest of eyes on the issues.)
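
For what it's worth, the "affinitize to P cores only" approach is at least expressible with ordinary OS APIs today. A minimal Linux-only sketch: the P-core set here is hypothetical (we just pretend the first half of the allowed CPUs are P cores); on a real Alder Lake box you'd have to discover the actual P-core logical CPU IDs, e.g. from /sys/devices/cpu_core/cpus.

```python
import os

# Hypothetical P-core set: on real hybrid hardware the P-core logical CPU
# IDs would be read from the kernel (e.g. /sys/devices/cpu_core/cpus);
# here we just treat the first half of our allowed CPUs as "P cores".
allowed = sorted(os.sched_getaffinity(0))
p_cores = set(allowed[: max(1, len(allowed) // 2)])

# Pin this process to the "P cores", so AVX512 instructions could never
# be scheduled onto an E core that would fault on them.
os.sched_setaffinity(0, p_cores)

assert os.sched_getaffinity(0) == p_cores
```

Of course this only protects a cooperating process; the OS scheduler would still have to handle arbitrary code that faults on an E core, which is presumably why Intel didn't ship it this way.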


My understanding is that AVX512 is not all that it's cracked up to be anyway, and that it also clocks the CPU down while it is executing those instructions. So my understanding is that a program (even if not a particularly heavy one) that constantly executed AVX512 instructions could cause a noticeable drop in performance to the entire system.


I haven't been following the Alder Lake situation too closely, but AVX512 is really multiple things that ought to have been orthogonal: (1) 512-bit vectors, (2) a much cleaner vector instruction set, (3) masked vector instructions.

It's mildly understandable that Intel didn't want to implement (1) on the E cores (though it would arguably have been better to just dual-issue them, like AMD did for 256-bit vectors on the original Zens). But there's no reason not to implement (2) and (3) on the E cores.

Perhaps somebody will clarify that (2) and (3) are available on Alder Lake, that would take a lot of the hurt out of this announcement, but it does sound like they're not -- and that's a major bummer.
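
To make (3) concrete: a masked vector instruction applies an operation only in the lanes where a predicate bit is set, and merges the destination's old values elsewhere. A pure-Python analogy of the semantics (just an illustration, not AVX512 itself):

```python
# "Vector registers" as plain Python lists, the mask as a list of bits.
a = [1.0, 2.0, 3.0, 4.0]
b = [10.0, 20.0, 30.0, 40.0]
mask = [1, 0, 1, 0]  # the AVX512 "k" opmask register

# Merge-masked add: lanes with mask bit 1 get a+b, lanes with bit 0 keep
# the old value - roughly what  vaddpd zmm{k1}, ...  does with merging.
result = [x + y if m else x for x, y, m in zip(a, b, mask)]
assert result == [11.0, 2.0, 33.0, 4.0]  # lanes 0 and 2 updated
```

Doing this branchlessly in hardware is what makes vectorizing loops with conditionals practical, which is why losing it hurts even at 256-bit widths.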


> It's mildly understandable that Intel didn't want to implement (1) on the E cores (though it would arguably have been better to just dual-issue them, like AMD did for 256-bit vectors on the original Zens). But there's no reason not to implement (2) and (3) on the E cores.

It is very likely that all these aspects are very hard to separate from each other.


Agreed, wasn't too excited about it as a prospect because of reports like that, but now it'll be even harder to test for myself, and if it turns out to have _some_ use case, it's another generation before consumer grade stuff can benefit from it. Completely off my radar now.


>> I am still miffed about Skylake being out of support for Windows 11.

You can always run the latest versions of Linux.


Ah yes, because Linux is known for its perfect hardware support and DirectX support.

https://news.ycombinator.com/item?id=28490753

(Ok to be fair I’ve never had issues with an iGPU on Linux. Probably because none of the package maintainers can afford a discrete GPU :P)


DirectX 12 support is getting better with Proton and VKD3D. It's not universal yet, but it works better than I ever expected such efforts to go. Today, I was able to play a game again that I had pretty much given up on playing on Linux. Granted, it probably does not use DirectX 12...


Regressions galore if you do and happen to be on hardware that is not that common, though.


> I am still miffed about Skylake being out of support for Windows 11.

Is there a particular feature you find desirable enough to make up for the hot garbage that is the rest of Windows 11?


Security updates past 2025.

TBF, it's not like that's going to be nearly as much of an issue in 2025 when that Skylake CPU hits 10 years old.

Additionally, you can still run Windows 11 on Skylake, you just have to disable a few things.

I'm sure there will be a Windows Update patcher if MS does the same thing they did to prevent Server 2012 from updating on Kaby Lake and newer CPUs.

I think it is very reasonable from MS's position to officially sunset those CPUs while not trying too hard to prevent them from running.


Same. I was planning on riding my build a while longer yet, but I was unwilling to be left behind, so I'm upgrading my PC soon.


Your actions repeated a few million times = Thanks Microsoft, for your contribution to e-waste and to a 1.5 degree warmer world arriving earlier than 2040.

Can't wait for Windows 10 to be EOL'ed, for this problem to get worse. "Switch to Linux" they say, but hah, the corporate bean counters will say retraining is more expensive, just buy those new PCs, that 1.5 degree problem, that's not our issue!


How about putting the blame where it belongs? Intel knowingly sold CPUs with Meltdown/Spectre for close to a decade before it was disclosed to the public.

Switching to Linux does not fix the issue, as it is part of the hardware. We have mitigated the issue, and done so poorly. You are not required to upgrade to Win 11, which gives your computer roughly 3 more years until you repurpose it for lesser tasks.


Most corporate IT departments already lease employee hardware and have 3-5 year laptop refresh cycles accounted for. I don't see how Windows 11 will affect anything in the big scheme of things for large enterprise IT procurement.

But as far as home users are concerned, I fully agree with you.


Maybe in companies flush with cash - most businesses won't change their hardware unless they have a good reason to. Losing access to security patches may be one such reason.

But the average office clerk will certainly not get regular updates to the latest and greatest hardware.


That's an interesting way to resolve an exploit like that. What would have necessitated disabling DirectX 12 in its entirety, as opposed to other methods?


Most likely, a security vulnerability in the GPU's MMU that fundamentally cannot be patched on DX12 due to the increase in programmer flexibility that DX12 offers. Could be a race condition somewhere, for example.


Would this be grounds to return the product for a refund outside of the normal return window?


DirectX 12 is Windows 10 exclusive and that came out 1-2 years after 4th gen, so this would not be a feature Intel was advertising at the time.


I had a 4th gen processor (i5-4200) and it couldn't render video without lag until the OS was upgraded to Windows 10 with DirectX 12, after which it was more than capable of handling video. DX12 probably added a few years to the life of my computer.


Lmao. Intel just can't catch a break.


Intel has been a spoiler in the GFX market for the longest time.

Every laptop I've had with discrete GFX has had recurring bugs in both Firefox and Chrome because of the two-GPU situation. I would have the hardest time documenting what was going on; they'd fix the bug, then in the next major release it would break again.


MS just gave everyone with a 4th gen proc another reason to open their wallets for something newer.


Lol, I'm on a 4th gen because my wallet is empty. But I like to think that it's because performance improvements up to 8th gen were pathetic.


So DX12 support has not been deprecated in newer drivers (as the page states), it has actually been completely removed.


Some people don't know what that word means...


I was initially a little surprised, as someone running a trusty Haswell CPU, but then I remembered that DXVK handles all this for me and I probably won't be affected. Not sure if this is a win or a vulnerability, but I'll take it!


escalation of privilege

IMHO that's on the very low end of vulnerabilities to be concerned about, but the paranoia-kings would rather disable an entire feature because of it. A sad reflection of what the software/security industry has become. (At the very high end is automatic remote code execution, something which I really hope a GPU driver would never have, but then again, I've been surprised too many times already... )


When you can have web content interact with graphics APIs, escalation of privilege is a huge deal.


Maybe we shouldn't have websites talking directly to kernel code.


Then why wouldn't they call it remote code execution?


Everything about that sentence is awful. Let's just not do any of that.


Next, there will be Web DMA, then push notifications to your prefrontal cortex.

Just make sure you're not using hardware with known vulnerabilities.


> but the paranoia-kings would rather disable an entire feature because of it.

Windows is installed on many computers, including many office computers, of which many will inevitably have Haswell CPUs with IGP drivers installed by default.

Given how few people would be using DirectX 12 on Haswell IGPs, the decision is easy IMO.

In penetration tests I often find vulnerabilities that give me command access to a remote Windows host as a low-privileged user. Especially in Windows environments, privilege escalation is super nasty, because once you elevate privileges you have free rein over LSASS and a lot of options for lateral movement on a network and dumping the "secure vault".


GPU drivers have had vulns, but more common are the software wrappers, like GeForce Experience having poor configurations and implementations (like that time their bundled Node.js allowed you to inject any process into their "web server" exe, allowing for EoP, bypassing AppLocker/whitelisting, etc.)

https://www.nvidia.com/en-us/geforce/forums/geforce-experien...


Pretty much every meaningful exploit chain includes privilege escalation.


So is there some kind of emulator for these, or will an era of software become lost?


Why is that reported by Intel? Isn't that a Microsoft decision, paid for by Intel?


there’s a joke about gamers who avoid Macs in here somewhere


engineered obsolescence?


This is not good Intel. Are you going to deprecate Windows next? Linux?


The Intel graphics drivers for Linux are open source. If they pull a stunt like that, it can be undone.


Partially, the microcode certainly isn't.



