A surprisingly large number of lines of code become security critical by dint of inclusion in security critical systems designed by others, or in support systems for those critical systems.
...which is a big reason why I find it insane that people aren't more interested in microkernel (or unikernel+hypervisor, which ends up in the same place) designs.
If you have a tiny, proof-checked trusted kernel, keep everything else outside of it, and ensure the things outside can only communicate with (or even observe) their peers via messages routed through it, then you don't need to worry about including untrusted code in your app.
Instead, you just slap any and all untrusted code into microservices (microdaemons?) in their own security domains/sandboxes/VMs/whatever, speak to them over the kernel's message bus (or, in the VM case, a virtual network), and suddenly they can't hurt you anymore.
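A toy sketch of that shape in Python (my illustration, not anyone's real design, and `multiprocessing` is obviously not a true security boundary the way seL4-style IPC or separate VMs would be): the "untrusted" logic lives in its own process, and the host can only interact with it by sending messages over a pipe, never by touching its state.

```python
# Hypothetical "microdaemon" pattern: untrusted code in its own process,
# reachable only via messages. Uses the Unix "fork" start method.
import multiprocessing as mp

def untrusted_daemon(conn):
    """The sandboxed side: it sees only the messages it is sent."""
    while True:
        msg = conn.recv()
        if msg.get("op") == "shutdown":
            break
        if msg.get("op") == "upper":
            conn.send({"ok": True, "result": msg["data"].upper()})
        else:
            conn.send({"ok": False, "error": "unknown op"})

def call(conn, op, data=None):
    """The host side: the only interface is request/reply messages."""
    conn.send({"op": op, "data": data})
    return conn.recv()

if __name__ == "__main__":
    ctx = mp.get_context("fork")
    parent, child = ctx.Pipe()
    p = ctx.Process(target=untrusted_daemon, args=(child,))
    p.start()
    print(call(parent, "upper", "hello"))  # {'ok': True, 'result': 'HELLO'}
    parent.send({"op": "shutdown"})
    p.join()
```

Even if the daemon misbehaves, the worst it can do is send the host a bogus reply; there's no shared memory to corrupt and no way to observe its peers. A microkernel gives you that property for everything on the machine.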
I think the problem is that operating systems aren't that useful until they have applications and a userbase. Some projects (like Mirage[1] and a few others) try to get around this by building on top of Xen, but that creates a lot of friction.
This is true. It's also why most serious projects paravirtualize existing OSes such as NetBSD or Linux. QNX and MINIX 3 leverage code from NetBSD. Most of the L4 family have an L4Linux that runs Linux in user mode on top of the tiny kernel. I link to specific examples in another comment here. Even the CHERI secure-processor project ported FreeBSD (CheriBSD) to it to keep legacy software running.
That's a serious issue that's killed a number of past projects. At least many modern ones learned the lesson and are acting accordingly.
Worth keeping in mind that L4 kernels do much, much less than conventional operating systems. They're more like libraries for building useful OSes on top of.