My company does most of our development in Delphi, but unfortunately that means that crash dumps are very difficult or impossible to utilize with the embedded Borland/Embarcadero TDS debugging information. I wrote a utility that extracts the TDS info from a Delphi executable and uses an LLVM debug info library to write out a PDB and also modify the executable to point to that PDB.
To the best of my knowledge, the Novavax vaccine actually does this. Instead of delivering an mRNA nanoparticle, it delivers a spike protein nanoparticle. In fact, they just started phase 3 trials: https://www.sciencemag.org/news/2020/12/novavax-launches-piv...
For all intents and purposes, yes. Besides making software easy to install, package managers figure out all the dependencies that a given package requires and install those as well. Can you install yum/dnf on Debian? I'm sure there's a way to do it. But good luck handling all the dependencies for a given package. Plus, there will be file conflicts when dnf installs a base dependency that apt already installed.
In other words, nothing good comes from trying to install two different package managers on one system.
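The dependency resolution described above can be sketched in a few lines. This is purely illustrative: the package names and dependency graph are invented, and real package managers (apt, dnf) add version constraints, conflict detection, and much more on top of this basic "install dependencies first" ordering.

```python
# Toy sketch of package-manager dependency resolution: walk the
# dependency graph depth-first so that each package's dependencies
# land in the install order before the package itself.
def resolve(package, deps, installed=None):
    """Return an install order with dependencies first."""
    if installed is None:
        installed = []
    if package in installed:
        return installed          # already scheduled; also breaks cycles
    for dep in deps.get(package, []):
        resolve(dep, deps, installed)
    installed.append(package)
    return installed

# Hypothetical dependency graph, invented for this example.
deps = {
    "webapp": ["libssl", "python3"],
    "python3": ["libssl", "zlib"],
}

print(resolve("webapp", deps))  # ['libssl', 'zlib', 'python3', 'webapp']
```

The conflict case from the comment above is exactly what happens when two such resolvers run independently over overlapping graphs: each schedules the same base dependency without knowing the other already installed it.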
That was already a possibility even before all of this DoH publicity. Mozilla, etc. pushing DoH publicizes its availability, but there was nothing in the past preventing malware from tunneling all sorts of traffic over HTTPS. DNS inspection isn't an end-all, be-all for malware security. It just gets the low-hanging fruit.
There was a lot of low-hanging fruit given that most malware writers aren't going to set up all of this infrastructure for custom protocols.
And even when they did, creating various C&C servers, the lack of ESNI would still allow detecting the activity once the daily domain-generation algorithm was reverse-engineered.
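A domain-generation algorithm (DGA) of the kind described above can be as simple as hashing the current date. The algorithm below is a toy invented for illustration, not any real malware family's DGA; the point is that it is deterministic, so a defender who reverse-engineers it can precompute and blocklist each day's C&C domains just as easily as the malware can generate them.

```python
# Toy DGA: derive a small set of pseudorandom domains from the date.
# Both the malware and anyone who has reversed the algorithm compute
# the identical list for any given day.
import hashlib
from datetime import date

def daily_domains(day: date, count: int = 5) -> list:
    domains = []
    for i in range(count):
        seed = f"{day.isoformat()}-{i}".encode()
        digest = hashlib.sha256(seed).hexdigest()
        domains.append(digest[:12] + ".example")
    return domains

# A defender can generate tomorrow's blocklist today:
print(daily_domains(date(2020, 1, 1)))
```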
Using the NTDLL calls breaks stuff because NTDLL doesn't understand anything about the overlaid subsystems. NT was originally designed to support multiple subsystems in addition to Win32, like the Interix (POSIX-compliant) subsystem and, I believe, the OS/2 subsystem. I think (I could be wrong on this) that this worked by switching the system call table depending on which process was currently active (doable by switching the system call vector).
IIRC the syscall table doesn't get swapped out. Each subsystem translates its calls to NT API calls. For example, user32.dll and kernel32.dll are part of the Win32 subsystem and eventually end up calling NT APIs in ntdll.dll. It's also possible for a process to have no subsystem; these are called native NT processes, and the only DLL loaded by default into their address space is ntdll. csrss.exe is an example of this.
Technically the Windows native layer supports fork()-like abilities, but the problem is that the Win32 subsystem and dependent layers (GUI, etc.) don't have fork() copy-on-write capability. So even if you were able to fork a process, you'd still have to deal with duplicating all of the stuff built on top. In fact, the old POSIX-compatible subsystem layer used this functionality to provide a compliant fork().
Of course this all ignores the fact that Win32 processes are heavier-weight than Linux processes (though I don't know if that's due to Win32 subsystem overhead or not). Look at any benchmark and you'll see an order-of-magnitude difference in process creation times. You're much better off creating threads instead.
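The gap is easy to see with a crude measurement. The numbers below are not a rigorous benchmark: spawning a Python subprocess also pays interpreter startup cost, and absolute times vary wildly by OS and hardware. The point is only the relative order-of-magnitude difference the comment above describes.

```python
# Unscientific comparison of process vs. thread creation cost.
# Note: each subprocess here also pays Python interpreter startup,
# so this overstates raw process-creation time; the qualitative gap
# between processes and threads is still what shows.
import subprocess, sys, threading, time

N = 20

start = time.perf_counter()
for _ in range(N):
    subprocess.run([sys.executable, "-c", "pass"])
proc_time = time.perf_counter() - start

start = time.perf_counter()
for _ in range(N):
    t = threading.Thread(target=lambda: None)
    t.start()
    t.join()
thread_time = time.perf_counter() - start

print(f"processes: {proc_time:.3f}s  threads: {thread_time:.3f}s")
```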
Your CPU resolves every pointer memory access through those tables under the hood. It's an extremely powerful data structure. It can let you allocate linear memory like a central banker prints money. But if you're a Windows user, then only Microsoft is authorized to access it, and they don't want you having the central banker privileges that are needed in order to implement fork(). That's the way the cookie crumbles.
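Concretely, "resolving through those tables" means the CPU splits each virtual address into a chain of table indices. The sketch below models the standard x86-64 4-level, 4 KiB-page layout; it only reproduces the index arithmetic, not the actual table walk through physical memory, and the example address is arbitrary.

```python
# How an x86-64 CPU splits a 48-bit virtual address into the four
# page-table indices (PML4, PDPT, PD, PT) plus a 12-bit page offset,
# assuming standard 4 KiB pages. Each index selects one of 512 entries
# at its level of the table hierarchy.
def split_virtual_address(va: int):
    offset = va & 0xFFF            # bits 0-11:  offset within the 4 KiB page
    pt     = (va >> 12) & 0x1FF    # bits 12-20: page-table index
    pd     = (va >> 21) & 0x1FF    # bits 21-29: page-directory index
    pdpt   = (va >> 30) & 0x1FF    # bits 30-38: page-directory-pointer index
    pml4   = (va >> 39) & 0x1FF    # bits 39-47: top-level index
    return pml4, pdpt, pd, pt, offset

print(split_virtual_address(0x7FFE_DEAD_BEEF))
```

Implementing fork() means duplicating exactly these structures for the child while marking both copies read-only, which is the privileged operation the comment is alluding to.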
I wasn't specifically talking about any ntdll call, but in one of their "deep dive" videos on Channel 9 they state WSL1 supports fork() because the NT kernel natively supports it; it just isn't exposed on the normal API surface.
Pretty much all CPUs with MMUs since the Multics era inherently support fork(). So for NT it's a question of prohibiting the behavior. Microsoft's research department even writes papers about how they disagree with fork(). I was truly impressed by the work they did implementing the Linux ABI in the NT executive for WSL 1.0. Big change in policy. It's going to be sad to watch it go, but might be for the best. A really well-written ghetto for Linux code is still a ghetto.
This is a weird statement. The MMU doesn't natively do a fork(). The kernel has to implement copy-on-write fault handling: all pages are marked read-only, and the write-fault handler needs to recognize that a given address is COW and perform a lazy copy.
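The mechanism described above can be simulated in a few lines. This is entirely illustrative: no real kernel is structured this way, and details like reference counting and re-sharing after the last copy are omitted. It just shows the two halves of the scheme: fork() shares pages read-only, and the first write "faults" and copies lazily.

```python
# Minimal simulation of fork() with copy-on-write. After fork(),
# parent and child share the same page objects marked read-only;
# the first write by either side triggers the fault path, which
# copies the page privately before allowing the write.
class AddressSpace:
    def __init__(self, pages):
        self.pages = pages                    # list of bytearray "pages"
        self.writable = [True] * len(pages)

    def fork(self):
        # Share the pages (shallow copy: same objects) and mark both
        # sides read-only so any write takes the fault path below.
        child = AddressSpace(self.pages[:])
        self.writable = [False] * len(self.pages)
        child.writable = [False] * len(self.pages)
        return child

    def write(self, page_idx, offset, value):
        if not self.writable[page_idx]:
            # "Write fault": the handler sees the page is COW, makes a
            # private copy, and re-marks it writable for this side only.
            self.pages[page_idx] = bytearray(self.pages[page_idx])
            self.writable[page_idx] = True
        self.pages[page_idx][offset] = value

parent = AddressSpace([bytearray(b"hello")])
child = parent.fork()
child.write(0, 0, ord("j"))
print(parent.pages[0], child.pages[0])  # parent keeps b'hello', child sees b'jello'
```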
I think you got confused. The "native layer" referred to NTDLL in the comment you replied to. I think(?) that's what the earlier POSIX subsystem was built on. That's why I said WSL doesn't use that.
> > But I and I'm sure many others would not be focusing upon that, though I can appreciate that mentality due to the way media/tabloids cover such and all issues in life
> That's a personal attack quite out of order.
Not really. It's just commentary on how the media we consume has conditioned us to assign blame quickly, instead of taking a slow, reasoned approach: analyzing the event, seeing the compounding factors at stake, and assigning blame only in the face of gross negligence.
This. Very much this. In the presentations I've seen on the matter, the most vulnerable systems are those measuring and carrying telemetry back from various meters around the power system. One doesn't have to directly compromise the system to have an effect. Simply providing bad data to automated protection systems could be enough to cause instability.
The source can be found here: https://github.com/powerworld/TDSToPDB