The XNU open source bits from Apple are always a tiny subset of the actual kernel code. Plus, the people Apple hired from the FreeBSD world who believed in open source enough to push for it internally are long gone. So who knows at this point what gets filtered out at publish time.
Don't get me wrong, it's great to read how file drivers really work, and read how XNU manages virtual memory, but other than that, it's a pretty crappy drop.
Yeah, personally I'm mostly fascinated by the semi-microkernel architecture. I don't really mind that the rest is closed. And yeah, I've suspected they lost some good people after seeing all the new security issues in the kernel over the past few years.
If only Wayland were more mature and the Linux world had put down X11 once and for all, I would have been back on Linux a long time ago :)
> If only Wayland were more mature and the Linux world had put down X11 once and for all, I would have been back on Linux a long time ago :)
I'm curious why Wayland changes things for you. X11 is warty as hell, but that's generally not something visible to the user. I don't go "Eww, they didn't bother framing the graphics message packets properly -- I'm going to use another OS".
Haha, well, it's not only X11, but it's one of the main reasons. Another reason is that for my usage and work it just "works", with no hours of configuration in dotfiles. It's also convenient as a cross-platform developer, since I can just cross-compile to Linux and Windows (MinGW) via Docker. However, without the Mac you'd have to break some licenses with a Hackintosh. I think that project is cool, but in work-related situations I can't do that for legal reasons.
And then again when their site gets subverted and you have to reinstall since you might have a rootkitted binary. I loved Mint for its incredible usability. Just can't recommend a supplier whose security was that bad.
Nah. You can't hold a 'maintainer transgression' which happened two years ago against them forever.
They have indeed implemented solid checks and offer a SHA checksum file for every ISO they publish. Not only that, but they publicly "soul searched" and went to great lengths to assure the community and its users that such mistakes would not be repeated. Verifying the authenticity of the ISO is now actively encouraged on the download page.
They made a mistake, took solid steps to improve, and the show has moved on. You should too, instead of smearing the project this far down the line. (I'm being rhetorical, I know.)
The project's security sucked across the board. They didn't care or know how to do it. One hacker here even appeared to hack them in mid-discussion and post database credentials that showed they were using defaults. They then implemented a mitigation after bad press and soul searching. The thing I'm doing isn't smearing the project: it's letting people know not to trust its security without 3rd party verification (esp pen tests). That's because (a) it's a sane default for any project and (b) this one failed hard on the basics at least once.
So, I advise caution until I see a 3rd party evaluation showing their security is good now. You apparently followed them carefully. Did any security professionals look at their site/db/whatever after the fixes and give independent confirmation? That's all I'd need to stop reminding people of this.
The graphical user interface has almost nothing to do with ease of configuration. And yes, ease of configuration is worth a lot.
Related anecdote: I started a friend of mine on Ubuntu, but she hated all the configuration via endless clicking. She immediately took to Arch Linux: there's still configuration, but it's simpler, and since it's all text, it's much easier to just read everything on the Arch Wiki instead of having to follow pictures (or descriptions of pictures).
The Mesa/DRM stack is significantly better than Apple's, because the OpenGL implementation actually works, and Vulkan is supported on at least some hardware.
At the protocol level, X isn't involved that much with the 3D graphics stack anyhow these days. Xorg simply speaks DRI3, which is just a way to marshal file descriptors representing graphics buffers over the connection. It isn't involved in the rendering happening at either end (application and compositor).
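To give a feel for what "marshalling file descriptors over the connection" means at the OS level, here is a minimal sketch of fd passing over a Unix-domain socket with SCM_RIGHTS, the primitive this kind of buffer sharing is built on. This is not Xorg or DRI3 code; `send_fd`, `sock` and `buffer_fd` are made-up names for illustration:

    #include <string.h>
    #include <sys/socket.h>
    #include <sys/uio.h>

    /* Send an open file descriptor (e.g. one referring to a graphics buffer)
     * to the peer on a connected Unix-domain socket. The receiver ends up
     * with its own descriptor for the same underlying kernel object. */
    static int send_fd(int sock, int buffer_fd)
    {
        char byte = 'F';                          /* at least one data byte is required */
        struct iovec iov = { .iov_base = &byte, .iov_len = 1 };

        union {
            char buf[CMSG_SPACE(sizeof(int))];    /* ancillary data buffer */
            struct cmsghdr align;                 /* forces correct alignment */
        } ctrl;
        memset(&ctrl, 0, sizeof(ctrl));

        struct msghdr msg = {
            .msg_iov        = &iov,
            .msg_iovlen     = 1,
            .msg_control    = ctrl.buf,
            .msg_controllen = sizeof(ctrl.buf),
        };

        struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
        cmsg->cmsg_level = SOL_SOCKET;
        cmsg->cmsg_type  = SCM_RIGHTS;            /* "this message carries fds" */
        cmsg->cmsg_len   = CMSG_LEN(sizeof(int));
        memcpy(CMSG_DATA(cmsg), &buffer_fd, sizeof(int));

        return sendmsg(sock, &msg, 0) < 0 ? -1 : 0;
    }

Which matches the point above: the protocol only carries references to buffers, not the rendering itself.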
At least Khronos will be creating a meta-API covering Metal, DirectX and Vulkan to address the differences. I hope it will just work and paper over the Vulkan fragmentation and complexity that doesn't fit into Apple's SDKs.
Professional game developers have a different opinion regarding graphics APIs, even those that have been strong OpenGL apologists in the past:
"John Carmack: Its still OpenGL, although we obviously use a D3D-ish API [on the Xbox 360], and CG on the PS3. Its interesting how little of the technology cares what API you're using and what generation of the technology you're on. You've got a small handful of files that care about what API they're on, and millions of lines of code that are agnostic to the platform that they're on."
I do graphics professionally, and I would much prefer to just see Vulkan everywhere. Metal has essentially nothing to recommend itself over Vulkan; it's just a worse API. Things I've dealt with:
- Tessellation is weird in Metal, as you don't have hull shaders and have to wedge it into the compute pipeline.
- I haven't found a good way to disable multisampling while rendering into a multisample texture, something that is trivial in OpenGL (see the short GL sketch after this list). This can be useful for various effects.
- Switching command encoders is really slow.
- There is no good way I have found to get good timing information inside the app, as opposed to in the profiler. The info is available on iOS, but not macOS. This is important for telemetry, etc.
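On that multisampling point, here is roughly what the OpenGL side looks like, assuming a desktop GL context is current. The `draw_pass_single_sampled` helper and its callback are made up for illustration; the actual API surface is just the `GL_MULTISAMPLE` toggle:

    #include <GL/gl.h>

    /* Render one pass without multisample rasterization even though the
     * currently bound framebuffer is multisampled, then restore the state. */
    static void draw_pass_single_sampled(void (*draw_pass)(void))
    {
        glDisable(GL_MULTISAMPLE);   /* rasterize subsequent draws single-sampled */
        draw_pass();                 /* the draw calls for the effect that needs this */
        glEnable(GL_MULTISAMPLE);    /* back to normal multisample rasterization */
    }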
Studios are obviously just going to choose the API that's supported on the platforms they're shipping to. Usually there's only one "blessed" API, and so that's the one they pick. It doesn't say anything about the quality of those APIs.
Not only is gaming a thing on the Mac, more than on GNU/Linux or even the BSDs combined, it is also a thing on iPhone, iPad and Apple TV, all with Metal support.
Also, Apple's augmented reality stack is based on Metal, and they had Valve on stage praising it at WWDC, with native support in the SteamVR SDK.
Finally, many of the frameworks that made use of OpenGL, including the window manager, now run on top of Metal.
Apple is using Metal because it's a modest improvement over OpenGL (in some areas; it's a regression in others) and, more importantly, their OpenGL implementation is extremely buggy.
You haven't said anything about Vulkan vs. Metal, and you can't, because Vulkan is a better API for the reasons I described upthread.
Full disclosure: I have colleagues and friends on the Khronos standards group, and I don't like seeing their work bashed without specific technical reasons.
Game engine vendors also care a great deal about iOS, since they have a large number of customers who do, so that's also driving support for Metal in those engines.
I came to realize that for those of us who actually care about graphics programming, nice GUI tools, and UI/UX, the Linux world will never be the place to be.
The majority of devs are happy having a replica of the PDP-11 experience, maybe with twm and tools like xdvi or xv.
Those who try to bring the overall desktop experience closer to other desktop systems get bashed for adding needless fluff.
I have been continuously let down by GUI tools with "Good UX" on every platform to the point that I honestly don't trust them to be well behaved and well documented any more.
I know it's technically possible to write a good GUI tool but if some tool doesn't have a fully functional CLI I immediately distrust it.
Unless we're talking about a fabrication, a stereotype is just another word for empirical observation.
It will not be true for absolutely all instances (almost nothing is, except the laws of physics), but it's just supposed to tell you what the most commonly observed case is.
Without stereotypes (a.k.a. generalizations) there's no science and no discussion possible.
You can fault a stereotype for not being representative (if you have different observations or stats or explanation etc.), in which case it's a bad stereotype, but not for being a stereotype.
Those stereotypes are backed by HN posts about how the CLI is so great, how using a tiling window manager is so great, how XFCE (a CDE clone) is all that is needed, how GNOME/KDE devs destroy the desktop because they care about UI/UX for non-technical users, how systemd is a desktop plague infecting servers, and so on.
Apparently using one's mental faculties and empirical observations to make a generalization has gone out of fashion.
On the other hand, linking to some crappy research in a hastily peer-"reviewed" journal, with 20 participants and no controls, that satisfies your biases and which you haven't even read except for the abstract is considered the epitome of discussion.
Don't be so quick to discount stereotypes. Look at them as a sort of Bayesian prior - updated by individual encounters, but often not completely off the mark.
After 20+ years of "next year is the year of Linux on the desktop" (which is about Linux seriously challenging or even overtaking MS among desktop users, not just "Linux works fine for me on my desktop", which was always the case for some outliers), and after following GNOME and KDE closely, along with the efforts by Ubuntu etc. and the reactions to them, his observations sound quite on point.
Merging Mach with UNIX was a mistake, given that Mach was one of the slowest, most over-complicated microkernels. As far as I'm aware, they'd prefer to remove Mach, but it's a big job at this point that might affect their ecosystem a lot. If you are interested in open microkernels, I encourage you to look into the L4 family (e.g. OCL4 or OKL4), since they're far more advanced than Mach-based designs. Minix 3 does one with a NetBSD userland and self-healing capabilities. Genode OS is turning one into a desktop. IBM's K42 ran on NUMA machines. JX OS mixes one with a JVM to run the OS in Java. For security, GEMSOS was a security kernel with MAC. KeyKOS and EROS were examples of capability-based security. Modern ones like Muen and seL4 just do separation, with apps managing security policy.
EDIT: Windows is also a "semi-microkernel", with a microkernel at the bottom that seems to be used more as a consistent way of interfacing components than anything else. I'm not sure if it's still in there.
Mach is fine. The IPC primitives it exposes are a lot more straightforward than Unix sockets. The performance issues everyone talked about in the '80s are not really problems anymore in 2017.
L4's send/recv context switch trick is great until you get to SMP, at which point its performance necessarily moves closer to that of traditional microkernels like Mach.
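To make the comparison with Unix sockets concrete, here is a minimal, self-contained sketch of the Mach primitives in question: allocate a port, send a small message to it, and receive it back in the same task. The message layout and `msgh_id` value are arbitrary illustration; it should build against the macOS system headers:

    #include <mach/mach.h>
    #include <stdio.h>

    int main(void)
    {
        mach_port_t port = MACH_PORT_NULL;

        /* Create a port this task can receive on, and give ourselves a
         * send right so we can message it. */
        if (mach_port_allocate(mach_task_self(), MACH_PORT_RIGHT_RECEIVE,
                               &port) != KERN_SUCCESS)
            return 1;
        if (mach_port_insert_right(mach_task_self(), port, port,
                                   MACH_MSG_TYPE_MAKE_SEND) != KERN_SUCCESS)
            return 1;

        struct {
            mach_msg_header_t header;
            int payload;
        } send_msg = {
            .header = {
                .msgh_bits        = MACH_MSGH_BITS(MACH_MSG_TYPE_COPY_SEND, 0),
                .msgh_size        = sizeof(send_msg),
                .msgh_remote_port = port,            /* destination */
                .msgh_local_port  = MACH_PORT_NULL,  /* no reply port */
                .msgh_id          = 1234,            /* arbitrary message id */
            },
            .payload = 42,
        };

        if (mach_msg(&send_msg.header, MACH_SEND_MSG, sizeof(send_msg), 0,
                     MACH_PORT_NULL, MACH_MSG_TIMEOUT_NONE,
                     MACH_PORT_NULL) != MACH_MSG_SUCCESS)
            return 1;

        struct {
            mach_msg_header_t header;
            int payload;
            mach_msg_trailer_t trailer;   /* kernel appends a trailer on receive */
        } recv_msg;

        if (mach_msg(&recv_msg.header, MACH_RCV_MSG, 0, sizeof(recv_msg),
                     port, MACH_MSG_TIMEOUT_NONE,
                     MACH_PORT_NULL) != MACH_MSG_SUCCESS)
            return 1;

        printf("received id=%d payload=%d\n",
               recv_msg.header.msgh_id, recv_msg.payload);
        return 0;
    }

The same port-based send/receive extends across tasks, which is where it starts to read more cleanly than juggling socketpair()s and framing.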
Is there a better approach to experimenting with filesystems than editing the kernel sources, recompiling, and then running in a VM? E.g. a C library providing the necessary interfaces so the sources could eventually drop into a kernel, but also letting you run tests, set breakpoints, etc. within an IDE.
Of course you can get a long way by accessing a block device directly with FUSE, but ultimately you end up developing against the FUSE interface, not something you can use within the kernel.
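For reference, this is roughly what the FUSE route looks like: a minimal read-only "hello" filesystem against the libfuse 2.x API (the file name and contents are obviously placeholders). It's convenient for iterating in userspace with an ordinary debugger, but as noted, none of it maps directly onto in-kernel VFS interfaces:

    /* Build: gcc hellofs.c $(pkg-config fuse --cflags --libs) -o hellofs
     * Run:   ./hellofs /tmp/mnt   (then `cat /tmp/mnt/hello`) */
    #define FUSE_USE_VERSION 26
    #include <fuse.h>
    #include <errno.h>
    #include <string.h>
    #include <sys/stat.h>

    static const char *hello_str  = "hello from userspace\n";
    static const char *hello_path = "/hello";

    /* Report a root directory containing a single read-only file. */
    static int hello_getattr(const char *path, struct stat *st)
    {
        memset(st, 0, sizeof(*st));
        if (strcmp(path, "/") == 0) {
            st->st_mode  = S_IFDIR | 0755;
            st->st_nlink = 2;
        } else if (strcmp(path, hello_path) == 0) {
            st->st_mode  = S_IFREG | 0444;
            st->st_nlink = 1;
            st->st_size  = strlen(hello_str);
        } else {
            return -ENOENT;
        }
        return 0;
    }

    static int hello_readdir(const char *path, void *buf, fuse_fill_dir_t filler,
                             off_t offset, struct fuse_file_info *fi)
    {
        (void)offset; (void)fi;
        if (strcmp(path, "/") != 0)
            return -ENOENT;
        filler(buf, ".", NULL, 0);
        filler(buf, "..", NULL, 0);
        filler(buf, hello_path + 1, NULL, 0);   /* "hello" without the slash */
        return 0;
    }

    static int hello_read(const char *path, char *buf, size_t size, off_t offset,
                          struct fuse_file_info *fi)
    {
        (void)fi;
        if (strcmp(path, hello_path) != 0)
            return -ENOENT;
        size_t len = strlen(hello_str);
        if ((size_t)offset >= len)
            return 0;
        if (offset + size > len)
            size = len - offset;
        memcpy(buf, hello_str + offset, size);
        return (int)size;
    }

    static struct fuse_operations hello_ops = {
        .getattr = hello_getattr,
        .readdir = hello_readdir,
        .read    = hello_read,
    };

    int main(int argc, char *argv[])
    {
        return fuse_main(argc, argv, &hello_ops, NULL);
    }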