Hacker News
Building the XNU kernel on macOS Sierra (0xcc.re)
82 points by mikalv on July 12, 2017 | 62 comments



XNU open source bits from Apple are always a tiny subset of the actual kernel code. Plus, long gone are the people who Apple hired from the FreeBSD world who believed in it enough to push internally for it. So who knows at this point what is filtered out at publish time.

Don't get me wrong, it's great to read how filesystem drivers really work, and how XNU manages virtual memory, but other than that, it's a pretty crappy drop.


In particular, they hide everything about the ARM architecture. There have not been any code drops for iOS except WebKit.

As an example, you can see here how Apple removes whole files and even code in #ifdef sections before throwing it over the wall:

https://opensource.apple.com/source/hfs/hfs-366.50.19/make_o...


Yeah, personally I'm mostly fascinated with the semi-microkernel architecture. I don't really mind that the rest is closed. And yeah, I've suspected they lost some good people after seeing all the new security issues in the kernel over the past few years.

If only wayland was more mature, and the Linux world would have put down X11 once and for all, I would be back on Linux a long time ago :)


> If only wayland was more mature, and the Linux world would have put down X11 once and for all, I would be back on Linux a long time ago :)

I'm curious why Wayland changes things for you. X11 is warty as hell, but that's generally not something visible to the user. I don't go "Eww, they didn't bother framing the graphics message packets properly -- I'm going to use another OS".


Haha, well it's not only X11, but it's one of the main reasons. Another reason is that for my usage and work, it just "works": no hours of configuration in dotfiles. It's also convenient as a cross-platform developer, since I can cross-compile to Linux and Windows (MinGW) via Docker. Going the other way, without a Mac, you'd have to break some licenses with a hackintosh. I think that project is cool, but in work-related situations I can't do that for legal reasons.

This also reminds me about a cool project I've found, but not had time to test yet; https://github.com/shinh/maloader


Yeah. Linux is not like that any more.

Try Linux Mint. The only configuration you will do is to enter your username and password.


And then again when their site gets subverted and you have to reinstall since you might have a rootkitted binary. I loved Mint for its incredible usability. Just can't recommend a supplier whose security was that bad.


Nah. You can't hold a 'maintainer transgression' which happened two years ago against them forever.

They have indeed implemented solid checks and offer a SHA-256 checksum file for every ISO they publish. Not only that, but they publicly "soul searched" and went to great lengths to assure the community and its users that such mistakes would not be repeated. Verifying the authenticity of the ISO is now actively encouraged on the download page.

They made a mistake, took solid steps to improve, and the show has moved on. You should too, instead of smearing the project this far down the line. (I'm being rhetorical, I know.)


The project's security sucked across the board. They didn't care or know how to do it. One hacker here even appeared to hack them in mid-discussion and post database credentials that showed they were using defaults. They then implemented a mitigation after bad press and soul searching. The thing I'm doing isn't smearing the project: it's letting people know not to trust its security without 3rd party verification (esp pen tests). That's because (a) it's a sane default for any project and (b) this one failed hard on the basics at least once.

So, I advise caution until I see a 3rd party evaluation showing their security is good now. You apparently followed them carefully. Did any security professionals look at their site/db/whatever after the fixes and give independent confirmation? That's all I'd need to stop reminding people of this.


Fully aware of it, but I prefer custom setups (e.g. awesomewm and fluxbox); my problem is just that I don't have time for all that fun anymore :)


The graphical user interface has almost nothing to do with ease of configuration. And yes, ease of configuration is worth a lot.

Related anecdote: I started a friend of mine on Ubuntu, but she hated all the configuration via endless clicking. She immediately took to Arch Linux; there's still configuration, but it's simpler, and since it's all text, it's much easier to just read everything on the Arch Wiki instead of having to follow pictures (or descriptions of pictures).


Kdrive isn't bad if you don't mind 100% framebuffer X (which is fast enough to watch videos, run vim, play old video games and emulators, etc.)


Never heard of Kdrive, thanks for the tip! I will check it out for sure.


I guess a proper stack for doing 3D graphics, instead of the gimmicks that X has had over the years?

I lost count of how many times I sat in the FOSDEM X room, seeing the improvements that would eventually come (some day).


The Mesa/DRM stack is significantly better than Apple's, because the OpenGL implementation actually works, and Vulkan is supported on at least some hardware.

At the protocol level, X isn't involved that much with the 3D graphics stack anyhow these days. Xorg simply speaks DRI3, which is just a way to marshal file descriptors representing graphics buffers over the connection. It isn't involved in the rendering happening at either end (application and compositor).


At least Khronos will be creating a meta-API over Metal, DirectX, and Vulkan to address the differences. I hope it will just work, and smooth over the Vulkan fragmentation and complexity that can't fit into Apple's SDKs.


Metal is fine and supported by all relevant engines, thanks.

As the name says, DRI3 was yet another attempt to improve the whole performance stack on Linux.

As mentioned, I did my share of FOSDEM sessions.


> Metal is fine and supported by all relevant engines, thanks.

thanks for the laugh


Professional game developers have a different opinion regarding graphics APIs, even those that have been strong OpenGL apologists in the past:

"John Carmack: Its still OpenGL, although we obviously use a D3D-ish API [on the Xbox 360], and CG on the PS3. Its interesting how little of the technology cares what API you're using and what generation of the technology you're on. You've got a small handful of files that care about what API they're on, and millions of lines of code that are agnostic to the platform that they're on."

http://fd.fabiensanglard.net/doom3/pdfs/johnc-interviews.pdf

AAA studios have already adopted Metal in their engines; Vulkan only matters on GNU/Linux, among those with supported graphics cards.

On Android, it is only available as an optional graphics API on 10% of devices worldwide.

On Windows, it is only supported on legacy Win32; it isn't and won't be supported on UWP.

On the Nintendo Switch, there is the option to use the lower-level API NVN instead.

So laugh at will; let's see which APIs game studios care about.


I do graphics professionally, and I would much prefer to just see Vulkan everywhere. Metal has essentially nothing to recommend itself over Vulkan; it's just a worse API. Things I've dealt with:

- Tessellation is weird in Metal, as you don't have hull shaders and have to wedge it into the compute pipeline.

- I haven't found a good way to disable multisampling while rendering into a multisample texture, something that is trivial in OpenGL. This can be useful for various effects.

- Switching command encoders is really slow.

- There is no good way I have found to be able to get good timing information inside the app as opposed to the profiler. The info is available on iOS, but not macOS. This is important for telemetry, etc.

Studios are obviously just going to choose the API that's supported on the platforms they're shipping to. Usually there's only one "blessed" API, and so that's the one they pick. It doesn't say anything about the quality of those APIs.


> AAA studios already adopted Metal on their engines

Genuine question: do AAA studios really care about Metal? Is gaming on Mac a thing now?


Not only is gaming a thing on Mac (more than on GNU/Linux and BSD combined), it is also a thing on iPhone, iPad, and Apple TV, all with Metal support.

Also Apple's augmented reality is based on Metal, and they had Valve on stage praising it at WWDC, with native support on SteamVR SDK.

Finally, many of the frameworks that made use of OpenGL, including the window manager, now work on top of Metal.


Apple is using Metal because it's a modest improvement over OpenGL (in some areas; it's a regression in others) and, more importantly, their OpenGL implementation is extremely buggy.

You haven't said anything about Vulkan vs. Metal, and you can't, because Vulkan is a better API for the reasons I described upthread.


I can't say anything good about Vulkan, because I am not going to buy a new laptop or a Google Pixel just to try it out.


Then please stop saying it's a bad API.

Full disclosure: I have colleagues and friends on the Khronos standards group, and I don't like seeing their work bashed without specific technical reasons.


Where have I said it is a bad API?

I have said it has insignificant market share and that share won't get better outside GNU/Linux for the foreseeable future.

That has nothing to do with quality.


A good few of the major engines now support it.

Gaming on the Mac is a thing to some extent. See Steam for macOS.


Game engine vendors also care a great deal about iOS, since they have a large number of customers who do, so that's also driving support for Metal in those engines.


But Wayland is that exact same 3d graphics stack.


Kind of, because it is built on top of EGL, so everything is done via the 3D stack.

On X there is a mixture of X protocol, 3D stack and the interactions among all possible combinations.


X11 is also responsible for some of the sluggishness of the Linux desktop, like resizing windows.


With compositing enabled that doesn't really matter.


I came to realize that for those of us who actually care about graphics programming, nice GUI tools, and UI/UX, the Linux world will never be the place to be.

The majority of devs are happy having a replica of the PDP-11 experience, maybe with twm and tools like xdvi or xv.

Those who try to bring the overall desktop experience closer to other desktop systems get bashed for adding needless fluff.


I have been continuously let down by GUI tools with "Good UX" on every platform to the point that I honestly don't trust them to be well behaved and well documented any more.

I know it's technically possible to write a good GUI tool but if some tool doesn't have a fully functional CLI I immediately distrust it.


Your posts are full of stereotypes...


Unless we're talking about a fabrication, a stereotype is just another word for empirical observation.

It will not be true for absolutely all instances (almost nothing is, except the laws of physics), but it's just supposed to describe the most common observed case.

Without stereotypes (a.k.a. generalizations) there's no science and no discussion possible.

You can fault a stereotype for not being representative (if you have different observations or stats or explanation etc.), in which case it's a bad stereotype, but not for being a stereotype.


Those stereotypes are backed by HN posts about how the CLI is so great, how using a tiling window manager is so great, how using XFCE (a CDE clone) is all that is needed, that GNOME/KDE devs destroy the desktop because they care about UI/UX for non-technical users, that systemd is a desktop plague infecting servers,....

Plenty of material.


Apparently using one's mental faculties and empirical observations to make a generalization has gone out of fashion.

On the other hand, linking to some crappy research in a hastily peer-"reviewed" journal, with 20 participants and no controls, that satisfies your biases and which you haven't even read except for the abstract, is considered the epitome of discussion.


Don't be so quick to discount stereotypes. Look at them as a sort of Bayesian prior - updated by individual encounters, but often not completely off the mark.


But if his observations are so far off the mark, you can be pretty sure his stereotypes are too.

By the way, Visual Studio and IntelliJ IDEA look the same on all platforms.


After 20+ years of "next year is the year of Linux on the desktop" (which is about Linux seriously challenging or even overtaking MS among desktop users, not just "Linux works fine for me on my desktop", which was always the case for some outliers), and following GNOME and KDE closely, plus efforts by Ubuntu etc. and the reactions to them, his observations sound quite on point.


Who cares about year of Linux?

I have never been as productive as I am now. No more fighting Xcode or using outdated core tools.


>Who cares about year of Linux?

Apparently tons of people -- at least did.

>No more fighting xcode or using outdated core tools.

Well, if you don't need to use Swift/Obj-C or develop for Mac/iOS (which apparently you don't), then why use Xcode at all in the first place?


I see it not as a Bayesian prior but as a form of compressed sensing.


The merging of Mach with UNIX was a mistake, given it was one of the slowest, most over-complicated microkernels. Far as I'm aware, they'd prefer to remove Mach, but it's a big job at this point that might affect their ecosystem a lot. If you are interested in open microkernels, I encourage you to look into the L4 family (e.g. OCL4 or OKL4), since they're far more advanced than Mach-based designs. Minix 3 does one with a NetBSD userland and self-healing capabilities. GenodeOS is turning one into a desktop. IBM's K42 ran on NUMA machines. JX OS mixes one with a JVM to run the OS in Java. For security, GEMSOS was a security kernel w/ MAC. KeyKOS and EROS were examples of capability-based security. Modern ones like Muen and seL4 just do separation, with apps managing security policy.

EDIT: Windows is also a "semi-microkernel" with a microkernel inside of it on the bottom that seems to be used for consistent way of interfacing components more than anything. I'm not sure if it's still in there.


Mach is fine. The IPC primitives it exposes are a lot more straightforward than Unix sockets. The performance issues everyone talked about in the '80s are not really problems anymore in 2017.

L4's send/recv context switch trick is great until you get to SMP, at which point its performance necessarily moves closer to that of traditional microkernels like Mach.


I've been playing a bit around with L4 kernels and some more of Minix 3 from before. Genode seems nice but never had the opportunity to test it yet.

The rest was all new for me, thanks for the information :)


The instructions end with where to copy it to boot off it -- does this just not work at all? Or is it just less performant/capable?


Excellent! PureDarwin [1] is still plodding along slowly.

1: http://www.puredarwin.org/



Yeah, I was surprised they didn't have more documentation, or at least updated docs.


Contact details are in my profile - if there's anything in particular you're looking for I can ping a few people from the project.


Thanks, but I actually talked with the people in the project a few years back, and have on my todo list to do it again soon :)


Is there a better approach for experimenting with filesystems than editing the kernel sources, recompiling and then running in a VM? e.g. A C library providing the necessary interface for the sources to drop into a kernel eventually, but also allowing you to run tests, set breakpoints, etc within an IDE.

Of course you can get a long way accessing a block device directly with FUSE, but ultimately you end up developing a FUSE library, not something you can use within the kernel.


I'm sorry but I stopped reading because you've chosen to break lines mid-word (reading on mobile). Justified text would be a lot easier to read.

edit: It looks like Safari's reader mode fixes things.


I thought this was a petty HN UI comment, but no. Reading the post is painful; nearly every other line ends in a split word on mobile.


I really tried to read it before whining. Glad to see it's much more readable now.


Sorry about that, the theme will improve. CSS isn't my strong suit.


I think I managed to fix it :)


Looking good, thanks!


OpenDNS in my office is blocking access to the site because of "malware".

I can't tell if that's a legitimate warning, or it being paranoid about what else is on the site.


Hahaha, yes, I just found out my own company's firewall does the same. I suspect it's the hex domain :)

Online services say it's clean, and I know how to secure my own webserver/services :) https://sitecheck.sucuri.net/results/0xcc.re



