XDP is completely orthogonal to this work and would not have been a useful performance comparison. "Using AF_XDP" vs. "using a user space driver with IOMMU" is just apples vs. oranges.
The goal we are trying to achieve in this project here is to show that drivers can (and should) be written in better languages to improve security and safety. Note that one of the drivers in the thesis is written in Rust.
And regarding XDP being safer than Rust: yes, of course. But it's also very limiting; you can't write a driver in eBPF. (The verifier currently just prohibits jumps with negative offsets, i.e., loops, but there's ongoing work to allow at least some bounded loops.)
We are interested in making drivers themselves safer, not applications building on top of them.
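To make that limitation concrete, here's a minimal sketch of the kind of program the verifier rejects (the header names assume libbpf's bpf_helpers.h; the header-walking logic is invented for illustration):

    /* Sketch of why a full driver can't live in eBPF: parsing a
     * variable-length header chain needs a loop, and the verifier
     * rejects the backward jump ("back-edge") it compiles to.
     * Compiles with clang -O2 -target bpf, but BPF_PROG_LOAD
     * refuses to load it. */
    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    SEC("xdp")
    int walk_headers(struct xdp_md *ctx)
    {
        unsigned char *pos = (unsigned char *)(long)ctx->data;
        unsigned char *end = (unsigned char *)(long)ctx->data_end;

        while (pos + 4 <= end)   /* loop => back-edge => rejected */
            pos += 4;

        return XDP_PASS;
    }

    char _license[] SEC("license") = "GPL";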
(As advisor of the thesis, however, I agree that XDP should have been mentioned; I'm not super happy with the length of the thesis, but quite happy with the implementations.)
XDP is already being used to replace DPDK and IPVS use cases, so I don't see how it's apples to oranges. It's generally not a good idea to circumvent the kernel, and in this case the kernel developers are providing better and better tooling for high-speed networking, to the point that user space driver implementations are no longer attractive compared to using the XDP ingress hook points. In industry I see rapid adoption of XDP (BPF in general is THE topic in Linux right now), and it's enabling all sorts of fascinating new use cases. I would be very interested to see a paper on IOMMU performance from the XDP perspective.
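For anyone who hasn't seen one: an XDP program is just a small restricted-C function attached at the driver's ingress hook. A minimal sketch (the file and interface names are placeholders):

    /* Minimal XDP ingress program: drop every packet before the
     * kernel stack ever sees it -- the building block behind the
     * DPDK/IPVS replacement use cases mentioned above.
     * Build:  clang -O2 -target bpf -c drop.c -o drop.o
     * Attach: ip link set dev eth0 xdp obj drop.o sec xdp */
    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    SEC("xdp")
    int xdp_drop_all(struct xdp_md *ctx)
    {
        return XDP_DROP;
    }

    char _license[] SEC("license") = "GPL";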
This isn't about the performance of user space drivers but about safer/better drivers. Whether that driver offers an XDP interface or something else is irrelevant. And your kernel driver running XDP should also use the IOMMU, for both safety and security (e.g., Thunderclap).
(The thesis features a performance evaluation to show that the IOMMU doesn't make the driver slower when used properly: hugepages are absolutely required, which is very different from non-IOMMU drivers, where hugepages only boost performance by ~1-3% in the best case. Also, performance is simple to quantify, so it makes for a great evaluation.)
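To make "used properly" concrete, a heavily abridged sketch of the vfio setup (error handling and device setup omitted; the IOMMU group number is a placeholder, and the hugepage mmap assumes hugepages have been reserved via vm.nr_hugepages):

    /* Abridged sketch: mapping a driver's packet-buffer memory
     * through the IOMMU with vfio. */
    #include <fcntl.h>
    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <linux/vfio.h>

    int main(void)
    {
        int container = open("/dev/vfio/vfio", O_RDWR);
        int group = open("/dev/vfio/42", O_RDWR);  /* the NIC's IOMMU group */
        ioctl(group, VFIO_GROUP_SET_CONTAINER, &container);
        ioctl(container, VFIO_SET_IOMMU, VFIO_TYPE1_IOMMU);

        /* One 2 MiB hugepage = one IOTLB entry instead of 512 4 KiB
         * entries; this is why hugepages matter so much more with
         * the IOMMU than without it. */
        void *buf = mmap(NULL, 2UL << 20, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);

        struct vfio_iommu_type1_dma_map map = {
            .argsz = sizeof(map),
            .flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE,
            .vaddr = (uintptr_t)buf,
            .iova  = 0,          /* device-visible address, chosen by us */
            .size  = 2UL << 20,
        };
        ioctl(container, VFIO_IOMMU_MAP_DMA, &map);
        return 0;
    }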
Some context for this thesis/what I'm working on at the moment:
C as a programming language for drivers has failed from a security perspective. Cutler et al. [1] analyzed 65 security bugs found in Linux in 2017 that allowed for arbitrary code execution; 40 of them could have been prevented if the code had been written in a safer language. I've looked at these 40 bugs and found that 39 of them are in drivers.
I disagree with their conclusion that you should therefore consider writing the whole operating system in Go (that's just unrealistic in the near future). But we can at least start to write drivers in better languages, as they seem to be the weakest point: ~97% of the improvement (39 of 40 bugs) for ~40% of the work. Getting a high-level-language driver upstreamed in Linux is of course unrealistic, so user space drivers it is.
Network drivers are particularly interesting because user space drivers are already quite common there, and there's a trend towards having network stacks in user space anyway: QUIC is user space only, and iOS runs a user space TCP stack. (Somewhat related: the future API between applications and network stacks is TAPS instead of sockets; TAPS is also friendlier towards user space stacks, as it's finally a modern abstraction for the stack.)
> Getting a high-level language driver upstreamed in Linux is of course unrealistic, so user space drivers it is.
There's work being done on source-level translation from Rust code to C (mrustc is part of this of course, but nowhere near complete), so I'm not sure why this should be seen as unrealistic.
I don't see why using mrustc to generate C code would make Rust drivers more upstream-able in the Linux kernel?
As far as I understand it, the main thing is that the team (Linus at least) does not want to maintain source code in multiple languages.
The problem with your line of reasoning is that it's very academic. Reimplementing drivers in user space is a much higher-friction task than remote controlling an XDP program via BPF maps, and you're competing with the device manufacturer over implementation correctness in many cases. XDP isn't going to solve driver implementation issues, but it's an incremental step towards low-cost high-speed networking, which is what most people are using user space networking solutions for. It also directly addresses in-user-space packet processing (QUIC, TCP) and makes it safer by enforcing the BPF verifier's sanity checks and allowing a user space component in Rust.
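To make the friction comparison concrete: with libbpf, the user space control side is only a few lines (a sketch; the pin path and the key/value layout are invented for illustration):

    /* Sketch: "remote controlling" a loaded XDP program from user
     * space by updating one of its pinned BPF maps. Link with -lbpf. */
    #include <linux/bpf.h>
    #include <bpf/bpf.h>
    #include <stdint.h>

    int main(void)
    {
        int map_fd = bpf_obj_get("/sys/fs/bpf/xdp_blocklist"); /* pinned map */
        uint32_t key = 0;     /* e.g. an IPv4 address in a real setup */
        uint8_t action = 1;   /* 1 = drop on the XDP side */
        return bpf_map_update_elem(map_fd, &key, &action, BPF_ANY);
    }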
I agree with what you are saying; it's all true. There are lots of problems with C in the kernel, but that's what we're currently stuck with. Practically, there is only a small difference in the value proposition between rewriting the kernel in Go and rewriting our existing network drivers in userland in Rust, when our use cases are more aligned with what we're getting with XDP.
David Miller, the maintainer of the net tree, gave a good talk at netdev about why XDP is the future and people should stop using DPDK. You can just google the talk.