Jonathan Blow posted this on Twitter:
"I kind of don't understand what's the big deal with these 'modern' "display servers" like Wayland / Mir / etc. it ought to be kind of simple actually, because we now have all this experience with drawing graphics and we know how it should go. So I wonder if it's just being made more complicated than it really needs to be, the same way GUI libraries always are. GUI libraries are generally terrible, but this is self-perpetuating as people making new GUI libs absorb bad assumptions from the old ones. There's an interview up with a Mir developer who mentions Mir is structured as communication via a protocol over sockets. And, like, I have literally NO IDEA why you would build a window system like that in 2014. It makes no sense to me at all. (But if you do decide that's how things should go, things become a lot more complicated, so that's at least part of the problem.)
A major reason computers are so unreliable and un-fun to use is because software is now a massive pile of overcomplication. When it comes to a core thing like a window system, that many programs will interface with, simplicity should be a high design priority. Because every bit of complication that goes into the window system propagates. EVERY SINGLE PROGRAM becomes more complicated. Every piece of software becomes harder to develop. The toll in man-years becomes HUGE very quickly. Yet for some reason people don't learn. I think there is some Stockholm Syndrome happening: programmers can't even imagine how much more they would get done if the underlying systems were as simple, reasonable and solid as they should be."
Is this the same Jonathan Blow who said "Braid physically cannot be ported to Linux, the sound APIs simply can't handle it"? :P (Braid was later ported to Linux by a third party and runs fine, IIRC)
Regardless, the answer is "everything is simple when you imagine a single use-case", which as far as I know is what he is doing. I would suggest that people with experience building multi-use-case display servers would know best, those building single-use-case display servers second best, and everybody else is essentially working on guesswork and imagination.
That said, if he wants to go ahead and prove us all wrong by writing a small, elegant display server which fulfils everybody's needs without compromise, I wish him luck and can't wait to start using it :)
Jon Blow is a smart guy, but this sounds like he just hasn't looked at the problem long enough and deep enough to understand it.
I'm sure almost everyone here has at some point looked at a task and said 'Oh, that's easy, I'll do it in a week!', and then realised that it's a rabbit warren of peculiar instances, customer needs, and unique problems.
It's like people who think time is easy, until they forget timezones; then they handle timezones but forget half-hour timezones, or quarter-hour timezones, or daylight saving time, or leap years, or different calendars, or the fact that calendars change over time and place, and that timezones themselves change and get adjusted, etc.
I mean, it's a server. It's a singleton process, which many other processes must communicate with. What other widely supported IPC technique do you think is better for this job, and would make it simpler to develop against?
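For anyone who hasn't poked at this layer: "a protocol over sockets" just means a Unix domain socket that the singleton server listens on and every client connects to. Here's a minimal sketch of the client side in C; the socket path and the text "protocol" are made up for illustration (real display servers use paths like /tmp/.X11-unix/X0 or $XDG_RUNTIME_DIR/wayland-0 and a binary wire format):

    /* Sketch of the client side of "a protocol over a socket".
       The path and message format are hypothetical; real servers use
       e.g. /tmp/.X11-unix/X0 or $XDG_RUNTIME_DIR/wayland-0 and a
       binary wire protocol. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <sys/un.h>

    int main(void) {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        struct sockaddr_un addr = { .sun_family = AF_UNIX };
        strncpy(addr.sun_path, "/tmp/hypothetical-display-0",
                sizeof(addr.sun_path) - 1);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            perror("connect");   /* no such server actually runs here */
            return 1;
        }

        /* A request is just bytes written to the socket; the "protocol"
           is whatever framing the server documents. */
        const char request[] = "create-window 640x480";
        if (write(fd, request, sizeof(request)) < 0) perror("write");

        char reply[128];
        ssize_t n = read(fd, reply, sizeof(reply) - 1);
        if (n > 0) { reply[n] = '\0'; printf("server said: %s\n", reply); }

        close(fd);
        return 0;
    }

Which is more or less the point: the widely supported alternatives (pipes, D-Bus, shared memory plus a handshake) either look like this or are built on top of it.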
> Mir is structured as communication via a protocol over sockets
It's important to note that there's a big distinction between Wayland and Mir here. Mir provides an ABI; a protocol over sockets is an implementation detail.
OTOH, Wayland is defined as a protocol that runs over a socket.
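Concretely, a Wayland client's very first act is to open that socket. A tiny sketch with libwayland-client (the filename demo.c is just for the build line):

    /* Wayland really is "a protocol over a socket": wl_display_connect(NULL)
       opens $XDG_RUNTIME_DIR/$WAYLAND_DISPLAY (typically wayland-0).
       Build with: cc demo.c -lwayland-client */
    #include <stdio.h>
    #include <wayland-client.h>

    int main(void) {
        struct wl_display *display = wl_display_connect(NULL);
        if (!display) {
            fprintf(stderr, "no Wayland compositor socket found\n");
            return 1;
        }
        wl_display_roundtrip(display);   /* one request/reply trip over the socket */
        printf("connected, underlying socket fd = %d\n", wl_display_get_fd(display));
        wl_display_disconnect(display);
        return 0;
    }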
So, the way to make peace between the camps is to write an implementation of the Mir API that (optionally) uses the Wayland protocol?
Back to the interview: is there actually an answer there to "what is it that Mir does that Wayland doesn't?" Apart from the protocol-versus-API thing, and a claim that protocols cannot be versioned easily (PDF files seem like a counterexample to me), I can't find it.
He's generally right about over-complication, but he either lacks an understanding of just how many of his bullet points for bad software X itself hits, or he wants to stick with the current ball of mud rather than take on the complexity of migrating to something that will eventually (hopefully) be simpler.
I assume the former, but the latter is an arguable position I happen to disagree with.
FWIW, I liked MGR [1]. You just wrote special control codes to stdout in order to create windows and suchlike. Very simple, no libs required, dead easy to use from programs.
My opinion is that it's very, very hard to avoid this.
I would say that in all the software I have ever written, some part of it sucked. It's actually quite upsetting if I think too hard about it, I wish it wasn't that way. This is mainly because of four reasons (in no particular order):
1. I didn't have time to really finish it
2. I didn't test it thoroughly enough
3. There was some part of the underlying language or framework that I was unaware of that behaves unexpectedly (e.g. the framework encodes output, the language's int can only handle up to 16384, or you can add a general error catcher but, if you invoke a web method in a particular way, the framework 'helpfully' circumvents the error handler, leading to a weird bug that you never hear about)
4. I was using a new language/framework that I didn't know quite how to use. It's not just inexperience: quite often the examples for new frameworks/languages turn out to be bad practices or inefficient ways of doing things in the long run.
Even if you're a good programmer and try and plan ahead, quite often you're writing something you don't quite understand until you actually write it and as you go you make, in hindsight, a couple of poor decisions. It's often a big job to go back and change those poor decisions, and even if you did it might not completely fix the problem. Some remnants of the old design stick around like a bad smell.
And there are also bad design decisions that don't come out until the program is out in the wild: you might have thought the purpose of your wheel was to make bicycles, but in fact only a couple of neckbeards make bicycles; everyone else is making cars out of them, and for that they're completely the wrong design.
And worse, only the neckbeards are on the mailing list so you get a disproportionate amount of feedback from a vocal minority and don't even realise it's not really suitable.
And on top of that, some software rides its first-mover advantage even though it's badly written, and then gets almost, but not quite, abandoned, so no-one ever moves to the much better alternatives.
And then all those little warts and mistakes, as you say, propagate.
I have screamed at my screen in rage at how stupid some programmer has been while designing a program that almost does what I want but quite spectacularly (or, worse, silently) fails at some key part. But they probably tried hard to make it good.
In truth, writing excellent software is still the domain of a very elite few, the rest of us write ok, but very brittle, software.
The headline here is attention grabbing, but this is actually a very solid interview with an expert developer talking about the pitfalls of protocol and API development, and the issues both technical and political of such a huge change of something like a Display Server.
It was good perspective, and I look forward to seeing where both Mir and Wayland end up.
> at the heart of convergence lies the fact that you want to scale across different form factors
I have this feeling one day we'll look back at this and laugh. "Do you remember when they all tried to shove the same interface on all devices? What were they thinking?!"
We might want different interfaces for different devices, but it's still preferable to have a uniform API across these devices at the level of display server.
I don't think this is that silly. For example, tiling window managers scale extremely well all the way from a small netbook with an 8 or 9 inch screen to multiple 27 or 30 inch monitors.
The problem isn't so much the 8 to 80 inch gap. We've solved that problem. The problem is the 8 to 1 inch gap. Tiling window managers start to fail on smartphone/tablet-sized screens.
Phones and tablets just run all windows full screen, each on their own workspace. A few let you split the screen in half and run one app each. Sounds like a tiling window manager to me.
A few apps -- on Android anyway -- have a floating mode, where they occupy some space over other apps, always on top. This is another thing tiling window managers do.
Indeed. In fact, that's one of the reasons I switched to Windows Phone 8: It was the closest I could get to the linux+dwm environment where I do most of my computing. Why would anyone want a desktop metaphor on a phone, complete with tiny icons and illegible drop-shadowed text?
I agree, but I don't think Canonical is really trying to do that. Ubuntu Touch, aside from the side taskbar, has quite a different interface from desktop Ubuntu. But there should be a common design language across platforms, even if they are optimized differently to take advantage of the screen size or form factor.
I'm still hoping it won't be long until "Remember when computers had physical interfaces, we didn't just think about what we wanted and had it happen?"...
> at the heart of convergence lies the fact that you want to scale across different form factors
> I have this feeling one day we'll look back at this and laugh. "Do you remember when they all tried to shove the same interface on all devices? What were they thinking?!"
Umm, but even phones have HD displays and 1GHz 64-bit processors. Why should that be any different than a desktop system with a 5K display these days?
I always found the backlash against this project somewhat baffling. The display server problem goes back a long ways. We've had a couple of distinct groups of opinions: a) people who think X is just fine b) people who think replacing X is a worthy task, and who've started to explore the topic in earnest.
What we've been lacking was c) people who can articulate a pressingly urgent use case where a new display server is on the critical path, AND have the resources to code it up. Canonical is, as far as I know, the first in that category.
I agree with the supposition that, IF Wayland/Weston represented a suitable head start on meeting the goals that Canonical has with Mir, then Canonical would do well to invest its resources there. Outside of Canonical, there has been a lot of debate as to whether the "Wayland/Weston is the right direction for Canonical" supposition is true.
In the end, I tend to give more weight to Canonical's opinion in this matter. They are, after all, urgently developing a project in which a new display server is on the critical path. The risk in foregoing the head start offered by Wayland/Weston is primarily on them.
From the vantage point of this casual observer, Mir has lit a fire under the Wayland/Weston project. That's great! If unifying against a common threat is what pushes the project along at a faster rate, that's okay. But I still tend to think that Canonical is going to win this race by virtue of having something to strive for on the other side of building a display server.
Sure, but that's "the resources of Canonical" (more like a small amount of Canonical's already relatively meagre development resources, on a project that is not exactly a crucial enabler or breadwinner) vs. practically everyone else in the Open Source industry and community.
In my experience, "practically everyone else in the Open Source industry and community" only means that the number of developers N is greater than zero. I like to believe that N -> Infinity, but I think for very many projects N -> 0.
There are few open source projects where even one or two talented, dedicated, full-time developers would not represent a substantial increase in labor. There was just an article posted a day or two ago talking about how people would be surprised if they knew how small some teams at Apple really were.
This isn't to disparage F/OSS development. I still believe in the development model more than others. I just think we need to be realistic about project resources. Open source isn't magic in this regard.
>> I always found the backlash against this project somewhat baffling.
In my case, I found the Wayland concept very compelling from the beginning. I had thought X had become bloated and needed a replacement. The Wayland approach seemed to be a huge simplification, and in need of developers to bring it home.

From my POV it looked like Mir happened because Canonical agreed about X and thought Wayland looked like a nice idea, but had slightly different ideas of their own and forked for reasons I never really understood. After reading this interview I still don't understand. A fork without clear purpose can seem like a waste of resources and market fragmentation, and that will lead to some resentment. I suspect some of it is related to their desire to support closed-source blob drivers, and I can understand that, but I've also had enough of that world and don't want anyone to support it. In the end, I personally don't hate Mir, but I still don't understand why they're doing it. Either way, I look forward to running a simplified desktop without X.
Ah yes, Android. I don't actually know enough about the technology or licensing in this case to assess this idea's merit. It's hard to argue with the existing market penetration of Android tech though. If SurfaceFlinger or something derived from it could be used, that does seem rather an obvious solution.
EDIT: I found this page, which at least presents some reasoning for why a new system rather than SurfaceFlinger: http://kdubois.net/?p=1815
X11 will always be around as a client running under more modern display software. Kind of like how we're still running VT100 terminals on top of a graphical UI with mouse, sound and 3D graphics capabilities. Mac OS X has been running X11 applications like that for more than a decade now. It works just fine. http://xquartz.macosforge.org/landing/
That's true, although it doesn't mean that Mir or Wayland will necessarily stick around the way the Mac OS X window system did. The landscape morphs far too fast, even for a 10-year project. Mac is a relatively stable platform based on a carefully curated ecosystem with a predictable future, etc. Linux is not.
From the article it doesn't really sound like Wayland or Mir will be relevant. It's far more likely everyone will skip the display/surface/compositor layer and go straight down to EGL and GL.
If all you're doing is running one thing, and that one thing is in total control of the screen and the input devices, sure, but once more than one application needs to access them all at the same time, you need some kind of mediator, and that's what the display server is for.
That makes sense from a system architecture perspective, but will application developers need to write against that API, or can they just write in EGL and GL?
That's the point. The Wayland design philosophy is to give you a buffer and let you draw into it however you want. That's in marked contrast to X, which defines its own drawing API.
Wayland is not meant to be doing very much. Its main job is to give programs the buffers they need.
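For the curious, here's roughly what the "here's a buffer, draw into it yourself" model looks like on the wl_shm path. This is a hedged sketch only: it binds wl_shm, allocates and fills a client-owned buffer, and stops there; the shell/xdg_surface setup you'd need to actually put it on screen is omitted, as is most error handling.

    /* Sketch of Wayland's buffer-centric model: the client allocates pixels
       itself (shared memory here), draws into them however it likes, and
       just hands the compositor a wl_buffer. No server-side drawing API.
       Compiles against libwayland-client, but omits the shell setup needed
       to actually map a window. Build with: cc shm_sketch.c -lwayland-client */
    #define _GNU_SOURCE
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>
    #include <wayland-client.h>

    static struct wl_shm *shm;

    static void on_global(void *data, struct wl_registry *reg, uint32_t name,
                          const char *interface, uint32_t version) {
        if (strcmp(interface, "wl_shm") == 0)
            shm = wl_registry_bind(reg, name, &wl_shm_interface, 1);
    }
    static void on_global_remove(void *data, struct wl_registry *reg, uint32_t name) {}
    static const struct wl_registry_listener listener = { on_global, on_global_remove };

    int main(void) {
        const int width = 320, height = 240, stride = width * 4;
        const int size = stride * height;

        struct wl_display *display = wl_display_connect(NULL);
        if (!display) return 1;
        struct wl_registry *registry = wl_display_get_registry(display);
        wl_registry_add_listener(registry, &listener, NULL);
        wl_display_roundtrip(display);            /* learn about wl_shm */
        if (!shm) return 1;

        /* Client-owned pixels in anonymous shared memory. */
        int fd = memfd_create("pixels", 0);
        if (fd < 0 || ftruncate(fd, size) < 0) return 1;
        uint32_t *pixels = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (pixels == MAP_FAILED) return 1;

        /* "Draw however you want" -- here, a flat orange fill. */
        for (int i = 0; i < width * height; i++)
            pixels[i] = 0xffff8800;               /* xRGB */

        /* Wrap the memory so the compositor could use it as window contents. */
        struct wl_shm_pool *pool = wl_shm_create_pool(shm, fd, size);
        struct wl_buffer *buffer = wl_shm_pool_create_buffer(
            pool, 0, width, height, stride, WL_SHM_FORMAT_XRGB8888);

        printf("created a %dx%d client-drawn buffer: %p\n", width, height, (void *)buffer);

        /* A real client would now attach the buffer to a wl_surface with a
           shell role and commit; that part is omitted here. */
        wl_shm_pool_destroy(pool);
        wl_display_disconnect(display);
        return 0;
    }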
Basically what happened is that we changed our WordPress theme, and it turns out that the new one puts a bit more pressure on our server than the old one did. We used to be able to handle a top spot on Hacker News, but this time our server died on us.
We decided to switch to cloudflare to mitigate some of the load. This requires changing DNS settings, and it seems that for some people, there's been a gap when they haven't been able to access the page. This is of course a little unfortunate. However, it does mean that other people can actually access the page rather than just getting stuck loading.
It's all a bit hectic here at the moment. It's possible there are some other problems that I haven't yet fully discovered.
Eventually it will become obvious that systemd should be replaced with a true Lisp dialect, and we'll just replace it with Emacs Lisp and be done with this dark chapter of non-Emacs init systems.
Cursed be the people who articulate a vision and code it. Punish failure. Punish presumed failure. Presume all new ideas to be failure until they are widely adopted. Reserve the greatest wrath for those who give away their presumed failures.
All the hate for systemd, Wayland, Mir, etc. is proof to me that they're doing something right. Do you really want crappy bug-ridden init shell scripts and bloated, flickery, tearing GUIs?
I assume good faith, and would just google why people object (although everyone knows), and respond to the most common reasons, rather than demanding that everyone repeat themselves.
My problem is that the reply was a completely empty justification for anything, not that it was defending Poettering or systemd. The reason other people disagree with things that one agrees with is not because they are bitter, angry, hateful types who love to object (because they have never accomplished anything, and are secretly jealous, just like mom said.)
It's a big mistake to think that way. If you can't come up with a compelling reason for users to upgrade, it's a lot harder to get the install base large enough where developers can confidently switch over.
Users will switch over when their distro starts using it by default, the same as they would for a new version of the kernel, a new init system, or any other lower-level software.
The fact is that X's design is archaic and poorly suited to the needs of the modern era. The benefit to end users is that rich multimedia software becomes much easier to develop for Linux, so they get more of it.
Wayland's design is such that every window manager implements its own compositor. Your regularly scheduled updates will get you on board, though I assume it'll be in compatibility mode with XWayland for quite some time.
Wayland has crippled customizability compared to X11. That customizability is much more important to the user because, as you said, Wayland's improvements are not relevant to the user.
Years ago I worked in an IT department where all the developers had X terminals: 20-odd folks running full desktops off of one server. A common prank was to set the display to someone else's terminal and run little programs that would cause all their windows to "melt" and puddle at the bottom of the screen, or flip around into mirror images, etc.
We did something similar, but it was years before xauth and xhost.
It also wasn't anything malicious, just a prank: you run xeyes on someone else's terminal, they laugh a little and then continue with their lives. Nothing serious, nothing requiring sysadmining.
Which is a good thing. It lets you do automation on GUI tools that otherwise don't support automation (xdotool). It's something I hope Wayland and Mir will support. Of course there should probably be some kind of permission system, e.g. you need to start a process in a special way so it can do these things, or there is some kind of whitelist of programs that are allowed to do this.
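If I remember right, xdotool does this through the XTEST extension; here's a hedged sketch of the same idea in C (it just types a single 'a' into whichever window currently has focus):

    /* Sketch of GUI automation the way xdotool does it: the XTEST extension
       lets any X11 client synthesize input events for the whole session.
       Build with: cc fake_key.c -lX11 -lXtst */
    #include <stdio.h>
    #include <X11/Xlib.h>
    #include <X11/extensions/XTest.h>

    int main(void) {
        Display *dpy = XOpenDisplay(NULL);
        if (!dpy) { fprintf(stderr, "cannot open display\n"); return 1; }

        /* Press and release the key for 'a'; it goes to whatever window
           has focus, regardless of which application owns it. */
        KeyCode kc = XKeysymToKeycode(dpy, XStringToKeysym("a"));
        XTestFakeKeyEvent(dpy, kc, True, 0);
        XTestFakeKeyEvent(dpy, kc, False, 0);

        XFlush(dpy);
        XCloseDisplay(dpy);
        return 0;
    }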
How? A program that runs on your computer has access to all your files anyway. How is it any more of a security problem if it can not only access your files, change configurations and run other programs, but also send X11 events to other programs?
If I can't use something like QJoyPad under Wayland/MIR I won't use Wayland/MIR.
> A program that runs on your computer has access to all your files anyway.
This is also a security problem. If I have the means to run a program (downloaded from the internet; don't fully trust) in a sandbox that restricts its filesystem access, it'd be nice to also not allow it to control my whole GUI session.
Your "perfectly fine" software (X11) lets windows spy on each other and read keystrokes meant for other windows. Any client can pretty much control the entire server.
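To make that concrete (a sketch, not an exploit): any ordinary X11 client can read the contents of the entire screen, other applications' windows included, with nothing more than XGetImage on the root window:

    /* Sketch of how little X11 isolates clients: any client can capture the
       whole screen, including other applications' windows.
       Build with: cc snoop.c -lX11 */
    #include <stdio.h>
    #include <X11/Xlib.h>
    #include <X11/Xutil.h>

    int main(void) {
        Display *dpy = XOpenDisplay(NULL);
        if (!dpy) return 1;

        int screen = DefaultScreen(dpy);
        int w = DisplayWidth(dpy, screen), h = DisplayHeight(dpy, screen);

        /* Screenshot of everything currently on screen -- no permission asked. */
        XImage *img = XGetImage(dpy, RootWindow(dpy, screen),
                                0, 0, w, h, AllPlanes, ZPixmap);
        if (img) {
            printf("captured %dx%d pixels at %d bpp\n",
                   img->width, img->height, img->bits_per_pixel);
            XDestroyImage(img);
        }
        XCloseDisplay(dpy);
        return 0;
    }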
This is a general problem in Linux. Non-root programs have too much power across the board. A more Android-like permission style solution would be much better.
> A major reason computers are so unreliable and un-fun to use is because software is now a massive pile of overcomplication. When it comes to a core thing like a window system, that many programs will interface with, simplicity should be a high design priority. Because every bit of complication that goes into the window system propagates. EVERY SINGLE PROGRAM becomes more complicated. Every piece of software becomes harder to develop. The toll in man-years becomes HUGE very quickly. Yet for some reason people don't learn. I think there is some Stockholm Syndrome happening: programmers can't even imagine how much more they would get done if the underlying systems were as simple, reasonable and solid as they should be.
I'm curious about HN's opinion on this.