In Wayland this is simple. In Xorg it is not, because the X protocol contains no knowledge of what scale factor your app is rendering at. The server doesn’t know about DPI scaling. You can’t just make things “appear” as if they’re at a different scale factor; X apps get real pixel coordinates in their events, directly from the X server.
The Wayland compositor knows exactly what DPI every surface is rendered at in any given frame, and it has a unified protocol to advertise it.
What macOS does is not terrible, but downsampling from 2x to 1.5x will look a decent bit worse than, say, 3x to 1.5x or rendering at actual 1.5x. In theory, with today’s Wayland protocol, as far as I know, a surface could set its own DPI to 2x or 3x and get the desired behavior today. However, it definitely isn’t free...
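To illustrate what that looks like at the protocol level, here's a minimal client-side sketch in C using libwayland-client (assumptions: the compositor advertises wl_compositor version 3 or newer; buffer allocation, shell setup, and most error handling are omitted, since a real app would go through a toolkit anyway). The scaling-relevant part is the wl_surface.set_buffer_scale request, which declares that attached buffers are rendered at an integer multiple of the logical size and leaves any further scaling to the compositor:

```c
#include <stdio.h>
#include <string.h>
#include <wayland-client.h>

static struct wl_compositor *compositor;

static void handle_global(void *data, struct wl_registry *registry,
                          uint32_t name, const char *interface,
                          uint32_t version) {
    if (strcmp(interface, wl_compositor_interface.name) == 0)
        /* wl_surface.set_buffer_scale requires wl_compositor/wl_surface v3+ */
        compositor = wl_registry_bind(registry, name,
                                      &wl_compositor_interface, 3);
}

static void handle_global_remove(void *data, struct wl_registry *registry,
                                 uint32_t name) {}

static const struct wl_registry_listener registry_listener = {
    .global = handle_global,
    .global_remove = handle_global_remove,
};

int main(void) {
    struct wl_display *display = wl_display_connect(NULL);
    if (!display) { fprintf(stderr, "no Wayland display\n"); return 1; }

    struct wl_registry *registry = wl_display_get_registry(display);
    wl_registry_add_listener(registry, &registry_listener, NULL);
    wl_display_roundtrip(display);
    if (!compositor) { fprintf(stderr, "no wl_compositor v3\n"); return 1; }

    struct wl_surface *surface = wl_compositor_create_surface(compositor);

    /* Declare that our buffers are rendered at 2x the logical size.
       The compositor is then free to scale them to whatever the output
       needs, e.g. downsample to a 1.5x display. */
    wl_surface_set_buffer_scale(surface, 2);
    /* ...attach a buffer rendered at 2x and commit as usual... */
    wl_surface_commit(surface);

    wl_display_disconnect(display);
    return 0;
}
```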
Why isn’t it free? Because of text rendering. Apple is a very... idealistic company. From the very beginning of their use of anti-aliased vector fonts, it’s clear that they did not want hacks like pixel grid fitting or subpixel anti-aliasing that compromise the integrity of the font or tie the rendering closely to the way it is blitted. The way they made up for this was ultimately by shipping HiDPI displays in most of their devices. That way, the blurry rendering became more palatable.
Now don’t get me wrong: I’m not saying it looks unusable at 1.5x or at 2x scaled down to 1.5x. It just looks dramatically less crisp than actual 1.5x with subpixel anti-aliasing. But you can’t scale a surface with subpixel anti-aliasing without defeating it, because the scaling process treats the different subpixels of rendered text as just RGB values. This looks terrible. Even worse is if you use hinting on top of that: now you are grid-fitting to a grid that has no correlation to the actual grid of the display, so hinting just makes things look worse while compromising the integrity of the outlines.
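To make that mechanism concrete, here's a toy C example (made-up coverage values and a naive box filter; this is not any real rasterizer, and not whatever resampling filter macOS actually uses). Once glyph coverage has been baked into separate R/G/B channel values, a scaler just sees color, and averaging smears that coverage onto physical subpixels it was never computed for:

```c
#include <stdio.h>

int main(void) {
    /* A vertical black-on-white glyph edge rendered with RGB-stripe
       subpixel AA: the edge stops partway through pixel 0, encoded as
       per-channel coverage rather than a single gray value. */
    unsigned char px0[3] = {0,   128, 255};  /* R fully inked, G half, B clear */
    unsigned char px1[3] = {255, 255, 255};  /* plain white neighbour */

    /* A naive 2-to-1 downscale averages the pixels channel by channel,
       treating the coverage values as ordinary color. */
    for (int c = 0; c < 3; c++) {
        unsigned char mixed = (px0[c] + px1[c]) / 2;
        printf("channel %d: %u\n", c, mixed);
    }
    /* Result: (127, 191, 255), a bluish fringe whose channels no longer
       line up with the physical subpixels, instead of a sharper edge. */
    return 0;
}
```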
Granted, plain upscaling of 1x content has some of these issues too, and it is passable: still ugly, but it doesn’t look unusably bad. It is still quite wrong, though.
Turning off hinting and subpixel AA might seem like the right call, but I don’t think so. For some languages, like East Asian languages, subpixel AA can add visual clarity that makes it much easier to read complex glyphs. While for English it is merely a creature comfort, for some languages it might provide a much needed legibility improvement.
All of this is what I was trying to subtly allude to near the end, but oh well :)
tl;dr: The X server itself is not aware of window DPI, and cannot virtualize pixels in events and protocols. Wayland compositors are, but it is still maturing and software and hardware compatibility is gradually working its way up. And OSes other than macOS use tricks like hinting and subpixel AA to improve font legibility which do not play terribly well with framebuffer scaling.
Tl;still dr: doing this would be hard for historical reasons :)
Fractional scaling is mostly needed on High DPI displays. High DPI displays also happen to be the case where you don't need subpixel AA, nor do you need native FS rendering. Even for East Asian languages.
It's ironic, I think, that you say Apple is an idealistic company, but they have implemented a very simple and pragmatic approach, and with great results.
While here we're discussing why Linux can't do the same due to a dozen self-imposed restrictions that don't matter in practice. The concept of (non-fractional) logical pixels can be implemented very neatly into any rendering subsystem without this affecting the concept of pixels, or events, etc. Seems no one can see the forest for the trees in this case.
> Fractional scaling is mostly needed on High DPI displays. High DPI displays also happen to be the case where you don't need subpixel AA, nor do you need native FS rendering. Even for East Asian languages.
2x scale displays certainly make East Asian text readable, but rendering 2x and downscaling to 1.5x will do a lot of damage to the legibility. It’s far from ideal.
> It's ironic, I think, that you say Apple is an idealistic company, but they have implemented a very simple and pragmatic approach, and with great results.
When they first implemented it, it shipped on displays that could only handle 1x scaling, and that remained the case for a very long time. The first “Retina” MacBook is from 2012, while Mac OS X dates back to 2001. I have an iBook G4 sitting next to me. It’s a nice piece of hardware, but yes, text is blurry. Apple is “idealistic” in that they favored elegant engineering over practicality. Not everything Apple has ever done embodies this, but they definitely try to change the status quo rather than make the most of it, and this is surely no different.
I didn’t say it as a jab, necessarily, but it had its tradeoffs. It’s a distinctly Apple move.
> While here we're discussing why Linux can't do the same due to a dozen self-imposed restrictions that don't matter in practice. The concept of (non-fractional) logical pixels can be implemented very neatly into any rendering subsystem without this affecting the concept of pixels, or events, etc. Seems no one can see the forest for the trees in this case.
The self-imposed restrictions you refer to are issues with the design of the Linux graphics stack. Like I said, the X11 protocol does not give the X server any awareness of the scale factor an application renders at. The deeper you dig, the clearer it is that it simply couldn’t be fixed. In the X11 world, screen coordinates are exposed to all clients. In the Wayland world, clients only use surface-relative coordinates. This might sound like an unimportant distinction, but when you are trying to implement DPI virtualization, the API matters. If applications running at different scale factors suddenly have their drop-down and context menus appearing all across the screen, that’s not a good experience.
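Here is a tiny Xlib sketch of that coordinate-space point (it assumes a running X server and skips error handling and cleanup; purely illustrative): every ButtonPress an X client receives carries both window-relative and absolute root-window coordinates, all in raw device pixels, so a virtualization layer would have to rewrite both of them consistently, in every coordinate-carrying event and request, for every client.

```c
#include <stdio.h>
#include <X11/Xlib.h>

int main(void) {
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy) { fprintf(stderr, "no X display\n"); return 1; }

    /* A bare 300x200 window parented to the root window. */
    Window win = XCreateSimpleWindow(dpy, DefaultRootWindow(dpy),
                                     0, 0, 300, 200, 0, 0, 0xffffff);
    XSelectInput(dpy, win, ButtonPressMask);
    XMapWindow(dpy, win);

    for (;;) {
        XEvent ev;
        XNextEvent(dpy, &ev);   /* flushes requests and blocks for an event */
        if (ev.type == ButtonPress) {
            /* x/y are window-relative; x_root/y_root are absolute screen
               coordinates. Both are raw hardware pixels. */
            printf("window: %d,%d   screen: %d,%d\n",
                   ev.xbutton.x, ev.xbutton.y,
                   ev.xbutton.x_root, ev.xbutton.y_root);
        }
    }
}
```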
So why is X11 like this? Well, it’s ridiculously old. The X protocol has changed and evolved a fair bit, but it’s so old that its lineage predates the very first Mac altogether. Xorg has improved plenty in the last 20 years, but the protocols have been largely frozen for quite a while, with some minor exceptions. Trying to fix these issues now, 40 years down the road, is a losing battle. It would be possible to try to pull a Microsoft and invent a new DPI virtualization layer, but it’s just not that simple. Xorg does not have a built-in compositor, and it does not force you to use compositing. From the server’s point of view, there’s no obvious place to scale surfaces. There aren't even "surfaces."
The list goes on. So if it’s so old and problematic, why didn’t they try to fix or replace it? Well, it turns out they have. It’s called Wayland. It’s not new; it’s been cooking since 2007. Wayland provides a global compositor that is aware of the scaling factor of surfaces and old X11 apps are DPI-virtualized by virtue of the way Xwayland works.
So why isn’t Wayland everywhere? Because open source developers can’t will it into existence. They have to push things forward incrementally. The Linux desktop is not exactly the best-funded project around. Canonical decided to pursue their own display server, Mir, instead of collaborating on Wayland, so their resources were not going into it. Meanwhile, semi-political squabbles with NVIDIA and the Linux kernel developers have led the proprietary NVIDIA drivers to be very slow to support Wayland. NVIDIA tried to forge a path around the unified buffer management APIs used by every other graphics vendor and proposed EGLStream, which ultimately hasn’t worked out, and they are now pursuing a better path forward.
The lesson here is that you can’t say “why didn’t they just do the obvious thing? It’s sad everyone is too stupid to understand this.” It’s not that nobody understands how to implement DPI virtualization. But in 1984, display scaling wasn’t at the top of anyone’s mind. It wasn’t in the early 2000s either. So it’s not surprising that the decades-long legacy of X has left it hard to handle scaling.
It's worth adding here: the reason they did not simply try to reuse the existing X drivers is that those were an equally bad thorn in their side, fraught with plenty of peril. But let's please not get into that. I am not an expert, and when I dug into it, it felt like enough yak shaving to last a lifetime.
In other words:
Tl;dr: it’s hard to do under Linux for historical reasons.
> 2x scale displays certainly make East Asian text readable, but rendering 2x and downscaling to 1.5x will do a lot of damage to the legibility. It’s far from ideal.
I'm typing this on a MacBook with a scaled display and legibility is just fine. Asian scripts look fine, too. iPhones also use scaled output (the most recent models all do; there's no longer any match between physical pixels and GPU pixels on OLED, and no match between GPU pixels and logical pixels either). You'll never hear customers in Japan or China complain about poor legibility.
That's the difference between theory and practice. If you try it, you realize there's no problem. Otherwise, I'm sure we can keep writing about how bad it is in theory all day.
> I have an iBook G4 sitting next to me. It’s a nice piece of hardware, but yes, text is blurry.
You're talking about an iBook that 1) isn't high DPI and 2) is running OS X, which isn't scaled (fractionally or in any way at all).
I'm sorry, but your iBook is irrelevant to the discussion. We're not discussing legacy text AA on a legacy low-DPI display here. None of this matters.
> I'm typing this on a MacBook with a scaled display and legibility is just fine. Asian scripts look fine, too. iPhones also use scaled output (the most recent models all do; there's no longer any match between physical pixels and GPU pixels on OLED, and no match between GPU pixels and logical pixels either). You'll never hear customers in Japan or China complain about poor legibility.
> That's the difference between theory and practice. If you try it, you realize there's no problem. Otherwise, I'm sure we can keep writing about how bad it is in theory all day.
I do not appreciate the way you are treating me as if I clearly have no experience or knowledge of the subject matter. I do in fact use an iPhone and have owned multiple Mac computers, including my M1 Mac Mini. I also studied Japanese in college, and font rendering was always something of an issue.
Comparing Linux and macOS is not too interesting, since Linux is actually closer to macOS in many regards when it comes to font rendering. Instead, it is more instructive to first compare Windows 10 to macOS (Big Sur).
This comparison shows the default fonts in macOS and Windows 10 rendering the word 「醤油」. macOS TextEdit is on the left, and Windows 10 Notepad is on the right. I set the font size to 10pt on Windows to help make it a more even comparison; Windows defaults to 12pt, which makes 1x plenty legible. However, in a head-to-head comparison at a similar font size, 1x is not terribly legible on either side. Still, at 1.5x you can already see a clear difference in legibility. This is a common word that is mildly complex, but still nowhere near the worst-case scenario for typography. And yet, some of the features are quite difficult to distinguish. It's good enough, but it's not great. The Windows typography here is ugly, but practical.
What if macOS could scale 1.5x natively? Well, the comparison would look like this:
On the left is 12pt at 1x, and on the right is 18pt at 1x. At first glance it does look similar, and neither is ideal. However, this is still a noticeable improvement. The two strokes near the bottom of the left kanji are now distinguishable to the human eye, and it is generally less blurry. And this is without the advantage of hinting or subpixel AA, which would make it even less blurry and increase the horizontal spatial resolution enough to make more features easily distinguishable. And this is still not a worst-case example; it’s an example of something common.
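For anyone wondering why 18pt at 1x is a stand-in for native 1.5x rendering of 12pt text, the arithmetic is trivial (assuming the usual 72-points-per-inch convention; the 110 DPI figure is the desktop monitor I mention below, and it cancels out anyway):

```c
#include <stdio.h>

int main(void) {
    /* Assumed values: 72 pt per inch and a ~110 DPI desktop panel. */
    double dpi = 110.0;
    double px_12pt_at_1_5x = 12.0 / 72.0 * dpi * 1.5; /* 12pt at 1.5x UI scale */
    double px_18pt_at_1x   = 18.0 / 72.0 * dpi;       /* 18pt at 1x UI scale  */

    /* Both come out to the same device-pixel size (27.5 px here), so
       rendering 18pt at 1x approximates native 1.5x rendering of 12pt. */
    printf("%.1f px vs %.1f px\n", px_12pt_at_1_5x, px_18pt_at_1x);
    return 0;
}
```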
By the way, regarding iPhones: I have an iPhone XS. The iPhone XS has a DPI of 458! That is insane. The monitor I'm currently using, by comparison, has a DPI of around 110. So yes, you can get away with quite a lot on an iPhone XS without any noticeable artifacts or blurriness. But that's not representative. Laptops and phones are different devices, with different hardware, viewed at different distances. OLED displays on computers are still the exception.
> You're talking about an iBook that 1) isn't high DPI and 2) is running OS X, which isn't scaled (fractionally or in any way at all).
> I'm sorry, but your iBook is irrelevant to the discussion. We're not discussing legacy text AA on a legacy low-DPI display here. None of this matters.
I don't know what to tell you; I was trying to illustrate the pitfalls of Apple's early choice not to integrate techniques like hinting and subpixel anti-aliasing into their font rendering. It's not that the font rendering looks meaningfully different on an iBook G4 than it does today; it's that the screens have gotten higher resolution. They waited it out. If you want to talk about impractical decisions, the 10-year stretch when Apple laptops had blurry fonts and no HiDPI displays is certainly relevant to the discussion.
Also, Linux still needs to support 1x rendering well. It does not have the luxury of choosing the hardware it runs on.
I'm sorry the tone of the conversation is changing; I didn't do this intentionally. But there are a few things that don't logically connect for me here.
Supporting FS using the macOS algorithm doesn't affect 1:1 rendering at all. You can enable subpixel AA when rendering at non-fractional scale, and you can disable it when you do fractional scale.
Windows 10 already does this per control, i.e. when you render a control in RGBA it disables subpixel AA; otherwise it supports it. In this case what I propose is vastly simpler: do it per desktop, or even per X server instance.
It's literally an afternoon project, and let people pick their settings.
Anyway none of this matters, because Linux on the desktop doesn't matter either (yet). And with that attitude honestly it'll never matter.
As a last note, you can do subpixel AA with the macOS algorithm. It'll be more performance-intensive, since it'll take a custom shader to scale this way, but it's mathematically 100% doable. I just don't think it's worth the bother.
What I am trying to tell you is that 1:1 font rendering on macOS is simply not very good. It made Apple the butt of jokes for a while. If you think that 1:1 rendering of common kanji is good, well, we’ll just have to agree to disagree. As for downsampling for fractional scale... I was only ever trying to demonstrate that it is not ideal, and that not everyone wants it. I still (as I said earlier in the thread) find it to be an acceptable tradeoff.
And yeah, UI toolkits could switch off hinting and subpixel AA at 2x. I acknowledge that this would be logical anyway, since subpixel AA is not very useful at 2x.
But honestly, as far as Apple typography goes, it’s all beside the point. This was in a thread where I was initially trying to demonstrate that doing this on Linux is hard for historical reasons. I believe I made that point. As for whether it will ever be done? Well, like I said, it has been done. Wayland supports what you are describing and more just fine, right now, and it works very well. (Full disclosure: most programs will actually just render at the fractional scale directly, but you can also round up to the next integer scale and the compositor will downsample as you would hope.) The actual hard problem is getting Wayland working for everyone, on their hardware and with their software. Progress is being made every year, but it’s been a long road. Until then, I just think it would make more sense for Elementary OS to work on more tractable problems with the Linux desktop.