> If every widget supports fractional DPI correctly

That is a big if.

> the output will be bit-for-bit identical between the two different approaches.

It won't be identical. When you scale on a widget-by-widget basis, you eventually reach the edge of the widget's surface, and the antialiased pixels you'd need to paint there fall into space beyond that surface.

When the framebuffer is treated as one global surface, the scaler does the antialiasing for you across surface boundaries, so you won't hit this problem.
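
Here's a rough NumPy sketch of what I mean (my own illustration, not any toolkit's actual code): the same widget row, resampled at 1.5x, comes out differently at its edge depending on whether the scaler sees only the widget or the whole composited framebuffer.

    # Illustrative sketch, not real toolkit code: fractional upscaling of a
    # single widget row vs. the same row as part of a larger framebuffer.
    import numpy as np

    def upscale_1d(row, factor):
        """Bilinear resample; taps that fall outside the row are clamped."""
        n_out = int(round(len(row) * factor))
        out = np.empty(n_out)
        for i in range(n_out):
            src = (i + 0.5) / factor - 0.5            # source-space coordinate
            i0 = int(np.floor(src))
            t = src - i0
            a = row[np.clip(i0,     0, len(row) - 1)]
            b = row[np.clip(i0 + 1, 0, len(row) - 1)]
            out[i] = (1 - t) * a + t * b
        return out

    widget    = np.array([0.2, 0.4, 0.6, 0.8])        # one widget, 4 px wide
    neighbour = np.array([1.0, 1.0, 1.0, 1.0])        # the widget next to it

    alone    = upscale_1d(widget, 1.5)                # widget scaled in isolation
    together = upscale_1d(np.concatenate([widget, neighbour]), 1.5)[:6]

    print(alone)     # last pixel clamps at the widget edge
    print(together)  # last pixel blends with the neighbouring widget

The interior pixels match; only the edge differs, because when the widget is scaled in isolation the filter has nothing beyond the widget's surface to blend with.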

Another thing is how Apple scales: the scale factors it picks limit the error caused by the antialiasing to a group of pixels, either 8x8 or 9x9, so the error from fractional scaling won't spread outside that group.
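
A back-of-the-envelope way to see why (my own illustration, not anything from Apple's documentation): a fractional scale expressed as a ratio p/q in lowest terms makes the source and destination pixel grids realign every q source / p destination pixels, so the resampling error can't drift past a group of that size.

    # My own illustration, not Apple's code: a rational scale factor p/q
    # (destination px per source px, in lowest terms) means the source and
    # destination pixel grids realign every q source / p destination pixels,
    # so resampling error is confined to groups of that size.
    from fractions import Fraction

    for s in ("7/8", "8/9", "7/4"):
        f = Fraction(s)
        print(f"scale {s}: realigns every {f.denominator} source px"
              f" -> {f.numerator} destination px")

A 7/8 downscale realigns every 8 source pixels and an 8/9 downscale every 9, which would give groups of exactly that 8x8 / 9x9 size.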

But for the sake of argument, let's say that these errors are not noticeable and we can ignore them.

> the GPU and output encoder options chosen by Gtk3 and Apple have a major flaw: accelerated scaling is usually only available in sRGB colorspace, so you get either gamma-incorrect scaling or you need to fall back to a non-accelerated codepath.

This could be output-encoder specific; I'm not aware of such a limitation. I'm looking into the Intel docs right now (the TGL ones, volume 12: Display Engine) and cannot find any mention of it. Would you have any pointers?

Or do you mean GPU (texture) scaling specifically? I'm not that familiar with the GPU side, but I would be surprised if that were true, given that today's hardware applies LUTs to the buffers.

For older hardware, or for ARM SBCs, that could very well be true.

---

In the end, both approaches have their pros and cons. With encoder scaling, you will never be pixel-perfect at fractional scales, just good enough. With software-managed fractional scaling, you are over-complicating already complicated code, so it won't be bug-free, and in the end it might consume more power (and more CPU cycles!) than the brute-force approach of encoder scaling, which is offloaded to dedicated hardware.




> This could be output-encoder specific; I'm not aware of such a limitation. I'm looking into the Intel docs right now (the TGL ones, volume 12: Display Engine) and cannot find any mention of it. Would you have any pointers?

> Or do you mean GPU (texture) scaling specifically? I'm not that familiar with the GPU side, but I would be surprised if that were true, given that today's hardware applies LUTs to the buffers.

To scale in linear RGB colorspace, you first need to do a colorspace transform on the entire buffer, then scale, then do another colorspace transform. I can't find any device that does this in a single step correctly, except for some rare GPU extensions.
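
For illustration, here's roughly what those three steps look like in NumPy (a sketch of the math, not any compositor's actual code path), and why skipping them is visible:

    # Illustrative sketch: gamma-correct vs. naive scaling. Averaging two
    # neighbouring pixels stands in for the scaler's filtering.
    import numpy as np

    def srgb_to_linear(c):
        c = np.asarray(c, dtype=np.float64)
        return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

    def linear_to_srgb(c):
        c = np.asarray(c, dtype=np.float64)
        return np.where(c <= 0.0031308, c * 12.92, 1.055 * c ** (1 / 2.4) - 0.055)

    px = np.array([0.0, 1.0])          # black pixel next to white, sRGB-encoded

    naive   = px.mean()                                   # filter in sRGB: 0.5
    correct = linear_to_srgb(srgb_to_linear(px).mean())   # transform, filter, transform back

    print(naive, correct)              # 0.5 vs. ~0.735 -- a visibly different grey

A path that can only filter the encoded values gives you the 0.5 result; getting the ~0.735 result needs the transforms around the scaler.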


To quote the Intel docs:

> Plane scaling is [sic] supports both linear and non-linear scaling modes. The scaling mode is programmed in the PS_CTRL. In HDR mode, scaling and blending operations are generally performed in linear mode.

Being limited to sRGB would mean the hardware is pretty much limited to SDR. That would make it unusable in the mainstream market today; it would only be good enough for low-end SBCs.
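
Conceptually (this is only my sketch of the stage ordering the quote implies, with placeholder function names, not actual PS_CTRL or register programming), the linear path looks like:

    # Conceptual sketch only -- the stage order implied by linear-mode plane
    # scaling, not real register programming or real hardware names.
    def compose_frame(planes, degamma_lut, scale, blend, gamma_lut):
        linear = [degamma_lut(p) for p in planes]   # per-plane LUT: encoded -> linear light
        scaled = [scale(p) for p in linear]         # plane scaler runs on linear values
        out = blend(scaled)                         # blending also happens in linear light
        return gamma_lut(out)                       # pipe LUT: linear -> output encoding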



