
It creates a compound literal [1] of type array of int, and initializes the specified array positions using designated initializers [2] with the results of calls to puts().

Using designated initializers without the = symbol is an obsolete GCC extension.

[1] https://gcc.gnu.org/onlinedocs/gcc/Compound-Literals.html [2] https://gcc.gnu.org/onlinedocs/gcc/Designated-Inits.html
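
To make it concrete, here is a minimal sketch of my own (not the exact snippet under discussion) using the standard form with "="; it should compile as C99 or later:

    #include <stdio.h>

    int main(void)
    {
        /* Compound literal of type int[4]; designated initializers store
           the return values of puts() at indices 1 and 3. The evaluation
           order of the initializer expressions is unspecified. */
        int *a = (int[4]){ [1] = puts("one"), [3] = puts("three") };

        /* The obsolete GCC syntax drops the '=':
           (int[4]){ [1] puts("one"), [3] puts("three") }; */

        printf("a[1] = %d, a[3] = %d\n", a[1], a[3]);
        return 0;
    }

The unnamed positions (here indices 0 and 2) are zero-initialized, as with any partially initialized aggregate.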


Even the original version 1507 is still supported on the LTSC channel. Support for the UTF-8 manifest was added only in version 1903.


Old does not mean bad. Even a decade later, Wayland struggles to provide basic features that were built into the X protocol.


When rewriting apps, people have a tendency to say: "well, Oldversion provides X. We could make a Newversion that provides X better, but that's hard so maybe X is actually bad? We should provide Y instead." Then they're confused when nobody who needs X wants to use Newversion.


Which ones do you have in mind? I don't miss X in the slightest.


Xmodmap


It's OK to use X until Wayland has support for the missing features. I don't really care what I'm running. But it's clear that X is not in shape to see significant development (e.g. to support new needs) in the coming decades.


The number of registers available to the program is fixed by the instruction set. The program cannot address more registers without being recompiled for an extended instruction set.


First-person shooters. Vertical synchronization causes a noticeable output delay.

For example, with a 60 Hz display and vsync, game actions might be shown up to one frame (about 17 ms) later than without vsync, which is ages in an FPS.


Key word here being "might". What actually gets displayed is highly dependent on the performance of the program itself and will manifest as wild stuttering depending on small variations in the scene.

I've seen no game consoles that allow you to turn vsync off, because it would be awful. No idea why this placebo persists in PC gaming.


The Soviet Union briefly tried 5 and 6 day weeks in the 1930s: https://en.wikipedia.org/wiki/Soviet_calendar


You can use systemd-cron [1] to run traditional cron jobs with systemd. No need for a separate daemon anymore.

[1] https://github.com/systemd-cron/systemd-cron


Instead of "/usr/bin/time" you can also write "command time".


or its shorter form, \time


1% is still a lot for power saving. If the system is idle, it should be at 0%. Anything above that indicates poor design.


Realistically no modern consumer OS is ever at idle for long.

It's constantly monitoring WiFi signals, battery level, checking for background processes to run, and a hundred other things.

Whether CPU usage is being reported as 0% or 1% averaged over the course of a second doesn't have anything to do with poor design. It's just being rounded from values like 0.3% or 0.8% anyways.


Monitoring Wi-Fi signals is, afaik, something that happens on the Wi-Fi chip itself, not the CPU.

While you’re correct that nothing stays truly idle, the modern design is that the main CPU really does stay largely idle because of the power costs involved; instead, dedicated microprocessors absorb the load when possible.


Sure, but most modern OSes would inform the user when the WiFi connection dies - so there's something happening on the CPU too.


On a state change, sure. But you're not frequently gaining & losing WiFi access.


The only state where a modern computer is "idle" is when it's turned off.


You can’t really make this claim without measuring the actual impact, especially on a system with frequency scaling and 8+ heterogeneous cores.


