Hacker News | Freaky's comments

I too have the previous generation Paperwhite and it's a laggy piece of junk. About the only thing it's even remotely zippy at is flicking through pages rapidly when I accidentally brush my wet hand against the stupid touchscreen while I'm in the bath.


Looks like my fix for CPU feature detection under clang made it in, so such builds should now have much faster addslashes/base64/etc.

They're still disabled by default on FreeBSD - my PR is pending, and the patch has been in testing in ports for a while: https://github.com/php/php-src/pull/12288


In addition to being a great game, I also find Syx technically impressive - in a genre where it's so common to start running into serious performance problems with just two- or three-digit populations, Syx still zooms along effortlessly as you approach five digits. I believe its pop cap is 40,000 units.

I'm sure they're greatly simplified in comparison - it isn't trying to simulate complex interpersonal relationships or painstakingly track everyone's hair growth, but they still have a decent amount of detail to them given the scale.

The demo is just an older version of the full game (it usually lags about three major releases behind; I'm not sure where it stands now, possibly closer) - and far from making me feel like I didn't need to pay for the thing to enjoy it, it instead made it an easy buy.


> We were very generous in terms of warm-up time. Each benchmark was run for 1000 iterations, and the first half of all the iterations were discarded as warm-up time, giving each JIT a more than fair chance to reach peak performance.

1,000 iterations isn't remotely generous for JRuby, unfortunately - the JVM's tier-3 compilation only kicks in by default around 2,000 invocations, and full tier-4 is only considered beyond 15,000. I've observed this to have quite a substantial effect, for instance bringing manticore (JRuby wrapper for Apache's Java HttpClient) down from merely "okay" performance after 10,000 requests to pretty much matching the curb C extension under MRI after 20,000.

You can tweak it to be more aggressive, but I guess this puts more pressure on the compiler threads and their memory use, while reducing the run-time profiling data they use to optimize most effectively. It perhaps also risks more churn from deoptimization. I kind of felt like I'd be better off trying to formalise the warmup process.
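For illustration, tweaking those thresholds might look like this - JRuby's `-J` flag passes options through to the underlying JVM (the threshold values and `app.rb` here are arbitrary examples, not recommendations):

```shell
# Lower HotSpot's tiered-compilation thresholds so hot methods are
# promoted to tiers 3 and 4 after fewer invocations than the defaults.
jruby \
  -J-XX:Tier3InvocationThreshold=200 \
  -J-XX:Tier4InvocationThreshold=1000 \
  app.rb
```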

It's rather a shame that all this warmup work is one-shot. It would be far less obnoxious if it could be preserved across runs - I believe some alternative Java runtimes support something like that, though given JRuby's got its own JIT targeting Java bytecode I dare say it would require work there as well.


Agreed, the JVM's thresholds for method compilation are higher than the test strategy seemed to account for.

Also, quoting from the site:

    TruffleRuby eventually outperforms the other JITs, but it takes about two minutes to do so. It is also initially quite a bit slower than the CRuby interpreter, taking over 110 seconds to catch up to the interpreter’s speed. This would be problematic in a production context such as Shopify’s, because it can lead to much slower response times for some customers, which could translate into lost business.
There are always ways around that, for example pushing artificial traffic at a node as part of a deployment process, prior to exposing it to customers. I've known places that have opted for that approach, because it made the most sense for them. The initial latency hit of JIT warm-up wasn't a good fit for their needs, while every other aspect of using a JIT'd language was.
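A deploy-time warm-up step along those lines might be sketched like this (the hostname, port, and endpoint are made up for the example):

```shell
# Hammer the freshly deployed node with representative requests so the
# JIT crosses its compilation thresholds before real traffic arrives.
for i in $(seq 1 20000); do
  curl -s -o /dev/null "http://new-node.internal:8080/hot-path"
done
# ...only then register new-node.internal with the load balancer.
```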

As ever, whether it's worth the extra work depends on the trade-off. e.g. if I could see after 5-10 minutes that TruffleRuby was, say, 25% faster than YJIT, then that extra engineering effort may be the right choice.

edit: Some folks throw traffic at nodes before exposing them to customers to ensure that their caches are warm, too. It's not necessarily something limited to JIT'd languages/runtimes.


If you are able to snapshot the state of the JIT, you can do the warming on a single node. The captured JIT state can then be deployed to other machines, saving them from spending time doing the warming. This increases the utilization of your machines.

While this approach sounds like a convoluted way to do ahead of time compilation, I’ve seen it done.


IBM's JVM used to support it at one stage, not sure if it still does or if other JVMs have picked it up.


It appears to have a cool-sounding JIT server mode, allowing multiple clients to share a caching JIT compiler which does most of the heavy-lifting:

https://www.usenix.org/conference/atc22/presentation/khrabro...

https://developer.ibm.com/articles/jitserver-optimize-your-j...

It also has a "dynamic AOT compiler", so first-run stuff can be JITed and cached for future execution instead of it all starting out interpreted every time.


The shared class cache was the thing I was thinking of, I think.

https://developer.ibm.com/tutorials/j-class-sharing-openj9/

Looks like there may be something similar for OpenJDK & Oracle Java as of version 12 (I think?):

https://docs.oracle.com/en/java/javase/19/docs/specs/man/jav...
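Enabling OpenJ9's shared class cache is a single JVM option; a sketch (the cache name, directory, and `app.jar` are examples, not defaults):

```shell
# -Xshareclasses enables OpenJ9's shared class cache; classes and
# AOT-compiled code are stored there so later runs skip cold-start work.
java -Xshareclasses:name=mycache,cacheDir=/var/cache/j9 -jar app.jar
```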


It is enough iterations for these VMs to warm up on the benchmarks we've looked at, but the warm-up time is still on the order of minutes on some benchmarks, which is impractical for many applications.


I have a compiler that eventually produces 10x faster bytecode than the current JRuby. It takes a bit of warmup time.

The first command is "Let there be Light."


I encountered a weird bug with deserializing JSON in a JRuby app during an OpenJDK upgrade - it would sporadically throw a parse error for no apparent reason. I was upgrading to OpenJDK 15, but another user experienced the same regression with an LTS upgrade from 8 to 11.

The end result of my own investigation led to this quite satisfying thread on hotspot-compiler-dev, in which an engineer starts with my minimal reproduction of the problem and posts a workaround within 24 hours: https://mail.openjdk.org/pipermail/hotspot-compiler-dev/2021...

There's also a tip there: try a fastdebug build and see if you can convert it into an assertion failure you can look up.


fastdebug is a good tip, thanks for sharing!


FreeBSD had a pretty decent option in the base system two decades ago: FFS snapshots, and a stock backup tool, dump(8), that would use them automatically with minimal effort. Just chuck `-L` at it and your backups are consistent.
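A minimal sketch of such a consistent dump (the output path is an example):

```shell
# Level-0 dump of the root filesystem; -L tells dump(8) to take an FFS
# snapshot of the live filesystem first, -a auto-sizes the output.
dump -0 -L -a -f /backup/root.dump /
```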

Now of course it's all about ZFS, so there's at least snapshots paired with replication - but the story for anything else is still pretty bad, with you having to put all the fiddly pieces together. I'm sure some people taught their backup tool about their special named backup snapshots sprinkled about in `.zfs/snapshot` directories, but given the fiddly nature of it I'm also sure most people just ended up YOLOing raw directories, temporal-smearing be damned.

I know I did!

I finally got around to fixing that last year with zfsnapr[1]. `zfsnapr mount /mnt/backup` and there's a snapshot of the system - all datasets, mounted recursively - ready for whatever the backup tool of the year happens to be.

I'm kind of disappointed that when I mentioned it over on the Practical ZFS forum, the response was not "why didn't you just use <existing solution everyone uses>?", but "I can see why that might be useful".

Well, yes, it makes backups actually work.

> Also, it's unclear to me what happens if you attempt a snapshot in the middle of something like a database transaction or even a basic file write. Seems likely that the snapshot would still be corrupted

A snapshot is a point-in-time image of the filesystem. Any ACID database worth the name will roll back the in-flight transaction, just as it would if you issued it a `kill -9`.

For other file writes, that's really down to whether or not such interruptions were considered by the writer. You may well have half-written files in your snapshot, with the file contents as they were in between two write() calls. Ideally this will only be in the form of temporary files, prior to their rename() over the data they're replacing.
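The temp-file-and-rename dance mentioned above can be sketched in shell (the filenames are made up for the example):

```shell
# Existing file we want to replace atomically.
printf '%s\n' '{"version": 1}' > config.json

# Write the new contents to a temporary file in the same directory, then
# rename(2) it over the original: a snapshot taken at any instant sees
# either the old file or the new one, never a half-written mix.
tmp=$(mktemp config.json.XXXXXX)
printf '%s\n' '{"version": 2}' > "$tmp"
mv "$tmp" config.json
```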

For everything else - well, you have more than one snapshot backed up, right?

1: https://github.com/Freaky/zfsnapr


> Unfortunately, this is not true. You need to grab all the DB files (WAL, etc.) in a consistent manner. You can't grab them while writes are in progress.

Perhaps you could be more specific, because the former is exactly what a filesystem snapshot is meant to do, and the latter is exactly what an ACID database is meant to allow assuming the former.

> Look at what Kanister does with its recipes to get consistent DB snapshots

I looked at a few examples and they mostly seemed to involve running the usual database dump commands.


They insist on adding it to the standard response path, but they're happy for you to remove it:

    header -Server
However, as this isn't global configuration, it'll tend to pop back up in implicit configs like HTTP redirects and error handling if not overridden there too.


Is it possible to disable it on HTTP redirects? I haven't found any way to do that.


Caddy also supports Unix sockets, which should be rather more difficult to smuggle requests to, and can be protected by file permissions:

    admin listen unix//var/run/caddy/admin.sock


Honestly, this is what the default should be (if they must leave the functionality enabled by default at all). I still can't fathom why it isn't!


Caddy maintainer here: we're looking to move to unix socket by default for Linux distributions. See https://github.com/caddyserver/caddy/issues/5317, the plan is to set this env var in the default service config but I'm trying to be careful about backwards compatibility so I haven't pushed the change for our deb package yet. Will likely do it soon.


I'll see about getting it made the default for the FreeBSD port at least.


I would imagine it's so the default behaviour can be identical across platforms.


I imagine it's for Windows users. But yes, it could very sensibly be the default in Unix.


I make remote snapshot backups with Borg using this: https://github.com/Freaky/zfsnapr

zfsnapr mounts recursive snapshots on a target directory so you can just point whatever backup tool you like at a normal directory tree.

I still use send/recv for local backups - I think it's good to have a mix of strategies.

