But how much time does that 0.3 watt hour query take to run? They imply that an individual ChatGPT query takes 0.3-3 watt hours, but most queries come back in seconds, so we need to scale that over a whole hour of processing.
Edit: Scrolling down: "one second of H100-time per query, 1500 watts per H100, and a 70% factor for power utilization gets us 1050 watt-seconds of energy", which is how they get down to 0.3 = 1050/60/60.
OK, so if they run it for a full hour it's 1050*60*60 = 3.8 MW? That can't be right.
Edit Edit: Wait, no, it's just 1050 Watt Hours, right (though let's be honest, the 70% power utilization is a bit goofy - the power is still used)? So it's 3x the power to solve the same question?
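A quick sanity check of the arithmetic (just re-running the figures quoted above: 1 second of H100-time per query, 1500 W per H100, 70% utilization; the variable names are mine):

```python
# Sanity check of the per-query energy estimate quoted above.
h100_power_w = 1500          # watts per H100 (figure from the thread)
utilization = 0.70           # 70% power utilization factor (figure from the thread)
gpu_seconds_per_query = 1    # one second of H100-time per query (figure from the thread)

energy_per_query_j = h100_power_w * utilization * gpu_seconds_per_query
print(energy_per_query_j)               # 1050.0 watt-seconds (joules) per query
print(energy_per_query_j / 3600)        # ~0.29 watt-hours per query

# Running that GPU flat out for a whole hour is power x time, not 1050*60*60:
print(h100_power_w * utilization * 1)   # 1050.0 watt-hours per GPU-hour
```

So the ~0.3 Wh/query and 1050 Wh/GPU-hour figures are consistent; the 3.8 million number above is watt-seconds (joules) over an hour, not megawatts.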
The Steam networking sockets do offer the same functionality as ENet. Is it possible to use the Steam Datagram Relay without the Steam networking sockets? I would assume so. Not sure I see the benefit of supporting both.
The core of Godot's netcode is way too minimal. It gives you a way to synchronize state and make RPCs. That's it.
As the author mentions, adding in higher-level functionality like prediction, rollback, etc. is extremely complicated, so it's nice that netfox takes care of a lot of that complexity.
Clearly it does have something to do with fly.io, considering Fly is and has been pushing LiteFS/Litestream as the ideal database solution for Fly users. It seems reasonable that readers would compare it to other Fly offerings.
We have... never done that? Like ever? LiteFS is interesting for some read-heavy use cases, especially for people doing edge-deployed things, but most people who use databases here use Postgres. We actually had a managed LiteFS product --- LiteFS Cloud --- and we sunset it, like over a year ago. We have a large team working on Managed Postgres. We do not have a big SQLite team.
People sometimes have a hard time with the idea that we write about things because they are interesting to us, and for no other reason. That's also 60-70% of why Ben does what he does on Litestream.
I’m sorry. I think that I, and probably others, have misinterpreted it. Between Ben’s writings on the Fly blog and LiteFS Cloud, it seemed like that was the case. I didn’t realize it had been discontinued.
Neither LiteFS nor Litestream (obviously) have been discontinued. They're both open source projects, and were both carefully designed not to depend on Fly.io to work.
This was an interesting connection to me between meditation and neuroscience. Buddhists talk about the "monkey mind" that chatters incessantly. Well, that's the default mode network, the part of your brain that is active when you're not engaged in a specific task, when you're thinking about self, others, past or future. A useful adaptation in our past environment for sure, but overactivity can be detrimental. The Buddhist solution is to meditate, to focus the attention on a singular thing and not be distracted by the chatter. That ability lives in the prefrontal cortex! It's able to override the DMN, and it's something that can be trained just by exercising it.
While it is literally just sitting somewhere watching the breath, it is easy to do things wrong, like sitting there concentrating hard or being bored instead of observing the boredom. The irony of doing nothing being a difficult skill. Anyone interested probably wants to find some serious meditators to talk to - Buddhist monks are recommended, but there are others here and there.
I suspect the best way to forge an independent path in 2025 is to combine your work with social media. It seems like people with a dedicated audience can do whatever they please and live comfortably.
Does relativity mean that time is not just a human construct? It’s hard to believe it is irrelevant when we have discovered special behaviors that are part of the fabric of reality.
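As one concrete example of the kind of behavior being gestured at (my illustration, not the parent's): special relativity predicts that a clock moving at speed v runs slow by the Lorentz factor,

```latex
\Delta t' = \frac{\Delta t}{\sqrt{1 - v^{2}/c^{2}}}
```

an effect confirmed in the stretched lifetimes of fast-moving muons and in the clock corrections GPS satellites have to apply, which is hard to reconcile with time being purely a human convention.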
Every time I’ve tried to run a standard Linux distro like Ubuntu for more than a couple of years I inevitably end up breaking something in a way that I can’t recover from.
I have had the same experience. Don’t run random commands from the internet, don’t install anything that doesn’t come from the distro vendor (a few very notable exceptions can be made for things like Docker if you really must), don’t mess with configuration files, and do upgrades their way. Generally speaking you will have zero problems. Sometimes they will switch from one network manager to something like netplan, but overall that stuff is trending towards ease of use, not complexity.
If you install the newest versions of whatever from random repos or compile stuff yourself, you are very likely to mess things up. But nowadays there is very little reason to do that. And you can pick a distro that releases at a pace you are comfortable with, so you have choices.
Don't use custom repos, use container technologies (e.g. Flatpak, Docker, etc.) to install applications, and update the system regularly (at least once a week).
Usually the broken distro upgrades I see are because people run "curl randomdomain.ck/totallysafescript.sh | sudo bash -" to install things, or because they use custom repos.
I hate Flatpaks; they're bloated monstrosities and I only run them when I have no other choice. Outside of that, distribution package maintainers tend to do a good job and that is my preferred way of running programs.
Container stuff breaks the MOST for me. The hooks into the subsystems invariably don't work correctly, be it xdg preferences or finding things that are global; it's nice to package things into their own sandboxes, but those sandboxes have not played well with my wider system. I am still thankful for snap getting me recent copies of popular software on my aged Debian installs, however.
This is why I like Arch's Pacman a lot, and the reason why I avoid Debian derivatives.
That `totallysafescript.sh` could at least be inside of the package manager's scope. Most of the time someone has already done it and published it to the AUR.
IMO the reason there are so many people running random scripts on Ubuntu/Debian is how much more difficult/inconvenient it is to get a dpkg .deb compared to a PKGBUILD file. Same for macOS, where you have to either rely on Homebrew wizardry or just run the script.
> That `totallysafescript.sh` could at least be inside of the package manager's scope. Most of the time someone has already done it and published it to the AUR.
The AUR is still not as good as proper package management and shouldn't be considered a stable or reliable method of software distribution at scale.
This experience has been unique to (k)ubuntu (more than 15 years ago) for me.
I've been running rolling release distros for a decade and never had any problems - you have to follow some software migrations when needed, but I managed to migrate to systemd on Arch without an issue, while any dist-upgrade on Ubuntu was wrecking my system.
It's not a good distro. I don't know why people insist on using it. Notice that the GP said Debian instead. (Probably Stable, because testing and unstable will break within 10 years.)