No it's not. 2560x1440 has terrible PPI on larger screens. Either way, with a 4k monitor you don't technically need to game at 4k, as most intensive games offer DLSS anyway.
Too much personal preference goes into PPD. When I upgraded to a 32" monitor from a 27" one, I didn't push my display through my wall; it sat in the same position.
I'm not entirely clear on what you mean, but if you refuse to reposition your display or yourself after hopping between diagonal sizes and resolutions, I'd say it's a bit disingenuous to blame or praise either afterwards. Since you seem to know what PPD is, I think you should be able to appreciate the how and why.
Yep. I have both 4k and 1440p monitors and I can’t tell the difference in quality so I always use the latter for better frames. I use the 4k for reading text though, it’s noticeably better.
There are good 4K gaming monitors, but they start at over $1200 and if you don't also have a 4090 tier rig, you won’t be able to get full FPS out of AAA games at 4k.
I've seen analysis showing that DLSS might actually yield a higher-quality image than native rendering at the same graphics settings, owing to the additional data provided by motion vectors. This plus the 2x speedup makes it a no-brainer in my book.
Also: ultrawide monitors. They exist and provide more immersion. The typical resolution is 3440x1440, which is high while at the same time having low PPI (basically a regular 27" 1440p monitor with extra width). Doubling that is way outside modern GPU capabilities.
Almost no one plays at native 4k anyway. DLSS Quality (no framegen etc.) renders at 1440p internally, and by all accounts there is no drawback at all, especially above 60fps. It looks great, with no noticeable lag (excluding super sweaty esports titles) and 30% more performance. Combined with VRR displays, I would say 4k is perfectly OK for gaming.
I watched the same video you're talking about [1], where he's trying the PG27UCDM (new 27" 4K 240Hz OLED "gaming monitor" [2]) and his first impressions are "it's so clean and sharp"; then he starts Doom Eternal and after a few seconds he says "It's insane [...] It looks perfect".
Today someone's pipeline broke because they were using python:3 from Dockerhub and got an unexpected upgrade ;-)
Specifically, pendulum hasn't released a wheel for 3.13 yet, so pip tried to build it from source; but it uses Rust, and the Python Docker image obviously doesn't have Rust installed.
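Pinning the base image avoids that class of surprise. A minimal sketch (the exact tag is illustrative; pin whatever minor or patch version you've actually tested against):

    # Pin at least the minor version instead of the floating `python:3` tag,
    # so a new interpreter release (e.g. 3.13) can't land unannounced and
    # break packages that haven't published wheels for it yet.
    FROM python:3.12-slim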
Wow, that's crazy. I tried a 6 digit hash and got a 404, then I tried another 6 digit hash and got "This commit does not belong to any branch on this repository, and may belong to a fork outside of the repository."
Aside from the casino story (high value target that likely faces tons of attacks, therefore an expensive customer for CF), did something happen with them? I'm not aware of bad press around them in general
Why Rust? Aren't you alienating Python devs from working on it?
I see that UV is bragging about being 10-100x faster than pip. In my experience the time spent in dependency resolution is dwarfed by the time making web requests and downloading packages.
Also, this isn't something that runs every time you run a Python script. It's run once, during installation of a Python package.
I actually think that Python's tooling should not be written in Python, because if it is, you end up with at least two versions of Python: one to run the tooling and one to run the project.
I'm not sure of the answer, but one thing Rust has obviously bought them is native binaries for Mac/Windows/Linux. For a project that purports to be about simplicity, it's very important to have an onboarding process that doesn't replicate the problems of the Python ecosystem.
If you are building a production app that uses python in a containerized way, you may find yourself rebuilding the containers (and reinstalling packages) multiple times per day. For us, this was often the slowest part of rebuilds. UV has dramatically sped it up.
It seems like uv has already proven itself by being faster at every step, except maybe downloading. Notably, that includes unpacking and/or copying files from the cache into the new virtualenv, which is very fast.
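That cache is also where most of the Docker rebuild speedup comes from. A rough Dockerfile sketch using a BuildKit cache mount (the ghcr.io image path and the ~/.cache/uv default location are assumptions to verify against the uv docs):

    FROM python:3.12-slim

    # Copy the static uv binary in from its distribution image
    # (exact image path/tag is an assumption to verify).
    COPY --from=ghcr.io/astral-sh/uv:latest /uv /usr/local/bin/uv

    WORKDIR /app
    COPY requirements.txt .

    # The cache mount persists uv's download/unpack cache across builds, so
    # repeated rebuilds mostly hard-link files out of the cache instead of
    # re-downloading and re-extracting wheels.
    RUN --mount=type=cache,target=/root/.cache/uv \
        uv pip install --system -r requirements.txt

    COPY . .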
So it takes 3 seconds to run instead of 0.3?
Don't get me wrong, that's a huge improvement, but in my opinion not worth switching languages over.
Features should be developed and tested locally before any code is pushed to a CI system. Dependency resolution should happen once, while the container is being built. Containers themselves shouldn't be installing anything on the fly; everything should be baked in exactly once per build.
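Concretely, that usually means ordering the Dockerfile so the dependency layer caches independently of your source (file names here are illustrative):

    FROM python:3.12-slim
    WORKDIR /app

    # Copy only the dependency manifest first: the install layer below is
    # rebuilt only when requirements.txt changes, not on every source edit.
    COPY requirements.txt .
    RUN pip install --no-cache-dir -r requirements.txt

    # Source edits invalidate only these layers; dependencies stay baked in.
    COPY . .
    CMD ["python", "main.py"]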
Good idea! I've gone ahead and implemented this feature: if "obscure in URL" is turned on, the text won't be visible unless you focus on the textbox (e.g. to edit it).
I came on here looking for an article about all of the network issues last night streaming the game. Couldn't find one so I'll rant here in the comments ;-).
In my neighborhood we have 3 ISPs, but one (Google Fiber) only just recently became available, so there aren't many people on it since we already have Spectrum and AT&T fiber. Lots of people were complaining across different streaming services (YouTube TV, Hulu, Paramount+, etc.) and also across different internet providers (Spectrum and AT&T... just 1 data point for Google Fiber). Lots of buffering and scaling down to extremely low bitrates, to the point where you couldn't even make out how many timeouts were left and could barely make out the score.
Sending each customer their own bespoke video stream works fine for movies and shows, but apparently works terribly for popular live events.
Some sort of multicast solution would fix this... but then there's DRM.