Hacker News new | past | comments | ask | show | jobs | submit | ericfrederich's comments login

4k gaming is dumb. I watched an LTT video that came out today where Linus said he primarily uses gaming monitors and doesn't mess with 4k.


No it's not. 2560x1440 has terrible PPI on larger screens. Either way with a 4k monitor you don't technically need to game at 4k as most intensive games offer DLSS anyway.


What matters is the PPD, not the PPI, otherwise it's an unsound comparison.
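For reference, here's a minimal sketch of the PPD calculation (my own helper, not from the thread; it assumes a 16:9 panel and an arbitrary ~24 inch viewing distance):

```python
import math

def pixels_per_degree(h_res: int, diag_in: float,
                      aspect: tuple[int, int] = (16, 9),
                      distance_in: float = 24.0) -> float:
    """Horizontal pixels per degree of visual angle at a given viewing distance."""
    w, h = aspect
    width_in = diag_in * w / math.hypot(w, h)  # physical panel width in inches
    h_fov_deg = 2 * math.degrees(math.atan(width_in / (2 * distance_in)))
    return h_res / h_fov_deg

# At the same distance, a 32" 4K panel still beats a 27" 1440p one on PPD.
print(round(pixels_per_degree(2560, 27)))  # ~49
print(round(pixels_per_degree(3840, 32)))  # ~64
```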


Too much personal preference with PPD. When I upgraded to a 32" monitor from a 27" one, I didn't push my display through my wall; it sat in the same position.


Not entirely clear on what you mean, but if you refuse to reposition your display or yourself after hopping between diagonal sizes and resolutions, I'd say it's a bit disingenuous to blame or praise either afterwards. Considering you seem to know what PPD is, I think you should be able to appreciate the how and why.


And FSR, which is cross gpu vendor.


Not anymore. FSR4 is AMD only, and only the new RDNA4 GPUs.


I have seen AMD's PR materials for RDNA4, and as far as I can tell, they do not say anything like that anywhere.

People read too much into "designed for RDNA4".


https://cdn.videocardz.com/1/2025/01/AMD-FSR4-9070.jpg

Why would they write that on their marketing slides?


Because it only works on these cards right now.

Further elaborated by their GPU marketing people in interviews. To summarize: "RDNA4 for now" and "we're looking into supporting older...".


Yep. I have both 4k and 1440p monitors and I can’t tell the difference in quality so I always use the latter for better frames. I use the 4k for reading text though, it’s noticeably better.


That's why I also finally went from 1920x1200 to 4k about half a year ago. It was mostly for reading text and programming, not gaming.

I can tell the difference in games if I go looking for it, but in the middle of a tense shootout I honestly don't notice that I have double the DPI.


There are good 4K gaming monitors, but they start at over $1200 and if you don't also have a 4090 tier rig, you won’t be able to get full FPS out of AAA games at 4k.


I still have a 3080 and game at 4K/120Hz. Most AAA games that I try can pull 60-90Hz at ~4K if DLSS is available.


Most numbers people are touting are from "ultra everything" benchmarks; lowering the settings + DLSS makes 4k perfectly playable.


I've seen analysis showing that DLSS might actually yield a higher quality image than barebones for the same graphics settings owing to the additional data provided by motion vectors. This plus the 2x speedup makes it a no brainer in my book.


Also, ultrawide monitors. They exist and provide more immersion. The typical resolution is 3440x1440, which is high but at the same time has low PPI (basically a regular 27" 1440p monitor with extra width). Doubling that is way outside modern GPU capabilities.


A coworker who is really into flight sims runs 6 ultrawide curved monitors to get over 180 degrees around his head.

I have to admit with the display wrapping around into peripheral vision, it is very immersive.


Almost no one plays on native 4k anyway. DLSS Quality (no framegen etc) renders at 1440p internally and by all accounts there is no drawback at all, especially above 60fps. Looks great, no noticeable (excluding super sweaty esports titles) lag and 30% more performance. Combined with VRR displays, I would say 4k is perfectly ok for gaming.
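As a rough illustration of the internal render resolutions (the scale factors below are the commonly published DLSS 2 preset values, treated as approximate; individual games can override them):

```python
# Per-axis render-scale factors commonly published for DLSS 2's presets.
# Treat these as approximate; individual games can override them.
DLSS_SCALE = {
    "Quality": 2 / 3,
    "Balanced": 0.58,
    "Performance": 0.5,
    "Ultra Performance": 1 / 3,
}

def internal_resolution(out_w: int, out_h: int, mode: str) -> tuple[int, int]:
    """Resolution the game actually renders at before DLSS upscales it."""
    s = DLSS_SCALE[mode]
    return round(out_w * s), round(out_h * s)

print(internal_resolution(3840, 2160, "Quality"))  # (2560, 1440)
```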


Taking anything Linus or LTT says seriously is even dumber....


I watched the same video you're talking about [1], where he's trying the PG27UCDM (a new 27" 4K 240Hz OLED "gaming monitor" [2]). His first impressions are "it's so clean and sharp"; then he starts Doom Eternal and after a few seconds he says "It's insane [...] It looks perfect".

[1] https://www.youtube.com/watch?v=iQ404RCyqhk

[2] https://rog.asus.com/monitors/27-to-31-5-inches/rog-swift-ol...


Nonsense. 4k gaming was inevitable as soon as 4k TVs went mainstream.


Today someone's pipeline broke because they were using python:3 from Dockerhub and got an unexpected upgrade ;-)

Specifically, pendulum hasn't released a wheel yet for 3.13 so it tried to build from source but it uses Rust and the Python docker image obviously doesn't have Rust installed.
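One way to avoid that class of surprise is to pin the base image to a specific minor version (or a digest) rather than the floating tag; a hedged sketch (the toolchain line is just an illustration of what a from-source build might need):

```dockerfile
# Pin to a specific minor (or better, a patch tag or digest) instead of the
# floating `python:3` tag, so a new Python release can't silently break the build.
FROM python:3.12-slim

# If a dependency may have to build from source on a new interpreter,
# a toolchain has to be present too (pendulum's 3.13 case needed Rust), e.g.:
# RUN apt-get update && apt-get install -y --no-install-recommends build-essential
```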


Wow, that's crazy. I tried a 6 digit hash and got a 404, then I tried another 6 digit hash and got "This commit does not belong to any branch on this repository, and may belong to a fork outside of the repository."

Insane


> 1) Fork the repo. 2) Hard-code an API key into an example file. 3) <Do Work> 4) Delete the fork.

... yeah if <Do Work> is push your keys to GitHub.


R2 is only "free" until it isn't. Cloudflare hasn't gotten a lot of good press recently. Not something I'd wanna build my business around.


Aside from the casino story (high value target that likely faces tons of attacks, therefore an expensive customer for CF), did something happen with them? I'm not aware of bad press around them in general


R2 egress is free.


Why Rust? Aren't you alienating Python devs from working on it?

I see that UV is bragging about being 10-100x faster than pip. In my experience the time spent in dependency resolution is dwarfed by the time making web requests and downloading packages.

Also, this isn't something that runs every time you run a Python script. It's run once, during installation of a Python package.


I actually think that Python's tooling should not be written in Python, because otherwise you end up with at least two versions of Python: one to run the tooling, one to run the project.


I'm not sure of the answer, but one thing Rust has obviously bought them is native binaries for Mac/Windows/Linux. For a project that purports to be about simplicity, it's very important to have an onboarding process that doesn't replicate the problems of the Python ecosystem.


If you are building a production app that uses python in a containerized way, you may find yourself rebuilding the containers (and reinstalling packages) multiple times per day. For us, this was often the slowest part of rebuilds. UV has dramatically sped it up.


Uv has already proven itself by being faster at every step it seems like, except maybe downloading. But notably it includes unpacking and/or copying files from cache into the new virtualenv, which is very fast.


It parallelizes downloads and checking of the packages.

It also doesn't compile .py files to .pyc at install time by default, but that just defers the cost to first import.
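That deferred cost can be paid once at image-build time instead. A small sketch using the stdlib, demonstrated on a throwaway directory rather than a real site-packages:

```python
import compileall
import pathlib
import tempfile

# Simulate a freshly installed package: .py sources, no bytecode cache yet.
pkg_dir = pathlib.Path(tempfile.mkdtemp())
(pkg_dir / "mod.py").write_text("VALUE = 42\n")

# Precompile everything to .pyc ahead of time -- the step an installer that
# skips compile-on-install would otherwise defer to first import.
compileall.compile_dir(str(pkg_dir), quiet=1)

pyc_files = list((pkg_dir / "__pycache__").glob("mod.*.pyc"))
print(len(pyc_files))  # 1: the bytecode cache now exists
```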


It runs every time you build a docker image or build something in your CI


So it takes 0.3 seconds to run instead of 3? Don't get me wrong, that's a huge improvement, but in my opinion not worth switching languages over.

Features should be developed and tested locally before any code is pushed to a CI system. Dependency resolution should happen once while the container is being built. Containers themselves shouldn't be installing anything on the fly it should be baked in exactly once per build.


Modern CI can also cache these dependency steps, through the BuildKit based tools (like Buildx/Dagger) and/or the CI itself (like GHA @cache)
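For example, with BuildKit a cache mount keeps the installer's download cache warm across otherwise-unrelated builds (the requirements.txt path here is just illustrative):

```dockerfile
# syntax=docker/dockerfile:1
FROM python:3.12-slim
COPY requirements.txt .
# The cache mount persists across builds on the same builder, so unchanged
# dependencies come from the local pip cache instead of being re-downloaded.
RUN --mount=type=cache,target=/root/.cache/pip \
    pip install -r requirements.txt
```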


Wait until you realize that "giving up the decently sized ecosystem of Powershell libraries" is a net positive ;-)


Would be nice if the "obscure in URL" feature wouldn't show the text in the textbox when you send it to someone.


Good idea! I've gone ahead and implemented this feature: if "obscure in URL" is turned on, the text won't be visible unless you focus on the textbox (e.g. to edit it).


Well noticed. Good point ...

(Or an additional "Obscure in textbox" checkbox or something along those lines ...)


Dude, let's fix spam callers first that are calling my USA number from a USA number.

This shouldn't be hard. If we can't fix that then good luck tracking down bad actors on the interwebs


I've nearly given up on my phone as a device for making calls because of this.


I came on here looking for an article about all of the network issues last night streaming the game. Couldn't find one so I'll rant here in the comments ;-).

In my neighborhood we have 3 ISPs, but one (Google Fiber) only recently became available, so there aren't many people on it since we already have Spectrum and AT&T fiber. Lots of people were complaining across different streaming services (YouTube TV, Hulu, Paramount+, etc.) and also across different internet providers (Spectrum and AT&T... just 1 data point for Google Fiber). Lots of buffering, and scaling down to extremely low bitrates where you couldn't even make out how many timeouts were left and could barely make out the score.

Sending each customer their own bespoke video stream works fine for movies and shows, but apparently works terribly for popular live events.

Some sort of multicast solution would fix this... but then there's DRM.

