
Why not link the paper on arxiv? https://arxiv.org/abs/2508.21038

Social interactions are subject to network effects and have been all but captured by social media companies. This part of the web is dead and won't come back.

Is this true for text? Glyph rendering is complicated, and because of that I have a hard time imagining a high-quality GPU renderer that can beat a CPU renderer.

I believe the situation has changed in the last year or so? I remember reading a bunch of stuff lately about text rendering finally being pushed onto the GPU.

The SDL3 TTF library has the TextEngine API to build a font atlas targeting the CPU or GPU (e.g. https://wiki.libsdl.org/SDL3_ttf/TTF_CreateGPUTextEngine). It's slightly more cumbersome because you have to work with TTF_Text objects instead of char*s, but it's quite doable.
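
For illustration, a minimal sketch of the TextEngine flow using the renderer-backed engine (the GPU engine from the link above is created from an SDL_GPUDevice via TTF_CreateGPUTextEngine but follows the same TTF_Text pattern); the font path is a placeholder and error handling is omitted:

    #include <SDL3/SDL.h>
    #include <SDL3_ttf/SDL_ttf.h>

    int main(void) {
        SDL_Init(SDL_INIT_VIDEO);
        TTF_Init();

        SDL_Window *win = SDL_CreateWindow("text demo", 640, 480, 0);
        SDL_Renderer *ren = SDL_CreateRenderer(win, NULL);

        /* The engine owns the glyph atlas; one engine serves many TTF_Text objects. */
        TTF_TextEngine *engine = TTF_CreateRendererTextEngine(ren);
        TTF_Font *font = TTF_OpenFont("some_font.ttf", 24.0f); /* placeholder path */

        /* Instead of passing a char* to a draw call, you build a TTF_Text object
           (length 0 means null-terminated) and draw that. */
        TTF_Text *text = TTF_CreateText(engine, font, "Hello, world", 0);

        SDL_RenderClear(ren);
        TTF_DrawRendererText(text, 20.0f, 20.0f); /* renders from the cached atlas */
        SDL_RenderPresent(ren);

        TTF_DestroyText(text);
        TTF_DestroyRendererTextEngine(engine);
        TTF_CloseFont(font);
        TTF_Quit();
        SDL_Quit();
        return 0;
    }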

It matters who you communicate concerns to. Something as fundamental as "I think that your team shouldn't even exist" should go to the team leads and their managers exclusively at first. Writing that to the entire affected team is counterproductive in any organization because it unnecessarily raises anxiety and reduces team productivity and focus. Comments like this from influential people can have big mental and physical health impacts on people.

This entire situation looks very suspicious. Was Carmack even responsible for triaging research projects and allocating resources to them? If yes, then he should have fought that battle earlier. If not, then the best he could have done was to refuse to use that OS in projects he controls.

It should be fine to give your opinion on efforts.

Yeah it sounds to me here like the urge to reach for HR had less to do with Carmack and more to do with the overall culture at Meta.

Carmack had no direct say over research AFAIK.

That’s not how big companies work.

Not when this is a personal opinion that he expected nothing to follow from.

"I think that your team shouldn't even exist" doesn't mean "I want your team to no longer exist.".


But the name Carmack carries some clout and people listen to him (too) closely because of his reputation alone. This is soft power that automatically comes with responsibility.

Yes and he used it to try and stop something he saw as a total waste.

If I were on that team, I'd welcome the opportunity to tell John Carmack why he was wrong, or, if I agreed, start looking for another project to work on.

When I was on nuclear submarines we'd call what you are advocating "keep us in the dark and feed us bullshit."


This assumes that you would be sincerely listened to, which you wouldn't in a case like this. Higher ups in large organizations don't have the bandwidth to listen to everybody.

Your sub's officers also need to constantly be aware of what to communicate to whom and in which language. Your superiors certainly kept you in the dark about a ton of concerns that were on their plate because simply mentioning them to subordinates would have been too distracting.


You say your piece and, if not heard, do an internal transfer. This whole "don't tell people the truth about technical matters so as not to hurt their feelings or disrupt some people's paychecks" is not serious business.

I want to know where you have found a workplace staffed entirely by androids. What you're advocating for would fall apart the moment it had contact with humans. That's why diplomacy is both necessary and difficult. Knowing how to navigate hard conversations seems to be a lost art, replaced by avoidance or tactless "brutal honesty".

Maybe on a mediocre team. But that was the parent comment's point.

On well-functioning teams, product feedback shouldn't have to be filtered through layers of management. In fact, it would be dishonest to discuss something like this with managers while hiding it from the rest of the team.


> Comments like this from influential people can have big mental and physical health impacts on people.

So what are we supposed to do? Just let waste continue? The entire point of engineering is to understand the tradeoffs of each decision and to be able to communicate them to others...


I also think that they have access to more helpful resources than people outside the field do, e.g. being able to contact people working on the lower layers to get the missing info. These channels exist in the professional world, but they are hard to access.

Depends on what electricity costs where you live. It can be anywhere from 10 ct/kWh to 45 ct/kWh, and that makes a huge difference at the end of the month.

A huge difference, or between a (10W x (8,760 hours/12) x 10¢/kWh =) 73¢ and $3.29 difference per month?

If you want a "raw" driving experience, you need to go on a race track in a "proper" race car. I use the quotes because you could come up with very different definitions of both depending on your particular perspective. Amateur car races are a thing, btw.

I'm glad that all these assistants exist for road vehicles. I think of myself as a fairly disciplined driver (well, who am I kidding, really?), but these systems have saved my bacon more than once over the years.


That is wishful thinking. Every layer we've added between humans and the machines (and even the ones in the machines themselves) takes hordes of dedicated humans to maintain: IDEs, compilers/interpreters, linters, CI tools, assemblers, linkers, operating systems, firmware, microcode, circuitry, circuit elements (manufacturing processes).

Just about every time somebody on this site says “we developers”, you can assume they’re ignoring the (large majority of) developers that don’t work on the same things they do, in the same way.

Yes, all those ever-growing layers of intricate abstraction that you take for granted and “don’t have to worry about” are conceived of, designed, built, and maintained by developers. Who do you think wrote the compiler for that syntax you don’t want to learn?


The point of abstraction is that it doesn’t leak. Most developers don’t need to understand compiler theory or assembly to be productive.

No one in my company writes assembly. Very few developers work at that level of abstraction - this means those who made the compilers are doing a good job.


Yes, and very few people working on compilers do OS kernels, and very few people working on databases do compilers, etc. etc. My point is, they're all developers, so when you say "we developers", you'd better be speaking for all of them.

I agree with you. But not many people work with or understand the abstraction at OS or circuitry level.

That’s kind of my point: most people will work on higher abstractions but there will be some who maintain lower ones.

I write C#, but I barely care about memory, GC, microcontrollers, or assembly. The vast majority of people work on higher abstractions.


I would challenge whether it is really a vast majority working at these highest levels of abstraction. There are thousands of people working on C#, Java and JavaScript runtimes and basic libraries. There are thousands of people working on compilers, and thousands more (more likely tens of thousands) working on operating systems and drivers etc. I think that the amount of effort that goes into all of this is severely underestimated because it is so far removed from the perspective of a high-level application developer.

Sometimes (only sometimes, I promise) I wonder whether this kind of legislation is being dreamt up by a think tank tasked with planning how to implement some ulterior goal (e.g. massively increased surveillance to fight crime - it's far too easy to insert something more nefarious here). The politicians then just follow the action plan and repeat talking points from party advisors.

Like the German Social Democratic Party did in 1933? How well did that go?

Very well, until it didn't.



This is the only mention of "age verification" in all 900 pages of Project 2025:

"In addition, some of the methods used to regulate children’s internet access pose the risk of unintended harms. For instance, age verification regulations would inevitably increase the amount of data collection involved, increasing privacy concerns. Users would have to submit to platforms proof of their age, which raises the risks of data breach or illegitimate data usage by the platforms or bad actors. Limited-government conservatives would prefer the FTC play an educational role instead. That might include best practices or educational programs to empower parents online."

The policy recommendations for "Protecting Children Online" are found on page 875. The two main recommendations they make are:

"The FTC should examine platforms’ advertising and contract-making with children as a deceptive or unfair trade practice, perhaps requiring written parental consent."

"The FTC can and should institute unfair trade practices proceedings against entities that enter into contracts with children without parental consent. Personal parental responsibility is, of course, key, but the law must respect, not undermine, lawful parental authority."

https://static.heritage.org/project2025/2025_MandateForLeade...


Project 2025 also asserts that porn isn't protected by the first amendment at all and should be banned. It seems disingenuous to ignore that.

I provided what was immediately pertinent, and I linked to the full, searchable document.

That’s not disingenuous.

Yes, they oppose porn. They do not advocate for age verification as the solution to it (or age verification at all), which is what would make their position on porn relevant to the topic at hand.


I haven't had even a cursory look at the state of the art in decoders for 10+ years. But my intuition says that decoding for display could profit a lot from GPU acceleration in the later parts of the process, when there is already pixel data of some sort involved. I imagine that the initial decompression steps could stay on the CPU, and the decompressed, but still (partially) encoded, data is streamed to the GPU for the final transformation steps and application to whatever I-frames and other base images there are. Steps like applying motion vectors, the iDCT... look embarrassingly parallel at a pixel level to me.

When the resulting frame is already in a GPU texture, displaying it has fairly low overhead.
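
A toy sketch of the kind of step I mean (hypothetical names, grayscale only, no sub-pixel interpolation, and it assumes the motion vector stays inside the reference frame):

    #include <stdint.h>

    /* Toy motion compensation for one 16x16 block of a grayscale frame.
     * ref: previous decoded frame, res: decoded residual for this block,
     * (mx, my): motion vector. Every output pixel depends only on the
     * inputs, so all pixels of all inter blocks could run in parallel
     * as one big GPU dispatch. */
    void mc_block(const uint8_t *ref, const int16_t *res, uint8_t *out,
                  int stride, int bx, int by, int mx, int my)
    {
        for (int y = 0; y < 16; y++) {
            for (int x = 0; x < 16; x++) {
                int sx = bx + x + mx;           /* source position in reference */
                int sy = by + y + my;
                int pix = ref[sy * stride + sx] + res[y * 16 + x];
                if (pix < 0)   pix = 0;         /* clamp to 8-bit range */
                if (pix > 255) pix = 255;
                out[(by + y) * stride + (bx + x)] = (uint8_t)pix;
            }
        }
    }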

My question is: how wrong am I?


I'm not an expert, but in the worst case, you might need to decode dense 4x4-pixel blocks which each depend on fully-decoded neighbouring blocks to their west, northwest, north and northeast. This would limit you to processing `frame_height * 4` pixels in parallel, which seems bad, especially for memory-intensive work. (GPUs rely on massive parallelism to hide the latency of memory accesses.)

Motion vectors can be large (for example, 256 pixels for VP8), so you wouldn't get much extra parallelism by decoding multiple frames together.

However, even if the worst-case performance is bad, you might see good performance in the average case. For example, you might be able to decode all of a frame's inter blocks in parallel, and that might unlock better parallel processing for intra blocks. It looks like deblocking might be highly parallel. VP9, H.265 and AV1 can optionally split each frame into independently-coded tiles, although I don't know how common that is in practice.
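
To illustrate the worst case: a block (r, c) with dependencies to its W, NW, N and NE neighbours only depends on blocks with a smaller wave index w = 2*r + c, so a decoder could launch one parallel pass per wave - but each wave holds at most frame_height / 4 blocks. A toy schedule, with a hypothetical decode_block callback:

    /* Wavefront schedule over 4x4 intra blocks, where block (r, c) depends on
     * (r, c-1), (r-1, c-1), (r-1, c) and (r-1, c+1). Blocks with equal
     * w = 2*r + c are mutually independent, so the inner loop could be one
     * parallel GPU dispatch per wave. */
    void decode_intra_wavefront(int rows, int cols,
                                void (*decode_block)(int r, int c))
    {
        int max_wave = 2 * (rows - 1) + (cols - 1);
        for (int w = 0; w <= max_wave; w++) {
            /* Every dependency of a block in this wave has a smaller w,
             * so all iterations of this loop are independent. */
            for (int r = 0; r <= w / 2 && r < rows; r++) {
                int c = w - 2 * r;
                if (c >= 0 && c < cols)
                    decode_block(r, c);   /* parallelizable within a wave */
            }
        }
    }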

