
I'm not sure I'd say that React's DX is actually that great. I've watched C++ developers come over from Qt, and the learning curve is massive. Plus, almost all the bugs come from dealing with React's renders.

I think React by default is weird because most things don't actually need that degree of reactivity. Like, if most of the time only one thing is displaying or updating a value, React just adds a lot of headache. And the amount of useEffect I see (by necessity, mind you), plus the need for linters just to figure out whether you missed something in a dependency list... I just think it's the wrong model for most projects.
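
A minimal sketch of the dependency-list problem I mean (hypothetical component; the endpoint and names are made up):

    import { useEffect, useState } from "react";

    // Hypothetical component: should re-fetch whenever `userId` changes.
    function UserBadge({ userId }: { userId: string }) {
      const [name, setName] = useState("");

      useEffect(() => {
        let cancelled = false;
        fetch(`/api/users/${userId}`) // made-up endpoint
          .then((res) => res.json())
          .then((user) => { if (!cancelled) setName(user.name); });
        return () => { cancelled = true; };
      }, []); // BUG: `userId` is missing from the deps, so the effect never
              // re-runs when the prop changes. Only the exhaustive-deps
              // lint rule will tell you.

      return <span>{name}</span>;
    }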


AI right now seems more like a religious movement than a business one. It doesn't matter how much it costs (to the true believers); it's about getting to AGI first.


Assuming users accept those ads. Like, would they make it clear with a "sponsored" section, or would they just try to worm it into the output? I could see a lot of ways users could reject the ad service, especially if it's seen to compromise the utility or correctness of the output.


Billions of people use Google, YouTube, Facebook, TikTok, Instagram, etc. and accept the ads. Getting similar ad rates would make OpenAI fabulously profitable. They have no need to start with ad formats that might be rejected by users. Even if that were the intended endgame, you'd want to boil the frog for years.


If they're profitable, why on earth are they seeking crazy amounts of investment month after month? It seems like they'll raise $10 billion one month, then immediately turn around and raise another $10 billion a month or two later. If it's for training, it seems like a waste of money, since GPT-5 doesn't seem like that much of an improvement.


In terms of sources, I would trust Zitron a lot more than Altman or Amodei. To be charitable, those CEOs are known for their hyperbole and for saying whatever is convenient in the moment, but they certainly aren't careful about being precise, and they're happy to leave out inconvenient details. Which is what a CEO should do, more or less, but I wouldn't trust their word on most things.


I agree we should not take CEOs at their word, we have to think about whether what they're saying is more likely to be true than false given other things we know. But to trust Zitron on anything is ridiculous. He is not a source at all: he knows very little, does zero new reporting, and frequently contradicts himself in his frenzy to believe the bubble is about to pop any time now. A simple example: claiming both that "AI is very little of big tech revenue" and "Big tech has no other way to show growth other than AI hype". Both are very nearly direct quotes.


Those two statements are not contradictory, and thinking that they are betrays a pretty fundamental misunderstanding of his basic thesis.

The first statement is about the present value of AI. The second is about big tech's belief in the future value of AI.


It is not about the present and future value of AI at all. It is about the present and future value of things other than AI. Here is the full quote:

"There is nothing else after generative AI. There are no other hypergrowth markets left in tech. SaaS companies are out of things to upsell. Google, Microsoft, Amazon and Meta do not have any other ways to continue showing growth, and when the market works that out, there will be hell to pay, hell that will reverberate through the valuations of, at the very least, every public software company, and many of the hardware ones too."

I am not doing some kind of sophisticated act of interpretation here. If AI is very little of big tech revenue, and big tech are posting massive record revenue and profits every quarter, then it cannot be the case that "there is nothing left after generative AI" and that they "do not have any other ways to continue showing growth": what is left is whatever is driving all that revenue and profit growth right now!


Because to make GPT-5 or Claude better than previous models, you need to do more reasoning, which burns a lot more tokens. So your per-token costs may drop, but you may also need a lot more tokens.
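
Back-of-the-envelope version (all the prices and multipliers here are made up, just to show the shape of the math):

    // Made-up numbers: per-token price halves, but the reasoning model
    // burns 4x the tokens per answer.
    const oldPricePerMTok = 10; // $/million tokens (assumed)
    const newPricePerMTok = 5;  // $/million tokens (assumed)

    const oldTokensPerAnswer = 2_000;
    const newTokensPerAnswer = 8_000; // reasoning-token inflation (assumed)

    const oldCost = (oldTokensPerAnswer / 1e6) * oldPricePerMTok; // $0.02
    const newCost = (newTokensPerAnswer / 1e6) * newPricePerMTok; // $0.04

    console.log({ oldCost, newCost }); // cheaper per token, 2x per answer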


GPT-5 can be configured extensively. Is there any point at which a configuration of GPT-5 that offers ~DeepSeek-level performance is more expensive than DeepSeek per token?


I thought the thing that made DeepSeek interesting (besides competition from China) was that its inference costs were something like 1/10th. So unless that gap has been bridged (has it?), I don't think a calculation based on DeepSeek can apply to OpenAI or Anthropic.


From a privacy and security standpoint, hell no!


I'd love this kind of thing without the incredibly obnoxious commentary. I felt like I was reading propaganda rather than a how-to.


I don't know much about audit logs, but the more concerning thing to me is that it sounds like it's up to the program reading the file to register an access. Shouldn't that be something at the file system level? I'm a bit baffled why this is a Copilot bug instead of a file system bug, unless Copilot has special privileges? (Also, to that: ick!)
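
To illustrate the distinction (all the paths and names here are hypothetical): if the audit entry is only written by a cooperating wrapper, any code path that reads the file directly leaves no trace.

    import { readFileSync, appendFileSync } from "node:fs";

    // Hypothetical app-level audit: the log entry exists only if the
    // caller opts in to this wrapper.
    function auditedRead(path: string, actor: string): string {
      appendFileSync(
        "/var/log/app-audit.log", // assumed log location
        `${new Date().toISOString()} ${actor} read ${path}\n`
      );
      return readFileSync(path, "utf8");
    }

    auditedRead("/srv/docs/secret.txt", "copilot"); // shows up in the log
    readFileSync("/srv/docs/secret.txt", "utf8");   // silent: no log entry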


I suspect this might be typical RAG, where there's a vector index or chunked data it looks at.
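
For anyone unfamiliar, the typical shape is something like this toy sketch (real systems use a learned embedding model and a proper vector index; the word-count "embedding" here just keeps it self-contained):

    // Toy RAG retrieval: chunk documents, embed, rank by cosine similarity.
    function embed(text: string): Map<string, number> {
      const v = new Map<string, number>();
      for (const w of text.toLowerCase().split(/\W+/).filter(Boolean)) {
        v.set(w, (v.get(w) ?? 0) + 1);
      }
      return v;
    }

    function cosine(a: Map<string, number>, b: Map<string, number>): number {
      let dot = 0, na = 0, nb = 0;
      for (const [w, x] of a) { dot += x * (b.get(w) ?? 0); na += x * x; }
      for (const [, y] of b) nb += y * y;
      return na && nb ? dot / Math.sqrt(na * nb) : 0;
    }

    const chunks = [
      "Audit logs record who accessed which file and when.",
      "Documents get split into chunks and indexed for retrieval.",
      "Vector search ranks chunks by similarity to the query.",
    ];

    const query = embed("how are document chunks retrieved?");
    const ranked = chunks
      .map((text) => ({ text, score: cosine(query, embed(text)) }))
      .sort((a, b) => b.score - a.score);

    console.log(ranked[0].text); // top chunk goes into the model's context

(Which may also be relevant to the audit question upthread: a lookup against a prebuilt index never has to touch the original file.)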

