Hacker News | lanza's comments

The difference in usefulness between ChatGPT free and ChatGPT Pro is significant. Turning up the compute spent on each embedded LLM inference call will be a valid path forward for years.

That's a JIT. It uses the same compiler infrastructure but swaps the AoT backend for LLVM's JIT backend. Notably, this blog post is targeting on-device usage, where a custom JIT is not allowed; you can only interpret.
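
(A minimal sketch of what "same infrastructure, JIT backend" looks like in practice, assuming a recent LLVM with ORC's LLJIT; "mod.ll" and "add1" are made-up placeholders, not anything from the post.)

    // Hand an existing IR module (the same IR an AoT pipeline would feed to
    // the static code generator) to an in-process JIT and run it.
    #include "llvm/ExecutionEngine/Orc/LLJIT.h"
    #include "llvm/ExecutionEngine/Orc/ThreadSafeModule.h"
    #include "llvm/IR/LLVMContext.h"
    #include "llvm/IR/Module.h"
    #include "llvm/IRReader/IRReader.h"
    #include "llvm/Support/Error.h"
    #include "llvm/Support/SourceMgr.h"
    #include "llvm/Support/TargetSelect.h"
    #include <memory>

    using namespace llvm;
    using namespace llvm::orc;

    int main() {
      // JIT-compiling in-process needs the native target and asm printer.
      InitializeNativeTarget();
      InitializeNativeTargetAsmPrinter();

      // Placeholder IR file; any module the AoT path would compile works.
      auto Ctx = std::make_unique<LLVMContext>();
      SMDiagnostic Err;
      std::unique_ptr<Module> M = parseIRFile("mod.ll", Err, *Ctx);
      if (!M) return 1;

      // Same IR, different backend: add the module to ORC's LLJIT instead of
      // emitting an object file ahead of time.
      auto JIT = cantFail(LLJITBuilder().create());
      cantFail(JIT->addIRModule(ThreadSafeModule(std::move(M), std::move(Ctx))));

      // Materialize and call a function from the module (assumed: int add1(int)).
      auto Addr = cantFail(JIT->lookup("add1"));
      auto *Add1 = Addr.toPtr<int (*)(int)>();
      return Add1(41) == 42 ? 0 : 1;
    }

The front end and IR passes are shared either way; only the final codegen/execution step differs, and that's exactly the step you can't ship when the platform only allows interpretation.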

Because the usefulness of an AI model lies in reliably solving a problem, not in being able to solve it given 10,000 tries.

Claude Code is still only a mildly useful tool because it's horrific beyond a certain breadth of scope. If I asked it to solve the same problem 10,000 times I'm sure I'd get a great answer to significantly more difficult problems, but that doesn't help me, as I'm not capable of scaling myself to check 10,000 answers.
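
(To put a number on the gap between "reliably solves it" and "can solve it in 10,000 tries": a small sketch of the standard pass@k estimator from the code-eval literature, 1 - C(n-c, k)/C(n, k); the figures below are made up for illustration.)

    // Unbiased pass@k estimator: the probability that at least one of k
    // sampled completions is correct, given c correct out of n samples.
    #include <cstdio>

    double pass_at_k(int n, int c, int k) {
      if (n - c < k) return 1.0;  // fewer than k incorrect samples exist, so a hit is guaranteed
      double all_wrong = 1.0;     // P(all k drawn completions are incorrect)
      for (int i = n - c - k + 1; i <= n - c; ++i)
        all_wrong *= static_cast<double>(i) / (i + c);
      return 1.0 - all_wrong;
    }

    int main() {
      // A model that is right on only 0.1% of attempts...
      std::printf("pass@1     = %.4f\n", pass_at_k(100000, 100, 1));      // ~0.001
      // ...still "solves" the task almost surely given 10,000 tries.
      std::printf("pass@10000 = %.4f\n", pass_at_k(100000, 100, 10000));  // ~1.0
      return 0;
    }

A near-perfect pass@10000 alongside a 0.1% pass@1 is the gap being pointed at here: the many-tries number says little about how useful any single answer is.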


Without reading an entire novel's worth of text, do they explain why they picked these dates? They have a separate timelines post where the 90th-percentile date for a superhuman coder is later than 2050. Did they just go for shock value and pick the scariest timeline?


Only gripe I have with the tool is that once you've gotten a country right a few times it zooms in too far. I still had no clue where Eritrea was after getting it right like four times. Just got lucky.

But now that the map only shows me three possible countries I can trivially remember which one it was. Ask me again tomorrow while only showing me the full map and I might guess it's in South America.


Wrong thread?


> I really don't understand what their endgame is here.

To not lose. History is full of stories of incumbents not wanting to cannibalize themselves and dying because of it.


> Nobody up to this day has been able to give a formal mathematical definition of intelligence, let alone a proof that it can be reduced to a computable function.

We can't prove the correctness of most of physics. Should we call that a dead end too?


> Llama 3.1 405B can currently replace junior engineers

lol


Can an LLM join a standup call? Can an LLM create a merge request?

At the moment it looks like an experienced engineer can pressure an LLM into hallucinating junior-level code.


The argument is that, instead of hiring a junior engineer, a senior engineer can simply produce enough output to match what the junior would have produced and then some.

Of course, that means you won't be able to train them up, at least for now. That being said, even if they "only" reach the level of your average software developer, they're already going to have pretty catastrophic effects on the industry.

As for automated fixes, there are agents that _can_ do that, like Devin (https://devin.ai/), but it's still early days and bug-prone. Check back in a year or so.


Not training new workers and relying on senior engineers with tools is short-sighted and foolish.

LLMs seem to be accelerating the trend.


On one hand, I somewhat agree; on the other hand, I think LLMs and similar tooling will allow juniors to punch far above their weight and learn and do things they would never have dreamed of before. As mentioned in another comment, they're the teacher that never gets tired and can answer any question (with the necessary qualifications about correctness, learning the answer but not the reasoning, etc.).

It remains to be seen if juniors can obtain the necessary institutional / "real work" experience from that, but given the number of self-taught programmers I know, I wouldn't rule it out.


I think many people using llms are faking it and have no interest in “making it”.

It’s not about learning for most.

Even if a small subset of intelligent and motivated people use the tools to become better programmers, a larger number of people will use them to “cheat”.


Tools are foolish? Like, should we remove all of the other tools that make senior engineers more productive, in favor of hiring more people to do those same tasks? That seems questionable.


Tools are great, but there is a way to learn the fundamentals and progress through skills and technology.

Learn to do something manually and then learn the technology.

Do you want engineers who are useless if their calculator breaks or do you want someone who can fall back on pen and paper and get the work done?


Well what if their pen breaks? Perhaps a good fluid dynamics engineer needs to be able to create ink from common plants?

I get the argument, it’s just silly. Calculators don’t “break”. I would rather have an engineer who uses highly reliable tools than one who is so obsessed with the lowest levels of the stack that they aren’t as strong at the top.

I’m willing to live with a useless day in the insanely unlikely event that all readily available calculators stop working.


There's an incentive problem: the benefit from training new workers is distributed across all companies, whereas the cost of training them falls on the single company that does so.


Most broken systems have bad incentives.

Companies don’t want to train people ($) because employees with skills and experience become more valuable to other companies, and retention is also expensive.

We are not training AND retaining talent.


> The argument is that, instead of hiring a junior engineer, a senior engineer can simply produce enough output to match what the junior would have produced and then some.

...and that's just as asinine a claim as the original one


Why? I can say that, in my personal experience, AI has allowed me to work more efficiently as a senior engineer: I can describe the behaviour I want, scan over the generated code and make any necessary fixes much faster than either writing the code myself or having a junior do it.


Plain grift, or are they high on their own supply?


Both? Both is good.


When shopping for a new car to take to the race track on the weekends did you stop and point out that the Honda Odyssey's suspension is too soft?


If I see a single ad I'm uninstalling it and switching to Apple maps.


I've driven to the wrong location twice because the top search result was an ad. e.g. I searched for "Home Depot" and immediately clicked on the top link (there's only one Home Depot nearby) and after a few mins realized I was headed somewhere else entirely, because they'd injected a tiny subtle ad above their search results.

