So the team I lead does a lot of research on the “plumbing” around LLMs, both from a technical and a product-market perspective.

What I’ve learned is that, for the most part, the AI revolution is not going to come from PhD-level LLMs. It will come from people being better equipped to use high-schooler-level LLMs to do their work more efficiently.

We have some knowledge graph experiments where LLMs continuously monitor user actions on Slack, GitHub, etc. and build up an expertise store. They learn about your work and your workflows, and then you can RAG them.
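Roughly, the ingestion side looks something like the sketch below (the event schema and node structure are simplified placeholders for illustration, not our actual stack):

    # Hypothetical sketch: fold observed work events (Slack, GitHub, ...)
    # into an embedded "expertise store" that can later be queried.
    from dataclasses import dataclass, field

    @dataclass
    class ExpertiseNode:
        text: str                  # e.g. "Reviewed PR: prefers small, focused diffs"
        source: str                # "slack", "github", ...
        embedding: list[float]
        related: list[str] = field(default_factory=list)  # ids of linked nodes

    def ingest_event(event: dict, store: dict, embed) -> None:
        """Summarize an event into a node; `embed` is any text-embedding function."""
        node = ExpertiseNode(
            text=event["summary"],
            source=event["source"],
            embedding=embed(event["summary"]),
        )
        store[event["id"]] = node

The retrieval side is then just nearest-neighbor search over those embeddings.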

In user testing, people most closely associated this experience with having someone who can essentially read their minds and auto-suggest their work outputs. Basically, it’s like having another team member.

Since these are just nodes in a knowledge graph, you can mix and match expertise bases that span several skills too. E.g. a PM who understands the nuances of technical feasibility.

And it required no user training or manual prompting of LLMs.

So while GPT-5 may be delayed, I don’t think that’s stopping or slowing down a revolution in knowledge-worker productivity.




This ^^^^^!!

Progress in the applied domain (the sort of progress that makes a difference in the economy) will come predominantly from integrating and orchestrating LLMs, with improvements to models adding a little bit of extra fuel on top.

If we never get any model better than what we have now (several GPT-4-quality models and some stronger models like o1/o3) we will still have at least a decade of improvements and growth across the entire economy and society.

We haven't even scratched the surface in the quest to understand how to best integrate and orchestrate LLMs effectively. These are very early days. There's still tons of work to do in memory, RAG, tool calling, agentic workflows, UI/UX, QA, security, ...
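To make one of those concrete, here's a bare-bones sketch of a tool-calling loop; the `llm` callable and the tool registry are stand-ins for illustration, not any particular framework's API:

    # Bare-bones tool-calling loop; `llm` is any function that takes the
    # message history and returns {"answer": ...} or {"tool": ..., "args": ...}.
    import json

    TOOLS = {
        "search_tickets": lambda q: json.dumps(["TICKET-42: login bug"]),  # stub
    }

    def run_agent(llm, user_msg, max_steps=5):
        messages = [{"role": "user", "content": user_msg}]
        for _ in range(max_steps):
            reply = llm(messages)
            if "answer" in reply:
                return reply["answer"]
            result = TOOLS[reply["tool"]](reply["args"])  # dispatch the requested tool
            messages.append({"role": "assistant", "content": json.dumps(reply)})
            messages.append({"role": "tool", "content": result})
        return "step limit reached without a final answer"

Even a loop this trivial raises open questions (error handling, tool selection, when to stop), which is exactly the unscratched surface.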

At this time, not more than 0.01% of the applications and services that can be built using currently available AI and that can meaningfully increase productivity and quality have been built or even planned.

We may or may not get to AGI/ASI soon with the current stack (I'm actually cautiously optimistic), but the obsessive leap from the latest research progress at the frontier labs to conclusions about applied AI effectiveness is misguided.


> a revolution in knowledge-worker productivity.

That's a nice euphemism for "imminent mass layoffs and a race to the bottom"...


In my lifetime there have seldom been many layoffs due to improved technology. Companies tend to invest to keep up with rival companies. The layoffs come more when companies become loss-making for whatever reason, e.g. the UK coal industry collapsing, or Detroit being undercut by lower-cost carmakers.


Knowledge worker productivity has increased in other ways over the decades. Increases don't always lead to mass layoffs. Rails made (and still makes) many, many web devs much more productive than before. Its arrival did not lead to mass layoffs.


Productivity has always meant the ability to do more with less. Or it can mean doing even more with more.

Were we at a peak 1,000+ years ago, and have we only gone downhill with every technological breakthrough since?


These productivity gains won't be shared with the employees. I think some people underestimate what a violent populace can do to them if they squeeze even more yacht money out of the people.


Every one of those employees is capable of either using the new tools to start their own company and solve unaddressed customer problems, or negotiating for comp that includes equity. This has always been true.

The losers here are algorithm junkies who refuse to learn new skills and want to solve yesterday’s problems.


So you surely wouldn't be against a harsh inheritance tax so every generation can get a fair shot at the same issues?


Psssh, y'all been letting the billionaires and trillionaires do this forever now. Products only get more subpar and profit margins only grow, and we're all too busy hating each other over sex, skin colour, sexuality, etc., because we're just animals.

Ain't gonna change unless we genetically engineer our dumbass evolutionary history out of ourselves.


The idea that someone should be paid by a corporation when they don't provide value is very strange to me. Doing so seems like the real race to the bottom.


What about when someone provides long-term value? They would be replaced by a short-term-thinking corp (namely, all of them) for providing less value than an alternative whose value is purely short-term.

We are accelerating by preferring short-term gains. Like a fire becoming an explosion, that's modern society. Corps now throw the future under the bus for a slight boost in short-term value.


This conclusion is the lump of labor fallacy. It's not that simple.


It’s that saying: “radiologists aren’t losing their jobs due to AI .. only radiologists who don’t use AI are losing their jobs”.


The technology is not dystopian but our economic system makes it so.

Up to you to figure out which will hold.


No, the job market will adapt, just like it did during the industrial and information revolutions, and life will be better.


It will be better for those who already have it good. How it will affect those who don't is the real question here.


You have no idea if that's true or not.


"The job market will adapt and horses will simply find employment elsewhere now that we have cars"

The industrial revolution is not an apt analogy. Humans were still essential to getting factories to actually work. Horses becoming useless, through no fault of their own, is the apt analogy. We are rushing toward a world where humans can be fully replaced.

This "humans will always be in the loop no matter what" is just Cope. We simply don't know what will happen or the what the upper bound of AI capabilities will be. But 100% automation and humans as knowledge workers being as useless vs AI as horses vs cars is no longer sci-fi. We don't know if it will, but this is a future that actually could happen within our lifetimes.


I already feel like Copilot in VS Code can read my mind. It’s kind of creepy when it does it multiple times a day.

ChatGPT also seems to be building a history of my queries, and my prompts are getting shorter and shorter because it already knows my frameworks, databases, operating system, and the common problems I’m solving.


Just a question for understanding: when we say 'it learns', does that mean it actually learns this as part of its training data? Or does it mean the information is stored in a vector DB, retrieved via vector search, and then included in the context window for query responses?


The latter. “Learning” in the comment clearly refers to adding to the knowledge graph, not to training or fine-tuning a model: “and then you can RAG them.”
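In other words, nothing about the model's weights changes. A rough sketch of that retrieval step (names illustrative, building on the store from the parent comment):

    # Illustrative RAG step: the model doesn't "learn" anything --
    # relevant nodes are fetched and pasted into the prompt at query time.
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = sum(x * x for x in a) ** 0.5
        nb = sum(x * x for x in b) ** 0.5
        return dot / (na * nb) if na and nb else 0.0

    def answer(query, store, embed, llm, k=5):
        q = embed(query)
        top = sorted(store.values(),
                     key=lambda n: cosine(q, n.embedding), reverse=True)[:k]
        context = "\n".join(n.text for n in top)
        # The "learning" lives entirely in this prompt, not in the weights.
        return llm(f"Context:\n{context}\n\nQuestion: {query}")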


Honestly I wish you people would stop forcing this "AI revolution" on us. It's not good. It's not useful. It's not creating value. It's not "another team member"; other team members have their own minds with their own ideas and their own opinions. Your autocomplete takes my attention away from what I want to write and replaces it with what you want me to write. We don't want it.


OP's talking about a specific use case at tech companies like Google, not creative writing or research, areas where AI is in no shape to support humans given its current safety alignment.


I'm not talking about creative writing or research.


I find inline AIs like GitHub Copilot annoying, but browser-based AIs like Mistral and ChatGPT are a really good and welcome help.



