
Folks really over-index when an LLM is very good for their use case. And most of the folks here are coders, a job they're already good at and getting better at.

For some tasks they're still next to useless, and people who do those tasks understandably don't get the hype.

Tell a lab biologist or chemist to use an LLM to help them with their work and they'll get very little useful out of it.

Ask an attorney to use it and it's going to miss things that are blindingly obvious to the attorney.

Ask a professional researcher to use it and it won't come up with good sources.

For me, I've had a lot of those really frustrating experiences where I'm having difficulty with a topic and it gives me utterly incorrect junk, because there just isn't a lot already published about it.

I've fed it tricky programming tasks and gotten back code that doesn't work, and that I can't debug because I have no idea what it's trying to do, or I'm not familiar with the libraries it used.



It sounds like you're trying to use these LLMs as oracles, which is going to cause you a lot of frustration. I've found almost all of them now excel at imitating a junior dev or a drunk PhD student. For example, the other day I was looking at acoustic sensor data and ran it down the trail of "what are some ways to look for repeating patterns like xyz", and 10 minutes later I had a mostly working proof of concept for a second-order spectrogram that reasonably dealt with spectral leakage, plus a half-working mel-spectrum fingerprint idea. Those were all things I was thinking about myself, so I was able to guide it to a mostly working prototype in very little time, whereas doing it myself from zero would've taken at least a couple of hours.
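To make the idea concrete, here's a minimal sketch of that kind of second-order spectrogram in Python with NumPy/SciPy. The sample rate, window sizes, and the random placeholder signal are all assumptions; this is a sketch of the technique, not the exact prototype:

    import numpy as np
    from scipy import signal

    fs = 48_000                          # assumed sample rate (Hz)
    x = np.random.randn(10 * fs)         # placeholder for real sensor data

    # First-order spectrogram; the Hann window tames spectral leakage.
    f, t, S = signal.spectrogram(x, fs=fs, window="hann",
                                 nperseg=1024, noverlap=512)

    # Second order: spectrogram of each frequency bin's envelope over
    # time, exposing structure that repeats at a given rate (a
    # modulation spectrum).
    frame_rate = 1.0 / (t[1] - t[0])     # frames per second of S
    f2, t2, S2 = signal.spectrogram(S, fs=frame_rate, axis=-1,
                                    window="hann", nperseg=64, noverlap=32)
    # S2 has shape (freq_bins, modulation_freqs, time); peaks along the
    # modulation axis flag patterns repeating at that rate.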

But truthfully, 90% of work-related programming is not problem solving; it's implementing business logic, and dealing with poor, ever-changing customer specs. Which an LLM will not help with.


> But truthfully, 90% of work-related programming is not problem solving; it's implementing business logic, and dealing with poor, ever-changing customer specs. Which an LLM will not help with.

Au contraire, these are exactly the things LLMs are super helpful with - most business logic in any company is just doing the same thing every other company is doing; there aren't that many unique challenges in day-to-day programming (or business in general). And then, more than half the work of "implementing business logic" is feeding data in and out, presenting it to the user, and a bunch of other things that boil down to gluing together preexisting components and frameworks - again, a kind of work that LLMs are quite a big time-saver for, if you use them right.
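To illustrate how mundane that glue work usually is, here's a tiny made-up example in Python; the orders.csv schema and the discount rule are both hypothetical:

    import csv, json

    def apply_discount(total: float) -> float:
        # Hypothetical business rule: 10% off orders over 100.
        return round(total * 0.9, 2) if total > 100 else total

    # Feed data in, apply the rule, present it back out.
    with open("orders.csv", newline="") as fh:
        orders = [{"id": row["id"],
                   "total": apply_discount(float(row["total"]))}
                  for row in csv.DictReader(fh)]

    print(json.dumps(orders, indent=2))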


Strongly in agreement. I've tried them and mostly come away unimpressed. If you work in a field where you have to get things right, and it's more work to double check and then fix everything done by the LLM, they're worse than useless. Sure, I've seen a few cases where they have value, but those cases aren't much of my job. Cool is not the same as valuable.

If you think "it can't quite do what I need, I'll wait a little longer until it can" you may still be waiting 50 years from now.


> If you work in a field where you have to get things right, and it's more work to double check and then fix everything done by the LLM, they're worse than useless.

Most programmers understand reading code is often harder than writing it. Especially when someone else wrote the code. I'm a bit amused by the cognitive dissonance of programmers understanding that and then praising code handed to them by an LLM.

It's not that LLMs are useless for programming (or other technical tasks) but they're very junior practitioners. Even when they get "smarter" with reasoning or more parameters their nature of confabulation means they can't be fully trusted in the way their proponents suggest we trust them.

It's not that people don't make mistakes, but people tend to make reasonable mistakes. LLMs make unreasonable mistakes at random; there's no way to predict their distribution. I can learn that a human junior developer sucks at memory management or something, ask them to improve in the areas they're weak in, and check those areas of their work in more detail.

I have to spend a lot of time reviewing all output from LLMs because there's rarely rhyme or reason to their errors. They save me a bunch of typing but replace a lot of my savings with reviews and debugging.


My view is that it will be some time before they can, precisely because of the success in the software domain: not because LLMs aren't capable as a tech, but because data owners and practitioners in other domains will resist the change. From the SWE experience, news reports, financial magazines, etc., many are preparing accordingly, even if subconsciously. People don't like change, and don't want to be threatened when it's them at risk; no one wants what happened to artists, and now SWEs, to happen to their own profession. They're happy for other professions to "democratize/commoditize" as long as it isn't them, since after all that increases their purchasing power. Don't open-source knowledge/products, don't let AI near your vertical domain, command a premium for as long as you can: I've heard variations of this in many AI conversations. It's much easier in oligopoly- and monopoly-like domains, and/or domains where knowledge was known to be a moat even when mixed with software, as you have more trust that competitors won't do the same.

For many industries/people, work is a means to earn, not something to be passionate about for its own sake; it's a means to provide for the things in life you are actually passionate about (e.g. family, lifestyle, etc). In the end AI may get your job eventually, but if it gets yours much later than other industries/domains, you win from a capital perspective, as other goods get cheaper while you still command your pre-AI scarcity premium. That makes it easier to acquire assets from the early-disrupted industries and shields you from AI eventually taking over.

I'm seeing this directly in software: fewer new frameworks/libraries being published outside the AI domain, IMO, and more apprehension from companies about open-sourcing their work and/or exposing what they do. Attracting talent is also no longer as strong a reason to showcase what you do to prospective employees; economic conditions and/or AI make that less necessary as well.


I know at least two attorneys who use LLMs productively.

As with all LLM usage right now, it's a tool and not fit for every purpose. But it has legit uses for some attorney tasks.


I frequently see news stories where attorneys get in trouble for using LLMs because they cited hallucinated case law. If they hadn't been caught, that would look the same as using them "productively".


Asking the LLM for relevant case law and verifying it - productive use of an LLM. Asking the LLM to write your argument for you without checking it - unproductive use. It's the same as with programming.


>Asking the LLM for relevant case law and verifying it - productive use of an LLM

That's a terrible use for an LLM. There are several deterministic search engines attorneys use to find relevant case law, where you don't have to check to see if the cases actually exist after it produces results. Plus, the actual text of the case is usually very important, and isn't available if you're using an LLM.

Which isn't to say they're not useful for attorneys. I've had success getting them to do some secretarial and administrative things. But for the core of what attorneys do, they're not great.


For law firms creating their own repositories of case law, having LLMs search via summaries and then dive into the selected cases to extract pertinent information seems like an obviously great use case to build a solution around.
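A hypothetical sketch of that two-stage flow in Python, with an in-memory stand-in for the firm's repository and a stub in place of the actual model call (neither is any real product's API):

    from dataclasses import dataclass

    @dataclass
    class Case:
        case_id: str
        summary: str
        full_text: str

    REPO = [
        Case("A-101", "breach of contract, late delivery", "...full opinion..."),
        Case("B-202", "negligence, premises liability", "...full opinion..."),
    ]

    def search_summaries(query: str, top_k: int = 5) -> list[Case]:
        # Stage 1: cheap match over short summaries only; a real system
        # would use a search index or embeddings here.
        terms = query.lower().split()
        scored = [(sum(t in c.summary for t in terms), c) for c in REPO]
        return [c for hits, c in sorted(scored, key=lambda s: -s[0])[:top_k]
                if hits]

    def llm_extract(query: str, case: Case) -> str:
        # Stage 2 stub: in practice an LLM call that quotes pertinent
        # passages and must cite case_id so a human can verify them.
        return f"[{case.case_id}] excerpt for {query!r}: {case.full_text[:40]}"

    for case in search_summaries("late delivery breach"):
        print(llm_extract("late delivery breach", case))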

The orchestration of LLMs reading transcripts, reading emails, reading case law, and preparing briefs with sources is unavoidable in the next 3 years. I don't doubt multiple industry-specialized solutions are already under development.

Just asking ChatGPT to make your case for you is missing the opportunity.

If anyone is unable to get Claude 3.7 or Gemini 2.5 to accelerate their development work, I have to doubt their sentience at this point (or, more likely, doubt that they're actively testing these things regularly).


Law firms don't create their own repos of case law. They use a database like Westlaw or Lexis. LLMs "preparing briefs with sources" would be a disaster and wholly misunderstands what legal writing entails.


This lawyer uses LLMs for everything: correspondence, document review, drafting demands, drafting pleadings, discovery requests, discovery responses, golden rule letters, motions, responses to motions, deposition outlines, depo prep, voir dire, opening, direct, cross, closings.


I find it very useful to review the output and consider its suggestions.

I don’t trust it blindly, and I often don’t use most of what it suggests; but I do apply critical thinking to evaluate what might be useful.

The simplest example is using it as a reverse dictionary. If I know there’s a word for a concept, I’ll ask an LLM. When I read the response, I either recognize the word or verify it using a regular dictionary.
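For instance, a small sketch of that ask-then-verify loop in Python; ask_llm is a stand-in for whatever model you use, and the wordlist path is just the common Unix one:

    from pathlib import Path

    def ask_llm(definition: str) -> str:
        # Stand-in for a real model call, e.g. "what's the word for
        # 'fear of enclosed spaces'?"
        return "claustrophobia"

    def in_dictionary(word: str,
                      wordlist: str = "/usr/share/dict/words") -> bool:
        # The verification step: never accept the suggestion unchecked.
        words = {w.strip().lower()
                 for w in Path(wordlist).read_text().splitlines()}
        return word.lower() in words

    candidate = ask_llm("fear of enclosed spaces")
    print(candidate,
          "verified" if in_dictionary(candidate) else "not in dictionary")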

I think a lot of the contention in these discussions is because people are using it for different purposes: it's unreliable for some purposes and it is excellent at others.


> For law firms creating their own repositories of case law

Wait, why would a law firm create its own repository of case law? It's not like it has access to secret case law that other lawfirms do not.


> Asking the LLM for relevant case law and checking it up - productive use of LLM.

Only if you're okay with it missing stuff. If I hired a lawyer, and they used a magic robot rather than doing proper research, and thus missed relevant information, and this later came to light, I'd be going after them for malpractice, tbh.


Ironically both Westlaw and Lexis have integrated AI into their offerings so it's magic robots all the way down.

https://legal.thomsonreuters.com/en/products/westlaw-edge https://www.lexisnexis.com/en-us/products/protege.page


Surely this was meant ironically, right? You must've heard of at least one of the many cases involving lawyers doing precisely what you described and ending up presenting made-up legal cases in court. Guess how that worked out for them.


The uses that they cited to me were "additional pair of eyes in reviewing contracts," and, "deep research to get started on providing a detailed overview of a legal topic."


The problem is that it's marketed as a general tool that, at least in the future, will work near-perfectly if we just provide enough data and computing power.


> For some tasks they're still next to useless, and people who do those tasks understandably don't get the hype.

This is because programmers talk on the forums that programmers scrape to get data to train the models.


Honestly it's worse than this. A good lab biologist/chemist will try to use it, understand that it's useless, and stop using it. A bad lab biologist/chemist will try to use it, think that it's useful, and then it will make them useless by giving them wrong information. So it's not just that people over-index when it is useful, they also over-index when it's actively harmful but they think it's useful.


You think good biologists never need to summarize work into digestible language, or fill out multiple huge, redundant grant applications with the same info, or reformat data, or check that a writeup accurately reflects the data?

I’m not a biologist (good or bad) but the scientists I know (who I think are good) often complain that most of the work is drudgery unrelated to the science they love.


Sure, lots of drudgery, but none of your examples are things that you could trust an LLM to do correctly when correctness counts. And correctness always counts in science.

Edit to add: and regardless, I'm less interested in the "LLMs aren't ever useful to science" part of the point. The point that actual LLM usage in science will mostly be for cases where they seem useful but actually introduce subtle problems is much more important. I have observed this happening with trainees.


I have also seen trainees introduce subtle problems when they think they know more than they do.


This attorney uses it all day every day.



