Hacker News

It's _extremely_ useful for lawyers, arguably even more so than for coders, given how much faster it lets them do their work. It's also extremely useful for anyone who writes text and wants a reviewer, and it's capable of executing most daily activities of some roles, such as TPMs.

It's still useful only to a small subset of each of those professions: the early adopters. It's the same way computers were useful to many professionals before the GUI, but only a small fraction of them had the skill set to use a terminal.




Except for those lawyers who relied on it for case law, e.g. https://law.justia.com/cases/federal/district-courts/new-yor...


I think the big mistake is _blindly relying on the results_, although that problem has been improving dramatically (GPT-3.5 hallucinated constantly; I rarely see a hallucination with the latest GPT/Claude models).


> it's _extremely_ useful for lawyers,

How so? How are you using LLMs to practice law? Genuinely curious.


Drafting demand letters, drafting petitions, drafting discovery requests, drafting discovery responses, drafting golden rule letters, summarizing meet and confer calls, drafting motions, responding to motions, drafting depo outlines, summarizing depos, …

If you’re not using AI in your practice, you’re doing a disservice to your clients.


How do you get the LLM to the point where it can draft a demand letter? I guess I'm a little confused as to how the LLM is getting the particulars of the case in order to write a relevant letter. Are you typing all that stuff in as a prompt? Are you dumping all the case file documents in as prompts and summarizing them, and then dumping the summaries into the prompt?


Demand letters are the easiest. Drag and drop the police report and medical records, then tell it to draft a demand letter. For most things, there are only a handful of critical pages in the medical records, so if the original PDF is too big, I’ll trim the excess pages. I may also add my personal case notes.

I use a custom prompt to adjust the tone, but that’s about it.


I feel pretty uncomfortable dumping attorney-client privileged material into ChatGPT. Am I wrong? Or are you always limiting this to non-privileged material?


Curious what tools you're using: is it just ChatGPT? Any other apps/services/models?


I use the same and have stopped needing lawyers for many things I used to. Paying for ChatGPT Pro pays for itself.

IMO, while many here on HN debate the future of SWEs in the era of LLMs, there is no debate about the future of many legal jobs: they will disappear.


Just ChatGPT and Claude.


Multiple lawyer friends of mine are using ChatGPT (and custom GPTs) for contract reviews. They upload guidelines as knowledge, then upload any new contract for validation. Allegedly this replaces hours of reading, which in some cases is a large portion of the work. Some of them also use it to debate a contract, to see if there's anything they overlooked or to find loopholes. LLMs are extremely good at that kind of constrained creativity, where they _have_ to produce something (they're terrible at saying "I don't know" or "no"), so I guess it works as a sort of "second brain" for those tasks too.
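The guidelines-plus-contract workflow described above can be sketched in a few lines against the OpenAI API. This is a hypothetical sketch, not anyone's actual setup: the file paths, prompt wording, and model name are all assumptions, and in practice the custom-GPT route uploads the guidelines once as knowledge instead of resending them with every request.

```python
from pathlib import Path


def build_review_messages(guidelines: str, contract: str) -> list[dict]:
    """Assemble a chat payload: the firm's guidelines as system context,
    the contract to be validated as the user turn."""
    return [
        {
            "role": "system",
            "content": (
                "You are reviewing a contract for a law firm. Check it "
                "against the following guidelines and list every deviation, "
                "ambiguity, or potential loophole you find:\n" + guidelines
            ),
        },
        {"role": "user", "content": contract},
    ]


def review_contract(guidelines_path: str, contract_path: str,
                    model: str = "gpt-4o") -> str:
    """Send the assembled payload to the OpenAI chat completions API and
    return the model's review. Requires `pip install openai` and an
    OPENAI_API_KEY in the environment; not called here."""
    from openai import OpenAI
    client = OpenAI()  # picks up OPENAI_API_KEY
    messages = build_review_messages(
        Path(guidelines_path).read_text(),
        Path(contract_path).read_text(),
    )
    reply = client.chat.completions.create(model=model, messages=messages)
    return reply.choices[0].message.content
```

Usage would look like `review_contract("guidelines.txt", "nda.txt")`; splitting the message assembly from the API call keeps the prompt construction testable without network access.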

There are even reported cases of entire pieces of legislation being written with LLMs already [1]. I'm sure there are thousands more we haven't heard about, the same way researchers are writing papers with LLMs without disclosing it.

[1] https://olhardigital.com.br/2023/12/05/pro/lei-escrita-pelo-...


Five years later, when the contract turns out to be defective, I doubt the clients are going to be _thrilled_ with “well, no, I didn’t read it, but I did feed it to a magic robot”.

Like, this is malpractice, surely?


It only has to be less likely than a paralegal to cause that issue in order to be a net positive.

Some people expect AI to never make mistakes when doing jobs where people routinely make all kinds of mistakes of varying severity.

It’s the same as how people expect self-driving cars to be flawless when they think nothing of a pileup caused by a human watching a reel while behind the wheel.


In the pileup example, the human driver is legally at fault. If a self driving car causes the pileup, who is at fault?


My understanding is that the firm operating the car is liable in the fully-self-driving case of commercial vehicles (Waymo), and the driver is liable in supervised self-driving cases (a privately owned Tesla).


Well, maybe its wheel fell off.

So, the mechanic who maintained it last?

...

We don't fault our tools, legally. We usually also don't fault the manufacturer, or the maintenance guy. We fault the people using them.


Any evidence it's actually better than a paralegal? I doubt it is.


This is malpractice the same way that a coder using Copilot is malpractice



