They're _extremely_ useful for lawyers, arguably even more so than for coders, given how much faster they can do stuff. They're also extremely useful for anyone who writes text and wants a reviewer, and capable of executing most of the daily activities of some roles, such as TPMs.
They're still only useful to a small subset of those professions - the early adopters. Same way computers were useful to many professionals before the graphical UI, but only a small fraction of them had the skillset to use terminals.
I think the big mistake is _blindly relying on the results_ - although that problem has been improving dramatically (GPT-3.5 hallucinated constantly; I rarely see a hallucination w/ the latest GPT/Claude models).
How do you get the LLM to the point where it can draft a demand letter? I guess I'm a little confused as to how the LLM is getting the particulars of the case in order to write a relevant letter. Are you typing all that stuff in as a prompt? Are you dumping all the case file documents in as prompts and summarizing them, and then dumping the summaries into the prompt?
Demand letters are the easiest. Drag and drop the police report and medical records, then tell it to draft a demand letter. For most cases, there are only a handful of critical pages in the medical records, so if the original PDF is too big, I'll trim the excess pages. I may also add my personal case notes.
I use a custom prompt to adjust the tone, but that’s about it.
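The "trim excess pages" step above can be sketched as a simple filter. This is a toy illustration, not the commenter's actual tooling: it assumes the PDF's pages have already been extracted as plain text, and the keyword list is purely hypothetical.

```python
# Toy sketch: keep only pages of the medical records that look
# relevant to a demand letter, before sending them to the model.
# The keyword set is illustrative, not a real heuristic.
KEYWORDS = {"diagnosis", "injury", "treatment", "prognosis", "discharge"}

def critical_pages(pages: list[str]) -> list[int]:
    """Return indices of pages mentioning any injury/treatment term."""
    keep = []
    for i, text in enumerate(pages):
        words = {w.strip(".,;:").lower() for w in text.split()}
        if words & KEYWORDS:
            keep.append(i)
    return keep
```

In practice you'd extract the page text with a PDF library and rebuild a trimmed PDF from the surviving page indices; the point is just that a crude relevance filter goes a long way when the model's context is limited.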
Multiple lawyer friends of mine are using ChatGPT (and custom GPTs) for contract reviews. They upload some guidelines as knowledge, then upload any new contract for validation. Allegedly this replaces hours of reading, which in some cases is a large portion of the work. Some of them also use it to debate a contract, to see if there's anything they overlooked or to find loopholes. LLMs are extremely good at that kind of constrained creativity mode where they _have_ to produce something (they suck at saying "I don't know" or "no"), so I guess it also works as a sort of "second brain" for those tasks.
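The setup described above maps naturally onto the chat-completion message format most LLM APIs use. This is a hypothetical sketch, not the friends' actual configuration: the firm's guidelines play the role of the "knowledge" in the system message, and each new contract arrives as the user message; the actual model call is omitted.

```python
# Hypothetical prompt assembly for the contract-review workflow:
# guidelines become the system message, the contract under review
# becomes the user message. Any chat-completion-style API accepts
# a messages list shaped like this.
def build_review_messages(guidelines: str, contract: str) -> list[dict]:
    return [
        {"role": "system",
         "content": ("You are a contract reviewer. Flag any clause that "
                     "conflicts with these guidelines:\n" + guidelines)},
        {"role": "user",
         "content": "Review this contract against the guidelines:\n" + contract},
    ]
```

The "debate the contract" use is the same pattern with a different system instruction (e.g. "argue for loopholes the counterparty could exploit").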
There are even reported cases of entire pieces of legislation being written with LLMs already [1]. I'm sure there are thousands more we haven't heard about - the same way researchers are writing papers w/ LLMs without disclosing it.
Five years later, when the contract turns out to be defective, I doubt the clients are going to be _thrilled_ with “well, no, I didn’t read it, but I did feed it to a magic robot”.
It only has to be less likely to cause that issue than a paralegal to be a net positive.
Some people expect AI to never make mistakes when doing jobs where people routinely make all kinds of mistakes of varying severity.
It’s the same as how people expect self-driving cars to be flawless when they think nothing of a pileup caused by a human watching a reel while behind the wheel.
My understanding is that the firm operating the car is liable in the full self-driving case of commercial vehicles (Waymo), while the driver is liable in supervised self-driving cases (privately owned Teslas).