
I'm only peripherally involved in the law field, and even I am aware that ChatGPT and similar tools do not consistently provide correct legal citations. They also get other things wrong some of the time, and the only way to tell is to be an expert or to look up the facts yourself. Don't use ChatGPT for legal writing, or for anything else that requires accuracy.



When it's correct, it can be a good search aid. But for a lot of things, it is simply incorrect with high confidence.

You can also ask it Bluebook questions and it will often get the right answer. At other times, it will get the right answer but cite to the wrong rule (not that it matters that much).

Another issue is that it can cite the correct case but misunderstand what that case actually says. You can be really specific and ask something like "what is the x-factor test from Doe v. Doe" and it will get three factors correct and invent the other two.

The thing with law, though, is that there are often quick-reference materials, already extensively published, that will get you the answer faster than either search or a chat interface. Many state bar associations offer the equivalent of a "practice area in a box": checklists, templates, and other material geared toward letting you start working in that area almost immediately.

I have found it useful for course-correcting my research in an unfamiliar area of law. I was wasting a lot of time reading secondary sources and cases that were not relevant to my problem, because I knew nothing about that area and my search queries kept leading me in unproductive directions. ChatGPT pointed me toward a more relevant case that opened up the rest of my research through conventional tools like Westlaw, and it saved me a lot of time. But I did not use it at all for the final work product, and I never used it blind without checking a source.


That's right: generative model outputs are worthless unless checked by a human. You are using it right. For the moment, I don't think there is a single domain where AI can work on its own; autonomy has been reached in 0% of fields. That makes me think removing the human from the loop will take a long time. We are still safe; AI will be our sidekick.

It's crazy how AI seems to progress at incredible speed and yet we don't get closer to full autonomy anywhere. It's as if we discover new problems at the same rate we solve them. Just five years ago, nobody would have thought hallucinations would become a central issue in AI; other unknown unknowns may be hiding in our future.


I've been thinking that it's funny how these AI tools are framed as assistants, but it seems they are actually the opposite. They are great at big picture stuff but sloppy when it comes to details. So the more logical division of labor is to make the human the assistant.





