
Not only is this a mindbogglingly terrible idea given the currently imperfect state of AI; even if, in the coming years, we hypothetically had a fully auditable, open-source, open-weights AI guaranteed 100% never to make mistakes, a law should never be so complex that you need a superhuman AI to write it.





> a law should never be so complex that you need a superhuman AI to write it.

I would argue even more strongly: laws should be so simple that the majority of people affected by them should be able to understand the main points of those laws by reading them, even without legal assistance.

Lawyers are there to dot the i's and cross the t's, and to make sure the counterparty did so as well, not to explain the basics.


> I would argue even more strongly: laws should be so simple that the majority of people affected by them should be able to understand the main points of those laws by reading them, even without legal assistance.

Ignorance of the law, famously, is no excuse. As a layperson you're expected to read and understand all laws that affect you.

This is, of course, already absurd.


Indeed.

I would reframe that saying: Ignorance of the law must not become an excuse.

The difference being: when a reasonable person would in fact be ignorant, that's a systemic problem with the legal system.


The reason ignorance of the law is no excuse is because anybody could claim it whether they knew or not, and nobody would necessarily know if it's true. I suspect that it would be viewed differently if the law is truly complex.

“Your Honor, why should I bother obeying a law my elected representatives could not be bothered to write?” seems like it should be a reasonable defense, but you’re right that the onus is on us as The Governed to know and understand every bit of slop and hallucination on the books.

Without being cynical: there are powerful and wealthy actors who benefit from the rules being so complex that the little guy can't manage. So the rules won't get simpler if they are made by people who profit from them being an untraversable jungle that can be interpreted one way or another. If the rules are vague and complex, the corrupt elites profit, as they are the ones who decide how to interpret them.

And they have just been handed the keys to the nation with the world's most powerful military.


If anything, the superhuman AI should be used to write simpler laws that are unambiguous and free from exploitation and/or loopholes. Well, at least better than we humans do today.

Laws become complex primarily to account for loopholes, or in some cases to add loopholes. It's still a bad idea to use an AI to write the laws and it always will be.

> simpler laws that are unambiguous and free from exploitation and/or loopholes

Is such a thing possible?

I mean, "simple" often has a lot of exploits and loopholes, while "unambiguous" pretty much means Lojban and similar instead of normal languages like English, Welsh, or Swahili.


This is a fantastic point and gets at the crux of the issue.

I do see a future where laws are encoded in an unambiguous, conceptual/symbolic form, as simple as possible but virtually unintelligible for a human to read directly, a machine code of sorts.

The point of a 'lawyer' AI would be to act as an oracle, interpreting the law in the target language at the level of specificity requested by the user. For those writing the law, the AI would interpret their motives and question them about any ambiguity, potential ethical issues, or clashes with other laws, to ensure conceptual integrity.
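
To make the idea concrete, here is a minimal, purely hypothetical sketch (Python) of what such a symbolic clause encoding plus an interpreting 'oracle' layer could look like; the Clause fields and render() helper are invented for illustration, not any real proposal:

    # Hypothetical sketch: a statute clause as unambiguous structured data,
    # with a separate oracle layer responsible for rendering it in natural
    # language at whatever level of detail the reader requests.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Clause:
        subject: str      # who the rule applies to, e.g. "employer"
        condition: str    # machine-checkable predicate
        obligation: str   # required action
        effective: str    # ISO date the clause takes effect

    def render(clause: Clause, detail: str = "plain") -> str:
        """Stand-in for the 'lawyer AI' oracle: translate the symbolic
        clause into the target language at the requested specificity."""
        if detail == "plain":
            return (f"If you are an {clause.subject} and {clause.condition}, "
                    f"you must {clause.obligation} (effective {clause.effective}).")
        return repr(clause)  # full specificity falls back to the raw encoding

    print(render(Clause("employer", "employee_count >= 50",
                        "provide_family_leave(weeks=12)", "2026-01-01")))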


That is unacceptable. You’re describing a black box that cannot be examined, understood, or held accountable; you’re assuming its programming is absolutely objective, which it wouldn’t be even if all responsible parties had the best intentions. That throws us back into times of worshipping an infallible god at the temple. Law, and ultimately democracy, must be made by the people for the people, or it becomes a tool in the hands of the rich and powerful against the majority. This is the same reason elections must be held using paper, so every single citizen can understand and participate in the election process. The second you bring voting machines into the equation, you’ve created a technocracy, leaving the unknowing out and making them dependent on those in control of the machines.

A fair point, but addressed by having ‘attorney’ models open source/open weights.

That just replaces a "difficult but humanly possible" task (practicing law) with a "we think this must be possible in principle but all of us working together have only scratched the surface" task (interpretability).

When the average person cannot understand either the law or matrix multiplication, they're definitely not going to be able to understand laws created by matrix multiplication.

(And to the extent that our brains are also representable by a sufficiently large set of matrices, we also don't understand how we ourselves think; I believe we trust each other because we evolved to, but even then we argue that someone is "biased" or "fell for propaganda" or otherwise out-group them, and have to trust our various and diverse systems of government and education to overcome that.)


That doesn’t address anything. It only leads to the generation of a single line of arguments stacked against each other, but it does nothing for accountability or observability, and it remains just as enigmatic and opaque to laypeople. You’re describing a dystopia of complex technology that only a very small elite will be able to control.

It's certainly a challenge.

Clearly the language spoken today is not entirely the same language spoken a few hundred years ago.

The meaning and intent behind a word written a century or two earlier has the ability to morph and change.

That is to say, whether it is truly possible to make a communication system that is entirely unambiguous and only as complex as necessary to achieve its goal, I do not know.

Even if it is not provably possible, however, that should not stop us from aiming for it, especially with regard to legislation and law writing.

I can't recall the source, but in programming there is an aphorism along the lines of: every program can be made shorter, and every program has at least one bug. Something similar seems to apply to laws.

As an aside, I'll have to look into Lojban as I am unfamiliar with it. It looks interesting though.


I don't necessarily agree with the premise that simple language is more open to loopholes. However, even if it were true, common law systems [0], such as those of most of the UK and the US, are arguably intended to deal with ambiguities and the risk of potential exploits and loopholes, by offering judges significant leeway in their interpretation of a given law and establishing precedent. It is the civil law system [1] which is much more sensitive to precise wording.

[0] https://en.wikipedia.org/wiki/Common_law

[1] https://en.wikipedia.org/wiki/Civil_law_(legal_system)


Not the OP, but I think the point is that simple language is inherently less precise than complex and legal-specific language.

The implication isn't so much that AI will write laws, as it is that it can raise standards, make things clearer and more detailed.

And... enable better understanding of context, since unlike human politicians, most LLMs have very broad knowledge.

So it should reduce some of the automatic bad decision making that comes from bureaucrats making laws about things they don't (and maybe can't) understand.


As it stands, I disagree that LLMs have very broad knowledge. Or at least, that it's anything more than extremely superficial (e.g. what a non-expert human would get from skimming a Wikipedia page). At least in my experience, you don't have to go very far off the beaten path at all to completely stump an LLM, even fancier models like ChatGPT o1.

In those cases where legislators don't understand what they are legislating, using LLMs to write laws seems even more dangerous.

> a law should never be so complex that you need a superhuman AI to write it.

Or to read and follow it.

AI will write BS laws for the legislature and BS compliance processes for the executive, generating precisely the opposite of what both parties intended.


> imperfect state of AI

Don’t forget to compare with the alternative. Caffeinated staffers pulling all-nighters to ship a massive document will also make errors.

> a law should never be so complex

That ship sailed decades ago and AI had nothing to do with it.


[flagged]


I don't remember that, probably because it happened in a different country than mine, but in any case it is an example of how the problem already exists. The article gets close (but not quite) to addressing the problem when it suggests 'This means that beyond writing them, AI could help lawmakers understand laws', but by focusing on AI writing complex laws, the article is basically proposing the same problem — but now with AI™!

Laws don't need to be understood only by lawmakers; they need to be understood by the citizenry to whom they apply. If AI can be used to actually simplify the laws themselves, making them easier for both lawmakers and citizens to understand as enacted [0], that is more than welcome! But this proposal, which appears to come from Bruce Schneier and a data scientist rather than any legal or ethical scholar, is bonkers.

[0] https://en.wikipedia.org/wiki/Plain_English


The congressman's staff read the bill and summarize it for him.

Making a summary of a giant spending bill seems like a real job for an AI.



