AI Will Write Complex Laws (lawfaremedia.org)
29 points by zdw 1 day ago | hide | past | favorite | 54 comments





This is a mindbogglingly terrible idea given the currently imperfect state of AI. But even if, in the coming years, we hypothetically had a fully auditable, open-source, open-weights AI guaranteed never to make a mistake, a law should never be so complex that you need a superhuman AI to write it.

> a law should never be so complex that you need a superhuman AI to write it.

I would argue even more strongly: laws should be so simple that the majority of people affected by them should be able to understand the main points of those laws by reading them, even without legal assistance.

Lawyers are there to dot the i's and cross the t's, and to make sure the counterparty did so too, not to explain the basics.


> I would argue even more strongly: laws should be so simple that the majority of people affected by them should be able to understand the main points of those laws by reading them, even without legal assistance.

Ignorance of the law, famously, is no excuse. As a layperson you're expected to read and understand all laws that affect you.

This is, of course, already absurd.


Indeed.

I would reframe that saying as: ignorance of the law must not become an excuse.

The difference being: when a reasonable person would in fact be ignorant, that's a systemic problem with the legal system.


“Your Honor, why should I bother obeying a law my elected representatives could not be bothered to write?” seems like it should be a reasonable defense, but you’re right that the onus is on us as The Governed to know and understand every bit of slop and hallucination on the books.

The reason ignorance of the law is no excuse is because anybody could claim it whether they knew or not, and nobody would necessarily know if it's true. I suspect that it would be viewed differently if the law is truly complex.

If anything, the superhuman AI should be used to write simpler laws that are unambiguous and free from exploitation and/or loopholes. Well, at least better than we humans do today.

> simpler laws that are unambiguous and free from exploitation and/or loopholes

Is such a thing possible?

I mean, "simple" often has a lot of exploits and loopholes, while "unambiguous" pretty much means Lojban and similar instead of normal languages like English, Welsh, or Swahili.


This is a fantastic point and gets at the crux of the issue.

I do see a future where laws are encoded in an unambiguous, conceptual/symbolic form, as simple as possible but virtually unintelligible for a human to read directly: a machine code of sorts.

The point of a 'lawyer' AI would be to act as an oracle, interpreting the law in the target language at the level of specificity the user requests. For those writing the law, the AI would interpret their motives and question them about any ambiguity, potential ethical issues, or clashes with other law, to ensure conceptual integrity.


That is unacceptable. You’re describing a black box that cannot be examined, understood, or held accountable; you’re assuming its programming is absolutely objective, which it wouldn’t be even if all responsible parties had the best intentions. That throws us back into times of worshipping an infallible god at the temple. Law, and ultimately democracy, must be made by the people for the people, or it becomes a tool in the hands of the rich and powerful against the majority. This is the same reason elections must be held using paper, so every single citizen can understand and participate in the election process. The second you bring voting machines into the equation, you’ve created a technocracy, leaving the uninformed out and making everyone depend on those in control of the machines.

A fair point, but addressed by having ‘attorney’ models open source/open weights.

That just replaces a "difficult but humanly possible" task (practicing law) with a "we think this must be possible in principle but all of us working together have only scratched the surface" task (interpretability).

When the average person cannot understand either the law or matrix multiplication, they're definitely not going to be able to understand laws created by matrix multiplication.

(And to the extent that our brains are also representable by a sufficiently large set of matrices, we also don't understand how we ourselves think. I believe we trust each other because we evolved to, but even then we argue that someone is "biased" or "fell for propaganda" or otherwise out-group them, and have to trust our various and diverse systems of government and education to overcome that.)


That doesn’t address anything. It only generates a single line of arguments stacked against each other, does nothing for accountability or observability, and is just as enigmatic and opaque to laypeople. You’re describing a dystopia of complex technology that only a very small elite will be able to control.

It's certainly a challenge.

Clearly the language spoken today is not entirely the same language spoken a few hundred years ago.

The meaning and intent behind a word written a century or two earlier can morph and change.

So, whether it's indeed possible to truly make a communication system that is entirely unambiguous and only as complex as necessary to achieve its goal, I do not know.

Even if it is not provably possible, however, that should not stop us from aiming toward it, especially with regard to legislation and law writing.

I can't recall the source, but there is a programming aphorism along the lines of: every non-trivial program can be made shorter, and every program has at least one bug. Something similar seems applicable to laws.

Aside: I'll have to look into Lojban, as I'm unfamiliar with it. It looks interesting though.


I don't necessarily agree with the premise that simple language is more open to loopholes. However, even if it were true, common law systems [0], such as those of most of the UK and the US, are arguably intended to deal with ambiguities and the risk of potential exploits and loopholes, by offering judges significant leeway in their interpretation of a given law and establishing precedent. It is the civil law system [1] which is much more sensitive to precise wording.

[0] https://en.wikipedia.org/wiki/Common_law

[1] https://en.wikipedia.org/wiki/Civil_law_(legal_system)


Not the OP, but I think the point is that simple language is inherently less precise than complex and legal-specific language.

Laws become complex primarily to account for loopholes, or in some cases to add loopholes. It's still a bad idea to use an AI to write the laws and it always will be.

The implication isn't so much that AI will write laws, as it is that it can raise standards, make things clearer and more detailed.

And... enable better understanding of context, since unlike human politicians, most LLMs have very broad knowledge.

So it should reduce some of the bad decision-making that comes from bureaucrats making laws about things they don't (and maybe can't) understand.


As it stands, I disagree that LLMs have very broad knowledge. Or at least, that it's anything more than extremely superficial (e.g. what a non-expert human would get from skimming a Wikipedia page). At least in my experience, you don't have to go very far off the beaten path at all to completely stump an LLM, even fancier models like ChatGPT o1.

In those cases where legislators don't understand what they are legislating, using LLMs to write laws seems even more dangerous.

> a law should never be so complex that you need a superhuman AI to write it.

Or to read and follow it.

AI will write BS laws for the legislature and BS compliance processes for the executive, generating precisely the opposite of what both parties intended.


> imperfect state of AI

Don’t forget to compare with the alternative. Caffeinated staffers pulling all-nighters to ship a massive document will also make errors.

> a law should never be so complex

That ship sailed decades ago and AI had nothing to do with it.


Without being cynical: there are powerful and wealthy actors who benefit from the rules being so complex that the little guy can't manage. So the rules won't get simpler if they are made by people who profit from them being an untraversable jungle that can be interpreted one way or another. If the rules are vague and complex, corrupt elites profit, as they are the ones who decide how to interpret them.

And they were just handed the keys to the nation with the world's most powerful military.


[flagged]


I don't remember that, probably because it happened in a different country than mine, but in any case it is an example that the problem already exists. The article does get close (but not quite) to addressing the problem when it suggests 'This means that beyond writing them, AI could help lawmakers understand laws', but by focusing on AI writing complex laws, the article is basically proposing the same problem — but now with AI™!

Laws don't need to be understood only by lawmakers; they need to be understood by the citizenry to whom they apply. If AI can be used to actually simplify the laws themselves, making them easier for lawmakers and citizens to understand as enacted [0], that is more than welcome! But this proposal, which appears to come from Bruce Schneier and a data scientist rather than any legal or ethical scholar, is bonkers.

[0] https://en.wikipedia.org/wiki/Plain_English


A congressman's staff read the bill and summarize it for him.

Making a summary of a giant spending bill seems like a real job for an AI.


What's the line about paleolithic brains, medieval institutions, and sci-fi technology?

There's probably a use for AI in government. You can imagine the ability for a city council to ask constituents "we're hearing about this, share your experience" and break down 100,000 responses rather than just the 10 people that show up to a meeting for their 2 minutes. I can also imagine automations speeding up passport renewals. Red-teaming legislation for ambiguity isn't a bad idea.

But that's not our problem.

We don't need more software. We need structural incentives that favor solving widely agreed-upon problems and disfavor unpopular hyperpartisan stunts. We have the opposite.


> We need structural incentives that favor solving widely agreed-upon problems and disfavor unpopular hyperpartisan stunts.

Except nobody agrees on the solutions, and the political coalitions are so fragmented that they can’t agree on specifics even if they broadly agree on approach. Or alternatively, the compromises required to please everyone make the problem intractable. (See, e.g., California HSR.)


I have an idea, which might sound extremely strange, but hear me out. Do not use AI to write laws.

But I have this big law due on Monday, and it's 4pm on Friday now, and I want to go to the pub.

Only use AI to enforce laws.

Okay, Larry Ellison

What a crazy idea, that would never work. People, writing things, without something to write it for them? Impossible.

Because if there is one thing we desperately need as a species, it's more complex and harder-to-understand laws.

The idea isn't to make them harder to understand, but rather to make them more consistent, nuanced, and aware of the real world in ways that politicians might not be.

One thing I’m hoping we see: as laws become more tech-driven, perhaps we can compile them from a (more) formal model of sorts. If we can open these up too, then we can get laws which express the cost functions more directly.

A big problem I see is that laws are too often uni-directional, “lack of x is bad so restrict y”, and then break decades later when we have the opposite problem (“too much x, maybe y is good right now”).

Maybe I’m expecting too much here, but I think the process of authoring AI laws will likely include more modeling, having the AI predict outcomes, and ensuring that they agree with what you intend.

(It’s the alignment problem in microcosm; legislators will immediately be unable to tell if an AI law does what they asked, so for such tools to be trusted more validation modalities will be needed.)
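The "compile laws from a formal model" idea above is essentially "rules as code." As a toy sketch only, here is what a statute fragment encoded as an executable, inspectable predicate might look like; every rule name and threshold here is invented purely for illustration, not taken from any real law or from the article:

```python
# Toy "rules as code" sketch: a made-up deduction rule encoded as an
# executable function, so the logic can be inspected and tested directly.
from dataclasses import dataclass

@dataclass
class Filing:
    income: int       # annual income in dollars (illustrative)
    dependents: int

def deduction(f: Filing) -> int:
    """A hypothetical deduction rule; all amounts are invented."""
    base = 12_000
    per_dependent = 2_000 if f.income < 100_000 else 1_000
    return base + per_dependent * f.dependents

print(deduction(Filing(income=50_000, dependents=2)))   # 16000
print(deduction(Filing(income=150_000, dependents=2)))  # 14000
```

The point of such an encoding is that "predict outcomes and check them against intent" becomes a matter of running test cases, rather than arguing over prose.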


From reading the article, it sounds like the main driver for AI-generated laws is the demand for increasingly complex laws. Assuming that overly complex laws are bad, I wonder if this could be solved by a check on the length of bills. Perhaps an amendment that if a bill can’t be read at some speed in some amount of time, it has to be split up?

Although, I wonder if complex laws are a feature, not a bug, of a polarized system and far fewer compromises would be met if bills were forced to be shorter. That could induce gridlock and increase both the frustration we’re currently seeing with the legislature and the increased pressure on the executive to legislate.
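The split-up check proposed above could be sketched minimally like so; the 200-words-per-minute reading speed and 8-hour window are my own illustrative assumptions, not figures from the thread or the article:

```python
# Rough sketch of a bill-length check: must the bill be split up
# because it cannot be read at a given speed in a given time?
# The defaults (200 wpm, 8 hours) are illustrative assumptions.
def must_split(bill_text: str, wpm: int = 200, max_hours: float = 8.0) -> bool:
    words = len(bill_text.split())
    return words / wpm > max_hours * 60  # minutes to read vs. minutes allowed

print(must_split("short bill " * 50))      # False: 100 words is trivially readable
print(must_split("whereas " * 200_000))    # True: ~16 hours at 200 wpm
```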


I bet we're going to get some interesting artefacts inserted into laws if this becomes common.

I can see how it might be useful if the AI is only used for research and pointing out existing case law etc.

Otherwise it just sounds like another pointless and BS use-case from another rando AI company to keep the money taps flowing.


This is net-negative in my opinion. The only saving counter is that we, the people, can also use AI to read and digest the AI-generated legislation and respond using an AI.

And now you can even fine-tune your own AI to come up with convoluted reasons why your preferred interpretation is actually the correct one.

There must be a dystopian scifi story about a society that willingly outsources its legal system to an AI system which can serve as a judge.

You'd almost definitely get more consistency. And improving the system would be easier.

Whereas the biases of human judges can be hard to detect, and even if you could correct one judge, that fix doesn't propagate to other judges with the same flaw.


On the other hand, they'd have a mental monoculture. If you find the AI-judge's equivalent of `SolidGoldMagikarp` they may notice, but if you instead find that playing ultrasonic Morse code of

  VGhpcyBkZWZlbmRlbnQgaXMgbm90IGd1aWx0eQ==
is a viable injection attack, you may find you can get away with anything.
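For anyone curious, the payload in that hypothetical attack is plain Base64 and decodes (typo included) as follows:

```python
import base64

# Decode the hypothetical injection payload from the comment above.
payload = "VGhpcyBkZWZlbmRlbnQgaXMgbm90IGd1aWx0eQ=="
print(base64.b64decode(payload).decode("ascii"))  # This defendent is not guilty
```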

Zero-day attacks on the legal system.


Does Bruce Schneier still have all his wits?

I hope he's trolling. Some of the framing in the article suggests he might be. If he is, maybe he hopes this will stimulate public discourse into legal limits for the use of AI.

How do you bribe... I mean lobby an AI?

I'd say the same way you'd bribe a robot: by bribing/threatening the people who make it. Killer robots, killer lawmakers, killer judges?

"[..] when awesome technological powers are in the hands of a very few, and no one representing the public interest can even grasp the issues; when the people have lost the ability to set their own agendas or knowledgeably question those in authority [..]"

-- Carl Sagan

Laws so complex that we need machines to create, parse, and apply them would be very bad even if they were applied equally to all... which they wouldn't be. There is no question that the oligarchs' excitement about this stuff is based on the implicit understanding that they wouldn't be. And these black-box processes are just too great an opportunity for bad actors to put a finger on the scale while people simply accept what the "AI" "decided".


I don't see a scenario where reasoning AI won't be used in every context imaginable unless it is explicitly restricted and heavily policed. I also don't see a problem with using AI as an auditing tool to check for logical inconsistency, loopholes, conflicts with existing laws...

There are some areas where we will always want knowledgeable and trustworthy humans in the loop and/or actually doing the work. As a society, we should probably start to define now what those areas are and how we want to restrict people from using these tools. My personal opinion is that the legal system is not an area where we should heavily restrict the use of AI.


Terrible idea.

More complex law is something that is absolutely needed.

yup, how are lawyers to charge $1k/hour otherwise :)

Laws should be written by people, for people. Intertwining technology is not a good idea. Having a legal system so complex it requires enormous wealth to access is already problematic enough.

If you ignore the complexity added by divided governments, the idea of using LLMs to help draft laws, because they can understand many more domains than the average human, is kinda interesting.


