
This is a fantastic point and gets at the crux of the issue.

I do see a future where laws are encoded in an unambiguous conceptual/symbolic form, designed to be as simple as possible but virtually unintelligible for a human to read directly: a machine code of sorts.

The point of a 'lawyer' AI would be to act as an oracle that interprets the law in the target language, at the level of specificity requested by the user. For those writing the law, the AI would interpret their motives and question the user about any ambiguity, potential ethical issues, or clashes with other laws, to ensure conceptual integrity.
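
Purely as illustration, here is a minimal sketch (in Python, with every name and the toy rule invented for this comment, not drawn from any real legal-encoding standard) of what such a symbolic rule plus a tiny evaluator might look like. The "oracle" part would sit on top of structures like these, translating them to and from natural language:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Condition:
        subject: str    # e.g. "person.age"
        relation: str   # e.g. ">="
        value: object   # e.g. 18

    @dataclass(frozen=True)
    class Rule:
        rule_id: str
        conditions: list  # all must hold for the rule to apply
        effect: str       # symbolic consequence, e.g. "MAY:vote"

    def applies(rule: Rule, facts: dict) -> bool:
        """Check whether every condition of a rule holds for the given facts."""
        ops = {">=": lambda a, b: a >= b, "==": lambda a, b: a == b}
        return all(ops[c.relation](facts[c.subject], c.value)
                   for c in rule.conditions)

    # A toy voting-eligibility rule (hypothetical, for illustration only).
    voting_rule = Rule(
        rule_id="ELEC-001",
        conditions=[Condition("person.age", ">=", 18),
                    Condition("person.citizen", "==", True)],
        effect="MAY:vote",
    )

    facts = {"person.age": 42, "person.citizen": True}
    if applies(voting_rule, facts):
        print(voting_rule.effect)  # prints: MAY:vote

The point of the sketch is only that each rule is unambiguous and machine-checkable, while still being something most people would never read directly.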

That is unacceptable. You’re describing a black box that cannot be examined, understood, or held accountable; you’re assuming its programming is absolutely objective, which it wouldn’t be even if all responsible parties had the best intentions. That throws us back to the days of worshipping an infallible god at the temple. Law, and ultimately democracy, must be made by the people for the people, or it becomes a tool in the hands of the rich and powerful against the majority. This is the same reason elections must be held using paper ballots, so every single citizen can understand and participate in the election process. The second you bring voting machines into the equation, you’ve created a technocracy, leaving the unknowing out and making everyone dependent on those in control of the machines.

A fair point, but one addressed by making the ‘attorney’ models open source/open weights.

That just replaces a "difficult but humanly possible" task (practicing law) with a "we think this must be possible in principle but all of us working together have only scratched the surface" task (interpretability).

When the average person cannot understand either the law or matrix multiplication, they're definitely not going to be able to understand laws created by matrix multiplication.

(And to the extent that our brains are also representable by a sufficiently large set of matrices, we don't understand how we ourselves think either; I believe we trust each other because we evolved to, but even then we argue that someone is "biased" or "fell for propaganda" or otherwise out-group them, and have to trust our various and diverse systems of government and education to overcome that.)


That doesn’t address anything. It only leads to arguments stacked against each other, does nothing for accountability or observability, and remains just as enigmatic and opaque to laypeople. You’re describing a dystopia of complex technology that only a very small elite will be able to control.


