
Likely they will get a gigantic sea of trained deep learning weights that no one understands.


That really isn't going to play well with a judge or jury. I can see the poor programmer up on the stand, unable to explain what the weights mean or how the car made its decisions.


That is, unfortunately, the problem with neural nets in general.

They are amazing logical constructs, but there is a fundamental opaqueness to them: short of enough network mass to convincingly simulate a human, we can't apply the same methods of formal verification and behavioral inference that we can to other, more specific machine implementations.

No one can explain just why certain weights work in certain situations and not others. They just do.

Whether anyone in the justice system is comfortable effectively legislating from the bench, by creating precedent that holds companies liable for NN-based behavior they have no hard-and-fast way to proof-test against in the first place, is another question, however.


It’s almost as though you have to treat them like human drivers. We cannot formally verify 16 year olds, either, or sufficiently introspect accurate reasons for their behavior. Instead, we require them to pass a test, we apply actuarial cost models, etc.


16-year-olds can talk and try to explain themselves. Humans try very hard to make themselves understood. Neural nets can't (at least, not yet).


You can imagine what the Tesla NN would say if they built an explanatory speech system in. Something like "Oops, sorry - I thought that white line was a lane and didn't recognise the barrier." Not all that helpful here, really.


There's no fundamental law that requires neural networks to be hard to understand. In fact, debugging and interpreting neural networks is a very active area of research and is getting easier every day. In twenty years I would not be surprised if the tools are so good, due to economic forces, that it's easier to understand why a neural network made a decision than why some complex hand-written conditionals program reached a certain result.
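
To make that concrete, here's a minimal sketch of one of the simpler techniques already in use, input-gradient saliency (assuming PyTorch; the toy model is hypothetical and stands in for a real driving network):

    import torch
    import torch.nn as nn

    # Hypothetical toy classifier standing in for a driving-policy net.
    model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 3))
    model.eval()

    x = torch.randn(1, 8, requires_grad=True)  # one input, e.g. sensor features
    logits = model(x)
    pred = logits.argmax(dim=1).item()

    # Backprop the winning class score to the input; large gradient
    # magnitudes mark the features that most influenced this decision.
    logits[0, pred].backward()
    saliency = x.grad.abs()
    print(pred, saliency)

It won't tell you "why" in plain English, but it does point a finger at what the decision hinged on.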


That still doesn't solve the problem.

"Nodes X, Y, and Z reached action potential" tells us nada useful about the NN.

It's not a question of not being able to run the code in debug; we can absolutely do that.

It's a question of the outcome only being dependent on a seemingly random set of numbers which no one can really reason from. That's the issue. With teenagers, we at least have the ability to structure incentives such that they continually improve their driving behavior, and most importantly, they are actually capable of learning after you cut them loose with the car. No hardware upgrades required.

The car on the other hand? Not so much.

W.r.t. another poster's suggestion of actuarial models: I consider insurance a less-than-satisfying marshaling of our economic time, and a backdoor social control mechanism that still just makes me fidget. But that's just me.


> It's not a question of not being able to run the code in debug; we can absolutely do that.

Don't think of debugging a neural network with the same tools you'd use to step through procedural code. In the future you'll have better visual tools that show you a lot of information about the state of the network at once, rather than just the contents of a few registers, as with a modern debugger.
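
As a rough sketch of what that richer view can look like even now (assuming PyTorch; the tiny model is made up), forward hooks let a single pass record the state of every layer at once:

    import torch
    import torch.nn as nn

    # Hypothetical small network; real driving stacks are far larger.
    model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 3))
    activations = {}

    def capture(name):
        def hook(module, inputs, output):
            activations[name] = output.detach()
        return hook

    # One hook per layer, so a single forward pass records everything.
    for name, layer in model.named_children():
        layer.register_forward_hook(capture(name))

    model(torch.randn(1, 8))
    for name, act in activations.items():
        print(name, tuple(act.shape), act.abs().mean().item())

A visual tool is essentially this plus a front end: heatmaps over the captured tensors instead of print statements.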


> No one can explain just why certain weights work in certain situations and not others. They just do.

I think the problem will come when some lawyer latches on to that as a way of saying the auto maker is putting a product it cannot prove or explain on the road, but still claiming it’s safe.


Liability is about incentives. Strict liability moves the externalities of unsafe cars onto the manufacturer.


To an untrained juror, even regular computer code is a black box whose behaviour is entirely unpredictable.

The only difference between a neural net and regular logic is that the latter has someone who claims to understand it.



