Hacker News

My point is that humans are already behaving extremely poorly as drivers, and that already causes accidents. There are ways to make that less of an issue that would also help make self-driving possible. But instead we accept those ridiculous risks when driving ourselves, yet expect to hold self-driving to a much higher standard while also complaining if it's not aggressive enough in traffic. That may very well turn self-driving into AGI, but that's a choice, not a characteristic of the problem.

You're attacking a strawman. I'm not saying the solution to self-driving is to change the environment completely to make the problem trivial (i.e., walls on sidewalks). I'm saying that some of the poorly defined and even worse enforced rules of driving could be worked on and that would help self-driving as well. How much would be enough to bring it out of being AGI I'm not sure, but it would definitely help.




We have plenty of rules already. Rules get broken. The more enforcement approach is a road to China.


I personally don't think enforcing rules around roundabouts, so I don't have the near-death experience I had the other day, is "a road to China". I actually love driving myself and don't particularly care about self-driving technology. But as far as I can tell it's not even possible for me, a reasonably fit human, to drive in a way that prevents those risks. If we are not willing to tackle that, and at the same time expect airliner reliability from self-driving cars, then we've defined the problem as impossible.


Consider your half example: how do you propose to do that?


The rule already exists but is not enforced. If tickets were issued for ignoring it in the routine low-speed cases, people would be less likely to ignore it at high speed, where lives are at risk. All the infrastructure and manpower, as well as most of the rules, are already in place. We just choose to ignore it routinely, and then it's no surprise AGI is required to do self-driving within that mess.


I wasn't asking whether the (unmentioned) rule existed; I was asking how you propose to enforce it.


By writing tickets to offenders in the cases where an officer witnessed the offence? I thought that part was implicit. These are not cases where we need more policing or resources, just rules that have been chosen not to be enforced, and a few others that should be written. You don't need a police state to get better rule following. Just actually enforce the rules in the cases you catch, and behavior changes much more broadly.


Right. Except we are talking about corner cases where pre-programmed computers on wheels kill people. Merely better rule following does next to nothing, and that's expected. The idea that we can follow rules to make computers on wheels "work" is false, unless one wants to go the China route where near-everything is monitored and punishment is extracted automatically (and even then... it still won't work).


My whole point is that if you make the environment more predictable you make the problem easier, not that that is the only thing you need to do. You seem to be attacking a strawman where the whole environment is somehow tailored towards self-driving cars by creating a police state, and then the software is very simple. What I'm describing is using all the resources we already have to design and maintain roads to also help with solving the problem, together with all the technology that still has a long way to go. And if we can do that by doing things that also lower risks in normal driving, I don't really see the downside. We already see this in the world: there are countries where it is much safer to drive because there's been a continuous focus on solving exactly this type of issue.


Nope. As I just said, even going the police state route _won't work_. Your premise, that we can make the environment more predictable to make pre-programmed cars easier to program, is wrong. That would be attacking a tiny minority of the real-world problem. I'm entirely comfortable with that prediction. You disagree, and that's fine. I suspect we will both be around to see.

If you want to instead talk about making it safer for human drivers, fine, that's a different subject.


It's not just making the programming easier, it's making the problem actually possible. We have a much higher tolerance for fatal accidents with human drivers than we ever will with automated ones. So if you have an environment where many thousands of people are killed today, and then expect self-driving to work in the exact same context with airliner-level reliability, you've defined the problem as practically impossible. If someone builds a great self-driving technology that, applied to the total US fleet, kills only 20 thousand people a year, I doubt it will ever be accepted. And yet that would be half the fatalities that currently occur.


There is no making the programming easier, or possible. It's not a problem you can solve with a program. I blame sci-fi for this disconnect.

Human sensory augmentation will save lives, but that has nothing to do with the marketing ploy that cars can have self.


>> My point is humans are already behaving extremely poorly as drivers

I haven't been in an accident in over 15 years


I had never been in an accident in 22 years of driving until several months ago, when I was rear-ended while sitting at a red light, in bright, mid-day, dry conditions. My car was even the only one at the light. This extremely poor (or distracted?) driver is presumably still out there on the roads somewhere.





