
Well said. Autonomous navigation is still a huge open research problem, in the sense that we don't even know how it _could_ work: no existing AI work is capable of integrating all of the necessary pieces. Heck, we don't even know how simple bird or lizard intelligence works, much less how to replicate the human cognition that goes into operating heavy machinery in a busy multi-agent social context.

So promising that it would be not just solved, but productized into a polished consumer gadget, in 5 or so years, was an astonishing, unbelievably foolish promise.

And worst of all, yes, it came from people who should have known better.

And as a quick aside: there's much talk about "ethical AI", but here's a more common AI ethics failure: taking money from funders, shareholders, and customers, with promises of imminent deliverables, while knowing full well that the AI advances required for that haven't happened yet, and there's no evidence for when they will, if at all.




Indeed, it's unclear whether anyone has yet developed a path planner that can overcome "the freezing robot problem". Even with perfect perception performance, which doesn't exist, path planning in congested environments is an unsolved problem, and path planning around ambiguous multi-modal intent might be an unsolvable problem.
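A toy sketch of how the freezing robot problem shows up (all names and numbers here are hypothetical, not any real planner): if predicted pedestrian uncertainty inflates over the planning horizon, every candidate path through a crowd eventually looks unsafe, and the only "admissible" action left is to stop.

```python
# Hypothetical illustration of the "freezing robot problem": pedestrian
# position uncertainty grows with prediction time, so in a dense scene
# every candidate path overlaps some uncertainty disc and the planner
# returns no path at all -- i.e., it freezes.

def collision_risk(path, pedestrians, sigma_growth=0.5):
    """Crude risk proxy: a step is risky if any pedestrian's
    uncertainty disc (radius grows with time) overlaps it."""
    risk = 0.0
    for t, (x, y) in enumerate(path, start=1):
        radius = 1.0 + sigma_growth * t  # uncertainty inflates over time
        for (px, py) in pedestrians:
            if (x - px) ** 2 + (y - py) ** 2 <= radius ** 2:
                risk += 1.0
    return risk

def plan(candidate_paths, pedestrians, risk_budget=0.0):
    """Return the first path whose risk fits the budget, else freeze."""
    for path in candidate_paths:
        if collision_risk(path, pedestrians) <= risk_budget:
            return path
    return None  # no acceptable path: the robot freezes in place

# Sparse scene: a straight path slips past the lone pedestrian.
paths = [[(i, 0) for i in range(1, 6)], [(i, 3) for i in range(1, 6)]]
print(plan(paths, pedestrians=[(3, -5)]))         # finds a path
# Crowded scene: every candidate overlaps an inflated uncertainty disc.
print(plan(paths, pedestrians=[(3, 0), (3, 3)]))  # None -> frozen
```

Real planners use probabilistic risk rather than hard discs, but the failure mode is the same: conservative uncertainty growth plus a safety threshold leaves an empty feasible set.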

To add insult to injury, the manner in use today of modeling humans as essentially just another kind of dynamic object breaks down extremely quickly once humans are normalized to the presence of autonomous robots. The humans change their own behavior model to achieve their own objective function. They're not dynamic objects, they're learning objects, and now your AV models (which will include human interaction and reaction behaviors on purpose if you're doing it right and by accident if you're not) break down along an entirely new longitudinal axis.

I think the AV market fundamentally broke when GM acquired Cruise for $1B seemingly out of nowhere.

1) Because the reality is that the DARPA Urban Challenge result was not assuredly generalizable, and it may be the case that there's still basic science to be done before the domain is just better, cheaper sensors & high-performance compute away from being productionizable.

2) It's not at all clear that deep learning is really iterating toward a solution to this problem either, but the improvements to methods and hardware have produced the ability to make very compelling demos, that even the purveyors themselves might believe in, and show iterative improvements on the previous result, thus fueling more investment into something that isn't even known to be possible.

3) But, none of that would matter if GM hadn't decided to make a power move in throwing $1B at an acquisition and in one fell swoop turned an early stage unproven scientific research market into a super, super frothy capital market where it looked like anybody at any moment could be a unicorn without any clear reason or any fundamentals.

Investors went insane and the expectations and promises followed them.


We have the multiplayer poker breakthrough, so it seems possible. I don't think planning is the core issue; I think that's solvable in time. It's the hardware and reliable general perception that are the blockers.


In poker, the computer knows that none of the players will suddenly start playing chess instead of poker. On the road, the number of possible situations is way, way, way bigger.


Not just bigger. Bigger can be fixed with more CPU time. It is a fundamentally different problem. Nobody has a good theoretical answer even with infinite CPU time.

I say this while on a car ferry. Getting on this boat required me to navigate several strange road markings and obey a half-dozen hand gestures from staff, including several that were contrary to the painted lines. No AI is even contemplating car ferries.


It's not just car ferries. Accident scenes and construction zones sometimes require you to obey hand signals and ignore road markings.

There aren't many car ferries around where I live. But there's plenty of construction...


Every area has its little thing. They may be edge cases individually, but edge cases, as a group, must be planned for.


Right. And, sure, that's an edge case for most people. But even if such edge cases only crop up every now and then, you're now at the difference between a reliable automated door-to-door system and one that can drive you around most of the time but every now and then forces a hopefully sober/licensed/competent driver to get behind the wheel to take some actions.

That's a huge difference. Maybe you can address it with some sort of remote OnStar-like system but now you're forcing a remote operator to jump into an unknown context and take actions that were too hard for the AI.


You just need to worry about car dynamics and pedestrian locations. None of these things can teleport. The problem does have physics-imposed limits.
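That physics bound can be made concrete with a worst-case reachable set: a pedestrian who moves at most V_MAX m/s is, t seconds from now, somewhere inside a disc of radius V_MAX * t around their last observed position. A minimal sketch (the speed and geometry numbers are assumptions, not from any real system):

```python
import math

# Worst-case reachable-set check: no teleporting means a pedestrian
# observed at ped_pos is, t seconds later, inside a disc of radius
# V_MAX * t. If that disc can't touch the car, no conflict is possible.
V_MAX = 2.5  # assumed pedestrian top speed in m/s (brisk jog)

def reachable_radius(t):
    return V_MAX * t

def could_conflict(ped_pos, car_pos, t, car_halfwidth=1.0):
    """True if the pedestrian's reachable disc can touch the car."""
    dx, dy = car_pos[0] - ped_pos[0], car_pos[1] - ped_pos[1]
    return math.hypot(dx, dy) <= reachable_radius(t) + car_halfwidth

# A pedestrian 30 m away cannot reach the car's position within 2 s...
print(could_conflict((0, 0), (30, 0), t=2))   # False
# ...but within 12 s they could.
print(could_conflict((0, 0), (30, 0), t=12))  # True
```

The catch, per the surrounding comments, is that this worst-case bound is sound but hopelessly conservative: in a congested scene the reachable discs cover everything, which is exactly where the freezing behavior comes from.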


Computer integration in poker works because we simultaneously upgraded the infrastructure to support computer players while upgrading the computer players.

I don't think we will ever change road infrastructure in the same manner on any appreciable scale. (Hell, there's a pothole on my street that's been there for 5 years.)


> much less be able to replicate human cognition that goes into operating heavy machinery in a busy multi-agent social context

This holds if you expect self-driving cars to be able to work like humans in picking up "body-language" cues around other drivers. But we can also approach the problem the other way around and set better fixed rules for how driving needs to be done and adjust the road and car infrastructure to make the problem simpler. Human drivers get away with breaking so many rules that the problem becomes much harder than it needs to be.


Someone should redo the old "Spam Solutions" form except for self-driving cars.

Found it: https://craphound.com/spamsolutions.txt

The parent response would have these ideas checked:

(X) Requires immediate total cooperation from everybody at once

...

(X) Asshats

(X) Jurisdictional problems

...

(X) Technically illiterate politicians

...

(X) Countermeasures must work if phased in gradually


>Someone should redo the old "Spam Solutions" form except for self-driving cars.

Automating being dismissive of discussion does sound like a great idea...

> Requires immediate total cooperation from everybody at once

It requires setting stricter traffic rules and having them be followed. Something that's done everywhere in the world every year. It can be done as gradually and as locally-specific as needed.

> Asshats

We already have those on the road causing accidents. Traffic rules are there to punish this exact behavior. The fact that we don't enforce a few of them makes self-driving harder. I'm not "failing to account for asshats", I'm specifically targeting them.

> Jurisdictional problems

We already have different jurisdictions where different driving requirements exist. Having self-driving cars that are only allowed to drive in country X would be nothing new.

> Technically illiterate politicians

My suggestion was specifically to make the road rules stricter and enforced. This is not a technical issue; it's the boring old issue of what the road rules should be and who should follow them.

> Countermeasures must work if phased in gradually

Enforcing road rules works fine even if done gradually. It also makes self-driving gradually simpler.


>> Requires immediate total cooperation from everybody at once

> It requires setting stricter traffic rules and having them be followed. Something that's done everywhere in the world every year.

But you mentioned two somethings: "setting stricter traffic rules" and "having them be followed." The very first examples in the TFA were from Argo's testing in Pittsburgh:

> Recently, one of the company’s cars encountered a bicyclist riding the wrong way down a busy street between other vehicles. Another Argo test car came across a street sweeper that suddenly turned a giant circle in an intersection, touching all four corners and crossing lanes of traffic that had the green light.

Setting stricter traffic rules is certainly done everywhere in the world every year. People failing to follow traffic rules is also done everywhere in the world every day, and that's the big problem. Fully autonomous driving requires the ability to make snap decisions that may have little to no precedent in your past experience. This may be a solvable problem for AI, but it's a really, really hard problem that self-driving aficionados seem to consistently underestimate.


What I'm saying is that it helps, not that it's a silver bullet. But we seem to be stuck in a mindset that self-driving needs to be perfect within current practice on the road. The more I drive, the more obvious it is how horrible human drivers are, often in ways that are extremely easy to police automatically (e.g., tailgating can be checked for with the current toll infrastructure on some of the roads I use). If we want self-driving sooner, we may very well have to attack it across the whole system and work on regulation, enforcement, road design, etc., in parallel with working on the flashy technology bits. If instead we decide that self-driving has to work in the complete mess that roads are today, then AGI is quite possibly a requirement.


And strange things happen all the time in high traffic high congestion urban environments.

...

i.e., where we want to use cars.


Back in the late '90s and early '00s, the Segway was released to much the same breathless prognostications that we hear today about self-driving cars. Doerr said it would be more important than the internet. Kamen claimed it would restructure global transportation.

I don't need to go into everything that was said; you can google it. There was one thing said, however, that I think is appropriate to touch on here. Kamen, just like you, claimed that traffic laws and the streets could be restructured to better accommodate the Segway. That was the instant I knew the Segway would not be taking off for a long, long time. If you need people to restructure their laws and cities solely to accommodate your new technology, you should probably work a little bit harder on perfecting your technology. Because restructuring society's laws and transportation infrastructure simply to use your technology is probably not going to happen. The only time you get a radical restructuring of that nature is when the invention frees you from needing the infrastructure at all.


> If you need for people to restructure their laws and cities solely to accommodate your new technology,

To be fair, this did happen -- with cars.

But that also sets the bar: to warrant re-architecting cities, an advance needs to be as superior to cars as cars were to horse transport.


What I'm describing isn't reshaping your city to fit in self-driving cars. It's doing the things that we should be doing anyway to make road fatalities less embarrassingly huge and with that helping self-driving be an easier problem to tackle. Human drivers should be keeping much larger following distances and indicating properly. Road markings should be much clearer. Signage should be unambiguous. We should do all those things even if we don't care about self-driving cars just because they avoid accidents in general.


Yeah, but now electric scooters are all the rage, and he was basically right (the execution was wrong). Cities are even thinking about restructuring roadways to accommodate them (and more bikes).


Electric scooters are barely a blip in a city's overall transportation scheme. The biggest regulation going on is how to keep them from blocking the sidewalk.

Even restructuring a city's roadways for bicycles, a long-term, proven good technology, has been incredibly slow going. And this with something governments are actively trying to improve.

It will be decades until there is substantial enough change that self driving cars are viable as described in the great-grand-parent. And that's if it moves quickly.


The "humans are the bugs" fallacy.

https://news.ycombinator.com/item?id=18447674

What's next? Walls on sidewalks? Computers are not smart or dumb. They are machines. Anthropomorphizing them ("self") is foolish.


My point is that humans are already behaving extremely poorly as drivers, and that already causes accidents. There are ways to make that less of an issue that also help self-driving become possible. But instead we accept those ridiculous risks when driving ourselves, and yet expect to hold self-driving to a much higher standard while also complaining if it's not aggressive enough in traffic. That may very well turn self-driving into an AGI problem, but that's a choice, not a characteristic of the problem.

You're attacking a strawman. I'm not saying the solution to self-driving is to change the environment completely to make the problem trivial (i.e., walls on sidewalks). I'm saying that some of the poorly defined and even worse enforced rules of driving could be worked on and that would help self-driving as well. How much would be enough to bring it out of being AGI I'm not sure, but it would definitely help.


We have plenty of rules already. Rules get broken. The more enforcement approach is a road to China.


I personally don't think enforcing rules around roundabouts, so I don't have the near-death experience I had the other day, is "a road to China". I actually love driving myself and don't particularly care about self-driving technology. But as far as I can tell, it's not even possible for me, a reasonably fit human, to drive in a way that prevents those risks. If we are not willing to tackle that, and at the same time expect airliner reliability from self-driving cars, then we've defined the problem as impossible.


Consider your roundabout example: how do you propose to do that?


The rule already exists but is not enforced. If tickets were issued for ignoring it at low speed, where it happens routinely, it would be less likely to happen at high speed, where lives are at risk. All the infrastructure and manpower, as well as most of the rules, are already in place. We just choose to ignore them routinely, and then it's no surprise that AGI is required to do self-driving within that mess.


I wasn't asking whether the (unmentioned) rule existed; I'm asking how you propose to enforce it.


By writing tickets to offenders in the cases where an officer witnessed the offence. I thought that part was implicit. These are not cases where we need more policing or resources, just rules that have been chosen to not be enforced and a few others that should be written. You don't need a police state to have better rule following. Just actually enforce the rules in the cases you do catch, and behavior changes much more broadly.


Right. Except we are talking about corner cases where pre-programmed computers on wheels kill people. Merely better rule following does next to nothing, and that's expected. The idea that we can follow rules to make computers on wheels "work" is false, unless one wants to go the China route where near-everything is monitored and punishment is extracted automatically (and even then... it still won't work).


My whole point is that if you make the environment more predictable, you make the problem easier, not that that's the only thing you need to do. You seem to be attacking a strawman where the whole environment is tailored toward self-driving cars by creating a police state and then the software is very simple. What I'm describing is using all the resources we already have to design and maintain roads to also help with solving the problem, together with all the technology that still has a long way to go. And if we can do that by doing things that also lower risks in normal driving, I don't really see the downside. We already see that in the world: there are countries where it is much safer to drive because there's been a continuous focus on solving exactly this type of issue.


Nope. As I just said, even going the police state route _won't work_. Your premise, that we can make the environment more predictable to make pre-programmed cars easier to program, is wrong. That would be attacking a tiny minority of the real-world problem. I'm entirely comfortable with that prediction. You disagree; that's fine. I suspect we will both be around to see.

If you want to instead talk about making it safer for human drivers, fine, that's a different subject.


It's not just making the programming easier, it's making the problem actually possible. We have a much higher tolerance for fatal accidents with human drivers than we ever will with automated ones. So if you have an environment where many thousands of people are killed today and then expect self-driving to work in the exact same context with airliner level reliability you've defined the problem as practically impossible. If someone builds a great self-driving technology that applied to the total US fleet only kills 20 thousand people a year I doubt it will ever be accepted. And yet that would be half the fatalities that currently exist.


There is no making the programming easier, or possible. It's not a problem you can solve with a program. I blame sci-fi for this disconnect.

Human sensory augmentation will save lives, but that has nothing to do with the marketing ploy that cars can have self.


>> My point is humans are already behaving extremely poorly as drivers

I haven't been in an accident in over 15 years


I had never been in an accident in 22 years of driving a vehicle, until several months ago when I was rear-ended while sitting at a red light, in bright, mid-day, dry conditions. My car was the only one at the light, even. That extremely poor (or distracted?) driver is presumably still out there on the roads somewhere.


That's a stereotypical engineering solution and fails to account for how humans actually act in the real world. Humans will always break some rules. As a taxpayer I don't think huge infrastructure changes or lots of increased traffic enforcement are a good use of limited public resources.


I can think of far worse uses for that money than trying to make our roads safer. The current system costs us 40k in lives and millions of serious injuries per year - there is a lot of headroom for improvements. https://www.nsc.org/road-safety/safety-topics/fatality-estim...


With unspoken assumptions about how much of an increase is needed, things become arbitrary.

IMO, if allowing self-driving cars on highways costs less than $100k per mile, it's a rather trivial expense at $16 billion in the US. Extending that to every road would be much harder to justify. Similarly, developing something like a set of clearer hand gestures for directing traffic would not be a major issue.
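For what it's worth, the $16 billion figure checks out if you assume roughly 160,000 miles of US highway (about the size of the National Highway System) at the stated $100k per mile:

```python
# Back-of-envelope check of the figure above (assumed inputs: roughly
# 160,000 miles of US highway, $100k of upgrades per mile).
highway_miles = 160_000
cost_per_mile = 100_000
total = highway_miles * cost_per_mile
print(f"${total / 1e9:.0f} billion")  # $16 billion
```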

Honestly, the need for expensive changes seems unlikely, though some changes such as paint choices for lane markings would probably increase safety or efficiency.


We've done that for ages. They're called trains.



