
Don't kid yourself, the car doesn't have the bandwidth, storage, or processing power to send back anything other than a few raw frames from disengagement events or other rare triggers.



Why would you say that? The car has LTE and connects to WiFi. It could easily send way more data than any other car company, at any time, including over WiFi.


And we don't pay for the LTE bandwidth. Tesla covers the cost of uploading the data.


>It could easily send way more data than any other car company, at any time, including over WiFi.

Except that it isn't, and even Karpathy said the quantity doesn't matter; it's the data quality that does.


They are sending way more data, because to our knowledge GM and Ford are sending back zero data, and Waymo doesn't have half a million cars' worth of data internally to pick from.


>to our knowledge GM and Ford are sending back zero data

Yes, two of the autonomous vehicle leaders are not using any data whatsoever.

I thought this was the smartest forum on the internet?


It depends entirely on how they design the system. They don't necessarily need to send all the data from the cars back home when they can push test cases to the cars, run them in shadow mode to collect real-world results, and then send only the test results back home.
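For what it's worth, shadow mode can be as simple as something like this (a toy Python sketch; every name and threshold here is invented, not anything from Tesla's actual pipeline):

    # Hypothetical sketch of shadow-mode testing on a fleet vehicle.
    # The candidate model runs alongside production; only a compact
    # summary and a handful of example frames ever leave the car.

    from dataclasses import dataclass, field

    @dataclass
    class ShadowTestResult:
        test_id: str
        frames_evaluated: int = 0
        disagreements: int = 0                          # candidate vs. production differ
        sample_frames: list = field(default_factory=list)  # only a few kept

    def run_shadow_test(test_id, frames, production_model, candidate_model,
                        max_samples=3, disagreement_threshold=0.5):
        """Evaluate the candidate model locally and keep only a summary."""
        result = ShadowTestResult(test_id)
        for frame in frames:
            prod_out = production_model(frame)
            cand_out = candidate_model(frame)
            result.frames_evaluated += 1
            if abs(prod_out - cand_out) > disagreement_threshold:
                result.disagreements += 1
                if len(result.sample_frames) < max_samples:
                    result.sample_frames.append(frame)  # a few raw examples, not everything
        return result  # kilobytes of summary instead of the full sensor stream

The point is that the evaluation happens on the car, so the full sensor stream never has to go over the network.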


The presentation makes it clear your claim is entirely false; you should watch it.


Which presentation did you watch? Karpathy said specifically "it's not a massive amount of data, it's just very well picked data" when talking about how the cars only send data when one of the configured triggers fires.


There’s a large gap between ‘a few frames’ and a massive amount of data, and the amount sent lies somewhere in the middle. Clearly they can’t send all the data (nor would they want to), but it seems to be sufficient for significant learning to take place, and the examples shown were good quality and spanned at least a few seconds, so hundreds of frames per example.
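Rough back-of-envelope, using my own assumed numbers (frame rate, camera count, and bitrate are guesses on my part, not figures from the presentation):

    # Back-of-envelope size of a single triggered clip.
    seconds = 5
    fps = 36                  # assumed camera frame rate
    cameras = 8               # assumed number of cameras
    frames = seconds * fps * cameras
    mbps_per_camera = 4       # assumed compressed video bitrate per camera
    clip_megabytes = seconds * cameras * mbps_per_camera / 8

    print(frames)             # 1440 frames per event
    print(clip_megabytes)     # ~20 MB per event

So per-event clips would be on the order of tens of megabytes: trivial to defer to WiFi, but clearly more than ‘a few frames’.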


No, it's spot on. It's exactly what I said: the car can only deliver a few raw frames, and only in response to particular triggers.

Notice the cherry-picked examples in the presentation. There is a whole class of problems the field cars can never help with, since they lack the dead-reckoning sensor setup and precise odometry a development car would have.


They showed video in the presentation which was clearly not ‘a few frames’, unless by a few frames you mean seconds of video.


> There is a whole class of problems the field cars can never help with, since they lack the dead-reckoning sensor setup and precise odometry a development car would have.

Can you give an example? I'm curious what kind of triggers strictly require lab-calibrated hardware.


Short video clips from all cameras are sent back to Tesla when associated with a disengagement event, queued for upload until the vehicle is on WiFi.
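Presumably something along these lines (a toy sketch of trigger-gated capture with WiFi-deferred upload; all class and function names are invented for illustration):

    # Toy sketch: keep a rolling buffer of frames, snapshot it on a trigger,
    # and upload the queued clips only when the car is on WiFi.

    import collections
    import time

    class ClipRecorder:
        def __init__(self, buffer_seconds=10, fps=36):
            # ring buffer keeps only the last N seconds of frames in memory
            self.buffer = collections.deque(maxlen=buffer_seconds * fps)
            self.upload_queue = []

        def on_frame(self, frame):
            self.buffer.append(frame)

        def on_trigger(self, reason):
            # snapshot the buffered frames for this event; nothing else is kept
            clip = {"reason": reason, "time": time.time(), "frames": list(self.buffer)}
            self.upload_queue.append(clip)

        def maybe_upload(self, on_wifi):
            if not on_wifi:
                return  # wait for WiFi rather than burning LTE data
            while self.upload_queue:
                clip = self.upload_queue.pop(0)
                send_to_mothership(clip)  # hypothetical upload call

    def send_to_mothership(clip):
        pass  # placeholder for whatever transport actually uploads the clip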


I hope to god they are sending back short video clips randomly sampling all driving conditions, not just the disengagement events.


They are.


I wonder if Tesla is getting subpoenaed for video clips.

Other than for accidents, the SEC investigation, etc.


My FOIA requests say no, but lots of blind spots. I'm not operating "at scale" due to the cost involved with non-electronic FOIA requests.


Thanks.

I can imagine that police could mine this just like they're doing with Google geolocation data.


I hope Tesla has strong governance controls over customer data, and a fierce inside counsel for pushing back against unnecessary or overly broad LEO requests.



