how would a bank evaluate the risk -- do they generally have an internal team or farm it out to a consultancy? they don't have the history or contacts like VCs or Google's investment funds.
meanwhile, they are playing footsie with Saudi Arabia again:
https://www.youtube.com/watch?v=iDv2E22Mz0s&feature=youtu.be...
The big banks like this have their hands in everything and connections all over the place. They have their own VC funds, they have investment bankers who specialize in different areas of tech, and they often have their own tech bus dev people in Silicon Valley who are deeply networked with both company founders and VC firms. For a lot of startups, being able to put the JP Morgan/Chase logo on your website as a customer is huge. The founders and VCs know this.
> The big banks like this have their hands in everything and connections all over the place. They have their own VC funds, they have investment bankers who specialize in different areas of tech, and they often have their own tech bus dev people in Silicon Valley who are deeply networked with both company founders and VC firms. For a lot of startups, being able to put the JP Morgan/Chase logo on your website as a customer is huge. The founders and VCs know this.
And it was those very people who were selling their Series A. Very competent people they were... :)
this tech is interesting but so poorly understood that it's just using the (public) roads as one large alpha test. given a NN there is no way to verify what its safe operating range is. for instance, if each camera's exposure or occlusion changed slightly, would the results change smoothly? all they can do is try it and hope the inputs are in a safe part of their optimization space.
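A rough sketch of the kind of probe I mean, in Python/PyTorch -- the model, the exposure sweep, and the class index are all stand-ins for illustration, not anything a real autonomy stack runs:

    # Hypothetical probe: does the net's output change smoothly as camera
    # exposure drifts? resnet18 with random weights is a stand-in for a
    # real perception net; the "class 0" score is an arbitrary output to watch.
    import torch
    import torchvision.models as models

    model = models.resnet18(weights=None).eval()   # stand-in perception net
    image = torch.rand(1, 3, 224, 224)             # stand-in camera frame
    gains = torch.linspace(0.8, 1.2, 9)            # simulated exposure sweep

    with torch.no_grad():
        scores = [torch.softmax(model((image * g).clamp(0, 1)), dim=1)[0, 0].item()
                  for g in gains]

    # If the score jumps around under this small sweep, the input sits near a
    # cliff in the optimization space rather than in a safe region.
    for g, s in zip(gains.tolist(), scores):
        print(f"gain={g:.2f}  class-0 score={s:.3f}")

In practice you'd sweep occlusion and rotation too and look for discontinuities; as far as I know nobody publishes that kind of verification.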
also donating to reality-based candidates; even those who want to break up larger companies at least align themselves a bit more with science and scepticism about things as they are. And to be honest, as an employee, if you end up in Facebook.codebase.1 or Facebook.codebase.2, who cares; your downside is minimal.
Let's just drive multi-ton vehicles instead [1]. Highway driving might be easier to visually parse, but higher speeds and probably less controlled kinematics (e.g. does the software know how to adjust for the cargo?) give one pause:
there are 200-300k cars out there with just cameras + radar, and their bet is that the software catches up to the rhetoric. Retrofitting lidar onto the current fleet is next to impossible, and it adds to the cost if they start now. It is an early design bet which the coders now have to meet.
Hopefully no one bought a car hoping it will magically become "fully autonomous" in the future, given that no one knows whether that will even be possible with a research vehicle in 10 years.
Yeah, I mean I hope people realize that a few thousand dollars for "Full Self Driving" doesn't imply "fully autonomous" in the sense that people can use their Tesla as an Uber and be drunk in the back seat! It just means they'll have the best autonomy that Tesla can provide, which still means they have to pay attention. "Fully autonomous" to me is the point where my kid who can't drive can use it as a taxi to school with no other driver, etc. Musk doesn't imagine that yet, I hope.
Now I'm confused. At one point, Tesla literally had the following description for "Full self driving": "in the future, Model 3 will be capable of conducting trips with no action required by the person in the driver's seat".
Isn't that pretty much what you said "Full self driving" does NOT imply?
Conducting (some, specific/easy) trips with no action is easy. Even conducting most trips is likely doable within the near-ish future.
“Full autonomy” (at least to me) is being able to do any trip. Not just some or most. Because the key benefit is that the car can be empty, or the passenger drunk/blind/...
That’s what I think the crucial difference is between their marketed “full self driving” and true full autonomy.
1. What percent of HN readers have a NYT subscription?
2. There are far more comments here than at the source -- is that appropriate? Should the comments ride along w/ the article (for future reference, and given that they paid for the original story/reporting)?
I read it more as a "here's the keys to the kingdom" piece; it referenced the redirectMethod, which even has a step-by-step guide to replicate it. The author is from a search engine marketing consulting firm, so it would seem to be almost against his self-interest to lay bare these tactics.
the positions for Vision/Perception at Zoox all mention NNs: https://jobs.lever.co/zoox
i bet even simple questions like "how does this scenario change decisions when the lighting / sensor rotation are altered?" cannot be answered.
Changing lighting and rotation on sensor input is a pretty standard way to improve neural net performance (it's called Data Augmentation), so I'm pretty sure they could answer that.
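For what it's worth, a minimal sketch of that kind of augmentation with torchvision -- the ranges here are invented; real pipelines tune them per sensor:

    # Each training image is randomly re-lit and slightly rotated, so the net
    # sees the same scene under many lighting / rotation variants and learns
    # to be less sensitive to them.
    from PIL import Image
    from torchvision import transforms

    augment = transforms.Compose([
        transforms.ColorJitter(brightness=0.3, contrast=0.3),  # exposure drift
        transforms.RandomRotation(degrees=5),                  # small sensor rotation
        transforms.ToTensor(),
    ])

    img = Image.new("RGB", (224, 224))   # stand-in frame
    x = augment(img)                     # a different random variant every call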
The Tesla autonomy investor day (the autonomy applied to the cars, not the people) spent most of its time explaining their NN chip, ghost riding to correct predictions, etc. It definitely gave the impression that the network was doing a lot of the decision making.
Of course, because NNs are hot with investors now and Tesla wants their money.
Companies frequently misrepresent how their technology works to the public. Typically they'll do some small portion with the hot technology for buzzword compliance, and then build the rest of it with an actually sensible, boring, and tailored-to-the-problem technology stack, oftentimes with a lot of proprietary legwork done by their data scientists and engineers. This way they get the best of all worlds: investment dollars from gullible investors, PR from journalists that want to hop on the next big thing, a product that actually works, and misdirection so competitors hop on approaches that aren't going to work anyway.
That doesn’t seem to be the case with Tesla. Their custom chip is almost entirely focused on being a neural network accelerator and they seem to be hugely reliant on neural networks. Notably Waymo is said to be less reliant on neural networks but Tesla definitely depends on them.
That doesn't really prove much. The classification is the heavy part that needs acceleration. The algorithmic side would probably not need an accelerator, as it's basically a decision tree.
My original point was that the decision tree should hopefully degrade gracefully as the prediction quality goes down, and not have any branch that leads to insane behavior.
Now obviously, the better your predictions, the better the driving will be, which is why you'd want NN accelerators.
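Toy sketch of the split being described: the accelerated NN side hands back predictions with confidences, and a plain rule layer decides. The Prediction fields and the thresholds are invented for illustration:

    # A hand-written decision layer that degrades gracefully: low confidence
    # never maps onto an aggressive action.
    from dataclasses import dataclass

    @dataclass
    class Prediction:
        obstacle_ahead: bool
        confidence: float   # 0..1, produced by the accelerated NN side

    def decide(p: Prediction) -> str:
        if p.confidence < 0.5:
            return "slow down, hand control back to the driver"
        if p.obstacle_ahead:
            return "brake"
        return "maintain speed"

    print(decide(Prediction(obstacle_ahead=False, confidence=0.3)))  # conservative
    print(decide(Prediction(obstacle_ahead=True,  confidence=0.9)))  # brake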
Gives me the impression that Tesla made a small innovation which they're trying to oversell to investors, since buzzwords like machine learning are more interesting than actual progress. I'm not saying Tesla isn't making massive progress, but anything they say to the press should be taken with a grain of salt.
This reminds me of a day at Asymetrix - a startup founded by Mr. Allen. Somehow we got a very early incarnation of Mosaic, and I remember showing everyone how to bounce over to each lab with just a click and what authoring in HTML meant. Asymetrix's main product at the time was ToolBook, essentially a HyperCard clone -- but this was obviously the future. Paul's reaction was muted, but I think he knew: "This is happening without us!"
They had the very strange idea they were going to take on Microsoft and Borland with their own C++ development system, which seemed to make no sense to me given the other things they were doing.