
> The first car from Lyft’s Level 5 self-driving initiative will be the Ford Fusion Hybrid. Lyft’s use of a Ford Fusion apparently isn’t associated with the partnership the two announced last year. Other AV companies have used the Ford Fusion as a platform for integrating self-driving technologies

Why are we still talking about Level 5 autonomous driving when we can't even get Level 4 working properly? I believe this is sending the wrong message.

On the topic of the acquisition, it seems like a good strategic buy for Lyft. I am still not sure whether they have the capital or the talent pool to develop a strong autonomous vehicle product. It also seems like they are a bit late to the party, as there are more and more doubts about self-driving becoming a reality in the next few years.

I may sound sceptical, but I am just cautious when it comes to news about self-driving cars. However, I am genuinely excited by this development and look forward to hearing more about how Lyft progresses on this quest.


"Level 5" is the name they've given that division - as much a goal as anything. I doubt they'll launch at anything near true level 5 capability


>Why are we still talking about Level 5 autonomous driving when we can't even get Level 4 working properly? I believe this is sending the wrong message.

I think it is a capability gap between tech companies and car companies. Car companies can't even put cameras all around the body to avoid accidental scratches during parking. Tech companies, on the other hand, could easily do a lot of car tech but have no business case for doing anything less than Level 5 - Level 5 is a tech platform, whereas anything less is just an advanced car, and tech companies are in the platform business, not the car business.


Uber and Tesla are dead last in self-driving tech, beaten by everyone from Hyundai to Baidu-BAIC. I don't know where you get the idea that "car companies can't even get cameras all around the body to avoid accidental scratches." The leaders of the industry are Daimler-Bosch, GM, Waymo, Volkswagen, Ford, Aptiv, and BMW-Intel-FCA. Mostly car companies.


Traditional car companies are also working on autonomous vehicles.

There is a lot of business to be had with Level 4. While the car may not be able to go everywhere, in restricted areas a robot-taxi/shuttle service would be a great advancement. We would all love to be at Level 5 but I would take a solid Level 4 in a few years instead of waiting much much longer for a viable Level 5 solution.


> Car companies can't even get cameras around all the body to avoid accidental scratches during parking.

Most car companies have introduced this feature in the last ten or so years.

By the way, I don't think any Tesla car has this.


Great advice. I am not seeking investment yet, but this is definitely something I may have to do in the future if I want to accelerate growth, for instance.

After a few unsuccessful attempts working as an employee at startups, I have decided to build my own. Over time and through multiple setbacks I have learned more about myself and what I want when it comes to the type of company I am building. To me it's very important for my co-founder and me to have control over the destiny of our company. Getting VC money can shift the balance of power, and you could lose control over what you have built, which is not ideal. For instance, VCs could block an exit opportunity that would mean life-changing money for you, because the return they would get falls below their expectations.

This may sound very naive, and I don't claim to understand your circumstances, but make sure you already have a strong, growing business so that you play a strong, iron-clad hand when discussing funding with VCs.

All the best my friend.


Same thing. At least I know that Instagram is mostly fake and ego-inflating, and I am totally fine with that.

Plus I have much stricter controls on Instagram and only allow a few people to see my photos and videos.


Also, wasn't this practice (i.e. engineering to fit design goals) what made Apple successful in releasing the iPod, iPhone, etc?

I am an engineer, but I welcome the perspective of designers and in any case believe that both need to work hand in hand.

In the case of driverless vehicles, however, I am not sure the focus should be so heavily on design, because this is a very hard problem that has yet to be solved. Maybe there was a way of designing a vehicle that was evolutionary rather than revolutionary, while focusing mostly on the technical challenges that must be overcome to get us to autonomy.


The key is that if your design goals are ambitious but achievable, you end up with a killer product. If your design goals are unrealistic or unachievable, you tank what's achievable chasing a dream. Frankly, the difference is probably that Steve Jobs had 30 years as a hands-on expert in his field, and this guy was just some bloke who fancied building a self-driving car.

All too often I've seen people set design goals when they don't understand the underlying problem. If Jobs had targeted building an iPod that was physically smaller than the smallest hard disk available at the time, he'd have been in the same position.


Nice one! I don't remember all that much from reading the Mask R-CNN paper last year, and I have not seen many implementations, so it's nice to be presented with this PyTorch implementation.

From what I recall about Faster R-CNN, the Regions of Interest (RoIs) are pre-determined via selective search, right? So I presume you would need to do the same thing with Mask R-CNN? I think this is the part I am most confused about, since I have never implemented selective search myself. Could you point me to some introductory material on it?

Lastly, I can see the author of this work has read my blog post on understanding SSD MultiBox - glad it helped in some way :).


Another popular implementation of Mask R-CNN, in Keras+TensorFlow, can be found here: https://github.com/matterport/Mask_RCNN.

Selective search is used in Fast R-CNN. Faster R-CNN improves upon that and uses a Region Proposal Network (RPN) to propose RoIs that may contain objects, which speeds up training and inference.


RCNN uses selective search to generate the ROIs. What makes Faster RCNN faster is not having to spend time on selective search.


It uses an RPN to generate the region proposals, so it completely does away with selective search, which was the speed bottleneck in Fast R-CNN.
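
For anyone who wants to poke at this, torchvision now ships a Mask R-CNN with the RPN built in (a different implementation from the one linked above). A minimal inference sketch, assuming a recent torchvision - note there is no separate proposal step to run, since the RPN handles it internally:

    import torch
    import torchvision

    # Pretrained Mask R-CNN; its internal RPN proposes the RoIs,
    # so no external selective search is needed.
    model = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=True)
    model.eval()

    # One dummy 3-channel image with values in [0, 1].
    image = torch.rand(3, 480, 640)

    with torch.no_grad():
        predictions = model([image])

    # Each prediction carries boxes, labels, scores and per-instance masks.
    print(predictions[0]["boxes"].shape)
    print(predictions[0]["masks"].shape)  # (N, 1, H, W) soft masks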


Since autonomous driving technology is not there yet, I believe truck OEMs should focus on ADAS to enable lane keeping and lane changes on highways only, as this is the easiest part of the job. Drivers would be required to intervene as soon as the truck exits the highway.

I presume this is what Tesla will do with their trucks although I am very concerned by their very dubious marketing when it comes to autonomous driving technology.


Using GPUs to mine Bitcoin is a bad idea. That's why most serious Bitcoin miners buy ASICs.

But Ethereum is much more amenable to GPU-mining so I presume NVIDIA and AMD got a lot of love (and $$$$) from this mining community.


What constitutes a good or bad idea has to be related to how much money you end up making. As it stands, the variety of cryptocurrencies, and the fact that an ASIC's algorithm is set in stone, make it possible to mine some cryptocurrencies on GPUs at a better profit relative to the cost of electricity.


Not to mention, the ASIC I've seen was mounted with two delta-like fans and was about 3 times noisier than 6 TI cards. Also, power availability is always a limiting factor; the true metric eventually becomes how much hardware you can run on the power you have access to, and how much money that nets you per day/month.
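
A back-of-the-envelope sketch of that metric - every figure below is a made-up placeholder, not a real hash rate or price:

    # Toy profitability estimate: profit is bounded by available power.
    # All numbers here are hypothetical placeholders.
    available_kw = 10.0             # power you can actually draw
    rig_kw = 1.2                    # draw of one rig (e.g. 6 GPUs)
    revenue_per_rig_day = 6.50      # coin mined per rig per day, in USD
    electricity_usd_per_kwh = 0.10

    rigs = int(available_kw / rig_kw)   # how much hardware you can power
    power_cost = rigs * rig_kw * 24 * electricity_usd_per_kwh
    profit_per_day = rigs * revenue_per_rig_day - power_cost
    print(f"{rigs} rigs -> ${profit_per_day:.2f}/day")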


Next month Bitmain is coming out with the Antminer E3, which mines all Ethash-based cryptos, so quite soon we will see how it affects Ethereum.

At a price point of $800, and about as powerful as 6 RX 570s ($350 each), which are the current best bang for the buck here, it should at the very least sell very well.
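
Quick arithmetic on the up-front cost using those figures (hash rates assumed roughly equal, as stated above):

    # Capital cost comparison, using the figures from the comment above.
    e3_cost = 800                  # one Antminer E3
    gpu_rig_cost = 6 * 350         # six RX 570s = $2100
    print(gpu_rig_cost / e3_cost)  # ~2.6x cheaper for a comparable hash rate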


Doesn't Bitmain have ridiculous power consumption?


They use a lot of power but also have higher hash rates per watt than GPUs.


Most people use Bitcoin as a stand-in for any cryptocurrency.


This looks quite nice in the sense that it's an index of deep learning projects, but it only seems to copy the README and link to the GitHub projects of those papers.

I thought that by clicking "Get Model" I would get the model right away, but it just redirects me to the GitHub page of the project.

There is certainly value in having information about all these models in one place, but I feel more friction could be eliminated by providing the ability to download the model files directly.


NVIDIA published a very interesting paper on this technique called "End-To-End Learning For Self-Driving Cars": https://arxiv.org/abs/1604.07316 . It's an enjoyable read.

The car learns to steer itself on an empty road. It's a good experiment to witness the power of deep learning and neural nets. For autonomous vehicles though, you need much more than that (e.g. sensor fusion, obstacle detection, localization, behaviour prediction, trajectory prediction, path planning, motion control, etc.).


From that article:

> Compared to explicit decomposition of the problem, such as lane marking detection, path planning, and control, our end-to-end system optimizes all processing steps simultaneously. […] Better performance will result because the internal components self-optimize to maximize overall system performance, instead of optimizing human-selected intermediate criteria, e. g., lane detection.

One can imagine that it might be more difficult to get a network to solve this large problem all at once, and that it might be easier to decompose the problem and solve each part. Would it be a good idea to guide the end-to-end system by first decomposing the problem and solving each part, then using that solution as a starting guess for the whole problem? I mean, the decomposition might perhaps be a reasonable approximation of how the whole problem should be solved. (Then again, it might not.)
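
One way to picture that warm-start idea - a hedged sketch in PyTorch, where the module names and the checkpoint file are hypothetical, purely for illustration: pretrain a perception module on an intermediate task (say, lane-marking detection), then reuse its weights as the starting point for end-to-end fine-tuning:

    import torch
    import torch.nn as nn

    class Perception(nn.Module):
        # Stand-in for a backbone first trained on an intermediate
        # task such as lane-marking detection.
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 24, 5, stride=2), nn.ReLU(),
                nn.Conv2d(24, 36, 5, stride=2), nn.ReLU(),
            )

        def forward(self, x):
            return self.features(x)

    class EndToEndDriver(nn.Module):
        def __init__(self, perception):
            super().__init__()
            self.perception = perception  # warm-started weights
            self.head = nn.Sequential(
                nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(36, 1)
            )

        def forward(self, x):
            return self.head(self.perception(x))  # scalar steering angle

    perception = Perception()
    # perception.load_state_dict(torch.load("lane_pretrained.pt"))  # hypothetical checkpoint
    driver = EndToEndDriver(perception)
    # ...then fine-tune `driver` end-to-end on (image, steering angle) pairs.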


The steering angles are probably captured automatically by the script you run when manually driving the toy vehicle. I presume that at every frame an image is captured along with the steering angle for that frame.

You then create a neural net architecture that is fed images and their steering angles and outputs steering angle predictions. This is a regression problem that can be expressed in plain English as:

"First train the neural network with a collection of images and associated steering angles. After training, if I were to give the neural network a new image it has never seen before, what steering angle would the network predict?"

NVIDIA has a paper on it[1], and I blogged about similar coursework I completed as part of Udacity's self-driving car nanodegree[2].

[1] https://arxiv.org/abs/1604.07316

[2] https://towardsdatascience.com/teaching-cars-to-drive-using-...

