The cost of LIDAR is coming down (washingtonpost.com)
98 points by edward on Dec 6, 2015 | hide | past | favorite | 61 comments


LIDAR is pretty cool because you don't have to do object recognition on the point cloud to know what you shouldn't be bumping into - since the data comes back "already 3D" you have distance and size information and just need to calculate speed. Another benefit LIDAR has is its high resolution as compared to RADAR.
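That speed calculation is essentially a range difference over time; a minimal sketch with made-up numbers (no real sensor involved):

```python
# Closing speed from two successive LIDAR range returns to the same
# tracked point. All values are illustrative.

def closing_speed(range_t0_m, range_t1_m, dt_s):
    """Positive result means the object is approaching."""
    return (range_t0_m - range_t1_m) / dt_s

# Object at 50 m, then 48.5 m a tenth of a second later:
speed = closing_speed(50.0, 48.5, 0.1)  # 15 m/s, i.e. 54 km/h closing
```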

Where LIDAR typically falls down is in adverse conditions (rain, snow, a low sun on the horizon). There's a group at my university, MDAR, that is prototyping technology to better handle adverse weather through an improved laser-sensing technique.

Here's an article with a little more technical information [1].

Disclaimer: I've done some part time work on market research for MDAR.

[1] http://articles.sae.org/13899/


you don't have to do object recognition on the point cloud to know what you shouldn't be bumping into - since the data comes back "already 3D"

This is true of any point cloud system, though. No system returns density, so you still have to make assumptions about what is worth running into and at what resolution -- so that if a leaf falls in front of your car, you don't immediately slam on the brakes.

I don't know of any vision systems that do object recognition immediately; in fact I'm not even sure that's possible -- or desirable. I guess if you baked segmentation into the firmware it would serve as a fast classifier, but AFAIK nobody is even talking about doing that, as you would need a very well trained vision library and classifier to deploy to an FPGA.

I make the distinction because we build fairly dense point clouds with monocular RGB cameras and VSLAM. LIDAR isn't required anymore. LIDAR is faster and has tighter error tolerances today, but it's nowhere near as cheap or portable. I suspect monocular SLAM will overtake LIDAR in the next couple of years, as it's getting faster and more accurate every day.


I agree VSLAM is good and getting better every day, but I don't necessarily agree it'll displace LiDAR for autonomous vehicles. The first generations will probably have both. Sure, LiDAR is more expensive, but we're talking about avoiding collisions here -- I believe the extra safety is worth it. I don't think portability is a big issue for cars (though it is a bigger concern for smaller aerial vehicles). LiDAR will still work in the dark, and potentially with much more range than cameras can manage. Having worked with vehicles using both technologies, I'd feel far better on the LiDAR-equipped one.


100% agree, currently. I was totally overlooking your point about darkness, so I think that's the big difference that makes LiDAR better long term, until something better comes along.


You make really good points all around. The part we're not so sure about is whether there's really no place for LIDAR in autonomous driving.

As you mentioned, we'll have to see where the economics take over, as cameras are much cheaper but less accurate.

I'm curious if your camera-based solution is for outdoor or automotive use. If it is meant for outdoor use, how is performance with oncoming headlights, a low-horizon sun, or in snow?


Ours is all-environment, but for mobile AR rather than autonomous driving, so our draw distances don't need to be as long, and we have more persistence and more frames per meter than a driving system would have at 60 km/h. That said, if we weren't restricted to mobile hardware we could probably come close to matching LIDAR speed and resolution.

The snow thing is interesting, actually, but in a different way than you might think. Because we do loop closure, matching scenes that are temporally separated (e.g. no snow vs. snowy, or occluded by leaves) is very hard.


I'm not sure if I understand you correctly. But Mobileye, probably the most prominent company in the field, does object recognition? And they design their own ASICs.

https://www.youtube.com/watch?v=jKfwHsHUdVc


Right, but you don't have to in order to get 3D depth data from a point cloud.


If LIDAR becomes cheaper though, then what wins? It should take less processing and mean lower latency I would think.


> you don't have to do object recognition on the point cloud

This is true, as long as you're willing to treat everything your LiDAR sees equally. The problem with this is that you're treating, say, a patch of grass (which you might as well drive over), a rock (which you'd rather drive around) and a pedestrian (which you might rather stop for, as a moving obstacle) the same way. Object recognition is still a good idea with LiDAR. And I'm not even getting into noisy returns or "negative" obstacles (i.e. holes).

Disclaimer: has done research in object recognition with LiDAR ;)


Bottom video on this page is my favourite example of this - one 800kg autonomous vehicle carefully avoiding stepping on the grass...

http://www.drtomallen.com/publications--videos.html


What I wonder about for LIDAR is how it will work when you have 20+ Cars using these devices simultaneously, say when driving on a busy road. Do they suffer from not being able to discriminate their own signals from those of the other cars, or is that (statistically) not a concern?


With a flash LIDAR, it's not a big problem. If you're taking 100 images a second at 200 meters, you're only receiving for 1.2us every 10ms, for a duty cycle of about 1/10000. Conflicts are possible but unlikely. If you vary the flash timing randomly, the odds of N successive collisions are (1/10000)^N. A frame with a conflict is going to look totally different than the previous and following frames, so it will be obvious when you've been jammed.
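The arithmetic above can be checked directly; the listening window for a 200 m return is the round-trip time of light, which comes out to roughly the parent's figure (values approximate):

```python
# Duty-cycle estimate for a flash LIDAR, following the comment above.
c = 3.0e8                    # speed of light, m/s
listen_window = 2 * 200 / c  # round trip for a 200 m return: ~1.33 us
frame_period = 1 / 100.0     # 100 flashes per second -> 10 ms apart

duty_cycle = listen_window / frame_period  # ~1.3e-4, roughly 1/10000
p_three_in_a_row = duty_cycle ** 3         # odds of 3 successive conflicts
```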


What would be stopping people from driving around with jammers, or how about enthusiasts who would mod the LIDARs on their vehicles to achieve a higher frame rate? Will police have detectors on their cars, patrolling for these vehicles that are not following the spec?


The same thing that stops people driving around with GPS and cell-phone jammers. That is, nothing, apart from the threat of prosecution if caught.

This seems to be a common complaint about new technologies; people claim that somehow anti-social, violent or abnormal behaviour will be a problem merely because it is _possible_.


You can blind one of the things with a laser pointer at the right IR frequency. But the device can easily detect this, locate it, and use a camera to take a picture of the jammer. For directional devices, jammers are targets; you get direction, but not range.


I suspect that a little bit of modulation cleverness goes a long way towards eliminating crosstalk.
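One common form of that cleverness (a toy illustration, not a description of any shipping unit) is stamping each emitter's pulses with its own pseudorandom code and correlating returns against it; another car's code matches only about half the chips by chance:

```python
import random

def matching_chips(a, b):
    """Count positions where two binary chip codes agree."""
    return sum(1 for x, y in zip(a, b) if x == y)

rng = random.Random(42)  # fixed seed, purely for a repeatable example
my_code = [rng.randint(0, 1) for _ in range(64)]
other_code = [rng.randint(0, 1) for _ in range(64)]

own_score = matching_chips(my_code, my_code)       # perfect match: 64
other_score = matching_chips(my_code, other_code)  # ~32 by chance
accept = own_score > 0.9 * len(my_code)            # keep only own returns
```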


Not sure if you follow military applications, but when a group of aircraft flies in formation, only one of them will have its radar in active scan mode, and the data is usually shared over the data link. With phase shift due to movement and the uncertainty about how many cars are around you, modulation might not be enough to prevent crosstalk.


Do they still do that? If so, it's because they don't want to have frequency agility turned on all the time, so potential adversaries get less opportunity to figure out the relevant algorithms. The degree to which radars adjust their pulse frequencies is way larger than the Doppler shift of anything you're likely to run into traveling in the atmosphere. You will occasionally have a conflict by chance, but radars by their very nature have to deal with a certain amount of noise.

With lidars you won't be able to adjust frequency, because that's not how lasers work. Well, you can with free-electron lasers, but nobody is going to put those in lidars. But a laser is directional in a way that radar isn't. You will occasionally get some false returns, but real objects persist from scan to scan in a way that noise doesn't. I've worked with a small fleet of lidar-equipped robots, and lidar crosstalk never caused us any problems. Sunlight, on the other hand, could be a big problem, but automotive lidars are all rated for the outdoors.


Ah yes - the ol' "facing west near sunset" bug... I lost a few nights sleep to that one. :-)


Yes, overload is real, you can never eliminate crosstalk, only suppress it, and degenerate cases will probably pop up as the technology scales. But arriving at a compromise that allows every car to generate a map of the road should easily be possible. Well, "easily" in the technological sense, perhaps not the political one.

For military applications it's more useful to have one radar at high range/power than to have n radars at low range/power, while it's just the opposite for civilian lidar applications. So I don't see military radar SOP as evidence that this is inherently difficult.


This is why I believe LIDAR is a blind alley. We will need to use stereo camera setups and clever bits of firmware (maybe even neural nets) to extract 3D directly from the picture in real time, just like the rest of us monkeys.


The other unintended consequence of not doing any kind of material recognition is that the system will treat a plastic bag or a piece of styrofoam the same as it would a large piece of sheet metal. One of those requires evasive maneuvers; the others do not.


If LIDAR could sweep materials with varying frequency and measure the reflection vs. absorption across a band, you could get pretty decent material identification.


There are some conditions that LIDAR handles badly. We discovered that the charcoal fabric used on many office chairs is a very good IR absorber. The SICK LMS won't see those things at point-blank range.

Some LIDAR units give you "first and last", that is, the time to the first reflection and to the last one. For a solid surface, both values will be very similar. For snow, rain, and foliage, they'll differ quite a bit. This is used in aerial surveys, where "first" gives you the tops of trees and "last" gives you the ground. It hasn't usually been available for ground-based LIDAR units, but there's no reason it couldn't be.
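The first/last-return distinction described above reduces to comparing echo spread; a toy classifier with a made-up threshold and made-up sample ranges:

```python
# Solid surfaces return one tight echo; snow, rain and foliage smear
# the return out in time. Threshold and samples are illustrative.

def classify_return(first_m, last_m, spread_threshold_m=0.5):
    """Label a return by the gap between its first and last echoes."""
    return "solid" if (last_m - first_m) < spread_threshold_m else "porous"

wall = classify_return(12.30, 12.32)   # tight echo -> "solid"
hedge = classify_return(11.80, 14.10)  # light penetrates -> "porous"
```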


Isn't the same true for stereo cameras?


I'm more excited by the possibility of a small light-weight LIDAR device that can be attached to drones for terrain mapping.

My wife is an archaeologist and we've seen lots of discoveries of buried historical sites found by flying over areas with airplanes and expensive (and large) LIDAR systems. It would be great to be able to disrupt that with prosumer equipment.


You can get a Velodyne "puck", that is light enough for many drones, for around $7k. The problem is that the scan density is not that great, so for many applications (e.g. archaeology) I suspect it's not worth it. Structure from Motion with good imagery can get some pretty good results these days.


I'd hold on until MIT's new tech replaces the need for LIDAR:

http://news.mit.edu/2015/algorithms-boost-3-d-imaging-resolu...


Imagine this as a consumer technology: 3d scan your home, car, yard and I'm sure game developers will create titles that "take place" at your home.


It's already possible to do it with any drone that can take good quality photos: http://www.makerbot.com/blog/tag/uav The resolution depends on the quality of the pictures and the number of them.


Similar to Project Tango, which uses structured light for the same goals.


LIDAR is not the only option for visual odometry: as the KITTI challenge shows, stereo cameras achieve accuracy similar to LIDAR: http://www.cvlibs.net/datasets/kitti/eval_odometry.php

The same is true for object detection : http://www.cvlibs.net/datasets/kitti/eval_object.php

And tracking : http://www.cvlibs.net/datasets/kitti/eval_tracking.php


Since there are so many experts on the thread, say these are deployed in every car on the road. How will one tell its own laser reflection from everybody else's, if there's a hundred lidars per city block scanning simultaneously?


Light travels really fast. As each pulse has an extremely short length, and there aren't that many scans per second comparative to the lengths of the pulses, collisions should be rare. When you do have a collision, it will look completely different from the pulses around it, so you can easily discard it.


Are you guessing or do you know this for a fact? I know that in radars for example this is dealt with using matched filters (i.e. filters tuned to extract known, but delayed signal through correlation), but that requires continuous, modulated signal.


Fewer laser elements means cheaper, but also means lower resolution. I agree that a combination of video and radar works well for current active driver aids in cars. LIDAR isn't necessary.


True, but what resolution do we need? The smallest object a car would need to stop for would be what, a cat or a raccoon? We don't need snowflake resolution.


32 elements might be able to detect objects that small. It depends on how far away they are.
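Whether a 32-beam unit resolves a cat-sized object is mostly small-angle geometry; a rough sketch with invented specs (the beam count and field of view here are not from any particular product):

```python
import math

def vertical_gap_m(range_m, beams, vertical_fov_deg):
    """Approximate spacing between adjacent beams at a given range."""
    spacing_deg = vertical_fov_deg / (beams - 1)
    return range_m * math.radians(spacing_deg)

# 32 beams spread over a 40-degree vertical field of view:
gap_at_20m = vertical_gap_m(20.0, 32, 40.0)  # ~0.45 m between beams
# A ~0.3 m animal could sit entirely between two beams at that range.
```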


What's wrong with stereoscopic video cameras?

Turning them into a point cloud isn't that difficult, and it's much better than lidar for recognition purposes.


Does anyone know of any examples of a "solid state" lidar as mentioned in the article? I'm unable to visualise how such a thing would actually work, and it's got me excited!


Maybe they use a digital micromirror device, like the ones in DLP projectors. With a fisheye lens you could use the mirror array to look around and calculate the travel time of light pulses.

I'm also thinking about 3D cameras. Maybe those are getting good enough to measure distance.


Another possibility might be to use multiple solid-state laser transmitter/receiver pairs located close together, somewhat like ink-jet printer nozzles. Because they're discrete, you would lose some resolution compared to a continuous scan system, but it might be OK for the purpose of staying on the road.

And, since the vehicle is moving (most of the time) anyway, maybe multiple returns can be integrated something like a synthetic aperture radar system to form a more complete picture.


I thought the micro-mirror approach would be considered electromechanical and not solid-state?


TriLumina (http://trilumina.com/) was at Connected Car Expo this year. They didn't have an actual device to show, but they have developed a solid state LIDAR with some pictures and a data sheet on their website.


Time-of-Flight cameras are one option:

https://en.wikipedia.org/wiki/Time-of-flight_camera



From their site

Based on the ToF principle, the cameras employ an integrated light source. The emitted light is reflected by objects in the scene and travels back to the camera, where the precise time of arrival is measured independently by each pixel of the image sensor, producing a per-pixel distance measurement.
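The per-pixel relation the quote describes is just distance = c·t/2; a minimal sketch with an illustrative round-trip time:

```python
# Time-of-flight ranging: each pixel times the round trip of the flash.
C = 299_792_458.0  # speed of light, m/s

def tof_distance_m(round_trip_s):
    """Distance from the measured round-trip time of a light pulse."""
    return C * round_trip_s / 2

d = tof_distance_m(33.4e-9)  # a ~33 ns round trip is roughly 5 m
```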


Is there something like this available, but in the <$250 segment. These I understand are > $1000 market.


The second-generation Kinect (the Xbox One unit).


Lidar competes for outdoor scanning. Isn't Kinect out of that game?


What happens after one of these things fails on a 10 year old car? Given the track record of most electronics on cars, this doesn't seem to be that unlikely.

Given these kinds of liability concerns, it might be that reducing the cost of the tech won't reduce the cost of the product much.


I guess the computer will tell you to get the sensor replaced? "By 2018, they are expecting to have a third-generation system in place which will be the size of a postage stamp and will sell for under $100." http://blog.lidarnews.com/postage-stamp-sized-lidar/#sthash....


Finally. I was pushing for this back in 2003, but the volume wasn't there.

There are quite good eye-safe flash LIDAR units. Advanced Scientific Concepts in Santa Barbara makes some.[1] A pulsed laser sends out a flash, and the receiver has a 128x128 pixel chip with time-of-flight counters for each pixel. Range in hundreds of meters, in sunlight. No moving parts except a fan. Cost is about $100K, because they are hand-made using custom ICs produced in tiny volumes. SpaceX uses one on the Dragon spacecraft to dock with the space station.

Back in 2003, when we were preparing for the DARPA Grand Challenge, I went down to Santa Monica to see the thing. But the demo was of components on an optical bench, aimed out a garage door into a parking lot. The technology wasn't ready to use yet, but it was clearly the right idea, something that would become cheap in volume. They were working more on making the device more sensitive than on making it cheaper, because they were getting DoD funding. Some of their units even have a photomultiplier tube stage, like an image intensifier. The photoelectric effect is at the atomic level, and so fast you can use it in front of a time of flight device.

So we ended up using the SICK LMS scanner, like everybody else.

There's a tradeoff between field of view and range with flash LIDAR, because there's a limited amount of illumination power. A single one of these devices won't give a full circle of scan, like the Velodyne rotating devices.

There's a cheaper, but less precise, approach, used in the second-generation Kinect. That's the ZCam technology. See p. 13 of this paper [2] for how it works. It's a phase-shift type LIDAR; the outgoing beam is modulated with an RF signal, and the phase shift of the RF carrier is measured on return. That's also all solid state, but not as good at rejecting interfering illumination. It doesn't have digital counters for each pixel; the timing system is analog. So they don't need counters running at tens of GHz behind every pixel.
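The phase-shift scheme described above recovers range from the modulation phase, d = c·Δφ/(4π·f_mod), at the cost of a limited unambiguous range; a sketch with illustrative numbers:

```python
import math

C = 3.0e8  # speed of light, m/s

def phase_shift_range_m(phase_rad, f_mod_hz):
    """Range from the RF carrier's measured phase shift on return."""
    return C * phase_rad / (4 * math.pi * f_mod_hz)

# 10 MHz modulation with a quarter-cycle (pi/2) phase shift:
d = phase_shift_range_m(math.pi / 2, 10e6)  # 3.75 m
unambiguous = C / (2 * 10e6)                # range wraps every 15 m
```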

Once someone is ready to order > 100,000 flash LIDARs, they'll get cheap. The technology is not inherently that expensive.

The main reason for having a LIDAR is not to see obstacles. It's to profile the terrain ahead. You can detect potholes and drop-offs. Since we were driving off-road for the DARPA Grand Challenge, that was absolutely essential. Automatic driving systems for on-road use can sleaze by without doing that. Most of the time. Until they encounter a big pothole, or are misled by road markings into a ditch. That's why Google has that LIDAR up on top.

[1] http://www.advancedscientificconcepts.com/ [2] http://tesi.cab.unipd.it/47172/1/Tesi_1057035.pdf


I have a friend who worked at ASC and got to play with their "Portable 3D Flash LIDAR Camera Kit"[1] a few years after you saw the tech on an optical bench. Paired with an optical video camera, you get a full-color 3D scene in real time at 30 fps. Pretty neat.

Their stuff is much smaller and more advanced now and they're pushing the OEM angle pretty hard[2].

[1] http://www.advancedscientificconcepts.com/products/older-pro...

[2] http://www.advancedscientificconcepts.com/applications/autom...


They've made a lot of progress. But their site says "The ASCar website is not providing product or company details at this time. The company will begin promoting its products in 2015." Getting the price down by three orders of magnitude is tough for a small startup, especially since the design pushes the limits of the technology at the high price point.


(They're still small, but have been around since 1987.)

My office is right next door to ASCar's. I won't say much because it's probably privileged information but let's just say things seem to be coming along quite nicely.


Not mentioned in article but bodes well for drones, aerial mapping, etc.


What wavelengths are typically used for LIDAR?


Infrared.


Infrared is a wide band though, 0.7µm-1mm.

It looks like LIDAR products typically operate in the near-infrared, with two products I picked at random operating at 905nm[1] and 850nm[2]. They'd want to pick bands that are invisible to humans, ones where laser diodes exist that can be modulated, and ideally bands that suffer minimal attenuation from environmental factors like water. NIR meets those criteria pretty well.

1. http://velodynelidar.com/docs/datasheet/63-9229_VLP16_Datash...

2. https://sick-virginia.data.continum.net/media/dox/6/16/516/P...



