That article describes a systematic design defect of Tesla's system. (It's a defect that all camera-based systems have. They are designed to ignore stationary objects because they can't adequately distinguish background objects from obstacles.) Car and Driver tested this last year, and the Tesla ran into the stationary dummy nearly every time. This is something humans have almost no problem dealing with.
It has been a problem, but it is definitely not an intrinsic defect of camera-based solutions. The stopped-truck example has been examined in depth with rooted Teslas. The Tesla actually detects the truck, but fails to fully recognize the scenario and attempts to drive under it! See also greentheonly's work on Twitter researching this.
Ignoring some stopped objects is only a temporary limitation.
I said it was a systematic defect that camera-based systems share, not that it was intrinsic to camera-based systems. Camera-based systems have much less information about object positioning than LIDAR-based systems. They filter out non-moving objects because the vision processing algorithms are not sophisticated enough to distinguish the background from obstacles. If they stopped for every stationary object they detected in the road, they'd routinely brake for false positives where the background was mistaken for an obstacle.
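To make the filtering idea concrete, here's a minimal sketch of how a driver-assistance stack might suppress stationary returns unless the classifier is very confident they're real obstacles. This is not Tesla's actual code; `Detection`, `filter_detections`, and the thresholds are all hypothetical, just to illustrate why a stopped truck can get thrown away along with the background.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    distance_m: float      # range to the detected object
    rel_speed_mps: float   # speed relative to our car (negative = we are closing on it)
    confidence: float      # classifier confidence that this is a real obstacle

def filter_detections(detections, ego_speed_mps,
                      stationary_tol_mps=1.0, min_confidence=0.7):
    """Drop detections that look stationary relative to the ground unless the
    classifier is highly confident they are obstacles.

    Ground speed is roughly ego speed plus relative speed; if that is near
    zero, the detection is treated as likely background (overpass, sign,
    roadside clutter) and ignored by the planner.
    """
    kept = []
    for d in detections:
        ground_speed = ego_speed_mps + d.rel_speed_mps
        is_stationary = abs(ground_speed) < stationary_tol_mps
        if is_stationary and d.confidence < min_confidence:
            continue  # suppressed as probable background, so the car won't brake for it
        kept.append(d)
    return kept

if __name__ == "__main__":
    ego = 30.0  # ~108 km/h
    dets = [
        Detection(distance_m=80.0, rel_speed_mps=-30.0, confidence=0.5),  # stopped truck ahead
        Detection(distance_m=120.0, rel_speed_mps=-5.0, confidence=0.9),  # slower-moving car
    ]
    print(filter_detections(dets, ego))  # only the moving car survives the filter
```

The point of the sketch is the trade-off: raise the confidence threshold and you stop for phantom obstacles; lower it and the stopped truck gets filtered out with the scenery.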
That may or may not be a temporary limitation. It depends on whether vision processing becomes reliable enough that you can almost always distinguish an object in the road from an object in the background.
I'll echo that. In heavy rain, I could barely see the lines, but the car seemed to find them just fine and was driving straight and smooth.
Until traffic on the other side of the median barrier hit a big, deep puddle, absolutely covering my car with a thick sheet of water. Autopilot immediately started screaming at me to take control. I already had my hands on the wheel, but it still scared me because I couldn't see either! After the water washed away in a couple of seconds, I was able to re-enable AP and keep going.