Gary Marcus believes there are still unsolved "edge cases" in self-driving, and that autonomous cars should not be tested on public roads around the clock. A 2016 New York Times article about self-driving cars opened: "The era of self-driving cars has arrived, and some automakers have invested billions of dollars in research and development... and have begun testing in some cities in the United States." Seven years have passed. Where does self-driving stand now?
Marcus, professor emeritus of psychology and neuroscience at New York University, has weighed in on the field. He says one problem remains, one he has highlighted dozens of times over the past few years: edge cases, the non-routine situations that often confuse machine learning algorithms.
The more complex the situations a self-driving car faces, the more unexpected anomalies it will encounter. The real world is messy and chaotic, and no one can enumerate in advance every non-routine event that might occur; no one has yet figured out how to build a self-driving car that copes with that fact.
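To make the problem concrete, here is a toy sketch, not any vendor's actual perception stack: a trivial nearest-centroid classifier is trained on two "routine" situations, and when handed an input unlike anything in its training data it still confidently returns one of the labels it knows. The clusters, labels, and `threshold` value are all illustrative assumptions; only an explicit novelty check makes the system defer instead of guess.

```python
# Toy illustration of the edge-case problem (all names and numbers assumed):
# a classifier trained only on routine data still emits an answer for inputs
# unlike anything it has seen, unless novelty is checked explicitly.
import numpy as np

rng = np.random.default_rng(0)

# Training data: two routine situations, e.g. "clear road" vs. "pedestrian".
clear_road = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(200, 2))
pedestrian = rng.normal(loc=[4.0, 4.0], scale=0.5, size=(200, 2))
centroids = np.stack([clear_road.mean(axis=0), pedestrian.mean(axis=0)])
labels = ["clear road", "pedestrian"]

def classify(x):
    """Nearest-centroid classifier: always returns *some* known label."""
    d = np.linalg.norm(centroids - x, axis=1)
    return labels[int(np.argmin(d))], float(d.min())

def classify_with_novelty(x, threshold=2.0):
    """Same classifier, but flags inputs far from all training data."""
    label, dist = classify(x)
    return "EDGE CASE: defer to human" if dist > threshold else label

# An out-of-distribution input, e.g. a jet parked on the tarmac.
jet_on_tarmac = np.array([12.0, -7.0])

label, dist = classify(jet_on_tarmac)
print(f"naive classifier: {label} (distance {dist:.1f})")  # guesses anyway
print("with novelty check:", classify_with_novelty(jet_on_tarmac))
```

The point mirrors Marcus's: good coverage of routine cases says nothing about behavior on inputs the training data never anticipated.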
Marcus says the first time he stressed the major challenge edge cases pose for autonomous driving was in a 2016 interview: "At the time I was tired of the hype and finally voiced this view. Re-reading the transcript now, I think it still holds."
The technological progress we see today is largely driven by large-scale brute-force techniques, such as the chess supercomputer Deep Blue and DeepMind's Atari-playing systems. These advances have made people extremely excited. But if you're talking about robots for the home or robots driving down the street, the excitement isn't as high.
Generally speaking, self-driving cars perform well under normal conditions: they drive safely on sunny days. Put them in complex environments such as snow or rain, however, and their driving degrades badly. American journalist Steven Levy once wrote a piece about Google's autonomous-driving effort, which noted that one of Google's major victories in 2015 was getting the system to recognize leaves automatically.
Recognizing leaves is trivial for humans, yet it counts as a major advance for a self-driving car. A human can use common sense to reason about what an object might be and how it got there; a self-driving system merely memorizes patterns and lacks that reasoning, and that is the limitation self-driving cars face.
People have long looked forward to more mature self-driving technology. Just a few days ago, the California Public Utilities Commission approved the self-driving companies Cruise and Waymo to operate 24/7 in San Francisco, giving the two companies far greater leeway to test their cars. When the news broke, many declared that the era of self-driving cars had arrived, later than expected.
In fact, we don't yet have any truly self-driving cars. As the well-known American journalist Cade Metz explained on Marcus's podcast "Humans vs. Machines" a few months ago, every self-driving vehicle on public roads either has a human safety driver on board or a human remotely supervising it to help the vehicle out of trouble.
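The arrangement Metz describes is a human-in-the-loop fallback: the automation handles routine situations, and a human steps in when it gets stuck. Below is a minimal sketch of the pattern; the function names, scenes, confidence scores, and threshold are assumptions for illustration, not any operator's real interface.

```python
# Illustrative human-in-the-loop fallback, not any real operator's system.
# The planner handles routine scenes; anything it is unsure about is
# escalated to a remote human supervisor. Names and values are assumed.
from dataclasses import dataclass

@dataclass
class Decision:
    action: str        # e.g. "proceed", "stop", "pull over"
    confidence: float  # planner's self-reported confidence in [0, 1]

def plan(scene: str) -> Decision:
    """Toy planner: confident on routine scenes, unsure otherwise."""
    routine = {"clear road": "proceed", "red light": "stop"}
    if scene in routine:
        return Decision(routine[scene], confidence=0.98)
    return Decision("stop", confidence=0.30)  # unknown scene: low confidence

def ask_remote_supervisor(scene: str) -> str:
    """Stand-in for the remote human; in reality this is a person."""
    print(f"[remote supervisor] reviewing: {scene}")
    return "pull over"

def drive_step(scene: str, threshold: float = 0.9) -> str:
    decision = plan(scene)
    if decision.confidence < threshold:
        return ask_remote_supervisor(scene)  # human takes over
    return decision.action

for scene in ["clear road", "red light", "jet parked on the road"]:
    print(scene, "->", drive_step(scene))
```

The design choice worth noticing is that the human is load-bearing: remove the supervisor and the system has no safe answer for the unknown scene, which is exactly the situation described next.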
Now, new edge cases are emerging in autonomous driving, such as a Tesla crashing into a parked jet.
Marcus said that no matter how much data these systems are trained on, new situations will always arise.
Just recently, ten self-driving cars lost contact with their mission control center. Without its supervision, the cars got lost, stopped in the middle of the street, and caused a string of other incidents.
The field of autonomous driving is still in flux, which is why many researchers, Marcus included, are baffled by the California Public Utilities Commission's decision.
It would be crazy to test self-driving cars anywhere, at any time, without a rigorous, carefully vetted way of handling edge cases. And this applies not just to self-driving cars but to any field built on machine learning.
Edge cases are everywhere, and anyone who thinks it's all easy to fix is kidding themselves.
We need tighter oversight, and if we don't get it, the coming years may bring major accidents involving driverless cars, automated doctors, general-purpose virtual assistants, home robots, and more.
Marcus closed the article by noting that he finished it on an airplane equipped with an autopilot. The autopilot ran throughout the 9-hour flight, but humans were involved the entire time: a human-in-the-loop arrangement. In the end, Marcus does not expect fully autonomous planes, and he does not regard the quasi-autonomous cars approved so far as truly self-driving either.