Saleswhale Blog

The race to build the first fully autonomous car

Written by Hari Ayyappan | February 12, 2019

Everyone talks about artificial intelligence (AI) these days. AI promises to revolutionize the way we work and live. Yet it threatens to displace human workers. Some worry that AI will get too smart for its own good (and our own). Others look forward to the new freedoms and opportunities that automation will unleash.

How to make sense of everything?

Who should we believe?

Above all, what do these developments mean for ordinary folks like us?

I decided to speak to Max Lorenz, our software engineer who works on our AI sales assistant. Max first got interested in AI during his university days. After graduation, he sought companies where he could apply his AI knowledge, and Saleswhale was a perfect fit. So, he moved from Hamburg, Germany, to join us in Singapore. Today, he trains our AI on email data, so that it can have more meaningful email conversations with our clients’ leads.

Max at work

There are many hot AI developments in various fields and industries. But we've decided to focus on just one: self-driving cars.

All views expressed are my own and Max’s. Here’s what we discussed:

  1. Is anyone leading the race to build self-driving cars?
  2. When will a self-driving car be able to navigate the streets of Marrakesh without a scratch?
  3. How will self-driving cars change our lives?
  4. How “terrified” should we be about an autonomous future?

Is anyone leading the race to build self-driving cars?

YY: Who are the main players in the race to build the first full-fledged self-driving car? Also, who’s leading?

Max: There are two main players: car manufacturers and tech companies. Nearly every major car manufacturer is investing in self-driving cars, because none of them wants to miss out on this immense opportunity. Their edge is their long history of building cars. But the tech companies, especially giants like Google, have the advantage when it comes to software development.

It’s tough to say who’s leading. There are so many experiments happening, with varying degrees of success. Many of these experiments also happen behind the scenes.

YY: Hmmm, no clear leader, despite the huge investments made in this field over the years?

Max: It is incredibly hard to build a self-driving car that people are comfortable riding in! There are so many nuances to get right, and so many hurdles remain. Still, everyone involved continues to pour plenty of resources into research and development, because they recognize the tremendous value of a fully autonomous car.

By the way, there are six levels of driving automation, ranging from level zero to level five. Manual cars stand at level zero. Level five is the ultimate goal: full automation, no human intervention needed at all.

At present, many self-driving cars are around level two. At this stage, the driver needs to monitor the AI as it drives and prepare to intervene when necessary.

Going from level two to level three is the biggest and most challenging gap to cross. At level three, the driver can safely relax while the car drives, but must still be prepared to intervene if necessary. In other words, the main responsibility for monitoring the environment shifts from the human to the car. 
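To make the taxonomy easier to keep straight, here is a minimal illustrative sketch in Python. The one-line descriptions are paraphrased from the SAE levels rather than official wording, and the helper function simply encodes Max’s point that up to level two the human still has to watch the road.

```python
# Minimal sketch of the six SAE driving-automation levels (descriptions paraphrased).
SAE_LEVELS = {
    0: "No automation: the human does all the driving",
    1: "Driver assistance: e.g. adaptive cruise control OR lane keeping",
    2: "Partial automation: the car steers and accelerates, but the human must monitor at all times",
    3: "Conditional automation: the car monitors the environment, the human takes over on request",
    4: "High automation: no human needed within a limited area or set of conditions",
    5: "Full automation: no human intervention needed anywhere",
}

def human_must_monitor(level: int) -> bool:
    """At level two and below, responsibility for watching the road stays with the human."""
    return level <= 2

for level, description in SAE_LEVELS.items():
    print(f"Level {level}: {description} (human must monitor: {human_must_monitor(level)})")
```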

How Tesla’s Autopilot works. The car can change speed and look out for blind spots, but the driver still has to keep their hands on the steering wheel.

YY: Why is it so challenging to build self-driving cars?

Max: AI needs data to build its intelligence. LOADS of clean, precise, and usable data.

So, where’s all that data coming from, and what happens to it?

As the self-driving car moves, it collects a lot of data from its surroundings using various sensors: radar, cameras, and LIDAR (Light Detection and Ranging).

Screenshot of what a self-driving car by Waymo “sees” as it drives around. This video was built using footage and real-time data from an actual trip on city streets.

But you can’t feed the AI raw data right away. You need to “clean” the data first. This means eliminating or adjusting errors and irregularities in your data set. It’s tedious, hard work, but it must be done.

You’ve also got to annotate your data. This is the process of labelling your data, so that the AI can recognize specific objects like humans, cars, and trees. The AI can then use the annotated data to learn where to drive and what to avoid. It will also learn to recognize similar patterns when it encounters new data. Annotating every pixel of an image with an object class is tedious work, so it is often done using automation.
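To make annotation a little more concrete, here is a toy sketch of how labelled training frames could be represented. The file paths, object classes, and bounding boxes are invented for illustration; they do not come from any real dataset or tool.

```python
# Hypothetical sketch: representing annotated camera frames for training a perception model.
from dataclasses import dataclass

@dataclass
class BoundingBox:
    label: str     # object class, e.g. "human", "car", "tree"
    x: int         # top-left corner of the box, in pixels
    y: int
    width: int
    height: int

@dataclass
class AnnotatedFrame:
    image_path: str
    boxes: list    # list of BoundingBox objects found in this frame

# Two hand-labelled examples the model could learn from (paths and values are made up).
training_data = [
    AnnotatedFrame("frames/frame_0001.jpg", [
        BoundingBox("human", 120, 80, 40, 110),
        BoundingBox("car", 300, 60, 180, 90),
    ]),
    AnnotatedFrame("frames/frame_0002.jpg", [
        BoundingBox("tree", 10, 20, 60, 200),
    ]),
]

# A simple sanity check before training: count how many examples exist per class.
counts = {}
for frame in training_data:
    for box in frame.boxes:
        counts[box.label] = counts.get(box.label, 0) + 1
print(counts)  # {'human': 1, 'car': 1, 'tree': 1}
```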

Once you’ve trained the AI with data, you’ve got to put it back on the road so that it can apply what it learned. Self-driving cars normally drive on test courses first, or undergo simulated test drives. After passing the test drives, the cars can drive on actual roads — but only roads approved by governments!

As the car drives around, the AI is constantly making assessments about the objects it “sees” and how to react to them. Can the AI identify a human when it encounters one? More importantly, does it know how to react to the human? This is so challenging, because all kinds of unexpected things can happen on the road.

When will a self-driving car be able to navigate the streets of Marrakesh without a scratch?

Traffic in Marrakesh, Morocco. Image by Ying Yi.

YY: Oh, definitely! I visited Marrakesh, Morocco, last year and the traffic there was crazy. People jaywalked. Cars ignored traffic signs. Pedestrians shared lanes with motorcycles, push carts, and donkeys. According to a local, the only traffic rule that people adhere to is “avoid hitting anyone or anything”.

I’d be impressed if a self-driving car could make its way through such chaotic road conditions without a scratch!

Max: Haha that’ll be crazy! Most of the current self-driving cars can do OK on the streets of some American cities. But in other parts of the world like Marrakesh, where the traffic is much more unpredictable and messy? Nearly impossible!

YY: Given enough time and experience on the road, I suppose a self-driving car could develop the kind of “intuition” about navigating safely that the Marrakesh locals have.

Max: Sure, we need to put more self-driving cars on the road. Expose them to all kinds of road conditions. Then they can learn from a wider and deeper set of data.

We can’t expect self-driving cars to be infallible and cause zero deaths. But the current technology is still far from being reliable and safe enough for the public to accept.

One major problem is that AI often falters when it comes to unexpected scenarios. It can react to things that it has “seen” before. But if, say, an elephant suddenly crosses the street, the AI gets confused and messes up.

YY: Tell me more about some of these screw-ups!

Max: Whoa, where do I begin?

One of the most common mistakes is wrong detection. You know those speed limit stickers at the back of large vehicles? In some cases, AI has mistaken those stickers for highway speed limit signs, causing the vehicle to reduce speed. Other self-driving cars have mistaken plastic bags for rocks and stopped.

It’s little things like these that can confuse AI. So, its training data needs to be super comprehensive.

YY: Oh no! Anything else?

Max: Yeah. Confusing traffic conditions. GPS malfunctions. Bad weather. Unfamiliar terrain.

YY: Then again, we’re not looking for a 100% success rate, are we?

Max: Right. Self-driving cars are not perfect, but they should be less deadly than human-driven cars. Too many things can go wrong when humans drive. AI, on the other hand, does not text and drive. AI does not drink and drive. And AI is not swayed by emotions.

Humans can also be slow to react to danger. An AI, on the other hand, can assess multiple scenarios in a split second and pick the course of action that would cause the least damage.

By the way, for every mistake a self-driving car makes, companies collect a lot of data about what happened and why. Then they push a software update, so that the entire fleet of self-driving cars learns from that one car’s mistake. So, over time, fatalities caused by self-driving cars should decrease.

Mercedes-Benz doing an automated test drive in Shanghai, China. The test aims to assess driving behavior in the face of extremely heavy traffic and infrastructure peculiarities.

How will self-driving cars change our lives?

YY: Let’s say that we succeed in developing safe and reliable fully autonomous cars. What are some of the exciting possibilities we can look forward to?

Max: Self-driving car subscription services! Like ride-hailing services, but without human drivers. Just fleets of self-driving cars.

Think about what it means to no longer have to own a car. With fewer cars on the road, traffic jams will be less of a problem. There would also be no need to clear so much land for parking spaces.

YY: This will have major implications for urban planning!

Max: Oh yes. The roads would look very different.

For one thing, there would be no need for any traffic signs. Fully autonomous cars can communicate with each other as they drive. So, they can adjust their speeds to avoid hitting each other, and they know how to give way to each other at intersections. Give them a network of unmarked roads and they’ll make their way around just fine.

YY: What about the impact on industries beyond transport and urban planning?

Max: The media and entertainment industries stand to gain. People sitting inside self-driving cars will have loads of free time, right? That’s a good opportunity to keep people occupied with music, TV, fun ...

YY: Just a thought … will people really have more free time? Bosses might see commuting time as time that could be spent working. Or, as a dear colleague of ours would have put it, “if there is time to commute, there is time to work!”

Max: I won’t be surprised if that happens …

(Laughs)

One thing is for sure: some businesses will try to monetize the newfound freedoms made possible by self-driving cars!

How “terrified” should we be about an autonomous future?

YY: We’ve talked about some of the possible “winners” of the race to build self-driving cars, like the media and entertainment companies. But what about the “losers”? Who will they be?

Max: It’s tough to say. The advent of self-driving cars will cause a lot of changes, but these changes will happen at a slow and gradual pace, rather than triggering huge immediate losses.

Inevitably, there will be people who lose their jobs to self-driving cars. Drivers are the most obvious. Any kind of repetitive work that does not require creativity will eventually be replaced by AI.

YY: Manual and laborious work too...

Max: Yeah. Self-driving trucks would be a huge boon. Truck drivers have it tough: the pay is low, the hours are long, and they are often away from their loved ones. Self-driving trucks will free them from that kind of hard life.

I also believe that new jobs will be created along with the rise of self-driving cars, just as the decline of horse-drawn carriages opened up a lot of opportunities in the car industry. These new opportunities may even pay more than the jobs that were phased out.

A self-driving truck by Embark, a San Francisco-based startup, travelled 2,400 miles from California to Florida without relying on a human driver.

YY: Is there anything we should be, well, terrified about, when it comes to a future with fully autonomous cars?

Max: I don’t think there is anything to be scared about.

Let’s revisit the issue of fatalities caused by self-driving cars. Sure, self-driving cars have made mistakes and caused some deaths. But as the technology improves, the mistakes and fatalities should decline over time.

In many ways, AI makes better drivers than humans. Did you know that in America, human error is responsible for 94 percent of the 33,000 traffic fatalities each year? Self-driving cars aren’t affected by alcohol or emotions, so in theory, they could eliminate those mistakes and save an estimated 31,000 lives a year.

YY: It’s probably a perception issue too. I read this article about how more and more people are worried about the safety of self-driving cars. Part of the reason is the widespread media coverage of accidents involving self-driving cars. The article concluded that to overcome negative perceptions, governments could regulate autonomous vehicle development and use.

Max: Yes. The media often focuses on things that are unusual, rather than commonplace. In the case of self-driving cars, the technology is new so every failure will be scrutinized, even if the number of accidents isn’t huge.

But what about the many accidents that involve regular cars? According to the article, five million crashes happen every year in the USA. If the media were to cover every one of them, there would be thousands of car crash reports every day!

YY: OK, how about this scenario. There has been some debate about how your self-driving car might be forced to kill you, in order to save more lives or prevent a serious accident. Whoa! Is this something that we should worry about?

Max: Hmmm, what would an AI do in a situation like that ... the actions it takes ultimately depend on the programmers who developed it, and on the underlying principles they built into the AI.

I think for commercially available cars, the AI is unlikely to sacrifice the driver and passengers. Who would willingly buy a car that might end up having to kill you?! What I reckon would happen is that the AI would hit the brakes and do everything it can to minimize damage while saving the driver.

YY: I wonder if it is fair, though, to expect AI to solve such moral dilemmas, when humans can’t even agree on the answers?

Max: There are no easy answers to questions like these. Even human drivers would find it hard to make the “right” decision in a split-second.

But we can’t avoid dealing with moral or ethical issues. Especially in industries where life-and-death situations are unavoidable, like healthcare.

It’s impossible and impractical for AI to account for and assess every little scenario that could happen. In the case of self-driving cars, should it save the life of an adult or a child? How about one child versus a group of them? Does the age of the child matter?

No, what we should do is give the AI we build some guiding principles, like “do no harm” or “save the driver”. That way, if the AI ever encounters an impending disaster, it can quickly simulate the possible outcomes and then decide what to do based on those principles.
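As a thought experiment only (not how any production system actually works), here is a toy sketch of “simulate the possible outcomes, then pick the one that does the least harm”. The candidate actions and harm scores are completely made up.

```python
# Toy illustration of "simulate outcomes, then choose the least harmful one".
def least_harmful_action(candidate_outcomes):
    """candidate_outcomes maps an action name to an estimated harm score (lower is better)."""
    return min(candidate_outcomes, key=candidate_outcomes.get)

# Hypothetical estimates produced by the car's simulation of an impending collision.
outcomes = {
    "brake_hard_and_stay_in_lane": 2.0,
    "swerve_left_into_barrier": 5.5,
    "swerve_right_onto_sidewalk": 9.0,
}

print(least_harmful_action(outcomes))  # -> brake_hard_and_stay_in_lane
```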

YY: So many things to think about when building self-driving cars…

Max: It’s a good debate to be had. We can’t just look at self-driving cars from a business or scientific point of view. We’ve got to hear from other fields as well, like the social sciences. As we move slowly but surely towards an autonomous future, we need ethics to make sure that humanity benefits from all these technological changes.

Originally published on 12 February 2019, updated on 18 December 2019

Image Credit: Cha Pornea