What happens when self-driving cars—or other artificial intelligence—interact with human beings? How do we know what the robot will do? How does the robot respond to us?
We discussed those questions, the current state of driverless cars, and more with Michael Clamann, an adjunct lecturer at Duke Science & Society.
Tell us about your research.
The main type of research I do has to do with people’s relationship with technology. That can be in a lot of different areas, but right now my research is in how driverless cars are going to interact with pedestrians and bicyclists.
If you are going to cross the street, we’re taught to try to make eye contact with the driver. Maybe they’ll look at you, maybe they’ll wave you in front of them. But with a driverless car, there might be someone sitting in that seat, but they could be reading, watching Netflix, or whatever. They don’t need to be paying attention to the road in front of them. So there’s got to be a way to replace that interaction. I’m looking into ways that we might do that.
What kinds of options are you looking at?
People are still trying to figure out what the car needs to communicate, and there's not a whole lot of agreement about that yet. But what people do agree on is that the most important thing to pedestrians is that the car sees them, that the car knows they're there.
The direction that I’m taking is a simple approach to communicating with the pedestrians. Putting a light, or something like that, on the front of the car, the equivalent of a brake light. Again, there are still a lot of challenges with that because on the front of the car you’re competing with other lights. There are also rules and regulations about what colors you’re allowed to use, and where you can place it. So, all this has to go through a really complex approval process before we can actually put this on a car.
The challenge comes down to being able to communicate to the right number of people. People are doing this research looking at one car, one pedestrian. But there’s going to be situations where there could be pedestrians crossing from both sides of the road. There could be four-lane highways, where there are cars coming from both directions in multiple lanes. So, anything that people come up with is going to have to apply to all those circumstances.
Where is the technology for self-driving cars at right now? How long will it be until we’re all using them every day?
Despite what you might read, the technology is just not ready yet. It’s going to be a couple of decades before you can actually walk into a car dealership and buy one. What you are going to see a lot sooner are slow-moving shuttles in cities, like public transportation, where you might get six people in one of these shuttles, traveling ten miles an hour through a pedestrian mall or something like that.
When you see these predictions online that say, Oh, we’re going to have an autonomous car in the next three years, usually it means they’re going to be releasing an autonomous car, but it’s going to have limited functionality. It’s only going to be able to go in a small number of places or maybe only go up to a certain speed. It isn’t that you’ll be able to walk into a dealership and buy one of these things. It’s some kind of transportation system that’s going to be really limited.
Once they do become common, how will they shape our society?
Oh, that’s a great question. It’s really hard to answer, because there are going to be two phases to autonomous cars going out there.
In the first phase, we’re essentially going to see autonomous cars replacing what we have now. Right now, one of the ways that autonomous vehicles can see is something called LIDAR, which bounces lasers off objects that reflect them back to the car and tell the car how far away things are. Those LIDARs are like $75,000 apiece. That’s going to make these things, at least initially, way too expensive for people to afford. More than likely, you are going to see them in fleets like Lyft and Uber and things like that. That initially is not going to be a huge change over what we have now.
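The distance measurement LIDAR relies on comes down to simple time-of-flight arithmetic: a laser pulse travels out and back, so the one-way distance is half the round trip. A minimal sketch of that calculation (the function name is just for illustration, not any vendor's actual interface):

```python
# Time-of-flight distance calculation underlying LIDAR ranging.
# Illustrative only -- real units fire millions of pulses per second
# and combine the returns into a 3D point cloud.

C = 299_792_458.0  # speed of light in meters per second

def distance_from_round_trip(round_trip_seconds: float) -> float:
    """Distance to an object given the pulse's round-trip time.

    The pulse travels out and back, so the one-way distance is
    half the total path: d = c * t / 2.
    """
    return C * round_trip_seconds / 2.0

# A pulse returning after ~200 nanoseconds indicates an object
# roughly 30 meters away.
print(round(distance_from_round_trip(200e-9), 1))  # → 30.0
```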
Once we start figuring out how they can actually affect us we’re going to see a second evolution. Think of it the same way cell phones developed. Initially, back in the late ’90s, when people first started to get mobile phones, it was just another way to get in touch with people and there wasn’t really that much to it. Flash forward to 2007, 2008, when we started to see smartphones, we could see these things completely change our lives with the extra technology that went into them. Now we’re basically walking around with computers in our pockets.
We’re going to see that same evolution with autonomous vehicles. First it’s going to be pretty much what we’ve been doing now, but with the computer in the driver’s seat. Fast forward twelve, fifteen years, when people start to catch on to the possibilities, we’re going to see a shift. I don’t know what that shift is going to be, but I am personally pretty excited to see what it is.
If students are interested in this field, what kinds of things can they look into to learn more?
There are a lot of different ways, depending on what they’re interested in.
Students who are interested in psychology can be learning about a field called human factors, which is the overlap between psychology and engineering—understanding how people react when they are working with different technologies.
Any type of robotics or engineering is a good way to get involved. So any student who participates in a FIRST Robotics club is going to be building the background they need for this.
Students who are interested in coding or any type of computer science or computer programming—that’s the background that you need to develop the artificial intelligence algorithms that these things use to see and think.