Late at night, as I drive home after teaching Cognitive IoT at Stanford Continuing Studies, my loyal companion on the road is a fleet of Google’s self-driving cars, now known as Waymo. They drive defensively at 25 mph, with a human at the ready to take control if the car disengages into manual mode. What strikes me most about the driverless car is its silence and its seemingly oblivious disconnect from the social fabric of communication on the road.
Driverless car pilots are focused on such things as computer vision and deep learning, teaching the car the necessary “skills” to comprehend its surroundings, and to create algorithms that enable it to make better driving decisions on the road. The cars are learning the rules of the road, enabled by object detection and classification in order to safely navigate a crowded roadway comprising human-driven cars, pedestrians, road signs, traffic lights, and the occasional puddle or pothole. As for the autonomous car’s ability to react to humans inside the car, that’s coming, too.
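The decision layer that sits on top of that perception stack can be pictured as a simple rule over classified detections. Here is a minimal, hypothetical sketch in Python; the object classes, distances, and thresholds are illustrative assumptions, not any manufacturer’s actual pipeline:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str          # e.g. "pedestrian", "car", "traffic_light_red", "pothole"
    distance_m: float   # estimated distance ahead of the vehicle

def plan_action(detections: list[Detection], speed_mps: float) -> str:
    """Map classified objects to a coarse driving action: a toy illustration
    of using object detection and classification to navigate a crowded road."""
    for d in sorted(detections, key=lambda d: d.distance_m):
        if d.label == "pedestrian" and d.distance_m < 30:
            return "stop"                       # always yield to people
        if d.label == "traffic_light_red":
            return "stop"
        if d.label in ("car", "pothole") and d.distance_m < 3 * speed_mps:
            return "slow_down"                  # keep roughly a 3-second gap
    return "continue"

# Example: a pedestrian 25 m ahead while travelling at 11 m/s (~25 mph)
print(plan_action([Detection("pedestrian", 25.0), Detection("car", 60.0)], 11.0))
```

The point of the sketch is what it leaves out: nothing in these rules captures a honk, a nod, or eye contact, which is exactly the gap the next section explores.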
Baffling a Driverless Car with Honks, Nods, and Eye Contact
Recently, US Representative Gregg Harper asked Ford at a US House subcommittee hearing what a self-driving car would do if he honked at it. The answer is that the car will ignore it. Car manufacturers have not thought about how a car should react to honking or any other social communication—the kinds of things that make up the norm of our driving lives. There are so many nuances to honking that it seems nearly impossible for a car to apply machine learning to understand the context of a honk and react to it properly. A driver might honk in excitement when they spot a friend in a neighboring car, or in road rage at a car cutting them off. A whole group of cars might honk at a crowd rallying for a cause they want to support. “Honk if you love driverless cars!”
When we suddenly encounter a jaywalking pedestrian, we human drivers perform more than mere object recognition. When another human crosses our path in an unexpected place, we notice them, read their expressions, interpret their intent, and then decide whether to stop or slow down, perhaps even with empathy! In other words, we make subjective judgments. This is a class of recognition we cannot yet teach an autonomous car, however much progress we might be making in affective computing—that area of research where we teach machines to recognize and simulate human emotions. I might let a person cross in the middle of the road if I suspect they are carrying a heavy shopping bag from an apartment building that sits far from the nearest legal pedestrian crossing. Another driver might make the same interpretation but choose not to stop, because they do not want to encourage the jaywalker’s unsafe behavior. So beyond the contextual interpretation, our cognitive reasoning influences our decision making as well.
Human drivers communicate with each other and with bicyclists on the road through an extensive and subtle system of nods, eye contact, and hand waves. Yield signs are a great example of where we not only follow the rules of the road to give right of way, but we also keep ourselves safe by signaling a car entering the highway to merge in front of us, or by nodding when a driver lets us turn left. The classic case where the self-driving car struggles is determining how to merge when it comes upon an unexpected construction zone. The bottom line is that there is a great deal of unwritten communication that transpires between human drivers by which we navigate such potentially chaotic situations. Can we teach a driverless car the delicate dance we perform when negotiating ending lanes and construction signs, as well as the nonverbal permissions we take and grant as we get back into single file with other cars?
Machine Communication with Other “Things”
All the foregoing notwithstanding, today’s driverless cars are smart enough to avoid the accidents caused by human carelessness, which leads to some 35,000 traffic fatalities each year in the US alone. They can predict the actions of cars ahead of them at speeds incomprehensible to humans, as was recently demonstrated by Tesla. The Tesla predicted, based on its sensor data, that the cars ahead of it were about to stop, saving its driver from an accident. Cars can quietly communicate with roads and traffic lights through an exchange of sensor data, and can predict and react to changing road conditions with more alertness than humans can ever hope for. Humans can’t see around corners, but driverless cars can. The many sensors that enable driverless cars can save us in sudden foggy conditions or when a thin sheet of ice, invisible to the human eye, is forming on the road as we drive. And equally interesting, those sensors may be borne by the car’s passengers (and occasional drivers), as well.
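To make the “predict that the cars ahead are about to stop” idea concrete, here is a minimal sketch of the kind of time-to-collision arithmetic a forward-looking sensor stack could run on successive range readings. The sampling interval, threshold, and function name are assumptions for illustration, not Tesla’s implementation:

```python
def time_to_collision(range_m: float, prev_range_m: float, dt_s: float) -> float:
    """Estimate seconds until impact from two successive range readings to the
    vehicle ahead (e.g. from radar). Returns infinity if the gap is opening."""
    closing_speed = (prev_range_m - range_m) / dt_s   # m/s, positive when closing
    if closing_speed <= 0:
        return float("inf")
    return range_m / closing_speed

# Radar samples 50 ms apart: the gap shrinks from 40.0 m to 39.2 m,
# a closing speed of 16 m/s, so impact in ~2.45 s -- time to warn and brake.
ttc = time_to_collision(39.2, 40.0, 0.05)
if ttc < 3.0:
    print(f"forward collision warning: {ttc:.2f} s to impact")
```

The arithmetic is trivial; the advantage is that the machine runs it on every sensor frame, far faster and more consistently than a human ever could.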
Human Wearables Become Our Communication Proxy
Renault has recently partnered with Sensoria Fitness of Seattle to sync its biosensor-enabled smart socks and other garments with Renault’s Sport Motor app, helping race car drivers track their heart rate during a race and in training. They’re also using the technology to learn more about how a race car driver moves their feet on the pedals. This makes the wearable a viable and fascinating proxy for communicating with the car through IoT sensor signals. We’re seeing this kind of technology being integrated into the car as well, with biosensors—complete with haptic feedback—embedded into the car seats.
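As a rough sketch of how such a wearable-to-car proxy might look in code, biosensor samples can be smoothed and packaged as a compact signal a car-side app could consume. The field names, window size, and stress threshold here are hypothetical, not Sensoria’s or Renault’s actual interface:

```python
import json
from collections import deque

class HeartRateProxy:
    """Smooth raw heart-rate samples from a smart garment and emit
    a small JSON message that a car-side app could subscribe to."""

    def __init__(self, window: int = 5, stress_bpm: int = 160):
        self.samples = deque(maxlen=window)   # rolling window of recent readings
        self.stress_bpm = stress_bpm          # hypothetical "driver stressed" threshold

    def add_sample(self, bpm: int) -> str:
        self.samples.append(bpm)
        avg = sum(self.samples) / len(self.samples)
        return json.dumps({
            "heart_rate_bpm": round(avg, 1),
            "driver_stressed": avg >= self.stress_bpm,
        })

proxy = HeartRateProxy()
for bpm in (128, 142, 155, 171, 176):        # readings climbing during a hot lap
    message = proxy.add_sample(bpm)
print(message)  # {"heart_rate_bpm": 154.4, "driver_stressed": false}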
Transforming All Road Stakeholders into Cognitive IoT
What automotive AI might lack in human cognition, it makes up for in agile decision-making. We cannot expect real-time conscious communication between an autonomous vehicle and a human, at least not at high speeds for decisions that can affect safety. This has convinced me that all the elements involved in the social fabric of our roadway communication need to be enhanced by cognitive IoT. The road, traffic lights, parking spots, wearables, and the car’s AI all have to process the huge volume of real-time sensor and intent data, and improve their capabilities through machine learning, to enable safe and effective communication with the driverless car. The road can alert the car about a pothole or icy conditions. The parking spot can inform the car about space availability—and even take payment using blockchain. Traffic lights can signal the car to go on through an empty intersection. Humans can communicate using wearables that predict their intent with AI models. Taken together, these capabilities enable a symphony of cognitive IoT devices empowered by facial recognition, the tracking of driver behaviors, and a coming together of data and sensor sources providing information about road and traffic conditions—all feeding one another in symbiotic concert. Indeed, the driverless car is the catalyst set to accelerate the realization of cognition in city infrastructure, wearables, and other cars.
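A minimal sketch of what that symphony could look like from the car’s side: each roadside “thing” publishes a small typed message, and the car dispatches on the type. The message schema and handler names below are assumptions for illustration only, not any standardized vehicle-to-infrastructure protocol:

```python
def handle_roadside_message(msg: dict) -> str:
    """Dispatch a vehicle-to-infrastructure message to a driving reaction.
    Message types mirror the examples above: potholes, ice, parking
    availability, and a green light on an empty intersection."""
    handlers = {
        "pothole":      lambda m: f"slow to {m['advisory_speed_kph']} km/h",
        "ice":          lambda m: "increase following distance, brake gently",
        "parking_spot": lambda m: f"reserve spot {m['spot_id']} and arrange payment",
        "green_light":  lambda m: "proceed through the empty intersection",
    }
    handler = handlers.get(msg["type"])
    return handler(msg) if handler else "ignore unknown message"

# Example: the road reports a pothole ahead with an advisory speed
print(handle_roadside_message({"type": "pothole", "advisory_speed_kph": 30}))
```

Each of these messages is trivial on its own; the cognitive part is the learning loop that decides, from millions of such signals, which ones matter right now.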
On our path to this driverless world, sharing the road with self-driving cars, my “drive” home is only going to get better with the likes of Waymo and its fellow travelers, all sharpening their AI, hungry to learn our communication methods. The “silence” of driverless cars will be broken as we progress from fragmented human communications not understood by cars to intelligent, nuanced cognitive communication from the many things on the road.
About the Author
Sudha Jamthe, CEO of IoTDisruptions.com, globally recognized technology futurist, and keynote speaker, is the author of “2030 The Driverless World: Business Transformation from Autonomous Vehicles” and three IoT books. She brings twenty years of digital transformation experience in building organizations, shaping new technology ecosystems, and mentoring leaders at eBay, PayPal, Harcourt, and GTE. She has an MBA from Boston University and teaches IoT Business at Stanford Continuing Studies. Sudha also aspires to bring cognitive IoT and autonomous vehicles together.