What the heck is he doing?
Autonomous vehicles are excellent at obeying the rules of the road: they know when to slow down, they follow traffic lights, and they can tell highways from countryside roads. But there is one thing they can't grasp just yet: not everyone on the road behaves like them.
More precisely, self-driving cars don't account for the fact that human drivers allow themselves to bend the rules of the road. As a result, autonomous vehicles struggle to predict human behavior.
To overcome this problem, engineers are relying on vehicle-to-vehicle (V2V) communication. Much like airplanes avoid each other in the air, cars on the road will carry equipment that broadcasts their position, speed, and direction to the vehicles around them. A self-driving car will then be able to react to another vehicle's behavior by analyzing these indicators. Yet the technique is difficult to roll out, because it only pays off once a large share of vehicles on the road have V2V equipment.
Equipping vehicles with V2V communication systems will take time and resources before we see a positive effect on traffic safety. Big brands have united to improve communication between vehicles and are already working on projects that will take us there.
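To make the idea a touch more concrete, here is a minimal sketch of what a V2V-style message and a simple reaction check could look like. The message fields (position, speed, direction) come straight from the description above; the class name, the time-to-closest-approach check, and the 3-second threshold are illustrative assumptions, not part of any real V2V standard.

```python
from dataclasses import dataclass
import math

@dataclass
class V2VMessage:
    # Fields mentioned above: the sender's position, speed, and direction
    sender_id: str
    x: float          # position, meters (east)
    y: float          # position, meters (north)
    speed: float      # meters per second
    heading: float    # radians, 0 = due east

def time_to_closest_approach(me: V2VMessage, other: V2VMessage) -> float:
    """Rough time (seconds) until the two vehicles are closest,
    assuming both keep their current speed and heading."""
    dx, dy = other.x - me.x, other.y - me.y
    dvx = other.speed * math.cos(other.heading) - me.speed * math.cos(me.heading)
    dvy = other.speed * math.sin(other.heading) - me.speed * math.sin(me.heading)
    rel_speed_sq = dvx * dvx + dvy * dvy
    if rel_speed_sq < 1e-9:
        return float("inf")   # paths are not converging
    return max(0.0, -(dx * dvx + dy * dvy) / rel_speed_sq)

# Hypothetical usage: slow down if another car will be closest within 3 seconds
me = V2VMessage("AV-1", 0, 0, 15, 0)
other = V2VMessage("CAR-7", 60, 2, 20, math.pi)  # oncoming vehicle
if time_to_closest_approach(me, other) < 3.0:
    print("Reduce speed and increase following distance")
```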
Where the heck am I going?!
Another type of unpredictable event that may “scare” driverless cars is adverse weather.
When the weather is harsh, drivers find their way by following the lane markings. In heavy rain, fog, or snow, that becomes hard. For autonomous vehicles, the situation is no different: their sensors and cameras can't detect lane markings covered in snow.
What do we do? Do we ban autonomous vehicles in places like Chicago during the winter? On days of heavy rain, a puddle on the road might look like a simple black patch, yet it can hide a deep pothole. Autonomous vehicles still can't reliably interpret dark areas on the road, and roads covered in snow could be a total nightmare.
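As a rough sketch of how a car might cope when its sensors lose confidence, here is one way a fallback policy could look. The function name, the confidence thresholds, and the behavior labels are invented for illustration; real systems will differ.

```python
# Hypothetical illustration: if lane-detection confidence drops
# (snow-covered markings, heavy rain, an ambiguous dark patch), the car
# falls back to more conservative behavior instead of guessing.
def choose_behavior(lane_confidence: float, dark_patch_ahead: bool) -> str:
    if dark_patch_ahead:
        # A black patch could be a shadow, a puddle, or a deep pothole.
        return "slow_down_and_steer_around_if_safe"
    if lane_confidence < 0.4:
        # Markings are effectively invisible (e.g. snow cover).
        return "reduce_speed_and_request_driver_takeover"
    if lane_confidence < 0.7:
        return "reduce_speed_and_follow_vehicle_ahead"
    return "normal_lane_keeping"

print(choose_behavior(lane_confidence=0.25, dark_patch_ahead=False))
# -> reduce_speed_and_request_driver_takeover
```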
As it turned out, weather was the primary cause of system failures in tests with self-driving vehicles.
Yet a machine is programmed never to break the rules. And if we do teach autonomous cars to react spontaneously to a situation, how would we know whether the machine is making a judgment call to save a life or simply malfunctioning?
So here comes the dilemma: do we teach a machine to act like a human, or teach humans to accept the irregularities of a machine? Where do the two meet?
Decisions, Decisions, Decisions….
Self-driving vehicles are all great on paper: the latest technologies, a smooth riding experience, the chance for every passenger to simply enjoy the trip. Yet safety is not entirely guaranteed.
Most likely, computers will never have the thinking abilities of a human brain. They can observe and orient themselves in their environment, sure. But all they do is react to predefined situations drawn from a list of probabilities.
Programmers will have to teach autonomous cars to measure, evaluate, and act on a situation in a split second. How do you avoid a pedestrian stepping onto the crosswalk at the last moment, a child jumping into the street after a ball, or a truck ahead braking sharply to avoid a crash?
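To illustrate the “predefined situations from a list of probabilities” point, here is a minimal sketch: the car picks whichever known scenario the perception system currently rates as most probable and runs its preprogrammed response. The scenario names, probabilities, and responses are made up for the example.

```python
# Hypothetical illustration: the car only knows a fixed list of scenarios,
# each with a preprogrammed response, and reacts to whichever one the
# perception system currently rates as most probable.
RESPONSES = {
    "pedestrian_on_crosswalk": "emergency_brake",
    "child_entering_road": "emergency_brake",
    "lead_truck_braking": "brake_and_increase_gap",
    "clear_road": "maintain_speed",
}

def react(scenario_probabilities: dict) -> str:
    # Pick the most probable known scenario; anything unfamiliar falls
    # back to a cautious default, because the car has no response for it.
    scenario = max(scenario_probabilities, key=scenario_probabilities.get)
    return RESPONSES.get(scenario, "slow_down_and_alert_driver")

# Example: perception output a split second before a crosswalk
print(react({"pedestrian_on_crosswalk": 0.7, "clear_road": 0.3}))
# -> emergency_brake
```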
Software engineers will have to improve their algorithms so that cars not only observe the road but also adopt some of the driving tactics human drivers use. Humans, in turn, will need to learn how to drive alongside machines.
And maybe at the end of this learning process machines will become more like humans. Or humans will become a newer version of machines.