In the 2004 science fiction film I, Robot, the police detective hero played by Will Smith is involved in a car crash that leaves his vehicle and another sinking in a river. The other car contains a trapped 12-year-old girl.
With imminent death by drowning confronting both characters, a rescue robot appears and rescues the hero from his doomed car but leaves the girl to die. Why? Because, the robot’s logic tells it, her survival was statistically less likely than the Will Smith character’s.
This ethical dilemma of choice – as a human would see it – underpins much of the movie (set in the year 2035), informing the viewer of the emotional forces driving the hero.
In our real world of 2017, such a dilemma already confronts us with the advent of vehicles – cars and trucks – that can drive semi-autonomously. It is not at all apparent that such vehicles can make any choice, never mind the ‘right’ one, when a situation on the road demands a split-second decision.
Human drivers are confronted with this every day and we don’t get it right much of the time. Should we expect the robots to?
I think choices like this will be a make-or-break issue for self-driving vehicles becoming truly autonomous and entering widespread use at scale (although we’re not there yet). The technology is already here, and it evolves as we – humans and machines – learn more.
Isn’t it true, though, that the history of technological innovation and adoption over the past few centuries is littered with casualties – ones we more or less accept as the price of the benefits that innovation brings to society at large?
It’s worth setting out clear definitions of terms such as ‘self-driving’ and ‘autonomous driving.’ These are IBM’s definitions, as good as any I’ve seen (a rough code sketch of the taxonomy follows the list):
What is “self-driving”?
Automated: Driver must be present
- Partially – Driver monitors automatic functions, cannot perform non-driving tasks.
- Highly – System recognises its limitations and calls driver to take control, if needed. Driver can perform some non-driving tasks.
- Fully – System handles all situations autonomously without monitoring by driver. Driver allowed to perform non-driving tasks.
Autonomous: No driver required
- Limited – Designated areas where vehicles, infrastructure and the environment are controlled.
- Fully – Integrated with other vehicles in normal driving conditions.
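To make the taxonomy concrete, here is a minimal sketch of those levels as a data structure. To be clear, this is my own illustration: the class and field names below are invented for this post, not part of any IBM (or SAE) specification.

```python
# A minimal sketch of the taxonomy above as a Python data structure.
# Class and field names are invented for illustration; they are not
# part of any IBM (or SAE) specification.
from dataclasses import dataclass
from enum import Enum, auto

class Category(Enum):
    AUTOMATED = auto()   # driver must be present
    AUTONOMOUS = auto()  # no driver required

@dataclass(frozen=True)
class DrivingLevel:
    name: str
    category: Category
    driver_must_monitor: bool  # must the driver watch the system continuously?
    non_driving_tasks: str     # "none", "some" or "all"

LEVELS = [
    DrivingLevel("partially automated", Category.AUTOMATED, True, "none"),
    DrivingLevel("highly automated", Category.AUTOMATED, False, "some"),
    DrivingLevel("fully automated", Category.AUTOMATED, False, "all"),
    DrivingLevel("limited autonomous", Category.AUTONOMOUS, False, "all"),
    DrivingLevel("fully autonomous", Category.AUTONOMOUS, False, "all"),
]

# Example: list the levels at which the driver may do non-driving tasks.
for level in LEVELS:
    if level.non_driving_tasks != "none":
        print(level.name)
```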
Set that as the backdrop to any discussion of self-driving and ethical choices, and the overall picture becomes far clearer (or muddier, depending on your point of view).
Yet this is just one example within a much larger picture: artificial intelligence as a whole, and how it is developing to benefit humanity. Ethical choices and dilemmas abound.
And that brings me to a thought-provoking feature in the Guardian on October 2, reporting on a debate in Australia on these very issues. Read it right here…
Should a driverless car swerve to miss a child, knowing it will kill its passenger? Or should it maintain its path and end a younger life?
It’s deeply troubling ethical dilemmas like these that Sandra Peter believes will hinder the mass uptake of driverless cars, possibly beyond our lifetimes.
Peter, the director of Sydney Business Insights, posed the quandary on an episode of ABC’s Q&A devoted to the future, where discussions focused on the ethical complexities and seismic structural shifts brought by technology, artificial intelligence, big data and automation.
“Smart people are trying to figure out how this works,” Peter said.
“We have a project out of MIT that is looking at who should die, basically, in the case of driverless cars,” she said. “A little child runs in front of the car, should the car kill me and drive me into a pole or save the child? Luckily the child pretty much all the time makes it.”
“The old lady, on the other hand, doesn’t always make it. If it’s two cats and the child, it’s a higher likelihood than the two dogs, and so on.”
A similar theme arose in a discussion of artificial intelligence and its ability to surpass human comprehension and control, a theme given new life by reported findings in Google’s powerful AI project known as DeepMind.
Should we treat AI as a serious threat and if so how? @adambspencer Ed Husic & @sandraapeter respond #QandA pic.twitter.com/UvS9L6RJLu
— ABC Q&A (@QandA) October 2, 2017
In February it was reported that DeepMind’s AI agents became more aggressive as a competitive game intensified.
But the biggest risk in the rapid advances of artificial intelligence, Peter said, was not that “they’re coming to get us”. Rather, it was that humans’ inherent biases would be reflected in the AI we designed.
Robots, in this view, would make biased decisions about who goes to jail, who gets a loan or who gets parole. “Those sorts of biases, these algorithms, it’s not of our own making, we don’t train them to be biased, but they’re modelled on the real world,” Peter said.
The conversation also focused on the disruptive nature of technology on existing industry and what skills young Australians need to survive in an increasingly automated world.
The author, ethics advocate and drone expert Catherine Ball said creativity and life experience would be essential in a world where mundane jobs were taken by robots. Such creativity should be balanced by STEM, coding and problem-solving skills.
“The World Economic Forum predicted we will need complex problem-solving skills,” Ball said. “Robots are good at doing the mundane but not good at thinking outside the square or being creative.
.@DrCatherineBall thinks coding is an essential skill for jobs of the future. Ed Husic & @LaundyCraigMP agree with re-skilling #QandA pic.twitter.com/7SAnUZXsXp
— ABC Q&A (@QandA) October 2, 2017
“Keep your experiences and your life experiences broad. Travel, travel, travel. Meet as many different kinds of people as you possibly can.”
The assistant innovation and science minister, Craig Laundy, predicted Australia’s education sector would be radically reshaped as workers moved through jobs with increasing frequency. The notion of reskilling and lifelong learning would grow, he said.
Laundy maintained that jobs in traditional sectors such as mining and agriculture would remain but that roles in aged and disability care would become more important.
He predicted complementary technology would bring prosperity and jobs, contrary to the “doom and gloom” around automation.
“Complementary technology like exoskeletons where humans will be in them enabling – and this comes out of the defence space – enabling them to perform tasks that are [superhuman], above and beyond our natural abilities and the integration of the individual and the machine,” he said. “It’s not just the machine doing everything.”
Ball spoke in similarly optimistic terms about drones, which she said could greatly aid in humanitarian efforts and environmental protection. She described a world in which drones deliver blood at crash scenes, help save the Great Barrier Reef from crown-of-thorns starfish, protect swimmers from sharks, aid police and firefighters, and deliver goods and services. Many of those examples were already occurring, she said.
“There’s even a company in the Rockies that you could pop on your virtual reality headset, fly a drone around the Rockies in real life and land it back on its landing pad and you will have experienced a part of the world you’ll never have experienced before,” Ball said.
The panellists were asked whether technology had made us more alone, despite its capacity to foster interconnectedness.
The shadow digital economy minister, Ed Husic, said technology should not be blamed for the way individuals use it. “It’s people’s decisions about how they use tech and the way in which they relate to each other,” he said.
“That’s at the heart of this. I see the good, the upside of being able to communicate with one person on the other side of the world.
“I came from a migrant family where you had to wait once a month to ring the other side of the planet for 10 minutes and you budgeted that call because it costs so much. That’s all you did. Now you get on Skype, you can do that instantaneously.”
guardian.co.uk © Guardian News & Media Limited 2010
4 responses to “Who should die when a driverless car crashes? Q&A ponders the future”
I think MIT’s ethical dilemmas on who should die are largely false dilemmas – an unfortunate distraction, as there are plenty of other ethical aspects to deal with. The ‘who should die’ scenarios seem to me to be based on an emotional response to the term ‘autonomous’, even though it only really means the disaster-prone driver is no longer in control. I wrote this a while back: https://www.zylstra.org/blog/2015/10/why-false-dilemmas-must-be-killed-to-program-self-driving-cars/
Thanks for the link to your post, Ton. You make a very good case for why the scenario depicted in my post couldn’t happen. And re the MIT report, I missed seeing that at the time.
I don’t disagree with your core argument that autonomous vehicles do not move about in isolation but are connected digitally to a much wider world via sensors and their own intelligence and learning capabilities.
I would argue that this situation shouldn’t result in the ethical dilemmas portrayed or indeed the outcomes, ie, deaths of humans. The key word is “shouldn’t” rather than “wouldn’t”. There is far too much uncertainty surrounding technologies and human behaviours to make concrete statements, in my view.
I believe the state you envision will happen. Eventually. But the road to that desired place will be littered with the casualties of ethical dilemmas.
We learn as we go and the price will be high.
“But the road to that desired place will be littered with the casualties of ethical dilemmas.
We learn as we go and the price will be high.”
Indeed.
I just came across Peter Bihr’s post, where he touches on the German government’s action plan for autonomous cars: “Key points include that autonomous driving is worth promoting because it causes fewer accidents, dictates that damage to property must take precedence over personal injury (aka life has priority), and that in unavoidable accident situations there may not be any discrimination between individuals based on age, gender, etc. It even includes data sovereignty for drivers.” (source: https://medium.com/@peterbihr/getting-our-policies-ready-for-ai-futures-a0160ced447e )
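Two of those key points are concrete enough to sketch in code. Purely as an illustration – everything below is a hypothetical toy of my own, not anything taken from the German action plan – the ‘property over personal injury’ rule becomes a filter over candidate outcomes, and the non-discrimination rule amounts to keeping personal attributes out of the decision inputs altogether.

```python
# A toy sketch of the two rules quoted above. All names are hypothetical;
# this is not code from, or endorsed by, the German action plan.
from dataclasses import dataclass

@dataclass(frozen=True)
class Outcome:
    injures_person: bool
    property_damage: float  # estimated cost; lower is better
    # Deliberately NO fields for age, gender, etc. – per the action plan,
    # personal attributes may not enter the decision at all.

def choose(outcomes: list[Outcome]) -> Outcome:
    # Rule 1: if any option avoids personal injury, consider only those.
    harmless = [o for o in outcomes if not o.injures_person]
    candidates = harmless or outcomes
    # Rule 2: among what remains, minimise property damage.
    return min(candidates, key=lambda o: o.property_damage)

# Example: swerving into a parked car beats continuing toward a pedestrian,
# even though the swerve costs more in property damage.
print(choose([Outcome(True, 0.0), Outcome(False, 20_000.0)]))
```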