Stop the Robots

Social, ethical and political concerns about artificial intelligence (AI) are mounting in the UK, and researchers say greater oversight is urgently needed, according to the Guardian. Otherwise, we could expect the kind of social disruption that greeted the advent of genetically modified (GM) foods in past decades.

The Guardian’s report notes that there are no testing standards and no requirement for AI systems to explain their decisions. Nor is there any organisation equipped to monitor and investigate bad decisions or accidents.

AI has entered public consciousness over the past couple of years with a largely negative focus. In media coverage – mainstream and social alike – that focus tends to be on how the robots are coming to take away our jobs, our livelihoods, our comfort zones.

Undoubtedly, elements of AI – notably automation and machine learning – will have a significant and long-lasting effect on work and workplaces: how work is done and who does it. We already have examples of serious experiments and testing in which automation will replace people in manual jobs that are, broadly speaking, repetitive and predictable.

Amazon, for example, is experimenting in the UK with warehouse automation that could transform its logistics and supply chain as the FT reports:

The technology shows the transformation happening at the core of the logistics sector. Many functions that were once solely done by human hands are being carried out by robots as advanced automation takes root.

Watch the FT’s 5-minute video in its report – it’s very good and includes a segment on what grocery retailer Ocado is also doing in this area.

In the US, retailing giant Walmart is testing robots in supermarkets to handle tasks like scanning shelves for out-of-stock items, incorrect prices and wrong or missing labels – the kind of work previously done by people, who are neither well suited to doing it over the long term nor especially good at it.

White-collar work is also in the sights of AI and automation. Earlier this year, Bloomberg reported on JP Morgan’s experiments with machine learning: a computer program that analyses legal agreements more quickly and accurately than armies of lawyers:

The program, called COIN, for Contract Intelligence, does the mind-numbing job of interpreting commercial-loan agreements that, until the project went online in June, consumed 360,000 hours of work each year by lawyers and loan officers. The software reviews documents in seconds, is less error-prone and never asks for vacation.

Bloomberg’s report had arguably the best financial story headline of 2017, as this screenshot shows.

JP Morgan software

These three examples are just the tip of an immense iceberg, indicating the scale of change that lies ahead. It’s clear that some jobs will be directly affected – the workers concerned will lose them or need to be retrained for other work (a huge topic in itself). But it’s also clear that there will be benefits beyond the obvious ones (speed, greater accuracy, cost savings): when repetitive and predictable work – think of the JP Morgan example – is done by computers and machines, humans are freed to focus on what we are good at, ie, cognitive work that isn’t repetitive and predictable.

That is what I call ‘augmented intelligence.’

This is just part of a still-emerging and evolving landscape that, without a regulatory framework, will have profound implications for the well-being (or otherwise) of all of us. The Guardian report I mentioned earlier, and embedded below, is a great introduction to the questions of standards and accountability and what we should expect.

(Photo at top via The Verge.)


Powered by Guardian.co.uk. This article, titled “Artificial intelligence risks GM-style public backlash, experts warn”, was written by Ian Sample, Science editor, for The Guardian on Wednesday 1 November 2017, 10.30 UTC.

The emerging field of artificial intelligence (AI) risks provoking a public backlash as it increasingly falls into private hands, threatens people’s jobs, and operates without effective oversight or regulatory control, leading experts in the technology warn.

At the start of a new Guardian series on AI, experts in the field highlight the huge potential for the technology, which is already speeding up scientific and medical research, making cities run more smoothly, and making businesses more efficient.

But for all the promise of an AI revolution, there are mounting social, ethical and political concerns about the technology being developed without sufficient oversight from regulators, legislators and governments. Researchers told the Guardian that:

  • The benefits of AI might be lost to a GM-style backlash.
  • A brain drain to the private sector is harming universities.
  • Expertise and wealth are being concentrated in a handful of firms.
  • The field has a huge diversity problem.

In October, Dame Wendy Hall, professor of computer science at Southampton University, co-chaired an independent review on the British AI industry. The report found that AI had the potential to add £630bn to the economy by 2035. But to reap the rewards, the technology must benefit society, she said.

“AI will affect every aspect of our infrastructure and we have to make sure that it benefits us,” she said. “We have to think about all the issues. When machines can learn and do things for themselves, what are the dangers for us as a society? It’s important because the nations that grasp the issues will be the winners in the next industrial revolution.”

Today, responsibility for developing safe and ethical AI lies almost exclusively with the companies that build them. There are no testing standards, no requirement for AIs to explain their decisions, and no organisation equipped to monitor and investigate any bad decisions or accidents that happen.

A central goal of the field of artificial intelligence is for machines to be able to learn how to perform tasks and make decisions independently, rather than being explicitly programmed with inflexible rules. There are different ways of achieving this in practice, but some of the most striking recent advances, such as AlphaGo, have used a strategy called reinforcement learning. Typically the machine will have a goal, such as translating a sentence from English to French, and a massive dataset to train on. It starts off just making a stab at the task – in the translation example it would start by producing garbled nonsense and comparing its attempts against existing translations. The program is then “rewarded” with a score when it is successful. After each iteration of the task it improves, and after a vast number of reruns such programs can match and even exceed the level of human translators. Getting machines to learn less well-defined tasks, or ones for which no digital datasets exist, is a future goal that would require a more general form of intelligence, akin to common sense.
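To make that score-and-improve loop concrete, here is a minimal, purely illustrative Python sketch (my own toy example, not drawn from the article, and not real reinforcement learning over a neural network): a program makes a random stab at matching a known reference string, scores each attempt, and keeps only the changes that improve the score.

```python
import random
import string

# Toy illustration of the loop described above: start with a garbled attempt,
# score it against a known reference, and keep changes that improve the score.
# Real systems learn model parameters over huge datasets; this sketch only
# shows the iterative "attempt -> reward -> improve" cycle.

REFERENCE = "le chat est sur le tapis"   # stand-in for an "existing translation"
ALPHABET = string.ascii_lowercase + " "

def score(candidate: str) -> int:
    """Reward: how many characters already match the reference."""
    return sum(a == b for a, b in zip(candidate, REFERENCE))

# "Making a stab at the task" -- start from garbled nonsense.
attempt = "".join(random.choice(ALPHABET) for _ in REFERENCE)

iterations = 0
while score(attempt) < len(REFERENCE):
    iterations += 1
    # Propose a small random change to the current attempt.
    i = random.randrange(len(REFERENCE))
    proposal = attempt[:i] + random.choice(ALPHABET) + attempt[i + 1:]
    # Keep the change only if the reward does not get worse.
    if score(proposal) >= score(attempt):
        attempt = proposal

print(f"matched the reference after {iterations} iterations")
```

The point is the shape of the process, not the method: each pass is scored against known-good output, and only improvements survive, which is how performance climbs from nonsense to competence over many reruns.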

“We need to have strong independent organisations, along with dedicated experts and well-informed researchers, that can act as watchdogs and hold the major firms accountable to high standards,” said Kate Crawford, co-director of the AI Now Institute at New York University. “These systems are becoming the new infrastructure. It is crucial that they are both safe and fair.”

Many modern AIs learn to make decisions by being trained on massive datasets. But if the data itself contains biases, these can be inherited and repeated by the AI.

Earlier this year, an AI that computers use to interpret language was found to display gender and racial biases. Another used for image recognition categorised cooks as women, even when handed images of balding men. A host of others, including tools used in policing and prisoner risk assessment, have been shown to discriminate against black people.
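A toy sketch of how that kind of skew gets baked in (hypothetical data, not from the article): a trivial “model” that simply predicts whichever label dominated its training examples will reproduce the imbalance in every prediction it makes, regardless of the individual case in front of it.

```python
from collections import Counter

# Hypothetical, deliberately tiny training set: images of kitchens, mostly
# labelled "woman". The imbalance is in the data, not in the algorithm.
training_data = [
    ("kitchen scene", "woman"), ("kitchen scene", "woman"),
    ("kitchen scene", "woman"), ("kitchen scene", "man"),
]

def train(examples):
    counts = {}
    for scene, label in examples:
        counts.setdefault(scene, Counter())[label] += 1
    # The learned "rule": always predict the majority label seen in training.
    return {scene: c.most_common(1)[0][0] for scene, c in counts.items()}

model = train(training_data)
print(model["kitchen scene"])  # -> "woman", no matter who is actually pictured
```

Real image-recognition systems are vastly more sophisticated, but the failure mode is the same: a model optimised to match its training data will faithfully reproduce whatever biases that data contains.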

The industry’s serious diversity problem is partly to blame for AIs that discriminate against women and minorities. At Google and Facebook, four in five of all technical hires are men. The white male dominance of the field has led to health apps that only cater for male bodies, photo services that labelled black people as gorillas and voice recognition systems that did not detect women’s voices. “Software should be designed by a diverse workforce, not your average white male, because we’re all going to be users,” said Hall.

Poorly tested or implemented AIs are another concern. Last year, a driver in the US died when the autopilot on his Tesla Model S failed to see a truck crossing the highway. An investigation into the fatal crash by the US National Transportation Safety Board criticised Tesla for releasing an autopilot system that lacked sufficient safeguards. The company’s CEO, Elon Musk, is one of the most vocal advocates of AI safety and regulation.

Yet more concerns exist over the use of AI-powered systems to manipulate people, with serious questions now being asked about uses of social media in the run-up to Britain’s EU referendum and the 2016 US election. “There’s a technology arms race going on to see who can influence voters,” said Toby Walsh, professor of artificial intelligence at the University of New South Wales and author of a recent book on AI called Android Dreams.

“We have rules on the limits of what you can spend to influence people to vote in particular ways, and I think we’re going to have to have limits on how much technology you can use to influence people.”

Even at a smaller scale, manipulation could create problems. “On a day to day basis our lives are being, to some extent, manipulated by AI solutions,” said Sir Mark Walport, the government’s former chief scientist, who now leads UK Research and Innovation, the country’s new super-research council. “There comes a point at which, if organisations behave in a manner that upsets large swaths of the public, it could cause a backlash.”

Leading AI researchers have expressed similar concerns to the House of Lords AI committee, which is holding an inquiry into the economic, ethical and social implications of artificial intelligence. Evidence submitted by Imperial College London, one of the major universities for AI research, warns that insufficient regulation of the technology “could lead to societal backlash, not dissimilar to that seen with genetically modified food, should serious accidents occur or processes become out of control”.

Scientists at University College London share the concern about an anti-GM-style backlash, telling peers in their evidence: “If a number of AI examples developed badly, there could be considerable public backlash, as happened with genetically modified organisms.”

But the greatest impact on society may be AIs that work well, scientists told the Guardian. The Bank of England’s chief economist has warned that 15m UK jobs could be automated by 2035, meaning large scale re-training will be needed to avoid a sharp spike in unemployment. The short-term disruption could spark civil unrest, according to Maja Pantic, professor of affective and behavioural computing at Imperial, as could rising inequality driven by AI profits flowing to a handful of multinational companies.

Subbarao Kambhampati, president of the Association for the Advancement of Artificial Intelligence, said that although technology often benefited society, it did not always do so equitably. “Recent technological advances have been leading to a lot more concentration of wealth,” he said. “I certainly do worry about the effects of AI technologies on wealth concentration and inequality, and how to make the benefits more inclusive.”

The explosion of AI research in industry has driven intense demand for qualified scientists. At British universities, PhD students and postdoctoral researchers are courted by tech firms offering salaries two to five times those paid in academia. While some institutions are coping with the hiring frenzy, others are not. In departments where demand is most intense, senior academics fear they will lose a generation of talent who would traditionally drive research and teach future students.

According to Pantic, the best talent from academia is being sucked up by four major AI firms, Apple, Amazon, Google and Facebook. She said the situation could lead to 90% of innovation being controlled by the companies, shifting the balance of power from states to a handful of multinational companies.

Walport, who oversees almost all public funding of UK science research, is cautious about regulating AI for fear of hampering research. Instead he believes AI tools should be carefully monitored once they are put to use so that any problems can be picked up early.

“If you don’t continuously monitor, you’re in danger of missing things when they go wrong,” he said. “In the new world, we should surely be working towards continuous, real-time monitoring so that one can see if anything untoward or unpredictable is happening as early as possible.”

That might be part of the answer, according to Robert Fisher, professor of computer vision at Edinburgh University. “In theory companies are supposed to have liability, but we’re in a grey area where they could say their product worked just fine, but it was the cloud that made a mistake, or the telecoms provider, and they all disagree as to who is liable,” he said.

“We are clearly in brand new territory. AI allows us to leverage our intellectual power, so we try to do more ambitious things,” he said. “And that means we can have more wide-ranging disasters.”

guardian.co.uk © Guardian News & Media Limited 2010
