Governments and tech companies risk a popular backlash against artificial intelligence (AI) unless they better explain how it will be used, according to a new report.
Polling for the Royal Society of Arts (RSA) found widespread concern that AI will create a “Computer Says No” culture, in which crucial decisions are made automatically without consideration of individual circumstances.
If the public feel “victimised or disempowered” by intelligent machines, they may resist the introduction of new technologies, even if this holds back progress that could benefit them, the report warned.
Among those taking part in a survey by pollsters YouGov for the RSA, fear of inflexible and unfeeling automated decision-making was a greater concern than robots taking humans’ jobs.
Despite recent publicity about the misuse of Facebook users’ personal data to target online ads, issues surrounding social marketing were bottom of the list of concerns.
Some 60% of those questioned opposed automated decision-making by computers in recruitment and promotion choices, and the same proportion said it should not be used to help courts judge whether to grant a defendant bail or recommend rehabilitation. Just 11% backed its use in recruitment and 12% in the courts.
Use of automated decision-making in the immigration system was opposed by a margin of 54%-16%, in social security by 52%-17%, in healthcare by 48%-20% and in financial services by 48%-27%.
Only in the area of advertising and social media were opinions more balanced, with 28% opposing its use against 26% who supported it.
Some 61% of those questioned said their main concern about automated decision-making was that AI does not have “the empathy or compassion to make important decisions”.
Almost one-third (31%) said the use of AI would reduce accountability, while just 22% said they feared the loss of jobs.
The report warned: “When people feel like they are under attack, they may resist change or innovation, even if this undermines progress and means that they also lose out on benefits.
“Bringing citizens’ voices ‘into the loop’ of innovation, its deployment and regulation could be one means of minimising risks and securing benefits.”
The report found high levels of awareness of headline-grabbing AI innovations like driverless cars (84%) and digital assistants (80%). But fewer than one-third (32%) of those questioned had heard about AI’s role in automated decision-making.
The RSA’s director of action and research, Anthony Painter, said: “If the models of embracing AI on offer appear to be top-down Chinese-style adoption of new technology versus a Silicon Valley free-for-all, it’s little wonder we’re seeing the warning signs of a new wave of populist backlash.
“The establishment appears to worry most about AI replacing jobs, but actually the public is more concerned with a world of constant surveillance, monitoring and ‘Computer Says No’. What we’re seeing is more akin to a creeping feeling of loss of influence and control, as we saw with the Brexit vote.
“To prevent a backlash against tech firms and government, we need new rights for workers and consumers, responsibilities for employers and corporations, and an active role for government: not just in nurturing innovative companies, but also in making AI part of a more civil society.”
The RSA’s Forum for Ethical AI is conducting a series of “citizens’ juries” to discuss the use of AI and automated decision-making with the public.