Artificial intelligence (AI) is a bigger threat to national security than terrorism, the newly appointed president of one of the world’s oldest science institutions has warned.
Jim Al-Khalili, the incoming president of the British Science Association and professor of physics and public engagement at the University of Surrey, U.K., told The Telegraph that the looming dangers of AI trump those posed even by climate change, antibiotic resistance, world poverty, the threat of pandemics or terrorism.
Citing Russian cyber hackers meddling in the 2016 U.S. election, he argued little would stop “cyber terrorists” from forcing their way into AI-controlled infrastructure, such as power grids, transport networks, and military installations.
“I am certain the most important conversation we should be having is about the future of AI,” he said. “It will dominate what happens with all of these other issues for better or for worse.”
Fears that the rise of automation and AI, known as the Fourth Industrial Revolution, will endanger jobs are also warranted, he said. His concerns are mirrored by a November 2017 report from the management consulting firm McKinsey, which estimated that 50 percent of current work could be automated as soon as 2030.
Al-Khalili is the latest expert to warn against the unregulated rise of AI. In April, a report by the research organization RAND Corporation concluded that advances in technology and AI could send humanity speeding toward an international nuclear war.
The rise of such technology could create a dangerous cycle in which governments feel obliged to update their nuclear arsenals while trusting advice from AI systems that could be flawed or tampered with.
“The fear that computers, by mistake or malice, might lead humanity to the brink of nuclear annihilation has haunted imaginations since the earliest days of the Cold War,” the report read.
“The danger might soon be more science than fiction. Stunning advances in AI have created machines that can learn and think, provoking a new arms race among the world’s major nuclear powers. It’s not the killer robots of Hollywood blockbusters that we need to worry about; it’s how computers might challenge the basic rules of nuclear deterrence and lead humans into making devastating decisions.”
Before his death earlier this year, the venerated physicist Stephen Hawking similarly cautioned that AI could destroy civilization.
“Computers can, in theory, emulate human intelligence, and exceed it,” he said. “Success in creating effective AI could be the biggest event in the history of our civilization. Or the worst. We just don’t know. So we cannot know if we will be infinitely helped by AI, or ignored by it and side-lined, or conceivably destroyed by it.”
But Subramanian Ramamoorthy, a reader in the School of Informatics at the University of Edinburgh, disagrees that AI is the biggest threat facing humanity.
He told Newsweek that while the popular discourse around AI is heavily driven by fears about its risks, the technology could also provide benefits that improve our day-to-day lives.
“Some obviously good applications range from prosthetic and assistive robotic devices that restore the capabilities of the disabled, to predictive models that stabilize and reduce congestion in energy and traffic networks,” he said.
“Closer to home for me, technologies like self-driving cars have the potential to fundamentally change how our cities look and feel for most of us—positively influencing congestion, accessibility and affordability of mobility. Such machines are powered by AI.”
“That said,” he continued, “AI has indeed enabled new forms of issues.
“However, I am not yet convinced that these problems can’t be overcome through careful thinking at the policy level, for which reason I do not yet consider AI to be ‘the biggest challenge facing humanity.’ There are much bigger issues, having to do with people quite independent of technology enabling them.”
This article has been updated with comment from Subramanian Ramamoorthy.