
There’s more to AI than “killing all humans”

by Matthew Feeney | Jun 10, 2023


If increasingly panic-stricken headlines are to be believed, Artificial Intelligence poses an existential threat to humanity. The Prime Minister’s advisor on AI has warned that we have just two years to protect the species, and the Center for AI Safety has published a statement signed by hundreds of AI researchers, lawmakers, academics, and industry leaders saying, ‘mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war’.

Such claims are worth taking seriously, given that governments all over the world are considering an array of AI policies. But they should not make us rush to pause AI research or prompt lawmakers to hamper AI innovation. There are risks associated with AI, but there are also benefits. Given AI’s potential to improve almost every aspect of our lives, it is worth considering what degree of risk justifies intervention and what such interventions might look like.

Placing AI on a list of risks alongside nuclear war and pandemics is interesting but unhelpful. Nuclear weapons have been used in warfare only twice, by the same military power against the same enemy in a single conflict. In other words, nuclear warfare is very rare. Pandemics, by contrast, are comparatively frequent. AI’s potential to cause mass casualties remains, for now, theoretical.


AI, pandemics, and nuclear weapons all pose threats, and the mitigation measures are different for each. Without examining the nature of those threats and the costs of mitigating them, the Center for AI Safety’s statement looks banal. People with a wide range of risk tolerances and risk prices could happily sign it.

Take, for example, someone, let’s call him ‘Pessimistic Peter’, who believes that the risk of nuclear war is very high. Despite the evidence, he is convinced that there is a 50% chance of the Russian military using a nuclear weapon in Ukraine. Given that such an event would increase the chance of global nuclear war and mankind’s extinction, Pessimistic Peter believes it is worth the world’s businesses and governments spending 10% of global GDP (about $10 trillion) a year to prevent nuclear war. The recent pandemic really frightened Pessimistic Peter, and he believes that the steps taken to mitigate its spread were woefully inadequate. Although he thinks Covid never had a chance of wiping out mankind, he wants spending on pandemic mitigation quadrupled and stricter lockdowns when the next global pandemic arrives.

Pessimistic Peter thinks that an AI with human-level intelligence is only a few years away and is convinced that such a technology will set itself loose and pose an extinction-level threat to humanity. He wants governments to introduce an AI licensing regime and long prison sentences for those who research and build AI underground.

‘Cautious Catherine’, on the other hand, thinks that the chance of the Russian military detonating a nuke in Ukraine is only 0.5%. But she still thinks that, because the outcome would be so devastating, it would be worth spending 1% of global GDP (about $1 trillion) a year to prevent it. She finds pandemics worrying and fears that another zoonotic pandemic could be worse than Covid. She is not prepared to accept another lockdown, but she would like governments to establish new departments dedicated to pandemic preparation.

Cautious Catherine is less worried than Pessimistic Peter about AI: she thinks an AI-led extinction is unlikely, but she would still like regular government audits of research labs and other institutions engaged in AI research.


Cautious Catherine and Pessimistic Peter have very different attitudes towards AI, pandemics, and nuclear war. But they would both have been able to sign the Center for AI Safety’s statement. That two people with such diverse views could sign the same statement means there’s not much to be gleaned from it. 

The Center for AI Safety claims that its statement ‘aims to […] open up discussion’, which is a worthwhile goal. We should be discussing the threats of AI, but these discussions must be grounded by clear statements of risks and the costs associated with mitigating those risks. Those worried about AI posing a risk of human extinction should state what the nature of the threat is, how likely that threat is to emerge, and what costs would be sufficient to mitigate those threats. Likewise, those who are not worried that AI is going to kill us all should be prepared to tell their critics what risks would justify intervention.

Clarity in AI safety and risk discussions is critical because, absent nuance, unfounded concerns could stifle innovation in a technology that has the potential to yield many benefits, including advances in tackling the next pandemic. In the coming months and years we will undoubtedly see more statements like the Center for AI Safety’s and the call for a ‘pause’ on AI research released in March. Everyone involved in the AI debate should speak precisely about their fears, their hopes for AI’s benefits, and the costs of risk mitigation.

This article was originally published by CapX, and is republished here with permission.

About Matthew Feeney
