

This article has been reviewed according to Science X's editorial process and policies. Editors have highlighted the following attributes while ensuring the content's credibility:

fact-checked

peer-reviewed publication

trusted source

proofread

Current AI risks more alarming than apocalyptic future scenarios, political scientists find

[Image: robot apocalypse. Credit: Pixabay/CC0 Public Domain]

Most people are more concerned about the immediate risks of artificial intelligence than about a theoretical future in which AI threatens humanity. A new study by the University of Zurich reveals that respondents draw clear distinctions between abstract scenarios and specific tangible problems, and take the latter particularly seriously.

There is a broad consensus that AI is associated with risks, but there are differences in how those risks are understood and prioritized. One widespread perception emphasizes theoretical long-term risks, such as AI potentially threatening the survival of humanity.

Another common viewpoint focuses on immediate concerns, such as how AI systems amplify social prejudices or contribute to disinformation. Some fear that emphasizing dramatic "existential risks" may distract attention from the more urgent problems that AI is already causing today.

Present and future AI risks

To examine those views, a team of political scientists at the University of Zurich conducted three large-scale online experiments involving more than 10,000 participants in the U.S. and the UK. The findings are published in the journal Proceedings of the National Academy of Sciences.

Some subjects were shown a variety of headlines that portrayed AI as a catastrophic risk. Others read about present threats such as discrimination or misinformation, and a third group read about the potential benefits of AI. The objective was to examine whether warnings about a far-off AI-caused catastrophe diminish alertness to actual present problems.

Greater concern about present problems

"Our findings show that the respondents are much more worried about present risks posed by AI than about potential future catastrophes," says Professor Fabrizio Gilardi from the Department of Political Science at UZH.

Even when texts about existential threats amplified fears about scenarios of that kind, respondents remained much more concerned about present problems, including, for example, systematic bias in AI decisions and job losses due to AI. The study, however, also shows that people are capable of distinguishing between theoretical dangers and specific tangible problems and take both seriously.

Conduct broad dialogue on AI risks

The study thus fills a significant gap in knowledge. In public discussion, fears are often voiced that focusing on sensational future scenarios distracts attention from pressing present problems. The study is the first to deliver systematic data showing that awareness of actual present threats persists even when people are confronted with apocalyptic warnings.

"Our study shows that the discussion about long-term risks is not automatically occurring at the expense of alertness to present problems," co-author Emma Hoes says. Gilardi adds that "the discussion shouldn't be 'either-or.' A concurrent understanding and appreciation of both the immediate and potential future challenges is needed."

More information: Emma Hoes et al, Existential risk narratives about AI do not distract from its immediate harms, Proceedings of the National Academy of Sciences (2025).

Provided by University of Zurich

Citation: Current AI risks more alarming than apocalyptic future scenarios, political scientists find (2025, April 23) retrieved 28 April 2025 from /news/2025-04-current-ai-alarming-apocalyptic-future.html
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.
