The purported risk of human extinction posed by AI has been exaggerated, says AI expert Gary Marcus

Priya Walia



According to Gary Marcus, professor emeritus at New York University, the belief that AI poses a risk of human extinction is exaggerated. Marcus said that, at least for now, he is not worried about extinction because the scenarios are not concrete. He is more concerned about people building AI systems they cannot effectively control.

“I’m not personally that concerned about extinction risk, at least for now, because the scenarios are not that concrete. A more general problem that I am worried about… is that we’re building AI systems that we don’t have very good control over, and I think that poses a lot of risks, but maybe not literally existential,” said Marcus in San Francisco.

Marcus questioned why people would keep working on a project they believed posed a significant existential risk. Rather than fixating on improbable scenarios in which no one survives, he argued, society should direct its attention toward genuine threats.

“People might try to manipulate the markets by using AI to cause all kinds of mayhem and then we might, for example, blame the Russians and say, ‘look what they’ve done to our country’ when the Russians actually weren’t involved. You (could) have this escalation that winds up in nuclear war or something like that. So I think there are scenarios where it was pretty serious. Extinction? I don’t know,” he continued.

Earlier this year, after OpenAI, the Microsoft-backed creator of ChatGPT, released its latest and more advanced AI model, Marcus and more than 1,000 others, including Elon Musk, signed an open letter calling for a worldwide pause on AI development, citing concerns about its potential implications.

While Marcus signed that initial letter, he declined to endorse a more concise and controversial statement released recently by industry leaders and professionals, including OpenAI Chief Executive Sam Altman. The statement urged global leaders to treat the risk of extinction from AI as a priority on par with other societal-scale risks such as pandemics and nuclear war.

The signatories included industry experts who are actively developing systems intended to achieve "general" AI, a technology capable of matching the cognitive abilities of human beings.

Via France 24