Stay Hungry Stay Foolish


A blog by Leon Oudejans

Inside AI’s ‘Hollywood extinction scenario’ (Telegraph)

Telegraph Technology Intelligence title: Inside AI’s ‘Hollywood extinction scenario’
By: James Titcomb, technology editor
Date: 31 May 2023

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

The above passage is just 22 words, but what matters more are the people behind it.

The statement, urging humanity to address AI’s existential risks, was yesterday signed by the world’s top AI laboratories, including OpenAI, the maker of ChatGPT.

The rise of chatbots such as ChatGPT and Google Bard, as well as image-generation tools like Stable Diffusion, means that warnings about AI’s existential threat have been ten a penny in recent weeks.

But yesterday’s statement, co-ordinated by the nonprofit Center for AI Safety, was the first time that the AI industry came together to warn about its own invention.

The organisation’s executive director, Dan Hendrycks, compared it to “atomic scientists issuing warnings about the very technologies they’ve created”.

Nuclear comparisons are in vogue: last week, OpenAI’s Sam Altman called for an international body similar to the International Atomic Energy Agency to regulate AI.

It should be noted that the statement was not universally accepted, despite the wide range of signatories. Researchers including Meta’s AI chief, Yann LeCun, responded overnight, calling the threats overblown and purely hypothetical.

At the very least it is worth exploring the motives of the AI labs in warning about their own technology – after all, if they were so worried they could simply pull the plug.

One theory is that by talking up the Hollywood extinction scenario, AI bosses can distract from the more immediate and realistic concerns that could invite regulation: copyright violations, privacy concerns and misinformation.

Altman has called for regulation, but last week suggested OpenAI could leave the EU because of Brussels’ incoming laws (he quickly reversed the warning).

In the UK, the Information Commissioner is cracking down on AI companies scraping user data without their consent, as The Telegraph revealed this weekend.

Another possibility is that talking up the existential threat of AI makes these technologies appear more capable than they truly are.

Critics have said ChatGPT and similar models are merely fancy versions of autocomplete, lacking anything approaching true intelligence. Claiming that they could one day take over the world is one way of countering that.

Any warning about human extinction should be taken seriously, especially when the world’s top tech companies are making it. But we should also recognise that they might have other motives in making such warnings.


Sources:
https://m4.emails.telegraph.co.uk/nl/jsp/m.jsp?c=%40lwou42kgm4PTE049qNSeWqTyJUnIk0IQp%2F%2Fg66DSVPGx4H68JO1iGUwE4VCcAhPD2wKyUUlpZExHmF7r2yzNsg%3D%3D&WT.mc_id=e_DM155724&WT.tsrc=email&etype=Edi_Tec_New_TechIntel&utmsource=email&utm_medium=Edi_Tec_New_TechIntel20230531&utm_campaign=DM155724
and related article:
https://www.telegraph.co.uk/business/2023/05/30/scientists-warn-ai-could-as-dangerous-nuclear-war/
