
AI experts disown Musk-backed campaign citing their research

Published 03/31/2023, 09:57 AM
Updated 04/01/2023, 07:46 AM
© Reuters. FILE PHOTO: Tesla founder Elon Musk attends Offshore Northern Seas 2022 in Stavanger, Norway August 29, 2022. NTB/Carina Johansen via REUTERS

By Martin Coulter

LONDON (Reuters) - Four artificial intelligence experts have expressed concern after their work was cited in an open letter – co-signed by Elon Musk – demanding an urgent pause in research.

The letter, dated March 22 and with more than 1,800 signatures by Friday, called for a six-month circuit-breaker in the development of systems "more powerful" than Microsoft-backed OpenAI's new GPT-4, which can hold human-like conversation, compose songs and summarise lengthy documents.

Since GPT-4's predecessor ChatGPT was released last year, rival companies have rushed to launch similar products.

The open letter says AI systems with "human-competitive intelligence" pose profound risks to humanity, citing 12 pieces of research from experts including university academics as well as current and former employees of OpenAI, Google (NASDAQ:GOOGL) and its subsidiary DeepMind.

Civil society groups in the U.S. and EU have since pressed lawmakers to rein in OpenAI's research. OpenAI did not immediately respond to requests for comment.

Critics have accused the Future of Life Institute (FLI), the organisation behind the letter which is primarily funded by the Musk Foundation, of prioritising imagined apocalyptic scenarios over more immediate concerns about AI, such as racist or sexist biases.

Among the research cited was "On the Dangers of Stochastic Parrots", a paper co-authored by Margaret Mitchell, who previously oversaw ethical AI research at Google.

Mitchell, now chief ethical scientist at AI firm Hugging Face, criticised the letter, telling Reuters it was unclear what counted as "more powerful than GPT-4".

"By treating a lot of questionable ideas as a given, the letter asserts a set of priorities and a narrative on AI that benefits the supporters of FLI," she said. "Ignoring active harms right now is a privilege that some of us don't have."

Mitchell and her co-authors -- Timnit Gebru, Emily M. Bender, and Angelina McMillan-Major -- subsequently published a response to the letter, accusing its authors of "fearmongering and AI hype".

"It is dangerous to distract ourselves with a fantasized AI-enabled utopia or apocalypse which promises either a 'flourishing' or 'potentially catastrophic' future," they wrote.

"Accountability properly lies not with the artefacts but with their builders."

FLI president Max Tegmark told Reuters the campaign was not an attempt to hinder OpenAI’s corporate advantage.

"It's quite hilarious. I've seen people say, 'Elon Musk is trying to slow down the competition,'" he said, adding that Musk had no role in drafting the letter. "This is not about one company."

RISKS NOW

Shiri Dori-Hacohen, an assistant professor at the University of Connecticut, told Reuters she agreed with some points in the letter, but took issue with the way in which her work was cited.

She last year co-authored a research paper arguing the widespread use of AI already posed serious risks.

Her research argued the present-day use of AI systems could influence decision-making in relation to climate change, nuclear war, and other existential threats.

She said: "AI does not need to reach human-level intelligence to exacerbate those risks.

"There are non-existential risks that are really, really important, but don't receive the same kind of Hollywood-level attention."

Asked to comment on the criticism, FLI's Tegmark said both short-term and long-term risks of AI should be taken seriously.

"If we cite someone, it just means we claim they're endorsing that sentence. It doesn't mean they're endorsing the letter, or we endorse everything they think," he told Reuters.

Dan Hendrycks, director of the California-based Center for AI Safety, who was also cited in the letter, stood by its contents, telling Reuters it was sensible to consider black swan events - those which appear unlikely, but would have devastating consequences.

The open letter also warned that generative AI tools could be used to flood the internet with "propaganda and untruth".

© Reuters. FILE PHOTO: Tesla Inc CEO Elon Musk attends the World Artificial Intelligence Conference (WAIC) in Shanghai, China August 29, 2019. REUTERS/Aly Song/File Photo

Dori-Hacohen said it was "pretty rich" for Musk to have signed it, citing a reported rise in misinformation on Twitter following his acquisition of the platform, documented by civil society group Common Cause and others.

Musk and Twitter did not immediately respond to requests for comment.

