
Exclusive: Google pledges changes to research oversight after internal revolt

Published 02/25/2021, 06:06 AM
Updated 02/25/2021, 06:25 AM
© Reuters. FILE PHOTO: The Google name is displayed outside the company's office in London, Britain

By Jeffrey Dastin and Paresh Dave

(Reuters) - Alphabet (NASDAQ:GOOGL) Inc's Google will change procedures before July for reviewing its scientists' work, according to a town hall recording heard by Reuters, part of an effort to quell internal tumult over the integrity of its artificial intelligence (AI) research.

In remarks at a staff meeting last Friday, Google Research executives said they were working to regain trust after the company ousted two prominent women and rejected their work, according to an hour-long recording, the content of which was confirmed by two sources.

Teams are already trialing a questionnaire that will assess projects for risk and help scientists navigate reviews, research unit Chief Operating Officer Maggie Johnson said in the meeting. This initial change will roll out by the end of the second quarter, and the majority of papers will not require extra vetting, she said.

Reuters reported in December that Google had introduced a "sensitive topics" review for studies involving dozens of issues, such as China or bias in its services. Internal reviewers had demanded that at least three papers on AI be modified to refrain from casting Google technology in a negative light, Reuters reported.

Jeff Dean, Google's senior vice president overseeing the division, said Friday that the "sensitive topics" review "is and was confusing" and that he had tasked a senior research director, Zoubin Ghahramani, with clarifying the rules, according to the recording.

Ghahramani, a University of Cambridge professor who joined Google in September from Uber Technologies (NYSE:UBER) Inc, said during the town hall, "We need to be comfortable with that discomfort" of self-critical research.

Google declined to comment on the Friday meeting.

An internal email, seen by Reuters, offered fresh detail on Google researchers' concerns, showing exactly how Google's legal department had modified one of the three AI papers, called "Extracting Training Data from Large Language Models."

The email, dated Feb. 8, from a co-author of the paper, Nicholas Carlini, went to hundreds of colleagues, seeking to draw their attention to what he called "deeply insidious" edits by company lawyers.

"Let's be clear here," the roughly 1,200-word email said. "When we as academics write that we have a 'concern' or find something 'worrying' and a Google lawyer requires that we change it to sound nicer, this is very much Big Brother stepping in."

Required edits, according to his email, included "negative-to-neutral" swaps such as changing the word "concerns" to "considerations," and "dangers" to "risks." Lawyers also required deleting references to Google technology; the authors' finding that AI leaked copyrighted content; and the words "breach" and "sensitive," the email said.

Carlini did not respond to requests for comment. Asked about the email, Google disputed the contention that its lawyers were trying to control the paper's tone. The company said it had no issue with the topics the paper investigated, but it found some legal terms used inaccurately and conducted a thorough edit as a result.

RACIAL EQUITY AUDIT

Google last week also named Marian Croak, a pioneer in internet audio technology and one of Google's few Black vice presidents, to consolidate and manage 10 teams studying issues such as racial bias in algorithms and technology for disabled individuals.

Croak said at Friday's meeting that it would take time to address concerns among AI ethics researchers and mitigate damage to Google's brand.

"Please hold me fully responsible for trying to turn around that situation," she said on the recording.

Johnson added that the AI organization is bringing in a consulting firm for a wide-ranging racial equity impact assessment. The first-of-its-kind audit for the department would lead to recommendations "that are going to be pretty hard," she said.

Tensions in Dean's division had deepened in December after Google let go of Timnit Gebru, co-lead of its ethical AI research team, following her refusal to retract a paper on language-generating AI. Gebru, who is Black, accused the company at the time of reviewing her work differently because of her identity and of marginalizing employees from underrepresented backgrounds. Nearly 2,700 employees signed an open letter in support of Gebru.

During the town hall, Dean elaborated on what scholarship the company would support.

"We want responsible AI and ethical AI investigations," Dean said, giving the example of studying technology's environmental costs. But it is problematic to cite data "off by close to a factor of a hundred" while ignoring more accurate statistics as well as Google's efforts to reduce emissions, he said. Dean previously has criticized Gebru's paper for not including important findings on environmental impact.

Gebru defended her paper's citation. "It's a really bad look for Google to come out this defensively against a paper that was cited by so many of their peer institutions," she told Reuters.

Employees continued to post about their frustrations over the last month on Twitter as Google investigated and then fired ethical AI co-lead Margaret Mitchell for moving electronic files outside the company. Mitchell said on Twitter that she acted "to raise concerns about race & gender inequity, and speak up about Google's problematic firing of Dr. Gebru."

Mitchell had collaborated on the paper that prompted Gebru's departure, and a version published online last month without Google affiliation named "Shmargaret Shmitchell" as a co-author.


Asked for comment, Mitchell said through an attorney that she was disappointed by Dean's critique of the paper and that her name was removed from it following a company order.

