
Biden administration takes first step toward writing key AI standards

Published 12/19/2023, 07:00 PM
Updated 12/19/2023, 07:41 PM
© Reuters. Words reading "Artificial intelligence AI", a miniature robot and a toy hand are pictured in this illustration taken December 14, 2023. REUTERS/Dado Ruvic/Illustration/File Photo

By David Shepardson

WASHINGTON (Reuters) - The Biden administration said on Tuesday it was taking the first step toward writing key standards and guidance for the safe deployment of generative artificial intelligence and how to test and safeguard systems.

The Commerce Department's National Institute of Standards and Technology (NIST) said it was seeking public input by Feb. 2 for conducting key testing crucial to ensuring the safety of AI systems.

Commerce Secretary Gina Raimondo said the effort was prompted by President Joe Biden's October executive order on AI and aimed at developing "industry standards around AI safety, security, and trust that will enable America to continue leading the world in the responsible development and use of this rapidly evolving technology."

The agency is developing guidelines for evaluating AI, facilitating the development of standards and providing testing environments for evaluating AI systems. The request seeks input from AI companies and the public on generative AI risk management and reducing the risks of AI-generated misinformation.

Generative AI - which can create text, photos and videos in response to open-ended prompts - has in recent months spurred excitement as well as fears it could make some jobs obsolete, upend elections and potentially overpower humans with catastrophic effects.

Biden's order directed agencies to set standards for that testing and address related chemical, biological, radiological, nuclear, and cybersecurity risks.

NIST is working on setting guidelines for testing, including where so-called "red-teaming" would be most beneficial for AI risk assessment and management and setting best practices for doing so.

External red-teaming has been used for years in cybersecurity to identify new risks, with the term referring to U.S. Cold War simulations where the enemy was termed the "red team."

In August, the first-ever U.S. public assessment "red-teaming" event was held during a major cybersecurity conference, organized by AI Village, SeedAI and Humane Intelligence.

Thousands of participants tried to see if they "could make the systems produce undesirable outputs or otherwise fail, with the goal of better understanding the risks that these systems present," the White House said.

The event "demonstrated how external red-teaming can be an effective tool to identify novel AI risks," it added.
