Several leading AI companies, including OpenAI, Alphabet, and Meta Platforms, have made voluntary commitments to the White House to improve the safety of AI technology, the Biden administration announced.
These companies, which also include Anthropic, Inflection, Amazon.com, and Microsoft (an OpenAI partner), have pledged to test AI systems thoroughly before release, share information on reducing risks, and invest in cybersecurity.
The commitments are seen as a significant step in the Biden administration’s push to regulate AI, a field that has seen a surge in investment and consumer popularity. The rise of generative AI, exemplified by technologies such as ChatGPT, has prompted lawmakers worldwide to consider how to address potential risks to national security and the economy.
In June, U.S. Senate Majority Leader Chuck Schumer called for comprehensive legislation to ensure proper safeguards on artificial intelligence. Congress is also considering a bill that would require political ads to disclose whether AI was used to create imagery or other content.
To tackle these challenges, President Joe Biden is working on an executive order and bipartisan legislation on AI. As part of this effort, the seven companies have committed to developing a system to “watermark” all forms of AI-generated content, including text, images, audio, and video. The watermark, embedded in the content in a technical way, is intended to make it easier for users to tell when AI has been used to create it.
The watermark is meant to help users spot deepfake images or audio that depict violence that never occurred, facilitate scams, or distort images of politicians to cast them in an unflattering light. It remains unclear, however, how the watermark will be evident when the content is shared.
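The companies have not described how such a provenance marker would actually work. The sketch below is only an illustration of one simple possibility, not any company’s scheme: a provider attaches a keyed signature to each piece of generated content, and anyone holding the verification key can later confirm its origin. All names here (SECRET_KEY, tag_content, verify_content, “example-model”) are hypothetical, and real watermarking approaches, such as statistical patterns woven into generated text or imperceptible pixel-level marks in images, are far more sophisticated and designed to survive sharing and editing in ways a plain signature does not.

import base64
import hashlib
import hmac
import json

# Assumption: a key held by the AI provider, used to sign generated content.
SECRET_KEY = b"hypothetical-provider-signing-key"


def tag_content(content: bytes, model_name: str) -> dict:
    """Wrap generated content with a signed provenance record."""
    signature = hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()
    return {
        "content": base64.b64encode(content).decode(),
        "provenance": {"model": model_name, "signature": signature},
    }


def verify_content(record: dict) -> bool:
    """Check that the content still matches its provenance signature."""
    content = base64.b64decode(record["content"])
    expected = hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["provenance"]["signature"])


if __name__ == "__main__":
    record = tag_content(b"an AI-generated image, audio clip, or text", "example-model")
    print(json.dumps(record["provenance"], indent=2))
    print("verified:", verify_content(record))  # True unless the content is altered

One weakness this toy example makes visible is the open question the article raises: a signature like this lives alongside the content, so it can be stripped when the file is re-encoded or shared, which is why the practical details of watermarking remain unsettled.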
Additionally, the companies have pledged to protect user privacy as AI advances and to ensure that the technology is free of bias and is not used to discriminate against vulnerable groups. They have also committed to developing AI solutions for scientific challenges such as medical research and climate change mitigation.