Leading artificial intelligence (AI) companies, including Google and OpenAI, have voluntarily pledged to ensure the safety and transparency of AI.
Many countries are increasingly concerned about the rise of artificial intelligence and are exploring ways to regulate it. Today, industry leaders seem committed to addressing these concerns.
AI giants head to the White House
According to the Financial Times, companies specializing in AI innovation, including Google, OpenAI, Meta, Anthropic, Amazon and Inflection AI, will commit “to contribute to the safe, secure and transparent development of AI technology” during a meeting at the White House.
Among the leaders of these AI companies are Brad Smith, president of Microsoft, Mustafa Suleyman, CEO of Inflection AI, and Nick Clegg of Meta. These executives will commit to various security and transparency measures, such as internal and external testing of their technologies before releasing them to the public.
In addition, they will keep governments informed of their risk mitigation measures, and the AI market leaders will make it easier for third parties to report vulnerabilities in their technologies.
In May, the CEOs of Alphabet, Anthropic, Microsoft and OpenAI met with the President and Vice President of the United States. This meeting aimed to discuss new measures to promote responsible innovation in the field of artificial intelligence.
In this regard, a White House official said:
“These voluntary commitments have helped push the boundaries of what companies do and raise the standards for safety, security and trust in AI. However, that did not change the need for bipartisan legislation and an executive order from the White House. This is a top priority for the President and his team.”
Last month, US senators proposed a bipartisan bill to promote transparency in the use of AI and strengthen its competitiveness. Other countries, such as the UK, are also exploring the possibility of regulating artificial intelligence.
This article was originally published on fr.beincrypto.com