
Amazon, Google, Meta, Microsoft and other tech firms agree to AI safeguards set by the White House

by Michael Nguyen

President Joe Biden revealed on Friday that several tech firms including Amazon, Google, Meta, Microsoft, and others are backing new artificial intelligence (AI) safety guidelines developed by his administration. These commitments, Biden stated, are vital for managing the considerable potential and risks associated with AI.

Biden’s administration has gained voluntary pledges from seven U.S. firms, including OpenAI, Anthropic, and Inflection, designed to ensure that their AI technologies are safe prior to launch. Some of these commitments include the involvement of third parties in examining the functioning of next-gen AI systems, though the specifics regarding auditing and accountability remain undefined.

The President asserted that there is a “fundamental obligation” for these firms to ensure the safety of their products, considering the potential threats associated with emerging technologies. He cited the damaging impact of powerful technology lacking adequate safeguards, as evidenced by social media, and claimed that while the commitments represent a positive step, there’s more work to be done.

Public interest and concern have grown with the increase in commercial investment in generative AI tools. These tools, capable of producing human-like text and generating new media, carry risks such as spreading disinformation and deceiving people.

According to a White House statement, the companies pledged to conduct security testing, partially handled by independent experts, to guard against significant risks like biosecurity and cybersecurity. Other potential harms to society, such as bias and discrimination, and theoretical dangers like advanced AI systems gaining control of physical systems or self-replicating, will also be examined.

The firms also promised to establish methods for reporting vulnerabilities in their systems and to use digital watermarking to distinguish real from AI-generated content, helping users identify so-called deepfakes.

In a meeting behind closed doors on Friday, executives from these firms assured the President of their commitment to uphold these standards. Inflection CEO Mustafa Suleyman emphasized the significance of this collaborative endeavor, considering the competitive nature of the industry.

Under the agreement, the companies will also publicly disclose flaws and risks associated with their technology, particularly regarding fairness and bias.

The aim of these voluntary commitments is to address imminent risks while advocating for the development of legislative regulation over a more extended period. However, some proponents of AI regulations believe more needs to be done for corporate accountability.

Senate Majority Leader Chuck Schumer (D-NY) intends to introduce legislation to regulate AI, in conjunction with the Biden administration’s efforts and the commitments made by the firms.

Microsoft President Brad Smith indicated his company’s support for regulation that would establish a licensing regime for highly capable models.

Concerns are rising among experts and budding competitors that the proposed regulations could be advantageous for large-scale first movers like OpenAI, Google, and Microsoft, leaving smaller players at a disadvantage due to the cost of regulatory compliance.

While the White House pledge addresses safety risks, it does not touch upon other issues concerning AI, such as the impact on jobs and market competition, the resources required to build the models, and copyright issues related to AI learning from human-created content.

OpenAI recently struck a deal with The Associated Press to license AP’s archive of news stories. The financial details of the agreement were not disclosed.

O’Brien reported from Providence, Rhode Island.

Frequently Asked Questions (FAQs) about AI Safety Guidelines

What AI safety guidelines have Amazon, Google, Meta, and Microsoft agreed to?

These tech firms have agreed to AI safety guidelines set by the White House, which call for their AI products to be safe before they are released. This includes security testing to guard against major risks, third-party oversight of AI systems, methods for reporting system vulnerabilities, and the use of digital watermarking to distinguish between real and AI-generated images or audio.

Who else has pledged to follow these AI safeguards apart from the tech giants?

In addition to Amazon, Google, Meta, and Microsoft, three other companies, namely OpenAI, Anthropic, and Inflection, have pledged to follow these AI safeguards.

What is the purpose of these AI safety guidelines?

The purpose of these guidelines is to manage the “enormous” promise and potential risks posed by AI technology. They are meant to be an immediate way of addressing these risks, while a longer-term push is made to get Congress to pass laws regulating the technology.

What type of security testing have these companies committed to?

These companies have committed to security testing that will be “carried out in part by independent experts”. This testing will guard against major risks, such as biosecurity and cybersecurity, and will also examine potential societal harms, such as bias and discrimination.

What is the stance of the Biden administration on these AI safety guidelines?

President Joe Biden’s administration has brokered these AI safety guidelines and secured voluntary commitments from the seven companies. Biden has stated that these companies have a “fundamental obligation” to ensure their products are safe, and these commitments represent an important step towards achieving that.



8 comments

TechGuru88 July 22, 2023 - 12:33 am

This is huge! I mean, the tech giants at one table agreeing to something. They need to be super careful though, AI can be unpredictable. Let’s see what happens next.

John_Doe July 22, 2023 - 2:35 am

wow, big tech getting together for once for AI safety. wonder how this will pan out, cuz ai is kinda scary if you ask me. hope they do it right!!

SciFiLover July 22, 2023 - 9:11 am

AI is taking over and now we’re talking about self replicating machines… feels like a scifi movie plot to me, doesn’t it?

MaryJane July 22, 2023 - 10:52 am

So…who’s auditing this whole process? Seems like an awful lot of trust in corporations who have often shown they can’t be trusted. Just saying…

LaymanLarry July 22, 2023 - 3:07 pm

A lot of big words and promises here. Just hope all this tech stuff doesn’t get too crazy! Big names involved though, must be important right?

SiliconValleyFan July 22, 2023 - 3:26 pm

The commitment to have “red team” tests, that’s gonna be tough. It ain’t an easy promise to make. Looking forward to seeing how this evolves.

LegalEagle July 22, 2023 - 8:12 pm

voluntary commitments are a start, but we need laws in place to regulate AI. looking forward to Schumer’s legislation.

AI_Enthusiast July 22, 2023 - 10:07 pm

Good to see that they’re thinking of safety before launching AI products. But, this whole AI regulation thing, it’s gonna be a rocky ride.


© 2023 BBN – Big Big News