Amazon, Google, Meta, Microsoft and other tech firms agree to AI safeguards set by the White House

by Michael Nguyen, July 21, 2023

President Joe Biden announced on Friday that several tech firms, including Amazon, Google, Meta, and Microsoft, are backing new artificial intelligence (AI) safety guidelines developed by his administration. These commitments, Biden said, are vital for managing the considerable promise and risks that AI presents.

The administration has secured voluntary pledges from seven U.S. companies, including OpenAI, Anthropic, and Inflection, intended to ensure that their AI technologies are safe before they are released. Some of the commitments call for third parties to examine how next-generation AI systems function, though the specifics of auditing and accountability remain undefined.

The President said the firms have a "fundamental obligation" to ensure the safety of their products, given the potential threats posed by emerging technologies. He pointed to the damage done by powerful technology deployed without adequate safeguards, as seen with social media, and said that while the commitments are a positive step, more work remains.

Public interest and concern have grown alongside the surge in commercial investment in generative AI tools. These tools, capable of producing human-like text and generating new images and audio, carry risks such as spreading disinformation and deceiving people.

According to a White House statement, the companies pledged to conduct security testing, carried out in part by independent experts, to guard against major risks such as biosecurity and cybersecurity threats. They will also examine potential harms to society, such as bias and discrimination, as well as more theoretical dangers, such as advanced AI systems taking control of physical systems or replicating themselves.

The firms also promised to establish methods for reporting vulnerabilities in their systems and to use digital watermarking to distinguish real content from AI-generated content, also known as deepfakes.

In a closed-door meeting with the President on Friday, executives from the companies pledged to uphold these standards. Inflection CEO Mustafa Suleyman emphasized the significance of the collaborative effort, given how competitive the industry is.

Under the agreement, the companies will also publicly disclose flaws and risks associated with their technology, particularly regarding fairness and bias. The voluntary commitments are meant to address imminent risks while a longer-term push is made for legislative regulation. Some proponents of AI regulation, however, believe more is needed to hold companies accountable.

Senate Majority Leader Chuck Schumer (D-NY) intends to introduce legislation to regulate AI, complementing the Biden administration's efforts and the commitments made by the firms. Microsoft President Brad Smith said his company supports regulation that would establish a licensing regime for highly capable models.
Experts and emerging competitors worry that the proposed regulations could favor large first movers such as OpenAI, Google, and Microsoft, leaving smaller players at a disadvantage because of the cost of regulatory compliance.

While the White House pledge addresses safety risks, it does not touch other concerns about AI, such as the impact on jobs and market competition, the resources required to build the models, and copyright questions raised by AI systems learning from human-created content. OpenAI recently struck a licensing deal with The Associated Press for the AP's archive of news stories; the financial terms of the agreement were not disclosed.

O'Brien reported from Providence, Rhode Island.

Frequently Asked Questions (FAQs) about AI Safety Guidelines

What AI safety guidelines have Amazon, Google, Meta, and Microsoft agreed to?
These tech firms have agreed to AI safety guidelines set by the White House, which call for their AI products to be safe before they are released. This includes security testing to guard against major risks, third-party oversight of AI systems, methods for reporting system vulnerabilities, and the use of digital watermarking to distinguish between real and AI-generated images or audio.

Who else has pledged to follow these AI safeguards apart from the tech giants?
In addition to Amazon, Google, Meta, and Microsoft, three other companies, namely OpenAI, Anthropic, and Inflection, have pledged to follow these AI safeguards.

What is the purpose of these AI safety guidelines?
The purpose of these guidelines is to manage the "enormous" promise and potential risks posed by AI technology. They are meant to be an immediate way of addressing those risks while a longer-term push is made to get Congress to pass laws regulating the technology.

What type of security testing have these companies committed to?
The companies have committed to security testing that will be "carried out in part by independent experts." This testing will guard against major risks, such as biosecurity and cybersecurity, and will also examine potential societal harms, such as bias and discrimination.

What is the stance of the Biden administration on these AI safety guidelines?
President Joe Biden's administration brokered these AI safety guidelines and secured voluntary commitments from the seven companies. Biden has said that these companies have a "fundamental obligation" to ensure their products are safe, and that the commitments represent an important step toward achieving that.
More about AI Safety Guidelines

Artificial Intelligence: Safety, Security, and Society
White House Technology Policy
Understanding AI Technology
The Future of AI Regulation
Voluntary Commitments for AI Safety
AI and Cybersecurity
Artificial Intelligence and Bias

8 comments

TechGuru88 (July 22, 2023 - 12:33 am): This is huge! I mean, the tech giants on one table agreeing to something. They need to be super careful though, AI can be unpredictable. Lets see what happens next.

John_Doe (July 22, 2023 - 2:35 am): wow, big tech getting together for once for AI safety. wonder how this will pan out, cuz ai is kinda scary if you ask me. hope they do it right!!

SciFiLover (July 22, 2023 - 9:11 am): AI is taking over and now we're talking about self replicating machines… feels like a scifi movie plot to me, doesn't it?

MaryJane (July 22, 2023 - 10:52 am): So…who's auditing this whole process? Seems like an awful lot of trust in corporations who have often shown they can't be trusted. Just saying…

LaymanLarry (July 22, 2023 - 3:07 pm): A lot of big words and promises here. Just hope all this tech stuff doesn't get too crazy! Big names involved though, must be important right?

SiliconValleyFan (July 22, 2023 - 3:26 pm): The commitment to have "red team" tests, that's gonna be tough. It ain't an easy promise to make. Look forward to see how this evolves.

LegalEagle (July 22, 2023 - 8:12 pm): voluntary commitments are a start, but we need laws in place to regulate AI. looking forward to Schumer's legislation.

AI_Enthusiast (July 22, 2023 - 10:07 pm): Good to see that they're thinking of safety before launching AI products. But, this whole AI regulation thing, it's gonna be a rocky ride.