Lawmakers Question Meta and X on Lack of Rules for AI-Generated Political Deepfakes
by Michael Nguyen | October 5, 2023

Artificial intelligence-driven deepfakes have gained significant attention this year, primarily for their uncanny ability to make it appear as though celebrities are engaging in unusual actions. From Tom Hanks promoting dental plans to Pope Francis donning a stylish puffer jacket and U.S. Sen. Rand Paul lounging on the Capitol steps in a red bathrobe, these AI-generated deepfakes have captured the public’s imagination. However, as the U.S. presidential election approaches, concerns are rising about their potential impact.

Google was the first major tech company to announce plans to introduce new labels for deceptive AI-generated political advertisements that could manipulate a candidate’s voice or actions. Now, several U.S. lawmakers are pressing social media platforms such as X (formerly Twitter), Facebook, and Instagram to follow suit and explain why they have not taken similar measures.

Two Democratic members of Congress, U.S. Sen. Amy Klobuchar of Minnesota and U.S. Rep. Yvette Clarke of New York, have sent a letter to Meta CEO Mark Zuckerberg and X CEO Linda Yaccarino expressing “serious concerns” about the emergence of AI-generated political ads on their platforms. They are seeking explanations of any rules the platforms are developing to mitigate the potential harm to free and fair elections.

Klobuchar, in an interview, emphasized the importance of transparency, stating, “They are two of the largest platforms, and voters deserve to know what guardrails are being put in place. We are simply asking them, ‘Can’t you do this? Why aren’t you doing this?’ It’s clearly technologically possible.”

The letter from Klobuchar and Clarke warns that a lack of transparency around political ads containing AI-generated content could lead to a flood of election-related misinformation and disinformation ahead of the 2024 elections, on platforms where voters often seek information about candidates and issues.

As of now, neither X nor Meta has responded to requests for comment. Clarke and Klobuchar have set an October 27 deadline for the executives to respond.

These actions are part of a broader effort by lawmakers to regulate AI-generated political ads. Clarke introduced a House bill earlier this year that would amend federal election law to require labels on election advertisements containing AI-generated images or video, and Klobuchar is sponsoring a companion bill in the Senate.

Google has already announced that, starting in mid-November, it will require clear disclaimers on any AI-generated election ads that alter people or events on YouTube and other Google platforms. Meta, the parent company of Facebook and Instagram, does not have a rule specific to AI-generated political ads, but it does have policies restricting the use of “faked, manipulated, or transformed” audio and imagery for misinformation.

A bipartisan Senate bill, co-sponsored by Klobuchar and Republican Sen. Josh Hawley of Missouri, would go further by banning “materially deceptive” deepfakes related to federal candidates, with exceptions for parody and satire.
AI-generated ads have already made their way into the 2024 election, including one aired by the Republican National Committee that depicts a dystopian future if President Joe Biden is reelected. Such ads could be banned under the proposed Senate bill. Klobuchar highlighted the need for these regulations, saying that misleading deepfake content during a presidential race could have a significant impact on voters’ perceptions.

As the debate over AI-generated political ads continues, lawmakers are navigating the complex balance between free speech rights and the prevention of misleading information. Clarke’s bill, if passed, would empower the Federal Election Commission to enforce a disclaimer requirement on AI-generated election ads, similar to Google’s planned practice. The FEC has taken a step toward regulating AI-generated deepfakes in political ads, opening a public comment period, which ends on October 16, on a petition brought by the advocacy group Public Citizen.

Frequently Asked Questions (FAQs) about AI-generated deepfakes

What are AI-generated deepfakes?
AI-generated deepfakes are computer-generated audio or video content that uses artificial intelligence algorithms to convincingly manipulate the appearance or voice of individuals, often making it appear as if they said or did things they never actually did.

Why are lawmakers concerned about AI-generated political deepfakes?
Lawmakers are concerned because AI-generated political deepfakes have the potential to deceive voters and disrupt the democratic process. These deepfakes can be used to create misleading political ads that manipulate a candidate’s voice or actions, leading to misinformation and disinformation during elections.

Which major tech company has taken steps to address AI-generated political deepfakes?
Google was the first major tech company to announce its intention to introduce new labels for deceptive AI-generated political advertisements that could alter a candidate’s voice or actions. The move is aimed at increasing transparency in political advertising.

What actions are U.S. lawmakers taking to regulate AI-generated political ads?
U.S. lawmakers, including Sen. Amy Klobuchar and Rep. Yvette Clarke, have introduced bills in Congress to regulate AI-generated political ads. Clarke’s bill seeks to amend federal election law to require labels on such ads, while Klobuchar is sponsoring a companion bill in the Senate. Additionally, a bipartisan Senate bill aims to ban “materially deceptive” deepfakes related to federal candidates, with exceptions for parody and satire.
What concerns do lawmakers have about AI-generated political deepfakes in the lead-up to the 2024 elections?
Lawmakers are concerned that a lack of rules and transparency around AI-generated political content on social media platforms could lead to a surge in election-related misinformation and disinformation. They worry that voters may be misled by manipulated content, shaping their perceptions of candidates and issues.

How are social media platforms like Meta (Facebook and Instagram) and X (formerly Twitter) responding to these concerns?
As of this report, Meta and X have not publicly responded to lawmakers’ requests for comment or outlined specific rules for AI-generated political ads. Meta does, however, have existing policies restricting the use of “faked, manipulated, or transformed” audio and imagery for misinformation.

What is the role of the Federal Election Commission (FEC) in addressing AI-generated political deepfakes?
The FEC has taken a step toward potentially regulating AI-generated deepfakes in political ads. It has opened a public comment period on a petition brought by the advocacy group Public Citizen, which calls for rules on misleading images, videos, and audio clips in political advertising.

How might AI-generated political deepfakes impact the 2024 election?
AI-generated deepfakes could play a significant role in shaping voter perceptions during the 2024 election. Misleading content that appears to show candidates saying or doing things they never did could influence voters and contribute to a climate of misinformation and distrust.

What is the primary goal of lawmakers in regulating AI-generated political deepfakes?
Lawmakers aim to strike a balance between protecting free speech rights and preventing the spread of misleading information in the political arena. They want voters to know when they encounter AI-generated content and to ensure disclaimers clarify the nature of such content.