
Lawmakers Question Meta and X on Lack of Rules for AI-Generated Political Deepfakes

by Michael Nguyen

Artificial intelligence-driven deepfakes have gained significant attention this year, largely for their uncanny ability to show celebrities doing things they never did. From Tom Hanks promoting dental plans to Pope Francis donning a stylish puffer jacket and U.S. Sen. Rand Paul lounging on the Capitol steps in a red bathrobe, these AI-generated deepfakes have captured the public’s imagination. However, as the upcoming U.S. presidential election looms, concerns are rising about their potential impact.

Google was the first major tech company to announce plans to introduce new labels for deceptive AI-generated political advertisements that could manipulate a candidate’s voice or actions. Now, several U.S. lawmakers are pressuring social media platforms like X (formerly Twitter), Facebook, and Instagram to follow suit and explain why they have not taken similar measures.

Two Democratic members of Congress, U.S. Sen. Amy Klobuchar of Minnesota and U.S. Rep. Yvette Clarke of New York, have penned a letter to Meta CEO Mark Zuckerberg and X CEO Linda Yaccarino, expressing “serious concerns” about the emergence of AI-generated political ads on their platforms. They are seeking explanations regarding any rules these platforms are developing to mitigate the potential harm to free and fair elections.

Klobuchar, in an interview, emphasized the importance of transparency in this matter, stating, “They are two of the largest platforms, and voters deserve to know what guardrails are being put in place. We are simply asking them, ‘Can’t you do this? Why aren’t you doing this?’ It’s clearly technologically possible.”

The letter sent by Klobuchar and Clarke warns of the impending 2024 elections and the dangers posed by a lack of transparency in political ads featuring AI-generated content. Such content could lead to a flood of election-related misinformation and disinformation on these platforms, where voters often seek information about candidates and issues.

As of now, neither X nor Meta has responded to requests for comment. Klobuchar and Clarke have set an October 27 deadline for the executives to respond.

These actions by lawmakers come as part of a broader effort to regulate AI-generated political ads. Clarke introduced a House bill earlier this year that would amend federal election law to mandate labels for election advertisements containing AI-generated images or videos. Klobuchar is sponsoring a companion bill in the Senate.

Google has already announced that starting in mid-November, it will require clear disclaimers on any AI-generated election ads that alter individuals or events on YouTube and other Google platforms. While Meta, the parent company of Facebook and Instagram, does not have a specific rule for AI-generated political ads, it does have policies restricting the use of “faked, manipulated, or transformed” audio and imagery for misinformation.

A bipartisan Senate bill, co-sponsored by Klobuchar and Republican Sen. Josh Hawley of Missouri, aims to go further by banning “materially deceptive” deepfakes related to federal candidates, with exceptions for parody and satire.

AI-generated ads have already made their way into the 2024 election, including one aired by the Republican National Committee that depicts a dystopian future if President Joe Biden is reelected. Such ads could potentially be banned under the proposed Senate bill.

Klobuchar highlighted the need for these regulations, stating that misleading deepfake content during a presidential race could have a significant impact on voters’ perceptions. As the debate over AI-generated political ads continues, lawmakers are navigating the complex balance between free speech rights and the prevention of misleading information.

Clarke’s bill, if passed, would empower the Federal Election Commission to enforce a disclaimer requirement on AI-generated election ads, similar to Google’s forthcoming practice. The FEC has already taken a step toward regulating AI-generated deepfakes in political ads, opening a public comment period, which ends on October 16, on a petition brought by the advocacy group Public Citizen.

Frequently Asked Questions (FAQs) about AI-generated deepfakes

What are AI-generated deepfakes?

AI-generated deepfakes are computer-generated audio or video content that uses artificial intelligence algorithms to convincingly manipulate and alter the appearance or voice of individuals, often making it appear as if they are saying or doing things they never actually did.

Why are lawmakers concerned about AI-generated political deepfakes?

Lawmakers are concerned because AI-generated political deepfakes have the potential to deceive voters and disrupt the democratic process. These deepfakes can be used to create misleading political ads that manipulate a candidate’s voice or actions, leading to misinformation and disinformation during elections.

Which major tech company has taken steps to address AI-generated political deepfakes?

Google was the first major tech company to announce its intention to introduce new labels for deceptive AI-generated political advertisements that could alter a candidate’s voice or actions. This move is aimed at increasing transparency in political advertising.

What actions are U.S. lawmakers taking to regulate AI-generated political ads?

U.S. lawmakers, including U.S. Sen. Amy Klobuchar and U.S. Rep. Yvette Clarke, have introduced bills in Congress to regulate AI-generated political ads. Clarke’s bill seeks to amend federal election law to require labels on such ads, while Klobuchar is sponsoring a companion bill in the Senate. Additionally, a bipartisan Senate bill aims to ban “materially deceptive” deepfakes related to federal candidates, with exceptions for parody and satire.

What concerns do lawmakers have about AI-generated political deepfakes in the lead-up to the 2024 elections?

Lawmakers are concerned that a lack of rules and transparency regarding AI-generated political content on social media platforms could lead to a surge in election-related misinformation and disinformation. They worry that voters may be misled by manipulated content, impacting their perceptions of candidates and issues.

How are social media platforms like Meta (Facebook and Instagram) and X (formerly Twitter) responding to these concerns?

As of the time of this report, Meta and X have not publicly responded to lawmakers’ requests for comment or outlined specific rules for AI-generated political ads. However, Meta has existing policies restricting the use of “faked, manipulated, or transformed” audio and imagery for misinformation.

What is the role of the Federal Election Commission (FEC) in addressing AI-generated political deepfakes?

The FEC has taken steps towards potentially regulating AI-generated deepfakes in political ads. It has opened a public comment period on a petition brought by the advocacy group Public Citizen, which calls for the development of rules on misleading images, videos, and audio clips in political advertising.

How might AI-generated political deepfakes impact the 2024 election?

AI-generated deepfakes could play a significant role in shaping voter perceptions during the 2024 election. Misleading content that appears to feature candidates saying or doing things they never did could influence voters and contribute to a climate of misinformation and distrust.

What is the primary goal of lawmakers in regulating AI-generated political deepfakes?

Lawmakers aim to strike a balance between protecting free speech rights and preventing the dissemination of misleading information in the political arena. They seek to ensure that voters are aware when they encounter AI-generated content and that disclaimers are in place to clarify the nature of such content.


© 2023 BBN – Big Big News