
AI Presents a Grave Danger to Democracy in 2024: The Risk of Election Misinformation

by Joshua Brown

For years, computer engineers and tech-savvy political scientists have warned about the dangers of artificial intelligence, cautioning that the technology would soon let almost anyone create fake photos, videos and audio realistic enough to sway the outcome of an election.

At first, images made with artificial intelligence (AI) looked obviously fake and were expensive to produce. Now the technology has matured, and spreading fabricated content on social media costs almost nothing. AI and deepfakes, once thought to be a few years away from causing real trouble, are already here.

Today's AI tools can clone a person's voice and generate hyper-realistic images, video and audio in seconds. Paired with powerful social media platforms, that fake content can spread widely across the internet and be targeted at narrowly chosen audiences, potentially taking deceptive campaign tactics to a new level.

In 2024, that technology could be turned against campaigns and elections, with fabricated emails, texts or videos designed to mislead voters. In response, the federal government has begun setting rules for AI use, and President Joe Biden has met with industry leaders to discuss how to protect against these risks.

A.J. Nash, an executive at the cybersecurity company ZeroFox, said the country is not ready for what is coming. Generative audio and video capabilities are arriving at scale on social platforms, he said, and their effect on society will be major.

Many AI experts worry that someone could use generative AI to create fake videos or images meant to confuse voters, damage a candidate's reputation or even incite violence.

The tactics could take several forms: automated robocalls in a candidate's voice instructing people to vote on the wrong day; fabricated audio or video making it appear a candidate said something they never said; or fake news broadcasts falsely reporting that a candidate has dropped out of the race.

Oren Etzioni, who led the Allen Institute for AI until last year before starting a nonprofit, posed the question: “What if Elon Musk contacts you and requests you to vote for a certain person?” Many people would listen, he said. But it would not actually be Musk.

Donald Trump, who is running for president again in 2024, posted an AI-generated video of CNN anchor Anderson Cooper on his Truth Social platform on Friday. The manipulated video distorted Cooper's reaction to the CNN town hall he held with Trump earlier in the week.

Recently, the Republican National Committee released an ad about President Joe Biden that asked, “What if the weakest president ever got elected again?”

The ad uses AI-generated images to depict a series of imagined crises if Biden is re-elected in 2024: attacks on Taiwan, storefronts shuttered by an economic meltdown, military vehicles patrolling U.S. streets, tattooed criminals sowing fear, and waves of immigrants arriving at the border.

The Republican National Committee has openly acknowledged its use of AI. But secretive political groups and even foreign governments could also use AI and fabricated media to try to undermine how democracy works in America, causing people to lose trust in the system.

“What happens if a bad person or country is pretending to be someone else? What kind of problems will that cause and what can we do about it?” asked Petko Stoyanov, chief technology officer at the cybersecurity company Forcepoint. He believes the 2024 election will bring a flood of fake content online from actors outside the United States.

For example, AI-made videos have already circulated online showing Biden appearing to make disparaging comments about transgender people, along with AI-generated images purporting to show children learning satanism in libraries.

Other fabricated images convinced some viewers they were seeing a mug shot of Donald Trump, even though no booking photo was taken when he appeared in a New York City court on charges of falsifying business records. Still other AI-generated images showed Trump appearing to resist arrest, though their creator made clear they were not real.

Rep. Yvette Clarke, D-N.Y., has introduced legislation that would require candidates to disclose when campaign ads are made with AI. She has also sponsored a bill that would require anyone creating synthetic images to add a watermark identifying them as such. Some states have enacted their own rules on deepfakes.

Clarke fears that ahead of the 2024 election, AI could be used to create video or audio that incites violence and turns Americans against one another.

Clarke said lawmakers must keep pace with the technology while setting guardrails for its use. People can be deceived quickly, she said, and do not have time to verify every piece of information they receive. If AI is weaponized during an election season, she warned, the consequences could be disastrous.

Just this month, a Washington-based trade group for political consultants condemned the use of deepfakes in political advertising, calling them deceptive and saying they have no place in legitimate campaigns.

Politicians have long relied on software and digital tools to run their campaigns, from identifying supporters on social media to finding likely donors. Political operatives and tech experts alike expect similar tools, now powered by AI, to be helpful again when voters head to the polls in 2024.

Mike Nellis, CEO of the digital agency Authentic, said he uses ChatGPT every day and encourages his staff to do the same, as long as they double-check the output before it goes out.

His newest project, built with Higher Ground Labs, is an AI tool called Quiller that writes, sends and evaluates the effectiveness of campaign emails, all typically tedious tasks on a campaign.

“The plan is that every Democratic politician and candidate will have another person to help them out,” he said.

___

Big Big News receives support from several foundations for its coverage of elections and democracy. The Associated Press is solely responsible for all content published by Big Big News.

For continuing coverage of misinformation and developments in artificial intelligence, visit https://bigbignews.net/misinformation and https://bigbignews.net/artificial-intelligence.
