
Sports Illustrated is the latest media company damaged by an AI experiment gone wrong

by Joshua Brown
AI Journalism

Sports Illustrated has become the latest media company to see its reputation tarnished by an AI experiment gone awry. In an era when artificial intelligence is making its presence felt across industries, including journalism, the magazine’s attempt to embrace the technology has backfired.

While Sports Illustrated initially denied using artificial intelligence to write its articles, a report by Futurism revealed that the publication had been publishing product reviews under author names that could not be verified. The headshots of these authors, such as “Drew Ortiz,” were traced to a website selling AI-generated portraits. When questioned, Sports Illustrated removed all authors with AI-generated profile photos from its website without offering an explanation.

Furthermore, Futurism’s report quoted an anonymous source at the magazine who confirmed the use of artificial intelligence in content creation, contrary to its initial denial. Sports Illustrated attributed the articles to a third-party company, AdVon Commerce, which had assured the magazine that humans wrote and edited the content. AdVon had used pen names for its writers, a practice Sports Illustrated says it disavows. The magazine announced it was removing the content and ending its partnership with AdVon pending an internal investigation.

The Sports Illustrated Union expressed its dismay at these revelations and demanded transparency from Arena Group management regarding what content had been published under the magazine’s name. They insisted on upholding basic journalistic standards, including not publishing computer-generated stories by fictitious authors.

This incident is not an isolated one. Gannett faced a similar situation when it paused an experiment that used AI to generate articles about high school sports events, causing errors and leading to negative publicity. CNET also used AI to create financial service articles attributed to “CNET Money Staff” without clear disclosure to readers, only addressing the issue after it was exposed by other publications.

In contrast, some companies have been more transparent about their use of AI in content creation. Buzzfeed, for example, openly attributes articles to both human writers and its AI assistant, Buzzy the Robot. The Big Big News has been using automation technology to assist with articles since 2014, always including a note at the end of stories explaining the technology’s role in their production.

In conclusion, while AI has the potential to enhance content creation, transparency is key. Media organizations should be forthright about the use of AI in their articles to maintain trust with their readers and uphold the values of truth and transparency that are central to journalism.

Frequently Asked Questions (FAQs) about AI Journalism

What happened with Sports Illustrated and AI content?

Sports Illustrated faced controversy when it was revealed that some of its articles were written by authors who could not be identified, and there were suspicions of AI involvement in content creation.

Did Sports Illustrated use AI to write articles?

Initially, Sports Illustrated denied using AI for content creation, but later, an anonymous source confirmed AI’s role in some content. The magazine attributed these articles to a third-party company, AdVon Commerce.

How did Sports Illustrated respond to the situation?

Sports Illustrated removed the content in question and terminated its partnership with AdVon Commerce. An internal investigation was initiated to address the issue.

What was the reaction of the Sports Illustrated Union?

The Sports Illustrated Union expressed concern and demanded transparency from Arena Group management regarding the content published under the SI name. They also insisted on upholding journalistic standards.

Have other media companies faced similar AI-related issues?

Yes, other media companies like Gannett and CNET have experienced challenges with AI-generated content. Gannett paused an AI experiment due to errors, and CNET faced criticism for not transparently disclosing AI involvement in content creation.

What is the key lesson from this incident?

Transparency is crucial when using AI in journalism. Media organizations should openly communicate when AI is used in content creation to maintain trust and uphold journalistic integrity.

More about AI Journalism

  • [Sports Illustrated Article](insert link here)
  • [Futurism Report](insert link here)
  • [Gannett AI Experiment](insert link here)
  • [CNET AI-generated Articles](insert link here)
  • [Buzzfeed AI-assisted Content](insert link here)
  • [The Big Big News Transparency](insert link here)


5 comments

Reader101 November 29, 2023 - 2:49 am

Wow, this is crazy! I didn’t know AI could write articles. SI had a tough time with it; they should be honest about it!

NewsWatchdog November 29, 2023 - 10:18 am

Other media companies should learn from SI’s misstep. CNET, Gannett: be upfront about AI use, or face backlash!

AIEnthusiast November 29, 2023 - 2:18 pm

AI’s cool, but you have to be clear about it, not hide it like SI and Gannett did. Buzzfeed is doing it right, showing how the AI works!

JournalismGeek November 29, 2023 - 2:58 pm

Transparency is key; they have to say when AI is helping write stuff. Good lesson for every media company out there!

JournoEthicist November 29, 2023 - 7:39 pm

This shows the importance of trust and honesty in journalism. SI should know better; you can’t hide AI involvement!



© 2023 BBN – Big Big News
