Misinformation Tracker Warns Of Rising AI-Content Farms

Warning From NewsGuard

NewsGuard, the maker of a content rating tool, warned on May 1st about a new generation of content farms: 49 news websites whose published content appears to be AI-fabricated, the company said.

According to NewsGuard, the tool identified 49 websites spanning seven languages: Thai, Tagalog, English, French, Portuguese, Czech, and Chinese. These sites appear to be run partially or entirely by artificial intelligence models designed to mimic human writing, presented in the form of ordinary news websites.

However, none of these websites acknowledged using AI to generate stories. NewsGuard’s review of the content revealed that most of it is relatively low-stakes, created to generate easy clicks and revenue, but some sites spread potentially dangerous misinformation.

Obvious Telltale Signs

Chatbot-generated news carries some very obvious red flags. For example, CelebritiesDeaths.com published a story claiming President Joe Biden had “passed away peacefully in his sleep” and that Vice President Kamala Harris had succeeded him.

In this case, a ChatGPT error message appeared after the first few lines of the fake story. The chatbot stated that it “cannot complete the prompt as it goes against OpenAI’s use case policy on generating misleading content.”

Red Flags Indicating AI-Generated News With Aggregated Content

Journalists and analysts at NewsGuard worked on spotting prominent red flags that indicate AI-generated content. Telltale sentences left in published articles, such as “I am not capable of producing 1500 words,” are clear signs that the text was machine-generated.

NewsGuard’s report also revealed that all 49 sites had published at least one article containing obvious AI error messages, and that many articles were summaries of pieces from other prominent news organizations.
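As an illustration of how such red flags might be spotted in practice, here is a minimal Python sketch that scans article text for leftover chatbot phrases of the kind described above. The phrase list and function names are hypothetical examples, not NewsGuard’s actual tooling.

```python
# Minimal sketch (not NewsGuard's actual method): flag articles that contain
# telltale chatbot phrases, such as leftover refusals or error messages.
import re

# Hypothetical list of red-flag phrases, inspired by the examples in the report;
# a real checker would use a much larger and regularly updated set.
RED_FLAG_PATTERNS = [
    r"as an AI language model",
    r"I am not capable of producing \d+ words",
    r"cannot complete (this|the) prompt",
    r"OpenAI'?s use case policy",
]

def find_ai_red_flags(article_text: str) -> list[str]:
    """Return the red-flag patterns found in an article, if any."""
    hits = []
    for pattern in RED_FLAG_PATTERNS:
        if re.search(pattern, article_text, flags=re.IGNORECASE):
            hits.append(pattern)
    return hits

# Example usage with a fabricated snippet resembling the failure mode above.
sample = ("The senator spoke on Tuesday. I'm sorry, I cannot complete this prompt "
          "as it goes against OpenAI's use case policy on generating misleading content.")
print(find_ai_red_flags(sample))  # prints the patterns that matched
```

A simple phrase scan like this only catches the most blatant slip-ups; it cannot detect fluently written but fabricated claims, which still require human fact-checking.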

The report also points to ample evidence of growing interest among digital publishers in investing in AI chatbots.

Tech news site CNET came under fire for using AI to generate low-quality news articles. CNET Money’s AI-generated content was found to contain multiple factual errors.

Misinformation Risks From AI-Generated Content Farms

Based on NewsGuard’s analysis, content farms are misusing AI prolifically and rarely checking the resulting content for factual errors. With services that produce coherent, seemingly error-free text now widely available, AI-generated content farms are steadily on the rise.

To make matters worse, reputable news websites already use AI despite the risk of spreading misinformation when AI-fabricated claims mistakenly slip past editing and proofreading.

Human-in-the-Loop: The Ideal Approach to Thwarting AI-Generated False News

The likelihood of factual errors and editorial oversights makes ethical guidelines essential when incorporating AI into the news industry. A human-in-the-loop approach is the ideal way to keep AI-generated content farms from spreading rumors and misleading information.

To ensure integrity and factual accuracy, it is imperative to apply human oversight even to AI-generated content. Developing a human-AI partnership is therefore crucial to curbing factual errors in AI-generated content. Such an approach will make news more transparent, reliable, and genuine.
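To make the idea concrete, below is a minimal Python sketch of a human-in-the-loop publishing gate, in which an AI-drafted article goes live only after an editor approves it. All names and the review step are hypothetical illustrations under this assumption, not a description of any particular newsroom’s system.

```python
# Minimal human-in-the-loop sketch: AI-drafted articles are held until a human
# editor explicitly approves them. Names here are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class Draft:
    headline: str
    body: str
    ai_generated: bool

def human_review(draft: Draft) -> bool:
    """Placeholder for an editor's fact-check; here it simply asks on the console."""
    print(f"REVIEW NEEDED: {draft.headline}")
    answer = input("Approve for publication? [y/N] ")
    return answer.strip().lower() == "y"

def publish(draft: Draft) -> None:
    print(f"Published: {draft.headline}")

def publish_with_oversight(draft: Draft) -> None:
    # AI-generated drafts never go live without an explicit human approval step.
    if draft.ai_generated and not human_review(draft):
        print(f"Held back pending edits: {draft.headline}")
        return
    publish(draft)

publish_with_oversight(Draft("Market update", "AI-drafted summary...", ai_generated=True))
```

The key design choice is that the approval gate sits between generation and publication, so an AI error message or fabricated claim cannot reach readers without a human signing off first.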

Opporture is a leading AI company with a reputation for high-quality, AI-powered content-related services. The company is reported to ensure that all AI-generated content it handles is error-free and clean, which can go a long way toward thwarting AI-generated false news and ensuring integrity.
