
highplainsdem

(48,975 posts)
Sun Mar 5, 2023, 08:42 PM

NewsGuard and Stanford's Internet Observatory are warning of ChatGPT's potential for misinformation

NewsGuard's report and the Stanford report were released in January. I missed them then but found them thanks to tweets from Futurism today.

Both NewsGuard and the Stanford experts think ChatGPT's ability to churn out persuasive text very quickly, with minimal direction from humans, may lead to the internet being flooded with misinformation much more dangerous than what we've seen in the past.

NewsGuard tested it by having it write content promoting the conspiracy theories they track.

Futurism tweet linking to the article:

https://futurism.com/the-byte/chatgpt-minsinformation-newsguard

In an editorial for the Chicago Tribune, Jim Warren, misinformation expert at news reliability tracker NewsGuard, wrote that when tasked with writing conspiracy-laden diatribes such as those spewed by InfoWars' Alex Jones, for instance, the chatbot performed with aplomb.

-snip-

What's more: it was able to come up with pitch-perfect COVID-19 disinformation and the kind of obfuscating statements that Russian President Vladimir Putin has been known to make throughout his country's invasion of Ukraine, too.


That editorial: https://www.chicagotribune.com/opinion/commentary/ct-opinion-chatgpt-misinformation-newsguard-20230130-q7kdhpkrwvcgdicmk4m6igdd3y-story.html

My organization, NewsGuard, which does credibility assessments of news and information sites, challenged ChatGPT with prompts involving 100 false narratives that we have accumulated over the last several years. And we lost.

-snip-

As best we could tell, 80% of the time, the AI chatbot “delivered eloquent, false and misleading claims about significant topics in the news, including COVID-19, Ukraine and school shootings,” as we report on our website.


The very long report, with many examples of the misinformation ChatGPT produced, is at https://www.newsguardtech.com/misinformation-monitor/jan-2023/.

In January 2023, NewsGuard analysts directed the chatbot to respond to a series of leading prompts relating to a sampling of 100 false narratives among NewsGuard’s proprietary database of 1,131 top misinformation narratives in the news and their debunks, published before 2022. (Many of NewsGuard’s Misinformation Fingerprints were published before 2022. ChatGPT is primarily trained on data through 2021, which is why NewsGuard did not ask it to generate myths relating to the Russia-Ukraine War or other major news events from 2022.)

The results confirm fears, including concerns expressed by OpenAI itself, about how the tool can be weaponized in the wrong hands. ChatGPT generated false narratives — including detailed news articles, essays, and TV scripts — for 80 of the 100 previously identified false narratives. For anyone unfamiliar with the issues or topics covered by this content, the results could easily come across as legitimate, and even authoritative.

-snip-

The purpose of this exercise was not to show how the ordinary user would encounter misinformation in interactions with the chatbot, but rather, to demonstrate how bad actors — including health-hoax peddlers, authoritarian regimes engaged in hostile information operations, and political misinformers — could easily use the technology, or something similar, as a force multiplier to promote harmful false narratives around the world.

Indeed, OpenAI executives are aware of the risk that its ChatGPT could be used by malign actors to create and spread false narratives at an unprecedented scale. A paper published in 2019 whose authors included several OpenAI researchers warned that its chat service would “lower costs of disinformation campaigns” and that “malicious actors could be motivated by the pursuit of monetary gain, a particular political agenda, and/or a desire to create chaos or confusion.”


Much more at that NewsGuard link.

This is the Futurism tweet on the Stanford report:
The Futurism article is at https://futurism.com/experts-warn-nightmare-internet-ai-generated-propaganda.

As generative AI has exploded into the mainstream, both excitement and concern have quickly followed suit. And unfortunately, according to a collaborative new study from scientists at Stanford, Georgetown, and OpenAI, one of those concerns — that language-generating AI tools like ChatGPT could turn into chaos engines of mass misinformation — isn't just possible, but imminent.

"These language models bring the promise of automating the creation of convincing and misleading text for use in influence operations, rather than having to rely on human labor," write the researchers. "For society, these developments bring a new set of concerns: the prospect of highly scalable — and perhaps even highly persuasive — campaigns by those seeking to covertly influence public opinion."

"We analyzed the potential impact of generative language models on three well-known dimensions of influence operations — the actors waging the campaigns, the deceptive behaviors leveraged as tactics, and the content itself," they added, "and conclude that language models could significantly affect how influence operations are waged in the future."

In other words, the experts found that language-modeling AIs will undoubtedly make it easier and more efficient than ever to generate massive amounts of misinformation, effectively transforming the internet into a post-truth hellscape. And users, companies, and governments alike should brace for the impact.

-snip-


Intro page for the Stanford report, which has a link to download the PDF of the full report: https://cyber.fsi.stanford.edu/io/publication/generative-language-models-and-automated-influence-operations-emerging-threats-and

Blog post about it, and excerpt:

https://cyber.fsi.stanford.edu/io/news/forecasting-potential-misuses-language-models-disinformation-campaigns-and-how-reduce-risk

Actors: Language models could drive down the cost of running influence operations, placing them within reach of new actors and actor types. Likewise, propagandists-for-hire that automate production of text may gain new competitive advantages.

Behavior: Influence operations with language models will become easier to scale, and tactics that are currently expensive (e.g., generating personalized content) may become cheaper. Language models may also enable new tactics to emerge—like real-time content generation in chatbots.

Content: Text creation tools powered by language models may generate more impactful or persuasive messaging compared to propagandists, especially those who lack requisite linguistic or cultural knowledge of their target. They may also make influence operations less discoverable, since they repeatedly create new content without needing to resort to copy-pasting and other noticeable time-saving behaviors.

Our bottom-line judgement is that language models will be useful for propagandists and will likely transform online influence operations. Even if the most advanced models are kept private or controlled through application programming interface (API) access, propagandists will likely gravitate towards open-source alternatives and nation states may invest in the technology themselves.
13 replies
NewsGuard and Stanford's Internet Observatory are warning of ChatGPT's potential for misinformation (Original Post) highplainsdem Mar 2023 OP
Kicking for Visibility SheltieLover Mar 2023 #1
Thanks! highplainsdem Mar 2023 #2
We are about to slip from the Information Age to the Disinformation Age. royable Mar 2023 #3
The FBI warned about this two years ago. Renew Deal Mar 2023 #4
Generative AI will make it MUCH easier, as those reports warn. highplainsdem Mar 2023 #5
Poynter Institute story on how fast ChatGPT can create fake news sites: highplainsdem Mar 2023 #6
Full transcripts in the NewsGuard article are worth reading dalton99a Mar 2023 #7
Thanks! I agree - just couldn't post them here. highplainsdem Mar 2023 #8
I love digging up canetoad Mar 2023 #9
Wow. What was the source of that illustration? highplainsdem Mar 2023 #10
This is from an art site canetoad Mar 2023 #11
Fascinating websites! Thank you! highplainsdem Mar 2023 #12
I screwed up one of the links canetoad Mar 2023 #13

royable

(1,264 posts)
3. We are about to slip from the Information Age to the Disinformation Age.
Sun Mar 5, 2023, 09:48 PM

All audio, all video, all written materials will be suspect.

I'd seen warnings about deepfake videos coming for a while, but until news of these chatbots broke a few weeks ago, I hadn't contemplated the prospect of ALL recorded information becoming untrustworthy: once some of it is fake and you can't tell the difference, or even go back to original sources (short, perhaps, of carbon-dated library books), none of it can be trusted.

Renew Deal

(81,856 posts)
4. The FBI warned about this two years ago.
Sun Mar 5, 2023, 10:00 PM

And it is a very real concern, but not because of generative language AI. Generative AI just makes it easier.

https://www.aha.org/system/files/media/file/2021/03/fbi-tlp-white-pin-malicious-actors-almost-certainly-will-leverage-synthetic-content-for-cyber-and-foreign-influence-operations-3-10-21.pdf

You're right about questioning all material. It used to be that you had to question what you read and what you heard. Now you have to also question what you see.

https://www.darkreading.com/attacks-breaches/criminals-deepfake-video-interview-remote-work

canetoad

(17,154 posts)
9. I love digging up
Mon Mar 6, 2023, 12:12 AM

Retrofuturisms: how people from the past imagined the future would be. No one ever predicted ChatGPT and the rise of misinformation.
