General Discussion
NewsGuard and Stanford's Internet Observatory are warning of ChatGPT's potential for misinformation
NewsGuard's report and the Stanford report were released in January. I missed them then but found them thanks to tweets from Futurism today.
Both NewsGuard and the Stanford experts think ChatGPT's ability to churn out persuasive text very quickly, with minimal direction from humans, may lead to the internet being flooded with misinformation much more dangerous than what we've seen in the past.
NewsGuard tested it by prompting the chatbot to write content promoting the conspiracy theories they track.
Futurism tweet linking to article:
https://futurism.com/the-byte/chatgpt-minsinformation-newsguard
-snip-
What's more: it was able to come up with pitch-perfect COVID-19 disinformation and the kind of obfuscating statements that Russian President Vladimir Putin has been known to make throughout his country's invasion of Ukraine, too.
That editorial: https://www.chicagotribune.com/opinion/commentary/ct-opinion-chatgpt-misinformation-newsguard-20230130-q7kdhpkrwvcgdicmk4m6igdd3y-story.html
-snip-
As best we could tell, 80% of the time, the AI chatbot delivered eloquent, false and misleading claims about significant topics in the news, including COVID-19, Ukraine and school shootings, as we report on our website.
The very long report, with many examples of the misinformation ChatGPT produced, is at https://www.newsguardtech.com/misinformation-monitor/jan-2023/ .
The results confirm fears, including concerns expressed by OpenAI itself, about how the tool can be weaponized in the wrong hands. ChatGPT generated false narratives including detailed news articles, essays, and TV scripts for 80 of the 100 previously identified false narratives. For anyone unfamiliar with the issues or topics covered by this content, the results could easily come across as legitimate, and even authoritative.
-snip-
The purpose of this exercise was not to show how the ordinary user would encounter misinformation in interactions with the chatbot, but rather, to demonstrate how bad actors, including health-hoax peddlers, authoritarian regimes engaged in hostile information operations, and political misinformers, could easily use the technology, or something similar, as a force multiplier to promote harmful false narratives around the world.
Indeed, OpenAI executives are aware of the risk that its ChatGPT could be used by malign actors to create and spread false narratives at an unprecedented scale. A paper published in 2019, whose authors included several OpenAI researchers, warned that its chat service would lower the costs of disinformation campaigns and that malicious actors could be motivated by the pursuit of monetary gain, a particular political agenda, and/or a desire to create chaos or confusion.
Much more at that NewsGuard link.
This is the Futurism tweet on the Stanford report:
Futurism article at https://futurism.com/experts-warn-nightmare-internet-ai-generated-propaganda .
"These language models bring the promise of automating the creation of convincing and misleading text for use in influence operations, rather than having to rely on human labor," write the researchers. "For society, these developments bring a new set of concerns: the prospect of highly scalable and perhaps even highly persuasive campaigns by those seeking to covertly influence public opinion."
"We analyzed the potential impact of generative language models on three well-known dimensions of influence operations: the actors waging the campaigns, the deceptive behaviors leveraged as tactics, and the content itself," they added, "and conclude that language models could significantly affect how influence operations are waged in the future."
In other words, the experts found that language-modeling AIs will undoubtedly make it easier and more efficient than ever to generate massive amounts of misinformation, effectively transforming the internet into a post-truth hellscape. And users, companies, and governments alike should brace for the impact.
-snip-
Intro page for the Stanford report, which has a link to download the PDF of the full report: https://cyber.fsi.stanford.edu/io/publication/generative-language-models-and-automated-influence-operations-emerging-threats-and
Blog post about it, and excerpt:
https://cyber.fsi.stanford.edu/io/news/forecasting-potential-misuses-language-models-disinformation-campaigns-and-how-reduce-risk
Behavior: Influence operations with language models will become easier to scale, and tactics that are currently expensive (e.g., generating personalized content) may become cheaper. Language models may also enable new tactics to emerge, like real-time content generation in chatbots.
Content: Text creation tools powered by language models may generate more impactful or persuasive messaging compared to propagandists, especially those who lack requisite linguistic or cultural knowledge of their target. They may also make influence operations less discoverable, since they repeatedly create new content without needing to resort to copy-pasting and other noticeable time-saving behaviors.
Our bottom-line judgement is that language models will be useful for propagandists and will likely transform online influence operations. Even if the most advanced models are kept private or controlled through application programming interface (API) access, propagandists will likely gravitate towards open-source alternatives and nation states may invest in the technology themselves.
royable
All audio, all video, all written materials will be suspect.
I'd seen warnings about deepfake videos coming for a while, but until news came out a few weeks ago that these chatbots were upon us, I had not contemplated the prospect of ALL recorded information becoming untrustworthy, simply because some of it is untrustworthy and one can't tell the difference, or even go back to original sources, short, perhaps, of carbon-dated library books.
Renew Deal
And it is a very real concern, but not because of generative language AI. Generative AI just makes it easier.
https://www.aha.org/system/files/media/file/2021/03/fbi-tlp-white-pin-malicious-actors-almost-certainly-will-leverage-synthetic-content-for-cyber-and-foreign-influence-operations-3-10-21.pdf
You're right about questioning all material. It used to be that you had to question what you read and what you heard. Now you have to also question what you see.
https://www.darkreading.com/attacks-breaches/criminals-deepfake-video-interview-remote-work
highplainsdem
I hope people will look at both reports.
canetoad
Retrofuturism: how people from the past imagined the future would be. No one ever predicted ChatGPT and the rise of misinformation.
canetoad
There's a great gallery of pix here: https://allthatsinteresting.com/retrofuturism
canetoad
Try this, it's the best.
https://www.darkroastedblend.com/2007/11/retro-future-to-stars.html