General Discussion

Is it safe to use ChatGPT for your task?
I first saw this on Reddit's ChatGPT forum. The tweet from its creator is below.
Notice that the first question here separates those who are basically using ChatGPT for entertainment from those who want or need its answers to be truthful.
And there are a LOT of people who want entertaining chatbots, no matter how wrong the chatbots' answers are. Which is why Microsoft is loosening the strict limitations it had to impose suddenly on its often loony-sounding Bing chatbot after the initial release got too much bad press. And why Google execs are telling their employees that their Bard AI, designed for search, isn't really about search but about entertaining people.
There are also a LOT of people who overestimate their ability to catch the errors that chatbots make - especially since chatbots deliver those errors in an authoritative tone. People using a chatbot to save time and avoid work are EXACTLY the type of people who should not use chatbots - and the AI-produced text those people create is the text most likely to cause harm later, whether it's financial or medical harm (which could have happened if anyone had followed the advice in some recent AI-written articles published by magazines online) or the accidental propagation of misinformation, from non-facts to nonexistent articles and studies (I'm seeing more and more stories about real experts being contacted by people trying to find articles or books that ChatGPT said those experts wrote but that they never did).
Now we get to the all-important question of taking responsibility for inaccuracies that aren't caught.
The people pushing AI into our society are clearly not intending to take any responsibility. They publish warnings that AI can make mistakes while sounding convincing. They've given AI's tendency to invent facts the charming name of "hallucinating." Don't blame the creators and marketers of AI, and don't blame the AI itself. It's just hallucinating, and sometimes hallucinating can be cute.
And it's a safe bet that most people using AI will not want to be held responsible for its mistakes, if there are any serious consequences for them.
And this flowchart misses one huge risk with ChatGPT and similar AI.
Lack of security, and the chance that any input you provide will become output for someone else, whether unintentionally or intentionally - including output that might reveal creative or business information you'd intended to keep to yourself, or personal information you weren't intending to share with others.
These AIs typically come with warnings against sharing anything like that, but as with the warnings that AI can be inaccurate and hallucinate, they tend to get buried by all the hype from the same companies.
Reddit's ChatGPT forum has seen a lot of posts lately from people who opened what was supposed to be a history of their own ChatGPT prompts but instead apparently saw a list of prompts entered by other users. They weren't happy about the thought that other people might see their chat prompts.
And there are more serious concerns:
https://www.darkreading.com/risk/employees-feeding-sensitive-business-data-chatgpt-raising-security-fears
Here's the tweet from the creator of the flowchart:
Link to tweet
4 replies, 654 views
Rec (7)
Is it safe to use ChatGPT for your task? (Original Post) - highplainsdem, Mar 2023
EYESORE 9001 (25,922 posts)
1. May as well write it myself
Yunno?
highplainsdem (48,959 posts)
4. Yep. Also better for your own brain.
Renew Deal (81,852 posts)
2. Gartner says yes, it is safe.
https://democraticunderground.com/100217715298
That flowchart is missing something important. The final question should say "Are you able to modify the output to be accurate or take responsibility for possible inaccuracies?"
Basically, the flowchart is not based in reality.
highplainsdem (48,959 posts)
3. Actually, I would say that correcting inaccuracies is implied in verifying
ChatGPT's output. The creator of that flowchart was not trying to suggest modifications would never be made.
In reality, people willing to make those modifications, and with some expertise, may still miss ChatGPT's mistakes - as happened with a number of AI-written articles published online earlier this year, even though those publications said their AI material was carefully reviewed by editors.
And they're much more likely to miss the mistakes if they're using ChatGPT to save time.