General Discussion
OpenAI finally admits they created a problem with ChatGPT, but they can't fix it
From Gizmodo: OpenAI's New AI-Detector Isn't Great at Detecting AI
https://gizmodo.com/open-ai-chatgpt-ai-text-detector-1850055005
-snip-
However, in OpenAI's own tests, the tool only correctly identified generated text as "likely AI-written" about a quarter of the time. Moreover, about one in ten times, the classifier falsely lists human-made words as computer-generated, the company noted in a blog post.
-snip-
OpenAI admits that ChatGPT has thrown a complicating wrench into classrooms, newsrooms, and beyond, where the tool and others like it have stoked fears of rampant cheating, misleading info, and copyright violations. In response, the company now says it wants to help. "We recognize that identifying AI-written text has been an important point of discussion among educators, and equally important is recognizing the limits and impacts of AI generated text classifiers in the classroom," the company said in its Tuesday blog. "While this resource is focused on educators, we expect our classifier and associated classifier tools to have an impact on journalists, mis/dis-information researchers, and other groups."
But in its current form, this new detection tool probably still isn't accurate enough to meaningfully address growing concern over AI-enabled plagiarism, academic dishonesty, and the propagation of misinformation. "Our classifier is not fully reliable," the company wrote. "It should not be used as a primary decision-making tool."
-snip-
Gizmodo also tested the new AI-detector and got dismal results. They point out OpenAI is in a race with itself, with each improvement making it harder to detect.
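For a sense of scale, the reported numbers (roughly a 26% true-positive rate and a 9% false-positive rate) can be run through some basic base-rate arithmetic. The prevalence figure below is purely an assumption for illustration; nothing in the article says how many essays in a real class are AI-written:

```python
# Rough base-rate arithmetic for the reported classifier numbers:
# ~26% true-positive rate, ~9% false-positive rate (per OpenAI's post).
tpr = 0.26   # P(flagged | AI-written)
fpr = 0.09   # P(flagged | human-written)
p_ai = 0.20  # ASSUMED prevalence of AI-written essays, for illustration only

p_flag = tpr * p_ai + fpr * (1 - p_ai)  # overall fraction of essays flagged
precision = (tpr * p_ai) / p_flag       # P(actually AI-written | flagged)
miss_rate = 1 - tpr                     # AI-written essays that slip through

print(f"flagged: {p_flag:.1%}, precision: {precision:.1%}, missed: {miss_rate:.0%}")
# → flagged: 12.4%, precision: 41.9%, missed: 74%
```

In other words, under that assumed prevalence, more than half of the essays the tool flags would be falsely accused human work, while about three quarters of the AI-written ones sail through undetected.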
It was IMO criminally stupid and reckless of OpenAI to release ChatGPT, given the problems and disruptions it started causing immediately.
Any 10-year-old could have informed them that it would immediately be used for cheating in school.
And tossing software that will tempt employers to replace workers with free AI was the economic equivalent of lighting a match in a fireworks factory.
But at this point OpenAI sounds mainly concerned about education - though nowhere near as concerned as they should be if they were operating ethically - and they want teachers to help bail them out.
From the blog post Gizmodo cited:
https://openai.com/blog/new-ai-classifier-for-indicating-ai-written-text/
"We are engaging with educators in the US to learn what they are seeing in their classrooms and to discuss ChatGPT's capabilities and limitations, and we will continue to broaden our outreach as we learn. These are important conversations to have as part of our mission is to deploy large language models safely, in direct contact with affected communities.
If you're directly impacted by these issues (including but not limited to teachers, administrators, parents, students, and education service providers), please provide us with feedback using this form. Direct feedback on the preliminary resource is helpful, and we also welcome any resources that educators are developing or have found helpful (e.g., course guidelines, honor code and policy updates, interactive tools, AI literacy programs)."
old as dirt
(1,972 posts)I wonder if I'd pass it...
Johnny2X2X
(19,114 posts)"You were so consumed with finding out if you could, that you never asked yourselves if you should."
AI needs to be pursued carefully and thoughtfully.
highplainsdem
(49,034 posts)Prairie_Seagull
(3,336 posts)highplainsdem
(49,034 posts)TxGuitar
(4,209 posts)Or read any sci-fi? Hell, even if they just watched the first Terminator movie they'd know it was a bad idea!
dalton99a
(81,570 posts)Their AI detector is a joke.
Try a prompt, save the output, wait a while, and feed it back to their detector.... Result: "UNLIKELY to be AI - Nope, you didn't get that from us!" (In other words, your professor is wrong)
eppur_se_muova
(36,289 posts)Leave it to someone else to ask "why would you want to do this"?
Tetrachloride
(7,865 posts)Copying essays has been with us since The Gnostic Gospels.
I didn't have teachers who taught how to write an essay. I faked my way through.
If Chat, Google, FB helps with the learning process, so be it. That Pandora's box is open.
But grading a student need not be linked to any essay. Multiple choice, oral exams, experiments, and volunteer work are other ways of evaluating.
GusBob
(7,286 posts)Im sorry Dave I cant do that
48656c6c6f20
(7,638 posts)treestar
(82,383 posts)aren't they concerned that they will be unable to perform at some point and get into bigger trouble?
If you fake your way into admittance to Harvard, that means you could not get in on your own power. How will you not flunk out?
highplainsdem
(49,034 posts)2naSalit
(86,775 posts)Shut it down. Remove it from the web and discontinue support.
highplainsdem
(49,034 posts)hunter
(38,326 posts)I deal with actual living breathing human beings who behave as automatons every damned day.
Just because something was "created" by an actual human being doesn't mean I can set aside my own critical thinking skills.
Christ on Toast, sometimes I myself behave as an automaton. I remember a few university exams I was sure I'd fail because I truly didn't understand the material but miraculously passed because I'd somehow barfed up an adequate amount of stuff I hadn't yet digested. Automaton Hunter.
Some of my own posting here on DU seems to me a bit robotic.
Thomas Kinkade was an actual artist who, at some point, automated his production process for fame, infamy, fortune, and misfortune.
Why the fuck should I care if someone uses a computer to skip the "actual artist" step? Who the fuck cares if the next Thomas Kinkade or Stephen King is a robot? Who the hell cares if the next Marvel Superhero movie is created entirely by robots?
This "Artificial Intelligence" changes nothing in the world of art, absolutely nothing. "True Artists," however you define that, have always been competing with automatons.
God knows I don't want to be a true artist. Those sorts cut off their own ears and eventually kill themselves.
highplainsdem
(49,034 posts)Which should be everyone who cares about humans.
Really? How many artists have you known who cut off their ears or committed suicide?
I'm sorry you're so cynical about art and artists.
But you're wrong.
hunter
(38,326 posts)My parents are artists and my childhood was as weird and unusual as anything you might imagine.
Here's a story from my childhood:
One day I answer the home phone and it's Marty Feldman.
Teenage me yells "MOM! It's Marty Feldman!"
My mom yells back, "Tell him I'll be there in a minute!"
So I did, and there I was having the sort of conversation with Marty Feldman any adult would have with a kid until my mom came to the phone a few minutes later.
My dad draws and my mom writes. They met in Hollywood. Until my dad retired with a good union pension they always had day jobs. Their day jobs were generally related to their arts, but their art didn't pay the bills.
My wife and I live in a house stuffed to the rafters with art, much of it purchased from artists who are not full time artists.
If you want real art created by humans buy it or make it yourself.
highplainsdem
(49,034 posts)drawing. But AI threatens the livelihoods of people who do make a living from art, and it does so by ripping off work already done by artists, writers and musicians.
I don't want to see that treated as acceptable either ethically or commercially.
hunter
(38,326 posts)https://en.wikipedia.org/wiki/Computer_%28occupation%29
The women are the computers. Some of them may have been creative mathematicians but that's not what they were hired to do.
Professionally my mom was a wordsmith. Advertising copy, ghostwriting, typing, editing, transcribing, etc., but that's not her art, it's her day job.
My dad has a Fine Arts degree from a very respectable university. His military service as a nearsighted Radar O'Reilly army medical clerk, who by the luck of the draw wasn't sent off to Korea, was his lesson in practical living.
We all have arts. We're human.
I'm an evolutionary biologist by natural inclination and formal training. That art has never paid the bills but I did pick up some other useful skills along the way.
These days I'm thinking every liberal arts or fine arts major should learn a trade as well. It should be required for graduation. Looking back I'd have picked electrician. The way it turned out I did a lot of work in construction, with some work in computers and medicine on the side.
My wife and I were science teachers when we married.
😮
honest.abe
(8,685 posts)Clearly there are issues with students cheating but I think that can be resolved. Universities and schools are already implementing new procedures and rules on tests and essays etc. They will figure it out. Also the threat to artists is real but not a crisis. If someone wants a real painting by a human just get one that is authenticated and signed. Also the copyright stuff seems overblown as well. There were already issues like this pre-AI generated art. If someone copies someone else's work then file a copyright lawsuit.
ChatGPT and other AI systems can be hugely beneficial to society and will no doubt become mainstream.
The same things were said when the public first started getting internet access.
hunter
(38,326 posts)After the internet was opened to everyone I remember finding a story my ex girlfriend had written on some new slash site, posted by a guy who claimed he wrote it. He got some push back from others, "Dude, you didn't write that..." and there was the usual huffing and puffing when some other guy joined the conversation claiming HE wrote it.
I was lurking, reading all this, and laughing.
I'd been there when my girlfriend wrote the story, when she typed it up, and when she made a few copies of it at the library to pass around, about a decade earlier. The story is still floating around the internet with multiple authors claiming it. I'm certain my ex girlfriend is never going to claim it, no more than I'd claim some of the stuff I was writing then.
Hell, some of my early DU posts are embarrassing enough...
But that's what this is all about, same as it ever was. Don't claim stuff that isn't yours, don't misrepresent the stuff you claim as your own.
If you use an AI to create something new don't pretend it all came out of your own head. Give credit where credit is due.
edisdead
(1,956 posts)But wait do I really build guitars if I dont make the neck from scratch? Or if I use a pre-fabbed guitar body? Or even if I use limber does it cou t if I dont grow and chop the tree? What about the tools? Do I need to create those too?
Wherever we are on the human timeline there are advances in technologies and tools. Thats just how things go. I know it wont be in my lifetime but I am excited for a future where labor is something required for the person enjoying the output rather than something enjoyed by the exploiting class. AI *could* help to get there.
joshcryer
(62,276 posts)ThoughtCriminal
(14,049 posts)That is a sign that the ability to write essays is not what we should be grading.
If an AI is writing news stories, that is a sign that reporters need to be doing real journalism.
uponit7771
(90,364 posts)highplainsdem
(49,034 posts)As for the importance of being able to write, whether essays or news stories - it's communication. Learning to write effectively teaches both communication and reasoning skills. Do you really want a society where humans can't communicate effectively, but AI can? Who do you think will be in charge?
Essay writing can't be replaced easily by oral exams, which favor extroverts anyway.
Essay tests - as much as some students hate them (usually the ones who hate school in general, or haven't bothered to learn how to think clearly and communicate those thoughts) - are important for educators. Having students cheat with ChatGPT is a disaster.
hunter
(38,326 posts)I always had to buy at least twice as many bluebooks as my classmates because my handwriting hasn't progressed much beyond my third grade scrawl. I never mastered cursive.
Teachers and professors always recognized my writing, both the scrawl and the voice. I've never written anything in the passive voice.
Believe it or not, I minored in English. It was a very great frustration to all concerned.
I once had a professor throw up her hands during an office visit and tell me I wrote like an "angry young man with a head injury..."
We both suffered an extremely long and uncomfortable silence as she sought a way to claw back her words. I wasn't hurt at all but couldn't come up with a way to express that.
highplainsdem
(49,034 posts)heard anything like that from any teacher.
Re smaller classes - we will need a lot more teachers if they're going to have to fight back against AI-assisted cheating.
And they're going to have to do everything they can to fight it, to educate those kids, because kids using AI to do their schoolwork for them are learning nothing but cheating.
And the dumbed-down adults they'll become will be easily taken in by AI-generated political ads tailored for them thanks to data mining. Which those kids will have added more data to with every interaction with AI.
hunter
(38,326 posts)... for the angry hot mess I'd turned in as my term paper.
I'm totally embarrassed now, it's not something I'm proud of.
I stormed into his office when I got my report card and demanded to know why.
He said he wanted to talk to me.
We talked, he changed my grade to a "B-"
I asked what he would have done if I hadn't come back.
He said he'd have left my grade as a fail.
I quit high school at sixteen and it took me nine years to graduate from college. I have too many stories like that.
ThoughtCriminal
(14,049 posts)"While grading essays for his world religions course last month, Antony Aumann, a professor of philosophy at Northern Michigan University, read what he said was easily the best paper in the class. It explored the morality of burqa bans with clean paragraphs, fitting examples and rigorous arguments.
A red flag instantly went up.
Mr. Aumann confronted his student over whether he had written the essay himself. The student confessed to using ChatGPT, a chatbot that delivers information, explains concepts and generates ideas in simple sentences and, in this case, had written the paper."
highplainsdem
(49,034 posts)best-written paper, that indicates students need to be taught how to write better. It does NOT mean they should turn over reasoning and communicating to AI.
Not unless you want an idiocracy controlled by AI.
iemanja
(53,066 posts)A professor tested it.
highplainsdem
(49,034 posts)XorXor
(624 posts)The dawn of artificial intelligence has brought forth a plethora of new opportunities and applications. Of these, GPT-3 and applications such as ChatGPT have become some of the most popular and widely used AI technologies. However, due to their potential for misuse, there has been concern about their implications on society and their potential for causing harm. While it is prudent to be wary of potential dangers posed by such technologies, the fears surrounding GPT-3 and applications like ChatGPT are largely overstated.
Firstly, the potential for disinformation is often cited as a key concern with GPT-3. It is true that with GPT-3's ability to generate text, it is possible to create convincing messages that appear legitimate. However, such concerns are largely invalid as GPT-3's output is, at best, only semi-coherent. Further, GPT-3 is still unable to understand the nuances of human language, and its output is often too formulaic and simplistic to be convincing. As such, the chances of GPT-3's output being used to successfully deceive people are fairly low.
Secondly, there is also concern that GPT-3 could be used to cheat in academics. While it is true that GPT-3 could be used to generate answers for exams, it is unlikely that such methods would be successful. This is due to the fact that GPT-3 does not have the same level of knowledge or comprehension of the topic being tested as a student does. As such, any answers generated by GPT-3 would likely be too generic or incorrect to pass an exam.
In order to mitigate any potential risks posed by GPT-3, it is important to ensure that the technology is used responsibly. This can be achieved by having clear guidelines and regulations in place that dictate how GPT-3 can be used, and by ensuring that any content generated by GPT-3 is properly verified and checked before being distributed. In addition, it is also important to educate people on the limitations of GPT-3, so that they are aware of its potential for misuse.
Finally, while it is important to be aware of the potential dangers posed by GPT-3 and applications like ChatGPT, it is also important to recognize that such technologies have the potential to bring about a great deal of good. GPT-3 can be used to generate content quickly, facilitate research, and even to improve the accuracy of medical diagnoses. As such, it is important to emphasize the potential benefits of GPT-3, rather than focusing solely on the potential risks.
It is understandable to be apprehensive about the potential applications and implications of GPT-3 and similar AI technologies. While it is important to be aware of the potential risks posed by such technologies, it is also important to recognize their potential for good. As such, any fears surrounding GPT-3 and applications like ChatGPT should be kept in perspective. It is also important to recognize that while GPT-3 is a powerful tool, it is still far from perfect, and any potential risks posed by it can be mitigated with the right measures and precautions. As such, while it is important to be mindful of the potential implications of GPT-3 and similar technologies, the fears surrounding them are largely overstated.
Okay, that was actually GPT-3 defending itself. I've used GPT-3 for various projects and plan on using it even more. I also messed around with ChatGPT (which is cool), but the output produced feels synthetic. It also sometimes gets things wrong. Very wrong. Which is funny because it will often be wrong in a very authoritative way. Just last night we made a joke about how it could replace management. It's still very impressive, but it's still pretty limited at this time. Although, I do have to wonder where we are going with this, and where we'll be a decade from now.
highplainsdem
(49,034 posts)to cobble together some arguments in its favor, too.
Where we're going, if this isn't stopped now, is AI taking over almost all types of work.
I've read suggestions that a universal basic income will be needed, but good luck getting that to happen.
Btw, please keep in mind that "generating content quickly" simply means writers, editors and commercial artists being replaced. Research done by AI will again eliminate jobs. Improving the accuracy of medical diagnoses? That remains to be seen...and I don't think any insurance company will want to deal with malpractice suits due to AI, and neither will doctors.
edisdead
(1,956 posts)what happens when people are allowed to just be.
XorXor
(624 posts)It's not a very principled artificial person. There is no doubt a question about the future impact of AI on jobs. Andrew Yang's, before he went weird(Well, weirder), main thing was UBI and the risks that AI posea. While I thought that whole thing made for some interesting discussions to take part in and observe, I don't think we'll be getting that to happen any time soon either.
My point was that at this point, I don't think there is much risk of this taking anyone's job. The stuff it generates feels synthetic. It lacks personality even when you try to force it to have one. It also gets stuff wrong a lot. But worry about this specific tech is like worrying about self-driving cars replacing taxi/Uber drivers, truck drivers, and stuff like that. Sure it might be happening now in some very limited cases, but we're just not there yet. Will we be there someday? I'm sure. It's just not today. I'd rather not throw the baby out with the bathwater. These tools bring lots of positives with them. We should work on ways to mitigate the harm they cause if needed, but not totally do away with them.
LostOne4Ever
(9,290 posts)W_HAMILTON
(7,873 posts)...and then have the ChatGPT recognize whether or not it had been previously created using its software? I'm sure there are some problems I am not aware of, but would it not be able to do that? Then professors and the sort could simply enter a student's essay into the program and see whether or not it matches a previously created AI essay, for instance. I suppose a student could simply change around a few words or add a few sentences, but then couldn't the program simply say how closely it resembled a previously created AI essay?
honest.abe
(8,685 posts)They take the output from ChatGPT and run it through QuillBot which rewrites the text in a different style. Impossible to detect.. I believe.
highplainsdem
(49,034 posts)which I hadn't heard of before.
Another damn crutch so students don't have to learn how to write. Apparently it will also generate summaries, so students don't have to spend as much time reading.
Dumbing the students down while making them seem more competent than they are.
What could go wrong?
honest.abe
(8,685 posts)They are already implementing things like requiring outlines and drafts along with any submitted essay or paper. Also, requiring in-class writing assignments with limited access to the internet. Educational institutions are also seeing the benefits of AI in the classroom. Here is a great article about this.
https://www.chalkbeat.org/2023/1/6/23542142/chatgpt-students-teachers-lesson-ai
highplainsdem
(49,034 posts)told to generate rough drafts as well. The only way to ensure students have actually written something will be having them write it, minus any device they can use for cheating, while they're being watched.
Some students - the ones who really want to learn - won't cheat. But a lot will, and ChatGPT is a cheating tool lazy students could only have dreamed of in the past.
And if they know adults are using it to avoid having to write, or research, or create something themselves, they won't really see it as cheating.
But they're still being dumbed down.
honest.abe
(8,685 posts)I think the more effort the student is required to put into the document the more likely the teacher will be able to determine if he/she is cheating.
highplainsdem
(49,034 posts)and then tweak a few paragraphs so they're clumsier. Instant rough draft.
honest.abe
(8,685 posts)So in the process maybe learns something.
highplainsdem
(49,034 posts)Doesn't learn how to reason and how to communicate via writing.
highplainsdem
(49,034 posts)it just generated as AI-generated.
W_HAMILTON
(7,873 posts)...to match it up to. But maybe there are storage limitations or something like that that would prevent that from being feasible, I don't know...
honest.abe
(8,685 posts)I suppose its possible but the cost would be outrageous.