An AI that can "write" is feeding delusions about how smart artificial intelligence really is
GPT-3, which can converse and write compelling text, is more like a pseudo-intelligence than a real AI
By GARY N. SMITH
PUBLISHED JANUARY 1, 2023 7:30PM (EST)
(Salon) The internet revolution has made many people rich, and the lure of outrageous fortune has tempted many to exaggerate what computers can do. During the dot-com bubble, many companies discovered they could double the price of their stock simply by adding ".com," ".net," or "internet" to their names. Now we face a comparable AI bubble, in which many companies woo customers and investors by claiming to have a business model based on artificial intelligence.
If computers can defeat the most talented human players of chess, Go, and Jeopardy, they surely can outperform humans in any task (or so the thinking goes). That brings us to the recent hullabaloo about an AI program that can pen such compelling writing that it seems to be naturally intelligent. It's OpenAI's GPT-3 large language model (LLM), and though the name is obscure to a layperson (GPT-3 is short for Generative Pre-trained Transformer 3, which doesn't explain much more), what it does is relatively simple: GPT-3 can engage in remarkably articulate conversations and write compelling essays, stories, and even research papers. Many people, even some computer scientists, are convinced that GPT-3 demonstrates that computers now are (or soon will be) smarter than humans. As a finance professor and statistician who has written several books on AI and data science, I find this belief fanciful.
Alas, it is an illusion: a powerful illusion, but still an illusion, reminiscent of the Eliza computer program that Joseph Weizenbaum created in the 1960s. Eliza was programmed to behave like a caricature of a psychiatrist. When a "patient" entered an input, Eliza would repeat the words and/or ask a follow-up question ("You were unhappy as a child? Tell me more about that.").
Even though users knew they were interacting with a computer program, many were convinced that the program had human-like intelligence and emotions and were happy to share their deepest feelings and most closely held secrets. Scientists now call this the Eliza effect. We are vulnerable to this illusion because of our inclination to anthropomorphize: to attribute human-like qualities to non-human, even inanimate, objects like computers.
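The trick Weizenbaum used is simpler than the illusion it creates. A minimal sketch of the idea (not his actual DOCTOR script; the patterns and canned replies here are invented for illustration) is just keyword matching plus echoing the patient's own words back as a question:

```python
import re

# Toy illustration of Eliza-style pattern reflection (NOT Weizenbaum's
# actual script): match a keyword pattern, swap first-person words for
# second-person ones, and echo the fragment inside a canned question.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "was": "were", "me": "you"}

RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i was (.*)", re.I), "You were {0}? Tell me more about that."),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def eliza_reply(text: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."  # default when no rule matches
```

Type "I was unhappy as a child" and it answers "You were unhappy as a child? Tell me more about that." There is no understanding anywhere in that loop, yet users poured their hearts out to it.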
....(snip)....
Large Language Models (LLMs) like GPT-3 do not use calculators, attempt any kind of logical reasoning, or try to distinguish between fact and falsehood. They are trained to identify likely sequences of words, nothing more. It is mind-boggling that statistical text-prediction models can generate coherent and convincing text. However, not knowing what words mean, LLMs have no way of assessing whether their utterances are true or false. GPT-3 asserts its BS so confidently that its behavior is described not as lying but as hallucinating (yet another example of anthropomorphizing). ...(more)
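"Identify likely sequences of words" can be made concrete with a deliberately tiny sketch: a bigram model that always picks the word most often seen after the current one. Real LLMs use neural networks over long contexts rather than bigram counts, but the objective is the same flavor (predict a plausible next token), and note that nothing below has any notion of whether the output is true:

```python
from collections import Counter, defaultdict

# Toy next-word predictor trained on a made-up corpus. It counts which
# word follows which, then generates text by repeatedly emitting the
# most frequent successor. Fluent-looking output, zero understanding.
corpus = ("the cat sat on the mat . the cat ate . "
          "the dog sat on the rug .").split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(word: str) -> str:
    """Most frequent successor of `word` in the training text."""
    return counts[word].most_common(1)[0][0]

def generate(start: str, n: int = 5) -> str:
    """Greedily chain predictions to produce a short 'sentence'."""
    words = [start]
    for _ in range(n):
        words.append(next_word(words[-1]))
    return " ".join(words)
```

Here `generate("the", 3)` yields "the cat sat on": locally plausible word sequences, produced by counting alone. Scaled up by many orders of magnitude, that is the kind of statistical mimicry the article is describing.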
https://www.salon.com/2023/01/01/an-ai-that-can-write-is-feeding-delusions-about-how-smart-artificial-intelligence-really-is/
MiHale
(9,737 posts)So when I was chastised in college because my writing was a bit robotic (at the time I didn't understand the comment), I was ahead of the times.
old as dirt
(1,972 posts)
Quakerfriend
(5,450 posts)my childhood & I was amazed at how clever it was!
My parents, along with the neighbors, built a small ski hill with a rope tow, night lights and a chalet across the street from our house & the poem was about that.
cojoel
(957 posts)More sophisticated but not so much more capable.
As someone told me many years ago in a Computer Science class, it is important not to confuse Artificial Intelligence with Real Stupidity.
stopdiggin
(11,317 posts)there is certainly a difference in regurgitating 'content' - even rearranged and pretty 'sounding' content - which these programs can do to a fairly impressive and convincing degree ... (with all due, and quite warranted, credit on that front)
To then get to a place where a program is actually cognizant, capable of judgement, discerning truth from fiction, or divining something like a 'fair' result, or the lesser of evils ...
A standard hackneyed trope (repeated often enough) will be assigned just as much value (and 'truth' ) - as the words from a celebrated mind - or the conclusions from a years long medical study.
(oh, wait ... have I just described the 'abilities' of major portions of our own population .. ?)
Martin68
(22,822 posts)content anymore.
Beastly Boy
(9,375 posts)My immediate comment was that humans, in order to differentiate themselves from the AI bots, will eventually have to write like complete morons.
In certain instances, you have to give credit to some of our political figures for being trend setters and ahead of their time (Trump and MTG come to mind...)
hunter
(38,317 posts)Creativity and a deep understanding of any subject are uncommon things.
I had a couple of professors in college who were entirely intolerant of robotic thinking. I once worked for a general contractor who had a similar attitude. They were not popular.
My physics professor was the nicest guy in the world, his lectures were wonderful, his office was always open, his labs exciting, but his exams were absolute hell for anyone who'd gotten through school by rote memorization and regurgitation of facts. If you didn't feel the math and physics in your gut you weren't going to get an "A" in his class no matter how hard you studied, no matter how many equations you'd memorized, no matter how good you were at grinding through the math. He made kids who'd zipped through high school as valedictorians cry.
The trouble with all robots and many humans is that they don't know when they are being especially stupid and robotic.
Chris Koch
(1 post)OpenAI's GPT-3 is great for Internet searches. Add "Sources" to any text it generates and it will give you good starting points ... and also show how conventional its sources are.
gopiscrap
(23,761 posts)
XorXor
(621 posts)Like Teamspeak and Discord. It was a very basic program that listened to audio, parsed it to text, fed that into GPT-3, then sent GPT-3's output back as synthesized speech. Despite its simplicity, it was still amusing and could make some relevant conversation.