General Discussion
Related: Editorials & Other Articles, Issue Forums, Alliance Forums, Region ForumsOK, I'll be THAT guy. AI is a fraud.
AI is a fraud in the same sense that all the big buzzwords have always been massively deceptive and massively exaggerated. Think of "the Internet" in 1997. Everybody made wild claims about "the Internet", but very few knew what they were talking about. At the time, I worked in Silicon Valley, which was just crawling with people blathering about this.
In the end, 95% of those people and ideas were wrong. But the 5% went forward and, over the next 25 years, the Internet did become transformational. But none of what the Internet is today was predicted by the buzzword champs.
And that's where we are with AI today. There indeed are some apps where AI is doing some useful things. But in most cases, the best we get is a system that is completely unreliable but may occasionally present some "Gee whiz" results. A perfect example of this is Musk and his Swastikars. He has been promising full self-driving "in a few months" for an entire decade now, yet the system is still at the primitive Level 2 stage. It might eventually work, but it is not close today. Meanwhile, others who have used different technologies, not nearly as dependent on AI, are much closer to the goal.
Another example is in the field of music. There are AI engines now that can take a pop song (e.g. from an MP3) and break it into separate stems of drums, guitar, bass, piano and vocals. That is dazzling even though it is only about 90% accurate. That is to say, a trained human musician can identify 100% of the content from each instrument, but it would take a human many hours to create the separated streams, whereas AI can do it in a few minutes -- at a level that is useful, if not commercial-grade.
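If you want to hear how this works for yourself, the open-source Spleeter project does a rough version of it; Demucs is another. A minimal sketch, with placeholder file names:

```python
# Minimal sketch: split a pop song into stems with the open-source Spleeter
# library. 'song.mp3' and 'output/' are placeholders, not real files.
from spleeter.separator import Separator

# The '5stems' model separates vocals, drums, bass, piano, and "other"
# (guitars and everything else land in the "other" stem).
separator = Separator('spleeter:5stems')

# Writes vocals.wav, drums.wav, bass.wav, piano.wav and other.wav into
# output/song/. Useful, even if not commercial-grade.
separator.separate_to_file('song.mp3', 'output/')
```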
How much venture capital is being dumped into AI things? How much electricity is it burning? Some say AI will soon consume more than 10% of our total electricity generation.
That brings us to Musk and what he is doing to our government agencies. The big lie is that he is stealing the data so he can turn our government agencies into AI factories that need almost no people to handle everything government does. That is the fraud. So far, Musk hasn't produced a single AI solution that actually does what he promised. And this will be no different.
We have seen what health insurance companies can do with their computer systems. They can train them to deny claims faster than any human can. But it really doesn't take any AI to do that. And that is the same thing Musk is aiming for. Most of government is there to PROTECT the people. From air crashes. From pollution. From poisons. From ignorance. From rotten food. From financial scams. What the billionaires call "oppressive regulations" are actually simply consumer protections.
The game plan is to use the fraud of AI to break government completely. And you can believe that Elon intends to bill the government many billions of dollars for these fraudulent AI systems. The new systems will deny workers compensation for injuries. They will deny veteran benefits. They will send all the education money to charters. And so on. It really doesn't take AI to do all these things. And that's good for Musk because he, and AI, are frauds.

SheltieLover
(68,163 posts)
mr715
(1,772 posts)You know it when you see it, and some people love that particular look/sound.
My students use "AI"/fancy autocorrects to replace thinking about stuff. Artists can now use it in lieu of creativity and/or talent.
But it is a real thing, not a fraud. It can be used to predict markets and take money from people. It is something to be vigilant about.
The idea of digital sentience is a fraud.
dweller
(26,571 posts)
✌🏻
Ken Mosul
(45 posts)I know that pains everyone here.
A simple test drive with FSD sold me on the car. It is truly remarkable and light years beyond any other manufacturer at this time.
As for AI! AI! AI! as the single most annoying thing to come along since 'the Internet' - 100% agree. We have a few posters here who obsess over it.
Bluetus
(1,054 posts)willing to sit in the back seat blindfolded on an extended drive with challenging circumstances, such as traffic, accidents, construction zones, detours, snow and whatever. Most people think that Waymo is 4-6 years ahead of Tesla. They are, after all, already delivering tens of thousands of paid rides, albeit with strict location fencing, condition fencing, supervision, and limitation to city streets.
We'll see how that sorts itself out, but Musk said it was ready to do a coast-to-coast drive with zero human intervention BY THE END OF 2017!!! Yes, that would be a fraud.
Anyway, my point was mainly about the fraud Musk is doing TODAY by convincing Trump and others that we can vastly reduce the federal work force by replacing people with AI, yet he has never produced a single AI system that has yet met any of his claims.
Henry203
(552 posts)With some incredible software. It can be amazing. The larger the repository, the more accurate it becomes. I introduced the last great technology in litigation, and what I see is mind-blowing.
Bluetus
(1,054 posts)Last edited Sun Feb 16, 2025, 10:13 PM - Edit history (1)
I have also seen some AI things that are "mind blowing". Some of the movie editing software does some decent effects that would take a human much longer to do. But then we have the generative pictures with six fingers and the numerous cases where lawyers have submitted briefs in which AI invented cases out of thin air.
Here's the latest one: https://www.yahoo.com/news/attorney-pleads-mercy-using-ai-110052528.html
Mind-blowing? Sure
Useful? Sometimes
Reliable? Rarely.
Henry203
(552 posts)That happened because those attorneys were lazy. I am currently involved in that area, and we use those two attorneys as an example of why ChatGPT-4 is really poor because it hallucinates. You can solve that problem. I actually spoke to an attorney two weeks ago who is good friends with the judge who caught those attorneys. Supposedly, that judge doesn't miss anything.
That being said I can tell you that AI can be amazing. I know the space very well and have worked in it for 17 years.
Bluetus
(1,054 posts)And certainly not the same as being reliable. Maybe some day all AI processes will be fully reliable, lacking any biases that slip in through the training data. Heck, some day we might even have the means to TEST the AI systems to determine how much trust they deserve. Today there are no means of testing any of them other than "just throw it out there and see what happens."
Henry203
(552 posts)I was involved in the last great technical advance in litigation software and my friend was involved when it became admitted as accurate for court. This will also be accepted.
If George Conway were still practicing, he would be someone I would call. That is the level I am at.
Bluetus
(1,054 posts)It is a fact that the ChatGPT apps are completely wrong in a very high percentage of cases today. Sometimes they are right, and that is amazing. But it is not amazing when lawyers use ChatGPT to prepare their briefs and it invents cases that never happened. What does any of that have to do with George Conway? Are you saying that George Conway writes his briefs using ChatGPT? I seriously doubt it.
Anyway, that is all off topic. My point was that Musk is trying to justify the wholesale firing of government workers on the basis that if you just give Elon a contract for a few billion dollars, he will run the government data through his magic AI machine and, BINGO, no more need for government workers. It is all bullshit and all a fraud because Musk has never, ever completed any AI project that delivered the results he claimed, and he hasn't even started on this one.
Yet, the workers are being fired nonetheless. This is all a fraud to make government fail so that the public will say "See. I knew government was a bad idea." This is the fraud I'm talking about.
Henry203
(552 posts)I have been involved in ediscovery for almost 20 years. I worked on the Blue Cross class action that had a 2.67 billion dollar settlement. Those two attorneys were idiots and lazy. That is the attorneys' fault, not ChatGPT's, and my current software eliminates those types of issues. If you want to go into the ediscovery area, you should know what you are talking about. I have almost 20 years in the ediscovery space and 20 years in the large law firm vertical. I have gone to the National White Collar conference multiple times, and to Legaltech and ILTA, for the last 20 years. This is my field. George Conway is someone I would show my software to.
Quixote1818
(30,963 posts)For example, I work in volume school and sports league photography, and it has made our volume photography production much more efficient. It can take out glass glare and extract a person from a background without needing a green screen. Sometimes for yearbooks we use the facial recognition to find previous images of a subject if their data was entered wrong. If someone's eyes are closed in a group shot, it will swap in open eyes, and somehow it just knows what they should look like. It's amazing for organizing data, helping workflow, analyzing sales trends, etc. Our company hasn't laid anyone off, but it has helped us out in a lot of ways. Unfortunately, on the darker side, I have several friends who have lost data entry jobs.
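If you're curious about the background-extraction piece, the open-source rembg library does a rough version of what the commercial tools do; a minimal sketch with placeholder file names:

```python
# Minimal sketch: cut a subject out of a portrait without a green screen,
# using the open-source rembg library. File names are placeholders; the
# commercial volume-photography tools work along similar lines.
from PIL import Image
from rembg import remove

portrait = Image.open("portrait.jpg")
cutout = remove(portrait)            # RGBA image with a transparent background
cutout.save("portrait_cutout.png")   # PNG keeps the alpha channel for compositing
```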
Here are a few ways it is having an impact good and bad:
https://blog.workday.com/en-us/5-industries-where-ai-is-having-an-impact-today.html
A lot of people think there is an AI bubble just like the dot-com bubble, but all the research I have done suggests there might be a small bubble, nothing like the dot-com bubble, which was driven by a huge amount of speculation. Nvidia is selling one hell of a lot of chips as many industries find creative and useful ways to improve productivity. Sure, there is speculation driving up stocks, but there are so many real-world applications that the boom should last a few more years unless China wins out.
That being said, I think the world would probably be a better place without it for a lot of reasons and it may end up being our downfall.
yaesu
(8,622 posts)An AI redneck impersonation singing country music is like fingernails scratching a blackboard
I heard Ed Zitron on the radio talking about AI and the hype. Very interesting
Hanzzy72
(62 posts)Unfortunately, corporate America has gotten its hands on it and turned it into a profit-generating/cost-saving tool. Instead of being a panacea for humanity, it's becoming a cancer.
Bluetus
(1,054 posts)I don't dispute that, in the same sense that the Internet had lots of potential as we looked at it in 1997. But at this time, most of the AI "profits" are coming from people like Musk who can attract massive investor dollars even though most of the things he says about AI are grossly exaggerated. How many FDA workers, for example, can AI realistically replace in the next two years, achieving the same results that humans do today?
When do you think Elon is likely to deliver an AI system that can safely and reliably replace human air traffic controllers? That's roughly the same scale as self-driving cars and Elon has not yet even delivered the things he promised for delivery in 2016. That is a fraud.
The fraud in this case is that he's just firing people willy-nilly without any evaluation of just how AI would magically replace their jobs. AI really has nothing to do with it. They just wanted to do massive firings, just as he did when he burned Twitter to the ground. P.S. Twitter is never coming back from that.
Tarzanrock
(862 posts)I'm in Los Angeles. I see Waymo robotaxis -- autonomous, driverless vehicles operating safely on the streets of Los Angeles every day. In fact, I see them several times each day. These are "facts" not nonsensical hyperbole:
Waymo is currently delivering more than 150,000 autonomous rides per week in Phoenix, San Francisco, and Los Angeles. Just a few months ago, the firm was only completing about 50,000 rides per week, meaning it has tripled its ride volume in just a few months.
Waymo has plans to expand to 10 new cities in 2025, including Las Vegas, San Diego, Atlanta, Austin, and Miami. Plus, the company has partnered with Uber to autonomously deliver food through Uber Eats in select locations, including Phoenix.
Late last year, Elon Musk unveiled the Cybercab and Cybervan, two fully autonomous vehicles without steering wheels that Musk sees as the future of Tesla. In fact, perhaps even more exciting, Tesla plans to launch its own robotaxi service in Austin, Texas, in just a few months!
Soon, there will be autonomous, driverless 18-wheel tractor-trailer trucks operating on the Interstate highways carrying freight from city to city.
I'm a lawyer and I'm in the law business -- all legal document review software platforms now incorporate some version of A.I. into their programming. The use of A.I. in law, medicine and engineering (just to name a few professions) has been growing markedly over the past several years. It will soon grow exponentially with the advent of newer and faster semiconductor chips, the likes of which Nvidia and Taiwan Semiconductor and AMD are now imagining, designing and creating. The world of the 2030s (only five years away) will be nothing like the world of today!
Bluetus
(1,054 posts)they are not relying on AI to solve the problem with camera vision. They are using LIDAR, which is fundamentally more suitable for the task and eliminates many of the issues that have put Tesla so far behind schedule with their camera-only approach.
In addition, Waymo has been very conservative in fencing the operations -- only certain roads, only certain conditions. They aren't over-promising what they can accomplish with AI today. But Waymo also isn't running around firing government employees left and right and claiming they will soon have AI systems that will make it all good.
andym
(5,961 posts)There's a reason that the 2024 Nobel Prizes in Physics and in Chemistry went to AI researchers. For example, in chemistry, AI can now predict the structure of most proteins from their primary sequence.
Tarzanrock
(862 posts)an A.I. autonomous, driverless vehicle, a Waymo car or some Chinese version of it. I'd purchase one in a heartbeat if they were commercially available. Many Americans will own or rent or possess some kind of a personal robot by +/- 2035 which functions on a very human level. These "robots" already exist in Japan. The future will be on top of us before anyone even realizes it. Meet Ameca:
Meet Optimus and Atlas and others: https://standardbots.com/blog/most-advanced-robot
JoseBalow
(7,474 posts)
BarbD
(1,327 posts)My background is based on reading Isaac Asimov. And, isn't it still garbage in, garbage out?
Bluetus
(1,054 posts)end up behaving like Putin's bots.
The AI systems that are most successful are the ones that have a closed-loop training cycle where the inputs can be correlated with known outcomes. For example, an AI that analyzes an EKG can be trained on many thousands of EKGs where the patient outcome was known to a certainty. You would expect a good result from AI in those closed systems. The AIs that don't do nearly as well are those where the correct answer is not part of the training set and has to be supplied manually by people staring at monitors all day long making snap judgments about outcomes. And in the case of self-driving, the task can have practically an infinite number of circumstances, such that a good response in one case may be a disastrous response in a slightly different case. Very definitely garbage in, garbage out.
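To make the closed-loop point concrete, here is a minimal sketch in scikit-learn with synthetic numbers standing in for labeled EKGs (nothing here is a real medical model; it just shows why known outcomes let you grade the system):

```python
# Closed-loop case: the training data comes with known outcomes, so the
# model can be scored on held-out examples before anyone relies on it.
# The "EKG" features and labels below are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
features = rng.normal(size=(5000, 20))          # stand-in for extracted EKG measurements
labels = (features[:, 0] + features[:, 1] > 0)  # stand-in for known patient outcomes

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.2, random_state=0
)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Because the true outcomes are known, accuracy can be measured directly,
# instead of "just throw it out there and see what happens."
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```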
And then when we look at eliminating government jobs, many of them are quite specialized, so we would need to train AI models for individual jobs. Possible, but it might take decades to make a real dent in the labor needs.
The real point of this whole discussion is: why are we in a situation where Musk can just run around firing people on the basis that they can be replaced with AI when there is no AI to replace them today? How can this possibly be right? How can it possibly be legal? Those agencies are all funded at their current levels. How can the executive branch just unilaterally say "Nope, we're not going to have IRS auditors anymore"?
Norrrm
(1,574 posts)Children are not born with biases, but they learn from their family.
They pick up biases from their learning environment.
Teens and pre-teens can be very racist/sexist.
There are stories of self-learning programs (AI) becoming racist from the conversations they participate in.
Just imagine two AIs, one liberal and one conservative, discussing things,
each one so set in its ways that it only seeks confirmation of its beliefs, to the point that it refuses to see other viewpoints or learn any more.
Now, it has truly become human.
Yavin4
(37,182 posts)First, just because a few ideas during the internet boom did not take off, that does not mean it was all hype. Far from it: the internet delivered a thousandfold.
Second, AI can replace careers that center around information storage and retrieval. My former profession is finished because of AI.
Bluetus
(1,054 posts)that can reliably, safely replace a fully-trained air traffic controller.
Yavin4
(37,182 posts)The US is not the only country developing AI systems.
Bluetus
(1,054 posts)that says exactly what standards the AI system must meet, and have bidders submit proposals, complete with precise milestones that the vendor must meet in order to proceed to the next phase.
I remind you that Musk said in 2016 that his AI Swastikars would be able to drive coast-to-coast with no human intervention by the end of 2017. They are still nowhere close to that, even though Musk has on hundreds of occasions claimed that the "next software release" would be the one.
If we (the people) are going to hire a company to build an AI system, don't you think we ought to select a company that has completed at least one successful AI project? After all, the majority of AI projects, especially bespoke projects (as would be the case here), fail.
https://www.pmi.org/blog/why-most-ai-projects-fail
Yavin4
(37,182 posts)These are two separate things. And yes, there will be a lot of failure, but it's that failure that will lead to superior results.
Bluetus
(1,054 posts)Last edited Mon Feb 17, 2025, 04:16 PM - Edit history (1)
have at it. A few will succeed. Most will fail. The point here is that this guy is running around firing people who are employed to provide services for the American people, using the argument that they can be replaced with AI, when there is absolutely no basis for concluding that is anything but a fraud.
Outside of the Pentagon, Veterans Affairs has by far the most employees. What AI is going to replace surgeons, nurses, or orderlies? Many of the VA employees have special expertise about diseases and disabilities that are closely related to military service. The more specialized a job is, the more likely it will require a bespoke AI solution, and those might take years to develop. That's my point. Many people, perhaps even you, want to treat AI like waving a magic wand. It isn't magic. Most systems are flops, especially the bespoke ones. And they all take a lot more work and iteration than the pitch-men ever acknowledge.
jeffreyi
(2,350 posts)might actually be beneficial, I don't know. I have a friend suffering from macular degeneration. My fervent wish is for AI to help her and others in her plight.
Bluetus
(1,054 posts)have AI systems to look at drugs and dosages and make recommendations where we could save money and get better results. But that won't happen soon, if ever, because there is so much money in over-prescribing.
It is already used to examine X-rays, EKGs and other tests. If this were additive, bringing another set of "eyes" to the case, that would be good, but the incentive will be to cut the humans out of the review process and just let the AI decide your fate.
And so on. The technology has potential for good and potential for evil. Our economic system favors the evil over the good.
wcmagumba
(3,914 posts)
Jarqui
(10,660 posts)A lot of what is being referred to as AI is BS to me.
It is a very, very complicated subject.
I'm too busy to get into it tonight.
I doubt it would be worthwhile because of the complexity - can't be covered in a thread.
Some books on the subject don't properly cover it.
It is like they've grabbed a buzzword most don't understand and are using it to dupe people into supporting something that isn't real and will not be for some time. And a lot of the people using the buzzword don't understand what they're talking about.
There are pieces of software that supposedly put one's toe into the AI water: loan approval software, planning software, medical diagnosis software. But it is still in its infancy. They're helpful tools, but the learning aspects are really iffy at this point.
The whole thing strikes me like a big scam.
Bluetus
(1,054 posts)There is almost always a kernel of reality at the start, from practitioners who really know what they are doing. Once it reaches buzzword status, all the dopes start piling on because they don't want to be left out. And when the noise gets to a high enough pitch, the conmen come in and use the buzzword to sell things they will never really deliver on.
At no point did I say that there were no applications where AI has more positives than negatives today. There certainly are some; I use AI in music and video production daily. But even though it is useful and saves me some time, the results are nothing like what the marketers boast about. And these are the best cases, where an AI model (or multiple models) can be developed and used across a variety of products. What we are talking about with government agencies is more likely to be bespoke systems that will take years, even decades, to make really work.
That doesn't mean AI has no merit or that it should not be pursued. There really is no escape from that. But meanwhile, most of what we see promised today under the AI banner is somewhere between extreme false advertising and outright fraud. In Musk's case, it is complete fraud because that's what he does.
Jarqui
(10,660 posts)"a field of science that uses machines to learn, reason, and act like humans"
Basically humans program 'machines' telling the 'machine' how it is to behave. But to me, that is not AI. It's a software program.
AI really happens when it figures out itself what data or information it should look at and then figures out itself how it should behave or what it should do. It is doing the exploring, hypothesizing, reasoning and thinking - examining alternatives - some that maybe haven't been thought of.
As programmers, we can take our programs up a level or more. On the basis of data we supply, we can have the software look at alternatives or provide results to help automate the overall process. But we're a long way off from the program deciding what data it needs and how to think about it. For much of the last four decades, we did not have the computing horsepower needed for an application like this. Computers are getting faster every day...
We hit a wall in much more basic business planning software in the '80s and '90s trying to crunch the problem. By the time the computer figured out what we were trying to do, too much data had changed and the results were obsolete, and that was for a relatively simple planning problem for a factory. The processing needed is massive.
In the '90s, I spent two weeks in a think tank. My background was civil engineering and software; I had my own R&D software company whose clients were computing companies. The developers couldn't explain their solution to me, much less to a lay person or a civil engineer with no software background. And it didn't work, etc. This is Einstein / Stephen Hawking - way out there kind of stuff conceptually. It was going to be a big sales problem, because you couldn't sell it to a management team that couldn't understand it; it was way too involved. Each of the developers had their own outlook.
There are software tools that help us. They're getting better every day. But none of it is real AI. The bank loan software is implementing bank policy decided by management to guide staff on whether to make a loan or not. The bank is the one really deciding what its loan officers should consider - not the software.
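To make that concrete, here is a toy sketch of what a lot of that "AI" loan software amounts to: the bank's policy, hard-coded by a programmer. The thresholds are invented for illustration:

```python
# Toy sketch: "AI" loan approval that is really just bank policy written
# as rules. The thresholds are invented for illustration. Nothing here
# learns or reasons; it is a software program executing management's policy.
def approve_loan(credit_score: int, debt_to_income: float, years_employed: float) -> bool:
    if credit_score < 640:       # minimum score chosen by management, not "discovered" by AI
        return False
    if debt_to_income > 0.43:    # policy ceiling on debt-to-income ratio
        return False
    if years_employed < 2:       # employment-history requirement
        return False
    return True

print(approve_loan(credit_score=700, debt_to_income=0.35, years_employed=5))   # True
print(approve_loan(credit_score=600, debt_to_income=0.20, years_employed=10))  # False
```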
The "artificial" part today is that people are dumbed down enough to think it is intelligence, but it really isn't.
Oneironaut
(5,990 posts)It may be amazing in some areas and utterly incompetent in others. It depends on what kind of AI you're talking about and in what context. Large language models? I wouldn't write a research paper with them. They did, however, help me get unstuck on a few programming problems.
The CEO who tells you that we're going to be able to fire 90% of our workforce in the next 10 years is being a doof, though.
artemisia1
(1,010 posts)of a century. Then it came back with realistic expectations, infrastructure and less hype. I predict the same for AI.
Johnny2X2X
(22,876 posts)But the major tech players are all in on it and think it will be transformative.
I think it's basically a search engine that organizes results in useful ways right now, but it's pushing forward quickly.
Microsoft isn't investing $80B in AI without knowing it's going to be worth it.
Bluetus
(1,054 posts)when you say "Microsoft isn't investing $80B in AI without knowing it's going to be worth it." Oftentimes, vast sums of money are paid out just because "everybody is doing it" and they don't want to be left out.
But let me give Microsoft the benefit of the doubt that they are being deliberate. Even so, their intentions are clearly to have computers pushing their self-serving agenda better and harder than humans can or will. And this is the asymmetry. The entities that have the funds to throw at this most certainly do not have the well-being of humanity in mind. The money is only justified if it can help them suck up more and more wealth.
Pretty soon, AI systems will be granted personhood and will have their own bank accounts. Microsoft had better watch them closely.
DSandra
(1,574 posts)Would I fully rely on them for critical analysis or to replace a lawyer? No. But they are a good aid for handling a unique situation or for non-critical questions where I don't otherwise have access to expertise. I am amazed at how much better it is able to reason about something than the average person. I especially like Perplexity because it cites where it gets its knowledge, and I can go to the websites to double-check.
Bluetus
(1,054 posts)None of them have anything like intelligence and most can't be used without carefully validating results.
In the car space, we have been adding driver convenience and safety assists for many decades, and some of them now use neural nets to do things that were not previously possible. But few AI things can be entirely trusted on their own in life-or-death situations without supervision.
Perhaps the closest thing is the Waymo system, which uses a combination of procedural (traditional) programming, NNs, specialized hardware (LIDAR), extensive mapping, extreme fencing and active supervision to offer a relatively safe service for low-speed city taxi rides. That's great, but that isn't anywhere near the solution that AI boosters were claiming would be widespread by 2020.
That isn't the fraud I'm talking about. I'm talking about the claims that AI can overhaul the FAA systems, practically eliminating humans from the process -- and saying that it is so certain and soon that we can start firing FAA employees NOW. This comes from a guy who has promised full self-driving for 10 years and is still only about 30% there. Meanwhile, we are having commercial plane crashes on a near-daily basis, and coincidentally, this all started at exactly the same time that DOGE started doing whatever they are doing.
