with an internet search providing misinformation, is that the chatbot simplifies the information and often doesn't provide its sources; and when it does offer them, it often gets them wrong or invents them outright. (Real authors and reference librarians are already getting tired of explaining to ChatGPT users that an article or book ChatGPT cited, or even quoted, doesn't exist; and businesses are hearing from people asking for products that don't exist but that ChatGPT says they sell.) And ChatGPT delivers that misinformation in convincing, authoritative prose. When information comes directly from a trusted source, or from a source known to be untrustworthy, people can decide for themselves what to trust, rather than letting the chatbot results decide for them.
When Microsoft first introduced Bing AI a couple of months ago (this was before it went haywire in so many extended conversations that MS had to severely limit both how long it could interact with users and on which subjects), it churned out initially impressive results. Initially impressive because no one checked them right away. So MS was spared what happened to Google, whose Bard AI had its mistakes caught immediately, and Google stock nosedived. Only later were those Bing demo results checked, and the media finally let people know that Bing had been critiquing a vacuum for having too short a cord when it was in fact cordless, recommending restaurants that didn't exist, etc. It had sounded convincing, and that was all it took to sway most people looking at the results.
It's difficult to fact-check. Difficult and time-consuming.
Btw, if the internet starts to fill up with chatbot-generated mistakes and deliberate misinformation (which chatbots are great for; they can make even a Lauren Boebert sound superficially intelligent), then it won't be a Brave New World. It will be a Marching Morons New World. An Idiocracy. And that will hurt Democrats and liberals much more than it hurts authoritarians and their followers.