AI like ChatGPT, DALL-E, and voice-cloning tech is already raising big fears for the 2024 election
https://fortune.com/2023/04/08/ai-chatgpt-dalle-voice-cloning-2024-us-presidential-election-misinformation/
Archive page
https://archive.ph/FSdKK
-snip-
These new A.I. systems are collectively referred to as generative A.I. ChatGPT, the popular text-based tool that spits out student term papers and business emails with a few prompts, is just one example of the technology. A company called ElevenLabs has released software that can clone voices from a sample just a few seconds long, and anyone can now order up photorealistic still images using software such as OpenAI's DALL-E 2, Stable Diffusion, or Midjourney. While the ability to create video from a text prompt is more nascent (New York-based startup Runway has created software that produces clips a few seconds in length), a scammer skilled in deepfake techniques can create fake videos good enough to fool many people.
"We should be scared shitless," says Gary Marcus, professor emeritus of cognitive science at New York University and an A.I. expert who has been trying to raise the alarm about the dangers posed to democracy by the large language models underpinning the tech. While people can already write and distribute misinformation (as we've seen with social media in past elections), it is the ability to do so at unprecedented volume and speed, along with the fact that non-native speakers can now craft fluent prose in most languages with a few keystrokes, that makes the new technology such a threat. "It is hard to see how A.I.-generated misinformation will not become a major force in the next election," he says.
The new A.I. tools, Marcus says, are particularly useful for a nation-state, such as Russia, where the goal of propaganda is less about persuasion than simply overwhelming a target audience with an avalanche of lies and half-truths. A Rand Corporation study dubbed this tactic the "firehose of falsehood." The objective, it concluded, was to sow confusion and destroy trust, making people more likely to believe information shared by social connections than by experts.
-snip-
Meserole also doesn't think video deepfake technology is good enough yet to play a big role in 2024 (though he says that could change in 2028). What does worry Meserole today is voice clones. He could easily imagine an audio clip surfacing at a key moment in an election, purporting to be a recording of a candidate saying something scandalous in a private meeting. Those present in the meeting might deny the clip's veracity, but it would be difficult for anyone to know for sure.
-snip-