When I was growing up, my mom was a pro at pulling classic tricks on me. “Did you know ‘gullible’ isn’t in the dictionary?” she’d say, and off I’d run to check an old-school dictionary (you know, the thick kind with paper pages, not the internet), only to realize, once again, that I’d been duped. Well, in 2024 the stakes are much higher than childhood pranks, especially with the election less than two weeks away and AI-generated disinformation becoming an increasingly serious threat.
From AI-generated deepfakes to misleading political ads, technology has evolved into a powerful tool for spreading misinformation. But we can’t afford to be gullible. Whether it’s a viral video of a candidate saying something outrageous or a suspiciously polished article that seems a little too extreme, we must remain vigilant and cast a critical eye on the information we encounter online.
The Rise of AI Disinformation
This election season, AI-generated misinformation is everywhere, and it’s alarmingly sophisticated. Deepfake videos, for example, can make it look like celebrities or politicians are saying things they never said. Take a recent public service announcement about the election featuring Chris Rock, Amy Schumer, Laura Dern, Rosario Dawson, Orlando Bloom, Jonathan Scott, and Michael Douglas, except not all of the celebrities in the video were real. Some were deepfakes, cleverly designed to raise awareness of how AI can manipulate information, especially around elections. In their short video, they ask all of us to stay vigilant, learn what false information looks like, and protect our vote this November.