Deepfake Political Manipulation Backlash

Maryland is actively confronting deepfake threats, not facing backlash itself. State legislators proposed SB 0361 and HB 525 to combat AI-generated election misinformation, joining 24 other states tackling similar issues. The bills specifically target those with fraudulent intent to manipulate voters through synthetic media. With over 15,800 Public Citizen members supporting the initiative, Maryland’s approach emphasizes intent rather than blanket criminalization. The fight against digital deception continues to evolve as technology outpaces regulation.

Maryland lawmakers have waded into the murky waters of AI regulation, sparking heated debate across the state with their proposed legislation to combat election-related deepfakes. Senate Bill 361 and House Bill 525, both introduced in 2025, aim to regulate the increasingly convincing world of fabricated content created by generative AI.

Let’s be honest—we’ve all seen those videos where celebrities appear to say outrageous things they never actually said. Now imagine that, but with your local representative seemingly endorsing space aliens for city council. That’s the dystopian future Maryland is trying to avoid.

Public Citizen, representing over 15,800 Maryland members, has thrown substantial support behind these bills as they navigate the legislative process. The Senate Education, Energy, and Environment Committee pushed SB 361 forward, and as of March 2025 the bill awaits action in the House of Delegates.

The proposals specifically target persons acting with “fraudulent intent” to influence voters through synthetic media. SB 361 notably includes synthetic media under the definition of fraud. And they’re not alone in this fight—Maryland joins 24 other states considering similar regulations in 2025, following the 16 states that approved comparable legislation ahead of the 2024 election.

What makes deepfakes particularly concerning is how accessible the technology has become. Anyone with a decent computer and some free time can create fake audio that experts warn is already alarmingly convincing. Your grandmother could be crafting deepfake conspiracy theories between knitting sessions.

While Maryland’s approach focuses on fraudulent intent, other states have taken more aggressive stances. Texas and Minnesota, for example, have criminalized election-influencing deepfakes outright, regardless of whether they include disclaimers about being AI-generated.

Meanwhile, the Federal Election Commission prohibited deepfake robocalls in 2024 but continues to deliberate whether existing laws against “fraudulent misrepresentation” apply to AI deepfakes more broadly. These developments highlight how algorithmic bias can threaten election integrity when deepfakes disproportionately target certain political candidates or communities.

Maryland has also approved a bill restricting law enforcement’s use of facial recognition technology, showing the state’s broader commitment to responsible AI governance.

As Maryland’s legislators navigate this technological minefield, the question remains: can they protect free speech while preventing the kind of digital deception that threatens democratic processes?
