You’ve probably never heard of polity simulation, but you will be impacted by it. Polity simulation is an attack on political processes that leverages AI tools to mimic grassroots political movements. In this post you’ll learn what the threat is, and we’ll consider what can be done about it.

Breaking down the Polity

First of all – what does the word “polity” mean? The dictionary definition is “an organized society; a state as a political entity”. With that definition in hand, it becomes clearer what a polity simulation attack might look like. It’s like a deepfake, but at the scale of a whole sector of society, targeting consensus-finding processes instead of an individual person or organization. A polity simulation attack, then, is a cyberattack on the decision-making capability of a society that works by impersonating the populace at scale. The term polity simulation was coined by Aviv Ovadya, the CEO of the AI & Democracy Foundation. It combines two older attacks and supercharges them with agentic AI.

Astroturfing

Astroturfing is the first component of a polity simulation attack. It is a political exploit with a long history. The name is a play on the word “grassroots”, which describes organic political movements: movements composed of everyday people who step up and make their voices heard as a group in political discourse. Astroturf, much like its namesake, is “fake grass” – a fake grassroots movement. In practice, astroturfing is “the deceptive practice of hiding the sponsors of an orchestrated message or organization to make it appear as though it originates from, and is supported by, unsolicited grassroots participants”. Importantly, in the traditional political context this tactic is usually used to obfuscate where the money behind the campaign is flowing from.

Political Botnets

A botnet is a distributed network of pwned, or compromised, computers. These can be anything with a vulnerability and an internet connection; very frequently botnets are composed of improperly configured or unpatched IoT devices. Malicious actors use botnets to route overwhelming amounts of internet traffic at a target. Traditionally this was done mostly to orchestrate distributed denial-of-service (DDoS) attacks, but a botnet can be directed through its command-and-control server to do other things, including sending spam. Basic botnets can be purchased or rented on the dark web starting at around $100. Of course, for a sophisticated botnet, the sky is the limit in terms of price.

DDoS the Polity

There have already been mass attacks on political processes that resemble a DDoS attack on a network.

As noted in our recent post on deepfakes in politics, in 2017 the US Federal Communications Commission’s (FCC) net neutrality public comment period was overrun with more than a million bot-generated comments, making it all but impossible for any one voice to be heard. BuzzFeed News later covered that story in great detail. In short, a scuzzy media consulting company was paid by unknown parties to orchestrate a torrent of spam that overwhelmed the period open to public comment on net neutrality. An analysis by Jeff Kao, now an investigator at ProPublica, revealed that when the spam was filtered out and only organic public comments were taken into account, more than 99% supported keeping net neutrality.

The analysis also revealed that 1.3 million comments were mail-merged spam: they followed a formulaic template and used very similar wording, which indicates they were generated with traditional spamming techniques rather than AI.

In addition, BuzzFeed investigators found that in one batch of 1.9 million comments, 94% of the email addresses belonged to people who had fallen victim to the Modern Business Solutions data breach – a previous hack in which millions of people’s personal information, including full names, birthdates, home addresses, and email addresses, had been stolen. These stolen records supplied the names, email addresses, and personal details used in the DDoS-style attack on the net neutrality comment process. In the end the attackers won: the Obama-era net neutrality rules were repealed, contrary to the wishes of the overwhelming majority of genuine public commenters.

This attack predates agentic AI, so the attackers did not have the advantages it now provides. Today, this kind of attack is cheaper to execute and much harder to defend against.

Putting the Pieces Together in a Polity Simulation Attack

The next step in the evolution of polity simulation attacks is using agentic AI in combination with audio and video deepfakes to spam public policymakers. Attackers first create thousands of virtual agents, each with its own story and concerns. These agents can then be paired with synthetic audio to directly swamp congressional phone lines with algorithmically produced appeals that sound both sincere and credible. In a comparable fashion, senators’ inboxes could be inundated with correspondence from fake constituents, fabricated by AI systems that assemble and blend data harvested from written, audio, and social media footprints. Any outward-facing system that takes public input for democratic processes is vulnerable to these kinds of attacks.

This type of attack could also hit businesses. Companies may fall prey to the same techniques if attackers start to mimic customers at scale – for instance, to destroy a business’s ability to field customer service requests.

What Can We Do About Polity Simulation Attacks?

It may seem hopeless, but there are a few things we can do to combat these developments.

  1. If you’re a system designer, always think defensively.
  2. Integrate anti-bot defenses such as CAPTCHAs and bot-management services like Cloudflare on any public-facing endpoints and forms.
  3. Use cool-down periods to limit the throughput of spam. Don’t accept unlimited requests from unknown users at an unlimited rate.
  4. Make it expensive to attack you. Think economically, and if needed levy small charges for inputs that would otherwise be cheap to abuse.
  5. Remember: if you can prevent an attack from scaling, or slow it down significantly, there may be a point where it is no longer economically viable for the attacker.
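The cool-down idea in step 3 is often implemented as a per-sender token bucket: each sender gets a small burst budget that refills slowly over time. Here is a minimal sketch (the class name and parameters are invented for illustration; this is not production code):

```python
import time

class CooldownLimiter:
    """Token-bucket rate limiter: each sender gets a small budget of
    requests that refills slowly, so bursts of spam are throttled while
    occasional legitimate submissions pass through."""

    def __init__(self, capacity=3, refill_per_sec=0.1):
        self.capacity = capacity              # max burst size per sender
        self.refill_per_sec = refill_per_sec  # tokens restored per second
        self.buckets = {}                     # sender id -> (tokens, last seen)

    def allow(self, sender_id, now=None):
        now = time.monotonic() if now is None else now
        tokens, last = self.buckets.get(sender_id, (self.capacity, now))
        # Refill tokens for the time elapsed since the last request.
        tokens = min(self.capacity, tokens + (now - last) * self.refill_per_sec)
        if tokens >= 1:
            self.buckets[sender_id] = (tokens - 1, now)
            return True
        self.buckets[sender_id] = (tokens, now)
        return False
```

With these numbers a sender can submit three comments immediately, then must wait roughly ten seconds per additional submission – barely noticeable for a real constituent, but crippling for a bot trying to post thousands of times per minute.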
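Step 4’s “make it expensive” principle doesn’t have to mean money: a hashcash-style proof-of-work makes each submission cost the sender some computation. The server hands out a challenge, and the client must find a nonce whose hash meets a difficulty target before the submission is accepted. A toy version (function and parameter names are illustrative):

```python
import hashlib

def verify(challenge: str, nonce: int, difficulty: int) -> bool:
    """Cheap server-side check of a client's proof of work."""
    digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)

def solve(challenge: str, difficulty: int) -> int:
    """Client-side work: brute-force a nonce whose hash starts with
    `difficulty` zero hex digits. Expected cost grows ~16x per digit,
    so the server can tune how expensive each submission is."""
    nonce = 0
    while not verify(challenge, nonce, difficulty):
        nonce += 1
    return nonce
```

An honest commenter pays a fraction of a second of CPU time per submission; a botnet trying to file millions of comments pays that cost millions of times over, which is exactly the economic asymmetry step 5 describes.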

Finally, if you’re under attack, or need help designing defensive systems, make sure to reach out to help@vali.now. We love complex cases, and yours will get our full attention.
