Impact, an app that describes itself as “AI-powered infrastructure for shaping and managing narratives in the modern world,” is testing a way to organize and activate supporters on social media to promote certain political messages. The app aims to summon groups of supporters who will flood social media with AI-written talking points designed to game the platforms’ algorithms.
In video demos and an overview document provided to people interested in using a prototype of the app, both viewed by 404 Media, Impact shows how it can send push notifications to groups of supporters, directing them to a specific social media post and providing them with AI-generated text they can copy and paste to flood the replies with counterarguments.
The overview document describes Impact as “A Volunteer Fire Department For The Digital World,” empowering “masses of ‘good people’” to “fight active fires, stamp out small brush fires, and even do preventative work (prebunking) to stop fires before they have the potential to start.” However, experts say that what Impact proposes would only further blur the lines between authentic and inauthentic behavior online, and could lead to a world where only people with the resources to pay for such a service get to “shape reality,” as the Impact overview document puts it. The app also shows another way AI-generated content could continue to flood the internet and distort reality, as it already has distorted Google search results, books sold on Amazon, and ghost kitchen menus.
In a section of the overview document titled “Why isn’t Impact ‘bad’/illegal/unethical/etc,” the company explains that “The ‘bad guys’ are doing coordinated inauthentic behavior….Impact empowers coordinated authentic behavior. We need a group of volunteers to help keep online spaces clean. It’s just such a monument [sic] task that without AI and centralized coordination it would be impossible to do at scale.”
One demo video viewed by 404 Media shows one of the people who created the app, Sean Thielen, logged in as “Stop Anti-Semitism,” a fake organization with a Star of David icon (no affiliation with the real organization of the same name), filling out a “New Action Request” form. Thielen decides which users to send the action to and what he wants them to do, like “reply to this Tweet with a message of support and encouragement” or “Reply to this post calling out the author for sharing misinformation.” The user can also provide a link to direct supporters to, and supply talking points, like “This post is dishonest and does not reflect actual figures and realities,” “The President’s record on the economy speaks for itself,” and “Inflation has decreased [sic] by XX% in the past six months.” The form also includes an “Additional context” box where the user can type additional detail to help the AI target the right supporters, like “Independent young voters on Twitter.”

In this case, the demo shows how Impact could direct a group of supporters to a factual tweet about the International Court of Justice opinion critical of Israel’s occupation of the Palestinian territories and flood the replies with AI-generated responses criticizing the court and Hamas and supporting Israel.
In another section titled “Is it effective?,” the company explains that “It is well-documented that social media algorithms preference [sic] pulses of high-energy activity over pure volume [...] A coordinated group of people working together, saying the same thing in different words from all kinds of different places/accounts/etc., can have a massive impact on what is trending etc.”