After federal immigration officers shot Alex Pretti in Minneapolis, social media users called for the unmasking of the agents responsible. On X, users shared photos of the agents involved. It didn’t take long before A.I.-generated pictures made their appearance: One user posted a seemingly deepfaked picture of a masked ICE agent, writing, “This is one of the soulless lowlife ghouls who executed Alex Pretti in cold blood! Justice will come!” The post received over 680,000 views. Another A.I.-generated video showed an Immigration and Customs Enforcement agent removing his mask.
Earlier this month, after Renee Nicole Good was killed by an ICE agent, X users turned to the platform’s chatbot Grok to try to identify the agent involved. The resulting images of the unmasked agent went viral, but identified the wrong individuals. The names of the wrongly identified men were shared in posts alongside calls to arrest them. Steven Grove, a gun store owner in Springfield, Missouri, who was one of the people misidentified, received death threats and attacks online. (The Minneapolis Star Tribune later correctly identified the agent as Jonathan Ross.)
In recent months, a pattern has emerged on social media following violent encounters with immigration enforcement officers: Users seek accountability by trying to identify the masked agents involved with A.I. tools. Proponents say these technologies help fill ICE’s accountability gap and identify the people behind incidents like the deaths of Pretti and Good. But such efforts also run the risk of identifying the wrong people, all while relying on the same kinds of tools that the targeted agencies have misused.
The danger of online vigilantes is not new. In 2013, Reddit users wrongly identified the Boston Marathon bomber, and the family of one wrongly accused man was subjected to harassment and threats. What has changed is the speed and scale A.I. tools bring to the effort, tempting a vast crowd of internet users into believing that the tool they are playing with returns a trustworthy result. At the same time, it is no surprise these efforts exist at all: ICE agents have increasingly operated violently while masked and without identification. The year 2025 was the deadliest for the agency in more than two decades, with 32 deaths in custody, and since Donald Trump’s second term began, at least 16 people have been shot by agents. As a result, calls to identify these agents have grown.
Some lawmakers argue that the most straightforward solution would be for ICE agents to stop wearing masks altogether. Sen. Patricia Fahy introduced a bill in the New York State Senate in July 2025, arguing that “there is mounting evidence that the use of masked and unmarked ICE agents is creating a serious public safety risk.” On the national level, Sen. Alex Padilla of California introduced the VISIBLE Act, which would require ICE officers to display clear identification. But the Trump administration has shown little interest in limiting a practice that it says is necessary to protect agents from doxing.
There is a deeper tension here, created by the tools ICE itself uses. The agency has built its operations around surveillance systems and platforms that combine data to identify possible “targets.” These systems draw on facial recognition, biometric databases, health records, and social media activity. In September 2025, Palantir was awarded a $30 million contract to develop ImmigrationOS, an A.I.-powered platform that combines government records, private sector data, and biometric sources to track immigrants in the United States. 404 Media reported that a previous Palantir database allowed filtering on categories such as “refugee” and “border crossing card,” as well as on identifying features such as scars, marks, or tattoos. Even Palantir’s own employees have expressed concerns about potential ethnic profiling and threats to democratic norms.
ICE also uses Clearview, an A.I. facial recognition tool that scrapes images from the internet. The firm was fined $33 million in 2024 by a Dutch watchdog for building an illegal database containing billions of faces taken from social media and other websites. Beyond the excessive data gathering, these tools are error-prone: In one case, agents used ICE’s Mobile Fortify app on a detained woman to identify her and determine her immigration status; it returned two different, incorrect names.
The question, then, is whether the same kinds of tools can solve the problem when they are in citizens’ hands. Some activists argue that, used responsibly, they can serve as a form of accountability. One effort to professionalize the process is ICEList, a web database created by Dominick Skinner, an Irish activist based in the Netherlands. Together with a team of more than 500 volunteers, Skinner built a wiki that combines incidents, agents, vehicles, and public data related to ICE.
Skinner told me the team does use A.I. to approximate what an agent might look like based on an uploaded image, but rather than treating that rendering as an identification, the volunteers run it through facial recognition services such as PimEyes. When there is a match, they compare it against information scraped from social media, using reverse image searches and cross-checks of public profiles.
Skinner said his organization also imposes internal restrictions: vetting the volunteers, cross-checking submissions, avoiding posting the home addresses of identified agents, removing nurses from the database, and excluding social media platforms that feature kids. A Wired analysis found that most of the data entries were based on Department of Homeland Security employees who posted publicly online themselves.
ICEList represents an attempt to bring structure and verification to an otherwise chaotic process. Skinner said he is aware of the concerns but believes the work is necessary to hold ICE agents accountable, and that he deletes the entries of agents who have quit. Earlier this month, a list containing roughly 4,500 names of alleged ICE and Border Patrol employees was leaked to ICEList, the Daily Beast reported.
On Tuesday, ICEList’s wiki was restricted on Meta’s platforms after the project publicly asked for information on the federal agent responsible for Alex Pretti’s death. Posting the link on Facebook now triggers an error message: “Posts that look like spam according to our Community Guidelines are blocked on Facebook and can’t be edited.” This week, TikTok users claimed that videos showing ICE raids and protests in Minneapolis were being censored.
While the risk of wrongly targeting individuals is real, ICE’s continued anonymity fuels a vicious cycle: By operating with ever less transparency, the agency invites more online efforts to expose its agents, a feedback loop that one hopes will not end the way Reddit’s hunt for the Boston Marathon bomber did, but that carries the same potential.