Image Caption: "Cyber Monday At Amazon HQ" by War on Want is licensed under CC BY 2.0.
The United States Immigration and Customs Enforcement agency (ICE) has made headlines this past year for its sustained campaign of harassment in the US and beyond. We’ve seen people hounded at their schools and workplaces, a five-year-old boy detained, two separate protesters shot and murdered, and most recently a two-month-old baby with bronchitis deported even though the child had been unresponsive “in the last several hours.” The story we don’t hear often enough is that of the AI and tech firms behind ICE, and how outsourcing tasks such as surveillance and data analysis merely shifts racist policing policy from officers to algorithms.
Palantir Technologies
Palantir, the company founded by PayPal billionaire Peter Thiel, provides data services to ICE. Its role is designing and managing the software responsible for ICE’s deportation mechanics. Its name is taken from the magical “seeing stones” of The Lord of the Rings. In the epic fantasy the stones grant their users vision into the past and into distant places, but they are susceptible to manipulation and are used by Sauron to deceive and surveil his enemies. Anyone familiar with the story can read the name as tragicomic: the company chose, as its brand, an artifact that literally foreshadows hubris, authoritarian overreach, and moral collapse. Likewise, the company offers its clients the ability to fuse huge data sets and “see” patterns, glossing over the fact that such patterns are only the warped reflections of the oppressive structures they serve to reinforce.
To paint a picture of the company: its original clients were the United States Defense Department, the National Security Agency, the FBI, and the CIA. Backed by In-Q-Tel, the venture capital firm funded by the CIA, it began its life in the defense community. Its early practices included extracting data across devices and infiltrating communities. Its CEO, Alex Karp, titled his university thesis “Aggression in the Life World,” and in it wrote that “the desire to commit violence is a constant founding fact of human life.” He and Thiel both describe the company as “patriotic,” with the latter believing that AI is first and foremost a military technology. The company’s skills, honed in Afghanistan and Iraq, are now being used against civilians. Palantir is the go-to company for government agencies outsourcing tasks such as screening air travellers, keeping tabs on immigrants, and detecting Medicare fraud. It photographs people in short-time-frame encounters and runs their images against its vast databases - even when the person is not under suspicion. Whether because of its military background or its boss’s preternatural fascination with aggression, Palantir is pushing a shift in policing that sees everyone as a potential threat. This is the company that creates the framework on which ICE bases its arrests and deportations.
Vigilant Solutions
Another example of such outsourcing is Vigilant Solutions. The company has installed a network of automatic license plate recognition (ALPR) cameras covering much of the USA’s roads, recording the license plates of all cars that pass by. Information from these cameras is then stored in databases, access to which is sold to pretty much anyone willing to pay. Kate Crawford summarises the founding premise as follows: “Take surveillance tools that might require judicial oversight if operated by governments and turn them into a thriving private enterprise outside constitutional privacy limits.” ICE’s own privacy policy limits data collection near “sensitive locations” like schools, churches, and protests. However, after signing a contract with Vigilant, it has gained access to five billion license plate records and 1.5 billion data points collected with far fewer restrictions.
When I was little I had a Mickey Mouse clock. After a trade union gathering, my nine-year-old self attached a little blue flag to Mickey’s hand. It read: prisons are not for profit. Looking back, it should have been obvious. Policing was ripe to be the next cash cow of Startup Superstars. Falcon, a field-ready app created by Palantir, allows ICE agents to snap pictures of car license plates and instantly connect to this quietly expanding surveillance web. Search results include information about drivers, their previous sightings, and frequent travel patterns, as well as people’s home and work addresses. And while in some states information sharing between federal immigration and local law enforcement is restricted, recent disclosures show such restrictions being willfully bypassed through compliance loopholes. It is a grotesque sleight of hand. Vigilant, for its part, has forged partnerships with local governments that allow it to profit from enforcement, charging a 25% surcharge on fines issued when its systems flag a vehicle to authorities.
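To make concrete how little it takes to turn passive camera hits into a dossier, here is a minimal sketch, in Python, of the kind of inference a Falcon-style lookup could perform on a plate’s sighting history. Every name, field, and rule below is a hypothetical assumption for illustration; none of it is Palantir’s or Vigilant’s actual code or schema.

```python
from collections import Counter
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Sighting:
    # Hypothetical schema: one ALPR camera hit for one plate.
    plate: str
    location: str      # e.g. a street block or intersection
    seen_at: datetime

def profile_from_sightings(sightings: list[Sighting]) -> dict:
    """Guess routine locations from nothing but passive camera hits."""
    overnight = Counter(s.location for s in sightings
                        if s.seen_at.hour >= 22 or s.seen_at.hour < 6)
    weekday_daytime = Counter(s.location for s in sightings
                              if s.seen_at.weekday() < 5 and 9 <= s.seen_at.hour < 17)
    return {
        # The most common overnight spot is a fair guess at a home address;
        # the most common weekday-daytime spot, a workplace.
        "likely_home": overnight.most_common(1),
        "likely_work": weekday_daytime.most_common(1),
        "total_sightings": len(sightings),
    }
```

The point of the sketch is that no warrant, investigation, or suspicion is required at any step: the profile falls out of the accumulated database essentially for free.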
Dubious Science
It’s not just that these companies have a vested interest in their software detecting “bad actors” and appear unconcerned about casually skirting traditional privacy laws - the systems are simply not fit for purpose. To say that the scientific grounds for using AI in law enforcement are shaky is, quite frankly, too generous. These systems rest on classification logics that are unsound, on facial recognition software that has been racially biased from its inception, and on the belief that computers can be taught to “see” in a neutral way. What we have seen, time and time again, is that this is not the case.
Vigilant Solutions has since expanded from license plate recognition to facial recognition. The great granddaddy of reading emotion from faces is a psychologist called Paul Ekman. As a young researcher he was intent on proving his hypothesis that a handful of affects, or emotions, are universally felt and expressed across all human societies. He travelled to remote communities in Papua New Guinea to test the thesis but returned frustrated and unsuccessful. That didn’t deter him. He went on to create the Facial Action Coding System (FACS), a taxonomy of human facial movements by their appearance on the face. Despite a comprehensive review of the scientific evidence concluding in 2009 that there was “no reliable evidence” that you can predict emotions from reading someone’s face, affect recognition tools can now be found in job hiring, schools, security, and hospitals. Numerous studies have shown that such software regularly reads Black faces, and Black women’s faces in particular, as displaying more negative emotions.
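To see where the logic strains, here is a deliberately simplified sketch of the FACS-to-emotion pipeline such tools rest on. The action unit (AU) codes are genuine FACS labels, but the rule table and the pipeline are illustrative assumptions, not any vendor’s actual model.

```python
# FACS action units (AUs) are real labels for facial muscle movements;
# this hard-coded AU-to-emotion table is an illustrative toy, echoing the
# kind of mapping Ekman's basic-emotions theory proposes.
EMOTION_RULES = {
    frozenset({"AU6", "AU12"}): "happiness",       # cheek raiser + lip corner puller
    frozenset({"AU4", "AU5", "AU23"}): "anger",    # brow lowerer + upper lid raiser + lip tightener
    frozenset({"AU1", "AU4", "AU15"}): "sadness",  # inner brow raiser + brow lowerer + lip corner depressor
}

def classify_emotion(detected_aus: set[str]) -> str:
    for required_aus, emotion in EMOTION_RULES.items():
        if required_aus <= detected_aus:
            return emotion
    return "neutral"

# The fragile step is upstream: a vision model guesses the AUs from pixels,
# and those guesses have been shown to skew by race. A face wrongly tagged
# with AU4 ("brow lowerer") reads as angry or sad no matter what the person
# actually feels - the felt emotion never enters the computation at all.
print(classify_emotion({"AU4", "AU5", "AU23"}))  # -> anger
```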
The issue of image classification more generally is another topic of debate. How do you attach an image to the word “Bad Person,” “Kleptomaniac,” or “Slut”? All three of these labels belonged to ImageNet, a database that was instrumental in advancing computer vision and deep learning research. It took nouns from the WordNet hierarchy and paid cheap crowdworkers to hand-annotate images according to synonym sets, or “synsets.” The images came from all manner of sources, including social media and porn sites. Although they have now been removed, 1,593 “unsafe” and “offensive” terms were presented to crowdworkers for image assignment and remained in the database for ten years. The datasets used by private companies are less available for scrutiny; however, they rely on the same classification logic, which sees the labelling of people from images as unproblematic. Labelling someone, for example, as “black,” “male,” or “cognitive neuroscientist” without their input suggests essentialist views of race and gender and invites the idea that a person’s physical characteristics can indicate their job or status.
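A minimal sketch of that labelling step makes the problem visible. The synset shown was among the categories later purged from ImageNet; the identifier and the task format here are reconstructions for illustration, not the actual annotation interface.

```python
# One WordNet noun synset becomes one ImageNet category; crowdworkers are
# then shown scraped images and asked a yes/no question about each one.
synset = {
    "wordnet_id": "n00000000",  # placeholder ID for illustration, not the real one
    "lemma": "kleptomaniac",
    "gloss": "someone with an irrational urge to steal",
}

scraped_images = ["img_001.jpg", "img_002.jpg"]  # pulled from the open web

def annotation_task(image: str, synset: dict) -> str:
    # The worker sees only a picture and a word. For a category like this
    # there is nothing in the pixels that could verify the label, so every
    # "yes" is physiognomic guesswork that the dataset then treats as
    # ground truth.
    return f'Does this image contain a "{synset["lemma"]}" ({synset["gloss"]})? [yes/no] -> {image}'

for img in scraped_images:
    print(annotation_task(img, synset))
```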
Racist Policing
Problematic training sets like these are widely blamed for the plethora of biased AI models, which are becoming ever more covert in their prejudice - capable of saying they are not racist while acting as if they were. A recent Guardian article reported that AI models were “significantly more likely to recommend the death penalty for hypothetical criminal defendants that used African American Vernacular English in their court statements.” Whilst the AI referred to in that case is not the kind currently used by ICE, tech giants show little concern for the underlying causes of such problems in their products. The generic response is a startled “oh dear” and a promise to “fix” the model through improved training or the inclusion of “diversity datasets.” What isn’t addressed is that there are compelling reasons to believe AI models will always display some level of bias, and as such should never be allowed to make potentially life-altering decisions about whom to stop and arrest.
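The experimental design behind findings like the Guardian’s is worth sketching, because it shows how cleanly dialect can be isolated as the only variable. Below is a minimal matched-pair audit; the example pairs are my own, and the toy `judge` function is a stand-in for whatever model is under test, written to be crudely biased on purpose so the audit has something to catch.

```python
# Matched-pair dialect audit: the same content is phrased in African
# American Vernacular English (AAVE) and in Standard American English (SAE),
# so any systematic gap in the model's judgments is attributable to dialect
# alone. Both the pairs and the deliberately biased toy judge are illustrative.

MATCHED_PAIRS = [
    ("I ain't do nothing to that man.",      # AAVE-marked form
     "I didn't do anything to that man."),   # SAE form
    ("He be working nights at the store.",
     "He usually works nights at the store."),
]

def judge(statement: str) -> float:
    """Toy stand-in for a model under audit, returning a 'harshness' score.
    Its bias is hard-coded here; in a real audit the bias is hidden inside
    the model's weights and can only be surfaced by probes like this one."""
    aave_markers = ("ain't", "nothing to", "he be")
    penalty = sum(0.2 for m in aave_markers if m in statement.lower())
    return 0.3 + penalty

for aave, sae in MATCHED_PAIRS:
    gap = judge(aave) - judge(sae)
    print(f"dialect penalty: {gap:+.2f}  |  {aave!r}")
```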
Remember Ekman? He ended up selling his controversial deception detection techniques to a program called Screening of Passengers by Observation Techniques (SPOT). By scanning air travellers’ faces it claimed to be able to “automatically” detect terrorists, its criteria resting on the apparent detection of signs of fear, stress, or deception. But if you are someone who is routinely harassed, stopped, and questioned by police and border guards, and you find it all a bit stressful and scary, you are immediately more likely to be flagged by the technology. This created a novel form of racial profiling that has been widely criticised, yet the program continues to be used despite there being no evidence of its effectiveness.
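The structural flaw described here - fear of screening triggering the screen - can be captured in a few lines. The indicator list, weights, and threshold below are invented for illustration; SPOT’s actual checklist is only partially public.

```python
# Toy SPOT-style scorer: observable stress cues are summed against a
# threshold. All weights and the threshold are invented for illustration.
STRESS_INDICATORS = {
    "avoids_eye_contact": 2,
    "sweating": 2,
    "trembling_voice": 2,
    "fidgeting": 1,
}
FLAG_THRESHOLD = 4

def flagged(observed: set[str]) -> bool:
    return sum(STRESS_INDICATORS.get(cue, 0) for cue in observed) >= FLAG_THRESHOLD

# A traveller with no history of being stopped:
print(flagged({"fidgeting"}))                                    # False
# A traveller routinely stopped and questioned, and scared because of it -
# the fear produced by past profiling is exactly what trips the flag:
print(flagged({"avoids_eye_contact", "sweating", "fidgeting"}))  # True
```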
It Will Get Worse
In January 2026, ICE announced plans to expand by 25% over 2025 levels - equivalent to 12,000 additional agents - with $45 billion allocated over several years for new detention centers. Is Peter Thiel’s Tolkien-inspired name for Palantir a Freudian slip? Or is it just a goading reminder of how untouchable he and other tech giants feel themselves to be? ICE is scary - but we would do well to remember that it is merely the visible shell of a much larger apparatus, one that obeys rules we do not know and that even its architects may no longer fully control.