The U.S. Department of Homeland Security (DHS) has revealed new information that may raise concerns for OpenAI CEO Sam Altman. According to a recently published database, Immigration and Customs Enforcement (ICE) has been using artificial intelligence (AI) tools from OpenAI and Palantir in its operations. The disclosure comes shortly after Altman publicly criticized ICE’s practices, saying the agency has gone “too far.”
ICE Uses AI for Screening and Investigations
The DHS database shows that ICE relies on an AI-assisted resume-screening tool powered by OpenAI’s GPT-4. The tool, sold by a company called AIS, is designed to compare job applicants’ resumes against the requirements of a specific role and scores candidates on how closely their experience matches the job description.
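The database does not describe how the screening step is actually built, but the basic pattern is familiar: send the resume and the job description to a language model and ask for a match score. The sketch below, using OpenAI’s public Python client, is a minimal illustration under that assumption; the function name, prompt wording, and 0–100 scale are hypothetical, not details from the DHS disclosure.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def score_resume(resume_text: str, job_description: str) -> str:
    """Ask the model how closely a resume matches a job description."""
    response = client.chat.completions.create(
        model="gpt-4",  # the DHS database names GPT-4; the exact deployment is unknown
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a resume screening assistant. Compare the resume "
                    "to the job description and reply with a match score from "
                    "0 to 100 followed by a one-sentence justification."
                ),
            },
            {
                "role": "user",
                "content": f"Job description:\n{job_description}\n\nResume:\n{resume_text}",
            },
        ],
        temperature=0,  # keep the scoring as repeatable as possible
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(score_resume(
        "Five years of Python and data-pipeline experience.",
        "Seeking an engineer with Python and ETL experience.",
    ))
```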
Although AIS markets the tool for hiring, ICE’s use of it raises questions. The database also reveals that Palantir, a data analytics company led by CEO Alex Karp, has a “particularly close relationship” with ICE. Palantir provides AI tools that extract addresses and other information from documents. This helps ICE officers find leads for “Enforcement and Removal Operations,” which includes identifying individuals for potential deportation.
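The disclosure likewise gives no technical detail on Palantir’s extraction tooling. As a rough illustration of the general technique, pulling structured data such as addresses out of documents with a commercial LLM can be as simple as asking the model for JSON. Everything in the sketch below, including the model name, prompt, and output schema, is an assumption; the report does not name the models Palantir uses.

```python
import json

from openai import OpenAI

client = OpenAI()


def extract_addresses(document_text: str) -> list[str]:
    """Pull postal addresses out of free-form document text as a JSON list."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative stand-in; Palantir's actual models are undisclosed
        messages=[
            {
                "role": "system",
                "content": (
                    "Extract every postal address mentioned in the document. "
                    'Respond with JSON only, e.g. {"addresses": ["..."]}.'
                ),
            },
            {"role": "user", "content": document_text},
        ],
        response_format={"type": "json_object"},  # request machine-readable output
        temperature=0,
    )
    return json.loads(response.choices[0].message.content)["addresses"]
```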
In addition, Palantir offers “AI-Enhanced ICE Tip Processing” for urgent cases. This tool reviews and categorizes incoming tips from the public and translates them into English. While the exact AI models used are not disclosed, the DHS report notes that Palantir relies on “commercially available large language models” to perform these tasks.
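Again, the report names no specific systems beyond “commercially available large language models.” A minimal sketch of what a translate-and-categorize tip triage step could look like with such a model follows; the category labels and output format here are invented for illustration.

```python
from openai import OpenAI

client = OpenAI()

# Hypothetical labels; the DHS report does not publish the real taxonomy.
TIP_CATEGORIES = ["urgent-safety", "fraud", "immigration", "other"]


def triage_tip(tip_text: str) -> str:
    """Translate a public tip into English and assign one category label."""
    response = client.chat.completions.create(
        model="gpt-4o",  # stand-in for the unnamed commercially available LLMs
        messages=[
            {
                "role": "system",
                "content": (
                    "You process incoming tips. First translate the tip into "
                    "English if it is in another language, then label it with "
                    "exactly one of: " + ", ".join(TIP_CATEGORIES) + ". "
                    "Reply as: <english text> || <category>."
                ),
            },
            {"role": "user", "content": tip_text},
        ],
        temperature=0,
    )
    return response.choices[0].message.content
```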
Sam Altman Criticizes ICE’s AI Practices
The DHS disclosure comes just days after OpenAI CEO Sam Altman criticized ICE internally. In a Slack message to OpenAI staff, Altman expressed concern over how the agency was using AI tools. He said, “I love the U.S. and its values of democracy and freedom and will be supportive of the country however I can; OpenAI will too. But part of loving the country is the American duty to push back against overreach. What’s happening with ICE is going too far.”
Altman highlighted the difference between targeting violent criminals and broader immigration enforcement, suggesting that ICE’s current use of AI tools may cross ethical or legal boundaries. His comments signal a tension between OpenAI’s public stance on responsible AI use and the way its technology is being applied by federal agencies.
An OpenAI spokesperson clarified that the company does not have commercial contracts with DHS. “It’s possible that DHS is using ChatGPT or accessing the company’s application programming interface the way businesses do,” the spokesperson said. In other words, OpenAI may not have sold AI tools directly to ICE, but the agency could still be reaching OpenAI’s technology through third-party vendors such as AIS.
Palantir’s Role in Enforcement Operations
Palantir appears to be deeply embedded in ICE operations. The DHS database notes that the company’s AI tools help identify individuals’ addresses and manage tips for urgent enforcement cases. By translating and categorizing information, Palantir enables ICE officers to act more quickly on leads.
The report emphasizes that while the AI tools are “commercially available,” they are actively used in sensitive immigration cases. That level of integration has drawn scrutiny because it shows how directly AI outputs can shape law enforcement decisions. The disclosure also comes amid broader criticism of ICE after federal agents shot two people in Minneapolis last month, adding urgency to concerns over AI-assisted enforcement.
Although the database does not specify every model or software product in use, it clearly shows that both OpenAI and Palantir technologies are part of ICE’s toolkit. That raises questions about whether federal agencies are using AI ethically, and about how much control companies like OpenAI retain over their technology once third parties build on it or it becomes publicly available.
