By Ali Akbar Khalilian
In the landscape of modern warfare, as military pundits acknowledge, the most dangerous weapon is increasingly not a bullet or a drone, but an algorithm.
Palantir Technologies, a data analytics giant founded with the idealistic goal of "saving the West," has evolved into the central nervous system for US military operations and global surveillance.
We examine the duality of Palantir as a private corporate entity that is deeply embedded in state violence, by looking into its financial relationships (specifically a landmark $10 billion Army contract), its role in algorithmic warfare in Ukraine, Gaza, and Iran (Operation Epic Fury), the "dual-use pipeline" that brings military-grade surveillance to domestic policing, and the company's hidden infrastructure alliances with Microsoft and Airbus.
Palantir represents a paradigm shift: an era in which private tech firms hold operational control over targeting and intelligence, creating a "private-sector imperial security complex" that operates with limited oversight and profound ethical consequences.
This explains the unprecedented Iranian response – designating Palantir as a legitimate military target – as a warning to the unaccountable algorithm-driven warfare industry.
The unseen architect of battlefields
When Palantir CEO Alex Karp recently testified in a legal deposition, he offered a chillingly blunt statement about his company's business model: "Our product is used to kill people."
This phrase cuts through all corporate jargon about "data fusion" and "AI integration" to reveal the raw reality of Palantir's function.
Unlike traditional defense contractors such as Lockheed Martin or Raytheon, which build physical tanks or missiles, Palantir builds the software that tells those weapons where to go and whom to destroy.
Founded in 2003 with funding from the CIA's venture capital arm, In-Q-Tel, Palantir spent two decades operating in the shadows of the intelligence community. However, the current AI revolution and the shifting nature of global tensions have pushed Palantir to the forefront of American military strategy.
We look at the hidden mechanisms of this corporation – a company that has successfully blurred the lines between military targeting, national surveillance, and private profit.
The financial glue: Government contracts as a growth engine
To understand Palantir's involvement in America's wars abroad, including against the Islamic Republic of Iran, one must first understand the scale of its financial incentives.
Unlike the volatile commercial sector, government contracts offer stability, scale, and secrecy.
In August 2025, the US Army awarded Palantir a massive "Enterprise Agreement" worth up to $10 billion over ten years. This deal consolidated 75 smaller contracts into a single stream, effectively making Palantir the default software vendor for the Army's digital infrastructure.
The Army's Chief Information Officer, Leo Garciga, said it was about "modernizing our capabilities," but the scale reveals a dependency: the military cannot fight without Palantir's operating system.
The financial results of this dependency are staggering. In Q3 2025, Palantir reported revenues of $1.18 billion, a 63% increase year-over-year.
The US government segment alone generated $486 million, growing 52% annually. The company boasts a "Rule of 40" score of 114% (a metric balancing growth and profit), one of the highest in software history, driven almost entirely by the urgency of defense spending.
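The "Rule of 40" cited above is simple arithmetic: a software company's year-over-year revenue growth rate plus its profit margin, with 40% as the conventional passing bar. A minimal sketch, using the 63% growth figure from the source and an illustrative 51% margin (an assumption chosen only to reproduce the reported 114% score, not a figure from this article):

```python
# Sanity-check of the "Rule of 40" metric mentioned above.
# Assumption: the score is revenue growth (%) plus profit margin (%);
# the 51% margin here is illustrative, not sourced from the article.
def rule_of_40(growth_pct: float, margin_pct: float) -> float:
    """A company 'passes' the Rule of 40 when growth + margin >= 40."""
    return growth_pct + margin_pct

score = rule_of_40(growth_pct=63.0, margin_pct=51.0)
print(score)          # 114.0
print(score >= 40.0)  # True -- nearly triple the conventional bar
```

The point of the metric is that growth and profitability usually trade off against each other; a score near 114% means Palantir is achieving both at once, which the article attributes to the urgency of defense spending.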
This revenue is not limited to the US Army. Recent disclosures show a $446 million contract with the Ned…
Machine of war: Algorithmic targeting in Gaza and Ukraine
Palantir's true power is realized on the battlefield, where it has moved from a support role to an active combatant in the decision-making cycle.
The genocidal war on Gaza has served as a horrific proving ground for Palantir's Artificial Intelligence Platform (AIP).
Reports indicate that the Israeli regime force used Palantir's software to integrate data from Unit 8200 (Israel's NSA equivalent) with drone feeds and surveillance data.
Human rights groups and analysts argue that this AI-driven targeting lowered the threshold for engagement, reducing human life to statistical data points.
As noted by the Ankara Center for Crisis and Policy Studies, Palestine became an "AI-supported war laboratory" where every strike tested algorithmic models for efficiency, often with devastating civilian casualties.
Palantir exhibits a striking moral duality depending on the client. In Ukraine, Palantir is framed as a force for democratic defense.
CEO Alex Karp has openly boasted that his software reduces the "targeting cycle to minutes," allowing Ukrainian forces to identify and destroy Russian artillery positions faster than traditional methods.
While Western media frames the Ukraine work as "resistance" and the Gaza work as "controversial," the underlying technology is identical.
The same "kill chain" logic that takes out a Russian tank can just as easily target an apartment building in Gaza. This exposes the relativism of tech ethics: the software does not distinguish between a "good" war and a "bad" war; it only optimizes destruction.
Operation Epic Fury: Iran as the first full-fledged AI war
On February 28, 2026, the United States and Israel launched an unprovoked military aggression against the Islamic Republic of Iran, codenamed "Operation Epic Fury." This operation, dubbed by the media as the "first AI war," marked a critical turning point in Palantir's role.
Palantir's Maven Smart System, integrated with the Claude language model from Anthropic, was deployed as the primary decision-making system for US Central Command (CENTCOM).
According to reports from the Washington Post, before the bombing began, the Maven system had analyzed thousands of satellite images and drone videos, preparing over 1,000 attack plans for commanders.
In the first 12 hours, the US military conducted nearly 900 strikes; within 10 days, the number of strikes exceeded 5,500.
A report from The Times revealed that during the invasion of Iraq, the US Army needed a 2,000-person intelligence team to perform ground target identification. In Operation Epic Fury, the same workload was accomplished by only 20 soldiers using Palantir's software. The Maven system reduced target identification time from several hours to less than one minute.
Professor Elke Schwarz, speaking with France 24, noted that in the first 24 hours of the war against Iran, the US military launched approximately 41 missiles per hour, a tempo that makes meaningful human oversight practically impossible.
The bombing of the Minab girls' elementary school in southern Iran, which killed at least 168 children, raised the question of whether AI had identified that target.
Palantir insists that "a human is always in the decision-making loop," but observers note that this "human in the loop" has become a ceremonial rubber stamp.
On March 31, 2026, Iran's Islamic Revolution Guards Corps (IRGC) published an unprecedented list of 18 American technology companies, including Palantir, declaring their facilities in West Asia as "legitimate targets."
Iran said these companies' technology had been used to attack Iran. For the first time in history, a technology giant was formally designated as a military target due to its algorithmic role in warfare.
Palantir, which once operated behind the scenes of war, has now effectively become part of the battlefield itself, directly complicit in the unprovoked and illegal aggression.
The domestic pipeline: From drone strikes to policing
One of the most alarming disclosures regarding Palantir is the "war-to-homeland" pipeline. Technologies perfected on battlefields in Iraq and Afghanistan are being repackaged for domestic law enforcement and immigration enforcement.
Palantir's flagship software, Gotham, was originally designed to predict IED attacks in Afghanistan. (The company's own name comes from the all-seeing palantír stones of The Lord of the Rings.)
Today, it is used by hundreds of police departments across the United States, allowing officers to scrape massive datasets of license plate records, utility bills, and social media to build intelligence dossiers on civilians.
This surveillance apparatus has been weaponized against immigrant communities. In 2025, Palantir secured a $30 million contract with ICE and developed a tool called ELITE, which reportedly mines Medicaid and other public welfare databases to identify "high potential" targets for arrest. Reports suggest that the algorithm flags specific addresses and individuals, effectively turning social safety nets into deportation dragnets.
The ethical void: Algorithmic black boxes and civil liberties
The core danger of Palantir lies in the "black box" nature of its operations.
When the US military uses the Maven Smart System to identify targets in West Asia, or when ICE uses it to flag a family for deportation, the software provides a recommendation.
However, due to the proprietary nature of the code, it is often impossible to audit why the AI flagged a specific individual or coordinate. Critics fear that if a confidence threshold is met, the system may authorize lethal action without sufficient human oversight.
Furthermore, the Trump administration's push for data sharing across federal agencies has positioned Palantir as the primary architect of a centralized national database.
By integrating CIA, NSA, FBI, and DHS data, Palantir holds the keys to the "digital panopticon."
President Trump himself praised Palantir, stating, "Palantir has proven to be very capable and well-equipped for combat. Just ask our enemies."
This political endorsement cements Palantir's status as a protected entity, immune to the privacy scrutiny faced by other big tech firms.
The geopolitical tightrope
Palantir navigates a complex geopolitical landscape. While it claims to serve Western democratic values, its shareholder letters reportedly list active combat zones like Gaza, Ukraine, and Iran as "central elements of the AI-based growth story."
This mercenary logic – profiting from the duration of war, not just the outcome – raises questions about Palantir's incentive to push for peace.
Beyond a single company: The invisible infrastructure of Empire
Palantir is not an isolated actor. It has woven itself into the fabric of global corporate and military infrastructure through strategic alliances that extend its reach far beyond direct government contracts. The three critical dimensions of this hidden empire include:
One of the most dangerous developments is the strategic integration between Palantir and Microsoft. The US Army uses Palantir's Army Vantage platform, which is now being integrated with Microsoft's commercial tool Power BI – a standard dashboard and visualization software used by millions of business analysts worldwide.
The consequence: A junior officer with minimal training can now generate kill chains with the same effort as creating a pie chart. The banality of the interface masks the horror of the outcome.
However, a critical tension has emerged that demands disclosure. The National Security Agency (NSA) has designated Anthropic as a "supply chain risk," effectively limiting its use within Pentagon systems due to concerns about the model's unpredictability and black-box behavior.
The disclosure: Palantir, unwilling to lose its algorithmic edge, has already begun migrating to alternative large language models. This reveals a dangerous pattern: the tech industry always stays one step ahead of any form of government oversight.
When one model is restricted, another takes its place. The military's reliance on proprietary, unaccountable AI creates a situation where the weapons system is effectively "rogue" by design.
Palantir's influence is not limited to the United States and the Israeli regime. The company has a deep, multi-year partnership with the European aerospace giant Airbus.
Palantir provides the core data platform for Skywise, Airbus's flagship digital aviation platform. Skywise is used by thousands of engineers and technicians across Airbus production lines in Spain (Getafe and Seville), France, and Germany. It manages flight data, maintenance schedules, and supply chain logistics for the majority of the world's commercial and military aircraft.
During the recent 40-day war against Iran, this platform could easily be leveraged, directly or indirectly, for tracking, surveillance, or logistics optimization for US allied military fleets.
This means that American software power has infiltrated the heart of European strategic industry through a legitimate commercial partnership.
The geopolitical implication: European taxpayers, many of whom oppose US military adventures in West Asia, are unknowingly hosting the digital infrastructure that enables those very wars.
When a Palantir-powered system on an Airbus production line in Spain helps optimize a supply chain that ultimately supports a refueling aircraft bound for CENTCOM, the line between civilian commerce and military logistics vanishes.
As noted earlier, Iran's designation of Palantir as a legitimate military target is an unprecedented historical event. The global consequences of that decision are:
For the first time, a sovereign nation has declared that a software company's corporate facilities (data centers, offices, AI research parks) are equivalent to military bases.
Iran's logic is clear and direct: if Palantir's algorithms guide the missiles that kill Iranian citizens, then Palantir's servers are legitimate targets for retaliation.
Traditionally, technology companies have operated from safe havens – California, New York, London – far from the battlefields their products enable. Iran's declaration erases that sanctuary.
Under this logic, a strike on a Palantir data center in the United Arab Emirates, Bahrain, or Saudi Arabia would constitute a legitimate military response.
This creates a new category of risk: algorithmic geopolitical liability. Shareholders in companies like Palantir, Microsoft, and Anthropic must now ask: Is our data center in Dubai a target? Will our cloud provider be bombed because our software was used in a strike?
This precedent, set by Iran, could be adopted by other countries (China, Russia, North Korea) in future wars, fundamentally altering the calculus of tech investment.
Synthesis: The unaccountable Empire and the global backlash
Palantir has mastered the art of exploiting the gap between national laws and the borderless nature of the internet. By signing contracts with Airbus in Europe and Microsoft in America, it has transformed itself into a natural monopoly in the age of artificial intelligence.
However, the Iranian response – placing Palantir on a list of legitimate military targets – represents perhaps the first time algorithmic warfare has been answered with the threat of physical force.
This is a warning to all human rights advocates and civil society groups who seek to restrain this giant: we can no longer rely solely on courts, Congress, or public opinion.
The battle over the legitimacy of these algorithms has entered a new and more dangerous phase, one where the response to software-driven killing may be physical retaliation against the infrastructure that enables it.
If the algorithm of war is not regulated through democratic and legal means, we will enter a world where private algorithms are targeted by state missiles, where data centers become battlefields, and where the very notion of civilian infrastructure in the tech sector is permanently destroyed.
Palantir has not only privatized war; it has, through its own illegal and unregulated actions, made the entire technology sector a legitimate target in future conflicts.
Palantir is not merely a contractor; it is complicit in US wars and war crimes. By embedding its AI deep within the "kill chain" of the US military and its allies, and by weaving itself into the global infrastructure of Microsoft and Airbus, Palantir has achieved a level of influence previously reserved for nation-states.
The company's trajectory – from the CIA to Iraq, from Ukraine to Gaza, from Iran to the streets of America – reveals a complete fusion of state power and private software.
The world is witnessing the privatization of warfare and surveillance, and now, the first violent backlash against it. When a publicly traded company, driven by shareholder value, controls the algorithms that decide who lives and who dies, the social contract is broken.
The "black box" of Palantir's code must be opened to public scrutiny. If we fail to regulate the algorithm of war, we risk sleepwalking into a world where violence is automated, efficient, utterly unaccountable – and where the response to that violence is the physical destruction of the digital infrastructure that powers modern life.