“Ice is just around the corner,” my friend said, looking up from his phone. We were writing at a coffee shop in one of the oldest neighborhoods of New York City, where schools and churches have supported thriving migrant communities since long before the United States existed. Now the agents of this rogue federal agency – known for civil rights abuses such as racial profiling, wrongful detention, medical neglect and inhumane detention conditions – were just footsteps away, shaking down our neighbors in their homes and at the park across the street.

A day earlier, I had met with foreign correspondents at the United Nations to explain the AI surveillance architecture that Ice is using across the United States. The law enforcement agency uses targeting technologies that one of my past employers, Palantir Technologies, has both pioneered and proliferated – tools I was once charged with illustrating as a graphic designer and writer, whose consequences I am only now coming to understand. Although largely invisible, technology like Palantir’s plays a major role in world events, from wars in Iran, Gaza and Ukraine to the detainment of immigrants and dissident students in the United States. But despite its ubiquity, lawmakers, technologists and the media are failing to protect people from the threat of this particular kind of weaponized AI, partly because they haven’t recognized it by name.

Known as intelligence, surveillance, target acquisition and reconnaissance (Istar) systems, these tools, built by several companies, allow users to track, detain and, in the context of war, kill people at scale with the help of AI. They deliver targets to operators by combining immense amounts of publicly and privately sourced data to detect patterns, and are particularly helpful in projects of mass surveillance, forced migration and urban warfare. Also known as “AI kill chains”, they pull us all into a web of invisible tracking mechanisms that we are just beginning to comprehend, yet are starting to experience viscerally in the US as Ice wields these systems near our homes, churches, parks and schools.

  • artyom@piefed.social · 7 points · 6 days ago

    “targeting technologies” is all you really need to know here. They’re going to use AI to harass anyone it decides is illegal and deport them without due process. And, of course, AI never makes mistakes…

  • wildncrazyguy138@fedia.io · 6 points · 6 days ago

    Remember that Sauron used the palantír to turn Saruman to the side of evil. Our equivalent is now trying to do the same in the real world.

  • The_Italian_Uncut@beehaw.org · 2 points · 6 days ago

    Powerful testimony. The fact that someone who worked inside Palantir is now speaking out about the real-world impact of its tools — from ICE raids in New York to bombing campaigns in Gaza — is a crucial wake-up call.

    What’s most disturbing is how these “AI kill chains” blur the line between domestic policing and warfare. They’re not just used in conflict zones. They’re being normalized in our cities, schools, and neighborhoods.

    We’ve covered how war is changing — from the rearmament of Europe to the erosion of international law in Gaza. But this piece shows something deeper: the privatization of violence through algorithmic systems.

    Companies like Palantir aren’t just selling software. They’re selling decision-making power — and doing it without democratic oversight.

    The question isn’t just “how do we stop this?” but “how did we let it start?”

    👉 theitalianuncut.ch