
DJ Outvertigo, who created a playlist for the Take Back The Tech! Liveboard in 2024 around critical issues of surveillance, tech fascism and AI geopolitics, joined us for a lively conversation centered around the curated resources in her playlist — here.
DJ Outvertigo is a Palestinian refugee researcher born and raised in Lebanon. She is researching surveillance in academia and anti-colonial knowledge production for her PhD. Her work focuses on surveillance, tech fascism and AI geopolitics.
Her playlist offers a gathering of words, ideas, and images — fragments that have lit the way in moments of absolute apocalypse. When the world splits open, when it feels like there’s no ground left to stand on, these have been reminders that the only way is forward.
Palestine is a thread that pulls us together, no matter where we are, because it encompasses everything and summarizes our everyday struggle. Palestine means resistance, it means life refusing to be snuffed out. It means a determination to dream, to hold on, to fight for a better world for all.
This interview took place between Archismita (Take Back The Tech!) and DJ Outvertigo. All the links in the interview can be found collated in DJ Outvertigo’s playlist.
The interview will be published in three parts. This is Part 2, continued from Part 1.
Archismita: Yes, so you’re saying, for example, that if you're using Google Maps, then you have no right to talk about the tracking of people, because it uses the same technology — so do you want to keep using this technology or not? It's the same dilemma with different applications too. I think what you're saying is that the first step is to really understand what it is that we're selling right now, because it's not just our privacy but probably also our ethical understanding of which kinds of people are expendable and which are not. And if we want to keep using these technologies, then they should keep developing, at whoever's cost — we don't, and are not supposed to, care. That is probably the mindset we’re expected to be in.
In your resource titled ‘Stop Arming Israel’, you draw attention to CAAT’s interactive map of UK companies recently involved in arming Israel, “a call to action for our movements to see technology itself is connected and rooted in the arms industry” — reflected both in resistance to these technologies (Tokyo’s opposition to the GLP data center project) and in their use (battle drones against farmers in India) in our daily world.
What are some ways digital rights organisations and activists can start talking about this, not in a silo, but meshed in with larger digital rights advocacy and awareness?
DJ Outvertigo: Thanks for this question. That is exactly what I wanted to do with the real-life examples of farmers in India and of the movements opposing the building of data centers. It makes things very real, in the sense that it’s really impossible nowadays to ignore how deeply entangled tech companies are with the arms industry and the defense systems used by governments.
There has been a clear declaration by feminists, progressives and people within our movements and ecosystems that nowadays you cannot really separate governments from corporations — especially multinational corporations and how they move around the world. Corporations have not only government investments but government backing, in addition to sometimes being deeply entangled with the government itself. Because of this reality, it’s also now impossible to separate tech from the arms industry, or the tech and arms industries from governmental and industrial policies.
And I think one way we can begin addressing this question of what separates technology from the arms industry is to sort of understand how they both fuel each other. The arms industry now relies a lot more on AI, and there has been a push towards AI-based technology on the promise of precision. It is quite reminiscent of how, when drones first came into the world, they were celebrated for “strikes” during the US war on terror and for “psychotargets” in Pakistan. This technology was then exported everywhere. At that time, it was actually celebrated as a way to carry out precise and targeted killings, with a promise that it would never harm civilians. This basically turned out to be a lie.
We’ve seen people protest against this type of warfare everywhere. Now the drone itself has become a symbolic warfare tool that we see everywhere — deployed for surveillance as people are trying to organise. So it is not only used in wars anymore. It makes us wonder how we can address this without really naming the different entanglements and connections between tech companies, the arms industry and governments.
Also: who does it harm? Because it is being used not just in wars, but also against farmer organising and indigenous people trying to organise and protect their lands from data centers, or protect their water from data centers. We’ve seen this in the African continent as well.
So for this (seeing technology as connected to and rooted in the arms industry) to happen, and for it to not exist in a silo, the work should build on what is being and has been done by digital rights campaigners, labour unions and anti-war activists around the globe who are trying to make these links. It is, sort of, our duty, when we’re talking about this, to bring these different perspectives into our standpoints around this technology and the new wave of arms technology.
Archismita: Yes, so technology is now not just a separate thing that arms companies use — it has become something actively developed by arms companies for profit around warfare, i.e. the “entanglement.” Thank you, because it ties in so well with our next question.
Interviewer’s note: In a post by the No Tech for Apartheid campaign, a collective of Google and Amazon workers organising against the companies’ $1.2B Project Nimbus cloud contract with Israel, it was announced that the Pentagon had signed an AI deal to help commanders plan military maneuvers using Google AI technology.
The Washington Post reported on a new contract called "Thunderforge," which would allow U.S. military commanders to use AI tools to plan and help execute movements of ships, planes, and other military assets. The contract would be led by the start-up Scale AI, with AI tools from Microsoft and Google integrated into weapons developer Anduril's systems for use in the contract. This deal came on the heels of Google executives' dismantling of the company's AI Principles, which were written to prevent Google technology from being sold for weapons or surveillance — which the campaign commented on as well.
The post continued, “By dismantling the AI Principles and collaborating directly with weapons developers and the imperialist U.S. military, this deal reflects the greater trend of Big Tech companies becoming the infrastructure of the US military industrial complex and powering state violence worldwide. Sundar Pichai, Satya Nadella, and other tech oligarchs — who gleefully provided the tech that Israel used to slaughter Palestinians for over a year — continue to deepen their connection to fascism during the current Trump administration.

This move further shows us what we’ve long stated: Tech is not neutral. AI is not neutral. AI is a weapon in the hands of corporations who see no problem in profiting from empire and state violence. Big Tech’s deepening of their partnerships with imperial military forces shows us that we must resist technosolutionist narratives at every turn.”
Archismita: In a resource paper linked in your curation, you write that AI's use for "targeted and precise" killing is not merely a technological advancement but a continuation of extractive logics, as AI is situated within the dynamics of capitalism rather than neurophysiology, or just “imitating the human mind”. That is, it was developed not just out of simple curiosity about the human mind but for profit and expansion. This challenges mainstream beliefs about AI's use in warfare and technology, and may pique the reader’s interest in the paper. What is another bit you found interesting in the paper that you would like to share with the readers?
DJ Outvertigo: Yes, it’s quite an academic paper. I think what was super interesting about this paper was that it made the links you just made as well, with defence and arms. It delves deep into the origins of how AI came to be, and what you said is true. AI was first presented as: How can we imitate the human mind? How can we understand it better through mathematics, logic and numbers? How can we imitate how our neurons work, or how our brain functions?
Then it was sort of seen as a way to actually accelerate the labour process, like you’re saying: accelerate production and make more money. What is so interesting about this paper for me, and I really love it as a reference, is that it talks about the institutionalisation of AI. This process was connected not only to the idea of companies owning or investing in the development of AI; it was also related very heavily to war and militarism at that time.
I think it’s a very interesting paper because it doesn’t begin where most tracing of AI around us begins. We usually begin tracing it in direct connection to companies that are well-known today or have stopped functioning. We trace it sometimes from the 60s, sometimes from the 90s; it depends on what a person is trying to achieve through this tracing. But this paper traces the whole logic of funding and investing in AI back to 1947. The paper links it to the Korean war, and that was, for me, both very interesting and scary — in the sense that this idea of AI for warfare has been around since 1947.
It’s not new at all, and it was arms companies hosted in universities, especially in the US, that were trying to imagine what this would look like. And now, of course, student protesters are saying that our universities are complicit. This makes me think that this complicity goes back way, way longer than we think — and also that it goes way deeper than we think. And suddenly, it’s as if AI’s conceptual or institutional origins have become very neutral, almost to the point where people say this is science, this is technology, this is what has been achieved in the last three decades — this idea of AI as a neutral technology.
This notion is what I think the work of feminist techies, in particular, has challenged by showing the many biases that AI can have. It also gives you the idea that data colonialism, the obsession with gathering or collecting data as a tool to suppress, is not something recent, or even confined to the 20th century. It has been there in the ways empires used to document and gather data about the colonies, even through academic disciplines, like anthropology, that were used as a way to colonise better. This is sort of how this paper conceptualises AI. I think it is a very interesting paper that can help us orient our frameworks around the question of the origin of AI, both politically and militarily.
Part 1 of this interview can be found here.
Click here for Part 3.