Across conflict zones and fragile democracies, technologies once imagined as tools of empowerment and exploration now serve as instruments of surveillance, repression and even killing. A series of investigations into the use of technology against Women Human Rights Defenders (WHRDs) in the conflict-affected contexts of Ethiopia, Kashmir, Lebanon, Pakistan, Sudan and Venezuela demonstrates how technologies, whether simple or advanced, are repeatedly weaponised against defenders. The similarities in the patterns of technology-facilitated violence across these contexts are not incidental. They are intentional, structured through state policies, security doctrines and corporate designs driven by profit and capitalist greed. Armed actors then take these systems further, deploying them as tools of intimidation, oppression and direct violence.
In Ethiopia, conversations are intercepted, leaked and manipulated to silence and stigmatise. In Kashmir, civilians are conscripted into operating surveillance systems that erode privacy and intensify scrutiny of women defenders’ lives. In Lebanon, Israeli forces have deployed AI-driven targeting to strike journalists and defenders, erasing the civilian–combatant distinction. In Pakistan, trans defenders are exposed to digitally amplified hate, doxxing and public humiliation rooted in imported anti-trans narratives. In Sudan, the Rapid Support Forces have enforced long blackouts and monopolised satellite connections, severing defenders from their networks. In Venezuela, laws and applications criminalise dissent and invite communities to report each other, with women defenders singled out for gendered harassment and stripped of the ability to move freely.
Taken together, these contexts reveal a common pattern: technologies are weaponised in places where defenders are already vulnerable, amplifying risks and leaving women with no effective protection or legal recourse. Surveillance, digital harassment, blackouts and targeted strikes are not isolated incidents but part of a broader trend where tools meant for communication and connection become instruments of control. For WHRDs, the consequences are compounded by gendered stigma that transforms political dissent into personal attack. In each case, accountability mechanisms are either absent or inaccessible, meaning the very structures that should safeguard rights are those through which violations are enacted.
Over the last few years, Israel's conduct in Palestine and across the MENA region has illustrated the most severe and systematic convergence of military technology and violations of fundamental human rights. The targeting of journalists and WHRDs, the large-scale displacement of women and girls, and the use of AI systems to catalogue and strike individuals reveal a pattern of deliberate harm. The pager attacks in Lebanon demonstrate the reach of this genocidal state across the entire supply chain fuelling technology. The use of digital tools in this context, where communication devices are turned into instruments of death, underscores how technologies built for civilian use are reconfigured as weapons of war.
It is extremely challenging to document these layered threats across technological systems because of the opacity surrounding their design, and regulating these developments is another mammoth challenge. Technological systems operate across borders and jurisdictions, while the international legal system remains slow, ineffective and ill-equipped to address harms that are transnational, rapidly evolving, and embedded in both corporate and state infrastructures. The persistent difficulty of regulating structured disinformation aimed at harming individuals illustrates the point. Disinformation has been one of the earliest and most pervasive forms of tech weaponisation, yet efforts to create legal protections against it are constrained by overlapping problems: the risk of states abusing regulation to silence dissent, the dominance of a handful of technology companies whose presence cuts across national boundaries, and the imposition of moderation standards shaped primarily by Western models. These dynamics leave defenders, especially WHRDs, exposed to layered threats without accessible pathways to justice and remedy.
In essence, across contexts, technologies are repeatedly integrated into systems of control and repression, often without effective safeguards or oversight. For WHRDs, the risks are compounded, as political dissent intersects with gendered violence and stigma, making them disproportionately exposed to surveillance, harassment and attack.
On paper, and in the corridors of the UN, there is recognition of these dangers. The UN Secretary-General has described Lethal Autonomous Weapon Systems as “morally repugnant” and called for a binding treaty by 2026. The General Assembly’s adoption of Resolution 79/L.77 signals recognition that autonomous systems require urgent regulation. OHCHR has warned that spyware and surveillance tools are direct threats to privacy and human rights. UNESCO’s Internet for Trust guidelines reinforce the need for platforms to act transparently and align with international human rights standards.
Yet this recognition has not yet translated into meaningful pressure on states and companies, and regulatory standards remain uneven in practice. States continue to use technology in ways that undermine rights, and platforms operate without accountability mechanisms that would protect those most at risk.
This weaponisation of technology is neither neutral nor accidental. It reflects long histories of power in which tools of governance, law and war are shaped to maintain hierarchies rather than dismantle them. For WHRDs, who already work from positions of marginalisation, the imposition of surveillance, blackouts, harassment and targeted violence is a reminder that the digital sphere is built upon the same colonial legacies that structured territorial conquest: extraction, control and erasure. Technologies become another layer through which patriarchal and imperial interests are reproduced, leaving defenders to navigate violence that is both intimate and systemic.
Responding to this requires more than technical regulation. It demands a feminist and post-colonial approach that insists on centring the voices of those most affected, challenges the dominance of Western models in global governance, and recognises that justice must extend across borders. WHRDs are not only targets of repression but also key actors in articulating alternative visions of safety and solidarity. Their experiences reveal how accountability must be grounded in care, equity and collective protection rather than in frameworks that privilege state security or corporate profit. Without such shifts, the global governance of technology will remain incomplete, and its harms will continue to fall most heavily on those who defend rights from already precarious positions.
John Perry Barlow, a pioneering advocate for digital rights, once imagined cyberspace as a realm “independent of the tyrannies you seek to impose on us.” Nearly three decades later, that vision of the internet is no longer imaginable. Far from being a space of freedom, the internet and the technologies it powers have become a terrain that states and corporations rule with near-complete opacity. These technologies allow those with power to blur the line between civilian and military use and to target journalists and defenders with surveillance, harassment and violence. This is not a free, equalising space, but one defined by systems of control that reinforce the vulnerabilities defenders already face.
For those whose work is already shaped by marginalisation, digital repression compounds existing layers of exclusion. The loss of privacy, the silencing of networks through blackouts, and the weaponisation of stigma do not occur in isolation; they intersect with structural inequalities of class, race, caste, ethnicity and sexuality. The digital sphere thus reproduces older colonial logics of control while creating new forms of dependency on corporate power, Western institutions and state infrastructures. Technologies become not only instruments of surveillance but also barriers to participation, leaving defenders to navigate conditions where their visibility is both necessary for advocacy and a source of heightened danger.