Credit: Annegret Hilse/Reuters via Gallo Images
By Inés M. Pousadela
MONTEVIDEO, Uruguay, Dec 17 2025 (IPS)
Machines with no conscience are making split-second decisions about who lives and who dies. This isn’t dystopian fiction; it’s today’s reality. In Gaza, algorithms have generated kill lists of up to 37,000 targets.
Autonomous weapons are also being deployed in Ukraine and were on show at a recent military parade in China. States are racing to integrate them into their arsenals, convinced they’ll maintain control. If they’re wrong, the consequences could be catastrophic.
Unlike remotely piloted drones, where a human operator pulls the trigger, autonomous weapons make lethal decisions. Once activated, they process sensor data – facial recognition, heat signatures, movement patterns – to identify pre-programmed target profiles and fire automatically when they find a match. They act with no hesitation, no moral reflection and no understanding of the value of human life.
Speed and lack of hesitation give autonomous systems the potential to escalate conflicts rapidly. And because they work on the basis of pattern recognition and statistical probabilities, they bring enormous potential for lethal mistakes.
Israel’s assault on Gaza has offered the first glimpse of AI-assisted genocide. The Israeli military has deployed multiple algorithmic targeting systems: it uses Lavender and The Gospel to identify suspected Hamas militants and generate lists of human targets and infrastructure to bomb, and Where’s Daddy to track targets to kill them when they’re home with their families. Israeli intelligence officials have acknowledged an error rate of around 10 per cent, but simply priced it in, deeming 15 to 20 civilian deaths acceptable for every junior militant the algorithm identifies and over 100 for commanders.
The depersonalisation of violence also creates an accountability void. When an algorithm kills the wrong person, who’s responsible? The programmer? The commanding officer? The politician who authorised deployment? Legal uncertainty is a built-in feature that shields perpetrators from consequences. As decisions about life and death are made by machines, the very idea of responsibility dissolves.
These concerns emerge within a broader context of alarm about AI’s impacts on civic space and human rights. As the technology becomes cheaper, it’s proliferating across domains, from battlefields to border control to policing operations. AI-powered facial recognition technologies are amplifying surveillance capabilities and undermining privacy rights. Biases embedded in algorithms perpetuate exclusion based on gender, race and other characteristics.
As the technology has developed, the international community has spent over a decade discussing autonomous weapons without producing a binding regulation. Since 2013, when states that have adopted the UN Convention on Certain Conventional Weapons agreed to begin discussions, progress has been glacial. The Group of Governmental Experts on Lethal Autonomous Weapons Systems has met regularly since 2017, yet major military powers – India, Israel, Russia and the USA – have exploited the requirement to reach consensus to systematically block regulation proposals. In September, 42 states delivered a joint statement affirming their readiness to move forward. It was a breakthrough after years of deadlock, but major holdouts maintain their opposition.
To circumvent this obstruction, the UN General Assembly has taken matters into its hands. In December 2023, it adopted Resolution 78/241, its first on autonomous weapons, with 152 states voting in favour. In December 2024, Resolution 79/62 mandated consultations among member states, held in New York in May 2025. These discussions explored ethical dilemmas, human rights implications, security threats and technological risks. The UN Secretary-General, the International Committee of the Red Cross and numerous civil society organisations have called for negotiations to conclude by 2026, given the rapid development of military AI.
The Campaign to Stop Killer Robots, a coalition of over 270 civil society groups from over 70 countries, has led the charge since 2012. Through sustained advocacy and research, the campaign has shaped the debate, advocating for a two-tier approach currently supported by over 120 states. This combines prohibitions on the most dangerous systems — those targeting humans directly, operating without meaningful human control, or whose effects can’t be adequately predicted — with strict regulations on all others. Those systems not banned would be permitted only under stringent restrictions requiring human oversight, predictability and clear accountability, including limits on types of targets, time and location restrictions, mandatory testing and requirements for human supervision with the ability to intervene.
If it’s to meet the deadline, the international community has just a year to conclude a treaty that a decade of talks has been unable to produce. With each passing month, autonomous weapons systems become more sophisticated, more widely deployed and more deeply embedded in military doctrine.
Once autonomous weapons are widespread and the idea that machines decide who lives and who dies becomes normalised, it will be much harder to impose regulations. States must urgently negotiate a treaty that prohibits autonomous weapons systems directly targeting humans or operating without meaningful human control and establishes clear accountability mechanisms for violations. The technology can’t be uninvented, but it can still be controlled.
Inés M. Pousadela is CIVICUS Head of Research and Analysis, co-director and writer for CIVICUS Lens and co-author of the State of Civil Society Report. She is also a Professor of Comparative Politics at Universidad ORT Uruguay.
For interviews or more information, please contact research@civicus.org
Follow @IPSNewsUNBureau
Residents travel by boat through flooded streets in Colombo after heavy rains from Cyclonic Storm Ditwah. Credit: UNICEF, Sri Lanka
By the Economic and Social Commission for Asia and the Pacific (ESCAP)
BANGKOK, Thailand, Dec 17 2025 (IPS)
Cyclones Ditwah and Senyar are indications of a shifting disaster riskscape, not anomalies. Both storms broke historical patterns: Ditwah tracked unusually south along Sri Lanka’s coast before looping into the Bay of Bengal, dumping over 375 mm of rain in 24 hours and triggering landslides.
Senyar, only the second cyclone ever recorded in the Strait of Malacca, intensified near the equator and stalled over Sumatra, worsening floods in Aceh and North Sumatra.
The rising human and economic toll
According to the ESCAP Asia-Pacific Disaster Report 2025: Rising Heat, Rising Risk, the Asia-Pacific region is entering an era of cascading risks driven by intensifying heat and extreme weather, with marine heatwaves and warmer sea surface temperatures fueling this new normal.
Historically low-risk zones such as Sri Lanka’s central hills and Thailand’s southern strip are now climate-risk hotspots.
The report projects that in South and South-West Asia alone, average annual flood losses could increase from US$47 billion historically to US$57 billion.
Across Indonesia, Malaysia, the Philippines, Sri Lanka, Thailand and Viet Nam, the storms of late November 2025 caused more than 1,600 fatalities, left hundreds of people unaccounted for, and affected well over ten million people.
Widespread flooding and landslides displaced 1.2 million people, disrupted essential services and isolated numerous communities, underscoring the scale of the response required and the substantial economic fallout expected.
The value of preparedness
While improved early warnings have reduced loss of life compared to past decades, these storms show that disasters are becoming more destructive. Yes, early warnings saved lives: impact-based forecasts triggered mass evacuations, and community drills helped families reach safety. But thousands were still stranded.
Alerts arrived, yet on-the-ground implementation was unclear, and some evacuation routes were already flooded. In many cases, social media became the lifeline when official systems fell short.
The trend is clear: technology alone cannot save lives without trust and rehearsed responses. Warnings work only when people know what to do and feel confident acting.
The ESCAP multi-donor Trust Fund for Tsunami, Disaster and Climate Preparedness shows that investing in preparedness pays off many times over. Its 2025–26 call for proposals offers countries a chance to strengthen coastal resilience, integrate science and technology and embed community-led action — before the next storm season tests our readiness.
The lessons we must learn
Early warnings have their limits. In many areas, alerts were issued and hotlines opened, yet fast-rising floods left families stranded, relying on rescue teams and volunteers. These events show that mobility constraints and uneven household preparedness can limit action even when information is available.
Community-led initiatives, such as those championed following the 2004 Indian Ocean tsunami, demonstrate how local knowledge and regular drills improve decision-making. Twenty years later, social cohesion has become a marker of resilience.
For example, the Bangladesh Cyclone Preparedness Programme (with 76,000 volunteers) has sharply reduced cyclone deaths by delivering house-to-house warnings and guiding evacuations.
Ditwah and Senyar exposed how rapid urban growth without risk-informed planning magnifies disaster impacts. Colombo’s wetlands have shrunk by 40 per cent, while Hat Yai’s drainage was overwhelmed.
Many hard-hit towns in Sumatra were located in known landslide-risk zones, resulting in severe disruptions to hospitals, transport networks and local businesses.
When natural buffers disappear, rainfall that once drained slowly now floods cities within hours. Urban resilience depends on integrating risk into development planning by preserving wetlands, enforcing zoning and investing in drainage and flood defences.
Infrastructure alone is not enough; it must be designed for extremes. Cities that embed resilience into planning and protect natural systems are better positioned to withstand future storms and safeguard economic activity.
The Asia-Pacific region is faced with converging risks, with storms amplifying monsoonal hazards, cascading into mudslides and exacerbated by infrastructure weaknesses. Regional cooperation is no longer optional – it is the foundation for resilience in the most disaster-impacted region of the world.
In November 2025, eight countries, including Indonesia, Sri Lanka and Thailand, activated the International Charter on Space and Major Disasters, gaining rapid access to satellite imagery for emergency planning and proving the value of shared systems (see figure).
As floodwaters surged across the region, participants at the ESCAP Committee on Disaster Risk Reduction reaffirmed their commitment to regional early warning systems and anticipatory action – because hazards do not respect borders.
The Asia-Pacific region’s resilience depends on investing in people and preparedness cultures, regional solidarity, urban planning for extremes, protecting natural buffers and ensuring that last-mile guidance reaches every household.
Building generations and societies equipped to manage rising risks is the smartest investment for a safer future.
Source: ESCAP
IPS UN Bureau
Follow @IPSNewsUNBureau
Police commissioner Landry Bignon Delcoz Kindjanhoundé was brought on Tuesday 16 December 2025 before the Cour de répression des infractions économiques et du terrorisme (CRIET).
The author of a video hostile to the government, police commissioner Landry Bignon Delcoz Kindjanhoundé is now at the CRIET, where he will be heard by the special prosecutor. The officer appeared in a short video clip circulated on social media in which he sharply attacked the head of state and demanded his resignation. He was arrested on Tuesday 9 December 2025.
A.A.A
A second wave of mutineers and civilians arrested in connection with the foiled coup attempt in Benin is expected at the Cour de répression des infractions économiques et du terrorisme (CRIET).
Hearings are continuing to establish responsibility in the foiled mutiny in Benin. Further groups of soldiers are expected at the CRIET. They will be presented to the CRIET’s special prosecutor and heard by the investigating commission and the liberty and detention chamber. Civilians are also among those arrested.
At the end of Tuesday’s hearings at the CRIET, 30 soldiers and one civilian were remanded in custody. Lieutenant-Colonel Pascal Tigri, the leader of the mutiny, remains at large.
A.A.A
The Commission Électorale Nationale Autonome (Cena) took delivery, on Monday 15 December 2025 at the central police station in Parakou, of 6,700 voting booths intended for the Borgou and Alibori departments.
Preparations for the legislative and communal elections scheduled for 11 January 2026 in Benin are proceeding normally. The 6,700 voting booths received will serve the communes of Alibori and Borgou. “A distribution plan will be drawn up by the directorate of equipment and operations. Instructions will be given. Once the distribution is set, each focal point will be responsible for collecting the number needed to cover the elections in their commune,” explained Hervé Arcadius Zinzindohoué, head of the electoral operations planning service at the CENA. Polling station officials will collect the materials in each arrondissement.
The metal booths will not be retrieved after the elections; they will be stored in arrondissements, communes and departments for future use.
A.A.A
Despite strategic rivalry, bureaucratic behavior in China and the United States follows strikingly similar logics. Drawing on comparative research across foreign aid, environmental governance, and pandemic response, we show that Chinese and U.S. bureaucrats are often driven by common incentives. Career pressures, blame avoidance, political signaling, and risk aversion shape day-to-day decision-making on both sides, frequently producing comparable outcomes despite very different political systems. Understanding these shared bureaucratic dynamics helps explain why the two superpowers can appear deeply polarized politically, yet are surprisingly predictable in practice. Beneath geopolitical rivalry, common administrative logics continue to anchor state action.