Drones are becoming more important on current battlefields, as seen during the Turkish operation "Spring Shield" in Syria (Seufert, 2020, Le Monde) and the Chinese development of small swarm drones, which were sold to Saudi Arabia (Tucker, 2019). These drones are all human-guided rather than autonomous, but the future will see autonomously acting drones of every size on every battlefield. The OSCE is in the unique position of being able to set rules before this technology is fully functional. This paper examines the dangers of AI warfare and the OSCE's opportunities to constrain it. The author doubts that a total restriction of this technology is feasible and therefore recommends a "softer" approach: restricting the use of AI technology in warfare to a point where the process of developing and using AI for warfare can be kept at, or only slightly above, the level of danger posed by human-guided warfare.
RUSI Journal, 2020
The rapid proliferation of a new generation of artificial intelligence (AI)-augmented and AI-enabled autonomous weapon systems (AWS), most notably drones used in swarming tactics, could have a significant impact on deterrence, nuclear security, escalation, and strategic stability in future warfare. James Johnson argues that emerging iterations of AWS fused with AI systems will presage a powerful interplay of increased range, accuracy, mass, coordination, intelligence, and speed in a future conflict. In turn, the risk of escalatory use-them-or-lose-them situations between nuclear-armed military powers, and the attendant dangers posed by the use of unreliable, unverified, and unsafe AWS, will increase, with potentially catastrophic strategic outcomes.
2018
Terrorism, undoubtedly the most dangerous asymmetric threat of the 21st century, causes the deaths of thousands of people every year. Notably, counterterrorism does the same, perhaps even more. To keep the number of innocent victims as low as possible, authorized counterterrorism institutions tend to use drones in counterterrorism activities. We can observe that the use of drones has not decreased the number of innocent people killed or injured, and that their number remains relatively high. This raises a dilemma: should drones be used as a counterterrorism tool or not? Key words: artificial intelligence, drones, modern operations, terrorism
Revista Tecnológica - ESPOL
This article aims to characterize the unmanned drones used in the new global military defense strategies, considering the technological developments associated with artificial intelligence and robotics. To this end, a documentary study was carried out to identify the most representative developments in the market for autonomous military drones, observing their implications in present and future war scenarios. The study established the role that autonomous weapons are taking on in global geopolitics, assuming changes in the way intelligent technologies are incorporated to improve the autonomy of drones and meet the defense and attack requirements demanded by new battlefield scenarios. This points to coming years in which drones will increasingly serve as low-cost, high-precision lethal weapons, whose deployment across diverse scenarios will make military operations of various kinds more effective.
Stanford Law and Policy Review, 25, 2014
In this Article, I review the military and security uses of robotics and "unmanned" or "uninhabited" (and sometimes "remotely piloted") vehicles in a number of relevant conflict environments that, in turn, raise issues of law and ethics bearing significantly on both foreign and domestic policy initiatives. My treatment applies to the use of autonomous unmanned platforms in combat and low-intensity international conflict, but also offers guidance for the increased domestic uses of both remotely controlled and fully autonomous unmanned aerial, maritime, and ground systems for immigration control, border surveillance, drug interdiction, and domestic law enforcement. I outline the emerging debate concerning "robot morality" and computational models of moral cognition and examine the implications of this debate for the future reliability, safety, and effectiveness of autonomous systems (whether weaponized or unarmed) that might come to be deployed in both domestic and international conflict situations. Likewise, I discuss attempts by the International Committee for Robot Arms Control (ICRAC) to outlaw or ban the use of autonomous systems that are lethally armed, as well as an alternative proposal by the eminent Yale University ethicist Wendell Wallach to have lethally armed autonomous systems that might be capable of making targeting decisions independent of any human oversight specifically designated "mala in se" under international law. Following the approach of Marchant et al., however, I summarize the lessons learned and the areas of provisional consensus reached thus far in this debate in the form of "soft-law" precepts that reflect emergent norms and a growing international consensus regarding the proper use and governance of such weapons.
The paper delves into the shifting dynamics of international relations in the age of drone warfare. I explore the complex and morally ambiguous terrain of drone warfare in the era of globalisation, as the boundaries between nationalism and terrorism are perpetually blurring, raising concerns about military regulation, the ethics of the battlefield, and the delegation of wartime decision-making to AI. The paper throws light on the moral dilemma of operators sitting miles away from the conflict zone, detached from war's immediate consequences, and also centres on accountability and the implications of the abdication of human agency within international law.
CSI Review, 2021
Comprehending and analysing Artificial Intelligence (AI) is fundamental to embracing the next challenges of the future, specifically for the defence sector. Developments in this sector will involve both arms and operations. The debate is linked to the risks that automation could bring to the battlefield, specifically regarding Lethal Autonomous Weapons Systems (LAWS). While AI could bring many advantages in risk detection, protection, and preparation capabilities, it may also bring several risks to the battlefield and break basic principles of International Law. Indeed, having the human operator "out of the loop" could lead to unprecedented challenges and issues. Such weapons may also strengthen terrorist groups, allowing them to plan mass attacks or targeted assassinations with no human sacrifice. The article, divided into three parts, aims to analyse LAWS and its related issues. The first part introduces LAWS and its applications worldwide. The second summarizes the problems concerning International Humanitarian Law. Finally, the last part focuses on the search for proper regulation and the EU position on the topic.
The debate on and around "killer robots" has been firmly established at the crossroads of ethical, legal, political, strategic, and scientific discourses. Flourishing at the two opposite poles, with a few contributors caught in the middle, the polemic still falls short of a detailed, balanced, and systematic analysis. It is for these reasons that we focus on the nitty-gritty, the multiple pros and cons, and the implications of autonomous weapon systems (AWS) for the prospects of the international order. Moreover, a nuanced discussion needs to feature considerations of their technological continuity vs. novelty. The analysis begins with properly delimiting the AWS category as fully autonomous (lethal) weapon systems, capable of operating without human control or supervision, including in dynamic and unstructured environments, and capable of engaging in independent (lethal) decision-making, targeting, and firing, including in an offensive manner. As its primary goal, the article aims to move the existing debate to the level of a first-order structure and offers a comprehensive operationalisation of it. We propose an original framework based on a thorough analysis of six specific dilemmas, detailing the pro and con arguments for each: (1) (un)predictability of AWS performance; (2) dehumanization of lethal decision-making; (3) depersonalisation of the enemy (non-)combatant; (4) the human-machine nexus in coordinated operations; (5) strategic considerations; (6) AWS operation in law(less) zones. Concluding remarks follow. Keywords: autonomous weapon systems, killer robots, lethal decision-making, military ethics, artificial intelligence, security regulation, humanitarian law, revolution in military affairs, military strategy
Wild Blue Yonder, 2021
Here, the focus is the air domain. To make the discussion manageable, it is tightly constrained to air defense and avoids broadening into joint and combined operations. Even so, this gives scope to explore operational concepts that might stimulate thinking about the future and preparing for it. In all this, it is important to remember that AI enlivens other technologies. AI is not a stand-alone actor; rather, it works in combination with numerous other digital technologies, providing a form of cognition to them.
Defense & Security Analysis, 2019
Recent developments in artificial intelligence (AI) suggest that this emerging technology will have a deterministic and potentially transformative influence on military power, strategic competition, and world politics more broadly. After the initial surge of broad speculation in the AI-related literature, this article provides some much-needed specificity to the debate. It argues that, left unchecked, the uncertainties and vulnerabilities created by the rapid proliferation and diffusion of AI could become a major source of instability and great-power strategic rivalry. The article identifies several AI-related innovations and technological developments that are likely to have genuine consequences for military applications, from the tactical battlefield level to the strategic level.