BA (Durham University, Modern History), 1983
Royal Military Academy, Sandhurst, 1990
MBA (Cornell University, JGSM), 2013
Masters (Buckingham School of Modern War Studies), 2019
PhD (Buckingham School of Modern War Studies), 2020
Associate Research Fellow at Royal United Services Institute (RUSI), 2020
Senior Research Fellow at Humanities Research Institute, University of Buckingham
Lethal autonomous weapon systems (LAWS) – robotic weapons that have the ability to sense and act unilaterally depending on how they are programmed – will be capable of selecting targets and delivering lethality without any human interaction. This technology may still be in its infancy, but both semi-autonomous and other precursor systems are already in service. This, argues Paddy Walker, requires a material step change in both hardware and software but, once deployed, posits a significant change in how humans wage war. This article considers the behavioural and leadership challenges that arise from the deployment of such weapons and how unsupervised engagements might degrade the commander’s craft. ◼
Autonomous Weapon Systems (AWS) are defined as robotic weapons that have the ability to sense and act unilaterally depending on how they are programmed. Such human-out-of-the-loop platforms will be capable of selecting targets and delivering lethality without any human interaction. This weapon technology may still be in its infancy, but both semi-autonomous and other precursor systems are already in service. There are several drivers behind the move from merely automatic weapons to fully autonomous weapons able to engage a target based solely on algorithm-based decision-making. This requires a material step-change in both hardware and software and, once deployed, posits a significant change in how humans wage war. But complex technical difficulties must first be overcome if this new independent and self-learning weapon category is to be legally deployable on the battlefield. AWS also pose basic statutory, moral and ethical challenges. This paper identifies the manifest complexity involved in fielding a weapon that can operate without human oversight while still retaining value as a battlefield asset. The subject’s importance is that several well-tried concepts that have long comprised battlecraft may no longer be fit for purpose. While the recent pace of development in these technologies may appear extraordinary, fundamental fault lines endure. The paper also notes the interdependent and highly coupled nature of the routines envisaged for AWS operation, in particular the ramifications arising from their machine-learning spine, in order to demonstrate how detrimental these compromises are to AWS deployment models. In highlighting AWS deployment challenges, the analysis draws on broad primary and secondary sources to conclude that Meaningful Human Control (MHC) should be a statutory requirement in all violent engagements.
Challenges to the deployment of autonomous weapons
Autonomous Weapon Systems (AWS) are defined as robotic weapons that have the ability to sense and act unilaterally depending on how they are programmed. Such human-out-of-the-loop platforms will be capable of selecting targets and delivering lethality without any human interaction. This weapon technology may still be in its infancy, but both semi-autonomous and other precursor systems are already in service. There are several drivers behind the move from merely automatic weapons to fully autonomous weapons able to engage a target based solely on algorithm-based decision-making. This requires a material step-change in both hardware and software and, once deployed, posits a significant change in how humans wage war. But complex technical difficulties must first be overcome if this new independent and self-learning weapon category is to be legally deployable on the battlefield. AWS also pose basic statutory, moral and ethical challenges.
This thesis identifies the manifest complexity involved in fielding a weapon that can operate without human oversight while still retaining value as a battlefield asset. Its key research question therefore concerns the practical and technical feasibility of removing supervision from lethal engagements. The subject’s importance is that several well-tried concepts that have long comprised battlecraft may no longer be fit for purpose. In particular, legal and other obstacles challenge such weapons’ ability to remain compliant with the Laws of Armed Conflict. Technical challenges, moreover, include the setting of weapon values and goals, the anchoring of the weapon’s internal representations, and the management of its utility functions, its learning functions and other key operational routines. While the recent pace of development in these technologies may appear extraordinary, fundamental fault lines endure. The thesis also notes the interdependent and highly coupled nature of the routines envisaged for AWS operation, in particular the ramifications arising from the weapon’s machine-learning spine, in order to demonstrate how detrimental these compromises are to AWS deployment models. In highlighting AWS deployment challenges, the analysis draws on broad primary and secondary sources to conclude that Meaningful Human Control (MHC) should be a statutory requirement in all violent engagements.
Papers by Paddy Walker