Content provided by Asia-Pacific Institute for Law and Security and Asia-Pacific Institute for Law. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Asia-Pacific Institute for Law and Security and Asia-Pacific Institute for Law or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://player.fm/legal.

AWS, the Alignment problem and regulation - Brendan Walker-Munro and Sam Hartridge

47:08

In this interview, we continue our series on the legal review of autonomous weapon systems (AWS), speaking with two members of the Law and the Future of War research team about an issue that shapes design approaches to AWS: the alignment problem. In May 2023, reports circulated of an AWS under test that turned on its operator and eventually cut its communications link so it could pursue its originally planned mission. This prompted discussion about the alignment problem in AWS and its implications for future test, evaluation, verification and validation (TEVV) strategies and regulatory approaches to this technology.
The conference presentation referred to in the episode is summarised at the link below, with relevant excerpts extracted: Highlights from the RAeS Future Combat Air & Space Capabilities Summit (aerosociety.com):

'Could an AI-enabled UCAV turn on its creators to accomplish its mission?' (USAF)

[UPDATE 2/6/23 - in communication with AEROSPACE - Col Hamilton admits he "mis-spoke" in his presentation at the Royal Aeronautical Society FCAS Summit and the 'rogue AI drone simulation' was a hypothetical "thought experiment" from outside the military, based on plausible scenarios and likely outcomes rather than an actual USAF real-world simulation saying: "We've never run that experiment, nor would we need to in order to realise that this is a plausible outcome". ]

Col Tucker ‘Cinco’ Hamilton, the Chief of AI Test and Operations, USAF, ... cautioned against relying too much on AI noting how easy it is to trick and deceive.

... Said Hamilton: “We were training it in simulation to identify and target a SAM threat. And then the operator would say yes, kill that threat. The system started realising that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”

Dr Brendan Walker-Munro is a Senior Research Fellow with the University of Queensland's Law and the Future of War research group. Brendan's research focus is on criminal and civil aspects of national security law, and the role played by intelligence agencies, law enforcement and the military in investigating and responding to critical incidents. He is also interested in the national security impacts of law on topics such as privacy, identity crime and digital security.

Dr Sam Hartridge is a post-doctoral researcher at the University of Queensland. His research is currently examining the interplay between technical questions of AI safety, AI risk management frameworks and standards, and foundational international and domestic legal doctrine.

Additional Resources:


89 episodes

