
OpenAI Disrupts Covert Influence Operations With The Help of OSINT

8:30
 
Content provided by Daniel Clemens from ShadowDragon, LLC. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Daniel Clemens from ShadowDragon, LLC or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://player.fm/legal.

Key Points Discussed:

• Monitoring and Disruption Efforts: OpenAI collaborates with open-source intelligence practitioners to monitor internet activity and identify potential misuse of its language models by nation-states and other actors. It aims to disrupt sophisticated threats through continuous improvements to its safety systems and collaboration with industry partners.

• Recent Trends: OpenAI has detected and disrupted operations run by actors in Russia, China, and Iran, as well as by a commercial company in Israel. These operations, including ones named "Bad Grammar" and "Doppelganger," used AI to generate content but failed to engage authentically with audiences.

• Techniques and Tactics: The actors use AI to produce high volumes of content, mix AI-generated and traditional formats, and fake engagement by generating replies to their own posts. Despite these efforts, they struggled to reach authentic audiences.

• Defensive Strategies: OpenAI employs defensive design policies, such as friction-imposing features, to thwart malicious use. They also share detailed threat indicators with industry peers to enhance the effectiveness of disruptions.

• Case Studies: Examples include Russian and Chinese networks targeting various regions with limited engagement, and an Iranian network generating anti-US and anti-Israeli content. These operations highlight the ongoing challenge of AI misuse.

• Open Source Intelligence: Dekens discusses his work with ShadowDragon, including a white paper on using open-source intelligence to identify and monitor troll and bot armies. He explains how prompt error messages, such as AI refusal text accidentally left in published posts, can be a key indicator of malicious activity (see the sketch after this list).
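The indicator described in the last point can be checked with simple text matching. The following is a minimal illustrative sketch in Python, not taken from the podcast or the ShadowDragon white paper: it assumes you already have a collection of post texts gathered through OSINT tooling, and the phrase list is a hypothetical starter set rather than anything published by the episode's participants.

# Illustrative sketch: flag posts that contain tell-tale AI error/refusal phrases.
# Assumptions: `posts` is any iterable of post strings; the phrase list is a
# made-up starter set, not drawn from the podcast or white paper.
AI_ERROR_PHRASES = [
    "as an ai language model",
    "i'm sorry, but i cannot",
    "i cannot fulfill this request",
    "my knowledge cutoff",
]

def flag_suspected_ai_posts(posts):
    """Return (index, text) pairs whose text contains a known AI error phrase."""
    flagged = []
    for i, text in enumerate(posts):
        lowered = text.lower()
        if any(phrase in lowered for phrase in AI_ERROR_PHRASES):
            flagged.append((i, text))
    return flagged

# Example usage with two made-up posts:
sample = [
    "Great weather in the capital today!",
    "As an AI language model, I cannot create content that promotes...",
]
for idx, post in flag_suspected_ai_posts(sample):
    print(f"Post {idx} looks like raw model output: {post[:60]}")

Exact-phrase matching like this is deliberately simple; it only surfaces the most careless accounts, and real monitoring would combine it with account metadata and engagement patterns, which are outside the scope of this sketch.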
