Artwork

Content provided by N2K Networks, Inc. and N2K Networks. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by N2K Networks, Inc. and N2K Networks or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://player.fm/legal.
Player FM - Podcast App
Go offline with the Player FM app!

Prompts gone rogue. [Research Saturday]

25:44
 
Share
 

Manage episode 433408267 series 112238
Content provided by N2K Networks, Inc. and N2K Networks. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by N2K Networks, Inc. and N2K Networks or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://player.fm/legal.

Shachar Menashe, Senior Director of Security Research at JFrog, is talking about "When Prompts Go Rogue: Analyzing a Prompt Injection Code Execution in Vanna.AI." A security vulnerability in the Vanna.AI tool, called CVE-2024-5565, allows hackers to exploit large language models (LLMs) by manipulating user input to execute malicious code, a method known as prompt injection.

This poses a significant risk when LLMs are connected to critical functions, highlighting the need for stronger security measures.

The research can be found here:

Learn more about your ad choices. Visit megaphone.fm/adchoices

  continue reading

3015 episodes

Artwork

Prompts gone rogue. [Research Saturday]

CyberWire Daily

2,513 subscribers

published

iconShare
 
Manage episode 433408267 series 112238
Content provided by N2K Networks, Inc. and N2K Networks. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by N2K Networks, Inc. and N2K Networks or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://player.fm/legal.

Shachar Menashe, Senior Director of Security Research at JFrog, is talking about "When Prompts Go Rogue: Analyzing a Prompt Injection Code Execution in Vanna.AI." A security vulnerability in the Vanna.AI tool, called CVE-2024-5565, allows hackers to exploit large language models (LLMs) by manipulating user input to execute malicious code, a method known as prompt injection.

This poses a significant risk when LLMs are connected to critical functions, highlighting the need for stronger security measures.

The research can be found here:

Learn more about your ad choices. Visit megaphone.fm/adchoices

  continue reading

3015 episodes

All episodes

×
 
Loading …

Welcome to Player FM!

Player FM is scanning the web for high-quality podcasts for you to enjoy right now. It's the best podcast app and works on Android, iPhone, and the web. Signup to sync subscriptions across devices.

 

Quick Reference Guide