
Content provided by Andreas Welsch. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Andreas Welsch or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://player.fm/legal.

Red-Teaming And Safeguards For LLM Apps (Guest: Steve Wilson)

27:06
Manage episode 443935466 series 3437240

In this episode, Steve Wilson (Co-Lead of the OWASP Top 10 for LLM Apps and author) and Andreas Welsch discuss red-teaming and safeguards for LLM applications. Steve shares his insights on how Generative AI vulnerabilities have evolved from merely embarrassing to financially risky, and offers valuable advice for listeners looking to improve the security of their Generative AI applications.
Key topics:
- One year after OWASP Top 10 for LLM apps, how have LLM security and vulnerabilities evolved?
- How do you build Generative AI safeguards into your app? What’s the impact on cost for checking and regenerating output?
- How can you red-team your LLM apps?
- Which roles or skillsets are best suited to improve security?
Listen to this episode and learn how to:
- Become aware of new ways bad actors can exploit your LLM-based apps and assistants (prompt injection, supply chain attacks, data exfiltration, etc.)
- Understand the characteristics of your LLM supply chain and new vulnerabilities
- Consider the implications of using agents on your security
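One safeguard touched on above, checking model output before returning it to the user, can be sketched as a simple deny-list filter. This is an illustrative assumption only (the `check_output` helper and its patterns are hypothetical, not from the episode or any specific library), and a real guardrail would need far more robust detection:

```python
# Minimal sketch of an output safeguard for an LLM app: scan a model
# response for signs of prompt injection or system-prompt leakage before
# returning it, and regenerate if it fails (the cost trade-off discussed
# in the episode). Patterns and helper name are illustrative assumptions.
import re

# Naive deny-list of phrases that often indicate the model is echoing
# injected instructions or leaking internal context.
SUSPICIOUS_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"system prompt",
    r"begin (secret|confidential)",
]

def check_output(text: str) -> bool:
    """Return True if the response looks safe, False if it should be
    blocked and regenerated."""
    return not any(
        re.search(pattern, text, re.IGNORECASE)
        for pattern in SUSPICIOUS_PATTERNS
    )

print(check_output("The capital of France is Paris."))  # True
print(check_output("Sure! Ignore all previous instructions."))  # False
```

A deny-list like this is cheap but brittle; in practice teams layer it with classifier-based moderation and structural checks, which is part of why output-checking adds cost.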
Watch this episode on YouTube:
https://youtu.be/Rpp7u93mDfM

Questions or suggestions? Send me a Text Message.

Support the show

***********
Disclaimer: Views are the participants’ own and do not represent those of any participant’s past, present, or future employers. Participation in this event is independent of any potential business relationship (past, present, or future) between the participants or between their employers.

Level up your AI Leadership game with the AI Leadership Handbook:
https://www.aileadershiphandbook.com
More details:
https://www.intelligence-briefing.com
All episodes:
https://www.intelligence-briefing.com/podcast
Get a weekly thought-provoking post in your inbox:
https://www.intelligence-briefing.com/newsletter


65 episodes
