Artwork

Content provided by Cynthia Kreng, Kendall Roden, Cale Teeter, Evan Basalik, Russell Young and Sujit D'Mello, Cynthia Kreng, Kendall Roden, Cale Teeter, Evan Basalik, Russell Young, and Sujit D'Mello. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Cynthia Kreng, Kendall Roden, Cale Teeter, Evan Basalik, Russell Young and Sujit D'Mello, Cynthia Kreng, Kendall Roden, Cale Teeter, Evan Basalik, Russell Young, and Sujit D'Mello or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://player.fm/legal.
Player FM - Podcast App
Go offline with the Player FM app!

Episode 502 - Azure Open AI and Security

 
Share
 

Manage episode 434431864 series 1987528
Content provided by Cynthia Kreng, Kendall Roden, Cale Teeter, Evan Basalik, Russell Young and Sujit D'Mello, Cynthia Kreng, Kendall Roden, Cale Teeter, Evan Basalik, Russell Young, and Sujit D'Mello. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Cynthia Kreng, Kendall Roden, Cale Teeter, Evan Basalik, Russell Young and Sujit D'Mello, Cynthia Kreng, Kendall Roden, Cale Teeter, Evan Basalik, Russell Young, and Sujit D'Mello or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://player.fm/legal.

Azure Open AI is widely used in industry but there are number of security aspects that must be taken into account when using the technology. Luckily for us, Audrey Long, a Software Engineer at Microsoft, security expert and renowned conference speaker, gives us insights into securing LLMs and provides various tips, tricks and tools to help developers use these models safely in their applications.

Media file: https://azpodcast.blob.core.windows.net/episodes/Episode502.mp3

YouTube: https://youtu.be/64Achcz97PI

Resources:

AI Tooling:

  1. Azure AI Tooling Announcing new tools in Azure AI to help you build more secure and trustworthy generative AI applications | Microsoft Azure Blog
    • Prompt Shields to detect and block prompt injection attacks, including a new model for identifying indirect prompt attacks before they impact your model, coming soon and now available in preview in Azure AI Content Safety.
    • Groundedness detection to detect “hallucinations” in model outputs, coming soon.
    • Safety system messagesto steer your model’s behavior toward safe, responsible outputs, coming soon.
    • Safety evaluations to assess an application’s vulnerability to jailbreak attacks and to generating content risks, now available in preview.
    • Risk and safety monitoring to understand what model inputs, outputs, and end users are triggering content filters to inform mitigations, coming soon, and now available in preview in Azure OpenAI Service.
  2. AI Defender for Cloud
  3. AI Red Teaming Tool

AI Development Considerations:

  1. AI Assessment from Microsoft
  2. Microsoft Responsible AI Processes
  3. Define Use Case and Model Architecture
  4. Content Filtering System
  5. Red Teaming the LLM
  6. Create a Threat Model with OWASP Top 10

Other updates:

  continue reading

262 episodes

Artwork
iconShare
 
Manage episode 434431864 series 1987528
Content provided by Cynthia Kreng, Kendall Roden, Cale Teeter, Evan Basalik, Russell Young and Sujit D'Mello, Cynthia Kreng, Kendall Roden, Cale Teeter, Evan Basalik, Russell Young, and Sujit D'Mello. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Cynthia Kreng, Kendall Roden, Cale Teeter, Evan Basalik, Russell Young and Sujit D'Mello, Cynthia Kreng, Kendall Roden, Cale Teeter, Evan Basalik, Russell Young, and Sujit D'Mello or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://player.fm/legal.

Azure Open AI is widely used in industry but there are number of security aspects that must be taken into account when using the technology. Luckily for us, Audrey Long, a Software Engineer at Microsoft, security expert and renowned conference speaker, gives us insights into securing LLMs and provides various tips, tricks and tools to help developers use these models safely in their applications.

Media file: https://azpodcast.blob.core.windows.net/episodes/Episode502.mp3

YouTube: https://youtu.be/64Achcz97PI

Resources:

AI Tooling:

  1. Azure AI Tooling Announcing new tools in Azure AI to help you build more secure and trustworthy generative AI applications | Microsoft Azure Blog
    • Prompt Shields to detect and block prompt injection attacks, including a new model for identifying indirect prompt attacks before they impact your model, coming soon and now available in preview in Azure AI Content Safety.
    • Groundedness detection to detect “hallucinations” in model outputs, coming soon.
    • Safety system messagesto steer your model’s behavior toward safe, responsible outputs, coming soon.
    • Safety evaluations to assess an application’s vulnerability to jailbreak attacks and to generating content risks, now available in preview.
    • Risk and safety monitoring to understand what model inputs, outputs, and end users are triggering content filters to inform mitigations, coming soon, and now available in preview in Azure OpenAI Service.
  2. AI Defender for Cloud
  3. AI Red Teaming Tool

AI Development Considerations:

  1. AI Assessment from Microsoft
  2. Microsoft Responsible AI Processes
  3. Define Use Case and Model Architecture
  4. Content Filtering System
  5. Red Teaming the LLM
  6. Create a Threat Model with OWASP Top 10

Other updates:

  continue reading

262 episodes

All episodes

×
 
Loading …

Welcome to Player FM!

Player FM is scanning the web for high-quality podcasts for you to enjoy right now. It's the best podcast app and works on Android, iPhone, and the web. Signup to sync subscriptions across devices.

 

Quick Reference Guide