Content provided by Scott Logic. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Scott Logic or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://player.fm/legal.
Will we ever be able to secure GenAI?

Duration: 35:21
In this episode, Oliver Cronk, Doro Hinrichs and Kira Clark from Scott Logic are joined by Peter Gostev, Head of AI at Moonpig. Together, they explore whether we can ever really trust and secure Generative AI (GenAI), while sharing stories from the front line about getting to grips with this rapidly evolving technology.

With its human-like, non-deterministic nature, GenAI frustrates traditional pass/fail approaches to software testing. The panellists explore ways to tackle this, and discuss Scott Logic’s Spy Logic project which helps development teams investigate defensive measures against prompt injection attacks on a Large Language Model.

Looking to the future, they ask whether risk mitigation measures will ever be effective – and what impact this will have on product and service design – before offering pragmatic advice on what organisations can do to navigate this terrain.

Podcast: Beyond the Hype