Content provided by Allan Alford. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Allan Alford or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://player.fm/legal.
Will LLM AI Close The Bad Guys’ Skills Gap? with Adrian Sanabria

33:05
Manage episode 364209177 series 2932664

This episode is a bit scary. Adrian Sanabria, who on an earlier show busted many cybersecurity myths, is back again, this time analyzing the impact of Large Language Model (LLM) AI on a hypothesized skills gap on the bad guys' side.

Premise One: Given how many vulnerable organizations have NOT been breached, the bad guys must be suffering the same skills gap we are.

Premise Two: Exploit attacks (think ransomware, data hostage situations, threats to publish breached data, etc.) can benefit from LLM AI.

Connecting the dots really is that simple. Adrian and Allan deconstruct the steps of an exploit attack, analyze the capabilities of LLM AI, and cross-reference the two.

If they are right, then the burden is on us to learn and leverage LLM AI ourselves, as quickly as possible...

Sponsored by our good friends at Dazz:

Dazz takes the pain out of the cloud remediation process using automation and intelligence to discover, reduce, and fix security issues—lightning fast. Visit Dazz.io/demo and see for yourself.


171 episodes
