Content provided by Cyber Crime Junkies-David Mauro. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Cyber Crime Junkies-David Mauro or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://player.fm/legal.
How Deep Fake Videos Increase Security Risks.

1:15:03
 
NEW! Text Us Direct Here!

Paul Eckloff served as a US Secret Service agent for 23 years. Today we discuss how deepfake videos increase security risks. Topics include: artificial intelligence risks in cyber, new ways to reduce the risk of deepfakes, how deepfakes are made, how deepfake videos are made, how audio deepfakes are made, and how AI is making it harder to detect deepfakes.
Catch the video episode with sample deepfakes here: https://youtu.be/1yFFK6uHt0I?si=qP9F1_uIZ7q6qGSS
Takeaways

  • Sample deepfakes are played. Can you tell which are real? Over 85% of those tested could not.
  • Deepfake technology, built with techniques like GANs, diffusion models, and VAEs, can convincingly substitute one person's face or voice with another's; see the sketch after this list for a high-level look at how a GAN works.
  • The advancement of deepfake technology poses risks such as impersonating executives, enhancing social engineering campaigns, evading malware detection, and conducting reconnaissance for future attacks.
  • The widespread availability and low cost of deepfake technology make it accessible to both legitimate businesses and threat actors, increasing the threat surface for organizations.
  • The potential for deepfakes to manipulate and deceive individuals, especially children, is a grave concern.
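
The GANs mentioned above pit two neural networks against each other: a generator that fabricates samples and a discriminator that learns to tell fakes from real data, with each round of training making the fakes harder to spot. The sketch below is a minimal, hypothetical illustration in PyTorch on toy one-dimensional data (not faces or audio); real deepfake pipelines use far larger image and audio models, but the adversarial loop is the same idea.

# Minimal GAN sketch (hypothetical, toy data): a generator learns to
# mimic a simple 1-D Gaussian while a discriminator learns to spot fakes.
import torch
import torch.nn as nn

latent_dim = 8

# Generator: maps random noise to a fake "sample" (here, a single number).
G = nn.Sequential(nn.Linear(latent_dim, 16), nn.ReLU(), nn.Linear(16, 1))
# Discriminator: outputs the probability that its input came from real data.
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0      # "real" data drawn from N(3, 0.5)
    fake = G(torch.randn(64, latent_dim))      # the generator's forgeries

    # 1) Train the discriminator: label real samples 1, fakes 0.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Train the generator: try to make the discriminator call fakes real.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# After training, generated samples cluster around the real mean of 3.0.
print("mean of generated samples:", G(torch.randn(1000, latent_dim)).mean().item())

The same adversarial pressure, scaled up to faces and voices, is what makes the resulting deepfakes so hard for people and automated detectors to flag.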

Chapters

  • 00:00 Introduction to Deepfake Technology and its Impact
  • 03:27 The Challenges of Detecting Deepfakes
  • 08:04 The Erosion of Trust: Seeing is No Longer Believing
  • 11:31 The Advancement of Deepfake Technology
  • 26:53 The Malicious Uses of Deepfake Technology
  • 36:17 The Risks of Deepfake Technology
  • 37:42 Consequences of Deepfakes
  • 40:38 Limitations of Deepfake Detection

Click the link above or text 904-867-4468, 2014652, and leave your message! You can now text our Podcast Studio direct: ask questions, suggest guests and stories. We look forward to hearing from you!

Try KiteWorks, the most secure managed file transfer system, today at www.KiteWorks.com, and don't miss our video on this exciting KiteWorks offer!

Custom handmade Women's Clothing, Plushies & Accessories at Blushingintrovert.com. Portions of your purchase go to Mental Health Awareness efforts.

