Content provided by CNA. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by CNA or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://player.fm/legal.

Remember, Remember, the Fakes of November

39:21
 
Manage episode 269604465 series 1932286
In COVID-related AI news, Andy and Dave discuss an article from Wired describing how COVID confounded most predictive models (such as those in finance). NIST investigates the effect of face masks on facial recognition software. In regular AI news, CSET and the Bipartisan Policy Center release a report on “AI and National Security,” the first of four “meant to be a roadmap for Washington’s future efforts on AI.” The Intelligence Community releases its AI Ethics Principles and AI Ethics Framework. Researchers from the University of Chicago announce “Fawkes,” a way to “cloak” images and befuddle facial recognition software. In research, OpenAI demonstrates that GPT-2, a generator designed for text, can also generate pixels (instead of words) to fill out 2D pictures. Researchers at Texas A&M, the University of Science and Technology of China, and the MIT-IBM Watson AI Lab create a 3D adversarial logo to cloak people from facial recognition. Other research explores how the brain rewires when given an additional thumb. CSET publishes Deepfakes: A Grounded Threat Assessment. And MyHeritage provides a “photo enhancer” that uses machine learning to restore old photos.

Click here to visit our website and explore the links mentioned in the episode.

116 episodes
