Content provided by Upol Ehsan and Shea Brown. All podcast content including episodes, graphics, and podcast descriptions is uploaded and provided directly by Upol Ehsan and Shea Brown or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://player.fm/legal.
🔥 The Taylor Swift Factor: Deep fakes & Responsible AI | irResponsible AI EP3S01

22:59
 

Got questions or comments or topics you want us to cover? Text us!

As they say, don't mess with Swifties. This episode of irResponsible AI is about the Taylor Swift Factor in Responsible AI:
✅ Taylor Swift's deepfake scandal and what it did for RAI
✅ Do famous people need to be harmed before we do anything about it?
✅ How to address the deepfake problem at the systemic and symptomatic levels
What can you do?
🎯 Two simple things: like and subscribe. You have no idea how much it will annoy the wrong people if this series gains traction.
🎙️ Who are your hosts and why should you even bother to listen?
Upol Ehsan makes AI systems explainable and responsible so that people who aren’t at the table don’t end up on the menu. He is currently at Georgia Tech and had past lives at {Google, IBM, Microsoft} Research. His work pioneered the field of Human-centered Explainable AI.
Shea Brown is an astrophysicist turned AI auditor, working to ensure companies protect ordinary people from the dangers of AI. He’s the Founder and CEO of BABL AI, an AI auditing firm.
All opinions expressed here are strictly the hosts’ personal opinions and do not represent their employers' perspectives.
Follow us for more Responsible AI and the occasional sh*tposting:
Upol: https://twitter.com/UpolEhsan
Shea: https://www.linkedin.com/in/shea-brown-26050465/
CHAPTERS:
0:00 - Introduction
01:20 - Taylor Swift Deepfakes: what happened
02:43 - Does disaster need to strike famous people for us to move the needle?
06:31 - What role can RAI play to address this deepfake problem?
07:19 - Disagreement! Deep fakes have both systemic and symptomatic causes
09:28 - Deep fakes, elections, the EU AI Act, and US state legislation
11:45 - The post-truth era powered by AI
15:40 - Watermarking AI generated content and the difficulty
19:26 - The enshittification of the internet
22:00 - Three actionable takeaways
#ResponsibleAI #ExplainableAI #podcasts #aiethics #taylorswift

Support the Show.

What can you do?
🎯 You have no idea how much it will annoy the wrong people if this series goes viral. So help the algorithm do the work for you!
Follow us for more Responsible AI:
Upol: https://twitter.com/UpolEhsan
Shea: https://www.linkedin.com/in/shea-brown-26050465/
