
🤯 Harms in the Algorithm's Afterlife: how to address them | irResponsible AI EP1S01

33:46
 

Got questions, comments, or topics you want us to cover? Text us!

In this episode of irResponsible AI, Upol & Shea bring the heat on three topics:
🚨 Algorithmic Imprints: harms from zombie algorithms with an example of the LAION dataset
🚨 The FTC vs. Rite Aid Scandal and how it could have been avoided
🚨 NIST's Trustworthy AI Institute and the future of AI regulation
You’ll also learn:
🔥 why AI is a tricky design material and how it impacts Generative AI and LLMs
🔥 how AI has a "developer savior" complex and how to solve it
What can you do?
🎯 You have no idea how much it will annoy the wrong people if this series goes viral. So help the algorithm do the work for you!
🎙️ Who are your hosts and why should you even bother to listen?
Upol Ehsan makes AI systems explainable and responsible so that people who aren’t at the table don’t end up on the menu. He is currently at Georgia Tech and had past lives at {Google, IBM, Microsoft} Research. His work pioneered the field of Human-centered Explainable AI.
Shea Brown is an astrophysicist turned AI auditor, working to ensure companies protect ordinary people from the dangers of AI. He’s the Founder and CEO of BABL AI, an AI auditing firm.
All opinions expressed here are strictly the hosts’ personal opinions and do not represent their employers' perspectives.
Follow us for more Responsible AI:
Upol: https://twitter.com/UpolEhsan
Shea: https://www.linkedin.com/in/shea-brown-26050465/
#ResponsibleAI #ExplainableAI #podcasts #aiethics
Chapters:
00:00 - What is this series about?
01:34 - Personal Updates from Upol & Shea
04:35 - Algorithmic Imprint: How dead algorithms can still hurt people
06:47 - A recent example of the Imprint: LAION Dataset Scandal
11:09 - How can we create imprint-aware algorithm design guidelines?
11:53 - FTC vs Rite Aid Scandal: Biased Facial Recognition
15:48 - Hilarious mistakes: Chatbot selling a car for $1
18:14 - How could Rite Aid have prevented this scandal?
21:28 - What's the NIST Trustworthy AI Institute?
25:03 - Shea's wish list for the NIST working group
27:57 - How AI is different as a design material
30:08 - AI has a developer savior complex
32:29 - You can move fast and break things that you can't fix
32:40 - Audience Requests and Announcements

Support the Show.


