🔥 Generative AI Use Cases: What's Legit and What's Not? | irResponsible AI EP6S01

26:54
 
Content provided by Upol Ehsan and Shea Brown. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Upol Ehsan and Shea Brown or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined at https://player.fm/legal.

Got questions, comments, or topics you want us to cover? Text us!

In this episode of irResponsible AI, we discuss:
✅ GenAI is cool, but do you really need it for your use case?
✅ How do companies end up doing irresponsible AI by using GenAI for the wrong use cases?
✅ How might we get out of this problem?
What can you do?
🎯 Two simple things: like and subscribe. You have no idea how much it will annoy the wrong people if this series gains traction.
🎙️ Who are your hosts and why should you even bother to listen?
Upol Ehsan makes AI systems explainable and responsible so that people who aren’t at the table don’t end up on the menu. He is currently at Georgia Tech and had past lives at {Google, IBM, Microsoft} Research. His work pioneered the field of Human-centered Explainable AI.
Shea Brown is an astrophysicist turned AI auditor, working to ensure companies protect ordinary people from the dangers of AI. He’s the Founder and CEO of BABL AI, an AI auditing firm.
All opinions expressed here are strictly the hosts’ personal opinions and do not represent their employers' perspectives.
Follow us for more Responsible AI and the occasional sh*tposting:
Upol: https://twitter.com/UpolEhsan
Shea: https://www.linkedin.com/in/shea-brown-26050465/
CHAPTERS:
00:00 - Introduction
01:28 - Misuse of Generative AI
02:27 - The Google GenAI "glue" example
03:18 - The Challenge of Public Trust and Misinformation
03:45 - Why is this a serious problem?
04:49 - Why should businesses worry about it?
05:32 - Auditing Generative AI Systems and Liability Risks
07:18 - Why is this GenAI hype happening?
09:20 - Competitive Pressure and Funding Influence
14:29 - How to avoid failure: investing in problem understanding
14:48 - Good use cases of GenAI
17:05 - LLMs are only useful if you know the answer
17:30 - Text-based video editing as a good example
21:40 - The need for GenAI literacy amongst tech execs
23:30 - Takeaways
#ResponsibleAI #ExplainableAI #podcasts #aiethics

Support the show

