Explainable AI Explained

25:49
 
Content provided by Carnegie Mellon University Software Engineering Institute and Members of Technical Staff at the Software Engineering Institute. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Carnegie Mellon University Software Engineering Institute and Members of Technical Staff at the Software Engineering Institute or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined at https://player.fm/legal.

As the field of artificial intelligence (AI) has matured, increasingly complex, opaque models have been developed and deployed to solve hard problems. By the nature of their architecture, these models are harder to understand and oversee than many of their predecessors. When such models fail or do not behave as expected, it can be hard for developers and end users to pinpoint why or to determine how to address the problem. Explainable AI (XAI) meets the emerging demands of AI engineering by providing insight into the inner workings of these opaque models. In this podcast from the Carnegie Mellon University Software Engineering Institute (SEI), Violet Turri and Rachel Dzombak, both with the SEI's AI Division, discuss explainable AI, which encompasses all the techniques that make the decision-making processes of AI systems understandable to humans.
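
To make the idea concrete, below is a minimal sketch of one widely used explanation technique, permutation feature importance, written in Python with scikit-learn. It is an illustration chosen for this page, not a method prescribed in the episode, and the dataset and model are placeholders picked only to keep the example self-contained: shuffling one input feature at a time and measuring the resulting drop in a trained model's accuracy reveals which features the model actually relies on.

# A minimal sketch of permutation feature importance, one common
# post hoc XAI technique. The dataset and model below are arbitrary
# placeholders; the episode does not prescribe this method.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An opaque model: hundreds of trees with no single human-readable rule.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in held-out accuracy;
# large drops mark the features the model actually depends on.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for name, score in sorted(zip(X.columns, result.importances_mean),
                          key=lambda pair: -pair[1])[:5]:
    print(f"{name}: {score:.3f}")

Techniques of this kind are post hoc and model agnostic: they probe a finished model from the outside rather than requiring an inherently interpretable architecture, which is one common way of categorizing XAI methods.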
