What is AI Bias?

The ethics surrounding AI are complicated yet fascinating to discuss. One issue that sits front and center is AI bias, but what is it?

AI is based on algorithms, fed by data and experiences. The problem arises when that data is incorrect, biased, or based on stereotypes. Unfortunately, this means that machines, just like humans, are guided by potentially biased information.

This means that your daily threat from AI comes not from the machines themselves but from their bias. In this episode of Short and Sweet AI, I discuss this very serious problem: artificial intelligence bias.

In this episode, find out:

  • What AI bias is
  • The effects of AI bias
  • The three different types of bias and how they affect AI
  • How AI contributes to selection bias

Episode Transcript:

Today I’m talking about a very serious problem: artificial intelligence bias.

AI Ethics

The ethics of AI are complicated. Every time I go to review this area, I’m dazed by all the issues. There are groups in the AI community who wrestle with robot ethics, the threat to human dignity, transparency ethics, self-driving car liability, AI accountability, the ethics of weaponizing AI, machine ethics, and even the existential risk from superintelligence. But of all these hidden terrors, one is front and center. Artificial intelligence bias. What is it?

Machines Built with Bias

AI is based on algorithms in the form of computer software. Algorithms power computers to make decisions through something called machine learning. Machine learning algorithms are all around us: they supply the Netflix suggestions we receive, surface the posts at the top of our social media feeds, and drive the results of our Google searches. Algorithms are fed data. If you want to teach a machine to recognize a cat, you feed the algorithm thousands of cat images until it can recognize a cat better than you can.
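
To make that concrete, here is a minimal sketch, in Python with scikit-learn, of what "feeding an algorithm" labeled data looks like. It is an illustration only: the random arrays below are placeholders standing in for real cat and not-cat photos, and a production system would use thousands of actual images and a far more capable model.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.random((2000, 16 * 16))    # 2,000 fake 16x16 "images", flattened to pixel rows
    y = rng.integers(0, 2, size=2000)  # 1 = cat, 0 = not cat (placeholder labels)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    # The model can only be as good as the data it was fed.
    print("test accuracy:", model.score(X_test, y_test))

Because the placeholder labels are random, accuracy hovers near chance, which is exactly the point: the algorithm learns whatever the data contains, good or bad.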

The problem is that machine learning algorithms are used to make decisions in our daily lives that can have extreme consequences. A computer program may help police decide where to send resources, or determine who's approved for a mortgage, who's accepted to a university, or who gets the job.

More and more experts in the field are sounding the alarm. Machines, just like humans, are guided by data and experience. If the data or experience is mistaken or based on stereotypes, a biased decision is made, whether it’s a machine or a human.

Types of AI Bias

There are three main types of bias in artificial intelligence: interaction bias, latent bias, and selection bias.

Microsoft’s Failed Chatbot

Interaction bias arises from the users who are driving the interaction and their biases. A clear example was Microsoft's Twitter-based chatbot, Tay. Tay was designed to learn from its interactions with users. Unfortunately, the user community on Twitter repeatedly tweeted offensive statements at Tay, and Tay used those statements to train itself. As a result, Tay's responses became racist and misogynistic, and the bot had to be shut down within 24 hours.

Amazon’s Recruiting Bias

Latent bias occurs when an algorithm incorrectly identifies something based on historical data or an existing stereotype. A well-known example of this occurred with Amazon's recruiting algorithm. The company realized after several years that its program for selecting and hiring software developers favored men. This was because Amazon's systems were trained on a dataset of resumes submitted mostly by men.

Because of this, the algorithm penalized resumes that included the word "women's," as in "women's chess champion." And it downgraded applicants who had graduated from an all-women's college. Amazon ultimately abandoned the program because even after editing, it could not make the program gender-neutral.
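
A toy illustration of this mechanism, assuming a deliberately skewed "historical" dataset (this is not Amazon's actual system): train a simple text classifier on biased outcomes and inspect the weight it learns for the gendered token.

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression

    # Hypothetical, deliberately skewed historical hiring outcomes.
    resumes = [
        "software developer python chess champion",
        "software developer java hackathon winner",
        "software developer women's chess champion",
        "software developer women's coding club lead",
    ]
    hired = [1, 1, 0, 0]

    vec = CountVectorizer()  # the default tokenizer turns "women's" into "women"
    X = vec.fit_transform(resumes)
    model = LogisticRegression().fit(X, hired)

    weight = model.coef_[0][vec.vocabulary_["women"]]
    print("learned weight for 'women':", weight)  # negative: the bias is baked in

The model never sees gender directly; it simply absorbs the pattern hiding in the historical data.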

Selection Bias Ignores the Real Population

In selection bias, a dataset overrepresents one group and underrepresents another, so it doesn't reflect the real population. For example, some machine learning datasets come from scraping the internet for information. But major search engines and the data in their systems are developed in the West. As a result, algorithms are more likely to recognize a bride and groom in a Western-style wedding than in an African wedding.
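
One simple check for this, sketched below with invented numbers, is to compare how often each group appears in the training data with its share of the real population.

    from collections import Counter

    # Invented counts: what a scraped dataset contains versus a
    # hypothetical real-world share of each wedding style.
    training_labels = ["western_wedding"] * 950 + ["african_wedding"] * 50
    population_share = {"western_wedding": 0.30, "african_wedding": 0.70}

    counts = Counter(training_labels)
    total = sum(counts.values())
    for group, share in population_share.items():
        print(f"{group}: {counts[group] / total:.0%} of dataset "
              f"vs. {share:.0%} of population")

A gap that large means the model will be far better at recognizing the overrepresented group.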

Can Big Tech Really Self-Police?

Researchers are just beginning to understand the effects of bias in machine learning algorithms. And the big tech companies that create these systems have pledged to address the problem. But others question their ability to self-police. Google recently fired a vocal, high-profile expert it had hired to focus on ethical AI. She was concerned about problems in the language models they used. This raises the point that ethical AI has to mean something to the most powerful companies in the world for it to mean anything at all.

The Power of Diversity

So, what can we do about algorithms that judge us and make decisions about us at every stage of our lives, without our ever knowing? Experts say we need to be aware of the problem. We need to ensure the datasets are unbiased. We should develop and use programs that can test algorithms to check for bias, as sketched below. And a recent study emphasized that if the people training the systems come from diverse backgrounds, there is less bias.
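
A minimal sketch of what such a bias test might look like, using made-up loan decisions: compare approval rates across groups, a simple form of the demographic-parity check used in fairness audits.

    # Hypothetical predictions (1 = approved) and each applicant's group.
    preds  = [1, 1, 0, 1, 0, 0, 1, 0]
    groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

    def approval_rates(predictions, group_labels):
        """Positive-outcome rate for each group."""
        rates = {}
        for g in set(group_labels):
            outcomes = [p for p, grp in zip(predictions, group_labels) if grp == g]
            rates[g] = sum(outcomes) / len(outcomes)
        return rates

    rates = approval_rates(preds, groups)
    gap = max(rates.values()) - min(rates.values())
    print(rates, "gap:", gap)  # a large gap is a red flag worth auditing

Real audits use richer metrics, but the principle is the same: measure outcomes by group before trusting the algorithm.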

We know data scientists inject their own biases into the algorithms they build. Having diversity on those teams means the algorithms are built for all types of people. We've come to learn we need AI ethics because, as one headline put it, "We Teach AI Systems Everything Including Our Bias."

Thanks for listening; I hope you found this helpful. Be curious, and if you liked this episode, please leave a review and subscribe so you'll receive my podcasts weekly. From Short and Sweet AI, I'm Dr. Peper.
