Content provided by Debra J. Farber (Shifting Privacy Left). All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Debra J. Farber (Shifting Privacy Left) or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://player.fm/legal.
S2E26: "Building Ethical Machines" with Reid Blackman, PhD (Virtue Consultants)

51:41
 

This week, I welcome philosopher, author, & AI ethics expert, Reid Blackman, Ph.D., to discuss Ethical AI. Reid authored the book, "Ethical Machines," and is the CEO & Founder of Virtue Consultants, a digital ethical risk consultancy. His extensive background in philosophy & ethics, coupled with his engagement with orgs like AWS, U.S. Bank, the FBI, & NASA, offers a unique perspective on the challenges & misconceptions surrounding AI ethics.
In our conversation, we discuss 'passive privacy' & 'active privacy' and the need for individuals to exercise control over their data. Reid explains how the quest for data to train ML/AI models can lead to privacy violations, particularly at BigTech companies. We touch on many concepts in the AI space, including: automated decision making vs. keeping "humans in the loop;" combating AI ethics fatigue; and advice for technical staff involved in AI product development. Reid stresses the importance of protecting privacy, educating users, & deciding whether to utilize external APIs or on-prem servers.
We end by highlighting his HBR article - "Generative AI-xiety" - and discuss the 4 primary areas of ethical concern for LLMs:

  1. the hallucination problem;
  2. the deliberation problem;
  3. the sleazy salesperson problem; &
  4. the problem of shared responsibility

Topics Covered:

  • What motivated Reid to write his book, "Ethical Machines"
  • The key differences between 'active privacy' & 'passive privacy'
  • Why engineering incentives to collect more data to train AI models, especially in big tech, pose challenges to data minimization
  • The importance of aligning privacy agendas with business priorities
  • Why what companies infer about people can be a privacy violation; what engineers should know about 'input privacy' when training AI models; and how that affects the output of inferred data
  • Automated decision making: when it's necessary to have a 'human in the loop'
  • Approaches for mitigating 'AI ethics fatigue'
  • The need to back up a company's stated 'values' with actions; and why there should always be 3 - 7 guardrails put in place for each stated value
  • The differences between 'Responsible AI' & 'Ethical AI,' and why companies seem reluctant to talk about ethics
  • Reid's article, "Generative AI-xiety," & the 4 main risks related to generative AI
  • Reid's advice for technical staff building products & services that leverage LLMs

Resources Mentioned:

Guest Info:


Privado.ai
Privacy assurance at the speed of product development. Get instant visibility w/ privacy code scans.
Shifting Privacy Left Media
Where privacy engineers gather, share, & learn
Disclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.
Copyright © 2022 - 2024 Principled LLC. All rights reserved.


Chapters

1. S2E26: "Building Ethical Machines" with Reid Blackman, PhD (Virtue Consultants) (00:00:00)

2. Introducing Reid Blackman, Founder & CEO, Virtue Consultants and Host, Ethical Machines Podcast; and how Reid got interested in AI Ethics (00:00:53)

3. Reid discusses what motivated him, a Philosophy Professor, to write his book, "Ethical Machines;" and who he wrote it for (00:04:13)

4. Reid makes a distinction between 'active privacy' & 'passive privacy' (00:04:15)

5. Challenges with the fact that 'the fuel of AI is other people's data' and how business leaders should put guardrails around this based on business goals and public privacy commitments (00:10:51)

6. Why what you infer about people can be a privacy violation; and what engineers should know regarding AI training data & 'input privacy' and whether that affects the output of inferred data (00:17:27)

7. Automated decision making: when it's necessary to have a 'human in the loop' when making decisions with AI (00:22:52)

8. Reid shares how we can avoid 'AI ethics fatigue' to encourage technologists to take action (00:27:20)

9. Reid explains how to back a company's stated 'values' around privacy and AI with actions; why there should always be 3 - 7 guardrails put in place for each stated value (00:32:03)

10. The differences between the terms 'Responsible AI' & 'Ethical AI,' and why companies seem reluctant to talk about ethics (00:34:17)

11. Reid's article, "Generative AI-xiety" (Harvard Business Review) and the 4 main risks related to Generative AI: 1) the hallucination problem; 2) the deliberation problem; 3) the sleazy salesperson problem; & 4) the problem of shared responsibility (00:36:41)

12. Reid's advice for technical staff (e.g., data scientists, architects, product managers, devs) as they build products & services that leverage LLMs in a 'responsible' manner (00:46:09)

62 episodes
