Shannon Vallor, "The AI Mirror: How to Reclaim Our Humanity in an Age of Machine Thinking" (Oxford UP, 2024)
There's a lot of talk these days about the existential risk that artificial intelligence poses to humanity -- that somehow the AIs will rise up and destroy us or become our overlords.
In The AI Mirror: How to Reclaim Our Humanity in an Age of Machine Thinking (Oxford UP, 2024), Shannon Vallor argues that the actual, and very alarming, existential risk of AI we face right now is quite different. Because some AI technologies, such as ChatGPT and other large language models, can closely mimic the outputs of an understanding mind without having actual understanding, the technology can encourage us to surrender the activities of thinking and reasoning. This risks diminishing our ability to respond to challenges and to imagine and bring about different futures. In her compelling book, Vallor, who holds the Baillie Gifford Chair in the Ethics of Data and Artificial Intelligence at the University of Edinburgh's Edinburgh Futures Institute, critically examines AI doomers and longtermism, the nature of AI in relation to human intelligence, and the technology industry's hand in diverting our attention from the serious risks we face.
Learn more about your ad choices. Visit megaphone.fm/adchoices
Support our show by becoming a premium member! https://newbooksnetwork.supportingcast.fm/technology