Content provided by Isabel Knight and Deondre' Jones. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Isabel Knight and Deondre' Jones or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined at https://player.fm/legal.

120. The AI Episode: Immortality and Endless Wealth or Human Extinction?

53:40
Manage episode 323016555 series 2801148

There's a pretty fundamental shift in our near future, and the median AI scientist says it's coming by about the year 2040: the rise of Artificial General Intelligence (AGI). Right now, the types of AI that power TikTok, Facebook, Spotify, Netflix, and any number of other major tech companies are examples of Artificial Narrow Intelligence (ANI). ANIs can be superhuman, like the ones that beat the best humans in the world at games like chess and Go, but they are only superhuman at one thing. An AGI is an algorithm that is good at learning how to do many different things and can attain human-level intelligence across the board. The next step after AGI is Artificial Super Intelligence (ASI), which goes beyond the known biological range of intelligence and can improve upon itself at an exponential rate. That's the scary stuff.

We talk with Isabel's friend Shantanu, who works at a company called OpenAI, about the hypothetical scenarios that are possible, and even likely, within our lifetimes. He notes that many of the biggest worries out there are projections of our own personal fears about the world and the direction society is heading, because at the end of the day it is hard to make predictions about what we don't know. But there are plenty of AI safety organizations putting extensive thought into these questions, because whoever builds the first ASI could determine the entire fate of humanity. We could hypothetically use the AI for "good": bringing an end to most human suffering by using nanotechnology to reverse aging and stop most disease, and possibly creating a world in which humans never have to work again and we all live off a universal basic income. But we could also use it for "evil," though good and evil are human moral categories that likely wouldn't apply to an algorithm without feeling or intention: we could program an algorithm to optimize for one thing (the classic parable is making paperclips) and have it get so good at its goal that it turns everything in the world into paperclips.

Links:

The Wait But Why articles about hypothetical scenarios involving recursively self-improving AI:
https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html
https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-2.html

Some examples of what OpenAI's code models can do:
https://openai.com/blog/openai-codex/
https://www.youtube.com/watch?v=SGUCcjHTmGY
Some examples of what GPT-3 can do:
https://www.gwern.net/GPT-3
OpenAI's blog:
https://openai.com/blog/
https://openai.com/blog/webgpt/
https://openai.com/blog/dall-e/
https://openai.com/blog/instruction-following/
Music is The Beauty of Maths by Meydän.

--- Support this podcast: https://podcasters.spotify.com/pod/show/im-the-villain/support

157 episodes
