Thiel on AI & Racing with China, by Ben Pace
This post is a transcript of part of a podcast with Peter Thiel, touching on topics of AI, China, extinction, Effective Altruists, and apocalyptic narratives, published on August 16th 2024.
If you're interested in reading the quotes, just skip straight to them; the introduction is not required reading.
Introduction
Peter Thiel is probably known by most readers, but briefly: he is a venture capitalist, the first outside investor in Facebook, cofounder of PayPal and Palantir, and the author of Zero to One (a book I have found very helpful for thinking about building great companies). He has also been one of the primary proponents of the Great Stagnation hypothesis (along with Tyler Cowen).
More local to the LessWrong scene, Thiel was an early funder of MIRI and a speaker at the first Effective Altruism summit in 2013. He funded Leverage Research for many years, and also a lot of anti-aging research, and the seasteading initiative, and his Thiel Fellowship included a number of people who are around the LessWrong scene. I do not believe he has been active around this scene much in the last ~decade.
He rarely expresses positions about society in public, and I am curious to hear them when he does.
In 2019 I published here the transcript of another longform interview of his, with Eric Weinstein. Last week another longform interview with him came out, and I again got the sense that, even though we disagree on many things, a conversation with him would be worthwhile and interesting.
Then, about three hours in, he started talking more directly about subjects that I actively think about, and about some conflicts around AI, so I've quoted the relevant parts below.
His interviewer, Joe Rogan, is a very successful comedian and podcaster. He's not someone I would go to for insights about AI. I think of him as standing in for a well-intentioned average person, for better or for worse, although he is a little more knowledgeable, a little more intelligent, and a lot more curious than the average person. The average Joe. I believe he is talking in good faith to the person before him, with curiosity, and making points that seem natural to many.
Artificial Intelligence
Discussion focused on the AI race and China, starting at 2:56:40. The opening monologue by Rogan is skippable.
Rogan
If you look at this mad rush for artificial intelligence - like, they're literally building nuclear reactors to power AI.
Thiel
Well, they're talking about it.
Rogan
Okay. That's because they know they're gonna need enormous amounts of power to do it.
Once it's online, and it keeps getting better and better, where does that go? That goes to a sort of artificial life-form. I think either we become that thing, or we integrate with that thing and become cyborgs, or that thing takes over. And that thing becomes the primary life force of the universe.
And I think that biological life, we look at like life, because we know what life is, but I think it's very possible that digital life or created life might be a superior life form. Far superior. [...]
I love people, I think people are awesome. I am a fan of people. But if I had to look logically, I would assume that we are on the way out.
And that the only way forward, really, to make an enormous leap in terms of the integration of society and technology and understanding our place in the universe, is for us to transcend our physical limitations that are essentially based on primate biology, and these primate desires for status (like being the captain), or for control of resources, all of these things - we assume these things are standard, and that they have to exist in intelligent species.
I think they only have to exist in intelligent species that have biological limitations. I think in...