Content provided by The Nonlinear Fund. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by The Nonlinear Fund or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://player.fm/legal.
EA - Should you work at a frontier AI company? by 80000 Hours

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Should you work at a frontier AI company?, published by 80000 Hours on August 30, 2024 on The Effective Altruism Forum.
This is a cross-post of our newly updated article with advice on working at frontier AI companies if you want to help with AI safety.
The original article was published in June 2023. We decided to update it in May 2024, as events in the intervening year caused us to be more concerned about working for frontier AI companies, and we also wanted to explain why we still thought it made sense for some people to take roles at AI companies.
Our overall view is that "it's complicated": working at a frontier AI company is sometimes a good idea, and sometimes a bad idea; it depends on the role and type of work, the company, and your individual case.
The new article tries to lay out the cases for and against working at frontier AI companies more clearly, give tips for mitigating the downsides, and provide more information on alternatives.
Summary
In a nutshell: If you want to help reduce catastrophic risks from AI, working at a frontier AI company is an important option to consider, but the impact is hard to assess. These roles often come with great potential for career growth, and many could be (or lead to) highly impactful ways of reducing the chances of an AI-related catastrophe. However, there's also a risk of doing substantial harm, and there are roles you should probably avoid.
Pros
Some roles have high potential for a big positive impact via reducing risks from AI
Among the best and most robust ways to gain AI-specific career capital
Highly compensated and prestigious
Cons
Risk of contributing to or accelerating AI systems that could cause extreme harm
Financial and social incentives might make it harder to think objectively about risks
Stress, especially because of a need to carefully and repeatedly assess whether your role is harmful
Recommendation: it's complicated
We think there are people in our audience for whom working in a role at a frontier AI company is their highest-impact option. But some of these roles might also be extremely harmful. This means it's important to be discerning and vigilant when thinking about taking a role at a frontier AI company - both about the role and the company overall - and to actually be willing to leave if you end up thinking the work is harmful.
Review status: Based on an in-depth investigation
This review is informed by three surveys of people we regard as having expertise: one survey on whether you should be open to roles that advance AI capabilities (written up here), and two follow-up surveys conducted over the past two years. It's likely there are still gaps in our understanding, as many of these considerations remain highly debated. The domain is also fast moving, so this article may become outdated quickly.
This article was originally published in June 2023. It was substantially updated in August 2024 to reflect more recent developments and thinking.
Introduction
We think AI is likely to have transformative effects over the coming decades, and that reducing the chances of an AI-related catastrophe is one of the world's most pressing problems.
So it's natural to wonder whether you should try to work at one of the companies that are doing the most to build and shape these future AI systems.
As of summer 2024, OpenAI, Google DeepMind, Meta, and Anthropic seem to be the leading frontier AI companies - meaning they have produced the most capable models so far and seem likely to continue doing so. Mistral and xAI are contenders as well - and others may enter the industry from here.1
Why might it be high impact to work for a frontier AI company?
Some roles at these companies might be among the best for reducing risks
We suggest working at frontier AI companies in several of our career reviews because a lot of i...