Content provided by The Nonlinear Fund. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by The Nonlinear Fund or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://player.fm/legal.

LW - Safety consultations for AI lab employees by Zach Stein-Perlman

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Safety consultations for AI lab employees, published by Zach Stein-Perlman on July 27, 2024 on LessWrong.

Edit: I may substantially edit this post soon; don't share it yet.

Many people who are concerned about AI x-risk work at AI labs, in the hope of doing directly useful work, boosting a relatively responsible lab, or causing their lab to be safer on the margin. Labs do lots of stuff that affects AI safety one way or another. It would be hard enough to follow all this at best; in practice, labs are incentivized to be misleading in both their public and internal comms, making it even harder to follow what's happening. And so people end up misinformed about what's happening, often leading them to make suboptimal choices.

In my AI Lab Watch work, I pay attention to what AI labs do and what they should do. So I'm in a good position to inform interested but busy people. So I'm announcing an experimental service where I provide the following:

Calls for current and prospective employees of frontier AI labs. Book here. On these (confidential) calls, I can answer your questions about frontier AI labs' current safety-relevant actions, policies, commitments, and statements, to help you to make more informed choices. These calls are open to any employee of OpenAI, Anthropic, Google DeepMind, Microsoft AI, or Meta AI, or to anyone who is strongly considering working at one (with an offer in hand or expecting to receive one). If that isn't you, feel free to request a call and I may still take it.

Support for potential whistleblowers. If you're at a lab and aware of wrongdoing, I can put you in touch with: former lab employees and others who can offer confidential advice; vetted employment lawyers; communications professionals who can advise on talking to the media. If you need this, email zacharysteinperlman@gmail.com or message me on Signal at 734 353 3975.

I don't know whether I'll offer this long-term. I'm going to offer this for at least the next month. My hope is that this service makes it much easier for lab employees to have an informed understanding of labs' safety-relevant actions, commitments, and responsibilities.

If you want to help - e.g. if maybe I should introduce lab-people to you - let me know. You can give me anonymous feedback.

Crossposted from AI Lab Watch. Subscribe on Substack.

Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org