EA - Applications open for AI Safety Fundamentals: Governance Course by Jamie Bernardi

4:03
Welcome to The Nonlinear Library, where we use text-to-speech software to convert the best writing from the rationalist and EA communities into audio. This is: Applications open for AI Safety Fundamentals: Governance Course, published by Jamie Bernardi on June 2, 2023 on The Effective Altruism Forum.

We are looking for people who currently work, or might want to work, in AI Governance and policy. If you have networks in or outside of EA who might be interested, we would appreciate you sharing this course with them.

Full announcement

The AI Safety Fundamentals (AISF): Governance Course is designed to introduce the key ideas in AI Governance for reducing extreme risks from future AI systems. Alongside the course, you will be joining our AI Safety Fundamentals Community. The AISF Community is a space to discuss AI Safety with others who have the relevant skills and background to contribute to AI Governance, whilst growing your network and awareness of opportunities.

The last time we ran the AI Governance course was in January 2022, then under Effective Altruism Cambridge. The course is now run by BlueDot Impact, founded by members of the same team (and now based in London). We are excited to relaunch the course now, when AI Governance is a focal point for the media and political figures. We feel this is a particularly important time to support high-fidelity discussion of ideas to govern the future of AI.

Note that we have also renamed the website from AGI Safety Fundamentals to AI Safety Fundamentals. We'll release another post within the next week to explain our reasoning, and we'll respond to any discussion about the rebrand there.

Time commitment

The course will run for 12 weeks from July to September 2023. It comprises 8 weeks of reading and virtual small-group discussions, followed by a 4-week project. The time commitment is around 5 hours per week (2-3 hours of preparatory work plus a 1.5-2 hour live session), so you can engage with the course and community alongside full-time work or study.

Course structure

For the first 8 weeks, participants work through 2-3 hours of structured content to prepare for a weekly, facilitated small-group discussion of 1.5-2 hours. Participants will be grouped according to their current career stage and policy expertise. The facilitator will be knowledgeable about AI Governance, and can help answer participants’ questions and point them to further resources.

The final 4 weeks are project weeks. Participants can use this time to synthesise their views on the field and start thinking through how to put these ideas into practice, or to start building relevant skills and experience that will help them with the next step in their career.

The course content is designed with input from a wide range of the community thinking about the governance of advanced AI. The curriculum will be updated before the course launches in mid-July.

Target audience

We think this course will be particularly helpful for participants who:
- Have policy experience, and are keen to apply their skills to reducing risk from AI.
- Have a technical background, and want to learn how they can use their skills to contribute to AI Governance.
- Are early in their career, or are students interested in exploring a future career in policy to reduce risk from advanced AI.

We expect at least 25% of participants will not fit any of these descriptions. There are many skills, backgrounds and approaches to AI Governance we haven’t captured here, and we will consider all applications accordingly. Within the course, participants will be grouped with others based on their existing policy expertise and familiarity with AI safety. This means that admissions can be broad, and discussions will be pitched at the right level for all participants.

Apply now!

If you would like to be considered for the next round of the courses, starting in July 20...