EA - How You Can Support SB 1047 by ThomasW

4:45
 
Share
 

Manage episode 439456242 series 3314709
Content provided by The Nonlinear Fund. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by The Nonlinear Fund or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://player.fm/legal.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How You Can Support SB 1047, published by ThomasW on September 12, 2024 on The Effective Altruism Forum.
Posting something about a current issue that I think many people here would be interested in.
California Governor Gavin Newsom has until September 30 to decide the fate of SB 1047 - one of the most hotly debated AI bills in the world. The Center for AI Safety Action Fund, where I work, is a co-sponsor of the bill. We need your help to encourage the Governor to sign it! You can help by writing a quick custom letter and sending it to his office (see instructions below).
About SB 1047 and why it is important
SB 1047 is an AI bill in the state of California. SB 1047 would require the developers of the largest AI models, costing over $100 million to train, to test the models for the potential to cause or enable severe harm, such as cyberattacks on critical infrastructure or the creation of biological weapons resulting in mass casualties or $500 million in damages.
AI developers must have a safety and security protocol that details how they will take reasonable care to prevent these harms, and they must publish a copy of that protocol. Companies that fail to perform their duty under the act are liable for resulting harm. SB 1047 also lays the groundwork for a public cloud computing resource to make AI research more accessible to academic researchers and startups, and establishes whistleblower protections for employees at large AI companies.
I believe SB 1047 is the most significant piece of AI safety legislation in the country, and perhaps the world. While AI policy has made great strides in the last couple of years, AI policies have mostly not had teeth - they have relied on government reporting requirements and purely voluntary promises from AI developers to behave responsibly.
SB 1047 would actually prohibit behavior that exposes the public to serious and unreasonable risks, and incentivize AI developers to consider the public interest when developing and releasing powerful models.
If SB 1047 is vetoed, it's plausible that no comparable legal protection will exist in the next couple of years, as Congress does not appear likely to pass anything like this any time soon.
The bill's text can be found here. A summary of the bill can be found here. Longer summaries can be found here and here, and a debate between a bill proponent and opponent is here.
SB 1047 is supported by many academic researchers (including Turing Award winners Yoshua Bengio and Geoffrey Hinton), employees at major AI companies, and organizations like Imbue and Notion. It is opposed by OpenAI, Google, Meta, and venture capital firm A16z, as well as some other academic researchers and organizations. After a recent round of amendments, Anthropic said "we believe its benefits likely outweigh its costs."
SB 1047 recently passed the California legislature, and Governor Gavin Newsom has until September 30th to sign or veto it.
Newsom has not yet said whether he will sign it, but he is being lobbied hard to veto. A veto would set back AI safety legislation significantly and expose the public to greater risk. He needs to hear from you!
How you can help
There are several ways to help, many of which are detailed on the SB 1047 website.
The most useful thing you can do is write a custom letter. To do this:
1. Make a letter addressed to Governor Newsom using the template here.
2. Save the document as a PDF and email it to leg.unit@gov.ca.gov.
In writing this letter, we encourage you to keep it simple, short (0.5-2 pages), and intuitive. Complex, philosophical, or highly technical points are not necessary or useful in this context - instead, focus on how the risks are serious and how this bill would help keep the public safe.
Once you've written your own custom letter, think of 5 family members or friends who might also be wi...