EA - Corporate AI Labs' Odd Role in Their Own Governance

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Corporate AI Labs' Odd Role in Their Own Governance, published on July 29, 2024 on The Effective Altruism Forum.

Executive Summary

Plenty of attention rests on artificial intelligence developers' non-technical contributions to ensuring the safe development of advanced AI: their corporate structure, their internal guidelines ('RSPs'), and their work on policy. We argue that strong profitability incentives increasingly force these efforts into ineffectiveness. As a result, less hope should be placed in AI corporations' internal governance, and more scrutiny should be afforded to their policy contributions.

TL;DR

Only Profit-Maximizers Stay at the Frontier
- Investors and compute providers have extensive leverage over labs and need to justify enormous spending.
- As a result, leading AI corporations are forced to maximize profits.
- This leads them to advocate against external regulatory constraints, or to shape them in their favor.

Constraints from Corporate Structure Are Dangerously Ineffective
- Ostensibly binding corporate structures are easily evaded or abandoned.
- Political and public will cannot be enforced or ensured via corporate structure.
- Public pressure can lead to ineffective and economically harmful non-profit signaling.

Hope in RSPs Is Misguided
- RSPs on their own can and will easily be discarded once they become inconvenient.
- Public or political pressure is unlikely to enforce RSPs against business interests.
- RSP codification is likely to yield worse results than independent legislative initiative.
- Therefore, much less attention should be afforded to RSPs.

For-Profit Policy Work Is Called Corporate Lobbying
- For-profit work on policy and governance is usually called corporate lobbying.
- In many other industries, corporate lobbying is an opposing, corrective force to advocacy.
- Corporate lobbying output should be understood as constrained by business interests.
- Talent allocation and policy attention should be more skeptical of corporate lobbying.

Introduction

Advocates for safety-focused AI policy often portray today's leading AI corporations as caught between two worlds: product-focused, profit-oriented commercial enterprise on the one hand, and public-minded providers of measured advice on transformative AI and its regulation on the other. AI corporations frequently present themselves in the latter way when they invoke the risks, harms, and transformative potential of their technology in hushed tones, while at the same time heralding the profits and economic transformations ushered in by their incoming top-shelf products.

When these notions clash and profit maximization prevails, surprise and indignation frequently follow: the failed ouster of OpenAI CEO Sam Altman revealed that profit-driven Microsoft was a far more powerful voice than OpenAI's non-profit board, and the deprioritization of OpenAI's superalignment initiative, reportedly in favor of commercial products, reinforced that impression. Anthropic's decision to arguably push the capability frontier with its latest class of models revealed that its reported private commitments to the contrary did not constrain it, and DeepMind's full integration into the Google corporate structure has curtailed hope in its responsible independence.
Those concerned about safe AI might deal with that tension in two ways: put pressure on and engage with the AI corporations to make sure their better angels have a greater chance of prevailing, or take a more cynical view and treat large AI developers as just another set of private-sector profit maximizers - not 'labs', but corporations. This piece argues for the latter. We examine the nature and force of profit incentives and argue they are likely to lead to a misallocation of political and public attention to company stru...