LW - Introduction to French AI Policy by Lucie Philippon

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Introduction to French AI Policy, published by Lucie Philippon on July 4, 2024 on LessWrong.

This post was written as part of the AI Governance Fundamentals course by BlueDot. I thank Charles Beasley and the students from my cohort for their feedback and encouragement.

Disclaimer: The French policy landscape is in rapid flux after President Macron called a snap election, with rounds on 30 June and 7 July. The situation is still unfolding, and the state of French AI policy may be significantly altered.

At various AI governance events, I noticed that most people had a very unclear vision of what was happening in AI policy in France, why the French government seemed dismissive of potential AI risks, and what that would mean for the next AI Safety Summit in France. The post below is my attempt at giving a quick intro to the key stakeholders of AI policy in France, their positions, and how they influence international AI policy efforts.

My knowledge comes from hanging around AI safety circles in France for a year and a half, and from working with the French government on AI governance since January. I am therefore confident in the facts, but less so in the interpretations, as I'm no policy expert myself.

Generative Artificial Intelligence Committee

The first major development in AI policy in France was the creation of a committee advising the government on generative AI questions. This committee was created in September 2023 by former Prime Minister Elisabeth Borne.[1]

The goals of the committee were:
- Strengthening AI training programs to develop more AI talent in France
- Investing in AI to promote French innovation on the international stage
- Defining appropriate regulation for different sectors to protect against abuses

The committee was composed of notable academics and companies in the French AI field. Here are its notable members:

Co-chairs:
- Philippe Aghion, an influential French economist specializing in innovation. He thinks AI will give a major productivity boost and that the EU should invest in major research projects on AI and disruptive technologies.
- Anne Bouverot, chair of the board of directors of ENS, the most prestigious scientific college in France. She was later nominated as lead organizer of the next AI Safety Summit. She is mainly concerned about the risks of bias and discrimination from AI systems, as well as risks of concentration of power.

Notable members:
- Joëlle Barral, scientific director at Google
- Nozha Boujemaa, co-chair of the OECD AI expert group and Digital Trust Officer at Decathlon
- Yann LeCun, VP and Chief AI Scientist at Meta and generative AI expert. He is a notable skeptic of catastrophic risks from AI.
- Arthur Mensch, founder of Mistral. He is a notable skeptic of catastrophic risks from AI.
- Cédric O, consultant and former Secretary of State for Digital Affairs. He invested in Mistral and worked to loosen the regulations on general-purpose systems in the EU AI Act.
- Martin Tisné, board member of Partnership on AI. He will lead the "AI for good" track of the next Summit.

See the full list of members in the announcement: Comité de l'intelligence artificielle générative.

"AI: Our Ambition for France"

In March 2024, the committee published a report with 25 recommendations to the French government regarding AI. An official English version is available.

The report makes recommendations on how to make France competitive and a leader in AI by investing in training, R&D and compute. It does not anticipate future developments: it treats the current capabilities of AI as a fixed point we need to work with, does not consider the future capabilities of AI models, and is overly dismissive of AI risks.

Some highlights from the report: It dismisses most risks from AI, including catastrophic risks, saying that concerns are overblown. They compare fear of...