
EU's AI Act: A Journey from Open Source Tech to High-Stakes Policy

Duration: 28:52
 
Content provided by Michael Burke and Chris Detzel. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Michael Burke and Chris Detzel or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://player.fm/legal.

When Chris Detzel and Michael Burke sat down for this podcast episode, they had an in-depth conversation about the potential impact of the European Union's (EU) AI Act on open-source artificial intelligence (AI) technologies such as large language models (LLMs). The conversation offers crucial insights into the implications of AI regulation, privacy concerns, and the future of the tech industry.

Starting off on a lighter note, Detzel and Burke exchanged weekend plans, creating an informal atmosphere for the discussion. Soon, though, the conversation turned to more serious matters: the EU AI Act and its potential ramifications for the open-source AI ecosystem.

Their conversation centered on the fact that the EU AI Act reaches US open-source software, including LLMs. The Act's potential to disrupt the global AI landscape, particularly the open-source movement, was a significant concern. Another important topic was privacy: the Act aims to safeguard user privacy by regulating how AI is used and deployed.

One of the critical challenges Burke pointed out is the potential threat to privacy that large language models pose. According to him, the possibility that LLMs retain the information users input into them, combined with the lack of clarity about the sources of the data these models are trained on, is cause for concern. Burke stressed that organizations and governments alike share this worry, particularly regarding the accuracy and reliability of the information these models process. He also highlighted the serious implications for users who share sensitive or private information with AI systems unknowingly, or without understanding how their data might be used.
