Content provided by One Thing Today in Tech. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by One Thing Today in Tech or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://player.fm/legal.
US President Biden moves to establish AI guardrails with Executive Order

Content provided by One Thing Today in Tech. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by One Thing Today in Tech or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://player.fm/legal.

In today’s episode we take a quick look at news of US President Joe Biden’s executive order to regulate AI, but first, one other headline that’s caught everyone’s attention at home.

Headlines

Several politicians from various opposition parties in India have been sent notifications by Apple that they were being targeted by “state-sponsored attackers,” according to multiple media reports.

Among those who may have been targeted are members of parliament including the TMC's Mahua Moitra, Shiv Sena (UBT)'s Priyanka Chaturvedi, the Congress's Pawan Khera and Shashi Tharoor, AAP's Raghav Chadha, and the CPI(M)'s Sitaram Yechury, Moneycontrol reports, citing the politicians' accounts of the notifications they received from Apple.

One thing today

US President Joe Biden yesterday issued an executive order outlining new regulations and safety requirements for artificial intelligence (AI) technologies, as the pace at which such technologies are advancing has alarmed governments around the world about the potential for their misuse.

The order, which runs to some 20,000 words, introduces a safety measure by defining a computing-power threshold for AI models: models trained using more than 10^26 floating-point operations (FLOPs, a measure of the total computation used in training) will be subject to the new rules.
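To get a feel for the scale of that threshold, one can use the common back-of-the-envelope estimate that training compute is roughly 6 × parameters × training tokens. A minimal sketch (the model size and token count below are illustrative assumptions, not figures disclosed by any company):

```python
# Rough check against the executive order's 10^26 FLOP threshold,
# using the common estimate C ~ 6 * N * D, where N is the parameter
# count and D is the number of training tokens.

THRESHOLD_FLOPS = 1e26

def training_flops(params: float, tokens: float) -> float:
    """Approximate total training compute in floating-point operations."""
    return 6 * params * tokens

# A hypothetical 70-billion-parameter model trained on 2 trillion tokens:
flops = training_flops(70e9, 2e12)
print(f"{flops:.1e} FLOPs -> over threshold: {flops > THRESHOLD_FLOPS}")
```

By this estimate, such a model would use about 8.4 × 10^23 FLOPs, well under the order's threshold, which is consistent with the point that the rules are aimed at larger, next-generation models rather than today's.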

This threshold surpasses the capabilities of current AI models, including GPT-4, but is expected to apply to next-generation models from prominent AI companies such as OpenAI, Google, and Anthropic, notes Casey Newton, a prominent technology writer who attended the White House event at which President Biden announced the new rules yesterday, in his newsletter, Platformer.

Companies developing models that meet this criterion must conduct safety tests and share the results with the government before releasing their AI models to the public. This mandate builds on voluntary commitments made by 15 major tech companies earlier this year, Newton writes in his newsletter.

The sweeping executive order addresses various potential harms related to AI technologies and their applications ranging from telecom and wireless networks to energy and cybersecurity. It assigns the US Commerce Department the task of establishing standards for digital watermarks and other authenticity verification methods to combat deepfake content.

It mandates AI developers to assess their models' potential for aiding in the development of bioweapons, and orders agencies to conduct risk assessments related to AI's role in chemical, biological, radiological, and nuclear weapons.

Newton references an analysis of the executive order by computer scientists Arvind Narayanan, Sayash Kapoor and Rishi Bommasani to point out that despite these significant steps, the executive order leaves some important issues unaddressed.

Notably, it lacks specific requirements for transparency in AI development, such as pre-training data, fine-tuning data, the labour involved in annotation, model evaluation, usage, and downstream impacts.

These researchers argue that such transparency is essential for ensuring accountability and for catching potential biases and unintended consequences in AI applications.

The order also doesn’t address the ongoing debate over open-source AI development versus proprietary technology. The choice between open models, as advocated by Meta and Stability AI, and closed models, like those pursued by OpenAI and Google, has become a contentious issue, Newton writes.

Prominent scientists, such as Stanford University Professor Andrew Ng, who previously founded Google Brain, have criticised the large tech companies for seeking industry regulation as a way of stifling open-source competition. They argue that while regulation is necessary, open-source AI research fosters innovation and democratises technology.
