Content provided by One Thing Today in Tech. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by One Thing Today in Tech or its podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://ko.player.fm/legal.

US President Biden moves to establish AI guardrails with Executive Order

5:26
 
Manage episode 381583628 series 3372928

In today’s episode we take a quick look at news of US President Joe Biden’s executive order to regulate AI, but first one other headline that’s caught everyone’s attention at home.

Headlines

Several politicians from various opposition parties in India have been sent notifications by Apple that they were being targeted by “state-sponsored attackers,” according to multiple media reports.

Among those who may have been targeted are members of parliament including TMC's Mahua Moitra, Shiv Sena (UBT)'s Priyanka Chaturvedi, Congress's Pawan Khera and Shashi Tharoor, AAP's Raghav Chadha, and CPIM's Sitaram Yechury, Moneycontrol reports, citing the politicians' accounts of the notifications they received from Apple.

One thing today

US President Joe Biden yesterday issued an executive order outlining new regulations and safety requirements for artificial intelligence (AI) technologies, as the pace at which such technologies are advancing has alarmed governments around the world about the potential for their misuse.

The order, which runs to some 20,000 words, introduces a safety threshold defined in terms of computing power: AI models trained using more than 10^26 floating-point operations (flops) will be subject to the new rules.
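To see what that threshold means in practice, the sketch below uses the widely cited rule of thumb that training compute is roughly 6 × parameters × tokens. The specific model sizes and token counts in the example are illustrative guesses, not figures from the executive order or from any company's disclosures.

```python
THRESHOLD_FLOPS = 1e26  # compute threshold named in the executive order


def training_flops(n_params: float, n_tokens: float) -> float:
    """Estimate total training compute with the common 6ND heuristic."""
    return 6.0 * n_params * n_tokens


def covered_by_order(n_params: float, n_tokens: float) -> bool:
    """Would a run of this scale cross the order's compute threshold?"""
    return training_flops(n_params, n_tokens) >= THRESHOLD_FLOPS


# Hypothetical next-generation run: 1.8 trillion params, 13 trillion tokens.
print(covered_by_order(1.8e12, 1.3e13))  # crosses 1e26 flops

# Hypothetical 7-billion-parameter model on 2 trillion tokens: far below it.
print(covered_by_order(7e9, 2e12))
```

The point of the heuristic is simply that the threshold scales with both model size and training data, so only the very largest planned runs would fall under the reporting rules.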

This threshold exceeds the compute used to train current AI models, including GPT-4, but is expected to apply to next-generation models from prominent AI companies such as OpenAI, Google, and Anthropic, notes Casey Newton, a prominent technology writer who attended the White House event at which President Biden announced the new rules yesterday, in his newsletter, Platformer.

Companies developing models that meet this criterion must conduct safety tests and share the results with the government before releasing their AI models to the public. This mandate builds on voluntary commitments made by 15 major tech companies earlier this year, Newton writes in his newsletter.

The sweeping executive order addresses various potential harms related to AI technologies and their applications ranging from telecom and wireless networks to energy and cybersecurity. It assigns the US Commerce Department the task of establishing standards for digital watermarks and other authenticity verification methods to combat deepfake content.
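The Commerce Department's actual watermarking standards are yet to be defined; as a minimal illustration of the underlying idea of authenticity verification, the sketch below tags content with a keyed cryptographic code so a verifier can later confirm it is unmodified and came from the key holder. This is a generic HMAC example, not the scheme the order mandates.

```python
import hashlib
import hmac

# Illustrative only: real provenance systems use managed signing keys
# (and typically public-key signatures rather than a shared secret).
SECRET_KEY = b"demo-signing-key"


def sign_content(content: bytes) -> str:
    """Produce an authenticity tag for the given content bytes."""
    return hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()


def verify_content(content: bytes, tag: str) -> bool:
    """Check that the content still matches its tag (constant-time compare)."""
    return hmac.compare_digest(sign_content(content), tag)


tag = sign_content(b"official image bytes")
print(verify_content(b"official image bytes", tag))  # authentic: True
print(verify_content(b"tampered image bytes", tag))  # altered: False
```

A deepfake, or any edit to the bytes, would fail verification; the hard open problems are key distribution and making such tags survive re-encoding, which is what a standards effort would need to address.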

It requires AI developers to assess their models' potential to aid in the development of bioweapons, and orders agencies to conduct risk assessments of AI's role in chemical, biological, radiological, and nuclear weapons.

Newton references an analysis of the executive order by computer scientists Arvind Narayanan, Sayash Kapoor and Rishi Bommasani to point out that despite these significant steps, the executive order leaves some important issues unaddressed.

Notably, it lacks specific requirements for transparency in AI development, such as pre-training data, fine-tuning data, the labour involved in annotation, model evaluation, usage, and downstream impacts.

Experts like them argue that transparency is essential for ensuring accountability and preventing potential biases and unintended consequences in AI applications.

Nor does the order address the ongoing debate over open-source versus proprietary AI development. The choice between open-source models, as advocated by Meta and Stability AI, and closed models, like those pursued by OpenAI and Google, has become a contentious issue, Newton writes.

Prominent scientists, such as Stanford University Professor Andrew Ng, who co-founded Google Brain, have criticised the large tech companies for seeking industry regulation as a way of stifling open-source competition. They argue that while regulation is necessary, open-source AI research fosters innovation and democratises technology.


474 episodes
