Biden's AI Executive Order
Nov 6, 2023
Last Monday, President Biden signed the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, invoking the Korean War-era Defense Production Act to require AI companies to notify the government when developing technology or systems that pose a “serious risk to national security, national economic security or national public health and safety.”
The order announced eight guiding principles and priorities for AI:
Artificial Intelligence must be safe and secure.
Responsible innovation, competition, and collaboration will allow the United States to lead in AI and unlock the technology’s potential to solve some of society’s most difficult challenges.
The responsible development and use of AI require a commitment to supporting American workers.
Artificial Intelligence policies must be consistent with the Biden Administration’s dedication to advancing equity and civil rights.
The interests of Americans who increasingly use, interact with, or purchase AI and AI-enabled products in their daily lives must be protected.
Americans’ privacy and civil liberties must be protected as AI continues advancing.
It is important to manage the risks from the Federal Government’s own use of AI and increase its internal capacity to regulate, govern, and support responsible use of AI to deliver better results for Americans.
The Federal Government should lead the way to global societal, economic, and technological progress, as the United States has in previous eras of disruptive innovation and change.
The order also requires US AI firms to notify the government within 90 days if a foreign company uses their platforms to train large AI models. That provision is aimed mainly at foreign adversaries such as China and Russia, which could use US technology to run deepfake-driven disinformation campaigns or to develop military systems.
Straight out of the movie Minority Report, the order included the following language:
“within 365 days of the date of this order, submit to the President a report that addresses the use of AI in the criminal justice system, including any use in: crime forecasting and predictive policing, including the ingestion of historical crime data into AI systems to predict high-density ‘hot spots’.”
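For context, predicting high-density “hot spots” from historical crime data is, at its core, a matter of counting past incidents by location and flagging the densest areas. The toy sketch below is purely illustrative and not drawn from the order; the coordinates, grid size, and threshold are all hypothetical assumptions.

```python
# Purely illustrative: a toy "hot spot" scorer of the kind the report language describes.
# It bins historical incident coordinates into a grid and flags the densest cells.
# All data, the cell size, and the threshold are hypothetical.
import math
from collections import Counter

# Hypothetical historical incidents as (latitude, longitude) pairs.
incidents = [
    (40.7123, -74.0065), (40.7125, -74.0063), (40.7124, -74.0067),
    (40.7126, -74.0062), (40.7306, -73.9352), (40.7312, -73.9355),
]

CELL_SIZE = 0.001  # grid resolution in degrees (assumed)
THRESHOLD = 3      # minimum historical incidents for a "hot spot" (assumed)

def cell(lat, lon):
    """Map a coordinate to its grid-cell index."""
    return (math.floor(lat / CELL_SIZE), math.floor(lon / CELL_SIZE))

# Count historical incidents per cell, then flag cells that cross the threshold.
counts = Counter(cell(lat, lon) for lat, lon in incidents)
hot_spots = [c for c, n in counts.most_common() if n >= THRESHOLD]

print("predicted hot spots (grid cells):", hot_spots)
```

Real predictive-policing systems layer far more modeling on top of this, but the core input is the same historical crime data the report is asked to scrutinize.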
The White House called the order “the most significant actions ever taken by any government to advance the field of AI safety.” It was released just before the UK AI Safety Summit, which both the US and China attended, likely to signal that the US is leading the way on AI regulation and to set a standard for other countries to follow.
There has been some pushback to the order. Prominent venture capitalists, including Marc Andreessen, submitted a letter to President Biden calling the new guidelines too restrictive of open-source AI software, which they see as pivotal to a free and safe world in the age of AI.
MIT Professor Max Tegmark addressed Congress last week about the existential threats posed by superintelligent AI, urging swift regulatory action. While he applauded the order, he said it wasn’t enough and called on Congress to pass actual laws, highlighting concerns about both China’s AI ambitions and the risk of creating a ‘superintelligence’ in the US that would ‘make humans completely obsolete’.
But AI companies appear to be falling in line. To close out the UK’s AI Safety Summit last Thursday, companies including OpenAI, Google, Anthropic, Amazon, Microsoft, and Meta signed a (non-legally binding) letter agreeing to allow governments including the US, UK, and Singapore to test their models for national security and safety risks. China and Chinese tech companies notably did not sign the letter.