Artificial intelligence (AI) is a powerful technology that can bring many benefits to society but also poses many challenges and risks. To ensure that the United States leads the way in seizing the opportunity and managing the risks of AI, President Biden issued a landmark executive order on October 30, 2023.
How the order uses a 75-year-old law to regulate AI
The Biden administration is using the Defense Production Act, a 75-year-old law that gives the White House wide authority to regulate industries related to national security, to compel companies to tell the federal government about potential national security risks related to their AI work.
This is the first executive order from the federal government that directly regulates AI, and it follows the voluntary commitments of 15 major AI companies, such as Google, Microsoft and OpenAI.
{{#rendered}} {{/rendered}}
The executive order aims to accomplish four major things. It establishes new standards for AI safety and security; protects Americans’ privacy, civil rights and consumer rights; supports workers and innovation; and advances American leadership worldwide. While those focuses are addressed, many critics say it is watered down and likely not enough to install much-needed safety rails for this rapidly progressing technology that is capable of outperforming many human minds.
MORE: WHY IS GOOGLE’S AI CHATBOT DODGING ISRAEL-HAMAS CONFLICT QUESTIONS?
{{#rendered}} {{/rendered}}
1. New standards for AI safety and security
One of the key components of the executive order is to create new standards, tools and tests to help ensure that AI systems are safe, secure and trustworthy.
Sharing safety test results for powerful AI systems
The executive order requires developers of the most powerful AI systems, such as foundation models that pose a serious risk to national security or public health and safety, to share their safety test results and other critical information with the U.S. government before making them public. This will allow the government to assess the potential risks and benefits of these systems and prevent any harmful or malicious use.
Setting rigorous standards for red-team testing
The executive order also directs the National Institute of Standards and Technology (NIST) to set rigorous standards for extensive red-team testing to ensure safety before public release. Red-team testing is a method of evaluating the security and robustness of a system by simulating attacks from adversaries.
{{#rendered}} {{/rendered}}
CHATGPT CHIEF WARNS OF SOME ‘SUPERHUMAN’ SKILLS AI COULD DEVELOP
Applying safety standards to critical infrastructure sectors
The Department of Homeland Security (DHS) will apply these standards to critical infrastructure sectors and establish the AI Safety and Security Board.
The Departments of Energy and Homeland Security will also address AI systems’ threats to critical infrastructure, as well as chemical, biological, radiological, nuclear and cybersecurity risks.
{{#rendered}} {{/rendered}}
Developing new standards for biological synthesis screening
Additionally, the executive order aims to protect against the risks of using AI to engineer dangerous biological materials by developing strong new standards for biological synthesis screening.
Agencies that fund life-science projects will establish these standards as a condition of federal funding, creating powerful incentives to ensure appropriate screening and manage risks potentially made worse by AI.
These actions are the most significant steps ever taken by any government to advance the field of AI safety and security. They will help ensure that AI systems are aligned with human values and will not cause harm or damage to people or the environment.
{{#rendered}} {{/rendered}}
MORE: PREVENTING MASS SHOOTINGS WITH A DETECTION: NAVY SEALS-INSPIRED INTERVENTION
2. Protecting Americans’ privacy, civil rights and consumer rights
Another important aspect of the executive order is to protect Americans’ privacy, civil rights and consumer rights in the age of AI.
Evaluating privacy techniques used in AI
The executive order creates guidelines that agencies can use to evaluate privacy techniques used in AI, such as differential privacy or federated learning. These techniques are designed to protect sensitive or personal data from unauthorized access or disclosure while still allowing useful analysis or learning.
{{#rendered}} {{/rendered}}
Advancing equity and civil rights in AI applications
The executive order also advances equity and civil rights by providing guidance to landlords and federal contractors to help avoid AI algorithms furthering discrimination, such as in housing or employment decisions. It also creates best practices on the appropriate role of AI in the justice system, such as when it is used in sentencing, risk assessments or crime forecasting. These actions will help prevent bias and unfairness in AI applications that affect people’s lives and opportunities.
Protecting consumers from harmful AI-related healthcare practices
Furthermore, the executive order protects consumers by directing the Department of Health and Human Services (HHS) to create a program to evaluate potentially harmful AI-related health care practices, such as misdiagnosis or over-treatment. It also creates resources on how educators can responsibly use AI tools, such as personalized learning or adaptive testing. These measures will help ensure that AI systems are used in ways that benefit people’s health, education and well-being.
EXPERT WARNS BIDEN’S AI ORDER HAS ‘WRONG PRIORITIES’ DESPITE SOME POSITIVE REVIEWS
{{#rendered}} {{/rendered}}
MORE: THIS DATING APP USES AI TO FIND YOUR SOUL MATE BY YOUR FACE
3. Supporting workers and innovation
The executive order also supports workers and innovation in the AI sector to help foster a vibrant and diverse AI ecosystem that drives scientific discovery and economic prosperity.
Producing a report on the labor market implications of AI
The executive order produces a report on the potential labor market implications of AI and studies the ways the federal government could support workers who are affected by a disruption to the labor market. This will help prepare workers for the changing nature of work and provide them with opportunities for reskilling or upskilling.
{{#rendered}} {{/rendered}}
Increasing investments in AI research and development
The executive order also promotes innovation and competition by directing agencies to increase their investments in AI research and development (R&D), especially in areas that have a high potential for social impact or economic growth. It also encourages agencies to collaborate with industry, academia, civil society and international partners on advancing responsible AI innovation.
MORE: META CONFESSES IT’S USING WHAT YOU POST TO TRAIN ITS AI
4. Advancing American leadership around the world
Finally, the executive order advances American leadership around the world by directing agencies to engage with allies and partners in developing common norms and principles for responsible AI use.
{{#rendered}} {{/rendered}}
It also directs agencies to promote human rights and democratic values in their AI-related activities and to oppose any attempts by authoritarian regimes to misuse AI for repression or surveillance.
These actions will help ensure that the United States remains a global leader in shaping the future of AI in a way that reflects its values and interests.
MORE: HOW TO USE AI TO HELP YOU GET A BETTER JOB INSTEAD OF IT STEALING ONE
{{#rendered}} {{/rendered}}
Rules the executive order omits
The order omits some rules that have been part of this year’s public debates. For instance, there is no regulation for licensing the most advanced models, a proposal endorsed by OpenAI CEO Sam Altman, and there are no restrictions on the most risky uses of the technology.
Also, the order does not compel the release of details about training data and model size, which many experts and critics argue is essential for understanding the technology and anticipating its potential harms.
In addition, there is no guidance on how intellectual property law will apply to works created with or by AI — that is now left to courts to decide.
{{#rendered}} {{/rendered}}
A world without the executive order
Biden’s executive order on AI is a step in the right direction, even though it might not be enough or permanent. Some argue that the executive order is not legally binding, and it can be changed or revoked by future presidents.
They also suggest that more specific and enforceable regulations are needed to address the complex and evolving challenges and opportunities of AI. Without the executive order, AI could still be regulated by existing laws and voluntary standards, but they might not be sufficient or consistent enough to ensure that AI is responsible, ethical, and beneficial for all.
{{#rendered}} {{/rendered}}
Kurt’s key takeaways
Biden’s executive order on AI is a significant government action that hopefully will have a positive impact on the future of AI. It attempts to ensure that AI systems are safe, secure, trustworthy and beneficial for all Americans and the world but ignored core advice from AI leaders like OpenAI’s CEO Sam Altman. It leaves a number of serious unresolved questions to be answered.
How do you feel about the Biden executive order? Do you think it will help the U.S. lead the way in AI innovation and safety? Let us know by writing us at Cyberguy.com/Contact.
For more of my tech tips and security alerts, subscribe to my free CyberGuy Report Newsletter by heading to Cyberguy.com/Newsletter
{{#rendered}} {{/rendered}}
Ask Kurt a question or let us know what stories you’d like us to cover
Answers to the most asked CyberGuy questions:
● What is the best way to protect your Mac, Windows, iPhone and Android devices from getting hacked?
{{#rendered}} {{/rendered}}
● What is the best way to stay private, secure and anonymous while browsing the web?
● How can I get rid of robocalls with apps and data removal services?
Copyright 2023 CyberGuy.com. All rights reserved.