The executive order (EO) on artificial intelligence (AI) issued by the U.S. administration represents a first step in shaping the future landscape of AI. By imposing transparency and safety-testing requirements, the order aims to mitigate the risks associated with powerful AI systems, ensuring they are secure and reliable before being widely adopted. Developers of advanced AI models must share their safety test results with the U.S. government, and companies must notify the federal government when training models that could pose significant risks. Compared with the EU AI Act, however, the EO may not raise the bar high enough, nor does it make the consequences of non-compliance clear. Businesses operating on a global scale will face significant overlap between AI Act and EO compliance requirements.
Sustainable and trustworthy AI, in all its aspects, is needed now to build and maintain trust and accountability in AI technology. The EO emphasizes developing robust standards and tools to safeguard critical infrastructure and address potential threats in domains such as cybersecurity and biological synthesis. The National Institute of Standards and Technology (NIST) will establish rigorous standards for extensive red-team testing, ensuring AI systems undergo thorough safety evaluations before public release. The Department of Homeland Security will apply these standards to critical infrastructure sectors and establish the AI Safety and Security Board, further bolstering national security measures.
This approach highlights the administration's commitment to advancing AI technology responsibly. By addressing both the risks and the opportunities of AI, the EO strives to create a balanced, trustworthy ecosystem in which innovation can flourish, safety and security are maintained, and AI can benefit society at large.
Still, many are asking if the EO goes far enough to temper the rapid rise of AI, especially in sectors beyond critical infrastructure.
In addition to technical safeguards, the EO seeks to protect against the misuse of AI in the life sciences by setting new standards for biological synthesis screening. Federal funding agencies will require adherence to these standards as a condition of funding life-science projects, creating a powerful incentive for compliance and risk management.
The EO addresses eight main areas: national security, privacy, fairness, consumer protection, labor-market practices, innovation, international cooperation, and governmental AI expertise. Dedicated sections encourage the ethical application of AI in education, healthcare, and criminal justice.
While some congressional hearings on AI have focused on the possibility of creating a new federal AI regulatory agency, the EO instead distributes responsibility for AI governance among many federal agencies, tasking each with overseeing AI in its area of expertise.
The order's signing came at a critical juncture, coinciding with the G7's approval of the AI Code of Conduct and just days before an international summit on AI safety organized by the UK. This timing underscored the urgency of establishing a coherent national strategy on AI governance as world leaders prepared to discuss global AI safety.
The White House has released a fact sheet summarizing the order.
While the EO is a good starting point, it does not address all AI-related challenges, and it even falls short of the standards set by the EU AI Act. To that end, it calls on Congress to pass data privacy and AI legislation, since more durable legal frameworks are needed than an executive order that can easily be undone. The order sets tight deadlines for various actions, but experience with previous AI-focused executive orders suggests that full implementation may be delayed.
Given how rapidly AI is developing, it may also take considerable time before the order's effects are felt. Businesses operating globally will need to comply with both the AI Act and the EO. Notably, the EU tends to require more substantial document-based proof of compliance, while the U.S. appears, at times, to accept alignment with industry norms.
The next few months and years will be critical in assessing how well these measures take hold and whether they can keep pace with the relentless progress of AI technology. It is still too early to tell how this regulation will ultimately shake out. One thing is for sure, though: this EO will neither slow the development of powerful AI nor substantially mitigate the risks it poses to everyday consumers and even world governments.
We look forward to more clarity, at least parity with the EU AI Act, and specific consequences for those who fail to comply.
About 4CRisk.ai Products: Our AI products use language models specifically trained for the risk, compliance, and regulatory domains to automate the manual, effort-intensive tasks of risk and compliance professionals, delivering results in minutes rather than days, up to 50 times faster than manual methods.
Would you like a walkthrough to see what 4CRisk products can do for your organization? Contact us at contactus@4crisk.ai or register for a demo.
4CRisk products: Regulatory Research, Compliance Map, Regulatory Change, and Ask Aria Co-Pilot are revolutionizing how organizations connect regulations with their business requirements.