
Key Takeaways for Businesses from Biden’s Executive Order on A.I.

The past few weeks have been pivotal for the regulation of artificial intelligence (“AI”) globally: on October 30th, the G7 released the leaders’ agreement on Guiding Principles and a Code of Conduct on Artificial Intelligence.¹ Later that day, the Biden Administration issued a landmark Executive Order (“EO”)² on AI governance, including development and deployment considerations. The EO comes as a welcome development following growing calls for Washington to regulate AI, with state legislators introducing 200 AI-related bills this year alone.³

The EO sets out eight principles and priorities and issues guidance on areas ranging from AI safety and security standards to privacy protection and worker and consumer rights.⁴ The EO’s breadth and the administration’s broad definition of AI mean this EO is likely to affect organizations of varying sizes employing AI systems across numerous sectors.⁵ Here are three key takeaways for your business following the EO’s release:

National Institute of Standards and Technology Taking a Leading Role

The Biden administration is entrusting the National Institute of Standards and Technology (“NIST”) to develop best practices, standards, tools, and tests to help ensure that AI systems remain safe, secure, and trustworthy.⁶ NIST’s role will include setting rigorous standards for extensive red-team testing to ensure the safety of AI systems before their public release.⁷

Additionally, in cases where government use of AI could affect individual rights or safety, adherence to the practices outlined in the Office of Science and Technology Policy’s (“OSTP”) Blueprint for an AI Bill of Rights⁸ and NIST’s AI Risk Management Framework⁹ is mandatory. These practices are designed to guarantee AI compliance with federal laws and encourage thorough monitoring. The new mandate detailed in the EO, alongside provisions for stringent testing requirements, is a significant signal of the government’s intention to scrutinize AI systems to ensure they are safe and secure.¹⁰

Privacy Implications

The EO’s sixth guiding principle and priority explicitly makes privacy the focal point in stating that “Americans’ privacy and civil liberties must be protected by ensuring that the collection, use and retention of data is lawful, secure and promotes privacy.”¹¹ While the US has yet to enact national consumer data protection legislation, the EO takes steps to protect commercially available information (CAI) held by federal agencies to mitigate privacy risks potentially exacerbated by AI—including by AI’s facilitation of the collection or use of information about individuals, or the making of inferences about them.¹² Nevertheless, critics point out that although the EO is a step in the right direction, without more comprehensive data privacy legislation, people remain at risk of having their personal or confidential information revealed by AI systems.¹³

AI Readiness

The EO has made it explicitly clear that regulating AI systems is a priority for the Biden administration. As such, it is incumbent on businesses to move swiftly to implement an AI governance process if they have not already started to do so. This is particularly apt given the breadth of the issues intersecting AI, such as compliance, liability, intellectual property, contracts, ethics, and human rights.

This EO also presents the opportunity for organizations to evaluate how they currently operate and deliver existing services, especially as they continue to or begin adopting AI systems. In particular, businesses must adhere to the legal and regulatory frameworks governing these AI systems and enhance their AI procedures and digital literacy to ensure that AI systems are used effectively and responsibly.

Looking Ahead

In conclusion, organizations must start prioritizing key AI governance mechanisms and procedures, including:

  1. Adopting watermarking for AI-generated content using open-source standards from entities like the Coalition for Content Provenance and Authenticity to help build increased transparency and foster customer trust in AI solutions.¹⁴

  2. Leveraging privacy-enhancing technologies (“PETs”) to protect individuals’ identities and minimize the risk of re-identification, particularly for organizations handling large amounts of sensitive data to build AI systems.

  3. Implementing red-teaming strategies by employing an independent challenge function or collaborating with third-party experts to assess AI systems' robustness, safety, and security.

  4. Establishing an AI governance process that aligns with the best practices in the NIST AI Risk Management Framework. With the evolving landscape of AI regulation, such governance is rapidly becoming a must-have for organizations using AI systems.

Not sure where to get started? INQ’s portfolio of AI services is customized to fit your specific needs and get you AI-ready. To learn more, visit our website or contact us. To keep up with the latest in AI news, subscribe to the Think INQ newsletter.




