
by Michael Pascu, Senior Manager, Artificial Intelligence
AI use has increased significantly over the past year, especially for generative AI, whose adoption has nearly doubled in some organizations.¹ This rapid growth means organizations need a thoughtful plan to find, evaluate, and choose AI providers that fit their needs and risk appetite.
This post discusses some of the main steps organizations should take to ensure responsible AI procurement. From understanding new AI laws and building a checklist to evaluate AI providers, through to contracting considerations, we cover key points to help you make smart choices about which AI providers to work with.
Step 1: Understand emerging laws governing AI and how obligations differ between AI providers and deployers

Emerging AI laws such as the EU AI Act, Colorado’s Consumer Protections for Artificial Intelligence (SB205), and Canada’s draft Artificial Intelligence and Data Act promote a risk-based approach to the use of AI, resulting in some important implications for AI providers and deployers:
For example, AI providers of high-risk AI systems may be required to:²
create and share technical documentation and user instructions for AI deployers;
assess the AI system for bias, robustness, security, and accuracy, and share high-level results;
design AI systems to allow deployers to implement human oversight and keep detailed logs;
meet requirements for accuracy, robustness, and cybersecurity; and
adopt a risk management approach for the lifecycle of the AI system.
AI deployers of high-risk AI systems may be required to:³
keep detailed logs generated by the AI system (see the sketch after this list);
inform end-users about their interactions with AI systems;
assign human oversight to natural persons to oversee AI system operations; and
take technical and organizational steps to ensure AI systems are used according to their intended purpose, including processes for ongoing monitoring.
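To make the record-keeping and transparency obligations above more concrete, the sketch below shows one simple way a deployer might wrap calls to a provider's model so that every interaction is logged and end-users are told they are interacting with an AI system. The function names, log format, and notice wording are illustrative assumptions, not requirements drawn from any particular law.

```python
import json
import time
import uuid

def call_ai_system(prompt: str) -> str:
    """Placeholder for the provider's actual inference API (hypothetical)."""
    return "model response"

def answer_with_audit_trail(prompt: str, log_path: str = "ai_audit_log.jsonl") -> str:
    """Wrap each AI call so that (1) the interaction is logged for later review
    and (2) the end-user is told the response was AI-generated."""
    response = call_ai_system(prompt)
    record = {
        "id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "prompt": prompt,
        "response": response,
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    # Transparency notice appended for the end-user.
    return f"{response}\n\n[This response was generated by an AI system.]"
```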
Step 2: Augment vendor assessments to account for AI providers
AI deployers should establish a system to evaluate AI providers consistently and reliably during the procurement process. INQ has developed a starting point that procurers should adapt to meet their specific jurisdictional requirements and needs. To download INQ’s checklist, click here.
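As an illustration of what a consistent, repeatable evaluation might look like in practice, here is a minimal sketch of a weighted provider assessment in Python. The criteria, weights, scores, and passing threshold are hypothetical placeholders rather than INQ's actual checklist; adapt them to your jurisdiction, use case, and risk appetite.

```python
from dataclasses import dataclass

@dataclass
class Criterion:
    """A single checklist item scored during provider due diligence."""
    name: str
    weight: float   # relative importance (hypothetical values)
    score: int = 0  # 0-5 rating assigned by the review team

def overall_score(criteria: list[Criterion]) -> float:
    """Return a weighted score between 0 and 5 across all criteria."""
    total_weight = sum(c.weight for c in criteria)
    return sum(c.weight * c.score for c in criteria) / total_weight

# Hypothetical criteria drawn from the provider obligations discussed in Step 1.
checklist = [
    Criterion("Technical documentation and user instructions provided", 0.20, score=4),
    Criterion("Bias, robustness, security, and accuracy testing shared", 0.30, score=3),
    Criterion("Supports human oversight and detailed logging", 0.25, score=5),
    Criterion("Lifecycle risk management approach in place", 0.25, score=4),
]

result = overall_score(checklist)
if result < 3.5:  # the threshold is an illustrative choice, not a standard
    print("Escalate: provider does not meet the minimum assessment bar.")
else:
    print(f"Provider passes the initial screen: {result:.2f}/5")
```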
Step 3: Embed considerations for AI risk within your procurement contracts
After evaluating a potential AI provider, organizations may enter the contracting stage. At this stage, consider AI-specific risks and clearly outline the terms and conditions with the provider. In addition to complying with relevant AI laws, organizations should consider including terms for the following when contracting with AI providers:
Configurability: Many AI solutions are standardized products with limited customization, but this is starting to change. Some AI providers offer customization and can work closely with your organization to tailor their AI algorithms and data processing techniques for specific needs. The parameters and extent of customization should be documented in the contract as explicitly as possible and aligned with your unique requirements.
Intellectual Property (IP) and Data Ownership: IP can be complex, especially when it comes to generative AI. Ownership of algorithms and the data used to train the AI system should be defined. Document how the provider will use data inputs to improve the underlying algorithms.
Handling of Third-party IP: AI solutions may use third-party IP or data to produce outputs. AI deployers must review licensing terms, use rights, and appropriate warranties and indemnities to assess the risk of a third-party IP infringement claim.
Liability: Contracts should define liabilities and insurance coverage amounts. Liabilities could include personal injury and death, fraud, product liability, and costs related to privacy breaches.
Operational Measures: Consider what happens after the contract is signed. Depending on the nature of the relationship, there may be provisions for the provider to continuously monitor the AI system to ensure it meets the agreed-upon purposes. Ongoing monitoring is crucial for deployers to verify that the provider's claims about output and performance remain accurate (a brief monitoring sketch follows below).
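For example, where the contract specifies measurable performance commitments, a deployer can compare the provider's reported or observed metrics against those commitments on a recurring schedule. The sketch below assumes hypothetical metric names and thresholds; in practice the values would come from the negotiated contract.

```python
# A minimal monitoring sketch, assuming the contract specifies measurable
# performance thresholds (the metric names and limits here are hypothetical).
CONTRACTED_THRESHOLDS = {
    "accuracy": 0.90,        # minimum acceptable accuracy
    "p95_latency_ms": 500,   # maximum acceptable 95th-percentile latency
    "error_rate": 0.02,      # maximum acceptable error rate
}

def check_against_contract(observed: dict[str, float]) -> list[str]:
    """Compare observed metrics with the contracted thresholds and
    return a list of human-readable breaches for escalation."""
    breaches = []
    for metric, limit in CONTRACTED_THRESHOLDS.items():
        value = observed.get(metric)
        if value is None:
            breaches.append(f"{metric}: no data reported this period")
        elif metric == "accuracy" and value < limit:
            breaches.append(f"{metric}: {value:.2f} is below the contracted minimum {limit:.2f}")
        elif metric != "accuracy" and value > limit:
            breaches.append(f"{metric}: {value} exceeds the contracted maximum {limit}")
    return breaches

# Example: metrics gathered during a monthly review (illustrative numbers).
print(check_against_contract({"accuracy": 0.87, "p95_latency_ms": 420, "error_rate": 0.03}))
```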
Conclusion
In this post, we’ve covered the accelerating adoption of AI, emerging legal requirements for providers and deployers, a checklist-based approach to evaluating providers, and key considerations to embed within your procurement contracts.
Integrating these responsible AI procurement practices can help you make informed decisions that align with your organization’s needs and risk appetite. As AI continues to evolve and its adoption accelerates, focusing on these key areas will help your organization harness AI’s full potential and procure solutions with confidence.
Join us next time where we take a look at AI risk assessments and the value behind establishing a Responsible AI Committee.
How can we help?
INQ’s portfolio of AI services is customized to fit your specific needs and get you AI-ready. To learn more, visit our website at www.inq.consulting or contact us at ai@inq.consulting. To keep up with the latest in AI news, subscribe to the Think INQ newsletter.