
Are you ready for artificial intelligence?

Responsible AI is an approach to developing, assessing, and deploying AI systems in a safe, trustworthy, and diligent way. We stay current with the latest global developments in privacy law, data protection, data governance, and emerging technologies, and we offer support to your organization as you embark on your Responsible AI journey. 

With over 130 AI bills proposed or passed in the US in 2021 alone, is your organization ready for Responsible AI?

Get started on your own AI Governance Journey by completing the form below.

AI Maturity Assessment:

INQ uses a proprietary maturity assessment methodology to evaluate your organization's responsible AI readiness. We review your processes, systems, procedures, tools, and technology to align them with emerging AI legislation and industry best practices. Our methodology is framed on:

  1. Strategy & Culture: Assessing the alignment of your organization’s data vision with your corporate strategy, and how your organization fosters a responsible data culture.

  2. Policies & Processes: Assessing your existing governance mechanisms and their compliance with industry best practices and emerging or existing law.

  3. Talent & Training: Evaluating talent gaps between data ambitions and existing talent, and how best to upskill internal talent with responsible AI training.

  4. Supplier Management & Procurement: Reviewing supplier due diligence and aligning procurement practices with responsible AI standards.

Bias Assessment and Model Validation:

Concerns about bias are top of mind for regulators across the health, financial, insurance, employment, and human resources sectors, among others. INQ partners with trusted model validators to provide:

  • Qualitative assessment of your model development practices, operating procedures, data and model design specifications, and metrics.

  • Quantitative assessment of model and data bias, transparency/explainability, and validation of your model’s robustness including adversarial, perturbation, and edge case testing.

  • Post-assessment remediation report to bridge identified technical and procedural gaps.

  • External-ready bias reports containing assessment findings, prepared to comply with legislation such as NYC Local Law 144.

We routinely develop guidelines and standards for evaluating and testing models for fairness/bias, robustness, transparency, and explainability.  

AI Policies, Procedures, and Incident Response:

Implementing strong policies, procedures, and incident response plans is crucial to establishing an effective AI governance program. INQ supports you in creating and implementing policies, procedures, and standards for algorithmic transparency, explainability, and fairness/bias. We also assist with incident response planning and vendor assessment so you can meet your legal obligations and keep pace with emerging best practices.

AI Impact Assessments:

Does your AI system operate in a ‘high-risk’ context as defined by law? Have you conducted a comprehensive impact assessment to identify the risks of your system, with a plan to mitigate them?

Our AI Impact Assessment helps mitigate AI-related risks across each stage of the AI lifecycle, with practical recommendations for model practitioners as well as key teams (AI, Legal & Compliance, and Risk).

AI Ethics:

Our ethics advisory service connects you with a neutral, expert ethicist who can provide you with a framework for ethical decision-making and guidance on AI ethics within your organization. This equips you to make responsible AI decisions that prevent unintended consequences.

Education and Training:

Prepare your executives, leaders, and practitioners for ethical and responsible AI with our custom training solutions. INQ provides a comprehensive learning experience in a variety of formats, both online and in person. Our sessions equip participants with the skills to put ethical AI into practice, and our curriculum includes programs supporting AI strategy design, development of an AI governance ecosystem, and ethical AI training.

Stakeholder Engagement Strategy:

INQ’s stakeholder engagement strategies foster improved communication, stronger decision-making, and increased buy-in for your AI initiatives. This tailored approach allows you to identify and engage with stakeholders across your business to maximize the responsible use of data across all stages of the AI lifecycle.


Have questions about these services? Contact Us today. We are here to help.
