Manager, Artificial Intelligence | INQ Consulting
The International Association of Privacy Professionals (IAPP) recently held its inaugural AI Governance Conference in Boston, a significant milestone for the field of artificial intelligence (AI) governance. Discussion throughout the conference centered on the necessity of robust AI controls and the pivotal role privacy and data professionals play in establishing frameworks to govern AI. Here are the key insights from the event:
Embrace imperfection. The domain of AI governance often appears complex and overwhelming. Organizations may feel a strong urge to perfect their approach from the outset. However, aiming for perfection can be counterproductive. Organizations need to begin with what they have and grow incrementally. This might include improving existing vendor risk assessment frameworks or privacy impact assessments to account for AI-related risks. With their risk identification and control implementation skills, privacy professionals are especially well-equipped to make substantial contributions to these governance efforts.
Privacy is the starting point. Privacy experts are integral to the AI governance dialogue; however, privacy is just one facet of the conversation. Many organizations have succeeded with AI governance driven by business and enablement functions, particularly those at the forefront of AI innovation. An inclusive AI governance program must involve diverse stakeholders, including data science, compliance, risk, and product teams. The interplay among these groups, underpinned by a robust organizational culture and ongoing education, is critical.
Culture is everything. Understanding the ‘why’ behind AI governance is as important as knowing the ‘how.’ AI governance is not merely a procedural formality; it requires proper organizational buy-in. Robust governance aims to expand the possibilities of what a company can achieve with AI, fostering a culture of responsible experimentation. AI governance should be viewed as an enabling mechanism, much like advanced car braking systems that allow for safer, faster driving.
AI regulation is here to stay. The global regulatory landscape is increasingly focused on AI. While many of these regulations are still in the pipeline, it is evident that organizations using AI, particularly those in high-risk sectors, must develop their AI governance programs now. At the same time, organizations that avoid AI altogether out of regulatory concern risk missing significant business opportunities.
Learn from others. Much can already be learned from existing guidelines and frameworks established by regulators, standards bodies, and others. Notable examples include the NIST AI Risk Management Framework and the International Guiding Principles for Organizations Developing Advanced AI Systems. Various data protection and government agencies, such as the UK Information Commissioner’s Office and Innovation, Science and Economic Development Canada, also offer valuable resources, and the IAPP contributes to this knowledge pool by disseminating resources on AI governance.
In conclusion, organizations must prioritize the implementation of AI governance mechanisms and procedures. These should encompass the following elements:
Formulate AI policies that delineate your organization's risk tolerance, define the acceptable and unacceptable applications of AI, and guide the use of tools such as ChatGPT.
Create a cross-functional stakeholder committee responsible for developing AI policies and procedures for the enterprise and assessing the risk of planned AI initiatives.
Establish AI procedures that equip practitioners with methodologies for designing, developing, and deploying responsible AI solutions.
Conduct AI impact assessments to proactively identify and address risks associated with AI.
Deliver training and education programs so that data and AI professionals involved across the stages of the AI lifecycle understand their role in upholding effective governance.
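To make the recommendations above concrete, here is a minimal sketch of how an AI impact assessment intake might triage initiatives by risk. The risk factors, weights, and tier thresholds are illustrative assumptions, not drawn from any specific regulation or framework; a real program would define these through the cross-functional committee described above.

```python
from dataclasses import dataclass, field

# Hypothetical risk factors and weights for triaging an AI initiative.
# These values are assumptions for illustration only.
RISK_FACTORS = {
    "processes_personal_data": 2,
    "automated_decision_making": 3,
    "high_risk_sector": 3,   # e.g., health, finance, employment
    "uses_third_party_model": 1,
}

@dataclass
class AIInitiative:
    name: str
    factors: set = field(default_factory=set)

    def risk_score(self) -> int:
        # Sum the weights of all applicable risk factors.
        return sum(RISK_FACTORS.get(f, 0) for f in self.factors)

    def risk_tier(self) -> str:
        # Map the score to an assessment tier (thresholds are assumptions).
        score = self.risk_score()
        if score >= 5:
            return "high"    # full impact assessment and committee review
        if score >= 2:
            return "medium"  # lightweight assessment
        return "low"         # record and monitor

chatbot = AIInitiative(
    "customer support chatbot",
    {"processes_personal_data", "uses_third_party_model"},
)
print(chatbot.risk_tier())  # -> medium (score 3)
```

The design choice here is deliberate: a simple, transparent scoring rubric is easy for non-technical stakeholders on a governance committee to review and adjust, which supports the culture of responsible experimentation described earlier.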