
AI is Here Now. Stop Waiting to Build Trust in How It's Used.



by Michael Pascu, Senior Manager, Artificial Intelligence and Michelle Niazi, INQ Law Intern



In Brief

  • Two out of three people now use AI regularly, whether organizations are ready or not. Adoption has completely outpaced regulation.

  • It gets worse. 57% of employees report concealing their use of AI at work (what we call “shadow AI”), and 56% report having made mistakes because of this use. Organizations have no means of detecting it, and the longer shadow AI goes unchecked, the greater the risk to the organization.

  • Restricting AI is not the answer. You can’t afford to restrict it, and doing so won’t stop people from using it.

  • A study of 48,000 people across 47 countries found that institutional factors, including governance structures and confidence in the organizations deploying AI, are the single strongest driver of trust in AI systems.

  • Organizations that build governance now will be able to expand AI use across the business with confidence and with the credibility to prove it's being done responsibly.


 

Everyone is using AI.

Two out of three people now use AI regularly.1 People across every industry and seniority level have brought AI into their daily lives and into their work. Someone uses ChatGPT at home to plan a week of meals. The next morning, they use it to summarize a 40-page vendor proposal before a meeting. The tool is the same. The stakes are not. The barrier to entry for AI is remarkably low; it’s designed that way. Anyone with access to a computer can open a browser and start using AI tools in seconds. But here’s the problem: most organizations have no idea how extensively AI is being used, by whom, or for what purpose.


We’ve been slow to react.

To compound matters, a whopping 57% of employees admit to hiding their AI use at work.2 They hide it for the reasons you’d expect: no clear policy, fear of judgment, and the quiet knowledge that it makes them faster. Employees are drafting reports, writing emails, summarizing meetings, and generating analysis with tools their employers never approved and don’t know about. This is shadow AI. In the absence of clear guidance, people default to secrecy.


Shadow AI is a big problem for your organization.

56% of those employees also report having made mistakes as a direct result of AI use.3 These errors seep into deliverables, skew organizational decisions, and ultimately cost the company time, money, and credibility.


The other challenge is that most organizations are not equipped to address the risk. Organizations cannot manage what they cannot see. Only two in five organizations have policies or guidance on the use of AI.4 Mistakes will go undetected and bad habits will compound. The longer shadow AI operates unchecked, the deeper the problem becomes.


There’s no going back.

If the instinct in your organization is to slow things down and restrict access, understand that the window for that approach has closed. Restrict access now, and within the year you’ll be outpaced by competitors who have embraced AI.


Your employees have learned these tools on their own time and brought them to work because the tools made them faster. Locking AI out at this point will only alienate staff, not undo their behaviours. In fact, it will likely push use further into the shadows, where the risk only amplifies.


To complicate matters, there are also external pressures. Regulators are not sitting by. The European Union’s AI Act has set the benchmark, while Canada, China, South Korea, and the United States have each proposed or enacted their own AI legislation.5 Expectations around transparency, risk management, and accountability are accelerating.


So, the situation compounds. Your people are using AI without oversight. Regulators are tightening the requirements. The technology is only becoming more capable, enticing your staff to use it to do more and more.


Governance builds certainty, confidence, and trust.

The risks described above all share a common root: the absence of high calibre governance. Governance is what gives an organization the ability to set expectations for AI use and demonstrate to impacted stakeholders that AI is being used responsibly.


Governance is also how you build trust. Researchers define trust in AI as a willingness to be vulnerable to an AI system, by relying on its output or sharing personal data, based on positive expectations of how the system will operate.6 In short, employees need to believe the tools they are asked to use are reliable and that the organization has their interests in mind.


A global study of over 48,000 people across 47 countries found that institutional factors, including governance structures, regulations, and confidence in entities to develop and use AI in the best interests of the public, are the strongest driver of trust in AI systems.7 These outweighed other factors including AI literacy and perceived benefits and risks.


Inside your organization, you can think of the chain like this. Governance increases certainty, because people know the rules and what they can and cannot do. Certainty increases predictability, because stakeholders can anticipate how AI will be used. Predictability builds confidence, because patterns of responsible behaviour become visible over time. And confidence, sustained and reinforced by consistent action, becomes trust.


At INQ, this is where the work with our clients begins. Your employees may be using AI without policies to guide them. You may have no visibility into how AI tools interact with your data. And you may not be able to demonstrate to regulators that your organization is managing AI responsibly. If any of that sounds familiar, the governance gap is already costing you.


Our AI governance program helps organizations close that gap: establishing clear policies, building accountability structures, and creating the conditions where trust in AI can actually take root. Organizations that do this now will scale faster and earn the credibility that matters. Those that wait will be perpetually catching up.

 



Works Cited

1 Gillespie, N., et al. Trust, Attitudes and Use of Artificial Intelligence: A Global Study 2025. The University of Melbourne and KPMG, 2025.

2 Ibid.

3 Ibid.

4 Ibid.

5 International Association of Privacy Professionals. Global AI Law and Policy Tracker. IAPP, 2025.

6 Gillespie, N., et al. Trust, Attitudes and Use of Artificial Intelligence: A Global Study 2025. The University of Melbourne and KPMG, 2025.

7 Ibid.
