Responsible AI usage: where to begin for security leaders

As AI capabilities accelerate, most, if not all, organizations are already using AI. Sometimes that use is sanctioned; often, employees adopt tools on their own without security teams knowing. That “shadow AI” reality creates a widening gap between how leaders think AI is being used and how it’s actually woven throughout their SaaS stack. According to CISO and forthcoming author Jay Trinckes, that gap is one of the most urgent governance challenges security teams face today.

Trinckes, whose book The Definitive Guide to Responsible AI will be published in December 2025, sat down with us to discuss how organizations can use AI effectively and see the benefits it can offer without compromising compliance, privacy, or trust.

The hidden risk of shadow AI

Most organizations already have processes to evaluate and approve new technology, including AI systems. Those processes exist for a reason: they introduce risk mitigation and protective controls. But when employees use AI tools that haven't been reviewed or approved, Trinckes says the risk profile changes instantly.

Unauthorized AI—what he calls shadow AI—can lead to the unintentional disclosure of sensitive data, violations of intellectual property rights, or exposure to unclear contractual terms that accompany many free AI tools.

“In many cases with newer software, if you have free access, you're the product,” Trinckes notes. “Whatever data you give them, they can utilize.”

Even seemingly safe applications are increasingly embedding AI features by default. That means a tool that was once approved may now introduce new risks, or pull additional vendors, such as LLM providers, into an organization’s fourth-party ecosystem.

“All of our pre-approved applications are now adding AI features, which may change the application's current risk levels,” he says.

Why AI discovery is harder than it sounds

Understanding how AI is already being used across the business requires cross-functional coordination and deeper visibility into the technology stack. Trinckes recommends monitoring browser activity, reviewing extensions, and watching for new integrations that “quietly” activate AI capabilities.

He stresses that discovery isn’t just about inventorying standalone tools. It’s about uncovering where AI is woven into workflows and connected systems.

“AI solutions now attempt to integrate with other software applications in order to get the most out of their features,” he says. Monitoring those integrations is essential.
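To make that concrete, here is a deliberately simplified sketch of one discovery technique: scanning an exported web proxy or CASB log for connections to well-known AI service endpoints. The column names and the domain list below are assumptions to adapt to your own environment, and this approach only surfaces direct use of standalone tools; AI features embedded inside already-approved applications still require vendor reviews and integration monitoring.

```python
# Hypothetical sketch: flag traffic to AI services in an exported proxy/CASB log.
# The CSV columns ("user", "destination_host") and the domain list are
# assumptions -- adapt both to whatever your own logging platform exports.
import csv
from collections import defaultdict

AI_SERVICE_DOMAINS = {
    "api.openai.com", "chat.openai.com", "claude.ai",
    "api.anthropic.com", "gemini.google.com", "copilot.microsoft.com",
}

def find_shadow_ai(log_path: str) -> dict[str, set[str]]:
    """Return a map of user -> AI service hosts that user contacted."""
    hits: dict[str, set[str]] = defaultdict(set)
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            host = row["destination_host"].lower()
            # Match the domain itself or any of its subdomains.
            if any(host == d or host.endswith("." + d) for d in AI_SERVICE_DOMAINS):
                hits[row["user"]].add(host)
    return hits

if __name__ == "__main__":
    for user, hosts in sorted(find_shadow_ai("proxy_export.csv").items()):
        print(f"{user}: {', '.join(sorted(hosts))}")
```

Even a basic report like this gives security and compliance teams a starting inventory to compare against the list of formally approved tools.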

Responsible AI starts with one principle: accountability

There is no shortage of guides and principles defining what “responsible AI” means. Trinckes prefers to start with the high-level principles from the Organisation for Economic Co-operation and Development (OECD):

  • Inclusive growth, sustainable development, and well-being
  • Human rights and democratic values
  • Transparency and explainability
  • Robustness, security, and safety
  • Accountability

For him, accountability is the foundation. It reinforces all the other pillars.

“If your organization is going to be using or providing an AI solution, you need to be accountable for your actions,” he says. “At the end of the day, it is about being ethical in your use or provision of AI.”

When AI adoption goes wrong 

Trinckes points to a recent case in which Deloitte Australia had to refund the government roughly $290,000 for errors in an AI-generated report. The document included fabricated quotes and nonexistent references, issues traced back to insufficient oversight and inadequate governance.

“The lack of governing policies on the use of AI, human oversight, and the lack of quality control over the deliverables led to this situation,” he explains.

The lesson: AI doesn’t eliminate the need for expert review; it raises the stakes for it. Used properly, AI can genuinely amplify an organization’s productivity, but without real human oversight and quality control, it can just as easily produce risk and fines down the line.

How governance builds trust

Governance isn’t about slowing teams down; it’s about creating guardrails that allow AI innovation to happen responsibly. Trinckes breaks this into three audiences:

  • Internal stakeholders gain confidence that AI can be used safely while still enabling efficiency.
  • Customers gain assurance that their data isn’t used in ways they didn’t authorize.
  • Regulators gain visibility into how privacy, transparency, and explainability requirements are being met.

“Trusted AI can meet contractual obligations,” he says. “It ensures the data shared is not being used in a manner that it was not intended for.”

Why AI adoption lags in regulated industries

For many regulated organizations, the biggest blocker to AI adoption is simple: AI tools often cannot meet industry-specific security or privacy requirements. Many free or consumer-grade AI solutions rely on broad data rights or external transfers that these industries can't allow.

“Organizations developing or providing AI solutions must do better and build trust,” Trinckes says, especially when data residency, sector-specific rules, or contractual limitations are at play.

How to get executive buy-in

Trinckes emphasizes that no responsible AI program can succeed without executive sponsorship. That starts with building a compelling business case showing ROI or risk reduction, then embedding AI governance into integrated management systems—an approach he strongly recommends.

He points to ISO 42001 and the NIST AI Risk Management Framework as emerging standards leaders can use to structure their programs.
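Neither framework prescribes code, but the record-keeping both expect (an inventory of AI systems, their owners, the data they touch, and their approval status) can be sketched as a simple data structure. The field names below are illustrative assumptions, not terminology taken from ISO 42001 or the NIST AI Risk Management Framework.

```python
# Illustrative sketch only: a minimal AI system inventory record of the kind an
# ISO 42001 or NIST AI RMF style program might maintain. Field names are
# assumptions, not drawn from either standard's text.
from dataclasses import dataclass, field
from enum import Enum

class ApprovalStatus(Enum):
    PENDING = "pending"
    APPROVED = "approved"
    REJECTED = "rejected"
    NEEDS_REREVIEW = "needs_re-review"   # e.g., vendor switched on new AI features

@dataclass
class AISystemRecord:
    name: str                     # tool or embedded feature being governed
    business_owner: str           # accountable person, per the OECD principle
    vendor: str
    data_categories: list[str]    # e.g. ["customer PII", "source code"]
    trains_on_our_data: bool      # does the provider train on submitted data?
    status: ApprovalStatus = ApprovalStatus.PENDING
    mitigations: list[str] = field(default_factory=list)

# Example: an already-approved SaaS tool whose new AI feature triggers a
# re-review rather than an automatic pass.
record = AISystemRecord(
    name="CRM assistant (new AI feature)",
    business_owner="Head of Sales Ops",
    vendor="ExampleCRM Inc.",
    data_categories=["customer PII"],
    trains_on_our_data=True,
    status=ApprovalStatus.NEEDS_REREVIEW,
)
print(record)
```

An entry like the NEEDS_REREVIEW example above mirrors the earlier point about pre-approved applications: when a vendor adds AI features, the record changes state and the governance process re-engages instead of defaulting to the old approval.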

A common misstep organizations should avoid

The biggest failure he sees is simple but pervasive: security and privacy teams not knowing AI is being used until it’s too late.

“Compliance departments shouldn't say no, but rather ask, ‘How can we implement the solution without introducing additional risks?’” he advises. Approvals should happen up front—not during remediation.

Where to begin: standards, training, and education

For leaders looking to build or mature responsible AI programs, Trinckes recommends starting with foundational AI governance standards and free training resources from major cloud providers like AWS and Microsoft.

And, of course, he points to his upcoming book—The Definitive Guide to Responsible AI—which offers a step-by-step approach to building a complete AI governance program.

“I don't think AI is going away anytime soon,” he says. “But responsible usage will be key for organizations looking to strengthen and improve their security postures, instead of allowing AI to weaken them.”

Learn more about how Thoropass can help you achieve ISO 42001 compliance and manage AI risks more efficiently here.
