Responsible AI in Practice: Why Trust Is the Real Currency
An Interview with Gwendoline Grollier, Co-Founder of T3, questions by Ramona Thumm
AI is moving from buzzword to basic infrastructure. It shapes how organisations hire, lend, diagnose, and make decisions. At the same time, headlines about AI harms, bias and regulatory fines are multiplying.
We spoke with Gwendoline Grollier, co-founder of T3, a 99 percent female-owned, practitioner-led firm working at the intersection of risk, regulation and Responsible AI. Her core belief is simple: “Risk is not about saying no, it is about enabling smarter yeses.”
D&S: Right now, AI is surrounded by hype and strong narratives. What are the biggest misconceptions, and what should leaders really pay attention to?
Gwendoline Grollier: The biggest misconception is that AI is almost human. It is not. It is pattern prediction, not understanding. Generative AI is brilliant at producing plausible answers, but it has no common sense, no strategic context and no accountability. Treating it like a wise colleague rather than a very fast, very literal intern is how you get into trouble.
The second misconception is the “plug it in and productivity explodes” story. Leaders often underestimate the invisible work: data quality, process redesign, training, change management and governance. AI does not fix broken processes; it amplifies what is already there, both the strengths and the weaknesses.
And finally, there is the fear of being behind if you are not on the very latest model. In reality, most organisations are far from exhausting the value of the tools they already have. The real questions are much more practical:
- Where does AI truly improve decisions or customer experience?
- What are the operational, legal and ethical risks in those specific contexts?
- And how will we measure impact, not just activity?
I often describe AI as a leverage machine. It will leverage your strengths and your weaknesses. The responsible question is not “How advanced is the model?” but “How robust is the environment we are putting it into?”
D&S: And how do unrealistic expectations create risks or blind spots inside organisations?
Gwendoline Grollier: Hype is not harmless. It is a governance issue.
If senior leaders believe AI is a silver bullet, several things tend to happen. People start over-trusting outputs and stop challenging results. Projects are pushed through too quickly, bypassing risk and compliance “just this once” to keep up with competitors. And when the first wave of projects under-delivers, you get organisational cynicism: people disengage just when you need their buy-in the most. One of the biggest issues our clients come to us with is how to truly measure the return on investment of AI adoption.
For me, credibility is everything. Credibility compounds like capital. When leaders set realistic expectations and show their work, they build trust in the AI journey over time instead of burning it in one big over-promise.
D&S: Regulation is evolving quickly. Can you give us a simple, practical overview of what companies should understand today, and how they can start building Responsible AI?
Gwendoline Grollier: The core message from regulators is quite simple: if AI can materially affect people or markets, you are expected to understand what it is doing and be in control of it.
I encourage leaders to think in three layers:
- Existing laws that already apply: data protection, consumer protection, anti-discrimination, and sector rules in finance or healthcare all apply regardless of whether you use AI. AI does not sit outside the legal system.
- AI-specific rules: in Europe, for example, the AI Act introduces a risk-based approach. Some uses are prohibited, some are classed as high-risk with strict requirements, and others come with lighter but still meaningful obligations. Other regions are moving in similar directions through laws or binding guidelines.
- Soft law and standards: frameworks like NIST’s AI Risk Management Framework and emerging standards such as ISO/IEC 42001 are becoming the reference for what Responsible AI looks like, even if they are not formally binding.
You do not need to become a legal expert. What you do need is a clear view of where AI is used in your organisation, which use cases touch customers, employees or critical processes, and what obligations attach to those.
A very practical first step is to build an AI use-case inventory, classify each use case by risk and regulatory impact, and define, per category, the required approvals, documentation and controls. That simple structure moves you from “we hope we are compliant” to “we can show our homework”.
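The inventory-and-classification step described above can be sketched in code. This is a minimal, hypothetical illustration, not a prescribed schema: the field names, risk tiers (loosely echoing the AI Act's risk-based approach mentioned earlier) and control sets are all illustrative assumptions.

```python
from dataclasses import dataclass, field

# Hypothetical AI use-case inventory entry. Risk tiers loosely mirror
# a risk-based regulatory approach; names and controls are illustrative.
RISK_TIERS = ("prohibited", "high", "limited", "minimal")

@dataclass
class UseCase:
    name: str
    owner: str                                     # accountable executive or team
    affects: list = field(default_factory=list)    # e.g. ["customers", "employees"]
    risk_tier: str = "minimal"

    def required_controls(self) -> list:
        """Map a risk tier to a minimal control set (illustrative only)."""
        baseline = ["register entry", "owner sign-off"]
        if self.risk_tier == "prohibited":
            return ["do not deploy"]
        if self.risk_tier == "high":
            return baseline + ["bias testing", "human oversight", "documentation pack"]
        if self.risk_tier == "limited":
            return baseline + ["transparency notice"]
        return baseline

# A two-entry inventory: classify each use case, then derive its controls.
inventory = [
    UseCase("CV screening assistant", "Head of HR",
            affects=["job applicants"], risk_tier="high"),
    UseCase("Internal meeting summariser", "IT",
            affects=["employees"], risk_tier="minimal"),
]

for uc in inventory:
    print(f"{uc.name}: {uc.required_controls()}")
```

Even a table this simple moves an organisation from "we hope we are compliant" to being able to show, per use case, who owns it and which controls attach to it.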
D&S: What does good AI governance look like in practice?
Gwendoline Grollier: Good AI governance is not a 200-page policy that nobody reads. It is the set of habits and decision rights that shape how AI moves from idea to production.
Here are five clear, practical pillars anyone can use:
- Ownership: Name an executive AI owner and make clear who approves which use cases.
- Lifecycle gates: Have simple checkpoints from idea to deployment to monitoring.
- Traceability: Document purpose, data, key tests and limits for important AI systems.
- Third-party guardrails: Assess vendors and control access, logging and data use on your side.
- People and escalation: Train users and make it easy to say “this looks wrong” and escalate.
D&S: You often say risk management is not about saying “no”, but about enabling smarter yes-decisions. How can AI risk management become a strategic advantage?
Gwendoline Grollier: I genuinely believe good risk management is a competitive advantage. The classic analogy is brakes on a car: you do not install them to go slower; you install them so you dare to go faster.
When you handle AI risk well, three advantages emerge:
- You prioritise the right use cases instead of chasing everything that sounds innovative.
- You can scale with confidence because risk and compliance are partners in the journey, not late-stage blockers.
- You build trust with stakeholders such as customers, regulators and investors by being able to explain your approach clearly and transparently.
I have seen projects stall for months because risk concerns surfaced only after everything was built. Early, structured risk management saves time. Late, ad-hoc risk management kills momentum.
D&S: What cultural or organisational shifts does that require?
Gwendoline Grollier: First, you have to move from shadow AI to transparent AI. People need to feel safe admitting they are using tools, asking questions and flagging concerns. Punishing experimentation just pushes it underground.
Second, you need cross-functional ownership. AI cannot live solely in IT or data science. Business leaders, risk, legal, compliance and HR all have a stake. Just as everyone is a risk manager in a bank, everyone should be a Responsible AI champion in any firm.
Third, you should reward responsible behaviour, not just speed. If you only celebrate the fastest deployments, you implicitly say that cutting corners is fine. Recognise the teams that raised a red flag early and improved a solution because of fairness, robustness or compliance.
And finally, invest in AI literacy. Not everyone has to be a data scientist, but leaders and employees do need a basic grasp of what AI can and cannot do, and the role they play in supervising it.
D&S: If you could give leaders one piece of advice for navigating AI responsibly in 2026, what would it be?
Gwendoline Grollier: Treat AI like a powerful new colleague or a very willing intern you are bringing into your most sensitive processes, not like a toy and not like an oracle. When you hire a senior person, you define their role, check their track record, set boundaries, decide who supervises them and monitor performance over time. AI deserves the same discipline.
Ask yourself:
- Where does this system sit in my organisation chart?
- Who is ultimately responsible for the decisions it influences?
- What guardrails and monitoring do we have if it goes off track?
The leaders who work through those questions now will be the ones able to say yes to AI in the most meaningful parts of their business, while others are still oscillating between blind enthusiasm and panic.
Do not outsource your judgment to AI and do not outsource your responsibility either. Own it, structure it, and you will be in a very strong position by 2026.
D&S: And finally, what exactly does T3 do in the field of Responsible AI, and how is your approach different from classic tech-driven AI providers?
Gwendoline Grollier: If I had to put it in one sentence: at T3, we help organisations use AI boldly but safely.
Concretely, we do four main things:
- AI Governance & Risk: We help you answer: “Where are we using AI, who is in charge, and what could go wrong?” That means AI registers, clear responsibilities and simple rules so AI stays within your risk and regulatory limits.
- AI Adoption & Change: We make AI part of real work, not just a lab experiment. From “Are we ready?” checks to selecting vendors and redesigning processes, we help teams actually use AI in day-to-day operations.
- AI Assurance & Testing: We independently “kick the tyres” of your important AI systems. We test models and vendors, check how they behave in practice, and set up monitoring so you know when something starts to drift or break.
- AI Literacy & Training: We help people understand AI in practical terms. Boards, executives and teams learn what AI can and cannot do, how to use it safely, and when to ask questions or escalate concerns.
Where we differ from classic tech providers is that we are technology-agnostic. A lot of providers start from the solution: “Here is the platform, here is the model.” Our starting point is: what is materially at stake for your customers, your balance sheet and your reputation? From there, we shape a portfolio of AI use cases that is both ambitious and controllable.