“Almost everyone is experimenting with AI by now, but more than ninety percent admit there’s still no proven customer or business value coming out of it,” says Marcel Timmer, Country Managing Director Netherlands at Red Hat. “The phase of casual trial-and-error is ending. The question now is: how do we deploy AI safely, in a controlled and measurable way, on our own terms?”
That notion — AI on your own terms — was the central message at Red Hat Summit: Connect in Nieuwegein. Red Hat positions itself as the company that enables organizations to run AI within their own infrastructure, with full control over data, policies and lifecycle. “We’re not here to build the flashiest AI models,” Timmer says. “We make sure you can work with AI safely, repeatably and under full control — on your own data, on your own platform.”
Digital sovereignty
Red Hat has argued for years that workloads must remain freely portable between clouds and on-prem environments — a position that is now becoming mainstream. “We said this eight years ago, but at the time it was dismissed as a technical nuance. Now sovereignty is a top priority in every sector.” He warns, however, against buzzword fatigue: “Everyone is shouting ‘sovereign’ now, but it’s not a label. The real question is: can you move from cloud A to B tomorrow without breaking your entire architecture? Do you actually have insight and authority over your data?”
Red Hat’s own AI research shows that companies are ready to invest, but struggle with ROI and risk. “Too often it’s experimentation without predefined KPIs. People hope for value, but haven’t defined how they will measure it,” says Timmer. Meanwhile, shadow AI is already emerging — employees using AI tools outside IT governance. “They’ll do it anyway. That’s not necessarily bad, but it needs to be formally facilitated. Otherwise you get risk without strategy.”
Beyond AI-washing
According to Timmer, the AI-washing phase is over. “AI should not be bolted on as a gimmick — it has to be managed automatically at enterprise level. Version control, governance, security, rollback — it all has to be built in.” He refers to this as ‘automating the automation’: AI should be part of DevOps thinking, not something separate from it.
Interestingly, the AI conversation is happening alongside a renewed look at infrastructure and virtualization. “Organizations are rethinking platform choices because licensing and lock-in constraints are limiting their AI strategy.” The infrastructure layer now determines who stays agile and who doesn't.
One clear trend Timmer observes is the shift toward domain-specific AI models: smaller, closer to the data and delivering measurable value more quickly. “You don’t always need a 10-billion-parameter model. A smaller model running inside your own environment can generate value faster, and you know exactly what data it uses and who controls it.”
Skills as the real bottleneck
Right now, the biggest hurdle isn’t technology, but expertise. “We don’t lack ideas — we lack people who can responsibly bring AI into production. That’s not about prompt engineering, but about MLOps, data governance, security and infrastructure thinking together. The Netherlands needs to invest in this structurally.”
That maturity, he says, is what will now separate organizations that use AI occasionally from those that generate long-term advantage. “The experimentation phase was necessary to build familiarity — but now AI is becoming mission-critical. So you need to know upfront: who maintains the model? How is it updated? How do I prevent it from training on the wrong data? What happens if something breaks?”
Scaling responsibly
That is precisely why, in his view, generic cloud AI services won’t be sustainable long term for many organizations. “You can’t stay indefinitely dependent on black-box services where you don’t know what data they’re trained on, what decisions they make, or on what basis. Especially not in the public sector, healthcare or financial domains. Transparency and control are not optional. Open source is a crucial factor here.”
Red Hat applies AI internally and in its products — but always in a deliberate, integrated way. “In Red Hat Enterprise Linux we assist system administrators with real-time AI features. In Ansible Lightspeed we use AI to accelerate automation. Red Hat AI is designed to work with any AI model, any hardware and any cloud. We never use AI as a gimmick — only where it adds direct value. AI should not float outside your infrastructure — it must be part of it. Only then can you scale responsibly.”
