During the latest meetup of the Leiden AI Community, researchers and practitioners gathered around a question that increasingly defines the next phase of artificial intelligence: how can AI systems move beyond fluent language generation toward reliable understanding and decision support? Across three talks, the speakers approached this challenge from different perspectives, yet arrived at a shared conclusion. Context is not an optional enhancement to AI—it is a prerequisite. Knowledge graphs, semantic models, and explicit domain grounding emerged as the foundations for trustworthy and usable AI systems.
Grounding LLMs in Enterprise Reality
In Why Context Matters in AI – Grounding LLMs, Aniket Mitra examined the structural gap between large language models and real-world enterprise environments. Organizations, he explained, possess vast amounts of data, but much of it is fragmented across silos, stored in heterogeneous formats, and deeply intertwined with domain-specific processes. Simply making that data accessible through a conversational interface does not resolve the underlying problem of meaning. As Mitra put it, even when data is recorded, it is often “not explicitly available in a form that decision-makers can actually use,” leaving AI systems to operate without the context required to add real value.
He argued that dashboards and reports expose only a thin abstraction layer of enterprise reality, while the deeper operational knowledge remains hidden in processes, exceptions, and informal practices. Connecting large language models directly to enterprise data does little to change that. If information is not represented in the language and structure of the business itself, Mitra noted, “the data has no real meaning—no matter how advanced the AI model you connect it to.”
To address this, he positioned ontologies and knowledge graphs as the semantic backbone of enterprise AI. By explicitly modeling business processes, dependencies, and domain language, AI systems can begin to reason, abstract, and plan rather than merely predict the next token. This shift is particularly important for robotics and physical AI, where systems must understand space, workflows, and constraints. In such domains, Mitra emphasized, AI cannot rely on statistical patterns alone but must be grounded in an explicit model of how the world actually works.
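A minimal sketch of what such an explicit model can mean in practice: business entities and their dependencies stored as plain subject–predicate–object triples, with a traversal that answers a question no token-level prediction is guaranteed to get right. All entity and relation names below are illustrative, not taken from the talk.

```python
# Hypothetical enterprise knowledge graph as (subject, predicate, object) triples.
TRIPLES = [
    ("OrderProcess", "depends_on", "InventoryService"),
    ("InventoryService", "depends_on", "WarehouseDB"),
    ("OrderProcess", "owned_by", "FulfilmentTeam"),
]

def related(subject, predicate):
    """Return all objects linked to `subject` via `predicate`."""
    return [o for s, p, o in TRIPLES if s == subject and p == predicate]

def transitive_dependencies(subject):
    """Follow depends_on edges to every upstream dependency --
    a simple form of reasoning over explicit structure."""
    found, frontier = [], [subject]
    while frontier:
        node = frontier.pop()
        for dep in related(node, "depends_on"):
            if dep not in found:
                found.append(dep)
                frontier.append(dep)
    return found

print(transitive_dependencies("OrderProcess"))
# ['InventoryService', 'WarehouseDB']
```

Because the dependency chain is modeled explicitly, the second-order dependency on WarehouseDB is derived, not guessed from co-occurrence statistics.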
Trustworthy AI in Regulated Sectors
In the second talk, Building Trustworthy AI in Regulated Sectors, Surajeet Bhuinya shifted the focus to healthcare, finance, and other tightly regulated environments. Bhuinya highlighted a core risk of generic AI systems: inconsistent answers caused by shifting or implicit context. When the same question produces different answers at different times, he argued, the issue is not intelligence but a lack of contextual grounding. In regulated sectors, such unpredictability is unacceptable.
To address this, Bhuinya advocated the use of static and enriched context layers built on knowledge graphs. These layers strictly define which data, entities, and relationships an AI system may access in a given scenario. Rather than letting users ask arbitrary questions across the entire data landscape, the system operates within a clearly defined context and cannot stray beyond it. This, Bhuinya explained, is essential for compliance, governance, and operational trust.
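The idea of a static context layer can be sketched as an allow-list over a triple store: queries touching entities or relations outside the declared context are rejected outright. The scenario, entity, and relation names below are hypothetical and not drawn from Bhuinya's system.

```python
# Hypothetical static context definitions: each scenario declares exactly
# which entities and relations the AI system may touch.
CONTEXTS = {
    "claims_review": {
        "entities": {"Claim", "Policy", "Claimant"},
        "relations": {"filed_under", "held_by"},
    },
}

GRAPH = [
    ("Claim", "filed_under", "Policy"),
    ("Policy", "held_by", "Claimant"),
    ("Claimant", "has_medical_record", "redacted"),  # outside the allowed relations
]

def scoped_query(context_name, subject, relation, graph):
    """Answer a query only if it stays inside the declared context."""
    ctx = CONTEXTS[context_name]
    if subject not in ctx["entities"] or relation not in ctx["relations"]:
        raise PermissionError(
            f"{subject}/{relation} is outside context '{context_name}'"
        )
    return [o for s, p, o in graph if s == subject and p == relation]

print(scoped_query("claims_review", "Claim", "filed_under", GRAPH))
# ['Policy']
```

A query for the claimant's medical record raises PermissionError instead of silently returning sensitive data, which is the behavior a governance layer needs: refusal is explicit and auditable.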
Transparency plays a central role in this approach. Instead of returning answers as opaque outputs, the system exposes how conclusions are reached by showing the underlying documents, entities, and relationships involved. Trust, Bhuinya argued, emerges when users can see why an answer is correct, not merely what the answer is. By structuring and visualizing context in this way, AI systems can also reduce cognitive overload, enabling professionals to focus on decision-making rather than information retrieval.
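One way to realize this kind of transparency, sketched here with invented facts and document names, is to return the supporting graph facts and their source documents alongside every answer, so the "why" travels with the "what".

```python
# Hypothetical fact store: each fact carries the document it came from.
FACTS = [
    # (subject, predicate, object, source_document)
    ("DrugA", "interacts_with", "DrugB", "guideline_2023.pdf"),
    ("DrugB", "contraindicated_for", "Pregnancy", "label_drugB.pdf"),
]

def answer_with_evidence(subject, predicate):
    """Return the answer together with the facts and documents behind it."""
    matches = [f for f in FACTS if f[0] == subject and f[1] == predicate]
    return {
        "answer": [f[2] for f in matches],
        "evidence": [{"fact": f[:3], "source": f[3]} for f in matches],
    }

result = answer_with_evidence("DrugA", "interacts_with")
print(result["answer"])                 # ['DrugB']
print(result["evidence"][0]["source"])  # guideline_2023.pdf
```

Surfacing the evidence list is what turns an opaque output into something a professional can verify and, where needed, challenge.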
ReviewGraph: From Text to Insight
In the closing talk, ReviewGraph – Using Knowledge Graphs to Predict Customer Satisfaction, Lifeng Han presented an applied research perspective on context-aware AI. He introduced ReviewGraph, a framework that converts customer reviews from unstructured text into relational knowledge graphs enriched with sentiment information. Rather than treating words as isolated tokens, the model captures relationships between entities—such as services, facilities, and experiences—and links them to positive or negative sentiment.
According to Han, the meaning of a review does not reside in individual words but in how concepts relate to one another within a specific context. Traditional natural language processing approaches often miss this relational layer. By explicitly modeling it, ReviewGraph produces a richer and more interpretable representation of customer feedback.
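The relational representation Han described can be illustrated, in much-simplified form, as sentiment-tagged triples aggregated into a satisfaction signal. The triples below are invented, and the real ReviewGraph extraction is model-driven rather than hand-written; this sketch only shows the shape of the representation.

```python
# Invented example: one hotel review reduced to sentiment-tagged relation
# triples of the form (entity, relation, attribute, sentiment).
review_triples = [
    ("room", "has_quality", "clean", +1),
    ("staff", "behaved", "rude", -1),
    ("breakfast", "has_quality", "varied", +1),
]

def satisfaction_score(triples):
    """Mean sentiment across extracted relations -- a crude stand-in for
    graph-based prediction of the overall review rating."""
    return sum(t[3] for t in triples) / len(triples)

def negative_entities(triples):
    """Entities driving dissatisfaction: inspectable, unlike a black box."""
    return [t[0] for t in triples if t[3] < 0]

print(round(satisfaction_score(review_triples), 2))  # 0.33
print(negative_entities(review_triples))             # ['staff']
```

Even in this toy form, the prediction can be traced back to the specific relationships that drove it, which is the "glass box" property Han emphasized.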
Using a subset of TripAdvisor data, Han demonstrated that this graph-based approach can match or even outperform large language models in prediction accuracy, while requiring significantly less computational power. The benefits extend beyond efficiency. Because predictions are grounded in explicit graph structures, results can be inspected and visualized. Instead of a black box, Han described the goal as a “glass box,” where users can see which relationships and sentiments actually drive the outcome.
From Talking AI to Understanding AI
Taken together, the three talks outlined a clear shift in how AI systems are being designed and evaluated. The emphasis is moving away from generic language fluency toward systems that understand domain-specific meaning, respect constraints, and can explain their reasoning. For robotics, industrial automation, and other mission-critical applications, this shift is particularly significant. Context—formalized through knowledge graphs and semantic models—emerges not as an enhancement, but as the foundation that allows AI to move from impressive demonstrations to reliable, operational intelligence.
