Does my AI agent need an ontology?

Structuring domain knowledge in ontologies is useful whenever an AI agent needs to reason, manage complex knowledge, or enable collaboration – without hallucinating or making mistakes. Ontologies are particularly beneficial for AI agents that need to be optimised for a vertical, meaning a specific industry or niche market.

What is an ontology?

An ontology is a formal framework of rules, relationships and agreed vocabulary that makes domain knowledge machine-readable for AI services. Ontologies also help to avoid misunderstandings among multidisciplinary or multinational teams, making collaboration easier.
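To make the idea concrete, here is a minimal sketch of what "machine-readable rules and vocabulary" can look like in code. The class names and the property rule are hypothetical, and a real ontology would use a standard language such as OWL rather than Python dictionaries; this only illustrates the principle of a shared hierarchy plus constraints an agent can check against.

```python
# Hypothetical medical-device vocabulary: a class hierarchy plus a
# property constraint, the two basic ingredients of an ontology.

# Class hierarchy: child class -> parent class
subclass_of = {
    "Pacemaker": "ImplantableDevice",
    "ImplantableDevice": "MedicalDevice",
}

# Property constraints: property -> (allowed subject class, allowed object class)
property_rules = {
    "implanted_in": ("ImplantableDevice", "Patient"),
}

def is_a(cls, ancestor):
    """True if cls equals ancestor or inherits from it via subclass_of."""
    while cls is not None:
        if cls == ancestor:
            return True
        cls = subclass_of.get(cls)
    return False

def check_fact(subject_class, prop, object_class):
    """Reject facts that violate the ontology's domain/range rules."""
    domain, range_ = property_rules[prop]
    return is_a(subject_class, domain) and is_a(object_class, range_)

print(check_fact("Pacemaker", "implanted_in", "Patient"))  # True
print(check_fact("Patient", "implanted_in", "Pacemaker"))  # False
```

Because every team and system reads the same hierarchy and rules, a claim like "a patient is implanted in a pacemaker" can be rejected automatically rather than debated.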

Various sectors have developed their own ontology standards to enable interoperability, so that different systems and teams can integrate data. Developing such standards tends to require consensus-building among the stakeholders who must share the vocabulary.

Why do ontologies suit vertical AI agents?

Vertical AI agents need to be trusted and useful within their domains. To inspire trust, a specialised AI model cannot hallucinate or use unverified data. To be useful, it needs to demonstrate strong contextual understanding of the domain’s vocabulary, rules and standards. This delivers the high domain intelligence that makes an AI agent valuable.

Vertical AI agents grounded by ontologies can be less flexible than those using horizontal or general-purpose large language models (LLMs) alone. But they benefit from the rigour, interoperability and transparent reasoning that ontologies provide – necessary for any authoritative single source of truth.

Do LLMs use ontologies?

LLMs do not use formal ontologies in their core architecture. Instead they learn patterns from data that implicitly capture relationships. This works well for tasks like natural language processing or image recognition; less so in regulated domains where hallucinations and mistakes carry heavy penalties.

LLMs have shown potential for interacting with ontologies using Retrieval-Augmented Generation (RAG). A 2024 study in the Journal of Biomedical Semantics evaluated an AI method that retrieved and processed structured data, improving the accuracy of the model's responses.
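The general pattern can be sketched as follows. This is a hedged illustration of ontology-backed RAG, not the cited study's method: the fact store, the keyword retrieval and the prompt template are all stand-in assumptions, where a production system would use a real triple store and semantic search.

```python
# Illustrative ontology-backed RAG: retrieve structured facts relevant to a
# question and prepend them to the model prompt, so answers are grounded in
# verified statements rather than the model's training data alone.

facts = [  # hypothetical (subject, predicate, object) triples
    ("Aspirin", "treats", "Headache"),
    ("Aspirin", "interacts_with", "Warfarin"),
    ("Warfarin", "is_a", "Anticoagulant"),
]

def retrieve(question):
    """Naive keyword match against subjects and objects in the fact store."""
    words = {w.strip("?.,").lower() for w in question.split()}
    return [f for f in facts
            if f[0].lower() in words or f[2].lower() in words]

def build_prompt(question):
    """Assemble a prompt that instructs the model to use only retrieved facts."""
    context = "\n".join(f"{s} {p} {o}" for s, p, o in retrieve(question))
    return f"Use only these verified facts:\n{context}\n\nQuestion: {question}"

print(build_prompt("Does aspirin interact with warfarin?"))
```

The key design choice is that the structured facts, not the LLM's parametric memory, act as the source of truth for the final answer.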

When is an ontology not needed?

Simple agents that use direct sensor input, such as a robot following a line, don’t need the high-level conceptual structures that ontologies provide. Nor do agents performing a narrowly scoped task such as controlling a thermostat, which can use hard-coded rules or statistical models instead. Deep learning models – such as LLMs – don’t rely on explicit ontologies either.
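The thermostat case makes the contrast clear: a hard-coded rule with a hysteresis band is the entire control logic, with no conceptual model of the domain required. The target and band values below are arbitrary illustrative choices.

```python
# A narrowly scoped agent needs only a hard-coded rule, no ontology:
# switch heating on below the band, off above it, hold state inside it.

def thermostat(temp_c, heating_on, target=20.0, band=0.5):
    """Return whether the heating should be on, with simple hysteresis."""
    if temp_c < target - band:
        return True
    if temp_c > target + band:
        return False
    return heating_on  # inside the band: keep the current state

print(thermostat(18.0, False))  # True  (too cold, switch on)
print(thermostat(22.0, True))   # False (warm enough, switch off)
```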

However, organisations that need high levels of accuracy to extract value from their complex knowledge often do benefit from ontologies, especially when dealing with large quantities of data.

What are use cases for ontologies?

AI agents in domains that use knowledge graphs, the semantic web or expert systems rely on ontologies to organise and query complex information consistently. Ontologies are also useful for integrating new knowledge in a structured way within constraints. This is why they are widely used in bioinformatics, advanced manufacturing and insurance, for instance – but also in museums and archives.
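What "querying consistently" means in practice can be sketched with a toy knowledge graph: asking for instances of a class also returns instances of its subclasses, because the ontology encodes the hierarchy. All names below are hypothetical, and a real system would use a graph database and a query language such as SPARQL.

```python
# Toy knowledge graph: querying "MedicalDevice" finds pacemakers and
# insulin pumps too, because the subclass hierarchy is machine-readable.

subclass_of = {"Pacemaker": "MedicalDevice", "InsulinPump": "MedicalDevice"}
instances = {"unit-001": "Pacemaker", "unit-002": "InsulinPump", "doc-17": "Invoice"}

def ancestors(cls):
    """All classes that cls inherits from, walking up subclass_of."""
    out = set()
    while cls in subclass_of:
        cls = subclass_of[cls]
        out.add(cls)
    return out

def query_instances(target_class):
    """All instances whose class is, or inherits from, target_class."""
    return sorted(i for i, c in instances.items()
                  if c == target_class or target_class in ancestors(c))

print(query_instances("MedicalDevice"))  # ['unit-001', 'unit-002']
```

Without the hierarchy, each application would hard-code its own notion of what counts as a medical device, and their answers would drift apart.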

How do I build an ontology for my AI agent?

Developing and managing an ontology can be time-consuming and costly, even with dedicated ontology software tools. Recently, however, more accessible ontology-as-a-service (OaaS) platforms have emerged to help non-technical domain experts get a working ontology over the line.