When does explainable AI (XAI) need an ontology?

Ontologies are helpful whenever non-technical humans need to understand an AI’s outputs or decisions. By applying clear rules and definitions to complex datasets, ontologies give an AI agent a basis for explaining its reasoning more reliably, in natural language tailored to the user’s level of expertise.

What is an ontology?

An ontology is a formal framework of rules, relationships and agreed vocabulary that makes domain knowledge machine-readable for AI services. Because ontologies standardise the meanings of words and concepts, they also help to avoid misunderstandings among multidisciplinary or multinational teams, making collaboration easier.
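
To give a flavour of what “machine-readable domain knowledge” looks like in practice, the minimal sketch below uses Python’s rdflib library to declare two classes and one relationship; the lending vocabulary (ex:LoanApplication, ex:submittedBy and so on) is invented purely for illustration.

```python
from rdflib import Graph, Namespace, Literal, RDF, RDFS
from rdflib.namespace import OWL

EX = Namespace("http://example.org/lending#")  # hypothetical vocabulary for this example

g = Graph()
g.bind("ex", EX)

# Two agreed concepts in the domain
g.add((EX.LoanApplication, RDF.type, OWL.Class))
g.add((EX.Applicant, RDF.type, OWL.Class))

# An agreed relationship between them: a loan application is submitted by an applicant
g.add((EX.submittedBy, RDF.type, OWL.ObjectProperty))
g.add((EX.submittedBy, RDFS.domain, EX.LoanApplication))
g.add((EX.submittedBy, RDFS.range, EX.Applicant))

# Human-readable labels give multidisciplinary teams a shared vocabulary
g.add((EX.LoanApplication, RDFS.label, Literal("Loan application")))
g.add((EX.submittedBy, RDFS.label, Literal("submitted by")))

print(g.serialize(format="turtle"))
```

Because each term carries a single agreed meaning, any service that reads this graph interprets “loan application” and “submitted by” in exactly the same way.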

Ontologies can be used to constrain what an AI agent decides or does, for example by setting boundaries that prevent it from using unverified data. In domain-specific intelligent systems, ontologies can also support more reliable and accurate explanations of an AI’s output after the fact.
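
As a hedged sketch of that “boundaries” idea, the snippet below filters an agent’s candidate data sources down to those whose provenance has been marked as verified; the DataSource class and the verified flag are assumptions for the example, standing in for whatever provenance terms a real ontology would define.

```python
from dataclasses import dataclass

@dataclass
class DataSource:
    name: str
    verified: bool  # stands in for an ontology-level "verified provenance" assertion

# Hypothetical ontology-derived rule: the agent may only reason over verified sources
def allowed_sources(sources: list[DataSource]) -> list[DataSource]:
    return [s for s in sources if s.verified]

candidates = [
    DataSource("credit_bureau_feed", verified=True),
    DataSource("scraped_forum_posts", verified=False),
]
print([s.name for s in allowed_sources(candidates)])  # ['credit_bureau_feed']
```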

How do ontologies enhance XAI?

Ontologies can enhance an AI’s contextual understanding of domain knowledge by drawing on relevant rules and guidelines to frame its responses. In tandem with a natural language generation (NLG) system, this can help an AI agent to give an intelligible, human-readable answer to a query about why it did something, instead of just supplying a dry technical read-out of its reasoning process.
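
The sketch below shows that basic move in miniature: a dry trace of rule and field identifiers is rendered as a sentence using human-readable labels taken from a hypothetical domain ontology. A real NLG system would be far richer, but the label lookup is the part the ontology supplies; the identifiers and wording here are invented for illustration.

```python
# Hypothetical labels that a domain ontology might attach to internal identifiers
LABELS = {
    "rule:R12": "applications without proof of income are flagged for review",
    "field:income_doc": "proof of income",
}

# A dry technical read-out: which rule fired, and which field triggered it
trace = [("rule:R12", "field:income_doc")]

def to_sentences(trace):
    return " ".join(
        f"This happened because {LABELS[rule]}, and no {LABELS[field]} was supplied."
        for rule, field in trace
    )

print(to_sentences(trace))
```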

A 2023 study published in the Semantic Web Journal explored how an Explanation Ontology (EO) could support user-centered explanations for black-box machine learning models and their results. By making explanations more accessible and reliable, ontologies can raise user trust in autonomous intelligent systems, driving adoption.

What’s slowing Explanation Ontology (EO) adoption?

Explanation ontologies have the potential to bring greater transparency, consistency and trustworthiness to AI systems. But they also take time and money to design, implement and maintain, and integrating them with black-box models like neural networks is proving hard. As a consequence, XAI has traditionally come second to model performance in terms of investment priorities.

It’s also true that legacy ontology platforms have been slow to make the grunt-work of developing an ontology less onerous and technically demanding. New ontology-as-a-service (OaaS) solutions are trying to address this, partly by digitising consensus-building approaches, such as the Delphi Method, so that domain experts can reach agreement on definitions.

Where can ontologies help real-world XAI?

Improved regulatory compliance, bias detection, interoperability and user adoption are all cited as potential real-world benefits of ontologies in the field of XAI, as part of a wider system. While some low-risk tasks clearly don’t warrant an ontology – such as explaining why Netflix recommended a movie to you – there are high-risk domains that do, especially if they handle large quantities of complex data from different systems.

In financial services, for example, ontologies could enable an AI agent to explain in detail why a loan application was rejected, using language appropriate to an expert, a regulator, or the applicant themselves. In domains like data privacy (GDPR), ontologies can encode legal and ethical constraints. This might enable an AI to justify why it rejected a contract, for example, although human verification would still be needed in most circumstances that involve interpretation of the law.
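
A minimal sketch of that audience tailoring might look like the following; the concept name, policy reference and threshold are invented, and in a real system both the vocabulary and the per-audience wording would come from the ontology rather than a hard-coded dictionary.

```python
# Hypothetical renderings of one ontology concept for two different audiences
CONCEPTS = {
    "dti_ratio": {
        "regulator": "debt-to-income ratio (internal policy LEND-7, maximum 0.40)",
        "applicant": "the share of your income that already goes to debt repayments",
    }
}

def explain_rejection(concept: str, value: float, audience: str) -> str:
    label = CONCEPTS[concept][audience]
    if audience == "regulator":
        return f"Application declined: {label} measured at {value:.2f}."
    return f"We couldn't approve the loan because {label} was higher than we can accept."

print(explain_rejection("dti_ratio", 0.48, "applicant"))
print(explain_rejection("dti_ratio", 0.48, "regulator"))
```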