KGs Insights

Ontology Reasoning in Knowledge Graphs

A hands-on Python guide to understanding the principles of generating new knowledge through logical inference processes

Giuseppe Futia
Towards Data Science
9 min read · Nov 15, 2024


Figure 1 — An end-to-end process illustrating how starting statements lead to inferred ones through ontology reasoning

Introduction

Reasoning capabilities are a widely discussed topic in the context of AI systems. These capabilities are often associated with Large Language Models (LLMs), which are particularly effective at applying patterns learned from vast amounts of data.

The knowledge captured during this learning process enables LLMs to perform various language tasks, such as question answering and text summarization, showing skills that resemble human reasoning.

It’s not helpful to just say “LLMs can’t reason”, since clearly they do some things which humans would use reasoning for. — Jeremy Howard, Co-Founder of Fast.AI and Digital Fellow at Stanford

Despite their ability to identify and match patterns within data, LLMs show limitations in tasks that require structured and formal reasoning, especially in fields that demand rigorous logical processes.

These limitations highlight the distinction between pattern recognition and proper logical reasoning, a difference humans do not always discern.
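To make this distinction concrete, here is a minimal sketch of the kind of ontology reasoning this guide explores, assuming the rdflib and owlrl Python libraries and an illustrative ex: namespace: a single subclass axiom and one asserted fact are enough for a reasoner to materialize a statement that was never explicitly written.

from rdflib import Graph, URIRef, RDF
from owlrl import DeductiveClosure, RDFS_Semantics

# Starting statements: one ontology axiom and one asserted fact
data = """
@prefix ex: <http://example.org/> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

ex:Dog rdfs:subClassOf ex:Animal .  # ontology axiom
ex:Rex a ex:Dog .                   # asserted fact
"""

g = Graph()
g.parse(data=data, format="turtle")
print("Statements before reasoning:", len(g))

# Apply RDFS entailment rules to materialize the inferred statements
DeductiveClosure(RDFS_Semantics).expand(g)
print("Statements after reasoning:", len(g))

# The reasoner has inferred a statement that was never asserted:
# ex:Rex is an ex:Animal, because ex:Dog is a subclass of ex:Animal
print((URIRef("http://example.org/Rex"), RDF.type,
       URIRef("http://example.org/Animal")) in g)  # True

Running this snippet with owlrl installed (pip install owlrl, which also pulls in rdflib) expands the graph with the RDFS entailments, so the final check prints True even though the triple stating that Rex is an Animal was never written by hand. This deterministic, rule-driven inference is the kind of structured reasoning the rest of the article walks through.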
