Ontology Extraction for Large Ontologies via Modularity and Forgetting

Chen, J., Alghamdi, G., Schmidt, R. A., Walther, D. and Gao, Y. (2019)

In Kejriwal, M., Szekely, P. A. and Troncy, R. (eds), Proceedings of the 10th International Conference on Knowledge Capture (K-CAP'19). ACM, 45-52.

We are interested in the computation of ontology extracts based on forgetting from large ontologies in real-world scenarios. Such scenarios require nearly all of the terms in the ontology to be forgotten, which poses a significant challenge to forgetting tools. In this paper we show that modularization and forgetting can be combined beneficially to compute ontology extracts. While a module is a subset of the axioms of a given ontology, the solution of forgetting (also known as a uniform interpolant) is a compact representation of the ontology limited to a subset of the signature. The approach introduced in this paper uses an iterative workflow of four stages: (i) extension of the given signature and, if needed, partitioning, (ii) modularization, (iii) forgetting, and (iv) evaluation by a domain expert. For modularization we use three kinds of modules: locality-based, semantic and minimal subsumption modules. For forgetting, three tools are used: Nui, Lethe and Fame. An evaluation on the SNOMED CT and NCIt ontologies for standard concept name lists showed that precomputing ontology modules reduces the number of terms that need to be forgotten. An advantage of the presented approach is the high precision of the computed ontology extracts.
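The four-stage workflow lends itself to a simple driver loop. The Python sketch below is only an illustration of the iteration described in the abstract, under stated assumptions: the stage functions (extend_signature, extract_module, forget, approved_by_expert, refine_signature) are hypothetical placeholders supplied by the caller, not the actual interfaces of Nui, Lethe or Fame.

    from typing import Callable, Set

    def compute_extract(
        ontology: Set[str],              # axioms of the input ontology
        signature: Set[str],             # concept names to keep in the extract
        extend_signature: Callable,      # stage (i): extension and partitioning
        extract_module: Callable,        # stage (ii): module extraction
        forget: Callable,                # stage (iii): uniform interpolation
        approved_by_expert: Callable,    # stage (iv): expert evaluation
        refine_signature: Callable,      # adjust the target after feedback
    ) -> Set[str]:
        """Iterate the four-stage workflow until the expert accepts the extract."""
        while True:
            # (i) Extend the given signature (and partition it when it is too
            #     large for a single forgetting run).
            sig = extend_signature(ontology, signature)
            # (ii) Extract a module for the extended signature; since a module
            #      is a subset of the axioms, far fewer terms remain to forget.
            module = extract_module(ontology, sig)
            # (iii) Forget every term outside the target signature, computing
            #       a uniform interpolant of the module, not the full ontology.
            extract = forget(module, signature)
            # (iv) Let the domain expert evaluate; refine and repeat if needed.
            if approved_by_expert(extract):
                return extract
            signature = refine_signature(signature, extract)

Passing the stages in as callables reflects that the paper evaluates several interchangeable choices at each stage (three kinds of modules, three forgetting tools).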

