OWL (1.1): An idiosyncratic introduction

[in progress; updated 19-03-2007]

This is a tutorial introduction to the Web Ontology Language (OWL), which is described by a set of W3C recommendations, and specifically to the proposed minor revision dubbed OWL 1.1, which has been accepted as a W3C member submission with the intent that it be standardized by a future working group.

This introduction is very idiosyncratic indeed. I am trying to impart some of my general, as well as specific, understanding and feel for OWL. I am a digressor at the best of times (I'm writing this to escape thesis writing) and I like to know about design intent, history, and, frankly, dirt. Sometimes these are merely entertaining, but sometimes they help give insight into the subject matter. And sometimes...sometimes...I just don't care! Deal with it.

The OWL family of languages are all logic-based knowledge representation formalisms with strong ties to description logics (though there are other influences as well, and description logics themselves have an interesting history). I will center my presentation on a description logic perspective because I think that's the best way to really understand what's up with OWL.

I strongly dislike those computer books which start with a toy example as the basis of a tour of the features of, say, a programming language and then afterwards get down to the brass tacks of serious description. I hate that. But I just read a post by Dave Thomas claiming that there is empirical evidence that the upfront tour is a good way to go (based on the Dreyfus model; though I don't see experimental data, only phenomenological reflection). Fine. Consider this part of the grumpy, yet humorous, introduction. But in the next intro text I write, I'm going to bore your eyeballs out of your skull with upfront history, philosophy, and quirks of the inventors.

Note: Unless otherwise stated, all examples, in whichever concrete syntax, are OWL 1.1.

What is an OWL Ontology?

I so don't want to be profound here. Explicit profundity in this area leads to endless argument (trust me; I've got reliable witnesses). However, we don't need to be profound to be useful: an OWL ontology is a computational artifact. In the simple case, it is realized as a document which is often stored in a file, and thus is similar to a web page or a word processing document or perhaps a spreadsheet. Even better, it can help to think of an OWL ontology as a very restricted kind of program. Now, it is not like a program in a lot of ways, but in some key ways it is: OWL ontologies are composed of statements (which I generally call axioms). Tools working with OWL tend to be picky about both the syntax and the semantics (no fuzzy-wuzzy snuggling here!). Finally, there are some canonical actions you can do with your OWL ontologies, akin to running your program.

Oh, and writing (or reading) OWL ontologies is a seriously nontrivial task. Anyone who says otherwise is at best confused.

OWL ontologies consist, roughly, of three kinds of statement. We can make statements describing classes of entities:

Turtle
:C rdfs:subClassOf :D.
Functional
SubClassOf(:C :D)
XML
<SubClassOf>
    <OWLClass owl11xml:URI="#C"/>
    <OWLClass owl11xml:URI="#D"/>
</SubClassOf>

We can also make statements describing properties of entities:

Turtle
:P rdfs:subPropertyOf :Q.
:P rdf:type owl:TransitiveProperty.
Functional
SubObjectPropertyOf(:P :Q)
TransitiveObjectProperty(:P)
XML
<SubObjectPropertyOf>
    <ObjectProperty owl11xml:URI="#P"/>
    <ObjectProperty owl11xml:URI="#Q"/>
</SubObjectPropertyOf>
<TransitiveObjectProperty>
    <ObjectProperty owl11xml:URI="#P"/>
</TransitiveObjectProperty>

And finally, we can describe the entities themselves, and their relations/properties:

Turtle
:bob rdf:type :C.
:bob :P :mary.
Functional
ClassAssertion(:bob :C)
ObjectPropertyAssertion(:P :bob :mary)
XML
<ClassAssertion>
    <Individual owl11xml:URI="#bob"/>
    <OWLClass owl11xml:URI="#C"/>
</ClassAssertion>
<ObjectPropertyAssertion>
    <ObjectProperty owl11xml:URI="#P"/>
    <Individual owl11xml:URI="#bob"/>
    <Individual owl11xml:URI="#mary"/>
</ObjectPropertyAssertion>

One thing to get straight right out: OWL has a lot of alternative syntaxes, not to mention graphical notations and GUI tools. The best notation for you depends on you, on your tools, on your needs, and on the task at hand (for example, I was very surprised by the results of a little paper we did about natural language paraphrases of OWL; the syntax I prefer turned out not to be the best for certain tasks). In this tutorial, I've tried to make the main text abstract away from the details of any particular concrete syntax, and to have all the examples available in a variety of syntaxes. Unfortunately, this can be a little impractical for people who are trying to master a particular concrete syntax. Oh well. Sucks to be you.

Most document formats proposed for OWL are largely unstructured. Unlike programs (or HTML), there is no inherent starting point, no designated place to talk about classes, no place to talk about properties, and no place to talk about individuals. Everything can be scattered all over the place. In the RDF-based syntaxes, even the parts of a particular statement can be scattered all over. (Screaming is appropriate.) In this tutorial, we'll presume that the parts of each individual statement are grouped together, as much as we can. However, the statements themselves can come willy-nilly.
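To make the scattering point concrete, here is a hypothetical Turtle fragment (the names :Person, :Dog, :Animal and so on are mine, purely for illustration) in which the pieces of a single subclass statement are strewn about, held together only by the blank node label _:r1:

Turtle
:Person rdfs:subClassOf _:r1 .
:Dog rdfs:subClassOf :Animal .
_:r1 a owl:Restriction .
_:r1 owl:onProperty :ingests .
_:r1 owl:someValuesFrom :Food .

The middle triple has nothing to do with the restriction; a parser has to gather up the _:r1 triples from wherever they happen to occur before it can reconstruct the one statement they jointly encode.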

I'm going to call statements "axioms" until further notice, and for quite a while we'll be working in the "axiom style" that OWL 1.1 encourages (as opposed to the "frame style" that OWL, the original, allowed and somewhat encouraged).
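Roughly, and from memory (so treat this as a sketch rather than a specification quote): in the frame style of the original OWL abstract syntax, everything about a class is gathered into one frame, whereas the axiom style breaks it into independent statements.

Frame style (original OWL abstract syntax, sketch)
Class(:C partial :D :E)

Axiom style (OWL 1.1 functional syntax)
SubClassOf(:C :D)
SubClassOf(:C :E)

Both say that C is a subclass of both D and E; the axiom style just doesn't insist that the two facts live in the same place.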

Inference, or The Cool of OWL

I know that there are those who disagree (I work(ed) with some), but I just don't see the point of using OWL if you aren't going for some infermojo, at least some of the time. And really, the infermojo that makes the most sense is the traditional classification and realization, plus the various sorts of consistency checking. It's nice when the inferences are plentiful and not so very obvious.
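To pin those terms down a bit: classification is computing the subsumption hierarchy of all the named classes, realization is computing, for each individual, the most specific named classes it belongs to, and consistency checking is making sure that the ontology (or some class in it) isn't contradictory. For instance, given the two axioms below (:bag42 is a throwaway name of mine), realization would file :bag42 under :Food as well as under :Cheetos:

Functional
SubClassOf(:Cheetos :Food)
ClassAssertion(:bag42 :Cheetos)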

The easiest way to understand inference is in terms of question (or "query") answering. If you ask someone "Are Cheetos a kind of Food?", what do you expect as a response? Of course, it depends on what they know. If they explicitly believe that Cheetos are a kind of Food, then you expect them to answer "Yes, they are!" rather quickly. If all they know is that people ingest Cheetos (and that people ingest Food too), they might make the leap that Cheetos are food, but they should admit that Cheetos could be a kind of medicine, and thus aren't necessarily a kind of Food. If they know that Cheetos are ingested and digested by people, and they know that being the sort of thing that is ingested and digested by people is enough to make you a food, then they should say "yes" as well, though they may want to think about it for a while.

We can model this as a series of ontologies and queries about subsumption (i.e., subclass) relations.

Note: I hope to have these hooked up to an online reasoner at some point.

Turtle
:Cheetos rdfs:subClassOf :Food.
#Explicitly believed; trivial query
Functional
SubClassOf(:Cheetos :Food)
XML
<SubClassOf>
    <Comment>Explicitly believed; trivial query</Comment>
    <OWLClass owl11xml:URI="#Cheetos"/>
    <OWLClass owl11xml:URI="#Food"/>
</SubClassOf>

Turtle
:People rdfs:subClassOf [a owl:Restriction;
    owl:onProperty :ingests;
    owl:someValuesFrom :Cheetos].
#Explicitly believed: people ingest Cheetos; the subsumption query is no longer trivial
Functional
...
XML
...
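The functional syntax and XML versions of that ontology are still to come. But here's a rough sketch of where this is headed (I'm making up the :ingestedBy and :digestedBy property names purely for illustration): the second ontology pairs the axiom above with the "people ingest Food too" bit, and a third ontology, for the last scenario, makes being ingested and digested by people sufficient for being a Food.

Functional
SubClassOf(:People ObjectSomeValuesFrom(:ingests :Cheetos))
SubClassOf(:People ObjectSomeValuesFrom(:ingests :Food))

Functional
SubClassOf(:Cheetos ObjectIntersectionOf(
    ObjectSomeValuesFrom(:ingestedBy :People)
    ObjectSomeValuesFrom(:digestedBy :People)))
SubClassOf(ObjectIntersectionOf(
    ObjectSomeValuesFrom(:ingestedBy :People)
    ObjectSomeValuesFrom(:digestedBy :People)) :Food)

From the second ontology, the subsumption of Cheetos by Food does not follow (Cheetos might be a medicine); from the third, it does, though a reasoner has to chain through the intersection to get there.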

Describing Classes

A class expression is a description of a set of individuals. A class axiom relates class expressions, for example by saying that one is a subclass of another, or that two describe exactly the same individuals (i.e., are equivalent).
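For instance (with :CheetoEater being a name I'm making up on the spot), here is a class axiom relating a named class to a complex class expression, giving us a so-called defined class:

Functional
EquivalentClasses(:CheetoEater ObjectSomeValuesFrom(:ingests :Cheetos))

This says that the CheetoEaters are exactly the things that ingest some Cheetos, not merely that every CheetoEater ingests some Cheetos.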

"Running" the Ontology

Describing Instances of Classes

Describing Properties

Documentation and comments

Advanced topics

Metamodeling

Imports

Queries

Back Matter

Author: Bijan Parsia

To do: