Prof Allan Ramsay

Professor of Formal Linguistics 
School of Computer Science
University of Manchester

Manchester M13 9PL, UK

e-mail: Allan.Ramsay at manchester.ac.uk
phone: +44 (0)161 206 3108

Pen Picture
What is Formal Linguistics?
Research
PhD supervision
Teaching
Publications

Pen Picture

I came to Manchester to take a post as Professor of Formal Linguistics in the Centre for Computational Linguistics at UMIST in 1995. Since then there have been various institutional changes, so I am now in the School of Computer Science at the University of Manchester, though I'm still Professor of Formal Linguistics and I still teach and research in the same areas. My first degree, from the University of Sussex, was in Logic, and I have an MSc in Logic from London University and a PhD in Artificial Intelligence from Sussex. I came to UMIST from University College Dublin, where I was Professor of Artificial Intelligence.

What is Formal Linguistics?

I hold the post of Professor of Formal Linguistics. What this means is that my research is directed at obtaining precise formal descriptions of the way that natural language works. People have studied language for thousands of years. The ability to use language is one of the major distinctions between humans and other intelligent animals, and it is hardly surprising that people have long wanted to know how we do it. Until quite recently, however, most work in this area was rather imprecise. There simply was no way of making theories of language precise, and there was no way of subjecting linguistic theories to detailed extensive tests.

It is clear that language can be described at a variety of levels. Languages have structural properties which characterise when an utterance is well-formed and when it is not. Some of these are about the structure of words -- about `morphology'. Questions such as why `reconstructions' and `unreconstructed', which are both derived from the underlying word `construct', are legitimate English words, but `unreconstructions' is not. Questions about the relationships between the Spanish words `blanco', `blanca', `blancos' and `blancas'. Questions about why adding `-ing' to `infer' leads to `inferring', with a doubled `r', whereas `enter' produces `entering', with a single `r'. Other structural questions deal with the way that words are put together to form sentences - what is the relationship between `I gave Mary a book' and `I gave a book to Mary', or between `I believe Betty is a fool' and `Betty, I believe, is a fool'? Why does `I believe that she loves me with all my heart' sound much stranger than `I believe with all my heart that she loves me', despite the fact that it actually has a much simpler grammatical structure? Why is `the dessert I thought was revolting' acceptable as part of a complex sentence such as `I enjoyed the main course but the dessert I thought was revolting', but not as a free-standing sentence?
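By way of illustration, here is a minimal sketch in Prolog of the doubling rule behind `inferring' versus `entering'. It is only a sketch: the final_stress/1 facts are illustrative stand-ins for a proper account of stress, and a real analyser would need much more (e-deletion for `making', the lexicon itself, and so on).

    :- use_module(library(lists)).

    vowel(V) :- member(V, [a, e, i, o, u]).

    % Illustrative assumption: `infer' has final stress, `enter' does not.
    final_stress(infer).

    % A stem ending in vowel+consonant doubles the consonant before
    % `-ing' when its final syllable is stressed.
    present_participle(Stem, Participle) :-
        atom_chars(Stem, Chars),
        append(_, [V, C], Chars),
        vowel(V), \+ vowel(C),
        final_stress(Stem), !,
        atom_chars(Suffix, [C, i, n, g]),
        atom_concat(Stem, Suffix, Participle).
    present_participle(Stem, Participle) :-
        atom_concat(Stem, ing, Participle).

Asking ?- present_participle(infer, P) yields P = inferring, while present_participle(enter, P) yields P = entering. Making the rule explicit enough to run is exactly what separates a formal description from an informal one.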

Languages also have informational properties. Language would not matter if it couldn't be used for conveying ideas. If all we did when we processed language was assign structural analyses to words and sentences, then it would be no more than a parlour game - less interesting than chess or draughts. But that's not all we do. We recognise the decisions that lie behind the production of a particular utterance, and we know what these decisions signify.

As a professor of formal linguistics, I am concerned to provide precise formal descriptions of as many aspects of language as I can. My work is, however, always informed by the realisation that structural (syntactic/morphological) choices encode information - that `John loves Mary' reports a situation where John has a particular set of attitudes towards Mary and says almost nothing about Mary's attitude to John, whereas `Mary loves John' tells us about Mary but not about John, and that `Mary is loved by John' reports the same situation as `John loves Mary' but from a different point of view. My interests are the same as any linguist's; but I am concerned to state any theory I come up with using a framework which lends itself to formality and precision.
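As a crude illustration of that last point (the predicate names are invented for this sketch, and bear no relation to my actual system), all three sentences can be mapped to the same underlying relation, with the structural choice recorded separately as a matter of point of view:

    % `John loves Mary' and `Mary is loved by John' report the same
    % situation; the passive merely shifts the point of view.
    meaning([john, loves, mary],         viewpoint(john), loves(john, mary)).
    meaning([mary, loves, john],         viewpoint(mary), loves(mary, john)).
    meaning([mary, is, loved, by, john], viewpoint(mary), loves(john, mary)).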

What's this got to do with Computation?

Why am I so concerned to use a precise formal language for my linguistic descriptions? I believe that we cannot fully understand how language works unless we consider the processes that underlie it. People have studied language for thousands of years, but until recently they did not have a framework for describing processes, and for seeing how process-oriented descriptions might work out. Computer science provides us with just such a framework. All other things being equal, a linguistic theory which supports a description of how to get from a surface form to a mental model is, at least for me, better than one that doesn't. Of course any such theory should cover as much of the data as possible - a computationally tractable theory that covers only a tiny fragment of what is going on in some language is no better than a theory which covers a much larger part of the language with less rigour. But if you can describe a good part of a language and realise your theory as a computer program then it seems to me that you have done something worthwhile.

As I said above, my research falls largely within the area of seeing what happens when you try to formalise linguistic theories - does it turn out to be impossible, can you do it but only in ways that are computationally intractable, can you do it up to a certain point, ...? The aim of this kind of research is partly to see what will be involved in writing programs that embody these theories, since the way a theory is formalised has consequences for its computational tractability. I believe, however, that you cannot do this kind of work very effectively as an abstract study. To get a real understanding of what is involved in formalising some piece of linguistic theory, and of what the abstract results about ease or difficulty of computing with your formalisation mean, you have to actually do it.

I therefore actively work on developing programs which do structural (syntactic and morphological) analysis, use the results of this to compute semantic representations, and then reason about why someone might have said something with a specific semantic content in a given situation. I think you have to do linguistics before you can stand back and talk about how it should be done; and I think that you probably need to have something to say about all levels of linguistic description. Otherwise you will find yourself shovelling all the things you can't do into one of the levels that you don't work on. Pragmatics, for instance, has long been a receptacle for everything that semanticists find too hard to deal with. But if you decide to work on semantics and pragmatics at the same time, you have to actually face up to these problems, because you can't pass them on to someone else - the someone else you'd like to pass them to is yourself!
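The overall shape of such a program can be sketched in a few lines of Prolog. Everything below is a toy of my own devising - one verb, two names, a single hearer `belief' - but it shows how structural analysis, semantic construction and reasoning about the context form a single pipeline:

    % Toy grammar: structural analysis producing a semantic term.
    np(john) --> [john].
    np(mary) --> [mary].
    s(loves(S, O)) --> np(S), [loves], np(O).

    % The hearer's prior beliefs (pure invention, for illustration).
    believed(loves(john, mary)).

    % Pragmatics as a stub: why might someone have said this here?
    understand(Words, reminds_hearer_of(M)) :-
        phrase(s(M), Words), believed(M), !.
    understand(Words, tells_hearer(M)) :-
        phrase(s(M), Words).

Here ?- understand([mary, loves, john], R) gives R = tells_hearer(loves(mary, john)), because that is not something the hearer already believed.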

My recent work concentrates on topics in semantics, and in particular on the notion that in order to understand the significance of an utterance you have to think about its relationship to the rest of what you know - to your general knowledge of the world, to your understanding of the words in the utterance, and to the (linguistic and extra-linguistic) context in which it was uttered. In order to work in this area, you have to have access to an inference engine, i.e. to a program which can perform reasoning. My publications include several papers on theorem proving for a particularly expressive logic (the more expressive a logic, the harder it is to formalise the rules that you need for working with it; but since natural language is itself extremely expressive, you need a very expressive logic if you want to paraphrase it, and hence you have to take up the challenge of reasoning in such languages). And since the meaning of an utterance is encoded at least partly in its grammatical structure, you also need to have programs which can carry out grammatical analyses. I have a parser and a grammatical framework which is particularly well-suited to languages with very free phrase order - languages such as Spanish and Persian.
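The simplest way to see what an inference engine looks like in Prolog is the standard `vanilla' meta-interpreter over Horn clauses, shown here with a two-fact knowledge base invented for the occasion. The theorem provers described in the papers handle a far more expressive logic than this, but the shape is the same:

    % A tiny knowledge base: kb(Head, Body).
    kb(mortal(X), [man(X)]).
    kb(man(socrates), []).

    % prove(Goals): every goal in the list follows from the knowledge base.
    prove([]).
    prove([Goal | Goals]) :-
        kb(Goal, Body),
        prove(Body),
        prove(Goals).

?- prove([mortal(socrates)]) duly succeeds.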

A unitary program for everything

I have a single program which does varying amounts of linguistic processing for a range of languages. At present, it does a pretty good job of going from an English input sentence to reasoning about the circumstances under which someone might utter that sentence; and it does morphological and syntactic processing to varying degrees for German, Spanish, Malay, Greek, French, Arabic and Persian. Most of my research students work on aspects of this program. This seems to me to be a good idea: they get to do interesting work without having to implement all the basics (parsers, feature description languages, etc.) from scratch, and I get people to do work that I actually want done. Contact me if you'd like an evaluation copy: the user manual, which gets updated from time to time, is available here.

Computer Vision

I also have a long-standing interest in computer vision. This is not my major research area, but I have at various times taught general AI courses for which I needed to cover a number of topics that lie outside my main interests in language and reasoning. Since I can't bear to teach things that I haven't implemented, I have developed a suite of tools for extracting information from images (the reason why I hate teaching things I haven't implemented is that I know from experience that the techniques described in textbooks always have limitations that the books don't discuss, and the only way to find out what they are is by experimenting with them). Some of these tools are fairly standard - things for doing edge detection and region growing, template matching algorithms, matching up elements of successive frames from a sequence of images, and so on - but there are one or two things I'm quite pleased with. In particular, there is an algorithm for obtaining viewpoint-independent representations of images which makes it possible to classify objects seen in silhouette no matter how they have been rotated. I have supervised a number of students in this area, and am always very happy to supervise final-year undergraduate and MSc projects in computer vision.

PhD supervision

I have supervised PhD students in a variety of areas of artificial intelligence, though in recent years I have generally concentrated on language-oriented theses. The following list should give some idea of the topics that I am happy to supervise.

  1. Tariq Ahmad, Classification of Tweets using Multiple Thresholds with Self-correction and Weighted Conditional Probabilities, University of Manchester, 2019

  2. Dena A Alabbad, An investigation into approaches to text-to-speech synthesis for Modern Standard Arabic, University of Manchester, 2019

  3. Fahad Albogamy, Structural analysis of Arabic tweets, University of Manchester, 2018

  4. Amal Mohammad Alshahrani, An investigation into the cross-linguistic robustness of textual equivalence techniques, University of Manchester, 2018

  5. Ali Almiman, Natural language inference over dependency trees, University of Manchester, 2017

  6. Sardar Jaf, The application of constraint rules to data-driven parsing, University of Manchester, 2015

  7. Majed Alsabaan, Pronunciation support for Arabic learners, University of Manchester, 2015

  8. Iman Alsharhan, Exploiting phonological constraints and automatic identification of speaker classes for Arabic speech recognition, University of Manchester, 2014

  9. Aparna Garg, Video event analysis using activity primitives, University of Manchester, 2013

  10. Maytham Alabbas, Textual Entailment for Modern Standard Arabic, University of Manchester, 2013

  11. Majda Al-Liabi, Computational Support for Learners of Arabic, University of Manchester, 2012

  12. Yasser Muhammad Naguib Sabtan, Lexical Selection for Machine Translation, University of Manchester, 2011

  13. Chen-Li Kuo, Interpreting intonation in English-Chinese spoken language translation, 2008

  14. Carlo Lusuardi, The Use of NLP techniques in CALL for the diagnosis of specific errors made by learners of French, University of Manchester, 2008

  15. Katherine Hargreaves, A computational treatment of Somali morphosyntax, University of Manchester, 2007

  16. Wafa Idris, Toward the simulation of multi-limbed robots, University of Manchester, 2006

  17. Hanady Mansour Ahmed, Natural Language Processing Engine for Arabic Text-to-speech, University of Manchester, 2005

  18. Ting Law, An investigation into statistical approaches for the resolution of ambiguities in English, University of Manchester, 2005

  19. John Kerins, Modelling temporal discourses: towards the integration of semantic modelling techniques into computer-assisted language learning software, UMIST, 2004

  20. Vahid Mirzaeian, Content-based support for Persian learners of English, UMIST, 2003

  21. Debora Field, A single action, an infinity of effects: an investigation into reasoning-centred planning for the purposes of planning dialogue without speech acts, UMIST, 2003

  22. Marie-Josee Hamel, Re-using natural language processing tools in computer assisted language learning, UMIST, 2002

  23. Andromache Areta, Robust parsing of English spoken language, UMIST, 2002

  24. Mathias Schulze, Textana -- Grammar and Grammar Checking in Parser-Based CALL, UMIST, 2001

  25. Irina Reyero-Sans, Semantics of Spanish spatial prepositions, UMIST, 1997

  26. Robert Gaizauskas, Deriving Answers to Logical Queries by Answer Composition, Sussex, 1992

  27. John Kelly, Artificial Intelligence: a critical study, UCD, 1990

  28. Sharon Wood, Planning in a Rapidly Changing Environment, Sussex, 1990

  29. Chris Thornton, Concept Learning as Data Compression, Sussex, 1988

  30. Anthony Robins, Representation in Connectionist Models, Sussex, 1988

Publications

My research has been published in various books and papers. Links to copies of published articles are provided to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by the author(s) or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.

Teaching

At the moment I teach first- and second-year courses on artificial intelligence and an advanced course on semantics, though at various times I have taught courses in a wide variety of other areas, including computer vision and a general course in computational linguistics. I played a significant role in the development of the UMIST BSc in Artificial Intelligence, which has been running since October 1997.

I am the author of books on the use of logic in artificial intelligence (Formal Methods in Artificial Intelligence, Cambridge University Press, 1988) and on the semantics of English (The Logical Structure of English, Pitman, 1990), and co-author of books on using the programming language POP-11 for artificial intelligence. I have also edited the proceedings of two AI conferences: Artificial Intelligence: Methodology, Systems, Applications VII (proceedings of AIMSA-96), IOS Press, 1996, and Prospects for Artificial Intelligence (proceedings of AISB-93).

Courses currently offered

CT116 Introduction to Artificial Intelligence

This is, obviously enough, an introduction to AI. There are different ways of thinking about AI - as a branch of computer science, as a way of doing cognitive science, as a way of exploiting results from computer science, and so on. The aim of this module is to set up the particular view of AI that underpins our degree. Thus the course covers a number of philosophical and methodological issues, rather than providing a rapid introduction to AI techniques. The lecture notes (usually in a rather partial and messy form) can be found online, currently as a .pdf file; late releases of Internet Explorer and Netscape can cope with .pdf files, but otherwise you'll have to download it and use acroread to read it (or just double-click to open it under Windows).

CT220 Programming in Prolog

Prolog is an excellent programming language for some tasks and a dreadful one for others. This course covers the major topics in Prolog and illustrates them with various applications for which Prolog is well suited. The aim is to give an introduction to various AI topics (NLP, theorem proving, planning) and to Prolog at the same time. I don't know how well it works because this is the first year I've given this course (and indeed the first time I've taught a course on programming for a very long time). The teaching materials are available online.
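One standard example of why Prolog suits some tasks so well (my favourite illustration, not necessarily the one the course notes use): a single two-clause definition of list concatenation runs `forwards' to join lists and `backwards' to enumerate the ways of splitting one.

    my_append([], Ys, Ys).
    my_append([X | Xs], Ys, [X | Zs]) :-
        my_append(Xs, Ys, Zs).

    % ?- my_append([a, b], [c], Zs).    Zs = [a, b, c]
    % ?- my_append(Xs, Ys, [a, b, c]).  enumerates all four splits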

L2013 Knowledge Rep Practical

Reasoning and knowledge representation are key topics in Artificial Intelligence. A very large part of AI is concerned with representing and using knowledge, and one of the best ways (perhaps the best way) of doing this is to use the formalisms developed by logicians. I currently teach a second-year course aimed at getting students to implement a number of well-known knowledge-representation techniques. It's all very well telling students how to do things, but you won't really understand what to do until you try it. This is a course where students produce quite substantial programs, working in groups. At the end of this course, students have first-hand experience of what is involved in producing serious AI systems, and they also have a much deeper appreciation of the limits of AI techniques. AI textbooks tell you how to do things, but they never admit how flawed and fragile the resulting systems are. Doing it yourself tells you more than any book (or any teacher) possibly can.
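As a hint of what such an exercise involves, here is a minimal sketch of one well-known technique, forward chaining to a fixed point over if-then rules. The rule format and the facts are invented for this illustration, and the programs students build are considerably more substantial:

    :- dynamic known/1.

    % rule(Conditions, Conclusion): fire when all conditions are known.
    rule([parent(X, Y), parent(Y, Z)], grandparent(X, Z)).

    known(parent(abe, homer)).
    known(parent(homer, bart)).

    satisfied([]).
    satisfied([C | Cs]) :- known(C), satisfied(Cs).

    % Keep firing rules until nothing new can be concluded.
    forward :-
        rule(Conditions, Conclusion),
        satisfied(Conditions),
        \+ known(Conclusion), !,
        assertz(known(Conclusion)),
        forward.
    forward.

After ?- forward the database also contains known(grandparent(abe, bart)).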

CT322 Computational Representations of Meaning

This course explores the idea that what matters about a sentence is what you can infer when you hear it. It therefore concentrates on showing how to construct formal paraphrases of natural language utterances, since we can only explore the notion of whether P follows from Q if P and Q are expressed in some formal (logical) language. A large part of the course deals with the extreme subtlety of natural language, and the consequences of this for the idea that we can reasonably provide formal paraphrases in any formal language. Again, I believe that such a course should at least be backed up by programs which construct and manipulate the kinds of representation being discussed. Students therefore have access to the program described above, and to a number of smaller systems which illustrate specific points. The teaching material for this course is available online.

Last updated: 16 March 2007