As one gets older, one discovers everything is going to be exactly the same
with different hats on.
All animals are equal but some animals are more equal than others.
George Orwell, Animal Farm
In most of Computer Science, we explicitly or implicitly look at differences:
hardware v. software etc.
MU0 v. Pentium v. ARM v. MIPS etc.
functional v. imperative v. logical v. OO etc.
Pascal v. BASIC v. C v. Fortran etc.
These differences are very real and important, and you must understand them to be good Computer Scientists. However, there is another important thing you need to understand, which we are going to explore in CS2121: from a certain viewpoint, the differences between all of the above are too trivial to notice!
This viewpoint entails considering only what is ultimately possible - what computations can we perform (no matter how arcanely) using each language or machine, ignoring any questions of efficiency or fitness for purpose.
Here is an informal proof that everything is the same:
1) all programs run on computers, so no program can do anything more than a
computer can, i.e. software <= hardware
2) any computer can be exactly modelled, to any required level of detail, by a
simulator program (that's how computers are designed nowadays),
i.e. hardware <= software
Therefore software == hardware, for all software and hardware.
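As a concrete (and much simplified) illustration of point 2, here is a sketch of a simulator for a hypothetical accumulator machine in the spirit of MU0; the instruction set, the opcode names, and the encoding as (opcode, address) pairs are all invented for the example:

```python
# A toy "hardware" design, modelled exactly by a few lines of software:
# a fetch-execute loop for a hypothetical single-accumulator machine.
def run(program, memory):
    """Run a list of (opcode, address) instructions over a memory list."""
    acc, pc = 0, 0
    while True:
        op, addr = program[pc]   # fetch
        pc += 1
        if op == "LDA":          # load accumulator from memory
            acc = memory[addr]
        elif op == "ADD":        # add a memory cell to the accumulator
            acc += memory[addr]
        elif op == "STA":        # store accumulator back to memory
            memory[addr] = acc
        elif op == "STP":        # stop, returning the final memory state
            return memory

# 2 + 3, computed by the simulated machine: load cell 0, add cell 1,
# store the result in cell 2, then stop.
result = run([("LDA", 0), ("ADD", 1), ("STA", 2), ("STP", 0)], [2, 3, 0])
```

Anything we could ask of the real machine, we can ask of the simulator instead, which is the heart of the argument.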
Some strange hardware, like quantum computing and DNA computing, both of which claim to be able to make huge or maybe unlimited numbers of computations simultaneously. (Intriguingly, one hypothesis floating around is that DNA works by using the same effects as quantum computing, so these could be the same thing in two different guises!)
Some non-executable software, like specification languages, that can describe huge or unlimited numbers of decisions (using universal and existential quantifiers).
Some restricted kinds of languages (the Chomsky hierarchy) and machines (automata), that can do a subset of computations more efficiently. This turns out to be very relevant to grammars.
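For instance, the most restricted machines in the hierarchy, finite automata, can be sketched in a few lines; the particular language chosen here (binary strings with an even number of 1s) and the state names are just illustrative:

```python
# A sketch of a deterministic finite automaton (DFA) recognising the
# regular language "binary strings containing an even number of 1s".
def accepts(s):
    state = "even"                   # start state (also the accepting state)
    delta = {("even", "0"): "even",  # transition table: (state, symbol) -> state
             ("even", "1"): "odd",
             ("odd", "0"): "odd",
             ("odd", "1"): "even"}
    for ch in s:
        state = delta[(state, ch)]
    return state == "even"

accepts("1100")   # True  - two 1s
accepts("1000")   # False - one 1
```

Note how little machinery it needs: no memory beyond the current state, just one table lookup per input symbol, which is why such machines can do their subset of computations so efficiently.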
Grammars can perform computations (although very verbosely), and one of the lab exercises is to write a grammar that acts as a boolean (true/false) calculator. (The amount of code you have to write grows with the number of values: 3-state logic is roughly twice as much work, and an integer calculator would be ridiculous!)
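The lab exercise itself uses a grammar, but the underlying idea can be sketched as a recursive-descent evaluator with one function per grammar rule; the grammar below is an illustrative guess at the sort of thing involved, not the lab solution:

```python
# A boolean calculator driven directly by a grammar:
#   expr   -> term { "|" term }
#   term   -> factor { "&" factor }
#   factor -> "!" factor | "(" expr ")" | "T" | "F"
def evaluate(src):
    toks = list(src.replace(" ", ""))   # one-character tokens
    def expr():                         # expr -> term { "|" term }
        v = term()
        while toks and toks[0] == "|":
            toks.pop(0)
            rhs = term()                # evaluate first, so parsing always consumes the tokens
            v = v or rhs
        return v
    def term():                         # term -> factor { "&" factor }
        v = factor()
        while toks and toks[0] == "&":
            toks.pop(0)
            rhs = factor()
            v = v and rhs
        return v
    def factor():                       # factor -> "!" factor | "(" expr ")" | "T" | "F"
        t = toks.pop(0)
        if t == "!":
            return not factor()
        if t == "(":
            v = expr()
            toks.pop(0)                 # discard the matching ")"
            return v
        return t == "T"
    return expr()

evaluate("T & !(F | T)")   # False
```

Each grammar rule became one small function, which hints at why more values (or integers!) blow the code up so badly: every operator needs its own rule-by-rule treatment.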
The same small sets of ways of combining pieces of software turn up in many different contexts, e.g. Böhm and Jacopini showed that imperative languages just need repetition and choice, and that is all grammars have.
You don't need to learn the hundreds of different programming languages - you just need to know how to program well in each paradigm.
The course-unit is in two halves:
1) a practical introduction to the implementation of programming languages.
We start by distinguishing between syntax (textual representation) and semantics (meaning) in the definition of programming languages. The bulk of this half of the course-unit consists of looking at syntax in some detail, and in particular becoming familiar with two tools that are used to implement it, LEX and YACC. We then take a quick look at the implementation of some important aspects of semantics. Finally, we look at putting everything together to make a compiler, or any other sort of text-processor.
2) an examination of the underlying theory, showing how the models underpinning the structure of languages lead to an understanding of the intrinsic limits of computation.
You will learn about the fundamental models underlying computation, and their intrinsic limits.
(Andrea will introduce this properly!)
students + schools (anyone not know SML? C??)
1+10+10+1 lectures, 6 × 2-hour labs
3 examples classes, but also labs 1-4 + paper exercises
assessment: lab=25(10?)%, exam=75(90?)% but really 50:50??
book-list and readings
The first half is mainly an exploration of the practical issues of using grammars to recognise structured pieces of text. However, it also includes an introduction to dealing with the parts of computer languages that grammars can only handle superficially, such as identifiers, so we also explore the limits of what various kinds of languages can do. This is directly relevant to designing languages yourself - if you want to be able to do certain things, you must include certain language features.
The second half is an examination of the underlying theory, including the limitations of grammars. The theory of automata, and the equivalence of grammars and automata, gives us practical methods for implementing recognisers for grammars. Finally, we explore both the generality of computers (hurrah!) and their fundamental limits (boo! - but maybe hurrah if it also limits what we have to learn about them).
This might start off looking like a course with lots of programming, but actually it turns into a course with lots of theory. Despite that, I hope that you will find it a very interesting and useful course, both on the practical side and on the theoretical side.