
Operating Systems

In the last few lectures, we have looked at some typical computer hardware and begun to see how it behaves. However, hardware by itself is not much use, so we also need to talk about the software that makes the hardware useful.

There are two main driving forces behind the creation of this software:

code reuse
if this software did not exist, every program that, say, reads input from the keyboard would have to include all the code needed to drive the keyboard. It is much better for this code to be written once and for all, for everyone to use.
safety
as this code is used by all programs, it is worthwhile making it as correct as possible and fixing any bugs as soon as they are noticed. It also makes sense to protect users from themselves: e.g. by making it impossible for one program to accidentally affect another, independent program if they are both active at the same time; or by allowing users to specify files as not to be changed, so that programs cannot erase important information by mistake. This even extends to including similar safeguards against malicious attacks.

This software has several major components:

Kernel
has direct control over all the hardware, and provides both simplified access and extra facilities. It is sometimes known as a Virtual Machine (VM). Safety features are usually implemented in the kernel so that it is impossible to bypass them.
Kernel Application Programmer's Interface (API)
allows programs to access the facilities provided by the kernel by making system calls. To the programmer a system call looks just like any other call to a predefined library function or method.
Libraries
general-purpose facilities, like those provided by the kernel, but not directly safety-related (e.g. input-output of numbers, using the character input-output provided by the kernel). These may be specific to particular languages, like Java, or applications, like graphics, or intended for more general use.
User Interface (UI)
allows users to directly access the facilities provided by the kernel and libraries including the ability to run applications and development tools. It can be:
Textual
a Command Line Interface/Interpreter (CLI) or Shell
Graphical (GUI)
a desktop, using Windows, Icons, Menus and Pointers (WIMP).
Applications
e.g. office (word/text-processing, database, spreadsheet etc.), internet (browser, ftp etc.), graphics (image manipulation, drawing etc.)
Development tools
compiler, editor, debugger, programming environment (PE), development environment (DE) etc., that can be used e.g. to create new applications.

The boundaries between these components can be vague, and are invisible to most users. For example, development tools and applications are often sold with a system, and their user interfaces are usually based on those for the kernel. Some applications, such as database systems, have their own development environments that allow the user to create complex queries and reports. Many kernels essentially only have one user interface, which is often assumed to be a fundamental part of it - e.g. Microsoft Windows. However, Linux makes the distinction very clear, as there are several shells (e.g. sh, ash, bsh, bash, csh, ksh, tcsh, sash, tclsh, mc) and desktops/window-managers (e.g. gnome, kde, twm, fvwm, kwm, afterstep, enlightenment, windowmaker, blackbox) widely available - I find the problem is knowing which to choose!

Some of the boundaries can also be seen on a Linux computer, by looking at the different sections supported by the `man' command e.g., system calls can be seen in `/usr/man/man2', libraries in `/usr/man/man3', and most applications and development tools in `/usr/man/man1'.

The kernel has complete control of all the computer hardware. You might think that this should also be true for ordinary applications, and on some operating systems like DOS it is. However, if an application can do anything, then by mistake or malice it could cause other applications to go wrong or, say, damage the filestore in such a way that you could no longer trust any of your information! Therefore, hardware usually permits two different modes of operation: a privileged mode for the kernel where any action can be performed, and a safe mode for users and applications that prevents some kinds of potentially dangerous actions. Instead, the application must make system calls to ask the kernel to perform these actions, so the kernel can ensure that no damage actually occurs.

The OS Kernel

The Operating System (OS) as originally defined is just the software necessary to hide the details of the hardware from the user: the kernel and its API. In the next few lectures, I am going to talk about OS kernels, and use examples from user interfaces to illustrate the facilities provided by the kernel. I am going to use Linux and Microsoft Windows NT as example OSs.

The kernel essentially manages the resources provided by the hardware: the CPU, the RAM, and the peripherals - disks, communications, and user interface. The resource managers we are going to look at are:

Process Manager
which controls the CPU, and is able to create the illusion that several things can be going on at once.

Memory Manager
which controls RAM so that each of the several things going on at once has as much memory as it needs but can only use the memory it is supposed to.

Peripheral Managers
usually there is a separate manager (often known as a device driver) for each kind of peripheral, or even for each different peripheral, allowing information to be moved to and fro as appropriate. Programs cannot directly manipulate data in peripherals (e.g. disks) but instead have to copy the information to RAM and manipulate it there (and then copy the results back to change the information in the peripheral).

There may be extra layers of software to support more complex activities, such as using the communications to talk to the internet, or using the display for graphics, or running a queue to allow many users to access a printer. In this case, we usually distinguish between the low level device driver, that moves data in and out of the peripheral, and the higher level manager(s), that decide what data is to be moved where.

Filestore Manager
allows you to use your own files, but not to use files that you shouldn't.

Various managers cooperate to provide services to the user: for example, the disk manager and the memory manager together allow the memory to appear to be bigger than it really is, and the filestore manager relies on the disk manager and communications manager to actually store and retrieve the information in your files.

A completely different kind of manager is the system manager, who is the overworked, underpaid human who does the many tasks which the operating system can't do unaided, such as telling it who is allowed to use which parts of the system, setting up accounts for new users, installing new software, etc.

Process Manager

Just like human beings, computers have to be able to do more than one thing at once. For example, maybe the user has several web-browser windows open at once, with pages arriving in each of them, or the user is reading email while the computer is running a slow program, or there is more than one user using the computer. Just like a human with only one head, a computer with only one CPU can only really do one thing at a time. (Note that I am writing this as a member of a disadvantaged minority - although the average man has difficulty doing more than one thing at a time, apparently the average woman can easily deal with several things at once. My apologies to the women and above average men who can't understand what all the fuss is about!)

To create the illusion of doing several things at once, the computer has to stop what it is doing, remember where it was, and go onto the next job, and do all this every few milliseconds so each job gets its turn. This is known as multi-programming or multi-tasking.

The separate jobs are known as processes. (There is a similar concept, known as threads, used to split a single job into several parts - you will come across this later on in Java programs.) Strictly speaking, a process consists of a program and its data, plus extra information defining exactly what point the computer has reached in running the program, i.e. the exact values of all the variables of the program and of the registers of the CPU. At any given time, some processes will be active and some will be waiting or sleeping - at most one will be actually executing on a computer with one CPU.

On Linux computers, you can use the `ps' command to see what processes exist (see example below). Each process corresponds roughly to an independent activity on the computer. There is at least one process for each separate window. Each time you type a Linux command into a shell, like bash, a process is created just to run that command. You can see this using the -H flag, which causes related processes to be grouped together - for example, halfway down the listing I have a gnome-terminal process (number 805), which is running a bash process, which I have used to run the emacs process to edit this file and the ps process to create the output.

[pjj:os]$ ps -Hu $USER
  PID TTY          TIME CMD
  677 tty1     00:00:00 bash
  688 tty1     00:00:00   startx
  695 tty1     00:00:00     xinit
  700 tty1     00:00:00       gnome-session
 2579 tty1     00:00:00 gnome-terminal
 2580 tty1     00:00:00   gnome-pty-helpe
 2581 pts/5    00:00:00   bash
 2175 tty1     00:00:00 gnome-terminal
 2176 tty1     00:00:00   gnome-pty-helpe
 2177 pts/4    00:00:00   bash
 1309 tty1     00:00:31 netscape-commun
 1319 tty1     00:00:00   netscape-commun
  813 tty1     00:00:01 gnome-terminal
  814 tty1     00:00:00   gnome-pty-helpe
  815 pts/1    00:00:00   bash
  805 tty1     00:00:06 gnome-terminal
  806 tty1     00:00:00   gnome-pty-helpe
  807 pts/0    00:00:00   bash
 2044 pts/0    00:00:07     emacs
 2597 pts/0    00:00:00     ps
 2032 pts/2    00:00:00   bash
 2082 pts/3    00:00:00   bash
  746 tty1     00:00:00 gmc
  743 ?        00:03:28 cpumemusage_app
  739 ?        00:00:00 gen_util_applet
  737 ?        00:00:06 gnomepager_appl
  733 ?        00:00:00 gnome-name-serv
  731 tty1     00:00:01 panel
  728 tty1     00:00:02 xscreensaver
  713 tty1     00:00:05 enlightenment

If you type ps -HAf you can see all the processes on your computer. When I tried it, there were another 43 processes that belonged to the system (variously reported as root, bin, daemon, nobody, xfs, and news). Operating systems which do not have the process concept, like DOS, severely restrict the scope for simultaneous independent activities.
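To see that each command really does run in its own process, you can compare the PID of your current shell with that of a child shell it creates - a minimal sketch, assuming a bash shell:

```shell
echo $$              # the PID of the current shell process
bash -c 'echo $$'    # a new process is created to run this command, so the PID differs
```

The two numbers always differ, because `bash -c' starts a fresh process just for that one command, exactly as described above.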


[Figure omitted: a state diagram of a process, showing states such as ready, running, wait and halted, and the transitions between them.]

A simplified view of the different states of a process.

When a new process is created the operating system allocates memory for the process and records the extra information mentioned above about the process. Any program can ask for a new process to be created, but it cannot create it itself. Instead it must ask the kernel by making a system call. A process terminates voluntarily or is terminated by another process using another system call.

Many processes can co-exist in the memory but only one process can actually be executing at any one time on a single processor. The process scheduler within the operating system kernel is responsible for choosing which process should run next. It chooses one of the processes ready to run and schedules it (also known as dispatching).

If the scheduler decides a process has had its fair share of CPU time for the moment, it will preempt the process, returning it to the set of ready processes and allowing the next ready process to run. In this way, it shares out the CPU time between the competing processes, so that even if one program gets into an infinite loop other programs can still run.

Often processes are blocked (or forced to sleep or wait) because they are waiting for input (or output) to be completed e.g. for the user to type something, or for the next part of a file to arrive from hard disk. When a process becomes blocked, the scheduler takes control and runs another process instead. When an event, such as a signal or the arrival of input, occurs so that a process is able to run again, the scheduler moves the process to the ready state.
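You can observe a blocked process directly: a `sleep' command spends almost all its time waiting for a timer, so `ps' reports its state as S (sleeping). A sketch assuming the usual procps `ps' options:

```shell
sleep 5 &                    # start a process that immediately blocks, waiting for a timer
pid=$!                       # remember the new process's PID
ps -o pid=,stat= -p "$pid"   # the state column normally shows S, i.e. sleeping
kill "$pid"                  # tidy up: terminate the sleeping process
```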

Peripheral Managers

We have seen that even a simple desktop computer can have a dozen peripherals, and larger computer systems can have many more, such as large numbers of big disks, magnetic tape for backup, printers, and connections to several different physical networks. As well as the general principles behind operating systems, of code reuse and safety, there are further principles specifically relevant to peripherals:

- Shared peripherals should not be under the direct control of any one application program.
- The interface presented to the application program should be as device independent as possible.
- It should be possible to incorporate new device drivers for new devices.

Linux is particularly good at achieving device independence. As far as most programs know, all input and output is via files, which are treated as just a sequence of bytes. A program opens a file by a system call. A program which writes to a file called `fred' could, with a very small change, write to `jim' instead - it would just have to open `jim' instead of `fred'.

To the program, the keyboard and screen are just like two special files, standard input stdin and standard output stdout, which always exist. It is easy to divert stdin and stdout at the shell level, using the > and < facilities to instruct the shell to open the required files. For example, to divert output from stdout to a file called `junk':
more lightbulbs > junk

Even peripheral devices are treated like files by Linux. Suppose we have access to a peripheral device `devi'. Then we can just divert the output to that device by writing it to the corresponding special file in the `dev' (device) directory:
more lightbulbs > /dev/devi

However, we are not usually given the privilege of writing directly to devices, so we may send the output to another process instead. For example, the shared printers are accessed using the `lpr' process, so we can say:
lpr lightbulbs
using the `lpr' process directly, or we can pipe the output from the `more' or `cat' process to the lpr process:
cat lightbulbs | lpr

The operating system collects the output from the first process in a buffer, an area of operating system memory, and then passes it as input to the second process.

We can pipe processes of our own together in this way. Suppose I have written two programs which are saved in binary form in `myprocess1' and `myprocess2'. Suppose each takes standard input and produces standard output. We can use file input and output and piping as follows:
myprocess1 < myinput | myprocess2 > myoutput
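The same pattern works with any programs that read standard input and write standard output. As a runnable stand-in for the hypothetical `myprocess1' and `myprocess2', here `grep' and `tr' play those roles:

```shell
printf 'apple\nbanana\ncherry\n' > myinput      # create some example input
grep 'an' < myinput | tr 'a-z' 'A-Z' > myoutput # select matching lines, then uppercase them
cat myoutput                                    # prints BANANA
```

The shell opens `myinput' and `myoutput' for the two processes, and the kernel buffers the data flowing through the pipe between them.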

There is even a special pseudo-device which can always be used to throw output away:
more lightbulbs > /dev/null

Communications Manager

In the local network each computer runs its own copy of the operating system. Across the network a Linux process can communicate with a process in another computer by means of a generalisation of a pipe called a socket. The local operating system must send the information to the operating system running on the remote processor, using the required communication protocols. As with other input-output redirection, to the user processes this communication can look as simple as the first process writing to a file and the second process reading from it.
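A socket itself needs two networked machines to demonstrate, but the kernel buffering involved can be seen locally with a named pipe (FIFO), a close relative of both pipes and sockets: two otherwise unrelated processes communicate purely by writing to and reading from a kernel buffer. The path /tmp/demo.fifo is just an example name:

```shell
mkfifo /tmp/demo.fifo                              # create a named pipe (an example path)
echo "hello across processes" > /tmp/demo.fifo &   # one process writes into the kernel buffer
cat /tmp/demo.fifo                                 # another process reads it; prints the message
rm /tmp/demo.fifo                                  # tidy up
```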

A model of computing particularly suited to networks is client server computing. In client server computing a process (the client), rather than doing everything itself, requests services of other processes (server processes). Typically user processes act as clients and request services from server processes. The server processes may be elsewhere on the network, or they may be on the same processor. Examples of services might be:

- Print server: to print documents
- Archive server: to archive files or to retrieve archived files
- File server: to access files not directly available (e.g. NFS)
- Database server: to provide a centralised database
- X server: to provide a fully graphical windowed terminal
- Web server: to enable users to find and retrieve web documents

Client-server computing is often implemented using Remote Procedure Call (RPC) - also known as Remote Method Invocation (RMI): in a Java program, when you invoke a method, you expect the method to execute and return any results. You probably assume that it will be executed on the same computer that sent the message from. However, as long as you get the right results you don't actually care. In Java it is possible to use RMI - that is, you send the message and get the results as normal, but the object the message is sent to happens to be remote, in a different process or even in a different computer.

The client process simply invokes a method (e.g. in a library) in the usual way to request a service - this method uses RMI to send the request to the server process, probably on another computer, that can actually provide the service, waits for the response, and then returns this as its result back to the client. The client process does not need to know whether the client server model is being used or whether the invoked method is remote or not. This is hidden within the library. Even if the client server model is always used, the server can be on the same computer without the client needing to know.

RMI is implemented something like this:

- The client process invokes a method of an object.
- The method is just a `stub', which converts the message to the RMI protocol.
- The message is sent to the server by the communications manager.
- The client process waits until a reply is received.
- The server RMI stub receives the message, unpacks the information from the RMI protocol, and invokes the real method on the server as normal.
- The real method returns a result to the server stub.
- The server stub packs up the result in a message.
- The server's communications manager sends the message to the client.
- The client stub unpacks the message and returns the result.


[Figure omitted: the client and server processes, with the request passing from the client stub to the server stub, and the reply carrying the result back to the client.]

Disk Manager

Disks, of various kinds, (and similar peripherals, like magnetic tape) are used to provide larger amounts of memory and/or more permanent memory than RAM. As an example of the former, if we want to run a program that is too large to fit in RAM all at once, we use disk to hold parts of the program that are not needed at that instant - this is known as swap space. The most obvious use of disks as permanent memory is to hold our files and directories.

The same disk can be used for both temporary and more permanent storage, so on Linux systems hard disks are partitioned between use for swap space and use for permanent files. For example, the hard disk on my computer is partitioned into several chunks, all of which are visible to Linux, and two of which are also visible to Windows:

Partition   Size (8K blocks)   Linux mount    Windows drive
/dev/hda1   261K               /mnt/win-c     C
/dev/hda3    17K               (swap space)
/dev/hda5   261K               /mnt/win-d     D
/dev/hda6   131K               /
/dev/hda7   356K               /usr

(You can use the `df' command and look at `/etc/fstab' to investigate this on your own computer - if you feel very brave, you could try using the `fdisk' command, but don't expect any sympathy if you damage your disk!)

We distinguish between on-line storage (e.g. hard disks) where information is permanently available in the computer system and off-line storage (e.g. CD-ROMs, magnetic tape cartridges) where the media is removed from the computer system and human intervention is required to reload the media before the computer can read it.

Usually, when a process needs the disk manager to perform a read or write, it is then made to wait (blocked, sleeping) by the process manager. When the disk transfer is complete, the disk manager informs the process manager so the process can be moved to the ready (runnable) state.

We will see how the filestore manager interacts with the disk manager later.

Memory Manager

We have seen that many processes exist simultaneously. Each process has its own private memory. When a process is created, the operating system reserves memory for use by that process. When a process terminates, the memory it used is recovered. Each program is written without knowledge of what other processes will be running. In fact, two copies of the same program could be running in different processes. Each process assumes it has available memory locations numbered 0 upwards.

In computers, a hardware memory management unit (MMU) translates every address generated by a program into the actual memory address allocated, as each memory access is made. It translates so-called virtual addresses to real addresses. The tables used by the memory management unit are maintained by the operating system. Typically, memory is allocated in blocks or pages of 4K bytes, so there is one table entry for each 4K bytes of memory used by the process.
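With 4K (4096-byte) pages, splitting a virtual address into a page number (the index into the page table) and an offset within the page is just integer division and remainder. A quick sketch using shell arithmetic, with an arbitrary example address:

```shell
vaddr=$((0x3A27))        # an example virtual address (14887 in decimal)
page=$((vaddr / 4096))   # page number: which page table entry to use
offset=$((vaddr % 4096)) # offset within the 4K page
echo "page=$page offset=$offset"   # prints page=3 offset=2599
```

The MMU does exactly this split in hardware, replacing the page number with the real page address from the table and keeping the offset unchanged.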


[Figure omitted: the CPU passing a virtual address to the MMU, which translates it into a real address in RAM.]

A hardware memory management system enables processes to be protected from one another, and the operating system kernel to be protected from user processes. Without some form of memory management it is difficult to implement such protection.

What happens if the memory manager starts to run out of real memory?

The simplest solution is to give up at this point. Each program should tell the memory manager how much memory it needs before it starts, so the system can prevent too many programs starting at once. A very big program would have to solve a space shortage by itself, typically by writing data to a temporary file and reading the file back again later.

A better solution is to get the memory manager to deal with the problem by automatically moving information not being used from RAM to disk (into swap space) and back again when it is required, and (as much as possible) hiding what is going on from the programs and the user.

When a page of memory is copied out to the swap space, the corresponding table entry is marked to show that the page is on disk. If the process tries to access the page now on disk, the memory management unit generates a memory exception, which informs the operating system. The operating system must bring the page back from disk before the process can continue. This is called virtual memory using demand paging. Linux and Windows NT both implement this.

In principle, virtual memory allows you to run arbitrarily large programs if you have enough disk. In practice, programs significantly larger than the RAM run appallingly slowly, so there is usually no point in making the swap space much bigger than the RAM. However, this does mean that if you have lots of things going on at once (e.g. many windows open at once), it is possible to run out. If you get a message from the computer along the lines of "out of memory", it means you have run out of swap space.

Virtual memory was invented at the University of Manchester, on a computer called the Atlas, which was one of the fastest computers in the world in the early 1960s.

Virtual memory, and cache memory, is sometimes referred to as part of a memory hierarchy - a series of increasingly slower and larger memories all linked together, where information migrates into progressively faster memories when it is being used by the CPU, and back out to slower memories when it is no longer actively in use. On the departmental PCs, the memory hierarchy is something like: registers, L1 cache, L2 cache, RAM, swap space, local filestore, network fileserver, archive (e.g. removable disks or magnetic tape).

Filestore Manager

The introductory labs gave you lots of information about using files and directories on both Linux and Windows. The basic operations users want to do with a file are things like create it, write to it, modify it, read from it, and delete it. They may also want to do things like execute a file, or rename it or even move it between directories.

Files are held on disks, possibly accessed via a network, but we don't want to allow applications to access disks directly. Instead, an application makes indirect access via system calls to the filestore manager. The filestore manager converts a file access into a disk access, which it passes to a disk manager (possibly via the network, if the disk is remote).

Some of the tasks of the filestore manager are:

- To maintain directories on the disk(s), as up to date as possible, showing name, ownership, permissions, and the time and date of creation, last modification and last read.
- To keep track of free space on the disk as files are created, updated and deleted.
- To enforce rules about the permissions on and ownership of files. Whenever a file is opened, the user identity (UID) of the owner of the process opening the file is checked to see whether the operation is legal.
- To enforce filestore quotas. Quotas are not intrinsic to a Linux or Windows filestore, but if they are not imposed then the filestore is less reliable, because any file-writing operation may fail if the disk is full.

The system managers must put in place some effective backup or archiving mechanism if the software does not do this automatically (Linux and Windows do not). This typically comprises an overnight dump to magnetic tape of all files created or modified in the previous 24 hours, and a weekly or monthly dump of the complete filestore. This is to protect against system failure or media failure - not to protect users from themselves!

The Linux and Windows filestores do not automatically keep multiple generations of the same file. Instead, the application may create a backup copy: e.g. `nedit' is set up to copy file `xxx' to `xxx.bck' in case the user makes a catastrophic error in editing.

File ownership and protection

Every file in the system has an owner - usually the user who created the file. It's usually undesirable for users to be able to do things to files owned by other users - you don't want other people copying your labwork, and you certainly don't want them to be able to delete it! As a result, all decent operating systems have a notion of file protection. Each file has associated with it a set of permissions which determines the types of things that users can do with the file. There are 4 sorts of permission in Linux:

Read permission
A user needs read permission to look at the contents of a file. So you can't look at another student's labwork because you don't have read permission.
Write permission
A user needs write permission to modify or delete a file.
Execute permission
In order to run a program, you need execute permission on the file containing the (compiled) program.
Search permission
Applies to directories only. You need this to look in a directory, so even if you have permission to do something to a file, you can't actually get at it without search permission on the directory containing it.
You can see how these permissions are set on files if you use `ls -l'.

It might appear that you should have all four sorts of permission on your own files and none on anybody else's. However, there are a number of exceptions to this:

- Many of the files on the system need to be readable by everybody (for example, the manual pages displayed by the man command) or executable by everybody (all the generally used programs such as the chess program, and utilities such as `ls', `more', etc.).
- It is often convenient to make information available to others by having files readable by everybody or readable by some group of people. In order to do this you have to make the directories searchable as well.
- It is sometimes convenient to make a file generally writable. An example of this is a calendar from which people edit out dates on which they are unavailable, in order to determine when everybody can make some social event.

To accommodate these sorts of things, the owner of a file can change its permissions, for the user him/herself, for a group of people, or for everybody, using the `chmod' command.
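For example, to create a file, set its permissions with `chmod', and then inspect them with `ls -l' (the filename is arbitrary):

```shell
touch notes.txt       # create a new, empty file
chmod 644 notes.txt   # read+write for the owner, read-only for group and everybody else
ls -l notes.txt       # the first column shows -rw-r--r--
```

The first character of the `ls -l' column marks the file type, and the remaining nine show read, write and execute permission for the owner, the group, and everybody else in turn.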

By default, many of the files in the system are publicly readable or executable as appropriate, and you are welcome to read anything publicly readable or execute anything publicly executable. However, by default you cannot read files owned by other users.

One user, called the superuser or root has the ability to read, write and execute all files. For obvious reasons, students do not get to be superusers!


The Network File System

You can log onto any PC on the departmental network, and your filestore looks the same. How is this done? On the department LAN, but hidden away in another room, is a fileserver computer known as jeeves. This is similar to the computers you can use, but bigger and faster, with hundreds of gigabytes of disks and tape cartridges attached. If a process tries to access a file that is not available on your local disk, the filestore manager translates the access into a request, sent via the communications manager, to the fileserver.

This is all handled invisibly to user processes, so they are not aware of any difference between local and remote filestore. Some software called the Network File System (NFS) coordinates filestore across the computers. It allows part of the directory tree which is physically stored on one disk to be "grafted" onto the tree for another disk. This grafting process is called mounting. Every computer can mount the filestore from every disk, with the result that the whole filestore looks like one big tree, wherever you are. For instance, suppose we have a main disk whose filestore looks like this:


[Figure omitted: the directory tree of the main disk, rooted at `/'.]

and disks for staff with trees like this:

[Figure omitted: a staff disk's tree, rooted at `pjj', with subdirectories such as `CS1011' and `admin'.]
and disks for students with trees like these:

[Figure omitted: a student disk's tree, rooted at `s01', with home directories such as `jonesd1'.]

then we can mount the staff directories and the student directories below `/home', and obtain the complete tree:


[Figure omitted: the complete tree after mounting, rooted at `/', with the staff and student trees grafted in below `/home'.]

- Trees occur frequently in computer science, and are conventionally drawn with the root at the top, in the same way as family trees.
- The `leaves' of the tree are those files which are not directories (or empty directories containing no files). In the tree above, the file called `lightbulbs' is an example.
- The above tree is just an example, which bears some resemblance to the real one. However, the real one is different in a number of ways. For more information, use `man hier' on a Linux computer.
- Notice that there are two directories called `CS1011' (one under `pjj/teaching', one under `bloggsf1'). The commands for manipulating and moving around the tree obviously need to be able to specify which one is meant in a particular command.
- The root of the filestore is known as "/" in Linux.

By mounting the disks differently on different computers, we can even make different versions of the filestore visible, so that students sat at teaching computers can't see staff directories, but staff can see everything. However, the permissions still control access to each individual file, so we can't actually see any file or directory you don't want us to.

NFS is a client-server system, implemented using remote procedure calls. A single client makes requests which may be to the local filestore or to one of a number of servers. NFS can be used with any filestore which is `sufficiently' compatible. Various implementations exist, including one used in the department with MS Windows, which allows remote filestores to be mounted to look like extra disk drives on the PC.


Pete Jinks 2003-09-25