The Next Generation Internet

Toby Howard

This article first appeared in Personal Computer World magazine, January 1998.

DOWNLOADING SOFTWARE from the Web the other day, just as America was waking up, I watched my browser begin to slumber. Returning to my PC after a 20-minute tea-break I found the following chilling message: "1 byte read". So much for the Information Superhighway.

Despite its growing reputation as the World Wide Wait, and the rubbishing heaped upon it by a mass media increasingly anxious about its own future, the Internet remains poised to play a central role in our futures. Within a few years our workplaces, our homes, our domestic appliances, and perhaps even our own bodies will all be linked into a global hyper-net of information exchange and control.

Estimates vary, but it's generally accepted that Internet traffic is now growing by more than 400% every year. The Internet is struggling under the growing demands being placed on it, and the race is on to upgrade it to cope with the spiralling demands of the next century, before it simply grinds to a halt.

It's over a year since Bill Clinton announced the Next Generation Internet program, funded to the tune of $300 million over three years by the US government, with additional commercial backing. NGI is a hugely ambitious project, designed to take the Internet into the next century with ample room for future expansion.

The technology of the Internet was designed to cope with thousands of machines, and is now straining under the burden of serving millions, with demands for bulk data transfer unimagined by the original designers. Using a combination of an upgraded physical network and new software, NGI aims to connect 100 sites at speeds 100 times faster than today, and to link a few selected "prestige" sites at more than 1000 times today's speed.

A key component of NGI is a replacement for the mechanism by which data is moved around the Internet. Apart from utilising physically faster network cabling, the software used to marshal the data is in need of an overhaul. As the Internet grows in size, the current protocols for information exchange are finding it hard to keep up.

The Internet is a collection of networks, only a few of which actually have any direct physical connection with one another. It's a bit like making several thousand separate chains of paper clips and throwing them to the floor. The chances are you'll be able to trace a route from any one individual paper clip to any other, but some routes will be short, others more roundabout.

If different types of computer connected to various types of network are to have any chance of communicating, it's clear that there must be some agreed standards. In Internet terms, there are two protocols which work together to achieve this, and they're collectively known as TCP/IP. TCP, or Transmission Control Protocol, takes the data that one computer wishes to transmit to another, and splits it up into small manageable chunks. It then arranges to send each chunk to the destination computer, making sure that the chunks arrive in the right order to be reassembled correctly, and that any chunks which go missing get sent again. The business of ensuring that an individual chunk gets sent from its source to its destination is handled by the Internet Protocol, or IP.
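The beauty of this division of labour is that applications never see it. Here's a minimal sketch in Python (a present-day illustration, not anything from the NGI project itself): the program simply hands TCP a message and reads back the reply, while the chunking, sequencing and retransmission described above all happen invisibly underneath.

    import socket
    import threading

    def serve_once(srv: socket.socket) -> None:
        """Accept one TCP connection and echo its bytes straight back."""
        conn, _ = srv.accept()
        with conn:
            conn.sendall(conn.recv(4096))

    # Listen on the loopback interface; port 0 asks the operating
    # system to pick any free port for us.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    port = srv.getsockname()[1]
    threading.Thread(target=serve_once, args=(srv,), daemon=True).start()

    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as client:
        client.connect(("127.0.0.1", port))
        # We hand TCP a single message. TCP splits it into numbered
        # chunks, IP carries each chunk, and TCP reassembles and
        # retransmits as needed; none of that is visible here.
        client.sendall(b"Hello, Internet")
        print(client.recv(4096))  # b'Hello, Internet'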

As the demands on the Internet increase, the current version of IP is simply not going to be up to the job for much longer. For one thing, it's running out of addresses. Currently, Internet addresses are 32 bits (4 bytes) long, and although this potentially allows for around four billion addresses, it's been estimated that these will all have been exhausted by around 2005. Related to this is the fact that until now, the only devices (barring the odd Coke machine) which have required Internet addresses have been computers. It's becoming increasingly likely that Internet protocols will be used for the general control of electronic gadgetry in the workplace and the home, and every item, from community basketball court lighting to your garage door, will one day need its own Internet address.
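The arithmetic behind the shortage is easy to check, as a quick back-of-the-envelope calculation, sketched here in Python, shows:

    # Back-of-the-envelope address arithmetic.
    print(2 ** 32)   # 4294967296: about 4.3 billion IPv4 addresses
    # Shared among the world's population, that is less than one address
    # per person, before a single garage door or light fitting joins in.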

This and other shortcomings in IP are being addressed by a proposed replacement called IPv6. IPv6 uses 128-bit addresses, offering a staggering 2^128, or roughly 3.4 x 10^38, unique addresses. It also provides built-in encryption at a very low level, preventing snoopers from eavesdropping on sensitive data, and a range of technical improvements that will make more efficient use of existing network infrastructure. The ideas in IPv6 are now being tested in a collaborative project covering North America, Europe and Japan, known as the "6bone" network.
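To see what 128-bit addressing buys, here's a quick sketch using Python's standard ipaddress module (a present-day convenience, used purely for illustration; the 2001:db8:: prefix is reserved for documentation):

    import ipaddress

    print(2 ** 128)  # 340282366920938463463374607431768211456, about 3.4 x 10**38

    # IPv6 addresses are written as eight colon-separated groups of
    # 16-bit hexadecimal values, with runs of zeros compressed to "::".
    addr = ipaddress.ip_address("2001:db8::1")
    print(addr.version)   # 6
    print(addr.exploded)  # 2001:0db8:0000:0000:0000:0000:0000:0001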

With America leading the way with NGI, and a separate University-based "Internet 2" initiative, where does that leave Europe? Fortunately, there's an EC-funded venture called "Trans European Network interconnect at 34 Mbps", or Ten-34, operated by Dante, a non-profit company based in Cambridge. The scale and goals of Ten-34 are more modest than those of its US counterparts, but still include plans for data rates of up to 155 Mbps, about a hundred times faster than today's Internet. That's more than enough to support high-resolution videoconferencing, for example.

What are people actually going to do with the speed and power of the streamlined Internet? One of the goals of the US initiative is to figure this out, and to come up with "revolutionary applications". Already on the agenda is the establishment of "virtual collaboratories": shared electronic spaces where scientists can cooperate on huge research projects which would be impossible for a single physical institution to host. Other plans include nationwide crisis and disaster management, digital libraries and tele-medicine. And the days of numb-bummed students sitting in dusty lecture theatres may finally be over, with the advent of the person-free Internet "Televersity".

Researchers are working furiously to modernise the Internet, but until the new technology is in place, there's not much we can do except get up early and hit the Web before America stirs.

Toby Howard teaches at the University of Manchester.