The Internet Newsletter for Lawyers was launched in the late 1990s, when there was a real thirst for guidance on what this new thing called the internet was and what it could offer the lawyer.
Today we all take the internet for granted; few of us concern ourselves with what it actually is, and even fewer with how it works. Indeed, a 2019 survey by HighSpeedInternet.com found that, although 86 per cent of respondents said they understood what the internet was, only two-thirds of those gave a reasonable answer; in other words, more than four in ten of those questioned did not really understand.
We do not need to understand what happens under the bonnet of our car to be a good driver, but we do need a good understanding of our car’s functions and how they work together. So too with the internet: we should know how it works at a “user level”.
So what is it?
The internet is the global network of computer networks that use the internet protocol suite (TCP/IP) to communicate with one another. It consists of private, public, academic, business and government networks of local to global scope, linked by a broad array of electronic, wireless and optical networking technologies. It carries a vast range of information and services, including the web, email, telephony and file sharing.
The internet dates back to research commissioned by the US Federal Government in the 1960s to build robust, fault-tolerant communication via computer networks. The US Department of Defense’s Advanced Research Projects Agency Network (ARPANET) initially served as a backbone for interconnection of regional academic and military networks.
Many different networking methods were in use on ARPANET, and Vint Cerf of Stanford University was recruited to work on resolving the problem of interconnecting them. With Bob Kahn he developed the internet protocol suite (TCP/IP), which was adopted by ARPANET. The specification included the first use of the term “internet”, short for “internetworking”. Consequently Cerf and Kahn are widely regarded as the fathers of the internet.
The key to scalability and success was that TCP/IP was “network-agnostic”: instead of the overall network being responsible for reliability, the individual hosts became responsible.
The networks based on ARPANET were government funded and therefore restricted to non-commercial uses such as research; unrelated commercial use was strictly forbidden. This initially restricted connections to military sites and universities. During the 1980s, the connections expanded to more educational institutions, and even to a growing number of companies, such as Digital Equipment Corporation (DEC) and Hewlett-Packard, which were participating in research projects or providing services to those who were.
In 1986, the National Science Foundation created NSFNET, a backbone to support the NSF-sponsored supercomputing centres. The funding for this, as well as private funding for other commercial extensions, led to worldwide participation in the development of new networking technologies and to the merger of many networks. The linking of commercial networks and enterprises in the early 1990s marked the beginning of the transition to the modern internet and generated rapid growth as institutional, personal and mobile computers were connected.
US Senator Ted Stevens, opposing network neutrality in 2006, famously described the internet as “not something that you just dump something on. It’s not a big truck. It’s a series of tubes.” We may laugh, but in some senses his analogy was apt and, indeed, was celebrated in 2012 in the title of the fascinating Tubes: A Journey to the Center of the Internet by Andrew Blum, which answers the question: what does the physical internet look like?
The “internet backbone” comprises the principal data routes between large, strategically interconnected computer networks and core routers on the internet. These points exchange internet traffic across continents and oceans.
The original internet backbone was the US Department of Defense’s ARPANET (1969). Later, the British JANET (1984) and the US NSFNET (1986), serving higher education, resulted in the NSFNET backbone. The internet then comprised all networks connected to and able to interchange internet protocol datagrams with this backbone. By 1990, when ARPANET was finally decommissioned, TCP/IP had supplanted most other wide area computer network protocols worldwide.
Thus the internet backbone now is made up of multiple networks owned by numerous companies. It typically comprises fibre optic trunk lines, consisting of many fibre optic cables bundled together to increase capacity, and it is able to reroute traffic in case of a failure.
The original internet hub, providing the routers and connections that enabled the internet networks to act as one, was MAE-East in Virginia. There are now internet exchanges (IXs) all around the world, the three biggest being the Deutscher Commercial Internet Exchange (DE-CIX), the Amsterdam Internet Exchange (AMS-IX) and the London Internet Exchange (LINX).
Whilst the physical connections are made at these exchanges, connectivity is actually established via “peering” agreements between networks.
An internet service provider (ISP) is simply an organisation that connects the subscribers on its own network to the internet backbone. In addition to internet access, most ISPs also provide additional services such as email, domain name registration and web hosting.
Protocols establish how two computers send and receive a message. Data “packets” travel between source and destination from one router to the next in a process known as packet switching.
Protocols manage key points about a message such as the speed of transmission, the size of the message, error checking and deciding if the transmission is synchronous or asynchronous.
The internet protocol suite
The Internet Protocol (IP) is the fundamental communications protocol for relaying data across network boundaries; it is what essentially established the internet. IP on its own does not provide for connectivity; it only specifies how packets should be created and addressed. The Transmission Control Protocol (TCP) provides the reliable connection. The two protocols go hand in hand and are referred to jointly as TCP/IP or the “internet protocol suite”.
TCP/IP organises how data packets are communicated and makes sure packets carry information regarding:
- which computer the message came from (source);
- where the message should go (destination);
- the order in which the message data should be reassembled (packet sequence);
- the data of the message;
- an error check to see that the message has been sent correctly.
IP allows you to address a packet and drop it into the system, but there is no direct link between source and destination; instead, there is a web of interconnecting links. IP gives each packet its destination address and the routers along the way pass it on; TCP ensures a reliable connection, checking packets for errors and requesting a re-transmission if it detects one.
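As a rough illustration of the packet fields listed above and of reassembly at the destination, the following Python sketch models packets as simple objects. The field names and chunk size are simplifications for illustration only, not the real IP or TCP header formats.

```python
# Toy model of TCP/IP-style packets -- illustrative only, not the real
# header formats. The field names mirror the list above.
import zlib
from dataclasses import dataclass

@dataclass
class Packet:
    source: str        # which computer the message came from
    destination: str   # where the message should go
    sequence: int      # the order for reassembly
    data: bytes        # the message payload
    checksum: int      # error check on the payload

def make_packets(src, dst, message, size=4):
    """Split a message into fixed-size chunks and wrap each in a packet."""
    chunks = [message[i:i + size] for i in range(0, len(message), size)]
    return [Packet(src, dst, seq, chunk, zlib.crc32(chunk))
            for seq, chunk in enumerate(chunks)]

def reassemble(packets):
    """Verify each packet's checksum, then rejoin them in sequence order."""
    for p in packets:
        if zlib.crc32(p.data) != p.checksum:
            raise ValueError(f"packet {p.sequence} corrupt: re-transmit")
    return b"".join(p.data for p in sorted(packets, key=lambda p: p.sequence))

packets = make_packets("10.0.0.1", "10.0.0.2", b"Hello, internet!")
packets.reverse()  # simulate out-of-order arrival over different routes
assert reassemble(packets) == b"Hello, internet!"
```

A real TCP stack does essentially this with far more sophistication: sequence numbers put segments back in order, and failed checksums trigger re-transmission of the damaged ones.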
Sitting “on top” of TCP/IP are protocols which govern the handling of specific applications data. The most common include:
- FTP (file transfer protocol) is used to transfer large files. It is often used for organising files on a web server for a website.
- HTTP (hypertext transfer protocol) transfers web pages from web servers to the client; this is why web page addresses start with http. An https address indicates a secure connection over which the traffic is encrypted.
- SMTP (simple mail transfer protocol) is used to send email and POP (post office protocol) to receive email.
- VOIP (voice over internet protocol) is the set of protocols that enables voice conversations over the internet.
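These application protocols are, at heart, structured text conversations. As a sketch, here is the text a browser sends to fetch a page over HTTP; the host is CERN's, mentioned later in this article, and the headers shown are a minimal illustrative subset.

```python
# A minimal HTTP/1.1 GET request: lines of text separated by CRLF,
# with a blank line marking the end of the headers.
host = "info.cern.ch"          # example host: CERN's historic web server
request = (
    "GET / HTTP/1.1\r\n"       # method, resource path and protocol version
    f"Host: {host}\r\n"        # HTTP/1.1 requires the Host header
    "Connection: close\r\n"    # ask the server to close after replying
    "\r\n"                     # blank line ends the header section
)
print(request)
```

Sent over a TCP connection to port 80, text like this would elicit a response beginning with a status line such as "HTTP/1.1 200 OK", followed by the page itself.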
The internet has no centralised governance in either technological implementation or policies for access and usage; each constituent network sets its own policies. Only the overarching definitions of the two principal name spaces in the internet, the Internet Protocol address (IP address) space and the Domain Name System (DNS), are directed by a maintainer organisation, the Internet Corporation for Assigned Names and Numbers (ICANN). The technical underpinning and standardisation of the core protocols is an activity of the Internet Engineering Task Force (IETF), a non-profit organisation of loosely affiliated international participants with which anyone may associate by contributing technical expertise.
The terms “internet” and “web” are often used interchangeably. However, the two are of course not the same. The internet is a network of networks, defined by the TCP/IP standards. The web, on the other hand, is an information space.
What was christened “the World Wide Web” (abbreviated WWW), and is now usually referred to simply as “the web”, is the global information space where documents and other web resources, identified by Uniform Resource Locators (URLs), coded using the Hypertext Markup Language (HTML) and transferred via the Hypertext Transfer Protocol (HTTP), are interlinked and accessed via the internet. The web is an application that figuratively runs “on top of” the internet.
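The parts of a URL can be picked apart with Python's standard library. The example address used here is the well-known historic home page of the first website at CERN.

```python
# Splitting a URL into the parts described above, using Python's
# standard library. The example is the first website's historic address.
from urllib.parse import urlsplit

parts = urlsplit("http://info.cern.ch/hypertext/WWW/TheProject.html")
print(parts.scheme)  # the protocol used to fetch the resource: "http"
print(parts.netloc)  # the host that serves it: "info.cern.ch"
print(parts.path)    # the resource on that host
```

The scheme tells the browser which protocol to speak, the host is looked up in the DNS to find the server, and the path identifies the document on that server.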
The web was invented in 1989 by Tim Berners-Lee, a British scientist working at the CERN laboratories in Switzerland. It was originally conceived and developed to meet the demand for automatic information sharing between scientists in universities and institutes around the world.
The first website in the world, at CERN, was dedicated to the World Wide Web project itself and was hosted on Berners-Lee’s NeXT computer. It described the basic features of the web: how to access other people’s documents and how to set up your own server. The NeXT machine – the original web server – is still at CERN. In 2013, as part of a project to restore it, CERN reinstated the world’s first website at its original address (http://info.cern.ch).
On 30 April 1993 CERN put the web software in the public domain, and made the next release available with an open licence as a surer way to maximise its dissemination. By making the software required to run a web server freely available, along with a basic “line-mode” browser and a library of code, CERN allowed the web to flourish.
Central to the web are embedded hyperlinks which facilitate navigation between pages, effectively joining it together.
The release of the Mosaic web browser in 1993 made the web much more usable, notably by displaying images alongside the text of a page.
The web has been central to the development of the information age and is the primary tool billions of people now use to interact on the internet.
The smartphone gave rise to small applications specifically designed for mobile devices, commonly called “apps”, which are not part of the web. Native mobile apps are built for a specific platform, such as Apple iOS or Android. Confusingly, though, we also have “web apps” – websites designed fluidly, with views optimised for smartphones.
Andrew Blum: Tubes: Behind the Scenes at the Internet
Talk by Andrew Blum for NY Tech Alliance: What is the Internet?
Wikipedia: Internet Protocol
World Wide Web Foundation: History of the Web