Packet Switching
To send a message from a source end system to a destination end system, the source breaks long messages into smaller chunks of data known as packets. Between source and destination, each packet travels through communication links and packet switches (routers and link-layer switches).
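As a rough sketch of this packetization step (the 1,500-byte packet size and the function name are just assumptions for illustration, not a prescribed format):

# Sketch: splitting a long message into fixed-size packets.
def packetize(message: bytes, packet_size: int = 1500) -> list[bytes]:
    """Break a long message into chunks of at most packet_size bytes."""
    return [message[i:i + packet_size] for i in range(0, len(message), packet_size)]

packets = packetize(b"x" * 4000)      # a 4,000-byte message
print([len(p) for p in packets])      # [1500, 1500, 1000]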
Store-and-Forward Transmission
Most packet switches use store-and-forward transmission at the inputs to the links. This means that the packet switch must receive the entire packet before it can begin to transmit the first bit of the packet onto the outbound link. To gain some insight into store-and-forward transmission, let's calculate the amount of time that elapses from when the source begins to send the packet until the destination has received the entire packet. (Here we ignore the propagation delay, the time it takes for the bits to travel across the wire at near the speed of light.) The source begins to transmit at time 0; at time L/R (where L is the length of the packet in bits and R is the transmission rate in bits per second), the source has transmitted the entire packet, and the entire packet has been received and stored at the router (since there is no propagation delay). At time L/R seconds, since the router has just received the entire packet, it can begin to transmit the packet onto the outbound link toward the destination; at time 2L/R, the router has transmitted the entire packet, and the entire packet has been received by the destination. Thus, the total delay is 2L/R. If the switch instead forwarded bits as soon as they arrived (without first receiving the entire packet), the total delay would be L/R, since bits are not held up at the router.

Now suppose the source has three packets to send, and let's calculate the amount of time that elapses from when the source begins to send the first packet until the destination has received all three packets. As before, at time L/R the router begins to forward the first packet; but also at time L/R the source begins to send the second packet, since it has just finished sending the entire first packet. The same reasoning applies to the second and third packets. Finally, at time 4L/R the destination has received all three packets.

Let's now consider the general case of sending one packet from source to destination over a path consisting of N links, each of rate R (thus, there are N - 1 routers between source and destination). Applying the same logic as above, we see that the end-to-end delay is d = N * (L/R).
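As a quick sanity check on this formula (the packet size, link rate, and number of links below are made-up numbers for illustration), a short script can evaluate the store-and-forward delay:

def store_and_forward_delay(L_bits: float, R_bps: float, N_links: int) -> float:
    """End-to-end delay in seconds for one packet over N links of rate R, ignoring propagation delay."""
    return N_links * (L_bits / R_bps)

# Example: a 1,500-byte packet (12,000 bits) over 3 links of 1 Mbps each.
L, R = 12_000, 1_000_000
print(store_and_forward_delay(L, R, 3))   # 0.036 seconds

# Three packets over one router (two links): the last packet is fully received at 4*L/R.
print(4 * L / R)                          # 0.048 seconds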
Queuing Delays and Packet Loss
Each packet switch has multiple links attached to it. For each attached link, the packet switch has an output buffer (also called an output queue), which stores packets that the router is about to send into that link. The output buffers play a key role in packet switching. If an arriving packet needs to be transmitted onto a link but finds the link busy with the transmission of another packet, the arriving packet must wait in the output buffer. Thus, in addition to the store-and-forward delays, packets suffer output buffer queuing delays. These delays are variable and depend on the level of congestion in the network. Since the amount of buffer space is finite, an arriving packet may find that the buffer is completely full with other packets waiting for transmission. In this case, packet loss will occur: either the arriving packet or one of the already-queued packets will be dropped.
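A minimal sketch of this behavior, assuming a drop-tail policy (arriving packets are the ones dropped) and an invented buffer size and arrival pattern:

from collections import deque

BUFFER_SIZE = 4                      # assumed finite output-buffer capacity, in packets
output_queue = deque()
dropped = 0

for pkt in range(10):                # 10 packets arrive while the outbound link stays busy
    if len(output_queue) < BUFFER_SIZE:
        output_queue.append(pkt)     # packet waits in the output buffer (queuing delay)
    else:
        dropped += 1                 # buffer full: packet loss

print(len(output_queue), dropped)    # 4 packets queued, 6 dropped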
Forwarding Tables and Routing Protocols
In the Internet, every end system has an address called an IP address. When a source end system wants to send a packet to a destination end system, the source includes the destination's IP address in the packet's header. When a packet arrives at a router in the network, the router examines a portion of the packet's destination address and forwards the packet to an adjacent router. More specifically, each router has a forwarding table that maps destination addresses (or portions of the destination addresses) to the router's outbound links. When a packet arrives at a router, the router examines the address and searches its forwarding table, using this destination address, to find the appropriate outbound link. The router then directs the packet to this outbound link. Note that the Internet has a number of special routing protocols that are used to automatically set the forwarding tables. A routing protocol may, for example, determine the shortest path from each router to each destination and use the shortest-path results to configure the forwarding tables in the routers.
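Conceptually, a forwarding table is a mapping from destination-address prefixes to outbound links. The prefixes and link names below are made up for illustration, and real routers match on binary prefixes rather than dotted strings, but the lookup idea is the same:

# Hypothetical forwarding table: destination-address prefix -> outbound link.
forwarding_table = {
    "203.0.113": "link-1",
    "198.51.100": "link-2",
    "192.0.2": "link-3",
}

def lookup(dest_ip: str) -> str:
    """Return the outbound link whose prefix is the longest match for dest_ip."""
    matches = [p for p in forwarding_table if dest_ip.startswith(p)]
    if not matches:
        return "default-link"
    return forwarding_table[max(matches, key=len)]   # longest matching prefix wins

print(lookup("203.0.113.7"))    # link-1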
Circuit Switching
There are two fundamental approaches to moving data through a network of links and switches: circuit switching and packet switching. In circuit-switched networks, the resources needed along a path (buffers, link transmission rate) to provide communication between the end systems are reserved for the duration of the communication session between the end systems. In packet-switched networks, these resources are not reserved; a session's messages use the resources on demand and, as a consequence, may have to wait (that is, queue) for access to a communication link.
Multiplexing in Circuit-Switched Networks
A circuit in a link is implemented with either frequency-division multiplexing (FDM) or time-division multiplexing (TDM). With FDM, the frequency spectrum of a link is divided among the connections established across the link. For a TDM link, time is divided into frames of fixed duration, and each frame is divided into a fixed number of time slots.
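With TDM, a connection is dedicated one time slot in every frame, so each circuit gets 1/N of the link rate, where N is the number of slots per frame. As a concrete worked example (the link rate, slot count, and file size here are assumptions chosen for illustration):

link_rate_bps = 1_536_000                         # assumed TDM link transmission rate
slots_per_frame = 24                              # assumed number of slots per frame
circuit_rate = link_rate_bps / slots_per_frame    # each circuit gets one slot per frame

file_size_bits = 640_000
print(circuit_rate)                     # 64000.0 bps per circuit
print(file_size_bits / circuit_rate)    # 10.0 seconds to send the file over the circuit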
A Network of Networks
Over the years, the network of networks that forms the Internet has evolved into a very complex structure. Much of this evolution is driven by economics and national policy, rather than by performance considerations. In order to understand today's Internet network structure, let's incrementally build a series of network structures, with each new structure being a better approximation of the complex Internet that we have today. Recall that the overarching goal is to interconnect the access ISPs so that all end systems can send packets to each other. One naive approach would be to have each access ISP directly connect with every other access ISP. Such a mesh design is, of course, much too costly for the access ISPs, as it would require each access ISP to have a separate communication link to each of the hundreds of thousands of other access ISPs all over the world.

Our first network structure, Network Structure 1, interconnects all of the access ISPs with a single global transit ISP. Our (imaginary) global transit ISP is a network of routers and communication links that not only spans the globe, but also has at least one router near each of the hundreds of thousands of access ISPs. Of course, it would be very costly for the global ISP to build such an extensive network. To be profitable, it would naturally charge each of the access ISPs for connectivity, with the pricing reflecting (but not necessarily directly proportional to) the amount of traffic an access ISP exchanges with the global ISP. Since the access ISP pays the global transit ISP, the access ISP is said to be a customer and the global transit ISP is said to be a provider.

Now if some company builds and operates a global transit ISP that is profitable, then it is natural for other companies to build their own global transit ISPs and compete with the original global transit ISP. This leads to Network Structure 2, which consists of the hundreds of thousands of access ISPs and multiple global transit ISPs. The access ISPs certainly prefer Network Structure 2 over Network Structure 1, since they can now choose among the competing global transit providers as a function of their pricing and services. Note, however, that the global transit ISPs themselves must interconnect: otherwise, access ISPs connected to one of the global transit providers would not be able to communicate with access ISPs connected to the other global transit providers.

Network Structure 2, just described, is a two-tier hierarchy with global transit providers residing at the top tier and access ISPs at the bottom tier. This assumes that global transit ISPs are not only capable of getting close to each and every access ISP, but also find it economically desirable to do so. In reality, although some ISPs do have impressive global coverage and do directly connect with many access ISPs, no ISP has a presence in each and every city in the world. Instead, in any given region, there may be a regional ISP to which the access ISPs in the region connect. Each regional ISP then connects to tier-1 ISPs. Tier-1 ISPs are similar to our (imaginary) global transit ISP, but tier-1 ISPs, which actually do exist, do not have a presence in every city in the world. Returning to this network of networks, not only are there multiple competing tier-1 ISPs, there may also be multiple competing regional ISPs in a region. In such a hierarchy, each access ISP pays the regional ISP to which it connects, and each regional ISP pays the tier-1 ISP to which it connects. This multi-tier hierarchy of access, regional, and tier-1 ISPs is Network Structure 3.
To build a network that more closely resembles today's Internet, we must add points of presence (PoPs), multi-homing, peering, and Internet exchange points (IXPs) to the hierarchical Network Structure 3; the result is Network Structure 4. PoPs exist at all levels of the hierarchy, except for the bottom (access ISP) level. We finally arrive at Network Structure 5, which builds on top of Network Structure 4 by adding content provider networks. Google is currently one of the leading examples of such a content provider network.