Network Performance

The performance of a network refers to the quality of service of the network as perceived by the user. There are different ways to measure network performance, depending upon the nature and design of the network. The characteristics that measure the performance of a network are:

  • Bandwidth
  • Throughput
  • Latency (Delay)
  • Bandwidth – Delay Product
  • Jitter

BANDWIDTH
One of the most essential factors in a website’s performance is the amount of bandwidth allocated to the network. Bandwidth determines how rapidly the web server is able to upload the requested information. While there are many factors to consider with respect to a site’s performance, bandwidth is frequently the limiting one.

Bandwidth is defined as the amount of data that can be transmitted in a fixed amount of time. The term is used in two different contexts with two different measuring units. For digital devices, bandwidth is measured in bits per second (bps) or bytes per second. For analog devices, bandwidth is measured in cycles per second, or Hertz (Hz).

Bandwidth is only one component of what an individual perceives as the speed of a network. People frequently confuse bandwidth with internet speed because internet service providers (ISPs) tend to advertise a fast “40 Mbps connection” in their campaigns. Actual internet speed is the amount of data you receive every second, and that also depends heavily on latency.
“Bandwidth” means “capacity”, and “speed” means “transfer rate”.

More bandwidth does not mean more speed. Suppose we double the width of a tap pipe, but the water flows at the same rate as before: there is no improvement in speed, only in how much water can flow at once. When we talk about WAN links we usually mean bandwidth, but when we talk about a LAN we usually mean speed. This is because over a WAN we are generally constrained by expensive link bandwidth, whereas over a LAN the constraint is hardware and interface data transfer rates (speed).



Bandwidth in Hertz: The range of frequencies contained in a composite signal, or the range of frequencies a channel can pass. For example, the bandwidth of a subscriber telephone line is 4 kHz.

Bandwidth in Bits per Second: The number of bits per second that a channel, a link, or a network can transmit. For example, the bandwidth of a Fast Ethernet network is a maximum of 100 Mbps, which means that the network can send at most 100 Mbps of data.

Note: There is an explicit relationship between the bandwidth in hertz and the bandwidth in bits per second: an increase in bandwidth in hertz means an increase in bandwidth in bits per second. The exact relationship depends upon whether we have baseband transmission or transmission with modulation.
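To illustrate that relationship, the Python sketch below applies the standard Nyquist (noiseless channel) and Shannon (noisy channel) formulas, which the note above alludes to without stating; the number of signal levels and the signal-to-noise ratio used here are assumptions chosen purely for the example.

    import math

    def nyquist_capacity(bandwidth_hz, levels):
        # Maximum bit rate of a noiseless channel using `levels` signal levels.
        return 2 * bandwidth_hz * math.log2(levels)

    def shannon_capacity(bandwidth_hz, snr):
        # Theoretical capacity of a noisy channel; snr is a linear ratio.
        return bandwidth_hz * math.log2(1 + snr)

    # The 4 kHz telephone line from the example above, with an assumed
    # 4-level signal and an assumed SNR of about 3162 (roughly 35 dB):
    print(f"{nyquist_capacity(4_000, 4):.0f} bps")     # 16000 bps
    print(f"{shannon_capacity(4_000, 3162):.0f} bps")  # roughly 46500 bps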

THROUGHPUT
Throughput is the number of messages successfully delivered per unit time. It is controlled by the available bandwidth, the achievable signal-to-noise ratio, and hardware limitations. The maximum throughput of a network may consequently be higher than the actual throughput achieved in everyday use. The terms ‘throughput’ and ‘bandwidth’ are often treated as the same, yet they are different: bandwidth is the potential measurement of a link, whereas throughput is an actual measurement of how fast we can send data.

Throughput is measured by tallying the amount of data transferred between multiple locations during a specific period of time, usually expressed in bits per second (bps), or in larger units such as bytes per second (Bps), kilobytes per second (KBps), megabytes per second (MBps), and gigabytes per second (GBps). Throughput may be affected by numerous factors, such as limitations of the underlying analog physical medium, the available processing power of the system components, and end-user behavior. When protocol overheads are taken into account, the useful rate of the transferred data can be significantly lower than the maximum achievable throughput.

Consider a highway with the capacity to move, say, 200 vehicles per unit time. At some moment, due to congestion on the road, only 150 vehicles are actually moving through it. In this case, the capacity is 200 vehicles per unit time, while the throughput is 150 vehicles per unit time.

Example:

Input: A network with a bandwidth of 10 Mbps can pass only an average of 12,000 frames
per minute, where each frame carries an average of 10,000 bits. What will be the
throughput for this network?

Output: We can calculate the throughput as:
Throughput = (12,000 x 10,000) / 60 = 2 Mbps
The throughput is one-fifth of the bandwidth in this case.
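The same calculation expressed as a small Python sketch:

    frames_per_minute = 12_000
    bits_per_frame = 10_000
    bandwidth_bps = 10e6          # 10 Mbps

    # Convert frames per minute into bits per second.
    throughput_bps = frames_per_minute * bits_per_frame / 60
    print(f"{throughput_bps / 1e6} Mbps")    # 2.0 Mbps
    print(throughput_bps / bandwidth_bps)    # 0.2, i.e. one-fifth of the bandwidth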

LATENCY

In a network, during the process of data communication, latency (also known as delay) is the total time taken for a complete message to arrive at the destination. It is measured from the moment the first bit of the message leaves the source to the moment the last bit of the message is delivered at the destination. Network connections with small delays are called “low-latency networks”, and network connections that suffer from long delays are known as “high-latency networks”.

High latency creates bottlenecks in network communication. It prevents the data from taking full advantage of the network pipe and effectively decreases the usable bandwidth of the network. The effect of latency on a network’s bandwidth can be temporary or persistent, depending on the source of the delays. Latency is often reported as a ping time and is measured in milliseconds (ms).

In simpler terms: latency may be defined as the time required to successfully send a packet across a network.

  • It can be measured in several ways: round trip, one way, etc.
  • It may be affected by any component in the chain used to carry the data: workstations, WAN links, routers, the LAN, servers; for large networks it may ultimately be limited by the speed of light.
    Latency = Propagation Time + Transmission Time + Queuing Time + Processing Delay

Propagation Time: The time required for a bit to travel from the source to the destination. Propagation time is calculated as the ratio between the link length (distance) and the propagation speed over the communicating medium. For example, for an electric signal, propagation time is the time taken for the signal to travel through the wire.

    Propagation time = Distance / Propagation speed

Example:

Input: What will be the propagation time when the distance between two points is
12,000 km? Assume the propagation speed to be 2.4 * 10^8 m/s in cable.

Output: We can calculate the propagation time as:
Propagation time = (12,000 * 1000) / (2.4 * 10^8) = 0.05 s = 50 ms
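In Python, the same calculation (the 12,000 km distance must first be converted to metres) looks like this:

    distance_m = 12_000 * 1_000   # 12,000 km expressed in metres
    speed_mps = 2.4e8             # propagation speed in cable, metres per second

    propagation_time_s = distance_m / speed_mps
    print(f"{propagation_time_s * 1_000:.1f} ms")   # 50.0 ms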

Transmission Time: The time it takes to push the entire message onto the transmission line, from the first bit to the last. It also includes costs like the training signals that are usually placed at the front of a packet by the sender to help the receiver synchronize its clock. The transmission time of a message depends upon the size of the message and the bandwidth of the channel.

    Transmission time = Message size / Bandwidth

Example:

Input: What will be the propagation time and the transmission time for a 2.5-kbyte
message when the bandwidth of the network is 1 Gbps? Assume the distance between
sender and receiver is 12,000 km and the propagation speed is 2.4 * 10^8 m/s.

Output: We can calculate the propagation and transmission time as:
Propagation time = (12,000 * 1000) / (2.4 * 10^8) = 50 ms
Transmission time = (2560 * 8) / 10^9 = 0.020 ms

Note: Since the message is short and the bandwidth is high, the dominant factor is the propagation time, not the transmission time (which can be ignored).

Queuing Time: The time a packet spends sitting in a router’s queue. Quite frequently the wire is busy, so we cannot transmit a packet immediately; in that case the packet sits waiting, ready to go, in a queue. Queuing time is not a fixed factor: it changes with the load on the network. These delays are predominantly determined by the amount of traffic on the system. The more the traffic, the more likely a packet is to be stuck in the queue, just sitting in memory, waiting.

Processing Delay: The delay arising from how long the router takes to figure out where to send the packet. As soon as the router finds out, it queues the packet for transmission. This cost depends mainly on the complexity of the protocol: the router must decipher enough of the packet to work out which queue to put it in. Typically the lower layers of the stack have simpler protocols. If a lower-layer device such as a switch does not know which physical port to send the packet to, it sends it out of all ports, queuing the packet in many queues at once. At a higher level, such as the IP protocol, processing may include making an ARP request to find out the physical address of the destination before queuing the packet for transmission. This situation also counts as processing delay.
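Putting the four components together, here is a minimal Python sketch of the latency formula above. The propagation and transmission figures reuse the worked example; the queuing and processing delays are assumed values chosen purely for illustration:

    def total_latency(distance_m, speed_mps, message_bits, bandwidth_bps,
                      queuing_s, processing_s):
        # Latency = propagation + transmission + queuing + processing.
        propagation_s = distance_m / speed_mps
        transmission_s = message_bits / bandwidth_bps
        return propagation_s + transmission_s + queuing_s + processing_s

    latency_s = total_latency(
        distance_m=12_000 * 1_000,   # 12,000 km
        speed_mps=2.4e8,             # propagation speed in cable
        message_bits=2_560 * 8,      # 2.5-kbyte message
        bandwidth_bps=1e9,           # 1 Gbps
        queuing_s=1e-3,              # assumed: 1 ms spent in queues
        processing_s=0.5e-3,         # assumed: 0.5 ms of processing delay
    )
    print(f"{latency_s * 1_000:.2f} ms")   # 51.52 ms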

BANDWIDTH – DELAY PRODUCT
Bandwidth and delay are two performance measurements of a link. However, what is significant in data communications is the product of the two, the bandwidth-delay product.



Let us take two hypothetical cases as examples.

Case 1: Assume a link with a bandwidth of 1 bps and a delay of 5 s. The bandwidth-delay product is 1 x 5 = 5: this is the maximum number of bits that can fill the link, so there can be at most 5 bits on the link at any time.

Case 2: Assume a link with a bandwidth of 3 bps and the same 5 s delay. There can be a maximum of 3 x 5 = 15 bits on the line. The reason is that 3 bits enter the line each second, and each bit needs 5 seconds to cross it.

In both examples, the product of bandwidth and delay is the number of bits that can fill the link. This measurement is important if we have to send data in bursts and wait for the acknowledgment of each burst before sending the next one. To use the maximum capability of the link, and to fill up the full-duplex channel, we have to make the size of our burst twice the product of bandwidth and delay: the sender ought to send a burst of 2 * bandwidth * delay bits, then wait for the receiver’s acknowledgment of part of the burst before sending the next one. The quantity 2 * bandwidth * delay is the number of bits that can be in transit at any time.
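A short Python sketch of the two cases above and the resulting burst size:

    def bandwidth_delay_product(bandwidth_bps, delay_s):
        # Number of bits that can be "in flight" on the link at one time.
        return bandwidth_bps * delay_s

    print(bandwidth_delay_product(1, 5))      # Case 1: 5 bits
    print(bandwidth_delay_product(3, 5))      # Case 2: 15 bits

    # Burst size needed to keep a full-duplex link busy while waiting for ACKs:
    print(2 * bandwidth_delay_product(3, 5))  # 30 bits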

JITTER
Jitter is another performance issue related to delay. In technical terms, jitter is “packet delay variation”: it becomes a problem when different packets of data face different delays in a network and the data at the receiving application is time-sensitive, i.e. audio or video data. Jitter is measured in milliseconds (ms) and is defined as an interference in the normal order of sending data packets. For example, if the delay for the first packet is 10 ms, for the second 35 ms, and for the third 50 ms, then a real-time destination application that uses the packets experiences jitter.
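As a sketch of how this can be quantified, the Python snippet below takes the per-packet delays from the example above and measures the variation between consecutive packets; note that real protocols such as RTP use a smoothed running estimate rather than this simple average:

    delays_ms = [10, 35, 50]   # per-packet delays from the example above

    # Variation in delay between consecutive packets.
    variations = [abs(b - a) for a, b in zip(delays_ms, delays_ms[1:])]
    print(variations)                         # [25, 15]

    # A simple jitter figure: the average of those variations.
    print(sum(variations) / len(variations))  # 20.0 ms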

Put simply, jitter is any deviation in, or displacement of, the signal pulses in a high-frequency digital signal. The deviation can be in the amplitude, the width, or the phase timing of the signal pulse. The major causes of jitter are electromagnetic interference (EMI) and crosstalk between signals. Jitter can lead to flickering of a display screen, affect the ability of a processor in a desktop or server to perform as expected, introduce clicks or other undesired effects in audio signals, and cause loss of transmitted data between network devices.

Jitter is harmful and causes network congestion and packet loss.

  • Congestion is like a traffic jam on a highway: the cars cannot move forward at a reasonable speed. Likewise, in congestion all the packets arrive at a junction at the same time, and nothing can get through.
  • The second negative effect is packet loss. When packets arrive at unexpected intervals, the receiving system is unable to process the information, which leads to missing information, also called “packet loss”. This has negative effects on video viewing: if a video becomes pixelated and skips, the network is experiencing jitter, and the result of the jitter is packet loss. When you are playing a game online, the effect of packet loss can be that a player begins to jump around the screen randomly; even worse, the game skips from one scene to the next, passing over part of the game play.

Notice that the times at which the packets are sent are evenly spaced, but the times at which they arrive at the receiver side are not: one of the packets faces an unexpected delay on its way and is received after the expected time. This is jitter.

A jitter buffer can reduce the effects of jitter, whether in a network, on a router or switch, or on a computer. The system at the destination receiving the network packets usually receives them from the buffer rather than from the source system directly, and each packet is fed out of the buffer at a regular rate. Another approach to diminish jitter when multiple paths for traffic are available is to selectively route traffic along the most stable paths, or to always pick the path that comes closest to the targeted packet delivery rate.
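As a toy illustration of a fixed-delay jitter buffer, the Python sketch below uses made-up arrival times and an assumed 40 ms playout delay; a packet that arrives after its scheduled playout time is treated as lost:

    # Packets sent 20 ms apart arrive at irregular times (ms).
    arrival_ms = [10, 35, 50, 115]
    send_interval_ms = 20
    buffer_delay_ms = 40   # assumed fixed playout delay

    # Play each packet at send_time + buffer_delay; a packet that arrives
    # after its scheduled playout time is discarded (counts as loss).
    for i, arrival in enumerate(arrival_ms):
        playout_ms = i * send_interval_ms + buffer_delay_ms
        status = "played" if arrival <= playout_ms else "late, dropped"
        print(f"packet {i}: arrival={arrival} ms, playout={playout_ms} ms -> {status}")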
