Bandwidth and Congestion

What does application performance in the network mean?

In the case of network-related application performance problems, we usually think of limited bandwidth and network congestion as possible causes. We may assume that higher speed and less traffic will solve all issues. In practice, the elimination of network-related performance problems is one of the most complex tasks in data communication.

Which is better: DSL or CATV? There is no definite answer to this question. Supporters of CATV networks, which transmit data via cable modems, praise the high bandwidth of their solution. DSL supporters, on the other hand, point out the risks of sharing the available bandwidth with competing applications. In this article we examine the relationship between bandwidth and congestion from the perspective of application performance analysis.

Bandwidth

In signal processing, bandwidth is the width of the interval in a frequency spectrum in which the dominant frequency components of a transmitted or stored signal lie. It is characterised by a lower and an upper cut-off frequency. Depending on the application, there are various definitions of these two limit values, and depending on the context, different bandwidths serve as characteristic values. The term "bandwidth" is used to describe signal transmission systems.

We define bandwidth delay as the effect of serialisation that occurs between the first transmitted bit and the last received bit of a message (packet). Most important for performance analysis is what we call "bottleneck bandwidth". This indicates the speed of the connection at its slowest point and is the primary influencing factor on the packet arrival rate at the destination. For each packet, the serialisation delay is determined by the packet size and the connection speed. For example, on a 4 Mbit/s connection it takes about three milliseconds to serialise a 1,500-byte packet. This calculation can easily be extended to an entire operation: we determine the number of bytes sent or received on the connection, multiply by 8 bits per byte, and divide the result by the bottleneck connection speed. However, we must take into account that asymmetric connections have different upstream and downstream speeds.
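As a minimal sketch of this arithmetic (the link speed and packet size are simply the example values from the text):

    def serialisation_delay_ms(packet_bytes, link_bps):
        """Time needed to clock one packet onto the wire, in milliseconds."""
        return packet_bytes * 8 / link_bps * 1000

    # The example from the text: a 1,500-byte packet on a 4 Mbit/s link.
    print(serialisation_delay_ms(1500, 4_000_000))  # 3.0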

Bandwidth effect = ([sum of the sent/received bytes] x [8 bits]) / [bottleneck speed]

The bandwidth effect is calculated for one operation (100KBytes are transmitted over a 2,048Kbit/s connection and 1,024KB are received) as follows:

  • Upstream bandwidth effect: [100,000 x 8] / [2,048,000] = 390 milliseconds
  • Downstream bandwidth effect: [1,024,000 x 8] / [2,048,000] = 4,000 milliseconds

In order to achieve higher accuracy, the different packet headers of the respective transmission media (Ethernet and the WAN link) should be taken into account. The difference in header size is typically around 8 to 10 bytes per packet.
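The calculation above can be reproduced in a few lines of Python; the optional header correction reflects the 8 to 10 bytes per packet mentioned above, and the exact value is an assumption that depends on the media involved:

    def bandwidth_effect_ms(payload_bytes, bottleneck_bps,
                            packets=0, header_delta_bytes=0):
        """Serialisation time of an operation at the bottleneck link, in ms.

        packets and header_delta_bytes optionally correct for the different
        header sizes of the media involved (roughly 8 to 10 bytes per packet).
        """
        total_bytes = payload_bytes + packets * header_delta_bytes
        return total_bytes * 8 / bottleneck_bps * 1000

    # The example operation from the text on a 2,048 kbit/s link:
    print(bandwidth_effect_ms(100_000, 2_048_000))    # upstream:   ~390.6 ms
    print(bandwidth_effect_ms(1_024_000, 2_048_000))  # downstream: 4000.0 ms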

However, bandwidth limitations only affect the data transmission phases of the operation in question. Each transmitted data flow experiences additional delays in the network, and TCP flow control and other factors may cause further delays. The more intensively an organisation uses its communication channels, the more sensitive it becomes to network latency: in practice the available bandwidth decreases, but this fact is masked by the increase in delay.

Does the operation use the entire available bandwidth? This question is not always easy to answer. The simplest way to visualise the available bandwidth is to graph the data throughput in each direction and compare the unidirectional throughput with the measured bandwidth of the relevant link. If the above question can be answered with "Yes", the bottleneck of the operation lies in the available bandwidth. If the answer is "No", other limitations restrict performance. However, this does not mean that the bandwidth is not a significant limitation. It simply implies that there may be other factors that prevent the operation from reaching the bandwidth limit.
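A sketch of such a comparison, assuming the trace has already been reduced to (timestamp, size, direction) tuples (a hypothetical intermediate format, not a specific tool's output):

    from collections import defaultdict

    def throughput_per_second(packets, direction):
        """Unidirectional throughput in 1-second bins (bit/s).

        packets is an iterable of (timestamp_s, size_bytes, direction)
        tuples extracted beforehand from a trace file.
        """
        bins = defaultdict(int)
        for ts, size, d in packets:
            if d == direction:
                bins[int(ts)] += size * 8
        return dict(sorted(bins.items()))

    LINK_BPS = 10_000_000  # the measured bandwidth of the relevant link

    trace = [(0.1, 1500, "down"), (0.4, 1500, "down"), (1.2, 1500, "down")]
    for second, bps in throughput_per_second(trace, "down").items():
        limited = "bandwidth-limited" if bps >= 0.95 * LINK_BPS else ""
        print(second, bps, limited)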

Figure 1: FTP transfer limited by the available bandwidth of 10 Mbit/s

Network resources are usually shared between users. If multiple connections are transmitted over one link, TCP flow control prevents a single data stream from consuming all of the available bandwidth; the available bandwidth therefore limits the throughput of each individual stream.

Congestion

Congestion occurs when data arrives at a network interface faster than the medium can transmit it. When this happens, the packets to be transmitted are placed in an output queue, where they remain until all preceding packets in the queue have been sent. The individual queuing delays in the network add up to the end-to-end network delay, which has a significant effect on all data transmissions. Data-intensive applications are affected by the increase in round-trip delay, while ordinary applications can be affected by TCP flow control and congestion avoidance algorithms.
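A toy model of such an output queue illustrates how the delay builds up; the packet sizes and link speed below are illustrative values, not taken from a measurement:

    def queueing_delays(arrivals, sizes_bytes, link_bps):
        """FIFO output queue: per-packet wait (s) before transmission starts."""
        delays, link_free_at = [], 0.0
        for t, size in zip(arrivals, sizes_bytes):
            start = max(t, link_free_at)                 # wait for earlier packets
            delays.append(start - t)                     # queueing delay
            link_free_at = start + size * 8 / link_bps   # plus serialisation
        return delays

    # Three 1,500-byte packets arriving simultaneously on a 2 Mbit/s link:
    print(queueing_delays([0.0, 0.0, 0.0], [1500] * 3, 2_000_000))
    # [0.0, 0.006, 0.012] -- each packet waits for those queued ahead of it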

Within a data stream, congestion first slows down the TCP slow start algorithm, which gradually increases the sender's congestion window (CWND). In addition, the added delay increases the bandwidth delay product (BDP), which raises the probability that the receiver's TCP window will be fully utilised.
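The BDP arithmetic is simple enough to check by hand; the link speed, RTT values and 64 KB receive window below are assumptions chosen only to illustrate the effect:

    def bdp_bytes(link_bps, rtt_s):
        """Bytes that must be in flight to keep the link fully utilised."""
        return link_bps * rtt_s / 8

    # A 10 Mbit/s path whose RTT grows from 20 ms to 120 ms under congestion,
    # measured against a 64 KB receive window:
    for rtt in (0.020, 0.120):
        bdp = bdp_bytes(10_000_000, rtt)
        capped = " -> receive window (65,535 bytes) becomes the limit" if bdp > 65535 else ""
        print(f"RTT {rtt * 1000:.0f} ms: BDP {bdp:,.0f} bytes{capped}")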

If network congestion increases, the queue may fill the buffer in one of the routers. If incoming packets exceed the storage capacity of the queue, the excess packets must be discarded. Routers implement various algorithms to determine which packets should be discarded; these can cause the effect of congestion to be spread over several connections, or only lower-priority traffic to be affected. When TCP detects these dropped packets, congestion is always assumed to be the cause. The packet loss causes the sending TCP to reduce its congestion window by 50%, after which the window grows again gradually during the congestion avoidance phase.
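A round-by-round toy model of this behaviour (slow start is deliberately omitted, and the initial window and loss position are arbitrary):

    def cwnd_evolution(rounds, loss_rounds, initial=10):
        """Toy model: congestion window in segments, per round trip.

        A loss halves the window; otherwise congestion avoidance adds one
        segment per round trip (slow start is deliberately omitted here).
        """
        cwnd, history = initial, []
        for r in range(rounds):
            if r in loss_rounds:
                cwnd = max(1, cwnd // 2)   # multiplicative decrease
            else:
                cwnd += 1                  # additive increase
            history.append(cwnd)
        return history

    print(cwnd_evolution(10, loss_rounds={4}))
    # [11, 12, 13, 14, 7, 8, 9, 10, 11, 12] -- the familiar AIMD sawtooth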

A network path has a minimum delay. In theory, this consists of two components: the propagation delay of the links (distance) and the processing delay of the routers along the route. This is usually called the path delay. Any delay beyond this amount can be attributed to congestion.
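As a rough sketch of this decomposition (the rule of thumb of roughly 200 km per millisecond in fibre and the per-hop processing delay are assumptions):

    FIBRE_KM_PER_MS = 200  # a signal covers roughly 200 km per ms in fibre

    def path_delay_ms(distance_km, hops, per_hop_ms=0.05):
        """Minimum one-way path delay: propagation plus router processing."""
        return distance_km / FIBRE_KM_PER_MS + hops * per_hop_ms

    # Roughly 1,000 km of fibre and 8 router hops:
    print(f"{path_delay_ms(1000, 8):.2f} ms")  # 5.40 ms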

The most accurate way to measure congestion is to capture data at the client as well as at the server side, then merge the two trace files and generate a transaction trace. This approach provides accurate send and receive timestamps for each transmitted packet, so that the transit times of the packets can be analysed. If transit times lie above the minimum observed value, there is probably congestion on the path. This is based on the assumption that the minimum observed transit time is equal to the path delay.
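A sketch of this analysis, assuming the two traces have already been merged into matched send/receive timestamps per packet (a hypothetical data format):

    def congestion_report(send_ts, recv_ts, threshold_ms=1.0):
        """Per-packet transit times from merged client/server traces.

        Treats the minimum observed transit time as the path delay and
        flags packets whose extra delay exceeds threshold_ms as queued.
        """
        transit = [(r - s) * 1000 for s, r in zip(send_ts, recv_ts)]
        path_delay = min(transit)
        queued = [t for t in transit if t - path_delay > threshold_ms]
        print(f"path delay {path_delay:.1f} ms, "
              f"{len(queued)}/{len(transit)} packets delayed by congestion")

    # Hypothetical matched timestamps (in seconds) for five packets:
    congestion_report([0.00, 0.10, 0.20, 0.30, 0.40],
                      [0.005, 0.105, 0.245, 0.330, 0.405])
    # path delay 5.0 ms, 2/5 packets delayed by congestion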

Congestion can be visualised with the aid of a timing diagram that shows the differences between the minimum, mean and maximum delays. You will probably find very short bursts of congestion that affect only a handful of packets, as well as longer periods of congestion that affect most of the packets in a data stream. The following timing diagrams illustrate these conditions.

Figure 2: Packet transit times: the average transit time is 103 milliseconds, only three milliseconds above the minimum path latency.
Figure 3: Traffic congestion: the path delay is five milliseconds, the average transit time is 141 milliseconds and the maximum transit time exceeds 1,000 milliseconds.

Corrective actions: Bandwidth and Congestion

Removing pure bandwidth limitations is easy: you simply increase the bandwidth of the physical infrastructure. At the logical level (in applications), improving throughput is often more complicated.

Data compression is one way to reduce the amount of transmitted data; caching and thin client solutions can also help.

Congestion can be eliminated in a similar manner. Alternatively, prioritisation can be used to ensure that certain data streams are transmitted preferentially in the network.

How do we monitor and manage congestion in the network?

In one of our next articles we will analyse the effects of packet loss and take a closer look at the TCP slow start algorithm and the congestion window.
