Application Performance in the Network
Requirements for measurement analysis at the packet layer
As a network administrator, you may have to troubleshoot poor application performance problems.
The network could be responsible for performance problems in multiple areas for many reasons:
- Adequate network bandwidth is now available to most applications, even globally, so delays and packet loss on a corporate LAN are more likely to be the root cause of problems than a lack of bandwidth.
- Network appliances sit in the path between branch offices and data centres to improve application delivery through optimisation. Acting as transparent proxies, these systems actively interact with network traffic to increase throughput and reduce latency.
In an ideal world, Application Performance Management (APM) or Application-aware Network Performance Management (AANPM) solutions should automatically isolate errors and provide all the diagnostic information that is needed for corrective action. But the reality is much more complex. Interactive problems, unexpected behaviour of applications or networks, poor configuration or the desire for solid evidence require manual troubleshooting by an administrator.
In this article and upcoming posts on our blog, we will explain the possible performance limitations, how to measure and quantify them, show their impact, and suggest meaningful explanations and possible ways to correct them. It's about identifying potential performance issues, solving them faster and more accurately, while working more effectively with users and application owners.
In this article, we present the essential prerequisites for such an analysis and describe how it is carried out with measurements at the packet level.
Packet flow diagrams are used to illustrate message flows on the network. The following diagram conventions apply:
- Each arrow represents a TCP packet,
- Blue arrows are used to display data packets,
- Red arrows are used to represent TCP-ACK packets,
- The length of an arrow represents the network delay: the longer the arrow, the higher the delay,
- The timeline always runs from top to bottom.
Questions and Answers
We assume client-to-server communication based on a request/reply mechanism over the TCP/IP transport protocol. This pattern is used in nearly all interactive business applications, including web-based applications, fat-client applications, file server access, file transfers, backups, and so on. Because only TCP/IP is considered, voice and video applications are excluded; these use other transport mechanisms.
For each operation, there is at least one request and one response at the application layer. These are called Application Layer Protocol Data Units (PDUs). A simple client-server interaction looks like this: At the application layer, a request message is passed to the client TCP stack (TCP socket) for segmentation (into packets), addressing and transmission. The functions provided by the TCP stack are usually completely transparent to the application layer.
At the receiving end of the connection (the server), the application data is extracted from the packets transmitted over the network and reassembled as application layer messages and delivered to the associated service for processing.
Once the internal processing at the application layer is complete, the server passes the response to its own TCP stack. The message is segmented and transmitted to the client over the network. The performance of this request/reply exchange is determined by two factors:
- The processing speed of the messages at the server or client, and,
- The length of time that messages are transmitted over the network.
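The request/reply mechanics described above can be sketched with Python's standard socket API. The port handling and message contents are illustrative assumptions; the point is that segmentation, addressing and reassembly happen entirely inside the TCP stack and are invisible to the application code on both sides:

```python
import socket
import threading

ready = threading.Event()
addr = {}

def server():
    # Bind to an ephemeral port and serve one request/reply exchange.
    with socket.create_server(("127.0.0.1", 0)) as srv:
        addr["port"] = srv.getsockname()[1]
        ready.set()
        conn, _ = srv.accept()
        with conn:
            request = conn.recv(4096)             # request message, reassembled by the TCP stack
            conn.sendall(b"RESPONSE:" + request)  # reply after "processing"

threading.Thread(target=server, daemon=True).start()
ready.wait()

with socket.create_connection(("127.0.0.1", addr["port"])) as c:
    c.sendall(b"GET /status")   # request PDU handed to the client TCP stack
    reply = c.recv(4096)        # response PDU delivered back to the application

print(reply.decode())  # -> RESPONSE:GET /status
```

Neither side sees packets or segment boundaries; only a packet-level capture reveals how the network actually carried these two messages.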
Therefore, both areas should be considered separately in a performance analysis. The messages reassembled after transmission represent the application-centric view, while the packets captured by the Allegro Network Multimeter in a recorded pcap file tell us how efficiently the network is transporting those messages.
The figure shows example measurement results for packet acknowledgement times. The upper graph shows the global view, where delays in the range of seconds are visible. The list of individual servers below it, however, shows that these servers themselves only exhibit delays in the low millisecond range.
From Application Messages to Network Packets
Most application layer messages are transmitted in more than one data packet, because a message is typically larger than the maximum segment size (MSS), the payload that a single network packet can carry. For Ethernet, this is typically 1460 bytes. The packets belonging to a request, or to the resulting response, can be described as one data flow; the combined payload of all data packets in a flow represents the message transmitted at the application layer.
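The relationship between message size and packet count is simple arithmetic, sketched below under the assumption of the 1460-byte Ethernet MSS mentioned above (real stacks may negotiate a different MSS during the TCP handshake):

```python
import math

def segments_needed(message_bytes: int, mss: int = 1460) -> int:
    """Number of data packets required to carry one application message,
    assuming a fixed MSS (1460 bytes is typical for Ethernet)."""
    return max(1, math.ceil(message_bytes / mss))

print(segments_needed(1000))   # -> 1 (fits in a single packet)
print(segments_needed(4000))   # -> 3 (4000 / 1460 rounds up to 3)
```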
Figure 3 shows a simple operation consisting of two request and response sequences. It illustrates the underlying packet flows on the network, where each message is divided into three data packets. The second diagram is an application-oriented view which shows the exchange of information.
Determination of the Processing Times
To analyze application performance in general, and the performance of individual operations in particular, we need to examine the factors that affect the transmission of a request and its resulting response: the processing times at the client and the server, and the delays occurring on the transmission medium. If the flow chart is extended, the total delay divides into the following four categories:
- The transmission time of the client when the message is sent,
- Server processing time,
- Transmission time of the server when sending the response,
- Processing time of the client.
The measurement of the server's processing delay begins when the server receives the last packet of the client request, which marks the end of the request message. It ends with the first packet of the response, which marks the beginning of the reply message.
The measurement of the transmission delay starts with the first packet transmitted by the server in response to a previously received request and ends with the last packet of the reply sequence. This group of packets represents the entire message transported over the network.
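The two measurements just described can be sketched on a toy packet trace. Each entry is a (timestamp, direction) pair; the field names and values are illustrative assumptions, not the Multimeter's capture format:

```python
def server_delays(packets):
    """Return (processing_delay, transmission_delay) in seconds.

    processing_delay:   last request packet  -> first response packet
    transmission_delay: first response packet -> last response packet
    """
    req = [t for t, d in packets if d == "client->server"]
    rsp = [t for t, d in packets if d == "server->client"]
    processing = rsp[0] - req[-1]
    transmission = rsp[-1] - rsp[0]
    return processing, transmission

trace = [
    (0.000, "client->server"),   # request, first packet
    (0.002, "client->server"),   # request, last packet
    (0.052, "server->client"),   # response, first packet
    (0.060, "server->client"),   # response, last packet
]
proc, xmit = server_delays(trace)
print(round(proc, 3), round(xmit, 3))  # -> 0.05 0.008
```

Here the server spent 50 ms processing the request and 8 ms transmitting the complete response over the network.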
Figure 7 shows how the Allegro Network Multimeter displays application layer response times for SSL traffic. On the left-hand side, the times for establishing the encrypted connection are shown. On the right-hand side, the duration of the response to an encrypted request (the time between the first client packet and the first server response packet) is visualised.
The described measurements feed into a performance analysis framework that identifies nine potential performance bottlenecks between the clients, the network and the servers. Each application performance assessment should therefore analyze all nine problem areas:
- Server processing delays,
- Client processing delays,
- Bandwidth bottlenecks,
- Packet losses,
- Flow control (zero window),
- Chatty protocols combined with long delays,
- Flow controls at the application layer (application windowing),
- The Nagle algorithm,
- The TCP slow start algorithm.
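One of the bottlenecks above, the Nagle algorithm, delays small packets so they can be coalesced, which can hurt chatty request/reply applications. A common remedy is to disable it per socket with the standard TCP_NODELAY option; this sketch only shows the option being set, not a full connection:

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # disable Nagle
nodelay = s.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY)
print(nodelay)  # non-zero means Nagle is disabled on this socket
s.close()
```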
In this article, we looked at two potential performance bottlenecks: server and client processing delays. In an upcoming blog article, we will analyze possible performance issues related to available bandwidth and congestion.