Generic troubleshooting processes

'''Allegro Network Multimeter troubleshooting workflows'''


Every now and then we get asked what a (generic) troubleshooting approach/workflow with an Allegro Network Multimeter would look like. And rightfully so, because the endless possibilities of an Allegro can be overwhelming at first.


In this tutorial, we’ll go into several topics that might be of interest to you, the user, while working with an Allegro Network Multimeter.




As becomes clear from the above illustration, the LIVE view displays Top Talker information for the selected LIVE time frame (in this case 10 minutes), accompanied by live traffic indicators based on packets per second and bits per second.


When selecting the “Last 10 minutes” view mode, the Top Talkers will be accompanied by the total traffic in packets and bytes during the selected time frame.


All of the most important graphs related to high-level quality and performance monitoring and troubleshooting are gathered on this page.




[[File:DHCP.png|1100x1100px]]
    
    


Because you already zoomed in to a specific time frame on the graph, this page will now only show you the client/DHCP-server relations that occurred during the time frame you selected in the graph.


=== <u>UDP Jitter & packet loss</u> ===
The next two graphs provide trending and actionable insights for the UDP-based protocols RTP and Profinet. First up is the graph depicting jitter over time. High jitter can have a very negative impact on business-critical production services and on VoIP/Unified Communications services.




[[File:Jitter.png|700x700px]]
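
As a side note: the jitter value commonly reported for RTP is the interarrival-jitter estimate defined in RFC 3550. The minimal Python sketch below illustrates how that estimator works; it is an illustration of the standard formula only, not Allegro functionality, and the sample values are made up.

<syntaxhighlight lang="python">
# Minimal sketch of the RFC 3550 interarrival-jitter estimator.
def interarrival_jitter(arrival_times, sender_times):
    """Running jitter estimate; both time series must use the same unit."""
    jitter = 0.0
    for i in range(1, len(arrival_times)):
        # D(i-1, i): change in packet spacing at the receiver vs. the sender
        d = (arrival_times[i] - arrival_times[i - 1]) \
            - (sender_times[i] - sender_times[i - 1])
        jitter += (abs(d) - jitter) / 16.0  # smoothing gain of 1/16 per RFC 3550
    return jitter

# Packets sent every 20 ms but arriving with variable delay -> non-zero jitter
print(interarrival_jitter([0, 22, 44, 61], [0, 20, 40, 60]))
</syntaxhighlight>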




As a reference: for wired infrastructures, a retransmission ratio of up to 2% is generally accepted as still being okay. In wireless infrastructures, however, retransmission ratios of up to 10% are very common and still considered indicative of a well-functioning wireless network.
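
To make that rule of thumb concrete, here is a minimal sketch that applies it. The thresholds come from the text above; the function name and the sample packet counts are made up for illustration.

<syntaxhighlight lang="python">
# Apply the wired (2%) / wireless (10%) retransmission rule of thumb.
def retransmission_verdict(retransmitted_packets, total_packets, wireless=False):
    ratio = retransmitted_packets / total_packets * 100.0  # in percent
    threshold = 10.0 if wireless else 2.0
    verdict = "okay" if ratio <= threshold else "worth investigating"
    return f"{ratio:.2f}% -> {verdict}"

print(retransmission_verdict(150, 10_000))                 # wired:    1.50% -> okay
print(retransmission_verdict(800, 10_000, wireless=True))  # wireless: 8.00% -> okay
</syntaxhighlight>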




=== <u>TCP Zero window</u> ===
For identifying application performance problems and/or server capacity issues, the “TCP Zero Window” graph is a very powerful instrument. Here’s why: TCP zero window packets are sent out by a client or (mostly) a server whenever it can no longer optimally handle the incoming traffic. For whatever reason, its receive buffer is full, and the device will tell every sending party to slow down – by means of TCP zero window packets.
 






[[File:Zerowin2.png|1200x1200px]]


A couple of reasons for high (continuous) counts of TCP zero window packets may be things like:
* Applications that are too slow or problematic and therefore are unable to keep up
* Storage that is too slow or problematic and therefore is unable to keep up
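
If you want to cross-check what the graph shows against a packet capture, a minimal sketch along these lines counts zero-window packets per sending IP. It uses the third-party scapy library and a hypothetical file name; it is not an Allegro feature.

<syntaxhighlight lang="python">
# Count TCP zero-window packets per sending IP in a capture file.
from collections import Counter
from scapy.all import rdpcap, IP, TCP

zero_windows = Counter()
for pkt in rdpcap("capture.pcap"):  # hypothetical capture file
    if IP in pkt and TCP in pkt:
        tcp = pkt[TCP]
        # A window of 0 on anything other than a RST/SYN means:
        # "stop sending, my receive buffer is full".
        if tcp.window == 0 and not (tcp.flags.R or tcp.flags.S):
            zero_windows[pkt[IP].src] += 1

for src, count in zero_windows.most_common(10):
    print(f"{src}: {count} zero-window packets")
</syntaxhighlight>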


== IP statistics (all) ==




IPs (of interest) can quickly be found by entering (part of) an IP or resolved-name information.

When searching for a single IP or an IP range, add a subnet mask to the IP for optimal results. E.g. searching for 192.168.178.1 will give you a filtered list with all IPs matching 192.168.178.1xx; to mitigate this, add a subnet mask, like so: 192.168.178.1/32.

On the IP Statistics page, it is also possible to only present IPs that match certain (quality) metrics. To start filtering with a “complex filter”, start your entry in the search bar with a "(". The next possible inputs are then shown to you as a form of help. By using a “complex filter” in the search bar, you can narrow down the number of displayed IPs based on the following parameters:


"name", "ip", "packets", "bytes", "pps", "bps", "firsttime", "lasttime", "tcppackets", "udppackets", "tcppayload", "tcpRetrans", "tcpRetransRx", "tcpRetransTx", "category", "vlan", "mpls", "outermpls", "innermpls", "interface", "validconnections", "invalidconnections", "tcpZeroWindowRx", "tcpZeroWindowTx", "ipgroup", "mtu", "mtuRx", "mtuTx", "tcpMissedData", "("
"name", "ip", "packets", "bytes", "pps", "bps", "firsttime", "lasttime", "tcppackets", "udppackets", "tcppayload", "tcpRetrans", "tcpRetransRx", "tcpRetransTx", "category", "vlan", "mpls", "outermpls", "innermpls", "interface", "validconnections", "invalidconnections", "tcpZeroWindowRx", "tcpZeroWindowTx", "ipgroup", "mtu", "mtuRx", "mtuTx", "tcpMissedData", "("


When typing in complex filters, the use of and / or / must contain / exact match operators is allowed, in the form of: AND, &&, OR, ||, ==, ===
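
For illustration, a complex filter along the following lines would narrow the list down to a single VLAN combined with a name match. The parameter names come from the list above and the operators from the line above; the values and the exact expression syntax are assumptions and may differ per firmware version:

<pre>
(vlan == 100) AND (name == fileserver)
</pre>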



