Global settings

== Expert settings ==


The Expert settings contain parameters that usually only need to be changed in rare installation scenarios or when a specific need calls for a different operation mode.


=== Packet length accounting ===


This setting allows you to configure which packet length is used for all traffic counters and incidents. The following modes are possible:


* Layer 1: Packet length is accounted on Layer 1, including the preamble (7 bytes), SFD (1 byte) and inter-frame gap (12 bytes)
* Layer 2 without frame check sequence (default): Packet length is accounted on Layer 2 without the frame check sequence (4 bytes)
* Layer 2 with frame check sequence: Packet length is accounted on Layer 2 including the frame check sequence (4 bytes)

When switching to another mode, it is only applied to new packets. Older packet size statistics will not be changed.
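The relationship between the three modes can be sketched as follows, assuming `frame_len` is the Layer 2 frame length including the 4-byte FCS; the constants are the standard Ethernet overheads listed above, and the function name is illustrative:

```python
# Standard Ethernet overheads (in bytes).
PREAMBLE = 7
SFD = 1
IFG = 12
FCS = 4

def accounted_length(frame_len: int, mode: str) -> int:
    """Length accounted for a frame of `frame_len` bytes (L2 incl. FCS)."""
    if mode == "layer1":
        # Layer 1 adds preamble, SFD and inter-frame gap on top of the frame.
        return frame_len + PREAMBLE + SFD + IFG
    if mode == "layer2_no_fcs":  # default mode
        return frame_len - FCS
    if mode == "layer2_fcs":
        return frame_len
    raise ValueError(f"unknown mode: {mode}")

# A minimal 64-byte Ethernet frame (including FCS):
assert accounted_length(64, "layer1") == 84
assert accounted_length(64, "layer2_no_fcs") == 60
assert accounted_length(64, "layer2_fcs") == 64
```

The same frame thus yields three different byte counts, which is why switching modes makes new counters incomparable with older statistics.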


=== VLAN handling ===


The Allegro Network Multimeter can ignore VLAN tags for connection tracking. Enabling this option may be necessary if the Network Multimeter sees network traffic in which the VLAN tags change for the same connection. For example, depending on the configuration of the mirror port to which the Network Multimeter is connected, incoming traffic could contain a VLAN tag while outgoing traffic does not. In this example, a connection would appear twice in the statistics, which is often desirable because it helps identify a network misconfiguration. Sometimes, however, this behavior is intended and the user wants to see only one connection. In this scenario, the option can be enabled to ignore varying VLAN tags for an otherwise identical connection.
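Why differing tags split one connection in two can be illustrated with a hypothetical tracking key (the field names are illustrative, not the Multimeter's internals):

```python
# Hypothetical connection-tracking key: ignoring the VLAN tag merges the
# tagged and untagged directions of the same connection into one entry.
def flow_key(src, dst, sport, dport, vlan, ignore_vlan=False):
    key = (src, dst, sport, dport)
    return key if ignore_vlan else key + (vlan,)

# Incoming direction carries VLAN 100, outgoing direction is untagged:
tagged = flow_key("10.0.0.1", "10.0.0.2", 1234, 80, vlan=100)
untagged = flow_key("10.0.0.1", "10.0.0.2", 1234, 80, vlan=None)
assert tagged != untagged  # tracked as two separate connections

# With the option enabled, both directions map to the same connection:
assert flow_key("10.0.0.1", "10.0.0.2", 1234, 80, 100, ignore_vlan=True) == \
       flow_key("10.0.0.1", "10.0.0.2", 1234, 80, None, ignore_vlan=True)
```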


=== Tunnel view mode ===


The Allegro Network Multimeter can decapsulate ERSPAN type II and type III traffic. In this mode, all non-ERSPAN traffic is discarded. While this mode is active, a drop counter on the dashboard displays the number of discarded non-ERSPAN packets. The Multimeter will show the encapsulated content in all analysis modules. When capturing, packets are stored as seen on the wire, with the complete outer Layer 2, Layer 3, GRE and ERSPAN headers.
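As a rough orientation, the outer layers stripped in this mode stack up as follows, assuming an IPv4 outer header without options and a GRE header carrying a sequence number (as ERSPAN type II uses); the sizes are illustrative, not taken from the Multimeter:

```python
# Illustrative header sizes for an ERSPAN type II encapsulated frame.
OUTER_ETH = 14    # outer Ethernet header
OUTER_IPV4 = 20   # outer IPv4 header, no options
GRE_WITH_SEQ = 8  # 4-byte GRE base header + 4-byte sequence number
ERSPAN_II = 8     # fixed 8-byte ERSPAN type II header

def inner_frame_offset() -> int:
    """Offset of the encapsulated (inner) frame within the outer packet."""
    return OUTER_ETH + OUTER_IPV4 + GRE_WITH_SEQ + ERSPAN_II

assert inner_frame_offset() == 50
```

The analysis modules work on the inner frame starting at this offset, while captures retain the outer headers as well.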


=== Database mode settings ===


The database mode is a special analysis mode for high-performance Network Multimeters with multiple processors to increase the performance of such systems. It is normally enabled automatically, but depending on the actual network traffic and system usage, some parameter tweaks might be necessary to improve overall system performance.
You should only change these parameters in consultation with Allegro Packets support.
These settings are only visible if your Network Multimeter is capable of running this mode.


=== Network performance ===


There are several network performance settings available to improve performance on high-performance systems in case of packet drops during very high receive bandwidth. They are only visible if your Network Multimeter is capable of changing these settings.


* Max RX queues per socket: This setting specifies the number of threads dedicated to read and write interactions with the network interface controllers. Increasing this value raises the receive bandwidth the system can handle before packet drops occur. Decreasing it improves data analysis. The default setting of 2 RX queues is suitable for most configurations, since data analysis typically needs far more processing resources.
* Use Hyper-Threading for RX queues: This setting enables or disables Hyper-Threading for the threads dedicated to read and write interactions with the network interface controllers. Disabling it can improve network performance, as the RX queues will be distributed to physical CPU cores only. Enabling it distributes RX queues to virtual Hyper-Threading CPU cores as well, which is not as efficient as using physical CPU cores only, but improves data analysis since more CPU cores are available for those tasks. Hyper-Threading is used by default, which is suitable for most configurations since data analysis typically needs far more processing resources.
* Preferred network interface controller: This setting allows fine-tuning of network and data analysis performance for dedicated network controllers. The selected set of network controllers will be preferred over the others. Usually the fastest network controller, or the one carrying the most traffic, should be preferred. The '''Auto''' setting is used by default, preferring the fastest network controller.
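The trade-off behind the first setting can be illustrated with a simple core budget (purely illustrative, not the Multimeter's actual scheduler; the two-socket assumption is hypothetical):

```python
# Illustrative core budget: RX queue threads and analysis threads compete
# for the same CPUs, so every core spent on RX is unavailable for analysis.
def core_split(total_cores: int, rx_queues_per_socket: int, sockets: int = 2):
    rx_cores = rx_queues_per_socket * sockets
    analysis_cores = total_cores - rx_cores
    return rx_cores, analysis_cores

assert core_split(32, 2) == (4, 28)   # default: most cores left for analysis
assert core_split(32, 8) == (16, 16)  # more RX headroom, less analysis capacity
```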


You should only change these parameters in consultation with Allegro Packets support.


=== Processing performance ===


Processing performance may be modified on high-performance systems. This setting is only visible if your Network Multimeter is capable of changing it.


* Processing performance mode: This setting allows fine-tuning of processing performance. With '''Analysing''', as many processing resources on all CPUs as possible are used for data analysis. With '''Capturing''', the focus is on high data throughput and low latency for capturing purposes, using only the CPU to which the preferred network controller is attached; this reduces data analysis performance. '''Analysing''' is used by default.
You should only change this parameter in consultation with Allegro Packets support.


=== Packet ring buffer timeouts ===
Two timeout settings related to the packet ring buffer can be adjusted.


* The long timeout controls the maximum period of time after which a packet is actually written to the packet ring buffer. A lower value may decrease the time difference by which packets are stored out of their real order in the packet ring buffer, but it may also increase the amount of unused overhead data in the packet ring buffer.
* The short timeout controls the period of time after which smaller batches of packets are written to the packet ring buffer, even if waiting for more packets would result in a more efficient operation. A lower value may decrease the time difference by which packets are stored out of their real order in the packet ring buffer, but it may also decrease the performance of the packet ring buffer.
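A hypothetical flush rule shows how the two timeouts could interact; the thresholds, names and batch semantics below are invented for illustration, not the actual implementation:

```python
# Illustrative flush decision for a batch of packets waiting to be written:
# full batches are written immediately, the short timeout permits smaller
# but still useful writes, and the long timeout is a hard upper bound.
def should_flush(batch_size, age_ms, full_size=64, min_size=8,
                 short_ms=10, long_ms=100):
    if batch_size >= full_size:
        return True   # full batch: the most efficient write
    if age_ms >= long_ms:
        return True   # long timeout: even a single packet is written now
    if age_ms >= short_ms and batch_size >= min_size:
        return True   # short timeout: accept a smaller, less efficient write
    return False

assert should_flush(64, 0)       # full batch flushes immediately
assert not should_flush(1, 50)   # tiny batch keeps waiting...
assert should_flush(1, 100)      # ...but never longer than the long timeout
assert should_flush(8, 10)       # medium batch flushes at the short timeout
```

Lowering either timeout makes writes happen sooner (better ordering) at the cost of smaller, less efficient writes, matching the trade-offs described above.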


=== Data retention timeout ===


When this timeout is set to a value greater than 0, data will be removed from the system after the given number of minutes. This means that entities such as IPs that have been inactive for longer than the timeout will be removed. History graph data for entities that are still active will be truncated to cover only the given timespan, while the absolute values for the whole runtime will be retained. When a packet ring buffer is active, packets that are older than the timeout will be discarded.
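The pruning rule for inactive entities can be sketched like this (a simplification with timestamps in minutes; names are hypothetical):

```python
# Sketch of the retention rule: a timeout of 0 disables pruning; otherwise
# entities inactive for longer than the timeout are dropped.
def prune(entities, now_min, timeout_min):
    if timeout_min <= 0:
        return dict(entities)
    return {ip: last_seen for ip, last_seen in entities.items()
            if now_min - last_seen <= timeout_min}

ips = {"10.0.0.1": 100, "10.0.0.2": 40}            # last activity in minutes
assert prune(ips, 130, 60) == {"10.0.0.1": 100}    # .2 was inactive for 90 min
assert prune(ips, 130, 0) == ips                   # timeout 0: keep everything
```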


=== L3 tunnel mode ===


If L3 tunnel mode is enabled for an interface, this interface will only process packets that are encapsulated in GRE or GRE+ERSPAN and targeted at the configured IP address. All other packets received on that interface will be discarded. The system will process the packets as if only the inner encapsulated packet had been seen, and any traffic captures will only contain the encapsulated packet. An interface with L3 tunnel mode enabled will respond to ARP requests and to ICMP echo requests, so it is possible to use ping to verify that the interface is reachable under the configured IP address. Currently, only IPv4 L3 tunnels are supported. Note that if the system is running in bridge packet processing mode, any links with an interface configured for L3 tunnel mode will not forward traffic.
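The acceptance rule described above amounts to a filter along the lines of the following sketch (field access is illustrative; the Multimeter's internals may differ):

```python
# GRE is IP protocol number 47. Only GRE packets addressed to the configured
# tunnel IP are processed; everything else on the interface is discarded.
GRE_PROTO = 47

def accept(dst_ip: str, ip_proto: int, tunnel_ip: str) -> bool:
    return dst_ip == tunnel_ip and ip_proto == GRE_PROTO

assert accept("192.0.2.1", 47, "192.0.2.1")      # GRE to the tunnel IP
assert not accept("192.0.2.9", 47, "192.0.2.1")  # GRE to another host
assert not accept("192.0.2.1", 6, "192.0.2.1")   # TCP: discarded
```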


=== Multithreaded capture analysis ===


This option enables the use of multiple CPUs for capture analysis, for example when analyzing a PCAP capture file or the packet ring buffer. Depending on the number of available CPUs, this can significantly speed up the analysis.


It is possible to dedicate a number of CPUs exclusively to capture analysis.
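Dedicating CPUs could look like the following partition sketch (purely illustrative; the CPU numbering and split policy are assumptions):

```python
# Reserve a fixed number of CPUs for capture analysis; the remaining CPUs
# stay available for live traffic processing.
def partition_cpus(total: int, dedicated_capture: int):
    live = list(range(total - dedicated_capture))
    capture = list(range(total - dedicated_capture, total))
    return live, capture

live, capture = partition_cpus(16, 4)
assert len(live) == 12 and len(capture) == 4
assert not set(live) & set(capture)  # the two CPU sets are disjoint
```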
This option selects the load distribution method. By default, network traffic is balanced among all processing threads based on the IP addresses. This is fast and usually the best way to achieve good load balancing.


If the network traffic only occurs between a few IP addresses, this method can lead to a load imbalance: some threads do much more work while other threads may be idle. In this scenario, "flow based balancing" can be enabled to distribute the traffic based on the IP and port information. This will lead to better utilization of all processing threads.
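The difference between the two methods can be sketched with a deterministic hash; the hash choice, key layout and thread count are illustrative, not the Multimeter's implementation:

```python
import zlib

# Deterministic helper hash over arbitrary key fields.
def _h(*fields) -> int:
    return zlib.crc32(repr(fields).encode("utf-8"))

def ip_based_thread(src_ip, dst_ip, n_threads):
    # Default method: the key ignores ports entirely.
    return _h(src_ip, dst_ip) % n_threads

def flow_based_thread(src_ip, dst_ip, sport, dport, n_threads):
    # Flow-based method: ports are part of the key.
    return _h(src_ip, dst_ip, sport, dport) % n_threads

# One busy IP pair carrying many flows (distinct source ports):
flow_threads = {flow_based_thread("10.0.0.1", "10.0.0.2", sport, 80, 8)
                for sport in range(1000, 1100)}
assert len(flow_threads) > 1  # flow-based: work spreads across threads

# IP-based balancing pins every one of those flows to the same thread:
pinned = ip_based_thread("10.0.0.1", "10.0.0.2", 8)
assert all(pinned == ip_based_thread("10.0.0.1", "10.0.0.2", 8)
           for _ in range(100))
```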