Global settings

From Allegro Packets Product Wiki

The Global settings section contains parameters for adjusting the behavior of the system. The settings are split among multiple tabs, described as follows.

Generic settings

Packet processing mode

This section allows for configuring the main packet processing mode:

  • Bridge mode: In Bridge mode, all received packets are retransmitted on the corresponding partner port so that the device can be placed inline between any network components. The device is transparent and does not modify the traffic in any way. The additional latency is typically around or below 1 millisecond.
  • Sink mode: In Sink mode, packets are only received and not forwarded. This operation mode allows for installation at a Mirror port of a Switch or when using a network Tap to access the network traffic.

The packet processing mode can be changed during runtime.

Webshark support

The Allegro Network Multimeter can show a preview of the first megabyte of packets directly in the browser, called Webshark. To support this, the device needs a small amount of system memory for packet processing. This memory (~100 MB) is reserved by the Multimeter and is not available to the in-memory database used to store metadata, so the history of stored metadata is slightly shorter. If this is not desired, it is possible to disable Webshark support. Changing this value requires a restart of the processing.

Limit module processing

This setting allows you to configure which modules are active. By deactivating analysis modules you do not need, the performance of the Allegro Network Multimeter can be significantly improved, allowing higher throughput.

The following modes are possible:

  • Only capturing: Only interface statistics and the capture module are provided. Capture filters are supported, except for Layer 7 protocol recognition.
  • Up to Layer 2: Additionally all Layer 2 related modules are active such as MAC, MAC protocols, ARP and Burst Analysis.
  • Up to Layer 3: Additionally all Layer 3 related modules are active such as IP and DHCP statistics.
  • Up to Layer 4: Additionally all Layer 4 related modules are active such as TCP and Layer 4 server ports.
  • Unlimited: All modules are active.

When switching to another mode you need to restart the processing in order to activate the new settings.

Graph detail settings

It is possible to modify the detail level of all graphs in the interface. These settings allow you to see a more detailed view (with higher time resolution) or to reduce the detail level so more data can be stored on the device. Changing the default values has an impact on performance and memory usage: moving a slider to the left increases the detail level of the graphs, but also increases memory usage and decreases performance.

  • Best graph resolution: This option configures how detailed the graph information is shown in the best case (the latest information). The default value is one second, which means that each graph sample point represents one second of packet time. You can increase the resolution up to 1 millisecond, which gives a detailed sub-second representation of the traffic. You can also decrease the resolution, which enables the Multimeter to store data for a longer period of time.
  • Reduce graph resolution of old data by up to: The resolution of older graph data is automatically reduced to save memory and to allow a longer view into the traffic history. This option allows you to change this behaviour. With a reduction factor of 1/1, no reduction is done at all, which means the selected graph resolution is available for the complete time but the time period for which historical data can be seen is reduced. You can increase the reduction factor to store data for a longer period. The time printed in parentheses represents the worst-case graph resolution based on the chosen resolution and reduction factor.

Note: Regardless of these settings, the graph values are always converted to represent a value per second (when applicable). For example, the packets per second for IP addresses will always be a value literally per second even if the resolution is larger or smaller than one second. The displayed value is scaled to match this view. Especially with sub-second resolution this might be misleading.

For instance, if a network element sends one packet per second and the resolution is set to 100 milliseconds, the value might be shown as 10 packets per second, as each sample point is scaled to represent a value per second. For a detailed investigation it is recommended to select a specific time interval, since the total packet counters shown in all statistics are unscaled and represent the actual values.
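The scaling described above can be illustrated with a small calculation (a hypothetical sketch for illustration only; the Multimeter does not expose this computation):

```python
def scale_to_per_second(packets_in_sample: int, resolution_seconds: float) -> float:
    """Scale a raw per-sample packet count to the per-second rate
    that a graph sample point displays."""
    return packets_in_sample / resolution_seconds

# A host sending exactly one packet per second, graphed at 100 ms resolution:
# only 1 in 10 sample points contains a packet, but that sample point is
# displayed as 1 / 0.1 = 10 packets per second.
print(scale_to_per_second(1, 0.1))    # 10.0

# At 10 second resolution, the same host shows 10 / 10 = 1 packet per second.
print(scale_to_per_second(10, 10.0))  # 1.0
```

This is why the unscaled total counters, not the graph samples, should be used for exact packet counts.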

Performance implications: The performance degradation and memory usage depend on the actual network traffic and are not exactly predictable.

Here are some examples for reference on a Multimeter 1000 series with different configuration values (under ideal test conditions):

  • 1 second resolution, 1/1 reduction factor: 90% of default performance
  • 100 millisecond resolution, 1/1 reduction factor: 50% of default performance
  • 10 millisecond resolution, 1/1 reduction factor: 15% of default performance
  • 1 millisecond resolution, 1/1 reduction factor: 10% of default performance

pcap parallel analysis

The pcap parallel analysis feature allows analysing pcap files or the packet ring buffer in parallel to the live measurement. The settings allow enabling the feature and choosing how much of the main memory is used for parallel analysis and how many parallel slots can be used. A detailed description of the configuration values can be found here.

IPFIX settings

The Allegro Network Multimeter can run as an IPFIX exporter. These settings allow configuring the reporting. When enabled, the following settings are available:

  • IP address: Address of IPFIX collector.
  • Port: Corresponding port.
  • Protocol: TCP or UDP.
  • Update interval: Interval in seconds for sending a status update of flows.
  • UDP resend interval: Interval in seconds for resending IPFIX templates for UDP connections.
  • TCP reconnect timeout: If a TCP connection cannot be established, wait for this time period before the next attempt to establish a connection.

Individual IPFIX messages can be enabled or disabled by toggling corresponding options. See the NetFlow/IPFIX interface documentation for details about the message types.

Time settings

The Allegro Network Multimeter can be configured to use a time synchronization service. NTP is supported by all variants of the Multimeter. The PTP service may be used if the management interface supports hardware time stamping. If a GPS-capable PTP grandmaster card is available, GPS time synchronization can be used and the antenna cable delay in nanoseconds can be configured.

To enable a time service, switch to the proper type in the dropdown box. The time service field will show whether the selected service is running or not. For NTP time retrieval you can specify and edit dedicated NTP servers. If you do not specify an NTP server, a set of predefined NTP servers will be selected automatically. For PTP time retrieval, the PTP grandmaster clock identity is shown. This is usually an EUI-64 address. The first and last set of octets of the identity represent the (EUI-48) MAC address of the grandmaster.

The following settings are possible for PTP and should match the settings of the PTP grandmaster:

  • Delay mechanism: Use end-to-end (E2E), peer-to-peer (P2P) or automatic delay measurement. In case automatic measurement is selected, E2E is used at the beginning and switched to P2P when a peer delay request is received. Default is Auto.
  • Network transport: Use UDPv4, UDPv6 or Layer 2 as network transport. Default is UDPv4.
  • Domain number: The domain number of the grandmaster. This is used to define logical groups of synchronized clocks.

The GPS time retrieval option is available if a GPS-capable PTP grandmaster card is installed in the Multimeter. If no time synchronization mechanism is selected, the date and time of the device can be configured manually by entering a properly formatted date and time description. Below the time synchronization settings, the time zone used by the device can be configured. The drop-down list provides a list of cities grouped by world regions to select the appropriate time zone. To make changes take effect, click on the Save settings button at the bottom of the page. To reload the stored settings, click on Reload settings.

Email notification

Certain modules support the sending of email notifications. The following settings are used to globally configure the SMTP server used and the target email address that will receive the notifications:

  • Enable email notifications: globally enables or disables the sending of email notifications.
  • SMTP server address: the address of the SMTP server that will be used to send notification emails.
  • SMTP server port: the TCP port on which the SMTP server is listening.
  • SMTP server uses SSL: must be set to On if the SMTP server expects an SSL connection from the very start. If the SMTP server uses no SSL or STARTTLS this setting must be set to Off.
  • Ignore certificate errors: if the SSL certificate should not be validated because e.g. it is a self-signed certificate this setting can be used to turn off certificate validation.
  • Allow unencrypted connections: if an unencrypted connection must be allowed because e.g. a legacy SMTP server does not support it this setting can be used.
  • Username: the username used to log in to the SMTP server.
  • Password: the password used to log in to the SMTP server.
  • From email address: the email address from which incident notifications will be sent.
  • Target email address: the email address to which incident notifications will be sent.
  • Email links base URL: this base URL will be used to generate the HTML links in notification emails. Since the device cannot by itself determine the proper address by which it is visible to the email recipient this setting can be used to set the correct URL prefix for links sent with the notification emails.
  • Send periodic system status mail: if set to hourly or daily, a system status email will be sent to the configured target address with the selected frequency. It will contain basic system information and system health status, management interface configuration and a list of detected LLDP neighbours if the management LLDP feature is enabled.

The Send test email button can be used to verify that the entered settings are working.
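The distinction between the SMTP server uses SSL and Allow unencrypted connections settings corresponds to the standard SMTP connection modes. A minimal Python sketch (an illustration of the general concept, not the device's implementation) of how a mail client chooses the connection style:

```python
import smtplib

def smtp_connection_class(use_ssl: bool):
    """Pick the client class for the configured mode.

    use_ssl=True  -> implicit TLS from the very start of the connection
                     (commonly port 465), matching "SMTP server uses SSL: On".
    use_ssl=False -> plain connection (commonly port 25 or 587); the client
                     may then upgrade the session via STARTTLS if offered.
    """
    return smtplib.SMTP_SSL if use_ssl else smtplib.SMTP

# Example: a server expecting SSL from the start would be contacted with
#   smtp_connection_class(True)("mail.example.com", 465)
# where "mail.example.com" stands in for the configured SMTP server address.
```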

Expert settings

The Expert settings contain parameters that only need to be changed in rare installation scenarios or when a different operation mode is required.

Packet length accounting

This setting allows you to configure which packet length is used for all traffic counters and incidents. The following modes are possible:

  • Layer 1: Packet length is accounted on Layer 1, including preamble (7 bytes), SFD (1 byte) and inter-frame gap (12 bytes)
  • Layer 2 without frame check sequence (default): Packet length is accounted on Layer 2 without the frame check sequence (4 bytes)
  • Layer 2 with frame check sequence: Packet length is accounted on Layer 2 including the frame check sequence (4 bytes)

When switching to another mode, it will only be applied to new packets. Older packet size statistics will not be changed.
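The difference between the three modes is a fixed per-packet overhead, sketched below (the byte constants are the standard Ethernet values listed above; the function name and mode strings are illustrative, not part of the device's configuration):

```python
# Standard Ethernet overhead constants (bytes)
PREAMBLE = 7
SFD = 1
INTER_FRAME_GAP = 12
FCS = 4

def accounted_length(l2_without_fcs: int, mode: str) -> int:
    """Return the accounted packet length for a frame, given its
    Layer 2 length without the frame check sequence."""
    if mode == "layer1":
        return l2_without_fcs + FCS + PREAMBLE + SFD + INTER_FRAME_GAP
    if mode == "layer2_with_fcs":
        return l2_without_fcs + FCS
    return l2_without_fcs  # "layer2_without_fcs", the default mode

# A 1000-byte frame (without FCS) is accounted as:
print(accounted_length(1000, "layer2_without_fcs"))  # 1000
print(accounted_length(1000, "layer2_with_fcs"))     # 1004
print(accounted_length(1000, "layer1"))              # 1024
```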

VLAN handling

The Allegro Network Multimeter can ignore VLAN tags for connection tracking. Enabling this option may be necessary if network traffic is seen on the Network Multimeter that contains changing VLAN tags for the same connection. For example, depending on the configuration of the Mirror Port to which the Network Multimeter is connected, incoming traffic could contain a VLAN tag while outgoing traffic does not. In this example, the connection would appear twice in the statistics, which is often desired in order to identify a network misconfiguration. But sometimes this VLAN setup is intentional and the user wants to see only one connection. In this scenario, the option can be enabled to ignore varying VLAN tags for an otherwise identical connection.

Tunnel view mode

The Allegro Network Multimeter can decapsulate ERSPAN type II and type III traffic. In this mode, all non-ERSPAN traffic is discarded. When this mode is active, a drop counter on the Dashboard shows the number of discarded non-ERSPAN packets. The Multimeter shows the encapsulated content in all analysis modules. When capturing, packets with complete outer Layer 2, Layer 3, GRE and ERSPAN headers are stored as seen on the wire.

Database mode settings

The database mode is a special analysis mode for high-performance Network Multimeters with multiple processors to increase the performance on such systems. It is normally enabled automatically, but depending on the actual network traffic and system usage, some parameter tweaks might be necessary to improve overall system performance. You should only change these parameters in discussion with the Allegro Packets support department. These settings are only visible if your Network Multimeter is capable of running this mode.

You can read more about the meaning of the settings here.

Network performance

There are several network performance settings available to improve performance on high-performance systems in case of packet drops during very high incoming bandwidth. They are only visible if your Network Multimeter is capable of changing these settings.

  • Max RX queues per socket: This setting specifies the number of threads dedicated to read and write interactions with the network interface controllers. By increasing this value, network receive bandwidth can be increased before packet drops occur. By decreasing this value, data analysis will improve. The default setting of 2 RX queues is suitable for most configurations, since data analysis typically needs far more processing resources.
  • Use Hyper-Threading for RX queues: This setting allows enabling or disabling Hyper-Threading for the threads dedicated to read and write interactions with the network interface controllers. By disabling it, network performance can be improved, as the RX queues will be distributed to physical CPU cores only. By enabling it, RX queues will also be distributed to virtual Hyper-Threading CPU cores, which is not as efficient as physical CPU cores. By using Hyper-Threading, data analysis will improve, since more CPU cores are available for these tasks. Hyper-Threading is used by default. This is suitable for most configurations, as data analysis typically needs far more processing resources.
  • Preferred Network interface controller: This setting allows fine tuning of network and data analysis performance for dedicated network controllers. The selected set of network controllers will be preferred over others. Usually the fastest or the network controller with the most traffic should be preferred. The Auto setting is used by default, preferring the fastest network controller.

You should only change these parameters in discussion with the Allegro Packets support department.

Processing performance

Processing performance may be modified on high-performance systems. This is only visible if your Network Multimeter is capable of changing this setting.

  • Processing performance mode: This setting allows for fine tuning of processing performance. With Analysing, as many processing resources on all CPUs as possible are used for data analysis. With Capturing, the focus is on high data throughput and low latency for capturing purposes, by using only the CPU to which the preferred network controller is attached. This has an impact on data analysis performance. Analysing is used by default.

You should only change this parameter in discussion with the Allegro Packets support department.

Packet ring buffer timeouts

Two timeout settings related to the packet ring buffer can be adjusted.

  • The long timeout controls the maximum period of time after which a packet is actually written to the packet ring buffer. A lower value may decrease the time difference by which packets are stored out of order in the packet ring buffer, but it may increase the amount of unused overhead data in the packet ring buffer.
  • The short timeout controls the period of time after which smaller batches of packets are written to the packet ring buffer, even if waiting for more packets would result in a more efficient operation. A lower value may decrease the time difference by which packets are stored out of order in the packet ring buffer, but it may also decrease the performance of the packet ring buffer.
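The interplay of batch size and the two timeouts can be sketched as a flush decision (a simplified illustration under assumed default values; this is not the device's implementation, and the function name and thresholds are hypothetical):

```python
def should_flush(batch_len: int, oldest_age: float, idle_time: float,
                 batch_size: int = 64,
                 short_timeout: float = 0.01,
                 long_timeout: float = 0.1) -> bool:
    """Decide whether a pending batch of packets is written to the ring buffer.

    - A full batch is always written (the most efficient case).
    - idle_time >= short_timeout: no new packets arrived recently, so a smaller
      batch is written even though waiting longer would be more efficient.
    - oldest_age >= long_timeout: the oldest pending packet has waited the
      maximum time and must be written now, bounding out-of-order storage.
    """
    return (batch_len >= batch_size
            or idle_time >= short_timeout
            or oldest_age >= long_timeout)
```

Lowering either timeout makes the flush condition trigger earlier, which is why both settings trade ordering accuracy against ring buffer efficiency.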

Data retention timeout

When this timeout is set to a value greater than 0, data will be removed from the system after a given number of minutes. This means that entities like IPs, which have been inactive for longer than the timeout, will be removed. History graph data for entities that are still active will be truncated to cover only the given timespan while the absolute values for the whole runtime will be retained. When a packet ring buffer is active, packets which are older than the timeout will be discarded.

L3 tunnel mode

If L3 tunnel mode is enabled for an interface, this interface will only process packets encapsulated in GRE or GRE+ERSPAN and targeted at the configured IP address. All other packets received on that interface will be discarded. The system will process the packets as if only the inner encapsulated packet were seen, and any traffic captures will only contain the encapsulated packet. An interface with L3 tunnel mode enabled will respond to ARP requests and to ICMP echo requests, so it is possible to use ping to verify that the interface is reachable under the configured IP address. Currently only IPv4 L3 tunnels are supported. It must be noted that if the system is running in bridge packet processing mode, any links with an interface configured for L3 tunnel mode will not forward traffic.

Multithreaded capture analysis

This option enables the use of multiple CPUs for capture analysis, such as when analyzing a pcap capture file or the packet ring buffer. Depending on the number of available CPUs, this can significantly speed up the analysis.

It is possible to dedicate a number of CPUs exclusively to capture analysis. Since these CPUs are not available for live packet processing, the performance of live traffic analysis may be lower. When set to 0, capture analysis runs at a lower priority than live analysis, but it cannot be ruled out that the performance of the live processing is affected.

Load balancing

This option selects the load distribution method. By default, network traffic is balanced among all processing threads based on the IP addresses. This is fast and usually the best way to achieve efficient load balancing.

If the network traffic only occurs between a few IP addresses, this method can lead to a load imbalance so that some threads are doing more work while other threads may be idle. In this scenario, "flow based balancing" can be enabled to distribute the traffic based on the IP and port information. This will lead to better utilization of all processing threads.

Since this option induces additional processing overhead per packet and additional memory for all internal IP statistics, it should only be enabled in cases of significant load imbalance.
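The two distribution methods amount to hashing different key tuples, as in this simplified sketch (the function names and hashing are illustrative; the Multimeter's actual hash function is not documented here):

```python
def ip_based_thread(src_ip: str, dst_ip: str, num_threads: int) -> int:
    """Default method: all traffic between the same two IP addresses
    always lands on the same processing thread."""
    key = (min(src_ip, dst_ip), max(src_ip, dst_ip))  # direction-independent
    return hash(key) % num_threads

def flow_based_thread(src_ip: str, dst_ip: str,
                      src_port: int, dst_port: int, num_threads: int) -> int:
    """Flow-based balancing: ports are part of the key, so many connections
    between the same two IPs spread across multiple threads."""
    key = tuple(sorted([(src_ip, src_port), (dst_ip, dst_port)]))
    return hash(key) % num_threads

# With only two hosts talking, ip_based_thread maps every connection to one
# thread, while flow_based_thread distributes the connections by port as well.
```

Both functions sort their key so that packets of both directions of a connection map to the same thread; the per-packet cost of the larger flow key is the overhead mentioned above.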

Analyzer queue overcommit

This option enables the use of very large analyzer queues, which may help in processing bursts of high-bandwidth traffic without packet drops. Since the queues are overcommitted, it is possible that this causes network interface packet drops due to running out of packet buffers. See the Performance Optimization Guide.