Global settings


The Global settings section contains parameters for adjusting the behavior of the system. The settings are split among multiple tabs, described as follows.

Generic settings

Packet processing mode

This section allows for configuring the main packet processing mode:


  • Bridge mode: In Bridge mode (also known as In-Line mode), all received packets are transmitted between corresponding port pairs (e.g. 1+2, 3+4 on Allegro 500), so that the Allegro can be placed in-line between any network components.
    The device operates transparently and does not modify network traffic in any way. The additional latency is typically around or below 1 millisecond.
    In Bridge mode it is also possible to process network traffic from a mirror port or tap. However, only half of the total available ports of each Allegro Network Multimeter model can be used this way. For example, on a 4-port Allegro 500 you can leave the unit in Bridge mode and connect up to two mirror ports on interfaces 1 and 3, respectively; these interfaces are physically separated and do not form an (in-line) interface pair.
    Always avoid connecting mirror port traffic to an Allegro Network Multimeter interface pair (e.g. 1+2, 3+4 on Allegro 500), as this results in TX traffic in the direction of the mirror port, which is highly undesirable.
    [Image: Allegro in bridge mode (in-line)]
  • Sink mode: In Sink mode, the Allegro operates receive-only.
    Packets are not forwarded and there is no bi-directional traffic on any of the monitoring ports. This operation mode allows installation at a mirror port of a switch or behind a network tap to access the network traffic.
    [Image: Allegro in Sink mode on mirror port or tap]
  • Mixed bridge/sink mode: This mode allows configuring the packet processing mode for each interface pair individually. Interface pairs default to Bridge mode, but the mode can be permanently changed in the interface statistics.

The packet processing mode can be changed at runtime.

Attention: Keep in mind that in bridge mode (Allegro in-line), the link will be briefly interrupted when switching the processing mode and when rebooting the Allegro Network Multimeter.

Endpoint mode

If the endpoint mode is enabled for an interface, this interface will only process packets sent to the configured IP address (and potentially the port). If the system is running in bridge packet processing mode, links with an interface configured for endpoint mode will not forward traffic. The following types of traffic are supported:

  • Ethernet packets encapsulated in GRE or GRE+ERSPAN and targeted for the configured IP address.
    The system will process the packets as if only the inner encapsulated packet is seen and any traffic captures will only contain the encapsulated packet.
  • UDP packets to a specific port (packets are analyzed as is).

An interface with endpoint mode enabled responds to ARP requests and to ICMP echo requests, so it is possible to use ping to verify that the interface is reachable under the configured IP address.
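
To illustrate the GRE case, the following hedged sketch uses the scapy Python library to craft an Ethernet frame encapsulated in GRE and addressed to a hypothetical endpoint IP; all addresses and the interface name are placeholders, not values from this manual:

  # Hedged sketch using scapy: craft an Ethernet frame encapsulated in GRE
  # (proto 0x6558 = transparent Ethernet bridging) and addressed to a
  # hypothetical endpoint IP.
  from scapy.all import Ether, IP, UDP, GRE, sendp

  ENDPOINT_IP = "198.51.100.42"  # placeholder: IP configured in endpoint mode

  inner = (Ether(src="02:00:00:00:00:01", dst="02:00:00:00:00:02")
           / IP(src="10.0.0.1", dst="10.0.0.2")
           / UDP(sport=1234, dport=80))

  frame = Ether() / IP(dst=ENDPOINT_IP) / GRE(proto=0x6558) / inner

  sendp(frame, iface="eth0")  # interface name is an assumption; requires root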


Note: In firmware versions 3.0 and older, this mode was named "L3 tunnel mode".

Webshark support

The Allegro Network Multimeter allows previewing packets directly in the browser. This feature is called Webshark. To support Webshark previews, the Allegro needs to reserve a certain amount of system memory.

This reserved amount of memory (RAM) is configurable and is not available to the in-memory database, so the history of stored statistics in the dashboard becomes a little shorter. If this is not desired, Webshark support can be disabled.

Multiple parallel Webshark sessions within one Allegro Network Multimeter are possible. The number of simultaneous Webshark instances and the maximum preview file size (e.g. 5 MB) are configurable, but the limits may vary depending on the Allegro model and installed RAM.

Changing Webshark parameters requires a restart of the processing.

[Image: Built-in Webshark - Allegro Network Multimeter]

Detail of traffic analysis

The Limit module processing setting allows you to configure which OSI layers (2, 3, 4, 7) are actively processed and analyzed by the Allegro Network Multimeter. Disabling some analysis modules can significantly improve the performance of the Allegro Network Multimeter and allows for higher throughput.

For both realtime and offline parallel analysis, the following modes are possible:

  • Only capturing: Only interface statistics and the capture module are provided. Capture filters are supported, but without Layer 7 protocol recognition.
  • Capturing and protocol detection: Additionally, Layer 7 protocol recognition is active, so Layer 7 protocol specifications in capture filters work.
  • Up to Layer 2: Additionally, all Layer 2 related modules are active, such as MAC, MAC protocols, ARP and burst analysis.
  • Up to Layer 3: Additionally, all Layer 3 related modules are active, such as IP and DHCP statistics.
  • Up to Layer 4: Additionally, all Layer 4 related modules are active, such as TCP and Layer 4 server ports.
  • Unlimited: All Allegro Network Multimeter modules are active.

After switching to another mode, the processing must be restarted to activate the new setting.

Note: Data recorded to the packet ring buffer is NOT affected by any of the above settings. Packet ring buffer capture rules can be configured under Generic → Packet ring buffer; see https://allegro-packets.com/wiki/Packet_ring_buffer#Packet_ring_buffer_snapshot_length_filter for details.

Graph detail settings

It is possible to modify the detail level of all graphs in the interface. These settings allow you to see a more detailed view (with higher time resolution) or to reduce the detail level so more data can be stored on the device. Changing the default values has an impact on performance and memory usage: moving a slider to the left increases the detail level of graphs, but also increases memory usage and decreases performance.

  • Best graph resolution: This option configures how detailed the graph information is shown in the best case (the latest information). The default value is one second, which means that a graph sample point represents one second of packet time. You can increase the resolution up to 1 millisecond, which gives a detailed sub-second representation of the traffic, or decrease the resolution, which enables the Multimeter to store data for a longer period of time.
  • Reduce graph resolution of old data by up to: The resolution of older graph data is automatically reduced to save memory and to allow a longer view into the traffic history. This option allows you to change this behaviour. With a reduction factor of 1/1 no reduction is done at all, which means the selected graph resolution is available for the complete time; this, however, shortens the time period for which historical data can be kept. You can increase the reduction factor to store data for a longer period. The time printed in parentheses represents the worst-case graph resolution based on the chosen resolution and reduction factor, as sketched below.
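
As a hedged illustration of the arithmetic behind the value in parentheses, assuming the reduction factor is expressed as 1/N, the worst-case resolution is simply the best resolution multiplied by N:

  # Minimal sketch: with a best resolution R and a reduction factor of 1/N,
  # the worst-case resolution of old data is R * N.
  def worst_case_resolution(best_resolution_s: float, reduction_n: int) -> float:
      return best_resolution_s * reduction_n

  print(worst_case_resolution(1.0, 64))    # 1 s best, 1/64 reduction -> 64 s samples
  print(worst_case_resolution(0.001, 1))   # 1 ms best, 1/1 -> 1 ms for all data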


Note: Regardless of these settings, graph values are always converted to represent a value per second (when applicable). For example, the packets per second of an IP address is always a per-second value, even if the resolution is larger or smaller than one second; the displayed value is scaled to match this view. Especially with sub-second resolution this can be misleading.

For instance, if a network element sends one packet per second and the resolution is set to 100 milliseconds, the graph may show 10 packets per second, as each sample point is scaled to represent a value per second (illustrated in the sketch below).
For a detailed investigation it is recommended to always select a specific time interval and look at the (packet) counters shown in all statistics, since these are unscaled and represent the actual values.
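
A minimal sketch of this scaling, assuming a sample point simply divides its packet count by the sample duration:

  # Each sample point is scaled to a per-second value: count / sample duration.
  def displayed_rate(packets_in_sample: int, resolution_s: float) -> float:
      return packets_in_sample / resolution_s

  print(displayed_rate(1, 0.1))  # one packet in a 100 ms sample -> shown as 10.0 pps
  print(displayed_rate(1, 1.0))  # one packet in a 1 s sample    -> shown as 1.0 pps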


Performance implications: The performance degradation and memory usage depend on the actual network traffic and are not exactly predictable.

Here are some examples for reference on a Multimeter 1000 series with different configuration values (under ideal test conditions):

  • 1 second resolution, 1/1 reduction factor: 90% of default performance
  • 100 millisecond resolution, 1/1 reduction factor: 50% of default performance
  • 10 millisecond resolution, 1/1 reduction factor: 15% of default performance
  • 1 millisecond resolution, 1/1 reduction factor: 10% of default performance

PCAP parallel analysis

The PCAP parallel analysis feature allows analysing PCAP files or the packet ring buffer in parallel with the live measurement. The settings allow enabling the feature and choosing how much main memory is used for parallel analysis and how many parallel slots can be used. A detailed description of the configuration values can be found here.

IPFIX settings

The Allegro Network Multimeter can run as an IPFIX exporter. These settings configure the reporting. When enabled, the following settings are possible:

  • IP address: Address of IPFIX collector.
  • Port: Corresponding port.
  • Protocol: TCP or UDP.
  • Update interval: Interval in seconds for sending a status update of flows.
  • UDP resend interval: Interval in seconds for resending IPFIX templates for UDP connections.
  • TCP reconnect timeout: If a TCP connection could not be established, wait for this time period before the next attempt to establish a connection.

Individual IPFIX messages can be enabled or disabled by toggling corresponding options. See the NetFlow/IPFIX interface documentation for details about the message types.
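
As a rough connectivity check on the collector side, the following hedged Python sketch listens on the IANA-registered IPFIX port (4739 for UDP) and prints the version field of each received message; it does not parse IPFIX records, and the bind address is an assumption:

  # Bare UDP listener on the IANA-registered IPFIX port. Prints the IPFIX
  # version field (10 per RFC 7011) of each datagram; no record parsing.
  import socket

  sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
  sock.bind(("0.0.0.0", 4739))  # bind address is an assumption

  while True:
      data, addr = sock.recvfrom(65535)
      version = int.from_bytes(data[0:2], "big")  # first 2 bytes of the header
      print(f"{len(data)} bytes from {addr[0]}, version field = {version}")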

Time settings

The Allegro Network Multimeter can be configured to use a time synchronization service. NTP is supported on all variants of the Multimeter. The PTP service may be used if the management interface supports hardware time stamping. If a GPS-capable PTP grandmaster card is available, GPS time synchronization can be used and the antenna cable delay in nanoseconds can be configured.

To enable a time service, switch to the desired type in the dropdown box. The time service field will show whether the selected service is running or not.

NTP - For active NTP time retrieval, you can specify and edit dedicated NTP servers the Allegro should communicate with. If you do not specify an NTP server, a set of predefined NTP servers is automatically selected.

NTP from data plane - For passive time retrieval, NTP from data plane can be used to synchronize the time passively from NTP packets within the analyzed traffic. The IP address of the desired NTP server must be set. As soon as a packet from that NTP server is seen, the system time of the Multimeter is set. The wait period field can be used to set a time period during which subsequent updates are ignored; if set to 0, every time packet of that server is used. NTP from data plane is ideal in situations where the Allegro MGT interface cannot or may not actively connect to the network.

PTP - For PTP time retrieval, the PTP grandmaster clock identity is shown. This is usually an EUI-64 address: the first three and last three octets of the identity represent the (EUI-48) MAC address of the grandmaster.
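
A hedged sketch of this mapping, assuming the clock identity is a MAC-derived EUI-64 (the MAC with ff:fe inserted in the middle):

  # A MAC-derived EUI-64 clock identity inserts ff:fe between the first and
  # last three octets of the EUI-48 MAC address; strip it to recover the MAC.
  def clock_identity_to_mac(identity: str) -> str:
      octets = identity.lower().split(":")
      if len(octets) != 8 or octets[3:5] != ["ff", "fe"]:
          raise ValueError("not a MAC-derived EUI-64 clock identity")
      return ":".join(octets[:3] + octets[5:])

  print(clock_identity_to_mac("00:1b:21:ff:fe:a0:b1:c2"))  # -> 00:1b:21:a0:b1:c2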

The following settings are possible for PTP and should match the settings of the PTP grandmaster:

  • Delay mechanism: Use end-to-end (E2E), peer-to-peer (P2P) or automatic delay measurement. In case automatic measurement is selected, E2E is used at the beginning and switched to P2P when a peer delay request is received. Default is Auto.
  • Network transport: Use UDPv4, UDPv6 or Layer 2 as network transport. Default is UDPv4.
  • Domain number: The domain number of the grandmaster. This is used to define logical groups of synchronized clocks.

GPS - The GPS time retrieval option will become available when a GPS capable PTP grandmaster card is installed in the Multimeter.

If no time synchronization mechanism is selected, the date and time of the device can be configured manually by entering a properly formatted date and time. Below the time synchronization settings, the time zone used by the device can be configured; the drop-down list provides a list of cities grouped by world regions to select the appropriate time zone.

To make any of the above changes take effect, click on the Save settings button at the bottom of the page. To reload the stored settings, click on Reload settings.

Email notification

Certain modules support the sending of email notifications. The following settings are used to globally configure the SMTP server used and the target email address that will receive the notifications:

  • Enable email notifications: globally enables or disables the sending of email notifications.
  • SMTP server address: the address of the SMTP server that will be used to send notification emails.
  • SMTP server port: the TCP port on which the SMTP server is listening.
  • SMTP server uses SSL: must be set to On if the SMTP server expects an SSL connection from the very start. If the SMTP server uses no SSL, or uses STARTTLS, this setting must be set to Off.
  • Ignore certificate errors: if the SSL certificate should not be validated (e.g. because it is a self-signed certificate), this setting can be used to turn off certificate validation.
  • Allow unencrypted connections: this setting can be used if an unencrypted connection must be allowed, e.g. because a legacy SMTP server does not support encryption.
  • Username: the username used to log in to the SMTP server.
  • Password: the password used to log in to the SMTP server.
  • From email address: the email address from which incident notifications will be sent.
  • Target email address: the email address to which incident notifications will be sent.
  • Email links base URL: this base URL is used to generate the HTML links in notification emails. Since the device cannot determine by itself the address under which it is visible to the email recipient, this setting can be used to set the correct URL prefix for links sent with the notification emails.
  • Send periodic system status mail: if set to hourly or daily, a system status email will be sent to the configured target address with the selected frequency. It will contain basic system information and system health status, management interface configuration and a list of detected LLDP neighbours if the management LLDP feature is enabled.

The Send test email button can be used to verify that the entered settings are working.
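
For reference, a hedged Python sketch of what such a test roughly corresponds to, using the standard smtplib module; server name, port, credentials and addresses are placeholders for the settings above (the STARTTLS variant shown matches "SMTP server uses SSL" set to Off):

  # Placeholder values stand in for the settings described above.
  import smtplib
  from email.message import EmailMessage

  msg = EmailMessage()
  msg["From"] = "multimeter@example.com"   # "From email address"
  msg["To"] = "noc@example.com"            # "Target email address"
  msg["Subject"] = "Allegro Network Multimeter test email"
  msg.set_content("Test notification.")

  # STARTTLS variant ("SMTP server uses SSL" = Off); for SSL-from-the-start
  # servers, smtplib.SMTP_SSL would be used instead.
  with smtplib.SMTP("smtp.example.com", 587) as server:
      server.starttls()                     # upgrade the connection to TLS
      server.login("username", "password")  # "Username" / "Password"
      server.send_message(msg)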

Memory extension (BETA)

Firmware versions 3.3 and later allow storing older statistical data on an attached flash-based storage device to extend the time span covered by historical data. When enabled, the system automatically swaps old history data out to the flash device, making more room in main memory and increasing the total amount of history data.

This is a BETA feature which is subject to change in future firmware versions. It is recommended only for very fast flash-based storage devices and for systems with low load.

The configuration dialog allows checking the speed of the active storage device:

  • Green result: this result can only be reached on very fast U.2 or PCIe flash devices, such as Intel Optane storage. This class of devices can be used for low to medium traffic load.
  • Orange result: this result can be reached by fast U.2 or SATA flash devices. It can be used for low traffic load.
  • Red result: the speed is usually too slow to be used effectively for this feature. In very low traffic load situations it might still be usable.

Spinning hard disk drives are generally not recommended for this feature.

The configuration dialog allows choosing the size of the memory extension. Larger values do not automatically increase the history time, as only part of the historical data can be swapped out. As long as the usage does not reach 100%, there is no need to increase the memory extension.

Recommended size values:

  • Use at least 1 GB of memory.
  • If possible use 1-2 times the amount of main memory.

Saving the settings will activate the memory extension immediately.

To disable the memory extension, first disable the feature and then restart the packet processing in the administration menu.

Note: The storage device cannot be deactivated as long as the memory extension is enabled.

Tip: Since only data that is no longer changing can be swapped out onto the storage device, the graph detail settings can be adjusted to make more information available for the memory extension. The graph resolution reduction can be set to lower values; this will, however, make displaying graph data for larger time periods slower, so the best value depends on the actual amount of data stored and the use case. It is possible to start with a reduction factor of 1/1 and increase it if the overall performance is not good enough.

Expert settings

The Expert settings contain parameters which only need to be changed in rare installation scenarios or when a specific, different operation mode is required.

Packet length accounting

This setting allows you to configure which packet length is used for all traffic counters and incidents. The following modes are possible:

  • Layer 1: Packet length is accounted on Layer 1, including the preamble (7 bytes), SFD (1 byte) and inter-frame gap (12 bytes).
  • Layer 2 without frame check sequence (default): Packet length is accounted on Layer 2 without the frame check sequence (4 bytes).
  • Layer 2 with frame check sequence: Packet length is accounted on Layer 2 including the frame check sequence (4 bytes).

When switching to another mode, it is only applied to new packets; older packet size statistics are not changed.
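
A minimal sketch of the three accounting modes, assuming a frame length given on Layer 2 without FCS and the byte counts listed above:

  # Byte counts from the list above; input length is Layer 2 without FCS.
  PREAMBLE, SFD, IFG, FCS = 7, 1, 12, 4

  def accounted_length(l2_len_no_fcs: int, mode: str) -> int:
      if mode == "layer1":
          return l2_len_no_fcs + FCS + PREAMBLE + SFD + IFG
      if mode == "layer2_with_fcs":
          return l2_len_no_fcs + FCS
      return l2_len_no_fcs  # default: Layer 2 without frame check sequence

  # A minimal Ethernet frame (64 bytes incl. FCS) is accounted as 60 / 64 / 84:
  print([accounted_length(60, m) for m in ("layer2", "layer2_with_fcs", "layer1")])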

VLAN handling

The Allegro Network Multimeter can ignore VLAN tags for connection tracking. Enabling this option may be necessary if the Network Multimeter sees network traffic with changing VLAN tags for the same connection. For example, depending on the configuration of the mirror port to which the Network Multimeter is connected, incoming traffic could carry a VLAN tag while outgoing traffic does not. In this example, a connection would appear twice in the statistics, which is often desired behaviour because it helps to identify a network misconfiguration. In some cases, however, such "duplicate" data in the dashboard may be misleading, and the user may want to see only one connection. In these scenarios the Ignore VLAN tags option can be enabled.

Tunnel view mode

The Allegro Network Multimeter can decapsulate VXLAN, ERSPAN type II and type III, and L2TPv3 data traffic. In this mode, all non-encapsulated traffic is discarded. When this mode is active, a drop counter on the dashboard displays the dropped non-ERSPAN packets. The Multimeter shows the encapsulated content in all analysis modules. When capturing, packets are stored as seen on the wire, with complete outer Layer 2, Layer 3, GRE, ERSPAN and L2TPv3 headers.

Database mode settings

The database mode is a special analysis mode for high-performance Network Multimeters with multiple processors, increasing performance on such systems. It is normally enabled automatically, but depending on the actual network traffic and system usage, some parameter tweaking might be necessary to improve overall system performance. You should only change these parameters in consultation with the Allegro Packets support department. These settings are only visible if your Network Multimeter is capable of running this mode.

You can read more about the meaning of the settings here.

Network performance

There are several network performance settings available to improve performance on high-performance systems in case of packet drops during very high incoming bandwidth. They are only visible if your Network Multimeter is capable of changing these settings.

  • Max RX queues per socket: This setting specifies the number of threads dedicated to read and write interactions with the network interface controllers. Increasing this value raises the network receive bandwidth achievable before packet drops occur; decreasing it frees resources for data analysis. The default setting of 2 RX queues is suitable for most configurations, since data analysis typically needs far more processing resources.
  • Use Hyper-Threading for RX queues: This setting enables or disables Hyper-Threading for the threads dedicated to read and write interactions with the network interface controllers. Disabling it can improve network performance, as the RX queues are distributed to physical CPU cores only. Enabling it also distributes RX queues to virtual Hyper-Threading CPU cores, which are not as efficient as physical cores, but improves data analysis since more CPU cores are available for those tasks. Hyper-Threading is used by default, which is suitable for most configurations as data analysis typically needs far more processing resources.
  • Preferred network interface controller: This setting allows fine-tuning of network and data analysis performance for dedicated network controllers. The selected set of network controllers is preferred over others; usually the fastest network controller, or the one with the most traffic, should be preferred. The Auto setting is used by default, preferring the fastest network controller.

You should only change these parameters in discussion with the Allegro Packets support department.

Processing performance

Processing performance may be modified on high-performance systems. This is only visible if your Network Multimeter is capable of changing this setting.

  • Processing performance mode: This setting allows fine-tuning of processing performance. With Analysing, as many processing resources on all CPUs as possible are used for data analysis. With Capturing, the focus is on high data throughput and low latency for capturing purposes, using only the CPU to which the preferred network controller is attached; this has an impact on data analysis performance. Analysing is used by default.

You should only change this parameter in discussion with the Allegro Packets support department.

Packet ring buffer timeouts

Two timeout settings related to the packet ring buffer can be adjusted.

  • Long timeout controls the maximum amount of time after which a packet is actually written to the packet ring buffer. A lower value may decrease the time difference by which packets are stored out of order in the packet ring buffer, but it may increase the amount of unused overhead data in the packet ring buffer.
  • Short timeout controls after which amount of time smaller batches of packets are written to the packet ring buffer, even if waiting for more packets would result in a more efficient operation. A lower value may decrease the time difference by which packets are stored out of order in the packet ring buffer, but it may also decrease the performance of the packet ring buffer.

Data retention timeout

When the data retention timeout is set to a value greater than 0, data will be removed everywhere throughout the system after the given number of minutes. This means that entities like IPs, which have been inactive for longer than the timeout, will be removed. History graph data for entities that are still active will be truncated to cover only the given timespan, while the absolute values for the whole runtime will be retained. When a packet ring buffer is active, packets which are older than the timeout will be discarded.

Multithreaded capture analysis

This option enables the use of multiple CPUs for capture analysis, such as when analyzing a PCAP capture file or the packet ring buffer. Depending on the number of available CPUs, this can significantly speed up the analysis.

It is possible to dedicate a number of CPUs exclusively to capture analysis. Since these CPUs are not available for live packet processing, the performance of live traffic analysis may be lower. When set to 0, capture analysis runs at a lower priority than live analysis, but it cannot be ruled out that the performance of the live processing is affected.

Load balancing

This option selects the load distribution method. By default, network traffic is balanced among all processing threads based on the IP addresses. This is fast and usually the best way to achieve efficient load balancing.

If the network traffic only occurs between a few IP addresses, this method can lead to a load imbalance so that some threads are doing more work while other threads may be idle. In this scenario, "flow based balancing" can be enabled to distribute the traffic based on the IP and port information. This will lead to better utilization of all processing threads.

Since this option adds processing overhead per packet and additional memory for all internal IP statistics, it should only be enabled in cases of significant load imbalance. The sketch below illustrates the difference between the two methods.
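
A hedged sketch of the two balancing methods, using Python's built-in hash as a stand-in for the device's (unspecified) internal hash function:

  # IP-based hashing keys on addresses only; flow-based hashing also keys on
  # ports, so connections between the same hosts spread across threads.
  def ip_based(src_ip: str, dst_ip: str, n_threads: int) -> int:
      return hash((src_ip, dst_ip)) % n_threads

  def flow_based(src_ip: str, dst_ip: str, sport: int, dport: int,
                 n_threads: int) -> int:
      return hash((src_ip, dst_ip, sport, dport)) % n_threads

  # Many connections between two hosts: IP-based balancing pins them all to
  # one thread, flow-based balancing distributes them.
  print({ip_based("10.0.0.1", "10.0.0.2", 8)})                  # a single thread
  print({flow_based("10.0.0.1", "10.0.0.2", 1024 + i, 443, 8)
         for i in range(100)})                                   # several threads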

Analyzer queue overcommit

This option enables the use of very large analyzer queues, which may help in processing bursts of high-bandwidth traffic without packet drops. Since the queues are overcommitted, it is possible that this causes network interface packet drops due to running out of packet buffers. See the Performance Optimization Guide.