6th March, 2020
The definition of packet duplication is simple: a duplicate packet is any packet that is identical to another packet. But being easy to define doesn't stop it from causing a significant headache for Network Engineers and System Analysts. Minimising duplicate packets helps keep your network efficient and your tools accurate. Packet duplication can come from several sources, such as inter-VLAN communication, incorrect switch configuration or SPAN/mirror port misconfiguration. Even when optimally configured, a SPAN port may generate between one and four copies of a packet, and those duplicates can represent as much as 50% of the network traffic sent to a monitoring tool. Today's multi-100Gbps networks already transport traffic at much higher rates than most tools can handle, and analysis tools cannot easily absorb the extra processing drain of duplicate packets on incoming feeds.
The impact of this avalanche of duplicate packets hitting your analysis and NPM tools includes distorted results when analysing application or network performance, false positives from your analysis tools for problems that don't exist, and inaccurate flow data in NetFlow/IPFIX reports. This all diminishes network visibility, leading to blind spots and a reduced understanding of application performance, network utilisation and security threats. The answer to these challenges is de-duplication, or de-dupe: the process of eliminating duplicate copies of a packet.
At carrier scale, the most efficient way to do this is with FPGA-accelerated hardware: as duplicate packets are received on a 100G MAC, they are removed from the set of packets metered for flow records. A simple and efficient method of duplicate packet detection is to store a fingerprint hash of the IP payload along with a portion of the IP header, then discard packets with the same payload hash and header fields that arrive within a given time window. For flexibility, a solution should allow configuration of both the size of the time window and which parts of the packet are used in the comparison. For example, particular transport protocols, or IP header fields such as TTL/hop count or the fragment identification field, could optionally be left out of the comparison, since these fields legitimately change as a packet traverses routers.
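The fingerprint-and-window approach above can be sketched in a few lines of Python. This is a minimal illustration of the idea, not Telesoft's FPGA implementation: the class name, field choices and the 50 ms default window are assumptions for the example, and volatile header fields such as TTL and fragment ID are simply left out of the hash, as the article suggests.

```python
import hashlib
import time

class Deduplicator:
    """Flag packets whose fingerprint was already seen within a time window.

    The fingerprint covers the payload plus selected header fields.
    Volatile fields (TTL, fragment ID) are deliberately excluded so that
    routed or mirrored copies of the same packet still match.
    Illustrative sketch only; field names are not from any real capture API.
    """

    def __init__(self, window_seconds=0.05):
        self.window = window_seconds      # assumed default, tune per deployment
        self.seen = {}                    # fingerprint -> timestamp last seen

    def fingerprint(self, src_ip, dst_ip, protocol, payload):
        h = hashlib.sha256()
        for field in (src_ip, dst_ip, protocol):
            h.update(str(field).encode())
        h.update(payload)
        return h.digest()

    def is_duplicate(self, src_ip, dst_ip, protocol, payload, now=None):
        now = time.monotonic() if now is None else now
        fp = self.fingerprint(src_ip, dst_ip, protocol, payload)
        last = self.seen.get(fp)
        self.seen[fp] = now               # always record the latest sighting
        return last is not None and (now - last) <= self.window
```

For example, two identical packets 10 ms apart would match, while a third copy arriving after the window expires would be treated as a fresh packet:

```python
d = Deduplicator(window_seconds=0.05)
d.is_duplicate("10.0.0.1", "10.0.0.2", 6, b"GET /", now=0.00)  # False (first sighting)
d.is_duplicate("10.0.0.1", "10.0.0.2", 6, b"GET /", now=0.01)  # True (within window)
```

A production implementation would also need to evict stale fingerprints to bound memory, which the hardware time window handles implicitly.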
Using this process will significantly decrease the number of duplicate packets passed on to security monitoring and network performance tools, reducing expenditure on limited resources and cutting false positives. It also improves network throughput by eliminating redundant copies that can cause data bottlenecks and dropped packets. By reducing duplicate packets, you also ensure that you store only the most useful and relevant data, enabling better analysis and improved network security. In addition, fewer duplicates mean lower storage requirements: captured network traffic can be retained for longer, giving your analysts a larger window for forensic analysis, threat hunting and anomaly detection.
Talk to Telesoft about packet de-duplication at carrier scale