I would like to simulate packet delay and loss for UDP and TCP on Linux to measure the performance of an application. Is there a simple way to do this?

1 Answer

netem leverages functionality already built into Linux and userspace utilities to simulate networks. This is actually what Mark's answer refers to, by a different name.
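If netem isn't already active on your kernel, it usually just needs the sch_netem module loaded. A quick sanity check looks something like this (a minimal sketch; eth0 here and throughout is just an example interface name):

# modprobe sch_netem
# tc qdisc show dev eth0

The second command lists the queueing disciplines currently attached to the interface, which is also a handy way to see which netem rules are in effect later.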

The examples on their homepage already show how you can achieve what you've asked for:

Examples

Emulating wide area network delays

This is the simplest example; it just adds a fixed amount of delay to all packets going out of the local Ethernet.

# tc qdisc add dev eth0 root netem delay 100ms

Now a simple ping test to a host on the local network should show an increase of 100 milliseconds. The delay is limited by the clock resolution of the kernel (Hz). On most 2.4 systems, the system clock runs at 100 Hz, which allows delays in increments of 10 ms. On 2.6, the value is a configuration parameter from 1000 to 100 Hz.
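To verify, a quick ping comparison works (a minimal sketch; 192.168.1.1 stands in for any host on the local network; run it once before and once after the command above):

# ping -c 4 192.168.1.1

Each reply's round-trip time should now be roughly 100 ms higher than the baseline.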

Later examples just change parameters without reloading the qdisc.

Real wide area networks show variability, so it is possible to add random variation.

# tc qdisc change dev eth0 root netem delay 100ms 10ms

This causes the added delay to be 100 ± 10 ms. Network delay variation isn't purely random, so to emulate that there is a correlation value as well.

# tc qdisc change dev eth0 root netem delay 100ms 10ms 25%

This causes the added delay to be 100 ± 10 ms, with the next random element depending 25% on the last one. This isn't true statistical correlation, but an approximation.

Delay distribution

Typically, the delay in a network is not uniform. It is more common to use something like a normal distribution to describe the variation in delay. The netem discipline can take a table to specify a non-uniform distribution.

# tc qdisc change dev eth0 root netem delay 100ms 20ms distribution normal

The actual tables (normal, pareto, paretonormal) are generated as part of the iproute2 compilation and placed in /usr/lib/tc; so it is possible with some effort to make your own distribution based on experimental data.
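If you have such experimental data, building a custom table goes roughly like this (a hedged sketch: the iproute2 source tree carries a maketable tool under its netem/ directory, delays.txt is a hypothetical file of measured delay values, and the exact paths vary by distribution):

# cd iproute2/netem && make
# ./maketable delays.txt > custom.dist
# cp custom.dist /usr/lib/tc/
# tc qdisc change dev eth0 root netem delay 100ms 20ms distribution custom

netem looks the table up by name, so distribution custom picks up /usr/lib/tc/custom.dist.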

Packet loss

Random packet loss is specified in the 'tc' command in percent. The smallest possible non-zero value is:

2⁻³² = 0.0000000232%

# tc qdisc change dev eth0 root netem loss 0.1%

This causes 1/10th of a percent of packets (i.e., 1 out of 1000) to be randomly dropped.

An optional correlation may also be added. This causes the random number generator to be less random and can be used to emulate packet burst losses.

# tc qdisc change dev eth0 root netem loss 0.3% 25%

This will cause 0.3% of packets to be lost, with each successive loss probability depending one quarter on the previous one.

Prob(n) = 0.25 × Prob(n−1) + 0.75 × Random
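Since the question asks about both delay and loss, note that the parameters can be combined in a single netem qdisc; for example (the numbers are just illustrative):

# tc qdisc add dev eth0 root netem delay 100ms 10ms 25% loss 0.3% 25%

This applies to all traffic leaving the interface, UDP and TCP alike.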

Note that you should use tc qdisc add if you have no rules for that interface or tc qdisc change if you already have rules for that interface. Attempting to use tc qdisc change on an interface with no rules will give the error RTNETLINK answers: No such file or directory.
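Putting it together, a typical measurement session looks something like this (a sketch; the final del removes the emulation and restores the interface to normal):

# tc qdisc add dev eth0 root netem delay 100ms
# tc qdisc change dev eth0 root netem delay 100ms 10ms 25%
# tc qdisc del dev eth0 root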

