Wednesday, August 22, 2018

Upgrade / Remove Ubuntu kernel

* Build kernel from upstream
* reference
   https://wiki.ubuntu.com/KernelTeam/GitKernelBuild

---
git clone https://git.kernel.org/pub/scm/linux/kernel/git/wireless/wireless-testing.git/
---
git clone git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
cp /boot/config-`uname -r` .config
make oldconfig
make clean
make -j `getconf _NPROCESSORS_ONLN` deb-pkg LOCALVERSION=-custom
cd ..
sudo dpkg -i linux-image-xxx-custom_xxx-custom-10.00.Custom_i386.deb
sudo dpkg -i linux-headers-xxx-custom_xxx-custom-10.00.Custom_i386.deb
sudo reboot
---
* Troubleshooting
linux$ make oldconfig
/bin/sh: 1: flex: not found
Makefile:557: recipe for target 'oldconfig' failed
---> linux$ sudo apt install flex

/bin/sh: 1: bison: not found
---> linux$ sudo apt install bison
---
* To change the default boot kernel
sudo cp /etc/default/grub /etc/default/grub.bak
Look up the exact kernel menu strings:
    sudo grub-mkconfig | less

submenu 'Advanced options for Ubuntu' $menuentry_id_option 'gnulinux-advanced-0a503b88-6195-4a3e-8c26-e0e037801d64' {
        menuentry 'Ubuntu, with Linux 5.1.0-poh' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-5.:

sudo -H gedit /etc/default/grub
Find the line that contains GRUB_DEFAULT
Combine the submenu title and the menuentry title with ">" and set GRUB_DEFAULT to the result:
    from
    GRUB_DEFAULT=0
    to
    GRUB_DEFAULT="Advanced options for Ubuntu>Ubuntu, with Linux 5.1.0-poh"
Save it, then build the updated grub menu.
sudo update-grub
reboot
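The steps above can be sketched as a small helper; set_default_kernel and the /tmp scratch file are made-up names for illustration, and the entry string is the one found via grub-mkconfig:

```shell
# Hypothetical helper: rewrite GRUB_DEFAULT in a grub config file.
set_default_kernel() {
    # $1 = grub config file, $2 = full "submenu>menuentry" string
    sed -i "s|^GRUB_DEFAULT=.*|GRUB_DEFAULT=\"$2\"|" "$1"
}

# Try it on a scratch copy first, not on /etc/default/grub directly:
printf 'GRUB_DEFAULT=0\nGRUB_TIMEOUT=10\n' > /tmp/grub.test
set_default_kernel /tmp/grub.test 'Advanced options for Ubuntu>Ubuntu, with Linux 5.1.0-poh'
grep '^GRUB_DEFAULT=' /tmp/grub.test
```

Once the scratch copy looks right, apply the same edit to /etc/default/grub and run sudo update-grub.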
---
Open the Ubuntu kernel list. Go to http://kernel.ubuntu.com/~kernel-ppa/mainline/

Select a kernel version

Find the kernel for your computer.

Download the kernel files.

v4.18.4 mainline build

These binary packages represent builds of the mainline or stable Linux kernel tree at the commit below:

  v4.18.4 (28b2837b7236a273c2c776c06b7eaca971fc381c)


linux-headers-4.18.0-041800_4.18.0-041800.201808122131_all.deb
linux-headers-4.18.4-041804-generic_4.18.4-041804.201808220230_amd64.deb
linux-image-unsigned-4.18.0-041800-generic_4.18.0-041800.201808122131_amd64.deb
linux-modules-4.18.0-041800-generic_4.18.0-041800.201808122131_amd64.deb

Run sudo dpkg -i *.deb and wait for the installation to finish.
** Installation order:
1. linux-modules
2. linux-headers
3. linux-headers-generic
4. linux-image
(linux-headers-4.18.4-041804-generic depends on the architecture-independent linux-headers package, hence install linux-headers-4.18.0-041800 before linux-headers-4.18.4-041804-generic)
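The ordering rule can be sketched as a sorter over the downloaded filenames; install_order is a made-up helper name and the filenames are the ones listed above:

```shell
# Print .deb files in a dependency-safe install order:
# modules, then all-arch headers, then generic headers, then the image.
install_order() {
    for pat in 'linux-modules-*' 'linux-headers-*_all.deb' \
               'linux-headers-*-generic_*' 'linux-image-*'; do
        for f in "$@"; do
            case "$f" in
            $pat) echo "$f" ;;
            esac
        done
    done
}

install_order \
    linux-image-unsigned-4.18.0-041800-generic_4.18.0-041800.201808122131_amd64.deb \
    linux-headers-4.18.4-041804-generic_4.18.4-041804.201808220230_amd64.deb \
    linux-headers-4.18.0-041800_4.18.0-041800.201808122131_all.deb \
    linux-modules-4.18.0-041800-generic_4.18.0-041800.201808122131_amd64.deb
```

Feeding the printed list to sudo dpkg -i one file at a time keeps the dependency order intact.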

Verify kernel version: uname -sr

*** Troubleshooting
ERROR (dkms apport): kernel package linux-headers -xxxx is not supported
Consult /var/lib/dkms/bcmwl/6.30.223.141+bdcom/build/make.log for more information.
-->
remove bcmwl source and then re-install deps
sudo apt-get purge bcmwl-kernel-source

* appstreamcli abort issue
sudo apt-get purge libappstream3

*
 linux-headers-4.18.4-041804-generic depends on libssl1.1 (>= 1.1.0); however:
  Package libssl1.1 is not installed.
peter$ sudo apt-get install libssl1.1
* If libssl1.1 is not found, add "deb http://security.ubuntu.com/ubuntu bionic-security main" to
/etc/apt/sources.list

*  List all installed kernels excluding the currently booted one:
dpkg -l | tail -n +6 | grep -E 'linux-image-[0-9]+' | grep -Fv $(uname -r)
dpkg -l | grep 'linux-image'
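The filtering logic can be tried out on canned output; fake_dpkg_l is a stand-in for a real dpkg -l run, and the running kernel is assumed to be 4.18.0-041800-generic for the example:

```shell
# Simulated `dpkg -l` output; real runs would pipe dpkg -l itself.
fake_dpkg_l() {
cat <<'EOF'
ii  linux-image-4.12.0-12-generic      4.12.0-12.13                amd64  Linux kernel image
ii  linux-image-4.15.0-33-generic      4.15.0-33.36                amd64  Linux kernel image
ii  linux-image-4.18.0-041800-generic  4.18.0-041800.201808122131  amd64  Linux kernel image
EOF
}
running=4.18.0-041800-generic   # real runs would use $(uname -r)
# keep kernel image packages, drop the running one, print package names:
fake_dpkg_l | grep -E 'linux-image-[0-9]+' | grep -Fv "$running" | awk '{print $2}'
```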

* remove old kernel
Using Apt
sudo apt-get autoremove --purge

To purge one specific kernel package, run the following in a terminal:
sudo apt-get purge linux-image-4.12.0-12-generic
sudo dpkg --purge linux-headers-4.12.0-12 linux-headers-4.12.0-12-generic

* another method
sudo apt install byobu
sudo purge-old-kernels

* look up installed kernel sources
$ apt-cache search linux-source

* download Ubuntu kernel sources
** kernel source is supposed to be located under /usr/src/

http://people.canonical.com/~kernel/info/kernel-version-map.html

1. check release name
lsb_release -a
Codename: xenial
2. git clone
git clone http://kernel.ubuntu.com/git-repos/ubuntu/ubuntu-xenial.git
git tag -l Ubuntu-*
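To pick the newest tag for a given base version from the tag list, version sort helps; pick_tag is a made-up helper and the tag names below are illustrative (real names come from git tag -l):

```shell
# Select the newest Ubuntu-<version>-* tag from a tag list on stdin.
pick_tag() {
    grep "^Ubuntu-$1-" | sort -V | tail -n 1
}

printf '%s\n' Ubuntu-4.4.0-21.37 Ubuntu-4.4.0-116.140 Ubuntu-4.4.0-131.157 | pick_tag 4.4.0
```

git checkout of the printed tag then gives that Ubuntu kernel tree.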

* manual installation
tar jxf /usr/src/linux-source-3.13.0.tar.bz2

Monday, August 20, 2018

hostapd/wpa_supplicant hwsim test scripts

Files to take a look

hostap/tests/hwsim/README
hostap/tests/hwsim/example-setup.txt

1st. Install required packages

sudo apt-get install build-essential git libpcap-dev libsqlite3-dev binutils-dev libnl-3-dev libnl-genl-3-dev libnl-route-3-dev libssl-dev libiberty-dev libdbus-1-dev iw bridge-utils python-pyrad python-crypto tshark python-netifaces libxml2-dev libcurl4-openssl-dev

2nd. Build binaries

#!/bin/bash

pushd ./wpa_supplicant
cp ../tests/hwsim/example-wpa_supplicant.config .config
make clean
make
popd

pushd ./hostapd
cp ../tests/hwsim/example-hostapd.config .config
make clean
make hostapd hostapd_cli hlr_auc_gw
popd

pushd ./wlantest
make clean
make
popd


pushd ./tests/hwsim
./build.sh
popd

3rd. Install recent wireless kernel components (mac80211_hwsim, mac80211, cfg80211)
wget https://mirrors.edge.kernel.org/pub/linux/kernel/projects/backports/stable/v4.4.2/backports-4.4.2-1.tar.xz
tar xJf backports-4.4.2-1.tar.xz
cd backports-4.4.2-1

make defconfig-hwsim
make
sudo make install

4th. Update iw based on a custom iw.git build
wget https://www.kernel.org/pub/software/network/iw/iw-3.17.tar.gz
tar xf iw-3.17.tar.gz
cd iw-3.17
make
sudo mv /sbin/iw{,-distro}
sudo cp iw /sbin/iw

5th. Update wireless-regdb
wget http://kernel.org/pub/software/network/wireless-regdb/wireless-regdb-2018.05.31.tar.xz
tar xJf wireless-regdb-2018.05.31.tar.xz
sudo mv /lib/crda/regulatory.bin{,-distro}
sudo cp wireless-regdb-2018.05.31/regulatory.bin /lib/crda/regulatory.bin

# the following command can be used to verify that the new version is trusted
regdbdump /lib/crda/regulatory.bin

6th. Start running test cases
cd tests/hwsim
# load mac80211_hwsim and start test software
./start.sh

# run a single test case ap_open
sudo ./run-tests.py ap_open

This should print results in the following style:

DEV: wlan0: 02:00:00:00:00:00
DEV: wlan1: 02:00:00:00:01:00
DEV: wlan2: 02:00:00:00:02:00
APDEV: wlan3
APDEV: wlan4
START ap_open 1/1
Test: AP with open mode (no security) configuration
Starting AP wlan3
Connect STA wlan0 to AP
PASS ap_open 0.175895 2015-01-17 20:12:07.486006
passed all 1 test case(s)

(If that "PASS ap_open" line does not show up, something unexpected has
happened and the setup is not in working condition.)

# to stop test software and unload mac80211_hwsim
./stop.sh


To run all available test cases (about a thousand), run the following:

./run-all.sh

For mesh group test
sudo ./run-tests.py -f wpas_mesh

For SAE group test
sudo ./run-tests.py -f sae

For single test
sudo ./run-tests.py  wpas_mesh_open_ht40

# run normal test cases under valgrind
./run-all.sh valgrind

# run normal test cases with Linux tracing
./run-all.sh trace
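After larger runs it helps to tally the results; this is a sketch, with summarize as a made-up helper name and a printf standing in for real run-tests.py output (the real output lands under logs/):

```shell
# Count PASS/FAIL lines in run-tests.py style output on stdin.
summarize() {
    awk '/^PASS /{p++} /^FAIL /{f++} END { printf "passed=%d failed=%d\n", p, f }'
}

printf 'START ap_open 1/1\nPASS ap_open 0.175895 2015-01-17 20:12:07.486006\n' | summarize
```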


Troubleshooting

sudo rfkill unblock wifi; sudo rfkill unblock all
systemctl mask systemd-rfkill.service

sudo rmmod iwldvm
sudo rmmod iwlwifi

Test logs
hostap/tests/hwsim$ ./start.sh
[sudo] password for peter:
Control interface file /tmp/wpa_ctrl_29092-1 exists - remove it
Failed to connect to hostapd - wpa_ctrl_open: No such file or directory

hostap/tests/hwsim$ sudo rfkill unblock wifi; sudo rfkill unblock all
peter@peter-linux-dell:~/works/src/__upstream/hostap_dfs/hostap/tests/hwsim$ ./start.sh 
Failed to connect to hostapd - wpa_ctrl_open: No such file or directory

peter@peter-linux-dell:~/works/src/__upstream/hostap_dfs/hostap/tests/hwsim$ sudo ./run-tests.py ap_open
DEV: wlan0: 02:00:00:00:00:00
DEV: wlan1: 02:00:00:00:01:00
DEV: wlan2: 02:00:00:00:02:00
APDEV: wlan3
APDEV: wlan4
START ap_open 1/1
Test: AP with open mode (no security) configuration
Starting AP wlan3
Connect STA wlan0 to AP
PASS ap_open 0.219952 2018-08-20 16:16:40.280540
passed all 1 test case(s)

Test outputs
check the tests/hwsim/logs/ folder

Thursday, August 16, 2018

linux kernel bonding interface

https://fedoraproject.org/wiki/Networking/Bonding

https://wiki.linuxfoundation.org/networking/bonding

http://linux-ip.net/html/ether-bonding.html

https://github.com/Mellanox/mlxsw/wiki/Link-Aggregation

https://www.cyberciti.biz/tips/linux-bond-or-team-multiple-network-interfaces-nic-into-single-interface.html

https://unix.stackexchange.com/questions/429807/syntax-for-changing-the-bond-mode-of-an-interface

https://www.systutorials.com/docs/linux/man/8-ip-link/

https://en.wikipedia.org/wiki/Link_aggregation#Driver_modes

BONDMODE := balance-rr|active-backup|balance-xor|broadcast|802.3ad|balance-tlb|balance-alb

Driver modes

Modes for the Linux bonding driver[10] (network interface aggregation modes) are supplied as parameters to the kernel bonding module at load time. They may be given as command line arguments to the insmod or modprobe command, but are usually specified in a Linux distribution-specific configuration file. The behavior of the single logical bonded interface depends upon its specified bonding driver mode. The default parameter is balance-rr.
Round-robin (balance-rr)
Transmit network packets in sequential order from the first available network interface (NIC) slave through the last. This mode provides load balancing and fault tolerance.
Active-backup (active-backup)
Only one NIC slave in the bond is active. A different slave becomes active if, and only if, the active slave fails. The single logical bonded interface's MAC address is externally visible on only one NIC (port) to avoid distortion in the network switch. This mode provides fault tolerance.
XOR (balance-xor)
Transmit network packets based on a hash of the packet's source and destination. The default algorithm only considers MAC addresses (layer2). Newer versions allow selection of additional policies based on IP addresses (layer2+3) and TCP/UDP port numbers (layer3+4). This selects the same NIC slave for each destination MAC address, IP address, or IP address and port combination, respectively. This mode provides load balancing and fault tolerance.
Broadcast (broadcast)
Transmit network packets on all slave network interfaces. This mode provides fault tolerance.
IEEE 802.3ad Dynamic link aggregation (802.3ad, LACP)
Creates aggregation groups that share the same speed and duplex settings. Utilizes all slave network interfaces in the active aggregator group according to the 802.3ad specification. This mode is similar to the XOR mode above and supports the same balancing policies. The link is set up dynamically between two LACP-supporting peers.
Adaptive transmit load balancing (balance-tlb)
Linux bonding driver mode that does not require any special network-switch support. The outgoing network packet traffic is distributed according to the current load (computed relative to the speed) on each network interface slave. Incoming traffic is received by one currently designated slave network interface. If this receiving slave fails, another slave takes over the MAC address of the failed receiving slave.
Adaptive load balancing (balance-alb)
includes balance-tlb plus receive load balancing (rlb) for IPV4 traffic, and does not require any special network switch support. The receive load balancing is achieved by ARP negotiation. The bonding driver intercepts the ARP Replies sent by the local system on their way out and overwrites the source hardware address with the unique hardware address of one of the NIC slaves in the single logical bonded interface such that different network-peers use different MAC addresses for their network packet traffic.
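The bonding driver also identifies each mode by a small integer (mode=0..6, in the order listed above). A lookup helper makes the mapping explicit; bond_mode_num and bond0 are illustrative names:

```shell
# Map a bonding mode name to its numeric driver parameter (0..6).
bond_mode_num() {
    case "$1" in
        balance-rr)    echo 0 ;;
        active-backup) echo 1 ;;
        balance-xor)   echo 2 ;;
        broadcast)     echo 3 ;;
        802.3ad)       echo 4 ;;
        balance-tlb)   echo 5 ;;
        balance-alb)   echo 6 ;;
        *) echo "unknown mode: $1" >&2; return 1 ;;
    esac
}

bond_mode_num 802.3ad
# creating a bond with iproute2 (needs root):
#   sudo ip link add bond0 type bond mode 802.3ad
#   sudo ip link set eth0 master bond0
```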

Thursday, August 9, 2018

What is the recommended method for adding debug prints in mac80211-based drivers?



https://www.spinics.net/lists/linux-wireless/msg150083.html

On Fri, 2016-04-22 at 17:51 +0530, Krishna Chaitanya wrote:
> What is the recommended method for adding
> debug prints in mac80211 based drivers.
> 
> 1) -DDEBUG + pr_debug ==> used by mac80211, brcm80211
> 2) -DDEBUG + dev_dbg ==> zd1201
> 3) dev_printk(KERN_DEBUG) ==> used by iwlwifi
> 4) printk(KERN_DEBUG) ==> Just to complete the list.

wiphy_dbg -> netif_dbg -> netdev_dbg -> dev_dbg(dev_info) -> pr_debug(pr_info)

and CONFIG_DYNAMIC_DEBUG, no -DDEBUG required
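With CONFIG_DYNAMIC_DEBUG, debug sites are enabled by writing queries to /sys/kernel/debug/dynamic_debug/control (needs root and a mounted debugfs). A temp file stands in for the control file here so the query syntax can be shown safely:

```shell
ctrl=/tmp/dynamic_debug.demo    # stand-in; the real file is under debugfs
# enable all pr_debug()/dev_dbg() sites in the mac80211 module:
echo 'module mac80211 +p' >  "$ctrl"
# or enable just one source file's debug statements:
echo 'file tx.c +p'       >> "$ctrl"
cat "$ctrl"
```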

Wednesday, August 8, 2018

TCP latency vs sk_pacing_shift value test

###############
void __ieee80211_subif_start_xmit(struct sk_buff *skb,
  struct net_device *dev,
  u32 info_flags)
{
if (!IS_ERR_OR_NULL(sta)) {
struct ieee80211_fast_tx *fast_tx;

/* We need a bit of data queued to build aggregates properly, so
 * instruct the TCP stack to allow more than a single ms of data
 * to be queued in the stack. The value is a bit-shift of 1
 * second, so 8 is ~4ms of queued data. Only affects local TCP
 * sockets.
 */
sk_pacing_shift_update(skb->sk, 8);

fast_tx = rcu_dereference(sta->fast_tx);
###############
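The arithmetic in that comment can be checked directly: the queued time is roughly 1 second shifted right by the pacing shift value, so shift 8 gives about 4 ms of queued data:

```shell
# Microseconds of queued data per sk_pacing_shift value (1 s >> shift):
for shift in 6 7 8 10; do
    echo "sk_pacing_shift $shift -> ~$(( 1000000 >> shift )) us queued"
done
```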

TCP latency values test: The tcp_nup test in Flent (https://flent.org)

https://flent.org/intro.html#quick-start

I used the command but the test failed:

flent tcp_download -p 1 -l 60 -H 192.168.1.5 -t text-to-be-included-in-plot -o file1.png

error loading plotter: unable to find plot configuration "1"

Try something like:

flent -H 192.168.1.5 -t "sk_pacing_shift 7" tcp_nup --test-parameter upload_streams=1

You can vary the number of TCP streams by changing the upload_streams parameter.

I'm assuming you are running Flent on the device
with the kernel you are trying to test, so you want a TCP transfer going
*from* the device. If not, change "tcp_nup" to "tcp_ndown" and
"upload_streams" to "download_streams". Upload is netperf TCP_STREAM
test, and download is TCP_MAERTS.

When running the above command you'll get a summary output on the
terminal that you can paste on the list; and also a data file to plot
things from. For instance, you can do something like 'flent -p ping_cdf
*.flent.gz' to get a CDF plot of all your test results afterwards.

###############

Shift 6:
root:~/flent$ flent -H 192.168.1.7 -t "sk_pacing_shift6" tcp_nup --test-parameter upload_streams=1
Started Flent 1.2.2 using Python 2.7.12.
Starting tcp_nup test. Expected run time: 70 seconds.
Data file written to ./tcp_nup-2018-08-13T110414.699512.sk_pacing_shift6.flent.gz.
Summary of tcp_nup test run 'sk_pacing_shift6' (at 2018-08-13 03:04:14.699512):

                           avg       median          # data pts
Ping (ms) ICMP :         9.91         4.99 ms              350
TCP upload avg :       242.48       262.43 Mbits/s         301
TCP upload sum :       242.48       262.43 Mbits/s         301
TCP upload::1  :       242.48       263.34 Mbits/s         271

root:~/flent$ flent -H 192.168.1.7 -t "sk_pacing_shift6" tcp_nup --test-parameter upload_streams=1
Started Flent 1.2.2 using Python 2.7.12.
Starting tcp_nup test. Expected run time: 70 seconds.
Data file written to ./tcp_nup-2018-08-13T113317.074077.sk_pacing_shift6.flent.gz.
Summary of tcp_nup test run 'sk_pacing_shift6' (at 2018-08-13 03:33:17.074077):

                           avg       median          # data pts
Ping (ms) ICMP :         7.75         5.30 ms              350
TCP upload avg :       239.17       250.84 Mbits/s         301
TCP upload sum :       239.17       250.84 Mbits/s         301
TCP upload::1  :       239.17       255.03 Mbits/s         266


Shift 7:
root:~/flent$ flent -H 192.168.1.7 -t "sk_pacing_shift7" tcp_nup --test-parameter upload_streams=1
Started Flent 1.2.2 using Python 2.7.12.
Starting tcp_nup test. Expected run time: 70 seconds.
Data file written to ./tcp_nup-2018-08-13T122948.020974.sk_pacing_shift7.flent.gz.
Summary of tcp_nup test run 'sk_pacing_shift7' (at 2018-08-13 04:29:48.020974):

                           avg       median          # data pts
Ping (ms) ICMP :        14.12         6.61 ms              350
TCP upload avg :       188.19       188.04 Mbits/s         301
TCP upload sum :       188.19       188.04 Mbits/s         301
TCP upload::1  :       188.19       190.88 Mbits/s         258

root:~/flent$ flent -H 192.168.1.7 -t "sk_pacing_shift7" tcp_nup --test-parameter upload_streams=1
Started Flent 1.2.2 using Python 2.7.12.
Starting tcp_nup test. Expected run time: 70 seconds.
Data file written to ./tcp_nup-2018-08-13T123129.526514.sk_pacing_shift7.flent.gz.
Summary of tcp_nup test run 'sk_pacing_shift7' (at 2018-08-13 04:31:29.526514):

                           avg       median          # data pts
Ping (ms) ICMP :        10.31         6.32 ms              350
TCP upload avg :       212.70       233.69 Mbits/s         301
TCP upload sum :       212.70       233.69 Mbits/s         301
TCP upload::1  :       212.70       237.65 Mbits/s         262


Shift 8:
root:~/flent$ flent -H 192.168.1.7 -t "sk_pacing_shift8" tcp_nup --test-parameter upload_streams=1
Started Flent 1.2.2 using Python 2.7.12.
Starting tcp_nup test. Expected run time: 70 seconds.
Data file written to ./tcp_nup-2018-08-13T121433.187781.sk_pacing_shift8.flent.gz.
Summary of tcp_nup test run 'sk_pacing_shift8' (at 2018-08-13 04:14:33.187781):

                           avg       median          # data pts
Ping (ms) ICMP :        17.12         7.07 ms              350
TCP upload avg :       180.05       185.82 Mbits/s         301
TCP upload sum :       180.05       185.82 Mbits/s         301
TCP upload::1  :       180.05       189.41 Mbits/s         253

root:~/flent$ flent -H 192.168.1.7 -t "sk_pacing_shift8" tcp_nup --test-parameter upload_streams=1
Started Flent 1.2.2 using Python 2.7.12.
Starting tcp_nup test. Expected run time: 70 seconds.
Data file written to ./tcp_nup-2018-08-13T121602.382575.sk_pacing_shift8.flent.gz.
Summary of tcp_nup test run 'sk_pacing_shift8' (at 2018-08-13 04:16:02.382575):

                           avg       median          # data pts
Ping (ms) ICMP :        13.90         5.89 ms              350
TCP upload avg :       207.56       228.16 Mbits/s         301
TCP upload sum :       207.56       228.16 Mbits/s         301
TCP upload::1  :       207.56       228.11 Mbits/s         259

Shift 10:
root:~/flent$ flent -H 192.168.1.7 -t "sk_pacing_shift10" tcp_nup --test-parameter upload_streams=1
Started Flent 1.2.2 using Python 2.7.12.
Starting tcp_nup test. Expected run time: 70 seconds.
Data file written to ./tcp_nup-2018-08-13T121844.493498.sk_pacing_shift10.flent.gz.
Summary of tcp_nup test run 'sk_pacing_shift10' (at 2018-08-13 04:18:44.493498):

                           avg       median          # data pts
Ping (ms) ICMP :        15.11         7.41 ms              350
TCP upload avg :       162.38       164.10 Mbits/s         301
TCP upload sum :       162.38       164.10 Mbits/s         301
TCP upload::1  :       162.38       165.47 Mbits/s         252
root:~/flent$ flent -H 192.168.1.7 -t "sk_pacing_shift10" tcp_nup --test-parameter upload_streams=1
Started Flent 1.2.2 using Python 2.7.12.
Starting tcp_nup test. Expected run time: 70 seconds.
Data file written to ./tcp_nup-2018-08-13T122108.347163.sk_pacing_shift10.flent.gz.
Summary of tcp_nup test run 'sk_pacing_shift10' (at 2018-08-13 04:21:08.347163):

                           avg       median          # data pts
Ping (ms) ICMP :        13.69         7.48 ms              350
TCP upload avg :       171.11       170.52 Mbits/s         301
TCP upload sum :       171.11       170.52 Mbits/s         301
TCP upload::1  :       171.11       171.36 Mbits/s         258
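A quick way to compare the shift values is to average the two single-stream runs per shift; the awk sketch below uses the upload averages copied from the summaries above:

```shell
# Average the "TCP upload avg" of the two runs for each shift value.
awk '{ sum[$1] += $2; n[$1]++ }
     END { for (s = 1; s <= 10; s++)
               if (s in sum)
                   printf "shift %d: avg TCP upload %.1f Mbits/s\n", s, sum[s]/n[s] }' <<'EOF'
6 242.48
6 239.17
7 188.19
7 212.70
8 180.05
8 207.56
10 162.38
10 171.11
EOF
```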


################################
Test result with upload_streams=5:
################################

sk_pacing_shift6:
root:~/flent/5stream$ flent -H 192.168.1.7 -t "sk_pacing_shift6" tcp_nup --test-parameter upload_streams=5
Started Flent 1.2.2 using Python 2.7.12.
Starting tcp_nup test. Expected run time: 70 seconds.
Data file written to ./tcp_nup-2018-08-14T105332.356811.sk_pacing_shift6.flent.gz.
Summary of tcp_nup test run 'sk_pacing_shift6' (at 2018-08-14 02:53:32.356811):

                           avg       median          # data pts
Ping (ms) ICMP :        20.46        13.85 ms              350
TCP upload avg :        66.30        68.71 Mbits/s         301
TCP upload sum :       331.49       343.55 Mbits/s         301
TCP upload::1  :        60.80        64.65 Mbits/s         202
TCP upload::2  :        77.72        82.89 Mbits/s         211
TCP upload::3  :        60.52        56.09 Mbits/s         202
TCP upload::4  :        67.39        73.56 Mbits/s         204
TCP upload::5  :        65.06        71.97 Mbits/s         201

root:~/flent/5stream$ flent -H 192.168.1.7 -t "sk_pacing_shift6" tcp_nup --test-parameter upload_streams=5
Started Flent 1.2.2 using Python 2.7.12.
Starting tcp_nup test. Expected run time: 70 seconds.
Data file written to ./tcp_nup-2018-08-14T105554.583603.sk_pacing_shift6.flent.gz.
Summary of tcp_nup test run 'sk_pacing_shift6' (at 2018-08-14 02:55:54.583603):

                           avg       median          # data pts
Ping (ms) ICMP :        20.86        13.80 ms              350
TCP upload avg :        75.88        83.17 Mbits/s         301
TCP upload sum :       379.42       415.84 Mbits/s         301
TCP upload::1  :        82.07        90.73 Mbits/s         225
TCP upload::2  :        74.08        78.29 Mbits/s         204
TCP upload::3  :        70.44        75.65 Mbits/s         217
TCP upload::4  :        82.70        92.86 Mbits/s         223
TCP upload::5  :        70.13        76.87 Mbits/s         210

sk_pacing_shift7:
root:~/flent/5stream$ flent -H 192.168.1.7 -t "sk_pacing_shift7" tcp_nup --test-parameter upload_streams=5
Started Flent 1.2.2 using Python 2.7.12.
Starting tcp_nup test. Expected run time: 70 seconds.
Data file written to ./tcp_nup-2018-08-14T105759.169367.sk_pacing_shift7.flent.gz.
Summary of tcp_nup test run 'sk_pacing_shift7' (at 2018-08-14 02:57:59.169367):

                           avg       median          # data pts
Ping (ms) ICMP :        24.66        15.55 ms              350
TCP upload avg :        65.33        72.83 Mbits/s         301
TCP upload sum :       326.67       363.10 Mbits/s         301
TCP upload::1  :        77.60        92.93 Mbits/s         214
TCP upload::2  :        67.22        75.95 Mbits/s         213
TCP upload::3  :        65.81        74.54 Mbits/s         205
TCP upload::4  :        63.06        70.37 Mbits/s         207
TCP upload::5  :        52.98        55.78 Mbits/s         198

root:~/flent/5stream$ flent -H 192.168.1.7 -t "sk_pacing_shift7" tcp_nup --test-parameter upload_streams=5
Started Flent 1.2.2 using Python 2.7.12.
Starting tcp_nup test. Expected run time: 70 seconds.
Data file written to ./tcp_nup-2018-08-14T105923.620572.sk_pacing_shift7.flent.gz.
Summary of tcp_nup test run 'sk_pacing_shift7' (at 2018-08-14 02:59:23.620572):

                           avg       median          # data pts
Ping (ms) ICMP :        25.03        14.25 ms              350
TCP upload avg :        74.35        85.64 Mbits/s         297
TCP upload sum :       371.77       428.19 Mbits/s         297
TCP upload::1  :        74.12        82.55 Mbits/s         216
TCP upload::2  :        67.78        71.87 Mbits/s         208
TCP upload::3  :        82.40        94.72 Mbits/s         210
TCP upload::4  :        70.77        82.43 Mbits/s         206
TCP upload::5  :        76.70        88.62 Mbits/s         210

sk_pacing_shift8:
root:~/flent/5stream$ flent -H 192.168.1.7 -t "sk_pacing_shift8" tcp_nup --test-parameter upload_streams=5
Started Flent 1.2.2 using Python 2.7.12.
Starting tcp_nup test. Expected run time: 70 seconds.
Data file written to ./tcp_nup-2018-08-14T110334.670845.sk_pacing_shift8.flent.gz.
Summary of tcp_nup test run 'sk_pacing_shift8' (at 2018-08-14 03:03:34.670845):

                           avg       median          # data pts
Ping (ms) ICMP :        25.03        19.50 ms              350
TCP upload avg :        58.11        59.70 Mbits/s         301
TCP upload sum :       290.53       298.51 Mbits/s         301
TCP upload::1  :        51.37        51.74 Mbits/s         197
TCP upload::2  :        42.02        43.66 Mbits/s         192
TCP upload::3  :        61.17        62.33 Mbits/s         200
TCP upload::4  :        66.11        70.31 Mbits/s         198
TCP upload::5  :        69.86        76.31 Mbits/s         202

root:~/flent/5stream$ flent -H 192.168.1.7 -t "sk_pacing_shift8" tcp_nup --test-parameter upload_streams=5
Started Flent 1.2.2 using Python 2.7.12.
Starting tcp_nup test. Expected run time: 70 seconds.
Data file written to ./tcp_nup-2018-08-14T110557.587769.sk_pacing_shift8.flent.gz.
Summary of tcp_nup test run 'sk_pacing_shift8' (at 2018-08-14 03:05:57.587769):

                           avg       median          # data pts
Ping (ms) ICMP :        21.50        13.05 ms              350
TCP upload avg :        61.59        62.00 Mbits/s         301
TCP upload sum :       307.93       310.01 Mbits/s         301
TCP upload::1  :        69.70        76.22 Mbits/s         210
TCP upload::2  :        68.51        74.88 Mbits/s         207
TCP upload::3  :        71.06        77.57 Mbits/s         200
TCP upload::4  :        47.08        51.42 Mbits/s         196
TCP upload::5  :        51.58        51.98 Mbits/s         203

sk_pacing_shift10:
root:~/flent/5stream$ flent -H 192.168.1.7 -t "sk_pacing_shift10" tcp_nup --test-parameter upload_streams=5
Started Flent 1.2.2 using Python 2.7.12.
Starting tcp_nup test. Expected run time: 70 seconds.
Data file written to ./tcp_nup-2018-08-14T110814.434543.sk_pacing_shift10.flent.gz.
Summary of tcp_nup test run 'sk_pacing_shift10' (at 2018-08-14 03:08:14.434543):

                           avg       median          # data pts
Ping (ms) ICMP :        31.57        19.35 ms              350
TCP upload avg :        56.61        62.61 Mbits/s         301
TCP upload sum :       283.07       313.07 Mbits/s         301
TCP upload::1  :        39.39        42.96 Mbits/s         187
TCP upload::2  :        62.20        72.39 Mbits/s         193
TCP upload::3  :        61.72        74.31 Mbits/s         191
TCP upload::4  :        61.86        73.74 Mbits/s         190
TCP upload::5  :        57.90        65.11 Mbits/s         193

root:~/flent/5stream$ flent -H 192.168.1.7 -t "sk_pacing_shift10" tcp_nup --test-parameter upload_streams=5
Started Flent 1.2.2 using Python 2.7.12.
Starting tcp_nup test. Expected run time: 70 seconds.
Data file written to ./tcp_nup-2018-08-14T110931.986159.sk_pacing_shift10.flent.gz.
Summary of tcp_nup test run 'sk_pacing_shift10' (at 2018-08-14 03:09:31.986159):

                           avg       median          # data pts
Ping (ms) ICMP :        19.23        13.20 ms              350
TCP upload avg :        76.36        81.37 Mbits/s         301
TCP upload sum :       381.80       406.85 Mbits/s         301
TCP upload::1  :        64.95        67.91 Mbits/s         212
TCP upload::2  :        82.16        92.35 Mbits/s         215
TCP upload::3  :        67.51        70.18 Mbits/s         213
TCP upload::4  :        77.42        82.11 Mbits/s         232
TCP upload::5  :        89.76        99.96 Mbits/s         226