Today, we’re going to talk about performance, and especially network performance. The main goal of this article is to show you a way to progressively boost your network bandwidth at minimal cost. Welcome to the wonderful world of multipathing!
We can define multipathing as a method of using more than one path to reach storage. For example, suppose you have to send a file from your PC to your NAS. Your PC has both a wireless and a wired connection: let’s say 100 Mb/s Wi-Fi and 100 Mb/s Ethernet. Your NAS is connected to a switch at 1 Gb/s. When you send a 1 GB file, it has two ways to reach the NAS. The aim of multipathing is to load-balance the traffic across the available paths. In our example the system will send 500 MB over the Wi-Fi and 500 MB over the Ethernet cable, for a maximum aggregate bandwidth of 200 Mb/s.
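As a quick back-of-the-envelope check (idealized: both paths fully saturated, protocol overhead ignored):
#1 GB = 8000 Mb
#single 100 Mb/s path:  8000 Mb / 100 Mb/s         = 80 s
#two 100 Mb/s paths:    4000 Mb each, in parallel  = 40 s
#effective throughput:  8000 Mb / 40 s             = 200 Mb/s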
In this post we’re looking at the Linux kernel’s hypervisor, KVM, and the session trunking feature of the NFSv4.1 protocol. Session trunking is the NFSv4.1 way of doing multipath between a single client and a single server. This feature was implemented in February 2016 by the Linux-NFS team.
The advantage of implementing multipathing in the application layer is the flexibility it brings to system management: you configure multipath once on the hypervisor, put your ISOs and disk images on your NAS, and all the generated traffic travels over every available path.
But how do you do this? Here is a tutorial to achieve multipathing between KVM and an NFSv4.1 server. Note: the NFSv4.1 client session trunking code is still under development, so details may change in the future; be careful to use the correct configuration.
Date of the test: 11/3/16
Test environment:

Server Configuration
Commands assuming you are root:
apt-get update && apt-get upgrade
apt-get install nfs-kernel-server
mkdir /home/testnfs
chmod 777 /home/testnfs
nano /etc/exports
#Add these lines to the "exports" file to make the "testnfs" folder available to each client address
/home/testnfs 192.168.1.2(rw,sync)
/home/testnfs 192.168.2.20(rw,sync)
#end modification of /etc/exports
#Enable NFSv4.1
/etc/init.d/nfs-kernel-server stop
nano /proc/fs/nfsd/versions
#change "+2 +3 +4 -4.1" to "+2 +3 +4 +4.1"
/etc/init.d/nfs-kernel-server start
#server ready
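Before moving on, you can sanity-check the server; exportfs ships with the nfs-kernel-server package:
#confirm that 4.1 is now enabled
cat /proc/fs/nfsd/versions
#list the active exports
exportfs -v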
Client Configuration
The session trunking feature is under development and only exists as a set of kernel patches. So if you want to use it, you have to fetch a kernel tree, apply the patches, then compile and install it!
Step 1: Install Patches
Commands assuming you are root:
#------------Prepare new Kernel with new patches----------
apt-get install libncurses5-dev gcc make git exuberant-ctags bc libssl-dev patch
cd /
mkdir kernels
cd kernels
git clone git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
#it takes a while...
cd linux-stable
#--------------------Install the patches------------------
mkdir patch
cd patch
#Trond
wget "http://marc.info/?l=linux-nfs&m=145430167801424&q=raw" -O patchTrond1.patch
wget "http://marc.info/?l=linux-nfs&m=145430167901425&q=raw" -O patchTrond2.patch
wget "http://marc.info/?l=linux-nfs&m=145430168001426&q=raw" -O patchTrond3.patch
wget "http://marc.info/?l=linux-nfs&m=145430168101427&q=raw" -O patchTrond4.patch
wget "http://marc.info/?l=linux-nfs&m=145430168301428&q=raw" -O patchTrond5.patch
wget "http://marc.info/?l=linux-nfs&m=145430168401429&q=raw" -O patchTrond6.patch
wget "http://marc.info/?l=linux-nfs&m=145430168601430&q=raw" -O patchTrond7.patch
wget "http://marc.info/?l=linux-nfs&m=145430168701431&q=raw" -O patchTrond8.patch
wget "http://marc.info/?l=linux-nfs&m=145430168801432&q=raw" -O patchTrond9.patch
wget "http://marc.info/?l=linux-nfs&m=145430168901433&q=raw" -O patchTrond10.patch
wget "http://marc.info/?l=linux-nfs&m=145430169001434&q=raw" -O patchTrond11.patch
wget "http://marc.info/?l=linux-nfs&m=145430169101435&q=raw" -O patchTrond12.patch
wget "http://marc.info/?l=linux-nfs&m=145430169201437&q=raw" -O patchTrond13.patch
#Andros
wget "http://marc.info/?l=linux-nfs&m=145470652924651&q=raw" -O patchAndros1.patch
wget "http://marc.info/?l=linux-nfs&m=145470653024652&q=raw" -O patchAndros2.patch
wget "http://marc.info/?l=linux-nfs&m=145470653024653&q=raw" -O patchAndros3.patch
wget "http://marc.info/?l=linux-nfs&m=145470653124654&q=raw" -O patchAndros4.patch
wget "http://marc.info/?l=linux-nfs&m=145470653424655&q=raw" -O patchAndros5.patch
wget "http://marc.info/?l=linux-nfs&m=145470653424656&q=raw" -O patchAndros6.patch
#Apply the patches at the root of your kernel tree, here /kernels/linux-stable/
cd ..
#Trond
patch -p1 < patch/patchTrond1.patch
patch -p1 < patch/patchTrond2.patch
patch -p1 < patch/patchTrond3.patch
patch -p1 < patch/patchTrond4.patch
patch -p1 < patch/patchTrond5.patch
patch -p1 < patch/patchTrond6.patch
patch -p1 < patch/patchTrond7.patch
patch -p1 < patch/patchTrond8.patch
patch -p1 < patch/patchTrond9.patch
patch -p1 < patch/patchTrond10.patch
patch -p1 < patch/patchTrond11.patch
patch -p1 < patch/patchTrond12.patch
patch -p1 < patch/patchTrond13.patch
#Andros
patch -p1 < patch/patchAndros1.patch
patch -p1 < patch/patchAndros2.patch
patch -p1 < patch/patchAndros3.patch
patch -p1 < patch/patchAndros4.patch
patch -p1 < patch/patchAndros5.patch
patch -p1 < patch/patchAndros6.patch
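If you want to be careful, patch can check that a patch applies cleanly without modifying anything (--dry-run is a standard patch flag), which is handy since these patches target a moving kernel tree:
#test a patch before really applying it
patch -p1 --dry-run < patch/patchTrond1.patch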
#Compile and install kernel
#Copy the current kernel config to the new kernel
cp /boot/config-`uname -r`* .config
#(you may also need "make olddefconfig" to fill in options added since your running kernel)
#I got a compilation error in "net/sunrpc/xprtmultipath.c" line 220
#at WRITE_ONCE(&xpi->xpi_cursor, NULL);
#error: lvalue required as unary '&' operand
#Remove the "&"
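One possible way to apply that one-character fix from the shell; the file and symbol come straight from the error above, but double-check the line in your tree since the patch set is still evolving:
sed -i 's/WRITE_ONCE(&xpi->xpi_cursor/WRITE_ONCE(xpi->xpi_cursor/' net/sunrpc/xprtmultipath.c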
#the -j4 stands for 4 jobs running at the same time, ideal for multi-core processors.
make -j4
make modules_install
make install
reboot
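After the reboot, it is worth confirming that you actually booted the freshly built kernel:
uname -r
#should report the version of the linux-stable tree you just compiled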
#Mount the NFSv4.1 shared folder on a local folder
mkdir vmimages
#Establish the first connection and session
mount -t nfs4 -o minorversion=1 192.168.1.3:/home/testnfs vmimages
#Aggregate the second connection into the session
mount -t nfs4 -o minorversion=1 192.168.2.30:/home/testnfs vmimages
#You’re going to get an error on the second mount; that’s because multiple host addresses aren’t supported yet in mount.nfs4 (patch under development)
#End NFS4.1 Session Trunking Client config
The client should now spread the data across all available paths.
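To verify that the two addresses really joined a single session, you can inspect the resulting mounts (nfsstat comes with the nfs-common package):
mount | grep testnfs
nfsstat -m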
Step 2: Install VM
The last step is to install KVM and test the multipath with a real VM.
Make sure your processor supports hardware virtualization (see the quick check below)
Put your ISO and .img disk image in the NFS server’s shared folder created above
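On Linux, a common way to check for hardware virtualization support is to look for the CPU flags (vmx on Intel, svm on AMD):
egrep -c '(vmx|svm)' /proc/cpuinfo
#0 means no hardware support; 1 or more means KVM can use it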
Xmodulo wrote a great tutorial on installing a VM with KVM.
Assuming you are root on your machine:
apt-get install qemu-kvm libvirt-bin
adduser root kvm
adduser root libvirt-qemu
#To create a disk image on the server, use the dd and mkfs commands
dd if=/dev/zero of=file.img bs=1M count=5000
mkfs.ext3 -F file.img
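Side note: dd’ing 5 GB of zeros over NFS pushes every byte across the network. If you just need the space reserved, a sparse file or qemu-img (installed with qemu-kvm) achieves the same thing almost instantly:
#create a 5000 MiB sparse file: no data blocks written until used
dd if=/dev/zero of=file.img bs=1M count=0 seek=5000
#or let QEMU create the raw image
qemu-img create -f raw file.img 5G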
#KVM creates the VM from an XML file that describes the hardware to virtualize. Here is an example of such a file.
nano vmdebian.xml
#Change the path to the ISO and disk image in the XML file.
#----vmdebian.xml----
<domain type='kvm' id='1'>
<name>debian2</name>
<uuid>746173e3-973f-d02c-1733-93d1f4e10e5a</uuid>
<memory unit='KiB'>524288</memory>
<currentMemory unit='KiB'>524288</currentMemory>
<vcpu placement='static'>1</vcpu>
<os>
<type arch='x86_64' machine='pc-1.1'>hvm</type>
<boot dev='cdrom'/>
<boot dev='hd'/>
</os>
<features>
<acpi/>
<apic/>
<pae/>
</features>
<clock offset='utc'/>
<on_poweroff>destroy</on_poweroff>
<on_reboot>destroy</on_reboot>
<on_crash>destroy</on_crash>
<devices>
<emulator>/usr/bin/kvm</emulator>
<disk type='file' device='disk'>
<driver name='qemu' type='raw'/>
<!--
Set your disk image path here
-->
<source file='/home/testmount/file2.img'/>
<target dev='vda' bus='virtio'/>
<alias name='virtio-disk0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</disk>
<disk type='file' device='cdrom'>
<driver name='qemu' type='raw'/>
<!--
Set your ISO path here
-->
<source file='/home/testmount/images/debian-live-8.3.0-i386-gnome-desktop.iso'/>
<target dev='hdc' bus='ide'/>
<readonly/>
<alias name='ide0-1-0'/>
<address type='drive' controller='0' bus='1' target='0' unit='0'/>
</disk>
<controller type='usb' index='0'>
<alias name='usb0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
</controller>
<controller type='ide' index='0'>
<alias name='ide0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
</controller>
<interface type='bridge'>
<mac address='52:54:00:30:68:33'/>
<source bridge='br0'/>
<target dev='vnet0'/>
<model type='virtio'/>
<alias name='net0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
<serial type='pty'>
<source path='/dev/pts/1'/>
<target port='0'/>
<alias name='serial0'/>
</serial>
<console type='pty' tty='/dev/pts/1'>
<source path='/dev/pts/1'/>
<target type='serial' port='0'/>
<alias name='serial0'/>
</console>
<input type='mouse' bus='ps2'/>
<graphics type='vnc' port='-1' autoport='yes' listen='0.0.0.0'/>
<input type='tablet' bus='usb'>
<alias name='input0'/>
</input>
<memballoon model='virtio'>
<alias name='balloon0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</memballoon>
</devices>
<seclabel type='none'/>
</domain>
#----vmdebian.xml--END--
virsh create vmdebian.xml
#Your VM is running; you can see it with:
virsh list
#See where the VNC server listens
netstat -nap | egrep '(kvm|qemu)'
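Alternatively, libvirt can report the VNC display directly (debian2 is the domain name set in the XML above):
virsh vncdisplay debian2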

To access your VM you need a VNC client. The one I used is a Chrome app.

Results
Without session trunking: eth2 carries no traffic and eth1 alone transfers the file over NFSv4.1.

With session trunking: eth1 and eth2 both transfer the file over NFSv4.1.

As you can see, all the traffic from the VM travels over all the interfaces without any configuration inside the VM. The bandwidth is aggregated as well. Here is the network utilization when I manually add each mount during a big file copy.
The paths are limited to 100 Mb/s.
We can see that the available bandwidth (the higher purple line) doubled after the second mount. NFSv4.1 detects the session trunking opportunity and aggregates the new connection on eth2 into the existing session initialized by the first connection on eth1.
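If you want to reproduce this observation without a graphing tool, the kernel’s raw per-interface byte counters are always available (crude but dependency-free; tools like iftop give nicer live rates):
watch -n1 cat /proc/net/dev
#the bytes columns for eth1 and eth2 should both climb during the copy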
Here we are! Bandwidth has been doubled with no big infrastructure changes. This is a great way to adapt your network performance to your needs. If 1 Gb/s is too slow and 10 Gb/s is overkill, multipath could be a good way to get 2, 3, 4, or 5 Gb/s…
Comments are welcome!

It would be very interesting to compare the performance of multipath NFS with the performance of NFS over Multipath TCP.
Yes! It’s in my plans; maybe it could be the subject of another post. I think the main difference between MPTCP and NFS multipath is the granularity of the network load distribution. MPTCP distributes the load packet by packet, while NFS distributes it NFS operation by NFS operation, so MPTCP may mean a higher CPU load but a better network load distribution… it has to be confirmed with real measurements.
A great piece Martin, thank you.