Learning DPDK: KNI interface

KNI (Kernel Network Interface) is the mechanism DPDK uses to connect user-space applications to the kernel network stack: each KNI device appears to Linux as a regular network interface, while its traffic is exchanged with the DPDK application through shared-memory FIFOs.

The following slides present the concept using a number of functional block diagrams.
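Once the sample application described below is up and running, the kernel-side half of this picture is easy to observe with standard Linux tools. A quick sketch (assuming the application has created a vEth0 interface, as in the test run later in this post):

# the KNI device appears as a regular Linux network interface
ip link show vEth0
# so ordinary tools such as tcpdump work on it unchanged
tcpdump -i vEth0 -c 10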

The code that we are interested in lives in the following locations:

  • Sample KNI application
    • examples/kni
  • KNI kernel module
    • lib/librte_eal/linuxapp/kni
  • KNI library
    • lib/librte_kni

To begin testing KNI, we first need to build the DPDK libraries:

git clone git://dpdk.org/dpdk
export RTE_SDK=~/dpdk/
cd ${RTE_SDK}
make config T=x86_64-native-linuxapp-gcc O=x86_64-native-linuxapp-gcc
cd x86_64-native-linuxapp-gcc
make

Then we need to compile the KNI sample application:

cd ${RTE_SDK}/examples/kni
export RTE_TARGET=x86_64-native-linuxapp-gcc
make

To run the application we first need to load the KNI kernel module:

insmod ${RTE_SDK}/${RTE_TARGET}/kmod/rte_kni.ko

The following kernel module options are available in case loopback mode is required (see the example after the list).

  • kthread_mode=single/multiple – one kernel thread shared by all KNI interfaces, or one thread per interface
  • lo_mode=lo_mode_fifo/lo_mode_fifo_skb – loopback purely at the FIFO level, or with sk_buff allocation as well
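For example, to load the module with loopback enabled and one kernel thread per interface (a sketch; remove a previously loaded rte_kni first):

rmmod rte_kni 2>/dev/null
insmod ${RTE_SDK}/${RTE_TARGET}/kmod/rte_kni.ko kthread_mode=multiple lo_mode=lo_mode_fifo
# confirm that the module is loaded
lsmod | grep rte_kni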

Enable enough huge pages and mount hugetlbfs:

mkdir -p /mnt/huge
mount -t hugetlbfs nodev /mnt/huge
echo 512 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
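A quick way to confirm that the pages were actually reserved:

cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
grep -i huge /proc/meminfo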

Load the UIO kernel module and bind the network interfaces to it. Note that you will not be able to bind an interface while any route is associated with it; a sketch after the commands below shows one way to release an interface first.

modprobe uio_pci_generic
${RTE_SDK}/tools/dpdk_nic_bind.py --status
${RTE_SDK}/tools/dpdk_nic_bind.py --bind=uio_pci_generic eth1
${RTE_SDK}/tools/dpdk_nic_bind.py --bind=uio_pci_generic eth2
${RTE_SDK}/tools/dpdk_nic_bind.py --status
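If the bind step fails because an interface is still in use, flushing its addresses (which also removes the associated routes) and bringing it down usually releases it. A sketch for eth1 (do not do this over the interface you are logged in through):

ip addr flush dev eth1
ip link set eth1 down
${RTE_SDK}/tools/dpdk_nic_bind.py --bind=uio_pci_generic eth1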

On a PC/VM with four cores we can run the KNI application using the following commands:

export LD_LIBRARY_PATH=${RTE_SDK}/${RTE_TARGET}/lib/
${RTE_SDK}/examples/kni/build/kni -c 0x0f -n 4 -- -P -p 0x3 --config="(0,0,1),(1,2,3)"

Where:

  • -c = core bitmask
  • -n = number of memory channels
  • -P = promiscuous mode
  • -p = port hex bitmask
  • --config="(port, lcore_rx, lcore_tx [,lcore_kthread, ...]) ..."

Note that each core can do either TX or RX for one port only; the --config value used above is decoded below.
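For clarity, this is how the --config string from the command above maps ports to lcores:

# --config="(0,0,1),(1,2,3)"
#   port 0: RX on lcore 0, TX on lcore 1
#   port 1: RX on lcore 2, TX on lcore 3

This is also why the core mask -c 0x0f enables lcores 0 through 3.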

You can use the following script (start_kni.sh) to set up and run the KNI test application.


#!/bin/sh
# set up paths to DPDK
export RTE_SDK=/home/dpdk
export RTE_TARGET=x86_64-native-linuxapp-gcc
# set up 512 huge pages
mkdir -p /mnt/huge
umount /mnt/huge 2>/dev/null
mount -t hugetlbfs nodev /mnt/huge
echo 512 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
# bind eth1 and eth2 to the Linux generic UIO driver
modprobe uio_pci_generic
${RTE_SDK}/tools/dpdk_nic_bind.py --bind=uio_pci_generic eth1
${RTE_SDK}/tools/dpdk_nic_bind.py --bind=uio_pci_generic eth2
# insert the KNI kernel module
insmod ${RTE_SDK}/${RTE_TARGET}/kmod/rte_kni.ko
# start the KNI sample application
export LD_LIBRARY_PATH=${RTE_SDK}/${RTE_TARGET}/lib/
${RTE_SDK}/examples/kni/build/kni -c 0x0f -n 4 -- -P -p 0x3 --config="(0,0,1),(1,2,3)"

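Assuming the script is saved as start_kni.sh, it can be run as root; once the application is up, the KNI interfaces it created should be visible:

chmod +x start_kni.sh
sudo ./start_kni.sh
# in another terminal:
ip link show | grep vEth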

Let’s assign IP addresses to the KNI interfaces:

sudo ifconfig vEth0 192.168.56.100
sudo ifconfig vEth1 192.168.56.101
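Note that assigning an address with ifconfig also brings the interface up; this can be verified with:

ip addr show vEth0
ip addr show vEth1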

Now we are set to test the application. To see statistics, we need to send the SIGUSR1 signal (number 10) to the kni process:

watch -n 10 sudo pkill -10 kni
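The sample application also resets its counters on SIGUSR2, so a one-off reset looks like this (signal 12 is SIGUSR2):

sudo pkill -12 kni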
