This post describes how to run mTCP applications in a VM backed by an SR-IOV Virtual Function (VF).
On CloudLab, we launch two c220g2 nodes at the Wisconsin site; their Intel X520 10Gb NICs support SR-IOV. We choose the Ubuntu 16.04 64-bit STD image.
To enable SR-IOV support, edit
/etc/default/grub to add the required IOMMU kernel parameters, then run
sudo update-grub and reboot the system.
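The GRUB addition itself is not shown above; a typical setting for Intel platforms (the exact flags are an assumption — check your kernel documentation) is:

```shell
# /etc/default/grub — enable the IOMMU for SR-IOV/PCI passthrough
GRUB_CMDLINE_LINUX="intel_iommu=on iommu=pt"
```

After editing, `sudo update-grub && sudo reboot` makes the change take effect.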
To configure VFs for each NIC, you can simply reload the NIC driver with additional parameters. (NOTE: check which driver you are using; it may be ixgbe, i40e, or something else.)
sudo rmmod ixgbe
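The reload step that actually creates the VFs is missing above; for the ixgbe driver, a typical sequence (the VF count of 2 is an example) is:

```shell
sudo rmmod ixgbe
# re-create 2 VFs per port; max_vfs is an ixgbe module parameter
sudo modprobe ixgbe max_vfs=2
```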
To check whether the configuration succeeded, run
sudo lspci | grep -i ether and look for PCI devices listed as Virtual Functions.
### install some dependencies
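The package list is not given in the post; a typical KVM/libvirt toolchain on Ubuntu 16.04 (package names are assumptions for that release) is:

```shell
sudo apt-get update
# hypervisor, libvirt daemon/tools, virt-install, and libguestfs (for guestmount later)
sudo apt-get install -y qemu-kvm libvirt-bin virtinst bridge-utils libguestfs-tools
```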
### create a disk
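The disk-creation command is not shown; a minimal sketch (path and size are examples) is:

```shell
# 40 GB thin-provisioned qcow2 image for the guest
sudo qemu-img create -f qcow2 /var/lib/libvirt/images/test.qcow2 40G
```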
### launch a VM named test here
Here we define a domain named
test with 8 vCPUs and 10 GB of memory, and add a default network interface just for the SSH connection.
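The launch command itself is missing; a virt-install sketch matching the domain described above (the ISO filename and disk path are assumptions) is:

```shell
sudo virt-install --name test --vcpus 8 --memory 10240 \
  --disk path=/var/lib/libvirt/images/test.qcow2,format=qcow2 \
  --cdrom ubuntu-16.04-server-amd64.iso \
  --network network=default \
  --graphics none --console pty,target_type=serial
```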
To inject a PCI-passthrough device (a VF, in this setting), we only need to know its PCI address on the host and modify the VM's XML configuration file.
To check the VFs' PCI addresses, run
ip link show or
lshw -c network -businfo, and add this information to the domain XML via
sudo virsh edit test.
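The XML to add is not shown above; a typical `<hostdev>` entry, assuming the host VF sits at PCI address 0000:03:10.0 (the address is an example — use the one reported by lspci), looks like:

```xml
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <!-- host PCI address of the VF: domain:bus:slot.function -->
    <address domain='0x0000' bus='0x03' slot='0x10' function='0x0'/>
  </source>
</hostdev>
```

This goes inside the `<devices>` section of the domain definition.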
When you relaunch your VM, run
ifconfig -a inside the guest to check whether the passthrough worked correctly.
In mtcp/src/dpdk_module.c, comment out the lines that configure TX/RX flow control for the NIC (VFs generally do not permit flow-control configuration, so initialization fails there). The block begins with:
/* retrieve current flow control settings per port */
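The exact code varies by mTCP version; the block to disable looks roughly like the following sketch, based on DPDK's flow-control API rather than the verbatim mTCP source:

```c
#if 0  /* disabled: flow-control get/set is typically rejected on VFs */
        /* retrieve current flow control settings per port */
        memset(&fc_conf, 0, sizeof(fc_conf));
        ret = rte_eth_dev_flow_ctrl_get(portid, &fc_conf);
        if (ret != 0)
                rte_exit(EXIT_FAILURE, "Failed to get flow control info!\n");

        ret = rte_eth_dev_flow_ctrl_set(portid, &fc_conf);
        if (ret != 0)
                rte_exit(EXIT_FAILURE, "Failed to set flow control!\n");
#endif
```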
Also, if you rename the VF interface or assign an IP address to it, the MAC may read as all zeros. In that case, modify
dpdk-17.08/lib/librte_eal/linuxapp/igb_uio/igb_uio.h to enable setting the MAC manually with
ifconfig $if_name hw ether $mac_addr.
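For example (the interface name and MAC address are placeholders — substitute the VF's real values):

```shell
# restore the VF's MAC address by hand
sudo ifconfig ens4 hw ether 52:54:00:12:34:56
```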
If you use an image like Ubuntu 16.04 64-bit as the guest OS, you may lose the serial console when you boot your VM. This is because the guest's
grub file may not include the required console settings by default. To solve this, modify the
boot entry and then reboot.
sudo guestmount -d test -i /mnt
modify related fields (NOTE: you can set your own baud rate):
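The fields themselves are not listed above; typical serial-console settings for the guest's grub defaults file under the mount point (115200 baud is an assumption — set your own rate) are:

```shell
GRUB_CMDLINE_LINUX="console=tty0 console=ttyS0,115200n8"
GRUB_TERMINAL=serial
GRUB_SERIAL_COMMAND="serial --speed=115200 --unit=0 --word=8 --parity=no --stop=1"
```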
and update the boot entry:
sudo vim /mnt/boot/grub/grub.cfg
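Since update-grub cannot run inside the offline image, append the console parameters to the boot entry's linux line directly, then unmount; for example (kernel path and root device are placeholders):

```shell
# in /mnt/boot/grub/grub.cfg, extend the boot entry's linux line, e.g.:
#   linux /boot/vmlinuz-... root=... ro console=tty0 console=ttyS0,115200n8
sudo guestunmount /mnt
```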