Solarflare NVMeoF over TCP Linux kernel drivers

Version: v18.1-rev3


Introduction

This is a pre-NVM Express specification release from Solarflare of a TCP transport for NVMe over Fabrics (NVMeoF). The package includes Linux kernel drivers which enable NVMeoF over TCP.


Supported platforms

This release is for Red Hat Enterprise Linux / CentOS 7.4 only, with kernel and kernel-devel version 3.10.0-693.el7.x86_64.

Prerequisites

When used with Solarflare adapters, we recommend installing the latest Solarflare Linux NET driver for best performance.

The latest Solarflare driver packages are available from https://support.solarflare.com.

The installation of the NVMeoF over TCP drivers requires the kernel-devel package for the target kernel version to be installed on the system.

If you have installed CentOS Minimal, you will also need to install gcc and rpm-build (which provides rpmbuild).
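
For example, on a CentOS 7 Minimal install the build prerequisites can be pulled in with yum:

 yum install -y gcc rpm-build kernel-devel-$(uname -r)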

Installation

To build for the currently running kernel, use

 rpmbuild --rebuild <source rpm file>

To install the resulting binary rpm:

 Locate the file that was created in the preceding step, prefixed with the
 annotation "Wrote:", then type:
 rpm -Uvh <binary rpm filename>

Repeat the above installation procedure for both the host and target systems. The 'target' system is where the NVMe block device is located. The 'host' system is the system connecting to the device.
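
As a worked example (the rpm filenames below are purely illustrative; use the path that rpmbuild prints after "Wrote:"):

 # Rebuild the binary rpm from the supplied source rpm (illustrative filename)
 rpmbuild --rebuild sfc-nvmeof-tcp.src.rpm
 # Install the resulting binary rpm from rpmbuild's default output directory
 rpm -Uvh /root/rpmbuild/RPMS/x86_64/sfc-nvmeof-tcp.x86_64.rpm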

On the *host* system, load the nvme-tcp module

 modprobe nvme-tcp

Check it has loaded

 lsmod | grep nvme_tcp

You should see nvme_tcp, nvme_fabrics and nvme_core modules in use.

On the *target* system, load the nvmet-tcp module

 modprobe nvmet-tcp

Check it has loaded

 lsmod | grep nvmet_tcp

You should see nvmet_tcp and nvmet modules in use.


Configuration

Target System Config

On the *target* system, perform the steps below. The example commands assume a target IP of 10.0.0.1, a subsystem name of ramdisk and an underlying block device /dev/ram0. Users should specify their NVMe block device in place of this. (If you do wish to test with a ramdisk and /dev/ram0 is not present, then on RHEL 7 you may need to load the brd module: 'modprobe brd rd_size=<x>'.)
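
For example, a 1 GiB test ramdisk could be created with (the size is an arbitrary choice; rd_size is in KiB):

 modprobe brd rd_nr=1 rd_size=1048576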

 # Set up storage subsystem. 'ramdisk' is used below as an arbitrary name
 # for the NVMe subsystem.
 mkdir /sys/kernel/config/nvmet/subsystems/ramdisk
 echo 1 > /sys/kernel/config/nvmet/subsystems/ramdisk/attr_allow_any_host
 mkdir /sys/kernel/config/nvmet/subsystems/ramdisk/namespaces/1
 echo -n /dev/ram0 > /sys/kernel/config/nvmet/subsystems/ramdisk/namespaces/1/device_path
 echo 1 > /sys/kernel/config/nvmet/subsystems/ramdisk/namespaces/1/enable
 # Set up port
 mkdir /sys/kernel/config/nvmet/ports/1
 echo "ipv4" > /sys/kernel/config/nvmet/ports/1/addr_adrfam
 echo "tcp" > /sys/kernel/config/nvmet/ports/1/addr_trtype
 echo "11345" > /sys/kernel/config/nvmet/ports/1/addr_trsvcid
  
 # If a firewall is active on the system, ensure it allows TCP traffic
 # to pass using the TCP port number configured in the above commands
 # (11345 in this example).
 # Associate subsystem with port
 ln -s /sys/kernel/config/nvmet/subsystems/ramdisk /sys/kernel/config/nvmet/ports/1/subsystems/ramdisk

My version of the configuration, exporting the NVMe drive that shows up in /dev/ as "nvme0n1":

 touch /var/lock/subsys/local
 mkdir /sys/kernel/config/nvmet/subsystems/nvme0n1
 echo 1 > /sys/kernel/config/nvmet/subsystems/nvme0n1/attr_allow_any_host
 mkdir /sys/kernel/config/nvmet/subsystems/nvme0n1/namespaces/1
 echo -n /dev/nvme0n1 > /sys/kernel/config/nvmet/subsystems/nvme0n1/namespaces/1/device_path
 echo 1 > /sys/kernel/config/nvmet/subsystems/nvme0n1/namespaces/1/enable
 mkdir /sys/kernel/config/nvmet/ports/1
 echo "ipv4" > /sys/kernel/config/nvmet/ports/1/addr_adrfam
 echo "tcp" > /sys/kernel/config/nvmet/ports/1/addr_trtype
 echo "11345" > /sys/kernel/config/nvmet/ports/1/addr_trsvcid
 echo "192.168.0.10" > /sys/kernel/config/nvmet/ports/1/addr_traddr
 ln -s /sys/kernel/config/nvmet/subsystems/nvme0n1 /sys/kernel/config/nvmet/ports/1/subsystems/nvme0n1
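
To sanity-check the export, list the port's subsystems directory; the symlink created above should appear:

 ls -l /sys/kernel/config/nvmet/ports/1/subsystems/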

Install the latest Solarflare NET driver, which itself needs to be compiled from source using rpmbuild.

Once installed, set up the target and client with static IPv4 addresses. In my example I made the target (storage server) 192.168.0.10 and the client 192.168.0.11, since it was a closed network.
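
As a sketch, assuming the Solarflare interface comes up as a NetworkManager connection named "ens1f0" (your connection name will differ; check with the first command), the target's static address can be set with nmcli on CentOS 7:

 # List connections to find the right name (ens1f0 below is an assumption)
 nmcli connection show
 nmcli connection modify ens1f0 ipv4.method manual ipv4.addresses 192.168.0.10/24
 nmcli connection up ens1f0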

Host System Config

On the *host* system, perform the steps below. The example commands assume a target IP of 192.168.0.10 and port 11345.

 # Install a version of the nvme-cli utility that supports NVMeoF over TCP.
 git clone https://github.com/solarflarecommunications/nvme-cli
 cd nvme-cli
 make
 make install
 # Connect to the target
 nvme connect -t tcp -a 192.168.0.10 -s 11345 -n nvme0n1
 # Confirm NVMe device is present
 lsblk | grep nvme
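
You can also list the connected NVMe controllers and namespaces, and later tear the connection down again, with nvme-cli:

 nvme list
 # Disconnect from the subsystem named nvme0n1 when finished
 nvme disconnect -n nvme0n1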

Once confirmed, you can then add a file system as normal:

 mkfs.xfs /dev/nvme0n1
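
Then create a mount point and mount it; /mnt/nvme01 matches the mount point used by the boot script further down:

 mkdir -p /mnt/nvme01
 mount /dev/nvme0n1 /mnt/nvme01/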

Make everything above persistent across reboots

Everything set above for the kernel is stateless, having been set under /sys/, meaning that once you reboot your NVMe drive will go missing, the modules won't be loaded, and none of the configuration above will be set any more.

Target System:

Add the kernel module to modules-load.d so CentOS will load it on boot:

 vim /etc/modules-load.d/nvmet.conf

Insert the following line into nvmet.conf:

 nvmet_tcp
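
Equivalently, the file can be created in one shot without an editor:

 echo nvmet_tcp > /etc/modules-load.d/nvmet.conf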

Create a new service unit file at /etc/systemd/system/nvmet-tcp.service:

 [Unit]
 Description=Service unit file for NVMe over Fabrics TCP target system
 After=network.target
 [Service]
 Type=simple
 ExecStart=/usr/local/sbin/nvmet-tcp.sh
 TimeoutStartSec=0
 [Install]
 WantedBy=default.target

Create the ExecStart script outlined above:

 vim /usr/local/sbin/nvmet-tcp.sh

Make script executable:

 chmod +x /usr/local/sbin/nvmet-tcp.sh

Insert the following into the above script:

 #!/bin/bash
 # Configure the NVMe over Fabrics TCP target on boot (run by nvmet-tcp.service).
 # Exports the local /dev/nvme0n1 as subsystem "nvme0n1" on 192.168.0.10:11345.
 touch /var/lock/subsys/local
 mkdir /sys/kernel/config/nvmet/subsystems/nvme0n1
 echo 1 > /sys/kernel/config/nvmet/subsystems/nvme0n1/attr_allow_any_host
 mkdir /sys/kernel/config/nvmet/subsystems/nvme0n1/namespaces/1
 echo -n /dev/nvme0n1 > /sys/kernel/config/nvmet/subsystems/nvme0n1/namespaces/1/device_path
 echo 1 > /sys/kernel/config/nvmet/subsystems/nvme0n1/namespaces/1/enable
 mkdir /sys/kernel/config/nvmet/ports/1
 echo "ipv4" > /sys/kernel/config/nvmet/ports/1/addr_adrfam
 echo "tcp" > /sys/kernel/config/nvmet/ports/1/addr_trtype
 echo "11345" > /sys/kernel/config/nvmet/ports/1/addr_trsvcid
 echo "192.168.0.10" > /sys/kernel/config/nvmet/ports/1/addr_traddr
 ln -s /sys/kernel/config/nvmet/subsystems/nvme0n1 /sys/kernel/config/nvmet/ports/1/subsystems/nvme0n1

Enable the service to come up on boot:

 systemctl enable nvmet-tcp.service
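
After creating or editing the unit file you can also reload systemd and start the service straight away to test it without a reboot:

 systemctl daemon-reload
 systemctl start nvmet-tcp.service
 systemctl status nvmet-tcp.service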

Host System:

Add the kernel module to modules-load.d so CentOS will load it on boot:

 vim /etc/modules-load.d/nvme-tcp.conf

Insert the following line into nvme-tcp.conf:

 nvme-tcp

Create a new service unit file at /etc/systemd/system/nvme-tcp.service:

 [Unit]
 Description=Service unit file for NVMe over Fabrics TCP host system
 After=network.target
 [Service]
 Type=simple
 ExecStart=/usr/local/sbin/nvme-tcp.sh
 TimeoutStartSec=0
 [Install]
 WantedBy=default.target

Create the ExecStart script outlined above:

 vim /usr/local/sbin/nvme-tcp.sh

Make script executable:

 chmod +x /usr/local/sbin/nvme-tcp.sh

Insert the following into nvme-tcp.sh:

 #!/bin/bash
 # Connect to the NVMe over Fabrics TCP target and mount it on boot (run by nvme-tcp.service).
 # The /dev/nvme0n1 device name assumes the host has no other NVMe devices.
 nvme connect -t tcp -a 192.168.0.10 -s 11345 -n nvme0n1
 mkdir -p /mnt/nvme01
 mount /dev/nvme0n1 /mnt/nvme01/
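
Finally, enable the host service so it connects and mounts on boot:

 systemctl enable nvme-tcp.service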