== Solarflare NVMeoF over TCP Linux kernel drivers ==
Version: v18.1-rev3

== Introduction ==

This is a pre-NVM Express specification release from Solarflare
of a TCP transport for NVMe over Fabrics (NVMeoF).  The package
includes Linux kernel drivers which enable NVMeoF over TCP.

== Supported platforms ==

This release is for Red Hat Enterprise Linux or CentOS 7.4 only, with the kernel and kernel-devel packages at version 3.10.0-693.el7.x86_64.

== Prerequisites ==

When used with Solarflare adapters, we recommend installing the latest
Solarflare Linux NET driver for best performance.

The latest Solarflare
driver packages are available from https://support.solarflare.com.

The installation of the NVMeoF over TCP drivers requires the kernel-devel
package for the target kernel version to be installed on the system.

If you have installed CentOS Minimal, you will also need to install gcc and rpm-build (the package that provides rpmbuild).
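
A minimal sketch of installing these build prerequisites with yum (this assumes the stock CentOS repositories, and that kernel-devel should match the running kernel):

  yum install gcc rpm-build kernel-devel-$(uname -r)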
== Installation ==

To build for the currently running kernel, use:

  rpmbuild --rebuild <source rpm file>

To install the resulting binary rpm, locate the file that was created in
the preceding step, prefixed with the annotation "Wrote:", then type:

  rpm -Uvh <binary rpm filename>
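
As a purely illustrative sequence (the rpm filenames below are hypothetical; use the names reported by rpmbuild, which by default writes binary rpms under ~/rpmbuild/RPMS/<arch>/):

  rpmbuild --rebuild sfc-nvme-tcp.src.rpm
  rpm -Uvh ~/rpmbuild/RPMS/x86_64/sfc-nvme-tcp-*.x86_64.rpm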

Repeat the above installation procedure for both the host and target systems.
The 'target' system is where the NVMe block device is located.  The 'host'
system is the system connecting to the device.

On the *host* system, load the nvme-tcp module:

  modprobe nvme-tcp

Check that it has loaded:

  lsmod | grep nvme_tcp

You should see nvme_tcp, nvme_fabrics and nvme_core modules in use.
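
If the module should be loaded automatically at boot, one option on RHEL/CentOS 7 is a systemd modules-load.d entry (a sketch; the file name is an arbitrary choice):

  echo nvme-tcp > /etc/modules-load.d/nvme-tcp.conf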

On the *target* system, load the nvmet-tcp module:

  modprobe nvmet-tcp

Check that it has loaded:

  lsmod | grep nvmet_tcp

You should see the nvmet_tcp and nvmet modules in use.

== Configuration ==
On the *target* system, perform the steps below.  The example commands assume
a target IP of 10.0.0.1, a subsystem name of ramdisk and an underlying block
device /dev/ram0.  Users should specify their NVMe block device in place
of this. (If you do wish to test with a ramdisk and /dev/ram0 is
not present, then on RHEL 7 you may need to load the brd module:
'modprobe brd rd_size=<x>')
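
If a ramdisk is used, a single 1 GiB one could be created as follows (a sketch only; brd's rd_size is given in KiB and the size chosen here is arbitrary):

  modprobe brd rd_nr=1 rd_size=1048576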

  # Set up storage subsystem. 'ramdisk' is used below as an arbitrary name
  # for the NVMe subsystem.
  mkdir /sys/kernel/config/nvmet/subsystems/ramdisk
  echo 1 > /sys/kernel/config/nvmet/subsystems/ramdisk/attr_allow_any_host

  mkdir /sys/kernel/config/nvmet/subsystems/ramdisk/namespaces/1
  echo -n /dev/ram0 > /sys/kernel/config/nvmet/subsystems/ramdisk/namespaces/1/device_path
  echo 1 > /sys/kernel/config/nvmet/subsystems/ramdisk/namespaces/1/enable

  # Set up port
  mkdir /sys/kernel/config/nvmet/ports/1
  echo "ipv4" > /sys/kernel/config/nvmet/ports/1/addr_adrfam
  echo "tcp" > /sys/kernel/config/nvmet/ports/1/addr_trtype
  echo "10.0.0.1" > /sys/kernel/config/nvmet/ports/1/addr_traddr
  echo "11345" > /sys/kernel/config/nvmet/ports/1/addr_trsvcid

  # If a firewall is active on the system, ensure it allows TCP traffic
  # to pass using the TCP port number configured in the above commands
  # (11345 in this example).
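  # For example, with firewalld (an assumption; adapt this to whichever
  # firewall is in use) the port could be opened with:
  #   firewall-cmd --permanent --add-port=11345/tcp
  #   firewall-cmd --reload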

  # Associate subsystem with port
  ln -s /sys/kernel/config/nvmet/subsystems/ramdisk /sys/kernel/config/nvmet/ports/1/subsystems/ramdisk
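
To confirm that the target is now listening on the configured TCP port, the listening socket can be inspected (a quick sanity check; the socket is created by the kernel, so no owning process is shown):

  ss -ltn | grep 11345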

On the *host* system, perform the steps below.  The example
commands assume a target IP of 10.0.0.1 and port 11345.

  # Install a version of the nvme-cli utility that supports NVMeoF over TCP.
  git clone https://github.com/solarflarecommunications/nvme-cli
  cd nvme-cli
  make
  make install
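
  # Optionally, query the target's discovery service before connecting.
  # This assumes the target configured above is reachable and that the
  # installed nvme-cli supports discovery over the tcp transport.
  nvme discover -t tcp -a 10.0.0.1 -s 11345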

  # Connect to the target
  nvme connect -t tcp -a 10.0.0.1 -s 11345 -n ramdisk

  # Confirm NVMe device is present
  lsblk | grep nvme
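
The new controller and its namespace can also be listed with nvme-cli and, when the device is no longer needed, disconnected using the subsystem name from this example:

  nvme list
  nvme disconnect -n ramdisk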
