Linux: Installing Mellanox VSA

Installation Prerequisites

VSA depends on the following RPM packages. Before installing VSA, ensure that they are installed:

  • sg3_utils: SCSI Generic utils, required by VSA management.
  • sg3_utils-libs: Libraries needed for the utils (on RHEL).
  • mdadm: Required to create, manage, and monitor Linux MD (software RAID) devices.
  • device-mapper: User-space files and tools for the device-mapper.
  • lvm2: Required for managing physical volumes and creating logical volumes.
  • Python packages:
    • python
    • python-twisted-web (required by vsacli-2.1.1-2.el6.noarch)
    • python-twisted-conch (required by vsacli-2.1.1-2.el6.noarch)
yum install sg3_utils sg3_utils-libs mdadm device-mapper lvm2 python python-twisted-web python-twisted-conch
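
To check which of these packages are already present before running yum, rpm can query them directly (standard rpm usage, not VSA-specific):

rpm -q sg3_utils sg3_utils-libs mdadm device-mapper lvm2 python python-twisted-web python-twisted-conch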

Installing

The VSA.tgz archive file contains the installation file install.vsa-<x.x.x.el>.tgz for the following Linux distributions:

  • RedHat
  • CentOS

The ./install.sh installation script:

  • Installs the package content.
  • Creates VSA users.
  • Configures/initializes the VSA services to start upon reboot.

The installation adds three users, all of whom can log in using a remote shell (SSH):

  • vsadmin is the privileged user who has access to all features.
  • vsuser is the non-privileged user without access to configuration commands.
  • vsfiles has limited remote SFTP access to VSA log and configuration files.

NOTE: Before installing VSA, ensure that SELinux is disabled.
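
A quick way to check and adjust SELinux on RHEL/CentOS is shown below (standard commands, not part of the VSA package); for a permanent change, set SELINUX=disabled in /etc/selinux/config and reboot:

getenforce       # prints Enforcing, Permissive, or Disabled
setenforce 0     # switch to permissive mode until the next reboot

Once SELinux is disabled, extract the package and run the installer: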

tar -xzvf install.vsa-<x.x.x.el>.tgz
cd release.vsa-<x.x.x.el>
./install.sh

VSA License

Save the license file on your local computer, or on the master and standby nodes in a cluster installation, under /opt/vsa/files/.
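
For example, assuming the license file is named vsa.lic (the actual file name will differ), it can be copied into place with standard commands, either on the node itself or from a remote machine:

cp vsa.lic /opt/vsa/files/                     # copy on the VSA node itself
scp vsa.lic root@<vsa-node>:/opt/vsa/files/    # copy from a remote machine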

Viewing License Information

[root@blade7 vsa]# vscli --lic
1 licenses
|-------------------------------------------------------------------------------------------------------------------------------|
| Software name |  Customer ID    |     SN     |     Type      |   MAC Address   |   Exp. Date   |Limit| Functionality | Status |
|-------------------------------------------------------------------------------------------------------------------------------|
|VSA            |Boston Solutions |4076        |Evaluation     |NA               |2013-12-31     |1    |Standard       |Valid   |
|-------------------------------------------------------------------------------------------------------------------------------|

VSA Script

The vsa script is a helper tool that simplifies management of the different VSA services and initiates VSA cluster management operations. To view the available options, type vsa --help.

Starting VSA

vsa start

Configuration

First, complete the VSA and license installation, then run VSCLI. Before any configuration step, ensure you are in config mode. To enter config mode, type config.

[root@Blade7 vsa]# vscli

Welcome to Mellanox Storage Accelerator console!, Ver 2.1
Type: help or help <command> for more details
Type: help quick for quick step by step configuration guide

*** Evaluation License: 123 days left before it expires, make sure to obtain a Permanent License ***

VSA-root> config
VSA-/#

Configuration details can be viewed by issuing a show command.

VSA-/# show

SAN Fabric
State=created
Alarm State=none
Requested (User) State=enabled
name=SAN
description=SAN Fabric

  /providers
     State    Name     role        url     ifs FCs procs Cache Zone
    running  Blade7 standalone Blade7:7081  3   0    1    0/0

  /disks
     State     Idx                Name                Size  Cache  Vendor       Model                    Serial
    running  L  0  3600304800ead2300195d747d061ccf55 3721GB   0   SMC      SMC2208          0055cf1c067d745d190023ad0e800403
    running     1              Blade7:sdb              0      0   AMI      Virtual Floppy0
    running     2              Blade7:sdc              0      0   AMI      Virtual HDisk0

  /pools
     State      Name    Size   Used     Free   PVs LVs Raid Provider  attr  Quality Chunk
    running  vg_blade7 3721GB 3721GB -381038MB  1   3  none  Blade7  wz--n- unknown   4

  /servers
     State     Name      OS   IPs      vWWNN       vHBAs vDisks Targets
    running  everyone unknown ALL 0008f1111fff0000   0     0       1

1. Define a new server group (a set of clients/initiators).
If storage access control is not required, the built-in everyone server group can be used.
For example:

add servers dbsrv ips=10.10.10.3;10.10.11.3

A new server group (dbsrv) is created with two IP addresses that targets will accept connections from.

2. Create a new storage target and map it to a server group (or everyone), and define the transport (RDMA or TCP).
For example:

  • TCP
    add targets iqn.tar1 server=everyone
  • RDMA
    add targets iqn.tar1 server=dbsrv,transport=iser

Note: By default, the iSCSI TCP transport is selected; use RDMA (iSER) only if the initiator supports RDMA.

VSA-/# show
.
.
  /targets
     State   Idx   Name    Server  Transport  Provider Luns Sessions
    running   1  iqn.tar1 everyone   iscsi   Blade7:p0  1      0

3. Add LUNs/storage to the target, and map VSA-discovered local or FC-based storage to target LUNs (this enables the initiators to see the storage).
For example:

add targets/iqn.tar1/luns volume=v.vg_blade7.lv_home

A new LUN is added to the target, mapped to the storage object <vol> (v.vg_blade7.lv_home in this example).

  • Replace <vol> with one of the storage object names discovered in the system.
  • To see available storage, use the show command. Physical disks are marked d<n> (n=number), as illustrated below. For more information on volume naming and options, type help class lun, or see VSA LUN options.
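
As a hypothetical illustration of the d<n> naming (take the actual index from the show output), a discovered physical disk with index 0 could be mapped to the same target with:

add targets/iqn.tar1/luns volume=d0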

Verify Target Configuration

VSA incorporates an accelerated version of the Linux SCSI Target (tgt). The accelerated tgt service (isertgtd) and files are installed automatically as part of the VSA installation process. The main tgt management command is tgtadm.

To start, stop, or query the status of tgt

/etc/init.d/isertgtd {start|stop|status}

This is different from /etc/init.d/tgtd in the open-source Linux distribution. VSA incorporates a watchdog/monitor service that automatically restarts the isertgtd service if it is terminated.

To view the current tgt configuration

tgtadm --mode target --op show

Storage Providers

A VSA device/provider is a device running the VSA stack in a multi-device or cluster configuration. The VSA manager can control multiple providers.

Each provider has multiple network interface (ifc) objects.

A provider name typically prefixes relevant objects, for example, an interface name is composed of <provider-name>:<local-name> (vsa1:eth0).

Adding a new provider

add provider <url> role=compute

Provider parameters

CLI            Description
<url>          Remote hostname or IP address
cachedevices   Device to be used for disk caching
zone           Physical location (for example, rack or site)
role           Role in cluster deployment: master, standby, or compute
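
Putting these parameters together, a hypothetical provider addition might look like the following (the IP address and zone value are placeholders):

add provider 192.168.10.20 role=compute zone=rack1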

Initiator Configuration

Overview

The Linux initiator stack contains two sets of elements:

  • iscsi-initiator-utils (user space management utilities):
    • iscsid(8) daemon
    • iscsiadm(8) administration utility
    • iscsi service
    • additional utilities
  • iSCSI kernel modules:
    • scsi_transport_iscsi
    • libiscsi
    • iscsi_tcp (the TCP transport)
    • ib_iser (the iSER transport)
    • additional transports

For the RHEL, OEL, and CentOS distributions, install the iSCSI initiator package using:

yum install iscsi-initiator-utils

Recommended Platforms

  • The OFED 1.5.x series supports the iSER initiator transport only for RHEL 5.4.

For other distributions, OFED-based setups are possible with OFED 1.4.x only.

  • We strongly recommend using the iSCSI and IB stacks provided by the Linux distribution, which support RHEL 5.4, RHEL 5.5, RHEL 5.6, and RHEL 6.
  • We recommend using IB stacks provided by the Linux distribution, not from the OFED version.

Notes:

  • The kernel modules in iSCSI depend on other modules (crypto, IB, etc.).

These modules might come from different and potentially unsynchronized sources, leading to binary or version incompatibility.

  • Configurations with mixed OFED and Linux distribution versions are not supported. Thus, when using OFED packages, avoid situations where the iSCSI and/or iSER modules are from the Linux distribution, while IB modules are from OFED, or vice versa. Mixed configurations lead to symbol inconsistency, preventing ib_iser from loading.

Install the IB stack provided by the Linux distribution using the standard yum(8) mechanism with a dedicated yum group, which incorporates the various IB packages.

The group name for the EL (RHEL, OEL, CentOS) 5.x series is different from the group name for the EL 6.x series.

The commands to install the group on both environments are:

  • on EL 5.x:
yum groupinstall "OpenFabrics Enterprise Distribution"
  • on EL 6.x:
yum groupinstall "Infiniband Support"

In the 5.x series, the IB stack service is called "openibd", and in the 6.x series "rdma". A service start is required following the installation.
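
For example, on an EL 6.x system the service can be started and enabled at boot as follows (substitute openibd for rdma on the 5.x series):

service rdma start      # start the IB stack now
chkconfig rdma on       # start it automatically at boot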

Open iSCSI Initiator Configuration

1. Configure the network device (IPoIB for InfiniBand) and verify its operation.
2. Verify connectivity between the Linux client and the VSA provider. From the Linux client, run the ping command to the relevant IP address on the VSA server (the address/interface that will be used for the iSER or iSCSI connection).
3. Start the iscsi service (/etc/init.d/iscsi on Red Hat and /etc/init.d/open-iscsi on SLES).
4. Perform iSCSI discovery with the VSA target using the iscsi_discovery script (provided in the VSA installation tar file):

iscsi_discovery <ip-addr>

where <ip-addr> is the target's IPoIB address. If the target transport was explicitly set to be iser, use:

iscsi_discovery <ip-addr> -t iser

5. Verify that a node was created for the discovered target, and that the transport to be used is iser. Log in to the target, and verify that a session is established using the following sequence of commands (<target-name> represents the target discovered previously):

iscsiadm -m node
iscsiadm -m node -T <target-name> | grep transport
iscsiadm -m node -T <target-name> --login
iscsiadm -m session

dmesg | tail
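
Before troubleshooting further, the iSER kernel module can be confirmed as loaded using standard Linux commands (not VSA-specific):

lsmod | grep ib_iser    # should list the module if it is loaded
modprobe ib_iser        # load it manually if it is missing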

If the initiator failed to connect to VSA after discovery:

  1. Ping the VSA server IP address to verify connectivity (from the client).
  2. Verify that the VSA server has a valid target that is mapped to the everyone server group or to a server group with the client IP in its ips list. Verify that this target's transport is iser.
  3. Verify that the firewall is not enabled on the VSA provider machine or on the Linux client (see the example below).
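
For the firewall check in step 3, on RHEL/CentOS the service state can be inspected, and temporarily stopped for testing, with:

service iptables status
service iptables stop    # stop the firewall temporarily for testing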