ScaleIO: Install and configure on Centos 6.2

  • Beta testing - tested with CentOS 6.2

Terminology

  • MDM - Meta Data Manager (configuration and monitoring, passive system mapping; 1 or 2 MDMs)
  • SDS - ScaleIO Data Server (install on all systems exporting storage)
  • SDC - ScaleIO Data Client (install on all servers that require access to the vSAN)
  • TB - Tie Breaker (only required when running 2 MDMs; install on a non-MDM host)

Installation

Very straightforward. In this configuration we're only going to use a single MDM server. We have a 16-node cluster; all hosts are going to be both SDSs and SDCs.

Installing the MDM

  rpm -ivh /shared/mpiuser/ScaleIO/scaleio-mdm-1.0-02.82.el6.x86_64.rpm

Install the SDS

  # on headnode
  rpm -ivh /shared/mpiuser/ScaleIO/scaleio-sds-1.0-02.82.el6.x86_64.rpm
  # on compute nodes
  pdsh -a rpm -ivh /shared/mpiuser/ScaleIO/scaleio-sds-1.0-02.82.el6.x86_64.rpm

Install the SDC

  # on headnode
  MDM_IP=192.168.4.2 rpm -ivh /shared/mpiuser/ScaleIO/scaleio-sdc-1.0-02.82.el6.x86_64.rpm
  # on compute nodes
  pdsh -a MDM_IP=192.168.4.2 rpm -ivh /shared/mpiuser/ScaleIO/scaleio-sdc-1.0-02.82.el6.x86_64.rpm

The RPMs install into /opt/scaleio/[mdm|sds|sdc].
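As a quick sanity check (these are standard RPM and pdsh commands, not part of the ScaleIO tooling, and assume pdsh's dshbak helper is installed), you can confirm that the packages landed on every node and that the install directories exist:

  # on the headnode
  rpm -qa | grep -i scaleio
  ls /opt/scaleio
  # across the compute nodes; dshbak -c groups nodes with identical output
  pdsh -a 'rpm -qa | grep -i scaleio' | dshbak -c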

Configure the MDM

Configure a single MDM server:

  /opt/scaleio/mdm/bin/cli --add_primary_mdm --primary_mdm_ip 172.28.0.209

Installing as part of a failover cluster setup

  # on the primary MDM system
  /opt/scaleio/mdm/bin/cli --add_primary_mdm --primary_mdm_ip 192.168.1.4 --virtual_ip 192.168.4.200 --interface_name eth0

  # on the secondary MDM system
  /opt/scaleio/mdm/bin/cli --mdm_ip 192.168.1.4 --add_secondary_mdm --secondary_mdm_ip 192.168.1.6 --interface_name eth0

Setup a Tie Breaker (Cluster Mode)

Install a Tie-Breaker (only required in cluster mode)

  /opt/scaleio/mdm/bin/cli --mdm_ip 192.168.1.200 --add_tb --tb_ip 192.168.1.5

Enable Cluster Mode (only required for the cluster setup)

  /opt/scaleio/mdm/bin/cli --switch_to_cluster_mode --mdm_ip 192.168.1.200

Verify that the cluster is working ok:

  /opt/scaleio/mdm/bin/cli --query_cluster --mdm_ip 192.168.1.200


Set the license

To set the system license, you first need to obtain one from ScaleIO according to your capacity needs. Note that the license is flexible and can be increased or decreased on the fly, as long as it does not conflict with the current SDS capacity in the system.

/opt/scaleio/mdm/bin/cli --mdm_ip 172.28.0.209 --set_license --license_key 1PRWLBLLWUIDLLLECQLI7QD9UKPSPL1G

Enable Data Encryption

ScaleIO vSAN supports data encryption on its devices. Setting the license is a precondition for enabling data encryption.

Note that after SDS modules are added there is no option to change the data encryption state from enabled to disabled, or vice versa.

  /opt/scaleio/mdm/bin/cli --mdm_ip 172.28.0.209 --set_encryption_properties --enable_at_rest

Protection Domains

A protection domain is a group of SDSs in which all the SDS servers protect each other.

ScaleIO vSAN volumes are spread within a protection domain, so a given volume resides in a specific failure domain.

The limits are 100 SDS servers per protection domain and 32 protection domains per ScaleIO vSAN system.


To setup a protection domain:

  /opt/scaleio/mdm/bin/cli --add_protection_domain --protection_domain_name rack_1.1 --mdm_ip 172.28.0.209

Configure the SDS

Add SDSs (the backing devices can be real devices or partitions, as below, or plain files for testing):

Perform this action from the MDM node.

  /opt/scaleio/mdm/bin/cli --add_sds \
      --protection_domain_name rack_1.1 \
      --device_name /dev/sdb,/dev/sdc \
      --sds_ip 192.168.1.9 \
      --sds_name sds_1 \
      --mdm_ip 192.168.1.200

# To set up our cluster (from the MDM node)
/opt/scaleio/mdm/bin/cli --add_sds --protection_domain_name rack_1.1 --device_name /dev/sdb,/dev/sdc --sds_ip 172.28.0.209 --sds_name "sds_1" --mdm_ip 172.28.0.209
/opt/scaleio/mdm/bin/cli --add_sds --protection_domain_name rack_1.1 --device_name /dev/sdb,/dev/sdc --sds_ip 172.28.1.66 --sds_name "sds_2" --mdm_ip 172.28.0.209
/opt/scaleio/mdm/bin/cli --add_sds --protection_domain_name rack_1.1 --device_name /dev/sdb,/dev/sdc --sds_ip 172.28.1.67 --sds_name "sds_3" --mdm_ip 172.28.0.209
/opt/scaleio/mdm/bin/cli --add_sds --protection_domain_name rack_1.1 --device_name /dev/sdb,/dev/sdc --sds_ip 172.28.1.70 --sds_name "sds_4" --mdm_ip 172.28.0.209
/opt/scaleio/mdm/bin/cli --add_sds --protection_domain_name rack_1.1 --device_name /dev/sdb,/dev/sdc --sds_ip 172.28.1.69 --sds_name "sds_5" --mdm_ip 172.28.0.209
/opt/scaleio/mdm/bin/cli --add_sds --protection_domain_name rack_1.1 --device_name /dev/sdb,/dev/sdc --sds_ip 172.28.1.68 --sds_name "sds_6" --mdm_ip 172.28.0.209
/opt/scaleio/mdm/bin/cli --add_sds --protection_domain_name rack_1.1 --device_name /dev/sdb,/dev/sdc --sds_ip 172.28.1.71 --sds_name "sds_7" --mdm_ip 172.28.0.209
/opt/scaleio/mdm/bin/cli --add_sds --protection_domain_name rack_1.1 --device_name /dev/sdb,/dev/sdc --sds_ip 172.28.1.72 --sds_name "sds_8" --mdm_ip 172.28.0.209
/opt/scaleio/mdm/bin/cli --add_sds --protection_domain_name rack_1.1 --device_name /dev/sdb,/dev/sdc --sds_ip 172.28.1.99 --sds_name "sds_9" --mdm_ip 172.28.0.209
/opt/scaleio/mdm/bin/cli --add_sds --protection_domain_name rack_1.1 --device_name /dev/sdb,/dev/sdc --sds_ip 172.28.1.73 --sds_name "sds_10" --mdm_ip 172.28.0.209

Note: If you add an SDS with incorrect parameters, you'll need to use --force_clean when re-adding it:

 ./cli --add_sds --device_name /data/mpiuser/scaleio.fs --sds_ip 192.168.4.10 --sds_name "sds_9" --force_clean

Example when using more than one device per host:

[root@gpuheadnode shared]$ /opt/scaleio/mdm/bin/cli --add_sds --device_name /dev/sdb,/dev/sdc --sds_ip 10.1.1.1 --sds_name "sds_1"
Parsed device name /dev/sdb
Parsed device name /dev/sdc
Successfully created SDS sds_1. Object ID 726711bd00000000

Verify that all the SDSs are working OK:

[root@gpuheadnode shared]$ /opt/scaleio/mdm/bin/cli --query_all_sds
Query-all-SDS returned 4 SDS nodes.
SDS 726711bd00000000 Name: sds_1 IP: 10.1.1.1 Port: 7072
SDS 726711be00000001 Name: sds_2 IP: 10.1.1.2 Port: 7072
SDS 726711bf00000002 Name: sds_3 IP: 10.1.1.3 Port: 7072
SDS 726711c000000003 Name: sds_4 IP: 10.1.1.4 Port: 7072

Create a Volume

# from the 1_1_806 build, the protection domain must be included
$ ./cli --add_volume --protection_domain_name domain_1 --size_gb 100 --volume_name "Volume_1"

# from 1_1_706 build
[root@gp-2-0 bin]$ ./cli --add_volume --size_gb 100 --volume_name "Volume_1"
Rounding up volume size to 104 GB
Successfully created volume of size 104 GB. Object ID 9ffb781e00000000

Map to an SDC

To map to a single SDC you'll need the volume_id and sdc_id (query for both). Let's start by getting the volume ID:

[root@gp-2-0 bin]$ ./cli --query_all_volumes
Query all volumes returned 1 volumes .
Volume ID 9ffb781e00000000 Name: Volume_1. Size:104 GB (106496 MB) . Volume is unmapped.
Volume Summary:
        1 volumes. Total size:104 GB (106496 MB)
        1 volumes mapped to no SDC

Now let's get the sdc_id:

[root@gp-2-0 bin]$ ./cli --query_all_sdc
Query all SDC returned 16 SDC nodes.
SDC Id: 35fb9b9100000000 SDC IP 192.168.4.2
SDC Id: 35fb9b9200000001 SDC IP 192.168.4.12
SDC Id: 35fb9b9300000002 SDC IP 192.168.4.13
SDC Id: 35fb9b9400000003 SDC IP 192.168.4.15
SDC Id: 35fb9b9500000004 SDC IP 192.168.4.17
SDC Id: 35fb9b9600000005 SDC IP 192.168.4.4
SDC Id: 35fb9b9700000006 SDC IP 192.168.4.14
SDC Id: 35fb9b9800000007 SDC IP 192.168.4.16
SDC Id: 35fb9b9900000008 SDC IP 192.168.4.10
SDC Id: 35fb9b9a00000009 SDC IP 192.168.4.5
SDC Id: 35fb9b9b0000000a SDC IP 192.168.4.7
SDC Id: 35fb9b9c0000000b SDC IP 192.168.4.9
SDC Id: 35fb9b9d0000000c SDC IP 192.168.4.11
SDC Id: 35fb9b9e0000000d SDC IP 192.168.4.8
SDC Id: 35fb9b9f0000000e SDC IP 192.168.4.3
SDC Id: 35fb9ba00000000f SDC IP 192.168.4.6

Using the info above, let's map the volume to the first SDC:

[root@gp-2-0 bin]$ ./cli --map_volume_to_sdc --volume_id 9ffb781e00000000 --sdc_id 35fb9b9100000000
Successfully mapped volume with ID 9ffb781e00000000 to SDC with ID 35fb9b9100000000.

# or in new versions:
/opt/scaleio/mdm/bin/cli --map_volume_to_sdc --volume_name vol_1 --sdc_ip 192.168.1.3 --mdm_ip 192.168.1.200

# example for a Lustre setup
[root@blade1 ScaleIO]# /opt/scaleio/mdm/bin/cli --map_volume_to_sdc --volume_name vol_1 --sdc_ip 172.28.1.73 --mdm_ip 172.28.0.209
Successfully mapped volume vol_1 to SDC 172.28.1.73.
[root@blade1 ScaleIO]# /opt/scaleio/mdm/bin/cli --map_volume_to_sdc --volume_name vol_2 --sdc_ip 172.28.1.99 --mdm_ip 172.28.0.209
Successfully mapped volume vol_2 to SDC 172.28.1.99.
[root@blade1 ScaleIO]# /opt/scaleio/mdm/bin/cli --map_volume_to_sdc --volume_name vol_3 --sdc_ip 172.28.1.72 --mdm_ip 172.28.0.209
Successfully mapped volume vol_3 to SDC 172.28.1.72.
[root@blade1 ScaleIO]# /opt/scaleio/mdm/bin/cli --map_volume_to_sdc --volume_name vol_4 --sdc_ip 172.28.1.71 --mdm_ip 172.28.0.209
Successfully mapped volume vol_4 to SDC 172.28.1.71.
[root@blade1 ScaleIO]# /opt/scaleio/mdm/bin/cli --map_volume_to_sdc --volume_name vol_5 --sdc_ip 172.28.1.68 --mdm_ip 172.28.0.209
Successfully mapped volume vol_5 to SDC 172.28.1.68.
[root@blade1 ScaleIO]# /opt/scaleio/mdm/bin/cli --map_volume_to_sdc --volume_name vol_LMDS --sdc_ip 172.28.1.66 --mdm_ip 172.28.0.209
Successfully mapped volume vol_LMDS to SDC 172.28.1.66.

Or if we just want to map the volume to all SDCs:

/opt/scaleio/mdm/bin/cli --map_volume_to_sdc --volume_id <volumeid> --visible_to_all_sdc
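To confirm the mappings took effect, re-run the volume query from the MDM node (same command as used earlier); each volume should now report its mapping instead of "Volume is unmapped":

/opt/scaleio/mdm/bin/cli --query_all_volumes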

Mount the Volume

This creates a local device: /dev/scinia

  # from dmesg
  Open Storage R1_0:Created device scinia (16,0). Capacity 218103808 LB
   scinia: unknown partition table

Create a disk label, partition as normal, format, and mount:

  parted /dev/scinia -- mklabel msdos
  parted /dev/scinia -- mkpart primary ext3 1 -1
  mkfs.ext3 /dev/scinia1
  mount /dev/scinia1 /mnt
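If you want the filesystem mounted automatically after a reboot, note that /dev/scinia only appears once the scini driver has loaded, so a plain fstab entry may race with it at boot. A simple approach (a sketch, not from the ScaleIO docs) is to mark the fstab entry noauto and mount it late from rc.local:

  # /etc/fstab - noauto so boot doesn't block if the scini device isn't up yet
  /dev/scinia1  /mnt  ext3  noauto,defaults  0 0

  # /etc/rc.local - mount once the ScaleIO client has created the device
  mount /mnt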

When volumes are exported to all SDCs, create the partition table on one node and then rescan the partition table on the rest of the SDCs so that /dev/scinia1 appears:

[root@compute002 ~]$ ls /dev/scini* 
/dev/scini  /dev/scinia
[root@compute002 ~]$ hdparm -z /dev/scinia

/dev/scinia:
[root@compute002 ~]$ ls /dev/scini* 
/dev/scini  /dev/scinia  /dev/scinia1
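If hdparm -z doesn't pick up the new partition, standard alternatives (not specific to ScaleIO) are partprobe from the parted package or blockdev, both of which ask the kernel to re-read the partition table:

partprobe /dev/scinia
# or
blockdev --rereadpt /dev/scinia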