ScaleIO: Install and configure on CentOS 6.2
- Beta testing - tested on PCM 3.1 with CentOS 6.2
Terminology
- MDM - Meta Data Manager (config monitoring, passive system mapping; 1 or 2 MDMs)
- SDS - ScaleIO Data Server (install on all systems exporting storage)
- SDC - ScaleIO Data Client (install on all servers that require access to the vSAN)
- TB - Tie Breaker (only required with two MDMs; install on a non-MDM host)
Installation
Very straightforward. In this configuration we're only going to use a single MDM server. We have a 16-node cluster; all hosts are going to be SDSs and SDCs.
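The compute-node steps below fan commands out with pdsh, so it's worth a quick check that it can reach every node first (this assumes passwordless root ssh to all hosts is already set up):
# should print one line per node
pdsh -a uptime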
Installing the MDM
rpm -ivh /shared/mpiuser/ScaleIO/scaleio-mdm-1.0-02.82.el6.x86_64.rpm
Install the SDS
# on headnode
rpm -ivh /shared/mpiuser/ScaleIO/scaleio-sds-1.0-02.82.el6.x86_64.rpm
# on compute
pdsh -a rpm -ivh /shared/mpiuser/ScaleIO/scaleio-sds-1.0-02.82.el6.x86_64.rpm
Install the SDC
# on headnode
MDM_IP=192.168.4.2 rpm -ivh /shared/mpiuser/ScaleIO/scaleio-sdc-1.0-02.82.el6.x86_64.rpm
# on compute
pdsh -a MDM_IP=192.168.4.2 rpm -ivh /shared/mpiuser/ScaleIO/scaleio-sdc-1.0-02.82.el6.x86_64.rpm
RPMs install in /opt/scaleio/[mdm|sds|sdc]
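A quick sanity check that the packages actually landed everywhere (the grep is just a loose match against the package names above):
# on headnode
rpm -qa | grep scaleio
# on compute
pdsh -a 'rpm -qa | grep scaleio'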
Configure the MDM
Configure a single MDM server:
/opt/scaleio/mdm/bin/cli --add_primary_mdm --primary_mdm_ip 172.28.0.209
Installing as part of a failover cluster setup
# on the primary MDM system
/opt/scaleio/mdm/bin/cli --add_primary_mdm --primary_mdm_ip 192.168.1.4 --virtual_ip 192.168.4.200 --interface_name eth0
# on the secondary MDM system
/opt/scaleio/mdm/bin/cli --mdm_ip 192.168.1.4 --add_secondary_mdm --secondary_mdm_ip 192.168.1.6 --interface_name eth0
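With two MDMs you also need the Tie Breaker on a non-MDM host (see Terminology above). Assuming the TB package follows the same naming convention as the other RPMs (the filename below is a guess):
# hypothetical package name - check your media for the actual TB RPM
rpm -ivh /shared/mpiuser/ScaleIO/scaleio-tb-1.0-02.82.el6.x86_64.rpm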
Configure the SDS
Add SDSs. For this test the devices are just files (they can also be real devices or partitions); the sketch below creates the backing files first.
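A minimal way to create the file-backed devices on every node (the 10 GB size is an arbitrary assumption):
# sparse 10 GB backing file on the headnode...
dd if=/dev/zero of=/data/mpiuser/scaleio.fs bs=1M count=0 seek=10240
# ...and on the compute nodes
pdsh -a 'dd if=/dev/zero of=/data/mpiuser/scaleio.fs bs=1M count=0 seek=10240'
Now add each SDS: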
./cli --add_sds --device_name /data/mpiuser/scaleio.fs --sds_ip 192.168.4.2 --sds_name "sds_1"
./cli --add_sds --device_name /data/mpiuser/scaleio.fs --sds_ip 192.168.4.3 --sds_name "sds_2"
./cli --add_sds --device_name /data/mpiuser/scaleio.fs --sds_ip 192.168.4.4 --sds_name "sds_3"
./cli --add_sds --device_name /data/mpiuser/scaleio.fs --sds_ip 192.168.4.5 --sds_name "sds_4"
./cli --add_sds --device_name /data/mpiuser/scaleio.fs --sds_ip 192.168.4.6 --sds_name "sds_5"
./cli --add_sds --device_name /data/mpiuser/scaleio.fs --sds_ip 192.168.4.7 --sds_name "sds_6"
./cli --add_sds --device_name /data/mpiuser/scaleio.fs --sds_ip 192.168.4.8 --sds_name "sds_7"
./cli --add_sds --device_name /data/mpiuser/scaleio.fs --sds_ip 192.168.4.9 --sds_name "sds_8"
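On a 16-node cluster this gets repetitive; a bash loop does the same thing (assuming the hosts sit on consecutive addresses 192.168.4.2-192.168.4.17):
# sds_1 lives on 192.168.4.2, sds_2 on .3, and so on
for i in $(seq 2 17); do
  ./cli --add_sds --device_name /data/mpiuser/scaleio.fs --sds_ip 192.168.4.$i --sds_name "sds_$((i-1))"
done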
Note: If you add an SDS with some incorrect parameters, you'll need --force_clean when re-adding:
./cli --add_sds --device_name /data/mpiuser/scaleio.fs --sds_ip 192.168.4.10 --sds_name "sds_9" --force_clean
Example if using more than one device per host:
[root@gpuheadnode shared]$ /opt/scaleio/mdm/bin/cli --add_sds --device_name /dev/sdb,/dev/sdc --sds_ip 10.1.1.1 --sds_name "sds_1"
Parsed device name /dev/sdb
Parsed device name /dev/sdc
Successfully created SDS sds_1. Object ID 726711bd00000000
Verify all the SDSs are working ok:
[root@gpuheadnode shared]$ /opt/scaleio/mdm/bin/cli --query_all_sds
Query-all-SDS returned 4 SDS nodes.
SDS 726711bd00000000 Name: sds_1 IP: 10.1.1.1 Port: 7072
SDS 726711be00000001 Name: sds_2 IP: 10.1.1.2 Port: 7072
SDS 726711bf00000002 Name: sds_3 IP: 10.1.1.3 Port: 7072
SDS 726711c000000003 Name: sds_4 IP: 10.1.1.4 Port: 7072
Create a Volume
[root@gp-2-0 bin]$ ./cli --add_volume --size_gb 100 --volume_name "Volume_1"
Rounding up volume size to 104 GB
Successfully created volume of size 104 GB. Object ID 9ffb781e00000000
ScaleIO appears to allocate in 8 GB increments, hence 100 GB rounding up to 104 GB (13 x 8).
Map to an SDC
To map to a single SDC you'll need the volume_id and the sdc_id (both can be queried). Let's start by getting the volume ID:
[root@gp-2-0 bin]$ ./cli --query_all_volumes
Query all volumes returned 1 volumes .
Volume ID 9ffb781e00000000 Name: Volume_1. Size:104 GB (106496 MB) . Volume is unmapped.
Volume Summary:
1 volumes. Total size:104 GB (106496 MB)
1 volumes mapped to no SDC
Now let's get the sdc_id:
[root@gp-2-0 bin]$ ./cli --query_all_sdc
Query all SDC returned 16 SDC nodes.
SDC Id: 35fb9b9100000000 SDC IP 192.168.4.2
SDC Id: 35fb9b9200000001 SDC IP 192.168.4.12
SDC Id: 35fb9b9300000002 SDC IP 192.168.4.13
SDC Id: 35fb9b9400000003 SDC IP 192.168.4.15
SDC Id: 35fb9b9500000004 SDC IP 192.168.4.17
SDC Id: 35fb9b9600000005 SDC IP 192.168.4.4
SDC Id: 35fb9b9700000006 SDC IP 192.168.4.14
SDC Id: 35fb9b9800000007 SDC IP 192.168.4.16
SDC Id: 35fb9b9900000008 SDC IP 192.168.4.10
SDC Id: 35fb9b9a00000009 SDC IP 192.168.4.5
SDC Id: 35fb9b9b0000000a SDC IP 192.168.4.7
SDC Id: 35fb9b9c0000000b SDC IP 192.168.4.9
SDC Id: 35fb9b9d0000000c SDC IP 192.168.4.11
SDC Id: 35fb9b9e0000000d SDC IP 192.168.4.8
SDC Id: 35fb9b9f0000000e SDC IP 192.168.4.3
SDC Id: 35fb9ba00000000f SDC IP 192.168.4.6
Using the info above, let's map the volume to the first SDC:
[root@gp-2-0 bin]$ ./cli --map_volume_to_sdc --volume_id 9ffb781e00000000 --sdc_id 35fb9b9100000000
Successfully mapped volume with ID 9ffb781e00000000 to SDC with ID 35fb9b9100000000.
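If you're scripting this, the IDs can be scraped straight out of the query output shown above (a sketch; the awk patterns simply match the output format of this CLI version):
# extract the volume ID and the SDC ID for a given IP, then map
VOL_ID=$(./cli --query_all_volumes | awk '/^Volume ID/ {print $3}')
SDC_ID=$(./cli --query_all_sdc | awk '/SDC IP 192\.168\.4\.2$/ {print $3}')
./cli --map_volume_to_sdc --volume_id "$VOL_ID" --sdc_id "$SDC_ID"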
Or if we just want to map the volume to all SDCs:
/opt/scaleio/mdm/bin/cli --map_volume_to_sdc --volume_id <volumeid> --visible_to_all_sdc
Mount the Volume
This creates a local device: /dev/scinia
# from dmesg
Open Storage R1_0:Created device scinia (16,0). Capacity 218103808 LB
scinia: unknown partition table
Create a disk label, partition as normal, format, and mount:
parted /dev/scinia -- mklabel msdos
parted /dev/scinia -- mkpart primary ext3 1 -1
mkfs.ext3 /dev/scinia1
mount /dev/scinia1 /mnt
When volumes are exported to all SDCs, create the partition table on one node and then rescan the partition tables on the rest of the SDCs so /dev/scinia1 appears:
[root@compute002 ~]$ ls /dev/scini*
/dev/scini /dev/scinia
[root@compute002 ~]$ hdparm -z /dev/scinia
/dev/scinia:
[root@compute002 ~]$ ls /dev/scini*
/dev/scini /dev/scinia /dev/scinia1
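If hdparm -z doesn't pick the partition up, partprobe (from parted) or blockdev --rereadpt trigger the same rescan, and pdsh can fan it out to every SDC at once:
# either of these also re-reads the partition table
partprobe /dev/scinia
blockdev --rereadpt /dev/scinia
# rescan all nodes in one go
pdsh -a 'hdparm -z /dev/scinia'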