RHSS:Lab Exercises from Workshop in London May 14
Notes
- Amazon systems setup
- 6 nodes, 2 zones
- Each node has the gluster software installed (using RHS 2.1)
- /dev/md0 is set up on all the nodes (a brick-preparation sketch follows below)
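The bricks themselves are assumed to already be prepared on every node, roughly along these lines (XFS with a 512-byte inode size is the usual RHS recommendation, and /export/brick is the mount point used throughout the lab; this is a sketch, not part of the original transcript):
# run on each node; WARNING: mkfs wipes the device, assumes /dev/md0 is free to format
mkfs.xfs -i size=512 /dev/md0
mkdir -p /export/brick
echo "/dev/md0 /export/brick xfs defaults 0 0" >> /etc/fstab
mount /export/brick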
Setup Gluster
Add all the nodes to the Gluster trusted storage pool
[root@ip-10-100-1-11 ~]# cat /etc/redhat-*
Red Hat Enterprise Linux Server release 6.4 (Santiago)
Red Hat Storage Server 2.1 Update 2
[root@ip-10-100-1-11 ~]# gluster peer probe 10.100.1.11
peer probe: success. Probe on localhost not needed
[root@ip-10-100-1-11 ~]# gluster peer probe 10.100.2.12
peer probe: success.
[root@ip-10-100-1-11 ~]# gluster peer probe 10.100.1.13
peer probe: success.
[root@ip-10-100-1-11 ~]# gluster peer probe 10.100.2.14
peer probe: success.
[root@ip-10-100-1-11 ~]# gluster peer probe 10.100.1.15
peer probe: success.
[root@ip-10-100-1-11 ~]# gluster peer probe 10.100.2.16
peer probe: success.
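The probes could equally be scripted from the first node, for example:
# convenience loop over the same peer IPs used above
for ip in 10.100.2.12 10.100.1.13 10.100.2.14 10.100.1.15 10.100.2.16; do
    gluster peer probe $ip
done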
Check the Status
[root@ip-10-100-1-11 ~]# gluster peer status
Number of Peers: 5
Hostname: 10.100.2.12
Uuid: ff0978d9-7c76-4158-ab46-a6f683a4f398
State: Peer in Cluster (Connected)
Hostname: 10.100.1.13
Uuid: 58740d34-7814-4e8b-9952-cb6f5e7d2666
State: Peer in Cluster (Connected)
Hostname: 10.100.2.14
Uuid: 1371243b-c031-43dd-a92d-db886689a23f
State: Peer in Cluster (Connected)
Hostname: 10.100.1.15
Uuid: d96334c7-2926-4777-9f81-9c820bf249d7
State: Peer in Cluster (Connected)
Hostname: 10.100.2.16
Uuid: 14c4f4fb-f53e-40af-9873-2dace21dc307
State: Peer in Cluster (Connected)
Create a Distributed Volume
[root@ip-10-100-1-11 ~]# gluster volume create glustervol 10.100.1.11:/export/brick/glustervol 10.100.2.12:/export/brick/glustervol
volume create: glustervol: success: please start the volume to access data
[root@ip-10-100-1-11 ~]# gluster volume start glustervol
volume start: glustervol: success
[root@ip-10-100-1-11 ~]# gluster volume info
Volume Name: glustervol
Type: Distribute
Volume ID: 9c6b5cbd-c65c-42bc-89d3-82f63ceb8168
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: 10.100.1.11:/export/brick/glustervol
Brick2: 10.100.2.12:/export/brick/glustervol
Client Access using Gluster Native
- The commands below are run from Client01.
- Check that the client is ready (glusterfs packages installed and the fuse kernel module loaded)
[root@ip-10-100-0-196 ~]# rpm -qa | grep gluster
glusterfs-libs-3.4.0.33rhs-1.el6_4.x86_64
glusterfs-3.4.0.33rhs-1.el6_4.x86_64
glusterfs-fuse-3.4.0.33rhs-1.el6_4.x86_64
[root@ip-10-100-0-196 ~]# lsmod | grep fuse
[root@ip-10-100-0-196 ~]# modprobe fuse
[root@ip-10-100-0-196 ~]# lsmod | grep fuse
fuse 69253 0
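If the client packages or the mount point were missing, they could be put in place first (both are assumed to already exist in this lab):
yum install -y glusterfs glusterfs-fuse   # native client packages (same ones listed by rpm -qa above)
mkdir -p /mnt/gluster                     # mount point used below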
- Mount using gluster native
[root@ip-10-100-0-196 ~]# mount -t glusterfs 10.100.1.11:/glustervol /mnt/gluster
[root@ip-10-100-0-196 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/xvde1 6.0G 2.3G 3.6G 39% /
none 828M 0 828M 0% /dev/shm
10.100.1.11:/glustervol
200G 66M 200G 1% /mnt/gluster
Client Access using NFS
[root@ip-10-100-0-138 ~]# mount -t nfs -o vers=3 10.100.1.11:/glustervol /mnt/nfs/
[root@ip-10-100-0-138 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/xvda1 7.9G 889M 7.0G 12% /
tmpfs 829M 0 829M 0% /dev/shm
10.100.1.11:/glustervol
200G 65M 200G 1% /mnt/nfs
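To make either mount survive a reboot, /etc/fstab entries along these lines could be added on the clients (a sketch, not part of the lab):
# native client
10.100.1.11:/glustervol  /mnt/gluster  glusterfs  defaults,_netdev  0 0
# NFSv3
10.100.1.11:/glustervol  /mnt/nfs      nfs        vers=3,_netdev    0 0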
Test File Distribution
- From any of the Linux clients above, create a batch of files
[root@ip-10-100-0-196 ~]# cd /mnt/gluster/
[root@ip-10-100-0-196 gluster]# ls
[root@ip-10-100-0-196 gluster]# touch {1..30}.txt
[root@ip-10-100-0-196 gluster]# ls
10.txt 13.txt 16.txt 19.txt 21.txt 24.txt 27.txt 2.txt 4.txt 7.txt
11.txt 14.txt 17.txt 1.txt 22.txt 25.txt 28.txt 30.txt 5.txt 8.txt
12.txt 15.txt 18.txt 20.txt 23.txt 26.txt 29.txt 3.txt 6.txt 9.txt
[root@ip-10-100-0-196 gluster]#
- Verify how the files are split across the two servers used (test done on the distributed volume)
[root@ip-10-100-1-11 ~]# ls /export/brick/glustervol/
13.txt 17.txt 19.txt 23.txt 26.txt 27.txt 28.txt 29.txt 4.txt 8.txt 9.txt
[root@ip-10-100-2-12 ~]# ls /export/brick/glustervol/
10.txt 12.txt 15.txt 18.txt 20.txt 22.txt 25.txt 30.txt 5.txt 7.txt
11.txt 14.txt 16.txt 1.txt 21.txt 24.txt 2.txt 3.txt 6.txt
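Each brick in a distribute volume owns a slice of the DHT hash range; a file's name is hashed to pick its brick, which is why the split is roughly even rather than exact. The range assigned to a brick can be inspected on a storage node with getfattr (from the attr package); a sketch:
# run on any gluster node, against the local brick directory
getfattr -n trusted.glusterfs.dht -e hex /export/brick/glustervol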
Modify Volumes / Expand, Rebalance and Shrink
- Add bricks on the new nodes (the peer probes were already run above)
[root@ip-10-100-1-11 ~]# gluster volume add-brick glustervol 10.100.1.13:/export/brick/glustervol
volume add-brick: success
[root@ip-10-100-1-11 ~]# gluster volume add-brick glustervol 10.100.2.14:/export/brick/glustervol
volume add-brick: success
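Note (not shown in this lab): if the volume were replicated rather than distributed, bricks would have to be added in multiples of the replica count. A hypothetical sketch with made-up volume and server names:
# hypothetical - for a "replica 2" volume, bricks are added in pairs
gluster volume add-brick somerepvol serverA:/export/brick/somerepvol serverB:/export/brick/somerepvol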
[root@ip-10-100-1-11 ~]# gluster volume status
Status of volume: glustervol
Gluster process Port Online Pid
------------------------------------------------------------------------------
Brick 10.100.1.11:/export/brick/glustervol 49152 Y 7281
Brick 10.100.2.12:/export/brick/glustervol 49152 Y 7126
Brick 10.100.1.13:/export/brick/glustervol 49152 Y 7272
Brick 10.100.2.14:/export/brick/glustervol 49152 Y 7259
NFS Server on localhost 2049 Y 7536
NFS Server on 10.100.1.13 2049 Y 7314
NFS Server on 10.100.2.14 2049 Y 7271
NFS Server on 10.100.2.16 2049 Y 7271
NFS Server on 10.100.2.12 2049 Y 7335
NFS Server on 10.100.1.15 2049 Y 7249
Task Status of Volume glustervol
------------------------------------------------------------------------------
There are no active volume tasks
- Check the volume size on the clients (it should double to 400GB, since each brick is 100GB)
# on the client
[root@ip-10-100-0-196 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/xvde1 6.0G 2.3G 3.6G 39% /
none 828M 0 828M 0% /dev/shm
10.100.1.11:/glustervol
400G 132M 400G 1% /mnt/gluster
- Check the bricks on the newly added servers - note there are no files on them yet!
# server 4 (10.100.2.14)
[root@ip-10-100-2-14 ~]# ls /export/brick/glustervol/
[root@ip-10-100-2-14 ~]#
# server 3 (10.100.1.13)
[root@ip-10-100-1-13 ~]# ls /export/brick/glustervol/
[root@ip-10-100-1-13 ~]#
- Let's rebalance the volume (the command can be run on any of the gluster nodes)
[root@ip-10-100-2-14 ~]# gluster volume rebalance glustervol start
volume rebalance: glustervol: success: Starting rebalance on volume glustervol has been successful.
ID: 0ceea396-80f5-4f42-b681-5990fd39426b
[root@ip-10-100-2-14 ~]# gluster volume rebalance glustervol status
Node Rebalanced-files size scanned failures skipped status run time in secs
--------- ----------- ----------- ----------- ----------- ----------- ------------ --------------
localhost 0 0Bytes 35 0 0 completed 1.00
10.100.1.11 3 0Bytes 36 0 0 completed 0.00
10.100.2.12 5 0Bytes 38 0 0 completed 1.00
10.100.1.13 0 0Bytes 35 0 0 completed 0.00
volume rebalance: glustervol: success:
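On this tiny data set the rebalance finishes almost instantly; on a real volume it can run for a long time. A couple of optional commands (not used in this lab) that may help:
# poll the migration until every node reports "completed"
watch -n 10 gluster volume rebalance glustervol status
# alternatively, fix only the directory layout without migrating existing data
gluster volume rebalance glustervol fix-layout start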
- Now let's go back and check the newly added servers; the files should be spread across all the nodes now
[root@ip-10-100-2-14 ~]# ls /export/brick/glustervol/
19.txt 27.txt 9.txt
[root@ip-10-100-1-13 ~]# ls /export/brick/glustervol/
24.txt 2.txt 30.txt 5.txt 7.txt
[root@ip-10-100-2-12 ~]# ls /export/brick/glustervol/
10.txt 11.txt 12.txt 14.txt 15.txt 16.txt 18.txt 1.txt 20.txt 21.txt 22.txt 25.txt 3.txt 6.txt
[root@ip-10-100-1-11 ~]# ls /export/brick/glustervol/
13.txt 17.txt 23.txt 26.txt 28.txt 29.txt 4.txt 8.txt
- Let's remove a brick / shrink the volume
[root@ip-10-100-1-11 ~]# gluster volume status
Status of volume: glustervol
Gluster process Port Online Pid
------------------------------------------------------------------------------
Brick 10.100.1.11:/export/brick/glustervol 49152 Y 7281
Brick 10.100.2.12:/export/brick/glustervol 49152 Y 7126
Brick 10.100.1.13:/export/brick/glustervol 49152 Y 7272
Brick 10.100.2.14:/export/brick/glustervol 49152 Y 7259
NFS Server on localhost 2049 Y 7536
NFS Server on 10.100.1.13 2049 Y 7314
NFS Server on 10.100.2.12 2049 Y 7335
NFS Server on 10.100.2.14 2049 Y 7271
NFS Server on 10.100.2.16 2049 Y 7271
NFS Server on 10.100.1.15 2049 Y 7249
Task Status of Volume glustervol
------------------------------------------------------------------------------
Task : Rebalance
ID : 0ceea396-80f5-4f42-b681-5990fd39426b
Status : completed
[root@ip-10-100-1-11 ~]# gluster volume remove-brick glustervol 10.100.2.14:/export/brick/glustervol start
volume remove-brick start: success
ID: ab295169-3f71-4c66-aa7b-05f357c86c8e
[root@ip-10-100-1-11 ~]# gluster volume remove-brick glustervol 10.100.2.14:/export/brick/glustervol status
Node Rebalanced-files size scanned failures skipped status run time in secs
--------- ----------- ----------- ----------- ----------- ----------- ------------ --------------
10.100.2.14 3 0Bytes 30 0 0 completed 0.00
[root@ip-10-100-1-11 ~]# gluster volume remove-brick glustervol 10.100.2.14:/export/brick/glustervol commit
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
volume remove-brick commit: success
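The commit should only be issued once the status above reports "completed" for the brick being removed; until then the data migration can still be aborted (not needed in this lab):
# abort an in-progress remove-brick migration
gluster volume remove-brick glustervol 10.100.2.14:/export/brick/glustervol stop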
[root@ip-10-100-1-11 ~]# gluster volume status
Status of volume: glustervol
Gluster process Port Online Pid
------------------------------------------------------------------------------
Brick 10.100.1.11:/export/brick/glustervol 49152 Y 7281
Brick 10.100.2.12:/export/brick/glustervol 49152 Y 7126
Brick 10.100.1.13:/export/brick/glustervol 49152 Y 7272
NFS Server on localhost 2049 Y 7719
NFS Server on 10.100.1.13 2049 Y 7443
NFS Server on 10.100.2.12 2049 Y 7447
NFS Server on 10.100.2.14 2049 Y 7467
NFS Server on 10.100.2.16 2049 Y 7380
NFS Server on 10.100.1.15 2049 Y 7358
Task Status of Volume glustervol
------------------------------------------------------------------------------
There are no active volume tasks
- Recheck the files on the nodes: notice the files have been redistributed across the remaining 3 bricks
[root@ip-10-100-1-11 ~]# ls /export/brick/glustervol/
13.txt 17.txt 19.txt 23.txt 26.txt 27.txt 28.txt 29.txt 4.txt 8.txt
[root@ip-10-100-2-12 ~]# ls /export/brick/glustervol/
10.txt 11.txt 12.txt 14.txt 15.txt 16.txt 18.txt 1.txt 20.txt 21.txt 22.txt 25.txt 2.txt 30.txt 3.txt 6.txt
[root@ip-10-100-1-13 ~]# ls /export/brick/glustervol/
24.txt 2.txt 30.txt 5.txt 7.txt 9.txt
[root@ip-10-100-2-14 ~]# ls /export/brick/glustervol/
[root@ip-10-100-2-14 ~]#
# Also notice that our total space is reduced to 300GB
[root@ip-10-100-0-196 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/xvde1 6.0G 2.3G 3.6G 39% /
none 828M 0 828M 0% /dev/shm
10.100.1.11:/glustervol
300G 100M 300G 1% /mnt/gluster
- Stop the Volume
gluster volume stop glustervol
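The stop command asks for confirmation. If the distributed volume were no longer needed at all, it could also be deleted afterwards (not done here); the brick directories and their contents stay on disk either way:
gluster volume delete glustervol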
Create a Replicated Volume
[root@ip-10-100-1-11 ~]# gluster volume create glustervol-rep replica 2 10.100.1.11:/export/brick/glustervol-rep 10.100.2.12:/export/brick/glustervol-rep 10.100.1.13:/export/brick/glustervol-rep 10.100.2.14:/export/brick/glustervol-rep
volume create: glustervol-rep: success: please start the volume to access data
[root@ip-10-100-1-11 ~]# gluster volume start glustervol-rep
volume start: glustervol-rep: success
[root@ip-10-100-1-11 ~]# gluster volume info glustervol-rep
Volume Name: glustervol-rep
Type: Distributed-Replicate
Volume ID: bbf0ea8c-f1d4-4097-b622-ceb2706fbe40
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.100.1.11:/export/brick/glustervol-rep
Brick2: 10.100.2.12:/export/brick/glustervol-rep
Brick3: 10.100.1.13:/export/brick/glustervol-rep
Brick4: 10.100.2.14:/export/brick/glustervol-rep
- Check the space (200GB usable out of 400GB raw, since replica 2 stores two copies of the data)
[root@ip-10-100-0-196 ~]# mount -t glusterfs 10.100.1.11:/glustervol-rep /mnt/gluster
[root@ip-10-100-0-196 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/xvde1 6.0G 2.3G 3.6G 39% /
none 828M 0 828M 0% /dev/shm
10.100.1.11:/glustervol-rep
200G 67M 200G 1% /mnt/gluster
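As a follow-up (not part of the original transcript), replication could be verified the same way the distribution test was done: create a file from the client and check that it shows up on both bricks of a replica pair. Bricks are paired in the order they were given, so 10.100.1.11/10.100.2.12 form one pair and 10.100.1.13/10.100.2.14 the other.
# on the client
touch /mnt/gluster/replica-test.txt
# on 10.100.1.11 and 10.100.2.12 - the same file should be present on both bricks
ls /export/brick/glustervol-rep/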