Filesystems: Gluster
Setup tested using Ubuntu 12.10 on Calxeda hardware. The steps should be applicable to most Linux distros. In this configuration we are using 2 hosts which act as both servers and clients.
- Gluster is installed on 2 hosts (cal3 and cal4)
- A gluster filesystem is created across 2 hosts
- Both hosts then mount the glusterfs
- Mount point /glusterfs is a loop-mounted file created with dd (we had problems when the brick path was not a dedicated mount); see the sketch below
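The page does not show how the loop-mounted brick was prepared, so here is a minimal sketch of one way to do it on each host. The backing file path /srv/gluster-brick.img and the ext4 filesystem are assumptions; the 20 GB size matches the df output shown at the end.
# create a 20 GB backing file and loop-mount it as the brick (repeat on cal3)
root@cal4:~$ dd if=/dev/zero of=/srv/gluster-brick.img bs=1M count=20480
root@cal4:~$ mkfs.ext4 -F /srv/gluster-brick.img
root@cal4:~$ mkdir -p /glusterfs
root@cal4:~$ mount -o loop /srv/gluster-brick.img /glusterfs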
Install
Install Gluster
apt-get -y install glusterfs-server glusterfs-client
Configure
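Before creating the pool, it is worth confirming that the glusterd management daemon came up on both hosts after the install. A quick check; the glusterfs-server service name is assumed from the Ubuntu package, not taken from the original page:
root@cal4:~$ service glusterfs-server status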
Create a storage pool
- Probe peers and check the status. The commands below were executed on cal4
root@cal4:~$ gluster peer probe cal3
Probe successful
root@cal4:~$ gluster peer status
Number of Peers: 1
Hostname: cal3
Uuid: 5ce71128-9143-424b-b407-c3d4e6a39cf1
State: Peer in Cluster (Connected)
Create a distributed volume
- Assuming /glusterfs is the location of the space you want to use on each node
- Transport is tcp in this instance as we are just using ethernet; rdma could be used if we had IB
- Not striping here: striped volumes (stripe X in the create command) should only be used in high-concurrency environments accessing very large files. An example of these options is sketched below.
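For reference, this is where those options would fit in the create command. The variant below is only a sketch and was not run in this setup; the stripe count of 2 is an arbitrary illustration:
gluster volume create gluster-vol1 stripe 2 transport rdma cal4:/glusterfs cal3:/glusterfs
The distributed, tcp-only volume actually used in this setup is created as follows: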
root@cal4:~$ gluster volume create gluster-vol1 transport tcp cal4:/glusterfs cal3:/glusterfs
Creation of volume gluster-vol1 has been successful. Please start the volume to access data.
Start the volume
- Start the volume
root@cal4:~$ gluster volume start gluster-vol1
Starting volume gluster-vol1 has been successful
- Verify everything is in order
root@cal4:~$ gluster volume info gluster-vol1
Volume Name: gluster-vol1
Type: Distribute
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: cal4:/glusterfs
Brick2: cal3:/glusterfs
Mount
Mount the gluster filesystem
- Using the Gluster native client below; cifs/nfs can also be used.
- Make sure you mount the gluster volume name, not the data path (which is what you would usually mount with NFS)
- Note: The server specified in the mount command is only used to fetch the gluster configuration volfile describing the volume. Subsequently, the client communicates directly with the servers listed in the volfile (which might not even include the one used for the mount).
root@cal3:~$ mount -t glusterfs cal4:/gluster-vol1 /mnt/gluster
root@cal3:~$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 228G 24G 193G 11% /
udev 2.0G 4.0K 2.0G 1% /dev
tmpfs 808M 184K 808M 1% /run
none 5.0M 0 5.0M 0% /run/lock
none 2.0G 0 2.0G 0% /run/shm
/dev/sda1 228M 6.6M 209M 4% /boot
/dev/loop0 20G 423M 19G 3% /glusterfs # <-- this is the individual file system on each node
cal4:/gluster-vol1 39G 845M 37G 3% /mnt/gluster # <-- this is the aggregated gluster system 2x20GB
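The steps above only mount the volume manually. To have the client remount it at boot, a glusterfs entry can be added to /etc/fstab; a minimal sketch reusing the hosts and mount point above (_netdev just delays the mount until the network is up):
# /etc/fstab on the client -- mount the gluster volume at boot
cal4:/gluster-vol1  /mnt/gluster  glusterfs  defaults,_netdev  0  0
As noted above, cal4 is only contacted to fetch the volfile; after that the client talks to all of the bricks directly.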