Filesystems: Gluster

Setup tested using Ubuntu 12.10 on Calxeda hardware. The steps should be applicable on most Linux distributions. In this configuration we are using 2 hosts, each of which acts as both a server and a client.

  • Gluster is installed on 2 hosts (cal3 and cal4)
  • A Gluster volume is created across the 2 hosts
  • Both hosts then mount the GlusterFS volume

Install Gluster

  apt-get -y install glusterfs-server glusterfs-client
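As a quick sanity check before proceeding, it can help to confirm the daemon is running and see which release was installed. A minimal sketch, assuming the Ubuntu packages above (the init script is named after the glusterfs-server package):

root@cal4:~$ service glusterfs-server status
root@cal4:~$ glusterfs --version

Run the same check on cal3; both hosts need the daemon running before peers can be probed.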

Create a storage pool

  • Probe peers and check the status. The instructions below were executed on cal4
root@cal4:~$ gluster peer probe cal3
Probe successful
root@cal4:~$ gluster peer status
Number of Peers: 1

Hostname: cal3
Uuid: 5ce71128-9143-424b-b407-c3d4e6a39cf1
State: Peer in Cluster (Connected)
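Note that the first probe records cal4 on cal3 by IP address rather than hostname. A common follow-up, sketched here under the assumption that both hostnames resolve on both hosts, is to probe back from cal3 so that each peer knows the other by name:

root@cal3:~$ gluster peer probe cal4
Probe successful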

Create a distributed volume

  • Assuming /glusterfs is the location of the space you want to use on each node; each such directory becomes a brick of the volume
  • Transport is tcp in this instance as we are just using Ethernet; rdma could be used if we had InfiniBand
  • We are not striping; striped volumes (created with stripe X) should only be used in high-concurrency environments accessing very large files. See the sketch after the output below.
root@cal4:~$ gluster volume create gluster-vol1 transport tcp cal4:/glusterfs cal3:/glusterfs 
Creation of volume gluster-vol1 has been successful. Please start the volume to access data.
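For comparison, a hedged sketch of the replica and stripe variants mentioned above. Either line could have been used instead of the plain distributed create; replica 2 would mirror every file across both bricks, while stripe 2 would split large files between them:

root@cal4:~$ gluster volume create gluster-vol1 replica 2 transport tcp cal4:/glusterfs cal3:/glusterfs
root@cal4:~$ gluster volume create gluster-vol1 stripe 2 transport tcp cal4:/glusterfs cal3:/glusterfs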

Start the volume

  • Start the volume
root@cal4:~$ gluster volume start gluster-vol1
Starting volume gluster-vol1 has been successful
  • Verify everything is in order
root@cal4:~$ gluster volume info gluster-vol1

Volume Name: gluster-vol1
Type: Distribute
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: cal4:/glusterfs
Brick2: cal3:/glusterfs
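Mount the volume

The overview above notes that both hosts then mount the GlusterFS volume. A minimal sketch of that step, assuming /mnt/gluster as the mount point (the native FUSE client ships with the glusterfs-client package, and any peer's hostname works here because the client fetches the volume layout from whichever server it contacts):

root@cal4:~$ mkdir -p /mnt/gluster
root@cal4:~$ mount -t glusterfs cal4:/gluster-vol1 /mnt/gluster

Run the equivalent commands on cal3 (e.g. mount -t glusterfs cal3:/gluster-vol1 /mnt/gluster) so both hosts have the volume mounted.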