Hedvig: Install
Hardware
Minimum hardware for each storage node:
General hardware recommendations
Here are some general recommendations for your hardware, including nodes, servers, CPUs, memory, network, and disk drives.
Number of nodes
- For a single cluster, have a minimum of three storage nodes.
- To scale performance and capacity, simply add nodes.
- It is best to add nodes in multiples of three, although they can be added in any quantity.
Type of servers
- The Hedvig standard is dual-socket x64 servers with onboard drives (SSDs and/or HDDs).
- It is better to scale the cluster using a larger number of small-capacity servers, as opposed to using a smaller number of large-capacity servers. Hedvig is optimized for scale-out.
CPUs
- A multi-core Intel E5/E7 CPU, or equivalent, is best.
- In general, the latest CPU generations and models are best.
Memory
- The higher the memory capacity per node, the better, because Hedvig utilizes memory for internal processes and caching.
- The minimum memory configuration for each physical server for a Hedvig Cluster Node (non-hyperconverged) is usually 48 GB. Typical deployments use servers with 128 to 256 GB.
Network
- The Hedvig standard is a 10 GbE network, but 40 GbE is better.
- Gigabit is supported, but it is bandwidth limited.
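Before installing, it can be worth confirming that each node actually negotiated the expected link speed and that jumbo frames pass end to end. A minimal sketch, assuming the data NIC is eth0 and using one of the example host names from later in this guide (both assumptions; adjust to your environment):
# Show the negotiated link speed and duplex of the data NIC (assumed to be eth0).
ethtool eth0 | grep -E 'Speed|Duplex'
# Show the current MTU; 9000 indicates jumbo frames are enabled on this host.
ip link show eth0 | grep mtu
# Send a 9000-byte, non-fragmented ping to another node (8972 = 9000 minus IP/ICMP headers).
ping -M do -s 8972 -c 3 node2.bostonhpc.in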
Hedvig provides prebuilt OVF templates / containers / ISOs for the deploy node and proxy nodes.
Prepare Storage node
- Install minimal CentOS 6.7 on all storage nodes.
- All the drives should stay unformatted; the Hedvig installation process takes care of all formatting and setup.
- Set the root password to hedvig.
sudo su -
passwd
- Check that all other nodes are accessible via SSH on port 22.
- Add the host names of all the other storage nodes, the proxy node, and the deploy node (for example, to /etc/hosts; see the sketch after this list).
- Always enable jumbo frames, if available.
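As a concrete illustration of the last two checks, the sketch below adds the other nodes to /etc/hosts and confirms SSH reachability on port 22. The node and proxy names come from the examples later in this guide; the deploy host name and most of the IP addresses are placeholders, and nc may need to be installed on a minimal CentOS image.
# Append the other cluster members to /etc/hosts (placeholder addresses; adjust to your network).
cat >> /etc/hosts <<'EOF'
192.168.1.181  node1.bostonhpc.in
192.168.1.182  node2.bostonhpc.in
192.168.1.183  node3.bostonhpc.in
192.168.1.184  proxy.bostonhpc.in
192.168.1.185  deploy.bostonhpc.in
EOF
# Confirm every other node answers on SSH port 22.
for h in node1.bostonhpc.in node2.bostonhpc.in node3.bostonhpc.in proxy.bostonhpc.in deploy.bostonhpc.in; do
    nc -z -w 3 "$h" 22 && echo "$h: ssh reachable" || echo "$h: NOT reachable"
done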
Download hedvig
Hedvig is available in a number of formats: OVF, ISO, and containers.
The Hedvig software package can be downloaded from the Hedvig site, usually found here:
https://hedviginc.zendesk.com/hc/en-us
- Download /download/<release>/hedvig_extract.bin
- Download /download/<release>/rpm_extract.bin
- Download ftp://216.109.131.85/download/releases/images/ESX/cvm.ova
- Download ftp://216.109.131.85/download/releases/images/ESX/deploy.ova
The Hedvig cvm and deploy images are also available as KVM templates and as ISOs for bare-metal servers.
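Assuming the installer binaries live on the same FTP host as the OVA images above (an assumption; your download location may differ), fetching everything with wget might look like the sketch below. Replace <release> with the actual release directory from the download site.
# Hypothetical release placeholder; substitute the real release directory.
RELEASE="<release>"
# Fetch the two installer binaries and the ESX images.
wget "ftp://216.109.131.85/download/${RELEASE}/hedvig_extract.bin"
wget "ftp://216.109.131.85/download/${RELEASE}/rpm_extract.bin"
wget "ftp://216.109.131.85/download/releases/images/ESX/cvm.ova"
wget "ftp://216.109.131.85/download/releases/images/ESX/deploy.ova"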
Hedvig Storage Proxy
The minimum configuration for a storage proxy is 4 CPUs and 8 GB of RAM. A higher clock speed CPU is preferable.
- In a hypervisor-based setup, ensure that 4 vCPUs and 8 GB of RAM are available.
- Install cvm.ova as an appliance VM.
- In the VMware vSphere client, go to File > Deploy OVF Template.
- Locate the cvm.ova template and follow the setup.
- After deployment, start the VM and log in as root with the "hedvig" password.
- Set an appropriate fully qualified host name.
- Update /etc/hosts so that all the host names (storage/proxy/deploy) are resolvable.
- Enable jumbo frames (see the sketch after this list).
- Run /usr/local/hedvig/scripts/menu_hedvig.sh
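A minimal sketch of the host name and jumbo-frame steps above, assuming the appliance uses CentOS 6-style network scripts and that the data interface is eth0 (both assumptions; adapt to your image and interface):
# Set the fully qualified host name for the current session and persistently.
hostname proxy.bostonhpc.in
sed -i 's/^HOSTNAME=.*/HOSTNAME=proxy.bostonhpc.in/' /etc/sysconfig/network
# Enable jumbo frames on the data interface now and across reboots.
ip link set dev eth0 mtu 9000
echo 'MTU=9000' >> /etc/sysconfig/network-scripts/ifcfg-eth0
service network restart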
Hedvig Deployment server setup
- Import deploy.ova into ESXi or vCenter. Ensure at least 1 vCPU and 4 GB of RAM are available for the deploy VM to work.
- Power up the VM and copy rpm_extract.bin and hedvig_extract.bin to /home/admin (the admin user should already be present in the OVF).
scp rpm_extract.bin root@192.168.1.185:/home/admin
scp hedvig_extract.bin root@192.168.1.185:/home/admin
- Change permissions.
chmod +x rpm_extract.bin
chmod +x hedvig_extract.bin
- Execute as user admin.
sudo su admin
sudo ./rpm_extract.bin
sudo ./hedvig_extract.bin
- In the deploy.ova VM:
sudo su admin
/usr/local/hedvig/scripts/menu_hedvig.sh
- Enter the Cluster setup menu and change the host name to the fully qualified name.
---------------------------------
--- Cluster setup ---
---------------------------------
1. Login to cluster
2. Setup a new cluster
3. Update this machine
4. Restart this machine
5. Set hostname
6. Configure network
7. Exit
---------------------------------
Enter choice [ 1 - 7 ] 5
- Set up the public network (in our case 192.168.1.185) with option 6, Configure network. The network can also be set with sysconfig scripts; a sketch follows the menu below.
---------------------------------
--- Cluster setup ---
---------------------------------
1. Login to cluster
2. Setup a new cluster
3. Update this machine
4. Restart this machine
5. Set hostname
6. Configure network
7. Exit
---------------------------------
Enter choice [ 1 - 7 ] 6
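For reference, the sysconfig route mentioned above might look like the following on a CentOS 6-style image, assuming the public interface is eth0 and using a placeholder gateway address (both assumptions):
# /etc/sysconfig/network-scripts/ifcfg-eth0 -- static public address for the deploy VM.
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.1.185
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
# Apply the change with: service network restart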
Hedvig cluster Install and config
- In the deploy.ova VM, log in as root with the default password hedvig.
[root@head ~]# sudo su -
[root@head ~]# sudo su admin
bash-4.1$ /usr/local/hedvig/scripts/menu_hedvig.sh
---------------------
--- Cluster setup ---
---------------------
1. Login to cluster
2. Setup a new cluster
3. Update this machine
4. Restart this machine
5. Set hostname
6. Configure network
7. Exit
----------------------
Enter choice [ 1 - 7 ] 2
- Select 2 to create the cluster.
--- Login to cluster: (default hv_cluster)
active cluster will be set to : hv_cluster
is this correct (Y/N) y
--- Enter cluster hv_cluster password: *****hedvig*****
Cluster nodes file to edit: (/home/admin/hv_cluster/nodelists/.new/cluster-list.txt) :
- Enter the host names of the storage nodes.
node1.bostonhpc.in
node2.bostonhpc.in
node3.bostonhpc.in
- Save with :wq and exit. Hedvig will check that the host names are valid, that they are resolvable, and that the nodes are reachable. (A quick manual pre-check is sketched below.)
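To pre-check what Hedvig is about to validate, a minimal sketch run from the deploy VM, using the node list file path shown above, is:
# For each host in the cluster node list, confirm name resolution and SSH reachability on port 22.
while read -r h; do
    getent hosts "$h" > /dev/null && echo "$h resolves" || echo "$h does NOT resolve"
    nc -z -w 3 "$h" 22 && echo "$h ssh ok" || echo "$h ssh FAILED"
done < /home/admin/hv_cluster/nodelists/.new/cluster-list.txt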
- In the tgt nodes file, add the proxy node. If the proxies are set up in HA mode, check the HA proxy user guide for more info.
---- checking ips in file for validity.. ----
Use specialized disk-mapping file for cluster nodes? (y/N) (default N) N
tgt nodes file to edit: (/home/admin/hv_cluster/nodelists/.new/tgt-list.txt) :
proxy.bostonhpc.in
- Enter :wq to save and exit, then press Enter to continue.
---- checking ips in file for validity.. ----
---- config file looks ok ----
# cluster settings ---
replication_factor: 3 # vals: 2,3, or 4
replication_policy: RackUnaware # vals: DataCenterAware, RackUnaware, or Rackaware
enable_s3: False # vals: True/False
generate_hosts: False
transfer_hosts: False
- Leave the cluster settings page as it is.
- NOTE: replication_policy cannot be changed after the cluster setup; presently only RackUnaware has been tried.
- In the next section, fill in the appropriate details for the SMTP/email alert setup; the rest are left at their defaults.
#!/bin/bash
export ADMINADDR=admin@hedviginc.com
export FROMADDR=donotreply@hedviginc.com
export MAILADDR=alerts@hedviginc.com
export SMTPHOST=gateway.hedviginc.com
export EMAIL_INTERVAL=1h
export NTPSERVER=0.pool.ntp.org
export MEMWARN=4000000000
export RESTARTFLAG=false
export LSITYPE=lsijbod
export LSIFORCE=no
export LOGRET=15
export CVMLOGRET=15
export HPECTR=0
export CVMEMAILALERT=false
export TZ=Asia/Calcutta
- Enter :wq to save and exit.
- The configuration tool will automatically configure and initialize the cluster and start the appropriate services.
- There are 56 separate steps, grouped so that each area is checked individually to ensure that everything is working correctly. These steps take approximately 20 minutes.