Caringo: Install
Revision as of 15:47, 30 January 2017

Caringo CSN

The Swarm CSN is an integrated services software application that centralizes the services and management information needed to install, upgrade, and monitor a Swarm storage cluster onto a single, accessible server with shared configuration. Installed on a dedicated node, CSN simplifies the network services and Caringo product installation process for administrators who are not proficient in Linux-based products or for environments where the network services are not easily available.

Note: The CSN cannot anticipate all possible desired configurations. As a result, it may not give administrators as much flexibility as manually configuring the network services.

The Swarm CSN infrastructure is comprised of the following components:

  • Required network services. These services are configured to support a Swarm cluster, and include DHCP, PXE/Network Boot, TFTP, Syslog, and NTP.

  • Integrated PXE network boot and configuration server. For Swarm nodes booted onto the Swarm CSN's internal network.

  • SNMP MIBs. Provides operational and status information for all nodes in the associated Swarm cluster.

  • Swarm and SCSP Proxy. These integrated components are installed and configured for you.

  • Administrative web console. Provides configuration of some settings and parameters, as well as access to useful utilities such as updating Swarm software versions and backing up and restoring configuration files.

A logical view of the CSN infrastructure: in a dual-network CSN, the primary storage cluster is isolated on a private network; in a single-network CSN, the primary storage cluster is directly addressable.

Installing CSN

Since we wanted to test and demo Caringo, the CSN platform seemed like a great way to get quick results; at this stage I am not sure of its limitations or why you would not do it this way.

Downloaded Caringo CSN 8.2 from https://connect.caringo.com (Matt H has login details). Alternatively, I have put the installation files on storage1.

Caringo installation documentation is included in the CSN zip.

CSN can do a single- or dual-network configuration, but dual seems like the smart play, since CSN uses PXE/DHCP/TFTP etc. to deploy future storage nodes, which you would not want on an open network conflicting with existing services.

So I got a system with at least 5 hot-swap bays (1 OS drive, 4 storage slots), IPMI, dual 1GbE, dual CPU and RAM.

Then installed CentOS 6.8 basic server (final); this is specified in the Caringo documentation as required.

Once installed, set up NIC 0 on 172.28.89.20 (well outside the ranges normally used for PXE) and NIC 1 on 192.168.0.*, since this is a closed network.

Set up and install NTP, as it is required; pointed it to pool.ntp.org and checked that it works.

Transferred the CSN installation files onto the box.

chmod +x caringo-csn-bundle-install.sh

./caringo-csn-bundle-install.sh

At this point it checks that it can see all the installation files, installs them through yum, and if all is good asks whether you want to proceed with CSN network configuration (yes/no):


1. Is this the Primary CSN (yes/no)? [yes]:

2. Is this a single or dual network CSN (single/dual)? [dual]

3. The following message then displays if you are setting up a Primary CSN: Half of the NIC ports on this system will be bonded and assigned to the external network. The following questions configure the external network: Enter the CSN IP address [].

4. Enter the cluster IP address. This IP address will remain with the Primary CSN in the event of a CSN failover []:

5. Enter the subnet mask [255.255.255.0]:

6. Enter the gateway IP address []:

7. Half of the NIC ports on this system will be bonded and assigned to the internal network. The following questions configure the internal network: Enter the network address, e.g. 192.168.100.0 (small network), 192.168.0.0, 172.20.0.0 (large network) []:


Example:

Configuring external/internal ports. This may take some time.
Checking ... eth0 ...eth1 ...eth2 ...eth3 ...

Eth Device | MAC               | Public? | Bond
eth0       | 00:0c:29:e2:e6:65 | Y       | bond1
eth1       | 00:0c:29:e2:e6:6f | Y       | bond1
eth2       | 00:0c:29:e2:e6:79 | N       | bond0
eth3       | 00:0c:29:e2:e6:83 | N       | bond0

Disconnected NICs =
Recommended ethernet device assignment:
Internal NICs = eth2 eth3
External NICs = eth0 eth1
Input the list of External NICs.

8. Enter a list of IP addresses (separated by spaces) for external name servers [8.8.8.8 8.8.4.4]:

11. Enter a list of IP addresses or server names (separated by a space) for external time servers [0.pool.ntp.org 1.pool.ntp.org 2.pool.ntp.org]:

12. Enter a unique storage cluster name. This name cannot be changed once assigned. A fully qualified domain name is recommended []:

13. Enter the multicast group that should be used to uniquely identify the storage cluster on the network. Different storage clusters on the same network must have unique multicast addresses or the nodes will merge into a single cluster (224.0.10.100):

14. Are these values correct (yes/no)?


Dual-network Secondary CSN Configuration

After you configure a dual-network Primary CSN, you can optionally configure a Secondary CSN. The Secondary requires only the entry of a single unique external IP address and identification of the internal interface that is already defined on the Primary. The Secondary will then pull much of its network configuration data from the Primary. To ensure current configuration data is pulled, make sure a backup has occurred after the last configuration change on the Primary prior to installing the Secondary CSN. To facilitate this, a one-time use of the Primary's root password is required as follows (blank passwords are not supported):

Additional information about the network will be obtained from the primary CSN
Please enter the primary csn root password []:
Please re-enter the primary csn root password []:

Taken from the confirmation at the end of the initial configuration script, the Secondary configuration parameters are simply:

Primary: no
CSN Type: dual
External CSN IP address: 192.168.66.11
Internal network address: 172.20.20.0
external nics [defaults from system scan]:
internal nics [defaults from system scan]:
Are these values correct (yes/no)?


Install Caringo licensing file

At http://<yourCSNip>:8090

It will ask you for login details: admin, caringo

Prior to booting, you should make sure the CSN has had adequate time to sync with a reliable NTP source and is sync ready as a time source for the nodes. To do this simply type '$ ntpq -c rv' at a system command line. The returned output should NOT contain any instances of 'sync_alarm' as these indicate the CSN is not yet ready to serve as an NTP server for the Swarm nodes on the internal network. The output will transition to 'sync_ntp' when all alarms have cleared; this can take as long as 20 minutes.
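The 'ntpq -c rv' check above can be wrapped in a tiny helper so it is easy to repeat until the alarms clear. This is a sketch: the function only greps the status line, and the sample string below is a made-up example of what a synced CSN might report, not captured output.

```shell
# ntp_ready: succeeds when an 'ntpq -c rv' status line contains no sync_alarm flag.
ntp_ready() {
  case "$1" in
    *sync_alarm*) return 1 ;;
    *) return 0 ;;
  esac
}

# On the CSN itself you would feed it live output:
#   ntp_ready "$(ntpq -c rv)" && echo "ready to serve time" || echo "still syncing"

# Hypothetical sample status line, for illustration only:
sample='status=0615 leap_none, sync_ntp, 1 event, clock_sync'
ntp_ready "$sample" && echo "ready to serve time"
```

Re-run the check every few minutes; as the text notes, clearing the alarms can take up to 20 minutes.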

This seemed to be seamless for me, in that I ran the command above and NTP was synced.

Install Indexer and Cloudprovider VMs

Installed a system with VMware ESXi 6.0 and made sure that it had networking that could reach both the public and private networks.

Public: 172.28.*.* Private: 192.168.0.*

Install CentOS VMs

Give static IP addresses to each VM

Check what version of Java is installed with alternatives --display java (or java -version).

yum install java-1.8*

Disable firewalld on both VMs if this is just a demo setup.

Indexer Installation

unzip swarm-v9.0.2-x86_64-csn.zip


yum install caringo-elasticsearch-search-2.0-7.noarch.rpm

cd /etc/elasticsearch

vim elasticsearch.yml

Within this file, uncomment and add settings as you wish, for example:


# ---------------------------------- Cluster -----------------------------------
# Use a descriptive name for your cluster:
cluster.name: caringo-bostonlabs

# ----------------------------------- Memory -----------------------------------
# Lock the memory on startup:
bootstrap.mlockall: true

# ---------------------------------- Network -----------------------------------
# Set the bind address to a specific IP (IPv4 or IPv6):
network.host: _site_

# Set a custom port for HTTP:
http.port: 9200


systemctl start elasticsearch

systemctl status elasticsearch
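Besides systemctl, Elasticsearch's cluster-health endpoint (on port 9200, as configured above) tells you whether the index is actually usable. A sketch: the helper just extracts the "status" field with sed, and the sample JSON below is a hypothetical response, not captured from this cluster.

```shell
# es_status: pulls the "status" field (green/yellow/red) out of
# Elasticsearch's _cluster/health JSON response.
es_status() {
  echo "$1" | sed -n 's/.*"status":"\([a-z]*\)".*/\1/p'
}

# On the indexer VM you would feed it live output:
#   es_status "$(curl -s http://localhost:9200/_cluster/health)"

# Hypothetical sample response, for illustration only:
sample='{"cluster_name":"caringo-bostonlabs","status":"green","number_of_nodes":1}'
es_status "$sample"   # prints: green
```

A "green" or "yellow" status means the node is serving; "red" (or no response at all) points back at the IP/DNS and Java checks below.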

Some troubleshooting issues at the time were checking that the IP addressing and DNS were correct.

Check that your Java version is actually Java 1.8*, since this is what is required.

Cloudprovider Install

yum install caringo-gateway-5.1.0-1.x86_64.rpm

Remove the web GUI if you have already installed it along with the above RPM; they CANNOT both be installed at the same time (yum remove caringo-gateway-webui.x86_64).

cd /etc/caringo/cloudgateway/

vim gateway.cfg (the following are the parts of the config file that have been changed):

# Client communications and handling
[gateway]
adminDomain = caringo-boston
threads = 200
# multipartSpoolDir = /var/spool/cloudgateway

# Storage cluster back-end configuration
[storage_cluster]
locatorType = static
hosts = 192.168.3.0 192.168.3.7 192.168.3.4
# port = 80
# dataProtection = immediate
# blockUndeletableWrites = true
indexerHosts = 192.168.3.80
indexerPort = 9200
# maxConnectionsPerRoute = 100

# SCSP front-end protocol
[scsp]
enabled = true
bindAddress = 0.0.0.0
bindPort = 80

# S3 front-end protocol
[s3]
enabled = false
bindAddress = 0.0.0.0
bindPort = 8090

[metering]
enabled = true
# flushIntervalSeconds = 300
# retentionDays = 100
# storageSampleIntervalSeconds = 3600

# Quota Support
[quota]
enabled = false
smtpHost = localhost
mailFrom = donotreply@localhost
# mailSubjectTemplate = Quota state change notification

From the examples folder you will need idsys-pam.example.json and policy-root.example.json.


vim idsys.json

{
  "pam": {
    "name": "idsys-pam",
    "description": "Linux PAM example configuration",
    "cookieName": "token",
    "tokenPath": "/.TOKEN/",
    "tokenAdmin": "caringoadmin@"
  }
}

vim policy.json

{
  "Id": "Root policy -- grant admins full access to everything",
  "Statement": [
    {
      "Sid": "Grant admins full access",
      "Resource": "*",
      "Principal": {
        "user": ["caringoadmin@"],
        "group": ["clusteradmins@"]
      },
      "Action": [ "*" ],
      "Effect": "Allow"
    }
  ]
}

"./initgateway"

yum install caringo-gateway-webui-5.1.0-1.x86_64.rpm

systemctl start cloudgateway
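Once cloudgateway is started, a quick way to confirm it is serving is to hit the SCSP port (80, per the [scsp] section of gateway.cfg above) and check that any HTTP status line comes back. A sketch: the helper only inspects the status line, and the 401 sample below is a hypothetical response, not captured output (an auth challenge still proves the gateway is up).

```shell
# http_up: succeeds if an HTTP status line (e.g. from 'curl -sI') carries any
# response code, meaning the gateway answered at all (even a 401 counts as up).
http_up() {
  case "$1" in
    'HTTP/'*' '[1-5][0-9][0-9]*) return 0 ;;
    *) return 1 ;;
  esac
}

# On the gateway VM:
#   systemctl is-active cloudgateway
#   http_up "$(curl -sI http://localhost:80/ | head -n1 | tr -d '\r')" && echo "gateway up"

# Hypothetical sample status line, for illustration only:
sample='HTTP/1.1 401 Unauthorized'
http_up "$sample" && echo "gateway up"
```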