Quobyte
Server Installation
Before installing the Quobyte server software, there are some tasks that need to be performed on each server to ensure correct functionality.
Configure NTP
Each server must have the same time, or some services will not start. Ensure that NTP is configured and running on all servers, and check that all clocks are in sync.
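One way to check, assuming chrony is the NTP daemon in use (the default on CentOS 7; ntpd/ntpstat work similarly):
# Run on every server: the daemon should be active and the "System time"
# offset reported by 'chronyc tracking' should be close to zero.
systemctl status chronyd
chronyc tracking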
Disable Swap
Disable swap on each storage server. Running swapoff -a will disable all swap devices found in /proc/swaps and /etc/fstab. Also comment out/remove any swap lines in /etc/fstab to prevent swap being activated if a server is rebooted.
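For example (the sed command is one possible way to comment out the fstab swap entries; adjust it to your fstab layout):
swapoff -a                                 # disable all active swap devices
sed -i.bak '/\sswap\s/s/^/#/' /etc/fstab   # comment out swap lines, keeping a backup in /etc/fstab.bak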
Install dependencies
yum -y install java-1.8.0-openjdk-headless wget
Download the Quobyte yum repo file
cd /etc/yum.repos.d
wget https://packages.quobyte.com/repo/9/<YOUR_REPO_ID>/rpm/CentOS_7/quobyte.repo
Note: https://packages.quobyte.com/repo/3/8acxjFCHCQ7YMvxKmNEzhYTQ1kr9xA2e/ (repo URL for the David Power account)
or
wget https://packages.quobyte.com/repo/3/8acxjFCHCQ7YMvxKmNEzhYTQ1kr9xA2e/rpm/CentOS_7/quobyte.repo
Install Quobyte packages
yum -y install quobyte-server quobyte-client
Server Configuration
Prepare Drives
Any drives being used by Quobyte need to be formatted and mounted before Quobyte can use them. Currently only ext4 and XFS are supported. Each server in our testbed has three available drives: two SSDs (/dev/sdb and /dev/sdc) and one HDD (/dev/sdd). To prepare each drive do the following
# Create a filesystem on each drive and mount them.
# Note it is recommended to use the full drive and not partitions.
mkfs.xfs /dev/sdX
mkdir -p /some/mount/point
mount /dev/sdX /some/mount/point
# The testbed was configured as per below, where /dev/sda was the OS drive
[root@q01 ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 238.5G 0 disk
├─sda1 8:1 0 1M 0 part
├─sda2 8:2 0 512M 0 part /boot
├─sda3 8:3 0 15.6G 0 part
├─sda4 8:4 0 1K 0 part
└─sda5 8:5 0 222.4G 0 part /
sdb 8:16 0 372.6G 0 disk /mnt/quobyte/metadata0
sdc 8:32 0 894.3G 0 disk /mnt/quobyte/data0
sdd 8:48 0 931.5G 0 disk /mnt/quobyte/data1
# The same procedure was performed on each storage server
Define Registry Servers
Edit /etc/quobyte/host.cfg on each server to state which servers are running the registry service. In our setup all four servers do, so it was updated to read
registry=q01:7861,q02:7861,q03:7861,q04:7861
If name resolution isn't configured on the servers the hostnames can be replaced with IP addresses.
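For example, with placeholder IP addresses in place of the hostnames:
registry=192.168.0.11:7861,192.168.0.12:7861,192.168.0.13:7861,192.168.0.14:7861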
Create registry devices
To create the first registry device do the following on one server only.
qbootstrap /mnt/quobyte/metadata0
Then start services on this server.
systemctl start quobyte-registry
systemctl start quobyte-webconsole
systemctl start quobyte-api
To confirm that the registry service is running and the device is available, run the following command.
[root@q01 ~]# qmgmt device list
Id Host Mode Disk Used Disk Avail Services LED Mode
1 q01 ONLINE 34 MB 400 GB REGISTRY OFF
Note it may take a minute for the device to initially register.
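If the device has not appeared yet, the listing can simply be re-run until it shows up, for example:
watch -n 5 qmgmt device list   # refresh the device list every 5 seconds; press Ctrl-C to stop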
To create the remaining registry devices, do the following on each of the other servers
qmkdev -t REGISTRY /mnt/quobyte/metadata0
systemctl start quobyte-registry
Once this is completed on each server you can list and check availability of each registry from the first server.
[root@q01 ~]# qmgmt device list
Id Host Mode Disk Used Disk Avail Services LED Mode
1 q01 ONLINE 34 MB 400 GB REGISTRY OFF
2 q02 ONLINE 34 MB 400 GB REGISTRY OFF
3 q03 ONLINE 34 MB 400 GB REGISTRY OFF
4 q04 ONLINE 34 MB 400 GB REGISTRY OFF
Add Metadata Devices
From the first server, run the following command to add the METADATA type to each registry device
qmgmt device update add-type <id> METADATA
# id for each registry is listed in the output of 'qmgmt device list'
SSH to each host with a metadata device and start the metadata service by running
systemctl start quobyte-metadata
Confirm that metadata devices are running
[root@q01 ~]# qmgmt device list
Id Host Mode Disk Used Disk Avail Services LED Mode
1 q01 ONLINE 34 MB 400 GB METADATA REGISTRY OFF
2 q02 ONLINE 34 MB 400 GB METADATA REGISTRY OFF
3 q03 ONLINE 34 MB 400 GB METADATA REGISTRY OFF
4 q04 ONLINE 34 MB 400 GB METADATA REGISTRY OFF
Add Data Devices
To add data devices perform the following on each server.
# Define data devices
qmkdev -t DATA /mnt/quobyte/data0
qmkdev -t DATA /mnt/quobyte/data1
# Start Quobyte Data service
systemctl start quobyte-data
Once completed on each server check all devices are registered and available.
[root@q01 ~]# qmgmt device list
Id Host Mode Disk Used Disk Avail Services LED Mode
1 q01 ONLINE 34 MB 400 GB METADATA REGISTRY OFF
5 q01 ONLINE 22 GB 960 GB DATA OFF
6 q01 ONLINE 34 GB 1000 GB DATA OFF
2 q02 ONLINE 34 MB 400 GB METADATA REGISTRY OFF
7 q02 ONLINE 36 MB 960 GB DATA OFF
8 q02 ONLINE 46 GB 1000 GB DATA OFF
3 q03 ONLINE 34 MB 400 GB METADATA REGISTRY OFF
9 q03 ONLINE 36 MB 960 GB DATA OFF
10 q03 ONLINE 46 GB 1000 GB DATA OFF
4 q04 ONLINE 34 MB 400 GB METADATA REGISTRY OFF
11 q04 ONLINE 36 MB 400 GB DATA OFF
12 q04 ONLINE 46 GB 1000 GB DATA OFF
Volume Management
By default Quobyte creates one volume configuration called BASE, which can be used as-is or as a template for other configurations.
Viewing Volume Configurations
Configurations can be viewed through the API or from the web console.
- API
[root@q01 ~]# qmgmt volume config export BASE
configuration_name: "BASE"
volume_metadata_configuration {
  placement_settings {
    required_device_tags {
    }
    forbidden_device_tags {
    }
    prefer_client_local_device: false
    optimize_for_mapreduce: false
  }
  replication_factor: 1
}
default_config {
  file_layout {
    stripe_width: 1
    replication_factor: 1
    block_size_bytes: 524288
    object_size_bytes: 8388608
    segment_size_bytes: 10737418240
    crc_method: CRC_32_ISCSI
  }
  placement {
    required_device_tags {
    }
    forbidden_device_tags {
    }
    prefer_client_local_device: false
    optimize_for_mapreduce: false
  }
  io_policy {
    cache_size_in_objects: 10
    enable_async_writebacks: true
    enable_client_checksum_verification: true
    enable_client_checksum_computation: true
    sync_writes: AS_REQUESTED
    direct_io: AS_REQUESTED
    OBSOLETE_implicit_locking: false
    lost_lock_behavior: IO_ERROR
    OBSOLETE_keep_page_cache: false
    implicit_locking_mode: NO_LOCKING
    enable_direct_writebacks: false
    notify_dataservice_on_close: false
    keep_page_cache_mode: USE_HEURISTIC
    rpc_retry_mode: RETRY_FOREVER
    lock_scope: GLOBAL
  }
}
snapshot_configuration {
  snapshot_interval_s: 0
  snapshot_lifetime_s: 0
}
metadata_cache_configuration {
  cache_ttl_ms: 10000
  negative_cache_ttl_ms: 10000
  enable_write_back_cache: false
}
- Web console
Log in to the web console and navigate to 'Volume Configurations'. Select BASE to view the configuration.
Editing Volume Configuration
- API
qmgmt volume config edit BASE
This will open the configuration in your default editor (or in the editor set in the EDITOR environment variable, if defined).
- Web console
Navigate to 'Volume Configurations' and tick the box beside BASE. Then select 'edit' from the drop-down menu.
Creating Volume Configurations
- API
To create a new configuration use the same command you would use to edit one, but with a configuration name that doesn't exist yet. For example, to create a new configuration called 3x_replication run the following
qmgmt volume config edit 3x_replication
This will open an empty file in a text editor.
To avoid specifying every setting manually, it is advisable to inherit from an existing configuration and use it as a template. For example, to use the BASE configuration as a template for 3x_replication, add the following
base_configuration: "BASE"
Now individual parameters can be set, and any setting that isn't defined will inherit the value used in the BASE configuration. The options below were set in the 3x_replication configuration
[root@q01 ~]# qmgmt volume config export 3x_replication
configuration_name: "3x_replication"
base_configuration: "BASE"
volume_metadata_configuration {
  replication_factor: 3
}
default_config {
  placement {
    required_device_tags {
      tags: "hdd"
    }
    forbidden_device_tags {
    }
    prefer_client_local_device: false
    optimize_for_mapreduce: false
  }
}
This will create three replicas of both data and metadata, and will only place data on devices tagged with "hdd". The use of tags allows finer control over which data is placed on which devices; in this example all data is placed on HDDs and not on SSD storage.
- Web console
The web console only allows new sub-configurations to be created, i.e. configurations that inherit from another. To create a new sub-configuration, navigate to 'Volume Configurations' and tick the box next to BASE. Then from the drop-down menu select 'Add new sub-configuration'.
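The volumes created in the next section also reference two further configurations, ssd_performance and 8+3_erasure, which were defined in the same way but are not shown here. As a minimal sketch only (not taken from the testbed; the "ssd" tag name and the exact settings are assumptions), an ssd_performance configuration inheriting from BASE could look like:
configuration_name: "ssd_performance"
base_configuration: "BASE"
default_config {
  placement {
    required_device_tags {
      tags: "ssd"
    }
  }
}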
Creating Volumes
Volumes are created either from the CLI or through the web console.
- CLI
The generic command used to create volumes is
qmgmt volume create <volume name> <user> <group> <volume configuration>
In the testbed three volumes were created, each using a different volume configuration. They were created by running the following commands
qmgmt volume create home_vol root root 3x_replication
qmgmt volume create scratch_vol root root ssd_performance
qmgmt volume create archive_vol root root 8+3_erasure
Mounting Volumes
Volumes can be mounted on any server that has the quobyte-client package installed. The CLI tool mount.quobyte is used to mount Quobyte volumes. The command takes a list of registry servers and the volume to mount, as well as the directory to mount the volume on. So to mount the home_vol volume created above on /home
mount.quobyte q01:7861,q02:7861,q03:7861,q04:7861/home_vol /home
This can be repeated to mount any other volumes
mount.quobyte q01:7861,q02:7861,q03:7861,q04:7861/scratch_vol /scratch
mount.quobyte q01:7861,q02:7861,q03:7861,q04:7861/archive_vol /archive
Set up the S3 proxy and access via s3cmd
Enable the service on the storage node
- Click on the cog in the right-hand navigation pane
- Select S3 from the menu
- Enter the name of the storage node(s) in the first field, select the user database, set the port to 8118 (port 80 clashes with web browsing in OpenStack), and save
- Then in the same (cog) menu, select 'User Database' from the navigation and add the root user that will access the S3 buckets
- User -> Add, enter the name root (leave the rest as is)
- Then tick the checkbox for root and select 'Create access key'
- Once all this is done, start the Quobyte S3 service on the storage server
[root@storage001 ~]# systemctl start quobyte-s3
[root@storage001 ~]# systemctl enable quobyte-s3
Created symlink from /etc/systemd/system/multi-user.target.wants/quobyte-s3.service to /usr/lib/systemd/system/quobyte-s3.service.
[root@storage001 ~]# systemctl status quobyte-s3
● quobyte-s3.service - Quobyte S3 Proxy
Loaded: loaded (/usr/lib/systemd/system/quobyte-s3.service; enabled; vendor preset: disabled)
Active: active (running) since Fri 2019-01-11 11:26:59 GMT; 9s ago
Main PID: 9136 (java)
CGroup: /system.slice/quobyte-s3.service
└─9136 /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.181-3.b13.el7_5.x86_64/jre/bin/java -server com.quobyte.s3.S3ProxyService...
Jan 11 11:27:00 storage001.vscaler.net quobyte-s3[9136]: [ I | main | 1 | Jan 11 11:27:00.366 ] Service is usin...abase
Jan 11 11:27:00 storage001.vscaler.net quobyte-s3[9136]: [ I | main | 1 | Jan 11 11:27:00.404 ] HTTP status ser... 7874
Jan 11 11:27:00 storage001.vscaler.net quobyte-s3[9136]: [ I | main | 1 | Jan 11 11:27:00.472 ] Starting RPC Se...rkers
Jan 11 11:27:00 storage001.vscaler.net quobyte-s3[9136]: [ I | PBRPCSrv@2 | 37 | Jan 11 11:27:00.474 ] Detected 32 ava..., 31]
Jan 11 11:27:00 storage001.vscaler.net quobyte-s3[9136]: [ I | Acceptor@7864 | 34 | Jan 11 11:27:00.487 ] RPC Server list... 7864
Jan 11 11:27:00 storage001.vscaler.net quobyte-s3[9136]: [ I | main | 1 | Jan 11 11:27:00.536 ] Trying to regis...ces])
Jan 11 11:27:00 storage001.vscaler.net quobyte-s3[9136]: [ I | main | 1 | Jan 11 11:27:00.537 ] Using default g....10.1
Jan 11 11:27:00 storage001.vscaler.net quobyte-s3[9136]: [ I | main | 1 | Jan 11 11:27:00.539 ] Detected primar...eth0)
Jan 11 11:27:00 storage001.vscaler.net quobyte-s3[9136]: [ I | main | 1 | Jan 11 11:27:00.540 ] 9c717b32-7cb2-4...0.12]
Jan 11 11:27:00 storage001.vscaler.net quobyte-s3[9136]: [ I | main | 1 | Jan 11 11:27:00.546 ] Service registr...leted
Hint: Some lines were ellipsized, use -l to show in full.
# Give it 10-20 seconds to make sure the service is running and the port is listening
Publish Volume as a Bucket
- If you want to make a volume accessible via S3, go to the Volumes tab, tick the volume you want to publish, and open the volume drop-down menu.
- From here select 'Publish as an s3 bucket'
- Give it a name and click publish
Accessing via the s3cmd client
- Install the s3cmd utility
yum -y install s3cmd
# or on Ubuntu
apt-get install s3cmd
- Then run the configuration utility
- NOTE: Go back to the settings 'cog' on the Quobyte web interface and get the user access key and secret key: 'Settings cog' -> User Database, select the user, click the key icon, show the secret key.
# in our case the S3 service was running on storage001 and the port was set to 8118
s3cmd --configure
# You should end up with something like this:
New settings:
Access Key: 3VYasyG6rXbDbcnhmRPS
Secret Key: TagWn7jdX8fTJt+buDaWhf/FSqGa4kQezwssaY1Y
Default Region: US
S3 Endpoint: storage001:8118
DNS-style bucket+hostname:port template for accessing a bucket: storage001:8118
Encryption password:
Path to GPG program: /usr/bin/gpg
Use HTTPS protocol: False
HTTP Proxy server name:
HTTP Proxy server port: 0
Now let's use the s3cmd command-line tool to view and get files
# check buckets
root@object-access-node:/etc/apt# s3cmd ls
2019-01-11 13:15 s3://hpc-archive-bucket
2019-01-11 13:15 s3://hpc-scratch-vol
# view files in a bucket
root@object-access-node:/etc/apt# s3cmd ls s3://hpc-scratch-vol/
2019-01-11 11:39 0 s3://hpc-scratch-vol/new-object.txt
2019-01-10 23:35 20971520 s3://hpc-scratch-vol/newfile.log
# get file
s3cmd get s3://hpc-scratch-vol/new-object.txt
download: 's3://hpc-scratch-vol/new-object.txt' -> './new-object.txt' [1 of 1]
0 of 0 0% in 0s 0.00 B/s done
Update procedure
https://support.quobyte.com/docs/3/latest/runbook_duties.html#updating