Bright:Configuring Hadoop
Latest revision as of 10:44, 17 February 2015
== Adding drives in a pre-installed instance ==

The existing configuration uses directory <tt>/disk1</tt> and we need to add <tt>/disk2</tt>.
# Partition and format the new drive
# Add an FS mount for that drive, on the required nodes/categories, through Bright (in our case <tt>/disk2</tt>)
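The FS mount step can also be done from <tt>cmsh</tt>. A minimal sketch follows; the device name <tt>/dev/sdb1</tt> and the <tt>ext4</tt> filesystem are assumptions, so adjust them to your hardware:

<syntaxhighlight>
cmsh
[hadooptest]% category fsmounts hadoop
[hadooptest->category[hadoop]->fsmounts]% add /disk2
[hadooptest->category*[hadoop*]->fsmounts*[/disk2*]]% set device /dev/sdb1
[hadooptest->category*[hadoop*]->fsmounts*[/disk2*]]% set filesystem ext4
[hadooptest->category*[hadoop*]->fsmounts*[/disk2*]]% commit
</syntaxhighlight>

After the commit, CMDaemon mounts <tt>/disk2</tt> on all nodes in the <tt>hadoop</tt> category.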
=== Adding disks to Datanodes ===

Assuming the <tt>datanode</tt> role is assigned to the <tt>hadoop</tt> category.
<syntaxhighlight>
cmsh
[hadooptest]% category roles hadoop
[hadooptest->category[hadoop]->roles]% configurations hadoopdatanode
[hadooptest->category[hadoop]->roles[hadoopdatanode]->configurations]% get boston-hdfs datadirectories
/disk1/hadoop/hdfs/datanode
[hadooptest->category[hadoop]->roles[hadoopdatanode]->configurations]% set boston-hdfs datadirectories /disk1/hadoop/hdfs/datanode /disk2/hadoop/hdfs/datanode
[hadooptest->category*[hadoop*]->roles*[hadoopdatanode*]->configurations*]% commit
</syntaxhighlight>

=== Adding disks to Namenodes ===
Assuming the <tt>namenode</tt> role is assigned to the <tt>hadoop1</tt> node.
<syntaxhighlight>
cmsh
[hadooptest]% device roles hadoop1
[hadooptest->device[hadoop1]->roles]% configurations hadoopnamenode
[hadooptest->device[hadoop1]->roles[hadoopnamenode]->configurations]% get boston-hdfs datadirectories
/disk1/hadoop/hdfs/namenode
[hadooptest->device[hadoop1]->roles[hadoopnamenode]->configurations]% set boston-hdfs datadirectories /disk1/hadoop/hdfs/namenode /disk2/hadoop/hdfs/namenode
[hadooptest->device*[hadoop1*]->roles*[hadoopnamenode*]->configurations*]% commit
</syntaxhighlight>
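Note that, unlike a DataNode, a NameNode will generally refuse to start if one of its configured metadata directories is empty. A common precaution is to copy the existing metadata into the new directory while HDFS is stopped; this is a sketch assuming the paths above and an <tt>hdfs:hadoop</tt> owner, so adjust the user and group to your installation:

<syntaxhighlight>
mkdir -p /disk2/hadoop/hdfs/namenode
cp -a /disk1/hadoop/hdfs/namenode/. /disk2/hadoop/hdfs/namenode/
chown -R hdfs:hadoop /disk2/hadoop/hdfs/namenode
</syntaxhighlight>

Once both directories hold identical metadata, the NameNode can be restarted and will write to both.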