Bright:Configuring Hadoop
Adding drives to a pre-installed instance
The existing configuration uses directory /disk1 and we need to add /disk2. Two steps are needed:
- Partition and format the drive
- Add an FS mount for that drive, on the required nodes/categories, through Bright (in our case /disk2); a sketch of both steps follows below
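
As a minimal sketch of these two steps: the drive is partitioned and formatted on the node, and the mount is then added through cmsh at category level. The device name /dev/sdb, the ext4 filesystem, and the exact prompts shown are assumptions for illustration, not values taken from this setup:

# Partition and format the new drive (device name and filesystem are assumed)
parted -s /dev/sdb mklabel gpt mkpart primary ext4 0% 100%
mkfs.ext4 /dev/sdb1

# Add the FS mount for the hadoop category through cmsh
cmsh
[hadooptest]% category use hadoop
[hadooptest->category[hadoop]]% fsmounts
[hadooptest->category[hadoop]->fsmounts]% add /disk2
[hadooptest->category*[hadoop*]->fsmounts*[/disk2*]]% set device /dev/sdb1
[hadooptest->category*[hadoop*]->fsmounts*[/disk2*]]% set filesystem ext4
[hadooptest->category*[hadoop*]->fsmounts*[/disk2*]]% commit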
Adding disks to Datanodes
Assuming the datanode role is assigned to the hadoop category.
cmsh
[hadooptest]% category roles hadoop
[hadooptest->category[hadoop]->roles]% configurations hadoopdatanode
[hadooptest->category[hadoop]->roles[hadoopdatanode]->configurations]% get boston-hdfs datadirectories
/disk1/hadoop/hdfs/datanode
[hadooptest->category[hadoop]->roles[hadoopdatanode]->configurations]% set boston-hdfs datadirectories /disk1/hadoop/hdfs/datanode /disk2/hadoop/hdfs/datanode
[hadooptest->category*[hadoop*]->roles*[hadoopdatanode*]->configurations*]% commit

Adding disks to Namenodes
Assuming the namenode role is assigned to the hadoop1 node.
cmsh
[hadooptest]% device roles hadoop1
[hadooptest->device[hadoop1]->roles]% configurations hadoopnamenode
[hadooptest->device[hadoop1]->roles[hadoopnamenode]->configurations]% get boston-hdfs datadirectories
/disk1/hadoop/hdfs/namenode
[hadooptest->device[hadoop1]->roles[hadoopnamenode]->configurations]% set boston-hdfs datadirectories /disk1/hadoop/hdfs/namenode /disk2/hadoop/hdfs/namenode
[hadooptest->device*[hadoop1*]->roles*[hadoopnamenode*]->configurations*]% commit
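
After committing, the change can be read back with get to confirm that both directories are configured (the output formatting below is an assumption; propagation to the Hadoop configuration files and the restart of the affected HDFS services are normally handled by Bright, but it is worth checking the service status on the nodes):

[hadooptest->device[hadoop1]->roles[hadoopnamenode]->configurations]% get boston-hdfs datadirectories
/disk1/hadoop/hdfs/namenode /disk2/hadoop/hdfs/namenode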