MapR: Testing
Latest revision as of 10:22, 27 March 2015

== Test MapR Installation with pre-installed examples ==

<syntaxhighlight lang="console">
[mapr@smuk01 ~]$ cd /opt/mapr/hadoop/hadoop-2.5.1/share/hadoop/mapreduce/
[mapr@smuk01 mapreduce]$ hadoop jar hadoop-mapreduce-examples-2.5.1-mapr-1501.jar

An example program must be given as the first argument.
Valid program names are:
  aggregatewordcount: An Aggregate based map/reduce program that counts the words in the input files.
  aggregatewordhist: An Aggregate based map/reduce program that computes the histogram of the words in the input files.
  bbp: A map/reduce program that uses Bailey-Borwein-Plouffe to compute exact digits of Pi.
  blocklocality: Checking Map job locality
  dbcount: An example job that count the pageview counts from a database.
  distbbp: A map/reduce program that uses a BBP-type formula to compute exact bits of Pi.
  grep: A map/reduce program that counts the matches of a regex in the input.
  join: A job that effects a join over sorted, equally partitioned datasets
  multifilewc: A job that counts words from several files.
  pentomino: A map/reduce tile laying program to find solutions to pentomino problems.
  pi: A map/reduce program that estimates Pi using a quasi-Monte Carlo method.
  randomtextwriter: A map/reduce program that writes 10GB of random textual data per node.
  randomwriter: A map/reduce program that writes 10GB of random data per node.
  secondarysort: An example defining a secondary sort to the reduce.
  sleep: A job that sleeps at each map and reduce task.
  sort: A map/reduce program that sorts the data written by the random writer.
  sudoku: A sudoku solver.
  terachecksum: Compute checksum of terasort output to compare with teragen checksum.
  teragen: Generate data for the terasort
  teragenwithcrc: Generate data for the terasort with CRC checksum
  terasort: Run the terasort
  terasortwithcrc: Run the terasort with CRC checksum
  teravalidate: Checking results of terasort
  teravalidaterecords: Checking results of terasort in terms of missing/duplicate records
  teravalidatewithcrc: Checking results of terasort along with crc verification
  wordcount: A map/reduce program that counts the words in the input files.
  wordmean: A map/reduce program that counts the average length of the words in the input files.
  wordmedian: A map/reduce program that counts the median length of the words in the input files.
  wordstandarddeviation: A map/reduce program that counts the standard deviation of the length of the words in the input files.
</syntaxhighlight>
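The canonical example in the list is wordcount: the map phase emits a (word, 1) pair for each word, and the reduce phase sums the counts per word. A minimal in-process Python sketch of that computation (an illustration only, not the MapR/Hadoop Java code):

```python
from collections import defaultdict

def wordcount(lines):
    """Toy single-process version of the wordcount example's logic."""
    counts = defaultdict(int)
    for line in lines:
        # "map" phase: each word in a line contributes a (word, 1) pair;
        # "reduce" phase: sum the 1s per word (collapsed here into one loop)
        for word in line.split():
            counts[word] += 1
    return dict(counts)

print(wordcount(["hello mapr", "hello hadoop"]))
```

On a real cluster the map tasks run in parallel over input splits and the framework shuffles pairs by key to the reducers; the per-word sums are the same.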

<syntaxhighlight lang="console">
[mapr@smuk01 mapreduce]$ hadoop jar hadoop-mapreduce-examples-2.5.1-mapr-1501.jar pi 10 10
Number of Maps  = 10
Samples per Map = 10
...
15/03/27 03:17:26 INFO mapreduce.Job:  map 0% reduce 0%
15/03/27 03:17:30 INFO mapreduce.Job:  map 10% reduce 0%
15/03/27 03:17:31 INFO mapreduce.Job:  map 20% reduce 0%
15/03/27 03:17:32 INFO mapreduce.Job:  map 40% reduce 0%
15/03/27 03:17:33 INFO mapreduce.Job:  map 50% reduce 0%
15/03/27 03:17:34 INFO mapreduce.Job:  map 80% reduce 0%
15/03/27 03:17:35 INFO mapreduce.Job:  map 100% reduce 0%
15/03/27 03:17:39 INFO mapreduce.Job:  map 100% reduce 100%
15/03/27 03:17:39 INFO mapreduce.Job: Job job_1427394736582_0325 completed successfully
...
Job Finished in 19.002 seconds
Estimated value of Pi is 3.20000000000000000000
</syntaxhighlight>

== Test MapR with HiBench ==

See [[HiBench Testing]]