= Glossary =

=== Common Terms ===

* '''Epochs'''

The number of times that the learning algorithm will work through the entire training dataset. Increasing this has a linear effect on runtime, so doubling the number of epochs will double the training time. For benchmark purposes the final accuracy of the trained model isn't important, so only a small number of epochs is needed. For 16 V100 GPUs, 10-20 epochs provide sufficient work to determine realistic performance expectations.

* '''Batch Size'''

The number of samples used per iteration of the algorithm. This value can be increased in line with the amount of available GPU memory and also depends on the model being used. For example, ResNet-50 is a small network and on a 32GB V100 the batch size can be increased to 512, whereas the same GPU can only use a batch size of 256 when running ResNet-152 because the network is roughly three times larger. The way these two values translate into the number of training steps is sketched after this list.
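To get a feel for how epochs and batch size determine the amount of work in a benchmark run, the following is a minimal sketch of the arithmetic. The dataset size (roughly ImageNet's 1.28 million training images) and the treatment of the batch size as per-GPU are assumptions made for illustration only; they are not values reported by the benchmark scripts.

<syntaxhighlight>
# Illustrative arithmetic only; all values below are assumptions.
DATASET_SIZE=1281167   # approximate number of ImageNet training images
NUM_GPUS=16
BATCH_SIZE=256         # assumed per-GPU batch size
NUM_EPOCHS=10

# Samples processed per step across all GPUs
GLOBAL_BATCH=$((BATCH_SIZE * NUM_GPUS))
# Steps needed to see the whole dataset once (rounded up)
STEPS_PER_EPOCH=$(( (DATASET_SIZE + GLOBAL_BATCH - 1) / GLOBAL_BATCH ))
TOTAL_STEPS=$((STEPS_PER_EPOCH * NUM_EPOCHS))

echo "global batch: ${GLOBAL_BATCH}"
echo "steps per epoch: ${STEPS_PER_EPOCH}"
echo "total steps for ${NUM_EPOCHS} epochs: ${TOTAL_STEPS}"
</syntaxhighlight>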
  
= Benchmarks =

*[[TensorFlow Benchmarking | TensorFlow Benchmarking using ResNet (on GPU/v100)]]

== Training ==

=== TensorFlow ===

==== Building the docker image ====

TensorFlow provides scripts to run training benchmarks with different models. The scripts are hosted on GitHub [https://github.com/tensorflow/benchmarks here].

It is recommended to run the scripts using nvidia-docker2 and the TensorFlow Docker image obtained from [https://ngc.nvidia.com NGC].

To simplify the setup I have created a Dockerfile that pulls the image and downloads the scripts. To use it, first create a directory to hold your Dockerfiles.
 
 
 
<syntaxhighlight>
mkdir ~/Dockerfiles
</syntaxhighlight>
 
 
 
Then create a file in this directory named <code>tf_bench</code> (matching the build command below) and add the following
 
 
 
<syntaxhighlight>
FROM nvcr.io/nvidia/tensorflow:18.10-py3
RUN apt-get update && apt-get install -y git && git clone -b cnn_tf_v1.10_compatible https://github.com/tensorflow/benchmarks
ENTRYPOINT bash
</syntaxhighlight>
 
 
 
To build the image, run
 
 
 
<syntaxhighlight>
docker build -f ~/Dockerfiles/tf_bench -t tf_bench .
</syntaxhighlight>
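If the build succeeds, the new image should be visible locally. A quick sanity check (plain Docker, nothing specific to the benchmark scripts) is

<syntaxhighlight>
docker images tf_bench
</syntaxhighlight>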
 
 
 
The best way to run the container is in interactive mode, as this allows multiple runs to be performed in quick succession. To start the container, run
 
 
 
<syntaxhighlight>
docker run --runtime=nvidia -it tf_bench
</syntaxhighlight>
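When benchmarking fewer GPUs than the host has installed, it can help to expose only a subset of them to the container. With the nvidia-docker2 runtime this is controlled by the NVIDIA_VISIBLE_DEVICES environment variable; the GPU indices below are only an example.

<syntaxhighlight>
# Expose only GPUs 0 and 1 to the container (example indices)
docker run --runtime=nvidia -e NVIDIA_VISIBLE_DEVICES=0,1 -it tf_bench
</syntaxhighlight>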
 
 
 
The benchmark scripts are located in /workspace/benchmarks/scripts/tf_cnn_benchmarks (the repository is cloned into /workspace, the image's working directory).
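Inside the container, a quick way to confirm the scripts are present and to see the flags they accept (assuming the clone landed in the image's /workspace working directory) is

<syntaxhighlight>
cd /workspace/benchmarks/scripts/tf_cnn_benchmarks
ls
python tf_cnn_benchmarks.py --help
</syntaxhighlight>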
 
 
 
==== Benchmarking using synthetic data ====
 
 
 
To run the benchmark using synthetic data, execute the following from the scripts directory
 
 
 
<syntaxhighlight>
python tf_cnn_benchmarks.py --num_gpus=1 --batch_size=32 --model=resnet50 --variable_update=parameter_server
</syntaxhighlight>
 
 
 
The benchmark can be run with different models. Currently supported models are resnet50, resnet152, inception3 and vgg16. A simple way to benchmark all of them in succession is sketched below.
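To collect numbers for all four models in one session, a shell loop over the model names works; this just repeats the single-GPU command above with a conservative batch size, so adjust it per model as discussed in the glossary.

<syntaxhighlight>
# Short synthetic-data run for each supported model (single GPU)
for model in resnet50 resnet152 inception3 vgg16; do
    python tf_cnn_benchmarks.py --num_gpus=1 --batch_size=32 --model="$model" --variable_update=parameter_server
done
</syntaxhighlight>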
 
 
 
The trained model can be saved by providing a checkpoint directory using the --train_dir flag. For example, to train a ResNet-152 model for 10 epochs with 8 GPUs and save the trained model, use
 
 
 
<syntaxhighlight>
python tf_cnn_benchmarks.py --num_gpus=8 --batch_size=256 --model=resnet152 --variable_update=parameter_server --train_dir=/workspace/ckpt_dir --num_epochs=10
</syntaxhighlight>
 
 
 
The saved model can then be used to perform other benchmarks, such as inference; a possible approach is sketched below.
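As a sketch of how that might look: this branch of the scripts exposes an --eval flag for evaluating a checkpoint from --train_dir and a --forward_only flag for benchmarking the forward pass only, but treat these flag names as assumptions and confirm them with --help before relying on them.

<syntaxhighlight>
# Assumed flags (verify with --help): evaluate the checkpoint saved above
python tf_cnn_benchmarks.py --num_gpus=8 --batch_size=256 --model=resnet152 --variable_update=parameter_server --train_dir=/workspace/ckpt_dir --eval=True

# Forward-pass-only run as a rough proxy for inference throughput
python tf_cnn_benchmarks.py --num_gpus=1 --batch_size=64 --model=resnet152 --forward_only=True
</syntaxhighlight>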
 
