Benchmarking: Gromacs-5.0
Requirements
GPU support
If you wish to use the excellent native GPU support in GROMACS, version 4.0 or later of NVIDIA's CUDA software development kit is required, and the latest version is strongly encouraged. NVIDIA GPUs with compute capability 2.0 or higher (e.g. Fermi or Kepler cards) are required. Use the latest CUDA version and driver supported by your hardware, but beware of possible performance regressions in newer CUDA versions on older hardware. Note that while some CUDA compilers (nvcc) might not officially support recent versions of gcc as the back-end compiler, we still recommend using a gcc version recent enough to get the best SIMD support for your CPU, since GROMACS always runs some code on the CPU. It is most reliable to use the same C++ compiler version for the GROMACS code as for the nvcc back-end, but it can be faster to mix compiler versions to suit particular contexts.
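As a quick sanity check before configuring, you can verify the toolkit and driver with the standard NVIDIA tools (a minimal sketch, assuming nvcc and nvidia-smi are on your PATH):

# Check the installed CUDA toolkit version (4.0 or newer is required)
nvcc --version
# List the detected GPUs and the driver version; compute capability 2.0
# or higher (Fermi/Kepler) is required for GPU acceleration
nvidia-smi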
CMake
GROMACS 5.0 uses the CMake build system, and requires version 2.8.8 or higher. Lower versions will not work. You can check whether CMake is installed, and what version it is, with cmake --version. If you need to install CMake, then first check whether your platform's package management system provides a suitable version, or visit http://www.cmake.org/cmake/help/install.html for pre-compiled binaries, source code and installation instructions. The GROMACS team recommends you install the most recent version of CMake you can.
Note that CMake 3.1.0 generates a faulty Makefile that prevents the NVIDIA components from being compiled. Version 2.8.12.2 works fine.
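To confirm which version you have before configuring (a minimal check, assuming cmake is on your PATH):

# Must report 2.8.8 or higher; avoid 3.1.0 for GPU builds (see the note above)
cmake --version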
MPI support
If you wish to run in parallel on multiple machines across a network, you will need to have
- an MPI library installed that supports the MPI 1.3 standard, and
- wrapper compilers that will compile code using that library.
The GROMACS team recommends OpenMPI version 1.6 (or higher), MPICH version 1.4.1 (or higher), or your hardware vendor's MPI installation. The most recent version of any of these is likely to be the best. More specialized networks might depend on accelerations only available in the vendor's library. LAM/MPI might work, but since it has been deprecated for years, it is not supported.
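A quick way to confirm that the wrapper compilers are available is to query them directly (a minimal sketch; mpicc and mpicxx are the usual OpenMPI/MPICH wrapper names, but your vendor's MPI may use different ones):

# Each should report the version of the underlying back-end compiler
mpicc --version
mpicxx --version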
Often OpenMP parallelism is an advantage for GROMACS, but support for this is generally built into your compiler and detected automatically.
In summary, for maximum performance you will need to examine how you will use GROMACS, what hardware you plan to run on, and whether you can afford a non-free compiler for slightly better performance. Unfortunately, the only way to find out is to test different options and parallelization schemes for the actual simulations you want to run. You will still get good performance with the default build and runtime options, but if you truly want to push your hardware to the performance limit, the days of just blindly starting programs with mdrun are gone.
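As an illustration of the kind of explicit runtime tuning this implies, a run might pin threads and map GPUs to ranks by hand (a sketch only; the flags are standard GROMACS 5.0 mdrun options, but the rank/thread counts and the topol input name are placeholders to adapt to your hardware):

# Two thread-MPI ranks with eight OpenMP threads each, ranks mapped to
# GPUs 0 and 1, thread pinning enabled; topol.tpr is a placeholder input
gmx mdrun -ntmpi 2 -ntomp 8 -gpu_id 01 -pin on -deffnm topol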
Download
wget ftp://ftp.gromacs.org/pub/gromacs/gromacs-5.0.4.tar.gz
tar xzf gromacs-5.0.4.tar.gz
cd gromacs-5.0.4
mkdir build
cd build
Build
GPU
module load cuda60/toolkit/6.0.37
/cm/shared/apps/cmake/2.8.12.1/bin/cmake .. \
-DCUDA_TOOLKIT_ROOT_DIR=/cm/shared/apps/cuda60/toolkit/current \
-DGMX_BUILD_OWN_FFTW=ON \
-DCMAKE_C_COMPILER=gcc \
-DCMAKE_CXX_COMPILER=g++ \
-DGMX_GPU=ON \
-DCMAKE_INSTALL_PREFIX=/cm/shared/apps/Gromacs-5/gpu \
-DBUILD_SHARED_LIBS=OFF
Intel MPI + Intel MKL
module load intel/compiler/64/15.0/2015.3.187
module load intel/mkl/64/11.2/2015.3.187
module load intel-mpi/64/4.1.3/049
cmake .. \
-DGMX_FFT_LIBRARY=mkl \
-DCMAKE_C_COMPILER=mpiicc \
-DCMAKE_CXX_COMPILER=mpiicpc \
-DGMX_MPI=ON \
-DGMX_SIMD=AVX2_256 \
-DCMAKE_INSTALL_PREFIX=/cm/shared/apps/gromacs-5.0.4/intel-mpi \
-DBUILD_SHARED_LIBS=OFF
make
make install
MPI + GPU
module load cuda60/toolkit/6.0.37
module load openmpi/gcc/64/1.8.1
/cm/shared/apps/cmake/2.8.12.1/bin/cmake .. \
-DCUDA_TOOLKIT_ROOT_DIR=/cm/shared/apps/cuda60/toolkit/current \
-DGMX_BUILD_OWN_FFTW=ON \
-DCMAKE_C_COMPILER=mpicc \
-DCMAKE_CXX_COMPILER=mpic++ \
-DGMX_GPU=ON \
-DGMX_MPI=ON \
-DCMAKE_INSTALL_PREFIX=/cm/shared/apps/Gromacs-5/mpi-gpu \
-DBUILD_SHARED_LIBS=OFF
make
make install
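After installation, one way to verify the build is to source the GMXRC file that GROMACS installs under the chosen prefix and print the version header, which lists the MPI and GPU support that was compiled in (a sketch, assuming the default binary suffix: MPI builds install the binary as gmx_mpi):

# Put the new build on the PATH and check the compiled-in features
source /cm/shared/apps/Gromacs-5/mpi-gpu/bin/GMXRC
gmx_mpi mdrun -version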