HPC engineering TODO List
Revision as of 10:17, 16 October 2012
Benchmarking
- OpenFOAM benchmarking (ARM vs Sandy Bridge)
  - Single-node tests
  - Open MPI across the full 12/48 calx nodes
- Blender, open-source rendering, to get a feel for ARM vs Intel on media projects
- jbench (see what JVM performance is like on a system, eJRE vs OpenJDK)
- Itaru (BWA genome work): pBWA doesn't scale (needs 12 more systems)
- JustOneDB team: MySQL vs Postgres vs JustOneDB with their data sets (Bloom analytics software) (Peter)
- SPEC CPU and HEPSPEC on ARM
  - DP to email SPEC
- Open MPI 1.6 results for all tests
- JH: IMB tests; PingPong with np=2, then all tests on 2, 4 and 8 nodes (driver sketch after this list)
- Linpack (HPL)
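A minimal sketch of a driver for the IMB runs above. It assumes Open MPI's mpirun is on PATH, an IMB-MPI1 binary has been built from Intel's MPI Benchmarks source, and a hostfile lists the test nodes; HOSTFILE and IMB_BIN are placeholders. Run with no benchmark name, IMB-MPI1 executes its whole suite, which covers the "all tests" runs.

  #!/usr/bin/env python
  # Sketch: drive the IMB runs listed above and keep one log file per run.
  # HOSTFILE and IMB_BIN are placeholders for the real paths.
  import subprocess

  HOSTFILE = "hosts"      # placeholder: one test-node hostname per line
  IMB_BIN = "./IMB-MPI1"  # placeholder: path to the built IMB binary

  # PingPong is strictly a two-rank test; an empty benchmark list means
  # IMB-MPI1 runs its full suite ("all tests").
  runs = [(2, ["PingPong"])] + [(n, []) for n in (2, 4, 8)]

  for nodes, bench in runs:
      label = bench[0].lower() if bench else "all"
      log_name = "imb_%s_%dnodes.log" % (label, nodes)
      # -npernode 1 spreads one rank per node so the test stays network-bound.
      cmd = ["mpirun", "-np", str(nodes), "-npernode", "1",
             "--hostfile", HOSTFILE, IMB_BIN] + bench
      with open(log_name, "w") as log:
          subprocess.check_call(cmd, stdout=log)
      print("wrote %s" % log_name)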
Software testing
- Hadoop TeraSort on Apache Hadoop v1.0.3 (sketch after this list)
- Install / test CDH (Cloudera's distribution of Hadoop)
- JH: Intel E3-1220L v2 (Ivy Bridge), sysbench power/performance tests (sweep sketch after this list)
- JH: cxmanage on the head node, and write it up on the wiki
- VDI and SPICE for Ubuntu / Fedora; test on ARM, instructions on the wiki
- OpenStack, newest release with the dashboard
- ScaleIO to port their software to ARM (source and rebuild)
- Zettaset
  - DP to put us in touch with the UK partner
- Ubuntu stuff to do:
  - MAAS
  - Juju
  - Landscape
  - apt-proxy
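A sketch of the TeraSort run, assuming the stock hadoop CLI is on PATH, HDFS and MapReduce are up, and the examples jar that ships with the Apache Hadoop 1.0.3 tarball is to hand; the jar path, HDFS paths and row count are placeholders.

  #!/usr/bin/env python
  # Sketch: TeraSort on Apache Hadoop 1.0.3. Generate input, sort it, then
  # verify the output is totally ordered. Paths and sizes are placeholders.
  import subprocess

  EXAMPLES_JAR = "hadoop-examples-1.0.3.jar"  # placeholder: from the 1.0.3 tarball
  ROWS = 10 * 1000 * 1000                     # 10M rows x 100 bytes = ~1 GB

  def hadoop(*args):
      subprocess.check_call(["hadoop"] + list(args))

  hadoop("jar", EXAMPLES_JAR, "teragen", str(ROWS), "/bench/tera-in")
  hadoop("jar", EXAMPLES_JAR, "terasort", "/bench/tera-in", "/bench/tera-out")
  hadoop("jar", EXAMPLES_JAR, "teravalidate", "/bench/tera-out", "/bench/tera-report")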
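For the sysbench power/performance item, a sketch of a thread-count sweep using the 0.4.x-era sysbench CLI flags. sysbench only reports performance, so the power half of the test has to come from a metered PDU or wall meter read alongside each run.

  #!/usr/bin/env python
  # Sketch: sysbench CPU sweep for the E3-1220L v2 power/performance tests.
  # Uses the sysbench 0.4.x command line; note wall power per run by hand.
  import subprocess, time

  for threads in (1, 2, 4, 8):
      t0 = time.time()
      subprocess.check_call(["sysbench", "--test=cpu", "--cpu-max-prime=20000",
                             "--num-threads=%d" % threads, "run"])
      print("threads=%d wall=%.1fs" % (threads, time.time() - t0))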
Hardware Bits
- Server range for Viglen
  - Stage 1: hardware on Sandy Bridge similar to the current range
  - Stage 2: additional storage products for schools (cheap NAS / iSCSI solution) to compete against Hitachi
  - Stage 3: solutions, cloud, VMware?
Mike TODO List
- PCM cluster
  - Running tests
- K20
  - Install CUDA
- Benchmarks
  - Get used to building codes; here are some of the standard ones we use as part of HPC testing:
    - HPL (Linpack)
    - STREAM (memory bandwidth)
    - IMB (network throughput and latency)
    - IOzone (disk performance)
  - Give them a try when you get a chance (a rough triad stand-in follows this list)
- ARM Benchmarks
  - Oil and Gas: seismic processing benchmarks: http://geocomputing.narod.ru/benchmark.html (give them a go on an ARM box and an Intel box)
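While the real STREAM build is pending, here is a rough NumPy stand-in for its triad kernel as a memory-bandwidth smoke test. NumPy materialises a temporary for scalar*c, so actual traffic is higher than the bytes counted and the printed figure is a lower bound; report numbers only from the real C STREAM code.

  #!/usr/bin/env python
  # Sketch: STREAM-style triad (a = b + scalar*c) as a quick bandwidth check.
  # Indicative only; the real C STREAM is the one to quote.
  import time
  import numpy as np

  N = 20 * 1000 * 1000        # ~160 MB per double array, well past any cache
  a = np.zeros(N)
  b = np.ones(N)
  c = np.ones(N) * 2.0
  scalar = 3.0

  best = float("inf")
  for _ in range(5):          # best-of-5, as STREAM itself reports
      t0 = time.time()
      np.add(b, scalar * c, out=a)
      best = min(best, time.time() - t0)

  # Count only the three named arrays (read b, read c, write a) at 8 bytes
  # each; NumPy's temporary adds traffic, so this understates true bandwidth.
  gbytes = 3.0 * N * 8 / 1e9
  print("triad best: %.2f GB/s" % (gbytes / best))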
Completed section
- PCM cluster
  - Install the Intel compilers and licenses (you can take them from the head node: ~david/software/intel_compilers)
  - Install the Intel cluster check: ~david/software/platform/PCM 3.1
- Rocks Cluster
  - Get ACML installed in /share/apps/acml/5.x