Emulex OneConnect OCe14102-NM
System Configuration
- Drivers: elx-be2net-dd-rhel6-10.6.144.21-1.tar
- OS: CentOS 6.6 - Default Kernel
All steps in "10G NICs General Optimization" Followed
Systems
- Hosts: hft1, hft2
- Cards: OCE14102-NM-NFR
Tests
- SockPerf
- iPerf
- OSU MPI Benchmarks - bw, bibw, latency, mbr_mr
Benchmarks
SockPerf
Command Line Run:
[root@hft1 ~]# taskset -c 8 sockperf sr
[root@hft2 ~]# taskset -c 8 sockperf pp -i 10.10.10.2 -t 5 -m 12
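Both ends pin sockperf to a single core with taskset. One quick way to confirm that the chosen core is local to the NIC's NUMA node (eth0 here stands in for the actual interface name):

cat /sys/class/net/eth0/device/numa_node   # NUMA node the NIC hangs off (-1 = no affinity)
lscpu | grep "NUMA node"                   # CPUs per node; pick one of these for taskset -c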
Output
[root@hft2 elx-rhel6-be2net-dd-10.6.144.21-1]# taskset -c 8 sockperf pp -i 10.10.10.2 -t 5 -m 12
sockperf: == version #2.5.244 ==
sockperf[CLIENT] send on:sockperf: using recvfrom() to block on socket(s)
[ 0] IP = 10.10.10.1 PORT = 11111 # UDP
sockperf: Warmup stage (sending a few dummy messages)...
sockperf: Starting test...
sockperf: Test end (interrupted by timer)
sockperf: Test ended
sockperf: [Total Run] RunTime=5.100 sec; SentMessages=333984; ReceivedMessages=333983
sockperf: ========= Printing statistics for Server No: 0
sockperf: [Valid Duration] RunTime=5.000 sec; SentMessages=327436; ReceivedMessages=327436
sockperf: ====> avg-lat= 7.616 (std-dev=0.060)
sockperf: # dropped messages = 0; # duplicated messages = 0; # out-of-order messages = 0
sockperf: Summary: Latency is 7.616 usec
sockperf: Total 327436 observations; each percentile contains 3274.36 observations
sockperf: ---> <MAX> observation = 9.900
sockperf: ---> percentile 99.99 = 8.233
sockperf: ---> percentile 99.90 = 7.957
sockperf: ---> percentile 99.50 = 7.847
sockperf: ---> percentile 99.00 = 7.815
sockperf: ---> percentile 95.00 = 7.729
sockperf: ---> percentile 90.00 = 7.687
sockperf: ---> percentile 75.00 = 7.640
sockperf: ---> percentile 50.00 = 7.606
sockperf: ---> percentile 25.00 = 7.578
sockperf: ---> <MIN> observation = 7.337
iPerf
Command Line Run:
[root@hft1 ~]# taskset -c 8,9 iperf -s
[root@hft2 ~]# taskset -c 8,9 iperf -c 10.10.10.1
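This run uses a single TCP stream. If one stream cannot fill the link, a common variation is several parallel streams; a sketch using standard iperf2 flags (-P streams, -t duration, -i report interval; the values are illustrative):

[root@hft1 ~]# taskset -c 8,9 iperf -s -i 1
[root@hft2 ~]# taskset -c 8,9 iperf -c 10.10.10.1 -P 4 -t 30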
Output
------------------------------------------------------------
Client connecting to 10.10.10.1, TCP port 5001
TCP window size: 9.54 MByte (default)
------------------------------------------------------------
[ 3] local 10.10.10.2 port 30403 connected with 10.10.10.1 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 10.6 GBytes 9.10 Gbits/sec
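The OSU runs below use binaries from the pt2pt directory of the OSU Micro-Benchmarks suite. A minimal build sketch, assuming an MPI compiler wrapper is on the PATH (download URL and paths are illustrative for the 4.4.1 version seen in the output):

wget http://mvapich.cse.ohio-state.edu/download/mvapich/osu-micro-benchmarks-4.4.1.tar.gz
tar xzf osu-micro-benchmarks-4.4.1.tar.gz
cd osu-micro-benchmarks-4.4.1
./configure CC=mpicc CXX=mpicxx
make
cd mpi/pt2pt   # osu_bw, osu_bibw, osu_latency, osu_mbr_mr live here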
osu_bibw
Command Line Run:
[root@hft1 pt2pt]# mpirun -H 10.10.10.1,10.10.10.2 -np 2 --allow-run-as-root ./osu_bibw
Output
# OSU MPI Bi-Directional Bandwidth Test v4.4.1
# Size Bi-Bandwidth (MB/s)
1 0.64
2 1.29
4 2.59
8 5.17
16 10.38
32 20.24
64 39.61
128 79.93
256 153.93
512 296.42
1024 349.68
2048 373.45
4096 413.55
8192 423.65
16384 413.16
32768 423.40
65536 234.21
131072 235.67
262144 247.31
524288 236.90
1048576 242.36
2097152 237.07
4194304 237.14
osu_bw
- Pinning cores made little to no difference for the OSU runs; an explicit-binding sketch follows.
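For reference, one way to bind ranks explicitly with the same Open MPI mpirun used here (the binding flags are standard Open MPI options; the layout is illustrative):

[root@hft1 pt2pt]# mpirun -H 10.10.10.1,10.10.10.2 -np 2 --allow-run-as-root --bind-to core --report-bindings ./osu_bw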
Command Line Run:
[root@hft1 pt2pt]# mpirun -H 10.10.10.1,10.10.10.2 -np 2 --allow-run-as-root ./osu_bw
Output
# OSU MPI Bandwidth Test v4.4.1
# Size Bandwidth (MB/s)
1 0.30
2 0.61
4 1.23
8 2.48
16 4.97
32 9.77
64 18.96
128 38.91
256 77.56
512 146.62
1024 175.88
2048 198.64
4096 206.83
8192 219.13
16384 231.53
32768 233.15
65536 231.67
131072 234.22
262144 235.75
524288 236.57
1048576 236.94
2097152 237.13
4194304 237.23
osu_latency
Command Line Run:
[root@hft1 pt2pt]# mpirun -H 10.10.10.1,10.10.10.2 -np 2 --allow-run-as-root ./osu_latency
Output
# OSU MPI Latency Test v4.4.1
# Size Latency (us)
0 12.26
1 12.27
2 12.27
4 12.27
8 12.28
16 12.28
32 12.29
64 12.50
128 13.16
256 14.43
512 16.94
1024 23.27
2048 59.06
4096 112.98
8192 115.23
16384 137.94
32768 213.93
65536 402.38
131072 910.54
262144 1490.14
524288 2450.02
1048576 4761.29
2097152 9175.97
4194304 18007.57
osu_mbr_mr
Command Line Run:
[root@hft1 pt2pt]# mpirun -H 10.10.10.1,10.10.10.2 -np 2 --allow-run-as-root ./osu_mbr_mr
Output
# OSU MPI Multiple Bandwidth / Message Rate Test v4.4.1
# [ pairs: 1 ] [ window size: 64 ]
# Size MB/s Messages/s
1 0.33 325917.53
2 0.65 326412.92
4 1.31 326714.85
8 2.60 325151.66
16 5.23 326563.82
32 9.79 305954.68
64 19.61 306468.15
128 39.19 306191.99
256 79.89 312087.08
512 147.86 288783.11
1024 175.93 171803.09
2048 198.99 97163.82
4096 206.89 50509.06
8192 218.85 26714.51
16384 231.62 14136.78
32768 233.28 7119.02
65536 231.89 3538.37
131072 234.39 1788.27
262144 235.76 899.37
524288 236.60 451.27
1048576 236.96 225.98
2097152 237.13 113.07
4194304 237.23 56.56
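As a sanity check, the two columns relate by bandwidth = message size x message rate: at 4194304 bytes, 4194304 B x 56.56 msg/s = 237.2 MB/s, matching the MB/s column.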