Monday, February 6, 2017

Oracle Database Cloud (DBaaS) Performance - Part 4 - Network

In this last part of the series I'll take a brief look at the network performance measured in the Oracle DBaaS environment, in particular the network interface that gets used as the private interconnect in case of a RAC configuration. The network performance could also be relevant when evaluating how to transfer data to the cloud database.

I've used the freely available "iperf3" tool to measure the network bandwidth and got the following TCP results:

[root@test12102rac2 ~]# iperf3 -c 10.196.49.126
Connecting to host 10.196.49.126, port 5201
[  4] local 10.196.49.130 port 41647 connected to 10.196.49.126 port 5201
[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
[  4]   0.00-1.00   sec   651 MBytes  5.46 Gbits/sec   15    786 KBytes
[  4]   1.00-2.00   sec   823 MBytes  6.90 Gbits/sec   11   1.07 MBytes
[  4]   2.00-3.00   sec   789 MBytes  6.62 Gbits/sec    7   1014 KBytes
[  4]   3.00-4.00   sec   700 MBytes  5.87 Gbits/sec   39   1.04 MBytes
[  4]   4.00-5.00   sec   820 MBytes  6.88 Gbits/sec   21    909 KBytes
[  4]   5.00-6.00   sec   818 MBytes  6.86 Gbits/sec   17   1.17 MBytes
[  4]   6.00-7.00   sec   827 MBytes  6.94 Gbits/sec   21   1005 KBytes
[  4]   7.00-8.00   sec   792 MBytes  6.64 Gbits/sec    8    961 KBytes
[  4]   8.00-9.00   sec   767 MBytes  6.44 Gbits/sec    4   1.11 MBytes
[  4]   9.00-10.00  sec   823 MBytes  6.91 Gbits/sec    6   1.12 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec  7.63 GBytes  6.55 Gbits/sec  149             sender
[  4]   0.00-10.00  sec  7.63 GBytes  6.55 Gbits/sec                  receiver

iperf Done.

So the TCP network bandwidth seems to be somewhere between 6 and 7 Gbits/sec, which is not too bad.
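
For reference, such a measurement can be set up quite easily. The following is only a sketch (the server side invocation isn't shown above, and the parallel stream variant is just an additional idea), assuming iperf3 is installed on both nodes and port 5201 is reachable between them:

# on the node acting as server (here 10.196.49.126): start iperf3 in listen mode
iperf3 -s

# on the other node: run the default 10 second TCP test against the server
iperf3 -c 10.196.49.126

# optionally use several parallel TCP streams to check whether a single
# connection is the limiting factor
iperf3 -c 10.196.49.126 -P 4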

For completeness, the UDP results look like the following:

[root@test12102rac2 ~]# iperf3 -c 10.196.49.126 -u -b 10000M
Connecting to host 10.196.49.126, port 5201
[  4] local 10.196.49.130 port 55482 connected to 10.196.49.126 port 5201
[ ID] Interval           Transfer     Bandwidth       Total Datagrams
[  4]   0.00-1.00   sec   494 MBytes  4.14 Gbits/sec  63199
[  4]   1.00-2.00   sec   500 MBytes  4.20 Gbits/sec  64057
[  4]   2.00-3.00   sec   462 MBytes  3.87 Gbits/sec  59102
[  4]   3.00-4.00   sec   496 MBytes  4.16 Gbits/sec  63491
[  4]   4.00-5.00   sec   482 MBytes  4.05 Gbits/sec  61760
[  4]   5.00-6.00   sec   425 MBytes  3.57 Gbits/sec  54411
[  4]   6.00-7.00   sec   489 MBytes  4.10 Gbits/sec  62574
[  4]   7.00-8.00   sec   411 MBytes  3.45 Gbits/sec  52599
[  4]   8.00-9.00   sec   442 MBytes  3.71 Gbits/sec  56541
[  4]   9.00-10.00  sec   481 MBytes  4.04 Gbits/sec  61614
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Jitter    Lost/Total Datagrams
[  4]   0.00-10.00  sec  4.57 GBytes  3.93 Gbits/sec  0.028 ms  23434/599340 (3.9%)
[  4] Sent 599340 datagrams

iperf Done.
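
The roughly 4 Gbits/sec at about 4% datagram loss could be investigated a bit further, for example by lowering the target rate until the loss disappears, or by reversing the test direction. Again only a sketch, using standard iperf3 options:

# lower the UDP target bandwidth to see at which rate the loss goes away
iperf3 -c 10.196.49.126 -u -b 4000M

# reverse the direction so the server sends and the client receives
iperf3 -c 10.196.49.126 -u -b 10000M -R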

Finally, "ping" results look like the following:

9665 packets transmitted, 9665 received, 0% packet loss, time 9665700ms
rtt min/avg/max/mdev = 0.135/0.308/199.685/3.322 ms

So an average latency of 0.3 ms also doesn't look too bad.
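
The ping invocation itself isn't shown above; a run producing a summary like this could look roughly as follows (a sketch, assuming the default 1 second interval and the same interconnect address as in the iperf3 tests):

# send ICMP echo requests at the default 1 second interval and only print
# the summary statistics at the end (-q suppresses the per-packet output)
ping -q -c 9665 10.196.49.126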

[Update 6.2.2017]: Thanks to Frits Hoogland, who pointed out the very high "max" value for the ping. Although I didn't spot the pattern he saw in a different network test setup ("cross cloud platform"), which was an initial slowness, it's still worth pointing out the high "max" value of almost 200 ms for a ping. The "mdev" value of 3.322 ms also suggests that there were some significant variations in ping times, potentially hidden behind the average values provided. I'll repeat the ping test to see if I can reproduce these outliers and, if so, find out more details.
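
One way to repeat the test while capturing the outliers rather than just the summary would be to timestamp every reply and filter for high round trip times, for example (a sketch, the 1 ms threshold is an arbitrary choice):

# print a receive timestamp (-D) before every reply and keep only the lines
# whose round trip time exceeds 1 ms
ping -D 10.196.49.126 | awk -F'time=' '/time=/ && $2+0 > 1 {print}'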
