Posts Tagged ‘perftest’

HCX Perftest Issue

May 9, 2021

Introduction

VMware HCX is a great tool that simplifies VM migrations at scale, whether on-prem to on-prem or on-prem to cloud. I’ve worked with many different VM migration tools before, and what I particularly like about HCX is its ability to stretch network subnets between the source and destination environments. It reduces (or completely removes) the need to re-IP VMs, which simplifies the migration and reduces the risk of inadvertently introducing issues into migrated applications.

Perftest Tool

HCX is a complex set of technologies, and getting the initial deployment right is key to building a reliable migration fabric. Perftest is a CLI tool available on the Interconnect (IX) and Network Extension (NE) HCX appliances, which lets you run validation tests to confirm everything is functioning correctly, as well as establish a performance baseline. To run the tool, SSH into HCX Manager, enter CCLI and then go to one of your IX or NE appliances:

# ccli            # enter the Central CLI on HCX Manager
# list            # list the deployed IX and NE appliances
# go 0            # connect to the appliance with index 0 from the list
# perftest all    # run the full perftest suite

Issue Description

There is one issue you can come across when running perftest, where it partially completes and then fails with the following errors:

Message Error: map[string]interface {}{"grpc_code":14, "http_code":503, "http_status":"Service Unavailable", "message":"rpc error: code = Unavailable desc = transport is closing"}

and

Internal failure happens. Err: http.Post(https://appliance_ip:9443/perftest/stoptest) return statusCode: 503

Solution

The reason for this error is blocked connectivity on port TCP/4500. HCX uses ports UDP/500 and UDP/4500 to establish tunnels between the IX and NE appliance pairs, but that’s not enough for perftest, which also needs TCP/4500 open between the appliances.

Perftest actually gives you a hint at the very beginning of its output, but it’s easy to overlook. This requirement is not well documented (at least at the time of writing), so keep it in mind next time you deploy HCX.
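
Before re-running perftest, it can be worth confirming that the firewall actually passes TCP/4500 between the networks the appliances sit in. A quick, generic check (not an HCX command; the address below is a placeholder and nc flags vary slightly between netcat variants) is to listen on the port on one side and probe it from the other:

# On a test host in the remote appliance's network, listen on TCP/4500:
nc -l -p 4500

# From a test host in the local appliance's network, probe the listener;
# a successful connection means TCP/4500 is not blocked in between:
nc -vz 192.0.2.50 4500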


Benchmarking InfiniBand

February 2, 2011

As I’ve already mentioned in my previous post, “Activating InfiniBand stack in Linux”, the perftest package includes simple tests for benchmarking IB bandwidth and latency. Here are my results for the default ib_write_bw and ib_write_lat tests. The write, read and send test results don’t differ much, so I’m posting only the write results.
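
For reference, these benchmarks are run in pairs: start a test with no arguments on one node (it waits in server mode), then run the same test on the second node with the first node’s hostname or IP to kick off the measurement. The hostname below is a placeholder for my first IB node:

# On the first node - start the test in server mode (one test at a time; it just waits):
ib_write_bw
ib_write_lat

# On the second node - point the same test at the first node to run the measurement:
ib_write_bw ib-node1
ib_write_lat ib-node1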

------------------------------------------------------------------
                    RDMA_Write BW Test
Number of qp's running 1
Connection type : RC
Each Qp will post up to 100 messages each time
Inline data is used up to 0 bytes message
  local address:  LID 0x04, QPN 0x18004a, PSN 0xcf8a2e
RKey 0x2c042529 VAddr 0x002af439bf2000
  remote address: LID 0x01, QPN 0x12004a, PSN 0xb446fe,
RKey 0x440428db VAddr 0x002b46ea9b5000
Mtu : 2048
------------------------------------------------------------------
 #bytes #iterations    BW peak[MB/sec]    BW average[MB/sec]
  65536        5000            1350.34               1350.27
------------------------------------------------------------------

------------------------------------------------------------------
                    RDMA_Write Latency Test
Inline data is used up to 400 bytes message
Connection type : RC
   local address: LID 0x04 QPN 0x16004a PSN 0x5d05e8
RKey 0x2a042529 VAddr 0x00000017f88002
  remote address: LID 0x01 QPN 0x10004a PSN 0xb8cade
RKey 0x420428db VAddr 0x00000000ae2002
Mtu : 2048
------------------------------------------------------------------
 #bytes #iterations    t_min[usec]    t_max[usec]  t_typical[usec]
      2        1000           1.16           6.93             1.22
------------------------------------------------------------------