
Advanced notes on Unified Parallel C installation

October 8, 2012

I already described the basic Berkeley UPC compiler installation here. Now let's go deeper into the details.

Backend Compilers

Basically, the UPC compiler is a translator from the UPC language to C. After the translation is done, a backend C compiler is invoked to actually compile the code. On Linux clusters GCC is used by default; if you have the Intel, Sun, or any other high-performance compiler installed, pass the CC and CXX variables at the UPC runtime configure step:

./configure CC=icc CXX=icpc --prefix=/opt/bupc-runtime-2.12.1-icc
./configure CC=suncc CXX=sunCC --prefix=/opt/bupc-runtime-2.10.0-suncc

Optional UPC builds

By default Berkeley UPC is installed in two configurations: debug (with GASNet assertions enabled and debugging info compiled in) and opt (an optimized version for everyday use). You will see debug and opt subdirectories in your UPC runtime build directory. But you can install additional versions of the runtime for other uses.

Berkeley UPC has an integrated tracing facility. If you upcrun an application with the -trace flag, tracing data is collected and you can analyze it with the upc_trace utility. A tracing build can be compiled by using the opt_trace multiconf option:

./configure --prefix=/opt/bupc-runtime-2.12.1 --with-multiconf=+opt_trace
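For instance, a tracing session might look roughly like this (the application and trace file names are placeholders; the actual trace file location is controlled by the GASNET_TRACEFILE environment variable):

upcrun -trace -n 4 ./my_app
upc_trace my_app-trace-file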

Berkeley UPC has integrated callbacks (called GASP) for third-party instrumentation tools. Instrumentation allows developers of performance analysis tools to gather all sorts of information about UPC program execution, such as which functions were called, their arguments, and so on. If you want to develop your own UPC performance analysis tool, you can use this feature during development and instruct your users to build the opt_inst version of UPC so they can use your tool later.

./configure --prefix=/opt/bupc-runtime-2.12.1 --with-multiconf=+opt_inst

You can debug UPC applications with the dbg build; if you are a developer working with an instrumented build of UPC and need to debug it, then build a dbg_inst version. There was a dbg_inst.patch (find the link below) that added dbg_inst functionality to UPC, but as far as I remember it has already been integrated into the compiler.

./configure --prefix=/opt/bupc-runtime-2.12.1 --with-multiconf=+dbg_inst

There was also another bug that broke dbg_inst in 2.12.1 (the functionality was originally implemented in 2.10.0) with the following errors:

/root/install/berkeley_upc-2.12.1/gasnet/gasnet_trace.c: In function ‘gasneti_trace_finish’:
/root/install/berkeley_upc-2.12.1/gasnet/gasnet_trace.c:988: error: ‘gasneti_mallocreport_filename’ undeclared (first use in this function)
/root/install/berkeley_upc-2.12.1/gasnet/gasnet_trace.c:988: error: (Each undeclared identifier is reported only once
/root/install/berkeley_upc-2.12.1/gasnet/gasnet_trace.c:988: error: for each function it appears in.)

To resolve this issue, apply mallocreport.patch00 (find the link below). If you use a recent Berkeley UPC build, you won't see this bug.

Block size

If you work with huge matrices and want to distribute them in large chunks of consecutive rows, you will run into the UPC block size limitation. UPC packs its shared pointer representation into a single 64-bit integer. By default 34 bits are allocated for the memory address, 10 bits for the thread, and 20 bits for the phase (or block size). 2^20 is only 1,048,576 elements, which is a very small number. You can redistribute the bits with the --with-sptr-packed-bits=value configure option, where value='phase,thread,addr', but then you will end up with either a small address space or a small number of threads.
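For example, a rebalanced layout that trades thread bits for a larger phase field could be configured like this (the 24,6,34 split is just an illustration; the three fields together have to fit into the 64-bit pointer):

./configure --prefix=/opt/bupc-runtime-2.12.1 --with-sptr-packed-bits=24,6,34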

Another option is to use the --enable-sptr-struct configure flag, which changes the shared pointer representation from an int to a struct. It increases the maximum block size to 2^31 - 1, which is 2,147,483,647. But even that can be too small if you conduct performance measurements and need to run your code with 1 thread: then the whole matrix is one huge block, and a 50000×50000 matrix already hits the limit.
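The corresponding configure line would be along these lines:

./configure --prefix=/opt/bupc-runtime-2.12.1 --enable-sptr-struct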

If 2^31 - 1 is not enough, the last option for you is to use a row-distributed algorithm instead of a row-block-distributed one.

POSIX shared memory problems with InfiniBand

UPC supports two types of intra-node inter-thread shared memory communication: POSIX shared memory and SysV shared memory. POSIX is configured by default. If you try to register large amounts of shared memory for many PSHM processes using the --shared-heap option, you can see errors like these:

*** FATAL ERROR: Unexpected error Bad address (rc=1 errno=14) when registering the segment
NOTICE: Before reporting bugs, run with GASNET_BACKTRACE=1 in the environment to generate a backtrace.
*** Caught a fatal signal: SIGABRT(6) on node 29/32

To solve this problem, rebuild the runtime using the following options:

./configure --prefix=/opt/bupc-runtime-2.12.1 --enable-pshm --disable-pshm-posix --enable-pshm-sysv

Bug when building the translator

With some vendor-built GCC releases, like Red Hat's, older versions of the translator fail to compile with an error like:

/usr/bin/ld: ipl_summarize_util.o: relocation R_X86_64_PC32 against `Phi_To_Idx_Map’ can not be used when making a shared object; recompile with -fPIC
/usr/bin/ld: final link failed: Bad value
collect2: ld returned 1 exit status

It's bug number 2202 in the UPC Bugzilla and is described here. The solution and patch are described in post 17. Find a copy of the patch below.

UPC I/O support for large files

UPC has a parallel I/O extension. In version 2.14.0 and earlier, UPC I/O by default supported files up to 2 GB in length, which led to upc_all_fread_shared() returning -1 ("Invalid argument") for data beyond the 2 GB limit. To change the maximum file offset from 2^31 - 1 bytes to 2^63 - 1 bytes, define BUPC_IO_64 at the runtime configure step:

./configure CC="gcc -DBUPC_IO_64" CXX="g++ -DBUPC_IO_64" --prefix=/opt/bupc-runtime-2.12.1

Replace GCC with your own compiler.
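For example, with the Intel compilers used earlier the configure line might look like this:

./configure CC="icc -DBUPC_IO_64" CXX="icpc -DBUPC_IO_64" --prefix=/opt/bupc-runtime-2.12.1-icc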

Sun compiler issues

If you run into an error (I had it in version 2.10.0):

“/home/fred/install/berkeley_upc-2.10.0/upcr_profile.c”, line 36: left operand must be modifiable lvalue: op “=”
cc: acomp failed for /home/fred/install/berkeley_upc-2.10.0/upcr_globfiles.c

Apply the sun_const_field.patch00 patch (find the link below). Additional info can be found in the Berkeley UPC Bugzilla, bug number 2696.

Another bug (not an error, but an annoying warning) shows up as numerous warnings throughout the compilation:

“/home/fred/install/berkeley_upc-2.10.0/upcr_atomic.h”, line 876: warning: result of paste undefined and not portable: 64_ (E_PASTE_RESULT_NOT_TOKEN)
“/home/fred/install/berkeley_upc-2.10.0/upcr_atomic.h”, line 876: warning: result of paste undefined and not portable: 64_cswap (E_PASTE_RESULT_NOT_TOKEN)

To get rid of them, apply the not_token.patch00 patch (find the link below). It's described in the same bug 2696.

Links to patches

Unfortunately, WordPress doesn't allow uploading .txt files for security reasons, and other formats, such as .doc or .pdf, break the line formatting. So I decided to give direct links where possible and to provide the contents of each patch as text converted to .jpg format, in case a direct link breaks in the future. The drawback is that you will have to retype it yourself or OCR it.


Jumbo Frames justified?

March 27, 2012

When it comes to VMware on NetApp, boosting performance by implementing Jumbo Frames is always taken into consideration. However, it's not clear whether it really has any significant impact on latency and throughput.

Officially, VMware doesn't support Jumbo Frames for NAS and iSCSI. This means that using Jumbo Frames to carry storage traffic from the VMkernel interface to your storage system is a solution that isn't tested by VMware; however, it actually works. To use Jumbo Frames you need to activate them along the whole communication path: the OS, the virtual NIC (change it from E1000 to Enhanced vmxnet), the virtual switch and VMkernel, the physical Ethernet switch, and the storage (see the command sketch at the end of this post). It's a lot of work and it's disruptive at some points, which is not a good idea for a production infrastructure. So I decided to take a look at benchmarks before spending a great amount of time and effort on it.

VMware and NetApp have a technical report, TR-3808-0110, called “VMware vSphere and ESX 3.5 Multiprotocol Performance Comparison Using FC, iSCSI, and NFS”. Section 2.2 clearly states that:

  • Using NFS with jumbo frames enabled using both Gigabit and 10GbE generated overall performance that was comparable to that observed using NFS without jumbo frames and required approximately 6% to 20% fewer ESX CPU resources compared to using NFS without jumbo frames, depending on the test configuration.
  • Using iSCSI with jumbo frames enabled using both Gigabit and 10GbE generated overall performance that was comparable to slightly lower than that observed using iSCSI without jumbo and required approximately 12% to 20% fewer ESX CPU resources compared to using iSCSI without jumbo frames depending on the test configuration.
Another important statement here is:
  • Due to the smaller request sizes used in the workloads, it was not expected that enabling jumbo frames would improve overall performance.

I believe that 4K and 8K request sizes are fair for a virtual infrastructure. Maybe if you move large amounts of data through your virtual machines it will make sense for you, but I feel it's not reasonable to implement Jumbo Frames for a virtual infrastructure in general.

Another report finding is that Jumbo Frames decrease CPU load, but if you use TOE NICs, then once again there is little point.

VMware supports jumbo frames with the following NICs: Intel (82546, 82571), Broadcom (5708, 5706, 5709), NetXen (NXB-10GXxR, NXB-10GCX4), and Neterion (Xframe, Xframe II, Xframe E). We use Broadcom NetXtreme II BCM5708 and Intel 82571EB, so Jumbo Frames support is not going to be a problem. Maybe I'll try to test it myself when I have some free time.
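If I do, the ESX-side part of the change would look roughly like the following sketch (the vSwitch name, port group name, IP address, and netmask are placeholders for your environment; the VMkernel NIC has to be removed and recreated with the new MTU, and the physical switch ports and storage interfaces need a matching 9000-byte MTU as well):

esxcfg-vswitch -m 9000 vSwitch1                                          # raise the MTU on the vSwitch
esxcfg-vmknic -d "IP Storage"                                            # remove the existing VMkernel NIC
esxcfg-vmknic -a -i 192.168.0.10 -n 255.255.255.0 -m 9000 "IP Storage"   # recreate it with MTU 9000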


Intel RAID 1

January 8, 2011

When I was overclocking my system I needed to update my BIOS to the latest version. After the update my system didn't boot, I guess because the BIOS configuration options saved in ROM weren't compatible with the new BIOS. I erased the configuration and the system booted just fine. But I had a RAID 1 configuration, and that setting was erased too. I entered the BIOS and changed the SATA mode from IDE back to RAID, then rebooted and entered the Intel Matrix Storage Manager Option ROM utility. In fact, the RAID data seemed to be fine: both hard drives had Member Disk(0) marks and the RAID status was Normal. Nonetheless, I couldn't boot my system; after OS selection I saw just a black screen.

The solution was simple. In the Option ROM utility I reset the disks to non-RAID (I have no idea how that differs from just deleting a RAID volume). Then I launched the Intel Matrix Storage Console in Windows and went to Actions -> Create RAID Volume from Existing Hard Drive. In the configuration wizard I just chose RAID 1, one drive as the source and the other as the destination, and voila! After an hour I had a working RAID 1 configuration without any data loss.