Ruprecht-Karls-Universität Heidelberg

Optimized OpenMPI Collective Support for the EXTOLL Network

Project Report by Felix Zahn


This project is situated in the context of the EXTOLL research project of the Computer Architecture Group at Heidelberg University. EXTOLL is a network designed for high-performance computing (HPC). It allows clusters to be built on the "commodity off-the-shelf" principle: ordinary server nodes equipped with add-in cards are statically connected to form a direct network, so no additional hubs or switches are necessary.

To make the network very fast, several functional units, such as a barrier and a multicast, are implemented in hardware. To use these special features, the software must know how to interact with the hardware: a barrier call in a parallel programming library, for example, should use the hardware barrier instead of creating a software-based one.
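To make the contrast concrete, the following sketch shows what a purely software-based barrier looks like, here built on POSIX `pthread_barrier_wait`. This is a hypothetical illustration of the synchronization pattern a hardware barrier replaces, not the EXTOLL or Open MPI implementation: every participant blocks at the barrier until all have arrived, and only then does each one see the others' results.

```c
/* Software barrier sketch (hypothetical, for illustration only):
 * NTHREADS workers each do a "phase 1" write, synchronize at the
 * barrier, and then read everyone else's phase-1 result. */
#define _POSIX_C_SOURCE 200809L
#include <pthread.h>

#define NTHREADS 4

static pthread_barrier_t barrier;
static int before[NTHREADS];     /* phase-1 results, one slot per thread */
static int after_sum[NTHREADS];  /* what each thread observed after the barrier */

static void *worker(void *arg) {
    int id = *(int *)arg;
    before[id] = 1;                  /* phase-1 work */
    pthread_barrier_wait(&barrier);  /* block until all NTHREADS arrive */
    /* after the barrier, every phase-1 write is visible to every thread */
    int sum = 0;
    for (int i = 0; i < NTHREADS; i++)
        sum += before[i];
    after_sum[id] = sum;
    return NULL;
}

/* Returns 1 if every thread observed all NTHREADS phase-1 writes. */
int run_barrier_demo(void) {
    pthread_t t[NTHREADS];
    int ids[NTHREADS];
    pthread_barrier_init(&barrier, NULL, NTHREADS);
    for (int i = 0; i < NTHREADS; i++) {
        ids[i] = i;
        pthread_create(&t[i], NULL, worker, &ids[i]);
    }
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(t[i], NULL);
    pthread_barrier_destroy(&barrier);
    for (int i = 0; i < NTHREADS; i++)
        if (after_sum[i] != NTHREADS)
            return 0;
    return 1;
}
```

A hardware barrier performs the same rendezvous inside the network, so the software only triggers it and waits, avoiding the shared-memory bookkeeping shown above.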

A commonly used MPI implementation is Open MPI, a free implementation of MPI (Message Passing Interface) that is fundamentally built around component concepts and was first released in 2004. MPI is a message-passing parallel programming model in which data is moved from the address space of one process to that of another through cooperative operations in both processes. It also provides features such as collective operations, remote-memory access operations, and parallel I/O.
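The "cooperative operations in both processes" idea can be sketched without MPI itself: in the hypothetical example below, a parent and a child process (separate address spaces after `fork`) exchange a message through POSIX pipes, where one side's write is the "send" and the other side's read is the matching "receive". The function name and message text are illustrative only.

```c
/* Message-passing sketch (hypothetical): data leaves one process's
 * address space via write() and enters another's via read(). */
#define _POSIX_C_SOURCE 200809L
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

/* Parent sends a message to the child; the child echoes it back.
 * Returns 1 if the round trip succeeded, 0 otherwise. */
int run_message_demo(char *out, size_t outlen) {
    int to_child[2], to_parent[2];
    if (pipe(to_child) != 0 || pipe(to_parent) != 0)
        return 0;
    pid_t pid = fork();
    if (pid == 0) {                                      /* child process */
        char buf[64];
        ssize_t n = read(to_child[0], buf, sizeof buf);  /* "receive" */
        write(to_parent[1], buf, (size_t)n);             /* "send" back */
        _exit(0);
    }
    const char *msg = "hello from rank 0";
    write(to_child[1], msg, strlen(msg) + 1);            /* "send" */
    read(to_parent[0], out, outlen);                     /* "receive" */
    waitpid(pid, NULL, 0);
    return strcmp(out, msg) == 0;
}
```

MPI generalizes this pattern across nodes: the matched send/receive pair is the same cooperative handshake, but the transport is the interconnect rather than a local pipe.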

The problem addressed in this practical course was the configuration of Open MPI's collective libraries and the communication between Open MPI and the EXTOLL back-end. This functionality was implemented in C/C++ and XML.
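As an illustration of what such an XML-based configuration might express, the fragment below maps collective operations to hardware back-ends with a software fallback. All element and attribute names here are invented for illustration; the actual EXTOLL/Open MPI configuration schema is not shown in this report.

```xml
<!-- Hypothetical configuration sketch; names are illustrative,
     not the real EXTOLL/Open MPI schema. -->
<collectives>
  <operation name="barrier"   backend="extoll_hw_barrier"   fallback="software"/>
  <operation name="broadcast" backend="extoll_hw_multicast" fallback="software"/>
</collectives>
```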


