Using PMI and h5py

Writing data in a parallel environment is not an easy task. Using available Python packages, I present a solution to write an HDF5 file in parallel, with the help of the Parallel Method Invocation.

The motivation for this test is to write Molecular Dynamics data from ESPResSo++ to an H5MD file, in parallel. Non-parallel I/O is a bottleneck for large system sizes in molecular simulations. This short test only reviews the practical combination of PMI and h5py, and their use with independent NumPy arrays on each worker.

PMI, HDF5, h5py and mpi4py

The Parallel Method Invocation (PMI), found here or here, is a Python abstraction layer for the Message Passing Interface (MPI) parallel programming model. With a properly designed module, the user is left with a simple, serial-looking Python script in which the parallel computing paradigm is completely hidden by PMI. The module developer, of course, still has to devise the proper parallel operations. PMI is used in the ESPResSo++ Molecular Dynamics program, which is why I am interested in using it.

HDF5 is a structured binary file format for scientific data, and the corresponding library has a parallel API. h5py is a convenient Python wrapper for HDF5 that understands NumPy arrays and interacts well with mpi4py, the MPI programming module for Python.
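As a minimal, stand-alone illustration of the h5py and mpi4py combination (no PMI involved yet), the sketch below has every MPI rank write its own block of a shared dataset. The file and dataset names are arbitrary, and a parallel build of HDF5 and h5py is assumed.

from mpi4py import MPI
import numpy as np
import h5py

comm = MPI.COMM_WORLD
n = 16
data = np.full(n, comm.rank, dtype=np.float64)   # per-rank payload

# all ranks open the same file through the MPI-IO driver
f = h5py.File('example.h5', 'w', driver='mpio', comm=comm)
dset = f.create_dataset('data', shape=(comm.size*n,), dtype=data.dtype)
dset[comm.rank*n:(comm.rank+1)*n] = data         # each rank writes its own slice
f.close()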

Combining it all

To illustrate the use of all the above-mentioned libraries, I have set up a test project, https://github.com/pdebuyl/pmi-h5py. It contains a "worker" class, MyTestLocal, and a pmi.Proxy class, MyTest. All the code in MyTestLocal is executed on each MPI worker, and all the functionality is exposed through MyTest.
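Schematically, such a worker/proxy pair can be organized as in the following sketch. This is not the repository code: it assumes the pmi.Proxy metaclass and pmiproxydefs convention used by ESPResSo++'s PMI, omits the PMI setup steps, and leaves fill as a stub to be completed with the second excerpt quoted below.

import numpy as np
import h5py
from mpi4py import MPI
import pmi

class MyTestLocal(object):
    """Executed on every MPI worker."""
    def __init__(self, filename, n):
        self.filename = filename
        self.n = n
        self.data = np.zeros(n)
        # every worker opens the same file (the first excerpt below)
        self.f = h5py.File(self.filename, 'w', driver='mpio',
                           comm=MPI.COMM_WORLD)

    def fill(self):
        # dataset creation and the per-worker write (the second excerpt below) go here
        pass

    def close(self):
        self.f.close()

class MyTest(object):
    """Serial-looking front end: PMI forwards the calls to all workers."""
    __metaclass__ = pmi.Proxy
    pmiproxydefs = dict(cls='test_pmi_mod.MyTestLocal',
                        pmicall=['fill', 'close'])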

There are two interesting parts to the code:

self.f = h5py.File(self.filename, 'w', driver='mpio', comm=MPI.COMM_WORLD)

where an HDF5 file is created in parallel, and

self.f.create_dataset('ds1',
    dtype=self.data.dtype,
    shape=(MPI.COMM_WORLD.size*self.n,),
    chunks=(self.n,))
self.f['ds1'][idx_0:idx_1] = self.data

where data from local NumPy arrays is written to the file, in parallel.

As the chunk size corresponds to the local data size, each worker's write should normally map onto a single operation on its own chunk. Pointers on how to check the performance are welcome.
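For reference, idx_0 and idx_1 are the bounds of the worker's slice in the global dataset; for equal-sized blocks they follow directly from the rank (a sketch, not the repository code):

rank = MPI.COMM_WORLD.rank
idx_0 = rank * self.n         # first global index owned by this worker
idx_1 = idx_0 + self.n        # one past the last owned index

If the installed h5py version exposes the collective-transfer context manager (with dset.collective:), it could serve as a starting point for comparing independent and collective writes.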

The module can be used in a straightforward manner in a regular Python program, which must nonetheless be run within an MPI environment (with mpirun, for instance).

import test_pmi_mod

# open the file in parallel; 1024 is the per-worker array size
mytest = test_pmi_mod.MyTest('myllfile.h5', 1024)

# write every worker's data, then close the file
mytest.fill()
mytest.close()
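Assuming this script is saved as, say, run_test.py (a name chosen here for illustration), a four-process run would be launched as

mpirun -n 4 python run_test.py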

There remain details to sort out for molecular simulations, for instance the variable number of data points per MPI worker, but this is mostly bookkeeping; the technical challenges have already been overcome in the libraries used here.
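As a sketch of that bookkeeping (not code from the test project), the slice bounds for unequal block sizes can be obtained from the per-worker counts with mpi4py, for instance via allgather; the variable names are illustrative:

from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
local_data = np.zeros(comm.rank + 1)       # illustrative: sizes differ per worker

counts = comm.allgather(len(local_data))   # per-worker counts, known on every rank
idx_0 = sum(counts[:comm.rank])            # start of this worker's slice
idx_1 = idx_0 + len(local_data)            # end of this worker's slice
n_total = sum(counts)                      # global size, needed by create_dataset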
