Using SHORE with MPI
SHORE supports distributed-memory parallel environments through the MPI (Message Passing Interface) standard. MPI parallelization is currently available only for the alignment module (mapflowcell).
Prerequisites
First, follow the installation instructions for building an MPI-enabled SHORE binary.
The following instructions assume that you are using SHORE together with MPICH2 version 1.3 or later, and that you have unrestricted ssh access to all machines on which SHORE is to run. If this is not the case, contact your local system administrator, e.g. for instructions on scheduling MPI-enabled applications through your local grid engine.
For MPICH2 versions prior to 1.3, the mpiexec command must be replaced with mpiexec.hydra in the example below.
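Before launching SHORE, it may be worth verifying that passwordless ssh and the MPI launcher work across all machines involved. The commands below are a minimal sanity check, assuming the host names upa and oka used in the example in the next section:

ssh upa hostname
ssh oka hostname
mpiexec -n 2 -hosts upa,oka hostname

If the last command prints the names of both hosts, mpiexec is able to start processes on all machines.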
Usage Example
The following command starts SHORE mapflowcell on a total of ten cores (four on the host upa and six on the host oka). All diagnostic output is collected in the file MPI.out via the -errfile-pattern option.
mpiexec -n 10 -hosts upa:4,oka:6 -errfile-pattern MPI.out shore-mpi -T <GLOBAL_TEMPDIR> mapflowcell -n8% --rplot -i <INDEX_PATH> -f <PATHS>
- <GLOBAL_TEMPDIR> is a temporary directory that must be accessible to all hosts and provide sufficient free space
- <INDEX_PATH> and <PATHS> must also be accessible to all hosts
- mpiexec cannot be sent to the shell background using "&"; running the command inside a screen session is recommended instead (see the example below)
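A minimal screen-based invocation could look as follows; the session name shore_mpi is arbitrary:

screen -S shore_mpi
mpiexec -n 10 -hosts upa:4,oka:6 -errfile-pattern MPI.out shore-mpi -T <GLOBAL_TEMPDIR> mapflowcell -n8% --rplot -i <INDEX_PATH> -f <PATHS>

Detach from the running session with Ctrl-a d and reattach later with screen -r shore_mpi to check on the job.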
For a description of all other options, see the mapflowcell help page.
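Finally, as an alternative to listing hosts directly on the command line, MPICH's Hydra launcher also accepts a host file via its -f option. A sketch, where the file name machines.txt is only an example; it contains one host per line with the number of cores to use:

upa:4
oka:6

mpiexec -n 10 -f machines.txt -errfile-pattern MPI.out shore-mpi -T <GLOBAL_TEMPDIR> mapflowcell -n8% --rplot -i <INDEX_PATH> -f <PATHS>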