Using SHORE with MPI
SHORE supports parallel execution in distributed-memory environments through the MPI (Message Passing Interface) standard. MPI parallelization is currently only supported for the alignment module (mapflowcell).
Prerequisites
First, follow the installation instructions for building an MPI-enabled SHORE binary.
The following instructions assume you are using SHORE together with MPICH2 version 1.3 or later, and that you have unrestricted ssh access to all machines on which SHORE should be run. Otherwise, contact your local system administrator for instructions, e.g. on scheduling MPI-enabled applications through your local grid engine.
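To verify these prerequisites, you can check the installed MPICH2 version and confirm that ssh works without a password prompt from the machine you will start the run on. A minimal check, assuming your MPICH2 installation provides the mpich2version utility and using the hostnames upa and oka from the example below:

mpich2version                # prints the installed MPICH2 version; should report 1.3 or later
ssh upa hostname             # must return immediately, without asking for a password
ssh oka hostname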
For MPICH2 versions prior to 1.3, the mpiexec command must be replaced with mpiexec.hydra in the example below.
Usage Example
The following command will start SHORE mapflowcell using a total of ten cores (4 on the host upa and 6 on the host oka). All output will be logged to the file MPI.out.
mpiexec -n 10 -hosts upa:4,oka:6 -errfile-pattern MPI.out shore-mpi -T <GLOBAL_TEMPDIR> mapflowcell -n8% --rplot -i <INDEX_PATH> -f <PATHS>
- <GLOBAL_TEMPDIR> is a temporary directory that must be accessible to all hosts and provide sufficient free space
- <INDEX_PATH> and <PATHS> must be accessible to all hosts
- mpiexec cannot be sent to the shell background using "&"; running it inside a screen session is recommended
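For example, one way to keep the run alive after logging out is to start it inside a GNU screen session (the session name shore_mpi below is an arbitrary choice):

screen -S shore_mpi          # open a new screen session named shore_mpi
mpiexec -n 10 -hosts upa:4,oka:6 -errfile-pattern MPI.out shore-mpi -T <GLOBAL_TEMPDIR> mapflowcell -n8% --rplot -i <INDEX_PATH> -f <PATHS>
# detach with Ctrl-a d; reattach later with: screen -r shore_mpi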
For a description of all other options, see the mapflowcell help page.
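As noted under Prerequisites, with MPICH2 releases older than 1.3 the same run is launched through mpiexec.hydra instead of mpiexec; a sketch of the otherwise unchanged command:

mpiexec.hydra -n 10 -hosts upa:4,oka:6 -errfile-pattern MPI.out shore-mpi -T <GLOBAL_TEMPDIR> mapflowcell -n8% --rplot -i <INDEX_PATH> -f <PATHS>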