Using SHORE with MPI

SHORE supports parallel distributed memory environments through the MPI (''Message Passing Interface'') standard. MPI parallelization is currently only supported for the alignment module (''[[shore mapflowcell]]'').

==Prerequisites==

First, follow the installation instructions for building an MPI-enabled SHORE binary.

The following instructions assume you are using SHORE together with [http://www.mcs.anl.gov/research/projects/mpich2 MPICH2] version 1.3 or later, and that you have unrestricted ''ssh'' access to all machines SHORE should be run on. Otherwise, contact your local system administrator for instructions, e.g. on scheduling MPI-enabled applications with your local grid engine.
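
Both prerequisites are easy to verify before launching a large run. The following is only a quick sanity check, reusing the host names ''upa'' and ''oka'' from the example further down; substitute the machines you actually plan to use:

  # the launcher must be on the PATH of the machine the job is started from
  which mpiexec
  # password-less ssh: each command should print the remote host name without prompting
  ssh upa hostname
  ssh oka hostname
  # a trivial MPI launch across both hosts; it should print one host name per started process
  mpiexec -n 2 -hosts upa,oka hostname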
 
For MPICH2 versions prior to 1.3, the ''mpiexec'' command must be replaced with ''mpiexec.hydra'' in the example below.

==Usage Example==
The following command will start SHORE ''mapflowcell'' using a total of ten cores (4 on the host ''upa'' and 6 on the host ''oka''). All output will be logged to the file ''MPI.out''.

  mpiexec -n 10 -hosts upa:4,oka:6 -errfile-pattern MPI.out shore-mpi -T <GLOBAL_TEMPDIR> mapflowcell -n8% --rplot -i <INDEX_PATH> -f <PATHS>
* ''<GLOBAL_TEMPDIR>'' is a temporary directory that must be accessible to all hosts and have [[Downloading_and_Installing_SHORE#TMP_directory|sufficient space]]
* ''<INDEX_PATH>'' and ''<PATHS>'' must be accessible to all hosts
* ''mpiexec'' cannot be sent to the shell background using "''&''"; using ''screen'' is recommended (see the example below)
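
Since ''mpiexec'' has to stay in the foreground, a convenient way to keep a long alignment running after you log out is to start it inside a ''screen'' session. The session name ''shore_mpi'' below is just an arbitrary example:

  # open a named screen session and run the alignment inside it
  screen -S shore_mpi
  mpiexec -n 10 -hosts upa:4,oka:6 -errfile-pattern MPI.out shore-mpi -T <GLOBAL_TEMPDIR> mapflowcell -n8% --rplot -i <INDEX_PATH> -f <PATHS>
  # detach with Ctrl-a d; reattach later to check progress
  screen -r shore_mpi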
  
For a description of all other options see the ''mapflowcell'' help page.
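
If the host list grows, MPICH2's Hydra launcher can also read it from a host file given with its ''-f'' option instead of ''-hosts'' (this ''-f'' belongs to ''mpiexec'' and is unrelated to the ''-f <PATHS>'' option of ''mapflowcell''). A sketch, assuming a file named ''hosts.txt'' that lists one host per line with the number of cores after a colon:

  # hosts.txt contains the lines "upa:4" and "oka:6"
  mpiexec -n 10 -f hosts.txt -errfile-pattern MPI.out shore-mpi -T <GLOBAL_TEMPDIR> mapflowcell -n8% --rplot -i <INDEX_PATH> -f <PATHS>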
