Authors:
- Javier Fernández Baldomero
- Mancia Anguita
What is MPITB?:

Octave users on a Linux cluster of several PCs can use MPITB to call
MPI library routines from within the Octave environment.

Parallel applications can be launched using mpirun, Octave variables
can be sent and received, and so on. A set of demos of increasing
difficulty is provided to ease the learning curve: Hello, Pi,
PingPong, Mandelbrot, NPB/EP, Spawn, Wavelets...

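As orientation, here is a minimal Hello-style sketch. The calling
conventions (the leading info return codes, the exact mpirun
invocation) are assumptions; the Hello demo sources are authoritative.

  % hello.m -- minimal MPITB sketch (assumed calling conventions;
  % see the Hello demo for the real version). Launched on every
  % node through mpirun, e.g.:  mpirun -np 4 octave -q hello.m
  MPI_Init;                                     % start up MPI
  [info rank] = MPI_Comm_rank(MPI_COMM_WORLD);  % who am I?
  [info nprc] = MPI_Comm_size(MPI_COMM_WORLD);  % how many of us?
  printf('Hello from rank %d of %d\n', rank, nprc);
  MPI_Finalize;                                 % shut down MPI
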
Although using mpirun is easier for most applications, in some
circumstances it can be convenient to spawn new processes from a
master Octave session using MPI_Comm_spawn(_multiple). Thanks to
Octave's eval command, commands for remote execution may be passed
as strings to the desired Octave computing instance, and the results
may be sent back to the Octave host instance (master session).
Several protocols facilitating these procedures are suggested in the
startups and utils subdirs.

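The eval-string idea can be sketched as follows. This is loose,
illustrative code, not the real protocol: actual MPITB receives need
correctly preallocated buffers, and the startups/utils files take
care of such details (buffer sizes, tags, termination).

  % Illustrative only -- assumed signatures; see utils for the real thing.
  TAG = 7;
  % --- master session ---
  cmd = 'r = svd(rand(300));';              % command for the slave
  MPI_Send(cmd, 1, TAG, MPI_COMM_WORLD);    % ship the string to rank 1
  r = zeros(300, 1);                        % preallocate result buffer
  [info stat] = MPI_Recv(r, 1, TAG, MPI_COMM_WORLD);
  % --- slave session (one service iteration; the real loop repeats) ---
  cmd = blanks(128);                        % preallocated command buffer
  [info stat] = MPI_Recv(cmd, 0, TAG, MPI_COMM_WORLD);
  eval(cmd);                                % execute the received string
  MPI_Send(r, 0, TAG, MPI_COMM_WORLD);      % return the result variable
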
Some citations to MPITB (papers, conferences, labs, etc.) have been
collected here. Last updated Nov'08.

(17/Jul/07) MPITB precompiled for Octave-2.9.12 and LAM-7.1.3 or
Open-MPI-1.2.3. The Spawn demos are not yet completely supported
under OMPI, but all mpirun demos work.

(20/May/08) New PelicanHPC release, including MPITB precompiled with
Octave-3.0.1 and OMPI-1.2.6.

(12/Jun/08) MPITB ported to MPICH on a BlueGene/L supercomputer
called Hebb (5.7 TF, 13.7 TB, Top500 #379 in 2007) at KTH PDC,
Sweden.

(Oct/08) MPITB ported to BlueGene/P as well. The document mentions a
13.93 TF peak and 11.11 TF Linpack, so it must be Schrödinger
(Top500 #307 in 2008).

(3/Apr/09) MPITB tutorial file (and Pi demo) ported to NSP. See
Toolboxes. Read post #6 by jpc in this thread (01/Oct/08).

(17/Apr/09) MPITB in use at Lisa64 (11.5 TF, 25 TB, Top500 #140-#435
in 2005/06) at SARA, the Netherlands.

Last users search & update: Apr'09 (lisa64).

Features:

Source code included.

Supports all Octave types, from cell up to struct (type "typeinfo"
at the Octave prompt), including all Octave ND arrays and integer
types. The reduce operators MPI_BAND / MPI_BOR / MPI_BXOR are also
supported on the integer types.

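As a rough illustration of the type support (signatures assumed, not
the verbatim demo code):

  % Illustrative only -- assumed MPITB signatures.
  s.name = 'job 1'; s.n = int32(1000);   % structs travel fine,
  MPI_Send(s, 1, 7, MPI_COMM_WORLD);     % as do cells and ND arrays
  mask = int32(2^rank);                  % one bit per rank (rank from
                                         % MPI_Comm_rank, as above)
  got = int32(0);                        % preallocated reduce result
  MPI_Reduce(mask, got, MPI_BOR, 0, MPI_COMM_WORLD);  % bitwise-OR reduce
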
Support for the new (2.9.x) sparse types.

Can be recompiled for x86, x86_64 and ia64 architectures. The main
include file mpitb.h detects the architecture-specific GCC symbol
and changes its #defines accordingly.

Supports all MPI-1.2 calls except MPI_Pcontrol, MPI_Op_create,
MPI_Op_free, and those related to derived datatypes (MPI_Type_*).
These are meaningless under Octave, which has no corresponding
functionality.

Also includes some MPI-2.0 calls: the Info object (MPI_Info_*) and
Spawn support (MPI_Comm_spawn[_multiple], MPI_Comm_get_parent,
MPI_Open/Close_port, MPI_Comm_accept/connect/disconnect,
MPI_[Un]publish_name, MPI_Lookup_name).

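A hypothetical Info-and-Spawn fragment (output-argument order and
exact argument lists are assumptions; check the Spawn demo sources):

  % Sketch only -- assumed MPITB conventions.
  [stat inf] = MPI_Info_create;                        % fresh Info object
  stat = MPI_Info_set(inf, 'wdir', '~/octave/mpitb');  % hint for spawn
  [stat kids] = MPI_Comm_spawn('octave', {'-q'}, 3, inf, 0, MPI_COMM_SELF);
  % kids: intercommunicator to the 3 spawned Octave sessions
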
Demo subdirs: Hello, Pi, PingPong, Mandelbrot, NPB/EP, Spawn,
Wavelets. See the README included in each one.

Help docstrings for all interfaced MPI calls, extracted from the
LAM/MPI man pages, which in turn were initially based on the MPICH
man pages (thanks to all).

Utility subdir, with protocol commands (such as "LAM_Init",
"Octave_Spawn", "NumCmds_Send") and instrumentation commands (such
as "time_stamp" and "look_instr"), which can be useful if you need
MPI_Comm_spawn or want to assess where time is spent in your
sequential application. See the examples in the Spawn and Wavelets
subdirs for hints on their possible uses, and the sketch below.

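A hypothetical instrumentation session (the argument conventions for
time_stamp/look_instr are assumptions; the utils docstrings are
authoritative):

  % Hypothetical use of the instrumentation commands.
  time_stamp('before transform');   % stamp a point in time
  y = fft(x);                       % the section being measured
  time_stamp('after transform');    % stamp the end
  look_instr                        % inspect the collected stamps
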
Requirements:

MPITB needs a working LAM or Open-MPI installation, dynamically
linked (shared libraries), the same version on all nodes. This
requirement is frequently overlooked. See the long descriptions
under "Requirements" and "Installing".

MPITB also needs a working Octave installation (with DLD support) on
each node on which you want to spawn an Octave process, again the
same version on all nodes. MPITB should work out of the box if you
use the same Octave version in your cluster as the tarball was built
for, and similar enough LAM or OMPI configure switches. See the long
descriptions under "Requirements" and "Installing".

Please check the MD5 sum before installing.

Tested under several Intel/AMD Linux compiling environments, such as:
- i686 Pentium II 333MHz, RedHat 8.0/9.0, glibc-2.3.2 (gcc 3.2.2/7),
  Octave 2.1.50/57/60, LAM 7.0.2/4
- i686 Athlon MP 2400+ (biproc), Fedora Core 6, glibc-2.5
  (gcc 4.1.2), Octave 3.0.0, LAM 7.1.4, OMPI 1.2.6
- i686 Xeon 2.66GHz (tetraproc), Red Hat Enterprise Linux ES
  release 4, glibc-2.3.4 (gcc 3.4.3), Octave 2.1.73, LAM 7.1.2
- ia64 Itanium II 1GHz (biproc), HP-XC 2.0 LHPC 3, glibc-2.3.2
  (gcc-3.2.3), Octave-2.1.72, LAM-7.1.1
- x86-64 Pentium D 3.2GHz (tetraproc), Fedora Core 5, gcc-4.1.0,
  Octave-2.1.73, LAM-7.1.2

Downloads:

17/Jul/07: precompiled for Octave-2.9.12 and both LAM-7.1.3 and
Open-MPI-1.2.3: mpitb-beta-FC6-OCT2912-LAM713-OMPI123.tar.bz2
(5062419 bytes) (sum file oct2912lam713ompi123sum). PDFs comparing
performance are in the Pi and PingPong subdirs... It seems a good
idea to gradually move towards OMPI, not only for performance but
also for maintainability. (May/08) The beta works with Octave-3.0.1.

Older versions are indexed under "Downloads".

Installing:

Make sure you have a shared-libraries LAM or OMPI installation
available on all nodes in your cluster.

Make sure your octave executable has been compiled with DLD support
and is the same version on all cluster nodes.

Make sure the octave binary you choose is in your search path on
each node, as well as the desired LAM/OMPI binaries. Also
double-check that the corresponding LAM/OMPI libraries (same
version / configure switches as the chosen LAM/OMPI binaries) are
included in your LD_LIBRARY_PATH. This is frequently overlooked; a
quick check from within Octave is sketched below. See the long
description under "Installing".

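For instance (getenv and system are standard Octave; the directories
listed will be whatever your installation uses):

  % Quick environment eyeball from the octave prompt.
  getenv('LD_LIBRARY_PATH')       % should include your LAM/OMPI lib dir
  system('which octave mpirun');  % same versions on every node?
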
A shared (among nodes, cluster-wide) HOME is strongly advised, as is
installing all the required software there; that way you won't
depend on anybody else. If not, you may need to play with your
configuration and/or ask your sysadmin until the above requirements
(or equivalent ones for non-ssh clusters) are met.

- Unzip & untar the toolbox. The recommended location is under
  ~/octave, so that a new directory ~/octave/mpitb appears.
- Enter the new mpitb subdir, read/edit the lam-bhost.def file to
  describe your cluster, and lamboot your LAM (not applicable if
  using OMPI).
- Run octave there and try some MPITB command such as
  MPI_Initialized or MPI_Init (see the sketch after this list). It
  might work out of the box.
- If not, your Octave version or LAM/OMPI configure switches are
  probably too different and MPITB needs a remake: enter the src
  subdir, make sure the commands "octave-config", "mkoctfile",
  "(lam/ompi_)info" and "mpiCC" all work and correspond to your
  desired Octave / LAM/OMPI --enable-shared versions, and type
  "make".
- Read the README file for more detailed instructions. Read the
  Makefile to understand why those 4 commands are required.
- Work through the demos to learn to use the toolbox. The
  recommended order is Hello, Pi, PingPong, Mandelbrot, NPB/EP.
- Remember to halt LAM when you leave (also not much of a problem if
  you leave the daemons there).

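The quick try of the third step could look like this (the output
conventions are assumptions; 0 is MPI_SUCCESS):

  % Quick sanity check at the octave prompt.
  [info flag] = MPI_Initialized   % flag = 0: MPI not started yet
  info = MPI_Init                 % info = 0 (MPI_SUCCESS) if all is well
  [info flag] = MPI_Initialized   % flag = 1 now
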
Community:
- This section was started to keep track of Michael Creel's work
  with MPITB. Initially there were just a couple of acknowledgements
  in the "Features" section.
- There is also a Parallel-Knoppix papers page, collecting just
  those papers that mention MPITB.

(Apr/2009) Willem Vermin has installed MPITB on lisa64, a Top500
11.5 TF cluster at SARA, the Netherlands.

(Mar/2009) Martin Cech, a PelicanHPC user, has posted these timing
results in Nabble's PelicanHPC forum.

(Oct/2008) Honoré Tapamo is using MPITB on a Top500 BlueGene/P
platform at KTH PDC, Sweden. See this thread (Jul/08) and this
report (circa Sep-Oct/2008). See the "MPITB Papers" page for more
details.

(Jun/2008) Nils Smeds et al. have ported MPITB to hebb, a Top500
BlueGene/L supercomputer.

(May/2008) Michael has released the new PelicanHPC-1.5.1, which
includes MPITB precompiled for Octave-3.0.1 and OpenMPI-1.2.6.

(May/2008) Michael was interviewed for a supercomputing review
called TeraFlop (in Catalan). Read Michael's section here.

(Jan/2008) Gentur is popularizing Octave/MPITB code on his cluster;
see the last 2 messages in this forum.

(Jan/2008) MPITB live at Monash. Find it here, on page 3/7 (linked
from the Monash Sun Grid web). Credited to Graham Jenkins (see
support staff).

(Jan/2008) Gianvito talks about MPITB on his cluster in the
OctaveWiki, here.

(2/Jan/2008) PelicanHPC is the sequel to P-KPX after the v2.8
release. Didn't notice the new Nabble forum name.

(9/Nov/07) Gentur Widyaputra has successfully installed MPITB on his
NPACI-Rocks cluster (see thread).

(26/Sept/07) New ParallelKnoppix forum at Nabble.

(13/Mar/2007) Michael ran 4E6 probits last night!

(16/Nov/06) MPITB considered for inclusion in VL-E POC Release 2.

(19/Jun/06) Michael has made a "screencast" of Octave running in
parallel (announcement). The video is physical proof of the
advertised "5-minute setup": setting up, booting and running the
mle_example took Michael 3:45. We normal users can expect to spend
some 5 minutes :-)

(01/Jun/06) Meeting at ICCS'06 with Breanndán. He patched an MPITB
error (see MPI_Unpack.cc around lines 360-365). He has also
coauthored 3 MPITB papers with people working in Hydrology,
Meteorology and Geosciences.

(Feb/06) MPITB for Octave considered for inclusion in the VL-E POC
Environment; see the proposal here. Learn more about VL-E here and
here. Breanndán Ó Nualláin is responsible for integrating MPITB and
(David Abramson's) Nimrod into VL-E (see attributions in Table 1
here and this talk).

(Feb/06) Gianvito Quarta offered his 64-biproc-node Itanium II
cluster to adapt MPITB to IA-64. This is his initial e-mail and this
is the summary post.

(2004/05) Dirk Eddelbuettel has added an MPITB item to Quantian's
to-do list.

(19/Oct/05) Michael has solved the xterm-window problems, and the
Parallel-Knoppix distribution now has an MPITB-debug mode, very
useful for debugging parallel MATLAB applications. See the snapshot
here.

(11/Feb/05) Thomas Weber has recompiled MPITB against MPICH-1.2.5.3
and LAM-7.1.1 on the Debian distro. The Makefile will be included in
the next MPITB tarball. Michael sent this announcement to
help@octave.org.

(11/Jan/05) The Octave wiki includes a mention of MPITB in
CategoryCode > CategoryParallel.

(10/Jan/05) Michael has finished the paper on econometrics using
MPITB, "User-Friendly Parallel Computations with Econometric
Examples"; see the announcement and a preliminary version here. The
complete version is here. Also archived at EconPapers / IDEAS.
(May/05) Accepted in Computational Economics, PDF / DOI.

(25/Nov/2004) Michael has finished the example; this is the
announcement. The ParallelKnoppix live-CD now includes a working
example that runs a Monte Carlo study in parallel. Michael also
announced it here on the Knoppix.net site. (12/Jan/2005) This is the
most up-to-date version of the examples.

Michael has built a remastering of the Knoppix live-CD called
ParallelKnoppix (see this / this and this and this other
announcement). He has also compiled against Octave 2.1.58 /
LAM 7.0.6 (see the announcement here) and is planning to prepare
some examples. There is also a tutorial on ParallelKnoppix (here,
alt) tailored with an MPITB example.

Wholehearted thanks also to Michael Creel for his interest and
patience. He has also contributed a Makefile for Debian, included in
the src subdir.

Wholehearted thanks to Christoph L. Spiel for writing the "Da Coda
Al Fine" manual (link broken, cached at Google). And of course to
John W. Eaton as well, for making Octave. And to Paul Kienzle for
his advice about licensing nomenclature, etc.