Baseenv
This collection of code contains programs and tools that I find useful both in characterizing a high-performance computer and in developing codes for parallel computers. Many, but not all, of these codes assume a working MPI environment.

Programs to characterize the performance of a node
- nodeperf
- Contains some simple tests of single-core performance, as well as a stencil performance code that includes cache memory optimizations and runs on 1 to all cores on a single node (see the blocked-stencil sketch after this list).
- TSVC
- Extensive tests of the vectorization capabilities of compilers, including scripts to simplify the comparison of compilers on the same platform and to produce summary information (see the loop examples after this list). A version of this was used for "A Comparison of Vectorizing Compilers", ...
- gpuvec
- Under development, this code builds on the vectorization tests in TSVC, as well as other operations, to explore the performance of GPU programming environments such as OpenACC.
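The flavor of cache optimization that the nodeperf stencil code explores can be illustrated by tiling the sweep so that each block of the grid stays resident in cache. This is a minimal sketch under assumed parameters (the grid size N and block size BS are illustrative), not the code in nodeperf:

#include <stddef.h>

#define N  4096   /* grid dimension; illustrative */
#define BS 64     /* cache block size; illustrative */

/* One blocked sweep of a 5-point stencil over a row-major N x N grid */
void sweep(const double *restrict a, double *restrict b)
{
    for (size_t ii = 1; ii < N - 1; ii += BS) {
        for (size_t jj = 1; jj < N - 1; jj += BS) {
            size_t imax = ii + BS < N - 1 ? ii + BS : N - 1;
            size_t jmax = jj + BS < N - 1 ? jj + BS : N - 1;
            /* Work on a BS x BS tile so the rows the stencil touches
               stay in cache across the inner loops */
            for (size_t i = ii; i < imax; i++)
                for (size_t j = jj; j < jmax; j++)
                    b[i*N + j] = 0.25 * (a[(i-1)*N + j] + a[(i+1)*N + j] +
                                         a[i*N + (j-1)] + a[i*N + (j+1)]);
        }
    }
}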
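For a sense of what a vectorization test suite measures, here are two loops in the spirit of the TSVC tests (illustrative, not taken from the suite): the first has no loop-carried dependence and should vectorize; the second carries a dependence on a[i-1] that defeats straightforward vectorization.

#define LEN 32000
float a[LEN], b[LEN];

void s_simple(void)
{
    /* No loop-carried dependence: a vectorizing compiler should use SIMD */
    for (int i = 0; i < LEN; i++)
        a[i] = b[i] + 1.0f;
}

void s_recurrence(void)
{
    /* Each iteration reads the previous iteration's result */
    for (int i = 1; i < LEN; i++)
        a[i] = a[i-1] + b[i];
}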
Routines to aid in developing parallel programs
These directories contain routines that can be helpful in developing parallel applications.
- io
- faststream contains routines for reading and processing, in parallel, a stream of text consisting of variable-length lines. orderwrite provides routines for a parallel program to write data in rank order.
- mputil
- mputil contains routines and macros to make it easy to run tests on 1 to n cores on a node with n cores. The use of macros allows the 1 core version to run without MPI. One MPI process per core is used; this ensures memory locality and process separation.
- nodecart
- This directory contains a node-aware implementation of MPI_Cart_create, as well as routines that use the MPI profiling interface to let an application use this version of MPI_Cart_create without changing the application program (see the interposition sketch after this list).
- seq
- Provides efficient routines that implement a "sequential section" within an MPI program (see the sketch after this list). First described in the book Using MPI.
- topo
- Contains routines to get information about the network topology for a few networks, especially IBM Blue Gene and Cray XE6 and XK7. Also has routines to determine which processes are on the same node (similar code is used in nodecart; see the sketch after this list).
- util
- Currently contains a few routines to convert strings into arrays of values, typically used to set message sizes in benchmarks. These support both arithmetic and multiplicative strides (see the sketch after this list).
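The interposition trick nodecart relies on works because every MPI routine also has a PMPI_-prefixed entry point, so a library can define MPI_Cart_create itself and still reach the MPI library's implementation. A minimal sketch of the mechanism follows; nodecart_create is a hypothetical name standing in for the node-aware routine, not the actual nodecart interface:

#include <mpi.h>

/* Hypothetical node-aware implementation provided by the library */
int nodecart_create(MPI_Comm comm, int ndims, const int dims[],
                    const int periods[], int reorder, MPI_Comm *comm_cart);

/* Intercept the application's call; the application itself is unchanged */
int MPI_Cart_create(MPI_Comm comm, int ndims, const int dims[],
                    const int periods[], int reorder, MPI_Comm *comm_cart)
{
    int err = nodecart_create(comm, ndims, dims, periods, reorder, comm_cart);
    if (err != MPI_SUCCESS)  /* fall back to the native implementation */
        err = PMPI_Cart_create(comm, ndims, dims, periods, reorder, comm_cart);
    return err;
}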
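The sequential-section pattern that seqBegin/seqEnd implement can be sketched with a token passed in rank order: each process waits for its predecessor, does its work, then releases its successor. This only illustrates the idea and is not the baseenv implementation:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size, token = 0;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank > 0)   /* "seqBegin": wait for the previous rank */
        MPI_Recv(&token, 1, MPI_INT, rank - 1, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);

    printf("Hello from rank %d of %d\n", rank, size);
    fflush(stdout);  /* output buffering can still reorder lines */

    if (rank < size - 1)   /* "seqEnd": release the next rank */
        MPI_Send(&token, 1, MPI_INT, rank + 1, 0, MPI_COMM_WORLD);

    MPI_Finalize();
    return 0;
}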
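One portable way to discover which processes share a node, similar in spirit to (but not the code used by) topo and nodecart, is to split the communicator by shared-memory domain with MPI_Comm_split_type:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, noderank;
    MPI_Comm nodecomm;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    /* All processes that can share memory (i.e., on the same node)
       end up in the same nodecomm */
    MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                        MPI_INFO_NULL, &nodecomm);
    MPI_Comm_rank(nodecomm, &noderank);
    printf("global rank %d is rank %d on its node\n", rank, noderank);
    MPI_Comm_free(&nodecomm);
    MPI_Finalize();
    return 0;
}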
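To show the kind of conversion involved, here is a hypothetical parser for a "start:end:incr" string, where a leading '*' on the increment selects a multiplicative stride (so "1:1024:*2" yields 1, 2, 4, ..., 1024). The syntax and the name str_to_sizes are assumptions for illustration, not the actual util interface:

#include <stdio.h>

/* Hypothetical: fill v with up to maxn values described by "start:end:incr";
   a '*' before incr means multiply instead of add.  Returns the count,
   or -1 on a malformed or non-terminating specification. */
int str_to_sizes(const char *s, int *v, int maxn)
{
    int start, end, incr, n = 0;
    char op = '+';

    if (sscanf(s, "%d:%d:*%d", &start, &end, &incr) == 3)
        op = '*';
    else if (sscanf(s, "%d:%d:%d", &start, &end, &incr) != 3)
        return -1;
    if (op == '*' && (incr <= 1 || start <= 0))
        return -1;  /* multiplicative stride would never terminate */
    if (op == '+' && incr <= 0)
        return -1;

    for (int k = start; k <= end && n < maxn;
         k = (op == '*') ? k * incr : k + incr)
        v[n++] = k;
    return n;   /* e.g. str_to_sizes("0:4096:512", v, 32) gives 0,512,...,4096 */
}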
Programs to characterize the performance of a parallel computer
- haloperf
- nodecomm is a modification of the ping-pong test that measures communication performance between all cores on two different nodes (a skeleton of the basic ping-pong measurement appears after this list); this program is discussed in "Modeling MPI Communication Performance on SMP Nodes: Is It Time to Retire the Ping Pong Test". halocompare can be used to measure the performance of several methods for exchanging mesh halos; it is intended to be a faster but less instructive version of the stencil code in mpi-patterns.
- io
- ioperf writes a 3D mesh using different methods from MPI-IO, providing a test of collective I/O and of I/O with MPI datatypes (see the MPI-IO sketch after this list).
- mpit
- mpit_vars writes out information on the performance and control variables available in an MPI implementation (see the sketch after this list). This program was one of the first to make use of the MPI tools information interface (MPI_T). It includes options to make MPI I/O and MPI RMA calls, for implementations that do not load those modules (and therefore do not define the associated performance or control variables) until they are used.
- mpi-patterns
- stencil shows different implementations of halo cell exchange for a 2D stencil (mesh) code. stencil3/stencil is a 3D version of a few different halo exchanges. These codes are primarily intended as tutorial examples and are based on code from Torsten Hoefler. The code vecperf measures performance for communication with different approaches for moving strided data, including using MPI_Type_create_resized to create a strided datatype (see the sketch after this list).
- perftest
- This directory contains several classic communication performance tests, including mpptest, which was described in Reproducible Measurements of MPI Performance Characteristics. Also included are tests of collective performance, a network correctness test (stress) that has been used by some vendors to test their networks, a test that can determine eager message limits (buflimit), and a test of the overhead in RMA window creation.
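For reference, here is a skeleton of the basic ping-pong measurement that nodecomm generalizes to all core pairs on two nodes. The message size and repetition count are illustrative, and the real codes are far more careful about timing; run this with exactly two processes:

#include <mpi.h>
#include <stdio.h>

#define NBYTES 1024
#define REPS   1000

int main(int argc, char *argv[])
{
    char buf[NBYTES] = {0};
    int rank;
    double t;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Barrier(MPI_COMM_WORLD);
    t = MPI_Wtime();
    for (int i = 0; i < REPS; i++) {
        if (rank == 0) {
            MPI_Send(buf, NBYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, NBYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, NBYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, NBYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    t = MPI_Wtime() - t;
    if (rank == 0)   /* round trip covers two messages, hence 2 * REPS */
        printf("%d bytes: %.3f usec one-way\n", NBYTES, 1e6 * t / (2 * REPS));
    MPI_Finalize();
    return 0;
}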
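A minimal sketch of one of the methods ioperf exercises: collective I/O with a file view built from a subarray datatype. This uses a 1D block distribution for brevity (ioperf works with a 3D mesh), and the file name and sizes are illustrative:

#include <mpi.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    int rank, size, n = 1024;   /* local elements per process; illustrative */
    double *local;
    MPI_Datatype filetype;
    MPI_File fh;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    local = (double *)calloc(n, sizeof(double));

    /* Each process owns a contiguous block of the global array */
    int gsize = n * size, start = n * rank;
    MPI_Type_create_subarray(1, &gsize, &n, &start, MPI_ORDER_C,
                             MPI_DOUBLE, &filetype);
    MPI_Type_commit(&filetype);

    MPI_File_open(MPI_COMM_WORLD, "testfile.out",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);
    MPI_File_set_view(fh, 0, MPI_DOUBLE, filetype, "native", MPI_INFO_NULL);
    MPI_File_write_all(fh, local, n, MPI_DOUBLE, MPI_STATUS_IGNORE);
    MPI_File_close(&fh);

    MPI_Type_free(&filetype);
    free(local);
    MPI_Finalize();
    return 0;
}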
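The mechanism mpit_vars builds on is the MPI_T interface; here is a minimal sketch that lists the control variables an implementation exposes (mpit_vars itself reports much more, including performance variables):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int provided, ncvar;
    /* MPI_T may be initialized before (and independently of) MPI_Init */
    MPI_T_init_thread(MPI_THREAD_SINGLE, &provided);
    MPI_Init(&argc, &argv);

    MPI_T_cvar_get_num(&ncvar);
    for (int i = 0; i < ncvar; i++) {
        char name[256], desc[256];
        int nlen = sizeof(name), dlen = sizeof(desc);
        int verbosity, bind, scope;
        MPI_Datatype dtype;
        MPI_T_enum enumtype;
        MPI_T_cvar_get_info(i, name, &nlen, &verbosity, &dtype, &enumtype,
                            desc, &dlen, &bind, &scope);
        printf("cvar %d: %s - %s\n", i, name, desc);
    }

    MPI_Finalize();
    MPI_T_finalize();
    return 0;
}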
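One of the approaches vecperf compares can be sketched as follows: describe a column of a row-major array with MPI_Type_vector, then use MPI_Type_create_resized to shrink the extent to one element so that a send of count k covers k consecutive columns. The array size N is illustrative:

#include <mpi.h>

#define N 128

int main(int argc, char *argv[])
{
    MPI_Datatype col, coltype;
    double a[N][N];

    MPI_Init(&argc, &argv);

    /* N blocks of 1 double, a stride of N doubles apart: one column */
    MPI_Type_vector(N, 1, N, MPI_DOUBLE, &col);
    /* Shrink the extent to one double so element i of a send of count k
       starts at column i, not N*N doubles later */
    MPI_Type_create_resized(col, 0, sizeof(double), &coltype);
    MPI_Type_commit(&coltype);

    /* e.g., send the first two columns of a in one call:
       MPI_Send(&a[0][0], 2, coltype, dest, tag, comm); */
    (void)a;   /* quiet unused-variable warnings in this sketch */

    MPI_Type_free(&col);
    MPI_Type_free(&coltype);
    MPI_Finalize();
    return 0;
}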
Other contents
For completeness, this section describes the other directories in the baseenv code.
- maint
- This directory contains scripts used to maintain the baseenv package, in particular a script to create an index for the HTML versions of the routine and program documentation.
- confdb
- This directory contains definitions used in creating the configure script using GNU autoconf.
Download and Build
Baseenv is available at baseenv.tgz. To build, untar the file, cd to the directory, and run configure followed by make. More information is in the file INSTALL. Send bugs and comments to wgropp at illinois.edu.
Citing baseenv
If you use any part of baseenv, please cite the relevant paper. These are:
- mpptest (in perftest). Use
@InProceedings{pvmmpi99-mpptest,
  author    = {William D. Gropp and Ewing Lusk},
  title     = {Reproducible Measurements of {MPI} Performance Characteristics},
  booktitle = {Recent Advances in {P}arallel {V}irtual {M}achine and {M}essage {P}assing {I}nterface},
  editor    = {Jack Dongarra and Emilio Luque and Tom\`as Margalef},
  volume    = 1697,
  series    = {Lecture Notes in Computer Science},
  year      = 1999,
  publisher = {Springer Verlag},
  pages     = {11--18},
  note      = {6th European PVM/MPI Users' Group Meeting, Barcelona, Spain, September 1999},
}
- nodecomm (in haloperf). Use
@InProceedings{Gropp:2016:MMC:2966884.2966919,
  author    = {Gropp, William and Olson, Luke N. and Samfass, Philipp},
  title     = {Modeling {MPI} Communication Performance on {SMP} Nodes: Is It Time to Retire the Ping Pong Test},
  booktitle = {Proceedings of the 23rd European MPI Users' Group Meeting},
  series    = {EuroMPI 2016},
  year      = {2016},
  isbn      = {978-1-4503-4234-6},
  location  = {Edinburgh, United Kingdom},
  pages     = {41--50},
  numpages  = {10},
  url       = {http://doi.acm.org/10.1145/2966884.2966919},
  doi       = {10.1145/2966884.2966919},
  acmid     = {2966919},
  publisher = {ACM},
  address   = {New York, NY, USA},
}
- Cartesian topologies, including the replacement MPI_Cart_create (nodecart). Use
@InProceedings{Gropp:2018:UNI:3236367.3236377,
  author    = {Gropp, William D.},
  title     = {Using Node Information to Implement {MPI} {C}artesian Topologies},
  booktitle = {Proceedings of the 25th European MPI Users' Group Meeting},
  series    = {EuroMPI'18},
  year      = {2018},
  isbn      = {978-1-4503-6492-8},
  location  = {Barcelona, Spain},
  pages     = {18:1--18:9},
  articleno = {18},
  numpages  = {9},
  url       = {http://doi.acm.org/10.1145/3236367.3236377},
  doi       = {10.1145/3236367.3236377},
  acmid     = {3236377},
  publisher = {ACM},
  address   = {New York, NY, USA},
}

and

@Article{GROPP2019,
  author  = {William D. Gropp},
  title   = {Using Node and Socket Information to Implement {MPI} {C}artesian Topologies},
  journal = {Parallel Computing},
  year    = {2019},
  issn    = {0167-8191},
  doi     = {https://doi.org/10.1016/j.parco.2019.01.001},
  url     = {http://www.sciencedirect.com/science/article/pii/S0167819118303156},
}
- seqBegin/seqEnd (in seq). Use
@Book{Gropp:1994:UMP,
  author    = {William Gropp and Ewing Lusk and Anthony Skjellum},
  title     = {Using {MPI}: Portable Parallel Programming with the Message-Passing Interface},
  publisher = {MIT Press},
  address   = {Cambridge, MA},
  pages     = {xx + 307},
  year      = {1994},
  isbn      = {0-262-57104-8},
  lccn      = {QA76.642 G76 1994},
  series    = {Scientific and engineering computation},
}
(Note: the original version of this routine appeared in the first edition of Using MPI.)
- All others: use this web page
@Misc{baseenv-webpage,
  author = {William D. Gropp},
  title  = {baseenv - A collection of codes and routines for parallel computing},
  year   = 2019,
  url    = {\url{http://wgropp.cs.illinois.edu/projects/software/baseenv.htm}},
}