

My interests are in high performance scientific computing, with particular emphasis on parallel computing. My major projects involve developing the Message Passing Interface (MPI) and hierarchical numerical methods for the solution of partial differential equations.

Parallel Computing

One of the biggest challenges in parallel scientific computing is expressing parallel programs efficiently and correctly. My work in this area has three main directions:
  1. Developing standards for parallel computing that can be efficiently and widely implemented. This work has focused on the Message Passing Interface (MPI) standard and a freely-available implementation of the MPI standard, MPICH.
  2. Developing tools to understand and improve the performance and correctness of parallel programs.
  3. Developing innovative methods for parallelism that match radical changes in computer architecture.
Each of these is described in more detail below.

Blue Waters

Blue Waters is a petascale computing system, funded by the National Science Foundation and installed at NCSA.


MPI

The Message Passing Interface (MPI) is a standard for parallel computing developed by the high performance computing community. The standard is available at the MPI Forum web site. I was involved in the development of the MPI-1, MPI-2, and MPI-3 standards, and I currently edit and maintain the sources for the official document.
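The core idea of the MPI model is that data moves between processes only through explicit send and receive operations. A toy sketch of that model, using Python threads and per-"rank" mailboxes (the names `send` and `recv` echo MPI but this is not the MPI API, and real MPI ranks are separate processes):

```python
import queue
import threading

# Toy sketch of the message-passing model: each "rank" owns a mailbox,
# and data moves only through explicit send/recv, as in MPI point-to-point.
NRANKS = 2
mailboxes = [queue.Queue() for _ in range(NRANKS)]

def send(dest, msg):
    """Deliver msg to the mailbox of rank `dest`."""
    mailboxes[dest].put(msg)

def recv(rank):
    """Block until a message arrives in this rank's mailbox."""
    return mailboxes[rank].get()

def worker(rank, results):
    if rank == 0:
        send(1, "hello from rank 0")      # rank 0 sends a request...
        results[rank] = recv(0)           # ...and waits for the reply
    else:
        msg = recv(1)                     # rank 1 receives the request...
        send(0, msg.upper())              # ...and sends back a reply

results = {}
threads = [threading.Thread(target=worker, args=(r, results))
           for r in range(NRANKS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results[0])  # rank 0 received the transformed reply
```

The point of the sketch is the discipline, not the mechanism: no rank touches another rank's data directly, which is what makes the model portable from shared memory to distributed clusters.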


MPICH

For a standard to succeed, there must be an effective implementation of the standard. For this reason, our group developed a portable, efficient implementation of the MPI standard called MPICH. Along with my collaborators at Argonne, I continue to use this project to perform research into implementation issues for MPI, such as how to make the best use of emerging high-performance networks.

Parallel I/O

Achieving high performance with I/O in parallel applications requires careful attention to the choice of semantics. MPI-IO defines parallel file semantics that permit a highly efficient implementation. The key word here is permit: both new implementation ideas and careful engineering are required to achieve good performance.
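One example of why semantics matter: an MPI-IO style interface such as MPI_File_write_at lets each rank write its own block of a shared file at an offset computed from its rank, so writes need no locking or coordination. A sketch of that offset-based pattern using ordinary file I/O, with the "ranks" run sequentially for illustration:

```python
import os
import tempfile

# Sketch of the idea behind offset-based parallel writes (as in
# MPI_File_write_at): each rank owns a disjoint byte range of a shared
# file, computed from its rank, so no coordination is needed.
NRANKS, BLOCK = 4, 8
path = os.path.join(tempfile.mkdtemp(), "shared.dat")

for rank in range(NRANKS):
    data = bytes([rank]) * BLOCK          # each rank's payload
    mode = "r+b" if os.path.exists(path) else "wb"
    with open(path, mode) as f:
        f.seek(rank * BLOCK)              # offset derived from rank
        f.write(data)

with open(path, "rb") as f:
    contents = f.read()
print(len(contents))  # NRANKS * BLOCK bytes, each block from one "rank"
```

Because the byte ranges are disjoint and computable, an implementation is free to reorder or aggregate these writes, which is exactly the latitude that careful semantics buy.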

Performance Modeling, Visualization, and Tuning

The quantitative study of performance is critical in developing new algorithms, understanding new architectural directions for processors and interconnects, and getting the most out of applications. Some of my papers provide an introduction to performance modeling.

New Execution Models

The Decoupled Execution Paradigm is an NSF-funded project that looks at new ways to organize I/O on high-end computing (HEC) systems.

FPMPI is a tool for collecting data on the performance of MPI programs. FPMPI collects detailed summary data about MPI usage, along with some information about the other resources used by a parallel program. FPMPI tries to collect just the right amount of data: rather than collecting information on each separate call (too much data) or combining all calls to a particular routine (too little data), FPMPI collects enough data to understand the overall communication pattern, including the separate contributions of short and long message communication. FPMPI complements the abilities of Jumpshot, a portable visualization tool that, alas, is no longer developed or maintained.
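The "just the right amount of data" idea can be sketched as accumulating counts and bytes per routine per message-size class, rather than logging every call or a single total per routine. This is an assumed design sketch, not FPMPI's actual internals, and the 1024-byte short/long threshold is a made-up example:

```python
from collections import defaultdict

# Sketch of summary-based profiling: accumulate per (routine, size class)
# rather than per call. Enough to see the communication pattern and the
# short/long message split, at near-zero storage cost.
SHORT_LIMIT = 1024  # bytes; illustrative threshold for "short" messages

summary = defaultdict(lambda: {"count": 0, "bytes": 0})

def record(routine, nbytes):
    """Fold one observed MPI call into the running summary."""
    cls = "short" if nbytes <= SHORT_LIMIT else "long"
    s = summary[(routine, cls)]
    s["count"] += 1
    s["bytes"] += nbytes

# Simulated trace of MPI calls:
for n in (64, 128, 4096, 64, 1 << 20):
    record("MPI_Send", n)

print(summary[("MPI_Send", "short")])  # three short sends, 256 bytes total
```

The summary stays bounded no matter how long the program runs, which is what makes this approach usable at scale.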

Numerical Methods


PETSc is a suite of data structures and routines for the scalable (parallel) solution of scientific applications modeled by partial differential equations.
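At the heart of such solvers are Krylov iterative methods. A toy serial conjugate gradient solver, sketching the algorithmic core of the kind of solver PETSc provides (PETSc's KSP solvers are far more general, preconditioned, and parallel):

```python
# Toy conjugate gradient for a symmetric positive definite system A x = b.
# Serial and dense for clarity; real PDE solvers use sparse, distributed
# data structures and preconditioning.
def cg(A, b, tol=1e-10, maxit=100):
    n = len(b)
    x = [0.0] * n
    r = b[:]                      # residual r = b - A x, with x = 0
    p = r[:]                      # initial search direction
    rr = sum(ri * ri for ri in r)
    for _ in range(maxit):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rr / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rr_new = sum(ri * ri for ri in r)
        if rr_new < tol * tol:    # converged: residual small enough
            break
        p = [r[i] + (rr_new / rr) * p[i] for i in range(n)]
        rr = rr_new
    return x

# 1-D Laplacian, the classic model PDE problem, with n = 3:
A = [[2.0, -1.0, 0.0],
     [-1.0, 2.0, -1.0],
     [0.0, -1.0, 2.0]]
b = [1.0, 0.0, 1.0]
x = cg(A, b)
print([round(v, 6) for v in x])  # the exact solution is [1, 1, 1]
```

The same mathematical iteration scales to billions of unknowns once the vectors and the matrix-vector product are distributed across ranks, which is precisely the engineering PETSc packages up.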
Computer Science Department, University of Illinois Urbana-Champaign