Projects
My interests are in high performance scientific computing, with particular emphasis on parallel computing. These include the development of scalable algorithms, programming systems to enable the efficient and correct expression of algorithms, and tools for writing, understanding, and tuning parallel applications. I am also involved in defining and providing the infrastructure for high performance computing and for a large science project, the Large Synoptic Survey Telescope.
A common underlying issue is matching algorithms and software to the computer architecture, particularly as architectures adapt, possibly with radical innovations, to the end of Dennard scaling (around 2004) and the end of Moore's Law (which, in rigorous terms, has also already ended; in practice it is slowly fading, since it is really an engineering imperative rather than a natural law).
- Programming systems
- Effective ways to program parallel computers require (a) a standard approach, (b) a high-quality, ubiquitous implementation of that standard, and (c) effective ways to get high performance on each core and node.
- MPI Standard
- The Message Passing Interface (MPI) standard is the dominant programming system for highly parallel HPC applications. I have been involved from the beginning of the MPI standardization process, and I currently serve as the overall standard editor and the co-chair of the RMA chapter.
- MPICH implementation
- MPICH is the most widely used implementation of MPI. I began this implementation while MPI was originally being defined, and development continues, with the major work at Argonne National Laboratory. My recent contributions have included better implementations of datatypes, remote memory access, collective communications, and process topologies.
- Code transformations
- Parallelism is only one approach to performance. Real HPC applications often require non-trivial code transformations to go beyond what a compiler is willing or able to perform. Recent projects in this area include the Illinois Coding Environment (ICE) and a Just-In-Time (JIT) compilation approach for HPC applications called Moya.
- Parallel I/O
- I/O in parallel applications often performs spectacularly poorly. I have worked on measuring I/O performance and establishing performance expectations, contributing to the ROMIO implementation of MPI-IO, developing application-level libraries (such as MeshIO) for better performance, and exploring the possibilities for post-POSIX I/O specifications in HPC. A minimal sketch of the collective style of I/O this work encourages appears below.
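To make the Parallel I/O point above concrete, the sketch below writes one contiguous block per process with a single collective MPI-IO call instead of many independent POSIX writes. The file name, buffer size, and layout are invented for the illustration; this is not code from ROMIO or MeshIO.

```c
/* Minimal sketch (illustrative only): each rank writes its contiguous
 * block of doubles with one collective MPI-IO call, which lets an
 * implementation such as ROMIO aggregate requests instead of issuing
 * many small independent writes. */
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int n = 1 << 20;                    /* doubles per rank (made up) */
    double *buf = malloc(n * sizeof(double));
    for (int i = 0; i < n; i++) buf[i] = rank;

    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "out.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

    /* Collective write: every rank participates, offsets are disjoint. */
    MPI_Offset offset = (MPI_Offset)rank * n * sizeof(double);
    MPI_File_write_at_all(fh, offset, buf, n, MPI_DOUBLE, MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
    free(buf);
    MPI_Finalize();
    return 0;
}
```

Collective calls such as MPI_File_write_at_all give the MPI-IO layer a chance to aggregate and reorder requests, which is often where the largest I/O performance gains come from.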
- Performance Models and Tools
- Quantifying performance and developing performance models and, especially, performance expectations. Recent work has included a new performance model, called the max rate model, that better matches the performance of internode communication than the classical postal model; the two models are sketched below. Software to measure communication performance includes mpptest and nodecomm. FPMPI is a software library that uses the MPI profiling interface to gather details about the use of MPI in an application without requiring any source code changes. Baseenv is a collection of routines and programs to support parallel programs; included in baseenv is a comprehensive set of tests of compiler vectorization. Some introductory papers on performance modeling are also available.
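As a rough sketch of the difference between the two models (the parameter names here are illustrative, not the notation of the published papers): the postal model charges a fixed latency plus message size divided by a single per-process rate, while the max rate model additionally caps the aggregate injection rate of a node, which is shared by the k processes communicating from it.

```c
/* Illustrative sketch of the two communication models (assumed notation):
 *   postal:   T(n)    = alpha + n / Rlink
 *   max rate: T(n, k) = alpha + n / min(Rlink, Rnode / k)
 * where n is the message size in bytes, k is the number of processes per
 * node communicating concurrently, Rlink is the per-process rate, and
 * Rnode is the maximum aggregate rate of the node. */
#include <stdio.h>

static double postal_time(double alpha, double Rlink, double n)
{
    return alpha + n / Rlink;
}

static double max_rate_time(double alpha, double Rlink, double Rnode,
                            double n, int k)
{
    double r = Rnode / k < Rlink ? Rnode / k : Rlink;  /* effective rate */
    return alpha + n / r;
}

int main(void)
{
    /* Made-up parameters: 1 us latency, 10 GB/s per process, 16 GB/s per node. */
    double alpha = 1e-6, Rlink = 10e9, Rnode = 16e9, n = 1e6;
    for (int k = 1; k <= 8; k *= 2)
        printf("k=%d  postal=%.2f us  max-rate=%.2f us\n", k,
               1e6 * postal_time(alpha, Rlink, n),
               1e6 * max_rate_time(alpha, Rlink, Rnode, n, k));
    return 0;
}
```

The postal model predicts no penalty as more processes per node communicate; the max rate model captures the slowdown once the node's aggregate injection rate is saturated.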
- Scalable Numerical Methods
- PETSc is a suite of data structures and routines for the scalable (parallel) solution of scientific applications modeled by partial differential equations.
- At extreme scale, synchronizing operations such as reductions (dot products) can limit scalability. I have investigated ways to use non-blocking collective operations in MPI to implement scalable Krylov methods; the sketch below illustrates the idea.
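The fragment below is a minimal sketch of that idea, not the actual solver code: the dot-product reduction is started with a non-blocking MPI_Iallreduce (MPI-3) and overlapped with local work before the global value is needed. The local kernel and problem sizes are stand-ins.

```c
/* Minimal sketch: overlap a global reduction with local computation. */
#include <mpi.h>
#include <stdio.h>

#define N 1000  /* local vector length; made up for the example */

/* Hypothetical local kernel; stands in for SpMV or preconditioning. */
static void local_work(double *z, const double *r, int n)
{
    for (int i = 0; i < n; i++) z[i] = 2.0 * r[i];
}

/* Dot product whose reduction is overlapped with local_work. */
static double overlapped_dot(const double *x, const double *y, int n,
                             double *z, const double *r, MPI_Comm comm)
{
    double local = 0.0, global = 0.0;
    for (int i = 0; i < n; i++) local += x[i] * y[i];

    MPI_Request req;
    MPI_Iallreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, comm, &req);

    local_work(z, r, n);   /* proceeds while the reduction is in flight */

    MPI_Wait(&req, MPI_STATUS_IGNORE);
    return global;
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    static double x[N], y[N], z[N], r[N];
    for (int i = 0; i < N; i++) { x[i] = 1.0; y[i] = 1.0; r[i] = 1.0; }

    double d = overlapped_dot(x, y, N, z, r, MPI_COMM_WORLD);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0) printf("global dot product = %g\n", d);
    MPI_Finalize();
    return 0;
}
```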
- Infrastructure
- Blue Waters
- Large Synoptic Survey Telescope (LSST)
- The Deep Learning Major Research Instrument Project is an NSF-funded project, hosted at NCSA, to provide Illinois researchers with a powerful deep learning resource. Note that MPI and collective algorithms developed for HPC have been adopted in the machine learning community.
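As a small illustration of the last point, data-parallel training typically averages gradients across processes with an allreduce, the same collective long used in HPC. The sketch below is generic; the gradient buffer and its size are made up for the example.

```c
/* Minimal, generic sketch: average per-rank gradients with MPI_Allreduce. */
#include <mpi.h>
#include <stdio.h>

#define NPARAM 4  /* number of model parameters; illustrative only */

static void average_gradients(double *grad, int n, MPI_Comm comm)
{
    int nprocs;
    MPI_Comm_size(comm, &nprocs);

    /* Sum each rank's local gradient in place... */
    MPI_Allreduce(MPI_IN_PLACE, grad, n, MPI_DOUBLE, MPI_SUM, comm);

    /* ...then divide by the number of ranks to get the average. */
    for (int i = 0; i < n; i++) grad[i] /= nprocs;
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double grad[NPARAM];
    for (int i = 0; i < NPARAM; i++) grad[i] = rank;  /* fake local gradient */

    average_gradients(grad, NPARAM, MPI_COMM_WORLD);

    if (rank == 0) printf("averaged grad[0] = %g\n", grad[0]);
    MPI_Finalize();
    return 0;
}
```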
- Policy
- I participate in community discussions about the directions
for HPC. Some of the most important are:
- Opportunities from the Integration of Simulation Science and Data Science is the proceedings of a workshop that examines some of the issues raised in the "Future Directions for NSF" report.
- Future Directions for NSF Advanced Computing Infrastructure to Support U.S. Science and Engineering in 2017-2020 is a recent report that I co-chaired for the National Academies. A recent workshop was held to update this report in the area of the convergence of data and HPC; the workshop report is forthcoming.
- Big Data and Extreme-Scale Computing (BDEC) is an international effort to understand the interactions and opportunities in the intersection of big data and extreme scale computing. A draft report, Big Data and Extreme-Scale Computing: Pathways to Convergence, is available.
- International Exascale Software Project (IESP) was an effort of the international community to define an exascale software stack. The final report of this effort was published as The International Exascale Software Project roadmap.
- Designing a Digital Future: Federally Funded Research and Development in Networking and Information Technology was a report for the President's Council of Advisors on Science and Technology.
Current Grants
Most of these grants are team projects in which my role runs from being the principal investigator to one of several co-investigators.
- Center for the Exascale Simulation of Plasma Coupled Combustion (XPACC), a >$20M, 6-year project in the DOE Predictive Science program.
- LSST; NCSA leads the data facility for the LSST project.
- Blue Waters is NSF's most powerful general purpose supercomputer, and has sustained more than a petaflop per second on a wide variety of applications.
- Midwest Big Data Hub (MBDH) is one of the four regional data hubs funded by the NSF.
- Deep Learning Major Research Instrument is an instrument to support deep learning researchers across the University of Illinois Urbana-Champaign campus. This is a development MRI, in partnership with IBM and NVIDIA.
- Scalable solvers for reservoir simulation, a joint project with Luke Olson and funded by ExxonMobil.
- Lattice QCD Exascale Computing Project, one of the DOE Exascale Computing Projects.
- Scalable and Highly Accurate Methods for Metagenomics, led at Illinois by Tandy Warnow.
- While not a grant in the conventional sense, I administer a Siebel Fellowship to a student at UIUC; I've used this for students pursuing topics in big data and bioinformatics.
- As a Blue Waters Professor, I have a grant of time on the Blue Waters system that I use for a variety of projects.
- As part of the PEAC-INCITE award, I have access to time on supercomputers operated by DOE's Office of Science.