William D. Gropp
Director, National Center for Supercomputing Applications
Thomas M. Siebel Chair in Computer Science
Computer Science Department
University of Illinois Urbana-Champaign
Urbana, Illinois
Looking for the head (chair) of the CS Department? You want Nancy Amato.

Phone: 217 244 6720
Fax: 217 265 6738
email: wgropp at illinois.edu

IEEE Computer Society

I have begun a three-year term as President-elect (2021), President (2022), and Past-President (2023) of the Computer Society. Read more about my candidacy here, and stay tuned for more about my vision and programs for the IEEE Computer Society.

Research Interests

My interest is in the use of high performance computing to solve problems that are too hard for other techniques. I have concentrated on two areas: the development of scalable numerical algorithms for partial differential equations (PDEs), especially for the linear systems that arise from approximations to PDEs, and the development of programming models and systems for expressing and implementing highly scalable applications. In each of these areas, I have led the development of software that has been widely adopted. PETSc is a powerful numerical library for the solution of linear and nonlinear systems of equations. MPI is the most widely used parallel programming system for large scale scientific computing. The MPICH implementation of MPI is one of the most widely used and is the implementation of choice for the world's fastest machines.

Current Major Research Projects

These are some of my major research projects. I also have other projects and collaborations, particularly in parallel I/O and parallel numerical algorithms.
  • A recent project is Delta, which will deploy and operate a supercomputer for the National Science Foundation. The system will provide significant GPU computing, a non-POSIX file system, and special attention to accessibility. The system will be in operation in late 2021 and will operate for at least five years. See the announcement of Delta here.

  • The Center for Exascale-enabled Scramjet Design is a new center, funded by the US Dept. of Energy in the Predictive Science Academic Alliance Program (PSAAP III). The project has openings for graduate students with interests in numerical analysis, scientific computing, parallel programming, and performance and I/O, among others.

  • In 2016, we began a project to deploy a Deep Learning Major Research Instrument, which combines my interests in HPC software, numerics, and high-performance I/O with the revolution in machine learning.

  • The Midwest Big Data Hub (MBDH) is one of four NSF-funded regional big data innovation hubs, which serve to bring communities together to apply data science to a wide range of challenges. I led the MBDH from 2017 through mid-2020, and am currently one of the co-principal investigators.

  • The MPI Forum is the organization that continues to develop the Message-Passing Interface standard, which is the dominant programming system for massively parallel applications in science and engineering. I lead several of the chapter committees and am the overall editor. I also work on MPI implementations and design.

  • High performance I/O is another focus of my research, and I have several projects looking at everything from using and implementing collective I/O to using database ideas to better manage data from simulations. One feature that these have in common is that they do not require POSIX I/O semantics - specifically, the strong consistency semantics that contribute to complex implementations and that lead to performance and robustness problems. The Delta system, mentioned above, will feature a fast, non-POSIX file system.

Of Special Interest

2016 Ken Kennedy Lecture at SC16.

Our report on Future Directions for NSF Advanced Computing Infrastructure to Support U.S. Science and Engineering in 2017-2020 is now available! The report is freely available at that link, and provides a framework for NSF's advanced computing for the future (not just until 2020).

Video of lectures at the 2016 ATPESC.

Using MPI, 3rd edition and Using Advanced MPI released! Using MPI is an extensive revision, including new material on MPI-2.2 and MPI-3. Using Advanced MPI is a replacement for Using MPI-2, and includes new material on the MPI-3 one-sided interface, the new tools interface, and Fortran, as well as extensive revisions throughout.

Now available: the SC13 opening session video, including award presentations and Genevieve Bell's keynote. See Genevieve Bell's keynote, The Secret Life of Data (a subset of the full opening session).

Changing How Programmers Think about Parallel Programming, ACM Webinar.

PETSc wins an R&D 100 award in 2009.

MPICH2 wins an R&D 100 award in 2005.

Current Conference Committees
