The PinT 2021 workshop was held virtually, August 2–6, 2021, due to COVID-19. The conference program and recorded lectures are linked below for posterity. The program booklet, which contains contact information for the speakers, is available here: PinT2021_ProgramBooklet.pdf

### Monday August 2, 2021 (Session #1)

2:55pm GMT: Introductory Remarks / Logistics

3:00pm GMT: Convergence of Parareal with spatial coarsening (Daniel Ruprecht, Hamburg University of Technology) (Abstract)

When solving PDEs, using a coarser spatial mesh in Parareal’s coarse propagator can be an effective way to reduce its cost and improve potential speedup. However, the cost reduction has to be balanced against potentially slower convergence. In the talk, we will prove a theoretical best-case bound for the norm of Parareal’s iteration matrix when spatial coarsening is used. We will discuss implications of the result and compare it to numerical experiments. One consequence of the bound is that for hyperbolic problems, where Parareal is known to struggle, spatial coarsening will eliminate any chance for a theoretical guarantee of fast convergence.
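As background for the analysis, the basic Parareal iteration can be sketched as follows. This is a minimal illustration with user-supplied propagators (the names `parareal`, `coarse`, and `fine` are illustrative), not code from the talk:

```python
import numpy as np

def parareal(u0, t0, T, N, K, coarse, fine):
    """Basic Parareal: coarse(u, t, dt) and fine(u, t, dt) advance u from t to t + dt."""
    dt = (T - t0) / N
    ts = t0 + dt * np.arange(N + 1)
    # Initial guess from a sequential coarse sweep
    U = [u0]
    for n in range(N):
        U.append(coarse(U[n], ts[n], dt))
    for _ in range(K):
        # Fine propagations are independent across n -- this is the parallel part
        F = [fine(U[n], ts[n], dt) for n in range(N)]
        V = [u0]
        for n in range(N):  # sequential correction sweep
            V.append(coarse(V[n], ts[n], dt) + F[n] - coarse(U[n], ts[n], dt))
        U = V
    return ts, U
```

In the setting of the talk, `coarse` would additionally restrict the state to a coarser spatial mesh, take the step there, and interpolate back to the fine mesh; the cost savings and the convergence penalty of that choice are exactly what the bound quantifies.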

3:30pm GMT: Multigrid Reduction in Time: Two-Level Convergence Theory with Spatial Coarsening (Jacob Schroder, University of New Mexico) <video> (Abstract)

The need for parallel-in-time methods is driven by changes in computer architectures, where future speedups will come from greater concurrency rather than from faster clock speeds, which have stagnated. In this talk, we examine the parallel-in-time method multigrid reduction in time (MGRIT), which applies multigrid to the time dimension for the (non)linear systems that arise when solving for multiple time steps simultaneously. The result is a flexible and nonintrusive approach that wraps existing time-stepping codes; however, basic MGRIT coarsens only in one dimension, time. Coarsening in the spatial dimensions is attractive for explicit time-stepping schemes (where it is needed for stability) and for efficiency in general. Thus, we examine the effects of spatial coarsening on MGRIT with a convergence analysis for 1D model problems using injection restriction and linear interpolation in space. The analysis shows that spatial coarsening can be rather detrimental to convergence; however, the use of so-called FCF-relaxation can ameliorate some of these effects. Supporting numerical results are provided for model 1D advection and heat equations.

This talk is based on joint work with Tz. Kolev and V. Dobrev from Lawrence Livermore National Laboratory.

LLNL-PRES-824440
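To make the structure of the method concrete, here is a minimal two-level MGRIT sketch for the scalar linear model problem u_n = Φ u_{n-1}, with FCF-relaxation and injection in time. All names are illustrative assumptions for this sketch, not the LLNL implementation:

```python
import numpy as np

def mgrit_two_level(phi, phi_c, u0, N, m, iters):
    """Two-level MGRIT for the scalar linear problem u_n = phi * u_{n-1}.

    phi   : fine time-step factor
    phi_c : coarse time-step factor (e.g. phi**m, or a rediscretization)
    N     : number of fine time steps (N + 1 points); assumes m divides N
    m     : temporal coarsening factor
    """
    u = np.zeros(N + 1)
    u[0] = u0
    cpts = np.arange(0, N + 1, m)            # C-points (injection in time)

    def f_relax(u):
        # Propagate across F-points from each C-point (parallel over intervals)
        for c in cpts[:-1]:
            for n in range(c + 1, c + m):
                u[n] = phi * u[n - 1]

    def c_relax(u):
        for c in cpts[1:]:
            u[c] = phi * u[c - 1]

    for _ in range(iters):
        f_relax(u); c_relax(u); f_relax(u)    # FCF-relaxation
        # Residual at C-points (injection restriction)
        r = np.array([0.0] + [phi * u[c - 1] - u[c] for c in cpts[1:]])
        # Sequential coarse solve for the error: e_i = phi_c * e_{i-1} + r_i
        e = np.zeros(len(cpts))
        for i in range(1, len(cpts)):
            e[i] = phi_c * e[i - 1] + r[i]
        u[cpts] += e                          # coarse-grid correction
        f_relax(u)                            # interpolation: injection + F-relax
    return u
```

With the ideal coarse operator phi_c = phi**m this two-level cycle is a direct solver; the interesting, analyzed case is a cheaper coarse operator, e.g. one rediscretized with a larger step or, as in the talk, on a coarser spatial mesh.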

4:00pm GMT: Parareal with Exponential Integrators (Tommaso Buvoli, University of California, Merced) <video> (Abstract)

In this talk, we analyze the stability and convergence properties of Parareal with exponential integrators and compare our results to those obtained with IMEX-based Parareal. We will show that exponential integrators are overall more robust with regard to parameter selection, especially for equations with no diffusion. Finally, we present several simple numerical experiments to demonstrate real-world parallel speedup.
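For readers unfamiliar with exponential integrators, the simplest member of the family, exponential Euler for u' = λu + N(t, u), illustrates the idea of treating the stiff linear part exactly. This scalar sketch (the names `phi1` and `exp_euler` are illustrative) is background only, not one of the integrators analyzed in the talk:

```python
import numpy as np

def phi1(z):
    # phi_1(z) = (e^z - 1)/z, with the z -> 0 limit handled explicitly
    return 1.0 if z == 0 else (np.exp(z) - 1.0) / z

def exp_euler(lam, nonlin, u0, t0, T, N):
    """Exponential Euler for the scalar problem u' = lam*u + nonlin(t, u)."""
    h = (T - t0) / N
    u, t = u0, t0
    for _ in range(N):
        # Linear part propagated exactly; nonlinearity weighted by phi_1
        u = np.exp(lam * h) * u + h * phi1(lam * h) * nonlin(t, u)
        t += h
    return u
```

Higher-order exponential Runge-Kutta methods replace the single φ1 term with combinations of φ-functions at the stages; in Parareal, such integrators can serve as either the fine or the coarse propagator.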

4:30pm GMT: Optimized, parallel time integrators for better accuracy with large time steps (Hans Johansen, Lawrence Berkeley National Lab) <video> (Abstract)

5:00pm GMT: (Plenary) Supervised parallel-in-time algorithm for stochastic dynamics (Minlang Yin and Khemraj Shukla, Brown University) <video> (Abstract)

In the second part of the talk, we will discuss the implementation of parallel physics-informed neural networks in space and time. We developed a distributed framework for physics-informed neural networks (PINNs) based on two recent extensions, namely conservative PINNs (cPINNs) and extended PINNs (XPINNs), which employ domain decomposition in space and in space-time, respectively. This domain decomposition endows cPINNs and XPINNs with several advantages over vanilla PINNs, such as parallelization capacity, large representation capacity, and efficient hyperparameter tuning, and makes them particularly effective for multi-scale and multi-physics problems. We will present a parallel algorithm for cPINNs and XPINNs constructed with a hybrid programming model described by MPI + X, where X $\in$ {CPUs, GPUs}. The main advantage of cPINNs and XPINNs over the more classical data-parallel and model-parallel approaches is the flexibility of optimizing all hyperparameters of each neural network separately in each subdomain. Finally, we will discuss an efficient parallel PINN implementation combined with the Parareal algorithm.

### Tuesday August 3, 2021 (Session #2)

2:55pm GMT: Introductory Remarks / Logistics

3:00pm GMT: Error bounds for PFASST and related Block Spectral-Deferred-Correction algorithms (Thibaut Lunet, Hamburg University of Technology) <video> (Abstract)

In this talk, we describe the different elements of PFASST when applied to a simple linear problem (the Dahlquist equation), and show that Block Gauss-Seidel SDC is equivalent to the Parareal algorithm when Parareal is used with specific parts of an SDC integrator. We then derive convergence bounds for Block SDC, Block Gauss-Seidel SDC, and Block Jacobi SDC using the generating-function technique already used to determine convergence bounds for Parareal. With those bounds, we show the convergence of Block Gauss-Seidel SDC, which can be used as a direct PinT algorithm through pipelining of the sweeps. We also highlight the particular convergence order of Block Jacobi SDC, which explains why the order increase in PFASST appears only after a fixed number of iterations. Finally, we show how the generating-function technique can be extended to a Block Gauss-Seidel update with an FAS correction, and ultimately use it to compute a new convergence bound for PFASST.
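The generating-function technique referred to above can be outlined for a generic two-term error recurrence. This is a sketch under simplifying assumptions; the contraction factors $\alpha$ and $\beta$ are abstract placeholders, not quantities defined in the talk:

```latex
% Assume the iteration error satisfies the recurrence
e_{n+1}^{k+1} \le \alpha\, e_{n}^{k} + \beta\, e_{n}^{k+1} .
% Define the generating function of the errors at iteration k:
\rho_k(\zeta) = \sum_{n \ge 1} e_n^k \zeta^n .
% Multiplying the recurrence by \zeta^{n+1} and summing over n gives
\rho_{k+1}(\zeta) \le \alpha \zeta\, \rho_k(\zeta) + \beta \zeta\, \rho_{k+1}(\zeta)
\quad \Longrightarrow \quad
\rho_{k+1}(\zeta) \le \frac{\alpha \zeta}{1 - \beta \zeta}\, \rho_k(\zeta) ,
% so by induction
\rho_k(\zeta) \le \left( \frac{\alpha \zeta}{1 - \beta \zeta} \right)^{k} \rho_0(\zeta) ,
% and bounds on e_n^k follow by extracting the coefficient of \zeta^n.
```

Different choices of $\alpha$ and $\beta$ (and of the block operators hidden in them) then yield bounds for the different block iterations, and an additional FAS term extends the recurrence toward PFASST.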

3:30pm GMT: Quantum Algorithms for Solving Ordinary Differential Equations via Classical Integration Methods (Benjamin Zanger, Technical University of Munich) <video> (Abstract)

This talk is based on joint work with Christian B. Mendl, Martin Schulz, and Martin Schreiber from the Technical University of Munich.

4:00pm GMT: In Search of a Classical Convergence Proof for Waveform Relaxation (Martin Gander, University of Geneva) <video> (Abstract)

…see also the page just before, but again without proof. The goal of my presentation is to show how the result announced in [Bellen and Zennaro 1993, Theorem 2.1] with the extra term can be proved.

4:30pm GMT: Parallel exponential Runge–Kutta methods (Vu Thai Luan, Mississippi State University) <video> (Abstract)

5:00pm GMT: Virtual social hour, hosted at http://pint.event.gatherly.io

### Wednesday August 4, 2021 (Session #3)

2:55pm GMT: Introductory Remarks / Logistics

3:00pm GMT: A Space-Time DPG Method for the Heat Equation (Johannes Storn, Bielefeld University) <video> (Abstract)

3:30pm GMT: Parallel space-time residual minimization for parabolic evolution equations (Jan Westerdiep, University of Amsterdam) <video> (Abstract)

In this talk, we dive into the specifics of the algorithmic complexity of such a solve. As it turns out, we can compute quasi-optimal approximations to the solution u at optimal linear cost on a single processor. More interestingly, defining parallel complexity as the asymptotic runtime given sufficiently many processors and assuming no communication cost, we can compute such approximations in polylogarithmic time, which is on par with the best-known results for elliptic problems. Numerical experiments will show that these abstract results translate to a highly efficient algorithm in practice.

Results in this talk build on our chapter in the proceedings of the previous PinT workshop.

4:00pm GMT: Diagonal SDC Preconditioning via Differentiable Programming and Reinforcement Learning (Ruth Schöbel, Juelich Supercomputing Centre) <video> (Abstract)

4:30pm GMT: (Plenary) Space-time block preconditioning (Ben Southworth, Los Alamos National Lab) <video> (Abstract)

### Thursday August 5, 2021 (Session #4)

8:55am GMT: Introductory Remarks / Logistics

9:00am GMT: An Efficient Parallel-in-Time Method for Explicit Time-Marching Schemes (Yen-Chen Chen, The University of Tokyo) <video> (Abstract)

9:30am GMT: Parallel-in-Time Preconditioner for Optimal Control Problems (Shulin Wu, Northeast Normal University) <video> (Abstract)

10:00am GMT: Asynchronous Truncated Multigrid-reduction-in-time: AT-MGRIT (Jens Hahne, Bergische Universität Wuppertal) <video> (Abstract)

10:30am GMT: (Plenary) From Parallel-in-Time to Full Space-Time Parallelization (Ulrich Langer, Johannes Kepler University) <video> (Abstract)

The traditional approaches to the numerical solution of initial-boundary value problems for parabolic Partial Differential Equations (PDEs) are based on separating the discretizations in time and space, leading to time-stepping methods. This separation of time and space discretizations comes with some disadvantages with respect to parallelization and adaptivity. Parallel-in-time methods try to avoid the curse of sequentiality of time-stepping procedures. To overcome the disadvantages connected with separate time and space discretizations, we consider completely unstructured finite element (fe) discretizations of the space-time cylinder into simplicial finite elements, and derive stable fe schemes. Unstructured space-time fe discretizations considerably facilitate parallelization and simultaneous space-time adaptivity. Moving spatial domains or interfaces can easily be treated since they are fixed in the space-time cylinder. Besides initial-boundary value problems for parabolic PDEs, we will also consider optimal control problems constrained by linear or nonlinear parabolic PDEs. Here, unstructured space-time methods are especially well suited since the reduced optimality system couples two parabolic equations for the state and adjoint state that run forward and backward in time, respectively. In contrast to time-stepping methods, one has to solve a single large linear or nonlinear system of algebraic equations, so the memory requirement becomes an issue. In this connection, adaptivity, parallelization, and matrix-free implementations are very important techniques for overcoming this bottleneck. Fast parallel solvers are the most important ingredient of efficient space-time methods.

The talk is based on joint work with C. Hofer, M. Neumüller, A. Schafelner, R. Schneckenleitner, O. Steinbach, I. Toulopoulos, F. Tröltzsch, H. Yang, and M. Zank.

This research was supported by the Austrian Science Fund (FWF) through the DK W1214-04. This support is gratefully acknowledged.

### Friday August 6, 2021 (Session #5)

8:55am GMT: Introductory Remarks / Logistics

9:00am GMT: Spatial re-distribution on the temporal coarse-level for Multigrid Reduction in Time (Ryo Yoda, The University of Tokyo) <video> (Abstract)

9:30am GMT: Parareal for higher index differential algebraic equations (Iryna Kulchytska-Ruchka, Technical University of Darmstadt) <video> (Abstract)

10:00am GMT: Low Rank Parareal: a new combination of Parareal with dynamical low rank approximation (Benjamin Carrel, University of Geneva) <video> (Abstract)