The purpose of the event on the 25th of January is to discuss recent developments in various aspects of parallel-in-time methods, bringing together mathematicians and computational scientists working on this subject.
The presentations will take place in Room B405 of LAGA.
Time parallel time integration methods have received renewed interest over the last decade because of the advent of massively parallel computers, driven by the clock speed limit reached on today's processors. When solving time dependent partial differential equations, the time direction is usually not used for parallelization. But when parallelization in space saturates, the time direction offers itself as a further direction for parallelization. The time direction is however special: for evolution problems there is a causality principle, the solution later in time being determined by the solution earlier in time, so information flows only forward in time. Algorithms trying to use the time direction for parallelization must therefore be special, and take this very different property of the time dimension into account. I will show in this talk how time parallel time integration methods were invented over the past five decades, and give a classification into four different groups: methods based on multiple shooting, space-time multigrid methods, methods based on domain decomposition and waveform relaxation, and direct time parallel methods. The performance of these methods depends on the nature of the underlying evolution problem, and it turns out that for the first two classes of methods, time parallelization is only really possible for parabolic problems, while the last two classes can also be used to parallelize hyperbolic problems in time. I will also explain in more detail one method from each class: the parareal algorithm and a space-time multigrid method, which are currently among the most promising methods for parabolic problems, and a Schwarz waveform relaxation method related to tent-pitching and a direct time parallel method based on diagonalization of the time stepping matrix, which are very effective for hyperbolic problems.
With the advent of very large scale parallel computers, it has become more and more important to also use the time direction for parallelization when solving evolution problems. While there are many successful algorithms for diffusive problems, it has turned out to be substantially more difficult to solve hyperbolic problems in a time parallel fashion. We present here a mathematical analysis of a new method based on the diagonalization of the time stepping matrix, proposed by Maday and Rönquist in 2007. As for many time parallelization methods, this seems at first not to be a very promising approach, since this matrix is essentially triangular, and for a fixed time step even a Jordan block, and thus not diagonalizable. If one chooses different time steps, however, diagonalization becomes possible, and one has to balance the loss of accuracy caused by necessarily using different time steps against the numerical errors in the diagonalization of these almost non-diagonalizable matrices. We study this trade-off mathematically for the wave equation with a Crank-Nicolson discretization, and propose an optimization strategy for the choice of the parameters. We illustrate our results with numerical experiments for model wave equations in various dimensions, and also with an industrial test case for the elasticity equations.
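The trade-off described above can be seen on a toy all-at-once matrix for backward Euler with geometrically graded time steps (a minimal sketch; the talk analyzes Crank-Nicolson for the wave equation, and all names and parameter values here are illustrative):

```python
import numpy as np

def time_stepping_matrix(dts):
    """All-at-once backward Euler matrix B, row k encoding (u_k - u_{k-1}) / dt_k."""
    n = len(dts)
    B = np.zeros((n, n))
    for k in range(n):
        B[k, k] = 1.0 / dts[k]
        if k > 0:
            B[k, k - 1] = -1.0 / dts[k]
    return B

n, dt0 = 8, 0.1
for tau in (1.0, 1.2):
    dts = dt0 * tau ** np.arange(n)   # geometric steps dt_k = dt0 * tau**k
    B = time_stepping_matrix(dts)
    lam, V = np.linalg.eig(B)
    # tau = 1: all steps equal, B is a scaled Jordan block and V is numerically
    # singular. tau > 1: the eigenvalues 1/dt_k are distinct, so B is
    # diagonalizable, but cond(V) blows up as tau -> 1; this is the trade-off
    # between discretization error (tau large) and roundoff (tau close to 1).
    print(f"tau = {tau}: cond(V) = {np.linalg.cond(V):.3e}")
```

For the graded steps, the eigendecomposition reconstructs B accurately; how fast the conditioning degrades as the grading ratio tends to 1 is exactly what the optimization strategy in the talk quantifies.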
The parareal method is a numerical method for solving time-evolution problems in parallel. It uses two propagators: a coarse one, fast but inaccurate, and a fine one, slow but more accurate. Instead of running the fine propagator over the whole time interval, we divide that interval into small subintervals, on which the fine propagator can run in parallel; the desired solution is then obtained with the help of the coarse propagator through parareal iterations. Furthermore, each local subproblem can be solved by an iterative method, and instead of running this local solver until convergence, one may perform only a few of its iterations within each parareal iteration. The propagators then become much cheaper but sharply lose their accuracy, and we hope that convergence will nevertheless be achieved across the parareal iterations. In this talk, we propose to couple parareal with a well-known iterative method, Optimized Schwarz Waveform Relaxation (OSWR), using only a few OSWR iterations in the fine propagator and a simple coarse propagator deduced from the backward Euler method. We present the analysis of this coupled method for the 1-dimensional advection-reaction-diffusion equation; in this case the convergence is almost linear. We also give numerical illustrations for 1D and 2D equations, which show that the convergence is much faster in practice.
This work is in collaboration with Caroline Japhet (Paris 13), Yvon Maday (UPMC), and Pascal Omnes (DM2S, CEA-Saclay).
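The basic parareal iteration described above can be sketched on the scalar model problem u' = -u (a minimal illustration with backward Euler propagators; the OSWR fine solver used in the talk is not reproduced here):

```python
import numpy as np

def G(u, dt):
    """Coarse propagator: one backward Euler step for u' = -u."""
    return u / (1.0 + dt)

def F(u, dt, m=100):
    """Fine propagator: m backward Euler substeps."""
    for _ in range(m):
        u = u / (1.0 + dt / m)
    return u

def parareal(u0, T, N, K):
    """K parareal iterations on N time subintervals of [0, T]."""
    dt = T / N
    U = np.zeros(N + 1)
    U[0] = u0
    for n in range(N):                        # initial sequential coarse sweep
        U[n + 1] = G(U[n], dt)
    for _ in range(K):
        Fu = [F(U[n], dt) for n in range(N)]  # embarrassingly parallel in practice
        Unew = np.zeros(N + 1)
        Unew[0] = u0
        for n in range(N):                    # sequential coarse correction
            Unew[n + 1] = G(Unew[n], dt) + Fu[n] - G(U[n], dt)
        U = Unew
    return U

U = parareal(1.0, T=1.0, N=10, K=5)
print(abs(U[-1] - np.exp(-1.0)))              # small: the iterates approach the fine solution
```

The correction line is the classical parareal update: the cheap coarse sweep is sequential, while the expensive fine solves are independent across subintervals.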
In this talk, we present a new vision of the parareal in time algorithm which has the potential to overcome the classical scalability limitations of domain decomposition schemes for the time domain. To this end, we reformulate the algorithm in an infinite-dimensional function space, where each iteration involves approximations, of increasing accuracy, of time-dependent subproblems formulated on small subdomains of the original time interval. We rigorously prove that the resulting parareal algorithm has a near-optimal convergence rate and a parallel efficiency significantly superior to that of the traditional version; in some cases, the efficiency can even be close to the ideal value of one. We illustrate our findings on two different problems where, in addition, we explore the potential benefits of reusing information from previous parareal iterations.
This is joint work with Yvon Maday (UPMC) and Walid Kheriji; the companion paper is "Towards full scalability in the parareal in time algorithm".
Several numerical schemes have been proposed to accelerate the time propagation of evolution equations, among which we single out the parareal algorithm; this method has been shown to perform well in many situations, and the next step is to use it for problems with additional structure, such as Hamiltonian dynamics, where symplectic schemes are needed. As the "plain" parareal iterations do not give rise to a symplectic scheme, several proposals have been put forward to correct this drawback. After a short review of previous work, we present in this talk our new approach, together with theoretical and numerical evidence supporting it.
This work is in collaboration with Frederic Legoll (ENPC) and Yvon Maday (UPMC).
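As background for why symplectic schemes matter in this setting, here is a minimal illustration (not the authors' parareal correction) comparing explicit and symplectic Euler on the harmonic oscillator with H(q, p) = (p^2 + q^2)/2:

```python
# Explicit Euler is not symplectic: the energy drifts steadily. Symplectic
# Euler keeps the energy bounded over long times, which is the property
# "plain" parareal iterations destroy and the talk's approach restores.
def explicit_euler(q, p, dt):
    return q + dt * p, p - dt * q

def symplectic_euler(q, p, dt):
    p = p - dt * q           # kick: update momentum with the old position
    return q + dt * p, p     # drift: update position with the new momentum

def energy_after(step, n_steps, dt=0.01):
    q, p = 1.0, 0.0          # initial energy H = 0.5
    for _ in range(n_steps):
        q, p = step(q, p, dt)
    return 0.5 * (p * p + q * q)

print(energy_after(explicit_euler, 10_000))    # drifts well above 0.5
print(energy_after(symplectic_euler, 10_000))  # stays close to 0.5
```

For explicit Euler the energy grows by the factor (1 + dt^2) at every step, while symplectic Euler exactly conserves a modified energy within O(dt) of H.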
I will present ideas used in molecular dynamics simulations to efficiently simulate high-dimensional metastable stochastic dynamics on parallel architectures. The cornerstone of these techniques is the quasi-stationary distribution.
- C. Le Bris, T. Lelièvre, M. Luskin and D. Perez, A mathematical formalization of the parallel replica dynamics, Monte Carlo Methods and Applications, 18(2), 119–146, (2012).
- D. Aristoff, T. Lelièvre and G. Simpson, The parallel replica method for simulating long trajectories of Markov chains, AMRX, 2, 332–352, (2014).
- A. Binder, T. Lelièvre and G. Simpson, A Generalized Parallel Replica Dynamics, Journal of Computational Physics, 284, 595–616, (2015).
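The probabilistic fact underlying the parallel replica method of the references above can be checked numerically: once replicas are distributed according to the quasi-stationary distribution, exit times from a metastable state are approximately exponential, and the first exit among R independent replicas, rescaled by R, has the same exponential law. A sketch, with all names and values illustrative:

```python
import numpy as np

# If each replica's exit time is Exp(lam), then min(T_1, ..., T_R) is
# Exp(R * lam), so R * min(T_1, ..., T_R) is again Exp(lam). This is what
# lets parallel replica advance the physical clock R times faster without
# biasing the exit statistics.
rng = np.random.default_rng(0)
lam, R, n_samples = 2.0, 8, 200_000
T = rng.exponential(scale=1.0 / lam, size=(n_samples, R))  # exit times of R replicas
rescaled = R * T.min(axis=1)                               # parallel-replica clock update
print(rescaled.mean(), 1.0 / lam)                          # both close to 0.5
```

The mean (and in fact the whole distribution) of the rescaled first exit matches that of a single replica, illustrating why the dephasing step toward the quasi-stationary distribution is the crucial ingredient.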