Anderson-Accelerated Convergence for Incompressible Navier-Stokes Equations
Authors: Ploeg, A. van der
Conference/Journal: 22nd Numerical Towing Tank Symposium (NuTTS 2019), Tomar, Portugal
Date: Sep 30, 2019
We consider iterative solvers for the systems of non-linear equations that need to be solved to compute incompressible flows. To ensure that the computed solution is not spoiled by grid dependence, the grids have to be sufficiently fine, which causes these systems to become large. For unsteady flows such a system has to be solved at every time step; to make sure that the unsteady behavior is computed correctly, the time step cannot be chosen too large, and at each time step the system of non-linear equations has to be solved sufficiently accurately. As a result, for many cases the computational effort is quite substantial.

Therefore, in this paper we study the effectiveness of an acceleration strategy, introduced in Anderson, 1965, to reduce the computational effort. In recent years, this strategy has been analyzed in the context of solution methods for fixed-point problems, Fang and Saad, 2009. In the sequel of this report, it will be referred to as Anderson Acceleration (AA). The basic idea is that, if the problem to solve were linear, at each iteration in which AA is applied part of the iteration history is used to optimize the next update of the approximate solution. Therefore, several vectors from previous iterations have to be stored and frequently updated. This is very similar to the basic idea of the well-known minimal residual method GMRES, Saad and Schultz, 1986, for solving a non-symmetric system of linear equations.

In Pollock et al., 2018, Anderson-accelerated Picard iterations are analyzed for solving the incompressible Navier-Stokes equations, and tested by computing the steady, laminar flow in a 2D and a 3D lid-driven cavity. There it is shown that Anderson Acceleration can provide a significant, and sometimes dramatic, improvement in the convergence behavior, and it is even proven analytically that, as long as the underlying fixed-point problem satisfies certain constraints, AA provides guaranteed improved convergence behavior.
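To make the basic idea concrete, the following is a minimal sketch of Anderson Acceleration for a generic fixed-point problem x = g(x), not the paper's Navier-Stokes solver: a small least-squares problem over the differences of the last m residuals determines how the stored history is combined into the next iterate. The function name, the depth parameter m, and the scalar test problem are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def anderson_accelerate(g, x0, m=5, tol=1e-10, max_iter=100):
    """Anderson Acceleration (depth m) for the fixed-point problem x = g(x).

    Illustrative sketch: stores the last m differences of iterates and
    residuals, and solves a small least-squares problem each iteration
    to mix the history into the next update (cf. Anderson, 1965).
    """
    x = np.asarray(x0, dtype=float)
    gx = g(x)
    f = gx - x                       # current residual g(x) - x
    dG, dF = [], []                  # histories of g-value and residual differences
    for _ in range(max_iter):
        if np.linalg.norm(f) < tol:
            break
        if dF:
            # Mixing coefficients: minimize || f - DF @ gamma ||_2
            DF = np.column_stack(dF)
            gamma, *_ = np.linalg.lstsq(DF, f, rcond=None)
            x_new = gx - np.column_stack(dG) @ gamma
        else:
            x_new = gx               # plain Picard step on the first iteration
        gx_new = g(x_new)
        f_new = gx_new - x_new
        dG.append(gx_new - gx)
        dF.append(f_new - f)
        if len(dF) > m:              # keep only the last m history vectors
            dG.pop(0)
            dF.pop(0)
        x, gx, f = x_new, gx_new, f_new
    return x

# Example: accelerate the classic contraction x = cos(x)
x_star = anderson_accelerate(np.cos, np.array([1.0]))
```

With m = 0 this reduces to plain Picard iteration; the stored difference vectors are exactly the "several vectors from previous iterations" mentioned above, and the least-squares solve plays the same role as the minimization step in GMRES.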