Laurent Jay, University of Iowa

We consider the application of implicit methods to any type of initial value problem (IVP). We present a new predictor algorithm for the iterates of modified Newton-type/fixed-point iterations used to solve the nonlinear equations. The new predictor algorithm is universal in that it applies to any implicit method applied to any type of IVP for ODEs, DAEs, and time-varying PDEs; it generalizes, unifies, and improves previous approaches. We illustrate our results for the internal stages of implicit Runge-Kutta (IRK) type methods applied to ODEs and for the iterated corrections of starting approximation algorithms based, for example, on continuous/dense output.

Our methodology relies on the existence of an asymptotic expansion in the stepsize h of the error between the exact discrete values and an initial starting approximation. Arbitrarily high-order approximations can be obtained assuming sufficient smoothness of the solution. Not only can the starting approximations (i.e., the first iterate) be predicted, but so can the inner iterates of modified Newton-type/fixed-point iterations, leading to a new type of iteration.

These improved prediction algorithms require some extra memory to store previous exact errors, but they require minimal computational effort since no new evaluations of the functions describing the vector field and the constraints are needed. Moreover, for a given iteration, each component and each correction can be obtained completely independently of the others, allowing for an embarrassingly parallel implementation. This methodology allows a drastic reduction in the number of Newton-type iterations in implicit methods at the cost of some extra memory storage, illustrating the well-known computer science principle of the time-memory trade-off.
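To give a concrete flavor of the idea, the following is a minimal sketch, not the author's algorithm: implicit Euler on a scalar test ODE (y' = -5y^3, an illustrative choice), where the Newton starting value at each step is the trivial predictor plus a polynomial extrapolation of the stored errors (converged value minus predictor) from previous steps. All function names, the test problem, and the extrapolation order are assumptions made for illustration; the abstract's methodology covers general implicit methods and higher-order starting approximations.

```python
import numpy as np

def f(y):
    """Right-hand side of the illustrative test ODE y' = -5*y**3."""
    return -5.0 * y**3

def df(y):
    """Derivative of f with respect to y, needed for Newton's method."""
    return -15.0 * y**2

def newton_solve(y_n, h, y_guess, tol=1e-12, max_it=50):
    """Solve the implicit Euler equation y = y_n + h*f(y) by Newton's method.

    Returns the converged value and the number of iterations used."""
    y = y_guess
    for k in range(1, max_it + 1):
        r = y - y_n - h * f(y)           # residual of the implicit equation
        dy = -r / (1.0 - h * df(y))      # Newton correction
        y += dy
        if abs(dy) < tol:
            return y, k
    return y, max_it

def integrate(y0, h, n_steps, predict=True, n_errs=3):
    """Implicit Euler over n_steps steps.

    With predict=True, the Newton starting value is the trivial predictor
    y_n plus an extrapolation of the last n_errs stored exact errors
    (converged stage value minus trivial predictor); no extra evaluations
    of f are needed for the prediction, only extra storage."""
    y, errs, total_its = y0, [], 0
    for _ in range(n_steps):
        base = y                          # trivial starting approximation
        guess = base
        if predict and errs:
            if len(errs) >= 2:
                # polynomial extrapolation of the stored errors to this step
                coef = np.polyfit(range(len(errs)), errs, len(errs) - 1)
                guess = base + np.polyval(coef, len(errs))
            else:
                guess = base + errs[-1]   # constant extrapolation
        y_new, its = newton_solve(y, h, guess)
        errs = (errs + [y_new - base])[-n_errs:]   # store the exact error
        total_its += its
        y = y_new
    return y, total_its
```

Running `integrate` with `predict=True` and `predict=False` on the same problem produces the same implicit Euler trajectory (to the Newton tolerance) while the predicted starts cut the total Newton iteration count, which is the time-memory trade-off described above in miniature.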