A brief description of my current research      G. Söderlind, 22 April 2009


 

Most of my recent research has been connected to the problem of adaptive grid generation in ODEs. It started with an investigation of whether control theory could be used to control the step size in initial value problem solvers. It turned out to be more or less straightforward to use classical control structures such as PI controllers, and they were easily implemented, even in existing codes. A number of other algorithmic changes were also desirable in order to take full advantage of the new controllers. This led to studies of how to manage, e.g., Newton iterations, especially in connection with Jacobian reevaluations and refactorizations. An easy-to-read introduction to the work mentioned below is G. Söderlind: Time-step selection algorithms: Adaptivity, Control and Signal Processing, Appl. Numer. Math. 56 (2006), 488-502.
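The basic idea can be sketched in a few lines. Below is a minimal PI step size controller in Python; the gains kI and kP are illustrative placeholders (the papers cited here discuss properly tuned values), and the scaling by the error-estimator order k follows common practice:

```python
def pi_step_controller(h, err, err_prev, tol, k, kI=0.3, kP=0.4):
    """One PI step size update (illustrative gains; see the cited
    papers for tuned values).

    h        -- current step size
    err      -- current error estimate
    err_prev -- error estimate of the previous step
    tol      -- error tolerance
    k        -- order of the local error estimator
    """
    # Integral action drives err toward tol; proportional action
    # damps oscillations in the step size sequence.
    ratio = (tol / err) ** (kI / k) * (err_prev / err) ** (kP / k)
    return h * ratio
```

When the error estimate equals the tolerance on two consecutive steps, the step size is left unchanged; an error above tolerance shrinks the step, one below tolerance grows it.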

 

In 2001, my interest in this area shifted to using digital signal processing techniques. The aim is to generate smooth step size sequences, even when the data (usually the error estimate) is a nonsmooth function. Here, too, it is possible to use standard ideas and develop special controllers based on digital filters. These can have a significant effect on the regularity of the solution, and there is a fairly large collection of filters to choose from. Best of all, this approach comes for free: there is no extra computation involved. For full technical details, see G. Söderlind: Digital filters in adaptive time-stepping, ACM Trans. Math. Software 29 (2003), 1-26.
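A typical member of this filter class uses two recent error estimates and the previous step ratio. The sketch below shows the general two-step filter structure; the coefficient values passed in are purely illustrative, as the recommended coefficient sets are tabulated in the TOMS paper:

```python
def filtered_step(h, h_prev, err, err_prev, tol, beta1, beta2, alpha):
    """One update of a two-step digital filter step size controller.

    The filter class is
        h_new = h * (tol/err)**beta1 * (tol/err_prev)**beta2
                  * (h/h_prev)**(-alpha),
    where (beta1, beta2, alpha) select a particular filter; tuned
    coefficient sets are given in the cited TOMS paper.
    """
    return (h * (tol / err) ** beta1
              * (tol / err_prev) ** beta2
              * (h / h_prev) ** (-alpha))
```

At steady state (error equal to tolerance, constant step size) the filter leaves the step size unchanged, while a rough error sequence is attenuated rather than passed straight through to the step size.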

 

The effect of the technique can be seen in the following graphs. In a conventional code, various data are plotted as a function of the tolerance setting in a drug release problem. In particular, the actual accuracy produced by the code is highly irregular, varying by approximately a factor of 30 for even a minute change in the tolerance. With digital filters generating the step size, as well as modified management of the Newton iteration, there is a dramatic improvement in computational stability. The accuracy now varies in a completely predictable way. For further details, see G. Söderlind, L. Wang: Adaptive time-stepping and computational stability, J. Comp. Appl. Math. 185 (2006), 225-243.

 

You can try out these techniques in two Matlab codes that you can download from this zip archive. There is a modified ode45 called ode45dc, and a modified ode23s called ode23sdc, where the appended letters dc refer to “digital control.” Both codes use a limiter, supplied in a separate m-file, and an m-file called Lpfilter, which does the actual lowpass filtering. ode45dc and ode23sdc are otherwise plug-compatible with the original codes. The implementation of the new step size control is clearly commented. Note that the only modification in these codes is the step size control (no modifications were made to the Newton process), which means that the codes should not be considered full-fledged implementations of the methods referred to here. The effect on the step size control can be seen in this graph, where a simple chemical reaction kinetics problem is solved. The red curve shows the original ode45 code’s step size as a function of time, and the smooth blue curve shows the effect of using a digital filter, as in ode45dc. Here the problem is caused by the equations becoming mildly stiff, forcing the step size to be limited by stability. The conventional controller cannot maintain full stability, causing “ringing” in the step size sequence; this is eliminated by the filter in ode45dc, which offers improved stability.
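The purpose of a limiter is to keep any single step ratio within a moderate band without introducing kinks into the step size sequence. The sketch below shows one smooth limiter of arctan type that appears in this line of work; the exact limiter shipped with the m-files may differ in detail:

```python
import math

def limiter(x, kappa=1.0):
    """Smooth step-ratio limiter of arctan type (one common choice;
    the limiter in the downloadable m-files may differ in detail).
    Maps a raw step ratio x into (1 - kappa*pi/2, 1 + kappa*pi/2),
    leaving ratios near 1 essentially unchanged."""
    return 1.0 + kappa * math.atan((x - 1.0) / kappa)
```

Because the limiter is smooth and monotone, it preserves the regularity of a filtered step size sequence, whereas a hard clip would reintroduce nonsmoothness.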

 

The reason why it is important to construct a smooth step size sequence is that, as long as the principal error function is regular, the optimal step size sequence is regular too; i.e., if the controller produces a rough step size sequence, the solution loses some accuracy. The more visible effect, however, is that with a smooth step size sequence it is possible to obtain a highly deterministic correlation between the prescribed error tolerance and the actual error.

 

In some cases it appears preferable to control the density of the grid points rather than the step size. This idea is not really new, but in the control theoretic context it does make a difference. For example, it was long thought that one could not vary the step size in geometric integration, such as for Hamiltonian systems, since any step size variation destroys the discretization’s ability to conserve, say, energy. However, that conclusion was based on control structures of the classical, multiplicative type, and it is precisely this type of control that must be ruled out. In addition, the step size control adds dynamics that have no counterpart in the continuous system.

 

By instead working with grid density control, it is possible to overcome these problems. By constructing a Hamiltonian control system and discretizing it with a symplectic method, one obtains a controller that stays near the control objective for exponentially long times. This technique also corresponds to using a time stretching/compression transformation in the continuous system, so that an extra equation of dynamics is added both to the continuous and the discrete system. It is then possible to prove near-conservation of invariants in integrable systems. Details are worked out in E. Hairer, G. Söderlind: Explicit, time reversible, adaptive step size control, SIAM J. Sci. Comput. 26 (2005), 1838-1851.
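The structure of such a controller can be sketched as follows. This is not the paper's exact formulation, but a simplified illustration of the idea: Störmer-Verlet for the Kepler problem, with a time-reversible step density rho updated at half-steps from the logarithmic derivative of an assumed density function g(q) = r^p (a Sundman-type choice; the exponent p = 3/2 here is an assumption for illustration):

```python
import math

def accel(q):
    # Kepler acceleration, gravitational parameter mu = 1 (assumption).
    r = math.hypot(q[0], q[1])
    return (-q[0] / r**3, -q[1] / r**3)

def adaptive_verlet(q, v, eps, n_steps, p=1.5):
    """Stormer-Verlet with a reversible step density rho: a sketch of
    the structure in the Hairer-Soderlind paper, not its exact scheme.
    Step size h = eps/rho; rho is updated at half-points using
    G(y) = -d/dt log g(q), with the assumed density g = r^p."""
    r = math.hypot(q[0], q[1])
    rho = r ** (-p)                        # initialize rho ~ 1/g
    for _ in range(n_steps):
        r2 = q[0]**2 + q[1]**2
        G = -p * (q[0]*v[0] + q[1]*v[1]) / r2
        rho = rho + eps * G                # symmetric half-point update
        h = eps / rho                      # current step size
        ax, ay = accel(q)
        vx, vy = v[0] + 0.5*h*ax, v[1] + 0.5*h*ay
        q = (q[0] + h*vx, q[1] + h*vy)
        ax, ay = accel(q)
        v = (vx + 0.5*h*ax, vy + 0.5*h*ay)
    return q, v

# Eccentric Kepler orbit, eccentricity e = 0.6; the energy H = -1/2
# should stay near its initial value over many revolutions.
e = 0.6
q0, v0 = (1.0 - e, 0.0), (0.0, math.sqrt((1 + e) / (1 - e)))
q, v = adaptive_verlet(q0, v0, 1e-2, 20000)
H = 0.5*(v[0]**2 + v[1]**2) - 1.0/math.hypot(q[0], q[1])
```

The step size automatically shrinks near the perihelion (where the dynamics are fast) and grows near the aphelion, while the symmetric placement of the rho update keeps the overall map time reversible.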

 

Interestingly, benchmarking this adaptive technique against constant step implementations shows that it has the same advantageous long-term behaviour, but often produces much higher accuracy, as the grid points move to the locations where they really matter to accuracy. The added computational cost is marginal, so the technique may sometimes increase performance significantly.

 

This is the case for some problems in celestial mechanics, which can be solved to extremely high precision. An example of the importance of this new Hamiltonian adaptivity can be seen in the Kepler problem. In the graph on the left, you see the trajectory of a planet computed using the constant step size Störmer-Verlet (symplectic) method. The precession is normal, as phase errors cannot be eliminated. The graph on the right uses the same method, but with Hamiltonian adaptivity. Not only is the precession suppressed, but the phase error within the trajectory is reduced by a factor of 30 as well.

 

In a 3D simulation of the outer solar system (Jupiter, Saturn, Uranus, Neptune, 6MB plot) spanning 1.38 billion years and using 10 billion time-steps, one can see that the method is perfectly stable. The widening of the orbits that you see is due to eccentricity and natural (physical) precession. Note that as the average step size is approximately 50 days, and as only every 100,000th  step is plotted in the graph to keep the data file manageable, Jupiter actually completes about 1,150 full orbits between each plotted point. That is, a point (“snapshot”) of the system is plotted only once every 13,800 years.
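The bookkeeping above is easy to verify. The only datum not taken from the text is Jupiter's sidereal period, about 11.86 years:

```python
# Figures quoted in the text; Jupiter's sidereal period (~11.86 yr)
# is the only number not taken from the paragraph above.
avg_step_days  = 50
plot_stride    = 100_000
n_steps        = 10_000_000_000
year_days      = 365.25
jupiter_period = 11.86                                  # years

span_years = n_steps * avg_step_days / year_days        # ~1.37e9 years
gap_years  = plot_stride * avg_step_days / year_days    # ~13,700 years
orbits_gap = gap_years / jupiter_period                 # ~1,150 orbits
```

The three results reproduce the 1.38 billion year span, the roughly 13,800 year gap between plotted snapshots, and the ~1,150 Jupiter orbits per gap quoted above.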

 

This technique can in principle be applied to any problem requiring geometric numerical integration. But the efficiency has yet to be demonstrated in other applications, such as large-scale computations in molecular dynamics. We have carried out preliminary tests that look promising, but the adaptivity is then less likely to save work.

 

A similar technique can be used in two-point boundary value problems. Here it is possible to analyze what the optimal grid point distribution is for local error control, and to generate grid density functions according to the corresponding minimization principle. As the error is often somewhat irregular, the grid generation can be combined with digital filtering in space, in the form of Toeplitz (convolution) filters.
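The combination can be sketched in a few lines: a rough (noisy) density sample is smoothed by a short symmetric Toeplitz (convolution) low-pass filter, and grid points are then placed by equidistributing the smoothed density. The kernel and the density profile below are illustrative choices, not the ones used in the actual work:

```python
import numpy as np

def equidistribute(density, n_points):
    """Place n_points grid points on [0,1] so that each cell carries
    an equal share of the sampled density mass (equidistribution)."""
    x = np.linspace(0.0, 1.0, len(density))
    cdf = np.concatenate(([0.0],
                          np.cumsum(0.5 * (density[1:] + density[:-1])
                                    * np.diff(x))))
    cdf /= cdf[-1]
    # Invert the cumulative density at equally spaced mass levels.
    return np.interp(np.linspace(0.0, 1.0, n_points), cdf, x)

# A rough (noisy) density sample with a sharp feature at x = 0.5,
# smoothed by a short symmetric convolution kernel (illustrative).
rng = np.random.default_rng(0)
xs = np.linspace(0.0, 1.0, 201)
rough = 1.0 + 20.0 * np.exp(-200.0 * (xs - 0.5)**2) \
            + 0.3 * rng.standard_normal(201)
rough = np.clip(rough, 0.1, None)          # keep the density positive
kernel = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0
smooth = np.convolve(rough, kernel, mode="same")
grid = equidistribute(smooth, 41)
```

The resulting grid clusters its points near x = 0.5, where the density is large, while the smoothing prevents the noise in the raw density from producing an irregular point distribution.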

 

The technique also allows the study of the complexity of solving a problem. Naturally, a uniform grid is rarely optimal, and it is important to find a grid distribution such that the problem can be solved at minimal cost.

 

It is hoped that these techniques can be further developed for partial differential equations. That would entail finding a spatial grid density based on the local error in space, while time-stepping can be carried out using either PI controllers or digital filters. I am also investigating new adaptive techniques for exponential integrators.

 

Apart from this type of research, I am also interested in various aspects of nonlinear functional analysis, in particular in relation to stability analysis. Work is ongoing to construct partitioned schemes for PDEs based on using different time discretizations for different (spatial) frequency segments, showing that such a semi-explicit discretization is viable. This, too, will be combined with new adaptive techniques. Similar analyses have been carried out for differential-algebraic equations.

 

I am also interested in various applications where new techniques such as those mentioned above can be used. Currently this is linked to some software projects, as well as industrial contacts. In one case, we are trying to develop improved integrators for mechanical simulations occurring in rolling bearing dynamics.

 

 
