Since the exact solution of the linear Newton systems within (5) and (7) at each iteration can be costly, modifications are often introduced that significantly reduce these expenses yet retain the rapid convergence of Newton's method. Inexact or truncated Newton techniques approximately solve the linear systems using an iterative scheme. In comparison with direct methods for solving the Newton systems, iterative methods have the virtue of requiring little space for matrix storage and potentially saving significant computational work. Within the class of inexact Newton methods, of particular interest are Newton-Krylov methods, where the subsidiary iterative technique for solving the Newton system is chosen from the class of Krylov subspace projection methods. Note that at runtime the user can set any of the linear solver options discussed in the chapter SLES: Linear Equations Solvers, such as -ksp_type <ksp_method> and -pc_type <pc_method>, to select the Krylov subspace method and the preconditioner.
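For example, assuming an application executable named ./app built with the library (the name ./app is a placeholder, not an executable shipped with the library), the Krylov method and preconditioner used for the inner linear solves could be selected at runtime as follows:

```shell
# Use GMRES as the Krylov method and ILU as the preconditioner
# for the linear solves inside the Newton iteration.
./app -ksp_type gmres -pc_type ilu
```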
Two levels of iterations occur for the inexact techniques, where
during each global or outer Newton iteration a sequence of
subsidiary inner iterations of a linear solver is performed.
Appropriate control of the accuracy to which the subsidiary
iterative method solves the Newton system
at each global iteration is critical, since these
inner iterations determine the asymptotic convergence rate for
inexact Newton techniques.
While the Newton systems must be solved well enough to retain fast local convergence of Newton's iterates, use of excessive inner iterations, particularly when $\| x_k - x_* \|$ is large, is neither necessary nor economical.
Thus, the number of required inner iterations typically increases
as the Newton process progresses, so that the truncated iterates
approach the true Newton iterates.
A sequence of nonnegative numbers $\{\eta_k\}$ can be used to indicate the variable convergence criterion. In this case, when solving a system of nonlinear equations, the update step of the Newton process remains unchanged, and direct solution of the linear system is replaced by iteration on the system until the residuals
$$ r_k^{(i)} = F'(x_k)\,\Delta x_k + F(x_k) $$
satisfy
$$ \frac{\| r_k^{(i)} \|}{\| F(x_k) \|} \le \eta_k \le \eta < 1 .$$
Here $x_0$ is an initial approximation of the solution, and $\| \cdot \|$ denotes an arbitrary norm in $\mathbb{R}^n$.
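The criterion above can be sketched in a few lines. The following is an illustrative Python sketch, not library code: a plain Landweber (gradient) iteration stands in for a Krylov method as the inner solver, and the simple forcing choice $\eta_k = \min(0.5, \|F(x_k)\|)$ is an assumption made here for demonstration, not a recommended rule. The test problem $F(x) = (x_1^2 + x_2^2 - 4,\; x_1 - x_2)$ is likewise invented for illustration.

```python
import numpy as np

def inexact_newton(F, J, x0, eta_max=0.5, tol=1e-10, max_outer=50):
    """Inexact Newton: at each outer step, the linear system
    J(x_k) dx = -F(x_k) is solved only until the linear residual
    satisfies ||J dx + F|| <= eta_k * ||F||."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_outer):
        Fx = F(x)
        normF = np.linalg.norm(Fx)
        if normF < tol:
            break
        # Illustrative forcing term: tighten the inner tolerance as ||F||
        # shrinks, so truncated iterates approach true Newton iterates.
        eta = min(eta_max, normF)
        Jx = J(x)
        dx = np.zeros_like(x)
        # Inner iterative solve; Landweber iteration on the normal
        # equations stands in for a Krylov method here.
        tau = 1.0 / np.linalg.norm(Jx, 2) ** 2
        for _ in range(5000):
            r = Jx @ dx + Fx                      # linear residual r_k^(i)
            if np.linalg.norm(r) <= eta * normF:  # truncation criterion
                break
            dx -= tau * (Jx.T @ r)
        x = x + dx
    return x

# Example: F(x) = (x1^2 + x2^2 - 4, x1 - x2), root at (sqrt(2), sqrt(2)).
F = lambda x: np.array([x[0]**2 + x[1]**2 - 4.0, x[0] - x[1]])
J = lambda x: np.array([[2*x[0], 2*x[1]], [1.0, -1.0]])
root = inexact_newton(F, J, [2.0, 1.0])
```

Note how the inner loop never demands more accuracy than $\eta_k \|F(x_k)\|$; far from the solution the linear solve terminates early, while near the solution the shrinking $\eta_k$ forces increasingly accurate steps.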
By default a constant relative convergence tolerance is used for
solving the subsidiary linear systems within the Newton-like methods
of SNES. When solving a system of nonlinear equations, one can
instead employ the techniques of Eisenstat and Walker [6]
to compute the forcing terms $\eta_k$ at each step of the nonlinear solver by using the option -snes_ksp_ew_conv. In addition, by adding one's own KSP convergence test (see Section Convergence Tests), one can easily create customized, problem-dependent inner convergence tests.
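To illustrate the flavor of the Eisenstat and Walker approach, the sketch below computes their "Choice 2" forcing term, $\eta_k = \gamma \left( \|F(x_k)\| / \|F(x_{k-1})\| \right)^{\alpha}$, with one common safeguard. This is a simplified illustration; the actual implementation, its safeguards, and its default parameter values may differ.

```python
def ew_forcing(norm_f, norm_f_prev, eta_prev,
               gamma=1.0, alpha=2.0, eta_max=0.9):
    """Eisenstat-Walker 'Choice 2' forcing term:
        eta_k = gamma * (||F_k|| / ||F_{k-1}||)^alpha
    with a safeguard that keeps eta from dropping too sharply
    when the previous tolerance was still loose."""
    eta = gamma * (norm_f / norm_f_prev) ** alpha
    safeguard = gamma * eta_prev ** alpha
    if safeguard > 0.1:          # only activate when not negligible
        eta = max(eta, safeguard)
    return min(eta, eta_max)     # cap to stay strictly below 1

# If ||F|| was halved over the last step, the next linear solve is
# asked for roughly two digits of relative accuracy:
eta = ew_forcing(norm_f=0.5, norm_f_prev=1.0, eta_prev=0.5)
```

The effect is that early nonlinear iterations tolerate loose linear solves, while the tolerance tightens automatically as the residual norms begin to decrease rapidly.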