Normally, PETSc users will access the matrix solvers through the SLES interface, as discussed in Chapter SLES: Linear Equations Solvers, but the underlying factorization and triangular solve routines are also directly accessible to the user.
The LU and Cholesky matrix factorizations are split into two or three stages, depending on the user's needs. The first stage is to calculate an ordering for the matrix. The ordering is generally computed to reduce fill in a sparse factorization; it does not make much sense for a dense matrix. An ordering is obtained with the command
ierr = MatGetOrdering(Mat matrix,MatOrderingType type,IS* rowperm,IS* colperm);

The currently available alternatives for the ordering type are MATORDERING_NATURAL (natural ordering), MATORDERING_ND (nested dissection), MATORDERING_1WD (one-way dissection), MATORDERING_RCM (reverse Cuthill-McKee), and MATORDERING_QMD (quotient minimum degree).
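For example, a reverse Cuthill-McKee ordering of an assembled sparse matrix might be obtained as follows (a minimal sketch; the matrix A and the error-checking macro CHKERRQ() are assumed to be set up elsewhere in the user's code):

  IS  rowperm, colperm;
  int ierr;

  /* compute row and column permutations intended to reduce fill */
  ierr = MatGetOrdering(A,MATORDERING_RCM,&rowperm,&colperm); CHKERRQ(ierr);
  /* ... use the permutations, e.g., in a factorization ... */
  ierr = ISDestroy(rowperm); CHKERRQ(ierr);
  ierr = ISDestroy(colperm); CHKERRQ(ierr);

Both index sets are created by the call and should be destroyed by the user when they are no longer needed.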
Users can add their own ordering routines by providing a function with the calling sequence

int reorder(Mat A,MatOrderingType type,IS* rowperm,IS* colperm);

Here A is the matrix for which we wish to generate a new ordering, type may be ignored, and rowperm and colperm are the row and column permutations generated by the ordering routine. The user registers the ordering routine with the command
ierr = MatOrderingRegister(MatOrderingType inname,char *path,char *sname,int (*reorder)(Mat,MatOrderingType,IS*,IS*));

The input argument inname is a string of the user's choice: either the name of an ordering defined in mat.h or a new string, indicating that a new ordering is being introduced. See the code in src/mat/impls/order/sorder.c and other files in that directory for examples of how the reordering routines may be written.
Once the reordering routine has been registered, it can be selected for use at runtime with the command line option -mat_ordering_type sname. If reordering directly, the user should provide the name as the second input argument of MatGetOrdering().
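As an illustrative sketch (the routine name MyOrdering and the registered name "myorder" are placeholders of the user's choosing, and the trivial natural ordering is used only to keep the example short), a user-defined ordering might be written and registered as follows:

  #include "mat.h"

  /* returns the identity (natural) permutation for rows and columns;
     sequential matrices are assumed for simplicity */
  int MyOrdering(Mat A,MatOrderingType type,IS *rowperm,IS *colperm)
  {
    int ierr,m,n;
    ierr = MatGetSize(A,&m,&n); CHKERRQ(ierr);
    ierr = ISCreateStride(PETSC_COMM_SELF,m,0,1,rowperm); CHKERRQ(ierr);
    ierr = ISCreateStride(PETSC_COMM_SELF,n,0,1,colperm); CHKERRQ(ierr);
    return 0;
  }

  /* register the ordering; the path argument (here 0) is used only when the
     routine is loaded from a dynamic library */
  ierr = MatOrderingRegister("myorder",0,"MyOrdering",MyOrdering); CHKERRQ(ierr);
  /* select it directly ... */
  ierr = MatGetOrdering(A,"myorder",&rowperm,&colperm); CHKERRQ(ierr);
  /* ... or at runtime with -mat_ordering_type myorder */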
The following routines perform complete, in-place symbolic and numerical factorizations for symmetric and nonsymmetric matrices, respectively:

ierr = MatCholeskyFactor(Mat matrix,IS permutation,double pf);
ierr = MatLUFactor(Mat matrix,IS rowpermutation,IS columnpermutation,double pf);

The argument pf is the predicted fill, that is, the expected ratio of the number of nonzeros in the factored matrix to the number in the original matrix.
For sparse matrices it is very unlikely that the factorization is actually done in-place. More likely, new space is allocated for the factored matrix and the old space deallocated, but to the user it appears in-place because the factored matrix replaces the unfactored matrix.
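For instance, an in-place LU factorization of a sparse matrix might be coded as follows (a sketch; the nested dissection ordering and the fill estimate of 2.0 are arbitrary choices):

  IS rowperm, colperm;

  ierr = MatGetOrdering(A,MATORDERING_ND,&rowperm,&colperm); CHKERRQ(ierr);
  ierr = MatLUFactor(A,rowperm,colperm,2.0); CHKERRQ(ierr); /* A now holds the factors */
  ierr = ISDestroy(rowperm); CHKERRQ(ierr);
  ierr = ISDestroy(colperm); CHKERRQ(ierr);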
The two factorization stages can also be performed separately by using the out-of-place mode:
ierr = MatCholeskyFactorSymbolic(Mat matrix,IS perm,double pf,Mat *result);
ierr = MatLUFactorSymbolic(Mat matrix,IS rowperm,IS colperm,double pf,Mat *result);
ierr = MatCholeskyFactorNumeric(Mat matrix,Mat *result);
ierr = MatLUFactorNumeric(Mat matrix,Mat *result);

In this case, the contents of the matrix result are undefined between the symbolic and numeric factorization stages. It is possible to reuse the symbolic factorization: for the second and succeeding factorizations, one simply calls the numerical factorization with a new input matrix and the same factored result matrix. It is essential that the new input matrix have exactly the same nonzero structure as the original factored matrix. (The numerical factorization merely overwrites the numerical values in the factored matrix and does not disturb the symbolic portion, thus enabling reuse of the symbolic phase.) In general, calling XXXFactorSymbolic with a dense matrix will do nothing except allocate the new matrix; the XXXFactorNumeric routines will do all of the work.
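For example, when factoring a sequence of matrices that share a single nonzero structure, the symbolic stage can be performed once and the numeric stage repeated (a sketch using the signatures above; the fill estimate of 2.0 is arbitrary):

  Mat F;  /* the factored matrix */

  ierr = MatLUFactorSymbolic(A,rowperm,colperm,2.0,&F); CHKERRQ(ierr);
  ierr = MatLUFactorNumeric(A,&F); CHKERRQ(ierr);
  /* ... solve with F, then change the numerical values in A
     (keeping exactly the same nonzero structure) ... */
  ierr = MatLUFactorNumeric(A,&F); CHKERRQ(ierr); /* reuses the symbolic phase */
  ierr = MatDestroy(F); CHKERRQ(ierr);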
Why provide the plain XXXfactor routines when one could simply call the two-stage routines? The answer is that if one desires in-place factorization of a sparse matrix, the intermediate stage between the symbolic and numeric phases cannot be stored in a result matrix, and it does not make sense to store the intermediate values inside the original matrix that is being transformed. We originally made the combined factor routines do either in-place or out-of-place factorization, but then decided that this approach was not needed and could easily lead to confusion.
We do not currently support sparse matrix factorization with pivoting
for numerical stability. This is because trying to both reduce fill
and do pivoting can become quite complicated. Instead, we provide a
poor stepchild substitute. After one has obtained a reordering with MatGetOrdering(Mat A,MatOrderingType type,IS *row,IS *col), one may call
ierr = MatReorderForNonzeroDiagonal(Mat A,double tol,IS row,IS col);

which will try to reorder the columns to ensure that no values along the diagonal are smaller than tol in absolute value. If small values are detected and corrected for, a nonsymmetric permutation of the rows and columns will result. This is not guaranteed to work, but it may help if one was simply unlucky in the original ordering. When using the SLES solver interface, the options -pc_ilu_nonzeros_along_diagonal <tol> and -pc_lu_nonzeros_along_diagonal <tol> may be used. Here, tol is an optional tolerance to decide if a value is nonzero; by default it is 1.e-10.
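A typical usage pattern is to adjust the ordering immediately after computing it and before factoring (a sketch; the tolerance 1.e-10 matches the default mentioned above, and the ordering and fill estimate are arbitrary choices):

  ierr = MatGetOrdering(A,MATORDERING_RCM,&rowperm,&colperm); CHKERRQ(ierr);
  ierr = MatReorderForNonzeroDiagonal(A,1.e-10,rowperm,colperm); CHKERRQ(ierr);
  ierr = MatLUFactor(A,rowperm,colperm,2.0); CHKERRQ(ierr);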
Once a matrix has been factored, it is natural to solve linear systems.
The following four routines enable this process:
ierr = MatSolve(Mat A,Vec x,Vec y);
ierr = MatSolveTrans(Mat A,Vec x,Vec y);
ierr = MatSolveAdd(Mat A,Vec x,Vec y,Vec w);
ierr = MatSolveTransAdd(Mat A,Vec x,Vec y,Vec w);

The matrix A of these routines must have been obtained from a factorization routine; otherwise, an error will be generated. In general, the user should use the SLES solvers introduced in the next chapter rather than using these factorization and solve routines directly.
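For completeness, a short sketch of how these routines might be called (assuming A has been factored as above and that b, x, and w are vectors of compatible size; the comments reflect the assumed convention that the vector following the matrix is the right-hand side and the last vector is the solution):

  ierr = MatSolve(A,b,x); CHKERRQ(ierr);           /* x = inv(A) b       */
  ierr = MatSolveTrans(A,b,x); CHKERRQ(ierr);      /* x = inv(A^T) b     */
  ierr = MatSolveAdd(A,b,w,x); CHKERRQ(ierr);      /* x = inv(A) b + w   */
  ierr = MatSolveTransAdd(A,b,w,x); CHKERRQ(ierr); /* x = inv(A^T) b + w */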