6.4.4. Block Jacobi and Overlapping Additive Schwarz Preconditioners


The block Jacobi and overlapping additive Schwarz methods in PETSc are supported in parallel; however, only the uniprocessor version of the block Gauss-Seidel method is currently in place. By default, the PETSc implementations of these methods employ ILU(0) factorization on each individual block (that is, the default solver on each subblock is PCType=PCILU, KSPType=KSPPREONLY); the user can set alternative linear solvers via the options -sub_ksp_type and -sub_pc_type. In fact, all of the KSP and PC options can be applied to the subproblems by inserting the prefix -sub_ at the beginning of the option name. These options database commands set the particular options for all of the blocks within the global problem. In addition, the routines

   ierr = PCBJacobiGetSubSLES(PC pc,int *n_local,int *first_local,SLES **subsles); 
   ierr = PCASMGetSubSLES(PC pc,int *n_local,int *first_local,SLES **subsles); 
extract the SLES context for each local block. The argument n_local is the number of blocks on the calling processor, and first_local indicates the global number of the first block on the processor. The blocks are numbered successively by processor from zero through gb-1, where gb is the number of global blocks. The array of SLES contexts for the local blocks is given by subsles. This mechanism enables the user to set different solvers for the various blocks. To set the appropriate data structures, the user must explicitly call SLESSetUp() before calling PCBJacobiGetSubSLES() or PCASMGetSubSLES(). For further details, see the example ${PETSC_DIR}/src/sles/examples/tutorials/ex7.c.
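The following fragment sketches this usage for block Jacobi; it assumes a SLES context sles whose operators have already been set, along with vectors b and x for the eventual solve, and the choice of GMRES with Jacobi preconditioning on each block is purely illustrative:

   SLES *subsles;
   PC   pc,subpc;
   KSP  subksp;
   int  ierr,i,n_local,first_local;

   ierr = SLESGetPC(sles,&pc);
   ierr = PCSetType(pc,PCBJACOBI);
   ierr = SLESSetUp(sles,b,x);              /* must precede PCBJacobiGetSubSLES() */
   ierr = PCBJacobiGetSubSLES(pc,&n_local,&first_local,&subsles);
   for (i=0; i<n_local; i++) {
     ierr = SLESGetKSP(subsles[i],&subksp);
     ierr = SLESGetPC(subsles[i],&subpc);
     ierr = KSPSetType(subksp,KSPGMRES);    /* e.g., GMRES on this block  */
     ierr = PCSetType(subpc,PCJACOBI);      /* e.g., Jacobi on this block */
   }
The same per-block settings could instead be selected at run time with -sub_ksp_type gmres -sub_pc_type jacobi.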

The block Jacobi, block Gauss-Seidel, and additive Schwarz preconditioners allow the user to set the number of blocks into which the problem is divided. The options database commands to set this value are -pc_bjacobi_blocks n and -pc_bgs_blocks n, and, within a program, the corresponding routines are

   ierr = PCBJacobiSetTotalBlocks(PC pc,int blocks,int *size); 
   ierr = PCASMSetTotalSubdomains(PC pc,int n,IS *is); 
   ierr = PCASMSetType(PC pc,PCASMType type); 
The optional argument size is an array indicating the size of each block. Currently, for certain parallel matrix formats, only a single block per processor is supported. However, the MATMPIAIJ and MATMPIBAIJ formats support the use of general blocks as long as no blocks are shared among processors. The is argument contains the index sets that define the subdomains.
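As a sketch of the simplest usage, assuming a PC context pc and that passing PETSC_NULL lets PETSc choose equal block sizes and generate the subdomain index sets (one may instead supply the size array or an array of index sets is explicitly):

   /* block Jacobi: four blocks in total, sizes chosen by PETSc */
   ierr = PCSetType(pc,PCBJACOBI);
   ierr = PCBJacobiSetTotalBlocks(pc,4,PETSC_NULL);

   /* or additive Schwarz: four subdomains in total, index sets generated by PETSc */
   ierr = PCSetType(pc,PCASM);
   ierr = PCASMSetTotalSubdomains(pc,4,PETSC_NULL);
Naturally, in a given code only the variant that matches the chosen preconditioner type would be used.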

PCASMType is one of PC_ASM_BASIC, PC_ASM_INTERPOLATE, PC_ASM_RESTRICT, or PC_ASM_NONE and may also be set with the options database command -pc_asm_type [basic,interpolate,restrict,none]. The type PC_ASM_BASIC (or -pc_asm_type basic) corresponds to the standard additive Schwarz method that uses the full restriction and interpolation operators. The type PC_ASM_RESTRICT (or -pc_asm_type restrict) uses a full restriction operator but ignores the off-processor values during interpolation. Similarly, PC_ASM_INTERPOLATE (or -pc_asm_type interpolate) uses a limited restriction process in conjunction with full interpolation, while PC_ASM_NONE (or -pc_asm_type none) ignores off-processor values for both restriction and interpolation. The ASM types with limited restriction or interpolation were suggested by Xiao-Chuan Cai and Marcus Sarkis [3]. PC_ASM_RESTRICT is the PETSc default, as it saves substantial communication and, for many problems, has the added benefit of requiring fewer iterations for convergence than the standard additive Schwarz method.
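As a minimal sketch, assuming the preconditioner has already been set to PCASM, the default restricted variant can be replaced by the standard method as follows:

   ierr = PCASMSetType(pc,PC_ASM_BASIC);    /* full restriction and interpolation */
which is equivalent to the run-time option -pc_asm_type basic.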

The user can also set the number of blocks and sizes on a per-processor basis with the commands

   ierr = PCBJacobiSetLocalBlocks(PC pc,int blocks,int *size); 
   ierr = PCASMSetLocalSubdomains(PC pc,int N,IS *is); 
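A minimal sketch of a per-processor split, assuming the MPI rank is obtained from PETSC_COMM_WORLD and that PETSC_NULL is passed so that PETSc chooses the block sizes (the particular split is illustrative only):

   int rank,ierr;
   MPI_Comm_rank(PETSC_COMM_WORLD,&rank);
   if (rank == 0) {
     ierr = PCBJacobiSetLocalBlocks(pc,2,PETSC_NULL);  /* two blocks on process 0      */
   } else {
     ierr = PCBJacobiSetLocalBlocks(pc,1,PETSC_NULL);  /* one block on each other rank */
   }
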
For the ASM preconditioner, one can use the following command to set the amount of overlap to be computed when constructing the subdomains.
   ierr = PCASMSetOverlap(PC pc,int overlap); 
The overlap defaults to 1, so if one desires that no additional overlap be computed beyond what may have been set with a call to PCASMSetTotalSubdomains() or PCASMSetLocalSubdomains(), then overlap must be set to 0. In particular, if one does not explicitly set the subdomains in an application code, then all overlap would be computed internally by PETSc, and using an overlap of 0 would result in an ASM variant that is equivalent to the block Jacobi preconditioner. Note that one can define initial index sets is with any overlap via PCASMSetTotalSubdomains() or PCASMSetLocalSubdomains(); the routine PCASMSetOverlap() merely allows PETSc to extend that overlap further if desired.
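For instance, a minimal sketch that lets PETSc generate the subdomains and then suppresses the extension, recovering block Jacobi behavior (the value is illustrative; any nonnegative overlap may be requested):

   ierr = PCSetType(pc,PCASM);
   ierr = PCASMSetOverlap(pc,0);   /* no additional overlap; with PETSc-generated
                                      subdomains this is equivalent to block Jacobi */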

