As listed in Table 1, we have chosen certain basic vector operations to support within the PETSc vector library. These operations were selected because they often arise in application codes. The NormType argument to VecNorm() is one of NORM_1, NORM_2, or NORM_INFINITY. The 1-norm is sum_i |x_i|, the 2-norm is (sum_i x_i^2)^{1/2}, and the infinity norm is max_i |x_i|.
For parallel vectors that are distributed across the processors by ranges,
it is possible to determine
a processor's local range with the routine
ierr = VecGetOwnershipRange(Vec vec,int *low,int *high);
The argument low indicates the first component owned by the local processor, while high specifies one more than the last owned by the local processor. This command is useful, for instance, in assembling parallel vectors.
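As a sketch of this use in assembly (a fragment, not a complete program; the vector x is assumed to have been created already, e.g. with VecCreateMPI(), and the entry values are hypothetical), each processor can set only the entries it owns:

```c
int    ierr, i, low, high;
Scalar value;

/* determine this processor's range of owned components */
ierr = VecGetOwnershipRange(x,&low,&high);
for (i = low; i < high; i++) {
    value = (Scalar) i;   /* hypothetical entry value */
    ierr  = VecSetValues(x,1,&i,&value,INSERT_VALUES);
}
/* complete the assembly, communicating any off-processor values */
ierr = VecAssemblyBegin(x);
ierr = VecAssemblyEnd(x);
```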
On occasion, the user needs to access the actual elements of the vector.
The routine VecGetArray()
returns a pointer to the elements local to the processor:
ierr = VecGetArray(Vec v,Scalar **array);
When access to the array is no longer needed, the user should call
ierr = VecRestoreArray(Vec v,Scalar **array);
Minor differences exist in the Fortran interface for VecGetArray() and VecRestoreArray(), as discussed in Section Array Arguments. It is important to note that VecGetArray() and VecRestoreArray() do not copy the vector elements; they merely give users direct access to the vector elements. Thus, these routines require essentially no time to call and can be used efficiently.
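A typical pattern combining these routines (a fragment, not a complete program; the vector v is assumed to exist, and the scaling operation is merely illustrative) is to obtain the local array, operate on the locally stored entries, and restore it:

```c
int    ierr, i, n;
Scalar *array;

ierr = VecGetLocalSize(v,&n);   /* number of locally stored entries */
ierr = VecGetArray(v,&array);
for (i = 0; i < n; i++) array[i] *= 2.0;   /* direct access, no copy */
ierr = VecRestoreArray(v,&array);
```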
The number of elements stored locally can be accessed with
ierr = VecGetLocalSize(Vec v,int *size);
The global vector length can be determined by
ierr = VecGetSize(Vec v,int *size);
In addition to VecDot(), VecMDot(), and VecNorm(), PETSc provides split-phase versions of these routines that allow several independent inner products and/or norms to share the same communication (thus improving parallel efficiency). For example, one may have code such as
ierr = VecDot(Vec x,Vec y,Scalar *dot);
ierr = VecNorm(Vec x,NormType NORM_2,double *norm2);
ierr = VecNorm(Vec x,NormType NORM_1,double *norm1);
This code works correctly; the problem is that it performs three separate parallel communication operations. Instead, one can write
ierr = VecDotBegin(Vec x,Vec y,Scalar *dot);
ierr = VecNormBegin(Vec x,NormType NORM_2,double *norm2);
ierr = VecNormBegin(Vec x,NormType NORM_1,double *norm1);
ierr = VecDotEnd(Vec x,Vec y,Scalar *dot);
ierr = VecNormEnd(Vec x,NormType NORM_2,double *norm2);
ierr = VecNormEnd(Vec x,NormType NORM_1,double *norm1);
With this code, the communication is delayed until the first call to VecxxxEnd(), at which point a single MPI reduction is used to communicate all the required values. The calls to VecxxxEnd() must be performed in the same order as the corresponding calls to VecxxxBegin(); if you mistakenly make the calls in the wrong order, PETSc will generate an error informing you of this. Two additional routines, VecTDotBegin() and VecTDotEnd(), are also available. These routines were suggested by Victor Eijkhout.
Note: these routines use only MPI-1 functionality, so they do not allow overlapping computation and communication. Once MPI-2 implementations are more common, we will improve these routines to allow the overlap of inner product and norm calculations with other computations. Also, these routines currently work only for the PETSc built-in vector types.