5.1.2. Dense Matrices


PETSc provides both sequential and parallel dense matrix formats, where each processor stores its entries in a column-major array in the usual Fortran style. To create a sequential, dense PETSc matrix, A, of dimensions m by n, the user should call

   ierr = MatCreateSeqDense(PETSC_COMM_SELF,int m,int n,Scalar *data,Mat *A); 
The variable data enables the user to optionally provide the location of the data for matrix storage (intended for Fortran users who wish to allocate their own storage space). Most users should merely set data to PETSC_NULL for PETSc to control matrix memory allocation. To create a parallel, dense matrix, A, the user should call
   ierr = MatCreateMPIDense(MPI_Comm comm,int m,int n,int M,int N,Scalar *data,Mat *A); 
The arguments m, n, M, and N indicate the number of local rows and columns and the number of global rows and columns, respectively. Either the local or global parameters can be replaced with PETSC_DECIDE, so that PETSc will determine them. The matrix is stored with a fixed number of rows on each processor, given by m, or determined by PETSc if m is PETSC_DECIDE.

PETSc does not currently provide parallel dense direct solvers; our focus is on sparse iterative solvers.
