Docs: Installation: UNIX
PETSc 2.0 requires a MINIMUM of 25 megabytes of disk space to
install for a single operating system. The UNIX command "df" can be used to
determine the amount of available disk space. See the PETSc FAQ
for tips on reducing required space. PETSc MUST be compiled with an ANSI C compiler (or
C++ compiler). Many older Sun workstations provide only the Sun-bundled C compiler, which is NOT
ANSI C and cannot be used. The GNU compiler gcc can often be used as a replacement on
systems that do not have a native ANSI C compiler.
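For example, the available space on the filesystem where PETSc will be installed can be checked with
df /home/username
and the presence of an ANSI C compiler such as gcc can be verified with
gcc -v
(The directory name above is only an illustration; substitute the filesystem actually being used.)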
Required Software:
Prior to installing PETSc 2.0, the machine must have:
- An implementation of MPI. Parallel machines now come with one provided by the vendors
(e.g., IBM SP, Cray T3E, and SGI machines); check with your support staff. We recommend
using the vendor-provided implementation of MPI if it has been installed,
since this usually provides better performance than the freeware versions. Otherwise, we
recommend MPICH, which is available at http://www.mcs.anl.gov/mpi/mpich.
If MPI is not yet installed on your system, retrieve a version of MPI, install it, and run
the example programs before proceeding further.
For users who are
interested ONLY in running PETSc sequentially, we have included a stripped-down version of
MPI that allows PETSc to be compiled without installing an external version of MPI. This
setup allows the user to run ONLY uniprocessor PETSc programs, not parallel ones.
- A copy of the BLAS and LAPACK. Many machines provide math libraries that contain BLAS or
LAPACK. For example, the DEC alpha provides DXML, and the IBM rs6000 provides ESSL. (Note
that LAPACK must still be installed even if linking with ESSL, since ESSL does not contain
all LAPACK routines.) Check with your support staff. The BLAS library on some
machines may be found as /usr/lib/libblas.a. If these libraries are not already installed
on the target architecture, they can be obtained from ftp://info.mcs.anl.gov/pub/petsc/blas_lapack.tar.gz.
The makefiles included with this package are intentionally simple and may require editing to
fit the particular machine; a sample build is sketched after this list. We recommend using the
vendor-provided BLAS whenever possible.
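If the bundled BLAS/LAPACK tar file mentioned above is used, a minimal build sketch follows;
the name of the unpacked directory is illustrative, and the included makefiles should be edited
for the local compiler before running make:
gunzip -c blas_lapack.tar.gz | tar xof -
cd blas_lapack
make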
Optional Packages:
PETSc provides an interface to several software packages. These packages are not
developed, maintained, or supported by the PETSc team; we merely provide an interface to
them. To use any of these optional packages, obtain the following prior to installing
PETSc:
- BlockSolve95, a software package of parallel ICC(0) and ILU(0) preconditioners from http://www.mcs.anl.gov/blocksolve.
Before compiling BlockSolve95, make sure that the BlockSolve95 file src/makefile
contains at the bottom
CFLAGS = ${BS_INCLUDE} ${MPI_INCLUDE} -DMLOG
The -DMLOG flag is extremely important.
- BlockSolve95 does not support complex numbers; hence, it cannot be used with any of the
complex number versions of PETSc.
- To use BlockSolve95 with PETSc BOPT=g_c++/O_c++, install BlockSolve95 with
BOPT=O, and use these libraries with PETSc.
- To use BlockSolve with PETSc BOPT=g_c++/O_c++ on the SGI machines,
PETSC_ARCH=[IRIX,IRIX64], you must first fix one of the BlockSolve95 include files. Edit
the file BlockSolve95/include/BSdepend.h and remove the line
- To use BlockSolve95 with PETSc BOPT=g_c++/O_c++ on PETSC_ARCH=freebsd or
PETSC_ARCH=linux, or using the Gnu g++ compiler version 2.7.2 or higher on the sun4, you
must edit the above file and remove the line
- ParMETIS, a parallel graph partitioner, from http://www.cs.umn.edu/~metis.
- The ALICE Memory Snooper (AMS), from http://www.mcs.anl.gov/ams.
This package allows one to monitor (and change) variables in running PETSc programs
(or, more generally, any programs using MPI). PETSc objects, such as matrices and
solvers, can also be monitored directly from external programs. See the manual page
for ViewerAMSOpen()
for details on AMS usage in PETSc.
- SPAI 3.0, a sparse approximate inverse code by Steve Barnard. It may be obtained from
http://www.sam.math.ethz.ch/~grote/spai/; see src/contrib/pc/spai/readme for more details.
Installing PETSc:
- The PETSc distribution can be unbundled with
gunzip -c petsc.tar.gz | tar xof -
By default, this will create a directory called petsc-2.0.28 and unpack the software
there.
- Refer to http://www.mcs.anl.gov/petsc/petsc-patches.html
for fixes for the latest PETSc release.
- Set the environmental variable PETSC_DIR to the full path
of the PETSc home directory, for example,
setenv PETSC_DIR /home/username/petsc-2.0.28
- Set the environmental variable PETSC_ARCH, which indicates
the architecture on which PETSc will be configured. For example, use
setenv PETSC_ARCH `$PETSC_DIR/bin/petscarch`
- Edit the file ${PETSC_DIR}/bmake/${PETSC_ARCH}/base.site to indicate the local
installation of MPI, LAPACK, BLAS, X-windows, and the
optional software packages; a sample MPI entry is sketched at the end of this section.
Note: If installing ONLY a
uniprocessor version of PETSc, then installation of an MPI implementation is not required.
Instead, the following MPI locations can be used in the
${PETSC_DIR}/bmake/${PETSC_ARCH}/base.site file:
MPI_LIB = ${PETSC_DIR}/lib/lib${BOPT}/${PETSC_ARCH}/libmpiuni.a
MPI_INCLUDE = -I${PETSC_DIR}/src/sys/src/mpiuni
MPIRUN = ${PETSC_DIR}/src/sys/src/mpiuni/mpirun
- It may also be necessary to edit the file
${PETSC_DIR}/bmake/${PETSC_ARCH}/base_variables to change the names of the C, C++, or
Fortran compilers from their defaults:
- Solaris using the GNU compilers: use PETSC_ARCH=solaris_gnu
- IBM rs6000 using the GNU compilers: use PETSC_ARCH=rs6000_gnu
- Sun4 machines: if using the Sun ANSI-C compiler, these files must be edited
accordingly.
- CRAY t3d: make sure the environmental variable TARGET is set to cray-t3d
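As mentioned above, for a full (parallel) installation the MPI entries in
${PETSC_DIR}/bmake/${PETSC_ARCH}/base.site must point at the local MPI. A sketch for a local
MPICH installation follows; the paths are purely illustrative and should be replaced with the
locations where MPI is actually installed:
MPI_LIB = /usr/local/mpich/lib/libmpich.a
MPI_INCLUDE = -I/usr/local/mpich/include
MPIRUN = /usr/local/mpich/bin/mpirun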
Test programs:
- If the installation went smoothly, then try running some test examples with the command
make BOPT=g testexamples >& examples_log
- If only the uniprocessor version of PETSc has been installed (i.e., MPI has not been
installed), then use the following command to run only sequential examples:
make BOPT=g testexamples_uni >& examples_log
- Examine the file examples_log for any obvious errors or problems.
- The examples can be manually built and run one at a time by changing to the appropriate
directory (for instance, ${PETSC_DIR}/src/sles/examples/tutorials) and running commands
such as
make BOPT=g ex1
make runex1 (or, for example, mpirun ex1)
This alternative may be preferable if "make BOPT=g testexamples" fails for some
reason.
- The automatic tests may not work on systems that use a queue and special commands to run
parallel jobs. Instead, the user can compile and run the examples manually as
discussed above; see also the sketch following this list.
- To test the graphics examples, move to ${PETSC_DIR}/src/sys/src/draw/examples/tests;
then make and run the examples manually. These examples will open an X window and draw
some graphics.
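For instance, on a system with a working mpirun, an example can be built and run on two
processors by hand roughly as follows (the example name and the number of processors are
illustrative; a batch queue may require its own submission command instead of mpirun):
cd ${PETSC_DIR}/src/sles/examples/tutorials
make BOPT=g ex1
mpirun -np 2 ex1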
Fortran Users:
The PETSc Fortran libraries are built automatically during the installation outlined
above. Before testing the Fortran examples, please make sure that the C versions of the examples work correctly.
- To compile and test the Fortran examples, use the command
make BOPT=g testfortran >& fortran_log
Individual Fortran examples can also be built and run by hand, as sketched at the end of this section.
PETSc Fortran programs can use the suffix .F rather than the traditional suffix .f, so
that the PETSc header files can be easily included in Fortran programs. See the Fortran
chapter within the users manual for additional details regarding the Fortran interface. In
order to use the suffix .f instead of .F, the user must edit the file ${PETSC_DIR}/include/foldinclude/petsc.h
to hardwire the path for the local MPI include file. See the chapter 'PETSc Fortran Users'
in the users manual for more information.
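As with the C examples, individual Fortran examples can be built and run by hand. The sketch
below assumes an example named ex1f with a corresponding runex1f makefile target; the actual
example names and targets vary by directory:
cd ${PETSC_DIR}/src/sles/examples/tutorials
make BOPT=g ex1f
make runex1f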
Multiple Installations:
When building PETSc for two or more machine types that share a common filesystem, for
example sun4 and hpux, multiple PETSc directory trees are NOT needed.
A single PETSc directory tree can (and should) be used; PETSc automatically places the
libraries for each machine in a different location. In particular, the libraries for a
given BOPT and PETSC_ARCH are installed in the directory
${PETSC_DIR}/lib/lib${BOPT}/${PETSC_ARCH}.
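For example, after building with BOPT=g for both machine types mentioned above, the
per-architecture libraries can be inspected with commands such as (the architecture names are
just the examples used above):
ls ${PETSC_DIR}/lib/libg/sun4
ls ${PETSC_DIR}/lib/libg/hpux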
Shared Libraries:
PETSc supports the use of shared libraries for the machines
solaris, alpha, IRIX, IRIX64, freebsd, and linux to
enable faster linking and smaller executables. These libraries are built
automatically during installation. In addition, PETSc now defaults to using these
libraries as dynamic libraries on these machines. For most users this does not matter.
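If an executable linked against the shared libraries fails to start because the runtime loader
cannot find them, a common remedy is to add the PETSc library directory to the loader search
path. This is only a sketch; the exact variable name and directory depend on the system and on
the BOPT and PETSC_ARCH in use:
setenv LD_LIBRARY_PATH ${PETSC_DIR}/lib/libg/${PETSC_ARCH}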