PETSc and Threads



PETSc is not currently thread-safe; but because the issues involved in making a toolkit thread-safe are complex, that short answer by itself is almost meaningless. This page therefore attempts to explain how threads and thread safety relate to PETSc. Note that we are discussing only "software threads", as opposed to "hardware threads".

Threads are used in two main ways:

  1. Loop-level compiler control. The C/C++/FORTRAN compiler manages the dispatching of threads at the beginning of a loop, where each thread is assigned a non-overlapping portion of the loop iterations. OpenMP, for example, defines a standard set of compiler directives for indicating to compilers how they should "thread parallelize" loops.

  2. User control. The programmer manages the dispatching of threads directly by assigning threads to tasks (e.g., a subroutine call); for example, POSIX threads (pthreads) or the explicit thread management in OpenMP. Both styles are sketched just below.
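
As a concrete, hedged illustration of these two styles (the routine names, the data, and the work done are made up for illustration only), the first sketch below uses an OpenMP directive for loop-level control and the second uses POSIX threads for user control.

  #include <omp.h>
  #include <pthread.h>

  /* 1. Loop-level compiler control: the OpenMP directive tells the compiler
        to split the loop iterations among the available threads, each thread
        receiving a non-overlapping range of indices. */
  void scale(double *x, int n, double alpha)
  {
    int i;
    #pragma omp parallel for
    for (i = 0; i < n; i++) x[i] *= alpha;
  }

  /* 2. User control: the programmer dispatches a thread to run a task
        (here an ordinary subroutine) and later waits for it to finish. */
  void *task(void *arg)
  {
    /* ... work on the data pointed to by arg ... */
    return NULL;
  }

  void run_task(double *data)
  {
    pthread_t t;
    pthread_create(&t, NULL, task, (void *)data);
    /* ... the dispatching thread can do other work here ... */
    pthread_join(t, NULL);
  }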

Threads are merely streams of control and do not have any global data associated with them. Any global variables (e.g., common blocks in FORTRAN) are "shared" by all the threads; that is, any thread can access and change that data. In addition, any allocated space (e.g., in C with malloc or in C++ with new) can be read and changed by any thread that has a reference to it. The only private data a thread has are the local variables in the subroutines it has called (i.e., the variables on that thread's stack) and local variables that you explicitly mark as not shared via compiler directives.
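
To make these sharing rules concrete, here is a small, purely illustrative C fragment (the variable names are made up): the global variable and the heap allocation are visible to every thread, while the stack variable is private to the thread executing the call.

  #include <stdlib.h>

  double  global_value = 0.0;   /* global variable: shared by all threads    */
  double *heap_array   = NULL;  /* heap space: shared by any thread holding
                                   the pointer                               */

  void worker(int n)
  {
    double local_sum = 0.0;     /* stack variable: private to the thread
                                   executing this call                       */
    int    i;
    for (i = 0; i < n; i++) local_sum += heap_array[i]; /* reads shared data */
    global_value = local_sum;   /* unsynchronized write to shared data: a
                                   race if several threads do this at once   */
  }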

In its simplest form, thread safety means that any memory (global or allocated) that more than one thread has access to has some mechanism to ensure that it remains consistent when the various threads act upon it. This can be managed by simply associating a lock with each such piece of memory and making sure that each thread locks the memory before accessing it and unlocks it when it has finished. In an object-oriented library, rather than associating locks with individual data items, one can instead associate locks with objects, so that only a single thread can operate on an object at a time.
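
For example, a POSIX mutex can serve as the per-object lock described above; the following is a minimal sketch with a hypothetical object type.

  #include <pthread.h>

  /* A hypothetical object that bundles its data with a lock so that only one
     thread at a time may operate on it. */
  typedef struct {
    double          value;
    pthread_mutex_t lock;
  } SharedObject;

  void SharedObjectAdd(SharedObject *obj, double v)
  {
    pthread_mutex_lock(&obj->lock);    /* acquire the object's lock           */
    obj->value += v;                   /* safe: no other thread can be here   */
    pthread_mutex_unlock(&obj->lock);  /* release so other threads may enter  */
  }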

PETSc is not thread-safe for three reasons:

  1. A few miscellaneous global variables; these may be fixed in later PETSc releases and are not a big deal.
  2. A variety of global data structures that are used for profiling and error handling. These are not easily removed or modified to be made thread-safe. Simply putting locks around all accesses to these data structures would have a major impact on performance, since the profiling data is constantly being updated by the running code. In some cases, for example updating the flop counters, locks could be avoided and something like an atomic fetch-and-add operation could be used instead (a sketch follows this list). Such operations can be built on all major systems, often using special two-step memory operations (e.g., load-linked and store-conditional on MIPS, or load-and-reserve on the Power architecture), but they cannot be done in a portable, high-performance way, since they are not part of C and there is no standard for issuing assembly code from within C source.
  3. None of the PETSc objects created during a simulation have locks associated with them. Again, the reason is performance: having to lock an object before every use and unlock it afterwards would have a large impact. Even with very inexpensive locks, there would still likely be a few "hot spots" that kill performance; for example, if four threads share a matrix and each calls MatSetValues(), that becomes a bottleneck, with the threads constantly fighting over the data structure inside the matrix object.
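
As a sketch of the fetch-and-add idea mentioned in point 2 (the counter name is made up, and __sync_fetch_and_add is a GCC-specific builtin rather than standard C, which is precisely the portability problem), a lock-free counter update might look like this:

  /* A hypothetical flop counter updated without a lock.  The builtin is
     GCC-specific; other compilers and architectures need their own, often
     assembly-level, equivalents, which is why this cannot be done portably. */
  static long flop_counter = 0;

  void AddFlops(long n)
  {
    __sync_fetch_and_add(&flop_counter, n);  /* atomic read-modify-write */
  }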

PETSc can be used in a limited, thread-safe way: so long as only one of the threads calls methods on PETSc objects or calls profiled PETSc routines, PETSc will run fine. For example, one could use loop-level parallelism to evaluate a finite difference stencil on a grid; this is supported through the PETSc routine VecCreateShared(), see src/snes/examples/tutorials/ex5s.c. However, this approach has limited power.
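
The following rough sketch, in the spirit of ex5s.c (the size is made up, error checking is omitted, and the calls follow the modern PETSc API, which may differ slightly across versions), shows the supported pattern: a single master thread makes every PETSc call, and loop-level threads touch only the raw array of the shared-memory vector.

  #include <petscvec.h>

  int main(int argc, char **argv)
  {
    Vec          x;
    PetscScalar *a;
    PetscInt     i, n;

    PetscInitialize(&argc, &argv, NULL, NULL);
    VecCreateShared(PETSC_COMM_WORLD, PETSC_DECIDE, 100, &x); /* PETSc calls   */
    VecGetLocalSize(x, &n);                                   /* made only by  */
    VecGetArray(x, &a);                                       /* master thread */
    #pragma omp parallel for      /* loop-level threads touch only a[], never */
    for (i = 0; i < n; i++)       /* a PETSc object or a profiled routine     */
      a[i] = 1.0;
    VecRestoreArray(x, &a);
    VecDestroy(&x);
    PetscFinalize();
    return 0;
  }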

We also have some concerns about the thread model for parallelism. A thread model for parallelizing numerical methods appears to work well for problems whose data can be stored in very simple (well-controlled) data structures. For example, if field data is stored in a two-dimensional array, each thread can be assigned a non-overlapping slice of the data to operate on; OpenMP makes managing much of this reasonably straightforward.

When data must be stored in a more complicated, opaque data structure (for example, an unstructured grid or a sparse matrix), it is more difficult to partition the data among the threads so as to prevent conflicts and still get good performance. More difficult, but certainly not impossible. For these situations it is perhaps more natural for each thread to maintain its own private data structure that is later merged into a common one; but doing so introduces a great deal of private state associated with each thread, which then becomes more like a "light-weight process".
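
As a hedged sketch of that idea (the array-based "assembly" here merely stands in for inserting into a real sparse structure), each thread accumulates contributions into its own private buffer and merges it into the shared data only once, at the end:

  #include <stdlib.h>

  /* Each thread assembles contributions into a private buffer, then merges it
     into the shared array inside a critical section, so the shared structure
     is touched far less often than if every single contribution were locked. */
  void assemble(double *shared, int n)
  {
    #pragma omp parallel
    {
      double *priv = (double *)calloc((size_t)n, sizeof(double)); /* private */
      int     i;

      #pragma omp for
      for (i = 0; i < n; i++) priv[i] += (double)i;  /* stand-in for real work */

      #pragma omp critical       /* one thread at a time merges its buffer */
      for (i = 0; i < n; i++) shared[i] += priv[i];

      free(priv);
    }
  }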

In conclusion, at least for the PETSc package, the concept of being thread-safe is not a simple one. It has major ramifications for performance and for how the library would be used; it is not simply a matter of throwing a few locks around and then everything is hunky-dory.

If you have any comments/brickbats on this summary, please direct them to petsc-maint@mcs.anl.gov; we are interested in alternative viewpoints.