Zoltan Developer's Guide

Appendix: Hypergraph Partitioning

Hypergraph partitioning is a useful partitioning and load-balancing method when connectivity data is available. It can be viewed as a more sophisticated alternative to traditional graph partitioning.

A hypergraph consists of vertices and hyperedges. A hyperedge connects one or more vertices. A graph is a special case of a hypergraph where each edge has size two (two vertices). The hypergraph model is well suited to parallel computing, where vertices correspond to data objects and hyperedges represent the communication requirements among them. The basic partitioning problem is to partition the vertices into k approximately equal sets such that the number of cut hyperedges is minimized. Most partitioners (including Zoltan-PHG) allow a more general model in which both vertices and hyperedges can be assigned weights. It has been shown that the hypergraph model gives a more accurate representation of communication cost (volume) than the graph model. In particular, for sparse matrix-vector multiplication, the hypergraph model exactly represents communication volume. Sparse matrices can be partitioned either along rows or columns; in the row-net model the columns are vertices and each row corresponds to a hyperedge, while in the column-net model the roles of vertices and hyperedges are reversed.
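To make the column-net model concrete, here is a small, self-contained illustration (plain C, not Zoltan code) that prints the column-net hypergraph of a 4x5 sparse matrix: each column becomes a hyperedge over the rows (vertices) that have a nonzero in that column.

#include <stdio.h>

int main(void)
{
    /* Nonzero pattern of a 4x5 sparse matrix: A[i][j] = 1 means a_ij != 0. */
    int A[4][5] = {
        {1, 0, 1, 0, 0},
        {0, 1, 1, 0, 1},
        {1, 0, 0, 1, 0},
        {0, 0, 1, 1, 1}
    };

    /* Column-net model: rows are vertices; each column is a hyperedge
       containing the rows with a nonzero in that column.  (The row-net
       model simply swaps the roles of rows and columns.)               */
    for (int j = 0; j < 5; j++) {
        printf("hyperedge %d (column %d) connects vertices:", j, j);
        for (int i = 0; i < 4; i++)
            if (A[i][j]) printf(" %d", i);
        printf("\n");
    }
    return 0;
}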

Zoltan contains a native parallel hypergraph partitioner, called PHG (Parallel HyperGraph partitioner). In addition, Zoltan provides access to PaToH, a serial hypergraph partitioner. Note that PaToH is not part of Zoltan and should be obtained separately from the PaToH web site. Zoltan-PHG is a fully parallel multilevel hypergraph partitioner. For a further technical description, see [Devine et al., 2006].

Algorithm:

The algorithm used is multilevel hypergraph partitioning. For coarsening, several versions of inner product (heavy connectivity) matching are available. The refinement is based on Fiduccia-Mattheyses (FM), but the parallel version is only an approximation of serial FM.
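To illustrate the idea behind inner product matching, here is a simplified serial sketch (not Zoltan's parallel implementation): the inner product of two vertices is taken to be the number of hyperedges they share, and a greedy pass matches each unmatched vertex with its best unmatched neighbor. The arrays vindex/vedge and hindex/hvertex correspond to the compressed storage described under Data structures below.

#include <stdlib.h>

/* Simplified serial sketch of greedy inner product (heavy connectivity)
 * matching.  vindex/vedge give the hyperedges of each vertex; hindex/hvertex
 * give the vertices of each hyperedge.  On return, match[v] is v's partner,
 * or v itself if v remains unmatched.                                       */
void greedy_ipm(int nVtx, const int *vindex, const int *vedge,
                const int *hindex, const int *hvertex, int *match)
{
    int *ip = (int *) calloc(nVtx, sizeof(int));  /* scratch inner products */
    for (int v = 0; v < nVtx; v++) match[v] = v;  /* unmatched <=> match[v] == v */

    for (int v = 0; v < nVtx; v++) {
        if (match[v] != v) continue;                  /* already matched */
        int best = -1, best_ip = 0;
        /* Accumulate inner products with all unmatched vertices that share
           a hyperedge with v, remembering the best candidate seen so far. */
        for (int j = vindex[v]; j < vindex[v + 1]; j++) {
            int e = vedge[j];
            for (int k = hindex[e]; k < hindex[e + 1]; k++) {
                int u = hvertex[k];
                if (u == v || match[u] != u) continue;
                if (++ip[u] > best_ip) { best_ip = ip[u]; best = u; }
            }
        }
        if (best >= 0) { match[v] = best; match[best] = v; }
        /* Reset the scratch counters touched while processing v. */
        for (int j = vindex[v]; j < vindex[v + 1]; j++) {
            int e = vedge[j];
            for (int k = hindex[e]; k < hindex[e + 1]; k++)
                ip[hvertex[k]] = 0;
        }
    }
    free(ip);
}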

Parallel implementation:

A novel feature of our parallel implementation is that we use a 2D distribution of the hypergraph: each processor owns partial data about some vertices and some hyperedges. The processors are logically organized in a 2D grid as well. Most communication is limited to either a processor row or a processor column. This design should allow good scalability to large numbers of processors.
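As a rough illustration of this layout (standard MPI, not an excerpt from Zoltan's internal code), processes can be arranged in an nProc_x by nProc_y grid, with row and column communicators built so that collective operations stay within a processor row or column:

#include <mpi.h>

/* Illustrative sketch of a 2D process layout; Zoltan's internal code differs
 * in detail.  nProc_x * nProc_y processes are arranged in a logical grid,
 * and each process gets a row communicator and a column communicator.      */
void make_2d_layout(MPI_Comm comm, int nProc_x,
                    MPI_Comm *row_comm, MPI_Comm *col_comm)
{
    int rank;
    MPI_Comm_rank(comm, &rank);

    int myProc_x = rank % nProc_x;   /* which processor column I belong to */
    int myProc_y = rank / nProc_x;   /* which processor row I belong to    */

    /* Processes sharing a row index form a row communicator; likewise for
       columns.  Each process then owns the hypergraph submatrix defined by
       its hyperedge (row) block and its vertex (column) block.            */
    MPI_Comm_split(comm, myProc_y, myProc_x, row_comm);
    MPI_Comm_split(comm, myProc_x, myProc_y, col_comm);
}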

Data structures:

The hypergraph is the most important data structure. It is stored as a compressed sparse matrix. Note that in parallel, each processor owns a local part of the global hypergraph (a submatrix of the whole matrix). The hypergraph data type is struct HGraph; it contains information such as the number of vertices, hyperedges, and pins, compressed storage of all pins, optional vertex and edge weights, pointers to relevant communicators, and more. One piece of cryptic notation needs an explanation: the arrays hindex and hvertex are used to look up vertex information given a hyperedge, and vindex and vedge are used to look up hyperedge information given a vertex. Essentially, we store the hypergraph as a sparse matrix in both CSR and CSC formats. This doubles the memory cost but gives better performance. The data on each processor are stored using local indexing, starting at zero. To get the global vertex or edge number, use the macros VTX_LNO_TO_GNO and EDGE_LNO_TO_GNO; these macros look up the correct offsets (using the dist_x and dist_y arrays). Note that phg->nVtx is always the local number of vertices, which may be zero on some processors.
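For example, iterating over the local pins might look like the following sketch (field names as described above; treat this as illustrative rather than a verbatim excerpt, and the header name is an assumption):

#include <stdio.h>
#include "phg.h"   /* assumed internal header defining struct HGraph */

/* Illustrative sketch: print the local vertices of hyperedge e and the
 * local hyperedges containing vertex v, using the dual CSR/CSC storage. */
static void print_pins(const struct HGraph *hg, int e, int v)
{
    /* CSR view: hindex/hvertex list the vertices of each hyperedge. */
    printf("hyperedge %d:", e);
    for (int j = hg->hindex[e]; j < hg->hindex[e + 1]; j++)
        printf(" %d", hg->hvertex[j]);     /* local vertex numbers, 0-based */
    printf("\n");

    /* CSC view: vindex/vedge list the hyperedges containing each vertex. */
    printf("vertex %d:", v);
    for (int j = hg->vindex[v]; j < hg->vindex[v + 1]; j++)
        printf(" %d", hg->vedge[j]);       /* local hyperedge numbers, 0-based */
    printf("\n");
}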

Parameters:

Only the most essential parameters are documented in the User's Guide. There are several other parameters, intended for developers and perhaps expert "power" users. A more complete list of all parameters is given below. Note that these parameters may change in future versions!
For a precise list of parameters in a particular version of Zoltan, look at the source code (phg.c).
Method String: HYPERGRAPH
Parameters:
    HYPERGRAPH_PACKAGE
PHG (parallel) or PaToH (serial)
    CHECK_HYPERGRAPH
Check if input data is valid. (Slows performance; intended for debugging.)
    PHG_OUTPUT_LEVEL
Level of verbosity; 0 is silent.
    PHG_FINAL_OUTPUT
Print stats about final partition? (0/1)
    PHG_NPROC_VERTEX
Desired number of processes in the vertex direction (for 2D internal layout)
    PHG_NPROC_HEDGE
Desired number of processes in the hyperedge direction (for 2D internal layout)
    PHG_COARSENING_METHOD
Method to use in matching/coarsening; currently the following are available:
agg - agglomerative inner product matching (a.k.a. heavy connectivity matching)
ipm - inner product matching (a.k.a. heavy connectivity matching)
c-ipm - column ipm; faster method based on ipm within processor columns
a-ipm - alternate between the fast method (l-ipm) and ipm
l-ipm - local ipm on each processor. Fastest option, but often gives poor quality.
h-ipm - hybrid ipm that uses partial c-ipm followed by ipm on each level

    PHG_COARSENING_LIMIT
Number of vertices at which to stop coarsening.
    PHG_VERTEX_VISIT_ORDER
Ordering of vertices in greedy matching scheme:
0 - random
1 - natural order (as given by the query functions)
2 - increasing vertex weights
3 - increasing vertex degree
4 - increasing vertex degree, weighted by pins
    PHG_EDGE_SCALING
Scale edge weights by some function of the size of the hyperedges:
0 - no scaling
1 - scale by 1/(size-1)  [absorption scaling]
2 - scale by 2/(size*(size-1))  [clique scaling]
    PHG_VERTEX_SCALING
Variations in "inner product" similarity metric (for matching):
0 - Euclidean inner product: <x,y>
1 - cosine similarity: <x,y>/(|x|*|y|)
2 - <x,y>/(|x|^2 * |y|^2)
3 - scale by sqrt of vertex weights
4 - scale by vertex weights
    PHG_COARSEPARTITION_METHOD
Method to partition the coarsest (smallest) hypergraph; typically done in serial:
random - random
linear - linear (natural) order
greedy - greedy method based on minimizing cuts
auto - automatically select from the above methods (in parallel, different processes may use different methods)
    PHG_REFINEMENT_METHOD
Refinement algorithm:
fm - two-way approximate FM
none - no refinement
    PHG_REFINEMENT_LOOP_LIMIT
Loop limit in FM refinement. Higher number means more refinement.
    PHG_REFINEMENT_MAX_NEG_MOVE
Maximum number of negative moves allowed in FM.
    PHG_BAL_TOL_ADJUSTMENT
Controls how the balance tolerance is adjusted at each level of bisection.
    PHG_RANDOMIZE_INPUT
Randomize layout of vertices and hyperedges in internal parallel 2D layout? (0/1)
    PHG_EDGE_WEIGHT_OPERATION
Operation to be applied to edge weights supplied by different processes for the same hyperedge:
add - the hyperedge weight will be the sum of the supplied weights
max - the hyperedge weight will be the maximum of the supplied weights
error - if the supplied hyperedge weights are not equal, Zoltan will flag an error; otherwise the hyperedge weight will be the common value supplied by the processes
    EDGE_SIZE_THRESHOLD
Ignore hyperedges containing more than this fraction of the total number of vertices.
    PATOH_ALLOC_POOL0
Memory allocation for PaToH; see the PaToH manual for details.
    PATOH_ALLOC_POOL1
Memory allocation for PaToH; see the PaToH manual for details.
Default values:

HYPERGRAPH_PACKAGE = PHG

CHECK_HYPERGRAPH = 0

PHG_OUTPUT_LEVEL=0

PHG_FINAL_OUTPUT=0

PHG_COARSENING_METHOD=ipm

PHG_COARSENING_LIMIT=100

PHG_VERTEX_VISIT_ORDER=0

PHG_EDGE_SCALING=0

PHG_VERTEX_SCALING=0

PHG_COARSEPARTITION_METHOD=greedy

PHG_REFINEMENT_METHOD=fm

PHG_REFINEMENT_LOOP_LIMIT=10

PHG_REFINEMENT_MAX_NEG_MOVE=100

PHG_BAL_TOL_ADJUSTMENT=0.7

PHG_RANDOMIZE_INPUT=0

PHG_EDGE_WEIGHT_OPERATION=max

EDGE_SIZE_THRESHOLD=0.25

PATOH_ALLOC_POOL0=0

PATOH_ALLOC_POOL1=0
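
As a usage sketch (assuming a Zoltan_Struct handle zz previously created with Zoltan_Create), the hypergraph method is selected and PHG defaults are overridden through Zoltan_Set_Param; parameter values are always passed as strings:

#include "zoltan.h"

/* Sketch: select hypergraph partitioning and override a few PHG defaults.
 * zz is assumed to be a handle returned earlier by Zoltan_Create().      */
void set_phg_params(struct Zoltan_Struct *zz)
{
    Zoltan_Set_Param(zz, "LB_METHOD", "HYPERGRAPH");
    Zoltan_Set_Param(zz, "HYPERGRAPH_PACKAGE", "PHG");
    Zoltan_Set_Param(zz, "PHG_COARSENING_METHOD", "agg");
    Zoltan_Set_Param(zz, "PHG_REFINEMENT_LOOP_LIMIT", "15");
    Zoltan_Set_Param(zz, "PHG_OUTPUT_LEVEL", "1");
}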
Required Query Functions:

ZOLTAN_NUM_OBJ_FN

ZOLTAN_OBJ_LIST_FN or ZOLTAN_FIRST_OBJ_FN/ZOLTAN_NEXT_OBJ_FN pair

ZOLTAN_HG_SIZE_CS_FN

ZOLTAN_HG_CS_FN
Optional Query Functions:

ZOLTAN_HG_SIZE_EDGE_WTS_FN

ZOLTAN_HG_EDGE_WTS_FN

It is possible to provide the graph query functions instead of the hypergraph queries, though this is not recommended. If only graph query functions are registered, Zoltan will automatically create a hypergraph from the graph, but some information (specifically, edge weights) will be lost.
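For reference, here is a minimal sketch of the two required hypergraph callbacks for pins supplied in compressed-vertex format. The argument lists follow the ZOLTAN_HG_SIZE_CS_FN and ZOLTAN_HG_CS_FN prototypes documented in the User's Guide, while the my_mesh structure and its fields are hypothetical application data.

#include <string.h>
#include "zoltan.h"

/* Hypothetical application data: nv local vertices with global IDs, and a
 * CSR-style pin list giving, for each vertex, the hyperedges it belongs to. */
struct my_mesh {
    int nv, npins;
    ZOLTAN_ID_TYPE *vtx_gid;   /* nv global vertex IDs                         */
    int            *pin_ptr;   /* nv offsets into pin_gid (start of each list) */
    ZOLTAN_ID_TYPE *pin_gid;   /* npins global hyperedge IDs                   */
};

/* ZOLTAN_HG_SIZE_CS_FN: report how many pin lists and pins will be supplied. */
static void my_hg_size_cs(void *data, int *num_lists, int *num_pins,
                          int *format, int *ierr)
{
    struct my_mesh *m = (struct my_mesh *) data;
    *num_lists = m->nv;
    *num_pins  = m->npins;
    *format    = ZOLTAN_COMPRESSED_VERTEX;
    *ierr      = ZOLTAN_OK;
}

/* ZOLTAN_HG_CS_FN: supply the vertex GIDs, list offsets, and pin GIDs.
 * For brevity this sketch assumes num_gid_entries == 1.                */
static void my_hg_cs(void *data, int num_gid_entries, int num_vtx_edge,
                     int num_pins, int format, ZOLTAN_ID_PTR vtxedge_GID,
                     int *vtxedge_ptr, ZOLTAN_ID_PTR pin_GID, int *ierr)
{
    struct my_mesh *m = (struct my_mesh *) data;
    (void) num_gid_entries; (void) format;
    memcpy(vtxedge_GID, m->vtx_gid, num_vtx_edge * sizeof(ZOLTAN_ID_TYPE));
    memcpy(vtxedge_ptr, m->pin_ptr, num_vtx_edge * sizeof(int));
    memcpy(pin_GID,     m->pin_gid, num_pins * sizeof(ZOLTAN_ID_TYPE));
    *ierr = ZOLTAN_OK;
}

These callbacks would typically be registered with Zoltan_Set_Fn (function types ZOLTAN_HG_SIZE_CS_FN_TYPE and ZOLTAN_HG_CS_FN_TYPE) before calling Zoltan_LB_Partition.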

