MOAB: Mesh Oriented datABase
(version 5.4.1)
Parallel communications in MOAB.
#include <ParallelComm.hpp>
Classes | |
class | Buffer |
struct | SharedEntityData |
Public Member Functions | |
ParallelComm (Interface *impl, MPI_Comm comm, int *pcomm_id_out=0) | |
constructor | |
ParallelComm (Interface *impl, std::vector< unsigned char > &tmp_buff, MPI_Comm comm, int *pcomm_id_out=0) | |
constructor taking packed buffer, for testing | |
int | get_id () const |
Get ID used to reference this PCOMM instance. | |
~ParallelComm () | |
destructor | |
ErrorCode | assign_global_ids (EntityHandle this_set, const int dimension, const int start_id=1, const bool largest_dim_only=true, const bool parallel=true, const bool owned_only=false) |
ErrorCode | assign_global_ids (Range entities[], const int dimension, const int start_id, const bool parallel, const bool owned_only) |
ErrorCode | check_global_ids (EntityHandle this_set, const int dimension, const int start_id=1, const bool largest_dim_only=true, const bool parallel=true, const bool owned_only=false) |
ErrorCode | send_entities (const int to_proc, Range &orig_ents, const bool adjs, const bool tags, const bool store_remote_handles, const bool is_iface, Range &final_ents, int &incoming1, int &incoming2, TupleList &entprocs, std::vector< MPI_Request > &recv_remoteh_reqs, bool wait_all=true) |
send entities to another processor, optionally waiting until it's done | |
ErrorCode | send_entities (std::vector< unsigned int > &send_procs, std::vector< Range * > &send_ents, int &incoming1, int &incoming2, const bool store_remote_handles) |
ErrorCode | recv_entities (const int from_proc, const bool store_remote_handles, const bool is_iface, Range &final_ents, int &incoming1, int &incoming2, std::vector< std::vector< EntityHandle > > &L1hloc, std::vector< std::vector< EntityHandle > > &L1hrem, std::vector< std::vector< int > > &L1p, std::vector< EntityHandle > &L2hloc, std::vector< EntityHandle > &L2hrem, std::vector< unsigned int > &L2p, std::vector< MPI_Request > &recv_remoteh_reqs, bool wait_all=true) |
Receive entities from another processor, optionally waiting until it's done. | |
ErrorCode | recv_entities (std::set< unsigned int > &recv_procs, int incoming1, int incoming2, const bool store_remote_handles, const bool migrate=false) |
ErrorCode | recv_messages (const int from_proc, const bool store_remote_handles, const bool is_iface, Range &final_ents, int &incoming1, int &incoming2, std::vector< std::vector< EntityHandle > > &L1hloc, std::vector< std::vector< EntityHandle > > &L1hrem, std::vector< std::vector< int > > &L1p, std::vector< EntityHandle > &L2hloc, std::vector< EntityHandle > &L2hrem, std::vector< unsigned int > &L2p, std::vector< MPI_Request > &recv_remoteh_reqs) |
Receive messages from another processor in a while loop. | |
ErrorCode | recv_remote_handle_messages (const int from_proc, int &incoming2, std::vector< EntityHandle > &L2hloc, std::vector< EntityHandle > &L2hrem, std::vector< unsigned int > &L2p, std::vector< MPI_Request > &recv_remoteh_reqs) |
ErrorCode | exchange_ghost_cells (int ghost_dim, int bridge_dim, int num_layers, int addl_ents, bool store_remote_handles, bool wait_all=true, EntityHandle *file_set=NULL) |
Exchange ghost cells with neighboring procs. Neighboring processors are those sharing an interface with this processor. All entities of dimension ghost_dim within num_layers of the interface, measured going through bridge_dim, are exchanged. See MeshTopoUtil::get_bridge_adjacencies for a description of bridge adjacencies. If wait_all is false and store_remote_handles is true, MPI_Request objects remain available in the sendReqs[2*MAX_SHARING_PROCS] member array, with inactive requests marked as MPI_REQUEST_NULL. Otherwise, this function returns only after all entities have been received and processed. | |
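A sketch of a typical call (not part of the generated documentation), assuming an existing ParallelComm* pcomm; the parameter values are illustrative:

    // Ghost all 3D elements within one layer of the interface, bridging
    // through shared vertices (bridge_dim = 0); addl_ents = 0 means no
    // additional lower-dimensional entities are exchanged.
    ErrorCode rval = pcomm->exchange_ghost_cells( 3, 0, 1, 0, true );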
ErrorCode | post_irecv (std::vector< unsigned int > &exchange_procs) |
Post "MPI_Irecv" before meshing. | |
ErrorCode | post_irecv (std::vector< unsigned int > &shared_procs, std::set< unsigned int > &recv_procs) |
ErrorCode | exchange_owned_meshs (std::vector< unsigned int > &exchange_procs, std::vector< Range * > &exchange_ents, std::vector< MPI_Request > &recv_ent_reqs, std::vector< MPI_Request > &recv_remoteh_reqs, bool store_remote_handles, bool wait_all=true, bool migrate=false, int dim=0) |
Exchange owned mesh for input mesh entities and sets. This function should be called collectively over the communicator for this ParallelComm. | |
ErrorCode | exchange_owned_mesh (std::vector< unsigned int > &exchange_procs, std::vector< Range * > &exchange_ents, std::vector< MPI_Request > &recv_ent_reqs, std::vector< MPI_Request > &recv_remoteh_reqs, const bool recv_posted, bool store_remote_handles, bool wait_all, bool migrate=false) |
Exchange owned mesh for input mesh entities and sets This function is called twice by exchange_owned_meshs to exchange entities before sets. | |
ErrorCode | exchange_tags (const std::vector< Tag > &src_tags, const std::vector< Tag > &dst_tags, const Range &entities) |
Exchange tags for all shared and ghosted entities This function should be called collectively over the communicator for this ParallelComm. If this version is called, all ghosted/shared entities should have a value for this tag (or the tag should have a default value). If the entities vector is empty, all shared entities participate in the exchange. If a proc has no owned entities this function must still be called since it is collective. | |
ErrorCode | exchange_tags (const char *tag_name, const Range &entities) |
Exchange tags for all shared and ghosted entities This function should be called collectively over the communicator for this ParallelComm. If the entities vector is empty, all shared entities participate in the exchange. If a proc has no owned entities this function must still be called since it is collective. | |
ErrorCode | exchange_tags (Tag tagh, const Range &entities) |
Exchange tags for all shared and ghosted entities This function should be called collectively over the communicator for this ParallelComm. If the entities vector is empty, all shared entities participate in the exchange. If a proc has no owned entities this function must still be called since it is collective. | |
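A sketch of a typical tag exchange, assuming a moab::Core instance mb, a ParallelComm* pcomm, and an already-defined dense tag; the tag name "TEMPERATURE" is an illustrative assumption:

    // Push owned tag values to the ghost/shared copies on other ranks.
    Tag temp_tag;
    ErrorCode rval = mb.tag_get_handle( "TEMPERATURE", 1, MB_TYPE_DOUBLE, temp_tag );
    Range ents;  // left empty: all shared/ghosted entities participate
    rval = pcomm->exchange_tags( temp_tag, ents );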
ErrorCode | reduce_tags (const std::vector< Tag > &src_tags, const std::vector< Tag > &dst_tags, const MPI_Op mpi_op, const Range &entities) |
Perform data reduction operation for all shared and ghosted entities This function should be called collectively over the communicator for this ParallelComm. If this version is called, all ghosted/shared entities should have a value for this tag (or the tag should have a default value). Operation is any MPI_Op, with result stored in destination tag. | |
ErrorCode | reduce_tags (const char *tag_name, const MPI_Op mpi_op, const Range &entities) |
Perform data reduction operation for all shared and ghosted entities Same as std::vector variant except for one tag specified by name. | |
ErrorCode | reduce_tags (Tag tag_handle, const MPI_Op mpi_op, const Range &entities) |
Perform data reduction operation for all shared and ghosted entities Same as std::vector variant except for one tag specified by handle. | |
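A sketch of a reduction, under the same assumptions as the exchange_tags sketch above (existing pcomm and temp_tag):

    // Combine the copies of the tag held by all sharing processors with
    // MPI_SUM; the result is stored in the destination tag (here the same tag).
    Range ents;  // empty: reduce over all shared entities
    ErrorCode rval = pcomm->reduce_tags( temp_tag, MPI_SUM, ents );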
ErrorCode | broadcast_entities (const int from_proc, Range &entities, const bool adjacencies=false, const bool tags=true) |
Broadcast all entities resident on from_proc to other processors This function assumes remote handles are *not* being stored, since (usually) every processor will know about the whole mesh. | |
ErrorCode | scatter_entities (const int from_proc, std::vector< Range > &entities, const bool adjacencies=false, const bool tags=true) |
Scatter entities on from_proc to other processors This function assumes remote handles are *not* being stored, since (usually) every processor will know about the whole mesh. | |
ErrorCode | send_recv_entities (std::vector< int > &send_procs, std::vector< std::vector< int > > &msgsizes, std::vector< std::vector< EntityHandle > > &senddata, std::vector< std::vector< EntityHandle > > &recvdata) |
Sends data to and receives data from a set of processors. | |
ErrorCode | update_remote_data (EntityHandle entity, std::vector< int > &procs, std::vector< EntityHandle > &handles) |
ErrorCode | get_remote_handles (EntityHandle *local_vec, EntityHandle *rem_vec, int num_ents, int to_proc) |
ErrorCode | resolve_shared_ents (EntityHandle this_set, Range &proc_ents, int resolve_dim=-1, int shared_dim=-1, Range *skin_ents=NULL, const Tag *id_tag=0) |
Resolve shared entities between processors. | |
ErrorCode | resolve_shared_ents (EntityHandle this_set, int resolve_dim=3, int shared_dim=-1, const Tag *id_tag=0) |
Resolve shared entities between processors. | |
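A sketch, assuming the mesh was loaded without the PARALLEL_RESOLVE_SHARED_ENTS read option and pcomm already exists:

    // Resolve sharing for 3D elements in the root set (handle 0);
    // shared_dim = -1 lets MOAB deduce the shared-entity dimension.
    ErrorCode rval = pcomm->resolve_shared_ents( 0, 3, -1 );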
ErrorCode | resolve_shared_sets (EntityHandle this_set, const Tag *id_tag=0) |
ErrorCode | resolve_shared_sets (Range &candidate_sets, Tag id_tag) |
ErrorCode | augment_default_sets_with_ghosts (EntityHandle file_set) |
ErrorCode | get_pstatus (EntityHandle entity, unsigned char &pstatus_val) |
Get the parallel status of an entity. | |
ErrorCode | get_pstatus_entities (int dim, unsigned char pstatus_val, Range &pstatus_ents) |
Get entities with the given pstatus bit(s) set. Returns any entities whose pstatus tag value v satisfies (v & pstatus_val) != 0. | |
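A sketch, assuming pcomm exists and MBParallelConventions.h is included for the PSTATUS_* bits:

    // Collect all faces (dimension 2) flagged as lying on a parallel interface.
    Range iface_ents;
    ErrorCode rval = pcomm->get_pstatus_entities( 2, PSTATUS_INTERFACE, iface_ents );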
ErrorCode | get_owner (EntityHandle entity, int &owner) |
Return the rank of the entity owner. | |
ErrorCode | get_owner_handle (EntityHandle entity, int &owner, EntityHandle &handle) |
Return the owner processor and handle of a given entity. | |
ErrorCode | get_sharing_data (const EntityHandle entity, int *ps, EntityHandle *hs, unsigned char &pstat, unsigned int &num_ps) |
Get the shared processors/handles for an entity. Arrays must be large enough to receive data for all sharing procs. Does *not* include this proc if only shared with one other proc. | |
ErrorCode | get_sharing_data (const EntityHandle entity, int *ps, EntityHandle *hs, unsigned char &pstat, int &num_ps) |
Get the shared processors/handles for an entity Same as other version but with int num_ps. | |
ErrorCode | get_sharing_data (const EntityHandle *entities, int num_entities, std::set< int > &procs, int op=Interface::INTERSECT) |
Get the intersection or union of all sharing processors. The processor set is cleared as part of this function. | |
ErrorCode | get_sharing_data (const Range &entities, std::set< int > &procs, int op=Interface::INTERSECT) |
Get the intersection or union of all sharing processors Same as previous variant but with range as input. | |
ErrorCode | get_shared_entities (int other_proc, Range &shared_ents, int dim=-1, const bool iface=false, const bool owned_filter=false) |
Get shared entities of specified dimension. If other_proc is -1, any shared entities are returned. If dim is -1, entities of all dimensions on the interface are returned. | |
ErrorCode | get_interface_procs (std::set< unsigned int > &iface_procs, const bool get_buffs=false) |
get processors with which this processor shares an interface | |
ErrorCode | get_comm_procs (std::set< unsigned int > &procs) |
get processors with which this processor communicates | |
ErrorCode | get_entityset_procs (EntityHandle entity_set, std::vector< unsigned > &ranks) const |
ErrorCode | get_entityset_owner (EntityHandle entity_set, unsigned &owner_rank, EntityHandle *remote_handle=0) const |
ErrorCode | get_entityset_local_handle (unsigned owning_rank, EntityHandle remote_handle, EntityHandle &local_handle) const |
Given set owner and handle on owner, find local set handle. | |
ErrorCode | get_shared_sets (Range &result) const |
Get all shared sets. | |
ErrorCode | get_entityset_owners (std::vector< unsigned > &ranks) const |
ErrorCode | get_owned_sets (unsigned owning_rank, Range &sets_out) const |
Get shared sets owned by process with specified rank. | |
const ProcConfig & | proc_config () const |
Get proc config for this communication object. | |
ProcConfig & | proc_config () |
Get proc config for this communication object. | |
unsigned | rank () const |
unsigned | size () const |
MPI_Comm | comm () const |
ErrorCode | get_shared_proc_tags (Tag &sharedp_tag, Tag &sharedps_tag, Tag &sharedh_tag, Tag &sharedhs_tag, Tag &pstatus_tag) |
return the tags used to indicate shared procs and handles | |
Range & | partition_sets () |
return partition, interface set ranges | |
const Range & | partition_sets () const |
Range & | interface_sets () |
const Range & | interface_sets () const |
Tag | sharedp_tag () |
return sharedp tag | |
Tag | sharedps_tag () |
return sharedps tag | |
Tag | sharedh_tag () |
return sharedh tag | |
Tag | sharedhs_tag () |
return sharedhs tag | |
Tag | pstatus_tag () |
return pstatus tag | |
Tag | partition_tag () |
return partition set tag | |
Tag | part_tag () |
void | print_pstatus (unsigned char pstat, std::string &ostr) |
print contents of pstatus value in human-readable form | |
void | print_pstatus (unsigned char pstat) |
print contents of pstatus value in human-readable form to std::cout | |
ErrorCode | get_part_entities (Range &ents, int dim=-1) |
return all the entities in parts owned locally | |
EntityHandle | get_partitioning () const |
ErrorCode | set_partitioning (EntityHandle h) |
ErrorCode | get_global_part_count (int &count_out) const |
ErrorCode | get_part_owner (int part_id, int &owner_out) const |
ErrorCode | get_part_id (EntityHandle part, int &id_out) const |
ErrorCode | get_part_handle (int id, EntityHandle &handle_out) const |
ErrorCode | create_part (EntityHandle &part_out) |
ErrorCode | destroy_part (EntityHandle part) |
ErrorCode | collective_sync_partition () |
ErrorCode | get_part_neighbor_ids (EntityHandle part, int neighbors_out[MAX_SHARING_PROCS], int &num_neighbors_out) |
ErrorCode | get_interface_sets (EntityHandle part, Range &iface_sets_out, int *adj_part_id=0) |
ErrorCode | get_owning_part (EntityHandle entity, int &owning_part_id_out, EntityHandle *owning_handle=0) |
ErrorCode | get_sharing_parts (EntityHandle entity, int part_ids_out[MAX_SHARING_PROCS], int &num_part_ids_out, EntityHandle remote_handles[MAX_SHARING_PROCS]=0) |
ErrorCode | filter_pstatus (Range &ents, const unsigned char pstatus_val, const unsigned char op, int to_proc=-1, Range *returned_ents=NULL) |
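A sketch of the common owned-entity filter, assuming pcomm and a moab::Core instance mb:

    // Keep only locally owned 3D entities: op PSTATUS_NOT with value
    // PSTATUS_NOT_OWNED selects entities whose NOT_OWNED bit is clear.
    Range ents, owned;
    ErrorCode rval = mb.get_entities_by_dimension( 0, 3, ents );
    rval = pcomm->filter_pstatus( ents, PSTATUS_NOT_OWNED, PSTATUS_NOT, -1, &owned );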
ErrorCode | get_iface_entities (int other_proc, int dim, Range &iface_ents) |
Get entities on interfaces shared with another proc. | |
Interface * | get_moab () const |
ErrorCode | clean_shared_tags (std::vector< Range * > &exchange_ents) |
ErrorCode | pack_buffer (Range &orig_ents, const bool adjacencies, const bool tags, const bool store_remote_handles, const int to_proc, Buffer *buff, TupleList *entprocs=NULL, Range *allsent=NULL) |
public because we want to unit test these externally | |
ErrorCode | unpack_buffer (unsigned char *buff_ptr, const bool store_remote_handles, const int from_proc, const int ind, std::vector< std::vector< EntityHandle > > &L1hloc, std::vector< std::vector< EntityHandle > > &L1hrem, std::vector< std::vector< int > > &L1p, std::vector< EntityHandle > &L2hloc, std::vector< EntityHandle > &L2hrem, std::vector< unsigned int > &L2p, std::vector< EntityHandle > &new_ents, const bool created_iface=false) |
ErrorCode | pack_entities (Range &entities, Buffer *buff, const bool store_remote_handles, const int to_proc, const bool is_iface, TupleList *entprocs=NULL, Range *allsent=NULL) |
ErrorCode | unpack_entities (unsigned char *&buff_ptr, const bool store_remote_handles, const int from_ind, const bool is_iface, std::vector< std::vector< EntityHandle > > &L1hloc, std::vector< std::vector< EntityHandle > > &L1hrem, std::vector< std::vector< int > > &L1p, std::vector< EntityHandle > &L2hloc, std::vector< EntityHandle > &L2hrem, std::vector< unsigned int > &L2p, std::vector< EntityHandle > &new_ents, const bool created_iface=false) |
unpack entities in buff_ptr | |
ErrorCode | check_all_shared_handles (bool print_em=false) |
ErrorCode | pack_shared_handles (std::vector< std::vector< SharedEntityData > > &send_data) |
ErrorCode | check_local_shared () |
ErrorCode | check_my_shared_handles (std::vector< std::vector< SharedEntityData > > &shents, const char *prefix=NULL) |
void | set_rank (unsigned int r) |
set rank for this pcomm; USED FOR TESTING ONLY! | |
void | set_size (unsigned int r) |
set size for this pcomm; USED FOR TESTING ONLY! | |
int | get_buffers (int to_proc, bool *is_new=NULL) |
const std::vector< unsigned int > & | buff_procs () const |
get buff processor vector | |
ErrorCode | unpack_remote_handles (unsigned int from_proc, unsigned char *&buff_ptr, std::vector< EntityHandle > &L2hloc, std::vector< EntityHandle > &L2hrem, std::vector< unsigned int > &L2p) |
ErrorCode | pack_remote_handles (std::vector< EntityHandle > &L1hloc, std::vector< EntityHandle > &L1hrem, std::vector< int > &procs, unsigned int to_proc, Buffer *buff) |
ErrorCode | create_interface_sets (std::map< std::vector< int >, std::vector< EntityHandle > > &proc_nvecs) |
ErrorCode | create_interface_sets (EntityHandle this_set, int resolve_dim, int shared_dim) |
ErrorCode | tag_shared_verts (TupleList &shared_ents, std::map< std::vector< int >, std::vector< EntityHandle > > &proc_nvecs, Range &proc_verts, unsigned int i_extra=1) |
ErrorCode | list_entities (const EntityHandle *ents, int num_ents) |
ErrorCode | list_entities (const Range &ents) |
void | set_send_request (int n_request) |
void | set_recv_request (int n_request) |
void | reset_all_buffers () |
reset message buffers to their initial state | |
void | set_debug_verbosity (int verb) |
set the verbosity level of output from this pcomm | |
int | get_debug_verbosity () |
get the verbosity level of output from this pcomm | |
ErrorCode | gather_data (Range &gather_ents, Tag &tag_handle, Tag id_tag=0, EntityHandle gather_set=0, int root_proc_rank=0) |
ErrorCode | settle_intersection_points (Range &edges, Range &shared_edges_owned, std::vector< std::vector< EntityHandle > * > &extraNodesVec, double tolerance) |
ErrorCode | delete_entities (Range &to_delete) |
ErrorCode | correct_thin_ghost_layers () |
Static Public Member Functions | |
static ParallelComm * | get_pcomm (Interface *impl, const int index) |
get the indexed pcomm object from the interface | |
static ParallelComm * | get_pcomm (Interface *impl, EntityHandle partitioning, const MPI_Comm *comm=0) |
get the pcomm object associated with a partitioning set, optionally creating it over the given communicator | |
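A sketch of the usual lookup-or-create pattern, assuming a moab::Core instance mb:

    // Index 0 is the id of the first ParallelComm attached to this Interface;
    // get_pcomm returns NULL if no instance with that id exists.
    ParallelComm* pcomm = ParallelComm::get_pcomm( &mb, 0 );
    if( !pcomm ) pcomm = new ParallelComm( &mb, MPI_COMM_WORLD );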
static ErrorCode | get_all_pcomm (Interface *impl, std::vector< ParallelComm * > &list) |
static ErrorCode | exchange_ghost_cells (ParallelComm **pc, unsigned int num_procs, int ghost_dim, int bridge_dim, int num_layers, int addl_ents, bool store_remote_handles, EntityHandle *file_sets=NULL) |
Static version of exchange_ghost_cells, exchanging info through buffers rather than messages. | |
static ErrorCode | resolve_shared_ents (ParallelComm **pc, const unsigned int np, EntityHandle this_set, const int to_dim) |
static Tag | pcomm_tag (Interface *impl, bool create_if_missing=true) |
return pcomm tag; passes in impl because this is a static function | |
static ErrorCode | check_all_shared_handles (ParallelComm **pcs, int num_pcs) |
Static Public Attributes | |
static unsigned char | PROC_SHARED |
static unsigned char | PROC_OWNER |
static const unsigned int | INITIAL_BUFF_SIZE = 1024 |
Private Member Functions | |
ErrorCode | reduce_void (int tag_data_type, const MPI_Op mpi_op, int num_ents, void *old_vals, void *new_vals) |
template<class T > | |
ErrorCode | reduce (const MPI_Op mpi_op, int num_ents, void *old_vals, void *new_vals) |
void | print_debug_isend (int from, int to, unsigned char *buff, int tag, int size) |
void | print_debug_irecv (int to, int from, unsigned char *buff, int size, int tag, int incoming) |
void | print_debug_recd (MPI_Status status) |
void | print_debug_waitany (std::vector< MPI_Request > &reqs, int tag, int proc) |
void | initialize () |
ErrorCode | set_sharing_data (EntityHandle ent, unsigned char pstatus, int old_nump, int new_nump, int *ps, EntityHandle *hs) |
ErrorCode | check_clean_iface (Range &allsent) |
void | define_mpe () |
ErrorCode | get_sent_ents (const bool is_iface, const int bridge_dim, const int ghost_dim, const int num_layers, const int addl_ents, Range *sent_ents, Range &allsent, TupleList &entprocs) |
ErrorCode | set_pstatus_entities (Range &pstatus_ents, unsigned char pstatus_val, bool lower_dim_ents=false, bool verts_too=true, int operation=Interface::UNION) |
Set pstatus values on entities. | |
ErrorCode | set_pstatus_entities (EntityHandle *pstatus_ents, int num_ents, unsigned char pstatus_val, bool lower_dim_ents=false, bool verts_too=true, int operation=Interface::UNION) |
Set pstatus values on entities (vector-based function) | |
int | estimate_ents_buffer_size (Range &entities, const bool store_remote_handles) |
estimate size required to pack entities | |
int | estimate_sets_buffer_size (Range &entities, const bool store_remote_handles) |
estimate size required to pack sets | |
ErrorCode | send_buffer (const unsigned int to_proc, Buffer *send_buff, const int msg_tag, MPI_Request &send_req, MPI_Request &ack_recv_req, int *ack_buff, int &this_incoming, int next_mesg_tag=-1, Buffer *next_recv_buff=NULL, MPI_Request *next_recv_req=NULL, int *next_incoming=NULL) |
send the indicated buffer, possibly sending size first | |
ErrorCode | recv_buffer (int mesg_tag_expected, const MPI_Status &mpi_status, Buffer *recv_buff, MPI_Request &recv_2nd_req, MPI_Request &ack_req, int &this_incoming, Buffer *send_buff, MPI_Request &send_req, MPI_Request &sent_ack_req, bool &done, Buffer *next_buff=NULL, int next_tag=-1, MPI_Request *next_req=NULL, int *next_incoming=NULL) |
ErrorCode | pack_entity_seq (const int nodes_per_entity, const bool store_remote_handles, const int to_proc, Range &these_ents, std::vector< EntityHandle > &entities, Buffer *buff) |
ErrorCode | print_buffer (unsigned char *buff_ptr, int mesg_type, int from_proc, bool sent) |
ErrorCode | unpack_iface_entities (unsigned char *&buff_ptr, const int from_proc, const int ind, std::vector< EntityHandle > &recd_ents) |
ErrorCode | pack_sets (Range &entities, Buffer *buff, const bool store_handles, const int to_proc) |
ErrorCode | unpack_sets (unsigned char *&buff_ptr, std::vector< EntityHandle > &entities, const bool store_handles, const int to_proc) |
ErrorCode | pack_adjacencies (Range &entities, Range::const_iterator &start_rit, Range &whole_range, unsigned char *&buff_ptr, int &count, const bool just_count, const bool store_handles, const int to_proc) |
ErrorCode | unpack_adjacencies (unsigned char *&buff_ptr, Range &entities, const bool store_handles, const int from_proc) |
ErrorCode | unpack_remote_handles (unsigned int from_proc, const unsigned char *buff_ptr, std::vector< EntityHandle > &L2hloc, std::vector< EntityHandle > &L2hrem, std::vector< unsigned int > &L2p) |
ErrorCode | find_existing_entity (const bool is_iface, const int owner_p, const EntityHandle owner_h, const int num_ents, const EntityHandle *connect, const int num_connect, const EntityType this_type, std::vector< EntityHandle > &L2hloc, std::vector< EntityHandle > &L2hrem, std::vector< unsigned int > &L2p, EntityHandle &new_h) |
given connectivity and type, find an existing entity, if there is one | |
ErrorCode | build_sharedhps_list (const EntityHandle entity, const unsigned char pstatus, const int sharedp, const std::set< unsigned int > &procs, unsigned int &num_ents, int *tmp_procs, EntityHandle *tmp_handles) |
ErrorCode | get_tag_send_list (const Range &all_entities, std::vector< Tag > &all_tags, std::vector< Range > &tag_ranges) |
Get list of tags for which to exchange data. | |
ErrorCode | pack_tags (Range &entities, const std::vector< Tag > &src_tags, const std::vector< Tag > &dst_tags, const std::vector< Range > &tag_ranges, Buffer *buff, const bool store_handles, const int to_proc) |
Serialize entity tag data. | |
ErrorCode | packed_tag_size (Tag source_tag, const Range &entities, int &count_out) |
Calculate buffer size required to pack tag data. | |
ErrorCode | pack_tag (Tag source_tag, Tag destination_tag, const Range &entities, const std::vector< EntityHandle > &whole_range, Buffer *buff, const bool store_remote_handles, const int to_proc) |
Serialize tag data. | |
ErrorCode | unpack_tags (unsigned char *&buff_ptr, std::vector< EntityHandle > &entities, const bool store_handles, const int to_proc, const MPI_Op *const mpi_op=NULL) |
ErrorCode | tag_shared_verts (TupleList &shared_verts, Range *skin_ents, std::map< std::vector< int >, std::vector< EntityHandle > > &proc_nvecs, Range &proc_verts) |
ErrorCode | get_proc_nvecs (int resolve_dim, int shared_dim, Range *skin_ents, std::map< std::vector< int >, std::vector< EntityHandle > > &proc_nvecs) |
ErrorCode | create_iface_pc_links () |
ErrorCode | pack_range_map (Range &this_range, EntityHandle actual_start, HandleMap &handle_map) |
bool | is_iface_proc (EntityHandle this_set, int to_proc) |
returns true if the set is an interface shared with to_proc | |
ErrorCode | update_iface_sets (Range &sent_ents, std::vector< EntityHandle > &remote_handles, int from_proc) |
ErrorCode | get_ghosted_entities (int bridge_dim, int ghost_dim, int to_proc, int num_layers, int addl_ents, Range &ghosted_ents) |
ErrorCode | add_verts (Range &sent_ents) |
add vertices adjacent to entities in this list | |
ErrorCode | exchange_all_shared_handles (std::vector< std::vector< SharedEntityData > > &send_data, std::vector< std::vector< SharedEntityData > > &result) |
ErrorCode | get_remote_handles (const bool store_remote_handles, EntityHandle *from_vec, EntityHandle *to_vec_tmp, int num_ents, int to_proc, const std::vector< EntityHandle > &new_ents) |
ErrorCode | get_remote_handles (const bool store_remote_handles, const Range &from_range, Range &to_range, int to_proc, const std::vector< EntityHandle > &new_ents) |
ErrorCode | get_remote_handles (const bool store_remote_handles, const Range &from_range, EntityHandle *to_vec, int to_proc, const std::vector< EntityHandle > &new_ents) |
same as other version, except packs range into vector | |
ErrorCode | get_local_handles (EntityHandle *from_vec, int num_ents, const Range &new_ents) |
ErrorCode | get_local_handles (const Range &remote_handles, Range &local_handles, const std::vector< EntityHandle > &new_ents) |
same as above except puts results in range | |
ErrorCode | get_local_handles (EntityHandle *from_vec, int num_ents, const std::vector< EntityHandle > &new_ents) |
same as above except gets new_ents from vector | |
ErrorCode | update_remote_data (Range &local_range, Range &remote_range, int other_proc, const unsigned char add_pstat) |
ErrorCode | update_remote_data (const EntityHandle new_h, const int *ps, const EntityHandle *hs, const int num_ps, const unsigned char add_pstat) |
ErrorCode | update_remote_data_old (const EntityHandle new_h, const int *ps, const EntityHandle *hs, const int num_ps, const unsigned char add_pstat) |
ErrorCode | tag_iface_entities () |
Set pstatus tag interface bit on entities in sets passed in. | |
int | add_pcomm (ParallelComm *pc) |
add a pc to the iface instance tag PARALLEL_COMM | |
void | remove_pcomm (ParallelComm *pc) |
remove a pc from the iface instance tag PARALLEL_COMM | |
ErrorCode | check_sent_ents (Range &allsent) |
ErrorCode | assign_entities_part (std::vector< EntityHandle > &entities, const int proc) |
assign entities to the input processor part | |
ErrorCode | remove_entities_part (Range &entities, const int proc) |
remove entities from the input processor part | |
void | delete_all_buffers () |
delete all message buffers, freeing any memory held by them | |
Private Attributes | |
Interface * | mbImpl |
MOAB interface associated with this ParallelComm instance. | |
ProcConfig | procConfig |
Proc config object, keeps info on parallel stuff. | |
SequenceManager * | sequenceManager |
Sequence manager, to get more efficient access to entities. | |
Error * | errorHandler |
Error handler. | |
std::vector< Buffer * > | localOwnedBuffs |
more data buffers, proc-specific | |
std::vector< Buffer * > | remoteOwnedBuffs |
std::vector< MPI_Request > | sendReqs |
request objects, may be used if store_remote_handles is used | |
std::vector< MPI_Request > | recvReqs |
receive request objects | |
std::vector< MPI_Request > | recvRemotehReqs |
std::vector< unsigned int > | buffProcs |
processor rank for each buffer index | |
Range | partitionSets |
the partition and interface sets for this communication instance | |
Range | interfaceSets |
std::set< EntityHandle > | sharedEnts |
all local entities shared with others, whether ghost or ghosted | |
Tag | sharedpTag |
tags used to save sharing procs and handles | |
Tag | sharedpsTag |
Tag | sharedhTag |
Tag | sharedhsTag |
Tag | pstatusTag |
Tag | ifaceSetsTag |
Tag | partitionTag |
int | globalPartCount |
Cache of global part count. | |
EntityHandle | partitioningSet |
entity set containing all parts | |
std::ofstream | myFile |
int | pcommID |
int | ackbuff |
DebugOutput * | myDebug |
used to set verbosity level and to report output | |
SharedSetData * | sharedSetData |
Data about shared sets. | |
Friends | |
class | ParallelMergeMesh |
Parallel communications in MOAB.
This class implements methods to communicate mesh between processors.
Definition at line 54 of file ParallelComm.hpp.
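For orientation, a minimal end-to-end sketch (not part of the generated documentation); the file name, read options, and ghosting parameters are illustrative assumptions:

    #include "moab/Core.hpp"
    #include "moab/ParallelComm.hpp"
    #include <mpi.h>
    using namespace moab;

    int main( int argc, char** argv )
    {
        MPI_Init( &argc, &argv );
        Core mb;
        ParallelComm* pcomm = new ParallelComm( &mb, MPI_COMM_WORLD );
        // Read each part of a pre-partitioned file onto its own rank and
        // resolve shared entities during the read
        ErrorCode rval = mb.load_file( "mesh.h5m", 0,
            "PARALLEL=READ_PART;PARTITION=PARALLEL_PARTITION;"
            "PARALLEL_RESOLVE_SHARED_ENTS" );
        if( MB_SUCCESS != rval ) return 1;
        // One layer of 3D ghost elements, bridged through shared vertices
        rval = pcomm->exchange_ghost_cells( 3, 0, 1, 0, true );
        if( MB_SUCCESS != rval ) return 1;
        delete pcomm;
        MPI_Finalize();
        return 0;
    }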
moab::ParallelComm::ParallelComm( Interface* impl, MPI_Comm comm, int* pcomm_id_out = 0 )
constructor
Definition at line 313 of file ParallelComm.cpp.
References initialize(), pcommID, moab::ProcConfig::proc_rank(), procConfig, and sharedSetData.
Referenced by get_pcomm().
    : mbImpl( impl ), procConfig( cm ), sharedpTag( 0 ), sharedpsTag( 0 ), sharedhTag( 0 ),
      sharedhsTag( 0 ), pstatusTag( 0 ), ifaceSetsTag( 0 ), partitionTag( 0 ),
      globalPartCount( -1 ), partitioningSet( 0 ), myDebug( NULL )
{
    initialize();
    sharedSetData = new SharedSetData( *impl, pcommID, procConfig.proc_rank() );
    if( id ) *id = pcommID;
}
moab::ParallelComm::ParallelComm( Interface* impl, std::vector< unsigned char >& tmp_buff, MPI_Comm comm, int* pcomm_id_out = 0 )
constructor taking packed buffer, for testing
Definition at line 323 of file ParallelComm.cpp.
References initialize(), pcommID, moab::ProcConfig::proc_rank(), procConfig, and sharedSetData.
    : mbImpl( impl ), procConfig( cm ), sharedpTag( 0 ), sharedpsTag( 0 ), sharedhTag( 0 ),
      sharedhsTag( 0 ), pstatusTag( 0 ), ifaceSetsTag( 0 ), partitionTag( 0 ),
      globalPartCount( -1 ), partitioningSet( 0 ), myDebug( NULL )
{
    initialize();
    sharedSetData = new SharedSetData( *impl, pcommID, procConfig.proc_rank() );
    if( id ) *id = pcommID;
}
destructor
Definition at line 333 of file ParallelComm.cpp.
References delete_all_buffers(), myDebug, remove_pcomm(), and sharedSetData.
{
    remove_pcomm( this );
    delete_all_buffers();
    delete myDebug;
    delete sharedSetData;
}
int moab::ParallelComm::add_pcomm( ParallelComm* pc ) [private]
add a pc to the iface instance tag PARALLEL_COMM
Definition at line 374 of file ParallelComm.cpp.
References ErrorCode, MAX_SHARING_PROCS, MB_SUCCESS, MB_TAG_NOT_FOUND, mbImpl, pcomm_tag(), moab::Interface::tag_get_data(), and moab::Interface::tag_set_data().
Referenced by initialize().
{
    // Add this pcomm to instance tag
    std::vector< ParallelComm* > pc_array( MAX_SHARING_PROCS, (ParallelComm*)NULL );
    Tag pc_tag = pcomm_tag( mbImpl, true );
    assert( 0 != pc_tag );

    const EntityHandle root = 0;
    ErrorCode result = mbImpl->tag_get_data( pc_tag, &root, 1, (void*)&pc_array[0] );
    if( MB_SUCCESS != result && MB_TAG_NOT_FOUND != result ) return -1;
    int index = 0;
    while( index < MAX_SHARING_PROCS && pc_array[index] )
        index++;

    if( index == MAX_SHARING_PROCS )
    {
        index = -1;
        assert( false );
    }
    else
    {
        pc_array[index] = pc;
        mbImpl->tag_set_data( pc_tag, &root, 1, (void*)&pc_array[0] );
    }
    return index;
}
ErrorCode moab::ParallelComm::add_verts( Range& sent_ents ) [private]
add vertices adjacent to entities in this list
Definition at line 7502 of file ParallelComm.cpp.
References moab::Range::begin(), moab::Range::equal_range(), ErrorCode, moab::Interface::get_adjacencies(), moab::Interface::get_connectivity(), moab::Interface::get_entities_by_type(), MB_CHK_SET_ERR, MB_SUCCESS, MBENTITYSET, mbImpl, MBPOLYHEDRON, MBVERTEX, moab::Range::subset_by_type(), and moab::Interface::UNION.
Referenced by broadcast_entities(), exchange_owned_mesh(), get_ghosted_entities(), scatter_entities(), and send_entities().
{
    // Get the verts adj to these entities, since we'll have to send those too
    // First check sets
    std::pair< Range::const_iterator, Range::const_iterator > set_range = sent_ents.equal_range( MBENTITYSET );
    ErrorCode result = MB_SUCCESS, tmp_result;
    for( Range::const_iterator rit = set_range.first; rit != set_range.second; ++rit )
    {
        tmp_result = mbImpl->get_entities_by_type( *rit, MBVERTEX, sent_ents );MB_CHK_SET_ERR( tmp_result, "Failed to get contained verts" );
    }

    // Now non-sets
    Range tmp_ents;
    std::copy( sent_ents.begin(), set_range.first, range_inserter( tmp_ents ) );
    result = mbImpl->get_adjacencies( tmp_ents, 0, false, sent_ents, Interface::UNION );MB_CHK_SET_ERR( result, "Failed to get vertices adj to ghosted ents" );

    // if polyhedra, need to add all faces from there
    Range polyhedra = sent_ents.subset_by_type( MBPOLYHEDRON );
    // get all faces adjacent to every polyhedra
    result = mbImpl->get_connectivity( polyhedra, sent_ents );MB_CHK_SET_ERR( result, "Failed to get polyhedra faces" );
    return result;
}
ErrorCode moab::ParallelComm::assign_entities_part( std::vector< EntityHandle >& entities, const int proc ) [private]
assign entities to the input processor part
Definition at line 7297 of file ParallelComm.cpp.
References moab::Interface::add_entities(), ErrorCode, get_part_handle(), MB_CHK_SET_ERR, MB_SUCCESS, and mbImpl.
Referenced by exchange_owned_mesh(), and recv_entities().
{
    EntityHandle part_set;
    ErrorCode result = get_part_handle( proc, part_set );MB_CHK_SET_ERR( result, "Failed to get part handle" );

    if( part_set > 0 )
    {
        result = mbImpl->add_entities( part_set, &entities[0], entities.size() );MB_CHK_SET_ERR( result, "Failed to add entities to part set" );
    }

    return MB_SUCCESS;
}
ErrorCode moab::ParallelComm::assign_global_ids( EntityHandle this_set, const int dimension, const int start_id = 1, const bool largest_dim_only = true, const bool parallel = true, const bool owned_only = false )
assign a global id space, for largest-dimension or all entities (and in either case for vertices too)
Parameters:
    owned_only   If true, do not get global IDs for non-owned entities from remote processors.
Assign a global id space, for largest-dimension or all entities (and in either case for vertices too)
Definition at line 421 of file ParallelComm.cpp.
References dim, moab::Range::end(), entities, ErrorCode, moab::Interface::get_entities_by_dimension(), moab::Range::insert(), MB_CHK_SET_ERR, mbImpl, PSTATUS_NOT_OWNED, pstatus_tag(), size(), moab::subtract(), and moab::Interface::tag_get_data().
Referenced by MetisPartitioner::assemble_graph(), ZoltanPartitioner::assemble_graph(), check_global_ids(), compute_dual_mesh(), create_fine_mesh(), moab::NCHelperDomain::create_mesh(), moab::NCHelperScrip::create_mesh(), iMeshP_assignGlobalIds(), main(), MetisPartitioner::partition_mesh(), resolve_shared_ents(), and test_assign_global_ids().
{
    Range entities[4];
    ErrorCode result;
    std::vector< unsigned char > pstatus;
    for( int dim = 0; dim <= dimension; dim++ )
    {
        if( dim == 0 || !largest_dim_only || dim == dimension )
        {
            result = mbImpl->get_entities_by_dimension( this_set, dim, entities[dim] );MB_CHK_SET_ERR( result, "Failed to get vertices in assign_global_ids" );
        }

        // Need to filter out non-locally-owned entities!!!
        pstatus.resize( entities[dim].size() );
        result = mbImpl->tag_get_data( pstatus_tag(), entities[dim], &pstatus[0] );MB_CHK_SET_ERR( result, "Failed to get pstatus in assign_global_ids" );

        Range dum_range;
        Range::iterator rit;
        unsigned int i;
        for( rit = entities[dim].begin(), i = 0; rit != entities[dim].end(); ++rit, i++ )
            if( pstatus[i] & PSTATUS_NOT_OWNED ) dum_range.insert( *rit );
        entities[dim] = subtract( entities[dim], dum_range );
    }

    return assign_global_ids( entities, dimension, start_id, parallel, owned_only );
}
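A sketch of a typical call, assuming pcomm exists and sharing data has already been resolved:

    // Assign contiguous global IDs starting at 1 to the 3D elements (and
    // vertices) in the root set, skipping non-owned copies.
    ErrorCode rval = pcomm->assign_global_ids( 0, 3 );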
ErrorCode moab::ParallelComm::assign_global_ids( Range entities[], const int dimension, const int start_id, const bool parallel, const bool owned_only )
assign a global id space, for largest-dimension or all entities (and in either case for vertices too)
Assign a global id space, for largest-dimension or all entities (and in either case for vertices too)
Definition at line 455 of file ParallelComm.cpp.
References dim, moab::Range::end(), ErrorCode, exchange_tags(), moab::Interface::globalId_tag(), MB_CHK_SET_ERR, MB_SUCCESS, mbImpl, moab::ProcConfig::proc_comm(), moab::ProcConfig::proc_rank(), moab::ProcConfig::proc_size(), procConfig, moab::Range::size(), size(), and moab::Interface::tag_set_data().
{
    int local_num_elements[4];
    ErrorCode result;
    for( int dim = 0; dim <= dimension; dim++ )
    {
        local_num_elements[dim] = entities[dim].size();
    }

    // Communicate numbers
    std::vector< int > num_elements( procConfig.proc_size() * 4 );
#ifdef MOAB_HAVE_MPI
    if( procConfig.proc_size() > 1 && parallel )
    {
        int retval = MPI_Allgather( local_num_elements, 4, MPI_INT, &num_elements[0], 4, MPI_INT, procConfig.proc_comm() );
        if( 0 != retval ) return MB_FAILURE;
    }
    else
#endif
        for( int dim = 0; dim < 4; dim++ )
            num_elements[dim] = local_num_elements[dim];

    // My entities start at one greater than total_elems[d]
    int total_elems[4] = { start_id, start_id, start_id, start_id };

    for( unsigned int proc = 0; proc < procConfig.proc_rank(); proc++ )
    {
        for( int dim = 0; dim < 4; dim++ )
            total_elems[dim] += num_elements[4 * proc + dim];
    }

    // Assign global ids now
    Tag gid_tag = mbImpl->globalId_tag();

    for( int dim = 0; dim < 4; dim++ )
    {
        if( entities[dim].empty() ) continue;
        num_elements.resize( entities[dim].size() );
        int i = 0;
        for( Range::iterator rit = entities[dim].begin(); rit != entities[dim].end(); ++rit )
            num_elements[i++] = total_elems[dim]++;

        result = mbImpl->tag_set_data( gid_tag, entities[dim], &num_elements[0] );MB_CHK_SET_ERR( result, "Failed to set global id tag in assign_global_ids" );
    }

    if( owned_only ) return MB_SUCCESS;

    // Exchange tags
    for( int dim = 1; dim < 4; dim++ )
        entities[0].merge( entities[dim] );

    return exchange_tags( gid_tag, entities[0] );
}
extend shared sets with ghost entities. After ghosting, ghost entities do not yet have information about the material set, partition set, Neumann or Dirichlet set they could belong to. This method assigns ghosted entities to those special entity sets. In some cases we might even have to create those sets, if they do not exist yet on the local processor.
The special entity sets all have a unique identifier, in the form of an integer tag on the set. The shared sets data is not used, because we do not use the geometry sets, as they are not uniquely identified.
Parameters:
    file_set     file set used per application
Definition at line 4766 of file ParallelComm.cpp.
References moab::Interface::add_entities(), moab::Interface::contains_entities(), moab::Interface::create_meshset(), moab::ProcConfig::crystal_router(), DIRICHLET_SET_TAG_NAME, moab::TupleList::enableWriteAccess(), ErrorCode, get_debug_verbosity(), moab::Interface::get_entities_by_type_and_tag(), moab::TupleList::get_max(), moab::TupleList::get_n(), get_sharing_data(), moab::Interface::globalId_tag(), moab::TupleList::inc_n(), moab::TupleList::initialize(), MATERIAL_SET_TAG_NAME, MAX_SHARING_PROCS, MB_CHK_SET_ERR, MB_SUCCESS, MB_TAG_ANY, MB_TYPE_INTEGER, MBENTITYSET, mbImpl, MESHSET_SET, NEUMANN_SET_TAG_NAME, PARALLEL_PARTITION_TAG_NAME, moab::TupleList::print(), moab::ProcConfig::proc_comm(), moab::ProcConfig::proc_size(), procConfig, PSTATUS_NOT_OWNED, rank(), moab::TupleList::resize(), sharedEnts, moab::Range::size(), size(), moab::Interface::tag_get_data(), moab::Interface::tag_get_handle(), moab::Interface::tag_set_data(), moab::Interface::UNION, moab::TupleList::vi_rd, moab::TupleList::vi_wr, moab::TupleList::vul_rd, and moab::TupleList::vul_wr.
Referenced by moab::ReadParallel::load_file(), and test_read_and_ghost_after().
{
    // gather all default sets we are interested in, material, neumann, etc
    // we will skip geometry sets, because they are not uniquely identified with their tag value
    // maybe we will add another tag, like category

    if( procConfig.proc_size() < 2 ) return MB_SUCCESS;  // no reason to stop by
    const char* const shared_set_tag_names[] = { MATERIAL_SET_TAG_NAME, DIRICHLET_SET_TAG_NAME,
                                                 NEUMANN_SET_TAG_NAME, PARALLEL_PARTITION_TAG_NAME };

    int num_tags = sizeof( shared_set_tag_names ) / sizeof( shared_set_tag_names[0] );

    Range* rangeSets = new Range[num_tags];
    Tag* tags = new Tag[num_tags + 1];  // one extra for global id tag, which is an int, so far

    int my_rank = rank();
    int** tagVals = new int*[num_tags];
    for( int i = 0; i < num_tags; i++ )
        tagVals[i] = NULL;
    ErrorCode rval;

    // for each tag, we keep a local map, from the value to the actual set with that value
    // we assume that the tag values are unique, for a given set, otherwise we
    // do not know to which set to add the entity

    typedef std::map< int, EntityHandle > MVal;
    typedef std::map< int, EntityHandle >::iterator itMVal;
    MVal* localMaps = new MVal[num_tags];

    for( int i = 0; i < num_tags; i++ )
    {
        rval = mbImpl->tag_get_handle( shared_set_tag_names[i], 1, MB_TYPE_INTEGER, tags[i], MB_TAG_ANY );
        if( MB_SUCCESS != rval ) continue;
        rval = mbImpl->get_entities_by_type_and_tag( file_set, MBENTITYSET, &( tags[i] ), 0, 1, rangeSets[i], Interface::UNION );MB_CHK_SET_ERR( rval, "can't get sets with a tag" );

        if( rangeSets[i].size() > 0 )
        {
            tagVals[i] = new int[rangeSets[i].size()];
            // fill up with the tag values
            rval = mbImpl->tag_get_data( tags[i], rangeSets[i], tagVals[i] );MB_CHK_SET_ERR( rval, "can't get set tag values" );
            // now for inverse mapping:
            for( int j = 0; j < (int)rangeSets[i].size(); j++ )
            {
                localMaps[i][tagVals[i][j]] = rangeSets[i][j];
            }
        }
    }
    // get the global id tag too
    tags[num_tags] = mbImpl->globalId_tag();

    TupleList remoteEnts;
    // processor to send to, type of tag (0-mat,) tag value, remote handle
    //                                    1-diri
    //                                    2-neum
    //                                    3-part
    //
    int initialSize = (int)sharedEnts.size();  // estimate that on average, each shared ent
    // will be sent to one processor, for one tag
    // we will actually send only entities that are owned locally, and from those
    // only those that do have a special tag (material, neumann, etc)
    // if we exceed the capacity, we resize the tuple
    remoteEnts.initialize( 3, 0, 1, 0, initialSize );
    remoteEnts.enableWriteAccess();

    // now, for each owned entity, get the remote handle(s) and Proc(s), and verify if it
    // belongs to one of the sets; if yes, create a tuple and append it

    std::set< EntityHandle > own_and_sha;
    int ir = 0, jr = 0;
    for( std::set< EntityHandle >::iterator vit = sharedEnts.begin(); vit != sharedEnts.end(); ++vit )
    {
        // ghosted eh
        EntityHandle geh = *vit;
        if( own_and_sha.find( geh ) != own_and_sha.end() )  // already encountered
            continue;
        int procs[MAX_SHARING_PROCS];
        EntityHandle handles[MAX_SHARING_PROCS];
        int nprocs;
        unsigned char pstat;
        rval = get_sharing_data( geh, procs, handles, pstat, nprocs );
        if( rval != MB_SUCCESS )
        {
            for( int i = 0; i < num_tags; i++ )
                delete[] tagVals[i];
            delete[] tagVals;
            MB_CHK_SET_ERR( rval, "Failed to get sharing data" );
        }
        if( pstat & PSTATUS_NOT_OWNED ) continue;  // we will send info only for entities that we own
        own_and_sha.insert( geh );
        for( int i = 0; i < num_tags; i++ )
        {
            for( int j = 0; j < (int)rangeSets[i].size(); j++ )
            {
                EntityHandle specialSet = rangeSets[i][j];  // this set has tag i, value tagVals[i][j];
                if( mbImpl->contains_entities( specialSet, &geh, 1 ) )
                {
                    // this ghosted entity is in a special set, so form the tuple
                    // to send to the processors that do not own this
                    for( int k = 0; k < nprocs; k++ )
                    {
                        if( procs[k] != my_rank )
                        {
                            if( remoteEnts.get_n() >= remoteEnts.get_max() - 1 )
                            {
                                // resize, so we do not overflow
                                int oldSize = remoteEnts.get_max();
                                // increase with 50% the capacity
                                remoteEnts.resize( oldSize + oldSize / 2 + 1 );
                            }
                            remoteEnts.vi_wr[ir++]  = procs[k];       // send to proc
                            remoteEnts.vi_wr[ir++]  = i;              // for the tags [i] (0-3)
                            remoteEnts.vi_wr[ir++]  = tagVals[i][j];  // actual value of the tag
                            remoteEnts.vul_wr[jr++] = handles[k];
                            remoteEnts.inc_n();
                        }
                    }
                }
            }
        }
        // if the local entity has a global id, send it too, so we avoid
        // another "exchange_tags" for global id
        int gid;
        rval = mbImpl->tag_get_data( tags[num_tags], &geh, 1, &gid );MB_CHK_SET_ERR( rval, "Failed to get global id" );
        if( gid != 0 )
        {
            for( int k = 0; k < nprocs; k++ )
            {
                if( procs[k] != my_rank )
                {
                    if( remoteEnts.get_n() >= remoteEnts.get_max() - 1 )
                    {
                        // resize, so we do not overflow
                        int oldSize = remoteEnts.get_max();
                        // increase with 50% the capacity
                        remoteEnts.resize( oldSize + oldSize / 2 + 1 );
                    }
                    remoteEnts.vi_wr[ir++]  = procs[k];  // send to proc
                    remoteEnts.vi_wr[ir++]  = num_tags;  // for the tags [j] (4)
                    remoteEnts.vi_wr[ir++]  = gid;       // actual value of the tag
                    remoteEnts.vul_wr[jr++] = handles[k];
                    remoteEnts.inc_n();
                }
            }
        }
    }

#ifndef NDEBUG
    if( my_rank == 1 && 1 == get_debug_verbosity() ) remoteEnts.print( " on rank 1, before augment routing" );
    MPI_Barrier( procConfig.proc_comm() );
    int sentEnts = remoteEnts.get_n();
    assert( ( sentEnts == jr ) && ( 3 * sentEnts == ir ) );
#endif
    // exchange the info now, and send to
    gs_data::crystal_data* cd = this->procConfig.crystal_router();
    // All communication happens here; no other mpi calls
    // Also, this is a collective call
    rval = cd->gs_transfer( 1, remoteEnts, 0 );MB_CHK_SET_ERR( rval, "Error in tuple transfer" );
#ifndef NDEBUG
    if( my_rank == 0 && 1 == get_debug_verbosity() ) remoteEnts.print( " on rank 0, after augment routing" );
    MPI_Barrier( procConfig.proc_comm() );
#endif

    // now process the data received from other processors
    int received = remoteEnts.get_n();
    for( int i = 0; i < received; i++ )
    {
        // int from = ents_to_delete.vi_rd[i];
        EntityHandle geh = (EntityHandle)remoteEnts.vul_rd[i];
        int from_proc = remoteEnts.vi_rd[3 * i];
        if( my_rank == from_proc )
            std::cout << " unexpected receive from my rank " << my_rank << " during augmenting with ghosts\n ";
        int tag_type = remoteEnts.vi_rd[3 * i + 1];
        assert( ( 0 <= tag_type ) && ( tag_type <= num_tags ) );
        int value = remoteEnts.vi_rd[3 * i + 2];
        if( tag_type == num_tags )
        {
            // it is global id
            rval = mbImpl->tag_set_data( tags[num_tags], &geh, 1, &value );MB_CHK_SET_ERR( rval, "Error in setting gid tag" );
        }
        else
        {
            // now, based on value and tag type, see if we have that value in the map
            MVal& lmap = localMaps[tag_type];
            itMVal itm = lmap.find( value );
            if( itm == lmap.end() )
            {
                // the value was not found yet in the local map, so we have to create the set
                EntityHandle newSet;
                rval = mbImpl->create_meshset( MESHSET_SET, newSet );MB_CHK_SET_ERR( rval, "can't create new set" );
                lmap[value] = newSet;
                // set the tag value
                rval = mbImpl->tag_set_data( tags[tag_type], &newSet, 1, &value );MB_CHK_SET_ERR( rval, "can't set tag for new set" );

                // we also need to add the new created set to the file set, if not null
                if( file_set )
                {
                    rval = mbImpl->add_entities( file_set, &newSet, 1 );MB_CHK_SET_ERR( rval, "can't add new set to the file set" );
                }
            }
            // add the entity to the set pointed to by the map
            rval = mbImpl->add_entities( lmap[value], &geh, 1 );MB_CHK_SET_ERR( rval, "can't add ghost ent to the set" );
        }
    }

    for( int i = 0; i < num_tags; i++ )
        delete[] tagVals[i];
    delete[] tagVals;

    delete[] rangeSets;
    delete[] tags;
    delete[] localMaps;
    return MB_SUCCESS;
}
ErrorCode moab::ParallelComm::broadcast_entities( const int from_proc, Range& entities, const bool adjacencies = false, const bool tags = true )
Broadcast all entities resident on from_proc to other processors This function assumes remote handles are *not* being stored, since (usually) every processor will know about the whole mesh.
Parameters:
    from_proc    Processor having the mesh to be broadcast
    entities     On return, the entities sent or received in this call
    adjacencies  If true, adjacencies are sent for equiv entities (currently unsupported)
    tags         If true, all non-default-valued tags are sent for sent entities
Definition at line 536 of file ParallelComm.cpp.
References add_verts(), moab::ParallelComm::Buffer::buff_ptr, ErrorCode, INITIAL_BUFF_SIZE, moab::MAX_BCAST_SIZE, MB_CHK_SET_ERR, MB_SET_ERR, MB_SUCCESS, moab::ParallelComm::Buffer::mem_ptr, pack_buffer(), moab::ProcConfig::proc_comm(), moab::ProcConfig::proc_rank(), procConfig, moab::ParallelComm::Buffer::reserve(), moab::ParallelComm::Buffer::reset_ptr(), moab::ParallelComm::Buffer::set_stored_size(), and unpack_buffer().
Referenced by moab::ReadParallel::load_file().
{
#ifndef MOAB_HAVE_MPI
    return MB_FAILURE;
#else
    ErrorCode result = MB_SUCCESS;
    int success;
    int buff_size;

    Buffer buff( INITIAL_BUFF_SIZE );
    buff.reset_ptr( sizeof( int ) );
    if( (int)procConfig.proc_rank() == from_proc )
    {
        result = add_verts( entities );MB_CHK_SET_ERR( result, "Failed to add adj vertices" );

        buff.reset_ptr( sizeof( int ) );
        result = pack_buffer( entities, adjacencies, tags, false, -1, &buff );MB_CHK_SET_ERR( result, "Failed to compute buffer size in broadcast_entities" );
        buff.set_stored_size();
        buff_size = buff.buff_ptr - buff.mem_ptr;
    }

    success = MPI_Bcast( &buff_size, 1, MPI_INT, from_proc, procConfig.proc_comm() );
    if( MPI_SUCCESS != success )
    {
        MB_SET_ERR( MB_FAILURE, "MPI_Bcast of buffer size failed" );
    }

    if( !buff_size )  // No data
        return MB_SUCCESS;

    if( (int)procConfig.proc_rank() != from_proc ) buff.reserve( buff_size );

    size_t offset = 0;
    while( buff_size )
    {
        int sz = std::min( buff_size, MAX_BCAST_SIZE );
        success = MPI_Bcast( buff.mem_ptr + offset, sz, MPI_UNSIGNED_CHAR, from_proc, procConfig.proc_comm() );
        if( MPI_SUCCESS != success )
        {
            MB_SET_ERR( MB_FAILURE, "MPI_Bcast of buffer failed" );
        }

        offset += sz;
        buff_size -= sz;
    }

    if( (int)procConfig.proc_rank() != from_proc )
    {
        std::vector< std::vector< EntityHandle > > dum1a, dum1b;
        std::vector< std::vector< int > > dum1p;
        std::vector< EntityHandle > dum2, dum4;
        std::vector< unsigned int > dum3;
        buff.reset_ptr( sizeof( int ) );
        result = unpack_buffer( buff.buff_ptr, false, from_proc, -1, dum1a, dum1b, dum1p, dum2, dum2, dum3, dum4 );MB_CHK_SET_ERR( result, "Failed to unpack buffer in broadcast_entities" );
        std::copy( dum4.begin(), dum4.end(), range_inserter( entities ) );
    }

    return MB_SUCCESS;
#endif
}
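A sketch of a typical broadcast, assuming a moab::Core instance mb and pcomm; the file name is an illustrative assumption:

    // Rank 0 reads a serial file, then ships the whole mesh to every rank.
    Range ents;
    ErrorCode rval;
    if( 0 == pcomm->rank() )
    {
        rval = mb.load_file( "mesh.vtk" );
        rval = mb.get_entities_by_handle( 0, ents );
    }
    rval = pcomm->broadcast_entities( 0, ents );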
const std::vector< unsigned int >& moab::ParallelComm::buff_procs() const [inline]
get buff processor vector
Definition at line 1569 of file ParallelComm.hpp.
References buffProcs.
{
    return buffProcs;
}
ErrorCode moab::ParallelComm::build_sharedhps_list( const EntityHandle entity, const unsigned char pstatus, const int sharedp, const std::set< unsigned int >& procs, unsigned int& num_ents, int* tmp_procs, EntityHandle* tmp_handles ) [private]
Definition at line 1748 of file ParallelComm.cpp.
References ErrorCode, get_sharing_data(), list_entities(), MAX_SHARING_PROCS, MB_CHK_SET_ERR, MB_SUCCESS, moab::ProcConfig::proc_rank(), procConfig, PSTATUS_MULTISHARED, PSTATUS_NOT_OWNED, and PSTATUS_SHARED.
Referenced by pack_entities().
{
    num_ents = 0;
    unsigned char pstat;
    ErrorCode result = get_sharing_data( entity, tmp_procs, tmp_handles, pstat, num_ents );MB_CHK_SET_ERR( result, "Failed to get sharing data" );
    assert( pstat == pstatus );

    // Build shared proc/handle lists
    // Start with multi-shared, since if it is the owner will be first
    if( pstatus & PSTATUS_MULTISHARED )
    {
    }
    else if( pstatus & PSTATUS_NOT_OWNED )
    {
        // If not multishared and not owned, other sharing proc is owner, put that
        // one first
        assert( "If not owned, I should be shared too" && pstatus & PSTATUS_SHARED && 1 == num_ents );
        tmp_procs[1] = procConfig.proc_rank();
        tmp_handles[1] = entity;
        num_ents = 2;
    }
    else if( pstatus & PSTATUS_SHARED )
    {
        // If not multishared and owned, I'm owner
        assert( "shared and owned, should be only 1 sharing proc" && 1 == num_ents );
        tmp_procs[1] = tmp_procs[0];
        tmp_procs[0] = procConfig.proc_rank();
        tmp_handles[1] = tmp_handles[0];
        tmp_handles[0] = entity;
        num_ents = 2;
    }
    else
    {
        // Not shared yet, just add owner (me)
        tmp_procs[0] = procConfig.proc_rank();
        tmp_handles[0] = entity;
        num_ents = 1;
    }

#ifndef NDEBUG
    int tmp_ps = num_ents;
#endif

    // Now add others, with zero handle for now
    for( std::set< unsigned int >::iterator sit = procs.begin(); sit != procs.end(); ++sit )
    {
#ifndef NDEBUG
        if( tmp_ps && std::find( tmp_procs, tmp_procs + tmp_ps, *sit ) != tmp_procs + tmp_ps )
        {
            std::cerr << "Trouble with something already in shared list on proc " << procConfig.proc_rank()
                      << ". Entity:" << std::endl;
            list_entities( &entity, 1 );
            std::cerr << "pstatus = " << (int)pstatus << ", sharedp = " << sharedp << std::endl;
            std::cerr << "tmp_ps = ";
            for( int i = 0; i < tmp_ps; i++ )
                std::cerr << tmp_procs[i] << " ";
            std::cerr << std::endl;
            std::cerr << "procs = ";
            for( std::set< unsigned int >::iterator sit2 = procs.begin(); sit2 != procs.end(); ++sit2 )
                std::cerr << *sit2 << " ";
            assert( false );
        }
#endif
        tmp_procs[num_ents] = *sit;
        tmp_handles[num_ents] = 0;
        num_ents++;
    }

    // Put -1 after procs and 0 after handles
    if( MAX_SHARING_PROCS > num_ents )
    {
        tmp_procs[num_ents] = -1;
        tmp_handles[num_ents] = 0;
    }

    return MB_SUCCESS;
}
ErrorCode moab::ParallelComm::check_all_shared_handles( bool print_em = false )
Call exchange_all_shared_handles, then compare the results with tag data on local shared entities.
Definition at line 8541 of file ParallelComm.cpp.
References buffProcs, check_local_shared(), check_my_shared_handles(), ErrorCode, exchange_all_shared_handles(), MB_SUCCESS, mbImpl, pack_shared_handles(), moab::ProcConfig::proc_rank(), procConfig, and moab::Interface::write_mesh().
Referenced by exchange_ghost_cells(), main(), read_mesh_parallel(), resolve_shared_ents(), moab::ScdInterface::tag_shared_vertices(), test_elements_on_several_procs(), test_ghost_elements(), test_ghosted_entity_shared_data(), test_intx_in_parallel_elem_based(), test_intx_mpas(), test_read(), test_read_parallel(), and test_tempest_to_moab_convert().
{
    // Get all shared ent data from other procs
    std::vector< std::vector< SharedEntityData > > shents( buffProcs.size() ), send_data( buffProcs.size() );

    ErrorCode result;
    bool done = false;

    while( !done )
    {
        result = check_local_shared();
        if( MB_SUCCESS != result )
        {
            done = true;
            continue;
        }

        result = pack_shared_handles( send_data );
        if( MB_SUCCESS != result )
        {
            done = true;
            continue;
        }

        result = exchange_all_shared_handles( send_data, shents );
        if( MB_SUCCESS != result )
        {
            done = true;
            continue;
        }

        if( !shents.empty() ) result = check_my_shared_handles( shents );
        done = true;
    }

    if( MB_SUCCESS != result && print_em )
    {
#ifdef MOAB_HAVE_HDF5
        std::ostringstream ent_str;
        ent_str << "mesh." << procConfig.proc_rank() << ".h5m";
        mbImpl->write_mesh( ent_str.str().c_str() );
#endif
    }

    return result;
}
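A sketch of a typical check, assuming pcomm exists; this is a collective call:

    // Verify that remote handles stored for shared entities agree across
    // processors; with print_em = true, failures dump diagnostics (and a
    // mesh.<rank>.h5m file when HDF5 support is compiled in).
    ErrorCode rval = pcomm->check_all_shared_handles( true );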
ErrorCode moab::ParallelComm::check_all_shared_handles( ParallelComm** pcs, int num_pcs ) [static]
Definition at line 8715 of file ParallelComm.cpp.
References buffProcs, check_my_shared_handles(), ErrorCode, get_buffers(), MB_SUCCESS, and pack_shared_handles().
{
    std::vector< std::vector< std::vector< SharedEntityData > > > shents, send_data;
    ErrorCode result = MB_SUCCESS, tmp_result;

    // Get all shared ent data from each proc to all other procs
    send_data.resize( num_pcs );
    for( int p = 0; p < num_pcs; p++ )
    {
        tmp_result = pcs[p]->pack_shared_handles( send_data[p] );
        if( MB_SUCCESS != tmp_result ) result = tmp_result;
    }
    if( MB_SUCCESS != result ) return result;

    // Move the data sorted by sending proc to data sorted by receiving proc
    shents.resize( num_pcs );
    for( int p = 0; p < num_pcs; p++ )
        shents[p].resize( pcs[p]->buffProcs.size() );

    for( int p = 0; p < num_pcs; p++ )
    {
        for( unsigned int idx_p = 0; idx_p < pcs[p]->buffProcs.size(); idx_p++ )
        {
            // Move send_data[p][to_p] to shents[to_p][idx_p]
            int to_p = pcs[p]->buffProcs[idx_p];
            int top_idx_p = pcs[to_p]->get_buffers( p );
            assert( -1 != top_idx_p );
            shents[to_p][top_idx_p] = send_data[p][idx_p];
        }
    }

    for( int p = 0; p < num_pcs; p++ )
    {
        std::ostringstream ostr;
        ostr << "Processor " << p << " bad entities:";
        tmp_result = pcs[p]->check_my_shared_handles( shents[p], ostr.str().c_str() );
        if( MB_SUCCESS != tmp_result ) result = tmp_result;
    }

    return result;
}
ErrorCode moab::ParallelComm::check_clean_iface( Range& allsent ) [private]
Definition at line 6256 of file ParallelComm.cpp.
References moab::Interface::add_entities(), moab::Range::begin(), moab::Interface::create_meshset(), moab::Interface::delete_entities(), moab::Range::erase(), ErrorCode, moab::Interface::get_number_entities_by_handle(), get_sharing_data(), moab::Range::insert(), interface_sets(), interfaceSets, MAX_SHARING_PROCS, MB_CHK_SET_ERR, MB_SUCCESS, mbImpl, MESHSET_SET, proc_config(), moab::ProcConfig::proc_rank(), procConfig, moab::ProcList::procs, PSTATUS_INTERFACE, PSTATUS_MULTISHARED, PSTATUS_NOT_OWNED, PSTATUS_SHARED, pstatus_tag(), moab::Range::rbegin(), moab::Interface::remove_entities(), moab::Range::rend(), set_sharing_data(), sharedh_tag(), sharedhs_tag(), sharedp_tag(), sharedps_tag(), swap(), moab::Interface::tag_clear_data(), and moab::Interface::tag_set_data().
Referenced by exchange_ghost_cells().
{ // allsent is all entities I think are on interface; go over them, looking // for zero-valued handles, and fix any I find // Keep lists of entities for which teh sharing data changed, grouped // by set of sharing procs. typedef std::map< ProcList, Range > procmap_t; procmap_t old_procs, new_procs; ErrorCode result = MB_SUCCESS; Range::iterator rit; Range::reverse_iterator rvit; unsigned char pstatus; int nump; ProcList sharedp; EntityHandle sharedh[MAX_SHARING_PROCS]; for( rvit = allsent.rbegin(); rvit != allsent.rend(); ++rvit ) { result = get_sharing_data( *rvit, sharedp.procs, sharedh, pstatus, nump );MB_CHK_SET_ERR( result, "Failed to get sharing data" ); assert( "Should be shared with at least one other proc" && ( nump > 1 || sharedp.procs[0] != (int)procConfig.proc_rank() ) ); assert( nump == MAX_SHARING_PROCS || sharedp.procs[nump] == -1 ); // Look for first null handle in list int idx = std::find( sharedh, sharedh + nump, (EntityHandle)0 ) - sharedh; if( idx == nump ) continue; // All handles are valid ProcList old_list( sharedp ); std::sort( old_list.procs, old_list.procs + nump ); old_procs[old_list].insert( *rvit ); // Remove null handles and corresponding proc ranks from lists int new_nump = idx; bool removed_owner = !idx; for( ++idx; idx < nump; ++idx ) { if( sharedh[idx] ) { sharedh[new_nump] = sharedh[idx]; sharedp.procs[new_nump] = sharedp.procs[idx]; ++new_nump; } } sharedp.procs[new_nump] = -1; if( removed_owner && new_nump > 1 ) { // The proc that we choose as the entity owner isn't sharing the // entity (doesn't have a copy of it). We need to pick a different // owner. Choose the proc with lowest rank. idx = std::min_element( sharedp.procs, sharedp.procs + new_nump ) - sharedp.procs; std::swap( sharedp.procs[0], sharedp.procs[idx] ); std::swap( sharedh[0], sharedh[idx] ); if( sharedp.procs[0] == (int)proc_config().proc_rank() ) pstatus &= ~PSTATUS_NOT_OWNED; } result = set_sharing_data( *rvit, pstatus, nump, new_nump, sharedp.procs, sharedh );MB_CHK_SET_ERR( result, "Failed to set sharing data in check_clean_iface" ); if( new_nump > 1 ) { if( new_nump == 2 ) { if( sharedp.procs[1] != (int)proc_config().proc_rank() ) { assert( sharedp.procs[0] == (int)proc_config().proc_rank() ); sharedp.procs[0] = sharedp.procs[1]; } sharedp.procs[1] = -1; } else { std::sort( sharedp.procs, sharedp.procs + new_nump ); } new_procs[sharedp].insert( *rvit ); } } if( old_procs.empty() ) { assert( new_procs.empty() ); return MB_SUCCESS; } // Update interface sets procmap_t::iterator pmit; // std::vector<unsigned char> pstatus_list; rit = interface_sets().begin(); while( rit != interface_sets().end() ) { result = get_sharing_data( *rit, sharedp.procs, sharedh, pstatus, nump );MB_CHK_SET_ERR( result, "Failed to get sharing data for interface set" ); assert( nump != 2 ); std::sort( sharedp.procs, sharedp.procs + nump ); assert( nump == MAX_SHARING_PROCS || sharedp.procs[nump] == -1 ); pmit = old_procs.find( sharedp ); if( pmit != old_procs.end() ) { result = mbImpl->remove_entities( *rit, pmit->second );MB_CHK_SET_ERR( result, "Failed to remove entities from interface set" ); } pmit = new_procs.find( sharedp ); if( pmit == new_procs.end() ) { int count; result = mbImpl->get_number_entities_by_handle( *rit, count );MB_CHK_SET_ERR( result, "Failed to get number of entities in interface set" ); if( !count ) { result = mbImpl->delete_entities( &*rit, 1 );MB_CHK_SET_ERR( result, "Failed to delete entities from interface set" ); rit = interface_sets().erase( rit ); } else { ++rit; } } else { result 
= mbImpl->add_entities( *rit, pmit->second );MB_CHK_SET_ERR( result, "Failed to add entities to interface set" ); // Remove those that we've processed so that we know which ones // are new. new_procs.erase( pmit ); ++rit; } } // Create interface sets for new proc id combinations std::fill( sharedh, sharedh + MAX_SHARING_PROCS, 0 ); for( pmit = new_procs.begin(); pmit != new_procs.end(); ++pmit ) { EntityHandle new_set; result = mbImpl->create_meshset( MESHSET_SET, new_set );MB_CHK_SET_ERR( result, "Failed to create interface set" ); interfaceSets.insert( new_set ); // Add entities result = mbImpl->add_entities( new_set, pmit->second );MB_CHK_SET_ERR( result, "Failed to add entities to interface set" ); // Tag set with the proc rank(s) assert( pmit->first.procs[0] >= 0 ); pstatus = PSTATUS_SHARED | PSTATUS_INTERFACE; if( pmit->first.procs[1] == -1 ) { int other = pmit->first.procs[0]; assert( other != (int)procConfig.proc_rank() ); result = mbImpl->tag_set_data( sharedp_tag(), &new_set, 1, pmit->first.procs );MB_CHK_SET_ERR( result, "Failed to tag interface set with procs" ); sharedh[0] = 0; result = mbImpl->tag_set_data( sharedh_tag(), &new_set, 1, sharedh );MB_CHK_SET_ERR( result, "Failed to tag interface set with procs" ); if( other < (int)proc_config().proc_rank() ) pstatus |= PSTATUS_NOT_OWNED; } else { result = mbImpl->tag_set_data( sharedps_tag(), &new_set, 1, pmit->first.procs );MB_CHK_SET_ERR( result, "Failed to tag interface set with procs" ); result = mbImpl->tag_set_data( sharedhs_tag(), &new_set, 1, sharedh );MB_CHK_SET_ERR( result, "Failed to tag interface set with procs" ); pstatus |= PSTATUS_MULTISHARED; if( pmit->first.procs[0] < (int)proc_config().proc_rank() ) pstatus |= PSTATUS_NOT_OWNED; } result = mbImpl->tag_set_data( pstatus_tag(), &new_set, 1, &pstatus );MB_CHK_SET_ERR( result, "Failed to tag interface set with pstatus" ); // Set pstatus on all interface entities in set result = mbImpl->tag_clear_data( pstatus_tag(), pmit->second, &pstatus );MB_CHK_SET_ERR( result, "Failed to tag interface entities with pstatus" ); } return MB_SUCCESS; }
ErrorCode moab::ParallelComm::check_global_ids ( EntityHandle this_set, const int dimension, const int start_id = 1, const bool largest_dim_only = true, const bool parallel = true, const bool owned_only = false )
check for global ids, based only on whether the tag handle is present; if it is not, create global ids for the specified dimension
Parameters:
    owned_only - If true, do not get global IDs for non-owned entities from remote processors.
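As a usage sketch, assuming pcomm points to a ParallelComm attached to a loaded mesh and IDs are wanted on 3-dimensional entities of the root set (handle 0):

    // Assigns global ids only where they are still missing; a no-op otherwise.
    moab::ErrorCode rval = pcomm->check_global_ids( 0, 3 );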
Definition at line 5532 of file ParallelComm.cpp.
References assign_global_ids(), moab::Range::empty(), ErrorCode, moab::Interface::get_entities_by_type_and_tag(), moab::Interface::globalId_tag(), MB_CHK_SET_ERR, MB_SUCCESS, mbImpl, and MBVERTEX.
Referenced by moab::ReadParallel::load_file().
{ // Global id tag Tag gid_tag = mbImpl->globalId_tag(); int def_val = -1; Range dum_range; void* tag_ptr = &def_val; ErrorCode result = mbImpl->get_entities_by_type_and_tag( this_set, MBVERTEX, &gid_tag, &tag_ptr, 1, dum_range );MB_CHK_SET_ERR( result, "Failed to get entities by MBVERTEX type and gid tag" ); if( !dum_range.empty() ) { // Just created it, so we need global ids result = assign_global_ids( this_set, dimension, start_id, largest_dim_only, parallel, owned_only );MB_CHK_SET_ERR( result, "Failed assigning global ids" ); } return MB_SUCCESS; }
ErrorCode moab::ParallelComm::check_local_shared ( )
Definition at line 8588 of file ParallelComm.cpp.
References ErrorCode, moab::Interface::get_connectivity(), get_sharing_data(), list_entities(), MAX_SHARING_PROCS, MB_SUCCESS, MBENTITYSET, mbImpl, MBVERTEX, moab::ProcConfig::proc_rank(), procConfig, PSTATUS_MULTISHARED, PSTATUS_NOT_OWNED, PSTATUS_SHARED, sharedEnts, and moab::Interface::type_from_handle().
Referenced by check_all_shared_handles().
{ // Do some checks on shared entities to make sure things look // consistent // Check that non-vertex shared entities are shared by same procs as all // their vertices // std::pair<Range::const_iterator,Range::const_iterator> vert_it = // sharedEnts.equal_range(MBVERTEX); std::vector< EntityHandle > dum_connect; const EntityHandle* connect; int num_connect; int tmp_procs[MAX_SHARING_PROCS]; EntityHandle tmp_hs[MAX_SHARING_PROCS]; std::set< int > tmp_set, vset; int num_ps; ErrorCode result; unsigned char pstat; std::vector< EntityHandle > bad_ents; std::vector< std::string > errors; std::set< EntityHandle >::iterator vit; for( vit = sharedEnts.begin(); vit != sharedEnts.end(); ++vit ) { // Get sharing procs for this ent result = get_sharing_data( *vit, tmp_procs, tmp_hs, pstat, num_ps ); if( MB_SUCCESS != result ) { bad_ents.push_back( *vit ); errors.push_back( std::string( "Failure getting sharing data." ) ); continue; } bool bad = false; // Entity must be shared if( !( pstat & PSTATUS_SHARED ) ) errors.push_back( std::string( "Entity should be shared but isn't." ) ), bad = true; // If entity is not owned this must not be first proc if( pstat & PSTATUS_NOT_OWNED && tmp_procs[0] == (int)procConfig.proc_rank() ) errors.push_back( std::string( "Entity not owned but is first proc." ) ), bad = true; // If entity is owned and multishared, this must be first proc if( !( pstat & PSTATUS_NOT_OWNED ) && pstat & PSTATUS_MULTISHARED && ( tmp_procs[0] != (int)procConfig.proc_rank() || tmp_hs[0] != *vit ) ) errors.push_back( std::string( "Entity owned and multishared but not first proc or not first handle." ) ), bad = true; if( bad ) { bad_ents.push_back( *vit ); continue; } EntityType type = mbImpl->type_from_handle( *vit ); if( type == MBVERTEX || type == MBENTITYSET ) continue; // Copy element's procs to vset and save size int orig_ps = num_ps; vset.clear(); std::copy( tmp_procs, tmp_procs + num_ps, std::inserter( vset, vset.begin() ) ); // Get vertices for this ent and intersection of sharing procs result = mbImpl->get_connectivity( *vit, connect, num_connect, false, &dum_connect ); if( MB_SUCCESS != result ) { bad_ents.push_back( *vit ); errors.push_back( std::string( "Failed to get connectivity." ) ); continue; } for( int i = 0; i < num_connect; i++ ) { result = get_sharing_data( connect[i], tmp_procs, NULL, pstat, num_ps ); if( MB_SUCCESS != result ) { bad_ents.push_back( *vit ); continue; } if( !num_ps ) { vset.clear(); break; } std::sort( tmp_procs, tmp_procs + num_ps ); tmp_set.clear(); std::set_intersection( tmp_procs, tmp_procs + num_ps, vset.begin(), vset.end(), std::inserter( tmp_set, tmp_set.end() ) ); vset.swap( tmp_set ); if( vset.empty() ) break; } // Intersect them; should be the same size as orig_ps tmp_set.clear(); std::set_intersection( tmp_procs, tmp_procs + num_ps, vset.begin(), vset.end(), std::inserter( tmp_set, tmp_set.end() ) ); if( orig_ps != (int)tmp_set.size() ) { errors.push_back( std::string( "Vertex proc set not same size as entity proc set." 
) ); bad_ents.push_back( *vit ); for( int i = 0; i < num_connect; i++ ) { bad_ents.push_back( connect[i] ); errors.push_back( std::string( "vertex in connect" ) ); } } } if( !bad_ents.empty() ) { std::cout << "Found bad entities in check_local_shared, proc rank " << procConfig.proc_rank() << "," << std::endl; std::vector< std::string >::iterator sit; std::vector< EntityHandle >::iterator rit; for( rit = bad_ents.begin(), sit = errors.begin(); rit != bad_ents.end(); ++rit, ++sit ) { list_entities( &( *rit ), 1 ); std::cout << "Reason: " << *sit << std::endl; } return MB_FAILURE; } // To do: check interface sets return MB_SUCCESS; }
ErrorCode moab::ParallelComm::check_my_shared_handles ( std::vector< std::vector< SharedEntityData > > & shents, const char * prefix = NULL )
Definition at line 8757 of file ParallelComm.cpp.
References buffProcs, moab::Range::empty(), moab::Range::end(), moab::Range::erase(), ErrorCode, get_pstatus(), get_remote_handles(), get_shared_entities(), moab::Range::insert(), list_entities(), MB_SUCCESS, MBPOLYHEDRON, moab::Range::merge(), PSTATUS_NOT_OWNED, rank(), sharedEnts, and moab::Range::upper_bound().
Referenced by check_all_shared_handles().
{ // Now check against what I think data should be // Get all shared entities ErrorCode result; Range all_shared; std::copy( sharedEnts.begin(), sharedEnts.end(), range_inserter( all_shared ) ); std::vector< EntityHandle > dum_vec; all_shared.erase( all_shared.upper_bound( MBPOLYHEDRON ), all_shared.end() ); Range bad_ents, local_shared; std::vector< SharedEntityData >::iterator vit; unsigned char tmp_pstat; for( unsigned int i = 0; i < shents.size(); i++ ) { int other_proc = buffProcs[i]; result = get_shared_entities( other_proc, local_shared ); if( MB_SUCCESS != result ) return result; for( vit = shents[i].begin(); vit != shents[i].end(); ++vit ) { EntityHandle localh = vit->local, remoteh = vit->remote, dumh; local_shared.erase( localh ); result = get_remote_handles( true, &localh, &dumh, 1, other_proc, dum_vec ); if( MB_SUCCESS != result || dumh != remoteh ) bad_ents.insert( localh ); result = get_pstatus( localh, tmp_pstat ); if( MB_SUCCESS != result || ( !( tmp_pstat & PSTATUS_NOT_OWNED ) && (unsigned)vit->owner != rank() ) || ( tmp_pstat & PSTATUS_NOT_OWNED && (unsigned)vit->owner == rank() ) ) bad_ents.insert( localh ); } if( !local_shared.empty() ) bad_ents.merge( local_shared ); } if( !bad_ents.empty() ) { if( prefix ) std::cout << prefix << std::endl; list_entities( bad_ents ); return MB_FAILURE; } else return MB_SUCCESS; }
ErrorCode moab::ParallelComm::check_sent_ents ( Range & allsent ) [private]
check entities to make sure there are no zero-valued remote handles where they shouldn't be
Definition at line 7323 of file ParallelComm.cpp.
References moab::Range::begin(), moab::Range::end(), ErrorCode, moab::Range::insert(), MAX_SHARING_PROCS, MB_CHK_SET_ERR, MB_SET_ERR, MB_SUCCESS, MB_TAG_NOT_FOUND, mbImpl, pstatus_tag(), sharedh_tag(), sharedhs_tag(), sharedp_tag(), sharedps_tag(), moab::Range::size(), and moab::Interface::tag_get_data().
Referenced by exchange_ghost_cells(), and exchange_owned_mesh().
{ // Check entities to make sure there are no zero-valued remote handles // where they shouldn't be std::vector< unsigned char > pstat( allsent.size() ); ErrorCode result = mbImpl->tag_get_data( pstatus_tag(), allsent, &pstat[0] );MB_CHK_SET_ERR( result, "Failed to get pstatus tag data" ); std::vector< EntityHandle > handles( allsent.size() ); result = mbImpl->tag_get_data( sharedh_tag(), allsent, &handles[0] );MB_CHK_SET_ERR( result, "Failed to get sharedh tag data" ); std::vector< int > procs( allsent.size() ); result = mbImpl->tag_get_data( sharedp_tag(), allsent, &procs[0] );MB_CHK_SET_ERR( result, "Failed to get sharedp tag data" ); Range bad_entities; Range::iterator rit; unsigned int i; EntityHandle dum_hs[MAX_SHARING_PROCS]; int dum_ps[MAX_SHARING_PROCS]; for( rit = allsent.begin(), i = 0; rit != allsent.end(); ++rit, i++ ) { if( -1 != procs[i] && 0 == handles[i] ) bad_entities.insert( *rit ); else { // Might be multi-shared... result = mbImpl->tag_get_data( sharedps_tag(), &( *rit ), 1, dum_ps ); if( MB_TAG_NOT_FOUND == result ) continue; else if( MB_SUCCESS != result ) MB_SET_ERR( result, "Failed to get sharedps tag data" ); result = mbImpl->tag_get_data( sharedhs_tag(), &( *rit ), 1, dum_hs );MB_CHK_SET_ERR( result, "Failed to get sharedhs tag data" ); // Find first non-set proc int* ns_proc = std::find( dum_ps, dum_ps + MAX_SHARING_PROCS, -1 ); int num_procs = ns_proc - dum_ps; assert( num_procs <= MAX_SHARING_PROCS ); // Now look for zero handles in active part of dum_hs EntityHandle* ns_handle = std::find( dum_hs, dum_hs + num_procs, 0 ); int num_handles = ns_handle - dum_hs; assert( num_handles <= num_procs ); if( num_handles != num_procs ) bad_entities.insert( *rit ); } } return MB_SUCCESS; }
ErrorCode moab::ParallelComm::clean_shared_tags ( std::vector< Range * > & exchange_ents )
Definition at line 8842 of file ParallelComm.cpp.
References moab::Range::begin(), ErrorCode, MB_CHK_SET_ERR, MB_SUCCESS, MB_TAG_NOT_FOUND, mbImpl, pstatus_tag(), sharedh_tag(), sharedp_tag(), moab::Range::size(), moab::Interface::tag_delete_data(), and moab::Interface::tag_get_data().
{ for( unsigned int i = 0; i < exchange_ents.size(); i++ ) { Range* ents = exchange_ents[i]; int num_ents = ents->size(); Range::iterator it = ents->begin(); for( int n = 0; n < num_ents; n++ ) { int sharing_proc; ErrorCode result = mbImpl->tag_get_data( sharedp_tag(), &( *ents->begin() ), 1, &sharing_proc ); if( result != MB_TAG_NOT_FOUND && sharing_proc == -1 ) { result = mbImpl->tag_delete_data( sharedp_tag(), &( *it ), 1 );MB_CHK_SET_ERR( result, "Failed to delete sharedp tag data" ); result = mbImpl->tag_delete_data( sharedh_tag(), &( *it ), 1 );MB_CHK_SET_ERR( result, "Failed to delete sharedh tag data" ); result = mbImpl->tag_delete_data( pstatus_tag(), &( *it ), 1 );MB_CHK_SET_ERR( result, "Failed to delete pstatus tag data" ); } ++it; } } return MB_SUCCESS; }
ErrorCode moab::ParallelComm::collective_sync_partition ( )
Definition at line 8268 of file ParallelComm.cpp.
References globalPartCount, MB_SUCCESS, partition_sets(), proc_config(), and moab::Range::size().
Referenced by iMeshP_loadAll(), and iMeshP_syncPartitionAll().
{ int count = partition_sets().size(); globalPartCount = 0; int err = MPI_Allreduce( &count, &globalPartCount, 1, MPI_INT, MPI_SUM, proc_config().proc_comm() ); return err ? MB_FAILURE : MB_SUCCESS; }
MPI_Comm moab::ParallelComm::comm ( ) const [inline]
Definition at line 652 of file ParallelComm.hpp.
References moab::ProcConfig::proc_comm(), and proc_config().
Referenced by ZoltanPartitioner::balance_mesh(), gather_data(), moab::ParallelMergeMesh::GetGlobalBox(), main(), ZoltanPartitioner::mbFinalizePoints(), ZoltanPartitioner::mbGlobalSuccess(), ZoltanPartitioner::mbInitializePoints(), ZoltanPartitioner::mbPrintGlobalResult(), ZoltanPartitioner::partition_mesh_and_geometry(), ZoltanPartitioner::partition_owned_cells(), perform_laplacian_smoothing(), perform_lloyd_relaxation(), moab::LloydSmoother::perform_smooth(), moab::ParallelMergeMesh::PerformMerge(), moab::ParCommGraph::receive_comm_graph(), ZoltanPartitioner::repartition(), and moab::ParCommGraph::send_graph_partition().
{ return proc_config().proc_comm(); }
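Because the returned value is the raw MPI communicator, it can be passed straight to MPI calls; a small sketch, where my_range is a hypothetical Range of local entities:

    int nlocal = (int)my_range.size(), nglobal = 0;
    MPI_Allreduce( &nlocal, &nglobal, 1, MPI_INT, MPI_SUM, pcomm->comm() );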
ErrorCode moab::ParallelComm::correct_thin_ghost_layers ( )
Definition at line 9346 of file ParallelComm.cpp.
References buffProcs, ErrorCode, exchange_all_shared_handles(), get_buffers(), get_sharing_data(), moab::ParallelComm::SharedEntityData::local, MAX_SHARING_PROCS, MB_CHK_ERR, MB_CHK_SET_ERR, MB_SUCCESS, mbImpl, moab::ParallelComm::SharedEntityData::owner, PSTATUS_MULTISHARED, PSTATUS_NOT_OWNED, pstatus_tag(), rank(), moab::ParallelComm::SharedEntityData::remote, sharedEnts, sharedh_tag(), sharedhs_tag(), sharedp_tag(), sharedps_tag(), and moab::Interface::tag_set_data().
Referenced by moab::ReadParallel::load_file(), and test_correct_ghost().
{ // Get all shared ent data from other procs std::vector< std::vector< SharedEntityData > > shents( buffProcs.size() ), send_data( buffProcs.size() ); // will work only on multi-shared tags sharedps_tag(), sharedhs_tag(); /* * domain0 | domain1 | domain2 | domain3 * vertices from domain 1 and 2 are visible from both 0 and 3, but * domain 0 might not have info about multi-sharing from domain 3 * so we will force that domain 0 vertices owned by 1 and 2 have information * about the domain 3 sharing * * SharedEntityData will have : * struct SharedEntityData { EntityHandle local; // this is same meaning, for the proc we sent to, it is local EntityHandle remote; // this will be the far away handle that will need to be added EntityID owner; // this will be the remote proc }; // so we need to add data like this: a multishared entity owned by proc x will have data like multishared procs: proc x, a, b, c multishared handles: h1, h2, h3, h4 we will need to send data from proc x like this: to proc a we will send (h2, h3, b), (h2, h4, c) to proc b we will send (h3, h2, a), (h3, h4, c) to proc c we will send (h4, h2, a), (h4, h3, b) * */ ErrorCode result = MB_SUCCESS; int ent_procs[MAX_SHARING_PROCS + 1]; EntityHandle handles[MAX_SHARING_PROCS + 1]; int num_sharing; SharedEntityData tmp; for( std::set< EntityHandle >::iterator i = sharedEnts.begin(); i != sharedEnts.end(); ++i ) { unsigned char pstat; result = get_sharing_data( *i, ent_procs, handles, pstat, num_sharing );MB_CHK_SET_ERR( result, "can't get sharing data" ); if( !( pstat & PSTATUS_MULTISHARED ) || num_sharing <= 2 ) // if not multishared, skip, it should have no problems continue; // we should skip the ones that are not owned locally // the owned ones will have the most multi-shared info, because the info comes from other // remote processors if( pstat & PSTATUS_NOT_OWNED ) continue; for( int j = 1; j < num_sharing; j++ ) { // we will send to proc int send_to_proc = ent_procs[j]; // tmp.local = handles[j]; int ind = get_buffers( send_to_proc ); assert( -1 != ind ); // THIS SHOULD NEVER HAPPEN for( int k = 1; k < num_sharing; k++ ) { // do not send to self proc if( j == k ) continue; tmp.remote = handles[k]; // this will be the handle of entity on proc tmp.owner = ent_procs[k]; send_data[ind].push_back( tmp ); } } } result = exchange_all_shared_handles( send_data, shents );MB_CHK_ERR( result ); // loop over all shents and add if vertex type, add if missing for( size_t i = 0; i < shents.size(); i++ ) { std::vector< SharedEntityData >& shEnts = shents[i]; for( size_t j = 0; j < shEnts.size(); j++ ) { tmp = shEnts[j]; // basically, check the shared data for tmp.local entity // it should have inside the tmp.owner and tmp.remote EntityHandle eh = tmp.local; unsigned char pstat; result = get_sharing_data( eh, ent_procs, handles, pstat, num_sharing );MB_CHK_SET_ERR( result, "can't get sharing data" ); // see if the proc tmp.owner is in the list of ent_procs; if not, we have to increase // handles, and ent_procs; and set int proc_remote = tmp.owner; // if( std::find( ent_procs, ent_procs + num_sharing, proc_remote ) == ent_procs + num_sharing ) { // so we did not find on proc #ifndef NDEBUG std::cout << "THIN GHOST: we did not find on proc " << rank() << " for shared ent " << eh << " the proc " << proc_remote << "\n"; #endif // increase num_sharing, and set the multi-shared tags if( num_sharing >= MAX_SHARING_PROCS ) return MB_FAILURE; handles[num_sharing] = tmp.remote; handles[num_sharing + 1] = 0; // end of list ent_procs[num_sharing] = 
tmp.owner; ent_procs[num_sharing + 1] = -1; // this should be already set result = mbImpl->tag_set_data( sharedps_tag(), &eh, 1, ent_procs );MB_CHK_SET_ERR( result, "Failed to set sharedps tag data" ); result = mbImpl->tag_set_data( sharedhs_tag(), &eh, 1, handles );MB_CHK_SET_ERR( result, "Failed to set sharedhs tag data" ); if( 2 == num_sharing ) // it means the sharedp and sharedh tags were set with a // value non default { // so entity eh was simple shared before, we need to set those dense tags back // to default // values EntityHandle zero = 0; int no_proc = -1; result = mbImpl->tag_set_data( sharedp_tag(), &eh, 1, &no_proc );MB_CHK_SET_ERR( result, "Failed to set sharedp tag data" ); result = mbImpl->tag_set_data( sharedh_tag(), &eh, 1, &zero );MB_CHK_SET_ERR( result, "Failed to set sharedh tag data" ); // also, add multishared pstatus tag // also add multishared status to pstatus pstat = pstat | PSTATUS_MULTISHARED; result = mbImpl->tag_set_data( pstatus_tag(), &eh, 1, &pstat );MB_CHK_SET_ERR( result, "Failed to set pstatus tag data" ); } } } } return MB_SUCCESS; }
ErrorCode moab::ParallelComm::create_iface_pc_links ( ) [private]
Definition at line 5093 of file ParallelComm.cpp.
References moab::Interface::add_parent_child(), moab::Range::begin(), moab::Range::clear(), moab::Interface::dimension_from_handle(), moab::Range::empty(), moab::Range::end(), ErrorCode, moab::Interface::get_adjacencies(), moab::Interface::get_entities_by_handle(), interfaceSets, MB_CHK_SET_ERR, MB_SUCCESS, MB_TAG_CREAT, MB_TAG_DENSE, MB_TYPE_HANDLE, mbImpl, moab::Range::rbegin(), moab::Range::size(), moab::Interface::tag_delete(), moab::Interface::tag_get_data(), moab::Interface::tag_get_handle(), and moab::Interface::tag_set_data().
Referenced by resolve_shared_ents(), and moab::ParallelMergeMesh::TagSharedElements().
{ // Now that we've resolved the entities in the iface sets, // set parent/child links between the iface sets // First tag all entities in the iface sets Tag tmp_iface_tag; EntityHandle tmp_iface_set = 0; ErrorCode result = mbImpl->tag_get_handle( "__tmp_iface", 1, MB_TYPE_HANDLE, tmp_iface_tag, MB_TAG_DENSE | MB_TAG_CREAT, &tmp_iface_set );MB_CHK_SET_ERR( result, "Failed to create temporary interface set tag" ); Range iface_ents; std::vector< EntityHandle > tag_vals; Range::iterator rit; for( rit = interfaceSets.begin(); rit != interfaceSets.end(); ++rit ) { // tag entities with interface set iface_ents.clear(); result = mbImpl->get_entities_by_handle( *rit, iface_ents );MB_CHK_SET_ERR( result, "Failed to get entities in interface set" ); if( iface_ents.empty() ) continue; tag_vals.resize( iface_ents.size() ); std::fill( tag_vals.begin(), tag_vals.end(), *rit ); result = mbImpl->tag_set_data( tmp_iface_tag, iface_ents, &tag_vals[0] );MB_CHK_SET_ERR( result, "Failed to tag iface entities with interface set" ); } // Now go back through interface sets and add parent/child links Range tmp_ents2; for( int d = 2; d >= 0; d-- ) { for( rit = interfaceSets.begin(); rit != interfaceSets.end(); ++rit ) { // Get entities on this interface iface_ents.clear(); result = mbImpl->get_entities_by_handle( *rit, iface_ents, true );MB_CHK_SET_ERR( result, "Failed to get entities by handle" ); if( iface_ents.empty() || mbImpl->dimension_from_handle( *iface_ents.rbegin() ) != d ) continue; // Get higher-dimensional entities and their interface sets result = mbImpl->get_adjacencies( &( *iface_ents.begin() ), 1, d + 1, false, tmp_ents2 );MB_CHK_SET_ERR( result, "Failed to get adjacencies for interface sets" ); tag_vals.resize( tmp_ents2.size() ); result = mbImpl->tag_get_data( tmp_iface_tag, tmp_ents2, &tag_vals[0] );MB_CHK_SET_ERR( result, "Failed to get tmp iface tag for interface sets" ); // Go through and for any on interface make it a parent EntityHandle last_set = 0; for( unsigned int i = 0; i < tag_vals.size(); i++ ) { if( tag_vals[i] && tag_vals[i] != last_set ) { result = mbImpl->add_parent_child( tag_vals[i], *rit );MB_CHK_SET_ERR( result, "Failed to add parent/child link for interface set" ); last_set = tag_vals[i]; } } } } // Delete the temporary tag result = mbImpl->tag_delete( tmp_iface_tag );MB_CHK_SET_ERR( result, "Failed to delete tmp iface tag" ); return MB_SUCCESS; }
ErrorCode moab::ParallelComm::create_interface_sets ( std::map< std::vector< int >, std::vector< EntityHandle > > & proc_nvecs )
Definition at line 5017 of file ParallelComm.cpp.
References moab::Interface::add_entities(), moab::Interface::create_meshset(), moab::CN::EntityTypeName(), ErrorCode, moab::GeomUtil::first(), get_shared_proc_tags(), moab::ID_FROM_HANDLE(), moab::Range::insert(), interfaceSets, MAX_SHARING_PROCS, MB_CHK_SET_ERR, MB_SUCCESS, mbImpl, MBVERTEX, MESHSET_SET, proc_config(), moab::ProcConfig::proc_rank(), procConfig, PSTATUS_INTERFACE, PSTATUS_MULTISHARED, PSTATUS_NOT_OWNED, PSTATUS_SHARED, moab::Interface::tag_set_data(), moab::TYPE_FROM_HANDLE(), and moab::Interface::type_from_handle().
Referenced by create_interface_sets(), exchange_owned_meshs(), resolve_shared_ents(), moab::ScdInterface::tag_shared_vertices(), and moab::ParallelMergeMesh::TagSharedElements().
{ if( proc_nvecs.empty() ) return MB_SUCCESS; int proc_ids[MAX_SHARING_PROCS]; EntityHandle proc_handles[MAX_SHARING_PROCS]; Tag shp_tag, shps_tag, shh_tag, shhs_tag, pstat_tag; ErrorCode result = get_shared_proc_tags( shp_tag, shps_tag, shh_tag, shhs_tag, pstat_tag );MB_CHK_SET_ERR( result, "Failed to get shared proc tags in create_interface_sets" ); Range::iterator rit; // Create interface sets, tag them, and tag their contents with iface set tag std::vector< unsigned char > pstatus; for( std::map< std::vector< int >, std::vector< EntityHandle > >::iterator vit = proc_nvecs.begin(); vit != proc_nvecs.end(); ++vit ) { // Create the set EntityHandle new_set; result = mbImpl->create_meshset( MESHSET_SET, new_set );MB_CHK_SET_ERR( result, "Failed to create interface set" ); interfaceSets.insert( new_set ); // Add entities assert( !vit->second.empty() ); result = mbImpl->add_entities( new_set, &( vit->second )[0], ( vit->second ).size() );MB_CHK_SET_ERR( result, "Failed to add entities to interface set" ); // Tag set with the proc rank(s) if( vit->first.size() == 1 ) { assert( ( vit->first )[0] != (int)procConfig.proc_rank() ); result = mbImpl->tag_set_data( shp_tag, &new_set, 1, &( vit->first )[0] );MB_CHK_SET_ERR( result, "Failed to tag interface set with procs" ); proc_handles[0] = 0; result = mbImpl->tag_set_data( shh_tag, &new_set, 1, proc_handles );MB_CHK_SET_ERR( result, "Failed to tag interface set with procs" ); } else { // Pad tag data out to MAX_SHARING_PROCS with -1 if( vit->first.size() > MAX_SHARING_PROCS ) { std::cerr << "Exceeded MAX_SHARING_PROCS for " << CN::EntityTypeName( TYPE_FROM_HANDLE( new_set ) ) << ' ' << ID_FROM_HANDLE( new_set ) << " on process " << proc_config().proc_rank() << std::endl; std::cerr.flush(); MPI_Abort( proc_config().proc_comm(), 66 ); } // assert(vit->first.size() <= MAX_SHARING_PROCS); std::copy( vit->first.begin(), vit->first.end(), proc_ids ); std::fill( proc_ids + vit->first.size(), proc_ids + MAX_SHARING_PROCS, -1 ); result = mbImpl->tag_set_data( shps_tag, &new_set, 1, proc_ids );MB_CHK_SET_ERR( result, "Failed to tag interface set with procs" ); unsigned int ind = std::find( proc_ids, proc_ids + vit->first.size(), procConfig.proc_rank() ) - proc_ids; assert( ind < vit->first.size() ); std::fill( proc_handles, proc_handles + MAX_SHARING_PROCS, 0 ); proc_handles[ind] = new_set; result = mbImpl->tag_set_data( shhs_tag, &new_set, 1, proc_handles );MB_CHK_SET_ERR( result, "Failed to tag interface set with procs" ); } // Get the owning proc, then set the pstatus tag on iface set int min_proc = ( vit->first )[0]; unsigned char pval = ( PSTATUS_SHARED | PSTATUS_INTERFACE ); if( min_proc < (int)procConfig.proc_rank() ) pval |= PSTATUS_NOT_OWNED; if( vit->first.size() > 1 ) pval |= PSTATUS_MULTISHARED; result = mbImpl->tag_set_data( pstat_tag, &new_set, 1, &pval );MB_CHK_SET_ERR( result, "Failed to tag interface set with pstatus" ); // Tag the vertices with the same thing pstatus.clear(); std::vector< EntityHandle > verts; for( std::vector< EntityHandle >::iterator v2it = ( vit->second ).begin(); v2it != ( vit->second ).end(); ++v2it ) if( mbImpl->type_from_handle( *v2it ) == MBVERTEX ) verts.push_back( *v2it ); pstatus.resize( verts.size(), pval ); if( !verts.empty() ) { result = mbImpl->tag_set_data( pstat_tag, &verts[0], verts.size(), &pstatus[0] );MB_CHK_SET_ERR( result, "Failed to tag interface set vertices with pstatus" ); } } return MB_SUCCESS; }
ErrorCode moab::ParallelComm::create_interface_sets ( EntityHandle this_set, int resolve_dim, int shared_dim )
Definition at line 4981 of file ParallelComm.cpp.
References create_interface_sets(), moab::Interface::dimension_from_handle(), ErrorCode, moab::Skinner::find_skin(), moab::Interface::get_adjacencies(), moab::Interface::get_entities_by_dimension(), get_proc_nvecs(), get_sharing_data(), MAX_SHARING_PROCS, MB_CHK_SET_ERR, mbImpl, sharedEnts, and moab::Interface::UNION.
{ std::map< std::vector< int >, std::vector< EntityHandle > > proc_nvecs; // Build up the list of shared entities int procs[MAX_SHARING_PROCS]; EntityHandle handles[MAX_SHARING_PROCS]; ErrorCode result; int nprocs; unsigned char pstat; for( std::set< EntityHandle >::iterator vit = sharedEnts.begin(); vit != sharedEnts.end(); ++vit ) { if( shared_dim != -1 && mbImpl->dimension_from_handle( *vit ) > shared_dim ) continue; result = get_sharing_data( *vit, procs, handles, pstat, nprocs );MB_CHK_SET_ERR( result, "Failed to get sharing data" ); std::sort( procs, procs + nprocs ); std::vector< int > tmp_procs( procs, procs + nprocs ); assert( tmp_procs.size() != 2 ); proc_nvecs[tmp_procs].push_back( *vit ); } Skinner skinner( mbImpl ); Range skin_ents[4]; result = mbImpl->get_entities_by_dimension( this_set, resolve_dim, skin_ents[resolve_dim] );MB_CHK_SET_ERR( result, "Failed to get skin entities by dimension" ); result = skinner.find_skin( this_set, skin_ents[resolve_dim], false, skin_ents[resolve_dim - 1], 0, true, true, true );MB_CHK_SET_ERR( result, "Failed to find skin" ); if( shared_dim > 1 ) { result = mbImpl->get_adjacencies( skin_ents[resolve_dim - 1], resolve_dim - 2, true, skin_ents[resolve_dim - 2], Interface::UNION );MB_CHK_SET_ERR( result, "Failed to get skin adjacencies" ); } result = get_proc_nvecs( resolve_dim, shared_dim, skin_ents, proc_nvecs ); return create_interface_sets( proc_nvecs ); }
ErrorCode moab::ParallelComm::create_part ( EntityHandle & part_out )
Definition at line 8210 of file ParallelComm.cpp.
References moab::Interface::add_entities(), moab::Interface::create_meshset(), moab::Interface::delete_entities(), ErrorCode, get_partitioning(), globalPartCount, moab::Range::index(), moab::Range::insert(), MB_SUCCESS, mbImpl, MESHSET_SET, part_tag(), partition_sets(), proc_config(), moab::ProcConfig::proc_rank(), and moab::Interface::tag_set_data().
Referenced by iMeshP_createPart().
{ // Mark as invalid so we know that it needs to be updated globalPartCount = -1; // Create set representing part ErrorCode rval = mbImpl->create_meshset( MESHSET_SET, set_out ); if( MB_SUCCESS != rval ) return rval; // Set tag on set int val = proc_config().proc_rank(); rval = mbImpl->tag_set_data( part_tag(), &set_out, 1, &val ); if( MB_SUCCESS != rval ) { mbImpl->delete_entities( &set_out, 1 ); return rval; } if( get_partitioning() ) { rval = mbImpl->add_entities( get_partitioning(), &set_out, 1 ); if( MB_SUCCESS != rval ) { mbImpl->delete_entities( &set_out, 1 ); return rval; } } moab::Range& pSets = this->partition_sets(); if( pSets.index( set_out ) < 0 ) { pSets.insert( set_out ); } return MB_SUCCESS; }
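A sketch of creating a part on each rank and populating it, where my_elems is a hypothetical Range of locally owned elements:

    moab::EntityHandle part;
    moab::ErrorCode rval = pcomm->create_part( part );
    if( moab::MB_SUCCESS == rval )
        rval = pcomm->get_moab()->add_entities( part, my_elems );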
void moab::ParallelComm::define_mpe ( ) [private]
Definition at line 4228 of file ParallelComm.cpp.
References moab::DebugOutput::get_verbosity(), MPE_Describe_state, MPE_LOG_OK, and myDebug.
Referenced by resolve_shared_ents().
{ #ifdef MOAB_HAVE_MPE if( myDebug->get_verbosity() == 2 ) { // Define mpe states used for logging int success; MPE_Log_get_state_eventIDs( &IFACE_START, &IFACE_END ); MPE_Log_get_state_eventIDs( &GHOST_START, &GHOST_END ); MPE_Log_get_state_eventIDs( &SHAREDV_START, &SHAREDV_END ); MPE_Log_get_state_eventIDs( &RESOLVE_START, &RESOLVE_END ); MPE_Log_get_state_eventIDs( &ENTITIES_START, &ENTITIES_END ); MPE_Log_get_state_eventIDs( &RHANDLES_START, &RHANDLES_END ); MPE_Log_get_state_eventIDs( &OWNED_START, &OWNED_END ); success = MPE_Describe_state( IFACE_START, IFACE_END, "Resolve interface ents", "green" ); assert( MPE_LOG_OK == success ); success = MPE_Describe_state( GHOST_START, GHOST_END, "Exchange ghost ents", "red" ); assert( MPE_LOG_OK == success ); success = MPE_Describe_state( SHAREDV_START, SHAREDV_END, "Resolve interface vertices", "blue" ); assert( MPE_LOG_OK == success ); success = MPE_Describe_state( RESOLVE_START, RESOLVE_END, "Resolve shared ents", "purple" ); assert( MPE_LOG_OK == success ); success = MPE_Describe_state( ENTITIES_START, ENTITIES_END, "Exchange shared ents", "yellow" ); assert( MPE_LOG_OK == success ); success = MPE_Describe_state( RHANDLES_START, RHANDLES_END, "Remote handles", "cyan" ); assert( MPE_LOG_OK == success ); success = MPE_Describe_state( OWNED_START, OWNED_END, "Exchange owned ents", "black" ); assert( MPE_LOG_OK == success ); } #endif }
void moab::ParallelComm::delete_all_buffers ( ) [inline, private]
reset message buffers to their initial state
delete all buffers, freeing up any memory held by them
Definition at line 1557 of file ParallelComm.hpp.
References localOwnedBuffs, and remoteOwnedBuffs.
Referenced by ~ParallelComm().
{ std::vector< Buffer* >::iterator vit; for( vit = localOwnedBuffs.begin(); vit != localOwnedBuffs.end(); ++vit ) delete( *vit ); localOwnedBuffs.clear(); for( vit = remoteOwnedBuffs.begin(); vit != remoteOwnedBuffs.end(); ++vit ) delete( *vit ); remoteOwnedBuffs.clear(); }
ErrorCode moab::ParallelComm::delete_entities ( Range & to_delete )
Definition at line 9258 of file ParallelComm.cpp.
References moab::Range::begin(), moab::ProcConfig::crystal_router(), moab::Interface::delete_entities(), moab::TupleList::enableWriteAccess(), moab::Range::end(), ErrorCode, moab::TupleList::get_n(), get_sharing_data(), moab::TupleList::inc_n(), moab::Range::index(), moab::TupleList::initialize(), moab::Range::insert(), MAX_SHARING_PROCS, MB_CHK_SET_ERR, MB_SUCCESS, mbImpl, procConfig, sharedEnts, moab::Range::size(), moab::TupleList::vi_wr, moab::TupleList::vul_rd, and moab::TupleList::vul_wr.
Referenced by moab::NCHelperScrip::create_mesh(), and test_delete_entities().
{ // Will not look at shared sets yet, but maybe we should // First, see if any of the entities to delete is shared; then inform the other processors // about their fate (to be deleted), using a crystal router transfer ErrorCode rval = MB_SUCCESS; unsigned char pstat; EntityHandle tmp_handles[MAX_SHARING_PROCS]; int tmp_procs[MAX_SHARING_PROCS]; unsigned int num_ps; TupleList ents_to_delete; ents_to_delete.initialize( 1, 0, 1, 0, to_delete.size() * ( MAX_SHARING_PROCS + 1 ) ); // A little bit of overkill ents_to_delete.enableWriteAccess(); unsigned int i = 0; for( Range::iterator it = to_delete.begin(); it != to_delete.end(); ++it ) { EntityHandle eh = *it; // Entity to be deleted rval = get_sharing_data( eh, tmp_procs, tmp_handles, pstat, num_ps ); if( rval != MB_SUCCESS || num_ps == 0 ) continue; // Add to the tuple list the information to be sent (to the remote procs) for( unsigned int p = 0; p < num_ps; p++ ) { ents_to_delete.vi_wr[i] = tmp_procs[p]; ents_to_delete.vul_wr[i] = (unsigned long)tmp_handles[p]; i++; ents_to_delete.inc_n(); } } gs_data::crystal_data* cd = this->procConfig.crystal_router(); // All communication happens here; no other mpi calls // Also, this is a collective call rval = cd->gs_transfer( 1, ents_to_delete, 0 );MB_CHK_SET_ERR( rval, "Error in tuple transfer" ); // Add to the range of ents to delete the new ones that were sent from other procs unsigned int received = ents_to_delete.get_n(); for( i = 0; i < received; i++ ) { // int from = ents_to_delete.vi_rd[i]; unsigned long valrec = ents_to_delete.vul_rd[i]; to_delete.insert( (EntityHandle)valrec ); } rval = mbImpl->delete_entities( to_delete );MB_CHK_SET_ERR( rval, "Error in deleting actual entities" ); std::set< EntityHandle > good_ents; for( std::set< EntityHandle >::iterator sst = sharedEnts.begin(); sst != sharedEnts.end(); sst++ ) { EntityHandle eh = *sst; int index = to_delete.index( eh ); if( -1 == index ) good_ents.insert( eh ); } sharedEnts = good_ents; // What about shared sets? Who is updating them? return MB_SUCCESS; }
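Since the crystal-router transfer inside is collective, every rank must make this call, even with an empty range; a sketch:

    moab::Range doomed;  // entities this rank wants deleted
    // ... fill doomed; remote copies of shared entities are found automatically
    moab::ErrorCode rval = pcomm->delete_entities( doomed );  // collective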
ErrorCode moab::ParallelComm::destroy_part ( EntityHandle part_id )
Definition at line 8248 of file ParallelComm.cpp.
References moab::Interface::delete_entities(), moab::Range::erase(), ErrorCode, get_partitioning(), globalPartCount, moab::Range::index(), MB_SUCCESS, mbImpl, partition_sets(), and moab::Interface::remove_entities().
Referenced by iMeshP_destroyPart().
{ // Mark as invalid so we know that it needs to be updated globalPartCount = -1; ErrorCode rval; if( get_partitioning() ) { rval = mbImpl->remove_entities( get_partitioning(), &part_id, 1 ); if( MB_SUCCESS != rval ) return rval; } moab::Range& pSets = this->partition_sets(); if( pSets.index( part_id ) >= 0 ) { pSets.erase( part_id ); } return mbImpl->delete_entities( &part_id, 1 ); }
int moab::ParallelComm::estimate_ents_buffer_size ( Range & entities, const bool store_remote_handles ) [private]
estimate size required to pack entities
Definition at line 1503 of file ParallelComm.cpp.
References ErrorCode, moab::Interface::get_connectivity(), moab::Range::lower_bound(), MB_CHK_SET_ERR_RET_VAL, MBEDGE, MBENTITYSET, mbImpl, MBVERTEX, moab::Range::num_of_type(), t, and moab::TYPE_FROM_HANDLE().
Referenced by pack_entities().
{ int buff_size = 0; std::vector< EntityHandle > dum_connect_vec; const EntityHandle* connect; int num_connect; int num_verts = entities.num_of_type( MBVERTEX ); // # verts + coords + handles buff_size += 2 * sizeof( int ) + 3 * sizeof( double ) * num_verts; if( store_remote_handles ) buff_size += sizeof( EntityHandle ) * num_verts; // Do a rough count by looking at first entity of each type for( EntityType t = MBEDGE; t < MBENTITYSET; t++ ) { const Range::iterator rit = entities.lower_bound( t ); if( TYPE_FROM_HANDLE( *rit ) != t ) continue; ErrorCode result = mbImpl->get_connectivity( *rit, connect, num_connect, false, &dum_connect_vec );MB_CHK_SET_ERR_RET_VAL( result, "Failed to get connectivity to estimate buffer size", -1 ); // Number, type, nodes per entity buff_size += 3 * sizeof( int ); int num_ents = entities.num_of_type( t ); // Connectivity, handle for each ent buff_size += ( num_connect + 1 ) * sizeof( EntityHandle ) * num_ents; } // Extra entity type at end, passed as int buff_size += sizeof( int ); return buff_size; }
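As a worked example of the vertex term (assuming 4-byte ints, 8-byte doubles, and 8-byte EntityHandles), 1000 vertices with store_remote_handles true contribute 2*4 + 1000*(3*8) + 1000*8 = 32,008 bytes to the estimate.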
int moab::ParallelComm::estimate_sets_buffer_size ( Range & entities, const bool store_remote_handles ) [private]
estimate size required to pack sets
Definition at line 1536 of file ParallelComm.cpp.
References moab::Range::end(), ErrorCode, moab::Interface::get_entities_by_handle(), moab::Interface::get_meshset_options(), moab::Interface::get_number_entities_by_handle(), moab::Range::lower_bound(), MB_CHK_SET_ERR_RET_VAL, MBENTITYSET, mbImpl, MESHSET_SET, moab::Interface::num_child_meshsets(), moab::Interface::num_parent_meshsets(), and moab::RANGE_SIZE().
Referenced by pack_sets().
{ // Number of sets int buff_size = sizeof( int ); // Do a rough count by looking at first entity of each type Range::iterator rit = entities.lower_bound( MBENTITYSET ); ErrorCode result; for( ; rit != entities.end(); ++rit ) { unsigned int options; result = mbImpl->get_meshset_options( *rit, options );MB_CHK_SET_ERR_RET_VAL( result, "Failed to get meshset options", -1 ); buff_size += sizeof( int ); Range set_range; if( options & MESHSET_SET ) { // Range-based set; count the subranges result = mbImpl->get_entities_by_handle( *rit, set_range );MB_CHK_SET_ERR_RET_VAL( result, "Failed to get set entities", -1 ); // Set range buff_size += RANGE_SIZE( set_range ); } else if( options & MESHSET_ORDERED ) { // Just get the number of entities in the set int num_ents; result = mbImpl->get_number_entities_by_handle( *rit, num_ents );MB_CHK_SET_ERR_RET_VAL( result, "Failed to get number entities in ordered set", -1 ); // Set vec buff_size += sizeof( EntityHandle ) * num_ents + sizeof( int ); } // Get numbers of parents/children int num_par, num_ch; result = mbImpl->num_child_meshsets( *rit, &num_ch );MB_CHK_SET_ERR_RET_VAL( result, "Failed to get num children", -1 ); result = mbImpl->num_parent_meshsets( *rit, &num_par );MB_CHK_SET_ERR_RET_VAL( result, "Failed to get num parents", -1 ); buff_size += ( num_ch + num_par ) * sizeof( EntityHandle ) + 2 * sizeof( int ); } return buff_size; }
ErrorCode moab::ParallelComm::exchange_all_shared_handles ( std::vector< std::vector< SharedEntityData > > & send_data, std::vector< std::vector< SharedEntityData > > & result ) [private]
Every processor sends shared entity handle data to every other processor that it shares entities with. The map passed back contains all received data, indexed by processor ID. This function is intended to be used for debugging.
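The implementation below follows the standard two-phase nonblocking pattern: exchange payload sizes first, then post exact-size receives for the data. A generic sketch of that pattern, where procs, send_data, and comm stand in for the member state used in the real code:

    // Phase 1: exchange per-neighbor payload sizes.
    std::vector< MPI_Request > rreq( procs.size() ), sreq( procs.size() );
    std::vector< int > nrecv( procs.size() ), nsend( procs.size() );
    for( size_t i = 0; i < procs.size(); i++ )
    {
        MPI_Irecv( &nrecv[i], 1, MPI_INT, procs[i], 0, comm, &rreq[i] );
        nsend[i] = (int)send_data[i].size();
        MPI_Isend( &nsend[i], 1, MPI_INT, procs[i], 0, comm, &sreq[i] );
    }
    MPI_Waitall( (int)procs.size(), &rreq[0], MPI_STATUSES_IGNORE );
    MPI_Waitall( (int)procs.size(), &sreq[0], MPI_STATUSES_IGNORE );
    // Phase 2: nrecv[i] now gives the exact element count coming from
    // procs[i], so receive buffers can be sized before the payload exchange
    // (the real code transfers sizeof(SharedEntityData) * n bytes per peer).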
Definition at line 8474 of file ParallelComm.cpp.
References buffProcs, ierr, MB_FILE_WRITE_ERROR, MB_SUCCESS, moab::ProcConfig::proc_comm(), and procConfig.
Referenced by check_all_shared_handles(), and correct_thin_ghost_layers().
{ int ierr; const int tag = 0; const MPI_Comm cm = procConfig.proc_comm(); const int num_proc = buffProcs.size(); const std::vector< int > procs( buffProcs.begin(), buffProcs.end() ); std::vector< MPI_Request > recv_req( buffProcs.size(), MPI_REQUEST_NULL ); std::vector< MPI_Request > send_req( buffProcs.size(), MPI_REQUEST_NULL ); // Set up to receive sizes std::vector< int > sizes_send( num_proc ), sizes_recv( num_proc ); for( int i = 0; i < num_proc; i++ ) { ierr = MPI_Irecv( &sizes_recv[i], 1, MPI_INT, procs[i], tag, cm, &recv_req[i] ); if( ierr ) return MB_FILE_WRITE_ERROR; } // Send sizes assert( num_proc == (int)send_data.size() ); result.resize( num_proc ); for( int i = 0; i < num_proc; i++ ) { sizes_send[i] = send_data[i].size(); ierr = MPI_Isend( &sizes_send[i], 1, MPI_INT, buffProcs[i], tag, cm, &send_req[i] ); if( ierr ) return MB_FILE_WRITE_ERROR; } // Receive sizes std::vector< MPI_Status > stat( num_proc ); ierr = MPI_Waitall( num_proc, &recv_req[0], &stat[0] ); if( ierr ) return MB_FILE_WRITE_ERROR; // Wait until all sizes are sent (clean up pending req's) ierr = MPI_Waitall( num_proc, &send_req[0], &stat[0] ); if( ierr ) return MB_FILE_WRITE_ERROR; // Set up to receive data for( int i = 0; i < num_proc; i++ ) { result[i].resize( sizes_recv[i] ); ierr = MPI_Irecv( (void*)( &( result[i][0] ) ), sizeof( SharedEntityData ) * sizes_recv[i], MPI_UNSIGNED_CHAR, buffProcs[i], tag, cm, &recv_req[i] ); if( ierr ) return MB_FILE_WRITE_ERROR; } // Send data for( int i = 0; i < num_proc; i++ ) { ierr = MPI_Isend( (void*)( &( send_data[i][0] ) ), sizeof( SharedEntityData ) * sizes_send[i], MPI_UNSIGNED_CHAR, buffProcs[i], tag, cm, &send_req[i] ); if( ierr ) return MB_FILE_WRITE_ERROR; } // Receive data ierr = MPI_Waitall( num_proc, &recv_req[0], &stat[0] ); if( ierr ) return MB_FILE_WRITE_ERROR; // Wait until everything is sent to release send buffers ierr = MPI_Waitall( num_proc, &send_req[0], &stat[0] ); if( ierr ) return MB_FILE_WRITE_ERROR; return MB_SUCCESS; }
ErrorCode moab::ParallelComm::exchange_ghost_cells ( int ghost_dim, int bridge_dim, int num_layers, int addl_ents, bool store_remote_handles, bool wait_all = true, EntityHandle * file_set = NULL )
Exchange ghost cells with neighboring procs. Neighboring processors are those sharing an interface with this processor. All entities of dimension ghost_dim within num_layers of the interface, measured going through bridge_dim, are exchanged. See MeshTopoUtil::get_bridge_adjacencies for a description of bridge adjacencies. If wait_all is false and store_remote_handles is true, MPI_Request objects are available in the sendReqs[2*MAX_SHARING_PROCS] member array, with inactive requests marked as MPI_REQUEST_NULL. If store_remote_handles or wait_all is false, this function returns after all entities have been received and processed.
Parameters:
    ghost_dim - Dimension of ghost entities to be exchanged
    bridge_dim - Dimension of entities used to measure layers from interface
    num_layers - Number of layers of ghosts requested
    addl_ents - Dimension of additional adjacent entities to exchange with ghosts, 0 if none
    store_remote_handles - If true, send message with new entity handles to source processor
    wait_all - If true, function does not return until all send buffers are cleared
Definition at line 5687 of file ParallelComm.cpp.
References moab::Interface::add_entities(), buffProcs, check_all_shared_handles(), check_clean_iface(), check_sent_ents(), moab::Range::compactness(), moab::Range::empty(), ErrorCode, get_sent_ents(), moab::DebugOutput::get_verbosity(), INITIAL_BUFF_SIZE, localOwnedBuffs, MAX_SHARING_PROCS, MB_CHK_SET_ERR, moab::MB_MESG_ENTS_SIZE, moab::MB_MESG_REMOTEH_SIZE, MB_SET_ERR, MB_SUCCESS, mbImpl, MPE_Log_event, moab::msgs, myDebug, pack_entities(), pack_remote_handles(), print_buffer(), PRINT_DEBUG_IRECV, PRINT_DEBUG_RECD, PRINT_DEBUG_WAITANY, moab::ProcConfig::proc_comm(), moab::ProcConfig::proc_rank(), procConfig, recv_buffer(), remoteOwnedBuffs, moab::TupleList::reset(), reset_all_buffers(), send_buffer(), sendReqs, sharedEnts, moab::Range::size(), size(), tag_iface_entities(), moab::DebugOutput::tprintf(), unpack_entities(), and unpack_remote_handles().
Referenced by create_parallel_mesh(), moab::NestedRefine::exchange_ghosts(), iMesh_createStructuredMesh(), iMeshP_createGhostEntsAll(), moab::ReadParallel::load_file(), main(), resolve_and_exchange(), resolve_shared_ents(), moab::ParallelMergeMesh::TagSharedElements(), test_correct_ghost(), test_ghosted_entity_shared_data(), test_interface_owners_common(), test_pack_shared_entities_2d(), test_pack_shared_entities_3d(), test_read_and_ghost_after(), and test_read_with_thin_ghost_layer().
{ #ifdef MOAB_HAVE_MPE if( myDebug->get_verbosity() == 2 ) { if( !num_layers ) MPE_Log_event( IFACE_START, procConfig.proc_rank(), "Starting interface exchange." ); else MPE_Log_event( GHOST_START, procConfig.proc_rank(), "Starting ghost exchange." ); } #endif myDebug->tprintf( 1, "Entering exchange_ghost_cells with num_layers = %d\n", num_layers ); if( myDebug->get_verbosity() == 4 ) { msgs.clear(); msgs.reserve( MAX_SHARING_PROCS ); } // If we're only finding out about existing ents, we have to be storing // remote handles too assert( num_layers > 0 || store_remote_handles ); const bool is_iface = !num_layers; // Get the b-dimensional interface(s) with with_proc, where b = bridge_dim int success; ErrorCode result = MB_SUCCESS; int incoming1 = 0, incoming2 = 0; reset_all_buffers(); // When this function is called, buffProcs should already have any // communicating procs //=========================================== // Post ghost irecv's for ghost entities from all communicating procs //=========================================== #ifdef MOAB_HAVE_MPE if( myDebug->get_verbosity() == 2 ) { MPE_Log_event( ENTITIES_START, procConfig.proc_rank(), "Starting entity exchange." ); } #endif // Index reqs the same as buffer/sharing procs indices std::vector< MPI_Request > recv_ent_reqs( 3 * buffProcs.size(), MPI_REQUEST_NULL ), recv_remoteh_reqs( 3 * buffProcs.size(), MPI_REQUEST_NULL ); std::vector< unsigned int >::iterator proc_it; int ind, p; sendReqs.resize( 3 * buffProcs.size(), MPI_REQUEST_NULL ); for( ind = 0, proc_it = buffProcs.begin(); proc_it != buffProcs.end(); ++proc_it, ind++ ) { incoming1++; PRINT_DEBUG_IRECV( procConfig.proc_rank(), buffProcs[ind], remoteOwnedBuffs[ind]->mem_ptr, INITIAL_BUFF_SIZE, MB_MESG_ENTS_SIZE, incoming1 ); success = MPI_Irecv( remoteOwnedBuffs[ind]->mem_ptr, INITIAL_BUFF_SIZE, MPI_UNSIGNED_CHAR, buffProcs[ind], MB_MESG_ENTS_SIZE, procConfig.proc_comm(), &recv_ent_reqs[3 * ind] ); if( success != MPI_SUCCESS ) { MB_SET_ERR( MB_FAILURE, "Failed to post irecv in ghost exchange" ); } } //=========================================== // Get entities to be sent to neighbors //=========================================== Range sent_ents[MAX_SHARING_PROCS], allsent, tmp_range; TupleList entprocs; int dum_ack_buff; result = get_sent_ents( is_iface, bridge_dim, ghost_dim, num_layers, addl_ents, sent_ents, allsent, entprocs );MB_CHK_SET_ERR( result, "get_sent_ents failed" ); // augment file set with the entities to be sent // we might have created new entities if addl_ents>0, edges and/or faces if( addl_ents > 0 && file_set && !allsent.empty() ) { result = mbImpl->add_entities( *file_set, allsent );MB_CHK_SET_ERR( result, "Failed to add new sub-entities to set" ); } myDebug->tprintf( 1, "allsent ents compactness (size) = %f (%lu)\n", allsent.compactness(), (unsigned long)allsent.size() ); //=========================================== // Pack and send ents from this proc to others //=========================================== for( p = 0, proc_it = buffProcs.begin(); proc_it != buffProcs.end(); ++proc_it, p++ ) { myDebug->tprintf( 1, "Sent ents compactness (size) = %f (%lu)\n", sent_ents[p].compactness(), (unsigned long)sent_ents[p].size() ); // Reserve space on front for size and for initial buff size localOwnedBuffs[p]->reset_buffer( sizeof( int ) ); // Entities result = pack_entities( sent_ents[p], localOwnedBuffs[p], store_remote_handles, buffProcs[p], is_iface, &entprocs, &allsent );MB_CHK_SET_ERR( result, "Packing entities failed" ); if( myDebug->get_verbosity() == 4 ) { 
            msgs.resize( msgs.size() + 1 );
            msgs.back() = new Buffer( *localOwnedBuffs[p] );
        }

        // Send the buffer (size stored in front in send_buffer)
        result = send_buffer( *proc_it, localOwnedBuffs[p], MB_MESG_ENTS_SIZE, sendReqs[3 * p],
                              recv_ent_reqs[3 * p + 2], &dum_ack_buff, incoming1, MB_MESG_REMOTEH_SIZE,
                              ( !is_iface && store_remote_handles ?  // this used for ghosting only
                                    localOwnedBuffs[p] : NULL ),
                              &recv_remoteh_reqs[3 * p], &incoming2 );MB_CHK_SET_ERR( result, "Failed to Isend in ghost exchange" );
    }

    entprocs.reset();

    //===========================================
    // Receive/unpack new entities
    //===========================================
    // Number of incoming messages for ghosts is the number of procs we
    // communicate with; for iface, it's the number of those with lower rank
    MPI_Status status;
    std::vector< std::vector< EntityHandle > > recd_ents( buffProcs.size() );
    std::vector< std::vector< EntityHandle > > L1hloc( buffProcs.size() ), L1hrem( buffProcs.size() );
    std::vector< std::vector< int > > L1p( buffProcs.size() );
    std::vector< EntityHandle > L2hloc, L2hrem;
    std::vector< unsigned int > L2p;
    std::vector< EntityHandle > new_ents;

    while( incoming1 )
    {
        // Wait for all recvs of ghost ents before proceeding to sending remote handles,
        // b/c some procs may have sent to a 3rd proc ents owned by me;
        PRINT_DEBUG_WAITANY( recv_ent_reqs, MB_MESG_ENTS_SIZE, procConfig.proc_rank() );

        success = MPI_Waitany( 3 * buffProcs.size(), &recv_ent_reqs[0], &ind, &status );
        if( MPI_SUCCESS != success )
        {
            MB_SET_ERR( MB_FAILURE, "Failed in waitany in ghost exchange" );
        }

        PRINT_DEBUG_RECD( status );

        // OK, received something; decrement incoming counter
        incoming1--;
        bool done = false;

        // In case ind is for ack, we need index of one before it
        unsigned int base_ind = 3 * ( ind / 3 );
        result = recv_buffer( MB_MESG_ENTS_SIZE, status, remoteOwnedBuffs[ind / 3], recv_ent_reqs[base_ind + 1],
                              recv_ent_reqs[base_ind + 2], incoming1, localOwnedBuffs[ind / 3],
                              sendReqs[base_ind + 1], sendReqs[base_ind + 2], done,
                              ( !is_iface && store_remote_handles ? localOwnedBuffs[ind / 3] : NULL ),
                              MB_MESG_REMOTEH_SIZE,  // maybe base_ind+1?
                              &recv_remoteh_reqs[base_ind + 1], &incoming2 );MB_CHK_SET_ERR( result, "Failed to receive buffer" );

        if( done )
        {
            if( myDebug->get_verbosity() == 4 )
            {
                msgs.resize( msgs.size() + 1 );
                msgs.back() = new Buffer( *remoteOwnedBuffs[ind / 3] );
            }

            // Message completely received - process buffer that was sent
            remoteOwnedBuffs[ind / 3]->reset_ptr( sizeof( int ) );
            result = unpack_entities( remoteOwnedBuffs[ind / 3]->buff_ptr, store_remote_handles, ind / 3, is_iface,
                                      L1hloc, L1hrem, L1p, L2hloc, L2hrem, L2p, new_ents );
            if( MB_SUCCESS != result )
            {
                std::cout << "Failed to unpack entities. Buffer contents:" << std::endl;
                print_buffer( remoteOwnedBuffs[ind / 3]->mem_ptr, MB_MESG_ENTS_SIZE, buffProcs[ind / 3], false );
                return result;
            }

            if( recv_ent_reqs.size() != 3 * buffProcs.size() )
            {
                // Post irecv's for remote handles from new proc; shouldn't be iface,
                // since we know about all procs we share with
                assert( !is_iface );
                recv_remoteh_reqs.resize( 3 * buffProcs.size(), MPI_REQUEST_NULL );
                for( unsigned int i = recv_ent_reqs.size(); i < 3 * buffProcs.size(); i += 3 )
                {
                    localOwnedBuffs[i / 3]->reset_buffer();
                    incoming2++;
                    PRINT_DEBUG_IRECV( procConfig.proc_rank(), buffProcs[i / 3], localOwnedBuffs[i / 3]->mem_ptr,
                                       INITIAL_BUFF_SIZE, MB_MESG_REMOTEH_SIZE, incoming2 );
                    success = MPI_Irecv( localOwnedBuffs[i / 3]->mem_ptr, INITIAL_BUFF_SIZE, MPI_UNSIGNED_CHAR,
                                         buffProcs[i / 3], MB_MESG_REMOTEH_SIZE, procConfig.proc_comm(),
                                         &recv_remoteh_reqs[i] );
                    if( success != MPI_SUCCESS )
                    {
                        MB_SET_ERR( MB_FAILURE, "Failed to post irecv for remote handles in ghost exchange" );
                    }
                }
                recv_ent_reqs.resize( 3 * buffProcs.size(), MPI_REQUEST_NULL );
                sendReqs.resize( 3 * buffProcs.size(), MPI_REQUEST_NULL );
            }
        }
    }

    // Add requests for any new addl procs
    if( recv_ent_reqs.size() != 3 * buffProcs.size() )
    {
        // Shouldn't get here...
        MB_SET_ERR( MB_FAILURE, "Requests length doesn't match proc count in ghost exchange" );
    }

#ifdef MOAB_HAVE_MPE
    if( myDebug->get_verbosity() == 2 )
    {
        MPE_Log_event( ENTITIES_END, procConfig.proc_rank(), "Ending entity exchange." );
    }
#endif

    if( is_iface )
    {
        // Need to check over entities I sent and make sure I received
        // handles for them from all expected procs; if not, need to clean
        // them up
        result = check_clean_iface( allsent );
        if( MB_SUCCESS != result ) std::cout << "Failed check." << std::endl;

        // Now set the shared/interface tag on non-vertex entities on interface
        result = tag_iface_entities();MB_CHK_SET_ERR( result, "Failed to tag iface entities" );

#ifndef NDEBUG
        result = check_sent_ents( allsent );
        if( MB_SUCCESS != result ) std::cout << "Failed check." << std::endl;
        result = check_all_shared_handles( true );
        if( MB_SUCCESS != result ) std::cout << "Failed check." << std::endl;
#endif

#ifdef MOAB_HAVE_MPE
        if( myDebug->get_verbosity() == 2 )
        {
            MPE_Log_event( IFACE_END, procConfig.proc_rank(), "Ending interface exchange." );
        }
#endif

        //===========================================
        // Wait if requested
        //===========================================
        if( wait_all )
        {
            if( myDebug->get_verbosity() == 5 )
            {
                success = MPI_Barrier( procConfig.proc_comm() );
            }
            else
            {
                MPI_Status mult_status[3 * MAX_SHARING_PROCS];
                success = MPI_Waitall( 3 * buffProcs.size(), &recv_ent_reqs[0], mult_status );
                if( MPI_SUCCESS != success )
                {
                    MB_SET_ERR( MB_FAILURE, "Failed in waitall in ghost exchange" );
                }
                success = MPI_Waitall( 3 * buffProcs.size(), &sendReqs[0], mult_status );
                if( MPI_SUCCESS != success )
                {
                    MB_SET_ERR( MB_FAILURE, "Failed in waitall in ghost exchange" );
                }
                /*success = MPI_Waitall(3*buffProcs.size(), &recv_remoteh_reqs[0], mult_status);
                if (MPI_SUCCESS != success) {
                  MB_SET_ERR(MB_FAILURE, "Failed in waitall in ghost exchange");
                }*/
            }
        }

        myDebug->tprintf( 1, "Total number of shared entities = %lu.\n", (unsigned long)sharedEnts.size() );
        myDebug->tprintf( 1, "Exiting exchange_ghost_cells for is_iface==true \n" );

        return MB_SUCCESS;
    }

    // we still need to wait on sendReqs, if they are not fulfilled yet
    if( wait_all )
    {
        if( myDebug->get_verbosity() == 5 )
        {
            success = MPI_Barrier( procConfig.proc_comm() );
        }
        else
        {
            MPI_Status mult_status[3 * MAX_SHARING_PROCS];
            success = MPI_Waitall( 3 * buffProcs.size(), &sendReqs[0], mult_status );
            if( MPI_SUCCESS != success )
            {
                MB_SET_ERR( MB_FAILURE, "Failed in waitall in ghost exchange" );
            }
        }
    }

    //===========================================
    // Send local handles for new ghosts to owner, then add
    // those to ghost list for that owner
    //===========================================
    for( p = 0, proc_it = buffProcs.begin(); proc_it != buffProcs.end(); ++proc_it, p++ )
    {
        // Reserve space on front for size and for initial buff size
        remoteOwnedBuffs[p]->reset_buffer( sizeof( int ) );

        result = pack_remote_handles( L1hloc[p], L1hrem[p], L1p[p], *proc_it, remoteOwnedBuffs[p] );MB_CHK_SET_ERR( result, "Failed to pack remote handles" );
        remoteOwnedBuffs[p]->set_stored_size();

        if( myDebug->get_verbosity() == 4 )
        {
            msgs.resize( msgs.size() + 1 );
            msgs.back() = new Buffer( *remoteOwnedBuffs[p] );
        }
        result = send_buffer( buffProcs[p], remoteOwnedBuffs[p], MB_MESG_REMOTEH_SIZE, sendReqs[3 * p],
                              recv_remoteh_reqs[3 * p + 2], &dum_ack_buff, incoming2 );MB_CHK_SET_ERR( result, "Failed to send remote handles" );
    }

    //===========================================
    // Process remote handles of my ghosteds
    //===========================================
    while( incoming2 )
    {
        PRINT_DEBUG_WAITANY( recv_remoteh_reqs, MB_MESG_REMOTEH_SIZE, procConfig.proc_rank() );
        success = MPI_Waitany( 3 * buffProcs.size(), &recv_remoteh_reqs[0], &ind, &status );
        if( MPI_SUCCESS != success )
        {
            MB_SET_ERR( MB_FAILURE, "Failed in waitany in ghost exchange" );
        }

        // OK, received something; decrement incoming counter
        incoming2--;

        PRINT_DEBUG_RECD( status );

        bool done = false;
        unsigned int base_ind = 3 * ( ind / 3 );
        result = recv_buffer( MB_MESG_REMOTEH_SIZE, status, localOwnedBuffs[ind / 3], recv_remoteh_reqs[base_ind + 1],
                              recv_remoteh_reqs[base_ind + 2], incoming2, remoteOwnedBuffs[ind / 3],
                              sendReqs[base_ind + 1], sendReqs[base_ind + 2], done );MB_CHK_SET_ERR( result, "Failed to receive remote handles" );

        if( done )
        {
            // Incoming remote handles
            if( myDebug->get_verbosity() == 4 )
            {
                msgs.resize( msgs.size() + 1 );
                msgs.back() = new Buffer( *localOwnedBuffs[ind / 3] );
            }
            localOwnedBuffs[ind / 3]->reset_ptr( sizeof( int ) );
            result = unpack_remote_handles( buffProcs[ind / 3], localOwnedBuffs[ind / 3]->buff_ptr, L2hloc, L2hrem,
                                            L2p );MB_CHK_SET_ERR( result, "Failed to unpack remote handles" );
        }
    }

#ifdef MOAB_HAVE_MPE
    if( myDebug->get_verbosity() == 2 )
    {
        MPE_Log_event( RHANDLES_END, procConfig.proc_rank(), "Ending remote handles." );
        MPE_Log_event( GHOST_END, procConfig.proc_rank(), "Ending ghost exchange (still doing checks)." );
    }
#endif

    //===========================================
    // Wait if requested
    //===========================================
    if( wait_all )
    {
        if( myDebug->get_verbosity() == 5 )
        {
            success = MPI_Barrier( procConfig.proc_comm() );
        }
        else
        {
            MPI_Status mult_status[3 * MAX_SHARING_PROCS];
            success = MPI_Waitall( 3 * buffProcs.size(), &recv_remoteh_reqs[0], mult_status );
            if( MPI_SUCCESS == success ) success = MPI_Waitall( 3 * buffProcs.size(), &sendReqs[0], mult_status );
        }
        if( MPI_SUCCESS != success )
        {
            MB_SET_ERR( MB_FAILURE, "Failed in waitall in ghost exchange" );
        }
    }

#ifndef NDEBUG
    result = check_sent_ents( allsent );MB_CHK_SET_ERR( result, "Failed check on shared entities" );
    result = check_all_shared_handles( true );MB_CHK_SET_ERR( result, "Failed check on all shared handles" );
#endif

    if( file_set && !new_ents.empty() )
    {
        result = mbImpl->add_entities( *file_set, &new_ents[0], new_ents.size() );MB_CHK_SET_ERR( result, "Failed to add new entities to set" );
    }

    myDebug->tprintf( 1, "Total number of shared entities = %lu.\n", (unsigned long)sharedEnts.size() );
    myDebug->tprintf( 1, "Exiting exchange_ghost_cells for is_iface==false \n" );

    return MB_SUCCESS;
}
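For orientation, here is a minimal sketch of how an application typically drives this routine. The file name, read options, and ghosting parameters are illustrative assumptions, not part of ParallelComm itself; error handling is abbreviated.

#include "moab/Core.hpp"
#include "moab/ParallelComm.hpp"
#include <mpi.h>

using namespace moab;

int main( int argc, char** argv )
{
    MPI_Init( &argc, &argv );
    Core mb;
    ParallelComm pcomm( &mb, MPI_COMM_WORLD );

    // Read and partition a mesh in parallel (file name and options are illustrative)
    ErrorCode rval = mb.load_file( "mesh.h5m", 0,
        "PARALLEL=READ_PART;PARTITION=PARALLEL_PARTITION;PARALLEL_RESOLVE_SHARED_ENTS" );
    if( MB_SUCCESS != rval ) return 1;

    // One layer of 3D ghost elements, bridged through vertices (bridge_dim 0),
    // no additional adjacent entities, storing remote handles
    rval = pcomm.exchange_ghost_cells( 3, 0, 1, 0, true );
    if( MB_SUCCESS != rval ) return 1;

    MPI_Finalize();
    return 0;
}

Because the call is collective over the ParallelComm communicator, every rank must reach it, even ranks that contribute no ghost entities.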
ErrorCode moab::ParallelComm::exchange_ghost_cells( ParallelComm** pcs,
                                                    unsigned int num_procs,
                                                    int ghost_dim,
                                                    int bridge_dim,
                                                    int num_layers,
                                                    int addl_ents,
                                                    bool store_remote_handles,
                                                    EntityHandle* file_sets = NULL )  [static]
Static version of exchange_ghost_cells, exchanging info through buffers rather than messages.
Definition at line 6588 of file ParallelComm.cpp.
References moab::Interface::add_entities(), buffProcs, check_all_shared_handles(), check_clean_iface(), check_sent_ents(), ErrorCode, get_moab(), get_sent_ents(), localOwnedBuffs, MAX_SHARING_PROCS, MB_CHK_SET_ERR, MB_SUCCESS, pack_entities(), pack_remote_handles(), moab::Interface::query_interface(), moab::TupleList::reset(), size(), unpack_entities(), and unpack_remote_handles().
{
    // Static version of function, exchanging info through buffers rather
    // than through messages

    // If we're only finding out about existing ents, we have to be storing
    // remote handles too
    assert( num_layers > 0 || store_remote_handles );

    const bool is_iface = !num_layers;

    unsigned int ind;
    ParallelComm* pc;
    ErrorCode result = MB_SUCCESS;

    std::vector< Error* > ehs( num_procs );
    for( unsigned int i = 0; i < num_procs; i++ )
    {
        result = pcs[i]->get_moab()->query_interface( ehs[i] );
        assert( MB_SUCCESS == result );
    }

    // When this function is called, buffProcs should already have any
    // communicating procs

    //===========================================
    // Get entities to be sent to neighbors
    //===========================================
    // Done in a separate loop over procs because sometimes later procs
    // need to add info to earlier procs' messages
    Range sent_ents[MAX_SHARING_PROCS][MAX_SHARING_PROCS], allsent[MAX_SHARING_PROCS];

    //===========================================
    // Get entities to be sent to neighbors
    //===========================================
    TupleList entprocs[MAX_SHARING_PROCS];
    for( unsigned int p = 0; p < num_procs; p++ )
    {
        pc = pcs[p];
        result = pc->get_sent_ents( is_iface, bridge_dim, ghost_dim, num_layers, addl_ents, sent_ents[p],
                                    allsent[p], entprocs[p] );MB_CHK_SET_ERR( result, "p = " << p << ", get_sent_ents failed" );

        //===========================================
        // Pack entities into buffers
        //===========================================
        for( ind = 0; ind < pc->buffProcs.size(); ind++ )
        {
            // Entities
            pc->localOwnedBuffs[ind]->reset_ptr( sizeof( int ) );
            result = pc->pack_entities( sent_ents[p][ind], pc->localOwnedBuffs[ind], store_remote_handles,
                                        pc->buffProcs[ind], is_iface, &entprocs[p], &allsent[p] );MB_CHK_SET_ERR( result, "p = " << p << ", packing entities failed" );
        }

        entprocs[p].reset();
    }

    //===========================================
    // Receive/unpack new entities
    //===========================================
    // Number of incoming messages for ghosts is the number of procs we
    // communicate with; for iface, it's the number of those with lower rank
    std::vector< std::vector< EntityHandle > > L1hloc[MAX_SHARING_PROCS], L1hrem[MAX_SHARING_PROCS];
    std::vector< std::vector< int > > L1p[MAX_SHARING_PROCS];
    std::vector< EntityHandle > L2hloc[MAX_SHARING_PROCS], L2hrem[MAX_SHARING_PROCS];
    std::vector< unsigned int > L2p[MAX_SHARING_PROCS];
    std::vector< EntityHandle > new_ents[MAX_SHARING_PROCS];

    for( unsigned int p = 0; p < num_procs; p++ )
    {
        L1hloc[p].resize( pcs[p]->buffProcs.size() );
        L1hrem[p].resize( pcs[p]->buffProcs.size() );
        L1p[p].resize( pcs[p]->buffProcs.size() );
    }

    for( unsigned int p = 0; p < num_procs; p++ )
    {
        pc = pcs[p];

        for( ind = 0; ind < pc->buffProcs.size(); ind++ )
        {
            // Incoming ghost entities; unpack; returns entities received
            // both from sending proc and from owning proc (which may be different)

            // Buffer could be empty, which means there isn't any message to
            // unpack (due to this comm proc getting added as a result of indirect
            // communication); just skip this unpack
            if( pc->localOwnedBuffs[ind]->get_stored_size() == 0 ) continue;

            unsigned int to_p = pc->buffProcs[ind];
            pc->localOwnedBuffs[ind]->reset_ptr( sizeof( int ) );
            result = pcs[to_p]->unpack_entities( pc->localOwnedBuffs[ind]->buff_ptr, store_remote_handles, ind,
                                                 is_iface, L1hloc[to_p], L1hrem[to_p], L1p[to_p], L2hloc[to_p],
                                                 L2hrem[to_p], L2p[to_p], new_ents[to_p] );MB_CHK_SET_ERR( result, "p = " << p << ", failed to unpack entities" );
        }
    }

    if( is_iface )
    {
        // Need to check over entities I sent and make sure I received
        // handles for them from all expected procs; if not, need to clean
        // them up
        for( unsigned int p = 0; p < num_procs; p++ )
        {
            result = pcs[p]->check_clean_iface( allsent[p] );MB_CHK_SET_ERR( result, "p = " << p << ", failed to check on shared entities" );
        }

#ifndef NDEBUG
        for( unsigned int p = 0; p < num_procs; p++ )
        {
            result = pcs[p]->check_sent_ents( allsent[p] );MB_CHK_SET_ERR( result, "p = " << p << ", failed to check on shared entities" );
        }
        result = check_all_shared_handles( pcs, num_procs );MB_CHK_SET_ERR( result, "Failed to check on all shared handles" );
#endif

        return MB_SUCCESS;
    }

    //===========================================
    // Send local handles for new ghosts to owner, then add
    // those to ghost list for that owner
    //===========================================
    std::vector< unsigned int >::iterator proc_it;
    for( unsigned int p = 0; p < num_procs; p++ )
    {
        pc = pcs[p];

        for( ind = 0, proc_it = pc->buffProcs.begin(); proc_it != pc->buffProcs.end(); ++proc_it, ind++ )
        {
            // Skip if iface layer and higher-rank proc
            pc->localOwnedBuffs[ind]->reset_ptr( sizeof( int ) );
            result = pc->pack_remote_handles( L1hloc[p][ind], L1hrem[p][ind], L1p[p][ind], *proc_it,
                                              pc->localOwnedBuffs[ind] );MB_CHK_SET_ERR( result, "p = " << p << ", failed to pack remote handles" );
        }
    }

    //===========================================
    // Process remote handles of my ghosteds
    //===========================================
    for( unsigned int p = 0; p < num_procs; p++ )
    {
        pc = pcs[p];

        for( ind = 0, proc_it = pc->buffProcs.begin(); proc_it != pc->buffProcs.end(); ++proc_it, ind++ )
        {
            // Incoming remote handles
            unsigned int to_p = pc->buffProcs[ind];
            pc->localOwnedBuffs[ind]->reset_ptr( sizeof( int ) );
            result = pcs[to_p]->unpack_remote_handles( p, pc->localOwnedBuffs[ind]->buff_ptr, L2hloc[to_p],
                                                       L2hrem[to_p], L2p[to_p] );MB_CHK_SET_ERR( result, "p = " << p << ", failed to unpack remote handles" );
        }
    }

#ifndef NDEBUG
    for( unsigned int p = 0; p < num_procs; p++ )
    {
        result = pcs[p]->check_sent_ents( allsent[p] );MB_CHK_SET_ERR( result, "p = " << p << ", failed to check on shared entities" );
    }

    result = ParallelComm::check_all_shared_handles( pcs, num_procs );MB_CHK_SET_ERR( result, "Failed to check on all shared handles" );
#endif

    if( file_sets )
    {
        for( unsigned int p = 0; p < num_procs; p++ )
        {
            if( new_ents[p].empty() ) continue;
            result = pcs[p]->get_moab()->add_entities( file_sets[p], &new_ents[p][0], new_ents[p].size() );MB_CHK_SET_ERR( result, "p = " << p << ", failed to add new entities to set" );
        }
    }

    return MB_SUCCESS;
}
ErrorCode moab::ParallelComm::exchange_owned_mesh( std::vector< unsigned int >& exchange_procs,
                                                   std::vector< Range* >& exchange_ents,
                                                   std::vector< MPI_Request >& recv_ent_reqs,
                                                   std::vector< MPI_Request >& recv_remoteh_reqs,
                                                   const bool recv_posted,
                                                   bool store_remote_handles,
                                                   bool wait_all,
                                                   bool migrate = false )
Exchange owned mesh for input mesh entities and sets. This function is called twice by exchange_owned_meshs, to exchange entities before sets.
migrate | whether ownership of the entities is changed or not |
Definition at line 6912 of file ParallelComm.cpp.
References add_verts(), assign_entities_part(), buffProcs, check_sent_ents(), moab::Range::compactness(), moab::Range::empty(), moab::TupleList::enableWriteAccess(), ErrorCode, filter_pstatus(), get_buffers(), moab::TupleList::get_n(), moab::DebugOutput::get_verbosity(), moab::TupleList::inc_n(), INITIAL_BUFF_SIZE, moab::TupleList::initialize(), localOwnedBuffs, MAX_SHARING_PROCS, MB_CHK_SET_ERR, moab::MB_MESG_ENTS_SIZE, moab::MB_MESG_REMOTEH_SIZE, MB_SET_ERR, MB_SUCCESS, moab::Range::merge(), MPE_Log_event, moab::msgs, myDebug, pack_buffer(), pack_remote_handles(), print_buffer(), PRINT_DEBUG_IRECV, PRINT_DEBUG_RECD, PRINT_DEBUG_WAITANY, moab::ProcConfig::proc_comm(), moab::ProcConfig::proc_rank(), procConfig, PSTATUS_AND, PSTATUS_SHARED, recv_buffer(), remoteOwnedBuffs, remove_entities_part(), moab::TupleList::buffer::reset(), moab::TupleList::reset(), reset_all_buffers(), send_buffer(), sendReqs, moab::Range::size(), size(), moab::TupleList::sort(), moab::subtract(), moab::DebugOutput::tprintf(), unpack_buffer(), unpack_remote_handles(), moab::TupleList::vi_wr, and moab::TupleList::vul_wr.
Referenced by exchange_owned_meshs().
{
#ifdef MOAB_HAVE_MPE
    if( myDebug->get_verbosity() == 2 )
    {
        MPE_Log_event( OWNED_START, procConfig.proc_rank(), "Starting owned ents exchange." );
    }
#endif

    myDebug->tprintf( 1, "Entering exchange_owned_mesh\n" );
    if( myDebug->get_verbosity() == 4 )
    {
        msgs.clear();
        msgs.reserve( MAX_SHARING_PROCS );
    }
    unsigned int i;
    int ind, success;
    ErrorCode result = MB_SUCCESS;
    int incoming1 = 0, incoming2 = 0;

    // Set buffProcs with communicating procs
    unsigned int n_proc = exchange_procs.size();
    for( i = 0; i < n_proc; i++ )
    {
        ind = get_buffers( exchange_procs[i] );
        result = add_verts( *exchange_ents[i] );MB_CHK_SET_ERR( result, "Failed to add verts" );

        // Filter out entities already shared with destination
        Range tmp_range;
        result = filter_pstatus( *exchange_ents[i], PSTATUS_SHARED, PSTATUS_AND, buffProcs[ind], &tmp_range );MB_CHK_SET_ERR( result, "Failed to filter on owner" );
        if( !tmp_range.empty() )
        {
            *exchange_ents[i] = subtract( *exchange_ents[i], tmp_range );
        }
    }

    //===========================================
    // Post ghost irecv's for entities from all communicating procs
    //===========================================
#ifdef MOAB_HAVE_MPE
    if( myDebug->get_verbosity() == 2 )
    {
        MPE_Log_event( ENTITIES_START, procConfig.proc_rank(), "Starting entity exchange." );
    }
#endif

    // Index reqs the same as buffer/sharing procs indices
    if( !recv_posted )
    {
        reset_all_buffers();
        recv_ent_reqs.resize( 3 * buffProcs.size(), MPI_REQUEST_NULL );
        recv_remoteh_reqs.resize( 3 * buffProcs.size(), MPI_REQUEST_NULL );
        sendReqs.resize( 3 * buffProcs.size(), MPI_REQUEST_NULL );

        for( i = 0; i < n_proc; i++ )
        {
            ind = get_buffers( exchange_procs[i] );
            incoming1++;
            PRINT_DEBUG_IRECV( procConfig.proc_rank(), buffProcs[ind], remoteOwnedBuffs[ind]->mem_ptr,
                               INITIAL_BUFF_SIZE, MB_MESG_ENTS_SIZE, incoming1 );
            success = MPI_Irecv( remoteOwnedBuffs[ind]->mem_ptr, INITIAL_BUFF_SIZE, MPI_UNSIGNED_CHAR,
                                 buffProcs[ind], MB_MESG_ENTS_SIZE, procConfig.proc_comm(),
                                 &recv_ent_reqs[3 * ind] );
            if( success != MPI_SUCCESS )
            {
                MB_SET_ERR( MB_FAILURE, "Failed to post irecv in owned entity exchange" );
            }
        }
    }
    else
        incoming1 += n_proc;

    //===========================================
    // Get entities to be sent to neighbors
    // Need to get procs each entity is sent to
    //===========================================
    Range allsent, tmp_range;
    int dum_ack_buff;
    int npairs = 0;
    TupleList entprocs;
    for( i = 0; i < n_proc; i++ )
    {
        int n_ents = exchange_ents[i]->size();
        if( n_ents > 0 )
        {
            npairs += n_ents;  // Get the total # of proc/handle pairs
            allsent.merge( *exchange_ents[i] );
        }
    }

    // Allocate a TupleList of that size
    entprocs.initialize( 1, 0, 1, 0, npairs );
    entprocs.enableWriteAccess();

    // Put the proc/handle pairs in the list
    for( i = 0; i < n_proc; i++ )
    {
        for( Range::iterator rit = exchange_ents[i]->begin(); rit != exchange_ents[i]->end(); ++rit )
        {
            entprocs.vi_wr[entprocs.get_n()]  = exchange_procs[i];
            entprocs.vul_wr[entprocs.get_n()] = *rit;
            entprocs.inc_n();
        }
    }

    // Sort by handle
    moab::TupleList::buffer sort_buffer;
    sort_buffer.buffer_init( npairs );
    entprocs.sort( 1, &sort_buffer );
    sort_buffer.reset();

    myDebug->tprintf( 1, "allsent ents compactness (size) = %f (%lu)\n", allsent.compactness(),
                      (unsigned long)allsent.size() );

    //===========================================
    // Pack and send ents from this proc to others
    //===========================================
    for( i = 0; i < n_proc; i++ )
    {
        ind = get_buffers( exchange_procs[i] );
        myDebug->tprintf( 1, "Sent ents compactness (size) = %f (%lu)\n", exchange_ents[i]->compactness(),
                          (unsigned long)exchange_ents[i]->size() );

        // Reserve space on front for size and for initial buff size
        localOwnedBuffs[ind]->reset_buffer( sizeof( int ) );
        result = pack_buffer( *exchange_ents[i], false, true, store_remote_handles, buffProcs[ind],
                              localOwnedBuffs[ind], &entprocs, &allsent );

        if( myDebug->get_verbosity() == 4 )
        {
            msgs.resize( msgs.size() + 1 );
            msgs.back() = new Buffer( *localOwnedBuffs[ind] );
        }

        // Send the buffer (size stored in front in send_buffer)
        result = send_buffer( exchange_procs[i], localOwnedBuffs[ind], MB_MESG_ENTS_SIZE, sendReqs[3 * ind],
                              recv_ent_reqs[3 * ind + 2], &dum_ack_buff, incoming1, MB_MESG_REMOTEH_SIZE,
                              ( store_remote_handles ? localOwnedBuffs[ind] : NULL ),
                              &recv_remoteh_reqs[3 * ind], &incoming2 );MB_CHK_SET_ERR( result, "Failed to Isend in ghost exchange" );
    }

    entprocs.reset();

    //===========================================
    // Receive/unpack new entities
    //===========================================
    // Number of incoming messages is the number of procs we communicate with
    MPI_Status status;
    std::vector< std::vector< EntityHandle > > recd_ents( buffProcs.size() );
    std::vector< std::vector< EntityHandle > > L1hloc( buffProcs.size() ), L1hrem( buffProcs.size() );
    std::vector< std::vector< int > > L1p( buffProcs.size() );
    std::vector< EntityHandle > L2hloc, L2hrem;
    std::vector< unsigned int > L2p;
    std::vector< EntityHandle > new_ents;

    while( incoming1 )
    {
        // Wait for all recvs of ents before proceeding to sending remote handles,
        // b/c some procs may have sent to a 3rd proc ents owned by me;
        PRINT_DEBUG_WAITANY( recv_ent_reqs, MB_MESG_ENTS_SIZE, procConfig.proc_rank() );

        success = MPI_Waitany( 3 * buffProcs.size(), &recv_ent_reqs[0], &ind, &status );
        if( MPI_SUCCESS != success )
        {
            MB_SET_ERR( MB_FAILURE, "Failed in waitany in owned entity exchange" );
        }

        PRINT_DEBUG_RECD( status );

        // OK, received something; decrement incoming counter
        incoming1--;
        bool done = false;

        // In case ind is for ack, we need index of one before it
        unsigned int base_ind = 3 * ( ind / 3 );
        result = recv_buffer( MB_MESG_ENTS_SIZE, status, remoteOwnedBuffs[ind / 3], recv_ent_reqs[base_ind + 1],
                              recv_ent_reqs[base_ind + 2], incoming1, localOwnedBuffs[ind / 3],
                              sendReqs[base_ind + 1], sendReqs[base_ind + 2], done,
                              ( store_remote_handles ? localOwnedBuffs[ind / 3] : NULL ), MB_MESG_REMOTEH_SIZE,
                              &recv_remoteh_reqs[base_ind + 1], &incoming2 );MB_CHK_SET_ERR( result, "Failed to receive buffer" );

        if( done )
        {
            if( myDebug->get_verbosity() == 4 )
            {
                msgs.resize( msgs.size() + 1 );
                msgs.back() = new Buffer( *remoteOwnedBuffs[ind / 3] );
            }

            // Message completely received - process buffer that was sent
            remoteOwnedBuffs[ind / 3]->reset_ptr( sizeof( int ) );
            result = unpack_buffer( remoteOwnedBuffs[ind / 3]->buff_ptr, store_remote_handles, buffProcs[ind / 3],
                                    ind / 3, L1hloc, L1hrem, L1p, L2hloc, L2hrem, L2p, new_ents, true );
            if( MB_SUCCESS != result )
            {
                std::cout << "Failed to unpack entities. Buffer contents:" << std::endl;
                print_buffer( remoteOwnedBuffs[ind / 3]->mem_ptr, MB_MESG_ENTS_SIZE, buffProcs[ind / 3], false );
                return result;
            }

            if( recv_ent_reqs.size() != 3 * buffProcs.size() )
            {
                // Post irecv's for remote handles from new proc
                recv_remoteh_reqs.resize( 3 * buffProcs.size(), MPI_REQUEST_NULL );
                for( i = recv_ent_reqs.size(); i < 3 * buffProcs.size(); i += 3 )
                {
                    localOwnedBuffs[i / 3]->reset_buffer();
                    incoming2++;
                    PRINT_DEBUG_IRECV( procConfig.proc_rank(), buffProcs[i / 3], localOwnedBuffs[i / 3]->mem_ptr,
                                       INITIAL_BUFF_SIZE, MB_MESG_REMOTEH_SIZE, incoming2 );
                    success = MPI_Irecv( localOwnedBuffs[i / 3]->mem_ptr, INITIAL_BUFF_SIZE, MPI_UNSIGNED_CHAR,
                                         buffProcs[i / 3], MB_MESG_REMOTEH_SIZE, procConfig.proc_comm(),
                                         &recv_remoteh_reqs[i] );
                    if( success != MPI_SUCCESS )
                    {
                        MB_SET_ERR( MB_FAILURE, "Failed to post irecv for remote handles in ghost exchange" );
                    }
                }
                recv_ent_reqs.resize( 3 * buffProcs.size(), MPI_REQUEST_NULL );
                sendReqs.resize( 3 * buffProcs.size(), MPI_REQUEST_NULL );
            }
        }
    }

    // Assign and remove newly created elements from/to receive processor
    result = assign_entities_part( new_ents, procConfig.proc_rank() );MB_CHK_SET_ERR( result, "Failed to assign entities to part" );
    if( migrate )
    {
        result = remove_entities_part( allsent, procConfig.proc_rank() );MB_CHK_SET_ERR( result, "Failed to remove entities to part" );
    }

    // Add requests for any new addl procs
    if( recv_ent_reqs.size() != 3 * buffProcs.size() )
    {
        // Shouldn't get here...
        MB_SET_ERR( MB_FAILURE, "Requests length doesn't match proc count in entity exchange" );
    }

#ifdef MOAB_HAVE_MPE
    if( myDebug->get_verbosity() == 2 )
    {
        MPE_Log_event( ENTITIES_END, procConfig.proc_rank(), "Ending entity exchange." );
    }
#endif

    // we still need to wait on sendReqs, if they are not fulfilled yet
    if( wait_all )
    {
        if( myDebug->get_verbosity() == 5 )
        {
            success = MPI_Barrier( procConfig.proc_comm() );
        }
        else
        {
            MPI_Status mult_status[3 * MAX_SHARING_PROCS];
            success = MPI_Waitall( 3 * buffProcs.size(), &sendReqs[0], mult_status );
            if( MPI_SUCCESS != success )
            {
                MB_SET_ERR( MB_FAILURE, "Failed in waitall in exchange owned mesh" );
            }
        }
    }

    //===========================================
    // Send local handles for new entity to owner
    //===========================================
    for( i = 0; i < n_proc; i++ )
    {
        ind = get_buffers( exchange_procs[i] );
        // Reserve space on front for size and for initial buff size
        remoteOwnedBuffs[ind]->reset_buffer( sizeof( int ) );

        result = pack_remote_handles( L1hloc[ind], L1hrem[ind], L1p[ind], buffProcs[ind], remoteOwnedBuffs[ind] );MB_CHK_SET_ERR( result, "Failed to pack remote handles" );
        remoteOwnedBuffs[ind]->set_stored_size();

        if( myDebug->get_verbosity() == 4 )
        {
            msgs.resize( msgs.size() + 1 );
            msgs.back() = new Buffer( *remoteOwnedBuffs[ind] );
        }
        result = send_buffer( buffProcs[ind], remoteOwnedBuffs[ind], MB_MESG_REMOTEH_SIZE, sendReqs[3 * ind],
                              recv_remoteh_reqs[3 * ind + 2], &dum_ack_buff, incoming2 );MB_CHK_SET_ERR( result, "Failed to send remote handles" );
    }

    //===========================================
    // Process remote handles of my ghosteds
    //===========================================
    while( incoming2 )
    {
        PRINT_DEBUG_WAITANY( recv_remoteh_reqs, MB_MESG_REMOTEH_SIZE, procConfig.proc_rank() );
        success = MPI_Waitany( 3 * buffProcs.size(), &recv_remoteh_reqs[0], &ind, &status );
        if( MPI_SUCCESS != success )
        {
            MB_SET_ERR( MB_FAILURE, "Failed in waitany in owned entity exchange" );
        }

        // OK, received something; decrement incoming counter
        incoming2--;

        PRINT_DEBUG_RECD( status );

        bool done = false;
        unsigned int base_ind = 3 * ( ind / 3 );
        result = recv_buffer( MB_MESG_REMOTEH_SIZE, status, localOwnedBuffs[ind / 3], recv_remoteh_reqs[base_ind + 1],
                              recv_remoteh_reqs[base_ind + 2], incoming2, remoteOwnedBuffs[ind / 3],
                              sendReqs[base_ind + 1], sendReqs[base_ind + 2], done );MB_CHK_SET_ERR( result, "Failed to receive remote handles" );

        if( done )
        {
            // Incoming remote handles
            if( myDebug->get_verbosity() == 4 )
            {
                msgs.resize( msgs.size() + 1 );
                msgs.back() = new Buffer( *localOwnedBuffs[ind / 3] );
            }

            localOwnedBuffs[ind / 3]->reset_ptr( sizeof( int ) );
            result = unpack_remote_handles( buffProcs[ind / 3], localOwnedBuffs[ind / 3]->buff_ptr, L2hloc, L2hrem,
                                            L2p );MB_CHK_SET_ERR( result, "Failed to unpack remote handles" );
        }
    }

#ifdef MOAB_HAVE_MPE
    if( myDebug->get_verbosity() == 2 )
    {
        MPE_Log_event( RHANDLES_END, procConfig.proc_rank(), "Ending remote handles." );
        MPE_Log_event( OWNED_END, procConfig.proc_rank(), "Ending ghost exchange (still doing checks)." );
    }
#endif

    //===========================================
    // Wait if requested
    //===========================================
    if( wait_all )
    {
        if( myDebug->get_verbosity() == 5 )
        {
            success = MPI_Barrier( procConfig.proc_comm() );
        }
        else
        {
            MPI_Status mult_status[3 * MAX_SHARING_PROCS];
            success = MPI_Waitall( 3 * buffProcs.size(), &recv_remoteh_reqs[0], mult_status );
            if( MPI_SUCCESS == success ) success = MPI_Waitall( 3 * buffProcs.size(), &sendReqs[0], mult_status );
        }
        if( MPI_SUCCESS != success )
        {
            MB_SET_ERR( MB_FAILURE, "Failed in waitall in owned entity exchange" );
        }
    }

#ifndef NDEBUG
    result = check_sent_ents( allsent );MB_CHK_SET_ERR( result, "Failed check on shared entities" );
#endif
    myDebug->tprintf( 1, "Exiting exchange_owned_mesh\n" );

    return MB_SUCCESS;
}
ErrorCode moab::ParallelComm::exchange_owned_meshs( std::vector< unsigned int >& exchange_procs,
                                                    std::vector< Range* >& exchange_ents,
                                                    std::vector< MPI_Request >& recv_ent_reqs,
                                                    std::vector< MPI_Request >& recv_remoteh_reqs,
                                                    bool store_remote_handles,
                                                    bool wait_all = true,
                                                    bool migrate = false,
                                                    int dim = 0 )
Exchange owned mesh for input mesh entities and sets. This function should be called collectively over the communicator for this ParallelComm. If this version is called, all shared exchanged entities should have a value for the relevant tag (or the tag should have a default value).
exchange_procs | processors with which entities are exchanged |
exchange_ents | exchanged entities for each processor |
migrate | whether ownership of the entities is changed or not |
Definition at line 6842 of file ParallelComm.cpp.
References create_interface_sets(), moab::Interface::dimension_from_handle(), ErrorCode, exchange_owned_mesh(), get_sharing_data(), MAX_SHARING_PROCS, MB_CHK_SET_ERR, MB_SUCCESS, MBENTITYSET, mbImpl, recvRemotehReqs, recvReqs, sharedEnts, moab::Range::subset_by_type(), and moab::subtract().
Referenced by iMeshP_exchEntArrToPartsAll().
{ // Filter out entities already shared with destination // Exchange twice for entities and sets ErrorCode result; std::vector< unsigned int > exchange_procs_sets; std::vector< Range* > exchange_sets; int n_proc = exchange_procs.size(); for( int i = 0; i < n_proc; i++ ) { Range set_range = exchange_ents[i]->subset_by_type( MBENTITYSET ); *exchange_ents[i] = subtract( *exchange_ents[i], set_range ); Range* tmp_range = new Range( set_range ); exchange_sets.push_back( tmp_range ); exchange_procs_sets.push_back( exchange_procs[i] ); } if( dim == 2 ) { // Exchange entities first result = exchange_owned_mesh( exchange_procs, exchange_ents, recvReqs, recvRemotehReqs, true, store_remote_handles, wait_all, migrate );MB_CHK_SET_ERR( result, "Failed to exchange owned mesh entities" ); // Exchange sets result = exchange_owned_mesh( exchange_procs_sets, exchange_sets, recvReqs, recvRemotehReqs, false, store_remote_handles, wait_all, migrate ); } else { // Exchange entities first result = exchange_owned_mesh( exchange_procs, exchange_ents, recv_ent_reqs, recv_remoteh_reqs, false, store_remote_handles, wait_all, migrate );MB_CHK_SET_ERR( result, "Failed to exchange owned mesh entities" ); // Exchange sets result = exchange_owned_mesh( exchange_procs_sets, exchange_sets, recv_ent_reqs, recv_remoteh_reqs, false, store_remote_handles, wait_all, migrate );MB_CHK_SET_ERR( result, "Failed to exchange owned mesh sets" ); } for( int i = 0; i < n_proc; i++ ) delete exchange_sets[i]; // Build up the list of shared entities std::map< std::vector< int >, std::vector< EntityHandle > > proc_nvecs; int procs[MAX_SHARING_PROCS]; EntityHandle handles[MAX_SHARING_PROCS]; int nprocs; unsigned char pstat; for( std::set< EntityHandle >::iterator vit = sharedEnts.begin(); vit != sharedEnts.end(); ++vit ) { if( mbImpl->dimension_from_handle( *vit ) > 2 ) continue; result = get_sharing_data( *vit, procs, handles, pstat, nprocs );MB_CHK_SET_ERR( result, "Failed to get sharing data in exchange_owned_meshs" ); std::sort( procs, procs + nprocs ); std::vector< int > tmp_procs( procs, procs + nprocs ); assert( tmp_procs.size() != 2 ); proc_nvecs[tmp_procs].push_back( *vit ); } // Create interface sets from shared entities result = create_interface_sets( proc_nvecs );MB_CHK_SET_ERR( result, "Failed to create interface sets" ); return MB_SUCCESS; }
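As a usage illustration, the sketch below migrates this rank's owned 3D elements to a single destination rank. The destination choice and the entity selection are hypothetical, and this is only a sketch of one plausible calling pattern; the request vectors are filled in by the call itself.

#include "moab/Core.hpp"
#include "moab/ParallelComm.hpp"
#include "MBParallelConventions.h"
#include <vector>

using namespace moab;

ErrorCode migrate_to_rank( Interface& mb, ParallelComm& pcomm, int dest_rank )
{
    // Select the 3D elements this rank owns (drop any not-owned copies)
    Range owned3d;
    ErrorCode rval = mb.get_entities_by_dimension( 0, 3, owned3d );
    if( MB_SUCCESS != rval ) return rval;
    rval = pcomm.filter_pstatus( owned3d, PSTATUS_NOT_OWNED, PSTATUS_NOT );
    if( MB_SUCCESS != rval ) return rval;

    // One destination proc, one range of entities to ship to it
    std::vector< unsigned int > exchange_procs( 1, dest_rank );
    std::vector< Range* > exchange_ents( 1, &owned3d );
    std::vector< MPI_Request > recv_ent_reqs, recv_remoteh_reqs;

    // store_remote_handles = true, wait_all = true, migrate = true (ownership moves)
    return pcomm.exchange_owned_meshs( exchange_procs, exchange_ents, recv_ent_reqs,
                                       recv_remoteh_reqs, true, true, true );
}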
ErrorCode moab::ParallelComm::exchange_tags( const std::vector< Tag >& src_tags,
                                             const std::vector< Tag >& dst_tags,
                                             const Range& entities )
Exchange tags for all shared and ghosted entities. This function should be called collectively over the communicator for this ParallelComm. If this version is called, all ghosted/shared entities should have a value for this tag (or the tag should have a default value). If the entities vector is empty, all shared entities participate in the exchange. If a proc has no owned entities, this function must still be called since it is collective.
src_tags | Vector of tag handles to be exchanged |
dst_tags | Tag handles to store the tags on the non-owning procs |
entities | Entities for which tags are exchanged |
Definition at line 7526 of file ParallelComm.cpp.
References buffProcs, moab::Range::empty(), entities, ErrorCode, filter_pstatus(), get_comm_procs(), moab::Interface::get_entities_by_type_and_tag(), moab::DebugOutput::get_verbosity(), INITIAL_BUFF_SIZE, moab::Interface::INTERSECT, moab::intersect(), localOwnedBuffs, MAX_SHARING_PROCS, MB_CHK_SET_ERR, moab::MB_MESG_TAGS_SIZE, MB_SET_ERR, MB_SUCCESS, mbImpl, MBMAXTYPE, myDebug, pack_tags(), PRINT_DEBUG_IRECV, PRINT_DEBUG_RECD, PRINT_DEBUG_WAITANY, moab::ProcConfig::proc_comm(), moab::ProcConfig::proc_rank(), procConfig, PSTATUS_AND, PSTATUS_NOT, PSTATUS_NOT_OWNED, PSTATUS_SHARED, recv_buffer(), remoteOwnedBuffs, reset_all_buffers(), send_buffer(), sendReqs, sharedEnts, moab::Range::size(), moab::Interface::tag_get_bytes(), moab::Interface::tag_get_data(), moab::Interface::tag_get_default_value(), moab::Interface::tag_set_data(), moab::DebugOutput::tprintf(), and unpack_tags().
Referenced by assign_global_ids(), create_parallel_mesh(), moab::WriteHDF5Parallel::exchange_file_ids(), moab::NestedRefine::exchange_ghosts(), exchange_tags(), iMeshP_pushTags(), iMeshP_pushTagsEnt(), iMOAB_SynchronizeTags(), main(), perform_laplacian_smoothing(), perform_lloyd_relaxation(), moab::LloydSmoother::perform_smooth(), regression_ghost_tag_exchange_no_default(), test_ghost_elements(), and test_ghost_tag_exchange().
{
    ErrorCode result;
    int success;

    myDebug->tprintf( 1, "Entering exchange_tags\n" );

    // Get all procs interfacing to this proc
    std::set< unsigned int > exch_procs;
    result = get_comm_procs( exch_procs );

    // Post ghost irecv's for all interface procs
    // Index requests the same as buffer/sharing procs indices
    std::vector< MPI_Request > recv_tag_reqs( 3 * buffProcs.size(), MPI_REQUEST_NULL );
    // sent_ack_reqs(buffProcs.size(), MPI_REQUEST_NULL);
    std::vector< unsigned int >::iterator sit;
    int ind;

    reset_all_buffers();
    int incoming = 0;

    for( ind = 0, sit = buffProcs.begin(); sit != buffProcs.end(); ++sit, ind++ )
    {
        incoming++;
        PRINT_DEBUG_IRECV( *sit, procConfig.proc_rank(), remoteOwnedBuffs[ind]->mem_ptr, INITIAL_BUFF_SIZE,
                           MB_MESG_TAGS_SIZE, incoming );

        success = MPI_Irecv( remoteOwnedBuffs[ind]->mem_ptr, INITIAL_BUFF_SIZE, MPI_UNSIGNED_CHAR, *sit,
                             MB_MESG_TAGS_SIZE, procConfig.proc_comm(), &recv_tag_reqs[3 * ind] );
        if( success != MPI_SUCCESS )
        {
            MB_SET_ERR( MB_FAILURE, "Failed to post irecv in ghost exchange" );
        }
    }

    // Pack and send tags from this proc to others
    // Make sendReqs vector to simplify initialization
    sendReqs.resize( 3 * buffProcs.size(), MPI_REQUEST_NULL );

    // Take all shared entities if incoming list is empty
    Range entities;
    if( entities_in.empty() )
        std::copy( sharedEnts.begin(), sharedEnts.end(), range_inserter( entities ) );
    else
        entities = entities_in;

    int dum_ack_buff;

    for( ind = 0, sit = buffProcs.begin(); sit != buffProcs.end(); ++sit, ind++ )
    {
        Range tag_ents = entities;

        // Get ents shared by proc *sit
        result = filter_pstatus( tag_ents, PSTATUS_SHARED, PSTATUS_AND, *sit );MB_CHK_SET_ERR( result, "Failed pstatus AND check" );

        // Remote nonowned entities
        if( !tag_ents.empty() )
        {
            result = filter_pstatus( tag_ents, PSTATUS_NOT_OWNED, PSTATUS_NOT );MB_CHK_SET_ERR( result, "Failed pstatus NOT check" );
        }

        // Pack-send; this also posts receives if store_remote_handles is true
        std::vector< Range > tag_ranges;
        for( std::vector< Tag >::const_iterator vit = src_tags.begin(); vit != src_tags.end(); ++vit )
        {
            const void* ptr;
            int sz;
            if( mbImpl->tag_get_default_value( *vit, ptr, sz ) != MB_SUCCESS )
            {
                Range tagged_ents;
                mbImpl->get_entities_by_type_and_tag( 0, MBMAXTYPE, &*vit, 0, 1, tagged_ents );
                tag_ranges.push_back( intersect( tag_ents, tagged_ents ) );
            }
            else
            {
                tag_ranges.push_back( tag_ents );
            }
        }

        // Pack the data
        // Reserve space on front for size and for initial buff size
        localOwnedBuffs[ind]->reset_ptr( sizeof( int ) );

        result = pack_tags( tag_ents, src_tags, dst_tags, tag_ranges, localOwnedBuffs[ind], true, *sit );MB_CHK_SET_ERR( result, "Failed to count buffer in pack_send_tag" );

        // Now send it
        result = send_buffer( *sit, localOwnedBuffs[ind], MB_MESG_TAGS_SIZE, sendReqs[3 * ind],
                              recv_tag_reqs[3 * ind + 2], &dum_ack_buff, incoming );MB_CHK_SET_ERR( result, "Failed to send buffer" );
    }

    // Receive/unpack tags
    while( incoming )
    {
        MPI_Status status;
        int index_in_recv_requests;
        PRINT_DEBUG_WAITANY( recv_tag_reqs, MB_MESG_TAGS_SIZE, procConfig.proc_rank() );
        success = MPI_Waitany( 3 * buffProcs.size(), &recv_tag_reqs[0], &index_in_recv_requests, &status );
        if( MPI_SUCCESS != success )
        {
            MB_SET_ERR( MB_FAILURE, "Failed in waitany in tag exchange" );
        }
        // Processor index in the list is divided by 3
        ind = index_in_recv_requests / 3;

        PRINT_DEBUG_RECD( status );

        // OK, received something; decrement incoming counter
        incoming--;

        bool done = false;
        std::vector< EntityHandle > dum_vec;
        result = recv_buffer( MB_MESG_TAGS_SIZE, status, remoteOwnedBuffs[ind],
                              recv_tag_reqs[3 * ind + 1],  // This is for receiving the second message
                              recv_tag_reqs[3 * ind + 2],  // This would be for ack, but it is not
                                                           // used; consider removing it
                              incoming, localOwnedBuffs[ind],
                              sendReqs[3 * ind + 1],  // Send request for sending the second message
                              sendReqs[3 * ind + 2],  // This is for sending the ack
                              done );MB_CHK_SET_ERR( result, "Failed to resize recv buffer" );
        if( done )
        {
            remoteOwnedBuffs[ind]->reset_ptr( sizeof( int ) );
            result = unpack_tags( remoteOwnedBuffs[ind]->buff_ptr, dum_vec, true, buffProcs[ind] );MB_CHK_SET_ERR( result, "Failed to recv-unpack-tag message" );
        }
    }

    // OK, now wait
    if( myDebug->get_verbosity() == 5 )
    {
        success = MPI_Barrier( procConfig.proc_comm() );
    }
    else
    {
        MPI_Status status[3 * MAX_SHARING_PROCS];
        success = MPI_Waitall( 3 * buffProcs.size(), &sendReqs[0], status );
    }
    if( MPI_SUCCESS != success )
    {
        MB_SET_ERR( MB_FAILURE, "Failure in waitall in tag exchange" );
    }

    // If source tag is not equal to destination tag, then
    // do local copy for owned entities (communicate w/ self)
    assert( src_tags.size() == dst_tags.size() );
    if( src_tags != dst_tags )
    {
        std::vector< unsigned char > data;
        Range owned_ents;
        if( entities_in.empty() )
            std::copy( sharedEnts.begin(), sharedEnts.end(), range_inserter( entities ) );
        else
            owned_ents = entities_in;
        result = filter_pstatus( owned_ents, PSTATUS_NOT_OWNED, PSTATUS_NOT );MB_CHK_SET_ERR( result, "Failure to get subset of owned entities" );

        if( !owned_ents.empty() )
        {  // Check this here, otherwise we get
            // Unexpected results from get_entities_by_type_and_tag w/ Interface::INTERSECT
            for( size_t i = 0; i < src_tags.size(); i++ )
            {
                if( src_tags[i] == dst_tags[i] ) continue;

                Range tagged_ents( owned_ents );
                result = mbImpl->get_entities_by_type_and_tag( 0, MBMAXTYPE, &src_tags[0], 0, 1, tagged_ents,
                                                               Interface::INTERSECT );MB_CHK_SET_ERR( result, "get_entities_by_type_and_tag(type == MBMAXTYPE) failed" );

                int sz, size2;
                result = mbImpl->tag_get_bytes( src_tags[i], sz );MB_CHK_SET_ERR( result, "tag_get_size failed" );
                result = mbImpl->tag_get_bytes( dst_tags[i], size2 );MB_CHK_SET_ERR( result, "tag_get_size failed" );
                if( sz != size2 )
                {
                    MB_SET_ERR( MB_FAILURE, "tag sizes don't match" );
                }

                data.resize( sz * tagged_ents.size() );
                result = mbImpl->tag_get_data( src_tags[i], tagged_ents, &data[0] );MB_CHK_SET_ERR( result, "tag_get_data failed" );
                result = mbImpl->tag_set_data( dst_tags[i], tagged_ents, &data[0] );MB_CHK_SET_ERR( result, "tag_set_data failed" );
            }
        }
    }

    myDebug->tprintf( 1, "Exiting exchange_tags" );

    return MB_SUCCESS;
}
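A short sketch of a typical call follows. The tag name "TEMPERATURE" and its type are assumptions for illustration; any defined tag works. Passing an empty range makes all shared/ghosted entities participate, and every rank must make the call since it is collective.

#include "moab/Core.hpp"
#include "moab/ParallelComm.hpp"
#include <vector>

using namespace moab;

ErrorCode push_temperature( Interface& mb, ParallelComm& pcomm )
{
    // Create (or look up) a dense double tag with a default value
    Tag temp_tag;
    double def_val = 0.0;
    ErrorCode rval = mb.tag_get_handle( "TEMPERATURE", 1, MB_TYPE_DOUBLE, temp_tag,
                                        MB_TAG_DENSE | MB_TAG_CREAT, &def_val );
    if( MB_SUCCESS != rval ) return rval;

    // ... application sets temp_tag values on owned entities here ...

    // Empty range => all shared entities participate in the exchange
    Range empty;
    std::vector< Tag > tags( 1, temp_tag );
    return pcomm.exchange_tags( tags, tags, empty );
}

Giving the tag a default value matters: as the implementation above shows, tags without a default are only packed for entities that actually carry a value.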
ErrorCode moab::ParallelComm::exchange_tags( const char* tag_name,
                                             const Range& entities )  [inline]
Exchange tags for all shared and ghosted entities. This function should be called collectively over the communicator for this ParallelComm. If the entities vector is empty, all shared entities participate in the exchange. If a proc has no owned entities, this function must still be called since it is collective.
tag_name | Name of tag to be exchanged |
entities | Entities for which tags are exchanged |
Definition at line 1589 of file ParallelComm.hpp.
References ErrorCode, exchange_tags(), MB_SUCCESS, MB_TAG_ANY, MB_TAG_NOT_FOUND, MB_TYPE_OPAQUE, mbImpl, and moab::Interface::tag_get_handle().
{
    // get the tag handle
    std::vector< Tag > tags( 1 );
    ErrorCode result = mbImpl->tag_get_handle( tag_name, 0, MB_TYPE_OPAQUE, tags[0], MB_TAG_ANY );
    if( MB_SUCCESS != result )
        return result;
    else if( !tags[0] )
        return MB_TAG_NOT_FOUND;

    return exchange_tags( tags, tags, entities );
}
ErrorCode moab::ParallelComm::exchange_tags( Tag tagh,
                                             const Range& entities )  [inline]
Exchange tags for all shared and ghosted entities. This function should be called collectively over the communicator for this ParallelComm. If the entities vector is empty, all shared entities participate in the exchange. If a proc has no owned entities, this function must still be called since it is collective.
tagh | Handle of tag to be exchanged |
entities | Entities for which tags are exchanged |
Definition at line 1602 of file ParallelComm.hpp.
References exchange_tags().
{
    // get the tag handle
    std::vector< Tag > tags;
    tags.push_back( tagh );

    return exchange_tags( tags, tags, entities );
}
ErrorCode moab::ParallelComm::filter_pstatus( Range& ents,
                                              const unsigned char pstatus_val,
                                              const unsigned char op,
                                              int to_proc = -1,
                                              Range* returned_ents = NULL )
Filter the entities by pstatus tag. op is one of PSTATUS_AND, PSTATUS_OR, or PSTATUS_NOT; an entity is output if:
AND: all bits set in pstatus_val are also set on the entity
OR: any bit set in pstatus_val is also set on the entity
NOT: no bit set in pstatus_val is set on the entity

Results are returned in the input list, unless returned_ents is passed in non-null, in which case results are returned in returned_ents.

If ents is passed in empty, the filter is done on the shared entities in this pcomm instance, i.e. the contents of sharedEnts.
ents | Input entities to filter |
pstatus_val | pstatus value to which entities are compared |
op | Bitwise operation performed between pstatus values |
to_proc | If non-negative and PSTATUS_SHARED is set on pstatus_val, only entities shared with to_proc are returned |
returned_ents | If non-null, results of the filter are put in the pointed-to range |
Definition at line 5577 of file ParallelComm.cpp.
References moab::Range::begin(), moab::Range::clear(), moab::Range::empty(), moab::Range::end(), ErrorCode, moab::Range::insert(), MAX_SHARING_PROCS, MB_CHK_SET_ERR, MB_SUCCESS, mbImpl, PSTATUS_AND, PSTATUS_MULTISHARED, PSTATUS_NOT, PSTATUS_OR, PSTATUS_SHARED, pstatus_tag(), sharedp_tag(), sharedps_tag(), moab::Range::size(), moab::Range::swap(), and moab::Interface::tag_get_data().
Referenced by closedsurface_uref_hirec_convergence_study(), moab::NCWriteGCRM::collect_mesh_info(), moab::NCWriteHOMME::collect_mesh_info(), moab::NCWriteMPAS::collect_mesh_info(), moab::ScdNCWriteHelper::collect_mesh_info(), count_owned_entities(), create_fine_mesh(), moab::ScdNCHelper::create_quad_coordinate_tag(), moab::WriteHDF5Parallel::exchange_file_ids(), exchange_owned_mesh(), exchange_tags(), moab::WriteHDF5Parallel::gather_interface_meshes(), gather_one_cell_var(), get_ghosted_entities(), get_max_volume(), get_sent_ents(), get_shared_entities(), hcFilter(), iMOAB_UpdateMeshInfo(), moab::HiReconstruction::initialize(), moab::HalfFacetRep::initialize(), moab::LloydSmoother::initialize(), laplacianFilter(), moab::ReadParallel::load_file(), main(), perform_laplacian_smoothing(), perform_lloyd_relaxation(), moab::LloydSmoother::perform_smooth(), read_mesh_parallel(), moab::ScdNCHelper::read_scd_variables_to_nonset_allocate(), moab::NCHelperMPAS::read_ucd_variables_to_nonset_allocate(), moab::NCHelperGCRM::read_ucd_variables_to_nonset_allocate(), reduce_tags(), refine_entities(), resolve_shared_sets(), send_entities(), settle_intersection_points(), test_gather_onevar(), and test_read_parallel().
{ Range tmp_ents; // assert(!ents.empty()); if( ents.empty() ) { if( returned_ents ) returned_ents->clear(); return MB_SUCCESS; } // Put into tmp_ents any entities which are not owned locally or // who are already shared with to_proc std::vector< unsigned char > shared_flags( ents.size() ), shared_flags2; ErrorCode result = mbImpl->tag_get_data( pstatus_tag(), ents, &shared_flags[0] );MB_CHK_SET_ERR( result, "Failed to get pstatus flag" ); Range::const_iterator rit, hint = tmp_ents.begin(); ; int i; if( op == PSTATUS_OR ) { for( rit = ents.begin(), i = 0; rit != ents.end(); ++rit, i++ ) { if( ( ( shared_flags[i] & ~pstat ) ^ shared_flags[i] ) & pstat ) { hint = tmp_ents.insert( hint, *rit ); if( -1 != to_proc ) shared_flags2.push_back( shared_flags[i] ); } } } else if( op == PSTATUS_AND ) { for( rit = ents.begin(), i = 0; rit != ents.end(); ++rit, i++ ) { if( ( shared_flags[i] & pstat ) == pstat ) { hint = tmp_ents.insert( hint, *rit ); if( -1 != to_proc ) shared_flags2.push_back( shared_flags[i] ); } } } else if( op == PSTATUS_NOT ) { for( rit = ents.begin(), i = 0; rit != ents.end(); ++rit, i++ ) { if( !( shared_flags[i] & pstat ) ) { hint = tmp_ents.insert( hint, *rit ); if( -1 != to_proc ) shared_flags2.push_back( shared_flags[i] ); } } } else { assert( false ); return MB_FAILURE; } if( -1 != to_proc ) { int sharing_procs[MAX_SHARING_PROCS]; std::fill( sharing_procs, sharing_procs + MAX_SHARING_PROCS, -1 ); Range tmp_ents2; hint = tmp_ents2.begin(); for( rit = tmp_ents.begin(), i = 0; rit != tmp_ents.end(); ++rit, i++ ) { // We need to check sharing procs if( shared_flags2[i] & PSTATUS_MULTISHARED ) { result = mbImpl->tag_get_data( sharedps_tag(), &( *rit ), 1, sharing_procs );MB_CHK_SET_ERR( result, "Failed to get sharedps tag" ); assert( -1 != sharing_procs[0] ); for( unsigned int j = 0; j < MAX_SHARING_PROCS; j++ ) { // If to_proc shares this entity, add it to list if( sharing_procs[j] == to_proc ) { hint = tmp_ents2.insert( hint, *rit ); } else if( -1 == sharing_procs[j] ) break; sharing_procs[j] = -1; } } else if( shared_flags2[i] & PSTATUS_SHARED ) { result = mbImpl->tag_get_data( sharedp_tag(), &( *rit ), 1, sharing_procs );MB_CHK_SET_ERR( result, "Failed to get sharedp tag" ); assert( -1 != sharing_procs[0] ); if( sharing_procs[0] == to_proc ) hint = tmp_ents2.insert( hint, *rit ); sharing_procs[0] = -1; } else assert( "should never get here" && false ); } tmp_ents.swap( tmp_ents2 ); } if( returned_ents ) returned_ents->swap( tmp_ents ); else ents.swap( tmp_ents ); return MB_SUCCESS; }
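The following sketch, under the assumption of a mesh whose interface faces are 2-dimensional, combines two filter passes: an AND pass restricted to a given neighbor rank, then a NOT pass to keep only owned entities. The function and variable names are illustrative.

#include "moab/Core.hpp"
#include "moab/ParallelComm.hpp"
#include "MBParallelConventions.h"

using namespace moab;

ErrorCode owned_faces_shared_with( Interface& mb, ParallelComm& pcomm, int neighbor, Range& result )
{
    Range faces;
    ErrorCode rval = mb.get_entities_by_dimension( 0, 2, faces );
    if( MB_SUCCESS != rval ) return rval;

    // AND + to_proc: keep faces whose pstatus has the SHARED bit set,
    // restricted to those shared with 'neighbor'
    Range shared_with_neighbor;
    rval = pcomm.filter_pstatus( faces, PSTATUS_SHARED, PSTATUS_AND, neighbor, &shared_with_neighbor );
    if( MB_SUCCESS != rval ) return rval;

    // NOT: of those, keep only the ones this rank owns
    return pcomm.filter_pstatus( shared_with_neighbor, PSTATUS_NOT_OWNED, PSTATUS_NOT, -1, &result );
}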
ErrorCode moab::ParallelComm::find_existing_entity( const bool is_iface,
                                                    const int owner_p,
                                                    const EntityHandle owner_h,
                                                    const int num_ps,
                                                    const EntityHandle* connect,
                                                    const int num_connect,
                                                    const EntityType this_type,
                                                    std::vector< EntityHandle >& L2hloc,
                                                    std::vector< EntityHandle >& L2hrem,
                                                    std::vector< unsigned int >& L2p,
                                                    EntityHandle& new_h )  [private]
Given connectivity and type, find an existing entity, if there is one.
Definition at line 3047 of file ParallelComm.cpp.
References moab::Range::begin(), moab::CN::Dimension(), moab::Range::empty(), ErrorCode, moab::Interface::get_adjacencies(), MB_CHK_SET_ERR, MB_SUCCESS, mbImpl, and MBVERTEX.
Referenced by unpack_entities(), and unpack_remote_handles().
{
    new_h = 0;
    if( !is_iface && num_ps > 2 )
    {
        for( unsigned int i = 0; i < L2hrem.size(); i++ )
        {
            if( L2hrem[i] == owner_h && owner_p == (int)L2p[i] )
            {
                new_h = L2hloc[i];
                return MB_SUCCESS;
            }
        }
    }

    // If we got here and it's a vertex, we don't need to look further
    if( MBVERTEX == this_type || !connect || !num_connect ) return MB_SUCCESS;

    Range tmp_range;
    ErrorCode result = mbImpl->get_adjacencies( connect, num_connect, CN::Dimension( this_type ), false, tmp_range );MB_CHK_SET_ERR( result, "Failed to get existing entity" );
    if( !tmp_range.empty() )
    {
        // Found a corresponding entity - return target
        new_h = *tmp_range.begin();
    }
    else
    {
        new_h = 0;
    }

    return MB_SUCCESS;
}
ErrorCode moab::ParallelComm::gather_data( Range& gather_ents,
                                           Tag& tag_handle,
                                           Tag id_tag = 0,
                                           EntityHandle gather_set = 0,
                                           int root_proc_rank = 0 )
Definition at line 8914 of file ParallelComm.cpp.
References moab::Range::begin(), comm(), dim, moab::Interface::dimension_from_handle(), moab::Range::end(), ErrorCode, moab::Interface::get_entities_by_dimension(), MB_SUCCESS, mbImpl, proc_config(), moab::ProcConfig::proc_size(), moab::Range::psize(), rank(), moab::Range::size(), size(), moab::Interface::tag_get_bytes(), moab::Interface::tag_get_data(), and moab::Interface::tag_iterate().
Referenced by gather_one_cell_var(), and test_gather_onevar().
{ int dim = mbImpl->dimension_from_handle( *gather_ents.begin() ); int bytes_per_tag = 0; ErrorCode rval = mbImpl->tag_get_bytes( tag_handle, bytes_per_tag ); if( rval != MB_SUCCESS ) return rval; int sz_buffer = sizeof( int ) + gather_ents.size() * ( sizeof( int ) + bytes_per_tag ); void* senddata = malloc( sz_buffer ); ( (int*)senddata )[0] = (int)gather_ents.size(); int* ptr_int = (int*)senddata + 1; rval = mbImpl->tag_get_data( id_tag, gather_ents, (void*)ptr_int ); if( rval != MB_SUCCESS ) return rval; ptr_int = (int*)( senddata ) + 1 + gather_ents.size(); rval = mbImpl->tag_get_data( tag_handle, gather_ents, (void*)ptr_int ); if( rval != MB_SUCCESS ) return rval; std::vector< int > displs( proc_config().proc_size(), 0 ); MPI_Gather( &sz_buffer, 1, MPI_INT, &displs[0], 1, MPI_INT, root_proc_rank, comm() ); std::vector< int > recvcnts( proc_config().proc_size(), 0 ); std::copy( displs.begin(), displs.end(), recvcnts.begin() ); std::partial_sum( displs.begin(), displs.end(), displs.begin() ); std::vector< int >::iterator lastM1 = displs.end() - 1; std::copy_backward( displs.begin(), lastM1, displs.end() ); // std::copy_backward(displs.begin(), --displs.end(), displs.end()); displs[0] = 0; if( (int)rank() != root_proc_rank ) MPI_Gatherv( senddata, sz_buffer, MPI_BYTE, NULL, NULL, NULL, MPI_BYTE, root_proc_rank, comm() ); else { Range gents; mbImpl->get_entities_by_dimension( gather_set, dim, gents ); int recvbuffsz = gents.size() * ( bytes_per_tag + sizeof( int ) ) + proc_config().proc_size() * sizeof( int ); void* recvbuf = malloc( recvbuffsz ); MPI_Gatherv( senddata, sz_buffer, MPI_BYTE, recvbuf, &recvcnts[0], &displs[0], MPI_BYTE, root_proc_rank, comm() ); void* gvals = NULL; // Test whether gents has multiple sequences bool multiple_sequences = false; if( gents.psize() > 1 ) multiple_sequences = true; else { int count; rval = mbImpl->tag_iterate( tag_handle, gents.begin(), gents.end(), count, gvals ); assert( NULL != gvals ); assert( count > 0 ); if( (size_t)count != gents.size() ) { multiple_sequences = true; gvals = NULL; } } // If gents has multiple sequences, create a temp buffer for gathered values if( multiple_sequences ) { gvals = malloc( gents.size() * bytes_per_tag ); assert( NULL != gvals ); } for( int i = 0; i != (int)size(); i++ ) { int numents = *(int*)( ( (char*)recvbuf ) + displs[i] ); int* id_ptr = (int*)( ( (char*)recvbuf ) + displs[i] + sizeof( int ) ); char* val_ptr = (char*)( id_ptr + numents ); for( int j = 0; j != numents; j++ ) { int idx = id_ptr[j]; memcpy( (char*)gvals + ( idx - 1 ) * bytes_per_tag, val_ptr + j * bytes_per_tag, bytes_per_tag ); } } // Free the receive buffer free( recvbuf ); // If gents has multiple sequences, copy tag data (stored in the temp buffer) to each // sequence separately if( multiple_sequences ) { Range::iterator iter = gents.begin(); size_t start_idx = 0; while( iter != gents.end() ) { int count; void* ptr; rval = mbImpl->tag_iterate( tag_handle, iter, gents.end(), count, ptr ); assert( NULL != ptr ); assert( count > 0 ); memcpy( (char*)ptr, (char*)gvals + start_idx * bytes_per_tag, bytes_per_tag * count ); iter += count; start_idx += count; } assert( start_idx == gents.size() ); // Free the temp buffer free( gvals ); } } // Free the send data free( senddata ); return MB_SUCCESS; }
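A hedged usage sketch follows. It assumes the gathered entities carry 1-based id_tag values (the implementation above indexes the receive buffer with idx - 1) and that the root rank has already populated gather_set with entities of the same dimension; both are assumptions of this sketch, not requirements stated elsewhere.

#include "moab/Core.hpp"
#include "moab/ParallelComm.hpp"
#include "MBTagConventions.hpp"

using namespace moab;

ErrorCode gather_to_root( Interface& mb, ParallelComm& pcomm, Range& my_cells, Tag data_tag,
                          EntityHandle gather_set )
{
    // Use the conventional GLOBAL_ID tag as the id tag (assumed 1-based here)
    Tag gid_tag;
    ErrorCode rval = mb.tag_get_handle( GLOBAL_ID_TAG_NAME, 1, MB_TYPE_INTEGER, gid_tag );
    if( MB_SUCCESS != rval ) return rval;

    // Rank 0 receives every proc's values, placed by global ID into gather_set
    return pcomm.gather_data( my_cells, data_tag, gid_tag, gather_set, 0 );
}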
ErrorCode moab::ParallelComm::get_all_pcomm( Interface* impl,
                                             std::vector< ParallelComm* >& list )  [static]
Definition at line 8023 of file ParallelComm.cpp.
References ErrorCode, MAX_SHARING_PROCS, MB_SUCCESS, MB_TAG_NOT_FOUND, pcomm_tag(), and moab::Interface::tag_get_data().
Referenced by moab::Core::deinitialize(), iMeshP_getNumPartitions(), iMeshP_getPartitions(), and save_and_load_on_root().
{
    Tag pc_tag = pcomm_tag( impl, false );
    if( 0 == pc_tag ) return MB_TAG_NOT_FOUND;

    const EntityHandle root = 0;
    ParallelComm* pc_array[MAX_SHARING_PROCS];
    ErrorCode rval = impl->tag_get_data( pc_tag, &root, 1, pc_array );
    if( MB_SUCCESS != rval ) return rval;

    for( int i = 0; i < MAX_SHARING_PROCS; i++ )
    {
        if( pc_array[i] ) list.push_back( pc_array[i] );
    }

    return MB_SUCCESS;
}
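A small sketch of enumerating the ParallelComm instances registered on an Interface, for example to report their IDs; the printing is illustrative only.

#include "moab/Core.hpp"
#include "moab/ParallelComm.hpp"
#include <vector>
#include <iostream>

using namespace moab;

void list_pcomms( Interface* mb )
{
    std::vector< ParallelComm* > pcomms;
    if( MB_SUCCESS != ParallelComm::get_all_pcomm( mb, pcomms ) ) return;

    std::cout << pcomms.size() << " ParallelComm instance(s) found\n";
    for( size_t i = 0; i < pcomms.size(); i++ )
        std::cout << "  pcomm id " << pcomms[i]->get_id() << "\n";
}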
int moab::ParallelComm::get_buffers( int to_proc,
                                     bool* is_new = NULL )
Get (and possibly allocate) buffers for messages to/from to_proc; returns the index of to_proc in the buffProcs vector. If is_new is non-NULL, it is set to whether a new buffer was allocated. PUBLIC ONLY FOR TESTING!
Definition at line 514 of file ParallelComm.cpp.
References buffProcs, INITIAL_BUFF_SIZE, localOwnedBuffs, MAX_SHARING_PROCS, moab::ProcConfig::proc_rank(), procConfig, and remoteOwnedBuffs.
Referenced by check_all_shared_handles(), correct_thin_ghost_layers(), exchange_owned_mesh(), get_interface_procs(), pack_shared_handles(), post_irecv(), recv_entities(), recv_messages(), recv_remote_handle_messages(), send_entities(), send_recv_entities(), moab::ScdInterface::tag_shared_vertices(), test_pack_shared_entities_3d(), and unpack_entities().
{
    int ind = -1;
    std::vector< unsigned int >::iterator vit = std::find( buffProcs.begin(), buffProcs.end(), to_proc );
    if( vit == buffProcs.end() )
    {
        assert( "shouldn't need buffer to myself" && to_proc != (int)procConfig.proc_rank() );
        ind = buffProcs.size();
        buffProcs.push_back( (unsigned int)to_proc );
        localOwnedBuffs.push_back( new Buffer( INITIAL_BUFF_SIZE ) );
        remoteOwnedBuffs.push_back( new Buffer( INITIAL_BUFF_SIZE ) );
        if( is_new ) *is_new = true;
    }
    else
    {
        ind = vit - buffProcs.begin();
        if( is_new ) *is_new = false;
    }
    assert( ind < MAX_SHARING_PROCS );
    return ind;
}
ErrorCode moab::ParallelComm::get_comm_procs( std::set< unsigned int >& procs )  [inline]
get processors with which this processor communicates
Definition at line 1633 of file ParallelComm.hpp.
References buffProcs, ErrorCode, get_interface_procs(), and MB_SUCCESS.
Referenced by exchange_tags(), reduce_tags(), settle_intersection_points(), and test_mesh().
{
    ErrorCode result = get_interface_procs( procs );
    if( MB_SUCCESS != result ) return result;

    std::copy( buffProcs.begin(), buffProcs.end(), std::inserter( procs, procs.begin() ) );

    return MB_SUCCESS;
}
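A brief sketch of collecting all communicating neighbor ranks, e.g. for sizing per-neighbor data structures in application code; the printing is illustrative.

#include "moab/ParallelComm.hpp"
#include <set>
#include <iostream>

using namespace moab;

void print_neighbors( ParallelComm& pcomm )
{
    std::set< unsigned int > procs;
    if( MB_SUCCESS != pcomm.get_comm_procs( procs ) ) return;

    for( std::set< unsigned int >::iterator it = procs.begin(); it != procs.end(); ++it )
        std::cout << "communicates with rank " << *it << "\n";
}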
int moab::ParallelComm::get_debug_verbosity( )

get the verbosity level of output from this pcomm
Definition at line 8872 of file ParallelComm.cpp.
References moab::DebugOutput::get_verbosity(), and myDebug.
Referenced by augment_default_sets_with_ghosts(), moab::ScdInterface::construct_box(), and moab::ScdInterface::tag_shared_vertices().
{
    return myDebug->get_verbosity();
}
ErrorCode moab::ParallelComm::get_entityset_local_handle( unsigned owning_rank,
                                                          EntityHandle remote_handle,
                                                          EntityHandle& local_handle ) const
Given set owner and handle on owner, find local set handle.
Definition at line 8892 of file ParallelComm.cpp.
References moab::SharedSetData::get_local_handle(), and sharedSetData.
Referenced by moab::WriteHDF5Parallel::communicate_shared_set_ids(), and test_shared_sets().
{
    return sharedSetData->get_local_handle( owning_rank, remote_handle, local_handle );
}
ErrorCode moab::ParallelComm::get_entityset_owner( EntityHandle entity_set,
                                                   unsigned& owner_rank,
                                                   EntityHandle* remote_handle = 0 ) const
Get rank of the owner of a shared set. Returns this proc if set is not shared. Optionally returns handle on owning process for shared set.
Definition at line 8882 of file ParallelComm.cpp.
References moab::SharedSetData::get_owner(), and sharedSetData.
Referenced by moab::WriteHDF5Parallel::communicate_shared_set_data(), moab::WriteHDF5Parallel::communicate_shared_set_ids(), moab::WriteHDF5Parallel::print_set_sharing_data(), and test_shared_sets().
{
    if( remote_handle )
        return sharedSetData->get_owner( entity_set, owner_rank, *remote_handle );
    else
        return sharedSetData->get_owner( entity_set, owner_rank );
}
ErrorCode moab::ParallelComm::get_entityset_owners( std::vector< unsigned >& ranks ) const
Get ranks of all processes that own at least one set that is shared with this process. Will include the rank of this process if this process owns any shared set.
Definition at line 8904 of file ParallelComm.cpp.
References moab::SharedSetData::get_owning_procs(), and sharedSetData.
Referenced by moab::WriteHDF5Parallel::communicate_shared_set_ids(), and test_shared_sets().
{
    return sharedSetData->get_owning_procs( ranks );
}
ErrorCode moab::ParallelComm::get_entityset_procs( EntityHandle entity_set,
                                                   std::vector< unsigned >& ranks ) const
Get the process IDs sharing a set; the list of ranks is empty if the set is not shared.
Definition at line 8877 of file ParallelComm.cpp.
References moab::SharedSetData::get_sharing_procs(), and sharedSetData.
Referenced by moab::WriteHDF5Parallel::communicate_shared_set_data(), moab::WriteHDF5Parallel::communicate_shared_set_ids(), moab::WriteHDF5Parallel::print_set_sharing_data(), and test_shared_sets().
{
    return sharedSetData->get_sharing_procs( set, ranks );
}
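A sketch tying the shared-set queries together: for each candidate set, report its owner and the number of sharing ranks. The 'sets' vector is assumed to be gathered by the application (e.g. material sets); the output format is illustrative.

#include "moab/ParallelComm.hpp"
#include <vector>
#include <iostream>

using namespace moab;

void report_set_sharing( ParallelComm& pcomm, const std::vector< EntityHandle >& sets )
{
    for( size_t i = 0; i < sets.size(); i++ )
    {
        unsigned owner;
        EntityHandle owner_handle;
        if( MB_SUCCESS != pcomm.get_entityset_owner( sets[i], owner, &owner_handle ) ) continue;

        std::vector< unsigned > ranks;
        if( MB_SUCCESS != pcomm.get_entityset_procs( sets[i], ranks ) ) continue;

        std::cout << "set owned by rank " << owner << ", shared by " << ranks.size() << " rank(s)\n";
    }
}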
ErrorCode moab::ParallelComm::get_ghosted_entities( int bridge_dim,
                                                    int ghost_dim,
                                                    int to_proc,
                                                    int num_layers,
                                                    int addl_ents,
                                                    Range& ghosted_ents )  [private]
for specified bridge/ghost dimension, to_proc, and number of layers, get the entities to be ghosted, and info on additional procs needing to communicate with to_proc
Definition at line 7437 of file ParallelComm.cpp.
References add_verts(), moab::Range::begin(), moab::Range::empty(), moab::Range::end(), ErrorCode, filter_pstatus(), moab::Interface::get_adjacencies(), moab::MeshTopoUtil::get_bridge_adjacencies(), moab::Interface::get_entities_by_dimension(), moab::Interface::get_entities_by_handle(), interfaceSets, is_iface_proc(), MB_CHK_SET_ERR, MB_SUCCESS, mbImpl, moab::Range::merge(), PSTATUS_NOT, PSTATUS_NOT_OWNED, moab::Range::subset_by_dimension(), and moab::Interface::UNION.
Referenced by get_sent_ents().
{ // Get bridge ents on interface(s) Range from_ents; ErrorCode result = MB_SUCCESS; assert( 0 < num_layers ); for( Range::iterator rit = interfaceSets.begin(); rit != interfaceSets.end(); ++rit ) { if( !is_iface_proc( *rit, to_proc ) ) continue; // Get starting "from" entities if( bridge_dim == -1 ) { result = mbImpl->get_entities_by_handle( *rit, from_ents );MB_CHK_SET_ERR( result, "Failed to get bridge ents in the set" ); } else { result = mbImpl->get_entities_by_dimension( *rit, bridge_dim, from_ents );MB_CHK_SET_ERR( result, "Failed to get bridge ents in the set" ); } // Need to get layers of bridge-adj entities if( from_ents.empty() ) continue; result = MeshTopoUtil( mbImpl ).get_bridge_adjacencies( from_ents, bridge_dim, ghost_dim, ghosted_ents, num_layers );MB_CHK_SET_ERR( result, "Failed to get bridge adjacencies" ); } result = add_verts( ghosted_ents );MB_CHK_SET_ERR( result, "Failed to add verts" ); if( addl_ents ) { // First get the ents of ghost_dim Range tmp_ents, tmp_owned, tmp_notowned; tmp_owned = ghosted_ents.subset_by_dimension( ghost_dim ); if( tmp_owned.empty() ) return result; tmp_notowned = tmp_owned; // Next, filter by pstatus; can only create adj entities for entities I own result = filter_pstatus( tmp_owned, PSTATUS_NOT_OWNED, PSTATUS_NOT, -1, &tmp_owned );MB_CHK_SET_ERR( result, "Failed to filter owned entities" ); tmp_notowned -= tmp_owned; // Get edges first if( 1 == addl_ents || 3 == addl_ents ) { result = mbImpl->get_adjacencies( tmp_owned, 1, true, tmp_ents, Interface::UNION );MB_CHK_SET_ERR( result, "Failed to get edge adjacencies for owned ghost entities" ); result = mbImpl->get_adjacencies( tmp_notowned, 1, false, tmp_ents, Interface::UNION );MB_CHK_SET_ERR( result, "Failed to get edge adjacencies for notowned ghost entities" ); } if( 2 == addl_ents || 3 == addl_ents ) { result = mbImpl->get_adjacencies( tmp_owned, 2, true, tmp_ents, Interface::UNION );MB_CHK_SET_ERR( result, "Failed to get face adjacencies for owned ghost entities" ); result = mbImpl->get_adjacencies( tmp_notowned, 2, false, tmp_ents, Interface::UNION );MB_CHK_SET_ERR( result, "Failed to get face adjacencies for notowned ghost entities" ); } ghosted_ents.merge( tmp_ents ); } return result; }
ErrorCode moab::ParallelComm::get_global_part_count( int& count_out ) const
Definition at line 8182 of file ParallelComm.cpp.
References globalPartCount, and MB_SUCCESS.
Referenced by iMeshP_getNumGlobalParts().
{
    count_out = globalPartCount;
    return count_out < 0 ? MB_FAILURE : MB_SUCCESS;
}
int moab::ParallelComm::get_id() const [inline]
Get ID used to reference this PCOMM instance.
Definition at line 70 of file ParallelComm.hpp.
References pcommID.
Referenced by iMeshP_createPartitionAll(), iMeshP_loadAll(), iMOAB_RegisterApplication(), main(), and DeformMeshRemap::read_file().
{ return pcommID; }
ErrorCode moab::ParallelComm::get_iface_entities( int other_proc, int dim, Range& iface_ents )
Get entities on interfaces shared with another proc.
Parameters:
    other_proc   Other proc sharing the interface
    dim          Dimension of entities to return, -1 if all dims
    iface_ents   Returned entities
Definition at line 7275 of file ParallelComm.cpp.
References moab::Range::begin(), moab::Range::end(), ErrorCode, moab::Interface::get_entities_by_dimension(), moab::Interface::get_entities_by_handle(), interfaceSets, is_iface_proc(), MB_CHK_SET_ERR, MB_SUCCESS, and mbImpl.
Referenced by get_sent_ents().
{
    Range iface_sets;
    ErrorCode result = MB_SUCCESS;

    for( Range::iterator rit = interfaceSets.begin(); rit != interfaceSets.end(); ++rit )
    {
        if( -1 != other_proc && !is_iface_proc( *rit, other_proc ) ) continue;

        if( -1 == dim )
        {
            result = mbImpl->get_entities_by_handle( *rit, iface_ents );
            MB_CHK_SET_ERR( result, "Failed to get entities in iface set" );
        }
        else
        {
            result = mbImpl->get_entities_by_dimension( *rit, dim, iface_ents );
            MB_CHK_SET_ERR( result, "Failed to get entities in iface set" );
        }
    }

    return MB_SUCCESS;
}
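Example (illustrative, not from the MOAB sources): count the interface vertices shared with one neighboring rank. The ParallelComm pc (with shared entities already resolved) and the neighbor rank nbr_rank are assumed to come from surrounding setup code.

    #include <iostream>
    #include "moab/ParallelComm.hpp"

    using namespace moab;

    ErrorCode count_iface_vertices( ParallelComm& pc, int nbr_rank )
    {
        Range iface_verts;
        // dim = 0 requests vertices; pass -1 for entities of all dimensions
        ErrorCode rval = pc.get_iface_entities( nbr_rank, 0, iface_verts );
        if( MB_SUCCESS != rval ) return rval;
        std::cout << "rank " << pc.proc_config().proc_rank() << " shares " << iface_verts.size()
                  << " interface vertices with rank " << nbr_rank << std::endl;
        return MB_SUCCESS;
    }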
ErrorCode moab::ParallelComm::get_interface_procs( std::set< unsigned int >& iface_procs, const bool get_buffs = false )
Get processors with which this processor shares an interface.
Get processors with which this processor communicates; sets are sorted by processor.
Definition at line 5440 of file ParallelComm.cpp.
References moab::Range::begin(), moab::Range::end(), ErrorCode, get_buffers(), interfaceSets, MAX_SHARING_PROCS, MB_CHK_SET_ERR, MB_SUCCESS, mbImpl, moab::ProcConfig::proc_rank(), procConfig, sharedp_tag(), sharedps_tag(), moab::Range::size(), and moab::Interface::tag_get_data().
Referenced by get_comm_procs(), resolve_shared_ents(), and moab::ParallelMergeMesh::TagSharedElements().
{ // Make sure the sharing procs vector is empty procs_set.clear(); // Pre-load vector of single-proc tag values unsigned int i, j; std::vector< int > iface_proc( interfaceSets.size() ); ErrorCode result = mbImpl->tag_get_data( sharedp_tag(), interfaceSets, &iface_proc[0] );MB_CHK_SET_ERR( result, "Failed to get iface_proc for iface sets" ); // Get sharing procs either from single-proc vector or by getting // multi-proc tag value int tmp_iface_procs[MAX_SHARING_PROCS]; std::fill( tmp_iface_procs, tmp_iface_procs + MAX_SHARING_PROCS, -1 ); Range::iterator rit; for( rit = interfaceSets.begin(), i = 0; rit != interfaceSets.end(); ++rit, i++ ) { if( -1 != iface_proc[i] ) { assert( iface_proc[i] != (int)procConfig.proc_rank() ); procs_set.insert( (unsigned int)iface_proc[i] ); } else { // Get the sharing_procs tag result = mbImpl->tag_get_data( sharedps_tag(), &( *rit ), 1, tmp_iface_procs );MB_CHK_SET_ERR( result, "Failed to get iface_procs for iface set" ); for( j = 0; j < MAX_SHARING_PROCS; j++ ) { if( -1 != tmp_iface_procs[j] && tmp_iface_procs[j] != (int)procConfig.proc_rank() ) procs_set.insert( (unsigned int)tmp_iface_procs[j] ); else if( -1 == tmp_iface_procs[j] ) { std::fill( tmp_iface_procs, tmp_iface_procs + j, -1 ); break; } } } } if( get_buffs ) { for( std::set< unsigned int >::iterator sit = procs_set.begin(); sit != procs_set.end(); ++sit ) get_buffers( *sit ); } return MB_SUCCESS; }
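Example (illustrative, not from the MOAB sources): list the interface neighbors after shared-entity resolution; pc is an assumed, already-initialized ParallelComm.

    #include <iostream>
    #include <set>
    #include "moab/ParallelComm.hpp"

    using namespace moab;

    ErrorCode print_interface_procs( ParallelComm& pc )
    {
        std::set< unsigned int > nbr_procs;
        // Pass get_buffs = true to also pre-allocate communication buffers
        ErrorCode rval = pc.get_interface_procs( nbr_procs, true );
        if( MB_SUCCESS != rval ) return rval;
        for( std::set< unsigned int >::const_iterator sit = nbr_procs.begin(); sit != nbr_procs.end(); ++sit )
            std::cout << "interface neighbor: " << *sit << std::endl;
        return MB_SUCCESS;
    }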
ErrorCode moab::ParallelComm::get_interface_sets( EntityHandle part, Range& iface_sets_out, int* adj_part_id = 0 )
Definition at line 8310 of file ParallelComm.cpp.
References moab::Range::begin(), moab::Range::end(), moab::Range::erase(), ErrorCode, get_sharing_data(), interface_sets(), MAX_SHARING_PROCS, and MB_SUCCESS.
Referenced by get_boundary_entities(), and get_part_neighbor_ids().
{
    // FIXME : assumes one part per processor.
    // Need to store part iface sets as children to implement
    // this correctly.
    iface_sets_out = interface_sets();

    if( adj_part_id )
    {
        int part_ids[MAX_SHARING_PROCS], num_parts;
        Range::iterator i = iface_sets_out.begin();
        while( i != iface_sets_out.end() )
        {
            unsigned char pstat;
            ErrorCode rval = get_sharing_data( *i, part_ids, NULL, pstat, num_parts );
            if( MB_SUCCESS != rval ) return rval;

            if( std::find( part_ids, part_ids + num_parts, *adj_part_id ) - part_ids != num_parts )
                ++i;
            else
                i = iface_sets_out.erase( i );
        }
    }

    return MB_SUCCESS;
}
ErrorCode moab::ParallelComm::get_local_handles( EntityHandle* from_vec, int num_ents, const Range& new_ents ) [private]
Goes through from_vec and, for any handle of type MBMAXTYPE, substitutes the new_ents value at the index corresponding to the ID of the entity in from_vec.
Definition at line 3102 of file ParallelComm.cpp.
References moab::Range::begin(), and moab::Range::end().
Referenced by get_local_handles(), unpack_entities(), unpack_sets(), and unpack_tags().
{
    std::vector< EntityHandle > tmp_ents;
    std::copy( new_ents.begin(), new_ents.end(), std::back_inserter( tmp_ents ) );
    return get_local_handles( from_vec, num_ents, tmp_ents );
}
ErrorCode moab::ParallelComm::get_local_handles( const Range& remote_handles, Range& local_handles, const std::vector< EntityHandle >& new_ents ) [private]
Same as above, except results are put in a Range.
Definition at line 3090 of file ParallelComm.cpp.
References moab::Range::begin(), moab::Range::end(), ErrorCode, get_local_handles(), and moab::Range::size().
{
    std::vector< EntityHandle > rh_vec;
    rh_vec.reserve( remote_handles.size() );
    std::copy( remote_handles.begin(), remote_handles.end(), std::back_inserter( rh_vec ) );
    ErrorCode result = get_local_handles( &rh_vec[0], remote_handles.size(), new_ents );
    std::copy( rh_vec.begin(), rh_vec.end(), range_inserter( local_handles ) );
    return result;
}
ErrorCode moab::ParallelComm::get_local_handles( EntityHandle* from_vec, int num_ents, const std::vector< EntityHandle >& new_ents ) [private]
Same as above, except new_ents is taken from a vector.
Definition at line 3109 of file ParallelComm.cpp.
References moab::ID_FROM_HANDLE(), MB_SUCCESS, MBMAXTYPE, and moab::TYPE_FROM_HANDLE().
{
    for( int i = 0; i < num_ents; i++ )
    {
        if( TYPE_FROM_HANDLE( from_vec[i] ) == MBMAXTYPE )
        {
            assert( ID_FROM_HANDLE( from_vec[i] ) < (int)new_ents.size() );
            from_vec[i] = new_ents[ID_FROM_HANDLE( from_vec[i] )];
        }
    }
    return MB_SUCCESS;
}
Interface* moab::ParallelComm::get_moab() const [inline]
Definition at line 779 of file ParallelComm.hpp.
References mbImpl.
Referenced by check_shared_ents(), moab::ParCommGraph::compute_partition(), count_owned(), create_shared_grid_3d(), exchange_ghost_cells(), get_boundary_entities(), get_ghost_entities(), moab::ParallelMergeMesh::ParallelMergeMesh(), moab::ParCommGraph::receive_mesh(), moab::ParCommGraph::receive_tag_values(), resolve_shared_ents(), moab::ParCommGraph::send_mesh_parts(), and moab::ParCommGraph::send_tag_values().
{ return mbImpl; }
ErrorCode moab::ParallelComm::get_owned_sets( unsigned owning_rank, Range& sets_out ) const
Get shared sets owned by process with specified rank.
Definition at line 8909 of file ParallelComm.cpp.
References moab::SharedSetData::get_shared_sets(), and sharedSetData.
Referenced by moab::WriteHDF5Parallel::communicate_shared_set_ids(), moab::WriteHDF5Parallel::create_meshset_tables(), and test_shared_sets().
{ return sharedSetData->get_shared_sets( owning_rank, sets_out ); }
ErrorCode moab::ParallelComm::get_owner( EntityHandle entity, int& owner ) [inline]
Return the rank of the entity owner.
Definition at line 1643 of file ParallelComm.hpp.
References get_owner_handle().
Referenced by moab::WriteHDF5Parallel::exchange_file_ids(), iMeshP_isEntOwnerArr(), iMOAB_GetElementOwnership(), iMOAB_GetVertexOwnership(), iMOAB_GetVisibleElementsInfo(), pack_shared_handles(), print_output(), test_ghost_tag_exchange(), and test_interface_owners_common().
{ EntityHandle tmp_handle; return get_owner_handle( entity, owner, tmp_handle ); }
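Example (illustrative, not from the MOAB sources): tally the locally owned entities in a Range; pc and shared_ents are assumed to come from setup code such as resolve_shared_ents.

    #include "moab/ParallelComm.hpp"

    using namespace moab;

    ErrorCode count_locally_owned( ParallelComm& pc, const Range& shared_ents, int& n_owned )
    {
        n_owned = 0;
        for( Range::const_iterator rit = shared_ents.begin(); rit != shared_ents.end(); ++rit )
        {
            int owner = -1;
            ErrorCode rval = pc.get_owner( *rit, owner );
            if( MB_SUCCESS != rval ) return rval;
            if( owner == (int)pc.proc_config().proc_rank() ) n_owned++;
        }
        return MB_SUCCESS;
    }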
ErrorCode moab::ParallelComm::get_owner_handle( EntityHandle entity, int& owner, EntityHandle& handle )
Return the owning processor rank and the handle of the entity on the owner.
Definition at line 8147 of file ParallelComm.cpp.
References ErrorCode, MAX_SHARING_PROCS, MB_CHK_SET_ERR, MB_SUCCESS, mbImpl, proc_config(), moab::ProcConfig::proc_rank(), PSTATUS_MULTISHARED, PSTATUS_NOT_OWNED, PSTATUS_SHARED, pstatus_tag(), sharedh_tag(), sharedhs_tag(), sharedp_tag(), sharedps_tag(), and moab::Interface::tag_get_data().
Referenced by get_owner(), and get_owner_handles().
{ unsigned char pstat; int sharing_procs[MAX_SHARING_PROCS]; EntityHandle sharing_handles[MAX_SHARING_PROCS]; ErrorCode result = mbImpl->tag_get_data( pstatus_tag(), &entity, 1, &pstat );MB_CHK_SET_ERR( result, "Failed to get pstatus tag data" ); if( !( pstat & PSTATUS_NOT_OWNED ) ) { owner = proc_config().proc_rank(); handle = entity; } else if( pstat & PSTATUS_MULTISHARED ) { result = mbImpl->tag_get_data( sharedps_tag(), &entity, 1, sharing_procs );MB_CHK_SET_ERR( result, "Failed to get sharedps tag data" ); owner = sharing_procs[0]; result = mbImpl->tag_get_data( sharedhs_tag(), &entity, 1, sharing_handles );MB_CHK_SET_ERR( result, "Failed to get sharedhs tag data" ); handle = sharing_handles[0]; } else if( pstat & PSTATUS_SHARED ) { result = mbImpl->tag_get_data( sharedp_tag(), &entity, 1, sharing_procs );MB_CHK_SET_ERR( result, "Failed to get sharedp tag data" ); owner = sharing_procs[0]; result = mbImpl->tag_get_data( sharedh_tag(), &entity, 1, sharing_handles );MB_CHK_SET_ERR( result, "Failed to get sharedh tag data" ); handle = sharing_handles[0]; } else { owner = -1; handle = 0; } return MB_SUCCESS; }
ErrorCode moab::ParallelComm::get_owning_part( EntityHandle entity, int& owning_part_id_out, EntityHandle* owning_handle = 0 )
Definition at line 8337 of file ParallelComm.cpp.
References ErrorCode, MB_CHK_SET_ERR, MB_SUCCESS, mbImpl, proc_config(), moab::ProcConfig::proc_rank(), PSTATUS_NOT_OWNED, pstatus_tag(), sharedh_tag(), sharedhs_tag(), sharedp_tag(), sharedps_tag(), moab::Interface::tag_get_by_ptr(), and moab::Interface::tag_get_data().
Referenced by iMeshP_getEntOwnerPartArr(), and iMeshP_getOwnerCopy().
{
    // FIXME : assumes one part per proc, and therefore part_id == rank

    // If entity is not shared, then we're the owner.
    unsigned char pstat;
    ErrorCode result = mbImpl->tag_get_data( pstatus_tag(), &handle, 1, &pstat );
    MB_CHK_SET_ERR( result, "Failed to get pstatus tag data" );
    if( !( pstat & PSTATUS_NOT_OWNED ) )
    {
        owning_part_id = proc_config().proc_rank();
        if( remote_handle ) *remote_handle = handle;
        return MB_SUCCESS;
    }

    // If entity is shared with one other proc, then
    // sharedp_tag will contain a positive value.
    result = mbImpl->tag_get_data( sharedp_tag(), &handle, 1, &owning_part_id );
    MB_CHK_SET_ERR( result, "Failed to get sharedp tag data" );
    if( owning_part_id != -1 )
    {
        // Done?
        if( !remote_handle ) return MB_SUCCESS;

        // Get handles on remote processors (and this one)
        return mbImpl->tag_get_data( sharedh_tag(), &handle, 1, remote_handle );
    }

    // If here, then the entity is shared with at least two other processors.
    // Get the list from the sharedps_tag
    const void* part_id_list = 0;
    result = mbImpl->tag_get_by_ptr( sharedps_tag(), &handle, 1, &part_id_list );
    if( MB_SUCCESS != result ) return result;
    owning_part_id = ( (const int*)part_id_list )[0];

    // Done?
    if( !remote_handle ) return MB_SUCCESS;

    // Get remote handles
    const void* handle_list = 0;
    result = mbImpl->tag_get_by_ptr( sharedhs_tag(), &handle, 1, &handle_list );
    if( MB_SUCCESS != result ) return result;

    *remote_handle = ( (const EntityHandle*)handle_list )[0];
    return MB_SUCCESS;
}
ErrorCode moab::ParallelComm::get_part_entities( Range& ents, int dim = -1 )
Return all the entities in parts owned locally.
Definition at line 8126 of file ParallelComm.cpp.
References moab::Range::begin(), moab::Range::end(), ErrorCode, moab::Interface::get_entities_by_dimension(), moab::Interface::get_entities_by_handle(), MB_SUCCESS, mbImpl, moab::Range::merge(), and partitionSets.
Referenced by moab::Coupler::initialize_tree(), and main().
{
    ErrorCode result;

    for( Range::iterator rit = partitionSets.begin(); rit != partitionSets.end(); ++rit )
    {
        Range tmp_ents;
        if( -1 == dim )
            result = mbImpl->get_entities_by_handle( *rit, tmp_ents, true );
        else
            result = mbImpl->get_entities_by_dimension( *rit, dim, tmp_ents, true );

        if( MB_SUCCESS != result ) return result;
        ents.merge( tmp_ents );
    }

    return MB_SUCCESS;
}
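Example (illustrative, not from the MOAB sources): size a local computation from the locally owned part contents; the parallel load that populates the partition sets is assumed to have happened elsewhere.

    #include "moab/ParallelComm.hpp"

    using namespace moab;

    ErrorCode local_cell_count( ParallelComm& pc, int& n_cells )
    {
        Range cells;
        // dim = 3 restricts to volume elements; -1 would return everything
        ErrorCode rval = pc.get_part_entities( cells, 3 );
        if( MB_SUCCESS != rval ) return rval;
        n_cells = (int)cells.size();
        return MB_SUCCESS;
    }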
ErrorCode moab::ParallelComm::get_part_handle( int id, EntityHandle& handle_out ) const
Definition at line 8202 of file ParallelComm.cpp.
References moab::Range::front(), MB_ENTITY_NOT_FOUND, MB_SUCCESS, partition_sets(), and proc_config().
Referenced by assign_entities_part(), iMeshP_getPartHandlesFromPartsIdsArr(), and remove_entities_part().
{
    // FIXME: assumes only 1 local part
    if( (unsigned)id != proc_config().proc_rank() ) return MB_ENTITY_NOT_FOUND;
    handle_out = partition_sets().front();
    return MB_SUCCESS;
}
ErrorCode moab::ParallelComm::get_part_id( EntityHandle part, int& id_out ) const
Definition at line 8195 of file ParallelComm.cpp.
References MB_SUCCESS, proc_config(), and moab::ProcConfig::proc_rank().
Referenced by get_part_neighbor_ids(), iMeshP_getPartIdsFromPartHandlesArr(), and iMeshP_isEntOwnerArr().
{
    // FIXME: assumes only 1 local part
    id_out = proc_config().proc_rank();
    return MB_SUCCESS;
}
ErrorCode moab::ParallelComm::get_part_neighbor_ids( EntityHandle part, int neighbors_out[MAX_SHARING_PROCS], int& num_neighbors_out )
Definition at line 8276 of file ParallelComm.cpp.
References moab::Range::begin(), moab::Range::end(), ErrorCode, get_interface_sets(), get_part_id(), get_sharing_data(), iface, MAX_SHARING_PROCS, and MB_SUCCESS.
Referenced by iMeshP_getNumPartNborsArr(), and iMeshP_getPartNborsArr().
{
    ErrorCode rval;
    Range iface;
    rval = get_interface_sets( part, iface );
    if( MB_SUCCESS != rval ) return rval;

    num_neighbors_out = 0;
    int n, j = 0;
    int tmp[MAX_SHARING_PROCS] = { 0 }, curr[MAX_SHARING_PROCS] = { 0 };
    int* parts[2] = { neighbors_out, tmp };
    for( Range::iterator i = iface.begin(); i != iface.end(); ++i )
    {
        unsigned char pstat;
        rval = get_sharing_data( *i, curr, NULL, pstat, n );
        if( MB_SUCCESS != rval ) return rval;
        std::sort( curr, curr + n );
        assert( num_neighbors_out < MAX_SHARING_PROCS );
        int* k = std::set_union( parts[j], parts[j] + num_neighbors_out, curr, curr + n, parts[1 - j] );
        j = 1 - j;
        num_neighbors_out = k - parts[j];
    }
    if( parts[j] != neighbors_out ) std::copy( parts[j], parts[j] + num_neighbors_out, neighbors_out );

    // Remove input part from list
    int id;
    rval = get_part_id( part, id );
    if( MB_SUCCESS == rval )
        num_neighbors_out = std::remove( neighbors_out, neighbors_out + num_neighbors_out, id ) - neighbors_out;
    return rval;
}
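Example (illustrative, not from the MOAB sources): print the neighbor part IDs of the local part, relying on the one-part-per-rank assumption this implementation makes.

    #include <iostream>
    #include "moab/ParallelComm.hpp"

    using namespace moab;

    ErrorCode print_part_neighbors( ParallelComm& pc )
    {
        // One part per rank is assumed, so the local part id equals the rank
        EntityHandle part;
        ErrorCode rval = pc.get_part_handle( pc.proc_config().proc_rank(), part );
        if( MB_SUCCESS != rval ) return rval;

        int nbrs[MAX_SHARING_PROCS];
        int n_nbrs = 0;
        rval = pc.get_part_neighbor_ids( part, nbrs, n_nbrs );
        if( MB_SUCCESS != rval ) return rval;

        for( int i = 0; i < n_nbrs; i++ )
            std::cout << "neighbor part id: " << nbrs[i] << std::endl;
        return MB_SUCCESS;
    }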
ErrorCode moab::ParallelComm::get_part_owner( int part_id, int& owner_out ) const
Definition at line 8188 of file ParallelComm.cpp.
References MB_SUCCESS.
Referenced by iMeshP_exchEntArrToPartsAll(), and iMeshP_getRankOfPartArr().
{
    // FIXME: assumes only 1 local part
    owner = part_id;
    return MB_SUCCESS;
}
EntityHandle moab::ParallelComm::get_partitioning() const [inline]
Definition at line 725 of file ParallelComm.hpp.
References partitioningSet.
Referenced by create_part(), and destroy_part().
{ return partitioningSet; }
ParallelComm* moab::ParallelComm::get_pcomm( Interface* impl, const int index ) [static]
Get the indexed pcomm object from the interface.
Definition at line 8010 of file ParallelComm.cpp.
References ErrorCode, MAX_SHARING_PROCS, MB_SUCCESS, pcomm_tag(), and moab::Interface::tag_get_data().
Referenced by ahf_test(), closedsurface_uref_hirec_convergence_study(), moab::ScdInterface::construct_box(), count_owned_entities(), gather_one_cell_var(), get_max_volume(), get_pcomm(), moab::HiReconstruction::HiReconstruction(), iMeshP_assignGlobalIds(), iMeshP_createPartitionAll(), iMeshP_getPartsArrOnRank(), iMeshP_getPartsOnRank(), iMeshP_loadAll(), intersection_at_level(), moab::Core::load_file(), load_meshset_hirec(), main(), moab::MeshGeneration::MeshGeneration(), moab::MeshRefiner::MeshRefiner(), multiple_loads_of_same_file(), moab::NestedRefine::NestedRefine(), moab::WriteHDF5Parallel::parallel_create_file(), moab::ReadNC::parse_options(), moab::WriteNC::parse_options(), perform_laplacian_smoothing(), perform_lloyd_relaxation(), read_file(), read_mesh_parallel(), read_one_cell_var(), moab::ReadParallel::ReadParallel(), moab::RefinerTagManager::RefinerTagManager(), regression_ghost_tag_exchange_no_default(), moab::ReadHDF5::set_up_read(), test_closedsurface_mesh(), test_delete_entities(), test_elements_on_several_procs(), test_gather_onevar(), test_ghost_elements(), test_ghost_tag_exchange(), test_intx_in_parallel_elem_based(), test_intx_mpas(), test_locator(), test_mesh(), test_read(), test_read_all(), test_read_conn(), test_read_eul_onevar(), test_read_fv_onevar(), test_read_no_mixed_elements(), test_read_novars(), test_read_onevar(), test_read_parallel(), and test_write_unbalanced().
{
    Tag pc_tag = pcomm_tag( impl, false );
    if( 0 == pc_tag ) return NULL;

    const EntityHandle root = 0;
    ParallelComm* pc_array[MAX_SHARING_PROCS];
    ErrorCode result = impl->tag_get_data( pc_tag, &root, 1, (void*)pc_array );
    if( MB_SUCCESS != result ) return NULL;

    return pc_array[index];
}
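Example (illustrative, not from the MOAB sources): a typical retrieval pattern after a parallel read, where index 0 refers to the first ParallelComm attached to the Interface. The file name and the options string are placeholders, not a recommended configuration.

    #include "moab/Core.hpp"
    #include "moab/ParallelComm.hpp"

    using namespace moab;

    int main( int argc, char** argv )
    {
        MPI_Init( &argc, &argv );
        Core mb;
        // Placeholder file/options; a parallel read creates a ParallelComm
        ErrorCode rval = mb.load_file( "mesh.h5m", 0, "PARALLEL=READ_PART;PARTITION=PARALLEL_PARTITION" );
        if( MB_SUCCESS == rval )
        {
            ParallelComm* pc = ParallelComm::get_pcomm( &mb, 0 );
            if( pc )
            {
                // use pc ...
            }
        }
        MPI_Finalize();
        return 0;
    }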
ParallelComm* moab::ParallelComm::get_pcomm( Interface* impl, EntityHandle partitioning, const MPI_Comm* comm = 0 ) [static]
Get the ParallelComm instance associated with a partition handle. Will create a ParallelComm instance if (a) one does not already exist and (b) a valid value for MPI_Comm is passed.
Definition at line 8042 of file ParallelComm.cpp.
References ErrorCode, get_pcomm(), MB_SUCCESS, MB_TAG_CREAT, MB_TAG_NOT_FOUND, MB_TAG_SPARSE, MB_TYPE_INTEGER, ParallelComm(), moab::PARTITIONING_PCOMM_TAG_NAME, set_partitioning(), moab::Interface::tag_get_data(), moab::Interface::tag_get_handle(), and moab::Interface::tag_set_data().
{
    ErrorCode rval;
    ParallelComm* result = 0;

    Tag prtn_tag;
    rval = impl->tag_get_handle( PARTITIONING_PCOMM_TAG_NAME, 1, MB_TYPE_INTEGER, prtn_tag, MB_TAG_SPARSE | MB_TAG_CREAT );
    if( MB_SUCCESS != rval ) return 0;

    int pcomm_id;
    rval = impl->tag_get_data( prtn_tag, &prtn, 1, &pcomm_id );
    if( MB_SUCCESS == rval )
    {
        result = get_pcomm( impl, pcomm_id );
    }
    else if( MB_TAG_NOT_FOUND == rval && comm )
    {
        result = new ParallelComm( impl, *comm, &pcomm_id );
        if( !result ) return 0;
        result->set_partitioning( prtn );

        rval = impl->tag_set_data( prtn_tag, &prtn, 1, &pcomm_id );
        if( MB_SUCCESS != rval )
        {
            delete result;
            result = 0;
        }
    }

    return result;
}
ErrorCode moab::ParallelComm::get_proc_nvecs( int resolve_dim, int shared_dim, Range* skin_ents, std::map< std::vector< int >, std::vector< EntityHandle > >& proc_nvecs ) [private]
Definition at line 5156 of file ParallelComm.cpp.
References moab::Range::end(), ErrorCode, moab::Interface::get_connectivity(), get_sharing_data(), INTERSECT, MB_CHK_SET_ERR, MB_SUCCESS, mbImpl, proc_config(), moab::ProcConfig::proc_rank(), procConfig, and moab::Interface::UNION.
Referenced by create_interface_sets(), resolve_shared_ents(), and moab::ParallelMergeMesh::TagSharedElements().
{ // Set sharing procs tags on other skin ents ErrorCode result; const EntityHandle* connect; int num_connect; std::set< int > sharing_procs; std::vector< EntityHandle > dum_connect; std::vector< int > sp_vec; for( int d = 3; d > 0; d-- ) { if( resolve_dim == d ) continue; for( Range::iterator rit = skin_ents[d].begin(); rit != skin_ents[d].end(); ++rit ) { // Get connectivity result = mbImpl->get_connectivity( *rit, connect, num_connect, false, &dum_connect );MB_CHK_SET_ERR( result, "Failed to get connectivity on non-vertex skin entities" ); int op = ( resolve_dim < shared_dim ? Interface::UNION : Interface::INTERSECT ); result = get_sharing_data( connect, num_connect, sharing_procs, op );MB_CHK_SET_ERR( result, "Failed to get sharing data in get_proc_nvecs" ); if( sharing_procs.empty() || ( sharing_procs.size() == 1 && *sharing_procs.begin() == (int)procConfig.proc_rank() ) ) continue; // Need to specify sharing data correctly for entities or they will // end up in a different interface set than corresponding vertices if( sharing_procs.size() == 2 ) { std::set< int >::iterator it = sharing_procs.find( proc_config().proc_rank() ); assert( it != sharing_procs.end() ); sharing_procs.erase( it ); } // Intersection is the owning proc(s) for this skin ent sp_vec.clear(); std::copy( sharing_procs.begin(), sharing_procs.end(), std::back_inserter( sp_vec ) ); assert( sp_vec.size() != 2 ); proc_nvecs[sp_vec].push_back( *rit ); } } #ifndef NDEBUG // Shouldn't be any repeated entities in any of the vectors in proc_nvecs for( std::map< std::vector< int >, std::vector< EntityHandle > >::iterator mit = proc_nvecs.begin(); mit != proc_nvecs.end(); ++mit ) { std::vector< EntityHandle > tmp_vec = ( mit->second ); std::sort( tmp_vec.begin(), tmp_vec.end() ); std::vector< EntityHandle >::iterator vit = std::unique( tmp_vec.begin(), tmp_vec.end() ); assert( vit == tmp_vec.end() ); } #endif return MB_SUCCESS; }
ErrorCode moab::ParallelComm::get_pstatus( EntityHandle entity, unsigned char& pstatus_val )
Get the parallel status of an entity.
Parameters:
    entity        The entity being queried
    pstatus_val   Parallel status of the entity
Definition at line 5488 of file ParallelComm.cpp.
References ErrorCode, MB_CHK_SET_ERR, mbImpl, pstatus_tag(), and moab::Interface::tag_get_data().
Referenced by check_my_shared_handles().
{
    ErrorCode result = mbImpl->tag_get_data( pstatus_tag(), &entity, 1, &pstatus_val );
    MB_CHK_SET_ERR( result, "Failed to get pastatus tag data" );
    return result;
}
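Example (illustrative, not from the MOAB sources): test individual status bits of one entity; pc and ent are assumed from surrounding code.

    #include "MBParallelConventions.h"
    #include "moab/ParallelComm.hpp"

    using namespace moab;

    // Sets 'owned_shared' when 'ent' is shared and owned locally
    ErrorCode owned_and_shared( ParallelComm& pc, EntityHandle ent, bool& owned_shared )
    {
        unsigned char pstat;
        ErrorCode rval = pc.get_pstatus( ent, pstat );
        if( MB_SUCCESS != rval ) return rval;
        owned_shared = ( pstat & PSTATUS_SHARED ) && !( pstat & PSTATUS_NOT_OWNED );
        return MB_SUCCESS;
    }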
ErrorCode moab::ParallelComm::get_pstatus_entities( int dim, unsigned char pstatus_val, Range& pstatus_ents )
Get entities with the given pstatus bit(s) set. Returns any entities whose pstatus tag value v satisfies (v & pstatus_val).
Parameters:
    dim           Dimension of entities to be returned, or -1 if any
    pstatus_val   pstatus value of desired entities
    pstatus_ents  Entities returned from function
Definition at line 5494 of file ParallelComm.cpp.
References moab::Range::begin(), dim, moab::Interface::dimension_from_handle(), moab::Range::end(), ErrorCode, moab::Interface::get_entities_by_dimension(), moab::Interface::get_entities_by_handle(), moab::Range::insert(), MB_CHK_SET_ERR, MB_SUCCESS, mbImpl, pstatus_tag(), moab::Range::size(), and moab::Interface::tag_get_data().
Referenced by main().
{ Range ents; ErrorCode result; if( -1 == dim ) { result = mbImpl->get_entities_by_handle( 0, ents );MB_CHK_SET_ERR( result, "Failed to get all entities" ); } else { result = mbImpl->get_entities_by_dimension( 0, dim, ents );MB_CHK_SET_ERR( result, "Failed to get entities of dimension " << dim ); } std::vector< unsigned char > pstatus( ents.size() ); result = mbImpl->tag_get_data( pstatus_tag(), ents, &pstatus[0] );MB_CHK_SET_ERR( result, "Failed to get pastatus tag data" ); Range::iterator rit = ents.begin(); int i = 0; if( pstatus_val ) { for( ; rit != ents.end(); i++, ++rit ) { if( pstatus[i] & pstatus_val && ( -1 == dim || mbImpl->dimension_from_handle( *rit ) == dim ) ) pstatus_ents.insert( *rit ); } } else { for( ; rit != ents.end(); i++, ++rit ) { if( !pstatus[i] && ( -1 == dim || mbImpl->dimension_from_handle( *rit ) == dim ) ) pstatus_ents.insert( *rit ); } } return MB_SUCCESS; }
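Example (illustrative, not from the MOAB sources): collect all ghost entities and all non-owned vertices, assuming ghost exchange has already been performed on pc.

    #include "MBParallelConventions.h"
    #include "moab/ParallelComm.hpp"

    using namespace moab;

    ErrorCode gather_ghosts( ParallelComm& pc, Range& ghosts, Range& not_owned_verts )
    {
        // -1: entities of any dimension with the GHOST bit set
        ErrorCode rval = pc.get_pstatus_entities( -1, PSTATUS_GHOST, ghosts );
        if( MB_SUCCESS != rval ) return rval;
        // 0: vertices whose NOT_OWNED bit is set
        return pc.get_pstatus_entities( 0, PSTATUS_NOT_OWNED, not_owned_verts );
    }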
ErrorCode moab::ParallelComm::get_remote_handles( EntityHandle* local_vec, EntityHandle* rem_vec, int num_ents, int to_proc )
Definition at line 1064 of file ParallelComm.cpp.
References moab::error(), ErrorCode, MB_CHK_ERR, and MB_SUCCESS.
Referenced by check_my_shared_handles(), get_remote_handles(), pack_entity_seq(), pack_sets(), pack_tag(), and settle_intersection_points().
{
    ErrorCode error;
    std::vector< EntityHandle > newents;
    error = get_remote_handles( true, local_vec, rem_vec, num_ents, to_proc, newents );
    MB_CHK_ERR( error );

    return MB_SUCCESS;
}
ErrorCode moab::ParallelComm::get_remote_handles( const bool store_remote_handles, EntityHandle* from_vec, EntityHandle* to_vec_tmp, int num_ents, int to_proc, const std::vector< EntityHandle >& new_ents ) [private]
Replace handles in from_vec with the corresponding handles on to_proc (by checking shared[p/h]_tag and shared[p/h]s_tag). If there is no remote handle and new_ents is non-null, substitute CREATE_HANDLE(MBMAXTYPE, index) instead, where index is the handle's position in new_ents.
Definition at line 1875 of file ParallelComm.cpp.
References moab::CREATE_HANDLE(), ErrorCode, get_shared_proc_tags(), MAX_SHARING_PROCS, MB_CHK_SET_ERR, MB_SET_ERR, MB_SUCCESS, mbImpl, MBMAXTYPE, moab::ProcConfig::proc_rank(), procConfig, and moab::Interface::tag_get_data().
{ // NOTE: THIS IMPLEMENTATION IS JUST LIKE THE RANGE-BASED VERSION, NO REUSE // AT THIS TIME, SO IF YOU FIX A BUG IN THIS VERSION, IT MAY BE IN THE // OTHER VERSION TOO!!! if( 0 == num_ents ) return MB_SUCCESS; // Use a local destination ptr in case we're doing an in-place copy std::vector< EntityHandle > tmp_vector; EntityHandle* to_vec = to_vec_tmp; if( to_vec == from_vec ) { tmp_vector.resize( num_ents ); to_vec = &tmp_vector[0]; } if( !store_remote_handles ) { int err; // In this case, substitute position in new_ents list for( int i = 0; i < num_ents; i++ ) { int ind = std::lower_bound( new_ents.begin(), new_ents.end(), from_vec[i] ) - new_ents.begin(); assert( new_ents[ind] == from_vec[i] ); to_vec[i] = CREATE_HANDLE( MBMAXTYPE, ind, err ); assert( to_vec[i] != 0 && !err && -1 != ind ); } } else { Tag shp_tag, shps_tag, shh_tag, shhs_tag, pstat_tag; ErrorCode result = get_shared_proc_tags( shp_tag, shps_tag, shh_tag, shhs_tag, pstat_tag );MB_CHK_SET_ERR( result, "Failed to get shared proc tags" ); // Get single-proc destination handles and shared procs std::vector< int > sharing_procs( num_ents ); result = mbImpl->tag_get_data( shh_tag, from_vec, num_ents, to_vec );MB_CHK_SET_ERR( result, "Failed to get shared handle tag for remote_handles" ); result = mbImpl->tag_get_data( shp_tag, from_vec, num_ents, &sharing_procs[0] );MB_CHK_SET_ERR( result, "Failed to get sharing proc tag in remote_handles" ); for( int j = 0; j < num_ents; j++ ) { if( to_vec[j] && sharing_procs[j] != to_proc ) to_vec[j] = 0; } EntityHandle tmp_handles[MAX_SHARING_PROCS]; int tmp_procs[MAX_SHARING_PROCS]; int i; // Go through results, and for 0-valued ones, look for multiple shared proc for( i = 0; i < num_ents; i++ ) { if( !to_vec[i] ) { result = mbImpl->tag_get_data( shps_tag, from_vec + i, 1, tmp_procs ); if( MB_SUCCESS == result ) { for( int j = 0; j < MAX_SHARING_PROCS; j++ ) { if( -1 == tmp_procs[j] ) break; else if( tmp_procs[j] == to_proc ) { result = mbImpl->tag_get_data( shhs_tag, from_vec + i, 1, tmp_handles );MB_CHK_SET_ERR( result, "Failed to get sharedhs tag data" ); to_vec[i] = tmp_handles[j]; assert( to_vec[i] ); break; } } } if( !to_vec[i] ) { int j = std::lower_bound( new_ents.begin(), new_ents.end(), from_vec[i] ) - new_ents.begin(); if( (int)new_ents.size() == j ) { std::cout << "Failed to find new entity in send list, proc " << procConfig.proc_rank() << std::endl; for( int k = 0; k <= num_ents; k++ ) std::cout << k << ": " << from_vec[k] << " " << to_vec[k] << std::endl; MB_SET_ERR( MB_FAILURE, "Failed to find new entity in send list" ); } int err; to_vec[i] = CREATE_HANDLE( MBMAXTYPE, j, err ); if( err ) { MB_SET_ERR( MB_FAILURE, "Failed to create handle in remote_handles" ); } } } } } // memcpy over results if from_vec and to_vec are the same if( to_vec_tmp == from_vec ) memcpy( from_vec, to_vec, num_ents * sizeof( EntityHandle ) ); return MB_SUCCESS; }
ErrorCode moab::ParallelComm::get_remote_handles( const bool store_remote_handles, const Range& from_range, Range& to_range, int to_proc, const std::vector< EntityHandle >& new_ents ) [private]
Same as the other version, except from_range and to_range should be different here.
Definition at line 2055 of file ParallelComm.cpp.
References ErrorCode, get_remote_handles(), MB_CHK_SET_ERR, and moab::Range::size().
{
    std::vector< EntityHandle > to_vector( from_range.size() );

    ErrorCode result = get_remote_handles( store_remote_handles, from_range, &to_vector[0], to_proc, new_ents );
    MB_CHK_SET_ERR( result, "Failed to get remote handles" );
    std::copy( to_vector.begin(), to_vector.end(), range_inserter( to_range ) );
    return result;
}
ErrorCode moab::ParallelComm::get_remote_handles( const bool store_remote_handles, const Range& from_range, EntityHandle* to_vec, int to_proc, const std::vector< EntityHandle >& new_ents ) [private]
Same as the other version, except the range is packed into a vector.
Definition at line 1974 of file ParallelComm.cpp.
References moab::Range::begin(), moab::CREATE_HANDLE(), moab::Range::empty(), moab::Range::end(), ErrorCode, get_shared_proc_tags(), MAX_SHARING_PROCS, MB_CHK_SET_ERR, MB_SET_ERR, MB_SUCCESS, mbImpl, MBMAXTYPE, moab::Range::size(), and moab::Interface::tag_get_data().
{ // NOTE: THIS IMPLEMENTATION IS JUST LIKE THE VECTOR-BASED VERSION, NO REUSE // AT THIS TIME, SO IF YOU FIX A BUG IN THIS VERSION, IT MAY BE IN THE // OTHER VERSION TOO!!! if( from_range.empty() ) return MB_SUCCESS; if( !store_remote_handles ) { int err; // In this case, substitute position in new_ents list Range::iterator rit; unsigned int i; for( rit = from_range.begin(), i = 0; rit != from_range.end(); ++rit, i++ ) { int ind = std::lower_bound( new_ents.begin(), new_ents.end(), *rit ) - new_ents.begin(); assert( new_ents[ind] == *rit ); to_vec[i] = CREATE_HANDLE( MBMAXTYPE, ind, err ); assert( to_vec[i] != 0 && !err && -1 != ind ); } } else { Tag shp_tag, shps_tag, shh_tag, shhs_tag, pstat_tag; ErrorCode result = get_shared_proc_tags( shp_tag, shps_tag, shh_tag, shhs_tag, pstat_tag );MB_CHK_SET_ERR( result, "Failed to get shared proc tags" ); // Get single-proc destination handles and shared procs std::vector< int > sharing_procs( from_range.size() ); result = mbImpl->tag_get_data( shh_tag, from_range, to_vec );MB_CHK_SET_ERR( result, "Failed to get shared handle tag for remote_handles" ); result = mbImpl->tag_get_data( shp_tag, from_range, &sharing_procs[0] );MB_CHK_SET_ERR( result, "Failed to get sharing proc tag in remote_handles" ); for( unsigned int j = 0; j < from_range.size(); j++ ) { if( to_vec[j] && sharing_procs[j] != to_proc ) to_vec[j] = 0; } EntityHandle tmp_handles[MAX_SHARING_PROCS]; int tmp_procs[MAX_SHARING_PROCS]; // Go through results, and for 0-valued ones, look for multiple shared proc Range::iterator rit; unsigned int i; for( rit = from_range.begin(), i = 0; rit != from_range.end(); ++rit, i++ ) { if( !to_vec[i] ) { result = mbImpl->tag_get_data( shhs_tag, &( *rit ), 1, tmp_handles ); if( MB_SUCCESS == result ) { result = mbImpl->tag_get_data( shps_tag, &( *rit ), 1, tmp_procs );MB_CHK_SET_ERR( result, "Failed to get sharedps tag data" ); for( int j = 0; j < MAX_SHARING_PROCS; j++ ) if( tmp_procs[j] == to_proc ) { to_vec[i] = tmp_handles[j]; break; } } if( !to_vec[i] ) { int j = std::lower_bound( new_ents.begin(), new_ents.end(), *rit ) - new_ents.begin(); if( (int)new_ents.size() == j ) { MB_SET_ERR( MB_FAILURE, "Failed to find new entity in send list" ); } int err; to_vec[i] = CREATE_HANDLE( MBMAXTYPE, j, err ); if( err ) { MB_SET_ERR( MB_FAILURE, "Failed to create handle in remote_handles" ); } } } } } return MB_SUCCESS; }
ErrorCode moab::ParallelComm::get_sent_ents( const bool is_iface, const int bridge_dim, const int ghost_dim, const int num_layers, const int addl_ents, Range* sent_ents, Range& allsent, TupleList& entprocs ) [private]
Definition at line 6518 of file ParallelComm.cpp.
References buffProcs, moab::Range::clear(), moab::TupleList::disableWriteAccess(), moab::Range::empty(), moab::TupleList::enableWriteAccess(), moab::Range::end(), ErrorCode, filter_pstatus(), get_ghosted_entities(), get_iface_entities(), moab::TupleList::get_n(), moab::TupleList::inc_n(), moab::TupleList::initialize(), MB_CHK_SET_ERR, MB_SUCCESS, moab::Range::merge(), PSTATUS_AND, PSTATUS_SHARED, moab::TupleList::buffer::reset(), size(), moab::TupleList::sort(), moab::subtract(), moab::TupleList::vi_wr, and moab::TupleList::vul_wr.
Referenced by exchange_ghost_cells().
{ ErrorCode result; unsigned int ind; std::vector< unsigned int >::iterator proc_it; Range tmp_range; // Done in a separate loop over procs because sometimes later procs // need to add info to earlier procs' messages for( ind = 0, proc_it = buffProcs.begin(); proc_it != buffProcs.end(); ++proc_it, ind++ ) { if( !is_iface ) { result = get_ghosted_entities( bridge_dim, ghost_dim, buffProcs[ind], num_layers, addl_ents, sent_ents[ind] );MB_CHK_SET_ERR( result, "Failed to get ghost layers" ); } else { result = get_iface_entities( buffProcs[ind], -1, sent_ents[ind] );MB_CHK_SET_ERR( result, "Failed to get interface layers" ); } // Filter out entities already shared with destination tmp_range.clear(); result = filter_pstatus( sent_ents[ind], PSTATUS_SHARED, PSTATUS_AND, buffProcs[ind], &tmp_range );MB_CHK_SET_ERR( result, "Failed to filter on owner" ); if( !tmp_range.empty() ) sent_ents[ind] = subtract( sent_ents[ind], tmp_range ); allsent.merge( sent_ents[ind] ); } //=========================================== // Need to get procs each entity is sent to //=========================================== // Get the total # of proc/handle pairs int npairs = 0; for( ind = 0; ind < buffProcs.size(); ind++ ) npairs += sent_ents[ind].size(); // Allocate a TupleList of that size entprocs.initialize( 1, 0, 1, 0, npairs ); entprocs.enableWriteAccess(); // Put the proc/handle pairs in the list for( ind = 0, proc_it = buffProcs.begin(); proc_it != buffProcs.end(); ++proc_it, ind++ ) { for( Range::iterator rit = sent_ents[ind].begin(); rit != sent_ents[ind].end(); ++rit ) { entprocs.vi_wr[entprocs.get_n()] = *proc_it; entprocs.vul_wr[entprocs.get_n()] = *rit; entprocs.inc_n(); } } // Sort by handle moab::TupleList::buffer sort_buffer; sort_buffer.buffer_init( npairs ); entprocs.sort( 1, &sort_buffer ); entprocs.disableWriteAccess(); sort_buffer.reset(); return MB_SUCCESS; }
ErrorCode moab::ParallelComm::get_shared_entities( int other_proc, Range& shared_ents, int dim = -1, const bool iface = false, const bool owned_filter = false )
Get shared entities of the specified dimension. If other_proc is -1, any shared entities are returned. If dim is -1, entities of all dimensions on the interface are returned.
Parameters:
    other_proc    Rank of processor for which interface entities are requested
    shared_ents   Entities returned from function
    dim           Dimension of interface entities requested
    iface         If true, return only entities on the interface
    owned_filter  If true, return only owned shared entities
Definition at line 8801 of file ParallelComm.cpp.
References moab::Range::clear(), dim, ErrorCode, filter_pstatus(), moab::Range::lower_bound(), MB_CHK_SET_ERR, MB_SUCCESS, moab::Range::merge(), PSTATUS_AND, PSTATUS_INTERFACE, PSTATUS_NOT, PSTATUS_NOT_OWNED, PSTATUS_SHARED, sharedEnts, moab::CN::TypeDimensionMap, and moab::Range::upper_bound().
Referenced by check_my_shared_handles(), check_shared_ents(), moab::ParCommGraph::compute_partition(), main(), perform_laplacian_smoothing(), perform_lloyd_relaxation(), test_ghost_elements(), test_reduce_tag_explicit_dest(), and test_reduce_tags().
{
    shared_ents.clear();
    ErrorCode result = MB_SUCCESS;

    // Dimension
    if( -1 != dim )
    {
        DimensionPair dp = CN::TypeDimensionMap[dim];
        Range dum_range;
        std::copy( sharedEnts.begin(), sharedEnts.end(), range_inserter( dum_range ) );
        shared_ents.merge( dum_range.lower_bound( dp.first ), dum_range.upper_bound( dp.second ) );
    }
    else
        std::copy( sharedEnts.begin(), sharedEnts.end(), range_inserter( shared_ents ) );

    // Filter by iface
    if( iface )
    {
        result = filter_pstatus( shared_ents, PSTATUS_INTERFACE, PSTATUS_AND );
        MB_CHK_SET_ERR( result, "Failed to filter by iface" );
    }

    // Filter by owned
    if( owned_filter )
    {
        result = filter_pstatus( shared_ents, PSTATUS_NOT_OWNED, PSTATUS_NOT );
        MB_CHK_SET_ERR( result, "Failed to filter by owned" );
    }

    // Filter by proc
    if( -1 != other_proc )
    {
        result = filter_pstatus( shared_ents, PSTATUS_SHARED, PSTATUS_AND, other_proc );
        MB_CHK_SET_ERR( result, "Failed to filter by proc" );
    }

    return result;
}
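Example (illustrative, not from the MOAB sources): two common filter combinations; pc is an assumed ParallelComm, and rank 3 is an arbitrary illustrative neighbor.

    #include "moab/ParallelComm.hpp"

    using namespace moab;

    ErrorCode shared_queries( ParallelComm& pc )
    {
        // All owned shared entities, any dimension, any sharing proc
        Range owned_shared;
        ErrorCode rval = pc.get_shared_entities( -1, owned_shared, -1, false, true );
        if( MB_SUCCESS != rval ) return rval;

        // Interface faces (dim 2) shared specifically with rank 3
        Range iface_faces;
        return pc.get_shared_entities( 3, iface_faces, 2, true, false );
    }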
ErrorCode moab::ParallelComm::get_shared_proc_tags( Tag& sharedp_tag, Tag& sharedps_tag, Tag& sharedh_tag, Tag& sharedhs_tag, Tag& pstatus_tag ) [inline]
Return the tags used to indicate shared procs and handles.
Definition at line 1574 of file ParallelComm.hpp.
References MB_SUCCESS, pstatus_tag(), sharedh_tag(), sharedhs_tag(), sharedp_tag(), and sharedps_tag().
Referenced by create_interface_sets(), get_remote_handles(), moab::RefinerTagManager::RefinerTagManager(), resolve_shared_ents(), and tag_shared_verts().
{
    sharedp = sharedp_tag();
    sharedps = sharedps_tag();
    sharedh = sharedh_tag();
    sharedhs = sharedhs_tag();
    pstatus = pstatus_tag();
    return MB_SUCCESS;
}
ErrorCode moab::ParallelComm::get_shared_sets( Range& result ) const
Get all shared sets.
Definition at line 8899 of file ParallelComm.cpp.
References moab::SharedSetData::get_shared_sets(), and sharedSetData.
Referenced by moab::WriteHDF5Parallel::create_meshset_tables().
{ return sharedSetData->get_shared_sets( result ); }
ErrorCode moab::ParallelComm::get_sharing_data( const EntityHandle entity, int* ps, EntityHandle* hs, unsigned char& pstat, unsigned int& num_ps )
Get the shared processors/handles for an entity. Arrays must be large enough to receive data for all sharing procs. Does *not* include this proc if only shared with one other proc.
Parameters:
    entity   Entity being queried
    ps       Pointer to sharing proc data
    hs       Pointer to shared proc handle data
    pstat    Reference to pstatus data returned from this function
Definition at line 3007 of file ParallelComm.cpp.
References ErrorCode, MAX_SHARING_PROCS, MB_CHK_SET_ERR, MB_SUCCESS, mbImpl, PSTATUS_MULTISHARED, PSTATUS_SHARED, pstatus_tag(), sharedh_tag(), sharedhs_tag(), sharedp_tag(), sharedps_tag(), and moab::Interface::tag_get_data().
Referenced by augment_default_sets_with_ghosts(), build_sharedhps_list(), check_clean_iface(), check_local_shared(), check_shared_ents(), moab::ParCommGraph::compute_partition(), correct_thin_ghost_layers(), create_interface_sets(), delete_entities(), exchange_owned_meshs(), get_interface_sets(), get_part_neighbor_ids(), get_proc_nvecs(), get_sharing_data(), list_entities(), pack_shared_handles(), update_remote_data(), and update_remote_data_old().
{
    ErrorCode result = mbImpl->tag_get_data( pstatus_tag(), &entity, 1, &pstat );
    MB_CHK_SET_ERR( result, "Failed to get pstatus tag data" );
    if( pstat & PSTATUS_MULTISHARED )
    {
        result = mbImpl->tag_get_data( sharedps_tag(), &entity, 1, ps );
        MB_CHK_SET_ERR( result, "Failed to get sharedps tag data" );
        if( hs )
        {
            result = mbImpl->tag_get_data( sharedhs_tag(), &entity, 1, hs );
            MB_CHK_SET_ERR( result, "Failed to get sharedhs tag data" );
        }
        num_ps = std::find( ps, ps + MAX_SHARING_PROCS, -1 ) - ps;
    }
    else if( pstat & PSTATUS_SHARED )
    {
        result = mbImpl->tag_get_data( sharedp_tag(), &entity, 1, ps );
        MB_CHK_SET_ERR( result, "Failed to get sharedp tag data" );
        if( hs )
        {
            result = mbImpl->tag_get_data( sharedh_tag(), &entity, 1, hs );
            MB_CHK_SET_ERR( result, "Failed to get sharedh tag data" );
            hs[1] = 0;
        }
        // Initialize past end of data
        ps[1] = -1;
        num_ps = 1;
    }
    else
    {
        ps[0] = -1;
        if( hs ) hs[0] = 0;
        num_ps = 0;
    }

    assert( MAX_SHARING_PROCS >= num_ps );

    return MB_SUCCESS;
}
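Example (illustrative, not from the MOAB sources): print the sharing procs and remote handles of one entity; pc and ent are assumed from surrounding code.

    #include <iostream>
    #include "moab/ParallelComm.hpp"

    using namespace moab;

    ErrorCode show_sharing( ParallelComm& pc, EntityHandle ent )
    {
        int procs[MAX_SHARING_PROCS];
        EntityHandle handles[MAX_SHARING_PROCS];
        unsigned char pstat;
        unsigned int num_ps;
        ErrorCode rval = pc.get_sharing_data( ent, procs, handles, pstat, num_ps );
        if( MB_SUCCESS != rval ) return rval;
        for( unsigned int i = 0; i < num_ps; i++ )
            std::cout << "proc " << procs[i] << " handle " << handles[i] << std::endl;
        return MB_SUCCESS;
    }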
ErrorCode moab::ParallelComm::get_sharing_data( const EntityHandle entity, int* ps, EntityHandle* hs, unsigned char& pstat, int& num_ps ) [inline]
Get the shared processors/handles for an entity. Same as the other version, but with int num_ps.
Parameters:
    entity   Entity being queried
    ps       Pointer to sharing proc data
    hs       Pointer to shared proc handle data
    pstat    Reference to pstatus data returned from this function
Definition at line 1685 of file ParallelComm.hpp.
References ErrorCode, get_sharing_data(), and MB_SUCCESS.
{
    unsigned int dum_ps;
    ErrorCode result = get_sharing_data( entity, ps, hs, pstat, dum_ps );
    if( MB_SUCCESS == result ) num_ps = dum_ps;
    return result;
}
ErrorCode moab::ParallelComm::get_sharing_data( const EntityHandle* entities, int num_entities, std::set< int >& procs, int op = Interface::INTERSECT ) [inline]
Get the intersection or union of all sharing processors. The processor set is cleared as part of this function.
Parameters:
    entities      Entity list ptr
    num_entities  Number of entities
    procs         Processors returned
    op            Either Interface::UNION or Interface::INTERSECT
Definition at line 1673 of file ParallelComm.hpp.
References entities, and get_sharing_data().
{
    Range dum_range;
    // cast away constness 'cuz the range is passed as const
    EntityHandle* ents_cast = const_cast< EntityHandle* >( entities );
    std::copy( ents_cast, ents_cast + num_entities, range_inserter( dum_range ) );
    return get_sharing_data( dum_range, procs, op );
}
ErrorCode moab::ParallelComm::get_sharing_data( const Range& entities, std::set< int >& procs, int op = Interface::INTERSECT )
Get the intersection or union of all sharing processors. Same as the previous variant, but with a Range as input.
Definition at line 2960 of file ParallelComm.cpp.
References moab::Range::begin(), moab::Range::end(), ErrorCode, get_sharing_data(), moab::Interface::INTERSECT, MAX_SHARING_PROCS, MB_CHK_SET_ERR, MB_SUCCESS, PSTATUS_SHARED, and moab::Interface::UNION.
{ // Get the union or intersection of sharing data for multiple entities ErrorCode result; int sp2[MAX_SHARING_PROCS]; int num_ps; unsigned char pstat; std::set< int > tmp_procs; procs.clear(); for( Range::const_iterator rit = entities.begin(); rit != entities.end(); ++rit ) { // Get sharing procs result = get_sharing_data( *rit, sp2, NULL, pstat, num_ps );MB_CHK_SET_ERR( result, "Failed to get sharing data in get_sharing_data" ); if( !( pstat & PSTATUS_SHARED ) && Interface::INTERSECT == operation ) { procs.clear(); return MB_SUCCESS; } if( rit == entities.begin() ) { std::copy( sp2, sp2 + num_ps, std::inserter( procs, procs.begin() ) ); } else { std::sort( sp2, sp2 + num_ps ); tmp_procs.clear(); if( Interface::UNION == operation ) std::set_union( procs.begin(), procs.end(), sp2, sp2 + num_ps, std::inserter( tmp_procs, tmp_procs.end() ) ); else if( Interface::INTERSECT == operation ) std::set_intersection( procs.begin(), procs.end(), sp2, sp2 + num_ps, std::inserter( tmp_procs, tmp_procs.end() ) ); else { assert( "Unknown operation." && false ); return MB_FAILURE; } procs.swap( tmp_procs ); } if( Interface::INTERSECT == operation && procs.empty() ) return MB_SUCCESS; } return MB_SUCCESS; }
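Example (illustrative, not from the MOAB sources): intersection vs. union over a range of entities; pc and ents are assumed. After the calls, common holds the procs sharing all of ents, and any holds the procs sharing at least one of them.

    #include <set>
    #include "moab/ParallelComm.hpp"

    using namespace moab;

    ErrorCode sharing_sets( ParallelComm& pc, const Range& ents, std::set< int >& common, std::set< int >& any )
    {
        ErrorCode rval = pc.get_sharing_data( ents, common, Interface::INTERSECT );
        if( MB_SUCCESS != rval ) return rval;
        return pc.get_sharing_data( ents, any, Interface::UNION );
    }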
ErrorCode moab::ParallelComm::get_sharing_parts( EntityHandle entity, int part_ids_out[MAX_SHARING_PROCS], int& num_part_ids_out, EntityHandle remote_handles[MAX_SHARING_PROCS] = 0 )
Definition at line 8382 of file ParallelComm.cpp.
References ErrorCode, MB_CHK_SET_ERR, MB_SUCCESS, mbImpl, proc_config(), moab::ProcConfig::proc_rank(), PSTATUS_SHARED, pstatus_tag(), sharedh_tag(), sharedhs_tag(), sharedp_tag(), sharedps_tag(), and moab::Interface::tag_get_data().
Referenced by iMeshP_getCopies(), iMeshP_getCopyOnPart(), iMeshP_getCopyParts(), and iMeshP_getNumCopies().
{ // FIXME : assumes one part per proc, and therefore part_id == rank // If entity is not shared, then we're the owner. unsigned char pstat; ErrorCode result = mbImpl->tag_get_data( pstatus_tag(), &entity, 1, &pstat );MB_CHK_SET_ERR( result, "Failed to get pstatus tag data" ); if( !( pstat & PSTATUS_SHARED ) ) { part_ids_out[0] = proc_config().proc_rank(); if( remote_handles ) remote_handles[0] = entity; num_part_ids_out = 1; return MB_SUCCESS; } // If entity is shared with one other proc, then // sharedp_tag will contain a positive value. result = mbImpl->tag_get_data( sharedp_tag(), &entity, 1, part_ids_out );MB_CHK_SET_ERR( result, "Failed to get sharedp tag data" ); if( part_ids_out[0] != -1 ) { num_part_ids_out = 2; part_ids_out[1] = proc_config().proc_rank(); // Done? if( !remote_handles ) return MB_SUCCESS; // Get handles on remote processors (and this one) remote_handles[1] = entity; return mbImpl->tag_get_data( sharedh_tag(), &entity, 1, remote_handles ); } // If here, then the entity is shared with at least two other processors. // Get the list from the sharedps_tag result = mbImpl->tag_get_data( sharedps_tag(), &entity, 1, part_ids_out ); if( MB_SUCCESS != result ) return result; // Count number of valid (positive) entries in sharedps_tag for( num_part_ids_out = 0; num_part_ids_out < MAX_SHARING_PROCS && part_ids_out[num_part_ids_out] >= 0; num_part_ids_out++ ) ; // part_ids_out[num_part_ids_out++] = proc_config().proc_rank(); #ifndef NDEBUG int my_idx = std::find( part_ids_out, part_ids_out + num_part_ids_out, proc_config().proc_rank() ) - part_ids_out; assert( my_idx < num_part_ids_out ); #endif // Done? if( !remote_handles ) return MB_SUCCESS; // Get remote handles result = mbImpl->tag_get_data( sharedhs_tag(), &entity, 1, remote_handles ); // remote_handles[num_part_ids_out - 1] = entity; assert( remote_handles[my_idx] == entity ); return result; }
ErrorCode moab::ParallelComm::get_tag_send_list( const Range& all_entities, std::vector< Tag >& all_tags, std::vector< Range >& tag_ranges ) [private]
Get list of tags for which to exchange data.
Get tags and entities for which to exchange tag data. This function was originally part of 'pack_tags', requested with the 'all_possible_tags' parameter.
Parameters:
    all_entities  Input. The set of entities for which data is to be communicated.
    all_tags      Output. Populated with the handles of tags to be sent.
    tag_ranges    Output. For each corresponding tag in all_tags, the subset of 'all_entities' for which a tag value has been set.
Definition at line 3650 of file ParallelComm.cpp.
References moab::Range::empty(), ErrorCode, moab::intersect(), MB_CHK_SET_ERR, MB_SUCCESS, mbImpl, sequenceManager, moab::Interface::tag_get_name(), and moab::Interface::tag_get_tags().
Referenced by pack_buffer().
{
    std::vector< Tag > tmp_tags;
    ErrorCode result = mbImpl->tag_get_tags( tmp_tags );
    MB_CHK_SET_ERR( result, "Failed to get tags in pack_tags" );

    std::vector< Tag >::iterator tag_it;
    for( tag_it = tmp_tags.begin(); tag_it != tmp_tags.end(); ++tag_it )
    {
        std::string tag_name;
        result = mbImpl->tag_get_name( *tag_it, tag_name );
        if( tag_name.c_str()[0] == '_' && tag_name.c_str()[1] == '_' ) continue;

        Range tmp_range;
        result = ( *tag_it )->get_tagged_entities( sequenceManager, tmp_range );
        MB_CHK_SET_ERR( result, "Failed to get entities for tag in pack_tags" );
        tmp_range = intersect( tmp_range, whole_range );

        if( tmp_range.empty() ) continue;

        // OK, we'll be sending this tag
        all_tags.push_back( *tag_it );
        tag_ranges.push_back( Range() );
        tag_ranges.back().swap( tmp_range );
    }

    return MB_SUCCESS;
}
void moab::ParallelComm::initialize() [private]
Definition at line 341 of file ParallelComm.cpp.
References add_pcomm(), buffProcs, errorHandler, localOwnedBuffs, MAX_SHARING_PROCS, mbImpl, myDebug, pcommID, moab::ProcConfig::proc_rank(), procConfig, moab::Interface::query_interface(), remoteOwnedBuffs, moab::Core::sequence_manager(), sequenceManager, and moab::DebugOutput::set_rank().
Referenced by ParallelComm().
{
    Core* core = dynamic_cast< Core* >( mbImpl );
    sequenceManager = core->sequence_manager();
    mbImpl->query_interface( errorHandler );

    // Initialize MPI, if necessary
    int flag = 1;
    int retval = MPI_Initialized( &flag );
    if( MPI_SUCCESS != retval || !flag )
    {
        int argc = 0;
        char** argv = NULL;

        // mpi not initialized yet - initialize here
        retval = MPI_Init( &argc, &argv );
        assert( MPI_SUCCESS == retval );
    }

    // Reserve space for vectors
    buffProcs.reserve( MAX_SHARING_PROCS );
    localOwnedBuffs.reserve( MAX_SHARING_PROCS );
    remoteOwnedBuffs.reserve( MAX_SHARING_PROCS );

    pcommID = add_pcomm( this );

    if( !myDebug )
    {
        myDebug = new DebugOutput( "ParallelComm", std::cerr );
        myDebug->set_rank( procConfig.proc_rank() );
    }
}
Range& moab::ParallelComm::interface_sets() [inline]
Definition at line 673 of file ParallelComm.hpp.
References interfaceSets.
Referenced by check_clean_iface(), moab::NCHelperScrip::create_mesh(), and get_interface_sets().
{ return interfaceSets; }
const Range& moab::ParallelComm::interface_sets() const [inline]
Definition at line 677 of file ParallelComm.hpp.
References interfaceSets.
{ return interfaceSets; }
bool moab::ParallelComm::is_iface_proc( EntityHandle this_set, int to_proc ) [private]
Returns true if the set is an interface shared with to_proc.
Definition at line 5556 of file ParallelComm.cpp.
References ErrorCode, MAX_SHARING_PROCS, MB_SUCCESS, mbImpl, sharedp_tag(), sharedps_tag(), and moab::Interface::tag_get_data().
Referenced by get_ghosted_entities(), and get_iface_entities().
{
    int sharing_procs[MAX_SHARING_PROCS];
    std::fill( sharing_procs, sharing_procs + MAX_SHARING_PROCS, -1 );
    ErrorCode result = mbImpl->tag_get_data( sharedp_tag(), &this_set, 1, sharing_procs );
    if( MB_SUCCESS == result && to_proc == sharing_procs[0] ) return true;

    result = mbImpl->tag_get_data( sharedps_tag(), &this_set, 1, sharing_procs );
    if( MB_SUCCESS != result ) return false;

    for( int i = 0; i < MAX_SHARING_PROCS; i++ )
    {
        if( to_proc == sharing_procs[i] )
            return true;
        else if( -1 == sharing_procs[i] )
            return false;
    }

    return false;
}
ErrorCode moab::ParallelComm::list_entities( const EntityHandle* ents, int num_ents )
Definition at line 2573 of file ParallelComm.cpp.
References ErrorCode, moab::Interface::get_coords(), get_sharing_data(), moab::Interface::id_from_handle(), moab::Interface::list_entities(), MAX_SHARING_PROCS, MB_CHK_ERR, MB_CHK_SET_ERR, MB_SUCCESS, mbImpl, moab::Range::print(), PSTATUS_GHOST, PSTATUS_INTERFACE, PSTATUS_MULTISHARED, PSTATUS_NOT_OWNED, PSTATUS_SHARED, and sharedEnts.
Referenced by build_sharedhps_list(), check_local_shared(), check_my_shared_handles(), list_entities(), moab::ReadParallel::load_file(), and moab::ScdInterface::tag_shared_vertices().
{ if( NULL == ents ) { Range shared_ents; std::copy( sharedEnts.begin(), sharedEnts.end(), range_inserter( shared_ents ) ); shared_ents.print( "Shared entities:\n" ); return MB_SUCCESS; } unsigned char pstat; EntityHandle tmp_handles[MAX_SHARING_PROCS]; int tmp_procs[MAX_SHARING_PROCS]; unsigned int num_ps; ErrorCode result; for( int i = 0; i < num_ents; i++ ) { result = mbImpl->list_entities( ents + i, 1 );MB_CHK_ERR( result ); double coords[3]; result = mbImpl->get_coords( ents + i, 1, coords ); std::cout << " coords: " << coords[0] << " " << coords[1] << " " << coords[2] << "\n"; result = get_sharing_data( ents[i], tmp_procs, tmp_handles, pstat, num_ps );MB_CHK_SET_ERR( result, "Failed to get sharing data" ); std::cout << "Pstatus: "; if( !num_ps ) std::cout << "local " << std::endl; else { if( pstat & PSTATUS_NOT_OWNED ) std::cout << "NOT_OWNED; "; if( pstat & PSTATUS_SHARED ) std::cout << "SHARED; "; if( pstat & PSTATUS_MULTISHARED ) std::cout << "MULTISHARED; "; if( pstat & PSTATUS_INTERFACE ) std::cout << "INTERFACE; "; if( pstat & PSTATUS_GHOST ) std::cout << "GHOST; "; std::cout << std::endl; for( unsigned int j = 0; j < num_ps; j++ ) { std::cout << " proc " << tmp_procs[j] << " id (handle) " << mbImpl->id_from_handle( tmp_handles[j] ) << "(" << tmp_handles[j] << ")" << std::endl; } } std::cout << std::endl; } return MB_SUCCESS; }
ErrorCode moab::ParallelComm::list_entities( const Range& ents )
Definition at line 2621 of file ParallelComm.cpp.
References moab::Range::begin(), moab::Range::end(), list_entities(), and MB_SUCCESS.
{
    for( Range::iterator rit = ents.begin(); rit != ents.end(); ++rit )
        list_entities( &( *rit ), 1 );

    return MB_SUCCESS;
}
ErrorCode moab::ParallelComm::pack_adjacencies( Range& entities, Range::const_iterator& start_rit, Range& whole_range, unsigned char*& buff_ptr, int& count, const bool just_count, const bool store_handles, const int to_proc ) [private]
Definition at line 3450 of file ParallelComm.cpp.
{
    return MB_FAILURE;
}
ErrorCode moab::ParallelComm::pack_buffer( Range& orig_ents, const bool adjacencies, const bool tags, const bool store_remote_handles, const int to_proc, Buffer* buff, TupleList* entprocs = NULL, Range* allsent = NULL )
Public because we want to unit test these externally.
Definition at line 1418 of file ParallelComm.cpp.
References moab::ParallelComm::Buffer::buff_ptr, moab::ParallelComm::Buffer::check_space(), ErrorCode, get_tag_send_list(), MB_CHK_SET_ERR, pack_entities(), moab::PACK_INT(), pack_sets(), pack_tags(), and moab::ParallelComm::Buffer::set_stored_size().
Referenced by broadcast_entities(), exchange_owned_mesh(), pack_unpack_noremoteh(), scatter_entities(), send_entities(), moab::ParCommGraph::send_mesh_parts(), and test_packing().
{ // Pack the buffer with the entity ranges, adjacencies, and tags sections // // Note: new entities used in subsequent connectivity lists, sets, or tags, // are referred to as (MBMAXTYPE + index), where index is into vector // of new entities, 0-based ErrorCode result; Range set_range; std::vector< Tag > all_tags; std::vector< Range > tag_ranges; Range::const_iterator rit; // Entities result = pack_entities( orig_ents, buff, store_remote_handles, to_proc, false, entprocs, allsent );MB_CHK_SET_ERR( result, "Packing entities failed" ); // Sets result = pack_sets( orig_ents, buff, store_remote_handles, to_proc );MB_CHK_SET_ERR( result, "Packing sets (count) failed" ); // Tags Range final_ents; if( tags ) { result = get_tag_send_list( orig_ents, all_tags, tag_ranges );MB_CHK_SET_ERR( result, "Failed to get tagged entities" ); result = pack_tags( orig_ents, all_tags, all_tags, tag_ranges, buff, store_remote_handles, to_proc );MB_CHK_SET_ERR( result, "Packing tags (count) failed" ); } else { // Set tag size to 0 buff->check_space( sizeof( int ) ); PACK_INT( buff->buff_ptr, 0 ); buff->set_stored_size(); } return result; }
ErrorCode moab::ParallelComm::pack_entities( Range& entities, Buffer* buff, const bool store_remote_handles, const int to_proc, const bool is_iface, TupleList* entprocs = NULL, Range* allsent = NULL )
Definition at line 1582 of file ParallelComm.cpp.
References moab::Range::begin(), moab::ParallelComm::Buffer::buff_ptr, build_sharedhps_list(), moab::ParallelComm::Buffer::check_space(), moab::Range::clear(), moab::Range::empty(), moab::Range::end(), moab::EntitySequence::end_handle(), moab::CN::EntityTypeName(), ErrorCode, estimate_ents_buffer_size(), moab::SequenceManager::find(), moab::Range::find(), moab::Interface::get_coords(), moab::TupleList::get_n(), moab::Range::lower_bound(), MAX_SHARING_PROCS, MB_CHK_SET_ERR, MB_SET_ERR, MB_SUCCESS, MBENTITYSET, mbImpl, MBMAXTYPE, MBVERTEX, myDebug, moab::ElementSequence::nodes_per_element(), moab::PACK_DBLS(), moab::PACK_EH(), pack_entity_seq(), moab::PACK_INT(), moab::PACK_INTS(), pstatus_tag(), moab::Range::rbegin(), sequenceManager, moab::ParallelComm::Buffer::set_stored_size(), sharedp_tag(), moab::Range::size(), moab::Range::subset_by_type(), moab::Interface::tag_get_data(), moab::DebugOutput::tprintf(), moab::EntitySequence::type(), moab::TYPE_FROM_HANDLE(), moab::TupleList::vi_rd, and moab::TupleList::vul_rd.
Referenced by exchange_ghost_cells(), and pack_buffer().
{
    // Packed information:
    // 1. # entities = E
    // 2. for e in E
    //    a. # procs sharing e, incl. sender and receiver = P
    //    b. for p in P (procs sharing e)
    //    c. for p in P (handle for e on p) (Note1)
    // 3. vertex/entity info

    // Get an estimate of the buffer size & pre-allocate buffer size
    int buff_size = estimate_ents_buffer_size( entities, store_remote_handles );
    if( buff_size < 0 ) MB_SET_ERR( MB_FAILURE, "Failed to estimate ents buffer size" );
    buff->check_space( buff_size );
    myDebug->tprintf( 3, "estimate buffer size for %d entities: %d \n", (int)entities.size(), buff_size );

    unsigned int num_ents;
    ErrorCode result;

    std::vector< EntityHandle > entities_vec( entities.size() );
    std::copy( entities.begin(), entities.end(), entities_vec.begin() );

    // First pack procs/handles sharing this ent, not including this dest but including
    // others (with zero handles)
    if( store_remote_handles )
    {
        // Buff space is at least proc + handle for each entity; use avg of 4 other procs
        // to estimate buff size, but check later
        buff->check_space( sizeof( int ) + ( 5 * sizeof( int ) + sizeof( EntityHandle ) ) * entities.size() );

        // 1. # entities = E
        PACK_INT( buff->buff_ptr, entities.size() );

        Range::iterator rit;

        // Pre-fetch sharedp and pstatus
        std::vector< int > sharedp_vals( entities.size() );
        result = mbImpl->tag_get_data( sharedp_tag(), entities, &sharedp_vals[0] );MB_CHK_SET_ERR( result, "Failed to get sharedp tag data" );
        std::vector< char > pstatus_vals( entities.size() );
        result = mbImpl->tag_get_data( pstatus_tag(), entities, &pstatus_vals[0] );MB_CHK_SET_ERR( result, "Failed to get pstatus tag data" );

        unsigned int i;
        int tmp_procs[MAX_SHARING_PROCS];
        EntityHandle tmp_handles[MAX_SHARING_PROCS];
        std::set< unsigned int > dumprocs;

        // 2. for e in E
        for( rit = entities.begin(), i = 0; rit != entities.end(); ++rit, i++ )
        {
            unsigned int ind =
                std::lower_bound( entprocs->vul_rd, entprocs->vul_rd + entprocs->get_n(), *rit ) - entprocs->vul_rd;
            assert( ind < entprocs->get_n() );
            while( ind < entprocs->get_n() && entprocs->vul_rd[ind] == *rit )
                dumprocs.insert( entprocs->vi_rd[ind++] );

            result = build_sharedhps_list( *rit, pstatus_vals[i], sharedp_vals[i], dumprocs, num_ents, tmp_procs,
                                           tmp_handles );MB_CHK_SET_ERR( result, "Failed to build sharedhps" );

            dumprocs.clear();

            // Now pack them
            buff->check_space( ( num_ents + 1 ) * sizeof( int ) + num_ents * sizeof( EntityHandle ) );
            PACK_INT( buff->buff_ptr, num_ents );
            PACK_INTS( buff->buff_ptr, tmp_procs, num_ents );
            PACK_EH( buff->buff_ptr, tmp_handles, num_ents );

#ifndef NDEBUG
            // Check for duplicates in proc list
            unsigned int dp = 0;
            for( ; dp < MAX_SHARING_PROCS && -1 != tmp_procs[dp]; dp++ )
                dumprocs.insert( tmp_procs[dp] );
            assert( dumprocs.size() == dp );
            dumprocs.clear();
#endif
        }
    }

    // Pack vertices
    Range these_ents = entities.subset_by_type( MBVERTEX );
    num_ents = these_ents.size();

    if( num_ents )
    {
        buff_size = 2 * sizeof( int ) + 3 * num_ents * sizeof( double );
        buff->check_space( buff_size );

        // Type, # ents
        PACK_INT( buff->buff_ptr, ( (int)MBVERTEX ) );
        PACK_INT( buff->buff_ptr, ( (int)num_ents ) );

        std::vector< double > tmp_coords( 3 * num_ents );
        result = mbImpl->get_coords( these_ents, &tmp_coords[0] );MB_CHK_SET_ERR( result, "Failed to get vertex coordinates" );
        PACK_DBLS( buff->buff_ptr, &tmp_coords[0], 3 * num_ents );

        myDebug->tprintf( 4, "Packed %lu ents of type %s\n", (unsigned long)these_ents.size(),
                          CN::EntityTypeName( TYPE_FROM_HANDLE( *these_ents.begin() ) ) );
    }

    // Now entities; go through range, packing by type and equal # verts per element
    Range::iterator start_rit = entities.find( *these_ents.rbegin() );
    ++start_rit;
    int last_nodes = -1;
    EntityType last_type = MBMAXTYPE;
    these_ents.clear();
    Range::iterator end_rit = start_rit;
    EntitySequence* seq;
    ElementSequence* eseq;

    while( start_rit != entities.end() || !these_ents.empty() )
    {
        // Cases:
        // A: !end, last_type == MBMAXTYPE, seq: save contig sequence in these_ents
        // B: !end, last type & nodes same, seq: save contig sequence in these_ents
        // C: !end, last type & nodes different: pack these_ents, then save contig sequence in these_ents
        // D: end: pack these_ents

        // Find the sequence holding current start entity, if we're not at end
        eseq = NULL;
        if( start_rit != entities.end() )
        {
            result = sequenceManager->find( *start_rit, seq );MB_CHK_SET_ERR( result, "Failed to find entity sequence" );
            if( NULL == seq ) return MB_FAILURE;
            eseq = dynamic_cast< ElementSequence* >( seq );
        }

        // Pack the last batch if at end or next one is different
        if( !these_ents.empty() &&
            ( !eseq || eseq->type() != last_type || last_nodes != (int)eseq->nodes_per_element() ) )
        {
            result = pack_entity_seq( last_nodes, store_remote_handles, to_proc, these_ents, entities_vec,
                                      buff );MB_CHK_SET_ERR( result, "Failed to pack entities from a sequence" );
            these_ents.clear();
        }

        if( eseq )
        {
            // Continuation of current range, just save these entities
            // Get position in entities list one past end of this sequence
            end_rit = entities.lower_bound( start_rit, entities.end(), eseq->end_handle() + 1 );

            // Put these entities in the range
            std::copy( start_rit, end_rit, range_inserter( these_ents ) );

            last_type = eseq->type();
            last_nodes = eseq->nodes_per_element();
        }
        else if( start_rit != entities.end() && TYPE_FROM_HANDLE( *start_rit ) == MBENTITYSET )
            break;

        start_rit = end_rit;
    }

    // Pack MBMAXTYPE to indicate end of ranges
    buff->check_space( sizeof( int ) );
    PACK_INT( buff->buff_ptr, ( (int)MBMAXTYPE ) );

    buff->set_stored_size();
    return MB_SUCCESS;
}
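Read together with pack_entity_seq below, the stream written by this function has the following approximate layout (a sketch inferred from the packing calls above, not a normative format description):

    // [int n_entities]                                   only if store_remote_handles
    // per entity: [int n_procs][int procs[n_procs]][EntityHandle handles[n_procs]]
    // vertices:   [int MBVERTEX][int n][double coords[3*n]]
    // elements:   per run of equal (type, nodes_per_element):
    //             [int type][int n][int nodes_per_entity][EntityHandle connectivity[n*nodes]]
    // [int MBMAXTYPE]                                    end-of-ranges terminator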
ErrorCode moab::ParallelComm::pack_entity_seq | ( | const int | nodes_per_entity, |
const bool | store_remote_handles, | ||
const int | to_proc, | ||
Range & | these_ents, | ||
std::vector< EntityHandle > & | entities, | ||
Buffer * | buff | ||
) | [private] |
pack a range of entities with equal # verts per entity, along with the range on the sending proc
Definition at line 1836 of file ParallelComm.cpp.
References moab::Range::begin(), moab::ParallelComm::Buffer::buff_ptr, moab::ParallelComm::Buffer::check_space(), moab::Range::end(), moab::CN::EntityTypeName(), ErrorCode, moab::Interface::get_connectivity(), moab::ParallelComm::Buffer::get_current_size(), get_remote_handles(), MB_CHK_SET_ERR, MB_SUCCESS, mbImpl, myDebug, moab::PACK_EH(), moab::PACK_INT(), moab::Range::size(), moab::DebugOutput::tprintf(), and moab::TYPE_FROM_HANDLE().
Referenced by pack_entities().
{
    int tmp_space = 3 * sizeof( int ) + nodes_per_entity * these_ents.size() * sizeof( EntityHandle );
    buff->check_space( tmp_space );

    // Pack the entity type
    PACK_INT( buff->buff_ptr, ( (int)TYPE_FROM_HANDLE( *these_ents.begin() ) ) );

    // Pack # ents
    PACK_INT( buff->buff_ptr, these_ents.size() );

    // Pack the nodes per entity
    PACK_INT( buff->buff_ptr, nodes_per_entity );
    myDebug->tprintf( 3, "after some pack int %d \n", buff->get_current_size() );

    // Pack the connectivity
    std::vector< EntityHandle > connect;
    ErrorCode result = MB_SUCCESS;
    for( Range::const_iterator rit = these_ents.begin(); rit != these_ents.end(); ++rit )
    {
        connect.clear();
        result = mbImpl->get_connectivity( &( *rit ), 1, connect, false );MB_CHK_SET_ERR( result, "Failed to get connectivity" );
        assert( (int)connect.size() == nodes_per_entity );
        result = get_remote_handles( store_remote_handles, &connect[0], &connect[0], connect.size(), to_proc,
                                     entities_vec );MB_CHK_SET_ERR( result, "Failed in get_remote_handles" );
        PACK_EH( buff->buff_ptr, &connect[0], connect.size() );
    }

    myDebug->tprintf( 3, "Packed %lu ents of type %s\n", (unsigned long)these_ents.size(),
                      CN::EntityTypeName( TYPE_FROM_HANDLE( *these_ents.begin() ) ) );

    return result;
}
ErrorCode moab::ParallelComm::pack_range_map | ( | Range & | this_range, |
EntityHandle | actual_start, | ||
HandleMap & | handle_map | ||
) | [private] |
pack a range map with keys in this_range and values a contiguous series of handles starting at actual_start
Definition at line 3136 of file ParallelComm.cpp.
References moab::Range::const_pair_begin(), moab::Range::const_pair_end(), moab::RangeMap< KeyType, ValType, NullVal >::insert(), and MB_SUCCESS.
{
    for( Range::const_pair_iterator key_it = key_range.const_pair_begin(); key_it != key_range.const_pair_end();
         ++key_it )
    {
        int tmp_num = ( *key_it ).second - ( *key_it ).first + 1;
        handle_map.insert( ( *key_it ).first, val_start, tmp_num );
        val_start += tmp_num;
    }

    return MB_SUCCESS;
}
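Since the handles within one Range pair are consecutive, each pair maps onto a contiguous run of values. A worked example with illustrative handle values:

    // this_range = {10..12, 20..21}, actual_start = 100:
    //   insert( 10, 100, 3 )  ->  10->100, 11->101, 12->102
    //   insert( 20, 103, 2 )  ->  20->103, 21->104
    // Values remain contiguous even though the keys are not.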
ErrorCode moab::ParallelComm::pack_remote_handles | ( | std::vector< EntityHandle > & | L1hloc, |
std::vector< EntityHandle > & | L1hrem, | ||
std::vector< int > & | procs, | ||
unsigned int | to_proc, | ||
Buffer * | buff | ||
) |
Definition at line 7370 of file ParallelComm.cpp.
References moab::ParallelComm::Buffer::buff_ptr, moab::ParallelComm::Buffer::check_space(), MB_SUCCESS, moab::PACK_EH(), moab::PACK_INT(), moab::PACK_INTS(), and moab::ParallelComm::Buffer::set_stored_size().
Referenced by exchange_ghost_cells(), exchange_owned_mesh(), recv_entities(), and recv_messages().
{
    assert( std::find( L1hloc.begin(), L1hloc.end(), (EntityHandle)0 ) == L1hloc.end() );

    // 2 vectors of handles plus ints
    buff->check_space( ( ( L1p.size() + 1 ) * sizeof( int ) + ( L1hloc.size() + 1 ) * sizeof( EntityHandle ) +
                         ( L1hrem.size() + 1 ) * sizeof( EntityHandle ) ) );

    // Should be in pairs of handles
    PACK_INT( buff->buff_ptr, L1hloc.size() );
    PACK_INTS( buff->buff_ptr, &L1p[0], L1p.size() );
    // Pack handles in reverse order, (remote, local), so on destination they
    // are ordered (local, remote)
    PACK_EH( buff->buff_ptr, &L1hrem[0], L1hrem.size() );
    PACK_EH( buff->buff_ptr, &L1hloc[0], L1hloc.size() );

    buff->set_stored_size();

    return MB_SUCCESS;
}
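The reverse-order packing is the key detail: the sender writes the receiver's handles first, so on arrival the pairs read naturally as (local, remote) without any reshuffling. The resulting stream, sketched:

    // [int n]                    number of handle pairs
    // [int L1p[n]]               proc entries
    // [EntityHandle L1hrem[n]]   receiver's handles ("local" on arrival)
    // [EntityHandle L1hloc[n]]   sender's handles   ("remote" on arrival)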
ErrorCode moab::ParallelComm::pack_sets | ( | Range & | entities, |
Buffer * | buff, | ||
const bool | store_handles, | ||
const int | to_proc | ||
) | [private] |
Definition at line 3149 of file ParallelComm.cpp.
References moab::Range::begin(), moab::ParallelComm::Buffer::buff_ptr, moab::ParallelComm::Buffer::check_space(), moab::Range::empty(), moab::Range::end(), ErrorCode, estimate_sets_buffer_size(), moab::Interface::get_child_meshsets(), moab::Interface::get_entities_by_handle(), moab::Interface::get_meshset_options(), moab::Interface::get_parent_meshsets(), get_remote_handles(), moab::ID_FROM_HANDLE(), MB_CHK_SET_ERR, MB_SET_ERR, MB_SUCCESS, MB_TAG_CREAT, MB_TAG_NOT_FOUND, MB_TAG_SPARSE, MB_TYPE_INTEGER, MBENTITYSET, mbImpl, MBMAXTYPE, myDebug, moab::Interface::num_child_meshsets(), moab::Interface::num_parent_meshsets(), moab::PACK_EH(), moab::PACK_INT(), moab::PACK_INTS(), moab::PACK_RANGE(), moab::PACK_VOID(), moab::RANGE_SIZE(), moab::ParallelComm::Buffer::set_stored_size(), moab::Range::size(), moab::Range::subset_by_type(), moab::Interface::tag_get_data(), moab::Interface::tag_get_handle(), moab::DebugOutput::tprintf(), and moab::TYPE_FROM_HANDLE().
Referenced by pack_buffer().
{
    // SETS:
    // . #sets
    // . for each set:
    //   - options[#sets] (unsigned int)
    //   - if (unordered) set range
    //   - else if ordered
    //     . #ents in set
    //     . handles[#ents]
    //   - #parents
    //   - if (#parents) handles[#parents]
    //   - #children
    //   - if (#children) handles[#children]

    // Now the sets; assume any sets the application wants to pass are in the entities list
    ErrorCode result;
    Range all_sets = entities.subset_by_type( MBENTITYSET );

    int buff_size = estimate_sets_buffer_size( all_sets, store_remote_handles );
    if( buff_size < 0 ) MB_SET_ERR( MB_FAILURE, "Failed to estimate sets buffer size" );
    buff->check_space( buff_size );

    // Number of sets
    PACK_INT( buff->buff_ptr, all_sets.size() );

    // Options for all sets
    std::vector< unsigned int > options( all_sets.size() );
    Range::iterator rit;
    std::vector< EntityHandle > members;
    int i;
    for( rit = all_sets.begin(), i = 0; rit != all_sets.end(); ++rit, i++ )
    {
        result = mbImpl->get_meshset_options( *rit, options[i] );MB_CHK_SET_ERR( result, "Failed to get meshset options" );
    }
    buff->check_space( all_sets.size() * sizeof( unsigned int ) );
    PACK_VOID( buff->buff_ptr, &options[0], all_sets.size() * sizeof( unsigned int ) );

    // Pack parallel geometry unique id
    if( !all_sets.empty() )
    {
        Tag uid_tag;
        int n_sets = all_sets.size();
        bool b_pack = false;
        std::vector< int > id_data( n_sets );
        result = mbImpl->tag_get_handle( "PARALLEL_UNIQUE_ID", 1, MB_TYPE_INTEGER, uid_tag,
                                         MB_TAG_SPARSE | MB_TAG_CREAT );MB_CHK_SET_ERR( result, "Failed to create parallel geometry unique id tag" );

        result = mbImpl->tag_get_data( uid_tag, all_sets, &id_data[0] );
        if( MB_TAG_NOT_FOUND != result )
        {
            if( MB_SUCCESS != result ) MB_SET_ERR( result, "Failed to get parallel geometry unique ids" );
            for( i = 0; i < n_sets; i++ )
            {
                if( id_data[i] != 0 )
                {
                    b_pack = true;
                    break;
                }
            }
        }

        if( b_pack )
        {
            // At least one set has a nonzero unique id; pack the ids for all sets
            buff->check_space( ( n_sets + 1 ) * sizeof( int ) );
            PACK_INT( buff->buff_ptr, n_sets );
            PACK_INTS( buff->buff_ptr, &id_data[0], n_sets );
        }
        else
        {
            buff->check_space( sizeof( int ) );
            PACK_INT( buff->buff_ptr, 0 );
        }
    }

    // Vectors/ranges
    std::vector< EntityHandle > entities_vec( entities.size() );
    std::copy( entities.begin(), entities.end(), entities_vec.begin() );
    for( rit = all_sets.begin(), i = 0; rit != all_sets.end(); ++rit, i++ )
    {
        members.clear();
        result = mbImpl->get_entities_by_handle( *rit, members );MB_CHK_SET_ERR( result, "Failed to get entities in ordered set" );
        result = get_remote_handles( store_remote_handles, &members[0], &members[0], members.size(), to_proc,
                                     entities_vec );MB_CHK_SET_ERR( result, "Failed in get_remote_handles" );
        buff->check_space( members.size() * sizeof( EntityHandle ) + sizeof( int ) );
        PACK_INT( buff->buff_ptr, members.size() );
        PACK_EH( buff->buff_ptr, &members[0], members.size() );
    }

    // Pack parent/child sets
    if( !store_remote_handles )
    {
        // Only works when not storing remote handles

        // Pack numbers of parents/children
        unsigned int tot_pch = 0;
        int num_pch;
        buff->check_space( 2 * all_sets.size() * sizeof( int ) );
        for( rit = all_sets.begin(), i = 0; rit != all_sets.end(); ++rit, i++ )
        {
            // Pack parents
            result = mbImpl->num_parent_meshsets( *rit, &num_pch );MB_CHK_SET_ERR( result, "Failed to get num parents" );
            PACK_INT( buff->buff_ptr, num_pch );
            tot_pch += num_pch;
            result = mbImpl->num_child_meshsets( *rit, &num_pch );MB_CHK_SET_ERR( result, "Failed to get num children" );
            PACK_INT( buff->buff_ptr, num_pch );
            tot_pch += num_pch;
        }

        // Now pack actual parents/children
        members.clear();
        members.reserve( tot_pch );
        std::vector< EntityHandle > tmp_pch;
        for( rit = all_sets.begin(), i = 0; rit != all_sets.end(); ++rit, i++ )
        {
            result = mbImpl->get_parent_meshsets( *rit, tmp_pch );MB_CHK_SET_ERR( result, "Failed to get parents" );
            std::copy( tmp_pch.begin(), tmp_pch.end(), std::back_inserter( members ) );
            tmp_pch.clear();
            result = mbImpl->get_child_meshsets( *rit, tmp_pch );MB_CHK_SET_ERR( result, "Failed to get children" );
            std::copy( tmp_pch.begin(), tmp_pch.end(), std::back_inserter( members ) );
            tmp_pch.clear();
        }
        assert( members.size() == tot_pch );
        if( !members.empty() )
        {
            result = get_remote_handles( store_remote_handles, &members[0], &members[0], members.size(), to_proc,
                                         entities_vec );MB_CHK_SET_ERR( result, "Failed to get remote handles for set parent/child sets" );
#ifndef NDEBUG
            // Check that all handles are either sets or maxtype
            for( unsigned int __j = 0; __j < members.size(); __j++ )
                assert( ( TYPE_FROM_HANDLE( members[__j] ) == MBMAXTYPE &&
                          ID_FROM_HANDLE( members[__j] ) < (int)entities.size() ) ||
                        TYPE_FROM_HANDLE( members[__j] ) == MBENTITYSET );
#endif
            buff->check_space( members.size() * sizeof( EntityHandle ) );
            PACK_EH( buff->buff_ptr, &members[0], members.size() );
        }
    }
    else
    {
        buff->check_space( 2 * all_sets.size() * sizeof( int ) );
        for( rit = all_sets.begin(); rit != all_sets.end(); ++rit )
        {
            PACK_INT( buff->buff_ptr, 0 );
            PACK_INT( buff->buff_ptr, 0 );
        }
    }

    // Pack the handles
    if( store_remote_handles && !all_sets.empty() )
    {
        buff_size = RANGE_SIZE( all_sets );
        buff->check_space( buff_size );
        PACK_RANGE( buff->buff_ptr, all_sets );
    }

    myDebug->tprintf( 4, "Done packing sets.\n" );

    buff->set_stored_size();

    return MB_SUCCESS;
}
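The PARALLEL_UNIQUE_ID block above only packs ids when at least one set carries a nonzero value, so an application that wants a set's identity preserved across migration can opt in by tagging the set beforehand. A hedged sketch (mb, my_set, and the id value 42 are assumptions for illustration; the tag name is the one used above):

    Tag uid_tag;
    ErrorCode rval = mb->tag_get_handle( "PARALLEL_UNIQUE_ID", 1, MB_TYPE_INTEGER, uid_tag,
                                         MB_TAG_SPARSE | MB_TAG_CREAT );
    int my_uid = 42;  // illustrative application-chosen id
    rval = mb->tag_set_data( uid_tag, &my_set, 1, &my_uid );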
ErrorCode moab::ParallelComm::pack_shared_handles | ( | std::vector< std::vector< SharedEntityData > > & | send_data | ) |
Definition at line 8441 of file ParallelComm.cpp.
References buffProcs, ErrorCode, get_buffers(), get_owner(), get_sharing_data(), moab::ParallelComm::SharedEntityData::local, MAX_SHARING_PROCS, MB_SUCCESS, moab::ParallelComm::SharedEntityData::owner, proc_config(), moab::ParallelComm::SharedEntityData::remote, and sharedEnts.
Referenced by check_all_shared_handles().
{
    // Build up send buffers
    ErrorCode rval = MB_SUCCESS;
    int ent_procs[MAX_SHARING_PROCS];
    EntityHandle handles[MAX_SHARING_PROCS];
    int num_sharing, tmp_int;
    SharedEntityData tmp;
    send_data.resize( buffProcs.size() );
    for( std::set< EntityHandle >::iterator i = sharedEnts.begin(); i != sharedEnts.end(); ++i )
    {
        tmp.remote = *i;  // Swap local/remote so they're correct on the remote proc.
        rval = get_owner( *i, tmp_int );
        tmp.owner = tmp_int;
        if( MB_SUCCESS != rval ) return rval;

        unsigned char pstat;
        rval = get_sharing_data( *i, ent_procs, handles, pstat, num_sharing );
        if( MB_SUCCESS != rval ) return rval;
        for( int j = 0; j < num_sharing; j++ )
        {
            if( ent_procs[j] == (int)proc_config().proc_rank() ) continue;
            tmp.local = handles[j];
            int ind = get_buffers( ent_procs[j] );
            assert( -1 != ind );
            if( (int)send_data.size() < ind + 1 ) send_data.resize( ind + 1 );
            send_data[ind].push_back( tmp );
        }
    }

    return MB_SUCCESS;
}
ErrorCode moab::ParallelComm::pack_tag | ( | Tag | source_tag, |
Tag | destination_tag, | ||
const Range & | entities, | ||
const std::vector< EntityHandle > & | whole_range, | ||
Buffer * | buff, | ||
const bool | store_remote_handles, | ||
const int | to_proc | ||
) | [private] |
Serialize tag data.
source_tag | The tag for which data will be serialized |
destination_tag | Tag in which to store unpacked tag data. Typically the same as source_tag. |
entities | The entities for which tag values will be serialized |
whole_range | Calculate entity indices as location in this range |
buff_ptr | Input/Output: As input, pointer to the start of the buffer in which to serialize data. As output, the position just past the serialized data. |
count_out | Output: The required buffer size, in bytes. |
store_handles | The data for each tag is preceded by a list of EntityHandles designating the entity each of the subsequent tag values corresponds to. This value may be one of: 1) If store_handles == false: An invalid handle composed of {MBMAXTYPE,idx}, where idx is the position of the entity in "whole_range". 2) If store_handles == true and a valid remote handle exists, the remote handle. 3) If store_handles == true and no valid remote handle is defined for the entity, the same as 1). |
to_proc | If 'store_handles' is true, the processor rank for which to store the corresponding remote entity handles. |
Definition at line 3556 of file ParallelComm.cpp.
References moab::ParallelComm::Buffer::buff_ptr, moab::ParallelComm::Buffer::check_space(), ErrorCode, moab::TagInfo::get_data_type(), moab::TagInfo::get_default_value(), moab::TagInfo::get_default_value_size(), moab::TagInfo::get_name(), get_remote_handles(), moab::TagInfo::get_size(), moab::DebugOutput::get_verbosity(), MB_CHK_SET_ERR, MB_SET_ERR, MB_SUCCESS, MB_TYPE_OPAQUE, MB_TYPE_OUT_OF_RANGE, MB_VARIABLE_LENGTH, mbImpl, myDebug, moab::PACK_BYTES(), moab::PACK_EH(), moab::PACK_INT(), moab::PACK_INTS(), moab::PACK_VOID(), PC, moab::Range::print(), moab::Range::size(), moab::TagInfo::size_from_data_type(), moab::Interface::tag_get_by_ptr(), moab::Interface::tag_get_data(), moab::Interface::tag_get_type(), TagType, and moab::DebugOutput::tprintf().
Referenced by pack_tags().
{
    ErrorCode result;
    std::vector< int > var_len_sizes;
    std::vector< const void* > var_len_values;

    if( src_tag != dst_tag )
    {
        if( dst_tag->get_size() != src_tag->get_size() ) return MB_TYPE_OUT_OF_RANGE;
        if( dst_tag->get_data_type() != src_tag->get_data_type() && dst_tag->get_data_type() != MB_TYPE_OPAQUE &&
            src_tag->get_data_type() != MB_TYPE_OPAQUE )
            return MB_TYPE_OUT_OF_RANGE;
    }

    // Size, type, data type
    buff->check_space( 3 * sizeof( int ) );
    PACK_INT( buff->buff_ptr, src_tag->get_size() );
    TagType this_type;
    result = mbImpl->tag_get_type( dst_tag, this_type );
    PACK_INT( buff->buff_ptr, (int)this_type );
    DataType data_type = src_tag->get_data_type();
    PACK_INT( buff->buff_ptr, (int)data_type );
    int type_size = TagInfo::size_from_data_type( data_type );

    // Default value
    if( NULL == src_tag->get_default_value() )
    {
        buff->check_space( sizeof( int ) );
        PACK_INT( buff->buff_ptr, 0 );
    }
    else
    {
        buff->check_space( src_tag->get_default_value_size() );
        PACK_BYTES( buff->buff_ptr, src_tag->get_default_value(), src_tag->get_default_value_size() );
    }

    // Name
    buff->check_space( src_tag->get_name().size() );
    PACK_BYTES( buff->buff_ptr, dst_tag->get_name().c_str(), dst_tag->get_name().size() );
    myDebug->tprintf( 4, "Packing tag \"%s\"", src_tag->get_name().c_str() );
    if( src_tag != dst_tag ) myDebug->tprintf( 4, " (as tag \"%s\")", dst_tag->get_name().c_str() );
    myDebug->tprintf( 4, "\n" );

    // Pack entities
    buff->check_space( tagged_entities.size() * sizeof( EntityHandle ) + sizeof( int ) );
    PACK_INT( buff->buff_ptr, tagged_entities.size() );
    std::vector< EntityHandle > dum_tagged_entities( tagged_entities.size() );
    result = get_remote_handles( store_remote_handles, tagged_entities, &dum_tagged_entities[0], to_proc, whole_vec );
    if( MB_SUCCESS != result )
    {
        if( myDebug->get_verbosity() == 3 )
        {
            std::cerr << "Failed to get remote handles for tagged entities:" << std::endl;
            tagged_entities.print( " " );
        }
        MB_SET_ERR( result, "Failed to get remote handles for tagged entities" );
    }

    PACK_EH( buff->buff_ptr, &dum_tagged_entities[0], dum_tagged_entities.size() );

    const size_t num_ent = tagged_entities.size();
    if( src_tag->get_size() == MB_VARIABLE_LENGTH )
    {
        var_len_sizes.resize( num_ent, 0 );
        var_len_values.resize( num_ent, 0 );
        result = mbImpl->tag_get_by_ptr( src_tag, tagged_entities, &var_len_values[0],
                                         &var_len_sizes[0] );MB_CHK_SET_ERR( result, "Failed to get variable-length tag data in pack_tags" );
        buff->check_space( num_ent * sizeof( int ) );
        PACK_INTS( buff->buff_ptr, &var_len_sizes[0], num_ent );
        for( unsigned int i = 0; i < num_ent; i++ )
        {
            buff->check_space( var_len_sizes[i] );
            PACK_VOID( buff->buff_ptr, var_len_values[i], type_size * var_len_sizes[i] );
        }
    }
    else
    {
        buff->check_space( num_ent * src_tag->get_size() );
        // Should be OK to read directly into buffer, since tags are untyped and
        // handled by memcpy
        result = mbImpl->tag_get_data( src_tag, tagged_entities, buff->buff_ptr );MB_CHK_SET_ERR( result, "Failed to get tag data in pack_tags" );
        buff->buff_ptr += num_ent * src_tag->get_size();
        PC( num_ent * src_tag->get_size(), " void" );
    }

    return MB_SUCCESS;
}
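Flattening the calls above, each tag occupies a stream segment of roughly this shape (a sketch; PACK_BYTES is assumed to write a length int followed by the raw bytes, as the buffer-space accounting above suggests):

    // [int tag size][int storage type][int data type]
    // [int 0]  or  [int default_size][default-value bytes]
    // [int name_len][name bytes]
    // [int n_ents][EntityHandle ents[n_ents]]
    // fixed-size tag:       n_ents * size bytes of values
    // variable-length tag:  [int sizes[n_ents]] then the values back to back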
ErrorCode moab::ParallelComm::pack_tags | ( | Range & | entities, |
const std::vector< Tag > & | src_tags, | ||
const std::vector< Tag > & | dst_tags, | ||
const std::vector< Range > & | tag_ranges, | ||
Buffer * | buff, | ||
const bool | store_handles, | ||
const int | to_proc | ||
) | [private] |
Serialize entity tag data.
This function operates in two passes. The first pass, specified by 'just_count == true', calculates the necessary buffer size for the serialized data. The second pass writes the actual binary serialized representation of the data to the passed buffer.
First two arguments are not used. (Legacy interface?)
entities | NOT USED |
start_rit | NOT USED |
whole_range | Should be the union of the sets of entities for which tag values are to be serialized. Also specifies ordering for indexes for tag values and serves as the superset from which to compose entity lists from individual tags if just_count and all_possible_tags are both true. |
buff_ptr | Buffer into which to write binary serialized data |
count | Output: The size of the serialized data is added to this parameter. NOTE: Should probably initialize to zero before calling. |
just_count | If true, just calculate the buffer size required to hold the serialized data. Will also append to 'all_tags' and 'tag_ranges' if all_possible_tags == true. |
store_handles | The data for each tag is preceded by a list of EntityHandles designating the entity each of the subsequent tag values corresponds to. This value may be one of: 1) If store_handles == false: An invalid handle composed of {MBMAXTYPE,idx}, where idx is the position of the entity in "whole_range". 2) If store_handles == true and a valid remote handle exists, the remote handle. 3) If store_handles == true and no valid remote handle is defined for the entity, the same as 1). |
to_proc | If 'store_handles' is true, the processor rank for which to store the corresponding remote entity handles. |
all_tags | List of tags to write |
tag_ranges | List of entities to serialize tag data, one for each corresponding tag handle in 'all_tags'. |
Definition at line 3470 of file ParallelComm.cpp.
References moab::Range::begin(), moab::ParallelComm::Buffer::buff_ptr, moab::ParallelComm::Buffer::check_space(), moab::Range::end(), ErrorCode, MB_SUCCESS, myDebug, moab::PACK_INT(), pack_tag(), packed_tag_size(), moab::ParallelComm::Buffer::set_stored_size(), moab::Range::size(), and moab::DebugOutput::tprintf().
Referenced by exchange_tags(), pack_buffer(), and reduce_tags().
{
    ErrorCode result;
    std::vector< Tag >::const_iterator tag_it, dst_it;
    std::vector< Range >::const_iterator rit;
    int count = 0;

    for( tag_it = src_tags.begin(), rit = tag_ranges.begin(); tag_it != src_tags.end(); ++tag_it, ++rit )
    {
        result = packed_tag_size( *tag_it, *rit, count );
        if( MB_SUCCESS != result ) return result;
    }

    // Number of tags
    count += sizeof( int );

    buff->check_space( count );

    PACK_INT( buff->buff_ptr, src_tags.size() );

    std::vector< EntityHandle > entities_vec( entities.size() );
    std::copy( entities.begin(), entities.end(), entities_vec.begin() );

    for( tag_it = src_tags.begin(), dst_it = dst_tags.begin(), rit = tag_ranges.begin(); tag_it != src_tags.end();
         ++tag_it, ++dst_it, ++rit )
    {
        result = pack_tag( *tag_it, *dst_it, *rit, entities_vec, buff, store_remote_handles, to_proc );
        if( MB_SUCCESS != result ) return result;
    }

    myDebug->tprintf( 4, "Done packing tags." );

    buff->set_stored_size();

    return MB_SUCCESS;
}
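Applications normally reach these private packers through the public exchange entry points rather than calling them directly. A hedged usage sketch (the tag name "density" is hypothetical; mb is an assumed Interface*):

    ParallelComm* pcomm = ParallelComm::get_pcomm( mb, 0 );
    Range shared;
    ErrorCode rval = pcomm->get_shared_entities( -1, shared );
    if( MB_SUCCESS != rval ) return rval;
    rval = pcomm->exchange_tags( "density", shared );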
ErrorCode moab::ParallelComm::packed_tag_size | ( | Tag | source_tag, |
const Range & | entities, | ||
int & | count_out | ||
) | [private] |
Calculate buffer size required to pack tag data.
source_tag | The tag for which data will be serialized |
entities | The entities for which tag values will be serialized |
count_out | Output: The required buffer size, in bytes. |
Definition at line 3513 of file ParallelComm.cpp.
References ErrorCode, errorHandler, moab::TagInfo::get_data(), moab::TagInfo::get_default_value(), moab::TagInfo::get_default_value_size(), moab::TagInfo::get_name(), moab::TagInfo::get_size(), MB_CHK_SET_ERR, MB_SUCCESS, MB_VARIABLE_LENGTH, sequenceManager, and moab::Range::size().
Referenced by pack_tags().
{
    // For dense tags, compute size assuming all entities have that tag
    // For sparse tags, get number of entities w/ that tag to compute size

    std::vector< int > var_len_sizes;
    std::vector< const void* > var_len_values;

    // Default value
    count += sizeof( int );
    if( NULL != tag->get_default_value() ) count += tag->get_default_value_size();

    // Size, type, data type
    count += 3 * sizeof( int );

    // Name
    count += sizeof( int );
    count += tag->get_name().size();

    // Range of tag
    count += sizeof( int ) + tagged_entities.size() * sizeof( EntityHandle );

    if( tag->get_size() == MB_VARIABLE_LENGTH )
    {
        const int num_ent = tagged_entities.size();
        // Send a tag size for each entity
        count += num_ent * sizeof( int );
        // Send tag data for each entity
        var_len_sizes.resize( num_ent );
        var_len_values.resize( num_ent );
        ErrorCode result = tag->get_data( sequenceManager, errorHandler, tagged_entities, &var_len_values[0],
                                          &var_len_sizes[0] );MB_CHK_SET_ERR( result, "Failed to get lengths of variable-length tag values" );
        count += std::accumulate( var_len_sizes.begin(), var_len_sizes.end(), 0 );
    }
    else
    {
        // Tag data values for range or vector
        count += tagged_entities.size() * tag->get_size();
    }

    return MB_SUCCESS;
}
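As a worked example of the accumulation above (illustrative numbers; EntityHandle assumed 8 bytes on a typical 64-bit build): a fixed-size 8-byte tag named "density" (7 characters, no default value) on 100 entities contributes

    //    4              default-value flag (int 0)
    // + 12              size, storage type, data type (3 ints)
    // +  4 + 7          name length + name bytes
    // +  4 + 100 * 8    entity count + 100 EntityHandles
    // + 100 * 8         tag values
    // = 1631 bytes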
Tag moab::ParallelComm::part_tag | ( | ) | [inline] |
Definition at line 703 of file ParallelComm.hpp.
References partition_tag().
Referenced by create_part().
{ return partition_tag(); }
Range& moab::ParallelComm::partition_sets | ( | ) | [inline] |
return partition, interface set ranges
Definition at line 665 of file ParallelComm.hpp.
References partitionSets.
Referenced by check_parallel_read(), collective_sync_partition(), create_part(), moab::ReadParallel::create_partition_sets(), moab::ReadParallel::delete_nonlocal_entities(), destroy_part(), get_part_handle(), iMeshP_getLocalParts(), iMeshP_getNumLocalParts(), moab::ReadParallel::load_file(), resolve_and_exchange(), set_partitioning(), moab::ScdInterface::tag_shared_vertices(), and test_ghost_elements().
{ return partitionSets; }
const Range& moab::ParallelComm::partition_sets | ( | ) | const [inline] |
Definition at line 669 of file ParallelComm.hpp.
References partitionSets.
{ return partitionSets; }
Tag moab::ParallelComm::partition_tag | ( | ) |
return partition set tag
Definition at line 7975 of file ParallelComm.cpp.
References ErrorCode, MB_SUCCESS, MB_TAG_CREAT, MB_TAG_SPARSE, MB_TYPE_INTEGER, mbImpl, PARALLEL_PARTITION_TAG_NAME, partitionTag, and moab::Interface::tag_get_handle().
Referenced by part_tag(), and ZoltanPartitioner::partition_inferred_mesh().
{
    if( !partitionTag )
    {
        int dum_id = -1;
        ErrorCode result = mbImpl->tag_get_handle( PARALLEL_PARTITION_TAG_NAME, 1, MB_TYPE_INTEGER, partitionTag,
                                                   MB_TAG_SPARSE | MB_TAG_CREAT, &dum_id );
        if( MB_SUCCESS != result ) return 0;
    }

    return partitionTag;
}
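A hedged sketch of reading partition ids back off the partition sets (assumes a valid Interface* mb and ParallelComm* pcomm):

    Tag ptag = pcomm->partition_tag();
    Range parts = pcomm->partition_sets();
    std::vector< int > part_ids( parts.size() );
    ErrorCode rval = mb->tag_get_data( ptag, parts, &part_ids[0] );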
Tag moab::ParallelComm::pcomm_tag | ( | Interface * | impl, |
bool | create_if_missing = true |
||
) | [static] |
return the pcomm tag; the Interface is passed in because this is a static function
return pcomm tag; static because might not have a pcomm before going to look for one on the interface
Definition at line 7989 of file ParallelComm.cpp.
References ErrorCode, MAX_SHARING_PROCS, MB_SUCCESS, MB_TAG_CREAT, MB_TAG_SPARSE, MB_TYPE_OPAQUE, PARALLEL_COMM_TAG_NAME, and moab::Interface::tag_get_handle().
Referenced by add_pcomm(), get_all_pcomm(), get_pcomm(), remove_pcomm(), and set_partitioning().
{
    Tag this_tag = 0;
    ErrorCode result;
    if( create_if_missing )
    {
        result = impl->tag_get_handle( PARALLEL_COMM_TAG_NAME, MAX_SHARING_PROCS * sizeof( ParallelComm* ),
                                       MB_TYPE_OPAQUE, this_tag, MB_TAG_SPARSE | MB_TAG_CREAT );
    }
    else
    {
        result = impl->tag_get_handle( PARALLEL_COMM_TAG_NAME, MAX_SHARING_PROCS * sizeof( ParallelComm* ),
                                       MB_TYPE_OPAQUE, this_tag, MB_TAG_SPARSE );
    }

    if( MB_SUCCESS != result ) return 0;

    return this_tag;
}
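This static lookup is what lets code recover a ParallelComm from a bare Interface; the usual pattern (a sketch, with mb an assumed Interface*) is:

    ParallelComm* pcomm = ParallelComm::get_pcomm( mb, 0 );  // instance with id 0
    if( !pcomm )  // nothing stored on the interface yet
        pcomm = new ParallelComm( mb, MPI_COMM_WORLD );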
ErrorCode moab::ParallelComm::post_irecv | ( | std::vector< unsigned int > & | exchange_procs | ) |
Post "MPI_Irecv" before meshing.
exchange_procs | processor vector exchanged |
Definition at line 6768 of file ParallelComm.cpp.
References buffProcs, get_buffers(), INITIAL_BUFF_SIZE, moab::MB_MESG_ENTS_SIZE, MB_SET_ERR, MB_SUCCESS, PRINT_DEBUG_IRECV, moab::ProcConfig::proc_comm(), moab::ProcConfig::proc_rank(), procConfig, recvRemotehReqs, recvReqs, remoteOwnedBuffs, reset_all_buffers(), and sendReqs.
{
    // Set buffers
    int n_proc = exchange_procs.size();
    for( int i = 0; i < n_proc; i++ )
        get_buffers( exchange_procs[i] );
    reset_all_buffers();

    // Post ghost irecv's for entities from all communicating procs
    // Index requests the same as buffer/sharing procs indices
    int success;
    recvReqs.resize( 2 * buffProcs.size(), MPI_REQUEST_NULL );
    recvRemotehReqs.resize( 2 * buffProcs.size(), MPI_REQUEST_NULL );
    sendReqs.resize( 2 * buffProcs.size(), MPI_REQUEST_NULL );

    int incoming = 0;
    for( int i = 0; i < n_proc; i++ )
    {
        int ind = get_buffers( exchange_procs[i] );
        incoming++;
        PRINT_DEBUG_IRECV( procConfig.proc_rank(), buffProcs[ind], remoteOwnedBuffs[ind]->mem_ptr,
                           INITIAL_BUFF_SIZE, MB_MESG_ENTS_SIZE, incoming );
        success = MPI_Irecv( remoteOwnedBuffs[ind]->mem_ptr, INITIAL_BUFF_SIZE, MPI_UNSIGNED_CHAR, buffProcs[ind],
                             MB_MESG_ENTS_SIZE, procConfig.proc_comm(), &recvReqs[2 * ind] );
        if( success != MPI_SUCCESS )
        {
            MB_SET_ERR( MB_FAILURE, "Failed to post irecv in owned entity exchange" );
        }
    }

    return MB_SUCCESS;
}
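The requests posted here are consumed later with a standard MPI wait loop; a generic sketch of the receiving side (plain MPI, not a MOAB-specific API):

    MPI_Status status;
    int ind;
    while( incoming )
    {
        MPI_Waitany( (int)recvReqs.size(), &recvReqs[0], &ind, &status );
        // ind indexes buffProcs/remoteOwnedBuffs; unpack here, or resize the
        // buffer and re-post the receive if the message did not fit
        incoming--;
    }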
ErrorCode moab::ParallelComm::post_irecv | ( | std::vector< unsigned int > & | shared_procs, |
std::set< unsigned int > & | recv_procs | ||
) |
Definition at line 6801 of file ParallelComm.cpp.
References buffProcs, get_buffers(), INITIAL_BUFF_SIZE, localOwnedBuffs, moab::MB_MESG_ENTS_SIZE, MB_SET_ERR, MB_SUCCESS, PRINT_DEBUG_IRECV, moab::ProcConfig::proc_comm(), moab::ProcConfig::proc_rank(), procConfig, recvRemotehReqs, recvReqs, remoteOwnedBuffs, reset_all_buffers(), and sendReqs.
{
    // Set buffers
    int num = shared_procs.size();
    for( int i = 0; i < num; i++ )
        get_buffers( shared_procs[i] );
    reset_all_buffers();
    num = remoteOwnedBuffs.size();
    for( int i = 0; i < num; i++ )
        remoteOwnedBuffs[i]->set_stored_size();
    num = localOwnedBuffs.size();
    for( int i = 0; i < num; i++ )
        localOwnedBuffs[i]->set_stored_size();

    // Post ghost irecv's for entities from all communicating procs
    // Index requests the same as buffer/sharing procs indices
    int success;
    recvReqs.resize( 2 * buffProcs.size(), MPI_REQUEST_NULL );
    recvRemotehReqs.resize( 2 * buffProcs.size(), MPI_REQUEST_NULL );
    sendReqs.resize( 2 * buffProcs.size(), MPI_