yade.mpy module

This module defines mpirun(), a parallel implementation of run() using a distributed-memory approach. Message passing is done mostly with mpi4py; some messages are also handled in C++ (via OpenMPI).
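For orientation, a typical driver script follows the pattern sketched below (scene setup omitted; the step count and the number of workers are arbitrary, and the exact mpirun signature may differ between yade versions):

    from yade import mpy as mp

    # ... build an ordinary yade scene first (O.bodies, O.engines, ...) ...

    mp.initialize(4)                    # attach/spawn 4 MPI workers
    mp.mpirun(1000, 4, withMerge=True)  # 1000 parallel iterations, then merge
                                        # the scene back on the master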

Note

Many of the mpy internals listed on this page are of little use to the end user. For introductory material on the mpy module, please refer to the user manual instead.

Logic:

The logic for an initially centralized scene is as follows:

  1. Instantiate a complete, ordinary yade scene

  2. Insert subdomains as special yade bodies. This is somewhat similar to adding a clump body on top of clump members

  3. Broadcast this scene to all workers. In the initialization phase the workers will:

    • define the bounding box of their assigned bodies and return it to other workers

    • detect which assigned bodies are virtually in interaction with other domains (based on their bounding boxes) and communicate the lists to the relevant workers

    • erase the bodies which are neither assigned nor virtually interacting with the subdomain

  4. Run a number of ‘regular’ iterations without re-running collision detection (Verlet distance mechanism). In each regular iteration the workers will:

    • calculate internal and cross-domain interactions

    • execute Newton on assigned bodies (a modified Newton integrator skips bodies of other domains)

    • send updated positions to other workers and partial force on floor to master

  5. When one worker triggers collision detection, all workers follow. This results in updated intersections between subdomains.

  6. If enabled, bodies may be re-allocated to different domains just after a collision detection, based on a filter. Custom filters are possible; one (medianFilter) is predefined (see the sketch after this list).
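To make steps 2 and 6 concrete, the sketch below shows one way a script can assign bodies to subdomains by hand, or let mpy decompose and re-allocate them. The flags DOMAIN_DECOMPOSITION, REALLOCATE_FREQUENCY and REALLOCATE_FILTER are assumed to be available as in recent yade versions, and the slicing rule is purely illustrative:

    from yade import mpy as mp

    # Option A: manual decomposition, tagging each body with a worker rank
    # (rank 0 is reserved for the master)
    for b in O.bodies:
        b.subdomain = 1 + int(b.state.pos[0] > 0.5)  # two slices along x

    # Option B: let mpy decompose the scene itself and re-allocate bodies
    # after collision detection with the predefined median filter
    mp.DOMAIN_DECOMPOSITION = True
    mp.REALLOCATE_FREQUENCY = 5             # every 5th collision detection
    mp.REALLOCATE_FILTER = mp.medianFilter  # a custom callable also works
    mp.mpirun(1000, 4, withMerge=False)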

Rules:

  • intersections[0] holds the 0-bodies (those to which we need to send forces)

  • intersections[thisDomain] holds the ids of the other domains overlapping the current one

  • intersections[otherDomain] holds the ids of bodies in the current domain which overlap with the other domain (those for which we need to send updated pos/vel)
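On a worker, these lists can be pictured as below (a read-only sketch, assuming the local Subdomain is reachable as O.subD and the worker's rank as mp.rank):

    from yade import mpy as mp

    inters = O.subD.intersections   # list of lists of int, indexed by domain
    bodies_for_master = inters[0]   # 0-bodies: their forces go to master
    for other in inters[mp.rank]:   # domains overlapping the current one
        to_update = inters[other]   # local bodies whose pos/vel go to 'other'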

Hints:

  • handle subD.intersections with care (same for mirrorIntersections): subD.intersections.append() will not reach the underlying C++ object; subD.intersections can only be assigned as a whole (a list of lists of int), as shown below.
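In practice that means copy, modify, re-assign, as in this minimal sketch (otherDomain and someId are hypothetical values):

    otherDomain, someId = 2, 101        # hypothetical
    inters = O.subD.intersections       # a Python copy of the C++ data
    inters[otherDomain].append(someId)  # mutates the copy only
    O.subD.intersections = inters       # assign back so the C++ side sees it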

class yade.mpy.Timing_comm(inherits object)
  Allgather(timing_name, *args, **kwargs)
  Gather(timing_name, *args, **kwargs)
  Gatherv(timing_name, *args, **kwargs)
  allreduce(timing_name, *args, **kwargs)
  bcast(timing_name, *args, **kwargs)
  clear()
  enable_timing()
  mpiSendStates(timing_name, *args, **kwargs)
  mpiWait(timing_name, *args, **kwargs)
  mpiWaitReceived(timing_name, *args, **kwargs)
  print_all()
  recv(timing_name, *args, **kwargs)
  send(timing_name, *args, **kwargs)
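Each of these methods wraps the like-named mpi4py call and accumulates call counts and wall-clock time under timing_name. The pattern is roughly as follows (an illustrative re-creation, not the actual implementation):

    import time
    from mpi4py import MPI

    class TimedComm:
        """Illustrative stand-in for Timing_comm."""

        def __init__(self, comm=MPI.COMM_WORLD):
            self.comm = comm
            self.timings = {}  # timing_name -> [number of calls, total seconds]

        def _timed(self, timing_name, func, *args, **kwargs):
            start = time.time()
            result = func(*args, **kwargs)
            entry = self.timings.setdefault(timing_name, [0, 0.0])
            entry[0] += 1
            entry[1] += time.time() - start
            return result

        def bcast(self, timing_name, *args, **kwargs):
            return self._timed(timing_name, self.comm.bcast, *args, **kwargs)

        def send(self, timing_name, *args, **kwargs):
            return self._timed(timing_name, self.comm.send, *args, **kwargs)

        def recv(self, timing_name, *args, **kwargs):
            return self._timed(timing_name, self.comm.recv, *args, **kwargs)

        def print_all(self):
            for name, (n, t) in sorted(self.timings.items()):
                print('%-24s %6d calls %10.4f s' % (name, n, t))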
yade.mpy.eraseRemote()
yade.mpy.initialize(np)
yade.mpy.makeColorScale(n=None)
yade.mpy.makeMpiArgv()
yade.mpy.mergeScene()
yade.mpy.mpiStats()
yade.mpy.pairOp(talkTo)
yade.mpy.parallelCollide()
yade.mpy.probeRecvMessage(source, tag)
yade.mpy.sendRecvStates()
yade.mpy.spawnedProcessWaitCommand()
yade.mpy.updateAllIntersections()
yade.mpy.updateMirrorOwners()
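As an example of how these fit together: after a parallel run without automatic merging, mergeScene() gathers the distributed bodies back on the master. A hedged usage sketch (rank is assumed to be the module-level MPI rank of mpy, and the output path is hypothetical):

    from yade import mpy as mp

    mp.mpirun(500, 4, withMerge=False)  # scene stays distributed
    mp.mergeScene()                     # collect the full scene on master
    if mp.rank == 0:
        O.save('merged.yade.bz2')       # hypothetical output file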