MOOSE Newsletter (June 2018)

DistributedGeneratedMesh

Since the beginning, MOOSE has had the GeneratedMesh for generating Cartesian meshes in 1D, 2D, and 3D. However, the way this works in parallel with a distributed mesh is suboptimal: even with parallel_type = distributed, the entire mesh is still built on every processor, partitioned, and then each processor deletes the portion not assigned to it.

We have added a new capability called DistributedGeneratedMesh. It allows each processor to create only the portion of the mesh that belongs to it, so much larger generated meshes can be created in parallel, much more quickly and efficiently.

VectorPostprocessor Parallel Consistency

Just as with Postprocessors, the outputs of VectorPostprocessors (VPPs) can be coupled back into other MOOSE objects. However, there is one issue: the vectors a VPP produces are only required to be complete on "processor 0" (the root MPI process). So, if you need the value of a VPP in a Kernel (which will most likely be running on all of the MPI processes), something needs to be done to make the complete vectors available on all of the MPI processes.

A new capability was added to MOOSE this month that automatically handles the parallelism of VPP output. To tell MOOSE that you need a complete copy of the vector on every processor, add an extra argument to getVectorPostprocessorValue() like so:


getVectorPostprocessorValue("the_vpp_parameter_name", "the_vector_name", true)

The true is the new part. By passing it, MOOSE will completely handle the parallelism for you, and you are free to use the vector values on all processors.

In addition, a new function called getScatterVectorPostprocessorValue() was added to help in the case where a VPP produces vectors that are num_procs in length and your object only needs to access its own entry in that vector.
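For example, a Kernel could request the complete vector (or just its own entry) in its constructor. The following is a minimal sketch rather than a complete object: the class, parameter, and vector names are made up, "the_vpp" would need to be declared as a VectorPostprocessorName parameter in the object's validParams, and the member types assume the usual MOOSE typedefs.

#include "Kernel.h"

class MyVPPKernel : public Kernel
{
public:
  MyVPPKernel(const InputParameters & parameters)
    : Kernel(parameters),
      // The trailing `true` asks MOOSE to make the complete vector available on every rank
      _vpp_values(getVectorPostprocessorValue("the_vpp", "the_vector", true)),
      // For a VPP that writes one entry per MPI rank, grab only this rank's entry
      _my_entry(getScatterVectorPostprocessorValue("the_vpp", "the_vector"))
  {
  }

protected:
  virtual Real computeQpResidual() override { return 0.0; } // use _vpp_values / _my_entry here

  const VectorPostprocessorValue & _vpp_values;      // a std::vector<Real>, complete on all ranks
  const ScatterVectorPostprocessorValue & _my_entry; // this rank's single value
};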

For more information on all of this, check out the bottom of the VectorPostprocessors System page.


GridPartitioner

The Partitioner System is responsible for assigning portions of the domain to each MPI process when running in parallel. Most of you never interact with this and are using the default LibmeshPartitioner, which defaults to using the METIS package for partitioning.

However, sometimes you want more control over the partitioning. This month we added the GridPartitioner, which allows you to use a perfect grid (similar to a GeneratedMesh) to do the element assignment. As an example, the GridPartitioner can be applied to a GeneratedMesh with a 2x2 grid for use on 4 processors.

MOOSE Tagging System

Previously, a global Jacobian matrix and a global residual vector were filled with contributions from all kernels during a sweep through all elements. Sometimes, for example for eigenvalue solvers or explicit time steppers, we need a more flexible way to associate kernels with vectors and matrices. With the new tagging system, a kernel can contribute to one or more vectors and matrices, and similarly a matrix or vector can receive contributions from one or more kernels. This capability is currently used to build explicit time stepping and eigenvalue solvers. More details on the MOOSE tagging system are available on the Tagging System page.

MooseMesh::clone()

The MooseMesh::clone() interface returns a reference to the object it allocates, which makes it easy to forget to explicitly delete the object when you are done with it. The new MooseMesh::safeClone() method, which returns a std::unique_ptr<MooseMesh>, should be used instead. MooseMesh::clone() has been reimplemented as a mooseError() at the base class level, should no longer be called or overridden by user code, and will eventually be removed altogether.
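For example (a minimal sketch; it assumes `_mesh` is the MooseMesh reference your object already has access to):

#include <memory>

// Old pattern -- the raw reference returned by clone() was easy to leak:
//   MooseMesh & copy = _mesh.clone();

// New pattern -- ownership is explicit and cleanup is automatic:
std::unique_ptr<MooseMesh> copy = _mesh.safeClone();
// ... use *copy as you would have used the clone() result ...
// no delete needed: the mesh is freed when `copy` goes out of scope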

PorousFlowJoiner materials

The Porous Flow module now automatically adds all of the PorousFlowJoiner materials that are required, so there is no need to include these pesky objects yourself! While your current input files will continue to run unchanged, you may want to delete these objects and forget about them forevermore.

Strain Periodicity

A new global strain calculation approach has been implemented in the TensorMechanics module that relaxes the stresses along the periodic directions and allows the corresponding deformation. It generates an auxiliary displacement field which, combined with the periodic displacements, enforces strain periodicity. This approach enables capturing volume change, shear deformation, etc., while still maintaining periodic boundary conditions on the displacements.

TensorMechanics Documentation

The Tensor Mechanics module now has a more complete set of documentation for the classes used in engineering-scale thermo-mechanical simulations. We have also improved the documentation for the actions by linking between the actions and the classes they create, including the recommended TensorMechanics MasterAction, which ensures consistency between the strain and stress divergence formulations.

Mesh Exploder

The visualization tool (Chigger) that is the basis of the Peacock GUI now has the ability to visualize a partitioned mesh by "exploding" the elements on each processor. The test script below creates the image shown in Figure 1.

import vtk
import chigger

# Create a common camera
camera = vtk.vtkCamera()
camera.SetViewUp(0.1889, 0.9412, -0.2800)
camera.SetPosition(3.4055, -0.8236, -1.9897)
camera.SetFocalPoint(0.5000, 0.5000, 0.5000)

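# Read the per-processor Nemesis files from a 24-process run, color the result by the
# variable 'u', and "explode" the partitions so each processor's elements are pulled apart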
reader = chigger.exodus.NemesisReader('../input/nemesis.e.24.*')
result = chigger.exodus.ExodusResult(reader, variable='u', explode=0.3, camera=camera)

window = chigger.RenderWindow(result, size=[300,300], test=True)
window.write('explode.png')
window.start()
(python/chigger/tests/nemesis/explode.py)

Figure 1: Visualization of parallel partition using the Chigger script.