HPC Scale Engineering Analysis

Scaling CFD workflows to larger HPC resources, local or cloud, requires rethinking data flows

Simply doing “more of the same” is not a strategy for success. 

“We were fine when everything was steady state, but now LES is killing our file system.”

At some point, organizations need to realize that the important knowledge they need is buried in the huge volume of files produced by CFD.  Solver codes may need a large domain around the article being analyzed, but quantities like forces, moments and heat fluxes are usually located on the walls of the article.  These can amount to a few percent of the total stored data size.
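The "few percent" claim follows from simple scaling: volume data grows with the cube of the grid dimension while wall data grows with its square. A rough back-of-the-envelope sketch (grid sizes here are hypothetical, not from any specific case):

```python
# Rough estimate (hypothetical grid sizes): what fraction of the solution
# data lives on the walls of an N x N x N structured block?

def wall_fraction(n):
    """Ratio of wall-face count to interior-cell count for a cubic block."""
    volume_cells = n ** 3   # full volumetric solution
    wall_faces = 6 * n ** 2  # six bounding faces of the block
    return wall_faces / volume_cells

for n in (100, 500, 1000):
    print(f"N={n}: walls hold {wall_fraction(n):.1%} of the data")
# N=1000 gives 0.6% -- wall quantities are a small slice of the stored volume.
```

As grids grow, the wall fraction shrinks, so extracting surface data early saves an ever-larger share of storage and I/O.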

[Image: the Fugaku supercomputer]

New supercomputers have heterogeneous computing architectures and ARM64 is now proving itself.  Users need to benchmark and plan end-to-end workflows in order to get the most from these resources.

[Image: NAVAIR CREATE-AV animation]

The CREATE-AV animation shown here was computed on supercomputers in Hawaii, with surface extracts produced in situ.  These were transferred over the internet to the engineer’s desktop in the eastern US for post-processing and animation.  Without the 1000-fold reduction in file size, the project would not have been possible.

Unsteady LES runs and large parameter sweeps can generate huge, numerous files that become a bottleneck to effective engineering workflows.

Is disk space and write/read time limiting the cases I run?

Why do Catalyst and Libsim use far more memory than my solver?

How can I really use unsteady results for analysis at high timestep frequency?

Are my competitors gaining with new technology that we can’t seem to use?


Kombyne™

Next-generation in situ/in transit processing, easy to integrate, with low memory overhead.  Bi-directional coupling supports run-time solver steering.
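The coupling described above follows a common in situ pattern: each solver step hands field data to the co-processor, which can send steering parameters back. A minimal sketch of that pattern, with stub classes standing in for the real components (all names here are hypothetical; this is not the Kombyne API):

```python
# Generic in situ / in transit coupling loop (hypothetical names, NOT the
# Kombyne API). The solver shares fields each step; the analysis side can
# steer the run through parameter updates.

class StubSolver:
    """Stand-in for a CFD solver."""
    def __init__(self):
        self.step_size = 1.0
        self.time = 0.0

    def advance(self):
        self.time += self.step_size

    def fields(self):
        return {"time": self.time}  # real code would share mesh + solution arrays

    def set_parameter(self, key, value):
        setattr(self, key, value)

class StubCoprocessor:
    """Stand-in for an in situ analysis library."""
    def process(self, step, fields):
        self.last = (step, fields)  # real code would extract, render, or ship data

    def steering_updates(self):
        # Bi-directional coupling: e.g. shrink the time step after an event.
        return {"step_size": 0.5}

def run_coupled(solver, coprocessor, n_steps):
    for step in range(n_steps):
        solver.advance()
        coprocessor.process(step, solver.fields())
        for key, value in coprocessor.steering_updates().items():
            solver.set_parameter(key, value)

solver = StubSolver()
run_coupled(solver, StubCoprocessor(), 4)
print(solver.step_size)  # → 0.5, the steering update took effect
```

Sharing arrays in place rather than copying them is what keeps the memory overhead of this pattern low relative to writing full volume files.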


Extract Workflows

‘Extracts’ (1D, 2D, 3D and 4D subsets of solution data) can be obtained early in the workflow and used to post-process effectively, saving time, disk space and network load.
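A toy illustration of the idea: keep only a 2D cutting plane from a 3D scalar field instead of writing the whole volume. This is a pure-Python stand-in (made-up field and sizes) for what a solver or in situ layer would do on real data:

```python
# Toy 2D cutting-plane extract from a 3D scalar field (illustrative only).

def make_field(n):
    """n x n x n scalar field stored as nested lists."""
    return [[[i + j + k for k in range(n)] for j in range(n)] for i in range(n)]

def plane_extract(field, k):
    """Keep only the plane at constant third index k (a 2D extract)."""
    return [[col[k] for col in row] for row in field]

n = 64
field = make_field(n)
extract = plane_extract(field, n // 2)

full_values = n ** 3
extract_values = n ** 2
print(f"extract is {extract_values / full_values:.2%} of the volume data")
# → extract is 1.56% of the volume data
```

Writing only the extract at every time step, rather than the full volume at a few checkpoints, is what makes high-frequency unsteady post-processing tractable.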


In-situ visualization at JAXA

“The tightly-coupled in-situ visualization using libSim & VisIt is implemented into the in-house CFD program and tested in JAXA’s supercomputer system… Introduction of the in-situ vis. should be considered in advance for the design of supercomputer system.”

Seiji Tsutsumi, PhD.  From PARCFD 2017

Intelligent Light
301 Route 17N, 7th Floor,
Rutherford, NJ 07070
