The particle-based nature of the nanoFluidX code allows for an elegant and efficient approach to flows that undergo high deformation during the simulation, such as sloshing, violent multiphase flows, or rapid movement through complex geometry.
General free-surface flows
Simulate sloshing of oil in powertrain systems, free-flowing fluids in an open environment, open or closed tanks under high accelerations, and similar phenomena.
High-density ratio multiphase flows
The Smoothed Particle Hydrodynamics (SPH) method of nanoFluidX allows for easy treatment of high density-ratio multiphase flows (e.g. water-air) without additional computational effort. Fluid interfaces are a natural by-product of the SPH method, and no additional interface reconstruction is required, which saves computational time.
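The idea behind "interfaces for free" can be illustrated outside of nanoFluidX: in SPH, each particle carries a fixed mass, and density is recovered as a kernel-weighted sum over neighbours, so a sharp water-air density jump simply appears in the summed field. The following is a minimal 1-D sketch with a standard cubic spline kernel; the spacing, smoothing length, and two-fluid setup are illustrative assumptions, not nanoFluidX internals:

```python
import math

def cubic_spline_kernel(r, h):
    """Standard 1-D cubic spline SPH kernel with smoothing length h."""
    q = abs(r) / h
    sigma = 2.0 / (3.0 * h)  # 1-D normalisation constant
    if q < 1.0:
        return sigma * (1.0 - 1.5 * q**2 + 0.75 * q**3)
    if q < 2.0:
        return sigma * 0.25 * (2.0 - q)**3
    return 0.0

def summation_density(x, mass, h):
    """rho_i = sum_j m_j * W(x_i - x_j, h): density is just a
    kernel-weighted sum over neighbours, with no explicit
    interface-reconstruction step."""
    return [sum(m_j * cubic_spline_kernel(x_i - x_j, h)
                for x_j, m_j in zip(x, mass))
            for x_i in x]

# Toy water-air column: heavy particles on the left, light on the right.
dx, h = 0.01, 0.015
x = [i * dx for i in range(40)]
mass = [1000.0 * dx if i < 20 else 1.0 * dx for i in range(40)]
rho = summation_density(x, mass, h)
```

Evaluating `rho` in the interior of each phase recovers roughly 1000 and 1 (the assumed water and air densities), with the interface emerging where the particle masses change.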
Rotating gears, crankshafts and connecting rods
nanoFluidX implements options for prescribing different types of motion, so simulating rotating gears, crankshafts and connecting rods is straightforward. Measure the forces and torques experienced by solid elements as they interact with the surrounding fluid.
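The torque reported for a rotating element is, conceptually, a sum of moments of the per-particle fluid forces acting on the wall particles of the solid. The sketch below shows that bookkeeping for a planar case; the function and the four-particle example are hypothetical, not the nanoFluidX measurement API:

```python
def torque_z(points, forces, axis_point=(0.0, 0.0)):
    """Net torque about a z-axis through axis_point:
    tau_z = sum_i (r_i x F_i)_z = sum_i (rx * Fy - ry * Fx)."""
    ax, ay = axis_point
    return sum((x - ax) * fy - (y - ay) * fx
               for (x, y), (fx, fy) in zip(points, forces))

# Four wall particles on a unit circle, each loaded tangentially with 2 N:
pts = [(1, 0), (0, 1), (-1, 0), (0, -1)]
frc = [(0, 2), (-2, 0), (0, -2), (2, 0)]
tau = torque_z(pts, frc)  # 4 particles x (1 m x 2 N) = 8 N.m
```

The same accumulation, done every time step over all wall particles of a gear, yields the resistive torque curve over a revolution.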
Measure the forces experienced by a tank or vehicle during drastic acceleration, e.g. braking or a sudden lane change.
GPU computing provides significant performance advantages and power savings over comparable, more cumbersome CPU systems. The GPU revolution in scientific and engineering computing is progressing rapidly, and nanoFluidX is one of the pioneering commercial software packages to utilize this technology, bringing significant speed to overall product development.
For comparison, a real-life example of a complex Double Clutch Transmission (DCT) with 13.5 million particles, a reference speed of 3000 RPM and a physical time of 3.4 s was simulated with both nanoFluidX and a commercial CPU-based SPH code. The CPU code used a 32-core system, while nanoFluidX ran on 4 NVIDIA Tesla V100 cards. nanoFluidX executed the simulation in 48 hours, while the CPU code took 255 hours: roughly a 5.3× speed-up, while consuming about half the energy under the power assumptions in the footnote below (see image gallery).
Standard finite-volume CFD codes would most likely fail even to initialize a simulation of such complex geometry, and even if they succeeded, pre-processing would take weeks and the computational cost of such a simulation would be prohibitively large.
32-core CPU system (4 × 8-core Intel Xeon E5-2665) vs. nanoFluidX on 4 NVIDIA Tesla V100 GPU cards. *Energy consumption assumptions: processors only, no peripheral devices; 8-core CPU at 95 W; Tesla V100 at 250 W.
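Under the footnote's stated assumptions, the comparison reduces to a few lines of arithmetic. This is a sanity check of the published run times and power figures, not nanoFluidX output:

```python
# Run times from the DCT benchmark above.
cpu_hours, gpu_hours = 255.0, 48.0
speedup = cpu_hours / gpu_hours  # ~5.3x faster on GPUs

# Footnote assumptions: processor power only, no peripherals.
cpu_power_w = 4 * 95.0    # four 8-core Xeon E5-2665 sockets at 95 W each
gpu_power_w = 4 * 250.0   # four Tesla V100 cards at 250 W each

cpu_energy_kwh = cpu_power_w * cpu_hours / 1000.0  # ~96.9 kWh
gpu_energy_kwh = gpu_power_w * gpu_hours / 1000.0  # 48.0 kWh
energy_ratio = cpu_energy_kwh / gpu_energy_kwh     # ~2x less energy
```

The higher per-card power of the GPUs is more than offset by the much shorter run time, which is where the net energy saving comes from.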
A mesh in the classic sense is not needed. Import the geometry, select the element, and generate the particles. No more hours of pre-processing and devising a good-enough mesh.
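Why particle generation is so much cheaper than meshing can be seen from what it actually does: seed the selected geometry with points at the chosen particle spacing. A toy version for an axis-aligned box is below; nanoFluidX works from imported CAD geometry, so the box shape and cell-centred lattice here are illustrative assumptions only:

```python
def fill_box(xmin, xmax, ymin, ymax, zmin, zmax, dx):
    """Seed an axis-aligned box with particles on a Cartesian lattice
    of spacing ~dx (cell-centred, so all particles lie strictly inside)."""
    def axis(a, b):
        n = max(1, round((b - a) / dx))  # lattice points along one axis
        return [a + (i + 0.5) * (b - a) / n for i in range(n)]
    return [(x, y, z) for x in axis(xmin, xmax)
                      for y in axis(ymin, ymax)
                      for z in axis(zmin, zmax)]

# A 0.1 m x 0.1 m x 0.05 m oil volume at 1 cm spacing:
particles = fill_box(0, 0.1, 0, 0.1, 0, 0.05, dx=0.01)  # 10 * 10 * 5 = 500
```

There is no cell-quality criterion to satisfy: refining the resolution is a matter of shrinking `dx`, not of re-meshing.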
Rigid body motion
Besides rotational motion, the nanoFluidX code allows element trajectories to be prescribed by an input file. Study the interaction between an arbitrarily translating solid and the surrounding fluid.
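A prescribed trajectory is conceptually a time series of positions read from a table, with the solver sampling it at each time step. The sketch below uses simple linear interpolation between table rows; the table layout and function are hypothetical, since nanoFluidX defines its own input-file syntax:

```python
from bisect import bisect_right

def position_at(t, times, positions):
    """Linearly interpolate a prescribed trajectory table at time t.
    times must be sorted ascending; positions are (x, y, z) tuples.
    Outside the table, the endpoint positions are held constant."""
    if t <= times[0]:
        return positions[0]
    if t >= times[-1]:
        return positions[-1]
    i = bisect_right(times, t) - 1  # segment containing t
    w = (t - times[i]) / (times[i + 1] - times[i])
    return tuple(a + w * (b - a)
                 for a, b in zip(positions[i], positions[i + 1]))

# Table: hold still for 1 s, then translate 1 m in x over the next second.
times = [0.0, 1.0, 2.0]
positions = [(0, 0, 0), (0, 0, 0), (1, 0, 0)]
```

Sampling at t = 1.5 s gives the half-way point (0.5, 0, 0); a finer table, or a smoother interpolant, gives correspondingly smoother motion.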
*The quoted numbers are case and configuration dependent.
The nanoFluidX team recommends NVIDIA Tesla V100, P100 and K80 accelerators, as they are well-established GPU cards for scientific computing in data centers and nanoFluidX has been thoroughly tested on them. The NVIDIA Tesla M series (M40, M60) is also suitable for running nanoFluidX; however, these cards deliver meaningful performance only in single precision, so running in double precision is essentially impractical.
A number of other NVIDIA GPU cards (Quadro series, GeForce series, etc.) have a Compute Capability that is in principle suitable for nanoFluidX. However, the development team does not guarantee the accuracy, stability or overall performance of nanoFluidX on these cards. Be aware that the current NVIDIA EULA prohibits commercial usage of non-Tesla series cards as a computational resource in bundles of four GPUs or more.
The code also features dynamic load balancing, ensuring optimal hardware utilization, and can run on multi-node clusters as well.
64 GB or more RAM
The number of CPU cores should equal the number of GPU devices, since message passing between GPU devices is handled by the CPU(s). Ideally, the number of CPU cores will slightly exceed the number of available GPU devices, leaving some computational headroom for result output and similar tasks.
2 TB or more HDD space
Infiniband or OmniPath connections for multi-node systems.
Any Unix-based OS with GCC newer than 4.4.7 and GLIBC 2.12 or newer (RHEL 6.x and 7.x and compatible Scientific Linux and CentOS; Ubuntu 14.04 and 16.04; openSUSE 13.2, etc.)
NVIDIA CUDA 8.0 and Open MPI 1.10.2, shipped with the binary.