DNS 1-3 Storage Format
From KBwiki
=Flow in a 3D diffuser=


{{DNSHeader
|area=1
|number=3
}}


= Storage Format =
The data provided is stored in HDF5 format. It can be easily read through the [https://www.hdfgroup.org/solutions/hdf5/ HDF5 library] or python's [https://www.h5py.org/ h5py]. Any parallel partitioning has been removed from the dataset for easier reading in both serial and parallel.

=== Notes on the HDF5 library ===

The following instructions are intended for users who wish to compile and obtain the parallel '''h5py''' package. Note that the serial '''h5py''' also works; however, its parallel capabilities will be deactivated.

'''Manual install'''

The package '''h5py''' can be manually installed with parallel support, provided the right libraries are present on the system. To get them, use (on a Debian-like distribution):
<pre class="brush: bash">
sudo apt install libhdf5-mpi-dev
</pre>
or make sure that the environment variable ''HDF5_DIR'' points to your '''hdf5''' installation. Then install '''h5py''' from pip using:
<pre class="brush: bash">
CC=mpicc HDF5_MPI="ON" pip install --no-binary=h5py h5py
</pre>
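Once installed, whether the resulting '''h5py''' build actually has parallel support can be checked from python (a minimal check; it only assumes that '''h5py''' is importable):

```python
import h5py

# h5py.get_config().mpi is True when h5py was built against a
# parallel (MPI-enabled) HDF5 library, and False for a serial build
print("parallel h5py:", h5py.get_config().mpi)
```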
== Instantaneous data format ==
 
The dataset consists of a single file per snapshot, directly containing all the variables as well as the node positions. It also contains some metadata elements such as the number of points, the current simulation time and the instant number. The provided variables are:
* '''xyz''', the node positions, as an array of shape (npoints,3).
* '''PRESS''', the instantaneous pressure, as a scalar array of shape (npoints,).
* '''VELOC''', the instantaneous velocity, as a vectorial array of shape (npoints,3).
* '''GRADP''', the pressure gradient, as a vectorial array of shape (npoints,3).
* '''GRADV''', the velocity gradient, as a tensorial array of shape (npoints,9).
 
=== Reading the data with python ===
An example of how to read this dataset in python follows:
 
<pre class="brush: python">
import h5py, numpy as np
 
filename = 'duct_0.h5' # duct area, first snapshot
 
# Open HDF5 file in serial
file    = h5py.File(filename,'r')
 
# Read metadata variables
npoints = int(file['metadata']['npoints'])
time    = float(file['metadata']['time'])
instant = int(file['metadata']['instant'])
 
# Read variables
PRESS = np.array(file['PRESS'],dtype=np.double)
VELOC = np.array(file['VELOC'],dtype=np.double)
GRADP = np.array(file['GRADP'],dtype=np.double)
GRADV = np.array(file['GRADV'],dtype=np.double)
 
# Close file
file.close()
</pre>
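As an illustration of the array shapes involved, the velocity magnitude and the divergence (the trace of the velocity gradient tensor, which should vanish for this incompressible flow) can be computed as below. Synthetic arrays stand in for the file contents, and the row-major ordering of the nine '''GRADV''' columns is an assumption:

```python
import numpy as np

npoints = 4
# Synthetic stand-ins with the same shapes as VELOC (npoints,3) and
# GRADV (npoints,9); a uniform unit flow in x has zero gradient
VELOC = np.zeros((npoints, 3))
VELOC[:, 0] = 1.0
GRADV = np.zeros((npoints, 9))

# Velocity magnitude per node, shape (npoints,)
vmag = np.linalg.norm(VELOC, axis=1)

# Divergence per node: trace of the 3x3 gradient tensor
# (assumes GRADV stores the tensor row by row)
div = GRADV.reshape(npoints, 3, 3).trace(axis1=1, axis2=2)
```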
 
== Statistical data format ==
 
The dataset consists of a master file which links a number of external files, thus creating a tree-like database. Each of these external files contains an array with a certain number of components per point (1 if scalar, 3 if vectorial and 6 if tensorial, with the exception of the velocity triple correlation, which has 10). A graphical representation of the database is provided in Fig. [[lib:DNS_1-3_format_#figure26|26]].
 
<div id="figure26"></div>
{|align="center" width=750
|[[Image:DNS_1_3_data_scheme.png|750px]]
|-
|'''Figure 26:''' Schematic representation of the HDF5 statistical database.
|}
 
The data is structured in different external files, as shown in Fig. [[lib:DNS_1-3_format_#figure26|26]]. The list number minus 1 corresponds to the array position in python (since python starts counting at 0).
 
'''Inputs'''
<div style="column-count:4;-moz-column-count:4;-webkit-column-count:4">
# <math>{\overline{P}}</math>
# <math>{\overline{U}}</math>
# <math>{\overline{V}}</math>
# <math>{\overline{W}}</math>
# <math>{\overline{\tau}_{11}}</math>
# <math>{\overline{\tau}_{12}}</math>
# <math>{\overline{\tau}_{22}}</math>
# <math>{\overline{\tau}_{13}}</math>
# <math>{\overline{\tau}_{23}}</math>
# <math>{\overline{\tau}_{33}}</math>
# <math>{R_{11}}</math>
# <math>{R_{12}}</math>
# <math>{R_{22}}</math>
# <math>{R_{13}}</math>
# <math>{R_{23}}</math>
# <math>{R_{33}}</math>
# <math>{\partial\overline{P}/\partial x}</math>
# <math>{\partial\overline{U}/\partial x}</math>
# <math>{\partial\overline{V}/\partial x}</math>
# <math>{\partial\overline{W}/\partial x}</math>
# <math>{\partial\overline{P}/\partial y}</math>
# <math>{\partial\overline{U}/\partial y}</math>
# <math>{\partial\overline{V}/\partial y}</math>
# <math>{\partial\overline{W}/\partial y}</math>
# <math>{\partial\overline{P}/\partial z}</math>
# <math>{\partial\overline{U}/\partial z}</math>
# <math>{\partial\overline{V}/\partial z}</math>
# <math>{\partial\overline{W}/\partial z}</math>
</div>
 
'''Additional Quantities'''
# <math>{\overline{pp}}</math>
# <math>{\eta_{T}}</math>
# <math>{\eta_{K}}</math>
# <math>{\eta_{tK}}</math>
 
'''Triple Correlation'''
<div style="column-count:2;-moz-column-count:2;-webkit-column-count:2">
# <math>{\rho\overline{uuu}}</math>
# <math>{\rho\overline{uvu}}</math>
# <math>{\rho\overline{vvu}}</math>
# <math>{\rho\overline{uwu}}</math>
# <math>{\rho\overline{vwu}}</math>
# <math>{\rho\overline{wwu}}</math>
# <math>{\rho\overline{vvv}}</math>
# <math>{\rho\overline{vwv}}</math>
# <math>{\rho\overline{wwv}}</math>
# <math>{\rho\overline{www}}</math>
</div>
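Since the triple correlation is symmetric under any permutation of its indices, the ten components listed above suffice to rebuild the full rank-3 tensor. A sketch of that expansion follows (the component order is that of the list above; the numerical values are placeholders):

```python
import numpy as np

# Ten independent components in the listed order:
# uuu, uvu, vvu, uwu, vwu, wwu, vvv, vwv, wwv, www
T10 = np.arange(1.0, 11.0)  # placeholder values

# Map each sorted index triple (u=0, v=1, w=2) to its list position
comp = {(0,0,0): 0, (0,0,1): 1, (0,1,1): 2, (0,0,2): 3, (0,1,2): 4,
        (0,2,2): 5, (1,1,1): 6, (1,1,2): 7, (1,2,2): 8, (2,2,2): 9}

# Expand to the full symmetric 3x3x3 tensor
T = np.empty((3, 3, 3))
for i in range(3):
    for j in range(3):
        for k in range(3):
            T[i, j, k] = T10[comp[tuple(sorted((i, j, k)))]]
```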
 
''' Pressure Velocity Correlation '''
# <math>{\overline{pu}}</math>
# <math>{\overline{pv}}</math>
# <math>{\overline{pw}}</math>
 
''' Budget Equation Components '''
The components of the Reynolds stress budget equation come in the following order (for a generic budget component <math>\phi</math>):
<div style="column-count:2;-moz-column-count:2;-webkit-column-count:2">
# <math>{\phi_{11}}</math>
# <math>{\phi_{12}}</math>
# <math>{\phi_{22}}</math>
# <math>{\phi_{13}}</math>
# <math>{\phi_{23}}</math>
# <math>{\phi_{33}}</math>
</div>
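The same six-component ordering (11, 12, 22, 13, 23, 33) is used for the Reynolds stresses in the Inputs list, so a single index map expands any of these symmetric tensors to a full 3&times;3 matrix. A minimal sketch with placeholder values:

```python
import numpy as np

# Six components in the listed order: 11, 12, 22, 13, 23, 33
phi = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])  # placeholder values

# Position of each (i,j) entry within the six-component vector
idx = np.array([[0, 1, 3],
                [1, 2, 4],
                [3, 4, 5]])
phi_full = phi[idx]  # full symmetric 3x3 tensor
```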
 
=== Reading the data with python ===
The [https://kbwiki-data.s3-eu-west-2.amazonaws.com/DNS-1/3/HiFiTurbReader.py following script] is provided to facilitate reading the dataset. An example of how to use this script follows:
<pre class="brush: python">
from HiFiTurbDB_Reader import HiFiTurbDB_Reader
 
db = HiFiTurbDB_Reader(FILENAME,return_matrix=True,parallel=False) # Return outputs as matrices
print(db,flush=True) # We can print the database information

# Recover some variables
xyz           = db.points
grad_velocity = db.velocity_gradient # Velocity gradients
Rij           = db.reynolds_stress   # Reynolds stresses
</pre>
 
Alternatively, the dataset can be read raw by directly using the '''h5py''' library. An example follows:
<pre class="brush: python">
import h5py, numpy as np
 
file = h5py.File('Statistics.h5','r')
 
# Read node positions
xyz  = np.array(file['03_Nodes']['Nodes'],dtype=np.double)
 
# Read and parse inputs
# array indices are those of the list above minus 1
inp           = np.array(file['02_Entries']['Inputs'],dtype=np.double)
grad_velocity = inp[:,[17,21,25,18,22,26,19,23,27]].copy() # Velocity gradients
Rij           = inp[:,[10,11,13,11,12,14,13,14,15]].copy() # Reynolds stresses
</pre>
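The nine gradient columns recovered above are ordered <math>\partial\overline{U}/\partial x</math>, <math>\partial\overline{U}/\partial y</math>, <math>\partial\overline{U}/\partial z</math>, <math>\partial\overline{V}/\partial x</math>, and so on, so a simple reshape yields one 3&times;3 tensor per node. A short sketch with a placeholder array standing in for the data read from the file:

```python
import numpy as np

npoints = 2
# Placeholder for the (npoints,9) gradient array built above,
# columns ordered dU/dx, dU/dy, dU/dz, dV/dx, ..., dW/dz
grad_velocity = np.arange(npoints * 9, dtype=np.double).reshape(npoints, 9)

# View as per-point 3x3 tensors: G[n,i,j] = dU_i/dx_j
G = grad_velocity.reshape(npoints, 3, 3)
```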
 
<br/>
----
| organisation=Barcelona Supercomputing Center (BSC)
}}
{{DNSHeader
|area=1
|number=3
}}

Latest revision as of 09:26, 5 January 2023


Contributed by: Oriol Lehmkuhl, Arnau Miro — Barcelona Supercomputing Center (BSC)



© copyright ERCOFTAC 2024