A pulsatile 3D flow relevant to thoracic hemodynamics: CFD - 4D MRI comparison
Application Challenge AC7-04 © copyright ERCOFTAC 2021
CFD Simulations
Overview of CFD Simulation
Large Eddy Simulations were carried out using the in-house, massively parallel and multiphysics YALES2BIO solver, based on YALES2 [4] developed at CORIA (Rouen, France). YALES2BIO is dedicated to the simulation of blood flows at the macroscopic and microscopic scales. At its core is a solver for the incompressible Navier-Stokes equations. The equations are discretized using a fourth-order finite-volume scheme adapted to unstructured meshes [5,6]. The divergence-free property of the velocity field is ensured by the projection method introduced by Chorin [7]. In a prediction step, the velocity field is first advanced in time using a low-storage fourth-order Runge-Kutta scheme [6,8]. This predicted field is then corrected by a pressure gradient, the pressure being obtained by solving a Poisson equation with the Deflated Preconditioned Conjugate Gradient algorithm [9]. YALES2BIO has been validated and successfully used in many configurations relevant to cardiovascular biomechanics (see [10] for a list of publications). The boundary conditions applied at the inlet came from the data acquired during the experiment (2D cine PC-MRI).
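To illustrate this prediction-correction structure, the sketch below applies one projection step on a 2D periodic grid in Python. It is only a structural analogue under simplifying assumptions (spectral derivatives, FFT-based Poisson solve, forward-Euler predictor, illustrative parameters), not the fourth-order unstructured finite-volume scheme, RK4 predictor and DPCG solver actually used in YALES2BIO.

<syntaxhighlight lang="python">
import numpy as np

# 2D periodic grid and spectral wavenumbers (illustrative parameters)
n, L, nu, dt = 64, 2 * np.pi, 1e-2, 1e-3
k1d = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
kx, ky = np.meshgrid(k1d, k1d, indexing="ij")
k2 = kx ** 2 + ky ** 2
k2_safe = k2.copy()
k2_safe[0, 0] = 1.0  # the mean mode carries no pressure correction

def spectral_deriv(f, kk):
    """Spectral derivative of a real field along one direction."""
    return np.real(np.fft.ifft2(1j * kk * np.fft.fft2(f)))

def predict(u, v):
    """Prediction step: advance convection and diffusion explicitly.
    Forward Euler for brevity; the solver described above uses a
    low-storage fourth-order Runge-Kutta scheme."""
    ux, uy = spectral_deriv(u, kx), spectral_deriv(u, ky)
    vx, vy = spectral_deriv(v, kx), spectral_deriv(v, ky)
    lap_u = np.real(np.fft.ifft2(-k2 * np.fft.fft2(u)))
    lap_v = np.real(np.fft.ifft2(-k2 * np.fft.fft2(v)))
    u_star = u + dt * (-(u * ux + v * uy) + nu * lap_u)
    v_star = v + dt * (-(u * vx + v * vy) + nu * lap_v)
    return u_star, v_star

def correct(u_star, v_star):
    """Correction step: remove the gradient part of the predicted field.
    On a periodic grid, the pressure-Poisson solve and the gradient
    correction collapse into a single spectral projection."""
    uh, vh = np.fft.fft2(u_star), np.fft.fft2(v_star)
    k_dot_u = kx * uh + ky * vh
    uh -= kx * k_dot_u / k2_safe
    vh -= ky * k_dot_u / k2_safe
    return np.real(np.fft.ifft2(uh)), np.real(np.fft.ifft2(vh))

# One time step of a Taylor-Green-like initial field
x = np.linspace(0.0, L, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
u, v = np.cos(X) * np.sin(Y), -np.sin(X) * np.cos(Y)
u, v = correct(*predict(u, v))
div = spectral_deriv(u, kx) + spectral_deriv(v, ky)
print(f"max |div u| after projection: {np.abs(div).max():.2e}")
</syntaxhighlight>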
Simulation Case
Computational Domain
The geometry used in the calculation is the same as the one presented in the test data section. The computational domain has one inlet and one single outlet, so that the flow split at the main tube/collateral junction does not depend on the details of the outlet boundary condition. The mesh (Fig. 5) used to perform the simulation is an unstructured tetrahedral mesh generated with GAMBIT 2.4.6 (ANSYS, Inc., Canonsburg, PA). It includes 3,812,438 cells with an average cell volume of 0.038 mm³ (representative cell size of 0.7 mm).
Figure 5: Computational domain and numerical mesh
Left: Geometry - Dimensions are given with respect to the center of the collateral, Right: Mesh at the inlet plane
Solution Strategy
The blood, or blood-mimicking fluid, is described by the filtered Navier-Stokes equations (NSE):

<math>
\frac{\partial \bar{u}_i}{\partial t} + \frac{\partial \bar{u}_i \bar{u}_j}{\partial x_j} = -\frac{1}{\rho}\frac{\partial \bar{p}}{\partial x_i} + \frac{\partial}{\partial x_j}\left[\left(\nu + \nu_{sgs}\right)\left(\frac{\partial \bar{u}_i}{\partial x_j} + \frac{\partial \bar{u}_j}{\partial x_i}\right)\right], \qquad \frac{\partial \bar{u}_i}{\partial x_i} = 0,
</math>

where <math>\bar{\cdot}</math>, <math>u</math>, <math>p</math>, <math>\rho</math>, <math>\nu</math> and <math>\nu_{sgs}</math> are the filter operator, velocity field, pressure field, density, kinematic viscosity and subgrid-scale kinematic viscosity of the fluid, respectively.
The governing equations are discretized and solved using the YALES2BIO solver. In the present simulation a centred fourth-order numerical scheme with an explicit fourth-order Runge-Kutta time advancement scheme was used. To ensure numerical stability, the time step was computed to keep a CFL number equal to 0.9. The pressure advancement and divergence-free condition were met thanks to a fractional-step method (Kim & Moin, 1985 [11]), a modified version of Chorin's algorithm (Chorin, 1968 [7]). The Sigma eddy-viscosity-based LES model (Nicoud et al., 2011 [12]) was employed to account for the effects of the unresolved turbulent scales. In this model the subgrid-scale kinematic viscosity is modelled as:

<math>
\nu_{sgs} = \left(C_\sigma \Delta\right)^2 \frac{\sigma_3\left(\sigma_1 - \sigma_2\right)\left(\sigma_2 - \sigma_3\right)}{\sigma_1^2},
</math>

where <math>C_\sigma</math> is the model constant, <math>\Delta</math> is the typical size of the local cell of the mesh and <math>\sigma_1 \geq \sigma_2 \geq \sigma_3 \geq 0</math> are the three singular values of the local velocity gradient tensor [12]. The Sigma model guarantees that no eddy viscosity is applied in canonical laminar flows and is well adapted to wall-bounded flows and transitional flows [13,14].
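A sketch of this evaluation at a single cell is given below, assuming the local velocity gradient tensor is available; the tensor, the cell size and the constant value <code>c_sigma=1.35</code> are illustrative placeholders rather than values taken from the simulation setup.

<syntaxhighlight lang="python">
import numpy as np

def sigma_viscosity(grad_u, delta, c_sigma=1.35):
    """Subgrid-scale kinematic viscosity of the Sigma model.

    grad_u  : (3, 3) local velocity gradient tensor g_ij = du_i/dx_j
    delta   : typical size of the local cell
    c_sigma : model constant (an assumed value; see Nicoud et al. [12])
    """
    # Singular values sigma_1 >= sigma_2 >= sigma_3 >= 0 of the tensor;
    # numpy returns them sorted in descending order.
    s1, s2, s3 = np.linalg.svd(grad_u, compute_uv=False)
    if s1 == 0.0:
        return 0.0  # quiescent flow: no eddy viscosity
    d_sigma = s3 * (s1 - s2) * (s2 - s3) / s1 ** 2
    return (c_sigma * delta) ** 2 * d_sigma

# Pure shear: singular values are (|g|, 0, 0), hence nu_sgs = 0, which is
# the model's built-in behaviour for canonical laminar flows.
grad_shear = np.array([[0.0, 100.0, 0.0],
                       [0.0, 0.0, 0.0],
                       [0.0, 0.0, 0.0]])
print(sigma_viscosity(grad_shear, delta=0.7e-3))  # 0.0
</syntaxhighlight>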
Boundary Conditions
A zero-pressure condition was prescribed at the outlet section, while a no-slip condition was imposed at the solid boundaries. Over the whole cycle, an averaged near-wall resolution <math>y^+</math> was reported. The highest <math>y^+</math> values were found at peak systole, both in mean and in maximum over the wall.
To prescribe the velocity field at the inlet, a pixel-based inflow was derived from the 2D cine PC-MRI measurements. Beforehand, the velocity fields from these measurements were corrected to account for partial volume effects caused by the phantom walls and random noise, in particular for the through-plane velocity component. To this end, a linear radial velocity distribution was assumed between zero at the wall and the measured value at the inner radius <math>r_i</math>, where <math>r_i</math> is a user-defined inner radius, here chosen as a fraction of the radius of the circular cross-sectional inlet surface. Then a 2D bilinear interpolation from this corrected velocity field onto the inlet surface of the CFD domain was performed for each of the 30 cardiac phases. Once the velocity field had been spatially interpolated, its discrete temporal evolution had to be converted into a time-continuous signal. As the flow rate delivered by the pump was periodic, a trigonometric interpolation was used by applying at each node a discrete Fourier transform of the temporally discretized velocity signal. Thereby 15 non-redundant complex Fourier coefficients were extracted, and the continuous inlet velocity signal was obtained by injecting these coefficients back into a Fourier series expansion for each node of the inlet surface.
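The trigonometric interpolation can be sketched as follows for a single inlet node, using placeholder velocity samples rather than measured data; with 30 real samples per cycle, the real FFT yields the non-redundant half of the spectrum, consistent with the coefficients mentioned above.

<syntaxhighlight lang="python">
import numpy as np

T = 1.0                  # cycle period [s]
n_phases = 30            # discrete cardiac phases over one cycle
t_disc = np.arange(n_phases) * T / n_phases

# Placeholder velocity samples at one inlet node (not measured data)
u_disc = (0.4 + 0.3 * np.sin(2 * np.pi * t_disc / T)
          + 0.05 * np.cos(6 * np.pi * t_disc / T))

# Real FFT of 30 samples: harmonics k = 0..15 (the non-redundant half)
coeffs = np.fft.rfft(u_disc)

def u_continuous(t):
    """Evaluate the trigonometric interpolant at arbitrary times t."""
    t = np.asarray(t, dtype=float)
    u = np.full(t.shape, coeffs[0].real / n_phases)
    for k in range(1, coeffs.size):
        # conjugate-symmetric harmonics count twice, the Nyquist mode once
        w = 1.0 if 2 * k == n_phases else 2.0
        phase = 2 * np.pi * k * t / T
        u += (w / n_phases) * (coeffs[k].real * np.cos(phase)
                               - coeffs[k].imag * np.sin(phase))
    return u

# The interpolant passes exactly through the 30 discrete phases
assert np.allclose(u_continuous(t_disc), u_disc)
print(u_continuous(0.123))  # velocity at an arbitrary instant
</syntaxhighlight>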
CFD post-processing
In order to enable comparison with the 4D Flow MRI measurements, two main post-processing steps were implemented so as to be consistent with the MRI acquisition process.
Phase-averaging
First, the CFD was phase-averaged. Indeed, during a conventional 4D Flow scan, the k-space (spatial-frequency space) is sequentially filled over hundreds of RF-pulse and magnetic-gradient sequences. Thus it does not appear relevant to compare instantaneous CFD velocity fields with MRI measurements. Another argument is that high-Reynolds-number unsteady flows may lead to cycle-to-cycle fluctuations. Hence an appropriate approach is to phase-average the simulated flow velocity field over a sufficiently representative number of cardiac cycles. The phase-averaged velocity <math>\bar{\mathbf{u}}(\mathbf{x},t)</math> at spatial coordinates <math>\mathbf{x} = (x,y,z)</math> and at time <math>t</math> was defined as:

<math>
\bar{\mathbf{u}}(\mathbf{x},t) = \frac{1}{N}\sum_{n=0}^{N-1} \mathbf{u}(\mathbf{x},\, t + nT),
</math>

where <math>\mathbf{u}(\mathbf{x},t)</math> refers to the instantaneous velocity vector and <math>N = 30</math> is the total number of cycles of period <math>T = 1</math> s. As will be further expanded on in the numerical accuracy section, the first 10 simulated cycles were removed to cancel the effect of the arbitrary initial condition, and thus <math>N</math> corresponds to the number of cycles computed afterwards.
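A minimal sketch of this step, assuming the instantaneous velocity has been sampled at fixed phases of each cycle and stored in a single array (names, shapes and data are illustrative, not solver output):

<syntaxhighlight lang="python">
import numpy as np

# Illustrative array shapes: 40 simulated cycles, 30 phases per cycle,
# velocity stored at n_points nodes (synthetic placeholder data).
n_cycles_total, n_phases, n_points = 40, 30, 1000
rng = np.random.default_rng(0)
u_inst = rng.standard_normal((n_cycles_total, n_phases, n_points, 3))

n_discard = 10                      # forget the arbitrary initial condition
u_kept = u_inst[n_discard:]         # N = 30 remaining cycles
u_bar = u_kept.mean(axis=0)         # phase-averaged field, (30, n_points, 3)
</syntaxhighlight>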
Downsampling
Then, the results of both methods have to be given on the same grid in order to allow a quantitative comparison. One possibility is to interpolate the MRI velocity fields on the finer CFD grid. Another method, which is the one chosen here, is to downsample the CFD velocity fields onto the coarser MRI grid. The idea of the downsampling method is to localize the CFD nodes within a given MRI voxel and to average the contributions of each CFD node corresponding to this MRI voxel in order to build a low-resolution CFD (LR-CFD) velocity field. The steps of the downsampling process (Fig. 6) were the following: first the MRI grid was subsampled with 5 voxels in each direction (5<math>^3</math> = 125 subvoxels per voxel). Then the CFD solution was linearly interpolated on the subsampled Cartesian grid. Finally, these interpolated velocity fields were averaged over the 125 subvoxels included in each MRI voxel, which led to the LR-CFD velocities expressed on the MRI grid.
[[File:AC7-04_Downsampling.png|600px|center]]
Figure 6: Illustration of the 3D downsampling process as viewed from the coronal (xz)-plane. The MRI grid is first subsampled with 5 voxels in each direction (125 subvoxels per voxel). The CFD solution is linearly interpolated on the subsampled grid, which is downsampled back to the MRI grid by averaging the velocity over the 125 subvoxels included in each MRI voxel.
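One velocity component of this chain can be sketched as follows, under the simplifying assumption that the CFD solution is already available on a fine Cartesian grid (the real chain interpolates directly from the unstructured CFD nodes; grid sizes and spacings below are placeholders):

<syntaxhighlight lang="python">
import numpy as np
from scipy.interpolate import RegularGridInterpolator

dx = 2.0e-3                          # MRI voxel size [m] (placeholder)
n_vox = (16, 16, 16)                 # MRI grid size (placeholder)
n_sub = 5                            # subvoxels per direction, 5**3 = 125

# Fine Cartesian grid carrying the (synthetic) CFD velocity component
axes = [np.linspace(0.0, n * dx, 4 * n) for n in n_vox]
u_cfd = np.random.default_rng(1).standard_normal([a.size for a in axes])
interp = RegularGridInterpolator(axes, u_cfd)   # linear by default

# Subvoxel-centre offsets within one voxel
off = (np.arange(n_sub) + 0.5) / n_sub * dx
sub = np.stack(np.meshgrid(off, off, off, indexing="ij"),
               axis=-1).reshape(-1, 3)

u_lr = np.zeros(n_vox)
for i in range(n_vox[0]):
    for j in range(n_vox[1]):
        for k in range(n_vox[2]):
            corner = np.array([i, j, k]) * dx
            # average the interpolated field over the 125 subvoxels
            u_lr[i, j, k] = interp(corner + sub).mean()
</syntaxhighlight>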
Numerical Accuracy
The numerical accuracy studies were conducted on previously obtained data and are detailed in T. Puiseux's PhD thesis [1]. The CFD simulation process was the same as the one described above. The only difference from the setup presented in this document is the shape of the inlet velocity profile. Note however that the peak systolic Reynolds numbers at the inlet are very similar (1830 in this previous study against 1915 here). This allows us to be confident in these previous analyses, which covered mesh convergence, phase-averaging sensitivity and the turbulence modelling employed for the subgrid scales.
Mesh sensitivity analysis
To ensure spatial convergence of the simulations, the effect of the mesh was studied on 3 different meshes: a coarse one with 622,000 cells (cell size = 1.3 mm), a medium one with 1,284,000 cells (cell size = 1.0 mm) and a fine one with 3,812,000 cells (cell size = 0.7 mm). The magnitude of the phase-averaged velocity in the aneurysm resulted in a numerical uncertainty of 1.88% for the latter mesh, with an apparent spatial order of convergence p = 2.0 (see Sigüenza et al., 2016 [15] for further details on the method). Therefore the fine mesh was considered sufficient for the velocity field to be spatially converged and independent of the spatial resolution.
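For illustration, a generalized Richardson-extrapolation estimate of the apparent order and the associated uncertainty can be computed from three mesh levels as sketched below; the solution values are placeholders, and the exact uncertainty procedure of [15] may differ in detail.

<syntaxhighlight lang="python">
import numpy as np

# Placeholder solution values on the three meshes, ordered fine -> coarse
h = np.array([0.7, 1.0, 1.3])            # representative cell sizes [mm]
f = np.array([1.0049, 1.0100, 1.0169])   # e.g. a velocity-magnitude metric

r21, r32 = h[1] / h[0], h[2] / h[1]      # refinement ratios
eps21, eps32 = f[1] - f[0], f[2] - f[1]
s = np.sign(eps32 / eps21)

p = 2.0  # apparent order, solved by fixed-point iteration (r21 != r32)
for _ in range(50):
    q = np.log((r21 ** p - s) / (r32 ** p - s))
    p = abs(np.log(abs(eps32 / eps21)) + q) / np.log(r21)

f_ext = (r21 ** p * f[0] - f[1]) / (r21 ** p - 1.0)   # extrapolated value
gci = 1.25 * abs(eps21 / f[0]) / (r21 ** p - 1.0)     # uncertainty estimate
print(f"p = {p:.2f}, extrapolated = {f_ext:.4f}, GCI = {100 * gci:.2f}%")
</syntaxhighlight>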
Phase-averaging sensitivity analysis
A sensitivity analysis was carried out to estimate the minimal number of cycles needed for the phase-averaged velocities to converge. First, as mentioned in the paragraph about the phase-averaging process, the first 10 cycles were removed to cancel the impact of the initial condition. 60 cycles were then simulated. The results are presented in Fig. 7 and show that whereas the main components of the velocity were well-converged after about 5 cycles, around 30 cycles were required for the transverse component of the velocity. Thereby, 30 cycles were considered sufficient to reach a reasonable asymptotic value for the velocity field. Furthermore, the reproducibility of the process was tested by extracting and independently phase-averaging two subsequent sets of 30 cycles from the 60 cycles simulated. High values of the Pearson correlation coefficient were found, which confirms that the process was independent of the set of cycles selected.
Figure 7: Evolution of the Pearson correlation between the phase-averaged CFD velocity computed over different numbers of cycles and the 60-cycle phase-averaged CFD velocity. The correlations were computed over all nodes of the computational domain.
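The reproducibility check itself reduces to a few lines; the sketch below uses synthetic fields standing in for the two 30-cycle phase averages.

<syntaxhighlight lang="python">
import numpy as np

# Pearson correlation between phase-averaged fields built from two
# independent sets of cycles (synthetic placeholder data).
rng = np.random.default_rng(2)
truth = rng.standard_normal((30, 1000, 3))   # underlying phase-averaged flow
avg_a = truth + 0.01 * rng.standard_normal(truth.shape)  # first 30 cycles
avg_b = truth + 0.01 * rng.standard_normal(truth.shape)  # last 30 cycles

r = np.corrcoef(avg_a.ravel(), avg_b.ravel())[0, 1]
print(f"Pearson R = {r:.4f}")  # values close to 1 indicate reproducibility
</syntaxhighlight>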
Turbulence resolution analysis
In the LES strategy the subgrid scales are not explicitly resolved but modelled. A general criterion for the turbulence resolution in LES was proposed by Pope (2000, [16]), whereby an LES is considered well-resolved if at least 80% of the turbulent kinetic energy is resolved. Formally, this can be written as:

<math>
\frac{k_{res}}{k_{res} + k_{sgs}} \geq 0.8,
</math>

where <math>k_{res}</math> is the turbulent kinetic energy of the resolved eddies and <math>k_{sgs}</math> the turbulent kinetic energy of the unresolved subgrid scales. The latter quantity has to be estimated since it is not explicitly calculated in LES. The following estimate was proposed by Yoshizawa and Horiuti (1985, [17]):

<math>
k_{sgs} = \left(\frac{\nu_{sgs}}{C_k \Delta}\right)^2,
</math>
where <math>\nu_{sgs}</math> is the subgrid-scale viscosity added by the model, <math>\Delta</math> the local characteristic mesh size and <math>C_k</math> a model constant. Another formulation derived by Pope (2000, [16]) is given by:

<math>
k_{sgs} = C_K \left(\epsilon\, \Delta\right)^{2/3},
</math>

valid when <math>\Delta</math> is small compared with <math>L</math>, the characteristic length of the largest structures, where <math>\epsilon</math> is the dissipation rate and <math>C_K</math> a constant. As reported in Fig. 8, at most about 5% of the nodes show a fraction of unresolved turbulent kinetic energy above 20%. The highest ratios of unresolved turbulent kinetic energy arise at peak systole (phase 5), when the Reynolds number reaches its maximum.
Figure 8: Temporal evolution along a cycle of the node fraction verifying <math>k_{sgs}/(k_{res} + k_{sgs}) > 0.2</math>.
Note that in these previous simulations, peak systole was reached at phase 5.
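Putting the criterion and a Yoshizawa-type estimate together, the per-node check can be sketched as follows; every field and the constant <code>c_k</code> are synthetic placeholders, not values from the simulation.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(3)
n_nodes = 100_000
k_res = rng.uniform(1e-4, 1e-2, n_nodes)    # resolved TKE [m^2/s^2]
nu_sgs = rng.uniform(0.0, 2e-6, n_nodes)    # subgrid viscosity [m^2/s]
delta = 0.7e-3                              # local mesh size [m]
c_k = 0.094                                 # assumed model constant

k_sgs = (nu_sgs / (c_k * delta)) ** 2       # unresolved-TKE estimate
m = k_res / (k_res + k_sgs)                 # resolved-energy fraction
frac_bad = np.mean(m < 0.8)                 # nodes violating the 80% rule
print(f"{100 * frac_bad:.1f}% of nodes with k_sgs/(k_res + k_sgs) > 0.2")
</syntaxhighlight>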
Another simulation was performed with a finer mesh (27 million cells with a characteristic size of 0.35 mm), for which the Pope criterion was satisfied (<math>k_{res}/(k_{res} + k_{sgs}) \geq 0.8</math> for all the nodes). For the reference mesh (3.8 million elements), the turbulent viscosity ratio is generally quite low (less than 20%) and eddy viscosity is added mainly in the aneurysm and in the jet flowing out of the collateral branch. For the refined simulation, almost no eddy viscosity is added. The phase-averaged velocity relative error between the two simulations equals 0.8% of the maximum velocity. Given the small gain associated with this mesh refinement, the reference mesh is considered sufficiently well-resolved to estimate the velocity accurately.
Contributed by: Morgane Garreau — University of Montpellier, France
© copyright ERCOFTAC 2021