# 7. The viscous term

In this chapter, we concern ourselves with the viscous term, i.e. the underlined term in the equation below,

$\frac{\partial \mathbf{u}}{\partial t} = -(\mathbf{u} \cdot \nabla)\mathbf{u} - \nabla p + \underline{\nu \nabla^2 \mathbf{u}}.$

This term describes the diffusion of momentum originating from the exchange of momentum between colliding molecules. For sufficiently continuous fluids, the effects of the incredibly complicated statistical dynamics of these collisions are remarkably well modeled with the relatively simple diffusion term with a constant diffusivity of momentum (the kinematic viscosity, $\nu$).

Again, it will prove useful to consider a helper equation: The diffusion of a scalar field $s$ in a medium with diffusivity $\kappa$,

$\frac{\partial s}{\partial t} = \kappa \nabla^2s,$

where $\nabla^2s$ symbolizes the divergence of the gradient of $s$,

$\frac{\partial s}{\partial t} = \kappa (\nabla \cdot \nabla s),$

but if we write it without vector-operator notation it reads,

$\frac{\partial s}{\partial t} = \kappa \left(\frac{\partial^2 s}{\partial x^2}+\frac{\partial^2 s}{\partial y^2}\right).$

It appears we need to estimate the second derivative of the field
$s$
with respect to the spatial directions. The second derivative in any
direction can be evaluated by differentiating the first derivative.
Conceptually, it would be an OK idea to implement a
`gradient_x(s[N][N], ds_dx[N][N])` function (and a `gradient_y()`
counterpart), which compute the spatial derivatives of a field. We
could then use them to compute higher-order derivatives by calling
them successively. For our second derivative,

```
// We will not use this!
double s[N][N];
...
// Storage for first and second derivative in x direction
double ds_dx[N][N], d2s_dx2[N][N];
// Differentiate twice
gradient_x (s, ds_dx);
gradient_x (ds_dx, d2s_dx2);
```

Although performance was not a concern in the design of our solver, the code above is wasteful to such a degree that it would be an insult to the didactic nature of this text. This is because we can achieve the same result without introducing new fields, by analyzing the stencil operations. Say we use the 3-point second-order-accurate gradient,

$\frac{\partial s}{\partial x}[0,0] \approx \frac{s[1,0] - s[-1,0]}{2\Delta},$

where $\frac{\partial s}{\partial x}[0,0]$ denotes the derivative in the
current cell and $s[1,0]$ denotes the right-hand-side neighbor value
(`val(s, 1, 0)`). The second derivative is then approximated by,

$\frac{\partial^2 s}{\partial x^2}[0,0] \approx \frac{\frac{\partial s}{\partial x}[1,0] - \frac{\partial s}{\partial x}[-1,0]}{2\Delta}.$

Combining the equations,

$\frac{\partial^2 s}{\partial x^2}[0,0] \approx \frac{s[-2,0] - 2s[0,0] + s[2,0]}{4\Delta^2}.$

Now we can directly evaluate this expression when needed instead of
declaring fields for storage, which is so much better that the
"successive gradients" method becomes a bit silly. It is, however, odd
that our expression is 5 points wide (from $s[-2,0]$ to $s[2,0]$) but
only uses three values. We have inherited this from the original
conception, where the (intermediate) first derivative was evaluated at
the grid points. The field `s` and its two derivatives needed to be
co-located for the successive-gradients approach. But for the
theoretical analysis we could have computed these values at the mid
points in between cells,

$\frac{\partial s}{\partial x}[\frac{1}{2},0] \approx \frac{s[1,0] - s[0,0]}{\Delta}.$

Redoing the exercise in the so-called *staggered* fashion,
where the gradients are (on paper) evaluated in between the grid points,
results in,

$\frac{\partial^2 s}{\partial x^2}[0,0] \approx \frac{s[-1,0] - 2s[0,0] + s[1,0]}{\Delta^2}.$

We are now ready to implement the diffusion term for a scalar field
$s$. In `ns.h`, insert

```
...
void advection (...) {
...
}

void diffusion (double s[N][N], double kappa, double ds_dt[N][N]) {
  foreach() {
    val(ds_dt, 0, 0) += kappa/sq(Delta)*
      (val(s, -1, 0) - 2*val(s, 0, 0) + val(s, 1, 0));
    val(ds_dt, 0, 0) += kappa/sq(Delta)*
      (val(s, 0, -1) - 2*val(s, 0, 0) + val(s, 0, 1));
  }
}

double dt_CFL () {
...
```

Notice that we have chosen to *add* (using `+=`) the diffusive
tendency to the input field `ds_dt`. This way, we can simply add the
viscous tendency to an earlier-initialized (advection) tendency.

## A stability criterion

Just as for the advection equation, explicit time integration of the diffusion equation is prone to instabilities when the time step ($\mathrm{d}t$) is too large relative to the grid size ($\Delta$). We can again compare the two, using the equation parameter $\kappa$, with a dimensionless number ($Pe$),

$Pe = \frac{\kappa \mathrm{d}t}{\Delta^2}.$

A so-called critical cell-Peclet number ($Pe_{\mathrm{crit}}$) can be used to compute a stable time-step size,

$\mathrm{d}t = Pe_{\mathrm{crit}}\frac{\Delta^2}{\kappa}.$

Clearly, this diffusion-based limit shrinks faster for small $\Delta$
than the CFL-based limit does. For our solver, we will simply take the
minimum of these constraints. This warrants moving our `min/max`
definitions from the `visual.h` file to the common utilities file
`common.h`. In `ns.h`, we will consider the diffusivity for a scalar
field ($\kappa$) and the fluid's viscosity ($\nu$) separately. Since
the latter is just a fancy word for the diffusivity of momentum, we
take their maximum to compute the Peclet-based limit.

```
...
double CFL = 0.8, Pe = 0.1;
double kappa = 0, nu = 0;
...
double dt_CFL() {
...
}

double dt_diffusion() {
  return Pe*sq(Delta)/max(kappa, nu);
}

double dt_next (double t_end) {
  double DT = min(dt_CFL(), dt_diffusion());
  if (t + DT > t_end)
    return t_end - t;
  return DT;
}
```

## Testing

The diffusion of a Gaussian bump has an analytical solution. In 2D,

$s(t, x, y) = \frac{e^{\frac{-x^2 -y^2}{4\kappa (t + t_0)}}}{4\pi\kappa (t + t_0)},$

with arbitrary time-shift parameter $t_0$.
We can set up a test using this. First qualitatively, in `diffusion_vis.c`,

```
#include "ns.h"
#include "visual.h"
#define pi (3.14159265)
#define SOL(t) ((exp((-sq(x) - sq(y))/(4*kappa*(t + t0))))/(4*pi*kappa*(t + t0)))
double t0 = 0.5, t_end = 1;
double kappa_val = 2.;
double s[N][N];

int main() {
  L0 = 20;
  X0 = Y0 = -L0/2.;
  kappa = kappa_val;
  foreach()
    val(s, 0, 0) = SOL(t);
  int iter = 0;
  for (t = 0; t < t_end; t += dt) {
    dt = dt_next(t_end);
    double ds_dt[N][N];
    foreach()
      val(ds_dt, 0, 0) = 0;
    diffusion (s, kappa, ds_dt);
    foreach()
      val(s, 0, 0) += dt*val(ds_dt, 0, 0);
    char fname[99];
    sprintf (fname, "s-%04d.ppm", iter);
    output_ppm (s, fname, -0.1, 0.1);
    iter++;
  }
  printf ("# Solver finished at t = %g after %d iterations\n",
          t, iter);
}
```

Compile and run,

```
$ rm s-*.ppm
$ gcc -DN=200 diffusion_vis.c -lm
$ ./a.out
# Solver finished at t = 1 after 2001 iterations
```

Phew! 2001 iterations; that movie will take about 80 seconds at 25
frames per second. Let's reduce it a bit by only writing a frame every
`iter_interval` iterations.

```
...
int iter_interval = 20;

int main() {
  ...
  if ((iter % iter_interval) == 0) {
    char fname[99];
    sprintf (fname, "s-%04d.ppm", iter);
    output_ppm (s, fname, -0.1, 0.1);
  }
  iter++;
  ...
```

Let's check the movie (remember to remove the old images).

This looks good, and we can continue to do the convergence test,
which is left as an exercise for the reader. Notice that for a
well-behaved solver, the effort now scales $\propto \Delta^{-4}$:
halving $\Delta$ quadruples the number of cells, and since the
diffusive limit gives $\mathrm{d}t \propto \Delta^2$, it also
quadruples the number of time steps. This is truly horrific. At this
point it can become advantageous to compile with *optimization*: at
the expense of a more time-consuming compilation step (i.e. the
`gcc ...` command), a more optimized executable is generated. The
steps taken in this optimization process are truly remarkable, but
far outside the scope of this course. Instead, we simply "turn on
optimization" with a command-line option. The highest level of
optimization supported by the gcc compiler is achieved with the option
`-O3`.

`$ gcc -O3 -DN=200 diffusion_vis.c -lm`

It is now time for another Antr’acte before we add the last term (or is it?).