# 6. Testing the tracer_advection() function

## A qualitative test

We can now start to combine our previous lessons to set up a tracer-advection test case. Any advection problem requires a velocity field, and since we are preparing for a full Navier-Stokes solver, it makes sense to declare the fields for its components in the ns.h file.

```c
#include "common.h"

double ux[N][N], uy[N][N];

double t, dt = 1;

void tracer_advection (...
```

We have also introduced global variables for the time (t) and the time-step size (dt), whose values we can alter in the test file test_advection.c. In this file we set up the test, which consists of including the relevant header files and declaring a scalar field. In test_advection.c:

```c
#include "ns.h"
#include "visual.h"

// Scalar field
double s[N][N];

int main() {
  // setup domain size
  L0 = 10;
  X0 = Y0 = -L0/2;
}
```

All OK? We can continue by initializing the relevant field values. For s we can reuse the compact functions from the previous chapter. For the velocity field, we use a simple translation. The case is fully defined by setting an end time that corresponds to a complete cycle through the periodic domain.

```c
  ...
  X0 = Y0 = -L0/2.;
  // Initialize s, ux and uy
  foreach() {
    val(s, 0, 0) =   exp(-sq(x + 2) - sq(y + 2))
                   - exp(-sq(x - 1) - sq(y - 2));
    val(ux, 0, 0) = -2.;
    val(uy, 0, 0) = 0.;
  }
  // end time
  double t_end = 5;
}
```

Now we set up the time loop (cf. Chapt. 2) with a small value for dt.

```c
  ...
  // end time
  double t_end = 5;
  dt = 0.05; // time-step size (global, declared in ns.h)
  // step counter
  int iter = 0;
  // Time loop
  for (t = 0; t < t_end; t += dt) {
    // Tendency
    double ds_dt[N][N];
    tracer_advection (s, ds_dt, ux, uy); // compute the tendency ds_dt
    foreach()
      val(s, 0, 0) += dt*val(ds_dt, 0, 0);
    // Visualize
    char fname[32];
    sprintf (fname, "s-%04d.ppm", iter);
    output_ppm (s, fname, -0.7, 0.7);
    // Increment step counter
    iter++;
  }
  printf ("# Solver finished after %d steps at t = %g\n",
          iter, t);
}
```

Most of the code above should look familiar from the previous chapters. After some cleaning up, we can compile and run our test.

```
$ rm s-*.ppm
$ gcc test_advection.c -lm
$ ./a.out
# Solver finished after 101 steps at t = 5.05
```

Great! Let's generate a movie,

```
$ convert s-*.ppm s_advect.mp4
```

If you have arrived at this point, you should realize that a lot has gone well. Indeed the blobs move(!), in the expected direction(!) (ux < 0), and the expected distance(!) (single cycle). All is well then?

Well… let's push our solver a bit and increase the resolution.

```
$ rm s-*.ppm
$ gcc -DN=400 test_advection.c -lm
$ ./a.out
# Solver finished after 101 steps at t = 5.05
$ convert s-*.ppm s_400.mp4
```

Disaster!

### A stability criterion

We have obtained a nonphysical solution due to the rapidly accelerating growth of errors. Indeed, our upwinding strategy does not yield unconditionally stable time integration. Even before the advent of the digital computer it was known that the time-step size cannot be chosen freely for such advection problems. Rather, for explicit time integration (such as forward Euler) it must be chosen small enough that the cells do not “overflow” in a single step. This is formalized by the so-called CFL condition, named after its “inventors”, Richard Courant, Kurt Friedrichs, and Hans Lewy. It can be interpreted as follows: there exists a dimensionless number ($\mathrm{CFL}$) that compares the time-step size ($\mathrm{d}t$) against the mesh-element size ($\Delta$), based on the (maximum) velocity in the domain:

$\mathrm{CFL} = \frac{\mathrm{d}t \|u\|_{\mathrm{max}}}{\Delta}.$
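We can plug the numbers of our two experiments into this definition (assuming the default grid is N = 100, so that $\Delta = L0/N$; with $\mathrm{d}t = 0.05$ and $\|u\|_{\mathrm{max}} = 2$):

$\mathrm{CFL}_{N = 100} = \frac{0.05 \times 2}{10/100} = 1, \qquad \mathrm{CFL}_{N = 400} = \frac{0.05 \times 2}{10/400} = 4.$

The first run sat right at the marginal value, whereas the high-resolution run exceeded it fourfold, which is consistent with the stable and unstable behavior we observed.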

For any explicit time-integration method, there exists a finite critical value of $\mathrm{CFL}$ below which the solution remains stable. Typically, $\mathrm{CFL}_{\mathrm{crit.}} \approx 1$. In practice, this entails selecting a time-step size based on this so-called CFL condition,

$\mathrm{d}t = \mathrm{CFL}_{\mathrm{crit.}}\frac{\Delta}{\|u\|_{\mathrm{max}}}.$

Notice that this stability criterion is separate from any accuracy criterion. Our test in the second chapter was therefore not representative of the time integration of flow problems. Further, the CFL criterion directly relates the spatial discretization parameter ($\Delta$) to the temporal one. This is somewhat natural when doing a convergence study for a spatio-temporal problem.

For now, we have to extend our code in ns.h with a function that returns the CFL-based limit. Notice that the C language does not come with a simple function that finds the absolute maximum of an array, so we need to code it ourselves. In ns.h,

```c
...
double t, dt = 1;
double CFL = 0.7;

...
}

double dt_CFL () {
  double max_v = -1.;
  foreach() {
    if (fabs(val(ux, 0, 0)) > max_v)
      max_v = fabs(val(ux, 0, 0));
    if (fabs(val(uy, 0, 0)) > max_v)
      max_v = fabs(val(uy, 0, 0));
  }
  if (max_v > 0) // Don't divide by zero
    return CFL*Delta/max_v;
  return 1e30;
}
```

After setting a safe value for CFL, the new function, which takes no input, determines the absolute maximum value of the velocity-component fields and computes an appropriate time-step size. A special check is performed for the case where max_v = 0, to prevent dividing by zero. Note that once a return statement is executed, control leaves the function immediately; i.e. this function returns either the CFL-based limit or $10^{30}$. Finally, I will admit that it is somewhat debatable and inconsistent to pass ux and uy as input to the function tracer_advection(), yet rely on the global variables for the velocity data in dt_CFL(). You may choose to do otherwise.

While we are in ns.h, I would also like our solver to stop at t = t_end, and not at t = t_end + dt, which was the case in our previous experiment (see the terminal output). Once the time parameter t is close to t_end, t + dt should be smaller than or equal to t_end, so as not to overstep. For this purpose, we add a new function in ns.h:

```c
...
  return 1e30;
}

double dt_next (double t_end) {
  double DT = dt_CFL();
  if (t + DT > t_end)
    return t_end - t;
  return DT;
}
```

We use this to update our time stepping in test_advection.c:

```c
  ...
  // end time
  double t_end = 5; // L0/|ux|
  // step counter
  int iter = 0;
  // Time loop
  for (t = 0; t < t_end; t += dt) {
    // compute the time-step size
    dt = dt_next(t_end);
    // Tendency
    double ds_dt[N][N];
    ...
```

Now test

```
$ rm s-*.ppm
$ gcc -DN=400 test_advection.c -lm
$ ./a.out
# Solver finished after 572 steps at t = 5
$ convert s-*.ppm s_400.mp4
```

It appears the solver took more steps and ended at the correct time. You can verify the contents of s_400.mp4 yourself. To test our upwinding implementation, it would also be good to redo the experiment with positive and negative values for both ux and uy.

This chapter continues with two exercises.

## (1) A quantitative test

It would be good not only to visually inspect the solution, but also to diagnose the convergence rate of our formulations. For this purpose, you could set up a workflow as in Chapt. 3, using the initialized solution as the reference analytical solution after one full cycle. Indeed, pure translation should not alter the shape of the solution. For this case, the domain size needs to be enlarged, as the initialized solution is not consistent with the periodic boundaries. This inconsistency is exponentially reduced by increasing L0. Note that, when analyzing your results, both the spatial and temporal discretizations are only first-order accurate.

## (2) A well-behaved solver?

Although the speed of our solver is not a critical concern, we should verify that its time efficiency at least behaves well. This entails comparing the scaling of the wall-clock time against the expected scaling of the iteration effort. For this purpose, we turn off the output routine,

```c
  ...
  // output_ppm (...
  ...
```

so that the disk-writing performance does not affect our wall-clock time to solution. Next, we generate three executables with increasingly finer grids. In order not to overwrite a previously generated program, we use the -o option to name the output programs something other than the default a.out.

```
$ gcc -DN=100 test_advection.c -lm -o ta-100
$ gcc -DN=200 test_advection.c -lm -o ta-200
$ gcc -DN=400 test_advection.c -lm -o ta-400
```

We can use a stopwatch that is built into most shells to measure the time spent on the execution of the programs. I get (using bash for illustrative purposes):

```
$ time ./ta-100
# Solver finished after 143 steps at t = 5

real    0m0.136s
...
$ time ./ta-200
# Solver finished after 286 steps at t = 5

real    0m0.885s
...
$ time ./ta-400
# Solver finished after 572 steps at t = 5

real    0m6.819s
...
```

As we increase the resolution by a factor of two, the effort required to run the simulation increases by a factor of eight! This somewhat worrying scaling behavior is in fact the expected result, and so far, our solver is well behaved. Can you see why?

Let's continue with the next term in the Navier-Stokes equations in Chapter 7.
