# 8. Entr’acte: A Poisson-problem solver

## Poisson’s equation

Poisson problems arise in many fields of science and engineering. For example, Poisson’s equation relates the electric or gravitational potential ($\phi$) to the spatial charge-density or mass-density distribution ($\rho$), respectively. It reads,

$\nabla^2 \phi = \rho.$

From the previous chapter, we know how to approximate the left-hand side ($\nabla^2 \phi$) to obtain an approximation of the scalar field $\rho$, so what is the Poisson *problem*? It is the reverse case: finding a $\phi$ that satisfies the above equation for a given so-called source term (i.e. the right-hand side, $\rho$). This is an implicit problem, as we can only verify a solution after it has been computed. On the other hand, it is a relatively simple linear equation, and an analytical solution is known. In 2D,

$\phi(\mathbf{x}) = \frac{1}{2 \pi}\iint \rho(\mathbf{x}')\, \ln \|\mathbf{x} - \mathbf{x}'\| \,\mathrm{d}^2x'.$

Our goal is to implement a function poisson (double phi[N][N], double rho[N][N]) that approximates this solution. It is tempting to use this analytical form and integrate over our domain. Although this is easier said than done, it would indeed be a valid strategy. However, many alternative methods exist, and we will take another route that gives us easier control over some numerical properties of our solution.

Notice that the Poisson equation alone does not fully specify the problem, as any solution ($\phi_1$) to the Poisson problem can be altered by adding a solution ($\phi_2$) to a Laplace equation,

$\nabla^2 \phi_2 = 0.$

Given the linearity of the problem, $\phi_3 = \phi_1 + \phi_2$ also solves the Poisson problem. Examples of $\phi_2$ include a constant field and fields with constant gradients. This ambiguity can be removed by setting suitable boundary conditions for $\phi$ on the domain. However, since we limit ourselves to periodic boundaries on square domains, the solution remains ambiguous: it can be changed by adding or subtracting an arbitrary constant.

No worries, you might think: if many solutions exist, that should make it easier to find at least one? Well, that assumes a regular solution exists in the first place. Consider, for example, an infinitely, periodically repeating positive charge distribution. The corresponding electric potential well would be infinitely deep ($\phi \rightarrow -\infty$), so for such a case no regular solution exists. As such, the boundary conditions of a Poisson problem often impose a compatibility constraint on $\rho$. For our periodic solver, it means that the integral of $\rho$ over the domain ($\mathcal{D}$) must vanish,

$\iint _{\mathcal{D}} \rho \mathrm{d}x\mathrm{d}y = 0.$

## A numerical solving strategy

From the previous chapter, we know that a discrete, second-order-accurate version of the problem is: for all cell indices $i$ and $j$,

$\frac{\phi[i-1,j] + \phi[i,j-1] - 4\phi[i,j] + \phi[i+1,j] + \phi[i,j+1]}{\Delta^2} = \rho[i,j].$

A rather crude, yet incredibly simple, strategy for finding such a $\phi$ is to solve for it iteratively. For a given cell, we can satisfy the equation locally by assigning the corresponding value to the current cell,

$\phi[0,0] \leftarrow \frac{-\Delta^2 \rho[0,0] + \phi[-1,0] + \phi[0,-1] + \phi[1,0] + \phi[0,1]}{4}.$

If we do this for every cell in the grid, we are only guaranteed to satisfy the Poisson equation in the last cell visited by our iterator. However, if the initial guess for the $\phi$ field is not particularly bad, the changes to the local cell ($\phi[0,0]$) can be small. If we then repeatedly sweep over all cells, the difference between the replacement values and the current solution may shrink to a very small value. This would indicate that the solution converges by successive direct replacement. Let’s implement this idea in a file called poisson.h.

```c
int max_sweeps = 100;

void poisson (double phi[N][N], double rho[N][N]) {
  for (int sweep_nr = 0; sweep_nr < max_sweeps; sweep_nr++) {
    foreach() {
      double c = 0;
      for (int i = -1; i <= 1; i += 2)
        c += val(phi, i, 0) + val(phi, 0, i);
      val(phi, 0, 0) = (c - val(rho, 0, 0)*sq(Delta))/4.;
    }
  }
}
```

It would also be nice to trace the so-called residual (res) as the $\phi$ field “relaxes” towards the solution.

$\mathrm{res} = \nabla^2 \phi - \rho$

We are interested in its absolute maximum,

```c
...
    foreach() {
      ...
    }
    double max_res = -1;
    foreach() {
      double c = 0;
      for (int i = -1; i <= 1; i += 2)
        c += val(phi, i, 0) + val(phi, 0, i);
      double res = (c - 4*val(phi, 0, 0))/sq(Delta) - val(rho, 0, 0);
      if (fabs(res) > max_res)
        max_res = fabs(res);
    }
    printf ("%d %g\n", sweep_nr, max_res);
  }
}
```

We can now see if our approach works using test_poisson.c,

```c
#include "common.h"
#include "poisson.h"

int main() {
  L0 = 2*3.1415;
  // phi and rho (a and b)
  double a[N][N], b[N][N];
  foreach() {
    val (a, 0, 0) = 0;
    val (b, 0, 0) = sin(x) + cos(y);
  }
  poisson (a, b);
}
```

Note that we have initialized the a-field values to some initial guess, as the array could contain garbage upon declaration. Further, the source field b is carefully chosen such that it satisfies the compatibility requirement. Compile and run,

```
$ gcc test_poisson.c -lm
$ ./a.out
0 2.77167
1 2.91396
2 2.94642
3 2.95452
4 2.95533
...
95 2.48183
96 2.47694
97 2.47206
98 2.46719
99 2.46233
```

The output reveals that the residual decreases somewhat, but after 100 iterations the maximum residual (which should go to zero) is still relatively close to its initial value. We can change test_poisson.c to take more sweeps in the hope of a better solution,

```c
...
  max_sweeps = 10000;
  poisson (a, b);
}
```

Resulting in,

```
$ gcc -O3 test_poisson.c -lm
$ ./a.out
...
9996 5.89834e-05
9997 5.89834e-05
9998 5.89834e-05
9999 5.89834e-05
```

Much better. Let’s investigate the convergence of the solution a bit,

```
$ ./a.out > out
$ gnuplot
gnuplot> set logscale y
gnuplot> plot 'out'
```

It seems that for the first few thousand iterations there is a constant reduction factor for the residual with each sweep ($\approx 0.998$), but after some point the solution stops improving. This is a reminder of the compatibility requirement: because we have only defined $\pi$ with a few digits, the discrete integral of the source term is not exactly zero, and a proper solution to the discrete problem does not exist. You can verify, by setting L0 = 2*3.14159265;, that the residual reduces to 6.62...e-9. It is nice to know that our iterative solver was quite robust in the previous case, as it did manage to find a field $\phi$ that nearly solves the impossible problem.

## A criterion to stop sweeping

The accurate solution to the discrete problem (i.e. a small residual) is rather expensive to obtain due to the large number of relaxation sweeps. It would be nice to stop iterating once the field $\phi$ is sufficiently close to solving the discrete problem. This is especially rewarding when the initial guess is already close to the solution and only a few iterations are needed for a small residual. As such, we will tolerate a small maximum absolute residual in our solution, and stop sweeping once it is achieved. In poisson.h, we replace our for loop with a do-while construction.

```c
int max_sweeps = 1e4;
double tolerance = 1e-3;

void poisson (double phi[N][N], double rho[N][N]) {
  int sweep_nr = 0;
  double max_res = -1;
  do {
    foreach() {
      ...
    }
    sweep_nr++;
    max_res = -1;
    foreach() {
      ...
    }
  } while (max_res > tolerance && sweep_nr < max_sweeps);
  // Warn when the solution did not converge within max_sweeps
  if (max_res > tolerance)
    printf ("# Warning: max res = %g is greater than its tolerance (%g)\n",
            max_res, tolerance);
  printf ("%d %g\n", sweep_nr, max_res);
}
```

We can test it,

```
$ gcc test_poisson.c -lm
$ ./a.out
4054 0.000999368
```

The relevance becomes apparent by perturbing the solution with a high-frequency component. In test_poisson.c,

```c
...
  poisson (a, b);
  foreach()
    val (a, 0, 0) += 0.1*(sin(10*x)*cos(20*y));
  poisson (a, b);
}
```

This gives as output,

```
...
4054 0.000999368
38 0.000992947
```

The second line shows that when the initial guess was already close to the final solution, the Poisson solver automatically stopped after only a few sweeps (38). Note that the residual reduction per sweep is likely smaller (i.e. more favorable) than the 0.998 of the previous case.

The last coding step in this chapter is to remove the messaging that is not strictly needed, as it would clutter the output when we call the solver many times during a flow simulation.

```c
...
  // printf ("%d %g\n", sweep_nr, max_res);
...
```

## Solving for $\nabla \phi$

Often, not least in computational fluid dynamics, we are not really interested in $\phi$ itself, but use it to compute its gradients. This is why we do not really care about the constant offset that any solution on a periodic domain may have. It should be noted, however, that the computed discrete gradients of our solution field $\phi$ do not automatically inherit the expected properties of a gradient field from the small tolerance with which $\phi$ solves the discrete Poisson problem. This is a somewhat cryptic way of repeating that the 3-point stencil for the Laplacian operator was derived from gradients evaluated at staggered locations. So when we compute gradients of our solution at the grid points, the divergence of these gradients will not match the source term exactly (irrespective of the value of tolerance). A way around this problem is to change the discrete problem to the earlier-derived 5-point stencil,

$\frac{\phi[i-2,j] + \phi[i,j-2] - 4\phi[i,j] + \phi[i+2,j] + \phi[i,j+2]}{4\Delta^2} = \rho[i,j],$

and to require that the gradients are evaluated using the central 3-point stencil. Another way around this issue is to define the variables that interact with the gradients to be staggered relative to $\phi$. We will discuss what this actually entails for our solver in a future chapter. But it turns out we can mostly ignore it for now.
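To make this concrete in one dimension: applying the central 3-point gradient twice,

$g[i] = \frac{\phi[i+1] - \phi[i-1]}{2\Delta}, \qquad \frac{g[i+1] - g[i-1]}{2\Delta} = \frac{\phi[i-2] - 2\phi[i] + \phi[i+2]}{4\Delta^2},$

which is exactly the 1D restriction of the wide stencil above. Solving the wide-stencil problem therefore guarantees, to within tolerance, that the central-difference divergence of the central-difference gradients of $\phi$ matches $\rho$.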

We can now apply our linear solver to compute the pressure-gradient term in Chapt. 9.
