I have been working on a game recently, and I ran into some interesting math along the way. I wanted an explosion animation that started out fast, and as it got larger it would get slower. Below you can click, drag or swipe to create the explosions I ended up building.

For simplicity, I am rendering the explosion as a ring, with expanding inner and outer radii. I wanted the rate of expansion of the two radii to start fast and end slow, but this could just as easily be applied to a collection of particles. One possible formula for velocity would be a linear ramp, changing from the initial to the final velocity in equal steps over the animation. That is, a constant change in velocity.

\[\frac{dV(t)}{dt} = \frac{V_T - V_0}{T}\]

But that didn't have the effect I was looking for; I wanted the leading edge of the ring to be moving really quickly and to slow down really quickly. And as it was near the end of the animation, moving more slowly, I wanted it to be slowing down more slowly as well. I wanted the change in velocity to be proportional to the current velocity.

\[ \begin{equation}\label{eqn:exponential_decay} \frac{dV(t)}{dt} = -\lambda V(t) \end{equation} \]

This differential equation, known as an Exponential Decay, is conventionally written with a negative sign so that the proportionality constant, \(\lambda\), will be positive. When \(\lambda\) is positive it continually subtracts a proportion of the velocity from itself, so when the velocity is large, the portion subtracted will be large, and when the velocity is small, the portion subtracted will be small. For completeness we'll also need a differential equation that relates position and velocity.

\[ \begin{equation}\label{eqn:position_diffeq} \frac{dX(t)}{dt} = V(t) \end{equation} \]

The combination of \(\eqref{eqn:exponential_decay}\) and \(\eqref{eqn:position_diffeq}\) is a system of differential equations; it says that the change in something's velocity (in our case the two ring radii) at some time will be proportional to the thing's velocity at that time. It also says that the change in the thing's position at some time is the thing's velocity. You can use these equations exactly as they are, via numeric integration. You iteratively update the velocity and position of the value you are simulating. The simplest way to do the update would be with the Euler method. You would select a small increment in time (a timestep) and move the simulation forward by that amount of time over and over again. For every timestep you update velocity by evaluating \(\eqref{eqn:exponential_decay}\), and then position by evaluating \(\eqref{eqn:position_diffeq}\). But if you want to know the position or velocity at a specific time you'll need to rerun the simulation from the beginning to get it, which could be very costly if you have selected a small timestep, which you would have to do to ensure an accurate simulation. Alternatively you could store all previous steps of the simulation, which would take a lot of memory. Either way, what values should you use for \(\lambda\) and the initial position and velocity? A better approach (at least for my needs) is to solve this differential equation analytically, and use the result to pick initial values and to evaluate position and velocity at arbitrary times.
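Here is what that Euler loop looks like as a sketch in Python (illustrative only; the game itself is in Dart, and the \(\lambda\), timestep, and initial values here are arbitrary):

```
import math

lam = 3.0  # decay constant, lambda
dt = 1e-4  # timestep: smaller is more accurate, but more work
v = 100.0  # initial velocity
x = 0.0    # initial position

# Simulate one second of the explosion, updating velocity from the
# exponential decay equation, then position from the velocity.
for _ in range(10_000):
    v += -lam * v * dt
    x += v * dt

# Compare against the analytic solution derived later in the post:
# V(1) = 100 e^(-3), X(1) = (100 / 3)(1 - e^(-3)).
print(v, 100.0 * math.exp(-3.0))
print(x, (100.0 / 3.0) * (1.0 - math.exp(-3.0)))
```

Even with ten thousand steps, the Euler result only matches the analytic solution to a couple of decimal places, which is part of why solving the system analytically is attractive.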

Here is another version of the same expanding ring, rendered using the awesome online calculator Desmos. You can manually control the time slider (if you can grab it) to get a feel for how the two exponentials control the inner and outer radii of the ring.

So where's the interesting math? It turns out that picking the initial position, velocity, and \(\lambda\) values to get a good-looking explosion takes some care. For one thing, I wanted to make the final positions of the two exponentials that control the inner and outer radii equal. I could have done that by trial and error, maybe, but there's a better way. In this post I'll go through solving \(\eqref{eqn:exponential_decay}\) and \(\eqref{eqn:position_diffeq}\) for a number of different parameters. We'll want an equation that can tell us the radius at a given time in the simulation. We'll also want to be able to compute initial velocity given some more intuitive parameters that make specifying the explosion simpler. In this case, it turns out to be easier to specify the final velocity, duration of explosion, and amount that the ring should expand. Given these, the initial velocity and \(\lambda\) are completely defined.

## Velocity

The first step will be to solve \(\eqref{eqn:exponential_decay}\) so that we have an equation for velocity at an arbitrary time. This is a pretty simple differential equation, and we can solve it directly by integration.
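Separating variables puts everything involving \(V\) on one side and everything involving \(t\) on the other, so both sides can be integrated independently.

\[\frac{dV}{V} = -\lambda \, dt \quad\Rightarrow\quad \int \frac{dV}{V} = -\int \lambda \, dt \quad\Rightarrow\quad log(V(t)) = -\lambda t + C\]

Exponentiating both sides, and using \(V(0) = V_0\) to fix the constant of integration, gives \(e^C = V_0\).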

The result is the solution for velocity at any time \(t\).

\[ \begin{equation}\label{eqn:velocity} V(t) = V_0 e^{-\lambda t} \end{equation} \]

Nothing too surprising here; this tells us how a value changes over time if it
is continually being acted on in proportion to itself. Here is the same
function written in Dart^{1}. This is one method from a class called
ExponentialRate that I am using
to model this system of differential equations in the game. The complete
source for ExponentialRate
can be found here.

```
///
/// Return the rate of change of the value at a given time.
///
/// Time can be negative, positive, or zero; it should also be able to be
/// positive or negative infinity, but I haven't verified that Dart's exp
/// function correctly handles those cases.
///
double rate(double time) => initial_rate * exp(-lambda * time);
```

## Position

Next we want to know about the position over time, both so that we can query it for animations and so that we can solve the equation for our initial velocity. Now that we have \(\eqref{eqn:velocity}\), which is an equation for \(V(t)\), we can substitute it into \(\eqref{eqn:position_diffeq}\), our definition of position. This gives us a differential equation that we can solve.
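Written out, the substitution gives

\[\frac{dX(t)}{dt} = V_0 e^{-\lambda t},\]

which we can integrate from \(0\) to \(t\):

\[X(t) = X_0 + \int_0^t V_0 e^{-\lambda s} \, ds.\]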

The result is an explicit equation for \(X(t)\).

\[ \begin{equation}\label{eqn:position} X(t) = X_0 + \frac{1}{-\lambda} (V_0 e^{-\lambda t} - V_0) \end{equation} \]

Below you can experiment with \(\eqref{eqn:velocity}\) and \(\eqref{eqn:position}\), try changing \(X_0\), \(V_0\), \(\lambda\), and the total time of the simulation. \(V_0\) is controlled by manipulating the tangent line to the graph at the origin.

I noticed that the first term (\(V_0 e^{-\lambda t}\)) is just \(\eqref{eqn:velocity}\), so I used that in its place.

\[ \begin{equation}\label{eqn:position_from_rate} X(t) = X_0 + \frac{1}{-\lambda}(V(t) - V_0) \end{equation} \]

There is something very pleasing about this equation; the change in position is proportional to the change in velocity. But this form of the equation has some issues. One thing to note about this solution is that it doesn't work if \(\lambda\) is equal to zero. Fortunately \(\eqref{eqn:exponential_decay}\) then becomes trivial (\(\frac{dV(t)}{dt} = 0\)), and the solutions are \(V(t) = V_0\) and \(X(t) = X_0 + V_0 t\). However, when I thought about this more, I realized that this substitution of \(\eqref{eqn:velocity}\) had hidden a numerical issue in the implementation as well. If we start back with \(\eqref{eqn:position}\) and rearrange a little by factoring out \(V_0\) and moving \(\frac{1}{-\lambda}\) in we can see more clearly what the problem is.

\[X(t) = X_0 + V_0 \frac{e^{-\lambda t} - 1}{-\lambda}\]

The numerator of the right term is \(e^{-\lambda t} - 1\), which, for values of \(-\lambda t\) close to zero, can lead to Loss of Significance because we are subtracting two numbers that are nearly identical (\(e^0 = 1\)). It wasn't clear to me at first how to handle this; in C/C++ I would have used expm1 and then used the trivial solutions for \(\eqref{eqn:exponential_decay}\) and \(\eqref{eqn:position_diffeq}\) when \(\lambda\) was zero, but there would still be a possible problem with the division when \(\lambda\) is very close to zero, because both the numerator and the denominator would be approaching zero. In hopes that I could come up with an alternate representation that would work for all values of \(\lambda\) and \(t\), I started to work with the infinite sum representation of the exponential function.
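Before moving on to the sum, the loss of significance is easy to see numerically. Here is a small Python experiment (Python's math.expm1 plays the role of C's expm1):

```
import math

x = 1e-12
naive = (math.exp(x) - 1) / x  # subtracts two nearly identical numbers
stable = math.expm1(x) / x     # expm1 computes e^x - 1 without the subtraction

# For small x, (e^x - 1)/x is 1 + x/2 + x^2/6 + ..., so 1 + x/2 is an
# excellent reference value here.
reference = 1 + x / 2

print(abs(naive - reference))   # around 1e-4: most of the digits are gone
print(abs(stable - reference))  # near machine epsilon
```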

\[\def\exp_sum#1#2{\sum_{k=#1}^{\infty} \frac{#2^k}{k!}}\]

\[e^x = \exp_sum{0}{x}\]

I used this sum in place of the exponential in \(\eqref{eqn:position}\).

\[X(t) = X_0 + \frac{V_0}{-\lambda} (\exp_sum{0}{(-\lambda t)} - 1)\]

The first term in the sum is just one.

\[e^{-\lambda t} = 1 + \exp_sum{1}{(-\lambda t)}\]

That one cancels out the subtraction by one, leaving all of the remaining terms, from \(k = 1\) to \(\infty\). All of those terms have at least one \(-\lambda\) in them, so I was able to divide the sum by \(-\lambda\), canceling out one of the \(-\lambda\) factors in each term. The result had no divisions by \(-\lambda\). Nice.

\[X(t) = X_0 + V_0 (\sum_{k=1}^{\infty} \frac{(-\lambda)^{k-1} t^k}{k!})\]

Then I saw that I could pull a \(t\) out of the sum as well. And I could shift the sum index (\(k\)) by one, so that the sum started at zero again.

\[X(t) = X_0 + V_0 t (\sum_{k=0}^{\infty} \frac{(-\lambda t)^k}{(k + 1)!})\]

The sum now looked so much like an exponential by itself that I was distracted into thinking that I could do something to recover a simple exponential form, with some additional damping factor (accounting for the \(+1\) in the factorial), that didn't have the division by zero problem that the original version had. Eventually I realized that the pulling out of \(t\) was the crucial piece that I had missed at first. I went back to \(\eqref{eqn:position}\) to see what would happen when I took a \(t\) out of the second factor.

\[ \begin{equation}\label{eqn:position_with_hex} X(t) = X_0 + V_0 t \frac{e^{-\lambda t} - 1}{-\lambda t} \end{equation} \]

I had to put a \(t\) in the denominator, which nicely turns the problematic
portion into a one-variable problem. No longer did I have to treat
\(-\lambda\) and \(t\) as separate variables; I could just lump them together
and work on solving \(\frac{e^x-1}{x}\), which at this point I feel like needs
a name. If anyone knows of a name for this particular function^{2} I would be
interested; for now I'm calling it a hexponential, or \(hex(x)\), short for
"half exponential" because the derivative at zero is one half instead of one,
but otherwise it looks like an exponential.

\[hex(x) := \frac{e^x - 1}{x} = \sum_{k=0}^{\infty} \frac{x^k}{(k + 1)!}\]

Hex is well behaved at the origin, even with the division by zero, because the division is in some sense just an artifact of writing the function down in a Closed Form. We can use the sum representation to compute it around zero nicely. We can also see this by using L'Hôpital's rule to compute the limit of the closed form of hex. Here is the resulting version of \(\eqref{eqn:position_with_hex}\) written in Dart.

```
///
/// Return the value at a given time.
///
/// Time can be negative, positive, or zero; it should also be able to be
/// positive or negative infinity, but I haven't verified that Dart's exp
/// function correctly handles those cases.
///
double value(double time)
{
  //
  // This version of hex maintains about 14 digits of precision, not full
  // machine precision (~16 digits for doubles).
  //
  double hex(double x)
  {
    //
    // A threshold of 10^-2 seems like a good compromise. The smaller
    // the threshold is, the fewer terms we need to use to accurately
    // evaluate hex. But smaller thresholds increase the loss of
    // significance when using the exponential form below. The first
    // term that we lose from the truncated sum is x^5/6!, which at
    // the threshold is 10^-10/720, or about 1.4e-13, so that is the
    // order of the truncation error.
    //
    if (x.abs() < 1e-2)
      return 1 + (1 + (1 + (1 + x / 5) * x / 4) * x / 3) * x / 2;
    //
    // The worst case loss of significance from the subtraction below
    // will be when x = 10^-2.
    //
    // log2(1-(1/e^(10^-2))) ~ -6.6
    //
    // So we can expect to lose 7 bits of precision from our double's
    // 53 bit mantissa, or ~2 digits from our ~16 digits of precision.
    //
    return (exp(x) - 1) / x;
  }
  return initial_value + initial_rate * time * hex(-lambda * time);
}
```
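As a sanity check on those precision claims, here is an illustrative Python port of hex, using math.expm1 as the reference for \(e^x - 1\):

```
import math

def hex_(x):
    # Terms of sum x^k / (k + 1)! for k = 0 .. 4, in Horner form,
    # mirroring the Dart implementation above.
    if abs(x) < 1e-2:
        return 1 + (1 + (1 + (1 + x / 5) * x / 4) * x / 3) * x / 2
    return (math.exp(x) - 1) / x

# expm1 avoids the exp(x) - 1 cancellation, so expm1(x) / x is a good
# reference near zero. (At x = 0 the polynomial branch returns exactly 1.)
for x in (1e-9, 1e-3, 5e-3, 0.5, -2.0):
    assert abs(hex_(x) - math.expm1(x) / x) < 1e-12, x
print("ok")
```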

## Lambda

We now have our solutions for \(\eqref{eqn:exponential_decay}\) and \(\eqref{eqn:position_diffeq}\), but we're still stuck specifying \(X_0\), \(V_0\), \(\lambda\), and \(T\) (the total time of the simulation). Instead, we would like to be specifying \(X_0\), \(X_T\), \(V_T\), and \(T\), where

\[ \begin{align*} X_T &:= X(T)\\ V_T &:= V(T). \end{align*} \]

In other words, we need to come up with equations that give us \(V_0\) and \(\lambda\) in terms of \(X_0\), \(X_T\), \(V_T\), and \(T\).

We can use \(\eqref{eqn:velocity}\), the solution for our velocity at a given time, to compute \(\lambda\) if we know \(V_0\), \(V_T\), and \(T\). This can then be substituted into \(\eqref{eqn:position}\) which gives us an equation relating \(X_0\), \(X_T\), \(V_0\), \(V_T\), and \(T\). We can then solve that equation for \(V_0\).
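Evaluating \(\eqref{eqn:velocity}\) at \(t = T\) and taking the logarithm of both sides:

\[V_T = V_0 e^{-\lambda T} \quad\Rightarrow\quad log(\frac{V_T}{V_0}) = -\lambda T\]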

\[ \begin{equation}\label{eqn:lambda} \lambda = -\frac{log(\frac{V_T}{V_0})}{T} \end{equation} \]

We will need to use this equation once we've solved \(\eqref{eqn:position}\) for \(V_0\) to compute \(\lambda\). Again, we can write \(\eqref{eqn:lambda}\) in Dart.

```
///
/// Compute the proportionality coefficient lambda given the start and end
/// rates as well as the total time taken.
///
static double _lambda(double final_rate,
                      double initial_rate,
                      double total_time)
{
  //
  // If final_rate or initial_rate is zero then the other must also be
  // zero. This is because the only way to actually get to zero with an
  // exponential is to already be there. And if you're there, there is
  // no way to leave. In this case lambda should be zero, because our
  // rate is not changing.
  //
  assert((final_rate == 0.0) == (initial_rate == 0.0));
  if (final_rate == 0.0)
    return 0.0;
  //
  // Lambda is not a complex number; we are only interested in solving
  // for real values. So final_rate and initial_rate must also have the
  // same sign.
  //
  assert((final_rate > 0.0) == (initial_rate > 0.0));
  return -log(final_rate / initial_rate) / total_time;
}
```

We can now substitute \(\eqref{eqn:lambda}\) into \(\eqref{eqn:position}\), giving us an equation for position that is parameterized by \(X_0\), \(V_0\), \(V_T\), and \(T\).

\[ \begin{equation}\label{eqn:position_no_lambda} X(t) = X_0 + \frac{T}{log(\frac{V_T}{V_0})} (V_0 e^{\frac{log(\frac{V_T}{V_0})}{T} t} - V_0) \end{equation} \]

We can play with this function in Desmos as well, but we still don't get to specify the final position of the animation (\(X_T\)).

You'll probably notice that \(V_0\) and \(V_T\) have to have the same sign. If they do not, then Desmos doesn't show a graph at all. This is because \(\eqref{eqn:lambda}\) takes the logarithm of their ratio, and as long as \(V_0\) and \(V_T\) have the same sign their ratio is positive. The real logarithm is not defined for negative numbers, so Desmos (and the Dart implementation above) give up. But if we were to use the complex exponential and logarithm then there is actually a solution when \(V_0\) and \(V_T\) have different signs. The solution ends up being a combination of sine, cosine, and exponential functions; the sine and cosine allow for the change in sign of the rate. I touched on this briefly at the end of my 2D Rotation post. It is never the case in the game I'm writing that I want the initial and final velocities to have different signs, so I opted to keep the solution real. I could imagine an explosion animation that contracts at the end, but that wasn't what I was going for visually.

## Initial Velocity

We now have \(\eqref{eqn:position}\), which tells us \(X_T\) given \(X_0\), \(V_0\), \(\lambda\), and \(T\). And we just found \(\eqref{eqn:lambda}\), an equation for \(\lambda\). Then we substituted \(\eqref{eqn:lambda}\) into \(\eqref{eqn:position}\) to get \(\eqref{eqn:position_no_lambda}\). Now we'll solve the result for \(V_0\). This is where the math got interesting; I had never encountered the Lambert W function before, and it turns out that solving for \(V_0\) will need it. To apply it we need to get one side of the equation to look like \(f(V_0) e^{f(V_0)}\), and the other side to not have \(V_0\) in it. Because it will make the remaining math easier to follow, I'm going to define \(\Delta X\) here.

\[\Delta X := X_T - X_0\]

We now have an equation that relates all of our coefficients (\(X_0, X_T, V_0, V_T\), and \(T\)); we just have to solve it for \(V_0\).
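To get there, start from \(\eqref{eqn:position_no_lambda}\) evaluated at \(t = T\), where the exponential collapses to \(\frac{V_T}{V_0}\):

\[\Delta X = \frac{T}{log(\frac{V_T}{V_0})} (V_T - V_0) \quad\Rightarrow\quad log(\frac{V_T}{V_0}) = T \frac{V_T - V_0}{\Delta X}\]

Exponentiating both sides and regrouping so that each side depends on only one of the velocities:

\[\frac{V_T}{V_0} = e^{T \frac{V_T}{\Delta X}} e^{-T \frac{V_0}{\Delta X}} \quad\Rightarrow\quad V_T e^{-T \frac{V_T}{\Delta X}} = V_0 e^{-T \frac{V_0}{\Delta X}}\]

Multiplying both sides by \(-\frac{T}{\Delta X}\) puts the equation in a symmetric form.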

\[ \begin{equation}\label{eqn:initial_velocity_symetric} -T \frac{V_T}{\Delta X} e^{-T \frac{V_T}{\Delta X}} = -T \frac{V_0}{\Delta X} e^{-T \frac{V_0}{\Delta X}} \end{equation} \]

Now that the right hand side is in the correct form \(f(V_0) = -T \frac{V_0}{\Delta X}\), we can apply W to both sides. You might notice that the left hand side is also in the same form, it is \(f(V_T) = -T \frac{V_T}{\Delta X}\). More about that later.

\[ W(-T \frac{V_T}{\Delta X} e^{-T \frac{V_T}{\Delta X}}) = -T \frac{V_0}{\Delta X} \]

And finally, multiply both sides by \(-\frac{\Delta X}{T}\) to solve for \(V_0\).

\[ V_0 = -W(-T \frac{V_T}{\Delta X} e^{-T \frac{V_T}{\Delta X}}) \frac{\Delta X}{T} \]

This is a lot to look at, but it has some repetition that we can factor out, making it easier to understand; substitute \(A = -T \frac{V_T}{\Delta X}\).

\[ \begin{equation}\label{eqn:initial_velocity} V_0 = V_T \frac{W(A e^A)}{A} \end{equation} \]

I wanted to add a Desmos graph here where you could play with the new set of parameters, but Desmos doesn't currently support the Lambert W function, which makes it pretty hard to evaluate the above equation. It could be approximated with a lot of work, but this post has taken long enough to write already.

But we can get an idea for the sort of solutions that the Lambert W function would return. Below is a graph of \(f(x) = xe^x\) that shows that for any value of \(x_1 < 0\) other than \(x_1 = -1\) there is another value \(x_2 \neq x_1\) such that \(f(x_1) = f(x_2)\). In other words, if \(x\) is less than zero, and not equal to negative one, then the inverse of \(f(x)\) has two values. That other value can be computed using one of the two branches of the Lambert W function.

I'd like to return briefly to the interesting symmetry of \(\eqref{eqn:initial_velocity_symetric}\); it was of the form:

\[X e^X = Y e^Y\]

where

\[ \begin{align*} X &:= f(V_T) = -T \frac{V_T}{\Delta X}\\ Y &:= f(V_0) = -T \frac{V_0}{\Delta X}. \end{align*} \]

Our problem was that we were given \(X\) and we needed to compute \(Y\), more or less. It's probably obvious that there's a trivial solution to this problem: \(X = Y\). That makes both sides identical, so it's obviously a solution to \(\eqref{eqn:initial_velocity_symetric}\). But we arrived at \(\eqref{eqn:initial_velocity_symetric}\) by manipulating \(\eqref{eqn:position}\), and \(\eqref{eqn:position}\) is not valid when \(V_T = V_0\) (because then \(\lambda\) is zero). Since \(X\) and \(Y\) are just \(f(V_T)\) and \(f(V_0)\), \(X = Y\) can't be the solution we're actually looking for. Fortunately, there is a second solution for all values of \(A\) other than \(-1\); when \(A = -1\) the initial rate is just the final rate.

```
///
/// Return the initial rate that will result in the given change in value
/// and rate, over the specified total time.
///
/// delta_value in [ 0 .. infinity)
/// final_rate  in (-infinity .. infinity)
/// total_time  in [ 0 .. infinity)
///
static double _initial_rate(double delta_value,
                            double final_rate,
                            double total_time)
{
  //
  // If delta_value is zero, then final_rate must also be zero. It is
  // not possible to have zero change in value with a non-zero
  // final_rate because the exponential is monotonic.
  //
  assert(delta_value != 0.0 || final_rate == 0.0);
  //
  // Since we're not expected to change the value at all, and final_rate
  // is zero, _initial_rate should also be zero.
  //
  if (delta_value == 0.0)
    return 0.0;
  //
  // If total_time is zero, then delta_value must also be zero. If
  // delta_value were not zero we would have to somehow give two
  // different values for the same instant in time.
  //
  assert(total_time != 0.0 || delta_value == 0.0);
  //
  // If total_time is zero then _initial_rate will match final_rate.
  //
  if (total_time == 0.0)
    return final_rate;
  final double A = -total_time * final_rate / delta_value;
  if (A > -1.0)
    return W_negative_one(A * exp(A)) * final_rate / A;
  if (A < -1.0)
    return W_zero(A * exp(A)) * final_rate / A;
  return final_rate;
}
```
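As a sanity check on the branch selection, here is a small standalone Python sketch. The lambert_w here is a quick Newton-iteration helper written just for this check (not the implementation the game uses); it assumes the argument lies in \((-\frac{1}{e}, 0)\), where both real branches exist.

```
import math

def lambert_w(y, branch):
    # Newton iteration for w * e^w = y. Illustrative only: assumes y is
    # in (-1/e, 0) and starts on the requested branch
    # (0: w > -1, -1: w < -1).
    w = -0.5 if branch == 0 else -2.0
    for _ in range(50):
        ew = math.exp(w)
        w -= (w * ew - y) / (ew * (w + 1.0))
    return w

def initial_rate(delta_value, final_rate, total_time):
    # Mirrors the branch selection in _initial_rate above; assumes A < 0,
    # i.e. final_rate and delta_value have the same sign.
    A = -total_time * final_rate / delta_value
    if A == -1.0:
        return final_rate
    branch = -1 if A > -1.0 else 0
    return lambert_w(A * math.exp(A), branch) * final_rate / A

# The computed initial rate should reproduce delta_value through the
# position solution: X(T) - X_0 = V_0 (e^(-lambda T) - 1) / -lambda.
for delta_value, final_rate, total_time in [(10.0, 2.0, 1.0), (1.0, 2.0, 1.0)]:
    v0 = initial_rate(delta_value, final_rate, total_time)
    lam = -math.log(final_rate / v0) / total_time
    recovered = v0 * (math.exp(-lam * total_time) - 1.0) / -lam
    assert abs(recovered - delta_value) < 1e-9
```

Note that the first case (\(A = -0.2\)) lands on the \(W_{-1}\) branch and the second (\(A = -2\)) on the \(W_0\) branch, exercising both paths.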

To evaluate the Lambert W function I wrote a version of the approximations found in Having Fun with Lambert W(x) Function. A post about that implementation might be a good idea. For now, the source can be found here.

## Dimensional Analysis

It's a good idea to double check our work and make sure that the resulting equations make sense from a units perspective. Let's add units. First we'll figure out the units of \(\lambda\). We start with \(\eqref{eqn:lambda}\) and add units to the right side.

\[ \lambda = -\frac{log(\frac{V(t) \si{m s^{-1}}}{V_0 \si{m s^{-1}}})}{t \si{s}} \]

The units of velocity (\(\si{m s^{-1}}\)) cancel, leaving the numerator unitless. The denominator still has units of seconds, so lambda has units of inverse seconds, also known as Hertz, a unit of frequency.

The units for \(\eqref{eqn:velocity}\) are also simple.

\[V(t) = V_0 \si{m s^{-1}} e^{-\lambda \si{s^{-1}} t \si{s}}\]

The units of \(\lambda\) and \(t\) in the exponential cancel, leaving that unitless. The only remaining units are from \(V_0\), so \(V(t)\) has the same units, those of velocity.

Next we can check the units for \(\eqref{eqn:position_from_rate}\).

\[ X(t) \si{m} - X_0 \si{m} = \frac{1}{-\lambda \si{s^{-1}}} (V(t) \si{m s^{-1}} - V_0\si{m s^{-1}}) \]

We can cancel \(\si{s^{-1}}\) from numerator and denominator, leaving meters on both sides, giving the correct units for position.

Finally, let's check \(\eqref{eqn:initial_velocity}\); to do that we just have to show that \(A(t)\) is unitless.

\[A(t) = -t \si{s} \frac{V(t) \si{m s^{-1}}}{X(t) \si{m}}\]

Pretty easy to see that seconds and meters cancel out completely.

Why Dart? you're probably asking right now. Well, I wanted to write a web game, and I got about one git commit into doing it in JavaScript when I remembered that JavaScript doesn't have operator overloading. For me, operator overloading is a language requirement; without it you just can't write much math beyond scalar algebra. I also like that Dart has some static typing abilities. I know that operator overloading, as well as static vs. dynamic typing, are contentious topics for a lot of people, but I'm not going to say much more about them in this post, perhaps later. I will say that I'm very much looking forward to using something compiled to WebAssembly instead.

^{[return]}My brother-in-law, Michael Shulman, pointed out that what I've called the Hexponential can also be viewed as the Difference Quotient of \(e^x\) at zero, and that the general operation on an infinite series of subtracting the first term and dividing by the argument produces a difference quotient for the function at zero.
