Euler's formula, which Richard Feynman and many others have called the most remarkable formula in mathematics, states that for any real number \(x\),
\[ e^{ix} = \cos x + i \sin x \]
In this section we will derive Euler's formula to better understand how it can be used to describe rotations in space and time, eventually including the Lorentz transformation.
Euler's formula comes about by first considering a function of \( x \) whose derivative, for any value of \(x\), is equal to the function itself. That is:
\[ \frac{d}{dx}f(x) = f(x) \]
If we start with a guess of \( f(x) = 1 + x \) then we see it is not quite right as
\[ \frac{d}{dx}(1 + x) = 1 \ne (1 + x) \]
but we can improve it by adding another term
\[ f(x) = 1 + x + \frac{x^2}{2} \]
which is a little closer in that
\[ \frac{d}{dx}(1 + x + \frac{x^2}{2}) = 1 + x \ne f(x) \]
but still not the same. So we improve it again with
\[ f(x) = 1 + x + \frac{x^2}{2} + \frac{x^3}{2\times 3}\]
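To see the pattern, we can check the derivative of this improved guess directly:
\[ \frac{d}{dx}(1 + x + \frac{x^2}{2} + \frac{x^3}{2\times 3}) = 1 + x + \frac{x^2}{2} \]
which again reproduces every term except the last, so each new term repairs the mismatch left by the previous one.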
and so on, until we end up with an infinite sum of terms:
\[ f(x) = \frac{x^0}{0!} + \frac{x^1}{1!} + \frac{x^2}{2!} + \frac{x^3}{3!} + ... = \sum_{k=0}^{\infty} \frac{x^k}{k!} \]
We now have a function satisfying \( \frac{d}{dx} f(x) = f(x) \). Let's check that this is indeed the case.
Solution:
We can work out the derivative of each term in the infinite sum of terms,
\[ \begin{aligned} \frac{d}{dx} f(x) &= \frac{d}{dx} \sum_{k=0}^{\infty} \frac{x^k}{k!} \\ &= \sum_{k=0}^{\infty} \frac{kx^{k-1}}{k!} \\ \end{aligned} \]
The \(k=0\) term is zero (as zero times anything is still zero), so we can remove it completely and continue, relabelling \(k-1\) as \(k\) in the final step:
\[ \begin{aligned} \frac{d}{dx} f(x) &= \sum_{k=1}^{\infty} k\frac{x^{k-1}}{k!} \\ &= \sum_{k=1}^{\infty} \frac{x^{k-1}}{(k - 1)!} \\ &= \sum_{k=0}^{\infty} \frac{x^k}{k!} \\ &= f(x) \end{aligned} \]
Done!
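As a quick numerical sanity check (a minimal sketch rather than part of the derivation), we can truncate the infinite sum after a fixed number of terms and compare the function against a finite-difference estimate of its own derivative; the names `f` and `N_TERMS` below are just illustrative choices:

```python
from math import factorial

N_TERMS = 30  # truncate the infinite sum; plenty of terms for small x

def f(x):
    """Partial sum of x^k / k! for k = 0 .. N_TERMS - 1."""
    return sum(x**k / factorial(k) for k in range(N_TERMS))

# Central-difference estimate of the derivative at a few sample points.
h = 1e-6
for x in [0.0, 0.5, 1.0, 2.0]:
    derivative = (f(x + h) - f(x - h)) / (2 * h)
    print(f"x = {x:3.1f}   f(x) = {f(x):9.6f}   f'(x) ~ {derivative:9.6f}")
```

The two columns agree to several decimal places, as the property \( \frac{d}{dx} f(x) = f(x) \) predicts.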
Great, but where does \(e\) fit in?
We learn in high school that \( 3^2 \times 3 = 3^3 \) and \( 2^2 \times 2 \times 2 = 2^4 \), and later that \( x^2 \times x^2 = x^4 \), etc., but the general rule of exponents is that when multiplying two numbers with the same base (\( 3^2 \times 3^1 \)), the powers (or exponents) are added (\( 3^{2+1} = 3^3 \)):
\[ b^m \times b^n = b^{m + n} \]
If we take our function which satisfies the property \(\frac{d}{dx}f(x) = f(x)\), we can show that it behaves just like an exponential should, in that:
\[ f(m) \times f(n) = f(m+n) \]
To show this requires quite a bit of re-arranging: we multiply out the two sums and group the resulting terms by their total power, and each group then turns out to be a binomial expansion:
\[ \begin{aligned} f(m) \times f(n) &= \sum_{k=0}^{\infty} \frac{m^k}{k!} \times \sum_{k=0}^{\infty} \frac{n^k}{k!} \\ &= (\frac{m^0}{0!} + \frac{m^1}{1!} + \frac{m^2}{2!} + \frac{m^3}{3!} + ...) \times (\frac{n^0}{0!} + \frac{n^1}{1!} + \frac{n^2}{2!} + \frac{n^3}{3!} + ...) \\ &= 1 + (\frac{m^1}{1!} + \frac{n^1}{1!}) + (\frac{m^2}{2!} + \frac{m^1 n^1}{1! 1!} + \frac{n^2}{2!}) + (\frac{m^3}{3!} + \frac{m^2 n^1}{2! 1!} + \frac{m^1 n^2}{1! 2!} + \frac{n^3}{3!}) + ... \\ &= 1 + \frac{m + n}{1!} + \frac{m^2 + 2mn + n^2}{2!} + \frac{m^3 + 3m^2 n + 3mn^2 + n^3}{3!} + ... \\ &= \sum_{k=0}^{\infty} \frac{(m + n)^k}{k!} \\ &= f(m + n) \end{aligned} \]
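This re-arrangement can also be spot-checked numerically; the sketch below (with arbitrary sample values for \(m\) and \(n\)) compares \(f(m) \times f(n)\) against \(f(m + n)\) using the same truncated sum as before:

```python
from math import factorial

def f(x, n_terms=40):
    # Partial sum of the series x^k / k!
    return sum(x**k / factorial(k) for k in range(n_terms))

m, n = 0.7, 1.3  # arbitrary sample values
print(f(m) * f(n))  # ~ 7.3890560989...
print(f(m + n))     # ~ 7.3890560989..., the same value
```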
Also, just as any number raised to the power zero is one, we can show for our function \( f(x) \) that \(f(0) = 1\).
Solution:
Substituting \(x=0\), and using the convention that \(0^0 = 1\),
\[ \begin{aligned} f(0) &= \sum_{k=0}^{\infty} \frac{0^k}{k!} \\ &= \frac{0^0}{0!} + \frac{0^1}{1!} + \frac{0^2}{2!} + ... \\ &= 1 + 0 + 0 + ... \\ &= 1 \end{aligned} \]
Done!
Given that our function \(f(x)\) behaves just as an exponential should, it must be equal to some number raised to the power of \(x\). We can find that number by working out \(f(1)\).
Solution:
Evaluating,
\[ \begin{aligned} f(1) &= \sum_{k=0}^{\infty} \frac{1^k}{k!} \\ &= \sum_{k=0}^{\infty} \frac{1}{k!} \\ &= \frac{1}{0!} + \frac{1}{1!} + \frac{1}{2!} + \frac{1}{3!} + \frac{1}{4!} + \frac{1}{5!} + ... \\ &= 1 + 1 + 0.5 + 0.1667 + 0.0417 + 0.0083... \\ &\approx 2.717 \end{aligned} \]
The exact value (if we summed to infinity) is assigned a special name in maths, \(e\).
So,
\[ e^1 = f(1) = \sum_{k=0}^{\infty} \frac{1}{k!} \approx 2.71828... \]
Now any value of our function can be evaluated simply as a power of \(e\):
\[ f(3) = e^3 = 2.71828... ^3 = 20.0855...\]
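The convergence of the partial sums to \(e\) is easy to watch directly; this small sketch compares them with Python's built-in `math.e`:

```python
import math

partial_sum = 0.0
for k in range(12):
    partial_sum += 1 / math.factorial(k)
    print(f"{k + 1:2d} terms: {partial_sum:.10f}")

print(f"math.e  : {math.e:.10f}")      # 2.7182818285
print(f"e cubed : {math.e ** 3:.5f}")  # ~ 20.08554, matching f(3) above
```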
What if we need to work with other exponents of \(e\), such as \(e^{3x}\)? How do we work out the derivative, that is, how fast \(e^{3x}\) is changing as \(x\) changes? Your memory may (or may not) tell you that:
\[ \frac{d}{dx} e^{3x} = 3e^{3x} \]
or more generally, if \(g\) is another function of \(x\), that
\[ \frac{d}{dx} e^{g} = \frac{dg}{dx}e^{g} \]
This is all we need for our toolkit here, though we can also see why this is the case given the chain rule for derivatives, which says that:
\[ \frac{d}{dx}f(g) = \frac{dg}{dx} \times \frac{d}{d g}f(g) \]
together with the special property of our function \(f(x) = e^x\) that the derivative at any point is equal to the function itself, \(\frac{d}{dx}f(x) = f(x)\).
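We can also check the \(e^{3x}\) example above numerically; the following sketch compares a central finite-difference estimate of the derivative with \(3e^{3x}\) at a few arbitrary sample points:

```python
import math

h = 1e-6
for x in [0.0, 0.5, 1.0]:
    numeric = (math.exp(3 * (x + h)) - math.exp(3 * (x - h))) / (2 * h)
    exact = 3 * math.exp(3 * x)
    print(f"x = {x:3.1f}   finite difference: {numeric:10.6f}   3e^(3x): {exact:10.6f}")
```

The two columns agree to roughly six significant figures, which is about as well as this step size allows.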
Now we have enough background to appreciate the beauty of Euler's formula. Euler, like Roger Cotes before him, noticed that if he evaluated this exponential function with a special type of value, an "imaginary" value whose square is negative, the result is a combination of the trigonometric functions \(\cos\) and \(\sin\). That is, if we define \(i\) such that \(i^2 = -1\), and evaluate \(e^{ix}\), it conveniently splits into two sums, one of the odd powers and one of the even powers:
\[ \begin{aligned} e^{ix} &= \sum_{k=0}^{\infty} \frac{(ix)^k}{k!} \\ &= \frac{(ix)^0}{0!} + \frac{(ix)^1}{1!} + \frac{(ix)^2}{2!} + \frac{(ix)^3}{3!} + \frac{(ix)^4}{4!} ... \\ &= (1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \frac{x^6}{6!} ...) + i(\frac{x^1}{1!} - \frac{x^3}{3!} + \frac{x^5}{5!} ...) \\ &= \sum_{k=0}^{\infty}(-1)^{k}\frac{x^{2k}}{(2k)!} + i \sum_{k=0}^{\infty}(-1)^k\frac{x^{2k+1}}{(2k+1)!} \\ &= \cos(x) + i\sin(x) \end{aligned} \]
That is, the two separate summations are exactly the series definitions of \(\cos(x)\) and \(\sin(x)\).
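Python's complex numbers (where `1j` plays the role of \(i\)) let us verify this split numerically; the sketch below sums the series for \(e^{ix}\) and compares the real and imaginary parts against `math.cos` and `math.sin`:

```python
import math

def exp_series(z, n_terms=40):
    """Partial sum of z^k / k!, which works for complex z as well as real z."""
    total, term = 0, 1
    for k in range(n_terms):
        total += term
        term *= z / (k + 1)  # next term: z^(k+1) / (k+1)!
    return total

x = 1.2  # an arbitrary sample angle in radians
value = exp_series(1j * x)
print(value.real, math.cos(x))  # both ~ 0.362358
print(value.imag, math.sin(x))  # both ~ 0.932039
```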
Notice that, with our introduction to Geometric Algebra, we did not need to invent an imaginary unit \(i\) here, as we have already seen that the product of basis vectors \(\mathbf{xy}\) has the same property that \((\mathbf{xy})^2 = -1\), but we will pick up this thread in Euler's formula and Geometric Algebra next.
It's worth noting that we can calculate the derivative of \(e^{ix}\) and, as a result, work out the standard derivatives of \(\cos\) and \(\sin\) that we may have memorized in school or otherwise:
\[ \begin{aligned} \frac{d}{dx}e^{ix} &= ie^{ix} \\ &= i\cos(x) + i^2\sin(x) \\ &= -\sin(x) + i\cos(x) \end{aligned} \]
and so comparing the real parts of \(e^{ix}\) and \(\frac{d}{dx}e^{ix}\) shows that
\[ \frac{d}{dx}\cos(x) = -\sin(x)\]
while comparing the imaginary parts shows that
\[ \frac{d}{dx}\sin(x) = \cos(x) \]
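These two identities can also be confirmed numerically with the same central finite-difference trick (a sketch with an arbitrary sample point):

```python
import math

x, h = 0.8, 1e-6
d_cos = (math.cos(x + h) - math.cos(x - h)) / (2 * h)
d_sin = (math.sin(x + h) - math.sin(x - h)) / (2 * h)
print(d_cos, -math.sin(x))  # both ~ -0.717356
print(d_sin, math.cos(x))   # both ~  0.696707
```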