We use the changes of a Brownian motion to model randomness. We build other stochastic processes using those changes. The general idea is \[\text{change} = \text{mean} + \text{std dev} \times \text{change in Brownian motion}\,.\] The mathematical foundations for our construction were created by K. Ito. The key concepts are the Ito integral, Ito processes, and Ito’s formula (also called Ito’s lemma). Using these foundations, we can build quite general processes from changes in Brownian motions, including processes with non-normal distributions.
6.1 Examples
We begin with some simple examples. Consider a discrete partition of a time interval: \[0=t_0 < t_1< \cdots < t_{n-1} < t_n=t\] with equally spaced times. Let \(\Delta t\) denote the difference between successive times.
First, let’s drop the randomness entirely. Consider the equation \[X_{t_i} - X_{t_{i-1}} = \mu X_{t_{i-1}}\Delta t \qquad(6.1)\] for a constant \(\mu\). Thus, we have change \(\,=\,\) mean, where the mean is proportional to the previous value with proportionality factor \(\mu \Delta t\). Figure 6.1 presents a plot of \(X\), for particular values of \(X_0\), \(\mu\), and \(\Delta t\).
If we increase \(n\), making \(\Delta t\) smaller, then \(X\) converges to the solution of the ordinary differential equation \[\mathrm{d} X_t = \mu X_t\mathrm{d} t\,. \qquad(6.2)\] Equation 6.2 has a known solution, which is \[X_t = X_0 \mathrm{e}^{\mu t}\,. \qquad(6.3)\] To verify this, we only need to differentiate \(X\) defined in Equation 6.3: \[\frac{\mathrm{d} X_t}{\mathrm{d} t} = \mu X_0 \mathrm{e}^{\mu t} = \mu X_t\,.\] The function presented in Equation 6.3 is also shown in Figure 6.1.
Theory Extra
To see how one might guess that Equation 6.3 is the solution of Equation 6.2, we can examine the logarithm of \(X\). A general rule gives us \(\mathrm{d} \log X = \mathrm{d} X / X\), so \(\mathrm{d} \log X = \mu \mathrm{d} t\). We can integrate both sides of this to obtain \(\log X_t - \log X_0 = \mu t\). Now, rearranging and exponentiating gives \(X_t = X_0\mathrm{e}^{\mu t}\). Later, we follow similar steps to see that Equation 6.6 is the solution of Equation 6.5.
Figure 6.1: The functions \(X\) satisfying Equation 6.1 (difference equation) and Equation 6.3 (differential equation) for \(X_0 = 1\), \(\mu = 1\), and \(\Delta t = 0.1\).
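For readers who want to reproduce something like Figure 6.1, here is a minimal Python sketch (ours, not the code behind the figure; it assumes numpy, uses the parameter values from the caption, and takes the horizon \(T=1\) as an assumption). It iterates the difference equation Equation 6.1 and evaluates the exact solution Equation 6.3 on the same grid.

```python
import numpy as np

# Parameter values from the Figure 6.1 caption (horizon T = 1 is our assumption)
X0, mu, dt, T = 1.0, 1.0, 0.1, 1.0
n = int(T / dt)
times = np.linspace(0.0, T, n + 1)

# Difference equation (6.1): X_{t_i} - X_{t_{i-1}} = mu * X_{t_{i-1}} * dt
X_diff = np.empty(n + 1)
X_diff[0] = X0
for i in range(1, n + 1):
    X_diff[i] = X_diff[i - 1] * (1.0 + mu * dt)

# Exact solution (6.3): X_t = X_0 * exp(mu * t)
X_exact = X0 * np.exp(mu * times)

print(X_diff[-1], X_exact[-1])  # the two values get closer as dt shrinks
```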
Now, let’s include randomness. Let’s make the noise proportional to the value of \(X\). So, let \(B\) be a standard Brownian motion and consider the equation \[X_{t_i} - X_{t_{i-1}} = \mu X_{t_{i-1}}\Delta t + \sigma X_{t_{i-1}} \Delta B_{t_i} \qquad(6.4)\] where \(\sigma\) is another constant, and \(\Delta B_{t_i} = B_{t_i} - B_{t_{i-1}}\). A solution \(X\) of this equation has random paths, due to the random noise \(\Delta B_{t_i}\). An example of a path is shown in Figure 6.2. Ito showed how we can take the limit of this equation as we make \(\Delta t\) smaller and make sense of the equation \[
\mathrm{d} X_t = \mu X_t\mathrm{d} t + \sigma X_t \mathrm{d} B_t\,.
\qquad(6.5)\] The solution \(X\) of Equation 6.5 is \[X_t = X_0\mathrm{e}^{(\mu - \sigma^2/2)t + \sigma B_t}\,. \qquad(6.6)\] We can show that \(X\) defined in Equation 6.6 satisfies Equation 6.5 by differentiating, just as we showed that \(X\) defined in Equation 6.3 satisfies Equation 6.2. However, we first need to explain Ito’s formula, which is a formula for differentiating functions of Brownian motions and, more generally, functions of Ito processes. An approximate path of \(X\) is shown in Figure 6.2. It is generated by taking \(\Delta t\) very small, just as we generated approximate paths of Brownian motions in Chapter 5.
Figure 6.2: Paths of the processes satisfying Equation 6.4 (difference equation) and Equation 6.6 (differential equation) for \(X_0 = 1\), \(\mu = 1\), \(\Delta t = 0.1\), and \(\sigma = 1\).
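The random case can be simulated in the same way. The sketch below (ours, assuming numpy; caption parameter values, horizon \(T=1\) assumed) draws Brownian increments, iterates the difference equation Equation 6.4, and evaluates the solution Equation 6.6 along the same Brownian path, so the two paths can be compared as in Figure 6.2.

```python
import numpy as np

rng = np.random.default_rng(0)
X0, mu, sigma, dt, T = 1.0, 1.0, 1.0, 0.1, 1.0
n = int(T / dt)
times = np.linspace(0.0, T, n + 1)

# Brownian increments: Delta B ~ N(0, dt)
dB = rng.normal(0.0, np.sqrt(dt), size=n)
B = np.concatenate(([0.0], np.cumsum(dB)))

# Difference equation (6.4)
X_diff = np.empty(n + 1)
X_diff[0] = X0
for i in range(1, n + 1):
    X_diff[i] = X_diff[i - 1] * (1.0 + mu * dt + sigma * dB[i - 1])

# Solution (6.6) evaluated along the same Brownian path
X_exact = X0 * np.exp((mu - 0.5 * sigma**2) * times + sigma * B)

print(X_diff[-1], X_exact[-1])  # close when dt is small
```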
The functions and processes \(X\) defined in this section have important interpretations. Equation 6.1 can be rewritten to say that the percent change in \(X\) is \(\mu \Delta t\). This could represent the value of a savings account that earns interest of \(\mu \Delta t\) in each period of length \(\Delta t\). This is the common way of calculating, for example, monthly interest, where \(\mu\) is called the annual rate of interest and \(\Delta t\) would be \(1/12\). The limiting Equation 6.3 is called continuous compounding of interest.
Similarly, Equation 6.4 can be rewritten to say that the percent change in \(X\) is \(\mu \Delta t + \sigma \Delta B\). This represents a random rate of return – for example, the return of a stock. The expected rate of return in this case is \(\mu \Delta t\), and the variance of the rate of return is \(\sigma^2 \Delta t\). The limiting Equation 6.6 is called continuous compounding of returns.
6.2 Ito Processes
The meaning of Equation 6.2 is that, for all \(t > 0\), \[X_t = X_0 + \int_0^t \mu X_s \mathrm{d} s\,.\] We assume the reader is familiar with integrals, so we do not explain this further. The function of time \(X\) defined in Equation 6.3 satisfies this equation. Similarly, the meaning of Equation 6.5 is that, for all \(t > 0\), \[X_t = X_0 + \int_0^t \mu X_s \mathrm{d} s + \int_0^t \sigma X_s \mathrm{d} B_s\,.\] The first integral in this formula is an ordinary integral. The second is an Ito integral, which we explain below. The sum of an ordinary integral and an Ito integral is called an Ito process. An Ito process always has continuous paths.
Let’s depart from the example of the previous section and consider a process \(X\) satisfying, for all \(t>0\), \[X_t = \int_0^t \alpha_s\mathrm{d} s + \int_0^t \theta_s \mathrm{d} B_s\,, \qquad(6.7)\] where \(\alpha\) and \(\theta\) can be stochastic processes. The example of the previous section fits this form, because we could take \(\alpha_s = \mu X_s\) and \(\theta_s = \sigma X_s\). The definition of the Ito integral \[\int_0^t \theta_s \mathrm{d} B_s\] is relatively complicated. It is enough for our purposes to know that it can be approximated by a discrete sum \[\sum_{i=1}^n \theta_{t_{i-1}}(B_{t_i} - B_{t_{i-1}})\,,\] given a partition \[ 0 = t_0 < \cdots < t_n = t\,,\] when \(n\) is large and the time between successive dates is small. The Ito integral exists provided \(\theta\) does not anticipate the future (so \(\theta_{t_{i-1}}\) is independent of the increment \(B_{t_i}- B_{t_{i-1}}\)) and provided \(\theta\) does not explode to \(\pm \infty\) in finite time, so \[\int_0^t \theta_s^2 \mathrm{d} s < \infty\] for all \(t > 0\), with probability one.
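To make the discrete-sum approximation concrete, here is a small sketch (ours, assuming numpy) that approximates the Ito integral \(\int_0^t B_s\mathrm{d} B_s\) by the left-endpoint sum above. For comparison, it prints \((B_t^2 - t)/2\), which is the value of this integral; that fact follows from Ito’s formula, introduced later in this chapter.

```python
import numpy as np

rng = np.random.default_rng(1)
T, n = 1.0, 100_000
dt = T / n

dB = rng.normal(0.0, np.sqrt(dt), size=n)   # Brownian increments
B = np.concatenate(([0.0], np.cumsum(dB)))  # Brownian path on the grid

# Left-endpoint (non-anticipating) sum with theta = B:
#   sum_i B_{t_{i-1}} * (B_{t_i} - B_{t_{i-1}})
ito_sum = np.sum(B[:-1] * dB)

print(ito_sum, 0.5 * (B[-1] ** 2 - T))      # nearly equal for large n
```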
We also write Equation 6.7 as \[
\mathrm{d} X_t = \alpha_t\mathrm{d} t + \theta_t \mathrm{d} B_t\,.
\qquad(6.8)\] We interpret this as change \(=\) mean \(+\) noise, with \(\alpha_t \mathrm{d} t\) being the mean and \(\theta_t\mathrm{d} B_t\) being the mean-zero random noise. The quantity \(\alpha_t\) is also called the drift of the process \(X\) at time \(t\). The coefficient \(\theta_t\) is called the diffusion coefficient of \(X\) at time \(t\). If \(\alpha\) and \(\theta\) are constant, it is standard to refer to the Ito process \(X\) as an \((\alpha,\theta)\)–Brownian motion, and in that case we obtain \[X_t= X_0 + \alpha t + \theta B_t\,.\]
An Ito process as in Equation 6.7 can be a martingale only if \(\alpha=0\). This should seem sensible, because \(\alpha\mathrm{d} t\) is the expected change in \(X\), and a process is a martingale only if its expected change is zero. This observation plays a fundamental role in deriving asset pricing formulas. Conversely, if \(\alpha=0\) and \[
\mathbb{E} \left[\int_0^t \theta^2_s\mathrm{d} s\right] < \infty
\qquad(6.9)\] for each \(t\), then the Ito process is a continuous martingale, and the variance of its date–\(t\) value, calculated with the information available at date \(0\), is \[\mathrm{var}(X_t) = \mathbb{E} \left[\int_0^t \theta^2_s\mathrm{d} s\right]\; .\]
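Both the martingale property and the variance formula can be illustrated by simulation. The following sketch (ours, assuming numpy) takes \(\alpha = 0\) and \(\theta_s = B_s\), so \(X_t = \int_0^t B_s\mathrm{d} B_s\); the formula then gives \(\mathrm{var}(X_t) = \mathbb{E}\left[\int_0^t B_s^2\mathrm{d} s\right] = \int_0^t s\,\mathrm{d} s = t^2/2\), and the sample mean of \(X_t\) should be near zero.

```python
import numpy as np

rng = np.random.default_rng(2)
T, n, n_paths = 1.0, 400, 10_000
dt = T / n

dB = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n))
B_left = np.cumsum(dB, axis=1) - dB  # left endpoints B_{t_{i-1}} (B_0 = 0)

# X_T = integral of B dB, approximated path by path by the left-endpoint sum
X_T = np.sum(B_left * dB, axis=1)

print(X_T.mean())           # approximately 0 (martingale)
print(X_T.var(), T**2 / 2)  # approximately equal
```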
6.3 Quadratic and Joint Variation of Ito Processes
To compute the quadratic variation of an Ito process, we use the following simple and important rules (for the sake of brevity, we drop the subscript \(t\) from \(B_t\) here and sometimes later). These rules should be regarded as mnemonic devices. The calculations we do with them lead to the correct results, but the objects have no real mathematical meaning.
Important Principle
\[
(\mathrm{d} t)^2 = 0\;,
\qquad(6.10)\]
\[
(\mathrm{d} t)(\mathrm{d} B) =0\;,
\qquad(6.11)\]
\[
(\mathrm{d} B)^2 = \mathrm{d} t\;.
\qquad(6.12)\]
We apply these rules to compute the quadratic variation of any Ito process \(X\) as follows:
Important Principle
If \(\mathrm{d} X = \alpha\mathrm{d} t + \theta\mathrm{d} B\) for a Brownian motion \(B\), then \[\begin{align}
(\mathrm{d} X)^2 &= (\alpha\mathrm{d} t+\theta\mathrm{d} B)^2\\
&= \alpha^2(\mathrm{d} t)^2 + 2\alpha\theta(\mathrm{d} t)(\mathrm{d} B) + \theta^2(\mathrm{d} B)^2\\
&= \theta^2\mathrm{d} t\;.
\end{align}\] To compute the quadratic variation of the Ito process \(X\) over any particular period of time, we integrate \((\mathrm{d} X)^2\) over that period as1 \[
\int_0^t (\mathrm{d} X_s)^2 = \int_0^t \theta^2_s\mathrm{d} s\;.
\qquad(6.13)\]
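The content of Equation 6.13 can be checked numerically: simulate an Ito process on a fine grid, sum its squared changes, and compare with the integral of \(\theta^2\). The sketch below (ours, assuming numpy, with arbitrary parameter values) does this for the process of Section 6.1, for which \(\theta_s = \sigma X_s\).

```python
import numpy as np

rng = np.random.default_rng(3)
mu, sigma, X0, T, n = 0.1, 0.3, 1.0, 1.0, 100_000
dt = T / n

dB = rng.normal(0.0, np.sqrt(dt), size=n)
# Simulate dX = mu*X*dt + sigma*X*dB on a fine grid
X = X0 * np.concatenate(([1.0], np.cumprod(1.0 + mu * dt + sigma * dB)))

quad_var = np.sum(np.diff(X) ** 2)             # sum of squared changes
integral = np.sum((sigma * X[:-1]) ** 2) * dt  # integral of theta^2 ds

print(quad_var, integral)                      # nearly equal when dt is small
```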
Now consider two Ito processes: \[
\mathrm{d} X_{1t} = \mu_{1t}\mathrm{d} t + \sigma_{1t}\mathrm{d} B_{1t}\;,
\qquad(6.14)\]
\[
\mathrm{d} X_{2t} = \mu_{2t}\mathrm{d} t + \sigma_{2t}\mathrm{d} B_{2t}\;,
\qquad(6.15)\]
where \(B_1\) and \(B_2\) are standard Brownian motions. We calculate the product of differentials of Ito processes as follows.
Important Principle
If \(X_1\) and \(X_2\) are Ito processes as in Equation 6.14 and Equation 6.15, then \[(\mathrm{d} X_1)(\mathrm{d} X_2) = (\sigma_{1}\mathrm{d} B_{1})(\sigma_{2}\mathrm{d} B_{2})= \sigma_1\sigma_2\rho\mathrm{d} t \qquad(6.16)\] where \(\rho\) is the correlation process of the two Brownian motions.
The real meaning of this rule is that it is possible to calculate the joint variation (i.e., limit of sum of products of changes) of the two Ito processes from \(0\) to \(t\) as \[\int_0^t (\mathrm{d} X_{1s})(\mathrm{d} X_{2s}) = \int_0^t (\sigma_{1s}\mathrm{d} B_{1s})(\sigma_{2s}\mathrm{d} B_{2s}) = \int_0^t \sigma_{1s}\sigma_{2s}\rho_s\mathrm{d} s\,.
\qquad(6.17)\] The last integral in this equation is the correct formula for the joint variation. As with squaring differentials, taking products of differentials is a mnemonic device to get us to the correct formula.2
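Equation 6.17 can be illustrated in the same way. The sketch below (ours, assuming numpy, with arbitrary constant coefficients) builds two Brownian motions with constant correlation \(\rho\), forms two Ito processes, and compares the sum of products of their changes to \(\sigma_1\sigma_2\rho t\).

```python
import numpy as np

rng = np.random.default_rng(4)
T, n = 1.0, 100_000
dt = T / n
mu1, mu2, sig1, sig2, rho = 0.05, -0.02, 0.2, 0.4, 0.6

dW1 = rng.normal(0.0, np.sqrt(dt), size=n)
dW2 = rng.normal(0.0, np.sqrt(dt), size=n)
dB1 = dW1
dB2 = rho * dW1 + np.sqrt(1.0 - rho**2) * dW2  # correlation rho with dB1

dX1 = mu1 * dt + sig1 * dB1                    # constant-coefficient Ito processes
dX2 = mu2 * dt + sig2 * dB2

joint_var = np.sum(dX1 * dX2)                  # sum of products of changes
print(joint_var, sig1 * sig2 * rho * T)        # nearly equal for large n
```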
6.4 Introduction to Ito’s Formula
First we recall some facts of the ordinary calculus. If \(y=f(x)\) and \(x_t = g(t)\) with \(f\) and \(g\) being continuously differentiable functions, then \[\frac{\mathrm{d} y}{\mathrm{d} t} = \frac{\mathrm{d} y}{\mathrm{d} x}\times \frac{\mathrm{d} x}{\mathrm{d} t} = f'(x_t)g'(t)\; .\] This implies that, for each \(t>0\), \[y_t = f(x_t) = y_0 + \int_0^t \frac{\mathrm{d} y}{\mathrm{d} s}\mathrm{d} s = y_0 + \int_0^t f'(x_s)g'(s)\mathrm{d} s\; .\] Substituting \(\mathrm{d} x_s = g'(s)\mathrm{d} s\), we can also write this as \[
y_t = f(x_t) = y_0 + \int_0^t f'(x_s)\mathrm{d} x_s\,,
\qquad(6.18)\] or, in differential form, \[
\mathrm{d} y_t = f'(x_t)\mathrm{d} x_t \,.
\qquad(6.19)\] What people frequently remember about integrals from their calculus courses is that there are a lot of tricky substitutions that can be made to simplify the calculation of various integrals. We won’t need those in this book. All we will use are equations of the form of Equation 6.18, which is a special case of the Fundamental Theorem of Calculus, which says that a function is the integral of its derivative. Intuitively, we can think of Equation 6.18 as saying that the change in \(y\) over a discrete interval (from \(0\) to \(t\)) is the continuous sum (integral) of its infinitesimal changes.
We will contrast Equation 6.18 with the following special case of Ito’s formula for the calculus of Ito processes.
Important Principle
If \(B\) is a Brownian motion and \(Y = f(B)\) for a twice-continuously differentiable function \(f\), then \[\mathrm{d} Y_t = f'(B_t)\mathrm{d} B_t + \frac{1}{2}f''(B_t)\mathrm{d} t \,.
\qquad(6.20)\]
Comparing Equation 6.20 to Equation 6.19, we see that Ito’s formula has an extra term involving the second derivative \(f''\).
Equation 6.20 implies that \(Y=f(B)\) is an Ito process with drift \(f''(B_t)/2\) and diffusion coefficient \(f'(B_t)\). The real meaning of Equation 6.20 is the integrated form: \[
Y_t = f(B_t) = Y_0 + \int_0^t f'(B_s)\mathrm{d} B_s + \frac{1}{2}\int_0^t f''(B_s)\mathrm{d} s\;.
\qquad(6.21)\] Thus, the change in \(Y\) over a discrete interval is again the continuous sum of its infinitesimal changes, but now the infinitesimal changes are given by Equation 6.20. Note that the first integral in Equation 6.21 is an Ito integral.
To gain some intuition for the extra term in Ito’s formula, we return to the ordinary calculus. Given dates \(s<t\), the derivative defines a linear approximation of the change in \(y\) from \(s\) to \(t\); that is, setting \(\Delta x = x_t-x_s\) and \(\Delta y = y_t - y_s\), we have the approximation \[\Delta y \approx f'(x_s) \,\Delta x\; .\] A better approximation is given by the second-order Taylor series expansion \[\Delta y \approx f'(x_s)\,\Delta x + \frac{1}{2} f''(x_s)\,(\Delta x)^2\; .\] An interpretation of Equation 6.18 is that the linear approximation works perfectly for infinitesimal time periods \(\mathrm{d} s\), because we can compute the change in \(y\) over the time interval \([0,t]\) by summing up the infinitesimal changes \(f'(x_s)\mathrm{d} x_s\). In other words, the second-order term \(\frac{1}{2} f''(x_s)\,(\Delta x)^2\) vanishes when we consider very short time periods.
The second-order Taylor series expansion in the case of \(Y=f(B)\) is \[\Delta Y \approx f'(B_s)\,\Delta B + \frac{1}{2} f''(B_s)\,(\Delta B)^2\; .\] For example, given a partition \(0=t_0 < t_1 < \cdots < t_n=t\) of the time interval \([0,t]\), we have, with the same notation we have used earlier, \[
Y_t = f(B_t) \approx Y_0 + \sum_{i=1}^n f'(B_{t_{i-1}})\,\Delta B_{t_i} + \frac{1}{2}\sum_{i=1}^n f''(B_{t_{i-1}})\,(\Delta B_{t_i})^2\;.
\qquad(6.22)\]
If we make the time intervals \(t_i-t_{i-1}\) shorter, letting \(n \rightarrow \infty\), we cannot expect the extra term here to disappear and give us the ordinary-calculus result in Equation 6.18, because we know that \[\lim_{n \rightarrow \infty} \sum_{i=1}^n (\Delta B_{t_i})^2 = t\; ,\] whereas for the continuously differentiable function \(x_t = g(t)\), the same limit is zero. In fact, it seems sensible to interpret the limit of \((\Delta B)^2\) as \((\mathrm{d} B)^2 =\mathrm{d} t\). This is perfectly consistent with Ito’s formula: if we take the limit in Equation 6.22, replacing the limit of \((\Delta B_{t_i})^2\) with \((\mathrm{d} B)^2 = \mathrm{d} t\), we obtain Equation 6.21.
To see the accuracy of Ito’s approximation over different time steps, as well as the impact of the second-derivative term \(\int_0^t (1/2)f''(B_s)\mathrm{d} s\), we encourage readers to interact with the plot below. It examines the function \(f(x)=\mathrm{e}^{x}\) (for which we have \(f'(x)=\mathrm{e}^x\) and \(f''(x) = \mathrm{e}^x\)). It simulates an approximate path of a Brownian motion as we have done before. It then compares the true value of \(\mathrm{e}^{B_{t_i}}\) to the Ito expansion \[\mathrm{e}^{B_t}=1 + \int_0^t \mathrm{e}^{B_s} \mathrm{d} B_s + \frac{1}{2}\int_0^t \mathrm{e}^{B_s} \mathrm{d} s\] using the discretization \[\Delta \mathrm{e}^{B_{t_i}}= \mathrm{e}^{B_{t_{i-1}}} \Delta B_{t_i} + \frac{1}{2} \mathrm{e}^{B_{t_{i-1}}} \Delta t \,.\] Notice that the discretization is just a second-order Taylor series expansion. The discretization approximates the true value better if we take \(n\) larger and \(\Delta t\) smaller. The important take-away from the figure is that the cumulative second-derivative terms in the discretization do not vanish as we take \(n\) larger but instead continue to contribute significantly to the approximation.
Figure 6.3: Accuracy of the Ito Approximation.
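The computation behind a figure like Figure 6.3 can be sketched as follows (our reconstruction of what the text describes, assuming numpy): simulate a Brownian path, accumulate the discretized Ito expansion of \(\mathrm{e}^{B_t}\), and track the cumulative contribution of the second-derivative term.

```python
import numpy as np

rng = np.random.default_rng(5)
T, n = 1.0, 1_000
dt = T / n

dB = rng.normal(0.0, np.sqrt(dt), size=n)
B = np.concatenate(([0.0], np.cumsum(dB)))

# Discretization: Delta exp(B_{t_i}) ~ exp(B_{t_{i-1}})*dB + 0.5*exp(B_{t_{i-1}})*dt
approx = np.empty(n + 1)
approx[0] = 1.0
second_order = 0.0
for i in range(n):
    approx[i + 1] = approx[i] + np.exp(B[i]) * dB[i] + 0.5 * np.exp(B[i]) * dt
    second_order += 0.5 * np.exp(B[i]) * dt

print(approx[-1], np.exp(B[-1]))  # Ito approximation vs. true value exp(B_T)
print(second_order)               # cumulative second-derivative term does not vanish
```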
6.5 Functions of Time and a Brownian Motion
We extend the example in the previous section slightly. Consider a process \(Y\) defined as \(Y_t = f(t, B_t)\) for some function \(f\). The following rule states that \(Y\) is an Ito process with drift equal to \[\frac{\partial f}{\partial t} + \frac{1}{2} \frac{\partial^2 f}{\partial B^2}\] and diffusion coefficient equal to \(\partial f/\partial B\). This is what we need to remember to make calculations. The real meaning of \((\mathrm{d} B)^2\) is \(\mathrm{d} t\), so we can (and will) substitute that in the following rule, but it may be easier to remember \((\mathrm{d} B)^2\). This becomes more important when we consider more complex examples in the next sections.
Important Principle
If \(f(t, B)\) is continuously differentiable in \(t\) and twice continuously differentiable in \(B\) and \(Y_t= f(t, B_t)\) for a standard Brownian motion \(B\), then \[\mathrm{d} Y = \frac{\partial f(t, B)}{\partial t}\mathrm{d} t + \frac{\partial f(t, B)}{\partial B}\mathrm{d} B + \frac{1}{2} \frac{\partial^2 f(t, B)}{\partial B^2}(\mathrm{d} B)^2\,.
\qquad(6.23)\]
Example
We can finish the discussion in Section 6.1 regarding the process defined in Equation 6.6 by applying Equation 6.23. We want to show that the process satisfies Equation 6.5. We do that by differentiating and applying Equation 6.23. For \(f(t, B) = X_0\mathrm{e}^{(\mu - \sigma^2/2)t + \sigma B}\), we have \[\frac{\partial f}{\partial t} = \left(\mu-\frac{1}{2}\sigma^2\right)f(t, B)\,,\quad \frac{\partial f}{\partial B} = \sigma f(t, B)\,, \quad \frac{\partial^2 f}{\partial B^2} = \sigma^2f(t, B)\,.\] Therefore, \[\mathrm{d} Y = \left(\mu-\frac{1}{2}\sigma^2\right)Y\mathrm{d} t + \sigma Y \mathrm{d} B + \frac{1}{2}\sigma^2 Y\,(\mathrm{d} B)^2 = \mu Y\mathrm{d} t + \sigma Y \mathrm{d} B\,.\] This verifies Equation 6.5.
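If desired, the partial derivatives in this example can be checked symbolically. Here is a brief sketch (ours, assuming sympy) verifying that \(\partial f/\partial t = (\mu - \sigma^2/2)f\), \(\partial f/\partial B = \sigma f\), and \(\partial^2 f/\partial B^2 = \sigma^2 f\).

```python
import sympy as sp

t, B, mu, sigma, X0 = sp.symbols('t B mu sigma X0', positive=True)
f = X0 * sp.exp((mu - sigma**2 / 2) * t + sigma * B)

print(sp.simplify(sp.diff(f, t) - (mu - sigma**2 / 2) * f))  # 0
print(sp.simplify(sp.diff(f, B) - sigma * f))                # 0
print(sp.simplify(sp.diff(f, B, 2) - sigma**2 * f))          # 0
```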
6.6 Functions of Time and an Ito Process
Now consider the more general case \(Y_t = f(t, X_t)\) where \(X\) is an Ito process. As explained before, this means that \[\mathrm{d} X_t = \alpha_t\mathrm{d} t + \theta_t \mathrm{d} B_t
\qquad(6.24)\] for some stochastic processes \(\alpha\) and \(\theta\), where \(B\) is a standard Brownian motion. Then, from our previous rules, \((\mathrm{d} X)^2 = \theta^2\mathrm{d} t\). Ito’s formula in this more general case takes the same form as Equation 6.23, replacing the Brownian motion \(B\) with the Ito process \(X\).
Important Principle
If \(f(t, X)\) is continuously differentiable in \(t\) and twice continuously differentiable in \(X\) and \(Y_t= f(t, X_t)\) where \(X\) is an Ito process, then \[\mathrm{d} Y = \frac{\partial f(t, X)}{\partial t}\mathrm{d} t + \frac{\partial f(t, X)}{\partial X}\mathrm{d} X + \frac{1}{2} \frac{\partial^2 f(t, X)}{\partial X^2}(\mathrm{d} X)^2\,.
\qquad(6.25)\]
We can write Equation 6.25 in terms of \(\mathrm{d} t\) and \(\mathrm{d} B\) terms by substituting from Equation 6.24 and using \((\mathrm{d} X)^2 = \theta^2\mathrm{d} t\). This produces \[\mathrm{d} Y = \left(\frac{\partial f(t, X)}{\partial t} + \alpha\frac{\partial f(t, X)}{\partial X} + \frac{1}{2}\theta^2 \frac{\partial^2 f(t, X)}{\partial X^2}\right)\mathrm{d} t + \theta\frac{\partial f(t, X)}{\partial X}\mathrm{d} B\,.\]
Here are some important examples of Ito’s formula.
Key Result
If \(Y = X^\alpha\) for a constant \(\alpha\), then \[\mathrm{d} Y = \alpha X^{\alpha-1}\mathrm{d} X + \frac{1}{2}\alpha (\alpha - 1)X^{\alpha-2}(\mathrm{d} X)^2\,.\] This is equivalent to \[\frac{\mathrm{d} Y}{Y} = \alpha \frac{\mathrm{d} X}{X} + \frac{\alpha(\alpha-1)}{2}\left(\frac{\mathrm{d} X}{X}\right)^2\,.
\]
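For example (our own illustration), if \(X\) satisfies Equation 6.5, so that \(\mathrm{d} X/X = \mu\mathrm{d} t + \sigma\mathrm{d} B\) and \((\mathrm{d} X/X)^2 = \sigma^2\mathrm{d} t\), and \(Y = X^2\), then the power formula gives \[\frac{\mathrm{d} Y}{Y} = 2\,\frac{\mathrm{d} X}{X} + \left(\frac{\mathrm{d} X}{X}\right)^2 = \left(2\mu + \sigma^2\right)\mathrm{d} t + 2\sigma\mathrm{d} B\,.\]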
Key Result
If \(Y=\mathrm{e}^X\), then \[\frac{\mathrm{d} Y}{Y}=\mathrm{d} X + \frac{(\mathrm{d} X)^2}{2}\;.
\qquad(6.26)\]
Key Result
If \(Y=\log X\), then \[
\mathrm{d} Y=\frac{\mathrm{d} X}{X} - \frac{1}{2}\left(\frac{\mathrm{d} X}{X}\right)^2\;.
\qquad(6.27)\]
Example
We showed in the previous Example that \(X\) defined in Equation 6.6 satisfies Equation 6.5, but it is also useful to see how we can start from Equation 6.5 and deduce that Equation 6.6 is the solution. We can do that by taking logarithms. Set \(Y_t = \log X_t\). Then, using Equation 6.27 and substituting from Equation 6.5, we have \[\mathrm{d} \log X = \left(\mu - \frac{1}{2}\sigma^2\right)\mathrm{d} t + \sigma\mathrm{d} B\,.\] There is no \(X\) on the right-hand side of this, so we can simply integrate to compute \(\log X_t\) as \[\log X_t = \log X_0 + \left(\mu - \frac{1}{2}\sigma^2\right)t + \sigma B_t\,.\] Exponentiating gives Equation 6.6.
6.7 Functions of Time and Multiple Ito Processes
Using Equation 6.16 for products of differentials, we can state Ito’s formula for a function of time and two Ito processes as follows.
Important Principle
If \(Y_t = f(t, X_{1t}, X_{2t})\) where \(X_1\) and \(X_2\) are Ito processes and \(f\) is continuously differentiable in \(t\) and twice continuously differentiable in \(X_1\) and \(X_2\), then \[\begin{multline}
\mathrm{d} Y = \frac{\partial f}{\partial t}\mathrm{d} t + \frac{\partial f}{\partial X_1}\mathrm{d} X_1 + \frac{\partial f}{\partial X_2}\mathrm{d} X_2 \\+ \frac{1}{2} \frac{\partial^2 f}{\partial X_1^2}\,(\mathrm{d} X_1)^2 + \frac{1}{2} \frac{\partial^2 f}{\partial X_2^2}\,(\mathrm{d} X_2)^2
+ \frac{\partial^2 f}{\partial X_1\partial X_2}\,(\mathrm{d} X_1)(\mathrm{d} X_2)\;.
\end{multline} \qquad(6.28)\]
This is analogous to a second-order Taylor series expansion in the variables \(X_1\) and \(X_2\). A similar formula applies to functions of more than two Ito processes. We just need to include a term for each \(\mathrm{d} X_i\), each \((\mathrm{d} X_i)^2\), and each \((\mathrm{d} X_i)(\mathrm{d} X_j)\).
Here are some important examples. We switch notation from \(X_1\) and \(X_2\) to \(X\) and \(Y\) and from \(Y\) to \(Z\) so we can drop the subscripts. These formulas follow from Equation 6.28 by taking \(f(x,y)=xy\) or \(f(x,y)=y/x\).
Key Result
If \(Z=XY\), then \(\mathrm{d} Z=X\mathrm{d} Y+Y\mathrm{d} X + (\mathrm{d} X)(\mathrm{d} Y)\). We can write this as \[
\frac{\mathrm{d} Z}{Z}=\frac{\mathrm{d} X}{X} + \frac{\mathrm{d} Y}{Y} + \left(\frac{\mathrm{d} X}{X}\right)\left(\frac{\mathrm{d} Y}{Y}\right)\;.
\qquad(6.29)\]
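To see how this follows from Equation 6.28, take \(f(x,y)=xy\), so \(\partial f/\partial x = y\), \(\partial f/\partial y = x\), the own second derivatives are zero, and \(\partial^2 f/\partial x\,\partial y = 1\). Equation 6.28 then gives \[\mathrm{d} Z = Y\mathrm{d} X + X\mathrm{d} Y + (\mathrm{d} X)(\mathrm{d} Y)\,,\] and dividing by \(Z = XY\) yields Equation 6.29.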
Key Result
If \(Z=Y/X\), then \[\frac{\mathrm{d} Z}{Z} = \frac{\mathrm{d} Y}{Y} -\frac{\mathrm{d} X}{X} - \left(\frac{\mathrm{d} Y}{Y}\right)\left(\frac{\mathrm{d} X}{X}\right) + \left(\frac{\mathrm{d} X}{X}\right)^2\;.
\qquad(6.30)\]
The following is a special case of Equation 6.29 that we encounter often.
Key Result
Let \[Y_t =\exp\left(\int_0^t q_s\mathrm{d} s\right)\] for some (possibly random) process \(q\) and define \(Z=XY\) for any Ito process \(X\). Equation 6.29 gives us \[
\frac{\mathrm{d} Z}{Z}=q\mathrm{d} t + \frac{\mathrm{d} X}{X}\;.
\qquad(6.31)\] This is the same as in the usual calculus.
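To see why there is no extra term in Equation 6.31, note that \(Y\) here has no \(\mathrm{d} B\) part: by the ordinary calculus, \(\mathrm{d} Y = q Y\mathrm{d} t\), so \(\mathrm{d} Y/Y = q\mathrm{d} t\). Substituting into Equation 6.29, \[\frac{\mathrm{d} Z}{Z} = \frac{\mathrm{d} X}{X} + q\mathrm{d} t + \left(\frac{\mathrm{d} X}{X}\right)q\mathrm{d} t = q\mathrm{d} t + \frac{\mathrm{d} X}{X}\,,\] because \((\mathrm{d} t)^2 = 0\) and \((\mathrm{d} t)(\mathrm{d} B) = 0\) by Equation 6.10 and Equation 6.11.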
6.8 Exercises
Exercise 6.1 Ito’s Lemma can be used in different ways to get the same answer. For example, let \(X_t = a t + b B_t\) and use Ito’s lemma on the function \(\mathrm{e}^{X_t}\). Alternatively, let \(f(t, B_t) = \mathrm{e}^{a t + bB_t}\) and use Ito’s lemma on \(f(t, B_t)\). Verify that the two approaches give the same answer.
Exercise 6.2 Let \(\mathrm{d} X_t = \mu X_t \mathrm{d} t + \sigma X_t \mathrm{d} B_t\). Use Ito’s lemma to find \(\log(X_t)\). What are the expected value and variance of \(\log(X_t)\)?
In a more formal mathematical presentation, one normally writes \(\mathrm{d} \langle X,X\rangle\) for what we are writing here as \((\mathrm{d} X)^2\). This is the differential of the quadratic variation process, and the quadratic variation through date \(t\) is \[
\langle X,X\rangle _t = \int_0^t \mathrm{d} \langle X,X\rangle_s = \int_0^t \theta^2_s\mathrm{d} s\;.
\] Our mnemonic device of squaring differentials leads us to the correct formula.↩︎
A somewhat more precise definition than our previous description of the stochastic integral \(\int_0^t \sigma_{1s}\,\mathrm{d} B_{1s}\) is the following: when Equation 6.9 holds, the stochastic integral is the (unique) martingale whose joint variation with any other Ito process is given by Equation 6.17.↩︎