# include <cppad/ode_gear.hpp>
OdeGear(F, m, n, T, X, e)
Let
f : \R \times \R^n \rightarrow \R^n
be a smooth function.
This routine solves the following initial value problem
\[
\begin{array}{rcl}
x( t_{m-1} ) & = & x^0 \\
x^\prime (t) & = & f[t , x(t)]
\end{array}
\]
for the value of
x( t_m ).
If your set of ordinary differential equations is not stiff,
an explicit method may be better (perhaps Runge45).
The file cppad/ode_gear.hpp
is included by cppad/cppad.hpp,
but it can also be included separately without the rest of
the CppAD routines.
The routine F has prototype
Fun &
F
and must support the following set of calls:
F.Ode(t, x, f)
F.Ode_dep(t, x, f_x)
The argument t has prototype
const
Scalar &
t
(see description of Scalar
below).
The argument x has prototype
const
Vector &
x
and has size n
(see description of Vector
below).
The argument f to F.Ode
has prototype
Vector &
f
On input, f is a vector of size n
and the values of its elements do not matter.
On output,
f is set equal to
f(t, x)
(see f(t, x) in Purpose
).
The argument f_x to F.Ode_dep has prototype
Vector &
f_x
On input, f_x is a vector of size
n * n
and the values of its elements do not matter.
On output,
\[
f\_x [i * n + j] = \partial_{x(j)} f_i ( t , x )
\]
Warning: both arguments are passed by reference;
do not forget the &
in the prototype for
f and f_x.
size_t
m
It specifies the order (highest power of
t
)
used to represent the function
x(t)
in the multi-step method.
Upon return from OdeGear
,
the i-th component of the polynomial is defined by
\[
p_i ( t_j ) = X[ j * n + i ]
\]
for
j = 0 , \ldots , m
(where
0 \leq i < n
).
The value of
m
must be greater than or equal to one.
size_t
n
It specifies the range space dimension of the
vector-valued function
x(t).
The argument T has prototype
const
Vector &
T
and size greater than or equal to
m+1
.
For
j = 0 , \ldots , m-1
,
T[j]
is the time corresponding
to a previous point in the multi-step method.
The value
T[m]
is the time
of the next point in the multi-step method.
The array
T
must be monotone increasing; i.e.,
T[j] < T[j+1]
.
Above and below we often use the shorthand
t_j
for
T[j]
.
The argument X has prototype
Vector &
X
and size greater than or equal to
(m+1) * n
.
On input to OdeGear
,
for
j = 0 , \ldots , m-1
, and
i = 0 , \ldots , n-1
\[
X[ j * n + i ] = x_i ( t_j )
\]
Upon return from OdeGear
,
for
i = 0 , \ldots , n-1
\[
X[ m * n + i ] \approx x_i ( t_m )
\]
The argument e is a vector of size greater than or equal to
n.
On output, it contains an element-by-element estimated bound
for the absolute error in X; i.e.,
\[
e[i] \geq | X[ m * n + i ] - x_i ( t_m ) |
\]
The order of this approximation is one less than the order of
the solution; i.e.,
\[
e = O ( h^m )
\]
where
h
is the maximum of
t_{j+1} - t_j
.
Operation | Description
--- | ---
a < b | less than operator (returns a bool object)
The source code for this routine is in the file
cppad/ode_gear.hpp.
For this discussion we use the shorthand
x_j
for the value
x ( t_j ) \in \R^n
, which is not to be confused
with
x_i (t) \in \R
in the notation above.
The interpolating polynomial
p(t)
is given by
\[
p(t) =
\sum_{j=0}^m
x_j
\frac{
\prod_{i \neq j} ( t - t_i )
}{
\prod_{i \neq j} ( t_j - t_i )
}
\]
The derivative
p^\prime (t)
is given by
\[
p^\prime (t) =
\sum_{j=0}^m
x_j
\frac{
\sum_{i \neq j} \prod_{k \neq i,j} ( t - t_k )
}{
\prod_{k \neq j} ( t_j - t_k )
}
\]
Evaluating the derivative at the point
t_\ell
we have
\[
\begin{array}{rcl}
p^\prime ( t_\ell ) & = &
x_\ell
\frac{
\sum_{i \neq \ell} \prod_{k \neq i,\ell} ( t_\ell - t_k )
}{
\prod_{k \neq \ell} ( t_\ell - t_k )
}
+
\sum_{j \neq \ell}
x_j
\frac{
\sum_{i \neq j} \prod_{k \neq i,j} ( t_\ell - t_k )
}{
\prod_{k \neq j} ( t_j - t_k )
}
\\
& = &
x_\ell
\sum_{i \neq \ell}
\frac{ 1 }{ t_\ell - t_i }
+
\sum_{j \neq \ell}
x_j
\frac{
\prod_{k \neq \ell,j} ( t_\ell - t_k )
}{
\prod_{k \neq j} ( t_j - t_k )
}
\\
& = &
x_\ell
\sum_{k \neq \ell} ( t_\ell - t_k )^{-1}
+
\sum_{j \neq \ell}
x_j
( t_j - t_\ell )^{-1}
\prod_{k \neq \ell ,j} ( t_\ell - t_k ) / ( t_j - t_k )
\end{array}
\]
We define the vector
\alpha \in \R^{m+1}
by
\[
\alpha_j = \left\{ \begin{array}{ll}
\sum_{k \neq m} ( t_m - t_k )^{-1}
& {\rm if} \; j = m
\\
( t_j - t_m )^{-1}
\prod_{k \neq m,j} ( t_m - t_k ) / ( t_j - t_k )
& {\rm otherwise}
\end{array} \right.
\]
It follows that
\[
p^\prime ( t_m ) = \alpha_0 x_0 + \cdots + \alpha_m x_m
\]
Gear's method determines
x_m
by solving the following
nonlinear equation
\[
f( t_m , x_m ) = \alpha_0 x_0 + \cdots + \alpha_m x_m
\]
Newton's method for solving this equation determines iterates,
which we denote by
x_m^k
, by solving the following affine
approximation of the equation above
\[
\begin{array}{rcl}
f( t_m , x_m^{k-1} ) + \partial_x f( t_m , x_m^{k-1} ) ( x_m^k - x_m^{k-1} )
& = &
\alpha_0 x_0 + \alpha_1 x_1 + \cdots + \alpha_m x_m^k
\\
\left[ \alpha_m I - \partial_x f( t_m , x_m^{k-1} ) \right] x_m^k
& = &
\left[
f( t_m , x_m^{k-1} ) - \partial_x f( t_m , x_m^{k-1} ) x_m^{k-1}
- \alpha_0 x_0 - \cdots - \alpha_{m-1} x_{m-1}
\right]
\end{array}
\]
In order to initialize Newton's method, i.e., to choose
x_m^0
,
we define the vector
\beta \in \R^{m+1}
by
\[
\beta_j = \left\{ \begin{array}{ll}
\sum_{k \neq m-1} ( t_{m-1} - t_k )^{-1}
& {\rm if} \; j = m-1
\\
( t_j - t_{m-1} )^{-1}
\prod_{k \neq m-1,j} ( t_{m-1} - t_k ) / ( t_j - t_k )
& {\rm otherwise}
\end{array} \right.
\]
It follows that
\[
p^\prime ( t_{m-1} ) = \beta_0 x_0 + \cdots + \beta_m x_m
\]
We solve the following approximation of the equation above to determine
x_m^0
:
\[
f( t_{m-1} , x_{m-1} ) =
\beta_0 x_0 + \cdots + \beta_{m-1} x_{m-1} + \beta_m x_m^0
\]