A kind of problem that turns up quite often in physics and elsewhere is to find the solution of an ordinary differential equation, given some initial conditions. That is, you have a system with some state x represented as a vector of real numbers, you have an initial state x0, and you have a rule describing the evolution of the state:
$$\frac{dx}{dt} = F(x, t)$$
And your goal is to find x(t). This standard problem has some standard solution techniques, some quite advanced - Runge-Kutta methods, symplectic methods, Hermite integrators. A few are implemented in scipy. But it sometimes happens that solving this problem is only part of a more complicated process, say of fitting, where it would be nice to have the derivatives of the solution with respect to the various initial conditions. It turns out this isn't too hard to work out, usually.
The trick is to think of it as the problem of transporting a vector along the solution. That is, in addition to x as a system state, you also have a vector v. As x evolves, so does v. Notionally, v could be evolved by giving it some infinitesimal length δ and then evolving its tip (x+δv) as well as its tail (x). Actually doing this numerically is possible, but very vulnerable to discontinuities in the differential equation solver (for example when the step size changes). But mathematically, it turns out that
$$\frac{dv}{dt} = DF(x, t)\,v$$
where DF is the Jacobian of F, that is, the matrix whose (i,j)th entry is the derivative of the ith component of F with respect to the jth component of x, so that (DF)ij = ∂Fi/∂xj. But this means that you can simply view the problem as a much-higher-dimensional problem whose state is x together with any partial derivative vectors vk, and where the right-hand side function gives the derivative of x as F(x,t) and the derivative of each vk as the matrix-vector product DF(x,t)⋅vk.
Going to high dimension in an ODE solver definitely makes the problem harder. For implicit solvers, there is a certain amount of matrix inversion going on, which can become really quite expensive. But solvers for non-stiff problems are probably not much hurt by this expansion. Importantly, the right-hand side depends only on x, not all the vk, so the step sizes are probably not much affected by the expansion.
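Here is a minimal generic sketch of that recipe (the helper name extended_rhs and the packing convention, x followed by the flattened vk rows, are mine; the rest of this post builds the same thing concretely for a Kepler orbit):
In [ ]:
import numpy as np

def extended_rhs(F, DF, n):
    # Build a right-hand side suitable for scipy.integrate.odeint whose state
    # packs x (length n) followed by the partial-derivative vectors vk, one per row.
    def rhs(packed, t):
        x = packed[:n]
        vs = packed[n:].reshape(-1, n)   # one partial-derivative vector per row
        dx = F(x, t)                     # x evolves exactly as before
        dvs = np.dot(DF(x, t), vs.T).T   # each vk evolves by the Jacobian
        return np.concatenate([dx, np.ravel(dvs)])
    return rhs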
One final comment: if you are integrating from t0 to t1, you might want the derivatives with respect to t0 or t1, if your problem allows modifying those too. Fortunately, that's pretty easy - the derivative with respect to t1 is just F(x(t1),t1). With respect to t0 it should be obtained by transporting −F(x(t0),t0) along the curve as above.
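Written out (the matrix Φ(t1,t0) here is just my name for the result of transporting a full basis of vectors from t0 to t1 as above):
$$\frac{\partial x(t_1)}{\partial t_1} = F(x(t_1), t_1), \qquad \frac{\partial x(t_1)}{\partial t_0} = -\Phi(t_1, t_0)\,F(x(t_0), t_0)$$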
Let's try this on a sample problem, that of a particle moving in a Keplerian orbit.
In [5]:
import numpy as np
import matplotlib.pyplot as plt
import scipy.integrate
In [8]:
def acceleration(r):
    # Newtonian gravity toward the origin, in units where GM = 1
    return -r/np.dot(r,r)**(3./2)

def deriv(rv, t):
    # the state vector rv is [x, y, z, vx, vy, vz]
    d = np.zeros_like(rv)
    d[:3] = rv[3:]
    d[3:] = acceleration(rv[:3])
    return d
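In the units this code uses (GM = 1), the acceleration it implements is
$$\ddot{\mathbf{r}} = -\frac{\mathbf{r}}{(\mathbf{r}\cdot\mathbf{r})^{3/2}} = -\frac{\mathbf{r}}{|\mathbf{r}|^{3}}$$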
In [2]:
eccentric_initial_rv = np.array([1,0,0,0,0.5,0])
In [3]:
def plot_orbits(ts, xs):
    plt.plot(xs[:,0], xs[:,1])
    plt.gca().set_aspect('equal')
In [51]:
ts = np.linspace(0,2*np.pi,1000)
posns = scipy.integrate.odeint(deriv, eccentric_initial_rv, ts, atol=1e-10)
In [52]:
plot_orbits(ts,posns[:,:3])
So far so good - a nice ellipse, just what we should see. (It's a bit more than one turn, since the orbital period is less than 2π, but no matter.) Now let's try to work out the partial derivatives. First we need the Jacobian:
In [28]:
def jacobian(rv, t):
    J = np.zeros((6,6), dtype=rv.dtype)
    r = rv[:3]
    rl = np.dot(r,r)
    for i in range(3):
        # the position rows just pick out the velocities
        J[i,3+i] = 1
        # derivative of the acceleration with respect to position
        for j in range(3):
            J[3+i,j] = r[i]*r[j]*3*rl**(-5./2)
            if i==j:
                J[3+i,j] += -rl**(-3./2)
    return J
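The only nontrivial block here is the derivative of the acceleration with respect to position; differentiating the expression for the acceleration gives
$$\frac{\partial a_i}{\partial r_j} = 3\,r_i r_j\,(\mathbf{r}\cdot\mathbf{r})^{-5/2} - \delta_{ij}\,(\mathbf{r}\cdot\mathbf{r})^{-3/2}$$
which is exactly what the double loop fills in.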
Just to check, since these days my calculus is as sloppy as my arithmetic:
In [85]:
rv = np.array([1,2,3,4,5,6], dtype=float)
delta = 1e-6
n_jac = np.zeros((6,6))
for j in range(6):
    drv = rv.copy()
    drv[j] += delta
    n_jac[:,j] = (deriv(drv,0)-deriv(rv,0))/delta
print n_jac[3:,:3]
print
print jacobian(rv, 0)[3:,:3]
print
print (jacobian(rv, 0) - n_jac)[3:,:3]
Okay. Now we're going to need to convert a position and list of partial derivatives into a single vector, since that's what odeint wants for a state vector.
In [23]:
def pack_vec(rv, vecs):
    vecs = np.asarray(vecs)
    packed = np.zeros(6*(vecs.shape[0]+1))
    packed[:6] = rv
    packed[6:] = vecs.flatten()
    return packed

def unpack_vec(packed):
    rv = packed[:6]
    vecs = packed[6:].reshape(-1,6)
    return rv, vecs
In [24]:
p = pack_vec([0,0,0,0,0,0],[[1,0,0,0,0,0],[1,0,0,0,0,0]])
print p
print unpack_vec(p)
Here's where the magic happens: the position and velocity vector evolves just like before, and the partial derivatives evolve according to the Jacobian:
In [71]:
def extended_deriv(rvv, t):
    rv, vecs = unpack_vec(rvv)
    drv = deriv(rv, t)
    jac = jacobian(rv, t)
    dvecs = np.dot(jac, vecs.T).T
    return pack_vec(drv, dvecs)
For simplicity, we'll just test the derivatives at one point, specifically at t=2π.
In [72]:
t0 = 0
t1 = 2*np.pi
For a list of partial derivatives, the simplest is the matrix of all six partial derivative vectors, that is, the identity matrix. (Of course, testing with a rectangular matrix is a good way to catch stupid bugs. Just speaking hypothetically, you understand.)
In [89]:
vecs = np.eye(6)
posns2, extra = scipy.integrate.odeint(extended_deriv,
                                       pack_vec(eccentric_initial_rv, vecs),
                                       [t0,t1],
                                       mxstep=1000,
                                       full_output=1)
final_rv, final_deriv = unpack_vec(posns2[-1])
print final_rv
print final_deriv
print "Number of steps:", extra['nst']
Let's quickly check that a simple integration without any partial derivatives winds up at the same place. In addition, it's worth comparing to see how many additional steps the integrator has to take to get the same estimated accuracy.
In [87]:
posns3, extra = scipy.integrate.odeint(deriv,
                                       eccentric_initial_rv,
                                       [t0,t1],
                                       mxstep=1000,
                                       full_output=1)
final_rv_simple = posns3[-1]
print final_rv_simple
print "Number of steps:", extra['nst']
Turns out adding the partial derivatives forces the integrator to take half again as many steps. Not too bad, speed-wise, really.
Now let's check that these derivatives are actually right: just solve the problem with slightly different initial conditions and see where it winds up. Getting a really good answer out of this is a finicky business, since it normally involves subtracting two nearly equal quantities (the final positions); the amount by which you change the initial conditions has to be tuned to balance error due to non-linearities in the problem against roundoff error. And a method like this with adaptive stepsizes can have some ugly discontinuities as you vary the initial conditions. But a little fiddling and it works:
In [88]:
delta = 1e-11
num_deriv = np.empty_like(vecs)
for i in range(vecs.shape[0]):
    p4 = scipy.integrate.odeint(deriv,
                                eccentric_initial_rv+delta*vecs[i],
                                [t0,t1],
                                mxstep=1000)
    num_deriv[i] = (p4[-1]-final_rv_simple)/delta
print num_deriv
print
print final_deriv
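As an aside (not something the run above does), a central difference makes that delta-tuning a little less delicate: it costs two integrations per direction, but its truncation error falls off quadratically in delta, so delta doesn't have to be pushed as close to the roundoff floor. A sketch, reusing the same deriv, vecs, t0 and t1:
In [ ]:
delta = 1e-7
num_deriv_c = np.empty_like(vecs)
for i in range(vecs.shape[0]):
    p_plus = scipy.integrate.odeint(deriv,
                                    eccentric_initial_rv+delta*vecs[i],
                                    [t0,t1],
                                    mxstep=1000)
    p_minus = scipy.integrate.odeint(deriv,
                                     eccentric_initial_rv-delta*vecs[i],
                                     [t0,t1],
                                     mxstep=1000)
    num_deriv_c[i] = (p_plus[-1]-p_minus[-1])/(2*delta)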
Now what about a partial derivative with respect to the start or end times?
In [96]:
delta = 1e-10
p5 = scipy.integrate.odeint(deriv,
                            eccentric_initial_rv,
                            [t0,t1+delta],
                            mxstep=1000)
diff_t1_num = (p5[-1]-final_rv_simple)/delta
diff_t1 = deriv(p5[-1],t1)
print diff_t1_num
print diff_t1
In [99]:
delta = 1e-10
p6 = scipy.integrate.odeint(deriv,
                            eccentric_initial_rv,
                            [t0+delta,t1],
                            mxstep=1000)
diff_t0_num = (p6[-1]-final_rv_simple)/delta
diff_t0 = -np.dot(final_deriv.T, deriv(p6[0],t0))
print diff_t0_num
print diff_t0
So, in short, you can find the partial derivatives of an ODE solution by solving an enlarged ODE where the partial derivative vectors evolve by multiplication with the Jacobian of the original ODE. And it doesn't cost all that much in terms of smaller steps.