Sunday, May 13, 2012

How would you write an AI rocket pilot? This is what I've been thinking about for the past few months in my spare time. I've been reading some books and papers about optimal control, and I'm still learning, collecting my thoughts, and writing code. So this is just a brief update.

A few years back I did some work in this direction. The rocket is treated as a point mass that can instantaneously accelerate in any direction, with a limit on the acceleration. One control problem is: given a starting state (position and velocity) and a desired final state (position and velocity), fly the rocket between the two states in minimum time. (No gravitational acceleration in this simple scenario.)
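To make that setup concrete, here's a minimal sketch of the point-mass model in Python. The names, step size, and acceleration limit are placeholders of mine, not anything from the original code:

```python
import numpy as np

A_MAX = 1.0   # maximum acceleration magnitude (placeholder units)
DT = 0.01     # integration step

def clamp_accel(a):
    """Scale a commanded acceleration down to the limit if it exceeds it."""
    mag = np.linalg.norm(a)
    return a if mag <= A_MAX else a * (A_MAX / mag)

def step(pos, vel, accel_cmd, dt=DT):
    """Advance position and velocity one step under a clamped acceleration."""
    a = clamp_accel(accel_cmd)
    # Exact update for constant acceleration over the step (a parabolic arc).
    new_pos = pos + vel * dt + 0.5 * a * dt * dt
    new_vel = vel + a * dt
    return new_pos, new_vel
```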

When I wrote that last page I conjectured that the optimal route would consist of accelerating in one direction for a duration and then (if necessary) accelerating in a second direction for an additional duration. (Flying two parabolic arcs, essentially.) I based this on an attempt to extend the one-dimensional optimal control, which is to accelerate in one direction at maximum for a duration, and then potentially reverse direction and accelerate for an additional duration.
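For reference, that one-dimensional control has a standard closed-form feedback law for the double integrator: full acceleration with at most one sign switch. Here's a rough Python sketch of it; the switching-curve formula is the textbook result, and the names are mine:

```python
A_MAX = 1.0  # maximum acceleration magnitude (placeholder units)

def bang_bang_accel_1d(x, v, a_max=A_MAX):
    """Time-optimal feedback for driving (x, v) to (0, 0) in one dimension.

    Classic double-integrator result: accelerate at full magnitude, switching
    sign at most once. The switch happens on the curve x = -v*|v| / (2*a_max);
    above the curve we apply -a_max, below it +a_max.
    """
    s = x + v * abs(v) / (2.0 * a_max)
    if s > 0.0:
        return -a_max
    elif s < 0.0:
        return a_max
    else:
        # On the switching curve: ride it down into the origin.
        return -a_max if v > 0 else (a_max if v < 0 else 0.0)
```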

Since reading some optimal control theory I've worked out enough to realize that this is not the optimal control. The optimal control turns out to be to thrust (at maximum acceleration) toward a point that moves along a straight line. The point moves through a "control space"; you normalize the vector toward it and multiply by the maximum acceleration to get the acceleration vector. You can see how the one-dimensional case extends to this, but in the two- or three-dimensional case the rocket potentially pivots through a range of directions as it accelerates. The trick, then, is to come up with the equation of motion for the control point that takes the rocket from the initial state to the final state.
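As a rough illustration only, that control law might look something like this, with c0 and c1 standing in for the unknown constant vectors that define the control point's straight-line motion. Working out those constants (and the flight time) from the boundary states is the open part of the problem, and isn't solved here:

```python
import numpy as np

A_MAX = 1.0  # maximum acceleration magnitude (placeholder units)

def accel_from_control_line(c0, c1, t, a_max=A_MAX):
    """Acceleration for a 'control point' moving on a straight line.

    The point p(t) = c0 + c1 * t traces a line through control space; the
    thrust direction is simply p(t) normalized, scaled to the acceleration
    limit. c0 and c1 are placeholder names for the constants that would have
    to be solved for to connect the initial and final states.
    """
    p = c0 + c1 * t
    norm = np.linalg.norm(p)
    if norm < 1e-12:
        return np.zeros_like(p)  # degenerate instant; direction undefined
    return (a_max / norm) * p
```

Fed through a simple integration loop like the step sketch earlier, this is where you'd see the rocket pivot through a range of directions as it accelerates.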

That's all I have for now; hopefully more to follow.

3 comments:

owen said...

Solar wind and rogue asteroids

Patrick said...

Rocket science is hard. AI can be hard. Both together are trying.

In the little time I can make to work on my project I've decided to forgo continuous acceleration for the purposes of getting something working. I'm going to use instant velocity changes and a Lambert solver to plot my trajectories. I'm currently working on the issue that the Lambert solutions it presents don't care if they would pass you too close to other bodies, which would perturb your path or, at worst, cause a collision. Determining the point where the ship encounters the sphere of influence isn't trivial, especially since I might have to do it a lot for multiple ships.
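For illustration only, a crude version of that encounter check could just sample the planned arc and test each sample against every body's sphere of influence. The callables and names below (ship_pos_at, body_pos_at) are placeholders, not part of any real solver:

```python
import numpy as np

def first_soi_encounter(ship_pos_at, bodies, t0, t1, n_samples=512):
    """Crude encounter check by sampling the planned trajectory.

    ship_pos_at(t) -> ship position on the planned arc at time t.
    bodies is a list of (body_pos_at, soi_radius) pairs.
    Returns (t, body_index) for the first sample that falls inside a sphere
    of influence, or None. A real implementation would refine the crossing
    time (e.g. by bisection) instead of trusting the sample spacing.
    """
    for t in np.linspace(t0, t1, n_samples):
        p = ship_pos_at(t)
        for i, (body_pos_at, soi_radius) in enumerate(bodies):
            if np.linalg.norm(p - body_pos_at(t)) < soi_radius:
                return t, i
    return None
```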

Next will be adjusting the path to avoid/account for those perturbations. Hopefully later, once that works, I can try to bring back gradual acceleration, maybe through some sort of vector-matching/path-following steering behaviour, but that would probably have a very different delta-v cost than planned.

I'd forgotten about your blog until someone replied to one of my previous posts.

James McNeill said...

Yeah. Someone at work was asking me today if I was still posting. Sadly, I am not, really. My children have grown enough to have busy calendars, I bought a house and the neighbor kids are over all the time, the lawn needs mowing, I have a project in crunch at work, etc. I don't really play games any more, even. It's a different phase of life, I guess.

On the bus I am still thinking about this problem though. I've been putting in time coding on it every now and then, but haven't gotten enough done that I feel like I have useful info to share on the blog.

A Lambert solver can be a useful first approach to solving any sort of interplanetary transfer. There's still an element of optimization there, though: picking the transfer time. How are you doing that? Just trying a set of times and picking the best result?
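For what it's worth, that brute-force sweep might look something like the sketch below, where lambert(r1, r2, tof, mu) is a stand-in for whatever Lambert routine you're actually using, and r1_at/r2_at return a body's position and velocity at a given time (all placeholder names, not a real API):

```python
import numpy as np

MU_SUN = 1.327e20  # gravitational parameter of the Sun, m^3/s^2

def best_transfer(r1_at, r2_at, lambert, t_depart, tof_candidates, mu=MU_SUN):
    """Brute-force sweep over candidate transfer times, keeping the cheapest.

    lambert(r1, r2, tof, mu) is assumed to return the departure and arrival
    velocity vectors of the connecting conic; r1_at(t)/r2_at(t) are assumed
    to return (position, velocity) of the departure and arrival bodies.
    """
    best = None
    for tof in tof_candidates:
        r1, v1_body = r1_at(t_depart)
        r2, v2_body = r2_at(t_depart + tof)
        v_dep, v_arr = lambert(r1, r2, tof, mu)
        # Total impulsive delta-v: leave the first body, then match the second.
        dv = np.linalg.norm(v_dep - v1_body) + np.linalg.norm(v_arr - v2_body)
        if best is None or dv < best[0]:
            best = (dv, tof, v_dep, v_arr)
    return best
```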

One recommendation: John Betts' book "Practical Methods for Optimal Control and Estimation Using Nonlinear Programming." I found a PDF of it somewhere online that convinced me it would be worthwhile to buy.

Recently I also noticed that Robert Stengel's got his course slides online: http://www.princeton.edu/~stengel/MAE546Lectures.html

I've been paring my optimal-control example problems back to simpler and simpler problems in the hopes of coming up with something I can get to work. We shall see.

Best of luck to you with your project!