Monday, September 24, 2007

Even Less

I've spent my free time this week playing Metroid Prime 3, and trying to resolve my computer situation.

Last weekend I bought an eMachines T5230 from Best Buy. It's an Athlon 64 machine with 1 GB of RAM and an nForce chipset, so it has an integrated NVIDIA video card. I thought it ought to be at least as fast as my three-year-old Dell with the hard drive that went kaput. I was wrong: it was not even half as fast. I'm a programmer, not a hardware expert, so I don't know exactly why it was so bad. I bought an $80 hard drive, put it in my old Dell, and returned the eMachines box. Then I spent the rest of the day installing Windows and its various updates.

The days of easy performance progress in computers are over. I've bought a new computer approximately every three years since 1994 or so. Through the '90s, each new machine was about four times faster than the last for the same price or less. Once processor speeds started outstripping memory clock speeds (I believe the 66 MHz 486 was the last computer whose CPU and RAM ran at the same clock speed), the apparent performance increases withered away. It turns out that we use computers not so much for computing as for shuttling large quantities of data around, so faster CPUs don't help a whole lot.

Now we have hit a wall on CPU speed. My three-year-old Dell is clocked at 3 GHz, and you can hardly buy a computer that fast today. Instead, they are all going multi-core. (The eMachines machine was clocked around 2.2 GHz, but had two cores.) Existing applications won't get any faster; we programmers will have to eke the CPUs' full potential out with sweat and tears.

Metroid Prime 3 is fun, albeit pretty much the same game as its two predecessors. They did some good work to make the game use the Wii's control scheme. Motion sensing is inherently weaker than buttons and joysticks for many things, though. First, you get no tactile feedback, so you have to rely on visual feedback to know if you've made the correct input. Second, with a joystick you are free to remove your hand to do other things, such as scratching your nose. I find myself holding a rigid position for long stretches when playing with the Wii, which is far less comfortable.

In hindsight I think Nintendo may have goofed up in a few ways with their latest controller. The “hardcore” games like Metroid and Zelda all end up using all the buttons on the wand controller, even though the buttons weren't designed to be used when holding it like a wand. Only A and B are readily accessible; the rest require contortions. I think their idea of making the wand holdable in two different positions (as a wand, and sideways as a more classic NES controller) was a mistake.

I also don't understand why Nintendo didn't give the camera on the front of the wand a fisheye lens; the field of view is too narrow. I was watching my niece and nephew play Duck Hunt, and they continually let the pointer drift off-screen, at which point it just kind of got stuck. Other games in Wii Play are especially bad this way. For instance, in the air hockey game the paddle gets stuck whenever the wand loses sight of the tracking LEDs, and it can be very hard for people who aren't used to keeping the wand pointed right at the screen.

Monday, September 17, 2007

Not much

My family and I have been sick this weekend so I haven't gotten much done.

I've been working on implementing a piston-type joint in the Chipmunk physics library. This is a joint that allows one object to move freely along a straight track, stops it at the track's endpoints, and leaves it free to rotate. Obviously this is to simulate wheels on suspension; the joint will be combined with a damped spring to complete the suspension.

To do this, I need to solve for an impulse which, when applied to the two bodies at the joint, will eliminate relative velocity in the cross-track direction. Additionally, if the sliding object has reached an end of the track, I must generate an impulse that kills any relative velocity that would carry it past that end. This latter adjustment is essentially the same as a contact non-penetration constraint: we must never produce an impulse that would keep the sliding object from moving back the other way, away from the track's endpoint, so the impulse may only push, never pull.

Here's some initial math going the opposite direction: given an impulse, what is the change in relative velocity at the joint? I've used the Greek letter Delta (Δ) to indicate change, as in “change in velocity”. Here are the variable definitions:

  m_a, m_b : masses of the two bodies
  I_a, I_b : moments of inertia of the two bodies
  r_a, r_b : offsets from each body's center of mass to the joint point
  j        : impulse applied at the joint (+j to body b, −j to body a)
  Δv, Δω   : changes in linear and angular velocity

(In 2D, the cross product of two vectors is a scalar, and the product ω × r of a scalar angular velocity with a vector is r rotated 90° and scaled by ω.)
And here is the development of the equation for the change in relative velocity at the joint, given an impulse j applied there:

  Δv_a = −j / m_a          Δω_a = −(r_a × j) / I_a
  Δv_b = +j / m_b          Δω_b = +(r_b × j) / I_b

  Δv_rel = (Δv_b + Δω_b × r_b) − (Δv_a + Δω_a × r_a)
         = j/m_a + j/m_b + ((r_a × j) / I_a) × r_a + ((r_b × j) / I_b) × r_b
It's built up from the changes in linear and angular velocities at the two bodies' centers of mass.

By inverting this, it's possible to solve for an impulse which will produce a desired change in local relative velocity. I'll also need to rotate the coordinate system so that velocities along and across the track are easily separated. As I said before, the along-the-track velocity changes are really like contact constraints so they'll need to be handled slightly differently from the cross-track velocity constraint.
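
To make that concrete, here's a rough sketch of the solve for a single direction n (either the cross-track axis or the track axis). The types are stand-ins rather than Chipmunk's, none of this is final code, and the endpoint case is clamped so it can only push, ignoring the accumulated-impulse bookkeeping a real solver needs:

#include <algorithm>

struct vec2
{
    float x, y;
    vec2() : x(0), y(0) {}
    vec2(float x_, float y_) : x(x_), y(y_) {}
};

static vec2 operator+(vec2 a, vec2 b) { return vec2(a.x + b.x, a.y + b.y); }
static vec2 operator-(vec2 a, vec2 b) { return vec2(a.x - b.x, a.y - b.y); }
static vec2 operator*(float s, vec2 a) { return vec2(s * a.x, s * a.y); }

static float dot(vec2 a, vec2 b)   { return a.x * b.x + a.y * b.y; }
static float cross(vec2 a, vec2 b) { return a.x * b.y - a.y * b.x; } // 2D cross product (a scalar)
static vec2  perp(vec2 a)          { return vec2(-a.y, a.x); }       // rotate 90 degrees counterclockwise

struct Body
{
    float mass;             // m
    float moment;           // I
    vec2  velocity;         // velocity of the center of mass
    float angular_velocity; // omega
};

// Velocity of the point at offset r from a body's center of mass.
static vec2 point_velocity(const Body & b, vec2 r)
{
    return b.velocity + b.angular_velocity * perp(r);
}

// Effective mass felt by an impulse along direction n at offsets ra, rb:
// 1 / (1/ma + 1/mb + (ra x n)^2/Ia + (rb x n)^2/Ib)
static float effective_mass(const Body & a, const Body & b, vec2 ra, vec2 rb, vec2 n)
{
    float ran = cross(ra, n);
    float rbn = cross(rb, n);
    float k = 1.0f / a.mass + 1.0f / b.mass
            + ran * ran / a.moment + rbn * rbn / b.moment;
    return 1.0f / k;
}

// Magnitude of the impulse along n (applied as +j*n to body b and -j*n to
// body a) that kills the relative velocity along n. For the endpoint
// constraint, clamp so the impulse only ever pushes, like a contact.
static float solve_impulse(const Body & a, const Body & b,
                           vec2 ra, vec2 rb, vec2 n, bool contact_like)
{
    float v_rel = dot(point_velocity(b, rb) - point_velocity(a, ra), n);
    float j = -v_rel * effective_mass(a, b, ra, rb, n);
    return contact_like ? std::max(j, 0.0f) : j;
}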

Monday, September 10, 2007

Random Terrain, Joints, Version Control

Terrain Development

I spent some time this week fooling around with my random terrain generator, trying to come up with something more varied for driving. I will have to switch to hand-authored terrain eventually, but I want to put that off as long as possible so I can focus on getting the vehicle to feel right.

The terrain generator produces a one-dimensional array of elevation values (since this is a 2D game). It puts in seed values at a large interval (every 256th element, currently), then subdivides by powers of two. The general idea is to subdivide and smooth (using a Loop-type subdivision scheme), then add random offsets which are scaled to the current step size. The subdivision smoothing is very simple: new points that fall halfway between the old points just get the average of their two adjacent old points. New points that fall on the old points get 1/8 of each of the two neighboring old points, plus 3/4 of the old point directly under them.

To make the terrain more interesting, I generate a second height map beforehand which I call the roughness map. When generating the actual height map, I scale the random offsets by the roughness value for that spot. This breaks the map up into smooth, rounded areas and rough, jagged areas.

Here's the code:

#include <string.h>   // memcpy

float frand();  // uniform random value in [0, 1]; defined below
float grand();  // Gaussian random value, mean 0, std dev 1; defined below

const int height_field_size = 1024;
static float height[height_field_size];
static float height_old[height_field_size];

void generate_height_field()
{
    int mask = height_field_size - 1;

    // Generate a height field that represents surface roughness.

    const int initial_roughness_step_size = 32;

    float roughness[height_field_size];
    float roughness_old[height_field_size];

    for (int i = 0; i < height_field_size; i += initial_roughness_step_size)
    {
        roughness[i] = frand();
    }

    for (int step_size = initial_roughness_step_size; step_size > 1; step_size /= 2)
    {
        memcpy(roughness_old, roughness, sizeof(roughness_old));

        for (int i = 0; i < height_field_size; i += step_size)
        {
            // Neighboring old points, wrapping around the ends of the array.
            float h0 = roughness_old[(i + (height_field_size - step_size)) & mask];
            float h1 = roughness_old[i];
            float h2 = roughness_old[(i + step_size) & mask];

            // Smooth the old point (1/8, 3/4, 1/8) and average to get the midpoint.
            float h1_new = h0 * 0.125f + h1 * 0.75f + h2 * 0.125f;
            float h3_new = h1 * 0.5f + h2 * 0.5f;

            roughness[i] = h1_new;
            roughness[i + step_size / 2] = h3_new;
        }
    }

    // Generate the actual height field, scaling the randomness by the
    // roughness at each point.

    const int initial_step_size = 256;

    for (int i = 0; i < height_field_size; i += initial_step_size)
    {
        height[i] = grand() * 20.0f;
    }

    for (int step_size = initial_step_size; step_size > 1; step_size /= 2)
    {
        memcpy(height_old, height, sizeof(height_old));

        float range = float(step_size) * 0.1f;

        for (int i = 0; i < height_field_size; i += step_size)
        {
            float h0 = height_old[(i + (height_field_size - step_size)) & mask];
            float h1 = height_old[i];
            float h2 = height_old[(i + step_size) & mask];

            float h1_new = h0 * 0.125f + h1 * 0.75f + h2 * 0.125f;
            float h3_new = h1 * 0.5f + h2 * 0.5f;

            // Random offsets, scaled by both the step size and the roughness map.
            h1_new += range * grand() * roughness[i];
            h3_new += range * grand() * roughness[i + step_size / 2];

            height[i] = h1_new;
            height[i + step_size / 2] = h3_new;
        }
    }
}


I've got an extremely simple random number generator right now since I don't care much about quality. frand() generates uniform values from 0 to 1, and grand() generates Gaussian (i.e. normal-distributed) random values with a mean of 0 and a standard deviation of 1:

#include <stdlib.h>   // rand, RAND_MAX
#include <math.h>     // sqrtf, logf

float frand()
{
    return float(rand()) / float(RAND_MAX);
}

float grand()
{
    float x1, x2, w;

    // Marsaglia's polar method: pick points in the unit disc, also
    // rejecting w == 0 so we never take the log of zero.
    do
    {
        x1 = 2.0f * frand() - 1.0f;
        x2 = 2.0f * frand() - 1.0f;
        w = x1 * x1 + x2 * x2;
    }
    while (w >= 1.0f || w == 0.0f);

    w = sqrtf((-2.0f * logf(w)) / w);

    float y1 = x1 * w;
    // float y2 = x2 * w;  // a second, independent Gaussian value

    return y1;
}


The grand() algorithm actually generates two Gaussian random numbers per call; I'm being lazy and throwing out the second one.
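
If I ever decide to stop being lazy, keeping the spare value around is easy. Here's a sketch of what that might look like (hypothetical; not in my code, and the statics make it non-reentrant):

float grand_cached()
{
    static bool  have_spare = false;
    static float spare;

    if (have_spare)
    {
        have_spare = false;
        return spare;
    }

    float x1, x2, w;

    do
    {
        x1 = 2.0f * frand() - 1.0f;
        x2 = 2.0f * frand() - 1.0f;
        w = x1 * x1 + x2 * x2;
    }
    while (w >= 1.0f || w == 0.0f);

    w = sqrtf((-2.0f * logf(w)) / w);

    spare = x2 * w;     // save the second Gaussian value for the next call
    have_spare = true;

    return x1 * w;
}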

I've done a lot of thinking about what kind of hand-authored terrain would work best. Currently I'm leaning toward a subdivision curve (like subdivision surfaces but in 2D). This is because it's easy to author rounded things, and you're not constrained to work at a particular scale. I've thought about a tiled landscape but I don't think that will give me the control I need to get surfaces that are fun to drive. Height fields can't handle overhangs or vertical cliffs; not horrible limitations, but I think I might like to have some cliffs.
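
For what it's worth, the same 1/8, 3/4, 1/8 smoothing rule from the height field generator extends directly to a subdivision curve over 2D control points, and since the result is a curve rather than a function of x, it can represent overhangs and cliffs. Here's a rough sketch of one refinement pass over a closed control polygon (stand-in types; nothing from the game yet):

#include <vector>

struct point
{
    float x, y;
    point(float x_, float y_) : x(x_), y(y_) {}
};

// One pass of subdivision on a closed control polygon: each old point is
// smoothed (1/8, 3/4, 1/8), and a new midpoint is inserted between each
// pair of neighbors. Apply repeatedly until the curve is smooth enough.
std::vector<point> subdivide_closed(const std::vector<point> & p)
{
    int n = int(p.size());
    std::vector<point> out;
    out.reserve(2 * n);

    for (int i = 0; i < n; ++i)
    {
        const point & p0 = p[(i + n - 1) % n];
        const point & p1 = p[i];
        const point & p2 = p[(i + 1) % n];

        out.push_back(point(p0.x * 0.125f + p1.x * 0.75f + p2.x * 0.125f,
                            p0.y * 0.125f + p1.y * 0.75f + p2.y * 0.125f));

        out.push_back(point(p1.x * 0.5f + p2.x * 0.5f,
                            p1.y * 0.5f + p2.y * 0.5f));
    }

    return out;
}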

Vehicle Development

Regarding the vehicle, I have been experimenting with different combinations of friction, elasticity, torque, and top speed. I am also writing a new joint type for the suspension since none of the joints provided with Chipmunk do exactly what I need. The new joint will constrain the wheel to move along a line segment relative to the vehicle chassis.

Version Control

In other news, the computer I've been using as my version control server died a horrible death. It's amazingly inconvenient. I've been using Perforce since, in general, it's awesome (albeit expensive for more than two users), but I've been having second thoughts about my situation. Since I'm working from a laptop, I spend a fair amount of time disconnected from the server, and with Perforce that means you can't check files out to work on them, or diff them to see what you've changed. I may switch to Subversion (since I have experience with it), or to something aimed at distributed development. I dislike having version-control files strewn about my codebase, though. Subversion (like its older brother CVS) keeps an entire duplicate copy of your source code for diffing purposes. If it kept the copies off in some separate version-control directory I wouldn't mind, but it keeps them in subdirectories of the real directories, so I have to wade around them in Explorer.

Monday, September 3, 2007

Knytt



I found out about Nifflas' games this week. They are fun, polished, atmospheric platformers, well worth checking out. I started with Knytt, which focuses on exploration and atmosphere and is lighter on combat. Knytt Stories is a collection of games that add more combat to the mix (although many of them have an easy difficulty which removes some of the enemies). Within a Deep Forest is the oldest game, with a more difficult control scheme which involves moving a bouncing ball.

In all of the games Nifflas has assembled great atmospheric music from a variety of composers, and clean, charming tile design.

Initialization order

This week I've integrated the Chipmunk 2D physics library into my own game and have done the initial setup for my vehicle, based on the moon buggy tutorial that is available with the Chipmunk library.

Several problems reared their heads in the course of this. One is that, because Chipmunk uses basic Euler integration, springs can be made to misbehave very easily. If you dial the stiffness of the shock absorbers up, there is a point beyond which they overshoot in the integration step, which leads to wild oscillations and the complete destruction of the simulation.
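
Here's a toy demonstration of the problem, outside Chipmunk entirely: explicit Euler on a damped spring, once with a soft spring and once with a stiff one. With these (made-up) constants, the soft spring settles down while the stiff one gains energy on every oscillation and blows up:

#include <stdio.h>
#include <math.h>

// Explicit Euler on a damped spring: x' = v, v' = (-k x - c v) / m.
// Once the step overshoots, each bounce adds energy instead of removing it.
int main()
{
    const float m  = 1.0f;        // mass
    const float c  = 5.0f;        // damping coefficient
    const float dt = 1.0f / 60.0f;

    const float stiffness[2] = { 100.0f, 1000.0f };

    for (int s = 0; s < 2; ++s)
    {
        float k = stiffness[s];
        float x = 1.0f;           // start stretched one unit
        float v = 0.0f;

        for (int i = 0; i < 600; ++i) // ten seconds of simulation
        {
            float a = (-k * x - c * v) / m;
            x += v * dt;
            v += a * dt;
        }

        printf("k = %6.0f: |x| after ten seconds = %g\n", k, fabsf(x));
    }

    return 0;
}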

The other big problem had to do with initialization order. This is a major problem in professional game engines, too. Here's an example from my program:

const vec2 chassis_verts[] =
{
    vec2(-5, 1),
    vec2(-2, 2),
    vec2(2, 2),
    vec2(5, 1),
    vec2(6, 0),
    vec2(6, -1),
    vec2(5, -2),
    vec2(-5, -2),
    vec2(-6, -1),
    vec2(-6, 0),
};
const int num_chassis_verts = sizeof(chassis_verts) / sizeof(chassis_verts[0]);

const float chassis_mass = 5.0f;
const float chassis_moment = cpMomentForPoly(chassis_mass, num_chassis_verts, chassis_verts, vec2(0, 0));

class Rover
{
public:

    Rover()
    {
        chassis = new cpBody(chassis_mass, chassis_moment);
    }

    cpBody * chassis;
};

Rover g_rover;


The code above does not, in general, work.

In C, you can lay out data structures using initializers; the data is resolved into a memory image at compile time, so nothing needs to be done at runtime.

C++ has classes, which have constructors. If you make a statically-defined object, C++ inserts its constructor into a list of constructors to be run at startup. If one startup constructor depends on the results of another startup constructor, you may or may not get what you want, depending on the order they are run. In general you can't control the order (though some compilers have #pragmas for helping establish rough ordering), and the compiler makes no attempt to analyze dependencies to put them in the right order.

The original Chipmunk sample application dynamically allocated all of its objects. I generally try to avoid explicit dynamic allocation when I can since then you have to explicitly deallocate objects. The one thing explicit allocation does give you, though, is explicit control over the order in which constructors are run.
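
There's a standard C++ idiom that splits the difference: hide the value behind a function with a local static, so it's constructed the first time anyone asks for it, regardless of which translation unit asks first. A sketch of how that might look here (not what my code currently does):

// chassis_moment() replaces the global constant; the local static is
// initialized on the first call, so any constructor that calls it gets a
// fully-computed value no matter what order the linker picked.
float chassis_moment()
{
    static const float moment =
        cpMomentForPoly(chassis_mass, num_chassis_verts, chassis_verts, vec2(0, 0));
    return moment;
}

class Rover
{
public:

    Rover()
    {
        chassis = new cpBody(chassis_mass, chassis_moment());
    }

    cpBody * chassis;
};

Strictly speaking, chassis_verts would need the same treatment, since vec2's constructors also run at startup.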

All this has made me think about whether other languages would be better. Lots of other languages would not have this problem, but they won't do any compile-time cooking of data, either.

At the moment I am debugging a problem with the wheels when the torque is too high.