The core task of many, many games is to move forward through a hazard-filled “tunnel”. (Examples: Half-Life, Sly Cooper 1.) This is economical from a storytelling perspective: you know fairly accurately what players will have seen at any given point, and you can closely control how they encounter each situation.
From a content-production standpoint, though, the tunnel design is fairly wasteful. The reward for making progress is to see what comes next, so you can't afford to have the environment look too repetitive. In addition, players move through each part of the world once and then go on, never to return. If you've played the vehicle levels in Half-Life 2, or the rail-grind sequences in the original Ratchet and Clank, you know that in addition to seeing each part of the level once, players tend to see it only briefly. This is expensive.
As computers get more powerful they are capable of displaying higher-resolution textures, higher-detail models, and so forth. Environments take increasingly more time and money to construct. As a result, game designers are trying to induce the player to spend more time in the same place. I think of this as “arena” game design.
Later Ratchet and Clank games dropped the rail-grind sequences because they weren't cost-effective. (Rail-grind sequences are back in the latest installment. Insomniac must have found ways to bring their costs in line with the rest of the game. You can probably guess some ways from looking at it.) Later Sly Cooper games adopted a hub model, with a large central world that had lots of missions set in the same environment. Games like Grand Theft Auto take this to its logical conclusion. There are no distinct levels, just a single, large world.
Designers also use collectibles to encourage players to spend time looking at all the expensive scenery. Urban Chaos, a game by the now-defunct developer Muckyfoot that came out the year before Grand Theft Auto 3, had similar gameplay to the GTA series. It had powerups tucked away in corners of the world which would give your character permanent boosts in speed, strength, and so forth. Not unlike Crackdown, really, to name a more recent example. It would be amusing to make a collectible out of the world's triangles themselves.
Monday, December 31, 2007
Monday, December 24, 2007
Scripting languages
I've been busy this week with the new baby so I haven't done more than a few minutes of programming.
Issue Trackers
I did look at Wikipedia's big list of issue trackers again to see if something wonderful has appeared. Sadly, no. I used FogBugz here at home until my server crashed. It turns out the version I've got doesn't work well with Vista, so I haven't gotten it going again. I've used (not administered) various other web-based bug trackers at my various jobs. Web-based trackers are inherently sluggish and icky-interfaced, and they usually require a Linux server and/or a facility with setting up an ecosystem of database, scripting language, and web server that I don't possess. I keep hoping there will be some open-source tracker that just installs and works and doesn't require a bunch of steps. FogBugz came pretty close, although I think I still had to mess around with my IIS settings. And it is web-based. Sigh.
My life in scripting languages
Over the years I've worked on a variety of games, some of which shipped. Nearly all of them used at least two programming languages.
The first game I worked on was Blade Runner, a modestly successful point-and-click adventure game. The game was coded entirely in C/C++ except for a small amount of assembly language in the renderer. There were three engine programmers and five scripting programmers. The scripting language was C. We used Windows' DLL interface to dynamically load the code for each location in the game. The script DLL's interface to the game was well-defined, consisting of a header file full of functions to do things like play lines of dialog, play animations, and check or set game state flags.
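That shape of architecture can be sketched in Python, standing in for the C DLLs: the engine exposes a small API object, and each location's script lives in a separately loaded module with a well-defined entry point. This is an analogy rather than the original code, and every name in it is hypothetical.

```python
import importlib.util
import os
import tempfile

class EngineAPI:
    """Stands in for the header full of engine functions scripts could call."""
    def __init__(self):
        self.flags = {}
        self.log = []
    def play_dialog(self, line):
        self.log.append("dialog: " + line)
    def set_flag(self, name, value=True):
        self.flags[name] = value

# Each location's script is its own module, loaded on demand: the moral
# equivalent of one DLL per location.
LOCATION_SCRIPT = '''
def on_enter(engine):
    engine.play_dialog("You're in a desert, walking along in the sand.")
    engine.set_flag("visited_location")
'''

def load_location_script(name, source):
    # Write the script to disk and import it as a fresh module,
    # roughly analogous to LoadLibrary + GetProcAddress.
    path = os.path.join(tempfile.mkdtemp(), name + ".py")
    with open(path, "w") as f:
        f.write(source)
    spec = importlib.util.spec_from_file_location(name, path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return module

engine = EngineAPI()
script = load_location_script("location_01", LOCATION_SCRIPT)
script.on_enter(engine)
print(engine.flags)  # {'visited_location': True}
```

The key property is the same as with the DLLs: the script only sees the narrow engine interface, so script compiles stay fast and the engine stays in control.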
Later, I worked on Loose Cannon, an ill-fated project that was begun by Digital Anvil for Microsoft, sold to Ubisoft and given to Sinister Games (where I was) to finish. After two or three years there it was canceled. I learned a lot from that project, and perhaps the most important lesson was:
Starting over from scratch is the wrong thing to do.
The code may leak memory like a sieve (Loose Cannon), contain its own hacky implementation of vtables, or use Hungarian notation that drains a little bit of your soul every day you have to look at it. You may be filled with loathing for the code; it doesn't matter. All code is loathsome, in its own way. Learn to do mechanical modification of code; you'd be surprised what you can do step by step while preserving a working game.
But I digress. One of the restarts involved trying to write the game in Python, with the idea that we would reimplement functionality in optimized C++ as necessary. That effort didn't get terribly far beyond the point of driving a car over the landscape at two frames a second. I have heard tell of shipped games (the Pirates! remake, for one) that use Python as the top-level language, though, so it's not impossible.
After that I helped finish Psi-Ops, a sleeper hit third-person action game featuring your typical brawny, buzz-cut Caucasian super-soldier, albeit augmented in this case with psychic superpowers. (The big debate was whether to name him Jack or Nick.)
Psi-Ops used C/C++ as its primary language, as does practically every game, with Python for scripting. The C++/Python interface was modeled on Boost.Python. We could do gnarly things like deriving a Python class from a C++ class, then deriving a C++ class from that Python class. There were about a dozen programmers, and I think all of them were bilingual where the programming was concerned. There were a half dozen designers, but if my memory serves, they did not do much (if any) Python scripting.
Finally, I worked on Sly 3, which used (a variant of) Scheme as its scripting language. The use of Scheme dates back to Sly 1 or 2 so I can't say for certain why it was chosen, but my hunch is that it was a combination of:
- a programmer with an MIT degree and
- very little time
Why Have a Scripting Language?
Why would you use more than one language to write a game?
There are certainly some big negatives to overcome for it to be a net gain. You wind up writing and maintaining an interface layer between the two languages. The two languages will compete for control of memory, and you will have to work out how they should share it. You also have to decide what each language is for: which one should implement any given piece of functionality. You have to train programmers in both languages and in the ways the game's concepts are represented in each. If one of the languages is home-grown, it probably has terrible or nonexistent editing, compiling, and debugging tools.
Despite all this, game programmers keep including scripting languages in their projects. Here are the reasons I can think of:
To a large degree, the productivity of your day can be measured by the number of edit/build/load/test cycles you are able to perform. If the build/load phase takes five minutes, say, then you've got a maximum of 96 cycles in an eight-hour workday. The link phase, in particular, can get rather long (i.e. minutes) for some of the C++ compilers that game developers have to work with. Programmers look for ways to bypass as much of the link and load phases as possible, and scripting languages are seen as a way to do this.
Another reason often cited is to devolve some of the programming duties onto less-expensive programmers (a.k.a. designers). Scripting languages are expected to be simpler to use (by having no pointers, say, or no explicit type annotation), and thus more suitable for use by inexperienced programmers.
Bolting on a scripting language may be a way to use a more productive programming language for the parts of the game where performance is less critical. Languages that support higher levels of abstraction allow you to do more with less typing. Languages with more advanced type systems can verify more properties of your program before it is run, reducing the number of edit/test cycles needed.
Finally, we can't discount that programmers love to complicate things. If we don't have enough problems to solve we will create them! Let's face it: C++ is not a sexy language. It just happens to be the current language of choice for console development. There are lots of other languages that are more popular or exciting and it's fun to use them.
It is very important, when you are responsible for a scripting language in a game, not to lose sight of the goals. For instance, a scripting language may reduce the build time, but in my experience much of that savings gets given back as we strive to make the game more efficient. (There's a conflict between malleability and efficiency.) Scripting languages may be simpler; but if they lack good abstraction capabilities (and people who know how to use them) you can end up with copy/paste code monstrosities. In addition, dynamic languages defer detection of the most common types of errors (typos and signature mismatches) to runtime, which means more edit/test cycles.
Over the years I haven't been able to help thinking that the approach we used on Blade Runner was perhaps one of the best. By using a mainstream language, we ensured that our scripters had an integrated development environment. The compiler gave good error messages, and if they had a problem we could easily debug it. By having a clear, simple interface to the game we ensured that compile times were fast. The DLL mechanism ensured that link times were fast, too. Having a single language meant that there weren't any interface issues regarding memory usage, or object representation, to worry about.
If you do end up using a scripting language, please don't roll your own! It may sound like fun, but you have only to look at Maya's MEL language to see the horrors this can inflict on people. Autodesk (the current owners of Maya) have grafted Python onto Maya so that people won't have to deal with their misbegotten scripting language any more than necessary.
Monday, December 17, 2007
Failed word graph gameplay
I've been thinking about ways to make a game based on the graph of words that differ by one letter. There's the puzzle type that originally got me thinking about this, with two words separated by blanks to fill in. That's not much of a game, though; more of a puzzle.
I had the idea of moving around through the graph, discovering words along the way. My first idea was of a 4X space exploration type of game, where each word would represent a star system, and the adjacencies would represent wormholes (wordholes?) between them.
There are a lot of unknowns about this idea. The main unknown for me is how the graph feels. How do you map it to a plane? Is the connectivity tractable for human brains? There are 2415 connected four-letter words. This seems like rather a lot of stars for a typical space game, and there's not necessarily an easy way to restrict the player to a subset of words, since they've already got all the words in their head.
My second idea was to score the player on the longest chain of adjacent words they can string together without revisiting a word they've already used. This puts an element of Hamiltonian pathfinding into the mix.
To test this gameplay I took my DOS 80x25 text console simulator and threw together a prototype of moving through the graph. Each word you type goes in the center of the screen, and the adjacent words are placed around it. I put question marks for words you haven't typed yet.
“Ware” has the most neighbors (25) so I used it when deciding how to lay out the neighbors.
Words form into groups based on which letter is varying. Four-letter words belong to up to four groups. Since I was dealing with a text grid, I tried splitting the words into the four groups and putting them in clusters next to the current word.
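Computing those groups is straightforward: a neighbor differs in exactly one position, so you can bucket each neighbor by which letter changed. A minimal sketch (the mini-dictionary here is hypothetical):

```python
def neighbors_by_position(word, neighbors):
    # Bucket each neighbor by the index of the single letter that differs.
    groups = {i: [] for i in range(len(word))}
    for other in neighbors:
        diff = [i for i in range(len(word)) if word[i] != other[i]]
        if len(diff) == 1:  # true neighbors differ in exactly one position
            groups[diff[0]].append(other)
    return groups

# Hypothetical mini-dictionary of neighbors of "ware".
groups = neighbors_by_position(
    "ware", ["bare", "care", "wire", "wore", "wane", "ward", "wart"])
print(groups)
# {0: ['bare', 'care'], 1: ['wire', 'wore'], 2: ['wane'], 3: ['ward', 'wart']}
```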
It ought to be possible to represent further graph connections to some extent. I've thought of listing words that are two steps away horizontally, extending out from each of the cyan one-step words in the picture above. I don't think there's space in my 80x25 layout to list all words in every case, though. There is also the issue that two one-step words may both link to the same two-step word. In that case, would you list the word in both places? In a graphical representation you could try to put the one-step words next to each other so they could both point to a single node for the two-step word.
Grouping words as I've done in the picture may give away too much of the graph structure. In particular, listing the words alphabetically makes it too easy to come up with the missing words.
More problematic, though, is the longest-path gameplay. It turns out to be pretty easy to keep going, and going, and going, so you're likely to succumb to boredom before you complete a scoring run. Also, with me showing only one step ahead in the graph, you don't really see dead-ends coming so there isn't much chance for strategy as you choose words.
Any ideas for gameplay involving the word adjacency graph?
Shipped!
Yesterday we finally delivered a product that's been in the works for nine months. I did conceptual work and production assistance; my wife did the rest. We named her Esther.
Monday, December 10, 2007
Boobs, Booty, and Booze: Adventures In Edit Distance
This week I got to thinking about word puzzles like this:
MOAT
____
____
____
____
SPRY
Fill in the four intermediate words. Each word is four letters long and differs by exactly one letter from the word above it and the word below it.
This is an example of a type of minimum edit distance. “Spry” is five steps away from “moat” when each step consists of changing a single letter.
How can puzzles like this be created? How can a puzzle's difficulty be evaluated? What does the space of possible puzzles look like?
To answer these questions, I went to wordlist.sourceforge.net and picked up the 12dicts package. This is a collection of word lists which have been compiled from twelve dictionaries. I chose the 2of12inf list, which contains any word listed in at least two of the twelve dictionaries, plus all of that word's inflections.
After filtering the list to words of a specific length, the next step is to build an adjacency graph. I tried brute force first: a doubly-nested loop over all the words, testing each pair to see if they were adjacent. This takes a very long time. As an example, with the 2,546 four-letter words there are 3,239,785 pairs of words to consider.
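For reference, the brute force looks like this (a sketch over a toy word list of my own):

```python
def adjacent(a, b):
    # Two same-length words are adjacent if exactly one letter differs.
    return sum(1 for x, y in zip(a, b) if x != y) == 1

words = ["moat", "coat", "cost", "cast", "case"]  # toy list
graph = {w: set() for w in words}
for i, w1 in enumerate(words):
    for w2 in words[i + 1:]:  # visit each unordered pair once
        if adjacent(w1, w2):
            graph[w1].add(w2)
            graph[w2].add(w1)

print(sorted(graph["coat"]))  # ['cost', 'moat']
```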
Because the graph is pretty sparse there are much faster ways to build it. One way is to consider each letter position in turn, sorting the words into groups in which all the other letters are the same. All the words in a given group are neighbors of each other. Here's an implementation in Python:
adjacent_words = {}
for i in range(word_len):
    grouped_words = {}
    for word in words:
        group = word[:i] + word[i+1:]
        grouped_words.setdefault(group, []).append(word)
    for group_words in grouped_words.itervalues():
        for word1 in group_words:
            for word2 in group_words:
                if word1 != word2:
                    adjacent_words.setdefault(word1, set()).add(word2)
Once we have the graph structure in hand we can begin analyzing it in various ways. The simplest is to identify words that have no adjacencies. For instance, “sexy” and “odor” stand alone. These words can't be used in puzzles.
These “loner” words are a special case of identifying the connected components of the graph. These are the groups of words that are interconnected. A puzzle needs to have both of its endpoints in the same connected component or it won't be solvable. The basic algorithm for identifying connected components is to flood-fill from every word that hasn't already been visited, keeping track of all the words collected on each fill.
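That flood fill is only a few lines of Python (a sketch, iterative with an explicit stack; the toy graph is mine and assumes an adjacency dict like the one built above):

```python
def connected_components(adjacent_words):
    # Flood-fill from each word we haven't visited yet,
    # collecting everything reachable.
    components = []
    visited = set()
    for start in adjacent_words:
        if start in visited:
            continue
        component = {start}
        stack = [start]
        while stack:
            word = stack.pop()
            for neighbor in adjacent_words.get(word, ()):
                if neighbor not in component:
                    component.add(neighbor)
                    stack.append(neighbor)
        visited |= component
        components.append(component)
    return components

# Toy graph: a three-word chain plus a loner.
graph = {"cat": {"cot"}, "cot": {"cat", "cog"}, "cog": {"cot"}, "odor": set()}
comps = connected_components(graph)
print(sorted(len(c) for c in comps))  # [1, 3]
```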
It turns out that, for lengths of three to six letters, most words belong to a single large connected component. As words get longer, they become less and less interconnected and the number of components increases rapidly.
Length | Words | Adjacencies | Loners | Groups | Biggest | Bridges |
---|---|---|---|---|---|---|
3 | 642 | 3840 | 11 | 1 | 631 | 9 |
4 | 2546 | 11476 | 79 | 19 | 2415 | 125 |
5 | 5122 | 12435 | 613 | 179 | 3935 | 547 |
6 | 8303 | 11218 | 2239 | 670 | 3943 | 752 |
7 | 11571 | 10039 | 4689 | 1173 | 2235 | 9 |
8 | 12687 | 5012 | 7052 | 1695 | 433 | 0 |
9 | 11768 | 2426 | 8135 | 1467 | 37 | 3 |
10 | 9896 | 1525 | 7388 | 1103 | 21 | 2 |
11 | 7362 | 905 | 5820 | 699 | 14 | 1 |
12 | 5067 | 500 | 4155 | 433 | 6 | 0 |
In the table above, Adjacencies are the number of edges in the graph. Loners counts connected components with only a single word. Groups counts components with two or more words. Biggest is the size of the largest group.
The final column, Bridges, counts edges which, if removed, break a component into two separate pieces. These represent the word transformations which you might expect to use frequently, since they are the only way to get from words on one side to words on the other side.
As it happens, though, words tend to be very well connected, and so there are never more than 20 or so words on one side of a bridge. Most of the time there is only one word on the other side of the bridge, i.e. the word is a dead-end with only one neighbor. An example of a dead-end is the word SPRY in the puzzle above. There is only one possible word to go above it in the puzzle (at least in my dictionary).
The diagram below shows several bridges in the five-letter word graph.
The edge between boozy and booze represents the most common sort of bridge, dividing one word from the rest. In comparison, cutting the edge between booby and bobby isolates nine words. Other bridges are the edge from hobby to hubby, and from tubby to tabby.
The algorithm for finding bridges took a while for me to understand. It's hinted at in the exercises of Introduction to Algorithms (Cormen, Leiserson, Rivest) and is built on a depth-first traversal of the graph. I found pseudocode for it online, but that pseudocode happens to be incorrect.
The gist of it is that, if you do a depth-first traversal, any edges you find that point at already-visited nodes are back edges: they close a loop, meaning that none of the edges on that loop can be bridges. Related to this, all bridge edges must be edges of the tree produced by the depth-first traversal. During the recursive descent of the depth-first traversal you can compute the highest (closest to root) position that each subtree points to. If a subtree does not point to a node outside of itself, then the edge to that subtree is a bridge.
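That description can be turned into code fairly directly. In the sketch below, `discovery[v]` is the DFS visit time of `v` and `low[v]` is the earliest discovery time reachable from `v`'s subtree via at most one back edge; a tree edge is a bridge exactly when the child's subtree can't reach back above the parent. (This version is recursive for clarity; on a graph with thousands of words you'd want an iterative version or a raised recursion limit.)

```python
def find_bridges(adjacent_words):
    """Find bridge edges via a depth-first traversal."""
    discovery = {}   # DFS visit time of each node
    low = {}         # earliest discovery time reachable from each subtree
    bridges = []
    time = [0]

    def dfs(node, parent):
        discovery[node] = low[node] = time[0]
        time[0] += 1
        for neighbor in adjacent_words.get(node, ()):
            if neighbor == parent:
                continue
            if neighbor in discovery:
                # Back edge: closes a loop, so no edge on that loop is a bridge.
                low[node] = min(low[node], discovery[neighbor])
            else:
                dfs(neighbor, node)
                low[node] = min(low[node], low[neighbor])
                if low[neighbor] > discovery[node]:
                    # The subtree under neighbor never reaches above node:
                    # cutting this edge disconnects it.
                    bridges.append((node, neighbor))

    for node in adjacent_words:
        if node not in discovery:
            dfs(node, None)
    return bridges
```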
Another metric for good words to know is the words with the most connections. For example, here are the top few words of length four, along with how many neighboring words each has:
ware: 25
pats: 24
wine: 24
bale: 23
bare: 23
care: 23
core: 23
mare: 23
mate: 23
paps: 23
tars: 23
bars: 22
caps: 22
cars: 22
done: 22
line: 22
lore: 22
male: 22
mats: 22
mine: 22
pale: 22
pare: 22
pits: 22
tons: 22
tots: 22
I'm out of time now, and haven't gotten to the questions of generating puzzles or evaluating their difficulty. Difficulty might be a function of how many choices you have at each step of the puzzle, combined with the relative frequency of each of the words in the English language (and hence how likely you are to know them). Nevertheless, I feel I understand the word space a lot better, and have managed to generate some interesting puzzles along the way. I find that five steps is about the maximum that I can reasonably do, though there are shortest paths of 17 or more steps.
Monday, December 3, 2007
Powered trajectory plot
Soonest-descent trajectory
I've spent a bit of time attempting to simplify the math for the soonest-descent trajectory angle. The basic equation relates the rocket angle (which is presumed to be held constant during the horizontal braking phase) to the difference between the height when horizontal braking finishes and the height at which vertical braking must start.
The equation ends up having the sine of the rocket angle in its denominators, which means it gets squirrelly near zero. However, as you can see in the last plot from two weeks ago, the curve does not look complicated at all, and it looks well-behaved at zero.
It turns out that the equation can be simplified down to this form:
If that was all we knew about the equation, we'd be stuck. However, it turns out that the constants b and c will always have opposite signs. Likewise d and e will have opposite signs. This means that their respective terms cancel each other out as the angle approaches zero. I figured this out by writing out the series expansions for the sines and cosines.
On a separate note, I put a very simple solver for the equation into my lander game and draw an arrow indicating the direction to fire the rocket for soonest descent. It turns out that, if horizontal braking finishes at the same instant that vertical braking starts (which is what the equation is aiming for), there's often quite a large instantaneous turn necessary. This is hard to fly. If I was intending to produce a flyable trajectory I'd need to add in sufficient time for the rocket to turn upright before commencing vertical braking.
Powered trajectory plot
The ballistic trajectory plot has turned out to be very useful. It shows you where the rocket will go if you don't do anything. I got to thinking that perhaps a similar plot for powered flight might be useful. The idea is to show where the rocket will go if you keep the rocket firing continuously but don't do any additional steering. (My rocket attitude is defined in screen space, which rotates to keep the rocket at the top of the screen, so technically the rocket does rotate when you don't do anything. This turned out to be the most useful frame of reference for controlling rocket direction.)
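A plot like this amounts to simple forward integration: hold thrust and attitude fixed and step the equations of motion until the prediction runs out or hits the ground. Here's a minimal sketch (the step size and the rotating frame-of-reference handling are simplified away from what the game would actually need):

```python
import math

def powered_trajectory(pos, vel, angle, thrust, gravity, dt=0.05, steps=200):
    """Predict flight with the rocket firing continuously at a fixed attitude.

    angle is measured from vertical; gravity is a (negative) y acceleration.
    Returns the list of predicted positions, stopping at the ground.
    """
    x, y = pos
    vx, vy = vel
    points = [(x, y)]
    for _ in range(steps):
        ax = thrust * math.sin(angle)
        ay = thrust * math.cos(angle) + gravity
        vx += ax * dt
        vy += ay * dt
        x += vx * dt
        y += vy * dt
        points.append((x, y))
        if y <= 0.0:  # cut the plot off at the ground
            break
    return points
```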
The powered trajectory plot allows you to swivel the rocket and experimentally find the horizontal braking trajectory that I was aiming for in the section above. Here's a sample:
I may cut off the trajectory plot when (local) horizontal velocity reaches zero. It would also be good to mark that point with a circle or something so it's clear. Next, I plan to compute the height at which vertical braking must start at the point where horizontal velocity reaches zero. This could be color-coded based on whether it's above or below the height where the rocket stops moving horizontally. Using this you can visually solve the equation to figure out how to fly an approach trajectory.
Monday, November 26, 2007
Super Mario Galaxy and Camera Control
Thanksgiving was a busy weekend, with lots of friends and family at our house. I played a bunch of Super Mario Galaxy, too. I like the bite-size gameplay pieces, and the live orchestral soundtrack.
On the downside, the camera is even crazier than in other third-person action games, which reduces the game's appeal for non-hardcore gamers. There are a couple of reasons for this. One is that there is almost no manual control over the camera. Sometimes you can press the D-pad to reorient the camera, but due to the controller geometry this is not something you can do easily. The other reason the camera is squirrelly is that the worlds have unusual geometries. As a result there is a lot of camera hinting (at my day job we call them camera volumes). This is where designers mark a region of space and give a preferred direction for the camera for that region. Due to the crazy gravity it's necessary to swing the camera direction wildly from region to region.
As anyone who's worked on 3D action games knows, there is a fundamental conflict between character control and camera control. You've got two basic choices for character control: character-relative and screen-relative. Character-relative is exemplified by the Tomb Raider series (at least the early games; I'm not familiar with the more recent efforts). I call this “radio-control car” control because it has the familiar problem that if the character is moving toward the camera, you have to push right on the stick to get them to go left on-screen, and vice versa. As a result of this, virtually all third-person games (including Super Mario) use screen-relative control. The joystick deflection is projected to the screen and the character moves directly in that direction. Right is always right and left is always left.
Screen-relative control is great, until you need to rotate the camera. Suddenly the frame of reference that the player is using to control their character has shifted. If they're trying to run along a narrow beam, say, and you rotate the camera, suddenly their character will veer off and plunge to their death. If the player manually rotated the camera then they're expecting it and can adjust their movement to compensate, but if the game is controlling the camera the problem is much worse.
There are various tricks that we've tried to get around this. For instance, if the movement joystick is deflected and then the camera rotates, you can continue the character on their previous course until the player lets up on the joystick, after which any subsequent deflection will be projected to the current camera frame. This has mixed results, and probably works best when you're outright cutting the camera from one view to another.
Super Mario Galaxy has an additional problem with screen-relative control that most other games don't. I mentioned above that you project the joystick deflection onto the screen. There are various ways to do this, but usually they involve projecting the controller deflection onto a ground plane, so that pushing up on the stick moves the character further away on the ground (upstage), and pushing down moves the character nearer (downstage). In Mario, the player is sometimes running around on the ceiling, which introduces a new problem: should pushing up on the stick move Mario further away, or toward the top of the screen (which means running closer to the camera)?
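The usual ground-plane projection can be sketched like this (the names are mine, not from any particular engine). The camera's forward and right axes are flattened against the plane perpendicular to the up vector, and the stick deflection is mapped onto them. Note that when “up” flips, as it does for Mario on a ceiling, the projection plane flips with it, which is exactly the ambiguity described above:

```python
import math

def stick_to_world(stick_x, stick_y, cam_forward, cam_right, up):
    """Project 2D stick deflection into a world-space direction on the ground plane.

    cam_forward, cam_right, and up are 3D unit vectors. The camera axes are
    flattened against the plane perpendicular to up, so pushing up on the
    stick moves the character away from the camera along the ground.
    """
    def flatten(v):
        # Remove the component along up, then renormalize.
        dot = sum(a * b for a, b in zip(v, up))
        flat = tuple(a - dot * b for a, b in zip(v, up))
        length = math.sqrt(sum(a * a for a in flat))
        return tuple(a / length for a in flat)

    forward = flatten(cam_forward)
    right = flatten(cam_right)
    return tuple(stick_x * r + stick_y * f for r, f in zip(right, forward))
```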
In order to make screen-relative character movement work as well as possible, you basically want to make smooth, slow adjustments to the camera so the player has ample time to correct for the change in their character movement. On top of that you can add a manual control to snap the camera quickly to a better direction. That way, the player can stand still when they adjust the camera.
Unfortunately Super Mario Galaxy's environments don't lend themselves well to smooth camera adjustments. Gravity changes directions suddenly, or worlds will be tight and twisty, necessitating rapid camera movements to keep Mario in view. I'm having fun with the game, but it makes my wife really dizzy just to watch it.
Monday, November 19, 2007
Soonest-landing trajectory
In my rocket game, it is difficult to know when to begin a braking burn in order to land at a given target. I've been working on the math for a trajectory that lands at the nearest possible point. (To clarify: I want the nearest point that doesn't involve backtracking or waiting for the next orbit. Given unlimited fuel and time you can land anywhere.)
To start with I'm making some simplifying assumptions. These boil down to assuming you're close to the ground. Gravity is constant; it doesn't vary with position. The ground is a flat plane at y=0, and the gravity vector is pointed in the -y direction. With the constant-gravity assumption, the rocket's trajectory becomes a parabola and can be solved easily.
If the rocket starts out high enough, the trajectory is simple. The rocket is pointed completely sideways and fired until the horizontal velocity is zero. After that it is turned upright, falls a bit if necessary, and then fires vertically to brake to a soft landing.
A sample rocket trajectory of this type is plotted below. For this graph, the rocket's acceleration is 2 and its initial velocity is (4, 0). Gravity acceleration is -1. The graphs end when horizontal velocity reaches zero; they do not show the vertical braking component (if any).
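Under these flat-ground assumptions the sideways-burn phase has a simple closed form. Here's a sketch (my own helper, not code from the game) that reproduces the numbers above:

```python
def sideways_braking(v0x, accel, gravity):
    """Horizontal braking phase: rocket thrusts fully sideways against v0x.

    Returns (time until horizontal velocity is zero, horizontal distance
    covered, altitude lost to gravity during the burn). gravity here is the
    magnitude of the downward acceleration.
    """
    t = v0x / accel                      # time until horizontal velocity is zero
    dx = v0x * t - 0.5 * accel * t * t   # ground covered while braking
    dy = 0.5 * gravity * t * t           # altitude lost while braking
    return t, dx, dy
```

With the numbers from the plot (velocity 4, acceleration 2, gravity magnitude 1), the burn takes 2 time units, covers 4 units of ground, and drops the rocket 2 units, after which the vertical braking phase takes over.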
Sometimes, though, canceling all horizontal velocity before addressing vertical velocity doesn't work because the rocket falls too far and hits the ground before it's done, or else there isn't enough altitude, given its vertical velocity, to complete the vertical braking. In these cases the rocket has to be tilted upward somewhat, diverting some acceleration toward staying above ground. We want to divert the minimum necessary, so as to extend the braking distance as little as possible.
In the example below, the rocket acceleration is still 2, but the initial velocity is increased to (6, 0). As a result, the rocket has to be tilted at an angle of 76.86 degrees from vertical in order to have space for the vertical descent:
Finally, sometimes the initial velocity is such that the rocket has to actually gain altitude in order to make a soft landing. Here, I've reduced the rocket's acceleration to 1.25, set the initial velocity to (6, -0.5), and put the initial rocket altitude at 1. The rocket angle is 25.84 degrees from vertical:
I've let the rocket graze the ground in the trajectory above. In reality you'd pick a safe minimum altitude and adjust the angle to keep it above that.
Thus, the rocket's trajectory consists of one or two pieces. If the horizontal velocity is nonzero, the rocket will execute a burn while tilted to the side by some amount. Once the horizontal velocity is zero, if there is remaining elevation, there will be a vertical burn to cancel that. (There may be a coasting period in between the two burns.) For simplicity I'm assuming that the rocket can change heading instantly. Time for turning could be added to the problem without too much trouble.
I've covered the coast-and-vertical-braking phase in a previous entry. The remaining problem is to find the angle for the first burn. The general approach is as follows:
For a given angle of the rocket, we can compute when it will cancel its horizontal velocity (divide horizontal velocity by horizontal acceleration). We plug this time into the equations for vertical motion to find out the rocket's elevation and vertical velocity at that time. This elevation and velocity can then be plugged into the equation for the altitude at which to begin vertical braking, developed in the above-mentioned blog entry. If the braking altitude is at or below us, we're good. If it's above us, we can't do it, and this angle is not usable.
Since we're comparing two altitudes, the natural thing to do is to make an equation which says that their difference equals zero. Solving this gives the minimum upward tilt that results in a successful landing. Minimum tilt means minimum distance traveled, since as much thrust as possible goes toward horizontal braking.
Unfortunately since the variable we're solving for is encased in sines and cosines I don't know if there's an analytical way to solve for it. However, it should be easy to search for the value. Here's a sample plot illustrating the type of curve it is:
The curve above corresponds to the second case given earlier. It crosses the x axis at around 76.86 degrees.
The only remaining wrinkle (that I know of) is presented in the third case above. We can't ever let the rocket's elevation dip below zero. We need to determine when its vertical velocity reaches zero (this is a possible point of minimum altitude). If this time is earlier than the time for zero horizontal velocity (and greater than zero), then we need to check the rocket's elevation at that time. If it's less than zero (or whatever safe altitude we choose), then we need to search for a rocket angle closer to vertical that keeps the rocket above ground.
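The feasibility check and the search can be sketched as follows. The vertical-braking altitude here is my energy-based reconstruction (all kinetic plus potential energy must be absorbed by a net deceleration of thrust minus gravity), not the formula from the earlier entry, gravity is passed as a positive magnitude, and the angle is found by bisection on the assumption that the surplus crosses zero once between the bracketing angles:

```python
import math

def braking_altitude(v, accel, gravity):
    """Altitude at which full-thrust vertical braking must begin, given
    current vertical velocity v (negative is downward)."""
    return v * v / (2.0 * (accel - gravity))

def height_surplus(angle, vx, vy, y0, accel, gravity):
    """Elevation margin at the end of the horizontal braking burn.

    angle is the tilt from vertical; a positive surplus means the braking
    altitude is at or below the rocket, so this angle works.
    """
    t = vx / (accel * math.sin(angle))      # time to kill horizontal velocity
    ay = accel * math.cos(angle) - gravity  # net vertical acceleration during burn
    y = y0 + vy * t + 0.5 * ay * t * t      # elevation when the burn ends
    v = vy + ay * t                         # vertical velocity when the burn ends
    return y - braking_altitude(v, accel, gravity)

def minimum_angle(vx, vy, y0, accel, gravity, lo=1e-4, hi=math.pi / 2):
    """Bisect for the largest workable tilt from vertical (minimum upward tilt).

    Assumes the surplus is positive at lo, negative at hi, and crosses
    zero once in between.
    """
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if height_surplus(mid, vx, vy, y0, accel, gravity) >= 0.0:
            lo = mid  # still works: try tilting further toward horizontal
        else:
            hi = mid
    return lo
```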
I've been working this out in Excel; the next thing to do is to implement it in-game. There are a variety of ways it could be used: I could plot the earliest-landing point on the ground, or plot the trajectory to it, or have an attitude indicator showing which direction to burn. Once it's in I'll see how valid the flat-earth assumptions are. They do become more correct as the rocket approaches the ground so they may not be a problem.
Another thing this could be useful for is the beginning of a rocket AI. Computer-controlled rockets will probably have slightly different trajectory needs. They'll be wanting to land at a specific spot, rather than at the earliest possible spot. It shouldn't be too different, though.
Monday, November 12, 2007
Flying over eastern Washington
This weekend we visited my parents. My father took me up for a couple of hours in his little yellow Super Cub, which is always fun. We toured the Hanford site, the Coyote Ridge Corrections Center in Connell, Palouse Falls, Lower Monumental Dam, the Kahlotus Cemetery, and we did touch-and-goes at a couple of farm landing strips. There are always a lot of deer on the wheat fields, and we scared up a coyote yesterday as well.
I'd never been over Hanford before. It's a very interesting place. During World War 2 the US government took over a vast swath of land next to the Columbia River in order to produce the plutonium for the nation's nuclear weapons. The river provided cooling water, and the barren land provided isolation.
There was a town named Hanford on the site, which was dismantled. You can still see the streets and the trees; the only walls still standing are those of the high school building.
The “B” reactor was the “world's first industrial-scale nuclear reactor.” Here's a picture:
About a dozen more reactors were built. In the early 1990s (I believe; I'm writing from memory) the government decided to shut it all down. Most of the reactors have been encased in concrete and steel sarcophagi; the B reactor has been left standing free for sentimental reasons, but as you can see in the picture, all the ancillary buildings have been torn down.
I don't know the details off-hand, but I think the reactors were shut down fairly suddenly, and their fuel rods were dumped into the nearby processing ponds and tanks. They were left alone for a while and made a mess, which the Hanford site is now cleaning up at great expense.
The nuclear reactors from our nation's decommissioned subs are shipped up the river and placed in neat rows in a pit on the Hanford site:
I'm convinced that nuclear energy will be needed to help solve the coming energy shortages. Nuclear physics is pretty far removed from daily experience in most people's lives, and I think this, combined with its use in weapons, has given it an almost religious aura in many minds, occupying space next to Satan. What people don't realize is that there isn't necessarily a reason why nuclear power generation has to produce lots of radioactive waste. We just haven't been trying to make nuclear reactors any better. In particular, thorium-based reactors look promising. See this blog for details.
The prison in Connell was depressing. They have a medium-security facility already, but are building a huge expansion that looks (from the air) like it will have considerably more security, with a double fence enclosing a no-man's-land.
Forcing people who have done something wrong to only associate with other people like themselves seems like just about the stupidest idea possible, if “correction” is at all a goal. It's a hard problem, I'm sure; it just seems like there must be a better way.
The Kahlotus Cemetery is a small plot in a small town. It apparently contains quite a few graves of anonymous Chinese laborers who were killed in accidents while building the railroad.
I'd never been over Hanford before. It's a very interesting place. During World War 2 the US Department of Energy took over a vast swath of land next to the Columbia River in order to produce the plutonium for the nation's nuclear weapons. The river provided cooling water, and the barren land provided isolation.
There was a town named Hanford on the site, which was dismantled. You can still see the streets and the trees; the only walls still standing are the high school building.
The “B” reactor was the “world's first industrial-scale nuclear reactor.” Here's a picture:
About a dozen more reactors were built. In the early 1990s (I believe; I'm writing from memory) the government decided to shut it all down. Most of the reactors have been encased in concrete-and-steel sarcophagi; the B reactor has been left free-standing for sentimental reasons, but as you can see in the picture, all the ancillary buildings have been torn down.
I don't know the details off-hand, but I think the reactors were shut down fairly suddenly, and their fuel rods were dumped into the nearby processing ponds and tanks. They were left alone for a while and made a mess, which the Hanford site is now cleaning up at great expense.
The nuclear reactors from our nation's decommissioned subs are shipped up the river and placed in neat rows in a pit on the Hanford site:
I'm convinced that nuclear energy will be needed to help solve the coming energy shortages. Nuclear physics is pretty far removed from daily experience in most people's lives, and I think this, combined with its use in weapons, has given it an almost religious aura in many minds, occupying space next to Satan. What people don't realize is that there isn't necessarily a reason why nuclear power generation has to produce lots of radioactive waste. We just haven't been trying to make nuclear reactors any better. In particular, thorium-based reactors look promising. See this blog for details.
The prison in Connell was depressing. They have a medium-security facility already, but are building a huge expansion that looks (from the air) like it will have considerably more security, with a double fence enclosing a no-man's-land.
Forcing people who have done something wrong to only associate with other people like themselves seems like just about the stupidest idea possible, if “correction” is at all a goal. It's a hard problem, I'm sure; it just seems like there must be a better way.
The Kahlotus Cemetery is a small plot in a small town. It apparently contains quite a few graves of anonymous Chinese laborers who were killed in accidents while building the railroad.
Thursday, November 8, 2007
James McColl and The Hussy's
James McColl is a Glasgow-based songwriter. He was a member of The Supernaturals in the 1990s, and now has a female-fronted band called The Hussy's (their spelling, not mine). Very catchy songs. My personal favorites are “Tiger” and “We Expected”.
The band is recording an album right now. They aren't signed to a label and it looks from their blog entries like they're undergoing some internal friction as a result. Here's hoping they weather their difficulties and get the album out.
Monday, November 5, 2007
Time Acceleration, Hyperbolic View
Library Trip
I made the trek over to the engineering library at the University of Washington this weekend and looked up several of the papers about powered soft-landing trajectory optimization that I wasn't able to view online. The papers weren't as useful as I'd hoped; they all tend to assume a flat earth with constant gravity, which makes the problem much simpler. My world is exaggerated (smaller world, faster time) so my powered flight occurs over a larger range of gravity strength and direction.
Unsurprisingly, it turns out that fuel-minimizing trajectories get as close as they can to two impulsive maneuvers: one at the start, and one at the end. One paper I read calls this a “bang-bang” trajectory. Cute!
Engine Cutoff
On my program, I did a few minor things. I put in an engine cutoff at touchdown to keep the rocket from rebounding off the ground. It's hard to cut the engine at the exact instant the rocket touches down, so it's nice to have the game do it for you.
Time Acceleration
I also enhanced my time acceleration. On a “bang-bang” trajectory you spend the middle part just waiting while the rocket coasts. This is boring. Holding down the Tab key speeds up the game clock. Previously it just accelerated time by 25 times. This was fine for low orbits but much too slow for larger orbits, so I decided to make it variable.
My first attempt was to try to keep on-screen speed constant. This meant that the acceleration was proportional to the view radius (which changes as the camera control algorithm zooms in and out), and inversely proportional to the rocket speed. This worked great, except when it didn't. You can probably guess when: when the rocket speed approached zero. For example, you're sitting on the ground and press Tab. Bang! Infinite speedup. The other bad case is when you're flying straight up; the rocket slows to a stop at the apex of the trajectory, then starts falling back down. If you're accelerating time through that stop, suddenly your rocket is back on the ground, in pieces.
Because of these problems, I have dropped the inverse proportionality with rocket speed; the time acceleration is simply proportional to the view radius. I'm not sure whether this is wholly satisfactory. The camera frames orbits (up to a certain size), so if you are flying a highly elliptical orbit you will zoom around the periapsis at high speed and then dawdle through the apoapsis. Periapsis is a common location for doing burns, and having the time acceleration too high there makes it hard to hit the right spot in the orbit.
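The view-radius-proportional speedup can be sketched in a few lines; the names, normalization, and clamp values here are my own assumptions, not the program's actual constants:

```python
def time_multiplier(view_radius, base_radius=1000.0, base_mult=25.0,
                    max_mult=10000.0):
    """Time-acceleration factor while Tab is held: proportional to the
    view radius only, normalized so the old fixed 25x rate applies at an
    assumed reference zoom, and clamped at both ends. Dividing by rocket
    speed as well is what caused the infinite speedup at zero velocity."""
    mult = base_mult * (view_radius / base_radius)
    return min(max(mult, 1.0), max_mult)
```

The clamp also bounds the periapsis problem somewhat: however the camera zooms, the multiplier never leaves a fixed range.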
I added a game clock and a wall clock to the HUD, and a display of the time multiplier when the Tab key is down.
Hyperbolic Camera
Finally, I spent some time trying out various possibilities for the camera-framing algorithm when the orbit is very large, or hyperbolic. I don't have anything satisfactory yet. Ideally it should return similar results near the boundary with the orbit-framing regime, so the camera adjusts smoothly from one to the other. It needs to show the rocket at all times, and probably some portion of the unflown trajectory. It's also probably important to have the planet always be in view.
An example I tried: find the periapsis point and the axis of the orbit, then expand a bounding circle from the periapsis point along the orbit's axis until it hits the rocket. Unfortunately this suffers from problems when the rocket is near periapsis, and it doesn't show much ahead of the rocket on elliptical trajectories. On the plus side, it shows almost all of a trajectory that isn't too eccentric. It's probably my favorite of the things I've tried so far, though.
Portal
A couple of years ago I was at the Game Developers Conference, looking over the entrants in the Independent Games Festival. A couple of DigiPen students (Kim Swift and another guy whose name escapes me) were there demonstrating their student project, a little game called Narbacular Drop. Valve saw the game too, and hired the entire team of students to turn it into a shipping product, a process that took about two and a half years. I spent a couple of hours this weekend playing the result. It's pretty fun up to the point I've reached (room 18), where it turned maddeningly difficult.
Monday, October 29, 2007
Auto-zooming view
This week I started by trying to improve my powered-landing cutoff elevation by using the correct equation for gravitational acceleration as a function of altitude:

d²y/dt² = −μ / y²

This is a differential equation: the acceleration due to gravity varies with the inverse square of the distance y from the planet's center. μ (mu) is a constant based on the size and density of the planet. I was attempting to come up with an equation for y given t. Unfortunately I didn't get anywhere with it. I'm not sure if it has a closed-form solution or not.
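Lacking a closed form, the fall can at least be integrated numerically. A minimal sketch, with names and step size as my own assumptions:

```python
def fall_time(y0, y_end, mu, dt=1e-4):
    """Time for a body released from rest at radius y0 to fall to radius
    y_end under inverse-square gravity y'' = -mu / y^2, by semi-implicit
    Euler integration."""
    y, v, t = y0, 0.0, 0.0
    while y > y_end:
        a = -mu / (y * y)   # inverse-square gravitational acceleration
        v += a * dt         # update velocity first (semi-implicit Euler)
        y += v * dt
        t += dt
    return t
```

For small drops this agrees with the constant-gravity estimate t ≈ sqrt(2h/g), which is a handy sanity check.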
After that I turned to a couple of other parts of my lander program that needed improvement. The first was computation of the impact site. Because I am currently assuming all the gravity comes from the one planet, the spacecraft trajectory is a conic section (ellipse, parabola, or hyperbola). I can thus do an analytical intersection between the trajectory curve and the circle of the planet's surface.
My old code was an ellipse/circle intersection. It thus couldn't handle parabolic or hyperbolic trajectories. It also suffered from numerical accuracy problems right around takeoff and landing, when the trajectory is just barely larger than the circle and has an eccentricity near one.
I switched to the polar representation for a conic:

r = p / (1 + e·cos θ)

p is the semi-latus rectum, a measure of the size of the conic. e is the eccentricity, a measure of the shape of the conic.
This handles hyperbolic and parabolic trajectories just fine. Its problem is that it falls down when the trajectory becomes degenerate. When you take off vertically, the trajectory follows a straight line; in this case p is 0, so the equation returns 0 for the radius no matter what angle theta you put in. I need to switch to a different algorithm in those cases, which I haven't done yet.
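In the non-degenerate case, the impact site follows from setting r equal to the planet's radius R in the polar equation, which gives cos θ = (p/R − 1)/e. A sketch; function and parameter names are mine, not the program's:

```python
import math

def impact_true_anomaly(p, e, R):
    """True anomaly (radians) at which the conic r = p / (1 + e*cos(theta))
    crosses the planet's surface r = R, or None if the trajectory misses.
    The degenerate radial case (p near 0) still needs separate handling,
    as noted above."""
    if e < 1e-12:
        return None                   # circular orbit: no single crossing
    c = (p / R - 1.0) / e
    if not -1.0 <= c <= 1.0:
        return None                   # no intersection (e.g. periapsis above R)
    return math.acos(c)               # symmetric solutions at +theta and -theta
```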
The third thing I worked on was my view-framing algorithm. I try to zoom the camera in and out so that you can see what's important. When you're in orbit, I frame the entire orbit:
Once you are on a collision trajectory, I want to frame the unflown portion of the trajectory:
Once you've landed, I want to have a close-up view of the spacecraft:
There are still some cases that aren't covered. For instance, what should I frame when the trajectory is hyperbolic? In that case it's infinite. I currently cap how far the trajectory will zoom the view out, but if the spacecraft itself passes that limit I zoom out further to keep it in frame.
It turns out that the specific mechanical energy is a good measure to use for identifying the different situations. It's the sum of the kinetic energy and the potential energy, divided by the spacecraft's mass:

ε = v²/2 − μ/r
The kinetic energy is the first term; you probably recognize it (minus the mass) from physics textbooks. The potential energy is the second term; it's set up such that it is zero when the spacecraft is infinitely far away, and gets more negative as the spacecraft approaches the planet.
The specific mechanical energy stays constant as the spacecraft moves around a given trajectory. For non-circular trajectories, potential energy is traded with kinetic energy as the spacecraft orbits.
When the specific mechanical energy is positive, the spacecraft is on a hyperbolic trajectory. When it is zero, the trajectory is parabolic. Negative means an elliptic orbit. The size of the orbit is related to the energy. I use the energy in a low circular orbit as the cutoff point for blending between a landing camera and an orbit camera. Since the energy changes smoothly as the player fires the rockets, it is a good parameter for interpolating between camera modes.
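A sketch of how this classification and blending might look in code; the energy formulas are standard, but the linear blend mapping and the names are my own assumptions rather than the author's exact scheme:

```python
def specific_energy(v, r, mu):
    """Specific mechanical energy: v^2/2 - mu/r (per unit mass)."""
    return 0.5 * v * v - mu / r

def camera_blend(v, r, mu, r_low):
    """Blend factor between the landing camera (0) and orbit camera (1).
    The pivot is the energy of a circular orbit at the assumed low-orbit
    radius r_low; the ramp runs from rest-at-surface up to that energy."""
    e = specific_energy(v, r, mu)
    e_low = -mu / (2.0 * r_low)   # circular orbit at r_low
    e_rest = -mu / r_low          # at rest at the surface
    t = (e - e_rest) / (e_low - e_rest)
    return min(max(t, 0.0), 1.0)
```

Because the energy varies smoothly with thrust, so does the blend factor, which is the property that makes it a good interpolation parameter.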
Fundamentals of Astrodynamics, by Bate, Mueller, and White, is the book I am dog-earing as I work through this stuff. It's an unbeatable value, as is typical of Dover books. If you are at all interested in astrodynamics it's a must-have.
Monday, October 22, 2007
Trajectory Feedback
This week I've been working on feedback in my lunar lander program.
The screenshot is not terribly clear. The ugly disc at the bottom is the moon. The tilted green arrow in the center marks the lander's position and orientation; when the camera zooms in you can see the lander itself. The curve passing through the lander that leads to the little arrow on the ground is the ballistic trajectory (the arrow marks its impact site), while the second curve, which intersects the ground, is the maximum-deceleration trajectory.
The ballistic trajectory is handy for some things. It shows you the “default” behavior of the spacecraft if you don't do anything. You can make a pinpoint landing reasonably well by aiming the ballistic impact site at your target and then throttling and steering to keep it there as the rocket decelerates.
I've been experimenting with some other displays. One I'd like to have is an indication, when you're in orbit, of the nearest point on the planet that you can safely land on. My first attempt toward that is the maximum deceleration trajectory. This simply integrates the spacecraft's position while choosing to decelerate in the direction opposite the velocity vector at each step.
When the rocket is flying high enough, it can reach zero velocity above the ground, in which case this trajectory is good. When you're orbiting lower, though, the trajectory may not reach zero velocity until under the ground somewhere, which isn't as useful.
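The maximum-deceleration trajectory described above can be sketched as a simple integration; the integrator, names, and step size here are my assumptions, and the actual program's details may differ:

```python
import math

def max_decel_trajectory(pos, vel, mu, accel, dt=0.01, max_steps=100000):
    """Integrate a 2-D trajectory under inverse-square gravity while
    thrusting with acceleration `accel` directly opposite the velocity,
    stopping when the speed reaches (approximately) zero."""
    x, y = pos
    vx, vy = vel
    points = [(x, y)]
    for _ in range(max_steps):
        speed = math.hypot(vx, vy)
        if speed < 1e-6:
            break                                  # decelerated to a stop
        r = math.hypot(x, y)
        ax = -mu * x / r**3 - accel * vx / speed   # gravity + retro-thrust
        ay = -mu * y / r**3 - accel * vy / speed
        vx += ax * dt
        vy += ay * dt
        x += vx * dt
        y += vy * dt
        points.append((x, y))
    return points
```

The low-orbit failure mode shows up here as the trajectory's final point having a radius below the planet's surface.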
What I really need is a trajectory that diverts power as necessary away from maximum deceleration and toward maintaining positive elevation. I haven't figured out quite how to do this yet, though. There are a variety of papers about how to compute optimal soft-landing trajectories, but they are almost all for sale at $25 each from IEEE and AIAA. I don't want to buy papers sight-unseen so I'm still trying to figure out a way to see them.
Another display I'd like is an indication, on the ballistic trajectory, of the point beyond which there is nothing you can do to prevent a crash. My first attempt toward that assumes only vertical motion, constant gravity acceleration, and constant rocket acceleration. It computes the altitude at which you must begin decelerating (at your current rocket thrust level) in order to bring the rocket to a soft landing.
Constants:
Variables:
Initial equations:
The position and velocity need to match up at the boundary between the unpowered and powered flight segments. Equating these yields two equations in two unknowns: s and t, the durations of the unpowered and powered segments, respectively:
Via a lot of tedious algebra, these can be solved for t, which can then be plugged into one of the initial equations to yield the altitude at which the powered braking segment begins:
I was surprised at first to see that the direction of initial velocity doesn't matter. You can be going up, or down, and the elevation is the same. On thinking about it, this makes sense, because if you are going up, you are on a parabolic arc that will come back down to the same elevation with exactly opposite velocity.
This equation assumes that gravity is constant. Near the surface this is a reasonable approximation, but as you get higher, gravity drops off and the equation becomes less accurate. Initially I chose surface gravity as the approximation. This is conservative, because it makes the equation think the rocket will fall faster than it actually does; choosing gravity at the spacecraft's initial elevation would sometimes cause the equation to underestimate the height at which to start braking. I've experimented with averaging the gravity at the ground and at the spacecraft's initial altitude, and this works pretty well. Making gravity properly dependent on altitude would turn these into differential equations. I might take a look at solving that, eventually.
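The equations in the original post were images. Re-deriving from the stated assumptions (vertical motion, constant gravity g, constant rocket acceleration a > g, with position and velocity matching at the coast/brake boundary) gives y_brake = (v0² + 2·g·y0) / (2a); only v0² appears, so the sign of the initial velocity drops out, consistent with the observation above. A sketch, including the averaged-gravity tweak (mu and R are assumed names for the gravitational parameter and planet radius):

```python
def braking_altitude(v0, y0, a, g):
    """Altitude at which full braking must begin for a soft landing, given
    vertical motion, constant gravity g, and constant rocket acceleration
    a > g. Re-derived result: (v0^2 + 2*g*y0) / (2*a)."""
    return (v0 * v0 + 2.0 * g * y0) / (2.0 * a)

def braking_altitude_avg_g(v0, y0, a, mu, R):
    """The averaged-gravity variant: use the mean of gravity at the
    surface and at the spacecraft's current radius R + y0."""
    g_avg = 0.5 * (mu / (R * R) + mu / ((R + y0) ** 2))
    return braking_altitude(v0, y0, a, g_avg)
```

The averaged g is always below surface g, so the averaged estimate starts braking a bit lower than the conservative surface-gravity one.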
Even as it is, this display is pretty useful. I can tune the throttle up and down and the indicated altitude rises and falls; if I set it to the bottom of the spacecraft then it will make a perfect soft landing.
Monday, October 15, 2007
Throttling Experiments
This week I returned to my lunar lander project.
The Lunar Lander family of games is ancient (i.e. as old as me), but I've always found them both difficult and dull. I'm attempting to improve the user interface to address these problems. I'm also using a circular world rather than a rectangular chunk, so it's possible to go into and out of orbit.
Orbit comes with its own set of challenges. One of the biggest is time scale. Orbital period increases with the three-halves power of the orbit radius, so if you've got a world where a low-altitude orbit takes a minute, say, then by the time you're one planet-radius above the surface the orbit will take about 2.8 minutes. Depending on how frugal the player needs to be with fuel, they may have to wait quite a while between burns. I've got some basic time acceleration implemented to counteract this, and will probably enhance it further as I learn more about what's needed. For instance, the time scale could vary with altitude so that orbits take the same amount of accelerated time, regardless of size.
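The 2.8× figure is just Kepler's third law, with the period scaling as the 3/2 power of the semi-major axis:

```python
def period_ratio(a1, a2):
    """Kepler's third law: orbital period scales with the 3/2 power of
    the semi-major axis, so T2/T1 = (a2/a1)**1.5."""
    return (a2 / a1) ** 1.5
```

Doubling the radius gives 2**1.5 ≈ 2.83, hence the roughly 2.8-minute orbit.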
At the moment, though, I'm engaged mostly in improving the interface for throttling near the planet. Currently, you steer with the mouse and hold buttons to fire the main rocket. The steering is simple; moving the mouse left or right rotates the rocket. Stop moving the mouse and the rocket stops rotating.
If you were perfect and knew exactly what you were doing, you could land a rocket with a single burn without adjusting the throttle. Nobody's perfect, though, so it's necessary to either stop and restart the engine, or equivalently to adjust the throttle to lower and higher thrust levels.
It might seem obvious to put the throttle on the other mouse axis, but having unrelated inputs on the mouse axes introduces unwanted coupling: it's hard to steer without adjusting the throttle, and vice versa.
Putting the throttle on the mouse wheel works pretty well, if the mouse wheel scrolls smoothly. It makes it harder for the player to use the other mouse buttons, though, since their focus is on precisely positioning both the mouse and wheel. In this mode I offload other things to the other hand, on the keyboard.
Putting the throttle on the keyboard has been problematic. I tried absolute throttle settings on the 1-9 keys, but it's very hard to keep track of where you are. Most of the time when you're landing you want “a little bit more” or “a little bit less” throttle; these are relative adjustments, so the absolute settings are a bad fit.
Another keyboard interface I've tried is to hold A or Z to increment or decrement the throttle setting. Again this is tricky because the adjustment happens at a constant speed. At times this speed is too fast, while other times it's too slow.
The final throttling interface I have is to just use the left and right mouse buttons. Holding the left button generates 1.1 G's of thrust, and holding the right mouse button generates 0.5 G's of thrust. Holding both together generates 1.6 G's of thrust. This interface is nice in that the thrust values are well-known; the player can get an intuitive feel for how strong they will be in each circumstance. On the other hand, because there is so little throttle control, the player has to resort to pulsing the rockets to simulate intermediate thrusts.
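The two-button scheme is simple enough to sketch; the thrust levels are the ones given above, and the additive combination is my reading of them:

```python
def thrust_g(left_down, right_down):
    """Thrust in G's for the two-button scheme: 1.1 G on the left button,
    0.5 G on the right, and the two simply add when both are held.
    Intermediate levels require pulsing, as noted."""
    return (1.1 if left_down else 0.0) + (0.5 if right_down else 0.0)
```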
Of all of these, I'm most familiar with the last one, but the mouse wheel one is probably my favorite. It allows smooth control of the rocket.
Finally, it looks like putting the throttle indicator up in the top-right corner is a bad idea. When landing, the focus is entirely on the vehicle and its trajectory toward the ground; it's hard to look anywhere else. I am trying to think of ways to communicate necessary information about throttling in this context.
Finally, it looks like putting the throttle indicator up in the top-right corner is a bad idea. When landing, the focus is entirely on the vehicle and its trajectory toward the ground; it's hard to look anywhere else. I am trying to think of ways to communicate necessary information about throttling in this context.
Monday, October 8, 2007
Forcing a window to maintain a particular aspect ratio
This is a piece of code I removed from my framework recently. I first wrote something like this many years ago, back when computer monitors were virtually all 4:3 aspect ratio.
Games typically run full-screen, but during development it's much more convenient to have them run in a window. For 3D games it's easy to resize the window. The problem is what to do when the window's client area doesn't match the aspect ratio of the full-screen monitor. You can letterbox the image, putting black borders on the sides, or you can crop, trimming part of the image off.
I opted to make the program restrict window resizing so that the window's aspect ratio would always match the target monitor's aspect ratio. This turns out not to be too hard to do in Windows.
First, you need to catch the WM_SIZING message in your message handler:
LRESULT WINAPI MsgProc(HWND hWnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch (msg)
    {
    case WM_SIZING:
        resize(int(wParam), *reinterpret_cast<LPRECT>(lParam));
        return TRUE;
    ...
    }
    return DefWindowProc(hWnd, msg, wParam, lParam);
}
Second, you need to modify the rectangle that comes in with the WM_SIZING message so that it obeys your restrictions:
void resize(int edge, RECT & rect)
{
    int size_x_desired = (rect.right - rect.left) - g_window_adjust_x;
    int size_y_desired = (rect.bottom - rect.top) - g_window_adjust_y;

    switch (edge)
    {
    case WMSZ_BOTTOM:
    case WMSZ_TOP:
        {
            int size_x = g_window_adjust_x + (size_y_desired * window_ratio_x) / window_ratio_y;
            rect.left = (rect.left + rect.right) / 2 - size_x / 2;
            rect.right = rect.left + size_x;
        }
        break;

    case WMSZ_BOTTOMLEFT:
        {
            int size_x, size_y;
            if (size_x_desired * window_ratio_y > size_y_desired * window_ratio_x)
            {
                size_x = rect.right - rect.left;
                size_y = g_window_adjust_y + ((size_x - g_window_adjust_x) * window_ratio_y) / window_ratio_x;
            }
            else
            {
                size_y = rect.bottom - rect.top;
                size_x = g_window_adjust_x + ((size_y - g_window_adjust_y) * window_ratio_x) / window_ratio_y;
            }
            rect.left = rect.right - size_x;
            rect.bottom = rect.top + size_y;
        }
        break;

    case WMSZ_BOTTOMRIGHT:
        {
            int size_x, size_y;
            if (size_x_desired * window_ratio_y > size_y_desired * window_ratio_x)
            {
                size_x = rect.right - rect.left;
                size_y = g_window_adjust_y + ((size_x - g_window_adjust_x) * window_ratio_y) / window_ratio_x;
            }
            else
            {
                size_y = rect.bottom - rect.top;
                size_x = g_window_adjust_x + ((size_y - g_window_adjust_y) * window_ratio_x) / window_ratio_y;
            }
            rect.right = rect.left + size_x;
            rect.bottom = rect.top + size_y;
        }
        break;

    case WMSZ_LEFT:
    case WMSZ_RIGHT:
        {
            int size_y = g_window_adjust_y + (size_x_desired * window_ratio_y) / window_ratio_x;
            rect.top = (rect.top + rect.bottom) / 2 - size_y / 2;
            rect.bottom = rect.top + size_y;
        }
        break;

    case WMSZ_TOPLEFT:
        {
            int size_x, size_y;
            if (size_x_desired * window_ratio_y > size_y_desired * window_ratio_x)
            {
                size_x = rect.right - rect.left;
                size_y = g_window_adjust_y + ((size_x - g_window_adjust_x) * window_ratio_y) / window_ratio_x;
            }
            else
            {
                size_y = rect.bottom - rect.top;
                size_x = g_window_adjust_x + ((size_y - g_window_adjust_y) * window_ratio_x) / window_ratio_y;
            }
            rect.left = rect.right - size_x;
            rect.top = rect.bottom - size_y;
        }
        break;

    case WMSZ_TOPRIGHT:
        {
            int size_x, size_y;
            if (size_x_desired * window_ratio_y > size_y_desired * window_ratio_x)
            {
                size_x = rect.right - rect.left;
                size_y = g_window_adjust_y + ((size_x - g_window_adjust_x) * window_ratio_y) / window_ratio_x;
            }
            else
            {
                size_y = rect.bottom - rect.top;
                size_x = g_window_adjust_x + ((size_y - g_window_adjust_y) * window_ratio_x) / window_ratio_y;
            }
            rect.right = rect.left + size_x;
            rect.top = rect.bottom - size_y;
        }
        break;
    }
}
window_ratio_x and window_ratio_y are integer constants that define the desired aspect ratio.
We want to enforce a specific aspect ratio for the client area, but the resize code is dealing with the dimensions of the overall window. To handle this, at startup I use AdjustWindowRect to find out how much padding there is in each dimension, and store it off in g_window_adjust_x and g_window_adjust_y:
int WINAPI WinMain(HINSTANCE hinstance, HINSTANCE, LPSTR, int nShowCmd)
{
    ... // register window class

    const int window_style = WS_OVERLAPPEDWINDOW;
    RECT rect = { 0, 0, start_size_x, start_size_y };
    AdjustWindowRect(&rect, window_style, FALSE);
    g_window_adjust_x = (rect.right - rect.left) - start_size_x;
    g_window_adjust_y = (rect.bottom - rect.top) - start_size_y;

    ... // create window, message handling loop, etc.
}
Finally, you can restrict the minimum window size so that it obeys the aspect ratio restriction. This involves answering the WM_GETMINMAXINFO message:
case WM_GETMINMAXINFO:
    {
        MINMAXINFO * info = reinterpret_cast<MINMAXINFO *>(lParam);
        info->ptMinTrackSize.y = ((info->ptMinTrackSize.x - g_window_adjust_x) * window_ratio_y) / window_ratio_x + g_window_adjust_y;
    }
    break;
I ended up removing this code this week because monitors have gotten much less homogeneous in their aspect ratios. My notebook computer, for instance, is 1440 by 900, which is an 8:5 ratio. HDTV aspect ratios are 16:9. A 1280x1024 notebook (with square pixels) has a 5:4 aspect ratio. I decided that it was better to just have my games adapt their UI and viewport to whatever resolution they're given.
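A minimal sketch of that adaptation (my own names here, not necessarily how my framework actually does it): hold the vertical field of view fixed and derive the horizontal extent of the view frustum from whatever aspect ratio the client area ends up with.

```cpp
#include <cmath>

// Half-extents of the near plane of a perspective frustum.
struct Frustum { float half_width, half_height; };

// Keep the vertical field of view constant and let the horizontal
// extent follow the window's pixel aspect ratio (square pixels
// assumed), so no resolution gets letterboxed or cropped.
Frustum make_frustum(int pixels_x, int pixels_y,
                     float vertical_fov_radians, float near_z)
{
    Frustum f;
    f.half_height = near_z * std::tan(vertical_fov_radians * 0.5f);
    f.half_width  = f.half_height * float(pixels_x) / float(pixels_y);
    return f;
}
```

A 1440x900 window then simply gets a frustum that is 8:5, a 1280x1024 window one that is 5:4, and so on.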
Subdivision curve planet
My week of vacation was a bit of an adventure. An old friend of ours went into labor five weeks early so we took care of her three-year-old for a night. Nobody slept much that night. After that I caught a cold.
Overall it was a good vacation, though. I put together a tricycle for my daughter, read some books, and got caught up on a lot of backlogged household chores. We also spent time with our new neighbors who also have a two-year-old daughter.
I did some computer chores as well. I'm going to give Windows Vista's backup system a try, and back up my work to my external USB hard drive once a week. The old computer is almost back to normal with its new hard drive; we've been installing stuff on it. My notebook computer had some divergent versions of my projects due to my switch from version 7 to version 8 of Visual C++. I've been reconciling and consolidating those.
Early in the week I did some work on terrain modeling. I had been thinking about using subdivision curves, and I wrote a test application that generated a random “planet” out of them. Since the world is two-dimensional the planet consists of a single polygon. I implemented scalar-valued creasing, so the vertices in the control polygon each have a hardness ranging from zero (completely smooth) upward. Basically, if a vertex's hardness is n, it will not be smoothed for the first n subdivision steps, and then it will be smoothed for all steps thereafter. Fractional values are handled by blending the smooth and hard solutions at the transitional subdivision level.
Here's a screenshot:
The control polygon is drawn in faint gray; you can see it on some of the smoother corners.
Ultimately, though, I decided it's not time yet to make a terrain editor. I really need to focus on basic gameplay and functionality right now.
Monday, October 1, 2007
Riding the bus
I'm vacationing this week, but not going anywhere in particular. Today and tomorrow I'm looking after my daughter while my wife goes to Continuing Legal Education classes on trusts and estates. We're watching Sesame Street right now, and I think we may go hop on the bus and ride somewhere in a bit.
I'll get a proper update up soon, after I've actually gotten something done.
Monday, September 24, 2007
Even Less
I've spent my free time this week playing Metroid Prime 3, and trying to resolve my computer situation.
Last weekend I bought an eMachines T5230 from Best Buy. It's an Athlon 64 machine with 1 GB of RAM and an NForce chipset, so it's got an integrated NVidia video card. I thought that it ought to be at least as fast as my three-year-old Dell with the hard drive that went kaput. I was wrong. It was not even half as fast. I'm a programmer, not a computer expert, so I don't know exactly why it was so bad. I bought an $80 hard drive, put it in my old Dell, and returned the eMachines box. Then I spent the rest of the day installing Windows and its various updates.
The days of easy performance progress in computers are over. I've bought a new computer approximately every three years since 1994 or so. Through the 90s my computers would get about four times faster for the same price or less, every three years. Once processor speeds started outstripping memory clock speeds (I believe the 66 MHz 486 was the last computer with the same clock speed for CPU and RAM) the apparent performance increases withered away. It turns out that we use computers not so much for computing as for shuttling large quantities of data around, so faster CPUs don't do a whole lot.
Now, we have hit a wall on CPU speed. My three-year-old Dell is clocked at 3 GHz. You can hardly buy any computer today that fast. Instead, they are all going multi-core. (The eMachines machine was clocked around 2.2 GHz, but had two cores.) Existing applications won't get any faster. Instead, we programmers will have to eke out the CPUs' full potential with sweat and tears.
Metroid Prime 3 is fun, albeit pretty much the same game as its two predecessors. They did some good work to make the game use the Wii's control scheme. Motion sensing is inherently weaker than buttons and joysticks for many things, though. First, you get no tactile feedback, so you have to rely on visual feedback to know if you've made the correct input. Second, with a joystick you are free to remove your hand to do other things, such as scratching your nose. I find myself holding a rigid position for long stretches when playing with the Wii, which is far less comfortable.
In hindsight I think Nintendo may have goofed up in a few ways with their latest controller. The “hardcore” games like Metroid and Zelda all end up using all the buttons on the wand controller, even though the buttons weren't designed to be used when holding it like a wand. Only A and B are readily accessible; the rest require contortions. I think their idea of making the wand holdable in two different positions (as a wand, and sideways as a more classic NES controller) was a mistake.
I also don't understand why Nintendo didn't make the camera on the front of the wand have a fisheye lens. The field of view is too narrow. I was watching my niece and nephew play Duck Hunt and they were continually having problems with letting the pointer drift off-screen, and then it just kind of got stuck. Other games in Wii Play are especially bad this way. For instance, in the air hockey game the paddle gets stuck when the wand loses sight of the tracking LEDs, and it can be very hard for people who aren't used to keeping the wand pointed right at the screen.
Monday, September 17, 2007
Not much
My family and I have been sick this weekend so I haven't gotten much done.
I've been working on implementing a piston-type joint in the Chipmunk physics library. This is a joint that allows one object to move freely along a straight track, but stops it at the endpoints. The sliding object is allowed to rotate freely. Obviously this is to simulate wheels on suspension; it will be combined with a damped spring to do the suspension.
To do this, I need to solve for an impulse which, when applied to the two bodies at the joint, will eliminate relative velocity in the cross-track direction. Additionally, if the sliding object has reached a track extent, I must generate an impulse which will kill any relative velocity that would carry it past the end of the track. This latter velocity adjustment is essentially the same as a contact non-penetration constraint, in that we don't want to ever produce an impulse that would keep the sliding object from being able to move the other direction, away from the track's endpoint.
Here's some initial math going the opposite direction: given an impulse, what is the change in relative velocity at the joint? I've used the Greek letter Delta (the triangle) to indicate change, as in “change in velocity”. Here are the variable definitions:
And here is development of the equation for change in relative velocity at point r, given an impulse j at that same point:
It's built up from the changes in linear and angular velocities at the two bodies' centers of mass.
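In 2D, with scalar moments of inertia and the perp operator, the development works out as follows. This is a sketch of the standard result, not necessarily the exact notation from my notes:

```latex
% Impulse j applied to body a at offset r_a from its center of mass;
% body b receives -j at offset r_b. The perp operator is
% r^\perp = (-r_y, r_x), and in 2D, r \times j = r^\perp \cdot j.
\Delta v_a = \frac{j}{m_a}, \qquad
\Delta \omega_a = \frac{r_a \times j}{I_a}
\quad \text{(similarly for body $b$, with $-j$)}

% Change in velocity of the attachment point on body a:
\Delta v_{p_a} = \Delta v_a + \Delta \omega_a \, r_a^{\perp}

% Change in relative velocity at the joint:
\Delta v_{\mathrm{rel}}
  = \left(\frac{1}{m_a} + \frac{1}{m_b}\right) j
  + \frac{(r_a^{\perp} \cdot j)\, r_a^{\perp}}{I_a}
  + \frac{(r_b^{\perp} \cdot j)\, r_b^{\perp}}{I_b}
```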
By inverting this, it's possible to solve for an impulse which will produce a desired change in local relative velocity. I'll also need to rotate the coordinate system so that velocities along and across the track are easily separated. As I said before, the along-the-track velocity changes are really like contact constraints so they'll need to be handled slightly differently from the cross-track velocity constraint.
Monday, September 10, 2007
Random Terrain, Joints, Version Control
Terrain Development
I spent some time this week fooling around with my random terrain generator to try and come up with something more varied for driving. I will have to switch to hand-authored terrain soon but want to put it off as long as possible so I can focus on getting the vehicle to feel right.
The terrain generator produces a one-dimensional array of elevation values (since this is a 2D game). It puts in seed values at a large interval (every 256th element, currently), then subdivides by powers of two. The general idea is to subdivide and smooth (using a Loop-type subdivision scheme), then add random offsets which are scaled to the current step size. The subdivision smoothing is very simple: new points that fall halfway between the old points just get the average of their two adjacent old points. New points that fall on the old points get 1/8 of each of the two neighboring old points, plus 3/4 of the old point directly under them.
To make the terrain more interesting, I generate a second height map beforehand which I call the roughness map. When generating the actual height map, I scale the random offsets by the roughness value for that spot. This breaks the map up into smooth, rounded areas and rough, jagged areas.
Here's the code:
const int height_field_size = 1024;
static float height[height_field_size];
static float height_old[height_field_size];

void generate_height_field()
{
    int mask = height_field_size - 1;

    // Generate a height field that represents surface roughness.
    const int initial_roughness_step_size = 32;
    float roughness[height_field_size];
    float roughness_old[height_field_size];
    for (int i = 0; i < height_field_size; i += initial_roughness_step_size)
    {
        roughness[i] = frand();
    }
    for (int step_size = initial_roughness_step_size; step_size > 1; step_size /= 2)
    {
        memcpy(roughness_old, roughness, sizeof(roughness_old));
        for (int i = 0; i < height_field_size; i += step_size)
        {
            float h0 = roughness_old[(i + (height_field_size - step_size)) & mask];
            float h1 = roughness_old[i];
            float h2 = roughness_old[(i + step_size) & mask];
            float h1_new = h0 * 0.125f + h1 * 0.75f + h2 * 0.125f;
            float h3_new = h1 * 0.5f + h2 * 0.5f;
            roughness[i] = h1_new;
            roughness[i + step_size / 2] = h3_new;
        }
    }

    // Generate the actual height field, scaling the randomness by the
    // roughness at each point.
    const int initial_step_size = 256;
    for (int i = 0; i < height_field_size; i += initial_step_size)
    {
        height[i] = grand() * 20.0f;
    }
    for (int step_size = initial_step_size; step_size > 1; step_size /= 2)
    {
        memcpy(height_old, height, sizeof(height_old));
        float range = float(step_size) * 0.1f;
        for (int i = 0; i < height_field_size; i += step_size)
        {
            float h0 = height_old[(i + (height_field_size - step_size)) & mask];
            float h1 = height_old[i];
            float h2 = height_old[(i + step_size) & mask];
            float h1_new = h0 * 0.125f + h1 * 0.75f + h2 * 0.125f;
            float h3_new = h1 * 0.5f + h2 * 0.5f;
            h1_new += range * grand() * roughness[i];
            h3_new += range * grand() * roughness[i + step_size / 2];
            height[i] = h1_new;
            height[i + step_size / 2] = h3_new;
        }
    }
}
I've got an extremely simple random number generator right now since I don't care much about quality. frand() generates uniform values from 0 to 1, and grand() generates Gaussian (i.e. normal-distributed) random values with a mean of 0 and a standard deviation of 1:
float frand()
{
    return float(rand()) / float(RAND_MAX);
}

float grand()
{
    float x1, x2, w;
    do
    {
        x1 = 2.0f * frand() - 1.0f;
        x2 = 2.0f * frand() - 1.0f;
        w = x1 * x1 + x2 * x2;
    }
    while (w >= 1.0f);
    w = sqrtf((-2.0f * logf(w)) / w);
    float y1 = x1 * w;
    // float y2 = x2 * w;
    return y1;
}
The grand() algorithm actually generates two Gaussian random numbers per call; I'm being lazy and throwing out the second one.
I've done a lot of thinking about what kind of hand-authored terrain would work best. Currently I'm leaning toward a subdivision curve (like subdivision surfaces but in 2D). This is because it's easy to author rounded things, and you're not constrained to work at a particular scale. I've thought about a tiled landscape but I don't think that will give me the control I need to get surfaces that are fun to drive. Height fields can't handle overhangs or vertical cliffs; not horrible limitations, but I think I might like to have some cliffs.
Vehicle Development
Regarding the vehicle, I have been experimenting with different combinations of friction, elasticity, torque, and top speed. I am also writing a new joint type for the suspension since none of the joints provided with Chipmunk do exactly what I need. The new joint will constrain the wheel to move along a line segment relative to the vehicle chassis.
Version Control
In other news, the computer I've been using as my version control server died a horrible death. It's amazingly inconvenient. I've been using Perforce because, in general, it's awesome (albeit expensive for more than two users), but I've been having second thoughts about my situation. Since I'm working from a laptop I spend a fair amount of time disconnected from the server, and with Perforce that means you can't check files out to work on them, or diff them to see what you've changed. I may switch to Subversion (since I have experience with that), or something else that is aimed at distributed development. I dislike having version-control-related files strewn about my codebase, though. Subversion (like its older brother CVS) keeps entire duplicate copies of your source files for diffing purposes. If it kept them off in some separate version-control directory I wouldn't mind, but it keeps them in subdirectories of the real directories, so I have to wade around them in Explorer.