How does terrain generation work in a game?

The game No Man’s Sky has not yet been released, but I’m curious about the technique used to generate the terrain of its planets. I believe that processing a planet at real scale would be staggering, even more so with terrain that appears to be totally randomly generated, as the game seems to demonstrate.

At 9m45s in the video, notice that beyond a certain distance the planet’s terrain does not appear to be rendered until the player gets close enough: https://youtu.be/ltJqu9778g0?t=9m45s

I can think of two ways this could work:

1 - Before the player enters the planet’s atmosphere, the planet is presented as a sphere without terrain. But when the player enters the atmosphere, the player is actually navigating over flat terrain.

2 - The planet really is a sphere, but the polygon detail increases as you approach.

Even so, I can’t quite understand how the terrain generation technique works, because as we can see, a planet has all possible forms of relief, like plains, plateaus, depressions, and even caves.

  • You need to search for "procedural generation". Games like Elite Dangerous and No Man’s Sky produce their systems, suns and planets from mathematical formulas. Planets can usually be built with several different techniques; one of them, for example, is a "cube with rounded faces", which in effect becomes a sphere without requiring spherical-coordinate calculations everywhere. It is also important to know about LOD (Level of Detail) algorithms, which draw more detail as you approach objects (this also applies to "conventional" 3D games).
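To make the "cube with rounded faces" idea concrete, here is a minimal sketch (in Python, purely for illustration): subdivide each cube face into a grid of vertices, then push every vertex onto the sphere by normalizing it. Real engines use fancier mappings to reduce distortion near the cube edges.

```python
import math

def cube_to_sphere(x, y, z, radius=1.0):
    """Project a cube-surface point onto a sphere by normalizing the
    vector from the cube's center -- the simplest cube-to-sphere mapping."""
    length = math.sqrt(x * x + y * y + z * z)
    return (x / length * radius, y / length * radius, z / length * radius)

# Every vertex of a subdivided cube face lands exactly on the sphere:
p = cube_to_sphere(1.0, 1.0, 1.0)
assert abs(math.hypot(*p) - 1.0) < 1e-9
```

The appeal of this mapping is that all the heavy lifting (subdividing, indexing neighbors, applying a height map) happens on flat cube faces, and only the final projection step involves the sphere.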

  • Here is another technique, good for "board" games but on a spherical world: http://vickijoel.org/hexplanet/ - And for an entire galaxy, the Elite Dangerous system: https://www.youtube.com/watch?v=iTBvpd3_Vqk

  • Note that it is not "random", but procedural: a fixed set of formulas always governs the rules of these universes. One of the most common is "Perlin noise".

  • @Bacco, I would never have remembered this technique of turning a cube into a sphere, but I have seen something about it. Thanks for the tip.

  • To be honest, one of the first things I want to test in Unity is whether you can do that. In Unreal Engine, which I like a lot, I found the terrain part extremely tedious (Unreal was definitely not made for this kind of thing, although it is possible). I still haven’t stopped to analyze Unity in depth, but maybe one day. I’ve been interested in procedural generation ever since I saw the first Elite, trying to figure out how they could fit the coordinates and names of over 2,000 planets on one floppy disk. When the penny finally dropped, I got the idea.

  • To generate cities, these guys did something brilliant: they start from characteristics of various city types (concentric circles, grids, regional density, etc.): https://www.youtube.com/watch?v=NsNs8-S5ygE

  • @Bacco, I’m running tests in Unity right now; my experiment is to apply a randomly generated height map to a cube and then turn it into a sphere.

  • I would suggest you start with LOD and worry about the height map afterwards. The problem is that as you get closer, the number of polygons increases. The height map has to be generated dynamically so that you still have detail when you get very close. I would suggest generating a normal map with some formula and turning it into a height map as the player approaches (while already generating the normal map of the next level).

  • @Bacco, can I add you to share with you the results of the experiment?

  • Since I would end up not giving you proper attention (I’m doing a lot of things at once and have a pile of overdue tasks, haha), I’d rather follow the evolution of the idea calmly at another time; otherwise, besides not giving it proper attention, I would distract myself even more, mainly because it’s a subject I like :)

  • Something I noticed about loading, in GTA V, is how the whole map is always loaded, but at a very low level of detail. After using the free-flight cheat code and falling on the other side of the map, I saw items and textures on screen at horrible quality, loading and updating over time.


1 answer



Okay. Your question is interesting, but it actually contains more than one subject, so I’m going to split the answer into two parts.

Procedural Generation of Content

As @Bacco already explained in the comments, the set of techniques used to create cities, planets, mazes, platforms, music (yes, even music!), in short, any kind of "content" in a game or simulator, is traditionally called Procedural Content Generation. Nowadays it is more common to say simply Content Generation, because not every technique used is strictly "procedural" (in the sense of being a deterministic algorithm).

To see this difference, consider your own example. The planets and their terrains can be created by a simple algorithm, but once created they are fixed. The animals, on the other hand, have complex behaviors after they are created and need to adapt to the actions of players (and even of other computer-created animals). That last part requires somewhat more advanced techniques, which come from Artificial Intelligence.

There are numerous approaches to creating content, and describing them all here would be impossible, but you can get a general idea. One of the classic algorithms is the Game of Life, one of the first cellular automata. It works like this:

  1. The board is a two-dimensional matrix (representing spaces for living/dead cells).

  2. In it, one draws (or the user defines) a set of initial cells (i.e., one marks the cells that are filled/alive).

  3. The game applies a simple set of rules to go from the current state to the next state, in which some cells "died" and others "were born".

  4. If anything changed, the game returns to step 3 and applies the rules again. If nothing changed, the game has reached a state of equilibrium and ends.

The set of rules that the Game of Life uses is:

  • Any living cell with fewer than two living neighbors dies (as if from loneliness or the effects of underpopulation).
  • Any living cell with two or three living neighbors continues to live in the next generation.
  • Any living cell with more than three living neighbors dies (as if from the effects of overpopulation; any resemblance to reality is frightening, hehe).
  • Any dead cell with exactly three living neighbors comes back to life (as if by reproduction in the population).

With a simple algorithm like this it is possible to generate very beautiful effects, such as Gosper’s Glider Gun:

[image: animation of Gosper’s Glider Gun]

Although this particular example never converges (i.e., never "completes" the generation of something), there are ways to ensure convergence: you can limit the number of iterations or the runtime, for example. The end result will be something generated arbitrarily.

It is important to note that arbitrary is different from random. Something chosen arbitrarily may seem random, but there were rules behind each choice. If someone asks you to guess a number from 1 to 10, you may pick any one and believe you had no influence over that choice, but you did. True randomness perhaps exists only in nature. In fact, a better term for content generated automatically by such techniques is emergent (more on that later in this answer).
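This is also the trick behind the first Elite, mentioned in the comments: thousands of planets fit on a floppy disk because nothing is stored, everything is re-derived from a seed. A hypothetical sketch (the syllable list and naming scheme here are invented for illustration):

```python
import random

SYLLABLES = ["za", "lor", "ke", "an", "vi", "tus"]

def planet_name(seed):
    """Derive a planet name deterministically from an integer seed.
    Same seed in, same name out -- nothing has to be stored on disk."""
    rng = random.Random(seed)        # a private generator, seeded per planet
    return "".join(rng.choice(SYLLABLES) for _ in range(3)).capitalize()

# Arbitrary, not random: the "choice" is fully reproducible.
assert planet_name(42) == planet_name(42)
print(planet_name(42))
```

The same idea scales up: a planet's terrain, flora and fauna can all be functions of its seed, so the game only ever stores one number per planet.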

Genetic Algorithms are a similar family of techniques, but essentially you can create content in any procedural way as long as you take care of the rules. The rules are what keep the generated content from being merely random, and they will naturally be more or less complex depending on the problem domain (i.e., the type of game).

The generation of islands in an image, for example, could be done as follows: in a blank image, randomly select some pixels to be "painted" black; then use a clustering algorithm (such as K-Means, explained in another answer of mine) to join the randomly chosen pixels into groups (the potential islands); finally, draw ellipses around those clusters. The rules go into the clustering algorithm, where you define, for example, the desired number of islands.
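A toy version of that island pipeline, with a tiny K-Means implementation (the point count, map size and number of islands are arbitrary choices for the example):

```python
import random

def k_means(points, k, iters=20, seed=0):
    """Tiny K-Means: split points into k clusters (the candidate islands)."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)               # random initial centers
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:                          # assign to nearest center
            i = min(range(k),
                    key=lambda j: (p[0] - centers[j][0]) ** 2
                                + (p[1] - centers[j][1]) ** 2)
            clusters[i].append(p)
        centers = [(sum(x for x, _ in c) / len(c),   # recompute centers
                    sum(y for _, y in c) / len(c)) if c else centers[j]
                   for j, c in enumerate(clusters)]
    return centers, clusters

# Scatter random "seed pixels" on a 100x100 map, then form 3 islands.
rng = random.Random(1)
pixels = [(rng.uniform(0, 100), rng.uniform(0, 100)) for _ in range(60)]
centers, islands = k_means(pixels, k=3)
print([len(c) for c in islands])   # three groups covering all 60 pixels
```

In the real pipeline, the last step would draw an ellipse around each cluster’s points to produce the island outlines.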

Another classic, easy-to-implement example is the dynamic creation of mazes. The English Wikipedia has a fairly detailed description of this problem, with different types of algorithms, such as graph-based ones (the original dots, in blue, serve to create the rooms which, when represented by another graph, in yellow, allow the construction of a path by eliminating redundant edges):

[image: animated diagram of graph-based maze generation]

P.S.: this image is an animated GIF, but it does not loop. You will need to reload the page to see it run again.
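As an illustration of how little code one of these maze algorithms needs, here is a sketch of the depth-first "recursive backtracker" variant (one of the algorithms described on that Wikipedia page):

```python
import random

def generate_maze(width, height, seed=0):
    """Depth-first ("recursive backtracker") maze generation: carve
    passages by walking randomly and backtracking at dead ends."""
    rng = random.Random(seed)
    visited = {(0, 0)}
    passages = set()                   # pairs of connected (carved) cells
    stack = [(0, 0)]
    while stack:
        x, y = stack[-1]
        neighbours = [(x + dx, y + dy)
                      for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                      if 0 <= x + dx < width and 0 <= y + dy < height
                      and (x + dx, y + dy) not in visited]
        if neighbours:
            nxt = rng.choice(neighbours)
            passages.add(frozenset({(x, y), nxt}))   # knock down the wall
            visited.add(nxt)
            stack.append(nxt)
        else:
            stack.pop()                # dead end: backtrack
    return passages

maze = generate_maze(5, 5)
print(len(maze))  # a 5x5 "perfect" maze always has 24 passages (cells - 1)
```

Because the carved passages form a spanning tree of the grid, every cell is reachable and there is exactly one path between any two cells, regardless of the seed.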

Depending on the content to be generated, there are numerous algorithms and options. Music, for example, is commonly based on an approach similar to the Game of Life, in which the musical notes are the cells and the board is the staff of the score (the lines and spaces where the notes are positioned). Terrain generation uses rules about the movement of the game’s characters and objects (how far they reach, where they can go, etc.) and the physical attributes of the fantasy world (for example, in a world similar to the real one, still water always collects in the lower parts of the terrain, and rivers always flow from higher to lower ground). It can also use higher-level rules, especially when cities are involved (buildings need to be more or less accessible according to their function, they need to be built near water sources, etc.).
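Height maps for such terrain are often driven by noise functions, like the Perlin noise mentioned in the comments. Here is a sketch of 1-D value noise, a simpler cousin (true Perlin noise interpolates gradients rather than values), with several octaves summed for fractal detail:

```python
import math
import random

def value_noise(x, seed=0):
    """Smoothly interpolated 1-D value noise in [0, 1]."""
    def lattice(i):
        # Deterministic pseudo-random height for integer lattice point i.
        return random.Random(seed * 1_000_003 + i).uniform(0.0, 1.0)
    i0 = math.floor(x)
    t = x - i0
    t = t * t * (3 - 2 * t)            # smoothstep easing between points
    return lattice(i0) * (1 - t) + lattice(i0 + 1) * t

def terrain_height(x):
    """Sum several octaves: broad hills plus progressively finer detail."""
    return sum(value_noise(x * 2 ** o, seed=o) / 2 ** o for o in range(4))

heights = [terrain_height(x / 10) for x in range(50)]
assert 0.0 <= min(heights) and max(heights) < 2.0   # weights sum to 1.875
```

The same construction extends to 2-D (a height per (x, y)) or 3-D (a density per (x, y, z), which is one way caves can fall out of terrain generation for free).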

A short list I keep of interesting articles about content generation can be found here. It contains mostly material on static content, but it is on a page about AI for games, where you can also find more on the subjects I discuss next.

As for generating content for autonomous agents (which simulate intelligence), there are many more studies, and the topic definitely exceeds the space we have available here. But the following are worth mentioning:

  • Steering Behaviours. Steering Behaviours (the term is usually left untranslated) are rather simple algorithms for implementing movement control in games. It all started with Reynolds’ work, but the best place to learn is this Game Development article (in English). The idea is that, using simple vector math and a few physics concepts, you can make one object or character chase another, flee, avoid obstacles, arrive (like a vessel docking at a port), or even simulate flocking behavior (so-called Boids). Flocking (which you see in schools of fish, flocks of birds, and even humans during subway rush hour, hahaha) can be simulated with three simple behaviors continuously performed by each individual in a group: separation (moving away from neighbors that are too close), alignment (turning toward the average heading of the nearest neighbors), and cohesion (moving toward the midpoint of the nearest neighbors).

  • Subsumption Architecture. The Subsumption Architecture is widely used in robotics. It stems from Brooks’ work, in which he argues that animals can display complex behaviors without necessarily having to represent and reason logically about the world. The basic idea is that the agent’s behaviors are programmed in hierarchical layers that map perceptions to actions in an intelligent and adaptive way. Each layer implements one level of adaptation, and the hierarchy reflects the importance of each behavior. One layer could be "foraging": it could be active at a given moment, yet be suppressed (or "subsumed") when a more important layer is activated, something like "fleeing from a predator". The layers receive all perceptions and work in parallel, generating the outputs used by the agent to decide its actions. Although essentially reactive, this architecture is quite interesting even for more complex behaviors, and it has already been used in strategy games such as Diplomacy.

  • BDI Architecture. The BDI (Belief-Desire-Intention) architecture is a symbolic model that uses the concepts of beliefs (what the agent knows about the world, usually updated from perceptions and from the results of its actions), desires (what the agent needs or wants to achieve, its design objectives), and intentions (objectives similar to desires, but with a more "practical" character, translated into execution plans). Execution plans are concrete instructions (or steps) that can be implemented in code. This approach reaches much further than the previous ones, and it has already been used to simulate an AI playing board games and even to dynamically create narratives in digital games.
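To give a flavor of the first item, here is a minimal sketch of Reynolds-style "seek" steering (the max_speed and max_force parameters are illustrative values, not from any particular engine):

```python
import math

def seek(pos, vel, target, max_speed=2.0, max_force=0.25):
    """Steer the current velocity toward the target, limited by a
    maximum steering force, which gives smooth, lifelike turns."""
    dx, dy = target[0] - pos[0], target[1] - pos[1]
    d = math.hypot(dx, dy) or 1e-9
    desired = (dx / d * max_speed, dy / d * max_speed)  # full speed at target
    steer = (desired[0] - vel[0], desired[1] - vel[1])
    s = math.hypot(*steer)
    if s > max_force:                                   # truncate the force
        steer = (steer[0] / s * max_force, steer[1] / s * max_force)
    return (vel[0] + steer[0], vel[1] + steer[1])

# The velocity bends a little more toward the target on every frame.
vel = (2.0, 0.0)                 # initially moving along +x
for _ in range(30):
    vel = seek((0.0, 0.0), vel, (0.0, 10.0))
print(vel)                       # now heading along +y, toward the target
```

"Flee" is the same computation with the desired vector negated, and the Boids behaviors are just weighted sums of steering vectors like this one.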

As I said earlier, it makes sense to call it just Content Generation because, as in your example, the behavior of intelligent computer-controlled characters also needs to be created dynamically. Still, when you see the term "Procedural Generation" around, it commonly covers only fixed elements. Even where characters are involved, there are fixed elements that need to be created: the game Black & White, for example, used genetic algorithms to determine which objects each creature can eat (something that does not necessarily change during a play session) and even the characteristics of the different creatures. This distinction is important for the next part.

Techniques of Performance Improvement

With all this in mind, it is important to note that content creation is not always carried out dynamically while the game is being played. Otherwise, the processor would be busy with just that all the time and could not handle anything else. Much of the content generation is done before the game runs, particularly while the game is being built, but also in an initialization step when the game needs variety on every new run.

The planets you see in the video you referenced are almost certainly models built before that moment. But the rendering difference you observed so well (which is much more visible in the gradual appearance of the islands at the 0:40 mark) is due to an approach (or technique) aimed at squeezing even more runtime performance out of the game, and it does not necessarily have anything to do with procedural content generation.

The best example is a racing game. Imagine a simple racing game in which the track is lined with coconut trees. The trees in the distance appear small, so it makes no sense to place a full three-dimensional model there; a low-resolution two-dimensional image (a flat billboard) is enough. First, the player will not pay attention to that tree, because his attention is on the car and its surroundings. Second, the car will take a while to get there, so not even the calculations needed to check for a collision (whether the car hit the tree) have to be performed yet. The simple image suffices.

As the car approaches, the game code decides to swap this image for a higher-resolution (still two-dimensional) image with more detail, purely for better visuals. In fact, this "heavier" image (it occupies more memory, takes longer to load, etc.) could already have been loaded in another thread while the game was using the simpler one, keeping the game running smoothly without those tedious "loading..." screens. When the car gets closer still, the game places a 3D model of the tree, but without a collider (colliders are internal structures, invisible to the player, that aid in calculating collisions between objects). After all, the car is still far away, and it makes no sense for the game to keep running collision tests for something that cannot occur. Finally, when the car really gets close to the tree, the game adds a collider to the three-dimensional model, since there is now a chance of a collision that will need to be detected.
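The distance-based decisions in this coconut-tree example boil down to a simple switch. A sketch (the thresholds are invented, not taken from any real engine):

```python
def pick_lod(distance):
    """Choose a representation by distance, like the coconut-tree
    example: (representation, collider enabled)."""
    if distance > 500:
        return ("low-res billboard", False)    # flat 2-D image, no collider
    if distance > 200:
        return ("high-res billboard", False)   # sharper image, still 2-D
    if distance > 50:
        return ("3-D model", False)            # real mesh, collider still off
    return ("3-D model", True)                 # close enough: enable collider

for d in (1000, 300, 100, 10):
    print(d, pick_lod(d))
```

In a real engine the swap would also be hysteretic (different thresholds for moving closer vs. farther) to avoid visible "popping" right at a boundary.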

This technique can involve any kind of dynamic change or improvement. Even 3D models can exist at different levels of detail and be swapped according to the player’s focus. In the case of your game, it can also mean swapping textures for something more detailed, and only building the full 3D terrain model when the player (the ship) is close enough. There are countless other tricks, and game designers commonly work with programmers to find creative solutions. Instead of a "wait, loading" screen, the game Batman: Arkham Asylum, for example, has a scene in which the character (Batman! Always be Batman! heh) enters a building with a large open area. In the center, atop a cabin, the Joker delivers an eloquent speech. Besides its role in the game’s story, this speech also serves to draw the player’s attention to the center of the scene while the textures on the sides and, mainly, behind the avatar are being loaded (a misdirection technique that magicians know inside out).

Final Considerations

Dynamic content generation aims to enable the production of diverse, varied, even unexpected content for a game. In a way it provides variety and avoids sameness, but it is not necessarily about performance tricks. Some older games, like many Atari titles, used automated content generation simply to produce new levels, but this has the problem of not guaranteeing that every level is actually fun. Modern games have level designers who use content generation tools as support, while ensuring through manual work that each level is fun.

That is, it is worth noting that game designers usually do not like fully dynamic, automated adaptation in games, because it takes away their control over the player experience.

Still, there is a growing effort in industry and academia to create vast games (commonly called "open world" games) with great variety, or with narratives and characters that change as the player acts in the game. That kind of approach is called emergent gameplay (in the sense that it emerges, that it arises). If you are interested, a fantastic book on this subject, which covers a bit of everything I explored in this answer (and more!), is Emergence in Games (no Portuguese translation yet, as far as I know).

[image: cover of the book Emergence in Games]
