Why in OpenGL do we need to transform all objects to render them?


I’m a beginner in OpenGL and I’m having some difficulty assimilating some concepts. I’m reading the book Mathematics for 3D Game Programming and Computer Graphics, 3rd Ed., in the part where the author explains the various coordinate systems that OpenGL uses to describe objects.

From what I understand there are 5 coordinate systems, namely:

  1. Object Space: determines the coordinates of the vertices of an object relative to that same object. Each object has its own object space.
  2. World Space: determines the coordinates of all vertices of all objects with respect to the coordinates of the "3D world". There is a single World Space.
  3. Camera Space: from what I understand OpenGL is only able to render objects from World Space that fall inside a cube limited to the range [-1, 1] on the x, y and z axes. This cube is also in World Space. Therefore, in order for our objects to be rendered, we need to transform them so that they "enter" the cube.
  4. Homogeneous Clip Space: after we transform our objects into Camera Space, we need to project them onto a plane (I don’t know which one) and discard the objects whose projected coordinates fall outside the range [-1, 1]. We call this projection space the Homogeneous Clip Space. I also could not understand what this "homogeneous" means. Is it at this stage that the z coordinates of the vertices go to the Z-Buffer?
  5. Window Space: after we project our objects into Homogeneous Clip Space, all remaining vertices are in the interval [-1, 1] and need to be mapped to the screen, in particular to the Viewport previously defined. (A sketch tracing a single vertex through these spaces follows this list.)
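To make sure I follow, here is a minimal sketch of how I picture a single vertex passing through these spaces. It uses the GLM math library purely for convenience (OpenGL does not require it), and the matrices, the 800x600 viewport and all numbers are made-up example values:

```cpp
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <cstdio>

int main() {
    glm::vec4 objectPos(1.0f, 0.0f, 0.0f, 1.0f);                 // Object Space

    glm::mat4 model = glm::translate(glm::mat4(1.0f),
                                     glm::vec3(0.0f, 0.0f, -5.0f));
    glm::vec4 worldPos = model * objectPos;                      // World Space

    glm::mat4 view = glm::lookAt(glm::vec3(0.0f, 0.0f, 3.0f),    // camera position
                                 glm::vec3(0.0f, 0.0f, 0.0f),    // looking at the origin
                                 glm::vec3(0.0f, 1.0f, 0.0f));   // up vector
    glm::vec4 eyePos = view * worldPos;                          // Camera Space

    glm::mat4 proj = glm::perspective(glm::radians(45.0f),
                                      800.0f / 600.0f, 0.1f, 100.0f);
    glm::vec4 clipPos = proj * eyePos;                           // Homogeneous Clip Space

    glm::vec3 ndc = glm::vec3(clipPos) / clipPos.w;              // perspective divide -> [-1, 1]

    // Viewport transform (what glViewport(0, 0, 800, 600) sets up) -> Window Space.
    float winX  = (ndc.x * 0.5f + 0.5f) * 800.0f;
    float winY  = (ndc.y * 0.5f + 0.5f) * 600.0f;
    float depth = ndc.z * 0.5f + 0.5f;                           // value stored in the Z-Buffer

    std::printf("window: (%.1f, %.1f), depth: %.3f\n", winX, winY, depth);
}
```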

The transformations from one space to the next are done with matrices. The matrix that transforms Object Space into World Space is called the Model Matrix. The matrix that transforms World Space into Camera Space is called the View Matrix. We can create a single matrix that takes Object Space to Camera Space by multiplying the Model Matrix by the View Matrix. We call the result the Model-View Matrix and the transformation given by it the Model-View Transformation.
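For concreteness, a small sketch of that composition (GLM again; with the column-vector convention it uses, the combined matrix is written View * Model, and the helper names below are made up):

```cpp
#include <glm/glm.hpp>

// Build the Model-View Matrix once per object and reuse it for every vertex,
// instead of applying the Model and View matrices separately.
glm::mat4 makeModelView(const glm::mat4& model, const glm::mat4& view) {
    // One matrix-matrix multiply per object per frame (column-vector order).
    return view * model;
}

glm::vec4 toCameraSpace(const glm::mat4& modelView, const glm::vec4& objectSpaceVertex) {
    // Then a single matrix-vector multiply per vertex.
    return modelView * objectSpaceVertex;
}
```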

Image describing the transformations between the coordinate systems. Image 1.3, taken from the book cited above.

I wrote down everything I could understand so that you can correct anything I said wrong, or reinforce some concept.

My main question is: why does OpenGL, instead of applying all these transformations to the objects, not just let us position our objects in World Space, set a camera in that same space, and transform only the camera? Isn’t it a loss of performance to do this many transformations on the objects every frame?

Anyway, that was my question. I urge you to correct me if I said something wrong.

1 answer


In clip space the coordinates have to be between -1 and 1, but we can set up a range like 0 to 100 and OpenGL (through the projection) converts it to clip space automatically. The Z-Buffer is generated in window space.
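As a rough illustration (a GLM sketch with arbitrary example numbers): setting up a 0-to-100 range really means building a projection matrix that squeezes that range into [-1, 1], and the value that ends up in the Z-Buffer is the window-space remapping of the resulting depth:

```cpp
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <cstdio>

int main() {
    // "We set a range like 0 to 100": left/right, bottom/top and near/far planes.
    glm::mat4 proj = glm::ortho(0.0f, 100.0f,   // x range
                                0.0f, 100.0f,   // y range
                                0.0f, 100.0f);  // near/far (depth range)

    glm::vec4 eyeSpace(25.0f, 75.0f, -50.0f, 1.0f);   // a point inside that range
    glm::vec4 clip = proj * eyeSpace;
    glm::vec3 ndc  = glm::vec3(clip) / clip.w;        // w == 1 for an orthographic projection

    // Window-space depth (default glDepthRange(0, 1)): this is the value the
    // Z-Buffer actually stores, i.e. depth is generated in window space.
    float depth = ndc.z * 0.5f + 0.5f;

    std::printf("ndc: (%.2f, %.2f, %.2f), depth: %.2f\n", ndc.x, ndc.y, ndc.z, depth);
}
```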

Object space gives the coordinates of the object relative to its own origin. If we want to position this object in world space we apply a translation to the object.
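For example (a GLM sketch with made-up values; rotation and scale compose the same way):

```cpp
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Model matrix: takes the object's own coordinates (object space) and
// places the object somewhere in world space.
glm::mat4 model =
      glm::translate(glm::mat4(1.0f), glm::vec3(10.0f, 0.0f, -3.0f))                  // position in the world
    * glm::rotate(glm::mat4(1.0f), glm::radians(90.0f), glm::vec3(0.0f, 1.0f, 0.0f))  // optional orientation
    * glm::scale(glm::mat4(1.0f), glm::vec3(2.0f));                                   // optional size

// worldVertex = model * objectVertex;
```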

The Projection Matrix applies perspective to the 3D coordinates and converts them to clip space. This is also where we define the screen aspect ratio and the lens aperture (field of view).
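A typical way to build it (a GLM sketch with example parameter values):

```cpp
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

glm::mat4 makeProjection(int windowWidth, int windowHeight) {
    return glm::perspective(glm::radians(45.0f),                      // lens aperture (vertical field of view)
                            (float)windowWidth / (float)windowHeight, // screen aspect ratio
                            0.1f, 100.0f);                            // near/far planes
}
```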

The position of an object can be different in each frame. We can apply transformations not only to the camera but also to the objects. In fact, the camera is always in the same place and the world moves around it. The advantage of OpenGL is that we can define these parameters for any perspective and take advantage of the GPU.
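One way to picture that (a GLM sketch with a made-up helper name): the View Matrix is just the inverse of wherever we "placed" the camera, so applying it to the whole world is equivalent to moving the camera, and rebuilding these matrices every frame is cheap because the per-vertex work runs on the GPU:

```cpp
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// The camera never really moves: we place it conceptually at cameraPos,
// then invert that placement and apply the result to the whole world.
glm::mat4 viewFromCamera(const glm::vec3& cameraPos) {
    glm::mat4 cameraPlacement = glm::translate(glm::mat4(1.0f), cameraPos);
    return glm::inverse(cameraPlacement);
}

// Each frame: modelViewProjection = projection * viewFromCamera(camPos) * model;
```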
