To preface this, I have to mention that I don't write games that use 3D models. I also generally don't write games that have much of anything that could be called physics.
I write games that exist on a virtual board, rigidly organized into cells, be they rectangular, isometric, or hexagonal. Almost always, the game objects exist within the cells. Occasionally, they exist on the common edge between cells.
Because of this, I have long since abstracted my drawing code from my game code. I have noticed a lot of developers, especially early on, tend to think of the sprite or whatever they are using as “The Thing” (game object). It is not the thing. It is how The Thing looks when it is rendered on the screen.
Even in the cases where this actually appears to be so (i.e. in 3D model land, or in 2D sprite shmups with pixel-perfect collision detection), the graphic of The Thing is not The Thing. The geometry used for collision detection is an important aspect of The Thing and key to how it interacts with the world, but the same geometry could be represented by an untextured 3D model (or, more commonly, a hitbox) or by a monochrome bitmask (or a 2D hitbox).
The graphics of The Thing are not The Thing.
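A minimal sketch of what this looks like in code, using hypothetical names (`Thing`, `occupies`): the game object knows its cell position and collision extents, but carries no reference to a sprite, texture, or anything else graphical.

```java
// Hypothetical sketch: a cell-based game object that knows nothing
// about how it is rendered. All names here are illustrative.
class Thing {
    int col, row;     // which board cell the object occupies
    int hitW, hitH;   // collision extents, in cell-local units

    Thing(int col, int row, int hitW, int hitH) {
        this.col = col;
        this.row = row;
        this.hitW = hitW;
        this.hitH = hitH;
    }

    // Game logic lives here -- movement, occupancy, collisions --
    // with no sprites in sight.
    boolean occupies(int c, int r) {
        return c == col && r == row;
    }
}
```

How The Thing looks on screen is somebody else's problem, namely the renderer's.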
So what?
The ramifications of the graphics not being The Thing are profound. If you do it right, you can keep the game code that deals with your game objects entirely separate from the rendering code. Replace the rendering module, and you've got a new way of seeing your game without changing a bit of the back-end code.
This is exactly what libraries like LibGDX do, though it isn't as noticeable since all of their rendering backends use OpenGL. OpenGL itself is another good example of something that does this: it takes abstracted geometry and renders it on screen.
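One way to sketch that separation, with hypothetical names (`Renderer`, `drawThing`, `AsciiRenderer`): the game code only ever talks to a small interface, so swapping the implementation changes how the board looks without touching game state or rules.

```java
// Hypothetical sketch: the game code depends only on this interface.
interface Renderer {
    void drawThing(int col, int row, String kind);
}

// One possible implementation: render the board as plain text.
class AsciiRenderer implements Renderer {
    final StringBuilder out = new StringBuilder();

    public void drawThing(int col, int row, String kind) {
        out.append(kind).append('@')
           .append(col).append(',').append(row).append('\n');
    }
}

// A LibGDX-backed implementation (say, one wrapping a SpriteBatch)
// would implement the same interface; the game code calling
// drawThing() never changes.
```

The game loop iterates its objects and calls `drawThing` with cell coordinates; whether that becomes ASCII art, 2D sprites, or textured quads is decided entirely inside the `Renderer` implementation.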