Advent Calendar 11: Into The ~THIRD DIMENSION!~

Greetings, and welcome to Advent Calendar 2022! This year we're being self-indulgent and rambling about video games.

As usual, the Advent Calendar is also a pledge drive. Subscribe to my writing Patreon here by December 15th for at least $5/mo and get an e-card for Ratmas; subscribe for $20/mo (and drop me a mailing address) and you'll get a real paper one!

I hope you're all having a happy winter holiday season. Let the nerd rambling commence!

Video games from the very start have made creative use of two dimensions. The ancestor of the video game as we know it was a Pong-like amusement that involved creative misuse of an oscilloscope, a device meant specifically for graphing electrical signals on an XY plane. But almost as soon as they had worked out how to draw pictures on a flat surface, game makers set their sights on the magical, mystical THIRD DIMENSION.

The first attempts actually mimicked the nascent processes of computer animation. The earliest way to get computers to understand the idea of 3D objects was to define them as a series of flat polygons, attached to each other at edges and vertices at specific angles. The classic examples were still renders of the Platonic solids (tetrahedron, cube, octahedron, dodecahedron, icosahedron), along with a model of a Melitta teapot that was affectionately dubbed the "Utah teapot" or "teapotahedron". Arcade and console games, with their limited computing power, could handle a handful of vertices, but couldn't keep up with filling in the faces with color or shading -- the result was a distinctive 'wireframe' look, which was emulated as a 'high tech' effect in movies like Star Wars, Star Trek II, and TRON. A lot of famous wireframe games, going back as far as the Star Wars arcade game in 1983, are genuine 3D renders, just extremely simplistic ones. Many of them made use of vector graphics, where the electron beam in the monitor draws directly from point to point on the screen, rather than having to draw the entire screen on every field, line by line, top to bottom.
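If you want a feel for what those early wireframe renderers were actually doing, here's a minimal sketch -- in Python rather than period assembly, with made-up camera numbers, and not any particular game's code. A cube is stored as vertices plus the edges connecting them, and each vertex gets perspective-projected onto the screen plane; a vector monitor would then sweep the beam straight between each projected pair.

```python
# A cube as a polygon mesh: 8 vertices and the 12 edges connecting them.
CUBE_VERTICES = [
    (x, y, z)
    for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)
]
CUBE_EDGES = [
    (a, b)
    for a in range(8) for b in range(a + 1, 8)
    # vertices that differ in exactly one coordinate share an edge
    if sum(1 for i in range(3)
           if CUBE_VERTICES[a][i] != CUBE_VERTICES[b][i]) == 1
]

def project(vertex, camera_z=-5.0, focal=200.0):
    """Perspective-project a 3D point onto the 2D screen plane.

    Points farther from the camera get divided by a larger depth, so
    they land closer to the screen center -- that one division is the
    whole trick behind the wireframe look.
    """
    x, y, z = vertex
    depth = z - camera_z          # distance from the camera
    return (focal * x / depth, focal * y / depth)

# The wireframe: one 2D line segment per edge.
lines = [(project(CUBE_VERTICES[a]), project(CUBE_VERTICES[b]))
         for a, b in CUBE_EDGES]
```

Rotating the cube is just a matter of transforming the vertices before projecting -- the edge list never changes, which is why a handful of vertices was all the hardware needed to track.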

Simulating a three-dimensional space with color and shading on a 2D plane is an interesting problem, and largely makes use of cognitive tricks like size cues and relative motion. As long as the viewport is restricted, you can use some fairly simple transformations to mimic depth. Most of these tricks have been widely known and used since the fad for linear perspective art in the Renaissance -- what they were waiting for, with respect to video games, was for computing technology to advance to the point where these transformations could be applied to the picture in real time. Gamers today are snobs who would refuse to open their actual eyes if life ran at any less than 60 frames per second, but realistically you can give the viewer at least the idea of movement in 3D space at 10-15 FPS, as long as your graphics aren't mud. Coding Secrets below gives a pretty concise explanation of how you can create the illusion of movement in space just by careful color manipulation.
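The color-manipulation trick in question is palette cycling: the frame is drawn once, with each concentric band of the "tunnel" using a different palette index, and then only the palette entries move from frame to frame. A rough sketch, with invented colors rather than any game's actual data:

```python
# A pre-drawn "tunnel" uses palette indices 0..3 for concentric rings.
# Instead of redrawing any pixels, we rotate the colors through the
# palette each frame: each ring takes on the color of the ring behind
# it, which the eye reads as the rings rushing toward the viewer.

# Hypothetical 4-entry ring palette (RGB tuples), darkest = deepest.
palette = [(32, 32, 32), (64, 64, 64), (128, 128, 128), (255, 255, 255)]

def cycle(palette):
    """One animation frame: shift every color down one palette slot."""
    return palette[1:] + palette[:1]

frame1 = cycle(palette)
# frame1[0] is now the color that used to sit in slot 1 -- the pixels
# on screen never changed, only the colors the indices point at.
```

Because the per-frame cost is rewriting a few palette registers rather than thousands of pixels, this runs effortlessly even on hardware that could never redraw the whole screen in time.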

Mickey Mania was not the only game to use the technique, which was achievable to some extent on any console that combined tile-based backgrounds and sprites. Slightly more sophisticated on the computing end was the Super Nintendo's infamous "Mode 7". While this was still not a true 3D rendering system, it allowed the programmers to automate the process of those perspective-mimicking distortions, using affine transformations. Retro Game Mechanics Explained gives a good overview, if you dig the technical language.
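A back-of-the-envelope sketch of the idea -- not the SNES's actual registers, and the numbers here are invented: Mode 7 maps each screen pixel into the background tilemap through a 2x2 matrix plus a center offset, so rotation, scaling, and shearing all fall out of the same multiply. The famous "perspective floor" comes from swapping in a new matrix on every scanline.

```python
def mode7_sample(sx, sy, a, b, c, d, cx=128, cy=112):
    """Affine-map screen pixel (sx, sy) to the tilemap coordinate it
    displays, using matrix [[a, b], [c, d]] about center (cx, cy)."""
    tx = a * (sx - cx) + b * (sy - cy) + cx
    ty = c * (sx - cx) + d * (sy - cy) + cy
    return tx, ty

def floor_step(sy, horizon=80, k=64.0):
    """Per-scanline zoom for the perspective-floor trick: scanlines just
    below the horizon step across a huge stretch of map per pixel, while
    lines at the bottom of the screen (nearest the viewer) barely move.
    The falloff curve here is hypothetical; real games tuned tables by hand."""
    return k / max(sy - horizon, 1)

# On real hardware the matrix is rewritten between scanlines (via HDMA);
# here we just compute one row's worth of tilemap coordinates.
row = [mode7_sample(sx, 200, floor_step(200), 0, 0, floor_step(200))
       for sx in range(256)]
```

Note that the transformation is still strictly flat -- every pixel on a scanline shares one matrix -- which is why Mode 7 can tilt and spin a plane convincingly but can't put anything on top of it without sprites.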

True volumetric 3D would have to wait for better computing power, and its popularity had to wait for a bunch of geeks who wanted to shoot Nazis from a first-person perspective.
