Here we go again with the “video game graphics have come a long way” line. Well, they have, leave us alone. They’ve also come a long way really fast: in the last 20 years, video game graphics have improved dramatically. If you’d asked anyone 20 years ago what games would look like today, we’re pretty sure nobody would have guessed they’d look this good. Of course, getting there took innovation. Graphics didn’t improve overnight; it was the effort of many different teams of developers, working on different games, that slowly pushed the boundaries of gaming graphics, and we can see the culmination of that work today. So here are some games that were the first to introduce, or to perfect, the graphics and mechanics we now take for granted.
Star Wars Dark Forces – Looking up and down
Looking up and down doesn’t seem like a big deal, does it? Well, in 1995 it was a huge deal. Dark Forces was among the first FPS games where you could do just that. Compared to the major FPS titles of the time, we’re talking Wolfenstein 3D and Doom, this was a standout: something as simple as tilting the camera was a huge leap for FPS games.
Doom – 3D!
Doom was the first game to convincingly emulate a 3D environment for you to explore. It wasn’t true 3D; the devs achieved the effect with some very neat tricks, but what they accomplished definitely left you feeling like you’d just played a proper 3D game. Even if you returned to the original Doom now, you could still mistake it for one. Under the hood, the levels were flat 2D sector maps whose floor and ceiling heights were “extruded” upward to look 3D. Since every spot on the map had exactly one floor and one ceiling, you couldn’t even have one room on top of another in Doom, but that didn’t stop it from delivering.
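To make the trick concrete, here’s a tiny sketch of the 2.5D idea (our own illustration, not id Software’s actual renderer): a sector stores a single floor and ceiling height, and a wall column’s on-screen height is just its world height shrunk by distance.

```python
# A Doom-style sector: a flat 2D region with one floor and one ceiling
# height. One floor + one ceiling per spot is why rooms can't stack.
sector = {"floor": 0.0, "ceiling": 128.0}

def column_height(screen_h, wall_h, distance):
    """Perspective shortcut behind 2.5D renderers: the on-screen height
    of a wall column shrinks proportionally with its distance."""
    return screen_h * wall_h / max(distance, 1e-6)
```

A wall twice as far away draws half as tall, which is most of the “3D” the eye needs.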
Descent – True 3D
1995’s Descent was the first truly 3D game. Don’t confuse that with the first 3D-looking game; that’s different. What we’re getting at is that in Descent, instead of using 2D sprites, all the models in the game were built from actual 3D polygons. You might ask, wait, wasn’t even Doom 3D? Well yes, but not true 3D: Doom’s levels were extruded 2D maps, not polygonal geometry. Being the first fully polygonal 3D game is a pretty big deal, but that wasn’t all. Descent also offered six degrees of freedom, meaning you could move and aim in any direction, pitching, yawing, and rolling freely, at a time when the norm was simply up, down, left, or right.
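A minimal sketch of what six degrees of freedom means in code (our own toy, not Descent’s engine): the ship’s orientation is a full 3x3 rotation matrix that accumulates pitch, yaw, and roll inputs, rather than a single heading angle.

```python
import math

def rotation(axis, angle):
    """3x3 rotation matrix about axis 0 (pitch), 1 (yaw), or 2 (roll)."""
    c, s = math.cos(angle), math.sin(angle)
    return {
        0: [[1, 0, 0], [0, c, -s], [0, s, c]],
        1: [[c, 0, s], [0, 1, 0], [-s, 0, c]],
        2: [[c, -s, 0], [s, c, 0], [0, 0, 1]],
    }[axis]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def matvec(m, v):
    return [sum(m[i][k] * v[k] for k in range(3)) for i in range(3)]

# The orientation starts as identity and accumulates every pitch, yaw,
# and roll input, so "up" and "forward" are always relative to the ship.
orientation = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
orientation = matmul(orientation, rotation(0, math.pi / 2))  # pitch 90 degrees
forward = matvec(orientation, [0, 0, 1])                     # now points "down"
```

Because rotations don’t commute, pitching then yawing ends you up facing somewhere different from yawing then pitching, which is exactly why a single up/down/left/right scheme can’t express this kind of movement.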
Crusaders of Might and Magic – Dynamic Shadows
Crusaders of Might and Magic had quite a bit of hype surrounding it, but on release it turned out to be a pretty mediocre game. It is, however, the first game to have dynamic (volumetric) shadows: shadows cast and moved by light sources in real time. If your character walked near a light source, the shadows shifted accordingly. You might be thinking, “pfft, big deal”. Well yes, for 1999, a very big deal. At the time, no games around had dynamic shadows, or proper shadows at all; they were either static images or blurry patches. This was one of the features (in addition to being true 3D, also a big deal at the time) that created a lot of the hype surrounding the game. Unfortunately, the game itself was pretty bleh.
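The core idea fits in a few lines (a toy 2D illustration of ours, not the game’s actual renderer): a shadow is the projection of a point away from the light onto the ground, so moving the light, or the character, moves the shadow.

```python
def shadow_x(light, point, ground_y=0.0):
    """Project 'point' along the ray from a point light down onto the
    ground plane. Because the result depends on where the light is,
    the shadow shifts whenever the light or the character moves.
    Coordinates are (x, y) with y pointing up."""
    lx, ly = light
    px, py = point
    if ly <= py:                       # light at or below the point: no ground shadow
        return None
    t = (ly - ground_y) / (ly - py)    # how far along the ray the ground lies
    return lx + (px - lx) * t
```

Slide the light from (0, 10) to (4, 10) and the shadow of the point (1, 5) slides across the floor with it, which a baked static shadow simply can’t do.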
Messiah – Tessellation
GPUs today are crazy powerful; 20 years ago, not so much. Unsurprisingly, games had polygon budgets, meaning the number of polygons objects could have in a game was capped. Exceeding the budget meant the hardware could no longer run the game smoothly. This is where tessellation came in, with a rather weird game called Messiah. Let’s just say tessellation is the only thing the game is remembered for. Messiah’s engine figured out how many polygons the hardware could handle and scaled the detail of its models up or down to match. Nowadays, with GPUs becoming increasingly powerful, we don’t need it for this purpose as much, but 20 years ago it meant you could play some fairly intensive games on weaker rigs. It was also a boon to designers and artists, who no longer needed to make several different versions and resolutions of models and assets.
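As a sketch of the idea (our own toy version, not Messiah’s engine): each tessellation level subdivides every triangle into four, and the engine simply picks the highest level that still fits the hardware’s polygon budget.

```python
def pick_tessellation_level(base_triangles, polygon_budget, max_level=8):
    """Each subdivision level splits every triangle into four, so a mesh
    at level n has base_triangles * 4**n triangles. Return the highest
    level whose triangle count still fits under the polygon budget."""
    level = 0
    while (level < max_level
           and base_triangles * 4 ** (level + 1) <= polygon_budget):
        level += 1
    return level
```

A weak rig with a 10,000-triangle budget gets level 3 (6,400 triangles) from a 100-triangle base mesh; a rig with a 30,000 budget gets level 4 (25,600) from the very same asset, which is exactly why artists no longer needed multiple versions of each model.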
Hitman: Codename 47 – Verlet Integration Ragdoll Physics
The first video game ever to use ragdoll physics was Jurassic Park: Trespasser, but the first game to use them properly would have to be the first Hitman. Hitman used something called Verlet integration. Character models in Hitman (or any game with ragdoll physics nowadays, for that matter) use a skeleton: a set of joints connected by bones. Those bones also act as constraints on the skeleton, so it won’t stretch apart on impact or fly off in some odd direction. Each point on the skeleton effectively stores its own velocity, which meant impacts were felt more realistically, although the result was often more hilarious than realistic. It worked though, at least on some level; it was a successful simulation of ragdoll physics. Ultimately, the techniques used in the first Hitman laid the foundation for the ragdoll physics we use in games to this day, including the latest Hitman games.
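Here’s position Verlet stripped to its core (a 2D toy of ours, not Hitman’s actual code): each point stores only its current and previous position, so velocity is implicit, and “bones” are distance constraints that keep the skeleton in shape.

```python
import math

def verlet_step(points, prev, dt, gravity=(0.0, -9.8)):
    """One Verlet integration step. Velocity is implicit in the gap
    between current and previous position, so knocking a point sideways
    automatically gives it sideways motion on the next step."""
    for i in range(len(points)):
        px, py = points[i]
        qx, qy = prev[i]
        prev[i] = (px, py)
        points[i] = (px + (px - qx) + gravity[0] * dt * dt,
                     py + (py - qy) + gravity[1] * dt * dt)

def satisfy_bones(points, bones, iterations=4):
    """Each bone is (i, j, rest_length). Nudge both endpoints toward the
    rest length so the ragdoll bends but never stretches apart."""
    for _ in range(iterations):
        for i, j, rest in bones:
            ax, ay = points[i]
            bx, by = points[j]
            dx, dy = bx - ax, by - ay
            dist = math.hypot(dx, dy) or 1e-9
            push = (dist - rest) / (2 * dist)
            points[i] = (ax + dx * push, ay + dy * push)
            points[j] = (bx - dx * push, by - dy * push)
```

Run `verlet_step` then `satisfy_bones` each frame and a chain of points falls, swings, and crumples on its own; no animator ever scripted the fall.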
Far Cry – Weather and Procedural, Regenerative Vegetation
Far Cry was one of those games that set the benchmark for a lot of things when it was released. It also introduced a bunch of new graphical features that would go on to become staples of the gaming industry. The game’s vegetation reacted realistically to the environment around it, meaning you could destroy it. Not much of a big deal today, but at the time we were used to trees being indestructible. That’s not all though: you weren’t the only destructive element in the environment, since the vegetation in Far Cry also reacted to the weather. For this to work, every plant needed its own physics and animation. It doesn’t seem like much, but you could see the vegetation gently swaying in the wind. And it didn’t end there: the wind could even break the flora in the game, and the breakage was procedural, not fixed animations scripted to play at set intervals. The damaged plants would then regenerate over time.
Far Cry also simulated fire procedurally, fire which could be affected by the wind. On a windy day in the game, fire would spread farther and catch on quicker. That’s some pretty crazy attention to detail for 2004, stuff we’d now take for granted.
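A toy version of wind-driven fire spread (our own sketch, nothing to do with CryEngine’s actual implementation) fits in a small grid simulation: each burning cell can ignite its vegetated neighbours, and the neighbour lying downwind gets a much better chance of catching.

```python
import random

def fire_tick(grid, wind, base_chance=0.2, wind_bonus=0.6, rng=random):
    """One step of a toy fire sim. Cells: 'F' burning, 'T' vegetation,
    '.' bare ground. 'wind' is a (dr, dc) direction; the neighbour that
    lies downwind of a burning cell is likelier to catch fire."""
    rows, cols = len(grid), len(grid[0])
    catching = set()
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] != 'F':
                continue
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 'T':
                    chance = base_chance + (wind_bonus if (dr, dc) == wind else 0)
                    if rng.random() < chance:
                        catching.add((nr, nc))
    for r, c in catching:
        grid[r][c] = 'F'
```

Crank up `wind_bonus` and the blaze visibly races downwind instead of spreading evenly, the same qualitative behaviour the article describes.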
Red Faction – Truly destructible environments
Destructible environments were already a thing when Red Faction came out. However, Red Faction revolutionised the way you could destroy the environment with what it called “Geo-Mod” technology. Until this point, destructible environments meant that only certain parts of a level were actually destructible, and even then they could only break in a certain way at certain points; the destruction was all scripted in advance. In Red Faction, everything was fair game. The buildings were actual, destructible, physical structures, and the devs put a lot of detail into them to ensure the destruction felt realistic (and satisfying), which they managed to pull off. It also meant you could destroy the same environment in a completely different way the second time around.
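The flavour of Geo-Mod, subtracting arbitrary blast shapes from solid geometry at runtime, can be sketched with a voxel stand-in (Red Faction actually performed boolean subtraction on the level geometry itself, so this is a simplified illustration of ours):

```python
def carve_blast(solid, center, radius):
    """Remove a sphere of 'radius' around 'center' from a 3D voxel grid
    of booleans (True = solid rock). Because the carve happens wherever
    the player aims, no two playthroughs need to break the level the
    same way."""
    cx, cy, cz = center
    r2 = radius * radius
    for x in range(len(solid)):
        for y in range(len(solid[0])):
            for z in range(len(solid[0][0])):
                if (x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2 <= r2:
                    solid[x][y][z] = False
```

Every rocket call carves wherever it lands, rather than triggering one of a handful of pre-scripted break points.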
L.A. Noire and Beyond: Two Souls – Performance Capture
Motion capture already existed by the time L.A. Noire and, later, Beyond: Two Souls came out; for that we can credit the arcade game Virtua Fighter 2. Performance capture, however, is a whole different beast. In L.A. Noire, it pushed the game’s facial animations to a whole new level of realism for the time. As with most things attempted for the first time, though, it was equal parts realistic and hilarious. Beyond: Two Souls, one could say, was the first to somewhat perfect performance capture. Unlike L.A. Noire, which used it only for faces, Beyond: Two Souls employed performance capture for, well, everything: just about all the action you see in the game was acted out by real performers on a capture stage. Talk about dedication to accuracy. Of course, this level of mocap isn’t something everyone can just start doing; while it’s employed to some extent in a lot of games, going all-in isn’t always feasible given the space and equipment you’d need to pull it off.
Quake 3 Arena – Curved surfaces
Before Quake 3, it was reasonable to assume that everything in a game could probably stab you. You see, before Quake 3 Arena came along, it was the norm for everything, environments, enemies, objects and all, to have sharp, angular edges. Quake 3 introduced curved surfaces, built from Bézier patches, and that did quite a few things. For one, things simply looked better and more aesthetically pleasing. Second, curved surfaces made geometry look smoother, and smooth things look nice even when the textures on them aren’t as detailed as on a flat, sharp counterpart. In a way, this meant less effort spent on textures and more on gameplay, a department Quake 3, as we all know, aced. Less effort while looking better is a win-win, not just for Quake 3, but for the future of gaming.
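Quake 3’s curves were biquadratic Bézier patches: 3x3 grids of control points the engine tessellates into smooth geometry. A minimal evaluator (our sketch, not id’s code):

```python
def bezier_quad(p0, p1, p2, t):
    """Quadratic Bezier: B(t) = (1-t)^2*p0 + 2(1-t)*t*p1 + t^2*p2."""
    u = 1.0 - t
    return tuple(u * u * a + 2 * u * t * b + t * t * c
                 for a, b, c in zip(p0, p1, p2))

def bezier_patch(ctrl, s, t):
    """Evaluate a 3x3 control-point grid: curve each row at s, then
    curve the three results at t. Sampling (s, t) over [0,1]^2 at any
    density gives the engine as many or as few triangles as it wants."""
    row = [bezier_quad(r[0], r[1], r[2], s) for r in ctrl]
    return bezier_quad(row[0], row[1], row[2], t)
```

The same nine control points can be sampled coarsely on weak hardware or finely on strong hardware, which is what made curved geometry practical at the time.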
Interestingly, Quake 3 was also the first to tease ray tracing, in a concept demo made waaay before RTX cards were ever a thing. It needed a whole cluster of CPUs to run, something like 30 Pentium 4s. It was still just a concept demo; Quake’s levels weren’t designed with ray tracing in mind, or at least the finished product wasn’t. Still pretty neat.
There are probably (definitely) more games out there that made significant contributions to pushing video game graphics to the next level. Heck, there are games doing it right now. Seeing how far gaming has come in such a short time only makes us more excited for the future of gaming and graphics.