Ok, I’ll have to admit it – as someone who sits in the relatively uncommon overlap of the history and retrogaming Venn diagram, I probably let myself ponder the question of what exactly retrogaming IS a little too much. Why does it matter? After all, it’s just a label, right? Who cares if the Playstation 2 is retro or not? Just play the damn games, damn it!
Yet the historian in me knows that labels can be incredibly important. Have you ever wondered why the Middle Ages were called the Middle Ages? It’s because the period was mischievously labelled so by Enlightenment philosophers who were determined to draw a direct link between their own period and the glories of Ancient Rome. “Oh, that bit in the middle? We don’t talk about that…”
This piece of intellectual snobbery has coloured the perception of the Medieval period ever since, passing through well-intentioned but disastrous Victorian romanticism right down to twentieth century historiography and beyond. If you didn’t know that the supposed ‘Dark Ages’ (known as the Early Middle Ages today) were actually a time of technological innovation, it’s not your fault. Despite the tireless work of historians to correct the damage done to the reputation of the Medieval era, it turns out that historical labels can be harder to remove than those cheap price stickers so commonly applied by second-hand videogame shops.
Ok, I’m rabbiting on. I suppose the point I’m making is that historical labels matter. They are the lens through which we initially view a subject, so applying the wrong one can distort our view of the wider picture. It can be an act that’s difficult to undo.
What is Retrogaming?
With that in mind, the first question we should ask is just what IS retro gaming? It may be a helpful hashtag for people trying to find you on social media, but I’m not sure it’s actually the most appropriate term. According to the Oxford definition, retro is ‘imitative of a style or fashion from the recent past.’
This is definitely how the term is used in other fields. In its 2001 review of the then-new Chrysler PT Cruiser, Motor Trend used the term retro to celebrate Chrysler’s application of outmoded 1930s design elements:
The look is equal parts modern and retro – it reflects certain design elements of the past, yet never directly copies any of them. Its pointy prow, flat rear deck, and upright windshield recall many designs of the ’30s, including Chrysler’s own revolutionary Airflow. Yet the long wheelbase, short overhangs, flush head- and taillights, and neatly integrated bumpers lend a fresh, up-to-date appeal.
When applied to gaming, this definition would actually invert some common usages. Sonic the Hedgehog 2 would no longer be considered a retro game – at the time it was unarguably cutting edge – but, thanks to its faultless imitation of traditional ’90s pixel art, Sonic Mania most definitely would be.
That’s not to say that current alternatives to the phrase ‘retrogaming’ are more correct, mind you. ‘Classic gaming’ is the most obvious alternative, but the main dictionary definitions of classic fail to encompass the way we currently use the term retrogaming. We might say that Marko’s Magic Football is a classic example of an early ’90s platform game – that is to say, ‘very typical of its kind’ – but few would probably say it was classic in the sense of being ‘judged over a period of time to be of the highest quality and outstanding of its kind.’ In fact, our use of the term ‘classic’ probably falls closest to the original Latin classis, which can simply refer to a group or class.
Indeed, if we look back to the realm of automobiles, we’ll see that – even if adopted – the term ‘classic’ wouldn’t necessarily give us a concrete definition. In the UK we have a remarkably solid definition of classic – any vehicle older than 40 years is exempt from vehicle excise duty – but that doesn’t stop the industry generating articles similar to this one.
Perhaps ‘traditional’ gaming would be a better term to use then. However, tradition has connotations of adhering to a particular set of pre-established rules, and the videogaming eras that would most unquestionably be considered retro are the ones that had the fewest precedents and the widest experimentation, so I’m not sure that fits terribly well either.
I suppose the only conclusion we can draw is that, if the term ‘retro gaming’ were to be retired, there’s no preexisting concept that could fully take its place. I think that, as there is a common sense of what ‘retro gaming’ is, its continued usage is probably fine.
We should acknowledge, however, that the concept of retro gaming is an invention of the community it serves. Unfortunately, this means that the dictionary definition probably doesn’t get us any closer to defining what ‘retro’ encompasses. We all know that retro gaming is ‘the playing or collecting of older personal computer, console, and arcade video games in contemporary times,’ but how old is ‘older’?
In the United Kingdom auto industry, the legal status of a classic car isn’t left open to woolly interpretation. Once 40 years have passed since a vehicle’s first registration, it immediately becomes eligible for exemption from both the normal taxation regime and roadworthiness checks. Should we apply a similar rolling-window classification for retro videogames? There’s a good case that we can. Veteran gaming journalist Jaz Rignall lays out the basis of such a scheme here:
Does this work? Well, a look down the list of Eurogamer’s games of the year 2009 reveals a list that could hardly be considered old and outmoded. Not only have most of the big names on the list – the Forzas, DiRTs and Call of Dutys – received regular yearly (or bi-yearly) updates, but the content of those updates can hardly be said to be as extensive as the changes made in the decade between the original Super Mario Brothers and the three-dimensional frolics of Mario 64. Indeed, a number of titles – such as Arkham Asylum – have even been re-released for the current generation of gaming platforms.
This is in stark contrast to Eurogamer’s games of the year 2000, which – despite originating from just nine years before – reveals a whole slew of intellectual properties that had at best been irregularly updated and, at worst, completely forgotten.
The relative lack of change in gamer appetites between 2009 and 2019 is, I think, indicative of a slowing in the pace of technological upheaval that historically acted as a major catalyst for change within the industry – a pattern we can see reflected in the development of console-based first person shooters.
In September 1995, the original Doom (which by then was already two years old) was ported from the PC to the Super Nintendo Entertainment System. By modern standards Doom would be considered quite a simple game. Enemies and objects were represented by two-dimensional pixel art sprites rather than three-dimensional models and – though not quite the 2D game some on the internet would have you believe – the rendering engine had a number of clear limitations when handling three-dimensional space, such as an inability to place two rooms directly on top of each other.
Despite this relative simplicity, an uncompromised port of Doom was still well beyond the reach of the SNES’ Super FX 2 3D expansion chip. Visually, the SNES port featured plain ceilings and floors that had been stripped of the texturing effects found on the PC. The multiplayer options present in the original version weren’t physically replicable on the SNES, and the gameplay was limited by simplified map geometry and enemy sprites that always faced the player, whatever angle you approached them from.
Fast forward a decade to December 2005 and everything had changed. Released as a launch title for Microsoft’s Xbox 360, Call of Duty 2 is (ironically for a game set in the past) clearly identifiable as a contemporary first person shooter. At a technical level, the game was generations ahead of Doom. Faux 3D had been replaced with a fully three-dimensional world containing incredibly detailed scenery. Pixel art villains with limited frames of animation had been replaced with lifelike human models that could run, take cover and shout. Perhaps more controversially, Doom’s open labyrinths had been replaced with a more linear campaign that led the player through a number of cinematic set pieces, and multiplayer was open to everyone. Crucially, the Xbox 360 version was also broadly identical to the PC version – with the smaller number of players allowed in a session offset by a more level playing field.
Fast forward again to 2015, and the situation was broadly the same. The Call of Duty series may have emerged as top (rather than plucky under-) dog and swapped the battlefields of World War 2 for more contemporary and futuristic surroundings, but any changes to the core gameplay (and the slow but undeniable shift in emphasis from the cinematic single player campaign to the chaos of online multiplayer) had been evolutionary rather than revolutionary in nature. The sense of continuity was so strong that no one was surprised when the decade-old Xbox 360 and Playstation 3 received versions of 2015’s Call of Duty: Black Ops 3. In 2005, a port of Call of Duty 2 for the Super Nintendo or even the ten-year-old Playstation would have been completely bonkers.
I think we can say, then, that though ten years is definitely a large enough window for tastes to change and hardware to evolve (a fact ex-Sony CEO Kaz Hirai discovered the hard way as he awkwardly shouted ‘It’s Ridge Racer! Riiiiiidge Raaacer!’ in front of a massively apathetic audience), it doesn’t work as a universal definition for what makes a game or system retro. Though it definitely works as an interval for the decade around 1995, it simply doesn’t work for other decades: when we examine the noughties, ten years seems way too short, while for the ’80s – a decade in which Sega released three consoles within five years – it actually seems too long.
An alternative (and, it seems, more popular) suggestion is to base the definition on hardware iterations rather than years: once a manufacturer has released two further iterations of hardware, the original console arguably becomes a retro console. As the most current Playstation console at the time of writing was the Playstation 4, this theory would officially make the Playstation 2 a retro console.
This argument definitely has some merit. As we’ve noted, Sega released their first console (the SG-1000) in 1983. They released their first indisputable follow-up – the Mark III (rebadged as the Master System in the West) – three years later, before following up with the Megadrive (Genesis in the US) two years after that. In an era of immense and incredibly rapid change, the SG-1000 had gone from cutting edge to indisputably redundant in the space of around five years.
Better still, the gap between systems actually seems to widen with successive console generations. Though the larger, more detailed sprites, brisk pace and polyphonic audio make it impossible to mistake a Megadrive title for an SG-1000 release, games released for the Megadrive’s successor – dubbed the Saturn – were literally in another dimension to those found on Sega’s SG-surpassing Master System. While the Mark III/Master System struggled even with games that created pseudo-three-dimensional effects by resizing two-dimensional sprites, the Saturn was able to create entire worlds from three-dimensional objects. There could be no argument that, in terms of raw power, the Sega Saturn and its cohorts the Sony Playstation and Nintendo 64 were in a completely different league.
However, once again we find that as we draw towards the present day, this model begins to fall apart. When it comes to two-dimensional performance, for example, we see that the gap between the Dreamcast and the Super Nintendo is already much, much smaller than the gap between the Saturn and the Master System.
If we compare the Street Fighter Alpha titles released on each platform, for example, we see that, though the playing field and sprite sizes are smaller in Alpha 2 on the SNES and the gameplay a little less smooth, the overall experience is largely the same. This is a far cry from the Master System version of Mortal Kombat 2, which featured dramatically cut-down speed, command lists and character roster, and an overall visual experience that only a parent could love.
As we move into the contemporary era, this generational differential shrinks even further. As the original Xbox is both two generations and 14 years older than the unhelpfully named Xbox One, we’d expect there to be a huge difference between the two. Thankfully, we have a very clear point of comparison: Grand Theft Auto 5 on the latest Xbox is set in the same fictional state as Grand Theft Auto: San Andreas on the original machine.
And yes, at first glance there can be no denying that there was a significant visual upgrade between the first and third Xbox consoles. Though notable progress was made across the three Grand Theft Autos released for the Playstation 2 and original Xbox – first simply rendering an original Grand Theft Auto city in three dimensions before adding elements like detailed interiors and aiming mechanics – Grand Theft Autos 4 and 5 took things to another level. Not only did the artwork style change from overtly cartoony to a much more resource-intensive realism, but the game worlds feature elements like functioning bridge toll systems and useable bowling alleys.
With all that said, it is noteworthy that this leap – though impressive – is still smaller than we would expect going by past precedent. Though we noted earlier that the two-dimensional upgrade from the SNES to the Dreamcast was relatively small, when it came to three-dimensional performance they were beyond compare. Even with additional processing chips built into game cartridges, the best 3D experiences available on the Megadrive and SNES were based around basic 3D models placed into sparsely populated worlds. In contrast, the high point on the Dreamcast was probably Shenmue – an RPG set in a highly detailed model of a real Japanese town that featured realistic time progression, dozens of ambient pedestrians and the option to use genuine 1980s weather data.
I wouldn’t want this to be read as a criticism of modern gaming, but more as a commentary on how the nature of videogame evolution is changing. If we look at Grand Theft Auto 5 on the Playstation 4 and Grand Theft Auto 3 on the Playstation 2, it quickly becomes apparent that the core underlying gameplay of Grand Theft Auto 5 is still heavily dependent on the mechanics that powered Grand Theft Auto 3. Everything that can be improved has been improved – including the breathtaking achievement of allowing large numbers of players to play together in a single city – but nonetheless titles like Call of Duty and Grand Theft Auto demonstrate the extent to which big-budget game development has shifted its focus towards improving existing gameplay models instead of creating entirely new ones.
We can conclude, then, that the two-generation rule begins to break down in the modern era in a remarkably similar manner to the ten-year rule. Though rapid technological development in the early console era had a huge effect on the kinds of games that were produced, the examples we’ve seen so far have demonstrated that post-millennium change has been evolutionary rather than revolutionary in nature. If we are to create a uniform rule for determining which consoles count as retro, another approach is needed.
Towards a Fixed Model
It’s pretty clear then that though there’s a seductive logic to the idea that we can use a simple sliding window to decide when a system moves from being an outdated piece of rubbish to exuding an air of retro cool, the changing pace and nature of this progress simply makes it unworkable. Instead, I think we need to see the period we would currently consider “retro” as a distinct historical era, like the High Middle Ages or the golden age of comics. Even if we accept it as an enclosed period, however, where do we draw the boundaries?
The Generation Game
As a charming 19-year-old thread over at rec.games.video.sega demonstrates, people have been grouping consoles into generational cohorts since – at the very least – the turn of the millennium.
Is it a useful concept? Absolutely. There can be no argument that it makes it easier to distinguish between the earliest consoles with hard-coded games and the slightly later machines that were able to load software from dedicated games cartridges, but for our purposes there is a distinct disadvantage. Though it’s a useful scheme for assessing which machines should be treated as cohorts, it doesn’t provide us with clues when it comes to drawing our lines. Once again a different model is needed.
One final – and surprisingly obvious – schema for establishing the boundaries of retro is the one that was in common use at the time: how many bits do you have?
The word ‘bit’ is a contraction of ‘binary digit’, and when we use it to compare computer and console architecture it generally conveys the word length of the processor – effectively the largest chunk of data it can handle in a single operation. Eight binary digits can represent 256 distinct values (0 to 255), so an 8-bit CPU generally juggles numbers no bigger than 255 at a time; 16 binary digits can represent 65,536 values, so a 16-bit CPU can chew through far larger numbers in a single step. Naturally, this meant that the arrival of the 16-bit machines marked a genuine leap in performance, but does the theory hold up beyond this point?
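As a quick illustration of that arithmetic (a throwaway Python sketch, nothing console-specific), the range of values a word can represent doubles with every extra bit:

```python
# The number of distinct values a register can represent doubles
# with every extra bit of word length.
for bits in (8, 16, 32, 64):
    values = 2 ** bits
    print(f"{bits}-bit word: {values:,} values (0 to {values - 1:,})")
```

Running it shows exactly the jump the marketing departments loved: 256 values at 8 bits, 65,536 at 16, and so on.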
On the surface, the answer seems to be yes. A gamer in the ’80s may have bought an 8-bit Sega Master System in 1987, purchased a 16-bit Megadrive in 1990, then upgraded to a 32-bit Saturn in 1995. There can be no denying that each of these systems represented a fundamental upgrade in the technical capabilities we would expect from a games console. On top of that, by using the traditional bits system we don’t have to rely on the year a console was manufactured to work out its cohorts, as the classification depends entirely on the strength of the processor. Though released in the same year as Sega’s Megadrive, for example, the 8-bit Zilog Z80 contained in the GX4000 games console reveals that it was more of a competitor for Sega’s Z80-based Master System than for their Motorola 68000-based Megadrive.
However, even the jump from 8 to 16 bit reveals that this classification system has its flaws. The internal workings of the 68000 CPU in Sega’s Megadrive, for example, are generally 32-bit (numbers up to 2,147,483,647. Blimey!) but face a bottleneck because they have to communicate through a 16-bit bus. Meanwhile, the 65816 inside the Super Nintendo Entertainment System had inner workings that are largely 16-bit but restricted by an 8-bit data bus. Whether we focus on the widest values the chip can handle internally or on the restriction of the bus, it looks like the Megadrive should be considered a generation ahead of the Super Nintendo, even though many would argue that its custom graphics and sound hardware made the latter the more powerful machine.
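To see why the bus matters, here’s a back-of-envelope sketch (my own simplification, not a cycle-accurate model of either chip): any value wider than the data bus has to cross it in several transfers, so the bus caps effective throughput no matter how wide the internals are.

```python
def transfers_needed(value_bits: int, bus_bits: int) -> int:
    """Number of bus transfers to move one value (ceiling division)."""
    return -(-value_bits // bus_bits)

# Megadrive-style: 32-bit internals behind a 16-bit bus
print(transfers_needed(32, 16))  # 2 transfers per 32-bit value
# SNES-style: 16-bit internals behind an 8-bit bus
print(transfers_needed(16, 8))   # 2 transfers per 16-bit value
```

By this crude measure both machines pay the same two-transfer penalty for their widest values, which is part of why a single headline ‘bits’ number tells you so little.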
These issues grew as gaming hardware became more powerful and complicated. Atari’s Jaguar, for example, actually featured the same CPU as Sega’s 16-bit Megadrive, but Atari argued that it counted as a 64-bit machine because of its dual 32-bit graphics processors. Sega quipped in reply that if they calculated the Saturn’s power the same way as Atari, it would be a 112-bit monster machine.
If the mid-90s was already posing difficult questions for the processor bits system, after that point the system falls apart almost entirely. Even today, the computer you’re reading this on in 2019 will almost certainly be based around 64-bit architecture. This is for good reason too, as we won’t be bumping into the 192GB RAM ceiling of consumer 64-bit Windows anytime soon (although I’m sure we will one day).
Though we dub the era of the Playstation 2, Dreamcast and GameCube the ‘128-bit era’, this is more about continuing the convention than making serious commentary on the capabilities of the machines. Where 8-bit generally referred to the overall capability of the machine, “128-bit” machines could generally only use 128-bit integers for a few highly specialised tasks.
Consequently, I think we can say that while great for marketeers, the bits system is too arbitrary for us to use as the basis for modern classification. Intriguingly, however, in failure we may have also found the answer to our question. Though different from the other potential schemes we’ve examined so far, it runs into exactly the same bottleneck: as the 1990s drew to a close, Sega released the Sega Dreamcast.
Definitions: What is Contemporary Gaming?
We can say with certainty that the retro era of video game consoles began in the late ’60s, and the evidence we’ve encountered suggests that it drew to a close sometime around the sixth generation of games consoles – either at its beginning in 1998 or around its close in 2005. With that division in mind, we should probably turn our attention to just what divides one era from the other.
When it comes to the common features shared by contemporary consoles, two obvious candidates spring to mind – internet connectivity and mass storage – features which combined to have a titanic effect on modern gaming.
Though it’s true that both Sega and Nintendo dabbled in satellite/cable delivery with their 4th generation/16-bit machines, before 2005 games software was generally locked to a form of physical media which had to be bought from a shop. Today – thanks to internal hard drives and large capacity SD cards – distribution by physical media is no longer required. Indeed, thanks to gargantuan release-day patches that require downloading before a game can be played, physical media is often something that has more drawbacks than benefits – a fact reflected by 80% of game sales in the UK now being digital.
Though a change in distribution may not seem a huge deal, it has undoubtedly had a knock-on effect on the way games are made. While from a user’s perspective digital storefronts built into the operating system of a console simply mean that there’s no longer any pressing need for us to purchase our games on a disc, for games developers they open up the possibility of distributing their games without a formal publisher.
Now, it would be simplistic and disingenuous to suggest that the indie games scene is anything new – people have been writing their own software since the days of the Amiga and ZX Spectrum. But the growth of accessible payment and delivery methods, alongside the appearance of cheap and relatively wide-reaching social media-based marketing techniques, means that an aspect of gaming that was once relatively hidden has been thrust into the spotlight, becoming a pathway to popular success and a driver of change within the industry as a whole. If you’re the kind of gamer who enjoys tinkering with ‘crafting’ mechanics, we shouldn’t forget that these became ubiquitous in contemporary games of both big and small budgets because of the influence of a small independent game called ‘Minecraft’.
Game patches in their current form also represent a big break from the past. While it’s true that patches first started to appear on the PC in the ’90s (as any bitter Frontier: First Encounters owner could attest), back then the scope of a downloadable patch was extremely limited. Interstate ’76 Gold, for example, simply added 3D card support to the base Interstate ’76 game. Though this meant the patch file weighed in at a paltry-by-today’s-standards 85MB, that equated to a three-hour download on a machine equipped with a 56k modem – adding about £1.80 to your phone bill as well.
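The arithmetic behind that three-hour figure is simple enough to sketch (assuming, generously, that the modem sustains its nominal 56 kbit/s – real phone lines rarely managed that):

```python
patch_megabytes = 85
modem_bits_per_second = 56_000           # nominal 56k throughput, best case

patch_bits = patch_megabytes * 1024 * 1024 * 8
seconds = patch_bits / modem_bits_per_second
print(f"~{seconds / 3600:.1f} hours")    # roughly 3.5 hours, uninterrupted
```

In practice, line noise, dropped connections and shared phone lines would have stretched that figure well past the theoretical minimum.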
In the modern era, the reliable presence of both internet connections and mass storage means that huge, multi-gigabyte game updates are commonplace, and have had a titanic influence on the way games are produced. This can often be for the better – contemporary games often receive free content that would have been paid expansions in the 90s – but it can also mean that games are released with known quality control issues and publishers are incentivised to produce more premium content for a successful existing title instead of developing entirely new software.
Sitting alongside patches and downloads, the wide adoption of online gaming on consoles is another element that is largely exclusive to the modern era. Though it’s up for debate whether the original Xbox and PS2 count as retro machines (more on that later), we should remember that both consoles required users to purchase starter kits in order to take them online. Though PS2 and Xbox games that included online features weren’t exactly uncommon, their cases always had to feature a coloured sash to signify this – a far cry from the current state of play in 2019, where integration with online features is the default.
Away from internet-based disruption, however, the main theme of the modern era is stability and a slowing pace of change. Since Nintendo ceded the technological battleground in 2005 and the Xbox 360 catapulted Microsoft onto level pegging with Sony, no new competitors have made a serious push to enter the hardware space. While the current generation is five years old and still going strong, the preceding generation of consoles had an unprecedentedly long lifespan, with the gap between the Xbox 360 and the Xbox One being almost as large as the gap between the Sega Megadrive and the Dreamcast.
Meanwhile, on the software side, sequels dominated the top 10 best-selling games lists for 2015, 2016 and 2017. Strikingly, Grand Theft Auto 5 appears in all three top ten lists and – according to this article over at DualShockers – joins the Call of Duty franchise in accounting for 50% of the best-selling games on the Xbox One and PS4.
Indeed, even when something genuinely new does appear, it seems to happen in slow motion. As a crop of new battle royale games aims to take a bite out of the (now) traditional online shooter market, we should note that PUBG, the original battle royale, is already two years old, with cultural phenomenon Fortnite not far behind.
Of course, when you consider how the industry has grown over the last 20 years or so, this becomes a lot more understandable. Alongside an exponential growth in the number of team members required to build the basic assets for a game, elements that used to be relatively ancillary – such as script writing, voice acting and game audio – have become increasingly professionalised and resourced. As gamers, we generally want games designers to be more creative with their output, but I think at times we are too quick to forget that one failed experiment can have a huge impact on the financial viability of a studio.
The modern era is arguably one of contradiction, then. We have the most capable set of consoles ever, but games are often quite conservative in their ambitions. As gamers, the combination of digital distribution with service offerings like Game Pass gives us affordable access to more games than ever before, but many of us seem to increasingly rely on a small number of trusted multiplayer games. In an era where interactive storytelling is becoming increasingly developed, competitive multiplayer seems to be taking a bigger and bigger slice of the gaming pie. It’s also an era of relatively homogenised game collections, with each title enjoying a brief period of fame on streaming platforms like Twitch before we inevitably move on to the next.
Definitions: What is Retro Gaming?
If our current gaming era is governed by themes of stability and connectivity, the previous period was an era of instability and disconnect. As it stands, we’re currently in the fourteenth year of an established global console order. In the period between 1983 and 1992, by contrast, we saw Atari’s reign as the dominant console manufacturer come to an end, the meteoric rise of Nintendo and their eventual usurpation by Sega.
And that’s just the US, too – not only did Nintendo fail to gain a strong foothold in most European markets until the Super Nintendo was relatively mature, but the success of NEC/Hudson’s PC Engine in Japan meant the shape of their console industry always looked different to those found in Europe or the US. Though microcomputers in this period were also a much bigger deal in Europe and Japan than they were in the US, the dominant brands differed not just between Europe and Japan, but even from one European country to the next.
The relative instability of the industry in this period both fuelled, and was fuelled by, rapid technological development. With new generations of consoles appearing roughly every five years, games authors were rapidly presented with new opportunities – whether it was the richer, smoother and better-populated worlds of the 16-bit era, or the fully three dimensional environments made possible by the fifth generation of consoles.
This rapid technological development had a huge effect on the kinds of games that were developed throughout the retro era. Though we should note that thematically games haven’t changed as much as we might think – shooting, racing, sports and adventuring were just as prominent on the Atari VCS and the Fairchild Channel F as they are today on the PS4 and Xbox – technological advancements meant these themes could be explored using increasingly complicated rule sets and visual perspectives.
As we can see from the example of the Street Fighter series, no sooner had a genre evolved – in this case the one-on-one fighter – than it could be completely rewritten. Within the space of a generation, the industry moved from games based around a single star character with a handful of commands to titles that supported huge rosters of characters, each boasting large numbers of personalised attacks, bespoke special moves and combination commands that required pin-point accuracy in their timing.
As for the shape of the industry itself, things were different in a couple of directions. For one thing, development was a lot cheaper. Blockbuster space combat sim Wing Commander 3 – a game which included movie sequences starring Mark Hamill and Malcolm McDowell – had a budget of $5 million, which was very impressive for its time. However, even if we correct this up to $8 million to account for inflation, it is still much smaller than the $20–30 million budgets that became common around a decade later – and in an entirely different league to the $140 million that was recently authorised for the development of Bungie’s Destiny.
This relative cheapness when it came to game development was offset by having more limited avenues to market. With games almost always having to be sold as professional-quality releases from a shop shelf, games in the retro era generally required the intervention of a publisher – even when they represented the work of just one or two people.
With that said, the lower financial barrier meant that big publishers could afford to be a bit more experimental. Though it’s true that the retro period did see Electronic Arts begin their cynical schedule of yearly sports game updates, the period also saw them take a punt on outside oddities such as cult classic scare-’em-up ‘Haunting’, and produce some of the most lovingly sympathetic and attentive ports in videogame history.
The retro period, then, is in many ways the opposite of the modern. Where the modern world is characterised by connection (both in terms of online gaming and also release schedules and a shared gamer culture), the lack of internet and greatly staggered release schedules meant that the retro era was one of disconnect. Where our current period is one of general continuity and gradual change, the retro era was fuelled by uncertainty, experimentation and failure. Finally, and perhaps most importantly of all, while many modern games are built around a relatively small number of established genres and styles, the retro era was a comparatively blank slate, with established rule books often being rapidly rewritten as the capabilities of games hardware expanded.
The Troublesome Sixth Generation
With these two rough definitions in mind, it's relatively easy to see why the question of whether the Sixth Generation counts as retro is a contentious one in the worlds of furious YouTube videos and agitated forum debates. By the time the Dreamcast released in 1999, the gaming world was getting smaller. Though the machine released in Japan a year earlier, the English-speaking launches were scattered over a matter of weeks rather than years. In the run up to release, gamers from across the globe could have found out about the new console by searching for information on the likes of Gamespot, IGN and Eurogamer – three websites that simply didn't exist when the previous generation of consoles was unleashed on the world.
On top of that, over the next couple of years those same gamers could have tried out the first competitive online console game – Sega’s Chu Chu Rocket – and downloaded extra content for titles like Sonic Adventure 2. There could be no denying, when looking back, that important thresholds had definitively been crossed.
However, though ahead of its time in some ways, the Dreamcast was very much a product of its era in others. It might have included a modem for online competitive play, but it and the Nintendo GameCube would be the last consoles to include four wired controller ports for local competition. Furthermore, the Dreamcast would also be the last console to place ports of successful arcade games – a major factor in successful console launches from the Atari era onward – at the very centre of its commercial strategy. Given that flagship title Shenmue's $70 million budget sits in the midrange between the $5 million blockbusters of the 90s and the $140 million blockbusters of today, we can say that the sixth generation has a foot firmly in both camps.
However, if we can return to the subject matter of our intro for a moment, we should see that this is not a bad thing. Despite the protestations of enlightenment thinkers, there is no single indisputable date for the end of the medieval period. If you said the end of the Middle Ages was decided by the calendar – by the arrival of the year 1500 – a good many people would agree with you. If instead you answered 1492, when European horizons were broadened by the confirmation of the New World's existence, other people would agree with you. Mind you, if you answered 1453 – the fall of the last remnant of anything resembling the Roman Empire – or 1517, with the posting of Luther's 95 Theses, more people would agree with you there too. The end of the Middle Ages is a question that has no simple textbook answer.
That's not to say that the answer is entirely open, mind you – you would need a lot of strong evidence to argue that the Middle Ages finished in the early part of the 15th century or persisted beyond the middle of the 16th – but it does go to show that a single subject can comprise individual components that developed at different rates and need to be treated with sympathy and a degree of mental flexibility. Personally, I don't think it matters too much whether we group the sixth generation with the modern era or the retro era – provided we understand the reasons pushing it in one direction or the other.
Returning to our slightly cynical and click-baity headline, then, I stand by the claim that – at 14 years old and on the verge of entering its second generation of obsolescence – the Xbox 360 should not be considered a retro console.
Now that isn't to say that it won't become old and obsolete (as concepts go, I suspect both game ownership and physical media will age much quicker than we expect), it just won't become old in quite the same way. From its wireless controllers through to its internet connectivity and plethora of three-dimensional open world titles, I don't think we will ever be in a position to call it similar to the Super Nintendo or the Master System.
I also think that from a historical perspective, it would do us all a power of good to wall off the “retro” period from the modern. One of the problems we have when discussing this subject is that too much of our thinking is based on the computational power of the consoles themselves, and we don’t give enough thought to how the power of these machines interconnected with elements like developments in controller hardware and the wider financial pressures on the industry. The Super Nintendo would have been a very different – and later – console had it not been for the arrival of Sega’s Megadrive. Street Fighter 2 would have been a less compelling game had it been born into a world where controllers only had two attack buttons.
Indeed, today we're blessed with instant access to so many resources that tackle individual threads of gaming history (from the history videos uploaded by the likes of Slope's Game Room and the Gaming Historian right up to Kenny McAlpine's academic history of chiptunes), but you can't tie threads together if they're of infinite length. By acknowledging that the retro era is a fixed rather than expanding universe, I believe we take an important step towards fundamentally understanding the development of games history as a whole.