Recently we looked at the Sega Saturn and whether it was killed prematurely by Sega. Though the Saturn’s renewed popularity these days means we regularly ask what could have made the console a success, there’s one question we should probably ask first: could the Sega Saturn have been a success at all? On the surface this seems a pointless question. After all, the Saturn’s immediate predecessor was a huge success in the West and its main opposition (the Sony PlayStation) was a global smash. Why should the Saturn be any different? It had both the games and a reasonable level of sales, after all.
However, we should note that Sega’s Saturn was far from being the only failure of the first 32-bit generation. The failure of Nintendo’s N64 wasn’t quite as dramatic as the Saturn’s, but a failure is still a failure, regardless of whether you flip your vehicle into a ditch or quietly puncture all four tyres and glide silently to a stop.
The Saturn’s Mirror
Unconvinced? Well, it’s true that the N64’s decline in the West was relatively minor – the N64 managed to retain around 90% of the Super Nintendo’s userbase in the United States and even managed to grow Nintendo’s userbase in Europe – but things were very, very different in the company’s traditional Japanese stronghold.
Up until the release of the N64, Japan was a territory essentially owned by Nintendo. Nintendo had sold around 20 million units of the original Famicom (the Japanese NES), and the Super Famicom managed to attain a similar level of dominance (17 million units sold) despite facing tough competition from Sega and Hudson/NEC. Indeed, fancy-schmancy PlayStation and Saturn games might have been making the biggest media splashes in 1995 and 1996, but Super Famicom games were still quietly shifting thousands of units and regularly charting highly in Famitsu Weekly.
Though we can’t say that the N64 was a Gizmondo or a Virtual Boy, in Japan it still represented a sizable disaster. By selling just 5.5 million units across its entire life span, the N64 saw Nintendo lose the vast majority of its Japanese userbase. They may have sold an extra million consoles in Europe, but this paled next to the 12 million units lost at home. This is an almost perfect mirror of the Sega Saturn, which had the strong domestic sales the N64 so desperately needed but collapsed in the overseas markets where the N64 was able to hold its ground.
The effect of the N64’s failure may not have been as dramatic as Sega’s retreat from the console business within a generation, but it does seem to have irreparably changed the way Nintendo was perceived. Before the N64 they were Nintendo, market leaders. Since the N64, we’ve had others as market leaders and Nintendo ruling their own domain somewhere off to the side – a perception that even 2006’s smash-hit Wii console was unable to alter.
If we accept that the N64 performed massively under expectations (sorry, N64 fans!), that means that two of the three central 32-bit consoles failed – and the two from the most experienced console manufacturers, to boot. It actually gets more interesting than that, mind you. There weren’t just three 32-bit consoles in play – we have to say there were at least five. When we talk about the 32-bit generation there seems to be a bit of a mental block when it comes to the Atari Jaguar and the 3DO. Looking back we can probably justify this: not only did they both release early – at least a year before the three ‘big’ consoles – but they also failed to make a significant dent in the market. By the end of 1996, European PlayStation sales alone had already surpassed the worldwide sales of both the 3DO and the Jaguar. It’s easy to suggest that they were never serious competition for Sega and Nintendo.
As tempting as it is to lump them together as the ‘also-rans’, the 3DO and the Jaguar came from very different places. Atari and the Jaguar most resemble what we would probably expect from a failing console. As a company, Atari never fully recovered from being at the center of the young US console industry when it crashed in 1983. Having had its consumer wing split from the coin-op side and sold to Commodore founder Jack Tramiel, the new Atari Corporation didn’t have much luck. Their 7800 console was outgunned by the Sega Master System and the Nintendo Entertainment System, while their Lynx handheld was massively outsold by the weaker (but significantly cheaper) Nintendo Game Boy (who would have guessed that weight and battery life would be key considerations for a handheld?)
Considering that their Sixteen/Thirty-Two computer (the Atari ST) struggled to get into US retailers and played a distant second fiddle to the Commodore Amiga in Europe, Atari already seemed to be in the digital last chance saloon when Martin Brennan of Cambridge-based contractors Flare convinced them that they could build a 3D-capable 32-bit console on the cheap.
3DO, on the other hand, was a new company on a very different trajectory. In 1991 Electronic Arts founder Trip Hawkins was on a roll. Having founded the outfit in 1982 as a company that treated its creators as artists and its games as art (yes, that Electronic Arts – yes, really), he’d managed to successfully weather the storms of the mid-80s by focusing on the home computer market, and had crowbarred EA a space in the console sphere by blackmailing Sega into letting them have a distinct library-within-a-library on the Megadrive.
Looking for a new challenge – and realizing his card was probably marked with the existing console manufacturers – Hawkins teamed up with Commodore Amiga architect Dave Needle and Atari Lynx designer R.J. Mical to form The 3DO Company. Their intent wasn’t just to build a machine that would act as a next generation console; they wanted one that would serve as an all-in-one multimedia entertainment device. “This is not just kids’ stuff, it’s not just for nerds and hobbyists,” Hawkins was reported as saying by the LA Times. “This is something that will appeal to the masses.”
As we can see from the media reaction at the time, the contemporary prognosis for the two consoles was very different. By 1993, 3DO had announced impressive partnerships with Warner and Panasonic and had been selected as Time’s product of the year. Atari, meanwhile, had managed to let the Jaguar play second fiddle to their other products: “Atari have been announcing the Jaguar since , but it was overshadowed by the release of the excellent though ill-fated Falcon,” C&VG unhelpfully wrote in October 1993.
Of course, fast forward a few years and it had all gone horribly wrong for almost everyone. 3DO only managed to sell somewhere between 2 and 6 million units of their platform worldwide and – after initially planning an ‘M2’ upgrade to make the console more competitive – ended up spinning the M2 into a separate machine design before selling the whole thing to Panasonic and abandoning the hardware market entirely. The Jaguar fared worse, selling just 150,000 consoles (alongside a handful of its worse-fated CD add-on) and leaving Atari with 100,000 units of warehoused stock in 1996.
Though Sega performed much more admirably in terms of raw numbers, their performance in the US and Europe was far worse than they would have wanted. Hugely inflated costs between 1994 and 1997 meant the Saturn had to be retired early so that Sega could make a desperate attempt to save themselves with their future-focused Dreamcast. Though the N64 managed to soldier on into the new millennium, Nintendo had looked to replace it as early as 1999 via the potential purchase of an advanced console design (the MX) from 3DO. The 32-bit generation thus represented a complete failure of the most experienced companies in the business. Only relative newcomer Sony was able to escape the carnage.
When we look at the machines individually, there are clear reasons why each failed. When we look at them together, however, the lessons seem to counteract each other. One of the interesting things about 3DO’s plan was that they decided not to manufacture their own console. This was ingenious in theory – Panasonic would be able to build machines at a much lower cost than 3DO could themselves – but came with an inherent drawback. While Sega and Sony could sell machines at a loss and recoup the money through royalties, the partition between the 3DO hardware and software businesses meant that Panasonic needed to sell the hardware at a profit. Considering the advanced tech was already expensive, this need for profit led to the machine retailing for an eye-watering $700.
If we think the main reason for the 3DO’s failure was its $700 price tag, however, we should consider that the Jaguar retailed at an incredibly competitive $250 yet managed to shift far fewer units. The same could be said for the reasons that made the PlayStation successful. Though Sony were forward-thinking with the low royalty rates they demanded from third parties, they’d already been beaten to the punch by 3DO, who required just $3 for each game sold.
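The tension between loss-leading hardware and low royalties can be made concrete with some back-of-the-envelope arithmetic. In the sketch below, only the $3 royalty figure comes from the text above; the per-console subsidy is a purely hypothetical round number used for illustration:

```python
# Purely illustrative loss-leader arithmetic. Only the $3 royalty
# comes from the article; the $50 per-console subsidy is an
# assumed, hypothetical figure.
hardware_loss = 50        # assumed loss taken on each console sold ($)
royalty_per_game = 3      # 3DO's per-game royalty ($)

# Games each owner must buy before royalties cover the subsidy
# (ceiling division, since you can't sell a fraction of a game).
break_even_games = -(-hardware_loss // royalty_per_game)
print(break_even_games)   # 17
```

At $3 a game, even a modest $50 subsidy takes seventeen game sales per console to claw back – which hints at why a platform holder collecting rock-bottom royalties, paired with partners who had to sell hardware at a profit, left nobody in the 3DO arrangement positioned to subsidise the machine’s price.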
The Cost of Technology
To truly answer the question of why the first true 3D generation was such a bloodbath, I think we need to look beyond the reasons that were individual to each console. While it’s easy for us to criticise the mistakes each individual console manufacturer made, I think there are two overriding factors that led these industry veterans to make such bad decisions.
The first of these was the cost of components. As we can see from the cost of the processors used in these next generation systems, the technology available at the time simply wasn’t as mature (read: cheap) as hardware manufacturers would have wanted. If we look back to the previous generation, we see that the Motorola 68000 Sega used in the Megadrive had originally blast-processed its way into the marketplace in 1979 with a price tag of hundreds of dollars. By the time Sega used it to power their System-16 arcade boards in the middle of the 80s, this price had dropped to just $15; by the time the Megadrive first appeared in 1988, it had fallen to just $8.
For the next generation, however, all five consoles switched to a newer breed of RISC-based CPUs. RISC – Reduced Instruction Set Computer – CPUs were an interesting product of the 1980s. Where traditional ‘Complex Instruction Set’ (CISC) CPUs offered elaborate commands that allowed programmers to perform complex procedures with a single line of code, Reduced Instruction Set CPUs were equipped with only simple instructions that could each be completed in a single cycle of the CPU’s clock. This made them more difficult for programmers to work with – a simple multiplication might require five lines of code instead of one – but they required fewer transistors and gave programmers a more granular level of control over what the CPU was doing on a cycle-by-cycle basis.
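As a rough illustration of that trade-off, here is what a multiplication looks like when broken down into the simple shift-and-add steps a RISC chip without a hardware multiply instruction forces on the programmer – a Python sketch standing in for assembly, where each operation in the loop corresponds roughly to one simple, single-cycle instruction:

```python
def multiply(a, b):
    """Multiply two non-negative integers using only the kinds of
    primitive operations (test a bit, shift, add) a bare-bones RISC
    CPU offers, rather than a single hardware MUL instruction."""
    result = 0
    while b:
        if b & 1:        # lowest bit of b set? add the shifted a
            result += a
        a <<= 1          # a = a * 2 (one shift instruction)
        b >>= 1          # move on to the next bit of b
    return result

print(multiply(6, 7))    # 42
```

Where the CISC programmer writes one `MUL`, the RISC version burns several instructions per bit of the multiplier – but every one of those instructions is simple, predictable and completes in a known number of cycles.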
When it came to cost, the picture was a little more complicated. On the one hand, the RISC chips themselves were much cheaper than the CISC processors of the era. Though its price had dropped heavily from a $1,000 high, an Intel 486 would still have cost $272 at the end of 1993, for example – more than the original RRP of the N64. On the other hand, the technology was still a lot less mature. Though a price of $30-40 might have seemed like a relative bargain compared to competing 3D-capable chipsets, the CPUs used in the machines we’re looking at were still a great deal more expensive than the mature, well-tested models used in the previous generation of consoles.
Another issue – partly linked to the decision to use RISC CPUs – was that all of the machines required far more RAM than previous consoles. While the Super Nintendo could get away with just 128KB of system RAM, 64KB of video RAM and 64KB of RAM for storing audio samples, the PlayStation and Saturn both required 512KB of RAM for audio use alone. Factor in the 2MB of system RAM and the 1.5MB of video RAM and you have RAM requirements up to twenty times higher than those of the previous generation.
This wasn’t necessarily a bad thing in itself, but 1993-5 turned out to be among the worst possible moments for console manufacturers to increase their RAM usage. Though the price of a megabyte of RAM had been falling more or less consistently – from $411,041,792/MB in 1957 to $0.0030/MB in February this year – there have been blips and bounces along the way. Sadly, one of these bumps occurred in the middle of 1993, when a tragic explosion took two lives and wiped out the factory that produced 60% of the world’s epoxy resin supply. In the retail market, prices jumped overnight. Having briefly dipped under $30/MB in late 1992, the price on the high street leapt from around $33/MB to $77/MB (a 133% increase) before settling down at $58/MB (still some 76% higher). Though all of our manufacturers would undoubtedly have been able to negotiate better prices than a home user looking to buy a single megabyte of RAM, whatever price they were going to pay was far higher than it would have been in either 1992 or 1996.
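For clarity, the size of those jumps can be checked directly from the retail prices quoted above:

```python
# High-street RAM prices from the article ($ per MB)
pre_spike = 33.0   # late 1992, before the mid-1993 resin shortage
peak = 77.0        # immediately after the price spike
settled = 58.0     # where prices eventually calmed down

peak_rise = (peak - pre_spike) / pre_spike * 100       # ~133% increase
settled_rise = (settled - pre_spike) / pre_spike * 100 # ~76% increase
```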
Another poorly-timed feature was the move to CD-ROM. Previously, CD drives had been the domain of expensive, self-contained console add-ons, so their cost hadn’t been baked into the price of base consoles. Though they probably should have been a standard feature – as the drawbacks of the N64 and the Jaguar ably demonstrate – their inclusion came at a significant cost.
Even if the cost of CD drives had fallen drastically since the 1980s, it was inevitable that a complex moving part built around a laser would be a more expensive inclusion than the simple metal edge connectors used so far. And we can see from the retail market that their cost hadn’t come down a tremendous amount. In 1983, Yamaha were able to boast that their CD-X1 was the first player to retail for less than £400 (well, 100,000 yen). A decade later, (admittedly more robust) PC drives still sold for around £355 in the UK. As with the RAM, we can see that, though 1993 probably looked like a good time to jump into the market, it might have been among the worst. By 1995, the cheapest consumer models had dropped below the $100 mark in the US, and by 1998 the incoming wave of DVD saw CD-based models available for just $55. It’s important to remember that these prices aren’t indicative of what Sony and Sega would have paid, but they give us a sense of when, how fast, and in which direction the market was moving.
Overall then, we can say that building a console in 1993 was already a more expensive and complicated business than it had been just four years earlier. Consumers already expected more, and there were fewer dependable, cost-effective routes to sate the public’s desires at a price they could afford to pay. This would have been a big enough headache on its own, but it turned out that even those demanding customer expectations couldn’t be relied upon to remain consistent.
The expensive hardware would have been easier to mitigate if it wasn’t for the second – and arguably more damaging – problem: at the last moment, there was a sudden shift in what ‘Next Generation’ should mean. Though for brevity we like to think of the 32-bit consoles as the beginning of the 3D era, in reality 3D games had been around almost as long as console gaming itself. Depending on which side of the Atlantic you were on, arcade games like ‘I, Robot’ had been using polygons since 1983, while in the home computing sphere the likes of 1984’s Elite gave players the ability to orbit fully three-dimensional space stations. Towards the end of the decade, Western computer titles like Driller and Carrier Command and Japanese titles like Wibarm even gave players the option to explore polygonal interiors, and arcade developers had created three-dimensional worlds from both polygons and two-dimensional sprites.
On the whole though, progress towards 3D titles was relatively slow. Though by 1989 Namco’s System 21 had made the company the effective market leader and hugely expanded on the capabilities of the technology used in ‘I, Robot’ (where the latter could draw around 2,000 polygons a second, the System 21 could manage 60,000), it was still quite basic in comparison with what was to come. Its deployment by Namco was also quite conservative, restricted to rail shooters and driving games. At the same time, PC and microcomputer coders experimented with more polygon-based titles in the home sphere, but these tended to be slower-paced strategy/simulation games that didn’t demand fast-paced action at a silky-smooth frame rate.
In 1989, then, the future probably seemed relatively easy to predict. If we look at Sega’s consoles, we see that their capabilities were in line with the arcade machines released a couple of years before them. The SG-1000 seems predominantly designed to play 1980-style arcade games like Carnival and Digger, the specs of 1985’s Mark 3/Master System were in line with Sega’s System 1 arcade platform, while 1988’s Megadrive/Genesis was based on 1985’s System-16 architecture. If instead you look to Nintendo, it’s clear that the SNES’s ability to scale and rotate sprites was influenced by the techniques used frequently by arcade developers in the late 1980s. If you were beginning to develop a next generation console in 1989 – as both Flare and 3DO were – it would have been fair to assume that a home machine releasing in 1991 or 1992 should be targeting the fast, relatively simple 3D worlds of Namco’s racers and shooters.
Indeed, as late as 1992, a sprite-based console capable of untextured 3D probably seemed like the right direction for a next generation machine. When Namco’s 1991 update of Winning Run failed to set the world alight, Sega seized the initiative with their alternative: Virtua Racing. Creating a more detailed world than Winning Run and offering the player the new ability to switch between first and third person views, Virtua Racing managed to be a definite step up, without being one that changed the central equation. The following year Sega took the world by storm with the first polygonal one-on-one fighter, but once again the world its combatants fought in was a flat, untextured one. Going into mid-1993, few would have argued that a system combining the advanced sprite manipulation abilities of the latest 2D arcade games with the ability to draw flat, untextured polygons wasn’t a next generation device.
This view changed drastically, however, just a few weeks before the Jaguar and 3DO were due to launch. Realising they needed external help, Sega and Namco had both turned to graphics companies linked to the simulator industry in a bid to keep (or retake) their edge: Sega partnered with Real3D, an offshoot of Lockheed, while Namco partnered with Evans & Sutherland. These turned out to be matches made in heaven: the arcade manufacturers helped the graphics specialists deploy their technology in systems that retailed for far less than $1 million a pop, while the graphics specialists allowed Sega and Namco to deploy advanced texture mapping 18-24 months before they would have been able to do so under their own steam.
The result was that, at the Japanese amusement show in August 1993, Sega and Namco were both able to show off epoch-defining racing games. In both Daytona USA and Ridge Racer, the flat-shaded worlds of Virtua Racing and Winning Run had been replaced with fully textured cars and environments. The abstract look of existing 3D titles had given way to worlds with recognisable asphalt, concrete, grass, and paint. Everyone who saw them in the flesh was gobsmacked. New publication Edge proclaimed Ridge Racer the most photo-realistic racing game ever, while EGM enthused over Daytona: “Sega’s Daytona USA blows away all other racers creating a new standard in technology, this terrific racer needs to be seen to be believed! It’s a must play. It’s unbelievable!”
Though up to August they had both inarguably been next-generation consoles, the Jaguar and 3DO suddenly looked behind the curve. With the previously advanced 3D graphics of games like Atari’s Cybermorph and Namco’s Winning Run suddenly looking shonky, and the limitations of existing FMV games taking the shine off 3DO’s multimedia-based marketing push, both consoles appeared outmoded before they had even arrived in the shops.
However, it wasn’t just Atari and 3DO who were wrong-footed by the sudden shift in expectations. Though you might think Sega – the developer of Virtua Racing and Daytona – would have had an advantage, the separation between their consumer and coin-op divisions meant that Sega’s consumer wing was taken equally off-guard. Sega console architect Hideki Sato had been working on the assumption that, in line with the development of previous consoles, the Saturn would be primarily based around sprite-based graphics:
“The Saturn actually had just one CPU at the beginning. Then Sony appeared with its polygon-based PlayStation. When I was first designing the Saturn architecture, I was focused on sprite graphics, which had been the primary graphics up to that point.” – Hideki Sato
If Sega altered their machine in response to Sony’s specs, does that mean the latter got away scot-free? Surprisingly not. Ken Kutaragi and his team had been equally taxed by the question of whether the world was ready for a polygon-based console. At an event in Tokyo back in 2012, former Sony Computer Entertainment producer Ryoji Akagawa explained that they had seriously considered making the PlayStation a primarily sprite-based machine, and had only been dissuaded from doing so when they saw the huge crowds building to play, ironically, Sega’s seminal Virtua Fighter.
We can see, then, that the changing nature of ‘Next Gen’ affected four out of the five players involved. Nintendo were served well by their wait-and-see approach in one sense, but they suffered for other reasons. Though waiting was theoretically the best policy, the N64 didn’t arrive until after the PlayStation and Saturn had been through the first couple of rounds of a reasonably savage price war. One of the big criticisms of the N64 was that Nintendo’s decision to stick with space-restricted cartridges over cheap, spacious CDs cost the console a number of key titles, such as the smash-hit Final Fantasy VII. The reason generally given for this is that expensive cartridges allowed Nintendo a greater level of control over third parties, but given everything we’ve seen, it’s clear a CD-based N64 would have been a very difficult thing for Nintendo to produce. Given the expense of the CD drive and the extra RAM/sound hardware it would probably require, Nintendo would have faced the choice of launching the system at an uncompetitive price point or following Sega’s riskier strategy of losing money on each piece of hardware sold and recouping it via software sales.
Returning to our original question, then: could the Saturn (or the N64, Jaguar or 3DO) have succeeded? Without the benefit of hindsight, it’s difficult to see how. Up until 1993, the industry had been working on an informed assumption that the next generation of technology would evolve in one direction, only for it to veer onto a different path at the last minute. For the 16-bit SNES, Nintendo had been able to use relatively cheap custom chips to ape some of the advanced sprite scaling and rotation effects used by contemporary arcade games, but there was no easy way to fudge the kind of graphics the world had witnessed in Ridge Racer and Daytona USA.
This was especially true because the underlying technology had shifted. As we’ve seen, the Megadrive was built around a CPU that was almost nine years old and widely utilised. The SNES, meanwhile, was built around a seven-year-old enhancement of an even older processor. When it came to 32-bit RISC processors, there simply wasn’t a comparably mature and reliable workhorse available for manufacturers to utilise. To make matters worse, the consoles inevitably had to use more complicated and expensive components, making it even harder for manufacturers to create a console that delivered the required level of performance at a price customers could afford. On top of that, there were five consoles vying for control of a market smaller than the one that currently supports three. A bloodbath was almost inevitable.
With hindsight, the most obvious thing to do would have been to delay. Had the consoles all released in a window between 1996 and 1997, they would most likely have been both significantly more powerful and cheaper to boot. At the time, though, there was no way for this to happen: both the Jaguar and the 3DO were too close to release and had staked too much on launching early to simply turn back, while consumers, pundits and the industry as a whole had been driven into a sort of frenzy by the prospect of three dimensional worlds. Though plenty today would argue that the 16-bit generation was the finest of all time, at the time it was regarded as not really offering much of an upgrade over the older 8-bit consoles:
“You might wonder why it’s necessary to build such a high-performance console. The reason is that the current types of games have reached their limit. The market did not grow very much from the 8-bit console generation to the 16-bit console generation. Why? 16-bit consoles were basically straightforward extensions of 8-bit consoles, and because of that, games did not evolve significantly. What, then, is necessary to create such new styles of games? The answer is polygon-based computer graphics.” – Sega president Hayao Nakayama, New Year 1994 (Mdshock.com)
With the 16-bit generation having reached a natural end point, no one seems to have had the patience to wait any longer. It seems almost inevitable that the first 32-bit generation turned out the way it did. The only company who may have had an out was Sega, who had been in talks with Sony to work together on a next generation console.
There are various versions of why this deal failed (Sega of America’s Tom Kalinske blames Sega of Japan, Hideki Sato from Sega of Japan has claimed Sony’s Norio Ohga was vague and dismissive, while others have claimed that Sony’s Ken Kutaragi was dismissive from the beginning), but it’s probably for the best that it did. Though joining forces might have saved Sega the immediate cash they lost on the Saturn, Sony were a company of a totally different scale. By 1998 Sony’s game division might have been pulling in more revenue than Sega ever had, but it was small fry compared to Sony’s music, cinema and consumer electronics arms. Indeed, we shouldn’t forget Sony’s immense cultural brand power either. Sega had done well to position themselves as the ‘cool’ alternative to Nintendo, but Sony electronics in the 1990s had the same level of desirability that a rejuvenated Apple would enjoy throughout the 2000s and 2010s. Sega would not only have struggled against Sony’s raw financial muscle, but also against their cross-sector, cross-generational cool. Had they entered Sony’s shadow, it seems unlikely that they would ever have emerged.
The 32-bit generation feels almost like an inevitable turning point for the industry. It may not have had to be Sony who emerged as top dog, but the rising internet and converging media meant it was almost inevitable that larger, more diverse companies were going to take a serious interest in the gaming industry. When that happened, the newcomers’ larger research and development budgets would make it difficult for smaller specialists like Sega and Nintendo to compete. By inverting Sega, Nintendo, 3DO and Atari’s vast collective experience from strength to weakness, I believe the first 32-bit generation accelerated an almost inevitable outcome for the gaming industry.