The Upgrade Path, Part Two - Intel versus AMD; The Early Years

Apr 09, 2006 21:22

So. New computer, and all that. Athlon64 processor at the heart of it.

Why, you ask?

The short answer - AMD processors are better, faster, and generally cheaper than Intel processors.

The long answer is all about how it got to be that way. Here begins Part One of a history lesson. I hope you really really like to read ;-).

Disclaimer - Wikipedia is down as I type this. Without a fast and convenient way of checking my facts, I'm doing it all from memory. There may be some errata.

I've been poking around the insides of computers since the days when Apple II's and Commodore 64's were king. Both of those lines of machinery were built around the MOS 6502 processor (the C64 technically used the 6510, a close cousin), an 8-bit chip that ran at around 1 megahertz and was vastly superior to the Intel 8088 that had been chosen by IBM for its new Personal Computer line. The original IBM PC's were, quite frankly, complete and utter crap... but they had a distinct advantage over their superior competitors - the name IBM. Those three letters were a marketing advantage that no one else could trump for quite some time. Apple and Commodore eventually replaced their venerable product lines with the Macintosh and the Amiga, whose first incarnations were built around the Motorola 68000, a breathtaking 16-bit CPU that, once again, was vastly superior to Intel's follow-up to the 8086 line: a 16-bit chip designated the 80286. The 286, for all its deficiencies, was still a vast improvement over its predecessor and finally made the IBM PC - and its legions of clones - something like a legitimate computer, eventually scaling to speeds as high as 12 megahertz.

Motorola and Intel spent a few years one-upping each other. Motorola went to 32-bit with the 68020 and 68030; Intel answered with the 32-bit 80386 and really hit its stride with the 80486. Meanwhile, Apple and Commodore - Motorola's main customers - made some amazing marketing blunders that caused them each to lose market share despite offering superior machines (Commodore was also taken over by a pair of financiers named Irving Gould and Mehdi Ali who completely gutted the company, leaving it unable to compete while they walked away with all the cash. Commodore filed for bankruptcy in 1994, and now exists only as a fondly-remembered name). Motorola's dwindling customer base, coupled with the fact that Intel's 486 truly was a good processor in its own right, put Intel squarely at the top of the chipmaker game.

Intel's market share had grown so large, in fact, that it couldn't go it alone - big customers like IBM wouldn't bet on a chip without a second source of supply - so Intel licensed production of its processor line to other chipmakers to keep up with demand. Harris manufactured Intel designs under license, as did Siemens. But Intel's most important production partner was a respected chipmaker based in Sunnyvale, California named Advanced Micro Devices. AMD second-sourced Intel's x86 chips - most famously the 286 - under an agreement that ran for years. But when the 386 came along, Intel decided it no longer needed a second source and abruptly cut AMD off. AMD found itself bereft of expected income - but with years of experience building a highly sought-after product. So they decided to go into business for themselves, manufacturing and selling their own 386 and, later, 486 processors under the AMD brand name - and at a much lower price than Intel charged.

Intel sued, of course - and the two companies spent years fighting it out in arbitration and in court. In the end, AMD came away with the right to keep building x86 processors: between the old second-source agreements and AMD's own engineering work, Intel couldn't shut them out. Even without the contracts, the x86 instruction set was widely documented and could have been legally reverse-engineered - much as Compaq had reverse-engineered the BIOS of the IBM PC a decade or so earlier. Intel wasn't happy, but all it could really do was get back to work.

And they did, on a new processor that originally went under the designation 80586. After the courts ruled - in yet another case Intel lost against AMD - that a bare part number like '386' couldn't be trademarked, Intel dropped the numerical naming scheme in favor of a name it could actually protect: 'Pentium'.

And at this point, Intel started making mistakes.

The first generation of the Pentium was junk. It ran hot, it cost a fortune, and the real-world gains over a good 486 rarely felt worth the price - a 486 clocked at 66 megahertz cost far less and didn't feel much slower in everyday use, and there were already 486's that ran at 100 MHz or better. There was also a serious bug in the Pentium's built-in math co-processor that caused it to give inaccurate results for certain floating-point division calculations. That made it a very hard sell for any kind of office environment - nobody wants a spreadsheet that might be quietly wrong - and Intel's initial reluctance to replace the flawed chips only made the damage worse. Intel quickly - and substantially - revised the internal architecture of the Pentium and released a new version with a completely different socket design that ran at 90 MHz, and this chip proved to be a decent and reliable performer. But Intel had already tarnished its own reputation, and buyers were quietly looking around for alternatives.
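
(For the curious: the numbers below are the division test case that made the rounds publicly at the time - not anything from my own hardware - written up as a rough Python sketch of the arithmetic, just to show how small and specific the failure was.)

    # The widely circulated FDIV test case (taken from public accounts of the bug).
    # A correct FPU gives ~1.333820449 for the quotient and essentially zero for
    # the check below; the flawed Pentiums reportedly returned ~1.333739068,
    # which works out to an error of roughly 256 when you multiply back out.
    x = 4195835.0
    y = 3145727.0

    quotient = x / y
    check = x - quotient * y

    print(quotient)   # ~1.333820449 on a correct FPU
    print(check)      # ~0 on a correct FPU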

AMD had already built a name for itself. Good performance and low cost get you noticed. AMD continued to sell 486's at 133 MHz while Intel stumbled over the Pentium - but they were already preparing to debut a Pentium-class processor of their own design, code-named K5. Rival chipmakers were developing Pentium-class chips of their own; IDT's Centaur design group was readying the WinChip for market, while Cyrix was doing serious work on their 6x86 chip. AMD's K5 came to market at an initial speed of 90 MHz (no jokes about the name, please) and eventually climbed to PR ratings as high as 166 while keeping the smaller company competitive at the low end of the market. Meanwhile, Intel ramped the Pentium to higher clock speeds and managed to overcome the Pentium's initial bad image. By the time the Pentium reached 133 MHz it was a solid workhorse that easily beat the 486, and Intel was floating on a sea of money. They revised the Pentium's architecture again, added a new instruction set called MMX, and ramped the clock speeds higher.

By this time, AMD and Cyrix were falling a bit behind performance-wise. The Pentium's floating-point unit had always been strong, and by the MMX generation the gap was impossible to ignore. Software that made heavy use of floating-point math - games, in particular - ran like gazelles on Intel's silicon. AMD's new K6 processors, and the 6x86MX from Cyrix, ran integer math at least as well as a Pentium (better, in office applications), but didn't quite have the same gaming oomph as the newer Pentiums. What they did have was full compatibility at a much lower price, and that was enough to keep them in the game. Or at least, that was true of AMD. Some of the Cyrix processors had odd quirks owing to the ways in which the design team had tried to be innovative. Some software, mostly games, required patches to get good performance - or to run at all in some cases.

I started building my first PC's around this time - late 1997. The first computer I built had a 100 MHz Pentium and 16 megabytes of RAM when everyone else had 166 MHz chips and 32 megs or better. From the beginning, I stayed on the cheap. I replaced that chip a few months later with a Cyrix 6x86MX PR-150. The 'PR' stood for 'Performance Rating', and was supposed to indicate that the chip would perform on par with a Pentium 150 despite running at a clock speed of only 133 MHz. Honestly - it was a mostly accurate claim. It was a significant boost over my previous chip, and Quake seemed happy with it. Some months after that I pulled the Cyrix chip in favor of a 233 MHz AMD K6 processor and upgraded my memory to 32 megs, rebuilding the whole machine into a new case. Now Quake 2 ran like a champ. I was still very much on the cheap - the Pentium 2 was the big kid on the block with speeds of 300 MHz or more, and AMD had just released the K6-2 at the same speed with an improved FPU and its own custom instruction set named '3DNow!'. I was not without envy, but I stayed within my means while I continued my struggle - to try to learn to like Microsoft Windows.

Putting my K6 233 rig next to a Pentium 233 machine was no contest. 3D games ran noticeably better on Pentiums of the same speed. Not that my machine was bad, but the Pentium was better, owing largely to its superior FPU. Even a Pentium 200, though otherwise a bit slower, would run 3D games faster. But the AMD K6 was vastly less expensive, and therefore a better choice for me at the time. Besides, the emerging reliance on 3D graphics accelerators for games mitigated the differences between CPUs to at least some degree. When I installed a 3dfx Voodoo card in my system, the difference in gaming performance was astonishing.

Progress continued. The Pentium 2 reached speeds of 450 MHz; the AMD K6-2 reached 500. I started building computers for other people - using either Intel or AMD CPU's according to their preference - and greatly enjoyed doing so. Rumors were circulating about a radical new CPU core from Intel called 'Katmai', and something from AMD called, predictably, K7. Cyrix released its new - and sadly underperforming - MII processors and lost market share (they eventually went under and were bought out by the Taiwanese chipmaker VIA, which then bought the Centaur group from IDT). I replaced my K6 233 with a K6-2 450 at a bargain price and upped the RAM to 64 megs on a new motherboard, and watched Intel start worrying.

Around the time I upgraded to the K6 233, Intel released a new CPU in response to the market share it was losing to AMD's lower prices. It was called 'Celeron' and it was another mistake. The first Celerons, introduced at speeds of 266 and 300 MHz, sold for much less than Pentium 2's of the same speed and cost Intel less to make - even though they were built on the same CPU core. The way Intel saved money was by not including a Level 2 cache. For the non-computer geeks out there - a cache is a small chunk of very fast memory that the CPU uses to hold the instructions and data it's working with right now, so it doesn't have to wait around for much slower main memory. Even at that time, CPU's did a hell of a lot of things at once. A Level 1 cache sits right on the processor - 16 kilobytes on the first Pentiums (split between instructions and data), doubled to 32k on the last generation of Pentiums. The Level 2 cache is a much larger chunk of memory that was originally on the motherboard right next to the CPU. The first Pentium boards typically granted 64k or 128k, either built into the motherboard or on an add-on module. Later motherboards went to 256k and then 512k. My K6 233 system had one full megabyte of L2 cache for the CPU. The point is - the larger the cache, the more of its work the CPU can keep close at hand, and the less time it spends waiting on main memory before it can execute the next batch of instructions. A larger cache means faster, smoother performance. It's also worth noting that faster processors benefit more from a large L2 cache than slower ones do - the faster the CPU, the more of its time it wastes, relatively speaking, every time it has to go all the way out to main memory.
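
If you'd rather see that principle than take my word for it, here's a quick Python toy I'm including purely for illustration (it's a made-up model - a simple least-recently-used cache fed a synthetic access pattern - nothing like how a real CPU cache is actually wired up). All it's meant to show is that a bigger cache catches more of the repeat traffic, so fewer requests have to go all the way out to slow memory:

    from collections import OrderedDict
    import random

    def hit_rate(cache_size, accesses):
        """Toy LRU cache: what fraction of accesses were already sitting in cache?"""
        cache = OrderedDict()
        hits = 0
        for addr in accesses:
            if addr in cache:
                hits += 1
                cache.move_to_end(addr)        # recently-used data stays 'hot'
            else:
                cache[addr] = True             # pretend we fetched it from slow RAM
                if len(cache) > cache_size:
                    cache.popitem(last=False)  # evict the least-recently-used entry
        return hits / len(accesses)

    # A made-up access pattern with some locality: the program mostly pokes at a
    # small neighborhood of addresses, and occasionally jumps somewhere new.
    random.seed(1)
    base, accesses = 0, []
    for _ in range(200_000):
        if random.random() < 0.001:            # rare jump to a new working set
            base = random.randrange(1_000_000)
        accesses.append(base + random.randrange(256))

    for size in (32, 128, 512):
        print(f"cache of {size} entries: {hit_rate(size, accesses):.0%} hit rate")

The exact numbers don't matter - the point is just that each step up in cache size means the simulated CPU spends less of its time waiting.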

When Intel designed the Pentium 2 they put the CPU on a card that fit into a slot on the mainboard, rather than a socket. To improve performance, they built the L2 cache - 512k worth - onto the card right next to the CPU core, tightly coupling it to the CPU for lower latency and faster response (the cartridge cache ran at half the CPU's clock speed, which was still far quicker than cache sitting out on the motherboard). With the Celeron, they dropped the L2 cache completely. The Celeron had 32k of L1 cache on the chip... and that was all. This rendered its performance so low as to be almost useless. K6 and K6-2 processors danced around it merrily. AMD gained more recognition, Intel got laughed at. A lot. I knew people who bought first-generation Celerons for a song when Intel dumped them out of the market - and made keychains out of them. I think I still have one somewhere.

Intel, to its credit, wised up quickly and released the Celeron-A. This CPU was, again, a revised Pentium 2 core - but now with 128k of L2 cache built directly onto the chip itself, running at the full speed of the CPU. That cache was only a quarter the size of the Pentium 2's, and the Celeron still ran on a 66 MHz front-side bus versus 100 MHz for the Pentium 2, so performance was lower - but not so much lower that it felt slow. The new Celerons were well-received... especially after some clever geeks figured out a little secret about the Celeron-A...

Another note for the non-tech geeks (and an apology to true tech geeks, as I'm going to oversimplify the living crap out of this) - a CPU gets its clock speed from two things: a front-side bus and a multiplier. The front-side bus is the base speed at which the motherboard talks to the CPU; the multiplier determines how much faster than the bus the CPU itself runs. So, if a Pentium 2 has a front-side bus speed of 100 MHz and a multiplier of 4.5, it's a 450 MHz CPU. If a Celeron has a front-side bus of 66 MHz and a multiplier of 4.5, it's a 300 MHz CPU.
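
That really is all the math there is. If it helps, here it is as a couple of lines of Python - the helper name is just something I made up, and the numbers are the same examples as above (the '66 MHz' bus was actually 66.6 MHz, which is how the Celeron comes out to roughly 300):

    def cpu_clock(front_side_bus_mhz, multiplier):
        """A CPU's clock speed is just the front-side bus speed times the multiplier."""
        return front_side_bus_mhz * multiplier

    print(cpu_clock(100.0, 4.5))   # Pentium 2 450: 100 MHz bus x 4.5 = 450.0 MHz
    print(cpu_clock(66.6, 4.5))    # Celeron 300: 66.6 MHz bus x 4.5, right around 300 MHz

Keep that first line in mind for the next paragraph.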

The Celeron-A's ran on a 100 MHz bus with no trouble at all, meaning they could be turned into 450 MHz CPU's simply by moving a jumper. A fifty percent clock-speed increase is fracking huge. Especially when it's free. You see where this is going, right? Enthusiasts ignored the Pentium 2 in favor of buying up cheaper Celerons and overclocking them. Intel scrambled to make later Celerons harder to overclock, but its expected profit margins took another hit in the meantime. In fixing one mistake, Intel had made another - albeit not as large.

Meanwhile, Compaq bought Digital Equipment Corporation. DEC was known for three main things: the VAX line of minicomputers, workstations and larger systems; the VMS operating system that ran on the VAX (and was to some degree the basis for Windows NT); and the Alpha - a lineage of god-powerful 64-bit CPU's that powered workstations, servers and supercomputers. What does this have to do with the Intel/AMD rivalry, you ask? Well, even before Compaq completed its acquisition of DEC, Alpha designers had begun jumping ship - they could see that the Alpha, and anything derived from it, had no real future in its new home, and they wanted to keep making kickass CPU's. A good number of them landed at AMD. AMD was at that time already designing a successor to the K6/K6-2 core, and had lofty ambitions. The Alpha veterans found enthusiastic allies in the AMD design labs, and they all got to work. No one outside seemed to notice...

Intel released its 'Katmai' processor, now dubbed the Pentium III, in early 1999. The performance difference between a Pentium 2 450 and a Pentium III 450 was, well, there wasn't any. The Katmai core was simply the Pentium 2 core with some extra instructions added (the SSE extensions) and the ability to eventually ramp to higher clock speeds. It even ran on the same motherboards and this, honestly, was a good thing. The Pentium 2's and III's were expected to run on motherboards with Intel's 440BX chipset, which was a truly marvelous piece of engineering and is regarded by many people to this day - myself included - as the best damn chipset Intel ever produced. When the P-III reached speeds of 500, 550, and then 600 MHz, we were all impressed by its performance - and left wondering what the hell had ever happened to AMD's K7 processor. We had heard it was supposed to be something special...

And then the K7 was released, sporting the brand name 'Athlon', which we all hated instantly. It sounded stupid. Then we got our hands on the first Athlon computers (I had been working in the computer department of a major electronics retailer since 1998) and we were simply stunned.

The world had changed.

And Intel was about to do some unbelievably stupid things...

To be continued in - "The Upgrade Path, Part Three - Intel versus AMD, The Smackdown".

the upgrade path, computers
