A look inside the soul of your machine: MIPS, BIPS and super-chips. (includes related articles)
by Lamont Wood, Robert Bixby
Computers have revolutionized society. The arrival of computers meant that huge, difficult mathematical models could be automated for the first time. The fallout from this revolution has included a varied array of benefits, ranging from atomic power and industrial robots to human-powered flight. But the most enduring result of the revolution was a continually burgeoning demand for increased efficiency and improved performance. This demand has ensured that the revolution will continue, both in society and--every bit as significantly--inside the computer itself. You want to talk about revolution? Take a look at this:
Since Digital Equipment (popularly known as DEC) introduced the VAX 11/780 in October 1977, its performance--roughly a million instructions per second--has served as a benchmark. That standard gave us MIPS, short for millions of instructions per second. More exact ways of rating computer speed have since come along, but you still hear people gauging computers in terms of VAX MIPS. The base price of the unit--the size of two soft drink vending machines--was considered a breakthrough for 32-bit computing at $130,000.
Today, 15 years later, you can get 41 times its performance on a desktop for one-sixtieth the price.
And the experts agree that the last 15 years have only been a warmup. The pace of progress is actually accelerating. Whereas the power of a desktop PC went up by a factor of 10 or so during its first decade (1981-1991), its power should easily grow another hundredfold in its second decade. Desktop BIPS machines (executing billions of instructions per second) are confidently expected in a few years.
Or as Gordon Campbell, president of microprocessor maker Chips and Technologies in San Jose, California, puts it: "We've been on the brink of new breakthroughs throughout the 1980s, and the 1990s won't be any different. Progress was slower in the first half of the eighties than in the second half, and the pace is still accelerating."
Others point to the law first formulated by Intel cofounder Gordon Moore: The number of transistors that can be fitted on a microchip (and, hence, the potential power of that chip) will double about every 18 months. The law has generally held true--and it's expected to keep holding into the foreseeable future.
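As a back-of-the-envelope sketch (our arithmetic, not Moore's or Intel's), you can run the 18-month doubling rule forward from the 486's 1.2 million transistors, a figure cited elsewhere in this article:

```python
# Project transistor counts under Moore's Law: a doubling roughly
# every 18 months. Base figures are the Intel 486's (1989, 1.2
# million transistors).
def moores_law(base_transistors, base_year, target_year, doubling_months=18):
    """Return the projected transistor count at target_year."""
    months = (target_year - base_year) * 12
    return base_transistors * 2 ** (months / doubling_months)

for year in (1992, 1995, 1999):
    print(year, f"{moores_law(1_200_000, 1989, year):,.0f}")
```

Run forward a decade, the rule lands a bit above 100 million transistors by 1999--squarely in line with the end-of-decade predictions chipmakers are quoting.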
Cheaper, Faster
Moore's Law is unique in that it defies Murphy's Law: "As [the microprocessor] gets smaller, it gets cheaper, and it gets faster," notes Lew Pacely, Intel's product manager for high-end 486 products.
As for questions about technological limits, the experts simply shrug them off. Some limits may eventually be encountered, but remember that today's state of the art would have seemed impossible just a few years ago.
"There were a couple of predictions that we have, in the last few years, figured out were false," says Pacely. "The first is that we would not know what to do with all these transistors we were going to be able to put on a chip. It's very clear now that we'll find a use for every one of them. The other is that we would have problems using optical technology [for reducing circuit designs to microscopic dimensions to fit on a chip]. At the submicron level it was thought to be impossible.
"But very high precision optics overcame the problem of the submicron level, and I think we also underestimated the power of computers to design computers. We found we could use the previous generation on the next generation to make it faster, and that cycle has been compounding itself."
ON THE HORIZON
Progress is expected to follow several intertwined threads:
Smaller geometries. As components can be made smaller, more can be added to a chip, allowing more complexity. Smaller geometries also mean faster speeds, since the smaller transistors can change state faster and since signals don't have to travel as far. (Only in the microcircuit world is the speed of light considered a performance barrier.) The Intel 486 uses circuit traces 0.85 micron wide; geometries on some other recent chips are as small as 0.5 micron; and there is experimental work at the 0.35-micron level and lab work at the 0.1-micron level. (A micron, incidentally, is a millionth of a meter--bigger than a virus but smaller than a protozoan. A human hair is 70 microns wide.)
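To see why the speed of light registers as a barrier at all, consider how far a signal can travel in one clock tick. (A rough sketch: the one-half-of-c propagation figure is our assumption; real signals in silicon and circuit traces move at some fraction of light speed.)

```python
C = 299_792_458  # speed of light in vacuum, meters per second

def travel_per_cycle_cm(clock_hz, fraction_of_c=0.5):
    """Distance a signal covers in one clock cycle, in centimeters."""
    return fraction_of_c * C / clock_hz * 100

for mhz in (25, 50, 250, 500):
    print(f"{mhz} MHz: about {travel_per_cycle_cm(mhz * 1e6):.0f} cm per cycle")
```

At 500 MHz the budget is only about 30 centimeters per cycle--and a signal that must leave the chip, cross the circuit board, and settle can easily eat that up, which is exactly why designers want everything on one die.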
Bigger chips. Designers have found that most of the speed barriers they struggle against result from having to move signals off one chip and onto another, dealing along the way with a jungle of capacitance, impedance, and other electrical engineering concerns. But the speed of operations within a given chip is much less restrained. Chip speeds, in other words, can be faster than system speeds, so the idea is to embody as much of the system as possible inside one chip. So we see math coprocessors, memory caches, and memory management circuitry being added to microprocessor chips. Larger chips are harder to make, of course, but Moore's Law keeps pushing back the horizon. The Intel 8088, designed in 1978, has 29,000 transistors. The Intel 486, designed in 1989, has 1.2 million.
Faster speeds. The speed of a microprocessor chip is governed by the speed of its clock, which can be likened to the rpm of an engine. All things being equal, the greater the rpm, the faster the engine.
"A fair amount of work has gone into hiking clock speeds," notes Roy Druian, manager for 680x0 marketing for Motorola in Austin, Texas. "You run the clock faster, look for things that break, fix them, speed up the internal signal paths, redesign portions of the circuit, and do geometry shrinks in the process." The Motorola 68030 originally had' 1.25-micron geometry and ran at 25 MHz; it now uses 0.8-micron geometry and runs at 50 MHz. Druian sees no reason a microprocessor can't be made to run at 250 or even 500 million cycles per second. (And indeed, at almost the same time he was saying this, Digital Equipment announced that its new Alpha microprocessor had run successfully at 200 MHz. The firm is confident that it can eventually get 400-BIPS performance out of the chip's descendants.)
Clock doubling. If chips run so much faster than systems, why not let them run as fast as they can and have some kind of buffer between them and the system to handle the speed difference? That's what's done with the new Intel 486DX2. It runs at 50 MHz on a 25-MHz system. The 25-MHz system circuitry is much less expensive than 50-MHz system circuitry, yet it achieves about 85 percent of the speed of a true 50-MHz system.
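A toy Amdahl-style model (our assumption, not Intel's published arithmetic) shows how a clock-doubled chip can land near 85 percent of a true fast system: only the on-chip share of the work gets the doubled clock, while bus and memory cycles still crawl along at the slower system rate.

```python
def doubled_clock_speedup(bus_fraction):
    """Speed of a clock-doubled chip relative to a true fast system.

    bus_fraction: share of the fast system's time spent on the external
    bus. The doubled chip can't speed that part up, so it takes twice
    as long relative to the rest of the work.
    """
    return 1.0 / ((1.0 - bus_fraction) + 2.0 * bus_fraction)

# If roughly 18 percent of the time is bus-bound, you get about 85 percent:
print(f"{doubled_clock_speedup(0.18):.0%}")
```

The design choice is pure economics: the slower system board is the cheap part, and the model shows how little performance you give up by keeping it.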
Greater efficiency through complexity. As more components can be added, a chip can be made to do more complex things that will enhance its performance even at the same clock speed. Whereas the Motorola 68000 takes 20 to 30 cycles to perform an instruction, the 68030 takes 6.6 cycles, and the 68040 takes 1.3. And progress has been similar in the Intel dynasty. (Keep in mind that microprocessor instructions are typically arcane things like "pop the stack," and a million per second would not equate to a VAX MIPS.) One of the key selling points of the "P5" chip is that it will be able to perform more than one operation per cycle, which Intel calls superscalar processing.
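The cycles-per-instruction figures above translate directly into throughput: native MIPS is just clock speed divided by average cycles per instruction. (The clock rates below are representative period figures we've assumed; only the cycle counts come from the article.)

```python
def native_mips(clock_mhz, cycles_per_instruction):
    """Millions of (native, not VAX-equivalent) instructions per second."""
    return clock_mhz / cycles_per_instruction

# name, assumed clock in MHz, average cycles per instruction
chips = [("68000", 8, 25.0), ("68030", 50, 6.6), ("68040", 25, 1.3)]
for name, mhz, cpi in chips:
    print(f"{name}: ~{native_mips(mhz, cpi):.1f} native MIPS")
```

Note how the 68040 beats the 68030 on throughput even at half the clock--efficiency, not just megahertz, is doing the work.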
More parallelism. Even a multi-tasking chip can really only execute one instruction at a time. You could multiply a computer's effectiveness by lining up two or more microprocessors in parallel. A master processor could break up tasks into discrete chunks and pass them through simultaneously, eliminating delays. While not much of that is being done now, more parallelism is constantly being added to individual processors. For instance, early Intel microprocessors performed binary multiplications at the rate of one bit per cycle, notes Pacely. But Intel was able to add circuitry to later chip generations that would "look ahead" and multiply bits in parallel.
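The master/worker scheme described here can be sketched in a few lines. (Threads stand in for the extra processors in this illustration; a genuine speedup would require multiple physical CPUs doing the chunks at once.)

```python
from concurrent.futures import ThreadPoolExecutor

def work(chunk):
    """One discrete chunk of the larger job."""
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data, workers=4):
    """Master: split the task into chunks, farm them out, combine results."""
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(work, chunks))

print(parallel_sum_of_squares(list(range(1000))))
```

The master's job--carving the work into independent pieces and merging the answers--is exactly what makes some problems parallelize beautifully and others stubbornly resist.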
More layers. As chips get larger, they also get deeper. By stacking layers, you multiply the processing power without taking up more real estate. The Intel 486 uses three layers, and four-layer chips are being planned.
RISC architecture. Reduced instruction set computers (RISC) were supposed to be the wave of the future, with their ability to get higher performance by performing simpler things at higher clock speeds.
Today, the developers shrug. "RISC is great marketing technology," says Pacely. "Remember artificial intelligence? Its basic concepts got integrated into a lot of products, but there never was an artificial intelligence market per se. RISC is just a computer architecture, and the people who build processors may position them as RISC to be different. But everyone uses a certain amount of RISC in chips these days, since it lets a lot of simple things get done in one clock cycle."
More brain sweat. You might think that we could miniaturize endlessly. But there is a very definite limit to circuit size. For instance, when geometries get below about 0.3 microns, the designers are likely to start running into quantum effects: The components will be so small that they will obey the odd laws of quantum physics instead of the laws of Newtonian physics that rule the macroscopic world inhabited by people (commonly called the real world).
"The real issue is that we won't be able to draw circuit traces at the quantum level and will have to switch from optics to particle beams or something," says Pacely. "But are we thinking that clever people won't find a way to solve these problems? No. They will. It's a relatively straightforward engineering problem, not a breakthrough science problem."
THE LIGHT FANTASTIC
"The quantum problem will probably be as much a problem as any past problem that we overcame," agrees Druian. "We thought there would be problems at the one-micron barrier, but we addressed them. As you migrate down in size, there are always new problems you have to address, but I don't see them as significantly different from previous ones. Whatever the width of an atom is, we'll probably run into problems there. But I understand work is being done on light chips."
"There are a lot of advantages to light," adds Campbell. "There is no electronic radiation, it doesn't heat up, and it's faster (than copper-transmitted signals) by a factor of 2 or so. People are looking at light very seriously, but the density you can get is not competitive with silicon--not that you can't get single elements small enough, but there is no way to get 4 million on a chip. Silicon technology is pretty mature in that sense. Maybe by the end of the decade, we'll see light chips."
Meanwhile, Intel has stated publicly that it expects to be selling 386-compatible 2-BIPS chips with 100 million transistors, running at 250 MHz, by the end of the decade. There are those who disagree with this prediction--because it's too conservative. "Two BIPS? We'll see it more like about 1997. The pace is picking up," notes Dean McCarron, analyst with In-Stat, a semiconductor market research firm in Scottsdale, Arizona.
THE WRITING ON THE WALL
As for what use we'll make of this deluge of MIPS and BIPS, sources agree that we'll probably see more and more resources devoted to the user interface and communications. After all, our computer applications have remained basically the same: accounting, word processing, databases, and so forth. Where the extra power could be best harnessed would be in making the interfaces simpler and in speeding up communications.
Handwriting recognition interfaces are already being touted, and mass-market voice recognition can't be far off. Virtual reality on the order of the holodeck on "Star Trek: The Next Generation" may not be in the cards yet, but we'll probably see stabs in that direction. All parts of the entertainment industry await advances in the computer hardware to make their interaction more friendly, more intense, and more real. Education might cease to exist as we know it and merge with entertainment and information in an industry that will form the backbone of a culture that is yet to be born.
In the meantime, hang on to your hat. The excitement has only begun.
Most of the advances in society since the dawn of civilization have been the result of developments in machine technology from the wheel to the microprocessor. Our dreams of a future where dwindling resources are divided like loaves and fishes to feed, clothe, and house the multitudes rely on the development of the machines that multiply the power of our intellect the way levers and wheels multiply the power of our musculature. Rapid progress in the microprocessor field helps to make that vision of an abundant future more assured.