The Twilight of the PC (Hardware Clinic)
by Mark Minasi
If you've been following the industry recently with an eye to buying a new computer, you've probably noticed an important trend. PCs are less expensive than they've ever been.
There's nothing new in that. PC prices have been on a constant downward spiral. But, if you've been in the market for very long, you've probably noticed another important trend: The computer you want to buy - the one with the latest technology, the most RAM, and the biggest mass storage capacity - has always been just out of reach. Until lately.
The low prices for the best of the best should make PC lovers jump for joy, but there's a dark side to this phenomenon: the same rapid drop in prices that has spurred sales may also signal the end of the line for PCs.
Since shortly after the arrival of the PC in 1981, the market has broken down into three distinct levels. A basic computer system that could run the low- to midrange programs of the day cost around $1,000.
Even five years ago, $1,000 would buy you enough XT power to run WordPerfect 5.0 and Lotus 1-2-3 2.0.
If you had a little more cash, or higher expectations for your machine in terms of speed and processing power, the next price point, around $3,000, got you either a power user's clone or a low-end machine from a major computer company.
If you had a lot of cash and were running major applications that required lots of horsepower, like a huge database or CAD software, you could get a top-of-the-line machine with the best display, the largest and fastest hard drive, and so on, for $6,000-$11,000.
PC prices have always dropped at a steady rate; in general, today's $3,000 power workstation is tomorrow's basic PC. It goes through this metamorphosis because the basic requirements of software grow over time. For example, an 8088-based XT will run WordPerfect 5.1 with no problems, but WordPerfect 6.0 doesn't run very well on an XT; even on a 16-MHz 286 AT clone, it seems slow.
While these price points remained steady for close to a decade, the drop in PC prices over the past two years is unprecedented. The reason for the drop is that the PC world is different today, and the difference can be seen at the high end.
Since today's high-end machine is tomorrow's midlevel machine, we should be able to predict what tomorrow's midrange machine will be. We look up from our fire-breathing desktops to see what's on the horizon. And we see nothing.
What did a top-of-the-line computer look like two years ago? A 486DX2/66 with 16MB of RAM, a SCSI controller, a 380MB hard disk, a CD-ROM drive, local-bus video, and a 17-inch monitor would have been a high-end computer. That would have cost about $7,000-$9,000.
How about a top-of-the-line computer today? It looks pretty much the same, except that it would probably have a 520MB hard disk and would cost around $4,500.
The high-end machine is rapidly becoming the midrange machine, and there is nothing taking its place. As I see it, the big issues are the following:
* Processors are maxing out.
* PC buses have unacceptable speed limitations.
* The PC BIOS can't handle hard disks larger than 1GB.
* Networking isn't built into DOS or Windows.
* PC operating systems lack good memory management, multitasking, and security.
We haven't seen a new PC processor in two years - not even a faster version of an existing chip.
You may be thinking, "What about the Pentium?" Well, what about it? The Pentium may turn out to be a practical chip one day, but it isn't today, and it won't be by the time you read this.
The Pentium is plagued by heat problems and production difficulties. Intel designed the Pentium with a 0.8-micron resolution on the chip mask, requiring the Pentium chip to be quite large as chips go and making it harder to build in quantity.
And at 66 MHz, the Pentium doesn't deliver real-world speed much in excess of a 486DX2/66's; the real improvement will come if a 100-MHz version ever appears. As you may have read a few months ago in "Hardware Clinic," Intel won't be ramped up to produce Pentiums in any quantity until late in 1994.
So the basic CPU has been in a developmental stall for a couple of years. Maybe we've gone as far as we can without a major CPU change. It happened to the 6502 series that powered the first generation of 8-bit computers like the Apple II and the Commodore 64, and to the Z80 that powered the CP/M machines that paved the way for the PC. We have to accept that you can improve an existing technology only so much before you need to scrap it and start from scratch.
The notion that PC-compatible processors are maxing out in power is more serious than it appears at first glance.
Microcomputers got their start in the mid-1970s as hobbyist machines and as machines that a computer junkie could control completely. But one of the things that made the PC popular was the relatively high amount of computing power you could buy for a relatively small amount of money. The fact that Intel-compatible microprocessors have increased in power by a factor of about 100 in ten years, while mainframe processors have jumped only by single-digit factors in that time, is one of the things that has fueled the move to client/server architecture.
But would corporate America invest all that time and money if it knew it was moving from one dead-end architecture to another?
What's faster than the Pentium? These days, lots of things are. But first and foremost is the DEC Alpha chip. Not only will it run NT programs very quickly, but it can also run regular old DOS and Windows programs (under NT, of course).
But a chip maker recently told me, "The Alpha's obsolete already. A whole bunch of new 128-bit superscalar chips will be out before you know it, and they'll cost about what the Alpha does . . . or they may be cheaper."
IBM's PowerPC chip is a real alternative. IBM will offer desktop systems in the $10,000 range that outpace a 486 by a factor of about 4. Count on that $10,000 price to come down quickly.
Originally, IBM and Apple were set to work with each other on the PowerPC and its accompanying operating system, Taligent. Taligent was supposed to be essentially Macintosh System 8 and to run on Macs, PCs, and PowerPCs. But now IBM has backed out of the Mac-compatibility promise, giving Apple good reason to want to sell PowerPCs for less money than IBM does. And if neither IBM nor Apple sells cheap PowerPCs, any company can buy the PowerPC chip set from Motorola and undersell them both.
The next problem in the PC architecture is the speed of buses. The ISA and EISA buses operate at only 8 MHz, and the MCA bus operates at 10 MHz - and this in an age of 66-MHz computers.
Yes, there is a local-bus standard - the VESA (Video Electronics Standards Association) local bus - but it's not much of a standard. I've seen a fair number of compatibility problems with boards using it.
State-of-the-art buses should transport 64 bits, not 32, and should allow bus mastering (intelligent boards transporting data between themselves without CPU intervention). You probably know that bus mastering is already available with the MCA and EISA buses, but it's not part of either VESA or PCI, the new Intel local-bus standard.
Some help may come from the PCMCIA (Personal Computer Memory Card International Association) bus slot type. PCMCIA boards are smart enough to configure themselves when inserted, and they can be changed while the computer is running. Those are both powerful features. But PCMCIA doesn't support bus mastering yet, and it ticks along at a mere 8 MHz.
Ever notice that the Enterprise's chief engineer, Geordi La Forge, never has to screw around with cables?
Every time I'm fumbling around with a LAN cable or installing a new SCSI device, I find that Geordi comes to mind. Apple's Newton can beam its information from one Newton to another. Why can't my laptop beam data to and from my main desktop PC?
Another communications problem that plagues PC users is setting up and maintaining a network. LANs are a major pain for several reasons.
Some of the most important reasons stem from the general problem of keeping the wires in the walls attached to PCs without any breaks, cracks, nicks, cuts, or bruises. That problem applies to all computer communications. But the PC adds an extra element of trouble with its antediluvian operating environment, DOS and Windows. DOS was not designed with networks in mind. File sharing was a notion tacked onto the side of DOS, and networks become part of DOS workstations only through the inclusion of temperamental device drivers, as the sketch below suggests.
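To see what that means in practice, here's roughly what a DOS workstation on a NetWare LAN has to load before it can even see the file server. This is only an illustrative sketch - the driver names, settings, and load order vary with your network board and your network operating system:

    REM STARTNET.BAT - a typical NetWare ODI workstation stack (illustrative only)
    REM LSL is the Link Support Layer; NE2000 is the board driver (yours may differ);
    REM IPXODI is the IPX protocol stack; NETX is the NetWare shell.
    LSL
    NE2000
    IPXODI
    NETX

And all of that assumes a NET.CFG file whose interrupt, port address, and frame type match the board exactly; get one of them wrong, and the drivers load without complaint but never find the server.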
By contrast, the Mac's operating system was built with networking in mind from the very beginning. It was fairly lame networking - a serial-port connection no faster than about 0.23 megabits per second - but the underlying architecture makes adding a high-speed network like Ethernet a simple matter.
NT and UNIX are examples of microcomputer operating systems that are designed to network, but DOS will never be NT.
Which brings me to PC operating systems. DOS was an obsolete piece of garbage back in 1987, but we still use it. We use it for varied reasons, but the main one is inertia.
What we have in the DOS and Windows environment is adequate. But our use of the PC is limited terribly by DOS and Windows. There's the annoying 640K limitation. Getting around it with DPMI (DOS Protected Mode Interface) or XMS (eXtended Memory Specification) code is cumbersome and apparently poorly understood by programmers. It can be quite a trick to get a number of DOS and Windows programs to work together.
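If you've ever tuned a machine's memory, you know the ritual. A typical DOS 6 CONFIG.SYS devotes its first lines to nothing but clawing back that 640K. Your paths and drivers will differ; this is just the usual pattern:

    REM CONFIG.SYS - the standard conventional-memory juggling act (illustrative)
    REM HIMEM.SYS supplies XMS (extended memory) services
    DEVICE=C:\DOS\HIMEM.SYS
    REM EMM386 opens up upper memory blocks; NOEMS means no expanded memory
    DEVICE=C:\DOS\EMM386.EXE NOEMS
    REM Load DOS itself into the high memory area and use upper memory blocks
    DOS=HIGH,UMB
    REM Push device drivers out of conventional memory where possible
    DEVICEHIGH=C:\DOS\ANSI.SYS

Even after all that juggling, a DOS program still can't simply ask for a flat few megabytes of memory the way an NT or UNIX program can; it has to work through a DOS extender or manage XMS handles itself.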
DOS is inflexible. It's necessary to reboot your system every time you make any change to CONFIG.SYS or AUTOEXEC.BAT. We take it as a given, but why must it be that way? Other operating systems don't require this of you. The product manager of Windows NT told me, "If you ever have to reboot your computer after you've got NT up and running, then we've failed in our job."
DOS doesn't support true multitasking; it's still quite possible (in fact, it's simple) to crash a Windows communications program by accessing some large file in one application while the communications session goes on in another.
In every computer generation, progress and innovation go on for years. It seems for a while that the sky's the limit. But the constant need to support the old while inventing the new eventually dictates that nearly all of the industry's time is taken up with the old, leaving nothing for the new. That generation of hardware and software becomes entrapped by the fact that it's good enough.
Soon, we PC users may have to make a choice. We can either join the vanguard or be left behind. And just when I thought I was done buying hardware for a while.
Speak Up!
Do you have a hardware problem you'd like Mark to tackle in this column? Let him know about it by calling (900) 285-5239 (sponsored by Pure Entertainment, P.O. Box 186, Hollywood, California 90078). The call will cost 95 cents per minute, you must be 18 years of age or older, and you must use a touch-tone phone.