
Winn L. Rosch Hardware Bible, Electronic Edition, New Table of Contents

Table of Contents

● Preface
● Introduction
● 1, Background
● 2, Motherboards
● 3, Microprocessors
● 4, Memory
● 5, The BIOS
● 6, Chipsets and Support Circuits
● 7, The Expansion Bus
● 8, Mass Storage Technology
● 9, Storage Interfaces
● 10, Hard Disks
● 11, Floppy Disks
● 12, Compact Discs
● 13, Tape
● 14, Input Devices
● 15, The Display System
● 16, Display Adapters
● 17, Displays
● 18, Audio
● 19, Parallel Ports
● 20, Printers and Plotters
● 21, Serial Ports
● 22, Telecommunications

http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/NEWTOC.HTM (1 de 2) [23/06/2000 04:11:04 p.m.]



● 23, Networking
● 24, Power
● 25, Cases
● Appendix A, PC History
● Appendix B, Regulations
● Appendix C, Health and Safety
● Appendix D, Data Coding
● Appendix E, Disk Parameter Reference



Winn L. Rosch Hardware Bible, Electronic Edition, Table of Contents

● Preface

● Table of Contents

● Credits and Acknowledgements

● About the Author

● Comments and Inquiries

http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/httoc.htm [23/06/2000 04:13:26 p.m.]


Winn L. Rosch Hardware Bible, Electronic Edition, Credits

Credits

Winn L. Rosch Hardware Bible, Premier Edition


By Winn L. Rosch

SAMS Publishing
201 West 103rd Street, Indianapolis, Indiana 46290

Text and Illustrations Copyright (c) 1997 by Winn L. Rosch


Premier Edition
All rights reserved. No part of this book shall be reproduced, stored in a retrieval system, or transmitted by any means, electronic, mechanical, photocopying,
recording, or otherwise, without written permission from the publisher. No patent liability is assumed with respect to the use of the information contained
herein. Although every precaution has been taken in the preparation of this book, the publisher and author assume no responsibility for errors or omissions.
Neither is any liability assumed for damages resulting from the use of the information contained herein. For information, address Sams Publishing, 201 W.
103rd St., Indianapolis, IN 46290.

International Standard Book Number: 0-672-30954-8

Library of Congress Catalog Card Number: 96-67965

2000 99 98 97 4 3 2 1
Interpretation of the printing code: the rightmost double-digit number is the year of the book's printing; the rightmost single-digit, the number of the book's
printing. For example, a printing code of 97-1 shows that the first printing of the book occurred in 1997.

Text in paper version of this book was composed in AGaramond and MCPdigital by Macmillan Computer Publishing.

Printed in the United States of America

All terms mentioned in this book that are known to be trademarks or service marks have been appropriately capitalized. Sams Publishing cannot attest to the
accuracy of this information. Use of a term in this book should not be regarded as affecting the validity of any trademark or service mark.

To Caelyn

http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/credits.htm (1 de 2) [23/06/2000 04:13:58 p.m.]



● Publisher and President: Richard K. Swadley


● Publishing Manager: Dean Miller
● Director of Editorial Services: Cindy Morrow
● Director of Marketing: Kelli S. Spencer
● Assistant Marketing Managers: Kristina Perry and Rachel Wolfe
● Acquisitions Editor: Grace M. Buechlein
● Development Editor: Brian-Kent Proffitt
● Software Development Specialist: Patty Brooks
● Production Editor: Sandy Doell
● Indexers: Tom Dinse and Johnna L. VanHoose
● Technical Reviewers: John Nelson and Vince Neal
● Editorial Coordinator: Katie Wise
● Technical Edit Coordinator: Lynnette Quinn
● Resource Coordinator: Deborah Frisby
● Editorial Assistants: Carol Ackerman, Andi Richter, and Rhonda Tinch-Mize
● Cover Designer of Printed Edition: Tim Amrhein
● Book Designer of Printed Edition: Alyssa Yesh
● Copy Writer: David Reichwein
● Production Team Supervisors: Brad Chinn and Charlotte Clapp



Winn L. Rosch Hardware Bible, Electronic Edition, Introduction

Introduction

Tools define our culture. We aren't so much what we make as what we use to make it. Even
barbarians can make holes with a bow and shaft; we have drill presses, hydraulic punches, and lasers.
More importantly, the development of tools defines civilization. No culture is considered civilized
unless it uses the latest tools. The PC is the tool that defines today's age and culture.
Once a tool only for initiates and specialists, today the PC has become as common as, well, all those
monitors sitting on office desks and overweight attache cases bulging with
keyboard-screen-and-battery combinations. The influence and infiltration of the PC stretches beyond
comparison with any other modern tool, even beyond the reach of common metaphors. No office
machine is as common; none so well used; none so revered—and so often reviled. Unlike the now
nearly forgotten typewriter that was restricted to secretaries and stenographic pools, the PC now
resides resplendently on once bare drafting tables, executive desks, and kitchen counters. Unlike fax
machines, calculators, and television sets, the PC doesn't do just one thing but meddles in nearly
everything you do at work and at home. Unlike your telephone, pager, or microwave oven, the PC
isn't something that you use and take for granted, it's something you wonder about, something you
want to improve and expand, perhaps even something that you would like to understand.
Indeed, to use any tool effectively you have to understand it—what it can do, how it works, how you
can use it most effectively. Making the most of your PC demands that you know more than how to rip
through the packing tape on the box without lacerating your palms. You cannot just drop it on your
desk, stand back, and expect knowledge to pour out as if you had tapped into a direct line to a fortune
cookie factory.
Unfortunately, despite the popularity of the PC, the machine remains a mystery to too many people.
For most people, the only task more baffling is programming a VCR. Everyone knows that
something happens between the time your fingers press down on the keys and a letter pops up on the
screen, or a page curls out of the printer, or a sound never heard before by human ears shatters the
cone of your multimedia loudspeakers. Something happens, but that something seems beyond human
comprehension.
It's not. The personal computer is the most logical of modern machines. Its most complex thoughts are
no more difficult to understand than the function of a light switch. Its power comes from
combinations—a confluence of ideas, circuits, and functional blocks—each of which is easy to
understand in itself. The mystery arises only when your vision is blocked by the steel wall of the
computer case that blinds you to the simplicity fitted inside.

http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/intro.htm (1 de 4) [23/06/2000 04:15:10 p.m.]



This book is designed to be both x-ray vision and explanation. It lets you see what's inside your PC,
helps you understand how it works, and lets you master this masterpiece of modern technology.
Moreover, you learn what you can do to make your PC better, how you can use it more effectively,
how you can match the right components to make it better suit what you want to do, and how you can
get the most speed, the most quality, the most satisfaction from today's technology.
In home and workplace, the personal computer is today's most technically advanced tool. With a PC,
you can get yourself and your office organized, you can tackle jobs that would otherwise be too time
consuming to try, you can extend your imagination to see forms your mind cannot make, and you can
relieve yourself of the tedium of repetitive, busy work. Most importantly, the PC is a tool for
acquiring information, for learning, for communicating. It is your primary portal for connecting with
the Internet and World Wide Web.
For some, tools, or the technologies behind them, define the ages of man. Humankind started in the
Stone Age. The apes that came before (and still persist in a few places where civilization has bowed to
nature) pounded with rocks. Our ancestors went further, shaping rocks to better suit their needs:
hammers, points, and knives. The story of civilization has been much the same during the passing
millennia. As we have progressed from stone to bronze to iron, the aim has been the same: making the
tool that best suits the task at hand, one that's sharper, stronger, and better fits the hand and job.
The story of the PC fits this same pattern. When the PC was introduced, it was like the rock in the
raw. You grabbed it because you could. It was the right size to get your hands on, clumsy to use, and
accomplished a job without any particular elegance. You sort of pounded away on the thing and
hoped for the best.
The coming of metal meant that tools could be molded to their purpose. In the Bronze Age, then the
Iron Age, work was faster and more precise. The modern PC offers the same kind of malleability. It's
a machine made to be molded to your exact needs. Fortunately, you don't need a forge to fit your PC
to the function you desire. The physical part of the change is simple; the hard work is all mental—you
have to know your PC to use it. You have to understand this most modern of tools.
Only a poor workman blames his tools when his job goes awry. But what kind of workman doesn't
understand his tools at all, doesn't know how they work, and cannot even tell a good tool from a bad
one? Certainly you wouldn't trust such a workman on an important job, one critical to your business,
one that might affect your income or budget, or take control of your leisure pursuits. Yet far too many
people profess ignorance about the PC, a tool vital to their businesses, their hobbies, or their
households.
The familiar PC is the most important business tool to emerge from electronic technology. It is vital to
organizing, auditing, even controlling, most contemporary businesses. In fact, anywhere there is work
to do, you likely will find a PC at work. If you don't understand this modern tool, you will do worse
than a bad job. Soon, you may have no job at all.
Unlike a hammer or screwdriver, however, the personal computer is a tool that frightens those
uninitiated into its obscure cabala. Tinkering with a personal computer is held in the same awe as
open heart surgery, except that even cardiovascular surgeons are as apprehensive as the rest of us
about tweaking around the insides of a computer. But computers are merely machines, made by
people, meant to be used by people, and capable of being understood by people.
As with other tools, they are unapproachable only by those inexperienced with them. An automobile
mechanic may reel at the sight of a sewing machine. The seamstress or tailor may throw up his hands


at the thought of tuning his car. The computer is no different. In fact, today's personal computer is
purposely designed to be easy to take apart and put back together, easy to change and modify, and
generally invincible except at the hands of the foolish or purposely destructive. As machines go, the
personal computer is sturdy and trouble free. Changing a card in a computer is safer and more certain
of success than fixing a simple home appliance, such as a toaster, or changing the oil in your car.
Times change, and so has the personal computer. No longer is it an end in itself. It is a means to an
end. It takes care of the office work, it lets you send and receive e-mail, it lets you explore online, it
even plays games. In the future, the PC will likely be the centerpiece of both your home entertainment
system and your communications systems.
Although the PC has gained tremendously in power in the last few years, its technology is more
accessible than ever. New developments promise to make your next PC easier to set up, too. A
far-reaching new standard called Plug-and-Play can make upgrading your system as easy as plugging
in new components—no adjustments, no configuration, no brains necessary. But even these
innovations don't mean that you can take full advantage of your investment in your PC without
understanding it and the underlying technology.
If anything puts people off from trying to understand PCs, it is the computer mystique. After all, the
computer is a thinking machine, and that one word implies all sorts of preposterous nonsense. The
thinking machine could be a devious machine, one hatching plots against you as it sits on your desk,
thinking of evil deeds that will cause you endless frustration. Although you might attribute Satanic
motivation to the machine that swallows up a day's work the instant your lights flicker, you would be
equally justified in attributing evil intent to the bar of soap that makes you slip in the shower.
Even if you don't put your PC on an altar and sacrifice your firstborn program to it, the image of a
thinking machine can mislead you. A thinking machine has a brain, therefore opening it up and
working inside is brain surgery, and the electronic patient is just as likely to suffer irreversible damage
at the hands of an unskilled operator as a human. A thinking machine must work in the same
unfathomable way as does the human mind, something so complicated that in thousands of years of
attempts by the best geniuses, no one has yet satisfactorily explained it.
But computers think only in the way a filing cabinet or adding machine thinks—hardly in the same
way as you do or Albert Einstein did. The computer has no emotions or motivations. It does nothing
on its own, without explicit instructions that specify each step it must take. Moreover, the PC has no
brain waves. The impulses traveling through the computer are no odd mixture of chemicals and
electrical activity, of activation and repression. The computer deals in simple pulses of electricity,
well understood and carefully controlled. The intimate workings of the computer are probably better
understood than the seemingly simple flame that inhabits the internal combustion engine inside your
car. Nothing mysterious lurks inside the thinking machine called the computer.
Computers are thought fearsome because they are based on electrical circuits. Electricity can be
dangerous, as the ashes of anyone struck by lightning will attest. But inside the computer, the danger
is low. At its worst, it measures 12 volts, which makes the inside of a computer as safe as playing with
an electric train—even safer because you cannot trip over a PC's tracks (they're safely ensconced in
the PC's disk drive). In fact, nothing that's readily accessible inside the computer will shock you,
straighten your hair, or shorten your life. The personal computer is designed that way. It's made for
tinkering, adding in accessories, and taking them out.
Computers are thought delicate because they are built from supposedly delicate electronic circuits. So
delicate that their manufacturers warn you to ground yourself to a cold water pipe before touching


them. So delicate that, even though they cost $500 each, they can be destroyed by an evil glance. In
truth, even the most susceptible of these circuits, the one variety of electronic component that's really
delicate, only requires extreme protection when it's not installed where it belongs. Although pulses of
static electricity can damage circuits, the circuitry in which the component is installed naturally keeps
the static under control. (Although static pulses are a million times smaller than lightning bolts, the
circuits inside semiconductor chips are a million times smaller and more delicate than you are.)
Certainly a bolt of lightning or a good spark of static can still do harm, but the risk of either can be
minimized simply. In most situations and work places, you should have little fear of damaging the
circuits inside your computer.
Most people don't want to deal with the insides of their computers because the machines are complex
and confusing. In truth, they are and they aren't. It all depends on how you look at them. Watching a
movie on videotape is hardly a mental challenge but understanding the whirring heads inside the
machine and how the image is synchronized and the hi-fi sound is recorded is something that will spin
your brain for hours. Similarly, changing a board or adding a disk drive to a computer is simple. It's
designing the board and understanding the Boolean logic that controls the digital gates on it that takes
an engineering degree.
As operating systems get more complicated, computers are becoming easier to use. Grab a mouse and
point the cursor at an onscreen window, and you can be a skilled computer operator in minutes. That
may be enough to content you. But you will be shortchanging yourself and your potential. Without
knowing more about your system, you probably won't tap all the power of the PC. You won't be able
to add to it and make it more powerful. You may not even know how to use everything that's there.
You definitely won't know whether you have got the best computer for your purposes or some
overpriced machine that cannot do what simpler models excel at.
In other words, although you don't need skill or an in-depth knowledge of computer or data processing
theory, you do need to know what you want to accomplish and what you can accomplish, the how and
why of working with, expanding, even building a personal computer system.
And that's the purpose of this book, to help you understand your present or future personal computer
so that you can use rather than fear it. This text is designed to give you an overview of what makes up
a computer system. It will give you enough grounding in how the machine works so that you can
understand what you're doing if you want to dig in and expand or upgrade your system.
At the same time, the charts and tables provide you with the reference materials you need to put that
knowledge in perspective and to put it to work. Not only can you pin down the basic dates of
achievements in technology, find the connections you need to link a printer or modem, and learn the
meaning of every buzz word, this book will help you understand the general concept of your personal
computer and give you the information you need to choose a computer and its peripherals. As you
become more familiar with your system, this book will serve as a guide. It will even help you craft
your own adapters and cables, if you choose to get your hands dirty.
The computer is nothing to fear and it need not be a mystery. It is a machine, and a straightforward
one at that. One that you can master in the few hours it takes to read this book.


Preface to the Premier Edition

In the decade since the first edition of this book appeared, the world has changed, and PCs have not
only been part of that change but have caused some of the change. The PC has grown up, shrunk
down, gained importance, and lost status. PCs have grown to have the power not just to capture our
thoughts and imagination but to create their own reality. They have shrunk so that you can carry the
most powerful machines from work to home and not spend the evening recovering. They have
become a major part of international commerce. And from items of reverence they have entered
society as little more than appliances. All they need is a good coat of white enamel and a cord too
short to let you plug them in and put them where you want them. Some need only the enamel.
Paging through the manuscript of the previous edition, I might have been shocked to see how out of
date it had become in a few years. “Might have been shocked” because, after almost two decades of
chasing after computer technology, nothing comes as a shock anymore except reaching inside a PC
after forgetting to switch off the power. Between the third and this fourth, "Premier" Edition,
however, the PC industry has taken yet another unforeseen twist, one that leads down yet another path
to who knows where. At least the journey is always fun.
Nowhere in the third edition will you find the word “Internet.” Certainly the Internet was there, but it
hadn't yet been discovered by the umpteen million folk who now log on every day. And no one, not
even its most fervent promoters, would have believed that in a few short years son-of-Internet, the
World Wide Web, would become the one most important reason people would consider buying a PC.
In this edition, you'll find the magical word “Internet” splashed in nearly every chapter. After all, no
computer book can be authoritative without a good dose of the hottest buzzwords. Of course, there
have been more changes than just the Internet, so I've captured a whole hive of buzzwords for this
edition—and given them surprisingly full discussions.
While I was adding the new spice, I also diligently expunged a good deal of ancient history, keeping
only enough to put things in perspective. Most of what is now gone is the stuff that many of us
approaching codger-hood would like to forget—words like IBM (search and replace with Intel and
Microsoft hegemony), PS/2, OS/2, megabyte (search and replace with gigabyte), even DOS. So much
of what we once took for granted has finally—and often thankfully—taken leave.
Some of it is fondly remembered, some not so. As I was working on this revision, however, I
discovered that too much can't be forgotten even if you don't want to be reminded of it every day.
How to deal with these issues tormented me for a long while, but I persevered, mixing discussions of

http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/preface.htm (1 de 5) [23/06/2000 04:16:50 p.m.]



new technologies with historic perspective. When my manuscript plopped down on the publisher's
doorstep and cracked the flagstone underneath, however, cries of anguish reverberated from coast to
coast. The publisher moaned over the massive deforestation needed to make the paper for printing just
one copy of the resulting book, let alone what would be required for the several that they were likely
to sell (or at least report on the royalty statement).
The real issue was not ecological, however. The publisher was kind enough to point out that my
writing itself was a worse pollution than would be caused by clear-cutting the entire continent. The
problem was that, as originally drafted, this Premier edition could not have been printed in a single
volume—not even using bible paper (yes, there is such a thing, but it's named after a somewhat more
worthy work than this).
Another kind of clear-cutting was called for. The publisher sent a whole cadre of editors to help
me—all in nice, white coats and carrying butterfly nets. But no matter how surgically we wielded our
blue pencils, we had to cut enough important stuff that it piled up knee deep on the floor. We quickly
discarded our initial thought—hack away with abandon because no one reads past page two,
anyhow—and sought a solution that preserved the integrity of the book. The result was two books in
one.
What you hold in your hand is the lean-and-mean abridged bible, trimmed considerably to publishable
size. Inside a pocket in the back jacket you'll find a CD containing the big book (as well as a number
of other delights), the entire new edition as originally conceived (but with the benefit of adept editing.
-ed.)
The publisher calls the stuff on CD “Electronic Bonus Sections” to give you the impression you're
getting more for your money. Of course you don't need a Nobel prize in economics to figure out that
you've actually paid for the disk. What you're really getting is a convenient way of dealing with an
overwhelming amount of information.
The division between the paper and electronic versions is hardly random. I've moved material of
mostly historic interest and more detailed and technical data about current (and upcoming) topics to
the CD. In other words, the CD lets you probe further into matters of particular interest to you. To
alert you to such electronic elaborations, the publisher has scattered emendations throughout the book,
dire warnings that look like this:
See Electronic Bonus Chapter 5 on the CD for a complete section called "System Identification Bytes," which includes
Table 5.F, "IBM Model and Submodel Byte Values."

The trend at the time the previous edition rolled off my screen was toward platform independence.
Everyone and her sister was announcing a new RISC microprocessor and all were going to run some
new, great operating system, one that grew up somewhere other than the state of Washington. How
could anyone have been so naïve? (Ask about my love life and you'll know.) But today we do have
our choice of operating systems. Of course, all start with Windows—take your pick, 95 or NT.
Despite all the changes in the world and PCs, I've tried to keep this edition consistent with the earlier
three, retaining the same general layout but spinning off a new chapter on mass storage interfaces, a
subject broad and complex enough to deserve its own discussion. I've shuffled some chapters around
to improve the organization and make my publisher think I actually did some work on the revision.
I've tried to include as much information as was available at the time this was prepared about all new
and important technologies. Besides that Internet thing, you'll find discussions of all the latest
standards you'll want in your next PC—USB, IEEE 1284, 1394, IrDA, CardBus, Pentium II, MMX,
and enough kinds of RAM to ensure a good slumber even if you don't need to count sheep.


Look, look, look! Finally this author has gotten it together to deliver his tables and figures to the
publisher with the manuscript, and the publisher got them in the right places in the text. Some even
have the right captions. A few are even useful. More tables than ever before. More illustrations. And
less idle time for the author (as if I had any to begin with!)
As with any major revision to a successful book, I approached this project happily—you know, a
minor rewrite and the money would pour in. Okay, once you take into account the publisher's
bookkeeping, it's more of a dribble and usually a drought. I had dreams that chapters could remain
essentially intact and I could just add a reference to Pentium here and there.
After about ten minutes of such revision, I was ripping chapters apart, reorganizing, purging, and
adding hundreds of pages of new material. Here's a quick run-through of the changes and some other
important stuff you'll find inside.
● Chapter 1, "Background," is a basic introduction to PCs and the standards that define a starting
point in PC hardware—for example Microsoft's PC 97 standard that designates the minimum you'll
want for the latest operating system, unnamed as this is written. (Microsoft has just drafted a new PC
98 standard, so you might guess what the new operating system may be called. -ed.)
● Chapter 2, "Motherboards," has gotten a good dose of reality, including a discussion of new
technologies and (in the Electronic Bonus Sections) some of the latest commercial motherboard
products.
● Chapter 3, "Microprocessors," of course includes the latest incarnation of Intel's wonder chips up
through the Pentium II and also some surprisingly strong competitors from AMD and Cyrix.
● Chapter 4, "Memory," shifts emphasis to new technologies and from chips to modules. You'll find
discussions of more memories than at a 50th class reunion.
● Chapter 5, "The BIOS," admits itself to be an exercise in irrelevance, given the rise of the
overwhelming operating system. Of course, trends in BIOS design are all covered, up through
Plug-and-Play and a pointer to ACPI later in the book.
● Chapter 6, "Chipsets and Support Circuits," adds more relevant new technologies to the dusty
circuitry still required for compatibility with machines designed before some current PC users were
born.
● Chapter 7, "The Expansion Bus," puts more emphasis on PCI and CardBus, along with (in the
Electronic Bonus Sections) some of their industrial kin. There's enough about where we came from to
help you see why we're going where we are.
● Chapter 8, "Mass Storage Technology," builds upon its foundation in previous editions, trading
megabytes for gigabytes and mixing in such novel concepts as rewritable CDs and various shades of
RAID.
● Chapter 9, "Storage Interfaces," a new chapter, tackles all the new and some coming designs,
giving just a tip of the hat to old, best forgotten, friends. You'll find SCSI-3, various shades of EIDE,
SSA and even Fibre Channel in its pages.
● Chapter 10, "Hard Disks," discusses several new technologies and generally tries to keep up with
this fast spinning technology.
● Chapter 11, "Floppy Disks," covers old ground but leaps ahead to the new 100MB designs as well.

● Chapter 12, "Compact Discs," reaches into DVD territory and also presents a good look at CD-R,
the likely replacement for both CDs and cartridge hard disks as the preeminent data exchange
medium.
● Chapter 13, "Tape," takes note of the new technologies that are keeping this ancient medium alive,


things like DLT and helical tape formats, reserving discussions of older technologies to the Electronic
Bonus Sections.
● Chapter 14, "Input Devices," updates old technologies, adds a few topics that fell through the
cracks in previous editions (things like joysticks) and even reaches into 3D systems.
● Chapter 15, "The Display System," still covers the basics of image making but approaches new
issues arising from 3D accelerators and multimedia needs.
● Chapter 16, "Display Adapters," covers all the requirements you'll want in your display system,
including the latest chips.
● Chapter 17, "Displays," illustrates more issues involved with CRTs but puts more emphasis on
LCDs and a new technology called FED that holds the promise of someday replacing both CRTs and
LCDs.
● Chapter 18, "Audio," looks at everything you'll want to hear from your PC, from microphones to
speakers with an improved discussion of MIDI as well.
● Chapter 19, "Parallel Ports," covers the new IEEE 1284 standard and the bus-like ECP and fast
EPP.
● Chapter 20, "Printers and Plotters," reflects the latest trends in output technology—in particular,
printers—with an emphasis on lasers and today's high resolution color inkjets.
● Chapter 21, "Serial Ports," not only brings a vastly improved discussion of RS232 but also includes
several new standards—ACCESS.bus, IrDA, USB, and IEEE 1394.
● Chapter 22, "Modems," starts off with familiar modem technology (including the latest 56K
designs) but also emphasizes the new all digital systems including land line, cable, and satellite
connections. There are a few mentions of the Internet in here, too. Maybe more than a few.
● Chapter 23, "Networking," outlines the basics you need to know to build a small office or home
network, including an in depth "grow your own" discussion.
● Chapter 24, "Power," includes discussions of the latest standards from APM and ACPI to Smart
Battery, as well as familiar power protection materials.
● Chapter 25, "Cases," gets physical and discusses the physical aspects of PC packaging and
peripheral installation.
If that's not enough, the CD also includes four appendixes. These serve mostly as reference material
that would have otherwise interrupted the smooth flow of the book (which has been described as a
Level 5 rapids by whitewater rafters). The chosen four include:
● Appendix A, "PC History," which outlines the relevant history of computers and other technologies
relevant to PCs
● Appendix B, "Regulations," which covers government regulations pertaining to PCs that you
should know about
● Appendix C, "Health and Safety," which discusses issues of health and safety

● Appendix D, "Data Coding," which lists the important data coding systems, information you need
or at least need to be able to find somewhere.
In addition, I've included an updated drive parameter table in electronic form so that you can quickly
find the setup values you need to get an old, even ancient, hard disk drive working.
My publisher and I have done our best to assure the accuracy of what you find in here. It falls short of
the scholarly mark in missing footnotes, endnotes, and sources. I've got a bunch of excuses, none too
compelling, except that apparatus like that would have put me even further behind schedule on a book
project that's very time sensitive. In any event, much of what you'll find here is based on original
research, conversations with the people behind the technologies, the promoters of the standards, and
hands-on experience. Too often when I went in search of the literature for more detail, the only
references I found were my own.
That said, you can depend on the names and dates given here. No date is mentioned lightly. It reflects
when a given standard or technology was developed or released, specifically stated as such. Where
names are too often forgotten, I've made an effort to put credit where it is due for many of the minor
inventions that have made PCs as successful as they are.
This book has always served two purposes: It's an introductory text to help anyone get up to speed on
PCs and how they work. But once you know what's going on inside, it continues to serve as the
ultimate PC reference. I've tried to update both as well as keep it relevant in today's vastly changed
world of computing. As in previous editions, if you need an answer about PCs, how they work, how to
make them work, or when they began to work, you will find it answered here, definitively.
-wlr, 18 April 1997


Chapter 1: Background
What is a PC? Where did it come from? And why should you care? The answers are
already cloudy and the questions may one day become the mystery of the ages. But the
PC’s origins are not obscure; its definition is malleable but manageable, and your
involvement is, well, personal but promising. This chapter offers you an overview of what
a PC is, how its software and hardware work together, the various hardware components
of a PC, and the technologies that underlie their construction. The goal is perspective, an
overview of how the various parts of a PC work together. The rest of this book fills in the
details.

■ Personal Computers
■ History
■ Characteristics
■ Interactivity
■ Dedicated Operation
■ Programmability
■ Connectivity
■ Accessibility
■ Labeling Standards
■ MPC 1.0
■ MPC 2.0
■ MPC 3.0
■ PC 95
■ PC 97
■ MMX
■ Variations on the PC Theme
■ Workstation
■ Server


■ Simply Interactive Personal Computer


■ Network Computer
■ Numerical Control Systems
■ Personal Digital Assistants
■ Laptops and Notebooks
■ Software
■ Applications
■ Utilities
■ DOS Utilities
■ Windows Utilities
■ Operating Systems
■ Programming Languages
■ Machine Language
■ Assembly Language
■ High Level Languages
■ Batch Languages
■ Linking Hardware and Software
■ Device Interfaces
■ Input/Output Mapping
■ Memory Mapping
■ Addressing
■ Resource Allocation
■ BIOS
■ Device Drivers
■ Hardware Components
■ System Unit
■ Motherboard
■ Microprocessor
■ Memory
■ BIOS
■ Support Circuits
■ Expansion Slots
■ Mass Storage
■ Hard Disk Drives
■ CD ROM Drives


■ Floppy Disk Drives


■ Tape Drives
■ Display Systems
■ Graphics Adapters
■ Monitors
■ Flat Panel Display Systems
■ Peripherals
■ Input Devices
■ Printers
■ Connectivity
■ Input/Output Ports
■ Modems
■ Networks

Background

In carving a timeline out of the events of the first few millennia of civilization, the historians of an age
some time hence will list the events that changed the course of the world and human development: the
Yucatan-bound meteor that blasted dinosaurs to extinction, the taming of fire, coaxing iron from
stone, inventing a machine to press ink from type to paper, and—most important for this book if not
the future—creating personal computers that fit in your hand and link to every other person and PC in
the world.
The PC and the technologies developed around it truly stand to change the course of civilization and
the world as dramatically as any of humankind’s other great inventions. For nearly two decades, PCs
already have changed the way people work and play. As time goes by, PCs and their offspring are
working their way more deeply into our lives. Even today they are changing how we see the world,
the way we communicate, and even how we think.
A PC is an extension of human abilities that lets us reach beyond our limitations. In a word, a PC is a
tool. Like any tool, from stone ax to Cuisinart, the PC assists you in achieving some goal. It makes
your work easier, be it keeping your books, organizing your inventory, tracking your recipes, or
honing your wordplay. It makes otherwise impossible tasks—for example, logging onto the
Internet—manageable, often even fun.


Compared to the stone ax, a computer is complicated, though it probably takes no longer to really
master. Compared to the modern tools of everyday life, it is one of the most expensive, second only to
the tools of transportation like cars, trucks, and airplanes. Compared to the other equipment in office
or kitchen, it is the most versatile tool at your disposal.
At heart, however, a computer is no different than any other tool. To wield it effectively, you must
learn how to use it. You must understand it, its capabilities, and its limitations. A simple description
of the gadgets you can buy today is not enough. Names and numbers mean nothing in the abstract.
After all, an entire class of people memorize specifications and give the appearance of knowing what
they are talking about. As salespeople, they even try to guide your decisions in buying computer
equipment. Getting the most that modern technology has to offer—that is, both taking care of your
work efficiently and acquiring the best hardware to handle your chores— without spending more than
you need to requires an understanding of the underlying technology.

Personal Computers

Before you can talk about PCs at all, you need to know what you’re talking about. The question is
simple: What is a PC?
PCs have been around long enough that the only folks likely not to recognize one by sight have vague
notions that mankind may someday harness the mystery of fire. But defining exactly what a personal
computer is proves to be one of those tasks that seems amazingly straightforward until you actually try to do it.
When you take up the challenge, the term transforms itself into a strange cross mating between
amoeba and chameleon (biologists alone should shudder even at the thought of that one). The
meaning of the term changes with the person you speak to and with the circumstances in which you
speak.
A personal computer is a computer designed to be used by one person. In this, the age of the
individual, in which everything is becoming personal—even stalwart team sports are giving way to
superstar showcases—a computer designed for use by a single person seems natural, not even
warranting a special description. But the personal computer was born in an age when computers were
so expensive that neither the largest corporations nor governments could afford to make one the
exclusive playground of a single person. In fact, when the term was first applied, personal computer
was almost an oxymoron, self-contradictory, even simply impossible.
Computers, of course, were machines for computing, calculating numbers. Personal computers were
for people who had to make computations, to quantify quadratic equations, to factor prime numbers,
to solve the structure of transparent aluminum. About the only people needing such computing power
were engineers, scientists, statisticians, and similar social outcasts. Then a strange thing happened, the
digital age. Suddenly anything worth talking about or looking at turned into numbers—books, music,
movies, telephone calls—just about everything short of a cozy hug from Mom. As the premier digital
device, the PC is the center of it all.

History


The first product to bear the PC designation was the IBM Personal Computer, introduced in August,
1981. After the introduction of that machine, hobbyists, engineers, and business people quickly
adopted the term to refer to small desktop computers meant to be used by one person. The initials PC
quickly took over as the predominant term, saving four syllables in every utterance.
In the first years of the PC, the term was nondenominational. It referred to any machine with certain
defining characteristics, which we will discuss shortly. In fact, the term "personal computer" was in
general use before IBM slapped it on its early desktop iron.
Over the years, however, the term "PC" has taken a more specialized application. It serves to
distinguish a particular kind of computer design. Because that design currently happens to be the
dominant one worldwide, many people use the term in its original sense. That works in most polite
conversation, unless you’re in a conversation with someone whose favorite computer does not follow
the dominant design. When you refer to his hardware as a "PC," the polite part of the conversation is
likely to quickly disappear as he disparages the PC and rues the days when his favorite—Amiga or
Macintosh—once parried for marketshare.
The specialized definition of a PC means a machine that’s compatible with the first IBM Personal
Computer—that is, a computer that uses a microprocessor that understands the same programs and
languages as the one in the first PC—though it is likely to understand more than just that and do a
heckuva lot better job of it! In fact, what we now call PCs were once IBM-compatible, because in
those primeval years (roughly 1981 to 1987), the IBM design was the accepted industry standard,
which all manufacturers essentially copied. After 1987, IBM overplayed its role in defining the
industry, and lost its position in the marketplace. The term "IBM-compatible" fell into disuse. PC and,
now rarely, PC-compatible have taken over.
Under this more limited definition, a PC is a machine with a design broadly based on the first IBM
PC. Its microprocessor is made by Intel or, if made by another company, is designed to emulate an
Intel microprocessor. The rest of the hardware follows designs set by industry standards discussed
throughout the rest of this book.

Characteristics

Like so much of the modern world, a modern personal computer is something that’s easy to recognize
and difficult to define. In truth, the personal computer is not defined by its parts (because the same
components are common across the entire range of computers from pocket calculators to
super-computing vector processors) but by how it is used. Every computer has a central processing
unit and memory, and the chips used by PCs are used by just about every size of machine. Most
computers also have the same mass storage systems and similar display systems to those of the PC.
Although you’ll find some variety in keyboards and connecting ports, at the signal level all have much
in common, including the electrical components used to build them.
In operation, however, you use your PC in an entirely different manner from any other type of
computer. The way you work and the way the PC works with you is the best definition of the PC.
Among the many defining characteristics of the personal computer, you’ll find that the most important
are interactivity, dedicated operation, programmability, connectivity, and accessibility. Each of these
characteristics helps make the PC into the invaluable tool it has become, distinguishing it from the
computers that came before and an array of other devices with computer-like pretensions.


Interactivity

A PC is interactive. That is, you work with it, and it responds to what you do. You press a key and it
responds, sort of like a rat in a Skinner box pressing a lever for a food pellet.
Although that kind of give and take, stimulus and response relationship may seem natural to you, it’s
not the way computers have always been. For most of the first three decades that computers were used
commercially, nearly all worked on the batch system. You had to figure out everything you wanted to
do ahead of time, punch out a stack of cards, and dump them on the desk of the computer operator,
who, when he finished his crossword puzzle, would submit your cards to processing. The next day,
after a 16-hour overnight wait, your program, which took only seconds to run, generated results that
you would get on a pile of paper big enough to push a paper company into profitability and wipe out a
small section of the Pacific Northwest. And odds are (if your job was your first stab at solving a
particular problem) the paper would be replete with error messages, basically guaranteeing you
lifelong tenure at your job because that’s how long it would take to get the program to run without all
the irritating error flags.
In other words, unlike the old batch system that made you wait overnight to find out how stupid you
are, today’s interactive PC tells you in a fraction of a second.

Dedicated Operation

A personal computer is dedicated. Like a dog with one master, the PC responds only to you—if only
because it’s sitting on your desk and has only one keyboard. Although the PC may be connected to
other machines and people through modems and telephone wires or through a network and
world-reaching web, the link-up is for your convenience. In general, you use the remote connection
for individual purposes, such as storing your own files, digging out data, and sending and receiving
messages meant for your eyes (and those of the recipient) alone.
In effect, the PC is an extension of yourself. It increases your potential and capabilities. Although it
may not help you actually think faster, it answers questions for you, stores information (be it numeric,
graphic, or audible), and manages your time and records. By helping you get organized, it cuts
through the normal clutter of the work day and helps you streamline what you do.

Programmability

A personal computer is versatile. Its design defines only its limitations, not the tasks that it can
perform. Within its capabilities, a PC can do almost anything. Its function is determined by programs,
which are the software applications that you buy and load. The same hardware can serve as a message
center, word processor, file system, image editor, or presentation display.


Programmability has its limits, of course. Some PCs adroitly edit prime time television shows, while
others are hard pressed to present jerky movie files gleaned off the Internet. The hardware you attach
to your PC determines its ultimate capabilities in such specialized applications. Underneath that
hardware, however, all PCs speak the same language and understand the same program code. In
effect, the peripheral hardware you install acts much like a program and adapts the PC for your
application.

Connectivity

A personal computer is cooperative. That means that it is connected and communicative. It can work
with other computer systems no matter their power, location, or even the standard they follow. A
computer is equally adept at linking to hand held Personal Digital Assistants and supercomputers.
You can exchange information and often even take control. Through a hard-wired modem or wireless
connection, you can tie your personal computer into the world-wide information infrastructure and
probe into other computers and their databases no matter their location.
PC connectivity takes many forms. The main link-up today is, of course, the Internet and specifically
its World Wide Web. One wire ties your PC and you into the entire digital world. Connectivity
involves a lot more than the Internet—and in many ways, much less, too. Despite the dominance of
the Web, you can still connect to bulletin boards (at least those that haven’t migrated to the web) or tie
into your own network. For example, one favored variety of connection brings together multiple
personal computers to form a network or workgroup. These machines can cooperate continuously,
sharing expensive resources, such as high speed printers and huge storage systems. Although such
networks are most common in businesses, building one in your home lets you share files, work in any
room, and just be as flexible as you want in what you do.

Accessibility

A personal computer is accessible. That is, you can get at one when you need it. It’s either on your
desk at work, tucked into a corner of your home office, or in your kid’s bedroom.
The PC is so accessible, even ubiquitous, because it is affordable. Save your lunch money for a few
weeks and you can buy a respectable PC. Solid starter machines cost about the same as a decent
television or mediocre stereo system.
More importantly, when you need to use a PC, you can. Today’s software is nearly all designed so
that you can use it intuitively. Graphic operating systems, online help, consistent commands, and
enlightened programmers have turned the arcana of computer control into something feared only by
those foolish enough not to try their hands on a keyboard.
A PC brings together those five qualities to help you do your work or extend your enjoyment. No
other modern appliance (for that is what PCs have become) has this unique mix of characteristics. In
fact, you could call your personal computer your interactive, dedicated, versatile, cooperative, and
accessible problem solver but the acronym PC is much easier to remember; besides, that’s what
everyone else is calling the contraption anyway.


Labeling Standards

The characteristics that define a PC add up to a lofty goal. In order to accomplish what you expect of
it, a PC has to run PC software. Although that statement sounds straightforward, it is complicated by
the least enduring aspect of progress: as technology races ahead, investments get left behind. After
nearly two decades of development, today’s PC is a beast quite unlike that original machine that
dropped off IBM’s first assembly line. More importantly, today’s PC software and even our
expectations are leagues apart from those of even a few years ago. A problem arises when you try to
bring two worlds or two dominions of time together.
PCs do remarkably well working back in time with software. New PCs still adroitly run most old
software, some stretching back to the dawn of civilization. This backward compatibility, although
expected, is actually the exception among computers. Most hardware improvements made in other
computer platforms have rendered old software incompatible, necessitating that you update both
hardware and software at the same time.
Going in the other direction, however, using new software with an old PC poses problems. Over the
years, Intel has added new features to its microprocessors, and peripheral manufacturers have
developed system add-ons that are so desirable they are expected in every PC. Program writers have
taken advantage of both the new microprocessor capabilities and the new peripherals, and the
software they write often won’t run on older PCs that lack the innovations. Even when new software
runs on old PCs, you often won’t want to try. Programmers work with the latest, fastest PCs and craft
their products to work with ample power. Older systems that lack modern performance may run
programs so slowly that most people won’t tolerate the results. In order that you can be assured that
your PC will work with their products and deliver adequate performance, software developers, both
individually and as groups, have developed standards for PCs.
The standards are actually certifications that PCs meet a minimum level of compliance. They are not
true industry standards, such as those published by major industry bodies like the ANSI (American
National Standards Institute), IEEE (Institute of Electrical and Electronic Engineers), or ISO
(International Standards Organization), nor are they mandated by any government organization.
Rather, they are specifications arbitrarily set by a single company or industry marketing organization.
The only enforcement power is that of trademark law: the promoter of the standard owns the
trademark that appears on the associated label, and it can designate who can use the trademark.
Two major organizations, the Multimedia PC Marketing Council and Microsoft Corporation, have
promoted such PC certification standards, and they judge whether a given product meets the standards
they set. Products that qualify are permitted to wear a label designating their compliance. The label
tells you that a given PC meets the organization’s requirements for acceptable operation with its
software. A recent addition has been the MMX certification logo from Intel. Unlike the other
certifications, the MMX logo represents hardware compatibility, and it is displayed on software
products.
The Multimedia PC Marketing Council developed the first qualification standard to assure that you
could easily select a PC that would run any multimedia software. They developed the concept of a
Multimedia PC, a computer equipped with all the peripherals necessary for producing sounds and
onscreen images acceptable in multimedia presentations. Producing the data to supply those
peripherals also required a powerful microprocessor and ample storage, so the Multimedia PC Council
also added minimum requirements for those aspects of the PC into its standards. As the programmers
created multimedia software with greater realism that demanded faster response and more power from
PCs, the council added a second, then a third, higher standard to earn certification. In 1996, The
Multimedia PC Marketing Council was superseded as the custodian of the MPC specifications by the
Multimedia PC Working Group, a committee of the Software Publishers Association.
The early incarnations of Windows software had long been criticized for draining too much power
from PCs, so during the development of Windows 95, Microsoft developed a set of minimal
requirements that a PC needed to meet to wear the Windows logo. This set of requirements became
known as PC 95. To match the needs of the next generation of Windows, Microsoft revised these
specifications into PC 97.
The Microsoft standards go beyond merely the system hardware, that is, what the system has, and
include the firmware, what the system does. Taken together with the Windows operating system,
these standards define what a PC can do.

Table 1.1. Comparison of Major PC Labeling Standards

Standard              MPC 1.0     MPC 2.0     MPC 3.0     PC 95           PC 97
Effective date        1990        May 1993    June 1995   November 1995   January 1997
Microprocessor type   386SX       486SX       Pentium     No requirement  Pentium
Microprocessor speed  16 MHz      25 MHz      75 MHz      No requirement  120 MHz
Memory                2 MB        4 MB        8 MB        4 MB            16 MB
Floppy disk           1.44 MB     1.44 MB     1.44 MB     Not required    Not required
Hard disk             30 MB       160 MB      540 MB      Not required    No requirement
CD ROM speed          1x          2x          4x
CD ROM access time    1000 ms     400 ms      250 ms
Serial port           One RS-232  One RS-232  16550A      One RS-232      USB
Parallel port         One SPP     One SPP     One SPP     One SPP         ECP
Game port             One         One         One or USB  Not required    Not required

In reading left to right, the table implies a progression, and this short list of standards demonstrates
exactly that. MPC 1.0 was the PC industry’s first attempt to break with old technology, to leave lesser
machines behind so you and your expectations could advance to a new level unfettered by past
limitations. All the ensuing standards lift the level higher, demanding more from every PC so that you
can take advantage of everything that twenty years of progress in small computers (not to mention a
few millennia of civilization) has to offer.

MPC 1.0


Figure 1.1 The MPC 1.0 logo.

The primary concern of the Multimedia PC Council was, of course, that you be delighted with the
multimedia software that you buy for your PC. As multimedia products became available in 1990,
many people were frustrated that their own computers, some of which might date back to XT days,
were unable to run their new software. The members of the council realized consumers need a quick
and easy way to determine if a given new computer could adequately handle the demands of new
multimedia software. Consequently, the MPC specifications summarized in Table 1.2 are aimed
primarily at performance and ensuring the inclusion of multimedia peripherals.

Table 1.2. Multimedia PC Requirements Under MPC 1.0

Feature                 Requirement
Microprocessor type     386SX
Microprocessor speed    16 MHz
Required memory         2 MB
Recommended memory      No recommendation
Floppy disk capacity    1.44 MB
Hard disk capacity      30 MB
CD ROM transfer rate    150 KB/sec
CD ROM access time      1000 milliseconds
Audio DAC sampling      22.05 KHz, 8-bit, mono
Audio ADC sampling      11.025 KHz, 8-bit, mono
Keyboard                101-key
Mouse                   Two-button
Ports                   Serial, parallel, MIDI, game

As the lowest industry performance qualification for PCs, the original MPC standard requires the least
from a PC. As originally propounded, the Multimedia PC Council asked only for 286-style
microprocessors running at speeds of 12 MHz or higher. The memory handling shortcomings of 286
and earlier microprocessors, however, quickly became apparent, so the council raised the requirement
to a minimum of a 386SX in the current MPC 1.0 standard. Because of this chip choice, the minimum
speed required is 16 MHz, the lowest rating Intel gives the 386SX chip. For memory, a system
complying with MPC 1.0 must have at least 2.0 megabytes of RAM, enough to get Windows 3.1 off
the ground. The specification also required a full range of mass storage devices, including a 3.5-inch
floppy disk drive capable of reading and writing 1.44MB media, a 30MB hard disk (small even at the
time), and a CD ROM drive.
Because at the time MPC 1.0 was created, CD ROMs were relatively new and unstandardized, the
specification defined several drive parameters. The minimum data transfer rate was set at a sustained
150 KB/sec, the rate required by stereophonic audio CD playback. The standard also
required the CD ROM drive to have an average seek time of 1 second or less, which fairly represented
the available technology of the time (although not, perhaps, user expectations). For software
compatibility, the standard demanded the availability of a Microsoft-compatible (MSCDEX 2.2)
driver that understood advanced audio program interfaces, as well as the ability to read the
fundamental CD standards (mode 1, with mode 2 and form 1 and 2 being optional).
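Those 1x figures follow directly from the CD format's fixed sector rate. The short Python sketch below is my own back-of-the-envelope check, not anything from the MPC documents; it derives the 150 KB/sec rate from the 75-sectors-per-second Mode 1 layout and, for comparison, the raw Red Book audio rate:

```python
# Back-of-the-envelope check of the 1x CD ROM figures quoted above.
# At 1x a CD delivers 75 sectors per second; a Mode 1 data sector
# carries 2,048 user bytes, while a Red Book audio sector is 2,352
# bytes of raw 44.1 KHz, 16-bit, stereo samples.

SECTORS_PER_SECOND = 75
MODE1_USER_BYTES = 2048
SAMPLE_RATE = 44_100          # audio samples per second, per channel
CHANNELS = 2
BYTES_PER_SAMPLE = 2          # 16-bit linear PCM

data_rate = SECTORS_PER_SECOND * MODE1_USER_BYTES        # user data, bytes/sec
audio_rate = SAMPLE_RATE * CHANNELS * BYTES_PER_SAMPLE   # raw audio, bytes/sec

print(data_rate // 1024)      # 150 -> the "150 KB/sec" 1x transfer rate
print(audio_rate)             # 176400 bytes/sec of raw audio samples
```

The two rates differ because a Mode 1 data sector spends 304 of its 2,352 bytes on headers and error correction, while an audio sector is all samples.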
Because audio is an implicit part of multimedia, the MPC 1.0 specification required an extensive list
of capabilities in addition to playing standard digital audio disks (those conforming with the Red
Book specification of the CD industry). The standard also required a front panel volume control for
the audio coming off music CDs, a requirement included in all ensuing MPC specifications. The
apparent objective was to allow you to listen to your CDs while you worked on other programs and
data sources.
Another part of MPC 1.0 was the requirement for a sound board that must be able to play back,
record, synthesize, and mix audio signals with well-defined minimum quality levels. The MPC 1.0
specifications required a digital to analog converter (for playback) and an analog to digital converter
(to sample and record audio). Under MPC 1.0, the requirements for each differed slightly.
The DAC (playback) circuitry requires a minimum of 8-bit linear PCM (pulse code modulation)
sampling with a 16-bit converter recommended. That 8-bit sampling was a true minimum, the same
level used by the telephone company for normal long-distance calls, hardly up to the level of a good
clock radio. Playback sampling rates were set at 11 and 22 KHz with CD-quality 44.1 KHz
"desirable." The lower of two standard sampling rates is a bit better than telephone quality (which is 8
KHz); the higher falls short of FM radio quality. The analog to digital conversion (recording)
sampling rate requirements include only linear PCM (that is, no compression) with low quality 11
KHz sampling, both 22 and 44.1 KHz being optional.
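Sampling rate, sample depth, and channel count translate directly into the data rate a sound board must move. The small function below is my own illustration, not part of any MPC document, of what each quality level mentioned above costs in uncompressed PCM bandwidth:

```python
def pcm_rate(sample_rate_hz, bits_per_sample, channels):
    """Bytes per second of uncompressed linear PCM audio."""
    return sample_rate_hz * (bits_per_sample // 8) * channels

# MPC 1.0 minimum playback quality: 11.025 KHz, 8-bit, mono
print(pcm_rate(11_025, 8, 1))     # 11025 bytes/sec

# The higher required playback rate: 22.05 KHz, 8-bit, mono
print(pcm_rate(22_050, 8, 1))     # 22050 bytes/sec

# The "desirable" CD-quality level: 44.1 KHz, 16-bit, stereo
print(pcm_rate(44_100, 16, 2))    # 176400 bytes/sec
```

The sixteen-fold spread between the minimum and the CD-quality rate is a fair measure of how little MPC 1.0 actually demanded of a sound board.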
In effect, the MPC 1.0 specification froze in place the state of the art in sound boards and CD ROM
players at the time of its creation while allowing the broadest possible range of PCs to bear the MPC
moniker. The only machines it outlawed outright were those with technology so old no reputable
computer company was willing to market them at any price. After all, the council was comprised of
people trying to sell PCs as well as multimedia software, and they wanted as many of their machines
as possible to qualify for certification.
Far from perfect, far from pure, MPC 1.0 did draw an important line, one that showed backward
compatibility has its limits. In effect, it said, "Progress has come, so let’s raise our expectations." In
hindsight, the initial standard didn't raise expectations or the MPC requirements high enough, but for
the first time it freed software publishers from the need to make their products backward-compatible
with every cobwebbed computer made since time began.

MPC 2.0

Figure 1.2 The MPC 2.0 logo.

As multimedia software became more demanding, the MPC 1.0 standard proved too low to guarantee
good response and acceptable performance. Consequently, in May, 1993, the Multimedia PC Council
published a new standard, MPC 2.0, for more advanced systems. As with the previous specification,
MPC 2.0 was held back by practical considerations: keeping hardware affordable for you and
profitable for the manufacturer. Although it set a viable minimum standard, something for
programmers to design down to, it did not represent a demanding, or even desirable, level of

http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/wrh01.htm (11 de 49) [23/06/2000 04:26:13 p.m.]


Winn L. Rosch Hardware Bible, Electronic Edition, Chapter 1

performance for multimedia systems. Table 1.3 summarizes the basic requirements of MPC 2.0.

Table 1.3. Multimedia PC Requirements Under MPC 2.0.

Feature Requirement
Microprocessor type 486SX
Microprocessor speed 25 MHz
Required memory 4 MB
Recommended memory 8 MB
Floppy disk capacity 1.44 MB
Hard disk capacity 160 MB
CD ROM transfer rate 300 KB/sec
CD ROM access time 400 milliseconds
Audio DAC sampling 44.1 KHz, 16-bit, stereo
Audio ADC sampling 44.1 KHz, 16-bit, stereo
Keyboard 101-key
Mouse Two-button
Ports Serial, parallel, MIDI, game

The most important aspect of MPC 2.0 is that it raised the performance level required by a PC in
nearly every hardware category, reflecting the vast improvement in technology in the slightly more
than two years since the release of MPC 1.0. It required more than double the microprocessor power,
with a 486SX chip running at 25 MHz being the minimal choice. More importantly, MPC 2.0
demanded 4 megabytes of RAM, with an additional 4 megabytes recommended.
While MPC 2.0 still required a 1.44 MB floppy so that multimedia software vendors need only worry
about one distribution disk format, it pushed its hard disk capacity recommendation up to 160 MB.
This huge, factor-of-five expansion reflects both the free-for-all plummet in disk prices and the
blimp-like expansion of multimedia software.
CD ROM requirements were toughened two ways by MPC 2.0. The standard demands a much faster
access time, 400 milliseconds versus one full second for MPC 1.0, and it requires double-speed
operation (a data transfer rate of 300 KB/sec). Although triple- and quadruple-speed CD ROM drives
were already becoming available when MPC 2.0 was adopted, most multimedia software of the time
gained little from them, so the double-speed requirement was the most cost-effective for existing
applications.
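All of these speed grades trace back to the 150 KB/sec transfer rate of a single-speed drive, the rate needed to play audio CDs in real time; an Nx drive simply multiplies it. The arithmetic can be sketched as follows (the 150 KB/sec base is the standard figure; the function names are mine):

```python
BASE_RATE_KB_PER_SEC = 150  # single-speed (1x) CD ROM rate, set by CD audio

def cd_transfer_rate(speed_multiple):
    """Transfer rate in KB/sec of an Nx CD ROM drive."""
    return BASE_RATE_KB_PER_SEC * speed_multiple

def seconds_to_stream(megabytes, speed_multiple):
    """Rough time to read a given amount of data, ignoring seek delays."""
    return megabytes * 1024 / cd_transfer_rate(speed_multiple)

double_speed = cd_transfer_rate(2)                   # 300 KB/sec, the MPC 2.0 minimum
full_disc_minutes = seconds_to_stream(650, 2) / 60   # about 37 minutes for a full disc
```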
Under MPC 2.0, the CD ROM drive must be able to play back commercially recorded music CDs and
decode their track identifications (using data embedded in subchannel Q). In addition, the
specification required that the drive handle extended architecture CD ROMs (and recommends the
extended architecture capabilities include audio) and be capable of handling PhotoCDs and other
disks written in multiple sessions.

The primary change in the requirement for analog to digital and digital to analog converters was that
MPC 2.0 made CD-quality required all the way. Sound boards under MPC 2.0 must allow both
recording and playback at full 44.1 KHz sampling in stereo with a 16-bit depth. Lower rate sampling
(11.025 and 22.05 KHz) must also be available. MPC 2.0 also required an integral synthesizer that can
produce multiple voices and play up to six melody notes and two percussion notes at the same time.
In addition, the sound system in an MPC 2.0 machine must be able to mix at least three sound sources
(four are recommended) and deliver them to a standard stereophonic output on the rear panel, which
you can plug into your stereo system or active loudspeakers. The three sources for the mixer included
Compact Disc audio from the CD ROM drive, the music synthesizer, and a wavetable synthesizer or
other digital to analog converter. An auxiliary input was also recommended. Each input must have an
8-step volume control.
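Digitally, the mixing the specification describes amounts to scaling each source by its volume setting and summing the samples, clamping the result so that loud passages don't wrap around. A minimal sketch of the idea (the function, its names, and the 16-bit clamping are my illustration, not anything the MPC text prescribes):

```python
def mix_sources(sources, volumes, steps=8):
    """Mix equal-length PCM sample streams, each scaled by a stepped volume.

    Volumes run from 0 (silent) to steps - 1 (full level), echoing the
    8-step volume control the specification requires on each input.
    """
    mixed = []
    for samples in zip(*sources):
        total = sum(s * v / (steps - 1) for s, v in zip(samples, volumes))
        # clamp to the 16-bit signed sample range instead of wrapping
        mixed.append(max(-32768, min(32767, int(total))))
    return mixed

# Three sources, the first two at full volume and the third silenced
out = mix_sources([[1000, 2000], [500, 500], [9999, 9999]], [7, 7, 0])
# out == [1500, 2500]
```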
An MPC 2.0 system must have at least a VGA display system (video board and monitor) with 640
pixel by 480 pixel resolution in graphics mode and the capability to display 65,536 colors (16-bit
color). The standard recommended that the video system be capable of playing back quarter-screen
(that is, 320 pixel by 200 pixel) video images at 15 frames per second.
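Sixteen-bit color earns its hue count by dividing one 16-bit word among the three primaries; the common arrangement gives five bits to red, six to green (the primary to which the eye is most sensitive), and five to blue. A sketch of that 5-6-5 packing (the layout is the usual convention, not something the MPC document itself spells out):

```python
def pack_rgb565(r, g, b):
    """Pack 8-bit-per-channel red, green, and blue into a 16-bit 5-6-5 word."""
    return ((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3)

white = pack_rgb565(255, 255, 255)   # 0xFFFF, every bit set
red   = pack_rgb565(255, 0, 0)       # 0xF800, the top five bits
total_hues = 2 ** 16                 # 65,536 displayable colors
```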
Port requirements under MPC 2.0 matched the earlier standard: parallel, serial, game (joystick), and
MIDI. Both a 101-key keyboard (or its equivalent) and a two-button mouse were also mandatory.

MPC 3.0

Figure 1.3 The MPC 3.0 logo.

As the demands of multimedia increased, the Multimedia PC Marketing Council once again raised the
hurdle in June, 1995, when it adopted MPC 3.0. The new standard pushed up hardware requirements
in nearly every area with the goal of achieving full-screen, full-motion video with CD-quality sound.
In addition, the council shifted its emphasis from specific hardware requirements to performance
requirements measured by a test suite developed by the council. Table 1.4 lists many of the basic
requirements of MPC 3.0.

Table 1.4. Multimedia PC Requirements Under MPC 3.0.

Feature Requirement
Microprocessor type Pentium or equivalent
Microprocessor speed 75 MHz
Required memory 8 MB
Minimum memory bandwidth 100 MB/sec
Floppy disk capacity 1.44 MB, optional in notebook PCs
Hard disk capacity 540 MB
Hard disk access time < 12 ms
Hard disk throughput > 3 MB/sec
CD ROM transfer rate 550 KB/sec
CD ROM access time 250 milliseconds
Audio DAC sampling 44.1 KHz, 16-bit, stereo
Audio ADC sampling 44.1 KHz, 16-bit, stereo
Graphics interface PCI 2.0 or later
Graphics performance 352x240x15 colors at 30 frames/sec
Keyboard 101-key
Mouse Two-button
Ports Serial, parallel, MIDI, game or USB
Modem V.34 with fax

The MPC 3.0 standard does not require any particular microprocessor. Rather, the standard notes that
a 75 MHz Pentium—or its equivalent from another manufacturer—is sufficient to achieve the
performance level required by the test suite.
For mass storage, MPC 3.0 retains the same floppy disk requirement, although it makes the inclusion
of any floppy drive optional in portable systems. More important are the hard disk requirements. To
give adequate storage, the standard mandates a 540 MB disk as the minimum, of which 500 MB must
be usable capacity. Recognizing the need for fast storage in video applications, the disk performance
requirements are demanding. Not only does the standard require an average access time of 12
milliseconds or less, it also asks for a high transfer rate as well. The disk interface must be able to pass
9 MB per second while the disk medium itself must be able to be read at a rate of 3 MB per second (a
buffer or cache in the drive itself making possible the faster interface transfers). The access timings
require that any MPC 3.0-compliant disk spin faster than 4000 RPM.
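The connection between spindle speed and access time is rotational latency: on average, the heads must wait half a revolution for the right sector to swing past. A quick check of the arithmetic (the half-revolution rule is standard disk math; the function name is mine):

```python
def average_rotational_latency_ms(rpm):
    """Average rotational latency: half of one revolution, in milliseconds."""
    return 60_000 / rpm / 2

# A 4000 RPM spindle alone eats 7.5 ms of the 12 ms access budget,
# leaving only about 4.5 ms for the head seek itself
latency = average_rotational_latency_ms(4000)   # 7.5
```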
CD storage must be compatible with all the prevailing standards, including standard CD ROM,
CD-ROM XA, Photo CD, CD-R (recordable CDs), CD Extra and CD-interactive. Drives must
achieve nearly a 4x transfer rate; the standard requires a 550 KB/sec transfer rate rather than the 600
KB/sec of a true 4x drive. All multimedia PCs, both desktop and portable, require CD drives, although
the access time requirements differ. Desktop units must have rated access times better than 250
milliseconds; portable units abide a less stringent 400 ms specification.
The audio requirements of MPC 3.0 extend not only to sampling and digitization but through the full
system from synthesis to speakers. The digital aspects of the audio system must handle at least
stereophonic CD-quality audio (44.1 KHz sampling with 16-bit depth) with Yamaha OPL3 synthesis
support (although use of the Yamaha chip is not a requirement). Speaker and amplifier quality is
strictly defined. Two-piece speaker systems must have a range from 120 Hz to 17.5 KHz with at least
three watts of power per channel (at 1% distortion) across the audio spectrum. A subwoofer, which
must be backed by at least 15 watts, extends the range down to 40 Hz or lower.
MPC 3.0 requires a compliant system to deliver full-motion video, that is, 30 frames per second, for
an image that fills about one-third of a standard VGA screen (352 by 240 resolution) with realistic
15-bit color (that’s five bits for each primary color). MPC 3.0 also requires support of a number of
video standards including PCI 2.0 or newer for the hardware connection with the video controller and
MPEG expansion (in hardware or software) for playback.
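The MPEG requirement follows directly from the numbers: stored uncompressed, the required video mode would overwhelm even the disk throughput the standard mandates. A sketch of the calculation (15-bit pixels occupy two bytes in practice, which is the assumption here):

```python
def raw_video_bytes_per_second(width, height, bytes_per_pixel, frames_per_second):
    """Bandwidth of an uncompressed video stream."""
    return width * height * bytes_per_pixel * frames_per_second

# 352x240 pixels, 15-bit color carried in 2 bytes, 30 frames/sec
bandwidth = raw_video_bytes_per_second(352, 240, 2, 30)   # 5,068,800 bytes/sec
# Roughly 5 MB/sec, well past the 3 MB/sec the disk medium must
# sustain, hence the need for MPEG expansion in hardware or software
```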
The basic ports required under previous MPC standards face further refinement under MPC 3.0. The
mandatory serial port must have a 16550A UART, and when you connect the mouse or other pointing
device to an MPC 3.0 system, you must still have at least one serial port available. The standard
allows for a USB connection to the mouse and for the use of USB instead of a game port (which
would otherwise be required). A modem complying with the V.34 standard and capable of sending
and receiving facsimile transmissions is also required. The standard parallel port and MIDI port are
carried over from previous incarnations of the MPC specifications.

PC 95

Although it aimed to set a minimum compatibility level for machines designed to run Windows 95,
the PC 95 standard was officially released by Microsoft months after the operating system became available.
The specification was released November 1, 1995. In early 1996, some of its requirements were made
more demanding, and it was augmented by several design "recommendations" intended to ensure more
satisfactory operation.
In truth, the primary concern with PC 95 is operability rather than performance. The standard does not
make specific requirements as to the microprocessor, its speed, or the disk capacity of a PC to earn the
"Designed for Windows 95" sticker. Instead it seeks compliance with specific standards—many of
which were originally promulgated by Microsoft—that link Windows 95 to your hardware.
Implicit in PCs designed for Windows 95 (or any modern operating system, for that matter) is the
need for a 386 or later microprocessor to take advantage of Windows 95’s advanced operating modes
and superior memory addressing. By the time Microsoft issued PC 95, however, any microprocessor
below a 486 was unthinkable in a new PC, and lower speed Pentiums had nearly become the
choice for entry-level systems. PC 95 does address the need for memory and sets its requirement at
Windows 95’s bare bones minimum, four megabytes. The 1996 revision adds a recommendation of
eight megabytes reserved exclusively for Windows.
PC 95 puts more emphasis on compliance with the Plug-and-Play standard, requiring BIOS support
for it. The proper operation of the automatic configuration features of Windows 95 makes this
level of support mandatory. The intention is to make PCs more accessible to people who do not wish
to be bothered by the details of technology.
The other chief requirements of the PC 95 standard include a minimum range of ports (one serial and
one basic parallel port), a pointing device, and a color display system that can handle basic graphics,
VGA-level resolution in 256 colors. Table 1.5 summarizes the major requirements of the PC 95
standard.

Table 1.5. Major Requirements of the "Designed for Windows 95" Label

Feature  Original Requirement  Revised PC 95 Requirement  Revised Recommendation  Chapter
Microprocessor NR NR NR 3
System memory 4 MB 8MB 8MB 4
16-bit I/O decoding No Yes 4 7
Local bus No Yes Yes 7
Sound board No No Yes 18
Parallel port SPP ECP ECP 19
Serial port RS-232 16550A UART 16550 UART 21
USB No No Yes 21
Display resolution 640x480x8 800x600x16 and 1024x768x8 800x600x16 and 1024x768x8 15
Display memory NR 1MB 2MB 15
Display connection ISA Local bus Local bus 15
Hard disk drive NR Required Required 10
CD ROM NR NR Recommended 12
Plug-and-Play BIOS Required Required Required 5
Software setting of resources No Yes Yes 5 and 6
Note: NR indicates the standard makes no specific requirement.

Even at the time of its release, PC 95 was more a description of a new PC than a prescription dictating
its design. Most systems already incorporated all of the requirements of PC 95 and stood ready to wear
the Windows 95 compatibility sticker.

PC 97

In anticipation of the successor to Windows 95, Microsoft developed a new, higher standard in
mid-1996. Generally known as PC 97 in the computer industry, its terms set up the requirements for
labeling PCs as designed for the Windows 95 successor, likely termed Windows 97.
The PC 97 standard is both more far-reaching and more diverse than that of PC 95. PC 97 sets
minimum hardware requirements, interface standards, and other required conformances necessary to
give the new operating system full functionality. In addition, it contemplates the fragmentation of the
PC industry into separate and distinct business and home applications. As a result, the standard is
actually three standards in one. One is for the minimal PC for the new operating system, Basic PC 97; another
defines the requirements of the business computer, Workstation PC 97; and a third describes the home
computer, the Entertainment PC 97. Table 1.6 lists the major requirements of each of these three
facets of PC 97.

Table 1.6. Major Requirements for PC 97 Designations


Feature Basic PC 97 Workstation PC 97 Entertainment PC 97 Chapter
Microprocessor Pentium Pentium Pentium 3
Speed 120 MHz 166 MHz 166 MHz 3
Memory 16 MB 32 MB 16 MB 4
ACPI Required Required Required 24
OnNow Required Required Required 24
Plug-and-Play Required Required Required 5
USB 1 port 1 port 2 ports 21
IEEE 1394 Recommended Recommended Required 21
ISA bus Optional Optional Optional 7
Keyboard Conventional Conventional USB or wireless 14
Pointing device Conventional Conventional USB or wireless 14
Wireless interface Recommended Recommended Remote control required 14
Audio Recommended Recommended Advanced audio 18
Modem or ISDN Recommended Recommended Required 22
Display resolution 800x600x16 1024x768x16 1024x768x16 15
Display memory 1 MB 2 MB 2 MB 15
Local bus video Required Required Required 15
Bus master controller Required Required Required 7 and 9

The most striking addition is Microsoft’s new demand for performance, setting the standard
microprocessor for an entry-level system—the Basic PC 97—at a 120 MHz Pentium, which, little
more than three years ago, would have been the top of the range. In addition, all systems require high
speed local bus video connections and bus mastering in their mass storage systems.
With the PC 97 standard, Microsoft has taken the initiative in moving PC expansion beyond the
limitations of the original 1981 PC design. Microsoft has relegated the conventional PC expansion
bus, best known as ISA and resident in nearly all PCs for more than a decade, to the status of a mere
option. Microsoft has put its full support behind the latest version of PCI, 2.1, as the next expansion
standard in PC 97 hardware.
The difference in memory requirements for business and home systems reflects the needs of their
respective applications. A business machine is more likely to run several simultaneous applications
and more likely to mash huge blocks of data, pushing up its memory needs. On the other hand, the
home system demands high speed connections such as IEEE 1394 (see Chapter 21, "Serial Ports") for
making the most of multimedia, an extra USB port, and mandatory modem and high quality audio.
Windows 95 took so long to boot up that Microsoft apparently feared some people would switch to
another operating system before it finished (or so you might assume by the company’s insistence on
the OnNow standard for quick booting). PC 97 also requires ACPI for conserving power (see Chapter
24, "Power") and full Plug-and-Play compliance for setup convenience. PC 97 also includes a wealth
of additional requirements for portable systems.


All systems designed for the next generation need the ability to render images in true to life,
16-bit color, which allows the simultaneous display of up to 65,536 hues.
The advances between PC 95 and PC 97 reflect the fast changes in computer technology. You simply
could not buy a machine with all the capabilities of PC 97 when Microsoft announced the PC 95
standard, yet PC 97 reflects the minimum you should expect in any new computer. Even if you don’t
target a "Designed for Windows" sticker for your next PC purchase, you should aim for a system
equipped with the features of the PC 97 standard.

MMX

With the introduction of its MMX-enhanced Pentium microprocessors in January, 1997, Intel added a
designation with the MMX logo that appears as a certification on products. This MMX label, as
shown in Figure 1.4, indicates that software uses a set of commands specially written to take
advantage of the new features of the MMX-enhanced microprocessors offered by Intel as well as
makers of Intel-compatible chips, including AMD and Cyrix. On some operations—notably the
manipulation of audio and video data as might be found in multimedia presentations and many
games—this software will see about a 50% to 60% performance enhancement when run in computers
equipped with an MMX-enhanced microprocessor.
Figure 1.4 The MMX certification logo.

MMX does not indicate the innate power of a microprocessor, PC, or program. Computers that use
MMX technology may deliver any of several levels of performance based on the speed of their
microprocessor, their microprocessor type, and the kind of software you run. A computer with an
MMX microprocessor will run MMX-certified software faster than a computer with the same type of
microprocessor without MMX. The MMX-enhanced system will show little performance advantage
over a system without MMX on ordinary software. From the other perspective, software without the
MMX label will run at about the same speed in either an MMX-enhanced PC or one without an MMX
processor, providing the two have the same speed rating (in megahertz) and the same processor type
(for example, Pentium).
In other words, the MMX label on a box of software is only half of what you need. To take advantage
of the MMX software label, you need an MMX-enhanced PC. And an MMX-enhanced PC requires
MMX-labeled software to distinguish itself from PCs without MMX.
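The kind of work MMX speeds up is easy to picture: audio samples and pixel values are small integers, and a single MMX instruction operates on eight of them packed into one 64-bit register, saturating at the maximum value rather than wrapping around. Below is a plain-Python rendering of the behavior of one such instruction, the packed unsigned saturating byte add (the element-by-element loop is my illustration; the real instruction processes all eight bytes at once):

```python
def packed_add_unsigned_saturate(a, b):
    """Emulate an MMX-style packed byte add: pairwise sums clamped at 255."""
    return [min(x + y, 255) for x, y in zip(a, b)]

# Brightening eight pixels in one conceptual operation; values near
# white clamp at 255 instead of wrapping around to near black
pixels = [100, 200, 250, 0, 64, 128, 255, 30]
brighter = packed_add_unsigned_saturate(pixels, [60] * 8)
# brighter == [160, 255, 255, 60, 124, 188, 255, 90]
```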

Variations on the PC Theme

What about the PC pretenders? True PCs aren’t the only small computers in use today. A variety of
hardware devices share many of the characteristics of the true PC—and sometimes even the
name—but differ in fundamental design or application. For example, some machines skimp on
features to gain some other benefit, such as increased ruggedness or small size. Others forego the
standard microprocessors and operating systems to gain added speed in specialized applications.
Others are simply repackaging (or renaming) jobs applied to conventional PCs.


In any case, any worthwhile discussion of PCs and their technologies requires that the various themes
and variations be distinguished. The following list includes the most common terms often used for
PCs and PC wannabes:

Workstation

The term "workstation" is often ambiguous because it commonly takes two definitions. The term
derives from the function of the machine. It is the computer at which an office worker stations
himself.
In some circles, the term "workstation" is reserved for a PC that is connected to a network server.
Because the server is often also a PC, the term "PC" doesn’t distinguish the two machines from one
another. Consequently, people often refer to a PC that functions as a network node as a workstation,
and the machine linking the workstations together is the server. (The term "node" can’t substitute for
workstation because devices other than PCs can also be nodes.)
The other application of the term "workstation" refers to powerful, specialized computers still meant
to be worked upon by a single individual. For instance, a graphic workstation typically is a powerful
computer designed to manipulate technical drawings or video images at high speed. Although this sort
of workstation has all the characteristics of a PC, engineers distinguish these machines with the
workstation term because the machines do not use the Intel-based microprocessor architecture typical
of PCs. Moreover, the people who sell these more powerful computers want to distinguish their
products from run-of-the-mill machines so they can charge more.
Of course, the term "workstation" also has a definition older than the PC, one that refers to the
physical place at which someone does work. Under such a definition, the workstation can be a desk,
cubicle, or workbench. In the modern office, even this kind of workstation includes a PC.

Server

Server describes a function rather than a particular PC technology or design. A server is a computer
that provides resources that can be shared by other computers. Those resources include files such as
programs, databases, and libraries; output devices such as printers, plotters, and film recorders; and
communications devices such as modems and Internet access facilities.
Traditionally a server is a fast, expensive computer. A server does not need to be as powerful as the
PCs it serves, however, particularly when serving smaller networks. Compared to the work involved
in running the graphic interface of a modern operating system, retrieving files from a disk drive and
dispatching them to other PCs is small potatoes indeed. An ordinary PC often suffices.
For example, if you want to run Windows 95 on a PC as a workstation, you need at least eight
megabytes of RAM to avoid the feeling of working in a thick pot of oatmeal. Dedicate the same PC as
a server of other Windows 95 PCs, and you’ll get good response with as little as the minimal four
megabytes. The difference is all the overhead needed to run a user interface that the server need not
bother with.


On the other hand, the server in a large corporation requires a high level of security and reliability
because multiple workstations and even the entire business may depend on it. Ideally, such a
big-business server displays fault tolerance, the ability to withstand the failure of one or more major
systems, such as a disk drive or one of several microprocessors, and continue uninterrupted operation.
Compared to ordinary PCs, most servers are marked by huge storage capacity. A server typically must
provide sufficient storage space for multiple users—dozens or even hundreds of them.
Most of the time, the server stands alone, unattended. No one sits at its keyboard and screen
monitoring its operation. It runs on autopilot, a slave to the other systems in the network. Although it
interacts with multiple PCs, it rarely runs programs in its own memory for a single user. Its software is
charged mostly with reading files and dispatching them to the proper place. In other words, although
the server interacts, it’s not interactive in the same way as an individual PC.

Simply Interactive Personal Computer

In early 1996 Microsoft coined the term SIPC to stand for Simply Interactive Personal Computer, the
software giant’s vision of what the home computer will eventually become. The name hardly reflects
the vision behind the concept. The SIPC is hardly simple, but a complete home entertainment device
that will be the centerpiece of any home electronics system, if not the home entertainment system. The
goal of the design initiative is to empower the PC with the capabilities of all its electronic rivals. It is
to be as adept at video games as any Sega or Nintendo system, as musically astute as a stereo (and
also able to create and play sounds as a synthesizer), and able to play video better than your VCR. In
other words, the SIPC is a home PC with a vengeance. In fact, it’s what the home PC was supposed to
be all along but what the hardware (and software) were never capable of delivering.
Compared to the specification of any home PC, the minimal SIPC is a powerhouse, starting with a
150 MHz Pentium with 16 MB of RAM and taking off from there. More striking, it is designed as a
sealed box, one that you need never tinker with. Expansion, setup, and even simple repair are
designed to be on the same level as maintaining a toaster.
Rather than something grand or some new, visionary concept, the SIPC is more a signpost pointing
the way the traditional PC is headed. The PC 97 specification covers nearly all the requirements of the
SIPC so, in effect, the SIPC is here now, lurking on your desk in the disguise of an ordinary PC.

Network Computer

The opposite direction for the home PC is one stripped of power instead of enhanced. Instead of being
a general purpose machine, this sort of home PC would be particularly designed for interactive use,
with data and software supplied by outside sources. The primary source is universally conceived as
the Internet. Consequently, this sort of design is often termed an Internet Box.
The same concept underlies the Network Computer, commonly abbreviated as NC. As with the
Internet Box, an NC is a scaled-down PC aimed primarily at making Internet connections. They allow
browsing the World Wide Web, sending and receiving electronic mail, and running Java-based
utilities distributed through the net, but they lack the extensive data storage abilities of true PCs.
Similar in concept and name but developed with different intentions and by different organizations is
the NetPC, a more conventional PC designed to lower maintenance costs.
The revised home PC concept of the Network Computer (NC rather than NetPC) envisions a machine
that can either have its own screen or work with the monitor that’s part of your home entertainment
system, typically a television set. In contrast, a related technology often called the Set Top Box was
meant to be an Internet link that used your television as the display. It earned its name from its likely
position, sitting on top of your television set.
Only the names Internet Box and Set Top Box (and even NC) are new. The concept harks back to the days
before PCs. The NC is, in fact, little more than a newer name for a smart terminal.
A terminal is the start and endpoint of a data stream, hence the name. It features a keyboard to allow
you to type instructions and data, which can then be relayed to a real computer. It also incorporates a
monitor or other display screen to let you see the data the computer sends back at you.
The classic computer terminal deals strictly with text. A graphic terminal has a display system
capable of generating graphic displays. A printing terminal substitutes a printer for the screen.
A smart terminal has built-in data processing abilities. It can run programs within its own confines,
programs which are downloaded from a real computer. The limitation of running only programs
originating outside the system distinguishes the smart terminal from a PC. In general, the smart
terminal lacks the ability to store programs or data amounting to more than the few kilobytes that fit
into its memory.
Although smart terminals are denizens of the past, the Internet Box has a promising future, or, more
correctly, a future of promises. Its advocates point out that its storage is almost limitless with the full
reach of the Internet at its disposal, not mere megabytes, not gigabytes, but terabyte territory and
beyond. But downloading that data is like trying to drain the ocean through a soda straw. Running
programs or simply viewing files across a network is innately slower than loading from a local hard
disk. Insert a severe bottleneck like a modem and local telephone line in the network connection, and
you’ll soon rediscover the entertainment value of watching paint dry and the constellations realign.
Instead of the instant response you get to your keystrokes with a PC, you face long delays whenever
your Internet Box needs to grab data or program code from across the net. Until satellite and cable
modems become the norm (both for you and Internet servers), slow performance will hinder both the
interactivity and the wide acceptance of the Internet Box.
The principal difference between a true Network Computer and its previous incarnations is that a
consortium of companies, including Apple, IBM, Oracle, Netscape, and Sun Microsystems, has
developed a standard for them. Called the NC Reference Profile, the standard is more an Internet’s
Greatest Hits of the specifications world. It requires compliance with the following languages and
protocols:
● The Java language, including the Java Application Environment, the Java Virtual Machine, and
Java class libraries

● HTML (HyperText Markup Language), the publishing format used for Web pages

● HTTP, which browser software uses to communicate across the Web


● Three E-mail protocols including Simple Mail Transfer Protocol, Internet Message Access
Protocol Version 4, and Post Office Protocol Version 3

● Four multimedia file formats including AU, GIF, JPEG, and WAV

● Internet Protocol, so that they can connect to any IP-based network including the World Wide
Web

● Transmission Control Protocol, also used on the Internet and other networks

● File Transfer Protocol, used to exchange files across the Internet

● Telnet, a standard that lets terminals access host computers

● Network File System, which allows the NC to have access to files on a host computer

● User Datagram Protocol, which lets applications share data through the file system

● Simple Network Management Protocol, which helps organize and manage the NC on the
network

● Dynamic Host Configuration Protocol, which lets the NC boot itself across the network and log
in

● Bootp, which is also required for booting across a network

● Several optional security standards
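Of the protocols listed above, HTTP is the easiest to illustrate: a browser's request to a Web server is nothing more than a few lines of text sent over a TCP connection. A minimal sketch follows (the host name in the comment is a placeholder, and real browser software naturally does far more):

```python
import socket

def build_get_request(host, path="/"):
    """Compose the text of a bare HTTP/1.0 GET request."""
    return "GET {} HTTP/1.0\r\nHost: {}\r\n\r\n".format(path, host)

def http_get(host, path="/", port=80):
    """Send the request over TCP and return the server's raw reply."""
    with socket.create_connection((host, port), timeout=10) as sock:
        sock.sendall(build_get_request(host, path).encode("ascii"))
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks)

# reply = http_get("example.com")   # reply begins with an HTTP status line
```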

The hardware requirements of the NC Reference Profile are minimal. They include a minimum screen
resolution at the VGA level (640 by 480 pixels), a pointing device of some kind, text input capability
that could be implemented with handwriting recognition or as a keyboard, and an audio output. The
original profile made no demand for video playback standards or internal mass storage such as a hard
or even floppy disk.
Sun Microsystems introduced the first official NC on October 22, 1996.
The NetPC, on the other hand, represents an effort by industry leaders Intel and Microsoft (assisted
by Compaq Computer Corporation, Dell Computer Corporation, and Hewlett-Packard Company) to
create a specialized business computer that lowers the overall cost of using and maintaining small
computers for a business. The NetPC and ordinary PC share many of the same features. They differ
mostly in that, as fits the name, the NetPC is designed so that it can be updated through a network
connection. The PC manager in a business can thus control all of the NetPCs in the business from his
desk instead of having to traipse around to every desk to make changes. In addition, the NetPC
eliminates most of the PC manager’s need to tinker with hardware. All hardware features of the
NetPC are controlled through software and can be altered remotely, through the network connection.
The case of the NetPC is, in fact, sealed so that the hardware itself is never changed.


The classic ISA expansion bus—long the hallmark of a conventional PC—is omitted entirely from the
NetPC. The only means of expansion provided for a NetPC is external, such as the PC Card and
CardBus slots otherwise used by notebook computers.
The design requirements to make a NetPC are set to become an industry standard, but at the time this
was written they were still under development. Intel and Microsoft introduced a NetPC draft standard
for industry comment on March 12, 1997.

Numerical Control Systems

In essence, a numerical control system is a PC with its hard hat on. That is, an NCS is a PC designed
for harsh environments such as factories and machine shops. One favored term for the construction of
the NCS is ruggedized, which essentially means made darned near indestructible with a thick steel or
aluminum case that’s sealed against oil, shavings, dust, dirt, and probably even the less-than-savory
language that permeates the shop floor.
The NCS gains its name from how it is used. As an NCS, a PC’s brain gets put to work controlling
traditional shop tools like lathes and milling machines. An operator programs the NCS—and thus the
shop tool—by punching numbers into the PC’s keypad. The PC inside crunches the numbers and
controls the movement of the operating parts of the tool, for example adjusting the pitch of a screw
cut on a lathe.
Not all NCSes are PCs, at least not the variety of PC with which this book is concerned. Some NCSes
are based on proprietary computers (more correctly, computerized control systems) built into the shop
tools they operate. But many NCSes are ordinary PCs reduced to their essence, stripped of the friendly
accouterments that make them desktop companions and pared down to a single, tiny circuit board that fits
inside a control box. They adhere to the PC standard to allow their software designers to use a familiar
and convenient platform for programming. They can use any of the languages and software tools
designed for programming PCs to bring their numerical control system to life.

Personal Digital Assistants

Today’s tiniest personal computers fit in the palm of your hand (provided your hand is suitably large)
or slide into your pocket (provided you dress like Bozo the Clown). To get the size down,
manufacturers limit the programmability and hardware capabilities of these handheld devices mostly
to remembering things that would otherwise elude you. Because they take on some of the same duties
as a good administrative assistant, these almost-computers are usually called Personal Digital
Assistants.
The PDA is a specialized device designed for a limited number of applications. After all, you have
few compelling reasons for building a spreadsheet on a screen three inches square or writing a novel
using a keyboard the size of a commemorative postage stamp. For the most part, the PDA operates as
a scheduling and memory augmentation system. It keeps track of the details of your life so you can
focus on the bigger issues.


The tiny dimensions and specialized application of the PDA make them unique devices that have no
real need to adhere to the same standards as larger PCs. Being impractically small for a usable
keyboard, many PDAs rely on alternate input strategies such as pointing with pens and handwriting
recognition. Instead of giving you the big picture, their smaller screens let you see just enough
information. Because of these physical restraints and differences in function, most PDAs have their
own hardware and their own operating systems, entirely unlike those used by PCs.
Although they are designed to work with PCs, they do not run as PCs or use PC software. In that way,
they fall short of PCs in versatility. They adroitly handle their appointed tasks, but they don’t aspire to
do everything; you won’t draw the blueprints of a locomotive on your PDA. In other words, although
they are almost intimately personal and are truly computers, they are not real PCs.

Laptops and Notebooks

A laptop or notebook PC is a PC but one with a special difference in packaging. The simple and most
misleading definition of a laptop computer is one that you can use on your lap. Most people use them
on airline tray tables or coffee tables, and with unusually good judgment, the industry has refrained
from labeling them as coffee table computers. Although the terms laptop and notebook are often used
interchangeably, this book prefers the term "notebook" to better reflect the various places you’re apt
to use one.
The better definition is that a laptop or notebook is a portable PC that is entirely self-contained. A
single package includes all processing power and memory, the display system, the keyboard, and a
stored energy supply (batteries). All laptop PCs have flat-panel display systems because they fit, both
physically into the case and into the strict energy budgets dictated by the power available from
rechargeable batteries.
Notebook computers almost universally use a clamshell design. The display screen folds flat atop the
keyboard to protect both when traveling. Like a clamshell, the two parts are hinged together at the
rear. This design and weight distinguish the laptop from the lunchbox PC, an older design that is
generally heavier (10-15 pounds) and marked by a keyboard which detaches from a vertical
processing unit that holds the display panel. Laptop PCs generally weigh from 5 to 10 pounds, most
falling almost precisely in the middle of that range.
PCs weighing less than five pounds are usually classified as sub-notebook PCs. The initial
implementations of sub-notebook PCs achieved their small dimensions by skimping on the keyboard,
typically reducing its size by 20% horizontally and vertically. More recent sub-notebook machines
trim their mass by slimming down—reducing their overall thickness to about one inch—instead of
paring length or width. The motivation for this change is pragmatic: a case sized for a larger screen
also has space enough for a normal-size keyboard. In addition, most sub-notebook machines are favored as remote
entry devices. That is, journalists prefer them for typing in drafts while they are on the move. The
larger keyboard makes the machines more suited to this application.


Software

The most important part of a PC isn’t the lump of steel, plastic, and ingenuity sitting on your lap or
desktop but what that lump does for you. The PC is a means to an end. Unless you’re a collector
specializing in objets d’art that depreciate at alarming rates, acquiring PC hardware is no end in itself.
After all, by itself a computer does nothing but take up space. Plug it in and turn it on, and it will
consume electricity—and sit there like an in-law overstaying his welcome. Like that in-law, a PC
without something to make it work represents capabilities without purpose. You need to motivate it,
tell it what to do, and how to do it. In other words, the reason you buy a PC is not to have the
hardware but to have something to run software.
Hardware is simply a means to an end. To fully understand and appreciate computer hardware, you
need a basic understanding of how it relates to software and how software relates to it. Computer
hardware and software work together to make a complete system that carries out the tasks you ask of
it.
Computer software is more than the box you buy when you want to play Microsoft Office or Myst.
The modern PC runs several programs simultaneously, even when you think you’re using just one.
These programs operate at different levels, each one taking care of its own specific job, invisibly
linking to the others to give you the impression you’re working with a single, smooth-running
machine.
The most important of these programs are the applications, the programs like Myst or Office that you
actually buy and load onto your PC, the ones that boldly emblazon their names on your screen every
time you launch them. In addition, you need utilities to keep your PC in top running order, protect
yourself from disasters, and automate repetitive chores. An operating system links your applications
and utilities together and to the actual hardware of your PC. At or below the operating system level,
you use programming languages to tell your PC what to do. You can write applications, utilities, or
even your own operating system with the right programming language.

Applications

The programs you run to do actual work on your PC are its applications, short for application
software, programs with a purpose, programs you apply to get something done. They are the dominant
beasts of computing, the top of the food chain. Everything else in your computer system, hardware
and software alike, exists merely to make your applications run. Your applications determine what
you need in your PC simply because they won’t run—or run well—if you don’t supply them with
what they want.
Strange as it may seem, little of the program code in most applications deals with the job for which
you buy it. Most of the program revolves around you, trying to make the rigorous requirements of the
computer hardware more palatable to your human ideas, aspirations, and whims. The part of the
application that serves as the bridge between your human understanding and the computer’s needs is
called the user interface. It can be anything from a typewritten question mark that demands you type
some response to a Technicolor graphic menu luring your mouse to point and click.


In fact, the most important job of most modern applications is simply translation. The user interface of
the program converts your commands, instructions, and desires into a form digestible by your
computer system. At the same time, the user interface reorganizes the data you give your PC into the
proper form for its storage or takes data from storage and reorganizes it to suit your requirements, be
they stuffing numbers into spreadsheet cells, filling database fields, or moving bits of sound and
images to speakers and screen.
The user interface acts as an interpreter and translates your actions and responses into digital code.
Although the actual work of making this translation is straightforward, even mechanistic (after all, the
computer that’s doing all the work is a machine), it demands a great deal of computer power. For
example, displaying a compressed bitmap that fills a quarter of your screen in a multimedia video
involves just a few steps. Your PC need only read a byte from memory, perform a calculation on it,
and send it to your monitor. The trick is in the repetition. While you may press only one key to start
the operation, your PC has to repeat those simple steps over a million times each second. Such a chore
can easily drain the resources available in your PC. That’s why you need a powerful PC to run today’s
video-intensive multimedia applications—and why multimedia didn’t catch on until microprocessors
with Pentium power caught up with the demands of your software.
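
The read-a-byte, transform-it, repeat loop described above can be sketched in a few lines. The example below expands a run-length-compressed scan line one pair of bytes at a time; the compression format is invented for illustration, but it captures the kind of simple step a PC repeats millions of times per second when playing video:

```python
# Expand a run-length-encoded scan line: the data alternates
# [count, pixel-value] pairs. A toy stand-in for the per-byte work
# a PC does when decompressing a bitmap; the format is invented.
def decode_rle(data):
    pixels = []
    for count, value in zip(data[0::2], data[1::2]):
        pixels.extend([value] * count)   # one simple step, repeated
    return pixels

print(decode_rle([3, 255, 2, 0]))  # -> [255, 255, 255, 0, 0]
```

Each iteration is trivial; the computational burden comes entirely from doing it a million or more times per second.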
The actual function of the program, the algorithms that it carries out, are only a small part of its code,
typically a tiny fraction. The hardcore computing work performed by major applications—the kind of
stuff that the first Univac and other big mainframe computers were created to handle—is amazingly
minimal. For example, even a tough statistical analysis may involve but a few lines of calculations
(though repeated again and again). Most of what your applications do is simply organize and convert
static data from one form to another.
Application software often is divided into several broad classes based on what the programs are meant
to accomplish. These traditional functions include:
● Word processing: the PC equivalent of the typewriter, with a memory and an unlimited
correction button

● Spreadsheets: the accountant’s ledger made automatic, to calculate arrays of numbers

● Databases: a filing system with instant access and the ability to sort itself automatically

● Communications: for linking with other computers, exchanging files, and browsing the Internet

● Drawing and painting: to create images such as blueprints and cartoon cels that can be filed
and edited with electronic ease

● Multimedia software: for displaying images and sound like a movie theater under the control of
an absolute dictator (you)

The lines between many of these applications are blurry. For example, many people find that
spreadsheets serve all their database needs, and most spreadsheets now incorporate their own graphics
for charting results.
Several software publishers completely confound the distinctions by combining most of these
application functions into a single package that includes a database, graphics, spreadsheet, and word
processing. These combinations are termed application suites. Ideally, they offer several advantages.
Because many functions (and particularly the user interface) are shared between applications, large
portions of code need not be duplicated as would be the case with stand-alone applications. Because
the programs work together, they better know and understand one another’s resource requirements,
which means you should encounter fewer conflicts and memory shortfalls. Because they are all
packaged together, you stand to get a better price from the publisher.
Although application suites have vastly improved since their early years, they sometimes show their
old weaknesses. Even the best sometimes fall short of the ideal, composed of parts that don’t
perfectly mesh together, created by different design teams over long periods. Even the savings can be
elusive because you may end up buying several applications you rarely use among the ones you want.
Nevertheless, suites like Microsoft Office have become popular because they are single-box solutions
that fill the needs of most people, handling more tasks with more depth than they ordinarily need. In
other words, the suite is an easy way to ensure you’ll have the software you need for almost anything
you do.

Utilities

Even when you’re working toward a specific goal, you often have to make some side trips. Although
they seem unrelated to where you’re going, they are as much a necessary part of the journey as any
other. You may run a billion-dollar pickle packing empire from your office, but you might never get
your business negotiations done were it not for the regular housekeeping that keeps the place clean
enough for visiting dignitaries to sit down.
The situation is the same with software. Although you need applications to get your work done, you
need to take care of basic housekeeping functions to keep your system running in top condition and
working most efficiently. The programs that handle these auxiliary functions are called utility
software.
From the name alone you know that utilities do something useful, which in itself sets them apart from
much of the software on today’s market. Of course, the usefulness of any tool depends on the job you
have to do—a pastry chef has little need for the hammer that so well serves the carpenter or PC
technician—and most utilities are crafted for some similar, specific need. You might want a better
desktop than Microsoft chooses to give you, an expedient way of dispensing with software you no
longer want, a means of converting data files from one format to another, backup and anti-virus
protection for your files, improved memory handling (and more free bytes), or diagnostics for finding
system problems. Each of these needs has spawned its own category of specialized utilities.
Some utilities, however, are useful to nearly everyone and every PC. No matter what kind of PC you
have or what you do with it, you want to keep your disk organized and running at top speed, to
prevent disasters by detecting disk problems and viruses, and to save your sanity should you
accidentally erase a file. The most important of these functions are included with today’s PC operating
systems, either integrated into the operating system itself or as individual programs that are part of the
operating system package.

DOS Utilities


DOS utilities are those that you run from the DOS command prompt. Because they are not burdened
with mating with the elaborate interfaces of more advanced operating systems, DOS utilities tend to
be lean and mean, a few kilobytes of code to carry out complex functions. Although they are often not
much to look at, they are powerful and can take direct hardware control of your PC.
Many DOS utilities give you only command-line control. You type in the program name followed by
one or more filenames and options (sometimes called "switches"), typically a slash or hyphen
followed by a more or less mnemonic letter identifying the option. Some DOS utilities run as true
applications with colorful screens and elaborate menus for control.
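
The slash-switch convention is simple enough to parse in a few lines. Here is a sketch (in Python, purely illustrative; the function name is invented) of how a utility might separate filenames from DOS-style switches:

```python
# Split a DOS-style command tail into filenames and switches.
# Switches are a slash followed by a mnemonic letter (e.g., DIR /W /P);
# DOS treated both switches and filenames case-insensitively,
# so everything is folded to uppercase.
def parse_dos_args(args):
    files, switches = [], set()
    for arg in args:
        if arg.startswith("/"):
            switches.add(arg[1:].upper())
        else:
            files.append(arg.upper())
    return files, switches
```

For example, `parse_dos_args(["report.txt", "/w", "/p"])` yields the filename `REPORT.TXT` and the switch set `{'W', 'P'}`.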
The minimal set of DOS utilities are those that come with the operating system. These are divided into
two types, internal and external.
Internal utilities are part of the DOS command interpreter, the small program that puts the C>
command prompt on your screen. Whenever the prompt appears on the screen, you can run the
internal utilities by typing the appropriate command name. Internal commands include the most basic
functions of your PC: copying files (COPY), displaying the contents of files (TYPE), erasing files
(DEL), setting a path (PATH), and changing the prompt itself (PROMPT).
External utilities are separate programs, essentially applications in miniature. Some are entire suites of
programs that aspire to be full-fledged applications and are distinguished only by what they do. Being
external from the operating system kernel, most external utilities load every time you use them, and
your system must be able to find the appropriate file to make the external utility work. In other words,
they must be in the directory you’re currently logged into or in your current search path. Because they
are essentially standalone programs, you can erase or overwrite them whenever you want, for example
to install a new or improved version.

Windows Utilities

Under advanced operating systems like Windows or OS/2, you have no need to distinguish internal
and external utilities. The operating systems are so complex that all utilities are essentially external.
They are separate programs that load when you call upon them. Although some functions are
integrated into the standard command shell of these operating systems so running them is merely a
matter of making a menu choice, they are nevertheless maintained as separate entities on disk. Others
have the feel of classic external utilities and must be started like ordinary applications, for example by
clicking on the appropriate icon or using the Run option. No matter how they are structured or how
you run them, however, utilities retain the same function: maintaining your PC.

Operating Systems

The basic level of software with which you will work on your PC is the operating system. It’s what
you see when you don’t have an application or utility program running. But an operating system is
much more than what you see on the screen. As the name implies, the operating system tells your PC
how to operate, how to carry on its most basic functions. Early operating systems were designed
simply to control how you read from and wrote to files on disks and were hence termed disk operating
systems (which is why DOS is called DOS). Today’s operating systems add a wealth of functions for
controlling every possible PC peripheral from keyboard (and mouse) to monitor screen.
The operating system in today’s PCs has evolved from simply providing a means of controlling disk
storage into a complex web of interacting programs that perform several functions. The most
important of these is linking the various elements of your computer system together. These linked
elements include your PC hardware, your programs, and you. In computer language, the operating
system is said to provide a common hardware interface, a common programming interface, and a
common user interface.
Of these interfaces only one, the operating system’s user interface, is visible to you. The user interface
is the place at which you interact with your computer at its most basic level. Sometimes this part of
the operating system is called the user shell. In today’s operating systems, the shell is simply another
program, and you can substitute one shell for another. In effect, the shell is the starting point to get
your applications running and the home base that you return to between applications. Under DOS, the
default shell is COMMAND.COM; under Windows versions through 3.11, the shell is Program
Manager (PROGMAN.EXE).
Behind the shell, the Application Program Interface, or API of the operating system, gives
programmers a uniform set of calls, key words that instruct the operating system to execute a built-in
program routine that carries out some pre-defined function. For example, a program can call a routine
from the operating system that draws a menu box on the screen. Using the API offers programmers
the benefit of having the complicated aspects of common program procedures already written and
ready to go. Programmers don’t have to waste their time on the minutiae of moving every bit on your
monitor screen or other common operations. The use of a common base of code also eliminates
duplication, which makes today’s overweight applications a bit more svelte. Moreover, because all
applications use basically the same code, they have a consistent look and work in a consistent manner.
This prevents your PC from looking like the accidental amalgamation of the late night work of
thousands of slightly aberrant engineers that it is.
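
The mechanics of an API call can be sketched in miniature. In this toy model, the "operating system" publishes one routine that draws a menu box and every application calls it instead of drawing its own; the routine name and behavior are invented for illustration and resemble no real operating system's API:

```python
# A toy API: the "operating system" exposes a named routine
# (DrawMenuBox) that any application can call by keyword. Names
# and behavior are invented, not taken from any real OS.
def draw_menu_box(width, height):
    top = "+" + "-" * width + "+"
    row = "|" + " " * width + "|"
    return "\n".join([top] + [row] * height + [top])

API = {"DrawMenuBox": draw_menu_box}

# An application asks the OS for a box rather than pushing pixels itself.
print(API["DrawMenuBox"](8, 2))
```

Because every application draws its boxes through the same routine, every box looks the same: that shared code is what gives an operating system its consistent appearance.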
As new technologies, hardware, and features get added to the repertory you expect from your PC, the
operating system maker must expand the API to match. Old operating systems required complete
upgrades or replacements to accommodate the required changes. Modern operating systems are more
modular and accept extensions of their APIs with relatively simple installations of new code. For
example, one of the most important additions to the collection of APIs used by Windows 95 was a set
of multimedia controls called DirectX. Although now considered part of Windows 95, this collection
of four individual APIs, later expanded to six, didn’t become available until two months after the
initial release of the operating system. The DirectX APIs supplemented the multimedia control
code of the original release with full 32-bit versions.
Compared to the API, the hardware interface of an operating system works in the opposite direction.
Instead of commands sent to the operating system to carry out, the hardware interface comprises a set
of commands the operating system sends out to make the hardware do its tricks. These commands
take a generalized form for a particular class of hardware. That is, instead of being instructions for a
particular brand and model of disk drive, they are commands that all disk drives must understand, for
example to read a particular cluster from the disk. The hardware interface (and the programmer)
doesn’t care about how the disk drive reads the cluster, only that it does—and delivers the results to
the operating system. The hardware engineer can then design the disk drive to do its work any way he
wants as long as it properly carries out the command.


In the real world, the operating system hardware interface doesn’t mark the line between hardware
and software. Rather, it draws the line between the software written as part of the operating system
and that written by (or for) the hardware maker. The hardware interface ties into an additional layer of
software called a driver that’s specifically created for the particular hardware being controlled. Each
different piece of hardware—sometimes down to the brand and model number—gets its own special
driver. Moreover, drivers themselves may be layered. For example, the most recent versions of
Windows use a mini-driver model in which a class of hardware devices gets one overall driver, and a
specific product gets matched to it by a mini-driver.
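
The division of labor between the generic hardware interface and device-specific drivers can be sketched like this (a simplified model with invented brand names, not actual operating system code):

```python
# The OS issues one generic command ("read cluster n"); each driver
# carries it out however its hardware requires. Both drivers below
# are hypothetical stand-ins for real disk hardware.
class DiskDriver:
    """The common hardware interface the operating system talks to."""
    def read_cluster(self, n):
        raise NotImplementedError

class BrandXDriver(DiskDriver):
    def read_cluster(self, n):
        return f"BrandX: cluster {n} via its own controller commands"

class BrandYDriver(DiskDriver):
    def read_cluster(self, n):
        return f"BrandY: cluster {n} fetched a completely different way"

def os_read(driver, n):
    # The operating system neither knows nor cares how the read happens.
    return driver.read_cluster(n)

print(os_read(BrandXDriver(), 42))
```

Swapping one driver for another changes nothing above the interface, which is exactly why a hardware maker can redesign a drive freely as long as it honors the commands.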
Not all operating systems provide a common hardware interface. In particular, DOS makes few
pretenses of linking hardware. It depends on software publishers to write their own links between their
program and specific hardware (or hardware drivers). This method of direct hardware control is fully
described in the "Linking Hardware and Software" section later in this chapter.
Outside of the shell of the user interface, you see and directly interact with little of an operating
system. The bulk of the operating system program code works invisibly (and continuously). And
that’s the way it’s designed to be.

Programming Languages

A computer program is nothing more than a list of instructions for a microprocessor to carry out. A
microprocessor instruction, in turn, is a specific pattern of bits, a digital code. Your computer sends
the list of instructions making up a program to your microprocessor one at a time. Upon receiving
each instruction, the microprocessor looks up what function the code says to do, then it carries out the
appropriate action.
Every microprocessor understands its own repertoire of instructions just as a dog might understand a
few spoken commands. Where your pooch might sit down and roll over when you ask it to, your
processor can add, subtract, move bit patterns around, and change them. Every family of
microprocessor has a set of instructions that it can recognize and carry out, the necessary
understanding designed into the internal circuitry of each chip. The entire group of commands that a
given microprocessor model understands and can react to is called that microprocessor’s instruction
set or its command set. Different microprocessor families recognize different instruction sets, so the
commands meant for one chip family would be gibberish to another. The Intel family of
microprocessors understands one command set; the IBM/Motorola PowerPC family of chips
recognizes an entirely different command set.
As a mere pattern of bits, a microprocessor instruction itself is a simple entity, but the number of
potential code patterns allows for incredibly rich command sets. For example, the Intel family of
microprocessors understands more than eight subtraction instructions, each subtly different from the
others.
Some microprocessor instructions require a series of steps to be carried out. These multi-step
commands are sometimes called complex instructions because of their composite nature. Although the
complex instruction looks like a simple command, it may involve much work. A simple instruction
would be something like "pound a nail"; a complex instruction may be as far ranging as "frame a
house." Simple subtraction or addition of two numbers may actually involve dozens of steps,
including the conversion of the numbers from decimal to binary (1’s and 0’s) notation that the
microprocessor understands.
Broken down to its constituent parts, a computer program is nothing but a list of symbols that
correspond to patterns of bits that signal a microprocessor exactly as letters of the alphabet represent
sounds that you might speak. Of course, with the same back-to-the-real-basics reasoning, an orange is
a collection of quarks squatting together with reasonable stability in the center of your fruit bowl. The
metaphor is apt. The primary constituents of an orange—whether you consider them quarks, atoms, or
molecules—are essentially interchangeable, even indistinguishable. By itself, every one is
meaningless. Only when they are taken together do they make something worthwhile (at least from a
human perspective), the orange. The overall pattern, not the individual pieces, is what’s important.
Letters and words work the same way. A box full of vowels wouldn’t mean anything to anyone not
engaged in a heated game of Wheel of Fortune. Match the vowels with consonants and arrange them
properly, and you might make words of irreplaceable value to humanity: the works of Shakespeare,
Einstein’s expression of general relativity, or the formula for Coca-Cola. The meaning is not in the
pieces but their patterns.
Everything that the microprocessor does consists of nothing more than a series of these step-by-step
instructions. A computer program is simply a list of microprocessor instructions. The instructions are
simple, but long and complex computer programs are built from them just as epics and novels are
built from the words of the English language. Although writing in English seems natural,
programming feels foreign because it requires that you think in a different way, in a different
language. You even have to think of jobs, such as adding numbers, typing a letter, or moving a block
of graphics, as a long series of tiny steps. In other words, programming is just a different way of
looking at problems and expressing the process of solving them.
These bit-patterns used by microprocessors can be represented as binary codes, which can be
translated into numbers in any format—hexadecimal and decimal being most common. In this form,
the entire range of these commands for a microprocessor is called machine language. Most human
beings find words or pseudo-words to be more comprehensible symbols. The list of word-like
symbols that control a microprocessor is termed assembly language.
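
A toy interpreter makes the idea concrete: each instruction is a numeric opcode that the "chip" looks up and carries out. The opcodes below are invented for illustration; real instruction sets (Intel's, PowerPC's) are vastly richer:

```python
# A toy machine language: each instruction is (opcode, operand).
# The loop plays the microprocessor, looking up each bit pattern
# and carrying out the matching action on a single accumulator.
LOAD, ADD, SUB = 0x01, 0x02, 0x03   # invented opcodes

def run(program):
    acc = 0
    for opcode, operand in program:
        if opcode == LOAD:
            acc = operand
        elif opcode == ADD:
            acc += operand
        elif opcode == SUB:
            acc -= operand
        else:
            raise ValueError(f"unknown opcode {opcode:#04x}")
    return acc

# The assembly-language view of the same program: LOAD 7, ADD 5, SUB 4
print(run([(LOAD, 7), (ADD, 5), (SUB, 4)]))  # -> 8
```

The numeric tuples are the machine-language form; the mnemonics LOAD, ADD, and SUB are its assembly-language representation. Both describe exactly the same bit patterns.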
You make a computer program by writing a list of commands for a microprocessor to carry out. At
this level, programming is like writing reminder notes for someone who takes every instruction
absolutely literally: first socks, then shoes.
This step-by-step command system is perfect for control freaks but otherwise is more than most
people want to tangle with. Even simple computer operations require dozens of microprocessor
operations, so writing complete lists of commands in this form can be more than many programmers
want to deal with. Moreover, machine and assembly language commands are microprocessor-specific:
they work only with the specific chips that understand them. Worse, because the microprocessor
controls all computer functions, assembly language programs usually work only on a specific
hardware platform.
It needs software, that list of instructions called a program, to make it work. But a program is more
than a mere list. It is carefully organized and structured so that the computer can go through the
instruction list step by step, executing each command in turn. Each builds on the previous instructions
to carry out a complex function. The program is essentially a recipe for a microprocessor.
Microprocessors by themselves only react to patterns of electrical signals. Reduced to its purest form,
the computer program is information that finds its final representation as the ever-changing pattern of
signals applied to the pins of the microprocessor. That electrical pattern is difficult for most people to

http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/wrh01.htm (31 de 49) [23/06/2000 04:26:14 p.m.]


Winn L. Rosch Hardware Bible, Electronic Edition, Chapter 1

think about, so the ideas in the program are traditionally represented in a form more meaningful to
human beings. That representation of instructions in human-recognizable form is called a
programming language.
Programming languages create their own paradox. You write programs on a computer to make the
computer run and do what you want it to do. But without a program the computer can do nothing.
Suddenly you’re confronted with a basic philosophic question: Which came first, the computer or the
program?
The answer really lays an egg. With entirely new systems, the computer and its first programs are
conceived and created at the same time. As a team of hardware engineers builds the computer
hardware, another team of software engineers develops its basic software. They both work from a set
of specifications, the list of commands the computer can carry out. The software engineers use the
commands in the specifications, and the hardware engineers design the hardware to carry out those
commands. With any luck, when both are finished the hardware and software come together perfectly
and work the first time they try. It’s sort of like digging a tunnel from two directions and hoping the
two crews meet in the middle of the mountain.
Moreover, programs don’t have to be written on the machine for which they are meant. The machine
that a programmer uses to write program code does not need to be able to actually run the code. It
only has to edit and store the program so that it can later be loaded and run on the target computer. For
example, programs for game machines are often written on more powerful computers called
development systems. Using a more powerful machine for writing gives programmers more speed and
versatility.
Similarly, you can create a program that runs under one operating system using a different operating
system. For example, you can write DOS programs under Windows. Moreover, you can write an
operating system program while running under another operating system. After all, writing the
program is little more than using a text editor to string together commands. The final code is all that
matters; how you get there is irrelevant to the final program. The software writer simply chooses the
programming environment he’s most comfortable in, just as he chooses the language he prefers to use.

Machine Language

The most basic of all coding systems for microprocessor instructions merely documents the bit pattern
of each instruction in a form that human beings can see and appreciate. Because this form of code is
an exact representation of the instructions that the machine itself understands, it is termed
machine language.
The bit pattern of electrical signals in machine language can be expressed directly as a series of ones
and zeros, such as 0010110. Note that this pattern directly corresponds to a binary (or base-two)
number. As with any binary number, the machine language code of an instruction can be translated
into other numerical systems as well. Most commonly, machine language instructions are expressed in
hexadecimal form (base-16 number system). For example, the 0010110 subtraction instruction
becomes 16(hex).
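The translation among these notations is purely mechanical, as a short Python sketch shows, using the subtraction example above:

```python
# One bit pattern, three notations: binary, hexadecimal, and decimal.
opcode = 0b0010110                   # the subtraction instruction above

binary_form = format(opcode, "07b")  # "0010110"
hex_form = format(opcode, "X")       # "16", i.e. 16(hex)
decimal_form = opcode                # 22 in base ten
```

The value never changes; only its written form does.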

Assembly Language


People can and do program in machine language. But the pure numbers assigned to each instruction
require more than a little getting used to. After weeks, months, or years of machine language
programming, you begin to learn which numbers do what. That’s great if you want to dedicate your
life to talking to machines but not so good if you have better things to do with your time.
For human beings, a better representation of machine language codes involves mnemonics rather than
strictly numerical codes. Descriptive word fragments can be assigned to each machine language code
so that 16(Hex) might translate into SUB (for subtraction). Assembly language takes this additional
step, enabling programmers to write in more memorable symbols.
Once a program is written in assembly language, it must be converted into the machine language code
understood by the microprocessor. A special program, called an assembler, handles the necessary
conversion. Most assemblers do even more to make the programmer’s life manageable. For example,
they enable blocks of instructions to be linked together into a block called a subroutine, which can
later be called into action by using its name instead of repeating the same block of instructions again
and again.
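In miniature, an assembler is little more than a lookup from mnemonic to bit pattern. The Python sketch below uses a made-up opcode table, not real microprocessor encodings:

```python
# Hypothetical opcode table; the mnemonics and values are for
# illustration only, not genuine machine-language encodings.
OPCODES = {"MOV": 0x88, "ADD": 0x04, "SUB": 0x16}

def assemble(source_lines):
    """Translate each mnemonic into its machine-language code."""
    return [OPCODES[line.strip()] for line in source_lines]

machine_code = assemble(["MOV", "SUB"])   # [0x88, 0x16]
```

A real assembler adds operands, labels, and subroutine handling, but the core job remains this substitution.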
Most of assembly language involves directly operating the microprocessor using the mnemonic
equivalents of its machine language instructions. Consequently, programmers must be able to think in
the same step-by-step manner as the microprocessor. Every action that the microprocessor does must
be handled in its lowest terms. Assembly language is consequently known as a low level language
because programmers write at the most basic level.

High Level Languages

Just as an assembler can convert the mnemonics and subroutines of assembly language into machine
language, a computer program can go one step further, translating more human-like instructions into
multiple machine language instructions that would be needed to carry them out. In effect, each
language instruction becomes a subroutine in itself.
The breaking of the one to one correspondence between language instruction and machine language
code puts this kind of programming one level of abstraction farther from the microprocessor. Such
languages are called high level languages. Instead of dealing with each movement of a byte of
information, high level languages enable the programmer to deal with problems as decimal numbers,
words, or graphic elements. The language program takes each of these high level instructions and
converts it into a long series of digital code microprocessor commands in machine language.
High level languages can be classified into two types: interpreted and compiled. Batch languages are a
special kind of interpreted language.

Compiled Languages

Compiled languages execute like a program written in assembler but the code is written in a more


human-like form. A program written with a compiled language gets translated from high level
symbols into machine language just once. The resulting machine language is then stored and called
into action each time you run the program. The act of converting the program from the English-like
compiled language into machine language is called compiling the program; to do this you use a
language program called a compiler. The original, English-like version of the program, the words and
symbols actually written by the programmer, is called the source code. The resulting machine
language makes up the program’s object code.
Compiling a complex program can be a long operation, taking minutes, even hours. Once the program
is compiled, however, it runs quickly because the computer needs only to run the resulting machine
language instructions instead of having to run a program interpreter at the same time. Most of the
time, you run a compiled program directly from the DOS prompt or by clicking on an icon. The
operating system loads and executes the program without further ado. Examples of compiled
languages include C, COBOL, FORTRAN, and Pascal.
Object-oriented languages are special compiled languages designed so that programmers can write
complex programs as separate modules termed objects. A programmer writes an object for a specific,
common task and gives it a name. To carry out the function assigned to an object, the programmer
needs only to put its name in the program without reiterating all the object’s code. A program may use
the same object in many places and at many different times. Moreover, a programmer can put a copy
of an object into different programs without the need to rewrite and test the basic code, which speeds
up the creation of complex programs. The newest and most popular programming languages like C++
are object-oriented.
Because of the speed and efficiency of compiled languages, compilers have been written that convert
interpreted language source code into code that can be run like any compiled program. A BASIC
compiler, for example, will produce object code that will run from the DOS prompt without the need
for running the BASIC interpreter. Some languages, like Microsoft Quick BASIC, incorporate both
interpreter and compiler in the same package.
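Although Python is normally interpreted, its built-in compile() function illustrates the compile-once, run-many idea: the source is translated a single time, and the resulting code object can then be executed repeatedly without retranslation:

```python
source = "total = sum(range(10))"

# Translate the source once; this is the "compile" step.
code_object = compile(source, "<example>", "exec")

# Run the already-translated form; this step can repeat at full
# speed with no further translation.
namespace = {}
exec(code_object, namespace)
result = namespace["total"]   # 45
```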
When PCs were young, getting the best performance required using a low level language. High level
languages typically include error routines and other overhead that bloats the size of programs and
slows their performance. Assembly language enabled programmers to minimize the number of
instructions they needed and to ensure that they were used as efficiently as possible.
Optimizing compilers do the same thing but better. By adding an extra step (or more) to the program
compiling process, the optimizing compiler checks to ensure that program instructions are arranged in
the most efficient order possible to take advantage of all the capabilities of a RISC microprocessor. In
effect, the optimizing compiler does the work that would otherwise require the concentration of an
assembly language programmer.
In the end, however, the result of using any language is the same. No matter how high the level of the
programming language, no matter what you see on your computer screen, no matter what you type to
make your machine do its daily work, everything the microprocessor does is reduced to a pattern of
digital pulses to which it reacts in knee-jerk fashion. Not exactly smart on the level of an Albert
Einstein or even the trouble-making kid next door, but the microprocessor is fast, efficient, and useful.
It is the foundation of every PC.

Interpreted Languages


An interpreted language is translated from human to machine form each time it is run by a program
called an interpreter. People who need immediate gratification like interpreted programs because they
can be run immediately, without intervening steps. If the computer encounters a programming error, it
can be fixed, and the program can be tested again immediately. On the other hand, the computer must
make its interpretation each time the program is run, performing the same act again and again. This
repetition wastes the computer’s time. More importantly, because the computer is doing two things at
once, both executing the program and interpreting it at the same time, it runs more slowly.
BASIC, an acronym for the Beginner’s All-purpose Symbolic Instruction Code, is the most familiar
programming language. BASIC, as an interpreted language, was built into every personal computer
IBM made in the first decade of personal computing. Another interpreted language, Java, promises to
change the complexion of the Internet.
Java, the Internet language, is also interpreted. Your PC downloads a list of Java commands and
converts them into executable form inside your PC. The interpreted design of Java helps make it
universal. The Java code contains instructions that any PC can carry out regardless of its operating
system. The Java interpreter inside your PC converts the universal code into the specific machine
language instructions your PC and its operating system understand.
In classic form, using an interpreted language involved two steps. First, you would start the language
interpreter program, which gave you a new environment to work in, complete with its own system of
commands and prompts. Once in that environment, you executed your program, typically starting it
with a "Run" instruction. More modern interpreted systems like Java hide the actual interpreter from
you. The Java program appears to run automatically by itself, although in reality the interpreter is
hidden in your Internet browser or operating system. Microsoft’s Visual Basic gets its interpreter
support from a run-time module which must be available to your PC’s operating system for Visual
Basic programs to run.
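The heart of any interpreter is a loop that reads one command at a time, decides what it means, and carries it out immediately. A minimal Python sketch, using a made-up two-instruction language:

```python
def interpret(program, variables):
    """Execute a list of commands in a tiny hypothetical language."""
    for line in program:
        operation, name, value = line.split()
        if operation == "SET":
            variables[name] = int(value)    # create or overwrite
        elif operation == "ADD":
            variables[name] += int(value)   # modify in place
    return variables

state = interpret(["SET x 1", "ADD x 4"], {})   # {"x": 5}
```

Every run repeats this decode-and-execute work, which is exactly the overhead a compiler removes.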

Batch Languages

A batch language allows you to submit a program directly to your operating system for execution.
That is, the batch language is a set of operating system commands that your PC executes sequentially
as a program. The resulting batch program works like an interpreted language in that each step gets
evaluated and executed only as it appears in the program.
Applications often include their own batch languages. These, too, are merely lists of commands for
the application to carry out in the order that you’ve listed them to perform some common, everyday
function. Communications programs use this type of programming to automatically log into the
service of your choice and even retrieve files. Databases use their own sort of programming to
automatically generate reports that you regularly need. The process of transcribing your list of
commands is usually termed scripting. The commands that you can put in your program scripts are
sometimes called the scripting language.
Scripting actually is programming. The only difference is the language. Because you use commands
that are second nature to you (at least after you’ve learned to use the program) and follow the syntax
that you’ve already learned running the program, the process seems more natural than writing in a


programming language.

Linking Hardware and Software

Software is from Venus. Hardware is from Mars—or, to ruin the allusion for sake of accuracy,
Vulcan. Software is the programmer’s labor of love, an ephemeral spirit that can only be represented.
Hardware is the physical reality, the stuff pounded out in Vulcan’s forge—enduring, unchanging, and
often priced like gold. Bringing the two together is a challenge that even self-help books would find
hard to manage. Yet every PC not only faces that formidable task but tackles it with aplomb (or so
you hope!).
Your PC takes ephemeral ideas and gives them the power to control physical things. In other words, it
allows its software to command its hardware. The challenge is making the link.
In the basic PC, every instruction in a program gets targeted on the microprocessor. Consequently, the
instructions can control only the microprocessor and don’t themselves reach beyond. The circuitry of
the rest of the computer and all of the peripherals connected to it must get their commands and data
relayed by the microprocessor. Somehow the microprocessor must be able to send signals to these
devices.

Device Interfaces

Two methods are commonly used, input/output mapping and memory mapping. Input/output mapping
relies on sending instructions and data through ports. Memory mapping requires passing data through
memory addresses. Ports and addresses are similar in concept but different in operation.

Input/Output Mapping

A port is an address but not a physical location. The port is a logical construct that operates as an
addressing system separate from the address bus of the microprocessor even though it uses the same
address lines. If you imagine normal memory addresses as a set of pigeon holes for holding bytes,
input/output ports act like a second set of pigeon holes on the other side of the room. To distinguish
which set of holes to use, the microprocessor controls a flag signal on its bus called memory-I/O. In
one condition it tells the rest of the computer the signals on the address bus indicate a memory
location; in its other state, the signals indicate an input/output port.
The microprocessor’s internal mechanism for sending data to a port also differs from memory access.
One instruction, move, allows the microprocessor to move bytes from any of its registers to any
memory location. Some microprocessor operations can even be performed in immediate mode,
directly on the values stored at memory locations.
Ports, however, use a pair of instructions, In, to read from a port, and Out, to write to a port. The


values read can only be transferred into one specific register of the microprocessor (called the
accumulator), and can only be written from that register. The accumulator has other functions as well.
Immediate operations on values held at port locations are impossible—which means a value stored in a
port cannot be changed by the microprocessor. It must load the port value into the accumulator, alter
it, then reload the new value back into the port.
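This load-alter-reload cycle can be sketched in Python, with a dictionary standing in for the separate port address space; the port address and values here are illustrative, not real hardware assignments:

```python
ports = {0x60: 0b0001}          # a hypothetical port and its contents

def port_in(address):           # the In instruction: port -> accumulator
    return ports[address]

def port_out(address, value):   # the Out instruction: accumulator -> port
    ports[address] = value

accumulator = port_in(0x60)     # load the port value
accumulator |= 0b0100           # alter it in the accumulator ...
port_out(0x60, accumulator)     # ... then reload it into the port
```

Three steps where a memory-mapped device would need one.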

Memory Mapping

The essence of memory mapping is sharing. The microprocessor and the hardware device it controls
share access to a specific range of memory addresses. To send data to the device, your microprocessor
simply moves the information into the memory locations exactly as if it were storing something for
later recall. The hardware device can then read those same locations to obtain the data.
Memory-mapped devices, of course, need direct access to your PC’s memory bus. Through this
connection, they can gain speed and operate as fast as the memory system and its bus connection
allows. In addition, the microprocessor can directly manipulate the data at the memory location used
by the connection, eliminating the multi-step load/change/reload process required by I/O mapping.
The most familiar memory-mapped device is your PC’s display. Most graphic systems allow the
microprocessor to directly address the frame buffer that holds the image which appears on your
monitor screen. This design allows the video system to operate at the highest possible speed.
The addresses used for memory mapping must be off-limits to the range in which the operating
system loads your programs. If a program should transgress on the area used for the hardware
connection, it can inadvertently change the data there—nearly always with bad results. Moreover, the
addresses used by the interface cannot serve any other function, so they take away from the maximum
memory addressable by a PC. Although such deductions are insignificant with today’s PCs, it was a
significant shortcoming for old systems, many of which were limited to 16 megabytes.

Addressing

To the microprocessor the difference between ports and memory is one of perception: memory is a
direct extension of the chip. Ports are the external world. Writing to I/O ports is consequently more
cumbersome and usually requires more time and microprocessor cycles.
I/O ports give the microprocessor and computer designer greater flexibility. And they give you a
headache when you want to install multimedia accessories.
Implicit in the concept of addressing, whether memory or port addresses, is proper delivery. You
expect a letter carrier to bring your mail to your address and not deliver it to someone else’s mailbox.
Similarly, PCs and their software assume that deliveries of data and instructions will always go where
they are supposed to. To assure proper delivery, addresses must be correct and unambiguous. If
someone types a wrong digit on a mailing label, it will likely get lost in the postal system.
In order to use port or memory addresses properly, your software needs to know the proper addresses
used by your peripherals. Many hardware functions have fixed or standardized addresses that are the


same in every PC. For example, the memory addresses used by video boards are standardized (at least
in basic operating modes), and the ports used by most hard disk drives are similarly standardized.
Programmers can write the addresses used by this fixed-address hardware into their programs and not
worry whether their data will get where it’s going.
The layered BIOS approach helps eliminate the need to write explicit hardware addresses in
programs. Drivers accomplish a similar function. They are written with the necessary hardware
addresses built in.

Resource Allocation

The basic hardware devices were assigned addresses and memory ranges early in the history of the PC
and for compatibility reasons have never changed. These fixed values include those of serial and
parallel ports, keyboards, disk drives, and the frame buffer that stores the monitor image. Add-in
devices and more recent enhancements to the traditional devices require their own assignments of
system resources. Unfortunately, beyond the original hardware assignments there are no standards for
the rest of the resources. Manufacturers consequently pick values of their own choices for new
products. More often than you’d like, several products may use the same address values.
Manufacturers attempt to avoid conflicts by allowing a number of options for the addresses used by
their equipment. You select among the choices offered by manufacturers using switches or jumpers
(on old technology boards) or through software (new technology boards, including those following the
old Micro Channel and EISA standards). The latest innovation, Plug-and-Play, attempts to put the
responsibility for properly allocating system resources in the hands of your PC and its operating
system, although the promise often falls short of reality when you mix new and old products.
With accessories that use traditional resource allocation technology, nothing prevents your setting the
resources used by one board to the same values used by another in your system. The result is a
resource conflict that may prevent both products from working. Such conflicts are the most frequent
cause of problems in PCs, and eliminating them was the goal of the modern, automatic resource
allocation technologies.

BIOS

The Basic Input/Output System or BIOS of a PC has many functions, as discussed in Chapter 5. One
of these is to help match your PC’s hardware to software.
No matter the kind of device interface (I/O mapped or memory mapped) used by a hardware device,
software needs to know the addresses it uses to take control of it. Using direct hardware control
requires that programs or operating systems are written using the exact values of the port and memory
addresses of all the devices installed in the PC. All PCs running such software that takes direct
hardware control consequently must assign their resources identically if the software is to have any
hope of working properly.
PC designers want greater flexibility. They want to be able to assign resources as they see fit, even to


the extent of making some device that might be memory-mapped in one PC into an I/O-mapped
device in another. To avoid permanently setting resource values and forever locking all computer
designs to some arbitrary standard, one that might prove woefully inadequate for future computer
designs, the creators of the first PCs developed BIOS.
The BIOS is program code that’s permanently recorded (or semi-permanently in the case of Flash
BIOS systems) in special memory chips. The code acts like the hardware interface of an operating
system but at a lower level; it is a hardware interface that’s independent of the operating system.
Programs or operating systems send commands to the BIOS, and the BIOS sends out the instructions
to the hardware using the proper resource values. If the designer of a PC wants to change the
resources used by the system hardware in a new PC, he only has to change the BIOS to make most
software work properly. The BIOS code of every PC includes several of these built-in routines to
handle accessing floppy disk drives, the keyboard, printers, video, and parallel and serial port
operation.
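The indirection amounts to a lookup table: the program names a service, and only the BIOS knows this machine's actual resource value. The service name and port address in this Python sketch are invented for illustration:

```python
class Bios:
    """A stand-in for the BIOS layer; the resource table is per-machine."""
    def __init__(self, resource_table):
        self.resources = resource_table   # this machine's assignments

    def write(self, service, value, ports):
        # Only the BIOS knows the real address behind the service name.
        ports[self.resources[service]] = value

def print_char(bios, ports, char):
    # Application code: the same call works on any machine, because
    # it never mentions a hardware address.
    bios.write("printer", ord(char), ports)

ports = {}
print_char(Bios({"printer": 0x2F8}), ports, "A")
```

Change the table, and every program that calls the service keeps working untouched.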

Device Drivers

Device drivers have exactly the same purpose as the hardware interface in the BIOS code. They link
your system to another device by giving your PC a handy set of control functions. Drivers simply take
care of devices not in the repertory of the BIOS. Rather than being permanently encoded in the
memory of your PC, drivers are software that must be loaded into your PC’s memory. As with the
BIOS links, the external device driver provides a library of programs that it can easily call to carry out
a complex function of the target hardware or software device.
All device drivers have to link with your existing software somehow. The means of making that link
varies with your operating system. As you should expect, the fundamental device driver architecture is
that used by DOS. Drivers that work with DOS are straightforward, single-minded, and sometimes
dangerous. Advanced operating systems like Windows and OS/2 have built-in hooks for device
drivers that make them more cooperative and easier to manage.
You need to tangle with device drivers because no programmer has an unlimited imagination. No
programmer can possibly conceive of every device that you’d want to link to your PC. In fact,
programmers are hard pressed to figure out everything you’d want to do with your PC—otherwise
they’d write perfect programs that would do exactly what you want.
Thanks to an industry with a heritage of inter-company cooperation only approximated by a good pie
fight, hardware designers tend to go their own directions when creating the control systems for their
products. For example, the command one printer designer might pick for printing graphics dots may
instruct a printer of a different brand to advance to the next sheet of paper. Confuse the codes, and
your office floor will soon look like the training ground for housebreaking a new puppy.
Just about every class of peripheral has some special function shared with no other device. Printers
need to switch ribbon colors; graphics boards need to put dots on screen at high resolution; sound
boards need to blast fortissimo arpeggios; video capture boards must grab frames; and mice have to do
whatever mice do. Different manufacturers often have widely different ideas about the best way to
handle even the most fundamental functions. No programmer or even collaborative program can ever
hope to know all the possibilities. It’s even unlikely that you could fit all the possibilities into a BIOS
with fewer chips than a Las Vegas casino or operating system with code that would fit onto a stack of


disks you could carry. There are just too many possibilities.
Drivers let you customize. Instead of packing every control or command you might potentially need,
the driver packages only those that are appropriate for a particular product. If all you want is to install
a sound board, your operating system doesn’t need to know how to capture a video frame. The driver
contains only the command specific to the type, brand, and model of a product that you actually
connect to your PC.
Device drivers give you a further advantage. You can change them almost as often as you change your
mind. If you discover a bug in one driver—say sending an upper case F to your printer causes it to
form feed through a full ream of paper before coming to a panting stop—you can slide in an updated
driver that fixes the problem. In some cases, new drivers extend the features of your existing
peripherals because programmers didn’t have enough time or inspiration to add everything to the
initial release.
The way you and your system handle drivers depends on your operating system. DOS, 16-bit versions
of Windows, Windows 95, and OS/2 each treat drivers somewhat differently. All start with the model
set by DOS, then add their own innovations. All 16-bit versions of Windows run under DOS, so
require that you understand (and use) some DOS drivers. In addition, these versions of Windows add
their own method of installing drivers as well as several new types of drivers. Windows 95
accommodates both DOS and 16-bit Windows drivers to assure you of compatibility with your old
hardware and software. In addition, Windows 95 brings its own 32-bit protected-mode drivers and a
dynamic installation scheme. OS/2 also follows the pattern set by DOS but adds its own variations as
well.
Driver software matches the resource needs of your hardware to your software applications. The
match is easy when a product doesn’t allow you to select among resource values; the proper addresses
can be written right into the driver. When you can make alternate resource allocations, however, the
driver software needs to know which values you’ve chosen. In most cases, you make your choices
known to the driver by adding options to the command that loads the driver (typically in your PC’s
CONFIG.SYS) or through configuration files. Most new add-in devices include an installation
program that indicates the proper options to the driver by adding the values to the load command or
configuration file, though you can almost always alter the options with a text-based editor.
This complex setup system gives you several places to make errors that will cause the add-in device,
or your entire PC, to operate erratically or fail altogether. You might make duplicate resource
assignments, mismatch drivers and options, or forget to install the drivers at all. Because multimedia
PCs use so many add-in devices, they are particularly prone to such problems. Sound boards in
particular pose installation problems because they usually incorporate several separate hardware
devices (the sound system proper, a MIDI interface, and often a CD ROM interface) each of which
has its own resource demands.

Hardware Components

Before you can jump into any discussion about personal computers, you have to speak the language.
You can’t talk intelligently about anything if you don’t know what you’re talking about. You need to
know the basic terms and buzzwords so you don’t fall under some charlatan’s spell.

http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/wrh01.htm (40 de 49) [23/06/2000 04:26:14 p.m.]


Winn L. Rosch Hardware Bible, Electronic Edition, Chapter 1

Every PC is built from an array of components, each of which serves a specific function in making the
overall machine work. As with the world of physical reality, a PC is built from fundamental elements
combined together. Each of these elements adds a necessary quality or feature to the final PC. These
building blocks are hardware components, built of electronic circuits and mechanical parts to carry
out a defined function. Although all of the components work together, they are best understood by
examining them and their functions individually. Consequently this book is divided into sections and
chapters by component.
Over the years of the development of the PC, the distinctions between many of these components
have turned out not to be hard and fast. In the early days of PCs, most manufacturers followed the
same basic game plan using the same components in the same arrangement, but today greater
creativity and diversity rules. What once were separate components have merged together; others have
been separated out. Their functions, however, remain untouched. For example, although modern PCs
may lack the separate timer chips of early machines, the function of the timer has been incorporated
into the support circuitry chipsets.
For purposes of this book and discussion, we’ll divide the PC into several major component areas,
including the system unit, the mass storage system, the display system, peripherals, and connectivity
features. Each of these major divisions can be, in turn, subdivided into the major components (or
component functions) required in a complete PC.

System Unit

The part of a PC that most people usually think of as the computer—the box that holds all the
essential components except, in the case of desktop machines, the keyboard and monitor—is the
system unit. Sometimes called CPU—for Central Processing Unit, a term also used to describe
microprocessors as well as mainframe computers—the system unit is the basic computer component.
It houses the main circuitry of the computer and provides the jacks (or outlets) that link the computer
to the rest of its accouterments including the keyboard, monitor, and peripherals. A notebook
computer combines all of these external components into one but is usually called simply the
computer rather than the system unit or CPU.
One of the primary functions of the system unit is physical. It gives everything in your computer a
place to be. It provides the mechanical mounting for all the internal components that make up your
computer, including the motherboard, disk drives and expansion boards. The system unit is the case of
the computer that you see and everything that is inside it. The system unit supplies power to operate
the PC and its internal expansion, disk drives, and peripherals.

Motherboard

The centerpiece of the system unit is the motherboard. All the other circuitry of the system unit is
usually part of the motherboard or plugs directly into it.
The electronic components on the motherboard carry out most of the function of the machine: running
programs, making calculations, even arranging the bits that will display on the screen.

Because the motherboard defines each computer’s functions and capabilities and because every
computer is different, it only stands to reason that every motherboard is different, too. Not exactly.
Many different computers have the same motherboard designs inside. And oftentimes a single
computer model might have any of several different motherboards depending on when it came down
the production line (and what motherboard the manufacturer got the best deal on).
The motherboard holds the most important elements of your PC, those that define its function and
expandability. These include the microprocessor, BIOS, memory, mass storage, expansion slots, and
ports.

Microprocessor

The most important of the electronic components on the motherboard is the microprocessor. It does
the actual thinking inside the computer. Which microprocessor, of the dozens currently available,
determines not only the processing power of the computer but also what software language it
understands (and thus what programs it can run).
Many older computers also had a coprocessor that added more performance to the computer on some
complex mathematical problems such as trigonometric functions. Modern microprocessors generally
internally incorporate all the functions of the coprocessor.

Memory

Just as you need your hands and workbench to hold tools and raw materials to make things, your PC’s
microprocessor needs a place to hold the data it works on and the tools to do its work. Memory, which
is often described by the more specific term RAM (which means Random Access Memory), serves as
the microprocessor’s workbench. Usually located on the motherboard, this memory is what your PC’s
microprocessor uses to carry out its calculations. The amount and architecture of the memory of a system
determines how it can be programmed and, to some extent, the level of complexity of the problems
that it can work on. Modern software often requires that you install a specific minimum of
memory—a minimum measured in megabytes—to execute properly. With modern operating systems,
more memory often equates to faster overall system performance.

BIOS

A computer needs a software program to work. It even needs a simple program just to turn itself on
and be able to load and read software. The Basic Input/Output System or BIOS of a computer is a set
of permanently recorded program routines that gives the system its fundamental operational
characteristics, including instructions telling the computer how to test itself every time it is turned on.
In older PCs, the BIOS determines what the computer can do without loading a program from disk
and how the computer reacts to specific instructions that are part of those disk-based programs. Newer
PCs may contain simpler or more complex BIOSes. A BIOS can be as simple as a bit of code telling
the PC how to load the personality it needs from disk. Some newer BIOSes also include a system to
help the machine determine what options you have installed and how to get them to work best
together.
At one time, the origins of a BIOS determined the basic compatibility of a PC. Newer
machines—those made in the last decade—are generally free from worries about compatibility. The
only compatibility issue remaining is whether a given BIOS supports the Plug-and-Play standard that
allows automatic system configuration (which is a good thing to look for in a new PC but its absence
is not fatal in older systems).
Modern operating systems automatically replace the BIOS code with their own software as soon as
your PC boots up. For the most part, the modern BIOS only boots and tests your PC, then steps out of
the way so that your software can get the real work done.

Support Circuits

The support circuitry on your PC’s motherboard links its microprocessor to the rest of the PC. A
microprocessor, although the essence of a computer, is not a computer in itself (if it were, it would be
called something else, such as a computer). The microprocessor requires additional circuits to bring it
to life: clocks, controllers, and signal converters. Each of these support circuits has its own way of
reacting to programs, and thus helps determine how the computer works.
In today’s PCs, all of the traditional functions of the support circuitry have been squeezed into
chipsets, which are relatively large integrated circuits. Because most PCs are now based on a small
range of microprocessors, their chipsets distinguish their motherboards and performance as much as
do their microprocessors. In fact, for some folks the choice of chipset is a major purchasing criterion.

Expansion Slots

Exactly as the name implies, the expansion slots of a PC allow you to expand its capabilities by
sliding in accessory boards, cleverly termed expansion boards. The slots are spaces inside the system
unit of the PC that provide special sockets or connectors to plug in your expansion boards. The
expansion slots of notebook PCs accept modules the size of credit cards that deliver the same
functions as expansion boards.
The standards followed by the expansion slots in a PC determine both what boards you can plug in
and how fast the boards can perform. Over the years, PCs have used several expansion slot standards.
In new PCs, the choices have narrowed to three—and you might want all of them in your next system.

Mass Storage

To provide your computer with a way to store the huge amounts of programs and data that it works
with every day, your PC uses mass storage devices. In nearly all of today’s computers, the primary
repository for this information is a hard disk drive. Floppy disks and CD ROM drives give you a way
of transferring programs and data to (and from) your PC. One or more mass storage interfaces link the
various storage systems to the rest of your PC. In modern systems, these interfaces are often part of
the circuitry of the motherboard.

Hard Disk Drives

The basic requirements of any mass storage system are speed, capacity, and low price. No technology
delivers as favorable a combination of these virtues as the hard disk drive, now a standard part of
nearly every PC. The hard disk drive stores all of your programs and other software so that they can
be loaded into your PC’s memory almost without waiting. In addition, the hard disk also holds all the
data you generate with your PC so that you can recall and reuse it whenever you want. In general, the
faster the hard disk and the more it can hold, the better.
Hard disks also have their weaknesses. Although they are among the most reliable mechanical devices
ever made—some claim to be able to run for 30 years without a glitch—they lack some security
features. The traditional hard disk is forever locked inside your PC, and that makes the data stored on
it vulnerable to any evil that may befall your computer. A thief or disaster can rob you of your system
and your data in a single stroke. Just as you can’t get the typical hard disk out of your PC to store in a
secure place, the hard disk gives you no easy way to put large blocks of information or programs into
your PC.

CD ROM Drives

Getting data into your PC requires a distribution medium, and when you need to move megabytes, the
medium of choice today is the CD ROM drive. Software publishers have made the CD ROM their
preferred means of getting their products to you. A single CD that costs about the same as a floppy
disk holds hundreds of times more information and keeps it more secure. CDs are vulnerable to
neither random magnetic fields nor casual software pirates. CD ROM drives are a necessary part of all
multimedia PCs, which means just about any PC you’d want to buy today.
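The "hundreds of times" figure is easy to check with a little arithmetic, assuming the standard nominal capacities of the day (about 650 MB for a CD ROM, 1.44 MB for a high-density floppy disk):

```python
# Rough capacity comparison between a CD ROM and a floppy disk.
# 650 MB and 1.44 MB are the common nominal figures, used here
# purely for illustration.
cd_mb = 650
floppy_mb = 1.44

print(round(cd_mb / floppy_mb))  # -> 451, roughly 450 floppies per CD
```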
The initials stand for Compact Disc, Read-Only Memory, which describe the technology at the time it
was introduced for PC use. Although today’s CD ROMs are based on the same silver laser-read discs
that spin as CDs in your stereo system, they are no longer read-only and soon won’t be mere CDs.
Affordable drives to write your own CDs with computer data or stereo music are readily available.
Many makers of CD ROM drives are now shifting to the DVD (Digital Video Disc) standard to give
their products additional storage capacity.

Floppy Disk Drives

Inexpensive, exchangeable, and technically unchallenging, the floppy disk was the first, and at one
time only, mass storage system of many PCs. Based on well-proven technologies and mass produced
by the millions, the floppy disk provided the first PCs with a place to keep programs and data and,
over the years, served well as a distribution system through which software publishers could make
their products available.
In the race with progress, however, the simple technology of the floppy disk has been hard-pressed to
keep pace. The needs of modern programs far exceed what floppy disks can deliver, and other
technologies (like those CD ROM drives) provide less expensive distribution. New incarnations of
floppy disk technology that pack 50 to 100 times more data per disk hold promise but at the penalty of
a price that will make you look more than twice at other alternatives.
All that said, the floppy disk drive remains a standard part of all but a few highly specialized PCs,
typically those willing to sacrifice everything to save a few ounces (sub-notebooks) and those that
need to operate in smoky, dusty environments that would make Superman cringe and Wonder Woman
cough.

Tape Drives

Tape is for backup, pure and simple. It provides an inexpensive place to put your data just in case—in
case some light-fingered freelancer decides to separate your PC from your desktop, in case the fire
department hoses to death everything in your office that the fire and smoke failed to destroy, in case
you think DEL *.* means "display all file names," in case that nagging head cold turns out to be a
virus that infects your PC and formats your hard disk, in case your next-door neighbor bewitches your
PC and turns it into a golden chariot pulled by a silver charger that once was your mouse, in case an
errant asteroid ambles through your roof. Having an extra copy of your important data helps you
recover from such disasters and those that are even less likely.
Computer tape drives work on the same principles as the cassette recorder in your stereo. Some are, in
fact, cassette drives. All such drives use tape as an inexpensive medium for storing data. All modern
tape systems put their tape into cartridges that you can lock safely away or send across the continent.
And all are slower than you’d like and less reliable than you’d suspect. Nevertheless, tape remains the
backup medium of choice for most people who choose to make backups.

Display Systems

Your window into the mind of your PC is its display system, the combination of a graphics adapter or
video board and a monitor or flat-panel display. The display system gives your PC the means to tell
you what it is thinking, to show you your data in the form that you best understand, be it numbers,
words, or pictures.
The two halves of the display system work hand-in-hand. The graphics adapter uses the digital signals
inside your PC to build an electronic map of what the final image should look like, storing the data for
every dot on your monitor in memory. Electronics generate the image that appears on your monitor
screen.

Graphics Adapters

Your PC’s graphics adapter forms the image that you will see on your monitor screen. It converts
digital code into a bit pattern that maps each dot that you’ll see. Because it makes the actual
conversion, the graphics adapter determines the number of colors that can appear on your monitor as
well as the ultimate resolution of the image. In other words, the graphics adapter sets the limit on the
quality of the images your PC can produce. Your monitor cannot make an image any better than what
comes out of the graphics adapter. The graphics adapter also determines the speed of your PC’s video
system; a faster board will make smoother video displays.
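The relationship between resolution, color depth, and the memory a graphics adapter needs to map every dot can be sketched as below. The figures are illustrative; real boards round up to standard memory sizes:

```python
def video_memory_bytes(width, height, bits_per_pixel):
    """Bytes of frame buffer needed to store one dot-map of the screen."""
    return width * height * bits_per_pixel // 8

# A 1024x768 screen at 8 bits per pixel (256 colors) needs 768 KB:
print(video_memory_bytes(1024, 768, 8) // 1024)   # -> 768
# At 24 bits per pixel (16.7 million colors) it needs 2304 KB:
print(video_memory_bytes(1024, 768, 24) // 1024)  # -> 2304
```

The same arithmetic explains why an adapter with limited memory forces a trade-off: you can have high resolution or many colors, but not always both at once.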
Many PCs now include at least a rudimentary form of graphics adapter in the form of display
electronics on their motherboards; others put the display electronics on an expansion board.

Monitors

The monitor is the basic display system that’s attached to most PCs. Monitors are television sets with
Michael Milken’s appetite for money. While a 21-inch TV might cost $300 in your local appliance
store, the same size monitor will likely cost $2000 and still not show the movies you rent.
Although both monitor and television are based on the same aging technology, one that dates back
to the 1920s, they have different aims. The monitor strives for fine detail; the television sets its sights
on the mass market and makes up for its shortcomings in volume. In any case, both rely on big glass
bottles coated with glowing phosphors that shine bright enough to light a room.
The quality of the monitor attached to your PC determines the quality of the image you see. Although
it cannot make anything look better than what’s in the signals from your graphics adapter, it can make
them look much worse and limit both the range of colors and the resolution (or sharpness) of the
images.

Flat Panel Display Systems

Big, empty bottles are expensive to make and delicate to move. Except for a select elite, most
engineers have abjured putting fire-filled bottles of any kind in their circuit designs, the picture tube
being the last remnant of this ancient technology. Replacing it are display systems that use solid-state
designs based on liquid crystals. Lightweight, low in power requirements, and generally shock and
shatter resistant, LCD panels have entirely replaced conventional monitors in notebook PCs and
promise to take over desktops in the coming decade. Currently, they remain expensive (several times
the cost of a picture tube) and more limited in color range, but research into flat panel systems is
racing ahead, while most labs have given up picture tube technology as dead.

Peripherals

The accessories you plug into your computer are usually called peripherals. The name is a carryover
from the early beginnings of computers when the parts of a computer that did not actually compute
were located some distance from the central processing unit, on the periphery, so to speak.
Today’s PCs have two types of peripherals, the internal and external. Internal peripherals fit inside the
system unit and usually directly connect to its expansion bus. External peripherals are physically
separate from the system unit, connect to the port connectors on the system unit, and often (but not
always) require their own source of power. Although the keyboard and monitor of a PC fit the
definition of external peripherals, they are usually considered to be part of the PC itself and not
peripherals.

Input Devices

You communicate with your PC, telling it what to do, using two primary input devices, the keyboard
and the mouse. The keyboard remains the most efficient way to enter text into applications, faster than
even the most advanced voice recognition systems that let you talk to your PC. The mouse—more
correctly termed a pointing device to include mouse-derived devices such as trackballs and the
proprietary devices used by notebook PCs—relays graphic instructions to your computer, letting you
point to your choices or sketch, draw, and paint. If you want to sketch images directly onto your
monitor screen, a digitizing tablet works more as you would with a pen.
To transfer images to your PC, a scanner copies graphics into bit-images. With the right software, it
becomes an optical character recognition, or OCR, system that reads text and transforms words into
electronic form.
A voice recognition or voice input system tries to make sense out of your voice. It uses a microphone
to turn the sound waves of your voice into electrical signals, a processing board that makes those
signals digital, and sophisticated software that attempts to discern the individual words you’ve spoken
from the digital signal.

Printers

The electronic thoughts of a PC are notoriously evanescent. Pull the plug and your work disappears.
Moreover, monitors are frustratingly difficult to pass around and post through the mail when you want
to show off your latest digital art creation. Hard copy, the print-out on paper, solves the problem. And
the printer makes your hard copy.
More than any other aspect of computing, printer technology has transformed the industry in the last
decade. Where printers were once the clamorous offspring of typewriters, they’ve now entered the
space age with jets and lasers. The modern PC printer is usually a high speed, high quality laser
printer that creates four or more pages per minute at a quality level that rivals commercial printing.
Inkjet printers sacrifice the utmost in speed and quality for lower cost and the capability of printing
color without depleting the green in your budget.

Connectivity

The real useful work that PCs do involves not just you but also the outside world. The ability of a PC
to send and receive data to different devices and computers is called connectivity. Your PC can link to
any of a number of hardware peripherals through its input/output ports. Better still, through modems,
networks, and related technologies it can connect with nearly any PC in the world.

Input/Output Ports

Your PC links to its peripherals through its input and output ports. Every PC needs some way of
acquiring information and putting it to work. Input/output ports are the primary route for this
information exchange. In the past, the standard equipment of most PCs was simple and almost
pre-ordained—one serial port and one parallel port, typically as part of the motherboard circuitry.
Today, new and wonderful port standards are proliferating faster than dandelions in a new lawn.
Hard-wired serial connections are moving to the new Universal Serial Bus (USB) while the Infrared
Data Association (IrDA) system provides wireless links. Similarly the simple parallel port has become
an external expansion bus capable of linking dozens of devices to a single jack.

Modems

To connect with other PCs and information sources such as the Internet through the international
telephone system, you need a modem. Essentially a signal converter, the modem adapts your PC’s
data to a form compatible with the telephone system.
In a quest for faster transfers than the ancient technology of the classic telephone circuit can provide,
however, data communications are shifting to newer systems such as digital telephone services (like
ISDN), high speed cable connections, and direct digital links with satellites. Each of these requires its
own variety of connecting device, not, strictly speaking, a modem but called that for consistency’s
sake. Which you need depends on the speed you want and the connections available to you.

Networks

Any time you link two or more PCs together, you’ve made a network. Keep the machines all in one
place—one home, one business, one site in today’s jargon—and you have a Local Area Network
(LAN). Spread them across the country, world, or universe with telephone, cable, or satellite links,
and you get a Wide Area Network (WAN).

Once you link up to the World Wide Web, your computer is no longer merely the box on your desk.
Your PC becomes part of a single, massive international computer system. Even so, it retains all the
features and abilities you expect from a PC—it only becomes even more powerful.


Winn L. Rosch Hardware Bible, Electronic Edition, Chapter 2

Chapter 2: Motherboards
Traditionally, the centerpiece of most personal computers is the motherboard. It is the
physical and logical backbone of the entire system. In many PCs, the motherboard may
be the entire system. The circuitry located on the motherboard defines the computer, its
capabilities, limitations, and personality. Where once motherboard design was a
free-for-all, new standards define the size, shape, and connections you can expect.

■ Background
■ Design Approaches
■ Bus-Oriented Computers
■ Single-Board Computers
■ Compromise Designs
■ Nomenclature
■ Daughterboards
■ Expansion Boards
■ System Boards
■ Planar Boards
■ Baseboards
■ Main Board
■ Logic Boards
■ Backplanes
■ Technologies
■ Digital Logic Circuitry
■ Electronics
■ Vacuum Tubes
■ Semiconductors
■ Integrated Circuits
■ Printed Circuits

http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/wrh02.htm (1 de 13) [23/06/2000 04:33:44 p.m.]



■ Standards
■ Early Motherboards
■ AT
■ Mini-AT
■ LPX
■ Mini-LPX
■ ATX
■ Mini-ATX
■ Commercial Products
■ Classic
■ Premiere
■ Advanced
■ Performance
■ New Generation

Motherboards

Nearly all PCs and compatible computers share one common feature: they are built with a single
large printed circuit board as their foundation. In many cases, the big board—usually called the
motherboard—essentially is the entire computer. Almost completely self-contained, the one board
holds the most vital electronic components that define the PC: its microprocessor, support circuitry,
memory, and often video and audio functions. Anything you want to add to your PC plugs into the
expansion bus that’s part of the motherboard. As such a basic element, the motherboard defines both
the PC and its capabilities. The circuitry it contains determines the overall performance of your
system. Without the motherboard, you wouldn’t have a PC.
The motherboard also offers us an excellent introduction to the technologies that underlie modern
computers. The design and construction practices of the motherboard are identical to those of all
computer circuits and peripherals—and, indeed, nearly all modern electronic devices.

Background

In a modern PC, the motherboard is the big green centerpiece inside the case. Each computer maker
essentially builds the rest of its PCs around the motherboard. On the motherboard, the PC makers put
all the most important electrical circuits that make up the computer. The expansion bus on the
motherboard provides a foundation for future expansion, adding new features and capabilities to your
PC.
Open your PC and you’ll see the motherboard inside. It usually lines the bottom of desktop systems or
one side of tower and mini-tower systems. It’s the biggest circuit board inside the PC, likely the
biggest circuit board in any electronic device you have around your home or office. Typically, it takes
the form of a thick green sheet about the size of a piece of notebook paper and is decorated with an
array of electronic components.
Although all motherboards look much the same, that similarity belies many differences in
technologies and approaches to PC design. Some computer makers strive to cram as much circuitry as
possible on the motherboard. Others put as little as possible there. The difference affects both the
initial cost of your PC and its future use and expansion.

Design Approaches

Nothing about computers requires a motherboard. You could build a PC without one—at least if you
had sufficient knowledge of digital circuits and electronic fabrication, not to mention patience that
would make Job seem a member of the television generation. Building a PC around a single
centralized circuit board seems obvious, even natural, only because of its nearly universal use.
Engineers designed the very first mass-market PCs around a big green motherboard layout, and this
design persists to this day.
Motherboards exist from more than force of habit, however. For the PC manufacturer, the
motherboard design approach has immediate allure. Building a PC with a single large motherboard is
often the most economical way to go, at least if your aim is soldering together systems and pushing
them out the loading dock. There are alternatives, however, which can be more versatile and are
suited to some applications. The more modular approach used in some of these alternatives allows you
more freedom in putting together or upgrading a system to try to keep up with the race of technology.
The motherboard-centered design of most PCs is actually a compromise approach. Basic PC design is
a combination of two diametrically opposed design philosophies. One approach aims at diversity,
adaptability, and expandability by putting the individual functional elements (microprocessor,
memory, and input/output circuitry) on separate boards that plug into connectors that link them
together through a circuit bus. Such machines are known as bus-oriented computers. The alternative
concentrates on economy and simplicity by uniting all the essential components of the computer on a
single large board. Each of these designs has its strengths.

Bus-Oriented Computers

At the time the PC was developed, the bus-oriented design was the conservative approach. A true
bus-oriented design seems the exact opposite of the motherboard. Instead of centralizing all circuitry,
the bus-oriented design spreads it among multiple circuit boards. It’s sort of the Los Angeles approach
to computer design—it sprawls out all over without a distinct downtown. Only a freeway system links
everything together to make a working community. In the bus-oriented computer, that freeway system
is the bus.
A bus gets its name because, like a Greyhound, all the signals of the bus travel together and make the
same stops at the same connectors along the way. The most popular small business computers of that
pre-PC era were built around the S-100 bus standard. The name was mostly descriptive, indicating
that the complete bus comprised 100 connections. Most larger computers made at the time the PC was
introduced used the bus-oriented design as well.
The bus approach enabled each computer to be custom configured for its particular purpose and
business. You attached whatever components the computer application required to the bus. When you
needed them, you could plug larger, more powerful processors, even multiple processors, into the bus.
This modular design enabled the system to expand as business needs expanded. It also allowed for
easier service. Any individual board that failed could be quickly removed and replaced without
circuit-level surgery.
Actually, among the smaller computers that preceded the PC, the bus-oriented design originated as a
matter of necessity simply because all the components required to make a computer would not fit on a
circuit board of practical size. The overflowing circuitry had to be spread among multiple boards, and
the bus was the easiest way to link them all. Although miniaturization has nearly eliminated such
needs for board space, the bus-oriented design still occasionally resurfaces. You’ll sometimes find
special purpose PCs such as numerical control systems and network servers that use the bus-oriented
approach for the sake of its modularity.

Single-Board Computers

The advent of integrated circuits, microprocessors, and miniaturized assemblies that put multiple
electronic circuit components into a single package, often as small as a fingernail, greatly reduced the
amount of circuit board required for building a computer. By the end of the 1970s, putting an entire
digital computer on a single circuit board became practical.
Reducing a computer to a single circuit board was also desirable for a number of reasons. Primary
among them was cost. Fewer boards means less fabrication expense and lower materials cost. Not
only can the board be made smaller, but the circuitry that's necessary to match each board to the bus
can be eliminated. Moreover, single-board computers have an advantage in reliability. Connectors are
the most failure prone part of any computer system. The single-board design eliminates the bus
connectors as a potential source of system failure.
On the downside, however, the single-board computer design is decidedly less flexible than the
bus-oriented approach. The single board has its capabilities forever fixed the moment it is soldered
together at the factory. It can never become more powerful or transcend its original design. It cannot
adapt to new technologic developments.

http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/wrh02.htm (4 de 13) [23/06/2000 04:33:44 p.m.]


Winn L. Rosch Hardware Bible, Electronic Edition, Chapter 2

Although the shortcomings of the single-board computer make it an undesirable (but not unused)
approach for desktop computers, the design works well for many laptop and notebook computers. The
compactness of the single-board approach is a perfect match for the space- and weight-conscious
notebook PC design, and the lack of expansion is tolerated for sake of a compact, tote-able design.
Notebook computers show the essence of the single-board design. Created to be completely
self-contained while minimizing mass and volume, notebook machines cram as much as possible onto
their circuit boards. In a quest for compactness, some of the biggest savings come from shaving away
the connectors required by an expansion bus.
Notebook machines also illustrate the shortcomings of the pure single-board design and why some
kind of bus continues to survive and will continue to flourish in PCs. Computer technology changes fast,
but innovations in different areas arrive independently. A notebook PC that's otherwise in tune with
the times might fall short in one particular area of operation. You might, for example, buy a fast
notebook PC only to find that a new kind of high speed modem becomes affordable a few months
later. A single-board system with a built-in modem would be forever stuck with older, slower modem
designs. With expansion slots, however, you can adapt your system to new standards and technologies
as they arrive.
All better notebook computers now amend the single-board computer approach with a specialized,
miniature expansion bus such as PC Card or CardBus (see Chapter 7, "The Expansion Bus").

Compromise Designs

Rather than strictly following either the single-board or bus-oriented approach, the companies that
made the first mass-market small computers brought the two philosophies together, mixing the best
features of the single-board computer and the bus-oriented design in one box. In IBM's initial
implementation of this design, the first PC model, one large board hosts the essential circuitry that
defines the computer, and slots are available for expansion and adaptability.
Throughout the history of the PC, functions have migrated from expansion slots to the motherboard.
For example, the original IBM PC required separate expansion boards for its display system, mass
storage, input/output ports, and system clock. (If you wanted a decent amount of memory, you needed
to add it with another expansion board.) Many modern PCs pack all of these functions—and
more—on their motherboards. Complete sound systems and even network connections are now part of
many motherboards.
At least three motivations underlie this migration: expectations, cost, and capability. As the power and
potential of personal computers have increased, people expect more from their PCs. The basic
requirements for a personal computer have risen so that features that were once options and
afterthoughts are now required. To broaden the market for personal computers, manufacturers have
striven to push prices down. Putting the basics required in a computer on the main circuit board
lowers the overall cost of the system for exactly the same reasons that a single-board computer is
cheaper to make than the equivalent bus-oriented machine. Moreover, using the most modern
technologies, manufacturers simply can fit more features on a single circuit board. The original PC
had hardly a spare square inch for additional functions. Today, all the features of a PC hundreds of
times more powerful than that original will fit into a couple of chips.


As with any trend, however, aberrant counter-trends in PC design appear and disappear occasionally.
Some system designers have chosen to complicate their systems to make them explicitly upgradable
by pulling essential features, such as the microprocessor, off the main board. The rationale underlying
this more modular design is that it gives the manufacturer (and your dealer) more flexibility. The PC
maker can introduce new models as fast as he can slide a new expansion board into a box;
motherboard support circuitry need not be re-engineered. Dealers can minimize their inventories.
Instead of stocking several models, the dealer (and manufacturer) need only keep a single box on the
shelf, shuffling the appropriate microprocessor module into it as the demand arises. For you, the
computer purchaser, these modular systems also promise upgradability, which is a concept that's
desirable in the abstract (your PC need never become obsolete) but often impractical (upgrading is
rarely a cost-effective strategy).
The compromise of a big board and slots will continue to be popular as long as PCs need to be
expandable and adaptable. In the long term, however, you can expect single-board systems to appear
again in the form of products dedicated to a purpose, for example, a dedicated multimedia playback
machine (an intelligent entertainment engine, essentially a VCR with a college education) or a set-top
box to connect your television to the World Wide Web (essentially a PC with a learning disability).

Nomenclature

Terminology is always a thorny issue with emerging technologies, and the rapid development of PCs
assures that new technologies will continually be emerging. To understand one another, we have to
talk about things using the same words. Unfortunately, confusion reigns even with components as
essential as motherboards. Consequently, our point of departure for discussing the construction of
motherboards, computers, and electronics in general will be nomenclature, the words we use to
describe these pesky things. The computer industry uses many words to describe printed circuit
boards with specific functions. The motherboard is only one of them. Other terms you may commonly
encounter include daughterboard, system board, planar board, expansion board, logic board, and
backplane.

Daughterboards

The term "motherboard" hints at the function of the board and its relationship to boards that plug into
it, which are termed daughterboards. Drawing a direct analogy begets some strange images: you
might imagine the smaller boards sucking from the larger one or the archaic concept of daughters
clinging to their mother. Better to just think of the terms referring to the relative importance of the
boards—mother is preeminent (mother knows best). There's no more sexism in the terms than there is
in the Spanish language assigning the female gender to a table radio. (After all, sex and language
gender are entirely different concepts, and anyone doubting that must have missed one of the more
important high school health classes.) Besides, the term "daughterboard" is more mellifluous than
alternatives such as "sonboard" or the more generic "offspringboard."
The motherboard-daughterboard relationship has nothing to do with size. Just as daughters can grow
up to be taller than their mothers, daughterboards can be larger than the motherboards they plug into.


In fact, the defining characteristic of the motherboard is not size or the circuitry it holds but the
linkage it provides for expanding the system (or other function the motherboard provides). The
motherboard links the daughterboard to the rest of the PC or other machine that holds the
motherboard. Connectors, rather than active circuitry, are the essential element of the motherboard.
Motherboards often are nothing more than several connectors wired together into a bus or backplane
(see the "Backplanes" section later in this chapter). PCs can be—and have been—built with no
components except expansion connectors and the electrical links between them (which means little
more than wires) on the motherboard.
Sometimes you’ll see the term daughtercard instead of daughterboard. This term is simply a variation,
the "card" suffix reflecting calling the underlying technology "printed circuit cards" instead of
"printed circuit boards." When talking about PCs, the "board" term is better because it allows you to
reserve "card" for the slide-in credit-card-size PC Cards (which follow the specifications promulgated
by PCMCIA, as discussed in Chapter 7).

Expansion Boards

One gender-neutral term for daughterboard is expansion board, and it has become the favored term
among PC users. The term refers to the function of daughterboards in PCs—plug-in expansion boards
enable you to expand your system by adding new functions. In PC nomenclature, an expansion board
is a printed circuit card that slides into one of the expansion slots provided in the case of a PC.
Expansion boards are often further distinguished by the standard followed by their interface or the
connector at the bottom of the board. For example, an ISA expansion board follows the Industry
Standard Architecture bus standard (see Chapter 7) and a PCI expansion board follows the Peripheral
Component Interconnect standard.
Although PC expansion boards can all be considered daughterboards, not all daughterboards are
expansion boards. For example, some PC expansion boards can themselves be expanded by plugging
a daughterboard onto them. Because such boards plug only into their host board, they are not true PC
expansion boards. Most people call the circuit boards that plug into the motherboard the system’s
expansion boards. Circuit boards that plug into expansion boards are daughterboards. That convention
at least relieves us of adding another generation and creating the granddaughter-board.

System Boards

As with many aspects of computers and computing, large organizations are apt to apply their own
nomenclature to things. IBM developed its own name for the board that held the principal circuitry of
its entire line of personal computers from the original IBM PC through its successors XT and AT.
Consequently what most people call a motherboard, IBM usually terms a system board. IBM’s choice
of name is apt because the board (or more correctly, the circuitry on the board) defines the entire
computer system. And, yes, one reason for using the term was to create a gender-neutral term. IBM
didn't want to take sides in the war between the sexes. Other than word choice, however, there’s no
difference between a system board and a motherboard in a PC.


This interchangeability of terms does not extend to devices other than PCs. Although a distortion
analyzer, oscilloscope, or even television set may be built with a motherboard and daughterboards,
you would never call the central printed circuit board in one of these devices a system board.

Planar Boards

Another gender-neutral term promoted by IBM for the motherboard, which first came into common
parlance with the introduction of the Personal System/2 line of machines, was planar board. In
conversation, IBM engineers often shorten the term to the simple adjective, planar. The term probably
refers to the motherboard defining the principal plane of the computer system.
As with most gender-neutral neologisms, "planar board" is less descriptive than the terms it is meant
to replace. At face value, the term could not be more vague or all-embracing. All printed circuit
boards are planar—that is, flat—except, perhaps, for a few special-purpose flexible assemblies like
those folded into cameras. Even the term "system board" is more precise in that it at least describes
the function of the circuit assembly.

Baseboards

Just as IBM gives its own names to motherboards, Intel does likewise. The company’s preferred term
(as seen in its technical manuals) is baseboard. In Intel technical literature, the term is given as one
word and used interchangeably with motherboard. For example, the manual dated May, 1996, for the
VS440FX lists the product as a "motherboard" while the manual for the Performance/AU dated
December, 1995, terms the product a "baseboard."

Main Board

Another gender-neutral term that some manufacturers use for motherboard is main board. The term is
appropriate—the main board is the largest circuit board inside—and the foundation for—the computer
system and, hence, it is the main board in a PC’s case. The main issue is, of course, whether we need
yet another politically correct term for motherboard.

Logic Boards

The PC industry has no monopoly on vague, gender-neutral terms. In the realm of the Apple
Macintosh, the main circuit board inside a computer is often called a logic board. Of course, every
circuit board inside a computer is based on digital logic, and the term could describe any digital circuit
board. When Apple people talk amongst themselves, however, they know what they mean when they
say "logic board," and now you are privy to their secret.


Backplanes

Another name sometimes used to describe the motherboard in PCs is backplane. The term is a
carry-over from bus-oriented computers. In early bus-oriented design, all the expansion connectors in
the machine were linked by a single circuit board. The expansion boards slid through the front panel
of the computer and plugged into the expansion connectors in the motherboard at the rear. Because
the board was necessarily planar and at the rear of the computer, the term "backplane" was perfectly
descriptive. With later designs, the backplane found itself lining the bottom of the computer case.
Backplanes are described as active if, as in the PC design, they hold active logic circuitry. A passive
backplane is nothing more than expansion connectors linked by wires or printed circuitry. The system
boards of most personal computers could be described as active backplanes, though most engineers
reserve the term "backplane" for bus-oriented computers in which the microprocessor plugs into the
backplane rather than residing on it. The active circuitry on an active backplane under such a limited
definition would comprise bus control logic that facilitates the communication between boards.

Technologies

Personal computers could not exist—at least not in their current, wildly successful form—were it not
for two concepts: binary logic and digital circuitry. The binary approach reduced data to its most
minimalist form, essentially an information quantum. A binary data bit simply indicates whether
something is or is not. Binary logic provides rules for manipulating those bits so that they can
represent, and act like, the real-world information we care about: numbers, names, and images.
The binary approach involves both digitization, using binary data to represent information, and
Boolean algebra, the rules for carrying out the manipulations of the binary data.
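As a minimal sketch (not from the original text), the two ideas of the preceding paragraph—Boolean rules for manipulating bits and digitization of a number as a group of bits—can be illustrated in a few lines of Python. The helper names here are purely illustrative:

```python
# Boolean algebra on single bits: 0 means false/low, 1 means true/high.
# These function names are illustrative, not any standard library's.

def AND(a, b):
    return a & b

def OR(a, b):
    return a | b

def NOT(a):
    return 1 - a          # invert a single bit

def XOR(a, b):
    return a ^ b

# Digitization: a group of bits represents a number.
bits = [0, 1, 1, 1]       # binary 0111
value = 0
for bit in bits:
    value = value * 2 + bit
print(value)              # 7
```

The same rules, wired up in silicon rather than software, are what the logic gates described later in this chapter compute.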
The primary logic element in all modern PCs is a device called the microprocessor, important enough
to earn not only its own chapter in this book but also to empower the personal computer revolution.
The microprocessor is today’s preeminent manipulator of binary logic, and current devices rank
among the most complex of all human creations. Digital electronic circuitry makes the
microprocessor’s fast, error-free manipulation of binary data possible.
Despite the complexity of the microprocessor, the technology that makes it work is quite basic. The
microprocessor simply controls the flow of electrical signals. It is an electronic circuit, a special kind
called a digital logic circuit. The only thing remarkable about the microprocessor is the significance
we apply to the signals it controls.

Digital Logic Circuitry

The essence of the digital logic that underlies the operation of the microprocessor and motherboard is
the ability to use one electrical signal to control another.


Certainly there are myriad ways of using one electrical signal to control another, as any student of
Rube Goldberg can attest. As interesting and amusing as interspersing cats, bellows, and bowling
balls in the flow of control may be, most engineers have opted for a more direct approach based on
time-proven electrical technologies.
In modern digital logic circuitry the basis of this control is amplification, the process of using a small
current to control a larger current (or a small voltage to control a larger voltage). The large current (or
voltage) exactly mimics the controlling current (or voltage) but is stronger or amplified. In that every
change in the large signal is exactly analogous to each one in the small signal, devices that amplify in
this way are called analog. The intensity of the control signal can represent continuously variable
information—for example, a sound level in stereo equipment. The electrical signal in this kind of
equipment thus is an analogy to the sound that it represents.
In the early years of the evolution of electronic technology, improving this analog amplification
process was the primary goal of engineers. After all, without amplification, signals eventually
deteriorated into nothingness. As the age of digital information dawned, however, engineers found
another way to use amplification, a technology that laid the groundwork for today’s digital
computers.
The limiting case of amplification occurs when the control signal causes the larger signal to go from
its lowest value, typically zero, to its highest value. In other words, the large signal goes off and
on—switches—under control of the smaller signal. The two states of the output signal (on and off)
can be used as part of a binary code that represents information. For example, the switch could be
used to produce a series of seven pulses to represent the number 7. Because information can be coded
as groups of such numbers (digits), electrical devices that use this switching technology are described
as digital. Note that this switching directly corresponds to other, more direct, control of on-off
information such as pounding on a telegraph key, a concept we’ll return to in later chapters.
Strictly speaking, an electronic digital system works with signals called high and low, corresponding
to a digital one and zero. In formal logic systems, these same values are often termed true and false. In
general, a digital one or logical true corresponds to an electronic high. Sometimes, however, special
digital codes reverse this relationship.
In practical electrical circuits, the high and low signals only roughly correspond to on and off.
Standard digital logic systems define both the high and low signals as ranges of voltages. High is a
voltage range near the maximum voltage accepted by the system, and low is a voltage range near (but
not necessarily exactly at or including) zero. A wide range of undefined voltages spreads between the
two, lower than the lowest edge of high but higher than the highest edge of low. The digital system
ignores the voltages in the undefined range. Figure 2.1 shows how the ranges of voltages interrelate.
Figure 2.1 Significance of TTL voltage levels.

Perhaps the most widely known standard is called TTL (for transistor-transistor logic). In the TTL
system, which is still common inside PC equipment, a logical low is any voltage below 0.8 volts. A
logical high is any level above 2.0 volts. The range between 0.8 and 2.0 volts is undefined.
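The TTL thresholds just described can be restated as a short classifier. This is only an illustration of the rule; the 0.8 volt and 2.0 volt limits come from the text, while the function and constant names are made up for the example:

```python
# Classify a TTL signal voltage: below 0.8 V is a logical low, above
# 2.0 V is a logical high, and the range in between is undefined
# (the digital system ignores voltages falling there).

TTL_LOW_MAX = 0.8    # volts
TTL_HIGH_MIN = 2.0   # volts

def ttl_level(volts):
    if volts < TTL_LOW_MAX:
        return "low"
    if volts > TTL_HIGH_MIN:
        return "high"
    return "undefined"

print(ttl_level(0.2))   # low
print(ttl_level(3.4))   # high
print(ttl_level(1.5))   # undefined
```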
As modern PCs shift to lower voltages, the top level of the logical high shifts downward—for
example, from the old standard of 5.0 volts to the 3.3 volts of the latest computer equipment (and even
lower voltages of new power-conserving microprocessors)—but their relationship, along with the
undefined range in between, remains the same. Modern systems usually retain the same low and
undefined ranges—just lop the top off the figure showing the TTL voltage levels.


When applied to the operating voltage of a microprocessor, Intel calls the 3.3 volt level STD.
Under Intel’s microprocessor specification, the STD level specifies that the operating voltage will
actually fall in the range of 3.135 to 3.6 volts. To increase the reliability of some higher speed chips,
Intel specifies operation at a higher voltage level, called VRE, that is nominally 3.5 volts but may fall
anywhere in the range 3.4 to 3.6 volts. Because the VRE range allows a greater difference between
high and low, it helps the higher speed microprocessors better avoid noise that might impair their
operation.
Digital systems are just as apt to pick up noise as analog systems. However, an analog signaling
system cannot distinguish the noise from the subtle variations of the desired signal. The noise
becomes part of the signal. It gets transmitted and amplified with the signal. The further the signal
goes or the more it is amplified, the more noise—and thus, degradation—it will suffer. If too much
noise gets mixed in with an analog signal, the signal becomes unusable. Digital systems, however,
ignore traces of noise that get added into the signal. Typically, even when noise is added to the digital
signal, it will not be enough to move the signal from low to undefined. Because the digital system
accepts the entire range of low voltages as having the same meaning, small changes added by noise
within a given range are simply ignored. Moreover, because the noise is ignored, every time the
digital signal goes through logic circuitry (the digital equivalent of analog amplification) the noise is
left behind, removed from the signal entirely. With each pass through a logic circuit, the signal starts
out fresh and noise free. Where noise continuously adds into analog signals, it gets continually
cleansed away in digital systems.
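A small simulation, with made-up voltage and noise figures, shows why the noise never accumulates: each pass through a logic stage re-decides the bit from the voltage level, discarding whatever noise the channel added:

```python
import random

# Sketch of digital regeneration (illustrative numbers only): ideal
# 0 V / 5 V levels pick up noise in transit, but each logic stage
# restores them to clean bits by comparing against a threshold.

def transmit(bits, noise=0.3):
    """Send ideal 0 V / 5 V levels through a noisy channel."""
    return [(5.0 if b else 0.0) + random.uniform(-noise, noise) for b in bits]

def regenerate(voltages, threshold=2.5):
    """A logic stage re-decides each level as a clean 0 or 1."""
    return [1 if v > threshold else 0 for v in voltages]

data = [1, 0, 1, 1, 0, 0, 1]
noisy = transmit(data)          # voltages now carry noise
clean = regenerate(noisy)       # noise is stripped away
print(clean == data)            # True: the signal starts out fresh
```

Because the noise (here bounded at 0.3 volts) never pushes a level across the threshold, the regenerated bits match the original ones exactly; an analog signal would instead carry the accumulated noise forward.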

Electronics

The essence of any digital logic system is that one electrical current (or voltage) can control another
one. No matter whether a microprocessor understands a few instructions or many, no matter whether
it operates as fast as lightning or as slow as your inbred in-laws, it depends on this principle of
electrical control. Over the years, improving technology has steadily refined the mechanisms for
carrying out this action, allowing electronic circuits to become denser (more functions in smaller
packages) and faster.
The first approach to electrical control of electrical flow evolved from the rattling telegraph key.
Instead of just making noise, the solenoid of the telegraph sounder was adapted to closing electrical
contacts, making a mechanism now called the relay. In operation, a relay is just a switch that’s
controlled by an electromagnet. Activating the electromagnet with a small current moves a set of
contacts that switch the flow of a larger current. The relay doesn’t care if the control current starts off
in the same box as the relay or a continent away. The basis of Bell Labs' 1946 Model V computer, the
relay is a component that's still used in modern electrical equipment.

Vacuum Tubes

The vacuum tube improved on the relay design by eliminating the mechanical part of the
remote-action switch. Vacuum tubes developed out of Edison’s 1880 invention of the incandescent
light bulb. In 1904 John Ambrose Fleming discovered that electrons would flow from the negatively


charged hot filament of the light bulb to a positively charged cold collector plate but not in the other
direction, creating the diode tube. In 1907 Lee De Forest created the Audion, now known as the triode
tube, which interposed a control grid between the hot filament (the cathode) and the cold plate (the
anode). The grid allowed the Audion to harness the power of the attraction of unlike electrical charges
and repulsion of like charges, enabling a small charge to control the flow of electrons through the
vacuum inside the tube.
The advantage of the vacuum tube over the relay in controlling signals is speed. The relay operates at
mechanical rates, perhaps a few thousand operations per second. The vacuum tube can switch millions
of times per second. The first recognizable computers (like Eniac) were built from thousands of tubes,
each configured as a digital logic gate.

Semiconductors

Using tube-based electronics in computers is fraught with problems. First is the space-heater effect:
tubes have to glow like light bulbs to work and they generate heat along the way, enough to smelt
rather than process data. And, like light bulbs, tubes burn out. Large tube-based computers required
daily shut-down and maintenance and several technicians on the payroll. In addition, tube circuits are
big. The house-sized computers of 1950s-vintage science fiction would easily be outclassed in
computing power by today's desktop machines. In the typical tube-based computer design, one logic
gate required one tube that took up considerably more space than a single microprocessor with tens of
millions of logic gates. Moreover, physical size isn’t only a matter of housing. The bigger the
computer, the longer it takes its thoughts to travel through its circuits—even at the speed of
light—and the more slowly it thinks.
Making today's practical PCs took another true breakthrough in electronics: the transistor, which
emerged in 1948 at Bell Laboratories, developed by John Bardeen, Walter Brattain, and William
Shockley. A tiny fleck of germanium (later, silicon) formed into three layers, the transistor was
endowed with the capability to let one electrical current applied to one layer alter the flow of another,
larger current between the other two layers. Unlike the vacuum tube, the transistor needed no hot
electrons because the current flowed entirely through a solid material—the germanium or
silicon—hence, the common name for tubeless technology, solid-state electronics.
Germanium and silicon are special materials (actually, metalloids) called semiconductors. The term
describes how these materials resist the flow of electrical currents. They resist more than conductors
(like the copper in wires) but not as much as insulators (like the plastic wrapped around the wires).
By itself, being a poor but not awful electrical conductor is as remarkable as lukewarm water. Infusing
atoms of impurities into the semiconductor's microscopic lattice structure dramatically alters the
electrical characteristics of the material and makes solid-state electronics possible. The process of
adding impurities is called doping. Some impurities add extra electrons (carriers of negative charges)
to the crystal; others leave holes in the lattice where electrons would ordinarily be, and these holes act
as positive charge carriers. Electricity easily flows across the junction between the two materials when
the extra electrons pass to the holes on the other side, because the holes willingly accept the
electrons, passing them on to the rest of the circuitry. The electrical flow in the other direction is
severely impeded because the electron-rich side won’t accept more electrons carried to the junction by
the holes. In other words, electricity flows only in one direction through the semiconductor junction,
just as it flows only one way through a vacuum tube diode.


The original transistor incorporated three layers with two junctions between dissimilar materials.
Ordinarily, no electricity could pass through such an arrangement because the two junctions would be
oriented in opposite directions, one blocking electrical flow one way and the second blocking flow in
the other direction. The neat trick that makes a transistor work is draining away some of the electrons
at the junction that prevents the electrical flow. A small current drains away the excess electrons and
allows a large current to move through the junction. By this means, the transistor can amplify and
switch currents.
A semiconductor is often described by the type of impurity that has been added to its structure: N-type
for those with extra electrons (negative charge carriers) and P-type for those with holes (positive
charge carriers). Ordinary three-layer transistors, for example, come in two configurations, NPN and
PNP, depending on which type of semiconductor is in the middle.
Modern computer circuits mostly rely on a kind of transistor in which the current flow through a
narrow channel of semiconductor material is controlled by a voltage applied to a gate (which
surrounds the channel) made from metal oxide. The most common variety of these transistors is made
from N-type material and results in a technology called NMOS, an acronym for N-channel Metal
Oxide Semiconductor. A related technology combines both N-channel and P-channel devices and is
called CMOS (Complementary Metal Oxide Semiconductor) because the N- and P-type materials are
complements (opposites) of one another.
The typical microprocessor once was bu


Chapter 3: Microprocessors
The microprocessor is the heart and brain inside every personal computer. This tiny chip of
silicon determines the speed and power of the entire computer by handling most, if not all,
the data processing in the machine. The microprocessor determines the ultimate power of
any PC. Relentless development has made the chip and systems ever more powerful.

■ Background
■ Basic Circuit Design
■ Logic Gates
■ Memory
■ Instructions
■ Registers
■ Clocked Logic
■ Functional Parts of a Microprocessor
■ The Input/Output Unit
■ The Control Unit
■ The Arithmetic/Logic Unit
■ Advanced Technologies
■ Pipelining
■ Branch Prediction
■ Superscalar Architectures
■ Out-of-Order Execution
■ Register Renaming
■ Instruction Sets
■ Microcode
■ RISC
■ Micro-Ops
■ Very Long Instruction Words

■ Single Instruction, Multiple Data


■ Electrical Characteristics
■ Thermal Constraints
■ Operating Voltages
■ Voltage Reduction Technology
■ Extremely Low Voltage Semiconductors
■ Power Management
■ Physical Matters
■ Packaging
■ Location
■ Identification
■ Sockets and Upgrading
■ Commercial Products
■ Intel Microprocessors
■ The 4004 Family
■ The 8080 Family
■ The 8086 Family
■ The 80286 Family
■ The 80386 Family
■ The 80486 Family
■ The Pentium Family
■ Pentium Pro
■ Intel-Compatible Microprocessors
■ 8088 and 286-Level Chips
■ 386-Compatible Processors
■ 486-Compatible Processors
■ Pentium-Class Chips
■ Motorola CISC Chips
■ 68000
■ 68010
■ 68020
■ 68030
■ 68040
■ RISC Designs
■ DEC Alpha
■ Hewlett-Packard Precision Architecture

■ MIPS Family
■ PowerPC
■ SPARC
■ Math Coprocessors
■ Floating-Point Numbers
■ History
■ 8087
■ 80287
■ 387
■ 487SX
■ Chips and Technologies SuperMathDX
■ Cyrix 83D87
■ Cyrix 83S87
■ Cyrix 87DLC and 87SLC
■ Cyrix CX487
■ Cyrix EMC87
■ IIT 3C87
■ ULSI 83C87
■ Weitek 1167
■ Weitek 3167
■ Weitek 4167
■ Intel Architecture

Microprocessors

The microprocessor made the PC possible. Today one or more of these modern miracles serves as the
brain in not only personal computers but nearly all systems up to the latest supercomputers. So vital is
the microprocessor that it’s often termed "a computer on a chip."
Technically, today’s microprocessor is a masterpiece of high tech black magic. It starts as silicon that
has been carefully grown as an extremely pure crystal. The silicon is sliced thin with great precision, and
then the chips are heinously polluted by baking in hot ovens containing gaseous mixtures of highly
purified poisons (like arsenic without the old lace) that diffuse into the silicon as impurities and change
its electrical properties. This alchemy turns sand to gold, making huge profits for the chip makers and
creating electronic brains as capable as that of, say, your average arthropod.
The comparison is apt. As with insects and crustaceans, your PC can react, learn, and remember. Unlike
higher organisms bordering on true consciousness (for example, your next door neighbors with the
plastic fauna in their front yard), the microprocessor doesn’t reason. Nor is it self-aware. Clearly,
although computers are often labeled as "thinking machines," what goes through their microprocessor
minds is far from your thought processes and stream of consciousness. Or maybe not. Some theoreticians
believe your mind and a computer work in fundamentally the same way, although no one knows exactly
how the human mind actually works. Let’s hope they know more about microprocessors.
They do. The operating principles of the microprocessor are well understood. After all, despite its
revolutionary design and construction, the operating principle of the microprocessor is exactly the same
as a breadmaking machine or dishwasher. As we’ll see, all these contrivances carry out their jobs as a
series of steps under some guiding mechanism, be it a timing motor or a software program. You dump
raw materials in and expect to get out the desired result, although if you’re not careful, you’re apt to face
a pile of gooey powder, broken pots, or data as meaningless as the quantum wave function of Gilligan’s
Island.
As with your home appliances, microprocessor hardware was designed to carry out a specific function,
and silicon semiconductor technology was harnessed simply to implement those functions. Nothing
about what the microprocessor does is true mystical magic that might be practiced by a shaman,
charlatan, or accountant.
In fact, a microprocessor need not be made from silicon (scientists are toying with advanced
semiconducting materials that promise higher speeds) nor need it be based on electronics. A series of
gears, cams, and levers or a series of pipes, valves, and pans could carry out all the logical functions to
achieve exactly the same results as your PC. Mechanical and hydraulic computers have, in fact, been
built, though you’d never mistake one for a PC.
The advantage of electronics and the microprocessor is speed. Electrical signals travel at nearly the
speed of light; microprocessors carry out their instructions at rates of millions per second. Without that speed, the elaborate programs on your dealer’s shelves would never have been written. Executing such a program with a steam-driven computing engine might have taken lifetimes.
The speed of the microprocessor makes it into the miracle that it is.
The advantage of the silicon-based form of electronics is familiarity. An entire industry has arisen to
work with silicon. The technology is mature. Fabricating silicon circuits is routine and the results are
predictable. Familiarity also breeds economy. Billions of silicon chips are made each year. Although the
processes involved are precise and exotic, the needed equipment and materials are readily available. In
other words, silicon is used a lot because it is used a lot.

Background

Reduced to its fundamental principles, the workings of a modern silicon-based microprocessor are not
difficult to understand. They are simply the electronic equivalent of a knee-jerk. Every time you hit the
microprocessor with an electronic hammer blow (the proper digital input), it reacts by doing a specific
something, always the same thing for the same input and conditions, kicking out the same function.

The complexity of the microprocessor and what it does arises from the wealth of inputs it can react to
and the interaction between successive inputs. Although the microprocessor’s function is precisely
defined by its input, the output from that function varies with what the microprocessor has to work on,
and that depends on previous inputs. For example, the result of you carrying out a specific
command—"Simon says lift your left leg"—will differ dramatically depending on whether the previous
command was "Simon says sit down" or "Simon says lift your right leg."
Getting an electrical device to respond in knee-jerk fashion rates as one of the greatest breakthroughs in
technology. The first application was to extend the human reach beyond what you could immediately
touch, beyond the span of the proverbial 10-foot pole. The simple telegraph is one of the earliest, and
perhaps best, examples. Closing a switch (pressing down on the telegraph key) sends a current down the
wire that activates an electromagnet at the distant end of the wire, causing the rattle at the other end that
yields a message to a distant telegrapher. This grand electro-mechanical invention underlies all of
modern computer technology. It puts one electrical circuit in control of another circuit a great or small
distance away.

Basic Circuit Design

From these simple beginnings, from the telegraph technology of the 1850s, you can build a computer.
Everything that a computer does involves one of two operations: decision-making and memory, or in
other words, reacting and remembering. Telegraph technology can do both. A silicon semiconductor
does likewise because it, too, allows you to control one signal with another.
The electrical circuit that makes decisions is called a logic gate. One that remembers is termed a latch or
simply memory.

Logic Gates

Giving an electrical circuit the power to make a decision isn’t as hard as you might think. Start with that
same remote action of the telegraph but add a mechanical arm that links it to a light switch on your wall
so that as the telegraph pounds, the light flashes on and off. Certainly you’ll have done a lot of work for
a little return in that the electricity could be used to directly light the bulb. There are other possibilities,
however, that produce intriguing results. You could, for example, pair two weak telegraph arms so that
their joint effort would be required to throw the switch to turn on the light. Or you could link the two
telegraphs so that a signal on either one would switch on the light. Or you could install the switch
backwards so that when the telegraph activated, the light would go out instead of on.
These three telegraph-based design examples actually provide the basis for three different types of
computer circuits called logic gates (the AND, OR, and NOT gates, respectively). As electrical circuits,
they are called "gates" because they regulate the flow of electricity, allowing it to pass through or cutting
it off, much as a gate in a fence allows or impedes your own progress. These logic gates endow the
electrical assembly with decision-making power. In the light example, the decision is necessarily simple:
when to switch on the light. But these same simple gates can be formed into elaborate combinations that
make up a computer that can make complex logical decisions.
The concept of applying the rigorous approach of algebra to logical decision-making was first proposed
by English mathematician George Boole. In 1847, Boole founded the system of modern symbolic logic
that we now term Boolean logic (alternately, Boolean algebra). In his system, Boole reduced
propositions to symbols and formal operators that followed the strict rules of mathematics. Using his
rigorous approach, logical propositions could be proved with the same certainty as mathematical
equations.
The three logic gates can perform the function of all the operators in Boolean logic. They form the basis
of the decision-making capabilities of the computer as well as other logic circuitry. You’ll encounter
other kinds of gates such as NAND (short for "Not AND"), NOR (short for "Not OR"), and Exclusive
OR, but you can build any one of the others from the basic three, AND, OR, and NOT.
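That claim can be checked directly. The sketch below is purely illustrative (Python functions standing in for transistor circuits); it builds NAND, NOR, and Exclusive OR from nothing but the three primitives:

```python
# The three primitive gates, modeled as functions on single bits (0 or 1).
def AND(a, b): return a & b
def OR(a, b):  return a | b
def NOT(a):    return 1 - a

# Every other gate can be composed from the three primitives alone.
def NAND(a, b): return NOT(AND(a, b))      # "Not AND"
def NOR(a, b):  return NOT(OR(a, b))       # "Not OR"
def XOR(a, b):                             # Exclusive OR: true when inputs differ
    return AND(OR(a, b), NOT(AND(a, b)))

# Print the Exclusive OR truth table as a quick check.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, XOR(a, b))
```

Hardware designers play the same composition game in silicon, just with transistors instead of function calls.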
In computer circuits, each gate requires at least one transistor. A microprocessor with 5.5 million
transistors may have nearly that many gates.

Memory

These same gates also can be arranged to form memory. Start with the familiar telegraph. Instead of
operating the current for a light bulb, however, reroute the wires from the switch so that they, too, link to
the telegraph’s electromagnet. In other words, when the telegraph moves, it throws a switch that supplies
itself with electricity. Once the telegraph is supplying itself with electricity, it will stay on using that
power even if you switch off the original power that first made the switch. In effect, this simple system
remembers whether it has once been activated. You can go back at any time and see if someone has ever
sent a signal to the telegraph memory system.
This basic form of memory has one shortcoming: it’s elephantine and never forgets. Resetting this
memory system requires manually switching off both the control voltage and the main voltage source.
A more useful form of memory takes two control signals: one switches it on; the other switches it off. In
simplest form, each cell of this kind of memory is made from two latches connected at cross purposes so
that switching one latch on cuts the other off. Because one signal sets this memory to hold data and the
other one resets it, this circuit is sometimes called set-reset memory. A more common term is flip-flop
because it alternately flips between its two states. In computer circuits, this kind of memory is often
simply called a latch. Although the main memory of your PC uses a memory that works on a different
electrical principle, latch memory remains important in circuit design.
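The set-reset flip-flop is conventionally built from two cross-coupled NOR gates. This Python sketch simulates that feedback loop; it illustrates the standard construction, not a specific circuit from this book:

```python
def nor(a, b):
    """A NOR gate: output is 1 only when both inputs are 0."""
    return 0 if (a or b) else 1

def sr_latch(s, r, q, qbar):
    """Settle a cross-coupled NOR latch. s = set, r = reset;
    q and qbar carry the state remembered from the last call."""
    for _ in range(4):                       # let the feedback loop settle
        q_new, qbar_new = nor(r, qbar), nor(s, q)
        if (q_new, qbar_new) == (q, qbar):
            break
        q, qbar = q_new, qbar_new
    return q, qbar

q, qbar = sr_latch(1, 0, 0, 1)     # pulse Set: q flips to 1
q, qbar = sr_latch(0, 0, q, qbar)  # inputs removed: the latch remembers
print(q)  # -> 1
```

The essential point is the feedback: each gate's output feeds the other's input, so the pair holds its state after the set or reset pulse goes away.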

Instructions

Although the millions of gates in a microprocessor are so tiny that you can’t even discern them with an
optical microscope (you need at least an electron microscope), they act exactly like elemental,
telegraph-based circuits. They use electrical signals to control other signals. The signals are just more
complicated, reflecting the more elaborate nature of the computer.
Today’s microprocessors don’t use single signals to control their operations. Rather, they use complex
combinations of signals. Each microprocessor command is coded as a pattern of signals, the presence or
absence of an electrical signal at one of the pins of the microprocessor’s package. The signal at each pin
represents one bit of digital information.

The designers of a microprocessor give certain patterns of these bit-signals specific meanings. Each
pattern is a command called a microprocessor instruction that tells the microprocessor to carry out a
specific operation. The bit pattern 0010110, for example, is the instruction that tells an Intel 8086-family
microprocessor to subtract in a very explicit manner. Other instructions tell the microprocessor to add,
multiply, divide, move bits or bytes around, change individual bits, or just wait around for another
instruction.
Microprocessor designers can add instructions to do just about anything from matrix calculations to back
flips—that is, if the designers wanted to, if the instruction actually did something useful, and if they had
unlimited time and resources to engineer the chip. Practical concerns like keeping the design workable
and the chip manageable constrain the range of commands given to a microprocessor.
The entire repertoire of commands that a given microprocessor model understands and can react to is
called that microprocessor’s instruction set or its command set. The designer of the microprocessor
chooses which pattern to assign to a given function. As a result, different microprocessor designs
recognize different instruction sets just as different board games have different rules.
Despite their pragmatic limits, microprocessor instruction sets can be incredibly rich and diverse, and the
individual instructions incredibly specific. The designers of the original 8086-style microprocessor, for
example, felt that a simple command to subtract was not enough by itself. They believed that the
microprocessor also needed to know what to subtract from what and what it should do with the result.
Consequently, they added a rich variety of subtraction instructions to the 8086 family of chips that
persists into today’s Pentium Pros. Each different subtraction instruction tells the microprocessor to take
numbers from different places and find the difference in a slightly different manner.
Some microprocessor instructions require a series of steps to be carried out. These multi-step commands
are sometimes called complex instructions because of their composite nature. Although the complex
instruction looks like a simple command, it may involve much work. A simple instruction would be
something like "pound a nail"; a complex instruction may be as far ranging as "frame a house." Simple
subtraction or addition of two numbers may actually involve dozens of steps, including the conversion of
the numbers from decimal to the binary (1’s and 0’s) notation that the microprocessor understands. For
instance, the previous sample subtraction instruction tells one kind of microprocessor that it should
subtract a number in memory from another number in the microprocessor’s accumulator, a place that’s
favored for calculations in today’s most popular microprocessors.
Everything that the microprocessor does consists of nothing more than a series of these step-by-step
instructions. A computer program is simply a list of microprocessor instructions. The instructions are
simple, but long and complex computer programs are built from them just as epics and novels are built
from the words of the English language. Although writing in English seems natural, programming feels
foreign because it requires that you think in a different way, in a different language. You even have to
think of jobs, such as adding numbers, typing a letter, or moving a block of graphics, as a long series of
tiny steps. In other words, programming is just a different way of looking at problems and expressing the
process of solving them.
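To make the point concrete, here is a toy accumulator machine in Python. The opcodes, mnemonics, and bit patterns are invented for illustration (they are not real 8086 encodings); what matters is that a "program" is nothing more than a list of coded instructions executed one after another:

```python
# Hypothetical bit patterns assigned to three instructions.
LOAD, ADD, SUB = 0b0001, 0b0010, 0b0011

def run(program):
    accumulator = 0                  # the chip's single working register
    for opcode, operand in program:  # fetch each coded instruction...
        if opcode == LOAD:           # ...decode its bit pattern...
            accumulator = operand    # ...and execute the assigned action
        elif opcode == ADD:
            accumulator += operand
        elif opcode == SUB:
            accumulator -= operand
    return accumulator

# "Subtract 3 from 6" expressed as a step-by-step instruction list:
print(run([(LOAD, 6), (SUB, 3)]))  # -> 3
```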

Registers

Before the microprocessor can work on numbers or any other data, it first must know what numbers to
work on. The most straightforward method of giving the chip the variables it needs would seem to be
supplying more coded signals at the same time the instruction is given. You could dump in the numbers
6 and 3 along with the subtract instruction, just as you would load laundry detergent along with shirts
and sheets into your washing machine. This simple method has its shortcomings, however. Somehow the
proper numbers must be routed to the right microprocessor inputs. The microprocessor needs to know
whether to subtract 6 from 3 or 3 from 6 (the difference could be significant, particularly when you’re
balancing your checkbook).
Just as you distinguish the numbers in a subtraction problem by where you put them in the equation (6-3
versus 3-6), a microprocessor distinguishes the numbers on which it works by their position (where they
are found). Two memory addresses might suffice were it not for the way most microprocessors are
designed. They have only one pathway to memory, so they can effectively "see" only one memory value
at a time. So instead, a microprocessor loads at least one number to an internal storage area called a
register. It can then simultaneously reach both the number in memory and the value in its internal
register. Alternately (and more commonly today) both values on which the microprocessor is to work are
loaded into separate internal registers.
Part of the function of each microprocessor instruction is to tell the chip which registers to use for data
and where to put the answers it comes up with. Other instructions tell the chip to load numbers into its
registers to be worked on later or to move information from a register someplace else, for instance to
memory or an output port.
A register functions both as memory and as a workbench. It holds bit patterns until they can be worked
on or sent out of the chip. The register is also connected with the processing circuits of the
microprocessor so that the changes ordered by instructions actually appear in the register. Most
microprocessors typically have several registers, some dedicated to specific functions (such as
remembering which step in a function the chip is currently carrying out; this register is called a counter
or instruction pointer) and some designed for general purposes. At one time, the accumulator was the
only register in a microprocessor that could manage calculations. In modern microprocessors, all
registers are more nearly equal (in some of the latest designs, all registers are equal, even
interchangeable), so the accumulator is now little more than a colorful term left over from a bygone era.
Not only do microprocessors have differing numbers of registers, but the registers may be of different
sizes. Registers are measured by the number of bits that they can work with at one time. A 16-bit
microprocessor, for example, should have one or more registers that each holds 16 bits of data at a time.
Today’s microprocessors have 32- or 64-bit registers.
Adding more registers to a microprocessor does not make it inherently faster. When a microprocessor
lacks advanced features, such as pipelining or superscalar technology (discussed later in this chapter), it
can perform only one operation at a time. More than two registers would seem superfluous. After all,
most math operations involve only two numbers at a time (or can be reduced to a series of two-number
operations). Even with old-technology microprocessors, however, having more registers helps the
software writer create more efficient programs. With more places to put data, a program needs to move
information in and out of the microprocessor less often, which can potentially save several program steps
and clock cycles.
Modern microprocessor designs, particularly those influenced by the latest research into design
efficiency, demand more registers. Because microprocessors run much faster than memory, every time
the microprocessor has to go to memory, it must slow down. Therefore, minimizing memory accessing
helps improve performance. Keeping data in registers instead of memory speeds things up.
On the other hand, having many registers is the equivalent of moving main memory into the
microprocessor with all the inherent complexities and shortcomings of memory technology. Research
has determined that about 32 registers for microprocessors using current technologies works best.
Consequently, nearly all of today’s most advanced microprocessors, the RISC (Reduced Instruction Set
Computer) chips discussed later in this chapter, have 32 registers.


The width of the registers does, however, have a substantial effect on the performance of a
microprocessor. The more bits assigned to each register, the more information that the microprocessor
can process in every cycle. Consequently, a 64-bit register in one of today’s top RISC chips holds the
potential of calculating eight times as fast as an 8-bit register of a first generation microprocessor—all
else being equal.
The performance advantage of using wider registers depends on the software being run, however. If, for
example, a computer program tells the microprocessor to work on data 16 bits at a time, the full power of
32-bit registers will not be tapped. For this reason, DOS, a 16-bit operating system written with 16-bit
instructions, does not take full advantage of today’s powerful 32-bit microprocessors. Nor do most
programs written to run under DOS or advanced operating systems that have inherited substantial 16-bit
code (such as Windows 95). Modern 32-bit operating systems are a better match and consequently
deliver better performance with the latest microprocessors such as the Pentium Pro.
You might notice one problem with really wide registers—most data isn’t all that wide. Text normally
comes in byte-wide (8-bit) blocks. Sound usually takes the form of two-byte units. Image data may be
one, two, three, or four bytes wide but almost never needs to be the eight bytes wide that many modern
microprocessors prefer. The latest microprocessors, those using Intel’s MMX technology, have been
designed to more efficiently use their wide registers by processing multiple narrow data types
simultaneously in a single register. The special MMX instructions tell the microprocessor how to process
all the short data blocks at once.
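The MMX approach can be mimicked in ordinary software. This illustrative Python sketch packs four 16-bit values into one 64-bit word so that a single wide addition processes all four pairs at once; it assumes no individual sum overflows 16 bits (real MMX also offers saturating arithmetic, which this toy ignores):

```python
MASK64 = 0xFFFFFFFFFFFFFFFF

def pack4(values):
    """Pack four 16-bit values into one 64-bit word."""
    word = 0
    for i, v in enumerate(values):
        word |= (v & 0xFFFF) << (16 * i)
    return word

def unpack4(word):
    """Split a 64-bit word back into its four 16-bit lanes."""
    return [(word >> (16 * i)) & 0xFFFF for i in range(4)]

def packed_add(a, b):
    """One 64-bit addition acts on all four lanes at once
    (valid only while no lane's sum carries past 16 bits)."""
    return (a + b) & MASK64

sums = packed_add(pack4([1, 2, 3, 4]), pack4([10, 20, 30, 40]))
print(unpack4(sums))  # -> [11, 22, 33, 44]
```

One register operation did four additions, which is exactly the economy the MMX instructions exploit.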

Clocked Logic

Microprocessors do not carry out instructions as soon as the instruction code signals reach the pins that
connect the microprocessor to your computer’s circuitry. If chips did react immediately, they would
quickly become confused. Electrical signals cannot change state instantly; they always go through a
brief, though measurable, transition period—a period of indeterminate level during which the signals
would probably perplex a microprocessor into a crash. Moreover, all signals do not necessarily change at
the same rate, so when some signals reach the right values, others may still be at odd values. As a result,
a microprocessor must live through long periods of confusion during which its signals are, at best,
meaningless, at worst, dangerous.
To prevent the microprocessor from reacting to these invalid signals, the chip waits for an indication that
it has a valid command to carry out. It waits till it gets a "Simon says" signal. In today’s PCs, this
indication is provided by the system clock. The clock sends out regular voltage pulses, the electronic
equivalent of the ticking of a grandfather clock. The microprocessor checks the instructions given to it
each time it receives a clock pulse—providing it is not already busy carrying out another instruction.
Early microprocessors were unable to carry out even one instruction every clock cycle. Vintage
microprocessors may require as many as 100 discrete steps (and clock pulses) to carry out a single
instruction. The number of cycles required to carry out instructions varies with the instruction and the
microprocessor design. Some instructions take a few cycles, others dozens. Moreover, some
microprocessors are more efficient than others in carrying out their instructions. The trend today is to
minimize and equalize the number of clock cycles needed to carry out a typical instruction.
Today’s microprocessors go even further in breaking the correspondence between the system clock and
the number of instructions that are executed. They deliberately change the external system clock speed
before it is used internally by the microprocessor circuitry. In most cases, the system clock frequency is
increased by some discrete factor (typically two or three, although some Pentium chips use non-integral
factors such as 1.5 as their clock multipliers) so that operations inside the chip run faster than the
external clock would permit. Despite the different frequencies inside and outside the chip, the system
clock is still used to synchronize logic operations. The microprocessor’s logic makes the necessary
allowances.
The lack of correspondence between cycles and instruction execution means that clock speed (typically a
frequency given in megahertz (MHz)) alone does not indicate the relative performance of two
microprocessors. If, for example, one microprocessor requires an average of six clock cycles to execute
every instruction and another chip needs only two, the first chip will be slower than the second (which executes 50 percent more instructions per second) even when its clock speed is twice as fast. The only time that clock speed gives a reliable
indication of relative performance is when you compare two identical chip designs that operate at
different frequencies, say Pentium chips running at 150 and 166 MHz. (The latter Pentium would
calculate about 10 percent faster.)
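The arithmetic behind that comparison is easy to verify: throughput is simply clock frequency divided by average cycles per instruction. The clock figures in this Python check are arbitrary illustrative values:

```python
def throughput(clock_mhz, cycles_per_instruction):
    """Millions of instructions per second."""
    return clock_mhz / cycles_per_instruction

chip_a = throughput(200, 6)  # six-cycle chip at twice the clock speed
chip_b = throughput(100, 2)  # two-cycle chip at half the clock speed
print(chip_a < chip_b)  # -> True: the chip with the slower clock wins
```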

Functional Parts of a Microprocessor

Most microprocessor designs divide their internal clocked logic circuitry into three functional parts: the
input/output unit (or I/O unit), the control unit, and the arithmetic-logic unit (or ALU). The last two are
sometimes jointly called the central processing unit (or CPU), although the same term often is used as a
synonym for the entire microprocessor. Some chip makers further subdivide these units, give them other
names, or include more than one of each in a particular microprocessor. In any case, the functions of
these three units are an inherent part of any chip.
All three parts of the microprocessor interact together. In all but the simplest microprocessor designs, the
I/O unit is under control of the control unit, and the operation of the control unit may be determined by
the results of calculations of the arithmetic/logic unit. The combination of the three parts
determines the power and performance of the microprocessor.
Each part of the microprocessor also has its own effect on the processing speed of the system. The
control unit operates the microprocessor’s internal clock, which determines the rate at which the chip
operates. The I/O unit determines the bus width of the microprocessor, which influences how quickly
data and instructions can be moved in and out of the microprocessor. And the registers in the
arithmetic/logic unit determine how much data the microprocessor can operate on at one time.

The Input/Output Unit

The input/output unit links the microprocessor to the rest of the circuitry of the computer, passing along
program instructions and data to the registers of the control unit and arithmetic/logic unit. The I/O unit
matches the signal levels and timing of the microprocessor’s internal solid-state circuitry to the
requirements of the other components inside the PC. The internal circuits of a microprocessor, for
example, are designed to be stingy with electricity so that they can operate faster and cooler. These
delicate internal circuits cannot handle the higher currents needed to link to external components.
Consequently, each signal leaving the microprocessor goes through a signal buffer in the I/O unit that
boosts its current capacity.


The input/output unit can be as simple as a few buffers, or it may involve many complex functions. In
the latest Intel microprocessors used in some of the most powerful PCs, the I/O unit includes cache
memory and clock-doubling or -tripling logic to match the high operating speed of the microprocessor to
slower external memory.
The microprocessors used in PCs have two kinds of external connections to their input/output units:
those connections that indicate the address of memory locations to or from which the microprocessor
will send or receive data or instructions, and those connections that convey the meaning of the data or
instructions. The former is called the address bus of the microprocessor; the latter, the data bus.
The number of bits in the data bus of a microprocessor directly influences how quickly it can move
information. The more bits that a chip can use at a time, the faster it is. Microprocessors with 8-, 16-, and
32-bit data buses are all used in various ages of PCs. The latest Pentium and Pentium Pro
microprocessors go all the way to 64 bits.
The number of bits available on the address bus influences how much memory a microprocessor can
address. A microprocessor with 16 address lines, for example, can directly work with 2^16 addresses;
that’s 65,536 (or 64K) different memory locations. The different microprocessors used in various PCs
span a range of address bus widths from 20 to 32 bits. Although the Pentium and Pentium Pro stick with
a 32-bit address bus, other chip makers have extended the reach of some of their products to 64 bits.
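Each added address line doubles the memory a chip can reach, so the addressable range is a power of two. A quick illustrative check in Python of the widths mentioned:

```python
def addressable_locations(address_lines):
    """Each added address line doubles the reachable memory."""
    return 2 ** address_lines

print(addressable_locations(16))  # 65,536 locations: 64K
print(addressable_locations(20))  # 1,048,576 locations: 1MB
print(addressable_locations(32))  # 4,294,967,296 locations: 4GB
```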

The Control Unit

The control unit of a microprocessor is a clocked logic circuit that, as its name implies, controls the
operation of the entire chip. Unlike more common integrated circuits, whose function is fixed by
hardware design, the control unit is more flexible. The control unit follows the instructions contained in
an external program and tells the arithmetic/logic unit what to do. The control unit receives instructions
from the I/O unit, translates them into a form that can be understood by the arithmetic/logic unit, and
keeps track of which step of the program is being executed.
With the increasing complexity of microprocessors, the control unit has become more sophisticated. In
the Pentium, for example, the control unit must decide how to route signals between what amounts to
two separate processing units. In other advanced microprocessors, the function of the control unit is split
among other functional blocks, such as those that specialize in evaluating and handling branches in the
stream of instructions.

The Arithmetic/Logic Unit

The arithmetic/logic unit handles all the decision making (the mathematical computations and logic
functions) that are performed by the microprocessor. The unit takes the instructions decoded by the
control unit and either carries them out directly or executes the appropriate microcode (see the
following "Microcode" section) to modify the data contained in its registers. The results are passed back
out of the microprocessor through the I/O unit.
The first microprocessors had but one ALU. Modern chips may have several, which commonly are

http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/wrh03.htm (11 de 36) [23/06/2000 04:49:05 p.m.]


Winn L. Rosch Hardware Bible, Electronic Edition, Chapter 3

classed into two types. The basic form is the integer unit, one that carries out only the simplest
mathematical operations. More powerful microprocessors also include floating-point units, which handle
advanced math operations (such as trigonometric and transcendental functions) typically at greater
precision. Early Intel microprocessors made the floating-point unit a separate optional chip sometimes
called a numeric or math coprocessor, which is discussed in the "Math Coprocessor" section later in this
chapter.
Even chips equipped solely with integer units can carry out advanced mathematical operations with
suitable programs that break the problems into discrete simple steps. Floating-point units use separate,
dedicated instructions for their advanced functions and carry out the operations more quickly.

Advanced Technologies

Because higher clock speeds make circuit boards and integrated circuits more difficult to design and
manufacture, engineers have a strong incentive to get their microprocessors to process more instructions
at a given speed. Most modern microprocessor design techniques are aimed at exactly that.
One way to speed up the execution of instructions is to reduce the number of internal steps the
microprocessor must take for execution. Step reduction can take two forms: making the microprocessor
more complex so that steps can be combined or making the instructions simpler so that fewer steps are
required. Both approaches have been used successfully by microprocessor designers—the former as
CISC (Complex Instruction Set Computer) microprocessors, the latter as RISC.
Another way of trimming the number of cycles required by programs is to operate on more than one
instruction simultaneously. Two approaches to processing more instructions at once are pipelining and
superscalar architecture. Both CISC and RISC chips take advantage of these technologies as well as
several design techniques that help them operate more efficiently. Differences in the two classes of
microprocessor are easier to understand if you first know the underlying technologies.

Pipelining

In older microprocessor designs, a chip works single-mindedly. It reads an instruction from memory,
carries it out step by step, and then advances to the next instruction. Each step requires at least one tick
of the microprocessor’s clock. Pipelining enables a microprocessor to read an instruction, start to process
it, and then, before finishing with the first instruction, read another instruction. Because every instruction
requires several steps, each in a different part of the chip, several instructions can be worked on at once,
and passed along through the chip like a bucket brigade or its more efficient alternative, the pipeline.
Intel’s Pentium chips, for example, have four levels of pipelining. Up to four different instructions may
be undergoing different phases of execution at the same time inside the chip. When operating at its best,
pipelining reduces the multiple step/multiple clock cycle processing of an instruction to a single clock
cycle.
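The benefit of an ideal pipeline can be sketched with a simple cycle count — a toy model that ignores stalls and branches:

```python
def pipeline_cycles(instructions, stages):
    # The first instruction needs `stages` cycles to pass through the
    # pipeline; in an ideal flow, one more instruction completes on
    # every cycle after that.
    if instructions == 0:
        return 0
    return stages + (instructions - 1)

# With four pipeline stages, 100 instructions finish in 103 cycles --
# very nearly one instruction per clock, versus 400 cycles unpipelined.
print(pipeline_cycles(100, 4))  # 103
```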
Pipelining is very powerful, but it is also demanding. The pipeline must be carefully organized, and the
parallel paths kept carefully in step. It’s sort of like a chorus singing a canon like Frère
Jacques—one missed beat and the harmony falls apart. If one of the execution stages delays, all the rest
delay as well. The demands of pipelining are one factor pushing microprocessor designers to make all
instructions execute in the same number of clock cycles. That way, keeping the pipeline in step is easier.
In general, the more stages in a pipeline, the greater the acceleration it can offer. Super-pipelining breaks
the steps of the basic pipeline into still smaller steps. Today’s fastest Intel microprocessor, the
Pentium Pro, uses 12 stages in its super-pipeline.
Real-world programs conspire against lengthy pipelines, however. Nearly all programs branch. That is,
their execution can take alternate paths down different instruction streams depending on the results of
calculations and decision-making. A pipeline can load up with instructions of one program branch before
it discovers that another branch is the one the program is supposed to follow. In that case, the entire
contents of the pipeline must be dumped, and the whole thing loaded up again. The result is a lot of
logical wheel-spinning and wasted time. The bigger the pipeline, the more time wasted. The waste
resulting from branching begins to outweigh the benefits of bigger pipelines in the vicinity of five stages.

Branch Prediction

Today’s most powerful microprocessors are adopting a technology called branch prediction logic to deal
with this problem. The microprocessor makes its best guess at which branch a program will take as it is
filling up the pipeline; it then executes these most likely instructions. Because the chip is guessing at
what to do, this technology is sometimes called speculative execution.
When the microprocessor’s guesses turn out to be correct, the chip benefits from the multiple pipeline
stages and is able to run through more instructions than clock cycles. When the chip’s guess turns out
wrong, however, it must discard the results obtained under speculation and execute the correct code. The
chip marks the data in later pipeline stages as invalid and discards it. Although the chip doesn’t lose
time—the program would have executed in the same order anyway—it does lose the extra boost
bequeathed by the pipeline.
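The cost of those wrong guesses can be sketched with a toy expected-value model; the accuracy and flush figures here are illustrative, not measured values for any real chip:

```python
def avg_branch_penalty(accuracy, flush_cost):
    # A correct prediction costs nothing extra; a wrong one discards
    # `flush_cost` cycles of speculative work in the pipeline.
    return (1 - accuracy) * flush_cost

# The deeper the pipeline, the more each misprediction hurts, which is
# why a 12-stage super-pipeline demands a better predictor than a
# 5-stage design.
print(avg_branch_penalty(0.90, 12))  # roughly 1.2 wasted cycles per branch
print(avg_branch_penalty(0.90, 5))   # roughly 0.5
```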

Superscalar Architectures

The steps in a program normally are listed sequentially, but they don’t always need to be carried out
exactly in order. Just as tough problems can be broken into easier pieces, program code can be divided as
well. If, for example, you want to know the larger of two rooms, you have to compute the volume of
each, and then make your comparison. If you had two brains, you could compute the two volumes
simultaneously. A superscalar microprocessor design does essentially that. By providing two or more
execution paths for programs, it can process two or more program parts simultaneously. Of course, the
chip needs enough innate intelligence to determine which problems can be split up and how to do it. The
Pentium, for example, has two parallel, pipelined execution paths.
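The two-room analogy can be sketched directly (the room dimensions are arbitrary); the point is that the two volume computations share no data, so a superscalar chip could dispatch them to separate execution units:

```python
def volume(length, width, height):
    return length * width * height

# The two calls are independent, so they could proceed in parallel
# pipelines; only the comparison must wait for both results.
room_a = volume(4, 5, 3)      # independent: could run in pipeline 1
room_b = volume(6, 3, 3)      # independent: could run in pipeline 2
larger = max(room_a, room_b)  # depends on both, so it issues last
print(larger)  # 60
```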
The first superscalar computer design was the Control Data Corporation 6600 mainframe, introduced in
1964. Designed specifically for intense scientific applications, the initial 6600 machines were built from
eight functional units and were the fastest computers in the world at the time of their introduction.
Superscalar architecture gets its name because it goes beyond the incremental increase in speed made
possible by scaling down microprocessor technology. An improvement to the scale of a microprocessor
design would reduce the size of the microcircuitry on the silicon chip. The size reduction shortens the
distance that signals must travel and lowers the amount of heat generated by the circuit (because the
elements are smaller and need less current to effect changes). Some microprocessor designs lend
themselves to scaling down. Superscalar designs get a more substantial performance increase by
incorporating a more dramatic change in circuit complexity.
Using pipelining and superscalar architecture cycle-saving techniques has dramatically cut the number of
clock cycles required for the execution of a typical microprocessor instruction. Early microprocessors
needed, on average, several cycles for each instruction. Many of today’s chips (both CISC and RISC)
actually have average instruction throughputs of fewer than one cycle per instruction.

Out-of-Order Execution

No matter how well the logic of a superscalar microprocessor divides a program, each pipeline is
unlikely to get an equal share of the work. One or another pipeline will grind away while another
finishes in an instant. Certainly the chip logic can shove another instruction down the free pipeline—if
another instruction is ready. But if the next instruction depends on the results of the one before it and that
instruction is the one stuck grinding away in the other pipeline, the free pipeline stalls. It is available for
work but can do no work. Potential processor power gets wasted.
Like a good Type A employee who always looks for something to do, a microprocessor can check the
program for the next instruction that doesn’t depend on previous work that’s
not finished and work on the new instructions. This sort of ambitious approach to programs is termed
out-of-order execution, and it helps microprocessors take full advantage of superscalar designs.
This sort of ambitious microprocessor faces a problem, however. It is no longer running the program in
the order in which it was written, and the results might be other than the programmer had intended.
Consequently, microprocessors capable of out-of-order execution don’t immediately post the results
from their processing into their registers. The work gets carried out invisibly and the results of the
instructions that are processed out of order are held in a buffer until the chip has finished the processing
of all the previous instructions. The chip puts the results back into the proper order, checking to be sure
that the out-of-order execution has not caused any anomalies, before posting the results to its registers.
To the program and the rest of the outside world, the results appear in the microprocessor’s registers as if
they had been processed in normal order, only faster.
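A toy scheduler illustrates the idea; the three-instruction program, register names, and latencies below are invented for illustration:

```python
def issue_order(program, latency):
    """Single-issue toy model of out-of-order execution: each cycle,
    issue the earliest instruction in program order whose source
    registers are ready.  `program` is a list of (name, dest, sources);
    `latency[name]` is how many cycles its result takes."""
    ready_at = {}  # register -> cycle at which its value is available
    issued, order, cycle = set(), [], 0
    while len(order) < len(program):
        for name, dest, srcs in program:
            if name in issued:
                continue
            if all(ready_at.get(r, 0) <= cycle for r in srcs):
                issued.add(name)
                order.append(name)
                ready_at[dest] = cycle + latency[name]
                break
        cycle += 1
    return order

# 'add' needs the result of the slow 'mul', but 'inc' is independent,
# so the core starts 'inc' while 'add' is still waiting.
prog = [("mul", "r1", ["r2"]), ("add", "r3", ["r1"]), ("inc", "r4", [])]
print(issue_order(prog, {"mul": 3, "add": 1, "inc": 1}))
```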

Register Renaming

Out-of-order execution often runs into its own problems. Two independently executable instructions may
refer to or change the same register. In the original program, one would carry out its operation, and the
other would do its work later. During superscalar out-of-order execution, the two instructions may want
to work on the register simultaneously. Because that conflict would inevitably lead to confusing results
and errors, an ordinary superscalar microprocessor would have to ensure the two instructions referencing
the same register executed sequentially instead of in parallel, eliminating the advantage of its superscalar
design.
To avoid such problems, advanced microprocessors use register renaming. Instead of a small number of
registers with fixed names, they use a larger bank of registers that can be named dynamically. The
circuitry in each chip converts the references made by an instruction to a specific register name to point
instead to its choice of physical register. In effect, the program asks for the EAX register, and the chip
says, "Sure," and gives the program a register it calls EAX. If another part of the program asks for EAX,
the chip pulls out a different register and tells the program that this one is EAX, too. The program takes
the microprocessor’s word for it, and the microprocessor doesn’t worry because it has several million
transistors to sort things out in the end.
And it takes several million transistors because the chip must track all references to registers. It must
ensure that when one program instruction depends on the result in a given register, it has the right
register and results dished up to it.
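A minimal sketch of the renaming idea follows; the `p0`, `p1` physical register names are invented, and real renamers also track when physical registers can be freed:

```python
class Renamer:
    """Toy register renamer: every write to an architectural register
    (such as EAX) is mapped to a fresh physical register, so two
    writes that reuse the same name no longer conflict."""
    def __init__(self):
        self.map, self.next_phys = {}, 0
    def write(self, arch):
        self.map[arch] = "p%d" % self.next_phys
        self.next_phys += 1
        return self.map[arch]
    def read(self, arch):
        # A later read sees whichever physical register currently
        # stands in for the architectural name.
        return self.map[arch]

r = Renamer()
first = r.write("EAX")   # one part of the program asks for EAX...
second = r.write("EAX")  # ...another asks, and gets a different register
print(first, second)     # p0 p1 -- no conflict, both can run in parallel
```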

Instruction Sets

Instructions are the basic unit for telling a microprocessor what to do. Internally the circuitry of the
microprocessor has to carry out hundreds, thousands, or even millions of logic operations to carry out
one instruction. The instruction, in effect, triggers a cascade of logical operations. How this cascade is
controlled marks the great divide in microprocessor and computer design.
The first electronic computers used a hard-wired design. An instruction simply activated the circuits
appropriate for carrying out all the steps required. This design has its advantages. It optimizes the speed
of the system because the direct hard-wire connection adds nothing to slow down the system. Simplicity
means speed, and the hard-wired approach is the simplest. Moreover, the hard-wired design was the
practical and obvious choice. After all, computers were so new that no one had thought up any
alternative.
But the hard-wired computer design has a significant drawback. It ties the hardware and software
together in a single unit. Any change in the hardware must be reflected in the software. A modification
to the computer means that programs have to be modified. A new computer design may require that
programs be entirely rewritten from the ground up.

Microcode

The inspiration for breaking away from the hard-wired approach was the need for flexibility in
instruction sets. Throughout most of the history of computing, determining exactly what instructions
should make up a machine’s instruction set was more an art than a science. IBM’s first commercial
computers, the 701 and 702, were designed more from intuition than from any study of which
instructions programmers would need to use. Each machine was custom tailored to a specific
application. The 701 ran instructions thought to serve scientific users; the 702 had instructions aimed at
business and commercial applications.
When IBM tried to unite its many application-specific computers into a single, more general purpose
line, these instruction sets were combined so that one machine could satisfy all needs. The result was, of
course, a wide, varied, and complex set of instructions. The new machine, the IBM 360 (introduced in
1964), was unlike previous computers in that it was created not as hardware but as an architecture. IBM
developed specifications and rules for how the machine would operate, but enabled the actual machine to
be created from any hardware implementation designers found most expedient. In other words, IBM
defined the instructions that the 360 would use but not the circuitry that would carry them out. Previous
computers used instructions that directly controlled the underlying hardware. To adapt the instructions
defined by the architecture to the actual hardware that made up the machine, IBM adopted an idea
originally conceived by Maurice Wilkes at Cambridge University called microcode.
Using this technology, an instruction causes a computer to execute a small program to carry out the logic
instructions required by the instruction. The collection of small programs for all the instructions the
computer understands is its microcode.
Although the additional layer of microcode made machines more complex, it added a great deal of
design flexibility. Engineers could incorporate whatever new technologies they wanted inside the
computer, yet still run the same software with the same instructions originally written for older designs.
In other words, microcode enabled new hardware designs and computer systems to have backward
compatibility with earlier machines.
Since the introduction of the 360, nearly all mainframe computers have used microcode. When the
microprocessors that enabled PCs were created, they followed the same design philosophy as the 360 by
using microcode to match instructions to hardware. In effect, the microcode in a microprocessor is a
secondary set of instructions that runs invisibly inside the chip on a nanoprocessor—essentially a
microprocessor within a microprocessor.
This microcode and nanoprocessor approach makes creating a complex microprocessor easier. The
powerful data processing circuitry of the chip can be designed independently of the instructions it must
carry out. The manner in which the chip handles its complex instructions can be fine-tuned even after the
architecture of the main circuits is laid in place. Bugs in the design can be fixed relatively quickly by
altering the microcode, which is an easy operation compared to the alternative of developing a new
design for the whole chip, a task that’s not trivial when millions of transistors are involved. The rich
instruction set fostered by microcode also makes writing software for the microprocessor (and computers
built from it) easier, reducing the number of instructions needed for each operation.
Microcode has a big disadvantage, however. It makes computers and microprocessors more complicated.
In a microprocessor, the nanoprocessor must go through several of its own microcode instructions to
carry out every instruction you send to the microprocessor. More steps mean more processing time taken
for each instruction. Extra processing time means slower operation. Engineers found that microcode had
its own way to compensate for its performance penalty—complex instructions.
Using microcode, computer designers could easily give an architecture a rich repertoire of instructions
that carry out elaborate functions. A single, complex instruction might do the job of half a dozen or more
simpler instructions. Although each instruction would take longer to execute because of the microcode,
programs would need fewer instructions overall. Moreover, adding more instructions could boost speed.
One result of this microcode "more is merrier" instruction approach is that typical PC microprocessors
have seven different subtraction commands.

RISC

Although long the mainstream of computer and microprocessor design, microcode is not necessary.
While system architects were staying up nights concocting ever more powerful and obscure instructions,
a counter force was gathering. Starting in the 1970s, the microcode approach came under attack by
researchers who claimed it takes a greater toll on performance than its benefits justify.
By eliminating microcode, this design camp believed, simpler instructions could be executed at speeds
so much higher that no degree of instruction complexity could compensate. By necessity, such
hard-wired machines would offer only a few instructions because the complexity of their hard-wired
circuitry would increase dramatically with every additional instruction added. Practical designs are best
made with small instruction sets.
John Cocke at IBM’s Yorktown Research Laboratory analyzed the usage of instructions by computers
and discovered that most of the work done by computers involves relatively few instructions. Given a
computer with a set of 200 instructions, for example, two-thirds of its processing involves using as few
as 10 of the total instructions. Cocke went on to design a computer that was based on a few instructions
that could be executed quickly. He is credited with inventing the Reduced Instruction Set Computer or
RISC in 1974. In 1987 Cocke’s work on RISC won him the Turing Award (named for computer pioneer
Alan M. Turing, known best for the Turing Test definition of artificial intelligence), given by the
Association for Computing Machinery as its highest honor for technical contributions to computing.
Note that the RISC concept pre-dated the term, however. The term RISC is credited to David Patterson,
who used it in a course in microprocessor design at the University of California at Berkeley in 1980. The
first chip to bear the label and to take advantage of Cocke’s discoveries was RISC-I, a laboratory design
that was completed in 1982. To distinguish this new design approach from traditional microprocessors,
microcode-based systems with large instruction sets have come to be known as Complex Instruction Set
Computers or CISC designs.
Cocke’s research showed that most of the computing was done by basic instructions, not by the more
powerful, complex, and specialized instructions. Further research at Berkeley and Stanford Universities
demonstrated that there were even instances in which a sequence of simple instructions could perform a
complex task faster than a single complex instruction could. The result of this research is often
summarized as the 80/20 Rule: about 20 percent of a computer’s instructions do about 80 percent of the
work. The aim of the RISC design is to optimize a computer’s performance for that 20 percent of
instructions, speeding up their execution as much as possible. The remaining 80 percent of the
commands could be duplicated, when necessary, by combinations of the quick 20 percent. Analysis and
practical experience has shown that the 20 percent could be made so much faster that the overhead
required to emulate the remaining 80 percent was no handicap at all.
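The 80/20 argument can be checked with a bit of Amdahl's-law-style arithmetic; the speedup and penalty figures below are illustrative, not measurements:

```python
def overall_speedup(fast_fraction, fast_gain, slow_penalty):
    # `fast_fraction` of the running time goes to the common
    # instructions, which become `fast_gain` times faster; the rare
    # remainder is emulated and becomes `slow_penalty` times slower.
    new_time = fast_fraction / fast_gain + (1 - fast_fraction) * slow_penalty
    return 1 / new_time

# Make the common 80 percent of the work four times faster, and even
# if emulating the rare instructions doubles their cost, the machine
# as a whole still comes out well ahead.
print(overall_speedup(0.8, 4.0, 2.0))
```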
In 1979 IBM introduced its model 801, the first machine to take advantage of Cocke’s findings. It is
credited as the first computer intentionally designed with a reduced instruction set. The 801 was a 32-bit
minicomputer with 32 registers that could execute its simple instructions in a single processor cycle. The
801 led to the development of IBM’s Personal Computer/RT in 1986, which was refined into the RISC
System/6000. The multi-chip processor in the RS/6000 was consolidated into a single chip that formed
the basis of IBM’s PowerPC microprocessors (now being jointly developed with Motorola).
The Berkeley line of RISC research led to the RISC-II microprocessor (in 1984) and SOAR. Together,
these laboratory designs inspired Sun Microsystems to develop the SPARC line of microprocessors.
RISC philosophy also inspired John Hennessy at Stanford University to found the MIPS project there.
Although the MIPS group once said that the acronym was derived from a description of their design goal
(Microprocessor without Interlocked Pipeline Stages), more commonly it is held to stand for Millions of
Instructions Per Second, a rudimentary yardstick of microprocessor performance. The MIPS project
eventually spawned RISC-chip developer MIPS Computer Systems (known as MIPS Technologies since
its merger with Silicon Graphics in 1992). The Silicon Graphics R2000, R3000, R4000, R4400, and
R6000 chips trace their heritage back to the Stanford line of development.
No sharp edge demarcates the boundaries of what constitutes a reduced or complex instruction set. The
DEC Alpha, for example, one of the most recent RISC designs, has a very full repertoire of 160
instructions. In contrast, Intel’s Pentium, generally considered to be a CISC microprocessor, features
about 150 instructions (depending on how you count). In light of such incongruities, some RISC
developers now contend the RISC term has stood for, not Reduced Instruction Set, but rather, Restricted
Instruction Set Computer all along.
More important than the nomenclature or number of instructions that a computer or microprocessor
understands in characterizing RISC and CISC is how those instructions are realized. Slimming down a
computer’s instruction set is just one way that engineers go about streamlining its processing. As the
instructions are trimmed, all the ragged edges that interfere with its performance are trimmed off, and all
that remains is honed and smoothed to offer the least possible resistance to the passage of data.
Consequently, RISC designs are best distinguished from CISC not by a single "to be or not to be" rule
but by whether (and how well) they incorporate a number of characteristics. Some of the important
characteristics of RISC include:
● Single-cycle or better execution of instructions. Most instructions on a RISC computer
will be carried out in a single clock cycle, if not faster, because of pipelining. The chip
doesn’t process a single instruction in a fraction of a cycle, but instead works on multiple
instructions simultaneously as they move down the pipeline. For example, a chip may work
on four instructions simultaneously, each of which requires three cycles to execute. The net
result is that the chip would require three fourths of a clock cycle for each instruction.
● Uniformity of instructions. The RISC pipeline operates best if all instructions are the same
length (number of bits), require the same syntax, and execute in the same number of cycles.
Most RISC systems have instruction sets made up solely of 32-bit commands. In contrast,
the CISC command set used by the Intel-standard microprocessors in PCs uses instructions
that are eight, sixteen, or thirty-two bits long.
● Lack of microcode. RISC computers either entirely lack microcode or have very little of it,
relying instead on hard-wired logic. Operations handled by microcode in CISC
microprocessors require sequences of simple RISC instructions. Note that if these complex
operations are performed repeatedly, the series of RISC instructions will lodge in the high
speed memory cache of the microprocessor. The cache contents then act like microcode
that’s automatically customized for the running program.
● Load-store design. Accessing memory during the execution of an instruction often imposes
delays because RAM cannot be accessed as quickly as the microprocessor runs.
Consequently, most RISC machines lack immediate instructions (those that work on data in
memory rather than in registers) and minimize the number of instructions that affect
memory. Data must be explicitly loaded into a register using a separate load instruction
before it can be worked on. The sequence of instructions in program code can then be
organized (by an optimizing compiler) so that the delay on the pipeline is minimized.
● The hard work is in the software. The RISC design shifts much of the work in achieving
top performance to the software that runs on the system. RISC performance depends on
how efficiently the instructions for running the system are arranged. Processing multiple
instructions in a single clock cycle requires that the program pipeline be kept full of
instructions that are constantly moving. If the pipeline harmony breaks down, the system
stalls.
RISC systems depend on special language programs called optimizing compilers that
analyze the instruction steps they generate to see whether rearranging the instructions will
better match the needs of the microprocessor pipeline. In effect, RISC programs are
analyzed and rewritten for optimum speed before they are used. The extra time spent on
preparing the program pays off in increased performance every time it runs. Commercial
programs are already compiled when you get them, so you normally don’t see the extra
effort exerted by the optimizing compiler. You just get quicker results.
● Design simplicity. Above all, simplicity is the key to the design of a RISC machine or
microprocessor. Although, for example, the Intel 80486 microprocessor has the equivalent
of about one million transistors inside its package, the RISC-based MIPS M/2000 has only
about 120,000; yet the two are comparable in performance. Fewer transistors mean fewer
things to go wrong. RISC chips aren’t necessarily more reliable, but making them without
fabrication errors is easier than with more complex chips.
More important than the number of transistors is the amount of space on the silicon chip that needs to be
used to make a microprocessor. As the area of a chip increases, the likelihood of fabrication errors
increases. During the fabrication process, errors are inevitable. A speck of dust or a bit of semiconductor
that doesn’t grow or etch properly can prevent the finished circuit from working. A number of such
defects are inevitable on any single silicon matrix. The larger and more complex the circuits on the
matrix, the more likely any one (or all) of them will be plagued by a defect. Consequently, the yield of
usable circuits from a matrix plummets as the circuits become more complex and larger. Moreover, the
bigger the design of a chip, the fewer patterns that will fit on a die. That is, the fewer chips that can be
grown at a time with given fabrication equipment. Overall, the yield of RISC chips can thus be greater.
In more practical terms, it costs more to build more complex microprocessors.
Because they are simpler, RISC chips are easier to design. Fewer transistors means less circuitry to lay
out, test, and give engineers nightmares. Just as the blueprints of an igloo would be more manageable
than those for a Gothic cathedral, a RISC chip design takes less work and can be readied faster.
Little wonder, then, that new microprocessor manufacturers favor RISC. In fact, some people claim that
every microprocessor designed since 1985 has been RISC. Like every exaggeration, this one holds more
than a grain of truth. RISC ideas have infiltrated every high performance microprocessor design. The
only CISC chips surviving in the high performance market are those designed by Intel, and even the
newest of them incorporate RISC concepts. RISC has become such a big selling point that every chip
maker claims to have it. In that no one can pin down exactly what constitutes RISC, who is to say
otherwise?
In light of all the advantages of RISC, the survival of any CISC microprocessors might seem odd. In
truth, CISC chips in the real world outnumber RISC chips by a thousand to one or more. Fax machines,
microwave ovens, hand held calculators, VCRs, even automobiles all have microprocessors inside, and
such chips are almost universally CISC chips. The power and performance of RISC is simply
unnecessary in such applications. Even in computers, CISC-based systems outsell RISC machines by a
factor on the order of 20 to 1.

Micro-Ops

Many microprocessors that look like CISC chips and execute the classic Intel CISC instruction set are
actually RISC chips inside. Although chip makers seeking to clone Intel’s microprocessors were the first
to use such designs, Intel adopted the same strategy for its Pentium Pro microprocessor.
The basic technique involves converting the classic Intel instructions into RISC-style instructions to be
processed by the internal chip circuitry. Intel calls the internal RISC-like instructions micro-ops. The
term is often abbreviated as uops (strictly speaking, the initial "u" should be the Greek letter mu, an
abbreviation for micro) and pronounced you-ops. Other companies use slightly different terminology.
NexGen (now part of Advanced Micro Devices) used the term RISC86 instructions. AMD itself prefers
the term R-ops or ROPs.
By design, the micro-ops sidestep the primary shortcomings of the Intel instruction set by making the
encoding of all commands more uniform, converting all instructions to the same length for processing,
and eliminating arithmetic operations that directly change memory by loading memory data into registers
before processing.
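The load-execute-store decomposition described above can be sketched in a few lines. Everything here is illustrative: the instruction tuples, the `tmp` register, and the micro-op names are invented for the example and are not Intel's actual internal encoding, which is proprietary and undocumented.

```python
# Toy illustration of CISC-to-micro-op translation. A CISC instruction that
# operates directly on memory is split into fixed-format load / execute /
# store micro-ops, so arithmetic always happens register-to-register.

def decode(instruction):
    """Translate one CISC-style instruction into RISC-style micro-ops."""
    op, dest, src = instruction
    if dest.startswith("[") and dest.endswith("]"):   # memory destination
        addr = dest[1:-1]
        return [
            ("LOAD",  "tmp", addr),    # bring the memory operand into a register
            (op,      "tmp", src),     # perform the arithmetic on registers only
            ("STORE", addr,  "tmp"),   # write the result back to memory
        ]
    return [(op, dest, src)]           # register-only ops pass through unchanged

print(decode(("ADD", "[count]", "EAX")))  # one memory ADD becomes three micro-ops
```

Note that a register-to-register instruction passes through as a single micro-op, which is one reason simple code loses little to the translation step.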
The translation to RISC-like instructions allows the microprocessor to function internally as a RISC
engine. The code conversion occurs in hardware, completely invisible to your applications and out of the
control of programmers. Micro-ops are just another way that the modern microprocessor functions as a
magical black box—you simply dump in any old code, and answers pop out at amazing speed.

Very Long Instruction Words

Just as RISC is flowing into the product mainstream, a new idea is sharpening the leading edge. Very
Long Instruction Word technology at first appears to run against the RISC stream by using long,
complex instructions. In reality, VLIW is a refinement of RISC, meant to better take advantage of
superscalar microprocessors. Each very long instruction word is made from several RISC instructions. In
a typical implementation, eight 32-bit RISC instructions combine to make one instruction word.
Ordinarily, combining RISC instructions would add little to overall speed. As with RISC, the secret of
VLIW technology is in the software—the compiler that produces the final program code. The
instructions in the long word are chosen so that they execute at the same time (or as close to it as
possible) in parallel processing units in the superscalar microprocessor. The compiler chooses and
arranges instructions to match the needs of the superscalar processor as best as possible, essentially
taking the optimizing compiler one step further. In essence, the VLIW system takes advantage of
pre-processing in the compiler to make the final code and microprocessor more efficient.
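The compiler's bundling job described above can be modeled with a toy scheduler. This is a simplified sketch, not any real VLIW compiler's algorithm; the instruction format (a destination register plus a tuple of source registers) is invented for the example, and real compilers perform far more sophisticated dependence analysis and instruction reordering.

```python
# Sketch of VLIW-style instruction bundling: instructions are packed greedily
# into words of up to eight slots, but an instruction that reads a register
# written earlier in the same bundle must start a new bundle, because all
# slots in one very long instruction word execute in parallel.

def bundle(instructions, width=8):
    """Pack (dest, srcs) instructions into very long instruction words."""
    words, current, written = [], [], set()
    for dest, srcs in instructions:
        depends = any(s in written for s in srcs)
        if depends or len(current) == width:
            words.append(current)          # close the current bundle
            current, written = [], set()
        current.append((dest, srcs))
        written.add(dest)
    if current:
        words.append(current)
    return words

program = [
    ("r1", ("a", "b")),    # r1 = a + b
    ("r2", ("c", "d")),    # independent of r1: same bundle
    ("r3", ("r1", "r2")),  # depends on r1 and r2: forced into the next bundle
]
print(bundle(program))     # two bundles: [r1, r2] then [r3]
```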
VLIW technology also takes advantage of the wider bus connections of the latest generations of
microprocessors. Existing chips link to their support circuitry with 64 bit buses. Many have 128-bit
internal buses. The 256-bit very long instruction words push ahead the next step, enabling the
microprocessor to load several cycles of work in a single memory cycle.
No VLIW microprocessor systems are currently available. In fact, the only existing VLIW command
sets remain experimental. The next generation of microprocessors very likely will see the integration of
VLIW concepts.

Single Instruction, Multiple Data

In a quest to improve the performance of Intel microprocessors on common multimedia tasks, Intel’s
hardware and software engineers analyzed the operations multimedia programs most often required.
They then sought the most efficient way to enable their chips to carry out these operations. They
essentially worked to enhance the signal processing abilities of their general purpose microprocessors so
that they would be competitive with dedicated processors such as digital signal processor (DSP) chips.

They called the technology they developed Single Instruction, Multiple Data. This new class of
microprocessor instructions, SIMD, is the enabling element of Intel’s MultiMedia Extensions (MMX) to
its microprocessor command set.
As the name implies, SIMD allows one microprocessor instruction to operate across several bytes or
words (or even larger blocks of data). In the MMX scheme of things, the SIMD instructions are matched
to the 64-bit data buses of Intel’s Pentium and newer microprocessors. All data, whether it originates as
bytes, words, or 32-bit double-words gets packed into 64-bit form. Eight bytes, four words, or two
double-words get packed into a single 64-bit package that gets loaded into a 64-bit register in the
microprocessor. One microprocessor instruction then manipulates the entire 64-bit block.
Although the approach at first appears counter-intuitive, it improves the handling of common graphic
and audio data. In video processor applications, for example, it can trim the number of microprocessor
clock cycles for some operations by 50 percent or more.
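The packing arithmetic can be illustrated in a few lines. This is a teaching simulation of an MMX-style packed byte add with per-lane wraparound; the function name echoes the instruction mnemonic, but the real instruction performs all eight additions in hardware at once (and MMX also offers saturating variants not shown here).

```python
# Illustration of the SIMD idea: eight separate byte values are packed into one
# 64-bit quantity, and a single "instruction" (here, one function call) adds
# all eight lanes independently, with each lane wrapping modulo 256.

def pack_bytes(values):
    """Pack eight bytes (0-255) into a single 64-bit integer."""
    packed = 0
    for i, v in enumerate(values):
        packed |= (v & 0xFF) << (8 * i)
    return packed

def unpack_bytes(packed):
    """Recover the eight byte lanes from a 64-bit integer."""
    return [(packed >> (8 * i)) & 0xFF for i in range(8)]

def paddb(a, b):
    """Packed byte add: each 8-bit lane wraps independently."""
    result = 0
    for i in range(8):
        lane = (((a >> (8 * i)) + (b >> (8 * i))) & 0xFF) << (8 * i)
        result |= lane
    return result

x = pack_bytes([10, 20, 30, 40, 50, 60, 70, 250])
y = pack_bytes([1, 1, 1, 1, 1, 1, 1, 10])
print(unpack_bytes(paddb(x, y)))  # last lane wraps: 250 + 10 -> 4
```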

Electrical Characteristics

At its heart a microprocessor is an electronic device. No matter what logical design it uses—CISC,
RISC, VLIW, SIMD, or whatever—it uses logic gates made from semiconductor circuitry to carry out its
operations. The electronic basis of the microprocessor has important ramifications in the construction
and operation of chips.
The free lunch principle (that is, there is none) tells us that every operation has its cost. Even the quick
electronic thinking of a microprocessor takes a toll. The thinking involves the switching of state of tiny
transistors, and each state change consumes a bit of electrical power, converting it to heat. The
transistors are so small that the process generates a minuscule amount of heat, but with millions of them
in a single chip, the heat adds up. Modern microprocessors generate so much heat that keeping them cool
is a major concern in their design.
Heat is wasted power, and power is at a premium in notebook computers. Consequently, microprocessor
designers, with an eye toward prolonging battery life, have adopted a number of strategies to cut power
consumption in portable applications.

Thermal Constraints

The tight packing of circuits on chips makes heat a major issue in their design and operation. Heat is the
enemy of the semiconductor because it can destroy the delicate crystal structure of a chip. If a chip gets
too hot, it will be irrevocably destroyed. Packing circuits tightly concentrates the heat they generate, and
the small size of the individual circuit components makes them more vulnerable to damage.
Heat can cause problems more subtle than simple destruction. Because the conductivity of
semiconductor circuits also varies with temperature, the effective switching speed of transistors and
logic gates also changes when chips get too hot or too cold. Although this temperature-induced speed
change does not alter how fast a microprocessor can compute (the chip must stay locked to the system
clock at all times), it can affect the relative timing between signals inside the microprocessor. Should the
timing get too far off, a microprocessor might make a mistake, with the inevitable result of crashing your
system or contaminating your data. All chips have rated temperature ranges within which they are
guaranteed to operate without such timing errors.


Because chips generate more heat as speed increases, they can produce heat faster than it can radiate
away. This heat build-up can alter the timing of the internal signals of the chip so drastically that the
microprocessor will stop working and—as if you couldn’t guess—cause your system to crash. To avoid
such problems, computer manufacturers often attach heatsinks to microprocessors and other
semiconductor components to aid in their cooling.
A heatsink is simply a metal extrusion that increases the surface area from which heat can radiate from a
microprocessor or other heat-generating circuit element. Most heatsinks have several fins, rows of pins,
or some geometry that increases their surface area. Heatsinks are usually made from aluminum because
that metal is one of the better thermal conductors, enabling the heat from the microprocessor to quickly
spread across the heatsink.
Heatsinks provide passive cooling, called that because it requires no power-using mechanism to perform
its cooling. Heatsinks work by convection, transferring heat to the air that circulates past the heatsink.
Air circulates around the heatsink because the warmed air rises away from the heatsink and cooler air
flows in to replace it.
In contrast, active cooling involves some kind of mechanical or electrical assistance in removing heat. The
most common form of active cooling is a fan, which blows a greater volume of air past the heatsink than
would be possible with convection alone.
As a by-product of a microprocessor’s thinking, heat is waste. The energy that raises the temperature of
the microprocessor does no useful work. But it does drain the energy source that’s supplying the
microprocessor.
Some chips run so hot that their manufacturers integrate active cooling with the chip itself. For example,
Intel’s Pentium Overdrive upgrade chips have a small, built-in plastic fan. The problem with such
integrated active cooling is that the fan can fail and the chip overheat. Intel’s solution to this problem is
to slow the chip to a modest speed (typically about 25 MHz) upon the failure of the fan, cutting heat
production and helping preserve the chip against thermal damage. Unfortunately, the chip does not warn
you when it slows down, and the response of your PC may slow noticeably for no apparent reason.
The makers of notebook PCs face another challenge in efficiently managing the cooling of their
computers. Using a fan to cool a notebook system is problematic. The fan consumes substantial energy,
which trims battery life. Moreover, the heat generated by the fan motor itself can be a significant part of
the thermal load of the system. Most designers of notebook machines have turned to more innovative
passive thermal controls such as heat pipes and using the entire chassis of the computer as a heatsink.

Operating Voltages

In desktop computers, overheating rather than excess electrical consumption is the major power concern.
Even the most wasteful of microprocessors uses far less power than an ordinary light bulb. The most that
any PC-compatible microprocessor consumes is about nine watts, hardly more than a night light and of
little concern when the power grid supplying your PC has megawatts at its disposal.
If you switch to battery power, however, every last milliwatt is important. The more power used by a
PC, the shorter the time its battery can power the system or the heavier the batteries it will need to
achieve a given life between charges. Every degree a microprocessor raises its case temperature clips
minutes from its battery run-time.


Battery-powered notebooks and sub-notebook computers consequently caused microprocessor engineers
to do a quick about-face. Where once they were content to use bigger and bigger heatsinks, fans, and
refrigerators to keep their chips cool, today they focus on reducing temperatures and wasted power at the
source.
One way to cut power requirements is to make the design elements of a chip smaller. Smaller digital
circuits require less power. But shrinking chips is not an option; microprocessors are invariably designed
to be as small as possible with the prevailing technology.
To further trim the power required by microprocessors to make them more amenable to battery
operation, engineers have come up with two new design twists: low voltage operation and system
management mode. Although founded on separate ideas, both are often used together to minimize
microprocessor power consumption. Most new microprocessor designs will likely incorporate both
technologies. In fact, some older microprocessor designs have been retrofitted with such power-saving
technologies (for example, the SL-Enhanced series of Intel 486 chips and the portable versions of the
Pentium.)
Since the very beginning of the transistor-transistor logic family of digital circuits—the design
technology that later blossomed into the microprocessor—digital logic has operated with a supply
voltage of five volts. That level is essentially arbitrary. Almost any voltage would work. But five-volt
technology offers some practical advantages. It’s low enough to be both safe and frugal with power
needs but high enough to avoid noise and allow for several diode drops, the inevitable reduction of
voltage that occurs when a current flows across a semiconductor junction.
Every semiconductor junction, which essentially forms a diode, reduces, or drops, the voltage across
it. Silicon junctions impose a diode drop of about 0.7 volts, and there may be one or more such
junctions in a logic gate. Other materials impose smaller drops—that of germanium, for example, is 0.4
volts—but the drop is unavoidable.
There’s nothing magical about five volts. Reducing the voltage used by logic circuits dramatically
reduces power consumption because power consumption in electrical circuits increases by the square of
the voltage. That is, doubling the voltage of a circuit increases the power it uses by fourfold. Reducing
the voltage by one-half reduces power consumption by three-quarters—providing, of course, that the
circuit will continue to operate at the lower voltage.
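The square-law relationship quoted above is easy to verify numerically. The sketch below assumes a simple resistive load, which is a simplification of real CMOS behavior, but it reproduces the proportions the text describes.

```python
# Power in a resistive circuit is proportional to the square of the supply
# voltage, so relative power is simply the voltage ratio squared.

def relative_power(v_new, v_old=5.0):
    """Power at v_new as a fraction of power at v_old (P proportional to V**2)."""
    return (v_new / v_old) ** 2

print(relative_power(2.5))   # half the voltage -> one quarter the power
print(relative_power(3.3))   # 3.3-volt operation -> roughly 44% of 5-volt power
```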
Microprocessor designers have begun to exploit the potential of lower voltage operation by creating new
microprocessors that scorn traditional five-volt operation. Advanced Micro Devices developed the first
of this new generation of microprocessors in 1992 as a version of the 386 microprocessor that operated
at the STD level of 3.3 volts. Other chip makers followed with their own low voltage products. Initially,
nearly all of them were designed for 3.3-volt operation. For example, the first generation of Intel’s
Pentium chips and the 486DX4 series were all designed for 3.3-volt power sources. The latest Pentiums
designed for portable applications operate at 2.9 volts or lower. Pentium Pro chips also operate at this
lower voltage level.
Most bus architectures only support the 3.3 volt range because it does not require drastic design changes.
Engineers chose the 3.3-volt level because its signals remain compatible with those of traditional 5-volt
TTL circuits but are low enough to halve power consumption. The new 3.3-volt microprocessors will
work with conventional 5-volt support chips (a PC will have to supply 3.3 volts to the microprocessor
and 5 volts to the rest of its circuits), but the real energy savings will come when the rest of the circuits
in the PC (support chips and memory) also operate at the lower voltage level. As with microprocessors,
these semiconductors will need to be entirely redesigned for low voltage operation, and chip makers
have already taken up the challenge.


Some Intel microprocessors require the compromise VRE operating voltage of 3.5 volts. Although it
saves power and reduces heat when compared to 5.0 volt operation, it is obviously not as cool as 3.3 volt
operation. The slightly higher voltage does allow a greater distinction between high and low logic
states, giving cleaner signals at high speeds. Many 166 MHz Pentium chips require VRE voltage level
operation.

Voltage Reduction Technology

To avoid compatibility problems with standard 3.3 volt bus and support circuit designs, Intel endowed its
latest portable Pentium chips with what it calls Voltage Reduction Technology. Put simply, these chips
have internal voltage regulators that reduce the 3.3 volt supply down to 2.9 volts for the operation of the
internal logic of the chips. This 0.4 volt reduction results in energy savings of about 30 percent over true
3.3 volt designs. In notebook PCs, these chips allow for commensurately longer battery life or greater
performance. As Table 3.1 demonstrates, a 2.9 volt Pentium processor operating at 90 MHz delivers
about 20 percent more speed than a 3.3 volt chip operating at 75 MHz with no reduction in battery life.

Table 3.1. Pentium Processor Mobile Power Specifications

Processor Frequency    Operating Voltage    Maximum Power*    Typical Power
75 MHz                 3.3 Volts            8.0/6.5 Watts     3.0—4.0 Watts
90 MHz                 2.9 Volts            7.3/5.5 Watts     2.5—3.5 Watts
75 MHz                 2.9 Volts            6.0/4.5 Watts     2.0—3.0 Watts

*theoretical maximum power/measured maximum power

Extremely Low Voltage Semiconductors

From a power consumption standpoint, the lower the operating voltage the better. Chip makers are
exploring new technologies that promise even more dramatic reductions in operating voltage.
In February 1996, at the International Solid State Circuits Conference, held in San Francisco, Toshiba
America Electronic Components, Inc. unveiled a dramatic new circuit technology it termed Extremely
Low Voltage Semiconductors. According to Toshiba, the new design allows integrated circuits to operate
at a level of only 0.5 volt. This factor-of-ten reduction effectively reduces power requirements to one
hundredth of what would be needed at the standard 5.0 volt TTL level. The design achieves its low
voltage capabilities by allowing the chip maker to individually control the threshold voltage of each
transistor in a chip. The threshold voltage is the level at which a transistor switches from off to on, and
current circuit designs require all transistors in a chip to operate at a common threshold voltage.
Currently, no commercial products use Extremely Low Voltage Semiconductor technology. Noise and
interference concerns make applying it to overall PC architectures problematic, but adapting circuits
using designs similar to Intel’s Voltage Reduction Technology promises dramatic reductions in the
power needs of notebook PCs.

Power Management

Traditionally, microprocessors have been designed to be like the Coast Guard, always prepared (the
U.S. Coast Guard motto is semper paratus). They keep all of their circuits not only constantly ready but
operating at full potential, whether they are being used or not. From an energy usage viewpoint, that’s
like burning
all the lights in your entire house while you sit quietly in the living room reading a book. You might
venture into some other room, so you keep those lights burning—and keep the local electric company in
business.
Most people (at least, most frugally minded people) switch on the lights only in the rooms in which they
are roaming, keeping other lights off to minimize the waste of electricity. Newer microprocessors are
designed to do the same thing, switch off portions of their circuitry and even some of the circuits in your
PC external to the microprocessor when they are unneeded. When, for example, you’re running a
program that’s just waiting around for you to press a key, the microprocessor could switch most of its
calculating circuits off until it receives an interrupt from the keyboard controller. This
use-only-what’s-needed feature is called system management mode. It was pioneered by Intel’s 386SL
microprocessor and has become a standard feature of most newer chips.
In addition, many microprocessors are able to operate at a variety of speeds. Slowing a chip down
reduces its power consumption (it also reduces performance). Many current chips enable their host
computers to force a speed reduction by lowering the clock speed. Microprocessors that use static logic
designs are able to stop operating entirely without risking their register contents, enabling a complete
system shutdown to save power. Later they can be reactivated without losing a beat (or byte). The
electrical charges in ordinary, dynamic designs drain off faster than they get restored if the dynamic
circuit slows too much.
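Both strategies can be summarized with the standard first-order model of CMOS dynamic power, P = C x V^2 x f: slowing the clock cuts power in proportion, and lowering the voltage cuts it by the square. The capacitance value below is an arbitrary placeholder, so the sketch illustrates the proportions involved rather than modeling any particular chip.

```python
# First-order model of CMOS switching power: P = C * V**2 * f.

def dynamic_power(capacitance, voltage, frequency):
    """Switching power dissipated by a CMOS circuit, in watts."""
    return capacitance * voltage ** 2 * frequency

full   = dynamic_power(1e-9, 3.3, 90e6)   # full speed at 3.3 volts
slowed = dynamic_power(1e-9, 3.3, 45e6)   # clock halved: exactly half the power
low_v  = dynamic_power(1e-9, 2.9, 90e6)   # voltage lowered: about 23 percent less
print(full, slowed, low_v)
```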

Physical Matters

The working part of a microprocessor is exactly what the nickname "chip" implies: a small flake of a
silicon crystal no larger than a postage stamp. Although silicon is a fairly robust material with moderate
physical strength, it is sensitive to chemical contamination. After all, semiconductors are grown in
precisely controlled atmospheres, the chemical content of which affects the operating properties of the
final chip. To prevent oxygen and contaminants in the atmosphere from adversely affecting the precision
engineered silicon, the chip itself must be sealed away. The first semiconductors, transistors, were
hermetically sealed in tiny metal cans.
The art and science of semiconductor packaging has advanced since those early days. Modern ICs are
often surrounded in epoxy plastic, an inexpensive material that can be easily molded to the proper shape.
Unfortunately, microprocessors can get very hot, sometimes too hot for plastics to safely contain. Most
powerful modern microprocessors are consequently cased in ceramic materials that are fused together at
high temperatures. Older, cooler chips reside in plastic. The most recent trend in chip packaging is the
development of inexpensive tape-based packages optimized for automated assembly of circuit boards.

Packaging

The most primitive of microprocessors—that is, those of the early generation that had neither substantial
signal nor power requirements—fit in the same style housing popular for other integrated circuits, the
infamous dual in-line pin or DIP package. The only problem chips in DIPs face is getting signals in and
out. Even ancient 8-bit chips require more connections than the 14 to 20 that fit on normal-size DIP
packages. Consequently, most DIP microprocessors have housings with 40 or more pins.
The typical microprocessor DIP is a black epoxy plastic rectangle about two inches long and half an inch
wide. Some more powerful DIP chips use ceramic cases with metal seals over the location where the
silicon itself fits. A row of connecting pins line both of the long sides of the chip package like the legs of
a centipede.
The most important of these legs is pin number one, which helps determine the proper orientation for
putting the chip in its socket. The number one pin of the two rows terminates the row of pins that’s on
the same end of the chip as its orientation notch, on the left row when viewed from the top of the chip
(Figure 3.1).
Figure 3.1 An 80286 DIP chip showing pin one at the lower left.
The DIP package is far from ideal for a number of reasons. Adding more connections, for example,
makes for an ungainly chip. A centipede microprocessor would be a beast measuring a full five inches
long. Not only would such a critter be hard to fit onto a reasonably sized circuit board, it would require
that signals travel substantially farther to reach the end pins than those in the center. At modern
operating frequencies, that difference in distance can amount to a substantial fraction of a clock cycle,
potentially putting the pins out of sync.
Modern chip packages are compact squares that avoid these problems. At least four separate styles of
package have been developed to accommodate the needs of the latest microprocessors.
Today the most common is the Pin Grid Array or PGA, a square package that varies in size with the
number of pins that it must accommodate. Recent microprocessors are about two inches square.
Sixteen-bit chips typically have two rows of pins parallel to each edge of the chip and dropping down
from its bottom, a total of about 68 pins. Processors with 32-bit bus connections have between 112 and
168 pins arranged similarly but in three rows. Chips with 64-bit connection potential may have up to 321
pins in four rows arrayed as one square inside another.
In any case, the pins are laid out as if on a checkerboard, all evenly spaced, with the
central block of pins (and sometimes those at each of the four corners) eliminated. Again, pin number
one is specially marked for orientation purposes. The ferrule through which the pin leaves the ceramic
case is often square for pin one and round for the others. In addition, the corner of the chip that
corresponds to the location of pin one is typically chopped off (see Figure 3.2).
Figure 3.2 Pin-grid array socket (with PGA chip).
To fit the larger number of pins used by the latest Pentium and Pentium Pro chips into a reasonable
space, Intel rearranged the pins, staggering them so that they can fit closer together. The result is a
staggered pin grid array package.
Each of these PGA and SPGA packages has its own matched socket. Some chips fit different
sockets. For example, a chip without a keying pin fits into a socket with a keying pin. The socket used
for the microprocessor on your PC’s motherboard determines which upgrades you can use in your PC.
These issues are addressed in the "Sockets and Upgrading" section, later in this chapter.
The Pentium Pro adds another twist to the classic PGA package. Its twofold design—separate CPU and
cache chips—uses a unique package that offers a separate chamber for each chip. The result is called a
Multi-Cavity Module or MCM. Although the Pentium Pro’s MCM uses pins in parallel rows much like a
normal PGA package, the MCM is rectangular, and the numbers of pin rows on the long and short sides
differ (four on the long, two on the short). In addition, on one side of the module, an extra pin
sprouts between each square of four. The ceramic-based MCM used by the Pentium Pro provides
suitable packaging for the chip and enough space for the 387 pins needed by the complex processor but,
on the downside, is expensive (reputedly as much as $30 each). Figure 3.3 illustrates a Pentium Pro
MCM.
Figure 3.3 Multi-cavity module PGA package.
Pins such as those used by the PGA, SPGA, and MCM packages are prone to damage and relatively
expensive to fabricate, so chip makers have developed pinless packages for microprocessors. The first of
these to find general use was the Leadless Chip Carrier, or LCC, socket. Instead of pins, this style of
package has contact pads on one of its surfaces. The pads are plated with gold to avoid corrosion or
oxidation that would impede the flow of the minute electrical signals used by the chip (Figure 3.4). The
pads are designed to contact special springy mating contacts in a special socket. Once installed, the chip
itself may be hidden in the socket, under a heatsink, or perhaps only the top of the chip may be visible,
framed by the four sides of the socket.
Figure 3.4 Leadless Chip Carrier microprocessor, top and bottom views.
In an LCC socket, the chip is held in place by a pivoting metal wire. You pull the wire off the chip, and
the chip pops up. Hold an LCC chip in your hand and it resembles a small ceramic tile. Its bottom edge
is dotted with bright flecks of gold—the chip’s contact pads.
PGA and LCC packages are made from a ceramic material because the rigid material provides the
structural strength needed by the chip. To avoid the higher cost of ceramics, chip makers created an
alternate design that could be fabricated from plastic. Called the Plastic Leaded Chip Carrier, or PLCC,
this package has another advantage besides cost: a special versatility. It can be soldered directly to a
printed circuit board using surface-mount techniques. Using this package, the computer manufacturer
can save the cost of a socket while improving the reliability of the system. (Remember, connections like
those in chip sockets are the least reliable part of a computer system.)
The PLCC chip can also be used in a socket. In this case, the socket surrounds the chip. The leads from
the chip are bent down around its perimeter and slide against mating contacts inside the inner edge of the
socket’s perimeter. A PLCC chip is rather easy to press into its socket but difficult to pop out—you must
carefully wedge underneath the chip and lever it out.
New microprocessors with low thermal output sometimes use a housing designed to be soldered down,
the Plastic Quad Flat Package or PQFP, sometimes called simply the "quad flat pack" because the chips
are flat (they fit flat against the circuit board) and they have four sides (making them a quadrilateral). See
Figure 3.5.
Figure 3.5 Plastic Quad Flat Package microprocessor.
Manufacturers like this package because of its low cost and because chips using it can be installed in
exactly the same manner as other modern surface-mount components. However, the quad flat pack is
suitable only for lower power chips because soldered connections can be stressed by microprocessors that
get too hot. As with other chip packages, proper orientation of a quad flat pack is indicated by a notch or
depression near pin number one.
Another new package design takes the advantage of the quad flat pack a step further. Called the Tape
Carrier Package, it looks like a piece of photographic film with a square pregnant bulge in the middle
once it’s stripped of its shipping protection. Thin, gold-plated leads project from each edge of the
compact package that measures just over one inch wide (26 mm) and thinner than a dime (1 mm).
Perfect for weight-conscious portable PCs, a Pentium processor in a TCP package weighs less than a
gram compared to about 50 grams for its PGA equivalent. Figure 3.6 illustrates a typical TCP
microprocessor.
Figure 3.6 Tape Carrier Package microprocessor.
The TCP package starts with a substrate of polyimide film laminated to copper foil. The foil is etched to
form two contact patterns, one that will engage with tabs on the silicon chip of the microprocessor and
others that engage with the system board of a PC. After etching, the traces are gold plated, as are
matching tabs on the silicon chip. The chip is placed on the film, and the gold-plated tabs and traces
bonded together. The silicon chip is then encapsulated with polyimide siloxane resin to protect it.
Multiple microprocessors can be encapsulated individually or at regular intervals along a long length of
tape, which is delivered to PC makers on a spool. This fabrication process is called Tape Automated
Bonding and is regularly used to make a variety of electronic components, including most LCD panels.
TCP microprocessors must be installed using special tools. During automated assembly of a circuit
board, the assembly machine cuts the individual TCP microprocessors from the individual protective
carrier or tape spool. It then shapes the etched leads from sticking straight out from the sides of the
package into a Z-shape that extends below the bottom of the package so that they can contact the circuit
board. A special paste applied to the circuit board physically and thermally bonds the chip to the
pre-defined mounting area (which may include a built-in heatsink) while a hot bar clamps down on the
leads to solder them to the contacts on the board.
The latest chip package uses the PGA layout but eliminates the most vulnerable and expensive part of
the design, the pins themselves. Instead it substitutes precision-formed globs of solder that can mate with
socket contacts or be soldered directly to a circuit board using surface-mount technology. Because the
solder contacts start out as tiny balls but use a variation on the PGA layout, the package is termed
solder-ball grid array. The process of forming the solder balls is so precise that chip manufacturers can
space the resulting contacts more closely than when using pins. Consequently, SBGA is winning favor
for the latest chips that have 300 or more contacts.
Intel’s latest innovation in microprocessor packaging is to pre-install chips on modules that slide into
sockets like ordinary expansion boards. As with expansion boards, the modules have an edge connector
with a single row of contacts. Consequently, Intel calls this package, first introduced with the Pentium II,
the Single Edge Contact cartridge or SEC cartridge. This modular design has several important benefits.
It allows you to easily install or upgrade microprocessors, which, in turn, means lower support cost for
Intel and a larger potential upgrade market. It also allows Intel to use any kind of chip packaging it wants
inside the cartridge, for example, inexpensive tape carrier designs. Although the SEC cartridge adds a
second set of connections (one between chip and module, one between the module and your PC), edge
connectors are substantially less expensive to fabricate than chip packages with multiple pins so the
overall cost of manufacturing can be lower.
The package that the chip is housed in has no effect on its performance. It can, however, be important
when you want to replace or upgrade your microprocessor with a new chip or upgrade card. Many of

http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/wrh03.htm (28 de 36) [23/06/2000 04:49:06 p.m.]


Winn L. Rosch Hardware Bible, Electronic Edition, Chapter 3

these enhancement products require that you replace your system’s microprocessor with a new chip or
adapter cable that links to a circuit board. If you want the upgrade or a replacement part to fit on your
motherboard, you may have to specify which package your PC uses for its microprocessor.

Location

Ordinarily, you should have no need to see or touch the microprocessor in your PC. As long as your
computer works (and considering the reliability that most have demonstrated, that should be a long, long
time), you really need have no concern about your microprocessor except to know that it’s inside your
computer doing its job. However, some modern system upgrades require that you plug a new
microprocessor into your system or even replace the one that you have.
Before you can replace your microprocessor, you must identify which chip it is. That’s easy. As a
general rule, all you have to look for is the largest integrated circuit chip on your computer’s
motherboard. Almost invariably, it will be the microprocessor. That’s only fitting because the
microprocessor is also the most important chip in the computer. In modern PCs, the microprocessor uses
a large, square package.
If you find several large chips on your system board, odds are one of them is the microprocessor. Others
may be equally big because they have elaborate functions and need to make many connections with the
system board, which means they need relatively large packages to accommodate their many leads.
Almost universally, the microprocessor chip will be installed in a socket (which may or may not be
visible); most support chips will be soldered directly to the system board. Sometimes the microprocessor
will be hidden under a heatsink, which you can identify by its heat-radiating fins.
The appearance of each different microprocessor depends on the package it uses, but all can be identified
by their model number emblazoned on top. You’ll have to sort through a few lines of numbers to find the
key identifying signature, but the model designations of most chips are readily sorted out.

Identification

Picking out a microprocessor in a PC is like shooting fish in a barrel—the biggest is the one you’re most
likely to hit. The biggest chip in a PC is likely the microprocessor. Identifying the brand, model, and
other characteristics is another matter, one that requires a knowledge of secret codes.
Every manufacturer has its own designation for its microprocessors. Fortunately, most follow a few
industry conventions in creating their particular nomenclature. The microprocessor model number is
typically a mish-mash of numbers and letters. Somewhere among them are numbers matching Intel’s
once ubiquitous identification system: 286, 386, or 486. Pentium, being a trademark, cannot be used by
other manufacturers, so they have taken the steps Intel refused to, adding 586 and now 686 to their
product lines.
Examine any microprocessor, and you’re likely to find the model designation boldly emblazoned on it.
Figure 3.7 shows how Advanced Micro Devices identifies its chips.
Figure 3.7 AMD microprocessor identification.


Most chip makers use similar codes for their product options. For example, the letter "N" is nearly
universal for a product with a "commercial" rating, generally one designed to operate within the
temperature range of 0 to 30 degrees Celsius. Intel’s DX2 and DX4 designations for clock doubling and
tripling are also commonly used, as is the speed rating in megahertz. Other options may vary with the
chip maker.

Sockets and Upgrading

Ordinarily you don’t have to deal with microprocessor sockets unless you’re curious and want to pull out
the chip, hold it in your hand, and watch a static discharge turn a $300 circuit into epoxy-encapsulated
sand. Choose to upgrade your PC to a new and better microprocessor, and you’ll tangle with the details
of socketry, particularly if you want to improve your Pentium.
Intel recognizes nine different microprocessor sockets for its 486 and newer microprocessors. Table 3.2
summarizes these socket types, the chips that use them, and the upgrades appropriate to them.

Table 3.2. Sockets for Intel Microprocessors and Upgrades

Socket Number Pins Layout Voltage Microprocessor OverDrives


0 168 In-line 5V 486DX DX2, DX4
1 169 In-line 5V 486DX, 486SX DX2, DX4
2 238 In-line 5V 486DX, 486SX, DX2 DX2, DX4, Pentium
3 237 In-line 3V or 5V 486DX, 486SX, DX2, DX4 DX2, DX4, Pentium
4 273 In-line 5V 60 or 66 MHz Pentium Pentium
5 320 Staggered 3V Other Pentium Pentium
6 235 In-line 3V DX4 Pentium
7 321 Staggered 3V Other Pentium Pentium
8 387 Staggered 3V Pentium Pro Pentium Pro

The first 486-based PCs used a standard PGA socket. Although Intel does not officially designate it, for
consistency we will call this ground-level socket by the designation Socket 0. Intel’s original plans made
Socket 1 the official 486 upgrade socket. However, to open the market to owners of PCs made before
Socket 1 became standard, Intel developed a line of OverDrive upgrades to match Socket 0.
Socket 2 is a superset of socket 1. The inner 169 pins match the Socket 1 standard, so a 486
microprocessor simply plugs into the center of the socket, leaving the outermost row of pin holes open.
486-level OverDrives plug into this socket in exactly the same way. The outer row of pinholes
accommodates the wider bus of the Pentium OverDrive (P24T) upgrade.
Socket 3 fits the same mold as Socket 2 but rearranges the keying pins, omitting one. The pin
rearrangement helps key the socket so that 3V microprocessors cannot accidentally (and fatally) get
plugged into 5 volt PCs. The socket accommodates all the same upgrades as Socket 2 in addition to 3
volt chips. Socket 6 follows the same basic design but is used only in newer PCs equipped with the
486DX4 microprocessor.
Intel introduced Socket 4 to accommodate the needs of the wider interface of the initial Pentium release.
No other Intel original equipment processors use this socket, although Intel has designed Pentium
OverDrive chips to fit it.
The second generation Pentiums, those operating at speeds of 75 MHz and higher, adopted an entirely
new socket design based on Staggered Pin Grid Array technology. Socket 5 introduced this design. Intel
added another keying pin to this socket to produce Socket 7, the company’s preferred socket for all new
Pentium systems. Socket 7 allows the most OverDrive upgrade options, although OverDrive chips are
available to match both Sockets 5 and 7.
Socket 8 accepts the Pentium Pro chip and Intel’s planned Pentium Pro OverDrive upgrades.

Commercial Products

Although PCs are the most visible application of microprocessor technology, they represent only a
fraction of the total microprocessor market. Nearly every consumer electronic device now has a
microprocessor inside, nearly every new automobile relies on one or more microprocessors to control its
engine; even many children’s toys owe their appeal to microprocessor technology. The vast majority of
these products use chips obscure to all but their designers.
In PCs, on the other hand, the microprocessor is the centerpiece. Chip choice is one of the chief guides in
selecting a computer. Nowhere are microprocessors more visible. You buy a PC based on the type and
speed of microprocessor it contains. Today, the minimal system holds at least a Pentium chip or one of
its non-Intel clones such as the Cyrix 686. To avoid obsolescence, you’ll probably want an
MMX-empowered Pentium, Pentium Pro, or its equivalent, as they become available.
No matter the designation or origin, all microprocessors in today’s PCs share a unique characteristic and
heritage. All are direct descendants of the very first microprocessor, the Intel 4004. The instruction set
used by all current PC microprocessors is rooted in the instructions selected for the 4004, with enough
elaboration over the years that you’d never suspect the link to today’s Pentiums without a history lesson.
This view of microprocessors is necessarily historic. Today’s fastest chips must abide by design
decisions made more than a quarter century ago. The history of Intel microprocessors has been one of
adding new features, each of which is a combination Band-Aid and bridge: A Band-Aid to patch
problems inherent in the historic design, a bridge between that history and the latest (and most powerful)
microprocessor design ideals. The steady progress in the power of Intel microprocessors is a testament to
the tremendous resources Intel has devoted to the development of its products.
We’ll begin by looking at Intel’s own microprocessors and their evolution. Next we will examine the
efforts of competitors to develop chips compatible with the Intel instruction set.

Intel Microprocessors

The history of microprocessor development has been mostly a matter of increasing a few key numbers. With
each new generation of microprocessor, the number and size of its registers increase, and data and
address buses become wider. As a result, microprocessors and the personal computers made from them
have become increasingly powerful.
Today, Intel Corporation, the largest maker of microprocessors, has moved into the role of the largest
independent maker of semiconductors in the world. Depending on your viewpoint, you could credit its
dominance of the industry to hard work, astute planning, corporate predation, or cosmic coincidence. But
the root of the success is undeniable. Credit for the invention of the microprocessor goes to Intel.
Over the years since the invention, the company has pushed the limits of technology to keep ahead of its
competition. Its designs—all direct descendants of its original creation—are now the only CISC
microprocessors used in PCs. And yet, chance has played a role in Intel’s fantastic growth as well. The
company probably would not be on top of the industry had not a handful of engineers at IBM thought
one of Intel’s products offered a tenuous marketing advantage over the other chips then available. (See
Table 3.3.)

Table 3.3. Intel Microprocessor Time Line

Chip Intro date MIPS (est.) Int. bus width Ext. bus width Transistors Design rules (microns) Memory Ext. clock (MHz) Int. clock (MHz) Int. FPU?
4004 Nov-71 0.06 4 4 2300 10.0 640 bytes 0.108 0.108 NO
8008 Apr-72 0.06 8 8 3500 10.0 16K 0.2 0.2 NO
8080 Apr-74 0.64 8 8 6000 6.0 64K 2 2 NO
8085 Mar-76 0.37 8 8 6500 3.0 64K 5 5 NO
8086 Jun-78 0.33 16 16 29,000 3.0 1MB 5 5 NO
0.66 16 16 29,000 3.0 1MB 8 8 NO
0.75 16 16 29,000 3.0 1MB 10 10 NO
8088 Jun-79 0.33 16 8 29,000 3.0 1MB 5 5 NO
0.75 16 8 29,000 3.0 1MB 8 8 NO
80286 Feb-82 1.2 16 16 134,000 1.5 16MB 8 8 NO
1.5 16 16 134,000 1.5 16MB 10 10 NO
1.66 16 16 134,000 1.5 16MB 12 12 NO
386DX Nov-85 5.5 32 32 275,000 1.5 4GB 16 16 NO
Feb-87 6.5 32 32 275,000 1.5 4GB 20 20 NO
Apr-88 8.5 32 32 275,000 1.5 4GB 25 25 NO
Apr-89 11.4 32 32 275,000 1.5 4GB 33 33 NO
386SX Jun-88 2.5 32 16 275,000 1.5 4GB 16 16 NO
Jan-89 2.5 32 16 275,000 1.5 4GB 20 20 NO
2.7 32 16 275,000 1.5 4GB 25 25 NO
2.9 32 16 275,000 1.5 4GB 33 33 NO
386SL Oct-90 4.2 32 16 855,000 1.0 32MB 20 20 NO
Sep-91 5.3 32 16 855,000 1.0 32MB 25 25 NO
486DX Apr-89 20 32 32 1,200,000 1.0 4GB 25 25 YES
May-90 27 32 32 1,200,000 1.0 4GB 33 33 YES
Jun-91 41 32 32 1,200,000 0.8 4GB 50 50 YES
486SX Sep-91 13 32 32 1,185,000 1.0 4GB 16 16 NO
Sep-91 16.5 32 32 1,185,000 1.0 4GB 20 20 NO
Sep-91 20 32 32 1,185,000 1.0 4GB 25 25 NO
Sep-92 27 32 32 900,000 0.8 4GB 33 33 YES
486DX2 Mar-92 41 32 32 1,200,000 0.8 4GB 25 50 YES
Aug-92 54 32 32 1,200,000 0.8 4GB 33 66 YES
486SL Nov-92 15.4 32 32 1,400,000 0.8 64MB 20 20 YES
19 32 32 1,400,000 0.8 64MB 25 25 YES
25 32 32 1,400,000 0.8 64MB 33 33 YES
486DX4 Mar-94 60 32 32 1,200,000 0.6 4GB 25 75 YES
81 32 32 1,200,000 0.6 4GB 33 100 YES
Pentium P5 Mar-93 100 64 32 3,100,000 0.8 4GB 60 60 YES
112 64 32 3,100,000 0.8 4GB 66 66 YES
Pentium P54C Mar-94 150 64 32 3,100,000 0.6 4GB 60 90 YES
168 64 32 3,100,000 0.6 4GB 66 100 YES
225 64 32 3,100,000 0.35 4GB 66 133 YES
Jan-96 255 64 32 3,100,000 0.35 4GB 60 150 YES
278 64 32 3,100,000 0.35 4GB 66 166 YES
Jun-96 336 64 32 3,100,000 0.35 4GB 66 200 YES
Pentium P55C Jan-97 278 64 32 4,500,000 0.35 4GB 66 166 MMX
Jan-97 336 64 32 4,500,000 0.35 4GB 66 200 MMX
Pentium Pro Nov-95 337 64 32 5,500,000 0.5 4GB 66 150 YES
373 64 32 5,500,000 0.35 4GB 66 166 YES
404 64 32 5,500,000 0.35 4GB 66 180 YES
450 64 32 5,500,000 0.35 4GB 66 200 YES

The history of the microprocessor stretches back to a 1969 request to Intel from a now-defunct Japanese
calculator company, Busicom. The original plan was to build a series of calculators, each one different
and each requiring a custom integrated circuit. Using conventional IC technology, the project would
have required the design of 12 different chips. The small volumes of each design would have made
development costs prohibitive.


Intel engineer Marcian E. (Ted) Hoff had a better idea, one that could slash the necessary design work.
Instead of a collection of individually tailored circuits, he envisioned creating one general purpose
device that would satisfy the needs of all the calculators. His approach worked. The result was the first
general purpose microprocessor, the Intel 4004.
The chip was a success. Not only did it usher in the age of low cost calculators, it also gave designers a
single solid-state programmable device for the first time. Instead of designing the digital
decision-making circuits in products from scratch, developers could buy an off-the-shelf component and
tailor it to their needs simply by writing the appropriate program.

The 4004 Family

The 4004 was first introduced to the marketplace in 1971. As might be deduced from its manufacturer’s
designation, 4004, this ground breaking chip had registers capable of handling four bits at a time through
a four-bit bus. Puny by today’s standards, those four bits—enough to code 16 symbols including all
numbers from zero to nine as well as operators—were just useful enough to make calculations. The chip
could add, subtract, and multiply just as capably (but hardly as fast) as the much larger computers of the
time. It was designed to run at 108 KHz (that’s about 1/10 megahertz).
One major difference divided the capabilities of the 4004 from the brains in real computers. Larger
computers worked not only with numbers but also with alphabetic symbols and text. Handling these
larger symbol sets was something beyond the ken of the 4004. After all, most alphabets have more than
16 characters. Making the microprocessor into a more general purpose device required expanding the
size of the chip’s registers so it could handle representations of all the letters of our alphabet and more.
Although six bits could accommodate all upper- and lowercase letters as well as numbers (six bits can
code 64 symbols), it would leave little room to spare for punctuation marks and such niceties as control
codes. In addition, the emergence of the eight-bit byte as the standard measure of digital data resulted in
its being chosen as the register size of the next generation of microprocessor, Intel’s 8008, introduced in
1972.
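The arithmetic behind these register widths is simple: n bits can encode 2-to-the-n distinct symbols. A quick Python sketch (purely illustrative, of course; the chips themselves ran no such language):

```python
# Symbols encodable by an n-bit register: 2**n distinct bit patterns.
for bits in (4, 6, 8):
    print(f"{bits} bits can code {2 ** bits} symbols")
```

Four bits yield the 16 symbols of the 4004, six bits yield 64, and the eight-bit byte of the 8008 yields 256.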
The 8008 was, however, at heart just an update of the 4004 with more bits in each register. It used the
same technology (which meant the smallest features etched onto the silicon chip measured 10 microns
across) and ran a bit faster (200 KHz) but broke no new ground. Overall, the 8008 was an interesting and
workable chip, and it found application in some initial stabs at building personal computers. Now,
however, it’s only a footnote in the history of the PC.

The 8080 Family

Intel continued development (as did other integrated circuit manufacturers) and, in 1974, created a rather
more drastic revision, the 8080. Unlike the 8008, it was planned from the start for byte-size data. Intel
gave the 8080 a richer command set, one that embraced all the commands of the 8008 but went further.
This set a pattern for Intel microprocessors: every increase in power and range of command set enlarged
on what had gone before rather than replacing it, assuring backward compatibility (at least to some
degree) of the software. The improvements made to the 8080 made the chip one of the first with the
inherent capability to serve as the foundation of a small computer.


A few engineers had even better ideas for improving the 8080 and left Intel to develop these
improvements on their own. After forming Zilog Corporation, they unveiled to the world the Z80
microprocessor. In truth, the Z80 was an evolutionary development, an 8080 with more instructions, but
it began a revolution by unlocking the power of the first widely accepted standard small computer
operating system, CP/M, an acronym for the Control Program for Microcomputers.
An operating system is a special program that links programs, the microprocessor, and its related
hardware, such as storage devices. CP/M was developed by Digital Research and modeled on the
operating systems used by larger computers. Though hardly perfect, it worked well enough that it
became the standard for many small computers used in business. Its familiarity helped the programmers
of larger computers adapt to it, and they threw their support behind it. Although CP/M was designed to
run on the 8080, the Z80 chip offered more power, and it became the platform of choice to make the
system work.
All the while, Intel continued to improve on its eight-bit microprocessor designs. One effort was the
8085, a further elaboration on the 8080, which was designed to use a single five-volt power supply and
use fewer peripheral chips than its predecessor. Included in its design were vectored interrupts and a
serial input/output port. Alas, the 8085 never won the favor of the small computer industry. A few small
computers, now almost entirely forgotten, were designed around it.

The 8086 Family

In 1978, Intel pushed technology forward with its 8086, a microprocessor that doubled the size of its
registers again to 16 bits and promised 10 times the performance of the 8080. The 8086 also improved
on the 8080 by doubling the size of the data bus to 16 bits to move information in and out twice as fast.
It also had a substantially larger address bus (20 bits wide) that enabled the 8086 to directly control over
one million bytes—a megabyte—of memory.
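The memory ceilings quoted throughout this chapter follow directly from address bus width: n address lines can select 2-to-the-n distinct byte locations. A sketch of the arithmetic (the function name is ours, for illustration only):

```python
# Directly addressable memory = 2**(number of address lines) bytes.
def addressable_bytes(address_lines: int) -> int:
    return 2 ** address_lines

print(addressable_bytes(20))  # 8086/8088: 1,048,576 bytes, one megabyte
print(addressable_bytes(24))  # 80286: 16MB
print(addressable_bytes(32))  # 386DX and later: 4GB
```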

8086

As a direct descendant of the 8080 and cousin of the Z80, the 8086 shared much of the command set of
the earlier chips. Just as the 8080 elaborated on the commands of the 8008, the 8086 embellished those
of the 8080. The registers of the 8086 were cleverly arranged so that they could be manipulated either at
their full 16-bit width or as two separate 8-bit registers exactly like those of the 8080.
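This dual-width arrangement is easy to model: the 16-bit AX register is nothing more than the 8-bit AH and AL registers laid side by side. A sketch (the register names are the real 8086 ones; the helper functions are ours):

```python
# 8086-style register pairing: AX is AH (high byte) beside AL (low byte).
def make_ax(ah: int, al: int) -> int:
    return ((ah & 0xFF) << 8) | (al & 0xFF)

def split_ax(ax: int) -> tuple:
    return (ax >> 8) & 0xFF, ax & 0xFF  # (AH, AL)

ax = make_ax(0x12, 0x34)
print(hex(ax))       # 0x1234
print(split_ax(ax))  # (18, 52), that is, (0x12, 0x34)
```

Software could thus treat the same bits as one 16-bit value or two independent bytes, which is exactly what let 8080-style code map onto the 8086.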
The memory of the 8086 was also arranged to be a superset of that of the 8080. Instead of being one vast
megabyte romping ground for data, it was divided into 16 segments that each contained 64 kilobytes. In
effect, the memory of the 8086 was a group of 8080 memories linked together. The 8086 looked at each
segment individually and did not permit a single large data structure to span segments—at least not
easily.
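Strictly, this description simplifies the chip's actual scheme: the 8086 forms each 20-bit physical address by shifting a 16-bit segment value left four bits and adding a 16-bit offset, so a segment can begin at any 16-byte boundary, not just at 16 fixed positions. Either way, the arithmetic is the same; a sketch:

```python
# 8086 real mode addressing: physical = segment * 16 + offset,
# truncated to the 20 address lines (wraps at the 1MB boundary).
def physical_address(segment: int, offset: int) -> int:
    return ((segment << 4) + offset) & 0xFFFFF

print(hex(physical_address(0x0000, 0x0000)))  # bottom of memory
print(hex(physical_address(0xF000, 0xFFFF)))  # top of the megabyte, 0xfffff
print(hex(physical_address(0x1234, 0x0010)))  # 0x12350
```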
In some ways, the 8086 was ahead of its time. Small computers were based on 8-bit architectures,
memory was expensive (that’s why a megabyte seemed like more than enough), and few other chips
were designed to handle 16 bits at a time. Using the 8086 forced engineers to design full 16-bit devices,
which, at the time, were not entirely cost effective.

8088

A year after the introduction of the 8086, Intel introduced the 8088. The 8088 was identical to the 8086
in every way—16-bit registers, 20 address lines, the same command set—except one. Its data bus was
reduced to 8 bits, enabling the 8088 to exploit readily available 8-bit support hardware.
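The penalty for the narrower bus is easy to quantify: a 16-bit value moves across the bus in one cycle on the 8086 but needs two on the 8088. As a quick sketch:

```python
import math

# Bus transfers needed to move one value of a given width.
def transfers(value_bits: int, bus_bits: int) -> int:
    return math.ceil(value_bits / bus_bits)

print(transfers(16, 16))  # 8086: one transfer per 16-bit word
print(transfers(16, 8))   # 8088: two transfers per 16-bit word
```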
As a backward step in chip design, the 8088 might have been lost to history, much like the 8085, had not
IBM begun to covertly design its first personal computer around it. IBM’s intent was evidently to cash in
on the 8088 design. Its 8-bit data bus enabled the use of inexpensive off-the-shelf support chips. Its
16-bit internal design gave the PC an important edge in advertising over the 8-bit small computers
already available. And its 8080-based ancestry hinted that the wealth of CP/M programs then available
might easily be converted to the new hardware. In the long run, of course, these advantages have proven
either temporary or illusory. Sixteen-bit support chips are available cheaply, the IBM name proved more
valuable than the 16-bit registers of the 8088, and few CP/M programs were ever directly adapted for the
PC.
What was important is that the lame 8088 microprocessor became t

Winn L. Rosch Hardware Bible, Electronic Edition, Chapter 4

Chapter 4: Memory
Memory is mandatory to make a microprocessor and your PC work. Moreover, the
memory in your PC in part determines what programs you can run and how fast.
Memory is vital. It's where all the bytes must be that your PC's microprocessor needs in
order to operate. Memory holds both the raw data that needs to be processed and the
results of the processing. Memory can even be a channel of communication between the
microprocessor and its peripherals. Memory comes in many types, described and
delimited by function and technology. Each has its role in the proper function of your
PC.

■ Background
■ Primary and Secondary Storage
■ Volatility
■ Non-Volatile Memory
■ Volatile Memory
■ Measurement
■ Measuring Units
■ Granularity
■ Requirements
■ DOS
■ Old Windows
■ New Windows
■ Access
■ Constraints
■ Microprocessor Addressing
■ System Capacity
■ Contiguity Problems
■ Technologies

http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/wrh04.htm (1 de 33) [23/06/2000 04:58:09 p.m.]

■ Random Access Memory
■ Dynamic Memory
■ Static Memory
■ Read-Only Memory
■ Mask ROM
■ PROM
■ EPROM
■ EEPROM
■ Flash Memory
■ Magnetic Memory
■ Core Memory
■ Bubble Memory
■ Virtual Memory
■ Demand Paging
■ Swap Files
■ RAM Doublers
■ Logical Organization
■ Hardware
■ Real Mode Memory
■ Protected Mode Memory
■ Lower Memory
■ BIOS Data Area
■ Upper Memory
■ High Memory Area
■ Frame Buffer Memory
■ Shadow Memory
■ Cache Memory
■ Bank-Switched Memory
■ DOS
■ Memory Managers
■ Virtual Machine Control Programs
■ DOS Extenders
■ Extended Memory Specification
■ Virtual Control Program Interface
■ DOS Protected Mode Interface
■ Expanded Memory
■ Enhanced Expanded Memory Specification
■ EMS Version 4.0
■ Backfilling
■ Expanded Memory Emulation
■ Windows
■ DOS Memory Under Windows
■ System Resource Memory
■ Application Memory
■ Unified Memory Architecture
■ Performance
■ Memory Speed
■ Interleaving
■ Caching
■ Cache Size
■ Describing Cache Performance
■ Cache Mapping
■ Burst-Mode Caches
■ Internal and External Caches
■ Cache-on-a-Stick
■ DRAM Technology
■ Static Column RAM
■ Page-Mode RAM
■ Extended Data Out Memory
■ Burst EDO DRAM
■ Synchronous DRAM
■ Enhanced DRAM
■ Cached DRAM
■ Rambus DRAM
■ Multibank DRAM
■ Video Memory
■ Windows RAM
■ Bus Interface
■ Errors
■ Causes
■ Soft Errors
■ Hard Errors
■ Detection and Prevention
■ Parity
■ Fake Parity
■ Detection/Correction
■ Repair
■ Packaging
■ Discrete Chips
■ Memory Modules
■ 30-pin SIMMs
■ 72-pin SIMMs
■ Dual In-Line Memory Modules
■ Small Outline DIMMs
■ SIPPs
■ Installation
■ Memory Modules
■ Mixing Modules
■ Contact Material
■ SIMM Adapters
■ Identifying Modules
■ Orientation
■ Discrete Chips
■ Socket Selection
■ Orientation

Memory

The difference between genius and mere intelligence is storage. The quick-witted react fast, but the
true genius can call upon memories, experiences, and knowledge to find real answers—the difference
between pressing a button fast and having the insight to know which button to press.
PCs are no different. Without memory, a PC is nothing more than a switchboard. All of its reactions
would have to be hard-wired in. The machine could neither read through programs nor retain data. It
would be stuck in a persistent vegetative state, kept alive by electricity but able to react only
autonomously.
A fast microprocessor is meaningless without a place instantly at hand to store programs and data for
current and future use. Its internal registers can only hold a handful of bytes (and they can be slippery
critters, as you know if you've tried to grab hold of one), hardly enough for a program that
accomplishes anything truly useful. Memory puts hundreds, thousands, even millions of bytes at the
microprocessor's disposal, enough to hold huge lists of program instructions or broad blocks of data.
Without memory, your PC’s microprocessor is worthless. It can’t even function as a doorstop—most
microprocessors are too thin. Moreover, without enough memory your favorite applications will run
slowly, if at all.
With today’s huge applications and operating systems, memory can be one of the most important
influences on the overall performance of your PC. Both the quantity and quality of the memory in
your system have their effects on speed. Installing more memory helps your PC work its quickest.
Memory technologies also influence the speed at which your PC’s microprocessor can work. The
same chip can be a speed demon when linked to an optimum memory system or a slug when chained
to memory constrained by yesterday’s technologies.

Background

The term "memory" covers a lot of territory even when confined to the computer field. Strictly
speaking, memory is anything that holds data, even a single bit. That memory can take a variety of
forms. A binary storage system, the kind used by today's PCs, can be built from marbles, marzipan, or
metal-oxide semiconductors. Not all forms of memory work with equal efficacy (as you'll soon see),
but the concept is the same with all of them—preserving bits of information in recognizable and
usable form. Some forms of memory are just easier for an electronic microprocessor to recognize and
manipulate. On the other hand, other sorts of memory may roll or taste better.
The primary characteristic required for computer memory is that electricity be able to alter it. After
all, today's computers think with electricity. They are made from electronic integrated circuits. Little
wonder that the most practical memory for computers is also made from integrated circuits. But the
memory that's available in IC form comes in great variety, differing, for example, in function,
accessibility, technology, capacity, and speed. Before we can get into the intimate details, we need to
understand the broad concepts underlying computer memory.

Primary and Secondary Storage

A variety of devices and technologies can store digital information in a form that’s electrically
accessible. Function distinguishes what is generally termed computer memory from the kind of data
storage kept by disks and tapes. Both normal memory and disk storage preserve information that the
computer needs but each preserves it in its own way for its own purpose.
What most people consider as computer memory in a specific sense functions as your PC’s primary
storage. That is, the contents of the storage system is in a form that your PC’s microprocessor can
immediately access, ready to be used. In fact, the direct instructions used by some microprocessors
can alter the values held in primary storage without the need to transfer the data into the chip’s
registers. For this reason, primary storage is sometimes called working memory.
The immediacy of primary memory requires that your microprocessor be able to find any given value
without poring through huge blocks of data. The microprocessor must access any value at random.
Consequently, most people refer to the working memory in their PCs as Random Access Memory or
RAM, although RAM has a more specific definition when applied to memory technologies.
No matter the name you use for it, primary storage is, in effect, the short term memory of your PC. It's
easy to get at but tends to be limited in capacity—at least compared to other kinds of storage.
The alternate kind of storage is termed secondary storage. In most PCs, disks and tape systems serve
as the secondary storage system. They function as the machine's long-term memory. Not only does
disk and tape memory maintain information that must be kept for a long time, but also it holds the
bulk of the information that the computer deals with. Secondary storage may be tens, hundreds, or
thousands of times larger than primary storage. Secondary storage is often termed mass storage
because of its voluminous capacity: it stores a huge mass of data.
Secondary storage is one extra step away from your PC’s microprocessor. Your PC must transfer the
information in secondary storage into its primary storage system in order to work on it. Secondary
storage also adds a complication to the hardware. Most secondary storage is electromechanical. In
addition to moving electrical signals, it also involves physically moving a disk or tape to provide
access to information. Because mechanical things generally move slower than electrical signals except
in science fiction, secondary storage is slower than primary storage, typically by a factor of a thousand
or more.
In other words, the most important aspect of primary storage system in your PC is access speed,
although you want to have as much of it as possible. The most important aspect of secondary storage
is capacity, although you want it to be as fast as possible.

Volatility

In all-too-human memories, one characteristic separates short-term and long-term memories. The
former are fleeting. If a given fact or observation doesn’t make it into your long-term memory, you’ll
quickly forget whatever it was—for example, the name that went with the face so quickly introduced
to you at a party.
Computer memories are similar. The contents of some are fleeting. With computers, however,
technology, rather than attention, determines what gets remembered and what is forgotten. For
computers, the reaction to an interruption in electrical supply defines the difference between short and
long term memory. The technical term used to describe the difference is memory volatility. Computer
memory is classed either as non-volatile or volatile.

http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/wrh04.htm (6 de 33) [23/06/2000 04:58:09 p.m.]


Winn L. Rosch Hardware Bible, Electronic Edition, Chapter 4

Non-Volatile Memory

Non-volatile memory is exactly what you expect memory to be: permanent. Once you store something in
non-volatile memory, it stays there until you change it. Neither rain, nor sleet, nor dark of night, nor a
power failure affects non-volatile memory.

Volatile Memory

Volatile memory is not so blessed as non-volatile memory. It, like worldly glory, is transitory. It lasts
not the three score years and ten of human existence or the fifteen minutes of fame. It survives only as
long as does its source of power. Remove power from volatile memory, and its contents evaporate in
microseconds.
Given the choice, you’d of course want the memory of your PC to be non-volatile. The problem is that
nearly all memory systems based solely on electricity and electronic storage are volatile. Electricity is
notorious for its intransigence. Given the slightest opportunity, it will race off or drain away. On the
other hand, electronic memory systems are fast: all non-volatile systems are slower, often
substantially so.
Electronic memory systems can be made to simulate non-volatile memories by assuring a steady
stream of power with a battery backup system. To prolong the period through which battery power
can protect memory contents, these systems are also designed to minimize power drain. But
technologies that consume the least power also tend to be more expensive. Consequently, the bulk of
PC memory systems are volatile, prone to memory loss from power failures. PCs with memories
innately immune to power loss would be prohibitively expensive and excruciatingly slow.

Measurement

In digital computer systems, memory operates on a very simple concept. In principle, all that
computer memory needs to do is preserve a single bit of information so that it can later be recalled.
Bit, an abbreviation for binary digit, is the smallest possible piece of information. A bit doesn't hold
much intelligence—it only indicates whether something is or isn't—on or off, up or down, something
(one) or nothing (zero). It's like the legal system: everything is in black and white, and there are no
shades of gray (at least when the gavel comes down).
When enough bits are taken collectively, they can code meaningful information. A pattern of bits can
encode more complex information. In their most elementary form, for example, five bits could store
the number 5. Making the position of each bit in the code significant increases the amount of
information a pattern with a given number of bits can identify. (The increase follows the exponential
increase of powers of two—for n bits, 2^n unique patterns can be identified.) By storing many bit
patterns in duplicative memory units, any amount of information can be retained.
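
To make the arithmetic concrete, here is a brief Python sketch (ours, purely for illustration; nothing like it runs inside a PC) that enumerates bit patterns and confirms the 2^n rule:

```python
# Count the distinct patterns that n bits can encode: 2 ** n.
from itertools import product

def pattern_count(n_bits):
    """Number of unique values that n bits can represent."""
    return 2 ** n_bits

# Enumerate every 3-bit pattern explicitly to confirm the formula.
three_bit_patterns = list(product("01", repeat=3))

print(len(three_bit_patterns))   # 8, the same as pattern_count(3)
print(pattern_count(8))          # 256 distinct values in one byte
print(pattern_count(16))         # 65536 in a 16-bit word
```

Each added bit doubles the number of patterns, which is why capacities throughout this chapter grow in powers of two.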


Measuring Units

People don't remember the same way computers do. For us human beings, remembering a complex
symbol can be as easy as storing a single bit. While two choices may be enough for a machine, we
prefer a multitude of selections. Our selection of symbols is as broad as the imagination. Fortunately
for typewriter makers, however, we've reserved just a few characters as the symbol set for our
language—26 uppercase letters, a matching number of lowercase letters, 10 numerals, and enough
punctuation marks to keep grammar teachers preoccupied for entire careers. Representing these
characters in binary form makes computers wonderfully useful, so computer engineers tried to
develop the most efficient bit patterns for storing the diversity of symbols we finicky humans prefer.
If you add together all those letters, numbers, and punctuation marks, you'll find that the lowest power
of two that could code them all is 128 (or 2^7). Computer engineers went one better—by using an eight
bit code yielding a capacity of 256 symbols, they found that all the odd diacritical marks of foreign
languages and similar nonsense (at least to English speakers) could be represented by the same code.
The usefulness of this eight bit code has made eight bits the standard unit of computer storage, the
ubiquitous byte.
Half a byte—a four-bit storage unit—is called a nibble because, at least in the beginning of the
personal computer revolution, engineers had senses of humor. Four bits can encode 16
symbols—enough for 10 numerals and six operators (addition, subtraction, multiplication, division,
exponents, and square roots), making the unit useful for numbers-only devices such as hand held
calculators.
The generalized term for a package of bits is the digital word, which can comprise any number of bits
that a computer might use as a group. The term "word" has developed a more specific meaning in the
field of PCs, however, because Intel defines a word as two bytes of data, sixteen bits. According to
Intel, a double-word comprises two words, thirty-two bits; a quad-word is four words, eight bytes, or
sixty-four bits.
The most recent Intel microprocessors are designed to handle data in larger gulps. To improve
performance, they feature wider internal buses between their integral caches and processing circuitry.
In the case of the 486, this bus is 128 bits wide. Intel calls a single bus-width gulp a line of memory.
Table 4.1 summarizes the common names for the sizes of primary storage units.

Table 4.1. Primary Intel Memory Storage Unit Designations

Unit Bits Bytes


Bit 1 0.125
Nibble 4 0.5
Byte 8 1
Word 16 2
Double-word 32 4
Quad-word 64 8
Line (486) 128 16


The Multimedia Extensions (MMX) used by the latest Intel and AMD microprocessors have
introduced four additional data types into PC parlance. These repackage groups of smaller data units
into the 64-bit registers used by the new microprocessors. The new units are all termed packed
because they fit or pack as many smaller units as possible into the larger registers. These new units are
named after the smaller units comprising them. For example, when eight bytes are bunched together
into one 64-bit block to fit an MMX microprocessor register, the data is in packed byte form. Table
4.2 lists the names of these new data types.

Table 4.2. New 64-bit MMX Data Types

Name Basic units Number of Units


Packed byte Byte (8 bits) 8
Packed word Word (16 bits) 4
Packed double-word Double-word (32 bits) 2
Quad-word 64 bits 1
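
The packing itself is nothing more than shifting and combining bits. The following Python sketch (an illustration of the idea only; real packed data lives in MMX registers and is handled by microprocessor instructions, not software like this) packs eight bytes into one 64-bit packed byte unit and splits it apart again:

```python
# Pack eight 8-bit values into one 64-bit "packed byte" unit, least
# significant byte first, then unpack them again.

def pack_bytes(values):
    """Combine eight byte-sized integers into one 64-bit integer."""
    assert len(values) == 8 and all(0 <= v <= 0xFF for v in values)
    packed = 0
    for i, v in enumerate(values):
        packed |= v << (8 * i)      # slide each byte into its slot
    return packed

def unpack_bytes(packed):
    """Split a 64-bit integer back into its eight component bytes."""
    return [(packed >> (8 * i)) & 0xFF for i in range(8)]

data = [1, 2, 3, 4, 5, 6, 7, 8]
packed = pack_bytes(data)
print(hex(packed))            # 0x807060504030201
print(unpack_bytes(packed))   # [1, 2, 3, 4, 5, 6, 7, 8]
```

The point of the packed formats is that a single 64-bit operation can then act on all eight bytes at once.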

To remember a single bit—whether alone or as part of a nibble, byte, word, or double-word—computer memory needs only to preserve a single state, that is, whether something is
true or false, positive or negative, a binary one or zero. Almost anything can suffice to remember a
single state—whether a marble is in one pile or another, whether a dab of marzipan is eaten or
molding on the shelf, whether an electrical charge is present or absent. The only need is that the
memory unit has two possible states and that it will maintain itself in one of them once it is put there.
Should a memory element change on its own, randomly, it would be useless because it does not
preserve the information that it's supposed to keep.
While the possibilities of what can be used for remembering a single state are nearly endless, how the bits
are to be used makes some forms of memory more practical than others. The two states must be both
readily changeable and readily recognizable by whatever mechanism is to use them. For example, a
string tied around your finger will help you remember a bit state but would be inconvenient to store
information for a machine. Whatever the machine, it would need a mechanical hand to tie the knot
and some means of detecting its presence on your finger—a video camera, precision radar set, or even
a gas chromatography system.
Today’s applications demand thousands and millions of bytes of memory. The basic measuring units
for memory are consequently large multiples of the byte. Although they wear common Greek prefixes
shared by units of the metric system, the computer world has adopted a slightly different measuring
system. Although the Greek prefix "kilo" means thousand, computer people assign a value of 1024 to
it, the closest round number in binary, 2^10 (two to the tenth power). Larger units increase by a similar
factor so that a megabyte is actually 2^20 bytes and a gigabyte is 2^30 bytes. Table 4.3 summarizes the
names and values of these larger measuring units.

Table 4.3. Names and abbreviations of Large Storage Units


Unit Abbreviation Size


In units In bytes
Kilobyte KB or K 1024 Bytes 1,024
Megabyte MB or M 1024 Kilobytes 1,048,576
Gigabyte GB 1024 Megabytes 1,073,741,824
Terabyte TB 1024 Gigabytes 1,099,511,627,776
Petabyte PB 1024 Terabytes 1,125,899,906,842,624
Exabyte EB 1024 Petabytes 1,152,921,504,606,846,976
Zettabyte ZB 1024 Exabytes 1,180,591,620,717,411,303,424
Yottabyte YB 1024 Zettabytes 1,208,925,819,614,629,174,706,176
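
Because each step in Table 4.3 multiplies by 1,024, the whole progression can be generated mechanically. The short Python sketch below (illustrative only) reproduces the values:

```python
# Each unit in Table 4.3 is 1,024 times the one before it.
UNITS = ["Kilobyte", "Megabyte", "Gigabyte", "Terabyte",
         "Petabyte", "Exabyte", "Zettabyte", "Yottabyte"]

for power, name in enumerate(UNITS, start=1):
    size_in_bytes = 1024 ** power     # equivalently 2 ** (10 * power)
    print(f"{name}: {size_in_bytes:,} bytes")

# Note that a megabyte is 2 ** 20 bytes, not an even million:
print(1024 ** 2)   # 1048576
```

The 2.4 percent difference between 1,048,576 and an even million is why memory capacities never quite match their metric-sounding names.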

Another term tossed into conversations about memory is "bank." The word indicates not a quantity
but arrangement of memory. A bank of memory is nothing more than a block of storage considered as
a single unit for some specific purpose. For example, a bank-switching system connects and
disconnects banks or blocks of memory with your microprocessor.
When discussing the primary storage of PCs, "bank" has a more specific definition. In this context, a
bank of memory is any size block of memory that is arranged with its bits matching the number of
data connections to your microprocessor. That is, a bank of memory for a Pentium is a block of
memory arranged 64 bits wide.
Although memory often comes in byte-width units, modern PCs have 32- or 64-bit memory buses.
They require banks that are addressed four to eight bytes at a time. Consequently, each bank in such
machines may comprise four or eight memory modules, each unit having the same capacity.

Granularity

When system designers speak about memory granularity, they mean the smallest increments in which
you can add memory to your PC. The granularity depends on three factors: the data bus width of your
PC, the bus width of the memory, and the minimum size of the available memory units. When the bus
width of your PC matches that of the memory modules it uses, the granularity is the minimum
capacity module your PC will accommodate. When your PC data bus width exceeds the width of the
modules it uses, you must add multiple modules for each memory increase. The granularity is then the
total capacity of all the modules you must add for the minimum increase.
For example, if you have a Pentium computer that uses four-byte wide modules (called 72-pin
SIMMs, see "72-Pin SIMMs" section later in this chapter), you need a minimum of two modules to
expand the memory of your machine. Given that the smallest available 72-pin module holds four
megabytes, your machine has a memory granularity of eight megabytes, as do most modern PCs.
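
The granularity calculation can be reduced to two lines of arithmetic. The sketch below (the module figures are examples, not a survey of the market) shows the computation:

```python
# Memory granularity: the smallest increment by which memory can grow.
# modules_needed = PC data-bus width / module bus width, and
# granularity = modules_needed * smallest available module capacity.

def granularity_mb(bus_width_bits, module_width_bits, min_module_mb):
    modules_needed = bus_width_bits // module_width_bits
    return modules_needed * min_module_mb

# Pentium (64-bit bus) with four-byte (32-bit) 72-pin SIMMs of 4MB each:
print(granularity_mb(64, 32, 4))   # 8 -- the eight megabytes noted above

# A 486 (32-bit bus) with the same modules can grow one module at a time:
print(granularity_mb(32, 32, 4))   # 4
```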
The data bus of a PC depends on its processor. PCs with 486 microprocessors have data buses four
bytes (32 bits) wide. Pentium and Pentium Pro PCs have eight-byte (64-bit) data buses.


Most modern PCs use memory modules that have bus widths from one to eight bytes wide. Your PC
determines the memory module bus width that you must use because the width is fixed by the
memory sockets. Many 486 PCs required one-byte wide modules; many Pentium and later 486
machines used four-byte modules. The latest Pentium and Pentium Pro machines use eight-byte
modules.
Module capacities depend on the technology available and the price you’re willing to pay. The earliest
modules held only 256K bytes. Later generations allow 1, 2, 4, 8, and 16 megabytes per module.
Although smaller memory modules are available, the most popular 72-pin modules have a capacity of
8MB.

Requirements

How much memory you actually need in a PC depends on what you want to do. The first PCs came
equipped with 16K of memory, which quickly proved to be insufficient to do anything useful. You
had to upgrade to 64K just to run DOS. Although the quantities have changed, the basic situation
hasn’t. Ever since the first PC, software has demanded more and more memory from your PC, and the
amount of memory you need depends on the operating system and applications that you want to run.
Today most computers come equipped with a minimum of memory, and you have to add more to
really get computing. For example, with memory-hungry Windows 95, the eight megabytes that’s
standard in most basic systems is enough to get you going but not enough to deliver good
performance.
Odds are you will use your PC with one of the three most popular operating systems, DOS, old
Windows (Windows for Workgroups 3.11 and earlier), or new Windows (Windows 95 and later).
Each of these has specific memory needs and limitations.

DOS

Modern versions of DOS make do with as little as 128K of system memory, a trivial amount in
today’s terms. Modern applications can demand many times that, and a modern operating system can
take advantage of almost as much memory as you can afford.
The design of DOS constrains the amount of memory available to programs that run using it. In most
PCs, DOS applications can access no more than 640K of memory, although in some situations you
can squeeze out a few dozen kilobytes more using a memory manager, discussed below.
By its very design, DOS is limited to addressing just over one megabyte. DOS operates only in real
mode, and that’s all the memory your PC’s microprocessor can address in real mode. Adding more
than that to your PC will not benefit ordinary DOS applications.
In the past, some programs have used DOS extenders to address memory beyond the limits of real
mode. Another technology, expanded memory, allowed applications specifically written to use it to
address more than a megabyte. Both of these technologies, discussed more fully below, are essentially
obsolete. Program publishers have shifted from using them to crafting their programs to run under
Microsoft Windows.
The bottom line is that if you restrict yourself to DOS and normal DOS applications, one megabyte is
all you can ordinarily use. Even if you install more memory in your PC, you probably won’t be able to
take much advantage of it.

Old Windows

Old Windows begins where DOS ends. You could get Windows 3.1 off the ground (and maybe even
run one application under it) with as little as a megabyte of memory. According to Microsoft, you can
start Windows 3.1 in standard mode with as little as 512K of memory. This memory must be
configured as 128K conventional, 384K extended (these and other memory terms are defined in
following sections). Running in enhanced mode, Windows itself requires at least 762K (182K
conventional, 580K extended), but it won’t actually start unless you have even more. Windows
checks how much memory is available in your system when it loads, and it demands 1024K before it
will load in enhanced mode when you have a 386SX or better microprocessor in your PC.
To do anything useful with old Windows, you'll want much more memory. For tolerable performance,
you'll want at least four megabytes. Getting good performance requires about eight megabytes—the
additional megabytes mean that Windows won’t swap data to disk as often. Moreover, some
applications demand even more memory. Some won’t even start until you’ve got eight megabytes in
your PC. Some may require a full sixteen megabytes.
Sixteen megabytes is all that old Windows can use directly. However, old Windows can use memory
beyond that for auxiliary functions such as disk caching, which can help improve overall performance.
Most authorities agree, however, that unless your software makes specific requirements, sixteen
megabytes is the optimum to install for old Windows.

New Windows

Windows 95 dramatically changed the memory requirements of PCs. It not only uses memory better
but also uses (and requires) more of it. To get Windows 95 started, you need at least 4MB. Even then,
you’ll spend most of your time waiting while Windows tries to fill its memory needs with bytes from
your hard disk. In other words, with 4MB Windows 95 is frustratingly slow.
The performance of Windows 95 becomes tolerable—you’re not apt to fall asleep when switching
between applications—at about 8MB. You can work at that level providing you don’t try to do a lot of
multitasking or load several applications simultaneously. Windows 95 starts to come into its own at
about 16MB and, in most situations, can appreciably improve its performance with a move to 32MB.
At that point, the law of diminishing returns begins to kick in, and you get less improvement for
greater memory investments.
In any case, Windows 95 can use up to 4GB of memory. Only half of that can be used by applications.
Windows reserves the other 2GB for itself and its own internal functions.
For Windows NT versions 4.0 and later, you’ll want to start at 32MB. From there your wallet is the
limit.

Access

Memory works like an elaborate set of pigeon holes used by post office workers to sort local mail. A
memory location called an address is assigned to each piece of information to be stored. Each address
corresponds to one pigeon hole, unambiguously identifying the location of each unit of storage. The
address is a label, not the storage location itself (which is actually one of those tiny electronic
capacitors, latches, or fuses).
Because the address is most often in binary code, the number of bits available in the code determines
how many such unambiguous addresses can be directly accessed in a memory system. As noted
before, an eight-bit address code permits 256 distinct memory locations (2^8 = 256). A 16-bit address
code can unambiguously define 65,536 locations (2^16 = 65,536). The available address codes
generally correspond to the number of address lines of the microprocessor in the computer, although,
strictly speaking, they need not.
The amount of data stored at each memory location depends on the basic storage unit, which varies
with the design of the computer system. Generally, each location contains the same number of bits
that the computer processes at one time—so an eight-bit computer (like the original PC) stores a byte
at each address and a 32-bit machine keeps a full double-word at each address.
The smallest individually addressable unit of today's 32-bit Intel microprocessors—the 386, 486, and
Pentium—is actually four double-words, 16 bytes. Smaller memory units cannot be individually
retrieved because the four least significant address lines are absent from these microprocessors.
Because the chips prefer to deal with data one line at a time, greater precision in addressing is
unnecessary.
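
Omitting the four least significant address lines amounts to masking every address to a 16-byte boundary. A brief illustrative sketch (ours, not microprocessor logic) shows the effect:

```python
# With the four low address lines absent, every address the chip emits
# effectively selects a 16-byte "line"; the low four bits are implied zero.

def line_base(address):
    """Round an address down to the start of its 16-byte line."""
    return address & ~0xF

print(hex(line_base(0x1234)))   # 0x1230
print(hex(line_base(0x123F)))   # 0x1230 -- same line as 0x1234
print(hex(line_base(0x1240)))   # 0x1240 -- the next line up
```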
Memory chips do not connect directly to the microprocessor's address lines. Instead, special circuits,
which comprise the memory controller, translate the binary data sent to the memory address register
into the form necessary to identify the memory location requested and retrieve the data there. The
memory controller can be as simple as address decoding logic circuitry or an elaborate
application-specific integrated circuit that combines several memory-enhancing functions.
To read memory, the microprocessor activates the address lines corresponding to the address code of
the wanted memory unit during one clock cycle. This action acts as a request to the memory controller
to find the needed data. During the next clock cycle, the memory controller puts the bits of code
contained in the desired storage unit on the microprocessor's data bus. This operation takes two cycles
because the memory controller can't be sure that the address code is valid until the end of a clock
cycle. Likewise, the microprocessor cannot be sure the data is valid until the end of the next clock
cycle. Consequently, all memory operations take at least two clock cycles.
Writing to memory works similarly—the microprocessor first sends off the address to write to; the
memory controller finds the proper pigeon hole; then the microprocessor sends out the data to be
written. Again, the minimum time required is two cycles of the microprocessor clock.
Reading or writing can take substantially longer than two cycles, however, because microprocessor
technology has pushed into performance territory far beyond the capabilities of today's affordable
DRAM chips. Slower system memory can make the system microprocessor—and the rest of the
PC—stop while it catches up, extending the memory read/write time by one or more clock cycles.

Constraints

All else being equal, more memory is better. Unfortunately, the last time all else was equal was before
chaos split into darkness and light. You may want an unlimited amount of memory in your PC, but
some higher authority may militate against it—simple physics for one. The Pauli exclusion principle
made practical: You can't stuff your system with more RAM than will fit into its case.
Long before you reach any such physical limit, however, you'll face a more steadfast barrier. (After
all, you can always buy a bigger case for your PC.) Many aspects of the design of real-world PCs
limit the amount of memory that the system can actually use. Important factors include the addressing
limits of microprocessors, the design limits of systems, and the requirement that program memory be
contiguous.

Microprocessor Addressing

Every Intel microprocessor has explicit memory handling limits dictated by its design. Specifically,
the amount of memory that a particular microprocessor can address is constrained by the number of
address lines assigned to that microprocessor and internal design features. Ordinarily, a
microprocessor can directly address no more memory than its address lines will permit. Although
modern microprocessors make this constraint pretty much irrelevant, for older chips these limits are
very real.
A microprocessor needs some way of uniquely identifying each memory location it can access. The
address lines permit this by assigning a memory location to each different pattern that can be coded by
the chip's address lines. The number of available patterns then determines how much memory can be
addressed. These patterns are, of course, simply a digital code.
The on/off patterns of the 20 address lines of the 8088 and 8086 microprocessors can uniquely define
2^20 addresses, the one megabyte addressing limit of DOS, a total of 1,048,576 bytes. Because 286
microprocessors have 24 addressing lines, they can directly access up to 2^24 bytes of RAM, that's 16
megabytes or 16,777,216 bytes. Other chips, such as the 386SL, suffer similar limits (32MB in the
case of the 386SL). No PC with a 286 microprocessor can directly address more than 16MB of RAM;
a 386SL, 32MB.
With the introduction of chips with a full 32 address lines—such as the 386DX, 486, and
Pentium—direct memory addressing has become practically unlimited. These microprocessors can
directly access four gigabytes of memory—that's 4,294,967,296 bytes. You're unlikely to need more
than that amount of addressability soon, particularly considering most programs are still written with
the DOS constraints in mind. If you could find RAM at $25 per megabyte, reaching the limit of the
Pentium would cost you $102,400 for memory alone. Adding it would itself make an interesting
upgrade, one that would keep you plugging in memory modules for the better part of a day—if you
could find a PC with enough sockets to fill.
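
All of these limits follow from the same relationship: n address lines can select 2^n distinct bytes. The Python sketch below (illustrative arithmetic only) checks the figures quoted above:

```python
# Addressable memory = 2 ** (number of address lines), in bytes.

def addressable_bytes(address_lines):
    return 2 ** address_lines

print(addressable_bytes(20))   # 1048576 -- 8088/8086, the 1MB DOS limit
print(addressable_bytes(24))   # 16777216 -- 286, 16MB
print(addressable_bytes(32))   # 4294967296 -- 386DX/486/Pentium, 4GB

# Filling a Pentium's 4GB address space at $25 per megabyte:
megabytes = addressable_bytes(32) // (1024 * 1024)
print(megabytes * 25)          # 102400 -- the $102,400 quoted above
```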


System Capacity

Not all computers can take advantage of all the memory that their microprocessors could address. For
example, many 386-based PCs that use the classic AT bus for expansion generally permit the direct
addressing of only 16 megabytes. The reason for this memory-addressing shortfall is that the AT bus
was designed with only 24 addressing lines rather than the full 32 of the microprocessor. Newer
classic-bus computers break through this limit by the expedient of forcing you to keep all memory in
proprietary expansion. Once no longer constrained by the bus, memory capacity could be expanded to
the limits of the PC's microprocessor. But hardly any ISA machine gives you that option. A nagging
few of them still maintain the 16MB bus limit even on motherboard memory because designing and
building PCs with such limits is easier and cheaper. Some other systems restrict you to 32MB or so
because of constraints built into their support chips.
The second-generation PC expansion buses, EISA and Micro Channel, extend their address buses to a
full 32 bits (though you are well advised not to use these buses for memory expansion. More on that
later.) Today's local bus implementations (both VL Bus and PCI) have exactly the same 32-bit
constraint. Addressability is not an issue with them.
Other aspects of computer design may also limit internal addressing to levels below those allowed by
the system's microprocessor. For example, most of the first generation of 386-based EISA computers
allowed up to 32 megabytes of RAM to be installed—more than AT-bus machines but far within the
four gigabyte constraints imposed by their microprocessors. Many machines now available have
pushed the limit to 64MB. Some of the latest machines have no inherent limit except for the number
of memory sockets they provide. The only way to be certain about the addressing limit of a given PC
is to check its specification sheet.
Some special architectures allow microprocessors to address more memory than the amount for which
they were designed. The most popular of the techniques is the bank-switching method used by EMS.
Note, however, that the expanded memory standard restricts bank-switched memory to far less
capacity than today's top microprocessors can directly address. Consequently, bank-switching as a
means to add extra RAM to PCs has fallen into disfavor.
While desktop systems that lack a full 16MB capacity on the system board allow you to stuff in that
much RAM in expansion slots, most laptop and notebook PCs lack true AT-style expansion slots,
substituting proprietary connectors for modems and memory. Consequently, nearly all notebook PCs
leave you stuck with memory limits ordained by the manufacturer. While the PCMCIA 2.0 standard
allows for full 32-bit addressing, other design factors limit the expandability of notebook machines
with slots for PC Cards.

Contiguity Problems

Nearly all applications and operating systems assume your system's memory is contiguous. That is,
there are no gaps in the entire range from beginning to end. One reason is that most programs contain
instructions called relative jumps—the instruction tells the program to leap from one point in memory
to another. The distance to jump is defined as the number of bytes separating the old point of
execution from the new rather than indicating exact memory addresses. That is, a relative jump tells a
program to look for its next instruction a given number of bytes from its last instruction. Relative
jumps make programs flexible because they allow the software to load into memory without any
reference to an absolute address. Programs can slip into RAM anywhere without a problem. This
flexibility allows the same program to run in systems with different memory capacities, even with
different resident software loaded.
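
A toy interpreter makes the idea tangible. In the Python sketch below (entirely hypothetical; no real microprocessor works this way, but the offset arithmetic is the same), the jump instruction stores only a distance, so the program runs identically wherever it is loaded:

```python
# Toy memory holding a tiny "program" that uses a relative jump.
# ("JMP", offset) moves execution forward by offset cells; any other
# cell is treated as a plain instruction; None marks empty memory.

def run(memory, start):
    pc, executed = start, []
    while pc < len(memory) and memory[pc] is not None:
        op = memory[pc]
        if isinstance(op, tuple) and op[0] == "JMP":
            pc += op[1]              # relative: offset from the current cell
        else:
            executed.append(op)
            pc += 1
    return executed

program = ["A", ("JMP", 2), "skipped", "B"]

# Load the same program at two different addresses in a larger memory.
mem1 = program + [None] * 4
mem2 = [None] * 3 + program + [None]

print(run(mem1, 0))   # ['A', 'B']
print(run(mem2, 3))   # ['A', 'B'] -- same behavior at a different address
```

Had the jump named an absolute address instead of an offset, the second run would have landed in the wrong cell, which is exactly the crash scenario the text describes for non-contiguous memory.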
If a hole appears in memory—that is, if the RAM addresses are not all contiguous—there's always a
chance that a relative jump will drop program execution in the middle of nowhere, where there is no
waiting instruction—likely no memory at all. Not knowing what to do, the program stops or does
something unexpected—which means it usually crashes your system.
Ordinarily this need for contiguity is no problem because most PCs require their memory to all be
contiguous and won't let you install it otherwise. However, you may face contiguity problems in some
special cases: when using ROM shadowing and memory apertures.
ROM shadowing, the popular PC speed-up technique, can have a detrimental effect on memory
expansion. The extended memory used to hold the remapped ROM code has to come from
somewhere. Most machines simply steal 256K or so of extended memory from the very top of the
available addresses. But an annoying number of older PCs assume that you will never want to put
more than 16MB in your PC, so they fix the address range used for shadowing right at the 16MB
border. If you stuff such a PC with more than 16MB of RAM, shadowing will still steal what it needs
from that fixed address at the 16MB limit, which puts a gaping chasm in the machine's address range.
Operating systems and programs butt against the hole and can go no farther no matter how much more
memory you install in your system.
Thankfully, this limit is easy to sidestep—switch off shadowing using your PC's advanced setup
procedure. Additional memory will likely benefit your system more than shadowing.
Memory apertures are address ranges used by PC peripherals for memory-mapped input/output
operation and control. That is, your PC sends data and control signals to the memory-mapped device
by writing to a given range of memory addresses. The device picks up the data there and does its
thing, whatever that may be. A Weitek math coprocessor is one example of a memory-mapped device.
Fortunately (actually, by plan), the Weitek chip uses an address range in the gigabyte range, far from
where you'd want to install any RAM.
Another device that's likely to demand a dedicated memory address range—and one that's more likely
to pop into your PC soon—is a direct memory aperture video board. Until IBM introduced its XGA
system in 1991, most display adapters followed the VGA standard for memory use. Their frame
buffers were bank-switched from a 64K frame within the real-mode address range. But
bank-switching complicates the programming of graphic software and slows down video speed. IBM's
XGA added a direct memory aperture addressing mode, which reserved a range in extended memory
for directly addressing the XGA frame buffer.
Although other manufacturers have been slow to adopt the XGA standard (or completely resistant to
it), many have embraced the direct memory aperture concept. Many graphics adapters now use direct
memory apertures to address their frame buffers in their higher resolution modes.
Because display adapters include their own video memory, these memory apertures don't steal any of
the RAM you install in your PC. Moreover, because the frame buffer memory is not used by your
programs for execution, it need not be contiguous with the rest of RAM. In theory, there should be no

http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/wrh04.htm (16 de 33) [23/06/2000 04:58:09 p.m.]


Winn L. Rosch Hardware Bible, Electronic Edition, Chapter 4

problem.
With advanced expansion buses that can address a full 4GB, there almost never is a problem with a
memory aperture. So many addresses are available, there's no chance of conflict. But with old ISA,
the memory aperture can severely restrict your RAM expansion. Because you are forced to install
display adapters in an expansion slot and because ISA is limited to 16MB, the memory aperture used
by a display adapter in an ISA system must appear somewhere below the 16MB border. And it
does—Wham! The aperture blasts a huge hole in your RAM address range somewhere below the
16MB border. Memory above the aperture but below 16MB cannot be reached by most operating
systems because it becomes non-contiguous due to the aperture's hole. The aperture itself steals a
megabyte or two, so you're left with a PC that can give your programs no more than about 14MB no
matter how much RAM you install.
In many PCs, the situation is even worse. You might be limited to only 8MB of total RAM no matter
how much you install in the system. Most PCs have eight 9-bit SIMM sockets and let you install
SIMMs of just about any capacity. But the aperture makes SIMMs larger than 1MB useless. Installing
a single bank of 4MB SIMMs puts overall capacity at 16MB and guarantees a conflict with a memory
aperture that's limited to that range by ISA. The most memory you can install in such systems is two
banks of four 1MB SIMMs—that's just 8MB.
This aperture limit ordinarily does not occur with Micro Channel, EISA, VL Bus, or PCI because all
four of those buses permit 32-bit addressing and do not force the aperture below the 16MB border. As
long as the maker of the video board allows you to move the board's aperture above the address range
used by your system's DRAM, you should face no problem.
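The arithmetic behind that roughly 14MB ceiling is easy to work out. Here's a sketch; the 2MB aperture size and its placement just below the 16MB border are illustrative assumptions:

```python
# Sketch: how an aperture pinned below the ISA 16MB border caps usable RAM.
# The 2MB aperture size and its placement are illustrative assumptions.
ISA_LIMIT = 16 * 2**20                     # ISA addressing tops out at 16MB
APERTURE_SIZE = 2 * 2**20                  # assumed frame-buffer aperture
aperture_base = ISA_LIMIT - APERTURE_SIZE  # aperture sits just below 16MB

installed_ram = 32 * 2**20                 # RAM physically installed

# Programs need RAM contiguous from address zero, so everything at and
# above the aperture's base address is out of reach.
usable = min(installed_ram, aperture_base)
print(usable // 2**20)                     # -> 14 (megabytes)
```

Moving the aperture above the range occupied by your DRAM, as the 32-bit buses allow, removes that cap entirely.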

Technologies

In digital computers, it is helpful to store a state electrically so the machine doesn't need eyes or hands
to check for the string, marble, or marzipan. Possible candidates for electrical state-saving systems
include those that depend on whether an electrical charge is present or whether a current will flow.
Both of these techniques are used in computer memories for primary storage systems.
The analog of electricity, magnetism, can also be readily manipulated by electrical circuits and
computers. In fact, a form of magnetic memory called core was the chief form of primary storage for
the first generation of mainframe computers. Some old-timers still call primary storage "core
memory" because of this history. Today, however, magnetic storage is mostly reserved for mass
storage because magnetism is one step removed from electricity. Storage devices have to convert
electricity to magnetism to store bits and magnetic fields to electrical pulses to read them. The
conversion process takes time, energy, and effort—all of which pay off for long-term storage, at
which magnetism excels, but are unnecessary for the many uses inside the computer.
Using electrical circuits endows primary storage with the one thing it needs most—speed. Only part of
its swiftness is attributable to electricity, however. More important is the way in which the bits of
storage are arranged. Bits are plugged into memory cells that are arranged like the pigeon holes used
for sorting mail—and for the same reason. Using this arrangement, any letter or bit of memory can be
instantly retrieved when it is needed. The microprocessor does not have to read through a huge string
of data to find what it needs. Instead it can zero in on any storage unit at random. Consequently, this
kind of memory is termed Random Access Memory, more commonly known by its acronym, RAM.

Random Access Memory

The vast majority of memory used in PCs is based on storing electrical charges rather than magnetic
fields. Because all the other signals inside a PC are normally electronic, the use of electronic memory
is only natural. It can operate at electronic speed without the need to convert technologies. Chip
makers can fabricate electronic memory components exactly as they do other circuits, even on the
same assembly lines. Best of all, electronic memory is cheap, the most affordable of all direct-access
technologies.

Dynamic Memory

The most common electronic memory inside today's personal computers brings RAM to life using
minute electrical charges to remember memory states. Charges are stored in small capacitors. The
archetypical capacitor comprises two metal plates separated by a small distance that's filled with an
electrical insulator. A positive charge can be applied to one plate and, because opposite charges
attract, it draws a negative charge to the other nearby plate. The insulator separating the plates
prevents the charges from mingling and neutralizing each other.
The capacitor can function as memory because a computer can control whether the charge is applied
to or removed from one of the capacitor plates. The charge on the plates can thus store a single state
and a single bit of digital information.
In a perfect world, the charges on the two plates of a capacitor would forever hold themselves in
place. In the real world, however, no insulator is perfect. There's always some possibility that a
charge will sneak through any material; better insulators lower the likelihood but cannot eliminate it
entirely. Think of a perfect capacitor as being like a glass of
water, holding whatever you put inside it. A real-world capacitor inevitably has a tiny leak through
which the water (or electrical charge) drains out. The leaky nature of capacitors themselves is made
worse by the circuitry that charges and discharges the capacitor because it, too, allows some of the
charge to leak off.
This system seems to violate the primary principle of memory—it won't reliably retain information for
very long. Fortunately, this capacitor-based system can remember long enough to be useful—a few or
a few dozen milliseconds—before the disappearing charges make the memory unreliable. Those few
milliseconds are sufficient that practical circuits can be designed to periodically recharge the capacitor
and refresh the memory. For example, some Motorola 1MB SIMMs require memory refreshing every
8 milliseconds. Some 8MB SIMMs need a refresh only every 32 ms.
Refreshing memory is akin to pouring extra water into a glass from which it is leaking. Of course, you
have to be quick to pour the water while there's a little left so you know which glass needs to be
refilled and which is supposed to be empty.
To assure the integrity of its memory, PCs periodically refresh memory automatically. During the
refresh period, the memory is not available for normal operation. Accessing memory also refreshes
the memory cell. Depending on how a chip maker has designed its products, accessing a single cell
also may refresh the entire row or column containing the accessed memory cell.
Because of the changing nature of this form of capacitor-based memory and its need to be actively
maintained by refreshing, it is termed dynamic memory. Integrated circuits that provide this kind of
memory are termed dynamic RAM or DRAM chips.
In personal computer memories, special semiconductor circuits that act like capacitors are used
instead of actual capacitors with metal plates. A large number of these circuits are combined to make
a dynamic memory integrated circuit chip. As with true capacitors, however, dynamic memory of this
type must be periodically refreshed.
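A toy model shows why the refresh interval matters. The decay rate and read threshold below are invented for illustration; only the 8 and 32 millisecond intervals come from the SIMM refresh figures cited above:

```python
# Sketch: a leaky DRAM cell. The 5% per-millisecond decay rate and the 0.5
# read threshold are invented for illustration; only the 8ms and 32ms
# intervals come from the SIMM refresh specs cited above.
def charge_after(initial, ms, decay_per_ms=0.05):
    """Charge remaining after 'ms' milliseconds of exponential leakage."""
    return initial * (1 - decay_per_ms) ** ms

THRESHOLD = 0.5   # below this, the sense circuitry can no longer read a one

# Refreshed every 8 ms, the cell is still readable, so each refresh can
# top the charge back up before the bit is lost:
assert charge_after(1.0, 8) > THRESHOLD

# Left alone for 32 ms, this (hypothetical) cell has leaked too far:
assert charge_after(1.0, 32) < THRESHOLD
```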

Static Memory

While dynamic memory tries to trap evanescent electricity and hold it in place, static memory allows
the current flow to continue on its way. Instead, it alters the path taken by the power, using one of two
possible courses of travel to mark the state being remembered. Static memory operates as a switch
that potentially allows or halts the flow of electricity.
A simple mechanical switch will, in fact, suffice as a form of static memory. It, alas, has the handicap
that it must be manually toggled from one position to another by a human or robotic hand.
A switch that can itself be controlled by electricity is called a relay, and this technology was one of
the first used for computer memories. The typical relay circuit provided a latch. Applying a voltage to
the relay energizes it, causing it to snap from not permitting electricity to flow to allowing it. Part
of the electrical flow could be used to keep the relay itself energized, which would, in turn, maintain
the electrical flow. Like a door latch, this kind of relay circuit stays locked until some force or signal
causes it to change, opening the door or the circuit.
Transistors, which can behave as switches, can also be wired to act as latches. In electronics, a circuit
that acts as a latch is sometimes called a flip-flop because its state (which stores a bit of data) switches
like a political candidate who flip-flops between the supporting and opposing views on sensitive
topics. A large number of these transistor flip-flop circuits, when miniaturized and properly arranged,
together make a static memory chip. Static RAM is often shortened to SRAM by computer
professionals. Note that the principal operational difference between static and dynamic memory is
that static RAM does not need to be periodically refreshed.
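The latching behavior is easy to imitate in software. This sketch models the classic cross-coupled NOR-gate latch, one common flip-flop design (a real SRAM cell implements a similar circuit at the transistor level):

```python
# Sketch: a latch built from two cross-coupled NOR gates, the classic
# flip-flop circuit that a static RAM cell implements with transistors.
def nor(a, b):
    return int(not (a or b))

def latch(set_, reset, q=0, nq=1):
    """Settle the feedback loop and return the stored bit Q."""
    for _ in range(4):                      # iterate until the pair stabilizes
        q, nq = nor(reset, nq), nor(set_, q)
    return q

q = latch(set_=1, reset=0)                  # pulse Set: the latch stores a one
assert q == 1
assert latch(0, 0, q, 1 - q) == 1           # inputs released: state is held
q = latch(set_=0, reset=1)                  # pulse Reset: the latch stores a zero
assert latch(0, 0, q, 1 - q) == 0
```

As long as power flows, the feedback loop holds its state with no refreshing at all; cut the power and the state evaporates, which is the volatility discussed below.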

Read-Only Memory

Note that both the relay and the transistor latch must have a constant source of electricity to maintain
their latched state. If the current supplying them falters, the latch will relax and the circuit forgets.
Even static memory requires a constant source of electricity to keep it operating. Similarly, if dynamic
memory is not constantly refreshed, it too forgets. When the electricity is removed from either type of
memory circuit, the information that it held simply evaporates, leaving nothing behind. Consequently,
these electrically-dependent memory systems are called volatile. A constant supply of electricity is
necessary for them to maintain their integrity. Lose the electricity, and the memory loses its contents.
Not all memory must be endowed with the ability to be changed. Just as there are many memories that
you would like to retain—your first love, the names of all the constellations in the Zodiac, the answers
to the chemistry exam—a computer is better off when it can remember some particularly important
things without regard to the vagaries of the power line. Perhaps the most important of these more
permanent rememberings is the program code that tells a microprocessor that it's actually part of a
computer and how it should carry out its duties.
In the old-fashioned world of relays, you could permanently set memory in one position or another by
carefully applying a hammer. With enough assurance and impact, you could guarantee that the system
would never forget. In the world of solid-state, the principle is the same but the programming
instrument is somewhat different. All that you need is switches that don't switch—or, more accurately,
that switch once and jam. This permanent kind of memory is so valuable in computers that a whole
family of devices called Read-Only Memory or ROM chips has been developed to implement it.
These devices are called read-only because the computer that they are installed in cannot store new
code in them. Only what is already there can be read from the memory.
In contrast, the other kind of memory, to which the microprocessor can write as well as read, is
logically termed Read-Write Memory. That term is, however, rarely used. Instead, read-write memory
goes by the name RAM even though ROM also allows random access to its contents.

Mask ROM

If ROM chips cannot be written by the computer, the information inside must come from somewhere.
In one kind of chip, the mask ROM, the information is built into the memory chip at the time it is
fabricated. The mask is a master pattern that's used to draw the various circuit elements on the chip
during fabrication. When the circuit elements of the chip are grown on the silicon substrate, the
pattern includes the information that will be read in the final device. Nothing, other than a hammer
blow or its equivalent in destruction, can alter what is contained in this sort of memory.
Mask ROMs are not common in personal computers because they require their programming to be
carried out when the chips are manufactured; changes are not easy to make and the quantities that
must be made to make things affordable are daunting.

PROM

One alternative is the Programmable Read-Only Memory chip or PROM. This style of circuit consists
of an array of elements that work like fuses. Too much current flowing through a fuse causes the fuse
element to overheat, melt, and interrupt the current flow, protecting equipment and wiring from
overloads. The PROM uses fuses as memory elements. Normally, the fuses in a PROM conduct
electricity just like the fuses that protect your home from electrical disaster. Like ordinary fuses, the
fuses in a PROM can be blown to stop the electrical flow. All it takes is a strong enough electrical
current, supplied by a special machine called a PROM programmer or PROM burner.
PROM chips are manufactured and delivered with all of their fuses intact. The PROM is then
customized for its given application using a PROM programmer to blow the fuses one-by-one
according to the needs of the software to be coded inside the chip. This process is usually termed
"burning" the PROM.
As with most conflagrations, the effects of burning a PROM are permanent. The chip cannot be
changed to update or revise the program inside. PROMs are definitely not something for people who
can't make up their minds—or for a fast changing industry.
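You can model the one-way nature of PROM programming in a few lines. The bit convention (an intact fuse reads as a one) is an assumption for illustration:

```python
# Sketch: a PROM as an array of one-shot fuses. An intact fuse conducts and
# reads as a one here (the bit convention is an illustrative assumption);
# burning a fuse is permanent.
class PROM:
    def __init__(self, size):
        self.fuses = [1] * size        # shipped with every fuse intact

    def burn(self, address):
        self.fuses[address] = 0        # heavy current blows the fuse for good

    def read(self, address):
        return self.fuses[address]

chip = PROM(8)
chip.burn(3)              # the PROM programmer encodes the software bit by bit
assert chip.read(3) == 0  # blown
assert chip.read(0) == 1  # still intact
# Note there is no un-burn operation: the change cannot be reversed.
```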

EPROM

Happily, technology has brought an alternative, the Erasable Programmable Read-Only Memory chip
or EPROM. Sort of self-healing semiconductors, the data inside an EPROM can be erased and the
chip re-used for other data or programs.
EPROM chips are easy to spot because they have a clear window in the center of the top of their
packages. Invariably, this window is covered with a label of some kind, and with good reason. The
chip is erased by shining high intensity ultraviolet light through the window. If stray light should leak
through the window, the chip could inadvertently be erased. (Normal room light won't erase the chip
because it contains very little ultraviolet light. Bright sunshine does, however, and can erase
EPROMs.) Because of their versatility, permanent memory, and easy reprogrammability, EPROMs
are ubiquitous inside personal computers.

EEPROM

A related chip is called Electrically Erasable Programmable Read-Only Memory or EEPROM
(usually pronounced double-E PROM). Instead of requiring a strong source of ultraviolet light,
EEPROMs need only a higher than normal voltage (and current) to erase their contents. This electrical
erasability brings an important benefit—EEPROMs can be erased and reprogrammed without popping
them out of their sockets. EEPROM gives electrical devices such as computers and their peripherals a
means of storing data without the need for a constant supply of electricity. Note that while EPROM
must be erased all at once, each byte in EEPROM is independently erasable and writable. You can
change an individual byte if you want. Consequently, EEPROM has won favor for storing setup
parameters for printers and other peripherals. You can easily change individual settings, yet still be
assured the values you set will survive switching the power off.
EEPROM has one chief shortcoming—it can be erased only a finite number of times. Although most
EEPROM chips will withstand tens or hundreds of thousands of erase and reprogram cycles, that's not
good enough for general storage in a PC that might be changed thousands of times each second you
use your machine. This problem is exacerbated by the manner in which EEPROM chips are
erased—unlike ordinary RAM chips in which you can alter any bit whenever you like, erasing an
EEPROM means eliminating its entire contents and reprogramming every bit all over again. Change
any one bit in a EEPROM, and the life of every bit of storage is shortened.

Flash Memory

A new twist to EEPROM is Flash ROM, sometimes called Flash RAM (as in the previous edition of
this book—we’ve altered our designation to better fit usage and continuity in our discussion of ROM
technology) or just Flash memory. Instead of requiring special, higher voltages to be erased, Flash
ROM can be erased and reprogrammed using the normal voltages inside a PC. Normal read and write
operations use the standard five-volt power that is used by most PC logic circuits. (Three-volt Flash
ROM is not yet available.) An erase operation requires a super-voltage, a voltage in excess of the
normal operating supply for computer circuitry, typically 12 volts.
For system designers, the electrical re-programmability of Flash ROM makes it easy to use.
Unfortunately, Flash ROM is handicapped by the same limitation as EEPROM—its life is finite
(although longer than ordinary EEPROM), and it must be erased and reprogrammed as one or
more blocks instead of individual bytes.
The first generation of Flash ROM made the entire memory chip a single block, so the entire chip had
to be erased to reprogram it. Newer Flash ROMs have multiple, independently erasable blocks that
may range in size from 4K to 128K bytes. The old, all-at-once style of Flash ROM is now termed bulk
erase flash memory because of the need to erase it entirely at once.
New multiple-block Flash ROM is offered in two styles. Sectored-erase Flash ROM is simply divided
into multiple sectors. Boot Block Flash ROM specially protects one or more blocks from normal erase
operations so that special data in them—such as the firmware that defines the operation of the
memory—will survive ordinary erase procedures. Altering the boot block typically requires applying
the super-voltage to the reset pin of the chip at the same time as performing an ordinary write to the
boot block.
Although modern Flash ROMs can be erased only in blocks, most support random reading and
writing. Once a block is erased, it will contain no information. Each cell will contain a value of zero.
Your system can read these blank cells, though without learning much. Standard write operations can
change the cell values from zero to one but cannot change them back. Once a given cell has been
changed to a logical one with a write operation, it will maintain that value until the Flash ROM gets
erased once again, even if the power to your system or the Flash ROM chip fails.
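This erase-then-write behavior can be sketched as below. The model follows the convention described in the text (erased cells read zero, and a write can only change a cell from zero to one); be aware that many real flash parts use the opposite polarity:

```python
# Sketch: the block-erase, one-way-write behavior described above. It follows
# the convention in the text (erased cells read zero; a write can only change
# a cell from zero to one); many real flash parts use the opposite polarity.
BLOCK_SIZE = 4                             # tiny block for illustration

class FlashBlock:
    def __init__(self):
        self.cells = [0] * BLOCK_SIZE      # freshly erased: all zeros

    def write(self, address, value):
        self.cells[address] |= value       # can set a cell, never clear it

    def erase(self):
        self.cells = [0] * BLOCK_SIZE      # only a whole-block erase clears

block = FlashBlock()
block.write(1, 1)
block.write(1, 0)                          # writing zero cannot undo the one
assert block.cells == [0, 1, 0, 0]
block.erase()                              # but the block-wide erase can
assert block.cells == [0, 0, 0, 0]
```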
Flash memory is an evolving technology. The first generation of chips required that your PC or other
device using the chips handle all the minutiae of the erase and write operations. Current generation
chips have their own onboard logic to automate these operations, making Flash ROM act more like
ordinary memory. The logic controls the timing of all the pulses used to erase and write to the chip,
ensures that the proper voltages reach the memory cells, and even verifies that each write operation
was carried out successfully.
On the other hand, the convenience of using Flash ROM has led many developers to create disk
emulators from it. For the most effective operation and longest life, however, these require special
operating systems (or modified versions of familiar operating systems) that minimize the number of
erase and reprogramming cycles.

Magnetic Memory

Magnetic storage technology brackets computer memory. The first memory systems for the first
computers relied on magnetism, and the perennial future memory technology has long been magnetic.
The personality of magnetic storage is, well, magnetic and compelling. Magnetic storage is
non-volatile and so familiar that it has gone beyond contempt. Magnetic principles are probably among
the best understood, and the technology has long been the engineer’s friend. But in the vast scheme of
things and computer memory, magnetism always comes up short, or at least slow. It’s also expensive.
For some, magnetic storage is history. For others, it’s prophecy. For most of us, however, it remains a
curiosity.

Core Memory

True old-timers who ratcheted themselves down into PCs from mainframe computers sometimes
speak of a computer’s memory system as core. The term doesn’t derive from the centrality of memory
to the operation of the computer but rather from one of the first memory technologies used by ancient
computers. Core memory was based on magnetic storage made from a fabric of wires with a ferrite
doughnut, called a core, at each intersection of the warp and woof. The threads of the fabric were
actually thin wires that could induce a magnetic field in the ferrite doughnuts. The computer could
select an individual ferrite ring by selecting the pair of wires that crossed inside it. After the computer
stored data as the magnetic field in the core, it could later read it back. Reading the field, however,
erased it. Consequently, the computer rewrote each bit after reading it, regenerating its contents.
Core memory had two advantages. It was fast, at least compared to the chief alternative of the time,
magnetic tape. An individual bit could be found and read in a microsecond or so. In addition, core
memory was non-volatile and would retain its contents even if the computer was switched off.
On the downside, core memory was expensive because each storage location had to be individually
fabricated. And it was big, even massive. A space the size of one of today’s megabit memory chips
could hold only a few bytes of core memory. Almost immune to miniaturization, core faded from the
scene when semiconductor memory systems became available, surviving only as a name.

Bubble Memory

In the early 1980s another form of magnetic storage, bubble memory, showed promise for use in small
computers. Bubble memory is based on an odd characteristic of materials like gadolinium gallium
garnet, which can be easily magnetized only along a single axis. When you fabricate a thin film of
one of these materials so that the only permitted fields are up or down, the fields sort themselves into
a random pattern of magnetic domains. Pass a magnetic field through the medium, however, and it
causes the domains aligned with it to coalesce around it in a circle, forming a bubble. Once formed,
the bubble maintains itself until it is destroyed by a strong magnetic field. The bubbles and the data
they store are non-volatile.
A single bubble can store only a single bit, so a bubble memory device requires thousands or millions
of them. To avoid the need for some kind of mechanical scanning (as is used in other magnetic
storage devices like tape and disk drives), the bubble device circulates the magnetic bubbles in a long
loop. The bubbles are coaxed through the device by a changing magnetic field. It works like a
bucket-brigade—the magnetic bubble gets passed along from position to position along the loop. The
bubbles are created and read only at the end of the loop, so they all are kept in constant motion.
Although the bubbles can be read only sequentially, the circulation is so fast that bubble memory
performs much like a random access device.
One underlying problem with bubble memory is that access time and capacity interact. The larger the
capacity of the bubble device, the more bubbles that must circulate, the longer it takes to run through
the complete collection, and the slower the access time. Moreover, physical principles limit the speed
at which the magnetic domains can change and the bubble circulates. These factors put bubble
memory at a severe performance disadvantage as compared to silicon memory.
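That trade-off is simple to quantify. In the sketch below, the shift rate is an assumed figure; the point is the proportionality between capacity and access time:

```python
# Sketch: bubble memory access time grows with capacity because the bits
# circulate in a loop past a single read station; on average, half the loop
# must pass by first. The shift rate is an assumed figure.
SHIFT_RATE = 100_000                       # bubble positions per second (assumed)

def avg_access_ms(capacity_bits):
    """Average wait for a bit: half the loop circulates past the reader."""
    return (capacity_bits / 2) / SHIFT_RATE * 1000

small = avg_access_ms(64 * 1024)           # 64Kbit device
large = avg_access_ms(1024 * 1024)         # 1Mbit device
assert abs(large / small - 16) < 1e-6      # 16x the bits, 16x the wait
```

Silicon RAM, by contrast, reaches any cell in the same fixed time regardless of capacity, which is the performance gap that doomed the bubble.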

Virtual Memory

Even before anyone conceived the idea of the first PC, computer designers faced the same trade-off
between memory and mass storage. Mass storage was plentiful and cheap; memory was expensive, so
much so that not even large corporations could afford as much as they wanted. In the early years of
PCs, engineers tried to sidestep the high cost of memory by faking it, making computers think they
had more memory than they actually did. With some fancy footwork and a lot of shuffling around of
bytes, they substituted mass storage for memory. The engineers called the memory that the computer
thought it had but that didn’t really exist virtual memory. Although the cost of memory has
fallen by a millionfold, memory use has followed suit. Programs’ need for memory still outpaces what
any reasonable person or organization can afford (or wants to pay for), so virtual memory not only
exists but is flourishing.
Microprocessors cannot ordinarily use disk storage to hold the data they work on. Even if they could,
it would severely degrade the performance because the access time for disk storage is thousands of
times longer than for solid state memory. Virtual memory systems attempt to ameliorate the
performance degradation by swapping blocks of code and data between solid state and disk storage.
They keep the bytes that you’re most likely to need next in solid state memory and send the other stuff
to disk.

Demand Paging

Most modern PCs take advantage of a feature called demand paging that has been part of all Intel
microprocessors since the 386. These chips are able to track memory contents as they are moved between
disk and solid state memory in 4K blocks. The microprocessor assigns an address to the data in the
block, and the address stays constant regardless of where the data actually gets stored.
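A sketch of that bookkeeping: with 4K pages, a linear address splits into a page number and an offset within the page, and only the page number needs remapping when data moves.

```python
# Sketch: with 4K pages, a linear address splits into a page number and an
# offset within the page; only the page number needs remapping when data
# moves between solid-state memory and disk.
PAGE_SIZE = 4 * 1024

def split(address):
    return address // PAGE_SIZE, address % PAGE_SIZE

page, offset = split(0x12345)
assert page == 0x12                        # high bits select the 4K page
assert offset == 0x345                     # low 12 bits locate the byte in it
# The program keeps using address 0x12345 no matter where page 0x12's
# contents actually reside at the moment.
```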
Once solid-state memory reaches its capacity, the virtual memory system copies the contents of one or
more pages of memory to disk as the memory space is required. When the system needs data that has
been copied to disk, it copies the least recently used pages to disk and refills their space with the
disk-based pages it needs. When your system attempts to read from a page with an address that’s not
available in solid-state memory, it creates a page fault. The fault causes a virtual memory manager
routine to handle the details of exchanging data between solid-state and disk storage.
This reactive kind of control is called demand paging because it swaps data only when the
microprocessor demands unavailable addresses. It makes no attempt to anticipate the needs of the
microprocessor.
Virtual memory technology allows your PC to run more programs than would be otherwise possible
given the amount of solid-state memory in your system. The effective memory of your system
approaches the spare capacity of your disk. The downside is that it takes away from the capacity of
your hard disk (although disk storage is substantially less expensive than solid-state memory by about
two orders of magnitude). In addition, performance slows substantially when your system must swap
memory.
Virtual memory is an old technology, harking back to the days of mainframe computers when, as
now, disk storage was cheaper than physical memory. Many DOS applications took advantage of the
technology, and it has been part of every version of Windows since Windows 386.
Windows uses a demand-paging system that’s based on a least recently used algorithm. That is, the
Windows virtual memory manager decides which data to swap to disk based on when it was last used
by your system. The Windows VMM also maintains the virtual memory page table, which serves as a
key to which pages are used by each application; which are kept in solid-state storage; and which are
on disk.
Windows decides on which pages to swap to disk using two flags for each page. The accessed flag
indicates that the page has been read from or written to since the time it was loaded into memory. The
dirty flag indicates the page has been written to.
When Windows needs more pages in solid-state memory, it scans through its page table looking for
pages showing neither the accessed nor dirty flags. As it makes its scan, it resets the accessed but not
the dirty flags. If it does not find sufficient unflagged pages, it scans through the page table again.
This time, more pages should be unflagged because of the previous resetting of the accessed flags. If
it still cannot find enough available pages, the Windows virtual memory manager then swaps pages
regardless of the flags.
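A rough sketch of that two-pass scan follows. The page-table layout and the final fall-back are assumptions for illustration; the real Windows virtual memory manager is considerably more involved.

```python
# Sketch of the two-pass victim scan described above. The page-table layout
# and the final fall-back are assumptions; the real Windows VMM is far more
# involved.
def pick_victims(pages, needed):
    """pages: list of dicts with 'accessed' and 'dirty' flags."""
    victims = []
    for _ in range(2):                     # two scans through the table
        for i, page in enumerate(pages):
            if len(victims) == needed:
                return victims
            if not page['accessed'] and not page['dirty'] and i not in victims:
                victims.append(i)
            page['accessed'] = False       # reset accessed, never dirty
    for i in range(len(pages)):            # still short: ignore the flags
        if len(victims) == needed:
            break
        if i not in victims:
            victims.append(i)
    return victims

table = [{'accessed': True,  'dirty': False},   # freed up by the second scan
         {'accessed': False, 'dirty': True},    # dirty: taken only as last resort
         {'accessed': False, 'dirty': False}]   # eligible immediately
assert pick_victims(table, 2) == [2, 0]
```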

Swap Files

The disk space used by a virtual memory system usually takes the form of an ordinary file, though one
reserved for its special purpose. Engineers call the virtual memory file a swap file because the
memory system swaps data to and from the file as the operating system requires it. All versions of
Windows since Windows 386 use swap files of some kind.
Windows versions before Windows 95—which we’ll call old Windows—allowed you to choose a
temporary or permanent swap file. Your choice traded off disk versatility for performance.

Temporary Swap Files

As their name implies, temporary swap files have no permanent place in your system. Old Windows
versions create a temporary swap file when you start Windows and destroy the temporary swap file
when you exit. When Windows is not running, no swap file pollutes your hard disk. This strategy
leaves more disk space for the times you’re not running Windows, though you’ll need to free up space
for the swap file again the next time you start Windows.
The temporary swap file itself is simply an ordinary DOS file on your hard disk, which Windows
automatically creates when it starts, and erases when you shut down Windows. Windows manages the
temporary swap file, increasing and decreasing its size as conditions dictate. Windows automatically
assigns the name WIN386.SWP to the temporary swap file. It is a standard DOS file. If Windows
somehow fails to erase it, you can delete it with the standard DOS command. File locking prevents
your deleting the temporary swap file while Windows is running because the file will be in use.
Windows itself decides on the optimum size for your temporary swap file, but that doesn’t mean it’s
out of your control. You can limit the maximum size of the temporary swap file in two ways, each
involving a setting in the [386enh] section of the SYSTEM.INI file in your Windows directory. The
setting MaxPagingFileSize= lets you indicate the maximum allowable size for your temporary swap
file in kilobytes. Alternately, you can use the MinUserDiskSpace= setting to tell Windows how many
kilobytes to keep free on your hard disk. Under Windows 3.0, you need at least 1.5MB for a
temporary swap file. Windows 3.1 requires at least 512K.
You can also control the location of your temporary swap file, although each major version of
Windows uses its own means of control. Under Windows 3.0, the temporary swap file is ordinarily
stored in your Windows directory. You can relocate it to the root directory of any disk using the
PagingDrive= setting in the [386enh] section of SYSTEM.INI. Under Windows 3.1, you can specify
both drive and subdirectory used by the temporary swap file using the PagingFile= setting.
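In a Windows 3.1 SYSTEM.INI, the settings described above might look like the following fragment. The values shown are illustrative, not recommendations:

```ini
[386enh]
MaxPagingFileSize=8192           ; cap the temporary swap file at 8192K
MinUserDiskSpace=2048            ; always leave at least 2048K free on the disk
PagingFile=D:\WINSWAP\WIN386.SWP ; Windows 3.1 only: drive and directory to use
```

Under Windows 3.0, you would instead use a setting such as PagingDrive=D to move the file to the root directory of drive D:.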

Permanent Swap Files

A permanent swap file reserves a place for itself on your hard disk and takes the storage it uses out of
play whether or not it is needed—or even whether Windows is running. In exchange for your giving
up disk space, the permanent swap file gives you greater speed. The permanent swap file gets built
from contiguous clusters on the disk, that is, adjacent clusters (disk allocation or storage units) that
can be read from or written to with a minimum of disk head movement. Ordinary DOS files are built
from clusters that can be scattered across your disk, requiring your disk’s head do a tarantella during
every access. Making the file permanent prevents its clusters from being used, reused, fragmented,
and scattered.
Windows 3.0 and 3.1 each call their permanent swap files 386SPART.PAR and each bestows the
hidden and system attributes on the file, which means its name is not ordinarily displayed nor can you
delete it. Windows 95 does not use permanent swap files. A permanent swap file always resides in the
root directory of a drive. You create a permanent swap file for Windows 3.0 using the program
SWAPFILE.EXE. Under Windows 3.1, you use Control Panel. Select the 386 Enhanced icon; then
click on the Virtual Memory button. In either case, Windows remembers the location and size of the
permanent swap file in a file called SPART.PAR that is kept in your Windows directory. Delete this
file, and Windows will lose control over the permanent swap file. It won’t even be able to find the file
to delete it. The only way to then delete the permanent swap file is to use the ATTRIB command to
remove the hidden and system attributes from the swap file’s directory entry with the following
command from the DOS prompt:

ATTRIB -H -S 386SPART.PAR
You can then delete the permanent swap file as you would any other DOS file.
Windows gives you control of the size of the permanent swap file you use in your system through the
Virtual Memory button in Control Panel. Open the virtual memory Window, and you’ll see a screen
like that shown in Figure 4.1. You have several control options including the disk on which to locate
the swap file, whether the swap file is permanent or temporary, and the amount of disk space to give
over to a permanent swap file.
Figure 4.1 The virtual memory window for controlling old Windows swap files.

Although you have great latitude in setting the size of your permanent swap file, Windows cannot
build a permanent swap file any larger than the largest contiguous range of clusters on the designated
disk. To maximize the number of contiguous clusters on your disk, either defragment it using a disk
optimization utility or reformat the disk (and don’t forget to back up the files on your disk before
formatting!).
Note that Windows is hard-wired to only use physical disks with 512-byte sectors for the permanent
swap file. You cannot use a network drive (nor would you want to for performance reasons) nor can
you use a disk partitioned with a proprietary disk manager that requires a driver entry in your
CONFIG.SYS file. Disks using Compaq’s ENHDISK.SYS driver are the only exception recognized
by Microsoft.

Windows 95 Swap Files

Windows 95 erases that complication by combining features of permanent and temporary swap files
into its virtual memory system.
Under Windows 95, the swap file mixes together features of temporary and permanent swap files. The
Windows 95 swap file is dynamic, like a temporary swap file, expanding as your system demands
virtual memory and contracting when it does not. In addition, it can shuffle itself into the scattered
clusters of a fragmented hard disk. It can even run on a compressed hard disk.
Windows 95 gives you full menu control of its swap file. You can start the virtual memory control
system by clicking on the Virtual Memory button you’ll find in the Performance Tab of Device
Manager. (You start the device manager by clicking on the System icon in Control Panel.) Once you
click the Virtual Memory button, you should see a screen like that shown in Figure 4.2.
Figure 4.2 The Virtual Memory window for controlling Windows 95 swap files.

You can choose to let Windows 95 choose the size and place of your swap file or take direct control.
By default, Windows 95 puts your swap file in the Windows directory of your C: drive. By selecting
the appropriate box, you can tell Windows the disk and directory in which to put your swap file and
set minimum and maximum limits for its size.

RAM Doublers

The concept certainly is compelling: doubling the memory of your PC without wasting your money
on such incidentals as actually buying memory modules. For a fraction of the price of real memory,
you can fake it and make your PC think and act like it has more than its share of RAM.
Surprisingly, there’s more than a grain of truth to this bit of legerdemain. Add a RAM doubler to your
PC and you can run programs you could never run before. You might even speed things up a bit by
cutting the disk accesses required by virtual memory.
Unfortunately, there’s more than a bit of snake oil to RAM doubling technology. For example, the
first best-selling product proved to be little more than a sham, though the benefit of the doubt makes
it an unintentional one. The first incarnation of SoftRAM did less than nothing, not adding memory
but actually slowing your system down. An apology, a refund, and a revised product have put that issue
to rest.
With current products, you can expect to get at least some benefit from buying a RAM doubler. With
old Windows, the most important gain is greater resource memory. Old Windows uses several stacks
that are limited in size to a single segment, 64K. Resource memory, which holds icons and other bits
of trivia used in the normal operation of Windows, can quickly fill up and prevent you from loading
applications even when you have megabytes of free RAM in your system. By using algorithms that
store data more efficiently (data compression), RAM doublers can squeeze more into the limited
confines of the segment allocated to Windows resources.
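As a rough illustration of how compression buys back resource space, here is a toy run-length encoder in C. It is not the algorithm any commercial RAM doubler uses, merely the simplest example of storing repetitive data (such as icon bitmaps) more compactly:

```c
#include <stddef.h>
#include <stdint.h>

/* Toy run-length encoder: each run of identical bytes becomes a
   (count, byte) pair in the output.  Returns the encoded length;
   the caller must supply an output buffer of at least 2 * n bytes. */
size_t rle_encode(const uint8_t *in, size_t n, uint8_t *out)
{
    size_t o = 0;
    for (size_t i = 0; i < n; ) {
        uint8_t b = in[i];
        size_t run = 1;
        while (i + run < n && in[i + run] == b && run < 255)
            run++;
        out[o++] = (uint8_t)run;  /* how many times the byte repeats */
        out[o++] = b;             /* the byte itself */
        i += run;
    }
    return o;
}
```

Eight bytes of input such as AAAABBBC shrink to three pairs, six bytes; data with long runs compresses far better, while data with no repetition actually grows, which is why real compressors use more sophisticated algorithms.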
Microsoft has raised or removed most of the resource limitations from current versions of Windows,
making this variety of software less of a value. With the current low prices of RAM, physical memory
is usually the better investment.

Logical Organization

Although it may be made from the same kind of chips, not all memory in a PC works in the same
way. Some programs are restricted to using only a fraction of the available capacity; some memory is
off-limits to all programs.
Memory handling is, of course, determined by the microprocessor used to build a computer. Through
the years, however, the Intel microprocessors used in PCs have dramatically improved their memory
capabilities. In less than seven years, the microprocessor-mediated memory limitation was pushed
upward by a factor of four thousand, far beyond the needs of any program written or even conceived
of—at least today.
But neither PCs nor applications have kept up with the memory capabilities of microprocessors. Part
of the reason for this divergence has to do with some arbitrary design decisions made by IBM when
creating the original PC. But the true underlying explanation is your own expectations. You expect
new PCs to be compatible with the old, run the same programs, and use most of the same expansion
hardware. To achieve that expected degree of compatibility, the defects and limitations of the original
PC's memory system have been carried forward for ensuing generations to enjoy. A patchwork of
improvements adds new capabilities without sacrificing (much of) this backward compatibility, but
they further confuse the PC's past memories.
The result is that PCs are stuck with a hierarchy of memory types, each with different abilities and
compatibilities, some useful to some applications, some useless to all but a few. Rather than
improving with age, every advance adds more to the memory mix-up.
The classification of memory depends, in part, on the operating system that you run. Modern
operating systems include memory management software that smooths over the differences among
memory types.

Hardware

At the hardware level, your PC divides its memory into several classes that cannot be altered except
by adjusting your hardware. The electrical wiring of your PC determines the function of these
memory types. Although your PC’s setup procedure may give you the option of adjusting the amount
of memory assigned to some of these functions, the settings are made in the hardware circuitry of your
PC. They cannot ordinarily be adjusted while programs are running because the change you make will
overrule anything the program does. Moreover, the alterations will likely surprise your programs. For
example, the program may expect an address range in the memory to be available for its use that the
hardware setting reappropriates for another purpose. The effect is like pulling a carpet out from under
a well-meaning grandmother—a crash from which she might not be able to get up.

Real Mode Memory

The foundation on which the memory system of every PC is built is the memory that can be addressed
by your PC’s microprocessor while it is running in real mode. For today’s microprocessors based on
the Intel design to be backwardly compatible and able to run older software, they must mimic the
memory design of the original 8086 family. The hallmark of this design is the real operating mode in
which they must begin their operation. Because of the original Intel 8086 microprocessor design, real
mode only allows for one megabyte of memory. Because it serves the host microprocessor operating
in its real mode, this starting memory is termed real mode memory.
The address range used by real mode memory starts at the very beginning of the address range of Intel
microprocessors, zero. The last address is one shy of a megabyte (because counting starts at zero
instead of one), that is 1,048,575 in decimal, expressed in hexadecimal as 0FFFFF(Hex). Because this
memory occurs at the base or bottom of the microprocessor address range, it is also called base
memory.
When real mode was supplemented by protected mode with the introduction of the 80286
microprocessor, a new, wider, more exotic address range opened up in protected mode. Because this
range was off-limits to microprocessors and programs of the then status quo, the real mode range also
earned the epithet conventional memory, hinting that there was something unconventional, even
suspicious, about using addresses beyond the megabyte limit. Today, of course, the most conventional
new PCs sold have many times the old "conventional" memory limit.

Protected Mode Memory

The rest of the memory that can be addressed by modern microprocessors is termed protected mode
memory. As the name implies, this memory can be addressed only when the microprocessor is
running in its protected mode. The address range of protected mode memory stretches from the top of
real mode memory to the addressing limit of your microprocessor. In other words, it starts at one
megabyte—1,048,576 or 100000(Hex)—and extends to 16 megabytes for 286 microprocessors and to
4 gigabytes for 386 through Pentium Pro microprocessors.
To contrast it with base memory, protected mode memory is sometimes called extended memory.
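These limits follow directly from the number of address lines each microprocessor brings out, as a quick check confirms:

```c
#include <stdint.h>

/* Number of addressable bytes given the number of address lines. */
uint64_t address_limit(unsigned lines)
{
    return 1ULL << lines;
}
```

Here address_limit(20) gives the 8086's one megabyte, address_limit(24) the 16 megabytes of the 286, and address_limit(32) the 4 gigabytes of the 386 and its successors.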

Lower Memory

When IBM’s engineers created the PC, they reserved half of the basic one-megabyte addressing range
of the 8088 microprocessor, 512K bytes, for the system's BIOS code and direct microprocessor access
to the memory used by the video system. The lower half was left for running programs.
Even though 512K seemed generous in the days when 64K was the most memory other popular
computers could use, the wastefulness of the original limit soon became apparent. Less than a year
after the original PC was introduced, IBM engineers rethought their memory division and decided that
an extra 128K could safely be reassigned to program access. That change left 384K at the upper end
of the address range for use by video memory and BIOS routines.
This division persists, leaving us with the lower 640K addressing range assigned for program use.
Because it appears at the lower end of the real mode range, this memory is commonly called lower
memory.
The "lower memory" designation is rather recent and reflects the breaking away of PCs from any one
operating system. Lower memory once was called DOS memory because the programs written using
DOS could only run in lower memory and DOS was the only significant operating system available.

BIOS Data Area

IBM also reserved the first kilobyte of lower memory for specific hardware and operating system
functions, to provide space for remembering information about the system and the location of certain
sections of code that are executed when specific software interrupts are made. Among other functions,
this memory range holds data used by BIOS functions, and it is consequently called the BIOS data
area.
Included among the bytes at the bottom of the addressing range are interrupt vectors, pointers that tell
the microprocessor the addresses used by each interrupt it needs to service. Also kept in these bottom
bytes is the keyboard buffer—16 bytes of storage that hold the code of the last 16 characters you
pressed on the keyboard. This temporary storage allows the computer to accept your typing while it’s
temporarily busy on other tasks. It can then go back and process your characters when it's not as busy.
The angry beeping your PC makes sometimes when you hold down one key for too long is the
machine's way of complaining that the keyboard buffer is full and that it has no place to put the latest
characters, which it steadfastly refuses to accept until it can free up some buffer space. In addition,
various system flags, indicators of internal system conditions that can be equated to the code of
semaphore flags, are stored in this low memory range.
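The keyboard buffer's behavior can be modeled as a small ring buffer, sketched here in C. The real buffer lives at fixed addresses in the BIOS data area and stores a scan code alongside each character, so treat this purely as a model of the behavior the text describes:

```c
#include <stdbool.h>
#include <stdint.h>

#define BUF_SIZE 16   /* the 16-character capacity noted in the text */

/* Simplified model of the BIOS keyboard type-ahead buffer. */
struct kbuf {
    uint8_t keys[BUF_SIZE];
    int head, tail, count;
};

/* Store a keystroke; refuse it (the PC beeps) when the buffer is full. */
bool kbuf_put(struct kbuf *b, uint8_t key)
{
    if (b->count == BUF_SIZE)
        return false;                 /* full: beep at the user */
    b->keys[b->tail] = key;
    b->tail = (b->tail + 1) % BUF_SIZE;
    b->count++;
    return true;
}

/* Retrieve the oldest waiting keystroke, if any. */
bool kbuf_get(struct kbuf *b, uint8_t *key)
{
    if (b->count == 0)
        return false;                 /* nothing waiting */
    *key = b->keys[b->head];
    b->head = (b->head + 1) % BUF_SIZE;
    b->count--;
    return true;
}
```

The seventeenth keystroke typed before the system catches up is the one that earns the angry beep.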

Upper Memory

The real mode addressing range above lower memory is called, logically enough, upper memory.
Unlike the address range of lower memory that is, in most PCs, completely filled with physical RAM,
upper memory is an amalgam of RAM, ROM, and holes. Not all addresses in upper memory have
physical memory assigned to them. Instead, a few ranges of addresses are given over to specific
system support functions and other ranges are left undefined. The expansion boards you slide into
your PC take over some of these unused address ranges to give your microprocessor access to the
BIOS code stored in the ROM chips on the boards.
In most PCs, the top 32K of upper memory addresses are occupied by the ROM holding the BIOS
code of the system. Until recently, all IBM computers filled the next lower 32K with the program
code of their Cassette BASIC language.
The memory mapping abilities of 386 and later microprocessors allow software designers to remap
physical memory to some of the unused addresses in the upper memory range. Using memory
management software, DOS can run some utility programs in upper memory addresses. Consequently,
this memory range is sometimes called High DOS memory, but that term is misleading. Because of the
design of DOS, the program code in normal DOS applications must fit into a contiguous
block of addresses. System functions assigned to address ranges in upper memory interrupt the
contiguity of upper memory and prevent DOS applications from running in the address space there.

High Memory Area

Microprocessors with protected mode memory capabilities have an interesting quirk—they can
address more than one megabyte of memory in real mode. When a program running on an 8088 or
8086 microprocessor tries to access memory addresses higher than one megabyte, the addresses
"wrap" around and start back at zero. However, with a 286 or more recent microprocessor, including
the 486 and Pentiums, if the twenty-first address line (which 8088s and their kin lack) is activated, the
first segment's worth of addresses in excess of one megabyte will reach into extended memory. This
address line (A20) can be activated during real mode using a program instruction. As a result, one
segment of additional memory is accessible by 286 and better microprocessors in real mode.
This extra memory, a total of 64K minus 16 bytes, is called the High Memory Area. Because it is not
contiguous with the address range of lower memory, it cannot be used as extra memory by ordinary
DOS applications. However, memory managers can relocate driver and small utility programs into its
address range much as they do the addresses in upper memory. Only one driver or utility can be
loaded into the High Memory Area under DOS, and that code must be smaller than the 65,520 bytes
available in the address range.
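The arithmetic behind the High Memory Area is easy to verify. Real mode forms a linear address as 16 times the segment value plus the offset, so with the A20 line enabled the highest reachable address comes from segment FFFF(Hex) with offset FFFF(Hex):

```c
#include <stdint.h>

/* Real mode address arithmetic: linear = segment * 16 + offset. */
uint32_t real_mode_linear(uint16_t segment, uint16_t offset)
{
    return ((uint32_t)segment << 4) + offset;
}
```

Here real_mode_linear(0xFFFF, 0xFFFF) works out to 10FFEF(Hex), and the span from 100000(Hex) through that address is exactly the 65,520 bytes (64K minus 16) cited above.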

Frame Buffer Memory

PC video systems are memory-mapped, which means that the color of every pixel on your monitor
gets stored in a memory location that your PC’s microprocessor can directly alter the same way it
writes data into memory. Your PC holds a complete image frame in memory. Your video system
scans the memory, address by address, to draw an image frame on your monitor screen. The memory
that holds a complete image frame is termed a frame buffer.
Because your PC’s microprocessor needs direct access to the frame buffer to change the data (and
hence pixels or dots) in the image, the memory used by the frame buffer must fit within the addressing
range of your PC’s microprocessor. In the early years of PCs, IBM reserved several address ranges in
upper memory for the frame buffers used by the different video standards it developed. The frame
buffer of the VGA system begins immediately after the 640K top boundary of lower memory. The
memory assigned to the original monochrome display system and still used in VGA text modes starts
64K higher.
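In the VGA's 320 by 200 pixel, 256-color graphics mode, for example, the frame buffer occupies linear addresses starting at A0000(Hex) with one byte per pixel, so locating the memory behind any pixel is simple arithmetic:

```c
#include <stdint.h>

#define VGA_FB_BASE 0xA0000u  /* linear address of the VGA frame buffer */
#define VGA_WIDTH   320u      /* pixels per scan line in this mode */

/* Linear address of the byte holding the pixel at (x, y). */
uint32_t vga_pixel_address(uint32_t x, uint32_t y)
{
    return VGA_FB_BASE + y * VGA_WIDTH + x;
}
```

The upper-left pixel sits at A0000(Hex) itself; writing a byte there through the microprocessor's address bus changes that dot on the screen directly.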
Video systems more recent than VGA often place frame buffers in the protected mode addressing
range. Even these still use the VGA frame buffer range for compatibility purposes.
The physical memory of the frame buffer is usually separate and distinct from the main memory of
your PC. In most PCs, the frame buffer is part of the video board installed in an expansion slot. Even
PCs that incorporate their video circuitry on the motherboard separate the frame buffer from main
memory (although this design is changing in Unified Memory Architecture systems, noted later in this
chapter). Because of this separation and because it cannot be used for running programs, the amount
of memory in the frame buffer is usually not counted in totaling up the amount of RAM installed in a
PC.

Shadow Memory

The latest 32-bit and 64-bit computers provide a means to access memory through 8-, 16-, 32-, or
64-bit data buses. It's often most convenient to use a 16-bit data path for ROM BIOS memory (so only
two expensive EPROM chips are needed instead of the four required by a 32-bit path or eight by a
64-bit path). Many expansion cards, which may have onboard BIOS extensions, connect to their
computer hosts through 8-bit data buses. As a result, these memory areas cannot be accessed nearly as
fast as the host system's 32-bit or 64-bit RAM. This problem is compounded because BIOS routines,
particularly those used by the display adapter, are among the most often used code in the computer (at
least when running DOS).


To break through this speed barrier, many designers of 80386 computers use shadow memory. They
copy the ROM routines into fast 32-bit or 64-bit RAM and use the page virtual memory mapping
abilities of the 80386 and newer microprocessors to s



Winn L. Rosch Hardware Bible, Electronic Edition, Chapter 5

Chapter 5: The BIOS


At one time, a set of built-in program routines called the Basic Input/Output System or
BIOS defined the "personality" of a PC, defining what the machine could do and how it
was done. Today’s software and operating systems make the BIOS something to be
avoided yet a thing still necessary for making your PC work—sort of like a big-city
freeway system. Although only a fraction of the size of a typical application
program—often less than 32K of code—the BIOS is the default control for many of the
most important functions of the PC—how it interprets keystrokes, how it puts characters
on the screen, how it communicates through its ports. It defines the compatibility of your
PC with expansion hardware. As with that freeway system, you cannot avoid the BIOS.
Without it, your PC couldn’t even boot up.

■ Background
■ Firmware
■ Functions
■ Booting Up
■ Power-On Self Test
■ Error Codes
■ Beep Codes
■ BIOS Extensions
■ Initial Program Load
■ Interface Functions
■ Compatibility
■ BIOS Development
■ Advanced BIOS
■ Software Interrupts
■ Parameter Passing
■ Entry Points
■ Linking to Hardware

http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/wrh05.htm (1 de 12) [23/06/2000 05:03:41 p.m.]

■ Performance Penalties
■ Data Storage
■ BIOS Data Area
■ Date
■ System Identification Bytes
■ Disk Parameter Tables
■ DOS Tables
■ BIOS Identification
■ AMI BIOS
■ Award BIOS
■ ROM BASIC
■ System Configuration
■ CMOS
■ Background
■ Basic Memory Assignments
■ Resetting CMOS
■ Advanced Setup
■ Parity Check
■ Memory Testing
■ Numeric Processor Testing
■ Cache Operation
■ Wait States
■ Bus Clock
■ ROM Shadowing
■ Concurrent Refresh
■ Page Interleave
■ Page Mode
■ Virus Protection
■ Typematic Rate
■ Num Lock
■ Boot Device or Sequence
■ Passwords
■ Programmable Option Select
■ EISA Configuration
■ Plug-and-Play
■ Background
■ Compatibility
■ Expansion Board Support
■ Boot Operation
■ Structure
■ Performance
■ Efficiency
■ Access and Shadowing
■ System Tuning
■ Upgrades
■ Replacement Chips
■ Flash BIOS
■ Suppliers

The BIOS

Think of the Basic Input/Output System of your PC as its crazy Aunt Maud, locked away from public
eyes in a cobwebby garret. Except for the occasional deranged laughter that rings through the halls,
you might never know she was there, and you don’t think about her again till the next reel of the
B-movie that makes up your life.
As with old Aunt Maud, your PC’s BIOS is something that you want to forget but is always there,
lingering in the background, popping into sight only at the least convenient times. Despite its
idiosyncrasies and age, despite the embarrassment it causes, your PC can’t live without its BIOS. It
defines what your PC is and keeps it in line, just as Aunt Maud defines what your family really is and
her antics keep you in line (or at least tangled up in obscure legal proceedings). You’d really like to be
rid of Aunt Maud, but only she knows the secret of your family’s jewels, a secret someday you hope
you’ll wrest from her.
The BIOS of your PC lingers around like that unwelcome relative, but it also holds the secrets of your
PC. You can lock it up, even build a wall around it, but it will always be there. When you switch on
your PC, it laughs at you from the monitor screen, appearing in the rags and tatters of text mode
before your system jumps off into modern high resolution color. Most of what you do on your PC
now sidesteps the BIOS, so you’d think you could do without it. You might never suspect that behind
the scenes the BIOS of your PC tests your system, assures you that everything is okay when you start
your system, helps you set up your PC so that it runs at its best, and gracefully steps out of the way
when you no longer need it. Your Aunt Maud might just be not as crazy as she seems, watching over
you quietly from her garret, hiding in the background but working in her own mysterious ways to
make sure your life goes well.

Background

Although mostly invisible and oft forgotten, your PC’s BIOS is nevertheless one of its most important
and enabling parts. The BIOS is, in fact, the one essential constituent that distinguishes one PC from
another even when they both share the same microprocessor, motherboard, and support hardware.

Firmware

Strictly speaking, however, the BIOS isn’t hardware at all even though it is an essential part of your
PC’s hardware. The BIOS is special program code—in a word, software—that’s permanently (or
nearly so) encapsulated in ROM chips or, as is most often the case with newer PCs, flash memory.
Because of the two-sided aspects of the BIOS, existing in the netherworld between hardware and
software, it and other pieces of program code encapsulated in ROM or flash memory are often termed
firmware.
The importance of the BIOS arises from its function. The BIOS tests your PC every time
you turn it on. It may even allocate your system's resources for you automatically, making all the
adjustments necessary to accommodate new hardware. The BIOS also determines the compatibility of
your PC with both hardware and software and can even determine how flexible your PC is in setup
and use.
Today, many PCs and all modern operating systems enhance the basic BIOS firmware with additional
instructions loaded from disk like ordinary software. This software-based code performs the same
functions as the traditional BIOS firmware, linking your PC's hardware to the software programs that
you run. But every PC still requires at least a vestigial piece of BIOS firmware, if just to enable
enough of your system to run so that it can load the BIOS-like software. Although the BIOS plays a
less active role in every operation of your PC—in fact, the BIOS often is entirely bypassed by today’s
operating systems—it remains an essential part of every new PC and future machines still waiting on
the engineer’s drawing board.

Functions

The BIOS code of most PCs has a number of separate and distinct functions. The BIOS of a typical
PC contains routines that test the computer, blocks of data that give the machine its personality,
special program routines that allow software to take control of the PC's hardware so that it can more
smoothly mesh with the electronics of the system, and even a complete system (in some PCs) for
determining which expansion boards and peripherals you have installed and ensuring that they do not
conflict in their requests for input/output ports and memory assignments. In a few PCs—mostly IBM's
older machines—the BIOS also includes a rudimentary programming language that allows you to use
the machine without any other software (or even a disk drive to load it). Although all of these
functions get stored in the same memory chips, the program code of each function is essentially
independent of the rest. Each function is a separate module, and the name BIOS refers to the entire
group of modules.
The classic definition of the PC BIOS is the firmware that gives the PC its personality. This definition
refers only to one functional module of the PC, the one that’s invariably replaced by operating system
code. The nebulous term "personality" described how the computer performed its basic functions,
those necessary to make it a real computer. Although this definition includes a number of different
factors, including how quickly and smoothly various operations were completed, the term
"personality" mostly distinguished PCs from Apple Macintoshes. The PC BIOS was, in some eyes,
rudimentary. The functions it supplied appeared basic compared to the elaborate and exotic firmware
of the Macintosh, which includes high-order functions such as how the machine paints graphics. Rather
than personality, the difference is philosophy. The PC design envisions the augmentation and
replacement of BIOS functions with operating system code. The Macintosh design essentially divides
the operating system between software and firmware, the firmware being part of the system’s BIOS.
Beyond personality and philosophy, the BIOS firmware of a PC governs how system board
components interact, the chipset features that are used, and even how much of the microprocessor's
time is devoted to keeping memory working. The setup procedures in most new PCs also are held in
the BIOS.
In most PCs, the first thing the BIOS tells the microprocessor to do is to run through all the known
components of the system—the microprocessor, memory, keyboard, and so on—and to test to
determine whether they are operating properly. After the system is sure of its own integrity, it checks
to see whether you have installed any expansion boards that hold additional BIOS code. If you have,
the microprocessor checks the code and carries out any instructions that it finds. A modern PC may
even check to see if any new expansion boards are plugged in without being set up properly. The
BIOS code might then configure the expansion board so that it functions properly in your PC.
When the microprocessor runs out of add-in peripherals, it begins the actual boot-up process, which
engineers call the Initial Program Load or IPL. The BIOS code tells the microprocessor to jump to a
section of code that tells the chip how to read the first sector of your floppy or hard disk. Program
code then takes over from the BIOS and tells the microprocessor how to load the operating system
from the disk to start the computer running.
After the operating system has taken control of the microprocessor, the BIOS does not rest. Its
firmware also includes several sets of routines that programs can call to carry out everyday
functions—typing characters on the screen or to a printer, reading keystrokes, timing events. Because
of this basic library, programmers can create their grand designs without worrying about the tiny
details. If the operating system wants to take over these functions, the PC BIOS steps out of the way,
yielding control without complaint.
The general trend has been for the operating system to take over BIOS functions. Software drivers
have taken over nearly all interface functions of the BIOS. Because software drivers load into RAM,
they are not limited in the amount of space available for their code. Software drivers also extend the
capabilities while the BIOS limits them. Using only the BIOS, your PC cannot do anything that the

http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/wrh05.htm (5 de 12) [23/06/2000 05:03:41 p.m.]


Winn L. Rosch Hardware Bible, Electronic Edition, Chapter 5

BIOS does not know about. Consider, for instance, floppy disk drives. When operated in their
standard modes, the BIOS routines function well and allow you to read, write, and format floppy disks
using the industry standard disk formats. At the same time, however, they impose limits on what the
drives can do. Controlled through the BIOS, drives act only like products that have been blessed by
your PC’s manufacturer. But floppy disk drives are more versatile than the BIOS suggests. They can
read and write the disk formats used by other computer systems and others that can be used for
copy-protecting diskettes. Taking advantage of a disk drive's abilities beyond the powers officially
sanctioned by the BIOS has allowed software companies to squeeze more data onto individual
floppies in nonstandard formats, giving you fewer floppies to worry about when your favorite software
doesn’t come on CD-ROM.

Booting Up

The BIOS starts to work as soon as you switch your system on. When all modern Intel
microprocessors start to work, they immediately set themselves up in real mode and look at a special
memory location that is exactly 16 bytes short of the top of the one-megabyte real mode addressing
range, absolute address 0FFFF0(Hex). This location holds a special program instruction, a jump that
points to another address where the BIOS code actually begins.
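The arithmetic behind that address is easy to verify. As a quick sketch (Python here purely for illustration; the BIOS itself is machine code in ROM):

```python
# Real mode addresses span one megabyte: 0x00000 through 0xFFFFF.
ONE_MEGABYTE = 0x100000

# The first instruction is fetched exactly 16 bytes short of the top.
reset_vector = ONE_MEGABYTE - 16

print(hex(reset_vector))  # 0xffff0
```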
Cold boot describes the process of starting your PC and loading its operating system by
turning the power on. If your PC is running, you cold boot by first switching it off then
back on.
Warm boot describes the process of restarting your PC and loading its operating system
anew after it has already been running and has booted up at least once before. You start a
warm boot by giving the infamous "three finger salute," pressing the Ctrl, Alt, and Del
keys at the same time.
Hot boot describes what you get when you slide a piece of footwear into your oven in a
mistaken attempt at the preparation of filet of sole. The term is not used to describe the
PC boot-up process.
At the operating system level, a cold boot and a warm boot are essentially the same. Your PC starts
from the beginning and loads the operating system from scratch. A warm boot or switching your PC
off for a cold boot signals the microprocessor to reset itself to its turn-on condition, erasing the
contents of its registers. The microprocessor then loads or reloads the operating system.
The important difference between a cold and warm boot is not what happens to your operating system
but the effect on your PC’s internal circuits. A cold boot automatically restores all the circuits in your
PC to their original, default condition whether they are on the motherboard or expansion boards
because it cuts off their electrical supply. It also wipes away everything in its memory for a fresh start.
A warm boot does not affect the supply of electricity to your PC’s circuitry, so memory and the
boards installed in your PC are not wiped clean, although some of the contents of your PC’s memory
gets overwritten as your operating system reloads.
Because a warm boot does not automatically restore all the expansion boards in your PC to their
initial conditions, it sometimes does not solve software problems. For example, your modem may not
release the telephone line (hang up) at the end of your Internet session. A warm boot may leave the
modem connected, but a cold boot will assure the modem disconnects and releases your telephone
line.
Unless the boards in your PC follow the Plug-and-Play standard that specifies a standard reset
condition, your PC has no way of telling the state of each board in your PC after a warm boot. It
makes no attempt to find out and just takes what it gets. Ordinarily, such blind acceptance is not a
problem. If, however, some odd state of an expansion board caused your PC to crash, a warm boot
will not solve the problem. For example, sometimes video boards will come up in odd states with
strange screen displays after a crash that’s followed by a warm boot. Cold booting your PC again
usually eliminates such problems.
Your PC also behaves differently during the cold and warm booting process. During a cold boot, your
PC runs through its POST procedure to test all its circuitry. During a warm boot, your PC sidesteps
POST under the assumption that your PC has already booted up once so its circuitry must be working
properly.
Your PC must somehow distinguish between a cold and warm boot to decide whether to run its POST
diagnostics. To sort things out, your PC uses its normal memory, which it does not wipe out during a
warm boot. Each time your PC boots up, it plants a special two-byte signature in memory. When your
system boots, it looks for the signature. If it finds the signature, it knows it has been booted at least
once before since you turned on the power, so it does not need to run through POST. When it fails to
find the signature, it runs its diagnostics as part of the cold boot process. Note that if something in
your system changes the signature bytes—as crashing programs sometimes do—your PC will run
through a cold boot even though you haven’t turned it off.
The signature bytes have the value 1234(Hex). Because they are stored in Intel little endian format
(that is, the least significant byte comes first), they appear in memory as the sequence 34 12.
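The whole warm-or-cold decision can be sketched in a few lines. Python stands in for BIOS machine code here, and the one-element dictionary is only a stand-in for the real signature location in RAM:

```python
import struct

SIGNATURE = 0x1234  # the two-byte warm-boot signature

def boot(memory):
    """Decide between a warm and cold boot by checking the signature."""
    if memory.get('signature') == struct.pack('<H', SIGNATURE):
        return 'warm boot: skip POST'
    memory['signature'] = struct.pack('<H', SIGNATURE)  # plant it
    return 'cold boot: run POST'

memory = {}                       # power-on wipes memory clean
print(boot(memory))               # cold boot: run POST
print(boot(memory))               # warm boot: skip POST
print(memory['signature'].hex())  # 3412 -- little endian: 34 before 12
```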
Programs can also initiate a warm or cold boot simply by jumping to the appropriate section of BIOS
code. However, because some expansion boards don’t automatically reset when your PC runs through
the cold boot BIOS code, anomalies may persist after such a program-initiated cold boot. For
example, your video board may latch itself into an odd state, and a complete reset may not unlock it.
For this reason some programs instruct you to turn your PC off and back on during the installation
process to guarantee all the hardware in your system properly resets.

Power-On Self Test

Every time your PC switches on, the BIOS immediately takes command. Its first duty is to run
through a series of diagnostic routines—system checks—called the Power-On Self Test routine, or
POST, that ensures every part of your PC's hardware is functioning properly before you trust your
time and data to it. One by one, the POST routine checks the circuits of your system board and
memory, the keyboard, your disks, and each expansion board. After the BIOS makes sure that the
system is operating properly, it initializes the electronics so that they are ready for the first program to
load.

Error Codes

The BIOS tests are relatively simple. The BIOS sends data to a port or register, then reads back the
results. If it receives the expected results, the BIOS assumes all is well. If it finds a problem, however, it
reports the failure as well as it can. If the display system is working, it posts an error code number on
your monitor screen. (The limited amount of memory available prevents the BIOS from storing an
elaborate—that is, understandable—message for all the hundreds of possible error conditions.) If your
PC is so ill that the display system will not even work, the BIOS sends out a coded series of beeps
through your system's loudspeaker.
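The write-then-verify technique behind those tests can be sketched in a few lines. The test patterns and the simulated one-byte latch below are illustrative stand-ins; a real POST reads and writes actual hardware ports:

```python
def register_test(write, read, patterns=(0x00, 0xFF, 0x55, 0xAA)):
    """Write each pattern to a register and verify it reads back intact."""
    for value in patterns:
        write(value)
        if read() != value:
            return False  # a stuck or dead bit fails the round trip
    return True

# Simulate a healthy register with a simple one-byte latch.
latch = {'value': 0}
healthy = register_test(lambda v: latch.update(value=v),
                        lambda: latch['value'])
print(healthy)  # True

# A register with its top bit stuck low fails as soon as 0xFF is tried.
stuck = register_test(lambda v: latch.update(value=v & 0x7F),
                      lambda: latch['value'])
print(stuck)  # False
```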
Although the exact codes used by a BIOS vary with the supplier of the BIOS, certain codes have
earned a more general definition. The principal point of departure is the set of codes chosen by IBM
for the diagnostic reports from its line of personal computers. Table 5.1 lists the more important of
these error codes that IBM has assigned to specific problem areas. Many IBM computers, particularly
the earlier models using IBM’s own BIOS, use these same numbers to report errors. Some other
manufacturers follow the same convention. Many do not.

Table 5.1. IBM On-Screen Error Codes

0100-SERIES: SYSTEM BOARD ERRORS


0101 Interrupt failure; general system board failure
0102 ROM checksum error (PC, XT), Timer (AT)
0103 ROM checksum error (PC, XT), Timer interrupt (AT)
0104 Interrupt controller (PC, XT), Protected mode (AT)
0105 Timer (PC, XT)
0106 System board circuitry
0107 System board circuitry or an adapter card
0108 System board circuitry
0109 DMA test failure
0121 Unexpected hardware interrupt
0131 Cassette wrap test failed
0151 Real-time clock (or CMOS RAM)
0152 System board circuitry
0161 CMOS power failure
0162 CMOS checksum error
0163 Clock date error
0164 Memory size (POST finds value different from CMOS)
0165 Adapter added/removed (PS/2)
0199 Device list not correct
0200-SERIES: MEMORY

0201 Memory test failed


0300-SERIES: KEYBOARD
0301 Stuck key/improper response
0302 Keyboard test error
0302 Keyboard locked
0303 Keyboard interface error (on system board)
0304 Non-specific keyboard error
0365 Keyboard failure
0366 Keyboard cable failure
0400-SERIES: MONOCHROME DISPLAY
0401 Memory or sync test failure
0432 Parallel port test failure
0500-SERIES: COLOR GRAPHICS ADAPTER
0501 Memory or sync test failure
0556 Light pen failure
0564 Screen paging error
0600-SERIES: FLOPPY DISK SYSTEM
0601 Drive or adapter test failure
0602 Drive failure
0603 Wrong drive capacity
0606 Disk verify function error
0607 Write-protected diskette
0608 Bad command
0610 Disk initialization error
0611 Timeout error
0612 Bad controller chip
0613 DMA failure
0614 DMA boundary error
0621 Seek error
0622 CRC error
0623 Record not found
0624 Bad address mark
0625 Controller seek failure
0626 Data compare error
0627 Change line error

0628 Disk removed


0700-SERIES: FLOATING-POINT UNIT
0900-SERIES: LPT1
1000-SERIES: LPT2
1100-SERIES: COM1
1200-SERIES: COM2
1300-SERIES: GAME CONTROL ADAPTER
1301 Adapter failure
1302 Joystick failure
1400-SERIES: PRINTER
1500-SERIES: SDLC COMMUNICATIONS ADAPTER
1600-SERIES: DISPLAY STATION EMULATION ADAPTER (DSEA)
1700-SERIES: HARD DISK SYSTEM
1701 Drive not ready; Disk or adapter test failure
1702 Time out; Disk or adapter error
1703 Drive error
1704 Adapter or drive error
1705 Record not found
1706 Write fault
1707 Track 0 error
1708 Head select error
1709 Bad error correction code
1710 Read buffer overrun
1711 Bad address mark
1712 Nonspecific error
1713 Data compare error
1714 Drive not ready
1730 Adapter error
1731 Adapter error
1732 Adapter error
1780 Drive C: boot failure
1781 Drive D: failure
1782 Controller boot failure
1790 Drive C: error
1791 Drive D: error

1800-SERIES: PC or XT EXPANSION CHASSIS


2000-SERIES: FIRST BISYNCHRONOUS (BSC) ADAPTER
2100 SERIES: SECOND BISYNCHRONOUS (BSC) ADAPTER
2200-SERIES: CLUSTER ADAPTER
2400-SERIES: ENHANCED GRAPHICS ADAPTER
2401 Adapter test failure
2456 Light pen failure
2500-SERIES: SECOND ENHANCED GRAPHICS ADAPTER
2600-SERIES: PC/370-M ADAPTER
2700-SERIES: PC/3277 EMULATION ADAPTER
2800-SERIES: 3278/79 EMULATOR ADAPTER
2900-SERIES: PRINTER
3000-SERIES: NETWORK ADAPTER
3001 Adapter ROM test failure
3002 Adapter RAM test failure
3006 Interrupt conflict
3100-SERIES: SECOND NETWORK ADAPTER
3300-SERIES: COMPACT PRINTER
3600-SERIES: IEEE-488 (GPIB) ADAPTER
3800-SERIES: DATA ACQUISITION ADAPTER
3900-SERIES: PROFESSIONAL GRAPHICS CONTROLLER ADAPTER
4400-SERIES: 5278 DISPLAY ATTACHMENT UNIT AND 5279 DISPLAY
4500-SERIES: IEEE-488 (GPIB) ADAPTER
4600-SERIES: ARTIC INTERFACE ADAPTER
4800-SERIES: INTERNAL MODEM
4900-SERIES: SECOND INTERNAL MODEM
5600-SERIES: FINANCIAL COMMUNICATION SYSTEM
7000-SERIES: PHOENIX BIOS CHIPSET
7000 CMOS failure
7001 Shadow memory
7002 CMOS configuration error
7100-SERIES: VOICE COMMUNICATION ADAPTER
7300-SERIES: 3.5-INCH FLOPPY DISK DRIVE
7301 Drive or adapter test failure
7307 Write-protected diskette

7308 Bad command


7310 Track zero error
7311 Time-out
7312 Bad controller or DMA
7315 Bad index
7316 Speed error
7321 Bad seek
7322 Bad CRC
7323 Record not found
7324 Bad address mark
7325 Controller seek error
7400-SERIES: 8514/A DISPLAY ADAPTER
7401 Test failure
7426 Monitor failure
7600-SERIES: PAGE PRINTER
8400-SERIES: SPEECH ADAPTER
8500-SERIES: 2MB MEMORY ADAPTER
8600-SERIES: POINTING DEVICE
8900-SERIES: MIDI ADAPTER
10000-SERIES: MULTIPROTOCOL COMMUNICATIONS ADAPTER
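Because the leading digits of each code name the failing subsystem, a diagnostic utility can decode a report mechanically. This sketch (Python, purely illustrative) uses a handful of the assignments from Table 5.1:

```python
def error_series(code):
    """Strip the last two digits: 621 -> 600, 1780 -> 1700."""
    return (code // 100) * 100

# A small subset of the series assignments in Table 5.1.
SERIES = {
    100: 'system board',
    200: 'memory',
    300: 'keyboard',
    600: 'floppy disk system',
    1700: 'hard disk system',
}

print(SERIES[error_series(621)])   # floppy disk system
print(SERIES[error_series(1780)])  # hard disk system
```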

Winn L. Rosch Hardware Bible, Electronic Edition, Chapter 6

Chapter 6: Chipsets and Support Circuits


Chipsets and other support circuits are the glue that holds a PC together, providing the
signals that the microprocessor needs to operate as well as those that link the PC and its
peripherals. Over the years of development, the form and nature of those support circuits
have changed, but their function has remained consistent, part of the definition of a PC.
Today’s chipsets essentially define the PC and distinguish one system with a given
microprocessor from another.

■ Chipsets
■ Background
■ Functions
■ System Controller
■ Peripheral Controller
■ Memory Controller
■ Practical Products
■ Early Makers
■ Intel Chipsets
■ System Control
■ Timing Circuits
■ Clocks and Oscillators
■ Timers
■ Real Time Clock
■ Interrupt Control
■ Assignments and Priority
■ Interrupt Sharing
■ Serialized Interrupts
■ Direct Memory Access
■ Assignments

http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/wrh06.htm (1 de 20) [23/06/2000 05:11:02 p.m.]


■ Speed
■ Distributed DMA Support
■ Power Management
■ Peripheral Control
■ Bus Interface
■ Hard Disk Interface
■ Floppy Disk Controller
■ Keyboard Controller
■ Input/Output Ports
■ Memory Control
■ Addressing
■ Refreshing
■ Error Handling
■ Caching

Chipsets and Support Circuits

Just as you can't build a house without nails, you can't put together a computer without support chips.
A wealth of circuits are needed to hold together all the functions of a PC, to coordinate its operation,
and control the signals inside it. After all, you need more than a microprocessor to make a
computer—otherwise a microprocessor would be a computer. While some systems come close to
being little more than microprocessors and some microprocessors come close to being complete
computers, today most personal computers require a number of support functions to make their
microprocessors useful—and make the microprocessors work. PC chipsets provide a microprocessor
with the signals it needs to operate as well as create all the other functions your system requires to
operate. Moreover, a manufacturer’s choice of chipsets determines the overall functionality of the PC.
How well the manufacturer puts the chips to work can determine the overall performance of the
system.
Those with a penchant for details can take glee in pointing out that it is possible to build a house
without nails, what with space-age adhesives, drywall screws, and even peg-and-tenon construction.
They're right. The art of PC construction has advanced so that where dozens of support chips once
were needed, two or three suffice for most systems. In fact, all the essential support can now be
packaged in a single chip, sometimes inside the same package as the system microprocessor—no
nails, no glue, just one prefabricated assembly.


Despite the physical lack of support chips in some of today's systems, the functions performed by
these circuits remain as vital as they were for the first PCs. Every computer needs the same
essential elements to work: a clock or oscillator to generate the signals that lock the circuits together;
a memory controller to ensure each byte goes to the proper place and stays there; bus-control logic to
command the flow of data in and out of the chip and the rest of the system; direct memory access
control to assist in moving data; and interrupt control to meet the needs of interactive computing.
Only the form of these circuit elements has changed to protect the profits—they are more compact in
combination, and more affordable, too.

Chipsets

In early PCs, support circuits were constructed from a variety of discrete circuits—small, general
purpose integrated circuits such as logic gates—and a few functional blocks that each had a specific
function, although not one limited to a specific model or design of computer. These garden-variety
circuits were combined to build all the necessary computer functions into the first PC.
As PCs became increasingly popular, enterprising semiconductor firms combined many of the related
computer functions into a single package. Eliminating the discrete packages and all their
interconnections helped make PCs more reliable. Moreover, because a multitude of circuits could be
grown together at the same time, this integrated approach made the PC support circuitry less
expensive. At first, only related functions were grouped together. As semiconductor firms became
more experienced and fabrication technology permitted smaller design rules and denser packaging,
however, all the diverse support functions inside a PC were integrated into a few VLSI components
individually termed Application-Specific Integrated Circuits or ASICs, collectively called a chipset.

Background

The chipset changed the face of the PC industry. With discrete support circuitry, designing a PC
motherboard was a true engineering challenge because it required a deep understanding of the
electronic function of all the elements of a PC. Using a chipset, a PC engineer need only be concerned
with the signals going in and out of a few components. The chipset might be a magical black box, for
all the designer cares. In fact, in many cases the only skill required to design a PC from a chipset is
the ability to navigate from a roadmap. Most chipset manufacturers provide circuit designs for
motherboards to aid in the evaluation of their products. Many motherboard manufacturers (all too
many, perhaps) simply take the chipset maker's evaluation design and turn it into a commercial
product.
Taken to the extreme, a chipset becomes a single-chip PC. In fact, not only is a single integrated
circuit sufficient to hold all the support circuitry of an entire PC, chip makers have tried integrating
support circuitry with a microprocessor. The newest example is the Cyrix MediaGX microprocessor,
which, together with one added chip, contains all the circuitry of a Pentium-class PC with multimedia
extensions, including high resolution video and sound synthesis. In previous chip generations, the best
example was perhaps Intel's 486SL, a full-fledged 486DX microprocessor designed for low power and
total system control, a single chip with a whole computer inside.
Although a single-chip design has its benefits, particularly in space-critical applications such as
notebook and hand held PCs, it is not always the best approach. A multi-chip design allows hardware
engineers more freedom to customize and optimize their products—which means a greater ability to
tune in more speed. The semiconductor costs of one-chip and three-chip implementations are not
significantly different, and the extra chips impose little penalty on desktop-size motherboards.
Because chipsets control specific hardware features, they are typically designed to match a given type
of microprocessor and, often, expansion bus. Today’s most popular chipsets link to the Pentium
microprocessor and PCI expansion bus.

Functions

No matter how simple or elaborate the chipset in a modern PC, it has three chief functions. It must act
as a system controller that holds together the entire PC, giving all the support the microprocessor
needs to be a true computer system. It must extend the reach of the microprocessor as a peripheral
controller and operate input/output ports, expansion buses, and disk interfaces. And, as a memory
controller, it links the microprocessor to the memory system, establishes the main memory and cache
architectures, and assures the reliability of the data stashed away in RAM chips.
Every PC needs some circuitry to take care of each of these basic operations. The first generation of
PCs used individual logic chips to build the necessary circuits. Modern PCs put all of these
functions—and sometimes more—into one or more chips.

System Controller

The basic function of a chipset is to turn a chip into a PC, to add what a microprocessor needs to be an
entire system. The basic functions required by a modern PC are several. These include:
● Timers and oscillators, which create the timebases required for the microprocessor, memory,
and the rest of the computer to operate.

● Interrupt controller, which manages the hardware interrupts that give priority to important
functions.

● DMA controller, which governs data transfers to and from memory independently of the
microprocessor to free the chip up for more important duties (like thinking).

● Power manager, which watches over the overall electrical use of the computer to save power
and, in notebook machines, conserve battery reserves.

Typically one of the chips in a chipset is a system controller that handles all of these functions. Nearly
all the functions of the system controller in a PC chipset are well defined and, for the most part,
standardized. In nearly all PCs, the most basic of these functions use the same means of access and
control—the ports and memory locations match those used by the very first PCs. In fact, access to
these functions determines the fundamental PC compatibility of a computer.
That said, designers didn’t think of some of the system control functions required by a modern PC
when they crafted the first computers. The most important of these is power management, which was
never an issue until PCs started running from—and running through—batteries. Consequently, the
power management functions of chipsets vary in hardware design, although most now share common
software control systems (discussed in Chapter 24, "Power"). Moreover, some chipsets may omit
some of these less standard functions.
Designers match the system controller in most chipsets to a particular microprocessor. Some,
however, are more versatile and will work with microprocessors of a given class (for example, all
586-level chips such as the Intel Pentium and its competitors from AMD, Cyrix, and NexGen) while
others may even have broader application. Because the functions of the system controller are so basic
and well-defined, you can expect any chipset to handle them well.

Peripheral Controller

The system controller of the chipset makes your PC operate. The peripheral controller lets it connect.
The peripheral controller creates the interfaces needed for other devices to link to your
microprocessor. The primary functions of the peripheral controller include:
● Bus interface, which links the microprocessor to one or more expansion buses.

● Floppy disk interface, which puts one or two floppy disk drives under the control of the
system.

● Hard disk interface, which links one or two hard disks to the system.

● Keyboard controller, which translates codes from the keyboard into a form readily understood
by your PC and its programs.

● I/O port controller, which gives access to the input and output ports of your PC so you can
make serial and parallel connections with your peripherals.

As with the system control functions, most of the peripheral interface functions are well-defined
standards that have been in use for years. The time-proved functions usually cause few problems in
the design, selection, or operation of a chipset. Some interfaces have a more recent ancestry, and if
anything will cause a problem with a chipset, they will. For example, the PCI bus and EIDE disk
interface are newcomers when compared to the decade-old keyboard interface or even-older ISA bus.
When a chipset shows teething pains, they usually arise in the circuits relating to these newer
functions.
Some chipset designs break out one or more of the peripheral control functions and put them in
dedicated chips. This expedient allows independent development and permits the chip maker to revise
the design of one section of the chipset without affecting the circuits of the other functions or their
manufacturer. For example, Intel put the PCI bridge controller of its Pentium Pro Orion chipset in a
separate package.

Memory Controller

You can’t just toss the memory every PC needs into a computer’s case like broadcasting grass seed and
hope to have everything (or anything!) work. The memory must be logically and electronically linked
to the rest of the system. Moreover, memory chips and modules have their own maintenance needs
that your PC must manage to keep its electronic storage intact. The simple memory systems of early
PCs needed little more than a simple decoder chip to link memory to the microprocessor. Modern
memory architectures are substantially more complex, incorporating layers of caching, error
correction, and automatic configuration processes. Today’s chipsets handle all the interconnection and
support that the most complicated memory system requires.
In addition to handling main memory, the memory controller in the typical chipset also acts as a cache
controller for your PC’s secondary memory cache. The Pentium Pro shifts this function from the
chipset into the microprocessor itself.
The design of the memory controller in a chipset has dramatic ramifications on the configuration of
your PC. It determines how much RAM you can plug into your PC, what kind of RAM your PC can
use (for example, parity, non-parity, or error-corrected), the operating speed and rating of the memory
modules you install, and the size and technology of the secondary cache.
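Parity, the simplest of those error-checking schemes, stores one extra bit per byte so the memory controller can flag any single-bit error. The idea in a nutshell (Python, purely illustrative):

```python
def parity_bit(byte):
    """Even parity: the stored bit makes the total count of 1s even."""
    return bin(byte).count('1') % 2

data = 0b10110100            # four 1 bits, so the parity bit is 0
stored = parity_bit(data)
print(stored)                # 0

# Flip a single bit, as a failing RAM chip might, and the recomputed
# parity no longer matches -- the controller reports a parity error.
corrupted = data ^ 0b00000100
print(parity_bit(corrupted) == stored)  # False
```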

Practical Products

Chipsets did not so much burst on the scene as dribble. Because of the uncertain demand, the
designers of the first PCs used off-the-shelf electronic components to create all the circuitry in most
machines. Enterprising chip designers sometimes combined a few functions in a single package, but
the advent of chipsets had to wait until the volume of units that could be sold justified the
development cost.

Early Makers

Any line drawn between the transitionary combination chips and true chipsets is necessarily vague
and ambiguous. IBM earned some claim to the development with the introduction of the PS/2 line in
1987 in which most of the system functions were combined into ASICs—Application-Specific
Integrated Circuits. Even these lacked the high degree of integration expected today.
Chips and Technology ranks as the first independent chipset producer, having created its first products
in 1988. Once C&T proved itself another wildly successful PC startup company, the market became
intensely competitive—so much so that early players (like C&T) were suffering by the early 1990’s.
Other makers of chipsets include Acer Labs, Contaq Microsystems (acquired by Cypress
Semiconductor in 1994), OPTi, Samsung, Silicon Integrated Systems (SiS), Symphony Labs, and VLSI
Technology, Inc.
The increasing sophistication in the designs of PCs, microprocessors, expansion buses, and memory
systems made the design effort in creating chipsets more intensive and expensive. Intel, with a
double-barrel competitive edge in having designed both the leading microprocessors and the bus of
choice (PCI), has grabbed chipset dominance. When Intel introduces new microprocessors such as the
Pentium Pro, it is often the only chipset supplier.

Intel Chipsets

In reading the specifications of many PCs, you’re apt to encounter mention of motherboard chipsets
using grand and mysterious-sounding names such as those of gods or planets (or both). Most of these
references are to the code names of various Intel chipsets.
Intel’s chipsets are not necessarily better than others on the market, but they are ubiquitous. The early
models of PCs that use new Intel microprocessors often use Intel chipsets for the simple reason that
Intel’s chipsets are first on the market—the company’s engineers have a bit of insight into the new
chips.
For the PCI bus, Intel offers eight distinct chipset lines supporting its microprocessors from the 486
on. Although inventoried by Intel by part number in the form of 82nnnaX (where n indicates a
numeral and a an alphabetic character), these chipsets are best known to the public by their code
names. Even Intel usually clips the initial "82" on the part designation when referring to them. Listed
by code name, the Intel PCI chipsets include:
● Aries is the code name given to the Intel 486 chipset 82426EX. Designed for transitionary
systems that incorporate support for both the PCI and VL buses, Aries has a direct
microprocessor-to-PCI bridge that gives it superior performance over older chipsets that link to
PCI through the VL bus. That said, it does not support PCI-to-PCI bridges.

● Saturn is another Intel 486 chipset designed primarily for PCI-only systems. The latest version
(the third major revision but commonly known as Saturn II) handles power management,
speed-tripled microprocessors (the DX4 line), and Pentium OverDrive upgrades.

● Mercury is the code name for Intel’s 82430LX, a PCI chipset designed for the bottom-end
Pentium chips that operate at 60 or 66 MHz. As with the chips, it is designed solely for 5-volt
operation. And again, as with the chips, it is essentially obsolete.

● Neptune is the code name for Intel’s 82430NX, a PCI-based chipset designed primarily for the
75, 90, and 100 MHz versions of the Pentium. Besides being matched to 3.3-volt operation to
match the higher speed Pentiums, the Neptune chipset allows the use of larger amounts of
memory, provides power management, and enables dual-processor PCs.

● Triton is Intel’s principal line of Pentium chipsets for Pentium-based PCs. The official Intel

http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/wrh06.htm (7 de 20) [23/06/2000 05:11:03 p.m.]


Winn L. Rosch Hardware Bible, Electronic Edition, Chapter 6

model designation is 430 followed by two letters indicating the particular application. The
primary model is the 430FX, which mates with Pentium processors running at clock speeds up
to 200 MHz. The 430FX can stream data to the PCI bus at speeds of 100 MB/sec and supports
all of the most in-demand buzz acronyms, including the ISA bus, EDO and pipelined SDRAM
memory, and EIDE disk drives. The 430HX is aimed at general purpose business machines and
incorporates Concurrent PCI for better multitasking performance, USB support for future
connectivity, and reliability enhancements, including support for parity and error-corrected
memory. The 430VX is Intel’s home PC equivalent, supporting both Concurrent PCI and USB
but aimed at non-parity memory systems using fast page mode, EDO, or SDRAM
technologies.

● Ariel is the code name for another chip in the Triton family, the 430MX, once known as the
Mobile Triton. The name hints at its application—designed for use in notebook computers. It
incorporates most of the same features as the Triton chipset together with advanced power
management to help conserve battery power.

● Orion is the code name for two Pentium Pro chipset variations. The 450KX version is an
eight-chip set that includes the features most in demand for individual PC workstations or low
end file servers. The 450GX is a nine-chip set that aims at the higher end server market. Each of
these chipsets includes four components: a PCI Bridge, a Data Path (DP), a Data Controller
(DC), and a Memory Interface Component (MIC). The primary difference between the two is that
the high end GX links as many as four Pentium Pro microprocessors, addresses up to 4GB of
physical memory, and has some additional signals and is more severely tested with additional
load on the processor bus. The KX only supports two Pentium Pro chips and 512MB of
physical memory.

● Mars was to be Intel’s high volume Pentium Pro chipset for desktop applications, but some
sources say that it didn’t work out well, probably because technology and the marketplace
outpaced it. For example, it was designed to accommodate 256MB of memory, but the chipset
that replaced it, Natoma (below) handles four times more. In a world of plunging memory
prices, that greater capability alone is reason for looking to a newer design.

● Natoma is the release version of Intel’s Pentium Pro chipset designed primarily for low cost
desktop PCs. At its introduction, the chipset cost less than half what Orion did. Intel’s preferred
product designation is the 440FX PCIset. The actual chipset comprises three parts: the
82441FX PCI Bridge and Memory controller (PMC), the 82442FX Data Bus Accelerator
(DBX), and the 82371SB PCI ISA/IDE Xccelerator (PIIX3) bridge. Together, these three chips
give a Pentium Pro motherboard (such as Intel’s AP440FX) support for up to one gigabyte of
memory using all major memory technologies, including fast page mode and EDO; allow the use
of non-parity, parity, or ECC memory; and provide bus control for both ISA and PCI, two USB
ports, and a bus-mastering IDE disk interface with provisions for connecting as many as four devices.

For the most part, exactly which functions are built into the chipset depends on the magnanimity of
the chipset maker. Although adding more functions makes the chipset more complex and costly, a
more feature-packed chipset can also give its maker a marketing advantage. Intel’s chipsets set a
standard that other chipset makers attempt to outdo.


System Control

The foundation level of support supplied by chipsets matches that of the dedicated circuits of the first
PCs. All trace their roots back to the original PC design crafted by IBM in 1981 and carried through to
today’s PCs for the sake of software compatibility. Although seemingly anachronistic in an age when
those who can remember the first PC also harbor memories of Desotos and Dinosaurs, even the most
current chipsets must precisely mimic the actions of early PCs so that the oldest software will still
operate properly in new PCs—providing, of course, all the other required support is also present.
After all, some Neanderthal will set switchboards glowing from I-95 to the Silicon Valley with threats
of lawsuits and aspersions about the parenthood of chipset designers when the DOS utilities he
downloaded in 1982 won’t run on his new Pentium Pro.
Crack open one of the latest chipsets and you’ll find three functions so essential that they would have
to be incorporated into modern PCs even if perfect backward compatibility were not an issue: timing
circuits, interrupt control, and direct memory access control. Here’s a closer look at each one:

Timing Circuits

Although anarchy has much to recommend it should you believe in individual freedom or sell
firearms, anarchy is anathema to computer circuits. Today's data processing designs depend on
organization and controlled cooperation—timing is critical. The meaning of each pulse passing
through a PC depends on its timing relationships. Signals must be passed between circuits at just the
right moment for the entire system to work properly.
This timing is critical in PCs because their circuits are designed using a technology known as clocked
logic. All the logic elements in the computer operate synchronously. They carry out their designated
operations one step at a time and each circuit makes one step at the same time as all the rest of the
circuits in the computer. This synchronous operation helps the machine keep track of every bit that it
processes, assuring that nothing slips between the cracks.

Clocks and Oscillators

The system clock is the conductor who beats the time that all the circuits follow, sending out special
timing pulses at precisely controlled intervals. The clock, however, must get its cues from somewhere,
either its own internal sense of timing or some kind of metronome.
An electronic circuit that accurately and continuously beats time is termed an oscillator. Most
oscillators work on a simple feedback principle. Like the microphone that picks up its own sounds
from public address speakers too near or turned up too high, the oscillator, too, listens to what it says.
As with the acoustic-feedback squeal that the public address system complains with, the oscillator,
too, generates its own howl. Because the feedback circuit is much shorter, however, the signals need
not travel as far and their frequency is higher, perhaps by several thousandfold.


The oscillator takes its output as its input, then amplifies the signal and sends it to its output, where it
goes back to the input again in an endless—and out of control—loop. Taming the oscillator by adding
impediments to the feedback loop, special electronic components between the oscillator's output and
its input, brings the feedback and its frequency under control.
In nearly all PCs a carefully crafted crystal of quartz is used as this frequency control element. Quartz
is one of many piezoelectric compounds. Piezoelectric materials have an interesting property—if you
bend a piezoelectric crystal, it generates a tiny voltage. Or if you apply a voltage to it in the right way,
the piezoelectric material bends.
Quartz crystals do exactly that. But beyond this simple stimulus/response relationship, quartz crystals
offer another important property. By stringently controlling the size and shape of a quartz crystal, it
can be made to resonate at a specific frequency. The frequency of this resonance is extremely stable
and very reliable—so much so that it can help an electric watch keep time to within seconds a month.
While PCs don't need the absolute precision of a quartz watch to operate their logic circuits properly,
the fundamental stability of the quartz oscillator guarantees that the PC operates at a clock frequency
within its design limits.
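To put "within seconds a month" in perspective, crystal accuracy is usually quoted in parts per million. The short Python sketch below uses an assumed, typical watch-crystal tolerance of 20 ppm (a figure not taken from the text) to work out the worst-case drift:

```python
# Convert an assumed crystal tolerance in parts per million into
# worst-case clock drift over a month, expressed in seconds.

SECONDS_PER_MONTH = 30 * 24 * 3600  # 2,592,000 seconds in a 30-day month

def drift_seconds(ppm, interval=SECONDS_PER_MONTH):
    return interval * ppm / 1_000_000

print(drift_seconds(20))  # a 20 ppm crystal can drift about 52 seconds a month
```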
A modern PC doesn’t have just one clock but several. It may use separate frequencies for its
expansion bus (or buses), memory circuits, and microprocessors. Most PCs link these frequencies,
synchronizing them. They may all originate in a single oscillator and use special circuits such as
frequency dividers that reduce the oscillation rate by a selectable factor or frequency multipliers that
increase it. For example, a PC may have a 66 MHz oscillator that directly controls the memory
system. A frequency divider may reduce that by half to run the PCI bus and another divider may
reduce it by eight to produce the ISA bus clock. A frequency multiplier inside the microprocessor may
boost the clock to 132 or 200 MHz. Because all of these frequencies originate in a single clock signal,
they are automatically synchronized with even the most minute variations in the original clock
reflected in all those derived from it.
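The derivation chain described above reduces to simple arithmetic. This Python sketch models that hypothetical clock tree, using a round 66 MHz figure (real systems use a 66.66 MHz clock, so the tripled rate comes out at 198 rather than 200 MHz):

```python
# Derive the several clocks of the example system from one master
# oscillator: memory at the oscillator rate, PCI at half, ISA at
# one-eighth, and the microprocessor core at a multiplied rate.

def clock_tree(master_mhz, cpu_multiplier):
    return {
        "memory": master_mhz,                # straight from the oscillator
        "pci": master_mhz / 2,               # frequency divider: half
        "isa": master_mhz / 8,               # frequency divider: one-eighth
        "cpu": master_mhz * cpu_multiplier,  # internal frequency multiplier
    }

print(clock_tree(66, 2))  # 33 MHz PCI, 8.25 MHz ISA, 132 MHz core
print(clock_tree(66, 3))  # the 198 (nominally 200) MHz case
```

Because every figure is derived from the one master rate, any wobble in the oscillator shows up proportionally in all of them, which is exactly the synchronization the text describes.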
Some sections of PCs operate asynchronously using their own clocks. For example, the scan rate used
by your PC’s video system usually is derived from a separate oscillator on the video board. In fact,
some video boards have multiple oscillators for different scan rates. Some PCs may even have
separate clocks for their basic system board functions and run their buses asynchronously from their
memory and microprocessor.

Historical Perspective

The very first IBM Personal Computer was designed around a single such oscillator built from a
crystal that resonated at 14.31818 megahertz (MHz). The odd frequency was chosen for a particular
reason—it's exactly four times the subcarrier frequency used in color television signals (3.58 MHz).
The engineers who created the original PC thought compatibility with televisions would be an
important design element of the PC. They were not anticipating multimedia but looking for a cheap
way of putting PC images onscreen. When the PC was released, no inexpensive color computer
monitors were available (or necessary for almost non-existent color graphic software).
The actual oscillator in these early machines was made from a special integrated circuit, type 8284A,
and the 14.31818 MHz crystal. One output at the crystal's fundamental frequency was routed directly
to the expansion bus. Another oscillator output was divided by a discrete auxiliary chip to create the
1.19 MHz frequency that was used as a timebase for the PC's timer/counter circuit. The same chip
also divided the fundamental crystal frequency by three to produce a frequency of 4.77 MHz (the
actual clock signal used by the microprocessor in the PC) determining the operating speed of the
system microprocessor. This same clock signal also synchronized all the logic operations inside the
PC and related eight-bit bus computers.
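The division chain is easy to check. This Python sketch runs the arithmetic for the three frequencies derived from the 14.31818 MHz crystal:

```python
# The original PC's division chain: one 14.31818 MHz crystal yields the
# color subcarrier, the 8088 clock, and the timer/counter timebase.

CRYSTAL_HZ = 14_318_180

color_burst = CRYSTAL_HZ / 4  # 3.579545 MHz NTSC color subcarrier
cpu_clock = CRYSTAL_HZ / 3    # 4.77 MHz clock for the 8088
timer_base = cpu_clock / 4    # 1.19 MHz timebase fed to the timer/counter

print(round(color_burst))  # 3579545
print(round(cpu_clock))    # 4772727
print(round(timer_base))   # 1193182
```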
Because the 14.31818 MHz crystal determines the speed at which a PC or related computer operates, you
may think that you could speed up such a system and improve its overall performance simply by
replacing the crystal with one that operates at a higher frequency. While this strategy actually
increases the operating speed of a PC, it is not a good idea for several reasons.
One problem is easily solved. Although the standard 8088 microprocessor in the PC is rated at only
five megahertz and may not operate properly at higher speeds, you can swap it out for one that can
handle a faster clock—an 8088-2 or NEC V-20, for example. But you need to upgrade other parts of
the system to higher speed, too. A bigger obstacle is the one-oscillator design of the PC. Because all
timings throughout the whole PC system are locked to that one oscillator, odd things happen when
you alter its frequency. The system clock won't keep very good time, perennially kept in a high speed
time warp. Software that depends on system timings may crash. Expansion boards may not work at
the altered bus speed. Even your floppy disks may operate erratically with some software.
The goal in the oscillator/clock design of the PC seems to have been frugality rather than flexibility,
versatility, or usability. In those early systems, one master frequency was cut, chopped, minced, and
diced into whatever else was needed inside the computer. But as with trying to make a cut-rate system
speed up, altering the frequency of any part of a computer with the PC oscillator design is likely to
throw off all the other frequency-critical components. Consequently, when IBM rethought the basic
concept of a personal computer and came up with its Advanced Technology approach, the oscillator
was completely redesigned.
The more enlightened AT design broke the system clock free from the bondage of the timer and its
oscillator. Instead of just one crystal and oscillator, the AT and every subsequent computer based
upon its design (which means nearly all PCs) uses three. One is used to derive the system clock for
synchronizing the bus, microprocessor, and related circuits. Another operates at 14.31818 MHz and
provides input to a timer/counter chip and a 14.31818 MHz bus signal for compatibility with the PC. The
third oscillator controls the CMOS time-of-day clock that runs on battery power even when the
computer is switched off.
The oscillator of the original AT was much the same as that of the PC, to the extent of being based on
the same 8284A clock generator chip and 14.31818 MHz crystal. Its output was routed directly to the bus.
Another output was divided down to 1.19 MHz to feed the timer/counter circuit to maintain
backward compatibility with the first PCs.
In the original AT, a special circuit was dedicated to generating the system clock signal, a type 82284
System Clock Generator chip. The operating frequency of the microprocessor was governed by the
crystal associated with this chip. The 82284 divides the crystal frequency in half to produce the clock
that controls the speed of the microprocessor, bus, and associated circuitry. The original AT operated
with a six megahertz clock derived from a twelve megahertz crystal; later ATs ran at eight megahertz,
derived from a sixteen megahertz crystal.
Replacing the crystal used by the 82284 oscillator altered the speed of the microprocessor. This change
also affected the operating speed of the expansion bus. In the original AT design, the bus clock
frequency was locked to the microprocessor clock. The bus and the microprocessor ran in lockstep.


For a while, manufacturers pushed up the bus speed along with the microprocessor clock, to such
blazing rates as 16 MHz—and that with a 286 chip! At these higher bus speeds, however, the
expansion bus became unreliable. Most old expansion boards simply couldn’t keep up. And yet even
faster microprocessors were on the horizon.
When Compaq introduced its first DeskPro 386 in 1987, it broke the link between the bus and the
microprocessor clocks. The first of these machines, which ran at 16 MHz, sliced the microprocessor
and memory clock in two to run the bus. Once the microprocessor and expansion bus clocks were
separated, they almost never came together again. Nearly all PCs since then have separated their bus
and microprocessor clocks.
Most ISA systems strive to run the bus at the sub-multiple of the microprocessor clock that comes
closest to the eight megahertz that most expansion boards are designed to accommodate. The EISA
bus design dictates a nominal eight megahertz bus speed as well. Even in modern PCI-based PCs, the
compatibility bus (which means essentially the same old expansion bus packed into the AT) runs at or
near 8 MHz.
Although IBM took another new tack when it introduced the Micro Channel, which operated the
expansion bus asynchronously from the microprocessor, for a variety of reasons (few of which are
technical in nature), the design never caught on. The next major and long-lasting change went the
opposite direction, at least temporarily. Some PC manufacturers again linked the bus speed directly to
the microprocessor clock. To sidestep the issue of compatibility with old expansion boards, they
created a new high speed expansion bus that supplemented the compatibility bus. These machines
became known as local bus PCs.
At first, local bus PCs used only two clock frequencies. But as microprocessors outran the new bus
design, PC makers again broke the local bus clock free from the microprocessor. This three-speed
design has become the standard today, with separate clocks for the microprocessor, local bus, and
compatibility bus, although all are usually derived from a single master oscillator through frequency
dividers.
Modern microprocessors add a fourth speed, an even higher rate inside the microprocessor. This
higher frequency is generated inside the microprocessor by an internal frequency multiplier.

Overclocking

In the old days when operating a PC was a sport akin to bronco busting for venturesome souls or those
a bit short of cash and common sense, one common method of eking more speed from a PC was to
alter the system clock frequency and run the microprocessor at a frequency beyond its ratings, a
technique called overclocking. Because the master clock derives its frequency from a single
crystal-controlled oscillator, the change was too tempting. After all, you can buy clock crystals at
most electronic parts stores for a few dollars. You can easily pull out the 66 MHz crystal and slide in
one rated at 100 MHz.
Modifying clock frequencies is both easier and more complicated than ever before. New motherboard
designs make altering frequencies easier. But you have more than one frequency to dicker with and
more chances for making your system go sour.
Modern motherboards often make the operating speed of their master oscillators programmable.
Instead of ordinary oscillators, they use frequency synthesizers, special chips that can create nearly
any necessary frequency from any other. They use a crystal to keep their operating frequency
rock-stable but are electrically programmed to generate the frequencies needed to run the system. So
that the motherboard can accommodate the widest range of microprocessors, manufacturers often let
you adjust the operating frequency of the synthesizer using a jumper or switch. A few motherboards
even make the external bus speed an advanced setup option that you can change from your keyboard when
configuring its CMOS. Depending on your PC, you may be able to change the microprocessor clock
directly or alter it indirectly by changing the external microprocessor bus speed and the
microprocessor’s internal multiplier. In any case, you simply make the setting that best matches the
chip you want to plug in. If you want to experiment and push the envelope, just change the synthesizer
settings. You’ll also want to have handy a bucket of water to cool things down and a big enough credit
limit to buy a new microprocessor should your best laid schemes go agley as they aft gang.
Sources that advocate overclocking note that altering the internal microprocessor speed often makes
less of an improvement than changing the external microprocessor bus speed that’s used by the
memory system. And when you alter the external bus speed, with most modern motherboards you’ll
also be changing the speed of the PCI bus. You have to take all three speeds into consideration and
make the proper match between them. Table 6.1 lists the possible multipliers popular Pentium level
microprocessors allow you to set. Motherboards may restrict your choices of these values.

Table 6.1. Possible Internal Multipliers for Some Modern Microprocessors

Manufacturer Microprocessor Possible Multipliers


AMD K5 1.5, 2 (150 and 166 MHz versions only)
Cyrix 6x86 2, 3
Cyrix M2 2, 2.5, 3, 3.5
Intel Pentium 1.5, 2, 2.5, 3
Intel Pentium Pro 2.5, 3, 3.5, 4

Intel sanctions three external bus speeds for its Pentium line of microprocessors: 50, 60, and 66 MHz.
The frequency multiplier inside the Pentium should be set so that the resulting core speed is an exact
multiple of one of these bus speeds, the highest bus speed being the best. Increase the speed of your microprocessor, and you may find you
have to shift to a lower frequency (and higher multiple) to stay at one of the Intel-sanctioned
frequencies. For example, move a 166 MHz Pentium operating with a 66 MHz bus with a 2.5
multiplier to 180 MHz, and you’ll have to increase the multiplier to 3x and lower the bus speed to 60
MHz. These settings may deliver lower overall performance.
Some motherboards allow for higher bus speeds, commonly 75 and 83.5 MHz, and a few also allow
the intermediate speed of 55 MHz. Choosing one of the faster speeds (with the appropriate
microprocessor speed and multiplier) can significantly improve the performance of your PC. But the
higher bus clocks usually force higher PCI speeds as most motherboards run their PCI slots at half the
microprocessor bus speed. At 37.5 MHz some PCI boards become unreliable. Additionally, some
memory systems may become unreliable when operated in excess of 66 MHz. Successful
overclocking requires that you select and test your PCI boards and memory modules at the higher
operating speed you choose.
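The trade-offs in the last two paragraphs boil down to a multiplication and a division. This Python sketch assumes the common arrangement the text describes, with PCI running at half the external bus; individual board designs can differ:

```python
# Core and PCI speeds implied by an external bus speed and internal
# multiplier, assuming the common case where PCI runs at half the bus.

def speeds(bus_mhz, multiplier):
    return {"core": bus_mhz * multiplier, "pci": bus_mhz / 2}

print(speeds(66, 2.5))  # nominal Pentium 166 (the real bus is 66.66 MHz)
print(speeds(60, 3.0))  # 180 MHz core but a slower 30 MHz PCI bus
print(speeds(75, 2.5))  # overclocked: 37.5 MHz PCI, where boards turn flaky
```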


Some microprocessors are more amenable to overclocking than others. Most sources agree that Intel
chips (with a few exceptions) are conservatively rated and can be successfully overclocked.
Microprocessors from other makers are more likely to have pushed their manufacturing technology to
its limits. With them, chip reliability may be severely compromised at speeds higher than their ratings.
These chips run close to their thermal limits already. Pushing the envelope might make it catch on fire.

Timers

The signals developed by the clocks and oscillators inside a PC are designed for internal consumption
only. That is, they are used for housekeeping functions—locking together the operation of various
circuit components. System timers serve more diverse functions. Unlike clocks and oscillators, which
are fixed in frequency and purpose by hardware design, your PC's timers are programmable, so their
output frequencies can be altered to suit the needs of special applications. If you’re one of the stout
souls who like the challenge and frustration of writing your own programs, you may want to take
control of one of your PC’s timers. Many commercial applications do, without giving you the slightest
hint.
The timer signals in the original PC were generated from the system clock using an 8253 timer/counter
integrated circuit chip. Although this chip would be as foreign in a modern PC as a smudge pot or
buggy whip, its exact functions are still there, locked inside some nondescript chipset. In fact, you get
three functions because the 8253 was three 16-bit timers in one. One of its outputs controlled the
time-of-day clock inside the PC, another took care of the memory refresh circuitry of the computer,
and the third was used to generate the tones made by the PC's speaker.
The 8253 timer/counter operates as a frequency divider by counting the clock pulses it receives from
the system clock. It reduces the value it holds in an internal register by one with each pulse it receives.
In the PC series of computers, the signal that the 8253 timer/counter actually counts is a sub-multiple
of the system clock, divided by four, to about 1.19 MHz.
The 8253 can be set up (through I/O ports in the PC) to work in any of six different modes, two of
which can only be used on the speaker channel. In the most straightforward way, Mode 2, it operates
as a frequency divider or rate generator. You load its register with a number, and it counts to that
number. When it reaches it, it outputs a pulse and starts all over again. Load the 8253 register with 2
and it sends out a pulse at half the frequency of the input. Load it with one thousand, and the output
becomes 1/1000th the input. In this mode, the chip can generate an interrupt at any of a wide range of
user-defined intervals. Because the highest value you can load into its 16-bit register is 2^16 or 65,536,
the longest single interval it can count is about .055 second—that is, the 1.19 MHz input signal
divided by 65,536.
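The Mode 2 arithmetic is nothing more than division, and the limiting cases quoted above fall right out of it:

```python
# Mode 2 (rate generator) arithmetic for the 8253: the output frequency
# is the 1.19 MHz input divided by the value loaded into the chip.

TIMER_INPUT_HZ = 1_193_182

def output_hz(divisor):
    return TIMER_INPUT_HZ / divisor

print(output_hz(2))          # half the input rate
print(output_hz(1000))       # one one-thousandth of the input rate
print(output_hz(65536))      # about 18.2 pulses per second, the slowest rate
print(1 / output_hz(65536))  # about 0.055 second, the longest interval
```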
The six modes of the PC’s 8253 timer/counter and their functions and programming are given in
Table 6.2.

Table 6.2. Operating Modes of 8253 Timer/Counter Chip

Mode Name Operation


0 Interrupt on Terminal Count Timer is loaded with a value and counts down from that value to zero, one count per clock pulse.
1 Hardware Retriggerable One-Shot A trigger pulse causes timer output to go low; when the counter reaches zero, the output goes high and stays high until reset. The process repeats every time triggered. Pulse length is set by writing a control word and initial count to the chip before the first cycle.
2 Rate Generator Timer divides the incoming frequency by the value of the initial count loaded into it.
3 Square Wave Produces a series of square waves with a period (measured in clock pulses) equal to the value loaded into the timer.
4 Software Retriggerable Strobe Timer counts down the number of clock cycles loaded into it, then pulses its output. Software starts the next cycle.
5 Hardware Retriggerable Strobe Timer counts down the number of clock cycles loaded into it, then pulses its output. A hardware-generated pulse initiates the next cycle.
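A toy model makes the rate generator's behavior concrete. This Python sketch illustrates the countdown, pulse, and reload cycle of Mode 2; it is a simplified illustration of the principle, not a register-accurate 8253 model:

```python
# A toy Mode 2 rate generator: decrement on each clock tick, pulse and
# reload when the count reaches zero.

def mode2_pulses(initial_count, clock_ticks):
    pulses, counter = 0, initial_count
    for _ in range(clock_ticks):
        counter -= 1
        if counter == 0:
            pulses += 1              # one output pulse per full countdown
            counter = initial_count  # reload automatically and repeat
    return pulses

print(mode2_pulses(1000, 1_193_182))  # ~1193 pulses from one second of ticks
```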

The time-of-day signal in the original PC used the 8253 timer/counter to count out its longest possible
increment, generating pulses at a rate of 18.2 per second. The pulses cause the time-of-day interrupt,
which the PC counts to keep track of the time. These interrupts can also be used by programs that
need to regularly investigate what the computer is doing; for instance, checking the hour to see
whether it's time to dial up a distant computer. Note that reprogramming this channel of a PC has
interesting effects on the time-of-day reported by the system, generally making the hours whiz by.
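Converting a running tick count back into clock time is straightforward arithmetic, as this Python sketch shows:

```python
# Interpret a running count of the 18.2-per-second timer interrupts as
# elapsed time, the way DOS-era PCs kept the time of day.

TICKS_PER_SECOND = 1_193_182 / 65536  # about 18.2065

def ticks_to_hms(ticks):
    total = ticks / TICKS_PER_SECOND
    hours, rest = divmod(int(total), 3600)
    minutes, seconds = divmod(rest, 60)
    return hours, minutes, seconds

print(ticks_to_hms(65544))  # (1, 0, 0): just over an hour's worth of ticks
```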
The speaker section of the 8253 works the same way, only it generates a waveform that is used to
power the speaker and make sounds. Programs can modify any of its settings to change the sound of
the speaker. Programs can also modify the channel that drives the memory controller, which will
likely crash your computer.
The timer/counter of the AT was similar to that of earlier IBM computers except that it was based on
an 8254-2 chip. In the AT, this chip provided three outputs, equivalent to the functions of the PC timer.
One generated the 18.2 per second pulse that drove the time-of-day signal and interrupt; the second
provided a trigger for memory refresh cycles, fixed in the case of the AT to produce a signal with a
period of 15 microseconds; and the third drove the internal speaker. Controls for these operate in the
same way as those for the related PC functions and are found at the same I/O ports.
Modern PCs based on commercial chipsets simply duplicate the functions of the AT's 8254-2 timer
chip in their own silicon. The time-of-day and speaker timers in these machines are programmable
exactly as they are in the AT. Many systems use a third timer channel in the traditional manner to
determine the intervals at which to refresh system memory. A few aim to prevent unintended disasters
by relegating the memory refresh function to other circuits beyond the reach of your programs. As
long as you don't plan to tinker with timers (that is, you don't do any hardware-level programming
yourself) there's no reason to prefer one design over the other.


Real Time Clock

Nearly all PCs made in the last decade include a real time clock among their support circuits. All of
these built-in time-of-day clocks trace their heritage back to a design created by IBM for its original
AT. In that computer the clock was built around a specific clock circuit, the MC146818 chip, which
also held the CMOS memory that stored system setup information.
Based on low power CMOS circuitry, the MC146818 was designed to run constantly to keep its
internal clock accurate, no matter whether your PC was switched on or off. When you switch your PC
off, it still requires some kind of electricity, even if a minuscule amount, to keep current.
Consequently, nearly every PC includes a battery of some kind to supply clock power when the
system is unplugged or otherwise turned off.
The MC146818 measured time by counting pulses of a crystal oscillator operating nominally at
32.768 kilohertz, so it could be as accurate as a quartz watch. (The MC146818 can be programmed to
accept other oscillator frequencies as well.) Many compatible PCs tell time as imaginatively as a
four-year-old child, however, because their manufacturers never think to adjust them properly. Most
put a trimmer (an adjustable capacitor) in series with their quartz crystal, allowing the
manufacturer—or anyone with a screwdriver—to alter the resonant frequency of the oscillator. Giving
the trimmer a tweak can bring the real time clock closer to reality—or further into the twilight zone.
(You can find the trimmer by looking for the short cylinder with a slotted shaft in the center near the
clock crystal, which is usually the only one in a PC with a kilohertz rather than megahertz rating.)
The real time clock has a built-in alarm function. The MC146818 can be programmed to generate an
interrupt when the hour, minute, and second of the time set for the alarm arrives. The alarm is set by
loading the appropriate time values into the registers of the MC146818.
Many chipsets emulate the MC146818 in their internal circuitry. In addition, special real time clock
modules (which also mimic the MC146818) with integral batteries are also available. Popular today
are modules that incorporate the real time clock, CMOS memory used by the BIOS, and a backup
battery that keeps the clock running and CMOS fresh when you switch off your PC. For example, the
Dallas 12887A module or one of its derivatives is popular on modern motherboards.
Reading the clock inside the MC146818 (or a chipset that emulates the MC146818) requires the same
two-step process as reading or writing BIOS configuration information. The clock is addressed
through the same two I/O ports as setup memory: one port—070(Hex)—to set the location to read or
write, and a second port—071(Hex)—to read or write the value.

Interrupt Control

Intel microprocessors understand two kinds of interrupts—software and hardware. A software


interrupt is simply a special instruction in a program that's controlling the microprocessor. Instead of
addition, subtraction, or whatever, the software interrupt causes program execution to temporarily
shift to another section of code in memory.
A hardware interrupt causes the same effect but is controlled by special signals outside the normal
data stream. The only problem is that the microprocessors recognize far fewer interrupts than would

http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/wrh06.htm (16 de 20) [23/06/2000 05:11:03 p.m.]


Winn L. Rosch Hardware Bible, Electronic Edition, Chapter 6

be useful—only two interrupt signal lines are provided. One of these is a special case, the
Non-Maskable Interrupt. The other line is shared by all system interrupts.
To extend the capabilities of the interrupt system, PCs use an interrupt controller chip that manages
several interrupt signals and sets the priority for which gets serviced first. Although a separate chip in
early PCs, specifically a type 8259A, the functions of the interrupt controller have long been
integrated into PC chipsets.

Assignments and Priority

The 8259A became the de facto standard for interrupt control when IBM chose that chip for its very
first PC in 1981. The chip handles eight interrupt signals, numbered zero through seven, and assigns
each one a decreasing priority as the numeric designation increases. IBM made various arbitrary
assignments for the function of each of these eight interrupts for its earliest computers, the PC, XT,
Portable PC, and PCjr. These first interrupt assignments are listed in Table 6.3.

Table 6.3. Eight-Bit Bus Interrupt Assignments

Interrupt Number Function


NMI Memory Parity Errors, Coprocessor
IRQ0 Timer Output 0
IRQ1 Keyboard (Buffer Full)
IRQ2 EGA Display; Network, 3278/79 Adapter
IRQ3 Serial Port 2; Serial Port 4; SDLC Communications; BSC Communications;
Cluster Adapter; Network (alternate); 3278/79 (alternate)
IRQ4 Serial Port 1; Serial Port 3; SDLC Communications; BSC Communications;
Voice Communications Adapter
IRQ5 Hard Disk Controller
IRQ6 Floppy Disk Controller
IRQ7 Parallel Port 1; Cluster Adapter (alternate)

Certainly only the mentally infirm might be tempted to use one of these eight-bit computers for
serious work today. But the eight-bit bus survived inside many PCs until a few years ago, and these
slots still imposed the same limits on available interrupts: only six were available to devices
connected to the eight-bit slot on the expansion bus.
Even in those vintage PCs, eight interrupts (and six on the bus) quickly proved inadequate for
complex systems. Consequently, IBM nearly doubled that number when it introduced its "Advanced
Technology" Personal Computer AT. IBM shoehorned in the extra interrupts by adding a second
interrupt controller chip (another 8259A) to the system architecture. The IBM engineers cascaded the
second chip to the first. That is, they connected the output of the new chip to one of the inputs of the
old interrupt controller, the output of which, in turn, connects to the microprocessor. The chip closest


to the microprocessor operates essentially as the same old single interrupt controller in a PC or XT.
However, one of its interrupts, input number two (IRQ2) is no longer connected to the PC bus.
Instead, it receives the output of the second 8259A chip.
To take the place of interrupt number two, its functions were reassigned to one of the new interrupt
channels, number nine. This interrupt works the same through its new connection, except that the signal
must traverse two controllers instead of one before the interrupt swings into action. Despite its new
number, the new interrupt nine functions just like PC interrupt two, with the same priority activated
by the same control line.
While the 8259A controllers each still handle individual interrupts on a priority level corresponding to
the reverse of the numerical designation of their inputs, the cascaded arrangement of the two
controllers results in an unusual priority system. Top priority is given to interrupts zero and one on the
first chip. Because the second chip is cascaded into interrupt two on the first chip, the new, higher
numbered interrupts that go through this connection get the next highest priority. In fact, interrupt
nine (which, remember, is actually the interrupt zero input of the second 8259A controller) gets top
priority of all interrupts available on the expansion bus. The rest of the interrupts connected to the
second controller receive the next priority levels in ascending order up to interrupt fifteen. Finally, the
remaining interrupts on the first chip follow in priority, from interrupt three up to interrupt seven.
When it developed the new 15-interrupt AT system, IBM made new assignments of interrupt functions.
Table 6.4 lists these original interrupt assignments.

Table 6.4. AT Interrupt Assignments

Interrupt Number Function


IRQ0 Timer Output 0
IRQ1 Keyboard (Buffer Full)
IRQ2 Cascade from IRQ9
IRQ3 Serial Port 2; Serial Port 4; SDLC Communications; BSC Communications;
Cluster Adapter; Network (alternate); 3278/79 (alternate)
IRQ4 Serial Port 1; Serial Port 3; SDLC Communications; BSC Communications;
Voice Communications Adapter
IRQ5 Parallel Port 2, Audio
IRQ6 Floppy Disk Controller
IRQ7 Parallel Port 1; Cluster Adapter (alternate)
IRQ8 Real Time Clock
IRQ9 Software redirected to INT 0A(Hex); Video; Network; 3278/79 Adapter
IRQ10 Reserved
IRQ11 Reserved
IRQ12 Reserved, Built-in mouse
IRQ13 Coprocessor
IRQ14 Primary Hard Disk Controller (IDE)
IRQ15 Secondary Hard Disk Controller

Modern PCs and operating systems offer more versatility in assigning the 15 interrupts available in
the ISA compatibility expansion bus. Within the hardware limitations discussed later in this chapter,
almost any interrupt can be assigned to any device. The Plug-and-Play system can make the required
assignments automatically. In such systems, normal interrupt assignment charts are meaningless
because every system may be configured differently.
For example, if you’re running Windows 95 and want to view the actual interrupt usage of your PC,
you need to consult Device Manager by selecting the System icon in Control Panel. Select Computer
on the device menu and click on the Properties button. Select the Interrupt request (IRQ) radio button
in the View Resources tab—which is the default screen that pops up—and you’ll see a list of the
actual interrupt assignments in use in your PC, akin to that shown in Figure 6.1.
Figure 6.1 Windows 95 Device Manager showing actual interrupt assignments.

A few interrupt assignments are inviolable. In all systems, four interrupts can never be used for
expansion devices. These include IRQ 0, used by the timer/counter; IRQ 1, used by the keyboard
controller; IRQ 2, the cascade point for the upper interrupts (which is redirected to IRQ 9); and IRQ 8,
used by the real time clock. In addition, all modern PCs have microprocessors with integral
floating-point units or accept external FPUs, both of which use IRQ 13.
In general, the interrupt number indicates its priority of service by the host system, lower numbers
having the highest priority. So if the microprocessor received both IRQ 0 and IRQ 5 at the same time,
it would service IRQ 0 (the lower number) first. The cascade approach used to add the upper
interrupts disrupts this straightforward ranking. Because the upper interrupts signal your system
through the IRQ 2 channel, all upper interrupts take the priority of IRQ 2 in respect to IRQ values 0
through 7 and their own numerical order in respect to the other upper interrupts. Table 6.5 lists the
priorities of service for the interrupts generally available for expansion.

Table 6.5. Service Priorities of Available Interrupts

Priority Interrupt Function


1 IRQ 9 Cascade to IRQ 2
2 IRQ 10 Available
3 IRQ 11 Available
4 IRQ 12 Available
5 IRQ 13 Floating-point unit
6 IRQ 14 Hard disk drive
7 IRQ 15 Available
8 IRQ 3 COM2
9 IRQ 4 COM1
10 IRQ 5 LPT2
11 IRQ 6 Floppy disk drives
12 IRQ 7 LPT1

If you have a critical device, you can assign it the available interrupt with the highest priority,
typically IRQ 10. In general, however, these priorities don't make much difference in the operation of
your PC because interrupts rarely occur simultaneously, and most interrupt service routines don’t last
long enough to make a difference.
In other words, the interrupt you assign to a particular device is not critical as long as it is unique and
both the hardware device and its interrupt service routine (which means its driver software) know
which interrupt you choose. Setting up your hardware using the Windows installation procedure
ensures that you make the right matches. The Plug-and-Play system makes these interrupt assignments
for you automatically.

Interrupt Sharing

Just in case the 15 available interrupts (16 counting the special nonmaskable interrupt) still don't
stretch far enough, the ISA compatibility bus makes provisions for interrupt sharing, allowing two
different devices to use the same interrupt to draw the attention of the microprocessor.
When an interrupt is shared, each device that’s sharing a given interrupt uses the same hardware
interrupt request line to signal to the microprocessor. The interrupt-handling software or


Chapter 7: The Expansion Bus


Your PC's expansion bus allows your system to grow. It provides a high speed
connection for internal peripherals that enhance the power of your PC. Standardized
buses spawned an entire industry dedicated to making interchangeable PC expansion
boards. Where once a single standard sufficed, PC expansion has become specialized
with expansion buses optimized for multi-user computers, high performance video
systems, and notebook machines.

■ Background
■ Bus Functions
■ Data Lines
■ Address Lines
■ Power Distribution
■ Timing
■ Flow Control
■ System Control
■ Bus-Mastering and Arbitration
■ Slot-Specific Signals
■ Bridges
■ Physical Aspects
■ Connector Styles
■ Connector Layout
■ Board Size
■ Slot Spacing
■ Slot Limits
■ Compatibility
■ History
■ Buses Before the PC


■ PC Bus
■ Industry Standard Architecture
■ Micro Channel Architecture
■ Enhanced ISA
■ Proprietary Local Buses
■ VESA Local Bus
■ Peripheral Component Interface
■ PCMCIA
■ PC Card
■ CardBus
■ Miniature Card
■ Design and Operation
■ Industry Standard Architecture
■ Eight-Bit Subset
■ Sixteen-Bit Extension
■ Plug-and-Play ISA
■ PC/104
■ Peripheral Component Interconnect
■ CompactPCI
■ PCMCIA
■ PC Card
■ CardBus
■ Miniature Card
■ Historic Architectures
■ Micro Channel Architecture
■ Enhanced Industry Standard Architecture
■ VESA Local Bus


The Expansion Bus

PCs earn their versatility with their expansion slots. You can make your PC into anything you want it
to be with an appropriate selection of plug-in boards—within reason, of course. Adding the slicing
and dicing power of a Ginsu knife or the warm affection of Pygmalion-style dreams requires more
elaborate accessorizing. But expansion boards can make a modest PC into a multimedia extravaganza,
an infallible data collection clerk, a high speed information retrieval system, or an expensive desktop
paperweight.
An expansion slot is just a space for the board. The real power for pushing the capabilities of your
system comes from the connections provided by the slot—the expansion bus. The expansion bus is the
electrical connector sitting at the bottom of the slot. The expansion bus is your PC's electrical
umbilical, a direct connection with the PC's logical circulatory system that allows whatever expansion
brainchild you have to link to your system.
The purpose of the expansion bus is straightforward: it enables you to plug things into the machine
and, hopefully, enhance the PC's operation. The buses themselves, however, are not quite so simple.
Buses are much more than simple electrical connections like you make when plugging in a lamp.
Through the bus circuits, your PC transfers not only electricity but also information. Like all the data
your PC must deal with, that information is defined by a special coding in the sequence and pattern of
digital bits. The bus connection must flawlessly transfer that data. To prevent mistakes, every bus
design also includes extra signals to control the flow of that information; adjust its rate to
accommodate the speed limits of your PC and its expansion accessories; and adjust the digital pattern
itself to match design variations. Different buses each take their own approach to the signals required
for control and translation, and these design variations govern how your computer can grow. As a
result, the standard that your PC's bus follows is a primary determinant of what enhancement products
work with it—whether they are compatible. The design of the expansion bus also sets certain limits on
how the system performs and what its ultimate capabilities can be.
Today’s preferred computer configuration includes two expansion buses, a compatibility bus and a
high speed local bus. The former allows you to plug in almost any of the vast range of expansion
boards manufactured in the last decade and a half. Technically, this compatibility bus is called ISA or
Industry Standard Architecture. The high speed bus is a recent innovation that allows new expansion
boards to operate at speeds closer to those of today’s fastest microprocessors. Sometimes called a
local bus, the preferred high speed design is PCI, short for Peripheral Component Interconnect.
The ISA bus in a modern PC allows you to plug in legacy devices—essentially any old expansion
board that you have lying around, even those so old they’re covered with more cobwebs than a
mummy’s tomb—and low performance peripherals. Both types of accessories have one thing in
common: the need to generate data so slowly that no handicap—even an expansion bus with a design
dating back more than a decade and a half—can further impede their performance. Analog data/fax
modems, digitizing tablets, mice, even printers are the likely suspects for plugging into an ISA
expansion bus slot.
The PCI bus is for performance-critical peripherals. That is, it is the place for plugging in devices that
affect the overall speed at which your PC operates. The three most important of these are video or
graphic boards, mass storage devices such as hard disks, and high speed network adapters such as


those that follow the 100Base-T standard.


Nothing prevents expansion board designers from putting modems or even a standard serial port on a
PCI board. The PCI expansion bus readily adjusts to slow as well as fast components; using PCI for a
low performance component won’t make that component operate appreciably faster. PCs with only
PCI slots are consequently not only feasible but likely. Although some expansion board makers may
shift their low performance products to PCI, the more probable future scenario leaves all those legacy
and low performance expansion boards in trash heaps and museums as higher performance standards
replace them. For example, conventional serial and parallel ports will soon give way to the Universal
Serial Bus for general expansion, and analog data/fax modems will give way to ISDN and ADSL
terminal adapters (defined and described in Chapter 22, "Modems"), all of which will likely start out
using PCI as the bus of choice.
Notebook PCs present their own expansion problems, the two most important of which are size and
power. Normal expansion boards and buses are too large and too power hungry to meet the needs of
portable battery-powered PCs. Most notebook PC manufacturers initially broke through these
barriers by developing their own proprietary expansion standards, sometimes with different board
designs for each computer model.
Modern notebook PCs now use standardized expansion slots that follow one of two standards: PC
Card or CardBus. The former is akin to ISA, meant for legacy or low performance cards. CardBus is
the modern high speed alternative. Moreover, CardBus slots are backward compatible: a CardBus slot
will accept either a CardBus or PC Card expansion card.
Unlike the slots of desktop computers, those of notebook machines are externally accessible—you don’t
have to open the PC to add an expansion board. External upgrades are, in fact, a necessity for
notebook computers, in that cracking open the case of a portable computer is about as messy as
opening an egg and, for most people, an equally irreversible process. In addition, both the PC Card
and CardBus standards allow for hot-swapping. That is, you can plug in and unplug a board that
follows either standard while the electricity is on to your PC and the machine is running.

Background

For nearly the first decade of the personal computer industry's existence, PCs were defined by the
expansion bus they used. Mainstream machines almost universally adhered to the standard set by
IBM's early PC and AT computers. After all, when the first PC came on the market, its bus gave the
world a single standard where none seemingly existed before. Manufacturers of expansion boards
had a set of dimensions and a layout of electrical signals to guide them in crafting their products. (The
timing of those signals was never explicitly defined by IBM, however.) The AT extended that original
design to match the performance capabilities of more modern peripherals while retaining almost
complete compatibility with the original.
The real virtue of the PC and AT bus designs was not technical, however, but simply that they had
IBM's backing. That alone was the most compelling reason to use it. At the time, the IBM name
meant business computer; IBM was the major computer maker in the world. IBM set standards and
the world followed—blindly, perhaps. Compared to other expansion buses used until that time,
however, the PC bus was nothing remarkable. In fact, the design is remarkable mostly because of its


simplicity. Most of its underlying design decisions were arbitrary and associated with expedience and
lower costs. Still, everything needed was there, and the bus was entirely workable. When the IBM PC
became a runaway success, board makers had to adopt the PC bus to sell their products. The standard


Chapter 8: Mass Storage Technology


Mass storage is where you put the data that you need to keep at hand but which will not
fit into memory. Designed to hold and retrieve megabytes at a moment's notice, mass
storage traditionally has been the realm of magnetic disks, but other technologies and
formats now serve specialized purposes and await their chances to move into the
mainstream.

■ Technologies
■ Magnetic
■ Magnetism
■ Magnetic Materials
■ Magnetic Storage
■ Digital Magnetic Systems
■ Saturation
■ Coercivity
■ Retentivity
■ Magneto-Optical
■ Write Operation
■ Read Operation
■ Media
■ Optical
■ Data Organization
■ Sequential Media
■ Random Access Media
■ Combination Technologies
■ Data Coding
■ Flux Transitions
■ Single-Density Recording


■ Double-Density Recording
■ Group Coded Recording
■ Advanced RLL
■ Data Compression
■ Lossless Versus Lossy Compression
■ Compression Implementations
■ Control Electronics
■ Primeval Controllers
■ Combined Host Adapter and Controller
■ Embedded Controllers
■ Integrated Hard Disk Cards
■ Caching
■ Cache Operation
■ Read Buffering
■ Write Buffering
■ Memory Usage
■ Software Caches
■ Hardware Caches
■ Drive Arrays
■ Technologies
■ Data Striping
■ Redundancy and Reliability
■ Implementations
■ RAID Level 0
■ RAID Level 1
■ RAID Level 2
■ RAID Level 3
■ RAID Level 4
■ RAID Level 5
■ RAID Level 6
■ RAID Level 10
■ RAID Level 53
■ Parallel Access Arrays


Mass Storage Technology

The difference between genius and mere intelligence is storage. The quick-witted react fast, but the
true genius can call upon memories, experiences, and knowledge to find real answers. PCs are no
different. Putting a fast microprocessor in your PC would be meaningless without a means to store
programs and data for current and future use. Mass storage is the key to giving your PC the long-term
memory that it needs.
Essentially an electronic closet, mass storage is where you put information that you don't want to
constantly hold in your hands but that you don't want to throw away, either. As with the straw hats,
squash rackets, wallpaper tailings, and all the rest of your dimly remembered possessions that pile up
out of sight behind the closet door, retrieving a particular item from mass storage can take longer than
when you have what you want at hand.
Mass storage can be online storage, instantly accessible by your microprocessor's commands, or
offline storage, requiring some extra intervention (such as sliding a cartridge into a drive) for your
system to get the bytes that it needs. Sometimes, the term near-line storage is used to refer to systems
in which information isn't instantly available but can be put into instant reach by microprocessor
command. The jukebox—an automatic mechanism that selects CD-ROM cartridges (sometimes tape
cartridges)—is the most common example.
Moving bytes from mass storage to memory determines how quickly stored information can be
accessed. In practical online systems, the time required for this access ranges from less than 0.01
second in the fastest hard disks to 1000 seconds in some tape systems, spanning a range of 100,000:1,
or five orders of magnitude.
By definition, the best offline storage systems have substantially longer access times than the quickest
online systems. Even with fast-access disk cartridges, the minimum access time for offline data is
measured in seconds because of the need to find and load a particular cartridge. The slowest online
and the fastest offline storage system speeds, however, may overlap because the time to ready an
offline cartridge can be substantially shorter than the period required to locate needed information
written on a long online tape.
Various mass storage systems span other ranges besides speed. Storage capacity reaches from as
little as the 160 kilobytes of the single-sided floppy disk to the multiple gigabytes accommodated by
helical tape systems. Costs run from less than $100 to more than $10,000.
Personal computers use several varieties of mass storage. You can classify mass storage in several
ways: the technology and material the storage system uses for its memory, the way (and often, speed)
your PC accesses the data, and whether you can exchange the storage medium to increase storage, to
exchange information, or to provide security.
Another way of dividing mass storage—probably the most familiar—is by device type. Mass storage


devices common among PCs include hard disks, floppy disks, PC Cards, magneto-optical drives, CD
ROM drives (players and recorders), and tape drives. Although each of these devices gives your PC a
unique kind of storage, they share technologies and media. For example, magnetic storage serves as
the foundation for both hard disks and tape drives. The devices differ, however, in how they put
magnetic technology to work. Hard disks give your PC nearly instant access to megabytes and
gigabytes of data while tape drives offer slower, even laggardly, access in exchange for an
inexpensive cartridge medium that gives you a safe backup system.
All mass storage systems have four essential qualities: capacity, speed, convenience, and cost. The
practical differences between mass storage devices are the trade-offs they make in these qualities.
Today’s mass storage systems use three basic technologies: magnetic, optical, and solid-state memory.
Hard disks, floppy disks, and tape systems use magnetic storage. CD drives use optical storage. PC
Cards use solid-state memory. (Of course, hard disk drives also come in PC Card format).
Magneto-optical drives combine magnetic and optical technologies.
Mass storage systems use one of two means of accessing data: random access and sequential access.
Tape drives are the only sequential media devices in common use with PCs. New technologies,
however, are blurring the distinction between random and sequential storage. MO disks and CD
ROMs began life as sequential devices with enhanced random-access capabilities. Special hard disks,
called AV drives, are random access devices that have been specially designed to enhance their
sequential storage abilities.
Most mass storage systems put their storage media in interchangeable cartridges. Only one kind of
mass storage does not permit you to interchange cartridges: the hard disk drive. This inflexible
technology is the most popular today chiefly because it scores highest in all other mass storage
qualities: capacity, speed, and cost.
All of these media share the defining characteristics of mass storage. They deal with data en masse in
that they store thousands and millions of bytes at a time. They also store that information online. To
earn their huge capacities, the mass storage system moves the data out of the direct control of your
PC's microprocessor. Instead of being held in your computer's memory where each byte can be
accessed directly by your system's microprocessor, mass storage data requires two steps to use. First,
the information must be moved from the mass storage device into your system's memory. Then that
information can be accessed by the microprocessor.
The best way to put these huge ranges into perspective is to examine the technologies that underlie
them. All mass storage systems are unified by a singular principal—they use some kind of mechanical
motion to separate and organize each bit of information they store. To retain each bit, these systems
make some kind of physical change to the storage medium—burning holes in it, blasting bits into
oblivion, changing its color, or altering a magnetic field.

Technologies

The key to mass storage is the medium. Mass storage relies on having a medium that can be readily
changed from one state to another and retains those changes for a substantial period, usually measured
in years, without the need for maintenance such as an external power source. Paper and ink have long
been a successful storage medium for human thoughts—the ink readily changes the paper from white


to black, and those changes can last centuries, providing the printer doesn’t skimp on the quality of
the paper or ink.
In fact, paper and ink have been used successfully for computer storage. Bar codes and even the
Optical Character Recognition (OCR) of printed text allow PCs to work with this time-proven storage
system. But paper and ink come up short as a computer storage system. They lack the speed, capacity,
and convenience required for a truly effective PC mass storage system. You can’t avoid the
comparisons—whatever latest computer storage system some benighted manufacturer introduces has
the capacity of several Libraries of Congress full of printed text and speed that makes Evelyn Wood
look dyslexic. Perhaps a reverse metaphor is more apt. A single VGA screen image, if printed in its
hexadecimal code as text characters, would fill an average book. Text characters of the code of a
single Windows program would fill an encyclopedia. Your computer needs to read the entire
VGA-image book in less than a blink of an eye and load the encyclopedic program in a few seconds.
Compared to what paper and ink deliver, the needs of a PC for mass storage capacity are prodigious
indeed. The storage system must also allow the PC to sort through its storage and find what it wants
faster than the speed of frustration, which typically runs neck and neck with light. And the medium
must be convenient to work with, for you and your PC. The list of suitable technologies is amazingly
short: magnetic and optical. All PC mass storage media are based on those two basic technologies or a
combination of them.

Magnetic

Magnetic storage media have long been the favored choice for computer mass storage. The primary
attraction of magnetic storage is non-volatility. That is, unlike most electronic or solid-state storage
systems, magnetic fields require no periodic addition of energy to maintain their state once it is set.
Over decades of development, the capacities of magnetic storage systems have increased by a factor
in the thousands and their speed of access has shrunk similarly. Despite these differences, today’s
magnetic storage system relies on exactly the same principles as the first devices.
The original electronic mass storage system was magnetic tape—that thin strip of paper (in the United
States) upon which a thin layer of refined rust had been glued. Later, the paper gave way to plastic,
and the iron oxide coating gave way to a number of improved magnetic particles based on iron,
chrome dioxide, and various mixtures of similar compounds.
The machine that recorded upon these rust-covered ribbons was the Magnetophon, the first practical
tape recorder, created in 1934 by Allgemeine Elektricitaets Gesellschaft (AEG), the German
counterpart of the General Electric Company. Continually improved but essentially secret through the
years of World War II despite its use at German radio stations, the Magnetophon was the first device
to record and play back sound indistinguishable from live performances. After its introduction to the
United States (in a demonstration by John T. Mullin to the Institute of Radio Engineers in San
Francisco on May 16, 1946), tape recording quickly became the premier recording medium and
within a decade gained the ability to record video and digital data. Today, both data cassettes and
streaming tape systems are based on the direct offspring of the first Magnetophon.
The principle is simple. Some materials become magnetized under the influence of a magnetic field.
Once the material becomes magnetized, it retains its magnetic field. The magnetic field turns a
suitable mixture or compound based on one of the magnetic materials into a permanent magnet with

http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/wrh08.htm (5 de 8) [23/06/2000 05:17:20 p.m.]


Winn L. Rosch Hardware Bible, Electronic Edition, Chapter 8

its own magnetic field. A galvanometer or similar device can later detect the resulting magnetic field
and determine that the material has been magnetized. The magnetic material remembers.

Magnetism

Key to the memory of magnetism is permanence. Magnetic fields have the wonderful property of
being static and semi-permanent. On their own, they don't move or change. The electricity used by
electronic circuits is just the opposite. It is constantly on the go and seeks to dissipate itself as quickly
as possible. The difference is fundamental. Magnetic fields are set up by the spins of atoms physically
locked in place. Electric charges are carried by mobile particles—mostly electrons—that not only
refuse to stay in place but also are individually resistant to predictions of where they are or are going.
Given the right force in the right amount, however, magnetic spins can be upset, twisted from one
orientation to another. Because magnetic fields are amenable to change rather than being entirely
permanent, magnetism is useful for data storage. After all, if a magnetic field were permanent and
unchangeable, it would present no means of recording information. If it couldn't be changed, nothing
about it could be altered to reflect the addition of information.
At the elemental particle level, magnetic spins are eternal, but taken collectively, they can be made to
come and go. A single spin can point in only one direction at a time, but that direction can be virtually
any direction. If two adjacent particles spin in opposite directions, they cancel one another out when
viewed from a larger, macroscopic perspective.
Altering those spin orientations takes a force of some kind, and that's the key to making magnetic
storage work. That force can make an alteration to a magnetic field, and after the field has changed, it
will keep its new state until some other force acts upon it.
The force that most readily changes one magnetic field is another magnetic field. (Yes, some
permanent magnets can be demagnetized just by heating them sufficiently, but the demagnetization is
actually an effect of the interaction of the many minute magnetic fields of the magnetic material.)
Despite their different behavior in electronics and storage systems, magnetism and electricity are
manifestations of the same underlying elemental force. Both are electromagnetic phenomena. One
result of that commonality makes magnetic storage particularly desirable to electronics
designers—magnetic fields can be created by the flow of electrical energy. Consequently, evanescent
electricity can be used to create and alter semi-permanent magnetic fields.
When set up, magnetic fields are essentially self-sustaining. They require no energy to maintain,
because they are fundamentally a characteristic displayed by the minute particles that make up the
entire universe (at least according to current physical theories). On the sub-microscopic scale of
elemental particles, the spins that form magnetic fields are, for the most part, unchangeable and
unchanging. Nothing is normally subtracted from them—they don't give up energy even when they
are put to work. They can affect other electromagnetic phenomena, such as diverting the
flow of electricity. In such a case, however, all the energy in the system comes from the electrical
flow—the magnetism is a gate, but the cattle that escape from the corral are solely electrons.
The magnetic fields that are useful in storage systems are those large enough to measure and effect
changes on things that we can see. This magnetism is the macroscopic result of the sum of many
microscopic magnetic fields, many elemental spins. Magnetism is a characteristic of sub-microscopic
particles. (Strictly speaking, modern science describes magnetism itself in terms of particles, but we
don't have to be quite so particular for the purpose of understanding magnetic computer storage.)

Magnetic Materials

Three chemical elements are magnetic—iron, nickel, and cobalt. The macroscopic strength as well as
other properties of these magnetic materials can be improved by alloying them, together and with
non-magnetic materials, particularly rare earths like samarium.
Many particles at the molecular level have their own intrinsic magnetic fields. At the observable
(macroscopic) level, however, most materials do not behave like magnets because their constituent
particles are organized—or disorganized—randomly so that in bulk, the cumulative effects of all their
magnetic fields tend to cancel out. In contrast, the majority of the minute magnetic particles of a permanent
magnet are oriented in the same direction. The majority prevails, and the material has a net magnetic
field.
Some materials can be magnetized. That is, their constituent microscopic magnetic fields can be
realigned so that they reveal a net macroscopic magnetic field. For instance, if you subject a piece of
soft iron to a strong magnetic field, the iron becomes magnetized.

Magnetic Storage

If that strong magnetic field is produced by an electromagnet, all the constituents of a magnetic
storage system become available. Electrical energy can be used to alter a magnetic field, which can be
later detected. Put a lump of soft iron within the confines of an electromagnet that has not been
energized. Any time you return, you can determine whether the electromagnet has been energized in
your absence by checking for the presence of a magnetic field in the iron. In effect, you have stored
exactly one bit of information.
To store more, you need to be able to organize the information. You need to know the order of the
bits. In magnetic storage systems, information is arranged physically by the way data travel serially in
time. Instead of being electronic blips that flicker on and off as the milliseconds tick off, magnetic
pulses are stored like a row of dots on a piece of paper—a long chain with beginning and end. This
physical arrangement can be directly translated to the temporal arrangement of data used in a serial
transmission system just by scanning the dots across the paper. The first dot becomes the first pulse in
the serial stream, and each subsequent dot follows neatly in the data stream as the paper is scanned.
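The dot analogy can be sketched in a few lines of Python. This is a toy model for illustration only, not how any real drive encodes data: each bit becomes the orientation of one magnetic domain, and scanning the chain of domains in order recovers the original serial stream.

```python
# Toy model of the dot analogy above: each bit is stored as the
# orientation of one magnetic domain (+1 or -1), laid down in order
# along the medium; scanning the chain of domains start to end
# recovers the temporal order of the original serial bit stream.

def write_stream(bits):
    """Record a serial bit stream as a chain of domain orientations."""
    return [+1 if b else -1 for b in bits]

def read_stream(medium):
    """Scan the medium start to end, recovering the serial bit stream."""
    return [1 if d > 0 else 0 for d in medium]

data = [1, 0, 1, 1, 0, 0, 1, 0]
medium = write_stream(data)
assert read_stream(medium) == data
```

The spatial order of the domains stands in for the temporal order of the bits, which is the whole trick of serial magnetic storage.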
Instead of paper, magnetic storage systems use one or another form of media—generally a disk or
long ribbon of plastic tape—covered with a magnetically reactive mixture. The form of medium
directly influences the speed at which information can be retrieved from the system.
No matter whether tape or disk, when a magnetic storage medium is blank from the factory, it
contains no information. The various magnetic domains on it are randomly oriented. Recording on the
medium reorients the magnetic domains into a pattern that represents the stored information, as shown
in Figure 8.1.

Figure 8.1 Orientation of magnetic domains in blank and recorded media.

After you record on a magnetic medium, you can erase it by overwriting it with a strong magnetic
field. In practice, you cannot reproduce the true random orientation of magnetic domains of the
unused medium. However, by recording a pattern with a frequency out of the range of the reading or
playback system—a very high or low frequency—you can obscure previously recorded data and make
the medium act as if it were blank.

Digital Magnetic Systems

Computer mass storage systems differ in principle and operation from tape systems used for audio and
video recording. Whereas audio and video cassettes record analog signals on tape, computers use
digital signals.
In the next few years, this situation will likely change as digital audio and video tape recorders
become increasingly available. Eventually, the analog audio and video tape will become historical
footnotes, much as the analog vinyl phonograph record was replaced by the all-digital compact disc.
In analog systems, the strength of the magnetic field written on a tape varies in correspondence with
the signal being recorded. The intensity of the recorded field can span a range of more than six orders
of magnitude. Digital systems generally use a code that relies on patterns of pulses, and all the pulses
have exactly the same intensity.
The technological shift from analog to digital is rooted in some of the characteristics of digital storage
that make it the top choice where accur

Chapter 9: Storage Interfaces


An interface links two disparate devices together. The most important of these link your
PC to its mass storage devices. The design of the interface not only determines how your
PC controls the device but also sets the limits on the performance of the overall system.

■ Background
■ Practical Matters
■ Performance
■ Connectability
■ Cost
■ Design Differences
■ Device-Level Interfaces
■ System Level Interfaces
■ Determining the Interface Used by a Drive
■ AT Attachment
■ Background
■ History
■ Performance
■ Capacity
■ 504MB Addressing Limit
■ CHS Translation
■ Logical Block Addressing
■ Other Features
■ ATA Packet Interface
■ Power Management
■ Security
■ Device Identification
■ Implementations
■ ATA
■ Fast ATA
■ Enhanced IDE
■ ATA-2
■ ATA-3
■ Wiring
■ Single Drive
■ Two Drives
■ Three or Four Drives
■ Connectors
■ 40-Pin Connector
■ 44-Pin Connector
■ 68-Pin Connector
■ Signals and Operation
■ Logical Interface
■ Register Addresses
■ Secondary Host Adapters
■ Operating System Support
■ Windows 95
■ Windows 3.X
■ DOS
■ SCSI
■ Background
■ History
■ Nomenclature
■ SCSI-1
■ SCSI-2
■ SCSI-3
■ Advanced SCSI Architecture
■ TwinChannel SCSI
■ Ultra SCSI
■ SCSI Parallel Interface
■ Performance
■ Capacity and Addressing
■ Wiring
■ Connectors
■ Operating System Support
■ Operation and Arbitration
■ Serial Storage Architecture
■ Background
■ History
■ Frames
■ Data Coding
■ Special Characters
■ Addressing
■ Wiring
■ Connectors
■ Fibre Channel
■ History
■ Signaling
■ Coding
■ Frames
■ Protocols
■ Wiring
■ Standards and Coordination
■ ST506/412
■ Background
■ Cabling
■ Data Cable
■ Control Cable
■ ESDI
■ Cabling
■ Operating System Support
■ Floppy Disk Interface
■ Background
■ Controllers
■ Cabling
■ Drive Select Jumpers
■ Drive Select Settings
■ Drive Cabling
■ Single Drive and Straight-Through Cables
■ Connectors
■ Terminating Resistor Networks
■ Power Connections

Storage Interfaces

"Interface" may just top more lists of jargon capable of reducing advocates of simpler speech to tears.
To those who don't know better, it sounds erudite in a technical, over-educated way just right to
elevate the speaker into intellectual incomprehensibility. Restricted to its use in computer technology,
it fares little better. It remains misunderstood and, when bandied about, serves to separate those who
know they don't know from those who don't know they don't know.
Interface is so slippery because it's hard to define. Strictly speaking, an interface is a coming together.
In computers, it's where two disparate devices link up. The confusion starts with that link because
transferring information requires several levels of connection. Plugs must mechanically fit into the
appropriate jacks. The electrical signals must match. The definition of data bits must match, and the
overall logical structure of the data must agree.
When you control both sides of a connection, the interface is a non-issue. You use whatever
connectors and signals you want because you can adjust either end to suit the other. In the real world,
however, one person or organization rarely has complete control over all aspects of an interface, at
least not any more. IBM set interface standards when it controlled nearly the entire PC market, but
today's diversity naturally leads to chaos. While chaos may be appealing to Bolsheviks and modern
mathematicians, it's not so wonderful if you're trying to sell mass storage devices. Interface confusion
leads to something all drive makers dread—your telephone call. Every time they answer the phone to
give you support, they lose money (including when support requires dialing a 900 number, in which
case the manufacturer loses customers and then loses money). Drive makers want their products to
plug in with as few problems as possible. Moreover, they want their products to plug into as many
PCs as possible to give their product the widest possible market.
Understandably, then, makers of mass storage products have led most initiatives to standardize
interfaces. They've done their job well, maybe too well, and have created a wonderful profusion of
mass storage interfaces that in itself is chaotic. Not only do you have to worry about getting an
interface to work, but you have to worry about which interface you want to work with.
Despite valiant efforts to make interfaces less confusing—efforts generally successful in that the
most common connections have grown much easier to make over the years—the confusion of interface choice
is steadily increasing. As PCs push into ever-higher performance territory, old interface standards
have fallen by the wayside or have suffered dramatic changes. Moreover, new ideas and new
performance demands have led to a steady profusion of new interfaces.
Fortunately, the interface choices for PCs are manageably few. The most popular of them are now
mature technologies that are easier to use and understand than ever before.

Background

The whole purpose of the controller is to link a disk or tape drive with its computer host. So that the
widest variety of devices can be connected to a controller, the signals in this connection have been
standardized.

Practical Matters

The interface used by a mass storage device has three very practical implications. It sets the maximum
performance of the storage system. It also controls how (and whether) you can expand the capacity of
your storage system by adding more drives. And differences in interface often translate into price
differences.

Performance

All information in your storage system must pass through the interface on its way to your PC's
microprocessor and memory. The speed at which the bytes move through the interface sets the
ultimate limit on the performance of the storage system. Engineers use two ways of expressing this
speed. The term peak transfer rate usually describes the theoretical limit to the speed of the interface
as determined by multiplying its clock frequency by the width of the data bus of the interface. The
term throughput expresses a more realistic value of how many bytes can move through the interface in
a given time in an actual installation. The throughput is often substantially less than the peak transfer
rate because of the inherent requirements of the interface (such as the overhead required in addressing
and acknowledging the transfer of data packets) and physical limitations of the interface and the
storage devices themselves. Peak transfer rate is usually quoted for interface standards because they
deal with the purely on-paper theoretic aspects of the interface. Product reviews express test results in
terms of throughput.
Both the peak transfer rate and throughput are usually measured in units of bytes, kilobytes, or
megabytes that pass through the interface in a second. The most common measure of modern
interfaces consequently is megabytes per second, abbreviated MB/sec. Sometimes specifications list
the peak transfer rates of older interfaces as their clocking frequency in megahertz (MHz). Because
these older interfaces were serial designs, which means they had data channels one bit wide, the
megahertz measure translates directly into megabits per second. To get a numerical figure that's
directly comparable to the MB/sec numbers given for modern interfaces, you must divide the MHz
rating by eight.
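As a concrete check of this arithmetic (using Python purely as a calculator), peak transfer rate is the clock frequency multiplied by the bus width, and a serial interface's MHz rating divided by eight gives a MB/sec figure you can compare directly:

```python
# Peak transfer rate as the text defines it: clock frequency times
# data-bus width. For a serial (1-bit) interface, a MHz rating is
# megabits per second, so dividing by eight yields a MB/sec figure
# comparable to those quoted for modern parallel interfaces.

def peak_mb_per_sec(clock_mhz, bus_width_bits):
    """Theoretical peak rate in megabytes per second."""
    return clock_mhz * bus_width_bits / 8

# A 1-bit serial link clocked at 10 MHz peaks at 1.25 MB/sec,
# while an 8-bit bus at the same 10 MHz clock peaks at 10 MB/sec
# (the figure quoted below for Fast SCSI-2).
assert peak_mb_per_sec(10, 1) == 1.25
assert peak_mb_per_sec(10, 8) == 10
```

Remember that this is the on-paper ceiling; real-world throughput always falls below it.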
The transfer rate of an interface sets the limit on the performance of the storage system; it does not
indicate how you can expect a given device to perform. In other words, you can expect your actual
mass storage devices to move information at a rate slower—often substantially slower—than the peak
transfer rate of the interface.
The microprocessors in the first generations of PCs were substantially slower than most of the mass
storage devices of their time. Consequently, the PC, rather than the interface or the storage device,
controlled the overall performance of the system. Microprocessor speed has jumped far ahead of both
storage devices and older interfaces, so far in fact that older interfaces cannot keep up with the data
needs of today's PCs. Storage devices, too, have pushed their data handling abilities beyond the limits
of many interfaces. Obtaining the optimum performance from a storage system often requires using
the interface with the highest possible performance.
Table 9.1 lists the peak transfer rates of the most common interfaces used with mass storage devices
in PCs. With today's Pentium and faster PCs, interfaces with peak transfer rates below the 10 MB/sec
of Fast SCSI-2 will limit mass storage performance.

Table 9.1. Mass Storage Interface Comparison

Interface             Peak transfer rate (MB/sec)   Number of devices
Floppy disk           0.125                         2
ST506                 0.625                         2
ESDI                  3.125                         2
AT Attachment (IDE)   4                             2
SCSI                  5                             7
Fast SCSI-2           10                            7
ATA-2 (EIDE)          16                            4
SSA                   20 (soon, 40)                 127
Ultra SCSI            40                            15
P1394                 100                           127
FC-AL                 100                           126
Aaron (Proposed)      200                           126
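One way to put Table 9.1 to work is to encode the peak rates as data and flag the interfaces that would cap a drive with a given sustained media rate. In the sketch below, the 8 MB/sec drive is a hypothetical figure chosen for illustration, and the proposed Aaron interface is omitted; the code simply compares numbers from the table:

```python
# Peak rates from Table 9.1, in MB/sec. Given a drive's sustained
# media rate (a hypothetical figure here), list the interfaces
# whose peak rate would bottleneck that drive.

PEAK_RATES = {
    "Floppy disk": 0.125, "ST506": 0.625, "ESDI": 3.125,
    "AT Attachment (IDE)": 4, "SCSI": 5, "Fast SCSI-2": 10,
    "ATA-2 (EIDE)": 16, "SSA": 20, "Ultra SCSI": 40,
    "P1394": 100, "FC-AL": 100,
}

def bottlenecks(drive_mb_per_sec):
    """Interfaces whose peak rate falls below the drive's media rate."""
    return [name for name, rate in sorted(PEAK_RATES.items())
            if rate < drive_mb_per_sec]

# A drive streaming a hypothetical 8 MB/sec would be held back by:
print(bottlenecks(8))
```

Because a peak rate is a ceiling rather than a promise, an interface only slightly above the drive's media rate can still constrain throughput once overhead is counted.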

These peak transfer rates take into account the different bus widths of the various interfaces. The
floppy disk, ST506, ESDI, P1394, FC-AL, and Aaron interfaces all are serial designs which transfer
information one bit at a time—though often at very high rates. SCSI and Fast SCSI-2 are eight-bit
interfaces. Wide SCSI broadens that connection to 16 or 32 bits, as in the case of Ultra SCSI. AT
Attachment is usually a 16-bit interface. Although the broader buses don't necessarily improve the
throughput of an interface, they do lower the frequency of the signals passed through the connection.
Lower frequencies make designs easier and allow the use of longer cables than would be practical at
higher frequencies.
The interface of a given mass storage device is a characteristic that's fixed by its designer and
manufacturer. In general, they match the interface to the drive mechanism to extract the best
performance that the mechanism can deliver under the rubric of the interface standard. To avoid
performance constraints, you'll want to choose devices that use the fastest possible interface.
Transfer rate is not the only aspect of drive performance influenced by interface choice, however. The
most common interfaces for PC hard disk drives, AT Attachment and SCSI, typically involve
disconcertingly different amounts of overhead. The command overhead of a typical SCSI host adapter
is in

Chapter 10: Hard Disks


In most PCs, the hard disk is the principal mass storage system. It holds all of your
programs and data files and must deliver them to your system at an instant’s notice.
Hard disks differ by technology, interface, speed, and capacity—all of which are
interrelated.

■ Background
■ Hard Disk Technology
■ Mechanism
■ Rotation
■ Speed
■ Latency
■ Standby Mode
■ Data Transfer Rate
■ Platters
■ Substrates
■ Areal Density
■ Oxide Media
■ Thin Film Media
■ Contamination
■ Read/Write Heads
■ Physical Design
■ Electrical Design
■ Head Actuators
■ Geometry
■ Tracks
■ Cylinders
■ Sectors
■ Physical Format
■ Sector Identification
■ Sector Interleave
■ Cylinder Skewing
■ Addressing
■ CHS Addressing
■ Sector Translation
■ Logical Block Addressing
■ Sector Re-Mapping
■ Disk Parameters
■ File System
■ File Allocation Table
■ Clusters
■ Compression
■ High Performance File System
■ New Technology File System
■ Capacity Limits
■ 10/16MB Limit
■ 32MB Limit
■ 128MB Limit
■ 528MB Limit
■ 2GB Limit
■ 8GB Limit
■ Performance Issues
■ Average Access Time
■ Data Transfer Rate
■ Disk Caching
■ AV Drives
■ Cartridge Drives
■ SyQuest
■ Bernoulli
■ Jaz
■ Alternate Technologies
■ Magneto-Optical
■ Standards
■ Performance
■ Applications
■ Phase-Change Disks
■ Compatibility
■ Applications
■ Installation
■ Physical Issues
■ Internal Installation
■ External Installation
■ Setup and Operation
■ Drive Recognition
■ BIOS Setup
■ Low Level Formatting
■ Partitioning
■ Formatting
■ Reliability
■ MTBF
■ Warranty
■ Support

Hard Disks

A PC without a hard disk demonstrates solid-state senility—all that’s left of long-term memory are
the brief flashbacks that can be loaded (with great effort and glacial speed) by floppy disks, tape, or
typing. What’s left is a curiosity to be nursed along until death overtakes it—probably your
own—because a PC without a hard disk will make you wish you were dead.
The hard disk is the premier mass storage device for today's PCs. No other peripheral can approach
the usefulness of the hard disk’s combination of speed, capacity, and straightforward user installation.
Your PC’s hard disk stores your files and extends the RAM capacity of your PC with virtual memory.
It deals in megabytes, hundreds or thousands of them. In one second, the disk has to be able to
remember or disgorge information equivalent to the entire contents of a physics textbook or novel.
And it must be equally capable of casting aside its memories and replacing them with revised versions
to keep your system up to date. That’s a big challenge, particularly for a device that may be no larger
than a deck of playing cards and uses less power than a night light.
Perhaps the most amazing thing about hard disks is their ability to keep up with the needs of
contemporary programs. The first PCs didn’t even have hard disks. The first drives were about half
the size of a shoebox and held ten megabytes, a fraction of what you need for a single Windows 95
program. Today’s hard disk takes up about a tenth the space and holds 100 times the data—or more.
In fact, the standard unit of measurement for disk capacity has shifted a thousand-fold, from
megabytes to gigabytes. Yet all the while the cost of hard disks has been plummeting, not just the cost
per megabyte, but the basic price of the standard equipment drive.
Depending on your needs and demands, hard disks can be expensive or cheap. Like tires, power tools,
and companions, they come in various sizes and speed ratings. You can scrounge through ads and find
decade-old hard disk drives that will still plug into your PC at prices that will make Scrooge
smile—and you weep while you wait and wait for its ancient technology to catch up with the demands
of a modern microprocessor.
In truth, today’s hard disk drives have little in common with their forebears of as few as five years
ago. Modern hard disks take up less space, respond faster, have several times the capacity, last
several times as long, and have nowhere near the failure potential of older drives. A modern drive
won’t even plug into your PC the same way early hard disks did. New and constantly evolving
interfaces promise to keep pushing up speeds while making installation easier.
While the standards of speed and quality among hard disks have never been higher, sorting among
your options has never been tougher. As the range of available products grows wider, the differences
between the competition at each level have narrowed. Finding the one right hard disk now more than
ever requires understanding what’s inside a drive, what the different mechanisms and technology are,
and what best mates with a modern machine.

Background

Because of their ability to give nearly random access to data, magnetic disk drives have been part of
computers since long before there were PCs. The first drives suffered from the demands of data
processing, however, and quickly wore out. Their heads ground against their disks leaving dust where
data had been. For fast access, some lined dozens of heads along the radius of the disk, each sweeping
its own dedicated range of disk and data. Such designs had fast access speeds, dependent only on the
speed of the spin of the disk (which is still an issue, even today) and minimal maintenance worries
because they had a minimum of moving parts. But the size of the heads and the cost of arraying a raft
of them meant such drives were inevitably expensive. Though not a major problem with mainframe
computers priced in the millions, pricing a PC with such a drive would put computers within the
budgets solely of those with personal Space Shuttles in their garages.
The breakthrough came at IBM’s Hursley Labs near Winchester in England. Researchers there put a
single head to work scanning across the disk to get at every square inch (England had not yet gone
metric) of its surface. Their breakthrough, however, totally eliminated the wear of head against disk
and was destined to set the standard for computer storage for more than three decades. By
floating—actually flying—the read/write head on a cushion of air, the head never touched the disk
and never had a chance to wear it down. Moreover, the essentially friction-free design allowed the head
to move rapidly between positions above the disk.
This original design had two sections, a "fixed" drive that kept its disk permanently inside the drive
and a removable section that could be dismounted for file exchange or archiving. Each held 30
megabytes on a platter about 14 inches across. During development, designers called the drive a 30/30
to reflect its two storage sections. Because Winchester used the .30-30 designation for the cartridge of
its most famous repeating rifle—the gun that won the West—this kind of drive became known as a
Winchester disk drive.
The name "Winchester" first referred to the specific drive model. Eventually it was generalized to any
hard disk. In the computer industry, however, the term was reserved for drives that used the same
head design as the original Winchester. New disk drives—including all of those now in PCs—do not
use the Winchester head design.
Besides Winchester, you may also hear other outdated terms for what we today call a "hard disk."
Many folks at IBM still refer to them as "fixed disks." When computer people really want to confound
you, they will sometimes use another IBM term from the dark ages of computing, DASD, which
stands for Direct Access Storage Device. No matter the name, however, today all hard disks are
essentially the same in principle, technology, and operation.

Hard Disk Technology

The hard disk is actually a combination device, a chimera that’s part electronic and part mechanical.
Electrically, the hard disk performs the noble function of turning evanescent pulses of electronic
digital data into more permanent magnetic fields. As with other magnetic recording devices—from
cassette recorders to floppy disks—the hard disk accomplishes its end using an electromagnet, its
read/write head, to align the polarities of magnetic particles on the hard disks themselves. Other
electronics in the hard disk system control the mechanical half of the drive and help it properly
arrange the magnetic storage and locate the information that is stored on the disk.

Mechanism

The mechanism of the typical hard disk is actually rather simple, comprising fewer moving parts than
such exotic devices as the electric razor and pencil sharpener. The basic elements of the system
include a stack of one or more platters—the actual hard disks themselves. Each of these platters serves
as a substrate upon which is laid a magnetic medium in which data can be recorded. Together the
platters rotate as a unit on a shaft, called the spindle. Typically, the shaft connects directly to a spindle
motor that spins the entire assembly.

Rotation

Hard disks almost invariably spin at a single, constant rate measured in revolutions per minute or
RPM. This speed does not change while the disk is in operation, although some disks may stop to
conserve power. This constant spin is technically termed constant angular velocity recording. This
technology sets the speed of the disk’s spin at a constant rate so that in any given period over any
given track, the drive’s read/write head hangs over the same length arc (measured in degrees) of the
disk. The actual length of the arc, measured linearly (in inches or centimeters) varies depending on the
radial position of the head. Although the tiny arc made by each recorded bit has the same length when
measured angularly (that is, in degrees), when the head is farther from the center of the disk the
bit-arcs are longer when measured linearly (that is, in inches or millimeters). Despite the greater
linear length of each bit toward the outer edge of the disk, every track stores the same number of bits
and the same amount of information; a track at the outer edge holds exactly as many bits as one at the
inner edge.
Constant angular velocity equipment is easy to build because the disk spins at a constant number of
RPM. Old vinyl phonograph records are the best example of constant angular velocity recording—the
black platters spun at an invariant 33, 45, or 78 RPM. Nearly all hard disks and all ISO standard
magneto-optical drives use constant angular velocity recording.
A more efficient technology, called constant linear velocity recording, alters the spin speed of the
disk depending on how near the center of the disk the read/write head lies, so that in any given period the
same length of track passes below the head. When the head is near the outer edge of the disk, where
the circumference is greater, the slower spin allows more bits and data to be packed into each spin.
Using this technology, a given size disk can hold more information.
Figure 10.1 illustrates the on-disk difference between the two methods of recording. The sector length
varies in constant angular velocity but remains constant using constant linear velocity. The number of
sectors is the same for each track in constant angular velocity recording but varies with constant linear
velocity.
Figure 10.1 Comparison of constant angular and linear velocity recording methods.

Constant linear velocity recording is ill-suited to hard disks. For the disk platter to be properly read or
written to, it must be spinning at the proper rate. Hard disk heads regularly bounce from the outer
tracks to the inner tracks as your software requests them to read or write data. Slowing or speeding up
the platter to the proper speed would require a lengthy wait, perhaps seconds because of inertia, which
would shoot the average access time of the drive through the roof. For this reason, constant linear
velocity recording is used for high capacity media that don’t depend so much on quick random access.
The most familiar is the Compact Disc, which sacrifices instant access for sufficient space to store
your favorite symphony.
Modern hard disks compromise between constant angular velocity and constant linear velocity
recording. Although they maintain a constant rotation rate, they alter the timing of individual bits
depending on how far from the center of the disk they are written. By shortening the duration of the
bits (measured in microseconds) over longer tracks the drive can maintain a constant linear length
(again, measured in inches or whatever) for each bit. This compromise technique underlies multiple
zone recording technology, which we will more fully discuss later in this chapter.
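The arithmetic behind this compromise is easy to sketch. The numbers below (radii, bit counts) are illustrative assumptions, not figures for any real drive; they simply show that holding the linear bit length constant lets an outer track hold proportionally more bits than it would under pure constant angular velocity recording.

```python
import math

# Illustrative sketch: under pure constant angular velocity (CAV), every
# track holds the same bit count, so bits on the longer outer tracks
# stretch over longer arcs. Zoned recording instead holds the linear bit
# length constant, fitting more bits onto each outer track.

def track_circumference(radius_inches):
    """Linear length of one track (inches) at a given radius."""
    return 2 * math.pi * radius_inches

inner_r, outer_r = 0.75, 1.75          # hypothetical recording radii (inches)
bits_per_track_cav = 100_000           # fixed bit count per track under CAV

# Linear bit length under CAV at each radius (inches per bit)
cav_len_inner = track_circumference(inner_r) / bits_per_track_cav
cav_len_outer = track_circumference(outer_r) / bits_per_track_cav

# Zoned recording: reuse the inner track's bit length everywhere, so the
# outer track holds proportionally more bits.
zoned_bits_outer = int(track_circumference(outer_r) / cav_len_inner)

print(f"CAV bit length, inner vs outer: {cav_len_inner:.2e} vs {cav_len_outer:.2e} in")
print(f"Outer-track bits: {bits_per_track_cav} (CAV) vs {zoned_bits_outer} (zoned)")
```

With these example radii, the outer track is 2.33 times the length of the inner one, and zoned recording stores 2.33 times as many bits on it.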

Speed


The first disk drives (back in the era of the original IBM Winchester) used synchronous motors. That
is, the motor was designed to lock its rotation rate to the frequency of the AC power line supplying the
disk drive. As a result, most motors of early hard disk drives spun the disk at the same rate as the
power line frequency, 3600 revolutions per minute, which equals the 60 cycles per second of
commercial power in the United States.
Synchronous motors are typically big, heavy, and expensive. They also run on normal line
voltage—117 volts AC—which is not desirable to have floating around inside computer equipment
where a couple of errant volts can cause a system crash. As hard disks were miniaturized, disk makers
adopted a new technology—the servo-controlled DC motor—that eliminated these problems. A
servo-controlled motor uses feedback to maintain a constant and accurate rotation rate. That is, a
sensor in the disk drive constantly monitors how fast the drive spins and adjusts the spin rate should
the disk vary from its design specifications.
Because servo motor technology does not depend on the power line frequency, manufacturers are free
to use any rotation rate they want for drives that use it. Early hard disks with servo motors stuck with
the standard 3600 RPM spin to match their signal interfaces designed around that rotation rate. Once
interface standards shifted from the device level to the system level, however, matching rotation speed
to data rate became irrelevant. With system-level interfaces, the raw data is already separated,
deserialized, and buffered on the drive itself. The data speeds inside the drive are entirely independent
from those outside. With this design, engineers have a strong incentive for increasing the spin rate of
the disk platter: The faster the drive rotates, the shorter the time that passes between the scan of any
two points on the surface of the disk. A faster spinning platter makes a faster responding drive and
one that can transfer information more quickly. With the design freedom afforded by modern disk
interfaces, disk designers can choose any spin speed without worrying about signal compatibility. As
a result, the highest performing hard disks have spins substantially higher than the old
standard—some rotate as quickly as 5400 or 7200 RPM.
Note that disk rotation speed cannot be increased indefinitely. Centrifugal force tends to tear apart
anything that spins at high rates, and hard disks are no exception. Disk designers must balance
achieving better performance with the self-destructive tendencies of rapidly spinning mechanisms.
Moreover, overhead in PC disk systems tends to overwhelm the speed increases won by quickening
disk spin. Raising speed results in diminishing returns. According to some developers, the optimum
rotation rate (the best trade-off between cost and performance) for hard disks is between 4500 and
5400 RPM.

Latency

Despite the quick and constant rotation rate of a hard disk, it cannot deliver information instantly on
request. There’s always a slight delay that’s called latency. This term describes how long after a
command to read from or write to a hard disk the disk rotates to the proper angular position to locate
the specific data needed. For example, if a program requests a byte from a hard disk and that byte has
just passed under the read/write head, the disk must spin one full turn before that byte can be read
from the disk and sent to the program. If read and write requests occur at essentially random times in
regard to the spin of the disk (as they do) on the average the disk has to make half a spin before the
read/write head is properly positioned to read or write the required data. Normal latency at 3600 RPM
means that the quickest you can expect your hard disk—on the average—to find the information you
want is 8.33 milliseconds. For a computer that operates with nanosecond timing, that’s a long wait,
indeed.
The newer hard disks with higher spin speeds cut latency. The relationship between rotation and
latency is linear, so each percentage increase in spin pushes down latency by the same factor. A
modern drive with a 5400 RPM spin achieves a latency of 5.6 milliseconds.
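Because latency is simply half the time of one revolution, the figures above fall directly out of the spin rate. A minimal calculation reproduces them:

```python
# Rotational latency from spin rate: on average the platter must turn
# half a revolution before the wanted data arrives under the head.

def avg_latency_ms(rpm):
    """Average rotational latency in milliseconds for a given spin rate."""
    ms_per_revolution = 60_000.0 / rpm
    return ms_per_revolution / 2       # random requests wait half a turn on average

for rpm in (3600, 5400, 7200):
    print(f"{rpm:5d} RPM -> {avg_latency_ms(rpm):.2f} ms average latency")
```

The 3600 RPM figure works out to 8.33 milliseconds and the 5400 RPM figure to 5.56 milliseconds, matching the values quoted in the text.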

Standby Mode

During operation, the platters in a hard disk are constantly spinning because starting and stopping
even the small mass of a two-inch drive causes an unacceptable delay in retrieving or archiving your
data. This constant spin assures that your data will be accessible within the milliseconds of the latency
period.
In some applications, particularly notebook computers, the constantly spinning hard disk takes a toll.
Keeping the disk rotating means constant consumption of power by the spindle motor, which means
shorter battery life. Consequently, some hard disks aimed at portable computers are designed to be
able to cease spinning when they are not needed. Typically, the support electronics in the host
computer determine when the disk should stop spinning. In most machines that means if you don’t
access the hard disk for a while, the computer assumes you’ve fallen asleep, died, or had your body
occupied by aliens and won’t be needing to use the disk for a while. When you do send out a
command to read or write the disk, you then will have to wait while it spins back up to
speed—possibly as long as several seconds. Subsequent accesses then occur at high hard disk speeds
until the drive thinks you’ve died again and shuts itself down.
The powering down of the drive increases the latency from milliseconds to seconds. It can be a big
penalty. Consequently, most notebook computers allow you to adjust the standby delay. The longer
the delay, the more likely your drive will be spinning when you want to access it—and the quicker
your PC’s battery will discharge. If you work within one application, a short delay can keep your PC
running longer on battery power. If you shift between applications when using Windows or save your
work often, you might as well specify a long delay because your disk will be spinning most of the
time, anyway. Note, too, that programs with autosaving defeat the purpose of your hard disk’s standby
mode, particularly when you set the autosave delay to a short period. For optimum battery life, you’ll
want to switch off autosaving—if you have sufficient faith in your PC.

Data Transfer Rate

The speed of the spin of a hard disk also influences how quickly data can be continuously read from a
drive. At a given storage density (which disk designers try to make as high as possible to pack as
much information in as small a package as possible), the quicker a disk spins, the faster information
can be read from it. As spin rates increase, more bits on the surface of the disk pass beneath the
read/write head in a given period. This increase directly translates into a faster flow of data—more
bits per second.


The speed at which information is moved from the disk to its control electronics (or its PC host) is
termed the data transfer rate of the drive. Data transfer rate is measured in megabits per second,
megahertz (typically these two take the same numeric value) or megabytes per second (one-eighth the
megabit per second rate). Higher is better.
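The spin-to-speed relationship can be put in concrete terms: at a fixed storage density, raw bits per second equal the bits per track times the revolutions per second. The bit count below is an illustrative assumption, not a real drive specification.

```python
# Sketch of the spin-speed/transfer-rate relationship: with a fixed bit
# count per track, raw bits per second = bits per track x revolutions
# per second. The bit count is an illustrative assumption.

def media_rate_mbits(bits_per_track, rpm):
    """Raw media transfer rate in megabits per second."""
    revolutions_per_sec = rpm / 60.0
    return bits_per_track * revolutions_per_sec / 1_000_000

rate_3600 = media_rate_mbits(500_000, 3600)
rate_7200 = media_rate_mbits(500_000, 7200)
print(f"3600 RPM: {rate_3600} Mbit/s ({rate_3600 / 8} MB/s)")
print(f"7200 RPM: {rate_7200} Mbit/s ({rate_7200 / 8} MB/s)")
```

Doubling the spin rate doubles the raw rate, and dividing by eight converts megabits per second to megabytes per second.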
The data transfer rates quoted for most hard disks are computed values rather than the speeds you
should expect in using a hard disk drive in the real world. A number of factors drive down the actual
rate at which information can be transferred from a disk drive.
Every disk interface has overhead. Early disk interfaces (see ST506 and ESDI, following) measured
transfer rate as the speed at which they could push raw data between the drive and its
controller—everything on the disk was shoved across the interface. Along with the data you wanted
came a flood of formatting details that would be stripped out. Modern interfaces don’t deal in raw
data, so they don’t suffer this limitation. But they still are slowed by the time and overhead needed to
negotiate each data transfer.
The measure of the actual amount of useful information that moves between a disk drive and your PC
is called the throughput. It is always lower—substantially lower—than the disk’s data transfer rate.
The actual throughput achieved by a drive system varies with where the measurement is made
because each step along the way imposes overhead. The throughput between your drive and controller
is higher than between drive and memory. And the actual throughput to your programs—which must
be managed by your operating system—is slower still. Throughput to DOS on the order of a few
hundred kilobytes per second is not unusual for hard disk drives that have quoted transfer rates in
excess of ten or twenty megabytes per second.

Platters

The disk spinning inside the hard disk drive is central to the drive—in more ways than one. The
diameter of this platter determines how physically large a drive mechanism must be. In fact, most hard
disk drives are measured by the size of their platters. When the PC first burst upon the world, hard
disk makers were making valiant attempts at hard disk platter miniaturization, moving from those
eight inches in diameter (so-called eight-inch disks) to 5.25-inch platters. Today the trend is to
ever-smaller platters. Most large-capacity drives bound for desktop computer systems now use
3.5-inch platters. Those meant for PCs in which weight and size must be minimized (which means, of
course, notebook and smaller PCs) have platters measuring 2.5, 1.8, or 1.3 inches (currently the
smallest) in diameter. (See Chapter 25, "The Case," for form-factor details.)

To increase storage capacity in conventional magnetic hard disk storage systems, both sides of a
platter are used for storing information, each surface with its own read/write head. (One head is on the
bottom where it must fly below the platter.) In addition, manufacturers often put several platters on a
single spindle, making a taller package with the same diameter as a single platter. The number of
platters inside a hard disk also influences the speed at which data stored on the hard disk can be found.
The more platters a given disk drive uses, the greater the probability that one of the heads associated
with one of those platters will be above the byte that’s being searched for. Consequently, the time to
find information is reduced.
Adding platters has drawbacks besides increasing the height of a drive. More platters means greater
mass so their greater inertia requires longer to spin up to speed. This is not a problem for desktop
machines—power-on memory checks typically take longer than even the most laggardly hard disk
requires to spin up. But an additional wait is annoying in laptop and notebook computers that slow
down and stop their hard disks to save battery energy. Additionally, because each surface of each
platter in a hard disk has its own head, the head actuator mechanism inevitably gets larger and more
complex as the number of platters increases. Inertia again takes its toll, slowing down the movement
of the heads and increasing the access time of the drive. Of course, drive makers can compensate for
the increased head actuator mass with more powerful actuators, but that adds to the size and cost of
the drive.

Substrates

The platters of a conventional magnetic hard disk are precisely machined to an extremely fine
tolerance measured in micro-inches. They have to be—remember, the read/write head flies just a few
micro-inches above each platter. If the disk juts up, the result is akin to a DC-10 encountering Pike’s
Peak, a crash that’s good for neither airplane nor hard disk. Consequently, disk makers try to ensure
that platters are as flat and smooth as possible.
The most common substrate material is aluminum, which has several virtues: It’s easy to machine to a
relatively smooth surface. It’s generally inert, so it won’t react with the material covering it. It’s
non-magnetic so it won’t affect the recording process. It’s been used for a long while (since the first
disk drives), and is consequently a familiar material. And above all, it’s cheap.
A newer alternative is commonly called the glass platter, although the actual material used can range
from ordinary window glass to advanced ceramic compounds akin to Space Shuttle skin. Glass
platters excel at exactly the same qualities as do aluminum platters. On the positive side, they hold the
advantage of being able to be made smoother and allowing read/write heads to fly lower. But because
glass is newer, it’s less familiar to work with. Consequently, glass-plattered drives are moving slowly
into the product mainstream.

Areal Density

The smoothness of the substrate affects how tightly information can be packed on the surface of a
platter. The term used to describe this characteristic is areal density, that is, the amount of data that
can be packed onto a given area of the platter surface. The most common unit for measuring areal
density is megabits per square inch. The higher the areal density, the more information can be stored
on a single platter. Smaller hard disks require greater areal densities to achieve the same capacities as
larger units.
Current products achieve areal densities on the order of 500 to 1000 megabits per square inch.
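Areal density translates into platter capacity through simple geometry: data lives in an annulus between the innermost and outermost recording radii, on both surfaces. The radii below are illustrative assumptions; the density is taken from the middle of the 500 to 1000 megabit-per-square-inch range cited in the text.

```python
import math

# Rough capacity estimate from areal density: data occupies an annulus
# between the inner and outer recording radii, on both platter surfaces.
# The radii and density are illustrative assumptions, not a drive spec.

def platter_capacity_mbytes(density_mbits_sq_in, outer_r, inner_r, sides=2):
    """Approximate platter capacity in megabytes."""
    area_per_side = math.pi * (outer_r**2 - inner_r**2)   # square inches
    total_mbits = density_mbits_sq_in * area_per_side * sides
    return total_mbits / 8.0                              # megabits -> megabytes

# A 3.5-inch platter recorded between 0.75- and 1.7-inch radii at
# 750 Mbit per square inch:
capacity = platter_capacity_mbytes(750, 1.7, 0.75)
print(f"Approximate capacity: {capacity:.0f} MB per platter")
```

Such a sketch makes plain why smaller platters demand higher areal densities: shrinking the radii shrinks the annulus, so only a denser medium can keep capacity constant.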
A number of factors influence the areal density that can be achieved by a given hard disk drive. The
key factor is the size of the magnetic domain that encodes each bit of data, which is controlled in turn
by several factors. These include the height at which the read/write head flies and the particle (grain)
size of the medium.


Manufacturers make read/write heads smaller to generate smaller fields and fly them as closely to the
platter as possible without risking the head running into the jagged peaks of surface roughness. The
smoothness of the medium limits the lowest possible flying height—a head can fly closer to a
smoother surface.
The size of magnetic domains on a disk is also limited by the size of the magnetic particles
themselves. A domain cannot be smaller than the particle that stores it. At one time, ball mills ground
a magnetic oxide medium until the particle size was small enough for the desired application. Platters
were coated with a slurry of the resulting magnetic material. Modern magnetic materials minimize
grain size by electroplating the platters.

Oxide Media

The first magnetic medium used in hard disks was made from the same materials used in conventional
audio recording tapes: ferric or ferrous oxide compounds—essentially fine grains of rather exotic rust.
As with recording tape, the oxide particles are milled in a mixture of other compounds including a
glue-like binder and often a lubricant. The binder also serves to isolate individual oxide particles from
one another. This mud-like mixture is then coated onto the platters.
The technology of oxide coatings is old and well developed. The process has been evolving for more
than 50 years, and now rates as a well understood, familiar—and obsolete—technology. New hard
disk designs have abandoned oxide media, and with several good reasons. Oxide particles are not the
best storers of magnetic information. Oxides tend to have lower coercivities and their grains tend to be
large when compared to other, newer media technologies. Both of these factors tend to limit the areal
density available with oxide media. The slight surface roughness of the oxide medium compounds
that of the platter surface, requiring the hard disk read/write head to fly farther away from it than other
media, which also reduces maximum storage density. In addition, oxide coatings are generally soft
and are more prone to getting damaged when the head skids to a stop, when the disk ceases its spin, or
when a shock to the drive causes the head to skitter across the platter surface, potentially strafing your
data as effectively as an attack by the Red Baron.

Thin Film Media

In nearly all current hard disk drives, oxide coatings have been replaced by thin film magnetic media.
As the name implies, a thin film disk has a microscopically skinny layer of a pure metal, or a mixture
of metals, mechanically bound to its surface. These thin films can be applied either by plating the
platter much the way chrome is applied to automobile bumpers, or by sputtering, a form of vapor
plating in which metal is ejected off a hot electrode in a vacuum and electrically attracted to the disk
platter.
Thin film media hold several special advantages over oxide technology. The very thinness of thin film
media allows higher areal densities because the magnetic field has less thickness in which to spread
out. Because the thin film surface is smoother, it allows heads to fly closer. Thin film media also has
higher coercivities, which allows smaller areas to produce the strong magnetic pulses needed for
error-free reading of the data on the disk.


One reason that thin film can be so thin and support high areal densities is that, as with chrome-plated
automobile bumpers and faucets, plated and sputtered media require no binders to hold their magnetic
layers in place. Moreover, as with chrome plating, the thin films on hard disk platters are genuinely
hard, many times tougher than oxide coatings. That makes them less susceptible to most forms of
head crashing—the head merely bounces off the thin film platter just as it would your car’s bumpers.

Contamination

Besides shock, head crashes can result from contaminants such as dust or air pollution.



Winn L. Rosch Hardware Bible, Electronic Edition, Chapter 11

Chapter 11: Floppy Disks


The floppy disk is the premier data exchange medium for PCs and the most popular
backup system. Except for a few notebook computers, all PCs come with at least one
floppy disk drive as standard equipment. Although floppy disk drives come in a variety of
sizes and capacities (disks measure from 2.5 to 8 inches in diameter and store from
160KB up to 120MB each), all work in essentially the same way.

■ Background
■ History
■ Media
■ Sides and Coatings
■ Magnetic Properties
■ Diskettes
■ 3.5-Inch Floppies
■ 5.25-Inch Floppies
■ Floptical
■ 100MB Technologies
■ Iomega ZipDisk
■ LS-120
■ Drives
■ Mechanical Design
■ Speed Control
■ Head Control
■ Head Indexing
■ Extra-High Density Considerations
■ Controllers
■ Operation
■ Hardware

http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/wrh11.htm (1 de 6) [23/06/2000 05:28:31 p.m.]



■ Integration
■ Setup
■ Drive Support
■ Setup
■ Compression
■ Care
■ Magnetic Enemies
■ Physical Dangers
■ Care Recommendations
■ Radical Recovery

11

Floppy Disks

Since the first PC booted up, the floppy disk has been a blessing and a curse, subject of the same old
saw usually reserved for kids, spouses, and governments—"You can’t live with them, and you can’t
live without them."
You can’t live without floppy disks because they provide the one universal means of information
interchange, data storage, and file archiving used by PCs. They’re convenient—you can stuff half a
dozen floppy disks into a shirt pocket—and easy to use. Slide a disk into a slot, and you’ve got
another megabyte or so online. Push a button, pop out the disk, and you’re ready for another
megabyte. No other medium is so simple and universal. Tape cartridges come in dozens of formats;
cartridge disks require software drivers and their own maintenance software; CDs add that
complication and make recording an expensive proposition, with a recordable CD drive alone costing
as much as some PCs.
Despite the appeal of floppies, most people have a hard time living with floppy disks because of their
frustration factor. Floppy disks have traditionally been slow and small. No matter how much a floppy
disk holds, it will be a few kilobytes shy of what you need. Moreover, floppy disks are plagued by
problems. Subtle magnetic differences between disks often have no apparent effect until months after
you’ve trusted your important data to a disk that can no longer be read. A profusion of standards
means you need to carefully match disks to drives, drive to controllers, and the whole kaboodle to
DOS. You never seem to have the right drive to match the disk that came in the box of software—or
enough space for all the drives you need to match the proliferating floppy disk standards. Indeed,
floppies are like taxes—something that everyone lives with and no one likes.


Background

The floppy disk itself is only part of a system. Just as a nail is worthless in the abstract (without the
hammer to drive it in place and the arm to swing the hammer), so is a floppy disk, unless you have all
the other parts of the system. These include the floppy disks themselves, called the media, the floppy
disk drive mechanism, the floppy disk drive controller, and the disk operating system software. All
four elements are essential for the proper (and useful) operation of the system.
The floppy disk provides a recording medium that has several positive qualities. The flat disk surface
allows an approximation of random access. As with hard disks, data are arranged in tracks and
sectors. The disk rotates the sectors under a read/write head, which travels radially across the disk to
mark off tracks. More importantly, the floppy disk is a removable medium. You can shuffle dozens of
floppies in and out of drives to extend your storage capacity. The floppy disk in the drive provides
online storage. Offline, you can keep as many floppy disks as you want.
The term "floppy disk" is one of those amazingly descriptive terms that abound in this age of
genericisms. Inside its protective shell, the floppy disk medium is both floppy (flexible) and a wide,
flat disk. The disks are stamped out from wide rolls of the magnetic medium like cutting cookies from
dough.
The wide rolls look like hyperpituitary audio or video tape, and that’s no coincidence. Its
composition is the same as for recording tape—a polyester substrate on which a magnetic oxide is
bound. Unlike tape, however, all floppy disks are coated with magnetic material on both sides. The
substrate is thicker than tape, too, about three mils, compared to one mil or less for recording tape.
After all, if floppies were thinner, they’d be too floppy.
To protect it, the floppy resides in a shell. The first floppies had a truly floppy protective shell, one
made out of thicker but still flexible Mylar. Today, the floppy fits into a hard case and overall is not
very floppy. The disk inside, the one made from the media, remains floppy so the name remains the
same—uniquely accurate in a world of computers inhabited by spinning fixed disks and recordable
read-only memory.
That said, the floppy disk has taken on more forms than a schizophrenic changeling. Over the years,
the floppy has adapted to various sizes, storage densities, formats, and recording technologies. The
traditionally magnetic-only medium has taken an optical boost to squeeze ever more data into as little
rust as possible. From an initial 160 kilobytes, floppy disk capacity has grown almost a thousandfold,
to 120 megabytes, while the available area for storing data was cut in half. Although some folks scoff
and say that floppy disk technology hasn’t kept up with processing power and hard disk capacity, all
have grown at about the same rate, now with about a thousandfold improvement from their
beginnings. And all bear as little resemblance to their progenitors as we do to the slime from which
we arose.

History

The concept of the floppy disk arose long before the PC was conceived. When the floppy was first
conceived, personal computers didn’t exist and no one appeared to have any need for the medium as a
data exchange. IBM is usually credited with the creation of the floppy, but one that neither looked nor
operated like today’s floppies. The most obvious difference between the first floppy and current disks
was that it was bigger, an 8-inch disk in a slightly larger Mylar envelope (like that of today’s 5.25-inch
floppy disks). Rather than a read/write medium, it was more an early CD ROM, a read-only disk for
distributing information. In particular, IBM used it to store diagnostic programs and micro-code for its
large computer system, instead of tape (too cumbersome and many applications didn’t require such a
large capacity) or memory chips (too expensive). These first floppies held about 100K of data and
program code using single-density recording on a single side of the medium. By 1973 the 8-inch
floppy had been adapted to a convenient read-write medium suitable for both the original application
and for storage of data-entry systems such as IBM’s DisplayWriter word processing system.
The eight-inch floppy disks had a number of features going for them that made them desirable as a
computer data storage medium . They were compact (at least compared to the ream of paper that
could hold the same amount of information), convenient, and standardized. Above all, they were
inexpensive to produce and reliable enough to depend on. From the computer hobbyists’ standpoint,
their random access ability made them a godsend for good performance, at least when compared to
the only affordable alternative, the cassette tape.
In 1976 Shugart Associates introduced the 5.25-inch floppy disk, a timely creation that exactly
complemented the first commercial PCs, introduced at about the same time. (Both Apple Computer
and Microsoft Corporation were founded in 1976, although fledgling Microsoft offered a BASIC
interpreter as its first product—its operating system for floppy disks did not arrive until 1981.)
Because they were smaller than the older 8-inch variety, these 5.25-inch floppies were called
diskettes by some. The irregularly used name later spread to even smaller sizes of floppy disks.
In 1980 Sony Corporation introduced the 3.5-inch floppy disk of the same mechanical construction
that we know today. The initial reception was lukewarm to say the least. The 5.25-inch disk was the
unassailable storage standard. The little disks, however, gained a foothold in the small computer
marketplace when Apple adopted them for their initial Macintosh in 1984. In the PC industry,
however, the 3.5-inch floppy remained off limits until about 1986 when the first notebook computers
needed a more compact storage system. Its place was assured when first IBM then the rest of the
computer industry moved to the new diskette size.
Computer makers experimented with all sorts of floppy formats, including disks as small as 2.5 inches
introduced by Zenith Data Systems for an early sub-notebook PC. None of these alternate formats
have survived, although in 1996 Iomega Corporation was exploring a two-inch new-technology
floppy with about 20 megabytes capacity.
As data needs increased, media and drive manufacturers made several attempts at creating larger
capacity floppies. Extra-high Density 3.5-inch floppies, which doubled traditional floppy capacity to
2.88MB, remain available but little used. In 1988, the Floptical drive bumped single-disk capacity to
20MB, but never won more than a small market niche.
In 1996, the PC market saw the introduction of two floppy systems with capacities over 100MB.
Iomega Corporation, one of the vendors of Floptical systems, developed a proprietary system called
the Zip drive. Shortly thereafter an industry consortium promoted the LS-120 system with not only
slightly greater capacity but also backward compatibility with 1.44MB floppy disks.


Media

The traditional floppy disk medium itself is the thin, flexible disk inside the protective shell. This disk
is actually a three layer sandwich, the meat of which is a polyester substrate that measures about 3.15
mils (thousandths of an inch) or 80 micrometers thick. The bread is the magnetic recording medium
itself, a coating less than one-thousandth of an inch thick on each side of the substrate.
Nearly all floppy disks use this same substrate. The new exception is the advanced-technology
LS-120 system, which uses a slightly thinner substrate (2.5 mils) that’s cut from a different plastic,
polyethylene terephthalate (PET). This substrate is more flexible than the traditional polyester to let the
LS-120 medium bend better around the head for more reliable contact.
The floppy disk medium starts out as vast rolls of the substrate that are coated in a continuous process
at high speed. The stamping machine cuts individual disks from the resulting roll or web of medium
like a cookie cutter. After some further mechanical preparation (for example, a metal hub is attached
to the cookies of 3.5 inch disks), another machine slides the disks into their protective shells.

Sides and Coatings

No matter the substrate, a mixture of magnetic oxide and binder coats both sides of the substrate. Even
so-called "single-sided" disks are coated on both sides.
Even though all traditional floppy disks have an oxide coating on both sides, media makers sometimes
offer single-sided disks. Instead of omitting the coating on one side (which might make the disk
vulnerable to warping from temperature or humidity changes), the manufacturer simply skips testing
one side, certifying only one side will accept the data without error. By convention, the bottom
surface of the disk is used in single-sided floppy disk drives.
The actual testing of the floppy disk is one of the most costly parts of the manufacturing process.
Testing two sides takes more time and inevitably results in more rejected disks. Two sides simply
provide more space in which problems can occur. Both single-sided and double-sided floppy disks
may be made from exactly the same batch of magnetic medium.
In years gone by, some particularly frugal floppy disk users trimmed the price they paid for floppy
disks by substituting their own testing (during the floppy disk format process) for that ordinarily
performed by the manufacturer. They bought single-sided disks and attempted to format them
double-sided. Every disk they successfully formatted on both sides was a bonus. What they really
accomplished was shifting the cost of testing the medium from the manufacturer to themselves—their
bargain was paid for by their own time. The economics of today’s floppy disks make such bargains
dubious. When double-sided disks cost 25 cents, single-sided disks must be cheap indeed (if they are
even available) to make the procedure pay off.

Magnetic Properties


The thickness of the magnetic coating on the floppy disk substrate varies with the disk type and
storage density. In the most common disk types it measures from 0.035 mil to 0.1 mil (that is, 0.9 to
2.5 micrometers). In general, the higher the storage density of the disk, the thinner the magnetic
coating. The individual particles are also finer grained. Table 11.1 lists the coating thicknesses for
common floppy disk types.

Table 11.1. Floppy Disk Media Characteristics

Disk type                  Coating thickness   Coercivity
5.25-inch double-density   2.5 micrometers     290 oersteds
5.25-inch high-density     1.3 micrometers     660 oersteds
3.5-inch double-density    1.9 micrometers     650 oersteds
3.5-inch high-density      0.9 micrometers     720 oersteds

Although all common floppy disk coatings use ferric oxide magnetic media, engineers have tailored
the magnetic particles in the mix to the storage density of each disk type.



Winn L. Rosch Hardware Bible, Electronic Edition, Chapter 12

Chapter 12: Compact Discs


From a simple stereo component, the Compact Disc has blossomed into today’s premiere
digital exchange medium. A single disk can slip up to 680 megabytes of information into
your PC. In coming years, the CD itself will be eclipsed by its designated successor, the
Digital Versatile Disk that starts with eight times the CD’s capacity and speed. Even
today, the CD is a many splendored thing—nearly every application has its own
standards and requirements. The DVD promises even more—more versatility and more
standards.

■ Background
■ Technology
■ Medium
■ Basic Format
■ Data Coding
■ Sessions
■ Addressing
■ Capacity
■ Standards
■ Red Book
■ Yellow Book
■ Green Book
■ Orange Book
■ Blue Book
■ White Book
■ CD-DA
■ Proprietary Standards
■ CD ROM
■ Format

http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/wrh12.htm (1 de 26) [23/06/2000 05:38:40 p.m.]



■ Players
■ Transfer Rate
■ Access Time
■ Mechanism
■ Changers
■ Controls
■ CD-Recordable
■ Media
■ Dye Layer
■ Physical Format
■ Capacity
■ Vulnerabilities
■ Operation
■ Speed
■ Disc Writing Modes
■ Underrun
■ Testing
■ Fixation
■ Labeling
■ CD-Erasable
■ Photo CD
■ Color
■ Resolution
■ Image Compression
■ Video CD
■ DVD
■ Physical Specifications
■ Standards
■ Installation
■ Interface
■ ATAPI or IDE
■ SCSI
■ Parallel
■ Proprietary
■ Audio Connections


■ Power Connections
■ Drivers
■ CD Care


For the distribution of digital information—music or data—the Compact Disc is typically the most
affordable alternative for moving hundreds of megabytes to thousands of locations. This low cost
makes the CD ROM the premiere digital publishing medium. Already, hundreds of CD ROM titles
are available, each one holding an encyclopedia of data.
The CD ROM system just doesn’t make sense without the availability of disks with data already
stored on them. Because you can’t write to a CD ROM, you can’t do anything with the drive unless
you have something to read. In other words, a CD ROM player is not something to buy in the
abstract—you must have some software in mind that you want to slide into it. After all, if nothing but
Hawaiian music was available on audio CD, who but the most dedicated listeners would buy a player?
A wide variety of CD ROM disks are available, though not yet for everyone’s tastes. The CD ROM
market has great potential for growth because the discs are inexpensive to make and easy to
duplicate.
Developed by the joint efforts of Philips and Sony Corporation in the early 1980s, when the digital
age was taking over the stereo industry, the Compact Disc was first and foremost a high fidelity
delivery medium. Initially released in the United States in 1983, within five years it had replaced the
vinyl phonograph record as the premiere stereophonic medium because of its wide dynamic range, lack of
noise, invulnerability to damage, and long projected life.
The seventy or so minutes of music that was one of the core specifications in designing the Compact
Disc system—enough to hold Beethoven’s Ninth Symphony—was a lot of data, over 600 megabytes
worth. With a covetous gleam, computer engineers eyed the shiny medium and discovered that data is
data (okay, data are data) and the Compact Disc could be a repository for more megabytes than
anyone had reason to use. (Remember, these were the days when the pot at the end of the rainbow
held a twenty or thirty megabyte hard disk.) When someone got the idea that a plastic puck that cost a
buck to make and retailed for $16.99 could be filled with last year’s statistics and marketed for $249,
the rush was on. The Compact Disc became the CD ROM (which stands for Compact Disc,
Read-Only Memory), and megabytes came to the masses—at a price.
Soon sound became only one of the applications of the Compact Disc medium. The original name had
to be extended to distinguish musical CDs from all the others. To computer people, the CD of the
stereo system became the CD-DA, Compact Disc, Digital Audio.


Cheap and easy duplication makes CD ROM an ideal distribution medium. Its versatility (the same
basic technology stores data, sound, and video) came about because all engineers eagerly search for
big, cheap storage. The Compact Disc has both of those virtues by design. Hardly coincidentally, the
same stuff that the Compact Disc stores so well is the core of multimedia. Quite naturally, those little
silver discs are the enabling factor behind the multimedia explosion in PCs. Moreover, the Compact
Disc is central to the digitalization and computerization of photography, or at least photographic
storage. The Photo CD system promises to hold your images longer and more compactly than long
familiar photographic film. Within the next few years, the CD ROM will likely become as mandatory
in your next PC as the hard disk is today.

Background

One of the great virtues of the Compact Disc is storage density—little discs mean a lot of megabytes.
The enabling technology behind that high density is optics. Unlike virtually all other PC storage
media, the Compact Disc and CD ROM use light waves instead of magnetic fields to encode
information.
The virtues of light for storage are numerous. Using lenses, you can focus a beam of
light—particularly coherent laser light—to a tiny spot smaller than the most diminutive magnetic
domain writable on a hard disk drive. Unlike the restricted magnetic fields of hard disks that have to
be used within a range of a few millionths of an inch, light travels distance with ease. Leaping along
some 5.9 trillion miles in a year, some beams have been traveling since almost the beginning of the
universe 10 to 15 billion years ago. The equipment that generates the beam of light that writes or
reads optical storage need not be anywhere near the medium itself, which gives equipment designers
more freedom than they could possibly deserve.

Technology

Optical technology underlies the CD ROM. The basic idea is that you can encode binary data as a
pattern of black and white splotches just as on and off electrical signals can. You can make your mark
in a variety of ways. The old reliable method is plain, ordinary ink on paper. The bar codes found
universally on supermarket products do exactly that.
Reading the patterns of light and dark takes only a photodetector, an electrical component that reacts
to different brightness levels by changing its resistance. Light simply lets electricity flow through the
photodetector more easily. Aim the photodetector at the bar code and it can judge the difference
between the bars and background as you move it along (or move the product along in front of it). The
lasers that read bar codes at the checkout counter quicken the scan. The photodetector watches the reflections of the red
laser beam and patiently waits until a recognizable pattern—the bar code as the laser scans across
it—emerges from the noise.
You could store the data of a computer file in one gigantic bar code and bring back paper tape as a
storage medium. Even if you were willing to risk your important data to a medium that turns yellow
and flakes apart under the unblinking eye of the sun like a beach bum with a bad complexion, you’d
still have all the joy of dealing with a sequential storage medium. That means renew your subscription
to your favorite magazines because you’ll have a lot of waiting to do.
The disk, with its random access abilities, is better suited as a storage system. That choice was
obvious even to the audio-oriented engineers who put the first Compact Disc systems together. They
had a successful pattern to follow: the old black vinyl phonograph record. The ability to drop a needle
on any track of a record had become ingrained in the hearts and minds of music lovers for over 100
years. Any new music storage system needed equally fast and easy access to any selection. The same
fast and easy access suits computer storage equally well.

Medium

The heart of the Compact Disc system is the disc itself. Once you’ve stepped past the obvious
decision to choose the disk for its random access abilities, you face many pragmatic decisions: What
size disk? What should it be made from? How fast should it spin? What’s the best way to put the
optical pattern on the disk? What’s the cheapest way to duplicate a million copies when the album
goes platinum? Audio engineers made pragmatic choices about all of these factors long before the
idea of CD ROM had even been conceived.
Size is related to playing time. The bigger the disk, the more data it holds, all else being equal. But a
platter the size of a wading pool would win favor with no one but plastics manufacturers. Shrinking
the size of every splotch of the recorded digital code increases the storage capacity of any size disk,
but technology and manufacturing tolerances limit the minimum size of the storage splotch. Given the
maximum storage density that a workable optical technology would allow (about 150 megabytes per
square inch), the total amount of storage dictates the size of the disk. With the "Ode to Joy" as a
design goal and the optical technology of 1980 to take them there, engineers found a 4.7-inch (120 millimeter) platter
their ideal compromise. A nice, round 100 millimeters was just too small for Beethoven.
The form of the code was another pragmatic choice. For a successful optical music storage system,
normal printing and duplication methods all had their drawbacks. Printing the disk with ink was out of
the question because no printing process can reliably re-create detail as fine as was necessary.
Photography could keep all the detail—an early optical storage system prototype was based on photo
technology—but photographic images are not readily made in million lot quantities.
Besides printing, the one reproduction process that was successfully used to make millions was the
stamping of ordinary phonograph records, essentially a precision molding process. Mechanical
molding and precision optical recording don’t seem a very good match. But engineers found a way.
By altering the texture of a surface mechanically, they could change its reflectivity. A coarse surface
doesn’t reflect light as well as a smooth one; a dark pit doesn’t reflect light as well as a highly
polished mirror. That was the breakthrough: the optical storage disk would be a reflective mirror that
would be dotted with dark pits to encode data. A laser beam could blast pits into the disk. Then the
pits, a mechanical feature, could be duplicated with stamping equipment similar to that used in
manufacturing ordinary phonograph records.
Those concepts underlie the process of manufacturing Compact Discs. First a disk master is recorded
on a special machine with a high powered laser that blasts the pits in a blank recording master making
a mechanical recording. Then the master is made into a mold. One master can make many duplicate
molds, each of which is then mounted in a stamping machine. The machine heats the mold and injects
a glob of plastic into it. After giving the plastic a chance to cool, the stamping machine ejects the disk
and takes another gulp of plastic.
Another machine takes the newly stamped disc and aluminizes it so that it has a shiny, mirror-like
finish. (Some discs, notably those used by Kodak for its Photo CD system, get a gold coating.) To
protect the shine, the disk is laminated with a clear plastic cover that guards the mechanical pattern
from chemical and physical abuse (oxidation and scratches).
This process is much like the manufacture of vinyl records. The principal differences are that the CD
has only one recorded side, its details are finer, and it gets an after-treatment of plating and
laminating. The finishing steps add to the cost of the Compact Disc, but most of the cost in making a
disc is attributable to the cost of the data it stores—either royalties to a recording act or the people
who create, compile, or confuse the information that’s to be distributed.
Computer CDs themselves store information exactly the same way it’s stored on the CDs in your
stereo system, only instead of getting up to 74 minutes of music, a CD ROM disk holds about 680
megabytes of data. That data can be anything from simple text to SuperVGA images, to programs,
and the full circle back to music for multimedia systems.
Compared to vinyl phonograph records or magnetic disks, Compact Discs offer a storage medium that
is long-lived and immune to most abuse. The protective clear plastic layer resists physical tortures (in
fact, Compact Discs are more vulnerable to scratches on their label side than the side that is scanned
with the playback laser). The data pits are sealed within layers of the disk itself and are never touched
by anything other than a light beam—they never wear out and acquire errors only when you abuse the
disks purposely or carelessly (for example, by scratching them against one another when not storing
them in their plastic jewel boxes). Although error correction prevents errors from showing up in the
data, a bad scratch can prevent a disk from being read at all.
Compact Discs show their phonograph heritage in another way. Instead of using a series of concentric
tracks as with magnetic computer storage systems, the data track on the CD is one long, continuous
spiral, much like the single groove on a phonograph record. The CD player scans the track from near
the center of the disk to the outer rim.
To maximize the storage available on a disc, the CD system uses constant linear velocity recording.
The disc spins faster for its inner tracks than it does for the outer tracks so that the same length of
track appears under the read/write head every second. The actual velocity is 1.2 meters per second. As
a result, the spin varies from about 400 RPM at the inner diameter to 200 RPM at the outside edge. At
each spin of the disc, the track advances outward from the center of the disk, a distance called the
track pitch, by 1.6 micrometers. The individual pits that encode data bits are at least 0.83 micrometers
long.
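The spin rates follow directly from the 1.2 meter-per-second linear velocity. A quick sketch of the arithmetic (the inner and outer program-area radii, roughly 25 and 58 millimeters, are assumed typical values, not figures from the text):

```python
import math

def clv_rpm(radius_m, linear_velocity=1.2):
    """RPM needed to keep a constant linear velocity (meters per
    second) passing under the head at a given track radius."""
    return linear_velocity / (2 * math.pi * radius_m) * 60

# Assumed program-area radii: roughly 25 mm at the hub, 58 mm at the rim.
print(round(clv_rpm(0.025)))  # 458 RPM near the center
print(round(clv_rpm(0.058)))  # 198 RPM at the outer edge
```

The computed values land near the round figures cited above; the exact speeds depend on where on the disc the head sits.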

Basic Format

As with other disk media, the CD divides its capacity into short segments called sectors. In the CD
ROM realm, however, these sectors are also called large frames and are the basic unit of addressing.
Because of the long spiral track, the number of sectors or large frames per track is meaningless—it’s
simply the total number of sectors on the drive. The number varies but can reach about 315,000 (for
example, for 70 minutes of music).


Large frames establish the physical format of a Compact Disc; the CD ROM media
standards define them to contain 2352 bytes. (Other configurations can put 2048, 2052, 2056, 2324, 2332, 2340, or
2352 bytes in a large frame.) The CD ROM media standards allow for several data formats within
each large frame, dependent on the application for which the CD ROM is meant. In simple data
storage applications, which use data mode one, 2048 bytes of each 2352-byte large frame actually store data. The
remaining 304 bytes are divided among a synchronization field (12 bytes), sector address tag field (4
bytes), and an auxiliary field (288 bytes). In data mode two, which was designed for less critical
applications not requiring heavy duty error correction, some of the bytes in the auxiliary field may
also be used for data storage, providing 2336 bytes of useful storage in each large frame. Other
storage systems allocate storage bytes differently but in the same large frame structure.
The four bytes of sector address tag field identify each large frame unambiguously. The identification
method hints at the musical origins of the CD ROM system—each large frame bears an identification
by minute, second, and frame which corresponds to the playing time of a musical disc. One byte each
is provided for storing the minute count, second count, and frame count in binary coded decimal form.
BCD storage allows up to 100 values per byte, more than enough to encode 75 frames per second, 60
seconds per minute, and the 70 minute maximum playing time of a Compact Disc (as audio storage).
The fourth byte is a flag that indicates the data storage mode of the frame.
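The minute:second:frame scheme maps directly to an absolute sector count at 75 frames per second and 60 seconds per minute. A minimal sketch (it works in absolute frames and ignores the two-second offset that real drives apply when mapping logical block 0 to the address 00:02:00):

```python
FRAMES_PER_SECOND = 75

def msf_to_sector(minute, second, frame):
    """Absolute sector (large frame) count for an MSF address."""
    return (minute * 60 + second) * FRAMES_PER_SECOND + frame

def sector_to_msf(sector):
    """Invert the mapping back to minute, second, frame."""
    minute, remainder = divmod(sector, 60 * FRAMES_PER_SECOND)
    second, frame = divmod(remainder, FRAMES_PER_SECOND)
    return minute, second, frame

def to_bcd(value):
    """Pack a 0-99 count into the binary coded decimal byte stored on disc."""
    return ((value // 10) << 4) | (value % 10)

print(msf_to_sector(70, 0, 0))  # 315000 - the 70-minute figure cited earlier
print(sector_to_msf(315000))    # (70, 0, 0)
print(hex(to_bcd(59)))          # 0x59 - a BCD byte reads like its decimal value
```

Note how the 70-minute result matches the roughly 315,000 sectors mentioned in the Basic Format discussion.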
In data mode one, the auxiliary field is used for error detection and correction. The first four bytes of
the field store a primary error detection code and are followed by eight bytes of zeros. The last 276
bytes hold a layered error correction code. This layered code is sufficient for detecting and repairing
multiple bit errors in the data field.
Extended architecture rearranges the byte assignment of these data modes to suit multi-session
applications. In XA Mode 2 Form 1, the twelve bytes of sync and four of header are followed by an
eight-byte subheader that helps identify the contents of the data bytes, 2048 of which follow. The
frame ends with an auxiliary field storing four bytes of error detection and 276 bytes of error
correction code. In XA Mode 2 Form 2, the auxiliary field shrinks to four bytes, the leftover bytes
extending the data contents to 2324 bytes.
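The byte accounting for all of these formats is easy to verify: every variation fills the same 2352-byte large frame and differs only in how the bytes split between payload and overhead. A sketch tallying the layouts described above:

```python
SECTOR_SIZE = 2352  # bytes in every large frame, regardless of format

# Field widths in bytes for the large-frame formats described above.
LAYOUTS = {
    "Mode 1":           {"sync": 12, "header": 4, "data": 2048,
                         "EDC": 4, "zero": 8, "ECC": 276},
    "Mode 2":           {"sync": 12, "header": 4, "data": 2336},
    "XA Mode 2 Form 1": {"sync": 12, "header": 4, "subheader": 8,
                         "data": 2048, "EDC": 4, "ECC": 276},
    "XA Mode 2 Form 2": {"sync": 12, "header": 4, "subheader": 8,
                         "data": 2324, "EDC": 4},
}

for name, fields in LAYOUTS.items():
    assert sum(fields.values()) == SECTOR_SIZE  # every format fills the frame
    print(f"{name}: {fields['data']} payload bytes of {SECTOR_SIZE}")
```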

Data Coding

The bytes of the large frame do not directly correspond to the bit pattern of pits that are blasted into
the surface of the CD ROM. Much as hard disks use different forms of modulation to optimize both
the capacity and integrity of their storage, the Compact Disc uses a special data-to-optical translation
code. Circuitry inside the Compact Disc system converts the data stream of a large frame into a bit
pattern made from 98 small frames.
Each small frame stores 24 bytes of data (thus 98 of them equal a 2352-byte large frame) but consists
of 588 optical bits. Besides the main data channel, each small frame includes an invisible data byte
called the subchannel and its own error correction code. Each byte of this information is translated
into 14 bits of optical code. To these 14 bits, the signal processing circuitry adds three merging bits,
the values of which are chosen to minimize the low frequency content of the signal and optimize the
performance of the phase-lock loop circuit used in recovering data from the disk.
The optical bits of a small frame are functionally divided into four sections. The first 27 bits comprise
a synchronization pattern. They are followed by the byte of subchannel data, which is translated into
17 bits (14-bit data code plus three merging bits). Next come the 24 data bytes (translated into 408
bits), followed by eight bytes of error correction code (translated into 136 bits).
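The channel-bit arithmetic checks out: the 33 bytes of each small frame (one subchannel byte, 24 data bytes, eight bytes of error correction) expand to 17 channel bits apiece, and with the 27-bit sync pattern the total comes to 588 optical bits. A sketch:

```python
SYNC_BITS = 27                     # synchronization pattern per small frame
BITS_PER_BYTE = 14 + 3             # 14-bit optical code plus 3 merging bits
SUBCHANNEL, DATA, ECC = 1, 24, 8   # bytes carried by each small frame

small_frame_bits = SYNC_BITS + (SUBCHANNEL + DATA + ECC) * BITS_PER_BYTE
print(small_frame_bits)            # 588 optical bits, as the text states

# 98 small frames of 24 data bytes rebuild one 2352-byte large frame.
print(98 * DATA)                   # 2352
```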
The subchannel byte actually encodes eight separate subchannels, designated with letters from P
through W. Each bit of the subchannel byte has its own esoteric function that’s part of the deep
structure of the CD system and is hidden from your normal application software. In fact, the only
concern of your applications is to determine how the 2048 (or so) bytes of active storage in each large
frame are divided and used. The CD ROM drive translates the block requests made by the SCSI (or
other interface) into the correct values in the synchronization field to find data.

Sessions

A session is a single recorded segment on a CD, which may comprise multiple tracks. A session is
normally recorded all at once, hence the name. Under the Orange Book standard, a
session can contain data, audio, or images.
On the disc, each session begins with a lead-in, which provides space for a table of contents for the
session. The lead-in length is fixed at 4500 sectors, equivalent to one minute of audio or 9MB of data.
When you start writing a session, the lead-in is left blank and is filled in only when you close the
session.
At the end of the session on the disc is a lead-out, which contains no data but only signals to the CD
player that it has reached the end of the active data area. The first lead-out on a disc measures 6750
sectors long, the equivalent of 1.5 minutes of audio or 13MB of data. Any subsequent lead-outs on a
single disk last for 2250 sectors, half a minute, or about 4MB of data.
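At 75 sectors per second and 2048 data bytes per sector, those lead-in and lead-out figures convert readily into time and capacity. A sketch (capacities in binary megabytes, which is how the text's round numbers work out):

```python
SECTORS_PER_SECOND = 75
DATA_BYTES_PER_SECTOR = 2048   # mode 1 payload

def overhead(sectors):
    """Playing time (seconds) and data capacity (binary megabytes)
    consumed by a run of sectors."""
    seconds = sectors / SECTORS_PER_SECOND
    megabytes = sectors * DATA_BYTES_PER_SECTOR / 2**20
    return seconds, round(megabytes, 1)

print(overhead(4500))  # (60.0, 8.8) - a lead-in: one minute, about 9MB
print(overhead(6750))  # (90.0, 13.2) - the first lead-out, about 13MB
print(overhead(2250))  # (30.0, 4.4) - later lead-outs, about 4MB
```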

Addressing

The basic addressing scheme of the Compact Disc is the track, but CD tracks are not the same as hard
disk tracks. Instead of indicating a head position or cylinder, the track on a CD is a logical structure
akin to the individual tracks or cuts on a phonograph record.
A single Compact Disc is organized into as many as 99 tracks. Although a single CD can
accommodate a mix of audio, video, and digital data, each track must be purely one of the three.
Consequently, a disc mixing audio, video, and data would need to have at least three tracks.
The tracks on a disc are contiguous and sequentially numbered, although the first track containing
information may have a value greater than one. Each track consists of at least 300 large frames (that’s
four seconds of audio playing time). Part of each track is given over to transition areas, called pre-gap
and post-gap areas (on data discs) or pause areas (on audio discs).
Each disk has a lead-in area and a lead-out area corresponding to the lead-in and lead-out of
phonograph records. The lead-in area is designated track zero, and the lead-out area is track
0AA(Hex). Neither is reported as part of the capacity of the disk, although the subchannel of the
lead-in contains the table of contents of the disc. The table of contents lists every track and its address
(given in the format of minutes, seconds, and frames).


Tracks are subdivided into as many as 99 indices by values encoded in the subchannel byte of nine out
of ten small frames. An index is a point of reference that’s internal to the track. The number and
location of each index is not stored in the table of contents. The pre-gap area is assigned an index
value of zero.

Capacity

The nominal maximum capacity of a CD amounts to 74 minutes of music recording time or about
650MB when used for storing data. These capacities are only approximate, however. A number of
factors control the total capacity of a given disc. For example, mass produced audio CDs sometimes
contain more than 74 minutes of sound because disc makers can cram more onto each disc by
squeezing the track on the glass master disc into a tighter, longer spiral. This technique can extend
the playing time of a disc to 80 minutes or more.
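The capacity figures follow directly from the frame rate: 75 sectors for every second of playing time, and 2048 data bytes per Mode 1 sector. A quick check:

```python
def cd_data_capacity(minutes, bytes_per_sector=2048):
    """Mode 1 data bytes for a disc with the given audio playing time."""
    sectors = minutes * 60 * 75     # 75 sectors for every second of audio
    return sectors * bytes_per_sector

print(round(cd_data_capacity(74) / 2**20))  # 650 MB on a standard disc
print(round(cd_data_capacity(80) / 2**20))  # 703 MB with a tighter spiral
```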
The special CDs that you can write on with your PC cannot benefit from this tighter track strategy
because their spiral is put in place when the discs are manufactured. The standard formats yield four
capacity levels on two different sizes of disc, as discussed in the CD-R section below. In any case,
these numbers represent the maximum storage capacity of a recordable CD. Nearly anything you do
when making a CD cuts into that capacity.
Although 650 megabytes seemed generous at one time—even as little as a few years ago—the needs
of modern PCs are quickly overwhelming the CD. These demands have added impetus to develop the
much higher capacity DVD.

Standards

The Compact Disc medium has proven so compelling that everyone wants to use it. Unfortunately,
everyone wants to use it in his own way. And everyone wants to have his own standards. Not quite
everyone gets his own standards, but nearly every application does. Moreover, as with other systems
of PC mass storage, the standardization of CD ROM occurs at several levels—hardware and software.
At the hardware level, Compact Disc systems are governed by several different standards that depend
on what the system will be used for. Certain physical aspects pertain to all current CD standards,
however. For example, the information on all CDs starts at the center and progresses in an outward
spiral on the disk. The data on the disc begins with a Table of Contents, usually abbreviated as TOC,
which is located before track 1 on the disc. It occupies a space called the pre-gap area. After the
pre-gap area, the various standards go their own directions. For example, audio CDs start with music
on track one. Mixed mode CDs, which may combine data, music, and video, put data on track 1. This
incompatibility can lead to ear-shattering surprises when you attempt to play a data CD on your stereo
system.
The industry standards are commonly known by the color of the cover of the book that governs them.
These include:


Red Book

Red Book describes CD-DA, the original Compact Disc application, which stores audio information
in digital form. The name Red Book refers to the international standard (IEC 908), which was
published as a book with a red cover and specifies the digitization and sampling rate details, including
the data transfer rate and the exact type of pulse code modulation used.

Yellow Book

Yellow Book, first introduced in 1984, describes the data format standards for CD ROM disks, and
includes CD-XA, which adds compressed audio information to other CD ROM data. Yellow Book
divided CD ROM operation into two modes. Mode 1 is meant for ordinary computer data. Mode 2
handles compressed audio and video data. Because Yellow Book discs can contain audio, video, and
data in their two modes, they are often termed mixed mode discs. Yellow Book is the standard that
first enabled multimedia CDs. It is now an internationally recognized standard, ISO 10149:1989.

Green Book

Green Book governs CD-i, Compact Disc-Interactive, developed by Philips as a hardware and
software standard as an elaboration of Yellow Book for bringing together text, sound, and video on a
single disk. Under the Green Book standard, CD-i uses Adaptive Delta Pulse Code Modulation to
squeeze more audio onto every disk: up to two full hours of full quality stereo or 20 hours of
monaural, voice quality sound. CD-i allows the audio, video, and data tracks to be interleaved on the
disc so they can be combined by your PC into an approximation of a multimedia extravaganza.
Among its other capabilities, the Green Book standard allows data to be hidden in the pre-gap area
used by the Table of Contents. Because audio CD players operating under the Red Book standard do
not attempt to play the information in the pre-gap area, locating data in the pre-gap area under the
Green Book standard prevents the track one problem. This recording method is naturally single
session, so it operates properly even with CD drives which are not capable of dealing with
multi-session discs. This particular arrangement is preferred by some software publishers because, as
the pre-gap area is closest to the center of the disc and less likely to suffer from scratches, it offers
higher data integrity.

Orange Book

Orange Book is the official tome that describes the needs and standards for recordable Compact Disc
systems. It turns the otherwise read-only medium into a write-once medium so you can make your
own CDs (on expensive equipment). Introduced in 1992, the Orange Book standard brought
multi-session technology. A multi-session disc can contain blocks of data written at different times
(sessions). Each session has its own lead-in track and table of contents.
Developed jointly by Philips and Sony, the Orange Book defines both the physical structure of
recordable CDs and how various parts of the data area on the disc must be used. These include the
Program Area, which holds the actual data the disc is meant to store; a Program Memory Area that
records the track information for the whole disc and all the sessions it contains; the Lead-in and
Lead-out Areas; and a Power Calibration Area that’s used to calibrate the power of the record laser.

Blue Book

The most recent of the CD standards, the Blue Book, was first published in December 1995. The Blue
Book introduces stamped multi-session CDs, which solve the track one compatibility problem. The
Blue Book standard requires the first track of a multi-session CD to be Red Book audio. The second
session, which is invisible to ordinary audio CD players, contains computer data. CD players that
follow the Blue Book standard with proper multi-session drivers can read both the audio and data
portions of the discs. The technology underlying the Blue Book standard was formerly known as
CD-Extra.
Microsoft promotes this format as CD-Plus. It enables CD makers to put multimedia data into the
unused capacity of music CDs. For example, the typical audio CD includes 50 minutes or so of music.
The remaining 24 minutes of playing capacity can be given over to liner notes, a cover picture, even a
short, compressed video.

White Book

The standards for Video CD are called the White Book. The format is based on CD-i. Each disc must
contain a CD-i application so that it can play on any standard CD-i player. The discs are termed CD-i
Bridge discs.

CD-DA

Developed jointly by Philips and Sony Corporation, the CD-Digital Audio system was first introduced
in the United States in 1983. The standard CD-DA disc holds up to about 70 minutes of stereo music
with a range equivalent to today’s FM radio station—the high end goes just beyond 15 KHz; the low
end, nearly to DC. The system stores audio data with a resolution of 16 bits, so each analog audio
level is quantized as one of 65,536 levels. With linear encoding, that’s sufficient for a dynamic range
of 96 decibels, that is, 20log(2^16). To accommodate an upper frequency limit of 15 KHz with adequate
roll-off for practical anti-aliasing filters, the system uses a sampling rate of 44.1 KHz.
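If you want to check the arithmetic yourself, a few lines of Python do the job (this is purely illustrative; the figures are the Red Book values quoted above):

```python
import math

BITS = 16               # sample resolution under the Red Book standard
LEVELS = 2 ** BITS      # 65,536 quantization levels per sample

# Dynamic range of a linearly encoded system: 20 * log10(2^16)
dynamic_range_db = 20 * math.log10(LEVELS)
print(round(dynamic_range_db, 1))   # ~96.3 dB
```

The 96-decibel figure in the text is simply this value rounded down.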
Under the Red Book standard, this digital data is restructured into 24-byte blocks, arranged as six
samples of each of a pair of stereophonic channels (each of which has a depth of 16 bits). These 24
bytes are encoded along with control and subchannel information into the 588 optical bits of a small
frame, each of which stores about 136 microseconds of music. Ninety-eight of these small frames are
grouped together in a large frame, and 75 large frames make one second of recorded sound.
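The frame hierarchy explains where the CD-DA data rate comes from. A short Python sketch works the numbers out from the figures above:

```python
# Red Book frame hierarchy: 24 audio bytes per small frame,
# 98 small frames per large frame, 75 large frames per second.
BYTES_PER_SMALL_FRAME = 24
SMALL_FRAMES_PER_LARGE = 98
LARGE_FRAMES_PER_SECOND = 75

audio_bytes_per_second = (BYTES_PER_SMALL_FRAME *
                          SMALL_FRAMES_PER_LARGE *
                          LARGE_FRAMES_PER_SECOND)
print(audio_bytes_per_second)   # 176400 bytes of audio per second

# Sanity check: 44,100 samples/sec * 2 channels * 2 bytes per sample
assert audio_bytes_per_second == 44_100 * 2 * 2

# One small frame therefore holds about 136 microseconds of music
seconds_per_small_frame = 1 / (SMALL_FRAMES_PER_LARGE * LARGE_FRAMES_PER_SECOND)
print(round(seconds_per_small_frame * 1e6, 1))   # ~136 microseconds
```

Note how the frame structure and the sampling parameters arrive at exactly the same 176,400-byte-per-second figure from two directions.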
In CD-DA systems, the large frame lacks the sync field, header, and error correction code used in CD
ROM storage. Instead, the error-correction and control information is encoded in the small frames.
The necessary information to identify each large frame is spread through all 98 bits of subchannel Q
in a given large frame. One bit of the subchannel Q data is drawn from each small frame.
From the subchannel Q data, a sector is identified by its ordinary playing time location (in minutes,
seconds, and frame from the beginning of the disk). The 98 bits of the subchannel Q signal spread
across the large frame is structured into nine separate parts: a two-bit synchronization field; a four-bit
address field to identify the format of the subchannel Q data; a four-bit control field with more data
about the format; an eight-bit track number; an eight-bit index number; a 24-bit address counting up
from the beginning of the track (counting down from the beginning of the track in the pre-gap area);
eight reserved bits; a 24-bit absolute address from the start of the disk; and 16 bits of error correction
code. At least nine of ten consecutive large frames must have their subchannel Q signals in this
format.
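A quick tally (in Python, purely as a bookkeeping check) confirms that the nine subchannel Q fields account for exactly the 98 bits available in a large frame:

```python
# Field widths in bits of the subchannel Q data spread across one large frame
SUBCHANNEL_Q_FIELDS = {
    "synchronization": 2,
    "address (format of Q data)": 4,
    "control": 4,
    "track number": 8,
    "index number": 8,
    "relative address": 24,
    "reserved": 8,
    "absolute address": 24,
    "error correction code": 16,
}

total_bits = sum(SUBCHANNEL_Q_FIELDS.values())
print(total_bits)   # 98 -- one bit drawn from each of the 98 small frames
assert total_bits == 98
```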
In the remaining large sectors, two more subchannel Q formats are optional. If used, they must occur
in at least one out of 100 consecutive large frames. One is a disc catalog number that remains
unchanged for the duration of the disk; the other is a special recording code that is specific and
unchanging to each track.

Proprietary Standards

In addition, several manufacturers have tried to take the Compact Disc medium their own directions
and have developed what are still proprietary standards that they hope will someday sweep through
the industry (along with their products). Among these are:
● Video Information System (VIS), developed by Microsoft and Tandy Corporation

● CD-TV, a proprietary video storage standard developed by Commodore International

● MMCD, a multimedia standard for hand held Compact Disc players developed by Sony
Corporation

● Photo CD, a standard for storing high quality photographic images developed by Eastman
Kodak Company

Although these hardware standards define Compact Disc formatting and data storage methods, they
do not specify how your operating systems and applications will use the disk-based storage. They are
like the low level format of a hard disk. In dedicated hardware applications—like audio Compact Disc
systems—that level of standardization is sufficient. Your PC, however, needs some means of finding
files equivalent to the FAT and directory structure of other disk systems.

CD ROM

As its name implies, CD-Read Only Memory is fundamentally an adaptation of the Compact Disc to
storing digital information—rock and roll comes to computer storage. Contrary to the implications of
the name, however, you can write to CD ROM discs with your PC, providing you buy the right
(which means expensive) equipment. For most applications, however, the CD ROM is true to its
designation—it delivers data from elsewhere into your PC. Once a CD ROM disk is pressed, the data
it holds cannot be altered. Its pits are present for eternity.
In the beginning, CD ROM was an entity unto itself, a storage medium that mimicked other mass
storage devices. It used its own storage format. The kind of data that the CD ROM lent itself to was
unlike that of other storage systems, however. The CD ROM supplied an excellent means for
distributing sounds and images for multimedia systems; consequently, engineers adapted its storage
format to better suit a mixture of data types. The original CD ROM format was extended to cover
these additional kinds of data with its Extended Architecture. The result was the Yellow Book
standard.

Format

The Yellow Book describes how to put information on a CD ROM disk. It does not, however, define
how to organize that data into files. In the DOS world, two file standards have been popular. The first
was called High Sierra format. Later this format was upgraded to the current standard, the ISO 9660
specification.
The only practical difference between these two standards is that the driver software supplied with
some CD ROM players, particularly older ones, meant for use with High Sierra formatted disks may
not recognize ISO 9660 disks. You’re likely to get an error message that says something like "Disc
not High Sierra." The problem is that the old version of the Microsoft CD ROM extensions—the
driver that adapts your CD ROM player to work with DOS—cannot recognize ISO 9660 disks.
To meld CD ROM technology with DOS, Microsoft Corporation created a standard bit of operating
code to add onto DOS to make the players work. These are called the DOS CD ROM extensions, and
several versions have been written. The CD ROM extensions before Version 2.0 exhibit the
incompatibility problem between High Sierra and ISO 9660 noted earlier. The solution is to buy a
software upgrade to the CD ROM extensions that came with your CD ROM player from the vendor
who sold you the equipment. A better solution is to avoid the problem and ensure any CD ROM
player you purchase comes with Version 2.0 or later of the Microsoft CD ROM extensions.
ISO 9660 embraces all forms of data you’re likely to use with your PC. Compatible disks can hold
files for data as well as audio and video information.
For Windows 95, Microsoft created another set of extensions to ISO 9660. Called the Joliet CD ROM
Recording Specification, these extensions add support for longer file names—though only to 128 characters
instead of the 255-character maximum of Windows 95—as well as nesting of directories beyond eight
levels, allowing directory names to use extensions, and broadening the character set. To maintain
compatibility with ISO 9660, the extra Joliet data must fit in a 240-character limit, foreclosing on the
possibility of encoding all Windows 95 directory data.

Players

In that a computer CD ROM player has the same basic job as a CD-Digital Audio machine in your
home stereo, you’d expect the technology inside each to be about the same. In fact, all have similar
mechanisms.
CD ROM players tend to be more expensive than stereo models because retrieving computer data is
more demanding. A tiny musical flaw that might pass unnoticed even by trained ears could have
disastrous consequences in a data stream. Misreading a decimal point as a number, even zero, can
result in error laden calculations. To minimize, if not eliminate, such problems, computer CD ROM
players require different error correction circuitry than is built into stereo equipment, circuitry that
uses much more powerful algorithms. CD-DA errors are corrected at the small frame level, 24 bytes at
a time. CD ROM data errors are corrected at the large frame level, 2048 bytes at a time.
CD ROM players also require more intimate control and faster access times. The toughest job a digital
audio player faces is moving from track to track when you press a button. A CD ROM player must
skate between tracks as quickly as possible—in milliseconds if your human expectations are to be
fulfilled.
Even the link between your PC and CD ROM player complicates the drive and makes it more
expensive. By itself a CD ROM player does nothing but spin its disk. Your computer must tell the
player what information to look for and read out. And your computer is needed to display—visually
and aurally—the information the CD ROM player finds, be it text, a graphic image, or a musical
selection. Sending those commands requires an interface of some kind, in most cases either SCSI or
ATAPI. Neither is needed in a digital audio player.

Transfer Rate

Unlike music and video systems, which require real time playback of their data (unless you prefer to
watch the recorded world race by as if overdosed on adrenaline), digital data is not ordinarily locked
to a specific time frame. In fact, most people would rather have information shipped as quickly as
possible from disc to memory.
For real time playback, the original CD-Digital Audio system required a 150-kilobyte-per-second data
transfer rate. In the data domain, however, that’s almighty slow—one-quarter to one-tenth the
throughput of a modern hard disk even after you account for all the overhead. Raw hard disk transfer
rates exceed 50 times the CD-DA rate.
The transfer rate of a Compact Disc system is a direct function of the speed at which the disc itself
spins. Increasing the data transfer rate requires higher rotation speeds. Consequently, today’s CD
ROM players operate at multiples of the standard CD DA spin rate. The speeds are usually expressed
as a multiple of the spin rate of the original audio CD speed—for example 1x, 2x, 4x, or even 20x.
Because of the different sizes of blocks and error-correction methods used by different formats, the
exact transfer rate at a given spin rate varies with the type of CD. Table 12.1 lists the transfer rate of
various common CD formats at different speed ratings.

Table 12.1. Actual Transfer Rates in Bytes per Second in Various CD Modes

            Audio      Mode 1     Mode 2     XA Form 1  XA Form 2

Block size  2352       2048       2336       2048       2324
1x          176,400    153,600    175,200    153,600    174,300
2x          352,800    307,200    350,400    307,200    348,600
4x          705,600    614,400    700,800    614,400    697,200
6x          1,058,400  921,600    1,051,200  921,600    1,045,800
8x          1,411,200  1,228,800  1,401,600  1,228,800  1,394,400
10x         1,764,000  1,536,000  1,752,000  1,536,000  1,743,000
12x         2,116,800  1,843,200  2,102,400  1,843,200  2,091,600
16x         2,822,400  2,457,600  2,803,200  2,457,600  2,788,800
20x         3,528,000  3,072,000  3,504,000  3,072,000  3,486,000

In common parlance, the Mode 1 rate is taken as the basis for measuring transfer speed, and it is
usually rounded to 150 KB/sec. Consequently double-speed (2x) drives spin twice as fast to deliver
300 KB/sec transfer rates; quadruple-speed (4x) drives, 600 KB/sec; and so on.
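Every entry in the table is a simple multiple of the block size and the base rate of 75 blocks per second. A short Python sketch reproduces the figures:

```python
# Transfer rate = block size (bytes) * 75 blocks/sec * speed multiple
BLOCK_SIZES = {             # bytes per block in the common CD formats
    "Audio":     2352,
    "Mode 1":    2048,
    "Mode 2":    2336,
    "XA Form 1": 2048,
    "XA Form 2": 2324,
}
BLOCKS_PER_SECOND = 75      # at 1x

def transfer_rate(fmt: str, speed: int) -> int:
    """Bytes per second for a given format at a given speed multiple."""
    return BLOCK_SIZES[fmt] * BLOCKS_PER_SECOND * speed

print(transfer_rate("Mode 1", 1))   # 153600 -- commonly rounded to 150 KB/sec
print(transfer_rate("Mode 1", 4))   # 614400 -- the quadruple-speed rate
print(transfer_rate("Audio", 2))    # 352800
```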
Faster is, as always, better. But speed ratings can be misleading. Today’s fast drives (rated up to 20x)
operate with constant angular velocity recording—they spin at a constant rotation rate so their actual
data transfer rate varies, increasing as the head travels from the inside edge of the spiral track to the
outside edge. These CD drives almost universally are rated at the highest speed they achieve, the
potential peak transfer rate. In actual operation, the transfer rate varies from this rated speed
(achieved only at the outer edge of the disc) down to one-half that rate (at the inner edge of the
recorded area). Because the data begins at the inside of the disc and progresses outward, these high
speed drives may actually never run up to their rated rate on actual discs. Moreover, if a basic original
1x drive spun its disc at a constant rate, it would, by this rating system, be called a 2x drive.
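The spread between rated and delivered speed can be sketched in a few lines of Python; the one-half inner-edge figure is the approximation stated above, not a value from any particular drive’s specification:

```python
# For a constant angular velocity (CAV) drive rated at its outer-edge peak,
# the inner edge delivers roughly half the rated Mode 1 throughput.
MODE1_1X = 153_600          # bytes/sec at 1x

def cav_range(rated_speed: int) -> tuple[int, int]:
    """Approximate minimum (inner edge) and maximum (outer edge) rates."""
    outer = MODE1_1X * rated_speed
    inner = outer // 2
    return inner, outer

inner, outer = cav_range(20)    # a nominal "20x" CAV drive
print(inner, outer)             # 1536000 3072000 bytes/sec
```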
As a practical matter, the rated speeds give you a means for comparing drives even though the ratings
don’t reflect actual performance. The minimum speed you should expect from a new playback-only
CD drive is 8x. Most software works adequately with drives as slow as 4x (the latest releases may be
more demanding). CD recorders operate at lower rates but are catching up.
High speed drives can retain their compatibility with Red Book audio by buffering. They read the
audio data at their higher rate and pump it into their buffer. Then they unload the buffer at the real
time rate.

Access Time

Compared to magnetic hard disks, CD ROM players are laggardly beasts. Mass is the reason. The
optical head of the CD system is substantially more massive than the flyweight mechanisms of hard
disks. Instead of a delicate read/write head, the CD ROM player has a large optical assembly that
typically moves on a track. The assembly has more mass to move, which translates into a longer wait
for the head to settle into place.
Besides the mass of the head, the constant linear velocity recording system of CD ROMS slows the
access speed. Because the spin rate of the disk platter varies depending on how far the read/write head
is located from the center of the disk, as the head moves from track to track, the spin rate of the disk
changes. With music, which is normally played sequentially, that’s no problem. The speed difference
between tracks is tiny, and the drive can quickly adjust for it. Make the CD system into a
random-access mechanism, and suddenly speed changes become a big issue. The drive might have to
move its head from the innermost to outermost track, requiring a drastic speed change. The inertia of
the disk spin guarantees a wait while the disk spins up or down.
Some old CD ROM players required nearly a second to find and read a given large frame of data.
Modern designs cut that time to 100 to 200 milliseconds, still about ten times longer than the typical
hard disk drive.

Mechanism

Nearly all CD ROM players fit a standard half-height 5.25-inch drive bay for one very practical
reason: A 4.7-inch disc simply won’t fit into a 3.5-inch drive slot. Drives can be internal or external,
the latter usually including a power supply. The best choice is what works for you—internal for lower
cost if you have expansion space inside your PC, external if you don’t (for example, if you have a
notebook computer).
Disk handling and how you get a disc into a CD ROM player is an important aspect of drive design.
Some mechanisms incorporate a sliding drawer much like that on most audio CD systems using either
a micromotor or a spring to slide the drawer out. You simply drop the disc in and slide the drawer
closed. The only important drawback to this design is that drives must be mounted horizontally lest
the discs fall out.
Some CD ROM players make the job even easier—and safer for your discs. You load your discs into
a special carrier called a caddy which resembles the plastic jewel box case that most commercial
music CDs come in. When you want to load a disk into the CD ROM player, you slide the whole
carrier into a waiting slot. Most people buy a carrier for each CD ROM disk they have because of this
convenience and the extra protection the carrier affords the disk—no scratches and no fingerprints,
guaranteed! If your CD system gets heavy use by younger folk or uncaring office personnel, use of a
caddy will extend the life of your investment in CD media. Most drives that use caddies will operate
either in horizontal or vertical orientation.
At one time three forms of caddy were used, but now the industry has settled on one. The most
common was initially used by Denon, Hitachi, some Matsushita, Sony, Toshiba, and newer NEC
drives. It is the survivor. Its design resembles a 3.5-inch floppy disk with a single metal shutter that
slides back to let the drive optics see the disk. You open the caddy by squeezing tabs at the end
opposite the shutter to drop in a disc.
The other two caddy designs have essentially disappeared from the market. The Philips-style caddy
was clear smoked plastic and opened by pressing two tabs. The CD slides inside between large white
plastic pincers. The other style was that used by the old NEC CDR-77/88 drive.

Changers

As CD collections grow, the old idea from the stereo system—the disc or record changer—becomes
increasingly compelling. You can load up several discs and have them at ready access. No more
shuffling through stacks of discs and jewel cases. You can keep your favorite discs just a keystroke
away.
The first CD ROM changers were, in fact, derived from those used in stereo systems. Pioneer adapted
its six-CD cartridge to computer use to create the first changer—only natural in that Pioneer has
patented that changer design. More recently, other manufacturers have developed changers that don’t
need cartridges. As with single-disc CD drives, the choice between cartridge and free-disc system is
one of preference. Either style of drive works.
Similarly, CD changers operate at the same speeds as single-CD units, although the fastest changers
lag behind the fastest single-disc drives. Today 6X changers are commonplace. Unfortunately,
software has not tracked the developments in CD changers. Both applications and operating systems
have problems with the multi-disc drives.
Most driver software for CD changers assigns separate drive letters to each disc (or disk position) in
the changer. A four-disc changer would thus get four drive letters. To access the disc in a particular
changer slot, you only need to use the appropriate drive letter.
Life isn’t so simple, however. Most CD-based applications don’t expect to be loaded into a changer
and react badly. Although some applications will run no matter where you load them, some will force
an icon to pop up on your screen and ask in which slot to find the disc. Some are worse—they will refuse
to run except in a favored slot with the favored drive letter. Worse, the error message you get won’t
help you find the problem. It may tell you that the program can’t find its drive, even if you’ve
installed the program for that exact drive. The only solution is to find the slot the program favors and
always use that slot for that particular program.
Dumb old DOS doesn’t mind changers particularly. It just accepts whatever drive letters you give it.
Advanced Windows 95, however, outsmarts itself by testing and sensing all the drives that are
connected to it. As a result, when Windows boots up, it will chunk through each slot in your CD
changer and wait for it to come up to speed. In that spinning up and down each disc may take 20
seconds, cycling through the whole changer can add minutes to the already long boot-up interval. In
addition, some changer drivers do not interface well with Windows and may not properly inform
Windows when you change discs. Change discs without closing a program, and the system may hang
until you slide the program’s disc back into the right slot and close it properly.
Certainly better drivers and applications that are more aware will come onto the market. But you
should beware that, as with any new technology, CD changers are not perfect.

Controls

The MPC specification requires a volume control on the front panel of any CD drive you have in your
multimedia PC. This control is useful if you decide to use your drive for playing back music while
you work. You can plug headphones into the jack, also on the front of the drive, and use the volume
control to adjust the loudness of the playback independent of the CD control software you run on your
PC.
Other than the volume control, CD drives need no physical controls. All of their functions are
operated by the software you run on your PC.

CD-Recordable

The nature of the CD ROM medium and operation of CD recorders make the creation and writing of a
CD ROM a more complex operation than simply copying files to a hard disk drive. Because CD
ROMs are essentially sequentially recorded media, the CD recorder wants to receive data and write it
to disc as a continuous stream. In most CD recorders, the stream of data cannot be interrupted once it
starts. An interruption in the data flow can result in an error in recording. Moreover, to obtain the
highest capacity possible from a given CD, you want to limit the number of sessions into which you
divide the disc. As noted above, each session steals at least 13MB from disc capacity for the overhead
of the session’s lead-in and lead-out.
If your system cannot supply information to your CD recorder fast enough, the result is a buffer
underrun error. When you see such an error message on your screen, it means your CD recorder has
exhausted the software buffer and run out of data to write to the disc. You can prevent this error by
increasing the size of the buffer if your software allows it. Or you can better prepare your files for
transfer to CD. In particular, build a CD image on a hard disk that can be copied on the fly to the CD.
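The margin a buffer gives you is just its size divided by the rate at which the recorder drains it. This Python sketch shows the arithmetic; the 2MB buffer and 4x write speed are hypothetical examples, not figures from any particular recorder:

```python
# Seconds of interruption a recorder's buffer can absorb before underrunning
MODE1_1X = 153_600          # bytes/sec consumed at 1x in Mode 1

def underrun_margin(buffer_bytes: int, write_speed: int) -> float:
    """Seconds the buffer lasts if the incoming data stream stops."""
    return buffer_bytes / (MODE1_1X * write_speed)

# A hypothetical 2MB buffer feeding a 4x recorder:
print(round(underrun_margin(2 * 1024 * 1024, 4), 2))   # ~3.41 seconds
```

In other words, even a generous buffer buys only a few seconds of grace, which is why background tasks that stall your hard disk for longer than that ruin the session.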
The best strategy is to give over your PC to the CD writing process, unloading any TSR programs,
background processes, or additional tasks in a multi-tasking system. Screen savers, popup reminders,
and incoming communications (your modem answering the phone for data or a fax) can interrupt your
CD session and cause you to waste your time, a session, or an entire disc.
Your system needs to be able to find the files it needs to copy to your CD ROM as efficiently as
possible. Copying multiple short files can be a challenge, particularly if your hard disk is older and
slower or fragmented. CD recorder makers recommend disks with access times faster than about 19
milliseconds. An AV-style hard disk is preferable because such drives are designed for the smooth,
continuous transfer of data and don’t interrupt the flow with housekeeping functions such as thermal
calibration. You’ll also want to be sure that your files are not fragmented before transferring them to
CD. Run your defrag utility before writing to your CD.
Depending on the manufacturer of your CD recorder and the software accompanying it, you may have
a choice of more than one mode for copying data to CD. In general, you have two choices. The first is
building a CD image on your hard disk and copying that image intact to your CD. Some manufacturers call this
process "writing on the fly." From a hardware standpoint, this is the easiest for your system and CD
recorder to cope with because the disk image is already in the form of a single huge file with all of the
directory structures needed for the final CD in their proper places. Your system need only read your
hard disk and send a steady stream of data to the CD recorder.
The alternative method is to create the CD structure in its final form on the CD itself. Some
manufacturers call this "writing a virtual image." In making a CD by this method, your CD recorder’s
software must follow a script or database to find which files it should include on the disk and locate
the files on your hard disk. The program must allocate the space on your CD, dividing it into sectors
and tracks, while at the same time reading the hard disk and transferring the data to the CD.

Media

Discs used in CD recorders differ in two ways from those used by conventional CD players—besides
being blank when they leave the factory. CD-R discs require a recordable surface, something that the
laser in the CD recorder can alter to write data. This surface takes the form of an extra layer of dye on
the CD-R disc. Recordable CDs also have a formatting spiral permanently stamped into each disc.

Dye Layer

As with other CDs, a recordable disc has a protective bottom layer or substrate of clear polycarbonate
plastic that gives the disc its strength. A thin reflective layer is plated on the polycarbonate to deflect
the CD beam back so that it can be detected by the drive. Between this reflective layer and the normal
protective top lacquer layer of the disc, a CD-R disc has a special dye layer. The dye is photoreactive
and changes its reflectivity in response to the high power mode of the CD recorder’s laser. Figure 12.1
shows a cross-section of a typical CD-R disc.
Figure 12.1 Cross-section of recordable CD media using cyanine dye (not to scale).

Three compounds are commonly used for the photoreactive dye layer of CD-R discs. These are most
readily distinguished by their color: either green, gold, or blue.

Green

Not surprisingly, green CD-R discs look green. The dye layer, based on a cyanine compound, is
green and lustrous from the reflective backing partly shining through. Taiyo Yuden developed the
photoreactive dye which was used for the first CD-R discs, including those used during the
development of the CD-R standards. Even now, green CD-R discs are believed to be more forgiving
of variations in laser power during the read and write processes. The green cyanine dye is believed to
be permanent enough to give green CD-R discs a useful life of about 75 years. In addition to Taiyo
Yuden, several companies including Kodak, Ricoh, TDK and Verbatim make green CD-R discs.

Gold

Kodak made the world aware of gold CD-R discs when it introduced its Photo-CD system, touting
them as being higher quality than the green variety. The gold dye, a phthalocyanine, was actually
developed by Mitsui Toatsu Chemicals. The chief advantage of gold over green discs is longer life
because the dye is less sensitive to bleaching by ambient light. If it were on a dress or shirt, it would
be more colorfast. Gold CD-R discs are believed to have a useful life of about 100 years. Some people
believe that gold discs are also better for high speed (2x or 4x) recording than are green discs. Mitsui
Toatsu and Kodak manufacture most gold CD-R discs. Kodak Photo-CD discs also have an extra
coating the company called "Inforguard" which makes them more resistant to scratches (and thus
prolongs their life when used in hostile environments like the typical home), but the coating is
independent of the dye color.

Blue

The most recent of the CD shades is blue, a color that results from using cyanine dye with an alloyed
silver substrate. The material is proprietary and patented by Verbatim, currently the sole
manufacturer. According to some reports, it is more resistant to ultraviolet radiation than either green
or gold dyes and makes reliable discs with low block error rates. Verbatim also adds a scratch
resistant coating to its discs, but as with PhotoCDs, the coating is independent from the disc color.
Some manufacturers use multiple layers of dyes on their discs, sometimes even using two different
dyes. The multiple-layer CD-R discs are often described as green-green, gold-gold, or green-gold
depending on the colors of the various layers.
Additionally, the reflective layers of recordable CDs also vary in color. They may be silver or gold,
which subtly alters the appearance of the dye.
There should be no functional difference between the different CD-R colors—all appear the same hue
to the monochromatic laser of a CD drive that glows at a wavelength of 780 nanometers. But while all
of the CD-R materials reliably yield approximately the same degree of detectable optical change, as a
practical matter they may act differently. Some early CD ROM readers may have varying sensitivities
to the materials used in CD-R discs and will reliably read one color but not another. There is no
general rule about which color is better or more suited to any particular hardware. The best strategy is
to find what works for you and stick with it.

Physical Format

The polycarbonate substrate of all CD-R discs has a spiral groove physically stamped into it. More
than a simple channel, this groove incorporates sector formatting data which incidentally defines the
capacity of the disc. Because this format is physically encoded on the disc, it cannot be altered, and
you cannot increase the capacity of a CD-R disc (although you can reduce the capacity of the disc
simply by not writing to its entire surface). You can think of the groove as being the low level format
of the recordable CD.

Capacity

With a read-only medium, you normally don’t have to concern yourself with the issue of storage
capacity. That’s for the disc maker to worry about—the publisher has to be sure everything fits. With
about 650 megabytes of room on the typical CD and many products requiring only a few megabytes
for code, the big problem for publishers is finding enough stuff to put on the disc so that you think
you’re getting your money’s worth.
The advent of recordable CDs changes things entirely. With CDs offering convenient long-term
storage for important files such as graphic archives, you’ll be sorely tempted to fill your CDs to the
brim. You’ll need to plan ahead to make all your files fit.
CD ROMs have substantial overhead that cuts into their available capacity. If you don’t plan for this
overhead, you may be surprised when your files don’t fit.

Raw Capacity

CD ROM capacities are measured in minutes, seconds, and sectors, based on the audio format from
which engineers derived the medium. Recordable CDs come in four capacities: 18 and 21 minute
discs are 80 millimeters in diameter; 63 and 74 minute discs are 120 millimeters in diameter. These
raw capacities are summarized in Table 12.2.

Table 12.2. Recordable CD Raw Capacity (No Allowance for Overhead)

Capacity (Minutes)  Capacity (Sectors)  Capacity (Bytes)
18                  81,000              165,888,000
21                  94,500              193,536,000
63                  283,500             580,608,000
74                  333,000             681,984,000
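The relationship among the columns of Table 12.2 is fixed by the CD format itself: 75 sectors pass the head each second, and a Mode 1 sector carries 2,048 bytes of user data. A quick sketch of the conversion (Python, using those two constants):

```python
SECTORS_PER_SECOND = 75   # CD frame rate inherited from the audio format
BYTES_PER_SECTOR = 2048   # user data per Mode 1 sector

def raw_capacity(minutes):
    """Return (sectors, bytes) of raw capacity for a given playing time."""
    sectors = minutes * 60 * SECTORS_PER_SECOND
    return sectors, sectors * BYTES_PER_SECTOR

for minutes in (18, 21, 63, 74):
    sectors, nbytes = raw_capacity(minutes)
    print(f"{minutes} min: {sectors:,} sectors, {nbytes:,} bytes")
```

Running the loop reproduces every row of the table.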

Two kinds of file overhead affect the number of bytes available on a given recordable CD that can
actually be used for storage. One is familiar from other mass storage devices, resulting from the need
to allocate data in fixed-size blocks. The other results from the format structure required by the CD
standards. Table 12.3 reflects the effects of this overhead in CD-R capacity.

Table 12.3. Maximum CD-R Capacities for Common Data Formats

Format      Audio        Mode 1       Mode 2       XA Form 1    XA Form 2
Block size  2352 bytes   2048 bytes   2336 bytes   2048 bytes   2324 bytes
18 Min      190,512,000  165,888,000  189,216,000  165,888,000  188,244,000
21 Min      222,264,000  193,536,000  220,752,000  193,536,000  219,618,000
63 Min      666,792,000  580,608,000  662,256,000  580,608,000  658,854,000
74 Min      783,216,000  681,984,000  777,888,000  681,984,000  773,892,000
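Each entry in Table 12.3 is simply the disc's sector count multiplied by the user-data bytes per sector in that format. A sketch, with the block sizes and sector counts taken from the tables above:

```python
BLOCK_SIZES = {                 # user-data bytes per sector, per format
    "Audio": 2352, "Mode 1": 2048, "Mode 2": 2336,
    "XA Form 1": 2048, "XA Form 2": 2324,
}
SECTORS = {18: 81_000, 21: 94_500, 63: 283_500, 74: 333_000}

def format_capacity(minutes, fmt):
    """Maximum bytes a disc of the given length holds in the given format."""
    return SECTORS[minutes] * BLOCK_SIZES[fmt]

print(f"{format_capacity(74, 'Audio'):,}")  # 74-minute disc as audio
```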

Logical Block Padding

As with most hard and floppy disks, CD ROMs allocate their storage in increments called logical
blocks. Although logical block sizes of 512, 1024, and 2048 bytes are possible with today’s CD
drives, only the 2048-byte logical block format is in wide use. If a file is smaller than a logical block,
it is padded out to fill a logical block. If a file is larger than one logical block, it fills all its logical
blocks except the last, which is then padded out to be completely filled. As a result of this allocation
method, all files except those that are an exact multiple of the logical block size require more disc
space than their actual size. In addition, all directories on a CD require at least one logical block of
storage.
That said, CD ROMs are typically more frugal with their storage than today’s large hard disks. The
standard DOS and Windows 95 disk formats require allocation units called clusters of 16 kilobytes for
disks with capacities between one and two gigabytes, so they waste substantially more space on
allocation unit padding than do CDs.
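The padding rule reduces to a one-line ceiling calculation. A sketch comparing the 2,048-byte CD logical block with a 16KB hard disk cluster (the 5,000-byte file size is an arbitrary example):

```python
import math

def allocated(file_size, block=2048):
    """Space a file actually consumes when storage comes in fixed blocks.

    A file smaller than one block still occupies a whole block."""
    return max(1, math.ceil(file_size / block)) * block

size = 5000                          # bytes, arbitrary example
print(allocated(size))               # on a CD: three 2,048-byte blocks
print(allocated(size, block=16384))  # on a big hard disk: one 16KB cluster
```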

Format Overhead

In addition to the block-based overhead shared with most mass storage devices, CD ROMs have their
own format overhead that is unique to the CD system. These are remnants of the audio origins of the
CD medium.
Because audio CDs require lead-in and lead-out tracks, the Yellow Book standard for CD ROM
makes a similar allowance. The specifications require that data on a CD ROM begin after a
two-second pause, followed by a lead-in track 6500 sectors long. Consequently, the first two seconds
of storage space and the lead-in area on a CD are not usable for data. These two seconds comprise a
total of 150 sectors each holding 2048 bytes, which trims the capacity of the disc by 307,200 bytes.
The 6500 sector lead-in consumes another 13,312,000 bytes. The lead-out gap at the end of a
storage session and pre-gap that allows for a subsequent session consume another 4650 sectors or
9,523,200 bytes.
The ISO 9660 file structure also eats away at the total disc capacity. The standard reserves the first 16
sectors of the data area—that’s 32,768 bytes—for system use. Various elements of the disc format
also swallow up space. The root file, primary volume descriptor, and volume descriptor set terminator
each require a minimum of one sector. The path tables require at least two sectors. The required
elements consequently take another five sectors or 10,240 bytes of space. Discs with complex file
structures may exceed these minima and lose further storage space.
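Adding up the overhead described above gives a rough upper bound on what a single-session disc can actually hold. The sketch below uses only the figures quoted in this section; real discs with complex file structures lose more:

```python
BYTES_PER_SECTOR = 2048

def usable_bytes(raw_sectors):
    """Approximate usable capacity after single-session format overhead."""
    overhead = 150          # two-second pause
    overhead += 6500        # lead-in track
    overhead += 4650        # lead-out gap and pre-gap
    overhead += 16          # ISO 9660 system area
    overhead += 5           # minimal descriptors and path tables
    return (raw_sectors - overhead) * BYTES_PER_SECTOR

print(f"74-minute disc: {usable_bytes(333_000):,} usable bytes")
```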
The more sessions you divide a given CD into, the less space will be available for your data. Each
session on a multi-session CD requires its own lead-in. Consequently, each session requires at least
13MB of space in addition to the file structure overhead.

Vulnerabilities

No matter the dye used, recordable CD media are not as durable as commercially stamped CDs. They
require a greater degree of care. They are photosensitive, so you should not expose them to direct
sunlight or other strong light sources. The risk of damage increases with exposure. The label side of
recordable CDs is often protected only by a thin lacquer coating. This coating is susceptible to
damage from solvents such as acetone (finger nail polish remover) and alcohol. Many felt tip markers
use such solvents for their inks, so you should never use them for marking on recordable CDs. The
primary culprits are so-called permanent markers, which you can usually identify by the strong aroma
of their solvents. Most fine point pen-style markers use aqueous inks, which are generally safe on CD
surfaces. Do not use ballpoint, fountain pen, pencil or other sharp-tipped markers on recordable CDs
because they may scratch through the lacquer surface and damage the data medium.
The safest means of identifying a recordable CD is using a label specifically made for the recordable
CD medium. Using other labels is not recommended because they may contain solvents that will
attack the lacquer surface of the CD. Larger labels may also unbalance the disc and make reading it
difficult for some CD players. In any case, once you put a label on a recordable CD, do not attempt to
remove it. Peeling off the label likely will tear off the protective lacquer and damage the data medium.

Operation

Creating a CD is a complex process. The drive doesn’t just copy down data blocks as your PC pushes
them out. Every disc, even every session, requires its own control areas to be written to the disc. Your
CD-R drive doesn’t know enough to handle these processes automatically because the disc data
structure depends on your data and your intentions. Your CD-R drive cannot fathom either of these.
The job falls to the software you use to create your CD-R discs.
Your CD creation software organizes the data for your disc. As it sends the information to your CD-R
drive, it also adds the control information required for making the proper disc format. After you’ve
completed writing to your disc, the software fixates the disc so that it can be played. The last job is
left to you—labeling the disc so you can identify the one you need from a stack more chaotic than the
pot of an all night poker game.

Speed

As with ordinary CD ROM, the speed of CD-R drives is the transfer rate of the drive measured in
multiples of the basic audio CD speed, 150KB/sec. The very first CD recorders operated at 1x speed,
and each new generation has doubled that speed. The fastest drives currently operate at 4x, although
technical innovation can increase that just as it has improved basic CD speed.
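Given the 150KB/sec base rate, writing time scales inversely with the speed factor. A rough sketch; it ignores lead-in, fixation, and buffer stalls, and treats 150KB as 150 × 1,024 bytes (some sources quote 150,000):

```python
BASE_RATE = 150 * 1024   # 1x CD data rate in bytes per second (binary KB assumed)

def write_minutes(data_bytes, speed_factor):
    """Idealized minutes needed to stream the given data to disc."""
    return data_bytes / (BASE_RATE * speed_factor) / 60

for x in (1, 2, 4):
    print(f"{x}x: {write_minutes(650 * 2**20, x):.1f} minutes for 650MB")
```

At 1x a full 650MB takes roughly the disc's 74-minute playing time, as you would expect from the audio heritage of the format.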
Most CD recorders have two speed ratings, one for writing and one for reading. The writing speed is
invariably the same or less than the reading speed. Advertisements usually describe drives using two
numbers, the writing speed (lower number) first. The most common speed combinations are: 1x1,
single-speed read and write; 1x2, single-speed write and double-speed read; 2x2 double-speed writing
and reading; 2x4 double-speed writing and quadruple-speed reading; and 4x4 quadruple-speed in both
writing and reading.
How fast a CD recorder writes is only one factor in determining how long making one or more CDs
will take. Other variables include your system, writing mode (whether you try to put files together for
a CD session on the fly or try to write a disc image as one uninterrupted file), and the number of
drives.
Your system and writing mode go hand-in-hand. As noted below, a CD recorder requires a constant,
uninterrupted stream of data to make a disc. The speed at which your PC can maintain that data flow
can constrain the maximum writing speed of a CD-R drive. Factors that determine the rate of data
flow include the speed of the source of the data (your hard disk), the fragmentation of the data, and
the interfaces between the source disk and your CD recorder.
Most CD recorders have built-in buffers to bridge across temporary slowdowns in the data supply,
such as may be involved in your hard disk’s read/write head repeatedly moving from track to track to
gather together a highly fragmented file or when an older, non-A/V drive performs a thermal
calibration. Even with this bridge action, however, such hard disk slowdowns reduce the net flow of
data to the CD recorder. If you try to create a CD by gathering together hundreds of short hard disk
files on the fly, your hard disk may not be able to keep up with the data needs of a 4x CD recorder. In
fact, if the files are many and small, the hard disk may not even be able to maintain 1x speed, forcing
you to resort to making an image file before writing to disc.
Current CD recording software is oriented to SCSI devices. It works optimally when moving files from
SCSI-based hard disks and CD readers. When your data originates on an IDE or EIDE hard disk or
CD reader, this software slows down, possibly knocking you from 4x to 2x or 1x operation in writing
CDs. In fact, some software only allows operations such as copying CD tracks from SCSI CD readers.
To work with IDE or EIDE CD tracks, you may first have to build an image file on your hard disk
drive. In other words, you should carefully check the requirements of the recording software before
you invest in it.
The bottom line is that your present PC may not be able to deliver data at the rate required by a 4x CD
recorder. In that a 4x drive is substantially more expensive than a 2x drive, you may be wasting
money on speed you cannot use. To take full advantage of a 4x drive with today’s software, you may
also have to invest in a Fast SCSI drive or even an Ultra SCSI hard disk drive and a bus mastering
host adapter to serve as the data source.
When you have to produce a large number of CDs quickly, one of the best strategies is to use multiple
drives. Five drives writing simultaneously cuts the net creation time of an individual CD by 80
percent. For moderate volume applications, stacks of CD writers can make a lot of sense—and CDs.
For large volume applications (generally more than a few hundred), pressing CDs is the most cost
effective means of duplication, albeit one that requires waiting a few days for mastering and pressing.

Disc Writing Modes

Depending on your CD-R drive and your CD creation software, you may have your choice of the
mode you use for writing to your CD. The mode determines what you can write to your discs and
when. Typically, you don’t have to worry about the writing mode because your software takes care of
the details of it automatically. However, some drives and software may be limited to the modes under
which they can operate.
The basic CD writing modes are four: track-at-once, multi-session, disc-at-once, and incremental
writing. Each has its own requirements, limitations, and applications.

Track-at-Once

The most basic writing method for CDs is the creation of a single track. A track can be in any format
that your CD-R drive can write, for example, a CD ROM compatible disc or a CD-DA disc for your
stereo system. The track-at-once process writes an entire track in a single operation. A track must be
larger than 300 blocks and smaller than the total capacity of the disc minus its overhead.
Writing track-at-once requires only that you designate what files you want to put on a CD. Your CD
creation software takes over and handles the entire writing process.
Originally the big limitation of track-at-once writing was that you could write only one track on a disc
in a single session. Consequently, unless you had a lot to write to your disc already prepared
beforehand, this process was wasteful of disc space. Some modern CD systems can add one track at a
time to a disc within a single session, even allowing you to remove the disc from the drive and try it in
another in the middle of the process.
Each track has overhead totaling 150 blocks for run-in, run-out, pre-gap and linking. CD standards
allow 99 tracks per disc; consequently, if your tracks are small you may waste substantial capacity.
Writing the maximum number of tracks of minimal size (300 blocks plus 150 blocks of overhead
each) will only about half fill the smallest, 18-minute CD disc (44,550 blocks on an 81,000 block disc).
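The arithmetic behind that figure is easy to check. A quick sketch using the track limits just described:

```python
MIN_TRACK = 300        # smallest permitted track, in blocks
TRACK_OVERHEAD = 150   # run-in, run-out, pre-gap, and linking per track
MAX_TRACKS = 99        # CD standards allow 99 tracks per disc

# Blocks consumed by the maximum number of minimal tracks
used = MAX_TRACKS * (MIN_TRACK + TRACK_OVERHEAD)
print(used, "of 81,000 blocks on an 18-minute disc")
```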

Track Multi-Session

Sometimes called track incremental mode, track multi-session mode is the most common means of
allowing you to take advantage of the full capacity of CDs. Track multi-session writing allows you to
add to CDs as you have the need by dividing the capacity of the disc into multiple sessions, up to
about 50 of them. Each session has many of the characteristics of a complete CD including its own
lead-in and lead-out areas as well as table of contents.
In fact, the need for these special formatting areas for each session is what limits the number of
sessions on the disc. The lead-in and lead-out areas together require about 13.5 megabytes of disc

http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/wrh12.htm (25 de 26) [23/06/2000 05:38:41 p.m.]


Winn L. Rosch Hardware Bible, Electronic Edition, Chapter 12

space. Consequently, CDs with a total capacity of 680 megabytes can hold no more than about 50
sessions.
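The 50-session ceiling follows directly from dividing total capacity by the per-session overhead (a sketch; megabytes are treated as binary here):

```python
SESSION_OVERHEAD = 13.5 * 2**20   # lead-in plus lead-out, about 13.5MB
DISC_BYTES = 680 * 2**20          # total capacity of the disc

max_sessions = int(DISC_BYTES // SESSION_OVERHEAD)
print(max_sessions)   # upper bound, ignoring any room left for your data
```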
When the CD standards were first created, engineers didn’t even consider the possibility that
individual consumers would ever be able to write their own discs. They assumed that all discs would
be factory mastered in a single session. They designed early CD drives to recognize only one session
on a disc. Many older CD ROM drives (particularly those with 1x and 2x speed ratings) were
single-session models and cannot handle multi-session discs written in track multi-session mode.
Single-session drives generally read only the first session on a disc and ignore the rest.
Another problem that may arise with multi-session discs is the mixing of formats. Many CD players
are incapable of handling discs on which CD ROM Mode 1 or 2 sessions are mixed with XA sessions.
The dangerous aspect of this problem is that some CD mastering software (and CD drives) allow you
to freely mix formats in different sessions. You may create a disc that works on your own CD drive
but cannot function in other CD drives. The moral is not to mix formats on a disc. (Don’t
confuse format with data type. You can freely mix audio, video, and pure data as long as they are
written in the same format, providing the one you choose is compatible with all three data types.)
Most modern CD-R machines allow you to write more than one track in a given session. The
advantage of this technique is the elimination of most of the 13.5MB session overhead. Instead of
lead-in and lead-out tracks, each pair of tracks is separated by 150 blocks (two seconds) of
pre-gap—overhead of only about 300K. The entire session must, of course, be framed by its own lead-in and lead-out areas.

Winn L. Rosch Hardware Bible, Electronic Edition, Chapter 13

Chapter 13: Tape


Tape is for backup—your insurance that a disaster doesn't erase every last vestige of
your valuable data. You also can use some of the standardized tape systems as exchange
media—the floppy disk for the age of megabytes. Tape systems come in a number of
formats with different capacities, speeds, and levels of convenience. The best, however, is
the one that's easiest to use.

■ Background
■ Medium
■ Tape
■ Cartridges
■ Technologies
■ Start-Stop Tape
■ Streaming Tape
■ Parallel Recording
■ Serpentine Recording
■ Helical Recording
■ Formats
■ Linear Recording Tape Systems
■ Open Reel Tape
■ 3480/3490/3590 Cartridges
■ Audio Cassettes
■ D/CAS
■ DCC
■ Quarter-Inch Data Cartridges
■ Mini-Cartridges
■ Digital Linear Tape
■ Obsolete and Nonstandard Tape Systems

http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/wrh13.htm (1 de 15) [23/06/2000 05:43:23 p.m.]


■ Helical-Scan Systems
■ Eight Millimeter
■ Digital Data Standard
■ Pereos
■ Backup Issues
■ Portable Drives
■ Network Backups
■ Backup Software
■ Image Backups
■ File-by-File Backups
■ Verification
■ Tape Requirements
■ Media Matching
■ Backup Strategy

13

Tape

Faith is trusting your most valuable possessions—purportedly to protect them—to a sealed black box
filled with fragile machinery you don't understand honed to split-hair tolerances and vulnerable to a
multitude of ills—electroshock, impact, even old age. One misstep and your treasures can be
destroyed.
Such groundless, even misguided, trust has no place in business and definitely no role in the rigorous
world of personal computers, but that's exactly what you do every time you save a file to your hard
disk drive. You send data to a complex, sealed black box with the hope of someday retrieving it.
Technology has given us this faith, but just in case the faith is misused, technology has also
bequeathed us the backup system.
A backup system will help you sleep at night—not with narcotic action (tape is definitely not habit
forming) but with peace of mind. You won’t have to worry about your PC exploding from overload,
the office burning down from spilled midnight oil, or thieves stealing your PC and all your business
records with it. With a suitable backup system, you can quickly make a copy of your most valuable
data on a movable medium that you can take to preserve somewhere safe. When disaster strikes—as it
inevitably does —you always have your backup copy ready to replace the original.

The concept is simple but far from perfect. A backup system requires a combination of contradictory
features. It must be permanent but reusable, secure but portable, reliable but cheap, easy to use but a
piece of PC equipment. Two principal types of storage equipment best fit the bill: cartridge disks
(which we’ve already discussed) and tape, the subject of this chapter.

Background

Tape was the first magnetic mass storage system used in computers, harking back to the days of room
size Univacs and vacuum tubes. It first proved itself as a convenient alternative to punched cards and
punched paper tape—the primary storage system used by mainframe computers. Later, information
transfer became an important use for tape. Databases could be moved between systems as easily as
carting around one or more spools of tape. After magnetic disks assumed the lead in primary storage,
tape systems were adapted to backing them up.
When tape joined with personal computing it already enjoyed many of the benefits of its evolution in
the mainframe environment. Never considered as primary storage—except in the nightmares of the
first PC's original designers who thoughtfully included a cassette port on the machine for its millions
of users to ignore—tape started life in the PC workplace in the same role it serves today, as a basic
backup medium. Since the first systems, tape has grown in stride with the disks that it serves to back
up. From initial systems able to store only about 30MB per tape, modern systems pack gigabytes into
compact, convenient cartridges.
With half a century of development behind it, tape has the potential to play a major role in your PC,
not just protecting your data but also as an archival and exchange medium. Tape is unmatched as a
low cost medium for reliably holding data for the long term.
That said, tape has never lived up to its potential, and still lands far from the mark. Tape is plagued by
a trio of problems: standardization, which suffers the dual curse of both too few and too many
standards; software that stodgily stays a generation behind the hardware; and laziness. Using tape is
simply bothersome. The returns are always far off and, you hope, unnecessary. Tape is about as
appealing as thoughts of savings accounts when you stand in the candy store with a quarter in your
hand. You never think of the possibility of being down and out in the gutter with cavities raging
among your sugar-weakened teeth. One Tootsie Roll won’t doom you to wear dentures at seventeen.
Missing one backup session won’t put your company out of business when the accounts receivable
disappear into the ether.
With tape you have to face the same grim possibilities as when buying life insurance. No one likes
insurance salespeople, not even other insurance salespeople. Why would anyone actually like tape?
In the end, you hate tape and buy it because there’s nothing better to accomplish what it does. You
swallow hard, buy a backup system, and swallow hard again every time you use it.
At least you can keep it from getting caught in your throat with the satisfaction of knowing you
bought the right tape system for your PC, one that will actually serve its purpose and preserve your
backups for the time when you need them. So swallow hard once again and forge ahead.

Medium

As a physical entity, tape is both straightforward and esoteric. It is straightforward in design,
providing the perfect sequential storage medium—a long, thin ribbon that can hold orderly sequences
of information. The esoteric part involves the materials used in its construction.

Tape

The tape used by any system consists of two essential layers—the backing and the coating. The
backing provides the support strength needed to hold the tape together while it is flung back and forth
across the transport. Progress in the quality of the backing material mirrors developments in the
plastics industry. The first tape was based on paper. Shortly after the introduction of commercial tape
recorders at the beginning of the 1950s, cellulose acetate (the same plastic used in safety film in
photography for three decades previously) was adopted. The state of the art plastic is polyester, of
double knit leisure suit fame. In tape, polyester has a timeless style of its own—flexible and long
wearing with a bit of stretch. It needs all those qualities to withstand the twists and turns of today's
torturous mechanisms, fast shuttle speeds, and abrupt changes of direction. The typical tape backing
measures from one-quarter mil (a mil is a thousandth of an inch) to one mil thick, about 6 to 25 microns.
The width of the backing varies with its intended application. Wider tapes offer more area for storing
data but are more costly and, after a point, become difficult to package. The narrowest tape in
common use, cassette tape, measures 0.150 inches (3.8 millimeters) wide. The widest in general use
for computing measures 0.5 inches (12.7 millimeters). Equipment design and storage format
determine the width of tape to be used.
Coatings have also evolved over the decades, as they have for all magnetic media. Where once most
tapes were coated with doped magnetic oxides, modern coatings include particles of pure metal in
special binders and even vapor plated metal films. Tape coatings are governed by the same principles
as other magnetic media; the form is different but the composition remains the same. As with all
magnetic storage systems, modern tape media have higher coercivities and support higher storage
densities.

Cartridges

Taken by itself, tape is pretty hard to get a handle on. Pick up any reasonable length of tape, and
you’ll have an instant snarl on your hands. The only place that tape is used by itself is in endless loops
(one ends splice to the other) in special bins used by audio and video duplicating machines. In all
other applications, the tape is packaged on reels or in cartridges.
Reels came first. Simple spools onto which a length of tape gets wound, the reel is the simplest
possible tape carrier. In this form, tape is called open reel. Normal tape transport requires two reels,
one to supply the tape and one to take up the tape after it passes past the read/write heads. The
principal problem with reel-based systems is manual tape handling. To use a tape reel, you must slide
it onto a hub, pull out a leader, thread it past the read/write heads, and wrap it around the hub of the
take-up reel. In addition, the tape must be physically turned over to use its second side (if it has one)
or rewound when you’re done with it. Although the path taken by recording tape is less tortuous than
that of movie film, many people are challenged to thread it properly. Automatic threading systems are
complex and, when badly implemented, unreliable. Although open reel tape remains in use, most PC
applications avoid it.
Putting tape in a cartridge adds a permanent package that both provides protection to the delicate
medium and makes it more convenient to load. The most basic cartridge, that used by the mainframe
3480/3490/3590 system, simply packages a reel of tape in a plastic shell and relies on an automatic
threading mechanism.
All current PC-size tape systems, including digital cassettes, quarter-inch cartridges, four-millimeter,
and eight-millimeter helical tape systems, use cassette style cartridges that include both the supply and
take-up reels in a single cartridge. The originator of this design is traceable back to the original audio
cassette.
Developed—and patented—by the Dutch Philips conglomerate, the audio cassette was just one of
many attempts to sidestep the need for threading open reel tapes. The idea did not originate with
Philips. An earlier attempt by RCA, which used a similar but larger cassette package, failed ignobly in
the marketplace. The Compact Cassette, as it was labeled by Philips, was successful because it was
more convenient and did not aspire so high. It was not designed as a high fidelity medium, but grew
into that market as technology improved its modest quality. While the RCA cartridge was about the
size of a thin book, the Compact Cassette fit into a shirt pocket and was quite at home when it was on
the go in portable equipment. Size and convenience led to its adoption as the auto sound medium of
choice and then the general high fidelity medium of choice (even before the introduction of the
Compact Disc, cassettes had earned the majority of the music market).
The basic cassette mechanism simply takes the two spools of the open reel tape transport and puts
them inside a plastic shell. The shell protects the tape because the tape is always attached to both
spools, eliminating the need for threading. The sides of the cassette shell serve as the sides of the tape
reel—holding the tape in place so that the center of the spool doesn't pop out. This function is
augmented by a pair of Teflon slip sheets, one on either side of the tape inside the shell, that help to
eliminate the friction of the tape against the shell. A clear plastic window in either side of the shell
enables you to look at how much tape is on either spool—how much is left to record on or play back.
The reels inside the cassette themselves are merely hubs that the tape can wrap around. A small clip
that forms part of the perimeter of the hub holds the end of the tape to the hub. At various points
around the inside of the shell, guides are provided to ensure that the tape travels in the correct path.
The cassette also incorporates protection against accidental erasure of valuable music or information.
On the rear edge of the cassette—away from where the head inserts—are a pair of plastic tabs
protecting hole-like depressions in the shell. A finger from the cassette transport attempts to push its
way into this hole. If it succeeds, it registers that the cassette is write-protected. Breaking off one of
these tabs therefore protects the cassette from accidental erasure. To restore recordability, the hole
needs only to be covered up. Cellophane or masking tape—even a Band-Aid or file folder label works
for that purpose. Two such tabs exist—one to protect each side of the tape. The tab in the upper left
protects the top side of the cassette. (Turn the cassette over, and the other side becomes the top—but
the tab that allows recording on this side still appears in the upper left.)

More recent audio cassettes may have additional notches on the rear edge to indicate to the automatic
sensing cassette drives the type of tape inside the cassette shell. Audio tape comes in four varieties
that require different settings on the cassette recording for optimal operation.
More recent tape cartridges have altered some of the physical aspects of the cassette design but retain
the underlying technologies. For example, the shell of the audio cassette is thickened at the open edge
to allow the record/playback head and the drive puck to be inserted against the tape and the tape is
unprotected in the head access area. All other data tape cartridges have a nearly uniform thickness and
provide some kind of door to protect the tape from damage. Although other data tape cartridges do not
use the same tab and hole mechanism for write protection, all incorporate write-protection of some
kind. Most make it reversible using a sliding tab or rotating indicator. And, of course, other data
cartridges use tapes of different widths than audio style cassettes. The exact design of a cartridge
depends on the goals, knowledge, and prejudices of its designers.
The development of cartridges, no matter their physical embodiment, has had a vital effect on tape
backup. All claims for the convenience of tape backup are based on this ease of loading the cartridges.
But people still decry the inconvenience of tape backup—they just shift the blame to the software.

Technologies

Tape systems are often described by how they work, that is, the way they record data onto the tape.
For example, although the term "streaming tape" that’s appended to many tape drives may conjure
images of a cassette gone awry and spewing its guts inside the dashboard of your car (and thence to
the wind as you fling it out the window), it actually describes a specific recording mode that requires
an uninterrupted flow of data. At least four of these terms—start-stop, streaming, parallel, and
serpentine—crop up in the specifications of common tape systems for PCs.

Start-Stop Tape

The fundamental difference between tape drives is how they move the tape. Early drives operated in
start-stop mode; they handled data one block (ranging from 128 bytes to a few kilobytes) at a time and
wrote each block to the tape as it was received. Between blocks of data, the drive stopped moving the
tape and awaited the next block. The drive had to prepare the tape for each block, identifying the
block so that the data could be properly recovered. Watch an old movie featuring mainframe computers
with jittering tape drives, and you’ll see the physical embodiment of start-stop tape.
The earliest PC tape systems operated in start-stop mode. They had to. The computers and their disks
were so slow that they could not move data to the drive as fast as the drive could write it to tape.
Modern PCs, disks, and tape drives are all faster, and they use large memory buffers to assure that the
tape-bound data forms an uninterrupted stream. Tape drives usually shift to start-stop mode only when
an intervening circumstance—for example, an important task steals so much microprocessor time that
not enough is available to prepare data for the tape—temporarily halts the data flow. The drive then
will often rewind to find its place before accepting the next block of data and starting the tape in
motion again.

http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/wrh13.htm (6 de 15) [23/06/2000 05:43:24 p.m.]


Winn L. Rosch Hardware Bible, Electronic Edition, Chapter 13

Streaming Tape

When your PC tape drive gets the data diet it needs, bytes flow to the drive in an unbroken stream and
the tape runs continuously. Engineers called this mode of operation streaming tape.
Drives using streaming tape technology can accept data and write it to tape at a rate limited only by
the speed the medium moves and the density at which bits are packed—the linear density of the data
on the tape. Because the tape does not have to stop between blocks, the drive wastes no time. The
streaming design also lowers the cost of tape drives because the drives do not have to accelerate the
tape quickly or brake the motion of the tape spools, allowing a lighter weight mechanism to be used.
Nearly all PC tape drives are now capable of streaming data to tape.
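The difference between the two modes comes down to whether the drive's buffer ever runs dry. A toy model makes the penalty concrete; all timing figures here are invented for illustration, not taken from any real drive:

```python
# Toy model of streaming versus start-stop operation. The timing
# figures are invented for illustration, not from any real drive.

def backup_time_ms(blocks, write_ms=10, underrun_every=None, reposition_ms=500):
    """Milliseconds to write `blocks` tape blocks.

    If the host starves the drive every `underrun_every` blocks, the
    drive drops out of streaming mode and pays `reposition_ms` to
    stop, back up, and restart the tape.
    """
    total = blocks * write_ms
    if underrun_every:
        total += (blocks // underrun_every) * reposition_ms
    return total

streaming = backup_time_ms(10_000)                      # buffer never empty
start_stop = backup_time_ms(10_000, underrun_every=10)  # chronic underruns
```

Even a modest reposition penalty dominates the total when underruns are frequent, which is why buffering matters more than raw tape speed.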

Parallel Recording

Just as disk drives divide their platters into parallel tracks, the tape drive divides the tape into multiple
tracks across the width of the tape. The number of tracks varies with the drive and the standard it
follows.
The first tape machines used with computer systems recorded nine separate data tracks across the
width of the tape. The first of these machines used parallel recording in which they spread each byte
across their tracks, one bit per track with one track for parity. A tape was good for only one pass
across the read/write head, after which the tape needed to be rewound for storage. Newer tape systems
elaborate on this design by laying 18 or 36 tracks across a tape, corresponding to a digital word or
double-word, written in parallel.
Parallel recording provides a high transfer rate for a given tape speed because multiple bits get written
at a time, but makes data retrieval time consuming—finding a given byte might require fast
forwarding across an entire tape. In addition, the read/write heads and electronics are necessarily
complicated. The head requires a separate pole and gap for each track. To prepare the signals for each
head gap, the tape drive requires a separate amplifier. These complications increase the cost of tape
drives that use parallel recording.
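The byte-per-slice arrangement is easy to sketch. The helper below assumes odd parity on the ninth track, a common choice for nine-track systems though not stated in the text:

```python
# Nine-track parallel recording sketch: one byte becomes one lateral
# slice across the tape -- eight data tracks plus a parity track.
# Odd parity is assumed here for illustration.

def byte_to_slice(byte):
    bits = [(byte >> i) & 1 for i in range(8)]   # one bit per data track
    parity = 1 - (sum(bits) % 2)                 # force an odd count of 1s
    return bits + [parity]                       # nine tracks, written at once

slice_ = byte_to_slice(ord('A'))
```

Odd parity guarantees that every slice contains at least one recorded 1 bit, so a completely dropped slice or a single flipped bit is detectable on playback.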

Serpentine Recording

Most PC tape systems use multi-track drives, but do not write tracks in parallel. Instead, they convert
the incoming data into serial form and write that to the tape. Serial recording across multiple tracks
results in a recording method called serpentine recording.
Serpentine cartridge drives write data bits sequentially across the tape in one direction on one track at
a time continuing for the length of the tape. When the drive reaches the end of the tape, it reverses the
direction the tape travels and cogs its read/write head down one step to the next track. At the end of

that pass, the drive repeats the process until it runs out of data or fills all the tracks. Figure 13.1 shows
the layout of tracks across a tape using four tracks of serpentine recording.
Figure 13.1 Layout of four tracks using serpentine recording.

A serpentine tape system can access data relatively quickly by jogging its head between tracks
because it needs to scan only a fraction of the data on the tape for what you want. Additionally, it
requires only a single channel of electronics and a single pole in the read/write head, lowering overall
drive costs. Modern serpentine systems may use over 50 tracks across a tape.
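The back-and-forth traversal reduces to a simple rule: the pass number picks the track, and its parity picks the tape direction. A hypothetical helper illustrating the idea:

```python
# Hypothetical serpentine-recording helper: each end-to-end pass of the
# tape uses the next track down, with the tape direction alternating.

def serpentine_pass(pass_number):
    """Return (track, direction) for a 0-based end-to-end pass."""
    direction = "forward" if pass_number % 2 == 0 else "reverse"
    return pass_number, direction

order = [serpentine_pass(p) for p in range(4)]
```

The first four passes cover tracks 0 through 3, alternating forward and reverse, matching the four-track layout of Figure 13.1.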

Helical Recording

The basic principle of all the preceding tape systems is that the tape moves past a stationary head. The
speed the tape moves and the density of data on the tape together determine how fast information can
be read or written, just as the data density and rotation rate of disks control data rate. Back in the 1950s,
however, data rate was already an issue when engineers tried to put television pictures on ordinary
recording tape. They had the equivalent of megabytes to move every second, and most ordinary tape
systems topped out in the thousands. The inspired idea that made video recording possible was to
make the head move as well as the tape to increase the relative speed of the two.
Obviously, the head could not move parallel to the tape. The first videotape machines made the head
move nearly perpendicular to the tape movement. Through decades of development, however, rotating
a head at a slight angle to the tape so that the head traces out a section of a helix against the tape has
proven to be the most practical system. The resulting process is called helical scan recording. Today
two helical scan systems are popular, eight-millimeter and DAT (Digital Audio Tape).
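The payoff of a spinning head is easy to quantify. The drum size and rotation rate below are assumed, DAT-like figures used only for illustration; neither comes from this text:

```python
import math

# Assumed, DAT-like figures (not from this text): a 30 mm head drum
# spinning at 2,000 rpm while the tape crawls past at about 8 mm/s.
drum_diameter_m = 0.030
drum_rpm = 2_000
tape_speed_m_s = 0.008

# Head-to-tape speed is roughly the drum's surface speed.
head_speed_m_s = math.pi * drum_diameter_m * drum_rpm / 60
ratio = head_speed_m_s / tape_speed_m_s   # hundreds of times faster
```

With these numbers the head sweeps the tape a few hundred times faster than the tape itself moves, which is where video-class data rates come from.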
In a helical scan recording system, the rotating heads are mounted on a drum. The tape wraps around
the drum outside its protective cartridge. Two arms pull the tape out of the cartridge and wrap it about
halfway around the drum (some systems, like unlamented Betamax, wrap tape nearly all the way
around the drum). So that the heads travel at an angle across the tape, the drum is canted at a slight
angle, about five degrees for eight-millimeter drives and about six degrees for DAT. The result is that
a helical tape has multiple parallel tracks that run diagonally across the tape instead of parallel to its
edges. These tracks tend to be quite fine—some helical systems put nearly 2000 of them in an inch.
In most helical systems, the diagonal tracks are accompanied by one or more tracks parallel to the tape
edge used for storing servo control information. In video systems, one or more parallel audio tracks
may also run the length of the tape. Figure 13.2 shows how the data and control tracks are arranged on
a helical tape.
Figure 13.2 Helical scan recording track layout.

Helical scan recording can take advantage of the entire tape surface. Conventional stationary head
recording systems must leave blank areas—guard bands—between the tracks containing data. Helical
systems can and do overlap tracks. Although current eight-millimeter systems use guard bands, DAT
writes the edges of tracks over one another.
This overlapping works because the rotating head drum actually has two (or more) heads on it, and
each head writes data at a different angular relationship (called the azimuth) to the tracks on the tape.
In reading data, the head responds strongly to the data written at the same azimuth as the head and
weakly at the other azimuth. In DAT machines, one head is skewed twenty degrees forward from
perpendicular to its track; the other head is skewed backward an equal amount.

Formats

Each of these various recording methods, along with physical concerns such as tape width and
cartridge design, allows for a nearly infinite range of tape recording systems. When you try to make
sense of backup systems, it often seems like all of the possibilities have been tried in commercial
products.
The number of formats used by PC tape systems is indeed diverse. Search for the one perfect backup
system and you’ll be confronted by more "standards" than in any other area of personal computing.
One tape industry organization alone publishes hundreds of tape standards, and a wide variety of
competing systems and standards thrive outside of those definitions. Worse, many of the standards
don’t guarantee intercompatibility among products that abide by them. All too often you can buy a tape
drive that conforms to an industry standard but will not read a tape written by another manufacturer’s
drive that follows the same standard.
In other words, tape standards and the resulting formats often serve as points of departure rather than
bastions of rigid conformity. If nothing else, however, they can help guide us on a quick tour of your
various options in the world of tape.
The chief division among tape systems is between linear recording and helical recording systems.

Linear Recording Tape Systems

The most straightforward tape systems use simple linear recording. That is, they simply move the tape
past a stationary head just like the very first audio tape recorders. Linear recording is the old reliable
in tape technology. The process and methods have been perfected (as much as that is possible) over
more than five decades of use.
The oldest of the tape systems, progenitor of all, is open reel tape. All other linear tape systems are
just open reel tape with the reels enclosed. The chief differences are in the size of the cartridges. All of
the various linear systems share a common technology, so all benefit from the improvements in
technology—through the years the capacity of a given length of tape has multiplied so that a
cassette-size cartridge now holds more megabytes than the biggest open reel of days gone by.

Open Reel Tape

Early in the history of computing, a standard format arose for open reel tape. The tape, nominally
one-half inch wide, was split into nine parallel tracks, each running the full length of the tape. One
track was used for each bit of a byte of data, the ninth track containing parity-checking information.
Every byte was recorded in parallel—a lateral slice across the tape. Because of these enduring
physical characteristics of the medium, open reel tape is often termed half-inch or nine-track tape.
Individual reels of tape can be almost any diameter larger than the three-inch central hole. The most
common sizes are 7 and 10.5 inches in diameter. Tape lengths vary with reel size and with the
thickness of the tape itself. A 10-inch spool nominally holds 2500 to 3600 feet (about 760 to 1100
meters) of tape.
As open reel technology evolved, the distance between each byte was gradually reduced, packing an
increasing amount of information on every inch of tape. Originally, open reel tapes were recorded
using FM signals, packing 800 bytes on every linear inch of the tape. Advancing to MFM doubled the
density to 1,600 bpi. This density is now the most common in open reel tape. More exotic transports
push data densities up to 3200 or even 6250 bpi.
Data are recorded on open reel tape in distinct blocks, each separated by a stretch of blank tape called
the inter-block gap. The length of this gap can vary from a fraction of an inch to several inches,
depending on characteristics of the overall system (involving such factors as how quickly the host
computer can send and receive information from the tape subsystem). Together, the tape length, data
density, and inter-block gap determine the capacity of a single reel of tape. The common 1600 bpi
density and a reasonable inter-block gap can put about 40 megabytes on a 10-inch reel.
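That 40 megabyte figure follows directly from the geometry. In the sketch below, only the 1,600 bpi density and 2,500-foot reel length come from the text; the 4KB block size and half-inch inter-block gap are illustrative assumptions:

```python
# Rough open reel capacity. The 1,600 bpi density and 2,500-foot length
# come from the text; the 4 KB block size and half-inch inter-block gap
# are illustrative assumptions.

def reel_capacity_bytes(tape_feet=2_500, density_bpi=1_600,
                        block_bytes=4_096, gap_inches=0.5):
    tape_inches = tape_feet * 12
    block_inches = block_bytes / density_bpi      # a 4 KB block spans 2.56 in
    blocks = int(tape_inches // (block_inches + gap_inches))
    return blocks * block_bytes

capacity = reel_capacity_bytes()   # on the order of 40 megabytes
```

Shrinking the gap or enlarging the blocks raises the effective capacity, which is why the gap length depends on how quickly the host can feed the drive.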
Although once considered great, that 40 megabytes isn't much by today's PC storage standards. It
takes awfully large reels to pack a workable amount of information, and that's the chief disadvantage
of open reel tape. Tape reels are big and clumsy, and the drives match. When a single reel is more
than 10 inches across, fitting a drive to handle it inside a 5.25-inch drive is more than difficult.
Ten-inch reels themselves are massive; spinning (and stopping) them requires a great deal of torque,
which means large, powerful motors—again something incompatible with the compact necessities of
the PC. In fact, most open reel tape transports dwarf the typical PC; some look more like small
refrigerators.
Open reel drives also tend to be expensive because they are essentially low volume yet precision
machinery. Price increases with storage density. Today low density, 1600 bpi drives are available in
the PC price range (one vendor sells a unit for under $1000). But a high density open reel system still
costs more than a good, high speed PC—$3,000 and up. Unlike other PC peripherals, the price of
open reel tape has been stable for years. No breakthrough technologies are on the horizon to
revolutionize nine-track tape and its pricing.
On the positive side, age and mass can be virtues when it comes to system and data integrity. Because
of the low density used in recording, each flux transition on an open reel tape involves a greater
number of oxide particles, making it potentially more resistant to degradation (all other oxide
characteristics being equal). The big, heavyweight drives are generally sturdy, designed for industrial
use, and should last nearly forever when attached to a PC.
As a backup system alone, open reel tape is not much of a bargain, however. Other tape systems,
particularly the various cartridge formats, are less expensive and—according to their design
specifications—as reliable or even more reliable. For most people, cartridges also are easier to use.
Open reel tape excels as a data interchange medium, however. Almost any 1600 bpi tape is readable
on almost any open reel transport. Although block lengths and inter-block gaps may vary, these
differences are relatively easy to compensate for. Consequently, open reel remains the medium of
choice for shifting information between mainframe and minicomputers. For example, most mailing
lists are delivered on open reel tapes. An open reel transport opens this world to the personal
computer, allowing the interchange of megabytes of information with virtually any other system.
Although most open reel systems for PCs concentrate on the interchangeability of the tapes, they also
include provisions for making open reel backups. Think of the backup capability as a bonus rather
than the reason for buying an open reel system.

3480/3490/3590 Cartridges

Mainframe computer operators endured the inconveniences of open reel tape for more than 20 years
before an accepted successor appeared on the scene. A new tape system that's essentially a cross
between cartridges and open reel is replacing old fashioned open reel tapes in the role of backup
storage. Because of its newness, however, the system has not yet proven a successor to open reel tape
as an interchange medium.
Termed 3480 after the model number of the first IBM machine that used the new medium (introduced
in 1985), the system is based on cartridges that are little more than open reel tapes stuffed into a
protective shell. Later changes in the format spawned successors, 3490, 3490E, and 3590. No matter
the designation, the tape is still half an inch wide, and it runs through the drive much like open reel
tapes at a speed of two meters per second (about 79 inches per second). The drive mechanism pulls
the tape out of the cartridge; winds it onto a take-up spool; shuttles it back and forth to find and write
data; and rewinds it back into the cartridge when it is done. In effect, the cartridge is just an oddly
shaped all-enclosing reel that doesn't itself rotate. Not only can people more easily slide tape into
drives, but automatic mechanisms can locate tapes and load them. Such mechanisms are often called
jukeboxes because they work like the classic 1950s Wurlitzers that gave three plays for a quarter,
selecting the songs to play from an internal array of disks—complete with big windows so you could
stare in amazement at the mechanical wonder.
The data format on the tape has evolved through the years. The initial 3480 system doubled the
number of tracks used in open reel recording to 18. The newer 3490 system packs 36 tracks across the
width of a tape. The 3590 format puts 128 tracks on the half-inch tape.
In the prototype 3480 implementation, the 18 tracks are written in two parallel sets of 9 tracks
simultaneously, which doubles the data transfer speed and throughput of the system. In addition, the
recording density is higher than open reel tapes at 7700 bits per inch, increasing both capacity and the
data speed of the system. Even with this early system, a cartridge with less than a quarter the volume
of an open reel tape (3480 cartridges measure 4.75 x 4.25 x .75 inches) could hold much more data—
about 200 megabytes. In the 3490 system, a standard capacity cartridge holds about 400MB of
uncompressed data while an extended capacity cartridge twice the length (3490E) holds twice as
much, 800MB. With its hundred-plus tracks, a 3590 cartridge can store up to 10GB of uncompressed
data.
The disadvantage of these innovations is the price. These tape systems are meant for mainframes and
minicomputers. All such tape transports currently available are big ticket ($20,000 plus) products
designed for the mainframe market. Although one company made an early attempt to adapt 3480
cartridges to PC use (with a nonstandard format), no such products are currently offered.

Audio Cassettes


Introduced originally as a dictation medium, the audio cassette grew both up—into a stereophonic
music recording medium that spawned a market for equipment sometimes costing thousands of
dollars—and down—into the realm of cheap, portable recorders costing $10-20. These low-end
machines represent the cheapest way ever created to magnetically record information. Although
originally conceived solely for dictation, cassettes can record digital data using modem-like methods
that modulate bits onto audible tones. Low prices and ready accessibility made cassettes the choice of Stone
Age computer hobbyists for recording data and pushed cassettes into the commercial market as a
distribution medium for computer software, mostly for inexpensive home-style computers.
When the PC first came on the market, the cassette was seen as a viable storage alternative, at least
among the home and hobbyist computer markets, by industry watchers with eyeglasses as thick as
bathyscaph portholes. Even IBM caught cassette mania and elected to build a port for attaching a
cassette machine into every PC.
Little more than a year later the marketplace myopia improved, and the storage needs of the PC
showed the shortcomings of audio cassette technology adapted to data: slow speed and sequential
access. The audio modulation method of recording yielded a data rate about equivalent to a
1200-bit-per-second modem, and finding data on a tape took a long time or much guessing with your
finger on the fast forward button. These practical matters led to the cassette port being dropped from
the XT and all subsequent IBM computers. Among PCs, the cassette as a primary data storage device
is mostly of historical interest.

D/CAS

A few years ago tape evolved into a more compelling backup platform. Drive maker Teac developed a
new, high speed cassette transport aimed particularly at data storage. It abandoned the audio cassette
standard used by earlier systems and pegged its performance on par with higher priced cartridge-based
backup systems using a digital recording system. The result was called the digital cassette or D/CAS
for short.
The tape medium in the Teac D/CAS data cassettes uses a different compound than standard audio
tapes, a material with a higher coercivity. To mechanically distinguish D/CAS tapes from audio tapes
(and to prevent the use of audio tapes in D/CAS equipment), Teac added a huge coding notch on the
backbone of the D/CAS cassette. A matching tab in the tape drive locks out audio cassettes from the
digital mechanism.
The first Teac system wrote two tracks on each cassette, one track carrying data in each direction.
However, rather than requiring you to flip over the tapes, the drive mechanism was bi-directional and
automatically scanned both sides in sequence. Not only did you not need to flip over a tape in the
Teac system, you were prohibited from doing so. The asymmetrical placement of the identifying
notch absolutely precludes the use of the wrong side of a tape.
The first Teac D/CAS system could put a full 60MB on one tape. That was quickly increased to
160MB. In 1991, Teac introduced a new mechanism (their model number MT-2ST/F50) that pushed
capacity to 600MB. Using a standard SCSI-2 interface, it was able to move information onto tape at a
rate of up to 242K per second. Despite the low media cost and acceptable capacity, the D/CAS system
never proved popular, and Teac has discontinued manufacture of the drives.

DCC

When consumer electronics companies sought to give consumers a recordable digital medium, the two
leaders that joined forces to create the Compact Disc went separate directions. Sony Corporation, with
great experience in helical video recording, opted for a similar but miniaturized system that has come
to be known as Digital Audio Tape, discussed in the "Digital Data Standard" section later in this
chapter. Philips, the originator of the Compact Cassette, the most popular audio tape format ever, built
on its experience to create the DCC, short for Digital Compact Cassette.

Background

As an audio medium, the DCC had two unarguable strengths: simplicity and compatibility. Unlike the
DAT system that uses a rapidly spinning read/write head to achieve helical scanning, the DCC is a
linear tape system. The head does not move; only the tape moves across it. This design
not only eliminates moving parts but also economizes on servo electronics because the primary tape
alignment system is mechanical, a special guide integrated into the read/write head assembly. This
simplicity helps Philips keep the prices of the DCC systems substantially below that of DAT.
The trick behind this innovation is multi-track thin-film magnetic head technology. The DCC machine
records multiple tracks in parallel to increase the data rate. The tracks are necessarily narrow, the
miniaturization made possible by thin-film techniques—essentially evaporating and condensing the
coils of the magnetic poles in place.
DCC, as an audio medium, is also backwardly compatible with conventional audio cassettes. The
tapes you’ve made on your cassette recorder over the last 25 years will play back just fine on a DCC
recorder. Moreover, the DCC machine can make tapes that will play back digitally on themselves or
as analog audio through a conventional cassette machine. In effect, the DCC recorder and tapes bridge
between the analog and digital realms.
With these two great virtues, you might expect DCC to be a runaway success. Philips did. But like
DAT, acceptance of DCC in the audio marketplace has proven disappointing. Worse, while DAT
proved a connoisseur’s medium, the lower price of DCC branded it as lower quality. Moreover, to get the
data rate low enough to fit wide range audio on linear tape, the DCC system uses a compression scheme
Philips calls Precision Adaptive Sub-Band Coding, which (the company claims) reduces the required
amount of digital storage by a factor of four without affecting the quality of the sound. Nevertheless,
compression in any form is anathema to those with golden ears, making the DCC system about as
desirable to audiophiles as AM radio.
Repackaged as DCC Data, the same technology and medium hope to make their mark as a PC backup
system. With backward compatibility a non-issue in backup systems (cassette tape has had little
market penetration in the past), the prime selling point is price. The simpler drives and cassettes are
potentially less expensive than other tape backup media.


Format

The DCC cassette is remarkably similar to conventional audio cassettes. The exterior dimensions are
much the same, 4.35 x 2.5 x .375 inches (110.4 x 63.8 x 9.6 mm). Philips, however, redesigned the
interior mechanism of the cassette and added a metal cover that seals the cassette when not in the
drive and locks the tape hubs in place to prevent tape from spilling out. When you slide a DCC tape
into a drive, the cover automatically retracts. Because the tapes are meant only for one-sided
recording, there’s no need to flip them over and no need for them to be symmetrical. In fact, only the
bottom of a DCC tape has holes for the tape hubs. The top is flat and can be covered with a label. The
DCC tapes also incorporate a reversible write-protection switch instead of using the break-off tabs of
conventional audio cassettes.
The tape inside the cassette itself measures 3.78 millimeters wide, the same 0.150 inch as
conventional cassette tape. Philips reformulated the magnetic recording medium on the tape for the
higher density recording of the system. In addition, the backing is thinner than that of the most
popular cassette tapes.
The DCC system writes data in 16 parallel data tracks across the tape accompanied by two control
tracks that store file access information. The data tracks nestle between the auxiliary tracks, which are
closest to the two edges of the tape.
The system uses separate write and read heads, each with 18 tracks. The read heads use
magneto-resistive technology. Each write head creates a data track that measures 185 microns across.
The read head scans only a width of 70 microns. The narrower read width helps eliminate noise and
relaxes the precision needed in guiding the tape. By improving the tape guidance, tightening
tolerances, and reducing the writing track width by one-half, the second generation of DCC Data
machines is promised to fit 700MB across 36 tracks on a D120 tape.
The system writes on the data tracks with a flux density of 50,800 transitions per inch, so each flux
transition measures one-half micron long. On the auxiliary track, the DCC system writes at 6350
transitions per inch, each transition occupying a space 4 microns long on the tape.
The basic DCC tape speed is the same as that of audio cassettes, 1.875 inches per second (47.65 mm
per second). The typical operating speed of a DCC drive is double that rate. The raw reading speed of
the system is thus nominally 95.25 kHz on each track. Read together, the 16 tracks produce a raw
data rate of about 1.5 megabits per second.
Actual data throughput is lower because the DCC Data system uses 8/10 data coding which requires
10 flux transitions to encode 8 data bits. The result is an actual data throughput of about
150Kbytes/sec.
The nomenclature for tape length carries over from audio cassettes to DCC. The standard D120 tape
has the same playing time at the same speed as a C120 audio tape. Because the DCC system
typically operates at twice the tape speed as audio machines and uses only one side of the tape, a
D120 tape runs for about 30 minutes. In computer terms, you can back up your system, filling a tape
to capacity (about 300MB), in half an hour, for a backup speed of 10MB/min.
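These throughput figures can be cross-checked from the raw numbers given above. The arithmetic below uses the base cassette speed of 1.875 inches per second, which is the speed the per-track rate of about 95 kHz implies:

```python
# Cross-checking the DCC data-rate arithmetic from figures in the text,
# taken at the base cassette speed of 1.875 inches per second.

flux_per_inch = 50_800            # flux transitions per inch, per track
tape_ips = 1.875                  # base cassette tape speed
tracks = 16                       # parallel data tracks

per_track_hz = flux_per_inch * tape_ips            # 95,250 transitions/s
raw_bits_per_s = per_track_hz * tracks             # about 1.5 Mbit/s raw
data_bytes_per_s = raw_bits_per_s * (8 / 10) / 8   # 8/10 coding, 8 bits/byte

minutes_for_d120 = 300e6 / data_bytes_per_s / 60   # filling a ~300 MB tape
```

The result, roughly 150 KB/sec, fills a 300MB D120 in a bit over half an hour, consistent with the 10MB/min backup speed quoted above.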
To minimize errors, the DCC system uses several safeguards. The basic design of the system allows
for read-after-write verification in a single pass. For long term quality assurance, the data tracks are
protected by a two-layer cross-interleaved Reed-Solomon error-correction algorithm. Because the
long bit length of the auxiliary tracks reduces noise, they are protected by only a single-layer
Reed-Solomon algorithm.
The DCC system stores data in tape frames of 13,056 bytes, of which 8,192 bytes are actual data.
Error correction steals another 3,968 bytes of each frame, providing 48 percent redundancy. Each
frame also includes 128 bytes of system infor


Chapter 14: Input Devices


Input devices are the means by which you move information into your PC—the primary
means by which you interact with your personal computer. The various available devices
span an entire range of technologies, from the tactile to the vocal. Although they work in
different ways, all accomplish the same task: they enable you to communicate with your
computer.

■ Keyboards
■ Technologies
■ Capacitive
■ Contact
■ Touch
■ Key Layouts
■ QWERTY
■ Dvorak-Dealey
■ Control and Function Keys
■ PC 83-Key
■ AT 84-Key
■ Advanced 101-Key
■ Windows 104-Key
■ Ergonomic Designs
■ Convenience Functions
■ Typematic
■ RepeatKeys
■ BounceKeys, FilterKeys, and SlowKeys
■ StickyKeys
■ ToggleKeys
■ Electrical Function
■ Scan Codes
■ Host Interface
■ Software Commands
■ Host Interface
■ Compatibility
■ Connections
■ Mice
■ Technology
■ Mechanical Mice
■ Optical Mice
■ Buttons
■ Interfaces
■ Serial Mice
■ Bus Mice
■ Proprietary Mice
■ Protocols
■ Resolution
■ Customization
■ Working Without a Mouse
■ Trackballs
■ Switches
■ Ball Size
■ Handedness
■ Protocols
■ Resolution
■ Non-Ball Trackballs
■ Joysticks and Paddles
■ Technology
■ Interface
■ Touch Screens
■ Capacitive
■ LED Array
■ Pressure Sensitive
■ Resistive
■ Surface Acoustical Wave
■ 3D Controllers
■ Viewer Tracking
■ Mechanical Tracking Devices
■ Optical Tracking Devices
■ Electromagnetic Tracking Devices
■ Acoustic Tracking Devices
■ Light Pens
■ Digitizers
■ Pointers
■ Technology
■ Electromagnetic
■ Resistive
■ Magneto-Strictive
■ Acoustic
■ Resolution and Accuracy
■ Speed
■ Size
■ Standards
■ Scanners
■ Types
■ Drum Scanners
■ Flatbed Scanners
■ Hand Scanners
■ Video Scanners
■ Slide Scanners
■ Features
■ Color Versus Monochrome
■ Scanning Speed
■ Dynamic Range
■ Resolution
■ Transparency Adapters
■ Optical Character Recognition
■ Sheet Feeders
■ Electrical Interfacing
■ Application Interfacing
■ Operation
■ Pre-Scanning
■ Defining the Imaging Area
■ Dynamic Range Adjustments
■ Defining Scan Area
■ Setting Resolution


You don’t buy a PC as a big, full box that you can shake your data out of as if it were a
giant digital saltcellar. The lure of the PC is that you can fill it with what you want—your
programs, your data, your hopes, your dreams, your arcade games. The problem you face
is getting all that stuff into your PC.
Certainly you can download programs and files from online and copy disks. But if you
want to make your personal computer really personal, you need a way to fill it with your
personal thoughts, sketches, and ideas. Even at its worst, the infamous computer curse
"Garbage in, garbage out" presupposes that you have some way of dumping your own,
personal garbage into your PC. Even when you aspire higher, you face the same need.
After all, if your computer doesn’t have raw material to work on, it simply cannot do any
work. If you can’t tell it what to do, the computer can’t do anything.
The one needed element is the input device, a channel through which you can pass data
and commands to your PC. Absent a silico-cerebral mind link, that connection inevitably
involves some kind of mechanical device. Commands, data, and ideas have to be reduced
to physical form to exit your mind and enter your PC. The input device converts the
mechanical into the electronic form that your PC can understand.
The basic electro-mechanical interface is the switch. A computer can detect the state of a
switch by sensing the electrical flow through it (the switch is on if electricity flows, off if it
does not). Thus with a single switch, you could communicate with your PC exactly one
bit at a time—a daunting task if you want to create a multi-megabyte database.
You can speed the communications by employing several switches, a whole bank of
them. In fact, early computers were programmed exactly in that way—as are computers
today. However, instead of using old-fashioned toggle switches, today’s computers use
pushbuttons. Each button is assigned a code to send to the computer—a letter of the
alphabet or other symbol. The entire bank of switches is called a keyboard. The keyboard
remains the primary input device used by today’s PCs.
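The bank-of-switches idea reduces to a few lines of code: every pushbutton gets an entry in a lookup table, and pressing a key sends its code down the wire. The code values below follow the classic XT scan-code assignments, but treat the whole table as illustrative; a real keyboard reads its codes from a table in the controller's firmware.

```python
# A keyboard reduced to its essence: a bank of pushbutton switches,
# each assigned a code to transmit when pressed.
KEY_CODES = {
    "A": 0x1E,      # values follow the classic XT scan-code set,
    "B": 0x30,      # but any consistent assignment would work
    "ENTER": 0x1C,
}

def keystrokes_to_codes(pressed_keys):
    """Translate a sequence of keypresses into the codes the keyboard
    would transmit to the computer, one code per keystroke."""
    return [KEY_CODES[key] for key in pressed_keys]
```

Pressing A and then Enter, for instance, sends the two codes 0x1E and 0x1C to the computer, which looks up what symbols they stand for.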


Keyboards have shortcomings. The primary one is that they are relatively inefficient at
relaying spatial information to your computer; they send symbols. A number of
applications, however, depend on knowing where rather than what—moving a cursor, for
example. The computer knows what you want to move (the cursor); it just needs the
spatial information about where to put it. A whole menagerie of input devices has arisen
to improve on keyboards—mice, trackballs, joysticks, pens, and digitizing tablets.
As computers moved into the graphic realm with the aid of pointing devices, they also
developed the need to acquire huge blocks of graphic data from external sources, a means
of converting a physical (optical) image into electronic form. The scanner fills this need.

Keyboards

The primary input device for most computer systems is the keyboard, and until voice
recognition systems are perfected to the point that they can recognize continuous speech,
the dominance of the keyboard is not likely to change. Even then the keyboard will
probably remain unapproachable for speed and accuracy for years to come. The keyboard
also is more suited to data entry in open offices, airplanes, and anywhere your privacy is
not ensured.
When buying a new PC, the keyboard is the least of your worries. After all, the
manufacturer takes care of it for you. Nearly every desktop PC comes completely
equipped with a keyboard (otherwise the PC wouldn’t be much good for anything).
Notebook PCs have their keyboards completely built in. Moreover, keyboards are pretty
much all the same these days—or at least they all look the same. Over the last decade, the
key layout has become almost completely standardized, the one true keyboard design that
might have been ordained by God. With a desktop machine, you get 101 keys on a
surfboard-size panel that monopolizes your desktop or overflows your lap. With a
notebook PC, you’re stuck with whatever the computer maker thought best for you.
The default keyboard that comes with your PC is more variable than you might think,
however. Underneath all those keys you might find one or another exotic technology that
would seem best left to the realm of engineers. But the technology used by your
keyboard determines not only how it works but how long it works and how much you
will enjoy working with it. It may even influence whether keyboarding is a time of
pleasure or pain. The underlying differences are enough to make you consider casting
aside the (usually) cheap, default keyboard the maker of your desktop PC packed in the
box and getting something more suitable to your fingers and your work. When you
consider a notebook PC, a difference in keyboards may be enough to make you favor one
machine over another, particularly when the rest of the two systems are well matched.

Technologies

The keyboard concept—a letter for every pushbutton—is almost ancient, dating back to
the days of the first typewriter. The basic layout and function has changed little since the
last half of the 19th century. Even PC keyboards seem to have changed little. Look at
one, and you’ll see a design that has changed little, if at all, since 1987. No matter what
PC you buy today, you’re almost certain to get a keyboard that follows the now industry
standard design that gives you 101 or more keys to press arrayed across a board that fills
half your desktop.
All keyboards have the same function: detecting the keys pressed down by your fingers
and relaying that information to your computer. Even though two keyboards may look
identical, they may differ considerably in the manner in which they detect the motion of
your fingers. The technology used for this process—how the keyboard works
electrically—can affect the sturdiness and longevity of the keyboard. Although all
operate in effect as switches by altering the flow of electricity in some way, the way
those changes are detected has evolved into elaborate mechanisms.
Nearly every technology for detecting the change in flow of electricity has been adapted
to keyboards at one time or another. The engineer’s goal has been to find a sensing
mechanism that combines accuracy—detecting only the desired keystroke and ignoring
errant electrical signals—with long life (you don’t want a keyboard that works for six
words), along with the right "feel," the personal touch. In past years, keyboard designers
found promise in complex and exotic technologies like Hall-effect switches, special
semiconductors that react to magnetic field changes. The lure was the wonder of
magnetism—nothing need touch to make the detection. A lack of contact promised a
freedom from wear, a keyboard with endless life.
In the long run, however, the quest for the immortal keyboard proved misguided.
Keyboards rated for tens of millions of keypresses met premature ends with a splash
from a cup of coffee. The two most common designs in PCs are the capacitive and hard
contact keyboards.

Capacitive

When the PC was introduced, it inherited the keyboard technology used by its
predecessors—terminals and workstations. At the time, the basic switch had severe
shortcomings for heavy duty office use: it didn’t last very long when confronted with
long term use and environmental hazards. Even oxidation—an effect of ordinary
air—caused them to become unreliable. Consequently, IBM adapted a proven design that
sequestered the switches from the air. Instead of relying on the contacts of a switch to
change the flow of electricity, IBM opted to detect a change in capacitance.
Capacitance is essentially a stored charge of static electricity. Capacitors store electricity
as opposite static charges in one or more pairs of conductive plates separated by a
non-conductive material. The opposite charges create an attractive field between one
another, and the insulating gap prevents the charges from coming together and canceling
out one another. The closer the two charged plates are, the stronger the field and the more
energy can be stored. Moving the plates in relation to one another changes their capacity
for storing charge, which in turn can generate a flow of electricity to fill up the increased
capacity or drain off the excess charge as the capacity decreases.
These minute electrical flows are detected by the circuitry of a capacitive keyboard. The
small, somewhat gradual changes of capacity are amplified and altered so that they
resemble the quick flick of a switch.
Capacitive keyboards are generally built around an etched circuit board. Two large areas
of tin and nickel plated copper form pads under each switch station (in keyboard
terminology, each key is called a station). The pads of each pair are neither physically
nor electrically connected to one another. They act as the plates of a capacitor.
In the IBM capacitive keyboard design, pushing down any key on the keyboard forces a
circle of metalized-plastic down, separating a pair of pads that lies just below the key
plunger. Although the plastic backing of the circle prevents making a connection that
allows electricity to flow between the pads, the initial proximity of the pads results in a
capacity charge. Separating them causes a decrease in this capacitance—a change on the
order of 20 to 24 picofarads decreasing to 2 to 6 picofarads. The reduction of capacitance
causes the necessary small but detectable current flow in the circuitry leading to the pads.
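Those figures fall right out of the formula for a parallel-plate capacitor, C = ε0A/d: capacitance is proportional to plate area and inversely proportional to the gap between the plates. The sketch below uses hypothetical pad dimensions (not IBM's actual geometry) chosen so that widening the gap drops the capacitance from roughly 22 picofarads to under 4, in line with the ranges quoted above.

```python
EPSILON_0 = 8.854e-12  # permittivity of free space, farads per meter

def parallel_plate_capacitance(area_m2, gap_m):
    """C = epsilon_0 * A / d for an ideal air-gap parallel-plate capacitor."""
    return EPSILON_0 * area_m2 / gap_m

# Hypothetical pad geometry: a 5 mm x 5 mm pad pair whose gap widens
# from 0.01 mm to 0.06 mm when the key plunger separates the plates.
area = 5e-3 * 5e-3
c_at_rest = parallel_plate_capacitance(area, 0.01e-3)   # about 22 pF
c_pressed = parallel_plate_capacitance(area, 0.06e-3)   # about 3.7 pF
```

The keyboard circuitry only has to notice the tiny current that flows as the stored charge adjusts to the new, smaller capacity.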
Some compatible capacitive keyboards do the opposite of the IBM design. Pressing the
key pushes capacitive pads together and increases the capacitance. This backward
process has the same effect, however. It alters the flow of current in a way that can be
detected by the keyboard.
Capacitive keyboard designs work well. Most have rated lives of over 10,000,000
keypresses at each station. If they have a shortcoming it is that their sensing is indirect.
It’s like hooking up an intercom to listen in on a distant door bell. It works, but a direct
approach—moving the door bell itself—would be more efficient with less complication
and fewer things to go wrong.

Contact

The direct approach in keyboards is using switches to alter the flow of electricity. The
switches in the keyboard do exactly what all switches are supposed to do—open and
close an electrical circuit to stop or start the flow of electricity. Using switches requires
simpler (although not trivial) circuitry to detect each keystroke, although most
switch-based PC keyboards still incorporate a microprocessor to assign scan codes and
serialize the data for transmission to the system unit.
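The controller's job of turning closed switches into scan codes can be sketched as a sweep across a key matrix. The matrix dimensions and the row-major code numbering here are hypothetical stand-ins for a real keyboard's firmware tables.

```python
def scan_matrix(closed_switches, rows=8, cols=12):
    """Sweep a key matrix row by row, as a keyboard's microprocessor
    does many times a second, and assign a scan code to every closed
    switch. closed_switches is a set of (row, col) positions; the
    code numbering (row-major position + 1) is purely illustrative."""
    codes = []
    for row in range(rows):
        for col in range(cols):
            if (row, col) in closed_switches:
                codes.append(row * cols + col + 1)
    return codes
```

Each sweep yields the codes to serialize down the cable; by comparing one sweep with the next, the controller can tell which keys just went down and which just came up.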
Design simplicity and corresponding low cost have made switch-based keyboards
today’s top choice for PCs. These keyboards either use novel technology to solve the
major problem of switches—a short life—or just ignore it. Cost has become the dominant
factor in the design and manufacture of keyboards. In the trade-off between price and
life, the switch-based design is the winner.
Three switch-based keyboard designs have been used in PCs: mechanical switches,
rubber domes, and membrane switches.

Mechanical switches use the traditional switch mechanism, precious metal contacts
forced together. The switch under each keyboard station can be an independent unit that
can be individually replaced, or the entire keyboard can be fabricated as one assembly.
Although the former might lend itself to easier repair, the minimum labor charge for
computer repair often is higher than the cost of a replacement keyboard.
The contacts in a mechanical switch keyboard can do double duty, chaperoning the
electrical flow and positioning the keycaps. Keyboard contacts can operate as springs to
push the keycap back up after it has been pressed. Although this design is compelling
because it minimizes the parts needed to make a keyboard, it is not suited to PC-quality
keyboards. The return force is difficult to control and the contact material is apt to suffer
from fatigue and break. Consequently, most mechanical switch keyboards incorporate
springs to push the keycaps back into place as well as other parts to give the keyboard the
right feel and sound.
Rubber dome keyboards combine the contact and positioning mechanisms into a single
piece. A puckered sheet of elastomer—a stretchy, rubber-like synthetic—is molded to put
a dimple or dome under each keycap, the dome bulging upward. Pressing on the key
pushes the dome down. Inside the dome is a tab of carbon or other conductive material
that serves as one of the keyboard contacts. When the dome goes down, the tab presses
against another contact and completes the circuit. Release the key, and the elastomer
dome pops back to its original position, pushing the keycap back with it.
The rubber dome keyboard design was first used by IBM for the PCjr. Although the
original product was maligned for having small keys (derisively termed Chicklets), the
underlying mechanism has proven itself and is now widely used in full size keyboards.
One piece construction makes rubber dome keyboards inexpensive. Moreover, proper
design yields a keyboard with excellent feel—give of the individual domes can be
tailored to enable you to sense exactly when the switch makes contact. A poor design,
however, makes each keypress feel rubbery and uncertain.
Membrane keyboards are similar to rubber domes except they use thin plastic
sheets—the membrane—printed with conductive traces rather than elastomer sheets. The
contacts are inside dimples in the plastic sheets. Pressing down on a key pinches the
dimples together, closing the switch contact. The membrane design often is used for
keypads to control calculators and printers because of its low cost and trouble-free life.
The materials making contact can be sealed inside the plastic, impervious to harsh
environments. By itself, the membrane design makes a poor computer keyboard because
its contacts require only slight travel to actuate. However, an auxiliary key mechanism
can tailor the feel (and key travel) of a membrane keyboard and make typing on it
indistinguishable from working with a keyboard based on another technology.

Touch

Today the principal dividing line between keyboards is not technology but touch—what
typing actually feels like. A keyboard must be responsive to the touch of your
fingers—when you press down, the keys actually have to go down. More than that,
however, you must feel like you are typing. You need tactile feedback, sensing through
your fingers when you have activated a key.


The most primitive form of tactile feedback is the hard stop—the key bottoms out and
stops moving at the point of actuation. No matter how much harder you press, the key is
unyielding, and that is the problem. To assure yourself that you are actuating the key, you
end up pressing harder than necessary. The extra force tires you out more quickly.
One alternative is to make the key actuate before the end of key travel. Because the key
is still moving when you realize that it registered your keystroke, you can release your
finger pressure before the key bottoms out. You don’t have to expend as much effort, and
your fingers don’t get as tired.
The linear travel or linear touch keyboard requires that you simply press harder to push a
key down. In other words, the relationship between the displacement of the key and the
pressure you must apply is linear throughout the travel of the key. The chief shortcoming
of the linear touch keyboard is that your fingers have no sure way of knowing when they
have pressed down far enough. Audible feedback, a click indicating that the key has been
actuated, can help, as does the appearance onscreen of the character you typed. Both slow
you down, however, because you are calling more of your mind into play to register a
simple keystroke. If your fingers could sense the actuation of the key themselves, your
fingers could know when to stop reflexively.
Better keyboards provide this kind of tactile feedback by requiring you to increase
pressure on the keyboard keys until they actuate and then dramatically lowering the force
you need to press down farther until you reach the limit of travel. Your fingers detect the
change in effort as an over-center feel. Keyboards that provide this positive over-center
feel are generally considered to be the best for quick touch typing.
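The difference between linear and over-center touch is easiest to see as a force-versus-displacement curve. The figures below (a 60-gram peak at 2 millimeters of a 4 millimeter travel) are illustrative, not the specification of any real switch.

```python
def tactile_key_force(displacement_mm, actuation_mm=2.0, travel_mm=4.0,
                      peak_gf=60.0, dropoff_gf=25.0):
    """Toy force curve for an over-center key switch: the force your
    finger must apply climbs to a peak at the actuation point, falls
    off sharply so you feel the keystroke register, then climbs again
    toward bottoming out. All numbers are illustrative."""
    if displacement_mm <= actuation_mm:
        return peak_gf * displacement_mm / actuation_mm
    past = (displacement_mm - actuation_mm) / (travel_mm - actuation_mm)
    return dropoff_gf + (peak_gf - dropoff_gf) * past
```

A linear-touch switch, by contrast, would be just the first branch extended over the whole travel: force proportional to displacement, with no drop to tell your finger that anything happened.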
A spring mechanism, carefully tailored to abruptly yield upon actuation of each key, was
the classic means of achieving a tactile feel and could be adapted to provide an audible
"click" with every keypress. The spring mechanism also returns the key to the top of its
travel at the end of each keystroke. The very first PC keyboards were elaborate
constructions

Winn L. Rosch Hardware Bible, Electronic Edition, Chapter 15

Chapter 15: The Display System


Your PC’s Display System allows you to see exactly what your PC is doing as it works.
Because it gives you instant visual feedback, the display system makes your PC
interactive. The display system also affects the speed of your PC and your pleasure (or
pain) in using your machine. PCs use a number of different technologies in creating their
displays, and the choice determines what you see, how sharply you see it, and how
quickly.

■ Background
■ Teletype Output
■ Video Terminals
■ BIOS Support
■ Character Technologies
■ Character Mapping
■ Character Boxes
■ Video Attributes
■ Video Pages
■ Two-Dimensional Graphics
■ Block Graphics
■ Bit-Mapped Graphics
■ Vector Graphics
■ Resolution
■ Graphic Attributes
■ Color Planes
■ Color Coding
■ Color Spaces
■ Color Mapping
■ Graphic Commands

http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/wrh15.htm (1 de 12) [23/06/2000 05:57:55 p.m.]
■ Three-Dimensional Graphics
■ Tessellation
■ Texture Mapping
■ Depth Effects
■ Double Buffering
■ Stereoscopic Display Systems
■ Video Overlay
■ Video Capture
■ Image Compression
■ Filters and Codecs
■ JPEG
■ MPEG
■ Signals
■ Scanning
■ Synchronizing Signals
■ Retrace
■ Blanking
■ Front and Back Porches
■ Vertical Interval
■ Interfacing
■ Drivers
■ Application Program Interfaces
■ Windows GDI
■ DirectX
■ OpenGL
■ ActiveMovie

15

The Display System

Seeing is believing. If you couldn’t see the results of your calculations or language manipulations, the
personal computer would be worthless as a tool. You need some way of viewing the output of the
computer system to know what it has done and why you’re wasting your time feeding it data. Today’s
choice for seeing things that come from your computer is the video display, sort of like a television
that substitutes a cable to your CPU for an antenna.
Ground zero in a modern PC display is called VGA, named after IBM’s pioneering Video Graphics
Array that was introduced in 1987. This little bit of history remains the default mode of nearly every
PC video system. When you can’t get anything to work and your Windows-based PC pops up in safe
mode, VGA is what you see.
Although it’s the minimum, it’s still pretty good. The VGA system is actually able to create better
images than any television set. And that’s just for starters. The best display systems put more than six
times as much detail on your screen.

Background

PC displays were not always this good—and they are not always as good as they can be. The roots of
the high-resolution screen with vivid, moving 3D graphics are humble, indeed. They reach back to the
uncertain first days of computers—a time before monitors, even before monitor technology was
developed. Even today, when stripped of its software, a PC display system is a homely thing, designed
for nothing more than pasting white text on a black screen—or, as was the case with the first
PCs—green on black. On its own, your PC thinks in text alone, and even that not in a very aesthetic
way. The words it generates have the same old clunky monospaced characters as an old-fashioned
typewriter.
Fortunately, PC display systems are amenable to dressing up, loading software that gives them
spectacular graphic abilities. The key to that is the graphic environment, part of today’s modern
operating systems. The software—actually, layer upon layer of it—tells your PC how to paint pictures
and do a job so good that Rembrandt would be amazed (if just the PC alone wouldn’t do it for a 17th
Century artist).
In this tour of the technologies that make your PC's display system work, we’ll work our way through
the layers, starting out at the bottom basics built into your PC, and follow a historic path through the
layers to the techniques that will let your PC display top quality still images and video.

Teletype Output

The starting point for PC display systems must be the computer. When engineers ushered in Harvard
Mark I in 1943, there was no such thing as a computer monitor. Television itself was little more than a
bluish dream in the minds of broadcasters, barely off to a flickery black and white start (the first
commercial licenses went into effect in July 1941) then put on hold for the years of World War II. The
first computers were mechanical devices that had no ports or even electrical signals for linking up to
embryonic TVs. Consequently, these first data processing machines shared the same output
technology that was used by their predecessor, the mechanical adding machine—printed paper. The
telegraph industry had long before figured out how to actuate the keys of a typewriter to electrical
control, creating the first electrical printer called the teletype.
Developed to convey words and numbers across continents, the teletype took electrical codes and
converted them to keystrokes that printed characters on paper. The classic teletype merely relayed
keystrokes made on one keyboard to the distant printer, sort of like a stretched, trans-continental
typewriter. The computer added a novel twist—a machine created the keystrokes from scratch instead
of merely registering the finger presses of a human. These early computers fed characters to the
teletype printer the same way as if they had begun at some different keyboard—one character at a
time in a long series.
Teletypes had a long association with early computers. Even today, the data that’s sent by a computer
to its output device as a string of printable characters is termed teletype output. The character string, if
converted to the correct code, would run a mechanical teletype happily through reams of coarse paper.

Video Terminals

Although the teletype has reached a status somewhere between endangered species and museum
piece, the teletype output method of data transmission and display still does service to today’s high
tech toys. Instead of hammering away at paper, however, these machines send their character strings
to the electronic equivalent of the teletype, the computer terminal. These terminals are often called
Video Data Terminals (sometimes Video Display Terminals) or VDTs because they rely on video
displays to make their presentations to you. They are terminal because they reside at the end of the
communications line, in front of your eyes.
A terminal at its most rudimentary is the classic dumb terminal. This device is mentally challenged
not only by its lack of processing abilities (which we noted in Chapter 1, "Background") but also in
the way it displays what you see. It puts each character on its screen exactly as it is received through
the umbilical cable linking it to its computer host. It’s a teletype printing on a phosphor coated screen
instead of paper. The refinements are few—instead of rattling off the edge of the paper, a too long
electronic line more likely will "wrap" or scroll down to the line below. The terminal never runs out of
paper—it seemingly has a fresh supply of blank screen below that rolls upward as necessary to receive
each additional line. Alas, the output it generates is even more tenuous than the flimsiest tissue and
disappears at the top of the screen, perchance never to be seen again.
The brains in a smart terminal, on the other hand, allow it to recognize special commands for
formatting its display and may even be able to do some computer-like functions on its own. In fact,
today’s highly regarded Network Computer is little more than a smart terminal with a more powerful
processor and a new name meant to erase old, bad memories. Despite its brain power, the smartest of
terminals (and even NCs) are often relegated to working as ordinary dumb terminals and simply
relaying characters and commands to their screens.
A few other characteristics that distinguish the operation of a mechanical teletype are carried over into
the display of teletype output on video terminals. The paper on which the teletype prints moves in but
one direction. Neither the paper nor the output of the teletype ever goes backwards. Like a stock
ticker, the teletype merely churns out an unending string of text. The teletype cannot type over
something it did before, and it cannot jump ahead without patiently rolling its paper forward as if it
has printed so many blank lines.

In the electronic form of the computer terminal, the teletype method of text handling means that when
one character changes on the screen, a whole new screen full of text must be generated and sent to the
terminal. The system cannot back up to change the one character, so it must rush headlong forward,
reworking the whole display along the way.
Mammoth primeval computers and rattling teletypes might seem to share little in common with the
quiet and well-behaved PC sitting on your desk. The simplest of programs, however, still retain this
most primitive way of communicating with your video screen. They generate characters and send
them one by one to the video display, only instead of traveling across the globe, the text merely
shuffles from one place in memory to another inside the machine. These programs in effect operate as
if the video system of your computer was the screen of a terminal that mimics an age-old teletype.

BIOS Support

This method of display is understandably often called a teletype display, and it is all that’s required to
be built into a PC. Teletype display technology is a vestige of PC ancestry that’s used only by
rudimentary programs—and sophisticated software on which the programmers have shirked
responsibility for making things look better. Teletype-type output is, however, the highest level of
support provided by the system BIOS in most PCs and is thus required to make a PC a PC.
The basic PC BIOS gives several layers of teletype output. In the most primitive, a program must load
one character at a time into a microprocessor register, issue a video interrupt—010(Hex)—and wait
while the microprocessor checks where to put the character (a several step process in itself), pushes
the character into the appropriate place in memory, and finally returns to the program to process the
next character. The most advanced teletype mode lets a program put an entire line of text on the
screen through a similar, equally cumbersome process.
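The teletype behavior the BIOS mimics can itself be mimicked in a few lines of code: each character lands at the cursor, the cursor advances and wraps at the right edge, and the whole display scrolls upward, losing its top line, once the cursor falls off the bottom. This is a toy model of the behavior, not the BIOS code itself.

```python
class TeletypeScreen:
    """Toy model of BIOS teletype output: characters appear at the
    cursor, which advances, wraps at the right edge, and scrolls the
    screen up (discarding the top line) past the bottom row."""
    def __init__(self, cols=80, rows=25):
        self.cols, self.rows = cols, rows
        self.lines = [""]   # the visible screen, top to bottom

    def write(self, text):
        for ch in text:
            if ch == "\n" or len(self.lines[-1]) == self.cols:
                self.lines.append("")           # move to a fresh line
                if len(self.lines) > self.rows:
                    self.lines.pop(0)           # scroll: top line is lost
            if ch != "\n":
                self.lines[-1] += ch
```

Write enough lines and the earliest text scrolls off forever, just as teletype output vanishes off the top of a terminal screen.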
Because of this software overhead, the actual speed of teletype displays depends on the available
microprocessor power. The faster the processor, the quicker the software runs, and the snappier the
onscreen display.
In basic teletype mode, characters are written on the screen from left to right, from screen top to
bottom, merely scrolling after each line is full or ended with a carriage return. More advanced display
technologies are able to write anywhere on the monitor screen using formatting instructions much as
smart terminals do. For example, commands in the PC BIOS let your programs locate each character
anywhere on the screen.

Character Technologies

The most notable aspect of teletype technology is that it is character-oriented. The smallest unit of
information it deals with is a text character. At the times when your PC steps back to this technology
of yesteryear—for example, the first few moments of booting up, before it spins its disks to load the
basic code of your operating system—your computer lets you see its world only in terms of letters and
numbers.

Character Mapping

The technology underlying these character-based displays is termed character mapping. The name
refers to the character map, a special range of addresses that’s sometimes called screen memory or
display memory. The memory of the character map is reserved for storing the characters that will
appear on the screen. Simple programs, like your PC’s boot-up BIOS routines, write text on the screen
by pushing bytes into the proper places in that memory. Just as a street on a roadmap corresponds to
the location of a real street, each byte of display memory corresponds to a character position on the
screen.
The most common operating mode of the character-mapped display systems used by PCs divides the
screen into a matrix (essentially a set of pigeon holes with each hole corresponding to one position on
the screen) that measures 80 characters wide and 25 high. To display a character on the screen, a
program loads the corresponding code into the memory location associated with its matrix cell. To put
the image on the screen, the display system reads the entire matrix, translates it into a serial data
stream that scans across the monitor screen, and moves the data to the video output. In other words, it
creates the exact bit pattern that will appear on the screen on the fly, computing each nanosecond of
the video signal in real time. From there, the signal is the monitor’s problem.
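The arithmetic of a character map is simple enough to write down. In the 80 by 25 color text mode, each screen position occupies two bytes of display memory, a character code followed by an attribute byte, starting at the base address reserved for color text. The base value below is the conventional one for color text; the formula is what matters.

```python
SCREEN_BASE = 0xB8000   # conventional base of color-text screen memory

def char_cell_address(row, col, cols=80):
    """Memory address of the character byte for a given screen cell in
    a character-mapped text display. Each cell takes two bytes: the
    character code, then its attribute byte."""
    return SCREEN_BASE + (row * cols + col) * 2
```

Putting a character at row 1, column 0 thus means writing a single byte 160 bytes past the base of screen memory, which is why writing to this memory directly is so fast.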
For your programs, writing characters to the screen is simply a matter of writing directly to screen
memory. Consequently, this display technique is often called direct writing. It is the fastest way to put
information on a PC screen.
Once an advanced operating system like Windows loads, however, your PC steps away from character
mapping. The operating system imposes itself between your programs and the BIOS. The operating
system captures the characters your text oriented programs attempt to fling directly at the screen. The
operating system can then re-compute the map, making it larger or smaller and, in the latter case,
moving it to a designated area of the screen. The need to capture and re-compute slows down the
speed of the character-mapped display system and can make it the slowest way to write to your PC’s
display.
Character mapping is more versatile than teletype technology. Programs can push characters into any
screen location in any order that they please—top, bottom, left, or right, even lobbing one letter atop
another, overwriting the transitory existence of each. This versatility gives character mapping its
native mode speed. Screen updates occur quickly because the system has direct access to the screen
and need not go through the multiple steps the BIOS requires. Moreover, only the character (or
characters) needing to be changed have to be pushed into place. Once a character has been pushed into
the display memory matrix, it stays there until changed by the program that put it there—or any other
software that reaches into that area of memory.
In the PC scheme of things, programs may handle character mapping in either of two ways, through
the BIOS or by writing to screen memory directly.
When using the BIOS, writing a single character is a two-step process: one BIOS command lets a
program specify the location to start writing anywhere on the screen, and a second command can then
write a character in that position. As with teletype displays, however, BIOS-mediated character
mapping suffers the speed penalty of software overhead. Where direct writing takes only a few

http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/wrh15.htm (6 de 12) [23/06/2000 05:57:55 p.m.]


Winn L. Rosch Hardware Bible, Electronic Edition, Chapter 15

commands to display each character, BIOS-mediated character mapping takes dozens.
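The two-step BIOS sequence can be modeled in ordinary C against a simulated screen. The function names and the array standing in for video memory are illustrative only; on real hardware the two steps are INT 10h calls (set cursor position, then write character), not C functions.

```c
#include <stdint.h>

#define COLS 80
#define ROWS 25

static uint8_t screen[ROWS][COLS];   /* stand-in for the text screen   */
static int cur_row, cur_col;         /* the BIOS's current cursor spot */

/* Step one: tell the BIOS where on the screen to write. */
void bios_set_cursor(int row, int col)
{
    cur_row = row;
    cur_col = col;
}

/* Step two: write a character at the previously set position. */
void bios_write_char(uint8_t ch)
{
    screen[cur_row][cur_col] = ch;
}
```

Putting an 'A' at row 4, column 10 thus takes two calls where direct writing needs only one store.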


When direct writing, programs simply move the code assigned to a character to the memory location
corresponding to that character’s screen position—a one-step process that requires only one
microprocessor instruction. This technique makes only one demand from programs: they need to
know the exact location of each screen memory address. For all applications to work on all PCs, the
addresses used by each system must be the same—or there needs to be some means of determining
what addresses are used. The designers of the first PC reserved two blocks of addresses (one for color
text, one for monochrome) in High DOS Memory for holding characters for screen memory, but they
refused to make them an official standard. In their vision, only the BIOS was supposed to be used to
put characters into video memory. Software writers, however, found that the only way to get
acceptable speed from their software was to use this character-mapped mode. The industry’s reliance
on these addresses made them into unofficial standards with which no manufacturer bothers to
tamper. It also complicated the lives of the programmers writing new operating systems—capturing
direct writes to memory is more complex than intercepting BIOS calls. This problem in capturing
these direct writes underlies the slow and unsatisfying way early versions of Windows (3.1 and
before) displayed old DOS programs.
In basic text modes, your PC uses one set of screen memory addresses when it is operating in color
and the other set when in monochrome. To determine which mode your system is currently using, the
PC BIOS provides a special flag—called the video mode flag, although originally termed the video
equipment flag—located at absolute memory location 0463(hex).
When the video mode flag is set to 0D4(hex), your system is running in color and the chain of
addresses starting at 0B8000(hex) is used for text screen memory. In monochrome, the flag is set to
0B4(hex) to indicate the use of addresses starting at 0B0000(hex). For compatibility reasons, all
newer PC video systems are also capable of operating through these same addresses even though they
may store additional video information elsewhere. These address ranges are off limits to programs
seeking general storage and to the BIOS code of expansion boards.
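Direct writing reduces to a single store once the address is computed. The sketch below models the color text buffer at 0B8000(hex) as a plain array so it can run anywhere; the offset arithmetic (two bytes per cell, eighty cells per row) is the real mechanism.

```c
#include <stdint.h>
#include <stddef.h>

#define COLS 80
#define ROWS 25

/* Stand-in for the color text buffer; on real hardware this would be
   a pointer to physical address 0xB8000 (0xB0000 in monochrome). */
static uint8_t vram[ROWS * COLS * 2];

/* Each cell takes two bytes: the character code in the even byte,
   its attribute in the odd byte that follows. */
size_t cell_offset(int row, int col)
{
    return ((size_t)row * COLS + (size_t)col) * 2;
}

void direct_write(int row, int col, uint8_t ch, uint8_t attr)
{
    size_t off = cell_offset(row, col);
    vram[off]     = ch;     /* character code */
    vram[off + 1] = attr;   /* attribute byte */
}
```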

Character Boxes

In text modes, the display memory addresses hold codes that have nothing to do with the shapes
appearing on the monitor screen except as a point of reference. The actual patterns of each character
that appears on the screen are stored in a special ROM chip called the character ROM that’s part of
the video circuitry of the computer. The code value that defines the character is used by the video
circuitry to look up the character pattern that matches it. The bit pattern from the character ROM is
scanned and sent to the screen to produce the final image.
Modern display adapters allow you to download your own fonts (typefaces) into onboard RAM that’s
reserved from the same block that would serve as the character map. These downloaded fonts can be
used as if they were located in ROM with the same ease of manipulation as ROM-based fonts.
Downloaded fonts appear just the same whether pushed on the screen through the teletype or
direct-access technique.
Each onscreen character is made from an array of dots, much like the text output of a teletype or dot
matrix printer. PC and display adapter manufacturers use several video standards to build individual
characters out of different size dot arrays. The framework in which the dots of an individual character
are laid out, called the character box, is a matrix like a crossword puzzle. The character box is
measured by the number of dots or cells comprising its width and its height. For example, Figure 15.1
shows a series of characters formed in character boxes measuring 9 by 15 cells.
Figure 15.1 Characters each formed in a 9x15 cell box.

The text modes used by various early display standards all had their own, distinctive character boxes.
The standard Video Graphics Array (VGA) text screen uses a 9x16 character box. Each character
takes up a space on the screen measuring nine dots wide and sixteen dots high. Other operating modes
and display systems use character boxes of different sizes. Earlier standards include Monochrome
Display Adapter, which used a character box measuring 9x14; Color Graphics Adapter, which used
8x8; and the Enhanced Graphics Adapter, 8x14.
The last vestige of character mode display technology that remains under Windows 95 is its DOS box.
You can select the height and width of the character box used in the Windows 95 DOS box to adjust
the size of the windows in which your DOS applications run in text mode. Windows 95 comes with a
selection of character box sizes to choose from; the Microsoft Plus! utility adds several more.
You can change the size of the character box in windowed DOS mode from the tool bar on your DOS
window or from the properties screen for your application. Make your selection from the leftmost
entry on the tool bar. Click on the down arrow, and Windows will show you the character box sizes
available to you with a display like that in Figure 15.2.
Figure 15.2 Selecting a character box for a windowed DOS box.

The size of the character box does not exactly describe how large each character is or how many dots
are used in forming it. To improve readability, individual characters do not necessarily take up the
entire area that a character box affords. For instance, text characters on most monochrome displays
keep one row of dots above and one below those used by each character to provide visible separation
between two adjacent lines of text on the screen.

Video Attributes

The character-mapped displays of most PC video systems do not store each letter adjacent to the next.
Instead, each onscreen character position corresponds to every other byte in screen memory; the
intervening bytes are used as attribute bytes. Even numbered bytes store character information; odd
bytes, attributes.
The attribute byte determines the highlighting or color of the displayed character that’s stored in the
preceding memory byte. The codes used in monochrome and color displays are different.
Monochrome characters are allowed the following attributes: normal, highlighted (brighter onscreen
characters), underlined, and reverse video characters (dark on light instead of the normal light on
dark). The different attributes can be combined, although in the normal scheme of things highlighted
reverse video characters make the character background brighter instead of highlighting the character
shape itself. These monochrome display attributes are listed in Table 15.1.

Table 15.1. Monochrome Display Attributes

Byte value Attribute


00 Non-display
01 Underline
07 Normal
09 Intensified underline
0F Intensified
71 Reverse video underline
77 Reverse video
79 Reverse video intensified underline
7F Reverse video intensified
81 Blinking underline
87 Blinking normal
89 Blinking intensified underline
8F Blinking intensified
F1 Blinking reverse video underline
F7 Blinking reverse video
F9 Blinking intensified reverse video underline
FF Blinking intensified reverse video

Color systems store two individual character hues in the attribute byte. The first half of the byte (the
most significant bits of the digital code of the byte) codes the color of the character itself. The latter
half of the attribute (the least significant bits) codes the background color. Because four bits are
available for storing each of these colors, this system can encode 16 foreground and 16 background
colors for each character (with black and white considered two of these colors). In normal operation,
however, one bit of the background color code indicates a special character attribute—blinking. This
attribute allows any color combination to blink, but also cuts the number of hues available for
backgrounds in half (to eight colors—all intensified color choices eliminated). When you or your
software need to be able to display a full 16 background colors, a status bit allows the character
flashing feature to be defeated. Color display attributes are shown in Table 15.2.

Table 15.2. Color Display Attributes

Nibble value Foreground color Background color Flashing


0 Black Black No
1 Blue Blue No
2 Green Green No
3 Red Red No
4 Cyan Cyan No
5 Magenta Magenta No
6 Brown Brown No
7 Light gray Light gray No
8 Dark gray Black Yes
9 Bright blue Blue Yes
A Bright green Green Yes
B Pink Red Yes
C Bright cyan Cyan Yes
D Bright magenta Magenta Yes
E Yellow Brown Yes
F White Light gray Yes
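The attribute byte itself is just the two nibbles of Table 15.2 packed together, with the top bit of the background nibble doing double duty as the blink flag. A minimal helper (the function name is my own) shows the packing:

```c
#include <stdint.h>

/* Pack a color text attribute byte: bits 0-3 hold the foreground
   color, bits 4-6 the background color, and bit 7 the blink flag
   (which is why only eight background colors remain when blinking
   is enabled). */
uint8_t make_attr(uint8_t fg, uint8_t bg, int blink)
{
    return (uint8_t)((blink ? 0x80 : 0x00) | ((bg & 0x07) << 4) | (fg & 0x0F));
}
```

For example, make_attr(0xE, 0x1, 0) yields 1E(hex), yellow on blue, and make_attr(0x7, 0x7, 1) yields F7(hex), the blinking reverse video entry of Table 15.1.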

Because each character on the screen requires two bytes of storage, a full screen of 80 columns by
25 rows of text (a total of 2000 characters) requires 4000 bytes of storage. In the basic PC
monochrome video system, 16 kilobytes are allotted to store character information. The basic (and
basically obsolete) color system reserved 64 kilobytes for this purpose.

Video Pages

The additional memory does not go to waste, however. It can be used to store more than one screen of
text at a time, with each separate screen called a video page. Either basic video system is designed to
quickly switch between these video pages so that onscreen images can be changed almost instantly.
Switching quickly allows a limited degree of animation. The technique is so useful that even today’s
most advanced 3D graphics boards use it, although with pictures instead of text.
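The page arithmetic follows directly from the figures above: one 80 by 25 page occupies 4000 bytes, so the 16 kilobyte monochrome allotment holds four complete pages and the 64 kilobyte color reservation sixteen.

```c
/* One 80x25 text page: 80 * 25 cells, two bytes each = 4000 bytes. */
#define PAGE_BYTES (80 * 25 * 2)

/* How many complete video pages fit in a given memory allotment. */
int pages_in(long region_bytes)
{
    return (int)(region_bytes / PAGE_BYTES);
}
```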

Two-Dimensional Graphics

Graphics are such a central part of the displays of all modern computers that it’s hard to imagine PCs
without the ability to paint picture perfect images on their screens. Yet the first PCs relegated graphics
to the afterthought department. Simply adding color required a revolutionary new display system.
The only graphics available on the first PCs were block graphics, akin in more than name to the first
toys of toddlers, mere playthings that you wouldn’t want to use for serious work. The first real PC
graphic display system made television look good, which in any other context would be an
insurmountable challenge to the imagination. The foundation of the new display systems—called
bit-mapped graphics—proved powerful enough that in a few years PC display quality not only
equaled that of television but PCs were used in making television images. The modern PC graphic system
has taken a further step beyond and attempts to build a real (or real-looking) three-dimensional reality.
The development of PC graphics is best described as accumulation rather than evolution. Each new
system builds upon the older designs, retaining full backward compatibility. Even the latest 3D
graphic systems retain the ability to work with the first rudimentary block graphics. Just as you share
genes with some of the lowest forms of life, like bacteria, planaria, and politicians, your sleek new PC
comes complete with state-of-the-art 1981 graphic technology.

Block Graphics

You don’t need a lot of computer power and an advanced operating system to put graphics on your
screen, which is good because in the early years PCs didn’t have a lot of power or decent operating
systems. In fact, even teletypes that are able only to smash numbers and letters on paper can print
primitive graphic images. By proper selection of characters, standing far from printouts, and
squinting, you could imagine you saw pictures in some printouts (a triangle of text might vaguely
resemble a Christmas tree, for example). Some people still go to elaborate lengths to create such
text-based images to pack into their e-mail. But such text-based images could hardly be confused with
photographs unless your vision was quite bad, your standards quite low, or your camera very peculiar.
When PCs operate like teletypes, their graphic output faces the same limitations as
printouts—characters can only approximate real world images. To try to improve matters, the
designers of the original PC took advantage of the extra potential of storing characters as byte values.
Because one byte can encode 256 different characters, and the alphabet and other symbols total far
short of that number, the first PC’s designers assigned special characters to some of the
higher-numbered bytes in its character set. Beyond dingbats and foreign language symbols, a few of
the extra characters were reserved for drawing graphic images from discrete shapes and patterned
blocks that partly or entirely fill in the character matrix.
When your PC is operating in text mode, such as in the DOS box, you can still create rough graphic
images by strategically locating these character blocks on the screen so that they form larger shapes.
Other extra characters comprise a number of single and double lines as well as corners and
intersections of them to draw borders around text areas. The characters are building blocks of the
graphics images, and consequently this form of graphics is termed block graphics. Figure 15.3 shows
the block graphic characters in the standard PC character set.
Figure 15.3 Standard PC block graphic characters.

To a PC display system, block graphics are considered text and are handled exactly like ordinary text
characters. All of the text attributes are available to every character of block graphics, including all of
the available text colors, highlighting, and inverse video characteristics. The characters are also
pushed onto the screen in text mode, which gives them high speed potential, but they are available
only in text mode or Windows’ DOS box. Because they use the high order ASCII characters—foreign
territory for most seven-bit e-mail systems—you cannot ordinarily use them for images in ordinary
e-mail.
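A sketch of block graphics in practice: drawing a double-line border from the line and corner characters of the standard PC character set (code page 437) into a simulated text screen. The codes used (0xC9, 0xBB, 0xC8, 0xBC for the corners, 0xCD and 0xBA for the lines) are the actual character set values; the buffer is merely a stand-in for text-mode screen memory.

```c
#include <stdint.h>

#define COLS 80
#define ROWS 25

static uint8_t text_screen[ROWS][COLS];

/* Draw a double-line box using the CP437 line-drawing characters. */
void draw_box(int top, int left, int bottom, int right)
{
    text_screen[top][left]     = 0xC9;   /* top-left corner     */
    text_screen[top][right]    = 0xBB;   /* top-right corner    */
    text_screen[bottom][left]  = 0xC8;   /* bottom-left corner  */
    text_screen[bottom][right] = 0xBC;   /* bottom-right corner */
    for (int c = left + 1; c < right; c++) {
        text_screen[top][c]    = 0xCD;   /* horizontal double line */
        text_screen[bottom][c] = 0xCD;
    }
    for (int r = top + 1; r < bottom; r++) {
        text_screen[r][left]   = 0xBA;   /* vertical double line */
        text_screen[r][right]  = 0xBA;
    }
}
```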

Bit-Mapped Graphics

Windows marked the transition of the primary operating mode of PC display systems. From
character-based displays, Windows ushered in the age of the bit-mapped display. Bit-mapped graphics
improve the poor quality of block graphics by making the blocks smaller. The smaller the blocks
making up an image, the finer its grain and the more detail it can show. Physical aspects of the display
system impose a distinct and unbreakable limit on how small each block can be—the size of the
individual dots that make up the image on the video screen. The sharpest and highest quality image
that could be shown by any display system would individually control every dot on the screen.
These dots are often called pixels, a contraction of the descriptive term picture element. Like atomic
elements, pixels are the smallest building blocks from which known reality can be readily constructed.
The terms dot and pixel are often used as synonyms but their strict definitions are som

Winn L. Rosch Hardware Bible, Electronic Edition, Chapter 16

Chapter 16: Display Adapters


The hardware that changes your pulsing digital PC's thoughts into the signals that can
be displayed by a monitor is called the display adapter. Over the years, the display
adapter has itself adapted to the demands of PC users, gaining color and graphics
abilities as well as increasing its resolution and range of hues. A number of
standards—and standard setters—have evolved, each improving the quality of what you
see on your monitor screen. Although display adapters themselves may disappear from
PCs, they will leave a legacy in the standards they set.

■ Background
■ History
■ Video Board Types
■ Circuits
■ Accelerator Chips
■ Background
■ Features
■ MMX Technology
■ Video Controllers
■ CRT Controller
■ VGA Controller
■ Hardware Cursor
■ RAMDACs
■ Memory
■ Frame Buffer Address
■ BIOS
■ PC Video BIOS
■ VGA BIOS
■ Super VGA
■ Mode Numbers
■ Memory Apertures
■ Internal Interfaces
■ ISA
■ Local Bus
■ UMA
■ VGA Auxiliary Video Connector
■ VESA Feature Connector
■ VESA Advanced Feature Connector
■ VESA Media Channel
■ Advanced Graphic Port
■ Architecture
■ Operation
■ Expansion Card
■ Connector
■ Standards
■ VGA
■ Text Modes
■ Graphics Modes
■ Monitor Requirements
■ Compatibility
■ Video
■ NTSC
■ S-Video
■ International Standards
■ Historic Standards
■ Monochrome Display Adapter
■ Hercules Graphics
■ Color Graphics Adapter
■ Double Scanned CGA
■ Enhanced Graphics Adapter
■ Memory Controller Gate Array
■ 8514/A
■ XGA

Making light of electronic signals requires no extraordinary skill or complex circuitry. All it takes is a
light bulb and a press of a finger on a willing wall switch. Beyond that, however, things get more
difficult. Imagine mastering more than a million light bulbs, each with a dimmer rather than a switch,
and perfectly adjusting each half a hundred times a second.
Of course that's exactly the kind of chore you bought your computer for. Not just any computer will
do, though. It needs special circuitry to take control of that light show. Not only must it be able to
switch on and off the lights, dimming them when appropriate, it has to remember how each is
supposed to be set. And it has to imagine, visualize, and draw the patterns that the lights will reveal.
In a modern PC, all of these functions are adeptly handled by the display adapter, circuitry that adapts
computer signals to those that control your monitor. In most machines, the display adapter is a special
expansion board that serves primarily to make graphic images, hence the display adapter is often
called a graphics board. Because graphics boards send out signals in a form that resembles (but is
not identical to) those of your home video system, they are often termed video boards. Notebook PCs
lack video boards—they typically lack any conventional expansion boards at all—but all of them also
include display adapter circuitry on their motherboards.
No matter its name, the function of display adapter circuitry is the same—control. The adapter
controls every pixel that appears on your computer display. But there is one more essential element.
Just any control won't do. Give a room full of monkeys control of your million light dimmers (you'll
need a mighty large room or a special breed of small, social simians) and the resulting patterns might
be interesting—and might make sense at about the same time your apes have completed duplicating
the works of Shakespeare. The display adapter circuitry also organizes the image, helping you make
sense from the chaos of digital pulses in your PC. It translates the sense of your computer's thoughts
into an image that makes sense to you.
Key to the display adapter’s ability to organize and communicate is standardization—the rules that let
your computer and its video circuitry know the correct way to control the image on your monitor.
Computer makers build their display circuits to conform to certain industry standards, and
programmers write their magnum opuses to match. The standards control the quality of the images
that you see—most importantly color, resolution, and refresh. Working within these standards, video
board makers do their best to eke the most speed from their circuits. Their efforts relentlessly push at
the standards. Something’s got to give, and not surprisingly it has been the standards. Your demands
for more speed and quality have pushed the PC industry through the reigns of several hardware
standards and into a new realm where traditional hardware standards no longer matter.
New technologies wait in the wings, ready to fulfill the final revolution—mating the most intelligent
machines ever made with what some regard as the stupidest entertainment system, television.
Although we’ll refrain from judging the wisdom of such a merger, we’ll take a good look at what will
make it possible, likely, and even desirable. Most importantly, we’ll examine what goes on behind the
scenes, and what makes those scenes on your monitor screen better than ever before.

Background

Since the introduction of the PC, display adapters and display standards have evolved hand in hand.
Programmers have followed close behind, often prodding to push things ahead even faster. Over the
years, several standards have emerged, unleashing their momentum as great waves, splashing across
the industry, ebbing away, and leaving puddles of advocates slowly evaporating in the heat of the
latest innovations. If you're not careful, you can step into one of those lingering puddles. You'll not
just wet your feet but muddy your vision.
The history of the video board is divided into two eras: the time of the IBM standards and the
Microsoft/Intel succession, separated by a transitional phase of emerging technologies and loose
standards. In the first few years of PC, display standards followed the systems with which IBM
equipped its PCs, which marked a steady progression of improved features and quality. Then, when
IBM forfeited its industry leadership position, a vacuum appeared, fitfully filled by Video Electronics
Standards Association, but which actually ushered the modern generation of Windows hardware
non-standards, led by Microsoft and Intel.
In the beginning, the only way to assure compatibility with both hardware and software was to
duplicate the official IBM product. Display adapter makers duplicated everything from memory
locations to port addresses to fonts for the character generator. If you were into board fabrication, it
was a quick way to make money. But not progress. Products were stuck at the same resolution and
color level as dictated by IBM. Real progress came with leaving the nuts and bolts behind and
concentrating on what it takes to make a realistic, moving image.

History

Although beauty lies in the beholder's eye, the first PC's screen was something only a zealot (or a
secretary under duress) could love—ghostly green text that lingered on as the screen scrolled, crude
block graphics, the kinds of stuff you thought you outgrew when you graduated from crayons to
pencils. The most positive thing you could say for the original display system, the Monochrome
Display Adapter, or MDA, that IBM introduced with its first PC in 1981, was that you never had any
trouble making up your mind about what you wanted. Nor did you have to worry about
compatibility—or such trivialities as art, color, aesthetics, or creativity. Only later in 1982 did IBM
add color in its Color Graphics Adapter or CGA system, creating an entirely different display system,
incompatible with MDA hardware but (mostly) compatible with PC software. This ancient transition
has set the pattern for every change in PC display systems to come. Backward compatibility is an
expected part of any new standard. Change your display system, and you expect your old software to
continue to run.
The result of this expectation often has been a challenge for hardware designers in crafting new
products. It also results in a little bit of MDA and CGA in even the latest display systems in the
newest PCs. Should you dig a PC program written in 1981 from a fossil stratum filled with
mastodons, saber-toothed cats, and Irish elk, your newest PC won’t harbor a bit of doubt about how to
handle images designed to display on the ancient hardware.
IBM’s next stab at PC video was the Enhanced Graphics Adapter or EGA, introduced in 1984. EGA
combined both monochrome and color in one board, though only one at a time (you had to set
switches on the board to make it act as one or the other), and mixed in higher resolution graphics. It
worked with monitors made for either the MDA or CGA standards as well as to its own, incompatible
higher resolution standard. In retrospect it was sort of like mating a Ford Model T with a steamboat
and tying an outboard motor behind. It proved to be a technological dead end. Although current
display systems still can handle software crafted for EGA, its hardware design holds only historic
interest.
The Video Graphics Array or VGA, introduced by IBM in 1987, represented a thorough rethinking of
display technology—thorough and forward-thinking enough that it forms the basis of all modern
display adapters. Strip your PC down to its minimal configuration without fancy operating systems,
without driver software, without acceleration, and you’ll see VGA. The least of laptop computers
may, in fact, go no further. But VGA provided a solid foundation on which all of today’s graphics
technology was built. It remains the one enduring standard in PC display systems.
Not that no one has tried to set other hardware standards. IBM created two other hardware systems that
expanded on VGA. Its 8514/A display system, known only as the model number of the initial IBM
video board, was introduced along with VGA as a higher resolution alternative in 1987. It won little
favor because its hardware used a flickery interlaced monitor (see Chapter 17, "Displays") and
because, in a fit of corporate hubris, IBM initially refused to disclose details of its operation. When
the company improved on the 8514/A to create its Extended Graphics Array or XGA in 1990, few
others adopted it. Unlike 8514/A, however, IBM revealed all about XGA in hope of making it an open
standard.
Two aspects of XGA survive. As with 8514/A, XGA was an accelerated video system. As such it
paved the way for today’s graphic accelerators and 3D accelerators. The commands that controlled it
serve as the core of instructions for most accelerated display hardware. The XGA name occasionally
is also used as a description of video systems with 1024 by 768 pixel resolution. This usage is usually
incorrect because the defining characteristic of XGA is its software interface, a set of graphic
commands.
The lack of any pace setting standard beyond the basic VGA resolution level did not stop the makers
of video boards from exploring higher resolution levels. As they created new products, they were
careful to duplicate VGA while adding their own extensions at resolution levels and operating
frequencies of their own choosing. These extended VGA boards quickly became known as
SuperVGA, although only the VGA part had any semblance of a standard. Nearly all were
incompatible with one another at higher resolutions, requiring special drivers for DOS and any
applications you wanted to push beyond VGA. Most of these early products operated at 800 by 600
pixel resolution; many reached 1024 by 768. What level you could actually use was another matter.
You had to match your monitor as well as your software to your video board.
Resolution wasn’t so much a problem as was the timing of signals. Without a standard for guidance,
video board makers set the timing of their video signals—the relationship between the sync pulses and
the beginning of the picture—at whatever value they thought appropriate. Timing differences translate
to differences in the position of the video image on the monitor screen. Although multi-scanning
monitors, which were just coming on to the market in force, could accommodate a variety of
resolution levels, they couldn’t by themselves sort out the timing differences. Because of the
differences, the image made by one video board might appear squarely in the center of a monitor
while that from another might lose its right edge behind the screen bezel. Worse, when shifting
resolutions the position of the image might change dramatically. As a result every time you shifted
resolution levels, you’d have to tweak the image size and positioning controls on your monitor—if
your monitor gave you image size and positioning controls.
Few people liked this system. Most people complained and blamed the makers of monitors. After all,
it was the monitors that needed adjustment. The video industry needed a timing standard, and the
driving force behind that standard was, quite naturally, the then leading maker of
multi-scanning monitors, NEC Technologies. While standing in his kitchen amid dishes packed for
moving from California to Chicago in 1987, Jim Schwabe of NEC got the idea for a new organization
of video companies to hammer out a set of timing standards. He even thought of a name for the group,
the Video Electronics Standards Association or VESA.
Within the first three years of its existence, VESA came to embrace literally every maker of display
adapters, most monitor manufacturers, and even large computer companies like IBM and Compaq.
VESA quickly grew into the display industry forum. All current hardware standards beyond VGA
widely recognized by the PC industry have been developed by VESA.
VESA sorted out the timing problem by publishing a set of Discrete Monitor Timing standards that
not only define the synchronizing rates at various resolution levels but also specify the relative timing
of the sync and image signals to assure standardized image placement. To make the job of matching
software and high resolution video modes easier, the organization developed the VESA BIOS Extensions or
VBE through which programs can determine the capabilities of video boards and how to access high
resolution modes. In addition, the organization shepherded the first local bus standard onto the market
and has developed internal interfaces for multimedia circuitry inside PCs. It continues to develop new
standards for the timing and connection of video systems.
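The VBE query itself is a real-mode INT 10h call (AX=4F00h) that fills a 512-byte information block, which a program examines to learn what the board supports. The structure below follows my reading of the VBE 2.0 specification and should be checked against the VESA document; the INT 10h call appears only in the comment because it cannot run outside real mode.

```c
#include <stdint.h>

/* VBE function 00h information block (VBE 2.0). The caller presets
   VbeSignature to "VBE2", points ES:DI at the block, and issues
   INT 10h with AX=4F00h; on return the signature reads "VESA" and
   the other fields are filled in. Layout sketched from the VBE 2.0
   specification; verify against the VESA document before relying
   on it. The block is defined to be exactly 512 bytes. */
#pragma pack(push, 1)
struct VbeInfoBlock {
    char     VbeSignature[4];   /* "VESA" on return                 */
    uint16_t VbeVersion;        /* BCD: 0x0200 means VBE 2.0        */
    uint32_t OemStringPtr;      /* real-mode far pointers (seg:off) */
    uint32_t Capabilities;
    uint32_t VideoModePtr;      /* list of supported mode numbers   */
    uint16_t TotalMemory;       /* video memory in 64 KB units      */
    uint16_t OemSoftwareRev;
    uint32_t OemVendorNamePtr;
    uint32_t OemProductNamePtr;
    uint32_t OemProductRevPtr;
    uint8_t  Reserved[222];
    uint8_t  OemData[256];      /* scratch space for OEM strings    */
};
#pragma pack(pop)
```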
With VESA resolving most of the hardware problems, standardization issues for the most innovative
of today’s display technologies—2D and 3D graphics acceleration—have shifted upstream. Your
software must be able to link with the instruction sets controlling the accelerators. This task is now
handled by your PC’s operating system, which for most people means Windows. In effect, Windows
pushes the PC video standard back to where it was originally envisioned: as a software interface.

Video Board Types

The design of the PC requires that every video board actually be two boards in one. All video boards
need VGA compatibility to assure that they will work properly as your PC wakes up. At boot time, the
video board must work in its most primitive mode—one compatible with all PC software written since
time began. All PCs boot in VGA mode and remain in that mode until your operating system loads the
proper software drivers to move your display system into its high resolution operating mode. If
something goes wrong and your system cannot load its drivers, VGA mode remains to help you sort
things out. For example, the Safe Mode of Windows 95 operates in VGA mode without requiring
special drivers to load.
In operating mode, conventional hardware compatibility standards do not matter. The drivers bridge
across any oddities and mysteries. Compatibility means only that you have a driver that matches both

http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/wrh16.htm (6 de 11) [23/06/2000 06:02:49 p.m.]


Winn L. Rosch Hardware Bible, Electronic Edition, Chapter 16

your video board and operating system. Meet that one requirement, and anything goes.
This design has won video board makers complete freedom in crafting high resolution, high
performance products. It underlies the use of both 2D and 3D graphic accelerators. It also means that
conventional video standards are essentially irrelevant. For their high resolution modes, board makers
can choose whatever system resources (I/O ports and memory addresses) and whatever set of
commands that they want to use.
Although (as this chapter will show you) the PC industry is awash in video standards, most of them
are obsolete, suitable for tinkerers, misers, and those who get a thrill nursing along technologies that
were pronounced dead long ago. Any PC worth wasting your time on has one of four essential types
of video boards or its equivalent inside. These include:
VGA boards, the most basic video boards that match the VGA standard and nothing
more
SuperVGA boards that follow the VESA standards for higher resolutions but use dumb
frame buffers and offer no acceleration
Graphic accelerator boards that work with 2D drawing commands and deliver high
resolutions
3D accelerator boards that work with 3D commands
Some people use the term "SuperVGA" as an indicator of video resolution. In this nomenclature, a
SuperVGA board or SuperVGA video system in a notebook computer produces 800 by 600 pixel
resolution. In this scheme, the next step up in resolution is 1024 by 768 which is sometimes called
UltimateVGA or, usually erroneously, XGA.
Each of the four board types in this list is backwardly compatible with the types preceding it. A 3D
accelerator board can also handle 2D drawing as well as basic SuperVGA and VGA displays. The
first two board types on the list—those using only dumb frame buffers—are essentially obsolete, but
they cannot be ignored. You’ll find them in older PCs, and you’ll find their technologies lurking
inside even the latest designs. They’re worth taking a look at, providing you don’t look too long.
All four are built using the same basic design and circuits. They differ in the advanced features that
they add to optimize their performance with modern operating systems.

Circuits

The display system in PCs usually takes the form of a video (or graphic) board. Some PCs integrate
the functions of the video board onto their motherboards, but the circuitry and even the logical host
connection are exactly the same as they would be with a separate video board.
No matter its placement, the video circuitry performs the same functions. In its frame buffer (or in
main memory in systems using Unified Memory Architecture) it creates the image your PC will
display. It then rasterizes the memory mapped image and converts the digital signals into analog
format compatible with your monitor.
The modern video board usually has five chief circuits that carry out these functions, although some
boards lack some of these elements. A graphic accelerator chip builds the image, taking commands
from your software and pushing the appropriate pixel values into the frame buffer. By definition,
VGA and SuperVGA boards lack an accelerator chip and require your microprocessor to construct the
image. Memory forms the frame buffer that stores the image created on the board. A video controller
reads the image in the frame buffer and converts it to raster form. A RAMDAC then takes the digital
values in the raster and converts them into analog signals of the proper level. And a video BIOS holds
extension code that implements VGA and SuperVGA functions and allows the board to work without
your operating system installing special drivers.
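The division of labor among these circuits can be sketched in miniature. The Python below is purely illustrative—no real video board runs interpreted code, and all the names and numbers are invented for the example—but it shows the roles the text describes: a drawing command fills the frame buffer, the video controller reads the buffer out in raster order, and a RAMDAC-style conversion maps each digital value to an analog level (0.7 volts is a typical peak for analog video).

```python
# Toy model of the video-board pipeline: accelerator writes pixels
# into a frame buffer, the controller scans them out in raster order,
# and a RAMDAC-style lookup turns digital values into analog levels.

WIDTH, HEIGHT = 8, 4                      # a tiny "screen"
frame_buffer = [[0] * WIDTH for _ in range(HEIGHT)]

def fill_rect(x, y, w, h, value):
    """Accelerator-style drawing command: fill a rectangle."""
    for row in range(y, y + h):
        for col in range(x, x + w):
            frame_buffer[row][col] = value

def scan_out():
    """Video-controller role: emit pixels in raster order."""
    for row in frame_buffer:
        for pixel in row:
            yield pixel

def ramdac(value, levels=4, max_volts=0.7):
    """RAMDAC role: map a digital value to an analog voltage."""
    return max_volts * value / (levels - 1)

fill_rect(2, 1, 3, 2, 3)                  # one high-level command...
raster = list(scan_out())                 # ...becomes many pixels
print(raster.count(3), ramdac(3))         # 6 pixels lit, at 0.7 volts
```

Note how one short command expands into many pixel values—that expansion is exactly the work an accelerator spares your microprocessor.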

Accelerator Chips

Of all the chips on a video board, the most important is the graphic accelerator. The chip choice here
determines the commands the board understands—for example, whether the board can carry out 3D
functions in its hardware or depends on your PC’s microprocessor for the processing of 3D effects.
The speed at which the accelerator chip operates determines how quickly your system can build image
frames. This performance directly translates into how quickly your system responds when you give a
command that changes the screen (for example, dropping down a menu) or how many frames get
dropped when you play back a video clip. The accelerator also limits the amount and kind of memory
in the frame buffer as well as the resolution levels of the images that your PC can display, although
other video board circuits can also impose limits. In short, the graphic accelerator is the most
important chip in the entire video system.
That said, the accelerator is optional, both physically and logically. Old video boards lack
accelerators, hence they are not "accelerated." That means your PC’s microprocessor must execute all
drawing instructions. In addition, even boards with accelerators may not accelerate all video
operations. The board may lack commands to carry out some video tasks, or the board’s driver may
not take advantage of all the features of the board’s accelerator. In such circumstances, the drawing
functions will be emulated by a Hardware Emulation Layer in your operating system—which means
that your microprocessor gets stuck with the accelerator's drawing work.
Note that the MMX instructions of newer Intel microprocessors overlap the functions of graphic
accelerators. Depending on the driver software in your system, drawing commands may be executed
either by your graphic accelerator or the MMX circuitry of your microprocessor. Whether the
operations carried out by the MMX section of the microprocessor are accelerated is a topic you can
spend evenings arguing about with friends. Although the microprocessor may be doing the drawing,
performance will be accelerated over what you would get without MMX.

Background

The graphic accelerator is an outgrowth of an older chip technology, the graphic coprocessor. An
early attempt to speed up the display system, the graphic coprocessor was introduced as a
supplemental microprocessor optimized for carrying out video-oriented commands.
The graphic coprocessor added speed in three ways. By carrying out drawing and image manipulation
operations without the need for intervention by the microprocessor, the coprocessor freed up the
microprocessor for other jobs. Because the graphic coprocessor was optimized for video processing, it
could carry out most image-oriented operations faster than could the microprocessor even if the
microprocessor were able to devote its full time to image processing. The graphic coprocessor also
broke through the bus bottleneck that was (at the time of the development of graphic coprocessor
technology) choking video performance. When the microprocessor carried out drawing functions, it
had to transfer every bit bound for the monitor through the expansion bus—at the time, the slow ISA
bus. The coprocessor was directly connected to the frame buffer and could move bytes to and from the
buffer without regard to bus speed. The microprocessor only needed to send high level drawing
commands across the old expansion bus. The graphic coprocessor would carry out the command
through its direct attachment to the frame buffer.
The workstation market triggered the development of the graphic coprocessor. Microprocessor makers altered their
general purpose designs into products that were particularly adept at manipulating video images.
Because the workstation market was multi-faceted with each different hardware platform running
different software, the graphic coprocessor had to be as flexible as possible—programmable just like
its microprocessor forebears.
These coprocessors joined the PC revolution in applications which demanded high performance
graphics. But the mass acceptance of Windows made nearly every PC graphics intensive. The
coprocessor was left behind as chip makers targeted the specific features needed by Windows and
trimmed off the excess—programmability. The result was the fixed function graphic coprocessor,
exactly the same technology better known now as the graphic accelerator.
Graphic coprocessors never died. Rather, they went into the witness protection program. The latest
generation of the chips once called coprocessors are now termed Digital Signal Processors and are
still exploited in specialized, high performance video applications. For example, the Texas
Instruments TMS320C80 Digital Signal Processor is a direct heir of the old TMS32010 and
TMS32020 graphic coprocessor chips used in early accelerated video boards.
The most recent evolution of graphic acceleration technology has produced the 3D accelerator.
Rather than some dramatic breakthrough, the 3D accelerator is a fixed function graphic coprocessor
that includes the ability to carry out the more common 3D functions in its hardware circuitry. Just as
an ordinary graphic accelerator speeds up drawing and windowing, the 3D accelerator gives a boost to
the 3D rendering.
As with the microprocessors, graphic and 3D accelerators come in wide varieties with different levels
of performance and features. Each maker of graphic accelerators typically has a full line of products
ranging from basic chips with moderate performance designed for low cost video boards to high
powered 3D products aimed at awing you with benchmark numbers far beyond the claims of their
competitors and, often, reality.
The first significant fixed function graphic accelerators were made by S3 Corporation. The company
prefers to think of its name as S-cubed but it is generally pronounced S-three (the name is derived
from Solid State Systems). The company’s 86C911 chip set the pace for the first generation of
accelerators. Stripped to its essentials, the chip was a hardware implementation of the features most
relevant to Windows applications, drawn from the instruction set of the IBM Extended Graphics
Array (XGA) coprocessor. Designed to match the ISA bus, the S3 86C911 used 16-bit architecture all
around—internally and in its link to its 1MB maximum of VRAM. Although it could handle
resolutions up to 1280 by 1024, its color abilities were limited by its RAMDAC connection.
Other manufacturers followed with their own 16-bit chips; most jumped directly into the second
generation with 32-bit chips. The race quickly evolved into one of wider and wider internal bus width,
up to 128 bits. Most chips now use 64-bit or wider technology, both for internal processing and for
accessing the frame buffer.

Features

Bits aren’t everything. The performance and output quality of a graphic accelerator depends on a
number of design variables. Among the most important of these are the width of the registers it uses
for processing video data, the amount and technology of the memory it uses, the ability of the chip to
support different levels of resolution and color, the speed rating of the chip, the bandwidth of its
connection to your PC and display, and the depth and extent of its command set as well as how well
those commands get exploited by your software. A final difference, one that's declining in importance
with the acceptance of graphical operating systems, is the accelerator’s handling of standard VGA
signals.

Register Width

Graphic accelerators work like microprocessors dedicated to their singular purpose, and internally
they are built much the same. The same design choice that determines microprocessor power also
affects the performance of graphic accelerator chips. The internal register width of a graphic
accelerator determines how many bits the chip works with at a time. As with microprocessors, the
wider the registers, the more data that can be manipulated in a single operation.
The basic data type for modern graphic operations is 32 bits—that’s the requirement of 24-bit
TrueColor with an alpha channel. Most graphic and 3D accelerators at least double that and can move
pixels two (or four) at a time in blocks.
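That 32-bit data type is simple to picture: one byte each of red, green, and blue (the 24 bits of TrueColor) plus a byte of alpha. A quick sketch shows the packing; the byte order used here (alpha high, blue low) is just one common convention, not a requirement of any particular chip.

```python
# Packing and unpacking a 32-bit ARGB pixel: 8 bits per channel.

def pack_argb(a, r, g, b):
    """Combine four 8-bit channels into one 32-bit pixel value."""
    return (a << 24) | (r << 16) | (g << 8) | b

def unpack_argb(pixel):
    """Split a 32-bit pixel back into its four 8-bit channels."""
    return ((pixel >> 24) & 0xFF, (pixel >> 16) & 0xFF,
            (pixel >> 8) & 0xFF, pixel & 0xFF)

opaque_orange = pack_argb(255, 255, 165, 0)
print(hex(opaque_orange))          # 0xffffa500
print(unpack_argb(opaque_orange))  # (255, 255, 165, 0)
```

An accelerator with 64-bit registers can shift two such pixels per operation; a 128-bit design, four.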
Because the graphic or 3D accelerator makes the video circuitry of your PC a separate, isolated
system, concerns about data and bus widths elsewhere in your PC are immaterial. The wide registers
in graphic accelerators work equally well no matter whether you run 16-bit software (DOS and
Windows 95) or 32-bit software (Windows NT and OS/2), no matter what microprocessor you have or
what bus you plug your video board into.

Memory Technology

Graphics accelerators can be designed to use standard dynamic memory (DRAM), dual ported video
memory (VRAM), or either type. VRAM memory delivers better performance because it can handle
its two basic operations (writing and reading, corresponding to image updates and writing to the
screen) simultaneously. VRAM is, however, more expensive than DRAM. Although memory prices
are always falling, many manufacturers skimp here to deliver products at lower prices.
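A back-of-envelope calculation suggests why the dual ports matter. Screen refresh alone reads the entire frame buffer 60 to 75 times every second; on single-ported DRAM those reads compete with drawing writes for the same memory port. The figures below are illustrative, not measurements of any particular board.

```python
# Rough scan-out bandwidth: the memory traffic consumed just by
# refreshing the screen, before any drawing happens at all.

def scanout_mb_per_s(width, height, bytes_per_pixel, refresh_hz):
    """Bytes read per second for display refresh, in megabytes."""
    return width * height * bytes_per_pixel * refresh_hz / 1_000_000

demand = scanout_mb_per_s(1024, 768, 2, 75)   # 16-bit color at 75 Hz
print(round(demand))                          # about 118 MB/s
```

On a VRAM board that traffic flows out the second port, leaving the first free for the accelerator's writes.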
The prodigious amounts of memory required by large frame buffers, double buffering, Z-buffering,
and other 3D operations make memory speed an important issue in the design of video boards.
Manufacturers are adapting all major high speed memory technologies to their products. In the next
few years, the industry is likely to move to RAMBus memory because of its wide bandwidth. Some
boards using RAMBus are already available and demonstrate excellent speed in operations normally
constrained by memory speed. The Accelerated Graphics Port is optimized for operation with RAMBus
memory or Intel’s promised nDRAM, which is derived from RAMBus technology.

Resolution Support

The design of a graphic accelerator also sets the maximum amount of memory that can be used in the
frame buffer, which in turn sets upper limits on the color and resolution support of a graphic
accelerator. Other video board circuit choices may further constrain these capabilities. In general,
however, the more memory, the higher the resolution and the greater the depth of color the accelerator
can manage.
Every graphic accelerator supports three basic resolutions: standard VGA 640 by 480 pixel graphics,
SuperVGA 800 by 600 pixels, and 1024 by 768 pixels. Beyond the basic trio, designers often push
higher, depending on other constraints. Besides the standard increments upward (1280 by 1024 and
1600 by 1200 pixels) some makers throw in intermediate values so that you can coax monitors to their
maximum sharpness for the amount of memory you have available for the frame buffer.
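The arithmetic behind the memory rule is simple: width times height times bytes per pixel. The short sketch below runs that calculation for the resolutions listed above at a few common color depths.

```python
# Frame-buffer memory required at common resolutions and color depths.

def buffer_bytes(width, height, bits_per_pixel):
    """Bytes of frame buffer needed for one full screen image."""
    return width * height * bits_per_pixel // 8

for w, h in [(640, 480), (800, 600), (1024, 768),
             (1280, 1024), (1600, 1200)]:
    for bpp in (8, 16, 24):
        mb = buffer_bytes(w, h, bpp) / 2**20
        print(f"{w}x{h} at {bpp} bits per pixel: {mb:.2f} MB")
```

For example, 1024 by 768 in 16-bit color needs 1.5MB—which is why 2MB was long a popular frame buffer size.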

Color Support

Many of today’s graphics accelerators are all in one video solutions, so they contain RAMDACs as
well as video controller circuitry. These built-in RAMDACs obey the same rules as standalone chips,
for example speed ratings. Foremost in importance is the color depth the chips can produce. Some
graphic accelerators rely on a standard VGA-style DAC and are limited to 18-bit VGA-style color (six
bits of each primary color) and can only discriminate between 262,144 colors. Most newer graphic
accelerators with built-in DACs have full 24-bit (or 32-bit) color support, enabling them to display the
16.7 million hues of True Color. Most 3D accelerators depend on external RAMDACs so their color
capabilities are determined by board design rather than the chip.
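The color counts quoted above follow directly from the DAC width: the number of displayable hues is two raised to the total number of bits across the three primaries.

```python
# Six bits per primary (18-bit VGA color) versus eight (TrueColor).

vga_colors = (2 ** 6) ** 3        # 6 bits each of red, green, blue
truecolor = (2 ** 8) ** 3         # 8 bits each of red, green, blue
print(vga_colors)                 # 262144
print(truecolor)                  # 16777216 -- the "16.7 million hues"
```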

Speed Rating

The higher the resolution a graphic accelerator produces, the more pixels it must put on the screen. At
a given frame rate, more pixels means each one must be produced faster—it gets a smaller share of
each frame. Consequently, higher resolution accelerator chips need higher speed ratings.

Winn L. Rosch Hardware Bible, Electronic Edition, Chapter 17

Chapter 17: Displays


A display is the keyhole you peer through to spy on what your PC is doing. You can't do
your work without a display, and you can't work well without a good one. The final
quality of what you see—the detail, sharpness, and color—depends on the display you
use. No longer a function only of TV-like monitors, today's computer displays
increasingly rely on new technologies to achieve flat screens and high resolutions.

■ Background
■ Cathode Ray Tubes
■ Physical Characteristics
■ Phosphors
■ Color Temperature
■ Persistence
■ Electron Guns
■ Convergence
■ Purity
■ Shadow Masks
■ Aperture Grilles
■ Required Dot Pitch
■ Line Width
■ Screen Curvature
■ Resolution Versus Addressability
■ Anti-Glare Treatment
■ Image Characteristics
■ Screen Size
■ Overscan and Underscan
■ Aspect Ratio
■ Image Sizing
■ Image Distortion
■ Image Controls
■ Electronics
■ Synchronizing Frequency Range
■ Interlacing
■ Bandwidth
■ Energy Star
■ Monitor Types
■ Multiscanning
■ Fixed Frequency
■ Identification
■ Hard Wired Coding
■ Display Data Channel
■ Manual Configuration
■ Flat Panel Display Systems
■ LCD
■ Nematic Technology
■ Cholesteric Technology
■ Passive Matrix
■ Active Matrix
■ Response Time
■ Field Emission Displays
■ Electro-Luminescent Displays
■ Gas-Plasma
■ LED
■ Practical Considerations
■ Resolution
■ Size
■ Labeling
■ Connectors
■ Video
■ Pin Jacks
■ Nine-Pin D-Shell Connectors
■ Fifteen-Pin High-Density D-Shell Connectors
■ BNC Connectors
■ Audio
■ Enhanced Video Connector

17

Displays

You cannot see data. The information that your computer processes is nothing but ideas, and ideas are
intangible no matter whether in your mind or your computer's. Whereas you can visualize your own
ideas, you cannot peer directly into the pulsing digital thought patterns of your computer. You
probably have no right to think that you could—if you can't read another person's thoughts, you
should hardly expect to read the distinctly non-human circuit surges of your PC.
Although most people—at least those not trained in stage magic—cannot read thoughts per se, they
can get a good idea of what's going on in another person's mind by carefully observing his external
appearance. Eye movements, facial expressions, gestures, and sometimes even speech can give you a
general idea about what that other person is thinking, although you will never be privy to his true
thoughts. So it is with computers. You'll never be able to see electrons tripping through logical gates,
but you can get a general idea of what's going on behind the screen by looking into the countenance of
your computer—its display. What the display shows you is a manifestation of the results of the
computer's thinking.
The display is your computer's line of communication to you, much as the keyboard enables you to
communicate with it. Like even the best of friends, the display doesn't tell you everything; but it does
give you a clear picture, one from which you can draw your own conclusions about what the computer
is doing.
Because the display has no direct connection to the computer's thoughts, the same thoughts—the same
programs—can generate entirely different onscreen images while working exactly the same way
inside your computer. Just as you can't tell a book's contents from its cover, you cannot judge the
quality of a computer from its display.
What you can see is important, however, because it influences how well you can work with your
computer. A poor display can lead to eyestrain and headaches, making your computer literally a pain
to work with. A top quality display means clearly defined characters, sharp graphics, and a system
that's a pleasure to work with.

Background

Although the terms are often used interchangeably, a display and a monitor are distinctly different. A
display is the image producing device itself, the screen that you see. The monitor is a complete box
that adds support circuitry to the display. This circuitry converts the signals set by the computer (or
some other device, such as a videocassette recorder) into the proper form for the display to use.
Although most monitors operate under principles like those of the television set, displays can be made
from a variety of technologies, including liquid crystals and the photon glow of some noble gases.
Because of their similar technological foundations, monitors to a great extent resemble the humble old
television set. Just as a monitor is a display enhanced with extra circuitry, the television is a monitor
with even more signal conversion electronics. The television incorporates into its design a tuner or
demodulator that converts signals broadcast by television stations or a cable television company into
about the same form as those signals used by monitors. Beyond the tuner, the television and monitor
work in much the same way. Indeed, some old-fashioned computer monitors work as televisions as
long as they are supplied the proper signals.
New monitors have developed far beyond their television roots, however. They have greater sharpness
and purity of color. To achieve these ends, they operate at higher frequencies than television stations
can broadcast.
Computer displays and monitors use a variety of technologies to create visible images. A basic
bifurcation divides the displays of desktop computers and those of laptop machines. Most desktop
computers use systems based on the same cathode ray tube technology as that used in the typical
television set. Laptop and notebook computers chiefly use liquid crystal displays. Occasionally, some
designers switch hit with technologies and stuff LCDs into desktop machines—something we're
destined to see more in the future—while aged portable computers weighed themselves down with
CRT displays.

Cathode Ray Tubes

The oldest electronic image generating system still in use is the cathode ray tube. The name is purely
descriptive. The device is based on a special form of vacuum tube—a glass bottle that is partially
evacuated and filled with an inert gas at very low pressure. The tube of the CRT is hardly a tube but
more flask shaped with a thin neck that broadens like a funnel into a wide, nearly flat face. Although a
CRT appears to be made like a simple bottle—in fact, people in the monitor business sometimes refer
to CRTs as "bottles"—its construction is surprisingly complex and involves a variety of glasses of
many thicknesses. The face of the typical CRT, for example, often is about an inch thick.
The cathode in the CRT name is a scientific term for a negatively charged electrode. In a CRT, a
specially designed cathode beams a ray of electrons toward a positively charged electrode, the anode.
(Electrons, having a negative charge, are naturally attracted to positive potentials.) Because it works
like a howitzer for electrons, the cathode of a CRT is often called an electron gun.
The electrons race on their ways at a substantial fraction of the speed of light, driven by the high
voltage potential difference between the cathode and anode, sometimes as much as 25,000 volts.
At the end of their flight to the anode, the electrons crash into a coating made from phosphor
compounds that has the amazing ability to convert the kinetic energy of the electrons into visible light.
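Just how substantial a fraction of light speed? A quick order-of-magnitude check—using the relativistic kinetic energy relation KE = (gamma - 1)mc², with the 25,000-volt figure cited above—puts the electrons at roughly three-tenths the speed of light when they strike the phosphor. The calculation below is an illustration, not a specification of any particular tube.

```python
# Electron speed after acceleration through a 25,000-volt potential,
# from the relativistic kinetic-energy relation KE = (gamma - 1)mc^2.

import math

ELECTRON_REST_ENERGY_EV = 510_999      # electron mc^2, in electron-volts
accelerating_volts = 25_000            # the potential cited in the text

gamma = 1 + accelerating_volts / ELECTRON_REST_ENERGY_EV
beta = math.sqrt(1 - 1 / gamma ** 2)   # speed as a fraction of c
print(f"{beta:.2f} c")                 # about 0.30 c
```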

Physical Characteristics

The CRT is a physical entity that you can hold in your hand, drop on the floor, and watch shatter.
Little of its design is by chance—nearly every design choice in making the CRT has an effect on the
image that you see.
Four elements of the CRT exert the greatest influence on the kind and quality of image made by a
monitor. The phosphors chosen for the tube affect the color and persistence of the display. The
electron guns actually paint the image, and how well they work is a major factor in determining image
sharpness. In color CRTs, the shadow mask or aperture grille limit the ultimate resolution of the
screen. The face of the screen and the glare reflected from it affect both image contrast and how happy
you will be in working with a monitor.

Phosphors

At the end of the electrons' short flight from the gun in the neck of a CRT to the inside of its wide, flat
face lies a layer of a phosphor-based compound with a wonderful property—it glows when struck by
an electron beam. The image you see in a CRT is the glow of the electrically-stimulated phosphor
compounds, simply termed phosphors in the industry. Not all the phosphorous compounds used in
CRTs are the same. Different compounds and mixtures glow various colors and for various lengths of
time after being struck by the electron beam.
A number of different phosphors are used by PC-compatible monitors. Table 17.1 lists some of these
phosphors and their characteristics.

Table 17.1. Phosphors and Their Characteristics

Type Steady-state color Decay color *Decay time Uses or comments


P1 Yellow-Green Yellow-Green 15 oscilloscopes, radar
P4 White White 0.1 display, television
P7 White Yellow-Green unavail. oscilloscopes, radar
P11 Blue Blue 0.1 photography
P12 Orange Orange unavail. radar
P16 Violet Violet sh ultraviolet
P19 Orange Orange 500 radar
P22R Red Red 0.7 projection
P22G Yellow-Green Yellow-Green 0.06 projection
P22B Blue Blue 0.06 projection
P26 Orange Orange 0.2 radar, medical
P28 Yellow-Green Yellow-Green 0.05 radar, medical
P31 Yellow-Green Yellow-Green 0.07 oscilloscope, display
P38 Orange Orange 1000 radar
P39 Yellow-Green Yellow-Green 0.07 radar, display
P40 White Yellow-Green 0.045 med. persist. display
P42 Yellow-Green Yellow-Green 0.1 display
P43 Yellow-Green Yellow-Green 1.5 display
P45 White White 1.5 photography
P46 Yellow-Green Yellow-Green 0.16 flying spot scanners
P55 Blue Blue 0.05 projection
P56 Red Red 2.25 projection
P101 Yellow-Green Yellow-Green 0.125 display
P103 White White 0.084 P4 w/bluish background
P104 White White 0.085 high efficiency P4
P105 White Yellow-Green 100+ long persistence P7
P106 Orange Orange 0.3 display
P108 Yellow-Green Yellow-Green 125 P39 w/bluish backgr.
P109 Yellow-Green Yellow-Green 0.08 high efficiency P31
P110 Yellow-Green Yellow-Green 0.08 P31 w/bluish backgr.
P111 Red/green Red/green unavail. voltage penetration
P112 Yellow-Green Yellow-Green unavail. ir lightpen doped P39
P115 White White 0.08 yellower P4
P118 White White 0.09 display
P120 Yellow-Green Yellow-Green 0.075 P42 w/bluish backgr.
P122 Yellow-Green Yellow-Green 0.075 display
P123 Infrared unavail. unavail. infrared
P124 Yellow-Green Yellow-Green 0.13 yellow part of P4
P127 Green Yellow-Green unavail. P11+P39 for light pens
P128 Yellow-Green Yellow-Green 0.06 ir lightpen doped P31
P131 Yellow-Green Yellow-Green unavail. ir lightpen doped P39
P133 Red to green Red to green varies current-sensitive
P134 Orange Orange 50 European phosphor
P136 White White 0.085 enhanced contrast P4
P137 Yellow-Green Yellow-Green 0.125 high efficiency P101
P138 Yellow-Green Yellow-Green 0.07 enhanced contrast P31
P139 Yellow-Green Yellow-Green 70 enhanced contrast P39
P141 Yellow-Green Yellow-Green 0.1 enhanced contrast P42
P143 White Yellow-Green 0.05 enhanced contrast P40
P144 Orange Orange 0.05 enhanced contrast P134
P146 Yellow-Green Yellow-Green 0.08 enhanced contrast P109
P148 Yellow-Green Yellow-Green unavail. lightpen applications
P150 Yellow-Green Yellow-Green 0.075 data displays
P154 Yellow-Green Yellow-Green 0.075 displays
P155 Yellow-Green Yellow-Green unavail. lightpen applications
P156 Yellow-Green Yellow-Green 0.07 lightpen applications
P158 Yellow Yellow 140 medium persistence
P159 Yellow-Green Yellow-Green unavail. enhanced contrast P148
P160 Yellow-Green Yellow-Green 0.07 data displays
P161 Yellow-Green Yellow-Green 0.07 data displays
P162 Yellow-Green Yellow-Green 0.1 data displays
P163 White White 2 photography
P164 White Yellow-Green 0.1 displays
P166 Orange Orange unavail. ir lightpens
P167 White White 0.075 display
P168 Yellow-Green Yellow-Green 0.075 projection
P169 Yellowish Yellowish 1.5 display
P170 Orange Orange unavail. enhanced contrast P108
P171 White Yellow-Green 0.2 display
P172 Green Green unavail. lightpen displays
P173 Infrared unavail. unavail. lightpen
P175 Red Red 0.6 display
P176 Yellow-Green Yellow-Green 0.2 photography
P177 Green Green 0.1 data displays
P178 Yellow-Green Yellow-Green 0.1 displays
P179 White White 1 displays
P180 Yellow-Orange Yellow-Orange 0.075 displays
P181 Yellow-Green Yellow-Green unavail. color shutter displays
P182 Orange Orange 50 displays
P183 Orange Orange unavail. lightpen displays
P184 White White 0.075 displays
P185 Orange Orange 30 enhanced contrast P134
P186 Yellow-Green Yellow-Green 25 displays
P187 Yellow-Green Yellow-Green unavail. lightpen P39
P188 White White 0.05 White displays
P189 White White unavail. White displays
P190 Orange Orange 0.1 displays
P191 White White 0.12 White displays
P192 White White 0.2 White displays
P193 White White 0.08 White displays
P194 Orange Orange 17 displays
P195 White White 0.125 inverse displays
*Decay time is the approximate time in milliseconds for a display to decay to 10 percent of its
emission level.

The type of phosphor determines the color of the image on the screen. Several varieties of amber,
green, and whitish phosphors are commonly used in monochrome displays. Color CRT displays use
three different phosphors painted in fine patterns across the inner surface of the tube. The patterns are
made from dots or stripes of the three additive primary colors—red, green, and blue—arrayed next to
one another. A group of three dots is called a color triad or color triplet.
One triad of dots makes up a picture element, often abbreviated as pixel (although some
manufacturers prefer to shorten picture element to pel).
The makers of color monitors individually can choose each of the three colors used in forming the
color triads on the screen. Most monitor makers have adopted the same phosphor family, which is
called P22 (or B22, which is the same thing with a different nomenclature), so the basic color
capabilities of most multi-hued monitors are the same.
The color monitor screen can be illuminated in any of its three primary colors by individually hitting
the phosphor dots associated with that color with the electron beam. Other colors can be made by
illuminating combinations of the primary colors. By varying the intensity of each primary color, an
infinite spectrum can be generated.
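As a minimal sketch of this additive mixing, the following Python fragment (the names are ours, purely for illustration) maps full-on combinations of the three beams to the familiar hues they produce; a real monitor varies three continuous beam intensities rather than simple on/off values:

```python
# Full-on combinations of the three additive primaries and the
# hues they produce on a color CRT (0 = beam off, 1 = beam fully on).
ADDITIVE_HUES = {
    (1, 0, 0): "red",     (0, 1, 0): "green",   (0, 0, 1): "blue",
    (1, 1, 0): "yellow",  (1, 0, 1): "magenta", (0, 1, 1): "cyan",
    (1, 1, 1): "white",   (0, 0, 0): "black",
}

def additive_color(red, green, blue):
    """Name the hue produced by switching each primary fully on or off."""
    return ADDITIVE_HUES.get((red, green, blue), "intermediate hue")

print(additive_color(1, 1, 0))  # yellow: red and green beams both lit
```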
Monochrome displays have their CRTs evenly coated with a single, homogeneous phosphor so that
wherever the electron beam strikes, the tube glows in the same color. The color of the phosphor
determines the color in which the entire screen glows.


Three colors remain popular for monochrome computer displays—amber, green, and white. Which is
best is a matter of both preference and prejudice. Various studies support the superiority of each of
these colors:
Green. Green screens got a head start as PC displays because they were IBM's choice for
most of its terminals and the first PC display—as well as classic radar displays and
oscilloscopes. It is a good selection for use where ambient light levels are low; part of its
heritage is from the days of oscilloscope and radar screens (most of which remain
stubbornly green). Over the last few years, however, green has fallen from favor as the
screen of choice.
Amber. In the 1980s, amber-colored screens rose in popularity because they are,
according to some studies, easier on the eyes and more readable when the surrounding
environmental light level is bright. Yellow against black yields one of the best perceived
contrast combinations, making the displays somewhat easier on your eyes. Amber also
got a push as being a de facto European monitor standard.
White. Once white screens were something to be avoided, if just from their association
with black and white televisions. A chief reason was that most early monochrome
displays used a composite interface and gave low onscreen quality.
Apple's Macintosh and desktop publishing forced the world to re-evaluate white. White is the color of
paper that executives have been shuffling through offices over the ages. White and black also happen
to be among the most readable of all color combinations. In 1987, IBM added impetus to the
conversion of the entire world to white with the introduction of the VGA and its white screen
monochrome display.
If you look closely, you might see fine specks of color, such as a bright yellow, dappled into so-called
"white" phosphors. Manufacturers mix together several different phosphors to fine tune the color of
the monochrome display—to make it a cool, blue television "white" or a warm, yellowish paper
"white."
No good dividing line exists between ordinary white and paper white displays. In theory, paper white
means the color of the typical bond paper you type on, a slightly warmer white than the blue tinged
glow of most "white" monitors. But what counts as "paper white" varies with whoever applies the name.
Often ignored yet just as important to screen readability as the phosphor colors is the background
color of the display tube. Monochrome screen backgrounds run the full range from light gray to
nearly black. Darker screens give more contrast between the foreground text and the tube background,
making the display more readable, particularly in high ambient light conditions.
The background area on a color screen—that is, the space between the phosphor dots—is called the
matrix, and it is not illuminated by the electron beam. The color of the matrix determines what the
screen looks like when the power is off—pale gray, dark green-gray, or nearly black. Darker and
black matrices give an impression of higher contrast to the displayed images. Lighter gray matrices
make for purer white. The distinctions are subtle, however, and unless you put two tubes side by side,
you're unlikely to be able to judge the difference.

Color Temperature


If your work involves critical color matching, the color temperature of your monitor can be an
important issue. White light is not white, of course, but a mixture of all colors. Alas, all whites are not
the same. Some are richer in blue, some in yellow. The different colors of white are described by their
color temperature, the number of Kelvins (degrees Celsius above absolute zero) to which a perfect
luminescent body would have to be heated to emit that color.
Like the incandescence of a hot iron horseshoe in the blacksmith's forge, as its temperature gets higher
the hue of a glowing object shifts from red to orange to yellow and on to blue white. Color
temperature simply assigns an absolute temperature rating to these colors. Figure 17.1 illustrates the
range of color temperatures.
Figure 17.1 The color temperatures associated with various conditions.

For example, ordinary light bulbs range from 2,700 to 3,400 Kelvins. Most fluorescent lights have
non-continuous color spectra, rich in certain hues (notably green) while lacking others, which makes
assigning them a true color temperature impossible. Other fluorescent lamps are designed to approximate
daylight with color temperatures of about 5,000 Kelvins.
The problem with color matching arises because pigments and paper only reflect light, so their actual
color depends on the temperature of the light illuminating them. Your monitor screen emits light, so
its color is independent of illumination—it has its own color temperature that may differ (and likely
does) from that of the light illuminating the rest of your work. Monitors are designed to glow with the
approximate color temperature of daylight rather than incandescent or fluorescent light.
Alas, not everyone has the same definition of daylight. Noonday sun, for instance, ranges from 5,500
to 6,000 Kelvins. Overcast days may achieve a color temperature of 10,000 Kelvins because the
scattered blue glow of the sky (higher color temperature) dominates the yellowish radiation from the
sun. The colors and blend of the phosphors used to make the picture tube screen and the relative
strengths of the electron beams illuminating those phosphors determine the color temperature of a
monitor. Some engineers believe the perfect day is a soggy, overcast afternoon suited only to ducks
and Englishmen, and opt to run their monitors at color temperatures as high as 10,000 Kelvins.
Others, however, live in a Kodachrome world where the color temperature is the same 5,300 Kelvins
as a spring day with tulips in the park.

Persistence

CRT phosphors also differ in persistence, which describes how long the phosphor glows after being
struck by the electron beam. Most monitors use medium persistence phosphors.
Persistence becomes obvious when it is long. Images take on a ghostly appearance, lingering for a few
seconds and slowly fading away. Although the effect may be bothersome, particularly in a darkened
room, it's meant to offset the effect of another headache producer: flicker.
Exactly what it sounds like, flicker is the quick flashing of the screen image caused by the image
decaying before it gets re-scanned by the electron beam. The persistence of vision (a quality of the
human visual system) makes rapidly flashing light sources appear continuously lit. Fluorescent lights,
for example, seem to glow uninterruptedly even though they switch on and off 120 times a second
(twice the nominal frequency of utility supplied electricity).


The lingering glow of long persistence phosphors bridges over the periods between passes of electron
beams when they stretch out too long for human eyes to blend them together. Long persistence
phosphors are thus often used in display systems scanned more slowly than usual, such as interlaced
monitors. The IBM Monochrome display, perhaps the most notorious user of long persistence green
phosphors, is scanned 50 times a second instead of the more normal (and eye-pleasing) 60 or higher.
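The trade-off can be sketched as a simple comparison of decay time against the interval between refreshes. This is only a rough rule of thumb, not a model of human flicker perception, and the example numbers below are illustrative:

```python
def persistence_bridges_refresh(decay_ms, refresh_hz):
    """Return True if the phosphor's decay time is at least as long as
    the interval between screen refreshes, so its glow bridges the gap
    between passes of the electron beam."""
    frame_ms = 1000.0 / refresh_hz
    return decay_ms >= frame_ms

# A long-persistence phosphor (tens of milliseconds) scanned at 50 Hz
# still glows when the beam returns; a fast 0.1 ms phosphor would not.
print(persistence_bridges_refresh(25.0, 50))  # True
print(persistence_bridges_refresh(0.1, 50))   # False
```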
Long persistence phosphors need not be green, however. Long persistence color systems also are
available for use in applications where flicker is bothersome. Most often, long persistence color
phosphors are used in interlaced systems that are scanned more slowly than non-interlaced displays.
Long persistence phosphors also frustrate light pens, which depend on detecting the exact instant a dot
of phosphor lights up. Because of the lingering glow, most light pens perceive several dots to be lit
simultaneously. The pen cannot zero in on a particular dot position on the screen.

Electron Guns

To generate the beams that light the phosphors on the screen, a CRT uses one or more electron guns.
An electron gun is an electron emitter (a cathode) in an assembly that draws the electrons into a sharp,
high speed beam. To move the beam across the breadth of the tube face (so that the beam doesn't light
just a tiny dot in the center of the screen), a group of powerful electromagnets arranged around the
tube, collectively called the yoke, bend the electron beam in the course of its flight. The magnetic field
set up by the yoke is carefully controlled and causes the beam to sweep each individual display line
down the face of the tube.
Monochrome CRTs have a single electron gun that continuously sweeps across the screen. Most color
tubes have three guns, although some color televisions and monitors boast "one gun" tubes, which
more correctly might be called "three guns in one." The gun count depends on the definition of a gun.
Like all color CRTs, the one gun tubes have three distinct electron emitting cathodes that can be
individually controlled. The three cathodes are fabricated into a single assembly that allows them to
be controlled as if they were generating only a single beam.
In a three gun tube, the trio of guns is arranged in a triangle. One gun tubes arrange their cathodes in a
straight line, often earning the epithet inline guns. In theory inline guns should be easier to set up, but
as a practical matter, excellent performance can be derived from either arrangement.
The three guns in a color CRT emit their electrons simultaneously, and the three resulting beams are
steered together by the yoke. Individual adjustments are provided for each of the three beams,
however, to ensure that each beam falls exactly on the same triplet of color dots on the screen as the
others. Because these controls help the three beams converge on the same triad, they are called
convergence controls. The process of adjusting them is usually termed alignment.

Convergence

The three electron beams inside any color monitor must converge on exactly the right point on the
screen to illuminate a single triad of phosphor dots. If a monitor is not adjusted properly—or if it is
not designed or made properly—the three beams cannot converge properly to one point. Poor
convergence results in images with rainbow-like shadows and a loss of sharpness and detail, as
illustrated in Figure 17.2. Individual text characters no longer appear sharply defined but become two-
or three-color blurs. Monochrome monitors are inherently free from such convergence problems
because they have but one electron beam.
Figure 17.2 Excellent (left) and poor convergence on a monitor screen.

Convergence problems are a symptom rather than a cause of monitor deficiencies. Convergence
problems arise not only from the design of the display, but also from the construction and setup of
each individual monitor. It can vary widely from one display to the next and may be aggravated by
damage during shipping.
The result of convergence problems is most noticeable at the screen periphery because that's where
the electron beams are the most difficult to control. When bad, convergence problems can be the
primary limit on the sharpness of a given display, having a greater negative effect than wide dot pitch
or low bandwidth (both of which are discussed later in this chapter).
Many monitor makers claim that their convergence is a given fraction of a millimeter at a particular
place on the screen. If a figure is given for more than one screen location, the center of the screen
invariably has a lower figure—tighter, better convergence—than a corner of the screen.
The number given is how far one color may spread from another at that location. Lower numbers are
better. Typical monitors may claim convergence of about 0.5 (one-half) millimeter at one of the
corners of the screen. That figure is often 50 percent higher than the dot pitch of the tube, making
convergence the limit on sharpness for that particular monitor.
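One rough way to read such specifications is that whichever figure is coarser, dot pitch or misconvergence, bounds the finest detail the tube can render at that spot. This is a simplification (focus and bandwidth also matter), and the numbers below are only examples:

```python
def sharpness_limit_mm(dot_pitch_mm, convergence_mm):
    """The larger (coarser) of the two figures bounds the finest
    detail the tube can render at the measured spot on the screen."""
    return max(dot_pitch_mm, convergence_mm)

# A 0.28 mm dot pitch tube with 0.5 mm corner misconvergence:
# convergence, not dot pitch, limits sharpness in the corners.
print(sharpness_limit_mm(0.28, 0.5))  # 0.5
```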
Misconvergence problems often can be corrected by adjustment of the monitor. Many monitors have
internal convergence controls. A few, high resolution (and high cost) monitors even have external
convergence adjustments. But adjusting monitor convergence is a job for the specialist—and that
means getting a monitor converged can be expensive, as is any computer service call.
Many monitor makers now claim that their products are converged for life. Although this strategy
should eliminate the need to adjust them (which should only be done by a skilled technician with the
correct test equipment), it also makes it mandatory to test your display before you buy it. You don't
want a display that's been badly converged for life.

Purity

The ability of a monitor to show you an evenly lit screen that does not vary in color across its width is
termed purity. A monitor with good purity will be able to display a pure white screen without a hint of
color appearing. A monitor with poor purity will be tinged with one color or another in large patches.
Figure 17.3 illustrates the screens with good and poor purity.
Figure 17.3 Comparison of good and bad monitor purity.

Poor purity often results from the shadow mask or aperture grille of a cathode ray tube becoming
magnetized. Degaussing the screen usually cures the problem. Most larger monitors have built-in
automatic degaussers.


You can degauss your monitor with a degaussing loop designed for color televisions or even a bulk
tape eraser. Energize the degaussing coil or tape eraser in close proximity to the screen, then gradually
withdraw it to a distance of three or more feet before switching it off. The gradually
declining alternating magnetic field will overpower the static field on the mask, and the gradual
removal of the alternating field prevents a strong field from re-establishing itself on the mask.

Shadow Masks

Just pointing the electron beams at the right dots is not enough because part of the beam can spill over
and hit the other dots in the triplet. The result of this spillover is a loss of color purity—bright hues
become muddied. To prevent this effect and make images as sharp and colorful as possible, all color
CRTs used in computer displays and televisions alike have a shadow mask—a metal sheet with fine
perforations in it, located inside the display tube and a small distance behind the phosphor coating of
the screen.
The shadow mask and the phosphor dot coating on the CRT screen are critically arranged so that the
electron beam can only hit phosphor dots of one color. The other two colors of dots are in the
"shadow" of the mask and cannot be seen by the electron beam.
The spacing of the holes in the shadow mask to a great degree determines the quality of the displayed
image. For the geometry of the system to work, the phosphor dots on the CRT screen must be spaced
at the same distance as the holes in the mask. Because the hole spacing determines the dot spacing, it
is often termed the dot pitch of the CRT.
The dot pitch of a CRT is simply a measurement of the distance between dots of the same color. It is
an absolute measurement, independent of the size of the tube or the size of the displayed image.
The shadow mask affects the brightness of a monitor's image in two ways. The size of the holes in the
mask limits the size of the electron beam getting through to the phosphors. Off-axis from the
guns—that is, toward the corners of the screen—the round holes appear oval to the gun and less of the
beam can get through. As a result, the corners of a shadow mask screen are often dimmer than the
center, although the brightness difference may not be distinguishable.
The mask also limits how high the electron beam intensity can be in a given CRT. A stronger
beam—which makes a brighter image—holds more energy. When the beam strikes the mask, part of
that energy is absorbed by the mask and becomes heat, which raises the temperature of the mask. In
turn, this temperature rise makes the mask expand unpredictably, distorting it minutely and blurring
the image. To minimize this heat induced blur, monitor makers are moving to making shadow masks
from materials that have a low coefficient of thermal expansion. That is, they change size as little as
possible with temperature. The alloy Invar is favored for shadow masks because of its capability to
maintain a nearly constant size as it warms.

Aperture Grilles

With all the problems associated with shadow masks, you might expect someone to come up with a
better idea. Sony Corporation did exactly that, inventing the Trinitron picture tube.
The Trinitron uses an aperture grille—slots between a vertical array of wires—instead of a mask. The
phosphors are painted on the inner face of the tube as interleaved stripes of the three additive primary
colors. The grille blocks the electron beam from the wrong stripes just as a shadow mask blocks it
from the wrong dots. The distance between two sequential stripes of the same color is governed by the
spacing of the slots—the slot pitch of the tube. Because the electron beam
fans out as it travels away from the electron gun and stripes are farther from the gun than is the mask,
the stripes are spaced a bit farther apart than the slot pitch. Their spacing is termed screen pitch. For
example, a 0.25 millimeter slot pitch Trinitron might have a screen pitch of 0.26 millimeter. Figure
17.4 shows how slot pitch as well as dot pitch are measured.
Figure 17.4 Measuring dot pitch and slot pitch.
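The relationship between slot pitch and screen pitch follows from similar triangles, since stripe spacing scales with distance from the gun. A small sketch makes this concrete; the distances used here are purely illustrative, chosen only so the ratio matches the 0.25/0.26 millimeter example, and are not taken from any real tube:

```python
def screen_pitch_mm(slot_pitch_mm, gun_to_grille_mm, grille_to_screen_mm):
    """Stripe spacing on the screen: the beam fans out from the gun,
    so spacing grows in proportion to total distance from the gun."""
    total_mm = gun_to_grille_mm + grille_to_screen_mm
    return slot_pitch_mm * total_mm / gun_to_grille_mm

# Hypothetical distances giving a 1.04 ratio: a 0.25 mm slot pitch
# yields the 0.26 mm screen pitch cited in the text.
print(screen_pitch_mm(0.25, 300.0, 12.0))  # 0.26
```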

The wires of the aperture grille are quite thick, about two-thirds the width of the slot pitch. For
example, in a Trinitron with a 0.25 slot pitch, the grille wires measure about 0.18 millimeters in
diameter because each electron beam is supposed to illuminate only one-third of the screen. The wires
shadow the other two-thirds from the beam to maintain the purity of the color.
The aperture grille wires are held taut, but they can vibrate. Consequently, Trinitron monitors have
one or two thin tensioning wires running horizontally across the screen. Although quite fine, these
wires cast a shadow on the screen that is most apparent on light-colored screen backgrounds. Some
people find the tensioning wire shadows objectionable, so you should look closely at a Trinitron
before buying.
Trinitrons hold a theoretical brightness advantage over shadow mask tubes. Because the slots allow
more electrons to pass through to the screen than do the tiny holes of a shadow mask, a Trinitron can
(in theory) create a brighter image. This added brightness is not borne out in practice. However,
Trinitrons do excel in keeping their screens uniformly bright. The aperture grille wires of a Trinitron
block the beam only in one dimension, and so don't impinge as much on the electron beam at the
screen edges.
Thanks to basic patents, Sony had exclusive rights to the Trinitron design. However, those patents
began expiring in 1991, and other manufacturers were quick to begin working with the technology.
Other patents, however, cover manufacturing and other aspects of building successful Trinitrons.
Consequently, an expected flood of Trinitron clones never appeared. In fact, the only new alternative
to the Trinitron was introduced by Mitsubishi in 1993. Called Diamondtron by its manufacturer, the
new design is based on aperture grille technology, but uses a refined electron gun. Whereas the
Trinitron combines three guns into a single focusing mechanism, the Diamondtron gives each gun its
own control. According to Mitsubishi, this refinement allows more precise beam control and a more
accurate and higher resolution image.

Required Dot Pitch

No matter whether a monitor uses a shadow mask with a dot pitch or an aperture grille with a slot
pitch, the spacing of image triads on the screen is an important constituent in monitor quality. A
monitor simply cannot put dots any closer together than the holes in mask or grille allow. It's easy to
compute the pitch necessary for a resolution level in a computer system. Just divide the screen size by
the number of dots required to be displayed.


For example, a VGA text display comprises 80 columns of characters each nine dots wide, for a total
of 720 dots across the screen. The typical twelve-inch (diagonal) monitor screen is roughly 9.5 inches
or 240 millimeters across. Hence, to properly display a VGA text image, the dot pitch must be smaller
than .333 (or 240/720) millimeter, assuming the full width of the screen is used for display. Often a
monitor's image is somewhat smaller than full screen width and such displays require even finer dot
pitch. The larger the display, the coarser the dot pitch can be for a given level of resolution.
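The arithmetic in this example is easy to wrap in a short sketch (the function name is ours, for illustration only):

```python
def required_dot_pitch_mm(screen_width_mm, horizontal_dots):
    """Coarsest dot pitch that can still resolve the given number of
    dots across the full width of the screen."""
    return screen_width_mm / horizontal_dots

# VGA text: 80 columns of 9-dot-wide characters = 720 dots across
# a roughly 240 mm wide (12-inch diagonal) screen.
pitch = required_dot_pitch_mm(240.0, 80 * 9)
print(round(pitch, 3))  # 0.333
```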

Line Width

Another factor limits the sharpness of monitor images, the width of the lines drawn on the screen.
Ideally, any vertical or horizontal line on the screen will appear exactly one pixel wide, but in
practical monitors the width of a line may not be so compliant. If lines are narrower than one pixel
wide, thin black lines will separate adjacent white lines and wide white areas will be thinly striped in
black. If the line width exceeds the size of a pixel, the display's ability to render fine detail will be
lost.
The ideal line width for a monitor varies with the size of the screen and the resolution displayed on
the screen. As resolution increases, lines must be narrower. As screen size goes up (with the
resolution constant), line width must increase commensurately. For example, at a horizontal resolution
of 640 pixels, a monitor with an active image area that's ten inches wide (about what you'd expect
from a 14-inch display) will require an ideal line width of 1/64th inch or about 0.4 millimeter.
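The same calculation can be sketched directly (assuming, as the 1/64-inch figure implies, 640 pixels across the active area):

```python
def ideal_line_width_mm(active_width_inches, horizontal_pixels):
    """Width of one pixel in millimeters (25.4 mm per inch)."""
    return active_width_inches * 25.4 / horizontal_pixels

# Ten-inch active width at 640 pixels: 1/64 inch, about 0.4 mm.
print(round(ideal_line_width_mm(10.0, 640), 2))  # 0.4
```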
Several factors influence the width of lines on the screen. The monitor must be able to focus its
electron beam into a line of ideal width. However, width also varies with the brightness of the
beam—brighter beams naturally tend to expand out of focus. Consequently, when you increase the
brightness of your monitor, the sharpness of the image may decline. For this reason, test laboratories
usually make monitor measurements at a standardized brightness level.

Screen Curvature

Most CRTs have a distinctive shape. At one end, a narrow neck contains the electron gun or guns.
Around the neck fits the deflection yoke, an external assembly that generates the magnetic fields that
bend the electron beams to sweep across the inner surface of the wide face of the tube. The tube
emerges from the yoke as a funnel-like flaring, which enlarges to the rectangular face of the screen
itself. This face often (but becoming much less common) is a spherically curving surface.
The spherical curve of the face makes sense for a couple of reasons. It makes the distance traveled by
the electron beam more consistent at various points on the screen, edge to center to edge. A truly flat
screen would require the beam to travel farther at the edges than at the center and would require the
beam to strike the face of the screen obliquely, resulting in image distortion. Although this distortion
can be compensated for electrically, the curving screen helps things along.
In addition, the CRT is partly evacuated, so normal atmospheric pressure is constantly trying to crush
the tube. The spherical surface helps distribute this potentially destructive force more evenly, making
the tube stronger.
Screen curvature has a negative side effect. Straight lines on the screen appear straight only from one
observation point. Move your head closer, farther away, or to one side, and the supposedly straight
lines of your graphics images will bow this way and that.
Technology has made the reasons underlying spherical curved screens less than compelling. The
geometry of inline guns simplifies tube construction and alignment sufficiently that cylindrically
curved screens are feasible. They have fewer curvilinear problems because they warp only one axis of
the image. Trinitrons characteristically have faces with cylindrical curves. Most shadow mask tubes
have spherical faces.
In the last few years, the technical obstacles to making genuinely flat screens have been surmounted.
A number of manufacturers now offer flat screen monochrome displays, which are relatively simple
because compensation for the odd geometry is required by only one electron beam.
The first color flat screen was Zenith's flat tension mask system. The tension mask solves the
construction problems inherent in a flat screen color system by essentially stretching the shadow
mask. Its flat face and black matrix make for very impressive images, but the case of the monitor
itself is bulky and ugly to look at, and an internal fan made the first model as much a pain for the ears
as the screen was a pleasure for the eyes. Since then, such monitors have become less power hungry,
but they remain more costly than more conventional designs.
Today's so-called flat square tubes are neither flat nor square. They are, however, flatter and squarer
than the picture tubes of days gone by, so they suffer less curvilinear distortion.

Resolution Versus Addressability

The resolution of a video system refers to the fineness of detail that it can display. It is a direct
consequence of the number of individual dots that make up the screen image and thus is a function of
both the screen size and the dot pitch.
Because the size and number of dots limit the image quality, the apparent sharpness of screen images
can be described by the number of dots that can be displayed horizontally and vertically across the
screen. For example, the resolution required by the Video Graphics Array system in its standard
graphics mode is 640 dots horizontally by 480 vertically. Modern display systems may produce images
with as many as 1600 by 1200 dots in their highest resolution mode.
Sometimes, however, the resolution available on the screen and that made by a computer's display
adapter are not the same. For example, a video mode designed for the resolution capabilities of a color
television set hardly taps the quality available from a computer monitor. On the other hand, the
computer generated graphics may be designed for a display system that's sharper than the one being
used. You might, for instance, try to use a television in lieu of a more expensive monitor. The
sharpness you actually see would then be less than what the resolution of the video system would
have you believe.
Actual resolution is a physical quality of the video display system—the monitor—that's actually being
used. It sets the ultimate upper limit on the display quality. In color systems, the chief limit on
resolution is purely physical—the convergence of the system and the dot pitch of the tube. In
monochrome systems, which have no quality limiting shadow masks, the resolution is limited by the
bandwidth of the monitor, the highest frequency signal with which it can deal. (Finer details pack
more information into the signals sent from computer system to monitor. The more information in a
given time, the higher the frequency of the signal.)
A few manufacturers persist in using the misleading term addressability to describe the quality of their
monitors. Addressability is essentially a bandwidth measurement for color monitors. It indicates how
many different dots on the screen the monitor can point its electron guns at. It ignores, however, the
physical limit imposed by the shadow mask. In other words, addressability describes the highest
quality signals the monitor can handle, but the full quality of those signals is not necessarily visible to
you onscreen.
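The bandwidth demand mentioned above can be estimated from the pixel rate. A sketch, with the caveat that the 1.4 blanking-overhead factor is an assumption for illustration (real video timings vary):

```python
def video_bandwidth_mhz(h_pixels, v_lines, refresh_hz, blanking_overhead=1.4):
    """Rough estimate of the video bandwidth (in MHz) a display signal
    needs: pixel rate plus an assumed allowance for the horizontal and
    vertical blanking intervals."""
    return h_pixels * v_lines * refresh_hz * blanking_overhead / 1e6

# 640 x 480 at 60 Hz works out to roughly 26 MHz.
print(round(video_bandwidth_mhz(640, 480, 60), 1))
```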

Anti-Glare Treatment

Most mirrors are made from glass, and glass tries to mimic the mirror whenever it can. Because of the
difference between the index of refraction of air and that of glass, glass is naturally reflective. If you
make mirrors, that's great. If you make monitors—or worse yet, use them—the reflectivity of glass
can be a big headache. A reflection of a room light or window from the glass face of the CRT can
easily be brighter than the glow of phosphors inside. As a result, the text or graphics on the display
tends to "wash out" or be obscured by the brightness.
The greater the curvature of a monitor screen, the more apt it is to have a problem with reflections
because more of the environment gets reflected by the screen. A spherical monitor face acts like one
of those huge convex mirrors strategically hung to give a panoramic view of shoplifters or cars
sneaking around an obscured hairpin turn. The flatter the face of the monitor, the less of a worry
reflections are. With an absolutely flat face, a slight turn of the monitor's face can eliminate all glare
and reflections.
You can't change the curve of your monitor's face. However, help is available. Anti-glare treatments
can reduce or eliminate reflections from the face of most CRTs. Several glare reduction technologies
are available, and each varies somewhat in its effectiveness.

Mesh

The lowest tech and least expensive anti-glare treatment is simply a fabric mesh, usually nylon. The
mesh can either be placed directly atop the face of the screen or in a removable frame that fits about
half an inch in front of the screen. Each hole in the mesh acts like a short tube, allowing you to see
straight in at the tube, but cutting off light from the sides of the tube. Your straight-on vision gets
through unimpeded, while glare that angles in doesn't make it to the screen.
As simple as this technique is, it works amazingly well. The least expensive after-market anti-glare
system uses mesh suspiciously similar to pantyhose stretched across a frame. Unfortunately, this mesh
has an unwanted side effect. Besides blocking the glare, it also blocks some of the light from the
screen and makes the image appear darker. You may have to turn the brightness control up to

http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/wrh17.htm (17 de 53) [23/06/2000 06:13:55 p.m.]


Winn L. Rosch Hardware Bible, Electronic Edition, Chapter 17

compensate, which may make the image bloom and lose sharpness.

Mechanical

Glare can be reduced by mechanical means—not a machine that automatically intercepts glare before
it reaches the screen, but mechanical preparation of the screen surface. By lightly grinding the glass
on the front of the CRT, the face of the screen can be made to scatter rather than reflect light. Each
rough spot on the screen that results from the mechanical grinding process reflects light randomly,
sending it every which direction. A smooth screen reflects a patch of light all together, like a mirror,
reflecting any bright light source into your eyes. Because the light scattered by the ground glass is
dispersed, less of it reaches your eyes and the glare is not as bright. However, because the coarse
screen surface disperses the light coming from inside the tube as well as that reflected from the tube
face, it also lessens the sharpness of the image. The mechanical treatment makes text appear slightly
fuzzier and out of focus, which to some manufacturers is a worse problem than glare.

Coating

Glare can be reduced by applying coatings to the face of the CRT. Two different kinds of coatings can
be used. One forms a rough film on the face of the CRT. This rough surface acts in the same way as a
ground glass screen would, scattering light.
The screen also can be coated with a special compound like magnesium fluoride. By precisely
controlling the thickness of this coating, the reflectivity of the surface of the screen can be reduced.
The fluoride coating is made a quarter of a wavelength thick (usually judged at light from the middle
of the spectrum). Light passing through the fluoride and reflecting from the glass beneath thus emerges
from the coating half a wavelength out of phase with the light reflected directly from the coating's
surface, so the two reflections cancel, visually eliminating the glare.
Camera lenses are coated to achieve exactly the same purpose, the elimination of reflections. A proper
coating can minimize glare without affecting image sharpness or brightness.
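As a check on the quarter-wave idea, the required coating thickness is simply the design wavelength divided by four times the coating's index of refraction. The sketch below is illustrative only; the 1.38 index for magnesium fluoride and the 550 nm design wavelength are assumed typical values, not figures from this text.

```python
# Quarter-wave anti-reflective coating thickness (illustrative sketch).
# Assumed values: magnesium fluoride index of refraction n = 1.38,
# design wavelength 550 nm (middle of the visible spectrum).
def quarter_wave_thickness(wavelength_nm: float, n: float) -> float:
    """Coating thickness that puts the two reflections half a wave apart."""
    # Inside the coating the wavelength shrinks by the index n, so a
    # quarter-wave layer is wavelength / (4 * n) thick.
    return wavelength_nm / (4 * n)

thickness = quarter_wave_thickness(550.0, 1.38)
print(f"{thickness:.1f} nm")  # roughly 99.6 nm
```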

Polarization

Light can be polarized; that is, its photons can be restricted to a single plane of oscillation. A
polarizing filter allows light of only one polarization to pass. Two polarizing filters in a row can be
arranged to allow light of only one plane of polarization to pass (by making the planes of polarization
of the filters parallel), or the two filters can stop light entirely when their planes of polarization are
perpendicular.
The first filter lets only one kind of light pass; the second filter lets only another kind of light pass.
Because none of the second kind of light reaches the second filter, no light gets by.
When light is reflected from a surface, its polarization is shifted by 90 degrees. This physical principle
makes polarizing filters excellent reducers of glare.


A sheet of polarizing material is merely placed a short space in front of a display screen. Light from a
potential source of glare goes through the screen and is polarized. When it strikes the display and is
reflected, its polarization is shifted 90 degrees. When it again reaches the filter, its polarization is
crossed with that of the filter, and it cannot get through. Light from the display, however, only needs to go through the filter
once. Although this glow is polarized, there is no second screen to impede its flow to your eyes.
Every anti-glare treatment has its disadvantage. Mesh makes an otherwise sharp screen look fuzzy
because smooth characters are broken up by the cell structure of the mesh. Mechanical treatments are
expensive and tend to make the screen appear to be slightly "fuzzy" or out of focus. The same is true
of coatings that rely on the dispersion principle. Optical coatings, Polaroid filters, and even mesh
suffer from their own reflections. The anti-glare material itself may add its own bit of glare. In
addition, all anti-glare treatments—polarizing filters in particular—tend to make displays dimmer.
The polarizing filter actually reduces the brightness of a display to one-quarter its untreated value.
Even with their shortcomings, however, anti-glare treatments are amazingly effective. They can ease
eyestrain and eliminate the headaches that come with extended computer use.

Image Characteristics

Physical aspects of a monitor and its electronics control the size, shape, and other aspects of the
images it displays. These qualities are defined and characterized in a number of ways. The most
rudimentary is screen size—the bigger your monitor screen, the larger the images it can make.
Because of the underscanning common among computer monitors, however, the actual image size is
almost always smaller than the screen. The aspect ratio of the image describes its shape independent
of its size. Most monitors give you a variety of controls to alter the size and shape of the image, so
you are the final arbiter of what things look like on your monitor screen.

Screen Size

The most significant measurement of a CRT-based monitor is the size of its screen. Although
seemingly straightforward, screen size has been at best an ambiguous measurement and at worst
downright misleading.
The confusion all started with television, where confusion often begins. The very first television sets
had round CRTs, and their size was easy to measure—simply the diameter of the tube. When
rectangular tubes became prevalent in the 1950s, the measurement shifted to the diagonal of the face
of the tube. The diagonal was, of course, the closest equivalent to the diameter of an equivalent round
tube. It was also the largest dimension that a television manufacturer could reasonably quote.
Unlike television images, which usually cover the entire face of the CRT, computer monitors limit
their images to somewhat less. Because the image is most difficult to control at the edges of the
screen, monitor makers maintain higher quality by restricting the size of the image. They mask off the
far edges of the CRT with the bezel of the monitor case.
That bezel means that no image can fill the entire screen—at least no image that you can entirely see.


The tube size becomes irrelevant to a realistic appraisal of the image. Some monitor makers persisted
in using it to describe their products. Fortunately, most of the industry recognized this measuring
system as optimistic exaggeration, and began using more realistic diagonal measurement of the actual
maximum displayable image area.
VESA adopted the diagonal of the maximum image area as the measurement standard in its Video
Image Area Definition standard, Version 1.1, which it published on October 26, 1995. This standard
requires that screen image area be given as horizontal and vertical measurements of the actual active
image area when the monitor is set up by the manufacturer using the manufacturer's test signals. The
dimensions must be given in millimeters with an assumed maximum variance of error of plus and
minus two percent. Wider tolerances are allowed but must be explicitly stated by the manufacturer. In
no case can the expressed image dimensions exceed the area visible through the monitor bezel.
Because the aspect ratio of PC monitor displays is 4:3 (see the "Aspect Ratio" section that follows),
computation of the horizontal and vertical screen dimensions from the diagonal is easy. The diagonal
represents the hypotenuse of a 3-4-5 right triangle, and that ratio applies to all screen sizes. Table 17.2
lists the dimensions for the most common nominal screen sizes.

Table 17.2. Nominal CRT Screen Dimensions

Diagonal     Millimeters               Inches
             Horizontal   Vertical     Horizontal   Vertical
14 inches    284          213          11.2         8.4
15 inches    305          229          12           9
16 inches    325          244          12.8         9.6
17 inches    345          259          13.6         10.2
20 inches    406          305          16           12
21 inches    427          320          16.8         12.6
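The dimensions in Table 17.2 follow directly from the 3-4-5 triangle: width is four-fifths of the diagonal and height is three-fifths. A minimal sketch (the function name is invented for the example):

```python
# Screen dimensions from the diagonal of a 4:3 CRT (sketch).
# The diagonal is the hypotenuse of a 3-4-5 right triangle, so
# width = diagonal * 4/5 and height = diagonal * 3/5.
MM_PER_INCH = 25.4

def screen_dimensions(diagonal_inches: float):
    """Return (width mm, height mm, width in, height in) for a 4:3 screen."""
    width_in = diagonal_inches * 4 / 5
    height_in = diagonal_inches * 3 / 5
    return (width_in * MM_PER_INCH, height_in * MM_PER_INCH,
            width_in, height_in)

for d in (14, 15, 17, 21):
    w_mm, h_mm, w_in, h_in = screen_dimensions(d)
    print(f"{d}-inch: {w_mm:.0f} x {h_mm:.0f} mm ({w_in:.1f} x {h_in:.1f} in)")
```

Running this reproduces the nominal values in the table, such as 305 x 229 millimeters for a 15-inch diagonal.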

Portrait displays, which are designed to give you a view more like the printed sheets that roll out of
your laser printer and into envelopes, merely take an ordinary CRT and turn it on its side. The market
is not large enough to justify development of custom CRTs for portrait applications. Moreover, the
4:3 aspect ratio works fine because the "active" image on a sheet of letterhead—the space actually
occupied by printing once you slice off the top, bottom, left, and right margins—is about eight by ten
inches, a nearly perfect fit on a standard picture tube. When measuring the images on these portrait
displays, horizontal becomes vertical, and all measurements rotate 90 degrees.

Overscan and Underscan

Two monitors with the same size screens may have entirely different onscreen image sizes.
Composite monitors are often afflicted by overscan; they attempt to generate images larger than their
screen size, and the edges and corners of the active display area may be cut off. (The overscan is often


designed so that as the components inside the monitor age and become weaker, the picture shrinks
down to normal size—likely over a period of years.) Underscan is the opposite condition—the image
is smaller than nominal screen size. For a given screen size, an overscanned image will appear larger
at the expense of clipping off the corners and edges of the image as well as increasing distortion at the
periphery of the image. Underscanning wastes some of the active area of the monitor screen. Figure
17.5 illustrates the effects of underscanning and overscanning on the same size screen.
Figure 17.5 Underscan and overscan compared.

Underscan is perfectly normal on computer displays and does not necessarily indicate any underlying
problems unless it is severe—for example when it leaves a two-inch black band encircling the image.
Underscanning helps keep quality high because image geometry is easier to control nearer the center
of the screen than it is at the edges. Pulling in the reins on the image can ensure that straight lines
actually are displayed straight. Moreover, if you extend the active image to the very edge of the bezel
or if you change your viewing position so that you are not facing the screen straight on, the edge of
the image may get hidden behind the bezel. The glass in the face of the screen is thicker than you
might think, on the order of an inch (25 millimeters), enough that the third dimension will interfere
with your most careful alignment.
On the other hand, while overscan gives you a larger image and is the common display mode for
video systems, it is not a good idea for PC monitor images. Vital parts of the image may be lost
behind the bezel. You may lose to overscan the first character or two from each line of type at one
edge of a drafting display. With video, however, people prefer to see as big an image as possible and
usually pay little attention to what goes on at the periphery. Broadcasters, in fact, restrict the
important part of the images that they deal with to a safe area that will be completely viewable even
on televisions with substantial overscan.

Aspect Ratio

The relationship between the width and height of a monitor screen is termed its aspect ratio. Today the
shape of the screen of nearly every monitor is standardized, as is that of the underlying CRT that
makes the image. The screen is 1.33 times wider than it is high, resulting in the same 4:3 aspect ratio
used in television and motion pictures before the wide screen phenomenon took over. Modern
engineers now prefer to put the vertical number first to produce aspect ratios that are less than one.
Expressed in this way, video has a 3:4 aspect ratio, a value of 0.75.
The choice of aspect ratio is arbitrary and a matter of aesthetics. According to classical Greek
aesthetics, the Golden Ratio with a value of about 0.618 is the most beautiful. This beauty is
mathematical as well as aesthetic, the solution to the neat little equation x+1 = 1/x. The exact value of
the Golden Ratio is irrational, (SQRT(5)-1)/2. Expressed as a ratio of horizontal to vertical, the
Golden Ratio is roughly 1.618, the solution to x-1 = 1/x.
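The defining equations quoted above are easy to verify numerically; this short sketch simply checks both forms of the Golden Ratio:

```python
# The Golden Ratio as the solution of x + 1 = 1/x (sketch).
import math

phi_small = (math.sqrt(5) - 1) / 2   # about 0.618, vertical:horizontal
phi_large = (math.sqrt(5) + 1) / 2   # about 1.618, horizontal:vertical

# Verify the defining equations quoted in the text.
assert abs((phi_small + 1) - 1 / phi_small) < 1e-12
assert abs((phi_large - 1) - 1 / phi_large) < 1e-12

print(f"{phi_small:.3f}, {phi_large:.3f}")  # 0.618, 1.618
```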
Various display systems feature their own aspect ratios. The modern tendency is toward wider aspect
ratios. For example, High Definition Television (HDTV) stretches its aspect ratio from the 3:4 of
normal video and television to 9:16. The normal negatives you make with your 35mm camera have a
4:6 aspect ratio. The reason video is so nearly square carries over from the early days of television
when cathode ray tubes had circular faces. The squarer the image, the more of the circular screen was
put to use. Figure 17.6 compares the aspect ratios of three common display systems.


Figure 17.6 Aspect ratios of display systems.

The image on your monitor screen need not have the same aspect ratio of the tube, however. The
electronics of monitors separate the circuitry that generates the horizontal and vertical scanning
signals and results in their independent control. As a result, the relationship between the two can be
adjusted, and that adjustment results in an alteration of the aspect ratio of the actual displayed image.
For example, by increasing the amplification of the horizontal signal, the width of the image is
stretched, raising the aspect ratio.
Normally, you should expect that the relative gains of the horizontal and vertical signals will be
adjusted so that your display shows the correct aspect ratio on its screen. A problem develops when a
display tries to accommodate signals based on different standards. This mismatch is particularly
troublesome with VGA displays because the VGA standard allows images made with three distinct
line counts—350, 400, and 480.
All else being equal, an image made from 350 lines is less than three-quarters the height of a 480-line
image. A graphic generated in an EGA-compatible mode shown on a VGA display would therefore
look quite squashed. A circle drawn on the screen would look like an ellipse; an orange would more
resemble a watermelon.
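The degree of squashing is simple to quantify: with unchanged line spacing, a 350-line image stands only 350/480 of the height of a 480-line image. A quick sketch:

```python
# How much a 350-line EGA image squashes on a 480-line VGA raster
# if the monitor keeps the same line spacing (illustrative sketch).
ega_lines = 350
vga_lines = 480

squash = ega_lines / vga_lines
print(f"relative height: {squash:.3f}")  # about 0.729

# A circle drawn with equal pixel counts would appear with this
# height-to-width ratio instead of 1.0 -- an ellipse, as the text notes.
```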

Image Sizing

Monitors that match the VGA standard compensate for such obtuse images with the sync-polarity
detection scheme. The relative polarities of the horizontal and vertical sync signals instruct the
monitor in which mode and line count the image is being set. The monitor then compensates by
adjusting its vertical gain to obtain the correct aspect ratio no matter the number of lines in the image.
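The polarity coding can be pictured as a small lookup table. The specific mapping below follows the commonly documented IBM VGA convention and should be treated as an assumption rather than a quotation from this chapter:

```python
# Sketch of VGA sync-polarity line-count coding. The mapping below is
# the commonly documented IBM VGA convention; treat it as an assumed
# convention, not a figure drawn from this book.
SYNC_POLARITY_TO_LINES = {
    ("+", "-"): 350,   # e.g. EGA-style graphics on a VGA monitor
    ("-", "+"): 400,   # e.g. 720 x 400 text mode
    ("-", "-"): 480,   # e.g. 640 x 480 graphics
}

def line_count(h_polarity: str, v_polarity: str) -> int:
    """Return the line count a VGA monitor infers from sync polarities."""
    return SYNC_POLARITY_TO_LINES[(h_polarity, v_polarity)]

print(line_count("-", "+"))  # 400
```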
Not all monitors take advantage of this sync signaling system. Shifting display modes with such a
monitor can lead to graphics displays that look crushed. Others use a technique called autosizing that
allows the monitor to maintain a consistent image size no matter what video signal your display
adapter is sending it without regard to VGA sync coding. Monitor makers can achieve autosizing in
several ways. True autosizing works, regardless of the signal going to the monitor, and scales the
image to match the number of display lines. Mode sensitive autosizing works by determining the
display mode used for an image from the frequency of the signal. It then switches the size to a pre-set
standard to match the number of lines in the signal. Monitors often combine VGA sync-sensing with
mode sensitive autosizing.

Image Distortion

Between the electron guns and the phosphors in a cathode ray tube, the electron beam passes through
an electronic lens and deflection system that focuses and steers the beam to assure that its path across
the screen is the proper size and in the proper place. The electronics of the monitor control the lens
and deflection system, adjusting it throughout the sweep of the electron beam across the screen. In
addition to their other chores, the electronics must compensate for the difference in the path of the


electron beam at different screen positions. Modern monitors do a magnificent job of controlling
beam placement.
When the control system is not properly adjusted, however, the image may exhibit any of a number of
defects. Because these defects distort the image from its desired form, they are collectively called
image distortion.
The two most common forms of image distortion are barrel distortion and pincushion distortion.
Barrel distortion causes vertical or horizontal lines in the image to bow outward so that the center of
the lines lies closer to the nearest parallel edge of the screen. Pincushion distortion causes the vertical
or horizontal lines in the image to bow inward so that the center of the lines is closer to the center of
the screen. Figure 17.7 shows these two kinds of image distortion.
Figure 17.7 Barrel and pincushion distortion.

Barrel and pincushion distortion arise from the same cause, improper image compensation, and are
essentially opposites of one another. Overcompensate for pincushion distortion and you get barrel
distortion. Collectively the two are sometimes simply called pincushioning no matter which way the
lines bow.
Pincushioning is always worse closer to the edges of the image. All monitors have adjustments to
compensate for pincushioning, although these adjustments are not always available to you. They may
be hidden inside the monitor. Other monitors may include pincushioning adjustments in their control
menus. Technicians usually use test patterns that display a regular grid on the screen to adjust monitors
to minimize pincushioning. You can usually use a full screen image to adjust the pincushioning
controls so that the edges of the desktop background color are parallel with the bezel of your monitor.
Less common is trapezoidal distortion that leaves lines at the outer edge of the screen straight but not
parallel to the bezel. In other words, instead of your desktop being a rectangle, it is a trapezoid with
one side shorter than its opposite side. As with pincushioning, all monitors have controls for
trapezoidal distortion but not all make them available to you as the user of the monitor. If your
monitor does have an external control for trapezoidal distortion, you adjust it as you do for
pincushioning.

Image Controls

A few (far from a majority) monitors make coping with underscan, overscan, and odd aspect ratios
simply a matter of twisting controls. These displays feature horizontal and vertical size (or gain)
controls that enable you to adjust the size and shape of the image to suit your own tastes. With these
controls—providing they have adequate range—you can make the active image touch the top, bottom,
and sides of the screen bezel or you can shrink the bright area of your display to a tiny (but
geometrically perfect) patch in the center of your screen.
Size and position controls give you command of how much screen the image on your monitor fills.
With full range controls, you can expand the image to fill the screen from corner to corner or reduce it
to a smaller size that minimizes the inevitable geometric distortion that occurs near the edges of the
tube. A full complement of controls includes one each of the following: horizontal position
(sometimes termed phase), vertical position, horizontal size (sometimes called width), and vertical
size (or height).


A wide control range is better than a narrow one. Some monitors skimp on one or more controls and
limit you in how large you can make the onscreen image. Worse, sometimes a monitor maker doesn't
include a control at all. For example, some monitors have no horizontal size controls. As a result you
cannot adjust both the size and aspect ratio of the image.
The optimum position for these controls is on the front panel where you can both adjust them and
view the image at the same time. Controls on the rear panel require you to have gorilla-like arms to
reach around the monitor to make adjustments while checking their effect.
Image controls come in two types, analog and digital.
Analog controls are the familiar old knobs like you find on vintage television sets. Twist one way and
the image gets bigger; twist the other and it shrinks. Analog controls have one virtue—just by looking
at the knob you know where they are set, whether at one or the other extreme of their travel. The
control itself is a simple memory system; it stays put until you move it again. Analog controls,
however, become dirty and wear out with age, and they usually enable you to set but one value per
knob—one value that must cover all the monitor's operating modes.
Digital controls give you pushbutton control over image parameters. Press one button, and the image
gets larger or moves to the left. Another compensates in the opposite direction. Usually digital
controls are linked with a microprocessor, memory, and mode sensing circuitry so that you can pre-set
different image heights and widths for every video standard your monitor can display.
Digital controls don't get noisy with age and are more reliable and repeatable, but you never know
when you are approaching the limit of their travel. Most have two speed operation—hold them in
momentarily and they make minute changes; keep holding down the button and they shift gears to
make gross changes. Of course, if you don't anticipate the shift, you'll overshoot the setting you want
and spend a few extra moments zeroing in on the exact setting.
Size and position controls are irrelevant to LCD and similar alternate display technologies. LCD
panels are connected more directly to display memory so that memory locations correspond nearly
exactly to every screen position. There's no need to move the image around or change its shape
because it's forever fixed where it belongs.
Most CRT-based displays also carry over several controls from their television progenitors. Nearly
every computer monitor has a brightness control, which adjusts the level of the scanning electron
beam; this in turn makes the onscreen image glow brighter or dimmer. The contrast control adjusts the
linearity of the relationship between the incoming signal and the onscreen image brightness. In other
words, it controls the brightness relationship that results from different signal levels—how much
brighter high intensity is. In a few displays, both the brightness and contrast function are combined
into a single "picture" control. Although a godsend to those who might get confused by having to
twiddle two knobs, the combined control also limits your flexibility in adjusting the image to best suit
your liking.
Other controls ubiquitous to televisions usually are absent from better computer monitors because
they are irrelevant. Vertical hold, color (saturation), and hue controls only have relevance to
composite video signals, so they are likely only to be found on composite interfaced displays. The
vertical hold control tunes the monitor to best decipher the vertical synchronizing signal from the
ambiguities of composite video signals. The separate sync signals used by other display standards
automatically remove any ambiguity. Color and hue only adjust the relationship of the color
subcarrier to the rest of the composite video signal and have no relevance at all to non-composite
systems.


Electronics

The image you see onscreen is only part of the story of a complete display system. The video signals
from your PC must be amplified and processed by the electronics inside the monitor to achieve the
right strength and timing relationships to put the proper image in view.
The basic electronic components inside a monitor are its video amplifiers. As the name implies, these
circuits simply increase the strength (amplify) of the approximately one-volt signals they receive from
your PC to the thousands of volts needed to drive the electron beam from cathode to phosphor.
Monochrome monitors have a single video amplifier; color monitors, three (one for each primary
color).
In an analog color monitor, these three amplifiers must be exactly matched and absolutely linear. That
is, the input and output of each amplifier must be precisely proportional, and it must be the same as
the other two amplifiers. The relationship between these amplifiers is called color tracking. If it varies,
the color of the image on the screen won't be what your software had in mind.
The effects of such poor color tracking are all bad. You lose precision in your color control. This is
especially important for desktop publishing and presentation applications. With poor color tracking,
the screen can no longer hope to be an exact preview of what eventually appears on paper or film.
You may even lose a good fraction of the colors displayable by your video system.
What happens is that differences between the amplifiers cause one of the three primary colors to be
emphasized at times and de-emphasized at others, casting a subtle shade on the onscreen image. This
shading effect is most pronounced in gray displays—the dominant color(s) tinge the gray.
Although you don't have to worry about color tracking in a monochrome display, the quality of the
amplifier nevertheless determines the range of grays that can be displayed. Aberrations in the
amplifier cause the monitor to lose some of its grayscale range.
The relationship between the input and output signals of video amplifiers is usually not linear. That is,
a small change in the input signal may make a greater than corresponding change in the output. In
other words, the monitor may exaggerate the color or grayscale range of the input signal—contrast
increases. The relationship between input and output is referred to as the gamma of the amplifier. A
gamma of one would result in an exact correspondence of the input and output signals. However,
monitors with unity gammas tend to have washed out, pastel images. Most people prefer higher
gammas, in the range 1.5 to 1.8, because of their contrastier images.
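The gamma relationship can be sketched as a simple power law, with input and output levels normalized to the range 0 to 1; the function here is illustrative, not a description of any particular monitor's circuitry:

```python
# Sketch of the gamma relationship between input signal and displayed
# brightness: output = input ** gamma, both normalized to 0..1.
def apply_gamma(signal: float, gamma: float) -> float:
    """Map a normalized input level (0..1) to a displayed level (0..1)."""
    return signal ** gamma

# A mid-level signal with unity gamma passes through unchanged;
# a gamma of 1.8 darkens the midtones, which raises apparent contrast.
print(round(apply_gamma(0.5, 1.0), 3))  # 0.5
print(round(apply_gamma(0.5, 1.8), 3))  # 0.287
```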
If you want a monitor that works both with today's VGA, SuperVGA, and sharper display systems as
well as old graphics adapters like CGA and EGA, you need a monitor with TTL capability. Most
modern computer monitors have only analog inputs, which cannot properly display the digital (TTL)
signals used by these older standards. You also have to check the color support of such monitors. To
be compatible with the CGA standard, a monitor must be capable of handling a 16-color digital input,
sometimes called RGBI for the four (red, green, blue, and intensity) signals from which it is made.
EGA compatibility requires 64-color capability.

Synchronizing Frequency Range


Today's variety of signal standards makes it almost mandatory that your monitor be able to
synchronize to a wide range of synchronizing frequencies. You have two frequencies to worry about.
Vertical frequency, sometimes called the refresh rate or frame rate, determines how often the
complete screen is updated. The horizontal synchronizing frequency (or horizontal scan rate) indicates
the rate at which the individual scan lines that make up the image are drawn.
These frequency ranges are important to you because they determine with which video standards the
monitor can work. Most monitors start at the 31.5 KHz used by the VGA system. If you need a
monitor compatible with older display standards, you'll need even lower ranges. The CGA system
requires a horizontal frequency of 15.75 KHz; MDA, 18.3 KHz; and EGA, 21.5 KHz. Usually the high
end of the horizontal frequency range is the more troublesome. In general the higher the resolution
you want to display, the higher the horizontal frequency.
The lowest frame rate normally required is the 59 Hz used by some early VESA modes, although the
old MDA standard requires a 50 Hz frame rate. The highest frame rates are the 85 Hz signals used by
some new VESA standards. Table 17.3 lists the scanning frequencies of most common computer
display systems.

Table 17.3. Scanning Frequencies Specified by Monitor Standards

Standard               Resolution    Vert. Sync (Frame rate)  Horz. Sync (Line rate)
MDA                    720 x 350     50 Hz.                   18.3 KHz.
CGA                    640 x 200     60 Hz.                   15.75 KHz.
EGA                    640 x 350     60 Hz.                   21.5 KHz.
MCGA (Graphics)        640 x 480     60 Hz.                   31.5 KHz.
MCGA (Text)            720 x 400     70 Hz.                   31.5 KHz.
VGA (Graphics)         640 x 480     60 Hz.                   31.5 KHz.
VGA (Text)             720 x 400     70 Hz.                   31.5 KHz.
Macintosh              640 x 480     67 Hz.                   35.0 KHz.
XGA-2                  640 x 480     75.0 Hz.                 39.38 KHz.
VESA                   640 x 480     75 Hz.                   37.5 KHz.
Apple Portrait         640 x 870     76.5 Hz.                 70.19 KHz.
VESA guideline         800 x 600     56 Hz.                   35.5 KHz.
VESA guideline         800 x 600     60 Hz.                   37.9 KHz.
VESA standard          800 x 600     72 Hz.                   48.1 KHz.
VESA standard          800 x 600     75 Hz.                   46.875 KHz.
RasterOps & Supermac   1024 x 768    75.1 Hz.                 60.24 KHz.
VESA guideline         1024 x 768    60 Hz.                   48.3 KHz.
VESA standard          1024 x 768    70.1 Hz.                 56.5 KHz.
VESA standard          1024 x 768    75 Hz.                   60 KHz.
8514/A                 1024 x 768    44 Hz.*                  35.5 KHz.
XGA                    1024 x 768    44 Hz.*                  35.5 KHz.
XGA-2                  1024 x 768    75.8 Hz.                 61.1 KHz.
Apple 2-page           1152 x 870    75 Hz.                   68.68 KHz.
VESA standard          1280 x 1024   75 Hz.                   80 KHz.

Note that the 8514/A and XGA systems have very low frame rates, 44 Hz, because they are interlaced
systems (see the following "Interlacing" section). To properly display these signals a monitor must
have sufficient range to synchronize with the field rate of these standards, which is twice the frame
rate.
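The two sync frequencies in Table 17.3 are tied together: the horizontal rate equals the frame rate multiplied by the total line count, including the lines spent in vertical blanking. The 525-line total used below for 640 x 480 VGA is an assumed figure (480 visible lines plus blanking) rather than one quoted in the table:

```python
# Checking a row of Table 17.3: horizontal scan rate equals the frame
# rate times the total line count, counting vertical-blanking lines.
# The 525-line total for 640 x 480 VGA is an assumed value here.
def horizontal_rate_khz(frame_rate_hz: float, total_lines: int) -> float:
    """Horizontal (line) rate in kilohertz from frame rate and line count."""
    return frame_rate_hz * total_lines / 1000.0

print(horizontal_rate_khz(60, 525))  # 31.5, matching the VGA graphics row
```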

Interlacing

Interlaced systems like the 8514/A and the first implementations of XGA used a trick developed for
television to help put more information onscreen using a limited bandwidth signal. Instead of
scanning the image from top to bottom, one line after another, each frame of the image is broken into
two fields. One field consists of the odd numbered lines of the image; the other the even numbered
lines. The electron beam sweeps across and down, illuminating every other line and then starts from
the top again and finishes with the ones it missed on the first pass.
This technique achieves an apparent doubling of the frame rate. Instead of sweeping down the screen
30 times a second (the case of a normal television picture), the top to bottom sweep occurs 60 times a
second. Whereas a 30 frame per second rate would noticeably flicker, the ersatz 60 frame per second
rate does not—at least not to most people under most circumstances. Some folks' eyes are not fooled,
however, so interlaced images have earned a reputation for being flickery. Figure 17.8 shows how an
interlaced monitor is scanned.
Figure 17.8 Progressive versus interlaced scanning.

Interlacing is used on computer display signals to keep the necessary bandwidth down. A lower frame
rate lowers the required bandwidth of the transmission channel. Of all the prevailing standards, only
the original high resolution operating mode of the 8514/A display adapter and the first generation of
XGA use interlacing. Their frame rate, 44 Hz, would cause distinct flicker. Interlacing drives the field
rate up to 88 Hz.
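The frame-to-field arithmetic is simple enough to sketch; the helper below is illustrative only, using the frame rates quoted above:

```python
def interlaced_field_rate(frame_rate_hz: float) -> float:
    """An interlaced frame is split into two fields (odd lines, then even),
    so the top-to-bottom sweep rate is twice the frame rate."""
    return 2 * frame_rate_hz

# 8514/A and first-generation XGA: 44 Hz frames yield 88 Hz fields
print(interlaced_field_rate(44))  # → 88
```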

Bandwidth

Perhaps the most common specification usually listed for any sort of monitor is bandwidth, which is
usually rated in megahertz. Common monitor bandwidths stretch across a wide range—figures from
12 to 100 megahertz are sometimes encountered.


In theory, the higher the bandwidth, the higher the resolution and sharper the image displayed. In the
case of color displays, the dot pitch of the display tube is the biggest limit on performance. In a
monochrome system, however, bandwidth is a determinant of overall sharpness. The PC display
standards do not demand extremely wide bandwidths. Extremely large bandwidths are often
superfluous.
The bandwidth necessary in a monitor is easy to compute. A system ordinarily requires a bandwidth
wide enough to address each individual screen dot plus an extra margin to allow for retrace times.
(Retrace times are those periods in which the electron beam moves but does not display, for instance
at the end of each frame when the beam must move from the bottom of the screen at the end of the last
line of one frame back up to the top of the screen for the first line of the next frame.)
A typical color display operating under the VGA standard shows 288,000 pixels (a 720 by 400 pixel
image in text mode) 70 times per second, a total of 20.16 million pixels per second. An 800 by 600
pixel Super VGA display at 75 Hz must produce 36 million pixels per second.
Synchronizing signals require their own slice of monitor bandwidth. Allowing a wide margin of about
25 percent for retrace times, it can be seen that for most PC applications, a bandwidth of 16 megahertz
is acceptable for TTL monitors, and 10 megahertz of bandwidth is sufficient for sharp composite
video displays, figures well within the claims of most commercial products. For VGA, 25 megahertz
is the necessary minimum.
Adding a margin of about 25 percent to the dot clock yields an acceptable estimate of the bandwidth required in a
monitor. For the standards IBM promulgated, the company listed actual bandwidth requirements.
Table 17.4 summarizes these requirements and calculates estimates for various PC display standards.

Table 17.4. Dot Clocks and Recommended Bandwidths for Video Standards

Video Standard Dot Clock Recommended Bandwidth


MDA 12.6 MHz 16.3 MHz
CGA 7.68 MHz 14.3 MHz
EGA 13.4 MHz 16.3 MHz
PGC 18.4 MHz 25 MHz
VGA (350- or 480-line mode) 18.4 MHz 25 MHz
VGA (400-line mode) 20.2 MHz 28 MHz
8514/A 34.6 MHz 44.9 MHz
VESA 800x600, 75 Hz 36 MHz 45 MHz
VESA 1024x768, 75 Hz 60 MHz 75 MHz
VESA 1280x1024, 75 Hz 100 MHz 125 MHz
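As a rough sketch of the estimation method described above (visible pixels times refresh rate, plus a 25 percent retrace margin; the function name is my own, not from any standard):

```python
def recommended_bandwidth_mhz(h_pixels, v_pixels, refresh_hz, margin=0.25):
    """Estimate required monitor bandwidth: the dot clock (visible pixels
    per second) increased by a ~25 percent allowance for retrace times."""
    dot_clock_hz = h_pixels * v_pixels * refresh_hz
    return dot_clock_hz * (1 + margin) / 1e6

# VESA 800x600 at 75 Hz: 36 MHz dot clock, 45 MHz recommended
print(recommended_bandwidth_mhz(800, 600, 75))  # → 45.0
```

The same calculation approximates the higher VESA entries in Table 17.4, which round the results upward slightly.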

Although the above estimates are calculated using the dot clock of the display, the relationship
between dot clock and bandwidth is not as straightforward as the calculations imply. Because in real
world applications the worst case display puts an illuminated pixel next to a dark one, the actual
bandwidth required by a display system should be the dot clock plus system overhead. A pair of on/off pixels exactly corresponds to the up and down halves of a single cycle. The higher bandwidth
you calculate from the dot clock allows extra bandwidth that gives greater image sharpness—sharply
defining the edges of a pixel requires square waves, which contain high frequency components.
Consequently the multiplication of the dot clock by display overhead offers a good practical
approximation of the required bandwidth, even though the calculations are on shaky theoretical
ground.

Energy Star

Compared to some of the things that you might connect to your PC, a monitor consumes a modest
amount of electricity. A laser printer can draw as much as a kilowatt when its fuser is operating. A
typical PC requires 100 to 200 watts. A typical monitor requires only about 30 watts. Unlike the laser
fuser, however, your monitor may stay on all day long, and an office may have hundreds of them,
each continually downing its energy dosage. Those watts add up, not just in their power consumption
but also their heat production that adds to the load on the office air conditioning.
To help cut the power used by computer equipment, the Environmental Protection Agency started the
Energy Star program, which seeks to conserve power while computers and their peripherals are in
their idle states. For monitors, Energy Star means that the monitor powers down to one of two lower
power conditions or shuts off entirely when its host computer has not been used for a while.
Energy Star compliant monitors have four operating modes: on, standby, suspend, and off. During
normal operation when the monitor displays an active image generated by your PC, it is on. In
standby mode, the monitor cuts off the electron beam in its CRT and powers down some of its
electronics. It keeps the filament or heater of the CRT (the part that has to warm up to make the tube
work) hot so that the monitor can instantly switch back to its on state. In suspend mode, the filament
and most of the electronics of the monitor switch off. Only a small portion of the electronics of the
monitor remain operational to sense the incoming signals, ready to switch the monitor back on when
the need arises. This conserves most of the power that would be used by the monitor but requires the
CRT to heat up before the monitor can resume normal operation. In other words, the monitor trades
rapid availability for a reduction in power use. In off mode, the monitor uses no power but requires
you to manually switch it on.
To enable your PC to control the operating mode of your monitor without additional electrical
connections, VESA developed its Display Power Management Standard. This system uses the two
synchronizing signals your video board supplies to your monitor to control its operating mode. To
signal the monitor to switch to standby operation, your video card switches off only its horizontal
sync signal. To signal the monitor to switch to suspend mode, your video board cuts both the video
signal and the vertical sync but leaves the horizontal sync on. In off mode, all signals are cut off.
Table 17.5 summarizes these modes.

Table 17.5. VESA Display Power Management Summary

Monitor state Video Vertical sync Horizontal sync DPMS Recovery time Power savings
On On On On Mandatory None None
Standby On On Off Optional Short Minimal
Suspend Off Off On Mandatory Longer Substantial
Off Off Off Off Mandatory Warm-up Maximum
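The mapping in Table 17.5 is mechanical enough to express in code; this sketch simply encodes the table rows:

```python
def dpms_state(video_on: bool, vsync_on: bool, hsync_on: bool) -> str:
    """Infer the monitor power state from the DPMS signal combinations
    listed in Table 17.5."""
    if vsync_on and hsync_on:
        return "on"
    if vsync_on:               # horizontal sync cut: standby
        return "standby"
    if hsync_on:               # video and vertical sync cut: suspend
        return "suspend"
    return "off"               # all signals cut

print(dpms_state(video_on=True, vsync_on=True, hsync_on=False))  # → standby
```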

Advanced operating systems monitor your system usage and send out the standby and/or suspend
signals when you leave your system idle for a pre-determined time. Your video driver software
controls the DPMS signals. Note that screen saver software defeats the purpose of DPMS by keeping
your system active even when it is not in use. You can trim your power usage by relying on DPMS
rather than a screen saver to shut down your monitor.
Many monitors made before the DPMS standard was created incorporate their own power saving
mode that's initiated by a loss of the video signal for a pre-determined time. In other words,
they sense when your screen is blank and start waiting. If you don't do something for, say, five
minutes, they switch the monitor to standby or suspend no matter the state of the synchronizing
signals. The DPMS system is designed so that these monitors, too, will power down (although only
after the conclusion of both the DPMS and their own internal timing cycles.)
To enable the DPMS system under Windows 95, you must tell the operating system that your monitor
is Energy Star compliant. If your monitor is in the Windows 95 database, Windows will know
automatically whether the monitor is compliant. You can also check the Energy Star compliance in
the Change Display Settings screen. You must also activate the DPMS screen blanker from the Screen
Saver tab of your Display Properties screen.

Monitor Types

The world of PC monitors is marked by a profusion of confusion. To make sure that you get the right
type of display, you must describe it with specificity. Saying color or monochrome is not enough. You
must also indicate the signal standard to which the monitor must abide. The standard is dictated by the
video adapter used by the monitor; but some monitors work with different adapters, and many
adapters are flexible in regard to your monitor choice. However, certain terms are in general use to
describe and distinguish particular monitor types.

Multiscanning

By far, the most popular computer monitors are multiscanning color displays. They are called
multiscanning because they accept a wide variety of synchronizing frequencies so they can operate
under almost any display standard.

Fixed Frequency


Monochrome means exactly what its root words say—mono means one and chrome indicates color.
Monochrome monitors show their images in one color, be it green, amber, white, puce, or alizarin
crimson. Monochrome does not describe what sort of display adapter the monitor plugs into. Among
the monitors available, you have three choices that give you long odds at finding the right
combination by chance. A fourth, the multiscanning monochrome display, accepts almost any
monochrome signal.

TTL Monochrome

The original display type offered by IBM—the one that plugs into the Monochrome Display
Adapter—is distinctly different from any monitor standard made for any other purpose. It uses digital
input signals and separate lines for both its horizontal and vertical synchronizing signals.
Its digital signals match the level used by integrated circuits of the Transistor-Transistor Logic (TTL)
family. These chips operate with tightly defined voltage ranges indicating a logical one or zero. (Five
volts is nominally considered a digital one, although that is actually the supply voltage of TTL chips; the
maximum level TTL signals ever reach is about 4.3 volts.) Because of their use of TTL signals, such
monitors are often called TTL monochrome displays. They can only be plugged into MDA or
compatible display adapters (including the Hercules Graphics Board).
TTL monochromes are the least expensive monitors (and the oldest monitor technology) still sold
with computer systems today. When manufacturers want to skimp somewhere, they may substitute a
TTL monochrome display system for a monochrome VGA system. Avoid such systems if you can
because fewer applications support Hercules graphics than support VGA. Consider such a monitor
only if you want nothing more than text displays and a few dollars are very important to you.

Composite Monochrome

A monitor bearing no description other than merely "monochrome" is most likely a composite
monochrome monitor. This type of monitor offers the lowest resolution of any monochrome system
available for PCs, the same level as a CGA color display but without the redeeming virtue of color.
Because the composite monochrome monitor uses the same signal as home and professional video
systems, it is as ubiquitous as it is hard on the eyes. Designed for the mass market, the composite
monochrome monitor is likely to be the least expensive available. It can only be plugged into a CGA
or compatible display adapter. The built-in display of the unlamented IBM Portable Personal
Computer is actually a composite monochrome monitor. About the only real use of a monochrome
composite display today is in multimedia systems to preview video images.

VGA Monochrome

As with TTL monochrome monitors, VGA monochrome monitors follow a unique frequency
standard. Because the Monochrome VGA display quickly won acceptance, it spawned a number of compatibles. These all are incompatible with other video standards but plug into any VGA-style
output.
A VGA monochrome monitor works with any VGA display adapter without change. It displays VGA
graphics without a hitch—but also without color, of course.

Composite Color

Generic video monitors—the kind you're likely to connect to your VCR or video camera—use the
standard NTSC composite video signal. This signal standard has long been used with PCs—starting
with the CGA adapter and the PCjr's built-in display system. Composite signals have never really
gone away. They are still used where computer generated graphics are destined for television and
video productions. They also link into some multimedia systems. The 3.58 megahertz color subcarrier
specified by the NTSC standard limits their color sharpness, however, so the best you can expect
should you want to use a composite color display for general use is readable 40-column text. In other
words, composite color is a special purpose product, nothing you want to connect for average,
everyday computing.

RGB

The original color display for the IBM PC—the Personal Computer Color Display, IBM model
5153—used three discrete digital signals for each of the three primary colors. From these signals, the
display type earned the nickname RGB from the list of additive primary colors: Red, Green, and Blue.
To be completely accurate, of course, this style of monitor should be termed RGBI, with the final "I"
standing for intensity, per the CGA standard.
Except for the interface signal, the RGB monitor works like a composite color monitor, using the
same frequencies, but substituting digital signals for analog. Because there's no need for the NTSC
color subcarrier, bandwidth is not limited by the interface, and RGB monitors appear much sharper
than composite monitors, even though they display the same number of lines. RGB monitors work
with the CGA, EGA (in its degraded CGA mode), and compatible display adapters as well as the
PCjr. Because of the low resolution of CGA systems, CGA monitors are about as dead and forgotten
as the PCjr.

Enhanced RGB

Moving up to EGA quality requires a better display, one able to handle the 22.1 KHz horizontal
synchronizing frequency of the EGA standard. In addition, its interface is somewhat different. While
still digital, it must accommodate intensity signals for each of the three primary colors. The EGA
signals require a matching EGA connection on the display.
As with CGA, EGA is essentially obsolete. No new systems are sold with it anymore. Rather than getting a new monitor to work with your existing EGA card when your old monitor fails, you'll probably save time and headaches by upgrading to VGA.

VGA Color

VGA displays were introduced by necessity with the PS/2s. They use analog inputs and a 31 KHz
horizontal synchronizing frequency to match with the VGA standard. VGA is now the minimum you
should demand in a computer monitor.

Identification

The prevalence of multi-scanning monitors with wildly different capabilities makes getting the most
from your investment a challenge. You must be able to identify not only the display abilities of the
monitor but also those of your video board, then make the best possible match between them. If you're
wrong, you won't get everything that you've paid for. Worse, you might not see anything intelligible
on your screen. Worse still, you face a tiny chance of actually harming your monitor or video board.

Hard Wired Coding

The problem is neither new nor one that arose with multi-scanning systems. When the VGA system
was new, IBM developed a rudimentary system for letting its PCs determine the type of monitor that
was connected—limited, of course, to the monitors IBM made. The system's capabilities were modest. It
could identify whether the monitor was monochrome or color and whether it met merely the VGA
standard or advanced into higher resolution territory. At the time, IBM only offered four monitors and
that modest range defined the extent of the selection.
The IBM scheme was to use three of the connections between the monitor and the video board to
carry signals identifying the monitor. These first signals were crude—a simple binary code that put
the signal wire either at ground potential or with no connection. Table 17.6 lists this coding system.

Table 17.6. Monitor Identification Coding Used by IBM

Display type Size IBM model ID 0 ID 1 ID 2


Monochrome 12 inch 8503 NC Ground NC
Color 12 inch 8513 Ground NC NC
Color 14 inch 8512 Ground NC NC
Hi-resolution 15 inch 8514 Ground NC Ground
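Treating a grounded line as True and no connection as False, the coding in Table 17.6 reduces to a small lookup. This sketch merely encodes the table; the names are my own:

```python
# IBM's hard-wired monitor ID coding (Table 17.6).
# Key order: (ID 0, ID 1, ID 2); True = grounded, False = no connection.
MONITOR_IDS = {
    (False, True, False): "Monochrome 12-inch (8503)",
    (True, False, False): "Color 12- or 14-inch (8512/8513)",
    (True, False, True): "Hi-resolution 15-inch (8514)",
}

def identify_monitor(id0: bool, id1: bool, id2: bool) -> str:
    return MONITOR_IDS.get((id0, id1, id2), "Unknown or no monitor")

print(identify_monitor(True, False, True))  # → Hi-resolution 15-inch (8514)
```

Note that the 8512 and 8513 share one code, which is why the scheme could distinguish only broad monitor classes, not individual models.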


Display Data Channel

This rudimentary standard was not up to the task of identifying the wide range of monitor capabilities
that became available in the years after the introduction of VGA. Yet adding true Plug-and-Play
capabilities to your PC requires automatically identifying the type of monitor connected to your PC so
that the display adapter (and the rest of the system) can be properly configured. To meet this
challenge, VESA developed the Display Data Channel, an elaborate monitor identification system
based on the same connections as the early IBM system but with greatly enhanced signals and
capabilities.
Through the DDC, the monitor sends an Extended Display Identification or EDID to your PC. In
advanced form the DDC moves data both ways between your monitor and your PC using either the
I2C or ACCESS.bus serial interfaces discussed in Chapter 21, "Serial Ports." The DDC2B standard
uses I2C bus signaling on two of the wires of the monitor connection to transfer data both ways
between the monitor and its attached video board. DDC2AB uses a full ACCESS.bus connection and
protocol which allow you to connect other computer peripherals (for example, your keyboard) to your
monitor rather than the system unit.
All levels of the DDC system gain information about your monitor in the same way. The monitor
sends out the EDID as a string of serial bits on the monitor data line, pin 12. Depending on the level
of DDC supported by the monitor, the EDID data stream is synchronized either to the vertical sync
signal generated by the video board present on pin 14 of 15-pin video connectors or to a separate
serial data clock (SCL) that's on pin 15 in DDC2 systems. One bit of data moves with each clock
cycle. When the system uses vertical sync as the clock, the data rate will be in the range from 60 to 85
Hz. With DDC-compliant monitors, your video board can temporarily increase the vertical sync
frequency to up to 25 KHz to speed the transmission of this data. Using the SCL signal when both
video board and monitor support it, data rates as high as 100 KHz are possible.
The serial data takes the form of nine-bit sequences, one per byte. The first eight bits encode the data,
most significant bit first. The last bit can be either a zero or one at the choice of the monitor
manufacturer. The only restriction is that the ninth bit must have the same value for every byte.
The DDC system sends data from the monitor to your display adapter in 128 byte blocks. The first of
these is the Extended Display Identification or EDID block. It is optionally followed by an Extended
EDID block or additional proprietary manufacturer data blocks. Table 17.7 lists the structure of the
basic EDID.

Table 17.7. Basic EDID Structure

Start Byte Length Description

0 8 bytes Header
8 10 bytes Vendor / product identification
18 2 bytes EDID version / revision
20 15 bytes Basic display parameters / features
35 19 bytes Established / standard timings
54 72 bytes Timing descriptions x 4 (18 bytes each)
126 1 byte Extension flag
127 1 byte Checksum

The header is always the same data pattern and serves to identify the EDID information stream. The
vendor identification is based on EISA manufacturer identifications. The product identification is
assigned by the manufacturer. It includes the month and year of manufacture of the monitor.
The basic display parameters that EDID relays to your system include the maximum size of your
monitor's display expressed as the largest width and height of the image area. Your applications or
operating system can use this information to automatically set the proper scaling for fonts displayed
on the screen. The timing data includes a bit representing the ability to support each of the various
VESA standards so that your system can determine the possible modes and frequencies your monitor
can use.
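A minimal parser for the base block layout in Table 17.7; the fixed header pattern, the checksum rule, and the 5-bit vendor-letter packing come from the published EDID specification rather than this chapter, so treat the function as an illustrative sketch:

```python
EDID_HEADER = bytes([0x00, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0x00])

def parse_edid_base(block: bytes) -> dict:
    """Validate and pick apart a 128-byte EDID base block (Table 17.7)."""
    if len(block) != 128:
        raise ValueError("EDID base block must be exactly 128 bytes")
    if block[:8] != EDID_HEADER:
        raise ValueError("header pattern missing")
    if sum(block) % 256 != 0:
        raise ValueError("checksum failed: bytes must sum to 0 modulo 256")
    # Vendor ID: three letters packed as 5-bit values into bytes 8-9
    word = (block[8] << 8) | block[9]
    vendor = "".join(chr(((word >> shift) & 0x1F) + ord("A") - 1)
                     for shift in (10, 5, 0))
    return {"vendor": vendor,
            "edid_version": block[18],
            "edid_revision": block[19],
            "extension_blocks": block[126]}
```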
In addition to basic DDC support, VESA provides for two higher levels of standardization. The
DDC2B system uses the Philips I2C signaling system to transfer data bi-directionally across the
interface. The DDC2AB system includes full ACCESS.bus support that supplies a low speed serial
interconnection bus suitable for linking such peripherals as keyboards and pointing devices through
the monitor.
Because standard expansion buses do not provide suitable connections for routing ACCESS.bus
signals, VESA has defined an auxiliary connector for the purpose, a five-pin "Berg" connector. Table
17.8 lists the signal assignments of this connector.

Table 17.8. Access Bus Connector Signal Assignments

Pin Function
1 Ground
2 Mechanical key
3 Serial data (SDA)
4 +5V ACCESS.bus supply voltage
5 Data clock (SCL)

Monitors that are compliant with DDC use the same connectors as ordinary VGA displays. All the
active video and synchronizing signals are located on the same pins of the connector no matter the
DDC level the monitor uses, or even if it doesn't use DDC at all. The only difference is the definition
of the monitor identification signals. DDC1 video boards sacrifice the Monitor ID Bit 1 pin, number
12, as a channel to receive identification data from your monitor. DDC2 systems make this signal
bi-directional and take over pin 15 for use in carrying the clock signal. In any case, pin 9 may be used
to supply five volts for running accessory devices. Table 17.9 lists the signal assignments of the VGA
15-pin connector under DDC.


Table 17.9. VESA Display Data Channel Signal Assignments

Pin DDC1 Host DDC 2 Host DDC1,2 Display


1 Red video Red video Red video
2 Green video Green video Green video
3 Blue video Blue video Blue video
4 Monitor ID bit 2 Monitor ID bit 2 Optional
5 Return Return Return
6 Red video return Red video return Red video return
7 Green video return Green video return Green video return
8 Blue video return Blue video return Blue video return
9 +5V (optional) +5V (optional) +5V load (optional)
10 Sync return Sync return Sync return
11 Monitor ID bit 0 Monitor ID bit 0 Optional
12 Data from display Bi-directional data Bi-directional data
13 Horizontal sync Horizontal sync Horizontal sync
14 Vertical sync Vertical sync Vertical sync
15 Monitor ID bit 3 Data clock (SCL) Data clock (SCL)

Manual Configuration

If your monitor or video board does not support any level of the DDC specification, you will be left to
configure your system on your own. In general, you won't need to perform any special configuration
to a multi-scanning monitor to match it to your video board if the signals from the video board are
within the range of the monitor. That's the whole point of the multi-scanning display—it adjusts itself
to accommodate any video signal.
That said, you may not get the most from your monitor. You might skimp on its refresh rate and
quickly tire your eyes with a flickering display. Worse, you might exceed its capabilities and end up
with a scrambled or blank screen.
Windows includes its own configuration process that attempts to optimize the signals of your
compliant video adapter with your monitor type. Windows already knows what video board you
have—you have to tell it when you install your video drivers. Without a DDC connection, however,
Windows is cut off from your monitor, so you must manually indicate the brand and model of monitor
you're using.
You make these settings by clicking on the Change Display Type button in the Settings tab of your Display properties menu. Click on the button, and you see a screen like that shown in Figure 17.9.
Figure 17.9 The Change Display Type screen in Windows 95.

Flat Panel Display Systems

CRTs are impractical for portable computers, as anyone who has toted a forty pound, first generation
portable computer knows. The glass in the tube itself weighs more than most of today's portable
machines, and running a CRT steals more power than most laptop or notebook machines budget for
all their circuitry and peripherals.

LCD

The winner in the display technology competition was the Liquid Crystal Display, the infamous LCD.
Unlike LED and gas-plasma displays, which glow on their own, emitting photons of visible light,
LCDs don't waste energy by shining. Instead, they merely block light otherwise available. To make
patterns visible, they either selectively block reflected light (reflective LCDs) or the light generated by
a secondary source either behind the LCD panel (backlit LCDs) or adjacent to it (edgelit LCDs). The
backlight source is typically an electroluminescent (EL) panel, although some laptops use Cold
Cathode Fluorescent (CCF) for brighter, whiter displays with the penalty of higher cost, greater
thickness, and increased complexity.

Nematic Technology

A number of different terms describe the technologies used in the LCD panels themselves, terms like
supertwist, double supertwist, and triple supertwist. In effect, the twist of the crystals controls the
contrast of the screen, so triple supertwist screens have more contrast than ordinary supertwist.
The history of laptop and notebook computer displays has been led by innovations in LCD
technology. Invented by RCA in the 1960s (General Electric still receives royalties on RCA's basic
patents), LCDs came into their own with laptop computers because of their low power requirements,
light weight, and ruggedness.
An LCD display is actually a sandwich made from two plastic sheets with a very special liquid made
from rod-shaped or nematic molecules. One important property of the nematic molecules of liquid
crystals is that they can be aligned by grooves in the plastic to bend the polarity of light that passes
through them. More importantly, the amount of bend the molecules of the liquid crystal give to the
light can be altered by applying an electrical current through them.
Ordinary light has no particular orientation, so liquid crystals don't visibly alter it. But polarized light
aligns all the oscillations of its photons in a single direction. A polarizing filter creates polarized light
by allowing light of a particular polarity (or axis of oscillation) to pass through. Polarization is key to the function of LCDs.


To make an LCD, light is first passed through one polarizing filter to polarize it. A second polarizing
filter, set to pass light at right angles to the polarity of the first, is put on the other side of the liquid
crystal. Normally, this second polarizing filter stops all light from passing. However, the liquid crystal
bends the polarity of light emerging from the first filter so that it lines up with the second filter. Pass a
current through the liquid crystal and the amount of bending changes, which alters in turn the amount
of light passing through the second polarizer.
To make an LCD display, you need only selectively apply current to small areas of the liquid crystal.
The areas to which you apply current are dark; those that you don't, are light. A light behind the LCD
makes the changes more visible.
Over the past few years, engineers have made several changes to this basic LCD design to improve its
contrast and color. The basic LCD design outlined above is technically termed twisted nematic
technology or TN. In their resting state, the liquid molecules of the TN display always bend light by
90 degrees, exactly counteracting the relationship between the two polarizing panels that make up the
display.
By increasing the bending of light by the nematic molecules, the contrast between light and dark can
be increased. An LCD design that bends light by 180 to 270 degrees is termed a supertwist nematic or
simply supertwist display. One side effect of the added twist is the appearance of color artifacts that
result in the yellowish green and bright blue hues of many familiar LCD displays.
This tinge of color can be canceled simply by mounting two supertwist liquid crystals back to back so
that one bends the light in the opposite direction of the other. This design is logically termed a double
supertwist nematic (or simply double supertwist) display. This LCD design is currently popular among
laptop PCs with black and white VGA-quality displays. It does have a drawback, however. Because
two layers of LCD are between you and the light source, double supertwist panels appear darker or
require brighter backlights for adequate visibility.
Triple supertwist nematic displays instead compensate for color shifts in the supertwist design by
layering both sides of the liquid crystal with thin polymer films. Because the films absorb less light
than the twin panels of double supertwist screens, less backlight—and less backlight power—is
required for the same screen brightness.

Cholesteric Technology

In 1996, the Liquid Crystal Institute of Kent State University developed another liquid crystal
technology into a workable display system and began its commercial manufacture. Termed cholesteric
LCDs, this design uses crystals that switch between transmissive and reflective states instead of
twisting. These changes are more directly visible and require no polarizers to operate. In that
polarizing panels reduce the brightness of nematic displays by as much as 75 percent, cholesteric
LCDs can be brighter. Early screens are able to achieve high contrast ratios without backlights.
Cholesteric screens have a second advantage. They are bi-stable. That is, maintaining a given pixel in
either the transmissive or reflective phase requires no energy input. Once switched on, a pixel stays on
until switched off. The screen requires power only to change pixels. In fact, a cholesteric screen will
retain its last image even after it is switched off. Power usage in notebook PC applications is likely to

http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/wrh17.htm (38 de 53) [23/06/2000 06:13:56 p.m.]


Winn L. Rosch Hardware Bible, Electronic Edition, Chapter 17

be as low as 10 percent that of nematic panels.


The fabrication technologies used to make cholesteric displays also allow for fine detail. Kent
State has already demonstrated grayscale panels with resolutions as high as 200 pixels per inch.
Although initial production was limited to grayscale displays, color cholesteric panels are currently
under development.

Passive Matrix

Nematic LCDs also come in two styles based on how the current that aligns their nematic molecules is
applied. Most LCD panels have a grid of horizontal and vertical conductors, and each pixel is located
at the intersection of these conductors. The pixel is darkened simply by sending current through the
conductors to the liquid crystal. This kind of display is called a passive matrix.

Active Matrix

The alternate design, the active matrix, is more commonly referred to as Thin Film Transistor (TFT)
technology. This style of LCD puts a transistor at every pixel. The transistor acts as a relay. A small
current is sent to it through the horizontal and vertical grid, and in response the transistor switches on
a much higher current to activate the LCD pixel.
The advantage of the active matrix design is that a smaller current needs to traverse the grid, so the
pixel can be switched on and off faster. Whereas passive LCD screens may update only about half a
dozen times per second, TFT designs can operate at ordinary monitor speeds—ten times faster. That
increased speed equates to faster response—for example, your mouse won't disappear as you move it
across the screen.
The disadvantage of the TFT design is that it requires the fabrication of one transistor for each screen
pixel. Putting those transistors there requires combining the LCD and semiconductor manufacturing
processes. That's sort of like getting bricklayers and carpenters to work together.
To achieve the quality of active matrix displays without paying the price, engineers have upped the
scan on passive panels. Double scanned passive displays work exactly as the name says: they scan their
screens twice in the period that a normal screen is scanned only once. Rather than go over each pixel
two times, the electronics of a double-scanned display divides the screen into two halves and scans
both at the same time. The idea is something like the interlacing of CRT screens, lowering the
required scanning frequency, but the arrangement and effect are different. Double scanned displays
split the screen in the middle into upper and lower halves. The split means that each pixel gets twice
as long for updates as would be the case if the whole screen were scanned at the same frequency. As
a result, double scanning can eke out extra brightness, contrast, and speed. Double scanned displays
do not, however, reach the quality level set by active matrix screens.
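The timing advantage of splitting the screen can be sketched in a few lines of Python. The 60 Hz refresh rate and 480-line screen here are illustrative assumptions, not figures from any particular panel:

```python
# Per-line drive time for single versus dual (double) scanned panels.
# Assumed figures: 480 visible lines refreshed 60 times per second.

def line_time_us(total_lines, refresh_hz, halves=1):
    """Microseconds available to drive each line.

    With dual scan (halves=2) the screen splits into two halves
    scanned simultaneously, so each half contains total_lines/halves
    lines and every line gets proportionally more drive time.
    """
    lines_per_half = total_lines / halves
    frame_time_us = 1_000_000 / refresh_hz
    return frame_time_us / lines_per_half

single = line_time_us(480, 60)           # one pass over all 480 lines
dual = line_time_us(480, 60, halves=2)   # two 240-line halves in parallel

print(f"single scan: {single:.1f} us per line")  # about 34.7 us
print(f"dual scan:   {dual:.1f} us per line")    # about 69.4 us
```

Doubling the per-line drive time is what lets the split-screen scheme buy its extra brightness and contrast.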

Response Time


The LCD panel equivalent of persistence is response time. Charging and discharging individual pixels
requires a finite period, and the response time measures this period. The time to charge and the time to
discharge a given pixel can be, and often are, different, and are typically individually specified. For
example, the off times of some active screens may be twice that of the on time.
The ambient temperature can have dramatic effects on the response time of an LCD panel. At
freezing, the response time of a panel may be three times longer (slower) than at room temperature.
At room temperature, an active matrix display pixel has a response time on the order of 10-50
milliseconds.
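Response time puts a ceiling on how fast a pixel can fully switch. A rough calculation, using the 10-50 millisecond range cited above and assuming a symmetric on/off split (an assumption, since the text notes the two times often differ):

```python
# Ceiling on full on-then-off pixel cycles per second, given the
# response times. The symmetric on/off split is an assumption.

def max_transitions_per_second(on_ms, off_ms):
    """Full on-then-off cycles a pixel can complete in one second."""
    return 1000 / (on_ms + off_ms)

fast = max_transitions_per_second(10, 10)  # fast end of the range
slow = max_transitions_per_second(50, 50)  # slow end of the range
# Near freezing, response may triple (30 ms each way):
cold = max_transitions_per_second(30, 30)

print(fast, slow, cold)  # 50.0, 10.0, about 16.7 cycles per second
```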

Field Emission Displays

The chief challenger to the Liquid Crystal Display is the Field Emission Display or FED. Several
manufacturers are actively developing FED technology. Commercial FED panels were offered in
early 1996, although with smaller dimensions than required for PC displays. FED displays are
nevertheless expected in new notebook PCs, with 10-inch panels entering commercial production in
late 1997. Manufacturers are also developing small (2.5-inch) panels for hand held televisions and
large panels (40 inches) for wall hung displays.
In a radical bit of retro design, the FED uses the same basic illumination principle as the cathode ray
tube. A flux of electrons strikes phosphor dots and causes them to glow. As with the CRT, the
electrons must flow through a vacuum, so the FED panel is essentially a flattened vacuum tube.
Instead of a single electron gun for each color, however, the FED uses multiple, microscopic cones as
electron emitters. Several hundred of these cathode emitters serve each image pixel. Each group of
emitters has its own drive transistor, much like an active matrix LCD panel. Each emitter is cone
shaped, a configuration which favors electron emission.
At the other side of the panel, each pixel has a conventional trio of phosphor dots, one for each
primary color. Each dot has associated with it a separate transparent anode that attracts the electron
flux. To separate the three colors of the pixel, the drive electronics activate each of the three anodes in
sequence so the total display time of the pixel is split in thirds, one-third for each color. The whole
assembly fits between two glass panels, which form the bottle of the vacuum tube. Figure 17.10
illustrates the construction used by one FED manufacturer.
Figure 17.10 Cross-section of a Field Emission Display.

The short travel of the electron flux in the FED dramatically reduces the voltage required for the
operation of the device. The anode in a FED may be only about 200 microns from the nearest cathode.
Instead of a potential of thousands of volts between the anode and cathode, the FED operates at about
350 volts. The FED is essentially a current device, relying on a high current in the form of a dense
flux of electrons to provide sufficient illumination. This high current presents one of the major
technological obstacles to designing successful FEDs. Conventional phosphors deteriorate rapidly
with the onslaught of electrons, quickly and readily burning in.
A FED behaves more like a CRT than an LCD. Its electron currents respond quickly to changes in
their drive voltages. The response time of a FED display is typically a few microseconds compared to


the millisecond responses of LCD panels. FEDs also have wider viewing angles than LCD panels.
The image on a FED screen is viewable over an angle of about 160 degrees, much the same as a CRT.
In addition, FED technology promises to be more energy efficient than LCDs, capable of delivering
about the same screen brightness while using half the power.

Electro-Luminescent Displays

The backlights used by many LCD displays use electro-luminescent technology. Some manufacturers
are working to develop flat panel displays that do away with the LCD and instead use an EL panel
segmented into individual pixels. Monochrome screens have already been developed, but color
screens are problematic. Although green and blue EL elements have operating lives long enough for
commercial applications, in excess of 5,000 hours, current red EL elements operate only about half as
long. The longer wavelength of red light requires higher currents, which shorten the life of the
materials. Manufacturers believe, however, that the technology will be successful in the next few
years. Its primary application is expected to be wall hung displays.

Gas-Plasma

One alternative is the gas-plasma screen, which uses a high voltage to ionize a gas and cause it to emit
light. Most gas plasma screens have the characteristic orange-red glow of neon because that's the gas
they use inside. Gas-plasma displays are relatively easy to make in the moderately large sizes perfect
for laptop computer screens and yield sharpness unrivaled by competing technologies. However,
gas-plasma screens also need a great deal of power—several times the requirements of LCD
technology—at high voltages, which must be synthesized from low voltage battery power.
Consequently, gas-plasma displays are used primarily in AC-power portables. When used in laptops,
the battery life of a gas-plasma equipped machine is quite brief, on the order of an hour.

LED

In lieu of the tube, laptop designers have tried just about every available alternate display technology.
These include the panels packed with light-emitting diodes—the power-on indicators of the
1980s—that glow at you as red as the devil's eyes. But LEDs consume extraordinary amounts of
power. Consider that a normal, full size LED can draw 10 to 100 milliwatts at full brilliance and that
you need 100,000 or so individual elements in a display screen; you get an idea of the magnitude of
the problem. Certainly the individual display elements of an LED screen would be smaller than a
power-on indicator and consume less power, but the small LED displays created in the early days of
portable PCs consumed many times the power required by today's technologies. LEDs also suffer the
problem that they tend to wash out in bright light and are relatively expensive to fabricate in large
arrays.
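The magnitude of the problem follows directly from the figures above. A back-of-the-envelope calculation, using the full size LED numbers from the text (small screen elements would draw less, as noted):

```python
# Order-of-magnitude power budget for a hypothetical LED screen,
# using the text's figures: 10-100 mW per full size LED and
# roughly 100,000 elements per display.

elements = 100_000
low_watts = elements * 0.010    # 10 mW each
high_watts = elements * 0.100   # 100 mW each

print(f"{low_watts:.0f} W to {high_watts:.0f} W at full brilliance")
```

Even the low end, a kilowatt, is hopeless for a battery-powered portable, which is why screen-size LED arrays never caught on.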


Practical Considerations

When you buy a notebook PC, you're usually stuck with the flat panel display system that the
manufacturer chooses to install. Because everything you do with your PC depends on what you see on
your screen, the display system is one of the most important factors in selecting a notebook PC. In
addition to the underlying technology, two other characteristics distinguish flat panel display systems:
resolution and size. Although the terminology is the same as for CRT-based monitors, the flat panel
systems raise different issues.

Resolution

Onscreen resolution is an important issue with flat panel displays; it determines how sharp text
characters and graphics will appear. Today, three resolution standards are dominant: CGA (640 x
200); double scanned CGA (640 x 400); and VGA (640 x 480).
Most people prefer the last because it's exactly equivalent to today's most popular desktop displays so
it can use the same software and drivers.
CGA resolution is visibly inferior, producing blocky, hard to read characters, and remains in use only
in the least expensive laptops.
Double scanned CGA offers a good compromise between cost and resolution. It's actually as sharp as
VGA in text mode. Graphics pose a problem, as double scanned CGA mode is not supported by a wide
base of software. Under Windows, however, many double scanned CGA systems are compatible with
Toshiba and AT&T 640 x 400 pixel drivers.
VGA poses particular problems for LCD displays because it's really more three standards under a
single name, operating with modes that put 350, 400, or 480 lines on the screen. Most VGA display
panels have 480 rows of dots to accommodate these lines.
A problem develops with VGA images made with lower line counts. Many old notebook LCD screens
displayed only the active lines and left the rest of the screen blank. For example, a panel would leave
80 lines blank during 400-line text mode displays on a 480-line LCD. The result was a black band at
both the top and bottom of the screen.
Today's notebook PCs usually avoid this problem and fill the entire screen no matter what mode its
video system operates in. Two different techniques adjust the image size, depending on whether the
panel operates in text or graphics mode.
VGA text mode uses 400 lines and would otherwise leave 80 lines on the screen blank. To sidestep
this problem, flat panel controllers either use a taller font or insert blank lines between lines of normal
text fonts. Each technique has its drawbacks but overall either is more pleasing than the otherwise
blank areas on the screen. Substituting fonts may result in compatibility problems with software that
assumes a given font size (which, of course, it should not in text mode). Blank line insertion results in
squat characters that appear widely spaced (in print, the effect is called leading). Block graphics may
appear disjointed with vertical lines turning into dashes.


In 350-line graphics mode, flat panel controllers use line replication to duplicate a sufficient number
of existing image lines to compensate for the blank area of the screen. In effect, line replication is an
exotic form of the double scanning that expands 200-line graphics into 400-line displays. The math
resulting from stretching a 350-line image to fill a 480-line screen requires a specific pattern of
repeated and unrepeated lines. In text mode, this can make adjacent lines of text appear to be different
sizes, so the technique is usually reserved for graphics.
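The replication pattern required to stretch 350 lines across a 480-line panel can be sketched as follows. The even nearest-below mapping used here is illustrative; real flat panel controllers may distribute the repeats differently:

```python
# A sketch of line replication: stretching a 350-line image onto a
# 480-line panel by showing selected source lines twice. The even
# distribution here is an assumption, not a documented algorithm.

def replicate(src_lines, dst_lines):
    """Map each destination line to a source line (floor scaling)."""
    return [i * src_lines // dst_lines for i in range(dst_lines)]

mapping = replicate(350, 480)
repeats = len(mapping) - len(set(mapping))  # lines displayed twice

print(len(mapping), repeats)  # 480 panel lines, 130 repeated lines
```

The uneven spacing of those 130 repeated lines among 350 originals is exactly why adjacent text lines can appear to be different sizes.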
With text mode and EGA-style graphics left far behind by operating systems with integrated VGA
and better resolution, the need for both of these technologies is disappearing. However, if you ever
need to step back and run older software, you'll appreciate a display system that can cope well with
various image heights.

Size

Unlike with CRT-based monitors, the entire area of an LCD display is usable. Because the image is a
perfect fit into the pixels of the LCD display, there are no issues of underscan and overscan. The size
of the screen is the size of the image.
The VESA Video Image Area Definition standard applies to LCDs just as it does to CRTs. Most LCD
panel makers, however, specify the dimensions of their products with the same diagonal that served
manufacturers so well for color televisions. Common sizes range from 10.4 inches to 13.3 inches,
with even larger sizes under development.
Because most LCD panels are used on notebook PCs, their size is ultimately constrained by the
dimensions of the computer itself. For notebook machines that approximate the size of a stack of true
notepads, the maximum screen size is about 12.1 inches. Some manufacturers (notably NEC) have
developed notebook PCs with larger screens, but the computers themselves are necessarily larger in
length and width. Larger LCD panels are slated to replace CRTs as the basis for desktop and, in a new
application, wall hung displays. Some manufacturers are experimenting with screens as large as 42
inches across for such applications.

Labeling

Every manufacturer of flat panels has traditionally given its products its own part numbers,
completely different from those of similar products offered by other makers, even when two products
perform identically and are meant to be interchangeable. To help sort through this confusion, VESA developed
a standard nomenclature for flat panel display systems using the multi-part format shown in Figure
17.11.
Figure 17.11 VESA flat panel nomenclature.
The six parts of the designation identify the important operational differences between display panels.
These include the kind of scanning and synchronization, the number of data lines and bits per pixel
used by the panel, and the maximum onscreen resolution. As presently implemented, this
nomenclature applies only to nematic screens.


The first letter indicates whether the panel is active or passive. The next three numbers, separated by
colons, indicate the number of data bits that encode each pixel in the standard red-green-blue order.
Normally, all three will be the same and take a value from one to six. A monochrome panel uses only
the first (red) position but maintains the other two numbers as zero for consistency in the
nomenclature. From these pixel values you can determine the number of colors or grayscales that the
panel can reproduce. The formula is as follows:
Number of colors = 2^(r+g+b)
The next letter indicates whether the panel is single scan or dual scan. An "S" indicates a single scan
panel, a "D" indicates dual scan. The next letter indicates the type of synchronization used by the
screen: "F" for FPFRAME/FPLINE synchronization, "D" for DRDY synchronization, and "A" for
panels that support either form of sync.
The next number indicates the number of data lines connecting the panel to its control system, which
translates into the number of bits the panel will accept per clock cycle. Most panels fall in the range 9
through 18, inclusive.
The final figures indicate the addressability of the screen listed as the number of horizontal pixels and
vertical pixels. In that the addressability of most LCD screens extends only to all the visible pixels,
this figure also represents the visible resolution of the screen. Note that this nomenclature describes
only the electrical characteristics of the panel and does not extend to physical characteristics like size.
Panels with identical designations will plug into the same electronics even if they are different
physical sizes.
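The color formula above is simple enough to verify in a couple of lines. The bit counts used here are only examples; they are not drawn from any particular panel designation:

```python
# Colors reproducible from the per-gun bit counts in the VESA flat
# panel nomenclature: colors = 2 ** (r + g + b).

def colors(r, g, b):
    """Number of distinct colors or grayscales a panel can show."""
    return 2 ** (r + g + b)

full_color = colors(6, 6, 6)  # a 6:6:6 panel: 262,144 colors
# A monochrome panel keeps the green and blue fields at zero,
# so a 4:0:0 panel yields 16 grayscales:
mono = colors(4, 0, 0)

print(full_color, mono)  # 262144 16
```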

Connectors

Monitors can be grouped by the display standard they support, mostly based upon the display adapter
card they are designed to plug into. One basic guide that helps you narrow down the compatibility of a
display just by inspecting its rear panel is the input connector used by the monitor. After all, if you
cannot plug a monitor into your computer, odds are it is not much good to you.
Three styles of connectors are shared by different PC video standards. By name, these three
connectors are the RCA-style pin jack, the 9-pin D-shell, and the 15-pin "high-density" D-shell. In
addition, some high resolution monitors use three or more BNC connectors for their input signals.
VESA has also created a new connector standard, the Enhanced Video Connector, that combines
video with a host of other signals so that a single connection can link just about everything to your
PC.

Video

The video connection carries the image between your video board and monitor. Several standards are
used by computer monitors. These vary with the signals used by the monitor and the applications it
serves.


Video systems based on composite video signals commonly use pin jacks. These signals allow you to
connect multiple monitors together.
Computer displays most commonly use systems with D-shell connectors. Each display standard has
its own distinct arrangement of signals. All computer display systems are designed to connect a single
monitor to a given output.
Multimedia systems often combine the need for both as well as audio. A new connector system called
the Enhanced Video Connector promises to put all the required multimedia signals in a single
connector.

Pin Jacks

Most manufacturers use the bull's eye jack familiar from stereo and video equipment for the
composite video connections in PC display systems; a wealth of monitors and television sets made
by innumerable manufacturers also use this connector. This connector does give you many
choices for alternate displays—that is, if you don't mind marginal quality.
Composite monitors (those dealing with the composite video and NTSC color only) rank among the
most widely available and least expensive in both color and monochrome. Even better quality
television sets have such jacks available. Figure 17.12 illustrates a typical pin jack.
Figure 17.12 A jack for video pin connectors.

Although you can use any composite video display with a CGA or compatible color card, the signal
itself limits the possible image quality to okay for monochrome, acceptable for 40-column color, and
unintelligible for 80-column color. Nevertheless, a composite video display—already a multipurpose
device—becomes even more versatile with a computer input.

Daisy Chaining

A side benefit of pin plug/composite video displays is that most have both input and output jacks.
These paired jacks enable you to daisy chain multiple monitors to a single video output. For example,
you can attach six composite video monitors to the output of your computer for presentations in a
classroom or boardroom.
In many cases, the jacks just loop through the display (that is, they connect together). The display
merely bridges the input video signal and alters it in no other manner. You can connect a nearly
unlimited number of monitors to these loop-through connections with no image degradation. Some
monitors, however, buffer their outputs with a built-in video amplifier. Depending on the quality of
the amplifier, daisy chaining several of these monitors can result in noticeable image degradation.
One way to tell the difference is by plugging the output of the display into the output of your
computer. Most amplifiers don't work backwards, so if the display has a buffering amplifier nothing
appears onscreen. If you do get an image comparable to the one you get when plugging into the input
jack, the signal just loops through the display.


Analog Voltage Level

The specifications of composite monitors sometimes include a number describing the voltage level of
the input signal. This voltage level can be important when selecting a composite display because all
such monitors are essentially analog devices.
In analog monitors, the voltage level corresponds to the brightness the electron beam displays
onscreen. A nominal one-volt peak to peak input signal is the standard in both the video and computer
industries and should be expected from any composite monitor. The VGA system requires a slightly
different level—0.7 volts.

Termination

For proper performance, a composite video signal line must be terminated by an impedance of 75
ohms. This termination ensures that the signal is at the proper level and that aberrations do not creep
in because of an improperly matched line. Most composite input monitors (particularly those with
separate inputs and outputs) feature a termination switch that connects a 75-ohm resistor across the
video line when turned on. Only one termination resistor should be switched on in any daisy chain,
and it should always be the last monitor in the chain.
If you watch a monitor when you switch the termination resistor on, you'll notice that the screen gets
dimmer. That's because the resistor absorbs about half the video signal. Because composite video
signals are analog, they are sensitive to voltage level. The termination cuts the voltage in half and,
consequently, dims the screen by the same amount. Note that the dim image is the proper one.
Although bright might seem better, it's not. It may overload the circuits of the monitor or otherwise
cause erratic operation.
Composite monitors with a single video input jack and no video output usually have a termination
resistor permanently installed. Although you might try to connect two or more such monitors to a
single CGA composite output (with a wye cable or adapter), doing so is unwise. With each additional
monitor, the image gets dimmer (the signal must be split among the various monitors) and the CGA
adapter is required to send out increasing current. The latter could cause the CGA to fail.
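The halving effect of termination is a textbook voltage divider: the source impedance of the video output and the 75-ohm terminator split the signal between them. The 2-volt open-circuit figure below is a nominal assumption chosen so the divider yields the standard 1-volt level:

```python
# Why a terminated image is dimmer: the source impedance and the
# termination form a series voltage divider. Nominal figures assumed:
# a 2 V open-circuit source with a 75-ohm output impedance.

def delivered_volts(source_v, source_ohms, load_ohms):
    """Voltage appearing across the load of a series divider."""
    return source_v * load_ohms / (source_ohms + load_ohms)

# One 75-ohm terminator yields the standard 1 V at the monitor:
one_monitor = delivered_volts(2.0, 75, 75)

# Two permanently terminated monitors on a wye cable present two
# 75-ohm loads in parallel (37.5 ohms), dimming the image further:
two_monitors = delivered_volts(2.0, 75, 37.5)

print(one_monitor, two_monitors)  # 1.0 V, then about 0.67 V
```

The same divider also shows why an unterminated line runs bright: with no load, the full open-circuit voltage reaches the monitor.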

Nine-Pin D-Shell Connectors

The first video connector used by PCs was the nine-pin D-shell connector. As with all D-shell
connectors, its shape assures that all signal pins will be properly matched with their mates. Its nine
pins provide enough signal positions for both monochrome and color systems. Figure 17.13 illustrates
a 9-pin D-shell jack.
Figure 17.13 A nine-pin D-shell jack.


All PC digital display standards and one specialized analog standard used this connector design.
These standards include monochrome (MDA), standard RGB (CGA), enhanced RGB (EGA), and
professional RGB (PGA). In addition, some early multiscanning monitors used this connector for their
input signals. All of these standards arrayed their signals differently on the connector pins.
Although this connector has fallen into disuse, it remains troublesome because of the huge
potential for confusion it presents. You must follow one important rule when you approach this
connector—know what kind of display adapter you are about to plug into. Making the wrong choice
can be fatal to your display, particularly if you try to plug an IBM Monochrome Display into a CGA
adapter. The mismatch of synchronizing frequencies leads to the internal components of the display
overheating and failing.
A mismatch is easy to spot—you simply can't make sense of the image on the screen. You may see a
Venetian-blind pattern of lines; the screen may flash; or it may look like the vertical hold failed in a
dramatic way. Should you observe any of these patterns or hear a high-pitched squeal from your
display and see nothing onscreen, immediately turn off your display. Hunt for the problem while the
life of your monitor is not ticking away.

Monochrome Display Adapter

The video signal of the MDA system is digital. The only thing about it that matters to a monitor is
whether voltage is present or not. MDA monitors ignore the voltage level of the video signal. This
video signal will also drive analog monitors so that you can see monochrome images on them.
However the signal strength is substantially higher than normal video signals (5-volt TTL level
instead of 1-volt video). Moreover, because the intensity signal appears on a separate pin, highlighting
will not appear in such a connection. Note, too, the MDA system uses separate sync with distinct pins
for its horizontal and vertical synchronizing signals, and so requires a monitor that accommodates
separate sync. Table 17.10 lists the signal assignments of an MDA connector.

Table 17.10. MDA Signal Assignments

Pin Function
1 Ground
2 Ground
3 Not used
4 Not used
5 Not used
6 Intensity
7 Video
8 Horizontal sync
9 Vertical sync


Color Graphics Adapter

The CGA system uses three pins not used by the MDA system for its discrete color signals. The
intensity signal remains at the same location, and the pin used by MDA for video is not used under
CGA. Table 17.11 lists the signal assignments of a CGA connection. Although this system would
seem to prevent any difficulties when mismatching a CGA monitor and MDA video board or vice
versa, such mismatches can result in equipment damage. The synchronizing signals, which appear on
the same pins in both connection systems, are sufficiently different to cause damage if they are
applied to the wrong monitor type for a sustained period.

Table 17.11. CGA Interface Signal Assignments

Pin Function
1 Ground
2 Ground
3 Red
4 Green
5 Blue
6 Intensity
7 Reserved
8 Horizontal sync
9 Vertical sync

Enhanced Graphics Adapter

For compatibility with all IBM monitors, the EGA used the same connector as previous video boards,
a nine-pin, female D-shell connector. The definitions of its signal pins were controlled by the setup
DIP switches depending on the type of monitor that was to be connected to the board. Both the
monochrome and CGA-compatible color schemes corresponded exactly with the MDA and CGA
standards to allow complete compatibility. For EGA displays, the CGA intensity pin was used for the
intensity signal of the green gun. Additional intensity signals were added for the red and blue guns.
Table 17.12 lists the signal assignments.

Table 17.12. EGA Connector Signal Assignments


Pin Function
1 Ground
2 Secondary red
3 Primary red
4 Primary green
5 Primary blue
6 Secondary green
7 Secondary blue
8 Horizontal sync
9 Vertical sync

Because EGA boards can supply signals complying with the EGA, CGA, or MDA standards, you
must know which standard an EGA board is set to use before connecting a monitor. Otherwise it may
cause damage just as it would for a CGA-MDA mismatch.

Professional Graphics Adapter

The first display system to use analog signals was IBM's specialized Professional Graphics Adapter.
Designed for RISC-based workstations, the PGA system introduced many concepts that would find
their way into VGA. Despite the similarity between its signals and those of VGA, the PGA system
used a different connector, one based on the same nine-pin D-shell shared by all the other IBM
display systems pre-dating VGA. Table 17.13 lists the signal assignment of this connector.

Table 17.13. PGA Connector Signal Assignments

Pin Function
1 Red
2 Green
3 Blue
4 Composite sync
5 Mode control
6 Red ground return
7 Green ground return
8 Blue ground return
9 Ground


Fifteen-Pin High-Density D-Shell Connectors

The most common connector on PC monitors is the 15-pin high-density D-shell connector. Originally
put in use by IBM for its first VGA monitors, it has been adopted as an industry standard for all but
the highest performance computer displays.
Because the signals generated by the VGA are so different from those of previous IBM display
systems, IBM finally elected to use a different, incompatible connector so the wrong monitor wouldn't
be plugged in with disastrous results. Although only nine connections are actually needed by the VGA
system (eleven if you give each of the three video signals its own ground return as IBM specifies), the
new connector is equipped with 15 pins. It's roughly the same size and shape as a nine-pin D-shell
connector but before IBM's adoption of it, this so-called high-density 15-pin connector, as shown in
Figure 17.14, was not generally available. Nearly all of today's VGA-based display systems (including
SuperVGA) use this connector.
Figure 17.14 A 15-pin high-density D-shell jack.

In addition to allowing for four video signals (three primary colors and separate sync) and their
ground returns, the VGA connector provides a number of additional functions. In the original VGA
design, it enabled the coding of both monitor type and the line count of the video signal leaving the
display adapter. The modern adaptation of the connector to the VESA DDC standard redefines several
pins for carrying data signals. Table 17.14 lists the signal assignments used by this connector for the
basic VGA and SuperVGA systems.

Table 17.14. VGA and SuperVGA Connector Pin-Out

Pin Function
1 Red video
2 Green video
3 Blue video
4 Reserved
5 Ground
6 Red return (ground)
7 Green return (ground)
8 Blue return (ground)
9 Composite sync
10 Sync return (ground)
11 VESA Display Data Channel
12 Reserved
13 Horizontal sync
14 Vertical sync

15 VESA Display Data Channel

IBM's 8514 and 8515 displays as well as 8514/A and XGA display adapters also use the same
connector even though they at times use different signals. Again, however, IBM has incorporated
coding in the signals to ensure that problems do not arise. The 8514/A and XGA adapters can sense
the type of display connected to them, and do not send out conflicting signals. The 8514 and 8515
monitors operate happily with VGA signals, so problems do not occur if one is plugged into an ordinary VGA output.
The advantage of the 15-pin connector is convenience. One cable does everything. On the downside,
the connector is not arranged for proper high speed operation and its deficiencies can limit high
frequency performance, which in video terms equates to sharpness when operating at high resolutions
and refresh rates. Consequently, the highest resolution systems often forego the 15-pin connector for
separate BNC connectors for each video channel.

BNC Connectors

True high resolution systems use a separate coaxial cable for every signal they receive. Typically,
they use BNC connectors to attach these to the monitor. They have one very good reason. Connectors
differ in their frequency handling capabilities, and capacitance in the standard 15-pin high-density
D-shell connector can limit bandwidth, particularly as signal frequencies climb into the range above
30 MHz. BNC connectors are designed for frequencies into the gigahertz range, so they impose few
limits on ordinary video signals.
Monitors can use either three, four, or five BNC connectors for their inputs. A three-connector system
integrates both horizontal and vertical synchronizing signals with the green signal. The resulting mix
is called sync-on-green. Others use three connectors for red, green, and blue signals and a fourth for
horizontal and vertical sync combined. This scheme is called composite sync. Five connector systems
use three connectors for the color signals, plus one for horizontal sync and one for vertical sync. These are called separate
sync systems.

Audio

Although the number of monitors with audio as well as video capabilities (particularly those with composite inputs) had been declining, such monitors have enjoyed a resurgence over the last year. IBM, Apple, and many clone makers are adding audio (both input and output) to their monitors. This can be useful in at least two
cases—to take advantage of the new voice synthesis and voice digitization options now becoming
available for PC systems, and to amplify the three-voice audio output of the PCjr. Most monitor audio
amplifiers, even those with modest specifications (limited audio frequency bandwidth and output
powers less than a watt), can handle either job adequately.
Only recently has music making become a passion among PC makers, and even the most favored
systems still relegate audio to plug-in accessories. Although you can add accessories to transform the
musical mission of your PC, you'll also want to add better quality audio circuitry than you'll get with
any PC. A patch cord to connect the add-on accessories to your stereo system will do just fine.

Enhanced Video Connector

Among the other problems with the standard VGA connector that's so popular on today's monitors,
one stands in the way of getting truly high resolution images on your monitor—limited bandwidth.
The VGA connector has a maximum bandwidth of about 150 MHz, so it already constrains the quality
of some display systems. Conventional high frequency video connectors (primarily BNC) require a
separate cable for each video signal and allow no provisions for auxiliary functions such as monitor
identification.
The VESA Enhanced Video Connector is designed to solve these video and identification problems
and incorporate sufficient additional signals to permit linking complete multimedia systems with a
single plug. The final EVC standard allows for four wide bandwidth video signals along with 30 other
data connections in a connector not much larger than today's VGA.
Unlike other connector standards, the EVC was not designed to accommodate any new kinds of
signals. Instead it is a carrier for existing interconnection standards. It allows grouping together nearly
all of the next generation of high speed computer connections in a single cable. The point is that you
can put your PC on the floor, run an EVC connection to your monitor, and connect all your desktop
accessories—keyboard, mouse, printer—to your monitor. The snarl of cables leading to the distant PC
disappears thanks to the magic of the EVC.
In addition to RGB video with composite sync, the EVC standard includes provisions for composite
and S-video signals as well as international video standards (PAL and SECAM) in systems that
support DDC for monitor identification and negotiation. Connections are also provided for analog
audio signals (both inputs and outputs). To connect additional peripherals, EVC also accommodates
both Universal Serial Bus and IEEE 1394 ("FireWire") high speed serial interface signals. Other
connections provide DC power for charging notebook computers.
The centerpiece of the EVC design is the Molex MicroCross connection system which uses four pins
for video connections separated by a cross-shaped shield. The design links coaxial cables and allows
for bandwidths of 500 MHz. The additional data signals are carried through 30 additional contacts
arranged in a 3 by 10 matrix. Figure 17.15 shows the contact arrangement on an EVC connector.
Figure 17.15 The VESA Enhanced Video Connector.

Although the EVC design is not a true coaxial connector, it provides a good impedance match (within
5 percent of a nominal 75 ohms) and shielding that is 98 percent effective. The signal bandwidth of
the connection is approximately 2 GHz.
All EVC connectors do not have to include all the signals specified by the standard. System designers
are free to choose whatever signals they would like to include. That said, VESA recommends that
manufacturers adopt one of three levels of support or signal sets for EVC for their products. These
include Basic, Multimedia, and Full.
The Basic signal set is the minimum level of support required for devices using EVC. It includes only
the video signal lines and DDC. At the Basic level, EVC operates as a standard video connector much
like today's VGA connector but with greatly improved bandwidth. The DDC connection allows the
monitor and its host to negotiate the use of higher resolution signals. VESA recommends that the
Basic signal set be included in any subset of the EVC signals that a system designer chooses to
support.
The Multimedia signal set adds audio support to the Basic signal set, allowing a single cable to carry
video and audio to a suitable monitor. VESA foresees that the Multimedia signal set will usually be
supplemented with USB signaling.
The Full configuration includes all of the signals provided under the EVC standard.
EVC connectors used for reduced signal sets such as Basic or Multimedia need not include physical
pins or contacts for the unused connections. For example, a Basic EVC video connector may have
only 2 of 30 subsidiary pins, those for the DDC link.

http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/wrh17.htm (53 de 53) [23/06/2000 06:13:57 p.m.]


Winn L. Rosch Hardware Bible, Electronic Edition, Chapter 18

Chapter 18: Audio


The noise making spectrum of PCs ranges from the beeps and squeaks of the tiny internal
speaker to an aural rush equal in quality to today's best stereo CDs. PCs can generate,
manipulate, record, and play back sounds of all sorts and even control other noise
makers such as music synthesizers. Today's high quality sound capability distinguishes
multimedia PCs from ordinary, visual bound systems.

■ Background
■ Analog Audio
■ Frequency
■ Amplitude
■ Decibels
■ Impedance
■ Distortion
■ Digital Audio
■ Sampling Rate
■ Resolution
■ Bandwidth
■ Synthesis
■ Subtractive Synthesis
■ Additive Synthesis
■ FM Synthesis
■ Wave Table Synthesis
■ Advanced Techniques
■ Internet Sound Systems
■ Compression
■ TrueSpeech
■ MPEG
■ Looped Sound
■ Hardware
■ Basic Sound System
■ Tone Generator
■ Amplifier
■ Loudspeaker
■ Drivers
■ Sound Boards
■ Compatibility
■ Control
■ Quality
■ Transducers
■ Microphones
■ Loudspeakers
■ MIDI
■ Background
■ Interface
■ Connectors
■ Wiring
■ Protocol
■ Messages
■ Operation
■ Tone Coding
■ Standards
■ General MIDI
■ Basic and Extended MIDI
■ GS Format
■ Downloadable Sounds
■ XMIDI
■ MIDI Manufacturers Association
■ Installation
■ System Resources
■ Audio Connections
■ Speaker Wiring
■ Amplifier Wiring

18

Audio

Of the five senses, most people only experience personal computers with four: touch, smell, sound,
and sight. Not that computers are tasteless—although a growing body of software doesn't even aspire
that high—but most people don't normally drag their tongues across the cases of their computers.
Touch is inherent in typing and pushing around a mouse or digitizer cursor. Smell is more limited
still—what you appreciate in opening the box holding your new PC or what warns you when the fan
in the power supply stops, internal temperatures rise, and roasting resistors and near inflammatory
components begin to melt.
Most interactions with PCs involve sight: what you see on the monitor screen and, if you're not a
touch typist, a peek down at the keyboard. High resolution graphics make sight perhaps the most
important part of any PC—or at least the most expensive.
To really experience your PC, however, you need an added sensual dimension—sound. In fact, sound
distinguishes the ordinary PC from today's true multimedia machine. Most PCs are mainly limited to
visual interaction. A multimedia PC extends the computer's capabilities of interacting with the world
to include sound. It can generate sounds on its own, acting like a music synthesizer or noise generator,
and it can control external devices that do the same thing through a MIDI interface. It can record or
sample sounds on any standard computer medium (the hard disk being today's preferred choice) with
sonic accuracy every bit as good as (even better than) commercial stereo CDs. All the sounds it makes
and stores can be edited and manipulated: tones can be stretched; voices shifted; noises combined;
music mixed. It can play back all the sounds it makes and records with the same fidelity, pushing the
limits of even the best stereo systems.
Unfortunately, the native endowment of most PCs is naught but a squeaker of a loudspeaker that
makes soprano Mickey Mouse sound like the Mormon Tabernacle Choir in comparison. The
designers of the first PCs simply thought sound unnecessary. After all, the noise that calculating
machines made was to be avoided. All they thought important were warning signals, so that's all the
PC got. Images fared little better: text screens gave little hint of the graphic potential of today's PC.
The audible omission of the PC's designers can be corrected by adding a sound board. A basic
requirement of a multimedia PC, the sound board gives your PC the capability to synthesize and
capture a variety of sounds, play them back, and control external devices.

Background

Sound is a physical phenomenon, best understood as a rapid change in air pressure. When a physical
object moves, it forces air to move also. After all, air and the object cannot take up the same place at
the same time. Air is pushed away from the place the object moves to and rushes into the empty place
where the object was. Of course, the moving air has to come from somewhere. Ideally, the air pushed
out of the way would retreat to the vacuum left behind when the object moved. Unfortunately, the air,
much like any physical entity, cannot instantly transport itself from one place to another. The speed at
which the air moves depends on its density; the higher the pressure, the greater the force pushing the
air around. Indeed, moving the object creates an area of high pressure in front of it—where the air
wants to get out of the way—and low pressure behind. Air is dumb—or in today's politically correct
language—knowledge challenged. The high pressure doesn't know that an exactly matching area of
low pressure exists behind the object, so the high pressure pushes out in all directions. As it does, it
spreads out and the pressure decreases.
Simply moving an object creates a puff of air. Sound arises when the object moves rapidly, vibrating.
As it moves one way, it creates the high pressure puff that travels off. It moves back, and a
corresponding low pressure pulse pops up and follows the high pressure. As the object vibrates, a
steady train of these high- and low-pressure fronts moves away from it.
The basic principles of sound should be obvious from this rudimentary picture. Sound requires a
medium for transmission. The speed of sound depends not on the moving object but on the density of
the air (or other medium). The higher the density, the faster the sound moves. The intensity of the
sound pressure declines with distance as more and more air interacts with the
compression-decompression cycles. Unconstrained, this decline would follow the infamous inverse
square law because as the sound travels in one dimension, it must spread over two. By confining or
directing the air channel, however, you can alter the rate of this decay.
Human beings have a mechanism called the ear that detects pressure changes or sound waves. The ear
is essentially a mechanical device, a transducer, that's tuned to react to the pressure changes that make
up what we call sound.
To fit its digital signals into this world of pressure waves is a challenge for the PC in several ways.
The PC needs a convenient form for manipulating sound. Fortunately, sound has an analog in the
electronic world called analog audio, which uses electrical signals to represent the strengths of the
pressure waves. PCs turn these electrical signals into digital audio that's compatible with
microprocessors, other digital circuits, and sound systems. Of course, the PC is not limited to sounds
supplied by others—it can create the digital signals itself, a process called synthesis. To turn those
digital signals back into something that approaches sound—back to audio again—your PC uses its
own audio circuitry or a sound board that includes both a digital to analog converter and an amplifier.
(The sound board also likely contains a synthesizer of some kind.) Finally, your PC plugs into
loudspeakers that convert the audio into pressure waves once again.

Analog Audio

Sound is an analog phenomenon. It has two primary characteristics—loudness (or amplitude) and
frequency—that vary over a wide range. Sounds can be loud or soft or any gradation in between.
Frequencies can be low or high or anything in between.

Frequency

Sound frequencies are measured in Hertz, just like the frequencies of computer clocks or radio
signals. The range of frequencies that a human being can hear depends on his age and sex. Younger
and female ears generally have wider ranges than older and male ears. Most sources list the range of
human hearing as being from 20 Hertz to 15,000 Hertz (or as high as 20,000 Hertz if your ears are
particularly good).
Lower frequencies correspond to bass notes in music and the thumps of explosions in theatrical
special effects. High frequencies correspond to the "treble" in music, the bright, even harsh sounds
that comprise overtones in music—the brightness of strings, the tinkle of jingle bells—as well as hissy
sounds like sibilants in speech, the rush of waterfalls, and overall background noise.
Low frequencies have long wavelengths, in the range of ten feet (three meters) for middle bass notes.
The long wavelengths allow low frequencies to easily bend around objects and, from a single speaker,
permeate a room. Moreover, human hearing is not directionally sensitive at low frequencies. You
cannot easily localize a low frequency source. Acoustical designers exploit this characteristic of low
frequencies when they design low frequency loudspeakers. For example, because you cannot
distinguish the locations of individual low frequency sources, a single speaker called a subwoofer is
sufficient for very low frequencies even in stereo and multi-channel sound systems.
High frequencies have short wavelengths, measured in inches or fractions of an inch (or centimeters).
They can easily be blocked or reflected by even small objects. Human hearing is acutely sensitive to
the location of higher frequency sources.
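The relationship between frequency and wavelength described above is a simple division. Here is a minimal sketch (the helper name and the round-number speed of sound are mine, not from the text):

```python
SPEED_OF_SOUND = 343.0  # meters per second, in air at room temperature

def wavelength_m(freq_hz):
    """Wavelength in meters of a sound wave at the given frequency."""
    return SPEED_OF_SOUND / freq_hz

# A 100 Hz middle-bass note spans about 3.4 meters (roughly ten feet);
# a 10,000 Hz treble tone spans only about 3.4 centimeters.
```

The short wavelengths at the top of the audible range are why small obstacles cast acoustic shadows for treble but not for bass.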

Amplitude

Amplitude describes the strength or power of the sound. The amplitude of sound traveling through the
air is usually expressed as its sound pressure level or SPL. The threshold of human hearing is about
0.0002 microbar—which means a pressure change of 1/5,000,000,000th (one five billionth) of normal
atmospheric pressure. In other words, the ear is a sensitive detector of pressure changes. Were it any
more sensitive, you might hear the clink of Brownian motion as dust particles ricochet through the air.
In audio systems, electrical signals take the place of sound pressure waves. These signals retain the
characteristic frequencies of the original sounds, but their amplitude refers to variations in electrical
strength. Usually the voltage in an audio system represents the amplitude of pressure of the original
sound waves.

Decibels

A term that you'll usually see engineers use in measuring amplitude loudness is the decibel. Although
the primary unit is actually the Bel, named after Alexander Graham Bell, the inventor of the hydrofoil
(and, yes, the telephone), engineers find units of one-tenth that quantity to be more manageable.
The decibel represents not a measuring unit but a relationship between two measurements. The Bel is
the ratio between two powers expressed as a logarithm. For example, a loud sound source may have
an acoustic power of one watt while a somewhat softer source may only generate one milliwatt of
power, a ratio of 1000:1. The logarithm of 1000 is 3, so the relationship is 3 Bels or 30 decibels—one
watt is 30 decibels louder than one milliwatt.
In addition to reducing power relationships to manageable numbers, decibels also approximately
coincide with the way we hear sounds. Human hearing is also logarithmic. That means that something
twice as loud to the human ear does not involve twice the power. For most people, for one sound to
appear twice as loud as another, it must have ten times the power. Expressed in dB, this change is an increase in level of 10 dB, because the logarithm of 10 is 1, making the relationship 1 Bel or 10 decibels. Doubling the power, by comparison, raises the level only about 3 dB, because the logarithm of 2 is roughly 0.3.
Engineers also use the decibel to compare voltages and sound pressures. The relationship and math
are different, however, because in a circuit in which everything else is held constant, the voltage will
be proportional to the square root of the power. Consequently, a doubling of voltage represents a 6 dB
change.
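In code, the decibel arithmetic above reduces to a pair of one-line formulas. This sketch (the function names are mine) shows the 10-log rule for power ratios and the 20-log rule for voltage ratios:

```python
import math

def db_from_power_ratio(ratio):
    """Decibels corresponding to a ratio between two powers."""
    return 10.0 * math.log10(ratio)

def db_from_voltage_ratio(ratio):
    """Decibels for a voltage ratio; power varies as the square of voltage."""
    return 20.0 * math.log10(ratio)

# One watt versus one milliwatt (a 1000:1 power ratio) is 30 dB;
# doubling a voltage is an increase of about 6 dB.
```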
Most commonly you'll see dB used to describe signal to noise ratios and frequency response
tolerances. The unit is apt in these circumstances because it is used to reflect relationships between
units. Sometimes, however, people express loudness or signal levels as a given number of "dB." This
usage is incorrect and meaningless because it lacks a reference value. When the reference unit is
understood or specified, however, dB measurements are useful.
Any unit may be used as a reference for measurements expressed in dB. Several of these have
common abbreviations, as listed in Table 18.1.

Table 18.1. Reference Units in dB Measurements

Abbreviation Reference unit


0 dBj 1 millivolt
0 dBv 1 volt
0 dBm 1 milliwatt
0 dBk 1 kilowatt

In sound systems, the dBm system is most common. The electronic industry adopted (in May 1939) a
power level of one milliwatt in a 600 ohm circuit as the standard reference for zero dBm.

You'll also encounter references to volume units or VU, as measured by the classic VU meter. A true
VU meter is strictly defined and the zero it indicates reflects a level 4 dB above 0 dBm. The meters on
tape and cassette recorders have VU designations but are not, strictly speaking, VU meters. They are
usually referenced to a specific recording level on the tape. Similarly, meters on radio transmitters are
often calibrated in VU, but zero corresponds not to the input line level but the output modulation
level—0 VU usually means 100 percent modulation.

Impedance

Nearly all practical electrical circuits waste some of the electricity flowing through them, the
exception being superconducting designs made from exotic materials that operate at very low
temperatures. This electrical waste becomes heat. It makes both toasters and microprocessors hot.
Engineers call the characteristic of a direct current circuit that causes this waste resistance because it
describes how a material resists the flow of electricity. Resistance is measured in a unit called the ohm
after German physicist G. S. Ohm. Neat electrical trivia: the opposite of resistance is conductivity, for
which the measuring unit is the mho.
The alternating current used by audio circuits complicates matters. Some electrical devices conduct
some frequencies better than others. This frequency sensitive opposition to the flow of alternating
current is called reactance. The sum of the resistance and reactance of a circuit at a given frequency is
called its impedance.
Impedance is an important measure in audio circuits. It governs how well the two ends of a
connection in an electrical circuit match. When the impedance of a source does not match the
impedance of the target device, electrical power gets wasted. In theory, matching impedances is the
best policy—a circuit achieves optimum power transfer with matched impedances.
Power is a primary concern when mating amplifiers and loudspeakers, so impedance matching is a
primary tactic when connecting speakers. Moreover, if a speaker has too low an impedance, it may
draw currents in excess of the capabilities of the output circuits of the amplifier, potentially
overloading and perhaps damaging the amplifier. Impedance matching is also important in high
frequency network signals because the energy that is not transferred becomes noise in the network
that may interfere with its proper operation.
In low level audio circuits, power transfer is not a critical issue. More important is the signal voltage
because low level circuits are generally treated as voltage amplifiers. The critical issue is that the
voltage levels expected by the circuits you connect together are the same. Most low level circuits use
bridging connections in which a high impedance input gets connected to a low impedance output.
Bridging connections waste power, but that's generally not a concern with voltage amplifiers.
These unmatched impedances work to your benefit in two particular instances with PCs and their
sound systems. Many sound boards provide only speaker outputs. You can usually connect these
directly to the AUX input of a stereo amplifier without overpowering the circuits because of the
impedance mismatch. For example, a 1-watt amplifier with an 8-ohm output impedance produces only
about 125 millivolts, just about right for AUX inputs rated for 100 to 150 millivolt signals. Similarly,
most headphones have high internal impedances (for example, 600 ohms), so they can plug into
speaker outputs without submitting their transducers and your ears to the full power of the amplifier.
Plug 600-ohm headphones into an 8-ohm amplifier output, and only 1/75th of the power gets through.
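The headphone arithmetic works out as follows. This sketch (the function name is mine) assumes the amplifier behaves as a voltage source, delivering the same output voltage regardless of load:

```python
def relative_power(rated_load_ohms, actual_load_ohms):
    """Fraction of the rated load's power that a different load draws
    at the same output voltage (power = voltage squared / resistance)."""
    return rated_load_ohms / actual_load_ohms

# 600-ohm headphones on an 8-ohm speaker output draw 8/600
# of the power an 8-ohm speaker would: about 1/75th.
```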

Distortion

No one is perfect, and neither are audio amplifiers. All analog audio amplifiers add subtle defects to
the sound called distortion. In effect, distortion adds unwanted signals—the defects—to the desired
audio. The most common way to express distortion is the ratio between the unwanted and wanted
signals expressed as a percentage. In modern low level circuits, the level of added distortion is
vanishingly small, hundredths of a percent or less, and is often lost in the noise polluting the signal.
Only if your hearing is particularly acute might you be able to detect the additions.
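That percentage measure can be sketched in a few lines. The function name is mine, and this is the usual total-harmonic-distortion style of calculation, which the text does not spell out; multiple unwanted components combine on a root-mean-square basis:

```python
import math

def distortion_percent(wanted_rms, unwanted_rms_components):
    """Ratio of unwanted signal to wanted signal, as a percentage.
    Unwanted components are summed on an RMS basis."""
    unwanted = math.sqrt(sum(u * u for u in unwanted_rms_components))
    return 100.0 * unwanted / wanted_rms

# A 1-volt tone accompanied by 0.03 V and 0.04 V spurious products
# measures 5 percent distortion.
```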
In power amplifiers, the circuits that produce the high level signals used to operate non-powered
loudspeakers, the addition of distortion can rise to levels that are not only noticeable but also
objectionable, from one-tenth to several percent. Power amplifier distortion also tends to increase as
the level increases—as you turn the volume up.
Better amplifiers produce less distortion. This basic fact has direct repercussions when it comes to
getting the best sound quality from a PC. The sound boards typically found in PCs produce a lot of
distortion. You can often get appreciably better audio quality by plugging your stereo system (which
is designed for low distortion even at high power levels) into the low level outputs of your sound
board so that the audio signal avoids the sound board's own power amplifier. In fact, better quality
sound boards often lack power amplifiers in the recognition that any included circuitry is apt to be of
lower sound quality due to the restrictions of installing them on a circuit board with limited power
supply capacity.

Digital Audio

Computers, of course, use digital signals, as do many modern stereo components such as Compact
Disc players and Digital Audio Tape systems. Once a sound signal has been translated from analog
into digital form it becomes just another form of data that your PC can store or compute upon. Digital
technology adds new terms to the audio vocabulary and raises new concerns.
Digital recording of sound turns music into numbers. That is, a sound board examines audio
waveforms thousands of times every second and assigns a numerical value to the strength of the sound
every time it looks; it then records the numbers. To reproduce the music or noise, the sound board
works backward. It takes the recorded numbers and regenerates the corresponding signal strength at
intervals exactly corresponding to those at which it examined the original signal. The result is a
near-exact duplication of the original audio.
The digital recording process involves several arbitrary variables. The two most important are the rate
at which the original audio signal is examined—called the sampling rate—and the numeric code
assigned to each value sampled. The code is digital and is defined as a given number of bits, the bit
depth or resolution of the system. The quality of sound reproduction is determined primarily by the
values chosen for these variables.

Sampling Rate

The sampling rate limits the frequency response of a digital recording system. The highest frequency
that can be recorded and reproduced digitally is half the sampling frequency. This top frequency is
often called the Nyquist frequency. Higher frequencies become ambiguous and can be confused with
lower frequency values, producing distortion. To prevent problems, frequencies higher than half the
sampling frequency must be eliminated—filtered out—before they are digitally sampled. Because no
audio filter is perfect, most digital audio systems have cut-off frequencies somewhat lower than the
Nyquist frequency. The Compact Disc digital audio system is designed to record sounds with
frequencies up to about 20 KHz, and it uses a sampling rate of 44.1 KHz. Table 18.2 lists the
sampling rates in common use in a variety of applications.

Table 18.2. Common Digital Sampling Rates

Rate (Hz) Application


5563.6 Apple Macintosh, lowest quality
7418.3 Apple Macintosh, low quality
8000 Telephone standard
8012.8 NeXT workstations
11,025 PC, low quality (1/4 CD rate)
11,127.3 Apple Macintosh, medium quality
16,000 G.722 compression standard
18,900 CD-ROM/XA long-play standard
22,050 PC, medium quality (1/2 CD rate)
22,254.5 Basic Apple Macintosh rate
32,000 Digital radio, NICAM, long-play DAT, HDTV
37,800 CD-ROM/XA higher-quality standard
44,056 Professional video systems
44,100 Basic CD standard
48,000 DVD, Audio Codec '97, Professional audio recording
96,000 DVD at highest audio quality

The odd numbers used by some of the standards are often less arbitrary than they look. For example,
the 22,254.5454 Hz rate used by the Apple Macintosh system matches the horizontal line rate of the
video display of the original 128K Macintosh computer system. For Mac people, that's a convenient
number. The 44,056 rate used by some professional video systems is designed to better match the
sampling rate to the video frame rate.
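The folding behavior around the Nyquist frequency can be sketched in a few lines (the helper names are mine):

```python
def nyquist_hz(sample_rate_hz):
    """Highest frequency a sampling rate can capture unambiguously."""
    return sample_rate_hz / 2.0

def alias_hz(freq_hz, sample_rate_hz):
    """Frequency actually reconstructed when a pure tone is sampled;
    tones above the Nyquist frequency fold back down into the band."""
    f = freq_hz % sample_rate_hz
    return min(f, sample_rate_hz - f)

# At the 44.1 KHz CD rate, a 10 KHz tone survives intact, but an
# unfiltered 30 KHz tone masquerades as 14.1 KHz, which is why
# digital systems filter the input before sampling.
```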

Resolution

The number of bits in a digital code or bit depth determines the number of discrete values it can
record. For example, an eight-bit digital code can represent 256 distinct objects, be they numbers or
sound levels. A recording system that uses an 8-bit code can thus record 256 distinct values or steps in
sound levels. Unfortunately, music and sounds vary smoothly rather than in discrete steps. The
difference between the digital steps and the smooth audio value is distortion. This distortion also adds
to the noise in the sound recording system. Minimizing distortion and noise means using more steps.
High quality sound systems—that is, CD-quality sound—require a minimum of 16-bit code.
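The step counts and their noise consequences follow directly from the bit depth. This sketch uses the standard textbook rule of thumb of roughly 6 dB of dynamic range per bit, a formula that is my addition, not from the text:

```python
def quantization_steps(bits):
    """Number of distinct levels an n-bit code can represent."""
    return 2 ** bits

def dynamic_range_db(bits):
    """Approximate signal-to-quantization-noise ratio for a full-scale
    sine wave: 6.02 dB per bit plus a 1.76 dB constant."""
    return 6.02 * bits + 1.76

# 8 bits give 256 steps (about 50 dB of range); 16 bits give
# 65,536 steps (about 98 dB), which is why CD audio sounds so clean.
```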

Bandwidth

Sampling rate and resolution determine the amount of data produced during the digitization process,
which in turn determines the amount that must be recorded. In addition, full stereo recording doubles
the data needed because two separate information channels are required. The 44.1 KHz sampling
frequency and 16-bit digital code of stereo CD audio result in the need to process and record about
1.4 million bits (176,400 bytes) of data every second, roughly ten megabytes per minute.
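The arithmetic behind those figures is simple multiplication. A minimal sketch (the function name is mine):

```python
def pcm_bytes_per_second(sample_rate_hz, bits_per_sample, channels):
    """Uncompressed digital audio data rate in bytes per second."""
    return sample_rate_hz * bits_per_sample * channels // 8

cd_rate = pcm_bytes_per_second(44100, 16, 2)  # 176,400 bytes per second
per_minute = cd_rate * 60                     # about 10.6 million bytes per minute
```

Dropping to 11 KHz sampling, 8 bits, and one channel cuts the rate to 11,025 bytes per second, which is why those values suit Internet-bound audio.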
For full CD compatibility, most newer sound boards have the capability to digitize at the CD level.
Intel's Audio Codec '97 specification requires a bit more, a 48K sampling rate, and undoubtedly
stereophiles will embrace the extraordinarily high optional 96K sampling rate allowed by the DVD
standard. For most PC operations, however, less can be better—less quality means less data to save in
files and ship across the Internet. The relatively low quality of loudspeakers attached to PCs, the
ambient background noise in offices, and the noise that the PC's own fan and disks generate all conspire to make the nuances of top quality sound inaudible anyway.
To save disk space and processing time, PC sound software and most sound boards give you the
option of using less resource intensive values for sampling rate and bit depth. Moreover, many older
sound boards were not powerful enough for full CD quality. Consequently, you will find sound boards
that support intermediary sampling frequencies and bit densities. Many older sound boards also limit
themselves to monophonic operation. The MPC specification only requires 8-bit digitization support.
Most sound boards support 22 and 11 KHz sampling; some offer other intermediate values such as 8,
16, or 32 KHz. You can trim your data needs in half simply by making system sounds monophonic
instead of stereo.
If you are making original recordings of sounds and music, you will want to use as high a rate as is
consistent with your PC's resources. Often the application will dictate your format. For example, if
you want to use your CD-R drive to master audio CDs, you'll need to use the standard CD format,
stereo 16-bit quantization at a 44.1 KHz sampling rate. On the other hand, the best trade off between
quality and bandwidth for Internet-bound audio is 11 KHz sampling with 8-bit quantization.
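The trade-offs among these formats are easy to tabulate; the format labels below are illustrative pairings, not an official list:

```python
def data_rate(sample_rate_hz, bits, channels):
    """Uncompressed audio data rate in bytes per second."""
    return sample_rate_hz * bits // 8 * channels

formats = {
    "CD master (44.1 KHz, 16-bit stereo)": data_rate(44_100, 16, 2),
    "Radio-like (22 KHz, 8-bit mono)":     data_rate(22_050, 8, 1),
    "Internet voice (11 KHz, 8-bit mono)": data_rate(11_025, 8, 1),
}
for name, rate in formats.items():
    print(f"{name}: {rate:,} bytes/sec")
```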
Note that the format of the data sets a limit on quality without determining the actual quality of what
you will hear. In other words, you can do better than the quality level you set through choice of bit
depth and sampling rate. The shortcomings of practical hardware, particularly inexpensive sound
boards and loudspeakers, destine the quality of the sound that actually makes it into the air to be less
realistic than the format may allow.

Synthesis

Making a sound electronically is easy. After all, any AC signal with a frequency in the range of
human hearing makes a noise when connected to a loudspeaker. Even before the age of electronics,
Hermann Helmholtz discovered that any musical tone is made from vibrations in the air that
correspond to a periodic (but complex) waveform. Making an electronic signal sound like something
recognizable is not so simple, however. You need exactly the right waveform.
The basic frequency generating circuit, the oscillator, produces a very pure tone, so pure that it sounds
completely unrealistic—electronic. Natural sounds are not a single frequency but collections of many,
related and unrelated, at different strengths.
A tone from a musical instrument, for example, comprises a single characteristic frequency
(corresponding to the note played) called the fundamental and a collection of other frequencies, each a
multiple of the fundamental, called overtones by scientists or partials by musicians. The relationship
of the loudness of the overtones to one another gives the sound of the instrument its distinctive
identity, its timbre, and makes a note played on a violin sound different from the same note played on
a flute. Timbre is a product of the many resonances of the musical instrument, which tend to reinforce
some overtones and diminish others.
Noises differ from musical tones because they comprise many, unrelated frequencies. White noise, for
example, is a random collection of all frequencies.
The one happy result of all sounds being combinations of frequencies (a principle discovered in
relation to periodic waves by Jean Baptiste Joseph Fourier in the early 19th century) is that creating any
sound requires only putting together frequencies in the right combination. So synthesizing sounds
should be easy—all you need to know is the right combination. At that, synthesis becomes a little
daunting. Trial and error experimentation at finding the right combinations is daunting at best because
the number of frequencies and the possible strengths of each frequency are both infinite, so you end
up dealing with numbers that strain most pocket calculators—like infinity times infinity. Add in the
fact that natural sounds vary from one instant to the next, meaning that each instant represents a
different frequency combination, giving you yet another infinity to deal with, and sound synthesis
suddenly seems to slip to the far side of impossible.
In truth, the numbers are much more manageable than the dire situation outlined in the preceding
paragraph. For example, musical sounds involve only a few frequencies—the fundamental and
overtones within the range of human hearing (both in frequency range and strength). But synthesizing
sounds from scratch remains a challenge. Reality cannot yet be synthesized. Even the best synthesis
systems only approach the sound of real world musical instruments and not-so-musical noises. The
best—or most real—sound quality produced by a sound board is thus not synthesized but recorded.
Even the best synthesizers sound like synthesizers even when they attempt to emulate an acoustical
instrument such as a piano. You don't have to be particularly musically attuned to distinguish a real
instrument from a synthesized one. But that's not necessarily a quality judgment. Some synthesizers
sound better than others. Although one may not be more realistic sounding than another, it may be
more pleasing. In fact, high end synthesizers are treasured for their unique musical characteristics
much as Steinway or Bosendorfer pianos are. Just don't expect a synthesizer to exactly replicate the
sound of your Steinway.
Electronic designers have come up with several strategies that synthesize sound with varying degrees
of success. Many techniques have been used to make synthesizers, including subtractive synthesis,
additive synthesis, frequency modulation, and wave table synthesis. Each is discussed in the following
sections.

Subtractive Synthesis

The first true music synthesizers (as opposed to electronic instruments, which seek to replicate rather
than synthesize sounds) used analog technology. The first of these machines was created in the late
1950s and was based on the principle of subtractive synthesis. These early synthesizers generated
tones with special oscillators called waveform generators that made tones already rich in harmonics.
Instead of the pure tones of sine waves, they generated square waves, sawtooth waves, and odd
intermediary shapes. In itself, each of these oscillators generated a complex wave rich in harmonics
that had its own distinctive sound. These initial waveforms were then mixed together and shaped
using filters that emphasized some ranges of frequencies and attenuated others. Sometimes one tone
was used to modulate another to create waveforms so strange they sounded like they originated in
foreign universes.
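The subtractive approach can be sketched in a few lines: start with a harmonics-rich square wave, then filter away some of its brightness. The one-pole filter here is the simplest possible stand-in for a real synthesizer's filter section:

```python
import math

def square_wave(freq, sample_rate, n_samples):
    """A harmonics-rich square wave, the kind of raw material a
    subtractive synthesizer's waveform generator produces."""
    return [1.0 if math.sin(2 * math.pi * freq * n / sample_rate) >= 0
            else -1.0 for n in range(n_samples)]

def low_pass(samples, alpha=0.1):
    """One-pole low-pass filter: attenuates the upper harmonics,
    'subtracting' brightness from the raw waveform."""
    out, y = [], 0.0
    for x in samples:
        y += alpha * (x - y)
        out.append(y)
    return out

raw = square_wave(440, 44_100, 1_000)
mellow = low_pass(raw)   # same pitch, duller timbre
```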
Analog synthesis was born in an age of experimentation when the farthest reaches of new music were
being explored. Analog synthesizers made no attempt to sound like conventional instruments—after
all, conventional instruments could already do that and the outposts of the avant garde had traipsed
far beyond the fringes of conventional music. The goal of analog synthesis was to create new
sounds—sounds not found in nature; sounds never before heard; sounds like the digestive system of
some giant dyspeptic dinosaur. Analog synthesizers sounded unmistakably electronic.
As the depths of new music were being plumbed, digital technology appeared as an alternative to
analog designs. The first digital synthesizers sought merely to duplicate the function of the analog
units using an alternate technology that gave greater control. In fact, digital synthesis gave so much
control over sounds that it became possible not just to create new sounds, but also to create (or at least
approximate) any sound. The goal of synthesis also shifted to mimicking conventional
instruments—that is, expensive, hand-crafted instruments—with cheap, sound-nearly-alike digital
substitutes. With one mass produced electronic box—the digital synthesizer—a musician could put an
entire orchestra at his fingertips.

Additive Synthesis

Recreating the sounds of actual instruments required entirely different technologies than had been
used in new-sound synthesizers. The opposite of a subtractive synthesizer is an additive synthesizer.
Instead of starting with complex waves and filtering away the unwanted parts, the additive synthesizer
builds sounds in the most logical way—by adding together all the frequencies that make up a musical
sound. Whereas this chore was difficult if not impossible with analog circuitry, the precision of digital
electronics made true additive synthesis a reality. The digital additive synthesizer mathematically
created the pattern that mixing tones would create. The resulting digital signal would then be
converted into an analog signal (using a digital-to-analog converter) that would drive a loudspeaker or
recording system.
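The core of the additive method can be sketched directly: sum a fundamental and a handful of overtones at chosen strengths. The particular amplitude mix below is arbitrary, chosen only for illustration:

```python
import math

def additive_tone(fundamental, partial_amplitudes, sample_rate, n_samples):
    """Build a tone by summing a fundamental and its overtones.

    partial_amplitudes[k] is the strength of the (k+1)-th partial;
    the mix determines the timbre of the result.
    """
    samples = []
    for n in range(n_samples):
        t = n / sample_rate
        s = sum(a * math.sin(2 * math.pi * fundamental * (k + 1) * t)
                for k, a in enumerate(partial_amplitudes))
        samples.append(s)
    return samples

# A 220 Hz tone emphasizing odd partials, vaguely clarinet-like.
tone = additive_tone(220, [1.0, 0.0, 0.5, 0.0, 0.25], 44_100, 512)
```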
The additive synthesizer faced one large problem in trying to create lifelike sounds: the mix of
frequencies for each note of an instrument is different. In fact, the mix of frequencies changes from
the initial attack when a note begins (for instance, when a string is struck by a piano hammer) to its
final decay. To produce sounds approaching reality, the synthesizer required a complete description of
every note it would create at various times in its generation. As a result, a true additive-type digital
synthesizer is a complex—and expensive—device.
Practical sound synthesis for PC peripherals is based on much more modest technologies than purely
additive synthesis. Two primary alternatives have become commercially popular in the synthesizers
incorporated into PC sound boards. These are FM synthesis and wave table synthesis.

FM Synthesis

While working at Stanford Artificial Intelligence Laboratories in 1973, John M. Chowning made an
interesting discovery. Two pure sine wave tones could be combined to make interesting sounds using
frequency modulation. Although the principle corresponded to no natural phenomenon, it could be
used to create sounds with close to natural attacks and decays.
The resulting FM synthesis works by starting with one frequency or tone called a carrier and altering
it with a second frequency called a modulator. When the modulator is a low frequency of a few Hertz,
the carrier frequency rises and falls much like a siren. When the carrier and modulator are close in
frequency, however, the result is a complex wave. Varying the strength of the modulator changes the
mix of frequencies in the resulting waveform, altering its timbre. (Changing the strength of the carrier
merely makes the sound louder or softer.) By changing the relationship between the carrier and
modulator, the timbre changes in a natural sounding way.
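The carrier-modulator relationship can be sketched directly from this description; the frequencies and modulation index here are arbitrary example values, not drawn from any particular synthesizer:

```python
import math

def fm_sample(t, fc, fm, index):
    """One sample of a two-operator FM voice: a carrier at fc
    phase-modulated by a modulator at fm. The modulation index
    controls how rich the resulting spectrum is."""
    return math.sin(2 * math.pi * fc * t
                    + index * math.sin(2 * math.pi * fm * t))

sample_rate = 44_100
# Carrier and modulator close in frequency yield a complex wave.
voice = [fm_sample(n / sample_rate, 440, 220, index=2.0)
         for n in range(1_000)]
```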
A basic FM synthesis system needs only two oscillators producing sine waves to work. However, a
synthesizer with a wider combination of carriers and modulators can create an even more complex
variety of waveforms and sounds. Each of the sine waves produced by an FM synthesizer is called an
operator. Popular synthesizers have four to six operators.
The greatest strength of FM synthesis is that it is inexpensive to implement; all it takes is a chip. On
the other hand, FM synthesis cannot quite duplicate real world sounds. The sounds created through
FM synthesis are recognizable—both as what they are supposed to represent and as synthesized
sounds.

Wave Table Synthesis

An alternate technique used for creating sounds is wave table synthesis. Also known as sampling,
wave table synthesis starts not with pure tones but with representative waveforms for particular
sounds. The representations are in the form of the sound's exact waveform, and all the waveforms that
a product can produce are stored in an electronic table, hence the name of the technology. The
waveforms for a given instrument or sound are only templates that the synthesizer manipulates to
produce music or what is supposed to pass as music. For example, the wave table may include a brief
burst of the tone of a flute playing one particular note. The synthesizer can then alter the frequency of
that note to play an entire scale and alter its duration to generate the proper rhythm.
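The template idea can be sketched as resampling: stepping through a stored waveform at a different rate changes its pitch. The linear interpolation here is a simplification of what real wave table hardware does:

```python
import math

def pitch_shift(table, ratio, n_samples):
    """Play a stored waveform back at a different pitch by stepping
    through it at a different rate, with linear interpolation
    between table entries. A ratio of 2.0 plays an octave higher."""
    out, pos = [], 0.0
    for _ in range(n_samples):
        i = int(pos) % len(table)
        j = (i + 1) % len(table)
        frac = pos - int(pos)
        out.append(table[i] * (1 - frac) + table[j] * frac)
        pos += ratio
    return out

# One cycle of a sampled waveform serves as the wave table.
table = [math.sin(2 * math.pi * n / 64) for n in range(64)]
octave_up = pitch_shift(table, 2.0, 64)   # two cycles in the same span
```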
Although wave table synthesis produces more lifelike sounds than FM synthesis, the results are not
entirely realistic because the technique does not replicate the complete transformation of musical
sounds from attack to
decay nor the subtle variation of timbre with the pitch produced by a synthesized instrument. Some
wave table synthesizers have specific patterns for the attack, sustain, and decay of notes, but
mathematically derive the transitions between them. These come closer to reality, but still fall short of
perfection. In general, wave table synthesized notes all have the same frequency mix and
consequently have a subtle but unreal sameness to them.
On the other hand, wavetable synthesis is the PC hardware maker's delight. Because all the
waveforms are stored digitally just like all other sounds, they can be reconstituted without any special
hardware like synthesizer chips. All that's needed is an audio digital-to-analog converter, which has to be
incorporated in any multimedia PC anyway. A programmer can create the necessary waveforms for
any sound he can imagine using software alone. The only trouble is that putting together the necessary
waveforms digitally takes a lot of processor power, so much that older, slower PCs can't handle the
chore in real time. When the process need not be done in real time, however, your PC can transform
synthesizer instructions (typically in the form of a MIDI file) into synthesized music without
additional expensive synthesis hardware. MIDI Renderer uses this technique to endow MIDI files
with top quality synthesized sound.
Wavetable synthesizer boards sidestep the demand for processor power by incorporating their own
processing abilities. To keep their performance as high as possible, many of these products put the
reference waveforms they need into ROM memory, saving any delays that might be needed for disk
access. The downside of this fast storage is that the waveform reference hogs storage space. Many
hardware based waveform synthesizers have hundreds of thousands of bytes of wavetable ROM;
some have multiple megabytes of reference waveforms.

Advanced Techniques

Scientifically inclined musicians and musically inclined scientists are never satisfied with the sound of
synthesized instruments—and never will be until they can make an all electronic violin sound better
than a Stradivarius in good hands. As modern electronics puts more and more processing power in
their hands, they are developing ever more elaborate techniques to generate the most realistic sounds
possible.
The latest trend is modeling of actual instruments. They make mathematical models of musical
instruments that reflect how the physical attributes of the instrument affect the sounds it makes.
Instead of deconstructing the waveform the instrument makes, they seek to construct the waveform in
the same manner as the instrument itself. For example, they might start with a basic tone generated by
scraping a string with a bow (an elaborate model in itself), then temper it with the resonances of the
instrument's body.
Another aspect of advanced synthesis designs is increased control. Current synthesizers distill a
musician's (or program's) control of an instrument to a few parameters: the press and release of a key
and "touch," the speed at which the key is struck. While this description is reasonably complete for a
keyboard instrument, it ignores the modulations possible with bowed instruments or those through
which the musician blows. The newest synthesizers often include a more elaborate control system
focused on an additional sensor such as an instrument-like tube the musician blows through. The
musician can then use his breath pressure to continuously signal the synthesizer how to modulate the
music.
Although these experimental synthesizers are currently aimed at live performance, nothing prevents
the acquisition of their control information for automated playback or editing. MIDI does not
currently accommodate the increased data needs of such synthesizers, but new standards will
undoubtedly accompany any new technology synthesizers into the market mainstream.

Internet Sound Systems

Go online and you'll be confronted with a strange menagerie of acronyms describing sound systems
promising everything from real time delivery of rock concerts and background music from remote radio
stations a zillion miles away to murky audio effects percolating in the background as you browse
past pages and pages of uninteresting web sites. All of this stuff is packaged as digital audio of some
kind, else it could never traverse the extent of the Internet. Rather than straight digital audio, however,
it is processed and compressed into something that fits into an amazingly small bandwidth. Then, to
get it to you, it must latch on to its own protocol. It's amazing that anything gets through at all, and, in
truth, some days (and connections) are less amazing than others.
The biggest hardware obstacle to better Internet sound is bandwidth. Moving audio digitally consumes
a huge amount of bandwidth, and today's typical communications hardware (discussed in Chapter 22,
"Modems") is simply not up to the chore. A conventional telephone conversation with a frequency
response of 300 to 3,000 Hertz—hardly hi-fi—gets digitized by the telephone system into a signal
requiring a bandwidth of 64,000 bits per second. That low fidelity data is a true challenge to cram
through a modem that has but a 28,800 or even 33,600 bit-per-second data rate. As a result, all
Internet sound systems start with data compression of some kind to avoid the hardware imposed
bandwidth limits.
The Internet poses another problem for audio systems—the web environment itself is rather
inhospitable to audio signals. From its inception, the net was developed as an asynchronous packet
switched network. Its primary protocol, TCP, was not designed for the delivery of time critical
isochronous data like live audio. When downloading a file (or a Web page, which is essentially a
file as well) it doesn't matter whether a packet gets delayed, but a late packet in an audio stream is less
than worthless—it's an interruption that can ruin whatever you're listening to. Some Internet sound
systems abandon TCP for audio transfer and use the UDP protocol instead, which can complicate
matters with systems and firewalls designed expressly for TCP. Other Internet sound systems rely on
TCP, citing that it assures top audio quality in transferred files.
Several streaming audio players are available for download from the web. The three most popular are
Internet Wave from Vocaltec ( http://www.vocaltec.com), RealAudio from Progressive Networks (
http://www.realaudio.com) and Shockwave from Macromedia ( http://www.macromedia.com). To
play sounds from all the sites on the web, you need all three.
Internet Wave uses ordinary TCP to distribute audio, which means it goes everywhere normal web
pages do, and all audio packets are guaranteed to be delivered. Current versions work at any of four
recommended source sampling rates, 5,500, 8,000, 11,025, or 16,000 Hz, so quality ranges from
telephone level to radio broadcast quality in mono. Internet Wave audio uses two files on the server, a
media file with the extension .VMF and a stub file .VMD. Along with players, a beta version of the
encoder is available for free download from the Vocaltec site.
RealAudio is distributed free as a low fidelity player—it delivers AM radio quality sound both online
and offline (from files you've previously downloaded). You must buy server support and the
production version of the full-featured hi-fi RealAudio player. Both are monophonic only. The latest
RealAudio server can automatically negotiate the correct bandwidth (lo-fi or hi-fi) to send you.
RealAudio uses UDP for file transfer so it gains some speed at the penalty of possibly losing packets
(and quality). RealAudio files have the extension .RA.
Shockwave is aimed primarily at delivering audio to accompany multimedia presentations made for
playing back through Macromedia's Director program but can also deliver streaming audio through the
Internet. A number of sites use Shockwave to deliver streaming audio using TCP. The Shockwave
format allows for a variety of quality levels including full CD quality stereo. Shockwave audio files
wear the extension .SWA.

Compression

The Internet is not the only place where the size of digital audio files becomes oppressive. At about
ten megabytes per minute, audio files quickly grow huge. Archiving more than a few sound bites
quickly becomes expensive in terms of disk space.
To squeeze more sound into a given amount of storage, digital audio can be compressed like any other
data. Actually audio lends itself to compression. The more efficient algorithms take into account the
special character of digital sound. The best rely on psychoacoustic principles, how people actually
hear sound. They discard inaudible information, not wasting space on what you cannot hear.
The algorithms for compressing and decompressing digital audio are sometimes called codecs, short
for compressors-decompressors. Several have become popular for different applications. Windows 95
includes support for the CCITT G.711 A-law and u-law, DSP Group TrueSpeech, IMA ADPCM, GSM
6.10, Microsoft ADPCM, and Microsoft PCM converter codecs. You can view the audio codecs
installed in your Windows 95 system by opening the Multimedia icon in Control Panel, choosing the
Advanced tab, and double clicking on Audio Compression Codecs. You'll see a display like Figure
18.1.
Figure 18.1 A display of the installed audio codecs under Windows 95.

Other than to view this listing, you normally won't have to deal with the various codecs. Windows
automatically uses the appropriate codec when necessary to play back an audio file. When recording,
your software will prompt you to select the codec to use if a selection is available.

TrueSpeech

Ground zero for online sound and Windows compression is the TrueSpeech system, which is both an
international standard and a standard part of Windows. TrueSpeech, developed by the DSP Group,
was adopted as G.723 by the ITU as the audio standard for video conferencing over ordinary
telephone connections. Microsoft also incorporated TrueSpeech as a real time audio playback
technology that could be bundled with the Windows 95 operating system.
Originally created when 14,400 bps modems were the norm, TrueSpeech was optimized for that data
rate, compressing the 64 kbit/sec rate used for ordinary telephony to below the modem rate. The basic
TrueSpeech compression algorithm is lossy, so it sacrifices quality in favor of data rate. Moreover, the
TrueSpeech system uses only one compression ratio, optimized for the 14,400 bps rate. As a
consequence, switching to a higher speed connection will not improve its quality. At its target rate,
however, it works well and delivers better quality than the popular Real Audio system. Further,
TrueSpeech incorporates no mechanism to assure real time reconstruction of transmissions, so if a
connection is slow, the audio will be interrupted.

MPEG

Although usually regarded as a video standard, the MPEG standards discussed in Chapter 15, "The
Display System," also describe the audio that accompanies its moving images. The applications of
MPEG audio are widespread—its compression system is used by Digital Compact Cassettes, digital
broadcasting experiments, and the DVD.
MPEG audio is not one but a family of audio coding schemes based on human perception of sound.
The basic design has three layers, which translate directly into sound quality. The layers, numbered 1
through 3, form a hierarchy of increasing complexity that yields better quality at the same bit rate.
Each layer is built upon the previous one and incorporates the ability to decode signals coded under
the lower layers. Table 18.3 lists the layers and their bit rates.

Table 18.3. MPEG Layers and Bit Rates Compared

Layer   Allowed range         Target or optimum   Example application

1       32 to 448 kbits/sec   192 kb/sec          Digital Compact Cassette
2       32 to 384 kbits/sec   128 kb/sec          MUSICAM (Broadcasting)
3       32 to 320 kbits/sec   64 kb/sec           DVD, Internet sound

As the layer number increases, the encoding becomes more complex. The result is a greater amount of
compression. Because greater compression requires more processing, there is apt to be more latency
(signal delay) as the layer number increases.
The layer number does not affect perceived sound quality. All layers permit sampling frequencies of
32, 44.1, or 48 KHz. No matter the layer, the output quality is dependent on the bit rate
allowed—the higher the bit rate, the higher the quality. The different standards allow higher quality to
be maintained at lower bit rates. At their target bit rates, all three layers deliver sound quality
approaching that of CDs.
Unlike other standards, MPEG does not define compression algorithms. Instead the layers provide
standards for the data output rather than how that output is achieved. This descriptive approach allows
developers to improve the quality of the algorithms as the technology and their discoveries permit.
Header information describes the level and methodology of the compression used in the data that
follows.
MPEG is asymmetrical in that it is designed with a complex encoder and a relatively simple decoder.
Ordinarily, you will only decode files. Only the producer or distributor of MPEG software needs an
encoder. The encoding process does not need to (and often does not) take place in real time. All layers
use a polyphase filter bank with 32 subbands. Layer 3 also adds a modified discrete cosine
transformation that helps increase its frequency resolution.

Looped Sound

When you want to include sound as a background for an image that your audience may linger over for
a long but predetermined time, for example to add background sounds to a Web page, you can
minimize the storage and transmission requirements for the audio by using looped sound. As the name
implies, the sound forms an endless loop, the end spliced back to the beginning. The loop can be as
short as a heartbeat or as long as several musical bars. The loop simply repeats as long as the viewer
lingers over the Web page. The loop only requires as much audio data as it takes to code a single pass,
no matter how long it plays.
No rule says that the splice between the end and beginning of the loop must be inconspicuous, but it
should be if you want to keep the repetition from becoming apparent and distracting. When looping
music, you should place the splice so that the rhythm is continuous and regular. With random
sounds, you should match levels and timbre at the splice. Most PC sound editors will allow you to
finely adjust the splice. Most Internet sound systems support looped sounds.
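Splice placement can be automated in the obvious way: hunt for a quiet point (a zero crossing) near the intended loop end. This is a generic sketch, not the algorithm of any particular sound editor:

```python
import math

def best_splice(samples):
    """Index of the sample closest to zero, a quiet point where the
    loop's end can be joined back to its beginning without a click."""
    best, best_level = 0, abs(samples[0])
    for i, s in enumerate(samples):
        if abs(s) < best_level:
            best, best_level = i, abs(s)
    return best

# A short decaying tone; search its tail for the quietest point.
clip = [math.sin(2 * math.pi * n / 50) * (1 - n / 200) for n in range(100)]
splice_at = best_splice(clip[50:]) + 50   # lands on a zero crossing
```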

Hardware

The job of the audio circuitry in your PC is to set the air in motion, making sounds that you can hear
to alert you, to entertain you, to amaze you. PCs have had sound circuitry from the very beginning.
But in the beginning of PCs, as in the beginning of life on earth, things were primitive, about the
audio equivalent of amoebic blobs bobbing along en masse in black swamps. From humble
beginnings, however, PC sound systems have evolved to parity with the stuff in all but the best and
most esoteric stereo systems.
From the standpoint of a computer, sound is foreign stuff. Indeed, it's something that happens to
stuff—air—while the computer deals with the non-stuff of logical thoughts. Video images are much
more akin to computer electronics—at least the photons that you see are electromagnetic. Sound is
purely mechanical, and that makes the computer's job of dealing with it tough. To make sound
audible, it somehow has to do something mechanical. It needs a transducer, a device that transmits
energy from one system to another—from the electrical PC to the kinetic world of sound.

Basic Sound System

Although the audio abilities of some PCs rival the best stereo systems, the least common denominator
among them is low indeed. The basic sound system that you're assured of finding in all PCs is exactly
the same primitive design that IBM bolted into its original PC.
To be charitable, the basic PC sound system wasn't designed for high fidelity. In fact, it was conceived
as a beeper. Its goal was to generate pure if harsh tones to alert you to events occurring in your PC, for
example the beep code of the BIOS. After all, in 1981 computers didn't sing opera.
This basic sound system has three components, a tone generator, an amplifier, and a loudspeaker—all
three of which must be called rudimentary because there's nothing lower on the scale. When all
worked together, you could make your PC beep as easily as typing Ctrl-G. The frequency and
amplitude of the tone was predetermined by IBM's engineers. You were lucky to get any noise at all,
let alone a choice.
Clever programmers quickly discovered that they could easily alter the tone, even play short ditties
with their data. With more ingenuity, they found they could modulate the primitive sound
system and indeed make the PC sing. Considering the standard equipment, you can make your PC
sound surprisingly good just by adding the right driver to Windows.

Tone Generator

The fundamental tone generation circuit is the oscillator, the same as the clock that generates the
operating frequency of your PC's microprocessor. The only difference is that the tone generator
operates at lower frequencies, those within the range of human hearing (once they are translated into
sounds).
The first PCs used one of the channels of their 8253 or 8254-2 timer/counter integrated circuit
chips as the required oscillator. Modern PCs integrate the same functions into their chipsets.
No matter the implementation, the circuits work the same. The timer develops a train of pulses by
turning a voltage on and off. The timing of these cycles determines the frequency of the tone the
circuit produces.
The PC timer/counter chip starts with a crystal controlled fixed frequency of 1.19 MHz and divides it
into the audio range. A register in the timer chip stores a 16-bit divisor by which value the timer
reduces the oscillator frequency. Loading the highest possible value into the divisor register (65,535)
generates the lowest possible tone the basic PC sound system can produce, about 18 Hz, low enough
to strain the limits of normal hearing were the PC's speaker able to reproduce it. Divisor values below
about 64 produce tones beyond the upper range of human hearing.
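The arithmetic is easy to check. The sketch below is a hypothetical Python illustration (the timer itself is programmed through I/O ports, of course, not Python) of the relationship between divisor and tone, using the more precise clock value of 1,193,182 Hz behind the nominal 1.19 MHz figure:

```python
# Nominal 1.19 MHz timer input clock (1,193,182 Hz on real hardware).
TIMER_CLOCK = 1_193_182  # Hz

def divisor_to_frequency(divisor):
    """Tone frequency produced by a given 16-bit divisor (1-65535)."""
    return TIMER_CLOCK / divisor

def frequency_to_divisor(freq):
    """Divisor to load into the timer for a desired tone frequency."""
    return TIMER_CLOCK // freq

# The largest divisor, 65,535, yields the lowest tone, about 18 Hz;
# divisors down around 64 push the tone past the limit of hearing.
lowest = divisor_to_frequency(65535)
highest_useful = divisor_to_frequency(64)
```

Running the numbers confirms the text: 65,535 gives about 18.2 Hz, while 64 gives roughly 18.6 kHz.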

http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/wrh18.htm (19 de 59) [23/06/2000 06:27:49 p.m.]


Winn L. Rosch Hardware Bible, Electronic Edition, Chapter 18

To use the timer to generate tones in a PC, you must first set up the timer/oscillator chip by writing
the value 0B6(Hex) to the timer's control port at I/O port address 043(Hex). The frequency divisor for
the PC tone generator then gets loaded into the I/O port at 042(Hex). This eight-bit port expects to
receive two data bytes in sequence, the least significant byte first.
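Putting these steps together, a hypothetical Python sketch (a real program would use OUT instructions against the ports) can compute the exact sequence of port writes that sets up a tone:

```python
TIMER_CLOCK = 1_193_182  # nominal 1.19 MHz timer input clock, in Hz

def tone_port_writes(freq):
    """Return the (port, byte) writes that program the tone generator."""
    divisor = TIMER_CLOCK // freq
    return [
        (0x43, 0xB6),            # control port 043(Hex): set up the timer
        (0x42, divisor & 0xFF),  # data port 042(Hex): least significant byte
        (0x42, divisor >> 8),    # then the most significant byte
    ]

# A 440 Hz tone (concert A) needs a divisor of 2711, or 0A97(Hex).
writes = tone_port_writes(440)
```

The least-significant-byte-first ordering matters; reversing the two data writes would load a wildly different divisor.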
Because of the circuit design of the PC, these tones are produced as square waves, which means they
are not pure tones but are rich in overtones or harmonics. Musically they sound harsh—exactly what
the doctor (or engineer) ordered for warning tones.
The basic PC sound system was designed to produce tones of the frequency set by the divisor for short
periods. These sounding periods are defined by gating the output of the oscillator on and off.
To turn the speaker on, you activate bit 0 of the register at I/O port 061(Hex). Resetting this bit to zero
switches the speaker off. (You must exercise care when tinkering with these bits; other bits in this
register control the keyboard.)
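Because those other bits must be preserved, a careful program reads the register, changes only bit 0, and writes the result back. The sketch below models that read-modify-write pattern in Python (hypothetical; a real program would use IN and OUT on port 061(Hex)):

```python
def speaker_on(port61_value):
    """Set bit 0 of the port 061(Hex) value, leaving the other bits
    (which control the keyboard) untouched."""
    return port61_value | 0x01

def speaker_off(port61_value):
    """Reset bit 0 of the port 061(Hex) value, touching nothing else."""
    return port61_value & 0xFE
```

Turning the speaker on is then a matter of reading the port, passing the value through speaker_on, and writing it back.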
In this basic operating mode, the dynamics of the signal are limited. The output of the timer/oscillator
chip is set at a constant level—the standard digital signal level—so the sound level produced by the
speaker does not vary. All the sounds produced by the PC's motherboard have the same level. Some
tones generated by the PC timer sound louder than others primarily because they are more obnoxious.
They are made from the exact right combination of frequencies to nag at the aesthetic parts of your
brain. That's about all they were designed to do. Listen long enough and you'll agree that the PC's
designers succeeded beyond their wildest dreams at creating obnoxious sound.
Using a technique called pulse width modulation, programmers discovered they could use even this
primitive control system to add dynamics to the sounds they generated. Pulse width modulation uses
the duty cycle of a high frequency signal coupled with a low pass filter to encode the loudness of an
analog signal equivalent. The loudness of a sound corresponds to the length of a signal pulse of a high
frequency carrier wave. A soft sound gets a brief pulse while loud sounds are full strength square
waves. The low pass filter eliminates the high carrier frequency from the signal and leaves a variable
strength audio signal (the modulation).
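The principle can be shown with a few lines of code. This hypothetical Python sketch encodes a loudness level as the duty cycle of one carrier period and uses a crude low pass filter (a simple average) to recover it:

```python
def pwm_encode(level, period=100):
    """Encode a loudness level (0.0 to 1.0) as one period of a carrier:
    a run of 1s whose length is proportional to the level."""
    width = round(level * period)
    return [1] * width + [0] * (period - width)

def low_pass(samples):
    """Crude low pass filter: averaging strips the carrier frequency
    and leaves the encoded loudness (the modulation)."""
    return sum(samples) / len(samples)

soft = pwm_encode(0.25)   # a soft sound gets a brief pulse
loud = pwm_encode(0.90)   # a loud sound nears a full square wave
```

Filtering either waveform gives back the original level, which is exactly what the filter ahead of the PC's speaker does with the real signal.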

Amplifier

The output of the tone generator chip is too weak for sounding a speaker at a listenable level. The chip
simply cannot supply enough current. The standard way to boost signal strength is with an amplifier.
The basic PC sound system uses a simple operational amplifier. Modern systems incorporate this
circuitry into the basic motherboard chipset. In any case, even with the boost, the signal amounts to
only about 100 to 200 milliwatts, not enough to shake a theater with THX.
The standard PC design also adds a low pass filter and a current limiting resistor between the driver
and the loudspeaker. The low pass filter eliminates frequencies higher than normal hearing range (and
some in the upper ranges that you probably can readily hear). PCs often use higher frequencies to
make audible sounds, and the low pass filter prevents these artifacts from leaking into the speaker. In
other words, it smoothes things out.
A resistor (typically about 33 ohms) in series with the loudspeaker prevents the internal loudspeaker
of a PC from drawing too much current and overloading the driver circuit. A resistor also lowers the
loudness of the speaker because it absorbs some power as part of the current limiting process.


Although some circuit tinkerers bypass this resistor to make their PCs louder, doing so risks damage
to the driver circuit.

Loudspeaker

The actual noisemaker in the PC's basic sound system is a small, two- to three-inch dynamic
loudspeaker. To most PC designers, the internal loudspeaker is an obligatory headache. They have to
put one somewhere inside the PC no matter how inconvenient. Usually the speaker gets added almost
as an afterthought. After all, all you have to do is hear it. You don't have to hear it well.
Because PC designers make no effort at optimizing the quality of the sound of the basic speaker, it is
usually unbaffled. Its small size and acoustics prevent it from generating any appreciable sound at low
frequencies. Its physical mounting and design limit its high frequency range. Outside of replacing the
loudspeaker with something better, you cannot do anything to break through these limits.
In most PCs, the speaker connects to the motherboard using a simple, short two-wire twisted pair
cable. The motherboard usually, but not always, includes a four-pin loudspeaker connector. In many
PCs, one of the pins is removed for keying the polarity of the speaker connection. One matching hole
in the speaker cable connection often is blocked. Only two pins of the four pins of the motherboard
connector are active, the two at the ends of the connector. The center one or two pins aren't connected
to anything. Figure 18.2 shows typical speaker connections.
Figure 18.2 Basic PC speaker connections.

In most PCs, the loudspeaker is electrically floating. That is, neither of its terminals is connected to
the chassis or any other PC wiring except for the short cable to which it is soldered. When the speaker is
electrically floating, the polarity of its connection is irrelevant, so the keying of the connection is
essentially meaningless. In other words, if the speaker connector on your PC is not keyed—if all four
pins are present in the motherboard connector and none of the holes in the speaker connector are
plugged—don't worry. The speaker will operate properly no matter how you plug it in.

Drivers

Other than beeping to indicate errors, the basic sound system in a PC has a life of leisure. If left to its
own devices, it would sit around mute all the while you run your PC. Applications can, however, take
direct hardware control and use the basic sound system for audible effects. For example, text to
speech converters can use the basic sound system with pulse width modulation techniques to simulate
human speech.
Under all versions of Windows, only the native, tone generating abilities of the basic sound system get
used, again just to beep warnings. Microsoft and some others have developed speaker drivers that
allow the built-in basic sound system to play more elaborate noises such as the Windows Sound when
starting the operating system. These drivers let you play WAV files only. They do not synthesize
sounds, so they cannot play MIDI files (see the "MIDI" section that follows) nor do they work with
most games.


The prototype of these drivers is called the Windows Speaker Driver and was developed by Microsoft
strictly for Windows 3.1. The speaker driver has not been updated for more recent versions of
Windows. In fact, the history of the speaker driver is even more checkered—it was included with the
beta test versions of Windows 3.1 but not the release version. During the development of Windows
3.1, Microsoft found that this driver sometimes misbehaved with some programs. To avoid support
headaches, Microsoft elected not to include the driver in the basic Windows 3.1 package. The driver is
included in the Microsoft Driver Library and Microsoft does license it to developers to include with
their products. It remains available from a number of sources, as are other, similar drivers.
These drivers will work under Windows 95, although Microsoft offers no explicit support of such
operation. If you're used to simple beeps, these drivers sound amazingly convincing. They are not,
however, a substitute for a real sound system based on a sound board.

Sound Boards

Clearly the basic sound system in PCs is inadequate for the needs of modern multimedia. Getting
something better requires additional circuitry. Traditionally, all of the required electronics get
packaged on a single expansion card termed a sound board. Higher quality audio has become such a
necessity in modern PCs that most new notebook machines include all of the circuitry of a sound
board on their motherboards. Many new desktop PCs and replacement motherboards also make all of
the functions of a sound board an integral part of their designs. The physical location of the circuits is
irrelevant to their normal operation. Electrically and logically, they are equivalent.
To cope with the needs of multimedia software and the demands of human hearing and expectation,
the sound board needs to carry out several audio related functions using specific hardware features.
Foremost is the conversion of digital sound data into the analog form that speakers can shake into
something that you can hear using a digital to analog converter. In addition, most sound boards
sample or record sounds for later playback with a built-in analog to digital converter. They also create
sounds of their own using built-in synthesizers. Sound boards also include mixer circuits that combine
audio from all the sources available to your PC—typically a microphone; the output of the sound
board's digital to analog converter (which itself combines the synthesizer, WAV files read from disk,
and other digital sources); the analog audio output of your PC's CD player; and an auxiliary input
from whatever audio source tickles your imagination. Finally, the sound board includes an amplifier
that takes this aural goulash and raises it to ear pleasing volume.
Sound boards may include additional functions, one of which is required by the various Multimedia
PC specifications—a MIDI interface. This additional connection lets you link your PC to electronic
musical instruments, for example, allowing your PC to serve as a sequencer or, going the other way,
connecting a keyboard to control the sound board's synthesizer. Some makers of sound boards want
their products to act as single slot multimedia upgrades, so they include CD drive interfaces on their
sound boards.
Sound boards can be distinguished in several ways. The most important of these divisions are the
three C's of sound boards—compatibility, connections, and quality. Compatibility determines the
software with which a given sound board will work. The connections the board supports determine
what you can plug in, usually MIDI and CD devices. Quality influences how satisfied you will be
with the results, essentially whether you will be delighted or dismayed by your foray into multimedia.


Compatibility

Compatibility often is the more important, because if your software can't coax a whimper from your
sound board, you won't hear anything no matter what you plug in or how well the circuitry on the
board might be able to do its job. Compatibility issues arise at two levels, hardware and software.
More practically, you can regard these levels as DOS and Windows compatibility (or games and
Windows, if DOS is foreign to your nomenclature).
For the most part, compatibility refers to the synthesizer abilities of a sound board. For your game to
make the proper noises, it must be able to tell the sound board exactly what to do, when and whether
to belch and boom. Your software needs to know which ports to access the functions of the sound
board. To work with games at the DOS level (and below), many of these features must be set in
hardware, although you will still have to install driver software. At the Windows compatibility level,
driver software determines compatibility. The DOS level of compatibility is more rigorous. It is also
the level you require if you want to be able to run the full range of PC games.
Most games and other software require compliance with two basic de facto industry standards, Ad Lib
and Sound Blaster. Beyond this, some games may also require features of particular sound boards.

Ad Lib

The basic level of hardware compatibility required for DOS games is with Ad Lib, the maker of one of
the first sound boards to gain popularity in the sound board business. Because it had the widest user
base early when noisy games were becoming popular, many game programmers wrote their products
to take advantage of the specific hardware features of the Ad Lib board. Even the newest hardware
standard for sound boards, Audio Codec '97, requires basic Ad Lib compatibility.

Sound Blaster

Another company, Creative Labs, entered the sound board business and built upon the Ad Lib base.
Its Sound Blaster product quickly gained industry acceptance as a superset of the Ad Lib standard; it
did everything the Ad Lib board did and more. The Sound Blaster found a huge market and raised the
standard for sound synthesis among game products. Because programmers directly manipulated the
hardware registers of the Sound Blaster to make the sounds they wanted, to run most games and
produce the proper sounds you need a sound board that is hardware compatible with the Sound
Blaster. Several iterations of Sound Blaster hardware were produced; the minimal level of
compatibility to expect today is with Sound Blaster Version 1.5.
The Sound Blaster relies on a particular integrated circuit to produce its array of synthesized sounds,
the Yamaha YM3812. This chip has a single output channel, so it can produce only monophonic
sound, even when it is installed on a sound board that's otherwise called stereo. Some sound boards


use two of these chips to produce stereo. The YM3812 has a fixed repertory of eleven voices, six of
which are instrumental and five for rhythm.
A newer FM synthesis chip has become popular on better sound boards, the Yamaha YMF262 or
OPL3. Not only does the OPL3 have more voices—20—but it also uses more sophisticated
algorithms for synthesis. It also can produce a full stereo output. Because it is backwardly compatible
with its forebear, sound boards using it can gain both better synthesis and Sound Blaster
compatibility.
The degree of Sound Blaster compatibility is critical when you're investigating portable PCs because
the hardware needs of the Sound Blaster are incompatible with the PC Card standard. The PC Card
bus in current form (Version 2.1) does not include all of the signals needed by the Sound Blaster
interface; although PC Card-based sound boards can approximate true Sound Blaster compatibility,
they cannot, for example, play Doom. To avoid the problem, many notebook computer manufacturers
add Sound Blaster circuitry to the motherboards so you don't need to use PC Card.
The Sound Blaster interface works by sending data through two control ports, an address/status port
located at 0388(Hex) and a write only data port at 0389(Hex). These ports serve to access the Sound
Blaster's 244 internal registers. The Sound Blaster also assigns four ports to speakers with addresses
that vary with the base address you assign the board. By default, the data ports for the left speaker are
at 0220(Hex) and 0221(Hex); for the right speaker, 0222(Hex) and 0223(Hex). You can make music
by sending data directly to these ports. Most hardware level programming, however, takes the form of
sending function calls through the Sound Blaster's driver software which uses a software interrupt for
access to its functions.
To activate a Sound Blaster function, you load the appropriate values required by a given function call
into specific registers of your PC's microprocessor, then issue the designated software interrupt. The
function to be called is designated in the BX register of the microprocessor; the BH half of the register
indicates one of five major functions handled by an individual driver (control, FM synthesis, Voice
from disk, Voice from memory, or MIDI), and the BL register indicates exactly what to do. Table
18.4 summarizes these functions.

Table 18.4. Sound Blaster Function Calls

Function name                   BH  BL  CX        Additional comments

Get SBSIM version number         0   0  Not used  On exit, AH=major version number; AL=minor version number
Query drivers                    0   1  Not used  On exit, AX bit values show drivers in use; bit 0=FM; bit 1=voice from disk; bit 2=voice from memory; bit 3=control; bit 4=MIDI
Load file into extended memory   0  16  Not used  AX indicates file type (0=VOC file); CX, SBSIM handle to use; DS:DX points to file name
Free extended memory             0  19  Not used  AX indicates SBSIM handle of file to be cleared
Start FM sound source            1   0  0         SBSIM file handle in AX
Play FM sound                    1   1  0
Stop FM sound                    1   2  0
Pause FM sound                   1   3  0
Resume FM sound                  1   4  0
Read FM sound source             1   5  0         On exit, AX=0 indicates sound is stopped; AX=FFFF indicates sound playing
Start voice from disk            2   0  0         SBSIM file handle in AX
Play voice from disk             2   1  0
Stop voice from disk             2   2  0
Pause voice from disk            2   3  0
Resume voice from disk           2   4  0
Read voice from disk             2   5  0         On exit, AX=0 indicates sound is stopped; AX=FFFF indicates sound playing
Start voice from memory          3   0  0         AX points to file in conventional memory; DX:AX points to file in extended memory
Play voice from memory           3   1  0
Stop voice from memory           3   2  0
Pause voice from memory          3   3  0
Resume voice from memory         3   4  0
Read voice from memory status    3   5  0         On exit, AX=0 indicates sound is stopped; AX=FFFF indicates sound playing
Show volume level                4   0  Not used  On entry, AX shows source; on exit, AH=left channel volume, AL=right channel volume
Set volume level                 4   1  Not used  On entry, AX indicates source to change; DH=left volume, DL=right volume
Get gain setting                 4   2  Not used  On entry, AX=1; on exit, AH=left channel gain, AL=right channel gain
Set gain                         4   3  Not used  On entry, AX=1; DH=left channel gain, DL=right channel gain
Show tone settings               4   4  Not used  On entry, AX=0 for treble, AX=1 for bass; on exit, AH=left channel setting, AL=right channel
Set tone                         4   5  Not used  On entry, AX=0 for treble, AX=1 for bass; DH=left channel setting, DL=right channel
Start MIDI source                5   0  0         SBSIM file handle in AX
Play MIDI source                 5   1  0
Stop MIDI source                 5   2  0
Pause MIDI source                5   3  0
Resume MIDI source               5   4  0
Read MIDI status                 5   5  0         On exit, AX=0 indicates sound is stopped; AX=FFFF indicates sound playing
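The BH/BL scheme behind Table 18.4 is regular: BH selects the driver, BL selects the operation within it. A hypothetical Python lookup (the real mechanism is register loads followed by a software interrupt, not a function call) makes the pattern plain for a few of the drivers:

```python
# BH selects the driver; BL selects the operation within that driver.
DRIVERS = {1: "FM synthesis", 2: "Voice from disk",
           3: "Voice from memory", 5: "MIDI"}
OPERATIONS = {0: "Start", 1: "Play", 2: "Stop",
              3: "Pause", 4: "Resume", 5: "Read status"}

def describe_call(bh, bl):
    """Describe a Sound Blaster function call from its BH and BL values."""
    return f"{OPERATIONS[bl]} {DRIVERS[bh]}"
```

For example, BH=1 with BL=1 plays FM sound, while BH=5 with BL=2 stops the MIDI source, just as the table shows.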

When the Sound Blaster interface driver loads, it chooses the first available interrupt within the range
080(Hex) to 0BF(Hex), inclusive. Programs needing to use its function calls can find the interrupt by
looking for the signature "SBSIM" at an offset of 103(Hex) bytes from the start of the interrupt vector
segment address.

Windows

To produce sounds under Windows, a sound board requires a Windows-compatible software driver.
With the right driver, a sound board can play the standard Windows sounds as well as any WAV files
you like, even if it does not have hardware compatibility with the Ad Lib or Sound Blaster standards.
Because WAV files are digitally recorded audio data, they do not require synthesis. All Windows
sounds, from the login noise to the standard riff when you make an error are WAV files.
Newer drivers for sound boards should be compatible with DirectSound, the audio portion of the
DirectX set of application interfaces.

Audio Codec '97

In developing its own two chip audio system for PCs, Intel Corporation published the specifications of
the system as if it were a standard by which all audio systems could be measured. The result, termed
Audio Codec '97, represents a reasonable target for the designers of audio systems and may well
become the standard that Intel planned. The two chips comprise a controller that handles all digital
operations and an analog chip that turns the computer signals into audio. The two are connected by a
five-wire serial connection termed the AC link.
The heart of the design is two digital to analog converters capable of operating at a 48-kilohertz
sampling rate to generate a pair of stereo audio channels. The specification requires that the controller
be able to simultaneously process four signals, two inputs and two outputs. Each channel must be able
to translate the signals to or from at least six sampling rates to the 48 kilohertz basic rate of the
system. These include 8.0, 11.025, 16.0, 22.05, 32.0, and 44.1 kilohertz. To maintain true hi-fi quality, the
specification requires a signal to noise ratio of 90 dB.
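The rate conversion requirement is not trivial. The sketch below, a hypothetical Python illustration, computes the exact resampling ratio needed to bring each of the required rates to the 48-kilohertz internal rate; the awkward 160:147 ratio for CD-rate material is one reason this conversion demands careful design:

```python
from fractions import Fraction

def resample_ratio(source_rate, target_rate=48_000):
    """Exact interpolation/decimation ratio needed to translate a
    source sampling rate to the 48 kHz basic rate of the system."""
    return Fraction(target_rate, source_rate)

# 44.1 kHz material must be resampled by 160/147 to reach 48 kHz;
# 8 kHz telephone-rate audio needs a simple factor of 6.
cd_ratio = resample_ratio(44_100)
phone_ratio = resample_ratio(8_000)
```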
The DACs are fed by a mixer that accepts inputs from all the digital sources in the PC, including the
synthesizer section of the Intel codec chipset. For inputs, the system includes a pair of analog to
digital converters that also operate at 48 kilohertz as well as an optional third, matching channel


dedicated as a microphone input.

Control

The DirectX interface also provides a means through which application software can control
multimedia equipment. In addition, the various Multimedia PC specifications require that sound
boards incorporate two very specific control functions for external devices, a CD interface and a MIDI
interface. It also must incorporate an analog mixer to control audio levels.

CD Interface

Augmenting its synthesis and sampling functions, an MPC-compliant sound board must also be able
to control a CD ROM drive. Besides controlling the drive, the sound board must also have a direct
connection to the CD ROM drive for audio information, delivered in analog rather than digital form.
The CD interface circuitry on the sound board usually takes one of three forms—an
ATAPI-compatible AT Attachment (IDE) connector, a SCSI port, or a proprietary port. Although
common among early sound boards, particularly those sold in kits as complete multimedia upgrades,
the acceptance of the low cost AT Attachment interface for CD ROM drives has greatly diminished
proprietary offerings.
The intent of the CD ROM interface is to assure that you can connect a CD drive to your PC because
you need the drive if you want even a pretense of multimedia capabilities. If your PC has a suitable
interface or even a CD drive already attached, the sound board port is superfluous. In fact, the
interface on a sound board is better ignored. Standalone ATA and SCSI host adapters usually deliver
better performance than the circuitry built into sound boards—for example, you may have a
PCI-based disk interface that will have substantially wider bandwidth than the ISA-based interface on
the sound board. Although the sound board interface will usually have sufficient performance for
mid-range CD drives, it may be hard pressed to keep up with a 12x or 16x drive and definitely will be
out of its league with a hard disk. In other words, don't even think of connecting a hard disk drive to
the SCSI port on a sound board. If the board gives you the option, switch off the interface circuitry or
simply don't install the driver for it. That way you can put the system resources otherwise used by the
port to better use.

Instrument Interface

The MPC specification also requires that a compliant sound board include a Musical Instrument
Device Interface or MIDI port. Most people don't put this port to work. Its primary application is in
making music with your PC rather than playing back what someone else has already written. You
don't even need a MIDI port to play back the MID music files that encode MIDI instructions. Your
playback software will usually route the MIDI instructions to the synthesizer circuitry on your sound
board, generate the music there, and play it through the mixer and amplifier on the sound board. The


MIDI instructions never need to cross the MIDI port.


That said, if you want to use your PC as a sequencer to record, edit, and play back your own musical
compositions, or if you want to use an external piano style (as opposed to typewriter) keyboard to
operate your sound board's synthesizer, your MIDI port will be your lifeline.
The MIDI standard describes the signals and protocols that travel through the connections between
MIDI devices. It does not govern how the MIDI port links to your PC. Although in theory your PC
could use any ports and addresses for controlling its MIDI circuitry, a standard evolved early in the
history of MIDI in PCs. Roland Corporation developed a very popular MIDI adapter, their model
MPU-401. To get the best performance from their programs, the writers of music software took direct
hardware control of the MPU-401. The resources used by the MPU-401 quickly became the standard
in the industry. The MIDI circuitry of most sound boards mimics the MPU-401, and most MIDI
software requires MPU-401 compatibility.

Mixers

In addition to commanding external devices, sound boards also provide important control functions
for music making and audio playback. The mixer circuitry in the sound board serves as a volume
control for each of the various sources that it handles.
An audio mixer combines several signals into one, for example, making the distinct signals of two
instruments into a duet in one signal. Most audio mixers allow you to individually set the volume
levels of each of the signals that they combine.
Mixers work by summing their input signals. Analog mixers work by adding together their input
voltages. For example, an analog mixer would combine a 0.3-volt signal with a 0.4-volt signal to
produce a 0.7-volt signal. Digital mixers combine digital audio signals by adding them together
mathematically using their digital values. The results are the same as in analog mixing, only the type
of audio signal differs.
In your PC, however, the difference is significant. The sound board performs analog mixing in real
time in its onboard circuitry. Most PCs use their microprocessors to do digital mixing, and they
usually don't make the mix in real time. For example, your PC may mix together the sounds from two
files and let you later play back the combination.
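Digital mixing really is just addition, with one complication the analog circuitry never faces: the sum must be held within the legal range of the sample format. A hypothetical Python sketch of mixing two streams of signed 16-bit samples:

```python
def mix_16bit(a, b):
    """Mix two streams of signed 16-bit samples by summing them,
    clipping the result to the legal range of the format."""
    return [max(-32768, min(32767, x + y)) for x, y in zip(a, b)]

# Mixing parallels the analog case: small signals simply add...
quiet = mix_16bit([300], [400])      # yields [700]
# ...but two loud signals can exceed the range and must be clipped.
loud = mix_16bit([30000], [10000])   # clips to [32767]
```

Clipping is the digital equivalent of an analog mixer running out of headroom; practical mixers scale their inputs down to avoid it.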
All sound boards incorporate mixer circuitry that lets you combine all the analog signals the board
works with. The resulting audio mixture goes to your speakers and the analog to digital converter on
the board so that you can record it into a file.
To control the relative volume levels of the various mixer inputs, most sound board makers provide
mixer software that gives you an onscreen slider to adjust each input interactively.

Quality

In sound boards, quality wears more than one face. Every board has designed-in abilities and,


likewise, limits that control what the board might possibly do. Lurking beneath, however, is the
quality that a given board can actually produce. Dreams and reality being as they are, most sound
boards aspire higher than they perform. This difference is hardly unexpected and would not be an
issue were not the difference so great. A sound board may have specifications that put it beyond
Compact Disc quality and yet perform on par with an ancient AM radio crackling away in a
thunderstorm.
In general, most sound boards list their quality abilities in the range of digital signals they manage.
That is, a given sound board may support a range of sampling frequencies and bit depths. Nearly any
modern sound board worth considering for your PC will list capabilities at least as good as the
44.1-KHz sampling and 16-bit resolution of the CD medium. Newer boards should accommodate the
48-KHz sampling of professional audio and the DVD system.
In terms of digital quality, the CD rates should be good for a flat frequency response from DC to 20
kilohertz with a signal to noise ratio of about 96 decibels. Commercial sound boards miss both of
those marks.
The shortfalls arise not in the digital circuitry—after all, any change in a digital signal, including those
that would degrade sound quality, are errors, and no modern PC should let errors arise in the data it
handles. Rather, the analog circuitry on the sound board teases, tortures, and truncates the signals that
travel through it. Both frequency response and dynamic range suffer.
Most of the time the manhandling of the audio signals makes little difference. When sounds play
through typical PC loudspeakers, you probably won't hear the deficiencies. Most inexpensive PC
speakers (which means most PC speakers) are so bad that they mask the signal shortfalls. Listen
through headphones or through a good stereo system, however, and the problems become readily
apparent.
Many sound boards shortchange you on frequency response. Certainly the sampling rate of a digital
system automatically imposes a limit on the high frequencies that the system can handle. At the other
end of the sound spectrum, there should be no such limitation. A digital system should easily be
capable of encoding, storing, and reproducing not only the lowest frequencies that you can hear but
also sounds lower than you can hear and even levels of direct current. Less expensive sound boards do
not dip so low, however. In fact, many boards cannot process low frequency sounds within the range
of human hearing, often missing the fundamental frequencies of bass notes and the stomach wrenching
rumbles of computer game special effects. These limits arise in the analog circuitry on the board from
the coupling of AC signals through the various amplifier stages.
Most common analog systems require coupling capacitors between amplifier stages (and in the
output) to block both supply voltages and direct current errors from interfering with the amplified
audio signal. The size of the capacitor (its rating in microfarads, a unit of measure of electrostatic
capacity) determines the low frequency limit of the overall system. Larger capacitors are more
expensive and harder to place on compact circuit boards, so manufacturers of budget products are apt
to skimp on capacitor size. Better audio circuits use larger capacitors that pass lower frequencies or
direct coupled designs that totally eliminate their need (and push response down to DC).
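The relationship between capacitor size, load, and low frequency limit follows the standard RC formula, f = 1 / (2 pi R C). A hypothetical Python sketch shows why speaker outputs, with their low impedance loads, demand the biggest capacitors:

```python
import math

def cutoff_frequency(capacitance_farads, load_ohms):
    """-3 dB low frequency limit of a capacitor coupling a load:
    f = 1 / (2 * pi * R * C)."""
    return 1.0 / (2 * math.pi * load_ohms * capacitance_farads)

def required_capacitance(freq_hz, load_ohms):
    """Capacitor needed to pass a given low frequency into a load."""
    return 1.0 / (2 * math.pi * load_ohms * freq_hz)

# Passing 150 Hz into an 8-ohm speaker takes roughly 133 microfarads;
# the same 150 Hz into a 10,000-ohm line input needs only ~0.1 uF.
speaker_cap = required_capacitance(150, 8)
line_cap = required_capacitance(150, 10_000)
```

The thousand-fold difference in required capacitance is exactly why line level outputs preserve bass that speaker outputs lose.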
The size of any capacitor in the amplifier output is particularly critical because, as the power level
increases, the required capacity to pass a given frequency increases. Consequently, the first place the
low frequency limit of a sound board suffers usually is in its power amplifier, the part that provides a
high level signal suitable for unpowered speakers. It is not unusual for low priced sound boards to cut
off sounds below 150 Hz, which means few bass frequencies get through. Because the low


frequencies are not present at the speaker jacks of such sound boards, plugging in better amplified
speakers or even your stereo system will do nothing to ameliorate the low frequency deficiencies.
Line level outputs, as opposed to speaker outputs, are more likely to preserve low frequencies, and
they should be preferred for quality connections.
The other signal problem with inexpensive sound boards is noise. The noise level in any digital
system is constrained by the bit depth, and the typical 16-bit system pushes the noise floor down
below that of many stereo components. Listen critically to the sounds from many sound boards and
CD ROM drives, however, and you'll hear squeaks and peeps akin to the conversations of Martians as
well as more mundane sounds like swooshes and hisses. Most of these sounds are simply extraneous
signals intercepted by the sound board, mixed with the sounds you want to hear, and amplified to
offend your ears. Not only do these arcane sounds interfere with your listening enjoyment, they may
also make their imprint on the sounds you digitize and store, forever preserving the marginal quality
of your poor choice of sound board for posterity. Most sound boards keep the level of these noises
below that which you can hear through inexpensive speakers, but listen to soft music from your CD
drive, and you may find your peace shattered by Martian madness.
Better sound boards incorporate shielding to minimize the pick-up of extraneous noises. For example,
the circuit traces on the boards will be shielded by ground planes, extra layers of circuit traces at
ground potential covering the inner traces of multi-layer printed circuit boards. Only the best sound
boards can digitize analog signals without degrading the signal-to-noise ratio well below the 92 to 96
dB promised by good CD drives.
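The limit the bit depth imposes is easy to estimate. An ideal N-bit converter yields a signal-to-noise ratio of about 6.02N + 1.76 dB for a full-scale sine wave, a textbook approximation you can check with a few lines of arithmetic:

```python
def quantization_snr_db(bits: int) -> float:
    """Theoretical SNR of an ideal N-bit quantizer driven by a
    full-scale sine wave: 6.02 * N + 1.76 dB."""
    return 6.02 * bits + 1.76

snr_16 = quantization_snr_db(16)  # about 98 dB in theory
snr_8 = quantization_snr_db(8)    # about 50 dB, audibly hissy
```

Real hardware never reaches the theoretical figure, which is why well-behaved 16-bit CD equipment promises 92 to 96 dB rather than 98.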
If you plan on using a sound board for transferring analog audio to digital form, for example, to back
up your old vinyl phonograph records to CD, you will want a sound board that guarantees low noise
and low frequency response extending down to 20 Hz.

Transducers

The bridge between the electronic world of audio (both analog and digital) and the mechanical world
of sound is the acoustic transducer. The microphone converts sound into audio, and the loudspeaker
converts audio into sound.

Microphones

All sound boards have microphone inputs to enable you to capture your voice into the digital medium.
You can use digital transcriptions of your voice to annotate reports, spreadsheets, and other files or
incorporate them into multimedia presentations. With a suitable sound board, you can even connect
high quality microphones to your PC and make digital recordings of music, edit them, and write them
to CDs to play in your stereo system.
The job of the microphone is simple: to translate changes in air pressure into voltage changes. The
accuracy of the microphone's translation determines the quality of the sound that can be recorded. No
microphone is perfect. Each subtly distorts the translation, not making the results unidentifiable but
minutely coloring the captured sound. One side of the microphone designer's art is to make these
colorations as pleasing as possible. Another side of the art is to attempt to make the microphone work
more like the human ear, tuning in only to what you want to hear, rejecting unwanted sounds.

Technologies

Engineers can use any of several technologies to build microphones. The microphones that you're
most likely to connect to a sound board to capture your voice are dynamic. A dynamic microphone
acts like a small electrical generator or dynamo, using a moving magnetic field to induce a current in a
coil of wire. To detect changes in air pressure, a dynamic microphone puts a diaphragm into the path
of sound waves. The diaphragm is typically made from lightweight plastic and formed into a domed
shape or something even more elaborate to stiffen it. The diaphragm connects to a lightweight coil of
wire called a voice coil that's wrapped around a small, usually cylindrical, permanent magnet. The
voice coil is suspended so that it can move across the magnet as the diaphragm vibrates. The moving
coil in the permanent magnetic field generates a small voltage, which provides the signal to the
microphone input of your sound board.
Most microphones used for recording music today use a different operating principle. Called
condenser microphones (or sometimes, capacitor microphones), they modify an existing voltage
instead of generating a new one. In a classic condenser microphone, the diaphragm acts as one plate
of an electrical capacitor (which, in the days of vacuum tubes was often called a condenser, hence the
name of the microphone). As the diaphragm vibrates, its capacitance changes, which in
turn modifies the original voltage.

Directionality

Microphones are often described by their directionality, how they respond to sounds coming from
different directions. An omnidirectional microphone does not discriminate between sounds no matter
what direction they come from. It hears everything the same in a full circle around the microphone. A
unidirectional microphone has one preferred direction in which it hears best. It partially rejects sounds
from other directions. Most unidirectional microphones are most sensitive to sounds directly in front of
them. Sounds in the preferred direction are called on-axis sounds. Those that are not favored are
called off-axis sounds. The most popular unidirectional microphone is called the cardioid microphone
because of the heart-like shape of its pattern of sensitivity, kardia being Greek for heart.
Hypercardioid microphones focus their coverage more narrowly while maintaining the basic cardioid
shape. Bi-directional microphones are, as the name implies, sensitive to sounds coming from two
directions, generally the front and rear of the microphone, in a sensitivity pattern that resembles the
numeral "8." Consequently, bi-directional microphones are sometimes called figure eight microphones.
This design is chiefly used in some special stereophonic recording techniques. Figure 18.3 illustrates
the major types of microphone directional patterns.
Figure 18.3 Microphone directional patterns.
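All of these patterns can be described by one first-order formula, gain = a + (1 - a)cos(theta), where a is 1 for an omnidirectional microphone, 0.5 for a cardioid, and 0 for a figure eight. The sketch below applies that standard textbook formula; it is not specific to any product:

```python
import math

def pattern_gain(theta_deg: float, a: float) -> float:
    """First-order microphone pattern: a=1 omni, a=0.5 cardioid,
    a=0 figure eight. theta is the angle off-axis in degrees."""
    return a + (1.0 - a) * math.cos(math.radians(theta_deg))

omni_rear = pattern_gain(180, 1.0)      # omni hears the rear at full strength
cardioid_rear = pattern_gain(180, 0.5)  # cardioid nulls sound from directly behind
figure8_side = pattern_gain(90, 0.0)    # figure eight nulls at the sides
```

Intermediate values of a between 0.5 and 0 give the narrower hypercardioid family.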

The inexpensive microphones that accompany cassette tape recorders and some sound boards are
typically omnidirectional dynamic microphones. If you want to minimize external noises or office
commotion when annotating documents, a better unidirectional microphone will often make a vast
improvement in sound quality.

Electrical Characteristics

The signals produced by microphones are measured in several ways. The two most important
characteristics are impedance and signal level.
Microphones are classified as low impedance (from 50 to 600 ohms) or high impedance (50,000
more ohms). Some microphones have switches that allow you to change their impedance. Plugging a
microphone of one impedance into a circuit meant for another results in low power transfer—faint
signals. Nearly all professional microphones and most others now operate at low impedance as do
most microphone inputs. If your microphone has an impedance switch, you'll usually want it set to the
low (150 ohm) position.
The signal levels produced by microphones are measured in millivolts or dB (decibels) at a given
sound pressure level. This value is nominal. Loud sounds produce higher voltages. Most microphones
produce signals described as -60 to -40 dB, and will work with most microphone inputs. If you shout
into any microphone, particularly one with a higher output (closer to -40 dB), its output level may be
too high for some circuits to process properly, particularly those in consumer equipment—say your
PC's sound board. The high level may cause distortion. Adding an attenuator (or switching the
microphone with output level switches to a lower level) will eliminate the distortion.
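Converting these decibel ratings back into voltages takes only the standard relation V = 10^(dB/20), here assuming the ratings are referenced to one volt:

```python
def db_to_millivolts(level_db: float, reference_volts: float = 1.0) -> float:
    """Convert a level in dB (referenced to 1 V unless stated
    otherwise) to millivolts."""
    return reference_volts * 10 ** (level_db / 20.0) * 1000.0

low_output = db_to_millivolts(-60)   # a -60 dB microphone: about 1 mV nominal
high_output = db_to_millivolts(-40)  # a -40 dB microphone: about 10 mV nominal
```

The 20 dB spread between the two ratings is a tenfold difference in voltage, enough to overload a sensitive input stage.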
Microphone signals can be balanced or unbalanced. Balanced signals require two wires and a ground;
unbalanced, one wire and a ground. Balanced signals are more immune to noise. Unbalanced signals
require less sophisticated electronic input circuitry. Most sound boards use unbalanced signals. Most
professional microphones produce balanced signals.
You can often convert a balanced signal into an unbalanced one (so you can connect a professional
microphone to your sound board) by tying together one of the two signal wires of the balanced circuit
with the ground. The ground and the other signal wire then act as an unbalanced circuit.

Connectors

Both inexpensive microphones and sound boards use the same kind of connector, known as a
miniature phone plug. Better quality, professional microphones with balanced signals typically use
XLR connectors (named after the model designation of one of the original designs) with three pins for
their two signals and ground. In these connectors, pin one is always ground. In balanced circuits, pin
two carries the positive signal; pin three, the negative. When used in unbalanced circuits, pins one and
three are usually connected.
Phone plugs have two or three connections. The end of the plug is called the tip, and the shaft of the
connector is called the sleeve. Some connectors have a third contact in the form of a thin ring between
the tip and the sleeve, called simply the ring. Figure 18.4 illustrates a typical phone plug.
Figure 18.4 Components of a typical phone plug.

In unbalanced audio circuits, the tip is always connected to the hot or positive signal wire, and the
sleeve is connected to the shield or ground. With balanced signals, positive still connects to the tip,
negative connects to the ring, and the shield or ground goes to the sleeve. In stereo connections, the
left channel goes to the tip, the right channel to the ring, and the common ground or shield goes to the
sleeve.

Loudspeakers

From the standpoint of a PC, moving air is a challenge as great as bringing together distant worlds, the
electronic and the mechanical. To make audible sounds, the PC must somehow do mechanical work.
It needs a transducer, a device that transmits energy from one system to another—from the electrical
PC to the kinetic world of sound. The device of choice is the dynamic loudspeaker, invented in the
early 1920s by Chester Rice and Edward Kellogg.
The dynamic loudspeaker reverses the dynamic microphone design. An electrical current activates a
voice-coil (a solenoid or coil of wire that gives the speaker its voice) that acts as an electromagnet
which is wrapped around a permanent magnet. The changing current in the voice-coil changes its
magnetic field, which changes its attraction and repulsion of the permanent magnet, which makes the
voice-coil move in proportion to the current change. A diaphragm called the speaker cone is
connected to the voice-coil and moves with it to create the pressure waves of sound. The
entire assembly of voice-coil, cone, and supporting frame is called a speaker driver.
The art of loudspeaker design only begins with the driver. The range of human hearing far exceeds the
ability of any driver to reproduce sound uniformly. Accurately reproducing the full range of
frequencies that you can hear requires either massive electronic compensation (called equalization by
audio engineers) or using multiple speaker drivers, with each driver restricted to a limited frequency
range.
Commercial speaker systems split the full audible frequency range into two or three ranges to produce
two way and three way speaker systems. Modern systems may use more than one driver in each range
so a three way system may actually have five drivers.
Woofers operate at the lowest frequencies, which mostly involve bass notes, usually at frequencies of
150 Hertz and lower. Tweeters handle the high frequencies associated with the treble control,
frequencies that start somewhere in the range 2,000 to 5,000 Hertz and wander off to the limits of
human hearing. Midrange speaker drivers take care of the range in between. A crossover divides the
full range of sound into the individual ranges required by the specialized speaker drivers.
The term "subwoofer" is also used to describe a special, auxiliary baffled speaker system meant to
enhance the sound of ordinary speakers by extending their low frequency range. Because the human
ear cannot localize low frequency sounds, you can place this subwoofer anywhere in a listening room
without much effect on stereophonic imaging. The other, smaller speakers are often termed satellite
speakers.

Baffles and Enclosures

The low frequency range is particularly difficult for speaker systems to reproduce. The physics of
sound requires that more air move at lower frequencies to achieve the same pressure changes or
loudness, so larger speakers do a better job generating low frequency sounds. But the packaging of the
speaker also influences its low frequency reproduction. At low frequencies, the pressure waves
created by a loudspeaker can travel a substantial distance in the time it takes the speaker cone to move
in and out. In fact, when frequencies are low enough, the air has time to travel from the high pressure
area in front of the speaker to the low pressure area behind an outward moving speaker cone. The
moving air cancels out the air pressure changes and the sound. At low frequencies—typically those
below about 150 Hz—a loudspeaker in free air has little sound output. The small size and free air
mounting of the loudspeakers inside PCs severely constrain their ability to reproduce low frequencies.
To extend the low frequency range of loudspeakers, designers may install the driver in a cabinet that
blocks the air flow from the front to the back of the speaker. The cabinet of a speaker system is often
termed a baffle or enclosure. Strictly speaking, the two terms are not equivalent. A baffle controls
the flow of sound, while an enclosure is a cabinet that encloses the rear of the speaker.
Not just any cabinet will do. The design of the cabinet influences the ultimate range of the system as
well as its ability to deliver uniform frequency response. As with any enclosed volume, the speaker
enclosure has a particular resonance. By tuning the resonance of the enclosure, speaker system
designers can extend the frequency range of their products. Larger enclosures have lower resonances,
which helps accentuate the lowest frequencies speaker drivers can produce.
Most speaker enclosures use one of two designs. Acoustic suspension speaker systems seal the low
frequency driver in a cabinet, using the confined air to act as a spring (which in effect "suspends" the
speaker cone in its resting position). A ducted speaker or tuned port speaker or bass reflex speaker
puts a vent or hole in the cabinet. The vent both lowers the resonance of the enclosure and, when
properly designed, allows the sound escaping from the vent to reinforce that produced by the speaker
driver. Ducted speakers are consequently more efficient, producing louder sounds for a given power,
and can be smaller for a given frequency range.
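The tuning of a ported enclosure follows the physics of a Helmholtz resonator. The simplified sketch below ignores the end corrections a real design would apply to the port length, and the dimensions are purely illustrative:

```python
import math

SPEED_OF_SOUND = 343.0  # meters per second, at room temperature

def port_resonance_hz(port_area_m2: float, box_volume_m3: float,
                      port_length_m: float) -> float:
    """Helmholtz resonance of a ported (bass reflex) enclosure,
    ignoring the end corrections a real design would add to the
    effective port length."""
    return (SPEED_OF_SOUND / (2.0 * math.pi)) * math.sqrt(
        port_area_m2 / (box_volume_m3 * port_length_m))

# A 20-liter box with a 5 cm diameter, 10 cm long port:
port_area = math.pi * 0.025 ** 2
f_b = port_resonance_hz(port_area, 0.020, 0.10)  # roughly 54 Hz
```

Note how the resonance falls as the box volume grows, which is the arithmetic behind the rule that larger enclosures reach lower.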
Although tuning a speaker cabinet can extend its frequency range downward, it can't work magic. The
laws of physics stand in the way of allowing a speaker of a size that would fit on your desk or
bookshelf or inside a monitor from reproducing bass notes at levels you can hear. For most business
applications, reproducing low frequencies isn't necessary and may even be bothersome to coworkers
in adjacent cubicles when you start blasting foreign agents and aliens.

Subwoofers and Satellites

A subwoofer extends the low frequency abilities of your PC's sound system for systems that need
it. The distinguishing characteristic of the subwoofer is that it is designed to supplement other
speaker systems and reproduce only the lowest audible frequencies, typically those from 20 to 100
Hertz. Because these low frequencies are essentially non-directional (your ear cannot tell where they
are coming from), a single subwoofer suffices in stereo and multi-channel sound systems.
The classic speaker system puts all the drivers for various frequency ranges in a single cabinet to
produce a full range speaker system. One major trend in speaker design is to abandon the full range
layout and split the cabinetry. Designers put the midrange speakers and tweeters into small cabinets
called satellite speakers and rely on one or two subwoofers to produce the low frequencies. Practical
considerations underlie this design. The small size of the satellites allows you to place them where
convenient for the best stereo imaging, and you can hide the non-directional woofers out of sight.

Passive and Active Systems

The speakers you add to your PC typically come in one of two types, active or passive. The difference
is that active speakers have built-in amplifiers while passive speakers do not.
Most speaker systems designed for stereo systems are passive. They rely on your receiver or amplifier
to provide them with the power they need to make sound. Most computer speakers are active, with
integral amplifiers designed to boost the weak output of the typical sound board to the level required
for filling a room with sound.
The amplifiers in active speakers are like any other audio amplifiers with output power measured in
watts (and in theory matched to the speakers) and quality measured in terms of frequency response
and distortion. The big difference is that most active speakers, originally designed for portable stereos,
operate from battery power. If you plan to plug active speakers into your desktop PC, ensure that you
get a battery eliminator power supply so you can plug them into a wall outlet. Otherwise, if you're
serious about multimedia, you'll be single-handedly supporting the entire battery industry.
Most sound boards produce sufficient power to operate small passive speaker systems. Their outputs
are almost uniformly about four watts because all use similar circuitry to generate the power. This
level is enough even for many large stereo style passive speaker systems. Active speakers still work
with these higher powered sound boards and in many cases deliver better (if just louder!) sound
through their own amplifiers.

MIDI

Another way you and your PC can make beautiful music is by controlling external electronic musical
instruments. Instead of generating sounds itself, your PC becomes a sequencer, a solid state surrogate
conductor capable of leading a big band or orchestra of electronic instruments in a cacophony of your
own creation. A sequencer is nothing more than a memory and messaging system with editing
capabilities. The memory required by the sequencer is supplied by your PC's hard disk. The editing is
the software for your music making. The principal messaging system used for electronic instruments
is the MIDI interface.

Background

MIDI, the Musical Instrument Digital Interface, is the principal control system used for linking
electronic instruments. It's not tied to PCs or any particular computer system. In fact, although MIDI
is an intrinsic part of today's multimedia standards, it predates multimedia computers. MIDI is the
standard connection for plugging electronic instruments and accessories together. In essence, MIDI is
both hardware (a special kind of serial port) and software (a protocol for transferring commands
through the port). And, amazingly, it's one of the few sound board standards that's actually
standardized enough that it works with just about everything that says MIDI without compatibility
worries. MIDI enables synthesizers, sequencers, home computers, rhythm machines, and so on to be
interconnected through a standard interface.
Although MIDI is used for linking electronic music making instruments, the MIDI connection itself
carries no music. The MIDI wire is only for control signals. Like a remote control for your television,
it turns things on and off but doesn't carry the sound (or picture) at all. Historically, the primary
application of MIDI has been linking synthesizers and other electronic musical instruments, but the
current trend has been to broaden the application of the interface to embrace the control of other audio
devices as well.

Interface

The MIDI interface hardware itself is electronically and logically simple. It's just another kind of
serial port designed to provide a moderate speed port to pass commands to musical interfaces. Each
device connected using MIDI has both transmitting and receiving circuits, although some may have
only one or the other. A MIDI transmitter packages signals into the standard MIDI format and sends
them on their way. A MIDI receiver listens for commands on the MIDI bus and executes those meant
for it.
Every MIDI port has at its heart a UART chip that converts parallel computer data into serial form.
MIDI transmitters link to the MIDI bus using a line driver, which increases the strength of the UART
signal so that it can drive a five milliamp current loop. The driver also buffers the UART from
problems with the connection. The transmitter signal is designed to power exactly one MIDI receiver.
The UARTs in the MIDI system provide an asynchronous serial connection that operates at a fixed
speed of 31,250 bits per second. Because every byte transferred is framed with a start bit and stop bit,
it allows information to be exchanged at 3,125 bytes per second. Each data frame measures 320
microseconds long. The actual MIDI electrical signals are inverted; that is, a logical 0 on the MIDI
bus is indicated by switching the current on.
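These timing figures follow directly from the bit rate and the ten-bit frame:

```python
BAUD = 31250         # MIDI bit rate, bits per second
BITS_PER_FRAME = 10  # one start bit, eight data bits, one stop bit

# One byte on the wire: 10 bits at 31,250 bits per second.
frame_us = (BITS_PER_FRAME * 1_000_000) // BAUD  # 320 microseconds

# Maximum throughput of the link in whole bytes.
bytes_per_second = BAUD // BITS_PER_FRAME        # 3,125 bytes per second

# A three-byte message (status plus two data bytes) therefore
# occupies the wire for nearly a millisecond.
message_us = 3 * frame_us                        # 960 microseconds
```

That millisecond per message is why dense MIDI traffic on a single cable can audibly smear the timing of big chords.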
MIDI interfaces mate with your PC exactly as do any other ports. They communicate with your
system by exchanging bytes through an input/output port. Most MIDI port adapters prefer to use the
input/output port address of 330(Hex), because much MIDI software expects to find MIDI there and
often refuses to recognize MIDI at other locations. Alternatively, many sound boards use 220(Hex),
which often is the better choice because many SCSI host adapters also prefer the 330(Hex) base
address for communications. In any case, make sure that your MIDI software is aware of the port
address you choose for your MIDI adapter.
Each MIDI receiver links to the bus through an optoisolator, a device that uses an incoming electrical
signal to power a light emitting diode (LED). A photocell senses changes in brightness in the LED
and creates a corresponding electrical signal. The intermediary light (optical) beam isolates the
incoming electrical signal from those inside the MIDI device, preventing all sorts of nasties like
electrical shocks (which can harm you) and ground loops (which can harm the integrity of the MIDI
signal).

Connectors

The physical embodiment of a MIDI transmitter is the Out connector on a MIDI device. The In
connector links to a MIDI receiver. A Thru connector, when present, is a second MIDI transmitter
directly connected to the receiver using the In connector.
The MIDI connectors themselves are standard full size five-pin DIN jacks (such as Switchcraft 57
GB5F). Only three connections are used: pin 2 is ground; pin 4 is the positive going side of the
differential signal; pin 5 is the negative going side. Pins 1 and 3 are not used and are unconnected.
Unlike with serial ports, there's no need to cross over conductors in going from one MIDI port to
another; all three connections are the same at both ends of the cable.
MIDI cables have matching male 5-pin DIN plugs on either end. They use shielded twisted pair wire
and can be up to 50 feet long (15 meters). The shield of the cable connects to pin 2 at both ends of the
cable.

Wiring

The wiring of even the most complex MIDI system can be as easy as stringing Christmas tree lights or
plugging in stereo components. All MIDI devices are daisy chained together. That is, you connect the
MIDI Out of one device to MIDI In on the next. Signals then travel from the first transmitter through
all the devices down to the last receiver. Ordinarily, the first transmitter is your keyboard or
sequencer; the rest of the devices in the chain are synthesizers or electronic instruments.
Thru connectors make your MIDI project more thought provoking. Because the signals at the Thru
connector on a device duplicate those at the In connector rather than the Out connector, the
information the device sends out does not appear on the Thru connector. Ordinarily, this situation
presents no problems because most downstream MIDI devices are musical instruments that act only as
receivers. If you have a keyboard connected to a sequencer, however, any device plugged into the
sequencer's Thru connector listens to and hears the keyboard, not the sequencer. To hear both, you
have to use the sequencer's Out connector.

Protocol

The most complex part of the MIDI system is its communications protocol, the signals sent over the
wiring. MIDI devices communicate with one another through the MIDI connection by sending
messages, which are nothing more than sequences of bytes. Each message begins with a status byte
that identifies the type of message being sent, for example, to switch on or off a musical note. The
status byte is usually followed by data bytes in groups of one or two (depending on the command),
which hold the information about what to do—for example, which note to switch on. Status bytes are
unambiguously identified by always having their most significant bit as a logical one. Data bytes
always have zero as their most significant bit.
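The high-bit convention makes MIDI messages easy to assemble and to parse. The sketch below builds a Note On message per the MIDI specification; the helper names themselves are illustrative, not part of any standard library:

```python
def is_status_byte(b: int) -> bool:
    """Status bytes always carry a one in their most significant bit."""
    return b & 0x80 != 0

def note_on(channel: int, note: int, velocity: int) -> bytes:
    """Build a Note On message: status byte 9n (n = channel - 1),
    followed by two data bytes (note number, velocity), each with
    its most significant bit clear."""
    return bytes([0x90 | (channel - 1), note & 0x7F, velocity & 0x7F])

middle_c = note_on(1, 60, 100)  # 90 3C 64: middle C on channel 1
```

A receiver scanning the byte stream needs only the high bit to resynchronize on the next status byte.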

Each MIDI system has 16 channels that can be individually addressed. The sounds generated by the
synthesizers or instruments in the MIDI system are called voices.

Messages

A MIDI file or data stream is a sequence of bytes. MIDI gains its structure by defining channels
entirely through software. This structure is not specifically declared but is implicit in the codes used by
MIDI messages.
The least significant nibble (that is, the last four bits) of the status byte of the message defines the
channel to which it is addressed. The most significant nibble (the first four bits) of the status byte
defines the function controlled by the channel message. The various channel messages are defined in
Table 18.5.

Table 18.5. MIDI Channel Mode Messages

Status Data 1 Data 2 Function

Bx 7A 00 Local control off
Bx 7A 7F Local control on
Bx 7B 00 All notes off
Bx 7C 00 Omni Mode off
Bx 7D 00 Omni Mode on
Bx 7E M Mono Mode on; Poly Mode off (see Note)
Bx 7F 00 Poly Mode on; Mono Mode off
Note: The Mono Mode On command carries a mandatory second data byte, M, which specifies the
number of channels in the range 1 to 16 in which voice messages are to be sent. The receiver assigns
channels sequentially to voices starting with the Basic Channel.

A MIDI system divides its channels into two types, Voice Channels and Basic Channels. A Voice
Channel controls an individual voice. A Basic Channel sets up the mode of each MIDI receiver for
receiving voice and control messages. Typically, a MIDI receiver is assigned one Basic Channel as a
default; later the device can be reconfigured. For example, an eight-voice synthesizer could be
reconfigured to respond as two four-voice synthesizers, each with its own Basic Channel. The MIDI
sequencer or keyboard could then send separate messages to each four-voice synthesizer as if it were a
physically separate instrument.
Messages sent to individual channels in the MIDI system are termed channel messages. Messages sent
through the voice channel are termed voice messages because they control a voice; those sent through
the basic channel are mode messages because they control the mode of the device listening to the
channel. Mode messages determine how a device responds to voice messages. Although voice
messages may also be sent through the basic channel, mode messages can use only the basic channel.
MIDI allows four modes to govern how the receiver routes the channel messages to individual voices.
These modes are distinguished by three characteristics that act as flags: Omni, Mono, and Poly.
Omni controls whether the channels are treated individually or as a single minded group. When Omni
is on, the channels are grouped together. In effect, the messages come in from all directions—control
is omnidirectional. The MIDI voices respond as if all the control signals get funneled together. When
Omni is off, voices listen solely to the one channel to which they are assigned. Each channel and
voice is individually linked, separate from the others.
Mono is short for monophonic. If Omni is on, Mono combines all channel messages and sends them
to a single designated voice. When Omni is off, Mono allows the assigning of channels to individual
voices. That is, each voice has individual control through a separate channel.
Poly is short for polyphonic and routes the messages on one channel to all the voices in the MIDI
receiver. When Omni and Poly are on, all messages combine and go to all voices. When Omni is off
but Poly is on, one channel controls all voices. In other words, when Poly is on, all voices play the
same notes. Note that Poly and Mono are mutually exclusive. When Poly is on, Mono is off. Table
18.6 summarizes MIDI receiver modes.

Table 18.6. MIDI Receiver Modes

Mode number Omni status Poly/Mono Function

1 On Poly Voice messages received from all Voice Channels are assigned to voices polyphonically.
2 On Mono Voice messages received from all Voice Channels control only one voice, monophonically.
3 Off Poly Voice messages received in Voice Channel N only are assigned to voices polyphonically.
4 Off Mono Voice messages received in Voice Channels N thru N+M-1 are assigned monophonically to voices 1 thru M, respectively. The number of voices M is specified by the third byte of the Mono Mode On message.

When Poly is on, MIDI transmitters send all voices through a designated channel. When Poly is off,
Omni determines whether one or multiple voices are controlled. When on, voice messages for one
voice are sent through the designated channel. When Omni is off, a number of channels carry voice
messages for a like number of individually controlled voices. Table 18.7 summarizes MIDI
transmitter modes.

Table 18.7. MIDI Transmitter Modes

Mode number Omni status Poly/Mono Function

1 On Poly All voice messages transmitted in Channel N.
2 On Mono Voice messages for one voice sent in Channel N.
3 Off Poly Voice messages for all voices sent in Channel N.
4 Off Mono Voice messages for voices 1 thru M transmitted in Voice Channels N thru N+M-1, respectively (single voice per channel).

MIDI devices, whether receivers or transmitters, can operate only under one mode at a time. In most
cases, both the transmitter and receiver operate in the same mode. If a receiver cannot operate in a
mode that a transmitter requests from it, however, it may switch to an alternate mode (in most cases,
this will be Omni on, Poly on); or it ignores the mode message. When powered up, all MIDI
instruments default to the Omni on, Poly on mode.
Mode messages affect only voice channels and not the definition of the basic channel. Consequently,
a receiver recognizes only those mode messages sent over its assigned Basic Channel, even if it is in a
mode with Omni on. A mode command (with the exception of those turning local control on or off)
automatically turns all notes off.
Besides voice and mode messages, each receiver in the MIDI system listens to system messages.
Because these are universal messages, the status byte of each one does not define an individual
channel. Three types of system messages are defined: common messages are meant to be heard by all
receivers in the MIDI system; exclusive messages are sent to all receivers, but are keyed by a
manufacturer's code so that only devices keyed to that code respond; and real time messages are used
to synchronize the various devices in the MIDI system.
Exclusive and real time messages are exceptions to the rule that all messages have a status byte
followed by multiples of one or two data bytes. The status byte of an exclusive message can be
followed by any number of data bytes. Its length is defined by a special End of Exclusive flag
byte—a value of 0F7(hex) or 11110111(binary)—or by any other status byte. A real time message
consists only of a status byte. Manufacturers of MIDI equipment assign an ID code through which an
exclusive message accesses the equipment. The MIDI standard requests that manufacturers publish
the ID codes they use so that programmers can address and control it with exclusive messages. The
manufacturer also controls the format of the data bytes that follow their ID.
Real time messages can be (and often are) sent at any time, even during other messages. The action
called for by a real time message is immediately executed; then normal function of the system
continues. For example, the entire MIDI system is synchronized with the real time clock
message—0F8(hex)—sent from the transmitter at a rate of 24 clocks to the quarter note (a crotchet).
Some MIDI transmitters periodically send out Active Sensing messages—byte value 0FE(Hex)—to
indicate that the transmitter is still connected to the system and operating.
The Song Position Pointer tracks the number of MIDI beats that have elapsed since the beginning of a
song. One MIDI beat equals six MIDI clocks, one-quarter of a quarter note—a semiquaver. To move
to any position in a song (with a resolution of one beat), a Song Position Pointer status byte can be
sent followed by two data bytes indicating the pointer value.
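The pointer value in those two data bytes is a 14-bit number split into 7-bit halves, least significant half first, so that neither data byte sets the high bit reserved for status bytes. A minimal sketch of the encoding (the function names are illustrative, not part of any MIDI library):

```python
def song_position_message(beats):
    """Encode a Song Position Pointer message (status byte 0F2 hex).

    The 14-bit beat count is split across two 7-bit data bytes,
    least significant half first.
    """
    if not 0 <= beats <= 0x3FFF:
        raise ValueError("pointer must fit in 14 bits")
    return bytes([0xF2, beats & 0x7F, (beats >> 7) & 0x7F])

def beats_to_clocks(beats):
    # One MIDI beat (a semiquaver) equals six MIDI clocks.
    return beats * 6
```

Pointing a sequencer at beat 300, for example, yields the bytes 0F2, 2C, 02 (hex).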

Operation

MIDI works by simply sending out strings of bytes. Each device recognizes the beginning of a
command by detecting a status byte with its most significant bit set high. With voice and mode
messages, the status byte simply alerts the MIDI receiver to the nature of the data bytes that follow.
Each device knows how many data bytes are assigned to each command controlled by a status byte,
and waits until it has received all the data in a complete command before acting on the command. If
the complete command is followed by more data bytes, it interprets the information as the beginning
of another command. It awaits the correct number of data bytes to complete this next command and
then carries it out. This feature—one status byte serving as the preamble for multiple commands—is
called Running Status in the MIDI scheme of things. Running Status ends as soon as another status
byte is received, with one exception. If the interrupting message is a real time message, the Running
Status resumes after the real time message is complete. If a subsequent status byte interrupts a
message before it is complete (for example, between the first and second data bytes of a message
requiring two data bytes), the interrupted message is ignored. The MIDI device won't do anything
until it receives a full and complete message (which may be the interrupting message).
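The receive logic described above can be sketched as a small state machine. This is only an illustration of the rules, not production code; it models the voice and mode messages discussed here and treats bytes of 0F8(hex) and above as real time:

```python
# Data bytes expected after each voice/mode status nibble (high four bits).
DATA_BYTES = {0x8: 2, 0x9: 2, 0xA: 2, 0xB: 2, 0xC: 1, 0xD: 1, 0xE: 2}

def parse_midi(stream):
    """Yield (status, data) tuples from a byte stream, honoring Running Status."""
    status = None             # current Running Status byte, if any
    pending = []              # data bytes collected toward the next message
    for byte in stream:
        if byte >= 0xF8:      # real time messages act immediately and
            yield (byte, [])  # do not disturb Running Status
            continue
        if byte & 0x80:       # any other status byte starts a new message,
            status = byte     # silently discarding an incomplete one
            pending = []
            continue
        needed = DATA_BYTES.get(status >> 4) if status is not None else None
        if needed is None:
            continue          # stray or unsupported data byte: ignored
        pending.append(byte)
        if len(pending) == needed:
            yield (status, list(pending))
            pending = []      # Running Status remains in effect
```

Fed the bytes 90 3C 40 3E 40 (hex), the parser reports two Note On messages from the single status byte; a real time byte arriving mid-message is reported immediately without corrupting the note that follows it.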
Badly formatted or erroneous MIDI commands are generally ignored. If a given MIDI receiver does
not have a feature that a command asks for, it ignores the status and data bytes of that command. If a
MIDI transmitter inadvertently sends out a status byte not defined by the MIDI specification, the
status byte and all following data bytes are ignored until a valid status byte is sent.
Because of the MIDI coding system's nature, data bytes can have a value only from 0 to 127. Higher
values would require the most significant bit of the data byte to be set high, which would cause MIDI
devices to recognize it as a status byte. Most MIDI values consequently fall in the range of 0 to 127.
For example, MIDI recognizes 128 intensities and 128 musical notes.
After a MIDI system has been set up by assigning modes and programs, tunes can be played by
sending out voice messages. A Note On status byte plays a single note of the voice on the channel
indicated in the byte. The pitch of the note is defined by the first data byte, and its velocity
(corresponding to how hard the key on the keyboard is pressed—at least on velocity sensing
keyboards) is defined by the second data byte. The note continues as defined by the program until a
Note Off (either a specific Note Off or a general All Notes Off) message is sent, although in the
meantime the program may cause the note to decay to inaudibility. The individual Note Off message
also allows the control of a release velocity, a feature rare but available on some synthesizers.
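The channel number rides in the low four bits of the status byte (counted from zero), so a Note On for channel 1 begins with 90(hex). A sketch of building these three-byte messages; the helper names are illustrative:

```python
def note_on(channel, pitch, velocity):
    """Note On: status 9n(hex), then pitch and velocity data bytes."""
    assert 1 <= channel <= 16 and 0 <= pitch <= 127 and 0 <= velocity <= 127
    return bytes([0x90 | (channel - 1), pitch, velocity])

def note_off(channel, pitch, release_velocity=64):
    """Note Off: status 8n(hex); the second data byte is the release velocity."""
    assert 1 <= channel <= 16 and 0 <= pitch <= 127 and 0 <= release_velocity <= 127
    return bytes([0x80 | (channel - 1), pitch, release_velocity])

# Strike middle C (note 60) firmly on channel 1, then release it:
start = note_on(1, 60, 100)   # bytes 90 3C 64 (hex)
stop = note_off(1, 60)        # bytes 80 3C 40 (hex)
```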
MIDI provides for further control. Messages also control after touch, a feature of some keyboards that
enables you to change the sound of a note by pressing harder on a key after you've first depressed it to
sound the note. MIDI provides for two kinds of after touch: general—using status byte 0Dx(Hex)—that
applies to all notes currently being played in the channel; and specific—using status byte
0Ax(Hex)—that applies only to an individual key.
Another status byte allows MIDI to send the state of dials or switches on the keyboard called
controllers. One important controller, the pitch wheel, has its own status byte—0Ex(Hex)—assigned to
its control. Another status byte—0Bx(Hex)—relays data from as many as 122 other controllers.

Tone Coding

MIDI encodes musical notes as discrete numeric values in steps generally corresponding to the
twelve-tone scale used in most Western music, but it need not. For example, electronic percussion
instruments may recognize note values for different non-chromatic drum sounds. In the MIDI scheme
of things, middle C is assigned a value of 60. Each semitone lower is one number lower; each
semitone higher is one number higher. The MIDI coding scheme thus covers a range 40 notes wider
than the 88 keys of most pianos, 21 notes below and 19 above. Table 18.8 lists MIDI values and their
corresponding chromatic notes.
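With the A above middle C fixed at 440 Hz, the frequency of any note number follows from the equal-tempered semitone ratio, the twelfth root of two. A quick sketch:

```python
def midi_to_frequency(note, a440=440.0):
    """Frequency in Hz of a MIDI note number under standard Western tuning.

    MIDI note 69 (the A above middle C) is the 440 Hz reference;
    each semitone away multiplies or divides by 2 ** (1/12).
    """
    return a440 * 2.0 ** ((note - 69) / 12)
```

For example, midi_to_frequency(60) returns about 261.63 Hz for middle C, and note 21, the bottom A on an 88-key piano, comes out at exactly 27.5 Hz.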

Table 18.8. MIDI Note Values

MIDI Value Note Frequency (Hz)


0 C 8.18
1 C#/D-flat 8.66
2 D 9.18
3 D#/E-flat 9.72
4 E 10.30
5 F 10.91
6 F#/G-flat 11.56
7 G 12.25
8 G#/A-flat 12.98
9 A 13.75
10 A#/B-flat 14.57
11 B 15.43
12 C 16.35
13 C#/D-flat 17.32
14 D 18.35
15 D#/E-flat 19.45
16 E 20.60
17 F 21.83
18 F#/G-flat 23.12
19 G 24.50
20 G#/A-flat 25.96
21 A 27.50
22 A#/B-flat 29.14
23 B 30.87
24 C 32.70
25 C#/D-flat 34.65
26 D 36.71
27 D#/E-flat 38.89
28 E 41.20
29 F 43.65
30 F#/G-flat 46.25
31 G 49.00
32 G#/A-flat 51.91
33 A 55.00
34 A#/B-flat 58.27
35 B 61.74
36 C 65.41
37 C#/D-flat 69.30
38 D 73.42
39 D#/E-flat 77.78
40 E 82.41
41 F 87.31
42 F#/G-flat 92.50
43 G 98.00
44 G#/A-flat 103.83
45 A 110.00
46 A#/B-flat 116.54
47 B 123.47
48 C 130.81
49 C#/D-flat 138.59
50 D 146.83
51 D#/E-flat 155.56
52 E 164.81
53 F 174.61
54 F#/G-flat 185.00
55 G 196.00
56 G#/A-flat 207.65
57 A 220.00
58 A#/B-flat 233.08
59 B 246.94
60 Middle C 261.63
61 C#/D-flat 277.18
62 D 293.66
63 D#/E-flat 311.13
64 E 329.63
65 F 349.23
66 F#/G-flat 369.99
67 G 392.00
68 G#/A-flat 415.30
69 A 440.00
70 A#/B-flat 466.16
71 B 493.88
72 C 523.25
73 C#/D-flat 554.37
74 D 587.33
75 D#/E-flat 622.25
76 E 659.26
77 F 698.46
78 F#/G-flat 739.99
79 G 783.99
80 G#/A-flat 830.61
81 A 880.00
82 A#/B-flat 932.33
83 B 987.77
84 C 1046.50
85 C#/D-flat 1108.73
86 D 1174.66
87 D#/E-flat 1244.51
88 E 1318.51
89 F 1396.91
90 F#/G-flat 1479.98
91 G 1567.98
92 G#/A-flat 1661.22
93 A 1760.00
94 A#/B-flat 1864.66
95 B 1975.53
96 C 2093.00
97 C#/D-flat 2217.46
98 D 2349.32
99 D#/E-flat 2489.02
100 E 2637.02
101 F 2793.83
102 F#/G-flat 2959.96
103 G 3135.96
104 G#/A-flat 3322.44
105 A 3520.00
106 A#/B-flat 3729.31
107 B 3951.07
108 C 4186.01
109 C#/D-flat 4434.92
110 D 4698.64
111 D#/E-flat 4978.03
112 E 5274.04
113 F 5587.65
114 F#/G-flat 5919.91
115 G 6271.93
116 G#/A-flat 6644.88
117 A 7040.00
118 A#/B-flat 7458.62
119 B 7902.13
120 C 8372.02
121 C#/D-flat 8869.84
122 D 9397.27
123 D#/E-flat 9956.06
124 E 10548.08
125 F 11175.30
126 F#/G-flat 11839.82
127 G 12543.85
Frequency values assume standard Western tuning, that is, A=440 Hz.

Some synthesizer voices use the note values, not to encode tones, but to key distinct percussive
sounds. A melody rendered by one of these voices sounds more like a catfight in a drumset than it
does music.

Standards

One basic document describes all that has been standardized about MIDI, the MIDI Specification
itself. Although it is termed MIDI 1.0, the specification has been updated several times since its
inception in 1984.
Until 1995, the MIDI Specification actually comprised five booklets. An update in that year combined
all of them into a 300-odd page book with six sections including the basic MIDI standard, voice
mapping (General MIDI, described in the "General MIDI" section that follows), standard MIDI file
formats, the MIDI time code, show control, and machine control under MIDI.
Beyond the basic MIDI specifications, several extensions have been proposed. One, the
Downloadable Sounds system is expected to be adopted by the MIDI organization in 1997. Another,
the GS format, is widely used in the music industry but is not an official MIDI standard. A
specification proposed as the next generation of the MIDI standard called XMIDI has been disavowed
by the MIDI organization.

General MIDI

What a voice sounds like depends on the synthesizer generating it. A voice controls a program on the
synthesizer, and the program is a property of the synthesizer. The same MIDI messages may elicit
different sounds from different synthesizers. Because you might expect some consistency between
synthesizers, electronic instrument makers defined 128 programs, assigning a numeric value and
descriptive name to each. The result is called General MIDI.
If you create a MIDI file using a General MIDI device, it will play back using the same
instrumentation on any other device that also supports General MIDI. Only the instrumentation will
be the same, however. The sound may vary depending on the quality of the synthesizers you use. You
might, for example, create a MIDI file using the limited synthesizer built into an early PC sound
board. If you later connect your system to a high quality advanced synthesizer that supports General
MIDI, you'll hear the full quality of the new synthesizer. Table 18.9 summarizes the assignments of
the General MIDI standard.

Table 18.9. General MIDI Instrument Program Map

Program Number Group Instrument


1 Piano Acoustic Grand
2 Piano Bright Acoustic
3 Piano Electric Grand
4 Piano Honky Tonk
5 Piano Electric Piano 1
6 Piano Electric Piano 2
7 Piano Harpsichord
8 Piano Clav
9 Chrom percussion Celesta
10 Chrom percussion Glockenspiel
11 Chrom percussion Music Box
12 Chrom percussion Vibraphone
13 Chrom percussion Marimba
14 Chrom percussion Xylophone
15 Chrom percussion Tubular Bells
16 Chrom percussion Dulcimer

17 Organ Drawbar Organ
18 Organ Percussive Organ
19 Organ Rock Organ
20 Organ Church Organ
21 Organ Reed Organ
22 Organ Accordion
23 Organ Harmonica
24 Organ Tango Accordion
25 Guitar Acoustic Guitar (nylon)
26 Guitar Acoustic Guitar (steel)
27 Guitar Electric Guitar (jazz)
28 Guitar Electric Guitar (clean)
29 Guitar Electric Guitar (muted)
30 Guitar Overdriven Guitar
31 Guitar Distortion Guitar
32 Guitar Guitar Harmonics
33 Bass Acoustic Bass
34 Bass Electric Bass(finger)
35 Bass Electric Bass(pick)
36 Bass Fretless Bass
37 Bass Slap Bass 1
38 Bass Slap Bass 2
39 Bass Synth Bass 1
40 Bass Synth Bass 2
41 Strings Violin
42 Strings Viola
43 Strings Cello
44 Strings Contrabass
45 Strings Tremolo Strings
46 Strings Pizzicato Strings

47 Strings Orchestral Strings
48 Strings Timpani
49 Ensemble String Ensemble 1
50 Ensemble String Ensemble 2
51 Ensemble SynthStrings 1
52 Ensemble SynthStrings 2
53 Ensemble Choir Aahs
54 Ensemble Voice Oohs
55 Ensemble Synth Voice
56 Ensemble Orchestra Hit
57 Brass Trumpet
58 Brass Trombone
59 Brass Tuba
60 Brass Muted Trumpet
61 Brass French Horn
62 Brass Brass Section
63 Brass SynthBrass 1
64 Brass SynthBrass 2
65 Reed Soprano Sax
66 Reed Alto Sax
67 Reed Tenor Sax
68 Reed Baritone Sax
69 Reed Oboe
70 Reed English Horn
71 Reed Bassoon
72 Reed Clarinet
73 Pipe Piccolo
74 Pipe Flute
75 Pipe Recorder
76 Pipe Pan Flute

77 Pipe Blown Bottle
78 Pipe Shakuhachi
79 Pipe Whistle
80 Pipe Ocarina
81 Synth lead Lead 1 (square)
82 Synth lead Lead 2 (sawtooth)
83 Synth lead Lead 3 (calliope)
84 Synth lead Lead 4 (chiff)
85 Synth lead Lead 5 (charang)
86 Synth lead Lead 6 (voice)
87 Synth lead Lead 7 (fifths)
88 Synth lead Lead 8 (bass+lead)
89 Synth pad Pad 1 (new age)
90 Synth pad Pad 2 (warm)
91 Synth pad Pad 3 (polysynth)
92 Synth pad Pad 4 (choir)
93 Synth pad Pad 5 (bowed)
94 Synth pad Pad 6 (metallic)
95 Synth pad Pad 7 (halo)
96 Synth pad Pad 8 (sweep)
97 Synth effects FX 1 (rain)
98 Synth effects FX 2 (soundtrack)
99 Synth effects FX 3 (crystal)
100 Synth effects FX 4 (atmosphere)
101 Synth effects FX 5 (brightness)
102 Synth effects FX 6 (goblins)
103 Synth effects FX 7 (echoes)
104 Synth effects FX 8 (sci-fi)
105 Ethnic Sitar
106 Ethnic Banjo

107 Ethnic Shamisen
108 Ethnic Koto
109 Ethnic Kalimba
110 Ethnic Bagpipe
111 Ethnic Fiddle
112 Ethnic Shanai
113 Percussive Tinkle Bell
114 Percussive Agogo
115 Percussive Steel Drums
116 Percussive Woodblock
117 Percussive Taiko Drum
118 Percussive Melodic Tom
119 Percussive Synth Drum
120 Percussive Reverse Cymbal
121 Sound effects Guitar Fret Noise
122 Sound effects Breath Noise
123 Sound effects Seashore
124 Sound effects Bird Tweet
125 Sound effects Telephone Ring
126 Sound effects Helicopter
127 Sound effects Applause
128 Sound effects Gunshot

In addition to these 128 instruments, the General MIDI specification assigns 47 drum sounds to a
percussion key map. These assignments are shown in Table 18.10.

Table 18.10. General MIDI Percussion Key Map

Program Instrument
35 Acoustic bass drum
36 Bass drum 1
37 Side stick

38 Acoustic snare
39 Hand clap
40 Electric snare
41 Low floor tom
42 Closed hi-hat
43 High floor tom
44 Pedal hi-hat
45 Low tom
46 Open hi-hat
47 Low-mid tom
48 High-mid tom
49 Crash cymbal 1
50 High tom
51 Ride cymbal 1
52 Chinese cymbal
53 Ride bell
54 Tambourine
55 Splash cymbal
56 Cowbell
57 Crash cymbal 2
58 Vibraslap
59 Ride cymbal 2
60 High bongo
61 Low bongo
62 Mute high conga
63 Open high conga
64 Low conga
65 High timbale
66 Low timbale
67 High agogo

68 Low agogo
69 Cabasa
70 Maracas
71 Short whistle
72 Long whistle
73 Short guiro
74 Long guiro
75 Claves
76 High wood block
77 Low wood block
78 Mute cuica
79 Open cuica
80 Mute triangle
81 Open triangle

So that you can be sure whether a given device supports the General MIDI specification, the MIDI
Manufacturers Association has developed a logo that may be displayed only by conforming devices.
Use of the logo is controlled by the association, and equipment makers must earn its approval before
they can put the logo on their products. Figure 18.5 illustrates the General MIDI logo.
Figure 18.5 The General MIDI logo.

To be controlled, General MIDI voices (or any other voices controllable through MIDI) are assigned
to channels, although the correspondence need not be one to one. For example, all voices can share
one channel or one voice can be controlled by all channels. Which voice responds to which channel is
controlled by setting up each receiver by sending messages through the system.
The General MIDI specification sets the first nine channels for instruments and the tenth for
percussion. The remaining channels, 11 through 16, are left for individual musicians to use as they see
fit. Most of the time they are not used.
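Calling up one of these instruments is a Program Change message, status byte 0Cn(hex). Note the off-by-one: the table numbers programs 1 through 128, while the data byte on the wire runs 0 through 127. A sketch under those conventions (the function name is illustrative):

```python
def program_change(channel, gm_program):
    """Select a General MIDI program (1-128) on a MIDI channel (1-16).

    The data byte is zero based, so GM program 1, Acoustic Grand,
    travels as data value 0.
    """
    assert 1 <= channel <= 16 and 1 <= gm_program <= 128
    return bytes([0xC0 | (channel - 1), gm_program - 1])

# General MIDI reserves channel 10 for percussion, so a melodic
# voice such as program 41 (Violin) normally goes on channels 1-9:
violin = program_change(1, 41)   # bytes C0 28 (hex)
```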

Basic and Extended MIDI

Wonderful as General MIDI is, for the first generation of PC sound boards it was a high standard,
indeed. Most of them were unable to support the ten channels of General MIDI that were generally
assigned to instruments, let alone the full sixteen. Consequently, Microsoft came up with its own
definitions of MIDI devices for use with Windows: Basic MIDI and Extended MIDI.
Basic MIDI devices have only four channels, corresponding to MIDI channels 13, 14, 15, and 16.

Extended MIDI devices have ten channels, corresponding to MIDI channels 1 through 10 and
assigned according to the General MIDI specification. Table 18.11 briefly summarizes the three levels
of MIDI implementation.
Table 18.11. MIDI Implementations
Standard Creator Number of channels Channel numbers
Basic Microsoft 4 13 thru 16
Extended Microsoft 10 1 thru 10
General MIDI MMA 16 1 thru 16
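The channel ranges in Table 18.11 translate directly into code. This illustrative check (the names are mine, not part of any API) shows why a file can carry separate programs for each device class:

```python
# Which 1-16 MIDI channels each implementation responds to.
CHANNELS = {
    "Basic": range(13, 17),       # channels 13 thru 16
    "Extended": range(1, 11),     # channels 1 thru 10
    "General MIDI": range(1, 17), # all sixteen channels
}

def audible(implementation, channel):
    """Whether messages on a given channel sound on the given device class."""
    return channel in CHANNELS[implementation]
```

A percussion track on channel 10, for instance, plays on Extended and General MIDI devices but is ignored by a Basic device.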

This arrangement is effective for Microsoft's purposes. It allows the same MIDI file to be played on
both Basic and Extended devices with separate and distinct programs for each. The basic devices will
ignore the voices on the channels used by the extended devices, and vice versa. Programmers creating
MIDI files for playback through Windows thus make two programs: one for the limited abilities of
basic MIDI and one for extended. Most General MIDI programs will play through extended MIDI
devices without a problem, although any voices assigned the top six channels will be silent.
These definitions have resulted in three classes of MIDI products. The Basic MIDI standard is low
enough to include virtually every sound board you can buy for your PC. Both you and software
publishers can depend on Basic MIDI working with any MPC-compatible PC. Extended MIDI,
supported by the latest sound boards, allows complex compositions and gives you enough versatility
that you can create intricate compositions. General MIDI assures the best compatibility with most
mainstream MIDI products.
As a practical matter, this scheme helps assure that you don't have to worry about the details of
mapping if all you want to do is play back commercially distributed MIDI files. General MIDI and
Microsoft mapping assure software developers that they can supply files that will make the best of
whatever hardware you have installed in your multimedia PC.

GS Format

To allow the individual addressing of a large number of sounds, Roland Corporation elaborated on
General MIDI to create its own GS format. Although it completely complies with General MIDI, GS
permits the addressing of 16,384 sounds by using the Program Change message and a control change
message (each of which can have one of 128 values). GS also allows for more detailed control of
expressions, for example, allowing for eight models of reverb and chorus effects. Its polyphony
support extends to 24 voices, which can be assigned a priority so that when a file is played back on a
device that supports fewer voices than it was written for, the most important voices remain intact.
Although manufacturers other than Roland now use the GS format, it is not an industry standard. GS
format devices are, however, completely compatible with General MIDI.
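The 16,384 figure is simply the product of two 7-bit values: a control change selects one of 128 banks, and a Program Change selects one of 128 programs within it. A sketch of the pairing, assuming controller 0 (Bank Select) as the control change involved:

```python
def gs_select(channel, bank, program):
    """Pair a Bank Select control change with a Program Change.

    Each message carries a 7-bit value, so together they can
    address 128 * 128 = 16,384 distinct sounds.
    """
    assert 1 <= channel <= 16 and 0 <= bank <= 127 and 0 <= program <= 127
    ch = channel - 1
    return bytes([0xB0 | ch, 0x00, bank,   # control change 0: Bank Select
                  0xC0 | ch, program])     # Program Change within that bank
```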

Downloadable Sounds

In January 1997, the MIDI Manufacturers Association announced an extension to the General MIDI
specification that allows new voices to augment those already defined. Termed the Downloadable
Sounds (DLS) format, the new standard describes a means by which new samples can be added to
wavetable synthesizers. The new samples in the form of wavetable files can add new voices or
augment existing instruments with special effects. Using DLS, game developers and musical
composers can add customized instruments to compliant sound boards or improve the quality of an
existing voice with a new sample. Although the DLS standard had not been formalized at the time of
this writing, the MIDI Manufacturers Association expected adoption of the standard in the first half
of 1997.

XMIDI

Another attempt to address the shortfalls of the General MIDI standard is Extended MIDI, developed
by the British firm Digital Design and Development in 1995. Although XMIDI, as it is usually
termed, has been promoted by its developer as the next generation of MIDI, the music industry has yet
to embrace it. In fact, in early 1997, the MIDI Manufacturers Association made a specific policy
statement in which it concluded that XMIDI was not likely to be adopted by the music industry and
that the MMA membership was not interested in adopting the proprietary XMIDI design which had a
single hardware source as an industry standard.

MIDI Manufacturers Association

The MIDI standard is governed by the MIDI Manufacturers Association, an industry consortium,
which also governs the use of the MIDI logo and assigns manufacturers' identification numbers for
companies developing products under the standard. The association publishes the MIDI standard (but
only on paper) and posts news about the standard and music industry on the Internet. For information
about the MIDI standard and how to acquire a copy of the full document, you can contact the
association at the following address:
MIDI Manufacturers Association
Post Office Box 3173
La Habra, California 90632
Telephone: (310) 947-8689
Fax: (310) 947-4569
Email: mma@midi.org

WWW: http://www.midi.org

Installation

Sound boards are heavy feeders when it comes to system resources. A single sound board may require
multiple interrupts, a wide range of input/output ports, and a dedicated address range in High DOS
memory. Because of these extensive resource demands, the need for numerous drivers, and often poor
documentation, sound boards are the most frustrating expansion products to add to a PC. In fact, a
sound board may be the perfect gift to surreptitiously gain revenge, letting you bury the hatchet with
an estranged friend without the friend knowing you've sliced solidly into his back.
Some of the connections and concerns in installing a sound board have already been discussed in
other contexts. The CD drive audio connection was discussed in Chapter 12, "Compact Discs." The
digital control interface for CD drives present on some sound boards was discussed in Chapter 9,
"Storage Interfaces." Those, however, are the simple connections for the sound board. The actual
audio issues are more complex.

System Resources

Most of the difficulties in installing a sound board arise from the multiple functions expected from it.
Because of the need for hardware compatibility with both the Ad Lib and Sound Blaster standards,
any sound board must duplicate all the registers and other hardware features of those products for
their synthesizers to work with older games. MIDI software often requires a specific port assignment.
Controlling the analog to digital conversion and analog mixer circuits require additional controls.
Specific enhancements as well as CD ROM interfaces must also be addressed.
Depending on the vintage of the sound board you want to install, the procedure will vary. Most
current sound boards use electronic settings compatible with the Plug-and-Play standard. Older boards
may force you to tangle with a bank of DIP switches or a row or two of jumpers to select the
resources used by the board. A number of boards are transitional—they use electronic settings but do
not fully support Plug-and-Play. Your owners' manual should document what form these settings take
and, if manual configuration is required, it should list the factory defaults (always a good starting
place) and how to make changes.
Even when you must configure your sound board manually, you don't have full freedom of choice.
Should you want to run DOS-based games, many of the resource assignments are pre-ordained. For
example, you must use DMA channel 1. DOS games require Sound Blaster compatibility, and the
Sound Blaster's DMA usage is set to channel 1. If you don't want to play games, however, another
channel choice may be more appropriate. Higher DMA channel numbers are 16-bit transfers, while
lower DMA channels (1, 2, and 3) are only 8 bits wide. Achieving the highest degree of Sound Blaster
compatibility also dictates setting the base I/O port assignment for your sound board to 220(Hex), the
default assignment of the Sound Blaster. In addition, the default Sound Blaster interrupt assignment is
5. You can use alternate settings for the interrupt and I/O port settings because the Sound Blaster
allows for alternate values. However, you'll have to alter the options used with your sound board
driver software to match your manual configuration. Under Windows 95, you may have to alter the
resource settings assigned the sound board through Device Manager.

Most electronically configured sound boards automatically adjust themselves to the settings you make
through Device Manager. Others do not. You may have to run the software setup program supplied by
the sound board maker, then adjust Device Manager to reflect your new settings. Moreover,
sometimes the resources automatically assigned by Windows 95 during hardware installation to
boards that do not fully support the Plug-and-Play standard can cause conflicts. If your sound board
does not work after installation, you should try alternate resource settings first.
A further installation difficulty is that Windows may not automatically exclude from management the
memory range used by your sound board. Should your sound board not work at its alternate resource
settings, check the memory addresses it uses and ensure they are excluded from management.
Old drivers can also cause problems with sound boards. When installing a new sound board driver,
you'll also want to remove any old sound board drivers that you've previously installed. The procedure
is specific to your operating system. For example, under the Windows 3.1 family, you would select
REMOVE from the drivers menu, highlight the old driver name, and click on OK. You may need to
eliminate references to old sound board drivers from your CONFIG.SYS file even when running
under Windows 95. When upgrading from Windows 3.1 to Windows 95, some sound board makers
require that you manually delete the old Windows 3.1 drivers by erasing them from your disk to
prevent later problems. The general rule is to install only the latest drivers and remove all old sound
board drivers.

Audio Connections

Most sound boards have integral audio amplifiers designed to power external speakers. Because of
constraints imposed by your PC, however, these amplifiers are rudimentary. Typically, they produce
little output power—usually between 100 milliwatts and one watt. Worse, many cut off low
frequencies below 100 Hertz, the very frequencies that have the most impact in the sound effects often
played through sound boards. Although you can connect an auxiliary amplifier and speaker system to
your sound board to overcome the power shortage, you usually cannot overcome the low frequency
cut-off.

Speaker Wiring

Commercial computer speakers make wiring easy. Most sound boards use stereophonic miniature
phone jacks for their outputs, and speaker systems designed for computers have matching plugs. The
wires from both speaker systems in a stereo pair may join at this plug—the two speakers share a
single plug. Often, however, each speaker has its own wire and plug. These systems require that you
chain the speakers. Again, you face two design variations.
In one variation, the speakers are not quite identical. One of them, usually the left speaker, has a jack
near where the wire leads into the speaker. In this case, you plug the left speaker into your sound
board and plug the right speaker into a jack on the left speaker. If you accidentally plug the right
speaker into the sound board, you won't hurt anything but you won't have a place to plug in the left
speaker.

The other variation uses speakers that both have jacks on the back. You can plug either speaker into
your sound board and plug the other speaker into the first. Although the wires carry both signals
through both speakers, the right speaker only taps the right signal, and the left speaker taps only the
left signal.
You're not limited to commercially designated computer speakers. You can plug high efficiency
stereo speakers or power speakers designed for portable personal stereo systems directly into your
sound board. Or you can plug your sound board into the Auxiliary jack (or CD jack) of a receiver or
pre-amplifier and route the sound through an ordinary stereo system. Commercial adapter cables are
readily available to convert the single stereo miniature phone plug into two pin (or phono) plugs for
your stereo.
If you choose to connect your own speakers to a sound board, be they commercial speaker systems or
drivers you have lying around from that old autosound installation that went awry, be certain you
match levels and impedances. Most sound boards are rated to deliver full power into loads with a
four-ohm impedance. Higher impedance causes no worry because a higher impedance load typically
reduces the drain on an amplifier (making it less prone to overloading and failure). Although a lower
impedance can be dangerous, nearly all speakers have a four-ohm or greater impedance. The only
time you may encounter a lower impedance is if you try to connect two speakers in parallel to one
channel of your sound board. Don't try it.
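The reason is simple arithmetic: impedances in parallel combine reciprocally, so two four-ohm speakers present only two ohms to the amplifier. A quick sketch (the helper function is ours, purely for illustration):

```python
# Parallel loads combine reciprocally: 1/Z_total = 1/Z1 + 1/Z2 + ...
def parallel_impedance(*ohms):
    return 1 / sum(1 / z for z in ohms)

# Two 4-ohm speakers on one channel present just 2 ohms,
# below the 4-ohm minimum most sound boards are rated for.
print(parallel_impedance(4, 4))  # 2.0
```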
Adding subwoofers complicates matters. Passive subwoofers usually incorporate their own crossovers
that intercept low frequencies and prevent them from being passed on to the rest of your speakers.
Consequently, the subwoofer must be connected between your ordinary speakers and amplifier. When
you connect a single subwoofer and satellites, all signals first go to the subwoofer, and the satellites
plug into the subwoofer. Systems with two subwoofers pair one subwoofer with one satellite, so the
wire from each channel leads to the associated subwoofer and thence to its satellite. Active subwoofers,
when not part of a speaker package, typically bridge your amplifier outputs, wired in parallel to your
existing speakers.
A surround sound amplifier makes subwoofer wiring easy: most surround systems provide dedicated
subwoofer outputs.

Amplifier Wiring

If you want the best sound quality, you may be tempted to plug your sound board into your stereo
system. For such purposes, you can consider your stereo as an active speaker system. Just connect an
unused input of your receiver or preamp to the output of your sound board (the auxiliary output is
best, but the speaker output will likely work, too). If your PC and stereo are not both properly
grounded, however, you may inadvertently cause damage. With improper grounding, there can be odd
voltage differences between the chassis of your PC and stereo system—enough of a difference that
you can draw sparks when plugging in the audio cable. This can have detrimental effects on both
computer and stereo circuitry. In other words, make sure that you properly ground both PC and stereo.
Most sound board outputs are high level, meant for direct connection with loudspeakers. Using high
power buffer circuits, they can drive low impedance loads presented by loudspeakers, typically 4 to
16 ohms. Although these connections are designed for loudspeakers, they match the high level audio
inputs of most preamplifiers and receivers well enough that you can usually connect them directly with a patch cord.


Do not worry about blasting a full watt into the input of your receiver. The input circuits of your
receiver or preamplifier are sensitive to the voltage level rather than the power in the signal. Because
amplifier inputs typically have a high input impedance—at least 2,000 ohms and possibly as high as
100,000 ohms—little of the current in the sound board output can flow through the amplifier input
circuit. The voltage levels match well enough that the signals are compatible, although you may have
to adjust the volume control on high powered sound boards to prevent overloading your receiver's
inputs with too much voltage.
A sound board with a one-watt output into a four-ohm load produces a two-volt output. A
100-milliwatt sound board produces about a 0.63-volt output, again assuming a four-ohm impedance. Most
high level receiver and preamplifier auxiliary inputs operate with a voltage level of 0.1 to 1 volt. Do
not, however, plug the speaker output signals of your sound board into microphone inputs. The
voltage levels in the sound board signal will likely overload most microphone inputs.
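Those voltage figures follow directly from the power rating and the load impedance: the output voltage is the square root of the power times the resistance. A quick check of the arithmetic (illustrative only, not any sound card API):

```python
import math

# Output voltage from rated power (watts) into a load (ohms): V = sqrt(P * R)
def output_volts(power_watts, load_ohms):
    return math.sqrt(power_watts * load_ohms)

print(round(output_volts(1.0, 4), 2))  # the one-watt board
print(round(output_volts(0.1, 4), 2))  # the 100-milliwatt board
```

By this formula the 100-milliwatt board works out to about 0.63 volt into four ohms.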
If you choose to make a direct connection to a receiver or other external amplifier, turn down the
volume or loudness control on both your sound board and receiver to a minimal level before you
begin. Play some sound through your sound board and slowly increase the volume control on your
receiver to the position you use for your normal listening level. Finally, increase the level of your
sound board until it reaches a pleasing listening level through your receiver.

Chapter 19: Parallel Ports


Parallel ports are well-defined, convenient, and quick—probably the most trouble free
connection you can make with your PC. Once the exclusive province of printers, with the
advent of the Enhanced Parallel Port they promise to be the universal interface—the duct
tape of PC ports. An increasing number of peripherals are taking advantage of the fast,
sure, parallel connection. But all parallel ports are not the same, nor are all parallel
connections. A port that works for a printer may fail dismally when you attempt to
transfer files across it. It is all a matter of design.

■ IEEE 1284
■ History
■ Connectors
■ The A connector
■ The B Connector
■ The C Connector
■ Adapters
■ Cable
■ Electrical Operation
■ Compatibility Mode
■ Nibble Mode
■ Byte Mode
■ Enhanced Parallel Port Mode
■ Extended Capabilities Port Mode
■ Logical Interface
■ Input/Output Ports
■ Device Names
■ Interrupts
■ Port Drivers

■ Control
■ Traditional Parallel Ports
■ Enhanced Parallel Ports
■ Performance Issues
■ Timing
■ Data Compression
■ Bus Mastering
■ Plug-and-Play
■ Benefits
■ Requirements
■ Operation
■ GP-IB Interface


When it comes to connecting peripherals to your PC, the parallel port appears to be the answer to your
prayers. At one time, the chief value of the parallel port was its being a foolproof connection for
printers. You could plug in a cable and everything would work. No switches to worry about, no mode
commands, no breaking out the break-out box to sort through signals with names that sound
suspiciously similar to demons from Middle Earth.
The parallel connection proved too intriguing for engineers, however. It was inherently faster than the
only other standard PC port at the time, the RS-232 serial port. By tinkering with the parallel port
design, the engineers first broke the intimate link between the port and your printer, and opened it as a
general purpose high speed connection. Not satisfied with a single standard, they developed several.
In the process, the port lost its trouble free installation.
Now new interconnection standards, designed for utter simplicity so that even the basest fool can
connect them properly, stand to steal the parallel port's role as the highest speed player. While the
industry tries to catch up with the engineers, however, the parallel port will likely remain an important
interconnection standard.
Today the parallel port is not a singular thing. Through the years, a variety of parallel port standards
have developed. Today you face questions about three standard connectors and four operational
standards, not to mention a number of proprietary detours that have appeared along the way. Despite
their innate differences—and often great performance differences—all are termed parallel ports. You may face combinations of any of the three connectors with any of the four operational standards in a
given connection. With just a look, you won't be able to tell the difference between them, or what
standard the connection uses. In fact, you might become aware of the differences only when you start
to scratch your head and wonder why your friends can unload files from their notebook computers ten
times faster than you can.
Beside the traditional parallel port, several other PC interface systems use parallel connections. For
example, the classic SCSI port is a parallel interface. Most of these other parallel designs are tailored to specific hardware. For example, because SCSI finds its most important application in
linking hard disk drives, we've already discussed it in the chapter about disk interfaces (Chapter 9,
"Storage Interfaces"). One other parallel interface system, the General Purpose Interface Bus or GPIB,
has been used among PCs to link a variety of peripherals. Because of its parallel nature—and because
we couldn't think of a better place to put it—we've included a brief discussion of GPIB in this chapter,
too.

IEEE 1284

The defining characteristic of the parallel port design is implicit in its name. The port is parallel
because it conducts its signals through eight separate wires—one for each bit of a byte of data—that
are enclosed together in a single cable. The signal wires literally run in parallel from your PC to their
destination—or at least they did. Better cables twist the physical wires together but keep their signals
straight (and parallel).
In theory, having eight wires means you can move data eight times faster through a parallel
connection than through a single wire. All else being equal, simple math would make this statement
true. Although a number of practical concerns make such extrapolations impossible, throughout its
life, the parallel port has been known for its speed. It beat its original competitor, the RS-232 port, hands down, outrunning the serial port's 115.2 kbit/sec maximum by factors of two to five even in
early PCs. The latest incarnations of parallel technology put the data rate through the parallel
connection to over 100 times faster than the basic serial port rate.
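That speed edge is easy to estimate. Assuming the classic serial framing of ten bits per byte (one start and one stop bit around eight data bits) and the 50 to 150 kilobyte per second range of early parallel ports, a back-of-envelope comparison looks like this:

```python
# Back-of-envelope comparison (assumes 1 start + 8 data + 1 stop bit per
# serial byte; parallel figures are the 50-150 KB/s range of early ports).
SERIAL_BPS = 115_200
serial_bytes_per_sec = SERIAL_BPS // 10  # effective bytes per second

for parallel_bps in (50_000, 150_000):
    ratio = parallel_bps / serial_bytes_per_sec
    print(f"{parallel_bps} B/s is {ratio:.1f}x serial")
```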
In simple installations, for example when used for its original purpose of linking a printer to your PC,
the parallel port is a model of installation elegance. Just plug your printer in, and the odds are it will work flawlessly—or that whatever flaws appear won't have anything to do with the interconnection.
Despite such rave reviews, parallel ports are not trouble free. All parallel ports are not created equal.
A number of different designs have appeared during the brief history of the PC. Although new PCs
usually incorporate the latest, most versatile, and highest speed of these, some manufacturers skimp.
Even when you buy a brand new computer, you may end up with a simple printer port that steps back
to the first generation of PC design.
A suitable place to begin this saga is to sort out this confusion of parallel port designs by tracing its
origins. As it turns out, the history of the parallel port is a long one, older even than the PC, although
the name, and our story, begins with its introduction.

History

Necessity isn't just the mother of invention. It also spawned the parallel port. As with most great
inventions, the parallel port arose with a problem that needed to be solved. When IBM developed its
first PC, its engineers looked for a simplified way to link to a printer, something without the hassles
and manufacturing costs of a serial port. The simple parallel connection, already used in a similar
form by some printers, was an elegant solution. Consequently, IBM's slightly modified version
became standard equipment on the first PCs. Because of its intended purpose, it quickly gained the
"printer port" epithet. Not only were printers easy to attach to a parallel port, they were the only thing
that you could connect to these first ports at the time.
In truth, the contribution of PC makers to the first parallel port was minimal. They added a new
connector that better fit the space available on the PC. The actual port design was already being used
on computer printers at the time. Originally created by printer maker Centronics Data Computer
Corporation and used by printers throughout the 1960s and 70s, the connection was electrically
simple, even elegant. It took little circuitry to add to a printer or PC even in the days when designers
had to use discrete components instead of custom designed circuits. A few old-timers still cling to
history and call the parallel port a Centronics port.
The PC parallel port is not identical to the exact Centronics design, however. In adapting it to the PC,
IBM substituted a smaller connector. The large jack used by the Centronics design had 36 pins and
was too large to put where IBM wanted it—sharing a card retaining bracket with a video connector on
the PC's first Monochrome Display Adapter. In addition, IBM added two new signals to give the PC
more control over the printer and adjusted the timing of the signals traveling through the interface. All
that said, most Centronics-style printers worked just fine with the original PC.
At the time, the PC parallel port had few higher aspirations. It did its job, and did it well, moving data
one direction—from PC to printer—at rates from 50 to 150 kilobytes per second. It, or subtle
variations of it, became ubiquitous if not universal. Any printer worth connecting to a PC used a
parallel port (or so it seemed).
In 1987, however, IBM's engineers pushed the parallel port in a new direction. The motivation for the
change came from an odd direction. The company decided to adopt the 3.5-inch floppy disk drives for
its new line of PS/2 computers at a time when all the world's PC data was mired on 5.25-inch
diskettes. The new computers made no provision for building in the bigger drives. Instead, IBM
believed that the entire world would instantly switch over to the new disk format. People would need
to transfer their data once and only once to the new disk format. To make the transfer possible, the
company released its Data Migration Facility, a fancy name for a cable and a couple disks. You used
the cable to connect your old PC to your new PS/2, and software on the disks to move files through
the parallel port from the old machine and disks to the new ones.
Implicit in this design is the ability of the PS/2 parallel port to receive data as well as send it out, as to
a printer. The engineers tinkered with the port design and made it work both ways, creating a
bi-directional parallel port. Because of the design's intimate connection with the PS/2, it is sometimes
termed the PS/2 parallel port.
The Data Migration Facility proved to be an inspirational idea despite its singular shortcoming of
working in only one direction. As notebook computers became popular, they also needed a convenient
means to move files between machines. The makers of file transfer programs like Brooklyn Bridge
and LapLink knew a good connection when they saw it. By tinkering with parallel port signals, they
discovered that they could make any parallel port operate in both directions and move data to and from PCs.
The key to making bi-directional transfers on the old-fashioned one way ports was to redefine signals.
They redirected four of the signals in the parallel connector that originally had been designed to
convey status information back from the printer to your PC. These signals already went in the correct
direction. All that the software mavens did was to take direct control of the port and monitor the
signals under their new definitions. Of course, four signals can't make a byte. They were limited to
shifting four bits through the port in the backward direction. Because four bits make a nibble, the new
parallel port operating mode soon earned the name nibble mode.
Four-bits-at-a-time had greater implications than just a new name. Half as many bits also means half
the speed. Nibble mode operates at about half the normal parallel port rate—still faster than
single-line serial ports but not full parallel speed.
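The mechanics are easy to picture: each byte crosses the four reversed status lines in two 4-bit halves, which the receiving PC reassembles. A sketch (the low-nibble-first order shown here is our assumption for illustration, not drawn from the standard):

```python
# A byte travels as two 4-bit nibbles over the reversed status lines.
def split_nibbles(byte):
    return byte & 0x0F, (byte >> 4) & 0x0F  # (low, high); order is illustrative

def join_nibbles(low, high):
    return (high << 4) | low

low, high = split_nibbles(0xA7)
print(hex(join_nibbles(low, high)))  # 0xa7
```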
If both sides of a parallel connection had bi-directional ports, however, data transfers ran at full speed
both ways. Unfortunately, as manufacturers began adapting higher performance peripherals to use the
parallel port, what they once thought was fast performance became agonizingly slow. Although the
bi-directional parallel port more than met the modest data transfer needs of printers and floppy disk
drives, it lagged behind other means of connecting hard disks and networks to PCs.
Engineers at network adapter maker Xircom, Incorporated decided to do something about parallel
performance and banded together with notebook computer maker Zenith Data Systems to find a better
way. Along the way, they added Intel Corporation, and formed a triumvirate called Enhanced Parallel
Port Partnership. They explored two ways of increasing the data throughput of a parallel port. They
streamlined the logical interface so that your PC would need less overhead to move each byte through
the port. In addition, they tightly defined the timing of the signals passing through the port,
minimizing wasted time and helping assure against timing errors. They called the result of their efforts
the Enhanced Parallel Port.
On August 10, 1991, the organization released its first description of what they thought the next
generation of parallel port should be and do. They continued to work on a specification until March
1992, when they submitted Release 1.7 to the Institute of Electrical and Electronic Engineers (the
IEEE) to be considered as an industry standard.
Although the EPP version of the parallel port can increase its performance by nearly tenfold, that
wasn't enough to please everybody. The speed potential made some engineers see the old parallel port
as an alternative to more complex expansion buses like the SCSI system. With this idea in mind,
Hewlett-Packard joined with Microsoft to make the parallel port into a universal expansion standard
called the Extended Capabilities Port (or ECP). In November 1992, the two companies released the
first version of the ECP specification aimed at computers that use the ISA expansion bus. This first
implementation adds two new transfer modes to the EPP design—a fast two way communication
mode between a PC and its peripherals, and another two way mode with performance further
enhanced by simple integral data compression—and defines a complete software control system.
The heart of the ECP innovation is a protocol for exchanging data across a high speed parallel
connection. The devices at the two ends of each ECP transfer negotiate the speed and mode of data
movement. Your PC can query any ECP device to determine its capabilities. For example, your PC
can determine what language your printer speaks and set up the proper printer driver accordingly. In
addition, ECP devices tell your PC the speed at which they can accept transmissions and the format of
the data they understand. To assure the quality of all transmissions, the ECP specification includes
error detection and device handshaking. It also allows the use of data compression to further speed transfers.
On March 30, 1994, the IEEE Standards Board approved its parallel port standard, IEEE-1284-1994.
The standard included all of the basic modes and parallel port designs including both ECP and EPP. It
was submitted to the American National Standards Institute and approved as a standard on September
2, 1994.
The IEEE 1284 standard marks a watershed in parallel port design and nomenclature. The standard
defines (or redefines) all aspects of the parallel connection, from the software interface in your PC to
the control electronics in your printer. It divides the world of parallel ports in two: IEEE
1284-compatible devices, which are those that will work with the new interface, which in turn
includes just about every parallel port and device ever made; and IEEE 1284-compliant devices, those
which understand and use the new standard. This distinction is essentially between pre- and
post-standardization ports. You can consider IEEE 1284-compatible ports to be "old technology" and
IEEE 1284-compliant ports to be "new technology."
Before IEEE 1284, parallel ports could be divided into four types: Standard Parallel Ports,
Bi-directional Parallel Ports (also known as PS/2 parallel ports), Enhanced Parallel Ports, and
Extended Capabilities Ports. The IEEE specification redefines the differences in ports, classifying
them by the transfer mode they use. Although the terms are not exactly the same, you can consider a
Standard Parallel Port one that is able to use only nibble-mode transfers. A PS/2 or Bi-directional
Parallel Port from the old days is one that can also make use of byte-mode transfers. EPP and ECP
ports are those that use EPP and ECP modes, as described by the IEEE 1284 specification.
EPP and ECP remain standards separate from IEEE 1284, although they have been revised to depend
on it. Both EPP and ECP rely on their respective modes as defined in the IEEE specification for their
physical connections and electrical signaling. In other words, IEEE 1284 describes the physical and
electrical characteristics of a variety of parallel ports. The other standards describe how the ports
operate and link to your applications.

Connectors

The best place to begin any discussion of the function and operation of the parallel port is the
connector. After all, the connector is what puts the port to work. It is the physical manifestation of the
parallel port, the one part of the interface and standard you can actually touch or hold in your hand. It
is the only part of the interface that most people will ever have to deal with. Once you know the ins
and outs of parallel connectors, you'll be able to plug in the vast majority of PC printers and the
myriad other things that now suck signals from what was once the printer's port.
Unfortunately, as with the variety of operating modes, the parallel port connector itself is not a single
thing. Parallel port connectors come in enough different and incompatible designs to make matters
interesting, enough subtle wiring variations to make troubleshooting frustrating, and enough need for
explanation that this book can find its way into a fourth edition. Although the long-range prognosis
for those suffering from connector confusion is good—eventually parallel ports will gravitate to a
single connector design—in the short-term matters are destined only to get more confusing.
Before the IEEE-1284 standard was introduced, equipment designers used either of two connectors
for parallel ports. On the back of your PC you would find a female 25-pin D-shell connector, IBM's choice to fit a parallel port within the confines allowed on the MDA video adapter. On your printer
you would find a female 36-pin ribbon connector patterned after the original Centronics design. In its
effort to bring standardization to parallel ports, the IEEE used these two designs as a foundation,
formalizing them into the standard and giving them official names, the 1284-A and 1284-B
connectors. The standard also introduced a new, miniaturized connector, 1284-C, similar to the old
Centronics ribbon connector but about half the size.

The A connector

The familiar parallel port on the back of your PC was IBM's pragmatic innovation. At the time of the
design of the original PC, many computers used a 37-pin D-shell connector for their printer ports that
mated with Centronics-style printers. This connector was simply too long (about four inches) for
where IBM wanted to put it. Slicing off 12 pins would make a D-shell connector fit, and it still could
provide sufficient pins for all the essential functions required in a parallel port as long as some of the
ground return signals were doubled (and tripled) up. Moreover, the 25-pin D-shell was likely in stock
on the shelves wherever IBM prototyped the PC because the mating connector had long been used in
serial ports. IBM chose the opposite gender (a female receptacle on the PC) to distinguish it from a
serial connection.
To retain compatibility with the original IBM design, other computer makers also adopted this
connector. By the time the IEEE got around to standardizing the parallel port, the 25-pin D-shell was
the standard. The IEEE adopted it as its 1284-A connector. Figure 19.1 shows a conceptual view of
the A-connector.
Figure 19.1 The IEEE-1284 A connector, a female 25-pin D-shell jack.

The individual contacts appear as socket holes, spaced at intervals of one-tenth inch, center-to-center.
On the jack as it appears in the illustration, pin one is on the upper right, and contacts are
consecutively numbered right to left. Pin 14 appears at the far right on the lower row, and again the
contacts are sequentially numbered right to left. (Because you would wire this connector from the
rear, the contact numbers would appear there in the more familiar left to right sequence.) The socket holes
are encased in plastic to hold them in place, and the plastic filler itself is completely surrounded by a
metal shell that extends back to the body of the connector. The entire connector measures about two
inches wide and half an inch tall when aligned as shown in the illustration.
The studs at either side of the connector are 4-40 jack screws, which are essentially extension screws.
They fit into the holes in the connector and attach it to a chassis. Instead of slotted heads, they provide
another screw socket to which you can securely attach screws from the mating connector.
As a receptacle or jack for mounting on a panel such as the back of your PC, this connector is
available under a number of different part numbers, depending on the manufacturer. Some of these
include AMP 747846-4, Molex 82009, and 3M Company 8325-60XX and 89925-X00X. Mating
plugs are available as AMP 747948-1, Molex 71527, and 3M 8225-X0XX.
Of the 25 contacts on this parallel port connector, 17 are assigned individual signals for data transfer
and control. The remaining eight serve as ground returns. Table 19.1 lists the functions assigned to
each of these signals as implemented in the original IBM PC parallel port and most compatible
computers until the IEEE 1284 standard was adopted. In its compatibility mode, the IEEE standard uses essentially these same signal assignments.

Table 19.1. The Original IBM PC Parallel Port Pin-Out

Pin Function
1 Strobe
2 Data bit 0
3 Data bit 1
4 Data bit 2
5 Data bit 3
6 Data bit 4
7 Data bit 5
8 Data bit 6
9 Data bit 7
10 Acknowledge
11 Busy
12 Paper end (Out of paper)
13 Select
14 Auto feed
15 Error
16 Initialize printer
17 Select input
18 Strobe ground
19 Data 1 and 2 ground
20 Data 3 and 4 ground
21 Data 5 and 6 ground
22 Data 7 and 8 ground
23 Busy and Fault ground
24 Paper out, Select, and Acknowledge ground
25 AutoFeed, Select input, and Initialize ground
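
For quick reference, the assignments above reduce to a simple lookup table (a transcription of Table 19.1 with the ground returns on pins 18 through 25 omitted):

```python
# Table 19.1 as a lookup table; pins 18-25 (ground returns) omitted.
PARALLEL_PINOUT = {
    1: "Strobe", 10: "Acknowledge", 11: "Busy", 12: "Paper end",
    13: "Select", 14: "Auto feed", 15: "Error",
    16: "Initialize printer", 17: "Select input",
}
PARALLEL_PINOUT.update({pin: f"Data bit {pin - 2}" for pin in range(2, 10)})

print(PARALLEL_PINOUT[9])  # Data bit 7
```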

Under the IEEE 1284 specification, the definition of each signal on each pin is dependent on the
operating mode of the port. Only the definitions change; the physical wiring inside your PC and inside
cables does not change—if it did, shifting modes would be far from trivial. The altered definitions
change the protocol, the signal handshaking that mediates each transfer.
A single physical connector on the back of your PC can operate in any of these five modes, and the
signal definitions and their operation will change accordingly. Table 19.2 lists these five modes and their signal assignments.

Table 19.2. IEEE 1284-A Connector Signal Assignments in All Modes

Pin Compatibility mode Nibble mode Byte mode EPP mode ECP mode
1 nStrobe HostClk HostClk nWrite HostClk
2 Data 1 Data 1 Data 1 AD1 Data 1
3 Data 2 Data 2 Data 2 AD2 Data 2
4 Data 3 Data 3 Data 3 AD3 Data 3
5 Data 4 Data 4 Data 4 AD4 Data 4
6 Data 5 Data 5 Data 5 AD5 Data 5
7 Data 6 Data 6 Data 6 AD6 Data 6
8 Data 7 Data 7 Data 7 AD7 Data 7
9 Data 8 Data 8 Data 8 AD8 Data 8
10 nAck PtrClk PtrClk Intr PeriphClk
11 Busy PtrBusy PtrBusy nWait PeriphAck
12 PError AckDataReq AckDataReq User defined 1 nAckReverse
13 Select Xflag Xflag User defined 3 Xflag
14 nAutoFd HostBusy HostBusy nDStrb HostAck
15 nFault nDataAvail nDataAvail User defined 2 nPeriphRequest
16 nInit nInit nInit nInit nReverseRequest
17 nSelectIn 1284 Active 1284 Active nAStrb 1284 Active
18 Pin 1 (nStrobe) ground return
19 Pins 2 and 3 (Data 1 and 2) ground return
20 Pins 4 and 5 (Data 3 and 4) ground return
21 Pins 6 and 7 (Data 5 and 6) ground return
22 Pins 8 and 9 (Data 7 and 8) ground return
23 Pins 11 and 15 ground return
24 Pins 10, 12, and 13 ground return
25 Pins 14, 16, and 17 ground return

Along with standardized signal assignments, IEEE 1284 also gives us a standard nomenclature for
describing the signals. In Table 19.2 and all following tables that refer to the standard, signal names prefaced
with a lower-case "n" indicate the signal goes negative when active—that is, the absence of a voltage
means the signal is present.
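In code terms, the active-low convention reads like this (a schematic illustration only, not part of any driver interface):

```python
# IEEE 1284 naming: a leading "n" marks an active-low signal,
# asserted when the line voltage is LOW rather than high.
def is_asserted(signal_name, line_level):
    """line_level: 1 for high voltage, 0 for low voltage."""
    if signal_name.startswith("n"):
        return line_level == 0  # active-low
    return line_level == 1      # active-high

print(is_asserted("nStrobe", 0))  # True
print(is_asserted("Busy", 1))     # True
```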
Mode changes are negotiated between your PC and the printer or other peripheral connected to the
parallel port. Consequently, both ends of the connection switch modes together so the signal
assignments remain consistent at both ends of the connection. For example, if you connect an older
printer that only understands Compatibility Mode, your PC cannot negotiate any other operating mode
with the printer. It will not activate its EPP or ECP mode, so your printer will never get signals it
cannot understand. This negotiation of the mode assures backward compatibility among parallel
devices.
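The fallback logic can be sketched abstractly. In reality, IEEE 1284 negotiation is a signal-level handshake rather than a list comparison, so treat this only as a picture of the outcome:

```python
# Host and peripheral settle on the best mode both support;
# every device handles the original compatibility mode.
MODES_BEST_FIRST = ["ECP", "EPP", "byte", "nibble", "compatibility"]

def negotiate(host_modes, peripheral_modes):
    for mode in MODES_BEST_FIRST:
        if mode in host_modes and mode in peripheral_modes:
            return mode
    return "compatibility"

# An older printer that only understands compatibility mode never
# sees EPP or ECP signals, whatever the host is capable of.
print(negotiate({"ECP", "EPP", "compatibility"}, {"compatibility"}))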

The B Connector

The parallel input on the back of your printer is the direct heir of the original Centronics design, one
that has been in service for more than two decades. Figure 19.2 offers a conceptual view of this
connector.
Figure 19.2 The IEEE-1284 B connector, a 36-pin ribbon jack.

At one time this connector was called an "Amphenol" connector after the name of the manufacturer of
the original connector used on the first ports, an Amphenol 57-40360. Amphenol used the trade name
"Blue Ribbon" for its series of connectors that included this one, hence the ribbon connector name.
Currently this style of connector is available from several makers, each of which uses its own part
number. In addition to the Amphenol part, some of these include AMP 555119-1, Molex 71522, and
3M Company 3367-300X and 3448-62. The mating cable plug is available as AMP 554950-1 or
Molex 71522.
The individual contacts in the 36-pin receptacle take the form of fingers or ribbons of metal. In two
18-contact rows they line the inside of a rectangular opening that accepts a matching projection on the
cable connector. The overall connector measures about 2.75 inches long and about 0.66 inch wide
from edge to edge. The individual contacts are spaced at 0.085 inch center-to-center. On the printer jack
as it appears in the illustration, pin one is on the upper right, and the contacts are consecutively numbered
right to left. Pin 19 appears at the far right on the bottom row, and again the contacts are sequentially
numbered right to left. (In wiring this connector, you would work from the rear, and the numbering of
the contacts would rise in the more familiar left-to-right order.)
The assignment of signals to the individual pins of this connector has gone through three stages. The
first standard was set by Centronics for its printers. In 1981, IBM altered this design somewhat by
redefining several of the connections. Finally, in 1994, the IEEE published its standard assignments
which, like those of the A-connector, vary with operating mode.
The Centronics design serves as the foundation for all others. With variations, it was used by printers
through the early years of the PC. Table 19.3 shows its signal assignments. This basic
arrangement of signals has been carried through, with modification, to the IEEE 1284 standard. As far
as modern printers go, however, this original Centronics design can be considered obsolete. Those
printers not following the IEEE standard invariably use the IBM layout.


Table 19.3. Centronics Parallel Port Signal Assignments

Pin Function
1 Strobe
2 Data bit 0
3 Data bit 1
4 Data bit 2
5 Data bit 3
6 Data bit 4
7 Data bit 5
8 Data bit 6
9 Data bit 7
10 Acknowledge
11 Busy
12 Paper end (Out of paper)
13 Select
14 Signal ground
15 External oscillator
16 Signal Ground
17 Chassis ground
18 +5 VDC
19 Strobe ground
20 Data 0 ground
21 Data 1 ground
22 Data 2 ground
23 Data 3 ground
24 Data 4 ground
25 Data 5 ground
26 Data 6 ground
27 Data 7 ground
28 Acknowledge ground
29 Busy ground
30 Input prime ground
31 Input prime


32 Fault
33 Light detect
34 Line count
35 Line count return (isolated from ground)
36 Reserved

The Centronics layout includes some unique signals not found on later designs. The Line Count (pins
34 and 35) connections provide an isolated contact closure each time the printer advances its paper by
one line. The Light Detect signal (pin 33) provides an indication whether the lamp inside the printer
for detecting the presence of paper is functioning. The External Oscillator signal (pin 15) provides a
clock signal to external devices, one generally in the range of 100 KHz to 200 KHz. The Input Prime
signal (pin 31) serves the same function as the later Initialize signal. It resets the printer, flushing its
internal buffer.
The IBM design eliminates these unique signals (though it essentially only renames Input Prime as
Initialize) and adds two new signals, Auto feed and Select input, discussed in the following
"Operation" section. This layout remains current as IEEE 1284 compatibility mode on the 1284-B
connector. Its signal assignments are listed in Table 19.4.

Table 19.4. IBM Parallel Printer Port Signal Assignments

Pin Function
1 Strobe
2 Data bit 0
3 Data bit 1
4 Data bit 2
5 Data bit 3
6 Data bit 4
7 Data bit 5
8 Data bit 6
9 Data bit 7
10 Acknowledge
11 Busy
12 Paper end (Out of paper)
13 Select
14 Auto feed
15 No connection
16 Ground
17 No connection


18 No connection
19 Strobe ground
20 Data 0 ground
21 Data 1 ground
22 Data 2 ground
23 Data 3 ground
24 Data 4 ground
25 Data 5 ground
26 Data 6 ground
27 Data 7 ground
28 Paper end, Select, and Acknowledge ground
29 Busy and Fault ground
30 Auto feed, Select in, and Initialize ground
31 Initialize printer
32 Error
33 No connection
34 No connection
35 No connection
36 Select input

As with the A-connector, the IEEE 1284 signal definitions on the B-connector change with the
operating mode of the parallel port. The signal assignments for each of the five IEEE operating modes
are listed in Table 19.5.

Table 19.5. IEEE 1284-B Connector Signal Assignments in All Modes

Pin Compatibility mode Nibble mode Byte mode EPP mode ECP mode
1 nStrobe HostClk HostClk nWrite HostClk
2 Data 1 Data 1 Data 1 AD1 Data 1
3 Data 2 Data 2 Data 2 AD2 Data 2
4 Data 3 Data 3 Data 3 AD3 Data 3
5 Data 4 Data 4 Data 4 AD4 Data 4
6 Data 5 Data 5 Data 5 AD5 Data 5
7 Data 6 Data 6 Data 6 AD6 Data 6
8 Data 7 Data 7 Data 7 AD7 Data 7
9 Data 8 Data 8 Data 8 AD8 Data 8


10 nAck PtrClk PtrClk Intr PeriphClk
11 Busy PtrBusy PtrBusy nWait PeriphAck
12 PError AckDataReq AckDataReq User defined 1 nAckReverse
13 Select Xflag Xflag User defined 3 Xflag
14 nAutoFd HostBusy HostBusy nDStrb HostAck
15 Not defined
16 Logic ground
17 Chassis ground
18 Peripheral logic high
19 Ground return for pin 1 (nStrobe)
20 Ground return for pin 2 (Data 1)
21 Ground return for pin 3 (Data 2)
22 Ground return for pin 4 (Data 3)
23 Ground return for pin 5 (Data 4)
24 Ground return for pin 6 (Data 5)
25 Ground return for pin 7 (Data 6)
26 Ground return for pin 8 (Data 7)
27 Ground return for pin 9 (Data 8)
28 Ground return for pins 10, 12, and 13 (nAck, PError, and Select)
29 Ground return for pins 11 and 32 (Busy and nFault)
30 Ground return for pins 14, 31, and 36 (nAutoFd, nSelectIn, and nInit)
31 nInit nInit nInit nInit nReverseRequest
32 nFault nDataAvail nDataAvail User Defined 2 nPeriphRequest
33 Not defined
34 Not defined


35 Not defined
36 nSelectIn 1284 Active 1284 Active nAStrb 1284 Active

Again, the port modes and the associated signal assignments are not fixed in hardware but change
dynamically as your PC uses the connection. Although your PC acts as host and decides which mode
to use, it can only negotiate those that your printer or other parallel device understands. Your printer
(or other device) determines which of these five modes could be used, while your PC and its applications
pick which of the available modes to use for transferring data.

The C Connector

Given a chance to start over with a clean slate and no installed base, engineers would hardly come up
with the confusion of two different connectors with an assortment of different, sometimes compatible
operating modes. The IEEE saw the creation of the 1284 standard as such an opportunity, one which
they were happy to exploit. To eliminate the confusion of two connectors and the intrinsic need for
adapters to move between them, they took the logical step: they created a third connector, IEEE
1284-C.
All devices compliant with IEEE 1284 Level 2 must use this connector. That requirement is the
IEEE's way of saying, "Let's get rid of all these old, confusing parallel ports with their strange timings
and limited speed and get on with something new for the next generation." Once the entire world
moves to IEEE 1284 Level 2, you'll have no need of compatibility, cable adapters, and other such
nonsense. In the meantime, as manufacturers gradually adopt the C-connector for their products, you'll
still need adapters but in even greater variety.
All that said, the C-connector has much to recommend it. It easily solves the original IBM problem of
no space. Although it retains all the signals of the B-connector, the C-connector is miniaturized, about
half the size of the B-connector. As a PC-mounted receptacle, it measures about 1.75 inches long by
0.375 inch wide. It is shown in a conceptual view in Figure 19.3.
Figure 19.3 Conceptual view of the 1284-C parallel port connector.

The actual contact area of the C-connector is much like that of the B-connector with contact fingers
arranged inside a rectangular opening that accepts a matching projection on the mating plug. The
spacing between the individual contacts is reduced, however, to 0.05 inch, center-to-center. This
measurement corresponds to those commonly used on modern printed circuit boards.
The C-connector provides a positive latch using clips that are part of the shell of the plug. The clips
engage latches on either side of the contact area, as shown in the figure. Squeezing the side of the plug
spreads the clips and releases the latch.
The female receptacle (as shown) is available from a number of manufacturers. Some of these include
AMP 2-175925-5, Harting 60-11-036-512, Molex 52311-3611, and 3M 10236-52A2VC. Part
numbers of the mating plug include AMP 2-175677-5, Harting 60-13-036-5200, Molex 52316-3611,
and 3M 10136-6000EC.
Every signal on the C-connector gets its own pin and all pins are defined. As with the other
connectors, the signal assignments depend on the mode in which the IEEE 1284 port is operating.


Table 19.6 lists the signal assignments for the 1284-C connector in each of the five available modes.

Table 19.6. IEEE 1284-C Connector Signal Assignments in All Modes

Pin Compatibility mode Nibble mode Byte mode EPP mode ECP mode
1 Busy PtrBusy PtrBusy nWait PeriphAck
2 Select Xflag Xflag User defined 3 Xflag
3 nAck PtrClk PtrClk Intr PeriphClk
4 nFault nDataAvail nDataAvail User Defined 2 nPeriphRequest
5 PError AckDataReq AckDataReq User defined 1 nAckReverse
6 Data 1 Data 1 Data 1 AD1 Data 1
7 Data 2 Data 2 Data 2 AD2 Data 2
8 Data 3 Data 3 Data 3 AD3 Data 3
9 Data 4 Data 4 Data 4 AD4 Data 4
10 Data 5 Data 5 Data 5 AD5 Data 5
11 Data 6 Data 6 Data 6 AD6 Data 6
12 Data 7 Data 7 Data 7 AD7 Data 7
13 Data 8 Data 8 Data 8 AD8 Data 8
14 nInit nInit nInit nInit nReverseRequest
15 nStrobe HostClk HostClk nWrite HostClk
16 nSelectIn 1284 Active 1284 Active nAStrb 1284 Active
17 nAutoFd HostBusy HostBusy nDStrb HostAck
18 Host logic high
19 Ground return for pin 1 (Busy)
20 Ground return for pin 2 (Select)
21 Ground return for pin 3 (nAck)
22 Ground return for pin 4 (nFault)
23 Ground return for pin 5 (PError)
24 Ground return for pin 6 (Data 1)
25 Ground return for pin 7 (Data 2)
26 Ground return for pin 8 (Data 3)


27 Ground return for pin 9 (Data 4)
28 Ground return for pin 10 (Data 5)
29 Ground return for pin 11 (Data 6)
30 Ground return for pin 12 (Data 7)
31 Ground return for pin 13 (Data 8)
32 Ground return for pin 14 (nInit)
33 Ground return for pin 15 (nStrobe)
34 Ground return for pin 16 (nSelectIn)
35 Ground return for pin 17 (nAutoFd)
36 Peripheral logic high

Adapters

The standard printer cable for PCs is an adapter cable. It rearranges the signals of the A-connector to
the scheme of the B-connector. Ever since the introduction of the first PC, you needed this sort of
cable just to make your printer work. Over the years they have become plentiful and cheap.
Unfortunately, as cables get cheaper and sources become more generic and obscure, quality is apt to
slip. Printer cables provide an excellent opportunity for allowing quality to take a big slide. If you
group all the grounds together as a single common line, you're left with only 18 distinct signals on a
printer cable. Because some of the grounds naturally group together, this approach might seem
feasible, particularly since you can save the price of a 25-conductor cable. In fact, IBM took this
approach with its first printer cable. Low cost printer cables still retain this design. The wiring of this
adapter cable is given in Table 19.7.

Table 19.7. Printer Cable, 18-Wire Implementation

PC end 25-pin connector Function Printer end 36-pin connector


1 Strobe 1
2 Data bit 0 2


3 Data bit 1 3
4 Data bit 2 4
5 Data bit 3 5
6 Data bit 4 6
7 Data bit 5 7
8 Data bit 6 8
9 Data bit 7 9
10 Acknowledge 10
11 Busy 11
12 Paper end (Out of paper) 12
13 Select 13
14 Auto feed 14
15 Error 32
16 Initialize printer 31
17 Select input 36
18 Ground 19-30,33
19 Ground 19-30,33
20 Ground 19-30,33
21 Ground 19-30,33
22 Ground 19-30,33
23 Ground 19-30,33
24 Ground 19-30,33
25 Ground 19-30,33

Some PC and printer manufacturers did not exploit all the control signals that were part of the basic
parallel port design. In fact, many early printers would not function properly if they received these
control signals. Many of these printers (and some early PCs) required proprietary adapter cables to
make them work.
A modern printer cable contains a full 25 connections, with the ground signals divided among separate
pins. For example, OS/2, unlike DOS, requires the use of all 25 pins in the IBM parallel printer
connection. A generic printer cable that makes only 18 connections may not work with OS/2. If your
printer works properly with DOS but not with OS/2, the cable is the first place to suspect
a problem.
Using all 25 wires is the preferred and correct wiring for a classic parallel printer adapter cable. If you
buy or make a cable and plan to use it with classic parallel connections, it should connect all 25 leads
at both ends. The IEEE recognizes this cable layout to adapt 1284-A to 1284-B connectors. Table 19.8
shows the wiring of this adapter. (The different nomenclature given for names of signal functions
reflects the official IEEE usage. We've modified a few of the official IEEE signal designations for
clarity, particularly those of the ground return lines.)


Table 19.8. 25-Wire Parallel Printer Adapter (IEEE 1284-A to 1284-B)

Host end A connector Function Peripheral end B connector


1 nStrobe 1
2 Data bit 1 2
3 Data bit 2 3
4 Data bit 3 4
5 Data bit 4 5
6 Data bit 5 6
7 Data bit 6 7
8 Data bit 7 8
9 Data bit 8 9
10 nAck 10
11 Busy 11
12 PError 12
13 Select 13
14 nAutoFd 14
15 nFault 32
16 nInit 31
17 nSelectIn 36
18 Pin 1 (nStrobe) ground return 19
19 Pins 2 and 3 (Data 1 and 2) ground return 20 and 21
20 Pins 4 and 5 (Data 3 and 4) ground return 22 and 23
21 Pins 6 and 7 (Data 5 and 6) ground return 24 and 25
22 Pins 8 and 9 (Data 7 and 8) ground return 26 and 27
23 Pins 11 and 15 ground return 29
24 Pins 10, 12, and 13 ground return 28
25 Pins 14, 16, and 17 ground return 30
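The 25-wire adapter lends itself to a simple lookup table, with each A-connector pin mapped to the B-connector pin or pins it feeds. The Python sketch below is just a transcription of Table 19.8 for illustration; the variable names are invented.

```python
# IEEE 1284-A (25-pin host end) to 1284-B (36-pin peripheral end) wiring,
# transcribed from Table 19.8. Ground returns fan out to multiple B pins.
A_TO_B = {
    1: [1],                                  # nStrobe
    **{pin: [pin] for pin in range(2, 15)},  # Data 1-8, nAck, Busy, PError, Select, nAutoFd
    15: [32],                                # nFault
    16: [31],                                # nInit
    17: [36],                                # nSelectIn
    18: [19],                                # ground return for nStrobe
    19: [20, 21],                            # ground returns for Data 1 and 2
    20: [22, 23],                            # ground returns for Data 3 and 4
    21: [24, 25],                            # ground returns for Data 5 and 6
    22: [26, 27],                            # ground returns for Data 7 and 8
    23: [29],                                # ground return for Busy and nFault
    24: [28],                                # ground return for nAck, PError, and Select
    25: [30],                                # ground return for nAutoFd, nSelectIn, and nInit
}

# B-connector pins this cable leaves unwired.
unwired = sorted(set(range(1, 37)) - {b for pins in A_TO_B.values() for b in pins})
print(unwired)  # [15, 16, 17, 18, 33, 34, 35]
```

Most of the unwired pins correspond to the "No connection" entries in the IBM layout of Table 19.4.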

As new peripherals with the 1284-C connector become available, you'll need to plug them into your
PC. To attach your existing PC to a printer or other device using the C-connector, you'll need an
adapter cable to convert the A-connector layout to the C-connector design. Table 19.9 shows the
proper wiring for such an adapter as adopted in the IEEE 1284 specification (again with a
modification in signal nomenclature from the official standard for clarity).

Table 19.9. Parallel Interface Adapter, 1284-A to 1284-C Connectors


Host end A connector Function Peripheral end C connector


1 nStrobe 15
2 Data bit 1 6
3 Data bit 2 7
4 Data bit 3 8
5 Data bit 4 9
6 Data bit 5 10
7 Data bit 6 11
8 Data bit 7 12
9 Data bit 8 13
10 nAck 3
11 Busy 1
12 PError 5
13 Select 2
14 nAutoFd 17
15 nFault 4
16 nInit 14
17 nSelectIn 16
18 Pin 1 (nStrobe) ground return 33
19 Pins 2 and 3 (Data 1 and 2) ground return 24 and 25
20 Pins 4 and 5 (Data 3 and 4) ground return 26 and 27
21 Pins 6 and 7 (Data 5 and 6) ground return 28 and 29
22 Pins 8 and 9 (Data 7 and 8) ground return 30 and 31
23 Pins 11 and 15 ground return 19 and 22
24 Pins 10, 12, and 13 ground return 20, 21, and 23
25 Pins 14, 16, and 17 ground return 32, 34, and 35

If your next PC or parallel adapter uses the C-connector and you plan to stick with your old printer,
you'll need another variety of adapter, one that translates the C-connector layout to that of the
B-connector. Table 19.10 lists the wiring required in such an adapter.

Table 19.10. Parallel Interface Adapter, 1284-C to 1284-B

Host end C connector Function Peripheral end B connector


1 Busy 11


2 Select 13
3 nAck 10
4 nFault 32
5 PError 12
6 Data 1 2
7 Data 2 3
8 Data 3 4
9 Data 4 5
10 Data 5 6
11 Data 6 7
12 Data 7 8
13 Data 8 9
14 nInit 31
15 nStrobe 1
16 nSelectIn 36
17 nAutoFd 14
18 Host logic high No connection
19 Ground return for pin 1 (Busy) 29
20 Ground return for pin 2 (Select) 28
21 Ground return for pin 3 (nAck) 28
22 Ground return for pin 4 (nFault) 29
23 Ground return for pin 5 (PError) 28
24 Ground return for pin 6 (Data 1) 20
25 Ground return for pin 7 (Data 2) 21
26 Ground return for pin 8 (Data 3) 22
27 Ground return for pin 9 (Data 4) 23
28 Ground return for pin 10 (Data 5) 24
29 Ground return for pin 11 (Data 6) 25
30 Ground return for pin 12 (Data 7) 26
31 Ground return for pin 13 (Data 8) 27
32 Ground return for pin 14 (nInit) 30
33 Ground return for pin 15 (nStrobe) 19
34 Ground return for pin 16 (nSelectIn) 30
35 Ground return for pin 17 (nAutoFd) 30


36 Peripheral logic high 18


The following pins on the 1284-B connector are not connected: 15, 16, 17, 33, 34, and 35. Connector
shields are connected at each end.

Note that although both the B- and C-connectors have 36 pins, they do not carry the same signals.
Several signals share ground connections on the B-connector, while several other pins are not
connected.

Cable

The signals in the parallel port are their own worst enemy. They interact with themselves
and with the other wires in the cable to the detriment of all. The sharp transitions of the digital signals blur.
The farther the signal travels in the cable, the greater the degradation that overcomes it. For this
reason, the maximum recommended length of a printer cable was ten feet. Not that longer cables will
inevitably fail—practical experience often proves otherwise—but some cables in some circumstances
become unreliable when stretched for longer distances.
The lack of a true signaling standard before IEEE 1284 made matters worse. Manufacturers had no
guidelines for delays or transition times, so these values varied among PC, printer, and peripheral
manufacturers. Although the signals might be close enough matches to work through a short cable,
adding more wire could push them beyond the edge. A printer might then misread the signals from a
PC, printing the wrong character or nothing at all.
Traditional printer cables are notoriously variable. As noted in the discussion of adapters,
manufacturers scrimp where they can to produce low cost adapter cables. After all, cables are
commodities and the market is highly competitive. When you pay under $10 for a printer cable that
comes without a brand name, you can never be sure of its electrical quality.
For this reason, extension cables are never recommended for locating your printer more than ten feet
from your PC. Longer distances require alternate strategies—opting for another connection (serial or
network) or getting a printer extension system that alters the signals and provides a controlled cable
environment.
What length you can get away with depends on the cable, your printer, and your PC. Computers and
printers vary in their sensitivity to parallel port anomalies like noise, crosstalk, and digital blurring.
Some combinations of PCs and printers will work with lengthy parallel connections, up to fifty feet
long. Other match-ups may balk when you stretch the connection more than the recommended ten
feet.
The high speed modes of modern parallel ports make them even more finicky. When your parallel
port operates in EPP or ECP modes, cable quality becomes critical even for short runs. Signaling
speed across one of these interfaces can be in the megahertz range. The frequencies far exceed the
reliable limits of even short runs of the dubious low cost printer cables. Consequently, the IEEE 1284
specification precisely details a special cable for high speed operation. Figure 19.4 offers a conceptual
view of the construction of this special parallel data cable.


Figure 19.4 IEEE 1284 cable construction details.

Unlike standard parallel wiring, the data lines in IEEE 1284 cables must be double shielded to prevent
interference from affecting the signals. Each signal wire must be twisted with its ground return. Even
though the various standard connectors do not provide separate pins for each of these grounds, the
ground wires must be present and run the full length of the cable.
The difference between old-fashioned "printer" cables and those that conform to the IEEE 1284
standard is substantial. Although you can plug in a printer with either a printer- or IEEE
1284-compliant cable, devices that exploit the high speed potentials of the EPP or ECP designs may
not operate properly with a non-compliant cable. Often when a printer fails to operate properly,
the cable is at fault. Substituting a truly IEEE 1284-compliant cable can bring a reluctant
connection to life.

Electrical Operation

In each of its five modes, the IEEE 1284 parallel port operates as if it were a completely
different electronic creation. When in compatibility mode, the IEEE 1284 port closely parallels the
operation of the plain vanilla printer port of bygone days. It allows data to travel in one direction only,
from PC to printer. Nibble mode gives your printer (or more likely, another peripheral) a voice, and
allows it to talk back to your PC. In nibble mode, data can move in either of two directions, although
asymmetrically. Information flows faster to your printer than it does on the return trip. Byte mode
makes the journey fully symmetrical.
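The asymmetry of nibble mode comes from how a reverse-channel byte is split: the peripheral returns it four bits at a time over its status lines, and the host reassembles the halves. The sketch below shows only the reassembly arithmetic; the low-nibble-first ordering and the omission of the status-line handshaking are simplifying assumptions, not a full protocol implementation.

```python
def assemble_byte(first_nibble: int, second_nibble: int) -> int:
    """Rebuild one byte from two 4-bit nibble-mode transfers.

    Assumes the peripheral sends the low-order nibble first.
    """
    return ((second_nibble & 0x0F) << 4) | (first_nibble & 0x0F)

# Two transfers of 0x5 then 0xA reassemble into the byte 0xA5.
print(hex(assemble_byte(0x5, 0xA)))  # 0xa5
```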
With the shift to EPP mode, the parallel port becomes a true expansion bus. A new way of linking to
your PC's bus gives it increased bi-directional speed. Many systems can run their parallel ports ten
times faster in EPP mode than in compatibility, nibble, or byte modes. ECP mode takes the final step,
giving control in addition to speed. ECP can do just about anything any other expansion interface
(including SCSI) can do.
Because of these significant differences, the best way to get to know the parallel port is to consider
each mode separately, as if it were an interface unto itself. Our examination will proceed from simple to
complex, which also mirrors the history of the parallel port.
Note that IEEE 1284 deals only with the signals traveling through the connections of the parallel
interface. It establishes the relationship between signals and their timing. It concerns itself neither
with the data actually transferred, nor with the command protocols encoded in the data, nor with the control
system that produces the signals. In other words, IEEE 1284 provides an environment under which
other standards such as EPP and ECP operate. That is, the ECP and EPP modes are not the ECP and EPP
standards, although those modes are meant to be used by parallel ports operating under the respective
standards.

Compatibility Mode

The least common denominator among parallel ports is the classic design that IBM introduced with


the first PC. It was conceived strictly as an interface for the one-way transfer of information. Your PC
sends data to your printer and expects nothing in return. After all, a printer neither stores information
nor creates it on its own.
In conception, this port is like a conveyor that unloads ore from a bulk freighter or rolls coal out of a
mine. The raw material travels in one direction. The conveyor mindlessly pushes out stuff and more
stuff, perhaps creating a dangerously precarious pile, until its operator wakes up and switches it off
before the pile gets much higher than his waist.
If your printer had unlimited speed or an unlimited internal buffer, such a one way design would
work. But like the coal yard, your printer has a limited capacity and may not be able to cart off data as
fast as the interface shoves it out. The printer needs some way of sending a signal to your PC to warn
about a potential data overflow. In electronic terms, the interface needs feedback of some kind—it
needs to get information from the printer that your PC can use to control the data flow.
To provide the necessary feedback for controlling the data flow, the original Centronics port design
and IBM's adaptation of it both included several control signals. These were designed to allow your
PC to monitor how things are going with your printer—whether data is piling up, whether it has
sufficient paper or ribbon, whether the printer is even turned on. Your PC can use this information to
moderate the outflowing gush of data or to post a message warning you that something is wrong with
your printer. In addition, the original parallel port included control signals sent from your PC to the
printer to tell it when the PC wants to transfer data and to tell the printer to reset itself. The IEEE 1284
standard carries all of these functions into compatibility mode.
Strictly speaking, then, even this basic parallel port is not truly a one-way connection, although its
feedback provisions were designed strictly for monitoring rather than data flow. For the first half of its
life, the parallel port kept to this design. Until the adoption of IEEE-1284, this was the design you
could expect for the port on your printer and, almost as likely, those on your PC.
Each signal flowing through the parallel port in compatibility mode has its own function. These
signals include the following.

Data Lines

The eight data lines of the parallel interface convey data in all operating modes. In compatibility
mode, they carry data from the host to the peripheral on connector pins 2 through 9. The
higher-numbered pins carry the more significant bits of the digital code. To send data to the peripheral, the host puts
a pattern of digital voltages on the data lines.
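Since pin 2 carries the least significant data bit and pin 9 the most significant, the pin number equals the bit position plus two. The helper below is a hypothetical sketch of laying one byte out on the data pins, for illustration only.

```python
def data_pin_levels(byte: int) -> dict:
    """Map one byte onto data pins 2 through 9.

    Pin 2 carries data bit 0 (least significant), pin 9 data bit 7.
    """
    return {bit + 2: bool((byte >> bit) & 1) for bit in range(8)}

# The letter 'A' (0x41, binary 01000001) drives pins 2 and 8 high.
high_pins = [pin for pin, high in sorted(data_pin_levels(ord("A")).items()) if high]
print(high_pins)  # [2, 8]
```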

Strobe Line

The presence of signals on the data lines does not, in itself, move information from host to peripheral.
As your PC gets its act together, it may change the pattern of data bits. No hardware can assure that all
eight will always pop to the correct values simultaneously. Moreover, without further instruction your
printer has no way of knowing whether the data lines represent a single character or multiple repetitions


of the same character.


To assure reliable communications, the system requires a means of telling the peripheral that the
pattern on the data lines represents valid information to be transferred. The strobe line does exactly
that. Your PC pulses the strobe line to tell your printer that the bit pattern on the data lines is a single
valid character that the printer should read and accept. The strobe line gives its pulse only after the
signals on the data lines have settled down. Most parallel ports delay the strobe signal by about half a
microsecond to assure that the data signals have settled. The strobe itself lasts for at least half a
microsecond so that your printer can recognize it. (The strobe signal can last up to 500 microseconds.)
The signals on the data lines must maintain a constant value during this period and slightly afterward
so that your printer has a chance to read them.
The strobe signal is negative-going. That is, a positive voltage (+5VDC) stays on the strobe line until
your PC wants to send the actual strobe signal. Your PC then drops the positive voltage to near
zero for the duration of the strobe pulse. The IEEE 1284 specification calls this signal nStrobe.

Busy Line

Sending data to your printer is thus a continuous cycle of setting up the data lines, sending the strobe
signal, and putting new values on the data lines. The parallel port design typically requires about two
microseconds for each turn of this cycle, allowing a perfect parallel port to dump out nearly half a
million characters a second into your hapless printer. (As we will see, the actual maximum throughput
of a parallel port is much lower than this.)
For some printers, coping with that data rate is about as daunting as trying to catch machine gun fire
with your bare hands. Before your printer can accept a second character, its circuitry must do
something with the one it has just received. Typically it will need to move the character into the
printer's internal buffer. Although the character moves at electronic speeds, it does not travel
instantaneously. Your printer needs to be able to tell your PC to wait for the processing of the current
character before sending the next.
The parallel port's busy line gives your printer the needed breathing room. Your printer switches on
the busy signal as soon as it detects the strobe signal and keeps the signal active until it is ready to
accept the next character. The busy signal can last for a fraction of a second (even as short as a
microsecond) or your printer could hold it on indefinitely while it waits for you to correct some error.
No matter how long the busy signal is on, it keeps your PC from sending out more data through the
parallel port. It functions as the basic flow control system.
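This write cycle (set the data lines, wait for busy to clear, pulse the strobe) can be modeled as a toy simulation. Everything below, from the class name to the four-character buffer, is invented for illustration; a real port implements this handshake in hardware through its I/O registers.

```python
class SimulatedPrinter:
    """Toy peripheral that models busy-based flow control."""

    def __init__(self, buffer_size: int = 4):
        self.buffer = []
        self.buffer_size = buffer_size

    @property
    def busy(self) -> bool:
        # Busy stays asserted while the internal buffer is full.
        return len(self.buffer) >= self.buffer_size

    def strobe(self, data: int) -> None:
        # The peripheral latches the data lines on the strobe pulse.
        self.buffer.append(data)

    def drain(self) -> str:
        # Printing empties the buffer, which drops the busy signal.
        text, self.buffer = "".join(map(chr, self.buffer)), []
        return text


def send(host_data: bytes, printer: SimulatedPrinter) -> str:
    printed = []
    for byte in host_data:
        while printer.busy:              # flow control: wait for busy to drop
            printed.append(printer.drain())
        printer.strobe(byte)             # data lines stable, pulse the strobe
    printed.append(printer.drain())      # flush whatever remains
    return "".join(printed)


print(send(b"Hello, world", SimulatedPrinter()))  # Hello, world
```

The point of the model is that the host never overruns the peripheral: it only strobes a new character when busy is deasserted.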

Acknowledge Line

The final part of the flow control system of the parallel port is the acknowledge line. It tells your PC
that everything has gone well with the printing of a character, or its transfer to the internal buffer. In
effect, it is the opposite of the busy signal, telling your PC that the printer is ready rather than
unready. Where the busy line says "Whoa!" the acknowledge line says "Giddyap!" The acknowledge
signal is the opposite in another way; it is negative going where busy is positive going. The IEEE


1284 specification calls this signal nAck.


When your printer sends out the acknowledge signal, it completes the cycle of sending a character.
Typically the acknowledge signal on a conventional parallel port lasts about eight microseconds,
stretching a single character cycle across the port to ten microseconds. (IEEE 1284 specifies the
length of nAck to be between 0.5 and 10 microseconds.) If you assume the typical length of this signal
for a conventional parallel port, the maximum speed of the port works out to about 100,000 characters
per second.
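That 100,000 characters per second figure follows from simple arithmetic. Assuming the typical 8-microsecond acknowledge plus roughly 2 microseconds for data setup and strobe:

```python
# Rough throughput arithmetic for a conventional parallel port.
ack_us = 8                          # typical nAck duration
overhead_us = 2                     # data setup, strobe, and busy handling
cycle_us = ack_us + overhead_us     # ten microseconds per character
chars_per_second = 1_000_000 // cycle_us
assert chars_per_second == 100_000
```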

Select

In addition to transferring data to the printer, the basic parallel port allows your printer to send signals
back to your PC so your computer can monitor the operation of the printer. The original IBM design
of the parallel interface includes three such signals that tell your PC when your printer is ready,
willing, and able to do its job. In effect, these signals give your PC the ability to remote sense the
condition of your printer.
The most essential of these signals is select. The presence of this signal on the parallel interface tells
your PC that your printer is online. That is, that your printer is switched on and is in its online mode,
ready to receive data from your PC. In effect, it is a remote indicator for the online light on your
printer's control panel. If this signal is not present, your PC assumes that nothing is connected to your
parallel port and doesn't bother with the rest of its signal repertory.
Because the rest state of a parallel port line is an absence of voltage (which would be the case if
nothing were connected to the port to supply the voltage), the select signal takes the form of a positive
signal (nominally +5VDC) that in compatibility mode under the IEEE 1284 specification stays active
the entire period your printer is online.

Paper Empty

To print anything your printer needs paper, and the most common problem that prevents your printer
from doing its job is running out of paper. The paper empty signal warns your PC when your printer
runs out. The IEEE 1284 specification calls this signal PError for "paper error," although it serves
exactly the same function.
Paper empty is an information signal. It is not required for flow control because the busy signal more
than suffices for that purpose. Most printers will assert their busy signals for the duration of the period
they are without paper. Paper empty tells your PC about the specific reason that your printer has
stopped data flow. This signal allows your operating system or application to flash a message on your
monitor to warn you to load more paper.

Fault

The third printer-to-PC status signal is fault, a catch-all for warning of any other problems that your
printer may develop—out of ink, paper jams, overheating, conflagrations, and other disasters. In
operation, fault is actually a steady state positive signal. It dips low (or off) to indicate a problem. At
the same time, your printer may issue its other signals to halt the data flow including busy and select.
It never hurts to be extra sure. Because this signal is negative going, the IEEE specification calls it
nFault.

Initialize Printer

In addition to the three signals your printer uses to warn of its condition, the basic parallel port
provides three control signals that your PC can use to command your printer without adding anything
to the data stream. Each of these three provides its own hard-wired connection for a specific purpose.
These include one to initialize the printer, another to switch it to online condition if the printer allows
a remote control status change, and a final signal to tell the printer to feed the paper up one line.
The initialize printer signal helps your computer and printer keep in sync. Your PC can send a raft of
different commands to your printer to change its mode of operation, change font, alter printing pitch,
and so on. Each of your applications that share your printer might send out its own favored set of
commands. And many applications are like sloppy in-laws that come for a visit and fail to clean up
after themselves. The programs may leave your printer in some strange condition, such as set to print
underscored boldface characters in agate size type with a script typeface. The next program you run
might assume some other condition and blithely print out paychecks in illegible characters.
Initialize printer tells your printer to step back to ground zero. Just as your PC boots up fresh and
predictably, so does your printer. When your PC sends your printer the initialize printer command, it
tells the printer to boot up, that is, reset itself and load its default operating parameters with its start up
configuration of fonts, pitches, typefaces, and the like. The command has the same effect as you
switching off the printer and turning it back on and simply substitutes for adding a remote control arm
on your PC to duplicate your actions.
During normal operation, your PC puts a constant voltage on the initialize printer line. Removing the
voltage tells your printer to reset. The IEEE 1284 specification calls this negative going signal nInit.

Select Input

The signal that allows your PC to switch your printer online and offline is called select input. The
IEEE 1284 specification calls it nSelectIn. It is active, forcing your printer online, when it is low or
off. Switching it high deselects your printer.
Not all printers obey this command. Some have no provisions for switching themselves on and
offline. Others have setup functions (such as a DIP switch) that allow you to defeat the action of this
signal.

Auto Feed XT

At the time IBM imposed its print system design on the rest of the world, different printers interpreted
the lowly carriage return in one of two ways. Some printers took it literally: carriage return meant to move the printhead carriage back to its starting position on the left side of the platen. Other printers thought more like typewriters. Moving the printhead full left also indicated the start of a new line, so they obediently advanced the paper one line when they got a carriage return command. IBM, being a premier typewriter maker at the time, opted for this second definition.
To give printer developers flexibility, however, the IBM parallel port design included the Auto Feed
XT signal to give your PC command of the printer's handling of carriage returns. Under the IEEE 1284
specification, this signal is called nAutoFd. By holding this signal low or off, your PC commands your
printer to act in the IBM and typewriter manner, adding a line feed to every carriage return. Making
this signal high tells your printer to interpret carriage returns literally and only move the printhead.
Despite the availability of this signal, most early PC printers ignored it and did whatever their setup
configuration told them to do with carriage returns.

Nibble Mode

Early parallel ports used uni-directional circuitry for their data lines. No one foresaw the need for your
PC to acquire data from your printer, so there was no need to add the expense or complication of
bi-directional buffers to the simple parallel port. This tradition of single-direction design and
operation continues to this day in the least expensive (which, of course, also means "cheapest")
parallel ports.
Every parallel port does, however, have five signals that are meant to travel from the printer to your
PC. These include (as designated by the IEEE 1284 specification) nAck, Busy, PError, Select, and
nFault. If you could suspend the normal operation of these signals temporarily, you could use four of
them to carry data back from the printer to your PC. Of course, the information would flow at half
speed, four bits at a time.
This means of moving data is the basis of nibble mode, so called because the PC community calls half
a byte (or those four bits) a nibble. Using nibble mode, any parallel port can operate
bi-directionally—full speed forward but half speed in reverse.
Nibble mode requires that your PC take explicit command and control the operation of your parallel
port. The port itself merely monitors all of its data and monitoring signals and relays the data to your
PC. Your PC determines whether to regard your printer's status signals as backward-moving data. Of
course this system also requires that the device at the other end of the parallel port—your printer or
whatever—know that it has switched into nibble mode and understand what signals to put where and
when. The IEEE 1284 specification defines a protocol for switching into nibble mode and how PC
and peripherals handle the nibble-mode signals.
The process is complex, involving several steps. First your PC must identify whether the peripheral
connected to it recognizes the IEEE standard. If not, all bets are off for using the standard. Products
created before IEEE 1284 was adopted relied on the software driver controlling the parallel port to be

http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/wrh19.htm (28 de 52) [23/06/2000 06:38:28 p.m.]


Winn L. Rosch Hardware Bible, Electronic Edition, Chapter 19

matched to your parallel port peripheral. Because the two were already matched, they knew
everything they needed to know about each other without negotiation. The pair could work without
understanding the negotiation process or even the IEEE 1284 specification. Using the specification,
however, allows your PC and peripherals to do the matching without your intervention.
Once your PC and peripheral decide they can use nibble mode, your PC signals to the peripheral to
switch to the mode. Before the IEEE 1284 standard, the protocol was proprietary to the parallel port
peripheral. The standard gives all devices a common means of controlling the switchover.
After both your PC and parallel port peripheral have switched to nibble mode, the signals on the
interface get new definitions. In addition, nibble mode itself operates in two modes or phases, and the
signals on the various parallel port lines behave differently in each mode. These modes include
reverse idle phase and reverse data transfer phase.
In reverse idle phase, the PtrClk signal (nAck in compatibility mode) operates as an attention signal
from the parallel port peripheral. Activating this signal tells the parallel port to issue an interrupt
inside your PC, signaling that the peripheral has data available to be transferred. Your PC
acknowledges the need for data and requests its transfer by switching the HostBusy signal (nAutoFd
in compatibility mode) low or off. This switches the system to reverse data transfer phase. Your PC
switches the HostBusy signal high again after the completion of the transfer of a full data byte. When the peripheral has more data ready and your PC switches HostBusy back low again, another transfer begins. If it switches low without the peripheral having data available to send, the transition re-engages reverse idle phase.
During reverse data transfer phase, information is coded across two transfers as listed in Table 19.11.
In effect, each transfer cycle involves two epi-cycles, each of which moves one nibble. First your peripheral transfers the four bits of lesser significance, then the four bits of greater significance.

Table 19.11. Data Bit Definitions in Nibble Mode

Signal First epi-cycle contents Second epi-cycle contents


nFault Least significant bit (data bit 0) Data bit 4
Xflag Data bit 1 Data bit 5
AckDataReq Data bit 2 Data bit 6
PtrBusy Data bit 3 Most significant bit (data bit 7)

Because moving a byte from peripheral to PC requires two nibble transfers, each of which requires the
same time as one byte transfer from PC to peripheral, reverse transfers in nibble mode operate at half
speed at best. The only advantage of nibble mode is its universal compatibility. Even before the IEEE
1284 specification, it allowed any parallel port to operate bi-directionally. Because of this speed
penalty alone, if you have a peripheral and parallel port that lets you choose the operating mode for
bi-directional transfers, nibble mode is your least attractive choice.
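The byte-splitting itself (not the handshake) can be sketched as below, following the low-nibble-first ordering described above: bits 0 through 3 travel in the first epi-cycle and bits 4 through 7 in the second, one bit per status line. The function names are illustrative only, not part of any real driver interface.

```python
# Illustrative nibble-mode packing: bits 0-3 travel in the first
# epi-cycle, bits 4-7 in the second, one bit per status line.

LINES = ("nFault", "Xflag", "AckDataReq", "PtrBusy")

def to_epicycles(byte):
    """Split one byte into the two nibble-mode epi-cycles."""
    low = {line: (byte >> i) & 1 for i, line in enumerate(LINES)}
    high = {line: (byte >> (i + 4)) & 1 for i, line in enumerate(LINES)}
    return low, high

def from_epicycles(low, high):
    """Reassemble the byte from the two epi-cycles, as the host PC does."""
    byte = 0
    for i, line in enumerate(LINES):
        byte |= low[line] << i
        byte |= high[line] << (i + 4)
    return byte
```

Two epi-cycles per byte is exactly why reverse transfers in nibble mode run at half speed at best.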

Byte Mode

Unlike nibble mode, byte mode requires special hardware. The basic design for byte mode circuitry
was laid down when IBM developed its PS/2 line of computers and developed the Data Migration
Facility. By incorporating bi-directional buffers in all eight of the data lines of the parallel port, IBM
enabled them to both send and receive information on each end of the connection. Other than that
change, the new design involved no other modifications to signals, connector pin assignments, or the
overall operation of the port. Before the advent of the IEEE standard, these ports were known as PS/2
parallel ports or bi-directional parallel ports.
IEEE 1284 does more than put an official industry imprimatur on the IBM design, however. The
standard redefines the bi-directional signals and adds a universal protocol of negotiating bi-directional
transfers.
As with nibble mode, a peripheral in byte mode uses the PtrClk signal to trigger an interrupt in the
host PC to advise that the peripheral has data available for transfer. When the PC services the
interrupt, it checks the port nDataAvail signal, a negative going signal which indicates a byte is
available for transfer when it goes low. The PC can then pulse off the HostBusy signal to trigger the
transfer using the HostClk (nStrobe) signal to read the data. The PC raises the HostBusy signal again
to indicate the successful transfer of the data byte. The cycle can then repeat for as many bytes as need
to be sent.
Because byte mode is fully symmetrical, transfers occur at the same speed in either direction. The
speed limit is set at the performance of the port hardware, the speed at which the host PC handles the
port overhead, and by the length of timing cycles set in the IEEE 1284 specification. Potentially the
design could require as little as four microseconds for each byte transferred, but real world systems
peak at about the same rate as conventional parallel ports, 100,000 bytes per second.

Enhanced Parallel Port Mode

When it was introduced, the chief innovation of the Enhanced Parallel Port was its improved
performance, thanks to a design that hastened the speed at which your PC could pack data into the
port. The EPP design altered port hardware so that instead of using byte-wide registers to send data
through the port, your PC could dump a full 32-bit word of data directly from its bus into the port. The
port would then handle all the conversion necessary to repackage the data into four byte-wide
transfers. The reduction in PC overhead and more efficient hardware design enabled a performance
improvement by a factor of ten in practical systems. This speed increase required more stringent
specifications for printer cables. The IEEE 1284 specification does not get into the nitty-gritty of
linking the parallel port circuitry to your PC, so it does not guarantee that a port in EPP mode will
deliver all of this speed boost. Moreover, the IEEE 1284 cable specs are not as demanding as the
earlier EPP specs.
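The repackaging step can be pictured as below: one 32-bit word written by the CPU leaves the port as four byte-wide transfers. The least-significant-byte-first ordering is an assumption for illustration.

```python
# How a 32-bit word dumped into an EPP port becomes four byte-wide
# transfers. LSB-first ordering is assumed for this illustration.

def repack(word32):
    return [(word32 >> (8 * i)) & 0xFF for i in range(4)]
```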
EPP mode of the IEEE 1284 specification uses only six signals in addition to the eight data lines for
controlling data transfers. Three more connections in the interface are reserved for use by individual
manufacturers and are not defined under the standard.
A given cycle across the EPP mode interface performs one of four operations: writing an address,
reading an address, writing data, or reading data. The address corresponds to a register on the
peripheral. The data operations are targeted on that address. Multiple data bytes may follow a single address signal as a form of burst mode.
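That sequence can be modeled in miniature: one address cycle selects a peripheral register, then a burst of data cycles targets it. The Peripheral class, its method names, and the register number are all hypothetical; nothing here corresponds to a real driver API.

```python
# Illustrative EPP transfer sequence: an address-write cycle (nAStrobe)
# selects a peripheral register, then data-write cycles (nDStrobe)
# burst bytes to that register.

class Peripheral:
    def __init__(self):
        self.registers = {}
        self.selected = None

    def address_write(self, addr):
        """nAStrobe cycle: latch the target register address."""
        self.selected = addr

    def data_write(self, byte):
        """nDStrobe cycle: deliver one byte to the selected register."""
        self.registers.setdefault(self.selected, []).append(byte)

dev = Peripheral()
dev.address_write(0x10)        # one address cycle...
for b in (0x41, 0x42, 0x43):   # ...then a burst of data cycles
    dev.data_write(b)
```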

nWrite

Data can travel both ways through an EPP connection. The nWrite signal tells whether the contents of
the data lines are being sent from your PC to a peripheral or from a peripheral to your PC. When the
nWrite signal is set low, it indicates data is bound for the peripheral. When set high, it indicates data
sent from the peripheral.

nDStrobe

As with other parallel port transfers, your system needs a signal to indicate when the bits on the data
lines are valid and accurate. EPP mode uses a negative going signal called nDStrobe for this function
in making data operations. Although this signal serves the same function as the strobe signal on a
standard parallel port, it has been moved to a different pin, that used by the nAutoFd signal in
compatibility mode.

nAStrobe

To identify a valid address on the interface bus, the EPP system uses the nAStrobe signal. This signal
uses the same connection as does nSelectIn during compatibility mode.

nWait

To acknowledge that it has properly received a transfer, a peripheral deactivates the negative going nWait signal (making it a positive voltage on the bus). By holding the signal positive, the peripheral signals the host PC to wait. Making the signal negative indicates that the peripheral is ready for another transfer.

Intr

To signal that it requires immediate service, a peripheral connected to the EPP interface sends the host PC the Intr signal. The transition between low and high states of this signal indicates a request
for an interrupt (that is, the signal is edge-triggered). EPP mode does not allocate a signal to
acknowledge that the interrupt request was received.

nInit

The escape hatch for EPP mode is the nInit signal. When this signal is activated by making it low, it
forces the system out of EPP mode and back to compatibility mode.

Extended Capabilities Port Mode

When operating in ECP mode, the IEEE 1284 port uses seven signals to control the flow of data
through the standard eight data lines. ECP mode defines two data transfer signaling protocols—one
for forward transfers (from PC to peripheral) and one for reverse transfers (peripheral to PC)—and the
transitions between them. Transfers are moderated by closed loop handshaking that guarantees that all
bytes get where they are meant to go, even should the connection be temporarily disrupted.
Because all parallel ports start in compatibility mode, your PC and its peripherals must first negotiate
with one another to arrange to shift into ECP mode. Your PC and its software initiate the negotiation
(as well as managing all aspects of the data transfers). Following a successful negotiation to enter
ECP mode, the connection enters its forward idle phase.

HostClk

To transfer information or commands across the interface, your PC starts from the forward idle phase
and puts the appropriate signals on the data line. To signal to your printer or other peripheral that the
values on the data lines are valid and should be transferred, your PC activates its HostClk signal,
setting it to a logical high.

PeriphAck

The actual transfer does not take place until your printer or other peripheral acknowledges the
HostClk signal by sending back the PeriphAck signal, setting it to a logical high. In response, your PC
switches the HostClk signal low. Your printer or peripheral then knows it should read the signals on
the data lines. Once it finishes reading the data signals, the peripheral switches the PeriphAck signal
low. This completes the data transfers. Both HostClk and PeriphAck are back to their forward idle phase norms, ready for another transfer.
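The four-edge sequence just described can be walked through step by step. The event log below is purely illustrative; it only records the order of the signal transitions.

```python
# Minimal walk-through of the ECP forward handshake:
# HostClk high -> PeriphAck high -> HostClk low (peripheral reads)
# -> PeriphAck low, back to forward idle.

def forward_transfer(byte, received, events):
    events.append(("HostClk", 1))     # host: data lines are valid
    events.append(("PeriphAck", 1))   # peripheral: acknowledged
    events.append(("HostClk", 0))     # host: go ahead and read
    received.append(byte)             # peripheral latches the byte
    events.append(("PeriphAck", 0))   # peripheral: done, idle again

received, events = [], []
for b in (0x48, 0x49):
    forward_transfer(b, received, events)
```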

nPeriphRequest

When a peripheral needs to transfer information back to the host PC or to another peripheral, it makes
a request by driving the nPeriphRequest signal low. The request is a suggestion rather than a
command because only the host PC can initiate or reverse the flow of data. The nPeriphRequest
typically causes an interrupt in the host PC to make this request known.

nReverseRequest

To allow a peripheral to send data back to the host or to another device connected to the interface, the
host PC activates the nReverseRequest signal by driving it low, essentially switching off the voltage
that otherwise appears there. This signals to the peripheral that the host PC will allow the transfer.

nAckReverse

To acknowledge that it has received the nReverseRequest signal and that it is ready for a
reverse-direction transfer, the peripheral asserts its nAckReverse signal, driving it low. The peripheral
can then send information and commands through the eight data lines and the PeriphAck signal.

PeriphClk

To begin a reverse transfer from peripheral to PC, the peripheral first loads the appropriate bits onto
the data lines. It then signals to the host PC that it has data ready to transfer by driving the PeriphClk
signal low.

HostAck

Your PC responds to the PeriphClk signal by switching the HostAck signal from its idle logical low to
a logical high. The peripheral responds by driving PeriphClk high. When the host accepts the data, it
responds by driving the HostAck signal low. This completes the transfer and returns the interface to
the reverse idle phase.

Data Lines

Although the parallel interface uses the same eight data lines to transfer information as do other IEEE
1284 port modes, it supplements them with an additional signal to indicate whether the data lines
contain data or a command. The signal used to make this nine-bit information system changes with
the direction of information transfer. When ECP mode transfers data from PC host to a peripheral
(that is, during a forward transfer), it uses the HostAck signal to specify command or data. When a
peripheral originates the data being transferred (a reverse transfer), it uses the PeriphAck signal to
specify command or data.

Logical Interface

All parallel ports, no matter the speed, technology, or operating mode, must somehow interface with
your PC, its operating system, and your applications. After all, you can't expect to print if you can't
find your printer, so you shouldn't expect your programs to do it, either. Where you might need a map
to find your printer, particularly when your office makes the aftermath of a rock concert seem
organized, your programs need something more in line with their logical nature that serves the same
function. You look for a particular address on a street. Software looks for function calls, interrupt
routines, or specific hardware parameters.
That list represents the steps that get you closer and closer to the actual interface. A function call is a
high level software construct, part of your operating system or a driver used by the operating system
or your applications. The function call may in turn ask for an interrupt, which is a program routine
that either originates in the firmware of your PC or is added by driver software. Both the function call
and interrupt work reach your interface by dipping down to the hardware level and looking for
specific features. Most important of these are the input/output ports used by your parallel interface.

Input/Output Ports

The design of the first PC linked the circuitry of the parallel port to the PC's microprocessor through a
set of input/output ports in your PC. These I/O ports are not ports that access the outside world but
rather are a special way a microprocessor has to connect to circuitry. An I/O port works like a
memory address—the microprocessor signals an address value to the PC's support circuitry, then it sends data to that address. The only difference between addressing memory and I/O ports is that data for the former goes to the RAM in your PC. In the latter case, the addresses lie in a separate range that links to other circuitry. In general, the I/O port addresses link to registers, a special kind of memory that
serves as a portal for passing logical values between circuits.
The traditional design for a parallel port used three of these I/O ports. The EPP and ECP designs use
more. In any case, however, the I/O ports take the form of a sequential block. The entire range of I/O
ports used in a parallel connection usually gets identified by the address of the first of these I/O ports
(which is to say the one with the lowest number or address). This number is termed the base address of the parallel port. Every parallel port in a given PC must have a unique base address. Two parallel
ports inside a single PC cannot share the same base address, nor can they share any of their other I/O
ports. If you accidentally assign two parallel ports the same base address when configuring your PC's
hardware, neither will likely work.
The original PC design made provisions for up to three parallel ports in a single system, and this limit
has been carried through to all IBM-compatible PCs. Each of these has its own base address. For the
original PC, IBM chose three values for these base addresses, and these remain the values used by
most hardware makers. These basic base addresses are 03BC(Hex), 0378(Hex), and 0278(Hex).
Manufacturers rarely use the first of these, 03BC(Hex). IBM originally assigned this base address to
the parallel port that was part of the long obsolete IBM Monochrome Display Adapter or MDA card.
IBM kept using this name in its PS/2 line of computers, assigning it to the one built-in parallel port in
those machines. There was no chance of conflict with the MDA card because the MDA cannot be
installed inside PS/2s. Other computer makers sometimes use this base address for built-in parallel
ports. More often, however, they use the base address of 0378(Hex) for such built-in ports. Some
allow you to assign either address—(or even 0278(Hex)—using their setup program, jumpers, or DIP
switches.

Device Names

These base address values are normally hidden from your view and your concern. Most programs and
operating systems refer to parallel ports with port names. These names take the familiar Line PrinTer
form: LPT1, LPT2, and LPT3. In addition, the port with the name LPT1 can also use the alias PRN.
The correspondence between the base address of a parallel port and its device name varies with the
number of ports in your PC. There is no direct one to one relationship between them. Your system
assigns the device names when it boots up. One routine in your PC's BIOS code searches for parallel
ports at each of the three defined base addresses in a fixed order. It always looks first for 03BC(Hex),
then 0378(Hex), then 0278 (Hex). The collection of I/O ports at the first base address that's found gets
assigned the name LPT1; the second, LPT2; the third, LPT3. The BIOS stores the base address values
in a special memory area called the BIOS data area at particular absolute addresses. Because I/O port
addresses are 16 bits long, each base address is allocated two bytes of storage. The base address of the
parallel port assigned the LPT1 is stored at absolute memory location 0000:0408; LPT2, at
0000:040A; LPT3, 0000:040C. This somewhat arcane system assures you that you will always have a
device called LPT1 (and PRN) in your PC if you have a parallel interface at all, no matter what set of
I/O ports it uses.
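The scan-and-name procedure can be sketched as follows. The helper names are invented for the example, and the set of ports "found" is of course made up; only the scan order and the BIOS data area addresses come from the text above.

```python
# Sketch of the BIOS boot-time scan: probe the three defined base
# addresses in fixed order and name the ports found LPT1, LPT2, LPT3.

SCAN_ORDER = (0x3BC, 0x378, 0x278)
BDA_SLOTS = (0x408, 0x40A, 0x40C)   # BIOS data area, 2 bytes per port

def assign_lpt(present):
    """present: base addresses at which a port answered the probe."""
    found = [base for base in SCAN_ORDER if base in present]
    return {f"LPT{i + 1}": base for i, base in enumerate(found)}

def bda_entries(present):
    """Where the BIOS stores each found base address (0000:04xx)."""
    found = [base for base in SCAN_ORDER if base in present]
    return dict(zip(BDA_SLOTS, found))
```

A machine whose only port sits at 0278(Hex) thus still gets an LPT1, which is exactly the guarantee the text describes.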

Interrupts

The design of the original PC provided two interrupts for use by parallel ports. Hardware interrupt
07(Hex) was reserved for the first parallel port, and hardware interrupt 05(Hex) was reserved for the
second.
DOS, Windows, and most applications do not normally use hardware interrupts to control printers.

When interrupts run short in your PC and you need to find one for a specific feature, you can often
steal one of the interrupts used by a printer port.
The key word in the discussion of parallel port interrupts is printer. If you use your parallel port for
some other purpose, you may not be able to steal its interrupt. Drivers for EPP and ECP ports may use
interrupts, and modems that use parallel ports usually make use of interrupts. If you need an interrupt
and you're not sure whether your parallel port needs it, try reassigning it where you need it. Then try
to print something while you use the feature that borrowed the interrupt. If there's a problem, you'll
know it before you risk your data to it.

Port Drivers

The PC printer port was designed to be controlled by a software driver. Under DOS, you might not
notice these drivers because they are part of your PC's ROM BIOS. The printer interrupt handler is
actually a printer driver.
In reality, only a rare program uses this BIOS-based driver. It's simply too slow. Because the
hardware resources used by the parallel port are well known and readily accessed, most programmers
prefer to directly control the parallel port hardware to send data to your printer. Many applications
incorporate their own print routines or use printer drivers designed to take this kind of direct control.
More advanced operating systems similarly take direct hardware control of the parallel port through
software drivers which take over the functions of the BIOS routines. Windows, up through Version
3.11, automatically used its own integral drivers for your printer ports, although EPP and ECP
operation require that you explicitly load drivers to match. More advanced operating systems,
including OS/2 and Windows 95, always use external drivers to take control of your PC's ports.
You can check or change the parallel port driver your system uses when you run Windows 95 through
the Printer Port properties folder. To access this folder, run Device Manager. From the Start button,
select Settings, then Control Panel. Click on the System icon in Control Panel. Select the Device
Manager tab. Click on the line for Ports (COM and LPT), then highlight the LPT port for which you
want to check the driver. Finally, click on the properties button. You'll see a screen like that shown in
Figure 19.5.
Figure 19.5 The Windows 95 parallel port properties folder.

Under the heading Driver files:, you'll see your parallel port drivers listed. You can change the driver
used by this port by clicking on the Change Driver button. When you do, you'll see a window like that
shown in Figure 19.6.
Figure 19.6 Updating your parallel port driver under Windows 95.

By default, the Models list will include only those drivers that are compatible with the port that the
Windows Plug-and-Play system has detected in your PC. The list will change with the Manufacturer
you highlight. You can view all the available drivers from a given manufacturer (or the standard
drivers) by selecting Show all devices. If the driver you want is not within the current repertory of
your Windows 95 system, you can install a driver from a floppy or CD ROM disk (or one that you've
copied to your hard disk) by clicking on the Have Disk button, which prompts you for the disk and
path name leading to the driver.

http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/wrh19.htm (36 de 52) [23/06/2000 06:38:28 p.m.]


Winn L. Rosch Hardware Bible, Electronic Edition, Chapter 19

To install a new driver, highlight it and click on the OK button. Windows takes care of the rest.

Control

Even in its immense wisdom, a microprocessor can't fathom how to operate a parallel port by itself. It
needs someone to tell it how to move the signals around. Moreover, the minutiae of constantly taking
care of the details of controlling a port would be a waste of the microprocessor's valuable time.
Consequently, system designers created help systems for your PC's big brain. Driver software tells the
microprocessor how to control the port. And port hardware handles all the details of port operation.
As parallel ports have evolved, so have these aspects of their control. The software that controls the
traditional parallel port that's built into the firmware of your PC has given way to a complex system of
drivers. The port hardware, too, has changed to both simplify operation and to speed it up.
These changes don't follow the neat system of modes laid down by IEEE 1284. Instead, they have
undergone a period of evolution in reaching their current condition.

Traditional Parallel Ports

In the original PC, each of its parallel ports linked to the PC's microprocessor through three separate
I/O ports, each controlling its own register. The address of the first of these registers serves as the base
address of the parallel port. The other two addresses are the next higher in sequence. For example,
if the first parallel port in a PC has a base address of 0378(Hex), the other two I/O ports assigned
to it have addresses of 0379(Hex) and 037A(Hex).
The register at the base address of the parallel port serves as a data latch called the printer data
register, which temporarily holds the values passed along to it by your PC's microprocessor. Each of the eight
bits of this port is tied to one of the data lines leading out of the parallel port connector. The
correspondence is exact. For example, the most significant bit of the register connects to the most
significant bit on the port connector. When your PC's microprocessor writes a value to the base
register of the port, the register latches those values until your microprocessor sends newer values to
the port.
Your PC uses the next register on the parallel port, corresponding to the next I/O port, to monitor what
the printer is doing. Termed the printer status register, the various bits that your microprocessor can
read at this I/O port carry messages from the printer back to your PC. The five most significant bits of
this register directly correspond to five signals appearing in the parallel cable: bit 7 indicates the
condition of the busy signal; bit 6, acknowledge; bit 5, paper empty; bit 4, select; and bit 3, error. The
remaining three bits of this register (bits 2, 1, and 0—the least significant bits) served no function in
the original PC parallel port.
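The bit assignments above lend themselves to simple mask-and-test logic. The following sketch decodes a raw status byte; it is written in Python for clarity (real drivers do this in assembly or C), and the constant and function names are illustrative only. Signal polarities on the actual connector, where some lines arrive inverted, are ignored here.

```python
# Bit positions in the printer status register (base address + 1),
# following the assignments described in the text.
STATUS_BUSY   = 0x80  # bit 7: busy
STATUS_ACK    = 0x40  # bit 6: acknowledge
STATUS_PE     = 0x20  # bit 5: paper empty
STATUS_SELECT = 0x10  # bit 4: select
STATUS_ERROR  = 0x08  # bit 3: error

def decode_status(value):
    """Translate the raw status-register byte into named flags."""
    return {
        "busy":        bool(value & STATUS_BUSY),
        "acknowledge": bool(value & STATUS_ACK),
        "paper_empty": bool(value & STATUS_PE),
        "select":      bool(value & STATUS_SELECT),
        "error":       bool(value & STATUS_ERROR),
    }
```

For example, a status byte of 0x30 would report a selected printer that has run out of paper.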
To send commands to your printer, your PC uses the third I/O port, offset two ports from the base
address of the parallel port. The register there, called the printer control register, relays commands
through its five least significant bits. Of these, four directly control corresponding parallel port lines.
Bit 0 commands the strobe line; bit 1, the Auto Feed XT line; bit 2, the initialize line; and bit 3, the

select line.
To enable your printer to send interrupts to command the microprocessor's attention, your PC uses bit
4 of the printer control register. Setting this bit high causes the acknowledge signal from the printer to
trigger a printer interrupt. During normal operation your printer, after it receives and processes a
character, changes the acknowledge signal from a logical high to a low. Set bit 4, and your system
detects the change in the acknowledge line through the printer status register and executes the
hardware interrupt assigned to the port. In the normal course of things, this interrupt simply instructs
the microprocessor to send another character to the printer.
All of the values sent to the printer data register and the printer control register are put in place by
your PC's microprocessor, and the chip must read and react to all the values packed into the printer
status register. Your PC gets its instructions for operating the printer from firmware that is part of
its ROM BIOS. The routines coded for interrupt vector 017(Hex) carry out most of these
functions. In the normal course of things, your applications call interrupt 017(Hex) after loading
appropriate values into your microprocessor's registers, and the microprocessor relays the values to
your printer. These operations are very microprocessor intensive. They can occupy a substantial
fraction of the power of a microprocessor (particularly that of older, slower chips) during print
operations.
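The transfer these registers support follows a simple polled sequence: check the status register until the printer is not busy, latch a byte into the data register, then pulse the strobe bit in the control register. The sketch below simulates that handshake in Python; the SimulatedPort class stands in for real OUT and IN instructions, and the whole thing illustrates the logic only, not actual BIOS code. Connector-level signal polarities are again ignored.

```python
# A simulated standard parallel port: three registers at consecutive
# "addresses," as described in the text.  Illustrative names only.
STROBE = 0x01   # bit 0 of the printer control register
BUSY   = 0x80   # bit 7 of the printer status register

class SimulatedPort:
    def __init__(self):
        self.data = 0       # base: printer data register (latch)
        self.status = 0     # base+1: printer status register
        self.control = 0    # base+2: printer control register
        self.printed = []   # what the simulated "printer" received

    def write_control(self, value):
        # The printer accepts the latched byte on the strobe transition.
        if value & STROBE and not (self.control & STROBE):
            self.printed.append(self.data)
        self.control = value

def send_byte(port, byte):
    """Polled transfer in the style of the INT 17h BIOS routine."""
    if port.status & BUSY:                       # printer not ready
        return False
    port.data = byte                             # latch the data bits
    port.write_control(port.control | STROBE)    # assert strobe
    port.write_control(port.control & ~STROBE)   # release strobe
    return True
```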

Enhanced Parallel Ports

Intel set the pattern for the Enhanced Parallel Port by integrating the design into the 386SL chip set
(which comprised a microprocessor and a support chip, the 386SL itself and the 82360SL I/O
subsystem chip, which together required only memory to make a complete PC). The EPP was
conceived as a superset of the standard and PS/2 parallel ports. As with those designs, compatible
transfers require the use of the three parallel port registers at consecutive I/O port addresses. However,
it adds five new registers to the basic three. Although designers are free to locate these registers
wherever they want because they are accessed using drivers, in the typical implementation, these
registers occupy the next five I/O port addresses in sequence.

EPP Address Register

The first new register (offset three from the base I/O port address) is called the EPP address
register. It provides a direct channel through which your PC can specify the addresses of devices
linked through the EPP connection. By loading an address value into this register, your PC can
select among multiple devices attached to a single parallel port, at least once parallel devices
using EPP addressing become available.

EPP Data Registers

The upper four ports of the EPP system interface (starting at offset four from the base port) link to the

EPP data registers, which provide a 32-bit channel for sending data to the EPP data buffer. The EPP
port circuitry takes the data from the buffer, breaks it into four separate bytes, then sends the bytes
through the EPP data lines in sequence. Substituting four I/O ports for the one used by standard
parallel ports moves the conversion into the port hardware, relieving your system from the
responsibility of formatting the data. In addition, your PC can write to the four EPP data registers
simultaneously using a single 32-bit double-word in a single clock cycle in computers that have 32-bit
data buses. In lesser machines, the EPP specification also allows for byte-wide and word-wide (16-bit)
write operations to the EPP data registers.
Unlike standard parallel ports that require your PC's microprocessor to shepherd data through the port,
the Enhanced Parallel Port works automatically. It needs no other signals from your microprocessor
after it loads the data in order to carry out a data transfer. The EPP circuitry itself generates the data
strobe signal on the bus almost as soon as your microprocessor writes to the EPP data registers. When
your microprocessor reads data from the EPP data registers, the port circuitry automatically triggers
the data strobe signal to tell whatever device that's sending data to the EPP connection that your PC is
ready to receive more data. The EPP port can consequently push data through to the data lines with a
minimum of transfer overhead. This streamlined design is one of the major factors that enables the
EPP to operate so much faster than standard ports.
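The byte-splitting the port hardware performs can be pictured as follows; the least-significant-byte-first order is an assumption for illustration, since the hardware handles the sequencing invisibly.

```python
def epp_split_dword(value):
    """Break a 32-bit double-word into the four bytes the EPP port
    hardware clocks out in sequence.  Least-significant byte first
    is assumed here; real hardware hides this detail entirely."""
    return [(value >> shift) & 0xFF for shift in (0, 8, 16, 24)]
```

A single 32-bit write to the EPP data registers thus replaces four separate byte writes, which is where the overhead savings come from.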

Fast Parallel Port Control Register

Switching among standard, bi-directional, and EPP operation requires only plugging values
into one of the registers. Although manufacturers can use any design they want, needing only to
alter their drivers to match, most follow the pattern set in the SL chips. Intel added a software
controllable fast parallel port control register as part of the chipset. This corresponds to the unused
bits of the standard parallel port printer control register.
Setting the most significant bit (bit 7) of the fast parallel port control register high engages EPP
operation. Setting this bit low (the default) forces the port into standard mode. Another bit controls
bi-directional operation. Setting bit 6 of the fast parallel port control register high engages
bi-directional operation. When low, bit 6 keeps the port unidirectional.
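In code, switching modes amounts to setting or clearing those two bits. The constant names below are descriptive inventions, not identifiers from any Intel documentation.

```python
# Mode bits of the fast parallel port control register, following
# the SL-chipset pattern described in the text.  Names are invented.
FPP_EPP_ENABLE   = 0x80  # bit 7: high engages EPP operation
FPP_BIDIR_ENABLE = 0x40  # bit 6: high engages bi-directional operation

def set_epp_mode(reg, enabled):
    """Return the register value with bit 7 set or cleared."""
    if enabled:
        return reg | FPP_EPP_ENABLE
    return reg & ~FPP_EPP_ENABLE & 0xFF

def set_bidirectional(reg, enabled):
    """Return the register value with bit 6 set or cleared."""
    if enabled:
        return reg | FPP_BIDIR_ENABLE
    return reg & ~FPP_BIDIR_ENABLE & 0xFF
```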
In most PCs, an EPP doesn't automatically spring to life. Simply plugging your printer into EPP
hardware won't guarantee fast transfers. Enabling the EPP requires a software driver which provides
the link between your software and the EPP hardware.

Extended Capabilities Ports

As with other variations on the basic parallel port design, your PC controls an Extended Capabilities
Port through a set of registers. To maintain backward compatibility with products requiring access to a
standard parallel port, the ECP design starts with the same trio of basic registers. However, it
redefines the functions of these registers in each of the port's different operating modes.
The ECP design supplements the basic trio of parallel port registers with an additional set of registers
offset at port addresses 0400(Hex) higher than the base registers. One of these, the extended control

register, controls the operating mode of the ECP port. Your microprocessor sets the operating mode
by writing to this register, which is offset 0402(Hex) from the base address of the port. The
ECP port uses additional registers to monitor and control other aspects of the data transfer. Table
19.12 lists the registers used by the ECP, their mnemonics, and the modes in which they function.

Table 19.12. Extended Capabilities Port Register Definitions

Name       Address    Mode           Function

Data       Base       PC, PS/2       Data register
ecpAFifo   Base       ECP            ECP FIFO (Address) buffer
DSR        Base+1     All            Status register
DCR        Base+2     All            Control register
cFifo      Base+400   EPP            Enhanced Parallel Port FIFO (data) buffer
ecpDFifo   Base+400   ECP            ECP FIFO (data) buffer
tFifo      Base+400   Test           Test FIFO
cnfgA      Base+400   Configuration  Configuration register A
cnfgB      Base+401   Configuration  Configuration register B
ecr        Base+402   All            Extended control register

As with other improved parallel port designs, the ECP behaves exactly like a standard parallel port in
its default mode. Your programs can write bytes to its data register (located at the port's base address
just as with a standard parallel port) to send the bits through the data lines of the parallel connection.
Switch to EPP or ECP mode, and your programs can write at high speed to a register as wide as 32
bits. The ECP design allows for transfers 8, 16, or 32 bits wide at the option of the hardware designer.
To allow multiple devices to share a single parallel connection, the ECP design incorporates its own
addressing scheme that allows your PC to separately identify and send data to as many as 128 devices.
When your PC wants to route a packet or data stream through the parallel connection to a particular
peripheral, it sends out a channel address command through the parallel port. The command includes
a device address. When an ECP parallel device receives the command, it compares the address to its
own assigned address. If the two do not match, the device ignores the data traveling through the
parallel connection until your PC sends the next channel address command through the port. When
your PC fails to indicate a channel address, the data gets broadcast to all devices linked to the parallel
connection.
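The address-matching rule each peripheral follows can be modeled in a few lines. The class below is a toy model of the protocol logic described above, not driver code.

```python
class EcpDevice:
    """Toy model of an ECP peripheral's channel-address matching."""
    def __init__(self, address):
        self.address = address   # assigned device address, 0 to 127
        self.listening = True    # no channel selected means broadcast
        self.received = []

    def channel_command(self, address):
        # Each device compares the commanded address to its own and
        # ignores the data stream unless the two match.
        self.listening = (address == self.address)

    def data(self, byte):
        if self.listening:
            self.received.append(byte)
```

With no channel address command issued, every device sees the data; after a command selecting address 5, only the device assigned address 5 keeps listening.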

Performance Issues

As with any interface, you want your parallel connection to operate at the highest possible speed. The
speed of a parallel connection can be difficult to pin down. Several variables affect it. For example,
the parallel cable itself sets the upper limit on the frequencies of the signals that the port can use,
which in turn limits the maximum data rate. At practical cable lengths, which means those less than

the recommended 10-foot maximum, cable effects on parallel port throughput are minimal. Other
factors that come into play include the switching speed of the port circuitry itself, the speed at which
your PC can write to the various control and data registers, the number of steps required by the BIOS or
software driver to write a character, the ability of the device at the other end of the connection to
accept and process the data sent to it, and the delays built into the timing of the various parallel
port signals to ensure the integrity of the transfer.

Timing

The timing of parallel port signals is actually artificially slow to accommodate the widest variety of
parallel devices. Because the timing was never standardized before the IEEE 1284 specification,
manufacturers had to rely on loose timing—meaning a wider tolerance of errors achieved through a
slower signaling rate—to assure any PC could communicate with any printer or other parallel
peripheral.
With the timing of an older standard parallel port set loosely enough to produce the widest
compatibility, the transmission of a single character requires about ten microseconds. That speed
yields a peak transfer rate of 100,000 bytes per second. Operated at the tightest timing allowed by the
IEEE 1284 specification, a conventional parallel port can complete a single character transfer cycle in
four microseconds, yielding a peak throughput of 250,000 bytes per second.
Add in all the overhead at both ends of the connection, and those rates can take a bad tumble. With a
fast PC and fast peripheral, you can realistically expect 80 to 90 kilobytes per second through the
fastest conventional parallel port.
The EPP specification allows for a cycle time of one-half microsecond in its initial implementations.
That translates to a peak transfer rate of two megabytes per second. In actual operation with normal
processing overhead, EPP ports come close to half that rate, around 800 kilobytes per second.
Such figures do not represent the top limit for the EPP design, however. In future versions of the EPP
standard, timing constraints may be tightened to require data on the interface to become valid within
100 nanoseconds. Such future designs allow for a peak transfer rate approaching eight megabytes per
second. Such a rate actually exceeds the speed of practical transfers across the ISA bus. Taking full
advantage of an EPP connection will require a local bus link.
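The throughput figures above are just the reciprocal of the per-byte cycle time, which a few lines make explicit:

```python
def peak_rate(cycle_seconds):
    """Peak transfer rate in bytes per second, given the time one
    byte-transfer cycle takes."""
    return 1.0 / cycle_seconds

standard_loose = peak_rate(10e-6)    # about 100,000 bytes per second
standard_tight = peak_rate(4e-6)     # about 250,000 bytes per second
epp_initial    = peak_rate(0.5e-6)   # about 2,000,000 bytes per second
```

Real-world overhead at both ends of the connection cuts these peak figures down considerably, as the text notes.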

Data Compression

One very effective way of increasing the speed of information through any interface is to minimize
the number of bytes you have to move. By compressing the digital code—that is, reducing it to a more
bit efficient format—you can reduce the number of bytes needed to convey text, graphics, and files.
Already popular in squeezing more space from disks (for example, with DriveSpace and Stacker),
tapes in backup systems, and modem connections, data compression is also part of the Extended
Capabilities Port standard.
As an option, the ECP system allows you to compress the data you send through the parallel interface

to further increase the speed of transfers. The port circuitry itself handles the compression and
decompression, invisible to your PC and its software as well as to the peripheral at the other end of the
connection. The effect on your transfers is the same as increasing the speed of the signals across the
parallel cable but without all the electrical problems.
The ECP design uses a simple form of compression called Run Length Encoding or RLE. As with any
code, RLE can take many different forms but the basic principle is the same. Long repetitions of the
same digital pattern get reduced to a single occurrence of the pattern and a number indicating how
many times the pattern is repeated. The specific RLE algorithm used by the ECP system works at the
byte level. When the same byte is repeated in a sequence of data, the system translates it into two
bytes: one indicating the original code and a multiplier. Of course, if bytes do not repeat, this basic
form of RLE is counterproductive. Using two bytes to code one increases the number of bytes
required for a given amount of data. To minimize the impact of this expansion, the RLE algorithm
used by the ECP system splits the difference. Half the possible byte values are kept untouched and are
used by the code to represent the same single byte values as in the incoming data stream. The other
byte values serve as multipliers. If one of the byte values that are reserved for multipliers appears in
the incoming data stream, it must be represented by two bytes (the byte value followed by a multiplier
of one). This system allows two bytes to encode repeated character streams up to 128 bytes long.
At its best, this system can achieve a compression ratio of 64 to 1 on long repetitions of a single byte
value. At worst, the system expands data by a ratio of 1 to 2. With real world data, the system
achieves an overall compression ratio approaching 2 to 1, effectively doubling the speed of the
parallel interface whatever its underlying bit-per-second transfer rate.
RLE data compression can be particularly effective when you transfer graphics images from your PC
to your printer. Graphic images often contain long sequences of repeated bytes representing areas of
uniform color. RLE encoding offers little benefit to textual exchanges because text rarely contains
long repetitions of the same byte or character. Of course, sending ordinary text to a printer usually
doesn't strain the capabilities of even a standard parallel port, so the compression speed boost is
unnecessary.
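A toy encoder along the lines just described, with byte values 0x00 through 0x7F passed through literally and values 0x80 through 0xFF serving as run-length multipliers, might look like this. The actual ECP encoding differs in its details; this sketch demonstrates only the principle, including the worst-case expansion of bytes in the multiplier range.

```python
def rle_encode(data):
    """Toy run-length encoder: values below 0x80 may stand alone;
    a following byte of 0x80-0xFF is a multiplier for runs of 1-128."""
    out = []
    i = 0
    while i < len(data):
        value = data[i]
        run = 1
        while i + run < len(data) and data[i + run] == value and run < 128:
            run += 1
        if run == 1 and value < 0x80:
            out.append(value)            # literal: one byte
        else:
            out.append(value)            # value plus multiplier: two bytes
            out.append(0x80 | (run - 1))
        i += run
    return out

def rle_decode(data):
    """Reverse the toy encoding.  Assumes a well-formed stream."""
    out = []
    i = 0
    while i < len(data):
        value = data[i]
        i += 1
        if value >= 0x80 or (i < len(data) and data[i] >= 0x80):
            count = (data[i] & 0x7F) + 1
            i += 1
        else:
            count = 1
        out.extend([value] * count)
    return out
```

A run of 128 identical bytes collapses to two bytes (the 64-to-1 best case), while a lone byte in the multiplier range expands to two bytes (the 1-to-2 worst case).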

Bus Mastering

System overhead is the bane of performance in any data transfer system. The more time your PC's
microprocessor spends preparing and moving data through the interface, the less of its time is
available for other operations. The problem is most apparent during background printing in PCs using
older, slower microprocessors. Most applications give you the option of printing in the background so
you can go on to some other task while your PC slowly spools out data to your printer. All too often,
the PC slows down so much during background printing that it's virtually useless for other work. This
problem occurs in PCs as powerful as 486-based machines.
The slowdown has several sources. Your microprocessor may have to rasterize a full-page image
itself (as it often does when printing from Windows) or it may spend its time micro-managing the
movement of bytes from memory to the registers of the parallel interface. Although system designers
can't do anything to improve the speed of the former case, short of using a more powerful
microprocessor, they have developed several schemes to minimize system overhead. One dramatic
improvement comes with sidestepping the printer BIOS routines and taking direct control of the
interface circuitry. Another is to take the transfer job from the microprocessor and give it to some

other circuit. This last expedient underlies the technology of bus mastering.
Bus mastering can improve overall system (and printing) performance two ways. The circuit
managing the transfers can be more efficient than your microprocessor at the chore. It may be able to
move bytes faster. And, by removing responsibility from your microprocessor, it prevents data
transfers from bogging down the rest of your PC. Your microprocessor has more of its time for doing
whatever a microprocessor does.
In systems that allow the bus mastering of parallel ports, the transfers are typically managed by your
system's DMA (Direct Memory Access) controller. Your microprocessor sets up the
transfer—specifying where the bytes are coming from, where they are to go, and how many to
move—and lets the DMA controller take over the details. The DMA controller then takes control of
the bus, becoming its master, and moving the bytes across it.
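Conceptually, the hand-off amounts to the microprocessor filling in a transfer descriptor and letting another agent move the bytes. The sketch below is purely illustrative; the names do not correspond to any real DMA controller programming interface.

```python
from dataclasses import dataclass

@dataclass
class DmaTransfer:
    """The three things the microprocessor specifies before the DMA
    controller takes over as bus master.  Illustrative names only."""
    source: int        # memory address the bytes come from
    destination: int   # I/O port address they go to
    count: int         # how many bytes to move

def run_transfer(memory, transfer):
    """Toy DMA controller: walks through memory and delivers the bytes
    without further involvement from the microprocessor."""
    delivered = []
    for offset in range(transfer.count):
        delivered.append(memory[transfer.source + offset])
    return delivered
```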
Bus mastered parallel transfers have not won wide favor. The technology does not work well on the
ISA expansion bus, and IBM introduced it late in the life of the Micro Channel system. Moreover, the
high processing speed of modern 486 and better microprocessors coupled with comparatively low
throughput of the standard parallel interface makes bus mastering an unnecessary complication.
Although not currently applied to PCI or VL Bus systems (both of which support bus mastering), the
technology could give a boost to EPP and ECP performance because of the higher throughput and
simplified means of transferring bytes to those interfaces.

Plug-and-Play

The Plug-and-Play system, developed by computer manufacturers with the intention of making your
life simpler—or at least making the setup of your PC easier—extends to input/output ports and
printers. Plug-and-Play technology allows your PC to detect and identify the various hardware devices
that you connect to your computer. For example, a printer that understands and uses the Plug-and-Play
system can identify itself to your PC and tell your PC which software driver is best to use.
The basic mechanism required for the Plug-and-Play system to work for printers is built into the IEEE
1284 specification. The actual identification and matching of drivers gets handled by your PC's
operating system.

Benefits

Equipment made in accord with the Plug-and-Play specification tells your PC the system resources it
needs, and your PC can then automatically assign those resources to the equipment. Unlike when you
set up hardware yourself, your PC can infallibly (or nearly so) keep track of the resource demands and
usages of each device you connect. Plug-and-Play technology lets your PC not only resolve conflicts
between devices that need the same or similar hardware resources but also prevent conflicts from
occurring in the first place.
You only need to concern yourself, if at all, with two aspects of Plug-and-Play when you connect your
printer—how it configures your ports and how it deals with your printer itself. Although you shouldn't

even have to worry about these details most of the time, understanding the magic can help you better
understand your PC and subvert the system when it creates instead of eliminates a problem.
Printers that conform to the requirements of the Plug-and-Play system enable several automatic
features. A printer can then specify its device class, and the Plug-and-Play operating system will
install features and drivers that work with that device class. The system allows your printer to identify
itself with a familiar name instead of some obscure model number and use that name throughout the
configuration process. That way you can understand what's going on instead of worrying about some
weird thing in your computer with a name that looks eerily like the markings on the side of a UFO.
And the Plug-and-Play printer can tell you what other peripherals it works with.

Requirements

For the Plug-and-Play system to work at all, you need to run an operating system that has
Plug-and-Play capabilities. Windows 95 is the first operating system to fully support the technology.
DOS offers no Plug-and-Play support, and OS/2 Warp includes only a trifling bit of Plug-and-Play
technology, used chiefly in administering PC Card slots. It cannot automatically identify your printer.
Ideally, your PC and all the peripherals connected to it will comply with the Plug-and-Play
specifications. If you buy a new PC in these enlightened times, you should expect that level of
compliance. If you have an older system (or a new system into which you've installed old peripherals),
however, you probably won't have full Plug-and-Play compliance. That's okay because in most cases
a Plug-and-Play operating system can make do with what you have. For example, Windows 95 can
identify your printer as long as it follows the Plug-and-Play standard even if you have cluttered your
PC with old expansion boards that don't mesh with the standard.
To automatically identify your printer, the Plug-and-Play system needs only to be able to signal to
your printer and have it send back identification data. Your parallel port is key to this operation, but
the demands made from it for Plug-and-Play operation are minimal. The port may use any of the
standard IEEE connector designs. It must also support, at minimum, nibble-mode bi-directional
transfers. Nearly every parallel port ever made fits these requirements. Plug-and-Play prefers a port
that follows the ECP design, and for the sake of maximum printer performance, so should you.
Of course, a printer must have built-in support of the Plug-and-Play standard if it is to take advantage
of the technology. The primary need is simple. Your printer must be able to send Plug-and-Play
identification information to your PC so your PC will know what kind of printer you've connected.
So that your system can be certain about the kind of printer you have, it requires three forms of
identification called key values. Three additional key values optimize the operation of the
Plug-and-Play system.

Operation

The IEEE 1284 specification provides a mechanism through which your PC's operating system can
query a device connected to a parallel port. When your PC sends out the correct command, the printer
responds first by sending back two bytes indicating how much identification data it has stored. This

value is the length of the identification data in bytes, including the two length-indicating bytes. The
first byte of these is the more significant.
After your PC gets the length information, it can query your printer for the actual data with another
command. Your Plug-and-Play printer responds by sending back the key value information stored
inside its configuration memory (which may be ROM, Flash RAM, or EEPROM).
The three required identifications for Plug-and-Play to work are the Manufacturer, Command Set, and
Model of your printer. Each of these is stored as a string of case-sensitive characters prefaced by the
type of identification. The IEEE 1284 specification abbreviates these identifications as MFG, CMD,
and MDL. For example, your printer might respond with these three required values like this:
MFG: Acme Printers; CMD: PCL; MDL: Roadrunner 713
The manufacturer and model identifications are unique to each manufacturer. These values should
never change and typically will be stored in ROM inside your printer. Ideally, the command set
identification tells your computer what printer driver to use, Hewlett-Packard's Printer Control
Language (PCL) in the example line. Although it is often a fixed value in a given printer, if your
printer allows you to plug in additional emulations or fonts, the value of the command set identifier
should change to match. Note that Windows 95 ignores the command set key value. Instead, when it
automatically sets up your printer driver, it relies on manufacturer and model information to
determine which driver to use.
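Handling the identification data involves two simple steps: reading the two-byte length prefix (more significant byte first) and splitting the key-value string. A sketch, with hypothetical function names:

```python
def parse_length(prefix):
    """The two bytes sent first, more significant byte first, give the
    total length of the identification data, including these two bytes."""
    return (prefix[0] << 8) | prefix[1]

def parse_device_id(text):
    """Split an IEEE 1284 device ID string into its key values."""
    fields = {}
    for item in text.split(";"):
        if ":" in item:
            key, _, value = item.partition(":")
            fields[key.strip()] = value.strip()
    return fields
```

Applied to the example reply above, the parser would yield Acme Printers as the manufacturer, PCL as the command set, and Roadrunner 713 as the model.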
Windows 95 generates its own internal Plug-and-Play identification for working with key value data. It
generates its ID value by combining the Manufacturer and Model values and appending a four-digit
checksum. If the Manufacturer and Model designations total more than 20 characters, Windows 95
cuts them off at 20 characters but only after it calculates the checksum. The result is a string 24 or
fewer characters long. Finally Windows 95 adds the preface LPTENUM\ (indicating the parallel port
enumerator) so that it knows the path through which to find the printer. The result is the printer's
Plug-and-Play ID that Windows 95 uses internally when matching device drivers to your printer. For
example, the internal Windows 95 ID for a Hewlett-Packard LaserJet 4L printer would be the
following character string:

LPTENUM\Hewlett-PackardLaserC029
Printer manufacturers can add, at their option, other identification information to the Plug-and-Play
key values. The IEEE 1284 specification envisions Comment and Active Command Set entries.
Microsoft defines its own trio of options: Class (abbreviated CLS), Description (or DES), and
Compatible ID (or CID). These values are not case-sensitive.
The Class key value describes the general type of device. Microsoft limits the choices to eight: FDC,
HDC, Media, Modem, Net, Ports, or Printer.
The Description key value is a string of up to 128 characters that is meant to identify the
Plug-and-Play device in a form that human beings understand. Windows 95 uses the Description
when referring to the device on screen when it cannot find a data (INF) file corresponding to the
device. Normally Windows would retrieve the onscreen identification for the device from the file. The
Description key value keeps things understandable even if you plug in something Windows has never
encountered before.
The Compatible ID key value tells Windows if your printer or other device will work exactly like

some other product for which Windows might have a driver. For example, it allows the maker of a
printer cloned from an Epson MX-80 to indicate it will happily use the Epson printer driver.
Once Windows 95 has identified your printer, its command set, and compatibilities, it uses these
values to search for the data it needs to find the drivers required by your printer and properly
configure them. Of course, you always have the option to override the automatic choices when you
think you know better than Mother Microsoft.

GP-IB Interface

Long before the standard parallel port had evolved its high
speed extensions and enhancements, even before IBM
unleashed its first PC onto the market, a number of
applications had need for a relatively high speed yet simple
data interface. Among the first areas that embraced such
connections was scientific instrumentation. Engineers,
scientists, and technicians often need to take extensive
series of measurements at regular intervals to explore
phenomena and test their latest creations. They were early
to embrace the idea of automating measurements, both to
ensure reliability and to gain a good night's sleep without
the alarm going off at 3 AM to warn of the need to make
the next series of measurements. Connecting the various
test and measurement instruments together and to a
computer programmed to operate them assured researchers
both peaceful slumber and results not compromised by
bleary eyes and a lack of caffeine.
The leading manufacturer of scientific test and
measurement equipment, Hewlett-Packard Company,
developed its own parallel interface to link together its
instruments,
dubbed the Hewlett-Packard Interface Bus or HP-IB. The
design became so popular that other companies including
HP's competitors adopted it. In 1978 the design was
sanctioned by the Institute of Electrical and Electronics
Engineers as a formal standard, known as IEEE-488. In this
guise, it wears a less proprietary common name, the
General Purpose Interface Bus.
The HP-IB (Hewlett-Packard Interface Bus) name survives
today. Rather than an alias for the industry standard,
Hewlett-Packard maintains that HP-IB is a proprietary
interface, but one that provides for compatibility and
two-way communication between devices that follow the
IEEE-488 standard. Devices that commonly use IEEE-488
include automated test and measurement equipment,
printers, plotters, and PCs.
No matter the name you give it, the basic IEEE-488 design
comprises 16 separate connections to move data and
commands between electronic devices. Eight of these
connections carry data in a true byte-wide bus. Three lines
provide handshaking and flow control between the various
devices that are linked together. The remaining five lines
allow for arbitration and management of the bus
connections. The standard connector also provides eight
ground connections, one of which is a chassis or earth
ground. Data and commands flow between the linked
devices on the eight data lines asynchronously, governed
by the handshake signals.

The standard connector used by the GP-IB system
resembles a parallel port B connector but has only 24
connections. It has two parallel rows of ribbon contacts
arranged around a tab that's inside a shield. To assure the
physical integrity of the connection, female plugs and jacks
have bail wires that latch into male connectors. Figure 19.7
shows one form of female GPIB jack.
Figure 19.7 A 24-place GP-IB female jack.
As a true bus, the cables used in a GP-IB system are all
straight-through, connected pin-for-pin one end to another.
No twists, turns, flip-flops, or crossovers are required for
linking together various devices. Table 19.13 lists the
signal assignments for the various pins in a standard GP-IB
connector.

Table 19.13. GP-IB Signal Assignments

Pin  Signal name  Function

1    DIO1         Data Bus 1
2    DIO2         Data Bus 2
3    DIO3         Data Bus 3
4    DIO4         Data Bus 4
5    EOI          End or Identify
6    DAV          Data Valid
7    NRFD         Not Ready for Data
8    NDAC         Not Data Accepted
9    IFC          Interface Clear
10   SRQ          Service Request
11   ATN          Attention
12   SHIELD       Earth Ground
13   DIO5         Data Bus 5
14   DIO6         Data Bus 6
15   DIO7         Data Bus 7
16   DIO8         Data Bus 8
17   REN          Remote Enable
18   GND          Signal Ground
19   GND          Signal Ground
20   GND          Signal Ground
21   GND          Signal Ground
22   GND          Signal Ground
23   GND          Signal Ground
24   GND          Signal Ground

Devices are wired together into a bus by daisy chaining. A
device may have two connectors to facilitate the daisy
chain connection or the cable may have two interface
connections at its ends, one male (which plugs into a
device) and, on the opposite side, a female connector that
allows you to plug in a second cable to run to the next
device.
The GP-IB standard defines three types of devices that
operate on the bus: talkers, listeners, and controllers.
A talker is a device that sends data out or talks to any of the
other devices linked by the bus. A listener is the device that
receives the data from the talker. A controller manages the
interactions between the various devices that are linked by
the bus.

These definitions are dynamic. A given device may operate
one moment as a talker and the next moment as a listener,
depending on its current function in the system. The
controller sends out commands that make a given device
act as talker or listener.
GP-IB specifies not only the bus hardware but also a
transfer protocol and set of commands. These commands
are sent to devices like data across the data lines. To
identify commands and distinguish them from data, the
controller activates the Attention (ATN, pin 11) line on the
bus to indicate a command byte.
Because GP-IB is a bus, all devices linked by it share the
same signals. To route data or commands to a specific
device, each device attached to the bus must be given a
unique address. Addresses on the GP-IB range from zero to
seven, allowing for a maximum of eight devices. Each
address corresponds to one of the data lines, which is used
as a signal to address the device. Some equipment puts
restrictions on some addresses. For example, some HP
printers assign address seven as "listen only." In this mode,
the printer listens to all the traffic on the bus regardless of
its address. This allows the printer to serve as a log of all
bus traffic, a useful function in monitoring scientific
apparatus.
GP-IB addresses are typically set with a DIP switch on the
back of the device near the interface connector.

In normal operation, a controller manages the bus by
polling. That is, it periodically sends out commands to each
device to see which require service. For example, during an
experiment the controller may poll a thermocouple linked
to the bus to determine (and log) the current temperature of
an experiment.
Individual devices can also require immediate service by
activating the Service Request (SRQ, pin 10) line on the
bus. The controller then responds by polling the bus to
determine which device made the request, and finally
servicing the request.
The controller can monitor the bus with either parallel or
serial polling.
The controller uses parallel polling when it needs to check
all devices on the bus or wants to determine which device
activated the SRQ bus line. To start a parallel poll, the
controller activates both the End or Identify (EOI, pin 5)
and Attention (ATN, pin 11) bus lines at the same time.
Each device that has both been enabled to respond and
requires service activates the data line with the number
corresponding to its address. The controller enables devices
with GP-IB commands.
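Because each polled device simply asserts the data line matching its address, a parallel poll response is just an eight-bit bitmask. A minimal sketch of both sides of the exchange (the function names are illustrative, not part of any GP-IB library):

```python
def parallel_poll_byte(requesting_addresses):
    """Byte seen on the eight data lines during a parallel poll:
    each enabled device that needs service asserts the data line
    matching its address (0 through 7)."""
    byte = 0
    for address in requesting_addresses:
        if not 0 <= address <= 7:
            raise ValueError("GP-IB parallel-poll addresses run 0 to 7")
        byte |= 1 << address
    return byte

def requesting_devices(poll_byte):
    """Controller side: recover which addresses asserted a line."""
    return [address for address in range(8) if poll_byte & (1 << address)]
```

A poll in which the devices at addresses 2 and 5 request service puts 0b00100100 on the data lines; decoding that byte returns the list [2, 5].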
In serial polling, the controller addresses each device
sequentially, using the bus data lines to send out the
address of the device to be polled. As it is identified, each
device responds in turn, sending out data should it have
been programmed to do so.

Winn L. Rosch Hardware Bible, Electronic Edition, Chapter 20

Chapter 20: Printers and Plotters


Hard copy is what printers and plotters make, the real paperwork that you can hold in
your hand. Despite the dreams in years gone by of the paperless office, paper remains
the medium of choice—only mail is in a headlong rush to go electronic, and even that
hasn't lightened the load of your letter carrier. Because of the need for putting things on
paper, the external peripheral that you're most likely to connect to your PC is the printer.
The printer is not a singular thing, but rather a creature of many technologies. Each has
its own advantages and disadvantages, which themselves vary with what you want to
do.

■ Printers
■ History
■ Personal Printing
■ Shift to Graphics
■ Fundamentals
■ Speed
■ Quality
■ Color
■ Print Engines
■ Impact Dot Matrix
■ Inkjets
■ Laser
■ Direct Thermal
■ Thermal Transfer
■ Dye Diffusion
■ Daisy Wheels and Tulips
■ Paper Handling
■ Sheet Feeders
■ Continuous Form Feeding
■ Consumables
■ Cartridges
■ Refilling
■ Paper
■ Printer Control
■ Character Mode
■ Line Printer Graphics
■ Postscript
■ PCL
■ Fonts
■ Storage and Retrieval
■ Font Cartridges
■ Downloadable Character Sets
■ Font Formats
■ Printer Sharing
■ Hardware Devices
■ Software Sharing
■ Plotters
■ Technologies
■ Flatbed Plotters
■ Drum Plotters
■ Output Quality
■ Color
■ Interfacing
■ Control Languages
■ Performance
■ Alternatives

Printers and Plotters

Printing is the art of moving ink from one place to another. Although that definition
likely will please only a college instructor lost in his own vagueness, any more precise
description fails in the face of a reality laced with printouts of a thousand fonts and far
fewer thoughts. A modern computer printer takes ink from a reservoir and deposits it on
paper or some other medium in patterns determined by your ideas and your PC. In other
words, a printer makes your thoughts visible.
Behind this worthy goal is one of the broadest arrays of technology in data processing,
including processes akin to hammers, squirt guns, and flashlights. The range of
performance is wider than with any other peripheral. Various printers operate at speeds
from lethargic to lightning-like, from slower than an arthritic typist with one hand tied
behind his back to faster than Speedy Gonzales having just munched tacos laced with
amphetamines. They are packaged as everything from one-pound totables to truss-stressing
monsters and look like anything from Neolithic bricks to Batman's nightmares.
Some dot paper with text quality that rivals that of a professional publisher and chart out
graphics with speed and sharpness that put a plotter to shame. Some make a
two-year-old's handiwork look elegant.
The classification of printers runs a similar, wide range. You can distinguish machines by
their quality, speed, technology, purpose, weight, color, or any other of their innumerable
(and properly pragmatic) design elements.
A definitive discourse on all aspects of printer technology would be a never-ending tale
because the field is constantly changing. New technologies often arise and old ones are
revived and refined. Innovations are incorporated into old machines. And, seemingly
obsolete ideas recur.

Printers

Obviously, the term "computer printer" is a general one that refers not to one kind of
machine but several. Even in looking at the mechanical aspects of the typical printer's job
of smudging paper with ink, you discover that many ways exist to put a computer's
output on paper, just as more than one method exists for getting your house cat to part
with its pelt.

History

The technology of printing followed the same pattern as display systems—the first
devices were character oriented, but bit mapped quickly evolved as the preferred choice.
This pattern should be hardly unexpected. The first PCs were character oriented. Only
when PC hardware assumed enough power and performance could individual bits be
manipulated fast and well enough to make bit mapping viable.
Printing is, in fact, far older than display technology; far older, indeed, than the PC,
computers, or even electronics. Printing began with stone tablets, clay, and styli and
slowly made its way to the papyrus patch and on to the scriptorium.
Printing became publishing with Johannes Gutenberg's 15th Century development of
movable type. The essence of Gutenberg's invention was to take woodblock printing that
made a page size image as a unit and break it into pieces. Each alphabetic letter became
its own woodblock. In effect, Gutenberg invented the character based technology that
served the first generation of PC printers so well.
The printing press is, of course, a machine for mass production while the computer
printer is designed for more personal production of hard copy. For four centuries after
Gutenberg, personal communications remained exactly where they had been for the
millennia before: quill on papyrus or paper.

Personal Printing

The first machine truly designed for personal communications was Christopher Sholes'
typewriter. Sholes' goal was not speed but clarity, adding the uniformity and legibility of
the printing press to individual business papers (the hard copy of the day). Only decades later
did the development of touch typing give the typewriter its speed lead over the quill.
The first generation of PC printers were direct descendants of Sholes' original office
typewriter. They use exactly the same technology to get ink onto paper—the force of
impact. Although an old-fashioned typewriter is a mechanical complexity (as anyone
knows who has tried putting one back together after taking it apart), its operating
principle is quite simple. Strip away all the cams, levers, and keys, and you see that the
essence of the typewriter is its hammers.
Each hammer strikes against an inked ribbon, which is then pressed against a sheet of
paper. The impact of the hammer against the ribbon shakes and squeezes ink onto the
paper. Absorbed into the paper fibers, the ink leaves a visible mark or image in the shape
of the part of the hammer that struck at the ribbon, typically a letter of the alphabet.
In the earliest days of personal computers—before typewriter makers were sure that PCs
would catch on and create a personal printer market—a number of companies adapted
typewriters to computer output chores. The Bytewriter of 1980 was typical of the
result— a slow, plodding computer printer with full typewriter keyboard. It could do
double duty as fast as your fingers could fly, but it was no match for the computer's
output.
One device, short-lived on the marketplace, even claimed that you could turn your
typewriter into a printer simply by setting a box on the keyboard. The box was filled with
dozens of solenoids and enough other mechanical parts to make the Space Shuttle look
simple. The solenoids worked as electronically controlled "fingers," pressing down each
key on command from the host computer. Interesting as it sounds, such devices trod the thin
line between the absurd and the surreal. More than a little doubt exists as to whether these
machines, widely advertised in 1981, were ever actually sold.

Shift to Graphics

All of these machines were, like Gutenberg's printing press, character oriented, exactly
suited to the needs of character oriented terminal based early hobbyist computers. The
1982 advent of graphics for the PC shattered character based technology to bits. That is,
instead of working with individual characters, PCs began the inexorable changeover to
building the characters and graphics they displayed on screen from bits stored in a
memory map. In order to make hard copy from the rough graphics early PCs showed on
their screens, printers too made the transition to bit image technology.
The result was the dot matrix printer, a machine that prints characters much in the
manner they are formed on a monitor screen. With a dot matrix printer, the raw material
for characters on paper is much the same as it is on the video screen—dots. A number of
dots can be arranged to resemble any character that you want to print. To make things
easier for the printer (and its designer), printers that form their characters from dots
usually array those dots in a rectilinear matrix like a crossword puzzle grid.
This kind of printer gets its name because it places each of its dots within a character
matrix. Although the dot matrix as a description clearly applies to any printer that forms
text on the fly from individual dots rather than a preset pattern, the term has come to
mean one specific printer technology, the impact dot matrix printer. A more general term
for a printer that uses this technology is bit image printer, but that term has fallen into
disuse because almost all modern printers use the technology and there's no sense in
belaboring the obvious.
The impact dot matrix printer reflected the movement of personal printing from the
mechanical to the information age. It substituted electronic control of each dot for the
mechanical complexity of managing several dozen character patterns on individual
hammers.
Today, the impact dot matrix printer survives at the bottom end of the printer range and
in specialized applications, the last heir of the hammer and ribbon technology of the
typewriter. Its days are clearly numbered as newer technologies pound impact printing
into oblivion. Modern inkjet printers match the low dot matrix price but deliver sharper
text and graphics, better color, and less noise. Laser printers (and related technologies)
outrun dot matrix printers and improve further on the crispness of text and graphics.
Several other printer technologies play more specialized roles. Near silent thermal
printers are the low price leaders. Thermal wax transfer printers brighten graphics with
the richest, most saturated colors. Dye diffusion printers extend the hard copy color
spectrum to the widest printable range.
The native mode of most bit image printers allows you to decide where to place
individual dots on the printed sheet using a technique called all points addressable
graphics or APA graphics. With a knowledge of the appropriate printer instructions, you
or your software can draw graphs in great detail or even make pictures resembling the
halftone photographs printed in newspapers. The software built into the printer allows
every printable dot position to be controlled—specified as printed (black) or not (white).
An entire image can be built up like a television picture, scanning lines several dots wide
(as wide as the number of wires in the print head) down the paper.
This graphics printing technique takes other names, too. Because each individual printed
dot can be assigned a particular location or "address" on the paper, this feature is often
called dot addressable graphics. Sometimes, that title is simplified into dot graphics.
Occasionally, it appears as bit image graphics because each dot is effectively the image
of one bit of data. When someone uses the term "graphics" without adorning it with one
of these descriptive adjectives or phrases, it means the same thing—again, all printer
graphics today are all points addressable.

Fundamentals

In judging and comparing printers, you have several factors to consider. These include
printing speed, on-paper quality, color capabilities, the print engine, media handling, and
the cost of various consumables. Although many of these issues seem obvious, even
trivial, current printer technologies—and hype—add some strange twists. For example,
speed ratings may be given in units that can't quite be compared. Quality is more than a
matter of dots per inch. And color can take on an added dimension—black.

Speed

No one wants to wait. When you finish creating a report or editing a picture, you want
your hard copy immediately—you've finished; so should your computer. The ideal
printer would be one that produces its work as soon as you give the "print command," all
fifty thousand pages of your monthly report in one big belch.
No printer yet has achieved instantaneous operation. In fact, practical machines span a
wide range of printing speeds.
All modern operating systems include print spooling software that lets your programs
print as quickly as possible, then sends the data to your printer at whatever speed it
operates, however slowly. You can print a memo or encyclopedia while you use your PC
for something else. The speed of your printer becomes an issue only when you need
something in a hurry—your publisher's hot breath is pouring down your back for your
thousand page novel while your printer rolls out pages with all the speed and abandon of
a medieval scribe. If you have a tight deadline, if you have a lot of printing to do, or if
you share your printer with others and you all have a lot of work to do, printer speed
becomes an important issue. The speed of your printer—or at least how that speed is
measured—depends on the kind of printer you have.
Engineers divide computer printers into two basic types: line printers and page printers.
A line printer, as its name implies, works on text one line at a time. It usually has a print
head that scans across the paper one line of characters at a time, and it starts to print each
line as soon as it receives all the character data to appear in that line. A page printer
rasterizes a full page image in its own internal memory
and prints it one line of dots at a time. It must receive all the data for a full page before it
begins to print the page.
Line printers and page printers produce equivalent results on paper. For engineers the
chief difference is what gets held in the printer's memory. For you, the chief difference is
how the engineers specify the speed of the two kinds of printer.

Measuring Units

The two most common measurements of printer speed are characters per second and
pages per minute. Both should be straightforward measures. The first represents the
number of characters a printer can peck out every second. The second, the number of
completed pages that roll into the output tray every minute. In printer specification
sheets, however, both are theoretical measures that may have little bearing on how fast a
given job gets printed.
Line printer speed is usually measured in characters per second. Most printer
manufacturers derive this figure theoretically. They take the time the print head requires
to move from the left side of a page to the right side and divide it into the number of
characters that might print on the line. This speed is consequently dependent on the width
of the characters, and manufacturers often choose the most favorable value.
The highest speed does not always result from using the narrowest characters, however.
The rate at which the print head can spray dots of ink (or hammer print wires) is often
fixed, so to print narrower characters the printer slows its print head.
Characters per second does not directly translate into pages per minute, but you can
determine a rough correspondence. On a standard sheet of paper, most line printers
render 80 characters per line and 60 lines per page, a total of 4800 characters per page.
Because there are 60 seconds in every minute, each page per minute of speed translates
into 80 characters per second. Or you can divide the number of characters per second by
80 to get an approximate page per minute rating.
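The arithmetic above is easy to mechanize. This sketch simply encodes the 80-character line and 60-line page assumed in the text; real documents rarely fill pages this densely, so treat the results as upper bounds:

```python
CHARS_PER_LINE = 80                               # typical full line on a standard sheet
LINES_PER_PAGE = 60                               # typical lines per page
CHARS_PER_PAGE = CHARS_PER_LINE * LINES_PER_PAGE  # 4800 characters per page

def cps_to_ppm(chars_per_second):
    """Approximate pages per minute from a characters-per-second rating."""
    return chars_per_second * 60 / CHARS_PER_PAGE

def ppm_to_cps(pages_per_minute):
    """Approximate characters per second from a pages-per-minute rating."""
    return pages_per_minute * CHARS_PER_PAGE / 60
```

So a 240 characters-per-second line printer corresponds very roughly to three pages per minute, and each page per minute corresponds to 80 characters per second.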
This conversion can never be exact, particularly on real world printing chores. Few
documents you print will fill every line from left to right with dense text. A line printer
uses time only to work on lines that are actually printed. Modern printers have sufficient
built-in intelligence to recognize areas that will appear blank on the printed sheet and
don't bother moving their print heads over the empty spaces. Page printers, on the other
hand, must scan an entire page, even if it only has a single line of text on it. A line
printer dispenses with a single-line page in a few seconds.

Engine Speed Versus Throughput

Even within a given family of printers, ratings do not reflect real world performance. The
characters per second or pages per minute rating usually given for a printer does not
indicate the rate at which you can expect printed sheets to dribble into the output tray.
These speed measurements indicate the engine speed, the absolute fastest the mechanism
of the printer allows paper to flow through its path. A number of outside factors slow the
actual throughput of a printer to a rate lower—often substantially so—than its engine
speed.
With line printers, the speed ratings can come close to actual throughput. The major
slowdowns for line printers occur only when lines are short and the print head changes
direction often, and when the print head travels long distances down the sheet without
printing.
With page printers, the difference between theory and reality can be dramatic. Because of
their high resolutions, page printers require huge amounts of data to make bit image
graphics. The transfer time alone for this information can be substantial. Page printers
suffer this penalty most severely when your PC rasterizes the image and sends the entire
page as a bit image to the printer. On the other hand, if the printer rasterizes the image,
the processing time for rasterization adds to the print time. In either case, it is rare indeed
for pages to be prepared as quickly as the engine can print them when graphics are
involved. Instead of pages per minute, throughput may shift to minutes per page.

Modes

Some printers operate in a number of different modes that trade off print quality for
speed. These modes vary with the technology used by the printer.
Three of the most common modes for impact dot matrix printers are draft, near letter
quality, and letter quality. Draft mode delivers the highest speed and the lowest quality.
To achieve the highest possible speed, in draft mode the printers move their print heads
faster than they can fire to print at each dot position on the paper. Typically, in draft
mode these printers can blacken only every other dot. The thinly laid dots give text and
graphics a characteristic gray look in draft mode. Near letter quality mode slows the print
head so that text characters can be rendered without the machine-gun, separated-dots look
of draft mode. Because the dot density is higher, characters appear fully black and are
easier to read. Letter quality mode slows printing further, often using two or more passes
to give as much detail as possible to each individual character. The printer concentrates
on detail work, adding serifs and variable line weights to characters to make them look
more like commercially printed text.

Inkjet printers aren't bothered by the mechanical limitations of impact printers, so they
need not worry so much about dot density at higher speeds. Nevertheless, the time required to
form each jet of ink they print constrains the speed of the print head. Most inkjet printers
operate at the maximum speed the jet forming process allows all the time. However, your
choice of printing mode still affects output speed. Only the modes are
different—typically you choose between black and white and color, and the speed
difference between them can be substantial. Most color inkjet designs have fewer nozzles
for color ink than they do for black ink. Typically, each of the three primary colors will
have one-third the number of nozzles as black. As a result, the print head must make
three times as many passes to render color in the same detail as black, so color mode is
often one-third the speed of black and white printing.
The speed relationship between color and black and white printing varies widely,
however. In comparing the speeds of two printers, you must be careful to compare the
same mode. The most relevant mode is the one you're likely to use most. If you do
mostly text, then black and white speed should be the most important measure to you. If
you plan to do extensive color printing, compare color speeds.
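As a rough model of the pass-count penalty described above (the nozzle counts below are made-up examples, and real designs vary widely in how nozzle ratios translate to speed):

```python
def color_page_seconds(black_page_seconds, black_nozzles, nozzles_per_color_ink):
    """Estimate color page time from the nozzle ratio: with fewer
    nozzles per color ink, the print head needs proportionally more
    passes to lay down the same detail."""
    extra_passes = black_nozzles / nozzles_per_color_ink
    return black_page_seconds * extra_passes

# A head with 48 black nozzles but 16 per color ink needs three times
# the passes, so a 20-second black page becomes roughly a 60-second
# color page.
```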
Although many bit image printers don't allow you to directly alter their resolution, you
can accelerate printing by making judicious choice through software. A lower resolution
requires less time for rendering the individual dots, so in graphics mode choosing a lower
resolution can dramatically accelerate print speed. Windows allows you to choose the
resolution at which your printer operates as part of the Graphics tab in its Printer
Properties menu, as shown in Figure 20.1.
Figure 20.1 Selecting printer resolution under Windows 95.

Selecting a lower resolution can dramatically lower the time required to print a page
because it reduces rendering time. At low resolutions, graphics printing speed can
approach engine speed. The downside is, of course, you might not like the rough look of
what you print.

Quality

The look of what you get on paper isn't completely in your control. By selecting the
resolution your printer uses, you can raise speed and lower quality. But every printer
faces a limit to the maximum quality it can produce. This limit is enforced by the design
of the printer and its mechanical construction. The cause and measurement of these
constraints vary with the printer technology, whether your machine is a line printer or
page printer.

Line Printers

Line printers have two distinct operating modes. They can accept text as ASCII
characters, select their bit patterns from an internal character generator, and print them as
a single sequence on each line, one line at a time. Alternately, in their graphics modes,
they accept bit image data from your PC and simply render the bit patterns chosen by
your PC on paper. Those bit patterns can include both text characters and graphics.
Although the printer renders one line at a time in graphics mode, it has no idea what it is
printing on each line or how sequential lines relate to one another, even whether tall
characters span two or more lines. Your PC keeps track of all of that.
The issues involved in determining the on-paper quality of the printer depend on the
operating mode. In text mode, the quality of the characters printed by any line printer is
determined by three chief factors—the number of dots in the matrix that makes up each
individual character, the size of the dots in the matrix, and the addressability of the
printer. The denser the matrix (the more dots in a given area) and the smaller the dots, the
better the characters look. Higher addressability allows the printer to place dots on paper
with greater precision.
The minimal character matrix of any printer measures 5 x 7 (horizontal by vertical) dots
and is just sufficient to render all the upper and lower case letters of the alphabet
unambiguously—and not aesthetically. The dots are big and they look disjointed. Worse,
the minimal matrix is too small to let descending characters ("g," "j," "p," "q" and "y")
droop below the general line of type and makes them look cramped and scrunched up.
Rarely do you encounter this minimal level of quality today except in the cheapest,
closeout printers and machines designed solely for high speed printing of drafts.
The minimum matrix used by most commercial impact dot matrix printers measures 9 x
9 dots, a readable arrangement but still somewhat inelegant in a world accustomed to
printed text. Newer 18- and 24-pin impact dot matrix printers can form characters with
12 by 24 to 24 by 24 matrices. Inkjet printers may form characters in text mode from
matrices measuring as large as 72 by 120 dots.
In the shift from character mode to graphics mode, issues of the character matrix
disappear. The chief determinants of quality become addressability and resolution.
As with computer displays, the resolution and addressability of any kind of printer often
are confused. Resolution indicates the reality of what you see on paper; addressability
indicates the more abstract notion of dot placement. When resolution is mentioned,
particularly with impact dot matrix printers, most of the time addressability is intended.
A printer may be able to address any position on the paper with an accuracy of, say,
1/120 inch. If an impact print wire is larger than 1/120 inch in diameter, however, the
machine never is able to render detail as small as 1/120 inch. Inkjet printer mechanisms
do a good job of matching addressability and resolution, but those efforts easily get
undone when you use the wrong printing medium. If inkjet ink gets absorbed into paper
fibers, it spreads out and obscures the excellent resolution many of these machines can
produce.
Getting addressability to approach resolution is a challenge for the designer of the impact
dot matrix printer (and one of the many reasons this technology has fallen from favor).
The big dots made by the wide print wires blur out the detail. Better quality impact dot
matrix printers have more print wires, and they are smaller. Also, the ribbon that is
inserted between the wires and paper blurs each dot hammered out by an impact dot
matrix printer. Mechanical limits also constrain the on-paper resolution of impact
machines.

Impact dot matrix printers use a variety of tricks to improve their often marginal print
quality. Often, even bi-directional printers slow down to single direction operation when
quality counts. To increase dot density, they retrace each line two or more times, shifting
the paper vertically by half the width of a dot between passes to fill in the space between
dots. Uni-directional operation helps ensure accurate placement of each dot in each pass.

Page Printers

With non-impact bit image printers, resolution and addressability usually are the same,
although some use techniques to improve apparent resolution without altering the number
of dots they put in a given area.
Resolution Enhancement Technology or ReT improves the apparent quality of on-paper
printing within the limits of resolution—it can make printing look sharper than would
ordinarily be possible. The enhancement technology, introduced by Hewlett-Packard in
March 1990 with its LaserJet III line of printers, works by altering the size of toner dots
at the edges of characters and diagonal lines to reduce the jagged steps inherent in any
matrix bit image printing technique. Using ReT, the actual on-paper resolution remains at
the rated value of the print engine—for example 300 or 600 dpi—but the optimized dot
size makes the printing appear sharper.
Increasing resolution is more than a matter of refining the design of print engine
mechanics. The printer's electronics must be adapted to match, including the addition of
substantially more memory. Memory requirements increase as the square of the linear
dot density. Doubling the number of dots per inch quadruples memory needs.
At high resolutions, the memory needs for rasterizing the image can become
prodigious—about 14MB for a 1200 dpi image. Table 20.1 lists the raster needs for
common monochrome printer resolutions.

Table 20.1. Raster Memory Size for Monochrome Printer Resolutions

Resolution      Dots            Bytes

75 dpi          450,000         56,250
150 dpi         1,800,000       225,000
300 dpi         7,200,000       900,000
360 dpi         10,368,000      1,296,000
600 dpi         28,800,000      3,600,000
720 dpi         41,472,000      5,184,000
1200 dpi        115,200,000     14,400,000
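The table's figures can be reproduced with simple arithmetic: one bit of memory per dot, eight dots per byte. The sketch below assumes an 8 x 10 inch printable area, a figure inferred from the table's numbers rather than stated in the text.

```python
# Raster memory for a monochrome page image: one bit per dot.
# The 8 x 10 inch printable area is an assumption inferred from
# the figures in Table 20.1, not stated in the text.
PRINTABLE_AREA_SQ_IN = 8 * 10

def raster_bytes(dpi):
    dots = dpi * dpi * PRINTABLE_AREA_SQ_IN   # total dots on the page
    return dots, dots // 8                     # one byte holds eight dots

for dpi in (75, 300, 1200):
    dots, nbytes = raster_bytes(dpi)
    print(f"{dpi} dpi: {dots:,} dots, {nbytes:,} bytes")
```

Doubling the dpi quadruples both counts, which is why the jump from 300 to 1200 dpi multiplies the memory requirement sixteen-fold.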

Adding color, of course, increases the memory requirements. Fortunately, the color bit
depth used by common printer technologies doesn't impose the same extreme demands as
monitors. A printer has only a few colors corresponding to the hues of its inks and,
except for continuous tone technologies such as dye diffusion, the range of each color
usually is limited to on or off. Thankfully, color resolutions are generally substantially
lower than monochrome, defined by the size of the color super-pixels rather than
individual dots. In any case, the raster memory requirements of a color printer are
substantially higher than monochrome.
Note that when printing text, page printers may operate in a character mapped mode, so
memory usage is not as great. Even with minimal memory, a printer can store a full page
image in ASCII or a similar code, one byte per letter, as well as the definitions for the
characters of several fonts. In this mode, the printer generates the individual dots of each
character as the page is scanned.
Moving to higher resolutions makes other demands on a printer as well. For example, in
laser printers finer resolutions require improved toner formulations because, at high
resolutions the size of toner particles limits sharpness much as the size of print wires
limits impact dot matrix resolution. With higher resolution laser printers, it becomes
increasingly important to get the right toner, particularly if you have toner cartridges
refilled. The wrong toner limits resolution just as a fuzzy ribbon limits the quality of
impact printer output.

Color

When it comes to color, printers start with the primaries: inks corresponding to the
three primary colors—red, yellow, and blue. If you want anything
beyond those, the printer must find some way of mixing them together. This mixing can
be physical or optical.
The physical mixing of colors requires that two or more colors of ink actually mix
together while they are wet. Printer inks are, however, designed to dry rapidly so the
colors to be mixed must be applied simultaneously or in quick succession. Few printers
rely on the physical mixing of inks to increase the number of colors they produce.
Optical mixing takes place in either of two ways. One color of ink can be applied over
another (that has already dried), or the colors can be applied adjacent to one another.
Applying multiple layers of color requires that the inks be to some degree transparent, as a
truly opaque ink would obscure the first color to be applied. Most modern printer inks are
transparent, which allows them to be used on transparencies for overhead projection as
well as paper. The exact hue of a transparent ink is, of course, dependent on the color of
the medium it is applied to.
Optical mixing also takes place when dots of two or more colors are intermixed. If the
dots are so close together that the eye cannot individually resolve each one, their colors
blend together on the retina, merging the individual hues. Most PC color
printers take advantage of this kind of optical mixing by dithering.

Three Versus Four Color Printers

Color primaries in printing aren't as simple as the familiar threesome. To achieve better
color reproduction, printers use a skewed set of primary colors—magenta instead of red,
cyan instead of blue, and plain old ordinary yellow. Even this mix is so far from perfect
that when all are combined they yield something that's often far from black.
Consequently, better printers include black in their primary colors.
Black, in fact, may play two roles in a color printer. Many inkjet printers allow you to
choose between black-and-white and color operation simply by swapping ink
cartridges. In these machines, black is treated as a separate hue that cannot be mixed in
blends with the three color primaries. These three-color printers render colors only from
the three primaries even though some machines can print pure black when using a
suitable black-only ink cartridge. The approximation of black made from the three
primaries is termed composite black, and it often has an off-color cast. Four-color printers
put black on the same footing as the three primary hues and mix all four together.
This four-color printing technique gives superior blacks, purer grays, and greater depth to
all darker shades.
To further increase the range of pure colors possible with a printer, manufacturers are
adding more colors of ink. For example, some new Hewlett-Packard inkjet printers such
as the DeskJet 693C offer the option of replacing the black ink cartridge with a second
three-color ink cartridge, bringing the total number of primary hues to six. This greater
range in primaries translates into more realistic reproduction of photographs with less
need for other color enhancing techniques, such as dithering.

Dithering

Color televisions do an excellent job with their three primaries and paint a nearly infinite
spectrum. But the television tube has a luxury most printers lack. The television can
modulate its electron beam and change its intensity. Most printers are stuck with a single
intensity for each color. As a result, the basic range of most printers is four pure colors,
or seven when using mixtures: blending magenta and cyan to make violet, magenta and
yellow for orange, and cyan and yellow for green. Count the background color of the
paper being printed upon, and the basic range of most color printers is eight hues.
Commercial color printing faces the same problem of trying to render a wide spectrum
from four primary colors. To extend the range of printing presses, graphic artists make
color halftones. They break an image into dots photographically, using a screen. Using
special photographic techniques (or, more often today, a computer) they can vary the size
of the dot with the intensity of the color.
Most computer printers cannot vary the size of their dots. To achieve a halftone effect,
they use dithering. In dithering, colors beyond the range of pure hues of which a printer
is capable are rendered in patterns of primary colored dots. Instead of each printed dot
representing a single pixel of an image, dithering uses a small array of dots to make a
single pixel. These multiple dot pixels are termed super-pixels. By varying the number of
dots that actually get printed with a given color of ink in the super-pixel matrix, the
printer can vary the perceived intensity of the color.
The problem with dithering is that it degrades the perceived resolution of the color
image. The resolution is limited by the size of the super-pixels rather than the individual
dots. For example, to approximate true color (eight bits per primary), a printer might use
super-pixels measuring eight by eight dots, which yields 65 intensity levels per primary.
The resolution falls by an equivalent factor: a printer with 600 dpi resolution yields a
color image with 75 dpi resolution.
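The super-pixel idea can be sketched in a few lines of code. The 4x4 Bayer threshold matrix below is a standard ordered dithering pattern, used here for brevity in place of the larger super-pixels discussed above; it is an illustration of the technique, not any particular printer's algorithm.

```python
# Ordered dithering with a 4x4 Bayer threshold matrix. Each image
# pixel becomes a 4x4 super-pixel of on/off printer dots.
BAYER_4 = [
    [ 0,  8,  2, 10],
    [12,  4, 14,  6],
    [ 3, 11,  1,  9],
    [15,  7, 13,  5],
]

def super_pixel(intensity):
    """Render one pixel (intensity 0-16) as a 4x4 pattern of printed dots."""
    return [[1 if intensity > BAYER_4[r][c] else 0 for c in range(4)]
            for r in range(4)]

# A 4x4 super-pixel gives 17 intensity levels but cuts resolution by 4:
# a 600 dpi engine renders these super-pixels at only 150 dpi.
pattern = super_pixel(8)
print(sum(map(sum, pattern)))  # dots printed: 8 of 16
```

The Bayer pattern scatters the printed dots as evenly as possible across the super-pixel, which is what keeps midtones from clumping into visible texture.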

Drivers

Getting good color with dithering is more art than science. The choice of dithering
pattern determines how smoothly colors can be rendered. A bad choice of dithering
pattern often results in a moire pattern overlaid on your printed images or wide gaps
between super-pixels. Moreover, colors don't mix the same onscreen and on paper. The
two media often use entirely different color spaces (RGB for your monitor, CMYK for
your printer), requiring a translation step between them. Inks only aspire to be pure
colors. The primary colors may land far from the mark, and colors blended from them
may be strange indeed.
Your printer driver can adjust for all of these issues. How well the programmer charged
with writing the driver does his job is the final determinant in the color quality your
printer produces. A good driver can create photo quality images from an inkjet printer
while a bad driver can make deplorable pictures even when using the same underlying
print engine. Unfortunately, the quality of a printer's driver isn't quantified on the
specifications sheet. You can only judge it by looking at the output. For highest quality,
however, you'll always want driver software written for your particular model of printer,
not one that your printer emulates. Moreover, you'll want to get the latest driver. You
may want to periodically cruise the website of your printer maker to catch driver updates
as they come out.
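The translation step between color spaces can be illustrated with the textbook RGB-to-CMYK formula below. Real drivers use calibrated color profiles tuned to their particular inks; this naive version only shows the principle, including how the gray component shared by all three inks is pulled out and printed as black.

```python
# Naive RGB-to-CMYK conversion (the textbook formula, not any
# real driver's calibrated transform). All channels range 0.0-1.0.
def rgb_to_cmyk(r, g, b):
    c, m, y = 1 - r, 1 - g, 1 - b
    k = min(c, m, y)                 # black replaces the gray component
    if k == 1.0:
        return 0.0, 0.0, 0.0, 1.0    # pure black: no colored ink at all
    # rescale the remaining color against the ink still to be laid down
    return (c - k) / (1 - k), (m - k) / (1 - k), (y - k) / (1 - k), k

print(rgb_to_cmyk(1.0, 0.0, 0.0))   # red -> (0.0, 1.0, 1.0, 0.0)
```

Note how black on the monitor maps to pure black ink rather than a composite of all three colors, which is exactly the advantage of four-color printing described earlier.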

Print Engines

The actual mechanism that forms an image on paper is called the print engine. Each uses
a somewhat different physical principle to put ink on paper. Although each technology
has its strengths, weaknesses, and idiosyncrasies, you might not be able to tell the
difference between the pages they print. Careful attention to detail has pushed quality up
to a level where the paper rather than the printer is the chief limit on resolution, and color
comes close to photographic, falling short only in the depth that a thick gelatin
coating makes possible. In making those images, however, the various print engine
technologies work differently at different speeds, at different noise levels, and with
different requirements. These differences can make one style of print engine a better
choice for your particular application than others.

Impact Dot Matrix

The modern minimal printer uses an impact dot matrix print engine. The heart of the
machine is a mechanical print head that shuttles back and forth across the width of the
paper. A number of thin print wires act as the hammers that squeeze ink from a fabric or
Mylar ribbon to paper.
In most impact dot matrix printers, a seemingly complex but efficient mechanism
controls each of the print wires. The print wire normally is held away from the ribbon
and paper, and against the force of a spring, by a strong permanent magnet. The magnet
is wrapped with a coil of wire that forms an electromagnet, wound so that its polarity is
the opposite of that of the permanent magnet. To fire the print wire against the ribbon and
paper, this electromagnet is energized (under computer control, of course), and its field
neutralizes that of the permanent magnet. Without the force of the permanent magnet
holding the print wire back, the spring forcefully jabs the print wire out against the
ribbon, squeezing ink onto the paper. After the print wire makes its dot, the
electromagnet is de-energized and the permanent magnet pulls the print wire back to its
idle position, ready to fire again. Figure 20.2 shows a conceptual view of the mechanism
associated with one print head wire.
Figure 20.2 Conceptual view of impact dot matrix print head mechanism.

The two-magnets-and-spring approach is designed with one primary purpose—to hold
the print wire away from the paper (and out of harm's way) when no power is supplied to
the printer and the print head. The complexity is justified by the protection it affords the
delicate print wires.
The print head of a dot matrix printer is made from a number of these print wire
mechanisms. Most first generation personal computer printers and many current
machines use nine wires arrayed in a vertical column. To produce high quality, the
second generation of these machines increased the number of print wires to 18 or 24.
These often are arranged in parallel rows with the print wires vertically staggered,
although some machines use different arrangements. Because the larger number of print
wires fit into the same space (and print at the same character height), they can pack more
detail into what they print. Because they are often finer than the print wires of lesser
endowed machines, the multitude of print wires also promises higher resolution.
No matter the number of print wires, the print head moves horizontally as a unit across
the paper to print a line of characters or graphics. Each wire fires as necessary to form the
individual characters or the appropriate dots for the graphic image. The impact of each
wire is precisely timed so that it falls on exactly the right position in the matrix. The
wires fire on the fly—the print head never pauses until it reaches the other side of the
paper.
A major factor in determining the printing speed of a dot matrix machine is the time
required between successive strikes of each print wire. Physical laws of motion limit the
acceleration that each print wire can achieve in ramming toward the paper and back.
Thus, the time needed to retract and re-actuate each print wire puts a physical limit on
how rapidly the print head can travel across the paper. It cannot sweep past the next dot
position before each of the print wires inside it is ready to fire. If the print head travels
too fast, dot positioning (and character shapes) would become rather haphazard.
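The relationship between wire recycle time and carriage speed is simple arithmetic. The figures below are hypothetical, chosen only to illustrate the constraint; the text does not specify any particular printer's timing.

```python
# How print wire recycle time caps carriage speed in an impact
# dot matrix printer. The 120 dpi addressability and 0.7 ms
# recycle time are hypothetical illustration values.
def max_head_speed(dots_per_inch, wire_cycle_s):
    fire_rate = 1.0 / wire_cycle_s          # maximum strikes per second
    return fire_rate / dots_per_inch         # inches per second

speed = max_head_speed(120, 0.0007)          # 120 dpi, 0.7 ms recycle time
print(f"{speed:.1f} in/s, about {speed * 10:.0f} cps at 10 characters/inch")
```

Halving the recycle time doubles the permissible head speed, which is why faster wire mechanisms translate directly into higher character-per-second ratings.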
To speed operation, some impact dot matrix machines print bi-directionally, rattling out
one row from left to right, then the next row right to left. This mode of operation saves
the time that would ordinarily be wasted when the carriage returns to the left side of the
page to start the next line. Of course, the printer must have sufficient memory to store a
full line of text so that it can be read backwards.
Adding color to an impact dot matrix printer is relatively straightforward. The color the
impact printer actually prints is governed by the ink in or on its ribbon. Although some
manufacturers build color impact printers using multiple ribbons, the most successful (and
least expensive) designs use special multi-colored ribbons lined with three or four
bands corresponding to the primary colors. To change colors, the printer shifts the ribbon
vertically so a differently hued band lies in front of the print wires. Most of the time the
printer will render a row in one color, shift ribbon colors, then go across the same row in
a different color. The extra mechanism required is simple and inexpensive, adding as
little as 50 dollars to the price. (Of course, the color ribbon costs more and does not last as long
as its monochrome equivalent.)
Although the ribbons used by most of these color printers are soaked with three or four
colors of ink, they can achieve seven colors on paper by combining color pairs. For
example, laying a layer of blue over a layer of yellow results in an approximation of
green.
As with their typewriter progenitors, all impact dot matrix printers have a number of
desirable qualities. Owing to their heritage of more than a century of engineering
refinement, they represent a mature technology. Their designs and functions are
relatively straightforward and familiar.
Most impact printers can spread their output across any medium that ink has an affinity
for, including any paper you might have lying around your home, from onion skin to thin
cardstock. While both impact and non-impact technologies have been developed to the
point that either can produce high quality or high speed output, impact technology takes
the lead when you share one of the most common business needs, making multi-part
forms. Impact printers can hammer an impression not just through a ribbon, but through
several sheets of paper as well. Slide a carbon between the sheets or, better yet, treat the
paper for non-carbon duplicates, and you get multiple, guaranteed identical copies with a
single pass through the mechanism. For a number of business applications—for example
the generation of charge receipts—exact carbon copies are a necessity, and impact
printing is an absolute requirement.
Impact printers reveal their typewriter heritage in another way. The hammer bashing
against the ribbon and paper makes noise, a sharp staccato rattle that is high in amplitude
and rich in high frequency components, as penetrating and bothersome as a dental drill or
an angry horde of giant, hungry mosquitoes. Typically, the impact printer rattles and prattles
louder than most normal conversational tones, and it is more obnoxious than an
argument. The higher the speed of the impact printer, the higher the pitch of the noise
and the more penetrating it becomes.


Some printer makers have toned down their boisterous scribes admirably—some printers
as fast as 780 characters per second are as quiet as 55 dB, about the level of a quiet PC
fan. But you still want to leave the room when an inexpensive impact printer (the best
selling of all printers) grinds through its assignment.

Inkjets

Today's most popular personal printers use inkjet print engines. The odd name "inkjet"
actually describes the printing technology. If it conjures up images of the Nautilus and
giant squid or a B-52 spraying out blue fluid instead of a fluffy white contrail, your mind
is on the right track. Inkjet printers are electronic squids that squirt out ink like miniature
jet engines fueled in full color. While this technology sounds unlikely—a printer
sprays droplets of ink onto paper—it works well enough to deliver image sharpness on a
par with most other output technologies.
In essence, the inkjet printer is a line printer, little more than a dot matrix printer with the
hammer impact removed. Instead of a hammer pounding ink onto paper, the inkjet flings
it into place from tiny nozzles, each one corresponding to a print wire of the impact dot
matrix printer. The motive force can be an electromagnet or, as is more likely today, a
piezoelectric crystal (a thin crystal that bends when electricity is applied across it). A
sharp, digital pulse of electricity causes the crystal to twitch and force ink through the
nozzle into its flight to paper. Three types of inkjet engines are commonplace: thermal,
piezoelectric, and phase-change.
At heart, the basic technology of all three kinds of inkjets is the same. The machines rely
on the combination of the small orifice in the nozzle and the surface tension of liquid ink
to prevent a constant dribble from the jets. Instead of oozing out, the ink puckers around
the hole in the inkjet the same way that droplets of water bead up on a waxy surface. The
tiny ink droplets scrunch together rather than spreading out or flowing from the nozzle
because the attraction of the molecules in the ink (or water) is stronger than the force of
gravity. The inkjet engine needs to apply some force to break the surface tension and
force the ink out, and that's where the differences in inkjet technologies arise.

Thermal Inkjets

The most common inkjet technology is called thermal because it uses heat inside its print
head to boil a tiny quantity of water based ink. Boiling produces tiny bubbles of steam
that can balloon out from the nozzle orifices of the print head. The thermal mechanism
carefully controls the bubble formation. It can hold the temperature in the nozzle at just
the right point to keep the ink bubble from bursting. Then, when it needs to make a dot
on the paper, the print head warms the nozzle, the bubble bursts, and the ink sprays from
the nozzle to the paper to make a dot. Because the bubbles are so tiny, little heat or time
is required to make and burst the bubbles; the print head can do it hundreds of times in a
second.
This obscure process was discovered by a research specialist at Canon way back in 1977,
but developing it into a practical printer took about seven years. The first mass marketed
PC inkjet printer was the Hewlett-Packard ThinkJet, introduced in May 1984, that used
the thermal inkjet process (which HP traces back to a 1979 discovery by HP researcher
John Vaught). This single-color printer delivered 96 dot per inch resolution at a speed of
150 characters per second, about on a par with the impact dot matrix printers available at
the same time. The technology—not to mention speed and resolution—has improved
substantially since then. The proprietary name BubbleJet used by Canon for its inkjet
printer derives from this technology, although thermal bubble design is also used in
printers manufactured by DEC, Hewlett-Packard, Lexmark, and Texas Instruments.
The heat that makes the bubbles is the primary disadvantage of the thermal inkjet system.
It slowly wears out the print head, requiring that you periodically replace it to keep the
printer working at its best. Some manufacturers minimize this problem by combining
their printers' nozzles with their ink cartridges so that when you add more ink you
automatically replace the nozzles. With this design you never have to replace the nozzles,
at least independently, because you do it every time you add more ink.
Because nozzles ordinarily last much longer than the ink supply in any reasonable
reservoir, other manufacturers make the nozzles a separately replaceable part. The
principal difference between these two systems amounts to nothing more than how you
do the maintenance. Although the combined nozzles and ink approach would seem to be
more expensive, the difference in the ultimate cost of using either system is negligible.

Piezo Inkjets

The alternative inkjet design uses the squirt gun approach—mechanical pressure to
squeeze the ink from the print head nozzles. Instead of a plunger pump, however, these
printers usually use special nozzles that squash down and squeeze out the ink. These
nozzles are made from a piezoelectric crystal, a material that bends when a voltage is
applied across it. When the printer zaps the piezoelectric nozzle with a voltage jolt, the
entire nozzle flexes inward, squeezing the ink from inside and out the nozzle, spraying it
out to the paper. This piezoelectric nozzle mechanism is used primarily by Epson in its
Stylus line of inkjet printers, except for the Stylus 300.
The chief benefit of this design, according to Epson, is a longer-lived print head. The
company also claims it yields cleaner dots on paper. Bursting bubbles may make halos of
ink splatter while the liquid droplets from the piezo printers form more solid dots.

Phase-Change Inkjets

The third twist on inkjet technology concentrates on the ink more than its motion. Instead
of using solvent based inks that are fixed (that is, that dry) by evaporation or absorption
into the print medium, they use inks that harden, changing phase from liquid to solid.
Because of this phase-change, this form of inkjet is often called a phase-change inkjet.
The ink starts as solid sticks or chunks of specially dyed wax. The print head melts the
ink into a thin liquid that is retained in a reservoir inside the print head. The nozzles
mechanically force out the liquid and spray it onto the paper or other printing medium.
The tiny droplets, no longer heated, rapidly cool on the medium, returning to their solid
state. Because of the use of solid ink, this kind of printer is sometimes called a solid
inkjet printer.
The first printer to use phase-change technology was the Howtek Pixelmaster in the late
1980s. Marketed mostly as a specialty machine, the Howtek made little impression in the
industry. Phase-change technology received its major push from Tektronix with its
introduction of the Phaser III PXi in 1991. Tektronix refined phase-change technology to
achieve smoother images and operation. Where the Pixelmaster used plastic-based inks
that left little lumps on paper and sometimes clogged the print head, the Phaser III used
wax-based inks and a final processing step, a cold fuser, which flattened the cold ink
droplets with a steel roller as the paper rolled out of the printer.
No matter the technology, all inkjet printers are able to make sharper images than impact
dot matrix technology because they do not use ribbons, which would blur their images.
The on-paper quality of an inkjet can equal and often better that of more expensive laser
printers (see the "Laser" section that follows). Even inexpensive models claim resolution
as high as or higher than laser printers, say about 720 dots per inch.
Another advantage of the inkjet is color. Adding color is another simple elaboration.
Most color impact printers race their print heads across each line several times, shifting
between different ribbon colors on each pass, for example printing a yellow row, then
magenta, then cyan, and finally black. Inkjet printers typically handle three or four colors
in a single pass of the print head, although the height of colored columns often is shorter.
The liquid ink of inkjet printers can be a virtue when it comes to color. The inks remain
fluid enough, even after they have been sprayed on paper, to physically blend together.
This gives color inkjet printers the ability to actually mix their primary colors to create
intermediary tones. The range of color quality from inkjet printers is wide. The best yield
some of the brightest, most saturated colors available from any technology. The vast
majority, however, cannot quite produce a true-color palette.
Because inkjets are non-impact printers, they are much quieter than ordinary dot matrix
engines. Without hammers pounding ink onto paper like a myopic carpenter chasing an
elusive nail, inkjet printers sound almost serene in their everyday work. The tiny droplets
of ink rustle so little air they make not a whisper. About the only sound you hear from
them is the carriage coursing back and forth.
As mechanical line printers, however, inkjet engines have an inherent speed disadvantage
when compared to page printers. Although they deliver comparable speeds on text when
they use only black ink, color printing slows them considerably, to one-third speed or
less.
The underlying reason for this slowdown is that most color inkjets don't treat colors
equally and favor black. After all, you'll likely print black more often than any color or
blend. A common Lexmark color inkjet print head illustrates the point. It prints columns
of color only 16 dots high while black columns are 56 dots high. See Figure 20.3.
Printing a line of color the same height as one in black requires multiple passes even
though the printer can spray all three colors with each pass.
Figure 20.3 Print heads from a color inkjet printer showing nozzle placement.

Inkjet technology also has disadvantages. Although for general use you can consider
them to be plain-paper printers, able to make satisfactory images on any kind of stock
that will feed through the mechanism, to yield their highest quality inkjets require special
paper with controlled absorbency. You also have to be careful to print on the correct side
of the paper because most paper stocks are treated for absorption only on one side. If you
try to get by using cheap paper that is too porous, the inks wick away into a blur. If the
paper is too glossy, the wet ink can smudge.
Early inkjet printers also had the reputation, often deserved, of clogging regularly. To
avoid such problems, better inkjets have built-in routines that clean the nozzles with each
use. These cleaning procedures do, however, waste expensive ink. Most nozzles now are
self-sealing, so that when they are not used air cannot get to the ink. Some manufacturers
even combine the inkjet and ink supply into one easily changeable module. If, however,
you pack an inkjet away without properly purging and cleaning it first, it is not likely to
work when you resurrect it months later.

Laser

The one revolution that has changed the faces of both offices and forests around the
world was the photocopier. Trees plummet by the millions to provide fodder for the
duplicate, triplicate, megaplicate. Today's non-impact, bit image laser printer owes its life
to this technology.
At heart, the laser printer principle is simple. Some materials react to light in strange
ways. Selenium and some complex organic compounds modify their electrical
conductivity in response to exposure to light. Both copiers and laser printers capitalize on
this photoelectric effect by focusing an optical image on a photo-conductive drum that
has been given a static electrical charge. The charge drains away from the conductive
areas that have been struck by light but persists in the dark areas. A special pigment
called a toner is then spread across the drum, and the toner sticks to the charged areas. A
roller squeezes paper against the drum to transfer the pigment to the paper. The pigment
gets bonded to the paper by heating or "fusing" it.
The laser printer actually evolved from the photocopier. Rather than the familiar
electrostatic Xerox machine, however, the true ancestor of the laser printer was a similar
competing process called electro-photography, which used a bright light to capture an
image and make it visible with a fine carbon-based toner. The process was developed
during the 1960s by Keizo Yamaji at Canon. The first commercial application of the
technology, called New Process to distinguish it from the old process (xerography), was a
Canon photocopier released in 1968.

The first true laser printer was a demonstration unit, made by Canon in 1975, that was
based on a modified photocopier. The first commercial PC laser printer came in 1984
when Hewlett-Packard introduced its first LaserJet, which was based on the Canon CX
engine. At heart, it and all later lasers use the same process, a kind of heat set, light
inspired offset printing.
The magic in a laser printer is forming the image by making a laser beam scan back and
forth across the imaging drum. The trick, well known to stage magicians, is to use
mirrors. A small rotating mirror reflects the laser across the drum, tracing each scan line
across it. The drum rotates to advance to the next scan line, synchronized to the flying
beam of laser light. To make the light and dark pattern of the image, the laser beam is
modulated on and off. It's rapidly switched on for light areas, off for dark areas, one
minuscule dot at a time to form a bit image.
The major variations on laser printing differ only in the light beam and how it is
modulated. LCD-shutter printers, for example, put an electronic shutter (or an array of
them) between a constant light source (which need not be a laser) and the imaging drum
to modulate the beam. LED printers modulate ordinary Light-Emitting Diodes as their
optical source. In any case, these machines rely on the same electro-photographic process
as the laser printer to carry out the actual printing process.
The basic laser printer mechanism requires more than just a beam and a drum. In fact, it
involves several drums or rollers, as many as six in a single-color printer and more in
color machines. Each has a specific role in the printing process. Figure 20.4 shows the
layout of the various rollers.
Figure 20.4 Conceptual view of the laser printer mechanism.

The imaging drum, often termed the OPC for optical photoconductor, must first be
charged before it will accept an image. A special roller called the charging roller applies
the electrostatic charge uniformly across the OPC.
After the full width of an area of the OPC gets its charge, it rotates in front of the
modulated light beam. As the beam scans across the OPC drum and the drum turns, the
system creates an electrostatic replica of the page to be printed.
To form a visible image, a developing roller then dusts the OPC drum with particles of
toner. The light-struck areas with an electrostatic charge attract and hold the toner against
the drum. The unexposed parts of the drum do not.
The printer rolls the paper between the OPC drum and a transfer roller, which has a
strong electrostatic charge that attracts the toner off the drum. Because the paper is
between the transfer roller and the OPC drum, the toner collects on the paper in accord
with the pattern that was formed by the modulated laser. At this point only a slight
electrostatic charge holds the toner to the paper.
To make the image permanent, the printer squeezes the paper between a fuser and backup
roller. As the paper passes through, the printer heats the fuser to a high temperature—on
the order of 350 degrees Fahrenheit (about 175 degrees Celsius). The heat of the fuser and the
pressure from the backup roller melt the toner and stick it permanently on the paper. The
completed page rolls out of the printer.

Meanwhile, the OPC drum continues to spin, wiping the already-printed area against a
cleaning blade that scrapes any leftover toner from it. As the drum rotates around to the
charging roller again, the process repeats. Early lasers used OPC drums large enough to
hold an entire single-page image. Modern machines use smaller rollers that form the
image as a continuous process.
Although individual manufacturers may alter this basic layout to fit a particular package
or to refine the process, the technology used by all laser machines is essentially the same.
At a given resolution level, the results produced by most mechanisms are about the same,
too. You need an eye loupe to see the differences. The major difference is that
manufacturers have progressively refined both the mechanism and the electronics to
produce higher resolutions. Basic laser printer resolution starts at 300 dots per inch. The
mainstream is now at the 600 dpi level. The best PC-oriented laser printers boast 1200
dpi resolution.
In most lasers, the resolution level is fixed primarily by the electronics inside the printer.
The most important part of the control circuitry is the Raster Image Processor, also
known as the RIP. The job of the RIP is to translate the string of characters or other
printing commands into the bit image that the printer transfers to paper. In effect, the RIP
works like a video board, interpreting drawing commands (a single letter in a print stream
is actually a drawing command to print that letter), computing the position of each dot on
the page, and pushing the appropriate value into the printer's memory. The memory of
the printer is arranged in a raster just like the raster of a video screen, and one memory
cell—a single bit in the typical black and white laser printer—corresponds to each dot
position on paper.
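The memory a RIP must manage follows directly from that raster layout: one cell per dot, one bit per cell in a black and white printer. A rough sketch with a hypothetical helper, ignoring unprintable margins and any controller overhead:

```python
def raster_memory_bytes(width_in: float, height_in: float,
                        dpi: int, bits_per_dot: int = 1) -> int:
    """Bytes needed to hold a full-page bitmap, one memory cell per dot."""
    dots = round(width_in * dpi) * round(height_in * dpi)
    return dots * bits_per_dot // 8

# A letter-size page at the three resolution levels mentioned above:
for dpi in (300, 600, 1200):
    print(dpi, raster_memory_bytes(8.5, 11, dpi))
# 300 dpi needs about 1 MB, 600 dpi about 4 MB, 1200 dpi about 16 MB
```

The quadrupling at each step is why the move from 300 to 600 to 1200 dpi demands more than a faster RIP; page memory grows with the square of the resolution.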
The RIP itself may, by design, limit a laser printer to a given resolution. Some early laser
printers made this constraint into an advantage, allowing resolution upgrades through
after-market products that replaced the printer's internal RIP and controlled the printer
and its laser through a video input. The video input earns its name because its signal is
applied directly to the light source in the laser in raster scanned form (like a television
image), bypassing most of the printer's electronics. The add-in processor can modulate
the laser at higher rates to create higher resolutions.
Moving from 300 dpi to 600 dpi and 1200 dpi means more than changing the RIP and
adding memory, however. The higher resolutions also demand improved toner
formulations because, at high resolutions, the size of toner particles limits sharpness
much as the size of print wires limits impact dot matrix resolution. With higher resolution
laser printers, it becomes increasingly important to get the right toner, particularly if you
have toner cartridges refilled. The wrong toner limits resolution just as a fuzzy ribbon
limits the quality of impact printer output.
Adding color to a laser printer is more than dumping a few more colors of toner. The
laser must separately image each of its three or four primary colors and transfer the toner
corresponding to each to the paper. The imaging process for each color requires forming
an entire image by passing it past the OPC drum. Forming a complete image
consequently requires three or four passes of the drum.
Exactly what constitutes a pass varies among manufacturers. Most color laser printers use
three or four distinct passes of each sheet of paper. The paper rolls around the drum and
makes four complete turns. Each color gets imaged separately on the drum, then separately transferred to the sheet. The printer wipes the drum clean between passes.
So-called "one-pass" printing, pioneered by Hewlett-Packard, still requires the drum to
make four complete passes as each color gets separately scanned on the drum, and toner
is dusted on the drum separately for each color. The paper, however, only passes once
through the machine to accept the full color image at once and to have all four colors
fused together onto the paper. The first three colors merely transfer to the drum. After the
last color—black—gets coated on the drum, the printer runs the paper through and
transfers the toner to it. The paper thus makes a single pass through the printer, hence the
"one-pass" name.
This single-pass laser technology yields no real speed advantage. The photoconductor
drum still spins around the same number of times as a four-pass printer. The speed at
which the drum turns and the number of turns it makes determines engine speed, so the
one-pass process doesn't make a significant performance increase.
The advantage to one-pass color laser printing comes in the registration of the separate
color images. With conventional color laser systems, the alignment of the paper must be
critically maintained for all four passes in order for all the colors to properly line up.
With the one-pass system, paper alignment is not a problem. Only the drum needs to
maintain its alignment, which is easy to do because it is part of the mechanism rather
than an interloper from the outside world.
No matter the number of passes, adding color in laser printing subtracts speed. In general,
color laser speed falls to one-quarter the monochrome speed of a similar engine because
of the requirement of four passes. (With three-pass printing, speed falls to one-third the
monochrome rate). For example, a printer rated at 12 pages per minute in monochrome
will deliver about 3 ppm in color. Even allowing for this slowdown, however, color
lasers are usually faster than other color page printers. They are also often quieter.
Compared to thermal wax transfer printers, a popular high quality color technology, they
are also economical because a laser uses toner only for the actual printed image. Thermal
wax machines need a full page of ink for each page printed no matter the density of the
image.
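The speed penalty described above is simple division: color throughput is the monochrome engine rate divided by the number of drum passes per page. A minimal sketch, with an invented helper name:

```python
def color_ppm(mono_ppm: float, passes: int) -> float:
    """Effective color page rate when each page needs `passes` drum images."""
    return mono_ppm / passes

print(color_ppm(12, 4))  # the example above: 12 ppm mono gives 3.0 ppm in color
print(color_ppm(12, 3))  # three-pass printing: 4.0 ppm
```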

Direct Thermal

A printer that works on the same principle as a woodburning set might seem better for a Boy Scout than an on-the-go executive, but today's easiest-to-tote printers do exactly that—the equivalent of charring an image on paper. Thermal printers use the same electrical heating as the wood burner: a resistance that heats up with the flow of current. In the case of the thermal printer, however, the resistance element is tiny and heats and cools quickly, in a fraction of a second. As with inkjets, the thermal print head is the equivalent of that of a dot matrix printer, except that it heats rather than hits.
Thermal printers do not, however, actually char the paper on which they print. Getting paper that hot would be dangerous, precariously close to combustion (although it might let the printer do double duty as a cigarette lighter). Instead, thermal printers use special, thermally sensitive paper that turns from white to near-black at a moderate temperature.
Thermal technology is ideal for portable printers because few moving parts are involved—only the print head moves, nothing inside it. No springs and wires means no jamming. The tiny resistive elements require little power to heat, actually less than is needed to fire a wire in an impact printer. Thermal printers can be lightweight, quiet, and reliable. They can even run on batteries.
The special paper they require is one drawback. Not only is it costly (because it is, after all, special paper) but it feels funny and is prone to discolor if it is inadvertently heated to too high a temperature; paper cannot tell the difference between a hot print head and a cozy corner in the sun.
Gradually, thermal printers are becoming special application machines. Inkjets have many of the same virtues and use more reasonably priced and readily available paper; therefore, low cost inkjets are invading the territory of the thermal machines.

Thermal Transfer

Engineers have made thermal technology more independent of the paper or printing
medium by moving the image-forming substance from the paper to a carrier or ribbon.
Instead of changing a characteristic of the paper, these machines transfer pigment or dyes
from the carrier to the paper. The heat from the print head melts the binder holding the
ink to the carrier, allowing the ink to transfer to the paper. On the cool paper, the binder
again binds the ink in place. In that the binder is often a wax, these machines are often
called thermal wax transfer printers.
These machines produce the richest, purest, most even, and most saturated color of any
color print technology. Because the thermal elements have no moving parts, they can be
made almost arbitrarily small to yield high resolutions. Current thermal wax print engines
achieve resolutions similar to those of laser printers. However, due to exigencies of print
head designs, the top resolution of these printers extends only in one dimension
(vertical). Top thermal wax printers achieve 300 dots per inch horizontally and 600 dots per inch vertically.
Compared to other technologies, however, thermal wax engines are slow and wasteful.
They are slow because the thermal printing elements must have a chance to cool off
before advancing the 1/300 of an inch to the next line on the paper. And they are wasteful
because they use wide ink transfer sheets, pure colors supported in a wax-based medium
clinging to a plastic film base—sort of like a Mylar typewriter ribbon with a gland
condition. Each of the primary colors to be printed on each page requires a swath of
inked transfer sheet as large as the sheet of paper to be printed—that is nearly four feet of
transfer sheet for one page. Consequently, printing a full-color page can be expensive,
typically measured in dollars rather than cents per page.
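The "nearly four feet" figure can be checked with simple arithmetic: each color pass consumes a page-length swath of transfer sheet, regardless of how little ink actually reaches the paper. A sketch, assuming an 11-inch page and a four-color sheet (the helper name is invented):

```python
def transfer_sheet_inches(page_length_in: float, passes: int) -> float:
    """Length of inked transfer sheet consumed per printed page: one
    page-length swath per color pass."""
    return page_length_in * passes

print(transfer_sheet_inches(11, 4))       # 44 inches of sheet per page
print(transfer_sheet_inches(11, 4) / 12)  # about 3.7 feet
```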
Because thermal wax printers are not a mass market item and each manufacturer uses its
own designs for both mechanism and supplies, you usually are restricted to one source
for inksheets—the printer manufacturer. While that helps assure quality (printer makers
pride themselves on the color and saturation of their inks), it also keeps prices higher
than they might be in a more directly competitive environment.
For color work, some thermal wax printers give you the choice of three- or four-pass
transfer sheets and printing. A three-pass transfer sheet holds the three primary colors of
ink—red, yellow, and blue—while a four-color sheet adds black. Although black can be
made by overlaying the three primary colors, a separate black ink gives richer, deeper
tones. It also imposes a higher cost and extends printing time by one-third.
From these three primary colors, thermal wax printers claim to be able to make anywhere
from seven to nearly seventeen million colors. That prestidigitation requires a mixture of
transparent inks, dithering, and ingenuity. Because the inks used by thermal wax printers
are transparent, they can be laid one atop another to create simple secondary colors. They
do not, however, actually mix.
Expanding the thermal wax palette further requires pointillistic mixing, laying different
color dots next to each other and relying on them to visually blend together in a distant
blur. Instead of each dot of ink constituting a picture element, a group of several dots
effectively forms a super-pixel of an intermediate color.
The penalty for this wider palette is a loss of resolution. For example, super-pixels
measuring five by five dots would trim the resolution of a thermal wax printer to 60 dots
per inch. Image quality looks like a color halftone—a magazine reproduction—rather
than a real photograph. Although the quality is shy of perfection, it is certainly good
enough for proofs of what is going to a film recorder or the service bureau to be made
into color separations.
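The resolution cost of this pointillistic mixing works out as below. The sketch makes one simplifying assumption not stated in the text: that each ink contributes a count of filled cell positions (0 through 25 for a five by five cell), giving 26 nominal levels per ink. Real dither patterns are more constrained than this.

```python
def superpixel_tradeoff(engine_dpi: int, cell: int, inks: int = 3):
    """Effective resolution and nominal palette size when cell x cell dots
    form one super-pixel, each dot carrying one of `inks` binary inks."""
    effective_dpi = engine_dpi // cell
    dots = cell * cell
    # Each ink can fill 0..dots positions in the cell: dots + 1 levels per ink.
    palette = (dots + 1) ** inks
    return effective_dpi, palette

print(superpixel_tradeoff(300, 5))  # (60, 17576): the 60 dpi figure above
```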
A variation of the thermal wax design combines the sharpness available from the
technology with a versatility and cost more in line with ordinary dot matrix printers.
Instead of using a page-wide print head and equally wide transfer sheets, some thermal
wax machines use a line-high print head and a thin transfer sheet that resembles a Mylar
typewriter ribbon. These machines print one, sharp line of text or graphics at a time,
usually in one color—black. They are as quiet as inkjets but produce sharper, darker
images.


Dye Diffusion

For true photo quality output from a printer, today's stellar technology is the thermal dye
diffusion process, sometimes called thermal dye sublimation. Using a mechanism similar
to that of the thermal wax process, dye diffusion printers are designed to use penetrating
dyes rather than inks. Instead of a dot merely being present or absent, as in the case of a
thermal wax printer, diffusion allows the depth of the color of each dot to vary. The
diffusion of the dyes can be carefully controlled by the print head. Because each of the
three primary colors can have a huge number of intensities (most makers claim 256), the
palette of the dye diffusion printer is essentially unlimited.
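The "essentially unlimited" palette follows directly from the 256 claimed intensities per primary: the combinations multiply. A one-line sketch:

```python
def dye_palette(levels_per_primary: int = 256, primaries: int = 3) -> int:
    """Distinct colors when each primary dye takes an independent depth."""
    return levels_per_primary ** primaries

print(dye_palette())  # prints 16777216, the familiar 16.7 million true colors
```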
What is limited is the size of the printed area in some printers. The output of most dye
diffusion printers looks like photographs in size, as well as color. Another limit is cost.
The newer, more exotic technology pushes dye diffusion machines into the pricing
stratosphere. Dye diffusions only now are knocking on the $10,000 pricing barrier.

Daisy Wheels and Tulips

The original typewriter and all such machines made through the 1970s were based on the same character-forming principle as the original creation of Johannes Gutenberg. After laboriously carving individual letters out of wood, daubing them with sticky black ink, and smashing paper against the gooey mess, Gutenberg brought printing to the West by inventing the concept of movable type. Every letter he printed came from a complete, although reversed, image of itself. The character was fully formed in advance of printing. Every part of it, from the boldest stroke to the tiniest serif, was printed in one swipe of the press. Old-fashioned typewriters adapted Gutenberg's individual character type to an impact mechanism.
In the early days of personal computing, a number of machines used this typewriter technology and were grouped together under the term fully formed character printers. Other names for this basic technology were letter quality printers, daisy wheel printers, and a variation called the thimble printer. These more colorful names came from the designs of their print elements, which held the actual character shapes that were pressed against ribbon and paper. Figure 20.5 illustrates a daisy wheel which, if you apply enough imagination, resembles the familiar flower of the family Compositae with many petals around a central disk.
Figure 20.5 A fully formed character printer daisy wheel.

Nearly all of the fully formed character printers that are likely to be connected to a personal computer use the impact principle to get their ink on paper. Rather than having a separate hammer for each letter, however, the characters are arranged on a single, separate element that is inserted between a single hammer and the ribbon. The hammer, powered by a solenoid that is controlled by the electronics of the printer and your computer, impacts against the element. The element then squeezes the ink off the ribbon and onto the paper. To allow the full range of alphanumeric characters to be printed using this single-hammer technique, the printing element swerves, shakes, or rotates each individual character in front of the hammer as it is needed.
Fully formed character technology produces good quality output, in line with better typewriters. The chief limitation, in fact, is not the printing technology but the ribbon that is used. Some daisy wheel printers equipped with a Mylar film ribbon can give results almost on a par with the work of a phototypesetter.
Fully formed character technology, like the typewriter it evolved from, is essentially obsolete, and for several good reasons. The movable type design limited these machines solely to text printing and crude graphics. Fully formed character printers also constrained you to a few typefaces. You could only print the typefaces—and font sizes—available on the image-forming daisy wheels or thimbles. Worst of all, these machines also were painfully slow—budget-priced models hammered out text at a lazy 12 to 20 characters per second, and even the most expensive machines struggled to reach 90 characters per second. Other technologies (in particular, laser printers) now equal or exceed the quality of fully formed character printers, run far ahead in speed, and impose little or no price penalty.

Paper Handling


Key to the design of all printers is that their imaging systems operate in only one
dimension, one line at a time, be it the text-like line of the line printer or the single raster
line of the page printer. To create a full two-dimensional image, all printers require that
the printing medium—typically paper—move past the print mechanism. With page
printers, this motion must be smooth and continuous. With line printers, the paper must
cog forward, hold its position, then cog forward to the next line. Achieving high
resolutions without distortion requires precision paper movement with a tolerance of
variation far smaller than the number of dots per inch the printer is to produce.
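The precision demanded of the feed mechanism can be made concrete: the paper must advance exactly one scan line per step, and hold that step size consistently down the page. A sketch of the step at common resolutions (the helper name is invented; output is metric):

```python
def line_advance_mm(dpi: int) -> float:
    """Paper advance per scan line that the feed mechanism must hit,
    and hold, to keep dots square on the page."""
    return 25.4 / dpi  # 25.4 mm per inch

for dpi in (300, 600, 1200):
    print(dpi, round(line_advance_mm(dpi), 4))
# e.g. 600 dpi requires steps of 0.0423 mm, roughly half a human hair
```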
Adding further complexity, the printer must be able to move paper or other printing
medium in and out of its mechanism. Most modern printers use sheet feeders that can
pull a single page from a stack, route it past the imaging mechanism, and stack it in an
output tray. Older printers and some of the highest speed machines, use continuous form
paper, which trades a simplified printer mechanism for your trouble in tearing sheets
apart. Each paper handling method has its own complications and refinements.

Sheet Feeders

The basic unit of computer printing is the page, a single sheet of paper, so it is only
natural for you to want your printer to work with individual sheets. The computer
printing process, however, is one that works with volume—not pages but print jobs, not
sheets but reams.
The individual sheet poses problems in printing. To make your ideas, onscreen images,
and printed hard copy agree, each sheet must be properly aligned so that its images
appear at the proper place and at the proper angle on each sheet. Getting perfect
alignment can be vexing for both human and mechanical hands. Getting thousands into
alignment is a project that might please only the Master of the Inquisition. Yet every
laser printer and most inkjet printers and a variety of other machines face that challenge
every time you start a print job.
To cope with this hard copy torture, the printer requires a complex mechanism called the
cut-sheet feeder or simply the sheet feeder. You'll find considerable variation in the
designs of the sheet feeders of printers. All are complicated designs involving cogs,
gears, rods, and rollers, and every engineer appears to have his own favorite arrangement.
The inner complexity of these machines is something to marvel at but not dissect, unless
you have too much time on your hands. Differences in the designs of these mechanisms
do have a number of practical effects: the capacity of the printer, which relates to how
long it can run without your attention; the kinds and sizes of stock that roll through; how
sheets are collated and whether you have to spend half an afternoon to get all the pages in
order; and duplex printing that automatically covers both sides of each sheet.
No matter the details, however, all sheet feeders can trace their heritage back to one
progenitor design, the basic friction feed mechanism.


Friction Feed

When you load a single sheet of paper into a modern inkjet printer, it reverts to friction
feed, which uses the same technology as yesteryear's mechanical typewriter. It moves
paper through its mechanism by squeezing it between the large rubber roller, called a
platen, and smaller drive rollers. The roller system shifts each sheet through the printer.
Friction between the rubber and the paper or other printing medium gives the system a
positive grip that prevents slipping and assures that each sheet gets where it's supposed
to. This friction also gives the technology its name.
The name and concept of the platen harks back to the days of impact printing. Its main
purpose was as the impact absorber for the pounding hammers of the typewriter or
printer. The rubber roller cushioned the hammers while offering sufficient resistance to
let the system make a good impression. In typewriters and many printers, the platen did
double duty, also serving as the main friction drive roller. Many inkjet and some impact
printers now have flat platens that are separate from the drive system. On the other hand,
the OPC drum in a laser printer acts like the typewriter's platen in serving as part of the
friction feed mechanism.
Although all sheet-fed printers use a friction mechanism, the term friction feed is usually
reserved for machines, like the old typewriter, that require you to manually load each
page to be printed. The loading process is often complex in itself—you must pull out the
bail arm that holds the paper around the platen, insert each individual sheet, line it up to
be certain that it is square (so that the print head does not type diagonally across the
sheet), lock it in place, push the bail arm down, and finally signal to the machine that all
is well. Easier said than done, of course—and more tedious, too, particularly if you
decide to print a computerized version of the Encyclopaedia Britannica.
On the positive side, however, printers that have these so-called friction feed mechanisms
can handle any kind of paper you can load into them, from your own engraved stationery
to pre-printed forms, from W-2s to 1040s, from envelopes to index cards. They will deal
with sheets of any reasonable size—large enough to pass between rollers without
slipping out, and small enough that you don't have to fold over the edge to fit it through.
Better friction fed printers have accessory mechanisms that automatically load individual
sheets. Often termed cut-sheet feeders or bin-feed mechanisms, these add-ons operate as
robotic hands that peel off individual sheets and line them up with the friction feed
rollers. As adapters they sit atop the printer and often execute a dance akin to the
jitterbug as they shuffle through their work. You can think of them as mechanical
engineering masterpieces, complex Band-Aids, or simply obsolete but interesting
technology.
Modern printers integrate the feed mechanism with the rest of the printer drive system.
You load cut sheets into a bin or removable tray, and the printer takes over from there.
The mechanism is reduced to a number of rollers chained, belted, or geared together that
pull the paper smoothly through the printer. This integrated design reduces complexity,
increases reliability, and often trims versatility. Its chief limitations are in the areas of
capacity and stock handling.


Capacity

The most obvious difference between sheet feed mechanisms of printers is capacity.
Some machines are made only for light, personal use and have modestly sized paper bins
that hold 50 or fewer sheets. In practical terms, that means that every ten to fifteen
minutes, you must attend to the needs of the printer, loading and removing the wads of
paper that course through it. Larger trays require less intervention. Printers designed for
heavy network use may hold several thousand sheets at a time.
The chief enemy of capacity is size. A compact printer must necessarily devote less
space—and thus less capacity—to stocking paper. A tray large enough to accommodate a
ream (500 sheets) of paper would double the overall volume of some inkjet printers. In
addition, larger tray capacities make building the feed mechanism more difficult. The
printer must deal with a larger overall variation in the height of the paper stack, which
can challenge both the mechanism and its designer.
A printer needs at least two trays or bins, one to hold blank stock waiting to be printed
and one to hold the results of the printing. These need not be, and often are not, the same
size. Most print jobs range from a few to a few dozen sheets, and you will usually want
to grab the results as soon as the printing finishes. An output bin large enough to
accommodate your typical print job usually is sufficient for a personal printer. The input
tray usually holds more so that you need bother loading it less frequently—you certainly
don't want to deal with the chore every time you make a printout.

Media Handling

Most printers are designed to handle a range of printing media, from paper stock and
cardboard to transparency acetates. Not all printers handle all types of media. Part of the
limitation is in the print engine itself. Many constraints arise from the feed mechanism,
however.
With any cut-sheet mechanism, size is an important issue. All printers impose minimum
size requirements on the media you feed them. The length of each sheet must be long
enough so that one set of drive rollers can push it to the next. Sheets that are too short slide
between the rollers, and nothing save your intervention can move them out. Similarly, each
sheet must be wide enough that the drive rollers can get a proper grip. The maximum
width is dictated by the width of the paper path through the printer. The maximum length
is enforced by the size of paper trays and the imaging capabilities of the printer engine.
In any case, when selecting a printer you must be certain that it can handle the size of
media you want to use. Most modern printers are designed primarily for standard letter
size sheets; some but not all accommodate legal size sheets. If you want to use other
sizes, take a close look at the specifications. Table 20.2 lists the dimensions of common
sizes of paper.

Table 20.2. Dimensions of Common Paper Sizes

Designation Height Millimeters Width Millimeters Height Inches Width Inches


A9 37 52 1.5 2.1
B9 45 64 1.8 2.5
A8 52 74 2.1 2.9
B8 64 91 2.5 3.6
A7 74 105 2.9 4.1
B7 91 128 3.6 5.0
A6 105 148 4.1 5.8
B6 128 182 5.0 7.2
A5 148 210 5.8 8.3
Octavo 152 229 6 9
B5 182 256 7.2 10.1
Executive 184 267 7.25 10.5
A4 210 297 8.3 11.7
Letter 216 279 8.5 11
Legal 216 356 8.5 14
Quarto 241 309 9.5 12
B4 257 364 10.1 14.3
Tabloid 279 432 11 17
A3 297 420 11.7 16.5
Folio 309 508 12 20
Foolscap 343 432 13.5 17
B3 364 515 14.3 20.3
A2 420 594 16.5 23.4
B2 515 728 20.3 28.7
A1 594 841 23.4 33.1
B1 728 1030 28.7 40.6
A0 841 1189 33.1 46.8
B0 1030 1456 40.6 57.3
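The A-series figures in the table are not arbitrary: under ISO 216, each size is the next
larger sheet cut in half across its long side, starting from A0 at 841 by 1189 millimeters.
A short sketch (in Python, which is my choice here, not the book's) derives the millimeter
pairs above from that one rule:

```python
# ISO 216 rule behind the A-series rows of Table 20.2: each sheet is
# the next larger sheet halved across its long side, starting from A0
# at 841 x 1189 mm. Dimensions round down to whole millimeters.

def a_series(n):
    """Return (short_mm, long_mm) for ISO size An."""
    short, long_ = 841, 1189              # A0
    for _ in range(n):
        short, long_ = long_ // 2, short  # halve the long side
    return min(short, long_), max(short, long_)

for n in (4, 5, 6):
    short, long_ = a_series(n)
    print(f"A{n}: {short} x {long_} mm")
```

Running this reproduces the table's A4 (210 by 297), A5 (148 by 210), and A6 (105 by 148)
entries exactly.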

Most sheet-fed printers cannot print to the edges of any sheet. The actual image area is
smaller because drive mechanisms may reserve a space to grip the medium and the
engine may be smaller than the sheet to minimize costs. If you want to print to the edge
of a sheet, you often need a printer capable of handling larger media; then you must trim
each page when it is done. Printing to (and beyond) the edge of a sheet is termed full
bleed printing. Only a few sheet-fed printers are capable of managing the task.
Printing media also differ in weight, which roughly corresponds to the thickness of paper.
In general, laser printers are the most critical in regard to media weight. The capabilities
of a given printer are listed as a range of paper weights the printer can handle, in the case
of laser printers typically from 16 to 24 pounds (most business stationery uses 20-pound
stock). If you want to print heavier covers for reports, your printer needs to be able to
handle 70-pound paper. Similarly, printer specifications will reveal whether the
mechanism can deal with transparency media and label sheets.
Laser printers impose an additional specification on paper stock, moisture content. The
moisture content of paper affects its conductivity. The laser printing process is based on
carefully controlled static charges, including applying a charge to the paper to make toner
stick to it. If paper is too moist or conductive, the charge and the toner may drain away
before the image is fused to the sheet. In fact, high humidity around a laser printer can
affect the quality of its printouts—pale printouts or those with broken characters can
often be traced to paper containing too much moisture or operating the printer in a high
humidity environment (which in turn makes the paper moist).
Most modern printers readily accommodate envelopes, again with specific, enforced size
restrictions. As with paper, envelopes come in standard sizes, the most common of which
are listed in Table 20.3.

Table 20.3. Common Envelope Sizes (Flap Folded)

Designation Height Millimeters Width Millimeters Height Inches Width Inches


6¾ 91.4 165 3.6 6.5
Monarch 98.4 190.5 3.875 7.5
Com-10 105 241 4.125 9.5
DL 110 220 4.33 8.66
C5 165 229 6.5 9.01

With a modern computer printer, you should expect to load envelopes in the normal
paper tray. Be wary of printers that require some special handling of envelopes—you
may find it more vexing than you want to deal with.

Collation

The process of getting the sheets of a multi-page print job in proper order is called
collation. A printer may take care of the process automatically or leave the chore to you.
When sheet-fed printers disgorge their output, it can fall into the output tray one of two
ways—face up or face down. Although it might be nice to see what horrors you have
spread on paper immediately rather than saving up for one massive heart attack, face
down is the better choice. When sheets pile on top of one another, face down means you
do not need to sort through the stack to put everything in proper order.
Most printers now automatically collate by stacking sheets face down. A few have
selectable paths that give you the choice of face-up or face-down output. The Windows
95 printer driver also gives you a choice of whether to collate your print job
electronically, before it is sent to your printer. It can handily print the last page first so
that it falls to the bottom of the stack.
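The last-page-first trick is easy to picture in code. This sketch (Python, with placeholder
strings standing in for real page data) shows the reordering a driver applies when the
printer stacks its output face up:

```python
# Electronic collation: if the printer stacks sheets face up, the
# driver sends the last page first so the finished pile reads in
# order. Page contents here are placeholder strings, not print data.

def spool_order(pages, stacks_face_up):
    """Return pages in the order they should be sent to the printer."""
    return list(reversed(pages)) if stacks_face_up else list(pages)

job = ["page 1", "page 2", "page 3"]
print(spool_order(job, stacks_face_up=True))
```

A face-down printer needs no reordering, so the list passes through unchanged.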

Duplex Operation

A duplex printer is one that automatically prints on both sides of each sheet when you
want it to. The chief advantage of double-sided printing is, of course, that you use half as
much paper, although you usually need thicker, more expensive stock so that one side
does not show through to the other.
You can easily simulate duplex printing by printing one side, turning each sheet over,
and printing the other. When you have a multi-page print job, however, it can be
daunting to keep the proper pages together. A single jam can ruin the entire job.
With laser printers, you should never try to print on both sides of a sheet except with a
duplex printer. When printing the second side, the heat of the second fusing process can
melt the toner from the first pass. This toner may stick to the fuser and contaminate later
pages. With sufficient build-up, the printer may jam. Duplex printers eliminate the
problem by fusing both sides of the sheet at once.

Continuous Form Feeding

Scrolls went out of fashion for nearly everything except computer
printing about two thousand years ago. Until the advent of laser and
inkjet printers, however, most computer printing relied on the same
concept as the scrolls stored so carefully near the Dead
Sea—continuous form paper, a single long sheet that was nearly
endless. The length was intentional. Printer paper handling
mechanisms were so complex and time consuming to deal with, akin
to threading an old movie projector with film a foot wide, that the
only sane thing to do was to minimize the need for loading. The
longer the sheet, the longer the printer could run without attention,
without your involvement, and without paper-cuts and cursing.

Although nearly all continuous form printing systems are based on
friction mechanisms, the paper feeding method they use takes various
forms. Each of these requires a particular paper handling mechanism
and, often, a particular kind of paper stock. The choices include roll
feeding, pin feeding, and tractor feeding, the latter with several
options.

Roll Feed

One way to reduce the number of times you must slide a sheet of
paper into the friction feed mechanism is to make the paper longer. In
fact, you could use one, long continuous sheet, the classic scroll of
Biblical fame. Some inexpensive printers do exactly that, wrapping
the long sheet around a roll (like toilet paper). The printer just pulls
the paper through as it needs it. By rigidly mounting a roll holder at
the back of the printer, the paper can be kept in reasonable alignment
and skew can be eliminated. Roll feed is most common for simple
narrow width printers such as those used to generate cash register
tapes.
The shortcoming of this system is, of course, that you end up with
one long sheet. You have to tear it to pieces or carefully cut it up
when you want traditional 8.5 by 11 output. Expensive roll fed
printers, such as some thermal wax transfer printers and even some
fax machines, incorporate cutoff bars that slice off individual sheets
from the paper roll. These add a bit of versatility and frugality. You
can make your printouts any length, and when they are short you can
save paper.

Pin Feed and Tractor Feed

While roll fed paper could be perforated at 11-inch intervals so that
you could easily and neatly tear it apart, another problem arises. Most
continuous form friction feeding mechanisms are not perfect. The
paper can slip so that, gradually, the page breaks in the image and
page breaks at the perforations no longer correspond. In effect, the
paper and the image can get out of sync.
By locking perforations in the edge of the paper inside sprockets that
prevent slipping, the image and paper breaks can be kept in sync.
Two different paper feeding systems use sprocketed paper to avoid
slippage. Pin feed uses drive sprockets which are permanently affixed
to the edges of the platen roller. The pin feed mechanism,
consequently, can handle only one width of paper, the width
corresponding to the sprocket separation at the edges of the platen.
Tractor feed uses adjustable sprockets that can be moved closer
together or farther apart to handle nearly any width paper that fits
through the printer.
Tractor feed mechanisms themselves operate either uni-directionally
or bi-directionally. As the names imply, a uni-directional tractor
only pulls (or pushes) the paper through in a single direction
(hopefully forward). The bi-directional tractor allows both forward
and backward paper motion, which often is helpful for graphics,
special text functions (printing exponents, for instance), and lining up
the top of the paper with the top of the print head.

Push and Pull Tractors

The original tractor mechanism for printers was a two-step affair.
One set of sprockets fed paper into the printer and another set pulled
it out. For the intended purpose of tractor feeding, however, the two
sets of sprockets are one more than necessary. All it takes is one set
to lock the printer's image in sync with the paper.
A single set of sprockets can be located in one of two positions,
either before or after the paper wraps around the platen in front of the
print head. Some printers allow you to use a single set of tractors in
either position. In others, the tractors are fixed in one location or the
other.

Push tractors are placed in the path of the paper before it enters the
printer. In effect, they push the paper through the machine. The
platen roller helps to ease paper through the printer while the push
tractor provides the principal force and keeps the paper tracking
properly. This form of feeding holds a couple of advantages. You can
rip the last sheet of a printout off without having to feed an extra
sheet through the printer or rethread it. The tractor also acts
bi-directionally with relative ease, pulling the paper backwards as
well as pushing it forward.
Pull tractors are located in the path of the paper after it emerges from
the printmaking mechanism. The pull tractor pulls paper across the
platen. The paper is held flat against the platen by its friction and the
resistance of pulling it up through the mechanism. The pull tractor is
simpler and offers fewer potential hazards than push designs.
Although most pull tractors operate only uni-directionally, they work
well in high speed use on printers with flat, metal (instead of round
rubber) platens. Because of their high speed operation, typically
several pages per minute, the machines naturally tend to be used for
large print jobs during which the waste of a single sheet is not a
major drawback.

Consumables

Consumables are those things that your printer uses up, wears out, or burns through as it
does its work. Paper is the primary consumable, and the need for it is obvious with any
printer. Other consumables are less obvious, sometimes even devious in the way they can
eat into your budget.
You probably think you are familiar with the cost of these consumables. A couple of
months after the old dot matrix ribbon starts printing too faintly to read, you finally get
around to ordering a new $5 ribbon to hold you through for the rest of the decade. But if
you buy one of today's top quality printers—laser, thermal wax, and dye diffusion—you
may be in for a surprise. When the toner or transfer sheet runs out, the replacement may
cost as much as did your old dot matrix printer.
The modern trend in printer design and marketing is to follow the "razor blade" principle.
The marketers of razors (the non-electric shavers) discovered they could make greater
profits selling razor blades by offering the razors that use them at a low price, even a loss.
After all, once you sell the razor you lock in repeat customers for the blades.

Similarly, the prices of many inkjet and laser printers have tumbled while the
consumables remain infuriatingly expensive, often a good fraction (like one-third) of the
cost of the printer itself. This odd situation results from the magic of marketing. By
yourself you can't do anything about it, but you must be aware of it to buy wisely.
If you truly want to make the best buy in getting a new printer, you must consider its
overall cost of ownership. This total cost includes not only the purchase price but the cost
of consumables and service over the life of the printer. Take this approach and you'll
discover a more expensive printer is often less expensive in the long run.
When you have a small budget, however, the initial price of a printer becomes paramount
because it dictates what you can afford. Even in this situation, however, you should still
take a close look at the price of consumables. Two similarly priced printers may have
widely varying consumables costs.
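The arithmetic behind cost of ownership is simple enough to sketch. All the prices and
cartridge yields below are invented for illustration; substitute figures from the spec
sheets of the printers you are actually comparing:

```python
# Total cost of ownership as described above: purchase price plus
# consumables over the printer's life. All figures are hypothetical.

def cost_of_ownership(price, cartridge_price, pages_per_cartridge, lifetime_pages):
    """Purchase price plus cartridge costs for a given page count."""
    cartridges_needed = -(-lifetime_pages // pages_per_cartridge)  # ceiling division
    return price + cartridges_needed * cartridge_price

lifetime = 30_000  # pages printed over the machine's life
budget = cost_of_ownership(150, 45, 1_000, lifetime)    # cheap printer, costly ink
premium = cost_of_ownership(600, 90, 6_000, lifetime)   # dearer printer, cheaper toner
print(budget, premium)
```

With these made-up numbers the $150 printer costs $1,500 over its life and the $600
printer only $1,050, which is the point of the paragraph above.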

Cartridges

Laser printers use up a bit of their mechanism with every page they print. The organic
photoconductor drum on which images are made gradually wears out. (A new drum
material, silicon, is supposed to last for the life of the printer, but few printer models
currently use silicon drums.) In addition, the charging corona or other parts may also
need to be periodically replaced. And, of course, you need toner.
Laser printer manufacturers have taken various approaches to replacing these
consumables. Hewlett-Packard's LaserJets, for example, are designed with one-piece
cartridges that contain both the drum and toner. The whole assembly is replaced as a
single unit when the toner runs out. Other laser printers are designed so that the toner,
drum, and sometimes the fuser, can be replaced individually.
The makers of the latter style of printer contend that the drum lasts for many times more
copies than a single shot of toner, so dumping the drum before its time is wasteful. On
the other hand, the all-in-one cartridge folks contend that they design their drums to last
only as long as the toner.
Surprisingly, from a cost standpoint the choice of technology does not appear to make a
difference. (From an ecological standpoint, however, the individual replacement scheme
still makes more sense.)
A similar situation reigns among inkjet printers. Some designs incorporate the print head
nozzles into the ink cartridge. Others make the nozzles a separately replaceable item.
Although the latter should have a cost advantage and a convenience disadvantage, as a
practical matter the differences are not significant.
A more important issue to consider with inkjets is single versus separate cartridges for
ink colors. Many printers, typically the less expensive models, use a single cartridge for
all three primary ink colors. If you use all three colors equally, this is a convenient
arrangement. Most of the time, however, one color will run out before another and force
you to scrap a cartridge still holding a supply of two colors of rather expensive ink. If
you are frugal or simply appalled at the price of inkjet ink, you'll want a separate
cartridge for each ink color.

Refilling

One way to tiptoe around the high cost of laser printer consumables is to get toner or ink
cartridges refilled. Most manufacturers do not recommend this—because they have no
control over the quality of the toner, they can't guarantee that someone else's replacement
works right in their machines. Besides, they miss the profits in selling the toner or ink.
Quality really can be an issue, however. The Resolution Enhancement technology of
HP's LaserJet III series, for example, requires toner with a particle size much smaller
than that of toner used by other printers. You cannot tell the difference in toner just by
looking at it—but you can when blotchy gray pages pour out of the printer. When you get
cartridges refilled, you must be sure to get the proper toner quality.

Paper

When comparing the costs of using different printer technologies, do not forget to make
allowances for machines that require special paper. In most cases, approved media for
such printers is available only from the machine's manufacturer. You must pay the price
the manufacturer asks, which, because of the controlled distribution and special
formulation, is sure to be substantially higher than buying bond paper at the office supply
warehouse.
With inkjet printers, paper is another profit area for machine makers. Getting the highest
quality from an inkjet requires special paper. Inkjet ink tends to blur (which reduces both
sharpness and color contrast) because it dries at least partly by absorption into paper.
Most inkjet printers work with almost any paper stock, but produce the best
results—sharpest, most colorful—with specially coated papers that have controlled ink
absorption. On non-absorbent media (for example, projection acetates), the ink must dry
solely by evaporation, and the output is subject to smudging until the drying process
completes. Of course, the treated paper is substantially more expensive, particularly if
you restrict yourself to buying paper branded by the printer maker. If you want true 720
dpi quality, however, it is a price you have to pay.

Printer Control

To make what you see on paper resemble what you preview on your
monitor screen, your printer requires guidance from your computer
and your software to tell it exactly how to make a printout look. The
computer must send the printer a series of instructions, either to
control the most intimate operation of the dumb printer or to coax
special features from the brainy machine.
The instructions from the computer must be embedded in the
character stream because that is the only data connection between the
printer and its host. These embedded instructions can take on any of
several forms.

Character Mode

In printing text from your PC, a printer receives a stream of ASCII
code characters. The printer matches those characters to one of the
fonts in its memory, which may be ROM or RAM. It then uses the
pattern from memory to control the placement of bits on paper. All
commands to the printer guide the formation and arrangement of
these characters.
Because the entire process is oriented toward making characters, the
commands to the printer are gauged in terms of characters and lines
of text. Most commands deal with fonts and character positioning.
For example, the printer may have several fonts in memory and use
specific commands to select each one. Other commands tell the
printer to emphasize characters or to slant them into italics.
Placement commands tell the printer where to put the characters
including how to space them.
The printer's interface uses the same signals to carry character data as
well as commands. The printer needs some way to distinguish one
from the other so that it doesn't print its commands on paper garbled
with the text you actually want to see as hard copy. To prevent this
confusion, most printers use a combination of two ideas for their
commands: special control characters that code commands that also
tell the printer not to print the character symbol; and escape
sequences, a series of characters preceded by a special command
telling the printer not to print the command characters.
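A printer's firmware has to make exactly this distinction while parsing the incoming
byte stream. The following sketch (Python, and a deliberate simplification that assumes
every escape sequence is ESC plus a single character) separates printable data from
embedded commands:

```python
# Minimal parser separating printable text from embedded commands,
# in the spirit of the two mechanisms described above. Simplifying
# assumption: every escape sequence is ESC plus one character.

ESC = 0x1B

def split_stream(data):
    """Split a byte stream into printable text and command bytes."""
    text, commands = [], []
    i = 0
    while i < len(data):
        byte = data[i]
        if byte == ESC and i + 1 < len(data):
            commands.append(data[i:i + 2])   # escape sequence
            i += 2
        elif byte < 32:
            commands.append(data[i:i + 1])   # bare control character
            i += 1
        else:
            text.append(chr(byte))           # printable data
            i += 1
    return "".join(text), commands

print(split_stream(b"Hello\x08\x1bIWorld"))
```

A real printer's parser handles multi-character sequences and the eight-bit control
range as well, but the principle is the same: the command bytes never reach the page.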

Control Characters

Some of the most necessary instructions are the most common, for
instance to backspace, tab, or even underline characters. In fact, these
instructions are so commonplace that they were incorporated into the
ASCII character set and assigned specific values. To backspace, for
instance, your computer just sends your printer a byte with the ASCII
value 08, the backspace character. Upon receiving this character, the
printer backspaces instead of making a mark on the paper. The entire
group of these special ASCII values make up the set of control
characters.
The American National Standards Institute has defined functions and
symbols for the control characters in the ASCII code. Table 20.4 lists
these control characters.

Table 20.4. ANSI Control Characters

ASCII value Control value Name Function


0 ^@ NUL Used as a fill character
1 ^A SOH Start of heading (indicator)
2 ^B STX Start of text (indicator)
3 ^C ETX End of text (indicator)
4 ^D EOT End of transmission; disconnect character
5 ^E ENQ Enquiry; request answerback message
6 ^F ACK Acknowledge
7 ^G BEL Sounds audible bell tone
8 ^H BS Backspace
9 ^I HT Horizontal tab
10 ^J LF Line feed
11 ^K VT Vertical tab
12 ^L FF Form feed
13 ^M CR Carriage return
14 ^N SO Shift out; changes character set
15 ^O SI Shift in; changes character set
16 ^P DLE Data link escape
17 ^Q DC1 Data Control 1, also known as XON
18 ^R DC2 Data Control 2
19 ^S DC3 Data Control 3, also known as XOFF
20 ^T DC4 Data Control 4
21 ^U NAK Negative acknowledge
22 ^V SYN Synchronous idle
23 ^W ETB End of transmission block (indicator)
24 ^X CAN Cancel; immediately ends any escape sequence
25 ^Y EM End of medium (indicator)
26 ^Z SUB Substitute (also, end-of-file marker)
27 ^[ ESC Escape; introduces escape sequence
28 ^\ FS File separator (indicator)
29 ^] GS Group separator (indicator)
30 ^^ RS Record separator (indicator)
31 ^_ US Unit separator (indicator)
32 SP Space character
127 DEL Delete; often treated as no operation
128 Reserved Reset parser with no action (Esc)
129 Reserved Reset parser with no action (Esc A)
130 Reserved Reset parser with no action (Esc B)
131 Reserved Reset parser with no action (Esc C)
132 IND Index; increment active line (move paper up)
133 NEL Next line; advance to first character of next line
134 SSA Start of selected area (indicator)
135 ESA End of selected area (indicator)
136 HTS Set horizontal tab (at active column)
137 HTJ Horizontal tab with justification
138 VTS Set vertical tab stop (at current line)
139 PLD Partial line down
140 PLU Partial line up
141 RI Reverse index (move paper down one line)
142 SS2 Single shift 2
143 SS3 Single shift 3
144 DCS Device control string
145 PU1 Private use 1
146 PU2 Private use 2
147 STS Set terminal state
148 CCH Cancel character
149 MW Message writing
150 SPA Start of protected area (indicator)
151 EPA End of protected area (indicator)
152 Reserved Function same as Esc X
153 Reserved Function same as Esc Y
154 Reserved Function same as Esc Z
155 CSI Control sequence introducer
156 ST String terminator
157 OSC Operating system command (indicator)
158 PM Privacy message
159 APC Application program command

Note that control characters fall into two ranges, those with an ASCII
value of 32 and below, and those with an ASCII value of 127 or
higher, with some repetition between the two. With the exception of
ASCII 127, the higher code values require a full eight-bit digital
code. Many early printers, particularly those using serial connections,
were capable of understanding only seven-bit values. In addition,
many manufacturers have assigned extra printable characters to the
higher ASCII code values. Hence only the lower set of ASCII control
characters is truly universal.
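The two ranges described here can be captured in a few lines. This sketch (Python)
classifies an ASCII value per the split the paragraph lays out:

```python
# The two control ranges noted above: values of 32 and below form the
# universal, seven-bit-safe lower set; values from 127 through 159
# form the upper set, which needs a full eight-bit data path and is
# less consistently honored by printers.

def control_range(value):
    """Classify an ASCII value per the two ranges in Table 20.4."""
    if value <= 32:
        return "lower"       # truly universal control characters
    if 127 <= value <= 159:
        return "upper"       # requires a full eight-bit code
    return "printable"

for value in (8, 27, 65, 141):
    print(value, control_range(value))
```

Backspace (8) and escape (27) land in the lower set; reverse index (141) lands in the
upper set; an ordinary letter like ASCII 65 is printable.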

Escape Sequences

The ASCII characters available for printer commands are
few—particularly when the printer uses a seven-bit code or an
extended character set—and the number of functions that the printer
can carry out are many. To sneak additional instructions through the
data channel, most printers use escape sequences.
The name comes from the control character selected as the marker
that indicates that the group of characters following it is a command
rather than printable data. This special introducer character has the
ASCII value of 27 and is called "escape" by programmers,
abbreviated ESC.
In most commands, the escape character by itself does nothing. It
serves only as an attention getter. It warns the printer that the ASCII
character or characters that follow should be interpreted as
commands rather than printed out. Table 20.5 lists some examples of
simple escape sequences, those designated by ANSI for seven-bit
printer environments.

Table 20.5. Escape Sequences for Seven-Bit Environments

Escape sequence Function


Esc D Index
Esc E Next line
Esc H Set horizontal tab
Esc Z Set vertical tab
Esc K Partial line down
Esc L Partial line up
Esc M Reverse index
Esc N Single shift 2
Esc O Single shift 3
Esc P Device control string
Esc [ Control sequence introducer
Esc \ String terminator
Esc ] Operating system command
Esc ^ Private message
Esc _ Application program command
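To see how such a sequence rides inside the ordinary data stream, consider this sketch
(Python; the text fragments are invented placeholders):

```python
# Embedding a command from Table 20.5 in the character stream. ESC
# (ASCII 27) warns the printer that what follows is a command, not
# text to print.

ESC = b"\x1b"

def escape(letter):
    """Build a simple escape sequence, e.g. escape('H') for set horizontal tab."""
    return ESC + letter.encode("ascii")

# A fragment of printable text with an embedded Esc H command:
stream = b"Column one" + escape("H") + b"Column two"
print(stream)
```

The printer prints the two text fragments and, on seeing the ESC byte, executes the H
command between them instead of printing an "H".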

Even this range of commands is modest compared to the possibilities
and potentials of a modern printer. Consequently, every printer
maker has expanded on the basic list of simple escape sequences.
Each printer maker decides what commands its printers can carry out
and develops its own set of escape sequences. These, together with
control characters, make up the command set of the printer. The
command set determines the compatibility of the printer. Programs
must be able to send the printer the commands that it understands.
With modern operating systems, the printer's driver sends out the
actual commands, so it is important to match the driver you use to
your printer. When you install a printer, you select the proper driver,
and this process ensures that you send the correct commands. When
using old DOS applications, however, you must configure each
application to match your printer.
One way smaller printer manufacturers avoided the need to write
their own drivers (or coerce programmers to include commands
specific to their products in applications) was to make their printer
respond to the commands of another printer. In other words, the
smaller manufacturer designed its products to use the same command
set as a larger, better known, and more widely supported
manufacturer. The products of the smaller manufacturer were said to
emulate those of the larger manufacturer. Often printers are made to
respond to the commands used by a number of other printers. All the
command sets that a given printer knows how to use are termed the
emulations of the printer.
The very early market for high quality printers was dominated by two
companies, the Diablo division of Xerox and Qume, owned by the
ITT conglomerate at the time the PC was introduced. (The company
has changed hands several times since then.) Although the daisy
wheel printers made by these manufacturers are now modern
antiques, their commands were widely used by other printer
manufacturers. Even some of the latest laser and inkjet printers
incorporate many of their escape sequences into their command sets
because they provide a good range of commands required in text
printing. A condensed listing of the two command sets is given in
Table 20.6.

Table 20.6. Diablo and Qume Printer Escape Sequences

Escape sequence Function


Esc BS *Backspace 1/120 inch
Esc LF *Negative (backwards) line feed
Esc SO Shift to primary mode
Esc SI Return to normal mode
Esc RS n *Define vertical spacing increment as n-1
Esc US n *Set horizontal space increment to n-1
Esc VT n *Absolute vertical tab to line n-1
Esc HT *Absolute horizontal tab to column n-1
Esc SP Print special character position 004
Esc SUB I Initialize printer
Esc SUB SO Terminal self-test
Esc CR P Initialize printer
Esc 0 *Set right margin
Esc 1 *Set horizontal tab stop
Esc 2 *Clear all horizontal tab stops
Esc 3 *Graphic on 1/60 inch
Esc 4 *Graphics off
Esc 5 *Forward print
Esc 6 *Backward print
Esc 8 *Clear horizontal tab stop
Esc 9 *Set left margin
Esc . Auto line feed on
Esc , Auto line feed off
Esc < Auto bi-directional printing on
Esc > Auto bi-directional printing off
Esc + Set top margin
Esc - Set bottom margin
Esc @ T Enter user test mode
Esc # Enter secondary mode
Esc $ *WPS (proportional spaced printwheel) on
Esc % *WPS (proportional spaced printwheel) off
Esc ( n Set tabs at n (n can be a list)
Esc ) n Clear tabs at n (n can be a list)
Esc / Print special character position 002
Esc C n m Absolute horizontal tab to column n
Esc D *Negative half-line feed
Esc E n m Define horizontal space increments
Esc F n m Set form length
Esc G *Graphics on 1/120 inch
Esc H n m l Relative horizontal motion
Esc I Underline on
Esc J Underline off
Esc K n Bold overprint on
Esc L n n Define vertical spacing increment
Esc M n Bold overprint off
Esc N No carriage movement on next character
Esc O Right margin control on
Esc P n Absolute vertical tab to line n
Esc Q Shadow print on
Esc R Shadow print off
Esc S No print on
Esc T No print off
Esc U *Half-line feed
Esc W Auto carriage return/line feed on
Esc V n m l Relative vertical paper motion
Esc X Force execution
Esc Y Right margin control off
Esc Z Auto carriage return/line feed off
Esc e Sheet feeder page eject
Esc i Sheet feeder insert page from tray one
Esc x Force execution
Qume Sprint 11 commands shown; *indicates commands shared by Diablo 630.

Because the manufacturers developed these commands for fully formed character printers, the command sets lack the ability to
control the text enhancing features made possible with modern bit
mapped technology. Consequently, makers of bit image printers
developed their own command sets to take care of these
enhancements. The most widely emulated of these are the command
sets developed by Epson for its line of dot matrix printers. Most
modern, low cost, impact dot matrix printers emulate the Epson
command set.
Early dot matrix printers sometimes emulated IBM's Graphics
Printer, a machine based on the Epson MX-80. The IBM machine
used many but not all Epson commands. The chief differences
between the two were their character sets. IBM used the upper half of
the 256 ASCII values for a variety of special symbols, the IBM
extended character set, while Epson used those values for italics.
Over time, Epson recognized that its command set had become a
standard language for printers, and it developed and promoted it as
such. This language is commonly known as Esc/P because of the
form of its commands.
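Escape sequences like these are nothing more than byte strings that software writes to the printer port. The following Python sketch assembles a few of the Esc/P sequences listed in Table 20.7; the helper names are invented for illustration.

```python
# Sketch of composing Esc/P escape sequences as raw bytes.
ESC = b"\x1b"  # ASCII 27, the escape character that introduces each command

def escp(command, *params):
    """Build one Esc/P sequence: Esc, a command character, optional parameter bytes."""
    return ESC + command + bytes(params)

def emphasized(text):
    """Bracket text with Esc E (emphasized on) and Esc F (emphasized off)."""
    return escp(b"E") + text.encode("ascii") + escp(b"F")

def underline(on):
    """Esc - n: n=1 turns underline mode on, n=0 turns it off."""
    return escp(b"-", 1 if on else 0)
```

Writing emphasized("Total") to the printer port sends Esc E, the text, then Esc F, exactly as the table describes.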
When in 1992 Epson added state-of-the-art high resolution inkjet
printers to its lineup, instead of shifting to another control system, it
expanded Esc/P to include support for scalable fonts, raster
graphics, and page-printer oriented advanced paper handling to create
the Esc/P2 system. Many modern inkjet and laser printers have
emulation modes in which they use the Esc/P2 system. Table 20.7
lists the escape sequences used by the Esc/P2 system.

Table 20.7. Epson Esc/P2 Command Set Escape Sequences

Code  Byte values (decimal)  Byte values (hex)  Function
Esc EM 27 25 1B 19 Cut sheet feeder control
Esc SP 27 32 1B 20 Selects character space
Esc ! 27 33 1B 21 Selects mode combinations

Esc $ 27 36 1B 24 Set absolute horizontal tab
Esc % 27 37 1B 25 Selects active character set
Esc ( - 27 40 116 1B 28 74 Select line/score
Esc ( ^ 27 40 94 1B 28 5E Print data as characters
Esc ( C 27 40 67 1B 28 43 Set page length (defined unit)
Esc ( G 27 40 71 1B 28 47 Select graphics mode
Esc ( U 27 40 85 1B 28 55 Set unit
Esc ( V 27 40 86 1B 28 56 Set absolute vertical position
Esc ( c 27 40 99 1B 28 63 Set page format
Esc ( i 27 40 105 n 1B 28 69 n Microweave mode; on=1, off=0
Esc ( t 27 40 116 1B 28 74 Assign character table
Esc ( v 27 40 118 1B 28 76 Set relative vertical position
Esc : 27 58 1B 3A Copies ROM to user RAM
Esc & 27 38 1B 26 Defines user characters
Esc . 27 46 1B 2E Print raster graphics
Esc \ 27 92 1B 5C Move print head
Esc @ 27 64 1B 40 Initialize printer
Esc - n 27 45 n 1B 2D n Underline mode
n=1 or 49, turns underline mode on
n=0 or 48, turns underline mode off
Esc * n 27 42 n 1B 2A n Select bit image mode (see Table 20.8)
Esc 0 27 48 1B 30 Set line spacing at 1/8 inch
Esc 2 27 50 1B 32 Set line spacing at 1/6 inch
Esc 3 n 27 51 n 1B 33 n Set line spacing at n/216 inch (n between 0 and 255)
Esc 4 27 52 1B 34 Turns alternate character (italics) set on
Esc 5 27 53 1B 35 Turns alternate character (italics) set off
Esc 6 27 54 1B 36 Deactivate high order control codes
Esc 7 27 55 1B 37 Restores high order control codes
Esc D 27 68 1B 44 Set horizontal tab stop
Esc E 27 69 1B 45 Turns emphasized mode on
Esc F 27 70 1B 46 Turns emphasized mode off

Esc G 27 71 1B 47 Turns double-strike mode on
Esc H 27 72 1B 48 Turns double-strike mode off
Esc I 27 73 1B 49 Control code select
Esc J n 27 74 n 1B 4A n Tentative n/216-inch line spacing
Esc K 27 75 1B 4B Normal-density bit image data follows
Esc L 27 76 1B 4C Dual-density bit image data follows
Esc M 27 77 1B 4D Elite-sized characters on
Esc N n 27 78 n 1B 4E n Set number of lines to skip-over perforation
n=number of lines to skip between 1 and 127
Esc O 27 79 1B 4F Turn skip-over perforation off
Esc P 27 80 1B 50 Elite mode off/Pica-sized characters on
Esc Q n 27 81 n 1B 51 n Sets the right margin at column n
Esc R n 27 82 n 1B 52 n Selects international character set
n=0, USA
n=1, France
n=2, Germany
n=3, England
n=4, Denmark I
n=5, Sweden
n=6, Italy
n=7, Spain
n=8, Japan
n=9, Norway
n=10, Denmark II
Esc S n 27 83 n 1B 53 n Superscript/subscript on mode
n=0 or 48, superscript mode on
n=1 or 49, subscript mode on
Esc T 27 84 1B 54 Turns superscript/subscript off
Esc U n 27 85 n 1B 55 n Unidirectional/bidirectional printing
n=0 or 48, turn bidirectional printing on
n=1 or 49, turn unidirectional printing on
Esc W n 27 87 n 1B 57 n Enlarged (double-width) print mode
n=1 or 49, enlarged print mode on
n=0 or 48, enlarged print mode off
Esc X 27 88 n 1B 58 n Select font by pitch and point
Esc Y 27 89 1B 59 Double-speed, dual-density bit image data follows
Esc Z 27 90 1B 5A Quadruple-density bit image data follows

Esc g 27 103 1B 67 Select 15-pitch (15 cpi) characters
Esc k 27 107 1B 6B Select family of type styles
Esc l n 27 108 1B 6C n Sets the left margin at column n
Esc p n 27 112 n 1B 70 n Proportional printing
n=0 or 48, turn proportional printing off
n=1 or 49, turn proportional printing on
Esc z 27 122 1B 7A Select letter quality or draft

Line Printer Graphics

Printing graphics with a line printer is a matter of controlling the mechanical movement of the print head and the firing of a hammer or
spraying of jets. Commands to print graphics are consequently
couched in terms of these basic movements and operations.
Line printer graphics commands generally take one of two forms:
banded graphics, in which the image data is split into horizontal
bands, and raster graphics, in which the image is described one line
of dots at a time.

Banded Graphics

To control the printing of banded graphics, the general form of the command is to define each line in graphic terms. A line comprises a
number of columns of a few rows of dots. The rows correspond to the
number of wires or nozzles (or whatever) in the print head that are
actually used to print graphics. Graphic line printing involves telling
the print head which dots to make at each column/row position in its
travels. An entire image is thus made from a series of horizontal
bands.

Line printers differ to a great degree in how precisely they can move
paper through their mechanisms. Some are designed to allow
exacting tolerances and move each sheet in increments of the tiniest
fractions of an inch (as small as 1/720 inch). Although the distance
between the nozzles or wires in the print head is fixed, the printer can
improve its resolution by moving the paper slightly so that the dots of
the second printer row fall between instead of below the previous
line. Similarly, to increase the bit density horizontally, the printer
need only move its print head a smaller increment between dot
making operations.
Graphic commands to print dots instruct the print head to make a tall
column of dots—9, 24, 48 or more dots high—in parallel. Depending
on the number of printing elements in the print head, a single column
in one band may require anywhere from one to six (or more) bytes of
code. To make graphics, most line printers require only a series of
these data bytes.
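For the simplest case, an 8-wire band, each column is a single data byte with one bit per wire. The sketch below shows one way the packing might work; the ordering chosen here, top wire in the most significant bit, is an assumption, since each printer documents its own bit order.

```python
def pack_column(dots):
    """Pack a top-to-bottom list of 8 dot flags (True = fire that wire)
    into one data byte, top wire in the most significant bit.
    Taller print heads (24 or 48 wires) would simply use more bytes."""
    assert len(dots) == 8, "this sketch handles an 8-wire band only"
    byte = 0
    for dot in dots:
        byte = (byte << 1) | (1 if dot else 0)
    return byte
```

A column firing only the top wire packs to 0x80; firing all eight wires packs to 0xFF.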
Making sense out of the data requires the printer to know how to
interpret it—not only which nozzles or wires to fire but also how fast
to move the print head across the sheet and how far to advance the
paper between each band. Each of the speed/distance options defines
a graphic printing mode, which most printer makers describe as a
graphic resolution. To begin printing graphics, a program sends out a
command to set the graphics mode, then spews the data to the printer.
The mode command works like an escape code, instructing the
printer not to print the command and to print the ensuing data as a
graphic instead of text. In fact, the graphics mode command usually
is an escape code. For example, the Esc/P system incorporates
several mode-setting commands. Early printers used a number of
different escape codes for setting different modes (for example, Esc
^, Esc K, Esc L, Esc Y, and Esc Z). All of these have been replaced by
a single graphics mode command, Esc *.
Besides indicating the resolution (encoded as the first byte following
the command, as noted in Table 20.8), a complete Esc * command
also indicates the number of columns to print as a two-byte value (the first byte is the least significant), followed by all of the necessary data bytes.
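Putting those pieces together, a hypothetical helper can assemble a complete Esc * command. The least-significant-byte-first column count follows the description above; one data byte per column assumes an 8-dot mode.

```python
# Sketch: assembling an Esc * banded-graphics command.
ESC = b"\x1b"  # ASCII 27

def esc_star(mode, columns):
    """Build an Esc * command: the mode byte (a Table 20.8 parameter),
    the column count with its least significant byte first, then one
    data byte per column (8-dot modes assumed for simplicity)."""
    n = len(columns)
    return ESC + b"*" + bytes([mode, n & 0xFF, n >> 8]) + bytes(columns)
```

For 300 columns, the count bytes come out as 44 then 1 (44 + 256 = 300), illustrating the least-significant-first ordering.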

Table 20.8. Esc/P Graphics Modes and Resolutions

Esc * parameter  Horizontal resolution (dpi)  Vertical resolution, 24-wire (dpi)  Vertical resolution, 48-wire (dpi)  Adjacent dots OK?  Dots per column
0 60 60 60 Yes 8
1 120 60 60 Yes 8
2 120 60 60 No 8
3 240 60 60 No 8
4 80 60 60 Yes 8
6 90 60 60 Yes 8
32 60 180 180 Yes 24
33 120 180 180 Yes 24
38 90 180 180 Yes 24
39 180 180 180 Yes 24
40 360 180 180 No 24
64 60 N/A 60 Yes 48
65 120 N/A 120 Yes 48
70 90 N/A 180 Yes 48
71 180 N/A 360 Yes 48
72 360 N/A 360 No 48
73 360 N/A 360 Yes 48

Note that due to mechanical limitations of impact print heads, in some modes printing dots in the same row in two adjacent columns is not permitted.
After the printer finishes the last of the specified data bytes, it
resumes normal operation, waiting for further Esc * commands, other
escape codes, or text to print.

Raster Graphics

When printers have internal buffers large enough to hold an entire band of graphics, they can accept data in raster form and rearrange it
as suitable print head instructions. The raster describes the image one
line of dots at a time, in sequence, the data of each line followed by
that of the line directly beneath it.
For example, Esc/P2 adds another new graphic command, Esc . (that
is, the escape character followed by a period) for raster graphics. This
command describes the format of the image data including the
horizontal and vertical dot density, or resolution, the number of rows of dots, and the number of columns of dots. It also indicates whether
the raster data has been compressed.
In the Esc/P2 scheme, this command allows for printer resolutions as
high as 3600 dots per inch because it uses that increment as its basic
measuring unit. The format of the command requires six descriptive
bytes before the image data, as follows:
Whether the image is compressed (0 for uncompressed, 1
for compressed)
The horizontal dot density in units of 1/3600 inch
The vertical dot density in units of 1/3600 inch
The number of rows of dots in this command (one for true
raster graphics)
The least significant byte of the number of columns
The most significant byte of the number of columns
For example, to select 720 dpi graphics mode using uncompressed
data, the command would look like this:

Esc . 0 5 5 1 nL nH d1 ... dn

The variables nL and nH represent the number of columns and d1 through dn are the actual raster image data.
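The six-byte header can be assembled mechanically from the rules above. In this sketch the helper name is invented, and packing eight horizontal dots into each data byte is an assumption made for simplicity.

```python
# Sketch: building an Esc/P2 "Esc ." raster command for one row of dots.
ESC = b"\x1b"  # ASCII 27

def esc_dot(dpi, row_bytes, compressed=False):
    """Build an Esc . command. Dot densities are expressed in units of
    1/3600 inch, so 720 dpi becomes 3600 // 720 = 5. One row of dots
    is sent, with eight horizontal dots assumed per data byte."""
    density = 3600 // dpi
    n = len(row_bytes) * 8  # number of dot columns in this row
    header = bytes([1 if compressed else 0,  # compression flag
                    density, density,        # horizontal, vertical density
                    1,                       # number of rows of dots
                    n & 0xFF, n >> 8])       # column count, nL then nH
    return ESC + b"." + header + row_bytes
```

For 720 dpi uncompressed data the header bytes begin 0, 5, 5, 1, matching the Esc . 0 5 5 1 example above.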

Postscript

Although page printers create a bit image of a full page before printing it, describing a page as a raster or other form of bit image
and sending such data to the printer is time consuming, wasteful, and
often unnecessary. Most printed pages involve text, sometimes with a
smattering of graphics. To encode such data more efficiently, Adobe
Systems developed Postscript as a page description language in
1985. Postscript is a programming language; instead of carrying out a
calculation, it tells a printer (or other device) how to arrange text and
graphics on a printed page.
The Postscript language comprises a group of commands and codes
that describe graphic elements and indicate where they are to appear
on the printed page. Your computer sends high level Postscript
commands to your laser printer, and the printer executes the
commands to draw the image itself. In effect, the data processing
load is shifted to the printer, which, in theory, has been optimized for
implementing such graphics commands. Nevertheless, it can take
several minutes for the printer to compute a full page image after all
the Postscript commands have been transferred to it. (Older
Postscript printers might take half an hour or more to work out a full
page of graphics.)
The advantage of Postscript is its versatility. It uses outline fonts,
which can be scaled to any practical size. Moreover, PostScript is
device and resolution independent, which means that the same code
that controls your 300 dpi printer runs a 2500 dpi typesetter—and
produces the highest possible quality image at the available
resolution level. You can print a rough draft on your LaserJet from a
Postscript file and, after you have checked it over, send the same file
to a typesetter to have a photo-ready page made.
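To give a taste of what such code looks like, the sketch below assembles a trivial PostScript program that prints one line of text; the same string could be sent to any PostScript device, desktop printer or typesetter alike. The Python helper name is invented for illustration.

```python
def hello_page(text, point_size=24):
    """Assemble a minimal PostScript program that draws one line of
    text near the top of a US letter page (coordinates in points,
    72 per inch, measured from the lower-left corner)."""
    return "\n".join([
        "%!PS-Adobe-3.0",  # standard PostScript header comment
        "/Helvetica findfont %d scalefont setfont" % point_size,
        "72 720 moveto (%s) show" % text,  # 1 inch in, 10 inches up
        "showpage",        # render and eject the page
    ])
```

The printer's interpreter, not the PC, executes findfont, moveto, show, and showpage to build the page image.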

In June 1990, Adobe Systems announced a new version of Postscript, Level 2, which incorporated several enhancements. The most obvious
are speed and color. Postscript Level 2 can dash through documents four to five times faster, thanks to adopting the font rendering technology used in Adobe Type Manager. In addition, Postscript Level 2 incorporates a new generalized class of objects called "resources" that can be pre-compiled, named, cached, and downloaded to the memory (or disk) inside a Postscript device. Nearly anything that is printed can be classed as a resource; artwork, patterns, and forms are all handled in this streamlined manner. Postscript Level 2 also manages its memory use much better, no longer requiring that programs pre-allocate memory for downloaded fonts and bit mapped graphics.
It also incorporates new file management abilities to handle disk
based storage inside PostScript devices. In addition, Postscript Level
2 has built-in compression/decompression abilities so that bit mapped
images (and other massive objects) can be transmitted more quickly
in compressed form, then expanded inside the printer or other device.
Color first was grafted onto Postscript in 1988, but Postscript Level 2
takes color to heart. Whereas each Postscript device formerly had its own proprietary color handling method, with Postscript Level 2 color handling is device independent. To improve color quality, the new version also allows color halftone screening at any angle, which helps eliminate moiré patterns and make sharper renderings.
Level 2 also enhances font handling. Old Postscript limited fonts to
256 characters each. Level 2 allows for composite fonts, which permit
an essentially unlimited number of characters. Larger fonts are
particularly useful for languages that do not use the Roman alphabet
(such as Japanese) or those that have a wealth of diacritical marks.
Level 2 also incorporates Display PostScript, an extension that is
designed to translate Postscript code into screen images. Device
independent support for many of the more generalized printer
features is also available so that paper trays, paper sizes, paper
feeding, even stapling, can be controlled through Postscript.

Of course, ordinary Postscript printers cannot take advantage of these new features, while Level 2 machines are generally (though not completely) backward compatible with older code. In most cases, a Postscript Level 2 printer handles ordinary Postscript commands without a problem, but realizing the full features of Level 2 requires new Postscript Level 2 software drivers.
Postscript works best in describing text pages. In describing graphic
images, Postscript (as with any page description language) can
actually slow down graphic printing—particularly color printing. To
print a bit image with a page description language, your PC must first
translate the bit image into commands used by the page description
language. Then the printer must convert the commands into the
image raster before it can print out the image. This double conversion
wastes time. Printers that sidestep the page description language by
using their own specialized drivers typically send only the bits of the
image through the printer interface. The printer can then quickly
rasterize the image bits. The downside of the special driver technique
is that each application (or operating environment) requires its own
driver software—which usually means that such printers work only
with Windows and a handful of the most popular applications.

PCL

Hewlett-Packard's Printer Control Language, most often abbreviated with its initials, PCL, was first developed as a means to control a
rudimentary (by current standards) inkjet printer. As HP has
introduced increasingly sophisticated laser printers, the language has
been expanded and adapted. It now is on its sixth major revision,
called PCL6.
PCL functions like an elaborate printer command set, with lengthy strings of characters initiating the various LaserJet functions. It is not
a true page description language.

The initial versions of PCL—those before PCL5—were little more than a system of control codes for eliciting various printer
functions, including font selection. PCL5 pushed the language into
direct competition with PostScript by including more line drawing
commands and the ability to handle scalable (outline) fonts. Version
6 makes PCL a modular, object oriented language designed for
handling graphics intensive printing. Its primary goal is true
WYSIWYG printing, and it achieves this goal with innovations like
font synthesis.
Compared to earlier versions, PCL6 requires less processing on your
PC, allowing your applications to finish more quickly. Its more
complete set of graphics primitives accelerates the printing of
complex graphics and reduces the amount of data that your PC must
ship to your printer (or across the network line to your printer).
Introduced in March 1996, Version 6 is a standard feature of
PCL-based Hewlett-Packard printers introduced after that date.
PCL5 was formally introduced with the announcement of the
LaserJet III on February 26, 1990 and followed four earlier versions
of the PCL language. Although PCL is normally associated with
LaserJet printers, the initial two versions of PCL predated the
introduction of the first laser printer by any manufacturer. The first
printer to use the original version of PCL was the HP ThinkJet, an
inkjet engine. PCL3—the third version—was the language that
controlled HP's first laser printer, the original LaserJet.
Printers compatible with the PCL3 standard can use only cartridge
fonts in their text modes. Full page graphics must be generated by
their computer hosts and transferred bit by bit to the printer.
The next major revision to PCL was a response to the needs of early
desktop publishing and similar applications that demanded more than
just a few cartridge based fonts. This revision was PCL4, which
added the ability to have multiple fonts on the same page and to use
downloaded fonts. These were bit mapped fonts, however, and you
could only print one orientation on a given page. You also could do
some rudimentary box drawing and filling of boxes.

Besides scalable fonts, PCL5 adds vector graphics, sophisticated page formatting capability, the ability to handle portrait and
landscape orientations on the same page, print white on black, and
turn fonts into some pattern or shade. In addition, PCL5 incorporates
a pared down version of the HP-GL, Hewlett-Packard's Graphics
Language that has become an industry standard means of
commanding plotters (see the following "Plotters" section).
PCL5 can yield on-paper images that are effectively identical to those
made on PostScript printers, but there are substantial differences
between PCL 5 and PostScript. PostScript is essentially device
independent. The PostScript code sent out of a PC is the same no
matter whether it is meant to control a relatively inexpensive desktop
laser or an expensive typesetting machine.
As a printer language, PCL5 is device dependent. It currently works
only with 300 dpi laser printers, so its code cannot be used to drive
typesetters. But PCL5 is less expensive than PostScript because it
requires no license to use.
PCL5 would be a curiosity if it were only to be used in
Hewlett-Packard printers, as it was for the first year of its existence.
But that situation is changing, making PCL5 into the latest de
facto standard for controlling laser printers.
A number of companies have developed controllers for laser printers
that understand PCL5. These controllers are bought by printer
manufacturers to build their products. With easy access to these high
performance controllers, printer makers have embraced PCL5 and PCL5e as the standards for compatibility. That means you can get
PostScript quality at a lower price, more than enough reason to look
for PCL5e compatibility in your next laser printer.

Architecture

PCL brings three control systems together under a single banner—control characters (as described in the preceding "Control
Characters" section), native PCL commands, and HP-GL commands
(described in the following "Plotters" section). The heart of PCL is,
of course, its native commands, which provide access to the printer's
control structure. These commands operate all the features of the
printer beyond the range of control characters with the exception of
drawing vector graphics, which rely on HP-GL.
Among other functions, PCL commands can elicit an immediate
action from the printer or set a parameter that controls subsequent
functions, for example shifting from portrait to landscape printing
mode or specifying a font. Once a PCL command sets a parameter,
that setting remains in effect until another PCL command sends a
new setting for that parameter, another command alters the
parameter, or the printer is reset (either by command or by switching
it off). For this reason most commercial applications reset PCL
printers at the beginning of each print job so that they can be sure the
machine will operate with known parameter settings.
PCL commands must be sent to printers in the proper order to control
the production of each page. Hewlett-Packard calls this ordering the
command hierarchy and arranges the commands into eight groups:
job control commands, page control commands, cursor positioning
commands, font selection commands, font management commands,
graphics commands, print model features, and macro commands.
Job control commands remain in effect throughout the
print job and ordinarily are not sent again until the
beginning of the next print job. They tell your printer how
to handle the mechanics of the print job—where the image
appears on the page, which sides of the sheet or paper bin
to use, and what measuring units to use in the page
description. All job control commands are usually sent at
the beginning of a print job as a group.
Page control commands select the page source, size,
orientation, margins, and text spacing used in a document.
These commands let you specify the distance between rows
and columns of text, set left and right margins, change the orientation of the page and text, and alter the line spacing
and page size.
Cursor positioning commands set the point of reference
for printing text called the cursor. The PCL cursor acts just
like the cursor on your monitor screen, indicating the
position where the next character will print. PCL lets you indicate the cursor position in several ways: by moving it to an absolute position on the page, or by making a relative move, in which the new cursor position is measured in relation to the old position.
Font selection commands let you change the typeface in
use. You can select between built-in, cartridge, and soft
fonts. Under PCL, a printer identifies a font by several of
its characteristics including its symbol set, spacing, pitch,
height, style, stroke weight, and typeface family. The range
of font commands lets you set all of these values, each
value requiring a separate command. To speed processing,
a PCL printer keeps two fonts active—one primary and one
secondary—and shifts between them with a single
command.
Font management commands control the downloading
and manipulation of soft fonts. These commands let you
add a new font into the printer from your PC, to select it or
another font for printing at the cursor position, remove
fonts from memory, or carry out other housekeeping
functions.
Graphics commands instruct a printer how to build
dot-per-bit raster images and to fill or shade rectangular
areas with pre-defined patterns. Drawing more complex
shapes requires the use of HP-GL commands.
Print model features elaborate on graphics commands and
allow you to fill images and characters with a pre-defined
color or gray pattern, depending on which your printer
supports.

Macro commands reduce the number of commands that you must send to your printer to carry out the most frequent
tasks. For example, you can use a single macro instruction
to describe a complete page format or add a graphic logo to
a letterhead. PCL versions more recent than Version 5 let
you nest macros, so one macro may call another. Macros
can be permanent or temporary—reset erases temporary
macros but leaves permanent macros in memory.
Switching off your printer erases both.

Command Structure

Every PCL printer command is an escape sequence that comprises


two or more characters. HP calls the first the introducer, and it is
always the ubiquitous Esc character. As in any escape sequence, the
introducer tells the printer that the next characters represent a
command rather than text. After the introducer, some PCL
commands use a single character to make two-character commands.
Others follow the introducer with several parameters to make
parameterized commands.
The two-character form of PCL command escape sequences comprises only the introducer and a second character that defines the
operation for the printer to carry out. The second character may be
any ASCII value between 48 and 126 (decimal). PCL uses its
two-character commands for only a few functions including the three
following:
Esc E Printer reset
Esc 9 Resets left and right margins
Esc = Half-line feed

PCL's parameterized commands add one or more parameters after the introducer and a second character that identifies the command. In
general, parameterized commands take the following form:

<Esc> X Y # z1 # z2 # Zn
Hewlett-Packard gives a specific name to each character in a parameterized command. These are as follows:
X is the parameterized character that identifies the command and lets the printer know to expect additional parameters. Its value must be within the range of byte values between 33 and 47 (decimal) inclusive.
Y is the group character, which tells your printer the type
of function to carry out. It may range in byte value from 96
to 126 (decimal) inclusive.
# is a value field, which specifies a numerical value using one or more binary coded decimal (BCD) characters. That is, the individual bytes will have values within the range 48 to 57 (decimal) inclusive, corresponding to the ASCII characters 0 to 9. This numerical value may optionally be preceded by a plus or minus sign or may also include a decimal point; these additional characters are not counted in the digit total. The numeric value expressed in this field can range from -32,767 to +32,767. If a value field contains no number in a PCL command requiring one, your printer will assume a value of zero.
z1 and z2 are parameter characters, which specify the
parameter associated with the preceding numerical value
field. PCL uses characters within the range 96 to 126
(decimal) inclusive to specify parameter characters.
Although two are shown, a given PCL command may
contain one or several parameter characters. They are used
to combine or concatenate escape sequences.
Zn is the termination character, which specifies a
parameter for the preceding value field exactly as does a
parameter field but also notifies your printer that the escape
sequence has ended. A termination character must be
within the range 64 to 94 (decimal) inclusive.
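These byte ranges can be checked mechanically. The sketch below is a hypothetical helper, handling only the simple one-value, one-terminator form, that splits a parameterized PCL sequence into the named parts described above.

```python
def classify_pcl(seq):
    """Split a simple parameterized PCL escape sequence into its parts
    (parameterized character, group character, value field, terminator),
    checking each byte against the ranges given for PCL."""
    assert seq[0] == 0x1B, "must start with the Esc introducer"
    assert 33 <= seq[1] <= 47, "parameterized character out of range"
    assert 96 <= seq[2] <= 126, "group character out of range"
    value = seq[3:-1].decode("ascii")  # digits, optional sign or point
    term = seq[-1]
    assert 64 <= term <= 94, "termination character out of range"
    return seq[1:2], seq[2:3], value, chr(term)
```

Applied to the bytes for Esc & l 3 O, it returns the parameterized character &, the group character l, the value 3, and the terminator O.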

PCL allows the combining of two related escape sequences into one,
providing you follow two rules. Rule one is that only commands
using the same parameterized and group characters—X and Y in the above example—can be combined. Rule two is that all characters in the command except the termination character (Zn in the example) must be in lower case. To combine multiple commands into one, string all the characters of all the commands together, throwing out the
first three bytes of every command after the first and making all text
characters except the last lower case. For example, you would
combine the sequences Esc & l 3 O with Esc & l 2 A to make the
sequence Esc & l 3 o 2 A. Your printer executes these combined
commands in left to right order as they appear on the line.
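That combining procedure can be sketched in a few lines; the helper name here is invented for illustration.

```python
def combine_pcl(*seqs):
    """Combine related PCL escape sequences per the two rules above:
    every sequence must share its first three bytes (Esc, parameterized
    character, group character); the tails are strung together, and
    every former termination character except the last is lowered to
    serve as a parameter character."""
    prefix = seqs[0][:3]
    assert all(s[:3] == prefix for s in seqs), "commands are not combinable"
    body = b"".join(s[3:] for s in seqs)
    return prefix + body[:-1].lower() + body[-1:]
```

Combining the bytes for Esc & l 3 O with Esc & l 2 A yields Esc & l 3 o 2 A, matching the example above.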

Fonts

Fonts differ in how the information for coding these light and dark
patterns is stored. Generally, fonts are stored using one of two
technologies, termed bit mapped and outline.
Bit mapped fonts encode each character as the pattern of dots that
form the matrix, recording the position and color of each individual
dot. Because larger character sizes require more dots, they require
different pattern codes than smaller characters. In fact, every size of
character, weight of character (bold, condensed, light, and so on),
even each character slant (Roman versus Italic), requires its own
code. In other words, a single type family may require dozens of
different, bit mapped fonts.
Outline fonts encode individual characters as mathematical
descriptions, essentially the stroke you would have to make to draw
the character. These strokes define the outline of the character, hence
the name for the technology. Your computer or printer then serves as
a raster image processor (often termed a RIP) that executes the
mathematical instructions to draw each character in memory to make
the necessary bit pattern for printing. With most typefaces, one
mathematical description makes any size of character—the size of
each individual stroke in the character is merely scaled to reflect the
size of the final character (consequently outline fonts are often
termed scalable fonts). One code, then, serves any character size,
although different weights and slants require somewhat different
codes. A single type family can be coded with relatively few font
descriptions—normal, bold, Roman, and Italic combinations.
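The scaling step itself is simple arithmetic. A minimal sketch (the stroke coordinates and em-square size here are invented for illustration, not taken from any real font):

```python
# One outline, any size: map design-unit coordinates to device pixels.
def scale_outline(points, em_units, point_size, dpi=300):
    # 72 points per inch; k converts design units to printer pixels.
    k = (point_size / 72.0 * dpi) / em_units
    return [(round(x * k), round(y * k)) for x, y in points]

stem = [(100, 0), (100, 700)]   # endpoints of one hypothetical stroke
print(scale_outline(stem, em_units=1000, point_size=12))  # [(5, 0), (5, 35)]
print(scale_outline(stem, em_units=1000, point_size=72))  # [(30, 0), (30, 210)]
```

The same stroke description yields body text or a headline; only the scale factor changes.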
Simply scaling fonts produces generally acceptable results. To make
the clearest, most readable text, however, small characters typically
are shaped somewhat differently than large characters. For example,
the serifs on each letter may need to be proportionally larger for
smaller characters, or else they would disappear. Bit mapped fonts
automatically compensate for these effects because each size font can
be separately designed. In outline fonts, the equations describing
each stroke can include hints on what needs to be changed for best
legibility at particular sizes. Outline fonts that include this
supplementary information are termed hinted, and produce clearer
characters, particularly in large, headline and tiny, contract-style
sizes.
PostScript Level 2 takes outline fonts a level further. With its
Multiple Master fonts, some typefaces of the same family can be
encoded as a single font. That is, one font definition can cover italic,
Roman, and bold characters (as well as all sizes) of a given typeface.
The equations required for storing an outline font typically require
more storage (more bytes on disk or in memory) than bit mapped
fonts, but storing an entire family of outline fonts requires
substantially less space than a family of bit mapped fonts (because
one outline font serves all sizes). For normal business printing, which
generally involves fewer than a dozen fonts (including size
variations), this difference is not significant. For graphic artists,
publishers, and anyone who likes to experiment with type and
printing, however, outline fonts bring greater versatility.
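A back-of-envelope comparison makes the trade-off concrete. Every number below is an assumption chosen for scale, not a measurement: glyphs stored as square one-bit rasters, and a 40 KB outline file.

```python
# Illustrative storage comparison: one face in several bit mapped
# sizes versus a single scalable outline. All figures are assumed.
def bitmap_font_bytes(point_size, dpi=300, glyphs=256):
    em = point_size * dpi // 72           # em square side, in pixels
    return (em * em // 8) * glyphs        # one bit per pixel

sizes = [8, 10, 12, 14, 18, 24]           # a modest range of sizes
total_bitmap = sum(bitmap_font_bytes(s) for s in sizes)
outline = 40_000                          # assumed single outline file
print(total_bitmap // 1024, "KB of bit maps vs",
      outline // 1024, "KB for one outline serving every size")
```

Even under these rough assumptions, the stack of bit maps dwarfs the single outline.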
On the other hand, bit mapped fonts print faster. Outline fonts have to
go through an additional step—raster image
processing—computations which add to printing time. Bit mapped
fonts are directly retrieved from memory without any additional
footwork.

Storage and Retrieval

The information describing font characters has to be stored
somewhere. Considering that many megabytes may be involved, the
location of font storage can have important implications on how you
use your PC and printer.
Like every dot matrix printer, the laser printer has a few fonts built
in. Ubiquitous among lasers is the familiar, old 10-pitch Courier, the
default typeface held over from typewriter days. Probably the most
endearing characteristic of Courier (at least to software and printer
designers) is that it is monospaced—every character from "i" to "m"
is exactly the same width, making page layout easy to control. The
bit patterns for this typeface, consequently, are forever encoded into
the ROMs of nearly every machine. It can be—and usually
is—pulled up at an instant's notice simply by giving a command to
print a character.
A few other faces may be resident in the ROM of printers. The
number depends on many factors—generally if the manufacturer is
large, as few faces as possible are packed in ROM; smaller
manufacturers include more to give their products a competitive
edge.

Font Cartridges

Additional fonts can be added in several ways. The easiest to manage
is the font cartridge. The dot patterns for forming alternate character
fonts are stored in ROM chips held inside each cartridge. The
cartridge itself merely provides a housing for the chip and a
connector that fits a mate in the printer. By sliding in a cartridge, you
add the extra ROM in the cartridge to that in the printer. Many
impact and laser bit image printers have been designed to use font
cartridges.

Note that each manufacturer's cartridges are different and
incompatible (sometimes the cartridges of two models of printers
made by the same manufacturer are incompatible), although several
laser printer makers are making their machines compatible with
Hewlett-Packard laser printer cartridges.
The disadvantage of the font cartridge technique—besides the cost of
the fonts themselves—is the limited number of cartridge slots
available. A single cartridge may hold six to twelve fonts. With bit
image fonts in particular, such a small capacity can be confining. To
sidestep this issue, several enterprising developers have packed
dozens of fonts into a single cartridge.

Downloadable Character Sets

Most laser printers also allow you to download fonts. That is, you can
transfer character descriptions from your PC's memory to the RAM
inside your laser printer, where the individual characters can be
called up as needed just as if they were in ROM. These are called
downloadable character sets or soft fonts because they are transferred
as software. Typically, you buy soft fonts just like software, on
floppy disk, that you can copy to your PC's hard disk. You can store
as many soft fonts as your hard disk can hold for use in your laser
printer.
Soft fonts have many disadvantages, each with its own workaround.
For example, soft fonts can be inconvenient. Somehow, you must
transfer them from your PC to your printer, generally every time you
want to use them. You can avoid this problem with font manager
software. Each soft font you load into your printer steals a chunk of
your printer's RAM. The memory limits of your laser printer
constrain the number of soft fonts you can load at any given time,
although the prodigious amount of memory allowed by newer
printers ameliorates the problem somewhat (but not, of course, the
cost of additional printer memory).

Some software can generate the bit patterns of fonts it needs by
itself, the equivalent of having soft fonts built into the program. It
then transmits the resulting bit patterns to your printer (instead of
sending a stream of characters that the printer renders into bit
patterns). Windows sometimes uses this font generation method,
depending on the fonts and printer you choose. In fact, TrueType
uses this strategy to simplify printing with Windows—it eliminates
most of the need to download fonts to your printer. On the other
hand, this technique imposes a hefty penalty—bit patterns take more
memory than characters, requiring more memory inside your laser
printer, more than a megabyte to print a full page at highest
resolution. Worse, the greater amount of data requires longer to
transmit to the printer, increasing print time.
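The arithmetic behind that "more than a megabyte" figure is easy to check: a full US letter sheet rendered as a one-bit-deep raster at 300 dots per inch (ignoring the small unprintable margin, so this slightly overstates the real total):

```python
# Memory needed to hold a full-page 300-dpi bit image, one bit per dot.
dpi = 300
dots = int(8.5 * dpi) * int(11 * dpi)     # 2,550 x 3,300 dots on the page
bytes_needed = dots // 8                  # eight dots per byte
print(bytes_needed)                       # 1,051,875 bytes -- just over 1 MB
```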
With older printers following the Hewlett-Packard LaserJet printing
standard, all but the simplest graphics required this kind of bit image
transmission, as did printing anything but cartridge fonts and soft
fonts. Page description languages allow the transmission of an entire
page—text and graphics—in fast, coded form. Consequently, the
trend in better laser printers is to use page description languages.
The best known of these are PostScript and PostScript Level 2.
PostScript is a proprietary product of Adobe Systems, which licenses
printer makers to use it, making true PostScript printers substantially
more expensive than machines without it. To avoid this cost, many
manufacturers have turned to PostScript-compatible languages,
which mimic PostScript but cost the printer maker less. Currently,
these clones match PostScript well but have not kept up with the
transition to PostScript Level 2. PostScript takes advantage of outline
fonts. In fact, 35 outline fonts are built into most PostScript printers
(some budget machines have as few as 17 as standard equipment).
Hewlett-Packard printers use a more modest control language, PCL
(which stands for Printer Control Language). The latest LaserJet III
series uses PCL5, which can take advantage of outline fonts. Earlier
LaserJet printers (LaserJet, LaserJet Plus, and LaserJet II series) use
earlier PCL versions that do not accept outline fonts.

The bottom line is that you need a PostScript, PostScript-compatible,
or PCL5-compatible printer to take advantage of most downloadable
fonts.

Font Formats

All outline fonts are not the same, however. Several standards have
arisen, and you must match the fonts you add to the standard used by
your hardware and software. Your principal choices are Intellifont,
Type 1 (PostScript), Speedo, and TrueType.
The native font format of the LaserJet III series of printers (and
PCL5) is called Intellifont. Developed jointly by Agfa Compugraphic
and Hewlett-Packard, it is notably fast in rasterizing and may just
have the most widespread use, considering the popularity of LaserJet
printers. Although you do not have to worry about font formats with
cartridge fonts—if the cartridge fits, it should work—the cartridges
you plug into your LaserJet or compatible printer use Intellifont
characters. Downloadable outline fonts for LaserJet III and
compatible printers use Intellifont format.
PostScript printers use Type 1 fonts, the format that probably offers
more font variety than any other. You can use Type 1 fonts with
Windows using Adobe Type Manager. Support for Type 1 fonts is
built into OS/2 versions 1.3 and later.
Many programs use Bitstream fonts, which have their own format
called Speedo. In general, this software works by generating the
characters in your PC and transferring them in bit image form to your
printer. Speedo fonts are used by Lotus 1-2-3 and Freelance (in their
DOS versions). You also can use Bitstream's Speedo fonts with
Windows using Bitstream's FaceLift for Windows.
Microsoft Windows 3.1 has its own format, called TrueType, that
also is used by Apple's System 7 operating system for the Macintosh.
Thirteen TrueType fonts come with Windows 3.1, and you can easily
install more using the Windows Control Panel. TrueType is
compatible with printers that use other font formats. For example,

with LaserJets, for each font you want to use, TrueType generates a
bit map LaserJet font from one of its outline fonts, then downloads
the font to your printer. That way, it only needs to send characters
rather than bit maps to your printer. For PostScript printers,
TrueType similarly converts its fonts to PostScript outlines or bit
maps (depending on the font) and sends those to your printer.
Windows gives you font flexibility. All outline font packages are
equipped with programs that allow them to be installed in Windows.
Font managers are available for other major font formats (Adobe
Type Manager, FaceLift for Windows, Intellifont for Windows).

Printer Sharing

Two printers are not necessarily better than one—they are just more
expensive. In a number of business situations, you can save the cost
of a second printer by sharing one with two or more PCs and their
users. This strategy works because no one prints all the time—if he
did, he would have no time left to create anything worth printing.
Because normal office work leaves your printer with idle time, you
can put it to work for someone else.
When printers are expensive—as are better quality machines like
lasers and thermal wax printers—sharing the asset is much more
economical than buying a separate printer for everyone and smarter
than making someone suffer with a cheap printer while the quality
machine lies idle most of the day.
You have your choice of several printer sharing strategies, including
those that use nothing but software and those that are hardware
based.

Hardware Devices

The least expensive—in terms of out-of-pocket cost—is a simple
A/B switch box. As the name implies, this device consists of a box of
some kind that protects a multi-pole switch. The switch allows you to
reroute all 25 connections of a printer cable from one PC to another.
For example, in position A, your computer might be connected to the
printer; in position B, a coworker's PC would be connected. It is the
equivalent of moving the printer cable with the convenience of a
switch.
More expensive active printer sharing devices automatically make
the switch for you. They also add the benefit of arbitrating between
print jobs. Arbitration systems determine which PC has priority when
two or more try to print at once. The best sharing systems allow you
to assign a priority to every PC based upon its need and the corporate
pecking order. You should expect to get control software to let you
manage the entire printing system to accompany the more versatile
sharing devices.
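The arbitration idea can be modeled as a priority queue: each PC submits jobs tagged with its assigned rank, and ties go first come, first served. A toy sketch (the PC names and priorities are invented for illustration):

```python
# Toy print-job arbitration: lower priority number = higher rank in
# the pecking order; equal priorities are served in arrival order.
import heapq

queue, counter = [], 0

def submit(pc, priority, job):
    global counter
    heapq.heappush(queue, (priority, counter, pc, job))
    counter += 1                     # arrival order breaks priority ties

submit("workstation-a", 2, "report")
submit("manager-pc", 1, "memo")
submit("workstation-b", 2, "labels")

served = [heapq.heappop(queue)[2] for _ in range(len(queue))]
print(served)   # ['manager-pc', 'workstation-a', 'workstation-b']
```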
Not all printer sharing boxes are alike. They differ in the amount of
memory they make available and in their arbitration systems. The
memory is used to buffer print jobs so that when one PC is printing,
others can continue to send printing instructions as if they were
running the printer, too. No time is lost by programs waiting for
printer access. More memory is generally better, although you might
not need a lot if you standardize your office on Windows or Unix or
some other software environment with a built-in software print
spooler. With today's graphic print jobs, you want at least a
megabyte in any hardware printer sharing device.
Sharing devices also differ as to the number and kind of ports that
they make available. You need a port for every PC you want to
connect. You want parallel ports for easy connections, but serial ports
if PCs are located some distance (generally over 10 to 25 feet) from
the sharing device.
Some printer sharing devices plug into the I/O slots of printers.
Although these devices limit the number of available ports because of
size constraints, they also minimize costs because no additional case

or power supply is required. A few printers are designed to be shared,
having multiple or network inputs built in.

Software Sharing

Today software-based printer sharing has replaced hardware as the
favored option. One reason is that it often comes at no cost, for
example when you've installed a small network for another purpose
such as file sharing or e-mail. It also simplifies the wiring of the
shared printer system and allows you to locate PCs at greater
distances from the printer. You also gain greater control and more
options. You can even connect more printers with fewer hassles. For
example, each PC connected in a simple network can share the
printer connected to any other PC (if you want it to, of course).
Before the advent of Windows for Workgroups (Version 3.11), a
zero-slot local area network provided an excellent means for sharing
both printers and files. The zero-slot LAN let you connect several
PCs as a network using their serial ports. The only expenses involved
in sharing printers this way were the software itself and some
(relatively cheap) cable to connect the systems.
A true network adds speed to the zero-slot LAN. Although once
complicated by new operating systems and bulky drivers, since the
advent of Windows for Workgroups, basic networking has been built
into popular operating systems including Windows 95. You can
create a simple network for printer and file sharing just by adding a
network host adapter to each of your PCs and connecting them.
Chapter 23, "Networking," outlines the basics of networking and how
to link PCs and printers.

Plotters

In a world in which technologies change as quickly as a chameleon
climbing paisley wallpaper, plotters remain as resolute and steadfast
as the Great Stoneface or your brother-in-law's bad habits. In less
than a decade, microprocessors have raced ahead twenty times faster,
printers have gone from hammers to lasers, hard disks have grown
from five to five hundred megabytes, and plotters...
Well, let's say you step into a time warp, slip back ten years, and the
only proof of your travels you carry is one of today's state of the art
desktop plotters. One look at it, and not a soul from a decade ago
would believe you are a time traveler. Plotters today look the same as
yesterday—they work the same, and they deliver nearly the same
results.
But the story of the plotter involves more than that. Subtle changes
have been made electronically and philosophically. More
importantly, however, today's desktop plotters are just as useful as
they ever were, notwithstanding a broadside of new competition from
every quarter.
Today's desktop plotters offer the highest resolution of almost any
hard copy device you can plug into your PC, typically addressable to
one-thousandth of an inch. With such high accuracy, they can sketch
smooth curves and skew lines without a trace of jagginess. They are
quick enough to serve as your only graphic output device or, in major
installations, they are cheap enough to attach to workstations to take
the load off a larger plotter when only drafts are required.
The subtle changes that have been made to desktop plotters over the
last few years have made them more accessible, more usable, and
generally more compatible with you and your software. While
plotters work the same way they always have—they simply control
the movement of an ink pen across one or another drafting
medium—they have become smarter—and so have their makers.
Most plotters today are microprocessor based. The smartest process
the instructions they receive to move their pens as economically as
possible, optimizing pen travel and pen selection to waste the least
time. Manufacturers have wised up and adopted one standard
language for controlling their products.

Most available plotters are designed to recognize the commands of
Hewlett-Packard's HP-GL plotter language. Manufacturers now
document all the details of setting up their equipment to operate with
PCs and the most popular software. You no longer need to stay up
nights experimenting to find the right cable connections and setup
parameters.

Technologies

Plotter technology itself is unchanged, divided into two families, the
flatbeds or X-Y plotters and roller beds or drum plotters. The
difference is simply a matter of what moves.

Flatbed Plotters

The flatbed plotter is the magic moving hand in action. The plotting
medium is held fast against the flat plotting surface (the "bed" of the
"flatbed" name), and the mechanism moves the pen across the paper
in two dimensions (the "X" and "Y" of the alternate name), just as
you would draw a picture by hand.

Drum Plotters

The roller bed plotter restricts its pen to travel in one
dimension—laterally across the width of the drafting medium—and
lends new impetus to the paper. That is, to draw lines perpendicular
to the movement of the pen, the paper slides underneath the pen. The
"roller" in the bed is a cylinder or drum underneath the paper, which
provides the motive force.

Neither technology is an all around winner. When accuracy counts,
the flatbed design is the low cost winner. Building an accurate flatbed
is inherently less expensive because control over only one
mechanism—that which moves the pen—is required. Roller beds
require two discrete and fundamentally different systems, one for the
pen and one for the paper, in order to be precisely coordinated. This
added complexity can translate into higher costs. But when price
overrules resolution, roller bed plotters can sacrifice milli-inches of
precision for affordability.
On the other hand, roller bed plotters have an inherent speed
advantage. Paper is simply less massive than overreaching
mechanical arms and pen carriages, and Newton's Second Law (for
those needing a refresher in freshman physics: F=ma, force equals
mass times acceleration) says that it is less work to speed up a lighter
object. Little wonder the fastest plotters tend to be roller bed
machines.
Flatbed plotters have one advantage: They let you use virtually any
size of drafting medium up to their physical limits. Most drum or
roller bed plotters constrain your choice of widths of drafting
medium because they grip it only at its edges and, for design reasons,
the paper grippers are a fixed distance apart. Narrower widths of
drafting media just cannot be properly gripped. However, one
manufacturer, Hitachi, offers a roller bed plotter that uses a full width
drum that can grasp any width paper down to postcard size.
With a flatbed plotter, however, you do not face minimum size limits.
After all, flatbeds are just drafting tables with automated arms
attached. Anything that the table holds can be drawn upon—with the
proper software instructions, of course.
Plotters range in size from those that fit on top of your desk to
machines that ink sheets bigger than wallpaper. The most common
machines handle drafting media up to the ANSI B-size, that is, with
metes and bounds measuring 11 by 17 inches.
As with drafting tables, flatbed plotters require some means of
securing paper to the drawing surface. Ordinary drafting tape will
suffice, but is hardly an elegant solution. Plotter makers have adopted

several strategies to eliminate the pesky stickiness of tape. Some
plotters use the magic of static electricity to hold the drafting medium
down, a sticking strategy that works for all but a few media. One
manufacturer uses magnets instead of electricity to hold down
drafting copy, a tactic which should work with any medium except
thin sheets of iron.

Output Quality

The most important difference between cheap and expensive plotters
is precision. A better plotter has a smaller resolution or step size.
Step size is limited by a number of factors. The ultimate limit is the
plotter's mechanical resolution, the finest movements the hardware
can ever make, owing to the inevitable coarseness of the stepper
motors that move their pens. In most, but not all, cases step size is
further constrained by addressability. The smallest increment by
which HP-GL can move a plotter pen is 0.001 (one-thousandth) inch.
The least expensive plotters often have mechanical resolutions more
coarse than the addressing limits of HP-GL. In these cases, the
mechanism itself limits quality.
The smaller the step size, the smoother the curves a plotter can draw.
Each step shows as a right angle bump in a diagonal or curved line.
At the 0.001-inch limit of HP-GL, steps are less than a third the size
of a laser printer dot—very fine indeed, essentially invisible. With
less expensive plotters, however, each step may be plainly visible,
resulting in a self-describing condition called the jaggies.
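The comparison to a laser printer dot is easy to verify with the numbers given above:

```python
# HP-GL's 0.001-inch step against a single 300-dpi laser printer dot.
step = 0.001                  # inches, HP-GL addressing limit
laser_dot = 1 / 300           # inches, one dot at 300 dpi
print(round(step / laser_dot, 2))   # 0.3 -- under a third of a dot
```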

Color

Desktop plotters also differ in the number of pens from which they
can select automatically. Although a rough correspondence exists
between color capabilities and the number of pens that a plotter can
handle, you can make multi-colored plots even with a one-pen

engine. Most plotters allow you to pause their work and exchange
pens, giving you manual control over the hues of their output. In
other words, a four-pen plotter is not necessarily limited to four
colors; it can ink drawings in as many tints as are available in
compatible pens. Having more pens is a matter of convenience only.
A larger number of pens, however, enables you to run your plotter on
auto pilot. Start the plot and you can empty the coffee pot and
socialize the morning away. You do not need to stand over the
machine and anticipate when to make the change.
Besides sheer numbers, plotters offer you a choice of pen types that
you can load. Which to use depends on the type of output you want
to create—paper and film plots require different kinds of ink, perhaps
different pen types. Some manufacturers offer refillable pens. These
offer you the higher quality (thinner, more consistent lines) of a
drafting pen without the hefty expense of making you buy a new one
when the ink runs out.
If you have a choice of pens, you should choose the most popular to
have the widest possible selection. The closest to a standard among
plotter pens is the design used by Hewlett-Packard machines and
followed by several manufacturers. As with anything else,
proprietary pen designs limit your options—and may require you to
pay a higher price.

Interfacing

With plotters, compatibility is a major issue. Traditionally, plotter
makers have viewed their products as professional tools, which
means they were designed to give smug engineers their
comeuppance. At minimum, you needed to have a special cable
manufactured for your particular installation.
Most of that frustration lies in the past, however. The typical plotter
today includes a Centronics-style parallel port that makes it as easy to
connect as a dot matrix printer.

Several machines still depend on RS-232 serial connections. Others
make it an option. (All plotters reviewed here were tested using
9600-bit-per-second serial transmissions so that their actual
performance could be isolated from communications concerns.) If
you choose to use a serial link, you probably want to buy the plotter
maker's own serial cable that has been designed to match the needs of
the particular plotter. The $50 or so you spend staves off the
Thorazine and straitjackets to which sorting out a serial link
usually leads.
Unless you already have other peripherals that use it, you probably
do not want to tangle with the cost, intricacies, and special software
drivers required to use the IEEE-488 (also known as GPIB, the
General Purpose Interface Bus, or HP-IB, the Hewlett-Packard
Interface Bus) connection that some plotters make available.

Control Languages

When you say your prayers tonight, you should thank your Provider
for the miracle of HP-GL. Most plotter manufacturers now have
adopted Hewlett-Packard Graphic Language (HP-GL and the newer
version HP-GL/2) to control their machines, making that language
the standard in this country. (While HP-GL is used internationally,
GP-GL is more prevalent in some markets.) Another alternative is
Digital Microprocessor Plotter Language (DMPL), developed by
Houston Instrument, which has several built-in functions that are not
present in HP-GL, such as built-in fonts, the ability to do closed area
fill with a single command, and built-in smoothing algorithms. Other
plotters have their own native languages, which often work faster
than HP-GL with programs that support them. But plotters that
understand HP-GL work with just about any program with a plotter
output.
Designed for plotters, HP-GL is a vector drawing language. That is,
its controls are the equivalent to drawing vector strokes with a pen
across paper. Drawing commands tell the printer to move its
imaginary pen equivalent in its rasterizer from point to point. In

general, the pen starts drawing at the current cursor position and
draws the length or shape specified in the command.
Nearly all commands in HP-GL take the form of two-letter
mnemonics, and nearly all accept one or more parameters to specify
positions, sizes, and other variables. In HP-GL, the individual
parameters used in a command are separated by a special character,
which in HP-GL terminology is called a separator. Although you can
use four characters as separators, including a comma, space, plus sign
(which also indicates positive numerical values), or minus sign
(which also indicates a negative value), HP recommends using
commas. You do not need to separate the mnemonic from the first
parameter. Each command ends with another special character called
a terminator. Most use semicolons as terminators, although in some
cases you can use a space or tab. The polyline encoded command
requires a semicolon as a terminator; the label command requires the
ASCII character 03(Hex) as its terminator; and the comment
command uses a double quote as its terminator. In cases where you
can use a semicolon as a terminator, the mnemonic of the next
command also acts as a terminator. Figure 20.6 shows the form of a
typical HP-GL/2 command.
Figure 20.6 HP-GL/2 command terminology.
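The command form just described is simple enough to assemble by hand. A sketch (Python, illustrative only) using the pen up (PU), pen down (PD), and plot absolute (PA) mnemonics covered later in this section:

```python
# Build HP-GL commands: two-letter mnemonic, comma separators (HP's
# recommended separator), and a semicolon terminator.
def hpgl(mnemonic, *params):
    return mnemonic + ",".join(str(p) for p in params) + ";"

# Lift the pen, move to the origin, drop the pen, and draw a square
# 1,000 plotter units on a side.
plot = (hpgl("PU", 0, 0) + hpgl("PD") +
        hpgl("PA", 1000, 0) + hpgl("PA", 1000, 1000) +
        hpgl("PA", 0, 1000) + hpgl("PA", 0, 0))
print(plot)   # PU0,0;PD;PA1000,0;PA1000,1000;PA0,1000;PA0,0;
```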

You specify a cursor location for drawing in HP-GL using a


Cartesian coordinate system. PCL also uses a coordinate system for
specifying text locations. But, showing that PCL and HP-GL were
united by the equivalent of a shotgun marriage, the coordinate
systems used in the two languages are different. PCL locates the
origin of its coordinate system at the upper left corner of the sheet,
and coordinate values increase downward and to the right. HP-GL
puts the origin at the lower left, and coordinate values increase
upward and to the right. When you shift between giving PCL
commands and HP-GL commands, the coordinate system changes
even though you may be working on the same sheet.
HP-GL locates the points in its coordinate system in terms of its
prevailing measuring units. The default measuring units for HP-GL
are called plotter units and each measures 0.025 millimeter. You can
change the measuring units using the HP-GL scale (SC) command.
There are exactly 1,016 plotter units in one inch.
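The unit arithmetic can be checked in a few lines. This sketch assumes only the figures given here, a 0.025 millimeter plotter unit and PCL's top-left versus HP-GL's bottom-left origin; the function names are invented for illustration:

```python
PLOTTER_UNIT_MM = 0.025   # the default HP-GL plotter unit
MM_PER_INCH = 25.4

def inches_to_plotter_units(inches):
    """Convert a distance in inches to HP-GL plotter units."""
    return inches * MM_PER_INCH / PLOTTER_UNIT_MM

def pcl_to_hpgl_y(y, page_height):
    """Flip a vertical coordinate between the two systems: PCL's
    origin is the upper left corner, HP-GL's the lower left."""
    return page_height - y

print(round(inches_to_plotter_units(1)))   # 1016 plotter units per inch
```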


In the current implementation of HP-GL, HP-GL/2, Hewlett-Packard
divides commands into seven functional groups. These include dual
context commands, configuration and status commands, vector
commands, polygon commands, line and fill attribute commands,
palette extensions, and character commands.
Dual context commands interact with the PCL language.
They allow you to switch from HP-GL to PCL, use the
fonts you've installed in your printer, and send a reset to
your printer.
Configuration and status commands allow you to set up
the measuring units and other drawing defaults (such as
rotating the coordinate system) used by HP-GL/2. Two of
these commands, advance full page (PG) and replot (RP),
are relevant only to HP-GL/2 plotters and are ignored by
laser printers using the language.
Vector commands are the basic HP-GL/2 drawing
instructions. The two most important are pen down (PD)
and pen up (PU). After you give a pen up instruction, pen
movement instructions move the cursor but do not produce
a drawn line. The effect is the same as lifting your pen
from the paper. After giving a pen down command, the
movements made by the pen register as lines in your
drawing.
Other vector commands allow you to draw lines and arcs.
The commands may call for relative movement, in which
the specified coordinates refer back to the current pen
position, or absolute movements, in which the specified
coordinates are made in reference to the origin. For
example, the plot absolute and plot relative instructions
draw one or more straight lines from the current pen
position to the coordinates given as parameters.
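Combining the pen up (PU), pen down (PD), and plot absolute (PA) mnemonics named above, a complete figure can be drawn with one short command string. The helper below is an illustrative sketch, not output from any HP utility:

```python
def square(x, y, size):
    """Emit HP-GL for a square: lift the pen, move to one corner,
    lower the pen, then plot the remaining corners absolutely."""
    return (f"PU;PA{x},{y};PD;"
            f"PA{x + size},{y},{x + size},{y + size},"
            f"{x},{y + size},{x},{y};PU;")

print(square(0, 0, 100))
```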
Polygon commands allow you to draw circles and
rectangles and fill them without listing tedious strings of
basic plot and arc commands. You indicate rectangle size
by giving the coordinates of the corner of the finished
rectangle that will be diagonally opposite the current pen
position. In HP-GL terminology, an edge figure is an
outline; a fill figure is a solid drawing.
Line and fill attribute commands are closely allied with
the polygon commands. They enable you to change the
characteristics of the lines that you draw, both their width and
pattern. Similarly, these commands specify the pattern to
be used for filling shapes. Because HP-GL works in the
same order that you ordinarily would, you specify the line
or fill type before you draw a shape.
Palette extension commands extend a bit of extra
versatility to your drawings. The transparency mode
command affects how HP-GL/2 treats areas of white fill.
When transparent, other lines, shapes, and text show
through as if the white fill were clear. When opaque, white
fill covers up anything drawn underneath it. The screened
vector command allows you to specify dot screen,
cross-hatch patterns, or custom patterns to use as fill inside
shapes.
Character commands add text handling to HP-GL/2's
drawing mode because PCL text commands are not
available while you are drawing. These allow you to select
a font, its size, slant, and the character orientation (portrait,
landscape, or anything in between). As with PCL mode,
HP-GL/2 allows for a primary and secondary font but calls
them standard and alternate.

Performance

http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/wrh20.htm (80 de 83) [23/06/2000 06:47:11 p.m.]


Winn L. Rosch Hardware Bible, Electronic Edition, Chapter 20

As with printers, most plotters have built-in RAM to buffer plotting
instructions. The buffer memory helps free up your computer when
you plot. A large buffer allows the plotter to absorb most or all of the
instructions sent out of your PC and process them while your
computer does something else. Some plotters take further advantage
of their buffers by looking ahead at plotting instructions and
calculating how to minimize pen movement or pen changes, drawing
all black lines before switching to the red pen, for example. Such
optimization can substantially trim the time required for making
plots. The buffer can also allow you to make multiple copies of a plot
without tying up your PC. Speed differences between plotters can be
dramatic: Some machines plot in half the time of others.
The output quality among plotters tends toward uniformity except in
the lowest cost machines. Text renderings, however, depend on a
plotter's interpretation of HP-GL and the characteristics of any
internal character sets.

Alternatives

Putting pen to paper in these days when words, music, and money
flash electronically through wires at the speed of light seems about as
anachronistic as stoking the furnace or paying cash. Other
technologies are quicker, more colorful, and cheaper. Yet plotters
persist for several reasons.
When it comes to fast graphic output, the laser printer is without
peer. A typical full graphic page might pour out in less than a minute
while a plotter struggles for five or ten minutes on the same chore. But
most lasers—and all affordable machines—are limited to a single
color and media no larger than ANSI A size (8.5 by 11 inches).
Plotters deftly draw in any color in which you can find a pen and
create nearly limitless combinations of colors (as long as you are
willing to manually change pens at the appropriate times). On the
other hand, even the most compact of these desktop plotters handle
sheets up to B size. And they are almost indifferent to the medium
you give them to plot upon—just match the proper pens, and they
happily ink paper, vellum, Mylar, or whatever you lay on their beds.
The comparison to color inkjet printers is nearly the same.
Inexpensive inkjets do not handle large sheets. Those that accept
B-size paper likely cost more than a comparable plotter. But inkjets
can stretch their color spectra through dithering and mixing inks on
paper. They can even add a more natural look by shading from one
hue to another, although not under direct manual control.
Plotters beat nearly all printers when it comes to accuracy; most
move in steps of about one-thousandth of an inch, more than three
times finer than the 300 dots per inch delivered by the typical laser
engines. This extreme resolution is put to good use drawing curves
from which every trace of jagginess has been expunged.
But lasers lead when it comes to fine detail. Although plotters can
draw almost absolutely smooth diagonals, thanks to their high
resolution, the finest details they can create are limited to the widths
of the lines drawn by their pens. Typically, the finest pen available
for plotting draws a line three-tenths of a millimeter wide. That's
about twelve times the width of the plotter's resolution or step size,
and about four times wider than the thinnest laser line.
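The ratios quoted here are easy to verify, taking a 0.3 millimeter pen, a 1/1000-inch plotter step, and a 300 dot per inch laser dot:

```python
MM_PER_INCH = 25.4

step_mm = 0.001 * MM_PER_INCH     # one 1/1000-inch plotter step
pen_mm = 0.3                      # finest common plotter pen
laser_dot_mm = MM_PER_INCH / 300  # one 300-dpi laser dot

print(round(pen_mm / step_mm))       # about 12 steps per pen width
print(round(pen_mm / laser_dot_mm))  # about 4 laser dots per pen width
```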
On the other hand, plotters can draw in solid colors rather than the
spotty digital dots fused to paper by the lasers or sprayed by the
inkjets (not to mention the pointillism of impact dot matrix engines).
Plotter colors are pure and consistently toned.
Direct speed comparisons between printers and plotters are
impossible because they use different imaging techniques. Printers
are raster based devices; plotters draw vectors. As a result, which is
faster depends on what you want to draw.
Plotters might possibly finish simple drawings first, but lag when
images become more complex than a few lines. Dot addressed
printers (as opposed to those that use a language like PostScript), on
the other hand, devote about as much time to the simplest or most
complex drawings. They have to scan an entire sheet, no matter how
many lines are to be drawn. (PostScript printers take somewhat longer
for more complex drawings because transfer and processing times
must be added.)
Bottom line: Plotters are moderately priced, colorful, accurate, and
slow. Affordable laser printers are faster but lack color capabilities
and the 1/1000-inch resolution of plotters. Color lasers are quick,
costly, and not quite as sharp. Color inkjets provide multiple hues,
moderate speed, and costs comparable to plotters, but lack the ability
to create smooth, detailed drawings. For many applications, plotters
still deliver the right combination.



Winn L. Rosch Hardware Bible, Electronic Edition, Chapter 21

Chapter 21: Serial Ports


Traditionally, serial communications has been the long distance choice—the best and
sometimes only way of getting your message out. Recent advances in technology and
standardization have made serial links the new choice for tying your PC to its
peripherals. In coming years, serial communication may become synonymous with PC
expansion.

■ Background
■ Clocking
■ Frames
■ Packets
■ Background
■ Error Handling
■ History
■ RS-232C
■ Electrical Operation
■ Connectors
■ 25-Pin
■ 9-Pin
■ Motherboard Headers
■ Signals
■ Definitions
■ Cables
■ Straight Through Cables
■ Adapter Cables
■ Crossover Cables
■ UARTs
■ 8250
■ 16450
■ 16550A
■ Register Function
■ Buffer Control
■ Identifying UARTs
■ Enhanced Serial Ports
■ Logical Interface
■ Port Names
■ Interrupts
■ ACCESS.bus
■ Architecture
■ Signaling
■ Transfers
■ Arbitration
■ Messages
■ Addresses
■ Connections
■ IrDA
■ History
■ Overview
■ Physical Layer
■ Infrared Light
■ Data Rates
■ Pulse Width
■ Modulation
■ Bit Stuffing
■ Format
■ Aborted Frames
■ Interference Suppression
■ Link Access Protocol
■ Primary and Secondary Stations
■ Frame Types
■ Addressing
■ Error Detection
■ Link Management Protocol
■ Universal Serial Bus
■ Background
■ Connectors
■ Cable
■ Data Coding
■ Protocol
■ Token Packets
■ Data Packets
■ Handshake Packets
■ IEEE-1394
■ Background
■ Performance
■ Timing
■ Setup
■ Arbitration
■ Architecture
■ Bus Management Layer
■ Transaction Layer
■ Link Layer
■ Physical Layer
■ Cabling


For almost two decades, the serial port has been the least common denominator of
computer communications, an escape route for your long distance messages but one
burdened by its own ball and chain. Today that situation is changing. Where once there
was but one "serial port," today several serial communication standards vie for your
attention.
Serial communication once hobbled your PC with data rates out of the dark ages. The
classic serial port was a carryover from a previous generation of technology. Its low
speed was a sad match for the quick pulse of the PC. It was like having a medieval scribe
ink out your PC's owner's manual in glorious Gothic script, a trial of your patience that
took little advantage of current technology. (Then again, some PC documentation arrives
so late you just might suspect some scribe to be scrawling it out with quill and oxgall
ink.) While PCs generated millions of characters per second, classic serial ports doled out
a few hundred or thousand in the same time.
New serial technologies kick communications back into high gear. They are quick
enough not only to transfer text messages but to move your voice digitally or even handle
full motion video in real time. They also add the versatility of going wireless so you don't
have to tie yourself to your desk with a tangle of communications cables.
Today engineers have five chief choices for serial communications between your PC and
other devices. The classic serial port (best known by its official EIA standard
designation, RS-232C), ACCESS.bus, the IrDA optical connection, the Universal Serial
Bus, and P1394.
The least common denominator is the RS-232C port, standard equipment on nearly every
PC since 1984. Throughout the first decade and a half of personal computing, serial port
meant only RS-232C. But the standard is even older than the first PCs, having been a
telephone system standard long before. And, like most of the carryovers from early
technology, RS-232C brought its own baggage—a speed limit more severe than a stern
first grade teacher who believes that rulers are for discipline rather than measurement.
ACCESS.bus is an inexpensive but low speed serial connection to link multiple
undemanding devices with your PC. Rather than speed, its advantage over RS-232C is
versatility. It can connect more devices to your PC than all the RS-232C ports you or
your system could stand. Moreover, it is a simple standard, one without a confusion of
cables and connectors.
IrDA gives the RS-232C standard a new medium, sending signals through the air instead
of wires. Using infrared signals exactly like those of television remote controls, IrDA
allows you to transfer files between your notebook and desktop PC without wrestling
with a cable connection. If the relatively new standard takes hold, you may also link your
notebook PC to your printer or other peripherals with invisible light beams. The principal
drawback is IrDA's RS-232C heritage. The maximum IrDA data rate matches the low
speed of the RS-232C signals.
Universal Serial Bus is the PC generation's answer to serial communications. Unlike the
RS-232C based serial systems that are designed to link two devices, USB acts as a true
bus that can link as many as 127 devices to your PC without worries about matching
connectors and cables (although it remains a wire based design). Speed takes a quantum
leap over RS-232C with a peak data rate of 12 megabits per second as well as a low
speed mode that operates at 1.5 megabits per second. In order to make the "universal" in
the name a reality, the design goal of the new standard aims low: USB was designed to
be a low cost interface, cheap enough for every PC.
P1394 pushes serial technology further still, with a maximum data rate of 100 megabits
per second currently and rates as high as 400 megabits per second envisioned. More
expensive to implement than USB, it fits with the new SCSI-3 scheme of things and
offers a reliable means of linking high speed peripherals such as hard disks and real time
video systems to PCs. Table 21.1 compares the characteristics of some of the most
common serial port standards.

Table 21.1. A Comparison of Serial Interfaces

Standard     Data rate (current)   Medium                  Devices per port
RS-232C      115,200 bps           Twisted pair            1
ACCESS.bus   100 Kbps              4-wire shielded cable   125
IrDA         4 Mbps                Optical                 126
USB          12 Mbps               Special 4-wire cable    127
IEEE 1394    100 Mbps              Special 6-wire cable    16

Despite the differences among these standards, all have a common basis. They treat data
one-dimensionally, as a long stream or series of bits. From this common ground, each
goes its own direction. At heart, however, all involve the same repackaging of data to
make it fit a channel with a single stream of data.

Background

No matter the name and standard, all serial ports are the same, at least functionally. Each
takes the 8, 16, or 32 parallel bits your computer exchanges across its data bus and turns
them sideways—from a broadside of digital blips into a pulse chain that can walk the
plank, single file. This form of communication earns its name "serial" because the
individual bits of information are transferred in a long series.
The change marks a significant difference in coding. The bits of parallel data are coded
by their position. That is, the designation of the bus line they travel confers value. The
most significant bit travels down the line designated for the most significant signal. With
a serial port, the significance is awarded by timing. The position of a bit in a pulse string
gives it its value. The later in the string, the more important the bit.
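A short sketch makes the contrast concrete: one byte becomes a train of eight pulses, with position in time standing in for position on the bus (least significant bit first, the convention discussed later in this chapter):

```python
def serialize_byte(value):
    """Turn one byte into the bit train a serial port sends,
    least significant bit first."""
    return [(value >> i) & 1 for i in range(8)]

# ASCII "A" (0x41) leaves the port as this sequence of pulses:
print(serialize_byte(0x41))   # [1, 0, 0, 0, 0, 0, 1, 0]
```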
In a perfect world, a single circuit—nothing more than two wires, a signal line and a
ground—would be all that was necessary to move this serial signal from one place to
another without further ado. Of course, a perfect world would also have fairies and other
benevolent spirits to help usher the data along and protect it from all the evil imps and
energies lurking about trying to debase and disgrace the singular purity of serial transfer.
The world is, alas, not perfect, and the world of computers even less so. Many
misfortunes can befall the vulnerable serial data bit as it crawls through its connection.
One of the bits of a full byte of data may go astray, leaving a piece of data with a smaller
value on arrival than it had at departure—a problem akin to shipping alcohol by a courier
service operated by dipsomaniacs. With the vacancy in the data stream, all the other bits
will slip up a place and assume new values. Or the opposite case—in the spirit of
electronic camaraderie, an otherwise well meaning signal might adopt a stray bit like a
child takes on a kitten, only to later discover the miracle of pregnancy and a progeny of
errors that ripple through the communications stream, pushing all the bits backward. In
either case, the prognosis is not good. With this elementary form of serial
communications, one mistaken bit either way, and every byte that follows will be in
error.
Establishing reliable serial communications means overcoming these bit error problems
and many others as well. Thanks to some digital ingenuity, however, serial
communications work and work well—well enough that you and your PC can depend on
them.

Clocking

In computers, a serial signal is one in which the bits of data of the digital code are
arranged in a series. They travel through their medium or connection one after another as
a train of pulses. Put another way, the pattern that makes up the digital code stretches
across the dimension of time rather than across the width of a data bus. Instead of the bits
of the digital code getting their significance from their physical position in the lines of
the data bus, they get their meaning from their position in time. Instead of traveling
through eight distinct connections, a byte of data, for example, makes up a sequence of
eight pulses in a serial communications system. Plot signal against time, and the serial
connection turns things sideways from the way they would be inside your PC.
Do you detect a pattern here? Time, time, time. Serial ports make data communications a
matter of timing. Defining and keeping time become critical issues in serial data
exchanges.
Engineers split the universe of serial communications into two distinct forms,
synchronous and asynchronous. The difference between them relates to how they deal
with time.
Synchronous communications require the sending and receiving system—for our
purposes, the PC and printer—to synchronize their actions. They share a common time
base, a serial clock. This clock signal is passed between the two systems either as a
separate signal or by using the pulses of data in the data stream to define it. The serial
transmitter and receiver can unambiguously identify each bit in the data stream by its
relationship to the shared clock. Because each uses exactly the same clock, they can
make the match based on timing alone.
In asynchronous communications the transmitter and receiver use separate clocks.
Although the two clocks are supposed to be running at the same speed, they don't
necessarily tell the same time. They are like your wristwatch and the clock on the town
square. One or the other may be a few minutes faster even though both operate at
essentially the same speed: a day has 24 hours for both.
An asynchronous communications system also relies on the timing of pulses to define the
digital code. But they cannot look to their clocks for infallible guidance. A small error in
timing can shift a bit a few positions, say from the least significant place to the most
significant, which can drastically affect the meaning of the digital message.
If you've ever had a clock that kept bad time—for example, the CMOS clock inside your
PC—you probably noticed that time errors are cumulative. They add up. If your clock is
a minute off today, it will be two minutes off tomorrow. The longer time elapses, the
more the difference in two clocks will be apparent. The corollary is also true: if you make
a comparison over a short enough period, you won't notice a shift between two clocks
even if they are running at quite different speeds.
Asynchronous communications bank on this fine slicing of time. By keeping intervals
short, they can make two unsynchronized clocks act as if they were synchronized. The
otherwise unsynchronized signals can identify the time relationships in the bits of a serial
code.
Isochronous communications involve time critical data. Your PC uses information that is
transferred isochronously in real time. That is, the data are meant for immediate display,
typically in a continuous stream. The most common examples are video image data that
must be displayed at the proper rate for smooth full motion video and digital audio data
that produces sound. Isochronous transmissions may be made using any signaling
scheme, be it synchronous or asynchronous. They usually differ from ordinary data
transfers in that the system tolerates data errors. It compromises accuracy for the proper
timing of information. Whereas error correction in a conventional data transfer may
require the retransmission of packets containing errors, an isochronous transfer lets the
errors pass through uncorrected. The underlying philosophy is that a bad pixel in an
image is less objectionable than image frames that jerk because the flow of the data
stream stops for the retransmission of bad packets.

Frames

The basic element of digital information in a serial communication system is the data
frame. Think of the word as a time frame, the frame bracketing the information like a
frame surrounds a window. The bits of the digital code are assigned their value in
accordance with their position in the frame. In a synchronous serial communications
system, the frame contains the bits of a digital code word. In asynchronous serial
communications, the frame also contains a word of data, but it has a greater significance.
It is also the time interval in which the clocks of the sending and receiving systems are
assumed to be synchronized.
When an asynchronous receiver detects the start of a frame, it resets its clock and then
uses its clock to define the significance of each bit in the digital code within the frame.
At the start of the next frame, it resets its clock and starts timing the bits again.
The only problem with this system is that an asynchronous receiver needs to know when
a frame begins and ends. Synchronous receivers can always look to the clock to know,
but the asynchronous system has no such luxury. The trick to making asynchronous
communications work is unambiguously defining the frame. Today's asynchronous
systems use start bits to mark the beginning of a frame and stop bits to mark its end. In
the middle are a group of data bits.


The start bit helps the asynchronous receiver find data in a sea of noise. In some systems,
the start bit is given a special identity. In most asynchronous systems, it is twice the
length of the other bits inside the frame. In others, the appearance of the bit itself is
sufficient. After all, without data, you would expect no pulses. When any pulse pops up,
you might expect it to be a start bit.
Each frame ends with one or more stop bits. They assure the receiver that the data in the
frame is complete. Most asynchronous communication systems allow for one, one and a
half, or two stop bits. Most systems use one because that length makes each frame shorter
(which, in turn, means that it takes a shorter time to transmit).
The number of data bits in a frame varies widely. In most asynchronous systems, there
will be from five to eight bits of data in each frame. If you plan to use a serial port to
connect a modern serial device to your PC, your choices will usually be to use either
seven bits or eight bits, the latter being the most popular.
In addition, the data bits in the frame may be augmented by error correction information
called a parity bit, which fits between the last bit of data and the stop bit. In modern serial
systems, any of five varieties of parity bits are sometimes used: odd, even, space, mark,
and none.
The value of the parity bit is keyed to the data bits. The serial transmitter counts the
number of digital ones in the data bits and determines whether this total is odd or even. In
the odd parity scheme, the transmitter will turn on the parity bit (making it a digital one)
only if the total number of digital ones in the data bits is odd. In even parity systems,
the parity bit is set as one only if the data bits contain an even number of digital ones. In
mark parity, the parity bit is always a mark, a digital one. In space parity, the parity bit is
always a space, a digital zero. With no parity, no parity bit is included in the digital
frames, and the stop bits immediately follow the data bits.
By convention, the bits of serial data in each frame are sent least significant bit first.
Subsequent bits follow in order of increasing significance. Figure 21.1 illustrates the
contents of a single data frame that uses eight data bits and a single stop bit.
Figure 21.1 A serial data frame with eight data bits and one stop bit.
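The frame assembly described above can be sketched in a few lines. The bit values follow the common RS-232C convention, a 0 for the start bit and 1s for the stop bits; the function name is invented for illustration:

```python
def frame_byte(value, parity="even", stop_bits=1):
    """Assemble an asynchronous frame: start bit, eight data bits
    (least significant first), optional parity bit, stop bit(s)."""
    data = [(value >> i) & 1 for i in range(8)]
    bits = [0] + data                    # start bit, then the data
    ones = sum(data)
    if parity == "even":                 # make the total count of ones even
        bits.append(ones % 2)
    elif parity == "odd":                # make the total count of ones odd
        bits.append((ones + 1) % 2)
    return bits + [1] * stop_bits        # stop bit(s) close the frame

# ASCII "A" with even parity and one stop bit:
print(frame_byte(0x41))   # [0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1]
```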

Packets

A frame corresponds to a single character. Taken alone, that's not a whole lot of
information. A single character rarely suffices for anything except answering multiple
choice tests. To make something meaningful, you combine a sequence of characters to
form words and sentences.
The serial communications equivalent of a sentence is a packet. A packet is a
standardized group of characters or frames that makes up the smallest unit that conveys
information through the communications system.


Background

As the name implies, a packet is a container for a message, like a diplomatic packet or
envelope. The packet holds the data. In addition, in most packetized systems, the packet
also includes an address and, often, a description of its contents. Packets may also
include extra data to assure the integrity of their contents—for example, an error
detection or error correction scheme of some sort. Figure 21.2 shows a graphical
representation of the constituents of a data packet.
Figure 21.2 Constituents of a typical data packet.

The exact constituents of a packet depend on the communication protocol. In general,
however, all packets have much the same construction. They begin with a symbol or
character string that allows systems listening in to the communication channel to
recognize the bit pattern that follows as a packet.
Each packet bears an address that tells where it is bound. Devices listening in on the
communication channel check the address. If it does not match their own or does not
indicate that the packet is being broadcast to all devices—in other words, the packet
wears an address equivalent to "occupant"—the device ignores the rest of the packet.
Communications equipment is courteous enough not to listen in on messages meant for
someone else.
Most packets include some kind of identifying information that tells the recipient what to
do with the data. For example, a packet may bear a marker to distinguish commands from
ordinary data.
The bulk of the packet is made from the data being transmitted. Packets vary in size and
hence the amount of data that they may contain. Although there are no hard and fast
limits, most packets range from 256 to 2048 bytes.
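As an illustration only, tied to no particular protocol, the fields described above might be laid out as follows. The sync byte, field order, and one-byte checksum are all assumptions for the sketch, and the single length byte limits the payload to 255 bytes:

```python
from dataclasses import dataclass

SYNC = b"\x7e"   # a recognizable marker opening the packet (illustrative)

@dataclass
class Packet:
    address: int     # which device the packet is bound for
    kind: int        # distinguishes commands from ordinary data
    payload: bytes   # the data being transmitted

    def encode(self):
        """Serialize: sync marker, address, type, length, data, checksum."""
        body = bytes([self.address, self.kind, len(self.payload)]) + self.payload
        checksum = sum(body) % 256   # simple redundancy for error detection
        return SYNC + body + bytes([checksum])

print(Packet(address=5, kind=1, payload=b"hi").encode())
```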

Error Handling

Because no communication channel is error free, most packets include error detection or
error correction information. The principle behind error detection is simple: include
duplicate or redundant information that you can compare to the original. Because
communication errors are random, they are unlikely to affect both of two copies of the
transmitted data. Compare two copies sent along and if they do not match, you can be
sure one of them changed during transmission and became corrupted.
Many communications systems don't rely on complex error correction algorithms as are
used in storage and high quality memory systems. Communications systems have a
luxury storage systems do not; they can get a second chance. If an error occurs in
transmission, the system can try again—and again—until an error free copy gets through.
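A minimal sketch of this second-chance strategy, assuming for simplicity that the checksum itself arrives intact (real protocols protect it too):

```python
def checksum(data):
    """A one-byte redundant summary of the data, sent along with it."""
    return sum(data) % 256

def send_until_good(data, channel, limit=10):
    """Resend until the copy that arrives matches its checksum."""
    for _ in range(limit):
        received, check = channel(data)
        if checksum(received) == check:
            return received            # an error-free copy got through
    raise IOError("channel too noisy")

# A demonstration channel that corrupts the first two attempts:
attempts = []
def flaky_channel(data):
    attempts.append(1)
    if len(attempts) < 3:              # flip a bit of the data in transit
        return bytes([data[0] ^ 0x01]) + data[1:], checksum(data)
    return data, checksum(data)

print(send_until_good(b"hello", flaky_channel))   # succeeds on the third try
```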
As a function of communication protocol, packets are part of the software standard used
by the communication system. Even so, they are essential to making the hardware—the
entire communication system—work properly and reliably.

History

Nearly every serial communication system now uses packets of some kind. In retrospect,
the idea of using packets seems natural, the logical way of organizing data. In fact,
however, the concept of using packets as a method of reliably routing data through
communications systems rates as an invention with a clear-cut history. The first inkling
of the concept of packet based communications dates back as early as 1960 when Paul
Baran, working at RAND Corporation, conceived the idea of a redundant,
packet-switched network. At the time nothing came of the idea because the chief
telecommunications supplier in the United States, AT&T, regarded a communications
system or network based on packets as unbuildable. AT&T preferred switching signals
throughout its vast network, a technology with which the company had become familiar
after nearly a century of development.
In 1965 Donald Watts Davies, working at the British National Physics Laboratory,
independently conceived the idea of packetized communications, and it was he who
coined the name "packet." Baran called them data blocks and his version of packet
switching was "distributed adaptive message block switching." The direct ancestor of
today's Internet, ARPAnet (see Chapter 22, "Modems"), development of which began in
1966, is usually considered the first successful packetized communication system.

RS-232C

The classic serial port in your PC wears a number of different names. IBM, in the spirit
of bureaucracy, sanctions an excess of syllables, naming the connection an
"asynchronous data communications port." Time-pressed PC users clip that to "async
port" or "comm" port. Officialdom bequeaths its own term. The variety of serial link
accepted by the PC industry operates under a standard called RS-232C (one that was
hammered out by an industry trade group, the Electronics Industry Association or EIA),
so many folks call the common serial port by its numerical specification, an RS-232C
port.
So far we've discussed time in the abstract. But serial communications must occur at very
real data rates, and those rates must be the same at both ends of the serial connection, if
just within a frame when transmissions are asynchronous. The speed at which devices
exchange serial data is called the bit rate, and it is measured in the number of data bits
that would be exchanged in a second if bits were sent continually. You've probably
encountered these bit rates when using a modem. The PC industry uses bit rates in the
following sequence: 150; 300; 600; 1200; 2400; 4800; 9600; 19,200; 38,400; 57,600; and
115,200.

This sequence results from both industry standards and the design of the original IBM
PC. The PC developed its serial port bit rate by using an oscillator that operates at 1.8432
MHz and associated circuitry that reduces that frequency by a factor of 16 to a basic
operating speed of 115,200 bits per second. From this base bit rate, a device called a
programmable divider mathematically creates the lower bit rates used by serial ports. It
develops the lower frequencies by dividing the starting rate by an integer. By using a
divisor of three, for example, the PC develops a bit rate of 38,400 (that is, 115200/3). Not
all available divisors are used. For example, designers never set their circuits to divide by
five.
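The divisor arithmetic is easy to check. The sketch below (Python is used purely as illustration, and the constant names are my own) reproduces the standard PC bit rate sequence from the 1.8432 MHz oscillator:

```python
# Sketch of how a PC derives its standard serial bit rates.
# An 8250-style UART divides the 1.8432 MHz oscillator by 16 to get
# the 115,200 bps base rate, then by a programmable integer divisor.

OSCILLATOR_HZ = 1_843_200
BASE_BIT_RATE = OSCILLATOR_HZ // 16          # 115,200 bps

# Divisors that produce the familiar sequence (5 is never used).
DIVISORS = [768, 384, 192, 96, 48, 24, 12, 6, 3, 2, 1]

for divisor in DIVISORS:
    print(f"divisor {divisor:3d} -> {BASE_BIT_RATE // divisor:6d} bps")
```

Running this prints each rate in the sequence, from 150 bps (divisor 768) up to 115,200 bps (divisor 1).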
You may have noticed that some modems use speeds not included in this sequence. For
example, today's popular V.34 modems operate at a base speed of 28,800 bits per second.
The modem generates this data rate internally. In general, you will connect your PC to
the modem so that it communicates at a higher bit rate, and the modem repackages the
data to fit the data rate it uses by compressing the data or telling your PC to halt the flow
of information until it is ready to send more bits.
The accepted standard in asynchronous communications allows for a number of variables
in the digital code used within each data frame. When you configure any serial device
and an RS-232C port, you'll encounter all of these variables: speed, number of data bits,
parity choices, and number of stop bits. The most important rule about choosing which
values to use is that the transmitter and receiver—your PC and the serial device—must
use exactly the same settings. Think of a serial communication as being an exchange of
coded messages by two spies. If the recipient doesn't use the same secret decoder ring as
the sender, he can't hope to make sense out of the message. If your serial peripheral isn't
configured to understand the same settings your PC sends out in its serial signals, you
can't possibly hope to print anything sensible.
Normally you'll configure the bit rate of the serial port on a peripheral using DIP
switches or the serial peripheral's menu system. How to make the settings will vary with
the serial device you are installing, so you should check its instruction manual to be sure.
To use a serial port as a DOS device, you must use the DOS MODE command to set its
speed and other communication parameters. This setting only affects what you print from
DOS. Most programs and other operating systems take direct control of the PC's serial
ports when they need them, and these programs override the values set using the MODE
command. You adjust the bit rates (and other serial parameters) used by your programs
as part of the setup procedures of your applications or operating system. In general you
should set the fastest data rate that both ends of your connection will allow.
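As a concrete example of the MODE command mentioned above, setting COM1 to 9,600 bits per second with no parity, 8 data bits, and 1 stop bit from the DOS prompt looks like this (the exact parameter syntax varies slightly among DOS versions):

```shell
REM Configure COM1: 9600 bps, no parity, 8 data bits, 1 stop bit
MODE COM1:9600,N,8,1
```

Remember that this setting governs only what DOS itself sends to the port; applications that program the serial hardware directly will impose their own parameters.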

Electrical Operation

Serial signals have a definite disadvantage compared to parallel; bits move one at a time.
At a given clock rate, fewer bits will travel through a serial link than a parallel one. The
disadvantage is on the order of 12 to 1. When a parallel port moves a byte in a single
cycle, a serial port takes around a dozen—8 for the data bits, 1 for parity, 1 for start, and 2
for stop. That 9,600 bit-per-second serial connection actually moves text at about 800
characters per second.
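The throughput figure follows directly from the frame size. The arithmetic below assumes a 12-bit frame per character (1 start bit, 8 data bits, 1 parity bit, and 2 stop bits; a frame with a single stop bit would carry 11 bits instead):

```python
# Effective character throughput of an asynchronous serial link,
# assuming a 12-bit frame per transmitted character.

BIT_RATE = 9600
BITS_PER_CHARACTER = 1 + 8 + 1 + 2     # start + data + parity + stop

print(BIT_RATE // BITS_PER_CHARACTER)  # 800 characters per second
```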

Compensating for this definite handicap, serial connections claim versatility. Their
signals can go the distance. Not just the mile or so you can shoot out the signal from your
standard serial port, but the thousands of miles you can make by modem—tied, of course,
to that old serial port.
The trade-off is signal integrity for speed. As they travel down wires, digital pulses tend
to blur. The electrical characteristics of wires tend to round off the sharp edges of pulses
and extend their length. The farther a signal travels, the less defined it becomes until
digital equipment has difficulty telling where the one pulse ends and the next begins. The
more closely spaced the pulses are (and, hence, the higher the bit rate), the worse the
problem becomes. By lowering the bit rate and extending the pulses and the time
between them, the farther the signal can go before the pulses blend together. (Modems
avoid part of this problem by converting digital signals to analog signals for the long
haul. PC networks achieve length and speed by using special cables and signaling
technologies.)
The question of how far a serial signal can reach depends on both the equipment and wire
that you use. You can probably extend a 9,600 bps connection to a hundred feet or more.
At a quarter mile, you'll probably be down to 1,200 or 300 bps (slower than even cheap
printers can type).
Longer wires are cheaper with serial connections, too, a point not lost on system
designers. Where a parallel cable requires 18 to 25 separate wires to carry its signals, a
serial link makes do with three: one to carry signals from your PC to the serial peripheral,
one to carry signals from the serial peripheral to the PC, and a common or ground signal
that provides a return path for both.
The electrical signal on a serial cable is a rapidly switching voltage. Digital in nature, it
has one of two states. In the communications industry, these states are termed space and
mark. Space is the absence of a bit, and mark is the presence of a bit. On the serial line, a
space is a positive voltage and a mark is a negative voltage.
words, when you're not sending data down a serial line, it has an overall positive voltage
on it. Data will appear as a series of negative going pulses. The original design of the
serial port specification called for the voltage to shift from a positive 12 volts to negative
12 volts. Because 12 volts is an uncommon potential in many PCs, the serial voltage
often varies from positive 5 to negative 5 volts.

Connectors

The physical manifestation of a serial port is the connector that glowers on the rear panel
of your PC. It is where you plug your serial peripheral into your computer. And it can be
the root of all evil—or so it will seem after a number of long evenings during which you
valiantly try to make your serial device work with your PC only to have text disappear
like phantoms at sunrise. Again, the principal problem with serial ports is the number of
options that it allows designers. Serial ports can use either of two styles of connectors,
each of which has two options in signal assignment. Worse, some manufacturers venture
bravely in their own directions with the all-important flow control signals. Sorting out all
of these options is the most frustrating part of serial port configuration.

25-Pin

The basic serial port connector is called a 25-pin D-shell. It earns its name from having
25 connections arranged in two rows that are surrounded by a metal guide that takes the
form of a rough letter D. The male variety of this connector—the one that actually has
pins inside it—is normally used on PCs. Most, but hardly all, serial peripherals use the
female connector (the one with holes instead of pins) for their serial ports. Although both
serial and parallel ports use the same style 25-pin D-shell connectors, you can distinguish
serial ports from parallel ports because on most PCs the latter use female connectors.
Figure 21.3 shows the typical male serial port DB-25 connector that you'll find on the
back of your PC.
Figure 21.3 The male DB-25 connector used by serial ports on PCs.

Although the serial connector allows for 25 discrete signals, only a few of them are ever
actually used. Serial systems may involve as few as three connections. At most, PC serial
ports use ten different signals. Table 21.2 lists the names of these signals, their
mnemonics, and the pins to which they are assigned in the standard 25-pin serial
connector.

Table 21.2. 25-Pin Serial Connector Signal Assignments

Pin Function Mnemonic


1 Chassis ground None
2 Transmit data TXD
3 Receive data RXD
4 Request to send RTS
5 Clear to send CTS
6 Data set ready DSR
7 Signal ground GND
8 Carrier detect CD
20 Data terminal ready DTR
22 Ring indicator RI
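The assignments in Table 21.2 can be restated as a lookup table, which is handy when writing diagnostic or cabling notes. This is only a restatement of the table's own data in Python notation; the dictionary name is my own:

```python
# The ten signals a PC actually uses on the 25-pin serial connector,
# keyed by pin number; all other pins are unused on PCs.

DB25_SIGNALS = {
    1:  ("None", "Chassis ground"),
    2:  ("TXD",  "Transmit data"),
    3:  ("RXD",  "Receive data"),
    4:  ("RTS",  "Request to send"),
    5:  ("CTS",  "Clear to send"),
    6:  ("DSR",  "Data set ready"),
    7:  ("GND",  "Signal ground"),
    8:  ("CD",   "Carrier detect"),
    20: ("DTR",  "Data terminal ready"),
    22: ("RI",   "Ring indicator"),
}

mnemonic, function = DB25_SIGNALS[20]
print(mnemonic, "-", function)   # DTR - Data terminal ready
```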

Note that in the standard serial cable, signal ground (which is the return line for the data
signals on pins 2 and 3) is separated from the chassis ground on pin one. The chassis
ground pin is connected directly to the metal chassis or case of the equipment, much like
the extra prong of a three-wire AC power cable, and provides the same protective
function. It assures that the cases of the two devices linked by the serial cable are at the
same potential, which means you won't get a shock if you touch both at the same time.
As wonderful as this connection sounds, it is often omitted from serial cables. On the
other hand, the signal ground is a necessary signal that the serial link cannot work
without. You should never connect the chassis ground to the signal ground.

9-Pin

If nothing else, using a 25-pin D-shell connector for a serial port is a waste of at least 15
pins. Most serial connections use fewer than the complete 10; some as few as 4 with
hardware handshaking, 3 with software flow control. For the sake of standardization, the
PC industry sacrificed the cost of the other unused pins for years until a larger—or
smaller, depending on your point of view—problem arose: space. A serial port connector
was too big to fit on the retaining brackets of expansion boards along with a parallel
connector. In that all the pins in the parallel connector had an assigned function, the serial
connector met its destiny and got miniaturized.
The problem arose when IBM attempted to put both sorts of ports on one board inside its
Personal Computer AT when it was introduced in 1984. To cope with the small space
available on the card retaining bracket, IBM eliminated all the unnecessary pins but kept
the essential design of the connector the same. The result was an implementation of the
standard serial port that uses a 9-pin D-shell connector. To trim the 10 connections to 9,
IBM omitted the little used chassis ground connection.
As with the 25-pin variety of serial connector, the 9-pin serial jack on the back of PCs
uses a male connector. This choice distinguishes it from the female 9-pin D-shell jacks
used by early video adapters (the MDA, CGA, and EGA systems all used this style of
connector). Figure 21.4 shows the 9-pin male connector that's used on some PCs for
serial ports.
Figure 21.4 The male DB-9 plug used by AT-class serial devices.

Besides eliminating some pins, IBM also rearranged the signal assignments used in the
miniaturized connector. Table 21.3 lists the signal assignments for the 9-pin serial
connector introduced with the IBM PC-AT.

Table 21.3. IBM 9-Pin Serial Connector

Pin Function Mnemonic


1 Carrier detect CD
2 Receive data RXD
3 Transmit data TXD
4 Data terminal ready DTR
5 Signal ground GND
6 Data set ready DSR
7 Request to send RTS
8 Clear to send CTS
9 Ring indicator RI

Other than the rearrangement of signals, the 9-pin and 25-pin serial connectors are
essentially the same. All the signals behave identically no matter the size of the connector
on which they appear.

Motherboard Headers

When a serial port is incorporated into motherboard circuitry, the motherboard maker
may provide either a D-shell connector on the rear edge of the board or a header from
which you must run a cable to an external connector. The pin assignments on these
motherboard headers usually conform to those of a standard D-shell connector, allowing
you to use a plain ribbon cable to make the connection.
Intel, however, opts for a different pin assignment on many of its motherboards. Table
21.4 lists the pin assignments of most Intel motherboards.

Table 21.4. Intel Motherboard Serial Port Header Pin Assignments

Motherboard header pin Corresponding 9-Pin D-shell Pin Function


1 1 Carrier detect
2 6 Data set ready
3 2 Receive data
4 7 Request to send
5 3 Transmit data
6 8 Clear to send
7 4 Data terminal ready
8 9 Ring indicator
9 5 Signal ground
10 No connection No connection

Signals

Serial communication is an exchange of signals across the serial interface. These signals
involve not just data but also the flow control signals that help keep the data flowing as
fast as possible—but not too fast.
First we'll look at the signals and their flow in the kind of communication system for
which the serial port was designed, linking a PC to a modem. Then we'll examine how
attaching a serial peripheral to a serial port complicates matters and what you can do to
make the connection work.

Definitions

The names of the signals on the various lines of the serial connector sound odd in today's
PC-oriented lingo because the terminology originated in the communications industry.
The names are more relevant to the realm of modems and vintage teletype equipment.
Serial terminology assumes that each end of a connection has a different type of
equipment attached to it. One end has a data terminal connected to it. In the old days
when the serial port was developed, a terminal was exactly that—a keyboard and a screen
that translated typing into serial signals. Today a terminal is usually a PC. For reasons
known but to those who revel in rolling their tongues across excess syllables, the term
Data Terminal Equipment is often substituted. To make matters even more complex,
many discussions talk about DTE devices—which means exactly the same thing as "data
terminals."
The other end of the connection had a data set, which corresponds to a modem. Often
engineers substitute the more formal name Data Communication Equipment or talk about
DCE devices.
The distinction between data terminals and data sets (or DTE and DCE devices) is
important. Serial communications were originally designed to take place between one
DTE and one DCE, and the signals used by the system are defined in those terms.
Moreover, the types of RS-232C serial devices you wish to connect determine the kind of
cable you must use.

Transmit Data

The serial data leaving the RS-232C port is called the transmit data line, which is usually
abbreviated TXD. The signal on it comprises the long sequence of pulses generated by
the UART in the serial port. The data terminal sends out this signal, and the data set
listens to it.

Receive Data

The stream of bits going the other direction—that is, coming in from a distant serial
port—goes through the receive data line (usually abbreviated RXD) to reach the input of
the serial port's UART. The data terminal listens on this line for the data signal coming
from the data set.

Data Terminal Ready

When the data terminal is able to participate in communications, that is, it is turned on
and in the proper operating mode, it signals its readiness to the data set by applying a
positive voltage to the data terminal ready line, which is abbreviated as DTR.

Data Set Ready

When the data set is able to receive data, that is, it is turned on and in the proper
operating mode, it signals its readiness by applying a positive voltage to the data set
ready line, which is abbreviated as DSR. Because serial communications must be two
way, the data terminal will not send out a data signal unless it sees the DSR signal
coming from the data set.

Request To Send

When the data terminal is on and capable of receiving transmissions, it puts a positive
voltage on its request to send line, usually abbreviated RTS. This signal tells the data set
that it can send data to the data terminal. The absence of an RTS signal across the serial
connection will prevent the data set from sending out serial data. This allows the data
terminal to control the flow of the data set to it.

Clear To Send

The data set, too, needs to control the signal flow from the data terminal. The signal it
uses is called clear to send, which is abbreviated CTS. The presence of the CTS in effect
tells the data terminal that the coast is clear and the data terminal can blast data down the
line. The absence of a CTS signal across the serial connection prevents the data terminal
from sending out serial data.

Carrier Detect

The serial interface standard shows its roots in the communication industry with the
carrier detect signal, which is usually abbreviated CD. This signal gives a modem, the
typical data set, a means of signaling to the data terminal that it has made a connection
with a distant modem. The signal says that the modem or data set has detected the carrier
wave of another modem on the telephone line. In effect, the carrier detect signal gets sent
to the data terminal to tell it that communications are possible. In some systems, the data
terminal must see the carrier detect signal before it will engage in data exchange. Other
systems simply ignore this signal.

Ring Indicator

Sometimes a data terminal has to get ready to communicate even before the flow of
information begins. For example, you might want to switch your communications
program into answer mode so that it can deal with an incoming call. The designers of the
serial port provided such an early warning in the form of the ring indicator signal, which
is usually abbreviated RI. When a modem serving as a data set detects ringing
voltage—the low frequency, high voltage signal that makes telephone bells ring—on the
telephone line to which it is connected, it activates the RI signal, which alerts the data
terminal to what's going on. Although useful in setting up modem communications, you
can regard the ring indicator signal as optional because its absence usually will not
prevent the flow of serial data.

Signal Ground

All of the signals used in a serial port need a return path. The signal ground provides this
return path. The single ground signal is the common return for all other signals on the
serial interface. Its absence will prevent serial communications entirely.

Flow Control

This hierarchy of signals hints that serial communications can be a complex process. The
primary complicating factor is handshaking or flow control. The designers of the serial
interface recognized that some devices might not be able to accommodate information as
fast as others could deliver it, so they built handshaking into the serial communications
hardware using several special control signals to compensate.
Flow control becomes extremely important when you want to use a serial
connection to a slow device such as a plotter. Simply put, plotters aren't as quick as PCs.
As you sit around playing Freecell for the fourteenth hand while waiting for the blueprint
of your dream house to roll out, that news comes as little surprise. Plotters are

http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/wrh21.htm (18 de 67) [23/06/2000 06:51:44 p.m.]


Winn L. Rosch Hardware Bible, Electronic Edition, Chapter 21

mechanical devices that work at mechanical speed. PCs are electronic roadrunners. A
modern PC can draw a blueprint in its memory much quicker than your plotter can ink it
on paper.
The temptation for your PC is to force feed serial devices, shooting data out like rice
puffs from a cannon. After the first few gulps, however, force-fed serial devices choke.
With a serial connection, the device might let the next salvo whiz right by. Your plotter
may omit something important—like bedroom walls and bathroom plumbing—and leave
large gaps in your plan (but, perhaps, making your future life somewhat more
interesting). Flow control helps throttle down the onslaught of data to prevent such
omissions.
The concept underlying flow control is the same as for parallel and other ports: your
peripheral signals when it cannot accept more characters to stop the flow from your PC.
When the peripheral is ready for more, it signals its availability to your PC. Where the
traditional parallel port uses a simple hardware scheme of this handshaking, flow control
for the serial port is a more complex issue. As with every other aspect of serial
technology, flow control is a theme overwhelmed by variations.
The chief division in serial flow control is between hardware and software. Hardware
flow control involves the use of special control lines that can be (but don't have to be)
part of a serial connection. Your PC signals whether it is ready to accept more data by
sending a signal down the appropriate wire. Software flow control involves the exchange
of characters between PC and serial peripheral. One character tells the PC your
peripheral is ready and another warns that it can't deal with more data. Both hardware
and software flow control take more than one form. As a default, PC serial ports use
hardware flow control (or hardware handshaking). Most serial peripherals do, too.

Hardware Flow Control

Several of the signals in the serial interface are specifically designed to help handle flow
control. Rather than a simple on and off operation, however, they work together in an
elaborate ritual.
The profusion of signals seems overkill for keeping a simple connection such as that with
a plotter under control, and it is. The basic handshaking protocol for a serial interface is
built around the needs of modem communications. Establishing a modem connection and
maintaining the flow of data through it is one of the more complex flow control problems
for a serial port. Even a relatively simple modem exchange involves about a dozen steps
with a complex interplay of signals. The basic steps of the dance would go something
like this:
1. The telephone rings when a remote modem wants to make a connection. The data
set sends the ring indicator signal to the data terminal to warn of the incoming call.

2. The data terminal switches on or flips into the proper mode to engage in
communications. It indicates its readiness by sending the data terminal ready
signal to the data set.

3. Simultaneously, it activates its request to send line.

4. When the data set knows the data terminal is ready, it answers the phone and
listens for the carrier of the other modem. If it hears the carrier, it sends out the
carrier detect signal.

5. The data set negotiates a connection. When it is capable of sending data down the
phone line, it activates the data set ready signal.

6. Simultaneously, it activates its clear to send line.

7. The data set relays bytes from the phone line to the data terminal through the
receive data line.

8. The data terminal sends bytes to the data set (and thence the distant modem)
through the transmit data line.

9. Because the phone line is typically slower than the data terminal-to-data set link,
the data set quickly fills its internal buffer. It tells the data terminal to stop sending
bytes by deactivating the clear to send line. When its buffer empties, it reactivates
clear to send.

10. If the data terminal cannot handle incoming data, it deactivates its request to send
line. When it can again accept data, it reactivates the request to send line.

11. The call ends. The carrier disappears, and the data set discontinues the carrier
detect signal, clear to send signal, and data set ready signal.

12. Upon losing the carrier detect signal, the data terminal returns to its quiescent state,
dropping its request to send and data terminal ready signals.

Underlying the serial dance are two rules. 1. The data terminal must see the data set
ready signal as well as the clear to send signal before it will disgorge data. 2. The data set
must see the data terminal ready and request to send signals before it will send out serial
data. Interrupting either of the first pair of signals will usually stop the data terminal from
pumping out data. Interrupting either of the second pair of signals will stop the data set
from replying with its own data.
The carrier detect signal may or may not enter into the relationship. Some data terminals
require seeing the carrier detect signal before they will transmit data. Others just don't
give a byte one way or the other.
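The two rules reduce to a pair of simple predicates. The sketch below is purely illustrative (the function names are my own invention and model the logic, not real port-programming calls):

```python
# Illustrative restatement of the two hardware flow control rules.
# Each argument is True when the corresponding RS-232C line is active.

def terminal_may_transmit(dsr: bool, cts: bool) -> bool:
    """The data terminal sends only while it sees DSR and CTS."""
    return dsr and cts

def data_set_may_transmit(dtr: bool, rts: bool) -> bool:
    """The data set sends only while it sees DTR and RTS."""
    return dtr and rts

# Dropping either handshake line halts the corresponding sender.
print(terminal_may_transmit(dsr=True, cts=False))   # False
print(data_set_may_transmit(dtr=True, rts=True))    # True
```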

Software Flow Control

The alternate means of handshaking, software flow control, requires your serial
peripheral and PC to exchange characters or tokens to indicate whether they should
transfer data. The serial peripheral normally sends out one character to indicate it can
accept data and a different character to indicate that it is busy and cannot accommodate
more. Two pairs of characters are often used, XON/XOFF and ETX/ACK.
In the XON/XOFF scheme, the XOFF character sent from your serial peripheral tells
your PC that its buffer is full and to hold off sending data. This character is also
sometimes called DC3 and has an ASCII value of 19 or 13(Hex). It is sometimes called
Control-S. (With some communications programs, you can hold down the Control key
and type S to tell the remote system to stop sending characters to your PC). Once your
serial peripheral is ready to receive data again, it sends out XON, also known as DC1, to
your PC. This character has an ASCII value of 17 or 11(Hex). It is sometimes called
Control-Q. When you hold down Control and type Q into your communications program,
it cancels the effect of a Control-S.
ETX/ACK works similarly. ETX, which is an abbreviation for End TeXt, tells your PC to
hold off on sending more text. This character has an ASCII value of 3 (decimal or
hexadecimal) and is sometimes called Control-C. ACK, short for Acknowledge, tells
your PC to resume sending data. It has an ASCII value of 6 (decimal or hexadecimal),
and is sometimes called Control-F.
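The four flow control characters and their values can be summarized in code. The filter function below is my own sketch of how a sender might track XON/XOFF state, not part of any real driver:

```python
# Software flow control characters and their ASCII values.

XON  = 0x11   # DC1, decimal 17, Control-Q: resume sending
XOFF = 0x13   # DC3, decimal 19, Control-S: stop sending
ETX  = 0x03   # Control-C: stop sending (ETX/ACK scheme)
ACK  = 0x06   # Control-F: resume sending (ETX/ACK scheme)

def may_send_after(incoming: bytes) -> bool:
    """Return True if the last XON/XOFF seen permits transmission."""
    allowed = True
    for byte in incoming:
        if byte == XOFF:
            allowed = False
        elif byte == XON:
            allowed = True
    return allowed

print(may_send_after(bytes([XOFF])))        # False
print(may_send_after(bytes([XOFF, XON])))   # True
```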
There's no issue as to whether hardware or software flow control is better. Both work and
that's all that's necessary. The important issue is what kind of flow control your serial
peripheral and software use. You must assure that your PC, your software, and your
serial peripheral use the same kind of flow control.
Your software will either tell you what it prefers or give you the option of choosing when
you load the driver for your peripheral. On your serial peripheral, you select serial port
flow control when you set it up. Typically, this will involve making a menu selection or
adjusting a DIP switch.

Cables

The design of the standard RS-232C serial interface anticipates that you will connect a
data terminal to a data set. When you do, all the connections at one end of the cable that
links them are carried through to the other end, pin for pin, connection for connection.
The definitions of the signals at each end of the cable are the same, and the function and
direction of travel (whether from data terminal to data set or the other way around) of
each is well defined. Each signal goes straight through from one end to the other. Even
the connectors are the same at either end. Consequently, a serial cable should be
relatively easy to fabricate.
In the real world, nothing is so easy. Serial cables are usually much less complicated or
much more complicated than this simple design. Unfortunately, if you plan to use a serial
connection for a printer or plotter, you have to suffer through the more complex design.

Straight Through Cables

Serial cables are often simpler than pin-for-pin connections from one end to the other
because no serial link uses all 25 connector pins. Even with the complex handshaking
schemes used by modems, only nine signals need to travel from the data terminal to the
data set, PC to modem. (For signaling purposes, the two grounds are redundant—most
serial cables do not connect the chassis ground.) Consequently, you need only make these
9 connections to make virtually any data terminal to data set link work. Assuming you
have a 25-pin D-shell connector at either end of your serial cable, the essential pins that
must be connected are 2 through 8, 20, and 22 on a 25-pin D-shell connector. With 9-pin
connectors at either end of your serial cable, all 9 connections are essential.
Not all systems use all the handshaking signals, so you can often get away with fewer
connections in a serial cable. The minimal case is a system that uses software
handshaking only. In that case, you need only three connections: transmit data, receive
data, and the signal ground. In other words, you need only connect pins 2, 3, and 7 on a
25-pin connector or pins 2, 3, and 5 on a 9-pin serial connector—providing, of course,
you have the same size connector at each end of the cable.
Although cables with an intermediate number of connections are often available, they are
not sufficiently less expensive than the nine-wire cable to justify the risk and lack of
versatility. So you should limit your choices to a nine-wire cable for systems that use
hardware handshaking or three-wire cables for those that you're certain use only software
flow control.
Manufacturers use a wide range of cable types for serial connections. For the relatively
low data rates and reasonable lengths of serial connections, you can get away with just
about everything, including twisted pair telephone wire. To ensure against interference,
you should use shielded cable, which wraps a wire braid or aluminum coated plastic film
about inner conductors to prevent signals leaking out or in. The shield of the cable should
be connected to the signal ground. (Ideally, the signal ground should have its own wire,
and the shield should be connected to chassis ground, but most folks just don't bother.)

Adapter Cables

If you need a cable with a 25-pin connector at one end and a 9-pin connector at the other,
you cannot use a straight through design even when you want to link a data terminal to a
data set. The different signal layouts of the two styles of connector are incompatible.
After all, you can't possibly link pin 22 on a 25-pin connector to a non-existent pin 22 on
a 9-pin connector.
This problem is not uncommon. Even though the 9-pin connector has become a de facto
standard on PCs, most other equipment, including serial plotters, printers, and modems,
has stuck with the 25-pin standard. To get from one connector type to another, you need


an adapter. The adapter can take the form of a small assembly with a connector on each
end or of an adapter cable, typically from six inches to six feet long.
Although commercial adapters are readily available, you can easily make your own.
Table 21.5 shows the proper wiring for an adapter to link a 25-pin serial device to a 9-pin
jack on a PC, assuming a data terminal-to-data set connection.

Table 21.5. Wiring for 9-to-25 Pin Serial Port Adapter

25-pin connector 9-pin connector Mnemonic Function


2 3 TXD Transmit data
3 2 RXD Receive data
4 7 RTS Request to send
5 8 CTS Clear to send
6 6 DSR Data set ready
7 5 GND Signal ground
8 1 CD Carrier detect
20 4 DTR Data terminal ready
22 9 RI Ring indicator

Again, nine wires in a cable will suffice. For systems using only software flow control,
you need link only the three essential pins. Note, however, the three pins do not get
connected one-for-one. Pin 2 on the 25-pin connector goes to pin 3 on the 9-pin; pin 3 on
the 25-pin goes to pin 2 on the 9-pin. The ground on pin 7 of the 25-pin connector goes
to pin 5 of the 9-pin connector.
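The adapter wiring of Table 21.5 is just a lookup table, and it can be captured as one. This Python sketch (the pin pairs come from the table above; the dictionary and helper names are my own invention) maps each 25-pin line to its 9-pin counterpart and pulls out the minimal three-wire subset for software flow control:

```python
# Pin mapping for a 25-pin to 9-pin serial adapter (Table 21.5).
# Keys are 25-pin connector pins; values are (9-pin pin, mnemonic).
ADAPTER_25_TO_9 = {
    2: (3, "TXD"),   # Transmit data
    3: (2, "RXD"),   # Receive data
    4: (7, "RTS"),   # Request to send
    5: (8, "CTS"),   # Clear to send
    6: (6, "DSR"),   # Data set ready
    7: (5, "GND"),   # Signal ground
    8: (1, "CD"),    # Carrier detect
    20: (4, "DTR"),  # Data terminal ready
    22: (9, "RI"),   # Ring indicator
}

def minimal_three_wire(mapping):
    """Return just the connections needed for software flow control."""
    return {p25: m for p25, m in mapping.items() if m[1] in ("TXD", "RXD", "GND")}
```

Note that the three-wire subset the function returns, pins 2, 3, and 7 on the 25-pin side going to pins 3, 2, and 5 on the 9-pin side, matches the connections described above.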

Crossover Cables

As long as you want to connect a computer serial port to a modem, you should have no problem with serial communications. You will be connecting a data terminal to a data set, exactly what engineers designed the serial system for. Simply
sling a cable with enough conductors to handle all the vital signals between the computer
and modem and, Voila! Serial communications without a hitch. Try it, and you're likely
to wonder why so many people complain about the capricious nature of serial
connections.
When you want to connect a plotter or printer to a PC through a serial port, however, you
will immediately encounter a problem. The architects of the RS-232C serial system
decided that both PCs and the devices are data terminals or DTE devices. The
designations actually made sense, at least at that time. You were just as likely to connect
a serial printer (such as a teletype) to a modem as you were a computer terminal. There
was no concern about connecting a printer to a PC because PCs didn't even exist back


then.
When you connect a plotter or printer and your PC—or any two DTE devices—with an
ordinary serial cable, you will not have a communication system at all. Neither machine
will know that the other one is even there. Each will listen on the serial port signal line
that the other is listening to, and each will talk on the line that the other talks on. One
device won't hear a bit the other is saying.
The obvious solution to the problem is to switch some wires around. Move the transmit
data wire from the PC to where the receive data wire goes on the plotter or printer. Route
the PC's receive data wire to the transmit data wire of the plotter or printer. A simple
crossover cable does exactly that, switching the transmit and receive signals at one end
of the connection.
Many of the devices that you plug into a PC are classed as DTE or data terminals just like the PC. All of these require a crossover cable. Table 21.6 lists many of the devices you
might connect to your PC and whether they function as data terminals (DTE) or data sets
(DCE).

Table 21.6. Common Serial Device Types

Peripheral Device type Cable needed to connect to PC


PC DTE Crossover
Modem DCE Straight-through
Mouse DCE Straight-through
Trackball DCE Straight-through
Digitizer DCE Straight-through
Scanner DCE Straight-through
Serial printer DTE Crossover
Serial plotter DTE Crossover

Some serial ports on PCs (and some serial devices, too) offer a neat solution to this
problem. They allow you to select whether they function as DTE or DCE with jumpers or
DIP switches. To connect one of these to a plotter or printer using an ordinary straight
through cable, configure the PC's serial port as DCE.
This simple three-wire crossover cable works if you plan on using only software flow
control. With devices that require hardware handshaking, however, the three-wire
connection won't work. You need to carry the hardware handshaking signals through the
cable. And then the fun begins.
Your problems begin with carrier detect. The carrier detect signal originates on a data set,
and many data terminals need to receive it before they send out data. When you connect
two data terminals, neither generates a signal anything like carrier detect, so there's
nothing to connect to make the data terminals start talking. You have to fabricate the
carrier detect signal somehow.


Because data terminals send out their data terminal ready signals whenever they are
ready to receive data, you can steal the voltage from that connection. Most crossover
cables link their carrier detect signals to the data terminal ready signal from the other end
of the cable.
Both data terminals will send out their data terminal ready signals when they are ready.
They expect to see a ready signal from a data set on the data set ready connection.
Consequently, most crossover cables also link data terminal ready on one end to data set
ready (as well as carrier detect) at the other end. Making this linkup allows the two data
terminals at either end of the cable to judge when the other is ready.
The actual flow control signals are request to send and clear to send. The typical
crossover cable thus links the request to send signal from one end to the clear to send
connection at the other end. This link will enable flow control—providing, of course, the
two data terminal devices follow the RS-232C signaling standard. Table 21.7 summarizes these connections.

Table 21.7. Basic Crossover Cable for Hardware Handshaking (25-Pin Connectors)

PC end Function Device end


2 Transmit data 3
3 Receive data 2
4 Request to send 5
5 Clear to send 4
6 Data set ready 20
7 Signal ground 7
8 Carrier detect 20
20 Data terminal ready 6
20 Data terminal ready 8

Unfortunately, this cable may not work properly when you link many serial devices to
the typical PC. A different design that combines the request to send and clear to send
signals and links them to carrier detect at the opposite end of the cable often works better
than the above by-the-book design. The wiring connections for this variety of crossover
cable are listed in Table 21.8.

Table 21.8. Wiring for a Generic Crossover Serial Cable (25-Pin Connectors)

PC end Function Device end


2 Transmit data 3
3 Receive data 2
4 Request to send 8
5 Clear to send 8
6 Data set ready 20
7 Signal ground 7
8 Carrier detect 5
8 Carrier detect 4
20 Data terminal ready 22
20 Data terminal ready 6
22 Ring indicator 20

A number of printers vary from the signal layout ascribed to RS-232C connections and


use different connections for flow control. DEC serial printers, among others, use pin 19
instead of pin 20 for hardware flow control. These require another variation on the
generic crossover cable to make them work properly with PCs. The proper wiring is
shown in Table 21.9.

Table 21.9. Wiring for Crossover Serial Cable (25-Pin to 25-Pin) for DEC Printers
Using Pin 19 Handshake

PC end Function Device end


2 Transmit data 3
3 Receive data 2
4 Request to send 8
5 Clear to send 8
6 Data set ready 19
7 Signal ground 7
8 Carrier detect 5
8 Carrier detect 4
20 Data terminal ready 22
20 Data terminal ready 6

Some of the newer and more popular serial printers are in the LaserJet series made by
Hewlett-Packard. These use a simplified hardware flow control system that involves only
the DTR signal on the printer end of the cable. Earlier printer models use 25-pin
connectors, and Hewlett-Packard sells a crossover cable for these as its part number
17255D. Its wiring is shown in Table 21.10.

Table 21.10. HP 25-Pin to 25-Pin Serial Adapter Cable

PC end LaserJet end


Pin Signal Pin Signal
1 Chassis ground 1 Chassis ground
2 Transmit data 3 Receive data
3 Receive data 2 Transmit data
5 Clear to send 20 Data terminal ready
6 Data set ready 20 Data terminal ready
7 Signal ground 7 Signal ground

You can directly connect a PC-style 9-pin serial port to a LaserJet with a 25-pin serial
connector using Hewlett-Packard's adapter cable model 2424G. Its wiring is shown in


Table 21.11.

Table 21.11. HP 9-Pin to 25-Pin Serial Adapter Cable

PC end LaserJet end


Pin Signal Pin Signal
2 Receive data 2 Transmit data
3 Transmit data 3 Receive data
5 Signal ground 7 Signal ground
6 Clear to send 20 Data terminal ready
8 Data set ready 20 Data terminal ready

More recent LaserJets use 9-pin serial connectors instead of the 25-pin variety. The
machines do not follow the IBM 9-pin standard used by the 9-pin jacks on PCs but
instead use its complement. If you consider the IBM-style connector DTE, then the
Hewlett-Packard LaserJet 9-pin connector is DCE. It requires its own adapter cable to
plug into standard 25-pin PC-style serial ports. The necessary adapter cable is available
from Hewlett-Packard as model number C2933A. Table 21.12 shows its wiring.

Table 21.12. HP 25-Pin to 9-Pin Serial Adapter Cable

PC end LaserJet end


Pin Signal Pin Signal
2 Transmit data 3 Receive data
3 Receive data 2 Transmit data
4 Request to send 7 Not used
5 Clear to send 8 Data terminal ready
6 Data set ready 6 Data terminal ready
7 Signal ground 5 Signal ground
8 Carrier detect 1 Request to send
20 Data terminal ready 4 Data set ready
22 Ring indicator 9 Not used

The Hewlett-Packard redefinition of the 9-pin serial connector for its printers has one big
benefit. You can connect a 9-pin PC serial port directly to a 9-pin HP printer port using a
straight through cable. Moreover, HP's printers use only seven of the nine
connections. Table 21.13 shows the wiring of this cable, which is available from
Hewlett-Packard as its model C2932A.


Table 21.13. HP 9-Pin to 9-Pin Serial Adapter Cable

PC end LaserJet end


Pin Signal Pin Signal
1 Carrier Detect 1 Carrier detect
2 Receive data 2 Transmit data
3 Transmit data 3 Receive data
4 Data terminal ready 4 Data set ready
5 Signal ground 5 Signal ground
6 Clear to send 6 Data terminal ready
7 Request to send 7 Not used
8 Data set ready 8 Data terminal ready
9 Ring indicator 9 Not used

One way to avoid the hassle of finding the right combination of hardware handshaking
connections would appear to be letting software do it—avoiding hardware handshaking
and instead using the XON-XOFF software flow control available with most serial
devices. Although a good idea, even this expedient can cause hours of head scratching
when nothing works as it should—or nothing works at all.
When trying to use software handshaking, nothing happening is a common occurrence.
Without the proper software driver, your PC or PS/2 has no idea that you want to use
software handshaking. It just sits around waiting for a DSR and a CTS to come rolling in
toward it from the connected serial device.
You can sometimes circumvent this problem by connecting the data terminal ready to
data set ready and request to send to clear to send within the connectors at each end of
the cable. This wiring scheme satisfies the handshaking needs of a device with its own
signals. But beware. This kind of subterfuge will make systems that use hardware
handshaking print, too, but you'll probably lose large blocks of text when the lack of real
handshaking lets your PC continue to churn out data even after your printer shouts
"Stop!"
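The XON-XOFF mechanism itself is simple enough to model in a few lines. In this hypothetical Python sketch (the class is illustrative; only the XON and XOFF character codes, the DC1 and DC3 control characters, are standard), the sender refuses to transmit while the last control character received was XOFF:

```python
XON, XOFF = 0x11, 0x13  # DC1 and DC3 control characters

class SoftwareFlowSender:
    """Toy model of XON-XOFF flow control: the sender holds data
    while the receiver's last control character was XOFF."""
    def __init__(self):
        self.paused = False
        self.sent = []

    def receive_control(self, byte):
        # A received XOFF stops the flow; a received XON restarts it.
        if byte == XOFF:
            self.paused = True
        elif byte == XON:
            self.paused = False

    def send(self, data):
        # Transmit only while the far end has not said "stop".
        for ch in data:
            if self.paused:
                return False  # caller must retry after XON arrives
            self.sent.append(ch)
        return True
```

The model also shows why the driver matters: something in your PC has to notice the XOFF character and set the pause flag, which is exactly what is missing when software handshaking silently fails.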
Finally, note that some people call crossover cables null modem cables. This is not
correct. A null modem is a single connector used in testing serial ports. It connects the
transmit data line to the receive data line of a serial port as well as crossing the
handshaking connections within the connector. Correctly speaking, a null-modem cable
is equipped with this kind of wiring at both ends. It forces both serial ports constantly on
and prevents any hardware flow control from functioning at all. Although such a cable
can be useful, it is not the same as a crossover cable. Substituting one for the other will
lead to some unpleasant surprises—text dropping from sight from within documents as
mysteriously and irrecoverably as D. B. Cooper.


UARTs

A serial port has two jobs to perform. It must re-package parallel data into serial form,
and it must send power down a long wire with another circuit at the end, which is called
driving the line.
Turning parallel data into serial is such a common electrical function that engineers
created special integrated circuits that do exactly that. Called Universal Asynchronous
Receiver/Transmitter chips or UARTs, these chips gulp down a byte or more of data and
stream it out a bit at a time. In addition, they add all the other accouterments of the serial
signal—the start, parity, and stop bits. Because every practical serial connection is
bi-directional, the UART works both ways, sending and receiving, as its name implies.
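What the UART does to each byte can be sketched in software. This hypothetical Python function (the name and defaults are my own) builds the bit sequence for one character — start bit, data bits sent least significant first, an optional parity bit, and the stop bit:

```python
def frame_byte(value, data_bits=8, parity="even", stop_bits=1):
    """Return the serial bit sequence for one character.

    The line idles at 1 (mark); the start bit is 0 (space); data bits
    go out least significant bit first; stop bits are 1.
    """
    bits = [0]  # start bit
    data = [(value >> i) & 1 for i in range(data_bits)]
    bits.extend(data)
    if parity == "even":
        bits.append(sum(data) % 2)        # make the total count of 1s even
    elif parity == "odd":
        bits.append(1 - sum(data) % 2)    # make the total count of 1s odd
    bits.extend([1] * stop_bits)          # stop bit(s)
    return bits
```

For example, framing the letter A (0x41) with even parity and one stop bit yields an 11-bit sequence on the line.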
Because the UART does all the work of serializing your PC's data signals, its operation is
one of the limits on the performance of serial data exchanges. PCs have used three
different generations of UARTs, each of which imposes its own constraints.
The choice of chip is particularly critical when you connect a modem to your serial port. When you communicate on-line with a modem, you're apt to receive long strings of
characters through the connection. Your PC must take each character from a register in
the UART and move it into memory. When your PC runs a multitasking system, it may
be diverted for several milliseconds before it turns its attention to the UART and gathers
up the character. Older UARTs must wait for the PC to take away one character before
they can accept another from the communications line. If the PC is not fast enough, the
characters pile up. The UART doesn't know what to do with them, and some of the
characters simply get lost. The latest UARTs incorporate small buffers, or memory areas,
that allow the UART to temporarily store characters until the PC has time to take them
away. These newer UARTs are more immune to character loss and are preferred by
modem users for high speed communications.
When you connect a printer to a serial port, you don't have such worries. The printer
connection is more a monologue than a dialogue—your PC chatters out characters and
gets very little backtalk from your printer. Typically, it will get only a single XOFF or
XON character to tell the PC to stop or start the data flow. Because there's no risk of a
pileup of inbound characters, there's no need for a buffer in the UART.
If you have both a modem and a serial printer attached to your PC, your strategy should
be obvious; the modem gets the port with the faster UART. Your printer can work with
whatever UART is left over.
The three UART chips that PC and peripheral makers install in their products are the
8250, 16450, and 16550A.

8250

The first UART used in PCs was installed in the original IBM PC's Asynchronous
Communications Adapter card in 1981. Even after a decade and a half, it is still popular


on inexpensive port adapter expansion boards because it is cheap. It has a one-byte internal buffer that's exactly what you need for printing or plotting applications. It can
hold the XOFF character until your PC gets around to reading it. It is inadequate for
reliable two way communications at high modem speeds.

16450

In 1984, designers first put an improved version of the 8250, the 16450 UART, in PCs.
Although the 16450 has a higher speed internal design, it still retains the one-byte buffer
incorporated into its predecessor. Serial ports built using it may still drop characters
under some circumstances at high data rates. Although functionally identical, the 16450
and 8250 are physically different (they have different pin-outs), and you cannot substitute
one in a socket meant for the other.

16550A

The real UART breakthrough came with the introduction of the 16550 to PCs in 1987.
The first versions of this chip proved buggy, so it was quickly revised to produce the
16550A. It is commonly listed as 16550AF and 16550AFN, with the last initials
indicating the package and temperature rating of the chip. The chief innovation
incorporated into the 16550 was a 16-byte first-in, first-out buffer (or FIFO). The buffer is essential to high speed modem operation in multitasking systems, making this the chip
of choice for communications.
To maintain backward compatibility with the 16450, the 16550 ignores its internal buffer
until it is specifically switched on. Most communications programs activate the buffer
automatically. Physically, the 16550 and 16450 will fit and operate in the same sockets,
so you can easily upgrade the older chip to the newer one.

Register Function

The register at the base address assigned to each serial port is used for data
communications. Bytes are moved to and from the UART using the microprocessor's
OUT and IN instructions. The next six addresses are used by other serial port registers, in
order: the Interrupt Enable Register, the Interrupt Identification Register, the Line
Control Register, the Modem Control Register, the Line Status Register, and the Modem
Status Register. Another register, called the Divisor Latch, shares the base address used
by the Transmit and Receive registers and the next higher register used by the interrupt
enable register. It is accessed by toggling a setting in the line control register.
This latch stores the divisor that determines the operating speed of the serial port.
Whatever value is loaded into the latch is multiplied by 16. The resulting product is used


to divide the clock signal supplied to the UART chip to determine the bit rate. Because of
the factor of 16 multiplication, the highest speed the serial port can operate at is limited
to 1/16 the supplied clock (which is 1.8432 MHz). Setting the latch value to its
minimum, one, results in a bit rate of 115,200.
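The arithmetic is easy to verify. Here is a minimal Python sketch, assuming the 1.8432 MHz UART clock and factor-of-16 multiplication described above (the function names are hypothetical):

```python
UART_CLOCK = 1_843_200  # Hz, the clock signal supplied to the UART

def divisor_for(bit_rate):
    """Return the divisor latch value for a desired bit rate."""
    return UART_CLOCK // (16 * bit_rate)

def bit_rate_for(divisor):
    """Return the bit rate produced by a given divisor latch value."""
    return UART_CLOCK // (16 * divisor)
```

A divisor of one gives 115,200 bits per second; a divisor of 12 gives the familiar 9,600.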
Registers not only store the values used by the UART chip but also are used to report to
your system how the serial conversation is progressing. For example, the line status
register indicates whether a character that has been loaded to be transmitted has actually
been sent. It also indicates when a new character has been received.
Although you can change the values stored in these registers manually using Debug or
your own programs, for the most part you'll never tangle with these registers. They do,
however, provide flexibility to the programmer.
Instead of being set with DIP switches or jumpers, the direct addressability of these
registers allows all the vital operating parameters to be set through software. For
instance, by loading the proper values into the line control register, you alter the word
length, parity, and number of stop bits used in each serial word.

Buffer Control

Operating system support for the buffer in the 16550 appeared only with Windows 3.1.
Even then it was limited in support to Windows applications only. DOS applications
require internal FIFO support even when they run inside Windows 3.1. Windows for
Workgroups (Version 3.11) extended buffer support to DOS applications running within
the operating environment. The standard communications drivers for OS/2 Warp and
Windows 95 operating systems will automatically take advantage of the 16550 buffer
when the chip is present.
Windows 3.1 uses its COMM.DRV for controlling the buffer of the 16550. You control
whether the buffer is activated by altering the COMxFIFO entries for each of your serial
ports in the [386Enh] section of your SYSTEM.INI file of any member of the Windows 3.1 family. To activate the buffer for a specific port, set COMxFIFO to one, where the x is
the port designation. To deactivate the buffer, make the entry zero. For example, the
following entries in SYSTEM.INI will switch on the buffer for COM3 only:

[386Enh]
COM1FIFO=0
COM2FIFO=0
COM3FIFO=1
COM4FIFO=0
By default, Windows will activate the buffers in any 16550 chip that it finds.
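Because SYSTEM.INI follows ordinary INI conventions, these entries can be toggled programmatically. Here is a hypothetical Python sketch using the standard configparser module (the function name is my own, and a real SYSTEM.INI carries far more entries than this sketch assumes):

```python
import configparser
import io

def set_com_fifo(ini_text, port, enabled):
    """Return SYSTEM.INI text with COM<port>FIFO set to 1 or 0."""
    cfg = configparser.ConfigParser()
    cfg.optionxform = str          # preserve the case of key names
    cfg.read_string(ini_text)
    if not cfg.has_section("386Enh"):
        cfg.add_section("386Enh")
    cfg.set("386Enh", f"COM{port}FIFO", "1" if enabled else "0")
    out = io.StringIO()
    cfg.write(out)
    return out.getvalue()
```

For example, `set_com_fifo(text, 3, True)` rewrites the [386Enh] section so that COM3FIFO reads 1, leaving the other FIFO entries alone.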
Under Windows 95, you can control the FIFO buffer through the Device Manager
section of the System folder found in Control Panel. Once you open Control Panel, click
on the System icon. Click on the Device Manager tab, then the entry for Ports, which will
then expand to list the ports available in your system. Click on the COM port you want to


control, then click on the Properties button below the listing, as shown in Figure 21.5.
Figure 21.5 Windows 95 Device Manager folder.

From the Communications properties screen, click on the Port Settings tab. In addition to
the default parameters set up for your chosen port, you'll see a button labeled Advanced.
Clicking on it will give you control of the FIFO buffer, as shown in Figure 21.6.
Figure 21.6 Disabling or enabling your UART FIFO buffer under Windows 95.

Windows 95 defaults to setting the FIFO buffer on if you have a 16550 UART or the
equivalent in your PC. To switch the buffer off, click on the checked box labeled Use FIFO buffers. The settings shown in Figure 21.6 are the defaults.

Identifying UARTs

One way of identifying the type of UART installed in your PC is to look at the
designation stenciled on the chip itself. Amid a sea of prefixes, suffixes, data codes,
batch numbers, and other arcana important only to the chip makers, you'll find the model
number of the chip.
First, of course, you must find the chip. Fortunately, UARTs are relatively easy to find.
All three basic types of UART use exactly the same package, a 40-pin DIP (Dual In-line
Package) black plastic shell that's a bit more than 2 inches long and 0.8 inch wide. Figure
21.7 shows this chip package. These large chips tend to dominate multifunction or port
adapter boards on which you'll typically find them. Some older PCs have their chips on
their motherboards.
Figure 21.7 The 40-pin DIP package used by UARTs.

Unfortunately, the classic embodiment of the UART chip is disappearing from modern
PCs. Large ASICs (Application-Specific Integrated Circuits) often incorporate the
circuitry and functions of the UART (or, more typically, two of them). Most PCs
consequently have no UARTs for you to find, even though they have two built-in serial
ports.
The better way to identify your UARTs is by checking their function. That way you don't
have to open up your PC to find out what you've got. For example, when you set up a
serial port, Windows 95 will tell you whether you have a 16550A UART and allow you
to adjust its buffer.
If you're stuck with an older operating system, you can still use software to be sure that
the chip will act the way it's supposed to. Snooper programs will check out your UART
quickly and painlessly. Better still, you can determine the kind of UARTs in your PC (as
well as a wealth of other information) using Microsoft Diagnostics, the program
MSD.EXE, which is included with the latest versions of Windows. After you run MSD,
you'll see a screen like that shown in Figure 21.8.
Figure 21.8 The opening screen of the Microsoft Diagnostics program.


From the main menu, choose the COM ports option by pressing "C" on your keyboard.
The program will respond by showing you the details of the serial ports that you have
installed in your system with a screen like that shown in Figure 21.9.
Figure 21.9 The Microsoft Diagnostics display of communications port parameters.

The last line of the display lists the UART chip used by each of your PC's serial ports.
Use the more recent chip for your external modem if you have the choice; the 8250 chips
(as shown in the example screen) are suitable for plotters, printers, mice, and other slow
moving critters.

Enhanced Serial Ports

Serious modem users may install an Enhanced Serial Port in their PCs, which acts like a
16550 but incorporates higher speed circuitry and a much larger buffer. Because you
have to install such an option yourself, you should know if you have one. Most Enhanced
Serial Ports are identified as 16550 UARTs by snooping programs. They were introduced
primarily to take advantage of higher modem speeds. Parallel modems and new serial
port designs such as USB provide a means of achieving the same end using recognized
industry standards, so you may want to avoid enhanced serial ports.

Logical Interface

Your PC controls the serial port UART through a set of seven registers built into the
chip. Although your programs could send data and commands to the UART (and,
through it, to your serial device) by using the hardware address of the registers on the
chip, this strategy has disadvantages. It requires the designers of systems to allocate once
and forever the system resources used by the serial port. The designers of the original
IBM PC were loath to make such a permanent commitment. Instead they devised a more
flexible system that allows your software to access ports by name. In addition, they
worked out a way that port names would be assigned properly and automatically even if
you didn't install ports in some predetermined order.

Port Names

The names they assigned were COM1 and COM2. In 1987, the designers of DOS
expanded the possible port repertory to include COM3 and COM4. Under Windows 3.1,
up to nine serial ports could be installed in a PC using DOS conventions, although no
true standards for the higher port values exist. Windows 95 has enhanced its support of
serial ports to extend to 128 potential values. The implementation of these ports is
handled by the device driver controlling them.


Under the DOS conventions, the names that get assigned to a serial port depend on the
input/output port addresses used by the registers on the UART. PCs reserve a block of
eight I/O ports for the seven UART registers. These eight addresses are sequential, so
they can be fully identified by the first, the base address of the serial port.
Because of their long use, the first two base addresses used by the serial ports are
invariant, 3F8(Hex) and 2F8(Hex). Although the next two sometimes vary, the most
widely used values for the next two base addresses are 3E8(Hex) and 2E8(Hex).
Windows automatically assumes you will use these base addresses.
IBM devised an elaborate scheme for assigning port names to base addresses. When a PC
boots up, DOS reads all base address values for serial ports, then assigns port names to
them in order. Serial port names are consequently always sequential. In theory, you could
skip a number from the ordered listing of base addresses and still get a sequence of serial
port names starting with COM1. In practice, however, this can create setup difficulties.
You're better off assuming that the defaults listed in Table 21.14 for DOS and Windows
default serial port parameters are a hard and fast rule.

Table 21.14. Default Settings for DOS and Windows Serial Ports

Port name Base address Interrupt


COM1 03F8(Hex) 4
COM2 02F8(Hex) 3
COM3 03E8(Hex) 4
COM4 02E8(Hex) 3
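For scripts that check port configurations, the defaults of Table 21.14 reduce to a small lookup table. A Python sketch (the helper name is hypothetical; the values are those of the table above):

```python
# Default base addresses and IRQs for DOS/Windows serial ports (Table 21.14).
DEFAULT_PORTS = {
    "COM1": (0x3F8, 4),
    "COM2": (0x2F8, 3),
    "COM3": (0x3E8, 4),
    "COM4": (0x2E8, 3),
}

def shares_interrupt(port_a, port_b):
    """True when two ports use the same IRQ line, a conflict risk on ISA."""
    return DEFAULT_PORTS[port_a][1] == DEFAULT_PORTS[port_b][1]
```

Note that COM1 and COM3 share interrupt 4 under these defaults, and COM2 and COM4 share interrupt 3, which is the classic source of the conflicts described under Interrupts.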

Assigning alternate base addresses for higher port names when using the Windows 3.1
family requires adjusting settings in your PC's SYSTEM.INI file which is normally
located in your WINDOWS directory. If you look under the [386Enh] section of
SYSTEM.INI, you'll find entries for the variables COM3Base and COM4Base. You can
change the base address assignments by altering these values. Windows 95 allows much
more versatility. Normal hardware installation and Device Manager allow you to
configure additional serial ports to use any of a range of base addresses within the range
allowed by your PC and the driver software accompanying the port hardware.

Interrupts

Serial ports normally operate as interrupt-driven devices. That is, when they must
perform an action immediately, they send a special signal called an interrupt to your PC's
microprocessor. When the microprocessor receives an interrupt signal, it stops the work
it is doing, saves its place, and executes special software routines called interrupt
handlers.
A serial port generates a hardware interrupt by sending a signal down an interrupt


request line or IRQ. PC expansion buses typically have six to twelve separate interrupt
request lines. The ISA expansion bus is designed to allow one interrupt and one device to
use each IRQ line. More advanced expansion buses like EISA and PCI allow more than
one device to share each IRQ line.
Unless interrupts are shared, each serial port needs its own interrupt. Unfortunately, the
number of available interrupts is typically too small for four serial ports to get their own
IRQs. In most ISA computers, only interrupts 3 and 4 are typically used by serial ports.
When you have more than two serial ports installed in such a system, some ports must
share interrupts. Because ISA makes no provision for the sharing of these interrupts,
conflicts can arise when two devices ask the microprocessor to use the same interrupt at
the same time. The result can be characters lost in serial port communications, mice that
refuse to move, and even system crashes.
The usual culprits in such problems are modems and mice sharing a port. Printers rarely
cause problems because their interrupt demands are small—they usually need interrupt
service only for flow control. If you must share interrupts between devices, the serial
port used by your plotter or printer usually is the best choice. It will happily
share an interrupt with your modem's serial port and will rarely cause problems,
particularly if you refrain from printing and using your modem at the same time.
Windows can help resolve serial port interrupt conflict problems. It lets you designate
your own choice of interrupt for serial ports three and four. As with the base addresses
used by these ports, you tell Windows the values you want to use in your PC's
SYSTEM.INI file. You'll find entries for COM3Int and COM4Int under the heading
[386Enh] in SYSTEM.INI. Windows 95 normally will automatically identify your serial
ports. However, you can change the settings using the Resources tab in the
Communications Port properties folder, as shown in Figure 21.10. You access this folder
exactly as you would change FIFO buffers settings as described earlier in this section.
Figure 21.10 The Resources tab on the Communications Port Properties folder.
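As a sketch, the corresponding interrupt entries sit under the same [386Enh] heading, using the entry names given in the text; the IRQ values here are purely illustrative and must match your hardware settings:

```ini
[386Enh]
COM3Int=5
COM4Int=9
```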

In any case, you cannot haphazardly assign any value to serial port interrupts and expect
the serial port to work automatically. You also have to configure the port hardware to use
the same interrupt you've indicated to Windows. Most serial ports allow you to choose the
interrupt from a short list using jumpers or DIP switches. You'll have to consult the
manual accompanying your PC or serial port adapter to determine the proper settings.

ACCESS.bus

Not appreciably faster than an RS-232C connection, ACCESS.bus earns interest with the
second half of its name. Unlike the RS-232C connection, designed to bridge two devices
together, ACCESS.bus acts as a bus and links as many as 125 devices to a single port. In
other words, although it is not faster than the old-fashioned serial connection, it is more
versatile.
Moreover, ACCESS.bus is already part of the PC universe, adopted as a means of
monitoring the condition of Smart batteries and by the Video Electronics Standards
Association as part of the Display Data Channel interface. Although its use there is now
chiefly for identifying the monitor connected to your PC, the versatility of the interface
promises to allow your monitor to serve as the centerpiece for the desktop accessories of
your PC—the monitor can act as the hub to which your keyboard, mouse, and joystick
connect to your PC through DDC and ACCESS.bus.
Two words summarize the ACCESS.bus design—simple and slow. It uses a simple serial
interface with a well-defined protocol. Its aim is to connect one or more low speed
input/output devices to a host computer. Instead of hardware signals for controlling
transfers, ACCESS.bus uses messages sent through the data channel.
The name ACCESS.bus is derived from its purpose, a bus for connecting ACCESSory
devices to a host computer system. Although based on a physical interface developed by
Philips Electronics called I2C for Inter-Integrated Circuit, the actual ACCESS.bus
specification was developed by Digital Equipment Corporation. Offered to the computer
industry as an open standard, ACCESS.bus does not require fees or royalties to use
despite its proprietary origins.

Architecture

ACCESS.bus is a multiple master design. Devices that connect using ACCESS.bus
operate either as masters or slaves, and these definitions can change dynamically. The
difference is that a master controls the transfer while the slave only receives data. The
master sends out both the serial clock and serial data signals.
Your PC, as part of the ACCESS.bus system, serves a special role. It is the host. As such,
it sets up the ACCESS.bus system, assigning addresses to individual devices every time
you switch the system on or add a new device to a running system. All transmissions
across the ACCESS.bus are between the host and another device, although the host may
act as master or slave during these transfers.
The ACCESS.bus system has three layers. The Physical Layer controls the signals and
transfer protocol, including how packets are defined using the basic signals. The Base
Protocol outlines the essential content of the messages, including the message format—the
function of each bit within a message, including headers and error detection—and defines
the commands that can be relayed across the bus within messages. The Application
Protocol defines how information from different kinds of devices gets packaged into
messages. The current ACCESS.bus specification specifically defines three classes of
devices: keyboards, locators (essentially pointing devices such as mice), and text devices
(which can be either devices that send textual data, such as a bar code reader, or simply
the identification text sent by a monitor as part of DDC).

Signaling

The ACCESS.bus uses four signals. One, SDA, transmits data across the bus. A second
line, SCL, is a clock signal that defines when the data is valid and can be read. The bus
also provides a five-volt power connection capable of running low power devices. A
minimum of 50 milliamps must be supplied to the ACCESS.bus by the computer host.
All three signals share a common ground. Table 21.15 summarizes these signals, their
normal pin assignments, and wiring color code.

Table 21.15. ACCESS.bus Signals

Pin Function Mnemonic Color code


1 Ground GND Black
2 Serial data SDA Green
3 +5 VDC +5V Red
4 Serial clock SCL White

The data and clock signals operate at 100 kilohertz. Because the protocol adds an
acknowledge bit for every byte transferred, along with the overhead of packet headers
and error detection, the actual throughput of the ACCESS.bus is about 80 kbits/sec.
The ACCESS.bus data and clock signals are normally held at a logical high voltage at the
host computer either through a voltage source or by simply connecting the lines to a
positive voltage through a resistor. All devices monitor the state of these lines, detecting
high and low. Any device connected to the bus can pull this signal low, and it will appear
low to all devices regardless of which of them pulls it low.

Transfers

A start condition begins on the bus when a device pulls the data line
low while leaving the clock line high. When devices sense a start
condition, they regard the bus as busy and no other master will
attempt to send data down the bus. The master in control sends a stop
signal to indicate it has finished with the bus. The stop signal is a low
to high transition on the data line while the clock line is high.
After sending the start signal, the master clock forces the clock line
low. It then pulses the clock high to indicate valid data on the data
line. Each byte of data is sent as a series of eight bits, most
significant first, delineated by eight pulses of the clock signal.
Following the data bits, the master sends out an acknowledge bit,
essentially a pulse of the clock. The slave acknowledges receipt of
the byte by having pulled the data line low during the acknowledge
pulse of the clock. If the slave allows the data line to remain high
during the acknowledge clock pulse, the transfer is considered to
have been not acknowledged. Under this system, a device can
actively generate a "not acknowledged" signal. A device that is not
available or even not turned on automatically generates a "not
acknowledged" indication.
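The clocking and acknowledgment scheme can be sketched in a few lines. This model is my own illustration, not part of the specification: it lists the data-line levels a master would clock out for one byte, most significant bit first, followed by the acknowledge bit the slave supplies.

```python
def transfer_byte(byte, slave_present=True):
    """Model one ACCESS.bus byte transfer: eight data bits sent most
    significant bit first, then an acknowledge bit. A responding slave
    pulls the data line low (0) during the acknowledge clock pulse; an
    absent or unpowered slave leaves it high (1), which reads as
    'not acknowledged'."""
    levels = [(byte >> shift) & 1 for shift in range(7, -1, -1)]
    levels.append(0 if slave_present else 1)
    return levels

# 0xA5 is 10100101 in binary; a present slave acknowledges with a 0:
print(transfer_byte(0xA5))  # → [1, 0, 1, 0, 0, 1, 0, 1, 0]
```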

Arbitration

Arbitration on the ACCESS.bus is quite simple. Should two masters
attempt to send data at the same time, both can start transmitting. As
long as both send the same data down the bus both can continue
transmitting because the signals are the same and it simply makes no
difference where they come from. As soon as the signals differ—one
master puts a zero on the data line while the other puts a one
there—the master that puts the one on the bus loses the arbitration
and stops transmitting. The other master completes its transmission
normally.
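This loss-free arbitration can be modeled directly. The sketch below is a simplification (real devices monitor the bus bit by bit as they transmit): it treats the bus as wired-AND, so a 0 from any active master dominates, and a master that drives a 1 while reading a 0 drops out.

```python
def arbitrate(streams):
    """Return the index of the master that wins ACCESS.bus arbitration
    when all the given bit streams start transmitting at once."""
    active = set(range(len(streams)))
    for pos in range(len(streams[0])):
        # The bus reads 0 if any active master drives 0 (wired-AND).
        bus = min(streams[i][pos] for i in active)
        # A master driving 1 while the bus shows 0 loses and stops.
        active = {i for i in active if streams[i][pos] == bus}
        if len(active) == 1:
            break
    return min(active)

# The second master sends a 0 where the first sends a 1, so it wins:
print(arbitrate([[1, 0, 1], [1, 0, 0]]))  # → 1
```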

Messages

Packets on the ACCESS.bus are termed messages. Each message has
a three-byte header, from one to 127 bytes of data, and a one-byte
check sum. The first byte indicates the address of the destination
device. The second, the address of the source device. The third
includes a one-bit protocol flag and seven bits specifying the number
of data bytes in the message.
Both data exchanges and control functions get carried through the
ACCESS.bus system as messages. Most of the command messages
are used during system configuration. These include host messages to
force all devices on the bus to reset to their power-on condition, a
command to assign an address to a device, a command for each
device to identify itself, another to identify device capabilities, and
one that simply detects whether a given device is present on the bus.
Devices can respond with messages identifying themselves or their
capabilities. Devices can also send an attention message to let the
host know that a new device has been plugged into the bus.
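The message layout translates into a few lines of code. In this sketch the check sum is computed as the exclusive-OR of all preceding bytes, which is an assumption on my part, since the text does not spell out the algorithm; the header layout follows the description above.

```python
def build_message(dest, source, data, protocol_flag=0):
    """Assemble an ACCESS.bus message: a three-byte header (destination
    address, source address, one-bit protocol flag packed with a
    seven-bit data length), the data bytes, and a one-byte check sum.
    The XOR check sum is an assumption, not taken from the spec."""
    if not 1 <= len(data) <= 127:
        raise ValueError("data length must be 1 to 127 bytes")
    body = [dest, source, (protocol_flag << 7) | len(data)] + list(data)
    checksum = 0
    for byte in body:
        checksum ^= byte
    return bytes(body + [checksum])

# A two-byte message to the default device address from the host (50h):
msg = build_message(dest=0x6E, source=0x50, data=[0x01, 0x02])
print(msg.hex())  # → '6e500201023f'
```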

Addresses

Of the 128 addresses possible with seven-bit encoding, three are
reserved. The host computer is always assigned the address 50(Hex).
The default address of all devices when they power up is 6E(Hex).
The system management device always is assigned 10(Hex). The
remaining 125 addresses are available for ACCESS.bus devices.
Most are dynamically assigned by the host computer, although a few
are used for specific devices such as Smart Battery Specification
chargers and VESA-standard monitors using the Display Data
Channel identification system.
To assign addresses, the host computer relies on the ACCESS.bus
arbitration procedure. The host broadcasts a message requesting each
device to send out its unique identification by addressing the message
to the default address, 6E(Hex). All devices respond at once, each
sending a message containing its identification back to the host. As
the bits flow, the devices that have a one in their identifications at a
given position but detect one or more other devices putting a zero on
the bus drop out. They lose the arbitration. Because each
identification is unique, eventually one device is left and completes
sending the identification. The master then assigns an address to that
device, instructing the device to respond only to that address. The
master then broadcasts the command to request identification and
repeats the process until all devices have been assigned addresses.
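The assignment loop reduces to a short sketch once you note that, on a wired-AND bus with fixed-width identifications sent most significant bit first, the arbitration winner of each round is simply the numerically lowest remaining identification. The function name and the starting address are hypothetical; the real host skips the reserved addresses.

```python
def assign_addresses(device_ids, first_address=0x02):
    """Model the host's ACCESS.bus address assignment. Each round, all
    unassigned devices answer at the default address; arbitration
    leaves the lowest identification standing, and the host gives that
    device the next free address. Returns {identification: address}."""
    unassigned = set(device_ids)
    assignments = {}
    address = first_address
    while unassigned:
        winner = min(unassigned)  # the arbitration survivor
        assignments[winner] = address
        unassigned.remove(winner)
        address += 1
    return assignments

print(assign_addresses([0x3A2, 0x17F, 0x290]))
# → {383: 2, 656: 3, 930: 4}
```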

Connections

The physical embodiment of ACCESS.bus is a modular jack with four connections. The
basic ACCESS.bus jack is shown in Figure 21.11.


Figure 21.11 The ACCESS.bus female jack.

To allow for multiple devices on the ACCESS.bus, each device may have two
connectors, allowing one to serve as input and the other as output. Electrically, the two
jacks are identical—the same signals are present in the same locations on each. Devices
with only one connector or a permanently attached cable can attach as the last device on
the bus or through a T-connector that allows for further device additions.
The ACCESS.bus cable has four conductors, corresponding to the four signals on the
jacks. An overall shield prevents interference. The signals are not paired. The maximum
length of a cable is about 33 feet (ten meters), limited by the capacitance of the cable.
The overall reach of an ACCESS.bus connection can be extended with a repeater.
Cable connectors use special shielded modular-style connectors. The connectors, as
shown in Figure 21.12, are identical at both ends of the cable. The connections run
pin-for-pin straight through the cable.
Figure 21.12 The ACCESS.bus cable connector.

IrDA

The one thing you don't want with a portable PC is a cable to tether you down, yet most
of the time you must plug into one thing or another. Even a simple and routine chore like
downloading files from your notebook machine into your desktop PC gets tangled in
cable trouble. Not only do you have to plug both ends in, reaching behind your desktop
machine only a little more elegantly than fishing into a catch basin for a fallen
quarter—and, more likely than not, unplugging something else that you'll inevitably need
later only to discover the dangling cord—but you've got to tote that writhing cable along
with you wherever you go. There has to be a better way.
There is. You can link your PC to other systems and components with a light beam. On
the rear panel of many notebook PCs, you'll find a clear LED or a dark red window
through which your system can send and receive invisible infrared light beams. Although
originally introduced to allow you to link portable PCs to desktop machines, the same
technology can tie in peripherals like modems and printers, all without the hassle of
plugging and unplugging cables.

History

On June 28, 1993, a group of about 120 representatives from 50 computer-related
companies got together to take the first step in cutting the cord. Creating what has come
to be known as the Infrared Data Association or IrDA, they aimed at more than
making your PC more convenient to carry. They also saw a new versatility and, hardly
incidentally, a way to trim their own costs.


The idea behind the get-together was to create a standard for using infrared light to link
your PC to peripherals and other systems. The technology had already been long
established, not only in television remote controls but also in a number of notebook PCs
already in the market. Rather than build a new technology, the goal of the group was to
find common ground, a standard so that the products of all manufacturers could
communicate with the computer equivalent of sign language.
Hardly a year later, on June 30, 1994, the group approved its first standard. The original
specification, now known as IrDA Version 1.0, essentially gave the standard RS-232C
port an optical counterpart, one with the same data structure and, alas, speed limit. In
August 1995, IrDA took the next step and approved high speed extensions that pushed
the wireless data rate to four megabits per second.

Overview

More than a gimmicky cordless keyboard, IrDA holds an advantage that makes computer
manufacturers—particularly those developing low cost machines—eye it with interest. It
can cut several dollars from the cost of a complex system by eliminating some expensive
hardware, a connector or two, and a cable. Compared to the other wireless technology,
radio, infrared requires less space because it needs only a tiny LED instead of a larger
and more costly antenna. Moreover, infrared transmissions are not regulated by the FCC
as are radio transmissions. Nor do they cause interference to radios, televisions,
pacemakers, and airliners. The range of infrared is more limited than radio and restricted
to line-of-sight over a narrow angle. However, these weaknesses can become strengths
for those who are security conscious.
The original design formulated by IrDA was for a replacement for serial cables. The link
was envisioned as a half-duplex system. Although communications go in both directions,
only one end of the conversation sends out data at any given time.
To make the technology easy and inexpensive to implement with existing components, it
was based on the standard RS-232C port and its constituent components, such as UARTs.
The original IrDA standard called for asynchronous communication using the same data
frame as RS-232C and the most popular UART data rates from 2400 to 115,200 bits per
second.
To keep power needs low and prevent interference among multiple installations in a
single room, IrDA kept the range of the system low. The expected separation between
devices using IrDA signals to communicate was about one meter (three feet). Some links
are reliable to two meters.
Similarly, the IrDA system concentrates the infrared beam used to carry data because
diffusing the beam would require more power for a given range and be prone to causing
greater interference among competing units. The light-emitting diodes used in the IrDA
system consequently focus their beams into a cone with a spread of about 30 degrees.
After the initial serial port replacement design was in place, IrDA worked to make its
interface suitable for replacing parallel ports as well. That goal led to the creation of the
IrDA high speed standards for transmissions at data rates of 0.576, 1.152, and 4.0
megabits per second. The two higher speeds use a packet based synchronous system that
requires a special hardware based communication controller. This controller monitors
and controls the flow of information between the host computer's bus and
communications buffers.
A watershed of differences separates low speed and high speed IrDA systems. Although
IrDA designed the high speed standard to be backwardly compatible with old equipment,
making the higher speeds work requires special hardware. In other words, although high
speed IrDA devices can successfully communicate with lower speed units, such
communications are constrained to the speeds of the lower speed units. Low speed units
cannot operate at high speed without upgrading their hardware.
IrDA defines not only the hardware but also the data format used by its system. The
group has published six standards to cover these aspects of IrDA communications. The
hardware itself forms the physical layer. In addition, IrDA defines a link access protocol
termed IrLAP and a link management protocol called IrLMP that describe the data
formats used to negotiate and maintain communications. All IrDA ports must follow
these standards. In addition, IrDA has defined an optional transport protocol and
optional Plug-and-Play extensions to allow the smooth integration of the system into
modern PCs. The group's IrCOMM standard describes a standard way for infrared ports
to emulate conventional PC serial and parallel ports.

Physical Layer

The physical layer of the IrDA system encompasses the actual hardware and transmission
method. Compared to other serial technologies, the hardware you need for an IrDA port
would appear to be immaterial. After all, it is wireless. However, your PC still needs a
port capable of sending and receiving invisible light beams.
A growing number of notebook computers have built-in IrDA facilities. In fact, IrDA is
one of the more important features to look for in a new notebook computer.
Desktop machines are another matter. The IrDA wave hasn't yet struck them.
Fortunately, you can add an IrDA port as easily as plugging in a serial cable. For
example, the Adaptec AirPort plugs into a serial port and gives you an optical eye to send
and receive IrDA signals.
The AirPort is only the first generation of IrDA accessories for desktops. By its nature, it
is limited to standard serial port speeds. After all, it can't run faster than the signals
coming to it. IrDA ports that take advantage of the newer, higher speeds will require
direct connection to your PC's expansion bus (or a more advanced serial port like the
Universal Serial Bus, discussed in the following "Universal Serial Bus" section).
In any case, the optical signals themselves form what the IrDA called the Physical Layer
of the system. IrDA precisely defines the nature of these signals.


Infrared Light

Infrared light is invisible electromagnetic radiation that has a wavelength longer than that
of visible light. Where you can see light that ranges in wavelength from 400 nanometers
(deep violet) to 700 nanometers (dark red), infrared stretches from 700 nanometers to 1,000
or more. IrDA specifies that the infrared signal used by PCs for communication have a
wavelength between 850 and 900 nanometers.

Data Rates

All IrDA ports must be able to operate at one basic speed, 9600 bits per second. All other
speeds are optional.
The IrDA specification allows for all the usual speed increments used by conventional
serial ports from 2400 bps to 115,200 bps. All of these speeds use the default modulation
scheme, RZI. High speed IrDA Version 1.1 adds three additional speeds: 576 kbps,
1.152 Mbps, and 4.0 Mbps.
No matter the speed range implemented by a system or used for communications, IrDA
devices first establish communications at the mandatory 9600 bps speed using the link
access protocol. Once the two devices establish a common speed for communicating,
they switch to it and use it for the balance of their transmissions.

Pulse Width

The infrared cell of an IrDA transmitter sends out its data in pulses. Unlike the electronic
logic signals inside your PC, which are assumed to remain relatively constant throughout
a clock interval, the IrDA pulses last only a fraction of the basic clock period or bit cell.
The relatively wide spacing between pulses makes each pulse easier for the optical
receiver to distinguish.
At speeds up to and including 115,200 bits per second, each infrared pulse must be at
least 1.41 microseconds long. Each IrDA data pulse nominally lasts just 3/16 of the
length of a bit cell, although pulse widths a bit more than 10 percent greater remain
acceptable. For example, each bit cell of a 9600 bps signal would occupy 104.2
microseconds (that is, one second divided by 9600). A typical IrDA pulse at that data rate
would last 3/16 that period or 19.53 microseconds.
At higher speeds, the pulse minima are substantially shorter, ranging from 295.2
nanoseconds at 576 kbps to only 115 nanoseconds at 4.0 Mbps. At these higher speeds,
the nominal pulse width is one quarter of the character cell. For example, at 4.0 megabits
per second each pulse is only 125 nanoseconds long. Again, pulses about 10 percent
longer remain permissible. Table 21.16 summarizes the speeds and pulse lengths.


Table 21.16. IrDA Speeds and Modulation

Signaling Rate Modulation Pulse Duration


2.4 kb/s RZI 78.13 us
9.6 kb/s RZI 19.53 us
19.2 kb/s RZI 9.77 us
38.4 kb/s RZI 4.88 us
57.6 kb/s RZI 3.26 us
115.2 kb/s RZI 1.63 us
0.576 Mb/s RZI 434.0 ns
1.152 Mb/s RZI 217.0 ns
4.0 Mb/s 4PPM, single pulse 125 ns
4.0 Mb/s 4PPM, double pulse 250.0 ns
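The nominal figures in Table 21.16 follow directly from the 3/16 rule described above (and from the one-quarter rule at the top speeds). A quick sketch for the RZI rates:

```python
def rzi_pulse_width(bit_rate):
    """Nominal IrDA RZI pulse width in seconds: 3/16 of one bit cell."""
    return (3 / 16) / bit_rate

# Reproduce two rows of Table 21.16 (microseconds, rounded):
print(round(rzi_pulse_width(9600) * 1e6, 2))    # → 19.53
print(round(rzi_pulse_width(115200) * 1e6, 2))  # → 1.63
```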

Modulation

Depending on the speed at which a link operates, it may use one of two forms of
modulation. At speeds lower than 4.0 megabits per second, the system employs
Return-to-Zero Invert (RZI) modulation. Actually, RZI is just a fancy way of describing
a simple process. Each pulse represents a logical zero in the data stream. A logical one
gets no infrared pulse. Figure 21.13 shows the relation between the original digital code,
the equivalent electrical logic signal, and the corresponding IrDA pulses.
Figure 21.13 Relationship between original code and the IrDA pulses.
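In code, RZI amounts to a single inversion. A minimal sketch:

```python
def rzi_encode(bits):
    """RZI as described above: a logical 0 gets an infrared pulse (1);
    a logical 1 gets none (0)."""
    return [1 - bit for bit in bits]

print(rzi_encode([0, 1, 1, 0, 0]))  # → [1, 0, 0, 1, 1]
```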

At the 4.0 megabit per second data rate, the IrDA system shifts to pulse position
modulation. Because the IrDA system involves four discrete pulse positions, it is
abbreviated 4PPM.
Pulse position modulation uses the temporal position of a pulse within a clock period to
indicate a discrete value. In the case of IrDA's 4PPM, the length of one clock period is
termed the symbol duration and is divided into four equal segments termed chips. A
pulse can occur in one and only one of these chip segments, and the chip the pulse
appears in—its position inside the symbol duration or clock period—encodes its value.
For example, the four chips may be numbered 0, 1, 2, and 3. If the pulse appears in chip
2, it carries a value 2 (in binary, that's 10). Figure 21.14 shows the pulse position for the
four valid code values under IrDA at 4.0 Mbps.
Figure 21.14 Pulse positions for the four valid code values of 4PPM under IrDA.


The IrDA system uses group coding under 4PPM. Each discrete pulse in one of the four
possible positions indicates one of four two-bit patterns. Each pulse in the IrDA 4PPM
system thus encodes two bits of data and four clock periods suffice for the transfer of a
full byte of data. Table 21.17 lists the correspondence between pulse position and bit
patterns for IrDA's 4PPM.

Table 21.17. Data to Symbol Translation for IrDA 4PPM Code

Data Bit Pair 4PPM Symbol


00 1000
01 0100
10 0010
11 0001
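Table 21.17 translates into a small lookup. This sketch encodes a sequence of two-bit values; how a full byte is split into pairs (the specification sends bytes least significant bit first) is left out to keep the example to the table itself.

```python
# Chip patterns from Table 21.17, indexed by the two-bit data value.
FOUR_PPM = {0b00: (1, 0, 0, 0), 0b01: (0, 1, 0, 0),
            0b10: (0, 0, 1, 0), 0b11: (0, 0, 0, 1)}

def encode_4ppm(bit_pairs):
    """Encode two-bit values as a 4PPM chip stream: each clock period
    (symbol) holds four chips, with a pulse in exactly one of them."""
    chips = []
    for pair in bit_pairs:
        chips.extend(FOUR_PPM[pair])
    return chips

print(encode_4ppm([0b00, 0b11]))  # → [1, 0, 0, 0, 0, 0, 0, 1]
```

Because every symbol carries exactly one pulse, the 4.0 Mbps signal is self-clocking, which is why bit stuffing is unnecessary at that rate.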

IrDA requires data to be transmitted only in eight-bit format. In terms of conventional
serial port parameters, a data frame for IrDA comprises a start bit, eight data bits, no
parity bits, and a stop bit for a total of ten bits per character. Note, however, that zero
insertion may increase the length of a transmitted word beyond this minimum. Any
inserted zeroes are removed automatically by the receiver and do not enter the data
stream. No matter the form of modulation used by the IrDA system, all byte values are
transmitted the least significant bit first.

Bit Stuffing

Note that with RZI modulation a long sequence of logical ones will
suppress pulses for the entire duration of the sequence. For example,
a sequence of the byte value 0FF(Hex) will include no infrared
pulses. If this suppression extends for a long enough period during
synchronous communications, the clocks in the transmitter and
receiver may become unsynchronized. To prevent the loss of sync,
moderate speed IrDA systems use a technique called bit stuffing.
Moderate speed IrDA systems operate synchronously using long data
frames that are self-clocking. To avoid long periods without pulses,
these systems rely on bit stuffing. After a predetermined number of
logical ones appear in the data stream, the system automatically
inserts a logical zero. The zero adds a pulse that allows the
transmitter and receiver to synchronize their clocks. The receiver,
when detecting an extended period without pulses automatically
removes the extraneous stuffed pulse from the data stream.


Moderate speed systems stuff a zero at the conclusion of each string
of five logical ones. The system calculates its CRC error correction
data before the data gets stuffed. The receiver strips off the stuffed
bits before performing its CRC.
Bit stuffing is only required during synchronous transmissions.
Because low speed IrDA systems operate asynchronously, there is no
need for bit stuffing. Nor is bit stuffing necessary at IrDA's highest
speed because 4PPM modulation guarantees a pulse within each
clock period.
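The stuffing and stripping steps described above can be sketched as a symmetric pair of routines. The run length of five follows the text; everything else is illustrative.

```python
def stuff(bits):
    """Insert a 0 after every run of five consecutive 1s so that RZI
    never goes long without a pulse."""
    out, run = [], 0
    for bit in bits:
        out.append(bit)
        run = run + 1 if bit == 1 else 0
        if run == 5:
            out.append(0)  # the stuffed bit
            run = 0
    return out

def unstuff(bits):
    """Receiver side: discard the 0 that follows each run of five 1s."""
    out, run, skip_next = [], 0, False
    for bit in bits:
        if skip_next:
            skip_next = False  # drop the stuffed 0
            continue
        out.append(bit)
        run = run + 1 if bit == 1 else 0
        if run == 5:
            run = 0
            skip_next = True
    return out

# A byte of 0FF(Hex) would otherwise produce no pulses at all:
print(stuff([1] * 8))  # → [1, 1, 1, 1, 1, 0, 1, 1, 1]
```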

Format

The IrDA system doesn't deal with data at the bit or byte level but
instead arranges the data transmitted through it in the form of
packets, which the IrDA specification also terms frames. A single
frame can stretch from 5 to 2050 bytes (and sometimes more) in
length. As with other packetized systems, an IrDA frame includes
address information, data, and error correction, the last of which is
applied at the frame level. The format of the frame is rigidly defined
by the IrDA Link Access Protocol standard, discussed in the
following "Link Access Protocol" section.

Aborted Frames

Whenever a receiver detects a string of seven or more consecutive
logical ones—that is, an absence of optical pulses—it immediately
terminates the frame in progress and disregards the data it received
(which is classed as invalid because of the lack of error correction
data). The receiver then awaits the next valid frame, signified by a
start-of-frame flag, address field, and control field. Any frame that
ends in this summary manner is termed an aborted frame.


A transmitter may intentionally abort a frame or a frame may be
aborted because of an interruption in the infrared signal. Anything
that blocks the light path will stop infrared pulses from reaching the
receiver and, if long enough, abort the frame being transmitted.

Interference Suppression

High speed systems automatically mute lower speed systems that are
operating in the same environment to prevent interference. To stop
the lower speed link from transmitting, the high speed system sends
out a special Serial Infrared Interaction Pulse at intervals no longer
than half a second. The SIP is a pulse 1.6 microseconds long
followed by 7.1 microseconds of darkness, parameters exactly equal
to a packet start pulse. When the low speed system sees what it thinks
is a start pulse, it automatically starts looking for data at the lower
rates, suppressing its own transmission for half a second. Before it
has a chance to start sending its own data (if any), another SIP quiets
the low speed system for the next half second.

Link Access Protocol

To give the data transmitted through the optical link a common format, the Infrared
Data Association created its own protocols. Its Link Access Protocol describes the
composition of the data packets transmitted through the system. IrLAP is broadly based
on the asynchronous data communications standards used in RS-232C ports (no surprise
here), which are, in turn, adapted from the more general HDLC, High Level Data Link
Control. Note that although the overall operation of the IrDA link is the same at all
speeds, IrDA defines different protocols for its low and high speeds.

Primary and Secondary Stations

The foundation of IrLAP is distinguishing primary and secondary
stations. A primary station is the device that takes command and
controls the link transfers. Even when a given link involves more
than two devices, all transmissions must either go to or come from
the primary station. That is, all secondary devices send data only to
the primary station. The primary station, however, can target a single
secondary station or broadcast its data to all secondary stations.
In any IrDA connection, there can be only one primary station. The
role of primary station does not automatically fall on a given device.
For example, your PC is not automatically the primary station in the
sessions in which it becomes involved. At the beginning of any IrDA
link, the stations negotiate for the roles which they will play. After
one device becomes the primary station, however, it retains that role
until the link is ended.
An IrDA link begins when a device seeks to connect with another.
The first device may directly request a link to a known device or it
may sniff out and discover a device to which to make a connection.
For example, a notebook PC could continuously search for a desktop
mate, and when it finally comes into range with one, it begins to
negotiate a link.
To begin the link, the first device sends out a connection request at
the universal 9600 bps speed. This request includes the address of the
initiating device, the speed at which it wants to pursue the linkup, and
other parameters. The responding device assumes the role of the
secondary station and sends back identifying information, including
its own address and its speed capabilities. The two devices then
change to a mutually compatible speed and the initiating device, as
the primary station, takes command of the link. Data transfer begins.
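The negotiation described above can be sketched in a few lines. This is an illustrative model only; the function name, the role labels, and the speed lists are invented for the example and do not come from the IrDA specification.

```python
# Illustrative sketch only; names and structure are invented, not
# taken from the IrDA specification.
DISCOVERY_SPEED = 9600  # every connection request starts at 9600 bps

def negotiate_link(initiator_speeds, responder_speeds):
    """Model IrDA link setup: the initiator becomes the primary
    station, and both sides shift to the fastest speed they share."""
    common = set(initiator_speeds) & set(responder_speeds)
    if DISCOVERY_SPEED not in common:
        raise ValueError("both devices must support 9600 bps discovery")
    return ("initiator", "responder", max(common))

# A notebook that supports high speed links meets a printer limited
# to low speeds; they settle on 115,200 bps.
roles = negotiate_link([9600, 115200, 4000000], [9600, 115200])
```

Because the initiating device keeps the primary role until the link ends, the roles returned here would stay fixed for the life of the session.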

Frame Types

Three types of frames are used by IrLAP, following the model of
HDLC. Information frames transfer the actual data in a connection.
Unnumbered frames perform setup and management tasks. For
example, an unnumbered frame can establish a connection, remove a
connection, or search out or discover devices with which to make a
connection. Supervisory frames help control the flow of data between
stations. For example, a supervisory frame may be used to
acknowledge the receipt of a given frame or to warn that a given
station is busy.
Each frame is bracketed by two or more start fields at its beginning
and one or more stop fields at its conclusion. If a frame is not
bracketed by the proper flags, it will be ignored.
Next comes the address field, which identifies the source and
destination of the packet. The control field identifies the type of
packet, whether it contains data or control information. The data field
is optional because some packets used to control the link need no
data. The data field can be any length up to and including 2045 bytes
but must be a multiple of eight bits. The frame ends with a frame
check sequence used to detect errors and one or more stop flags.
Figure 21.15 shows the layout of a typical IrDA frame.
Figure 21.15 Constituents of a high speed IrDA data frame
(packet).
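As a rough sketch, the field order just described can be laid out in code. The helper name is invented, the flag values are those of the low speed frame in Table 21.18, and the two FCS bytes are left as zero placeholders rather than the real CRC described under Error Detection below.

```python
# Illustrative only: the helper name is invented, and the two FCS
# bytes are zero placeholders, not a real CRC computation.
START_FLAG = 0xC0   # low speed start flag from Table 21.18
STOP_FLAG = 0xC1    # low speed stop flag

def build_low_speed_frame(address, control, payload):
    """Lay out the frame fields in order: start flag, address,
    control, data, a 16-bit FCS placeholder, and the stop flag."""
    if len(payload) > 2045:
        raise ValueError("data field is limited to 2045 bytes")
    body = bytes([address, control]) + payload
    fcs = b"\x00\x00"   # placeholder for the CCITT CRC
    return bytes([START_FLAG]) + body + fcs + bytes([STOP_FLAG])

frame = build_low_speed_frame(0x81, 0x10, b"hello")
```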
The start flag is simply a specific bit pattern that indicates the
beginning of a field. It serves to get the attention of the receiver. The
exact pattern used for the start flag depends on the speed of the data.
At speeds of 115,200 bps and below, the start flag is the byte value
0C0(Hex). A frame may include more than one start flag. The
stations participating in a link negotiate the number to use. Table
21.18 summarizes the components in an IrDA synchronous data frame
at low data rates.

Table 21.18. IrDA Low Speed (2400 to 115,200 bps) Frame Components

Mnemonic Definition Length Value


STA Start flag 8 bits 0C0(Hex)
ADDR Address field 8 bits Varies
DATA Control field 8 bits Varies
DATA Information field 2045 bytes Varies
FCS Error correction field 16 bits CRC
STO Stop flag 8 bits 0C1(Hex)

At higher speeds, both starting and end flags are given the value
07E(Hex). Each frame must have at least two start flags. A
transmitter can insert additional start flags, which are ignored by the
receiver. If a transmitter outputs two IrDA frames back to back, the
data in the two frames will be separated by at least one stop flag and
two start flags. If the transmitter sends two frames that are not back to
back, the stop flag of the first frame and the first start flag of the
second frame must be separated by at least seven pulse-free clock
cycles. Table 21.19 summarizes the components in an IrDA
synchronous data frame.

Table 21.19. IrDA Synchronous Data Frame Components at 576Kbps and Higher

Mnemonic Definition Length Value


STA Start flag 8 bits 07E(Hex)
ADDR Address field 8 bits Varies
DATA Control field 8 bits Varies
DATA Information field <2046 bytes Varies
FCS Frame check sequence 16 bits CRC error detection
STO Stop flag 8 bits 07E(Hex)

Addressing

The address field identifies the secondary station participating in the
communications link. When the primary station transmits a packet,
the address identifies the secondary station to which the packet is
destined. When a secondary station transmits a packet, the address
identifies the transmitting secondary station itself. The first bit identifies the direction
of the transmission, 1 from the primary station, 0 from the secondary
station. The remaining seven bits are the actual address. Two of the
128 possible addresses are reserved. The address 0000000(Binary)
identifies the primary station. The address 1111111(Binary) identifies
a packet as global, which means that it is transmitted to all secondary
nodes participating in the link. The addressing scheme constrains the
number of IrDA devices in a given system to 127—the primary
station and 126 secondary stations.
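Packing and unpacking the address field is simple bit arithmetic. In this sketch the direction bit is assumed to sit in the most significant position of the byte; the text only says it is the "first bit," so treat that placement, along with the helper names, as illustrative.

```python
PRIMARY_ADDRESS = 0b0000000   # reserved: identifies the primary station
GLOBAL_ADDRESS = 0b1111111    # reserved: broadcast to all secondaries

def pack_address(address, from_primary):
    """Combine the direction flag and 7-bit address into one byte.
    Assumes (for illustration) the direction bit is the high bit."""
    if not 0 <= address <= 0x7F:
        raise ValueError("address must fit in seven bits")
    return (0x80 if from_primary else 0x00) | address

def unpack_address(field):
    """Return (from_primary, address) from an address field byte."""
    return bool(field & 0x80), field & 0x7F

pack_address(0b0101010, True)   # direction bit set by the primary
```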

Error Detection

The error correction field is two bytes long. Its value is computed
from the bits in the entire frame except for the starting and ending
flags. In other words, the error correction covers not only the
information field but also the address and control fields.
IrDA error correction is based on the cyclical redundancy check
adopted by the CCITT. It is similar but not identical to the error
detection used in the XMODEM file transfer protocol.
The value of the cyclical redundancy check is computed using the
following generator polynomial:

CRC(x) = x^16 + x^12 + x^5 + 1
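A bitwise software implementation of a CRC-16 with this CCITT polynomial looks like the sketch below. The 0xFFFF starting value and most-significant-bit-first ordering are common conventions, not details given here, so treat them as assumptions.

```python
def crc16_ccitt(data, crc=0xFFFF):
    """Bitwise CRC-16 using the CCITT generator polynomial
    x^16 + x^12 + x^5 + 1 (0x1021), most significant bit first."""
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

crc16_ccitt(b"123456789")  # 0x29B1 with these parameters
```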

Link Management Protocol

Before an IrDA system can exchange data, it must establish a link.
The IrDA created a standard procedure for creating and managing
links called the Link Management Protocol. This standard covers all
aspects of establishing and ending communication between IrDA
devices, including such aspects as link initialization, discovering the
addresses of available devices, negotiating the speed and format of
the link, disconnection, shutdown, and the resolution of device
conflicts. Many aspects of IrDALMP were derived from the more
general HDLC protocols. Because of the broadcast nature of the
infrared signal, however, the IrDA group developed its own system
for detecting the presence of devices and resolving conflicts among
them.
One of the biggest problems faced by IrDALMP is the dynamic
nature of wireless connections. You can bring new devices into the
range of a master at any time, or similarly, remove them, changing
the very nature of the connection. The master must be able to
determine the devices with which it can communicate and keep alert
for changes. It must be able not only to search out those devices
within its range (and operating in a compatible mode) but also new
devices that you suddenly introduce. Moreover, it must be able to
determine when each device involved in a communication session
ceases to participate—either by being turned off or by being moved
out of range. Where traditional hard wired serial connections had the
luxury of status signals that allowed easy monitoring of the status of
the connection and the devices linked through it, infrared devices
have nothing but pulses of light linking them.
Unlike the traditional point to point serial connection that involved
merely two devices exchanging data, the master in an IrDA link must
be able to manage and control the signals from multiple devices at
the same time. All must share the same optical channel. The master is
charged with doling out bandwidth to suit the connection and prevent
contention between the various devices.

Universal Serial Bus

Three drawbacks head any list of the most aggravating aspects of serial ports: low speed,
complex cabling, and the limited number of ports. The Universal Serial Bus breaks
through all three, combining a signaling rate of 12 Mbits/sec with a mistake proof wiring
system and an almost unlimited number of connections. The standard also supports lower
speed devices sharing the same wiring system along with high speed devices. The low
speed signaling rate is 1.5 Mbits/sec.
First introduced in 1996, the USB is more than a successor to the RS-232C serial port. It
provides the basic mechanism for connecting most, if not all, peripherals to your PC.
Everything from your keyboard to cash register drawer can connect simply and quickly
with a USB plug.

Background

Designed for those who would rather compute than worry about hardware, the premise
underlying USB is the substitution of software intelligence for cabling confusion. USB
handles all the issues involved in linking multiple devices with different capabilities and
data rates with a layer cake of software. Along the way, it introduces its own new
technology and terminology.
USB divides serial hardware into two classes, hubs and functions. A USB hub provides
jacks into which you can plug functions. A USB function is a device that actually does
something. USB's designers imagined that a function may be anything that you can
connect to your computer including keyboards, mice, modems, printers, plotters,
scanners, or whatever.
Rather than a simple point to point port, the USB acts as an actual bus that allows you to
connect multiple peripherals to one jack on your PC with all of the linked devices sharing
exactly the same signals. Information passes across the bus in the form of packets, and all
functions receive all packets. Your PC accesses individual functions by adding a specific
address to the packets, and only the function with the correct address acts on the packets
addressed to it.
The physical manifestation of USB is a port, a jack that's part of a hub. Each physical
USB port connects to a single device, and a hub offers multiple jacks to let you plug in
several devices. You can plug one hub into another to provide several additional jacks
and ports to connect more devices. The USB design envisions a hierarchical system with
hubs connected to hubs connected to hubs. In that each hub allows multiple connections,
the reach of the USB system branches out like a tree—or a tree's roots. Figure 21.16
gives a conceptual view of the USB wiring system.
Figure 21.16 USB hierarchical interconnection scheme.

Your PC acts as the base hub for a USB system and is termed the host. The circuitry in
your PC that controls this integral hub and the rest of the USB system is called the bus
controller. Each USB system has one and only one bus controller.
The USB system doesn't care which device you plug into which hub or how many levels
down the hub hierarchy you put a particular device. All the system requires is that you
properly plug everything together following its simple rule—each device must plug into
a hub—and the USB software sorts everything out. This software, making up the USB
protocol, is the most complex part of the design. In comparison, the actual hardware is
simple—but the hardware won't work without the protocol.
The wiring hardware imposes no limit on the number of devices and functions that you
can connect in a USB system. You can plug hubs into hubs into hubs fanning out into as
many ports as you like. You do face limits, however. The protocol limits the number of
functions on one bus to 127 because of addressing limits. Seven bits are allowed for
encoding function addresses, and one of the potential 128 is reserved.
In addition, the wiring limits the distance at which you can place functions from hubs.
The maximum length of a USB cable is five meters. Because hubs can regenerate signals,
however, your USB system can stretch out for greater distances by making multiple hops
through hubs.
As part of the Plug-and-Play process, the USB controller goes on a device hunt when you
start your PC. It interrogates each device to find out what it is. It then builds a map that
locates each device by hub and port number. These become part of the packet address.
When the USB driver sends data out the port, it routes it to the proper device by this hub
and port address.
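The map the controller builds can be pictured as a walk over a nested tree of hubs. Everything in this sketch, from the dictionary layout to the function name, is invented for illustration; it is not how any real USB stack represents its topology.

```python
def build_device_map(hub, path=()):
    """Walk nested hubs depth first, recording each function under
    the chain of port numbers used to reach it."""
    devices = {}
    for port, attached in hub.items():
        if isinstance(attached, dict):   # the port leads to another hub
            devices.update(build_device_map(attached, path + (port,)))
        else:                            # the port leads to a function
            devices[path + (port,)] = attached
    return devices

# Root hub: a keyboard on port 1, a second hub on port 2
tree = {1: "keyboard", 2: {1: "mouse", 2: "printer"}}
build_device_map(tree)
# {(1,): 'keyboard', (2, 1): 'mouse', (2, 2): 'printer'}
```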
Wiring with USB is, by design, trouble free. Because all devices receive all signals, you
face no issues of routing. Because each port has a single jack that accepts one and only
one connector—and a connector of a specific matching type—you don't have to worry
about adapters, crossover cables or the other minutiae required to make old style serial
connections work.
On the other hand, USB requires specific software support. Any device with a USB
connector has the necessary firmware to handle USB built in. But your PC also requires
software to make the USB system work. Your PC's operating system must know how to
send the appropriate signals to its USB ports. In addition, each function must have a
matching software driver. The function driver creates the commands or packages the data
for its associated device. An overall USB driver acts as the delivery service, providing
the channel—called, in USB terminology, a pipe—for routing the data to the various
functions. Consequently, each USB device you add to your PC requires software installation
along with plugging in the hardware.

Connectors

The USB system involves four different styles of connectors, two chassis-mounted jacks
and two plugs at the ends of cables. Each jack and plug comes in two varieties, A and B.
Hubs have A jacks. These are the primary outward manifestation of the USB port. The
matching A plug attaches to the cable that leads to the USB device. In the purest form of
USB, this cable is permanently affixed to the device, and you need worry about no other
plugs or jacks.
The USB standard allows for a second, different style of plug and jack meant only to be
used for inputs to USB devices. If a USB device (other than a hub) requires a connector
so that, as a convenience, you can remove the cable, it uses a USB "B" jack. The mating
plug is a "B" plug.
The motivation behind this multiplicity of connectors is to prevent rather than cause
confusion. All USB cables have an A plug at one end and a B plug at the other. One end
must attach to a hub and the other to a device. You cannot inadvertently plug things
together incorrectly.
Because all A jacks are outputs and all B jacks are inputs, only one form of detachable
USB cable exists—one with an A plug at one end and a B plug at the other. No crossover
cables or adapters are needed for any USB wiring scheme.

Cable

The physical USB wiring uses a special four-wire cable. Two conductors in the cable
transfer the data as a differential digital signal. That is, the voltage on the two conductors
is of equal magnitude and opposite polarity so that when subtracted from one another
(finding the difference) the result cancels out any noise that ordinarily would add equally
to the signal on each line. In addition, the USB cable includes a power signal, nominally
five volts DC, and a ground return. The power signal allows you to supply power for
external serial devices through the USB cable.
The two data wires are twisted together as a pair. The power cables may or may not be.
To achieve its high data rate, the USB specification requires that certain physical
characteristics of the cable be carefully controlled. Even so, the maximum length
permitted any USB cable is five meters.
One limit on cable length is the inevitable voltage drop suffered by the power signal. All
wires offer some resistance to electrical flow, and the resistance is proportional to the
wire gauge. Hence, lower wire gauges (thicker wires) have lower resistance. Longer
cables require lower wire gauges. At maximum length, the USB specification requires
20-gauge wire, which is one step (two gauge numbers) thinner than ordinary lamp cord.
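A back-of-the-envelope calculation shows why the gauge matters. The resistance figure for 20-gauge copper and the load current below are assumptions chosen for illustration, not values taken from the USB specification.

```python
R_PER_METER_20AWG = 0.033   # ohms per meter, 20-gauge copper (approximate)
CABLE_LENGTH_M = 5.0        # the USB maximum cable length
LOAD_CURRENT_A = 0.5        # an assumed device load

# Current flows out on the power wire and back on ground, so the
# resistive path is twice the cable length.
loop_resistance = 2 * CABLE_LENGTH_M * R_PER_METER_20AWG
voltage_drop = loop_resistance * LOAD_CURRENT_A   # about 0.17 V
```

Even a fraction of a volt matters against a nominal five-volt supply, which is why longer cables demand thicker (lower-gauge) wire.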
The individual wires in the USB cable are color coded. The data signals form a
green-white pair, the +Data signal on green. The positive five-volt signal rides on the red
wire. The ground wire is black. Table 21.20 sums up this color code.

Table 21.20. USB Cable Color Code

Signal Color

+ Data Green
- Data White
VCC Red
Ground Black

Data Coding

To help ensure the integrity of the high speed data signal, the USB system uses a
combination of NRZI data encoding and bit stuffing. NRZI coding (non-return to
zero, inverted) uses a change in signal during a given period to indicate a logical zero and no
change in a period to indicate a logical one. Figure 21.17 illustrates the NRZI translation
scheme. Note that a zero in the code stream triggers each transition in the resulting NRZI
code. A continuous stream of logical zeros results in an on-off pattern of voltages on the
signal wires, essentially a square wave.
Figure 21.17 NRZI coding scheme used by USB.

The NRZI signal is useful because it is self-clocking. That is, it allows the receiving
system to regenerate the clock directly from the signal. For example, the square wave of
a stream of zeros acts as the clock signal. The receiver adjusts its timing to fit this
interval. It keeps timing even when a logical one in the signal results in no transition.
When a new transition occurs, the timer resets itself, making whatever small adjustment
might be necessary to compensate for timing differences at the sending and receiving
end.
Ordinarily, a continuous stream of logical ones would result in a constant voltage, an
extended stream without transitions. If the length of such a series of logical ones were
long enough, the sending and receiving clocks in the system might wander and lose their
synchronicity. Bit stuffing helps keep the connection in sync.
The bit stuffing technique used by the USB system injects a zero after every continuous
stream of six logical ones. Consequently, a transition is guaranteed to occur at least every
seven clock cycles. When the receiver detects a lack of transitions for six cycles, then
receives the transition on the seventh, it can reset its timer. It also discards the stuffed bit
and counts the transition (or lack of it) occurring at the next clock cycle to be the next
data bit.
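Both techniques can be sketched in a few lines; the function names are invented. The encoder follows the convention described above: a zero toggles the line level, a one holds it, and a zero is stuffed after every six consecutive ones.

```python
def stuff_bits(bits):
    """Insert a 0 after every run of six consecutive 1s."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b else 0
        if run == 6:        # six ones in a row: force a transition
            out.append(0)
            run = 0
    return out

def nrzi_encode(bits, level=1):
    """A 0 toggles the line level; a 1 leaves it unchanged."""
    out = []
    for b in bits:
        if b == 0:
            level ^= 1
        out.append(level)
    return out

stuff_bits([1, 1, 1, 1, 1, 1, 1, 1])   # -> [1, 1, 1, 1, 1, 1, 0, 1, 1]
nrzi_encode([0, 0, 0, 0])              # -> [0, 1, 0, 1], a square wave
```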

Protocol

As with all more recent interface introductions, the USB design uses
a packet based protocol.
All message exchanges require the exchange of three packets. The
exchange begins with the host sending out a token packet. The token
packet bears the address of the device meant to participate in the
exchange as well as control information that describes the nature of
the exchange. A data packet holds the actual information that is to be
exchanged. Depending on the type of transfer, either the host or the
device sends out the data packet. Despite the name, the data packet
may contain no information. The exchange ends with a handshake
packet which acknowledges the receipt of the data or other successful
completion of the exchange. A fourth type of packet, called Special,
handles additional functions.
All packets must start with two components, a Sync Field and a
Packet Identification. Each of these components is one byte long.
The Sync Field is a series of bits that produces a dense string of pulse
transitions using the NRZI encoding scheme required by the USB
standard. These pulses serve as a consistent burst of clock pulses that
allow all the devices connected to the USB bus to reset their timing
and synchronize themselves to the host. As encoded, the Sync Field
appears as three on/off pulses followed by a marker two pulses wide.
The raw data before encoding takes the value 00000001(binary),
although the data is meaningless because it is never decoded.
The Packet Identifier byte includes four bits to define the nature of
the packet itself and another four bits as check bits that confirm the
accuracy of the first four. Rather than a simple repetition, the check
bits take the form of a one's complement of the actual identification
bits (every zero is translated into a one). The four bits provides a
code that allows the definition of 16 different kinds of packet.
USB uses the 16 values in a two step hierarchy. The two more
significant bits specify one of the four types of packet. The two less
significant bits subdivide the packet category. Table 21.21 lists the
PIDs of the four basic USB packet types.

Table 21.21. USB Packet Identifications

Bit pattern Packet type


XX00XX11 Special packet
XX01XX10 Token packet
XX10XX01 Handshake packet
XX11XX00 Data packet

Token Packets

Only the USB host sends out Token Packets. Each Token Packet
takes up four bytes, which are divided into five functional parts.
Figure 21.18 graphically shows the layout of a Token Packet.
Figure 21.18 Functional parts of a USB Token Packet.

The first two bytes take the standard form of all USB packets. The first
byte is a Sync Field that marks the beginning of the token's bit
stream. The second byte is the Packet Identification.
The PID byte defines four types of token packets. These include an
Out packet that carries data from the host to a device; an In packet
that carries data from the device to the host; a Setup packet that
targets a specific Endpoint; and a Start of Frame packet that helps
synchronize the system. Table 21.22 matches the PID code with the
Token Packet type.

Table 21.22. Token Packet Types

Packet identification byte Token packet type


00011110 Out
01011010 Start of Frame (SOF)
10010110 In
11010010 Setup

For In, Out, and Setup Token Packets, the seven bits following the
PID encode the Address Field, which identifies the device that the
host wants to command or send data to. Four additional bits supply
an Endpoint number. An Endpoint is an individually addressable
section of a USB function. Endpoints give hardware designers the
flexibility to divide a single device into logically separate units. For
example, a keyboard with a built-in trackball might have one overall
address to act as a single USB function. Assigning individual
Endpoints to the keyboard section and the trackball section allows
device designers to individually address each part of the overall
keyboard.
Start of Frame packets differ from other USB packets in that they are
broadcast. All devices receive and decode, but do not acknowledge,
them. The 11 bits that would otherwise make up the Address and
Endpoint fields indicate a Frame Number. The host sends out one
Start of Frame packet each millisecond, as the name suggests,
defining the beginning of the USB's one-millisecond frame. The host
assigns frame numbers incrementally, starting with zero and adding
one for each subsequent frame. When it reaches the maximum 11-bit
value (2047 in decimal), it starts over from zero. Figure 21.19 is a
graphical representation of the Start of Frame type of Token packet.
Figure 21.19 Constituents of a USB Start of Frame form of
Token Packet.
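The frame counter is ordinary modular arithmetic over an 11-bit value; the names below are invented for illustration.

```python
FRAME_MODULUS = 1 << 11      # an 11-bit counter holds 0 through 2047

def next_frame_number(current):
    """Advance the Start of Frame number, wrapping after the maximum."""
    return (current + 1) % FRAME_MODULUS

next_frame_number(2046)   # -> 2047
next_frame_number(2047)   # -> 0, the counter starts over
```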

All Token Packets end with five bits of cyclic redundancy check
information. The CRC data provides an integrity check of the
Address Field and Endpoint. It does not cover the PID, which has its
own, built-in error correction.

Data Packets

The actual information transferred through the USB system takes the form of Data
Packets.
As with all USB packets, a Data Packet begins with a one-byte Sync Field followed by
the Packet Identification. The actual data follows as a sequence of zero to 1,023 bytes.
A two-byte cyclic redundancy check verifies the accuracy of only the data field. The PID
field relies on its own redundancy check mechanism. Figure 21.20 shows a graphical
representation of a USB Data Packet.
Figure 21.20 Constituents of a USB Data Packet.

The PID field ostensibly defines two types of Data Packet, Data 0 and Data 1.
Functionally, however, the two data types (and hence the PID) form an additional error
checking system between the data transmitter and receiver. The transmitter toggles
between Data 0 and Data 1 to indicate that it has received a valid acknowledgment of
receipt of the preceding data packet. In other words, it confirms the confirmation. Table
21.23 summarizes these USB data packet types.

Table 21.23. USB Data Packet Types

Packet identification Data packet type


00111100 Data 0
10110100 Data 1

For example, the transmitter sends out a Data Packet of the type Data 0. After the
receiver successfully decodes the packet, it sends an acknowledgment signal back to the
transmitter in the form of a Handshake Packet. If the transmitter successfully receives
and decodes the acknowledgment, the next Data Packet it sends will be Data 1. From this
change in Data Packet type, the receiver knows that its acknowledgment was properly
received.
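The toggle can be modeled as a tiny state machine; the class and method names are invented for illustration.

```python
class DataToggle:
    """Track which data PID the transmitter should use next: it flips
    between Data 0 and Data 1 only when an acknowledgment arrives."""
    def __init__(self):
        self.current = 0              # transfers start with Data 0

    def packet_type(self):
        return f"Data {self.current}"

    def ack_received(self):
        self.current ^= 1             # confirm the confirmation

toggle = DataToggle()
toggle.packet_type()     # 'Data 0'
toggle.ack_received()    # receiver's ACK decoded successfully
toggle.packet_type()     # 'Data 1'
```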

Handshake Packets

Handshake packets are two bytes long, comprising a Sync Field and a Packet
Identification. Figure 21.21 graphically illustrates a USB Handshake Packet, and Table
21.24 lists the three forms of this packet type.
Figure 21.21 Constituents of a USB Handshake Packet.

Table 21.24. USB Handshake Packet Types

Packet Identification Byte Handshake type


00101101 ACK
10100101 NAK
11100001 STALL

IEEE-1394

Compared to the performance you've come to expect from your PC, serial ports are slow
at best. They are constrained not only by the pragmatic aspects of their design—UART
chips and the clocks that control them—but also by the medium through which the
signals travel. Single-ended signals and cables of dubious quality are the communications
equivalent of unleashing a go-kart with a lawnmower engine on an Autobahn that's
unfettered by speed limits. The chances your data will get where it's going unscathed are
slim and, even if successful, the trip will be slow. As long as the medium remains the
same, improvements in serial signal speed will be chancy, if possible at all.
The best way to accelerate the serial trip is to redefine both the medium and the method.
The IEEE embarked on exactly that goal and is working on a proposal called P1394 to be
the serial port of the future. The goal of the effort is to give computer and peripheral
makers a low cost but high speed interface for linking devices and systems. Rather than
replacing the RS-232C port alone, proponents see P1394 as a substitute for all the odd
and varied ports on the back of your PC. P1394 has the potential for replacing not only
your serial port but the parallel port, SCSI port, even the video connector.
Cross the slowest port in your PC with the most cantankerous one, and what do you get?
Not an engineer's nightmare but a vision of the future called P1394. Although this
up-and-coming standard combines the serial technology of today's laggardly RS-232C port with
the intelligence of the SCSI protocol, it takes the best instead of the worst of each and makes an
interconnection system with the speed of local bus, the wiring ease of MIDI, and
economy in keeping with today's plunging PC prices. Add to the list of mandatory
equipment on your next PC another port.
More than the next generation of serial communications, P1394 will likely be the
connection that brings mass market simplicity to multimedia. One connection could do it
all, linking as many as 16 peripherals. Advocates of P1394 imagine it linking PCs not
just to traditional devices like CD ROM drives, hard disks, modems, printers, and
scanners but also to video cameras and stereo systems. Easier to plug together than
ordinary stereo components, P1394 eliminates the wiring confusion that scares
technophobes from trying and using computer technology. If you can manage plugging
your PC into a wall outlet, you can connect the most elaborate multimedia system. In
short, P1394 is key to pushing computing technology into home and everyday
entertainment.

Background

Development of the new standard began nearly a decade ago in September 1986 when
the IEEE (Institute of Electrical and Electronic Engineers) assigned a study group the
task of clearing the murk of a thickening morass of serial standards. Hardly four months
later (in January 1987), the group had already outlined basic concepts underlying P1394,
some of which still survive in today's standard—including low cost, a simplified wiring
scheme, and arbitrated signals supporting multiple devices. Getting the devil out of such
details as operating speed and the technologies needed to achieve it took years because
needs, visions, and visionaries changed. Consensus on the major elements of the
standard—including the connector and the bus management—came only in 1993, and the
standard reached final form in late 1994. Achieving its worthy goals required four
breakthrough new technologies, including a novel encoding system that made high speed
safe for serial data, a self-configuration system that moved the headaches of setup from
users to the port circuitry, a time based arbitration system that guarantees all of the many
devices linked to a single port have fair and guaranteed access, and a means of delivering
time critical data like video without affecting the transfer of serial data.
P1394 truly offers something for everyone—today's relatively skilled PC user,
tomorrow's casual home user, and even machine makers.
For manufacturers, the cost of P1394 may prove most alluring. P1394 has the potential of
reducing the cost of external connections to PCs both in terms of money spent and panel
usage. Both of these savings originate in the design of the P1394 connector. P1394
envisions a single 6-wire plastic connector replacing most if not all of the standard port
connectors on a PC. As with today's SCSI, one P1394 port on a PC allows you to connect
multiple devices, up to 16 in current form.
The connector itself will cost manufacturers a few cents while the connectors alone for
an RS-232C port can cost several dollars (and that can be a significant portion of the
price of a peripheral or even PC). Moreover, a standard serial connector—that 25-pin
D-shell connector—by itself is much too large for today's miniaturized systems. It can't
fit a PCMCIA card by any stretch of the imagination or plastic work.
As less skilled people start tinkering with PCs and try linking them into multimedia
systems, the simplified setup and wiring of P1394 should earn their praises. Today's high
performance interface choice, SCSI, is about as friendly as a hungry bear awakened from
hibernation. Although backed by strong technology, SCSI is a confusion of connectors,
cables, terminators, and ID numbers. Wise folks find the best strategy is to stay out of the
way. Where cabling a SCSI system means following rules more obscure than those of a
fantasy adventure game, P1394 has exactly one wiring requirement: all P1394 devices in
a system must connect without loops. There are no terminations to worry about, no
different cable types like straight through and crossover, no cable length concerns, no
identification numbers, and no connector genders to change. You simply plug one end of
a P1394 cable into a jack on the back of the two devices you want to link. Most P1394
cables will have two or three jacks, so you can wire together elaborate webs. As long as
no more than one circuit runs between any two P1394 devices, the system will work. It's
even easier than a stereo system because there are no worries about input and output
jacks.
Down deeper, however, P1394 is more complex. Instead of simply needing a UART,
P1394 is a complex communications system with its own transfer protocol requiring new,
application specific integrated circuits. Although those initially will be
expensive—an estimated $15 per P1394 device—throughout the history of PCs the cost of
standard silicon circuits has plummeted while the cost of connectors has continued to climb.
Moreover, the current cost isn't entirely out of line with today's serial technology where a
16550AFN UART alone can cost $5 to $10. Just as the electrically more complex AT

http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/wrh21.htm (62 de 67) [23/06/2000 06:51:45 p.m.]


Winn L. Rosch Hardware Bible, Electronic Edition, Chapter 21

interface replaced older interfaces for hard disks, P1394 stands to step in place of the
serial port.

Performance

For today's PC users, speed is probably the most important aspect of P1394. Serial
connections exchange simplicity for the constraints of moving one bit at a time through a
narrow data channel. For example, the standard RS-232C port on your PC tops out at
115,200 bits per second. Although the top rate is set by the timebase design of the
original IBM PC of 1981, electrical issues like interference and wire capacitance
constrain RS-232C transmissions to substantially slower data rates on longer connections.
In contrast, P1394 starts with a raw data rate of 100 megabits per second and some
devices will be able to shift bits at speeds up to four times that rate. Although P1394
imposes substantial software overhead because of its packet based nature and the needs
for addressing and arbitration, it still offers enough bandwidth to carry three
simultaneous video signals or 167 CD-quality audio signals at its base 100 Mbits/sec rate.
In current form, it allows hard disks to match the 10MByte/sec transfer rate of Fast
SCSI-2 connections.

Timing

Reliability is a problem with any high speed circuit, and the designers of P1394 faced a
formidable challenge. More than any comedian, P1394 depends on precise timing.
The meaning of each bit in a transmission depends on when the bit gets registered. At the
high data rates of P1394, signal jitter becomes a major problem. Each bit must be defined
to fit precisely into a frame 10 billionths of a second long; the slightest timing slip can
cause an error. In designing P1394, engineers tried elaborate coding schemes to eliminate
jitter problems. In the end, they created an entirely new signaling system.
To minimize noise, data connections in P1394 use differential signals. Ordinary RS-232C
serial ports use single-ended signals. One wire carries the data, and the ground
connection serves as the return path. Differential signaling uses two wires that carry the
same signal but of different polarities. Receiving equipment subtracts the signal on one
wire from that on the other to find the data as the difference between the two signals. The
benefit of this scheme is that any noise gets picked up by the wires equally. When the
receiving equipment subtracts the signals on the two wires, the noise gets
eliminated—the equal noise signals subtracted from each other equal zero.
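The cancellation arithmetic is easy to check directly. The sketch below is an illustration only (the values and the function name are invented, not part of any P1394 specification): identical noise added to both wires of a pair vanishes when the receiver takes the difference.

```python
def differential_receive(wire_a, wire_b):
    """Recover the signal as the difference between the two wires."""
    return [a - b for a, b in zip(wire_a, wire_b)]

# The transmitter drives the two wires with opposite polarities.
signal = [1, -1, -1, 1, 1]
wire_a = [s for s in signal]    # +signal
wire_b = [-s for s in signal]   # -signal

# Interference couples into both wires equally...
noise = [2, -1, 1, 3, -2]
noisy_a = [a + n for a, n in zip(wire_a, noise)]
noisy_b = [b + n for b, n in zip(wire_b, noise)]

# ...and cancels exactly in the difference.
recovered = differential_receive(noisy_a, noisy_b)
print(recovered)  # [2, -2, -2, 2, 2]: twice the signal, noise-free
```

Note that the difference comes out at twice the original amplitude; a real receiver simply scales for that.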
P1394 goes further, using two differential wire pairs. One pair carries the actual data; the
second pair, called the strobe lines, complements the state of the data pair so that one and
only one of the pairs changes polarity every clock cycle. For example, if the data line
carries two sequential bits of the same value, the strobe line reverses polarity to mark the
transition between them. If a sequence of two bits changes the polarity of the data lines (a
one followed by a zero or zero followed by a one), the strobe line does not change


polarity. Summing the data and strobe lines together exactly reconstructs the clock signal
of the sending system, allowing the sending and receiving devices to precisely lock up.
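The data-strobe rule, one and only one line pair changing state per clock cycle, can be modeled in a few lines of Python. This is a sketch of the logic only (real P1394 encoding happens in silicon on differential pairs), and the function names are invented for illustration.

```python
def ds_encode(bits):
    """Data-strobe encode: the strobe toggles whenever the data does not,
    so exactly one of the two lines changes state every bit period."""
    data, strobe = [], []
    s, prev = 0, 0
    for b in bits:
        if b == prev:   # data line holds steady, so strobe must flip
            s ^= 1
        data.append(b)
        strobe.append(s)
        prev = b
    return data, strobe

def ds_recover_clock(data, strobe):
    """XOR of data and strobe rebuilds a clock that toggles each period."""
    return [d ^ s for d, s in zip(data, strobe)]

bits = [1, 1, 0, 1, 0, 0, 0, 1]
data, strobe = ds_encode(bits)
clock = ds_recover_clock(data, strobe)
print(clock)  # [1, 0, 1, 0, 1, 0, 1, 0]: the sender's clock, reconstructed
```

Because only one line changes per period, the transitions on the two pairs are never simultaneous, which is what gives the scheme its tolerance to skew and jitter.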

Setup

As with existing SCSI systems, P1394 allows you to connect multiple devices and uses
an addressing system so the signals sent through a common channel are recognized only
by the proper target device. The linked devices can independently communicate among
themselves without the intervention of your PC.
In order to communicate, however, devices must be able to identify one another.
Providing proper ID has been one of the recurring problems with SCSI, requiring you to
set switches on every SCSI device and then indicate your choices when configuring
software. P1394 eliminates such concerns with its own automated configuration process.
Whenever a new device gets plugged into a P1394 system (or when the whole system
gets turned on), it starts its automatic configuration process. By signaling through the
various connections, each device determines how it fits into the system, either as a root
node, a branch, or a leaf. Each P1394 system has only one root, which is the foundation
around which the rest of the system organizes itself. The root node also sends out a
special clock signal. P1394 devices with only one connection are leaves; those that link to
multiple devices are branches. Once the connection hierarchy is set up, the P1394 devices
determine their own ID numbers from their location in the hierarchy and send identifying
information (ID and device type) to their host.
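The outcome of this configuration pass can be modeled as a simple classification over a loop-free set of connections. The sketch below is a deliberate simplification with invented names: it picks the busiest node as root, whereas the real standard settles the root through the timing of the signaling exchange.

```python
def classify_nodes(links):
    """Classify each device in a loop-free P1394 web as root, branch, or leaf.

    `links` is a list of (a, b) cable connections. Simplified model of the
    outcome only: the busiest node is declared root (ties broken by name).
    """
    degree = {}
    for a, b in links:
        degree[a] = degree.get(a, 0) + 1
        degree[b] = degree.get(b, 0) + 1
    root = max(degree, key=lambda n: (degree[n], n))
    roles = {}
    for node, d in degree.items():
        if node == root:
            roles[node] = "root"
        elif d == 1:
            roles[node] = "leaf"    # a single connection
        else:
            roles[node] = "branch"  # links multiple devices
    return roles

links = [("pc", "camera"), ("pc", "disk"), ("disk", "printer")]
print(classify_nodes(links))
# {'pc': 'root', 'camera': 'leaf', 'disk': 'branch', 'printer': 'leaf'}
```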

Arbitration

P1394 also relies on timing for its arbitration system. As with a SCSI or network
connection, P1394 transfers data in packets, a block of data preceded by a header that
specifies where the data goes and its priority. In the basic cable based P1394 system,
each device sharing a connection gets a chance to send one packet in an arbitration period
that's called a fairness interval. The various devices take turns until all have had a chance
to use the bus. After each packet gets sent, a brief time called the sub-action gap elapses,
after which another device can send its packet. If no device starts to transmit when the
sub-action gap ends, all devices wait a bit longer, stretching the time to an arbitration
reset gap. After that time elapses, a new fairness interval begins, and all devices get to
send one more packet. The cycle continues.
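A round-robin model captures the fairness guarantee: within one fairness interval, no device sends a second packet until every device with traffic has sent one. The sketch abstracts the gap timing away entirely, and the device names are invented.

```python
def fairness_intervals(devices, pending, intervals):
    """Simulate asynchronous arbitration over several fairness intervals.

    `pending` maps each device to the number of packets it has waiting.
    Each device may send at most one packet per fairness interval.
    Returns a log of (interval, device) send events.
    """
    log = []
    for interval in range(intervals):
        for dev in devices:
            if pending.get(dev, 0) > 0:
                log.append((interval, dev))
                pending[dev] -= 1
    return log

log = fairness_intervals(["scanner", "disk", "camera"],
                         {"scanner": 1, "disk": 3, "camera": 2}, 3)
print(log)
# [(0, 'scanner'), (0, 'disk'), (0, 'camera'), (1, 'disk'), (1, 'camera'), (2, 'disk')]
```

Even though the disk has the most traffic, it can never starve the scanner or camera; it simply keeps sending one packet per interval after the others run dry.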
To handle devices that need a constant stream of data for real time display, such as video
or audio signals, P1394 uses a special isochronous mode. Every 125 microseconds, one
device in the P1394 system that needs isochronous data sends out a special timing packet that
signals isochronous devices that they can transmit. Each takes a turn in order of its
priority, leaving a brief isochronous gap delay between their packets. When the
isochronous gap delay stretches out to the sub-action gap length, then the devices using
ordinary asynchronous transfers take over until the end of the 125 microsecond cycle


when the next isochronous period begins.


The scheme guarantees that video and audio gear can move its data in real time with a
minimum of buffer memory. (Audio devices require only a byte of buffer; video may
need as many as six bytes!) The 125-microsecond period matches the sampling rate used
by digital telephone systems to help P1394 mesh with ISDN (Integrated Service Digital
Network) telephone systems.
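The numbers above are easy to verify. A 125 microsecond cycle repeats 8,000 times per second, exactly the sampling rate of digital telephony, and at the base 100 Mbits/sec rate it also fixes how many raw bits fit into each cycle:

```python
CYCLE_MICROSECONDS = 125     # isochronous cycle period, from the standard
BIT_RATE = 100_000_000       # base P1394 rate in bits per second

cycles_per_second = 1_000_000 // CYCLE_MICROSECONDS  # cycles in one second
bits_per_cycle = BIT_RATE // cycles_per_second       # raw bits per cycle

print(cycles_per_second)  # 8000, the digital telephony sampling rate
print(bits_per_cycle)     # 12500 raw bits available per cycle
```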
The key to the power of P1394 is speed. The initial design of P1394 sets up a 100
megabit-per-second data transfer protocol. In addition, the standard defines two higher
speed data rates for future upgrades, 200 and 400 megabits per second.
From a manufacturer's standpoint, size and cost are as important as speed, and the
connector savings described earlier come at a price. Far from simply needing a UART,
P1394 is a complex communications system with its own transfer protocol, and it requires
complex circuits to work. As the interface becomes popular, however, the cost of this
circuitry will quickly plunge below the cost of the more sophisticated connectors used by
other interfaces.
The future imagined by P1394 advocates is much like the early Macintosh computers that
depended on a single SCSI port for all system expansion. P1394 beats old SCSI both in
connector simplicity and cost. But it also joins SCSI—P1394 is one of the hardware
channels that are incorporated into the proposed SCSI-3 standard.
P1394 also gives you greater wiring
flexibility than the current SCSI standards. To link multiple peripherals, you can daisy
chain them or split the cable into branches. In effect, the P1394 connection behaves like a
small (but fast) network.

Architecture


P1394 is a true architecture that is built from several layers, each of which defines one
aspect of the serial connection. These layers include a bus management layer, a
transaction layer, a link layer, and a physical layer.

Bus Management Layer

This part of the P1394 standard defines the basic control functions as
well as the control and status registers required by connected devices
to operate their ports. This layer handles channel assignments,
arbitration, mastering, and errors.

Transaction Layer

The protocol that governs transactions across the P1394 connection is called the
transaction layer. That is, this layer mediates the read and
write operations. To match modern PCs, the transaction layer is
optimized to work with 32-bit double-words, although the standard
also allows block operations of variable length. The operation of this
layer was derived from the IEEE 1212 parallel data-transfer standard.

Link Layer

The logical control of data across the P1394 wire is the link layer, which makes transfers
for the transaction layer. Communications
are half-duplex transfers, but the link layer provides a confirmation
of the reception of data. Double-word transfers are favored, but the
link layer also permits exchanges in variable length blocks.

Physical Layer


The actual physical connections made by P1394 are governed by the physical layer. This
part of the standard includes both a protocol and
the medium itself. The physical protocol sublayer controls access to
the connection with full arbitration. The physical medium sublayer
comprises the cable and connectors.

Cabling

In initial form, the physical part of P1394 will be copper wires. The standard cable is a
complex weaving of six conductors. Data will travel down two shielded twisted pairs.
Two wires will carry power at 8 to 40 volts with sufficient current to power a number of
peripherals. Another shield will cover the entire collection of conductors. A small, 6-pin
connector will link PCs and peripherals to this cable.
The P1394 wiring scheme depends on each of the connected devices to relay signals to
the others. Pulling the plug to one device could potentially knock down the entire
connection system. To avoid such difficulties and dependencies, P1394 uses its power
connections to keep in operation the interface circuitry in otherwise inactive devices.
These power lines could also supply enough current to run entire devices. No device may
draw more than three watts from the P1394 bus, although a single device may supply up
to 40 watts. The P1394 circuitry itself in each interface requires only about 2 milliwatts.
The P1394 wiring standard allows for up to 16 hops of 4.5 meters (about 15 feet) each.
As with current communications ports, the standard allows you to connect and disconnect
peripherals without switching off power to them. You can daisy chain P1394 devices or
branch the cable between them. When you make changes, the network of connected
devices will automatically reconfigure itself to reflect the alterations.
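Multiplying the hop limits out gives the maximum end-to-end cable run of a P1394 web, a quick check worth doing before planning an installation:

```python
HOP_METERS = 4.5   # maximum cable length per hop
MAX_HOPS = 16      # hops allowed between the two farthest devices

max_span_meters = HOP_METERS * MAX_HOPS
print(max_span_meters)  # 72.0 meters between the two farthest devices
```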



Winn L. Rosch Hardware Bible, Electronic Edition, Chapter 22

Chapter 22: Telecommunications


The greatest power and strength in using a computer comes not from sitting at a solitary
keyboard but from connecting with other machines and networks. You can exchange files,
programs, images, and information across telephone lines. But because most of today's
local telephone lines are analog, and computers are stolidly digital, you need a modem
to match them up. Modem speeds and variety (including fax modems) are greater than
ever before.

■ Analog Services
■ Background
■ Modulation
■ Signal Characteristics
■ Connection Enhancing Technologies
■ Combining Voice and Data
■ Modem Hardware
■ PC Interface
■ Data Preparation
■ Modulator
■ User Interface
■ Line Interface
■ Packaging
■ Indicators
■ Analog Modem Standards
■ Bell Standards
■ MNP Standards
■ CCITT/ITU Standards
■ Not-Yet Standards
■ Other Kinds of "Modems"


■ Digital Services
■ Telephone Services
■ T1
■ HDSL
■ SDSL
■ ADSL
■ VDSL
■ SDS 56
■ ISDN
■ Cable Services
■ Control
■ Operating System Level Control
■ Structure
■ Modem Identification
■ Device Level Control
■ Dual Modes
■ Hayes Command Set
■ Extended Modem Command Sets
■ S-Registers
■ Response Codes
■ Operation
■ Setup Strings
■ Dialing and Answering
■ Handshaking
■ Caller Identification
■ Protocols
■ The Internet
■ History
■ Structure
■ Operation
■ Performance Limits
■ Security
■ Fax
■ Background
■ Analog Standards


■ Group 3
■ Resolution
■ Data Rates
■ Compression
■ Binary File Transfer
■ Group 4
■ Interface Classes
■ Installation
■ Physical Matters
■ Switches
■ Connections
■ Software


The PC achieves its peak power when it reaches out and touches the universe of other
computers. Using the telecommunications power of your PC, you can link up with the
World Wide Web, download the latest driver software, play games internationally, find
the most obscure facts, or simply fax your lunch order to the corner deli. The PC can do
all this and more with a single connection to the outside world. The prime target is, of
course, the Internet.
For years that connection was the same one you used to reach for making dates, ordering
pizza, and raving to the town council. By adding a modem to your PC, you could adapt it
to the telephone line and use your PC for exactly the same things. The old phone line was
a strange world to your PC, full of primitive and dangerous signals. It was the world of
analog, and it required special equipment like your modem to make a connection.
The new generation of telecommunications extends your PC with the signals it knows
best, digital. A number of these new digital services promise your PC faster and more
reliable distant communications. Everyone wants to get in the act—and money—of
linking your PC digitally. Not just your telephone company but the cable company and
even satellite operators want you to plug into (and pay for) their services.


Analog Services

A modem is standard equipment in every new PC, and with good reason. You need a
modem to go online and link with the World Wide Web. Consider a PC without a
modem, and you might as well buy a vacuum cleaner.
Today's modem is a world apart from those of only a few years ago. It's faster and, uh,
faster. A modem that doesn't run at a speed of at least 14,400 bits per second is simply
too slow to use on the Internet and a drag for any other kind of telecommunications.
Modern modems start there and reach from 28,800 bps to 33,600 bps, and even 56,000
bps. In fact, the next modem you buy may even exceed those speeds and—despite the
name—probably won't even be a modem.
With all of these changes in modem technology, one question remains: Why do you need
this extra piece of hardware to make your PC telecommunicate? After all, both your PC
and telephone talk with the same stuff (electricity) and move messages back and forth
with ordinary electrical signals. Were not the giant corporations specializing in
computers and telephones such avowed rivals, you might suspect that they were in
cahoots to foist such a contrived accessory on the computer marketplace.
Step back and look at what a modem does, however, and you will gain new respect for
the device. In many ways, the modern modem is a miracle worker. A true time machine,
a modem bridges between today's digital computer technology and the analog telephone
interface that was devised more than a century ago. Although digital telephone and
telecommunication services are inching their way into homes and offices, most telephone
connections still rely on analog signals for at least part of their journey. Even these
digital services require something like a modem to make a connection. Although most
people call such digital linking devices modems, too, strictly speaking they are terminal
adapters. Nevertheless, no matter whether you have old fashioned analog or modern,
fast, and expensive digital telephone service, you need a box between your PC and the
phone line. And most folks will call it a modem.
More than converting between digital and analog signals, however, the best of today's
modems can squeeze more than a dozen data bits through a cable where only one should
fit. A fax modem can even cram a full page image through a thin 22-gauge telephone
wire in about 15 seconds.
The classic modem is a necessary bridge between digital and analog signals. The modern
modem usually does much more than connect. Most are boxes chock full of convenience
features that can make using them fast, simple, and automatic. The best of today's
modems not only make and monitor the connection but even improve it. They dial the
phone for you, remembering the number you want, and they will try again and again. A
modem will listen in until it's sure of good contact, and only then let you transmit across
the telephone line. Some even have built-in circuits to detect and correct the inevitable
errors that creep into your electrical conversations.

Background


A true modem is a necessary evil in today's world of telecommunications because we still
suffer from a telephone system that labors under devices that were standard even before
electronics were invented, at a time when solid state digital circuitry lay undreamed of,
almost a hundred years off. The first words out of Dr. Bell's speaking telegraph were
analog electrical signals, the same juice that flows through the receiver of your own
telephone. Although strictly speaking, digital communications are older than the
invention of the telephone—the conventional telegraph pre-dates the telephone by nearly
30 years (Samuel F. B. Morse wondered what God had wrought in 1844)—current digital
technology is a comparatively recent phenomenon.
The telephone system was designed only to handle analog signals because that's all that
speaking into a microphone creates. Over the years, the telephone system has evolved
into an elaborate international network capable of handling millions of these analog
signals simultaneously and switching them from one telephone set to another anywhere
in the world. In the last couple of decades, telephone companies have shifted nearly all of
their circuits to digital. Most central office circuitry is digital. Nearly every long distance
call is sent between cities and countries digitally. In fact, the only analog part of most
telephone connections is the local loop, the wires that reach out from the telephone
exchange to your home or office (and likewise extend from a distant exchange to the
telephone of whomever you're calling).
The chief reason any analog circuitry remains in the telephone system is that there are
hundreds of millions of plain old telephone sets (which the technologically astute call
simply POTS) dangling on the ends of telephone wires across the country. Even the wires
between you and your telephone exchange are probably capable of adroitly dealing with
digital signals. If you're willing to pay a premium—both for new equipment and for extra
monthly charges—you can go all digital with ISDN (see the following "ISDN" section).
As long as you stick with your POTS, however, you'll still need a modem for
communications.
Even with digital service, you still need a box or modem-like expansion board between
the digital circuitry of your PC and the digital phone line. The terminal adapter helps
match the signal standards between the different types of hardware. More importantly, at
least from the perspective of your telecommunications service provider (once quaintly
known as the "telephone company"), the terminal adapter protects the telephone network
from strange and spurious signals originating in your PC. It ensures that the digital phone
network gets exactly the kind of signals it requires.

Modulation

The technology that allows modems to send digital signals through the analog telephone
system is called modulation. In fact, the very name "modem" is derived from this term
and the reciprocal circuit (the demodulator) that's used in reception. Modem is a
foreshortening of the words MOdulator/DEModulator.
Modulation, and hence modems, are necessary because analog telephone connections do


not allow digital, direct current signals to pass. The modulation process creates analog
signals that contain all the digital information of the computer original but which can be
transmitted through the voice only channels of the telephone system.
More generally, modulation is the process of adapting a signal to suit a communications
medium by using the otherwise incompatible signal to modify another signal that's
compatible with the medium. In the case of a modem, modulation uses the digital data
signal to modify an analog signal so that the combination of the two can travel through
the analog telephone system.
The modulation process begins with a constant signal called the carrier, which carries or
bears the load of the digital (modulating) information. In most modulation systems, the
carrier is a steady state signal of constant amplitude (strength) and frequency and
coherent phase, the electrical equivalent of a pure tone. Because it is unchanging, the
carrier itself is devoid of content and information. It's the equivalent of one unchanging
digital bit. The carrier is simply a package, like a milk carton. Although you need both
the carrier and milk carton to move their contents from one place to another, the package
doesn't affect the contents and is essentially irrelevant to the product. You throw it away
once you've got the product where you want it.
The signal that's electrically mixed with the carrier to modify some aspect of it is given
the same name as the process, modulation. The carrier wave and modulation are not
simply mixed together. If they were, the modulation would be simply filtered away by
the incompatible communications medium. Instead, the modulation alters the carrier
wave in some way so that it retains its essential characteristics and remains compatible
with its medium.
A modem modulates an analog signal that can travel through the telephone system with
the digital direct current signals produced by your PC. The modem also demodulates
incoming signals, stripping off the analog component and passing the digital information
to your PC. The resulting modulated carrier wave remains an analog signal
that—usually—easily whisks through the telephone system.
The modulation process has two requirements. The first is continued compatibility with
the communications medium so that the signal is still useful. Second, you must somehow
be able to separate the modulation from the carrier so that you can recover the original
signal.
Demodulation is the signal recovery process, the complement of modulation. During
demodulation, the carrier is stripped away and the encoded information is returned to its
original form. Although logically just the complement of modulation, demodulation
usually involves entirely different circuits and operating principles, which adds to the
complexity of the modem.
The modulation/demodulation process brings several benefits, more than enough to
justify the complication of combining signals. Because electronic circuits can be tuned to
accept the frequency of one carrier wave and reject others, multiple modulated signals
can be sent through a single communications medium. This principle underlies all radio
communication and broadcasting. In addition, modulation allows digital,
direct-current-based information to be transmitted through a medium, like the telephone
system, that otherwise could not carry direct current signals.


Just as AM and FM radio stations use different modulation methods to achieve the same
end, modem designers can select from several modulation technologies to encode digital
data in a form compatible with analog transmission systems. The different forms of
modulation are distinguished by the characteristics of the carrier wave that are changed in
response to changes in data to encode information. The three primary characteristics of a
carrier wave that designers might elect to vary for modulation are its amplitude, its
frequency, and its phase. Modems take advantage of all of these kinds of modulation.

Carrier Wave Modulation

The easiest way to understand the technique of modulation is to look at its simplest form,
carrier wave modulation, which is often abbreviated as CW, particularly by radio
broadcasters.
We've previously noted that a digital signal, when stripped to its essential quality, is
nothing more than a series of bits of information that can be coded in any of a variety
of forms. We use 0's and 1's to express digital values on paper. In digital circuits, the
same bits take the form of the high or low direct current voltages, the same ones that are
incompatible with the telephone system. However, we can just as easily convert digital
bits into the presence or absence of a signal that can travel through the telephone system
(or be broadcast as a radio wave). The compatible signal is, of course, the carrier wave.
By switching the carrier wave off and on, we can encode digital zeroes and ones with it.
The resulting CW signal looks like an interrupted burst of round sine waves, as shown in
Figure 22.1.
Figure 22.1 Carrier wave modulation.

The figure shows the most straightforward way to visualize the conversion between
digital and analog, assigning one full wave of the carrier to represent a digital one and the
absence of a wave a zero. In most practical simple carrier wave systems, however, each
bit occupies the space of several waves. The system codes the digital information not as
pulses per se but as time. A bit lasts a given period regardless of the number of cycles
occurring within that period, making the frequency of the carrier wave irrelevant to the
information content.
Although CW modulation has its shortcomings, particularly in wire-based
communications, it retains a practical application in radio transmission. It is used in the
simplest radio transmission methods, typically for sending messages in Morse code.
One of the biggest drawbacks of carrier wave modulation is ambiguity. Any interruption
in the signal might be misinterpreted as a digital zero. In telephone systems, the problem
is particularly pernicious. Equipment has no way of discerning whether a long gap
between bursts of carrier is actually significant data or a break in or the end of the message.
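The bit-as-time coding described above amounts to on-off keying, which can be sketched in a few lines. This is an illustrative model only, with eight samples standing in for one carrier cycle per bit; it is not drawn from any telephone or radio standard.

```python
import math

def cw_modulate(bits, samples_per_bit=8):
    """On-off keying: carrier present for a 1, silence for a 0.

    Each bit occupies a fixed time span (one carrier cycle here), so the
    information is coded as time rather than as individual pulses."""
    wave = []
    for b in bits:
        for i in range(samples_per_bit):
            phase = 2 * math.pi * i / samples_per_bit
            wave.append(math.sin(phase) if b else 0.0)
    return wave

wave = cw_modulate([1, 0, 1])
print(len(wave))  # 24 samples: one carrier cycle per bit
```

The middle eight samples are all zero, which illustrates the ambiguity problem: a receiver cannot tell that silence from a dropped connection.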

Frequency Shift Keying


A more reliable way of signaling digital information is to use separate and distinct
frequencies for each digital state. For example, a digital 1 would cause the carrier wave
to change to a higher frequency, much as it causes a higher voltage. A digital 0 would
shift the signal to a lower frequency. Two different frequencies and the shifts between
them could then encode binary data for transmission across an analog system. This form
of modulation is called frequency shift keying or FSK because information is encoded in
(think of it being "keyed to") the shifting of frequency.
The "keying" part of the name is actually left over from the days of the telegraph when
this form of modulation was used for transmitting Morse code. The frequency shift came
with the banging of the telegraph key.
In practical FSK systems, the two shifting frequencies are the modulation that is applied
to a separate (and usually much higher) carrier wave. When no modulation is present,
only the carrier wave at its fundamental frequency appears. With modulation, the overall
signal jumps between two different frequencies. Figure 22.2 shows what an FSK
modulation looks like electrically.
Figure 22.2 Frequency shift keying.

Frequency shift keying is used in the most rudimentary of popular modems, the once
ubiquitous 300 bits-per-second modem that operated under the Bell 103 standard. This
standard, which is used by most modems when operating at a 300 bits-per-second data
rate, incorporates two separate FSK systems, each with its own carrier frequency, one at
1170 and another at 2125 Hertz. Space modulation (logical zeroes) shifts the carrier
down by 100 Hertz, and mark modulation pushes the carrier frequency up by an equal
amount. Because the FSK modulation technique is relatively simple and the two
frequencies so distinct even through bad connections, the 300 bps speed of the Bell 103
standard is the most reliable, if slowest, common modem standard.
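As a rough sketch of how an FSK transmitter forms its signal, the following Python fragment holds one of two tones for a full bit period. The mark and space tones chosen here resemble the originate channel of Bell 103, but the function name, sample rate, and exact values are illustrative assumptions, not part of the standard.

```python
import math

def fsk_modulate(bits, mark_hz=1270.0, space_hz=1070.0,
                 baud=300, sample_rate=8000):
    """Hold one of two tones for a full bit period; the frequency
    of the tone, not its strength, carries each bit."""
    samples_per_bit = int(sample_rate / baud)
    out = []
    phase = 0.0
    for bit in bits:
        # Mark (logical one) uses the higher tone, space the lower.
        step = 2 * math.pi * (mark_hz if bit else space_hz) / sample_rate
        for _ in range(samples_per_bit):
            out.append(math.sin(phase))
            phase += step  # continuous phase avoids clicks between bits
    return out

wave = fsk_modulate([1, 0, 1])
```

Keeping the phase continuous across bit boundaries, as the sketch does, keeps the signal's bandwidth narrow; an abrupt phase jump would splatter energy outside the channel.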

Amplitude Modulation

Carrier wave modulation is actually a special case of amplitude modulation. Amplitude is
the strength of the signal or the loudness of a tone carried through a transmission
medium, such as the telephone wire. Varying the strength of the carrier in response to
modulation to transmit information is called amplitude modulation. Instead of simply
being switched on and off as with carrier wave modulation, in amplitude modulation the
carrier tone gets louder or softer in response to the modulating signal. Figure 22.3 shows
what an amplitude modulated signal looks like electrically.
Figure 22.3 Amplitude modulation, signal strength (vertical) versus time
(horizontal).

Amplitude modulation is most commonly used by radio and television broadcasters to
transmit analog signals. It carries talk and music to your AM radio and the picture to your
television set. Engineers also exploit amplitude modulation for digital transmissions.

They can, for example, assign one amplitude to indicate a logical 1 and another
amplitude to indicate a logical 0.
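A minimal sketch of that idea in Python: the carrier frequency stays fixed while each bit selects one of two amplitudes. The function name and every numeric parameter below are hypothetical choices for illustration only.

```python
import math

def ask_modulate(bits, carrier_hz=1800.0, baud=600, sample_rate=9600,
                 amp_one=1.0, amp_zero=0.4):
    """Amplitude-shift keying sketch: the carrier frequency never
    changes; only its strength differs between a one and a zero."""
    samples_per_bit = sample_rate // baud
    out = []
    for i, bit in enumerate(bits):
        amp = amp_one if bit else amp_zero
        for n in range(samples_per_bit):
            t = (i * samples_per_bit + n) / sample_rate
            out.append(amp * math.sin(2 * math.pi * carrier_hz * t))
    return out

wave = ask_modulate([1, 0])
```

A receiver for this scheme only has to measure the envelope of the signal and compare it against a threshold between the two amplitudes, which is exactly why line noise, which also alters amplitude, is so damaging to it.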
Pure amplitude modulation has one big weakness. The loudness of a telephone signal is
the characteristic most likely to vary during transmission. As the signal travels, the
resistance and impedance of the telephone wire tends to reduce the signal's strength; the
telephone company's amplifiers attempt to keep the signal at a constant level. Moreover,
noise on the telephone line mimics amplitude modulation and might be confused with
data. Consequently, pure amplitude modulation is not ordinarily used by modems.
Amplitude modulation technology is used by modems as part of a complex modulation
system.
The one exception to the rule that modems do not use amplitude modulation is the 56
Kbps modem system developed by Rockwell International, the highest speed data
connections available through local analog telephone lines. The Rockwell system
achieves its performance edge by acting unlike a modem. It treats the telephone
connection not as point to point analog service as it would have been in Alexander Bell's
day but as a digital communication system in which one part—the local loop between
you and the telephone company central office—is but a poorly performing section.
Rockwell considers what the device sends down your local phone line not as modulation
but instead as a special form of digital coding, one that encodes the digital signal as 256
levels of a carrier wave matched to the voltage levels used by the analog to digital
converter at the telephone company central office.

Frequency Modulation

Frequency shift keying is a special case of the more general technology called frequency
modulation. In the classic frequency modulation system used by FM radio, variations in
the loudness of sound modulate a carrier wave by changing its frequency. When music
on an FM station gets louder, for example, the station's carrier shifts farther in
frequency. In effect, FM translates changes in modulation amplitude into changes
in carrier frequency. The modulation does not alter the level of the carrier wave. As a
result, an FM signal electrically looks like a train of wide and narrow waves of constant
height, as shown in Figure 22.4.
Figure 22.4 Frequency modulation, signal strength (vertical) versus time
(horizontal).

In a pure FM system, the strength or amplitude of the signal is irrelevant. This
characteristic makes FM immune to noise. Interference and noise signals add into the
desired signal and alter its strength. FM demodulators ignore these amplitude changes.
That's why lightning storms and motors don't interfere with FM radios. This same
immunity from noise and variations in amplitude makes frequency modulation a more
reliable, if more complex, transmission method for modems.

Phase Modulation

Another variation on the theme of frequency modulation is phase modulation. This
technology works by altering the phase relationship between the waves in the signal. An
unmodulated carrier is a train of waves in a constant phase relationship. That is, the waves
follow one after another precisely in step. The peaks and troughs of the train of waves
flow at constant intervals. If one wave were delayed for exactly one wavelength, it would
fit exactly atop the next one.
By delaying the peak of one wave so that it occurs later than it should, you can break the
constant phase relationship between the waves in the train without altering the overall
amplitude or frequency of the wave train. In other words, you shift the onset of a subsequent
wave compared to those that precede it. At the same time, you create a detectable state
change called a phase shift. You can then code digital bits as the presence or absence of a
phase shift.
Signals are said to be in phase when the peaks and troughs of one align exactly with
another. When signals are 180 degrees out of phase, the peaks of one signal align with
the troughs of the other. Phase modulation can shift the phase of the carrier wave by
180 degrees, moving it from exactly in phase to exactly out of phase with a reference
carrier, as shown in Figure 22.5. Note that the two waveforms shown start in phase and
then, after a phase shift, end up being 180 degrees out of phase.
Figure 22.5 Phase modulation showing 180-degree phase shift.

If you examine the shapes of waves that result from a phase shift, you'll see that phase
modulation is a special case of FM. Delaying a wave lengthens the time between its peak
and that of the preceding wave. As a result, the frequency of the signal shifts downward
during the change, although over the long term the frequency of the signal appears to
remain constant.
One particular type of phase modulation called quadrature modulation alters the phase of
the signal solely in increments of 90 degrees. That is, the shift between waves occurs at
phase angles of 0, 90, 180, or 270 degrees. The "quad" in the name of this modulation
method refers to the four possible phase delays.
Quadrature modulation allows the encoding of data more complex than simple binary
bits. Its four possible shifts can specify four different values. For example, a 90-degree
shift might specify the value 1, a 180-degree shift the value 2, and a 270-degree shift the
value 3. The potential of encoding four states makes systems using quadrature
modulation prime candidates for group coding, discussed in the following "Group
Coding" section. Although quadrature modulation is useful in modem communications, it
is most often used in combination with other modulation techniques.
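The four-way encoding can be sketched in a few lines of Python. The particular assignment of bit pairs to phase shifts below is a hypothetical example for illustration; real standards define their own mappings.

```python
# Hypothetical mapping of two-bit groups to quadrature phase shifts
# (in degrees). Each shift is one baud carrying two bits.
DIBIT_TO_SHIFT = {(0, 0): 0, (0, 1): 90, (1, 1): 180, (1, 0): 270}

def phase_shifts(bits):
    """Pair up the incoming bits and translate each pair into one
    of the four quadrature phase shifts, one shift per baud."""
    pairs = zip(bits[::2], bits[1::2])
    return [DIBIT_TO_SHIFT[p] for p in pairs]

phase_shifts([0, 1, 1, 0])  # two bauds carry four bits: [90, 270]
```

Note that the chosen mapping changes only one bit between adjacent shifts (a Gray-code-like ordering), so a receiver that mistakes a shift for its nearest neighbor corrupts only a single bit.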

Complex Modulation

The various modulation techniques are not mutually exclusive. Modem makers achieve
higher data rates by combining two or more techniques to create complex modulation
schemes.
In frequency shift keying modems, one bit of data causes one corresponding change of
frequency in the carrier wave. Every change of frequency or state carries exactly one bit
of information. The unit of measurement used to describe the number of state changes
taking place in the carrier wave in one second is the baud. The term "baud" was named
after J.M.E. Baudot, a French telegraphy expert. His surname, Baudot, also names a
five-bit digital code used in Teletype systems.
In the particular case of the FSK modulation, one change of state per second (one baud)
conveys exactly one bit of information per second, and one baud is equal to a transfer of
digital information at a one-bit-per-second rate. Depending on the number of states used
in the communication system, however, a single transition (one baud) can convey less
than or more than one bit of information. For example, several different frequencies of
tones (that is, several different changes in carrier frequency) might be used to code
information. The changing from one frequency to another would take place at one baud,
yet because of the different possible changes that could be made, more than one bit of
information could be coded by that transition. Hence, strictly speaking, one baud is not
the same as one bit per second, although the terms are often incorrectly used
interchangeably. Unfortunately for M. Baudot, the distinction between "baud" and "bits
per second" has become so irremediably muddled in common use that communications
engineers now often use the term symbol instead of baud when speaking of state changes.
This 300-bits-per-second rate using the simple FSK technique requires a bandwidth of
600 Hertz. The two 300 baud carriers, which together require a 1200 Hz bandwidth (two times
600 Hz), and a wide guard band fit comfortably within the 2700 Hz limit.
Using the same simple modulation technique and exploiting more of the 2700 Hertz
bandwidth of the typical telephone line, modem speeds can be doubled to 600 baud.
Beyond that rate, however, lies the immovable bandwidth roadblock.

Group Coding

A data communications rate of 300 or even 600 bits per second is slow, slower than most
folks can read text flowing across the screen. Were long distance communications limited
to a 600 bits-per-second rate, the only people who would be happy would be the
shareholders of the various telephone companies. Information could, at best, crawl
slowly across the continent.
By combining several modulation techniques, modern modems can achieve much higher
data rates despite the constraints of ordinary dial-up telephone lines. Instead of merely
manipulating the carrier one way, they may modify two (or more) aspects of the constant
wave. In this way, every baud carries multiple bits of information.
The relationship between the number of bits that can be coded for each baud and the
number of signal states required is geometric. The number of required states skyrockets
as you try to code more data in every baud, as shown in Table 22.1.

Table 22.1. Signal States Required to Encode Bits

Number of states    Bits per baud

2                   1
4                   2
16                  4
64                  6
256                 8

Note, too, that this form of coding makes the transmitted data more vulnerable to error.
An error in one bad baud can ripple through multiple bits of data. And because the finer you
carve up a baud, the smaller the differences between adjacent states, errors result from ever
smaller disruptions to the signal.
These more complex forms of modulation don't add extra bandwidth to the
communications channel; remember, that's a function of the medium, which the modem
cannot change. Instead, they take advantage of the possibility of coding digital data as
changes between a variety of states of the carrier wave. The carrier wave, for example,
can be phase modulated with quadrature modulation so that it assumes one of four states
every baud.
Although you might expect these four states to quadruple modem speed, the relationship
is not quite that direct. To convert states into digital information, modems use a
technique called group coding in which one state encodes a specific pattern of bits. The
modem needs a repertoire of unique states wide enough to identify every different pattern
possible with a given number of bits. Two digital bits can assume any one of four distinct
patterns: 00, 01, 10, and 11. So, to encode those two bits into a single baud, a modem
needs four different states to uniquely identify each bit pattern. The ultimate speed of a
modem on an ideal connection would thus be determined by the number of states that are
available for coding.
Group coding is the key to advanced modulation techniques. Instead of dealing with data
one bit at a time, bits of digital code are processed as groups. Each group of data bits is
encoded as one particular state of the carrier.
As the example illustrates, however, the relationship between states and bits is not linear.
As the number of bits in the code increases (and thus the potential speed of the modulation
technique rises), the number of states required increases as a power of two; the bits
conveyed per baud equal the base-two logarithm of the number of available states (tones,
voltages, or phases). A 2-bit-per-baud rate requires 4 separate carrier states for encoding;
a 4-bit-per-baud rate needs 16 separate carrier states; and an 8-bit-per-baud system
requires 256 states.
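This power-of-two relationship reduces to a couple of one-line formulas, sketched here in Python; the function names are invented for illustration.

```python
import math

def states_needed(bits_per_symbol):
    """Each added bit per baud doubles the number of distinct
    carrier states the modem must produce and distinguish."""
    return 2 ** bits_per_symbol

def bits_carried(states):
    """Inverse relationship: bits per baud grow only as the
    base-two logarithm of the number of available states."""
    return int(math.log2(states))

def data_rate(baud, states):
    """Bits per second equals the symbol rate times the number
    of bits packed into each symbol."""
    return baud * bits_carried(states)
```

With 16 states at 600 baud, `data_rate(600, 16)` gives the 2400 bps figure the text describes for complex modulation over the telephone channel.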
Most 1200-bits-per-second modems operate at 600 baud with four different carrier states
made possible by quadrature modulation. Modems that operate at data rates of 2400 bps
use a modulation method that's even more complex than quadrature modulation and
yields 16 discrete states while still operating at 600 baud. Each state encodes one of the
16 different patterns of four digital bits. One baud on the telephone line carries the
information of four bits going into the modem.
More complex modulation systems combine two or more modulation methods to cram
more bits into every baud. For example, a modem can use a combination of several
different frequencies and amplitudes to create distinct states. The group code values can
be assigned to each state in a two-dimensional system that arrays one modulation method
on one axis and a second modulation method on another. You can then assign a group
code value to each discrete coordinate position. Although you could just start in the upper
left and list code values in order from left to right, top to bottom, spacing similar
values apart makes them easier for electronic systems to distinguish reliably. The result is a
matrix of numbers scattered as if in pigeonholes that, viewed graphically and with
enough imagination, looks like a trellis. Consequently, this kind of multiple modulation
is called trellis modulation. International modem standards set the arrangement of
modulation and code values for the trellis modulation used at each operating speed.
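One way to picture such a trellis-style arrangement is as a grid of amplitude and phase combinations, each pigeonhole holding one group code value. The sketch below assigns codes sequentially for simplicity rather than with the careful spacing real standards specify; all names and values are illustrative assumptions.

```python
import cmath
import math

# Hypothetical 16-point constellation: four amplitudes times four
# phases, each point standing for one 4-bit group code.
AMPLITUDES = [1.0, 2.0, 3.0, 4.0]
PHASES_DEG = [45, 135, 225, 315]

constellation = {}
for code, (amp, ph) in enumerate(
        (a, p) for a in AMPLITUDES for p in PHASES_DEG):
    # Represent each state as a complex number: magnitude is
    # amplitude, angle is phase.
    constellation[code] = cmath.rect(amp, math.radians(ph))

def nearest_code(received):
    """Demodulation sketch: report the group code whose
    constellation point lies closest to the received sample."""
    return min(constellation,
               key=lambda c: abs(constellation[c] - received))
```

The nearest-point search is also why noise tolerance falls as constellations grow denser: a noise vector only has to push the received sample past the midpoint between two neighboring pigeonholes to corrupt an entire group of bits.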
Figure 22.6 shows the constellation of values used by the 2400 bps v.22bis signaling
system. The sixteen different points in the constellation allow the coding of four bits per
baud, yielding the 2400 bps data rate on a 600 baud signal. The four phase quadrants refer to
shifts between the four quadrature phase states, and the four possible shifts allow the
encoding of two bits of data in addition to the two bits carried by the four possible states
in each quadrant.
Figure 22.6 The v.22bis signal constellation.

According to the free lunch principle, this system of seemingly getting something for
nothing using complex modulation must have a drawback. With high speed modems, the
problem is that the quality of the telephone line becomes increasingly critical as the data
rate is increased. Moreover, as modem speeds get faster, each phone line blip (baud)
carries more information, and a single error can have devastating effects.

Pulse Modulation

Modulation can also work in the opposite direction and allow a digital system to carry
what would otherwise be analog values or allow direct current systems to transmit
alternating current signals. In fact, this reverse form of modulation is the principle
underlying digital audio and the Compact Disc. For the sake of completeness, we'll take a
look at this technology here, although we'll reserve a full discussion for Chapter 18,
"Audio."
Because this technology converts an analog signal into a set of digital pulses, it is often
called pulse modulation, although this term is usually reserved for applications that use
the resulting digital signal to modulate a carrier wave. Instead of "modulation," the
process of creating the raw digital direct current signals is often called digital coding.
The simplest form of pulse modulation uses the number of pulses in a given period
to indicate the strength of the underlying analog signal.
Pulse width modulation varies the width of each pulse to correspond to the strength of the
underlying analog signal. In other words, pulse width modulation translates the analog
voltage level into the duration of the pulses used in the digital signal.
Pulse code modulation is the most complex. It uses a digital number or code to indicate
the strength of the underlying analog signal. For example, the voltage of the signal is
translated into a number, and the digital signal represents that number in binary code.
This form of digital modulation is familiar from its wide application in digital audio
equipment like CD players.
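The quantization step at the heart of pulse code modulation can be sketched in Python. This is plain linear PCM for illustration; actual telephone codecs use logarithmic (mu-law or A-law) companding, and the function names here are invented.

```python
def pcm_encode(samples, bits=8):
    """Linear PCM sketch: map each analog sample in [-1.0, 1.0]
    to the nearest of 2**bits discrete levels, as an integer code."""
    levels = 2 ** bits
    codes = []
    for s in samples:
        s = max(-1.0, min(1.0, s))  # clip out-of-range input
        codes.append(int(round((s + 1.0) / 2.0 * (levels - 1))))
    return codes

def pcm_decode(codes, bits=8):
    """Reverse mapping; the difference between the decoded value
    and the original sample is the quantization distortion."""
    levels = 2 ** bits
    return [c / (levels - 1) * 2.0 - 1.0 for c in codes]
```

Round-tripping a sample through encode and decode never reproduces it exactly; the small residual error is the quantization distortion that, as discussed later in this chapter, sets the effective noise floor of a digital telephone channel.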

Signal Characteristics

The place to begin a discussion of modems and telecommunications technology is with


the problem, getting information—be it digital data or your own dulcet voice—from one
place to another. Given free rein and an unlimited budget, you'd have no problem at all.
You could build the biggest transmitter and blast signals to distant corners of the
universe. But our budgets aren't unlimited. Nor do you have free access to the pathways
of communication. The limited resource would need to be split nearly six billion ways to
give everyone access.
The underlying problem crops up even in conversation. If everyone talked at once, you'd
never be able to make sense of the confusion. Similarly, if everyone tried to send out data
at the same time without concern for others trying to do likewise, nothing would likely
get through the sea of interference.
To keep order, communications are restricted to channels. The obvious channels are
those used by television broadcasters. But even each telephone call you make goes
through its own channel.
The problem is that channel space is not unlimited. To keep communications economical,
telephone companies (for example) severely restrict the size and carrying capacity of
each channel so that they can squeeze in more individual channels.
The modulation that's added to the carrier contains information that varies at some rate.
Traditional analog signal sources (music or voice signals, for instance) contain a near
random mix of frequencies between 20 and 20,000 Hertz. Although digital signals start
off as direct current, which by itself has no bandwidth, every change in digital state adds a
frequency component. The faster the states change (the more information that's squeezed
down the digital channel, as measured in its bit rate), the more bandwidth the signal
occupies. The on and off rate of the digital signal is its frequency, and modulating the
carrier with it adds to the frequency range demanded by the carrier and modulation
combination. In other words, mixing in modulation increases the bandwidth needed by
the carrier; the more information that's added, the more bandwidth that's needed.

Channel Limits

Like a great artist, the modem is constrained to work within the limits of its medium, the
telephone channel. These limits are imposed by the telephone systems. They arise in part
from characteristics of analog communications and the communications medium that's
used, primarily the unshielded twisted pair wire that runs between your business or home
and the telephone company central office. In long distance communication, you're also
constrained by arbitrary limits imposed by the telephone company that you use. Most
long distance calls get converted to digital signals at your local telephone exchange. Just
as an artist must overcome the limitation of his medium, turning its weaknesses into
strengths, the modem must struggle within the confines of the telephone connection and
turn the ancient technology to its advantage.

Signal Bandwidth

The primary limit on any communications channel is its bandwidth, and bandwidth is the
chief constraint on modem speed. Bandwidth merely specifies a range of frequencies,
from the lowest to the highest, that the channel can carry or that are present in the signal.
It is one way of describing the maximum amount of information that the channel can
carry. Bandwidth is expressed differently for analog and digital circuits. In analog
technology, the bandwidth of a circuit is the difference between the lowest and highest
frequencies that can pass through the channel. Engineers measure analog bandwidth in
kilohertz or megahertz. In a digital circuit, the bandwidth is the amount of information
that can pass through the channel. Engineers measure digital bandwidth in bits, kilobits,
or megabits per second. The kilohertz of an analog bandwidth and the kilobits per second
of digital bandwidth for the same circuit are not necessarily the same and often differ
greatly.
When you use a modem, the data signals passing through the communications channel are
analog. The analog bandwidth of the system consequently constrains the data carrying capacity.
The bandwidth of a simple pair of telephone wires decreases with its length because of
physical characteristics of the signals and wires. Scientifically speaking, capacitance in
the wires attenuates high frequencies.
The more severe limit comes with the digital conversion process at the telephone central
office. Most telephone companies use an eight kilohertz sampling rate and a bit depth of
eight bits for their digital long distance signals. The sampling rate alone limits the
maximum bandwidth of the telephone connection to less than four kilohertz. It is this
narrow bandwidth that modems and modem designers must contend with.

Channel Bandwidth

The bandwidth of a communications channel defines the frequency limits of the signals
that they can carry. This channel bandwidth may be physically limited by the medium
used by the channel or artificially limited by communications standards. The bandwidths
of radio transmissions, for example, are limited artificially, by law, to allow more
different modulated carriers to share the air waves while preventing interference between
them.
In wire-based communications channels, bandwidth is often limited by the wires
themselves. Certain physical characteristics of wires cause degradations in their high
frequency transmission capabilities. The capacitance between conductors in a cable pair,
for instance, increasingly degrades signals as their frequencies rise, finally reaching a
point at which a high frequency signal might not be able to traverse more than a few
centimeters of wire. Amplifiers or repeaters, which boost signals so that they can travel
longer distances, often cannot handle very low or very high frequencies, imposing more
limits.
Most telephone channels also have an artificial bandwidth limitation imposed by the
telephone company. To get the greatest financial potential from the capacity of their
transmissions cables, microwave systems, and satellites, telephone carriers normally limit
the bandwidth of telephone signals. One reason bandwidth is limited is so that many
separate telephone conversations can be stacked atop one another through multiplexing
techniques, which allow a single pair of wires to carry hundreds of simultaneous
conversations.
Although the effects of bandwidth limitation are obvious (it's why your phone doesn't
sound as good as your stereo), the telephone company multiplexing equipment works so
well that you are generally unaware of all the manipulations made to the voice signals as
they are squeezed through wires.

Bandwidth Limitations

One of the consequences of telephone company signal manipulations is a severe
limitation in the bandwidth of an ordinary telephone channel. Instead of the full
frequency range of a good quality stereo system (from 20 to 20,000 Hertz), a telephone
channel will only allow frequencies between 300 and 3000 Hertz to freely pass. This very
narrow bandwidth works well for telephones because frequencies below 300 Hertz
contain most of the power of the human voice but little of its intelligibility. Frequencies
above 3000 Hertz increase the crispness of the sound but don't add appreciably to
intelligibility.
Although intelligibility is the primary concern with voice communications (most of the
time), data transfer is principally oriented to bandwidth. The comparatively narrow
bandwidth of the standard telephone channel limits the bandwidth of the modulated
signal it can carry, which in turn limits the amount of digital information that can be
squeezed down the phone line by a modem.
Try some simple math and you will see the harsh constraints faced by your modem's
signals. A telephone channel typically has a useful bandwidth of about 2700 Hertz (from
300 to 3000 Hertz). At most, a carrier wave at exactly the center of the telephone
channel, 1650 Hz, burdened by two sidebands could carry data that varies at a rate no
greater than 1350 Hz, the distance from the carrier to either edge of the channel. Such a
signal would fill the entire bandwidth of the telephone channel without allowing for a
safety margin.

Guard Bands

Communications is a two way street, so most modems are designed with two channels,
one in each direction. Putting two channels on a single telephone line does more than cut
in half the bandwidth available to each channel. Separating the two channels is a guard
band, a width of unused frequencies that isolate the active channels and prevent
confusion between their separate carriers. The safety margin is, in effect, also a guard
between the carriers and the varying limit of the bandwidth.
Once you add in the needs of two communications channels and the guard bands, the
practical bandwidth limit for modem communications over real telephone channels that
have an innate 2700 Hertz bandwidth works out to about 2400 Hertz. That leaves 1200
Hertz for each of the two duplex channels. Getting the most information through that
limited bandwidth is a challenge to the inventiveness of modem designers and modem
standards in picking the best possible modulation method.

Shannon's Limit

Fortunately for your modem, it can use modulation technologies that are much more
efficient than this simple example. But the modem still faces an ultimate limit on the
amount of data that it can squeeze through an analog telephone line. This ultimate limit
combines the effects of the bandwidth of the channel and the noise level in the channel.
The greater the noise, the more likely that it will be confused with the information that
has to compete with it. This theoretical maximum data rate for a communication channel
is called Shannon's Limit. This fundamental law of data communications states that the
maximum number of digital bits that can be transmitted over a given communication path
in one second can be determined from the bandwidth (W, in Hertz) and the signal to noise
ratio (S/N, expressed as a plain power ratio rather than in decibels) by the following formula:
Maximum data rate = W log2 (1 + S/N)
The analog to digital converters used in telephone company central offices contribute the
noise that most limits modem bandwidth. In creating the digital pulse-coded modulation
(PCM) signal, they create quantization distortion that produces an effective signal to
noise ratio of about 36 dB. Quantization distortion results from the inability of the digital
system with a discrete number of voltage steps (256 in the case of telephone company
A/D converters) to exactly represent an analog signal that has an infinite number of
levels. At this noise level, Shannon's Limit for analog modems is close to the 33.6 Kbps
of today's quickest products. The x2 system developed by U.S. Robotics and the K56flex
system developed by Rockwell and Lucent Technologies both ratchet up to 56 Kbps by
matching their 256 signaling levels to the 256 levels of the telephone company A/D
converters, thus sidestepping the issue of quantization distortion.
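Shannon's formula is easy to check numerically. The sketch below converts the decibel figure to a power ratio before applying the formula; the function name is invented for illustration.

```python
import math

def shannon_limit(bandwidth_hz, snr_db):
    """Shannon capacity in bits per second. The signal to noise
    ratio must be converted from decibels to a power ratio first."""
    snr = 10 ** (snr_db / 10.0)
    return bandwidth_hz * math.log2(1 + snr)

# Roughly the situation the text describes: a channel of about
# 3000 Hz with the 36 dB signal to noise ratio set by quantization.
shannon_limit(3000, 36)  # about 35,900 bits per second
```

The result lands just above the 33.6 Kbps of the fastest true analog modems, which is exactly why that speed marked the practical end of the line for conventional modulation.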

Sidebands

In the simplest modulation systems, a modulated carrier requires twice the bandwidth of
the modulation signal. Although this doubling sounds anomalous, it is the direct result of
the combining of the signals. The carrier and modulation mix together and result in
modulation products called sidebands. There are two: one at the sum of the carrier
frequency and the modulation frequency, the other at their difference (the modulation
frequency subtracted from the carrier). The sum often is called the upper sideband, and
the difference is correspondingly called the lower sideband. Figure 22.7 graphically
illustrates the relationship between the carrier and sidebands.
Because these upper and lower modulation products are essentially redundant (they
contain exactly the same information), one or the other can be eliminated without loss of
information to reduce the bandwidth of the modulated carrier to that of the modulation.
(This form of bandwidth savings, termed single sideband modulation, is commonly used
in broadcasting to squeeze more signals into the limited radio spectrum.)
Figure 22.7 Display of carrier wave, lower and upper sidebands.

Even with sideband squeezing, the fundamental fact remains that any modulated signal
requires a finite range of frequencies to hold its information. The limits of this frequency
range define the bandwidth required by the modulated signal.
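The arithmetic behind the sideband pair is simple enough to state in code. This tiny helper, with an invented name for illustration, computes both modulation products:

```python
def sidebands(carrier_hz, modulation_hz):
    """Simple amplitude modulation produces two redundant
    products: carrier minus modulation (lower sideband) and
    carrier plus modulation (upper sideband)."""
    return (carrier_hz - modulation_hz, carrier_hz + modulation_hz)

# A 1650 Hz carrier modulated at 1350 Hz spans 300 to 3000 Hz,
# exactly filling the usable telephone channel.
sidebands(1650, 1350)  # returns (300, 3000)
```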
The most advanced modem designs rely on the power of Digital Signal Processors to take
advantage of novel technologies, such as line probing, multidimensional trellis coding,
signal shaping, and protocol spoofing.
Line probing lets a pair of modems determine the optimal transfer method for a given
telephone connection. The two modems send a sequence of signals back and forth to
probe the limits of the connection and ascertain the highest modulation rate, best carrier
frequency, and coding technique that gives the highest throughput.
Multidimensional trellis coding is a way of making modem signals more resistant to
errors caused by noise in the telephone connection by carefully selecting the modulation
values assigned to the transmitted code.
Signal shaping improves signal to noise performance of the modem connection by
altering the power of the signal in certain circumstances. Signal points that occur
frequently are transmitted at higher power, and less frequent points at reduced power.
Protocol spoofing removes the redundant parts of data transfer protocols so that less data
needs to be transferred. In effect, it compresses the protocol to speed transmissions much
as data compression speeds data transfer. At the receiving end, the protocol is fully
reconstructed before being passed along for further processing.

Asynchronous Operation

http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/wrh22.htm (18 de 96) [23/06/2000 06:57:02 p.m.]



At lower speeds most modems are designed to operate asynchronously. That is, the
timing of one modem's signals doesn't matter as long as it is within wide limits. More
important is the actual bit pattern that is sent. That pattern is self-defining. Each character
frame holds enough data not only to identify the information that it contains but also to
define its own beginning and end.
Normally, the time at which a pulse occurs in relation to the ticking of a computer's
system clock determines the meaning of a bit in a digital signal, and the pulses must be
synchronized to the clock for proper operation. In asynchronous transmissions, however,
the digital pulses are not locked to the system clock of either computer. Instead, the
meaning of each bit of a digital word is defined by its position in reference to the clearly
(and unambiguously) defined start bit. In an asynchronous string, the start bit is followed
by seven or eight data bits, an optional parity bit for error detection, and one or two stop
bits that define the ends of the frame. (See Chapter 21, "Serial Ports.") Because the
timing is set within each word in isolation, each word of the asynchronous signal can be
independent of any time relations beyond its self-defined bounds.
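The framing just described can be sketched as a bit-builder. The function below is a hypothetical helper for illustration, not part of any real serial driver; it emits the start bit, the data bits least-significant first, an optional parity bit, and the stop bits.

```python
def frame_byte(value, data_bits=8, parity="even", stop_bits=1):
    """Build the bit sequence an asynchronous port sends for one byte:
    a start bit (0), data bits LSB-first, optional parity, stop bits (1)."""
    bits = [0]                               # start bit marks the frame
    data = [(value >> i) & 1 for i in range(data_bits)]
    bits.extend(data)
    if parity == "even":                     # parity bit makes the 1-count even
        bits.append(sum(data) % 2)
    elif parity == "odd":
        bits.append(1 - sum(data) % 2)
    bits.extend([1] * stop_bits)             # stop bits end the frame
    return bits

# Eight data bits, even parity, one stop bit: 11 bit times per byte sent.
frame = frame_byte(ord("A"))                 # 0x41 = 0b01000001
assert len(frame) == 11
assert frame[0] == 0 and frame[-1] == 1
```

With this framing, each 8-bit byte costs 11 bit times on the wire, which is the overhead that synchronous operation, described next, avoids.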

Synchronous Operation

When speed's the thing (as it almost always is with PCs), asynchronous communications
rank as wasteful. All those start and stop bits eat up time that could be devoted to
squeezing in more data bits. Consequently, high speed modem transmission standards
and protocols as well as most leased-line modems do away with most extra overhead bits
of asynchronous communication by using synchronous transmission. In this method of
transmitting data across phone lines, the two ends of the channel share a common time
base, and the communicating modems operate continuously at substantially the same
frequency and are continually maintained in the correct phase relationship by circuits that
monitor the connection and adjust for the circuit conditions.
In synchronous transmissions, the timing of each individual bit is vital, but framing
bits (start and stop bits) are unnecessary, which makes this form of communication two
or three bits faster per byte transmitted.

Duplex

Communications are supposed to be a two way street. Information is supposed to flow in
both directions. You should learn something from everyone you talk to, and everyone
should learn from you. Even if you disregard the potential for success of such two way
communication, one effect is undeniable: it cuts the usable bandwidth of a data
communication channel in one direction in half because the data going the other way
requires its own share of the bandwidth.
With modems, such a two way exchange of information is called duplex communication.
Often it is redundantly called full duplex. A full duplex modem is able to simultaneously
handle two signals, usually (but not necessarily) going in opposite directions, so it can
send and receive information at the same time. Duplex modems use two carriers to
simultaneously transmit and receive data, each of which has half the bandwidth available
to it and its modulation.

Half Duplex

The alternative to duplex communications is half duplex. In half duplex transmission,
only one signal is used. To carry on a two way conversation, a modem must alternately
send and receive signals. Half duplex transmission allows more of the channel bandwidth
to be put to use but slows data communications, because often a modem must switch
between sending and receiving modes after every block of data crawls through the
channel.

Echoplex

The term duplex is often mistakenly used by some communications programs for PCs to
describe echoplex operation. In echoplex mode, a modem sends a character down the
phone line, and the distant modem returns the same character, echoing it. The echoed
character is then displayed on the originating terminal as confirmation that the character
was sent correctly. Without echoplex, the host computer usually writes the transmitted
character directly to its monitor screen. Although a duplex modem generates echoplex
signals most easily, the two terms are not interchangeable.
With early communications programs, echoplex was a critical setup parameter. Some
terminal programs relied on modem echoplex to display your typing on the screen. If you
had echoplex off, you wouldn't see what you typed. Other terminal programs, however,
displayed every character that went through the modem, so switching echoplex on would
display two of every letter you typed, lliikkee tthhiiss. Web browsers don't bother you
with the need to select this feature. Most, however, work without echoplex.
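The doubled-letter effect is easy to model. This toy function, hypothetical and for illustration only, shows what reaches the screen under each combination of local echo and echoplex:

```python
def display(typed, local_echo, remote_echoplex):
    """What appears on screen for each typed character: the local copy
    (if the terminal echoes your keystrokes) plus the remote echo
    (if the distant modem runs echoplex)."""
    out = []
    for ch in typed:
        if local_echo:
            out.append(ch)   # terminal prints your keystroke itself
        if remote_echoplex:
            out.append(ch)   # distant modem echoes it back
    return "".join(out)

# Echoplex alone confirms each character without doubling it...
assert display("like this", local_echo=False, remote_echoplex=True) == "like this"
# ...but local echo plus echoplex doubles everything, as the text shows.
assert display("like", local_echo=True, remote_echoplex=True) == "lliikkee"
```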

Switching Modems

To push more signal through a telephone line, some modems attempt to mimic full
duplex operation while actually running in half duplex mode. Switching modems are half
duplex modems that reverse the direction of the signal at each end of the line in response
to the need to send data. This kind of operation can masquerade as full duplex because most
of the time communications go only in one direction. You enter commands into a remote
access system, and only after the commands are received does the remote system respond
with the information that you seek. Although one end is sending, the other end is more
than likely to be completely idle.


On the positive side, switching modems are able to achieve a doubling of the data rate
without adding any complexity to their modulation. But the switching process itself is
time consuming and inevitably involves a delay because the modems must let each other
know that they are switching. Because transmission delays across long distance lines are
often a substantial fraction of a second (most connections take at least one trip up to a
satellite and back down, a 50,000 mile journey that takes about a quarter of a second
even at the speed of light) the process of switching can eat huge holes into transmission
time.
Most software modem protocols require a confirmation for each block of data sent,
meaning the modem must switch twice for each block. The smaller the block, the more
often the switch must occur. Just one trip to a satellite would limit a switching modem
with an infinitely fast data rate using the 128-byte blocks of some early modem protocols
to 1,024 bits per second at the two-switches-per-second rate.
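Taking the chapter's figures at face value (128-byte blocks, two turnarounds per block, and the stated two-switches-per-second rate, which implies half a second per turnaround), the ceiling works out as follows. The helper function is hypothetical, for illustration only.

```python
def switching_throughput(block_bytes, switch_delay_s, switches_per_block=2):
    """Upper bound on throughput when the modem itself is infinitely
    fast and only the line-turnaround delays remain."""
    seconds_per_block = switches_per_block * switch_delay_s
    return block_bytes * 8 / seconds_per_block   # bits per second

# Two half-second turnarounds per 128-byte block: one block per second,
# so the infinitely fast modem still moves only 1,024 bits per second.
assert switching_throughput(128, 0.5) == 1024.0
```

Shorter turnaround delays or larger blocks raise the ceiling proportionally, which is why satellite-routed calls were so punishing for switching modems.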

Asymmetrical Modems

Because of this weakness of switching modems, asymmetrical modems cut the waiting by
maintaining a semblance of two way duplex communications while optimizing speed in
one direction only. These modems shoehorn in a lower speed channel in addition to a
higher speed one, splitting the total bandwidth of the modem channel unequally.
Early asymmetrical modems were able to flip-flop the direction of the high speed
communications, relying on algorithms to determine which way is the best way. The
modern asymmetrical technologies have a much simpler algorithm—designed for
Internet communications, they assume you need a greater data rate downstream (to you)
than upstream (back to the server). This design is effective because most people
download blocks of data from the Internet (typically web pages rife with graphics) while
sending only a few commands back to the web server.
The latest 56K modems such as those using the K56flex and x2 technologies operate
asymmetrically. Cable modems and satellite connections to the Internet also use a
variation on asymmetrical modem technology. These systems typically provide you with
a wide bandwidth downlink from a satellite or cable system to permit you to quickly
browse pages but rely on a narrow channel telephone link—a conventional modem
link—to relay your commands back to the network.

Connection Enhancing Technologies

Getting the most from your modem requires making the best match between it and the
connection it makes to the distant modem with which you want to communicate.
Although you have no control over the routing your local phone company and long
distance carrier give to a given call (or even whether the connection remains consistent
during a given call), a modem can make the best of what it gets. Using line
compensation, it can ameliorate some problems with the connection. Fallback helps the
modem get the most from a substandard connection or one that loses quality during the
modem linkup. Data compression helps the modem move more data through any
connection, and error correction compensates for transitory problems that would result in
minor mistakes in transmissions.

Line Compensation

Although a long distance telephone connection may sound unchanging to your ear, its
electrical characteristics vary by the moment. Everything from a wire swaying in the
Wichita wind to the phone company's automatic rerouting of the call through Bangkok
when the direct circuits fill up can change the amplitude, frequency, and phase response
of the circuit. The modem then faces two challenges: not to interpret such changes as
data, and to maintain the quality of the line to a high enough standard to support its use
for high speed transmission.
Under modern communications standards, modems compensate for variations in
telephone lines by equalizing the telephone line. That is, two modems exchange tones at
different frequencies and observe how signal strength and phase shift with frequency
changes. The modems then change their signals to behave in the exact opposite way to
cancel out the variations in the phone line. The modems compensate for deficiencies in
the phone line to make signals behave the way they would have in the absence of
problems. If, for example, the modems observe that high frequencies are too weak on the
phone line, they will compensate by boosting high frequencies before sending them.
Modern modems also use echo cancellation to eliminate the return of their own signals
from the distant end of the telephone line. To achieve this, a modem sends out a tone and
listens for its return. Once it determines how long the delay is before the return signal
occurs and how strong the return is, the modem can compensate by generating the
opposite signal and mixing it into the incoming data stream.
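A minimal sketch of the echo cancellation idea, assuming the training phase has already measured the echo's delay (here three samples) and strength (here one half). The function and sample values are invented for illustration; real cancellers adapt continuously rather than using fixed numbers.

```python
def cancel_echo(received, sent, delay, gain):
    """Subtract a delayed, attenuated copy of our own transmission
    from the incoming samples, using the delay and strength that the
    training tone measured."""
    cleaned = []
    for i, sample in enumerate(received):
        echo = gain * sent[i - delay] if i >= delay else 0.0
        cleaned.append(sample - echo)
    return cleaned

# Hypothetical samples: the far modem's signal plus an echo of our own
# transmission returning three samples later at half strength...
sent = [1.0, -1.0, 1.0, 1.0, -1.0, -1.0, 1.0, -1.0]
far = [0.5, 0.5, -0.5, 0.5, -0.5, 0.5, 0.5, -0.5]
received = [f + (0.5 * sent[i - 3] if i >= 3 else 0.0)
            for i, f in enumerate(far)]

# ...comes out as the far signal alone after cancellation.
assert cancel_echo(received, sent, delay=3, gain=0.5) == far
```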

Fallback

Most modems use at most two carriers for duplex communications. These carriers are
usually modulated to fill the available bandwidth. Sometimes, however, the quality of the
telephone line is not sufficient to allow reliable communications over the full bandwidth
expected by the modem even with line compensation. In such cases, most high speed
modems incorporate fallback capabilities. When the top speed does not work, they
attempt to communicate at lower speeds that are less critical of telephone line quality. A
pair of modems might first try 9600 bps and be unsuccessful. They next might try 4800,
then 2400, and so on until reliable communications are established.
Most modems fall back and stick with the slower speed that proves itself reliable. Some
modems, however, constantly check the condition of the telephone connection to sense
for any deterioration or improvement. If the line improves, these modems can shift back
to a higher speed.
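The fallback sequence amounts to a simple negotiation loop. In this sketch, line_quality stands in for the real handshake attempt at each speed; the function name and rate list are illustrative, not any vendor's actual procedure.

```python
def negotiate(line_quality, rates=(9600, 4800, 2400, 1200, 300)):
    """Try speeds from fastest to slowest and settle on the first one
    the line proves able to support."""
    for rate in rates:
        if line_quality(rate):   # stands in for a real handshake attempt
            return rate
    return None                  # no reliable connection at any speed

# A hypothetical line that is only clean enough for 2400 bps and below:
noisy_line = lambda rate: rate <= 2400
assert negotiate(noisy_line) == 2400
```

A modem that rechecks the line simply reruns this loop from the top when conditions improve.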

Multiple Carrier Modems

Although most modems rely on a relatively complex form of modulation on one or two
carriers to achieve high speed, one clever idea now relegated to a historical footnote by
the latest modem standards is the multiple carrier modem. This type of modem used
relatively simple modulation on several simultaneous carrier signals. One of the chief
advantages of this system comes into play when the quality of the telephone connection
deteriorates. Instead of dropping down to the next incremental communications rate,
generally cutting data speed in half, multiple carrier modems just stop using the carriers
in the doubtful regions of the bandwidth. The communication rate may fall off just a
small percentage in the adjustment. (Of course, it could dip by as much as a normal
fallback modem.)

Data Compression

Although there's no way of increasing the number of bits that can cross a telephone line
beyond the capacity of the channel, the information handling capability of the modem
circuit can be increased by making each bit more meaningful. Many of the bits that are
sent through the telecommunication channel are meaningless or redundant—they convey
no additional information. By eliminating those worthless bits, the information content of
the data stream is more intense, and each bit is more meaningful. The process of paring
the bits is called data compression.
The effectiveness of compression varies with the type of data that's being transmitted.
One of the most prevalent data compression schemes encodes repetitive data. Eight
recurrences of the same byte value might be coded as two bytes, one signifying the value
and the second the number of repetitions. This form of compression is most effective on
graphics, which often have many blocks of repeating bytes. Other compression methods
may strip out start, stop, and parity bits.
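A minimal run-length coder along the lines just described. The two-byte value-and-count scheme shown is one of many real variants; this code is illustrative only, not the encoding any particular modem standard uses.

```python
def rle_encode(data):
    """Code each run of identical bytes as a (value, count) pair."""
    out = bytearray()
    i = 0
    while i < len(data):
        j = i
        while j < len(data) and data[j] == data[i] and j - i < 255:
            j += 1                       # extend the run (count fits a byte)
        out += bytes([data[i], j - i])   # value byte, then repetition count
        i = j
    return bytes(out)

def rle_decode(coded):
    """Expand (value, count) pairs back into the original bytes."""
    out = bytearray()
    for k in range(0, len(coded), 2):
        out += bytes([coded[k]]) * coded[k + 1]
    return bytes(out)

# Eight recurrences of one byte value shrink to two bytes, as in the text.
run = b"\xff" * 8
assert rle_encode(run) == b"\xff\x08"
assert rle_decode(rle_encode(b"aaabbc")) == b"aaabbc"
```

Note the trade-off the text implies: data with few repeats (already compressed files, for instance) can actually grow under this scheme, since every lone byte costs two.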
At one time, many modem manufacturers had their own methods of compressing data so
that you needed two matched modems to take advantage of the potential throughput
increases. Today, however, most modems follow international compression standards so
that any two modems using the same standards can communicate with one another at
compressed data speeds.
These advanced modems perform the data compression on the fly in their own circuitry
as you transmit your data. Alternately, you can precompress your data before sending it
to your modem. Sort of like dehydrating soup, precompression (also known as file
compression) removes the unnecessary or redundant parts of a file, yet allows the vital
contents to be easily stored and reconstituted when needed. This gives you two
advantages: the files you send and receive require less storage space because they are
compressed, and your serial port operates at a lower speed for a given data throughput.


Note that once a file is compressed, it usually cannot be further compressed. So modems
that use on the fly compression standards cannot increase the throughput of
precompressed files. In fact, using one on the fly modem data compression system
(MNP5) actually can increase the transmission time for compressed files as compared to
not using modem data compression.

Error Checking and Error Correction

Because all high speed modems operate closer to the limits of the telephone channel, they
are naturally more prone to data errors. To better cope with such problems, nearly all
high speed modems have their own built-in error checking methods (which detect only
transmission errors) and error correction (which detects data errors and corrects the
mistakes before they get passed along to your PC). These error checking and error
correction systems work like communications protocols, grouping bytes into blocks and
sending cyclical redundancy checking information. They differ from the protocols used
by communications software in that they are implemented in the hardware instead of
your computer's software. That means that they don't load down your computer when it's
straining at the limits of its serial ports.
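As a concrete example of the block checking described above, here is a bitwise CRC-16 using the CCITT polynomial 0x1021, one of the cyclical redundancy checks commonly used in modem protocols. The block contents are arbitrary; this is a sketch of the principle, not any specific protocol's exact frame format.

```python
def crc16_ccitt(block, crc=0xFFFF):
    """Bitwise CRC-16/CCITT (polynomial 0x1021): the sender appends this
    value to each block; the receiver recomputes it to detect errors."""
    for byte in block:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:                       # top bit set: shift and
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF  # fold in the polynomial
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

block = b"Hello, modem!"
check = crc16_ccitt(block)

# The receiver recomputes the CRC; even a single flipped bit is caught.
assert crc16_ccitt(block) == check
corrupted = bytes([block[0] ^ 0x01]) + block[1:]
assert crc16_ccitt(corrupted) != check
```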
It can also mean that software communications protocols are redundant and a waste of
time. As mentioned in the case of switching modems, using a software-based
communications protocol can be counterproductive with many high speed modems,
slowing the transfer rate to a crawl. Most makers of modems using built-in error
checking advise against using such software protocols.
All modem error detection and error correction systems require that both ends of the
connection use the same error handling protocol. In order that modems can talk to one
another, a number of standards have been developed. Today, the most popular are MNP4
and V.42. You may also see the abbreviations LAPB and LAPM describing error
handling methods.
LAPB stands for Link Access Procedure, Balanced, an error correction protocol designed
for X.25 packet switched services like Telenet and Tymnet. Some high speed modem
makers adapted this standard to their dial-up modem products before the V.42 standard
(described later in this chapter) was agreed on. For example, the Hayes Smartmodem
9600 from Hayes Microcomputer Products includes LAPB error control capabilities.
LAPM is an acronym for Link Access Procedure for Modems and is the error correction
protocol used by the CCITT V.42 standard described later in the chapter.

Combining Voice and Data

Having but a single telephone line can be a problem when you need to talk as well as
send data. In the old days, the solution was to switch. You'd type a message to the person
at the other end of the connection like, "Go voice," pick up the telephone handset, and
tell your modem to switch back to command mode so you could talk without its constant
squeal.
Anything you can do, your computer can do better—at least if your computer can do it at
all. Switching modes is something that a PC can do, so you might expect clever engineers
to find a better way to switch between voice and data on a single telephone line. In fact,
they found several. All are aimed at a high goal—videoconferencing using the data
capabilities of your modem to send compressed video back and forth while you use the
phone in a more traditional manner for your voice.
The first attempts at combining data and voice relied on fast, automatic switching to
create a technology called Simultaneous Voice and Data or SVD. Although the initial
system merely switched traditional analog voice signals around your data, modern
systems now use multiplexing to combine data and voice in digital form to create Digital
Simultaneous Voice and Data or DSVD modems.

VoiceView

First among the systems to yield a single line voice and data connection was the
VoiceView system developed by Radish Communications Systems in 1992. Initially
created as a proprietary product, it was released as an open SVD protocol (usually simply
termed "Radish") a year later. It has quickly gained support from major modem makers
such as AT&T, Hayes, Rockwell, and U.S. Robotics, as well as Intel and Microsoft
because it could be grafted into standard communications chipsets while adding
relatively little—probably under $50—to the cost of a conventional modem.
VoiceView is a true switching system. Rather than simultaneous voice and data on your
telephone line, it generates alternating voice and data signals. It does not alter the voice
signal from its original analog format. Instead, it constantly monitors what you say on the
line. During pauses in your speech, VoiceView switches into data mode and sends data
scurrying down the telephone line. Start talking again, and VoiceView again gives your
voice control of the data line. In other words, VoiceView takes advantage of unused time
during your voice connection to transfer data, stealing your telephone line for data.
Under VoiceView, your speech takes priority. One reason is that computers and their
digital signals have better handshaking than people. They readily wait their turns, holding
off the flow of data while many people are unwilling to stop talking. Moreover, you
expect your words to be delivered in real time, as you speak them. Any delay not only
impedes the flow of the conversation but will confuse you, too, when you hear your
voice, slightly delayed, fed back at you through the distant connection.

VoiceSpan

AT&T developed its own SVD system for teleconferencing, which it called VoiceSpan.
At heart a true packet-based multiplex system, VoiceSpan converts your voice into
digital form, and organizes the bytes of your voice and data into individual fixed length
packets. The mix of packets depends on what you do and say during a call. When you
don't speak, VoiceSpan fills the line with data packets. Speak up, and it mixes in voice
packets. The result is that data transmission slows as the stream of data packets gets
diluted. Even when you talk, however, some data always gets through.
Because of its digitization of your speech, VoiceSpan can take advantage of
compression. Once compressed, the voice information leaves more room for the
secondary data stream. Digital information such as video images can move faster.

DSVD

DSVD is the latest development in combining voice and data on a single telephone line.
Developed by an industry consortium led by Creative Labs Inc., Hayes Microcomputer
Products, Intel Corporation, Rockwell International, and U.S. Robotics Inc., DSVD is
published as an industry standard by Intel. Version 1.0 was first published in October
1994. The current version, 1.1, was first published in January 1995.
DSVD works like AT&T's VoiceSpan by digitizing voice information into packets that
can be handled like ordinary digital data. It starts out with the same sampling rate and bit
depth as long distance telephone systems (8 kHz sampling with a sampling depth of
eight bits) so it does not affect voice quality except for the effects of voice compression.
Although the digitization process adds a slight delay (on the order of two or three
milliseconds) to voice transmissions, the lag is generally not perceptible.
Using a standard 28.8 kbit/sec modem, the DSVD modem can simultaneously send and
receive data at 19.2 kbits/sec and compressed voice at 9.6 kbits/sec. The modem also
operates as a standard V.34 modem and sends data at full speed. Version 1.1 of the
DSVD specification allows for two speech compression systems, TrueSpeech and
DigiTalk; the latter is the preferred method.
If you just pick up the handset of a telephone connected to a DSVD modem, your
telephone works conventionally. Your voice goes out as a normal analog signal, and you
can carry out a conversation as you normally would. If the other part of the connection
also has a DSVD modem, you can command your PC to switch the modem to DSVD
mode. To make the transition, your modem mutes the audio (so you don't hear its
negotiating tones), establishes a digital connection with the distant modem, and starts
converting your voice into digital form. When voice traffic resumes on the telephone
line, it is in digital form, sent along with the digital data traffic between the modems.
If either you or the distant party to the connection hang up your telephone handset, the
DSVD modems automatically steal the entire phone line for digital data, operating at the
maximum speed of your modem. If both ends of the connection pick up their handsets
during a digital data transfer, the DSVD modems automatically start interleaving voice
packets with the data.
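The packet handling just described can be sketched as a simple multiplexer: voice packets exist only while a handset is off hook, and data packets fill every remaining opportunity. This is an illustration of the principle only, not the actual DSVD packet format or scheduling algorithm.

```python
def interleave(voice_packets, data_packets, voice_active):
    """Mix voice and data packets onto one line. With a handset off
    hook, voice packets are interleaved with data; with handsets down,
    no voice packets exist and the whole line carries data."""
    v = list(voice_packets) if voice_active else []
    d = list(data_packets)
    stream = []
    while v or d:
        if v:
            stream.append(("voice", v.pop(0)))
        if d:
            stream.append(("data", d.pop(0)))
    return stream

# With both parties talking, voice and data packets alternate...
mixed = interleave(["v1", "v2"], ["d1", "d2", "d3"], voice_active=True)
assert mixed == [("voice", "v1"), ("data", "d1"),
                 ("voice", "v2"), ("data", "d2"), ("data", "d3")]

# ...and with handsets down, the entire line goes to data.
assert interleave(["v1"], ["d1"], voice_active=False) == [("data", "d1")]
```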
The DSVD processing is handled as an additional protocol on top of the communications
standard (such as V.34) followed by the modems. The DSVD circuitry of the modem
simply packages the data that's sent using the conventional modem standard. As a result,
simply packages the data that's sent using the conventional modem standard. As a result,
DSVD communications are no more difficult for a modem or telephone line to handle
than ordinary data traffic. DSVD is no more sensitive to line noise or interference than is
a normal data stream. DSVD can and does take advantage of V.42 error control when the
modems at both ends of the connection support that standard.

Modem Hardware

A modem is a signal converter that mediates communication between a computer and the
telephone network. In function, a modern PC modem has five elements—its interface
circuitry for linking with the host PC; circuits to prepare data for transmission by adding
the proper start, stop, and parity bits; the modulator that makes the modem compatible
with the telephone line; the user interface that gives you command of the modem's
operation; and the package that gives the modem its physical embodiment.

PC Interface

For a modem to work with your PC, the modem needs a means to connect to your PC's
logic circuits. At one time, all modems used a standard or enhanced serial port to link to
your PC. However, because the standard serial port tops out at a data rate that's too slow
to handle today's fastest modems—the serial limit is 115,200 bits per second while some
modems accept data at double that rate—modem makers have developed parallel
interfaced modems.
All modems, whether installed outside your PC, in one of its expansion slots, or in a
PCMCIA slot, make use of a serial or parallel communications port. In the case of an
internal PC modem, the port is embedded in the circuitry of the modem, and the
expansion bus of the PC itself becomes the interface.
With an external modem, this need for an interface (and the use of a port) is obvious
because you fill the port's jack with the plug of a cable running off to your modem. With
an internal modem, the loss of a port is less obvious. You may not even detect it until something
doesn't work because both your modem and your mouse (or some other peripheral) try to
use the same port at the same time.
In the case of serial modems, this interface converts the parallel data of your PC into
serial form suitable for transmission down a telephone line. Modern modems operate so
fast that the choice of serial port circuitry (particularly the UART) becomes critical to
achieving the best possible performance.
The serial and parallel ports built into internal modems are just like any other serial ports.
You must assign them an input/output address and an interrupt. Most modems let you
make these selections either by hardware (with jumpers or switches) or during software
setup. Any modern modem or communications program will let you use any of the four
standard PC serial ports. Older hardware and software products may not, so it is
important to check each for flexibility and compatibility when acquiring a modem. In
other words, if your PC's two serial ports are already plugged with a mouse and a hand
scanner, then you will want both a modem and communications package that let you use
COM3 or COM4. You can still use a modem (or software package) that doesn't support
COM3 and COM4, but you will have to rearrange the other serial devices plugged into
your PC.
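The conventional resource assignments behind the COM3/COM4 advice can be tabulated as below. These values are the traditional PC defaults, not universal; individual machines and setup software could assign others, and the conflicts helper is purely illustrative.

```python
# Conventional (not universal) PC resource assignments for COM1-COM4.
COM_PORTS = {
    "COM1": {"io_base": 0x3F8, "irq": 4},
    "COM2": {"io_base": 0x2F8, "irq": 3},
    "COM3": {"io_base": 0x3E8, "irq": 4},  # traditionally shares IRQ4 with COM1
    "COM4": {"io_base": 0x2E8, "irq": 3},  # traditionally shares IRQ3 with COM2
}

def conflicts(port_a, port_b):
    """Two ports that share an interrupt can clash if, say, a modem and
    a mouse try to use them at the same time."""
    return COM_PORTS[port_a]["irq"] == COM_PORTS[port_b]["irq"]

# This is why moving a modem to COM3 may still collide with a COM1 mouse.
assert conflicts("COM1", "COM3")
assert not conflicts("COM1", "COM2")
```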
Parallel modems require special driver software. Most DOS-based communications
programs are incapable of operating a parallel modem.

Data Preparation

Modern modem communications require that the data you want to send be properly
prepared for transmission. This pre-transmission preparation helps your modem deliver
the highest possible data throughput while preventing errors from creeping in.
Most modem standards change the code used by the serial stream of data from the PC
interface into code that's more efficient, for example stripping out data framing
information for quicker synchronous transfers. The incoming code stream may also be
analyzed and compressed to strip out redundant information. The modem may also add
error detection or correction codes to the data stream.
At the receiving end, the modem must debrief the data stream and undo the compression
and coding of the transmitting modem. A microcontroller inside the modem performs
these functions based on the communications standard you choose to use. If you select a
modem by the communications standards it uses, you don't have to worry about the
details of what this microcontroller does.

Modulator

The heart of the modem is the circuitry that actually converts the digital information from
your PC into analog compatible form. Because this circuitry produces a modulated
signal, it is called a modulator.

User Interface

The fourth element in the modem is what you see and feel. Most modems give you some
way of monitoring what they do either audibly with a speaker or visually through a light
display. These features don't affect the speed of the modem or how it works but can make
one modem easier to use than another. Indicator lights are particularly helpful when you
want to troubleshoot communication problems.


Line Interface

Finally, the modem needs circuitry to connect with the telephone system. This line
interface circuitry (in telephone terminology, a data access arrangement) boosts the
strength of the modem's internal logic level signals to a level matching that of normal
telephone service. At the same time, the line interface circuitry protects your modem and
computer from dangerous anomalies on the telephone line (say, a nearby lightning
strike), and it protects the telephone company from odd things that may originate from
your computer and modem, say a pulse from your PC in its death throes.
From your perspective, the line interface of the modem is the telephone jack on its back
panel. Some modems have two jacks so that you can loop through a standard telephone.
By convention, the jack marked "Line" connects with your telephone line; the jack
marked "Phone" connects to your telephone.
Over the years, this basic five-part modem design has changed little. But the circuits
themselves, the signal processing techniques that they use, and the standards they follow
have all evolved to the point that modern modems can move data as fast as the theoretical
limits of telephone transmission lines allow.

Packaging

Internal modems plug into an expansion slot in your PC. The connector in the slot
provides all the electrical connections that are necessary to link to your PC. To make the
modem work, you only need plug in a telephone line. The internal modem draws power
from your PC, so it needs no power supply of its own. Nor does it need a case.
Consequently, the internal modem is usually the least expensive at a given speed.
Because internal modems plug into a computer's expansion bus, a given modem is
compatible only with computers using the bus for which it was designed. You cannot put
a PC internal modem in a Macintosh or workstation.
External modems are self-contained peripherals that accept signals from your PC through
a serial or parallel port and also plug into your telephone line. Most need an external
source of power, typically a small transformer that plugs into a wall outlet and—through
a short, thin cable—into the modem. At minimum, then, you need a tangle of three cables
to make the modem work. You have two incentives to put up with the cable snarl.
External modems can work with computers that use any architecture as long as the
computer has the right kind of port. In addition, external modems usually give you a full
array of indicators that can facilitate troubleshooting.
Pocket modems are compact external modems designed for use with notebook PCs. They
are usually designed to plug directly into a port connector on your PC, eliminating one
interface cable. Many eliminate the need for a power supply and cable by running from
battery power or drawing power from your PC or the telephone line.
PC Card modems plug into the PCMCIA slots that are typically found in notebooks.

They combine the cable-free simplicity of internal modems with the interchangeability of
external modems (the PCMCIA interface was designed to work with a variety of
computer architectures). The confines of the PCMCIA slot also force PC Card modems
to be even more compact than pocket modems. This miniaturization takes its toll in
higher prices, however, although the ability to quickly move one modem between your
desktop and portable PCs can compensate for the extra cost.
The confines of a PCMCIA slot preclude manufacturers from putting a full size modular
telephone jack on PC Card modems. Modem makers use one of two workarounds for this
problem. Most PC Card modems use short adapter cables with thin connectors on one
end to plug into the modem and a standard modular jack on the other. Other PC Card
modems use the X-Jack design, developed and patented by Megahertz Corporation. The
X-Jack pops out of the modem to provide a skeletal phone connector into which you can
plug a modular telephone cable. The X-Jack design is more convenient because you don't
have to carry a separate adapter with you when you travel. On the other hand, the X-Jack
makes the modem more vulnerable to carelessness. Yank on the phone cable, and it can
break the X-Jack and render the modem useless. Yanking on an adapter cable will more
likely pull the cable out or damage only the cable. On the other hand, the connectors in
the adapter cables are also prone to invisible damage that can lead to unreliable
connections.

Indicators

The principal functional difference between external (including pocket) and internal
(including PC Card) modems is that the former have indicator lights, which allow you to
monitor the operation of the modem and the progress of a given call. Internal modems,
being locked inside your PC, cannot offer such displays. Some software lets you simulate
the lights on your monitor, and Windows 95 will even put a tiny display of two of these
indicators on your task bar. These indicators can be useful in troubleshooting modem
communications, so many PC people prefer to have them available (hence they prefer
external modems).
The number and function of these indicators on external modems varies with the
particular product and the philosophy of the modem maker. Typically, you'll find from
four to eight indicators on the front panel of a modem, as shown in Figure 22.8.
Figure 22.8 Typical indicators on external modem front panel.

The most active and useful of these indicators are Send Data and Receive Data. These
lights flash whenever the modem sends or receives data from the telephone line. They let
you know what's going on during a communications session. For example, if the lights
keep flashing away but nothing appears on your monitor screen, you know you are
suffering a local problem, either in your PC, its software, or the hardware connection
with your modem. If the Send Data light flashes but the Receive Data light does not
flicker in response, you know that the distant host is not responding.
Carrier Detect indicates that your modem is linked to another modem across the
telephone connection. It allows you to rule out line trouble if your modem does not seem

to be getting a response. This light glows throughout the period your modem is
connected.
Off-Hook glows whenever your modem opens a connection on your telephone line. It
lights when your modem starts to make a connection and continues to glow through
dialing, negotiations, and the entire connection.
Terminal Ready glows when the modem senses that your PC is ready to communicate
with it. When this light is lit, it assures you that you've connected your modem to your
PC and that your PC's communications software has properly taken control of your serial
port.
Modem Ready glows whenever your modem is ready to work. It should be lit whenever
your modem is powered up and not in its test state.
High Speed indicates that the modem is operating at its fastest possible speed. Some
modems have separate indicators for each speed increment they support. Others forego
speed indicators entirely.
Auto Answer lights to let you know that your modem is in its answer mode. If your
telephone rings, your modem will answer the call (at least if it's connected to the line that
is ringing).
Table 22.2 summarizes the mnemonics commonly used for modem indicators and their
functions.

Table 22.2. Modem Indicator Abbreviations and Definitions

Mnemonic Spelled out Meaning


HS High Speed Modem operating at highest speed
AA Auto Answer Modem will answer phone
CD Carrier Detect Modem in contact with remote system
OH Off Hook Modem off hook, using the phone line
RD Receive Data Modem is receiving data
SD Send Data Modem is transmitting data
TR Terminal Ready PC is ready to communicate
MR Modem Ready Modem is ready to communicate
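The rules of thumb above are easy to capture in software, which is essentially what on-screen indicator simulators do. The sketch below is purely illustrative: the dictionary mirrors Table 22.2, while the diagnose function and its rules are hypothetical, not taken from any real monitoring program.

```python
# Hypothetical indicator decoder based on Table 22.2 and the
# troubleshooting rules of thumb described in the text.
INDICATORS = {
    "HS": "Modem operating at highest speed",
    "AA": "Modem will answer phone",
    "CD": "Modem in contact with remote system",
    "OH": "Modem off hook, using the phone line",
    "RD": "Modem is receiving data",
    "SD": "Modem is transmitting data",
    "TR": "PC is ready to communicate",
    "MR": "Modem is ready to communicate",
}

def diagnose(lit):
    """Apply the text's rules of thumb to the set of lit indicators."""
    lit = set(lit)
    if "SD" in lit and "RD" not in lit:
        return "Distant host is not responding"
    if "OH" in lit and "CD" not in lit:
        return "No carrier: suspect line trouble or a failed negotiation"
    return "No obvious fault"

print(diagnose({"MR", "TR", "OH", "SD"}))  # Distant host is not responding
```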

Analog Modem Standards

Neither men nor modems are islands. Above all, they must communicate and share their
ideas with others. One modem would do the world no good. It would just send data out
into the vast analog unknown, never to be seen (or heard) again.

But having two modems isn't automatically enough. Like people, modems must speak the
same language for the utterances of one to be understood by the other. Modulation is part
of the modem language. In addition, modems must be able to understand the error
correction features and data compression routines used by one another. Unlike most
human beings, who speak any of a million languages and dialects, each somewhat
ill-defined, modems are much more precise in the languages they use. They have their
own equivalent of the French Academy: standards organizations.
In the United States, the first standards were set long ago by the most powerful force in
the telecommunications industry, which was the telephone company. More specifically,
the American Telephone and Telegraph Company, the Bell System, which promoted
various Bell standards, the most famous being Bell 103 and Bell 212A. After the Bell
System was split into AT&T and the seven regional operating companies (RBOCs, also
known as the Baby Bells), other long distance carriers broke into the telephone
monopoly. In addition, other nations have become interested in telecommunications.
As a result of these developments, the onus and capability to set standards moved to an
international standards organization that's part of the United Nations, the International
Telecommunications Union (ITU) Telecommunications Standards Sector, which was
formerly the Comite Consultatif International Telegraphique et Telephonique (in
English, that's International Telegraph and Telephone Consultative Committee). The
initials of the French name, CCITT, grace nearly all of the high speed standards used by modems
today, such as V.22bis, V.32, V.32bis, V.42, and V.42bis.
Along the way, a modem and software maker, Microcom, developed a series of modem
standards prefixed with the letters MNP, such as MNP4 and MNP5. The letters stand for
Microcom Networking Protocol. Some modems boast having MNP modes, but these are
falling from favor as the ITU (CCITT) standards take over.
Standards are important when buying a modem because they are your best assurance that
a given modem can successfully connect with any other modem in the world. In addition,
the standards you choose determine how fast your modem can transfer data and how
reliably it will work. The kind of communications you want to carry out determine what
kind of modem you need. If you're just going to send files electronically between offices,
you can buy two non-standard modems and get more speed for your investment. But if
you want to communicate with the rest of the world, you need a modem that meets the
international standards. The following sections discuss the most popular standards for
modems that are connected to PCs.

Bell Standards

Bell 103 comes first in any list of modem standards because it was the first widely
adopted standard, and it remains the standard of last resort, the one that works when all
else fails. It allows data transmissions at a very low speed, 300 bits per second. Bell 103
uses very simple FSK modulation; thus it is the only standard in which the baud rate (the
rate at which the signal changes) is equal to the data rate.
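Because Bell 103 sends one of two pure tones for each bit, a modulator is simple to sketch. The tone pair below (1270 Hz mark, 1070 Hz space) is the commonly documented assignment for the originating side; the sample rate and the code itself are illustrative only, and real modems keep the phase continuous across bit boundaries, which this sketch does not guarantee.

```python
import math

# Illustrative FSK modulator in the style of Bell 103: each bit becomes
# one baud of a pure tone, so the baud rate equals the bit rate.
MARK, SPACE = 1270.0, 1070.0    # originate-side tone pair, in Hz
BIT_RATE = 300                  # bits per second = baud for Bell 103
SAMPLE_RATE = 8000              # illustrative sampling rate

def fsk_modulate(bits):
    samples = []
    samples_per_bit = SAMPLE_RATE // BIT_RATE
    for i, bit in enumerate(bits):
        freq = MARK if bit else SPACE
        start = i * samples_per_bit
        for n in range(start, start + samples_per_bit):
            samples.append(math.sin(2 * math.pi * freq * n / SAMPLE_RATE))
    return samples

wave = fsk_modulate([1, 0, 1, 1])
print(len(wave))   # 104: four bits at 26 samples per bit
```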

Bell 212A is the next logical step in a standards discussion because it was the next
modem standard to find wide application in the United States. It achieves a data transfer
rate of 1200 bits per second by adding quadrature modulation to a frequency modulated
600 baud signal. Consequently, a Bell 212A modem operates at 600 baud and transfers
information at 1200 bits per second. Although Bell 212A was, at one time, the most
widely used communication standard in America, many foreign countries prohibited the
use of Bell 212A, preferring instead the similar international standard, V.22.

MNP Standards

Microcom Networking Protocol is an entire hierarchy of standards, ranging from MNP
Class 1, an out of date error correction protocol, to MNP Class 10, Adverse Channel
Enhancements, which is designed to eke the most data transfer performance from poor
connections. MNP does not stand alone but works with modems that may conform to
other standards. The MNP standards specify technologies rather than speeds. MNP
Classes 2 through 4 deal with error control and are in the public domain. Classes 5
through 10 are licensed by Microcom and deal with a number of modem operating
parameters.
MNP Class 1 uses an asynchronous byte-oriented half duplex method of exchanging
data designed to make a minimum demand on the processor in the PC managing the
modem. It was originally designed to enable error free communications with first
generation PCs that had little speed and less storage. Using MNP Class 1 steals about 30
percent of the throughput of a modem, so a 2400-bits-per-second modem using MNP
Class 1 achieves an actual throughput of only about 1680 bps.
MNP Class 2 takes advantage of full duplex data exchange. As with MNP Class 1, it is
designed for asynchronous operation at the byte level. MNP Class 2 achieves somewhat
higher efficiency and takes only about a 16 percent toll on throughput.
MNP Class 3 improves on MNP2 by working synchronously instead of asynchronously.
Consequently, no start and stop bits are required for each byte, trimming the data transfer
overhead by 25 percent or more. Although MNP3 modems exchange data between
themselves synchronously, they connect to PCs using asynchronous data links, which
means they plug right into RS-232 serial ports.
MNP Class 4 is basically an error correcting protocol but also yields a bit of data
compression. It incorporates two innovations. Adaptive Packet Assembly allows the
modem to package data in blocks or packets that are sent and error checked as a unit. The
protocol is adaptive because it varies the size of each packet according to the quality of
the connection. Data Phase Optimization eliminates repetitive control bits from the data
traveling across the connection to streamline transmissions. Together these techniques
can increase the throughput of a modem by 120 percent at a given bit rate. In other
words, using MNP4, a 1200-bits-per-second modem could achieve a
1450-bits-per-second throughput. Many modems have MNP4 capabilities.
MNP Class 5 is purely a data compression protocol that squeezes some kinds of data into
a form that takes less time to transmit. MNP5 can compress some data by a factor up to

two, effectively doubling the speed of data transmissions. On some forms of data, such as
files that have been already compressed, however, MNP5 may actually increase the time
required for transmission.
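MNP5's real algorithm combines run-length and adaptive frequency encoding, but even a toy run-length coder illustrates the tradeoff described above: repetitive data shrinks, while data with no runs (such as an already compressed file) grows, because each literal byte still costs a count byte.

```python
# Toy run-length coder, only to illustrate the MNP5 tradeoff described
# in the text.  (MNP5's actual algorithm is more sophisticated.)
def rle_encode(data: bytes) -> bytes:
    out = bytearray()
    i = 0
    while i < len(data):
        run = 1
        while i + run < len(data) and data[i + run] == data[i] and run < 255:
            run += 1
        out += bytes([run, data[i]])   # (count, value) pair per run
        i += run
    return bytes(out)

repetitive = b"A" * 100            # compresses 100 bytes down to 2
random_like = bytes(range(100))    # no runs: expands to 200 bytes
print(len(rle_encode(repetitive)), len(rle_encode(random_like)))  # 2 200
```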
MNP Class 6 is designed to help modems get the most out of telephone connections
independent of data compression. Using a technique called Universal Link Negotiation,
modems can start communicating at a low speed, and then, after evaluating the
capabilities of the telephone line and each modem, switch to a higher speed. MNP6 also
includes Statistical Duplexing, which allows a half duplex modem to simulate full duplex
operation.
MNP Class 7 is a more efficient data compression algorithm (Huffman encoding) than
MNP5, which permits increases in data throughput by factors as high as three with some
data.
MNP Class 9 (there is no MNP Class 8) is designed to reduce the transmission overhead
required by certain common modem operations. The acknowledgment of each data
packet is streamlined by combining the acknowledgment with the next data packet
instead of sending a separate confirmation byte. In addition, MNP9 minimizes the
amount of information that must be retransmitted when an error is detected by indicating
where the error occurred. Although some other error correction schemes require all
information transmitted after an error to be resent, an MNP9 modem needs only the data
that was in error to be sent again.
MNP Class 10 is a set of Adverse Channel Enhancements that help modems work better
when faced with poor telephone connections. Modems with MNP10 will make multiple
attempts to set up a transmission link, adjust the size of data packets they transmit
according to what works best over the connection, and adjust the speed at which they
operate to the highest rate that can be reliably maintained. One use envisioned for this
standard is cellular modem communications (the car phone).

CCITT/ITU Standards

ITU standards are those promulgated by the International Telecommunications Union, part
of the United Nations. These standards are used throughout the world.
V.22 is the CCITT equivalent of the Bell 212A standard. It delivers a transfer rate of
1200 bits per second at 600 baud. It actually uses the same form of modulation as Bell
212A, but it is not compatible with the Bell standard because it uses a different protocol
to set up the connection. In other words, although Bell 212A and V.22 modems speak the
same language, they are unwilling to start a conversation with one another. Some
modems support both standards and allow you to switch between them.
V.22bis was the first true world standard, adopted into general use in both the United
States and Europe. It allows a transfer rate of 2400 bits per second at 600 baud using a
technique called trellis modulation that mixes two simple kinds of modulation,
quadrature and amplitude modulation. Each baud has 16 states, enough to code any
pattern of four bits. Each state is distinguished both by its phase relationship to the

unaltered carrier and its amplitude (or strength) in relation to the carrier. There are four
distinct phases and four distinct amplitudes under V.22bis, which, when multiplied, yield
the 16 available states.
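The 16-state arithmetic is easy to verify. The amplitude values below are arbitrary placeholders, not the actual V.22bis constellation levels; the point is only that four phases times four amplitudes yield 16 states, hence four bits per baud and 2400 bits per second at 600 baud.

```python
import cmath, math

# The 4-phase by 4-amplitude state space described in the text.
phases = [k * math.pi / 2 for k in range(4)]      # four distinct phases
amplitudes = [1.0, 2.0, 3.0, 4.0]                 # illustrative levels
states = [a * cmath.exp(1j * p) for a in amplitudes for p in phases]

bits_per_baud = int(math.log2(len(states)))
print(len(states), bits_per_baud, 600 * bits_per_baud)  # 16 4 2400
```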
V.32 is an international high speed standard that permits data transfer rates of 4800 and
9600 bits per second. At its lower speed, it uses quadrature amplitude modulation similar
to Bell 212A but at the higher baud rate of 2400 baud. At 9600 bits per second, it uses
trellis modulation similar to V.22bis but at 2400 baud and with a greater range of phases
and amplitudes.
Note that, while most Group III fax machines and modems operate at 9600 bits per
second, a fax modem with 9600 bps capability is not necessarily compatible with the
V.32 standard. Don't expect a fax modem to communicate with V.32 products.
V.32bis extends the V.32 standard to 14,400 bits per second while allowing fallback to
intermediary speeds of 7200 and 12,000 bits per second in addition to the 4800- and
9600-bits-per-second speeds of V.32. (Note that all these speeds are multiples of a basic
2400 baud rate.) The additional operating speeds that V.32bis has and V.32 does not are
generated by using different ranges of phases and amplitudes in the modulation.
At 14,400 bits per second, there are 128 potentially different phase/amplitude states for
each baud under V.32bis, enough to encode seven bits in each baud (six data bits plus one
redundant bit added by the trellis coder, yielding 6 x 2400 = 14,400). Other data rates
(including V.32) use similar relationships for their data coding. Because there are so
many phase and amplitude differences squeezed together, a small change in the
characteristics of a telephone line might mimic such a change and cause transmission
errors. Consequently, some way of detecting and eliminating such errors becomes
increasingly important as transmission speed goes up.
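The constellation arithmetic behind these speeds can be sketched, under the assumption (consistent with the quoted rates) that one of the bits in each baud is the redundant bit contributed by the trellis coder rather than a data bit:

```python
import math

# Data rate from constellation size at the fixed 2400-baud symbol rate,
# assuming one redundant trellis-coding bit per symbol.
def data_rate(points, baud=2400, trellis_bits=1):
    return baud * (int(math.log2(points)) - trellis_bits)

print(data_rate(128))   # V.32bis top speed: 14400
print(data_rate(32))    # 9600, matching V.32's trellis mode
```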
V.34 is the official designation of the high speed standard once known as V.fast. Under
the V.34 standard, as adopted in June 1994, modems can operate at data rates as high as
28,800 bits per second without compression over ordinary dial-up telephone lines. More
recently, the standard was amended to permit operation as fast as 33,600 bps. The V.34
standard also allows lower transmission rates at 24,000 and 19,200 bits per second and
includes backward compatibility with V.32 and V.32bis.
The V.34 standard calls for modems to adapt to telephone line conditions to eke out the
greatest usable amount of bandwidth. Where V.32 modems operate at a fixed bandwidth
of 2400 Hz, with a perfect connection V.fast modems will be able to push their operating
bandwidth to 3429 Hz. V.34 modems will use line-probing techniques to try each
connection, and then apply advanced equalization to the line. To squeeze in as much
signal as possible, V.34 modems use multidimensional trellis coding and signal shaping.
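One hedged way to see why these figures press the medium's limits is Shannon's channel capacity formula, C = B log2(1 + S/N). The 37 dB signal-to-noise ratio below is an assumed value for a very clean local loop, not a measured figure; the comparison only shows how the wider V.34 bandwidth raises the theoretical ceiling.

```python
import math

# Shannon channel capacity for an analog channel: C = B * log2(1 + S/N).
def capacity(bandwidth_hz, snr_db):
    return bandwidth_hz * math.log2(1 + 10 ** (snr_db / 10))

print(round(capacity(2400, 37)))   # V.32-style 2400 Hz bandwidth: ~29.5 kbps
print(round(capacity(3429, 37)))   # V.34 best-case 3429 Hz: ~42 kbps ceiling
```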
V.34 was immediately preceded by two non-standards, V.32 terbo and V.FC (or V.Fast
Class), that were supported by various modem industry groups and chip manufacturers,
but which were not formally sanctioned by the ITU. Because technology provided the
power to allow higher speeds before the ITU could reach a consensus on a standard,
these quasi-standards were widely available until V.34 products became available.
V.32 terbo appeared in June 1993, when AT&T Microelectronics introduced modem
chips that operated at 19,200 bits per second using an extension of the V.32 modulation
scheme. The "terbo" in the name is a poor play on words. "Bis" in modem standards
stands for "second;" similarly "ter" means "third" (as in "tertiary"). "Terbo" means

nothing but conjures up the sound of respectability ("ter" for the third iteration of V.32)
and speed ("turbo" as in a high performance automobile with a turbocharger). The
technology was originally designed for picture phone technologies. V.32 terbo is
backwardly compatible with V.32bis standards and will connect with older modems at
the highest speed at which both can operate. With compression such as MNP5 or V.42
bis, V.32 terbo modems can operate at effective data rates as high as 115,200 bits per
second.
V.FC modems represent the interpretation of some modem makers and a chip maker
(Rockwell International) of a preliminary version of the V.34 standard. These V.FC
modems deliver true 28,800 bits per second speed using V.34 technology, but they don't
use the same handshaking to set up communications as V.34. (The V.FC products
pre-date the final V.34 agreements.)
V.34bis has not been officially recognized as a standard. Many of the latest modems that
operate at 33.6Kbps refer to themselves with this designation in anticipation of its
acceptance.
V.42 is a world wide error correction standard that is designed to help make V.32,
V.32bis, and other modem communications more reliable. V.42 incorporates MNP4 as an
"alternative" protocol. That is, V.42 modems can communicate with MNP4 modems but
a connection between the two won't use the more sophisticated V.42 error correction
protocol. At the beginning of each call, as the connection is being negotiated between
modems, a V.42 modem will determine whether MNP4 or full V.42 error correction can
be used by the other modem. V.42 is preferred, and MNP4 is the second choice. In other
words, a V.42 modem will first try to set up a V.42 session; failing that, it will try MNP4; and
failing that, it will set up a communications session without error correction.
V.42bis is a data compression protocol endorsed by the CCITT. Different from and
incompatible with MNP5 and MNP7, V.42bis is also more efficient. On some forms of
data, it can yield compression factors up to four, potentially quadrupling the speed of
modem transmissions. (With PCs, the effective maximum communication rate may be
slower because of limitations on serial ports, typically 38,400 bits per second.) Note that
a V.42bis-only modem cannot communicate with a MNP5-only modem. Unlike MNP5, a
V.42 modem never increases the transmission time of "incompressible" data. Worst case
operation is the same speed as would be achieved without compression.
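The serial port ceiling mentioned above means compression gains are capped: the effective rate is the lesser of the compressed line rate and the port rate. A sketch, using the 38,400 bps port figure quoted in the text:

```python
# Effective throughput of a compressing modem, capped by the serial port.
def effective_rate(line_bps, compression_factor, port_bps=38_400):
    return min(line_bps * compression_factor, port_bps)

print(effective_rate(14_400, 4))   # 57,600 possible, but port-limited to 38400
print(effective_rate(2_400, 4))    # 9600: compression-limited, port is no constraint
```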

Not-Yet Standards

Getting any standard approved by one of the governing bodies requires time, and the
modem industry is too competitive to wait. Consequently, the industry often has released
products not meeting standards to get ahead in marketing, for example V.Fast and
V.32ter modems. Releasing products before the formal adoption of standards also helps
the manufacturer guide the governing bodies in the direction of the standards favored by
the manufacturers.
Most modern modems use digital signal processors that can be reprogrammed to
accommodate changes in standards. These can be upgraded readily should the adopted

standard not match the design under which the modem was initially sold. One
manufacturer even went so far as to label its products "V.anything," alleging they could
be updated through software to match any standard.
In any case, two groups have created modems operating beyond the V.34 standard that
have not yet received official sanction. The non-standards they promote go by the names
K56flex and x2.
K56flex is a proprietary technology that uses quasi-digital signals—analog voltage levels
to represent digital values—that take advantage of the analog to digital converters in
telephone central offices to achieve a 56,000-bit-per-second data rate. Independently
developed and initially marketed as two different and incompatible systems by Rockwell
and Lucent Technologies, the two companies agreed in November 1996 to combine their
work into a single standard.
x2 is a proprietary technology developed by U.S. Robotics that uses the same
analog-signal-level digital encoding as K56flex. In fact, the two systems differ only in
the handshaking used to set up the connection. Unfortunately, the handshake is the vital
part of getting the systems to work, so K56flex and x2 are not compatible—they cannot
talk to each other at their top speeds. Because both systems automatically negotiate the
higher common speed, however, the two systems will communicate at the 33.6K V.34 rate
if the telephone connection can handle the signals.
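The arithmetic behind the 56K claim is simple, given that the digital telephone network carries 8000 PCM samples per second of 8 bits each. Treating only 7 of those 8 bits as reliably usable (an assumption reflecting line and signaling constraints, not a figure from the text) yields the advertised rate:

```python
# The digital phone network's PCM channel: 8000 samples/sec, 8 bits each.
SAMPLES_PER_SEC = 8000
print(SAMPLES_PER_SEC * 8)   # raw network channel: 64000 bps
print(SAMPLES_PER_SEC * 7)   # usable downstream rate: 56000 bps
```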

Other Kinds of "Modems"

Beside the modems you normally use to link your PC to a standard telephone line, a
number of other devices are often called modems. Some of these, such as short haul
modems, are not modems at all. Others, such as leased line modems, act as modems but
are used in special applications.

Short Haul Modems

Some devices that are called "modems" don't follow this five-part design and aren't really
modems at all. Inexpensive short haul modems advertised for stretching the link between
your PC and serial printer actually involve minimal circuitry, typically nothing more than
digital buffers. Definitely not enough to modulate and demodulate signals. There's so
little circuitry, in fact, that it is often entirely hidden inside the shell of a simple cable
connector.
All that the short haul modem does is convert the digital output of a computer to another
digital form that can better withstand the rigors of a thousand feet of wire. Don't confuse
short haul modems with the real thing. A short haul modem won't communicate over a
dial-up telephone system and isn't even legal to plug into your telephone wiring.

Leased Line Modems

Another distinction between modems is between dial-up modems and leased line
modems. The dial-up modem is what you think of when you hear the word "modem."
The dial-up modem connects with a standard telephone line just as an ordinary telephone
would. The dial-up modem links to the telephone system and can dial a line to make a
connection just like a telephone. You tie up your telephone line, and pay for the service,
only when the dial-up modem is connected (or making a connection) to a distant modem.
When you have no more data to send or receive, the dial-up modem hangs up so you
don't get charged for telephone time you don't need.
In contrast, the leased line modem is always connected to a dedicated telephone line
leased from the telephone company (hence the name). The leased line modem stays in
constant contact, and you pay for a continuous telephone connection.
The leased line modem has its own advantages. You never have to worry about a busy
signal or a connection not getting through (although you can be disconnected because of
line trouble). Moreover, the telephone company leases lines of various quality levels,
some that are much better than ordinary dial-up circuits. Better phone lines mean greater
data capacity, so leased line modems often are faster than the dial-up variety. Finally, the
constant connection means that you're always in touch. You get instant response. Remote
terminals, such as those on computerized airline reservation systems, typically use leased
line modems for this reason.

Digital Services

The V.34 standard may be the modem's last stand. Eventually today's modem standards
will be replaced by the next step in PC to PC communications, all digital dial-up
telecommunications. After all, nearly all traffic between telephone exchanges throughout
the world is digital. The only archaic analog step is the stretch between the exchange and
your home or office (called POTS by those in the know, which stands for Plain Old
Telephone Service). Although the last 10 years have seen improvements in the speed of
moving digital data through this ancient link (dial-up modems have moved from transfer
rates of the paltry 300 bits per second of Bell 103A to an effective 115,200 bps under V.34), it has
no future.

Telephone Services

One key player in the supply of digital telecommunications services is quite familiar, the
telephone company. Beyond traditional Plain Old Telephone Services (POTS, also used
as an acronym for Plain Old Telephone Sets), telephone companies have developed a
number of all digital communication services. Some of these have been around for a

while, aimed at business users with heavy data needs. Several new all digital services are
aimed directly at you as an individual consumer.
The range of digital services supplied by telephone companies is wide and spans a range
of data rates. Table 22.3 lists many of these and their maximum data rates.

Table 22.3. Maximum Data Rates of Digital Telecommunications Standards

Standard Connection type Downstream rate Upstream rate


V.34 Analog 33.6 Kbps 33.6 Kbps
SDS 56 Digital 56 Kbps 56 Kbps
ISDN Digital 128 Kbps 128 Kbps
SDSL Digital 1.544 Mbps 1.544 Mbps
T1 Digital 1.544 Mbps 1.544 Mbps
E1 Digital 2.048 Mbps 2.048 Mbps
ADSL Digital 9 Mbps 640 Kbps
VDSL Digital 52 Mbps 2 Mbps
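To put these rates in perspective, here is a rough calculation of the time to move a one-megabyte file at a few of the listed rates, ignoring protocol overhead and compression:

```python
# Transfer time for a one-megabyte file at several rates from Table 22.3.
RATES = {"V.34": 33_600, "ISDN": 128_000, "T1": 1_544_000}
FILE_BITS = 1_000_000 * 8

for name, bps in RATES.items():
    print(f"{name}: {FILE_BITS / bps:.0f} seconds")
```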

Certainly you will still talk on the telephone for ages to come (if your other family
members give you a chance, of course) but the nature of the connection may finally
change. Eventually, digital technology will take over your local telephone connection. In
fact, in many parts of America and the rest of the world, you can already order a special
digital line from your local telephone company and access all-digital switched systems.
You get the equivalent of a telephone line, one that allows you to choose any
conversation mate who's connected to the telephone network (with the capability of
handling your digital data, of course) as easily as dialing a telephone.
At least three such services are currently, or will soon be, available in many locations.
All are known by their initials: SDS 56, ISDN, and SMDS. Eventually you will probably
plug your PC into one of them or one of their successors.

T1

The basic high speed service provided by the telephone company is called T1, and its
roots go back to the first days of digital telephony in the early 1960s. The first systems
developed by Bell Labs selected the now familiar 8 KHz rate to sample analog signals
and translate them into 8-bit digital values. The result was a 64 Kbits/sec digital data
stream. To multiplex these digital signals on a single connection, Bell's engineers
combined 24 of these voice channels to create a data frame 193 bits long, the extra bit
length to define the beginning of the frame. The result was a data stream with a bit rate of
1.544 Mbits/sec. Bell engineers called the resulting 24-line structure DS1. AT&T used
this basic structure throughout its system to multiply the voice capacity of its telephone

system, primarily trunk lines between exchanges.
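The DS1 arithmetic described above can be checked directly: 24 channels of 8-bit samples plus one framing bit make the 193-bit frame, and repeating that frame at the 8 KHz sampling rate gives the 1.544 Mbit/sec line rate.

```python
# DS1 frame arithmetic: 24 x 8-bit channels plus 1 framing bit, 8000 frames/sec.
CHANNELS, BITS_PER_SAMPLE, FRAMING_BITS = 24, 8, 1
FRAME_BITS = CHANNELS * BITS_PER_SAMPLE + FRAMING_BITS
FRAMES_PER_SEC = 8000

print(FRAME_BITS)                        # 193 bits per frame
print(FRAME_BITS * FRAMES_PER_SEC)       # 1544000 bits/sec = 1.544 Mbps
print(BITS_PER_SAMPLE * FRAMES_PER_SEC)  # 64000 bits/sec per voice channel
```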


As telephone demand and private business exchanges (PBXs) became popular with larger
businesses, the telephone company began to offer T1 service directly to businesses. As
digital applications grew, T1 became the standard digital business interconnect. Many
web servers tie into the network with a T1 line.
A key feature of the DS1 format was that it was compatible with standard copper
telephone lines, although requiring repeaters (booster amplifiers) about every mile. The
signal itself is quite unlike normal analog telephone connections, however, and that
creates a problem. Its signal transmission method is called AMI (Alternate Mark
Inversion), a bipolar code. AMI represents a zero (or space) by the absence of
a voltage; a one (or mark) is represented by a positive or negative pulse, depending on
whether the preceding one was negative or positive. That is, marks are inverted on an
alternating basis. The formatting code for T1 transmissions over twisted pair copper
cable generates a signal with a bandwidth about equivalent to its data rate, 1.5 MHz. This
high speed signal creates a great deal of interference, so much that two T1 lines cannot
safely co-habit in the 50-pair cables used to route normal telephone services to homes.
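The AMI coding rule just described can be sketched in a few lines; the function and its name are illustrative only, not part of any T1 equipment:

```python
def ami_encode(bits):
    """Alternate Mark Inversion: a zero (space) is no voltage; each
    one (mark) is a pulse whose polarity alternates with the last."""
    level = 1          # polarity the next mark will take
    pulses = []
    for bit in bits:
        if bit == 0:
            pulses.append(0)       # space: absence of voltage
        else:
            pulses.append(level)   # mark: +1 or -1 pulse
            level = -level         # invert polarity for the next mark
    return pulses
```

For example, the bit sequence 1, 0, 1, 1 comes out as pulses of +1, 0, -1, +1: each successive mark flips polarity, which is what keeps the line free of any DC component.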
Outside the United States, the equivalent of T1 service is called E1. Although based on
the same technology as T1, E1 combines 30 voice channels plus two more 64 Kbit/sec
slots for framing and signaling to create a 2.048 Mbit/sec digital channel.

HDSL

The primary problem with T1 is the interference-causing modulation system it uses, one
based on 1960s technology. Using more modern modulation techniques, the
telecommunications industry developed the High-bit-rate Digital Subscriber Line
(HDSL), a service with the same data rate as T1 or E1 that requires a much narrower
bandwidth, from 80 to 240 KHz. One basic trick to the
bandwidth reduction technique is splitting the signal across multiple phone lines. For T1
data rate, the service uses two lines; for E1, three. Besides reducing interference, the
lower per-line data rate allows longer links without repeaters, as much as 12,000 feet.
Unfortunately, the "subscriber" in the name of the standard was not meant to correspond
to you as an individual. It fits into the phone company scheme of things in the same place
as T1—linking businesses and telephone company facilities.

SDSL

Two lines (or three) to carry one service is hardly the epitome of efficiency. By altering
the modulation method, however, a single line can carry the same data as the two (or
three) of HDSL. The commercial version of this service is termed Single-line Digital
Subscriber Line or SDSL. It has an effective range of about 10,000 feet from the central
office without repeaters.

The chief advantage of SDSL over other, higher performance services is that it is
symmetrical. The data rate is the same in both directions. Web servers and wide-area
network connections often require symmetrical operation, making SDSL the choice for
them.

ADSL

Most of the advanced services aimed at consumer Internet use are asymmetrical. They
have a higher downstream data rate from the server compared to their upstream rates,
from you back to the server. Telecommunications companies are not ignorant of this
situation and shortly after they developed HDSL they also created a higher speed but
asymmetrical alternative, which they termed, logically enough, Asymmetrical Digital
Subscriber Line or ADSL.
ADSL can move downstream data at speeds up to 9 Mbits/sec. Upstream, however, the
maximum rate is about 640 kbits/sec. ADSL doesn't operate at a single rate as does T1 or
SDSL. Its speed is limited by distance, longer distances imposing greater constraints. It
can push data downstream at the T1 rate for, at most, about 18,000 feet from the central
office. At half that distance, its downstream speed potential approaches 8.5 Mbits/sec.
Table 22.4 summarizes the downstream speeds and maximum distances possible with
ADSL technology.

Table 22.4. ADSL Downstream Data Rates

Equivalent service Downstream data rate Distance

T1 1.544 Mbits/sec 18,000 feet
E1 2.048 Mbits/sec 16,000 feet
DS2 6.312 Mbits/sec 12,000 feet
ADSL 8.448 Mbits/sec 9,000 feet

The modulation system used by ADSL operates at frequencies above the baseband used
by ordinary telephone service or ISDN. Consequently, an ADSL line can carry high
speed digital signals and ordinary telephone signals simultaneously.
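The rate-versus-distance tradeoff in Table 22.4 can be expressed as a simple lookup. This sketch only encodes the table's own figures; the names are mine:

```python
# (maximum loop length in feet, downstream bits/sec) from Table 22.4
ADSL_REACH = [
    (9_000, 8_448_000),
    (12_000, 6_312_000),
    (16_000, 2_048_000),
    (18_000, 1_544_000),
]

def max_downstream(distance_feet):
    """Best downstream rate at a given loop length, or None when the
    subscriber sits beyond ADSL's reach."""
    for limit, rate in ADSL_REACH:
        if distance_feet <= limit:
            return rate
    return None
```

A subscriber 10,000 feet from the central office, for instance, falls into the 12,000-foot bracket and can expect at most the 6.312 Mbits/sec rate.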

VDSL

The next step above ADSL is the Very high data-rate Digital Subscriber Line or VDSL.
A proposal only, the service is designed to initially operate asymmetrically at speeds
higher than ADSL but for shorter distances, potentially as high as 51.84 Mbits/sec
downstream for distances shorter than about 1,000 feet, falling to one-quarter that at
about four times the distance (12.96 Mbits/sec at 4,500 feet). Proposed upstream rates
range from 1.6 Mbits/sec to 2.3 Mbits/sec. In the long term, developers hope to make the
service symmetrical. VDSL is designed to work exclusively in an ATM network
architecture. As with ADSL, VDSL can share a pair of wires with an ordinary telephone
connection or even ISDN service.

SDS 56

Already available from some telephone operating companies in some exchanges,
Switched Data Services 56 (sometimes shortened to Switched-56) gives you a single
digital channel capable of a 56-kilobit-per-second data rate. Its signals are carried
through conventional copper twisted pair wiring (the same old stuff that carries your
telephone conversations). Telephone companies like PacBell view it as an interim service
to bridge the gap between ISDN service areas. To link to your PC, you need special
head-end equipment—the equivalent of a modem—for your PC. Of course, because the
signal stays digital, there's no need for modulation or demodulation. The signal stays
error free through its entire trip.
In some locales, SDS 56 is no more expensive than an ordinary business telephone line.
Installation costs, however, can be substantially higher (PacBell, for example, charges
$500 for installation) and some telephone companies may add extra monthly
maintenance charges in addition to the normal dial-up costs.
To take advantage of SDS 56, you need to communicate with someone who already has
SDS 56 services. Currently, SDS 56 is not an internationally agreed upon standard, so
your access to the service and its subscribers will be less universal than a
telephone-based modem link. The chief advantages of linking through SDS 56 are higher
speed, greater reliability, and data integrity.

ISDN

The initials stand for Integrated Services Digital Network, although waggish types will
tell you it means "I Still Don't Know" or "It Still Does Nothing." The latter seems most
apt because ISDN has been discussed for years with little to show for all the verbiage.
But ISDN is an internationally supported standard, one that promises eventually to
replace your standard analog telephone connection.

History

The first real movement toward getting ISDN rolling occurred in November 1992 when
AT&T, MCI, and Sprint embraced a standard they called ISDN-1. Under the new
standard, ISDN now has a consistent interface to connect end user equipment, local
telephone companies, and trunk carriers. This common interface now makes coast to
coast ISDN transmission possible.

Implementations

Today, two versions of ISDN are generally available. The simplest is the Basic Rate
Interface (BRI), which takes advantage of the copper twisted pair wiring that's already in
place linking homes and offices to telephone exchanges. Instead of a single analog signal,
an ISDN line carries three digital channels: two B (for Bearer) channels that can carry
any kind of data (digitally encoded voice, fax, text, and numbers) at 64,000 bps, and a D
(or Delta) channel, operating at 16,000 bps, that can carry control signals and serve as a
third data channel. The three channels can be independently routed to different
destinations through the ISDN system.
A single BRI wire enables you to transfer uncompressed data bi-directionally at the
64,000 bps rate, exactly like a duplex modem today but with higher speed and error free
transmission thanks to its all digital nature. Even during such high speed dual direction
connections, the D channel would still be available for other functions.
The more elaborate form of ISDN service is called the Primary Rate Interface. This
service delivers 23 B channels (each operating at 64,000 bits per second) and one D
channel (at 64,000 bits per second). As with normal telephone service, use of ISDN is
billed by time in use, not the amount of data transmitted or received.
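The Basic Rate Interface arithmetic works out as follows; this is nothing more than a restatement of the channel figures above:

```python
B_CHANNEL = 64_000   # each Bearer channel, bits/sec
D_CHANNEL = 16_000   # the BRI Delta channel, bits/sec

# Basic Rate Interface: two B channels plus one D channel ("2B+D")
bri_aggregate = 2 * B_CHANNEL + D_CHANNEL   # 144,000 bits/sec total
```

The 144,000 bits/sec aggregate is what a single BRI line carries over the same twisted pair that handles one analog voice call today.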
The strength of BRI service is that it makes do with today's ordinary twisted pair
telephone wiring. Neither you nor the various telephone companies need to invest the
billions of dollars required to rewire the nation for digital service. Instead, only the
central office switches that route calls between telephones (which today are mostly
plug-in printed circuit boards) need to be upgraded.
Of course, this quick fix sounds easier than it is. The principal barriers aren't
technological, however, but economic. The changeover is costly and, because telephone
switching equipment has long depreciation periods, does not always make business sense
for the telephone company.
Once you have access to ISDN, you won't be able to plug your PC directly into your
telephone line, however. You will still need a device to interface your PC to the ISDN
line. You will need to match your equipment to the line and prevent one from damaging
the other using a device called an ISDN Adapter. Such adapters may have analog ports
that will allow you to connect your existing telephones to the ISDN network. ISDN
Adapters are already becoming available for use in the limited areas that have ISDN.
ISDN uses different conductors than does an ordinary telephone line in a standard jack.
Table 22.5 lists the signal assignments of ISDN in an RJ-45 (8 conductor) jack.

Table 22.5. ISDN 8P8C Jack Wiring

Pin Mnemonic Signal Name Polarity Wire color


1 PS3 Power Source/Sink 3 Positive Blue
2 PS3 Power Source/Sink 3 Negative Orange
3 T/R Transmit/Receive Positive Black
4 R/T Receive/Transmit Positive Red
5 R/T Receive/Transmit Negative Green
6 T/R Transmit/Receive Negative Yellow
7 PS2 Power Sink/Source 2 Negative Brown
8 PS2 Power Sink/Source 2 Positive White

ISDN Modems

If you have ISDN service, you need a device to link your PC to the telephone line. Some
people call this device an "ISDN modem." Because both your PC and the ISDN connection
are completely digital, no modulation is necessary to match the two, so you don't need a
modem. You still need a device that matches the data rates and protocols of your PC to
the ISDN line and protects the line and your PC from suffering problems from one
another. The device that does this magic is called an ISDN terminal adapter.

Cable Services

The chief performance limit on telephone service is the twisted pair wire that runs from
the central office to your home or business. Breaking through its performance limits
would require stringing an entirely new set of wires throughout the telephone system.
Considering the billions of dollars invested in existing twisted pair telephone wiring, the
likelihood of the telephone company moving into a new connection system tomorrow is
remote.
Over the past two decades, however, other organizations have been hanging wires from
poles and pulling them underground to connect between a third and a half of the homes
in the United States—cable companies. The coaxial cables used by most such services
have bandwidths a hundred or more times wider than twisted pair. They regularly deliver
microwave signals to homes many miles from their distribution center.
Tapping that bandwidth has intrigued cable operators for years, and the explosive growth
of the Internet has set them salivating. Most operators foresee the time—already here in a
few locations—where you can connect your PC to the Internet through their coaxial
cable, and incidentally, send the check for Internet service to them.
Until recently, the one element lacking has been standardization. On September 24,
1996, however, five major companies with interests in cable modems—Com21, General
Instrument, Hewlett-Packard, LANcity, and Motorola—joined together to develop a
common data interface specification with the idea of building inter-operable cable
modems. They hoped to have worked out a standard by the end of 1996 and have modems
commercially available by the end of 1997.
The basic interface between the cable modem and your PC is an Ethernet link. The cable
modem plugs into an Ethernet host adapter in your PC or an internal cable modem
emulates an Ethernet adapter.
Grand as the idea of cable modems sounds, it suffers constraints as well. One problem
with cable modems is with the current wiring of cable systems themselves. They are
designed for program distribution, not for two way communications. The cable itself
does not limit the system to distribution, but many cable companies put amplifiers in
their systems to allow their signals to reach for miles beyond their offices. Most of the
amplifiers installed in existing cable systems operate in one direction only. To allow two
way data flow, the cable company must replace these amplifiers as well as upgrade the
head-end equipment that distributes the signals from their offices.
One expedient for bringing two way data flow to current cable subscribers in
the short run is to use the cable connection solely for downstream data flow and make a
second link, usually by an ordinary telephone line, for the upstream data. This kind of
asymmetrical operation is exactly suited to Internet use.
In the long term, cable operators face a larger problem with their wiring systems. All of
the bandwidth on the coaxial cable running to your home is not yours alone. Unlike the
telephone company that strings a separate pair of wires to every home, the cable
company's coax operates as a bus. The same signals travel to you and your neighbors.
The design works well for broadcasting. A similar hardware linkup works well for
Ethernet in businesses. A problem arises when you start slicing up bandwidth—as with a
pie, the more ways you slice it, the smaller the pieces everyone gets. Divide the
bandwidth of a typical cable system by the number of subscribers, and you have less
bandwidth available to each than with an advanced telephone service. If everyone
attached to a cable system tried to use their modems at once, data transfer would slow to
a crawl.
As with local area networks, the overloading of the system can be controlled by slicing
the entire system into more manageable sub-units. By limiting each coaxial cable to
serving a single neighborhood, bandwidth limits can be sidestepped with a minimum of
rewiring of the cable system.
Making the cable modem connection requires both physical and logical installation. The
physical part is easy: you simply plug the cable modem into your network adapter. The
software installation is more complex. You need driver software to match your modem to
your application software and operating system. Most cable companies will provide you
with the proper driver and application software. Your primary concern is matching the
supplied software to your operating system. In general, you will need Windows 95 or a
newer operating system to take advantage of a cable modem. Your cable company should
advise you of the exact hardware and software requirements.

Control

You require more from a modem than just the exchange of information. Besides its basic
purpose of converting digital data into modulated audio signals, the modem is often
called upon to handle other chores of convenience. For instance, you probably want your
modem to automatically dial or answer your phone or just tell you whether the line is
busy or ringing when you dial. Your computer must be able to control these features
of the modem, and the modem must be able to signal your computer about
what it does and what it finds out.
All modems have a native mode of control that uses a system of commands and
responses that make up the modem's language. Although even in the early days of PCs
you only needed to take direct control of all of the modem's functions when you wrote
programs, getting a modem to work successfully with communications applications often
required some familiarity with your modem's language. Proper operation usually required
sending a setup string to the modem. To get the most speed and error correction from
your modem and online service, you often had to edit or develop the setup string
yourself.
Modern operating systems simplify matters. The operating system often can identify your
modem and configure your applications automatically using the Plug-and-Play system or
by querying the modem—or, as a last resort, you—to determine the modem type. Once
your modem is set up for the operating system, all of your communications programs
automatically work with it without further need to deal with modem languages,
commands, or setup strings.

Operating System Level Control

Windows 3.1 requires manual configuration and its applications often require you to enter
proper setup strings. Starting with Windows 95, however, the operating system integrates
the modem into its driver structure and application interface. Windows 95 uses the same
layered architecture for modem control as it does for other hardware interfaces.

Structure

Windows 95 and more recent Microsoft operating systems separate modems and related
communications systems into three levels: communication port drivers, a universal
modem driver, and the Win32 communication API.
The port driver controls the operation of the port linking to your modem. Typically, it
will involve a serial port but also embraces enhanced capabilities ports (ECP), and, in the
latest operating systems, Universal Serial Bus.

The Windows universal modem driver, termed UniModem, is the key element that
eliminates the need to learn the language of the modem (or, in the case of programmers,
all modems). UniModem links to your modem using a mini-driver supplied by the
modem maker or, in the case of more common modems, included with the operating
system. It sends out the command to make the modem latch onto the telephone line, dial,
and connect with a given protocol.
At the other end, the Telephony Application Programming Interface or TAPI gives
programmers a standard set of calls for modem functions. Instead of issuing AT
commands (discussed below as the Hayes Command Set), the programmer uses a TAPI
call to tell the modem what to do. TAPI has another level, the Service Provider Interface,
which establishes the connection with the specific telephone network.

Modem Identification

Key to eliminating the hassles involved with properly setting up your modem is
identifying your modem to the operating system. Windows 95 provides several means for
modem identification, many of which are automatic. In the ideal case, your modem will
comply with the Plug-and-Play specifications, and Windows can automatically determine
its capabilities every time your PC boots up. With external modems, this recognition
requires the firmware in the modem be capable of responding with Plug-and-Play
identification information.
Without Plug-and-Play compliance, Windows 95 relies on its hardware configuration
process for identifying modems. A primary problem is that the most common command
set for modems, the AT command set, does not include a function to identify the modem.
Consequently, older modems cannot tell your PC what they are, at least directly. To
identify older modems, Windows sends commands to the modem, checks the modem's
responses, and compares them to a database of known modem information.
This procedure isn't always successful. The results may be ambiguous, so you need to
check what Windows determines. In the worst case, you must use the hardware
installation procedure or alter the Modem Properties sheet to tell Windows what you
have.

Device Level Control

Taking direct control of your modem requires understanding several concepts including
the operating modes of modems, the command set of your modem, an extension to its
command set (and memory function) called its S-registers, and its response codes that
relay modem status to your PC. Early modems further confound this control system with
a series of switches that define the basic operation and emulations of the modem.

Dual Modes

Most modems operate alternately in one of two modes. In command mode, the modem
receives and carries out instructions sent by your computer. In communications mode, it
operates as transparently as a modem can, merely converting data.
Changing modes is mostly a matter of sending control characters to the modem. The
characters can only be received and processed in command mode. In communication
mode, they would be passed along down the telephone line.
Shifting from command mode to communications mode is easy; the modem is already
handling commands, so adding one more to its repertoire is no big deal. Shifting back
from communications mode to command mode is more problematic. In communications
mode, the modem is supposed to be relaying all the data it receives down the telephone
line. The most widely used method for initiating the switch from communications to
command mode involves a guard period, a brief interval during which no data is sent.
Then, a set of characters unlikely to be found in a normal communications string is sent,
followed by a final guard period. This mode-switching method is patented by Hayes
Microcomputer Products and must be licensed by all modems that use it. Most modems
use guard periods of one second and three plus signs as the command character sequence.

Hayes Command Set

Today, most modems use a standardized set of instructions called the Hayes command
set, named after the Hayes company, which developed it for its modems. For the most
part, the Hayes command set comprises several dozen modem instructions that begin
with a two-character sequence, AT, called attention characters. The letters must be in
uppercase. Other characters specifying the command follow the attention character.
Because the AT is part of nearly every command, the Hayes command set is also termed
the AT command set. A modem that understands the Hayes command set (or the AT
command set) is said to be Hayes-compatible. The AT command set itself is not patented
(although it is not useful without incorporating the patented mode-switching method).
Most AT commands follow the attention characters with one letter that specifies the
family of the command and another character that indicates the nature of the command.
For instance, H stands for Hook. H0 means put the phone "on the hook" or hang up.
H1 indicates that the modem should take the phone off the hook, that is, make a
connection to the line.
Several commands and their modifiers can be combined on a single line after an initial
attention command. To command a Hayes or Hayes-compatible modem to dial
information on a tone-dialing line, for example, the proper sequence of commands would
read: ATDT15551212. The AT is the attention signal, D is the Dial command, the T
tells the modem to use tones for dialing, and the 15551212 is the number of the telephone
company information service.

All AT commands must be followed by a carriage return. The modem waits for the
carriage return as a signal that the computer has sent the complete command and that the
modem should start processing it.
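Composing a dial command in software is simple string assembly. This illustrative helper is not part of any standard library; it merely builds the sequence just described:

```python
def at_dial(number, tone=True):
    """Build a Hayes dial string: the AT attention characters, the
    D (Dial) command, T for tone or P for pulse dialing, the digits,
    and the carriage return the modem waits for."""
    return "ATD" + ("T" if tone else "P") + number + "\r"
```

Calling at_dial("5551212") yields ATDT5551212 followed by a carriage return; passing tone=False substitutes P for pulse dialing.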

Extended Modem Command Sets

At the time the Hayes command set was developed, modems had relatively few special
features. As modems became more sophisticated, they became more laden with
capabilities and features. The original Hayes command set had to be extended to handle
all the possibilities. Note that many Hayes-compatible modems recognize only the
original command set. All of their features may not work with software that expects the
extended Hayes set.
Because the 26 letters of the alphabet proved too few to handle all these extended
commands, modem makers developed new commands with special preamble characters.
Hayes first extended beyond the alphabet using an ampersand ("&") followed by a letter.
Other manufacturers have used percent signs ("%") and backslashes ("\") as preamble
characters.
Because modem makers have added unique features to many of their products, they have
often defined their own unique extended modem commands. Each manufacturer, and
often each modem, recognizes a different set of commands. Although modem makers
have attempted to avoid assigning the commands used by another manufacturer for a
different purpose, many extended modem commands have multiple definitions. Only the
most basic commands can be assumed universal. Others are product-specific, and you
need to check your modem's manual to be sure what command does what.
Table 22.6 lists extended modem commands and designates which of these have
universal or near universal application.
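The preamble convention can be illustrated with a small parser. The function is hypothetical, meant only to show how a command such as &K3 divides into preamble, family letter, and argument:

```python
def split_command(cmd):
    """Split one command (the part after the AT attention characters)
    into its preamble character, command letter, and argument digits."""
    preamble = ""
    if cmd and cmd[0] in "&%\\":          # extended-command preambles
        preamble, cmd = cmd[0], cmd[1:]
    return preamble, cmd[:1], cmd[1:]
```

A plain command such as H0 parses with an empty preamble, while &K3 and \N2 carry the ampersand and backslash preambles used by the extended sets.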

Table 22.6. Hayes AT Command Set with Proprietary Extensions

Command Function
A Force answer mode*
A/ Re-execute last command*
A> Repeat mode; redials up to 10 times for
connection
AT Attention command*
B0 ITU (CCITT) Handshaking*
B1 Bell handshaking*
B15 Initiate calls using V.21 at 300 bps
B16 Initiate calls using Bell 103 at 300 bps
B41 Initiate calls using V.32 at 4800 bps
Initiate calls using V.32 at 9600 bps
C0 Carrier (transmitter) off*
C1 Carrier (transmitter) on*
D Dial command*
Dn Dial number stored as n
DL Dial last number dialed
DSn Dial number stored in modem's memory at
location n
E0 Do not echo characters typed at/sent by PC*
E1 Echo characters typed at/sent by PC*
F0 Select auto-detect mode (same as N1)
F0 Online echo on (half duplex)
F1 Select V.21 or Bell 103
F1 Online echo off (full duplex)
F3 Select V.23; 75 bps send, 1200 bps receive
F4 Select V.22; 1200 bps
F5 Select V.22bis; 2400 bps
F6 Select V.32/V.32bis; 4800 bps
F7 Select V.32bis; 7200 bps
F8 Select V.32/V.32bis; 9600 bps
F9 Select V.32bis; 12,000 bps
F10 Select V.32bis; 14,400 bps
H0 Hang up line*
H1 Take line off-hook*
I0 Returns 3-digit product code
I1 Returns ROM checksum
I2 Validate ROM checksum
I2 Display results from RAM test
I3 Returns firmware revision code
I3 Display duration of last call
I4 Returns modem identifier string
I4 Display current modem settings
I5 Returns country code
I5 Display non-volatile RAM settings
I6 Returns data pump mode and revision code
I6 Display dial diagnostics of last call
I7 Display product configuration information
I8 Reserved
I9 Reserved
I10 Display dial security account status
information
I11 Reserved
K0 Display current or last call duration
K1 Return actual time at ATI3
L0 Lowest speaker volume*
L1 Low speaker volume*
L2 Medium speaker volume*
L3 Highest speaker volume*
M0 Speaker always off*
M1 Speaker on until carrier detected*
M2 Speaker always on*
M3 Speaker on only during answering
M3 Speaker on after dialing, off after carrier
established
N0 Disable auto-mode detection; S37 determines
speed
N1 Enable auto-mode detection (same as F0)
O0 Enter data (online) mode*
O1 Enter data (online) mode and retrain modem*
P In dialing string, switch to pulse dialing*
Q0 Result codes sent to PC*
Q1 Quiet mode: result codes not sent to PC*
Q2 Result codes not sent to PC in answer mode
R Force dialing using answer frequencies*
S In dialing string, dial stored number*
Sn Establishes S-register n as the default
Sn=v Sets S-register n to value v*
Sn? Returns value of S-register n*
T In dialing string, switch to DTMF dialing*
V0 Send terse (numeric) responses*
V1 Send verbose (plain language) responses*
W In dialing string, wait for second dial tone*
W0 Report only connect speed
W1 Reports line speed, error protocol, and port
speed
W2 Reports only modem connection speed
X0 Sends OK, CONNECT, RING, NO
CARRIER, and ERROR
X1 Sends CONNECT speed in addition to X0
X1 Adds CONNECT (SPEED) to X0
X2 Send NO DIALTONE in addition to X1
X3 Sends BUSY in addition to X2
X3 Adds BUSY and NO ANSWER to X2; deletes
NO DIALTONE
X4 Sends all responses
X4 Adds NO DIALTONE to X3
X5 Adds RINGING and VOICE to X3
X6 Adds RINGING and VOICE to X4
X7 As X6 but drops VOICE
Y0 Disables long-space disconnect*
Y1 Enables long-space disconnect *
Z Restore factory default settings*
Z0 Reset to defined profile 0
Z1 Reset to profile 1
&A0 ARQ result codes disabled
&A1 ARQ result codes enabled
&A2 Additional VFC, HST, or V32 modulation
indicator
&A3 Additional error correction indicator
&B0 Serial port rate set variable
&B1 Serial port rate fixed
&B2 Serial port rate fixed for ARQ calls, variable
for non-ARQ calls
&C0 Forces DCD on*
&C1 DCD follows remote carrier*
&D0 Modem ignores DTR line
&D0 DTR always on
&D1 Modem assumes command state when DTR switches
&D2 DTR switches off hook, answer mode off, to
command mode
&D3 DTR switches off initializes modem
&F Loads pre-set factory parameters
&Fn Load factory present configuration n
&G0 Disables guard tone*
&G1 Disables guard tone
&G1 Enables 550Hz (European) guard tone
&G2 Enables 1800Hz (U.K.) guard tone*
&H0 PC to modem flow control disabled
&H1 PC to modem hardware flow control (CTS)
&H2 PC to modem software flow control
(XON/XOFF)
&H3 PC to modem hardware and software flow
control
&I0 Modem to PC software flow control
(XON/XOFF) disabled
&I1 Modem passes type flow control to remote
system
&I2 Modem acts on software flow control, no
forwarding
&I3 Software flow control using ENQ/ACK in host
mode
&I4 Software flow control using ENQ/ACK in
terminal mode
&I5 Software flow control over modem link
without error control
&J0 Select RJ-11/ RJ-41S/ RJ-45S jack
&J1 Select RJ-12/ RJ-13 jack
&K0 Disables flow control
&K0 Disable data compression
&K1 Enable RTS/CTS hardware flow control
&K1 Auto enable data compression
&K2 Enable XON/XOFF software local flow
control
&K2 Data compression enabled

&K3 Enables RTS/CTS flow control


&K3 Selective data compression (MNP5 disabled)
&K4 Enables XON/XOFF software local flow
control
&K5 Transparent XON/XOFF
&K6 Enable XON/XOFF with pass-through
&K7 Enable RTS/CTS local with XON/XOFF
pass-through
&L0 Dial-up operation
&L1 Leased-line operation
&M0 Selects asynchronous mode*
&M1 Synchronous mode 1; asynchronous dialing,
then synchronous
&M1 Online synchronous mode without V.25bis
&M2 Synchronous mode 2; stored number dialing
&M3 Synchronous mode 3; manual dialing
&M4 Normal/ARQ Mode
&M5 Enter ARQ synchronous mode; disconnect if
unsuccessful
&M6 Enter V.25bis synchronous mode
&M7 Enter V.25bis synchronous mode with HDLC
protocol
&N0 Connection rate variable, negotiated by
modems
&N1 Connection rate set at 300bps (if remote
modem supports)
&N2 Connection rate set at 1200bps (if remote
modem supports)
&N3 Connection rate set at 2400bps (if remote
modem supports)
&N4 Connection rate set at 4800bps (if remote
modem supports)
&N5 Connection rate set at 7200bps (if remote
modem supports)
&N6 Connection rate set at 9600bps (if remote
modem supports)
&N7 Connection rate set at 12,000bps (if remote
modem supports)

&N8 Connection rate set at 14,400bps (if remote
modem supports)
&N9 Connection rate set at 16,800bps (if remote
modem supports)
&N10 Connection rate set at 19,200bps (if remote
modem supports)
&N11 Connection rate set at 21,600bps (if remote
modem supports)
&N12 Connection rate set at 24,000bps (if remote
modem supports)
&N13 Connection rate set at 26,400bps (if remote
modem supports)
&N14 Connection rate set at 28,800bps (if remote
modem supports)
&P Pulse dialing make/break ratio of 39:61
&P0 Make/break dial ratio of 39:61 at 10 pps*
&P1 Make/break dial ratio of 33:67 at 10 pps*
&P2 Make/break dial ratio of 39:61 at 20 pps
&P3 Make/break dial ratio of 33:67 at 20 pps
&Q0 Selects direct asynchronous mode
&Q5 Modem negotiates error correction
&Q6 Selects autosynchronous mode with speed
buffering
&Q8 Select MNP operation
&Q9 Conditional data compression
&R0 CTS tracks RTS in sync mode
&R0 RTS/CTS delay with received data
&R1 Modem ignores RTS
&R2 Hardware flow control of received data
&S0 Forces DSR continuously on*
&S1 DSR active after carrier detect; off after carrier
loss*
&S2 On loss of carrier, pulsed DSR with CTS
following CD
&S3 As &S2 but without CTS following CD
&S4 Modem sends PC a DSR at same time as CD
&T0 End test in progress
&T1 Starts local analog loopback test

&T3 Starts local digital loopback test


&T4 Responds to remote modem request for digital
loopback
&T5 Ignores remote modem request for digital
loopback
&T6 Request remote digital loopback without
self-test
&T7 Same as &T6 but with self-test
&T8 Starts local analog loopback with self-test
&U Enable trellis code modulation
&U0 Enable trellis code modulation
&U1 Disable trellis code modulation
&V Displays current and stored profiles and stored
numbers
&W Write current profile to memory
&W0 Sets profile 0 as current configuration
&W1 Sets profile 1 as current configuration
&X0 Set synchronous clock source to modem
internal clock
&X1 Set synchronous clock to PC source provided
on pin 24
&X2 Set synchronous clock to incoming modem
signal
&Y0 Modem uses profile 0
&Y0 Break handling: break is destructive but not
sent from modem
&Y1 Modem uses profile 1
&Y1 Break handling: break is destructive and
expedited
&Y2 Break handling: break is nondestructive and
expedited
&Y3 Break handling; break is nondestructive and
not expedited
&Zn Store telephone number n
&Zn=x Stores telephone number x at memory location
n
&Zn? Display telephone number stored at memory
location n
&ZCs Write string s to non-volatile memory

%An Create and configure security account n


%B0 Configure modem serial port rate to 110 bps
%B1 Configure modem serial port rate to 300 bps
%B2 Configure modem serial port rate to 600 bps
%B3 Configure modem serial port rate to 1200 bps
%B4 Configure modem serial port rate to 2400 bps
%B5 Configure modem serial port rate to 4800 bps
%B6 Configure modem serial port rate to 9600 bps
%B7 Configure modem serial port rate to 19,200
bps
%B8 Configure modem serial port rate to 38,400
bps
%B9 Configure modem serial port rate to 57,600
bps
%B10 Configure modem serial port rate to 115,200
bps
%C0 Disable data compression
%C0 Defer configuration changes until current call
ends
%C1 Enable MNP5 compression
%C1 Cancel configuration changes made during
remote access
%C2 Enable V.42bis compression
%C2 Force immediate configuration changes
%C3 Enables both MNP5 and V.42bis
%E=1 Erase local access password
%E=2 Erase autopass password
%E=3 Erase passwords in accounts 0-9
%E=4 Erase phone numbers in accounts 0-9
%E=5 Disable account, dialback, and new number
fields
%E0 Disables line-quality monitoring and
auto-retraining
%E1 Enables monitoring and retraining
%E2 Enables monitoring and fallback/fall forward
%E3 Enables monitoring, retraining, and fast
disconnect
%F0 Set data format to no parity, 8 data bits

%F1 Set data format to mark parity, 7 data bits


%F2 Set data format to odd parity, 7 data bits
%F3 Set data format to even parity, 7 data bits
%L Reports received signal level in -dBm
%L= Assign an account password at the local access
password
%P0= Disable password security
%P1= Disable password security
%P0=s Specify password string s for viewing
privileges only
%P1=s Specify password string s for view and
configuration privileges
%Pn? Display password n
%Q Reports line signal quality
%S=n Obtain access to security accounts without
disabling security
%T Off-hook modem detects tone frequencies of
dialing modems
%V=PWn Assign password in account n as autopass
password
\A0 64-character maximum MNP block size
\A1 128-character maximum MNP block size
\A2 192-character maximum MNP block size
\A3 256-character maximum MNP block size
\Bn Sends break to remote modem for n tenths of
a second
\G0 Disables XON/XOFF flow control with remote
modem
\G1 Enables XON/XOFF flow control with remote
modem
\Kn Defines break type
\L0 Uses stream mode for MNP connection
\L1 Uses interactive block mode for MNP
connection
\N0 Normal data link with speed buffering
\N1 Selects serial interface
\N2 Selects error correction mode
\N3 Selects auto-reliable (error correction) mode

\N4 LAPM error correction only


\N5 MNP5 error correction only
$H Help command; display help summary
-SSE=0 Disable DSVD operation globally
-SSE=1 Enable DSVD operation globally
, (comma) In dialing string, pause for two seconds*
; (semi-colon) In dialing string, return to command mode
after dialing*
" (quote) Dial letters literally
! (exclamation) In dialing string, flash hook switch*
@ (at sign) In dialing string, wait for second dial tone
/ (slash) Pause for 125 milliseconds
+++ Return to command mode*
Note: An asterisk indicates a command with nearly universal application.

S-Registers

Most modem commands are designed to work interactively to change the function of the
modem or to make the modem perform a defined action. As communications have
become more varied and challenging, modems have become both more complex and
flexible. Many modem features are entirely programmable, designed to be changed to
suit the needs of the connection you want to make.
This increased flexibility requires a more elaborate control system, one that reaches far
beyond a simple command set. Modems require a means of control to adjust their settings
and a means to store those settings.
Hayes developed a control system with exactly those features. This second-layer system
uses a series of registers as both a channel to communicate with the modem and a portal
into the modem's memory. In the Hayes-designed system, you access these registers
using the "S" command. Consequently, they are called S-registers.
Early Hayes modems had provisions for 28 of these registers, although the functions of
some were not initially defined. Each register was named with an S-number, from S0
through S27. As modems became more complex, even this bountiful endowment proved
insufficient, and later modems made by Hayes and other manufacturers added more
S-registers until today some products use register names as high as S100.
As with AT commands, each manufacturer has added to the repertory of S-registers to
suit the needs of its own products. Most modem makers have attempted to assign unique

designations to the registers exclusive to their products. That is, they have tried to avoid
using register names used by other manufacturers for different purposes. While such
efforts are to be applauded, they have led to a profusion of incompatible registers and
functions. In fact, some manufacturers even define many of the basic 28 Hayes
S-registers in their own ways.
In general, only the lower thirteen Hayes S-registers (that is, S0 through S12) and part of
S16 are anywhere near universal among modems. Any S-registers above this minimal
endowment must be considered unique to their manufacturer.
S-registers are of two types. One accepts an integer value to define a parameter—for
example, the length of a tone or delay in increments of a fraction of a second. To set
these registers, you only need to write the appropriate integer value to the register. Out of
range values generally result in your modem responding with an error message while
leaving the previous register contents intact. Other S-registers are bit-mapped. That is,
the state of a bit (or bit pattern) determines whether a given modem feature is on or off.
Each bit (or pattern) in the register operates independently. To set these registers, you
must first determine the value of each of the eight bits stored in the register and encode
these values together as an eight-bit binary number. Bit 0 of the bit map is the least
significant bit of the resulting number, and bit 7 the most significant. Next, convert the
binary number to decimal, and write the decimal value to the S-register.
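That arithmetic can be sketched in a few lines. The helper below is this example's own (not part of any modem library); the bit positions in the comment follow the S14 layout shown in Table 22.7, whose Hayes default is AA hex.

```python
# Hypothetical sketch: encoding a bit-mapped S-register value.

def encode_bitmap(bits):
    """Combine individual bit settings (bit position -> 0 or 1)
    into the decimal value written to a bit-mapped S-register."""
    value = 0
    for position, state in bits.items():
        value |= (state & 1) << position
    return value

# S14 default of AA hex = 10101010 binary: command echo on (bit 1),
# verbose result codes (bit 3), pulse dialing (bit 5), originate
# mode (bit 7); all other bits zero.
s14 = encode_bitmap({1: 1, 3: 1, 5: 1, 7: 1})
print(s14)            # 170 decimal, i.e. AA hex
print(f"ATS14={s14}") # the command that writes the register
```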
Table 22.7 lists the basic 13 universal S-registers used by most modem manufacturers as
well as some proprietary extensions.

Table 22.7. Commonly Used S-Register Functions

Register Range Units Default Description Scope


S0 0-255 rings 0 answer on ring # Near universal
S1 0-255 rings 0 count number of rings Near universal
S2 0-127 ASCII 43 escape code Near universal
S3 0-127 ASCII 13 character used as return Near universal
S4 0-127 ASCII 10 character used as line feed Near universal
S5 0-32,127 ASCII 8 character used as backspace Near universal
S6 2-255 sec. 2 time to wait for dial tone Near universal
S7 1-255 sec. 30 time to wait for carrier Near universal

S8 0-255 sec. 2 length of comma pause Near universal
S9 1-255 0.1" 6 response time, carrier detect Near universal
S10 1-255 0.1" 7 delay before hang up Near universal
S11 50-255 0.001" 95 DTMF duration Varies
S12 20-255 0.02" 50 escape code dead time Common
S14 bit-mapped AA(Hex) modem options Varies
Bit 0 reserved
Bit 1 command echo
0=no echo
1=echo
Bit 2 result codes
0=enabled
1=disabled
Bit 3 verbose mode
0=short form result
codes
1=verbose result
codes
Bit 4 dumb mode
0=modem acts smart
1=modem acts dumb
Bit 5 dial method
0=tone
1=pulse
Bit 6 reserved
Bit 7 originate/answer
mode
0=answer
1=originate
S16 bit-mapped 0 modem test options Common

bit 0 local analog loopback

0=disabled
1=enabled
bit 1 reserved
bit 2 local digital
loopback
0=disabled
1=enabled
bit 3 status bit
0=loopback off
1=loopback in
progress
bit 4 initiate remote
digital loopback
0=disabled
1=enabled
bit 5 initiate remote
digital loopback
with test message
and error count
0=disabled
1=enabled
bit 6 local analog
loopback with self
test
0=disabled
1=enabled
bit 7 reserved
S18 0-255 seconds 0 test timer Common
S21 bit-mapped 0 modem options Hayes

bit 0 telco jack used


0=RJ-11/ RJ-41S/ RJ-45S
1=RJ-12/ RJ-13
bit 1 reserved
bit 2 RTS/CTS handling
0=RTS follows CTS

1=CTS always on
bit 3,4 DTR handling
0,0=modem ignores
DTR
0,1=modem to
command state when
DTR goes off
1,0=modem hangs
up when DTR goes
off
1,1=modem
initializes when
DTR goes off
bit 5 DCD handling
0=DCD always on
1=DCD indicates
presence of carrier
bit 6 DSR handling
0=DSR always on
1=DSR indicates
modem is off-hook
and in data mode
bit 7 long space
disconnect
0=disabled
1=enabled
S22 bit-mapped 76(Hex) modem option register Common
bit 0,1 speaker volume
0,0=low
0,1=low
1,0=medium
1,1=high
bit 2,3 speaker control
0,0=speaker disabled
0,1=speaker on until
carrier detected
1,0=speaker always on

1,1=speaker on
between dialing and
carrier detect
bit 4,5,6
result code options
0,0,0=300 baud
modem result codes
only
1,0,0=modem does
not detect dialtone or
busy
1,0,1=modem
detects dialtone only
1,1,0=modem
detects busy signal
only
1,1,1=modem
detects dialtone and
busy
other settings
undefined
bit 7 make/break pulse
dial ratio
0=39% make, 61%
break
1=33% make, 67%
break
S23 bit-mapped 7 modem option register Common

bit 0 obey request from remote modem for
remote digital loopback
0=disabled
1=enabled
bit 1,2 communication
rate
0,0=0 to 300 bps
0,1=reserved
1,0=1200 bps

1,1=2400 bps
bit 3 reserved
bit 4,5 parity option
0,0=even
0,1=space
1,0=odd
1,1=mark/none
bit 6,7 guard tones
0,0=disabled
0,1=550 Hz. guard
tone
1,0=1800 Hz. guard
tone
1,1=reserved
S27 bit-mapped 40(Hex) modem options register Hayes

bit 0,1 transmission mode


0,0=asynchronous
0,1=synchronous
with async call
placement
1,0=synchronous
with stored number
dialing
1,1=synchronous
with manual dialing
bit 2 dialup or lease-line
operation
0=dialup line
1=leased-line
bit 3 reserved
bit 4,5 source of
synchronous clock
0,0=local modem
0,1=host computer
or data terminal
1,0=derived from
received carrier

1,1=reserved
bit 6 Bell or CCITT
operation
0=CCITT v.22
bis/v.22
1=Bell 212A
bit 7 reserved

Controlling S-register settings is merely a matter of using the S command family. To set
a value in a given S-register, you must specify the register number, an "equals" sign, and
the new value to give the register. For example, to set register S10 to value 32, you
would send this command to your modem:

ATS10=32
where AT is the attention command, S indicates you want to change an S-register, 10 is
the register number, the = indicates you want to change its value, and the 32 is the new
value.
To read the value of an S-register, you use the S command again. After you specify the
register number, you only need to append a question mark to make an inquiry. For
example, to check the value stored in register S10, you would send this command to your
modem:

ATS10?
where AT is the attention command, the S indicates you want to check an S-register, 10
is the register number, and the question mark denotes the command as an inquiry. Your
modem would respond 32, providing you had given the previous command to set the
register to that value.
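Both command forms can be generated mechanically. The helpers below are purely illustrative (their names are this sketch's own, not a modem library API); they only build the strings that would be sent over the serial port.

```python
# Illustrative builders for the two S-register command forms
# described above: write (ATSn=value) and inquiry (ATSn?).

def set_s_register(register, value):
    """Build the command that writes `value` into S-register `register`."""
    return f"ATS{register}={value}"

def query_s_register(register):
    """Build the inquiry that reads back an S-register's contents."""
    return f"ATS{register}?"

print(set_s_register(10, 32))  # ATS10=32
print(query_s_register(10))    # ATS10?
```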

Response Codes

Commands sent to a Hayes-compatible modem are one way. Without some means of
confirmation, you would never know whether the modem actually received your
command, let alone acted on it. Moreover, you also need some means for the modem to
tell you what it discovers about your connection to the telephone line. For instance, the
modem needs to signal you when it detects another modem at the end of the line.
Part of the Hayes command set is a series of response codes that serve the feedback
function. When the modem needs to tell you something, it sends code numbers or words
to apprise you of the situation via the same connection used to send data between your
computer and modem. In the Hayes scheme of things, you can set the modem to send

simple numeric codes, consisting solely of codes (which you can then look up in your
modem manual, if you have one) or verbose responses, which may be one or more words
long in something close to everyday English.
Typical responses include "OK" to signify that a command has been received and acted
on, "CONNECT 1200" to indicate that you have linked with a 1200 bits-per-second
modem, and "RINGING" to tell you that the phone at the other end of the connection is
ringing. Representative modem response codes are listed in Table 22.8. Although the use
of these messages is industry wide, no true standard exists. Beyond the basic first ten,
manufacturers have set their own result codes independently. Consequently, the same
number may have different meanings for different modems—and the chart has duplicate
entries for some values.

Table 22.8. Representative Modem Response Codes

Code Verbose message Definition


0 OK Command executed without error
1 CONNECT Connection established (at 300 bps)
2 RING Phone is ringing
3 NO CARRIER Carrier lost or never detected
4 ERROR Error in command line or line too long
5 CONNECT 1200 Connection established at 1200 bps
6 NO DIALTONE Dialtone not detected in waiting period
7 BUSY Modem detected a busy signal
8 NO ANSWER No silence detected when waiting for quiet answer
10 CONNECT 2400 Connection established at 2400bps
11 RINGING Distant telephone ringing
11 CONNECT 4800 Connection established at 4800 bps
12 VOICE Human voice or answering machine detected
12 CONNECT 9600 Connection established at 9600 bps
13 CONNECT 9600 Connection established at 9600 bps
14 CONNECT 19200 Connection established at 19,200 bps
18 CONNECT 4800 Connection established at 4800 bps
20 CONNECT 7200 Connection established at 7200 bps
21 CONNECT 12000 Connection established at 12000 bps
25 CONNECT 14400 Connection established at 14400 bps
28 CONNECT 38400 Connection established at 38,400 bps
40 CARRIER 300 Carrier detected at 300 bps
46 CARRIER 1200 Carrier detected at 1200 bps

47 CARRIER 2400 Carrier detected at 2400 bps


47 CONNECT 16800 Connection established at 16800 bps
48 CARRIER 4800 Carrier detected at 4800 bps
50 CARRIER 9600 Carrier detected at 9600 bps
66 COMPRESSION: CLASS 5 MNP5 data compression enabled
67 COMPRESSION: V.42BIS V.42bis compression enabled
69 COMPRESSION: NONE Data compression disabled
70 PROTOCOL: NONE Standard asynchronous mode
77 PROTOCOL: V.42/LAPM V.42 error correction mode: LAPM
80 PROTOCOL: MNP Alternate error correction protocol: MNP
85 CONNECT 19200 Connection established at 19200 bps
91 CONNECT 21600 Connection established at 21600 bps
99 CONNECT 24000 Connection established at 24000 bps
103 CONNECT 26400 Connection established at 26400 bps
107 CONNECT 28800 Connection established at 28800 bps

Because the response codes flow from your modem to your computer as part of the
regular data stream, you may accidentally confuse them with text being received from the
far end of your connection. The scripts you write for communications programs (or those
supplied by the publishers with the programs themselves) usually use a "connect"
message to switch to terminal mode.
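A script that reads modem output therefore needs to normalize both the numeric and verbose forms. The sketch below covers only the near-standard first codes from Table 22.8; anything above them is manufacturer-specific, so this hypothetical helper simply declines to translate those.

```python
# Minimal sketch of interpreting modem result codes. Only the
# first ten codes are close to standard across manufacturers;
# this mapping is a subset of Table 22.8.

BASIC_RESULT_CODES = {
    0: "OK",
    1: "CONNECT",
    2: "RING",
    3: "NO CARRIER",
    4: "ERROR",
    5: "CONNECT 1200",
    6: "NO DIALTONE",
    7: "BUSY",
    8: "NO ANSWER",
}

def interpret(response):
    """Accept a numeric or verbose response; return the verbose
    form, or None for manufacturer-specific numeric codes."""
    text = response.strip()
    if text.isdigit():
        return BASIC_RESULT_CODES.get(int(text))
    return text

print(interpret("7"))             # BUSY
print(interpret("CONNECT 2400"))  # CONNECT 2400
```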

Operation

No matter how you send a command to your modem, it works the same way. It receives a
setup string from your application or the operating system setting its basic operating
parameters. It dials the phone in response to a dial command (or answers the line if it
rings and you tell it to). The modem then negotiates the connection so that it and the
modem at the other end of the line are on speaking terms. It uses handshaking to ensure
that nothing gets lost during its conversations. And it uses a hardware or software
protocol to detect and prevent errors in the data that it transfers.

Setup Strings

Back when modems only had one or two speeds and all communications protocols were
handled by your communications software, you usually just gave your modem a
command to dial the phone and waited for a connection. Today, however, there are many
options for configuring a modem, and they often differ, depending on the modem or

service with which you communicate. Whereas some modems remember their basic
configuration in EEPROM memory so that you can set it once and forget it, others need
to be reminded every time you use the modem. Consequently, you may need to change
the configuration of your modem every time you start your communications program or
even dial the phone.
To set or change the configuration of your modem, your communications program sends
a string of commands out your serial port. This often long block of characters is called a
setup string. Sending the right setup string is vitally important in assuring that your
modem works with your software and the connection you want to make.
Although the setup string looks as forbidding as a secret code written in Cyrillic
characters, you can deconstruct it easily. Each element of the string is drawn from the
modem command set and can be interpreted individually.
A setup string (or addition to your modem's dialing command) can be useful in avoiding
problems with call waiting on modem lines. The click that warns you of an incoming call
while you're using a line plays havoc with modems. Most interpret the click as a loss of
carrier and automatically disconnect. To avoid the problem, you can use your modem
setup string to reprogram the amount of time your modem waits before hanging up when
it detects a loss of carrier. Registers S9 and S10 control this delay. The command
ATS10=30 (or adding S10=30 to your setup string) will set the delay for three seconds,
enough to prevent the automatic hang up.
Another way to avoid the disconnect problem is to cancel call waiting. In most calling
areas dialing *70 on your phone before making a call disables call waiting during the
next call. You can add the *70 to your modem's dialing command to automatically defeat
call waiting. For example, change your dialing command from ATDT15551212 to
ATDT*70,15551212 (the comma adds a pause to give the telephone time to recover).
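Assembling such a dialing command can be sketched as follows. The helper and its parameters are hypothetical, and the *70 prefix assumes your calling area uses that code to cancel call waiting.

```python
# Illustrative builder for the dialing commands discussed above.
# S10 counts in tenths of a second, so hangup_delay=30 means a
# three-second wait before hanging up on carrier loss.

def dial_command(number, defeat_call_waiting=False, hangup_delay=None):
    parts = ["AT"]
    if hangup_delay is not None:
        parts.append(f"S10={hangup_delay}")
    parts.append("DT")  # tone dialing
    if defeat_call_waiting:
        parts.append("*70,")  # comma pauses while the line recovers
    parts.append(number)
    return "".join(parts)

print(dial_command("15551212"))
# ATDT15551212
print(dial_command("15551212", defeat_call_waiting=True, hangup_delay=30))
# ATS10=30DT*70,15551212
```

Chaining several commands after a single AT prefix, as the second call does, is a standard feature of the Hayes command syntax.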

Dialing and Answering

Nearly all modems today fully automate the process of linking with another modem.
Your communications software sends commands to the modem to dial the phone, and the
other modem knows what to do when the telephone rings. Early modems, however, were
not so capable, and these automatic features had to be explicitly pointed out in modem
specifications.
An auto-dial modem has the innate capability to generate pulse-dial or DTMF (dual-tone
multifrequency, or touch tone) dialing signals independent of a telephone set. Upon
receiving a dialing command from you or your software, the modem dials the phone and
makes a connection. Without auto-dial, you would have to dial the phone yourself, listen
for the screech of the remote modem's answer, plug in your modem, and then hang up the
phone.
At the other end of the connection, an auto-answer modem can detect the incoming
ringing voltage (the low frequency, high voltage signal that makes the bell on a telephone
ring) and seize the telephone line as if it had been answered by you in person. After

seizing the phone line, the auto-answer modem sends a signal to its host computer to tell
it that it has answered the phone. The computer then can interact with the caller.
Nearly all modern modems also include automatic speed sensing. This feature allows the
modem to automatically adjust its speed to match that of a distant modem if it can. High
speed modems usually negotiate the highest possible shared speed at which to operate
using proprietary protocols.
Early modems attempted to adjust to the speed at which you send them data, again if it
was within the range of speeds the modem could handle. Modems with built-in data
compression make this option undesirable because you always want to send data to the
modem at the highest speed your serial port allows and your modem accepts (at least
57,600 bits per second with a V.32 bis/V.42bis modem and 115,200 bits per second with
a V.34 modem). The modem that follows these standards adjusts its speed to match
whatever it connects with during the process of handshaking.

Handshaking

Once a modem makes a connection, it must negotiate the standard that it and the distant
modem follow. This process of negotiation is termed modem handshaking. The sequence
involved in the handshake differs with the type of modem and standard, and it can get
complicated as speeds increase.
When using the V.22bis standard, for example, the distant modem that detects a ring on
the telephone line goes off the hook (picks up the line), then does nothing for at least two
seconds. This period of silence, called the billing delay, is required by telephone
company rules to give the phone system a chance to determine that the connection has
been made and start billing you for it. After the billing delay, the remote modem sends
out its "answer tone," between 2.6 and 4 seconds of a 2100 Hz tone. The answer tone lets
anyone who dials your modem line as a wrong number know that they have not reached
their significant other. It also gives people with manual start modems a chance to put
their modems in data mode. It also lets the telephone company know that you have made
a data connection so that the phone company can switch off the echo suppressors on the
line and let your modems fend for themselves in canceling echoes. The dialing modem
doesn't do anything but listen while all of this posturing goes on.
After the answer tone is finished, the distant modem becomes quiet for about 75
milliseconds, marking the end of the answer tone. Then it sends out what you hear as a
burst of static. It's actually a 1200-bits-per-second signal (shuffling between 2250 and
2550 Hz) containing an unscrambled binary 1 signal—that is, pulses indicating a logical
positive in digital code. After about 155 milliseconds of this noise, the dialing modem
remains silent for about 0.5 second, and then sends out its 00 and 11 code patterns at 1200
bits per second for about 100 milliseconds, a signal called "S1." Then the dialing modem
switches to sending out a "scrambled" binary 1 signal (one that has its power smoothed
out across the modem's bandwidth). When the answering modem, which is still sending
out an unscrambled binary 1, recognizes the dialing modem's S1 signal, the answering
modem begins sending out the S1 for 100 milliseconds. Then it switches to a scrambled
binary 1 for 500 milliseconds. Finally, it ups the rate to 2400 bits per second and sends

out another 200 milliseconds of binary one. About 600 milliseconds after the dialing
modem detects the scrambled binary 1 signal from the answering modem, it too switches
to scrambled 1s at 2400 bits per second for 200 milliseconds. This completes the
negotiations, and the two modems are ready to transfer information.
The S1 signals let the two modems know that they will communicate using V.22bis
rather than slower speed V.22 or Bell 212A. If these signals are not detected, the modems
know quickly that they must communicate at 1200 bits per second. The two scrambled
binary 1 signals assure the two modems that their 2400-bits-per-second exchanges will
be successful.
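The exchange is easier to follow reduced to a rough timeline. The list below paraphrases the sequence just described; the durations are the approximate figures from the text, not a normative rendering of the V.22bis standard.

```python
# The V.22bis negotiation, reduced to an approximate timeline.
# Each entry: which modem acts, what it sends, and for how long.

HANDSHAKE = [
    ("answering", "goes off-hook; billing delay (silence)", "2 s minimum"),
    ("answering", "2100 Hz answer tone", "2.6-4 s"),
    ("answering", "silence marking end of answer tone", "about 75 ms"),
    ("answering", "unscrambled binary 1 at 1200 bps", "until S1 heard"),
    ("dialing",   "silence after ~155 ms of the above", "about 0.5 s"),
    ("dialing",   "S1 signal (00/11 patterns at 1200 bps)", "about 100 ms"),
    ("dialing",   "scrambled binary 1", "until answer responds"),
    ("answering", "S1 signal", "100 ms"),
    ("answering", "scrambled binary 1", "500 ms"),
    ("answering", "binary 1 at 2400 bps", "200 ms"),
    ("dialing",   "scrambled 1s at 2400 bps (~600 ms later)", "200 ms"),
]

for side, event, duration in HANDSHAKE:
    print(f"{side:>9}: {event} ({duration})")
```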
Modem handshaking at higher speeds is even more complicated. In addition to
negotiating their own speed, the modems have to measure the characteristics of the
telephone line, for instance, to calibrate the delays to make their echo cancellers work.
Under V.32, for example, a connection starts out the same as in V.22bis because the
billing delay must still be accounted for. But after the delay, the V.32 answering modem
sends out a V.25 answer tone that reverses phase every 450 milliseconds to tell the
telephone system that the modems will handle the echo cancellation. This signal is the
clicking noise you hear at the beginning of a high speed modem connection.
The dialing modem waits one second after making the connection and sends out an 1800
Hz tone to let the answering modem (which is still sending out the answer tone) know
that it is a V.32 modem. If the answering modem hears this tone, after it completes its 2.6
to 4.0 second answer tone, it immediately tries to connect. If the answering modem
doesn't hear the dialing modem's tone, it first tries to connect as a V.22bis modem (by
sending the unscrambled binary 1 signal). If the dialing modem doesn't respond, the
answering modem tries to connect as V.32 once again just in case something interfered
with the initial attempt.
To make the V.32 connection, the answering modem sends out a combination of 600 and
3000 Hz tones for at least 27 milliseconds, then reverses the phase of the signal. The
dialing modem responds to the phase reversal by reversing the phase of its 1800 Hz
signal. The answering modem responds to this by reversing the phase of its signal again.
These three phase reversals allow the modems to time how long it takes a signal to
complete the entire connection, information needed to program the echo cancellation
circuitry inside the modems.
Next, the dialing modem sends out a training signal that's from 650 to 3525 milliseconds
long so that the answering modem can adjust its phone line equalization. The answering
modem waits for the dialing modem to finish, and then sends out its own training signal.
When the dialing modem has finished its setup, it signals back to the answering modem.
Then both modems exchange scrambled binary 1 signals for at least 53 milliseconds and
are ready to pass data. At this point, your modem will signal you CONNECT 9600.
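The sequence above can be sketched as a simple timeline. The durations are the nominal minimums quoted in the text; the phase list is only an illustration of the order of events, not a working model of V.32.

```python
# Illustrative timeline of the V.32 call setup described above.
# Durations are the nominal minimums (in milliseconds) quoted in the text.
V32_HANDSHAKE = [
    ("answer tone with 450 ms phase reversals", 2600),   # 2.6-4.0 s; minimum shown
    ("caller's 1800 Hz identification tone", 1000),      # sent 1 s after connecting
    ("600 + 3000 Hz tones before phase reversal", 27),   # at least 27 ms
    ("round-trip timing via three phase reversals", 0),  # duration depends on the line
    ("caller's equalizer training signal", 650),         # 650-3525 ms; minimum shown
    ("answerer's equalizer training signal", 650),
    ("scrambled binary 1 exchange", 53),                 # at least 53 ms
]

def minimum_setup_ms(phases):
    """Sum the nominal minimum durations to estimate the fastest handshake."""
    return sum(duration for _, duration in phases)

print(minimum_setup_ms(V32_HANDSHAKE))  # → 4980, about five seconds before CONNECT 9600
```

Even under ideal conditions, then, several seconds pass between dialing and the CONNECT message.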
Once the connection is made, the modems see whether they can use V.42 error correction
or V.42 bit compression. The dialing modem begins the process by sending out a stream
of alternating parity XON characters while listening for a response. If the answering
modem doesn't respond in 750 milliseconds, the dialing modem assumes that the
recipient doesn't understand and tries an alternate protocol (for example MNP). If the
answering modem understands the XON characters, it responds by sending the letters

http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/wrh22.htm (71 de 96) [23/06/2000 06:57:04 p.m.]


Winn L. Rosch Hardware Bible, Electronic Edition, Chapter 22

"EC" (for error correction) 10 times and prepares to use synchronous LAPM protocol.
The dialing modem then uses the LAPM protocol to send out an Exchange Identification
frame (XID), which codes the details of the error correction it wants to use. The
answering modem responds with its selection of the features desired by the dialing
modem that it can use, and those become the basis of the error correction actually put to
work. These include whether to use V.42bis compression. The two modems then
exchange signals to enter data transfer mode.
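A rough model of the detection phase just described, assuming the conventional byte values for XON (ASCII 17) with and without its parity bit set; the timing and framing here are simplified for illustration, not taken from the standard's exact procedure.

```python
# Sketch of the V.42 detection exchange described above.
XON_EVEN = 0x11          # XON with the parity bit clear
XON_ODD = 0x11 | 0x80    # XON with the parity bit set

def detection_stream(length=8):
    """The originator's detection pattern: XON bytes with alternating parity."""
    return bytes(XON_ODD if i % 2 else XON_EVEN for i in range(length))

def answerer_response(heard, timeout_ms):
    """Reply 'EC' ten times if the pattern arrived in time, else stay silent."""
    expected = detection_stream(len(heard))
    if timeout_ms <= 750 and heard == expected:
        return b"EC" * 10   # signals readiness for LAPM error correction
    return b""              # silence: the originator falls back (for example, to MNP)

print(answerer_response(detection_stream(), timeout_ms=200))  # ten "EC" pairs
```

A silent answer after 750 milliseconds is what triggers the fallback to an alternate protocol.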

Caller Identification

Some modems can take advantage of the telephone company's caller identification
service to screen calls for security. In areas supporting caller identification service,
central office equipment at the telephone company adds a three part identification signal
between the first and second ring of a telephone call. The format of this signal starts with
a series of alternating digital 1s and 0s followed by a marking state. The end part of the
signal is the actual caller identification data—not just the originating telephone number
but also a date and time stamp as well as a checksum. The entire signal lasts 450
milliseconds.
A modem designed for receiving caller identification can check the originating number
before connecting with an incoming call. The modem can assure that only calls from
approved numbers are accepted and can aid in maintaining a full audit trail of the calls
received.
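The identification data can be sketched as a framed message whose checksum byte forces the modulo-256 sum of the whole frame to zero, the scheme used by the Bellcore-style service common in North America. The field layout below is a simplified illustration, and the sample telephone number and date stamp are invented.

```python
def build_clid_message(date_time, number):
    """Single-message caller ID frame: type byte, length, data, checksum."""
    payload = (date_time + number).encode("ascii")
    body = bytes([0x04, len(payload)]) + payload
    checksum = (-sum(body)) & 0xFF   # makes the modulo-256 sum come out to zero
    return body + bytes([checksum])

def parse_clid_message(frame):
    """Verify the checksum, then split out the time stamp and number."""
    if sum(frame) & 0xFF != 0:
        raise ValueError("checksum mismatch: reject the frame")
    length = frame[1]
    data = frame[2:2 + length].decode("ascii")
    return data[:8], data[8:]        # MMDDHHMM stamp, then the digits

frame = build_clid_message("06231604", "2165551234")
print(parse_clid_message(frame))     # → ('06231604', '2165551234')
```

A screening modem would compare the parsed number against its list of approved callers before answering.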

Protocols

Even the best telephone connection sneaks errors into modem communications. When
sending text back and forth, the result is only a few garbled characters. But if you try to
send a complex web page or an executable program through a modem connection and an
error creeps in, the result is unpredictable.
Packet-based communications systems take care of such problems by including error
detection data in every packet. If a communication problem causes an error anywhere in
the system—including between your modem and PC—the bad packet can be detected,
and your system can ask for its retransmission. The TCP/IP protocol used by the Internet
assures that the pages of the World Wide Web arrive at your PC error free.
Although the Web has become the most popular and important application for modems,
it certainly is not the only one. At least for now, various online services and bulletin
boards persist in using direct dial connections. These don't benefit from TCP/IP or other
packet protocols. If errors are to be exorcised, the two ends of the standalone connection
must negotiate it themselves.
They have two choices: hardware-based error correction (such as MNP 5 or V.42) or
software protocols. Before the world agreed on standards for error correction in modem
connections, a number of different methods called file transfer protocols were developed
to help prevent errors when exchanging files. Many of these are still used in direct dial
(as opposed to network-based Internet) connections.
● Xmodem. The first of these protocols was created in 1978 by Ward Christensen
and quickly became the de facto standard for error free file transmission. Called
Xmodem or MODEM7, Christensen's technique worked by breaking files into
blocks 128 bytes long, transmitting the blocks one at a time, and verifying the
accuracy of each. If the receiving system found an error, it requested that the bad
block be retransmitted as many times as necessary to get through error free (or
until the process automatically times out).
To detect errors, Xmodem uses a checksum byte added to each block which,
through application of an algorithm, indicates whether the transmission was
successful. When a block is successfully received, the receiving system sends back
an ACK byte (ASCII code 06); if unsuccessful, it sends back a NAK (ASCII 21)
byte.

● Xmodem CRC. Some errors in a block can result in a valid checksum. A more
robust form of Xmodem substitutes a Cyclical Redundancy Check for the
checksum (stealing another byte). Most systems that use Xmodem try to use the
CRC algorithm first (because it is more reliable), but if one of the systems trying to
communicate doesn't support CRC, the protocol reverts to ordinary Xmodem.

● Xmodem-1K. Another way to avoid the substantial overhead of acknowledging
multiple blocks is to use larger blocks. The Xmodem-1K protocol expands the
128-byte block of Xmodem CRC to 1024 bytes. The larger blocks give this
protocol a substantial advantage over its predecessor in moving larger files.

● WXmodem. One problem with Xmodem is that the sending modem must wait for
an acknowledgment between sending blocks. With long delays, this wait can
substantially slow transmissions. The WXmodem protocol (which stands for
Windowed Xmodem) removes the wait. As with ordinary Xmodem, it uses
checksummed 128-byte blocks, but the sending system assumes that every block is
properly received and sends all blocks one after another. The receiving modem
responds normally with ACK or NAK acknowledgments. Although the sending
modem often gets one to four blocks ahead of the receiving modem, it tracks the
acknowledgments it receives and can resend the proper block when necessary.

● Ymodem. As with Xmodem-1K, Ymodem takes advantage of larger blocks to
speed transmissions. It uses 1024-byte blocks and cyclic redundancy checking for
error detection. Although sometimes confused with Xmodem-1K, it differs by
including a batch mode that allows multiple files to be transferred with a single
command. Sometimes it is described as Ymodem Batch.

● Ymodem-g. After modems with built-in hardware error correction became
available, a new variant of the Ymodem protocol without software error recovery
was developed as Ymodem-g. Although Ymodem-g still breaks files into
1024-byte blocks for transfer, it sends them as a continuous stream. If an error
sneaks past the hardware error detection scheme and is reported back to the
sending system, the entire transfer is canceled. It also supports batch transfers.

● Zmodem. The principle behind Zmodem is that the only blocks that matter in error
correction are those that are received defectively. As with WXmodem and
Ymodem-g, Zmodem blasts out blocks non-stop, and it only acts on the NAKs it
gets from the receiving system. Zmodem compromises on block size, using
512-byte blocks. The protocol also has recovery capabilities. If a protocol transfer
is interrupted, the system can resume it later without the need to retransmit the
blocks already sent.

● Kermit. Developed at Columbia University in 1981, Kermit—named after Jim
Henson's favorite frog—was designed to ship files between dissimilar computer
systems. Unlike Xmodem, however, the way Kermit moves data is negotiable.
Kermit uses blocks (which the protocol calls packets) and checksum error
detection but adjusts its packet size to accommodate the fixed packet sizes used by
some computer systems or to work with marginal connections. It also can use
seven-bit connections to transfer eight-bit data by specially coding characters when
necessary. Kermit can also recover from major line errors by resynchronizing the
transmissions of modems after their interruption.
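The original Xmodem framing described above is compact enough to sketch in full: a start-of-header byte, the block number and its complement, 128 data bytes, and the additive checksum. This is an illustration of the framing only; real transfers add timeouts and retry limits.

```python
SOH, ACK, NAK = 0x01, 0x06, 0x15   # Xmodem control bytes

def make_block(number, data):
    """Frame one 128-byte Xmodem block with a simple additive checksum."""
    data = data.ljust(128, b"\x1a")          # short final blocks are padded out
    header = bytes([SOH, number & 0xFF, 0xFF - (number & 0xFF)])
    checksum = sum(data) & 0xFF
    return header + data + bytes([checksum])

def receive_block(frame):
    """Return ACK if the checksum matches, NAK to request retransmission."""
    data, checksum = frame[3:131], frame[131]
    return ACK if sum(data) & 0xFF == checksum else NAK

block = make_block(1, b"Hello, world")
print(receive_block(block) == ACK)                   # True
damaged = block[:10] + b"\x00" + block[11:]          # simulate line noise
print(receive_block(damaged) == NAK)                 # True
```

The complemented block number in the header gives the receiver a quick sanity check before it even looks at the data.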
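The cyclic redundancy check that Xmodem CRC substitutes for the checksum is the 16-bit CRC with polynomial 0x1021 and an initial value of zero. A minimal bitwise sketch:

```python
def crc16_xmodem(data, poly=0x1021):
    """Bitwise CRC-16 as used by Xmodem CRC (initial value zero)."""
    crc = 0
    for byte in data:
        crc ^= byte << 8                     # fold the next byte into the register
        for _ in range(8):                   # shift out one bit at a time
            crc = ((crc << 1) ^ poly) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

print(hex(crc16_xmodem(b"123456789")))  # → 0x31c3, the standard check value
```

The well-known check value for the ASCII string "123456789" makes a handy self-test for any implementation.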
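Kermit's seven-bit transfers of eight-bit data rest on prefix quoting. The sketch below is a simplified illustration, not the negotiated encoding of a real Kermit implementation: an "&" prefix marks a stripped eighth bit, and a "#" prefix marks a transformed control character.

```python
CTL_PREFIX, BIT8_PREFIX = ord("#"), ord("&")

def kermit_encode(data):
    """Encode arbitrary bytes so only printable seven-bit characters remain."""
    out = bytearray()
    for byte in data:
        if byte & 0x80:                      # eighth bit set: strip it and mark it
            out.append(BIT8_PREFIX)
            byte &= 0x7F
        if byte < 32 or byte == 127:
            out.append(CTL_PREFIX)
            byte ^= 64                       # maps control codes onto printables
        elif byte in (CTL_PREFIX, BIT8_PREFIX):
            out.append(CTL_PREFIX)           # literal prefix characters get quoted
        out.append(byte)
    return bytes(out)

def kermit_decode(encoded):
    """Reverse the quoting to recover the original eight-bit bytes."""
    out, i = bytearray(), 0
    while i < len(encoded):
        high = 0
        if encoded[i] == BIT8_PREFIX:
            high, i = 0x80, i + 1
        byte = encoded[i]
        if byte == CTL_PREFIX:
            i += 1
            byte = encoded[i]
            if byte not in (CTL_PREFIX, BIT8_PREFIX):
                byte ^= 64
        out.append(byte | high)
        i += 1
    return bytes(out)

sample = bytes(range(256))
print(kermit_decode(kermit_encode(sample)) == sample)   # True
```

Every encoded byte stays below 128, which is what lets the data survive a seven-bit connection.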

The Internet

The main reason most people buy a modem—or an entire PC, for that matter—is to
connect to the Internet. In truth, however, a modem does not connect your PC to the
Internet. The modem connects your PC into the Internet. More like a weld than glue, the
modem makes your PC part of the Internet.
Although the Internet is built using hardware, it is not hardware itself. Similarly, you
need hardware to connect to the Internet, but that hardware only serves as a means to
access what you really want: the information that the Internet can bring to your PC.
Without the right hardware, you could not connect to the Internet, but having the
hardware alone won't get you to the World Wide Web.
Despite its unitary name, there is no giant Internet in the sky or some huge office
complex somewhere. In fact, the Internet is the classic case of there being "no there
there." Like an artichoke, if you slice off individual petals or pieces of the Internet, you'll
soon have a pile of pieces and no Internet anywhere, and you won't find it among the
pieces. Rather, like the artichoke the Internet is the overall combination of the pieces.
Those pieces are tied together both physically and logically. The physical aspect is a
collection of wires, optical fibers, and microwave radio links that carry digital signals
between computers. The combination of connections forms a redundant network.
Computers are linked to one another in a web that provides multiple signal paths between
any two machines.
The logical side is a set of standards for the signals that travel through that network. The
Internet uses various protocols depending on what kind of data is being transferred. The
chief protocol and the defining standard of the Internet is TCP/IP.
In truth, the Internet was not designed to link computers but to tie together computer
networks. As its name implies, the Internet allows data to flow between networks. Even if
you only have a single PC, when you connect with the Internet you must run a network
protocol the same as if you had slung miles of Ethernet cable through your home and
office. Whether you like it or not, you end up tangled in the web of networking when you
connect to the Internet.
The various protocols that make up the Internet require their own books for proper
discussion. We'll just take a moment here to put the Internet in perspective with your PC
and its communication hardware.

History

Locating the origins of the Internet depends on how primitive an ancestor you seek. The
thread of the development of the Internet stretches all the way back to 1958, if you pull
on it hard enough. The Internet's mother—the organization that gave birth to it—was
itself born in the contrail of Sputnik. In October 1957, the U.S.S.R. took the world by
surprise by launching the first artificial satellite and made the U.S. suddenly seem
technologically backward. In response, President Dwight D. Eisenhower launched the
Advanced Research Project Agency as part of the Department of Defense in January
1958.
Then, as now, ARPA's work involved a lot of data processing, much of it at various
university campuses across the country. Each computer, like the college that hosted it,
was a world unto itself. To work on the computer, you had to be at the college. To share
the results of the work on the computer, you needed a letter carrier with biceps built up
from carrying stacks of nine-track tapes from campus to campus. Information flowed no
faster between computers than did the mail.
Bob Taylor, working at ARPA in 1967, developed the idea of linking together into a
redundant, packet-based network all the computers of major universities participating in
the agency's programs. In October 1969, the first bytes crossed what was to become
ARPANet in tests linking Stanford Research Institute and the University of California at
Los Angeles. By December 1969, four nodes of the fledgling inter-networking system
were working.
The system began to assume its current identity with the first use of Transmission
Control Protocol in a network in July 1977. As a demonstration, TCP was used to link a
packet radio network, SATnet, and ARPAnet. Then, in early 1978, TCP was split into a
portion that broke messages into packets, reassembled them after transmission, kept order
among the packets, and handled error control, called TCP, and a second protocol that
concerned itself with the routing of packets through the linkage of network—Internet
Protocol. The two together made TCP/IP, the fundamental protocol of today's Internet.
If the Internet actually had a birthday, it was January 1, 1983, when ARPAnet switched
over from Network Control Protocol to TCP/IP. (By that time, ARPAnet was only one of
many networks linked by TCP/IP.) To give a friendlier front-end to communications with
distant computer systems, Tim Berners-Lee, working at CERN in Geneva in 1990,
invented the World Wide Web.
The final step in the development of today's Internet came in 1991. In that year the
National Science Foundation, which was overseeing the operation of the Internet, lifted
its previous restrictions on its commercial use. The free market free-for-all began.

Structure

The best view of the Internet comes with following a packet from your PC. When you
log into a web site, you actually send a command to a distant server telling it to download
a page of data to your PC. Your web browser packages that command into a packet
labeled with the address of the server storing the page that you want. Your PC sends the
packet to your modem (or terminal adapter), which transmits it across your telephone or
other connection to your Internet Service Provider or ISP.
The ISP actually operates as a message forwarder. At the ISP, your message gets
combined with those from other PCs and sent through a higher speed connection (at least
you should hope it is a high speed connection) to yet another concentrator that eventually
sends your packet to one of five regional centers (located in New York, Chicago, San
Francisco, Los Angeles, and Maryland). There the major Internet carriers exchange
signals, routing the packets from your modem to the carrier that hauls them to their
destination based on their Internet address.
One of the weaknesses of today's Internet is its addressing. All of the Internet addresses
are global. From the address itself, neither you nor a computer can tell where that address
is or, more importantly, how to connect to it. The routers in the Internet regional centers
maintain tables to help quickly send packets to the proper address. Without such
guidance, packets wander throughout the world looking for the right address. Worse, the
current Internet addressing convention, which assigns 32-bit addresses, doesn't have the
breadth necessary to accommodate all future applications of the Internet. Some experts
believe that the Internet will simply run out of available addresses sometime around the
turn of the century—whether the shortfall occurs before or after the millennium depends
on how pessimistic an expert you ask.
Internet addresses are separate and distinct from the domain names used as Uniform
Resource Locators (URLs) through which you specify Web pages. The domain names
give you a handle with a natural-language look. Internet addresses are, like everything in
computing, binary codes.
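The distinction is easy to see in code: a 32-bit address is just a number, while the dotted-quad form is a convenience notation layered on top of it (the sample address below is arbitrary).

```python
def to_32bit(dotted):
    """Pack a dotted-quad address into the single 32-bit number routers use."""
    a, b, c, d = (int(part) for part in dotted.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d

def to_dotted(value):
    """Unpack a 32-bit address back into its familiar dotted-quad form."""
    return ".".join(str((value >> shift) & 0xFF) for shift in (24, 16, 8, 0))

print(to_32bit("192.168.0.1"))   # → 3232235521
print(to_dotted(3232235521))     # → 192.168.0.1
print(2 ** 32)                   # → 4294967296 possible addresses in all
```

Four billion addresses sounds generous, but nothing in the number itself says where on the network the address lives, which is why the routing tables are needed at all.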
Even domain names are running short. Finding a clever and meaningful name for a web
site is a challenge that's ever increasing. Believing that one of the problems in the
shortage of URLs has been the relatively few suffixes available, one of the coordinating
agencies for Internet names, the International Ad Hoc Committee, proposed seven
new suffixes in addition to the six already in use in the U.S. and the national
suffixes used around the world (.US for United States, .CA for Canada, and so on). Table
22.9 lists the present and proposed (as of 1997) suffixes.

Table 22.9. U.S. Internet Domain Name Suffixes

Ending   Application
.arts    Cultural groups
.com     General business and individuals
.edu     Schools
.firm    Businesses
.gov     Government
.info    Information services
.mil     Military
.net     Internet service providers
.org     Groups and organizations
.nom     Individuals
.rec     Recreational sites
.store   Retailers
.web     Web-related organizations

Operation

The World Wide Web is the most visually complicated and compelling aspect of the
Internet. Despite its appearances, however, the web is nothing more than another file
transfer protocol. When you call up a page from the web, the remote server simply
downloads a file to your PC. Your web browser then decodes the page, executing
commands embedded in it to alter the typeface and to display images at the appropriate
place. Most browsers cache several pages (or even megabytes of them) so that when
you step back, you need not wait for the same page to download once again.
The commands for displaying text use their own language called Hypertext Markup
Language, or HTML. As exotic and daunting as HTML sounds, it's nothing more than a
coding system that combines formatting information in textual form with the readable
text of a document. Your browser reads the formatting commands, which are set off by a
special prefix so that the browser knows they are commands, and organizes the text in
accordance with them, arranging it on the page, selecting the appropriate font and
emphasis, and intermixing graphical elements. Writing in HTML is only a matter of
knowing the right codes and where to put them. Web authoring tools embed the proper
commands using menu-driven interfaces so that you don't have to do the memorization.
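A sketch of what a browser does with such a file, using Python's standard HTML parser and a hypothetical scrap of markup: the tags are treated as formatting commands, and everything between them as the readable text of the document.

```python
from html.parser import HTMLParser

class TagCollector(HTMLParser):
    """Separate a page's formatting commands (tags) from its document text."""
    def __init__(self):
        super().__init__()
        self.commands, self.text = [], []

    def handle_starttag(self, tag, attrs):
        self.commands.append(tag)       # a formatting command, e.g. "b" for bold

    def handle_data(self, data):
        self.text.append(data)          # the readable text itself

page = "<html><body><h1>Welcome</h1><p>Plain text with <b>bold</b> words.</p></body></html>"
parser = TagCollector()
parser.feed(page)
print(parser.commands)   # → ['html', 'body', 'h1', 'p', 'b']
print("".join(parser.text))
```

A real browser does far more with the commands, of course, but the division of labor is the same: commands steer the presentation, and the text flows around them.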

Performance Limits

Linking to the Internet requires both hardware and software. The hardware runs a wide
gamut. Most people connect through standard telephone lines using a modem.
Okay, so your Internet access through your modem or digital connection isn't as fast as
you'd like. Welcome to the club. As the Duchess of Windsor never said, "You can never
be too rich or thin or have an Internet connection that's fast enough." Everyone would like
Web pages to download instantly. Barring that, they'd like them to load in a few seconds.
Barring that, they'd just like them to load before the next Ice Age.
The most tempting way to increase your Internet speed is to update your modem—from
last year's laggardly 28.8 Kbps model to a new 33.6 or 56K model. You may note a
surprising difference if you do—little change in your access speed. The small
improvement results from two factors: your adding less speed than you think and your
adding speed in the wrong place.
With analog modems, high speed performance depends on the quality of your
connection. The added edge of a 33.6K modem requires a near perfect telephone line.
Most long distance connections strain at squeezing the full bandwidth of a 28.8K modem
through. Altering the modem does nothing to improve your connection (and likely your
local phone company will do nothing to improve your connection, either). Because 56K
modems don't rely on modulation technology, they promise more hope in getting higher
speeds, but they come with their own limitations. Their asymmetrical design means any
speed boost is only in one direction, toward you (which is not a fatal defect). They are
also dependent on line quality—you need the best to get the best. And they require your
Internet service provider to also provide 56K access.
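A little arithmetic shows how modest the gain is even before line quality and network congestion enter the picture. Assuming roughly ten bits on the wire per asynchronous byte (eight data bits plus start and stop bits) and ignoring compression and protocol overhead entirely:

```python
# Back-of-the-envelope transfer times for a one-megabyte file at each speed.
FILE_BYTES = 1_000_000
BITS_PER_BYTE = 10    # 8 data bits plus start and stop bits, roughly

for label, bps in (("28.8K", 28_800), ("33.6K", 33_600), ("56K", 56_000)):
    seconds = FILE_BYTES * BITS_PER_BYTE / bps
    print(f"{label}: about {seconds / 60:.1f} minutes")
```

Stepping up from 28.8K to 33.6K trims less than a minute from a megabyte download, and even a perfect 56K connection only cuts the wait roughly in half.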
Any change in your Internet access system—a modem upgrade or switch to a digital
service—may reveal the dirty secret: you're working on the wrong bottleneck. I
remember the joy of plugging into T1 access to the Internet, only to discover it wasn't
any faster than my old, slow 28.8K modem. The slowdown I faced wasn't in the local
connection but in the Net itself.
You can easily check your Internet bottleneck and see what you can do about it. Pick a
large file and download it at your normal online time. Then, pry yourself out of bed early
and try downloading the same file at 6AM EST or earlier when Internet traffic is likely to
be low. If you notice an appreciable difference in response and download times, a faster
modem won't likely make your online sessions substantially speedier. The constraints
aren't in your PC but in the server and network itself.

Security

As originally conceived, the Internet is not just a means for moving messages between
PCs. It was designed as a link between computer systems that allowed scientists to share
machines. One researcher in Boston could, for example, run programs on a computer
system in San Francisco. Commands to computer systems move across wires just as
easily as words and images. To the computer and the Internet they are all data.
Much of the expense businesses put into connecting to the Internet involves undoing the
work of the original Internet creators. The first thing they install is a firewall that blocks
outsiders from taking control of the business's internal computer network. They must
remain constantly vigilant that some creative soul doesn't discover yet another flaw in the
security systems built into the Internet itself.
Can someone break into your PC through the Internet? It's certainly possible. Truth be
known, however, rummaging through someone's PC is about as interesting as burrowing
into his sock drawer. Moreover, the number of PCs out there makes it statistically
unlikely any given errant James Bond will commandeer your PC, particularly when
there's stuff much more interesting (and challenging to break into)—like the networks of
multi-billion dollar companies, colleges, government agencies, and the military.
The one weakness to the above argument is that it assumes whoever would break into
your PC uses a degree of intelligence. Even a dull, uninteresting PC loaded with
naught but a two-edition-old copy of Office can be the target of the computer terrorist.
Generally someone whose thinking process got stalled on issues of morality, the
computer terrorist doesn't target you so much as the rest of the world that causes him so
much frustration or boredom. His digital equivalent of a bomb is the computer virus.
A computer virus is program code added to your PC without your permission. The name,
as a metaphor to human disease, is apt. As with a human virus, a virus cannot reproduce
by itself—it takes command of your PC and uses its resources to duplicate itself.
Computer viruses are contagious in that they can be passed along from one machine to
another. And computer viruses vary in their effects, from deadly (wiping out the entire
contents of your hard disk) to trivial (posting a message on your screen). But computer
viruses are nothing more than digital code, and they are machine-specific. Neither you
nor your toaster nor your PDA can catch a computer virus from your PC.
Most computer viruses latch onto your PC and lie in wait. When a specific event
occurs—for example, a key date—they swing into action, performing whatever dreadful
act their designers got a chuckle from. To continue infecting other PCs, they also clone
themselves and copy themselves to whatever disks you use in your PC. In general,
viruses add their code to another program in your PC. They can't do anything until the
program they attach themselves to begins running. Virus writers like to attach their
viruses to parts of the operating system so that the code will load every time you run your
PC. Because anti-virus programs and operating systems now readily detect such viruses,
the virus terrorists have developed other tactics. One of the latest is the macro-virus that
runs as a macro to a program. In effect, the virus is written in a higher level language that
escapes detection by the anti-virus software.
Viruses get into your PC because you let them. They come through any connection your
PC has with the outside world including floppy disks and going online. Browsing web
pages ordinarily won't put you at risk because HTTP doesn't pass along executable
programs. Plug-ins may. Whenever you download a file, however, you run a risk of
bringing a virus with it. Software and drivers that you download are the most likely
carriers. Most webmasters do their best to assure that they don't pass along viruses. You
should always be wary when you download a program from a less reputable site.
There is no such thing as a sub-band or sub-carrier virus that sneaks into your PC through
a "sub-band" of your modem's transmissions. Even were it possible to fiddle with the
operation of a modem and add a new, invisible modulation to it, the information encoded
on it could never get to your PC. Every byte from an analog modem must go through the
UART in the modem or serial port, then be read by your PC's microprocessor. The
modem has no facility to link a sideband signal (even if there were such a thing) to that
data stream.

Fax

Nearly every high speed modem sold today has built-in fax capabilities. This bonus
results from the huge demand for fax in the business world coupled with the trivial cost
of adding fax capabilities to a modern modem. Everything that's necessary for fax comes
built into the same chipsets that make normal high speed modem communications
possible.
That said, few people take advantage of the fax capabilities of the modern PC modem.
Although nearly every modem now offered for PCs is capable of both sending and
receiving faxes, a five-year study conducted by the Gallup Organization and reported in the
Wall Street Journal (April 23, 1996) indicated that 90 percent of the people in Fortune
500 companies that were surveyed used standalone fax machines rather than PC fax. In
smaller companies where budgets are more an issue, about 80 percent still preferred
standalone fax to PC based fax. On the other hand, the study found that the majority
(about 60 percent) of fax users believed that fax machines deliver the most reliable
communications and quickest responses when compared to e-mail, voice mail, and
overnight courier services. Hardly surprisingly, fax machine maker Pitney-Bowes
Facsimile Systems sponsored the study.
On the other hand, that leaves millions or tens of millions of people using their PCs to
send out faxes. With today's driver fax software that makes sending a fax as easy as
printing a letter (if not more so—no ribbons, toner, or paper to deal with), those
percentages are sure to shift. Documents already on paper probably will remain the
province of the standalone fax machine, but those you create on your PC have their best
outlet in your fax modem.

Background

Fax, short for facsimile transmissions, gives the power of Star Trek's transporter system
(usually without the aliens and pyrotechnics) to anyone who needs to get a document
somewhere else in the world at the speed of light. Although fax doesn't quite
dematerialize paper, it does move the images and information a document contains across
continents and reconstructs it at the end of its near instantaneous travels. The recipient
gets to hold in his own hands a nearly exact duplicate of the original, the infamous
reasonable facsimile.
From that angle, fax is a telecopier—a Xerox machine with a thousand miles of wire
between where you slide the original in and the duplicate falls out. In fact, the now aging
telecopiers made by Xerox Corporation were the progenitors of today's fax machines.
Today fax involves the use of a fax modem, a device that converts page scans into a form
compatible with the international telephone system. Or you can use a standalone fax
machine, which combines a fax modem with a scanner, printer, and telephone set. In the
PC realm, the term "fax modem" also refers to adapter boards that slide into expansion
slots to give the host computer the capability to send and receive fax transmissions.
In a classic fax system, you start using fax by dialing up a distant fax system using a
touch pad on your fax machine, just as you would any other telephone. You slide a sheet
of paper into the fax's scanner, and the page curls around a drum in front of a
photodetector. Much as a television picture is broken into numerous scan lines, a fax
machine scans images as a series of lines, takes them one at a time, and strings all of the
lines scanned from a document into a continuous stream of information. The fax machine
converts the data stream into a series of modulated tones for transmission over the
telephone line. After making a connection at the receiving end, another fax machine
converts the data stream into black and white dots representing the original image, much
as a television set reconstructs a TV image. A printer puts the results on paper using
either thermal or laser printer technology.
PC fax systems can do away with the paper. PC fax software can take the all-electronic
images you draw or paint with your graphics software and convert them into the standard
format that's used for fax transmissions. A fax modem in your PC can then send that data
to a standard fax machine, which converts the data into hard copy form. Or your PC fax
system can receive a transmission from a standard fax machine and capture the image
into a graphics file. You can then convert the file into another graphic format using
conversion software, edit the image with your favorite painting program, or turn its text
contents into ASCII form using OCR software. You can even turn your PC into the
equivalent of a standard fax machine by adding a scanner to capture images from paper.
Your printer will turn fax reception into hard copy, although at a fraction of the speed of
a standalone fax machine.
Larger businesses with PC networks incorporate fax servers to allow their employees to
share a common facility for sending their PC-based faxes. The fax server eliminates the
need for each PC in the network to be equipped with its own fax modem and telephone
line.
Reception through fax servers has been problematic, however, because conventional fax
messages provide no easy means for electronic routing through a network to the proper
recipient. To solve this problem, the fax industry is developing standards for
subaddressing capabilities. Subaddresses will be invaluable to businesses with PC
networks using fax servers. With current technology, giving each user a private fax
mailbox means a separate telephone line for each. Using a subaddress, a single fax server
can receive all fax messages and route them to the proper recipient. The fax subaddress
will be a mailbox number that's added to your primary telephone number. It will direct

http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/wrh22.htm (81 de 96) [23/06/2000 06:57:05 p.m.]


Winn L. Rosch Hardware Bible, Electronic Edition, Chapter 22

the automatic routing of the message through a network fax server to an individual's fax
mailbox. The subaddress number will be transmitted during the opening handshake of the
fax modems, and it will be independent from the primary telephone number.
PC fax beats standalone fax with its management capabilities. PC fax software can
broadcast fax messages to as wide a mailing list as you can accommodate on your hard
disk, waiting until early morning hours when long distance rates are cheapest to make the
calls. You can easily manage the mailing list as you would any other PC database.
The concept of facsimile transmissions is not new. As early as 1843, Alexander Bain
patented an electro-mechanical device that could translate wire-based signals into marks
on paper. Newspaper wire photos, which are based on the same principles, have been
used for generations.
The widespread use of fax in business is a more recent phenomenon, however, and its
growth parallels that of the PC for much the same underlying reason. Desktop computers
did not take off until the industry found a standard to follow, the IBM PC. Similarly, the
explosive growth of fax began only after the CCITT adopted standards for the
transmission of facsimile data.

Analog Standards

The original system, now termed Group 1, was based on analog technology and used
frequency shift keying, much as 300 baud modems do, to transmit a page of information
in six minutes. Group 2 improved that analog technology and doubled the speed of
transmission, to three minutes per page.

Group 3

The big break with the past was the CCITT's adoption in 1980 of the Group 3 fax
protocol, which is entirely digitally based. Using data compression and modems that
operate at up to 14,400 bits per second, full page documents can be transmitted in 20 to
60 seconds using the Group 3 protocol. New transmission standards promise to pump up
the basic Group 3 data rate to 28,800 bits per second.

Resolution

Under the original Group 3 standard, two degrees of resolution, or on-paper sharpness,
are possible: standard, which allows 1,728 dots horizontally across the page (about 200
dots per inch) and 100 dots per inch vertically; and fine, which doubles the vertical
resolution to achieve about 200 x 200 dpi. Fine resolution approximately doubles the
time required to transmit a fax page because it doubles the data that must be moved.


Revisions to the Group 3 standard have added more possible resolutions. Two new
resolutions compensate for the slight elongation that creeps into fax documents when
generated and transmitted in purely electronic form. New fax products may optionally
send and receive at resolutions of 204 by 98 pixels per inch in standard mode or 204 by
196 pixels per inch in fine mode. Two new high resolution modes of 300 by 300 pixels
per inch and 400 by 400 pixels per inch were also established. The 300 by 300 mode
enables fax machines, laser printers, and scanners to share the same resolution levels for
higher quality when transferring images between them. To take advantage of these
resolutions, both sending and receiving fax equipment must support the new modes.
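
As a rough check on these figures, the raw bit count of a page at a given resolution,
shrunk by the typical compression factor discussed later in this chapter and divided by
the modem rate, reproduces the quoted page times. The following Python sketch is our own
illustration; the function name and default values (an 11 inch page, an 8x compression
factor) are assumptions, not part of any fax standard.

```python
def fax_page_seconds(h_dots=1728, v_dpi=98, page_inches=11.0,
                     bps=9600, compression=8.0):
    """Rough Group 3 page time: raw scan bits, reduced by the typical
    5-10x compression factor, sent at the modem's bit rate."""
    lines = int(v_dpi * page_inches)   # scan lines down the page
    raw_bits = h_dots * lines          # one bit per dot, black or white
    return raw_bits / compression / bps
```

With the defaults, a standard resolution page comes out near 24 seconds, inside the 20
to 60 second range quoted above; doubling the vertical resolution for fine mode doubles
the time.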

Data Rates

The basic speed of a Group 3 fax transmission depends on the underlying
communications standard that the fax product follows. These standards are similar to data
modem standards. With the exception of V.34, data and fax modems operate under
different standards, even when using the same data rates. Consequently, data and fax
modems are not interchangeable, and a modem that provides high speed fax capabilities
(say 9600 bps) may operate more slowly in data mode (say 2400 bps).
The Group 3 protocol does not define a single speed for fax transmissions but allows the
use of any of a variety of transmission standards. At data rates of 2400 and 4800 bits per
second, fax modems operate under the V.27ter standard. At 7200 and 9600 bits per
second, they follow V.29 (or V.17, which incorporates these V.29 modes). At 12,000 and
14,400 bits per second, fax modems follow V.17. The V.34 standard will take both fax
and data modems up to 28,800 bits per second. New standards will allow the use of the
Group 3 fax protocol over ISDN and other future digital telephone services.
Fax modems are typically described by the communications standards they support or by
the maximum data rate at which they can operate. Most modern fax modems follow the
V.17 standard, which incorporates the lower V.29 speeds. Most will also fall back to
V.27ter to accommodate older, slower fax products.
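
The rate-to-standard pairings above fit naturally into a small lookup, and walking down
it mimics a fax modem falling back to a slower standard on a poor line. The table and
function below are our own illustration, not code from any modem.

```python
# Group 3 data rates paired with the V-series standard named for each speed.
FAX_MODULATIONS = [
    (14400, "V.17"),
    (12000, "V.17"),
    (9600,  "V.29"),    # V.17 incorporates these V.29 modes
    (7200,  "V.29"),
    (4800,  "V.27ter"),
    (2400,  "V.27ter"),
]

def fallback_rate(line_limit_bps):
    """Fastest (rate, standard) at or below what the line will carry."""
    for rate, standard in FAX_MODULATIONS:
        if rate <= line_limit_bps:
            return rate, standard
    return None   # line too poor for any Group 3 rate
```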

Compression

With a typical fax machine, you slide a page in and place a call to a distant fax
machine. Once the connection is negotiated, the fax machine scans
the page with a photodetector inside the machine, which detects the black and white
patterns on the page one line at a time at a resolution of 200 dots per inch. The result is a
series of bits with the digital 1s and 0s corresponding to the black and white samples
each 1/200 of an inch. The fax machine compresses this raw data stream to increase the
apparent data rate and shorten transmission times.
Data compression makes the true speed of transmitting a page dependent on the amount
of detail each page contains. In operation, the data compression algorithm reduces the
amount of data that must be transferred by a factor of 5 to 10. On the other hand, a bad
phone connection can slow fax transmissions as fax modems automatically fall back to
lower speeds to cope with poor line quality.
Group 3 fax products may use any of three levels of data compression designated as MH,
MR, and MMR. The typical Group 3 fax product includes only MH compression. The
others are optional, and MMR is particularly rare. To be sure that a given fax product
uses MR or MMR, you must check its specifications.
MH stands for Modified Huffman encoding, which is also known as one-dimensional
encoding. It was built into the Group 3 standard in 1980 so that a fax machine could send
a full page in less than one minute using a standard V.27ter modem that operated at 4800
bits per second. With 9600 bps modems, that time is cut nearly in half.
MR or Modified Read encoding was added as an option shortly after MH encoding was
adopted. MR starts with standard MH encoding for the first line of the transmission but
then encodes the second line as differences from the first line. Because line data
changes little between adjacent lines, usually little difference information is
required. To prevent errors from rippling through an entire document, at the third line
MR starts over with a plain MH scan. In other words, odd-numbered scan lines are MH
and even lines contain only difference information from the previous line. If a full line is
lost in transmission, MR limits the damage to, at most, two lines. Overall, the
transmission time savings in advancing from MH to MR amounts to 15 to 20 percent, the
exact figure depending on message contents.
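
The alternation of full and difference-coded lines can be sketched as follows. Real MR
encodes the differences as vertical- and horizontal-mode codewords relative to changing
picture elements; recording raw change positions, as here, is a simplification of ours
to show the two-line structure.

```python
def mr_plan(scan_lines):
    """Alternate reference lines (coded with plain MH) with lines that
    carry only their differences from the line above."""
    coded = []
    for i, line in enumerate(scan_lines):
        if i % 2 == 0:
            coded.append(("MH", line))     # fresh reference line
        else:
            prev = scan_lines[i - 1]
            diffs = [j for j, (a, b) in enumerate(zip(prev, line)) if a != b]
            coded.append(("DIFF", diffs))  # usually a short list
    return coded
```

Because a lost line can damage at most one "DIFF" line and its reference, the scheme
limits transmission errors just as the text describes.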
MMR or Modified Modified Read encoding foregoes the safety of the MR technique and
records the entire page as difference data. Using MMR, the first line serves as a reference
and is all white. Every subsequent line is encoded as the difference from the preceding
line until the end of a page. However, an error in any one line repeats in every subsequent
line, so losing one line can garble an entire page. To help prevent such problems, MMR
can incorporate its own error correction mode (ECM) through which the receiving fax
system can request the retransmission of any lines received in error. Only the bad lines
are updated, and the rest of the page is reconstructed from the new data. MMR with ECM
is the most efficient scheme used for compressing fax transmissions and can cut the time
needed for a page transmission with MH in half.
Instead of individual dots, under MH (and thus MR and MMR) the bit pattern of each
scan line on the page is coded as short line segments, and the code indicates the number
of dots in each segment. The fax machine sends this run-length coded data to the remote
fax machine. Included in the transmitted signal is a rudimentary form of error protection,
but missed bits are not reproduced when the receiving fax machine reconstructs the
original page.
The exact code used by MH under Group 3 fax uses four code groups, two for sequences
of white dots, two for sequences of black dots. Sequences from 0 to 63 dots long are
coded using terminating codes, which express the exact number of dots of the given color
in the segment. If the segment of like-color dots scanned from the paper is longer than 63
dots, MH codes it as two code groups, a terminating code and a make-up code. The
make-up code value indicates the number of 64-dot blocks in the single-color segment.
Tables 22.10 through 22.13 list the terminating and make-up code values for both white
and black dots.

Table 22.10. Group 3 Fax Terminating White Codes

White run length Code word length Binary code


0 8 00110101
1 6 000111
2 4 0111
3 4 1000
4 4 1011
5 4 1100
6 4 1110
7 4 1111
8 5 10011
9 5 10100
10 5 00111
11 5 01000
12 6 001000
13 6 000011
14 6 110100
15 6 110101
16 6 101010
17 6 101011
18 7 0100111
19 7 0001100
20 7 0001000
21 7 0010111
22 7 0000011
23 7 0000100
24 7 0101000
25 7 0101011
26 7 0010011
27 7 0100100
28 7 0011000
29 8 00000010
30 8 00000011
31 8 00011010
32 8 00011011
33 8 00010010
34 8 00010011
35 8 00010100
36 8 00010101
37 8 00010110
38 8 00010111
39 8 00101000
40 8 00101001
41 8 00101010
42 8 00101011
43 8 00101100
44 8 00101101
45 8 00000100
46 8 00000101
47 8 00001010
48 8 00001011
49 8 01010010
50 8 01010011
51 8 01010100
52 8 01010101
53 8 00100100
54 8 00100101
55 8 01011000
56 8 01011001
57 8 01011010
58 8 01011011
59 8 01001010
60 8 01001011
61 8 00110010
62 8 00110011
63 8 00110100

Table 22.11. Group 3 Fax White Make-Up Codes

White run length Code length Binary code


64 5 11011
128 5 10010
192 6 010111
256 7 0110111
320 8 00110110
384 8 00110111
448 8 01100100
512 8 01100101
576 8 01101000
640 8 01100111
704 9 011001100
768 9 011001101
832 9 011010010
896 9 011010011
960 9 011010100
1024 9 011010101
1088 9 011010110
1152 9 011010111
1216 9 011011000
1280 9 011011001
1344 9 011011010
1408 9 011011011
1472 9 010011000
1536 9 010011001
1600 9 010011010
1664 6 011000
1728 9 010011011

Table 22.12. Group 3 Fax Terminating Black Codes

Black run length Code length Binary code


0 10 0000110111
1 3 010
2 2 11
3 2 10
4 3 011
5 4 0011
6 4 0010
7 5 00011
8 6 000101
9 6 000100
10 7 0000100
11 7 0000101
12 7 0000111
13 8 00000100
14 8 00000111
15 9 000011000
16 10 0000010111
17 10 0000011000
18 10 0000001000
19 11 00001100111
20 11 00001101000
21 11 00001101100
22 11 00000110111
23 11 00000101000
24 11 00000010111
25 11 00000011000
26 12 000011001010
27 12 000011001011
28 12 000011001100
29 12 000011001101
30 12 000001101000
31 12 000001101001
32 12 000001101010
33 12 000001101011
34 12 000011010010
35 12 000011010011
36 12 000011010100
37 12 000011010101
38 12 000011010110
39 12 000011010111
40 12 000001101100
41 12 000001101101
42 12 000011011010
43 12 000011011011
44 12 000001010100
45 12 000001010101
46 12 000001010110
47 12 000001010111
48 12 000001100100
49 12 000001100101
50 12 000001010010
51 12 000001010011
52 12 000000100100
53 12 000000110111
54 12 000000111000
55 12 000000100111
56 12 000000101000
57 12 000001011000
58 12 000001011001
59 12 000000101011
60 12 000000101100
61 12 000001011010
62 12 000001100110
63 12 000001100111

Table 22.13. Group 3 Fax Black Make-Up Codes

Black run length Code length Binary code


64 10 0000001111
128 12 000011001000
192 12 000011001001
256 12 000001011011
320 12 000000110011
384 12 000000110100
448 12 000000110101
512 13 0000001101100
576 13 0000001101101
640 13 0000001001010
704 13 0000001001011
768 13 0000001001100
832 13 0000001001101
896 13 0000001110010
960 13 0000001110011
1024 13 0000001110100
1088 13 0000001110101
1152 13 0000001110110
1216 13 0000001110111
1280 13 0000001010010
1344 13 0000001010011
1408 13 0000001010100
1472 13 0000001010101
1536 13 0000001011010
1600 13 0000001011011
1664 13 0000001100100
1728 13 0000001100101
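
A few entries from Tables 22.10 through 22.13 are enough to demonstrate the coding
rules in Python. The dictionaries below copy a handful of the published codes; runs
with remainders above 7 dots, or longer than 191 dots, would need the rest of the
tables. The function names are our own.

```python
# Subset of the Group 3 MH code tables (Tables 22.10-22.13).
WHITE_TERM = {0: "00110101", 1: "000111", 2: "0111", 3: "1000",
              4: "1011", 5: "1100", 6: "1110", 7: "1111"}
BLACK_TERM = {0: "0000110111", 1: "010", 2: "11", 3: "10",
              4: "011", 5: "0011", 6: "0010", 7: "00011"}
WHITE_MAKEUP = {64: "11011", 128: "10010"}
BLACK_MAKEUP = {64: "0000001111", 128: "000011001000"}

def runs(line):
    """Split a scan line ('0' = white, '1' = black) into alternating runs,
    starting with a white run (length 0 if the line opens with black)."""
    out, color, length = [], "0", 0
    for dot in line:
        if dot == color:
            length += 1
        else:
            out.append(length)
            color, length = dot, 1
    out.append(length)
    return out

def mh_encode(line):
    """Emit the MH bit string: make-up code for each full 64-dot block,
    then a terminating code for the exact remainder."""
    bits = []
    for i, run in enumerate(runs(line)):
        term = WHITE_TERM if i % 2 == 0 else BLACK_TERM
        makeup = WHITE_MAKEUP if i % 2 == 0 else BLACK_MAKEUP
        if run >= 64:
            block = (run // 64) * 64   # only 64 and 128 are in our subset
            bits.append(makeup[block])
            run -= block
        bits.append(term[run])
    return "".join(bits)
```

For example, a line of four white dots, three black, and one white encodes as the
concatenation 1011, 10, 000111 — twelve bits for eight dots; longer single-color runs
compress far better.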

Binary File Transfer

More than just following the same modem standard, the capabilities of fax service are
merging with those of standard data communications. New fax modems, for example,
incorporate Binary File Transfer capabilities, which enable them to ship BFT files from
one fax system to another as easily as document pages. You could, for example, send a
file from your PC to a printer for a remote printout or to a PC where it could be received
automatically. The receiving fax modem picks up the line, makes the connection, and
records the file as dutifully as it would an ordinary fax page—without anyone standing
around to control the modem.

Group 4

In 1984 the CCITT approved a super-performance facsimile standard, Group 4, which
allows resolutions as high as 400 x 400 dpi as well as higher speed transmissions of
lower resolutions. Although not quite typeset quality (phototypesetters are capable of
resolutions of about 1,200 dpi), the best of Group 4 is about equal to the resolving ability
of the human eye at normal reading distance. However, today's Group 4 fax machines
require high speed, dedicated lines and do not operate as dial-up devices. Group 3
equipment using new, higher resolution standards, and coupled to digital services, offers
a lower cost alternative to Group 4.

Interface Classes

As with data modems, fax modems must link with your PC and its software. Unlike data
modems, which were blessed with a standard since early on (the Hayes command set),
fax modems lacked a single standard. In recent years, however, the Electronics Industry
Association and the Telecommunications Industry Association have created a standard
that is essentially an extension to the Hayes AT command set. The standard embraces
two classes for support of Group 3 fax communications.
Class 1 is the earlier standard. Under the Class 1 standard, most of the processing of fax
documents is performed by PC software. The resulting fax data is sent to the modem for
direct transmission. It includes requirements for autodialing; a GSTN interface; V-series
signal conversion; HDLC data framing, transparency, and error detection; control
commands and responses; and data commands and reception.
Class 2 shifts the work of preparing the fax document for transmission to the fax modem
itself. The modem hardware handles the data compression and error control for the
transmission. The Class 2 standard also incorporates additional flow control and station
identification features including T.30 protocol implementation; session status reporting;
phase C data transfer; padding for minimum scan line time; quality check on received
data; and packet protocol for the DTE/DCE interface.
These classes hint at the most significant difference between PC-based fax systems,
which is software. Fax modem hardware determines the connections that can be made,
but the software determines the ultimate capabilities of the system. A fax modem that
adheres to various standards (classes as well as protocols) will open for you the widest
selection of software and the widest range of features.
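
In software, choosing between these classes typically starts with the EIA/TIA
capability query AT+FCLASS=?, to which the modem replies with the classes it supports.
The sketch below assumes a simple comma-separated reply; the function name and the
preference for Class 2's onboard processing are our choices.

```python
def choose_fax_class(supported_reply, preferred=2):
    """Pick the best shared fax class from a modem's reply to AT+FCLASS=?
    (assumed here to look like "0,1,2") and build the command to set it."""
    supported = {int(tok) for tok in supported_reply.split(",")
                 if tok.strip().isdigit()}
    for cls in range(preferred, 0, -1):   # prefer Class 2, fall back to 1
        if cls in supported:
            return "AT+FCLASS=%d" % cls
    return None                           # data-only modem: no fax class
```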

Installation

Installing a modem usually is a two step process. First you must prepare the modem
hardware by configuring physical switches, sliding it into your PC (or plugging it into an
appropriate port), and finally connecting it to your telephone line and, if need be,
telephone receiver. To make it work, you also must perform one or more software
installations.

Physical Matters

All modems require physical preparation of some kind before you put them to work—at
minimum you must connect the modem to a telephone line (or other service). Modem
makers have gone to not-so-great lengths to assure that the connection process is easy:
they include a cable and mark the jacks on the modem into which it plugs. Given that most
folks have plugged in at least one telephone in their lives, the process isn't difficult.
Complications arise when you need to share a jack with other devices or have an external
modem.
Older modem models often require another kind of physical preparation. They have
several jumpers or switches that control the interface or operation of the modem. You'll
have to properly adjust these switches to get the modem to work.

Switches

Old modems and old software are a particularly pernicious combination when it comes to
physical setup. Some older communications programs required that the modem keep
them abreast of the connection through the Carrier Detect signal. Other programs couldn't
care less about Carrier Detect but carefully scrutinized Data Set Ready. To accommodate
the range of communications applications, many older modems have setup switches that
determine the handling of their control lines. In one position, a switch may force Carrier
Detect to stay on continually, for example. The other setting might cause the status of
Carrier Detect to follow the state of the modems' conversations.
These switches take two forms: mechanical and electrical. In external modems,
mechanical switches are generally of the DIP variety on the rear panel or hidden behind
the front panel of the modem. In internal modems, these switches often take the form of
jumpers. The electrical switches are usually S-register settings.
Old programs were so ill-informed (about modem status and the PC industry in general)
that they would think the modem was stuck perpetually online unless you made the right
setting. Modern software writers are more enlightened, however, and either
accommodate all of the modem's needs through driver software or automatically send the
needed modem settings as part of the setup string sent out by the program.
Older internal modems also require that you designate the resources that the modem is to
use corresponding to the serial port designation occupied by the modem. The serial port
resource settings match those of a standalone serial port—the modem needs both an
interrupt and base address for its registers. You select an unused COM port (or parallel
port, if your modem supports a parallel connection), and assign resources appropriately.
Newer internal modems get these resources automatically assigned using the
Plug-and-Play system. They may use ports and interrupts that don't match any serial
standard. Through driver software both Windows and the modem can accommodate a
much wider range of resources.

Connections

Nearly all modems sold for PCs today are direct connect modems. That is, they directly
plug into the electrical wires of the telephone system. They are a tribute to the simplicity
of the modular telephone wiring system. And they are a pain in the gazebo when you
encounter a telephone that lacks modular wiring, say in a third world locale like a budget
hotel on the outskirts of Chicago. The alternative is using an acoustic coupler that links
the modem by sound waves, an aged but occasionally useful technology.

Direct Connect Modems

By its nature, an external direct connect modem requires at least two connections: it must
link to your PC and to a telephone line. In addition, the circuitry of the modem requires
power from some source, and it requires yet another connection to get it. These external
modems require that you explicitly make these connections using jacks like those shown
in Figure 22.9.
Figure 22.9 Rear panel of an analog modem showing jacks for connections.

Internal direct connect modems need only a telephone line link. They link to your PC
when you slide the modem board into an expansion slot. At the same time, they draw
power from your PC's internal supply.
The standard jack for the serial connection to an external modem is a female 25-pin
D-shell connector. Modems are DCE, data communication devices, so a straight through
cable from the standard male 25-pin D-shell connector on PC serial ports is all that's
required. Nine-pin PC serial ports require adapters, as noted in Chapter 21, "Serial Ports."

External modem power connectors are usually proprietary, matched between the modem
and an external transformer. As long as you use the transformer supplied by the modem
maker, you just plug the power cable into the jack into which it fits.
Both external and internal modems use the same kind of connection between the modem
and your standard telephone line, a standard modular telephone jack and cable. A
two-wire connection is sufficient for most modems and telephone systems. You face two
complications, however—multi-line jacks and additional telephone devices.
Multi-line jacks make connections more complex. A single RJ-45 modular connector
may hold up to four distinct telephone lines, any of which can be linked to your modem
with the proper wiring. The most common standard for assigning pins and color codes to
these phone lines, that used by AT&T and the EIA, is listed in Table 22.14.

Table 22.14. AT&T 258A and EIA 568B Wiring for 8P8C Jacks

Pin Pair Polarity Jack wire color Cable wire color


1 Pair 2 Tip Blue White/Orange
2 Pair 2 Ring Orange Orange/White
3 Pair 3 Tip Black White/Green
4 Pair 1 Ring Red Blue/White
5 Pair 1 Tip Green White/Blue
6 Pair 3 Ring Yellow Green/White
7 Pair 4 Tip Brown White/Brown
8 Pair 4 Ring White Brown/White
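
Table 22.14 can be treated as a small lookup keyed by pin; a quick query shows that
line 1 (pair 1) sits on the center pins of the jack, which is why a simple two-wire
modem cord works. The data structure and names below are our illustration (jack wire
colors omitted for brevity).

```python
# Pin assignments from Table 22.14.
JACK_PINS = {
    1: ("Pair 2", "Tip"),  2: ("Pair 2", "Ring"),
    3: ("Pair 3", "Tip"),  4: ("Pair 1", "Ring"),
    5: ("Pair 1", "Tip"),  6: ("Pair 3", "Ring"),
    7: ("Pair 4", "Tip"),  8: ("Pair 4", "Ring"),
}

def pins_for_pair(pair):
    """The two pins that carry a given telephone line (pair)."""
    return sorted(pin for pin, (p, _) in JACK_PINS.items() if p == pair)
```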

The modern office often includes several devices that want to grab on to your telephone
connection. Besides your modem, you're likely to have an ordinary telephone, an
answering machine, and even a dedicated fax machine. Splitting one line between all of
them can be a challenge.
Some modems have two jacks and are meant to be connected in series with other
telephone devices. These two jacks are not identical—one (usually labeled "Line") links
the modem to the telephone line, and the other (usually labeled "Phone") is meant for
connecting to your telephone set and cuts off the outbound signal when the modem is
operating. Reverse the connections, and the modem won't work properly.
If your modem has only one jack and you want to connect your telephone to the same
line, you'll need to make a parallel connection with an adapter that plugs into your wall
jack and gives two jacks in return. Plug your modem into one and your telephone into the
other. Be careful. If you pick up your telephone handset when the modem is operating,
you'll blast your ears with a modem modulation and probably induce errors into the
modem's data stream.
Other telephone devices connect similarly. Note, however, that some dedicated fax machines
may impose particular requirements. They may favor being the first or last device in a
serial connection. Check the instruction manual accompanying the fax machine to be
certain of its preferred kind of connection.

Acoustic Couplers

An alternative way to connect a modem, one without modular worries, is the acoustic
coupler. The very first modems used acoustic couplers to avoid any electrical contact
with telephone lines because years ago hooking your modem directly to the phone line
was neither practical nor legal. It wasn't practical because modular jacks weren't
available. It wasn't legal because telephone company regulations dating long before the
AT&T telephone monopoly was split up did not permit individuals to directly connect
modems to their telephone lines.
Instead of electrical connections, these vintage modems sent their signals to telephones as
sound waves, acoustically, directly into the handset of a standard telephone. An acoustic
coupler converts the tone-like analog signals made by the modem into sounds that are
then picked up by the microphone in the telephone handset and passed through the
telephone network as electrical signals. To make the sound connection a two way street,
the acoustic coupler also incorporated a microphone to pick up the squawks emanating
from the ear piece of the telephone handset, convert them into electrical signals, and
supply them to the modem for demodulation.
Acoustic couplers are still available and can take many forms. In early equipment, the
acoustic coupler was integral to the modem; it was a special cradle in which you placed
the telephone handset. Today, you're more likely to see couplers made from two rubber
cups designed to engulf the mouthpiece and ear piece of a telephone handset. This latter
form of acoustic coupler persists because it allows modems to be readily connected and
disconnected from non-modular telephones—those that you cannot unplug in order to
directly attach a modem. This connectability is especially important for roving computers
that may be called upon to tie their internal modems into non-modular pay station and
hotel room telephones.
Although acoustic couplers were normally used only at low speeds—typically ordinary
Bell 103 communications at 300 baud—higher speed acoustic couplers that operate at
speeds up to 9600 bits per second are also available. Compared to the cost of a modular
cable, however, all are pricey.

Software

Old operating systems lack internal modem support. They require that you install your
modem into each application you want to use it. The installation may be anything from
specifying a type of modem to typing in by hand the setup strings and commands your
modem uses. Fortunately, most people had only one or two applications that used their
modem so the chore was merely bothersome.
Modern operating systems like Windows 95 have internal modem services. You install
the modem once for the operating system, then all applications that run under the
operating system have access to the modem through an API in the operating system. In
other words, you confront the modem installation chore only once.

Under Windows 95, modem installation is part of the hardware installation wizard. You
select the wizard (as Add New Hardware) through Control Panel. To install a modem,
choose not to have Windows search for new hardware (unless you want to spend the
afternoon while Windows checks all possible hardware installation options), and you'll
get to choose the kind of device you want to install. Select Modem, as shown in Figure
22.10.
Figure 22.10 The Windows 95 Add New Hardware wizard.

Windows will again volunteer to automatically detect your modem. Although this
process works in most cases, you may prefer to save time by explicitly specifying the
brand and model of your modem. To do this, first select the manufacturer of your
modem, then the particular model, as shown in Figure 22.11.
Figure 22.11 Selecting a modem manufacturer and model under Windows 95.

If your modem isn't one of those for which Windows has built-in support, select the Have
Disk option and use the driver disk supplied by the modem's manufacturer.
Windows needs to communicate with your modem, so you must specify the port to use.
At this point, Windows has not checked to see whether your modem has been installed
for a particular port, so you have to tell it. Use the settings you made during the hardware
setup of your modem. Specify them by choosing the option Windows offers, as shown in
Figure 22.12.
Figure 22.12 Selecting a modem port connection under Windows 95.

Windows and your applications can take control of all the functions of your modem
using the driver software. They can adjust the various settings as their needs require.
Sometimes, however, you may want to override their decisions.
You can investigate or set your modem's settings in two ways: through Device Manager,
which you access through the System icon in Control Panel, or through the Modem icon.
The Modem Properties sheet gives you access to many of the controls available to you.
Figure 22.13 shows the Modem tab of the properties sheet, which allows you to select the
maximum speed for your modem to use.
Figure 22.13 Adjusting modem speed under Windows 95.

In addition to these global modem properties, each connection requires you to make
individual settings. For example, each service to which you connect requires
its own telephone number. You make these settings (and adjust other options) through the
Dialing Settings available through the specific application.

Winn L. Rosch Hardware Bible, Electronic Edition, Chapter 23

Chapter 23: Networking


Networks link from two to thousands of PCs together, enabling them to share files and
resources. In addition, a network can centralize the management of a huge base of PCs,
providing one location for coordinated security, backup, upgrades, and control.
Networking now is so essential to regular PC operations that it is built into new
operating systems and serves both in the home and office.

■ Architecture
■ Layers
■ Physical
■ Data Link
■ Network
■ Transport
■ Session
■ Presentation
■ Application
■ Practical Systems
■ Adapter
■ Protocol
■ Service
■ Client
■ Topologies
■ Linear
■ Ring
■ Star
■ Hierarchies
■ Client-Server
■ Peer-to-Peer

http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/wrh23.htm (1 de 29) [23/06/2000 06:58:13 p.m.]


■ Standards
■ Ethernet
■ Token Ring
■ Asynchronous Transfer Mode
■ FDDI
■ AppleTalk
■ Arcnet
■ Zero-Slot LANs
■ Cabling
■ 10Base-5
■ 10Base-2
■ 10Base-T
■ Setup
■ Preparation
■ Host Adapters
■ Wiring
■ Hubs
■ Cable Installation
■ Configuration
■ Enabling Sharing
■ Resource Sharing

23

Networking

By themselves, PCs might never have usurped the role of the mainframe or other large
computer systems. Big systems hold an important business advantage: they are
able to link all the workers at a facility. Because the mainframe holds the data (as well as
all the computing power) in one centralized location, its storage is easily shared. All
workers can have access to the same information and can even work together on projects,
communicating with one another through the central computer.


The network provides connectivity that gives the entire web of PCs collective power far
beyond that of the mainframe. Anywhere two or more PCs are present, the features and
facilities added by a network can make using PCs easier, more accommodating, and more
powerful.
The challenge you face in linking one PC to others is the same as faced by a child
growing up with siblings— it has to learn to share. When kids share, you get more quiet,
greater peace of mind, and less bloodshed. When PCs share, you get the convenience of
sharing files and other resources, centralized management (including the capability to
back up all PCs from one location or use one PC to back up others), and improved
communication between workers in your business.
The drawback to connectivity is that computer networks are even more difficult to
understand and manage than a platoon of teenagers. They have their own rules, their own
value system, their own hardware needs, even their own language. Just listening in on a
conversation between network pros is enough to make you suspect that an alien invasion
from the planet Oxy-10 has succeeded. To get even a glimmer of understanding, you
need to know your way around layers of standards, architectures, and protocols.
Installing a network operating system can take system managers days; deciphering its
idiosyncrasies can keep users and operators puzzled for weeks. Network host adapters
often prove incompatible with other PC hardware, their required interrupts and I/O
addresses locking horns with SCSI boards, port controllers, and other peripherals. And
weaving the wiring for a network is like threading a needle while wearing boxing gloves
during a cyclone that has blown out the electricity, the candles, and your last rays of
hope.
In fact, no one in his right mind would tangle with a network were not the benefits so
great. File sharing across the network alone eliminates a major source of data loss, which
is duplication of records and out-of-sync file updates. Better still, a network lets you get
organized. You can put all your important files in one central location where they are
easier to protect, both from disaster and theft. Instead of worrying about backing up half
a dozen PCs individually, you can easily handle the chore with one command. Electronic
mail can bring order to the chaos of tracking messages and appointments, even in a small
office. With network-based E-mail, you can communicate with your coworkers without
scattering memo slips everywhere. Sharing a costly laser printer or large hard disk (with
some networks, even modems) can cut capital costs for computer equipment by
thousands or tens of thousands of dollars. Instead of buying a flotilla of personal laser
printers, for example, you can serve everyone's hard copy needs with just one machine.
Nearly every aspect of networking has spawned its own literature covered by dozens of
books. This single chapter cannot hope to discuss all aspects of network technology.
Consequently, we'll restrict ourselves to a practical approach. From a foundation of basic
terminology and concepts, we'll work our way to wiring together a small office or home
network and setting up the necessary software. In the end, you won't be an expert, but
you will have a working network that you can use for exchanging files, sharing printers,
and making backups.


Architecture

The biggest issue in building a network is getting everything to work with everything
else—in other words, basic compatibility. By its very nature, a network embraces a more
diverse array of species than Noah escorted into his ark. Besides different brands of PCs,
networks have to have some provisions to accommodate printers, modems, CD ROM
players, fax systems, computers, and workstations that follow their own standards; access
to mainframes and remote data bases; cellular systems; and whatever else anyone might
want to plug in. Although some of these devices might naturally communicate, other
combinations turn cacophony into chaos. Not only are there enough differences in the
hardware interfaces to keep you stripping and soldering cables until the next technology
comes home, you need to translate command sets, data formats, and even character
codes.

Layers

In 1984, the International Organization for Standardization (ISO) laid out a blueprint to bring order to the
nonsense of networking by publishing the Open Systems Interconnection Reference
Model. The approach was much like that used for PC intercompatibility: layering. Just as
a PC has a software layer (the operating system) and a firmware layer (the BIOS) to link
your application software to your underlying hardware (the PC), the ISO built a network
compatibility system from seven layers ranging from the connecting wire to software
applications. These layers define functions and protocols that enable the wide variety of
network hardware and software to work together. The standard was adopted worldwide,
including in the United States and by major organizations such as IBM.
The one standout exception to the OSI design is the Internet, which was already growing,
evolving, and flourishing with its own architecture. While at first the Internet firewall
separated the two approaches to network design, the quick adoption of intranet concepts
by businesses for internal publishing has spread the Internet philosophy inside
organizations. An intranet is a private business network generally used for the
distribution of corporate information and e-mail that uses the same protocols as the
Internet. For example, a business might publish its health services or benefits manual in
HTML on its intranet.
Regardless of where it fits with your beliefs and philosophies, the OSI model presents an
excellent way to understand networks. The layering defined by the OSI Reference Model
illustrates how the various elements of a network—from the wire running through your
office ceiling to the Windows menu of your mail program—fit together and interact.
Although few actual networks or network products exactly fit the model (the ISO is
working on a complete set of standards), the layers show how networks must be
structured and the problems in building a network. The OSI Reference Model has
become the standard framework for describing networks. The Internet also uses a layered
approach, differing in where the lines are drawn.


Physical

The first layer of the OSI Reference Model is the Physical layer, which defines the basic
hardware of the network: the cable that conducts the flow of information
between the devices linked by the network. This layer defines not only the type of wire
(for example, coaxial cable or twisted pair wire) but the possible lengths and connections
of the wire, the signals on the wire, and the interfaces of the cabling system. This is the
level at which the device that connects a PC to the network (the network host adapter) is
defined.

Data Link

Layer 2 in a network is called the Data Link layer. It defines how information gains
access to the wiring system. The Data Link layer defines the basic protocol used in the
local network. This is the method used for deciding which PC can send a message over
the cable at any given time, the form of the messages, and the transmission method of
those messages.
This level defines the structure of the data that is transferred across the network. All data
transmitted under a given protocol takes a common form called the packet, or network
data frame, each of which is a block of data that is strictly formatted and may include
destination and source identification as well as error correction information. All network
data transfers are divided into one or more packets, the length of which is carefully
controlled.
Breaking network messages into multiple packets enables the network to be shared
without interference and interminable waits for access. If you transferred a large file, say
a bitmap, across the network in one piece, you might monopolize the entire network for
the duration of the transfer. Everyone would have to wait. By breaking all transfers into
manageable pieces, everyone gets access in a relatively brief period, making the network
more responsive.
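The packetizing idea can be sketched in a few lines of Python. This is an illustration only, not any real protocol: the Packet fields and the simple checksum are invented for the example.

```python
# Illustrative sketch (not a real protocol): splitting a large transfer into
# fixed-size packets, each tagged with source, destination, a sequence
# number, and a simple checksum so the receiver can detect corruption.

from dataclasses import dataclass

@dataclass
class Packet:
    src: str
    dst: str
    seq: int
    payload: bytes
    checksum: int

def packetize(src, dst, data, max_payload=1500):
    """Split data into Packets no larger than max_payload bytes each."""
    packets = []
    for seq, start in enumerate(range(0, len(data), max_payload)):
        chunk = data[start:start + max_payload]
        packets.append(Packet(src, dst, seq, chunk, sum(chunk) & 0xFFFF))
    return packets

def reassemble(packets):
    """Verify checksums and rejoin payloads in sequence order."""
    for p in packets:
        assert sum(p.payload) & 0xFFFF == p.checksum, "corrupted packet"
    return b"".join(p.payload for p in sorted(packets, key=lambda p: p.seq))

message = b"x" * 4000                      # a 4000 byte transfer
frames = packetize("PC-A", "PC-B", message)
print(len(frames))                         # 3 packets: 1500 + 1500 + 1000
assert reassemble(frames) == message
```

Because the transfer travels as three manageable frames rather than one monolithic block, other PCs can slip their own packets onto the wire between them.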

Network

Layer 3 in the OSI Reference Model is the Network layer, which defines how the
network moves information from one device to another. This layer corresponds to the
hardware interface function of the BIOS in an individual PC, because it provides a common
software interface that hides differences in underlying hardware. Software of higher
layers can run on any lower layer hardware because of the compatibility this layer
affords. Protocols that enable the exchange of packets between different networks
operate at this level.


Transport

Layer 4 is for the control of data movement across the network. This Transport layer
defines how messages are handled, particularly how the network reacts to packets that
become lost or other errors that may occur.

Session

Layer 5 of the OSI Reference Model defines the interaction between applications and
hardware much as a PC BIOS provides function calls for programs. By using functions
defined at this Session layer, programmers can create software that will operate on any of
a wide variety of hardware. In other words, the Session layer provides the interface for
applications and the network. Among PCs, the most common of these application
interfaces is IBM's Network Basic Input/Output System or NetBIOS.

Presentation

Layer 6, the Presentation layer, provides the file interface between network devices and
the PC software. This layer defines the code and format conversions that must take place
so that applications running under a PC operating system, such as DOS, OS/2 or
Macintosh System 7, can understand files stored under the network's native format.

Application

Layer 7 is the part of the network that you deal with personally. This Application layer
includes the basic services you expect from any network including the capability to deal
with files, send messages to other network users through the mail system, and to control
print jobs.
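The layered handoff these seven sections describe can be caricatured in a few lines of Python. Each layer on the sending side wraps the data from the layer above in its own envelope, and the receiving side peels the envelopes off in reverse order; the bracketed headers here are invented purely for illustration.

```python
# A toy illustration of OSI-style layering: on the way down the stack each
# layer wraps the data from the layer above in its own envelope; the
# receiving stack peels the envelopes off in reverse order. The layer names
# come from the OSI Reference Model; the headers are made up for clarity.

LAYERS = ["application", "presentation", "session",
          "transport", "network", "data-link", "physical"]

def send_down(message):
    """Encapsulate: wrap the message once per layer, top to bottom."""
    for layer in LAYERS:
        message = f"[{layer}]{message}"
    return message

def receive_up(frame):
    """Decapsulate: strip one envelope per layer, bottom to top."""
    for layer in reversed(LAYERS):
        prefix = f"[{layer}]"
        assert frame.startswith(prefix), f"malformed {layer} envelope"
        frame = frame[len(prefix):]
    return frame

wire = send_down("hello")
print(wire)        # [physical][data-link]...[application]hello
assert receive_up(wire) == "hello"
```

The point of the exercise is the symmetry: as long as each layer honors its own envelope format, the layers above and below it can change freely, which is exactly the compatibility the OSI model is after.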

Practical Systems

Although the seven layers of the OSI may be useful to engineers designing network
systems, they don't have any particular relevance when you're setting up a small network
in your office or even home. Instead, you face four practical levels when configuring
your network using network or operating system software. Under Windows 95, these
levels include the adapter, protocol, service, and client software, as shown in Figure 23.1,
the Windows 95 Select Network Component Type menu.


Figure 23.1. The Windows 95 Select Network Component menu.

In this model, the network has four components: the client, the adapter, the protocol, and
the service. Each has a distinct role to play in making the connection between PCs.

Adapter

The adapter is the hardware that connects your PC to the network. It is the foundation of
the physical layer of the network, translating the bus signals of your PC into a form that
can skitter through the network wiring. The adapter determines the form and speed of the
physical side of the network.
From the practical perspective, the network adapter is generally the part of the network
that you must buy. You plug the network adapter into your PC to provide a port for
plugging into the network wire.

Protocol

The protocol is the music of the packets, the lyric that controls the harmony of the data
traffic through the network wiring. The protocol dictates not only the logical form of the
packet—the arrangement of address, control information, and data among its bytes—but
also the rules on how the network deals with the packets. The protocol determines how a
packet gets where it is going, what happens when it doesn't, and how to recover when an
error appears in the data as it crosses the network.
Support for the most popular and useful protocols for small networks is included with
today's operating systems. It takes the form of drivers you install to implement a
particular networking system. Windows 95, for example, includes several.

Service

The service of the network is the work the packets perform. A network usually offers
several services, all of them useful. Network services include exchanging files between disk
drives (or making a drive far away on the network appear to be local to any or every PC
in the network), sharing a printer resource so that all PCs have access to a
centralized printer, and passing electronic mail from a centralized post office to individual
machines.
Most networking software includes the more useful services as part of the basic package.
Windows 95 includes file and printer sharing as its primary service. The basic operating
system also includes e-mail support.


Client

To the network, the client is not you but where the operating system of your PC and the
network come together. It's the software that brings you the network resources so that you
can take advantage of the services.

Topologies

The topology of a network is the lay of the cables across the land. Most networks involve
cables, lots of them, with at least one leading to every PC. Like the proverbial can of
worms, they can crawl off in every direction and create chaos.
If PCs are to talk to one another, however, somehow the cables must come together so
that signals can move from one PC to another. If network cables were ordinary wires,
you might splice them together with the same abandon as making spaghetti, and the
results might have a similar aesthetic. But networks operate at high frequencies and their
signals behave like the transmissions of radio stations. The network waves flash down
the wires, bounce around, and ricochet from every splice. The waves themselves stretch
and bend, losing their shape and their digital purity.
To work reliably, the network cable must be a carefully controlled environment. It must
present a constant impedance to the signal, and every connection must be properly made.
Any irregularity increases the chance of noise, interference, and error.
Designers have developed several topologies for PC networks. Most can be reduced to
one of three basic layouts: linear, ring, and star. The names describe how the cables run
throughout an installation.

Linear

The network with linear cabling has a single backbone, one main cable that runs from
one end of the system to the other. Along the way, PCs tap into this backbone to send and
receive signals. The PCs link to the backbone with a single cable through which they
both send and receive. In effect, the network backbone functions as a data bus, and this
configuration is often called a bus topology. Figure 23.2 illustrates a simple network bus.
Figure 23.2. Simple bus network topology.

In the typical installation, a wire leads from the PC to the backbone, and a T-connector
links the two. The network backbone has a definite beginning and end. In most cases,
these ends are terminated with a resistor matching the characteristic impedance of the
cable in the backbone. That is, a 50 ohm network cable will have a 50 ohm termination
at either end. These terminations prevent signals from reflecting from the ends of the
cable, helping assure signal integrity.
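The reason termination matters can be put in numbers. A signal reaching the end of a transmission line reflects according to the standard formula gamma = (Zload - Z0) / (Zload + Z0), where Z0 is the cable's characteristic impedance and Zload is whatever terminates it. The short sketch below uses 50 ohms, the standard impedance of thin Ethernet coax, to show that a matched terminator absorbs the signal completely while a mismatch bounces part of it back.

```python
# Why terminate the backbone? A signal reaching the end of a cable reflects
# with coefficient gamma = (Zl - Z0) / (Zl + Z0), where Z0 is the cable's
# characteristic impedance and Zl the load terminating it. A matched
# terminator (Zl == Z0) gives zero reflection; an open end bounces the
# whole signal back down the wire to interfere with later bits.

def reflection_coefficient(z_load, z_cable):
    return (z_load - z_cable) / (z_load + z_cable)

z0 = 50.0                                   # thin Ethernet coax is 50 ohms
print(reflection_coefficient(50.0, z0))     # matched: 0.0, no reflection
print(reflection_coefficient(1e12, z0))     # open end: ~1.0, full reflection
print(reflection_coefficient(75.0, z0))     # wrong cable type: 0.2 partial
```

Even the modest mismatch of plugging 75 ohm video coax into a 50 ohm network reflects a fifth of the signal, which is more than enough to corrupt data at network speeds.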

Ring

The ring topology looks like a linear network that's biting its own tail. The backbone is a
continuous loop, a ring, with no end. But the ring is not a single, continuous wire. Instead
it is made of short segments daisy chained from one PC to the next, the last connected, in
turn, to the first. Each PC thus has two connections. One wire connects a PC to the PC
before it in the ring, and a second wire leads to the next PC in the ring. Signals must
traverse through one PC to get to the next, and the signals typically are listened to and
analyzed along the way.

Star

Both linear and ring topologies sprawl all over creation. The star topology shines a ray of
light into tangled installations. Just as rays blast out from the core of a star, in the star
topology connecting cables emanate from a centralized location called a hub, and each
cable links a single PC into the network. A popular image for star topology is an
old-fashioned wagon wheel—the network hub is the hub, the cables are the spokes, and
the PCs are ignored in the analogy. Try visualizing them as clumps of mud clinging to
the rim (which, depending on your particular network situation, may be an apt metaphor).
If you can't, Figure 23.3 illustrates a simple star configuration of a network.
Figure 23.3. Simple star network topology.

In the most popular network systems based on the star topology, each cable is actually
twofold. Each has two distinct connections, one for sending data from the hub to an
individual PC and one for the PC to send data back to the hub. These paired connections
are typically packaged into a single cable.
Star-style networks have become popular because their topology matches that of other
office wiring. In the typical office building, the most common wiring is used by
telephones, and telephone wiring converges at the wiring closet that houses the PBX
(Private Branch Exchange, the telephone switching equipment for a business). Star-style
topologies require only a single cable and connection for each device to link to the
central location where all cables converge into the network hub.
As distinct as these three topologies seem, they are really not so different. Cut the ring,
for example, and the result is a linear system. Or shrink the ring down to a single point,
and the result is a star. This confluence is hardly coincidental. All networks must perform
the same functions, so you should expect all the varieties to be functionally the same.


Hierarchies

Topology describes only one physical aspect of a network. The connections between the
various PCs in a network also can fit one of two logical hierarchies. The alternatives
form a class system among PCs. Some networks treat all PCs the same; others elevate
particular computers to a special, more important role. Although the network serves the
same role in either case, these two hierarchical systems enforce a few differences in how
the network is used.

Client-Server

Before PC networks, mainframe computers extended their power to individual desks
through terminal connections. By necessity, these mainframe systems put all the
computer power in one central location that served the needs of everyone using the
system. There simply wasn't any other computer power in the system.
In big companies, this kind of computer system organization became an entrenched part
of the corporate bureaucracy. Transferring the structure to PCs was natural. At first, the
computer managers merely connected PCs to the mainframe as smarter terminals. The
connection schemes were called micro-to-mainframe links.
Eventually, however, some managers discovered that PCs provided more power at
substantially less cost than the mainframe, and the actual computing was shifted down to
the desktop. The powerful mainframe computer was left to do nothing but supply data
(and sometimes program) files to the PCs. Managers needed no large amount of
enlightenment to see that even a modestly powerful PC could shuffle files around, and
the mainframe was replaced by a PC that could manage the shared storage required by the
system. Because the special PC served the needs of the other PCs, it was called a server.
The corresponding term for the desktop PC workstations is client, a carryover from the
mainframe days. This form of network link is, consequently, called a client-server
hierarchy. Note that the special role of the server gives it more importance but also
relegates it to the role of a slave that serves the need of many masters, the clients. The
server in a client-server network runs special software (the network operating system).
The server need not be a PC. Sometimes a mainframe still slaves away at the center of a
network. Typically, the server is a special PC more powerful than the rest in the network
(notwithstanding that the server's work is less computationally intense than that of the
clients it serves). Its most important feature is storage. Because its file space is shared by
many—perhaps hundreds—of PCs, it requires huge amounts of mass storage. In addition,
the server is designed to be more reliable because all the PCs in the network depend on
its proper functioning. If it fails, the entire network suffers.
Most modern servers are designed to be fault-tolerant. That is, they will continue to run
without interruption despite a fault, such as the failure of a hardware subsystem. Most
servers also use the most powerful available microprocessors, not from need but because
the price difference is tiny once the additional ruggedness and storage are factored
in—and because most managers think that the single most important PC in a network
should be the most powerful.

Peer-to-Peer

The client-server is a royalist system, particularly if you view a nation's leader as a
servant of the people rather than a profiteer. The opposite is the true democracy in which
every PC is equal. PCs share files and other resources (such as printers) among one
another. They share equally, each as the peer of the others, so this scheme is called
peer-to-peer networking.
Peer-to-peer means that there is no dedicated file server as you would find in big,
complex networks. All PCs can have their own, local storage, and each PC is (or can be)
granted access to the disk drives and printers connected to the others. In most
peer-to-peer schemes, the same DOS commands apply to both the drives local to an
individual computer and those accessed remotely through the network. Because most
people already know enough about DOS to change drive letters, they can put the network
to work almost instantly.
Even in peer-to-peer networks, some PCs are likely to be more powerful than others or
have larger disk drives or some such distinction. Some PCs may have only floppy disks
and depend on the network to supply the equivalent of hard disk storage. In other words,
some PCs are created more equal than others. In fact, it's not unusual for a peer-to-peer
network to have a single dominant PC that serves most of the needs of the others.
Functionally, the client-server and peer-to-peer architectures are not digitally distinct like
black and white but shade into one another.
In a peer-to-peer network, no one PC needs to be particularly endowed with
overwhelming mass storage or an incomprehensible network operating system. Each
computer connects to the network using simple driver software that makes the resources
of the other PCs appear as extra disk drives and printers. There's no monstrous network
operating system to deal with, only a few extra entries to each PC's CONFIG.SYS or
AUTOEXEC.BAT file. Although someone does have to make decisions in setting up the
peer-to-peer network (such as which PCs have access to which drives in other PCs),
day-to-day operations usually don't require an administrator.
The peer-to-peer scheme has another advantage: you don't need to buy an expensive file
server. Not only will that save cash, it can give you the security of redundancy. The
failure of a server puts an entire network out of action. The failure of a network peer only
eliminates that peer; the rest of the network continues to operate. And if you duplicate
vital files on at least two peers, you'll never have to fear losing data from the crash of a
single system.


Standards

A network is a collection of ideas, of hardware and software. The software comprises
both the programs that make it work and the protocols that let everything work together.
The hardware involves the network adapters, the wires, hubs, concentrators, routers, and
even more exotic fauna. Getting it all to work together requires standardization.
Because of the layered design of most networks, these standards can appear at any level
in the hierarchy, and they do. Some cover a single layer, others span them all to create a
cohesive system.
In your exploration of small networks, you're apt to run into many of these standards.
What follows is a brief discussion of some of the more common names you'll encounter.

Ethernet

The progenitor of all of today's networks was the Ethernet system originally developed in
the 1970s at the Xerox Corporation's Palo Alto Research Center for linking its Alto
workstations to laser printers. The invention of Ethernet is usually credited to Robert
Metcalfe, who later went on to found 3Com Corporation, an early major supplier of PC
networking hardware and software. During its first years, Ethernet was proprietary to
Xerox, a technology without a purpose, in a world in which the PC had not yet been
invented.
In September 1980, however, Xerox joined with minicomputer maker Digital Equipment
Corporation and semiconductor manufacturer Intel Corporation to publish the first
Ethernet specification, which later became known as E.SPEC VER.1. The original
specification was followed in November 1982 by a revision that has become today's
widely used standard, E.SPEC VER.2.
This specification is not what most people call Ethernet, however. In January 1985, the
Institute of Electrical and Electronic Engineers published a networking system derived
from Ethernet but not identical with it. The result was the IEEE 802.3 specification.
Ethernet and IEEE 802.3 share many characteristics—physically, they use the same
wiring and connection schemes—but each uses its own packet structure. Consequently,
although you can plug host adapters for true Ethernet and IEEE 802.3 together in the
same cabling system, the two standards will not be able to talk to one another. Some PC
host adapters, however, know how to speak both languages and can exchange packets
with either standard.
The basis of Ethernet is a clever scheme for arbitrating access to the central bus of the
system. The protocol, formally called Carrier Sense Multiple Access with
Collision Detection (CSMA/CD), is often described as being like a party line. It's not. It's much more
like polite conversation. All the PCs in the network patiently listen to everything that's
going on across the network backbone. Only when there is a pause in the conversation
will a new PC begin to speak. And if two or more PCs start to talk at the same time, all
become quiet. They will wait for a random interval (and because it is random, each will
wait a different interval) and, after the wait, attempt to begin speaking again. One will be
lucky and win access to the network. The other, unlucky PCs will hear the first PC
blabbing away and wait for another pause.
Access to the network line is not guaranteed in any period by the Ethernet protocol. The
laws of probability guide the system, and they dictate that eventually every device that
desires access will get it. Consequently, Ethernet is described as a probabilistic access
system. As a practical matter, when few devices (compared to the bandwidth of the
system) attempt to use the Ethernet system, delays are minimal because all of them trying
to talk at one time is unlikely. As demand approaches the capacity of the system,
however, the efficiency of probability-based protocol plummets. The size limit of an
Ethernet system is not set by the number of PCs but by the amount of traffic; the more
packets PCs send, the more contention, and the more frustrated attempts.
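The listen, collide, and back-off cycle can be modeled with a toy simulation. The sketch below is heavily simplified: time is divided into neat slots, propagation delay and the jam signal are ignored, and the real standard's 16-attempt give-up rule is omitted. But it captures the random backoff that makes Ethernet access probabilistic.

```python
import random

# A much-simplified, slotted simulation of CSMA/CD: every station with a
# frame waits for the wire to fall silent, transmits, and on a collision
# backs off a random number of slots (binary exponential backoff, as in
# Ethernet, with the exponent capped at 10). Real Ethernet adds
# propagation delay, jam signals, and a give-up limit omitted here.

def simulate(stations, seed=1):
    rng = random.Random(seed)
    backoff = {s: 0 for s in stations}      # slots each station must wait
    attempts = {s: 0 for s in stations}     # collisions seen so far
    done, slots = [], 0
    while backoff:
        slots += 1
        ready = [s for s in backoff if backoff[s] == 0]
        for s in backoff:                   # everyone else counts down
            backoff[s] = max(0, backoff[s] - 1)
        if len(ready) == 1:                 # exactly one talker: success
            done.append(ready[0])
            del backoff[ready[0]]
        elif len(ready) > 1:                # collision: all back off
            for s in ready:
                attempts[s] += 1
                backoff[s] = rng.randint(1, 2 ** min(attempts[s], 10))
    return done, slots

order, elapsed = simulate(["A", "B", "C", "D"])
print(order)    # every station eventually gets its turn
```

Run it with more stations and the elapsed slot count climbs sharply, mirroring the way a real Ethernet's efficiency plummets as demand approaches capacity.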
The Ethernet protocol has many physical embodiments. These can embrace any
topology, type of cable, or speed. The IEEE 802.3 specification defines several of these,
and assigns a code name to each. Today's most popular Ethernet implementations operate
at a raw speed of 10 Mbps. That is, the signaling rate on the Ethernet (or
IEEE 802.3) wire is 10 megabits per second. Actual throughput is lower because packets cannot occupy
the full bandwidth of the Ethernet system. Moreover, every packet contains formatting
and address data that steals space that could be used for data.
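A little arithmetic shows how much of the raw capacity that formatting consumes. Using the classic Ethernet frame figures (8 bytes of preamble, 14 of header, 4 of frame check sequence, and a 12 byte interframe gap), a full payload uses the wire quite efficiently, while minimum-size frames waste nearly half of it.

```python
# The overhead behind "actual throughput is lower": every classic Ethernet
# frame spends 8 bytes on preamble, 14 on header (destination, source,
# type), 4 on the frame check sequence, plus a 12-byte idle gap between
# frames -- 38 bytes that carry no user data.

OVERHEAD = 8 + 14 + 4 + 12   # preamble + header + FCS + interframe gap

def efficiency(payload):
    """Fraction of wire time spent on user data for one frame."""
    return payload / (payload + OVERHEAD)

print(round(efficiency(1500), 3))   # full frames: 0.975 of raw capacity
print(round(efficiency(46), 3))     # minimum frames: only 0.548
```

And those figures assume a lone transmitter; once several PCs contend for the wire, collisions and backoff shave the usable throughput further still.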
Today's four most popular IEEE 802.3 implementations are 10Base-5, 10Base-2,
10Base-T, and 100Base-T. Although daunting at first look, you can remember the names
as codes: The first number indicates the operating speed of the system in megahertz; the
central word "Base" indicates that Ethernet protocol is the basis of the system; and the
final character designates the wire used for the system. The final digit (when numerical)
refers to the distance in hundreds of feet the network can stretch, but, as a practical
matter, also specifies the type of cable used. Coincidentally, the number also describes
the diameter of the cable; under the 10 MHz 802.3 standard, the "5" stands for a thick
coaxial cable that's about one-half (.5) inch in diameter; the "2" refers to a thinner coaxial
cable about .2 inch in diameter; the "T" indicates twisted pair wiring like that used by
telephone systems.
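The naming rule lends itself to a toy decoder. The sketch below is purely illustrative; the media descriptions are paraphrased from the discussion here, not taken from the IEEE standard itself.

```python
import re

# Decoding IEEE 802.3 names like "10Base-T": the leading number is the
# speed in megabits per second, "Base" marks baseband signaling, and the
# final code names the cable (a digit is roughly the maximum segment
# length in hundreds of meters, "T" is twisted pair).

MEDIA = {"5": "thick coax (500 m segments)",
         "2": "thin coax (roughly 200 m segments)",
         "T": "twisted pair",
         "TX": "Category 5 twisted pair",
         "T4": "voice-grade twisted pair",
         "FX": "fiber optic cable"}

def decode(name):
    speed, medium = re.fullmatch(r"(\d+)Base-(\w+)", name).groups()
    return f"{speed} Mbps over {MEDIA.get(medium, medium)}"

for n in ["10Base-5", "10Base-2", "10Base-T", "100Base-TX"]:
    print(n, "->", decode(n))
```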
Other differences besides cable type separate these Ethernet schemes. The 10Base-5 and
10Base-2 use a linear topology; 10Base-T and 100Base-T are built in a star
configuration. The three IEEE 802.3 systems with the "10" prefix operate at the same 10
Mbps speed using the same Ethernet protocol, so a single network can tie together all
three technologies without the need for such complications as protocol converters. In
typical complex installations, thick coaxial cable links far-flung workgroups, each of
which is tied together locally with a 10Base-T hub. This flexibility makes IEEE 802.3
today's leading networking choice.
The 100Base-T system operates at 100 Mbps, yielding higher performance consistent
with transferring multimedia and other data intensive applications across the network. Its
speed has made it the system of choice in most new installations.
Actually 100Base-T isn't a single system but a family of siblings each designed for
different wiring environments. 100Base-TX is the purest implementation—and the most
demanding. It requires Category 5 wiring, twisted pair designed for data

http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/wrh23.htm (13 de 29) [23/06/2000 06:58:13 p.m.]


Winn L. Rosch Hardware Bible, Electronic Edition, Chapter 23

applications. In return for the cost of the higher grade wiring, it permits full duplex
operation so any network node can both send and receive data simultaneously.
100Base-T4 works with shielded or unshielded voice-grade wiring, Categories 3 and 4, but
only allows for half-duplex operations. 100Base-FX uses the same timing and protocol as
the 100Base-T systems but operates across fiber optic cables instead of copper twisted
pair wiring. It also allows full duplex operation.
StarLAN is the Ethernet derivative developed by AT&T and sanctioned by the IEEE as
1Base-5 in the 802.3 specification. As you would expect from a networking system
designed by a telephone company, it was designed to use unshielded twisted pair wiring
with a star configuration (although nodes can also be daisy chained) that can take
advantage of standard office telephone wiring (where all the wires from a given office or
floor converge in a wiring closet). The speed of StarLAN was set at 1 MHz to assure
reliable operation over the inexpensive wiring the system used. Because 10Base-T
effectively fills the same wiring niche with 10 times the speed, StarLAN has fallen out of
favor.

Token Ring

Another way to handle packets across a network is a concept called token passing. In this
scheme, the token is a coded electronic signal used to control network access. IBM
originated the most popular form of this protocol, which after further development, was
sanctioned by the IEEE as its 802.5 standard. Because this standard requires a ring
topology, it is commonly called Token Ring networking. Although once thought the most
formidable competitor to Ethernet, it is now chiefly used only in large corporations.
Other networking systems such as FDDI (see the "FDDI" section that follows) use a
similar token passing protocol.
In a token passing system, all PCs remain silent until given permission to talk on the
network line. They get permission by receiving the token. A single token circulates
around the entire network, passed from PC to PC in a closed loop that forms a ring
topology. If a PC receives the token and has no packets to give to the network to deliver,
it simply passes along the token to the next PC in the ring. If, however, the PC has a
packet to send, it links the packet to the token along with the address of the destination
PC (or server). All the PCs around the ring then pass this token and packet along until it
reaches its destination. The receiving PC strips off the data and puts the token back on
the network, tagged to indicate that the target PC has received its packet. The remaining
PCs in the network pass the token around until it reaches the original sending PC. The
originating PC removes the tag and passes the token along the network to enable another
PC to send a packet.
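The round trip described above can be sketched as a toy simulation. This is an illustrative model only (station names and the data structures are invented for the example); it shows one full circulation of the token and the order in which waiting senders get their turn:

```python
from collections import deque

def token_ring(stations, pending):
    """Toy model of the token-passing round described above.
    `stations` lists node names in ring order; `pending` maps a sender
    to its (destination, packet). Returns deliveries in the order made."""
    deliveries = []
    ring = deque(stations)
    for _ in range(len(stations)):      # one full circulation of the token
        holder = ring[0]
        if holder in pending:
            dest, packet = pending.pop(holder)
            # The packet rides the token around the ring to its destination,
            # is stripped off there, and the tagged token returns to the sender.
            deliveries.append((holder, dest, packet))
        ring.rotate(-1)                 # pass the token to the next PC
    return deliveries

print(token_ring(["A", "B", "C", "D"], {"B": ("D", "hello")}))
# [('B', 'D', 'hello')]
```

Note how every station gets exactly one chance per circulation, which is the guaranteed-access property the text describes.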
This token passing method offers two chief benefits: reliability and guaranteed access.
Because the token circulates back to the sending PC, it gives a confirmation that the
packet was properly received by the recipient. The protocol also assures that the PC next
in line after the sending PC will always be the next one to get the token to enable
communication. As the token circulates, it allows each PC to use the network. The token
must go all the way around the ring—and give every other PC a chance to use the
network—before it returns to any given PC to enable it to use the network again. Access
to the network is guaranteed even when network traffic is heavy. No PC can get locked
out of the network because of a run of bad luck in trying to gain access.
The original Token Ring specification called for operation at 4 MHz. A revision to the
standard allows for operation at 16 MHz. The specification originally required the use of
a special four-wire shielded twisted pair cabling, but current standards enable several types
of cabling, including unshielded twisted pair wires.

Asynchronous Transfer Mode

One of the darling technologies of new networking, Asynchronous Transfer Mode, or
ATM, is fundamentally different from other networking systems. It is a switched
technology rather than a shared bus. Instead of broadcasting down a wire, a sending PC
sets up a requested path to the destination specifying various attributes of the connection,
including its speed. The switch need not be physical. In fact, ATM is independent of the
underlying physical wiring and works with almost any physical network architecture
from twisted pair to optical fiber. Its performance depends on the underlying physical
implementation, but its switched design assures the full bandwidth of the medium for the
duration of each connection.
Instead of packets, ATM data takes the form of cells. The length of each cell is fixed at
53 bytes. The first five serve as an address. The remaining 48 are the payload, the data
the packet transfers. The payload can be any kind of data—database entries, audio, video,
or whatever. ATM is independent of data types and carries any and all bytes with exactly
the same dispatch.
ATM is built from a layered structure. It takes the form of three layers at the bottom of
the network implementation—the physical layer, the ATM layer, and the adaptation
layer.
The physical layer controls how ATM connects with the overall network wiring. It
defines both the electrical characteristics of the connection and the actual network
interface.
The ATM layer takes care of addressing and routing. It adds the five-byte address header
to each data cell to assure that the payload travels to the right destination.
The adaptation layer takes the data supplied from higher up the network hierarchy and
divides it into the 48-byte payload that will fit into each cell.
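The adaptation layer's segmentation job reduces to simple byte arithmetic. The sketch below shows only the 5 + 48 = 53 byte cell structure described above; real AAL types add their own framing, and the function and names here are illustrative:

```python
def segment_into_cells(data, header):
    """Carve a byte stream into 48-byte payloads, pad the last one, and
    prepend the 5-byte header the ATM layer supplies. Illustrates only
    the 53-byte cell arithmetic described in the text."""
    assert len(header) == 5
    cells = []
    for i in range(0, len(data), 48):
        payload = data[i:i + 48].ljust(48, b"\x00")  # pad the final payload
        cells.append(header + payload)               # every cell is 53 bytes
    return cells

cells = segment_into_cells(b"x" * 100, b"ADR\x00\x00")
print(len(cells), len(cells[0]))   # 3 53
```

Because every cell is the same fixed size, switching hardware can route them with minimal per-cell decision making, which is a key source of ATM's speed.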
ATM is part of a network. By itself it does not make a network. Because of its high speed
potential and versatility, it is becoming popular in large businesses where it neatly
sandwiches between other network standards.

FDDI


Although many publications use the acronym FDDI to refer to any network using optical
fibers as the transmission medium, it actually refers to an international networking
standard sanctioned by the American National Standards Institute and the International
Organization for Standardization. The initials stand for Fiber Distributed Data Interface. The
standard is based on a dual counter rotating fiber optic ring topology operating with a 100
MHz data rate. The FDDI standard permits the connection of up to 500 PCs or
other nodes, with a distance of up to 2 kilometers between PCs and an entire spread of up
to 100 kilometers.

AppleTalk

Apple Computer developed its own networking scheme for its Macintosh computers.
Called AppleTalk, the network is built around an Apple-developed hardware
implementation that Apple called LocalTalk. In operation, LocalTalk is similar to
Ethernet in that it uses probabilistic access with Carrier Sensing, Multiple Access
technology. Instead of after the fact collision detection, however, LocalTalk uses
collision avoidance. Originally designed for shielded twisted pair cable, many LocalTalk
networks use unshielded twisted pair telephone wiring. The LocalTalk system is slow,
however, with a communication speed of 230.4 kHz (that's about one quarter
megahertz).

Arcnet

Another token passing network system, Arcnet, pre-dates IEEE 802.5 Token Ring.
Arcnet was developed in 1977 by Datapoint Corporation. In an Arcnet system, each PC
is assigned an eight-bit address from 1 to 255. The token is passed from one PC to the
next in numerical order. Each PC codes the token signal with the value of the next
address in the network, the network automatically configuring itself so that only active
address numbers are used. The number is broadcast on the network so that all PCs
receive every token, but only the one with the right address can use it. If the PC receiving
the token has a packet to send, it is then allowed to send out the packet. When the packet
is received, an acknowledgment is sent back to the originating PC. The PC then passes
the token to the next highest address. If the PC that receives the token has no packets to
send, it simply changes the address in the token to the next higher value and broadcasts
the token.
Because the token is broadcast, the Arcnet system does not require a ring. Instead it uses
a simple bus topology that includes star-like hubs. Arcnet hubs are either active or
passive. Active hubs amplify the Arcnet signal and act as distribution amplifiers to any
number of ports (typically eight). Passive hubs act like simple signal splitters and
typically connect up to four PCs. The basic Arcnet system uses coaxial cable. Compared
to today's Ethernet systems, it is slow, operating at 2.5 megahertz.


Zero-Slot LANs

When you need to connect only a few PCs and you don't care about speed, you have an
alternative in several proprietary systems that are lumped together as Zero-Slot LANs.
These earn their name from their capability to give you a network connection without
requiring you to fill an expansion slot in your PC with a network host adapter. Instead of
a host adapter, most Zero-Slot LANs use a port already built into most PCs, the serial
port.
Protocols and topologies of Zero-Slot LANs vary with each manufacturer's
implementation. Some are built as star-like systems with centralized hubs; others are
connected as buses. Nearly all use twisted pair wiring, although some need only three
connections and others use up to eight. The former take advantage of a protocol derived
from Ethernet; the latter use the handshaking signals in the serial port for hardware
arbitration of access to the network.
The one factor shared by all Zero-Slot LANs is low speed. All are constrained by the
maximum speed of the basic PC serial port, which is 115,200 bits per second (or about
one-tenth megahertz). Lower speeds are often necessary with long reaches of cable
because Zero-Slot LAN signals are particularly prone to interference. Serial ports provide
only single-ended signals, which are not able to cancel induced noise and interference, as
is possible with balanced signals.

Cabling

One of the biggest problems faced by network system designers is keeping radiation and
interference under control. All wires act as antennas, sending and receiving signals. As
frequencies and wire lengths increase, so does the radiation. The pressure is on
network designers to increase both the speed (with higher frequencies) and reach of
networks (with longer cables) to keep up with the increasing demands of industry.
Two strategies are commonly used to combat interference from network wiring. One is
the coaxial cable, so called because it has a central conductor surrounded by one or more
shields that may be a continuous braid or metalized plastic film. Each shield amounts to a
long thin tube, and each shares the same longitudinal axis: the central conductor. The
surrounding shield typically operates at ground potential, which prevents stray signals
from leaking out of the central conductor and noise from seeping in. Because of its shielding,
coaxial cable is naturally resistant to radiation. As a result, coax was the early choice for
network wiring.
Coaxial cables generally use single-ended signals. That is, only a single conductor, the
central conductor of the coaxial cable, carries information. The outer conductor operates
at ground potential to serve as a shield. Any voltage that might be induced in the central
conductor (to become noise or interference) first affects the outer conductor. Because the
outer conductor is at ground potential, it shorts out the noise before it can affect the
central conductor. (Noise signals are voltages in excess of ground potential; so, forcing
the noise to ground potential reduces its value to zero.) Figure 23.4 shows the
construction of a typical coaxial cable.
Figure 23.4. Components of a coaxial cable.

The primary alternative is twisted pair wiring, which earns its name from being made of
two identical insulated conducting wires that are twisted around one another in a loose
double-helix. The most common form of twisted pair wiring lacks the shield of coaxial
cable and is often denoted by the acronym UTP, which stands for unshielded twisted pair.
Figure 23.5 shows the construction of a typical twisted-pair cable.
Figure 23.5. Components of a twisted pair wiring cable.

Most UTP wiring is installed in the form of multi-pair cables with up to several hundred
pairs inside a single plastic sheath. The most common varieties have 4 to 25 twisted pairs
in a single cable. The pairs inside the cable are distinguished from one another by color
coding. The body of the wiring is one color alternating with a thinner band of another
color. In the two wires of a given pair, the background and banding color are
opposites—that is, one wire will have a white background with a blue band and its mate
will have a blue background with a white band. Each pair has a different color code (see
Table 23.1). The most common type of UTP cable conforms to the AT&T specification
for D-Inside Wire (DIW). The same type of wiring also corresponds to IBM's Type 3
cabling specification for Token Ring networking.

Table 23.1. Unshielded Twisted Pair Color Code.

Pair Number Color Code Pair Number Color Code


1 White/Blue 16 Yellow/Blue
2 White/Orange 17 Yellow/Orange
3 White/Green 18 Yellow/Green
4 White/Brown 19 Yellow/Brown
5 White/Slate 20 Yellow/Slate
6 Red/Blue 21 Violet/Blue
7 Red/Orange 22 Violet/Orange
8 Red/Green 23 Violet/Green
9 Red/Brown 24 Violet/Brown
10 Red/Slate 25 Violet/Slate
11 Black/Blue
12 Black/Orange
13 Black/Green
14 Black/Brown
15 Black/Slate
Key: First color is the body of the wire,
second color the stripes. The mate of the
wire pair has the color scheme
(body/stripe) reversed.
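The 25 color codes in Table 23.1 are not arbitrary: they follow the telephone industry's scheme of five major (body) colors, each paired in turn with five minor (stripe) colors. A small generator makes the pattern explicit (the function name is invented for the example):

```python
MAJOR = ["White", "Red", "Black", "Yellow", "Violet"]   # body colors, one per group of 5
MINOR = ["Blue", "Orange", "Green", "Brown", "Slate"]   # stripe colors, cycling within a group

def pair_colors(pair):
    """Body/stripe colors for pair 1..25 under the scheme in Table 23.1:
    the major color advances every five pairs, the minor color cycles."""
    assert 1 <= pair <= 25
    return f"{MAJOR[(pair - 1) // 5]}/{MINOR[(pair - 1) % 5]}"

print(pair_colors(1), pair_colors(10), pair_colors(25))
# White/Blue Red/Slate Violet/Slate
```

Knowing the pattern lets an installer identify any pair in a large cable without consulting the table.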

To minimize radiation and interference, most systems that are based on UTP use
differential signals. Each conductor carries the same information at different polarities
(plus and minus), and the equipment signal subtracts the signal on one conductor from
the other before it is amplified (thus finding the difference between the two conductors
and the name of the signal type). Because of the polarity difference of the desired signals
on the conductors, subtracting them from one another actually doubles the strength of the
signal. Noise that is picked up by the wire, however, appears at about equal strength in
both wires. The subtraction thus cancels out the noise. Twisting the pair of wires together
helps assure that each conductor picks up the same noise. In addition, any radiation from
the wire tends to cancel itself out because the signals radiated from the two conductors
are added together. Again, the twist helps ensure that the two signals are equally radiated.
For extra protection, some twisted pair wiring is available with shielding. As with coaxial
cable, the shielding prevents interference from getting to the signal conductors.
In practical application, twisted pair wiring has several advantages over coaxial cable. It's
cheaper to make and sell. It's more flexible and easier to work with. And zillions of miles
of twisted pair wire are installed in offices around the world (it is telephone wire). On the
other hand, coaxial cable holds the advantage when it comes to distance. Coaxial cable
provides an environment to signals that's more carefully controlled. In general, its
shielding and controlled impedance allow for the handling of higher frequencies, which
means that network signals are less likely to blur and lose the sharp edges necessary for
unambiguous identification as digital values.
Each major wiring standard has its own cabling requirements. Although the limits the
standard sets for each cabling scheme seem modest (up to 100 or so PCs), these limits
apply to only a single network cable. You can link multiple cables together using
network concentrators, or you can extend the reach of a single cable over a long range
using a repeater, which is simply an amplifier for network signals. The repeater boosts
the signal on the network cable (and may offer ports to link together several network
buses) without changing the data on the bus.

10Base-5

Under the IEEE 802.3 specification, wiring for 10Base-5 networks uses thick coaxial
cable with a characteristic impedance of 50 ohms. The standard permits the bus to be up
to 500 meters (1,640 feet) long with a 50 ohm terminating resistor at each end. The
special cable used for thick-wire networks is covered with a yellow jacket for normal use
and an orange jacket for plenum installation (for example, over a suspended ceiling in an
airspace that's used as part of the building ventilation system). Because of this coloring,
thick wire is often called yellow cable. It is similar to standard RG-8/U coaxial cable
(which is generally black) but has somewhat different electrical characteristics.
Each PC is attached to the network bus through a transceiver, which is most often a wire
tap that clamps onto the wire and penetrates through the jacket and shield to make its
connection without breaking or stripping the bus. Because of the way it clamps onto the
bus and sucks out the signal, this kind of transceiver is often called a vampire tap.
Linking the transceiver and the PC is another special cable called the Attachment Unit
Interface (AUI) cable. The AUI cable can be up to 50 meters (164 feet) long. Under the IEEE
802.3 specification, you can connect up to 100 transceivers to a single 10Base-5
backbone.

10Base-2

In the IEEE 802.3 scheme of things, 10Base-2 is called thin wire or thinnet. It uses a
double shielded 50 ohm coaxial cable that's similar to but not identical with RG-58/U,
another 50 ohm cable that's used for a variety of applications including Citizens' Band
radio. Under the IEEE specification, the 10Base-2 bus cable should not exceed a length
of 185 meters (about 600 feet) but some host adapter manufacturers allow runs of up to
300 meters (about 1,000 feet).
Taps into the 10Base-2 bus require transceivers, but 10Base-2 network host adapters
incorporate integral transceivers. The cabling then takes the form of a daisy chain using
T-connectors on the host adapter. A T-connector plugs into a single jack on the back of
the network host adapter. One cable of the network bus plugs into one leg of the
T-connector on the host adapter, and another cable plugs into a second T-connector leg
and runs to the next host adapter in the network. The network bus consequently
comprises multiple short segments. All connections to it are made using BNC
connectors. In place of a network cable at the first and last transceivers in a backbone,
you plug in a 50 ohm cable terminator instead. You can connect up to 30 transceivers to a
single 10Base-2 backbone.

10Base-T

Because of its star topology, 10Base-T networks use point to point wiring. Each network
cable stretches from one point (a PC or other node) to another at the hub. The hub has a
wiring jack for each network node; each PC host adapter has a single connector.
The basic 10Base-T system uses unshielded twisted pair cable. In most permanently
installed networks, wall jacks that conform to the eight-wire RJ-45 design link to
standard D-Inside Wire buried in walls and above ceilings. To link between the wall
jacks and the jacks on 10Base-T host adapters, you should use special round modular
cables. Ordinary flat telephone wires do not twist their leads and are not suitable to high
speed network use.


Although 10Base-T uses eight-wire (four-pair) cabling and eight-pin connectors, only
four wires actually carry signals. Normally, the wires between hub and host adapter use
straight-through wiring. The PC transmits on pins 1 and 2 and receives on pins 3 and 6
(the first number being the positive side of the connection). Figure 23.6 shows the correct
wiring for a hub to workstation 10Base-T cable.
Figure 23.6. Wiring for a hub-to-workstation 10Base-T cable.

Patching between hubs may require crossover cables that link pins 1 to 3 and 2 to 6.
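That crossover swap can be expressed as a simple pin mapping (the function name is invented for the example):

```python
def crossover(pin):
    """Pin mapping for a 10Base-T crossover cable: the transmit pair
    (1, 2) on one end lands on the receive pair (3, 6) of the other,
    and vice versa; the unused pins pass straight through."""
    swap = {1: 3, 2: 6, 3: 1, 6: 2}
    return swap.get(pin, pin)

print([crossover(p) for p in range(1, 9)])   # [3, 6, 1, 4, 5, 2, 7, 8]
```

Applying the mapping twice returns each pin to itself, which is why two crossover cables in series behave like one straight-through cable.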
The 10Base-T specifications enable the entire cable run between hub and host adapter to
be no more than 100 meters (about 328 feet). This distance includes the cable inside the
wall as well as the leads between the hub and building wiring and between the node and
building wiring. Only one PC or node can be connected to each hub jack, but the number
of PCs that can be connected to a single hub is limited only by the number of jacks on the
hub. Most 10Base-T hubs provide a thin or thick wire connector for linking to other hubs,
concentrators, or repeaters.

Setup

Certainly everyone doesn't have the need for a network. If you have only one PC, you
have nothing to connect. If you have more than one PC, you probably would like to link
them together, but you may be afraid of the work involved.
You have good reason. Even companies with their own computer resource departments
typically hire consultants when planning a network. The issues are entirely unlike those
of managing individual PCs, and most involve the snarl of wire that's the heart—or at
least circulatory system—of the network. For example, the network guru has to worry
about such things as coaxial cables, terminations, loop resistance, and (probably) the
phases of the moon.
Fortunately with 10Base-T, an entire network can be plug-and-play, linked together with
cables you can buy direct. For more complex or permanent installations, 10Base-T lets
you take advantage of existing telephone wiring to install the network without blasting
holes in walls and ceilings.
All networks need software of some kind to work. Nearly all use drivers (or, in the case
of Windows, dynamic link libraries) to enable their necessary networking features:
giving access to shared resources, passing data through the network adapter, and
accessing data from remote PCs. All three of these functions involve matching of some
kind: the sending and receiving ends must match their respective PCs and their software.
The data that's passed must, of course, match the needs of the systems at either end. The
bottom line is that your network software must match the operating system you run on
your PC, and the two ends of the network must speak the same language to understand
one another. That is, they must use the same networking protocol. The protocol dictates
the form of the data such as the size of the packets (or blocks of bytes) that get
transferred between systems, the addresses added to each packet so that it can find its
destination, and the means of arbitrating network access and routing the data.
The software for building a network ranges from the trivial to sorts more complex than
can be comprehended by the mortal mind. After all, the Internet that connects nearly
every computer in the world one way or another (through modem or direct connection) is
just one computer network. No one understands the full complexity of the system or all
of its software. Fortunately, to simply share a printer you don't have to understand all of
computer networking or even the intimate details of how networking works. The
software used by a printer sharing network need not be quite that complex or
incomprehensible.
For a simple home or small business network, the lowest of the low end of computer
networks will serve you well. Certainly networking is complex even at this level, but it is
at least manageable. Although even the smallest networks involve a complicated
software setup, many have automatic installation programs that let your PC do most of
the work.
The networking software to use can be a religious issue. Some people still believe that
there is one true network. For the purpose of printer sharing, however, you can make a
few simplifying assumptions that should guide your choice. First is that you'll want to go
the peer to peer route so you don't have to waste an entire PC to act as your server or
worry about the complexities of administering a major network. Second, you'll want
something simple, proven, and inexpensive. You can meet all of those requirements with
the networking that's built into Windows 95.

Preparation

The first step in installing any network is planning—deciding on exactly what you want
to accomplish, then figuring out how to do it.
Before you start the installation, you need to assign a name to each of the PCs in your
network so that both you and the network have a way of identifying individual machines.
As with selecting names for offspring, naming nodes is a personal matter. Your network
may put some restrictions on the number of characters you must use, but that often is
your only constraint. You can be dull and descriptive—Bob's PC, Ted's PC, Carol's PC,
and so on. You can express your erudition (not to mention your lack of fear at typing) by
naming your nodes after Egyptian or Greek gods, paleological epochs, chemical elements
(better still, organic hydrocarbon molecules), or just good old Fred and Ethel. Under
Windows you'll also need a name for your workgroup—another chance to demonstrate
your imagination or lack of same.

Host Adapters

The hardware for putting together a peer to peer 10Base-T network comprises three
parts—the network/host adapters in each PC, a hub, and the wire that links them together.
You'll need one 10Base-T network adapter for every PC you want to add to the network.
The network adapter provides the connection between PC and network cable. All
10Base-T network adapters provide the same basic functions, although some are more
feature-packed than others. Any 10Base-T adapter is acceptable providing it either is
supported directly by Windows, has a Windows driver, or emulates a board with built-in
Windows support. For example, Novell NE2000 compatibility assures that the board
duplicates the Novell product and delivers the features expected by the software. Some
network adapters allow for optional boot ROMs, which allow PCs to boot up using a
remote disk drive, but this feature is more applicable to client-server rather than peer to
peer networks.

Wiring

The twisted pair cabling for 10Base-T has a few of its own requirements. The most
important is that it must be truly twisted to properly minimize noise and interference.
Ordinary modular telephone cables are inappropriate for 10Base-T because these cables
are flat and lack the needed twists. In addition, ordinary modular telephone cables use
four-wire RJ-11 connectors that will snap into 10Base-T connectors but won't connect
with all of the necessary signals. A 10Base-T jack is designed to accept eight-wire RJ-45
connectors even though only four of the connections are active.
If you want to make a permanent 10Base-T installation, you can use the phone wiring
that's already installed in most businesses. Most multi-pair telephone cables—such as the
ubiquitous 25-pair cables—give each cable pair the proper twists and are entirely suitable
for 10Base-T use. Of course, you'll need to connect to the trunk cable somehow. The
easiest way is to add the appropriate jacks and use twisted pair cables with RJ-45 plugs
between the jacks and your PC.
If all your PCs are in one room or reasonably close proximity, the best choice is to buy
prefabricated RJ-45 cables, which are available direct from several sources, in standard
lengths (for example 10, 25, 50, and 100 feet). You can loosely coil up several feet of
extra cable without problem. The only restriction imposed by 10Base-T is that the length
of cable between PC and hub cannot exceed 100 meters (328 feet).

Hubs

You'll need at least one hub for your network. A hub is simply a box with circuitry inside
and a bunch of jacks for RJ-45 plugs on the back. The circuitry inside links the 10Base-T
cables together. Hubs are distinguished by the number of features they offer. But most of
those features are designed to make the network administrator's job easier and are
unnecessary in a small (say five or fewer PCs) peer to peer network. For such smaller
systems, the minimal hub will be all you need.
The first step in wiring your network is to determine the most convenient location to put
your 10Base-T hub. Most larger businesses have dedicated wiring closets, on each floor
in a building or amid a cluster of offices, to organize the telephone wiring. The network
hub naturally fits the same location. But you don't have to put the hub in a closet. Some
people stuff them above suspended ceilings (neither a permanent solution nor likely a
legal one; fire codes look dimly on such things). However, you can always set the hub on
or under a desk, adjacent to a PC, or wherever is convenient.
The important consideration with locating the hub is that you put it in a convenient
location, out of the way but easy to access should problems develop. Ideally, you'll want
the hub in the exact center of the PCs it serves to minimize wiring hassles. As long as
you don't violate the 10Base-T wiring limitations, however, you can locate the hub
anywhere.

Cable Installation

Once you've set a location for each PC and the hub, you can configure the wiring. You
need run only one cable from each PC node to the hub.
How you run the wiring and what wire to use is another matter entirely. The easiest route
is to strew about ready made cables. Otherwise you'll need a special tool to crimp
modular connectors on the cables.
How exotic you want to get with 10Base-T wiring depends mostly on your aesthetic
sense. You won't gain anything in network reliability by using existing telephone wiring
or cleverly hiding your wiring inside walls. In fact, the most reliable wiring you can use
are the prefabricated lengths. Every connector you crimp on a cable adds a potential
problem. Moreover, you have less control (typically none) over the quality of existing
wiring.

Configuration

Once the wiring is in place, prepare the network adapter for each peer. Typically, a
network adapter requires an interrupt and base address from your system resources. If
you have a Plug-and-Play network adapter, Windows will recognize it when you install it
inside your PC, automatically assigning resources.
If the adapter you choose doesn't follow the Plug-and-Play standard, you'll assign
resources using DIP switches or jumpers on each network adapter or through the setup
software accompanying the board. Under Windows 95, these choices are not random.
You run the software installation before you put the hardware in your PC. During the
installation process, Windows will tell you the resource you have to use for your network
adapter to avoid resource conflicts with the stuff already installed in your PC.
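The conflict check behind that advice amounts to picking values that no installed device already claims. As a rough sketch only (the "in use" resources and candidate lists below are typical examples, not read from any real system):

```python
# Resources already claimed in a hypothetical PC (keyboard, COM ports,
# floppy controller, and so on); these sets are examples, not a real scan.
in_use_irqs = {1, 3, 4, 6, 8, 13, 14}
in_use_ports = {0x1F0, 0x2F8, 0x378, 0x3F8}

def pick_free(candidates, in_use):
    """Return the first candidate resource nothing else has claimed."""
    for value in candidates:
        if value not in in_use:
            return value
    return None  # every candidate conflicts

# Settings a typical ISA network adapter can be jumpered to
irq = pick_free([5, 10, 11, 3], in_use_irqs)
base = pick_free([0x300, 0x320, 0x340], in_use_ports)
print(irq, hex(base))
```

If every candidate conflicts, no jumper setting will work, and something else in the system has to move.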
You can install the network adapter in two ways. You can step through the Add New
Hardware Wizard from Control Panel. Or you can use the Network properties sheet, also
accessible from Control Panel. To use the latter method, double click on the Network
icon under Control Panel. From the first screen, click on Add, and you'll see a menu like
the one shown in Figure 23.7.
Figure 23.7. Selecting a network component type.

Select Adapter and click the Add button. Windows will then let you choose a
manufacturer and model of network adapter to install. Select the board you've chosen or a
suitable emulation, as shown in Figure 23.8.
Figure 23.8. Selecting a network adapter emulation.

Once you've selected an adapter, click OK. Windows will immediately swing into action
and install all the drivers required not only by the host adapter but also all the services
and protocols you'll need to get your network working.
When Windows finishes with this part of the installation, you'll see a revised Network
sheet listing everything that the operating system has installed, as shown in Figure 23.9.
Figure 23.9 Clients, services, adapters, and protocols installed.

If Windows doesn't prompt you for the names you've decided to use for your network
and node, select the Identification tab and enter the names.
The next step finally assigns resource values to your network host adapter. Highlight the
host adapter and click on the Properties button. Windows will reply with a screen like
that shown in Figure 23.10.
Figure 23.10. Assigning resources to your network host adapter.

On this screen, Windows will suggest the setting to use for your network host adapter's
resources. You can choose alternate values to resolve conflicts or to suit the settings you
prefer. Once you've finished, click OK. Click OK on the next screen, and Windows will
tell you to reboot your system.
Configure your network adapter to match the settings you've chosen. With the PC off,
install the network adapter in your PC, simply sliding the board into an expansion slot. Of
course, the normal expansion issues apply. Switch off each system, unplug its power
cable, remove the lid, and remove the blank retaining bracket from the slot you want to
use. While you can put eight-bit boards into eight- or sixteen-bit slots, sixteen-bit boards
should only go into sixteen-bit slots. Repeat this process for every PC in your network.
Be sure to screw the card retaining bracket into each PC. Not only will the screw prevent
you from dislodging the board when plugging in the network cable, it will also provide a
better ground, which improves network reliability.
Before reassembling each peer, plug it in and boot it up to assure that the network adapter
doesn't inadvertently interfere with some vital system function. Once you're sure all is
well, turn off the peer and re-install its cover.
Finally you're ready to plug your network together. Simply slide each RJ-45 plug into the
jack in each peer. Then plug the other end of each cable into the hub. Switch on the hub,
then all of the PCs in the network.
The connectivity features of Windows 95 are more than sufficient to share a printer (as
well as disk drives and other resources) among a dozen PCs. Similar size groups of PCs
running under DOS can use any of a number of small network programs, the most
popular of which is Artisoft's Lantastic. More complex installations work best with
full-fledged networks, which are beyond the scope of this book—or any single volume.
The setup process involves two types of PCs: servers that will provide access to the
printer or printers you want to share, and workstations that will use the network to send
print jobs to those printers. Each of these two classes of PCs requires its own distinct
setup.

Enabling Sharing

Before you can begin thinking about setting up your network software for resource
sharing, you must have a working network. Installing the network hardware is only the
first step. You must then configure your network workstations and servers to use the
hardware you've installed.
Once your network hardware is properly set up, you must enable the PC that you want to
act as a print server to share its printer. The control for this function is sufficiently buried
that you need a treasure map to find it. Start your quest by opening Control Panel by
selecting it from the Settings menu from the start button. From Control Panel, double
click on the Network icon to open the Network folder. You'll see a screen that tells you
about all the network components that Windows has installed for you, which will look
something like Figure 23.11.
Figure 23.11 The Windows 95 Network dialog.

If your network is operating, you don't have to worry about all the obscure entries on this
screen. For your purposes, its only important aspect is the File and Printer Sharing button
about two-thirds of the way down the screen. Clicking on it will open a small window
giving you the choice of whether to share system resources, as shown in Figure 23.12.
Figure 23.12 The file and printer sharing menu of Windows 95.

Your choices are twofold: sharing files and sharing printers. In most cases, you'll want to
share files as well to put the machine to work as a server.
Once you've made your choices, clicking on OK will take you back to the previous
screen, the Network folder. Click on OK again to exit back to Control Panel.
At this point you've enabled the overall sharing abilities of your PC, but you haven't
made any of the system's resources available to other PCs. To do that, you must alter the
properties of each resource you want to share.
To enable sharing for a disk drive, be it a hard disk, floppy, or CD drive, highlight the
drive's icon in My Computer window. Then choose Sharing from the File menu. You'll
see a display like that shown in Figure 23.13.
Figure 23.13 Windows 95 disk properties sheet.

Once you've enabled a disk for sharing, its icon will appear in the Network
Neighborhood of all the PCs that have access to it on the network.
To enable sharing the disk resource, you only need to click on the "Shared As" button.
Haul out your imagination again and give the disk a name. Other options allow you to
limit access so networked PCs cannot alter your drive or can only alter it after giving a
password you designate. Once you've made the proper entries, click OK and you're done.
Similarly, you have two ways of altering the properties of your printer. First, double click
on the Printer icon inside Control Panel. Windows 95 will display icons for each of the
printers installed in your PC (as well as giving you the option to install another). You can
then either right click on a printer's icon and select Properties from the popup menu, or
double click on the printer icon to open the associated folder and select Properties from
the Printer menu. In either case, you'll have a variety of tabs to choose
from. Among them you should see a Sharing tab. If you do not, you've not properly
enabled the printer sharing abilities of your PC, and you'll have to go back to the
Network folder to enable sharing.
Once you select the Sharing tab, you'll see a screen like that shown in Figure 23.14.
Figure 23.14 The Sharing tab of the Printer Properties folder.

When this tab initially pops up, it will default to Not Shared, and the lower part of the
screen will be grayed out. Selecting Shared As will activate your other choices. You must
give the printer a name by which PCs in the network will refer to it. You can optionally
supply a comment so you can remember what you've done. This tab also provides for
password protection for printer access. If you want to restrict printer access, you can
enter a password here; otherwise, printer access won't be limited.
When you're finished with your entries, click on OK, and you're done. Your printer can
now be shared with other workstations attached to your network.

Resource Sharing

After you've enabled sharing from the system that is to act as your file and print server,
you must individually connect each PC to it through Windows 95. Although shared
drives automatically appear under Windows, if you want to use them as drive letters
through a DOS box, you will have to map them to the workstation. To share a printer,
you must install the network printer exactly as you would install a local printer.
To map a network drive, first open Network Neighborhood and click on the name of the
server containing the drive you want to map. Windows should obediently display the
names of the sharable disk drives on the server. Highlight the drive you want to map,
then select the File menu and the Map Network Drive entry. Windows will assign a drive
letter and give you the opportunity to automatically remap the drive every time you boot
up. Use the drive letter Windows assigns in your DOS box to access the remote drive.
Enabling printer sharing on a workstation is a bit more complex. Start by double clicking
on the Printer icon from control panel in the workstation in which you want to install the
networked printer. Windows 95 will respond by letting you choose to add a printer and,
should you have already installed another printer, the choice of controlling the already
installed printers. If you've not previously installed a printer in the workstation, you'll see
a screen like that shown in Figure 23.15.
Figure 23.15 The Windows 95 Printer folder.

Double click on Add Printer from the Printers folder. This will activate the Add Printer
Wizard, which will step you through the installation process.
When the Add Printer Wizard starts, your first choice will be whether to install a local or
networked printer, as shown in Figure 23.16. The wizard defaults to installing a local
printer, so you'll want to select Network printer. Then click on the Next button.
Figure 23.16 The first screen of the Add Printer Wizard.

In order to install a printer, you have to tell the Wizard which printer to use. The Wizard
identifies a printer by its assigned name, and it prompts you to type in the name of the
printer to install, as shown in Figure 23.17.
Figure 23.17 The Add Printer Wizard prompting for a network printer name.

This screen also gives you the choice of routing the print commands from your DOS
programs to the networked printer. If you're going to use DOS at all, you'll want to use
one printer for all your output needs. To do so, select the Yes button.
If you don't remember the complete name and path of all the printers in your network,
you can ask the Wizard to find the available printers for you by clicking on the Browse
button and looking in the likely places on the network. The Browse button is a good
choice even if you know your printer's name because by selecting a printer from the
names it presents you can avoid typing errors. Click on the PC that acts as your server,
and the Wizard will respond by listing the shared printers attached to that server, as
shown in Figure 23.18. You only need to click on the printer that you want.
Figure 23.18 Browsing through your network for a printer.

The Wizard is not done with you yet. If you've told it to route your DOS print jobs to
print through the network, the Wizard will want you to decide whether you want to
capture a printer port. You'll be staring at a screen like that shown in Figure 23.19.
Figure 23.19 Selecting to capture a printer port.

Because most DOS applications send their output directly to a printer port, you'll need to
enable capture to allow them to print. When you click on the Capture Printer Port button,
the Wizard will ask which port to capture with a screen like that shown in Figure 23.20.
Figure 23.20 Selecting a port to capture for DOS printing.

You'll have to configure all your DOS applications to use the port you choose for printing.
The Wizard's default choice, LPT1, is a good one. After you make your choice, click on
OK to finish the networked printer installation.
The Add Printer Wizard asks one more question: what name to use for your newly
installed printer. This name will appear on the icon in the Printers folder for the printer
you select. It does not need to be the same as the network printer name. Once you fill in a
name, click on Next to end the installation process.
The Wizard takes care of all the details for you, automatically copying to your local PC
the driver files necessary for your applications and printer. Your networked printer will
then act as if it were locally connected to your PC—only you'll have to walk farther to fill
the paper tray.



Winn L. Rosch Hardware Bible, Electronic Edition, Chapter 24

Chapter 24: Power


PCs require a continuous supply of carefully conditioned, low voltage, direct current at
several potentials. Batteries provide a direct source of power for portable PCs, but
desktop units require power supplies to provide the proper voltages. Similar power
supplies charge and substitute for portable PC batteries. Any time you connect any PC to
utility power, it stands a chance of damage from power related problems. Surge
suppressors and backup power systems help to ensure that your PC gets the proper
electrical diet.

■ Power Supplies
■ Technologies
■ Linear Power Supplies
■ Switching Power Supplies
■ PC Power Needs
■ Voltages and Ratings
■ Supply Voltage
■ The Power-Good Signal
■ Portable Computer Power
■ Rectifying Power Converters
■ Transformers
■ Batteries
■ Storage Density
■ Primary and Secondary Cells
■ Technologies
■ Standards
■ Smart Battery Specifications
■ Clock Batteries
■ Notebook Power

http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/wrh24.htm (1 de 45) [23/06/2000 07:00:47 p.m.]
■ Battery Safety
■ Desktop PC Power Supplies
■ Power Supply Selection
■ Power Supply Mounting
■ PC Power Connections
■ ATX Power Connections
■ Power Management
■ Advanced Power Management
■ States
■ Structure
■ Operation
■ Advanced Configuration and Power Interface
■ Soft Off
■ States
■ Configuration
■ Energy*Star
■ Power Protection
■ Power Line Irregularities
■ Overvoltage
■ Undervoltage
■ Noise
■ Overvoltage Protection
■ Metal Oxide Varistors
■ Gas Tubes
■ Avalanche Diodes
■ Reactive Circuits
■ Blackout Protection
■ Standby Power Systems
■ Uninterruptible Power Systems
■ Phone Line Protection

24

Power

All practical computers made today operate electronically. Moving electrons—electricity—are the media of their thoughts. Electrical pulses course from one
circuit to another, switched off or on in an instant by logic chips. Circuits combine the
electrical pulses to make logical decisions and send out other pulses to control
peripherals. The computer's signals stay electrical until electrons colliding with
phosphors in the monitor tube push out photons toward your eyes or generate the fields
that snap your printer into action.
Of course, your computer needs a source for the electricity that runs it. The power does
not arise spontaneously in its circuits but must be derived from an outside source.
Conveniently, nearly every home in America is equipped with its own electrical supply
that the computer can tap into. Such is the wonder of civilization.
But the delicate solid state semiconductor circuits of today's computers cannot directly
use the electricity supplied by your favorite utility company. Commercial power is an
electrical brute, designed to have the strength and stamina to withstand the miles of travel
between the generator and your home. Your PC's circuits want a steady, carefully
controlled trickle of power. Raw utility power would fry and melt computer circuits in a
quick flash of miniature lightning.
For economic reasons, commercial electrical power is transmitted between you and the
utility company as alternating current, the familiar AC found everywhere. AC is
preferred by power companies because it is easy to generate and adapts readily between
voltages (including the very high voltages that make long distance transmission
efficient). It's called alternating because it reverses polarity—swapping positive for
negative—dozens of times a second (arbitrarily 60 Hz in America; 50 Hz in Europe).
The changing or oscillating nature of AC enables transformers to increase or decrease
voltage (the measure of driving force of electricity) because transformers only react to
electrical changes. Electrical power travels better at higher voltages because waste (as
heat generated by the electrical current flowing through the resistance of the long
distance transmission wires) is inversely proportional to the square of the voltage. Transformers permit the
high voltages used in transmitting commercial power—sometimes hundreds of thousands
of volts—to be reduced to a safe level (nominally 117 volts) before it is let into your
home.
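The efficiency argument is easy to verify numerically. In this sketch (the load, line resistance, and voltages are illustrative numbers, not utility data), the resistive loss for a fixed delivered power falls with the square of the transmission voltage:

```python
def line_loss_watts(power_watts, volts, line_resistance_ohms):
    """Resistive loss in the wires for a fixed delivered power."""
    current = power_watts / volts  # I = P / V
    return current ** 2 * line_resistance_ohms

# The same 10 kW load over a line with 1 ohm of resistance:
low = line_loss_watts(10_000, 117, 1.0)       # household voltage
high = line_loss_watts(10_000, 117_000, 1.0)  # transmission voltage
print(low, high)
```

Raising the voltage a thousandfold cuts the loss by a factor of a million, which is the whole rationale for high tension lines.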
As wonderful as AC is to power companies, it's anathema to computer circuits. These
circuits form their pulses by switching the flow of electricity tapped from a constant
supply. Although computers can be designed that use AC, the constant voltage reversal
would complicate the design so that juggling knives while blindfolded and riding a roller
coaster would seem tame in comparison. Computers (and most electronic gear) use
direct current, or DC, instead. Direct current is the kind of power that comes directly
from a primary source—a battery—a single voltage that stays at a constant level (at least
constant as long as the battery has the reserves to produce it). Moreover, even the
relatively low voltage that powers your lights and vacuum cleaner would be fatal to
semiconductor circuits. Tiny distances separate the elements inside solid state circuits,
and high voltages can flash across those distances like lightning, burning and destroying
the silicon along the way.

Power Supplies

The intermediary that translates AC from your electrical outlets into the DC that your
computer's circuits need is called the power supply. As it operates, the power supply of
your PC attempts to make the direct current supplied to your computer as pure as
possible, as close to the ideal DC power produced by batteries. The chief goal is
regulation, maintaining the voltage as close as possible to the ideal desired by the circuits
inside your PC.
Notebook and subnotebook computers have it easy. They work with battery power,
which is generated inside the battery cells in exactly the right form for computer
circuits—low voltage DC. However, even notebook computers require built-in voltage
regulation because even pure battery power varies in voltage depending on the state of
charge or discharge of the battery. In addition, laptop and notebook computers also must
charge their batteries somehow, and their chargers must make exactly the same electrical
transformations as a desktop computer's power supply.

Technologies

In electronic gear, two kinds of power supplies are commonly used: linear and switching.
The former is old technology, dating from the days when the first radios were freed from
their need for storage batteries in the 1920s. The latter rates as high technology, requiring
the speed and efficiency of solid state electronic circuitry to achieve the dominant
position it holds today in the computer power market. These two power supply
technologies are distinguished by the means used to achieve their voltage regulation.

Linear Power Supplies

The design first used for making regulated DC from utility supplied AC was the linear
power supply. At one time, it was the only kind of power supply used for any electronic
equipment. When another technology became available, the older design was given the linear label because such supplies typically used standard linear (analog) semiconductor circuits, although a linear
power supply need not have any semiconductors in it at all.


In a linear power supply, the raw electricity from the power line is first sent through a
transformer that reduces its voltage to a value slightly higher than required by the
computer's circuits. Next, one or several rectifiers, usually semiconductor diodes, convert
the now low voltage AC to DC by permitting the flow of electricity in only one direction,
blocking the reversals. Finally, this DC is sent through the linear voltage regulator, which
adjusts the voltage created by the power supply to the level required by your computer's
circuits.
Most linear voltage regulators work simply by absorbing the excess voltage made by the
transformer, turning it into heat. A shunt regulator simply shorts out excess power to
drive the voltage down. A series regulator puts an impediment—a resistance—in the flow
of electricity, blocking excess voltage. In either case, the regulator requires an input
voltage higher than the voltage it supplies to your computer's circuits. This excess power
is converted to heat (that is, wasted). The linear power supply achieves its regulation
simply by varying the waste.
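A quick calculation shows why this waste matters. The figures below are illustrative, not taken from a real supply: a series regulator's best-case efficiency is simply the ratio of output to input voltage, and everything dropped across the regulator becomes heat.

```python
def linear_efficiency(v_in, v_out):
    """Best-case efficiency of a series linear regulator."""
    return v_out / v_in

def wasted_watts(v_in, v_out, load_amps):
    """Heat burned in the regulator: the dropped voltage times the current."""
    return (v_in - v_out) * load_amps

eff = linear_efficiency(8.0, 5.0)   # 8 volts in, a regulated 5 volts out
heat = wasted_watts(8.0, 5.0, 2.0)  # feeding a 2-ampere load
print(eff, heat)
```

Even in this mild case the regulator turns more than a third of the input power into heat, which is exactly the waste a switching design avoids.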

Switching Power Supplies

The design alternative is the switching power supply. Although more complex, switching
power supplies are more efficient and often less expensive than their linear kin. While
designs vary, the typical switching power supply first converts the incoming 60 Hertz
utility power to a much higher frequency of pulses (in the range of 20,000 Hz, above the
range of normal human hearing) by switching it on and off using a fast semiconductor switch, typically a power transistor.
At the same time the switching regulator increases the frequency of the commercial
power, it regulates the commercial power using a digital technique called pulse width
modulation. That is, the duration of each power pulse is varied in response to the needs
of the computer circuitry being supplied. The width of the pulses is controlled by the
electronic switch; shorter pulses result in a lower output voltage. Finally, a transformer
reduces the switched pulses' voltage to the level required by the computer circuits and, by
rectification and filtering, turns it into pure direct current.
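The pulse width modulation principle reduces to simple arithmetic: the average output tracks the fraction of each switching cycle the pulse is on. This sketch uses invented values (a 20 kHz rate and a 12-volt peak) purely for illustration.

```python
def pwm_average(v_peak, pulse_width_us, period_us):
    """Average output of a pulse train: peak voltage times duty cycle."""
    duty = pulse_width_us / period_us  # fraction of the cycle switched on
    return v_peak * duty

PERIOD_US = 50.0  # a 20 kHz switching rate, as mentioned in the text

print(pwm_average(12.0, 25.0, PERIOD_US))  # half-width pulses
print(pwm_average(12.0, 12.5, PERIOD_US))  # narrower pulses, lower voltage
```

Halving the pulse width halves the average output, which is how the supply regulates its voltage without burning the excess as heat.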
Switching power supplies earn their efficiency and lower cost in two ways. Switching
regulation is more efficient because less power is turned into heat. Instead of dissipating
energy with a shunt or series regulator, the switching regulator switches all current flow
off, albeit briefly. In addition, high frequencies require smaller, less expensive
transformers and filtering circuits. For these two very practical reasons, nearly all of
today's personal computers use switching power supplies.

PC Power Needs

Modern computer logic circuits operate by switching voltages with the two different
logic states (true or false, one or zero) coded as two voltage levels—high and low. Every
family of logic circuits has its own voltage standards. Most PCs today are built around
the requirements of Transistor-Transistor Logic or TTL. In a TTL design, "high" refers to
voltages above about 3.2 volts, and "low" means voltages lower than about 1.8. The
middle ground is undefined logically, an electrical guard band that prevents ambiguity
between the two meaningful states. Besides the signals, TTL logic circuits also require a
constant supply voltage that they use to power their thinking—it provides the electrical
forces that throw their switches. TTL circuits nominally operate from a five-volt supply.
The power supplies used by all full size PCs are designed to produce this unvarying five
volts in great abundance—commonly 20 or more amperes.
PCs often require other voltages as well. The motors of most disk drives (hard and
floppy) typically require 12 volts to make their spin. Other specialized circuits in PCs
sometimes require bipolar electrical supplies. A serial port, for example, signals logic
states by varying voltages between positive and negative in relation to ground.
Consequently, the mirror image voltages, -5 and -12 volts, must be available inside every
PC, at least if it hopes to use any possible expansion boards.
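The TTL thresholds described above amount to a three-way classification, which a few lines of Python can make concrete (using the approximate 3.2- and 1.8-volt figures from the text):

```python
def ttl_state(volts, high_threshold=3.2, low_threshold=1.8):
    """Classify a signal voltage as TTL high, low, or neither."""
    if volts > high_threshold:
        return "high"
    if volts < low_threshold:
        return "low"
    return "undefined"  # the electrical guard band between the states

print(ttl_state(4.5), ttl_state(0.4), ttl_state(2.5))
```

A voltage in the middle band means nothing logically, which is precisely the ambiguity the guard band exists to prevent.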
In notebook computers, most of which have no room for generic expansion boards, all
these voltages are often unnecessary. For example, many new hard disks designed for
notebook computers use five-volt motors, eliminating the need for the 12-volt supply.
In addition, the latest generation of notebook computer microprocessors and support
circuits are designed to operate with a 3.3-volt supply. These lower voltage circuits cut
power consumption because—all else being equal—the higher the voltage, the greater
the current flow, and the larger the power usage. Dropping the circuit operating voltage
from 5 to 3.3 volts cuts the consumption of computer power by about half (the power
usage of a logic circuit is roughly proportional to the square of its supply voltage).
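You can check the "about half" figure with the square law (for CMOS-style logic, dynamic power scales roughly with the square of the supply voltage; this sketch assumes that rule of thumb):

```python
def relative_power(v_new, v_old):
    """Power at the new supply voltage relative to the old, square law."""
    return (v_new / v_old) ** 2

savings = relative_power(3.3, 5.0)
print(round(savings, 2))  # about 0.44, a bit better than half
```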

Voltages and Ratings

The power supplies that you are most likely to tangle with are those inside desktop PCs,
and these must produce all four common voltages to satisfy the needs of all potential
combinations of circuits. The typical desktop PC power supply
produces four voltages (+5, -5, +12, and -12), delivered in different quantities
(amperages) because of the demands associated with each. A separate voltage regulator
on the motherboard produces the lower voltage in the 3.3-volt range required by
Pentium-level microprocessors and their associated circuitry. In some systems, the output
voltage of this regulator may be variable to accommodate energy saving systems (which
reduce the speed and voltage of the microprocessor to conserve power and reduce heat
dissipation). Newer power supplies, such as those that follow the ATX design standard,
sometimes also provide a direct 3.3-volt supply.
The typical PC has a great deal of logic circuitry, so it needs copious quantities of 5-volt power,
often as much as 20 to 25 amperes. Many disk drives use 12-volt power; the typical
modern drive uses an ampere or so. Only a few components require the negative
voltages, so most power supplies only deliver a few watts of each.

Most power supplies are rated and advertised by the sum of all the power they can make
available, as measured in watts. The power rating of any power supply can be calculated
by multiplying each of the four voltages it supplies by its rated current and
summing the results. (Power in watts is equal to the product of voltage times current in
amperes.) Most modern full size computers have power supplies of 150-220 watts.
Notebook PCs use 10 to 25 watts.
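That rating arithmetic is easy to sketch in a few lines of Python. The rail currents below are typical of a mid-1990s desktop supply, not quoted from any particular model:

```python
# Rated current for each output rail (volts: amperes); figures are
# typical of a mid-1990s desktop supply, not a specific product.
rails = {
    5.0: 22.0,
    12.0: 8.0,
    -5.0: 0.3,
    -12.0: 0.5,
}

# Watts = volts times amperes, summed over every rail
total_watts = sum(abs(volts) * amps for volts, amps in rails.items())
print(total_watts)
```

As the text notes, most of the advertised wattage comes from the 5-volt rail; the negative rails contribute only a few watts.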
Note that this power rating does not correspond to the wattage that the power supply
draws from a wall outlet. All electronic circuits—and power supplies in
particular—suffer from inefficiencies, linear designs more so than switching.
Consequently, a power supply requires a wattage in excess of that it provides to your
computer's circuits—at least when it is producing its full output. PC power supplies,
however, rarely operate at their rated output. As a result, efficient switching power
supplies typically draw less power than their nominal rating in normal use. For example,
a PC with a 220-watt power supply with a typical dosage of memory (say four
megabytes) and one hard disk drive likely draws less than 100 watts while it is operating.
When selecting a power supply for your PC, the rating you require depends on the boards
and peripherals with which you want to fill your computer. A system board may require
15-25 watts; a floppy disk drive, 3-20 (depending on its vintage); a hard disk, 5-50 (also
depending on its vintage); a memory or multifunction expansion board, 5-10. Sum things
up, and you see that 200 watts, even 150 watts, is more than adequate for any single user
system equipped with state of the art components. Table 24.1 lists the power needs of
various PC components of both ancient and modern designs.

Table 24.1. Typical Device Power Demands

Device class        Device type                    Power        Example

Floppy disk drive   Full height, 5.25-inch         12.6 watts   IBM PC diskette drive
Floppy disk drive   Half-height, 5.25-inch         12.6 watts   QumeTrak 142
Floppy disk drive   One-inch high, 3.5-inch        1.4 watts    Teac FD-235J
Graphics board      Two-board old technology       16.2 watts   IBM 8514/A
Graphics board      High performance, full length  13.75 watts  Matrox MGA
Graphics board      Accelerated half-card          6.5 watts    ATI VGA Wonder, Graphics Ultra+
Hard disk           Full height, 5.25-inch         59 watts     IBM 10MB XT hard disk
Hard disk           Half-height, 5.25-inch         25 watts     [estimated]
Hard disk           One-inch high, 3.5-inch        6.5 watts    Quantum LPS120S ProDrive
Hard disk           2.5-inch                       2.2 watts    Quantum Go-Drive 120AT
Hard disk           PCMCIA card                    3.5 watts    Maxtor MXL-131-III
Hard disk           Full height, 3.5-inch          12 watts     Quantum ProDrive 210S
Memory              1MB SIMM                       4.8 watts    Motorola MCM81000
Memory              4MB SIMM                       6.3 watts    Motorola MCM94000
Memory              8MB SIMM                       16.8 watts   Motorola MCM36800
Modem               PCMCIA card                    3.5 watts    MultiTech MT1432LT
Modem               Internal, half-card            1.2 watts    Boca V.32bis
Network adapter     Ethernet, half-card            7.9 watts    Artisoft AE-2/T
System board        286, AT-size                   25 watts     [estimated]
System board        386, XT-size                   12 watts     Monolithic Systems MSC386 XT/AT
System board        486 or Pentium, AT-size        25 watts     [estimated]
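Summing device demands against a supply rating, as the preceding paragraphs suggest, looks like this in sketch form (the configuration below simply picks a few figures from Table 24.1):

```python
# Per-device demands in watts, picked from Table 24.1
demands = {
    "486 system board, AT-size": 25,
    "3.5-inch floppy drive": 1.4,
    "3.5-inch full height hard disk": 12,
    "8MB SIMM": 16.8,
    "accelerated graphics half-card": 6.5,
}

SUPPLY_RATING_WATTS = 150  # a modest supply, per the text

load = sum(demands.values())
print(load, load <= SUPPLY_RATING_WATTS)
```

Even this fully equipped configuration draws well under half the rating, which bears out the text's point that 150 watts is more than adequate for a single user system.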

Supply Voltage

Most power supplies are designed to operate from a certain line voltage and frequency. In
the United States, utility power is supplied at a nominal 115 volts and 60 Hertz. In other
nations, the supply voltage and frequency may be different. In Europe, for instance, a 230
volt, 50 Hertz standard prevails.
Most switching power supplies can operate at either frequency, so that shouldn't be a
worry when traveling. (Before you travel, however, check the ratings on your power
supply to be sure.) Linear power supplies are more sensitive. Because their transformers
have less reactance at lower frequencies, 60 Hz transformers draw more current than
their designers intend when operating on 50 Hz power. Consequently, they are liable to
overheat and fail, perhaps catastrophically.
Most PC power supplies are either universal or voltage selectable. A universal power
supply is designed to have a wide tolerance for supply current. If you have a computer
with such a power supply, all you need to do is plug the computer in, and it should work
properly. Note that some of these universal power supplies accommodate any supply
voltage in a wide range and will accept any standard voltage and line frequency available
in the world—a voltage range from about 100 to 250 volts and line frequency of 50 to 60
Hz. Other so-called universal supplies are actually limited to two narrow ranges,
bracketing the two major voltage standards. Because you are unlikely to encounter a

province with a 169.35-volt standard, these dual-range supplies are universal enough for worldwide use.
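In code, the dual-range behavior amounts to a pair of acceptance brackets around the two major world standards. A small Python sketch (the bracket limits are illustrative assumptions, not ratings taken from any real supply):

```python
# A dual-range "universal" supply accepts two narrow voltage brackets,
# one around each major world standard. Limits here are illustrative.
DUAL_RANGES = [(100, 127), (200, 250)]  # volts

def supply_accepts(volts, ranges=DUAL_RANGES):
    """True if the line voltage falls inside either acceptance bracket."""
    return any(lo <= volts <= hi for lo, hi in ranges)

print(supply_accepts(115))     # True: North American power
print(supply_accepts(230))     # True: European power
print(supply_accepts(169.35))  # False: the hypothetical province above
```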
Voltage selectable power supplies have a small switch on the rear panel that selects their
operating voltage, usually in two ranges—115 and 230 volts. If your PC has a voltage
selectable power supply, make sure that the switch is in the proper position for the
available power before you turn on your computer.
When traveling in a foreign land, always use this power supply switch to adjust for
different voltages. Do not use inexpensive voltage converters. Often these devices are
nothing more than rectifiers that clip half the incoming waveform. Although that strategy
may work for light bulbs, it can be disastrous to electronic circuitry. Using such a device
can destroy your computer. It's not recommended procedure.

The Power-Good Signal

Besides the voltages and currents the computer needs to operate, PC power supplies also
provide another signal called Power-Good. Its purpose is just to tell the computer that all
is well with the power supply and the computer can operate normally. If the Power-Good
signal is not present, the computer shuts down. The Power-Good signal prevents the
computer from attempting to operate on oddball voltages (for example, those caused by a
brown-out) and damaging itself. A bad connection or a failure of the supply's Power-Good output stops your PC just as effectively as a complete power supply failure.

Portable Computer Power

As with any PC, electricity is the lifeblood of notebook machines. With these machines,
however, emphasis shifts from power production to consumption. To achieve freedom
from the need for plugging in, these totable PCs pack their own portable
power—batteries. Although they are free from concerns about lightning strikes and
utility bill shortfalls, they face a far less merciful taskmaster—gravity. The amount of
power they have available is determined by their batteries, and weight constrains battery
size to a reasonable value (the reasonableness of which varies inversely with the length
of the airport concourse and the time spent traveling). Compared to the almost unlimited
electrical supply available at your nearby wall outlet, the power provided by a pound of
batteries is minuscule, indeed, with total available energy measuring in the vicinity of
five watt-hours.
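That figure turns into runtime with simple division. A Python sketch (the five watt-hours per pound comes from the paragraph above; the 10-watt drain is an illustrative assumption):

```python
# Runtime estimate from battery weight: energy capacity divided by drain.
def runtime_hours(battery_pounds, draw_watts, wh_per_pound=5.0):
    """Hours of operation from a pack of the given weight at a given drain."""
    return battery_pounds * wh_per_pound / draw_watts

# One pound of battery running a notebook that draws 10 watts:
print(runtime_hours(1.0, 10.0))  # 0.5 hours, which is why waste matters
```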
The power supply in a notebook computer is consequently more concerned with
minimizing waste than with regulation. After all, battery power is close to ideal to begin
with—smooth, unchanging DC at a low potential (voltage) that can be tailored to match
computer circuitry with the proper battery selection. Regulation needs are minimal: a
protection circuit to prevent too much voltage from sneaking in and destroying the
computer, and a low voltage detection circuit to warn before the voltage output of the

battery supply drops too low to reliably run the machine. Power wasting shunt or series
regulators are unnecessary because battery voltage is entirely predictable—it simply
grows a bit weaker as the charge is drained away.
Rather than regulation, management is the principal power issue in a portable PC.
Circuitry inside the system monitors which resources are being used and, more
importantly, which are not. Anything not being used gets shut off—for example, the
backlight on the display screen, the spin of the hard disk, even the microprocessor in
some systems.
With but a few exceptions, notebook computers also rely on a battery charger of some
kind so that you can use rechargeable batteries for power. In essence and operation, the
battery charger is little more than a repackaged power supply. Line voltage AC goes in,
and low voltage DC (usually) comes out. The output voltage is close to that of the
system's battery output, always a bit higher. (A slightly higher voltage is required so that
the batteries are charged to their full capacity.)
Most of the time, the battery charger/power supply is a self-contained unit external to the
notebook PC. Although they typically contain more than just a transformer, most people
call these external power supplies transformers or power bricks. The name was apt when
all external battery chargers used linear designs with heavy transformers, giving the
device the size and heft approaching that of an actual clay brick. Modern external power
supplies use switching designs, however, and can be surprisingly compact and light.
Manufacturers favor the external power supply design because it moves unnecessary
weight out of the machine itself and eliminates high voltages from anywhere inside the
computer. The design also gives you something else to carry and leave behind as well as
a connection that can fail at an inopportune time.
The brick typically only reduces line voltage to an acceptable level and rectifies it to DC.
All the power management functions are contained inside the PC.
No standard exists for the external battery chargers/power supplies of notebook
computers. Every manufacturer—and often every model of PC from a given
manufacturer—uses its own design. They differ as to output voltage, current, and
polarity. You can substitute a generic replacement only if the replacement matches the
voltage used by your PC and generates at least as much current. Polarity matching gives
you two choices—right and wrong—and the wrong choice is apt to destroy many of the
semiconductors inside the system. In other words, make extra certain of power polarity
when plugging in a generic replacement power supply. (With most PCs, the issue of
polarity reduces to a practical matter of whether the center or outer conductor of the
almost universal coaxial power plug is the positive terminal.) Also available are cigarette
lighter adapters that enable you to plug many models of notebook computers into the
standard cigarette lighter jack found in most automobiles. Again, you must match these
to the exact make and model of the PC you plan to use, being particularly critical of the
polarity of the voltage.
Most external power supplies are designed to operate from a single voltage (a few are
universal, but don't count on it). That means you are restricted to plugging in and
charging your portable PC to one hemisphere (or thereabouts) or the other. Moving from
115-volt to 230-volt electrical systems requires a second, expensive external charger.

Experienced travelers often pack voltage converters to take care of electrical differences.
Two kinds of converters are available: one that works with notebook PC chargers, and one that will likely destroy both the charger and the computer.

Rectifying Power Converters

The simplest, smallest, lightest, and cheapest converter is nothing more than a diode
(rectifier) that blocks half the AC wave from getting through, effectively cutting the
voltage in half—sort of. The result is an oddball half wave electrical supply apt to wreak
both havoc and disaster with critical electronic circuits, such as your PC and its power
supply. Although these converters work well with electric razors and hair dryers, never
plug your PC into one.

Transformers

The other kind of converter is a simple transformer. Like all transformers, these
converters are heavy (making them a joy to pack into an overnight bag). They are also
relatively expensive. They are safe for powering your PC because they deliver normal
AC at their outputs. Of course, they require that you carry two power adapter bricks with
your notebook computer—its power supply and the converter—that together probably
weigh more than the machine itself. In the long run—and the long concourse—you are
better off buying a second battery charger/power supply for your PC.

Batteries

Think back to elementary school, and you probably remember torturing half a lemon
with a strip of copper and one of zinc in a classroom experiment meant to
introduce you to the mysteries of electricity. Certainly those memories come in handy if
you are stuck on a desert island with a radio, dead batteries, a case of lemons, and strips
of zinc and copper, but they probably seem as meaningless in connection with your PC as
DOS 1.1. Think again. That juicy experiment should have served as an introduction to
battery technology (and recalling the memories of it makes a good introduction to this
section).

Storage Density

The amount of energy that a battery can store is its capacity and is usually measured in
watt-hours, abbreviated Wh. The ratio of capacity to the weight (or size) of the battery is

called the storage density of the battery. The higher the storage density, the more energy
that can be stored in a given size or weight of a cell and, hence, the more desirable the
battery—at least if you have to carry it and the PC holding it around airports all day long.

Primary and Secondary Cells

Batteries can be divided into two types: primary and secondary, or storage. In primary
batteries, the creation of electricity is irreversible; one or both of the electrodes is altered
and cannot be brought back to its original state except by some complex process (like
re-smelting the metal). Secondary or storage batteries are rechargeable; the chemical
reaction is reversible by the application of electricity. The electrons can be coaxed back
whence they came. After the battery is discharged, the chemical changes inside can be
reversed by pumping electricity into the battery again. The chemicals revert back to their
original, charged state and can be discharged to provide electricity once again.
In theory, any chemical reaction is reversible. Clocks can run backwards, too. And pigs
can fly, given a tall enough cliff. The problem is that when a battery discharges, the
chemical reaction affects the electrodes more in some places than others; recharging does
not necessarily reconstitute the places that were depleted. Rechargeable batteries work
because the chemical changes inside them alter their electrodes without removing
material. For example, an electrode may become plated with an oxide, which can be
removed during recharging.
Primary and secondary (storage) batteries see widely different applications, even in PCs.
Nearly every modern PC has a primary battery hidden somewhere inside, letting out a
tiny electrical trickle that keeps the time-of-day clock running while the PC is not. This
same battery also maintains a few bytes or kilobytes of CMOS memory to store system
configuration information. Storage batteries are used to power just about every notebook
computer in existence. (A few systems use storage batteries for their clocks and
configuration memory.)

Technologies

The lemon demonstrates the one way that chemical energy can be put to work, directly
producing electricity. The two strips of metal act as electrodes. One gives off electrons
through the chemical process of oxidation, and the other takes up electrons through
chemical reduction. In other words, electrons move from one electrode, the anode, to
another called the cathode. The acid in the lemon serves as an electrolyte, the medium
through which the electrons are exchanged in the form of ions. Together the three
elements make an electricity-generating device called a Galvanic cell, named after
the eighteenth-century Italian scientist Luigi Galvani. Several such cells connected comprise a
battery.
Connect a wire from cathode to anode, and the electrons have a way to dash back and
even up their concentration. That mad race is the flow of electricity. Add something in

the middle—say a PC—and that electricity performs work on its way back home.
All batteries work by the same principle. Two dissimilar materials (strictly speaking, they
must differ in oxidation potential, commonly abbreviated as E0 value) serving as anode
and cathode are linked by a third material that serves as the electrolyte. The choice of
materials is wide and allows for a diversity of battery technologies. It also influences the
storage density (the amount of energy that can be stored in a given size or weight of
battery) and nominal voltage output. Table 24.2 summarizes the characteristics of
common battery chemical systems.

Table 24.2. Electrical Characteristics of Common Battery Chemistry Systems

Technology            Cell voltage (nominal)  Storage density  Discharge curve  Discharge in storage
                      volts                   Wh/kg
Carbon-zinc           1.5                     70               Sloping          15% per year
Alkaline              1.5                     130              Sloping          4% per year
Lithium               3.2                     230              Flat             1% per year
Silver oxide          1.5                     120              Flat             6% per year
Nickel-cadmium        1.2                     42               Flat             1% per day
Nickel-metal hydride  1.2                     64               Flat             1% per day
Zinc-air              1.4                     310              Flat             2% per year
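The Wh/kg column translates directly into carrying weight. A Python sketch using the densities above (the 50 watt-hour target capacity is an illustrative assumption):

```python
# Weight of battery needed for a target capacity, per chemistry, using
# the storage densities (watt-hours per kilogram) from the table above.
DENSITY_WH_PER_KG = {
    "Nickel-cadmium": 42,
    "Nickel-metal hydride": 64,
    "Alkaline": 130,
    "Zinc-air": 310,
}

def battery_weight_kg(capacity_wh, chemistry):
    return capacity_wh / DENSITY_WH_PER_KG[chemistry]

for chemistry in DENSITY_WH_PER_KG:
    print(f"{chemistry}: {battery_weight_kg(50, chemistry):.2f} kg")
```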

Carbon-Zinc

The most common batteries in the world are primary cells based on zinc and carbon
electrodes. In these zinc/carbon batteries—formally called a Leclanché dry cell but better
known as the flashlight battery—zinc (the case of the battery) serves as the anode; a
graphite rod in the center acts as cathode; and the electrolyte is a complex mixture of
chemicals (manganese dioxide, zinc chloride, and ammonium chloride). Alkaline
batteries change the chemical mix to increase storage density and shelf life. Other
materials are used for special purpose batteries, but with the exception of lithium these
have not found wide application in PCs.

Alkaline

Ordinarily, alkaline batteries cannot be recharged, but Rayovac Corporation has developed a series of standard-sized alkaline cells called Renewal batteries that accept 25
to 100 recharges. To achieve their reusability, these cells combine novel fabrication
techniques and a special microprocessor controlled charger. The charger pulses power
into discharged cells and measures the effect of each pulse. Renewal batteries cannot be
charged with conventional battery chargers; in fact, they may explode if you try.

Lead-Acid

The most common storage batteries in the world are the lead-acid batteries used to start
automobiles. These have electrodes made from lead (anode) and lead oxide (cathode)
soaked in a sulfuric acid electrolyte. Not only are these batteries heavy—they are filled
with lead, after all—but they contain a corrosive liquid that can spill where it is not
wanted—generally, anywhere. Some lead-acid batteries are sealed to avoid leakage.
Gelled-electrolyte lead-acid batteries, often called simply gel cells, reduce this problem.
In these batteries, the electrolyte is converted to a colloidal form like gelatin, so it is less
apt to leak out. Unlike most lead-acid batteries, however, gel cells are degraded by the
application of continuous low current charging after they have been completely charged.
(Most lead-acid batteries are kept at full capacity by such "trickle" charging methods.)
Consequently, gel cells require special chargers that automatically turn off after the cells
have been fully charged.

Nickel-Cadmium

In consumer electronic equipment, the most popular storage batteries are nickel-cadmium
cells, often called nicads. These batteries use electrodes made from nickel and cadmium,
as the name implies. Their most endearing characteristic is the capability to withstand in
the range of 500 full charge/discharge cycles. They are also relatively lightweight, have a
good energy storage density (although about half that of alkaline cells), and tolerate
trickle charging. On the downside, cadmium is toxic.
The output voltage of most chemical cells declines as the cell discharges because the
reactions within the cell increase its internal resistance. Nicads have a very low internal
resistance—meaning they can deliver high currents—which changes little as the cell
discharges. Consequently, the nicad cell produces a nearly constant voltage until it
becomes almost completely discharged, at which point its output voltage falls
precipitously. This constant voltage is an advantage to the circuit designer because fewer
allowances need to be made for voltage variations. However, the constant voltage also
makes determining the state of a nicad's charge nearly impossible. As a result, most
battery-powered computers deduce the battery power they have remaining from the time
they have been operating rather than by actually checking the battery state.
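Such a runtime-based gauge can be sketched in a few lines (a minimal Python illustration assuming a constant drain against a known rated runtime; real systems track consumption in more detail):

```python
# Time-based "fuel gauge": remaining charge is deduced from elapsed
# operating time, not from the nicad's nearly constant voltage.
def remaining_fraction(elapsed_hours, rated_runtime_hours):
    """Assumes constant drain; clamps at zero once the rating is exceeded."""
    return max(0.0, 1.0 - elapsed_hours / rated_runtime_hours)

print(remaining_fraction(1.5, 3.0))  # 0.5 at the halfway point
print(remaining_fraction(4.0, 3.0))  # 0.0 once past the rated runtime
```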
Nicads are known for another drawback: memory. When some nicads are partly
discharged, left in that condition, and then later recharged, they may lose capacity. The
cure for the memory problem is deep discharge—discharging the battery to its minimum
working level and then charging the battery again. Deep discharge does not mean totally

discharging the battery, however. Draining nearly any storage battery absolutely dry will
damage it and shorten its life. If you discharge a nicad battery so that it produces less
than about one volt (its nominal output is 1.2 volts), it may suffer such damage.
Notebook computers are designed to switch off before their batteries are drained too far,
and deep discharge utilities do not push any farther, so you need not worry about using them.
But don't try to deeply discharge your system's batteries by shorting them out—you risk
damaging the battery and even starting a fire.
According to battery makers, newer nicads are free from memory defects. In any case, to
get the longest life from nicads, the best strategy is to operate the battery through its
complete cycle: charge it fully; run it until it is normally discharged; then fully charge it again.

Nickel-Metal Hydride

A more modern update to nickel cadmium technology is the nickel-metal hydride cell
(abbreviated NiMH). These cells have all the good characteristics of nicads, but lack the
cadmium—substituting heavy metals that may also have toxic effects. Their chief
strength is the capability to store up to 50 percent more power in a given cell. In addition,
they do not appreciably suffer from memory effects.
Both nicads and nickel-metal hydride cells suffer from self-discharge. Even sitting
around unused, these cells tend to lose their charge at a high rate in the vicinity of 30
percent per month.
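That loss compounds day by day. A quick Python illustration (using the roughly one percent per day rate from Table 24.2):

```python
# Self-discharge compounds like interest: each day removes a fraction
# of whatever charge remains.
def charge_after(days, daily_loss=0.01):
    return (1.0 - daily_loss) ** days

print(round(charge_after(30), 2))  # 0.74: about a quarter gone in a month
```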
Most PC batteries and battery chargers are designed to be plugged in continuously
without any detrimental effects on the battery. In fact, the best strategy is to leave your
PC plugged in even after it is fully charged, detaching it from its charger only when you
have to take the machine on the road. The trickle charge will not hurt it (in fact, the
battery charging circuitry may switch off once the battery is charged), and you will
always be ready to roam.

Zinc-Air

Of the current battery technologies, the one offering the most dense storage is zinc-air.
One reason is that part of its chemical needs is external. Zinc-air batteries use
atmospheric oxygen as their cathode reactant, hence the "air" in the name. Small holes in
the battery casing allow air in to react with a powdered zinc anode through a highly
conductive potassium hydroxide electrolyte.
Originally created for use in primary batteries, zinc-air cells were characterized by
long, stable storage life, at least when kept sealed from the air and thus inactive. A
sealed zinc-air cell loses only about 2 percent of its capacity after a year of storage.
Battery makers have since adapted zinc-air technology for secondary storage. Zinc-air cells
work best when frequently or continuously used in low drain situations.

Zinc-air secondary cells are so new that they have been only crudely adapted to portable
PC applications. One of the first products was the PowerSlice XL, developed jointly by
Hewlett-Packard Co. and AER Energy Resources Inc. for the HP OmniBook 600
notebook PC. The 7.3-pound external battery can power the OmniBook for about 12
hours.

Standards

Contrary to appearances, most rechargeable batteries used by notebook PCs are standard
sizes. Computer manufacturers, however, package these standard size batteries in custom
battery packs that may fit only one model of computer. Usually you cannot change the
batteries in the pack but must buy a new replacement if something goes awry.
Duracell has made a concerted effort at developing standard size battery packs as well.
Compaq has even used some of these standards in its notebook PCs. Table 24.3 lists the
characteristics of some of these standardized Duracell batteries.

Table 24.3. Specifications of Duracell Standard Notebook Batteries

Type  Size       Weight (grams)  Voltage (volts)  Capacity (milliamp-hours)
DR10  5 x 4/5A   182             6.0              1500
DR11  10 x 4/5A  345             6.0              3000
DR17  6 x 4/5A   210             7.2              1500
DR19  9 x 4/5A   305             10.8             1500
DR30  6 x 4/3A   323             7.2              2400
DR31  9 x 4/3A   495             10.8             2400
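The Size column rewards a little arithmetic. Because nickel-based cells produce a nominal 1.2 volts, the pack voltage reveals how many cells sit in series, and the cell count then implies how many parallel strings there are. A Python sketch (assuming all cells in a pack are wired in identical series strings):

```python
# Inferring pack wiring from the table above: pack voltage gives the
# series count at 1.2 volts per nickel-based cell; total cells divided
# by the series count gives the number of parallel strings.
NOMINAL_CELL_VOLTS = 1.2

def series_cells(pack_volts):
    return round(pack_volts / NOMINAL_CELL_VOLTS)

def parallel_strings(total_cells, pack_volts):
    return total_cells // series_cells(pack_volts)

print(series_cells(10.8))         # 9 cells in series, as in the DR19
print(parallel_strings(10, 6.0))  # 2 strings of 5, as in the DR11
```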

The individual batteries in the custom packs used by many notebook computer makers
contain several standard size cells. If you are up to a challenge, you can disassemble the
packs, remove the individual cells (they look like ordinary flashlight batteries only
smaller and usually lack labels), and replace them with new cells. A couple of warnings
are in order: For reliability, these cells are usually soldered together, so you'll need
proficiency with a soldering iron to replace batteries. Moreover, you should replace all
the cells in a given pack at the same time. Never replace a single cell. Always replace the
cells with those of equivalent capacity and technology—more recent battery packs have
built-in intelligence that doesn't expect any yahoo to monkey with its inner workings, so
your change has to be an exact replacement.

Smart Battery Specifications

Rechargeable batteries are fraught with problems. Drain them too much and you kill
them. Charge them too much and you kill them. Charge them not enough and you'll wish
you were dead when they run dry mid-continent and you have only the rest of the flight
to finish your report.
Charging and monitoring the charge of batteries has always been problematic. Both
capacity and charge characteristics vary with the battery type and over the life of a given
battery.
The smartest conventional battery chargers monitor not the voltage but the temperature
of their subjects because a sharp rise in temperature is the best indication available to the
charger of the completion of its work. Even this rise varies with battery chemistry, so a
nicad and NiMH battery present different—and confusing—temperature characteristics
that could lead a charger to mistake one for the other and possibly damage the battery.
The Smart Battery system, developed jointly by battery maker Duracell and Intel and
first published as the Smart Battery Data Specification, Version 1.0, on February 15,
1995, eliminates these problems by endowing batteries with enough brains to tell of their
condition. When matched to a charger with an equivalent IQ that follows the Smart
Charger specification, the Smart Battery gets charged perfectly every time with never a
worry about overcharging.
The Smart Battery system defines a standard with several layers that distribute
intelligence between battery, charger, and your PC. It provides for an inexpensive
communication link between them—System Management Bus, the equivalent of
ACCESS.bus discussed in Chapter 21, "Serial Ports"—a protocol for exchanging
messages, and message formats themselves.
The Smart Battery Data Specification outlines the information that a battery can convey
to its charger and the message format for doing so. Among other data that the battery can
relay are its chemistry, its capacity, its voltage, and even its physical packaging.
Messages warn not only about the current status of the battery's charge but even how
many charge-recharge cycles the battery has endured so the charger can monitor its long
term prognosis. The specification is independent of the chemistry used by the battery and
even the circuitry used to implement its functions. What matters is the connection
(System Management Bus) and the messages sent by the battery.
Complementing the battery standard is the Smart Battery Charger specification that, in
addition to describing the data exchanged between charger and battery, categorizes the
relationship between Smart Batteries and different charger implementations.
Completing the system are a description of its System Management Bus and a matching
BIOS interface standard that provides a common control system to link to PC software
and operating systems.
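The flavor of the battery's side of that conversation can be suggested with a simple record (a Python illustration only; the field names are invented for the sketch, and the actual specification defines its own registers and System Management Bus message formats):

```python
# Illustrative record of values a smart battery might report: chemistry,
# capacity, voltage, and how many charge/discharge cycles it has endured.
from dataclasses import dataclass

@dataclass
class BatteryReport:
    chemistry: str
    design_capacity_mwh: int
    voltage_mv: int
    cycle_count: int

    def near_end_of_life(self, rated_cycles=500):
        """Flag a pack approaching its rated charge/discharge cycle count."""
        return self.cycle_count >= rated_cycles

pack = BatteryReport("NiMH", 38_000, 9_600, 412)
print(pack.near_end_of_life())  # False: cycles remain before replacement
```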

Clock Batteries

Nearly every PC since the AT was introduced in 1984 has had a time-of-day clock built
into its system board circuitry. To keep proper track of the hours, days, and eons, this
clock needs to run continuously even when the computer itself is switched off or
unplugged. The source for the needed power is a small battery.
Different manufacturers have taken various approaches to supplying this power. At one time,
some manufacturers put lithium primary batteries in a plastic holder accessible at the rear
of the system unit. Most PC makers put the batteries inside, often hidden if not
inaccessible. More recent machines often use integrated clock modules such as those
made by Dallas Semiconductor, which have small lithium cells built into molded plastic
modules. Some machines make these modules user-replaceable; others solder them in
place. Because the modules are rated for a ten-year life, in theory you may never need to
replace one during the brief period your PC is actually useful.
Lithium cells have several notable aspects. They offer a high energy density, packing
much power for their size. Moreover, they have a very long shelf life. Whereas
conventional zinc/carbon dry cells lose potency after a year or so even when no power is
being drawn from them, lithium cells keep most of their power for a decade. These
qualities make lithium cells suited to providing clock power because today's solid state
clocks draw a minuscule amount of power—so small that when battery and circuit are
properly matched, battery life nearly equals shelf life.
The downside of these lithium cells is that they are expensive and often difficult to find.
Another shortcoming is that the metals used in them result in an output voltage of three
volts per cell. A one-cell lithium battery produces too little voltage to operate standard
digital circuits; a two-cell lithium battery produces too much.
Of course, engineers can always regulate away the excess voltage, and that is typically
done. Poor regulator design, however, wastes more power than is used, robbing the
battery of its life. Some PCs suffer from this design problem and consequently give
frightfully short battery life.
Many computer makers avoid the expense and rarity of lithium batteries by adding
battery holders for four (or so) type AA cells. Because zinc/carbon and alkaline cells
produce 1.5 volts each, a four pack puts out the same six volts as a dual cell lithium
battery and can suffer the same problems in improperly designed PCs—only more so
because the cells have shorter lives. A three pack of AA cells produces 4.5 volts, which is
adequate for most clock circuits and need not be hampered by regulation. Special
alkaline PC battery modules are available that combine three ordinary cells into one
package with the proper connector to match most system boards.

Notebook Power

Portable computers put contradictory requirements on their batteries; they must produce
as much power for as long as possible, yet be as small and light as possible. Filling those
needs simultaneously is impossible, so notebook computer batteries are always a
compromise.

All three of the most popular storage batteries—lead-acid, nicad, and
nickel-hydride—are used in notebook and subnotebook computers. From your
perspective as an end user, however, the technology doesn't matter as long as the result is
a PC that you can carry without stretching your arms too long and use without getting
caught short too often. Odds are, however, you will see nickel-hydride batteries
increasing in popularity in notebook computers because of their greater storage density
and less hazardous nature.
Notebook computer makers traditionally design the packaging for the batteries of their
machines. These custom designs enable them to better integrate the battery with the rest
of the notebook package. It also makes you dependent on the computer manufacturer for
replacement batteries. (Most packs have standard size cells inside. You can crack the
battery pack open and replace the cells, but the effort is rarely worth the reward.)
This situation is changing. One battery manufacturer (Duracell) has proposed standard
sizes for rechargeable batteries for notebook computers.
More important than battery type or packaging, however, is care. If you take proper
care of your PC's batteries, they will deliver power longer—both more time per charge
and more time before replacement.

Battery Safety

The maximum current any battery can produce is limited by its internal resistance.
Zinc/carbon batteries have a relatively high resistance and produce small currents, on the
order of a few hundred milliamperes. Lead-acid, nickel-cadmium, and nickel-hydride
batteries have very low internal resistances and can produce prodigious currents. If you
short the terminals of one of these batteries, whatever produces the short circuit—wires, a
strip of metal, a coin in your pocket—becomes hot because of resistive heating. For
example, you can melt a wrench by placing it across the terminals of a fully charged
automotive battery. Or you can start a fire with something inadvertently shorting the
terminals of the spare nickel-cadmium battery for your notebook or subnotebook
computer. Be careful and never allow anything to touch these battery terminals except
the contacts of your notebook PC.
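The danger is easy to quantify with Ohm's law. The sketch below shows why a low internal resistance makes a dead short so violent; the 0.01-ohm internal resistance is an illustrative assumption, not a figure from the text:

```python
def short_circuit(voltage, internal_resistance):
    """Estimate the short-circuit current (amperes) and the heat
    dissipated (watts) when a battery's terminals are bridged.
    Ohm's law gives I = V / R; the power P = V * I all appears as
    resistive heating in the short and in the battery itself."""
    current = voltage / internal_resistance
    power = voltage * current
    return current, power

# A 12-volt automotive battery with an internal resistance on the
# order of 0.01 ohm (an assumed, illustrative value) can source
# roughly a kiloampere into a dead short:
amps, watts = short_circuit(12.0, 0.01)
```

With those assumed numbers the short draws 1,200 amperes and dissipates over 14 kilowatts, which is why a wrench across the terminals glows and melts.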
When a battery is charged, a process called electrolysis takes place inside. If you
remember your high school science experiments, electrolysis is what you did to break
ordinary water into hydrogen and oxygen using electricity. Hydrogen is an explosive gas;
oxygen is an oxidizer. Both are produced when charging batteries. Normally these gases
are absorbed by the battery before they can do anything (such as explode), but too great a
charging current (as results from applying too high a voltage) can cause them to build up.
Trying to charge a primary battery produces the same gas build-up. As a result, the
battery can explode from too great an internal pressure, or from combustion of the gases.
Even if the battery does not catastrophically fail, its life will be greatly reduced. In other
words, use only the charger provided with a portable PC battery and never try to hurry
things along.
Nearly all batteries contain harmful chemicals of some kind. Even zinc-carbon batteries
contain manganese, which is regarded as hazardous. All batteries present some kind of
environmental hazard, so be sure to properly dispose of them. Some manufacturers are
beginning to provide a means of recycling batteries. Encourage them by taking advantage
of their offers.

Desktop PC Power Supplies

Most PCs package their power supplies as a subassembly that's complete in itself and
simply screws into the chassis and plugs into the system board and other devices that
require its electricity. The power supply itself is ensconced in a metal box perforated with
holes that let heat leak out and prevent your fingers from poking in.
In fact, the safety provided by the self-contained and fully-armored PC power supply is
one of the prime advantages of the original design. All the life-threatening voltages—in
particular, line voltage—are contained inside the box of the power supply. Only low,
non-threatening voltages are accessible—that is, touchable—on your PC's system board
and expansion boards. You can grab a board inside your PC even when the system is
turned on and not worry about electrocution (although you might burn yourself on a
particularly intemperate semiconductor or jab an ill-cut circuit lead through a finger).
Grabbing a board out of a slot of an operating computer is not safe for the computer's
circuits, however. Pulling a board out is apt to bridge together some pins on its slot
connector, if but for an instant. As a result, the board (and your PC's motherboard) may
find unexpected voltages attacking, possibly destroying, its circuits. These surprises are
most likely in EISA systems because of their novel expansion connectors. In other words,
never plug in or remove an expansion board from a PC that has its power switched on.
Although you may often be successful, the penalty for even one failure should be enough
to deter your impatience.
In most PCs, the power supply serves a secondary function. The fan that cools the power
supply circuits also provides the airflow that cools the rest of the system. This fan also
supplies most of the noise that PCs generate while they are running. In general, the power
supply fan operates as an exhaust fan—it blows outward. Air is sucked through the other
openings in the power supply from the space inside your system. This gives dust in the
air taken into your PC a chance to settle anywhere on your system board before getting
blown out through the power supply.

Power Supply Selection

Three standards have emerged for the physical size of the PC power supply
package—one that fits into the chassis of the original PC and XT, one that fits the full
size AT chassis, and the ATX design meant to link to the latest ATX standard
motherboards. AT-size power supplies are taller and wider than PC/XT models,
measuring 5-7/8 by 8-3/8 by 5-7/8 inches (HWD) with a notch taken out of the inboard
bottom corner to allow extra space inside the computer chassis for the system board.
PC/XT-size power supplies measure about 4-3/4 by 8-3/8 by 5-1/2 inches. ATX power
supplies are meant specifically for the standardized case design that was created for the
ATX motherboard.
Although it is obvious that the AT power supply cannot fit into the smaller XT-size
chassis, you may be surprised to discover that the smaller XT power supply also cannot
fit properly into an AT chassis. The placement of screws and other functional parts is
different enough that the little box cannot fit right in the big box.
Other system design variations may frustrate your power supply replacement efforts. The
more effort that a PC maker uses in designing its own identifiable system, the farther that
system varies from the accepted standards. Larger manufacturers—AST, Compaq, Dell,
IBM, NEC, and others—typically use a ground-up design philosophy that requires power
supplies be matched to a custom designed case. This means that they forego either of the
two standard power supply packages for something that suits their purposes better. As a
result, a power supply failure in one of these systems is a more expensive disaster than in
systems from smaller manufacturers who use standard size parts. A proprietary power
supply may cost $400 or more, whereas a standard power supply retails for $50 or less.
Beyond mere size, power supplies come in two classes—the generic and the glamorous.
Generic power supplies make no claims except that they deliver the volts and amps you
need. They likely originate in some part of the Far East that you can't pronounce and
even less imagine. They are the cheap ones with prices often below $50—and they work,
at least for a while. In fact, many are likely to be the same units that nestle themselves in
your favorite compatible computers.
The glamorous watt makers promise some grand advantage over their generic siblings.
More watts, less noise, more wind, and so on. The glamorous ones demand a premium price
and may earn a premium guarantee. Whether you need one depends on your sensibilities
and motivations. Most PCs are adequately served by the low end power supplies. If your
PC operates without problems on a 100-degree day, you probably don't need better
cooling, although a quieter fan gives anyone's ears a break. In other words, peace and
quiet can overrule purely budgetary sense, but the decision is entirely personal.

Power Supply Mounting

Standard size power supplies also are standard in their mounting. In all cases, the big
chrome power supply box is held in place by four screws in the back panel of the
computer. The front is secured by two fingers stamped from the computer chassis so that
the power supply does not stress the rear panel of the PC.
After you remove the top of your computer's case and locate the power supply, you see
that four of the screws in the rear panel roughly coincide with the four corners of the
backside of the power supply. When you face the rear of the computer, the four screws
are on the left half of the rear panel, arranged roughly in a rectangle (see Figure 24.1).
Remove these screws, and the power supply will be loose inside the chassis but not
entirely free.

Figure 24.1. An example power supply with rear panel screws in corners.

Before you attempt to lift the power supply out, remove the power supply connectors
from each disk drive and the two connectors from the system board. Finally, slide the
power supply box about one inch forward in the chassis until it bumps lightly against the
rear of the disk drive bay or disk drives. The power supply should then lift out of the
chassis without further ado.
Installing a new or replacement power supply is equally easy. First, properly orient the
supply so that the power switch protrudes through the notch cut in the top of the chassis.
Then lower the power supply straight down into the chassis into the empty space left by
the old power supply.
Before attempting to screw the new power supply into place, push it toward the front of
the chassis and gently into the drive bays. Then, while pressing it down, push the power
supply back toward the rear panel of the chassis. This front-then-back slide should slip
the two steel fingers on the computer chassis through slots at the bottom of the power
supply to hold it in place. You may also want to attach the power supply connectors
before you screw the power supply down.
Finally, screw the power supply into place. Start all four screws, but give them no more
than two full turns before you have all four started. This enables you to move the power
supply slightly to line up all four holes. If you tighten one screw first, you may find that
the rest of the holes in the power supply do not line up with those in the chassis. When all
four screws have been started, drive them all home.

PC Power Connections

Standard PC power supplies have two kinds of connectors dangling from them. Two of
them go to the system board; the rest are designed to mate with tape and disk drives.

Mass Storage Power

The tape and disk drive connectors supply both five and twelve volts to operate
those devices. The connectors come in two sizes, and both are polarized so that you can't
install them improperly. The original drive power connector was roughly rectangular in
cross-section, but had two of its corners chamfered so that it fit in its matching jack on
the drive in only one orientation. The newer, miniaturized power connector used by
many 3.5-inch drives has a polarizing ridge that enables you to insert it only in the proper
orientation. If either kind of drive connector doesn't seem to fit, don't force it! Instead,
rotate it 180 degrees and try again. Likely it will slide into place. Figure 24.2 shows the
drive power connector and its keying.
Figure 24.2 Mass storage device power connector.

PCs and some compatible computers are decisively frugal with their power connectors,
supplying only two outlets. Powering more than two drives requires a Y-adapter that
splits the power lines two ways. Figure 24.3 shows a Y-adapter cable that increases the
available drive power connectors by one.
Figure 24.3 A sample Y-cable.

Although you can make such a Y-cable if you have the right connectors, it is easier and
often cheaper to buy one from a drive vendor ready-made. An even better idea is to
replace the meager PC power supply with one that can supply more current because the
factory standard supply really doesn't have the capacity for operating multiple mass
storage devices other than their standard two-floppy endowment.

System Board Power

The two system board power connectors on standard power supplies are not identical.
Each has its own repertory of voltages. On most PC power supplies these connectors are
labeled P8 and P9. The lower number attaches to the mating connector on the PC system
board, typically the one nearer the rear of the chassis.
Not all system boards match the two Burndy connectors standard on most power
supplies. Some systems combine the two connectors into one. Other system board
manufacturers sometimes use slightly different Molex connectors. Unfortunately, Burndy
and Molex connectors are not entirely compatible.
One difference between the two connector types is that the pins of a Burndy are
rectangular. Molex system board connectors use smaller, square pins. Only with great
effort can you mate dissimilar connectors. Check the style of pins required by your
system board before you order a power supply. The only way to be sure about the style of
connector is to disconnect one of them (with your PC switched off, of course) to examine
the shape of its pins.
The Burndy connectors used by most power supplies are supposed to be keyed so that
you cannot put one in the wrong place. Unfortunately, many replacement power supplies
are shipped without the proper keying.
If you examine the power connectors meant to attach to the system board, you see that
one side of the connector has one or more small tabs sticking out. If just one is longer
than the rest, the connector is keyed. If all are the same length, the connector has not
been keyed. You can key it by cutting off all but the one right tab using a pair of diagonal
cutters. The proper keying trims all but the fifth tab on connector P8 and all but the first
tab on P9; in both cases, as viewed from the tab side with the connector contacts down
(see Figure 24.4).
Figure 24.4 Keying of system board power connectors.

Another way to make sure that the system board connectors are in their proper positions
is by the color codes of the wires. Proper installation puts black wires in the middle—that
is, the black wires on the connectors adjoin one another. The proper color sequence is as
follows: orange, red, yellow, blue, black, and black on P8; black, black,
white, red, red, and red on P9. Some of these colors may vary among power supplies of
different manufacture, although the black and red coding is usually consistent among
most power supplies. When the two motherboard power supply connectors are combined
into one, the arrangement of the wire color codes typically remains consistent with that
used for two connectors.

ATX Power Connections

As part of the ATX effort to simplify the design of motherboards, the two power supply
connectors of the conventional AT motherboard were combined into a single 20-pin
connector. This single unified design provides standard locations for all voltages and
signals normally used by PC power supplies. Table 24.4 lists the definitions of the pins of
the ATX power connector.

Table 24.4. ATX Motherboard Power Supply Connections

Pin Color Function Pin Color Function


1 Orange +3.3 VDC 11 Orange +3.3 VDC
2 Orange +3.3 VDC 12 Blue -12 VDC
3 Black Common 13 Black Common
4 Red +5 VDC 14 Green Power Supply On
5 Black Common 15 Black Common
6 Red +5 VDC 16 Black Common
7 Black Common 17 Black Common
8 Gray Power Good 18 White -5 VDC
9 Purple 5VSB 19 Red +5 VDC
10 Yellow +12 VDC 20 Red +5 VDC

In addition to the signals listed above, under version 2.01 of the ATX specification, pin
11 of the power connector also has a brown wire for sensing the 3.3-volt supply. Pin
11 consequently may have two wires connected to it, one orange, one brown, of a smaller
gauge (AWG 22) than the other wires (which are AWG 18).
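Transcribed into code, the main-connector pinout becomes a simple lookup that, for example, a diagnostic script might use. This is a sketch built directly from Table 24.4; the dictionary and function names are invented for illustration:

```python
# Main 20-pin ATX power connector, pin number -> function,
# transcribed from Table 24.4.
ATX_MAIN_CONNECTOR = {
    1: "+3.3 VDC", 2: "+3.3 VDC", 3: "Common", 4: "+5 VDC",
    5: "Common", 6: "+5 VDC", 7: "Common", 8: "Power Good",
    9: "5VSB", 10: "+12 VDC", 11: "+3.3 VDC", 12: "-12 VDC",
    13: "Common", 14: "Power Supply On", 15: "Common", 16: "Common",
    17: "Common", 18: "-5 VDC", 19: "+5 VDC", 20: "+5 VDC",
}

def pin_function(pin):
    """Return the function of a main-connector pin; pin 14, for
    instance, carries the Power Supply On control signal."""
    return ATX_MAIN_CONNECTOR[pin]
```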
The revised ATX specification also includes provisions for a second, optional six-pin
power connector. This connector allows for a separate 3.3-volt sensing signal as well as
fan power and power for the IEEE 1394 bus. The signal assignments for this connector
are listed in Table 24.5.

Table 24.5. ATX Optional Power Connector Signal Assignments

Pin Color Function


1 White FanM
2 White w/blue stripe FanC
3 White w/brown stripe 3.3V Sense
4 None Reserved
5 White w/red stripe 1394V
6 White w/black stripe 1394R

All of these are specialized signals. FanM is a tachometer signal from the power supply
fan that allows the system to monitor whether the fan is operating properly. FanC
controls the speed of the fan with a variable voltage ranging from 1 volt for off to 10
volts for full speed. The 1394V connection provides a dedicated, unregulated voltage
supply for the IEEE 1394 bus; 1394R is the ground return for this voltage.
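The FanC behavior can be sketched as a mapping from requested speed to control voltage. The specification text above gives only the two endpoints (1 volt for off, 10 volts for full speed), so the linear ramp between them in this sketch is an assumption:

```python
def fanc_voltage(speed):
    """Map a requested fan speed (0.0 = off, 1.0 = full speed) to a
    FanC control voltage.  The endpoint voltages come from the ATX
    optional-connector description; the linear ramp between them
    is assumed for illustration."""
    if not 0.0 <= speed <= 1.0:
        raise ValueError("speed must lie between 0.0 and 1.0")
    return 1.0 + 9.0 * speed
```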

Power Management

With few advances in battery storage density expected in the near future, PC makers have
relied on reducing the power consumption of their notebook PCs to extend the time a
machine can operate between battery charges.
Engineers can use two basic strategies to reduce the power consumption of PCs. They
can design circuits and components to use less power, and they can manage the power
used by the devices. Managing power needs usually means switching off whatever
system components aren't being actively used. Although the two design methods can be
used separately, they are usually used in tandem to shrink PC power needs as much as
possible.
Microprocessors, the most power hungry of PC circuits, were among the first devices to
gain built-in power management. System Management Mode endowed processors with
the ability to slow down and shut off unnecessary circuits when they were idle. Similarly,
makers of hard disk drives added sleep modes to spin down their platters and reduce
power needs. Most PCs also incorporated timers to darken their screens to further
conserve power.
Although these techniques can be successful in trimming power demands, they lack a
unified control system. In response, the industry developed the Advanced Power
Management interface to give overall control to the power saving systems in PCs. More
recently, APM has been updated and augmented by the Advanced Configuration and
Power Interface specification.

Advanced Power Management

The Advanced Power Management interface specification was jointly developed by Intel
and Microsoft to integrate the control of hardware power saving features with software
control. The specification was first published in January 1992 as the APM BIOS
Interface Specification; the current version, 1.2, was published in February 1996.
Although nominally a BIOS interface, the APM specification describes a layered control
system that controls PC devices to reduce power consumption and uses both BIOS and
API interfaces. To be fully functional, APM requires a compatible BIOS and hardware
devices that recognize APM control. In addition, hardware devices may have their own
built-in automatic power management functions which are not controlled by your PC's
software. For example, after a given period without accesses, a hard disk drive may
automatically power down without specific command from your PC. The APM
specification tolerates but does not affect these built-in functions.

States

APM is an overall system feature. Although it has the ability to individually control the
features of each device it manages, the basic APM design controls all devices together to
conserve power. It manages system power consumption by shifting the overall operating
mode of the PC among modes called APM states. APM shifts the operating state of the
system based on
the needs of the system as determined from a combination of software commands and
events. The various APM states provide for power savings at five levels. The APM
specification gives each of these levels a specific state name.
The first, Full On state, means that the system is operating at full power without any
management at all. The APM software is not in control, and no power savings can be
achieved. A system without APM or with its APM features disabled operates in full on
state.
When the APM system is active, all devices run in their normal, full power consumption
modes. The system is up and ready to do business, operating in what the specification
calls APM Enabled state.
In APM Standby state, the microprocessor may stop and many of the system devices are
turned off or operate at reduced power. The system usually cannot process data, but its
memory is kept alive and the status of all devices is preserved. When your activity or
some other event requires system attention, the PC can rapidly shift from standby to
enabled state.
In APM Suspend state, the system shifts to its maximum power savings mode—most
devices that follow the APM standard are switched off and the microprocessor switches
to its lowest power state with its clock turned off. Your PC becomes a vegetable.
Hibernation is a special implementation of suspend state that allows the system to be
switched entirely off and still be restored to the point at which it entered suspend state.
When entering suspend state, the system saves all of its operating parameters. In entering
hibernation, the system copies memory and other status data to non-volatile storage such
as hard disk, allowing you to switch off memory power. A system event can shift back to
enabled state from suspend or hibernation, but changing modes from suspend to enabled
takes substantially longer than from standby to enabled.
Off state is exactly what the name implies. Power to the system is entirely off. The
computer is more mineral than vegetable. The only event that restores the system is
turning it back on. If you enter off state directly—say by switching your PC off—no
status information or memory gets saved. The system must run through the entire bootup
process and starts with a clean slate.

Structure

APM adds a layered control system to give you, your software, and your hardware a
mechanism to shift states manually or automatically.
The bottom layer of the system is the APM BIOS which provides a common software
interface for controlling hardware devices under the specification. The APM standard specifies
that the BIOS have at least a real mode interface that uses interrupt 15(Hex) to implement
its functions. In addition, the APM BIOS may also use 16- or 32-bit protected mode
using entry points that are returned from the protected mode connection call using the
real mode interrupt.
The APM BIOS is meant to manage the power of the motherboard. Its code is specific to
a given motherboard. Under the APM specification, the APM BIOS can operate
independently of other APM layers to effect some degree of power saving in the system
by itself. Your PC's operating system can switch off this internal BIOS APM control to
manage system power itself, still using the APM BIOS interface functions to control
hardware features.
Linking the APM BIOS to your operating system is the APM Driver. The driver provides
a set of function calls to the operating system, which it translates to BIOS interrupts. The
driver is more than a mere translator, however. It is fully interactive with both the BIOS
and operating system. For example, the BIOS may generate its own request to power
down the system, and the driver then checks with the operating system to determine
whether it should permit the powerdown.
The APM system has a built-in fail-safe. The APM driver must interact with the BIOS at
least once per second. If it does not, then after one more second the BIOS assumes the
operating system has malfunctioned and takes control. The driver can regain control by
sending the appropriate commands (interrupts) to the BIOS.
Certain system events termed wake-up calls tell the APM system to shift modes.
Interrupts generated by such events as a press of the resume button, when the modem
detects an incoming telephone ring, or an alarm set on the real time clock can command
the APM BIOS to shift the system from suspend to enabled state.

Operation

All real mode APM interrupt functions require that the AH register of the microprocessor
be set at 53(Hex) on entry, identifying that the requested function is for APM. The AL
register then defines the function to be carried out. Other registers indicate which devices
in the system (which essentially means the microprocessor or everything else) to affect
and parameters of the command. For example, to shift the system from On to APM
Enable state, your operating system can issue interrupt 15(Hex) with AH set at 53(Hex)
and AL set at 08(Hex). The BX register identifies the devices to be affected and CX tells
the BIOS whether to enable (set at 0001) or disable (set at 0000) power management.
Table 24.6 lists the functions defined for the APM BIOS.

Table 24.6. APM Real Mode Interrupt Functions

AL value Function
00(Hex) APM installation check
01(Hex) APM real mode interface connect
02(Hex) APM protected mode connect 16 bit
03(Hex) APM protected mode connect 32 bit
04(Hex) APM interface disconnect
05(Hex) CPU idle
06(Hex) CPU busy
07(Hex) Set power state
08(Hex) Enable/disable power management
09(Hex) Restore power-on defaults
0A(Hex) Get power status
0B(Hex) Get power managed event
0C(Hex) Get power state
0D(Hex) Enable/disable device management
0E(Hex) APM driver version
0F(Hex) Engage/disengage power management
10(Hex) Get capabilities
11(Hex) Get/set/disable resume timer
12(Hex) Enable/disable resume on ring indicator
13(Hex) Enable/disable timer based requests
80(Hex) OEM APM function

By loading the BX register with an appropriate value, the driver or operating system can
command an individual device, class of devices, or the entire APM system. Device
classes include mass storage, the display system, serial ports, parallel ports, network
adapters, and PC Card sockets.
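The register convention works out to a simple packing scheme. This sketch builds the register values a driver would load before issuing interrupt 15(Hex); the BX device value in the usage line is illustrative, not a code drawn from the specification text:

```python
APM_FUNCTION_CLASS = 0x53  # loaded into AH for every APM call

def apm_call_registers(function, bx=0x0000, cx=0x0000):
    """Build the x86 register values for an APM BIOS request.
    AH holds 53(Hex) and AL the function code (together forming AX);
    BX selects the target device(s) and CX carries a
    function-specific parameter."""
    ax = (APM_FUNCTION_CLASS << 8) | (function & 0xFF)
    return {"AX": ax, "BX": bx, "CX": cx}

# Enable power management (function 08 Hex): CX = 0001 enables,
# CX = 0000 disables.  (The BX device value here is illustrative.)
regs = apm_call_registers(0x08, bx=0x0001, cx=0x0001)
```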
To determine the state of devices in the system, the APM design requires that the BIOS
be polled at the once per second rate. The APM driver monitors the status of power
managed events using function 0B(Hex), and the BIOS responds by sending an event code
back to the driver in its BX register. Of course, several events might occur in the second
between polls. To accommodate multiple events, the driver repeatedly polls the BIOS. The
BIOS reports each event in sequence. The driver ceases its polling when the BIOS runs
out of events to report. Table 24.7 lists APM power management events.

Table 24.7. APM Power Management Events

BX value Event
0001(Hex) System standby request
0002(Hex) System suspend request
0003(Hex) Normal resume system
0004(Hex) Critical resume system
0005(Hex) Battery low
0006(Hex) Power status change
0007(Hex) Update time
0008(Hex) Critical system suspend
0009(Hex) User system standby request
000A(Hex) User system suspend request
000B(Hex) System standby resume
000C(Hex) Capabilities change
000D to 00FF(Hex) Reserved system events
0100 to 01FF(Hex) Reserved device events
0200 to 02FF(Hex) OEM-Defined APM events
0300 to FFFF(Hex) Reserved
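The poll-until-empty sequence can be sketched as a simple loop. The class below merely stands in for the BIOS side of function 0B(Hex), since a real driver would issue interrupt 15(Hex); the queue, the class name, and the 0000(Hex) "no event" sentinel are all illustrative assumptions:

```python
from collections import deque

class SimulatedApmBios:
    """Stand-in for the BIOS side of function 0B(Hex), get power
    managed event.  A queue plays the BIOS's role here."""
    NO_EVENT = 0x0000  # sentinel for "nothing left to report" (assumed)

    def __init__(self, pending_events):
        self._pending = deque(pending_events)

    def get_power_managed_event(self):
        return self._pending.popleft() if self._pending else self.NO_EVENT

def drain_events(bios):
    """Poll repeatedly, as the APM driver does each second, until
    the BIOS reports no further queued events."""
    events = []
    while (event := bios.get_power_managed_event()) != bios.NO_EVENT:
        events.append(event)
    return events

# Battery low (0005 Hex) followed by a power status change (0006 Hex):
bios = SimulatedApmBios([0x0005, 0x0006])
```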

The driver can take appropriate action on its own or relay the information it obtains to the
operating system, which then makes its own judgment about what to do.

Advanced Configuration and Power Interface

The next generation of power management will integrate PC hardware and operating
systems into a cooperative power-saving whole called the Advanced Configuration and
Power Interface. This new standard, developed jointly by Intel, Microsoft, and Toshiba,
builds on the foundation of APM with the goal of putting the operating system into
control of the PC power system. Version 1.0 was formally released in December 1996.
ACPI is an integral part of the Microsoft-inspired OnNow initiative that seeks to
minimize the delays inherent in starting up and shutting down a PC burdened with
megabytes of operating system overhead, to let the PC run tasks while it appears to be
off, and to lower the overall power requirement of the PC. New operating systems
require time to test the host PC, check out Plug-and-Play devices, and set up their
structures. These functions take so long to carry out that booting a modern operating
system makes the warm-up of your PC seem to start at absolute
zero. OnNow seeks to eliminate that wait. At the same time, it promises to integrate the
power and configuration interfaces of operating systems (specifically, the Windows 95
and NT families) so that programmers can write to a common standard.
To bring these features to life, the OnNow design moves the operating system to the
center of power management using ACPI and builds a new table structure for storing and
organizing configuration information.
As a power management system, the ACPI specification can accommodate the needs of
any operating system, integrating all the necessary power management features required
in a PC from the application software down to the hardware level. It enables the
operating system to automatically turn on and off and adjust the power consumption of
nearly any peripheral, from hard disk drives to displays to printers. It can reach beyond
the PC to other devices that may be connected into a single system some time in the
future—televisions, stereos, VCRs, telephones, and even other appliances. Using the
SmartBattery specification, under ACPI the operating system takes command of battery
charging and monitoring. It also monitors the thermal operation of the system, reducing
speed or shutting down a PC that overheats.
The ACPI standard itself defines the interface for controlling device power and a means
of identifying hardware features. The interface uses a set of five hardware registers that
are controlled through a higher level application programming interface through the
operating system. The descriptive elements identify not only power management but also
device features through a nested set of tables. It supplements Plug-and-Play technology,
extending its existing structure with an architecture-independent implementation and
replacing the Plug-and-Play BIOS with a new ACPI BIOS.

Soft Off

The fundamental and most noticeable change made by ACPI is the power button on the
front of new PCs. In systems equipped to handle ACPI, this is a soft switch or set of two
switches. Although one of these switches may be labeled power and imply that it is an
on/off switch, in the ACPI scheme of things the power switch does not actually switch
the power to the system on and off. Rather, it sends a command to the system to shut
itself off—and off is not exactly what you think it is.
Using the front panel off button actually puts the PC in a new mode called soft off. In this
mode, the PC acts like you've shut it off and requires rebooting to restart. But it doesn't
remove all power from the system. A slight bit of power continues to be supplied to the
motherboard and expansion boards enabling them to monitor external events. For
example, a network board will still listen to network traffic for packets targeted at it. A
modem or fax board may lie in wait of a telephone call. Or you may set a time, for
instance, midnight, to start the tape backup system. When any of these designated
external events occurs, the PC automatically switches itself back on to deal with it.
ACPI envisions that some manufacturers will also put a sleep switch on the front panel.
Pressing it will put the PC in a sleep mode that uses somewhat more power than soft off
but allows the system to resume operation more quickly.

States

As with APM, the ACPI design works by shifting among modes called ACPI states. The
states differ substantially from those in APM. ACPI defines a greater variety of states in
four basic types: global, special sleep, microprocessor, and device (the last of these is
further subdivided). ACPI lets the operating system control all aspects of the power
consumption of a PC by shifting individual devices or the entire system among these
states.
The ACPI global states most closely correspond to the APM modes.
● G0 is the working state, in which the PC operates normally. Programs execute
actively. Even in the G0 state, however, some inactive devices may
automatically power down, but they will quickly resume normal operation when
they are called upon.

● G1 is the sleeping state, during which no computation visibly appears to be going on.
Various system events can cause the PC to return to the working state. In the ACPI
definition, G1 has many sub-states that are defined by the devices being managed.
The standard allows great flexibility in this state, particularly in how (and
how quickly) the system resumes normal operation. Within G1 are several special
sleeping states that trade off resume speed for reductions in power consumption.

● G2 is the new soft-off state.

● G3 is complete power off, equivalent to unplugging the PC.

The meaning of the ACPI device modes varies with the device type. The states differ in
four chief characteristics—the amount of power that the state saves over normal
operation; how long is required to restore the device from the state to normal operation;
how much of the operating context of the device is saved in entering the state; and what
must be done to return the device to normal operation. All four device states are
designated with names beginning with the letter "D" as follows:
● D0 designates the fully on state at which the device operates at top speed, is fully
responsive, and consumes the most power.

● D1 saves power over the D0 state. How it achieves that goal depends on the type of
device. In general, the device can quickly shift back to the D0 state without needing to
reset and without losing data.

● D2 further saves power over the D1 state, and is again device-specific. In general,
the device becomes less responsive. It may need to reset itself or go through its
power-on sequence to return to the D0 state.

● D3 corresponds to the power-off state. Electrical power is removed from the
device, and the device does not function. It must go through its power-on sequence
to begin operations again. Upon entering D3, none of its operating context gets
saved. This achieves the greatest power savings but requires the longest restoration
time.

Under ACPI, the microprocessor is a special device that has its own four operating states.
These include:
● C0 state designates the processor executing at full speed.

● C1 state puts the microprocessor in its halt state under command of the ACPI driver
without affecting other aspects of its operation.

● C2 state shifts the microprocessor to a low power state and maintains the integrity of
the system's memory caches. In a fully implemented ACPI system, the
microprocessor will shift to this state if a bus master takes control of the system.

● C3 state pushes the microprocessor down to a low power state and does not maintain
cache memory.
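The state families above lend themselves to a compact sketch. The enumerations below simply encode the lists from the text; the helper function and its wording are illustrative summaries, not part of the ACPI specification:

```python
from enum import Enum

class GlobalState(Enum):
    G0_WORKING = 0   # programs execute; idle devices may power down
    G1_SLEEPING = 1  # no visible computation; many device-defined sub-states
    G2_SOFT_OFF = 2  # appears off; rebooting is required to resume
    G3_MECH_OFF = 3  # equivalent to unplugging the PC

class DeviceState(Enum):
    D0 = 0  # fully on: most responsive, most power
    D1 = 1  # saves power; quick shift back to D0, no reset needed
    D2 = 2  # saves more power; may need a reset to return
    D3 = 3  # power removed; operating context lost

class CpuState(Enum):
    C0 = 0  # executing at full speed
    C1 = 1  # halted under command of the ACPI driver
    C2 = 2  # low power, memory caches kept intact
    C3 = 3  # low power, caches not maintained

def resume_cost(d: DeviceState) -> str:
    """Rough restoration cost for each device state, per the text."""
    return {
        DeviceState.D0: "none",
        DeviceState.D1: "quick shift back, no reset",
        DeviceState.D2: "may require reset or power-on sequence",
        DeviceState.D3: "full power-on sequence, longest restoration",
    }[d]
```

The hierarchy makes the design's tradeoff explicit: each numbered step saves more power at the price of a slower or more disruptive return to normal operation.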

Configuration

To handle its configuration function, ACPI must manage a tremendous amount of data
not only about the power needs and management capabilities of the system but also
describing the features available for all of the devices connected to the system. ACPI
stores this information in a hierarchy of tables.
The overall master table is called the Root System Description Table. It has no fixed
place in memory. Rather, the BIOS locates a pointer to the table during the memory
scan that's part of the bootup process. The Root System Description Table
itself is identified in memory because it starts with the signature "RSDT." Following the
signature is an array of pointers that tell the operating system the location of other
description tables that provide it with the information it needs about the standards defined
on the current system and individual devices.
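The lookup scheme can be illustrated with a toy parser. The byte layout below is a deliberately simplified stand-in for the real ACPI table format (which also carries length, checksum, and OEM identification fields); only the signature-then-pointers idea comes from the text:

```python
import struct

def find_rsdt(memory: bytes) -> int:
    """Scan memory for the table signature, much as the bootup scan does."""
    return memory.find(b"RSDT")

def parse_rsdt(memory: bytes, offset: int) -> list:
    """Toy parser for a simplified root table: an 'RSDT' signature,
    a 32-bit entry count, then that many 32-bit pointers to other
    description tables."""
    if memory[offset:offset + 4] != b"RSDT":
        raise ValueError("RSDT signature not found")
    (count,) = struct.unpack_from("<I", memory, offset + 4)
    return list(struct.unpack_from("<%dI" % count, memory, offset + 8))

# Build a fake memory image holding two description-table pointers.
image = bytearray(64)
image[16:20] = b"RSDT"
struct.pack_into("<I", image, 20, 2)              # two entries follow
struct.pack_into("<II", image, 24, 0x100, 0x200)  # their locations
pointers = parse_rsdt(bytes(image), find_rsdt(bytes(image)))
```

Each pointer recovered this way would lead the operating system to another description table, such as the Fixed ACPI Description Table discussed next.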

One of these tables is called the Fixed ACPI Description Table. In it the operating system
finds the base address of the registers used for controlling the power management
system. In addition, the Fixed ACPI Description Table also points to the Differentiated
System Description Table which provides variable information about the design of the
base system. Some of the entries in this table are Differentiated Definition Blocks, which
can contain data about a device or even a program that sets up other structures and define
new attributes. ACPI defines its own languages for programming these functions.

Energy*Star

Everything these days claims to meet the Energy*Star standard fostered by the federal
Environmental Protection Agency. The goal of the Energy*Star program is to encourage
manufacturers to create business equipment that minimizes power consumption. For
example, Green PCs earn their environmentally friendly sobriquet through efficient
power management that, in theory, helps preserve the earth's unrenewable resources. A
complete PC system (system unit, monitor, and printer) that meets the Energy*Star
requirements must use less power when idle than a standard light bulb.
Energy*Star is a certification program. Equipment conforming to the Energy*Star
standard must meet strict guidelines on power consumption. Manufacturers have a strong
incentive to embrace the Energy*Star standard and put the label on their products—some
businesses and many federal contracts require that new PC equipment meets the
Energy*Star standards.
All that said, the actual Energy*Star standards for PCs are quite simple. Energy*Star
version 2.0, which applies to products shipped after October 1, 1995, asks only that a PC
or monitor be able to switch to a low power mode that consumes less than 30 watts after
15 to 30 minutes of inactivity (a default you are allowed to adjust). Combination
monitor-and-PC units are allowed the full 60 watts. Printers able to generate seven or
fewer pages per minute must reduce their drain to 15 watts after 15 minutes; fourteen or
fewer ppm, 30 watts after 30 minutes; faster or high-end color printers, 45 watts after an
hour.
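The thresholds reduce to a small lookup. The device-class names in this sketch are invented labels for illustration; the watt and minute figures come from the text's summary of Energy*Star version 2.0:

```python
def energystar_sleep_limit(device: str, ppm: int = 0) -> tuple:
    """Return (watts, minutes of inactivity) for the low power mode
    required by Energy*Star 2.0, as summarized in the text."""
    if device in ("pc", "monitor"):
        return (30, 30)       # within 15 to 30 minutes (adjustable)
    if device == "combo":     # combined monitor-and-PC unit
        return (60, 30)
    if device == "printer":
        if ppm <= 7:
            return (15, 15)   # seven or fewer pages per minute
        if ppm <= 14:
            return (30, 30)   # fourteen or fewer pages per minute
        return (45, 60)       # faster or high-end color printers
    raise ValueError("unknown device class: " + device)
```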

Power Protection

Normal line voltage is often far from the 115 volt alternating current you pay for. It can
be a rather inhospitable mixture of aberrations like spikes and surges mixed with noise,
dips, and interruptions. None of these oddities is desirable, and some can be powerful
enough to cause errors to your data or damage to your computer. Although you cannot
avoid them, you can protect your PC against their ill effects.

Power Line Irregularities

Power line problems can be broadly classed in three basic categories: overvoltage,
undervoltage, and noise. Each problem has its own distinct causes and requires a
particular kind of protection.

Overvoltage

The deadliest power line pollution is overvoltage—lightning-like high potential spikes
that sneak into your PC and actually melt down its silicon circuitry. Often the damage is
invisible—except for the very visible lack of image on your monitor. Other times, you
can actually see charred remains inside your computer as a result of the overvoltage.
As its name implies, an overvoltage gushes more voltage into your PC than the
equipment can handle. In general—and in the long run—your utility supplies power that's
very close to the ideal, usually within about ten percent of its rated value. If it always
stayed within that range, the internal voltage regulation circuitry of your PC could take
its fluctuations in stride.
Short duration overvoltages larger than that may occur too quickly for your utility's
equipment to compensate, however. Moreover, many overvoltages are generated nearby,
possibly within your home or office, and your utility has no control over them. Brief
peaks as high as 25,000 volts have been measured on normal lines, usually due to nearby
lightning strikes. Lightning doesn't have to hit a power line to induce a voltage spike that
can damage your PC. When it does hit a wire, however, everything connected to that
circuit is likely to take on the characteristics of a flash bulb.
Overvoltages are usually divided into two classes by duration. Short-lived overvoltages
are called spikes or transients and last from a nanosecond (billionth of a second) to a
microsecond (one millionth of a second). Longer duration overvoltages are usually
termed surges and can stretch into milliseconds.
Sometimes power companies do make errors and send too much voltage down the line,
causing your lights to glow brighter and your PC to teeter close to disaster. The
occurrences are simply termed overvoltages.
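The duration-based divisions above can be expressed as a simple classifier. The exact boundary between a surge and a sustained overvoltage is an assumption here, since the text describes a continuum rather than a hard cutoff:

```python
def classify_overvoltage(duration_seconds: float) -> str:
    """Rough classification by duration, following the divisions in
    the text: spikes last a nanosecond to a microsecond, surges
    stretch into milliseconds, and anything longer is a sustained
    utility overvoltage. The 0.1-second boundary is an assumed value."""
    if duration_seconds <= 1e-6:
        return "spike (transient)"
    if duration_seconds <= 0.1:
        return "surge"
    return "sustained overvoltage"
```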
Most AC-power PCs are designed to withstand moderate overvoltages without damage.
Most machines tolerate brief surges in the range of 800 to 2,000 volts. On the other hand,
power cords and normal home and office electrical wiring break down (by arcing over
between the wiring conductors) at potentials between about 4,000 and 6,000 volts. In
other words, electrical wiring limits the maximum surge potential your PC is likely to
face to no more than about 6,000 volts. Higher voltage surges simply can't reach your
PC.
Besides intensity and energy, surges also differ in their mode. Modern electrical wiring
involves three conductors: a hot, neutral, and ground. Hot is the wire that carries the
power; neutral provides a return path; and ground provides protection. The ground lead is
ostensibly connected directly to the earth.

A surge can occur between any pairing of conductors: hot and neutral, hot and ground, or
neutral and ground. The first pairing is termed normal mode. It reflects a voltage
difference between the power conductors used by your PC. When a surge arises from a
voltage difference between hot or neutral and ground, it is called common mode.
Surges caused by utility switching and natural phenomena—for the most part
lightning—occur in the normal mode. They have to. The National Electrical Code
requires that the neutral lead and the ground lead be bonded together at the service
entrance (where utility power enters a building) as well as at the utility line transformer
typically hanging from a telephone pole near your home or office. At that point, neutral
and ground must have the same potential. Any external common mode surge becomes
normal mode.
Common mode surges can, however, originate within a building because long runs of
wire stretch between most outlets and the service entrance, and the resistance of the wire
allows the potential on the neutral wire to drift from that of ground. Although opinions
differ, recent European studies suggest that common mode surges are the most dangerous
to your equipment. (European wiring practice is more likely to result in common mode
surges because the bonding of neutral and ground is made only at the transformer.)

Undervoltage

An undervoltage occurs when your equipment gets less voltage than it expects.
Undervoltages can range from sags, which are dips of but a few volts, to complete
outages or blackouts. Durations vary from nearly instantaneous to hours—or days, if you
haven't paid your light bill recently.
Very short dips, sags, and even blackouts are not a problem. As long as they are less than
a few dozen milliseconds—about the blink of an eye—your computer should purr along
as if nothing happened. The only exceptions are a few old computers that have power
supplies with very sensitive Power Good signals. A short blackout may switch off the
Power Good signal, shutting down your computer even though enough electricity is
available. (See Appendix A, "PC History.")

Most PCs are designed to withstand prolonged voltage dips of about 20 percent without
shutting down. Deeper dips or blackouts lasting for more than those few milliseconds
result in shutdown. Your PC is forced to cold start, booting up afresh. Any work you
have not saved before the undervoltage is lost.

Noise

Noise is a nagging problem in the power supplies of most electronic devices. It comprises
all the spurious signals that wires pick up as they run through electromagnetic fields. In
many cases, these signals can sneak through the filtering circuitry of the power supply
and interfere with the signals inside the electrical device.


For example, the power cord of a tape recorder might act like an antenna and pick up a
strong radio signal. The broadcast could then sneak through the circuitry of the recorder
and mix with the music it is supposed to be playing. As a result, you might hear a CB
radio maven croaking over your Mozart.
In computers, these spurious signals could confuse the digital thought coursing through
the circuitry of the machine. As a practical matter, they don't. All better computers are
designed to minimize the leakage of their signals from inside their cases into the outside
world so that your computer won't interfere with your radio and television. The same
protection against signals getting out works extremely well against other signals getting
in. Personal computers are thus well-protected against line noise. You probably won't
need a noise filter to protect your computer.
Then again, noise filtering doesn't hurt. Most power protection devices have noise
filtering built into them because it's cheap, and it can be an extra selling point
(particularly to people who believe they need it). Think of it as a bonus. You can take
advantage of its added protection—but don't go out of your way to get it.

Overvoltage Protection

Surges are dangerous to your PC because the energy they contain can rush through
semiconductor circuits faster than the circuits can dissipate it—the silicon junctions of
your PC's integrated circuits fry in microseconds. Spike and surge protectors are designed
to prevent most short-duration, high intensity overvoltages from reaching your PC. They
absorb excess voltages before they can travel down the power line and into your
computer's power supply. Surge suppressors are typically connected between the various
conductors of the wiring leading to your PC. They work by conducting electricity only
when the voltage across their leads exceeds a certain level; that is, they conduct and short
out the excess voltage in spikes and surges before it can pop into your PC. The voltage at
which the device starts conducting and clipping spikes and surges is termed its clamping
voltage.
The most important characteristics of overvoltage protection devices are how fast they
work and how much energy they can dissipate. Generally, a faster response time or
clamping speed is better. Response times can be as short as picoseconds—trillionths of a
second. The larger the energy handling capacity of a protection device, the better. Energy
handling capacities are measured in watt-seconds or joules. Devices claiming the
capability to handle millions of watts are not unusual.
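Comparing two protectors on these figures of merit is mechanical, as the sketch below shows. The sample ratings are invented for illustration and are not taken from any product:

```python
def better_suppressor(a, b):
    """Compare two (clamping_speed_seconds, energy_joules) ratings.
    Faster response (smaller first number) and greater energy capacity
    (larger second number) are both better. Returns the spec that wins
    on both counts, or None when the two trade off against each other."""
    a_faster = a[0] <= b[0]
    a_stronger = a[1] >= b[1]
    if a_faster and a_stronger:
        return a
    if not a_faster and not a_stronger:
        return b
    return None

fast_small = (1e-12, 600)     # picosecond response, modest capacity
slow_big = (1e-9, 1_000_000)  # nanosecond response, huge capacity
```

The two hypothetical devices above illustrate why commercial products combine several technologies: neither rating dominates the other.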
Four kinds of devices are most often used to protect against surges: Metal Oxide
Varistors (MOVs), gas tubes, avalanche diodes, and reactive circuits. Each has its own
strengths and weaknesses. Typically, commercial surge protectors use several
technologies in combination.

Metal Oxide Varistors

The most popular surge protection devices are based on Metal Oxide Varistors or MOVs,
disc-shaped electronic components typically made from a layer of zinc oxide particles
held between two electrodes. The granular zinc oxide offers a high resistance to the flow
of electricity until the voltage reaches a breakover point. The electrical current then
forms a low resistance path between the zinc oxide particles that shorts out the electrical
flow.
MOVs are the most popular surge protection component because they are inexpensive to
manufacture and easy to tailor to a particular application. Their energy-handling
capability can be increased simply by enlarging the device (typical MOVs are about an
inch in diameter; high power MOVs may be twice that). Figure 24.5 shows a typical
MOV.
Figure 24.5 A metal oxide varistor.

The downside to MOVs is that they degrade. Surges tend to form preferred paths
between the zinc oxide particles, reducing the resistance to electrical flow. Eventually,
the MOV shorts out, blowing a fuse or (more likely) overheating until it
destroys itself. The MOV can end its life in flames or with no external change at
all—except that it no longer offers surge protection.

Gas Tubes

Gas tubes are self-descriptive: tubes filled with special gases with low dielectric potential
designed to arc over at predictable low voltages. The internal arc short circuits the surge.
Gas tubes can conduct a great deal of power—thousands of kilowatts—and react quickly,
typically in about a nanosecond.
On the negative side, a gas tube does not start conducting (and suppressing a surge) until
the voltage applied to it reaches two to four times the tube's rating. The tube itself does not
dissipate the energy of the surge; it just shorts it out, allowing your wiring to absorb the
energy. Moreover, the discharge voltage of a gas tube can be affected by ambient lighting
(hence most manufacturers shield them from light).
Worst of all, when a gas tube starts conducting, it doesn't like to stop. Typically, a gas
tube requires a reversal of current flow to quench its internal arc, which means that the
power going to your PC could be shorted for up to 8.33 milliseconds. Sometimes gas
tubes continue to conduct for several AC current cycles, perhaps long enough for your
PC power supply to shut down. (Many PC power supplies switch off when power
interruptions exceed about 18 milliseconds.)

Avalanche Diodes

Avalanche diodes are semiconductor circuits similar to zener diodes that offer a high
resistance to electrical flow until the voltage applied to them reaches a breakover
potential. At that point, they switch on and act as conductors to short out the applied
current. Avalanche diodes operate more quickly than other protection devices, but have
limited energy capacity, typically from 600 to 1,500 watts.

Reactive Circuits

While MOVs, gas tubes, and avalanche diodes share the same operating
principle—shorting out the surge before it gets to your PC—the reactive surge suppressor
is different. The typical reactive surge suppressor uses a large inductance to resist the
sharp voltage rise of a surge and spread it out over a longer time. Adding a capacitor
tunes the reactance so that it can convert the surge into a semblance of a normal AC
waveform. Other noise on the power line is also automatically absorbed.
Unfortunately, this form of reactive network has severe drawbacks. It doesn't eliminate
the surge—only spreads out its energy. The size of the inductor determines the spread,
and a large inductor is required for effective results. In addition, the device only works on
normal mode surges. The reactance also can cause a common mode surge in the wiring
leading to the device by raising the neutral line above ground potential.
Most commercial surge suppressors combine several of these technologies along with
noise reduction circuitry, and better surge suppressors arrange them in multiple stages,
isolated by inductors, to prolong life and improve response time. Heavy-duty components
such as gas tubes or large MOVs form the first stage and absorb the brunt of the surge. A
second stage with tighter control (more MOVs or avalanche diodes) knocks the surge
voltage down farther.
Thanks to the laws of thermodynamics, the excess energy in a surge cannot just
disappear; it can only change form. With most surge suppression technologies (all except
reactive devices), the overvoltage is converted into heat dissipated by the wiring between
the device and the origin of the surge as well as inside the surge suppressor itself. The
power in a large surge can destroy a surge suppressor so that it yields up its life to protect
your PC.
Because they degrade cumulatively with every surge they absorb, MOVs are particularly
prone to failure as they age. Eventually, an MOV will fail, sometimes in its own
lightning-like burst. Although this failure is unlikely to electrically damage the circuits of
your computer, it can cause a fire—which can damage not just your PC, but your home,
office, or self. Some manufacturers (for example, IBM) forego putting MOVs in their
power supplies to preclude the potential for fire, which they see as less desirable than a
PC failure.
An MOV-based surge suppressor also can fail more subtly—it just stops sucking up
surges. Unbeknownst to you, your PC can be left unprotected. Many commercial surge
suppressors have indicators designed to reveal the failure of an internal MOV.
In any case, a good strategy is to replace MOV-based surge suppressors periodically to
ensure that they do their job and to lessen the likelihood of their failure. How often to
replace them depends on how dirty an electrical diet you feed them. Every few years is
generally a sufficient replacement interval.

Undervoltage Protection

Three devices help your computer deal with undervoltages. Voltage regulators keep
varying voltages within the range that runs your PC, but offer no protection against steep
sags or blackouts. The standby power system and uninterruptible power system (or UPS)
fight against blackouts.
Voltage regulators are the same devices your utility uses to try to keep the voltage it
supplies at a constant level. These giant regulators consist of large transformers with a
number of taps or windings—outputs set at different voltage levels. Motors connected to
the regulators move switches that select the taps that supply the voltage most nearly
approximating normal line voltage. These mechanical regulators are gargantuan devices.
Even the smallest of them is probably big enough to handle an entire office. In addition,
they are inherently slow on the electrical time scale, and they may allow voltage dips
long enough for data to be lost.
Solid state voltage regulators use semiconductors to compensate for line voltage
variations. They work much like the power supply inside your computer, but can
compensate over a wider range.
The saturable reactor regulator applies a DC control current to an extra control coil on the
transformer, enough to "saturate" the transformer core. When saturation is achieved, no
additional power can pass through the transformer. Regulating the DC control current
adjusts the output of the transformer. These devices are inherently inefficient because
they must throw away power throughout their entire regulating range.
Ferroresonant transformer regulators are "tuned" into saturation much the same as a radio
is tuned—using a capacitor in conjunction with an extra winding. This tuning makes the
transformer naturally resist any change in the voltage or frequency of its output. In effect,
it becomes a big box of electrical inertia that not only regulates, but also suppresses
voltage spikes and reduces line noise.
The measure of quality of a voltage regulator is its regulation, which specifies how close
to the desired voltage the regulator maintains its output. Regulation is usually expressed
as the output variation for a given change in input. The input range of a regulator
indicates how wide a voltage variation the regulator can compensate for. This range
should exceed whatever variations in voltage you expect to occur at your electrical
outlets.

Blackout Protection

Both standby and uninterruptible power systems provide blackout protection in the same
manner. They are built around powerful batteries that store substantial current. An
inverter converts the direct current from the batteries into alternating current that can be
used by your computer. A battery charger built into the system keeps the reserve power
supply fully charged at all times.

Because they are so similar, the term UPS is often improperly used to describe both
standby and uninterruptible power systems. They differ in one fundamental
characteristic: the electricity provided by a standby power system is briefly interrupted in
the period during which the device switches from utility power to its own internal
reserves. An uninterruptible power system, as its name indicates, avoids any interruption
to the electricity supplied to the device it protects. If your PC is sensitive to very short
interruptions in its supply of electricity, this difference is critical.

Standby Power Systems

As the name implies, the standby power system constantly stands by, waiting for the
power to fail so that it can leap into action. Under normal conditions—that is, when
utility power is available—its battery charger draws only a slight current to keep its
source of emergency energy topped off. The AC power line from which the standby
supply feeds is directly connected to its output, and thence to the computer. The batteries
are out of the loop.
When the power fails, the standby supply switches into action—switch being the key
word. The current-carrying wires inside the standby power supply that lead to the
computer are physically switched from the utility line to the current coming from the
battery-powered inverter.
The switching process requires a small but measurable amount of time. First, the failure
of the electrical supply must be sensed. Even the fastest electronic voltage sensors take a
finite time to detect a power failure. Even after a power failure is detected, another slight
pause occurs before the computer receives its fresh supply of electricity while the
switching action itself takes place. Most standby power systems switch quickly enough
that the computer never notices the lapse. A few particularly unfavorable combinations of
standby power systems and computers, however, may result in the computer shutting
down during the switch.
Most standby power systems available today switch within one-half of one cycle of the
AC current they are supplied—that's less than ten milliseconds, quick enough to keep
nearly all PCs running as if no interruption occurred. Although the standby power system
design does not protect against spikes and surges, most SPSes have other protection
devices installed in their circuitry to ensure that your PC gets clean power.

Uninterruptible Power Systems

Traditionally, an uninterruptible power system supplied uninterrupted power because its
output did not need to switch from line power to battery. Rather, its battery was
constantly and continuously connected to the output of the system through its inverter.
This kind of UPS always supplied power from the batteries to the computer. The
computer was thus completely isolated from the vagaries of the AC electrical line. New
UPS designs are more like standby systems, but use clever engineering to bridge over
even the briefest switching lulls. They, too, deliver a truly uninterrupted stream of power,
but can be manufactured for a fraction of the cost of the traditional design.
In an older UPS, the batteries are kept from discharging from the constant current drain
of powering your computer by a large built-in charger. When the power fails, the charger
stops charging, but the battery—without making the switch—keeps the electricity
flowing to the connected computer. In effect, this kind of UPS is the computer's own
generating station only inches away from the machine it serves, keeping it safe from the
polluting effects of lightning and load transients. Dips and surges can never reach the
computer. Instead, the computer gets a genuinely smooth, constant electrical supply
exactly like the one for which it was designed.
Newer UPSes connect both the input power and the output of their inverters through a
special transformer, which is then connected to your PC or other equipment to be
protected. Although utility power is available, this kind of UPS supplies it through the
transformer to your PC. When the utility power fails, the inverter kicks in, typically
within half a cycle. The inductance of the transformer, however, acts as a storage system
and supplies the missing half-cycle of electricity during the switchover period.
The traditional style of UPS provides an extreme measure of surge and spike protection
(as well as eliminating sags) because no direct connection bridges the power line and the
protected equipment—spikes and their kin have no pathway to sneak in. Although the
transformer in the new style of UPS absorbs many power line irregularities, overall it
does not afford the same degree of protection. Consequently, these newer devices usually
have other protection devices (such as MOVs) built in.

Specifications

The most important specification to investigate before purchasing any backup power
device is its capacity as measured in volt-amperes (VA) or watts. This number should
always be greater than the rating of the equipment to which the backup device is to be
connected.
In alternating current (AC) systems, watts do not necessarily equal the product of volts
and amperes (as they should by the definition that applies in DC systems) because the
voltage and current can be out of phase with one another. That is, when the voltage is at a
maximum, the current in the circuit can be at an intermediate value. So the peak values
of voltage and amperage may occur at different times.
Power requires both voltage and current simultaneously. Consequently, the product of
voltage and current (amperage) in an AC circuit is often higher than the actual power in
the circuit. The ratio between these two values is called the power factor of the system.
What all this means to you is that volt-amperes and watts are not the same thing. Most
backup power systems are rated in VA because it is a higher figure thanks to the power
factor. You must make sure that the total VA used by your computer equipment is less
than the VA available from the backup power system. Alternatively, you must make sure
that the wattage used by your equipment is less than the wattage available from the
backup power system. Don't indiscriminately mix the VA and watts in making
comparisons.
To convert a VA rating to a watt rating, multiply the VA by the power factor of the
backup power supply. To go the other way—watts to VA—divide the wattage rating of
the backup power system by its power factor. (You can do the same thing with the
equipment you want to plug into the power supply, but you may have a difficult time
discovering the power factor of each piece of equipment. For PCs, a safe value to assume
is 2/3.)
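As an illustrative sketch of these conversions (assuming the 2/3 power factor suggested above; real equipment varies), the arithmetic looks like this:

```python
def va_to_watts(va, power_factor=2/3):
    """Derate a volt-ampere rating to watts using the supply's power factor."""
    return va * power_factor

def watts_to_va(watts, power_factor=2/3):
    """Convert a watt rating to volt-amperes by dividing by the power factor."""
    return watts / power_factor

# A 300 VA backup supply with a 2/3 power factor delivers about 200 watts.
print(round(va_to_watts(300)))  # 200
print(round(watts_to_va(200)))  # 300
```

Note how the VA figure is always the larger of the two, which is why backup power makers prefer to quote it.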
Standby and uninterruptible power systems also are rated as to how long they can supply
battery power. This equates to the total energy (the product of power and time) that they
store. Such time ratings vary with the VA the backup device must supply—because of
finite battery reserves, it can supply greater currents only for shorter periods. Most
manufacturers rate their backup systems for a given number of minutes of operation with
a load of a particular size instead of in more scientific fashion using units of energy. For
example, a backup system may be rated to run a 250 volt-ampere load for 20 minutes.
If you want an idea of the maximum possible time a given backup supply can carry your
system, check the ratings of the batteries it uses. Most batteries are rated in
ampere-hours, which describes how much current they can deliver for how long. To
convert that rating to a genuine energy rating, multiply it by the nominal battery voltage.
For example, a 12-volt, 6 amp-hour battery could, in theory, produce 72 watt-hours of
electricity. That figure is theoretical rather than realistic because the circuitry that
converts the battery DC to AC wastes some of the power and because ratings are only
nominal for new batteries. However, the numbers you derive give you a limit. If you
have only 72 watt-hours of battery, you can't expect the system to run your 250 VA PC
for an hour. At most, you could expect 17 minutes; realistically, you might expect 12 to
15.
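The battery arithmetic above can be sketched as follows. This gives only the theoretical ceiling; as the text notes, inverter losses and battery age make real runtimes shorter:

```python
def battery_energy_wh(volts, amp_hours):
    """Theoretical energy stored in a battery, in watt-hours."""
    return volts * amp_hours

def max_runtime_minutes(energy_wh, load_va):
    """Theoretical upper bound on runtime for a given load, in minutes."""
    return energy_wh / load_va * 60

# The example from the text: a 12-volt, 6 amp-hour battery carrying a 250 VA PC.
energy = battery_energy_wh(12, 6)
print(energy)                                   # 72 watt-hours
print(round(max_runtime_minutes(energy, 250)))  # 17 minutes at most
```

Realistically, you would scale the result down by the inverter's efficiency to land in the 12 to 15 minute range the text predicts.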
You probably will not need much time from a backup power system, however. In most
cases, five minutes or less of backup time is sufficient because the point of a backup
supply is not to keep a system running forever. Instead, the backup power system is
designed to give you a chance to shut down your computer without losing your work.
Shutting down shouldn't take more than a minute or two.
UPS makers warn that no matter the rating of your UPS, you should never plug a laser
printer into it. The fusers in laser printers are about as power hungry as toasters—both
are resistive heaters. The peak power demand when the fuser switches on can overload
even larger UPSs and the continuing need for current can quickly drain batteries.
Moreover, there's no need to keep a print job running during a power failure. Even if you
lose a page, you can reprint it when the power comes back at far less expense than the
cost of additional UPS power capable of handling the laser's needs. Some printers, such
as inkjets, are friendlier to UPSs and can safely be connected, but you'll still be wasting
capacity. The best strategy is to connect only your PC, your monitor, and any external
disk drives to the UPS. Plug the rest of your equipment into a surge suppressor.
To handle such situations, many UPSs have both battery-protected outlets and outlets
with only surge protection. Be sure to check which outlets you use with your equipment,
making sure your PC has battery-backed protection.

Waveform

Different backup power systems also vary as to their output waveform. The perfect
waveform is one that matches what the utility company makes—sine wave (or sinusoidal)
power in which the voltage and current smoothly alternate between polarities 120 times
a second (a frequency of 60 Hz). Although the most desirable kind of power, smooth sine
waves are difficult to generate. Electronic circuits such as those in a backup power
system more easily create square waves, which abruptly switch between polarities. The
compromise between the two—called modified square waves or modified sine waves
(depending on who's doing the talking)—approximates the power factor of sine waves by
modifying the duty cycle of square waves or stepping between two or more voltage
levels in each power cycle. Figure 24.6 shows the shapes of these different wave forms.
Figure 24.6. Power supply wave forms.
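To make the waveform comparison concrete, here is an illustrative sketch (the sample count and the 170-volt peak, which approximates 120-volt RMS utility power, are assumptions, not figures from the text) that samples one cycle of a sine wave and a square wave at the same peak and compares their RMS voltages:

```python
import math

def rms(samples):
    """Root-mean-square value of a sampled waveform."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

n = 1000
peak = 170  # approximate peak of 120-volt RMS utility power
sine = [peak * math.sin(2 * math.pi * i / n) for i in range(n)]
square = [peak if i < n // 2 else -peak for i in range(n)]

print(round(rms(sine)))    # 120
print(round(rms(square)))  # 170
```

At the same peak, the square wave carries a higher RMS voltage, which is why a modified square wave narrows its duty cycle or steps between voltage levels to bring its effective output closer to that of sine wave power.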

Considerable debate surrounds the issue of whether sine or square waves are better for
your equipment. One manufacturer, Compaq, has even gone so far as to recommend
against the use of square wave power with its computers.
In truth, however, most waveform arguments are irrelevant for PC backup power
systems. Although a backup power system should produce square waves most efficiently,
commercial products show little correspondence between efficiency and output
waveform. On the other hand, square waves are richer in harmonics that can leak into
sensitive circuits as noise. But the filters in all PC power supplies effectively eliminate
power line-related noise.
Perhaps the biggest shortcoming attributed to square waves is that they can cause
transformers to overheat. All PC power supplies, however, use high speed switching
technology, which breaks the incoming waveform into a series of sharp pulses regardless
of whether it is made from sine or square waves. Most monitors also use switching power
supplies. Only linear power supplies, now rare in electronic equipment, may be prone to
overheating from square waves. Moreover, standby power systems, the inverters of
which are designed to operate your equipment for less than 30 minutes, do not provide
local power long enough to create a severe overheating problem.

Interfaces

An ordinary UPS works effectively if you're sitting at your PC and the power fails. You
can quickly assess whether it looks like the blackout will be short or long (if the world is
blowing away outside your window, you can be pretty sure any outage will be
prolonged). You can save your work, haul down your operating system, and shut off your
PC at your leisure. When a PC is connected to a network or is running unattended,
however, problems can arise.
During a prolonged outage a simple UPS only prolongs a disaster with an untended
PC—it runs another dozen minutes or so while the power is off, then the UPS runs out of
juice and the PC plummets with it. Of course, if a server crashes without warning, no one
is happy, particularly if a number of files were in the queue to be saved.
To avoid these problems, better UPSes include interfaces that let them link to your PC,
usually through a serial port. Install an appropriate driver, supplied by the UPS maker,
and your PC can monitor the condition of your power line. When the power goes off, the
software can send messages down the network warning individual users to save their
work. Then, the UPS software can initiate an orderly shutdown of the network.
Some UPSes will continue to run even after your network or PC has shut itself down.
Better units have an additional feature termed inverter shutdown that automatically
switches off the UPS after your network shuts down. This preserves some charge in the
batteries of the UPS so that it can still offer protection if you put your PC or network
back online and another power failure follows shortly thereafter. A fully discharged UPS,
on the other hand, might not be ready to take the load for several hours.

Battery Life

The gelled electrolyte batteries most commonly used in uninterruptible power systems
have a finite life. The materials from which they are made gradually deteriorate and the
overall system loses its ability to store electricity. After several years, a gelled electrolyte
battery will no longer be able to operate a UPS even for a short period. The UPS then
becomes non-functional. The only way to revive the UPS is to replace the batteries.
Battery failure in a UPS usually comes as a big surprise. The power goes off and your PC
goes with it, notwithstanding your investment in the UPS. The characteristics of the
batteries themselves almost guarantee this surprise. Gelled electrolyte batteries gradually
lose their storage capacity over a period of years, typically between three and five. Then,
suddenly, their capacity plummets. They can lose nearly their total storage ability in a
few weeks. Figure 24.7 illustrates this characteristic of typical gelled electrolyte batteries.
Figure 24.7. UPS battery capacity slowly deteriorates, then suddenly plummets.

Note that the deterioration of gelled electrolyte batteries occurs whether or not they are
repeatedly discharged. They deteriorate even when not used, although repeated heavy
discharges will further shorten their lives.
To guard against the surprise of total battery failure, better UPSes incorporate an
automatic testing mechanism that periodically checks battery capacity. A battery failure
indication from such a UPS should not be taken lightly.

Phone Line Protection

Spikes and surges affect more than just power lines. Any wire that ventures outside holds
the potential for attracting lightning. Any long wiring run is susceptible to induced
voltages including noise and surges. These overvoltages can be transmitted directly into
your PC or its peripherals and cause the same damage as a power line surge.
The good news is that several important wiring systems incorporate their own power
protection. For example, Ethernet systems (both coaxial and twisted pair) have sufficient
surge protection for their intended application. Apple LocalTalk adapters are designed to
withstand surges of 2,000 volts with no damage. Because they are not electrical at all,
fiber optic connections are completely immune to power surges.
The bad news is that two common kinds of computer wiring have no innate protection
against surges. Telephone wiring runs long distances through the same environments as the
power distribution system and is consequently susceptible to the same problems. In
particular, powerful surges generated by direct lightning hits or induction can travel
along telephone wiring, through your modem, and into the circuitry of your PC. In
addition, ordinary serial port circuitry includes no innate surge suppression. A long
unshielded serial cable can pick up surges from other cables by induction.
The best protection is avoidance. Keep unshielded serial cable runs short whenever
possible. If you must use a long serial connection, use shielded cable. Better still, break
up the run with a short haul modem, which will also increase the potential speed of the
connection.
Modem connections with the outside world are unavoidable in these days of on-line
connectivity and the Internet. You can, however, protect against phone line surges using
special suppressors designed exactly for that purpose. Better power protection devices
also have modem connections that provide the necessary safeguards. Standalone
telephone surge suppressors are also available. They use the same technologies as power
line surge suppressors. Indeed, the voltage that rings your telephone is nearly the same as
the 110-120 volt utility power used in the United States. Most phone line suppressors are
based on MOV devices. Better units combine MOVs with capacitors, inductors, and
fuses.

Winn L. Rosch Hardware Bible, Electronic Edition, Chapter 25

Chapter 25: Cases


What holds your whole PC together is its case, but a case is more than a mere box. The
case provides secure mountings for circuit boards and mass storage devices. It protects
delicate circuitry from all the evils of the outside world—both mechanical and
electrical—and it protects the world and you from what's inside the PC—both
interference and dangerous voltages. Cases come in various sizes, shapes, and degrees of
effectiveness at their protective tasks to match your PC and the way you plan to use it.

■ Physical Construction
■ Mechanical Matters
■ XT Size
■ AT Size
■ Mini-AT Size
■ ATX Cases
■ Small Footprint PCs
■ Tower Style Cases
■ Notebook Packaging
■ Motherboard Mounting
■ Drive Mounting
■ Form Factors
■ Device Heights
■ Drive Installation
■ Direct Mounting
■ Rail Mounting
■ Tray Mounting
■ Cooling
■ Passive Convection
■ Active Cooling

http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/wrh25.htm (1 de 25) [23/06/2000 07:03:49 p.m.]
■ Advanced Cooling
■ Fan Failure
■ Radiation
■ Radio Frequency Interference
■ Minimizing Interference
■ Health Concerns

25

Cases

The case is the physical embodiment of your PC. In fact, the case is the body of your PC.
It's a housing, vessel, and shield that provides the delicate electronics of the computer a
secure environment in which to work. It protects against physical dangers—forces that
might act against its circuit boards, bending, stressing, even breaking them with
deleterious results to their operation. It also prevents electrical short circuits that may be
caused by the in-fall of the foreign objects that typically inhabit the office—paper clips,
staples, letter openers, beer cans, and errant bridgework. The case also guards against
invisible dangers, principally strong electrical fields that could induce noise that would
interfere with the data handling of your system, potentially inducing errors that would
crash your system.
The protective shield of the case works both ways. It also keeps what's inside your PC
inside your PC. Among the wonders of the workings of a computer, two in particular
pose problems for the outside world. The electrical voltages inside the PC can be a
shocking discovery if you accidentally encounter them. And the high frequency electrical
signals that course through the computer's circuits can radiate like radio broadcasts and
interfere with the reception of other transmissions—which includes everything from
television to aircraft navigational beacons.
Your PC's case also has a more mundane role. Its physical presence gives you a place to
put the things that you want to connect to your computer. Drive bays allow you to put
mass storage devices within ready reach of your PC's logic circuits while affording the
case's protection to your peripherals. In addition, your PC's case gives your expansion
boards a secure mounting—the expansion slot—and provides them the same mechanical
and electrical shelter as the rest of the system.
The case can play a more mundane role, too. It also can serve as the world's most
expensive monitor stand, raising your screen to an appropriate viewing height, elevated
above the clutter and confusion of your desktop.
Compounding the function of your computer's case is the need to be selective. Some of
what's inside your PC needs to get out—heat, for instance. And some of what's outside
needs to get in—such as signals from the keyboard and power from your electrical
outlets. In addition, the computer case must form a solid foundation upon which your
system can be built. It must give disk drives a firm base and hold electrical assemblies
out of harm's way. Overall, the simple case may not be as simple as you think.

Physical Construction

When building a PC, two issues are paramount: how the case is put together and how you
put together a PC inside it. In the first case, how the case gets constructed—whether with
screws, rivets, or welds—is not so important as the size of the finished product. The size
of the case determines what it can hold, which in turn limits how much you can expand
the system to add all the features you want. How components install inside the case affects
how much you will want to expand your system. If sliding in an expansion board or just
getting at a drive bay to install a new disk makes you think the system was designed by
someone in league with the devil (or another employee of Microsoft), you're not going to
look on expansion with much favor.
In real life, as opposed to the realm of paranoia, the installation of a single disk drive can
be daunting, particularly if you don't know the methodology employed by the case
designer. Always be suspicious if your instruction manual starts by saying, "First make a
pentangle on the floor...."
To help you gain familiarity with the issues of case selection and expansion, let's first
look at the size and layout of different case designs and the effects of these design
choices on expandability. Then we'll examine the actual installation of computer
components.

Mechanical Matters

The obvious function of the case is mechanical—you can see it and touch it as a distinct
object. And it steals part of your desktop, floor, or lap when you put it to work. It has a
definite size—always too small when you want to add one more thing but too large when
you need to find a place to put it (and particularly when that place happens to be inside
your carry-on luggage). The case also has a shape, which may be functional to allow you
the best access to all those computer accouterments, like the slot into which you shove
your backup tapes. But shape and color also are part of your PC's style, which can set one
system apart from the boring sameness of its computer kin.
Modern PCs have wings, waves, and flares—all only aesthetics calculated to make a
given manufacturer's machines stand out among the masses, style to make you think it is
more modern and capable. The features, like some of the more interesting paint shades
with which some manufacturers have started experimenting, are design gimmicks, the tail
fins of the Nineties. There's nothing wrong with a PC that looks like you've stolen it from
the deck of an aircraft carrier, but there's nothing inherently better about it, either.
Beneath the plastic you'll find that same basic mechanical design and construction of
systems built a decade ago.
In computers, form dictates function as much as (if not more than) it does for any other type
of office equipment. Computers have the shape they have so that they can hold what you
want to put inside them—primarily all those expansion options that give your machine
power and ability. A case has to be large enough to accommodate the expansion boards you
want to plug in as well as provide adequate space for all the floppy disk, hard disk,
optical, and tape drives your PC and life would not be complete without.
The sizes of both boards and drives are pre-ordained—set once long ago and forever
invariant. The case must be designed around the needs of each. But creating a case is
more than a matter of allocating box-like space for options. The case also has to provide
a place for such mandatory system components as power supplies and loudspeakers. In
addition, everything must be arranged to allow air to freely flow around to bring your
PC's circuitry and peripherals a breath of cooling fresh air.
Not all computer manufacturers, however, give that much thought to the cases into which
they pack their products. Many smaller manufacturers simply select a case from some
other manufacturer that specializes in molding plastic and bending steel. Thanks to a
mixture of the forethought of the case maker and dumb luck, these amalgamations work
out and everyone is happy—the case maker who made the original sale, the computer
maker who gets a cheap box to slide his equally cheap electrical works into, and you,
who get a deal on the system you buy.
Nevertheless, you still have options open to you when you select a PC or buy a case into
which to put your own computer creation. To make the right decision—and to ensure that
your computer as a whole suits your needs and continues to do so for a long life—you
need to know your options and all of their ramifications.

XT Size

The place to begin a discussion of cases is with the first case design, the metal package
that surrounded the original IBM PC and its younger cousin, the XT. These set the
pattern for all machines to come. Although dimensions may vary, the majority of desktop
computers still abide by some variation of the PC/XT layout. Moreover, even more than
a decade and a half later, the package is still current. You can buy ready built computers
in this exact size case or find empty shells widely available on the parts market. It has
become an industry standard called XT size, after IBM's most popular product in this
package, notwithstanding that the last product to use this exact case design (and the XT
designation) rolled off the assembly line a decade ago.
Although superficially identical, the cases of IBM's PC and XT computers were different,
reflecting their difference in expansion slot spacing and number (one-inch spacing of five
slots for the PC; 0.8-inch spacing and eight slots for the XT). Squeezing in the extra slots
drew its own penalty. Because the space used by the slots overlapped that of the drive
bays, two slots could not extend the full length of the case. These short slots would
accept expansion boards no longer than seven inches. Despite this shortfall in slot length,
all of the cases of current PCs and those sold in the replacement market follow the XT
pattern. Moreover, short cards have become a standard design, now favored not because
they fit (which can be a good reason in itself) but because they use less material and are
thus less costly for a manufacturer to make.
The success of the XT design is a case of success breeding success. Not that IBM's
designers had any extraordinary insights and somehow managed to create the "one true
computer case." The original was more pragmatic, its fundamental layout dictated by
function and fit. It had space at the front for two disk drives of the then
current "miniature" format (full height 5.25-inch drives), a power supply tucked behind
the drives, and a system board lining the rest of the case to the left—its size determined
by the amount of circuitry required to build a computer. The height of the case was set by
the needs of the full height drives and the expansion board format.
Mix in the competing need to keep things compact, and the result was functional, if not
inspired. The footprint of the XT case (the amount of desk space it needed) measures 21
inches wide by 17 inches deep. Its height, to allow for expansion boards and
three-eighths-inch feet underneath for airflow (just enough so you could lose a
pencil—maybe the designers did have some kind of inspiration!), measures 5.5 inches.
Fabrication was designed to be easy—and cheap, as suited a machine of unknown
destiny. The bottom of the case formed the computer's chassis, the frame or foundation to
which all the important mechanical components of the system are bolted. The top was
simpler still—a flat piece of steel with the sides rolled down to a bottom lip. With this
design, the chassis provided a full steel bottom, front, and back. The lid provided the
top, left, and right sides, giving the PC built into the case steely protection and
interference prevention on all sides. A molded plastic front panel added a decorative
touch.
The one part of the XT case that has undergone considerable modification is drive
mounting. Today's smaller drives require bays with different mounting provisions. The
original XT design was meant only for wide, full height drives. Using anything else
requires one or more adapters of some kind. Such considerations would be pleasantly
behind us had not some case makers until recently incorporated such a mounting scheme
in new products.
If you have such a case with old style mounting and you want to recycle it with more
modern drives, you'll need an adapter. Although adapters are available commercially,
you can fabricate them yourself (you need two per drive bay) from flat metal stock. Your
objective is simple: to make plates to which you can mount two drives, one atop the
other, so they take the same space as one taller drive. Figure 25.1 illustrates a common
design for such adapters.
Figure 25.1. Half height drive adapter plate.

Today most XT-size cases provide a variety of drive mounting provisions. If you look at
ads for new or replacement cases, you'll see a list of the form factors of the bays in each
case, usually listed as the number of 3.5-inch and 5.25-inch drives that will fit inside.
Note that these numbers usually apply only to one inch tall 3.5-inch drives and half
height 5.25-inch drives. Case makers will also list bays as internal or external. Internal
bays are suitable only for hard disk drives. External bays can be used for any drive type
that fits—hard disk, floppy, CD, or tape. It's only a matter of access. You have to change
media on all drive types except hard disk drives.

AT Size

Until the last few years, the most popular case design was the one IBM originally created
for its Personal Computer AT way back in 1984. Its principal goal was to make up for
one of the drawbacks of the XT-size case: size. The power of PCs grew faster than could
be accommodated within the confines of the XT case. Even today, a fully expanded PC
may demand more space than the XT case can accommodate. The need for room to roam
arises particularly with each new generation of microprocessor—when a new design is
introduced, support circuits usually haven't had enough development time to be reduced
to a single Application-Specific Integrated Circuit (ASIC). The motherboard needs a lot
of real estate for circuit layout, more than can be crammed into the XT footprint even
after liberally coating the motherboard with grease and persuading it to fit by gently
bending, folding, stapling, and mutilating.
To accommodate the larger system board required by the AT's then advanced design, the
system unit was broadened by two inches (to 23 by 17 inches) and its height was
increased by nearly an inch, allowing both taller expansion boards and accommodations
for a stack of three half height devices in the mass storage bays. A taller power supply
with greater reserves was also fitted. To accommodate the large system board, however,
the base of the power supply had to be cut away. The foundation of the AT design was
basically a continuation of previous PC designs—a chassis with a matching lid, both
fabricated from steel, and a decorative plastic front panel (and an easily removable
decorative plastic panel to fit over the rear of the machine).
As with earlier case designs, two side by side mass storage bays were installed, but the
inboard (left) bay was reserved for hard disk drives that required no access to their front
panels for media changes. Because the large system board required space for its circuits
under the drive bay, this inboard bay had to end more than an inch above the bottom of
the case, restricting it to a single full height device while the outboard (right) bays
provided space for three half height drives, two of which had front panel access.
Drive mounting was the real innovation of the AT case. Mass storage devices are secured
on both sides by sturdy mounting rails that slide into channels on either side of the drive
bay. The two-sided mounting prevents the drive in the bay from bouncing or rattling
around during shipment as it would in one-sided PC mounting. The new design made
drive removal and installation relatively easy—providing you had hands small enough or
skin tough enough so that you didn't bloody your knuckles reaching behind the drive to
connect or disconnect its various cables. The mounting rails were secured by two
brackets that screwed into the front panel of the chassis.
True AT size cases use exactly this mounting scheme with rails that match those used by
IBM down to a fraction of an inch. Other manufacturers developed individual variations
on the IBM theme, either to distinguish themselves or to make manufacturing easier.
Modern cases that wear the AT designation do not necessarily follow this drive mounting
convention. To adapt to new drive sizes, designers have incorporated a variety of bays in
their case designs. These are usually described with terms used in XT-size (and all other)
case designs.
One additional AT case innovation was motivated by security concerns. A PC and its
data are vulnerable to anyone who can switch on a machine and probe its mass storage.
Or the entire mass storage system can disappear by simply popping open the case with a
screwdriver. Replace the lid, and the damage won't be obvious until someone boots the
machine, possibly days later.
To help prevent such problems, most AT cases (and most cases in general) add access
control. The AT case set the pattern with a keylock using a cylindrical key (the high
security type similar to the ones you find on pay phones and Coke machines) that both
physically latched the lid on the case and electrically disabled the keyboard. Twist the
key off, and you keep your fingers out of the box and prevent their doing damage by
dancing on the keyboard.
The keylock complemented a small control panel that incorporated a power on indicator
which helped diagnose monitor problems. (Nothing on your screen? Is the little green
light on the computer lit?) In addition, a disk activity indicator—a small red LED
light—let you know when your hard disk was accessing data (just in case you couldn't
feel your desk shake).
Although the originator of the AT case never advanced beyond this arrangement, various
compatible computer makers refined the details. Most AT-compatible computers have
added front panel access to all of the drive positions in the case. The internal bays of
most machines also have been subdivided to accommodate either a pair of half height
drives or a single full height device. Some PC makers have even elaborated on the
control panel with indicators to entertain you and incidentally help in diagnosing system
problems. All retain the generous space available for a large motherboard.

Mini-AT Size

A couple of inches may not seem like much, but the change between XT and AT cases
represents a huge increase in apparent mass. While an XT resided on a desktop, an AT
dominated it. But the larger size of the AT brought its own benefits, primarily the ability
to use larger expansion boards that could pack more circuitry and features into an
expansion slot.
Some compatible manufacturers hit upon the great compromise—an XT-size footprint in
an AT-height case. The result was the mini-AT. A better description might be "tall-XT"
cases because all their horizontal dimensions and accommodations (including system
board size) match the XT standard; they equal the AT only in expansion slot area and
drive bay height.
The advantage of this design is simply the smaller space it requires on your
desk—compared to the ordinary AT—coupled with the ability to accommodate any
expansion board. Although not as popular as they once were, tall AT-size expansion
boards still appear on the market, often as the basis for some of the most desirable, high
performance products. The downside of the mini-AT case is its inevitable
compromises—slicing off those inches eliminates some space in the expansion board
area so one or two slots may be pared off or truncated into short slots. Also, the access
inside is tighter.
The mini-AT case is one of the most popular available. Although its overall space
savings is actually modest compared to a full AT-size box, it is aesthetically pleasing.
And it affords designers all sorts of possibilities for squeezing in small drive bays. For
XT-size and smaller motherboards, it is probably the best overall compromise.

ATX Cases

Although ATX refers to a motherboard and not a case design, the ATX motherboard
makes specific requirements of a computer case. The port connectors of every ATX
motherboard are in the same position. This is actually a convenience—any case
designated ATX will accept an ATX motherboard and allow access to all of its port
connectors with no further fuss. The other side of the coin is that you need an ATX case
for an ATX motherboard. Initially, ATX cases demanded a price premium.
The ATX specification does not require a case of a particular size or layout as long as an
ATX motherboard will properly fit inside. Consequently, you can find both desktop and
tower style cases (see the following "Tower Style Cases" section) with various external
dimensions and drive mounting systems.

Small Footprint PCs

With today's advanced chipsets and integrated microprocessors, motherboards measuring
only a few square inches are possible and even the XT and mini-AT cases loom like
caverns in accommodating one of them. True miniaturization demands a case of smaller
dimensions. The generic term for such boxes is the small footprint PC. In general, the
term describes a case trimmed smaller than the XT horizontal dimensions yet still able to
accommodate those taller AT-size expansion boards.
Achieving that smaller size requires a big trade-off: accommodations. The principal loss
is in expansion slots. To shave down the height of these tiny cases while conserving the
capability of handling tall AT-size expansion boards, small footprint machines move
their options on edge. Instead of boards sliding into slots vertically, these tiny PCs align
them horizontally, as is done with LPX motherboards. The motherboard remains
horizontal in the case and has a single master expansion slot. A special expansion board
rises vertically with additional slot connectors on one side to allow you to slide in several
ordinary expansion boards (typically three) parallel to the system board.
Drive accommodations suffer, too. Although the exact assortment of drive possibilities
varies with system design, most small footprint machines roughly halve the number of
bay possibilities in larger designs. Instead of two side by side stacks of two drives, some
machines have a single stack. Others put two half height bays side by side. More modern
machines squeeze in at least one 3.5-inch or smaller internal drive bay.
With the multi-gigabyte capacities possible inside today's 3.5-inch hard disk drives, the
overall storage capabilities of these machines are not a major concern. The chief
shortcoming is in the variety of drive options. Two bays means one floppy disk drive and
one something else—your choice: second floppy, tape backup, removable cartridge hard
disk, CD ROM, whatever. Although you can add other peripherals externally, you pay
more for self-contained products and add to your office mess with a clutter of wires.
Another problem besides the obvious space and expansion limitations of the small
footprint design is that horizontal mounting of expansion boards also has a shortcoming.
The horizontal boards impede the normal convective air flow that would otherwise cool
the components on the board. Moreover, the lower boards can serve as stoves, heating the
boards installed atop them.
As fewer expansion boards are installed in PCs (because of more functions being
integrated on the system board) and large scale integration trims the power consumption
and heat generated by circuits, this problem decreases in magnitude. Moreover, the
demonstration Green PCs that forego traditional bus expansion in favor of low power
designs entirely eliminate the problem. Nevertheless, if you want to squeeze conventional
expansion into a small case, you need to put the most component laden of your expansion
boards in the top slot so the other boards can keep their cool.
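The stacking rule above lends itself to a one-line ordering: seat boards in descending order of heat dissipation, top slot first, so the hottest board has nothing mounted above it to bake. A minimal sketch (the board names and wattage figures are hypothetical, chosen only for illustration):

```python
def order_for_horizontal_slots(boards):
    """Return boards ordered for a small footprint case with
    horizontal slots, top slot first: the highest-dissipation
    board goes on top so it cannot heat boards mounted above it.
    Each board is a (name, watts) tuple."""
    return sorted(boards, key=lambda board: board[1], reverse=True)

# Hypothetical wattage figures, for illustration only
boards = [("modem", 3), ("graphics adapter", 12), ("sound card", 5)]
top_to_bottom = order_for_horizontal_slots(boards)
print([name for name, watts in top_to_bottom])
```

Here the hungriest board lands in the top slot and the coolest at the bottom of the stack.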

Tower Style Cases

At the other end of the packaging continuum from the small footprint PC is the machine
designed to be as large as logic allows—the tower style system. Designed to stand
upright on the floor, they are free from the need of minimizing the desk space they
require. Instead, they concentrate on expandability, allowing as many expansion boards
as standard system board designs permit and a wealth of drive bays.
Standing on edge is enough to qualify a PC as a tower. The internal accommodations
vary as much as the aesthetic tastes. Most—but not all—larger machines can happily
house an AT-size motherboard. Most—but not all—can accommodate at least five
5.25-inch half height devices. You can't tell by looking at the outside of the case what fits
inside. The accommodations need to be enumerated.
The AT first brought legitimacy to installing personal computers on edge, using an
afterthought mounting scheme—a cocoon to enclose a conventional AT machine on its
end. The first mainstream PCs designed from the start for floor mounting were
introduced in 1987. Soon most PC makers took to towers to take advantage of their more
commodious drive accommodations. With some models having space for eight or more
drives, freestanding tower style PCs have become the choice for multi-gigabyte network
servers.
Vertically mounting computer components causes no problems. Electronic circuits don't
know which way is up. However, some hard disks (in particular, massive older models
that require a full height 5.25-inch drive bay) may complain about installation on edge. In
some cases, drives that have been low level formatted in horizontal orientation show an
inordinate number of errors when operated vertically. The weight of a sturdy, old head
mechanism can be enough to skew the head away from the center of the disk tracks with
the result that errors reading the disk can become appallingly frequent. The simple
solution is to low level format the disk in the same vertical orientation in which it will be
used.
Another flaw in some tower style cases is cooling. Most towers align expansion boards
horizontally, stacking them one atop another much like small footprint PCs. Unless care
is taken in providing cooling air flow, power hungry boards low in the stack can cook
those boards higher up.
Towers come in various sizes. Mini-tower cases are the most compact and usually
accommodate only mini-AT and smaller motherboards and a few drive options. Full size
tower cases hold full size motherboards and more drive bays than most people know
what to do with. Recently midi-tower cases, with accommodation falling in between,
have become a popular option. There is no standardization of these terms. One
manufacturer's mini is another's midi.
Choose a PC with a tower style case for its greater physical capacity for internal
peripherals and its flexibility of installation wherever there are a few vacant feet of floor
space. You also need to be critical about the provisions for physically mounting mass
storage devices. Some towers provide only flimsy mounting means or require you to
work through a Chinese puzzle of interlocking parts to install a drive. You need a system
that provides sufficient drive mounting options.

Notebook Packaging

Back in the days before microminiaturization, anything instantly became portable the
moment you attached a handle. The first generation of portable televisions, for example,
was eminently portable—at least for anyone accustomed to carrying a carboy under each
arm. The first generation of PCs had similar pretenses of portability, challenging your
wherewithal with a weighty bottle of gas and photons, and a small but hardly lightweight
picture tube. The typical weight of a first generation portable PC was about 40
pounds—about the limit of what the market (or any reasonable human being) would bear.
These portables were essentially nothing more than a repackaging that combined a
conventional PC with an integral monitor. Some—for example, IBM's ill-starred PC
Portable—used motherboards straight from desktop systems (the PC Portable was just an
XT in schleppable clothing). Drive bays were moved and slots sacrificed for a package
that appealed to the visual senses, no matter the insult to your musculature.
Replacing the bottle with a flat panel display gave designers a quick way to cut half the
weight and repackage systems into lunchbox PCs. The name referred to the slab-sided
design with a handle on top reminiscent of what every kid not party to the school lunch
program toted to class—but with some weighing in at 20 to 25 pounds, these packages
were enough to provide Paul Bunyan with his midday meal. The largest of these did
allow the use of conventional motherboards with space for several conventional
expansion slots. Overall, however, the design was one that only a mother could love, at
least if she advocated an aggressive weight training program.
The ultimate in computer compression is the notebook PC, machines shrunk as small as
possible while allowing your hands a grip on their keyboards (and eyes a good look at the
screen) and as thin as componentry allows. Making machines this small means
everything has got to give—you can find compromises in nearly every system
component.
The fewest of these compromises appear in mass storage. The need for tiny, flyweight
drives for both notebook computers and machines of even smaller dimensions has been
the principal driving force behind the miniaturization of floppy and hard disks. Drive
manufacturers have been amazingly successful at reducing physical size while increasing
capacity and improving performance and reliability. Moreover, many notebook system
manufacturers are now relegating the hard disk drive to removable status, opting to
install drives in PCMCIA Type 3 slots so that your dealer can easily configure a system
or you can readily upgrade your own.
Today the biggest compromises made for the sake of compact size appear in the user
interfaces. Making a portable computer portable means making it a burden that a human
being can bear, even one that will be willingly borne. And the portable must be
something that can be packed rather than needing to be tethered with mooring ropes.
Unfortunately, some aspects of the user interface can't be compressed without losing
usability—it's unlikely that human hands will be downsized to match the demand for
smaller, lighter PCs so the optimum size required for a keyboard won't shrink. But the
temptation remains for the manufacturer to trim away what's viewed as excess—a bit
around the edges from the function keys or eliminating some keys altogether in favor of
key combinations only contortionists can master.
A number of subnotebook machines have been developed with keyboards reduced to 80
percent of the standard size. These include the Gateway HandBook series and the Zeos
Contenda. Most people adapt to slightly cramped keyboards and continue to touch type
without difficulty. Smaller than that, however, and touch typing becomes challenging. In
other words, hand held PCs are not for extensive data entry.
Besides length and width, notebook computer makers also have trimmed the depth of
their keyboards, reducing the height of keytops—not a noticeable change—as well as key
travel. The latter can have a dramatic effect on typing feel and usability. Although the
feel and travel of a keyboard is mostly user preference, odds favor greater dissatisfaction
with the shrunken, truncated keyboards in miniaturized computers compared to full size
machines.
Displays, too, are a matter of compromise. Bulky picture tube displays are out, replaced
with flat screens of size limited by the dimensions of the rest of the notebook package.
Although notebook systems of days gone by have explored a number of variations on the
screen mounting theme, today's most common case is the clamshell. Like the homestead
of a good old geoduck, the clamshell case is hinged to open at its rear margin. The top
holds the screen. When folded down, it protects the keyboard; when opened, it looms
behind the keyboard at an adjustable angle. In general, the hinge is the weakest part of
this design.
To make their systems more ergonomic, some notebook manufacturers try to follow the
desktop paradigm by cutting the keyboard, screen, or both free from the main body of the
computer. The appeal of these designs is adjustability: you can work the way you want
with your hands as close to or far from the screen as feels most comfortable. When used
in an office, this design works well. When mobile—say in a coach class seat on a
commuter plane bounding between less civilized realms in the Midwest—the extra pieces
to tangle with (and lose) can be less a blessing than a curse. The worst compromise is the
keyboard. Making a machine portable demands that the weight of every part be
minimized. But lightweight keyboards coupled with cables that are too short and too
springy can be frustrating to use—the keys, then the entire keyboard, slipping away
from under your fingers.
Although a few notebook systems allow the use of ISA expansion boards, the match is
less than optimal. Expansion boards designed for desktop use are not built with the idea
of conserving power, so a single board may draw as many watts as the rest of a notebook
computer, cutting battery life commensurately. Consequently, most notebook PCs and
nearly all subnotebook machines forswear conventional expansion boards in favor of
proprietary expansion products or the credit card size expansion modules that follow the
PC Card and CardBus standards.
Access is one major worry with notebook cases. If you plan to expand the memory of
your system, you need a notebook that lets you plug in memory modules or memory
cards without totally disassembling the PC. The easiest machines to deal with have slots
hidden behind access panels that allow you to slide in a memory card as easily as a
floppy disk. Others may have access hatches that accommodate the addition of memory
modules. The only shortcoming of either of these expansion methods is the amount of
additional memory such machines support—generally one to eight megabytes—enough
for current applications, perhaps, but insufficient for the massive applications
programmers stay up nights creating. Many notebook computers, especially the lower
cost models wearing the house brands of mail order companies, require that you remove
the keyboard to access memory module sockets—a tricky job that demands more skill
and patience than you may want to devote to such a task. A few even require unbolting
the screen for accessing expansion sockets. If you plan on expanding a notebook or
notebook system in the future, check both the permitted memory expansion capacity and
the method for adding more RAM.
The key design feature of any notebook or subnotebook computer is portability. You
need a machine that's packaged to be as compact and light as possible—commensurate
with the ergonomic features that you can tolerate. Older notebooks had built-in handles;
newer machines have foregone that luxury. With PCs now weighing in under five pounds
and sometimes measuring smaller than a stack of legal pads, that lack has become
tolerable—you can either wrap your palm around the machine or tuck it into a carrying
case.
At one time the toughest notebook computers had cases crafted from metal, often tough
but light magnesium, but nearly all machines today are encased in high impact plastic. In
general they are tough enough for everyday abuse but won't tolerate a tumble from
desktop to floor any more than would a clock, a camera, or other precision device. Inside
you find foil or metal enclosed subassemblies, but these are added to keep radiation
within limits rather than minimize the effects of sudden deceleration after free fall. In
other words, notebook and subnotebook computers are made to be tough, but abuse can
be as fatal to them as any other business tool.

Motherboard Mounting

A motherboard must somehow be mounted in its cabinet, and PC manufacturers have
devised a number of ways to hold motherboards down. This simple sounding job is more
complex than you may think. The motherboard cannot simply be screwed down flat. The
projecting cut ends of pin-in-hole components make the bottom uneven, and torquing the
board into place is apt to stress it, even crack hidden circuit traces. Moreover, most PC
cases are metal and laying the motherboard flat against the bottom panel is apt to result in
a severe short circuit. Consequently, the motherboard must not only be held secure, it
must be spaced a fraction of an inch (typically 3/8 to 1/2 inch) above the bottom
of the case.
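That clearance figure can be captured in a couple of lines. A small sketch, assuming only the 3/8 to 1/2 inch range stated above (the helper name is invented for illustration):

```python
from fractions import Fraction

# Typical insulating clearance between motherboard and case floor,
# per the range given in the text
MIN_SPACING = Fraction(3, 8)  # 3/8 inch
MAX_SPACING = Fraction(1, 2)  # 1/2 inch

def spacer_ok(height_inches):
    """True if a standoff of the given height (in inches) keeps the
    board within the typical 3/8 to 1/2 inch clearance."""
    height = Fraction(height_inches).limit_denominator(64)
    return MIN_SPACING <= height <= MAX_SPACING
```

A 0.4 inch standoff passes the check; a 1/4 inch one sits too close to the metal.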
IBM originally solved the motherboard mounting problem ingeniously in its first PC. The
motherboards in these machines—and those of later machines from IBM and many other
manufacturers—use a combination of screws and specialized spacers that make
manufacture (and board replacement) fast and easy. The design is actually amazingly
frugal, using just two (sometimes three) screws.
The balance of the mounting holes in these motherboards is devoted to nylon fasteners,
which insulate the boards from the metal chassis while holding them in place. These
fasteners have two wings that pop through the hole in the motherboard and snap out to
lock themselves in place. The bottom of these fasteners slides into a special channel in
the bottom of the PC case.
Mechanically, the two or three screws hold these PC motherboards in place and the nylon
fasteners are designed to space the board vertically and fit special channels in the
metalwork of the case, allowing the boards to slide into place.
In this design, removing the screws allows you to slide the motherboard to the left,
freeing the nylon fasteners from their channel. Installing a motherboard requires only
setting the board down so that the fasteners engage their mounting channel, then sliding
the board to the right until the vacant screw holes line up with the mounting holes in the
chassis. Because the number of screws is minimized, so is the labor required to assemble
a PC—an important matter when you plan to make hundreds of thousands of machines.
Other personal computer makers developed their own means of mounting motherboards
inside their machines. Some of these manufacturers save the cost of welding the fastener
mounting channels in place by drilling a few holes in the bottom of the case and
supplying you with a number of threaded metal or plastic spacers (usually nothing more
than small nylon tubes) and screws. These spacers are meant to hold the system board the
same height above the bottom of the chassis as would the standard nylon fasteners.
When installing a replacement motherboard in a case using screws and spacers, you have
two choices: screw the spacers into the case, put the motherboard atop them, then screw
the motherboard to the spacers. Or, screw the motherboard to the spacers then try to get
the spacers to fit the holes in the bottom of the case. Neither method is very satisfactory
because you're faced with getting ten or so holes and screws to line up, which, owing to
the general lack of precision exercised by cut rate manufacturers in making these cases,
they never do. The best thing to do is compromise. Attach the spacers loosely to the
motherboard, then try to get the screws at the bottom of the spacers to line up with the
holes in the case. You should be able to wiggle them into the holes.
Sometimes when you want to upgrade or repair a PC by replacing its motherboard you
find that the holes in the case and motherboard are at variance. In such circumstances the
best strategy is to modify your case by drilling holes in it to match the motherboard, then
use screws and spacers for mounting. Never modify the motherboard by drilling holes in
it. You can damage circuit board traces, some of which are invisible and buried within
layers of the motherboard.

Drive Mounting

Not only must devices fit into the chassis of a personal computer, they must securely
attach in some way. Other components, too, require some means of attachment to the
chassis so that your PC doesn't turn into a basketful of parts as soon as you touch it.
The most important of the components that somehow must fit and be affixed to the
chassis is the power supply. For the most part, installing power supplies presents few
problems. Most power supplies are standardized boxes that simply screw in place. Four
screws suffice. (Power supply installation is covered in Chapter 24, "Power.")

Note, however, that not all power supplies fit into all cases, for reasons of both size and
configuration. Most power supplies follow the IBM style and put a big red on/off
switch on an extension bracket to the right of the box. The cases of most systems—in
particular, those that explicitly follow the old XT and AT standards—are notched at the
right rear to allow access to these switches. A few cases, however, are designed to use
power supplies with rear panel power switches. Usually the company that sells one
variety of case offers power supplies that match (and vice versa). But if you're just
replacing the case or the power supply, you need to be careful.
Don't forget that power supplies for XT- and AT-size cases are different sizes
themselves, and one does not fit in a case meant for the other. It's an obvious problem but
one that can be eliminated if it's anticipated.
Mass storage devices present their own case matching considerations. When you buy a
complete computer system, you don't need to worry about interfaces, controllers, and the
like. The manufacturer has done all the work and properly matched everything for
optimum operation (you hope). Adding a new mass storage device to enhance your PC or
replacing one that has failed provides you with several interesting challenges. Not only
must the prospective product be matched electrically to your system, but also it must
physically match your PC's case. After all, any device meant to be installed inside your
system must, at minimum, physically fit in place.

Form Factors

Disk drives come in a variety of heights and widths. The basic unit of measurement of
the size of a drive is the form factor. A form factor is simply the volume of a standard
drive that handles a particular medium. Several form factors, ranging in size from eight
inches down to 1.3 inches, regularly find their way into discussions of personal
computers, and most allow for one or more device heights.
A full size drive, one which defines the form factor and occupies all of its volume, is
usually a first generation machine. Its exact dimensions, chosen for whatever particular
reason, seemed fitting, perhaps allowing for the mood of the mechanical engineer on the
day he was drafting the blueprints. If the drive is a reasonable size and proves
particularly successful—successful enough that other manufacturers eagerly want to cash
in, too—others follow suit and copy the dimension of the product, making it a standard.

Device Heights

The second generation of any variety of hardware inevitably results in some sort of size
reduction. Cost cutting, greater precision, experience in manufacturing, and the inevitable
need to put more in less space gang up to shrink things down. The result of the
downsizing process is a variety of fractional size devices, particularly in the 5.25-inch
and 3.5-inch form factors. At 5.25 inches, devices are measured in sub-increments of the
original full height package. Devices that are two-thirds height, half-height, one-third, or
one-quarter height have all been manufactured at one time or another.
At the 3.5-inch form factor, sizes are more pragmatic, measured as the actual height in
inches. The original 3.5-inch drives may be considered full height and typically measure
about 1.6 inches high. The next most widely used size was an even inch in height
(five-eighths height, for the fractious folk who prefer fractions). Sub-inch heights have
been used for some devices, some as small as 0.6 inch.
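The "five-eighths height" nickname falls straight out of the two figures just given; a quick worked check, using the approximate 1.6 inch full height figure:

```python
from fractions import Fraction

FULL_HEIGHT = 1.6   # inches, typical full height 3.5-inch device
SLIM_HEIGHT = 1.0   # inches, the later one-inch-high drives

# 1.0 / 1.6 = 0.625, i.e. the five-eighths height of the text
ratio = Fraction(SLIM_HEIGHT / FULL_HEIGHT).limit_denominator(16)
print(ratio)
```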
However, before 3.5-inch drives had a chance to slim down to two dimensions, smaller
form factors came into play—2.5, 1.8, and 1.3 inches. The 2.5-inch devices were
designed primarily for notebook computers. Smaller drives fit palmtop computers, even
on credit card size expansion boards. The 1.8-inch size has won particular favor for
fitting into Type 3 PC Cards that follow the most recent standards promulgated by
PCMCIA.
Note that all of these applications for sub-3.5 inch drives are the type in which the system
(or at least its case or other packaging) is designed around the drive. In other words, only
the PC manufacturer needs to fret about the exact dimensions of the drive and what fits
where.
To bring order to the chaos of drive sizes with each manufacturer determining what best
fits its own purposes, several drive makers formed a consortium to standardize drives.
Called the Small Form Factor Committee, the organization does not officially sanction
standards but rather creates specifications, which are then submitted for approval to other
standard setting organizations (such as the IEEE and ANSI). Because SFF became active
only in 1992, it initially began work on drives with form factors smaller than 3.5 inches
(as fits its name). However, the committee is also working on defining firm specifications
for larger drives.
The specifications developed by the SFF committee are published by
ENDL Publications
14426 Black Walnut Court
Saratoga, California 95070.
The one important rule regarding small drives is that a device can always be adapted to
fit a drive bay designed for a larger form factor. For example, kits to adapt 3.5-inch
drives for 5.25-inch bays are readily available—often included with the drive itself. You
can also always install a smaller drive in place of a larger one with a suitable adapter (or
even by making mounting holes in your chassis yourself). Going the other way is more
difficult.
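The adapt-downward rule reduces to a single comparison of form factor sizes. A sketch using the sizes mentioned in this chapter (the function name is illustrative, not any standard API):

```python
# Form factor sizes (inches) discussed in this chapter
FORM_FACTORS = (8.0, 5.25, 3.5, 2.5, 1.8, 1.3)

def fits_in_bay(drive_ff, bay_ff):
    """A drive can always be adapted into a bay of its own form
    factor or larger (e.g. a 3.5-inch drive in a 5.25-inch bay
    with an adapter kit); the reverse does not work."""
    return drive_ff <= bay_ff
```

So a 3.5-inch drive fits a 5.25-inch bay, but a 5.25-inch drive has no hope in a 3.5-inch bay.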

Drive Installation

Internal installation of any mass storage device in a drive bay is actually quite simple.
Most disk and tape drives are replete with multiple tapped holes on each side and bottom
that accept screws to hold the drive in place. Add the right screws and a few twists and
you can install a device in a few minutes. Anyone who knows the right end of a
screwdriver to grab is qualified to physically install or remove a device from a drive bay.
But matters—and drive bays—are not quite so simple. You need to get access to the
drive bay itself as well as the holes through which you must twist the screws. Some
systems make playing hide and seek with the invisible man seem less of a challenge.

Direct Mounting

The logical way to attach a disk drive to a computer is to screw it in. Disk drives provide
tapped holes on three sides to accommodate your efforts as well as the most vexing drive
placement schemes concocted by misanthropic engineers. All you need to do is find
where the case maker provided space for the drive, figure out how to get the drive in
place, and screw everything up. Some schemes make external drives the most desirable
alternative.
The most straightforward direct mounting scheme is that used by the original PC, XT,
and compatible computers patterned after them. Bays are accommodated in a mounting
tray, the sides of which are bent upward with holes provided to match the two tapped
mounting holes on each side of the standard full height 5.25-inch mass storage device.
These screws are visible on the right side of drives in the right drive bay and the left side
of drives in the left bay. Gaining access to the latter typically requires a short
screwdriver, a great deal of dexterity, and a tolerance for blood loss—or removing all or
most of the expansion cards inside the computer.
The only challenge is lining up the holes tapped into the side of a device with the holes or
slots in the sides of the drive bay. Slots impose an additional layer of merriment. You
must align the front of the device with the front panel of the computer. You also need to
use screws with large heads (binder head screws are best if you can acquire them from
your hardware store) that won't slip through the slots.
As noted earlier, two screws on one side of a device provide less than adequate
mounting security, particularly in inexpensive cases made from sheet metal so thin you
may expect it to be recycled soup cans. Slide a tape cartridge drive into one of these bays
and you can probably fold the entire bay half an inch over every time you shove in a tape.
IBM added an extra screw for its XT hard disk drives for the sake of mechanical
integrity, and you would be well advised to do the same. You need to mark the place on
the bay that lines up with one of the screw holes in the bottom of the device you want to
install and drill a matching hole in the bottom of the mounting tray. You then need to
drill a matching hole in the bottom of the case large enough for the entire screw (and
screwdriver) to be pushed up to the bottom of the mounting tray.
Another difficulty you may face is installing half height devices in very early machines
(such as IBM's PC and XT) that were designed before half height devices became
popular. Although you can install a single half height drive in each slot simply by using
these mounting holes and filling the empty space above the drive with a blank half height
panel, packing a pair of drives in one bay is more challenging. Try to install two half
height drives in a single full height bay and you could be trying to attach to something
that's not there. The solution is to create a pair of half height adapter plates. Basically just
thin pieces of steel with a number of slots or holes in them, adapter plates allow you to
assemble two half height drives into a single unit that installs into a full height drive bay.
Installing drives with these adapters is more difficult than you may think. You can't just
connect two drives with them and slide the whole assembly into the drive bay—the screw
heads (and maybe the plates themselves) make the drive package too wide to slide through
the front panel opening in the chassis.
The first step in a double half height installation is to connect the two drives together
with one adapter plate, on the side opposite the one that attaches to the mounting tray. Once one plate is installed, you
should be able to slide the drive stack into the computer as a single piece.
Once the two-drive stack is in its proper place but while you can still maneuver the drives
in and out of the bay, install all the cables to both drives. Once everything is plugged in,
slip the other mounting plate between the drives and the side of the drive bay. Secure this
mounting plate and the drive by screwing through the two holes in the bay, through the
mounting plate, and into the screw holes in the bottom drive. Finally, finish your work by
screwing the top drive into place.

http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/wrh25.htm (17 de 25) [23/06/2000 07:03:50 p.m.]


Winn L. Rosch Hardware Bible, Electronic Edition, Chapter 25

Systems that have proper holes for mounting half height drives into full height slots also
benefit from using an adapter plate. Installing a single adapter on the side of the drive
stack opposite the attachment screws adds stability.
A number of small footprint PCs use special internal bays for 3.5-inch hard disks. In
most cases, you need to remove the bay itself—which is little more than a bent sheet
metal cage—before you can install the drive. After removing the bay from the chassis,
you then can screw the drive directly to the bay. Then reinstall the entire drive and bay
assembly.
Most makers of modern miniaturized hard disks warn that you should use the shortest
possible screws for direct mounting of the drive. When you tighten long screws, they
may press against the side of the drive's case or its electronics, physically distorting the
drive itself. Because the drive is a precision instrument, even the slightest distortion can
be fatal. Some drives require screws as short as one-quarter inch.

Rail Mounting

The improved drive mounting scheme developed for the AT and compatible systems
following its case design imposes an additional step on device installation: mounting
rails. The sides of the drive bay are fabricated into channels. Rails installed on each side
of each drive slide into these channels, securing the drive on both sides.
Rail mounting both solves problems in drive installation and adds new ones. On the positive side, rails
make a truly secure mounting system that can withstand even moderate earthquakes. A
tight fit between the channel and rail ensures that the drive is kept in place vertically. A
stop at the back of the channel and a mounting bracket at the front prevent fore and aft
rattling. On the negative side, however, the process of installing the rails themselves
allows you to make several missteps that you won't discover until too late, giving you the
pleasure of screwing rails on and off your drive several times.
The rail mounting scheme has proven so satisfactory that IBM has used it with little
change for more than a decade (compared to the three years it used direct mounting).
Many other manufacturers have adopted the same or a similar mounting system. You
may find many AT-compatible cases that simply duplicate the IBM mounting scheme in
all of its dimensions. Others—particularly the larger manufacturers (such as
Compaq)—have opted for their own rail designs.
The first challenge in installing rails on a device is coming up with the rails themselves.
True AT-style rails are easy to find. Many device vendors package AT-style rails with
their products or pre-install AT-style rails before sending the device to you. The rails in
compatible computers are so varied that finding the exact size you need means contacting
the computer maker or an authorized dealer who stocks a complete array of parts. Many
makers of AT-style computers now alleviate this problem by filling all drive bays in their
systems with rails, whether or not a drive is installed in each bay.
Even with AT-style rails, you face a variety. All sorts of rails are available. Official IBM
rails are different for the right and left side of the drive and have only two installation
holes. Aftermarket rails may have four or eight holes or slots and may be entirely
symmetrical, even squared off at both ends. These rails are meant to be interchangeable
between the right and left sides of the device you want to install. Although that's a
laudable design consideration, the multiplicity of holes also imposes on you the challenge
of finding which pair of the eight puts the device at the proper height in the bay and the
proper distance from the front panel.
With true, unsymmetrical IBM rails, proper orientation points the tapered end of the rail
toward the rear of the drive. The screws to hold the rails in place then go into the lower
pair of the two sets of mounting holes on the drive. With non-IBM AT-style rails, the
general rule is to use the lower holes on the rail (when its tapered end is pointed toward
the rear of the drive) to mate with the lower holes in the side of the device. Some odd
rails may have holes in different positions, however. If you install rails on a new drive,
make sure that the drive lines up in the proper vertical position before you secure its
mounting brackets and try to reinstall the lid of the case.
Once you've installed rails on a device, slide it into the bay in which you plan to use it.
Push the device to the back of the bay to ensure that the device fits at the proper height
and depth to match with the computer's fascia panel. Once you're satisfied that the device
fits properly, pull it part way out and make all the electrical connections to it. Then push
the drive back to its final resting place. With AT-style cases, screw in the front
panel brackets to hold the rails and device in place. With some compatibles, the bracket
is part of the rail itself. These mate directly against the front panel of the computer.
In computers that follow the AT pattern, most of the brackets are L-shaped. However,
one (which fits between the left and right bays) is U-shaped. With either style of bracket,
the arm or arms of the bracket that project backward press against the drive mounting rail
and hold it at the end of its travel in its channel.

Tray Mounting

Some computer makers use a variation on the rail theme—the drive tray. One
example is the Advanced Logic Research tower style case. In these systems, drives
mount through their bottom screw holes to trays, much as drives mount to IBM sleds.
The trays, however, then screw into the side of the tower. The cover plate on the other
side of the case is slotted to support the other side of each tray.
Some manufacturers hide hard disk bays inside their cases in a way that makes the
mounting screw of the drive inaccessible. These mounting systems are actually
modifications of the tray. The drive mounts conventionally in a tray, then the tray installs
(often with a single screw) inside the case. The tray is a carrier that provides a secure
mount for the drive even when using only a single screw.

Cooling

A case can be confining. It can keep just about everything from escaping, including the
heat electronic circuits produce as a by-product of performing their normal functions.
Some of the electricity in any circuit (except one made from superconductors) is turned
into heat by the unavoidable electrical resistance of the circuit. Heat is also generated
whenever an element of a computer circuit changes state. In fact, nearly all of the
electricity consumed by a computer eventually turns into heat.
Inside the protective (and confining) case of the computer, that heat builds up, thus
driving up the temperature. Heat is the worst enemy of semiconductor circuits; it can
shorten their lives considerably or even cause their catastrophic failure. Some means of
escape must be provided for the excess heat. In truth, the heat build-up in most PCs may
not be immediately fatal to semiconductor circuits. For example, most microprocessors
shut down (or simply generate errors that shut down your PC) before any permanent
damage occurs to them or the rest of the components inside your PC. However, heat can
cause circuits to age prematurely and can trim the lives of circuit components.
The design of the case of a PC affects how well the machine deals with its heat build-up.
A case that's effective in keeping its internal electronics cool can prolong the life of the
system.

Passive Convection

The obvious way to make a PC run cooler is to punch holes in its case to let the heat
out—but to keep the holes small enough so that other things, such as mice and
milkshakes, can't get in. In due time, passive convection—less dense hot air rising with
denser cool air flowing in to take its place—lets the excess thermal energy drift out of the
case.
Any impediment to the free flow of air slows the passive cooling effect. In general, the
more holes in the case the merrier the PC will be. Remove the lid, and the heat can waft
away along with temperature worries.
Unfortunately, your PC's case should be closed. Keeping a lid on it does more than just
restrict cooling—it is also the only effective way to deal with interference. It also keeps
your PC quieter, prevents foreign objects and liquids from plummeting in, and gives your
monitor a lift.
Moreover, passive cooling is often not enough. Only low power designs (such as
notebook and Green PCs) generate little enough heat that convection can be entirely
successful. Other systems generate more heat than naturally goes away on its own.

Active Cooling

The alternative to passive cooling is, hardly unexpectedly, active cooling, which uses a
force of some kind to move the heat away from the circuits. The force of choice in most
PCs is a fan.
Usually tucked inside the power supply, the computer's fan forces air to circulate both
inside the power supply and the computer. It sucks cool air in to circulate and blows the
heated air out.
The cooling systems of early PCs, however, were particularly ill conceived for active
cooling. The fans were designed mostly to cool off the heat generating circuitry inside
the power supply itself and only incidentally cooled the inside of the computer.
Moreover, the chance design of the system resulted in most of the cool air getting sucked
in through the floppy disk drive slots. Along with the air came all the dust and grime
floating around in the environment, polluting whatever media you had sitting in the
drive. At least enough air coursed through the machine to cool off the small amount of
circuitry that the meager power supply of the PC could provide.
The XT added more electricity from the power supply but no better ventilation. And that
brought its own problem. The airflow around expansion cards and the rest of the
computer was insufficient (actually badly placed) to keep the temperature throughout the
machine down to an acceptable level. As a correction to later models of the XT, IBM
eliminated a series of ventilation holes at the bottom of the front of the chassis. The
absence of these holes actually improves the air circulation through the system unit and
keeps things cooler (see Appendix A, "PC History").

Unfortunately, most computer manufacturers rely on cooling that has not advanced
beyond the XT system. At most, they graft a heatsink onto the system microprocessor to
provide a greater area to radiate heat. But most still rely on the fan in the power supply to
move the cooling air through the system.

Advanced Cooling

Some systems have more carefully thought out cooling systems, channeling the flow of
cooling air to the places it is most needed. A few manufacturers add extra fans to
supplement the air flow generated by the power supply fan.
In most systems, however, the cooling system can be improved. Booster fans that clamp
on the rear panel of the computer and power supplies with beefed up fans are available.
These do, in fact, increase air circulation through the system unit, potentially lowering
the internal temperature. Note, however, that there is no reliable data on whether this
additional cooling actually prolongs the lives of the components
inside your PC. Unless you stuff every conceivable accessory into your machine,
however, you're unlikely to need such a device except for the added measure of peace of
mind it provides.
On the other hand, blocking the airpath of the cooling system of any PC can be fatal,
allowing too much heat to build up inside the chassis. Never locate a PC in cramped
quarters that lack air circulation (like a desk drawer or a shelf on which it just fits). Never
block the cooling slots or holes of a computer case.

Fan Failure

The fan inside a PC power supply is a necessity, not a luxury. If it fails to operate, your
computer won't falter—at least not at first. But temperatures build up inside. The
machine—the power supply in particular—may even fail catastrophically from
overheating.
The symptoms of fan failure are subtle but hard to miss. You hear the difference in the
noise your system makes. You may even be able to smell components warming past their
safe operating temperature.
Should you detect either symptom, hold your hand near where the air usually emerges
from your computer. (On most PCs, that's near the big round opening that the fan peers
through.) If you feel no breeze, you can be certain your fan is no longer doing its job.
A fan failure constitutes an emergency. If it happens to your system, immediately save
your work and shut the machine off. Although you can safely use it for short periods, the
better strategy is to replace the fan or power supply as soon as you possibly can.

Radiation

Besides heat, all electrical circuits radiate something else—electromagnetic fields. Every
flow of electrical energy sets up an electromagnetic field that radiates away. Radio and
television stations push kilowatts of energy through their antennae so that this energy
(accompanied by programming in the form of modulation) radiates over the countryside,
eventually to be hauled in by a radio or television set for your enjoyment or
disgruntlement.
The electrical circuits inside all computers work the same way but on a smaller scale.
The circuit board traces act as antennae and radiate electromagnetic energy whenever the
computer is turned on. When the thinking gets intense, so does the radiation.
You can't see, hear, feel, taste, or smell this radiation, just as you can't detect the
emissions from a radio station (at least not without a radio), so you would think there
would be no reason for concern about the radiation from your PC. But even invisible
signals can be dangerous, and their very invisibility makes them more worrisome—you
may never know if they are there or not. The case of your PC is your primary (often only)
line of defense against radiation from its electronic circuitry.
The problems of radiation are twofold: the radiation interfering with other, more
desirable signals in the air; and the radiation affecting your health.

Radio Frequency Interference

The signals radiated by a PC typically fall in the micro-watt range, perhaps a billion
times weaker than those emitted by a broadcasting station. You would think that the
broadcast signals would easily overwhelm the inadvertent emissions from your PC. But
the strength of signals falls off dramatically with distance from the source. They follow
the inverse square law; therefore a signal from a source a thousand times farther away
would be a million times weaker. Radio and television stations are typically miles away,
so the emissions from a PC can easily overwhelm nearby broadcast signals, turning
transmissions into gibberish.
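The arithmetic behind that comparison is the inverse square law itself: received strength falls with the square of the distance ratio. A brief sketch, using the thousand-fold distance from the example above:

```python
def relative_strength(distance_ratio: float) -> float:
    """Field strength of a source moved distance_ratio times farther away,
    relative to its original strength, per the inverse square law."""
    return 1.0 / distance_ratio ** 2

# A transmitter 1,000 times farther away than your PC arrives at the
# antenna a million times weaker.
print(relative_strength(1000.0))  # 1e-06
```

So even a microwatt-level emission a few feet from a rabbit-ear antenna can compete with a kilowatt transmitter miles away.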
The radiation from the computer circuitry occurs at a wide variety of frequencies,
including not only the range occupied by your favorite radio and television stations but
also aviation navigation systems, emergency radio services, and even the eavesdropping
equipment some initialed government agency may have buried in your walls. Unchecked,
these untamed radiations from within your computer can compete with broadcast signals
not only for the ears of your radio but those of your neighbors. These radio-like signals
emitted by the computer generate what is termed radio frequency interference or RFI, so
called because they interfere with other signals in the radio spectrum.
The government agency charged with the chore of managing interference—the Federal
Communications Commission—has set strict standards on the radio waves that personal
computers can emit. These standards are fully covered in Appendix B. At their hearts,
however, the FCC standards enforce a good neighbor policy. They require that the RFI
from PCs be so weak that it won't bother your neighbors, although it may garble radio
signals in your own home or office.
The FCC sets two standards: Class A and Class B. Computer equipment must be verified
to meet the FCC Class A standard to be legally sold for business use. PCs must be
certified to conform with the more stringent FCC Class B standard to be sold for home
use.
Equipment makers, rather than users, must pass FCC muster. You are responsible,
however, for ensuring that your equipment does not interfere with your neighbors. If your
PC does interfere, legally you have the responsibility for eliminating the problem. While
you can sneak Class A equipment into your home, you have good reason not to. The job
of interference elimination is easier with Class B certified equipment because it starts off
radiating lower signal levels, so Class B machines give you a head start. Moreover,
meeting the Class B standards requires better overall construction, which helps assure
that you get a better case and a better PC.

Minimizing Interference

Most television interference takes one of two forms: noise and signal interference.
Noise interference appears on the screen as lines and dots that jump randomly
about. The random appearance of noise reflects its origins. Noise arises from random
pulses of electrical energy. The most common source for noise is electric motors. Every
spark in the brushes of an electric motor radiates a broad spectrum of radio frequency
signals that your television may receive along with its normal signals. Some computer
peripherals may also generate such noise.
Signal interference usually appears as a pattern of some sort on your screen—for
example, a series of tilted horizontal bars, or noise-like snow that stays in a
fixed pattern instead of jumping madly about. Signal interference is caused by regular,
periodic electrical signals.
Television interference most commonly occurs when you rely on a "rabbit ear" antenna
for your television reception. Such antennas pull signals from the air in the immediate
vicinity of the television set, so if your PC is nearby its signals are more likely to be
received. Moving to cable or an external antenna relocates the point at which your TV picks up its
signals to a distant location and will likely minimize or eliminate interference from a PC
near the TV set.
You can minimize the interference your PC radiates to improve your television reception
by taking several preventive measures.
The first step is to make sure the lid is on your PC's case and that it and all expansion
boards are firmly screwed into place. Fill all empty expansion slots with blank panels.
Firmly affixing the screws is important because they ground the expansion boards or
blank panels which helps them shield your PC. This strategy also helps minimize the
already small fire hazard your PC presents.
If the interference persists after you screw everything down in your PC, next check to see
if you can locate where the interference leaks from the PC. The most likely suspects are
the various cables that trail out of your PC and link to peripherals such as your monitor,
keyboard, and printer. Disconnect cabled peripherals one at a time and observe if the
disconnection reduces the interference.
Because they operate at the highest speed (and thus, highest frequency), external SCSI
cables are most prone to radiating interference. All external SCSI cables should be
shielded.
Your mouse is the most unlikely part of your PC to cause TV interference. The mouse
operates at serial data rates, which are much too low to interfere even with VHF
television.
If disconnecting a cable reduces onscreen TV interference, the next step is to get the
offending signal out of the cable. The best way is to add a ferrite core around the cable.
Many computer cables already have ferrite cores installed. They are the cylindrical lumps
in the cable near one or the other connector. Install the ferrite core by putting it around
the offending cable near where the cable leaves your PC. You can buy clamp-on ferrite
cores from many electronic parts stores.
Unplugging one cable—your PC's power cable—should completely eliminate the
interference radiated by your PC. After all, the PC won't work without power and can't
generate or radiate anything. You can reduce the interference traveling on the power line
by adding a noise filter between your PC's plug and its power outlet. You can usually
obtain noise filters from electronic parts suppliers. Although a noise filter is not the same
thing as a surge suppresser, most better surge suppressers also include noise filtering.

Health Concerns

Some radiation emitted by PCs is of such low frequencies that it falls below the range
used by any radio station. These Very Low Frequency and Extremely Low Frequency
signals (often called VLF and ELF) are thought by some people to cause a variety of
health problems (see Appendix B, "Regulations").

Your PC's case is the first line of defense against these signals. A metal case blocks low
frequency magnetic fields, which some epidemiological studies have hinted might be
dangerous, and shields against the emission of electrical fields. Plastic cases are less
effective. By themselves they offer no electrical or magnetic shielding. But plain plastic
cases would also flunk the FCC tests. Most manufacturers coat plastic cases with a
conductive paint to contain interference. However, these coatings are largely ineffective
against magnetic fields. Most modern systems now use metal cases or internal metal
shielding inside plastic cases to minimize radiation.
No matter the construction of your PC, you can minimize your exposure to radiation
from its case by ensuring that it is properly and securely assembled. Minimizing
interference means screwing in the retaining brackets of all the expansion boards inside
your PC and keeping the lid tightly screwed into the chassis. Keeping a tight PC not only
helps keep you safe, it keeps your system safe and intact as well.

Winn L. Rosch Hardware Bible, Electronic Edition, Appendix A

Appendix A

PC History
■ Hardware Development
■ Mechanical Computers
■ Relay Computers
■ Electronic Computers
■ Software Evolution
■ Personal Information Managers
■ Interactive Computing
■ Graphic User Interface
■ Workgroup Computing
■ Personal Computers
■ Altair
■ CP/M
■ Apple Computer
■ Commodore International
■ Tandy/Radio Shack
■ IBM
■ Compatible PCs

http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/wrhxa.htm (1 de 16) [23/06/2000 07:05:27 p.m.]


PC History

In the great scheme of things, the personal computer is just the flowering of hundreds of years of
development, the coming together of diverse ideas, inspiration, technology, and tinkering. In
retrospect, the PC was inevitable. In prospect, however, today’s discount-store wonders were
inconceivable. The groundwork was laid without any conception of where it would eventually lead.
In tomorrow’s history the PC will likely be seen as the cause of a great social shift, one that we won’t
be able to appreciate until long after it’s happened. By the time we have the perspective to appreciate
the changes wrought by the PC, we might not remember the details of what went on, the how and why
the PC arose. Already the origins of the PC are slipping into the murk of fading memory. The PCs on
desktops and dealers’ shelves appear to have sprung into existence in final form, a complete machine
capable of processing and managing most of the information we deal with throughout our lives. They
might just as well have been dropped from alien space ships, exploded out of volcanoes, or flooded
into society by some secret government organization intent on bending our thoughts and our lives to
fit their plans. Already the history of the PC is becoming cloudy, ever more so as the current machines
shake free from their roots and head into new generations.

Hardware Development

If any single inspiration for the development of computers exists, it was the simple principle that
mathematicians don’t like to do math problems either. With the coming of the mechanical age in the
17th Century, mathematicians looked for ways of mechanizing the busy work of making their
calculations. They had better things to do than cramp their hands on pencils and overheat their brains
adding numbers. Rather than calculate pages of tables, mathematicians would rather be developing
number theory, classifying fauna, and visiting the public house to quaff a few beers.

Mechanical Computers

Wilhelm Schickard (1592-1635) is believed to have been the first of these lazy number-workers to
think of giving the work of calculating to a machine. Working in what is now Germany around 1623,
Schickard created a "calculating clock," a mechanical machine that could add and subtract six-digit
numbers and had a rudimentary memory system that allowed limited multiplication through repeated
addition. Although the machine was proven workable by a reconstruction in 1960, its
design had no influence on later mechanical calculators: both the machine itself and its plans
disappeared and were rediscovered only in 1935.

More influential was the French philosopher Blaise Pascal, who was favored by modern-day computer
language developer Niklaus Wirth when Wirth was looking for a name for his procedural
programming language. Among Pascal’s achievements that led to his selection for that singular honor
was the creation, in about 1642, of the first widely known mechanical calculator.
The first of these, which the inventor called the Pascaline, was able only to add five digits at a time.
Later, Pascal enlarged the mechanism to handle eight digits and actually built and sold several of
them. Hand-powered, Pascal’s calculators over-reached the capabilities of 17th Century technology.
At the time they were regarded as unreliable; notwithstanding, several proved robust enough to
survive till the present day.
Nearly two centuries passed before manufacturing technology caught up with the concept of
mechanical calculations. The Arithmometer, invented in 1820 by the Frenchman Charles Xavier
Thomas de Colmar (1785-1870), is generally regarded as the first commercially successful
mechanical calculator.
While the mathematicians were tinkering away with their impractical calculators, an independent
group of inventors tangled with what today we regard as an immensely more complicated problem:
programming. This work, too, was based on a fundamental human drive: getting someone or (better
still) something to do the work that you find too boring, too tiring, or too much trouble to bother with.
One solution was long known. When the work is simply methodical, you can teach a servant, slave,
serf, or graduate assistant to carry it out for you. Better still is to give the task to a machine that won’t
complain, come in late for work, or run away and start a competing business.
Most boring jobs can be broken down into a series of simple steps. With a little incentive, say the
promise of an evening meal or a long enough whip, a servant can be taught to carry out each step in
turn. Inventors saw that machines, dim-witted as they were at the time, could be made to master the
same step-by-step tasks. The steps taken together make up what today we call a program.
The origins of the idea of a program of clearly defined steps for carrying out a complex task are lost in
obscurity. But as early as the 16th Century, a form of programming had been developed and
successfully commercialized, one that not only worked but sounded good. Music boxes were probably
the first truly programmed machines. Using an arrangement of pegs on a rotating drum, the music box
then (and now) can select the proper notes to play in a long sequence to make recognizable (or nearly
so) melodies.
By 1725, the French inventor Basile Bouchon came up with the idea of using a similar method for
programming a weaving machine. The idea was perfected when it was combined with Edmund
Cartwright’s 1785 power loom and put to work in 1804 by Joseph Marie Jacquard. The final creation
is generally regarded as the first truly programmable machine, the Jacquard loom. Using a belt of
linked punched cards, the Jacquard loom could be programmed to produce any of an infinite variety
of woven patterns in cloth. This technology is still used today.
These two streams of development, mechanical calculating and programming, came together in the
work of Charles Babbage, who designed what many regard as the first computer, the Analytical
Engine. Babbage believed that the most complex mathematical problems of the time could be solved
by his machines. After all, the questions to which he sought answers required nothing but repetitive,
mechanical calculations carried out again and again, for example, finding square roots by repeated
approximation using division and multiplication—the same monotonous calculating that had inspired
inventors over the ages.

Babbage first created his own mechanical calculator, Difference Engine Number 1, in 1822. He
started work on a more ambitious calculator (Difference Engine Number 2) but never built the
machine. Instead he abandoned that project in 1834 in favor of a more flexible, versatile design that
could tackle more complex problems. His insight was to combine his mechanical calculators with the
programming facility of the Jacquard loom.
Many of Babbage’s ideas foreshadowed the work of other inventors. For example, Babbage imagined
using punched cards to create a machine to step through complex calculations at the turn of a crank.
This idea, an offshoot of the Jacquard loom, would later independently occur to other inventors.
Herman Hollerith, for one, used punched cards for his tabulating machine in 1890. Hollerith himself
is notable because the company he founded merged with two others to become the
Computing-Tabulating-Recording Company. In 1924, that company changed its name to International
Business Machines.
Babbage anticipated many of the concepts embodied in modern computers, including the capability to
jump from one point in a program to another. He even added a printer so that his Analytical Engine
could crank out (literally—it was to be powered by a hand crank) answers as hard copy.
His design called for a collection of cams, gears, and pulleys more complex than any machine ever
before created. In fact, the Analytical Engine proved too complex and never got off paper. Although a
few others were inspired by Babbage’s work, those who built the first successful mechanical
computer were unaware of his work. For the most part, Babbage resurfaced only after the success of
electronic computers caused historians to poke into the darker crannies of the 19th Century.
Perhaps the most notable direct influence of Babbage is an interesting footnote in the history of
computing. Babbage inspired Augusta Ada Byron, Countess of Lovelace (more commonly addressed
as Lady Lovelace), to develop a nearly complete program that would have enabled the Analytical
Engine to compute Bernoulli numbers. For this work (and being first to describe what
today are regarded as program subroutines) Lady Lovelace is regarded as the first computer
programmer. The short-lived Ada programming language of 1979 was named after her.
Nearly a century passed before technology had once again caught up with the ideas of the theorists. In
1937, work began on what is regarded as the first (and last) truly successful mechanical computer, the
Automatic Sequence Controlled Calculator. Credited to Howard Aiken, this machine is most often
called Harvard Mark I because of Aiken’s affiliation with Harvard University, although it was
actually financed by IBM and built at IBM Development Laboratories in Endicott, New York.
Essentially completed in 1943, the Mark I first performed calculations in August 1944, using rotating
shafts, gears, and cams like the Analytical Engine. The differences reflected only changes in
technology, not the underlying concepts. The Harvard Mark I was electrically powered instead of
hand-cranked, and it was programmed using punched paper tape instead of punched cards. The
working machine could add two numbers in 0.3 seconds (the time it took its shafts to rotate once
around, that is, 200 RPM) and was capable of multiplying two 23-digit numbers in about six seconds.
Another mechanical program-controlled calculator, the Z3, was built by Konrad Zuse. Operational
before Harvard Mark I in 1941, its existence was not known outside Germany until after World War
II, so it does not figure in the mainstream development of computers. Zuse’s earlier efforts, the Z1 in
1938 and Z2 in 1940, were unsuccessful.

Relay Computers

Even in the 1940s the primary speed limit on these rudimentary computers was mechanical, so
developers looked to other technologies to build their computers. Bell Telephone Laboratories began
work on relay-based computers in 1938. A relay is an electrically controlled switch: current in one
circuit energizes an electromagnet that operates a switch, which in turn alters the electrical flow
in another circuit. Relays are a hybrid, electro-mechanical technology. Their mechanical side
performs physical work while their electrical nature makes them very flexible. One relay can control
others almost unlimited in number and distance. The gears and levers of purely mechanical calculators
are limited in reach in both regards.
The choice of relay technology was a natural one for the telephone company. After all, the telephone
switching systems of the time made extensive use of relays—rooms and rooms filled with them.
Bell Labs’ first successful machine, Bell Model V, began work at the end of 1946. Although it was no
faster than Harvard Mark I at addition, multiplication took the Bell machine only one second.
The speed of relay technology also intrigued Aiken, and in September 1948, he had his own
relay-based machine, Harvard Mark II, operating. By 1950, several other relay-based machines were
running in Europe.

Electronic Computers

Early in the development of the computer, designers recognized the speed advantages of an
all-electronic machine. After all, electronic signals could switch thousands or millions of times faster
than mechanical cams or electrical relays.
Several inventors made initial stabs at the challenge of electronic computers. John Atanasoff and
Clifford Berry designed an electronic digital calculator at Iowa State College in 1938 but abandoned
their work in 1942. They had the arithmetic unit of their machine working successfully but had not
completed work on its input/output unit. In Germany, Zuse proposed a vacuum tube-based computer
in 1939, but its design was rejected by the Nationalist-Socialist government.
The first successful electronic machine was secretly developed as part of the British cryptoanalysis
program at Bletchley Park during World War II. There, T. H. Flowers created an electronic machine
known as Colossus for comparing cipher texts. Colossus, first tested in December 1943, pioneered the
concept of electronic clocked logic (with a clock speed of 0.005 MHz) and used 1,500 vacuum tubes.
Although Colossus was a programmable machine, neither it nor the succeeding generations of
cryptographic machines developed at Bletchley Park were designed to handle decimal multiplication.
Moreover, the development of Colossus and its kin was kept secret until long after the war, so it did
not in itself contribute to the development of the computer. In fact, many details of the Bletchley Park
operation are still secret forty years later.
The most notable contributions to computing came from another member of the Bletchley Park
operation, Alan M. Turing. Although Turing was not one of the principal developers of Colossus, his theoretical
work explored the limits of what a computer can do. He conceptualized a mechanism now called a
Turing Machine, the ultimate reductionist computer that broke the task of computing into the most
elemental steps. Given enough time, the Turing Machine could compute anything computable; by
definition, anything it could not compute was not computable. Turing also explored the realm of
artificial intelligence. His Turing Test is still regarded as the measure of success of an
artificial intelligence system: the test requires that the responses of a computer be indistinguishable
from those of a human given any set of questions.
The seminal machine in the history of the electronic computer is generally regarded as ENIAC, the
Electronic Numerical Integrator and Computer, developed at the Moore School of the University of
Pennsylvania in Philadelphia by a team led by John Mauchly and J. Presper Eckert, Jr. Proposed in
1943, it was officially inaugurated in February 1946. The most complex vacuum tube machine ever
built, ENIAC occupied a 30 by 50 foot room (at 1,500 square feet, that’s the size of a small house),
weighed 30 tons, and required about 200 kilowatts of electricity. It used 18,000 vacuum tubes and was
based on a clocked logic design.
When operating at its design clock speed of 0.1 MHz, ENIAC required a mere 200 microseconds for
addition, and 2.6 milliseconds for multiplication. At about 5,000 arithmetic operations per second, it
was approximately 1,000 times faster than the Harvard Mark I.
The design goal of ENIAC was to calculate ballistic trajectories, and the machine succeeded well. It
was able to compute the path of a 16-inch artillery shell in less than real time—that is, it could predict
about where a shell would fall after it was fired but before it hit.
In late 1946, ENIAC was disassembled, moved to Aberdeen Proving Ground in Maryland, and
reactivated. It served there until October 2, 1955. Portions of ENIAC survive in the Smithsonian
Institution.
The next step in the development of the computer and PC was EDVAC, the Electronic Discrete
Variable Automatic Computer. Unlike the decimal-based ENIAC, EDVAC was designed as a binary
computer. Information to EDVAC was encoded in its most essential form—the presence or absence
of a code symbol—which could be represented by a voltage. This binary basis is the essence of
today’s digital logic, upon which nearly all current computers are based.
EDVAC was also the first stored program computer; it held its binary instructions in memory exactly
as it stored its binary data, a concept based on the ideas of John von Neumann, one of its developers.
The new EDVAC design resulted in vacuum tube economy. Only 4,000 tubes (and 10,000 crystal
diodes) were required to build it. It was delivered in 1949 to the Ballistic Research Laboratories at
Aberdeen Proving Ground and became operational in 1951. It remained in service until December
1962.
During the development of EDVAC, Eckert and Mauchly of ENIAC fame left the Moore School to
form the Eckert-Mauchly Computer Corporation in 1948. There they developed UNIVAC, the Universal
Automatic Computer, the first truly commercial electronic computing machine. After the first
UNIVAC was completed in 1951, another 45 were made
throughout the next seven years. (Eckert-Mauchly Computer Corporation was acquired by Remington
Rand Inc., which merged with Sperry Corporation to form Sperry Rand Corporation in 1955. Sperry
later merged with Burroughs Corporation to form Unisys.)
UNIVAC had an internal clock rate of 2.25 MHz and about 12K of RAM in the form of mercury
delay lines. Using its 5000 tubes, UNIVAC could add or subtract in 0.525 milliseconds and multiply
in 2.15 milliseconds.
Also in 1951, Ferranti Ltd., of Manchester, England, developed an electronic computer that some
sources credit as the first commercial computer to be sold. Calling its creation the Mark 1, Ferranti
went on to build eight of the machines.
With UNIVAC, the basic operating principles of the computer were in place. Further developments
have come in the refinement of the technology used to make computer circuits. Switching from tubes
to transistors increased reliability and allowed designs to become both more complex (mainframes)
and smaller (minicomputers). Memory shifted from mercury delay lines (which briefly stored data as
ultrasonic pulses propagating through tubes of liquid mercury) and cathode ray tubes (which
temporarily stored data bits as the visible afterglow on a section of a tube much like a television
picture tube) to magnetic core and finally solid-state transistors. Integrated circuits continued this
trend and made possible microprocessors and RAM chips, which, in turn, led to the circuits that
formed the basis of the first personal computers.
IBM entered the computer market in 1952 with a machine it called the "Defense Calculator." Later
renamed the 701, the machine proved capable of about 2200 multiplications per second. A total of 19
machines were eventually sold, the first delivered in March 1953.

Software Evolution

The first giant computers were by no means personal, nor were any of the machines of the 1950s and
60s. You couldn’t put one on your desk unless your desk was large enough to shame even the most
profligate CEO. You likely couldn’t afford the price tag that reached into the millions. Moreover, you
would need a small squad to keep the blessed things running—someone to load tapes, another to type
commands at the console, several to wring their hands when something went wrong and half a day’s
work evaporated into a glitch. In fact, the idea of a personal computer was foreign to the people who
worked on these machines. And the software; well, most of it was homegrown and designed for
specific business chores such as billing you for your electrical use. Few people had use for a machine
that did its best work mailing monthly bills to a million utility customers.

Personal Information Managers

But even as vacuum tubes were straining the power grid, the ideas behind the personal computer were
simmering on the back burner in computer science classes. In 1945, Atlantic Monthly magazine ran
an article, "As We May Think," by eminent M.I.T. researcher Vannevar Bush in which he described his vision of the
computer of the future. It wasn’t a personal computer, because that word hadn’t yet been coined. Bush
instead made up the term "memex" to identify his vision of a machine that would store the records and
correspondence of a person and be able to retrieve any information with lightning speed. The memex
would be an enhancement to the human memory, which Bush saw as becoming unable to keep up
with the vast outpouring of information in nearly every field.
In Bush’s mind, the memex would be controlled by a keyboard, knobs, and levers and would use
photographic processes like microfilm for storing information. But the machine would not simply
store information; it would keep it indexed, link related items together, and even allow annotation. The
machine would not fit on a desktop, but take the place of the desk itself.

Bush’s conception of the computer of the future inspired Douglas C. Engelbart at Stanford Research
Institute to explore the potentials of information technology. Engelbart saw the computer as more than
memory. Besides augmenting a single individual, it could link many people together. The computer
could be the basis of a communication system to the extent of forming a workgroup. In his
explorations, Engelbart’s group invented many concepts now commonplace in PCs, among them
What-You-See-Is-What-You-Get (WYSIWYG) word processing, onscreen windows to look into
multiple applications running simultaneously or multiple messages in a communication session,
electronic meeting rooms, and the mouse as a pointing device to control everything. Engelbart
published his ideas as early as October 1962, but again his ideas outpaced the technology of the time.

Interactive Computing

The computer first became interactive at M.I.T. in the 1950s. Jay Forrester developed a computer he
called Whirlwind, which was able to process telemetry data in real time and interact with an operator
running the machine at a console. The entire machine would be devoted to a single task under the
control of a single individual.
Only at the university could such single-user interactive computing work. In the real world, the
expensive computer could not be squandered on a single person. But interactive computing could be
brought into the realm of affordability by dividing the time of a large computer among several people.
This concept, now termed "time-sharing," was first described in 1959 by Christopher Strachey, who
saw the potential of a single large computer acting as several smaller (slower), separate machines for
individual users. He realized that to make this vision possible, he needed new operating mechanisms
that included prioritized interrupts so that individual users (or their applications) could gain immediate
(if brief) system control when their job needed immediate attention, and memory protection to keep
the work of multiple operators (multiple computing sessions or tasks) from interfering with one
another. These concepts are a fundamental part of all of today’s personal computers. The first
application of these concepts was the Semi-Automatic Ground Environment (SAGE) air defense system,
a project that evolved out of M.I.T.’s Whirlwind system.
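Strachey's idea of dividing one computer's time among several users can be illustrated with a toy round-robin scheduler (the names and structure here are a modern sketch, not a model of any historical system):

```python
from collections import deque

def time_share(jobs):
    """Toy round-robin time-sharing: each 'user job' is a generator;
    the scheduler gives each one a brief slice in turn, so a single
    machine appears to serve several users at once."""
    ready = deque(jobs)
    trace = []                        # which job ran in each time slice
    while ready:
        job = ready.popleft()
        try:
            trace.append(next(job))   # run one slice of this job
        except StopIteration:
            continue                  # job finished; drop it
        ready.append(job)             # unfinished: back of the queue
    return trace

def user(name, slices):
    for _ in range(slices):
        yield name

print(time_share([user("A", 2), user("B", 2)]))  # ['A', 'B', 'A', 'B']
```

Real time-sharing adds exactly what Strachey called for: a clock interrupt to end each slice and memory protection to keep one user's job from corrupting another's.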

Graphic User Interface

The first computers were about as accessible as a medieval alchemical text. They responded only to
programs that resembled mathematical formulas more than the product of a normal human mind. The
first step on the long road to the graphic user interface used by today’s most powerful personal
computer operating systems was a demonstration program called Sketchpad, first formally described
by its creator, Ivan Sutherland, in 1963. Sketchpad enabled the user to push a lightpen across a
monitor screen to make engineering drawings. Sketchpad introduced many of the concepts basic to
today’s most powerful software including windows and the graphic cursor.

Workgroup Computing

The groundwork for computer cooperation was laid in the first time-sharing systems. In these systems,
the users exchanged data in the sense that by sharing a single computer they were all connected to the
same system and resources. The backbone of today’s world-wide computer linkup was first organized
in 1969 by the Advanced Research Projects Agency of the Department of Defense (ARPA). Initially,
four systems were linked together to form a network that was termed ARPAnet. In the late 1980s, the
system was renamed Internet, in part because it had by then already reached far beyond the Defense
Department.
Workgroup computing can trace its beginnings to Xerox Corporation’s Palo Alto Research Center
(PARC), where researchers developed an experimental single-user computer called Alto. To tie
together the individual systems so that they could cooperate, the systems were linked into an Ethernet
network. Some histories list the Alto as the first true personal computer, but it lacked the accessibility
that characterizes the modern machine. The honor of being first must go to a more personal effort.

Personal Computers

By the middle of the 1970s, most of the software concepts underlying what would become the
personal computer had been developed. What the world lacked was an affordable means of bringing
that software to life. Just as the ideas in Babbage’s Analytical Engine outpaced the available
technology, the software concepts of the 1950s and 1960s far exceeded the capability of hardware
accessible to the people for whom it was designed.
The necessary hardware breakthrough was the microprocessor. Originally designed for hand held
calculators (see Chapter 3, "Microprocessors"), the microprocessor was a fully programmable
electronic circuit. Its programmability, however, was limited to hardware engineers who took
advantage of it to quickly build machines more modest than computers. They used programmability
as a design tool that saved much of the effort at designing electrical circuits. Complex electrical logic
designs could be reduced to microprocessor programs hard-wired into read-only memory (now more
familiar as ROM).
The first microprocessor, developed by Intel Corporation in 1971, was designed to be no more than
the brain of hand held calculators. The first true single-board microcomputer, the IMP-16C made by
National Semiconductor in 1973, was designed for use in industrial control applications such as operating
machine tools.
The creation of a true personal computer was delayed because microprocessors lacked a convenient
way of being programmed. In general, programmers used a large computer as a development system
to emulate the microprocessor and write programs step-by-step while monitoring their progress. Once
all the code was finished, it would be recorded into read-only memory, packaged with the
microprocessor, and sold as a unit along with all the peripherals the machine was to use.
This situation arose from necessity. Programming languages required huge amounts of memory, and
microcomputers had little. The IMP-16C, for example, had 256 bytes of RAM and 512 bytes of ROM.
A couple of lines of mainframe computer program source code would fill the entire memory of the
machine, leaving no room for compiling or running the program.

Altair

In late 1974, Ed Roberts brought together all the elements necessary for launching the personal
computer. Roberts worked in the laser division of the Air Force weapons lab in Albuquerque, New
Mexico. Together with some partners, he founded a company called Micro Instrumentation Telemetry
Systems, now better known by its acronym, MITS. The original intent of the company was to sell
model rocketry equipment. Roberts, however, believed he could do better selling a calculator kit, so
he bought out his partners. The product he designed earned fame as a feature on the cover of Popular
Electronics, a magazine read by electronic hobbyists and experimenters, and proved successful for a
while. Then an influx of mass-produced calculators from across the Pacific ignited price wars that
made the MITS product unprofitable.
Roberts decided to take the next step; he would make a true computer. Using a $65,000 loan, he
developed what he called the PE-8. He also arranged for Popular Electronics to run a story about the
machine (which appeared on the cover of the January 1975, issue). The magazine was anxious for a
computer story, having been scooped by Radio Electronics magazine, which had run a construction
project story about the Mark-8 computer designed by Jonathan Titus around Intel’s 8008
microprocessor.
The editors of Popular Electronics renamed the Roberts’ PE-8 something they thought catchier, the
Altair 8080. According to legend, a daughter of one of the magazine’s editors suggested Altair, which
was the name of a planet on an early episode of the television program Star Trek. The 8080 was the
model designation of the Intel microprocessor that powered it.
The original Altair 8080 cost $397 in kit form and included an attractive (at least to the hobbyist’s
eye) painted tin box with front panel switches and lights (to impress the nonbelievers and incidentally
program the thing) as well as 256 bytes of memory.
Besides being a challenge to build, the Altair was a misery to operate. It had no other storage except
its RAM, so everything you did was lost as soon as you switched it off. Worse than the lack of storage
was the shortage of software. The Altair came with none, not even a programming language.
This omission was obvious even to the most dedicated computer hobbyists. Most put up with the
shortfall. They were happy just to have a computer with which to experiment. Some even got a thrill
from the challenge (or ordeal) of stringing machine language numbers together. But two teenagers
from Washington state, Paul Allen and William "Bill" Gates, had a better idea: a compact version of
the BASIC programming language that would actually run on the minimal resources of the Altair.
They offered MITS their language, and Roberts gladly accepted, not knowing that their version of
BASIC was closer to conception than completion. The twosome moved to Albuquerque, and Allen
became director of software for MITS. Later, they formed their own software company that proved
even more successful than the Altair. They called it Microsoft.
Not only is the Altair considered the first PC, it also spawned the first expansion bus standard (the
Altair bus, which evolved into the S-100 standard) and the first compatible computer or clone, the
Imsai made by IMS Associates. This and other machines based on the Altair design proved useful for
hobbyists and, when coupled with a standardized operating system, the first small business computers.
The Altair itself, however, faded from the scene after Roberts sold MITS to a company called Pertec
in May 1977, leaving the electronics industry for medicine. Not long afterward, Pertec abandoned the
Altair name and, eventually, the PC industry.

CP/M

Although the original Altair was too limited to take advantage of the operating system that eventually
brought small computers into small businesses, other machines using the same microprocessor and
basic bus design were not. This operating system, called CP/M (Control Program for
Microcomputers), was developed by Gary Kildall. CP/M linked the very popular and powerful 8080
and Z80 microprocessors and up to 64K of memory with floppy disk drives for mass storage. The new
operating system led Kildall to launch his own company, initially called Intergalactic Digital
Research, which he started in his home in 1974. The corporate name soon was shortened to Digital
Research.
The combination of microprocessor and operating system yielded enough power to handle many
business chores, from word processing to bookkeeping. It was exactly what was needed in business.
Consequently, CP/M computers emerged as the business standard among desktop machines. In the
early 1980s, more business-oriented software—which often consisted of little more than a few dozen
lines of BASIC code—was available for CP/M than any other computer operating environment.
CP/M survived into the PC age, and a few holdouts still use it—just as a few men still lather their
faces with a brush and mug before shaving.
At one time IBM offered a version of CP/M for the PC called CP/M 86. Note, however, that programs
written for CP/M and CP/M 86 are not compatible.
Legend holds that, in 1981 when IBM was looking for an operating system for its new PC, CP/M 86
was the company’s first choice. When IBM’s negotiators went to California to arrange a deal,
however, Digital Research head Kildall was unavailable, so the IBM negotiators instead flew to
Washington state and licensed MS-DOS (Microsoft Disk Operating System) from Microsoft.
Depending on the version of the legend you believe, Kildall was racing his Maserati, flying his plane,
or off in the Orient. Kildall, who passed away in 1994, liked to keep the tale shrouded in mystery.
However, at one time he admitted he actually took the interview with IBM while pressed for time
before a two week vacation. The negotiations were chilly, noted one meeting participant, because
Kildall arrived late and IBM would not sign a reciprocal nondisclosure agreement with Digital
Research. Nevertheless, after the meeting Kildall believed he had struck a deal with IBM, only to find
out later that the company had agreed to a license with Microsoft. Although IBM offered both MS
DOS and CP/M 86, it priced the Microsoft product $200 cheaper ($40 for MS DOS versus $240 for
CP/M), which gave MS DOS the foothold that eventually led to its industry dominance.
The Microsoft that IBM visited in lieu of Kildall was the same upstart company that developed the
BASIC language for the Altair. (The company moved from New Mexico to Washington in part, legend
holds, to avoid speeding tickets liberally acquired by one founder.) Microsoft actually had bought
rights to the original MS-DOS from Seattle Computer Products—after striking the deal with IBM—although
it later developed and improved on the original.
The price difference as well as a slightly easier-to-use syntax helped MS DOS quickly develop a huge
base of applications. In a few years the Microsoft operating system had left CP/M 86 in the dust.
Nevertheless, CP/M persevered and evolved into one of the first multitasking systems for PCs:
Concurrent DOS.

Digital Research much later developed an operating system compatible with standard MS-DOS called
DR DOS (Digital Research DOS, but often called Doctor DOS). The first DR DOS, Version 3.3, was
released in May 1988, followed by two revisions (Version 3.40, released in January 1989, and
Version 3.41 in June 1989). DR DOS 5.0 was introduced in 1990, followed by DR DOS 6.0 in 1991.
In 1992, Digital Research was acquired by Novell, and the offspring of the DR DOS product was
offered as Novell DOS.

Apple Computer

An alternative to the bus-oriented Altair design was the single-board microcomputer like the
IMP-16C. Miniaturization made it possible to put a small computer comprised of a microprocessor,
memory, and support circuitry on a single (though large) circuit board. Such a single-board design is
economical because it saves the expense of the bus connector and redundant circuitry on supplemental
circuit boards.
Two hobbyists, Steve Jobs and Steve Wozniak, experimented with this approach and, in 1976, built
boards they called the Apple Computer. But, even back then, the market for products aimed for people
born with soldering irons in hand was limited, and the original Apple circuit board computer is now
regarded as a curiosity, an interesting antique for computer collectors.
But the next attempt by Jobs and Wozniak proved a hit. In 1977, the twosome combined an innovative
ready-made computer and professional marketing. The result was the Apple II, the longest lived of all
small computer models. The Apple II blazed a path as the best of both worlds, combining a single
board for consistency, efficiency, and economy with a dedicated expansion bus into which accessories
(and some necessities) could be attached.
The Apple II was based on a single microprocessor and was a single-board computer because
everything needed to make it work (at least in the most rudimentary way) was built onto a single
glass-epoxy printed circuit board. Its expansion bus provided a way of connecting additional printed
circuit boards almost directly to the microprocessor. Even the keyboard was combined into the
attractively designed plastic case that housed all the electronics—a simple, practical, and cost
effective approach.
The central processing unit of the Apple II was its microprocessor, the 6502 made by MOS Technology. At the
time, this was a respectable chip choice. It could perform eight-bit calculations at an operating speed
of about one million cycles per second (one megahertz).
Compared to the personal computers of today, the Apple II was rudimentary. The straightforward
original design of the Apple II made no provision for lowercase letters, could put only 40 columns of
text across the screen, and could be bought with as little as 8K of memory. For more permanent
storage, it could route data from its electronic memory onto magnetic tape using a conventional audio
cassette machine. Compared to what came before, however, it was groundbreaking. You could buy an
Apple II, pull it from its box, plug it in, and have a working computer. Previous small computers
universally required at least a moderate degree of technical knowledge, a great deal of patience to
withstand the tedious process of assembling parts not necessarily meant to work together, and an
overriding faith that they would, in fact, work.
Later, Apple added features to bring the Apple II up to par with other PCs, including lowercase
characters in 80 columns, bitmapped graphics, and disk storage controlled by Apple DOS. But in the
early 1980s Apple’s development attention shifted to the Macintosh, a more powerful architecture
based on the Motorola MC68000 microprocessor. The Macintosh was introduced in January 1984.
The original Apple II design was adapted through several models, which later found their primary
application in elementary schools. The last models in the Apple II product line were discontinued in
1993.

Commodore International

The first large manufacturer to announce a personal computer was Commodore International. Its first
effort was the PET, announced in 1977. Designed around the 6502 microprocessor as a business
machine, it had all the hardware characteristics of a modern PC, including expansion slots, a dedicated
monitor, and floppy disk drives. Its software, however, was proprietary. Despite its early entry to the
world of personal computing, Commodore was unable to establish its PET as a standard, and the line
faded from the scene after 1981.
In 1994, Commodore International itself went into liquidation. In May 1995, the name was
resurrected when the German company Escom AG bought the Commodore name, patents, and
intellectual property rights for $10 million in a bankruptcy auction. After making the purchase, the
company announced that it would manufacture PCs with both the Commodore and Amiga brands.

Tandy/Radio Shack

The second pre-PC small computer design camp rallied under the Radio Shack flag. The familiar
corner store vendor of everything from batteries and toys to watches and telephones added small
computers to its wide range of offerings by producing a number of machines based on different
technologies, microprocessors, and operating systems.
The first machine offered was the TRS-80, which earned its name from its Z80 microprocessor (rather
than the year). The TRS-80 was a desktop computer that combined monitor, keyboard, and electronics
into a single silver-gray plastic box that was styled to make Buck Rogers feel homesick. Both cassette
and floppy disk storage was available, the latter using TRS-DOS (widely known as Trash DOS to
both its friends and detractors).
When the success of IBM’s PC pushed the TRS-80 line out of the limelight, Tandy slowly adapted to
the challenge. It first offered a machine that ran MS-DOS but which was not compatible with the PC
(the Tandy 2000 in 1983). Then, Tandy began building successful PC-compatibles under the Tandy
name. As part of corporate restructuring, Tandy sold its computer manufacturing operation to AST
Research in 1993.

IBM

This book would not exist, nor would the personal computer industry in its present form, were it not
for a number of seemingly arbitrary but, at heart, practical decisions made at the IBM Corporation
Entry Systems Division in Boca Raton, Florida, just as the 1980s were dawning. The culmination of
that decision-making came on August 12, 1981, with the introduction of the IBM PC.
Being a market-driven company, IBM created the PC in response to customers who wanted small,
standalone machines instead of terminals wired to mainframes. CP/M machines were proving
effective in small businesses, and larger companies and governmental organizations were looking for
an alternative from IBM.
Far from idealists intent on starting a revolution, IBM’s engineers created the PC as a mundane
machine with appeal for a limited number of people. Some sources inside IBM have stated that
expectations were that about 100,000 Personal Computers would be sold, machines that would appeal
primarily to hobbyists intent on exploring what computers could do (but who, of course, would
probably never do anything useful on the electronic curiosities; after all, useful computing work
remained in the realm of the mainframe computer). At the time, hobbyists were already exploring
programming with other small computers. The IBM Personal Computer was seen as just another of
these—perhaps a toehold in the hobbyist market, perhaps an exploration into a new technological
area, or perhaps just something that some anxious engineers at IBM wanted to play with (with official
sanction, of course). IBM figured that while its traditional business customer base of large
corporations might toy with a few small, dedicated computers, the shortcomings of the machines
would become obvious.
IBM was as surprised as the rest of the world when even in the initial months of its release, demand
for the PC far outran supply, resulting in shortages and an unbelievable windfall to authorized IBM
dealers who found that a little silicon could be worth its weight in gold.
The true motivations and design decisions underlying that first PC are forever the secret of IBM. The
best guess that can be made is that the success of the PC stemmed from equal measures of serendipity
and hard-nosed bottom-line-oriented decision-making. IBM wanted to cash in on the success that
small computers were having among hobbyists and, increasingly, small businesses. The desktop
computer presented a tremendous opportunity, an opportunity that IBM did not want to miss, as it had
with minicomputers. (Most industry analysts attribute the astounding success of Digital Equipment
Corporation (DEC) in the 1970s and 1980s to IBM’s failure to move into the minicomputer field fast
enough.)
Once you understand the IBM market perspective, you can readily appreciate its design choices. It
was not a project for extensive research and development but one better suited to a quick, minimal
investment approach. Moreover, the goal was simply to create the machine without aiming at a
particular end or purpose. In retrospect, that almost accidental element of the Personal Computer
design may have been IBM’s masterstroke. It allowed the simple creation to grow into a variety of
fields, to serve many masters, to be a true general purpose computing instrument.
Developing its own machine was not quite so far-fetched. IBM had already made small computers in
the guise of its transportable (by a stretch of the imagination and arm) Model 5100. Built without
benefit of such innovations as miniature (5.25-inch) floppy disk drives, the 5100 primarily found use
inside IBM but never did well as a commercial product.
To create IBM’s first true desktop machine, the company’s engineers—with the aid of an outside
consulting firm—carefully pruned and grafted the ideas embodied in other small computers on the
market with a scattering of minor changes and innovations to make their new product stand out. The
engineers put together a computer mostly made from parts and components crafted by other
manufacturers so that if the product misfired, losses would not be great, and IBM could go on to other
products in its bread-and-butter mainframe line. They borrowed design concepts from the machines
that hobbyists were toying with, engineered around semiconductor parts widely available on the
market that required no exotic proprietary design work, and exploited an operating system based on
the most popular of those used in small business computers. The motherboard was based on an Intel
application note for its microprocessors. The expansion bus borrowed from Intel’s Multibus design.
The video system was based on another application note, this one for a Motorola video controller. The
keyboard drew upon the designs of IBM’s Selectric typewriter and Displaywriter word processor.
There could be no doubt the IBM machine would be based on a microprocessor. The smart chips were
what had originally made small computers practical and the industry possible. The question was
which chip to use.
IBM chose Intel’s 8088. The choice was a compromise between performance, cost, compatibility, and
marketability. Because the 8088 had 16-bit internal registers, IBM could (and did) market its PC as a
16-bit computer, more powerful than the older 8-bit Apple, CP/M, and Radio Shack machines.
Although IBM could have made the PC a true 16-bit machine (16-bit microprocessors were available
even before the 8088 had been offered), the company had one good reason for foregoing full 16-bit
power: cost. In the early 1980s, the price of microprocessor support chips and memory was much
higher than today, and 16-bit components were substantially more expensive than the 8-bit variety.
Because the PC was built without a true idea of what the machine would be used for, IBM hedged its
bets. It allowed for 64K of memory in the system—the same capability as Apple and CP/M
machines—but took advantage of the 8088 and allowed for adding up to 448K using expansion
boards. (A second model pushed total capacity to 640K.) In addition to floppy disk drives (each
floppy held 160K, about double that of other machines), IBM also hedged by including a cassette port as part
of the first PC. Instead of buying a $500 floppy disk drive, you could use your $20 portable tape
recorder to record programs and data and exchange files with your friends. But the keyboard and
monitor quality of the PC was on a par with IBM’s business machines so that the PC could do real
work.
IBM steadily added power to its PC, but was careful not to add too much so that its traditional
customers wouldn’t abandon the profitable mainframe and minicomputers in favor of PCs. First IBM
added a hard disk drive as standard equipment to the PC, creating the XT in 1982. In 1984, IBM
attempted to broaden the PC market by adding a low-end home machine, the PCjr, and a high-end
286-based powerhouse, the AT. The PCjr proved to be an expensive failure. The AT machine set the
architectural standard for more than a decade.
In 1987, IBM’s PC influence and stock price peaked almost concurrently with what the company
hoped would set the standard for the next generation of personal computers, the PS/2. Instead of being
the new industry standard, however, the PS/2 line became a mere design alternative. A PC-compatible
industry had arisen, and it was steadily eating into IBM’s share of the computer market. Few
compatible makers chose to follow the new standard.

Compatible PCs

The PC gave the world what it had been waiting for: a standard for personal computers.
Many large companies launched their own PCs. Some (for example, the Xerox 820) relied on proven
CP/M. Some pursued MS-DOS. A few (for example, the DEC Rainbow 100) tried it both ways. Their
goal was to achieve the same success as IBM, and with true corporate hubris, they tried to set their
own standards, all of which have been left by the wayside.
Smaller, often start-up, companies chose to cash in on IBM’s success rather than challenge it. They
immediately started to create their own products that would match the IBM standard, hoping to
duplicate the fortunes of plug-compatible companies that copied IBM’s mainframe products. (These
companies made mainframe computers that would "plug-in" in place of IBM’s own machines,
identical in operation and able to run the same software.)
At first, the going was slow as these small companies tried to figure out exactly what was necessary
for compatibility. The first few attempts at building compatibles (Columbia Data Systems and
Corona) didn’t go far enough. These machines were supposedly software-compatible, but they didn’t
duplicate IBM’s complete hardware design. Unfortunately, many programmers didn’t follow IBM’s
rules, and they expected all computers to be built identically to the PC. Their software didn’t work on
these first compatibles, and the machines failed to win acceptance.
The first company to truly duplicate both the PC hardware and the firmware—the programs stored in
read-only memory that gave the machine its electronic identity (see Chapter 5, "The BIOS")—was
Compaq Computer Corporation. Besides running all IBM software, the new Compaq computer had a
gimmick: it was portable. At least it had a handle and was entirely self-contained, despite its 40-pound
bulk. In itself, portability was not revolutionary (Osborne and Kaypro had earlier offered similar
portables based on CP/M), but the combination with PC-compatibility proved a winner.
IBM published the essential blueprint of the PC, a complete schematic diagram, so the hardware of
the system was no challenge for designers. It also published its essential-for-compatibility BIOS. But
the BIOS was copyrighted and couldn’t be copied. Compaq led the way by writing a compatible BIOS
without copying. But writing a BIOS was too time-consuming, challenging, and fraught with legal
pitfalls for most start-up companies to afford. Moreover, the results weren’t always completely
compatible with IBM software. But there was no choice because neither IBM nor Compaq would
license their BIOSes.
The breakthrough came when Phoenix Technologies wrote its own very compatible BIOS with the
explicit intention of licensing it to computer makers. Compatibility worries vanished, and a new
industry grew. By 1985, anyone could buy off-the-shelf parts and build a PC compatible. The PC
revolution was complete.

Winn L. Rosch Hardware Bible, Electronic Edition, Appendix B

Appendix B

Regulations
■ Radio Frequency Emission
■ Interference
■ Scope of Regulation
■ FCC Classes
■ Radiation Limits
■ Enforcement
■ Verification and Certification
■ Equipment Design
■ ELF and VLF Radiation
■ MPR Standards
■ TCO Limits
■ Underwriters Laboratories Listing
■ Computer Safety Standards
■ UL Recognition Versus UL Listing

http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/wrhxb.htm (1 de 15) [23/06/2000 07:06:00 p.m.]



Regulations

Radio Frequency Emission

You may not think of your PC as a radio transmitter, but it is. As with all electrical devices, a
computer radiates electromagnetic fields. The frequencies at which your PC operates put these fields
in the range that a radio or television set might pick up. Of course, these radio emissions are not
intentional. They are an unwanted byproduct of a simple physical principle. Any moving electronic
current, including the minuscule logic signals inside your PC, creates an electromagnetic field. If the
current flow starts and stops or changes direction, it induces an electromagnetic field that causes radio
waves to radiate into space. (Unchanging current flows produce steady-state or static fields.)
The unintentional nature of these signals does not matter to the FCC (Federal Communications
Commission). Almost anything that gets into the airwaves is within the jurisdiction of the
Commission. In fact, its oversight of signals starts at frequencies that you could hear if they were
sound waves—9,000 Hertz—and keeps going almost to frequencies you could see as light
waves—300,000,000,000 Hz.
The Commission created a body of rules and regulations that cover signals akin to those emitted by
your computer, and it has Congressional authority to enforce its rules. It can, in fact, determine which
computers can be sold and when—and which will haunt their designers as costly yet stillborn,
unmarketable products. Every personal computer and most computer peripherals sold in the United
States must comply with these rules and regulations.
Nevertheless, few people (including the makers of many PCs and peripheral products) know what
those rules govern; what they are meant to achieve; and why anyone should care. In ignorance or in
defiance of the FCC’s authority, many computer manufacturers offer PCs for sale without regard to
these rules. Selling such computers is illegal in the United States.
For the computer designer who is aware of and obedient to the FCC rules, complying with them is
that last hoop to be leapt through, the final test before his life’s work can nestle on your dealer’s shelf.
The need for certification affects you, too. Because of this need for certification, your access to new
technology is considerably slowed. Any computer product must be ready to be sold before it can be
certified, and certification can take six to eight weeks. Automatically, the latest gear faces a month
and a half or longer delay getting to market.
On the other hand, the good side of certification may seem slight. For example, FCC certification does
not guarantee that a computer product is safe. Health and safety are not the concern of the FCC; a
product that meets FCC standards could, nevertheless, radiate harmful signals or contaminate your
office with arcane poisons. The FCC rules also do not guarantee that a given computer product
absolutely won’t interfere with your radio or television reception. (That’s why instructions for
eliminating interference caused by a computer are included in the manual of properly certified
equipment.) All FCC certification shows is that a particular product does not exceed a given level of
interference with broadcast services, such as television and radio transmissions (including cellular
phones, emergency radio services, and the radio-navigation equipment used by airplanes). Even
though it doesn’t seem like much, achieving this level of protection is something for which you and
your neighbors should be thankful—even if it is often a big headache for computer manufacturers.

Interference

Interference is one of the most important reasons underlying the FCC’s very existence. The
Commission was created in 1934 primarily (but not exclusively) to sort out the mess made by early
broadcasters who, in the 1920s, transmitted signals whenever, wherever, and however they wanted.
As a result, in some places the airwaves became a thick goulash from which no radio could
successfully sort a single program. The FCC was created to bring order to that chaos, and to do so it
created strict rules to prevent interference between radio stations. As other services began to use the
airwaves, the FCC set rules for them, too, always with the same purpose, to prevent signals from
interfering with one another—not to limit what you can hear but to ensure that you can clearly hear
what is there.
Although at first the FCC was interested only in signals meant to be broadcast, the advent of modern
computer equipment operating at high frequencies created a new source of radio interference. The
clock frequencies of computers right now sit in the middle of communications frequencies and are
edging up on the television and FM broadcast bands. (A few older computers operate at frequencies
within the AM broadcast band, but IBM-standard PCs have never stooped so low.)
Potential radio and television interference doesn’t seem like much of a cause for concern. Compared
to the quality of network television programming, interference can be an improvement. When the
FCC took control and created the computer emission standards, however, the situation was more
serious. At the time (the late 1970s), emissions from computer-like equipment were already proving to
be a dangerous if not life-threatening problem. For example, according to the FCC, the police
departments of several Western states reported their radios were receiving interference from
coin-operated video games based on computer-style circuitry. At an East coast airport, interference in
aeronautical communications was traced to the computer-like electronic cash register at a drug store a
mile away. Hobbyist-style computers and hand held calculators were already on the market and were
known to generate spurious radio signals. The Radio Shack TRS-80 was notorious for the television
interference it generated. Even though the personal computer boom of the 1980s could not have been
foreseen, the increased use of high-frequency digital logic circuitry promised that the situation could
only become worse.
In a first attempt to regulate the emissions of personal computers, the FCC developed a special set of
rules for them, enacted in October 1979, as the infamous Subpart J of Part 15 emblazoned on the
certification stickers on millions of PCs sold through 1989. In March 1989, the rules were
rewritten to bring together computers and other equipment that generated similar interference in a
rewritten Part 15 as Subpart B. The new rules apply to all electronic equipment that inadvertently
creates radio signals. The FCC calls this equipment unintentional radiators, as opposed to devices that
intentionally create radio signals for communications or related purposes. Of course, intentional
radiators, from television stations to garage door openers, also are governed by the FCC rules.

Scope of Regulation

The new FCC rules specifically cover personal computers as well as other larger and smaller
computer systems—from mainframes to pocket calculators. In addition, personal computer
peripherals also are included. In fact, most peripherals must undergo the same certification process as
computer systems. The rules explicitly define which peripherals require certification and which do
not.
Peripherals, according to the FCC, include both internal and external devices used to enhance a
personal computer. External devices connected to a PC require their own certification unless they are
sold together with the computer, in which case the PC and peripheral must be certified together.
Internal peripherals need not be certified, provided they do not affect the speed or performance of the
computer and do not connect with external cables.
A serial communications board or a graphics board needs to be certified because it has a connector for
external devices. A turbo upgrade board requires certification because it increases the speed and
radiation potential of the computer. A memory-only expansion board or a hard disk controller does
not require certification.
Computer components ordinarily used only in making a computer at the factory are considered to be
subassemblies and, as such, do not require certification. When subassemblies are united to make a
personal computer that will be sold to end users, the entire computer must be certified.
Cases, motherboards, and power supplies are specifically designated as subassemblies and need
not—and cannot—be FCC certified. As things stand now, a computer motherboard does not require
certification, but when that motherboard is installed in a case with a power supply sold as a personal
computer, the entire assembly must be certified. Several organizations, including IBM, have lobbied
to get motherboards separately certified, but as of this writing the efforts have not been successful.
The rules recognize that the testing apparatus required for compliance with verification is beyond the
reach of the average computer hobbyist—and, implicitly, that the hobbyist may be beyond the reach
of the FCC, if only because the effects of his efforts are so minor. Consequently, the FCC rules allow a
specific exception to the need for certification for home-made personal computers. For this exception
to apply, the home-built PC must meet all three of the following criteria: one, it must not be marketed
or offered for sale; two, it must not be made from a kit; and three, it must be made in quantities of five
or fewer solely for personal use. Commercial computer kits, on the other hand, must be certified by
the FCC.
Some kinds of commercial personal computer equipment also are specifically excluded from the need
for certification under the FCC rules. Low-power devices are unlikely to radiate substantial
interference, so equipment that uses less than six nanowatts (billionths of a watt) in its high-frequency
circuits is specifically excluded. All current microprocessors use far more power than this. For
example, a 50 MHz 486 microprocessor uses about nine watts, about a billion times too much energy
to sneak through the lower barrier of the requirements.

Equipment that operates at very low frequencies does not have to comply with the FCC rules that
cover certification. A frequency of nine kilohertz is the minimum cut-off for the FCC definition of
digital device, so slower (more correctly, glacial) systems need not worry. The effective limit is
actually much higher—devices operating at speeds lower than 1.705 MHz that do not use AC power
also are excluded.
Mice and joysticks are explicitly excluded from the need for certification because they contain no
high frequency circuits and use no high frequency signals. However, a smart mouse with its own
internal microprocessor requires certification.

FCC Classes

By now, only the dead and the demented are unaware that the FCC divides digital devices into two
classes, A and B, with entirely different standards for allowable emissions and testing. The division is
made on the basis of where the equipment is likely to be used. Class A digital devices are those suited
only to business, commercial, and industrial applications. Class B applies to digital devices likely to
be used in the home.
The FCC rules explicitly define personal computers—all personal computers—as Class B equipment.
The rules also define the specific term "personal computer" so that just about anything you might
think of laying your hands on qualifies. What was classed as a "home" computer years ago—for
example, the Commodore 64—is specifically included because the FCC included any computer that
uses a television set as its display device in its personal computer definition. The rules go even farther.
Computers with dedicated display systems, such as the PC that’s probably sitting on your desk, meet
the FCC definition as a personal computer if it has all three of the following characteristics:
■ It was marketed through a retail dealer or direct mail outlet.
■ Advertisements of the equipment are directed toward the general public rather than restricted
to commercial users.
■ The computer operates on battery or 120-volt AC electrical power.
Note that how a particular computer actually is sold does not matter. As long as a particular model has
been offered for sale through a dealer or direct mail outlet, it meets the first requirement.
The definition of Class A equipment implicitly covers mainframe and minicomputers, most of which
use industrial-strength 230 volt power. According to the rules, however, the most important
distinguishing characteristic is that Class A devices are of such nature or cost that they would not or
could not be used at home by individuals. Here the FCC gives manufacturers an out. Manufacturers or
importers can apply to the FCC to have specific personal computers treated as Class A devices
provided the computer is of such a nature—its price or its performance too high—that it is not
suitable for residential or hobbyist use.
No hard and fast rule covers what is too powerful or too expensive to be a computer suitable for use in
the home. One general (but not absolute) guideline used by the FCC is that a base retail price higher
than $5,000 makes a computer more likely to be used in a business setting. Currently, computers
based on Pentium or more powerful microprocessors may be powerful enough to likely earn the
FCC’s approval to be rated as Class A devices. As the power of PCs increases, prices plummet, and
the expectations of home users skyrocket, this guideline likely will shift.
Note that manufacturers cannot simply declare a given computer is a Class A device. They must apply
to the FCC for such a classification—and they must be able to support their claims. The FCC confirms
its classification with a letter of notification.
All portable personal computers are considered to be Class B devices because their very portability
makes them likely to be used in a residential setting. Although Class A portable computers are
theoretically conceivable—for example, a machine dedicated to taking seismological measurements in
oil prospecting—general purpose portable PCs cannot qualify as Class A devices. In other words, any
portable computer from the smallest palmtop to an arm-stretching old lunchbox offered to you for sale
as a Class A device violates the FCC rules. Legally, such a computer cannot be sold in the United
States.
A substantial incentive exists for manufacturers to want their products treated as Class A devices. Not
only are the emission requirements more lax for Class A devices, but a personal computer rated Class
A also does not require the lengthy FCC certification process. Instead, a Class A device only needs to
be verified by its maker to comply with the FCC rules. In other words, Class B equipment must be
certified by the FCC, a process in which the FCC or, more usually, a special lab actually tests the
equipment; the results of the tests are filed with the FCC, which then issues the certification. A Class
A device, by contrast, is tested and verified to comply with the FCC rules by its manufacturer, and the
manufacturer simply files the results. The latter process is admittedly quicker. It
also offers the potential for creative interpretation of the rules. For example, a manufacturer might
succumb to marketing pressure and say equipment is verified before it actually is. However, the FCC
can double-check Class A verified equipment and prohibit its sale if it doesn’t in fact meet the
standard, and punish those who fraudulently claim to have verified equipment.

Radiation Limits

The justification of the distinction between Class A and B devices may seem nebulous, and the
different forms of test procedures insupportable, but there’s good reason behind both.
The emission limits for a Class B device are not arbitrary or capricious. They represent a value
believed by the FCC to be low enough that they do not cause interference to radio or television
reception when more than one wall and 30 feet separate the computer and the television set or radio.
That 30 feet and one wall is a reasonable description of the distance between one household and
another (at least it’s reasonable to the FCC). In other words, the standard is designed so that if Class B
equipment causes interference at all, it is only a bother in the home of the person owning the
computer. The neighbors shouldn’t have anything to worry about.
Class A equipment, on the other hand, may produce interference in equipment nearly ten times farther
away. The higher tolerance for interference is based on the assumption that most residential areas are
substantially more than 30 feet from industrial or commercial buildings. This greater separation means
that even with greater emissions, Class A devices should not bother the neighbors. However, in a
residential neighborhood, a Class A device may cause interference to neighbors’ radio and television
reception.
Two kinds of emissions are covered by the limits in the FCC rules: conductive emissions, those
conducted through the wires in the power cord; and radiation, the signals broadcast as radio signals
from the computer into space. The maximum strength of the emissions varies with frequency. Table
B.1 lists the conducted limits for the two classes of equipment.

Table B.1. Class A and Class B Conducted Limits

Frequency of Emission (MHz)    Class A Conducted Limit (microvolts)    Class B Conducted Limit (microvolts)
0.45 to 1.705                  1000                                    250
1.705 to 30.0                  3000                                    250

Table B.2. Class A Radiated Limits

Frequency of Emission (MHz)    Maximum Field Strength (microvolts per meter at ten meters)
30 to 88                       90
88 to 216                      150
216 to 960                     210
960 and above                  300

Table B.3. Class B Radiated Limits

Frequency of Emission (MHz)    Maximum Field Strength (microvolts per meter at three meters)
30 to 88                       100
88 to 216                      150
216 to 960                     200
960 and above                  500
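To make the tables concrete, the following sketch looks up the radiated limit for a given frequency and device class. It is purely illustrative: the table data comes from Tables B.2 and B.3 above, but the function names and the inverse-distance normalization are assumptions of this sketch, not part of any FCC test procedure.

```python
# Illustrative only: table data from Tables B.2 and B.3; the far-field
# 1/distance scaling is a rough assumption used to compare the classes.

RADIATED_LIMITS = {
    # class: list of (lower band edge in MHz, limit in microvolts per meter)
    "A": [(30, 90), (88, 150), (216, 210), (960, 300)],   # measured at 10 m
    "B": [(30, 100), (88, 150), (216, 200), (960, 500)],  # measured at 3 m
}
MEASUREMENT_DISTANCE_M = {"A": 10.0, "B": 3.0}

def radiated_limit(device_class, freq_mhz):
    """Maximum field strength (uV/m) at the class's measurement distance."""
    if freq_mhz < 30:
        raise ValueError("the tables cover 30 MHz and above")
    limit = None
    for band_edge, value in RADIATED_LIMITS[device_class]:
        if freq_mhz >= band_edge:
            limit = value
    return limit

def limit_at(device_class, freq_mhz, distance_m):
    # Rough normalization assuming field strength falls off as 1/distance
    scale = MEASUREMENT_DISTANCE_M[device_class] / distance_m
    return radiated_limit(device_class, freq_mhz) * scale

print(radiated_limit("B", 100))      # 150 uV/m at 3 m
print(round(limit_at("A", 100, 3)))  # 500 uV/m at the same 3 m
```

Normalized to the same distance this way, a Class A device at 100 MHz may radiate roughly three times as strongly as a Class B device, which is why Class A equipment belongs away from residential neighbors.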

The different testing arrangements—verification versus certification—for Class A and B equipment
reflect some of the realities the FCC envisioned in the two types of computer equipment with which
the rules are concerned. Class B products are those mass produced in the thousands or millions.
Sending a sample to a lab should be no hardship (except for the delay imposed in the certification
process). Class A equipment may likely be unique, for example a custom-installed mainframe in an
environmentally-controlled computer room. Sending a one-of-a-kind mainframe computer to the FCC
testing lab would be impractical at best. Moreover, because more Class B than Class A devices are
likely to be unleashed on the public, a higher degree of assurance against interference from the more
popular equipment seems warranted.
Under the FCC rules, commercial or industrial equipment can be sent to the FCC to be certified as
Class B equipment, and Class B equipment can be used in business locations. The opposite is not true,
however. Class A equipment should not be used in residential areas.

Enforcement

The law doesn’t say that you can’t use a Class A device in your home, nor will the Radio Police bust
down your door if you do. The FCC rules implicitly allow you to get away with using a Class A
device in your home—as long as no one notices. If, however, your computer causes interference to
someone’s radio or television reception—no matter whether your PC is a Class A or Class B
device—you are responsible for eliminating the interference. If you don’t, the FCC can order you to
stop using your computer until you fix the interference problem, and if you don’t obey the order, you
may be fined or imprisoned. That threat alone should be sufficient to make you think twice about
using a Class A device at home.
If that policy seems to incorporate more than a bit of Big Brother, you should be aware that the FCC
also has the authority to demand to see your personal computer almost whenever they want. The FCC
rules require that the owner of any Class A or B device (or any equipment subject to the FCC rules)
make the equipment and any accompanying certification available for inspection upon request at any
reasonable time (generally, that means 9 AM to 5 PM on workdays). You also must "promptly
furnish" any FCC representative that calls upon you with such information as may be requested
concerning the operation of your personal computer.
You needn’t watch warily out your windows for big vans slowly driving down your street with dish
antennae pointed in your general direction, however. Those vans are things of cheap spy novels and
cheaper movies. In reality, the FCC uses ordinary-looking cars that may not even have an evident
antenna. Moreover, the FCC doesn’t arbitrarily go out looking for people holding Class A computers
in their homes. The interference-locating equipment goes out in response to complaints, so odds are
you’ll hear from your neighbors before the FCC knocks.
The bigger concern of the FCC is that interference-causing equipment not be sold in the first place, so
you never get a chance to put wavy lines through all your neighbors’ favorite television shows. To
that end, the FCC rules prohibit uncertified Class B equipment from being sold or offered for sale.
Class B personal computers not FCC certified cannot legally be advertised for sale, although ads
announcing products with a disclaimer noting that the device is not certified (and thus, not available
for sale) are permitted. If a company markets a computer that has neither been approved by the FCC
as a Class A device nor certified as complying as a Class B device, the company may be ordered to
stop selling the equipment and fined. If the company continues to flout the rules, company officials
could be jailed. According to the FCC, most companies get into compliance right away.
Before a Class B device is certified, it can be displayed at shows and demonstrated with the
appropriate disclaimer attached. The primary prohibition is against sales of noncertified equipment.
For example, demonstration units could be distributed, but "demo" models could not be sold to
computer dealers for display.

Verification and Certification

If, after testing, a given device qualifies for certification, the FCC certifies the unit and issues a
certification number, which may be an alphanumeric set of characters. (Manufacturers often select
their own numbers.)
Every model of personal computer that differs as to case, power supply, or motherboard must be
separately certified. If a manufacturer offers two case styles—desktop and floor-standing—and three
system boards—386SX, 386DX, and 486DX—each of the six configurations needs to be separately
certified.
Computers from different vendors can share the same FCC number, providing they are identical units
made by a single manufacturer differing only cosmetically—for example, in label or color. In the past,
the FCC required that it be notified about different trade names used on certified products, but this
requirement was relaxed with the new (1989) rules. Computers with different packaging, processors,
or power supplies cannot share an FCC certification number, even if they are made by the same
manufacturer.
A personal computer need not be recertified if it differs from a certified model only in the addition of
a certified peripheral. For example, a manufacturer could create a separate model by installing an
extra FCC-certified serial board, and that new model would be covered by the certification of the old
one.
A claim sometimes made by small computer makers—that a product is made only from FCC-certified
subassemblies and thus does not require FCC certification itself—is simply impossible. A PC cannot
be built without a power supply, case, or motherboard, and these three subassemblies cannot be FCC
certified. Any computer manufactured from subassemblies must be FCC certified as a completed unit.
All these rules seem to make FCC certification important only to computer manufacturers. After all,
you are still liable for clearing up the interference generated by your PC no matter whether it’s Class
A or B. But FCC certification should be important to you.

Equipment Design

Achieving Class B certification takes a better design and better workmanship. Although a certification
sticker is no guarantee that a particular product is well-made, that sticker does show that the PC or
peripheral to which it is attached meets an important technical standard that uncertified equipment
does not. Although you should not rely entirely on FCC certification when buying a PC, it does give
you one more piece of evidence about the quality of your prospective purchase.
Manufacturers use a number of different strategies to minimize radiation. As speeds increase, they
must be increasingly diligent.
The heavy steel case of the typical PC, XT, AT, or compatible computer does a reasonable job of
limiting RFI. Plastic cases require special treatments to minimize radiation. The treatment of choice is
a conductive paint, often rich with silver, which shields the computer much like the full metal jacket
of other computers.
As PC operating frequencies increase, the spurious radiation becomes more pernicious. Any crack in
the case may allow too much radio energy to leak out. If different parts of the chassis and the lid of
the case are not electrically connected, RFI can leak out. In addition, any cable attached to the
computer potentially can act as an antenna, sending out signals as effectively as a radio station.
A number of design elements help reduce RFI. Special metal fingers on the edge of the case and its lid
ensure that the two pieces are in good electrical contact. Cables can be shielded. RFI absorbing ferrite
beads can be wrapped around wires before they leave the chassis to suck up excess energy before it
leaks out. Each of these cures adds a bit to the cost of the computer, both for the materials and for
their fabrication and installation. Moreover, it can take a substantial time to track down all the leaks
and plug them.
Interference also can be minimized at the point of its origin. For example, IBM designed its PS/2s
from the ground up to be inherently low in radio frequency emissions. Their system boards and the
Micro Channel are designed in such a way that spurious radiation is at a minimum. The outer layers of
the planar boards consist primarily of ground layers, which shield the high frequency signals within
the inner layers of the circuit board. Ground wires alternate with every few active conductors on the
Micro Channel to partially shield the bus.
The bottom line is that a Class B device must be designed to be less likely to cause interference than a
Class A computer. Not only does that mean you are less likely to get involved in an imbroglio with
your neighbors about their television reception, it also means that a Class B computer may be built
better with more attention to detail. In addition, lower emissions at radio frequencies generally go
hand-in-hand with lower emissions at lower frequencies. That can be comforting should you worry
about the health effects of low-frequency radiation.

ELF and VLF Radiation

At the bottom of the electromagnetic spectrum is Extremely Low Frequency (ELF) radiation. Strictly
defined, ELF comprises the frequency range from 3 to 30 Hertz, but in common usage the term is
extended to any frequency below 30,000 Hertz. As with all frequencies below 450 KHz, ELF is
ignored in the FCC certification process. ELF has long been thought innocuous, but a number of
newspaper and magazine articles have raised doubts about its safety.
Strictly speaking, the ELF of concern is not radiation but captive electric and magnetic fields
generated by strong electric currents in power systems, appliances, and other electrical equipment
(which includes computers and their peripherals). The two types of fields—electrical and
magnetic—are related and arise from the same phenomena, but have individual distinguishing
characteristics. Electric fields generate a potential (a voltage), are measured in millivolts or volts per
meter, and are relatively easily shielded against using a conductive material. Magnetic fields are
measured directly in units called gauss or indirectly as the strength of the current (amperage) they can
generate in a length of wire using units of milliamps per meter. Magnetic fields are difficult to shield
against.
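The unit relationships mentioned here are fixed by physics: one gauss is 10^-4 tesla, and in air a magnetic flux density B corresponds to a field strength H = B / mu_0 (mu_0 being the permeability of free space), which is how the same field can also be quoted in milliamps per meter. A small sketch of the arithmetic; the function names are this sketch’s own, for illustration only.

```python
import math

MU_0 = 4 * math.pi * 1e-7  # permeability of free space, in henries per meter

def gauss_to_tesla(b_gauss):
    # 1 gauss = 1e-4 tesla; monitor fields are usually quoted in milligauss
    return b_gauss * 1e-4

def flux_density_to_field(b_tesla):
    # In air, B = mu_0 * H, so H (in amperes per meter) = B / mu_0
    return b_tesla / MU_0

b = gauss_to_tesla(2.5e-3)                        # 2.5 milligauss
print(round(b * 1e9))                             # 250 (nanoteslas)
print(round(flux_density_to_field(b) * 1000, 1))  # 198.9 (milliamps per meter)
```

The 2.5-milligauss example is the same field quoted as 250 nanoteslas in the MPR II limits discussed below.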
While older monitors emitted copious magnetic and electric fields—mostly above and from the left
side of the sets—most manufacturers are designing new products to meet a stringent radiation
standard adopted in Sweden by the Swedish Board for Measurement and Testing, internationally
known by its Swedish initials, MPR.

MPR Standards

The Swedish safety standard pertains to a number of aspects of monitor emissions, including
x-radiation, static electrical fields, low-frequency electrical fields, and low-frequency magnetic fields.
Actually, two Swedish standards exist: an old one (now termed MPR I) and a new one (MPR II),
published in December 1990. Whereas MPR I focused solely on alternating magnetic fields with
frequencies between 1 KHz and 400 KHz, MPR II extended the standard to both electrical and
magnetic fields and lowered the reach of the standard to 5 Hz. This revision has important
implications for you and monitor makers. Some manufacturers claim their products meet the Swedish
standard even when they only comply with MPR I. But MPR I covers only one aspect of monitor
emissions—basically horizontal scanning frequencies. MPR II extends to the vertical frequency range
as well as covering power line frequencies.
The MPR II standard requires particular measurements of electrical and magnetic fields to be made at
various points around the monitor under carefully controlled conditions. Both types of fields are
measured in two bands (5 Hz to 2 KHz and 2 KHz to 400 KHz) at distances approximating normal
working distance at dozens of positions around the monitor. Electrical fields must be less than 25
volts per meter in the lower band and 2.5 volts per meter in the upper band; magnetic fields, below
250 nanoteslas (2.5 milligauss) in the lower band and below 25 nanoteslas in the upper band.
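A hedged sketch of how those numeric limits might be applied to a reading: the band edges and limit values come from the paragraph above, while the function names and the simple pass/fail structure are this sketch’s own simplification. A real MPR II test takes readings at dozens of positions around the monitor; this checks only a single measurement.

```python
# Band limits as stated above; this is not an official test procedure.
MPR_II = {
    "lower": {"electric_v_per_m": 25.0, "magnetic_nt": 250.0},  # 5 Hz to 2 KHz
    "upper": {"electric_v_per_m": 2.5,  "magnetic_nt": 25.0},   # 2 KHz to 400 KHz
}

def band_for(freq_hz):
    if 5 <= freq_hz < 2000:
        return "lower"
    if 2000 <= freq_hz <= 400000:
        return "upper"
    raise ValueError("frequency outside the bands MPR II covers")

def passes_mpr2(freq_hz, electric_v_per_m, magnetic_nt):
    # A reading passes only if both field strengths are under the band limits
    limits = MPR_II[band_for(freq_hz)]
    return (electric_v_per_m < limits["electric_v_per_m"]
            and magnetic_nt < limits["magnetic_nt"])

print(passes_mpr2(60, electric_v_per_m=10.0, magnetic_nt=200.0))    # True
print(passes_mpr2(31000, electric_v_per_m=3.0, magnetic_nt=10.0))   # False
```

Note how the upper band is ten times stricter than the lower band, reflecting the standard’s tighter treatment of scanning-frequency emissions.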

TCO Limits

The Swedish white-collar labor union known as TCO promulgates its own standards even tougher
than those of MPR II. The chief difference between MPR II and TCO is the distance at which
measurements are made. Whereas MPR II specifies measurements to be made at 50 centimeters from
the monitor screen, TCO makes the same measurements at 30 centimeters. In effect, TCO requires
monitor emissions to be roughly half that permitted by MPR II. Consequently, TCO is currently the
strictest standard to be applied to monitor emissions.
The Swedish standards are the toughest in the world, so compliance with them is the best assurance
that a monitor is as safe as possible. However, neither MPR nor TCO compliance is a complete
assurance of safety. Safe and unsafe levels of these low-frequency fields have yet to be confidently
determined. Some research actually suggests biological activity at field strengths permitted under the
Swedish standards.

Underwriters Laboratories Listing

When you switch on your desktop PC, you probably don’t expect the electricity inside to jolt out and
send you corkscrewing, dropping you to the floor—your heart in fibrillation, your soul in limbo, and
your coworkers gathered around wondering who will get your window office they’ve all lusted after
for years. Nor do you think of your workstation as a potential flame-thrower, a time-release modern
Molotov that gives nightmares to fire marshals and Smokey the Bear. You have faith in the safety of
your PC and the rest of your array of office equipment. But nothing about modern electronics makes
equipment built from them inherently safe. Quite to the contrary, all electrical devices have some
shock potential. Voltages inside your PC’s power supply are quite sufficient to electrocute you or to
set your office aflame.
Worse, your shields against such prospective disasters may not be as impregnable as you think. For
example, insurance kicks in only after the fact—little solace when you’ve been thrown onto your
back, legs twitching. Government regulatory agencies react with enforced recalls and product bans
even more slowly, only after a string of catastrophes hints that something not-so-subtle infects a
certain product line. Anyone with memories of color televisions igniting apartments faster than you
can say "instant on" knows that sometimes even companies with excellent reputations accidentally
release potentially dangerous products.
Several testing and certification organizations do offer you the assurance that the equipment you trust
your livelihood and life to is safe. Among these are the Canada Standards Association, Underwriters
Laboratories Inc., and Verband Deutscher Elektrotechniker. The most familiar is Underwriters
Laboratories because the organization has been active in the United States for about a century. The
CSA is the Canadian equivalent; the VDE, German.
The trademarked stylized "UL" inside a circle means that independent safety engineers at
Underwriters Laboratories have examined the design and a sample unit of the product and found it to
meet their stringent safety standards. In addition, to assure you that the initial safety of your PC
wasn’t shortchanged later in production, other UL engineers occasionally spot check the manufacturer
and random products off the assembly line.
Underwriters Laboratories is not a government agency; nor is it the child of some Sixties-vintage
publicity-minded consumer-safety promoter. Rather, Underwriters Laboratories is an independent,
nonprofit organization that functions both as a safety engineering consultant and certification
organization. It’s a commercial business that earns its livelihood from manufacturers who pay for its
services in detecting what’s wrong with their products before they are put on the market.
Instead of governmental authority, the power of the UL arises from its standards and reputation—a
particularly long reputation. The organization dates from almost as far back as commercial electrical
power—well before the age of government regulations, consumer organizations, or even Upton
Sinclair’s exposés.
Founded in 1894 by William Henry Merrill, the UL was first known as the Underwriters’ Electrical
Bureau and primarily concerned itself with safety testing the products of the fledgling electrical
industry. At the time, the entire organization comprised three people—Merrill, Edward Teall, and W.
S. Boyd—and was appropriately housed above a fire station in Chicago. Since then, the company has
expanded in employment (the roster now totals thousands); offices (four, one each in Northbrook,
Illinois; Melville, New York; Research Triangle Park, North Carolina; and Santa Clara, California);
and other areas. It now sets standards and tests nearly any product about which there may be safety
concerns, from PCs to space heaters and, appropriately, fire extinguishers. The organization was
formally incorporated as Underwriters Laboratories, Inc. in 1901.
Although they are the product of a private business, Underwriters Laboratories’ standards can have
legal significance. The standards developed by the UL can and have been incorporated into statutes
and ordinances.
A particularly relevant example pertains in part to PCs. The National Electrical Code, which is
incorporated into the laws of numerous municipalities, specifies that as of July 1, 1991, any
equipment intended to be electrically connected to a telecommunications network must be listed for
that purpose (National Electrical Code, Article 800-51, subparagraph i). In communities enforcing the
national code, then, your PC must be listed if it contains a modem that you intend to connect to
a telephone line. A UL label on your PC constitutes the required listing.
Because Underwriters Laboratories is not an arm of the government, it cannot arbitrarily force a
company to follow its standards. It relies instead on cooperation and contract. To use the UL logo, a
company must enter into a contract with Underwriters Laboratories. In effect, it licenses the use of the
trademarked symbol.
Underwriters Laboratories does not just grant a license for the payment of a fee. To earn the right to
use the logo, a company must agree to follow the appropriate UL standards. More importantly, the
company must submit a sample of the equipment to Underwriters Laboratories so that it can be tested
and certified to conform with the standard. The contract also imposes a continuing duty on the
manufacturer to conform with the appropriate standard and gives the UL the right to check
compliance. The UL can award its symbol to the products it chooses or withhold it and enforce its
conditions on the use of its symbol under federal law. The UL symbol can appear on any or all
products in their class. It indicates compliance with safety rather than performance standards, and it
implies no relative measure of quality.

Computer Safety Standards

Computers, even electrical devices in general, are not the only or even primary concern of
Underwriters Laboratories. The organization develops standards for and tests everything from
building materials to fire alarm systems. Today PCs must conform primarily with but one of hundreds
of Underwriters Laboratories standards, designated as UL 1950.
The UL 1950 standard applies to all information technology equipment. First published on March 15,
1989, it became effective March 15, 1992, meaning equipment must meet the standard to be sold
wearing the UL logo after that date. Computer equipment now being made is tested for conformance
with the UL 1950 specifications.
In the 10 or 15 years before 1989, other standards applied to data processing equipment: UL 478 for
information-processing and business equipment; and UL 114 for office appliances and business
equipment. UL 1950 replaces both.
The new standard was not arbitrarily created, but represents an attempt to unite various standards in
use around the world by the unaffiliated CSA, VDE, and others. Manufacturers can follow the
guidance of one standard rather than several perhaps conflicting standards for equipment to be used in
Europe, Canada, and the United States.
UL 1950 covers everything from ordinary desktop PCs and their peripherals (disk drives to printers)
to mainframe computers to simple desktop calculators to typewriters. Special standards with different
requirements cover industrial computer devices that may have to work in severe environments, such
as those with explosive fumes.
The inch-thick document covers nearly all aspects of equipment design and construction and outlines
the testing procedure that Underwriters Laboratories applies. These range from insulation and wiring
to mechanical strength and resistance to fire. Even relevant markings and identifications on the
product are covered. If you are interested, the complete, copyrighted publication can be ordered
directly from Underwriters Laboratories.

UL Recognition Versus UL Listing

In granting its approval to electrical devices, Underwriters Laboratories uses three strictly defined
terms: listing, recognition, and classification. These terms are not interchangeable; the standards for
each are different and even apply to different kinds of devices.
Recognition is an approval granted to electrical components, products not complete in themselves but
used in making a complete product. Light switches and computer power supplies are typical of
equipment that earns UL recognition. UL-recognized devices are entitled to wear a special symbol—a
slanted combination of the letter U and a backward R (for recognition).
Listing applies to complete products that you can buy—an entire appliance, monitor, or computer
system unit. Listed products are allowed to wear the familiar UL trademark, the circle with the letters
"UL" within it.
A UL-listed product is often made from UL-recognized components, but doesn’t have to be; nor does
the use of UL-recognized components automatically confer a UL listing on the finished product.
Rather, using UL-recognized components helps the manufacturer more easily achieve a listing.
A UL listing means that a product is safe for use in the form in which it is delivered to you. A
UL-recognized product is safe when installed and used properly. The listing lifts the responsibility
from you.
Although a UL-recognized component might be safe in itself, the possibility always exists that it
could be installed in a product in such a way to make it dangerous. For example, a heat-generating
power supply could be shoehorned into a tight case that lacks adequate ventilation, leading to
overheating and fire potential. Although the power supply was UL-recognized, such a dangerous
complete assembly could not be UL listed.
Any company that tells you that its PC is UL recognized is mistaken; it cannot be. The UL does not
give recognition to complete computer systems. Manufacturers may rightly claim that a computer
doesn’t need to be listed because it is built from UL-recognized components; and because a listing is
voluntary, a computer legally need not be UL listed at all. The statement, while technically
accurate, is misleading.

The operative words in such a statement are that the computer is not UL listed, so it gives you no
assurance that the completed system complies with UL design and manufacturing standards. In other
words, if you ask a PC maker whether its product is UL listed, and the response is that the system is
made from UL-recognized components, the answer is probably an evasion of the word "no."
UL classification generally applies only to commercial or industrial products that the UL tests for
conformance with specific published standards or regulatory codes, or that have been evaluated with
respect to certain hazards or to perform under specific conditions. A UL-classified device bears no
specific symbol, but instead is marked with the UL’s name and a statement indicating the extent of the
product’s classification. For the most part, UL classification has no relevance to commercial PCs.

Winn L. Rosch Hardware Bible, Electronic Edition, Appendix C

Appendix C

Health and Safety


■ Repetitive Strain Injury
■ Back Problems
■ Eyestrain
■ Radiation
■ X-Radiation
■ Ultraviolet Radiation
■ Microwave Radiation
■ Low-Frequency Radiation
■ Pregnancy

Health and Safety

http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/wrhxc.htm (1 de 14) [23/06/2000 07:06:15 p.m.]


Repetitive Strain Injury

The hand-to-keyboard relationship is the most strained of any with PCs. No one wants to type, even
though it’s probably the fastest available means of entering information into your PC. But typing is
more than a bother. It can also cause permanent damage to your hands and wrists. Though it may
sound odd, the health problem of most concern in regard to keyboard use is the same ailment suffered
by chicken pluckers and meat packers. Once you understand the cause, however, the association is
obvious.
The primary health issue associated with keyboards is repetitive strain injury or RSI, a painful, often
debilitating disorder that develops when people must execute the same manual task over and over
again. The formal name for the ailment is self-descriptive. Straining to perform the same hand
movements over and over again eventually leads to physical damage, and without the proper
precautions that damage can be permanent.
The risks of RSI are real. A 1984 study by the South Australian Health Commission found that 56
percent of keyboard operators had recurring symptoms of keyboard-caused injury, eight percent of
them so serious that they contacted a health care provider.
With keyboarding, the most common manifestation of RSI is carpal tunnel syndrome. A similar
ailment, wrist tendonitis, has also been associated with keyboard use.
The carpal tunnel is a narrow passageway in your wrist through which the median nerve passes,
carrying sensations for your entire hand, and the finger flexor tendons, which link your fingers to the
muscles in your lower arm. The tunnel is formed by walls of solid bone on three sides with the bottom
enclosed by the transverse carpal ligament, a tough, inelastic cartilage.
Carpal tunnel syndrome is caused by the tendons protecting themselves from overuse. Each
tendon is surrounded by a thin, fluid-filled sac called a synovial sheath, which swells with extra fluid
to protect the tendon. Scientifically, this swelling is called tendonitis. When these sacs swell in the
carpal tunnel, they can pinch the median nerve against the bones or the carpal ligament. The result can
be loss of sensation in the hand and debilitating pain.
The prognosis is not good. Treatment may involve an enforced vacation or medical leave of absence
during which no typing is permitted. Physical therapy, cortisone injections, even surgery are
sometimes necessary.
Although the problem develops over a period of years, the onset of pain caused by carpal tunnel
syndrome often appears suddenly. Some sufferers have no symptoms one night and wake up the next
morning in excruciating pain, unable to work, possibly for months. In most cases those afflicted with
carpal tunnel syndrome have ignored the warning signs of the problem: a minor pain in the wrist after
a day of typing, numbness in the thumb or fingers.
People have been typing for over one hundred years, yet carpal tunnel syndrome appears to be a
recent phenomenon. The diagnosis is not new nor is the condition caused by a recently evolved virus
or bacterium. Rather, typing habits have, in general, changed.
Today, a typist’s fingers stay as close to the home row on the keyboard as possible. A simple press of
the pinkie is all that’s needed for a carriage return. Old typewriters required a definite change of
position and a resounding right hook to send the carriage back to the left after each line, and after
each page the typist had to extract one sheet and roll a new one into the typewriter. All of these
simple, necessary
acts added variation to the typing process. Moreover, computers encourage extended use.
This difference between classic typing and keyboarding in itself hints at one way of avoiding carpal
tunnel syndrome—take a break. Remove your fingers from the home row and wrap them around a
coffee cup. Do something different for a while. As you’ll see in the next section, it’s better still to get
up from your chair and take a walk or otherwise divorce yourself from your workplace.
Sources agree that keyboarding with your wrists in the wrong position aggravates and may cause
carpal tunnel syndrome. The wrong position is anything but the naturally straight position your wrists
take on in relation to your arm when you stand relaxed with your arms dangling at your side. You
should adjust the angle of your keyboard to keep your wrists straight (if you can).
A number of innovative and odd redesigns of the computer keyboard have been developed by
ambitious inventors, most based on sound theory and an optimistic view of reality. No matter how
beneficial, it’s unlikely that bent, oddly-shaped, or vertical keyboards will catch on. But keyboards
with widened wrist rests and add-on accessories that provide wrist support may be useful in helping
you to keep your hands in the proper typing position.

Back Problems

People whose jobs involve sitting in one place all day long often complain of health problems, be they
hemorrhoids, obesity, or an aching back. Putting a PC in front of you won’t change the
complaints—and it won’t make the PC the cause of your problems.
The human body was not designed for the couch potato lifestyle. For example, proper circulation
depends on moving your legs to push venous blood back to your heart. Immobilize yourself in front of
a PC and you’ll pay for it in aches and pains—and worse. Studies have shown that the feet of office
workers who spend the day at their desks swell by four to six percent by the end of the day. Another
study suggested that prolonged quiet sitting might lead to a gradual increase in cardiovascular strain.
In a questionnaire survey of 852 Video Data Terminal (VDT) users at New York State government
offices in 1985, one-third of the female operators experienced frequent or daily neck pain. One-quarter
reported back or shoulder pain. For unknown reasons, males reported problems 10 to 20 percent less
often than females. Most studies attribute these pains to poor posture caused by non-optimal seating.
At one time, nearly every source recommended the same posture for working at a PC—back straight,
feet firmly flat against the floor, arms at your sides, a 90-degree bend at the elbow, and your wrists
straight at the keyboard. Only Marines and debutantes are likely to sit that way. One reason is office
furniture. Except for specially designed ergonomic computer chairs, standard office equipment is
typically not designed to adjust to the full range of settings required to accommodate the diversity of
human beings. The range is a wide one. A 1982 study reported in the Journal of Ergology shows that
preferred keyboard height varied from 28 to 34.25 inches; the preferred screen height ranged from
36.25 to 45.5 inches.
Moreover, there is substantial doubt that the classic recommendation for seating posture is best for
computer users. For example, the study above showed a preference among VDT workers to lean back
so that their bodies were at angles between 97 and 121 degrees. A report published by the Swedish
National Board of Safety and Health in 1986 noted that the generally accepted "correct" sitting
posture—back straight—is not supported by any scientific evidence. In fact, attempts to relieve the
pressure this posture puts on the knees reduced furniture heights by about four inches over the
twentieth century, even though people are, on average, about four inches taller. Raising both tables
and chairs by about three inches reduced back pain and leg swelling among workers.
Other studies show that when people are instructed on how to adjust adjustable furniture and then left
to their own devices, they rarely assume the textbook-perfect posture. They lean back at up to 31
degrees from the vertical and relax their way through the workday. And they feel better about it.
Little wonder that the modern viewpoint has retired the posture police and now substitutes
adjustable furniture. But even adjustable furniture is problematic. First, it’s worthless unless someone
shows you how to adjust it—the process can be complex because some chairs alone have 15 separate
adjustments! Second, one study seemed to show that it takes about a week to get near an optimum
adjustment of your computer furniture. Third, current advice is against becoming too comfortable. If
you settle so perfectly into your chair that you don’t want to move, you’re likely to develop the
sedentary problems you were trying to avoid. You may want to adjust your chair periodically
throughout the day, just as you are advised to change your driving position periodically on long
trips.

Eyestrain

Today there is a general consensus that staring at a monitor screen all day long in and of itself does
not cause permanent damage. This research-based conclusion contradicts earlier speculation that long
term computer use in and of itself caused such vision problems as myopia (near-sightedness) and
cataracts.
The research has been extensive and convincing. For example, studies conducted over a number of
years in Canada (5 years) and Holland (2.5 years) found no deterioration in vision from computer use
that could not be attributed to normal aging. A report on ophthalmological examinations comparing
VDT users and non-users among members of the Newspaper Guild in 1985 found but one difference
between the two groups, a tendency of VDT users to become slightly cross-eyed (esophoric).
Admittedly, severe esophoria can be a problem, but the likely cause again was how the VDTs were
used, not some intrinsic property of the equipment. In fact, a statement made on behalf of the
American Academy of Ophthalmology in 1984 concluded that existing evidence indicated that VDTs
were safe for normal use and presented no hazard to vision. There was no indication that VDT use could
harm normal eyes or worsen existing pathologic eye conditions.
But that’s far from giving monitors a clean bill of health. Over the short term, visual problems do arise
among VDT users. In a 1988 study, 26.3 percent of the participants developed significant temporary
myopia (nearsightedness) after VDT work and another 42.1 percent showed changes bordering on
significant. A 1984 study showed that after working on a VDT, the time required to shift focus
between near and far objects increased. And a 1981 study showed an increased incidence of eye
fatigue and irritation among VDT users as compared to other office workers, although eye
examinations showed the same level of eye problems in both groups. The eye irritation found in this
study often persisted well beyond the period in which the VDT was used, often through the next
morning.
Scientists have sought the cause of these irritations, and most research points in one direction: not to
the VDTs themselves but to how, where, and under what conditions the computer equipment was used.
For example, a 1988 study reported in the New York State Journal of Medicine attributed the eye
irritations complained of by VDT workers to ergonomic considerations such as glare, improper
lighting, improperly corrected vision, and poor arrangement of work materials. A Swedish study in
1986 found that glare was significantly correlated with eye fatigue (as was the need to decipher
handwriting).
Eliminating eyestrain, irritation, and fatigue means removing the source of the problems, the
shortcomings in the working environment. For the most part, that means changing the lighting in the
room containing your PC to reduce glare and equalize the illumination on your work, moving your
monitor to a more comfortable viewing position to lighten the burden on the muscles that shift your
gaze, and, if all else fails, adding a glare shield or buying a new monitor.
Lighting is an important issue with any work you do. After all, only in rare instances can you get
away without seeing what you’re doing. According to most experts, the amount of lighting required
varies with the job that you do. If you’re working on watches or other delicate, detailed work, you’ll
want your workplace as bright as 3500-5000 lux. (The lux is a unit of measurement of illumination;
bright sunlight is 100,000 lux; a moonlit night, 0.1 lux.) Normal office work falls into the range of
100-500 lux.
In general, VDT users prefer lighting on the dark side to offer better screen contrast. On the other
hand, dealing with paperwork requires light on the brighter side. You’ll need to find a happy medium.
In most cases, that means reduced overall lighting with a task light, an adjustable lamp that can be
arranged to shine on your reading materials but not on your monitor screen.
The biggest nemesis when it comes to illumination is glare, reflections of bright light sources off the
glass surface of your monitor screen. To reduce glare, many sources recommend diffuse overhead
lighting supplemented by localized task lights. If you have a window in your workplace, you should
align your monitor screen at right angles to it to minimize reflections.
Substantial debate surrounds the issue of anti-glare treatments on monitor screens. The treatments
soften the image, putting it slightly out of focus, and decrease contrast. Of course, no anti-glare
treatment would be necessary if the monitor were used under optimum viewing conditions.
Because few displays are used in optimum conditions, most monitor makers install glare-reducing
picture tubes.
Many equipment makers recommend against using an add-on anti-glare screen in front of your
monitor unless you have no other means of reducing glare. All such screens reduce image sharpness,
and that in itself can be a source of fatigue.
In fact, fatigue is one area where computer equipment itself may be an issue. Some studies have
correlated poor quality displays with increased eye fatigue. A better monitor may be better for you.
If you decide to replace your monitor, you’ll want to examine those with flatter screens, which are
less prone to glare. Curved monitor screens act like the convex mirrors stores use to catch shoplifters
and other goings-on in the aisles. The face of the tube reflects light sources over a wide angle,
invariably catching a lamp, overhead light, or a window. A flat or nearly-flat screen is less likely to
get extraneous light sources in its view.
To help you make such adjustments, your monitor should be equipped with a tilt-swivel stand. Most
health and safety recommendations mandate a tilt-swivel base.
The obvious adjustment is to align your monitor to minimize glare. If glare is not a problem, IBM
recommends that the face of the monitor be aligned so that the top of the screen is set back 10 to 15
degrees from the vertical.
Another step to combat eye fatigue is to sharpen up your screen by chasing away dust. Monitor
screens naturally attract dust because the high voltage used by their electron beams builds up a static
charge on the face of the tube. This charge collects dust the same way balloons stick to a wool sweater.
Most sources recommend using a damp—not dripping wet—rag to gently wipe the dust off the screen.
Be careful not to get the screen really wet; any liquid that runs down its face may drip inside and
damage the monitor’s circuitry.
Research indicates that your eyes will become less fatigued if you don’t have to look up at your
monitor. Consequently, most experts agree that the optimum height for your monitor screen is such
that the top of the screen is at eye level. That way you never have to look up to see what’s on the
screen. If you slouch in your chair (which may not be such a bad posture after all—see the preceding
section on back pain), you may not want to stack your monitor atop your system unit.
You’ll also want some kind of stand or holder for your references when you type. To minimize eye
fatigue, experts recommend that you keep your drafts, notes, or whatever you’re typing from at the
same distance from your eyes as the monitor screen. That way you won’t have to shift focus every
time you check your notes. Less shifting of focus is believed to minimize fatigue.
Periodically, however, you should take a break and shift your focus. Look around the office, out the
window, count the holes in the acoustical tile. Anything to change your focus and relax your eyes. To
minimize eyestrain, fatigue, and irritation, the idea is to moderate your eye work—neither shift
your focus dozens of times a minute nor lock your eyes at a consistent distance for hours on end.
A number of studies have shown that the people most likely to suffer from vision-related problems
when working on PCs are those who have uncorrected vision deficiencies. While you might be able to
read your monitor screen even if you are slightly nearsighted or astigmatic (and refuse to wear
glasses or contacts), you’re more likely to develop headaches and other symptoms of eyestrain and
fatigue from your work. In addition, some bifocals are not suited to computer work because their
change in correction occurs too low.
Consequently, you should have your eyes checked periodically (some sources say twice a year) if you
work regularly on a PC or VDT. In some cases, you might want special corrective lenses to use only
while working on your PC. Be sure to let your optometrist or ophthalmologist know that you perform
substantial work on a PC.

Radiation

Monitors are thought by many to pose dangers beyond eyestrain. Monitors operate at frequencies that
may have some health effects and may generate forms of radiation that have been proven harmful.
The health effects of these emissions are not completely understood.
Although dedicated monitors for PCs are relatively new, they share many common characteristics
with VDTs, which have been in use for nearly forty years and about which a substantial body of
health-related data has been generated. Both monitor and VDT technologies use signals of
approximately the same frequencies. Both generate many of the same electromagnetic radiation (EMR) components.
Even with VDTs, however, the issue of health effects remains unresolved after decades of study. No
true consensus on the safety of VDTs—and therefore, personal computer monitors—has emerged.
The conflicting results of studies have lined up two opposing parties who are unlikely to be swayed by
the arguments of the other. On one side are the makers of electronic equipment and the organizations
that employ the people that use it. They believe that the equipment is safe. The other side, the people
who actually must work at VDTs and personal computers all day long, have their doubts. It’s the
classic employer-employee struggle with a technological twist.
The employee viewpoint is buttressed by a variety of studies that show biological effects of
electromagnetic radiation and an association between VDT use and health problems. The most
infamous of these problems is the increased risk of miscarriage. For example, a famous study
conducted for Kaiser Permanente in California (published in 1988) showed that among 1583 pregnant
women, those who used VDTs for more than 20 hours per week had a significantly elevated rate of
miscarriage.
On the other hand, VDT makers and employers rally a whole range of other studies (many of which
they have funded) that have failed to find any such risk to VDT users. For example, a 1989 University
of Toronto study of 800 pregnant mice subjected to electromagnetic fields of the kind given off by
VDTs suggested there is no relationship between spontaneous abortion and VDT electromagnetic
fields.
While that may be good news if you’re a pregnant mouse, pregnant human workers may not be
reassured. And that’s the problem. As in any scientific discipline, the VDT studies are subject to
interpretation. Moreover, the human VDT studies are correlational rather than causal—they associate
a problem with VDT use but cannot prove a true cause-and-effect relationship. The EMR from the
computer terminals could be causing miscarriages, or something else about the terminals or the way a
particular study was conducted could have influenced the results. For example, the Kaiser study itself
admits that its results may have been confounded by unmeasured workplace factors such as poor
ergonomics and job-related stress. Stress rather than radiation is, in fact, a prime contender for the
cause of health effects associated with VDT use.
On the other hand, a growing number of studies have found cause-and-effect relationships between
EMR and biological changes in tissues grown under laboratory conditions. Some of these effects
occurred when the tissues were subjected to electromagnetic fields of the same nature as those created
by personal computers and VDTs.
The radiation emitted by monitors and VDTs falls into several distinct bands, some with known health
effects, some in which health effects are less defined. Among the most important of these frequency
ranges are x-radiation, ultraviolet radiation, microwave radiation, very low frequency radiation, and
extremely low frequency radiation.

X-Radiation

Perhaps the most publicized danger involved with equipment based on cathode-ray
technology—things like television picture tubes, oscilloscopes, radar screens, and computer
monitors—is x-radiation.
X-rays are known to cause cancer, and the mechanism is well-understood. X-rays are ionizing
radiation. The photons making up the X-ray signal contain sufficient energy to break up the chemical
bonds in molecules, including the DNA in chromosomes. Once the DNA in a cell has been changed,
the genetic code of the cell is altered. The cell mutates, perhaps dying immediately or just subtly
changing its activity. Once the DNA of a cell is changed and the cell replicates, the changes are
passed on to its progeny. One possibility is that the growth control mechanism of the cell changes. As a
result, the cell and its offspring may multiply rapidly and uncontrollably as cancer.
The chances that any one cell will react with X-rays in such a way as to cause cancer are minuscule.
Given enough rays reacting with a sufficient number of cells, however, the cancer potential
becomes real and worrisome.
X-radiation is associated with color television screens—and thus with color computer monitors. This
association is based on the scare stories of the early 1960s when early color television sets did, indeed,
produce prodigious amounts of X-radiation.
One of the many ways that X-rays can be produced is through the rapid deceleration of electrons. As
the electrons slow down, they have to give up energy. Depending on the momentum of the electron,
some of this energy is given off as X-rays.
X-rays are classified into two types: low-energy or soft x-rays, with wavelengths from one-tenth to one
nanometer, and high-energy or hard x-rays, with wavelengths shorter than one-tenth nanometer.
Because of their low energy, soft x-rays have little penetrating power. Hard x-rays can pass through
and interact with the human body. Medical x-rays are hard. They can cause cell damage;
consequently, the government has placed strict limits on exposure to them.
Early television sets used a vacuum tube high voltage rectifier, a small tube that generated the high
voltage used to accelerate the electron beam in the display tube. These rectifiers were essentially
miniature X-ray tubes.
They functioned by passing a huge electron flux through the tube, from cathode to anode, the
electrons being rapidly decelerated at the anode. X-rays were emitted in the process.
The x-ray excitement that ultimately caused the federal government to issue strict regulations on the
x-radiation emitted by television sets (as well as computer terminals) was real. Certain television sets
emitted x-rays of such strength that you could have made a radiograph of the bones in your hand using the
television as an x-ray source.
Not all televisions were so dangerous, however. In fact, the culprit was proved to be defectively
manufactured shunt regulator tubes that did not properly shield their anodes. The result was the
emission of a concentrated, pencil-like beam of electrons through the bottom of the television set.
Unless you had the television resting on your stomach—unlikely in those days of hundred-pound
monster TVs—you would have been safe from its effects.
Moreover, vacuum tube high voltage rectifiers and shunt regulators are obsolete. They have been
replaced by solid-state silicon diodes which emit no X-radiation—electrons go through no rapid
deceleration in silicon diodes. No known PC monitor uses vacuum tube rectifiers, so the X-radiation
problem in PCs from that source should be non-existent.
However, all CRT-based devices have another potential source of x-ray emissions. Every CRT creates
its image by shooting a ray of electrons at the phosphors that coat the inner face of the screen. When
they strike the phosphors, these electrons also rapidly decelerate. Most of the energy from the electron
beam goes to excite the phosphors, which in turn emit the visible light of the image. Some of it,
however, can generate x-rays. The higher the voltage inside the tube, the larger the x-ray flux. Color
tubes, which operate at potentials as high as 30 kilovolts, produce thousands of times more x-radiation
than do monochrome tubes, which operate below 20 kilovolts. (X-ray emissions increase by about a
factor of ten for every one kilovolt increase.)
But it’s unlikely that much x-radiation leaks out of any computer monitor. The electron beams inside
their CRTs have little energy and produce only soft x-rays. This radiation is effectively absorbed by
the special face glass of the CRT.
Although the CRT looks like a simple thing in itself—hardly more than an oddly-shaped glass bottle
with some metal pins sticking out its narrow end—it’s a complex creation, believed to be the most
complicated consumer product made before the advent of the microprocessor. Rather than one
uniform kind of glass, the tube is crafted from several varieties, each tailored to a specific purpose.
The wide face of the tube is thick, sometimes as much as one-half inch. It’s made from glasses rich in
strontium and lead, which block the x-ray emissions from the beam within the tube.
Regulations promulgated by the Food and Drug Administration set a maximum limit of x-ray
emissions from televisions and terminals alike at 0.5 milliroentgens per hour at a distance of five
centimeters from the screen—that’s about two inches, close watching indeed. Devices with greater
emissions are not permitted to be sold in this country. Moreover, the measurement of x-radiation
under this standard must be made under worst-case conditions. Not only must all controls on a set
being measured be advanced to the position maximizing x-radiation (settings at which the set is
unlikely to be operated) but also failure conditions that would result in the worst possible x-ray
emissions must be simulated. (For instance, the failure of a voltage regulator that would increase the
potential of the CRT electron beam. These simulations often result in the catastrophic failure of the
equipment during the test.)
Compliance testing by the FDA has turned up x-ray emissions from computer terminals. For example,
a 1981 study found that roughly one out of 12 VDTs evaluated emitted x-radiation above the 0.5
milliroentgen per hour limit. The problems were confined to eight units (out of 91) which represented
three different models. The out-of-compliance models were either recalled to be modified to comply
with emissions requirements or were not permitted to be sold on the U.S. market.
The vast majority of computer monitors emit virtually no x-rays. In fact, their thick lead-enriched
glass screens can actually shield you from background x-radiation.

Ultraviolet Radiation

Ultraviolet radiation is part of sunlight, a growing part owing to the diminishing ozone layer in the
stratosphere. Its name describes it—ultraviolet is the invisible component of sunlight beyond the
violet end of the spectrum. It has shorter waves (180 to 400 nanometers) and higher frequencies than
visible light. Physically, that means that ultraviolet photons are more energetic than those of visible
light. In fact, the ultraviolet spectrum spans the transitional range between ionizing and non-ionizing
radiation. UV photons can be so energetic that they cause chromosomal damage. UV has been
implicated in causing cancer. It can also burn the skin. UV also triggers the skin’s protective tanning
reaction.
Unlike x-rays, however, ultraviolet is not penetrating. The thick atmospheric blanket of ozone stops
it well; a thick blanket of cotton, or even a thin shirt, does quite a good job. Consequently, the
effects of ultraviolet on the human body are limited to the places that sunlight can reach—the skin and
the eyes. Today, it is generally agreed that exposure to ultraviolet radiation can cause skin cancer,
cataracts, conjunctivitis (irritation of the lining of the eye), keratitis (inflammation of the cornea),
pain, and light intolerance.
Current evidence indicates that UV exposure is cumulative. That is, the longer you bathe in its rays
over your lifetime and the stronger the rays, the greater the chances of unfavorable consequences. It is
also believed that exposure early in life has a greater effect than later exposure.
All computer monitors emit some UV along with the visible light of their images. However, the most
energetic and thus the most dangerous wavelengths cannot escape the CRT. Ordinary glass strongly
absorbs ultraviolet radiation with wavelengths shorter than about 350 nanometers. The only part of the
UV spectrum which may be present in CRT emissions is, therefore, the range 350 to 400 nanometers.
(Some sources list the beginning of UV radiation at 380 nanometers.)
Ultraviolet emissions are present to some extent from monitors, but the emission level declines with
decreasing wavelength and is virtually absent in most cases at wavelengths shorter than about 350
nanometers. Because most color monitors use phosphors of the same family (P22), all have similar
UV emission characteristics. Invariably, however, monitor ultraviolet emissions are less than visible
radiation—typically no more than 5 percent of the level of the maximum emission in the visible
spectrum. In contrast, a "deluxe cool white" fluorescent tube, the kind often used in office lighting,
puts out UV at a level of about 20 percent of its maximum visible emissions. Based on typical monitor
brightness levels and office lighting levels mandated by OSHA, CRT emissions of UV would be a
fraction (in the range of one-quarter) of the level reflected from a white sheet of paper on a desktop
when the monitor is operated under the test conditions (brightness and contrast advanced fully, screen
fully lit). In normal operation, UV emissions from a monitor would be substantially less. In other
words, although monitors do emit measurable amounts of UV radiation, fluorescent lighting poses
many times the danger of the typical computer monitor. Sunlight is substantially more dangerous.

Microwave Radiation

Microwave energy—the stuff that cooks in microwave ovens and blasts radar beams over the
horizon—has well-documented effects on living cells. Like a potato or poodle in the microwave oven,
they cook. The mechanism is well understood. The energy of the microwave signal excites water and
fat molecules, transferring itself to them as thermal energy (heat). Food is cooked by microwaves because
the heat induced in it accumulates faster than it radiates away, raising the temperature. Cell
proteins break down as temperature increases. Cells die. The food is cooked.
Microwaves penetrate moderate distances through living tissue. Consequently, organs inside a body
can be heated (potentially killed) by microwave beams. The thermal energy of microwaves is also
known to cause cataracts.
Wavelengths longer than microwaves (those typical of VHF television, FM, and standard broadcast
radio signals) also cause thermal effects by transferring energy to materials, but they are not as
reactive with biological tissue. They tend to penetrate without being absorbed.
Outside of the thermal effects, microwave and other radiation in the radio spectrum (that is, higher in
frequency than about 30 kilohertz) is thought not to pose other health hazards. Some studies have
implicated microwaves in causing cataracts, although most of these have been at intensities that cause
thermal effects. Cataracts caused by non-thermal microwaves have been reported, although the
preponderance of studies has found otherwise.
Microwave and other radio frequency heating requires very strong signals. Microwave ovens operate
at levels of hundreds of watts. Computer monitors don’t even draw hundreds of watts from wall
outlets. Although they may emit some microwaves, the amounts are small. In fact, the government
assures that such emissions are well below the levels associated with heating effects. All computer
equipment must already be certified to abide by subpart B (formerly subpart J) of part 15 of the
Federal Communications Commission rules and regulations, which sets interference standards that are
well below (by orders of magnitude) the radiation levels necessary for thermal effects. Whereas health
standards deal in volts per meter, the FCC interference standard limits emissions to microvolts per
meter (the exact value depends on frequency).
Moreover, PCs do not directly create microwave energy. Although microwaves are theoretically
created as harmonics of the signals generated inside the computer, the levels of the microwave signals
are essentially unmeasurable.
A possibility exists that there are nonthermal microwave effects that may be active at lower signal
levels. If these effects are real, they are believed to be a result of low-frequency modulation of the
microwaves. These modulation effects would be similar to the effects of direct radiation at such lower
frequencies, discussed in the next section.

Low-Frequency Radiation

A number of recent studies have correlated the strong extremely low frequency (ELF) fields associated with power lines and
electrical distribution systems with increased cancer risk. Electric blankets and water bed heaters have
also been implicated. ELF fields similar to those generated by some computer equipment have
demonstrated biological effects in the laboratory. These effects include changes in cell membrane
permeability, altered prenatal development, and the promotion of the growth of cancerous cells.
ELF research has been of two types: laboratory studies on cell cultures and animal tissues, and
epidemiological studies—research that starts with sick people and attempts to find a common link
among their backgrounds.
The epidemiological studies of power distribution systems have mostly taken the form of correlating
illnesses with the exposure to ELF fields. To date, the results of these studies have been mixed. The
most recent, however, have been aimed at answering the criticisms of previous studies that found a
positive correlation of the conditions in which large ELF fields would be present (the fields
themselves were not measured) with childhood cancers. In the United States and Sweden, correlations
between cancer and strong ELF fields associated with electrical distribution systems have been found,
although other, contradictory studies have also been published.
In the laboratory, the potential biological effects of ELF at levels below those which would cause the
heating of tissue have been extensively investigated for about the last decade. The results of that
research are beginning to show that far from being innocuous and non-interactive with biological
tissue, ELF electrical and magnetic fields can be subtly active with both beneficial and harmful
effects. On the positive side, ELF fields are used in treating bone fractures. The fields apparently

http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/wrhxc.htm (11 de 14) [23/06/2000 07:06:16 p.m.]


Winn L. Rosch Hardware Bible, Electronic Edition, Appendix C

promote bone growth and hasten healing. On the downside, ELF fields have demonstrated effects on
calcium channel permeability of cell membranes which can affect a variety of cell functions,
including the transmission of electrical signals in nerve tissue. The fields also have been shown to
affect protein synthesis and alter circadian rhythms. ELF fields also appear to promote the growth of
cancerous cells. Research has further demonstrated that developing nervous systems may be
particularly susceptible to ELF fields, and that these effects may be latent, showing up only in specific
situations or at later times.
Of course, not all of these dire studies stand up to scrutiny. The results of some have failed attempts at
replication. And, of course, since these lab studies were carried out in vitro, there is no guarantee that
the effects on human beings will be identical. A consensus is, however, emerging that ELF fields can
be biologically active at levels lower than were once thought possible.
One of the discoveries about ELF fields is that they do not behave like ionizing radiation. For
example, the fields are not energetic enough at the molecular level to change or destroy the chemical
bonds in cells. They don’t damage chromosomes. Instead, the ELF fields seem to mimic the electrical
changes that normally occur in living cells in the body. For example, by changing the calcium
permeability of cells, they can change the response of a nerve cell to stimulation. This mimicking of
normal cellular processes may be the root of the cancer-promoting potential of ELF. The membrane
sites at which some ELF reaction occurs appear to act as receptors for cancer-promoting chemicals. In
addition, ELF fields appear to increase the chemical activity of a compound, ornithine
decarboxylase, an effect that has been associated with cancer promotion. ELF fields also
disrupt the functions of cell gap junctions, another effect associated with cancer growth.
Some studies have found ELF fields to have an odd aspect that complicates research. Chemical
carcinogens and ionizing radiation are believed to behave in a linear fashion. That is, the dangers of
each increase as the exposure level increases. While some ELF effects show a similar relationship to
intensity, some studies have found "window" effects—biological effects occur only with certain field
strengths (or certain frequencies) of ELF and not at higher or lower values. In addition, the window
effects of ELF also appear to depend on the presence and orientation of static fields, like the earth’s
magnetic field. For example, one study on chick brain tissue showed changes in calcium ion flux with
60 Hz ELF fields with strengths of 35, 40, and 42.5 volts per meter, while fields of 25, 30, and 45
volts per meter showed no effect.
For health scientists, just the possibility of window effects is worrisome. If these effects are real
(doubts persist that they are), they would preclude the development of exposure standards. The effects
of ELF fields would vary with the individual experiencing them because the size and shape of one’s
body affects the strength of voltages and currents induced inside it by the ELF fields.
To complicate matters further, the waveform associated with ELF fields appears to affect their
biological activity. Least active appear to be the sinusoidal waves that are characteristic of
utility-supplied electricity. The most active appear to be pulsed fields like those generated by radar
and fields with sawtooth waveforms, which are characteristically generated by the sweep circuitry in
televisions and monitors.
Because of the potential harm that might be caused by these emissions, most monitor makers now
offer products that conform with the Swedish safety standards MPR and TCO. Monitors that meet
these standards have essentially no measurable emissions—the standards themselves represent the
limits of measurement.
If you have an older monitor that does not conform to these standards and you believe ELF fields are

dangerous, you can take steps to minimize your exposure to them. For instance, sit in front of your
computer and display, where the ELF fields are the weakest. Avoid sitting near the sides of nearby
computer monitors, particularly the left side. Because both the magnetic and electric ELF fields
generated by computer equipment fall off quickly with increased distance, you can minimize your
exposure by working as far from your computer and its display as is consistent with good ergonomics.
In other words, don’t back off so far you have to squint or strain to reach what you need to get at.
Monitors emit more ELF than do computer systems, and this radiation appears to be related to the
scanning signals used by their CRTs. You can avoid the fields associated with scanning signals
(which may be more dangerous than the more pervasive sinusoidal waveforms) by using a display
based on an alternative technology, such as an LCD panel.
It’s unlikely that your computer monitor will kill you. Even if the worst of the effects attributed to
ELF prove true, you likely face greater risks to your health from other forms of pollution, such as the
cigarette smoke you inhale (either your own or that of co-workers), the cholesterol in your
bloodstream, and the peanut butter you spread on your noontime sandwich.

Pregnancy

Those nefarious types who use scare tactics to capitalize on the fears of expectant families to sell
computer safety equipment usually cite one particular study that linked VDT usage with miscarriages.
Rarely do they go into detail about the study they use to support their dire warnings. The study,
reported in 1988 in the American Journal of Industrial Medicine, was conducted among 1583 women
who used obstetrics and gynecological clinics affiliated with Kaiser Permanente Medical Care
Program in the San Francisco Bay area. Its design goal was to determine the effect of the insecticide
malathion on early pregnancies. The scientists conducting the study were very thorough and presented
participants with a lengthy questionnaire. After crunching the statistics to find correlations, only one
result popped out. A higher rate of miscarriage was correlated with women who used VDTs more than
20 hours per week. The researchers themselves noted that the VDTs themselves were not the cause of
the miscarriages but attributed it to "an occupational effect not related to VDTs" such as stress or
working conditions. Recent studies tend to reinforce this conclusion, including a tightly controlled
1989 report published in the International Journal of Epidemiology.
Another worrisome development that some health-mongers cite is the appearance of clusters of
miscarriages among VDT workers. For example, among workers in the Dallas computer center of a
large retailer, eight out of twelve pregnancies in which conception occurred between May 1979 and
June 1980 resulted in spontaneous abortion or neonatal death. To anyone in the group, these
occurrences would seem dire. The National Centers for Disease Control investigated, however, and
determined that the problem was not related to proximity to VDTs or the time spent working on them.
In perspective, these clusters don’t even rise to the status of statistical aberrations. Remember, there
are thousands, if not hundreds of thousands of similar groups that work on PCs. A gaussian
distribution of miscarriages among groups would imply that a few groups would have an abnormally
high number of miscarriages and a few abnormally low. No one notices the low-end because (nearly)
everyone expects to have a problem-free pregnancy and a perfect child. Moreover, another factor
could be at work—such as a boss with a bullwhip who has elevated workplace stress to an art form.
The conclusion is not that you have nothing to worry about if you are pregnant and working on a PC.

Some sources indicate that up to 20 percent of all pregnancies end in miscarriage, PC present or not.
Your job that involves working on a PC may cause stress that can lead to health and pregnancy
problems. If you have suitable working conditions and you take care of yourself, however, you can
rest easy knowing that you won’t be causing your child-to-be any hardship.



Winn L. Rosch Hardware Bible, Electronic Edition, Appendix D

Appendix D

Data Coding
● BCD
● EBCDIC
● ASCII
● Unicode

Data Coding
Smart as they are, computers have difficulty reading ordinary text. While traditional alphabets work well
for physical representations of letters and words, they fail in the realm of electronics. Oddly, ancient
scribes never thought of applying digital techniques to their fledgling alphabets. Only in modern times
have people sought to standardize a correspondence between digital bit patterns and alphabetic
characters.
Four major systems have been developed for encoding characters as data. These include Binary
Coded Decimal, Extended Binary Coded Decimal Interchange Code, the American Standard Code for
Information Interchange, and UniCode.

BCD
When electronic calculators first appeared, the code was obvious: assign a binary code to each of the
ten numerals commonly used in our favored decimal system. The shortest code that works is four bits,
sufficient to encode 16 symbols. The leftover six can be used for mathematical operators or whatever
you like.

http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/wrhbad.htm (1 de 17) [23/06/2000 07:07:21 p.m.]



The basic code that uses four bits for the ten numerals is called Binary Coded Decimal or BCD and is
still used in some data systems. Table D.1 lists the simple BCD code.

Table D.1. Binary Coded Decimal


Binary code Numeral
0000 0
0001 1
0010 2
0011 3
0100 4
0101 5
0110 6
0111 7
1000 8
1001 9

Useful as it is, BCD doesn't go far enough. It encodes only numbers. Adding letters and control
information requires something more. Engineers were happy to come up with more, even too much
more.
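The packing scheme is simple enough to sketch in a few lines of Python. This is an illustration of packed BCD, two digits per byte; the function names are ours, not from any standard library:

```python
def to_bcd(number: int) -> bytes:
    """Pack a non-negative integer into packed-BCD bytes, two digits per byte."""
    digits = str(number)
    if len(digits) % 2:              # pad to an even number of digits
        digits = "0" + digits
    return bytes(
        (int(digits[i]) << 4) | int(digits[i + 1])
        for i in range(0, len(digits), 2)
    )

def from_bcd(data: bytes) -> int:
    """Unpack packed-BCD bytes back into an integer."""
    value = 0
    for byte in data:
        value = value * 100 + (byte >> 4) * 10 + (byte & 0x0F)
    return value

print(to_bcd(1997).hex())        # "1997" -- each nibble is one decimal digit
print(from_bcd(to_bcd(1997)))    # 1997
```

Because each nibble maps directly to one decimal digit, a hexadecimal dump of a BCD value reads the same as the decimal number it encodes, one reason BCD survives in real-time clock chips and similar hardware.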

EBCDIC
When IBM developed its 360-series of mainframe computers, it developed its own eight-bit data code
to encompass the alphabet. Building on the foundation of BCD, IBM extended the code by adding
four more bits and created what it called the Extended Binary Coded Decimal Interchange Code or
EBCDIC.
In the EBCDIC system as developed by IBM, characters were not assigned to all of the potential code
values, leaving many of them undefined. Although this code is still used by many larger computer
systems, few PC applications understand it. With any luck, you will never encounter EBCDIC files
when working with your PC. For the sake of completeness, however, Table D.2 lists EBCDIC codes.
Table D.2. The Extended Binary Coded Decimal Interchange Code
Decimal Hex Symbol or mnemonic Function
0 0 NUL Null
1 1 SOH Start of heading (indicator)
2 2 STX Start of text (indicator)
3 3 ETX End of text (indicator)
4 4 PF Punch off
5 5 HT Horizontal tab

6 6 LC Lower case
7 7 DEL Delete
8 8
9 9
10 A SMM Start of Manual Message
11 B VT Vertical tab
12 C FF Form feed
13 D CR Carriage return
14 E SO Shift out
15 F SI Shift in
16 10 DLE Data link escape
17 11 DC1 Device control 1
18 12 DC2 Device control 2
19 13 TM Tape mark
20 14 RES Restore
21 15 NL New line
22 16 BS Backspace
23 17 IL Idle
24 18 CAN Cancel
25 19 EM End of medium
26 1A CC Cursor control
27 1B CU1 Customer use 1
28 1C IFS Interchange file separator
29 1D IGS Interchange group separator
30 1E IRS Interchange record separator
31 1F IUS Interchange unit separator
32 20 DS Digit select
33 21 SOS Start of significance
34 22 FS Field separator

35 23
36 24 BYP Bypass
37 25 LF Line feed
38 26 ETB End of transmission block
39 27 ESC Escape
40 28
41 29
42 2A SM Set mode
43 2B CU2 Customer use 2
44 2C
45 2D ENQ Enquiry
46 2E ACK Acknowledge
47 2F BEL Bell
48 30
49 31
50 32 SYN Synchronous idle
51 33
52 34 PN Punch on
53 35 RS Reader stop
54 36 UC Upper case
55 37 EOT End of transmission
56 38
57 39
58 3A
59 3B CU3 Customer use 3
60 3C DC4 Device control 4
61 3D NAK Negative acknowledge
62 3E
63 3F SUB Substitute

64 40 SP Space
65 41
66 42
67 43
68 44
69 45
70 46
71 47
72 48
73 49
74 4A Cent sign
75 4B
76 4C < Less than sign
77 4D ( Open parenthesis
78 4E + Plus sign
79 4F | Logical OR
80 50 & Ampersand
81 51
82 52
83 53
84 54
85 55
86 56
87 57
88 58
89 59
90 5A ! Exclamation mark
91 5B $ Dollar sign
92 5C . Period

93 5D ) Close parenthesis
94 5E ; Semi-colon
95 5F
96 60 - Minus sign, hyphen
97 61 / Slash
98 62
99 63
100 64
101 65
102 66
103 67
104 68
105 69
106 6A
107 6B , Comma
108 6C % Percent sign
109 6D _ Underscore
110 6E > Greater than sign
111 6F ? Question mark
112 70
113 71
114 72
115 73
116 74
117 75
118 76
119 77
120 78
121 79

122 7A : Colon
123 7B # Number sign
124 7C @ At sign
125 7D ' Single quote
126 7E = Equal sign
127 7F " Double quote
128 80
129 81 a
130 82 b
131 83 c
132 84 d
133 85 e
134 86 f
135 87 g
136 88 h
137 89 i
138 8A
139 8B Open curly bracket
140 8C Bar
141 8D Close curly bracket
142 8E Tilde
143 8F
144 90
145 91 j
146 92 k
147 93 l
148 94 m
149 95 n
150 96 o

151 97 p
152 98 q
153 99 r
154 9A
155 9B
156 9C
157 9D
158 9E
159 9F
160 A0
161 A1
162 A2 s
163 A3 t
164 A4 u
165 A5 v
166 A6 w
167 A7 x
168 A8 y
169 A9 z
170 AA
171 AB
172 AC
173 AD
174 AE
175 AF
176 B0
177 B1
178 B2
179 B3

180 B4
181 B5
182 B6
183 B7
184 B8
185 B9
186 BA
187 BB
188 BC
189 BD
190 BE
191 BF
192 C0
193 C1 A
194 C2 B
195 C3 C
196 C4 D
197 C5 E
198 C6 F
199 C7 G
200 C8 H
201 C9 I
202 CA
203 CB
204 CC
205 CD
206 CE
207 CF
208 D0

209 D1 J
210 D2 K
211 D3 L
212 D4 M
213 D5 N
214 D6 O
215 D7 P
216 D8 Q
217 D9 R
218 DA
219 DB
220 DC
221 DD
222 DE
223 DF
224 E0
225 E1
226 E2 S
227 E3 T
228 E4 U
229 E5 V
230 E6 W
231 E7 X
232 E8 Y
233 E9 Z
234 EA
235 EB
236 EC
237 ED

238 EE
239 EF
240 F0 0 Zero
241 F1 1 One
242 F2 2 Two
243 F3 3 Three
244 F4 4 Four
245 F5 5 Five
246 F6 6 Six
247 F7 7 Seven
248 F8 8 Eight
249 F9 9 Nine
250 FA
251 FB
252 FC
253 FD
254 FE
255 FF
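You need not type the table in to experiment with the code: Python ships a codec for the common U.S. EBCDIC code page under the name cp037, whose letter and digit assignments match Table D.2 (a few punctuation positions vary among EBCDIC dialects):

```python
# cp037 is Python's name for the U.S. EBCDIC code page; its letter and
# digit assignments match Table D.2 (punctuation varies among dialects).
text = "HELLO"
ebcdic = text.encode("cp037")
print(ebcdic.hex().upper())      # C8C5D3D3D6 -- compare the table entries
print(ebcdic.decode("cp037"))    # round-trips back to HELLO
```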

ASCII
In small computer systems and the Internet, the most popular system for coding alphabetic characters
is the American Standard Code for Information Interchange or ASCII. Originally put to work when
serial communications was the common link between computers and terminals, and seven-bit words
were commonplace, the basic ASCII code uses seven bits to encode all the letters of the alphabet,
numerals, punctuation marks, and a range of message formatting codes. In PC storage, of course, a
byte is the standard unit of measure, and adding a bit doubles the range of symbols that the ASCII
code can identify. Many eight-bit elaborations of the basic ASCII code have consequently been
developed.
The basic 128 characters are generally inviolate. The first 32 characters are reserved as control codes,
instructions that tell data processing equipment how to handle the data. Alphabetic characters are
stored in two ranges, from 65 through 90 for the capital letters "A" through "Z" and from 97 to 122 for
lower case "a" through "z." The two ranges work neatly together because the codes for a specific
capital and lowercase letter will always differ by only one bit. Adding 20 (Hex) to the code of a
capital letter results in the code for its lowercase equivalent. The numerals run from 48 (representing
zero) to 57 (representing nine). Table D.3 lists the basic seven-bit ASCII code.

Decimal Hex Symbol Mnemonic or function


0 0 ^@ NUL (Used as a fill character)
1 1 ^A SOH
2 2 ^B STX
3 3 ^C ETX
4 4 ^D EOT
5 5 ^E ENQ
6 6 ^F ACK
7 7 ^G BEL
8 8 ^H BS
9 9 ^I HT
10 A ^J LF
11 B ^K VT
12 C ^L FF
13 D ^M CR
14 E ^N SO
15 F ^O SI
16 10 ^P DLE
17 11 ^Q DC1
18 12 ^R DC2
19 13 ^S DC3
20 14 ^T DC4
21 15 ^U NAK
22 16 ^V SYN
23 17 ^W ETB
24 18 ^X CAN
25 19 ^Y EM
26 1A ^Z SUB
27 1B ^[ ESC

28 1C ^\ FS
29 1D ^] GS
30 1E ^^ RS
31 1F ^_ US
32 20 SP Space character
33 21 ! Exclamation mark
34 22 " Double quotes
35 23 # Pound sign
36 24 $ Dollar sign
37 25 % Percent sign
38 26 & Ampersand
39 27 ' Single quote
40 28 ( Open parenthesis
41 29 ) Close parenthesis
42 2A * Asterisk
43 2B + Plus sign
44 2C , Comma
45 2D - Minus sign (hyphen)
46 2E . Period
47 2F / Slash
48 30 0 Zero
49 31 1 One
50 32 2 Two
51 33 3 Three
52 34 4 Four
53 35 5 Five
54 36 6 Six
55 37 7 Seven
56 38 8 Eight

57 39 9 Nine
58 3A : Colon
59 3B ; Semi-colon
60 3C < Less than sign
61 3D = Equals sign
62 3E > Greater than sign
63 3F ? Question mark
64 40 @ At sign
65 41 A
66 42 B
67 43 C
68 44 D
69 45 E
70 46 F
71 47 G
72 48 H
73 49 I
74 4A J
75 4B K
76 4C L
77 4D M
78 4E N
79 4F O
80 50 P
81 51 Q
82 52 R
83 53 S
84 54 T
85 55 U

86 56 V
87 57 W
88 58 X
89 59 Y
90 5A Z
91 5B [ Open bracket
92 5C \ Backslash
93 5D ] Close bracket
94 5E ^ Caret
95 5F _ Underscore
96 60 `
97 61 a
98 62 b
99 63 c
100 64 d
101 65 e
102 66 f
103 67 g
104 68 h
105 69 i
106 6A j
107 6B k
108 6C l
109 6D m
110 6E n
111 6F o
112 70 p
113 71 q

114 72 r
115 73 s
116 74 t
117 75 u
118 76 v
119 77 w
120 78 x
121 79 y
122 7A z
123 7B { Open curly bracket
124 7C | Bar
125 7D } Close curly bracket
126 7E ~ Tilde
127 7F
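The single-bit relationship between the two alphabetic ranges is easy to demonstrate: bit 5, with value 20 (Hex), is the only bit that differs between a capital letter and its lowercase counterpart:

```python
# Bit 5 (0x20) is the only difference between an ASCII capital letter
# and its lowercase counterpart.
for upper in "ABCXYZ":
    lower = upper.lower()
    assert ord(lower) - ord(upper) == 0x20
    assert chr(ord(upper) | 0x20) == lower    # setting bit 5 lowercases
    assert chr(ord(lower) & ~0x20) == upper   # clearing bit 5 uppercases

print(hex(ord("A")), hex(ord("a")))           # 0x41 0x61
```

This is why classic case-conversion routines could simply mask each byte with DF (Hex) to capitalize it, although the trick is safe only for bytes already known to be letters.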

You're likely to run into two different sets of eight-bit extensions to the ASCII code. When working
with DOS, you're most likely to use the IBM extended character set that puts many of the extra codes
to work specifying additional symbols that are often used for drawing block graphics on monitors and
printed output. Windows has its own Windows extended character set that omits the block graphics
(after all, they are hardly necessary for an interface built around bit-mapped graphics) and instead
includes more foreign language characters and symbols.
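The divergence between the two extension sets shows up as soon as you decode a high-half byte. Python names the IBM extended character set cp437 and the Windows set cp1252, so the same byte yields very different symbols:

```python
# The high half of the eight-bit code means different things in the two
# common extensions: 0xB3 is a box-drawing bar in the IBM set (code
# page 437) but a superscript three in the Windows set (code page 1252).
byte = b"\xb3"
print(byte.decode("cp437"))     # U+2502, a box-drawing vertical bar
print(byte.decode("cp1252"))    # U+00B3, superscript three
```

The lower 128 codes decode identically in both sets, which is why plain seven-bit ASCII text moves between DOS and Windows without trouble.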

UniCode
The eight-bit ASCII code cannot handle all the characters and symbols used by all languages
world-wide. Some languages have thousands of distinct characters. If the PC is to be useful
throughout the world, it requires some means of accommodating a wider range of characters. The
UniCode Worldwide Character Standard was designed to bridge this language gap. By using a 16-bit
code for individual characters, UniCode has the potential to encode 65,536 distinct symbols. The
downside, of course, is that any program must reserve twice the space to store individual characters.
UniCode has been incorporated into the latest operating system designs and their file systems.
Directory entries in new file systems, for example, make allowances for 16-bit character entries.
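The doubled storage is easy to observe with Python's UTF-16 codec (modern UTF-16 postdates the plain 16-bit encoding described here, but for these characters the sizes match):

```python
# Every character in this string lies in the Basic Multilingual Plane,
# so each occupies exactly two bytes in little-endian UTF-16.
text = "Hello"
print(len(text.encode("ascii")))       # 5 bytes as ASCII
print(len(text.encode("utf-16-le")))   # 10 bytes as 16-bit characters
print(text.encode("utf-16-le").hex())  # 'H' becomes 4800, and so on
```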
UniCode makes a distinction between characters and glyphs. Under the UniCode definition, a glyph is
the visual representation of a character. The character is the underlying concept, the understood
meaning of the symbol. A glyph is what prints; the character is what you understand the glyph to
mean. According to the UniCode design, a character has no inherent image of its own. A font, under
this definition, therefore, is a collection of glyphs rather than characters.
UniCode is also language neutral. Although it encodes the symbols used by many different languages,

merely examining a list of the characters used does not in itself reveal what language is being
encoded. UniCode requires a higher-level protocol to identify the language in use.
In its current version, 2.0, UniCode includes characters not only for most major language writing
systems in use in the world today—a total of 25 different scripts—but it also includes symbols
for classical and historic languages. A total of 38,885 characters are currently defined for use with
languages in Africa, Asia, Europe, the Middle East, North and South America, and Oceania.
Even at this, the current version is not definitive. The symbol needs for some language systems are
still being defined and eventually will be accommodated into future versions.
The breadth of the code is so large that the complete standard requires its own book, The UniCode
Standard, Version 2.0 (Addison-Wesley, 1996; ISBN 0-201-48345-9).


http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/19wrh01.gif [23/06/2000 07:29:58 p.m.]


http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/19wrh02.gif

http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/19wrh02.gif [23/06/2000 07:30:15 p.m.]


http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/19wrh03.gif

http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/19wrh03.gif [23/06/2000 07:30:19 p.m.]


http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/19wrh04.gif

http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/19wrh04.gif [23/06/2000 07:30:42 p.m.]


http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/19wrh05.gif

http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/19wrh05.gif [23/06/2000 07:30:58 p.m.]


http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/19wrh06.gif

http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/19wrh06.gif [23/06/2000 07:31:08 p.m.]


http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/19wrh07.gif

http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/19wrh07.gif [23/06/2000 07:31:28 p.m.]


http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/18wrh01.gif

http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/18wrh01.gif [23/06/2000 07:31:45 p.m.]


http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/18wrh02.gif

http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/18wrh02.gif [23/06/2000 07:32:00 p.m.]


http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/18wrh03.gif

http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/18wrh03.gif [23/06/2000 07:32:18 p.m.]


http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/18wrh04.gif

http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/18wrh04.gif [23/06/2000 07:32:29 p.m.]


http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/18wrh05.gif

http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/18wrh05.gif [23/06/2000 07:32:36 p.m.]


http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/17wrh01.gif

http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/17wrh01.gif [23/06/2000 07:33:01 p.m.]


http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/17wrh02.gif

http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/17wrh02.gif [23/06/2000 07:33:11 p.m.]


http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/17wrh03.gif

http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/17wrh03.gif [23/06/2000 07:33:23 p.m.]


http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/17wrh04.gif

http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/17wrh04.gif [23/06/2000 07:34:59 p.m.]


http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/17wrh05.gif

http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/17wrh05.gif [23/06/2000 07:35:13 p.m.]


http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/17wrh06.gif

http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/17wrh06.gif [23/06/2000 07:35:27 p.m.]


http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/17wrh07.gif

http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/17wrh07.gif [23/06/2000 07:35:41 p.m.]


http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/17wrh08.gif

http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/17wrh08.gif [23/06/2000 07:35:52 p.m.]


http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/17wrh09.gif

http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/17wrh09.gif [23/06/2000 07:36:30 p.m.]


http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/17wrh10.gif

http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/17wrh10.gif [23/06/2000 07:36:54 p.m.]


http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/17wrh12.gif

http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/17wrh12.gif [23/06/2000 07:37:19 p.m.]


http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/17wrh13.gif

http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/17wrh13.gif [23/06/2000 07:37:28 p.m.]


http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/17wrh14.gif

http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/17wrh14.gif [23/06/2000 07:37:37 p.m.]


http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/17wrh15.gif

http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/17wrh15.gif [23/06/2000 07:37:55 p.m.]


http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/15wrh01.gif

http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/15wrh01.gif [23/06/2000 07:38:15 p.m.]


http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/15wrh02.gif

http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/15wrh02.gif [23/06/2000 07:38:26 p.m.]


http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/15wrh03.gif

http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/15wrh03.gif [23/06/2000 07:38:46 p.m.]


http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/13wrh01.gif

http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/13wrh01.gif [23/06/2000 07:39:03 p.m.]


http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/13wrh02.gif

http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/13wrh02.gif [23/06/2000 07:39:32 p.m.]


http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/12wrh01.gif

http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/12wrh01.gif [23/06/2000 07:39:44 p.m.]


http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/10wrh01.gif

http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/10wrh01.gif [23/06/2000 07:40:39 p.m.]


http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/08wrh01.gif

http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/08wrh01.gif [23/06/2000 07:40:46 p.m.]


http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/06wrh01.gif

http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/06wrh01.gif [23/06/2000 07:40:55 p.m.]


http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/04wrh01.gif

http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/04wrh01.gif [23/06/2000 07:41:19 p.m.]


http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/04wrh02.gif

http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/04wrh02.gif [23/06/2000 07:41:31 p.m.]


http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/03wrh01.gif

http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/03wrh01.gif [23/06/2000 07:41:46 p.m.]


http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/03wrh02.gif

http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/03wrh02.gif [23/06/2000 07:42:02 p.m.]


http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/03wrh03.gif

http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/03wrh03.gif [23/06/2000 07:42:37 p.m.]


http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/03wrh04.gif

http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/03wrh04.gif [23/06/2000 07:42:47 p.m.]


http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/03wrh05.gif

http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/03wrh05.gif [23/06/2000 07:42:59 p.m.]


http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/03wrh06.gif

http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/03wrh06.gif [23/06/2000 07:43:04 p.m.]


http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/03wrh07.gif

http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/03wrh07.gif [23/06/2000 07:43:14 p.m.]


http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/02wrh01.gif

http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/02wrh01.gif [23/06/2000 07:43:20 p.m.]


http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/01wrh02.gif

http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/01wrh02.gif [23/06/2000 07:43:36 p.m.]


http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/01wrh03.gif

http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/01wrh03.gif [23/06/2000 07:43:58 p.m.]


http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/01wrh04.gif

http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/01wrh04.gif [23/06/2000 07:44:57 p.m.]


http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/01wrh01.gif

http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/01wrh01.gif [23/06/2000 07:45:10 p.m.]


Drive Parameter Table

This set of tables includes setup values for a wide variety of hard disk drives. Because most hard disk drive makers now
provide on-line listings of the setup parameters of their current products at their web sites, the chief use of this listing
is in finding the setup values required by earlier drives: products no longer supported by their manufacturers, and
products made by manufacturers either no longer in existence or no longer in the disk drive business.
Hard disk drives are listed by drive maker and model number. To locate the parameters of your hard disk, first choose a
drive manufacturer from the following list:
● Alps Electric (USA), Inc.

● Ampex Corporation
● Areal Technology, Inc.
● Atasi Technology, Inc.
● BASF Corporation
● Bull (Honeywell/Bull)
● Control Data Corporation
● Century Data
● C. Itoh (C. I. E. America)
● CMI
● Cogito
● Conner Peripherals
● Disctec (Disk Technologies Corporation)
● Disctron
● Epson America
● Fuji Corporation
● Fujitsu America
● Hewlett-Packard
● Hitachi
● IBM
● IMI (International Memories, Inc.)
● Kalok Corporation
● Kyocera Electronics
● Lapine Technology
● Maxtor Corporation
● Memorex Corporation
● Micropolis Corporation

http://www.viking.delmar.edu/COURSES/Cis312J/Ebook/harddisk.htm (1 de 2) [23/06/2000 07:45:59 p.m.]



● Microscience International Corporation


● MiniScribe Corporation
● MMI (Micro Memory Incorporated)
● NEC
● Newbury Data, Limited
● Okidata
● Olivetti
● Otari Corporation
● Panasonic Industrial Company
● PrairieTek Corporation
● Priam Systems
● Quantum Corporation
● Rodime
● Samsung Electronics Company
● Seagate Technology
● Siemens Information Systems
● Shugart Associates
● Syquest Technology
● Tandon Corporation
● Teac America
● Toshiba
● Tulin Corporation
● Vertex
● Western Digital Corporation
● Zentec

The listings in these tables are believed to be accurate but, owing to the vagaries of time and the wide variety of sources, the accuracy cannot be guaranteed. The information is
provided as a convenience only. The most accurate setup information is that provided by the drive makers themselves, and only that information should be relied upon. In any
event, the values listed here will give you a starting point for your own experimentation in getting an older disk drive working in your PC.
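The capacity column in the tables that follow is implied by the geometry columns: each sector stores 512 bytes, so capacity is roughly cylinders × heads × sectors × 512 bytes. The sketch below is only a rough cross-check of any entry, not part of the original tables; drive makers round differently, and some quote decimal rather than binary megabytes.

```python
def chs_capacity_mb(cylinders, heads, sectors, bytes_per_sector=512):
    """Approximate capacity (binary megabytes) implied by a CHS geometry."""
    total_bytes = cylinders * heads * sectors * bytes_per_sector
    return total_bytes / (1024 * 1024)

# Example: the Alps DRND-10A is listed at 615 cylinders, 2 heads, 17 sectors.
print(round(chs_capacity_mb(615, 2, 17), 1))  # prints 10.2; the table rounds to 10MB
```

The same arithmetic explains, for instance, why doubling the head count (DRND-20A, 4 heads) doubles the listed capacity to 20MB.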

This page and all pages subsidiary to it are Copyright © 1997 by Winn L. Rosch. All rights reserved.


Alps Electric (USA), Inc.


Model Interface Capacity Cylinders Heads Sectors
DRND-10A MFM 10MB 615 2 17
DRND-20A MFM 20MB 615 4 17
DRPO-20D MFM/RLL 20MB 615 2 26
RPO-20A MFM/RLL 20MB 615 2 26


AMPEX
Model Interface Capacity Cylinders Heads Sectors
PYXIS-13 MFM 10MB 320 4 17
PYXIS-20 MFM 15MB 320 6 17
PYXIS-27 MFM 20MB 320 8 17
PYXIS-7 MFM 5MB 320 2 17


Areal Technology
Model Interface Capacity Cylinders Heads Sectors
A120 ATA 124MB 1024 4 60
A180 ATA 181MB 1488 4 60
MD-2060 ATA 61MB 1024 2 60
MD-2080 ATA 80MB 1323 2 60


Atasi Technology
Model Interface Capacity Cylinders Heads Sectors
AT-6120 ESDI 1051MB 1925 15 71
AT-3020 MFM 17MB 645 3 17
AT-3033 MFM 28MB 645 5 17
AT-3046 MFM 39MB 645 7 17
AT-3051 MFM 43MB 704 7 17
AT-3051+ MFM 44MB 733 7 17
AT-3053 MFM 44MB 733 7 17
AT-3075 MFM 67MB 1024 8 17
AT-3085 MFM/RLL 71MB 1024 8 26
AT-3128 MFM 109MB 1024 8 26
AT-676 ESDI 765MB 1632 15 54


BASF Corporation
Model Interface Capacity Cylinders Heads Sectors
6185 MFM 23MB 440 6 17
6186 MFM 15MB 440 4 17
6187 MFM 8MB 440 2 17
6188-R1 MFM 10MB 612 2 17
6188-R3 MFM 21MB 612 4 17


Bull
Honeywell/Bull
Model Interface Capacity Cylinders Heads Sectors
D-530 MFM 26MB 987 3 17
D-550 MFM 43MB 987 5 17
D-570 MFM 60MB 987 7 17
D-585 MFM 71MB 1166 7 17


Control Data Corporation


Model Interface Capacity Cylinders Heads Sectors
94155-86 MFM 72MB 925 9 17
94208-75 ATA 60MB 969 5 26
94351-126 SCSI 111MB 1068 7 29
24221-125M SCSI 110MB 1024 3 36
24221-209M SCSI 183MB 1024 5 36
94155-120 MFM/RLL 102MB 960 8 26
94155-135 MFM 115MB 960 9 26
94155-19 MFM 18MB 697 3 17
94155-21 MFM 18MB 697 3 17
94155-25 MFM 24MB 697 4 17
94155-28 MFM 24MB 697 4 17
94155-36 MFM 30MB 697 5 17
94155-38 MFM 31MB 733 5 17
94155-48 MFM 40MB 925 5 17
94155-51 MFM 43MB 989 5 17
94155-57 MFM 48MB 925 6 17
94155-67 MFM 58MB 925 7 17
94155-77 MFM 64MB 925 8 17
94155-85 MFM 71MB 1024 8 17
94155-96 MFM 80MB 1024 9 17
94156-48 ESDI 40MB 925 5 17
94156-67 ESDI 56MB 925 7 17
94156-86 ESDI 72MB 925 9 17
94161-101 SCSI 86MB 969 5 26
94161-121 SCSI 120MB 969 7 26
94161-141 SCSI 140MB 969 7 26
94161-155 SCSI 150MB 969 9 36
94161-182 SCSI 155MB 969 9 36
94166-101 ESDI 84MB 969 5 34
94166-141 ESDI 118MB 969 7 34
94166-182 ESDI 152MB 969 9 34
94171-300 SCSI 288MB 1365 9 36
94171-344 SCSI 335MB 1549 9 36
94171-350 SCSI 300MB 1412 9 46
94171-375 SCSI 375MB 1549 9 35
94171-376 SCSI 330MB 1546 9 45


94181-385D SCSI 337MB 791 15 36


94181-385H SCSI 330MB 791 15 55
94181-574 SCSI 574MB 1549 15 36
94181-702 SCSI 601MB 1546 15 54
94181-702M SCSI 613MB 1549 15 54
94186-265 ESDI 221MB 1412 9 34
94186-324 ESDI 270MB 1412 11 34
94186-383 ESDI 319MB 1412 13 34
94186-383H ESDI 319MB 1224 15 34
94186-383S ESDI 338MB 1412 13 36
94186-442 ESDI 368MB 1412 15 34
94186-442H ESDI 368MB 1412 15 34
94191-766 SCSI 676MB 1632 15 54
94191-766M SCSI 676MB 1632 15 54
94196-383 ESDI 338MB 1412 13 34
94196-766 ESDI 664MB 1632 15 54
94204-65 ATA 65MB 948 5 26
94204-71 ATA 71MB 1032 5 26
94204-74 ATA 65MB 948 5 26
94204-81 ATA 71MB 1032 5 26
94205-30 MFM 25MB 989 3 17
94205-41 MFM/RLL 38MB 989 3 26
94205-51 MFM/RLL 43MB 989 3 26
94205-77 MFM/RLL 65MB 989 5 26
94211-106 ESDI 89MB 1024 5 34
94211-106 SCSI 91MB 1022 5 26
94211-209 SCSI 142MB 1547 5 36
94211-91 SCSI 91MB 969 5 36
94221-125 SCSI 107MB 1544 3 36
94221-190 SCSI 190MB 1547 5 36
94221-209 SCSI 183MB 1544 5 36
94241-383 SCSI 338MB 1261 7 36
94241-502 SCSI 435MB 1755 7 69
94244-219 ATA 191MB 1747 4 54
94244-274 ATA 241MB 1747 5 54
94244-383 ATA 338MB 1747 7 54
94246-182 ESDI 160MB 1453 4 54
94246-182 ATA 160MB 1453 4 54
94246-383 ESDI 338MB 1747 7 54
94246-383 ATA 338MB 1747 7 54
94314-136 ATA 120MB 1068 5 36
94316-111 ESDI 98MB 1072 5 36
94316-136 ESDI 120MB 1268 5 36
94316-155 ESDI 138MB 1072 7 36
94316-200 ESDI 177MB 1072 9 36
94335-100 MFM 83MB 1072 9 17
94335-150 MFM/RLL 128MB 1072 9 26
94335-55 MFM 46MB 1072 5 17


94351-111 SCSI 98MB 1068 5 36


94351-128 SCSI 111MB 1068 7 36
94351-133 SCSI 116MB 1268 7 36
94351-133S SCSI-2 116MB 1268 7 36
94351-134 SCSI 117MB 1068 7 36
94351-155 SCSI 138MB 1068 7 36
94351-155S SCSI-2 138MB 1068 7 36
94351-160 SCSI 142MB 1068 9 29
94351-172 SCSI 150MB 1068 9 36
94351-186S SCSI-2 163MB 1268 7 36
94351-200 SCSI 177MB 1068 9 36
94351-200 SCSI-2 177MB 1068 9 36
94351-230 SCSI 210MB 1272 9 36
94351-90 SCSI 79MB 1068 5 29
94354-111 ATA 98MB 1072 5 36
94354-126 ATA 111MB 1072 7 29
94354-133 ATA 117MB 1272 5 36
94354-135 ATA 119MB 1072 9 29
94354-155 ATA 138MB 1072 7 36
94354-160 ATA 143MB 1072 9 29
94354-172 ATA 157MB 1072 9 36
94354-186 ATA 164MB 1272 7 36
94354-200 ATA 177MB 1072 9 36
94354-230 ATA 211MB 1272 9 36
94354-90 ATA 79MB 1072 5 29
94356-111 ESDI 98MB 1072 5 36
94356-155 ESDI 138MB 1072 7 36
94356-200 ESDI 177MB 1072 9 36
94601-767H SCSI-2 665MB 1356 15 64
94601-767M SCSI 676MB 1508 15 54


Century Data
Model Interface Capacity Cylinders Heads Sectors
CAST 10203E ESDI 55MB 1050 3 35
CAST 10203S SCSI 55MB 1050 3 35
CAST 10304E ESDI 75MB 1050 4 35
CAST 10304S SCSI 75MB 1050 4 35
CAST 10305E ESDI 94MB 1050 5 35
CAST 10305S SCSI 94MB 1050 5 35
CAST 14404E ESDI 114MB 1590 4 35
CAST 14404S SCSI 114MB 1590 4 35
CAST 14405E ESDI 140MB 1590 5 35
CAST 14405S SCSI 140MB 1590 5 35
CAST 14406E ESDI 170MB 1590 6 35
CAST 14406S SCSI 170MB 1590 6 35
CAST 24509E ESDI 258MB 1599 9 35
CAST 24509S SCSI 258MB 1599 9 35
CAST 24611E ESDI 315MB 1599 11 35
CAST 24611S SCSI 315MB 1599 11 35
CAST 24713E ESDI 372MB 1599 13 35
CAST 24713S SCSI 372MB 1599 13 35


C. Itoh
C. I. E. America
Model Interface Capacity Cylinders Heads Sectors
YD-3042 MFM/RLL 44MB 788 4 26
YD-3082 MFM/RLL 87MB 788 8 26
YD-3530 MFM 32MB 731 5 17
YD-3540 MFM 45MB 731 7 17


CMI
Computer Memories, Inc.
Model Interface Capacity Cylinders Heads Sectors
CM5205 MFM 4MB 256 2 17
CM3206 MFM 10MB 306 4 17
CM3426 MFM 20MB 615 4 17
CM5206 MFM 5MB 306 2 17
CM5410 MFM 8MB 256 4 17
CM5412 MFM 10MB 306 4 17
CM5616 MFM 13MB 256 6 17
CM5619 MFM 15MB 306 6 17
CM5826 MFM 21MB 306 8 17
CM6213 MFM 11MB 640 2 17
CM6426 MFM 21MB 615 4 17
CM6426S MFM 22MB 640 4 17
CM6640 MFM 33MB 640 6 17
CM7660 MFM 50MB 960 6 17
CM7880 MFM 67MB 960 8 17


Cogito
Model Interface Capacity Cylinders Heads Sectors
CG-906 MFM 5MB 306 2 17
CG-912 MFM 11MB 306 4 17
CG-925 MFM 21MB 612 4 17
PT-912 MFM 11MB 612 2 17
PT-925 MFM 21MB 612 4 17


Conner Peripherals
Model Interface Capacity Cylinders Heads Sectors
CFA850A EIDE 852MB 3659 4 80 to 144
CFA1275A EIDE 1278MB 3659 6 80 to 144
CFL-350A EIDE 350MB 2225 4 54 to 96
CFL-420A EIDE 422MB 2393 4 60 to 107
CFP1080E Fast Wide SCSI-2 1080MB 3658 6 66 to 120
CFP1080S Fast SCSI-2 1080MB 3658 6 66 to 120
CFP2105E Fast Wide SCSI-2 2147MB 3948 10 67 to 139
CFP2105S Fast SCSI-2 2147MB 3948 10 67 to 139
CFP2105W Fast/Wide SCSI-2 2147MB 3948 10 67 to 139
CFP2107E Fast Wide SCSI-2 2147MB 4016 10 69 to 124
CFP2107S Fast SCSI-2 2147MB 4016 10 69 to 124
CFP2107W Fast/Wide SCSI-2 2147MB 4016 10 69 to 124
CFP4207E Fast Wide SCSI-2 4294MB 4016 20 69 to 124
CFP4207S Fast SCSI-2 4294MB 4016 20 69 to 124
CFP4207W Fast/Wide SCSI-2 4294MB 4016 20 69 to 124
CFS210A ATA (Settings) 213MB 685 16 38
CFS270A ATA (Settings) 270MB 600 14 63
CFS420A ATA (Settings) 426MB 826 16 63
CFS425A EIDE 425MB 3687 2 78 to 144
CFS540A ATA (Settings) 541MB 1050 16 63
CFS541A EIDE 540MB 3924 2 90 to 170
CFS635A EIDE 635MB 3640 3 78 to 144
CFS850A EIDE 850MB 3640 4 77 to 143
CFS1081A EIDE 1080MB 3924 4 90 to 170
CFS1275A EIDE 1275MB 3640 6 77 to 143
CFS1621A EIDE 1620MB 3930 6 90 to 170
CP-2020 SCSI (Settings) 21MB 642 2 32
CP-2024 ATA 21MB 653 2 32
CP-2034 ATA 32MB 823 2 38
CP-2064 ATA 64MB 823 4 38
CP-2084 ATA (Settings) 85MB 548 8 38
CP-2088 ATA (Settings) 85MB 548 8 38
CP-2124 ATA (Settings) 122MB 762 8 39
CP-2304 ATA 209MB 1348 8 39
CP-3000 ATA (Settings) 43MB 976 5 17


CP-30060 SCSI (Settings) 61MB 1524 2 39


CP-30064 ATA (Settings) 61MB 762 4 39
CP-30080 SCSI (Settings) 84MB 1053 4 39
CP-30080E SCSI (Settings) 85MB 1806 2 46
CP-30084 ATA (Settings) 84MB 526 8 39
CP-30084E ATA 85MB 905 4 46
CP-30100 SCSI (Settings) 120MB 1522 4 39
CP-30104 ATA (Settings) 120MB 1522 4 39
CP-30124 ATA (Settings) 126MB 895 5 55
CP-30170E SCSI (Settings) 170MB 1806 4 46
CP-30174E ATA (Settings) 170MB 903 8 46
CP-3020 SCSI (Settings) 21MB 622 2 33
CP-30200 SCSI (Settings) 213MB 2119 4 49
CP-30204 ATA (Settings) 213MB 683 16 38
CP-3022 ATA 21MB 622 2 33
CP-3024 ATA (Settings) 22MB 636 2 33
CP-30254 ATA (Settings) 252MB 895 10 55
CP-30344 ATA (Settings) 340MB 904 16 46
CP-3040 SCSI (Settings) 42MB 1026 2 40
CP-3044 ATA (Settings) 43MB 1047 2 40
CP-30540 SCSI (Settings) 528MB 2249 6 59 to 89
CP-30544 ATA 528MB 1023 16 63
CP-3100 SCSI (Settings) 105MB 776 8 33
CP-3102 ATA 104MB 776 8 33
CP-3104 ATA (Settings) 105MB 776 8 33
CP-3111 ATA 112MB 832 8 33
CP-3114 ATA (Settings) 112MB 832 8 33
CP-31370 SCSI (Settings) 1340MB 2094 14 59 to 95
CP-3180 SCSI (Settings) 84MB 832 6 33
CP-3184 ATA (Settings) 84MB 832 6 33
CP-3200/F SCSI (Settings) 213MB 1366 8 38
CP-3204/F ATA (Settings) 213MB 683 16 38
CP-3304 ATA (Settings) 340MB 659 16 63
CP-3360 SCSI (Settings) 363MB 1807 8 49
CP-3364 ATA (Settings) 362MB 702 16 63
CP-340 SCSI (Settings) 42MB 788 4 26
CP-342 ATA 40MB 805 4 26
CP-344 ATA (Settings) 43MB 788 4 26
CP-3504 ATA (Settings) 509MB 987 16 63
CP-3540 SCSI (Settings) 544MB 1807 12 49
CP-3544 ATA (Settings) 528MB 1023 16 63
CP-3554 ATA 544MB 1054 16 63
CP-4024 ATA 22MB 627 2 34
CP-4044 ATA 43MB 1104 2 38

Conner Peripherals was absorbed by Seagate Technology in March 1996. Seagate provides some technical support for Conner products manufactured before that date, including a full
listing of drive parameters and jumper settings at its web site, www.seagate.com/support/disc/specs/specfind.shtml.



Conner old ATA drive jumpers

Conner Peripherals
Special Jumper Settings
The C/D jumper on these drives determines whether the drive functions as a master or a slave. When the C/D jumper is
in place, the drive is configured as a master; when it is removed, the drive acts as a slave.
On the CFS210A drive, the ATA/ISA jumper may need to be removed when daisy-chaining the drive with an older drive
such as one manufactured before the adoption of the ATA standard.
On the CFS270A drive, the CS (cable select) jumper allows the drive's status as master or slave to be determined by
the cabling.
The CFS420A and CFS540A have both ATA/ISA and CS jumpers, which serve the same functions outlined above.
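The master/slave rules above reduce to a couple of lines of logic. The function below is only an illustration of those rules as stated here, not Conner documentation:

```python
def conner_drive_role(cd_jumper_installed, cable_select=False):
    """Role of an old Conner ATA drive implied by its jumper state (illustrative)."""
    if cable_select:
        # With the CS jumper in place, the cabling decides master vs. slave.
        return "decided by cabling"
    return "master" if cd_jumper_installed else "slave"

print(conner_drive_role(True))   # master
print(conner_drive_role(False))  # slave
```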



Conner SCSI ID Settings 1

Conner Peripherals
SCSI ID Settings:
ID Number Jumper E-1 Jumper E-2 Jumper E-3
0 Out Out Out
1 In Out Out
2 Out In Out
3 In In Out
4 Out Out In
5 In Out In
6 Out In In
7 In In In
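The table above simply encodes the SCSI ID in binary across the three jumpers, with E-1 as the least significant bit ("In" = 1). A quick sketch of that encoding:

```python
def conner_scsi_id(e1, e2, e3):
    """SCSI ID from jumpers E-1 through E-3 (True = jumper in); E-1 is the LSB."""
    return int(e1) + 2 * int(e2) + 4 * int(e3)

print(conner_scsi_id(False, False, False))  # 0 (all jumpers out)
print(conner_scsi_id(True, False, True))    # 5 (E-1 in, E-2 out, E-3 in)
```

The second SCSI ID table later in this appendix uses jumpers E-2 through E-4 but follows the same binary pattern.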



Conner ATA Jumper Settings 4

Conner Peripherals
ATA Jumper Settings
Single drive M/S
Master drive M/S
Slave drive No jumpers



Conner ATA Jumper Settings 1

Conner Peripherals
ATA Jumper Settings
Single drive ACT and C/D
Master drive ACT, C/D, and DSP
Slave drive No jumpers

LED Connections
Connector J-4
LED positive terminal Pin 1
LED negative terminal Pin 2



Conner ATA Jumper Settings 6

Conner Peripherals
ATA Jumper Settings
Single drive C/D
Master drive C/D and DSP
Slave drive No jumpers

LED Connections
Connector J-1
LED positive terminal Pin 3
LED negative terminal Pin 4



Conner ATA Jumper Settings 8

Conner Peripherals
ATA Jumper Settings
Single drive J5 1-2
Master drive J5 1-2
Slave drive No jumper J5 1-2

LED Connections
Connector J-5
LED positive terminal Pin 7
LED negative terminal Pin 8



Conner ATA Jumper Settings 7C

Conner Peripherals
ATA Jumper Settings
Single drive C/D and DSP
Master drive C/D and DSP
Slave drive DSP

LED Connections
Connector J-5
LED positive terminal Pin 3
LED negative terminal Pin 4



Conner ATA Jumper Settings 7D

Conner Peripherals
ATA Jumper Settings
Single drive C/D and DSP
Master drive C/D and DSP
Slave drive DSP

LED Connections
Connector J-3
LED positive terminal Pin 3
LED negative terminal Pin 4



Conner SCSI ID Settings 2

Conner Peripherals
SCSI ID Settings:
ID Number Jumper E-2 Jumper E-3 Jumper E-4
0 Out Out Out
1 In Out Out
2 Out In Out
3 In In Out
4 Out Out In
5 In Out In
6 Out In In
7 In In In


Disctec
Disk Technologies Corporation
Model Interface Capacity Cylinders Heads Sectors
RHD-20 ATA 21MB 615 2 34
RHD-60 ATA 63MB 1024 2 60


Disctron
Model Interface Capacity Cylinders Heads Sectors
D-503 MFM 3MB 153 2 17
D-504 MFM 4MB 215 2 17
D-506 MFM 5MB 153 4 17
D-507 MFM 5MB 306 2 17
D-509 MFM 8MB 215 4 17
D-512 MFM 11MB 153 8 17
D-513 MFM 11MB 215 6 17
D-514 MFM 11MB 306 4 17
D-518 MFM 15MB 215 8 17
D-519 MFM 16MB 306 6 17
D-526 MFM 21MB 306 8 17


Epson America
Model Interface Capacity Cylinders Heads Sectors
HD850 MFM 11MB 306 4 17
HD860 MFM 21MB 612 4 17


Fuji Corporation
Model Interface Capacity Cylinders Heads Sectors
FK301-13 MFM 10MB 306 4 17
FK302-13 MFM 10MB 612 2 17
FK302-26 MFM 21MB 612 4 17
FK302-39 MFM 32MB 612 6 17
FK303-52 MFM 40MB 615 8 17
FK305-26 MFM 21MB 615 4 17
FK305-39 MFM 32MB 615 6 17
FK305-39R MFM/RLL 32MB 615 4 26
FK305-58R MFM/RLL 49MB 615 6 26
FK308S-39R SCSI 31MB 615 4 26
FK308S-58R SCSI 45MB 615 6 26
FK309-26 MFM 20MB 615 4 17
FK309-39 MFM 32MB 615 6 17
FK309-39R MFM/RLL 30MB 615 4 26
FK309S-50R SCSI 41MB 615 4 26


Fujitsu America
Model Interface Capacity Cylinders Heads Sectors
M1606S SCSI 1.09GB 3456 6 70 to 125
M1606T ATA 1.09GB 2111 16 63
M1614T ATA 1.08GB 2105 16 63
M1636TAU ATA 1.2GB 2491 16 63
M1638TAU ATA 2.5GB 4983 16 63
M1623TAU ATA 1.7GB 3298 16 63
M1624TAU ATA 2.1GB 4092 16 63
M2234AS MFM 16MB 320 6 17
M2249SB SCSI 343MB 1243 15 19
M2225D MFM 21MB 615 4 17
M2225DR MFM/RLL 32MB 615 4 26
M2226D MFM 30MB 615 6 17
M2226DR MFM/RLL 49MB 615 6 26
M2227D MFM 40MB 615 8 17
M2227DR MFM 65MB 615 8 26
M2230AS MFM 5MB 320 2 17
M2230AT MFM 5MB 320 2 17
M2231 MFM 5MB 306 2 17
M2233AS MFM 11MB 320 4 17
M2233AT MFM 11MB 320 4 17
M2235AS MFM 22MB 320 8 17
M2241AS MFM 25MB 754 4 17
M2242AS MFM 43MB 754 7 17
M2243AS MFM 68MB 754 11 17
M2243R MFM/RLL 110MB 1186 7 26
M2243T MFM 68MB 1186 7 17
M2245SA SCSI 148MB 823 10 35
M2246E ESDI 172MB 823 10 35
M2247E ESDI 143MB 1243 7 64
M2247S SCSI 138MB 1243 7 65
M2247SA SCSI 149MB 1243 7 36
M2247SB SCSI 160MB 1243 7 19
M2248E ESDI 224MB 1243 11 64
M2248S SCSI 221MB 1243 11 65
M2248SA SCSI 238MB 1243 11 36
M2248SB SCSI 252MB 1243 11 19


M2249E ESDI 305MB 1243 15 64


M2249S SCSI 303MB 1243 15 65
M2249SA SCSI 324MB 1243 15 36
M2261E ESDI 326MB 1658 8 53
M2262E ESDI 448MB 1658 11 48
M2263E ESDI 675MB 1658 15 53
M2263HA SCSI 672MB 1658 15 53
M2266HA SCSI 1079MB 1658 15 85
M2611SA SCSI 45MB 1334 2 34
M2611T ATA 45MB 1334 3 33
M2612SA SCSI 90MB 1334 4 34
M2612T ATA 90MB 1334 4 33
M2613SA SCSI 136MB 1334 6 34
M2613T ATA 135MB 1334 6 33
M2614SA SCSI 182MB 1334 8 34
M2614T ATA 180MB 1334 8 33
M2622SA SCSI 330MB 1435 8 56
M2622T ATA 330MB 1435 8 56
M2623SA SCSI 425MB 1435 10 56
M2623T ATA 425MB 1435 10 56
M2624SA SCSI 520MB 1435 12 56
M2624T ATA 520MB 1435 12 56
M2631T ATA 45MB 916 2 48


Hewlett-Packard Company
Model Interface Capacity Cylinders Heads Sectors
HP-97544E ESDI 340MB 1457 8 57
HP-97544S SCSI 331MB 1447 8 56
HP-97544T SCSI-2 331MB 1447 8 56
HP-97548E ESDI 680MB 1457 16 57
HP-97548S SCSI 663MB 1447 16 56
HP-97548T SCSI-2 663MB 1447 16 56
HP-97549T SCSI-2 1000MB 1911 16 64
HP-97556E ESDI 681MB 1680 11 72
HP-97556T SCSI-2 673MB 1670 11 72
HP-97558E ESDI 1084MB 1962 15 72
HP-97558T SCSI-2 1075MB 1952 15 72
HP-97560E ESDI 1374MB 1962 19 72
HP-97560T SCSI-2 1363MB 1952 19 72
HP-C2233S SCSI-2 238MB 1511 5 49
HP-C2234S SCSI-2 334MB 1511 7 61
HP-C2235S SCSI-2 429MB 1511 9 73
HP-D1660A ESDI 333MB 1457 8 57
HP-D1661A ESDI 667MB 1457 16 57


Hitachi
Model Interface Capacity Cylinders Heads Sectors
DK211A ATA 540MB 1049 16 63
DK221A ATA 340MB 692 16 60
DK301-1 MFM 10MB 306 4 17
DK301-2 MFM 15MB 306 6 17
DK502-2 MFM 21MB 615 4 17
DK511-3 MFM 30MB 699 5 17
DK511-5 MFM 42MB 699 7 17
DK511-8 MFM 67MB 823 10 17
DK512-12 ESDI 94MB 823 7 34
DK512-17 ESDI 134MB 823 10 34
DK512-8 ESDI 67MB 823 5 34
DK512C-12 SCSI 94MB 823 7 34
DK512C-17 SCSI 134MB 819 10 34
DK512C-8 SCSI 67MB 823 5 34
DK514-38 ESDI 330MB 903 14 51
DK514C-38 SCSI 321MB 903 14 51
DK515-78 ESDI 673MB 1361 14 69
DK515C-78 SCSI 661MB 1261 14 69
DK521-5 MFM 42MB 823 6 17
DK522-10 ESDI 103MB 823 6 36
DK522C-10 SCSI 88MB 819 6 35


IBM
International Business Machines
Model Interface Capacity Cylinders Heads Sectors
DALA-3540 ATA 540MB 1049 16 63
DBOA-2360 ATA-2 360MB 700 16 38
DBOA-2528 ATA-2 528MB 1024 16 63
DBOA-2540 ATA-2 540MB 1050 16 63
DBOA-2720 ATA-2 720MB 1400 16 63
DHAA-2270 ATA 270MB 524 16 63
DHAA-2405 ATA 344MB 915 15 49
DHAA-2405 ATA 405MB 785 16 63
DHAA-2540 ATA 540MB 1047 16 63
DHAA-2810 ATA 810MB 1571 16 63
DPEA-30540 ATA 540MB 1050 16 63
DPEA-31080 ATA 1080MB 2100 16 63
DPRA-20810 ATA-2 810MB 1572 16 63
DPRA-21080 ATA 1080MB 2100 16 63
DPRA-21215 ATA-2 1215MB 2358 16 63
DSAA-3270 ATA 281MB 954 16 36
DSAA-3360 ATA 365MB 929 16 48
DSAA-3540 ATA 548MB 1062 16 63
DSAA-3720 ATA 730MB 1416 16 63



DS-IMI

IMI
International Memories Incorporated
Model Interface Capacity Cylinders Heads Sectors
5006 MFM 5MB 306 2 17
5007 MFM 5MB 312 2 17
5012 MFM 10MB 306 4 17
5018 MFM 15MB 306 6 17
7720 MFM 21MB 310 4 17
7740 MFM 43MB 315 8 17



DS-Kalok

Kalok Corporation
Model Interface Capacity Cylinders Heads Sectors
KE3080 ATA 80MB 979 4 40
KL3100 ATA 105MB 979 6 35
KL3120 ATA 120MB 981 6 40
KL320 MFM 21MB 615 4 17
KL330 MFM/RLL 32MB 615 4 26
KL341 SCSI 40MB 644 4 26
KL343 ATA 42MB 676 4 31
P5-125 ATA 125MB 2048 2 80
P5-250 ATA 251MB 2048 4 80



DS-Kyocera

Kyocera Electronics
Model Interface Capacity Cylinders Heads Sectors
KC20 MFM 21MB 615 4 17
KC30 MFM/RLL 32MB 615 4 26
KC40GA ATA 41MB 1075 2 26
KC80C SCSI 87MB 787 8 28



DS-Lapine

Lapine Technology
Model Interface Capacity Cylinders Heads Sectors
3522 MFM 10MB 306 4 17
LT10 MFM 10MB 615 2 17
LT20 MFM 20MB 615 4 17
LT200 MFM 20MB 614 4 17
LT2000 MFM 20MB 614 4 17
LT300 MFM/RLL 32MB 614 4 26
Titan 20 MFM 21MB 615 4 17
Titan 30 MFM/RLL 32MB 615 4 26
Titan 3532 MFM/RLL 32MB 615 4 26



DS-Maxtor

Maxtor Corporation
Model Interface Capacity Cylinders Heads Sectors
250837A/AT ATA 830MB 161 16 63
25084A/AT ATA 84MB 569 16 18
251005A/AT ATA 1.0GB 1945 16 63
25128A/AT ATA 128MB 981 15 17
251340A/AT ATA 1.3GB 2594 16 63
25252A/AT SCSI 252MB 1418 6 43 to 46
2585A/AT ATA 85MB 981 10 17
7040A ATA 41MB 1170 2 36
7040S SCSI 40MB 1155 2 36
7060A ATA 65MB 467 16 17
7080A ATA 81MB 1170 4 36
7080S SCSI 81MB 1155 4 36
71000A/AT ATA 1.0GB 1946 16 63
71050A/AT ATA 1.0GB 2045 16 63
71084A/AT ATA 1.0GB 2105 16 63
7120A/AT ATA 130MB 936 16 17
71260A/AT ATA 1.2GB 2448 16 63
7131A/AT ATA 131MB 1002 8 32
71336A/AP ATA 1.3GB 2595 16 63
71350A/AP ATA 1.3GB 2624 16 63
7135AV ATA 135MB 966 12 21
71626A/AP ATA 1.6GB 3158 16 63
71670A/AP ATA 1.6GB 3224 16 63
71687A/AP ATA 1.6GB 3280 16 63
7170A/AT/AI ATA 171MB 984 10 34
7171A/AT ATA 172MB 866 15 26
72004A/AP ATA 2.0GB 3893 16 63
72025AP ATA 2.0GB 3936 16 63
7213A/AT ATA 212MB 683 16 38
7245A/AT ATA 245MB 967 16 31
72577AP ATA 2.5GB 4996 16 63
72700AP ATA 2.7GB 5248 16 63
7270AV ATA 270MB 959 11 50
7273A/AT ATA 273MB 1012 16 33
7290AV ATA 290MB 941 14 43
7345A/AT ATA 345MB 790 15 57

7405AV ATA 405MB 989 16 50
7420AV ATA 420MB 986 16 52
7425AV ATA 426MB 1002 16 52
7540AV ATA 539MB 1046 16 63
7541A/AT ATA 542MB 1052 16 63
7546A/AT ATA 547MB 1060 16 63
7668A/AP ATA 668MB 1297 16 63
7850AV ATA 852MB 1654 16 63
8051A/AT ATA 43MB 745 4 28
80875A ATA 870MB 1700 16 63
81280A ATA 1280MB 2481 16 63
81312A ATA 1.3GB 2548 16 63
81750A ATA 1.8GB 3400 16 63
82187A ATA 2.1GB 4248 16 63
82560A ATA 2560MB 4962 16 63
82625A ATA 2.6GB 5100 16 63
83062A ATA 3.0GB 5948 16 63
83500A ATA 3.5GB 6800 16 63
83840A ATA 3.8GB 7441 16 63
85120A ATA 5.1GB 9924 16 63
LXT-100S SCSI 96MB 733 8 32
LXT-200A ATA 207MB 1320 7 45
LXT-200A/AT ATA 200MB 816 15 32
LXT-200S SCSI 191MB 1320 7 33
LXT-213A ATA 213MB 1320 7 55
LXT-213A/AT ATA 212MB 683 16 38
LXT-213S SCSI 200MB 1320 7 55
LXT-340A ATA 340MB 1560 7 47
LXT-340A/AT ATA 337MB 654 16 63
LXT-340S SCSI 340MB 1560 7 47
LXT-50S SCSI 48MB 733 4 32
LXT-535A/AT ATA 528MB 1024 16 63
MXL-105 PCMCIA 105MB 810 15 17
MXL-131 PCMCIA 131MB 1008 15 17
MXL-171 PCMCIA 171MB 656 15 34
MXL-262 PCMCIA 263MB 1008 15 34
MXT540A/AL ATA 540MB 1050 16 63
P0-12S SCSI 1027MB 1632 15 72
P1-08E ESDI 969MB 1778 9 72
P1-08S SCSI 696MB 1778 9 72
P1-12E ESDI 1051MB 1778 15 72
P1-12S SCSI 1005MB 1216 19 72
P1-13E ESDI 1160MB 1778 15 72
P1-16E ESDI 1331MB 1778 19 72
P1-17E ESDI 1470MB 1778 19 72
P1-17S SCSI 1470MB 1778 19 72
XT1050 MFM 38MB 902 5 17
XT1065 MFM 52MB 918 7 17

XT1085 MFM 68MB 1024 8 17
XT1105 MFM 82MB 918 11 17
XT1120R MFM/RLL 104MB 1024 8 26
XT1140 MFM 116MB 918 15 17
XT2085 MFM 72MB 1224 7 17
XT2140 MFM 113MB 1224 11 17
XT2190 MFM 159MB 1224 15 17
XT4170E ESDI 157MB 1224 7 35
XT4170S SCSI 157MB 1224 7 36
XT4175E ESDI 149MB 1224 7 34
XT4179E ESDI 158MB 1224 7 36
XT4230E ESDI 203MB 1224 9 35
XT4280E ESDI 234MB 1224 11 34
XT4280S SCSI 241MB 1224 11 36
XT4380E ESDI 338MB 1224 15 35
XT4380S SCSI 337MB 1224 15 36
XT81000E ESDI 889MB 1632 15 54
XT8380E ESDI 360MB 1632 8 54
XT8380S SCSI 360MB 1632 8 54
XT8610E ESDI 541MB 1632 12 54
XT8702S SCSI 616MB 1490 15 54
XT8760S SCSI 675MB 1632 15 54
XT8800E ESDI 694MB 1274 15 71



Maxtor 25252A

MAXTOR 25252A 2.5-Inch SCSI DRIVE


SPECIFICATIONS:
Capacity 251.7MB
Platters 3
Surfaces 6
Heads 6
Servo Embedded
Cylinders (user) 1,418
Track density (average) 2,560 tpi
Bytes per sector 512
Zones 8
Sectors per track 43-67
Track-to-track seek time 2.5 ms
Average seek time 12 ms
Maximum seek time 22 ms
Average latency 7.064 ms
Rotational speed 4,247 rpm
Controller overhead 1ms
Data transfer rate (Mbytes/sec)
To/from media 2.01-3.13
To/from buffer (AT) 9.0
To/from buffer (synchronous SCSI) 7.5
To/from buffer (asynchronous SCSI) 3.0
Start time - Power up (0-4,247 RPM)
Typical ≤3.5 sec
Start time - Sleep mode <1.5 sec
Start time - Power down ≤3 sec
Start/Stop cycles (Typical) 50,000 min
Interleave 1:1
Buffer size 128K
Interface AT or SCSI
Recording method 1,7 RLL
Recording density (Kbpi) 55.0 (Zone 1)
Flux density (Kfci) 41.3 (Zone 1)
Peak power consumption 2.8 watts
Height 0.67 in. (17.0 mm)
Width 4.00 in. (101.6 mm)
Depth 2.75 in. (69.84 mm)

Weight 0.419 lbs. (190 gm)

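The 25252A's average-latency figure is simply half a rotation at the rated spindle speed; the same arithmetic applies to any drive in this appendix. A quick check (the function name is mine):

```python
def average_latency_ms(rpm):
    # Average rotational latency: half a revolution, expressed in milliseconds.
    return (60_000.0 / rpm) / 2

print(round(average_latency_ms(4247), 3))  # 7.064, matching the spec sheet above
```

For a common 3,600 rpm drive the same formula gives about 8.33 ms.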


DS-Memorex

Memorex Corporation
Model Interface Capacity Cylinders Heads Sectors
310 MFM 2MB 118 2 17
321 MFM 5MB 320 2 17
322 MFM 10MB 320 4 17
323 MFM 15MB 320 6 17
324 MFM 20MB 320 8 17
450 MFM 10MB 612 2 17
512 MFM 25MB 961 3 17
513 MFM 41MB 961 5 17



DS-Micropolis

Micropolis Corporation
Model Interface Capacity Cylinders Heads Sectors
670 SCSI-2 667MB NR 15 Variable
1030 SCSI-2 1030MB NR 15 Variable
1548 Fast SCSI-2 1748MB NR 15 Variable
1302 MFM 20MB 830 3 17
1303 MFM 34MB 830 5 17
1304 MFM 41MB 830 6 17
1323 MFM 35MB 1024 4 17
1324 MFM 53MB 1024 6 17
1325 MFM 71MB 1024 8 17
1333 MFM 34MB 1024 4 17
1334 MFM 53MB 1024 6 17
1335 MFM 71MB 1024 8 17
1352 ESDI 30MB 1024 2 36
1353 ESDI 75MB 1024 4 36
1354 ESDI 113MB 1024 6 36
1355 ESDI 151MB 1024 8 36
1373 SCSI 73MB 1024 4 36
1374 SCSI 109MB 1024 6 36
1375 SCSI 146MB 1024 8 36
1551 ESDI 149MB 1224 7 34
1624 Fast SCSI-2 667MB NR 7 Variable
1908 Fast SCSI-2 1381MB NR 15 Variable
1924 Fast SCSI-2 2100MB NR 21 Variable
2105 Fast SCSI-2 648MB 1745 8 Variable
2112 Fast SCSI-2 1214MB 1745 15 Variable
2112A ATA 1214MB 2034 16 63
1323A MFM 44MB 1024 5 17
1324A MFM 62MB 1024 7 17
1333A MFM 44MB 1024 5 17
1334A MFM 62MB 1024 7 17
1352A ESDI 41MB 1024 3 36
1353A ESDI 94MB 1024 5 36
1354A ESDI 132MB 1024 7 36
1373A SCSI 91MB 1024 5 36
1374A SCSI 127MB 1024 7 36
1488-15 SCSI 675MB 1628 15 54

1516-10S ESDI 678MB 1840 10 72
1517-13 ESDI 922MB 1925 13 72
1518-14 ESDI 993MB 1925 14 72
1518-15 ESDI 1064MB 1925 15 72
1528-15 SCSI-2 1341MB 2106 15 84
1538-15 ESDI 872MB 1925 15 71
1554-11 ESDI 234MB 1224 11 34
1554-7 ESDI 158MB 1224 7 36
1555-12 ESDI 255MB 1224 12 34
1555-8 ESDI 180MB 1224 8 36
1555-9 ESDI 203MB 1224 9 36
1556-10 ESDI 226MB 1224 10 36
1556-11 ESDI 248MB 1224 11 36
1556-13 ESDI 276MB 1224 13 34
1557-12 ESDI 270MB 1224 12 36
1557-13 ESDI 293MB 1224 13 36
1557-14 ESDI 315MB 1224 14 36
1557-15 ESDI 338MB 1224 15 36
1558-14 ESDI 315MB 1224 14 36
1558-15 ESDI 338MB 1224 15 36
1566-11 ESDI 496MB 1632 11 54
1567-12 ESDI 541MB 1632 12 54
1567-13 ESDI 586MB 1632 13 54
1568-14 ESDI 631MB 1632 14 54
1568-15 ESDI 676MB 1632 15 54
1576-11 SCSI 243MB 1224 11 36
1577-12 SCSI 266MB 1224 12 36
1577-13 SCSI 287MB 1224 13 36
1578-14 SCSI 310MB 1224 14 36
1578-15 SCSI 332MB 1224 15 36
1586-11 SCSI 490MB 1632 11 54
1587-12 SCSI 535MB 1632 12 54
1587-13 SCSI 579MB 1632 13 54
1588-14 SCSI 624MB 1632 14 54
1588-15 SCSI 668MB 1632 15 54
1590-15 SCSI 1049MB 1919 15 71
1596-10S SCSI 668MB 1834 10 72
1597-13 SCSI 909MB 1919 13 72
1598-14 SCSI 979MB 1919 14 72
1652-4 ESDI 92MB 1249 4 36
1653-5 ESDI 115MB 1249 5 36
1654-6 ESDI 138MB 1249 6 36
1654-7 ESDI 161MB 1249 7 36
1663-4 ESDI 197MB 1780 4 36
1663-5 ESDI 246MB 1780 5 36
1664-6 ESDI 295MB 1780 6 54

1664-7 ESDI 345MB 1780 7 54
1673-4 SCSI 90MB 1249 4 36
1673-5 SCSI 112MB 1249 5 36
1674-6 SCSI 135MB 1249 6 36
1674-7 SCSI 158MB 1249 7 36
1683-4 SCSI 193MB 1776 4 54
1683-5 SCSI 242MB 1776 5 54
1684-6 SCSI 291MB 1776 6 54
1684-7 SCSI 340MB 1776 7 54
1743-5 ATA 112MB 1140 5 28
1744-6 ATA 135MB 1140 6 28
1744-7 ATA 157MB 1140 7 28
1745-8 ATA 180MB 1140 8 28
1745-9 ATA 202MB 1140 9 28
1773-5 SCSI 112MB 1140 5 28
1774-6 SCSI 135MB 1140 6 28
1774-7 SCSI 157MB 1140 7 28
1775-8 SCSI 180MB 1140 8 28
1775-9 SCSI 202MB 1140 9 28
2105A ATA 648MB 1255 16 63



DS-Microscience

Microscience International
Model Interface Capacity Cylinders Heads Sectors
4050 MFM 45MB 1024 5 17
4060 MFM/RLL 68MB 1024 5 26
4070 MFM 62MB 1024 7 17
4090 MFM/RLL 95MB 1024 7 26
5040 ESDI 46MB 855 3 35
5070 ESDI 77MB 855 5 35
5100 ESDI 107MB 855 7 35
5160 ESDI 159MB 1271 7 35
6100 SCSI 110MB 855 7 36
7040 ATA 47MB 855 3 36
7100 ATA 107MB 855 7 35
7200 ATA 201MB 1277 7 44
7400 ATA 420MB 1904 8 39
8040 ATA 43MB 1047 2 40
8080 ATA 85MB 1768 2 47
8200 ATA 210MB 1904 4 39
5070-20 ESDI 86MB 960 5 35
5100-20 ESDI 120MB 960 7 35
7070-20 ATA 86MB 960 5 35
7100-20 ATA 120MB 960 7 35
7100-21 ATA 121MB 1077 5 44
8040/MLC ATA 42MB 1024 2 40
FH21200 ESDI 1062MB 1921 15 72
FH21600 ESDI 1418MB 2147 15 86
FH2414 ESDI 367MB 1658 8 54
FH2777 ESDI 688MB 1658 15 54
FH31200 SCSI 1062MB 1921 15 72
FH31600 SCSI 1418MB 2147 15 86
FH3414 SCSI 367MB 1658 8 54
FH3777 SCSI 688MB 1658 15 54
HH1050 MFM 45MB 1024 5 17
HH1060 MFM/RLL 66MB 1024 5 26
HH1075 MFM 62MB 1024 7 17
HH1080 MFM/RLL 95MB 1024 7 26
HH1090 MFM 80MB 1314 7 17
HH1095 MFM/RLL 95MB 1024 7 26

HH1120 MFM 122MB 1314 7 26
HH2012 MFM 10MB 306 4 17
HH2120 ESDI 128MB 1024 7 35
HH2160 ESDI 160MB 1276 7 35
HH312 MFM 10MB 306 4 17
HH3120 SCSI 121MB 1314 5 36
HH315 MFM 21MB 612 4 17
HH3160 SCSI 169MB 1314 7 36
HH330 (RLL) MFM 33MB 612 4 26
HH612 MFM 10MB 612 2 17
HH712A MFM 10MB 612 2 17
HH725 MFM 21MB 612 4 17
HH738 MFM/RLL 33MB 612 4 26
HH825 MFM 21MB 612 4 17
HH830 MFM 33MB 612 4 26



DS-Miniscribe

MiniScribe Corporation
Model Interface Capacity Cylinders Heads Sectors
1006 MFM 5MB 206 2 17
1012 MFM 10MB 306 4 17
2006 MFM 5MB 306 2 17
2012 MFM 10MB 306 4 17
3006 MFM 5MB 306 2 17
3012 MFM 10MB 612 2 17
3053 MFM 44MB 1024 5 17
3085 MFM 71MB 1170 7 17
3212 MFM 10MB 612 2 17
3412 MFM 21MB 615 4 17
3425 MFM 21MB 615 4 17
3438 MFM/RLL 32MB 615 4 26
3650 MFM 42MB 809 6 17
3675 MFM/RLL 63MB 809 6 26
4010 MFM 8MB 480 2 17
4020 MFM 17MB 480 4 17
5330 MFM 25MB 480 6 17
5338 MFM 32MB 612 6 17
5440 MFM 32MB 480 8 17
5451 MFM 43MB 612 8 17
6032 MFM 26MB 1024 3 17
6053 MFM 44MB 1024 5 17
6074 MFM 62MB 1024 7 17
6079 MFM/RLL 68MB 1024 5 26
6085 MFM 71MB 1024 8 17
6128 MFM/RLL 110MB 1024 8 26
6212 MFM 10MB 612 2 17
7426 MFM 21MB 612 4 17
8225 MFM/RLL 20MB 771 2 26
8412 MFM 10MB 306 4 17
8425 MFM 21MB 615 4 17
8438 MFM/RLL 32MB 615 4 26
8450 MFM/RLL 41MB 771 4 26
97803 ESDI 676MB 1661 15 54
3085E ESDI 72MB 1270 3 36
3085S SCSI 72MB 1255 3 36

3130E ESDI 112MB 1250 5 36
3130S SCSI 115MB 1255 5 36
3180E ESDI 157MB 1250 7 36
3180S SCSI 153MB 1255 7 36
6170E ESDI 130MB 1024 8 36
7040A ATA 36MB 980 2 36
7080A ATA 72MB 980 4 36
7080S SCSI 81MB 1155 4 36
8051A ATA 43MB 745 4 28
8051S SCSI 45MB 793 4 28
8225AT ATA 21MB 745 2 28
8225C MFM 21MB 798 2 26
8225S SCSI 21MB 804 2 26
8434F MFM/RLL 32MB 615 4 26
8438XT ATA 32MB 615 4 26
8450AT ATA 42MB 745 4 28
8450C MFM 40MB 748 4 26
8450XT ATA 42MB 805 4 26
9000E ESDI 338MB 1224 15 36
9000S SCSI 347MB 1220 15 36
9230E ESDI 203MB 1224 9 36
9230S SCSI 203MB 1224 9 36
9380E ESDI 338MB 1224 15 36
9380S SCSI 347MB 1224 15 36
9424E ESDI 360MB 1661 8 54
9424S SCSI 355MB 1661 8 54
9780S SCSI 668MB 1661 15 54
MR521 MFM 10MB 612 2 17
MR522 MFM 20MB 612 4 17
MR5301E ESDI 65MB 977 5 26
MR533 MFM 25MB 971 3 17
MR535 MFM 42MB 977 5 17
MR535R MFM/RLL 65MB 977 5 17
MR535S SCSI 65MB 977 5 26
MR537S SCSI 65MB 977 5 26



Miniscribe 3130S

Specifications:
FORM FACTOR 5.25 inch HH
INTERFACE SCSI
DATA ENCODING METHOD 2,7 RLL
SUSTAINED TRANSFER RATE 1.25 MByte/Sec
BURST TRANSFER RATE 4.0 Mbyte/Sec
HEADS 5
CYLINDERS 1255
SECTORS/TRACK 35
UNFORMATTED BYTES/TRACK 20,832
RADIAL TRACK DENSITY (TPI) 1,135
ROTATIONAL SPEED (RPM) 3,600
UNFORMATTED CAPACITY 130.7MBytes
FORMATTED CAPACITY 112.4MBytes
AVERAGE ACCESS TIMES 17 mSec
MAXIMUM POWER REQUIRED 18.5 Watts
HEIGHT 1.625 in.
WIDTH 5.75 in.
DEPTH 8 in.
WEIGHT 3.2 lbs.
MTBF (Hours) 35,000
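The unformatted capacity quoted above is simply every byte on every track: cylinders × heads × unformatted bytes per track, before sector headers, gaps, and ECC claim their share. A sketch (the function name is my own):

```python
def unformatted_capacity_mb(cylinders, heads, unformatted_bytes_per_track):
    # Raw capacity of the platters before low-level formatting overhead.
    return cylinders * heads * unformatted_bytes_per_track / 1_000_000

print(round(unformatted_capacity_mb(1255, 5, 20_832), 1))  # 130.7, as specified
```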

SCSI ADDRESS SETTINGS (J601)


J601-1,2 J601-3,4 J601-5,6
SCSI ADDRESS 0 OFF OFF OFF
SCSI ADDRESS 1 OFF OFF ON
SCSI ADDRESS 2 OFF ON OFF
SCSI ADDRESS 3 OFF ON ON
SCSI ADDRESS 4 ON OFF OFF
SCSI ADDRESS 5 ON OFF ON
SCSI ADDRESS 6 ON ON OFF
SCSI ADDRESS 7 ON ON ON

SECTOR PER TRACK CONFIGURATION


Sectors/Track J-12 J-13
34 ON OFF
35 OFF OFF
36 OFF ON



Miniscribe 3180S

Specifications:
FORM FACTOR 5.25 inch HH
INTERFACE SCSI
DATA ENCODING METHOD 2,7 RLL
SUSTAINED TRANSFER RATE 1.25 MByte/Sec
BURST TRANSFER RATE 4.0 Mbyte/Sec
HEADS 7
CYLINDERS 1255
SECTORS/TRACK 36
UNFORMATTED BYTES/TRACK 20,832
RADIAL TRACK DENSITY (TPI) 1,135
ROTATIONAL SPEED (RPM) 3,600
UNFORMATTED CAPACITY 183.0 MB
FORMATTED CAPACITY 160.0 MB
AVERAGE ACCESS TIME 17 mSec
MAXIMUM POWER REQUIRED 18.5 Watts
HEIGHT 1.625 in.
WIDTH 5.75 in.
DEPTH 8 in.
WEIGHT 3.2 lbs.
MTBF (Hours) 35,000

SCSI ADDRESS SETTINGS (J601)


J601-1,2 J601-3,4 J601-5,6
SCSI ADDRESS 0 OFF OFF OFF
SCSI ADDRESS 1 OFF OFF ON
SCSI ADDRESS 2 OFF ON OFF
SCSI ADDRESS 3 OFF ON ON
SCSI ADDRESS 4 ON OFF OFF
SCSI ADDRESS 5 ON OFF ON
SCSI ADDRESS 6 ON ON OFF
SCSI ADDRESS 7 ON ON ON

SECTOR PER TRACK SETTINGS


SECTORS/TRACK J-12 J-13
34 ON OFF
35 OFF OFF
36 OFF ON
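The address jumpers in both Miniscribe tables follow a plain binary pattern, with J601-1,2 as the most significant bit. A sketch of that encoding (assuming straight binary; the helper name is mine):

```python
def scsi_id_jumpers(scsi_id):
    """Jumper states (J601-1,2 / J601-3,4 / J601-5,6) for a SCSI ID,
    assuming straight binary encoding with J601-1,2 as the high bit."""
    if not 0 <= scsi_id <= 7:
        raise ValueError("SCSI ID must be 0-7")
    return tuple("ON" if scsi_id & bit else "OFF" for bit in (4, 2, 1))

print(scsi_id_jumpers(5))  # ('ON', 'OFF', 'ON'), matching address 5 in the table
```

ID 7 is conventionally reserved for the host adapter, which is why drive settings rarely use it.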



DS-MMI

MMI
Micro Memory Incorporated
Model Interface Capacity Cylinders Heads Sectors
M106 MFM 5MB 306 2 17
M112 MFM 10MB 306 4 17
M125 MFM 20MB 306 8 17
M212 MFM 10MB 306 4 17
M225 MFM 20MB 306 8 17
M306 MFM 5MB 306 2 17
M312 MFM 10MB 306 4 17
M325 MFM 20MB 306 8 17
M5012 MFM 10MB 306 4 17



DS-NEC

NEC
Model Interface Capacity Cylinders Heads Sectors
D3126 MFM 20MB 615 4 17
D3142 MFM 42MB 642 8 17
D3146H MFM 40MB 615 8 17
D3661 ESDI 118MB 915 7 36
D3735 ATA 56MB 1084 2 41
D3755 ATA 105MB 1250 4 41
D3761 ATA 114MB 915 7 35
D3835 SCSI 45MB 1084 2 41
D3855 SCSI 105MB 1250 4 41
D3861 SCSI 114MB 915 7 35
D5114 MFM 5MB 306 2 17
D5124 MFM 10MB 309 4 17
D5126 MFM 20MB 612 4 17
D5127H MFM/RLL 32MB 612 4 26
D5146 MFM 40MB 615 8 17
D5147H MFM/RLL 65MB 615 8 26
D5452 MFM 71MB 823 10 17
D5652 ESDI 143MB 823 10 34
D5655 ESDI 153MB 1224 7 35
D5662 ESDI 319MB 1224 15 34
D5681 ESDI 664MB 1633 15 53
D5882 SCSI 665MB 1633 15 53
D5892 SCSI 1404MB 1678 19 86
DSE1700 ATA 1.6GB 3306 16 63



DS-Newbury

Newbury Data
Model Interface Capacity Cylinders Heads Sectors
NDR1065 MFM 55MB 918 7 17
NDR1085 MFM 71MB 1025 8 17
NDR1105 MFM 87MB 918 11 17
NDR1140 MFM 119MB 918 15 17
NDR2085 MFM 74MB 1224 7 17
NDR2140 MFM 117MB 1224 11 17
NDR2190 MFM 160MB 1224 15 17
NDR3170S SCSI 146MB 1224 9 26
NDR320 MFM 21MB 615 4 17
NDR3280S SCSI 244MB 1224 15 26
NDR340 MFM 42MB 615 8 17
NDR360 MFM/RLL 65MB 615 8 26
NDR4170 ESDI 149MB 1224 7 34
NDR4175 ESDI 157MB 1224 7 36
NDR4380 ESDI 338MB 1224 15 36
NDR4380S SCSI 319MB 1224 15 34



DS-Okidata

Okidata
Model Interface Capacity Cylinders Heads Sectors
OD526 MFM/RLL 31MB 612 4 26
OD540 MFM/RLL 47MB 612 6 26



DS-Olivetti

Olivetti
Model Interface Capacity Cylinders Heads Sectors
HD662/11 MFM 10MB 612 2 17
HD662/12 MFM 20MB 612 4 17
XM5210 MFM 10MB 612 4 17



DS-Otari

Otari Corporation
Model Interface Capacity Cylinders Heads Sectors
C214 MFM 10MB 306 4 17
C507 MFM 5MB 306 2 17
C514 MFM 10MB 306 4 17
C519 MFM 15MB 306 6 17
C526 MFM 10MB 306 8 17



DS-Panasonic

Panasonic Industrial Company


Model Interface Capacity Cylinders Heads Sectors
JU-116 MFM 20MB 615 4 17
JU-128 MFM 42MB 733 7 17



DS-Prairietek

PrairieTek Corporation
Model Interface Capacity Cylinders Heads Sectors
120 ATA 21MB 615 2 34
240 ATA 42MB 615 4 34



DS-Priam

Priam Systems
Model Interface Capacity Cylinders Heads Sectors
502 MFM 46MB 755 7 17
504 MFM 46MB 755 7 17
514 MFM 117MB 1224 11 17
519 MFM 160MB 1224 15 17
617 ESDI 153MB 1225 7 36
623 ESDI 196MB 752 15 34
628 ESDI 241MB 1225 11 36
630 ESDI 319MB 1224 15 34
638 ESDI 329MB 1225 15 36
717 SCSI 153MB 1225 7 36
728 SCSI 241MB 1225 11 36
738 SCSI 329MB 1225 15 36
3504 MFM 44MB 771 5 17
ID100 MFM/RLL 103MB 1166 7 25
ID120 ESDI 121MB 1024 7 33
ID130 MFM 132MB 1224 15 17
ID150 ESDI 159MB 1276 7 35
ID160 ESDI 158MB 1225 7 36
ID20 MFM 26MB 987 3 17
ID230 MFM/RLL 233MB 1224 15 25
ID250 ESDI 248MB 1225 11 36
ID330 ESDI 338MB 1225 15 36
ID330E ESDI 336MB 1218 15 36
ID330S SCSI 338MB 1218 15 36
ID40 MFM 43MB 981 5 17
ID45 MFM 50MB 1166 5 17
ID45H MFM 44MB 1024 5 17
ID60 MFM 59MB 981 7 17
ID62 MFM 62MB 1166 7 17
ID75 MFM/RLL 73MB 1166 7 25
V130R MFM/RLL 39MB 987 3 26
V150 MFM 42MB 987 5 17
V160 MFM 50MB 1166 5 17
V170 MFM 60MB 987 7 17
V170R MFM/RLL 91MB 987 7 26
V185 MFM 71MB 1166 7 17

V519 MFM 159MB 1224 15 17


DS-Quantum

Quantum Corporation
Model Interface Capacity Cylinders Heads Sectors
Bigfoot 1275 ATA 1.2GB 2492 16 63
Bigfoot 2550 ATA 2.5GB 4994 16 63
Daytona 127 ATA 127MB 677 9 41
Daytona 170 ATA 170MB 538 10 62
Daytona 256 ATA 256MB 723 11 63
Daytona 341 ATA 341MB 1011 15 44
Daytona 514 ATA 514MB 996 16 63
Europa 1080 ATA 1.0GB 2362 15 60
Europa 540 ATA 540MB 1179 15 60
Europa 810 ATA 810MB 1771 15 60
Fireball 1080 ATA 1.0GB 2112 16 63
Fireball 540 ATA 540MB 1056 16 63
Fireball II 1280 ATA 1.2GB 2484 16 63
Fireball II 640 ATA 640MB 1244 16 63
GO Drive 40 ATA 40MB 821 6 17
GO Drive 60 ATA 60MB 526 9 26
GO Drive 80 ATA 80MB 991 10 17
GO Drive GLS 127 ATA 127MB 677 9 41
GO Drive GLS 170 ATA 170MB 538 10 62
GO Drive GLS 256 ATA 256MB 723 11 63
GO Drive GLS 85 ATA 85MB 722 10 23
GO Drive GRS 160 ATA 160MB 966 10 34
GO Drive GRS 80 ATA 80MB 966 5 34
Lightning 365 ATA 365MB 976 12 61
Lightning 540 ATA 540MB 1120 16 59
Lightning 730 ATA 730MB 1416 16 63
Maverick 270 ATA 270MB 944 14 40
Maverick 540 ATA 540MB 1049 16 63
PRO 120AT (Settings) ATA 120MB 814 9 32
PRO 120S SCSI 120MB 814 9 32
PRO 170AT (Settings) ATA 168MB 968 10 34
PRO 210AT (Settings) ATA 209MB 873 13 36
PRO 210S SCSI 209MB 873 13 36
PRO 40AT (Settings) ATA 42MB 965 5 17
PRO 40S SCSI 42MB 965 5 17

PRO 425AT (Settings) ATA 426MB 1021 16 51
PRO 80AT (Settings) ATA 84MB 965 10 17
PRO 80S SCSI 84MB 965 10 17
PRO ELS 127 ATA 127MB 919 16 17
PRO ELS 170 ATA 170MB 1011 15 22
PRO LPS105AT (Settings) ATA 105MB 755 16 17
PRO LPS105S SCSI 105MB 755 16 17
PRO LPS127AT ATA 127MB 919 16 17
PRO LPS170AT ATA 170MB 1011 15 22
PRO LPS210AT ATA 210MB 723 15 38
PRO LPS240AT (Settings) ATA 235MB 723 13 51
PRO LPS240S SCSI 235MB 723 13 51
PRO LPS420AT ATA 420MB 1010 16 51
PRO LPS52AT (Settings) ATA 52MB 751 8 17
PRO LPS52S SCSI 52MB 751 8 17
PRO LPS540 ATA 540MB 1120 16 59
PRO LPS80AT (Settings) ATA 86MB 616 16 17
PRO LPS80S SCSI 86MB 616 16 17
Q160 SCSI 200MB 971 12 36
Q250 SCSI 53MB 823 4 36
Q280 SCSI 80MB 823 6 36
Q510 MFM 8MB 512 2 17
Q520 MFM 18MB 512 4 17
Q530 MFM 27MB 512 6 17
Q540 MFM 36MB 512 8 17
Sirocco 1700 ATA 1.7GB 3309 16 63
Sirocco 2550 ATA 2.5GB 4969 16 63
Trailblazer 420 ATA 420MB 1010 16 51
Trailblazer 850 ATA 850MB 1647 16 63



Quantum ProDrive Jumper Settings

Quantum Corporation
ATA Jumper Settings
Configuration Jumper DS Jumper SP Jumper SS
Single drive On Off Off
Master drive in PDIAG mode, DASP to check for slave On Off On
Master drive in PDIAG mode, SP to check for slave On On Off
Master drive in 40/80 mode, SP to check for slave On On On
Slave with non-Quantum master Off On Off
Slave with Quantum ProDrive master Off Off Off
Slave with ProDrive 40/80 master Off Off On
Not used Off On On

Note: These settings apply to the Quantum ProDrives 120AT, 170AT, 210AT, and 425AT.



Quantum 40AT and 80AT Jumper Settings

Quantum Corporation
ATA Jumper Settings
Configuration Jumper DS Jumper SS
Single drive On Off
Master drive On On
Slave drive Off Off
Self-seek test Off On

Note: These settings apply to the Quantum 40AT and 80AT drives only.



Quantum LPS Drive Jumper Settings

Quantum Corporation
ATA Jumper Settings
Configuration Jumper DS Jumper SP Jumper DM (CS)
Slave in standard PDIAG mode, for compatibility with drives that use the PDIAG line to handle Master/Slave communications Off Off Off
Slave in ProDrive 40/80A-compatible mode, without using the PDIAG line Off Off On
Self test Off On Off
Self test Off On On
Master in PDIAG mode, using DASP to check for Slave On Off Off
Master in 40/80A mode, using DASP to check for Slave On Off On
Master in PDIAG mode, using SP to check for Slave without checking DASP On On Off
Master in 40/80A mode, using SP to check for Slave without checking DASP On On On

Note: These settings apply to the Quantum ProDrives LPS 52, 80, 105, 120, 170, and 240AT.

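The eight jumper combinations in the LPS table above can be captured as a simple lookup; the dictionary below paraphrases the table's descriptions, and its name is my own:

```python
# Keys are the printed states of (DS, SP, DM/CS) from the Quantum LPS table.
LPS_JUMPER_MODES = {
    ("Off", "Off", "Off"): "Slave, standard PDIAG mode",
    ("Off", "Off", "On"):  "Slave, ProDrive 40/80A-compatible mode",
    ("Off", "On",  "Off"): "Self test",
    ("Off", "On",  "On"):  "Self test",
    ("On",  "Off", "Off"): "Master, PDIAG mode, DASP checks for slave",
    ("On",  "Off", "On"):  "Master, 40/80A mode, DASP checks for slave",
    ("On",  "On",  "Off"): "Master, PDIAG mode, SP checks for slave",
    ("On",  "On",  "On"):  "Master, 40/80A mode, SP checks for slave",
}
print(LPS_JUMPER_MODES[("On", "Off", "Off")])
```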


DS-Rodime

Rodime
Model Interface Capacity Cylinders Heads Sectors
R0103 MFM 9MB 192 6 17
R05065 MFM 53MB 1224 5 17
RO101 MFM 3MB 192 2 17
RO102 MFM 6MB 192 4 17
RO104 MFM 12MB 192 8 17
RO201 MFM 5MB 321 2 17
RO201E MFM 11MB 640 2 17
RO202 MFM 11MB 321 4 17
RO202E MFM 22MB 640 4 17
RO203 MFM 16MB 321 6 17
RO203E MFM 33MB 640 6 17
RO204 MFM 22MB 320 8 17
RO204E MFM 44MB 640 8 17
RO251 MFM 5MB 306 2 17
RO252 MFM 10MB 306 4 17
RO3045 MFM 37MB 872 5 17
RO3055 MFM 45MB 872 6 17
RO3055T SCSI 45MB 1053 3 28
RO3057S SCSI 45MB 680 5 26
RO3058A ATA 45MB 868 3 34
RO3058T SCSI 45MB 868 3 34
RO3060R MFM/RLL 49MB 750 5 26
RO3065 MFM 53MB 872 7 17
RO3075R MFM/RLL 59MB 750 6 26
RO3085R MFM/RLL 69MB 750 7 26
RO3085S SCSI 70MB 750 7 26
RO3088T SCSI 76MB 868 5 34
RO3090T SCSI 75MB 1053 5 28
RO3095A ATA 80MB 923 5 34
RO3099AP ATA 80MB 1030 4 28
RO3121A ATA 122MB 1207 4 53
RO3128A ATA 105MB 868 7 34
RO3128T SCSI 105MB 868 7 34
RO3129TS SCSI 105MB 1091 5 41
RO3130T SCSI 105MB 1053 7 28
RO3135A ATA 112MB 923 7 34

RO3139A ATA 112MB 523 15 28
RO3139TP SCSI 112MB 1148 5 42
RO3199AP ATA 112MB 1168 5 28
RO3199TS SCSI 163MB 1216 7 41
RO3209A ATA 163MB 759 15 28
RO3259A ATA 213MB 990 15 28
RO3259AP ATA 213MB 1235 9 28
RO3259T SCSI 210MB 1216 9 41
RO3259TP SCSI 210MB 1189 9 42
RO3259TS SCSI 210MB 1216 9 41
RO5075E ESDI 65MB 1224 3 35
RO5075S SCSI 61MB 1219 3 33
RO5078S SCSI 61MB 1219 3 33
RO5090 MFM 74MB 1224 7 17
RO5125E ESDI 109MB 1224 5 35
RO5125S SCSI 103MB 1219 5 33
RO5130R MFM/RLL 114MB 1224 7 26
RO5178S SCSI 144MB 1219 7 33
RO5180E ESDI 153MB 1224 7 35
RO5180S SCSI 144MB 1219 7 33
RO652A SCSI 20MB 306 4 33
RO652B SCSI 20MB 306 4 33
RO752A SCSI 20MB 306 4 33



DS-Samsung

Samsung Electronics Company


Model Interface Capacity Cylinders Heads Sectors
SHD-3101A ATA 105MB 1282 4 40
SHD-31081A ATA 1.08GB 2092 16 63
SHD-31084A ATA 1.08GB 2094 16 63
SHD-3201S SCSI 211MB 1376 7 43
STG-31274A ATA 1280MB 2466 16 63



DS-Seagate

Seagate Technology
Model Interface Capacity Cylinders Heads Sectors
ST1057A ATA 53MB 1024 6 17
ST1090A ATA 79MB 1072 5 29
ST1090N SCSI 79MB 1068 5 29
ST1096N SCSI 80MB 906 7 26
ST1100 MFM 83MB 1072 9 17
ST1102A ATA 89MB 1024 10 17
ST1106R MFM/RLL 91MB 977 7 26
ST1111A ATA 98MB 1072 5 36
ST1111E ESDI 98MB 1072 5 36
ST1111N SCSI 98MB 1068 5 36
ST1126A ATA 111MB 1072 7 29
ST1126N SCSI 111MB 1068 7 29
ST1133A ATA 117MB 1272 5 36
ST1133NS SCSI-2 116MB 1268 5 36
ST1144A ATA 130MB 1001 15 17
ST1150R MFM/RLL 128MB 1072 9 26
ST1156A ATA 138MB 1072 7 36
ST1156E ESDI 138MB 1072 7 36
ST1156N SCSI 138MB 1068 7 36
ST1156NS SCSI-2 138MB 1068 7 36
ST1162N SCSI 142MB 1068 9 29
ST1186A ATA 164MB 1272 7 36
ST1186NS SCSI-2 163MB 1268 7 36
ST1201A ATA 177MB 1072 9 36
ST1201E ESDI 177MB 1072 9 36
ST1201N SCSI 177MB 1068 9 36
ST1201NS SCSI-2 177MB 1068 9 36
ST1239A ATA 211MB 1272 9 36
ST1239NS SCSI-2 210MB 1268 9 36
ST124 MFM 21MB 615 4 17
ST125 MFM 21MB 615 4 17
ST125A ATA 21MB 404 4 26
ST125N SCSI 21MB 407 4 26
ST138 MFM 32MB 615 6 17
ST138A ATA 32MB 604 4 26
ST138N SCSI 32MB 615 4 26

ST138R MFM/RLL 33MB 615 4 26
ST1400N SCSI-2 331MB 1476 7 62
ST1480A ATA 426MB 1474 9 62
ST1480N SCSI-2 426MB 1476 9 62
ST151 MFM 43MB 977 5 17
ST157A ATA 45MB 560 6 26
ST157N SCSI 49MB 615 6 26
ST157R MFM/RLL 49MB 615 6 26
ST177N SCSI 61MB 921 5 26
ST206 MFM 5MB 306 2 17
ST2106E ESDI 92MB 1024 5 36
ST2106N SCSI 91MB 1022 5 36
ST212 MFM 10MB 306 4 17
ST2125N SCSI 107MB 1544 3 45
ST213 MFM 10MB 615 2 17
ST2182E ESDI 160MB 1452 4 54
ST2209N SCSI 179MB 1544 5 45
ST225 MFM 21MB 615 4 17
ST225N SCSI 21MB 615 4 17
ST225R MFM/RLL 21MB 667 2 31
ST2274A ATA 241MB 1747 5 54
ST2383A ATA 338MB 1747 7 54
ST2383E ESDI 337MB 1747 7 54
ST2383N SCSI 337MB 1261 7 74
ST238R MFM/RLL 32MB 615 4 26
ST2502N SCSI 435MB 1755 7 69
ST250R MFM/RLL 42MB 667 4 31
ST251 MFM 43MB 820 6 17
ST251N SCSI 43MB 820 4 26
ST251N-1 SCSI 43MB 630 4 34
ST252 MFM 43MB 820 6 17
ST253 MFM 43MB 989 5 17
ST274A ATA 65MB 948 5 26
ST277N SCSI 65MB 820 6 26
ST277N-1 SCSI 65MB 630 6 34
ST277R MFM/RLL 65MB 820 6 26
ST278R MFM/RLL 65MB 820 6 26
ST279R MFM/RLL 65MB 989 5 26
ST280A ATA 71MB 1032 5 27
ST296N SCSI 80MB 820 6 34
ST3051A ATA 43MB 820 6 17
ST3096A ATA 89MB 1024 10 17
ST3120A ATA 107MB 1024 12 17
ST31220A ATA 1083MB 2099 16 63
ST3123A ATA 107MB 1024 12 17
ST3144A ATA 131MB 1001 15 17
ST3145A ATA 131MB 1001 15 17
ST3195A ATA 171MB 981 10 34

ST3240A ATA 211MB 1010 12 34
ST3243A ATA 214MB 1024 12 34
ST3250A ATA 214MB 1024 12 34
ST325A ATA 21MB 615 4 17
ST3283A ATA 245MB 978 14 35
ST3290A ATA 261MB 1001 15 34
ST3291A ATA 273MB 761 14 50
ST3295A ATA 273MB 761 14 50
ST3385A ATA 341MB 767 14 62
ST3390A ATA 341MB 768 14 62
ST3391A ATA 341MB 768 14 62
ST3491A ATA 428MB 899 15 62
ST3500A ATA 426MB 895 15 62
ST351A ATA 43MB 820 6 17
ST352A ATA 43MB 980 5 17
ST3550A ATA 452MB 1018 14 62
ST3600A ATA 528MB 1024 16 63
ST3655A ATA 528MB 1024 16 63
ST3660A ATA 546MB 1057 16 63
ST3780A ATA 722MB 1399 16 63
ST4026 MFM 21MB 615 4 17
ST4038 MFM 31MB 733 5 17
ST4051 MFM 42MB 977 5 17
ST406 MFM 5MB 306 2 17
ST4085 MFM 71MB 1024 8 17
ST4086 MFM 72MB 925 9 17
ST4096 MFM 80MB 1024 9 17
ST4097 MFM 80MB 1024 9 17
ST412 MFM 10MB 306 4 17
ST41200N SCSI 1037MB 1931 15 71
ST4135R MFM/RLL 115MB 960 9 26
ST4144R MFM/RLL 123MB 1024 9 26
ST41520N SCSI-2 1352MB 2102 17 ZBR
ST41600N SCSI-2 1352MB 2101 17 75
ST41650N SCSI-2 1415MB 2107 15 87
ST41651N SCSI-2 1415MB 2107 15 ZBR
ST4182E ESDI 160MB 969 9 36
ST4182N SCSI 155MB 969 9 35
ST419 MFM 15MB 306 6 17
ST4250N SCSI 300MB 1412 9 46
ST4376N SCSI 330MB 1546 9 45
ST4383E ESDI 338MB 1412 12 36
ST4384E ESDI 338MB 1224 15 36
ST4385N SCSI 330MB 791 15 55
ST4442E ESDI 390MB 1412 15 36
ST4702N SCSI 601MB 1546 15 50
ST4766E ESDI 676MB 1032 15 54
ST4766N SCSI 676MB 1632 15 54

ST4767E ESDI 676MB 1399 15 63
ST4767N SCSI-2 665MB 1356 15 64
ST4769E ESDI 691MB 1552 15 53
ST506 MFM 5MB 153 4 17
ST5660A ATA 546MB 1057 16 63
ST5850A ATA 855MB 1656 16 63
ST9100A ATA (Settings) 85MB 748 14 16
ST9051A ATA (Settings) 43MB 654 4 32
ST9052A ATA 43MB 980 5 17
ST9077A ATA (Settings) 64MB 669 11 17
ST9080A ATA 64MB 823 4 38
ST9096A ATA 85MB 980 10 17
ST9100A ATA 86MB 748 14 16
ST9140A ATA (Settings) 128MB 980 15 17
ST9144A ATA (Settings) 128MB 980 15 17
ST9145A ATA (Settings) 128MB 980 15 17
ST9150A ATA (Settings) 131MB 419 13 47
ST9190A ATA (Settings) 172MB 873 16 24
ST9235A ATA (Settings) 210MB 985 13 32
ST9240A ATA (Settings) 210MB 988 8 52
ST9300A ATA (Settings) 262MB 569 15 60
ST9385A ATA (Settings) 341MB 934 14 51
ST9420A ATA (Settings) 421MB 988 16 52
ST9550A ATA (Settings) 455MB 942 16 59
ST9655A ATA (Settings) 524MB 1016 16 63



Seagate Jumper Settings

Seagate Technology
2.5-inch ATA hard disk drives




Siemens Information Systems


Model Interface Capacity Cylinders Heads Sectors
1200 ESDI 174MB 1216 8 35
1300 ESDI 261MB 1216 12 35
2200 SCSI 174MB 1216 8 35
2300 SCSI 261MB 1216 12 35
4410 ESDI 322MB 1100 11 52
4420 SCSI 334MB 1100 11 54
5710 ESDI 655MB 1224 15 48
5720 SCSI 655MB 1224 15 48
5810 ESDI 688MB 1658 15 54
5820 SCSI 688MB 1658 15 54
6200 SCSI 1062MB 1921 15 72




Shugart Associates
Model Interface Capacity Cylinders Heads Sectors
SA604 MFM 5MB 160 4 17
SA606 MFM 7MB 160 6 17
SA607 MFM 5MB 306 2 17
SA612 MFM 10MB 306 4 17
SA706 MFM 6MB 320 2 17
SA712 MFM 10MB 320 4 17




Syquest Technology
Model Interface Capacity Cylinders Heads Sectors
SQ225F MFM 20MB 615 4 17
SQ306F MFM 5MB 306 2 17
SQ306R MFM 5MB 306 2 17
SQ306RD MFM 5MB 306 2 17
SQ312 MFM 10MB 615 2 17
SQ312F MFM 20MB 612 4 17
SQ312RD MFM 10MB 615 2 17
SQ319 MFM 10MB 612 2 17
SQ325 MFM 20MB 612 4 17
SQ325F MFM 20MB 615 4 17
SQ338F MFM 30MB 615 6 17
SQ340AF MFM 38MB 649 6 17




Tandon Corporation
Model Interface Capacity Cylinders Heads Sectors
TM2085 SCSI 74MB 1004 9 36
TM2128 SCSI 115MB 1004 9 36
TM2170 SCSI 154MB 1344 9 36
TM244 MFM/RLL 41MB 782 4 26
TM246 MFM/RLL 62MB 782 6 26
TM251 MFM 5MB 306 2 17
TM252 MFM 10MB 306 4 17
TM261 MFM 10MB 615 2 17
TM262 MFM 21MB 615 4 17
TM262R MFM/RLL 20MB 782 2 26
TM264 MFM/RLL 41MB 782 4 26
TM3085 MFM 71MB 1024 8 17
TM3085R MFM/RLL 104MB 1024 8 26
TM344 MFM/RLL 41MB 782 4 26
TM346 MFM/RLL 62MB 782 6 26
TM361 MFM 10MB 615 2 17
TM362 MFM 21MB 615 4 17
TM362R MFM/RLL 20MB 782 2 26
TM364 MFM/RLL 41MB 782 4 26
TM501 MFM 5MB 306 2 17
TM502 MFM 10MB 306 4 17
TM503 MFM 15MB 306 6 17
TM602S MFM 5MB 153 4 17
TM603S MFM 10MB 153 6 17
TM603SE MFM 21MB 230 6 17
TM702 MFM/RLL 20MB 615 4 26
TM702AT MFM 8MB 615 4 17
TM703 MFM 10MB 733 5 17
TM703AT MFM 31MB 733 5 17
TM705 MFM 41MB 962 5 17
TM755 MFM 43MB 981 5 17




Teac America
Model Interface Capacity Cylinders Heads Sectors
SD150 MFM 10MB 306 4 17
SD3105 ATA 105MB 1025 5 40
SD340-A ATA 43MB 1050 2 40
SD340S SCSI 43MB 1050 2 40
SD380 ATA 86MB 1050 4 40
SD380-S SCSI 86MB 1050 4 40
SD510 MFM 10MB 306 4 17
SD520 MFM 20MB 615 4 17




Toshiba
Model Interface Capacity Cylinders Heads Sectors
MK134FA MFM 44MB 733 7 17
MK153FA ESDI 74MB 830 5 35
MK153FB SCSI 74MB 830 5 35
MK154FA ESDI 104MB 830 7 35
MK154FB SCSI 104MB 830 7 35
MK156FA ESDI 148MB 830 10 35
MK156FB SCSI 148MB 830 10 35
MK1724FCV ATA 260MB 842 16 38
MK1824FCV ATA 350MB 682 16 63
MK1924FCV ATA 540MB 1053 16 63
MK1926FCV ATA 814MB 1579 16 63
MK232FB SCSI 45MB 845 3 35
MK233FB SCSI 76MB 845 5 35
MK234FB SCSI 106MB 845 7 35
MK234FC ATA 106MB 845 7 35
MK250FA ESDI 382MB 1224 10 35
MK250FB SCSI 382MB 1224 10 35
MK2720FC ATA 1358MB 2633 16 63
MK355FA ESDI 459MB 1632 9 53
MK355FB SCSI 459MB 1632 9 53
MK358FA ESDI 765MB 1632 15 53
MK358FB SCSI 765MB 1632 15 53
MK53FA/B MFM 43MB 830 5 17
MK53FA/B MFM/RLL 64MB 830 5 26
MK54FA/B MFM 60MB 830 7 17
MK54FA/B MFM/RLL 90MB 830 7 26
MK556FA ESDI 152MB 830 10 35
MK56FA/B MFM 86MB 830 10 17
MK56FA/B MFM/RLL 129MB 830 10 26




Tulin Corporation
Model Interface Capacity Cylinders Heads Sectors
TL213 MFM 10MB 640 2 17
TL226 MFM 22MB 640 4 17
TL238 MFM 22MB 640 4 17
TL240 MFM 33MB 640 6 17
TL258 MFM 33MB 640 6 17
TL326 MFM 22MB 640 4 17
TL340 MFM 33MB 640 6 17




Vertex
Model Interface Capacity Cylinders Heads Sectors
V130 MFM 26MB 987 3 17
V150 MFM 43MB 987 5 17
V170 MFM 60MB 987 7 17




Western Digital Corporation


Model Interface Capacity Cylinders Heads Sectors
AC2540 EIDE (Settings) 540.8MB 1048 16 63
AC2635 EIDE (Settings) 639.9MB 1240 16 63
AC2700 EIDE (Settings) 730.8MB 1416 16 63
AC2850 EIDE (Settings) 853.6MB 1654 16 63
AC21000 EIDE (Settings) 1096.1MB 2100 16 63
AC31000 EIDE (Settings) 1083.8MB 2100 16 63
AC31200 EIDE (Settings) 1281.9MB 2484 16 63
AC31600 EIDE (Settings) 1624.6MB 3148 16 63
WD AB130 ATA 32MB 733 5 17
WD AC140 ATA 42MB 980 5 17
WD AC160 ATA 62MB 1024 7 17
WD AC280 ATA 85MB 980 10 17
WD AH260 ATA 63MB 1024 7 17
WD262 MFM 20MB 615 4 17
WD344R MFM/RLL 40MB 782 4 26
WD362 MFM 20MB 615 4 17
WD382R MFM/RLL 20MB 782 2 26
WD383R MFM/RLL 30MB 615 4 26
WD384R MFM/RLL 40MB 782 4 26
WD544R MFM/RLL 40MB 782 4 26
WD582R MFM/RLL 20MB 782 2 26
WD583R MFM/RLL 30MB 615 4 26
WD584R MFM/RLL 49MB 782 4 26
WD93024 ATA 20MB 782 2 27
WD93028 ATA 20MB 782 2 27
WD93034 ATA 30MB 782 3 27
WD93038 ATA 30MB 782 3 27
WD93044 ATA 40MB 782 4 27
WD93048 ATA 40MB 782 4 27
WD95024 ATA 20MB 782 2 27
WD95028 ATA 20MB 782 2 27
WD95034 ATA 30MB 782 3 27
WD95044 ATA 40MB 782 4 27
WD95058 ATA 40MB 782 4 27
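The capacity column in these tables follows directly from the geometry columns: cylinders times heads times sectors per track times the standard 512-byte sector gives the drive's formatted capacity, which the tables express in decimal megabytes. A quick sketch of the arithmetic, checked against two entries above (the function name is mine, not from any drive utility):

```python
import math

def chs_capacity_bytes(cylinders, heads, sectors, bytes_per_sector=512):
    # Formatted capacity implied by a fixed CHS geometry.
    return cylinders * heads * sectors * bytes_per_sector

# Siemens 1200, listed as 174MB: 1216 cylinders, 8 heads, 35 sectors
siemens_mb = chs_capacity_bytes(1216, 8, 35) / 1_000_000
print(round(siemens_mb))            # 174

# Western Digital AC2540, listed as 540.8MB: 1048 cylinders, 16 heads, 63 sectors
wd_mb = chs_capacity_bytes(1048, 16, 63) / 1_000_000
print(math.floor(wd_mb * 10) / 10)  # 540.8
```

Some entries, particularly the older MFM drives, round the listed capacity to the nearest whole megabyte, so small discrepancies against this arithmetic are normal.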




Western Digital
EIDE Hard Disk Drive Jumper Settings




Zentec Storage
Model Interface Capacity Cylinders Heads Sectors
ZM3540 ATA 518MB 2142 6 60 to 96
ZM3540 Fast SCSI-2 518MB 2142 6 60 to 96
ZM3272 ATA 260MB 2076 4 55
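The ZM3540 lists a range of sectors per track (60 to 96) because it uses zoned recording: outer cylinders pack more sectors than inner ones, so no single sectors-per-track figure describes the whole disk. Given the listed capacity and geometry you can back out the average sectors per track; a rough check, assuming 512-byte sectors and decimal megabytes:

```python
# Zentec ZM3540: 518MB across 2142 cylinders and 6 heads, 60-96 sectors/track
avg_spt = 518_000_000 / (2142 * 6 * 512)
print(round(avg_spt))  # 79 -- comfortably inside the listed 60-96 range
```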




About the Author

Winn L. Rosch has written about personal computers since 1981 and has penned about 1,000
published articles about them—a mixture of reviews, how-to guides, and background pieces
explaining new technologies. One of these was selected by The Computer Press Association as the
best feature article of the year for 1987; another was runner-up for the same award in 1990. He has
written other books about computers, the most recent of which are The Winn L. Rosch Multimedia
Bible and The Winn L. Rosch Printer Bible. Rosch has been a contributing editor to PC Magazine,
PC Week, PC Sources, Computer Shopper, and other computer publications. His books and articles
have been reprinted in several languages (French, Italian, German, Greek, and Portuguese).
Besides writing, Rosch is an attorney licensed to practice in Ohio and holds a Juris Doctor degree. A
member of the Ohio State Bar Association, he has served on the association's computer law
committee.
In other lifetimes, Rosch has worked as a photojournalist, electronic journalist, and broadcast
engineer. For 10 years, he wrote regular columns about stereo and video equipment for The Cleveland
Plain Dealer, Ohio's largest daily newspaper, and regularly contributed lifestyle features and
photographs. In Cleveland, where he still holds out, he has served as a chief engineer for several radio
stations. He also has worked on electronic journalism projects for the NBC and CBS networks.
Although Rosch has abandoned his efforts at creating a perpetual motion machine, he is now putting
the finishing touches on his latest creation, the world's first perpetual stillness machine.




Comment Page

Tell Us What You Think!


As a reader, you are the most important critic and commentator of our books. We value your opinion
and want to know what we're doing right, what we could do better, what areas you'd like to see us
publish in, and any other words of wisdom you're willing to pass our way. You can help us make
strong books that meet your needs and give you the computer guidance you require.
Do you have access to CompuServe or the World Wide Web? Then check out our CompuServe forum
by typing GO SAMS at any prompt. If you prefer the World Wide Web, check out our site at
http://www.mcp.com.
Note: If you have a technical question about this book, call the technical support line at
317-581-3833.
As the publishing manager of the group that created this book, I welcome your comments. You can
fax, e-mail, or write me directly to let me know what you did or didn't like about this book, as well
as what we can do to make our books stronger. Here's the information:
FAX:
317-581-4669
E-mail:
opsys_mgr@sams.samspublishing.com
Mail:
Dean Miller
Sams Publishing
201 W. 103rd Street
Indianapolis, IN 46290


You might also like