Instruction Set: 8085 Microprocessor
A microprocessor is an electronic chip that functions as the central processing unit of a computer.
Microprocessors are embedded in everyday products such as washing machines, microwave ovens, and
mobile phones; more advanced applications include radar, satellites, and aircraft systems.
All processors use the basic concept of stored-program execution: programs, or instructions, are stored
sequentially in memory. Every microprocessor has its own associated set of instructions. The instruction
set for a microprocessor exists in two forms: mnemonics, which are comparatively easy to understand,
and binary machine code.
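As a brief illustration, consider the two forms side by side. The following is a minimal sketch assuming a
standard 8085 assembler; the destination address 2050H is an arbitrary choice. Each mnemonic assembles
to the machine code shown in its comment:

        MVI A, 05H   ; 3E 05    - load immediate value 05H into the accumulator
        MVI B, 03H   ; 06 03    - load immediate value 03H into register B
        ADD B        ; 80       - A <- A + B (A becomes 08H); flags are updated
        STA 2050H    ; 32 50 20 - store the accumulator at 2050H (address encoded low byte first)
        HLT          ; 76       - halt the processor

The programmer writes the mnemonic form; an assembler (or hand assembly) translates it into the binary
machine code that is actually stored in memory and executed.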
8085 microprocessor:
The Intel 8085 ("eighty-eighty-five") is an 8-bit microprocessor introduced by Intel in 1977. The 8085 is a
conventional von Neumann design based on the Intel 8080 and is implemented in NMOS technology. The
"5" in the model number came from the fact that the 8085 requires only a +5-volt (V) power supply,
rather than the +5 V, -5 V, and +12 V supplies the 8080 needed. It has an 8-bit data bus and a 16-bit
address bus, and it can operate at clock frequencies up to 5 MHz. The chip has 40 pins; the lower-order
address bus is multiplexed with the data bus to minimize the pin count.
The 8085 has extensions to support new interrupts, with three maskable interrupts (RST 7.5, RST 6.5 and
RST 5.5), one non-maskable interrupt (TRAP), and one externally serviced interrupt (INTR). The RST
n.5 interrupts refer to actual pins on the processor, a feature which permitted simple systems to avoid the
cost of a separate interrupt controller.
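The maskable RST interrupts are controlled in software through the SIM instruction. The following sketch
assumes the standard 8085 SIM accumulator layout (bit 0 = M5.5, bit 1 = M6.5, bit 2 = M7.5, bit 3 = mask
set enable) and unmasks only RST 6.5:

        MVI A, 0DH   ; 0000 1101 - MSE = 1; mask RST 7.5 and RST 5.5; unmask RST 6.5
        SIM          ; apply the interrupt masks from the accumulator
        EI           ; enable the interrupt system

TRAP, being non-maskable, is unaffected by SIM. Each RST n.5 interrupt vectors to a fixed address:
RST 5.5 to 002CH, RST 6.5 to 0034H, and RST 7.5 to 003CH.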
The internal arrangement of a microprocessor varies depending on the age of the design and the intended
purposes of the microprocessor. The complexity of an integrated circuit (IC) is bounded by physical
limitations of the number of transistors that can be put onto one chip, the number of package terminations
that can connect the processor to other parts of the system, the number of interconnections it is possible to
make on the chip, and the heat that the chip can dissipate. Advancing technology makes more complex
and powerful chips feasible to manufacture.
A minimal hypothetical microprocessor might only include an arithmetic logic unit (ALU) and a control
logic section. The ALU performs operations such as addition, subtraction, and operations such as AND or
OR. Each operation of the ALU sets one or more flags in a status register, which indicate the results of the
last operation (zero value, negative number, overflow, or others). The control logic retrieves instruction
codes from memory and initiates the sequence of operations required for the ALU to carry out the
instruction. A single operation code might affect many individual data paths, registers, and other elements
of the processor.
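On the 8085, for example, an ALU operation updates the flag register, and a later instruction can branch
on the result. A minimal sketch (the label DONE is hypothetical):

        MVI A, 0FFH  ; load FFH into the accumulator
        ADI 01H      ; A = FFH + 01H = 00H; the Zero and Carry flags are both set
        JZ DONE      ; taken here, because the ALU set the Zero flag
        MVI A, 01H   ; skipped in this case
DONE:   HLT          ; stop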
As integrated circuit technology advanced, it was feasible to manufacture more and more complex
processors on a single chip. The size of data objects became larger; allowing more transistors on a chip
allowed word sizes to increase from 4- and 8-bit words up to today's 64-bit words. Additional features
were added to the processor architecture; more on-chip registers sped up programs, and complex
instructions could be used to make more compact programs. Floating-point arithmetic, for example, was
often not available on 8-bit microprocessors and had to be carried out in software. Integration of
the floating-point unit, first as a separate integrated circuit and then as part of the same microprocessor
chip, sped up floating-point calculations.
Occasionally, physical limitations of integrated circuits made such practices as a bit slice approach
necessary. Instead of processing all of a long word on one integrated circuit, multiple circuits in parallel
processed subsets of each data word. While this required extra logic to handle, for example, carry and
overflow within each slice, the result was a system that could handle, for example, 32-bit words using
integrated circuits with a capacity for only four bits each.
With the ability to put large numbers of transistors on one chip, it becomes feasible to integrate memory
on the same die as the processor. This CPU cache has the advantage of faster access than off-chip
memory, and increases the processing speed of the system for many applications. Processor clock
frequency has increased more rapidly than external memory speed, except in the recent past, so cache
memory is necessary if the processor is not to be delayed by slower external memory.
Special-purpose designs
A microprocessor is a general-purpose system. Several specialized processing devices have followed from
the technology:
32-bit processors have more digital logic than narrower processors, so 32-bit (and wider) processors
produce more digital noise and have higher static consumption than narrower processors.[3] Reducing
digital noise improves ADC conversion results.[4][5] So, 8- or 16-bit processors are better than 32-bit
processors for system on a chip and microcontrollers that require extremely low-power electronics, or are
part of a mixed-signal integrated circuit with noise-sensitive on-chip analog electronics such as
high-resolution analog-to-digital converters, or both.
Nevertheless, trade-offs apply: running 32-bit arithmetic on an 8-bit chip could end up using more power,
as the chip must execute software with multiple instructions. Modern microprocessors go into low-power
states when possible,[6] and an 8-bit chip running 32-bit software is active most of the time. This creates a
delicate balance between software, hardware and use patterns, plus costs.
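The cost of wide arithmetic on a narrow ALU is visible on the 8085 itself: even a 16-bit addition must be
composed from 8-bit operations chained through the carry flag. A sketch, assuming one operand in HL
and the other in DE:

        MOV A, L     ; low byte of the first operand
        ADD E        ; add the low byte of the second operand; carry flag records any overflow
        MOV L, A     ; store the low byte of the result
        MOV A, H     ; high byte of the first operand
        ADC D        ; add the high byte plus the carry from the low-byte addition
        MOV H, A     ; HL now holds the 16-bit sum

A 32-bit addition doubles the work again, which is why a narrow processor spends more cycles, and
therefore more active time, on wide arithmetic.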
When manufactured on a similar process, 8-bit microprocessors use less power when operating and less
power when sleeping than 32-bit microprocessors.[7]
However, some people say a 32-bit microprocessor may use less average power than an 8-bit
microprocessor when the application requires certain operations, such as floating-point math, that take
many more clock cycles on an 8-bit microprocessor than on a 32-bit one, so that the 8-bit
microprocessor spends more time in a high-power operating mode.[7][8][9][10]
Embedded applications
Thousands of items that were traditionally not computer-related include microprocessors. These include
large and small household appliances, cars (and their accessory equipment units), car keys, tools and test
instruments, toys, light switches/dimmers and electrical circuit breakers, smoke alarms, battery packs, and
hi-fi audio/visual components (from DVD players to phonograph turntables). Such products as cellular
telephones, DVD video systems, and HDTV broadcast systems fundamentally require consumer devices
with powerful, low-cost microprocessors. Increasingly stringent pollution control standards effectively
require automobile manufacturers to use microprocessor engine management systems, to allow optimal
control of emissions over widely varying operating conditions of an automobile. Non-programmable
controls would require complex, bulky, or costly implementation to achieve the results possible with a
microprocessor.
A microprocessor control program (embedded software) can be easily tailored to different needs of a
product line, allowing upgrades in performance with minimal redesign of the product. Different features
can be implemented in different models of a product line at negligible production cost.
Microprocessor control of a system can provide control strategies that would be impractical to implement
using electromechanical controls or purpose-built electronic controls. For example, an engine control
system in an automobile can adjust ignition timing based on engine speed, load on the engine, ambient
temperature, and any observed tendency for knocking, allowing an automobile to operate on a range of
fuel grades.
History
The advent of low-cost computers on integrated circuits has transformed modern society. General-purpose
microprocessors in personal computers are used for computation, text editing, multimedia display, and
communication over the Internet. Many more microprocessors are part of embedded systems, providing
digital control over myriad objects from appliances to automobiles to cellular phones and industrial
process control.
The first use of the term "microprocessor" is attributed to Viatron Computer Systems describing the
custom integrated circuit used in their System 21 small computer system announced in 1968.
By the late 1960s, designers were striving to integrate the central processing unit (CPU) functions of a
computer onto a handful of MOS LSI chips, called microprocessor unit (MPU) chip sets. Building on 8-bit
arithmetic logic units (3800/3804) he had designed earlier at Fairchild, Lee Boysel created the
Four-Phase Systems Inc. AL-1 in 1969, an 8-bit CPU slice that was expandable to 32 bits. In 1970, Steve Geller
and Ray Holt of Garrett AiResearch designed the MP944 chip set to implement the F-14A Central Air
Data Computer on six metal-gate chips fabricated by AMI.
Intel introduced its first 4-bit microprocessor 4004 in 1971, and its 8-bit microprocessor 8008 in 1972.
During the 1960s, computer processors were constructed out of small- and medium-scale ICs, each
containing from tens of transistors to a few hundred. These were placed and soldered onto printed circuit
boards, and often multiple boards were interconnected in a chassis. The large number of discrete logic
gates used more electrical power, and therefore produced more heat, than a more integrated design with
fewer ICs. The distance that signals had to travel between ICs on the boards limited a computer's
operating speed.
In the NASA Apollo space missions to the moon in the 1960s and 1970s, all onboard computations for
primary guidance, navigation and control were provided by a small custom processor called "The Apollo
Guidance Computer". It used wire wrap circuit boards whose only logic elements were three-input NOR
gates.[11]
The first microprocessors emerged in the early 1970s and were used for electronic calculators,
using binary-coded decimal (BCD) arithmetic on 4-bit words. Other embedded uses of 4-bit and 8-bit
microprocessors, such as terminals, printers, and various kinds of automation, followed soon after.
Affordable 8-bit microprocessors with 16-bit addressing also led to the first general-purpose
microcomputers from the mid-1970s on.
Since the early 1970s, the increase in capacity of microprocessors has followed Moore's law, which
originally suggested that the number of components that can be fitted onto a chip doubles every year.
With present technology, it is actually every two years,[12] and as such Moore later changed the period to
two years.[13]
First projects
Three projects delivered a microprocessor at about the same time: Garrett AiResearch's Central Air Data
Computer (CADC), Texas Instruments' (TI) TMS 1000 (September 1971), and Intel's 4004 (November
1971).
CADC
In 1968, Garrett AiResearch (which employed designers Ray Holt and Steve Geller) was invited to
produce a digital computer to compete with electromechanical systems then under development for the
main flight control computer in the US Navy's new F-14 Tomcat fighter. The design was complete by
1970, and used a MOS-based chipset as the core CPU. The design was significantly (approximately 20
times) smaller and much more reliable than the mechanical systems it competed against, and was used in
all of the early Tomcat models. This system contained "a 20-bit, pipelined, parallel multi-microprocessor".
The Navy refused to allow publication of the design until 1997. For this reason the CADC, and the MP944
chipset it used, are fairly unknown.[14] Ray Holt graduated from California Polytechnic University in
1968, and began his computer design career with the CADC. From its inception, it was shrouded in
secrecy until 1998 when, at Holt's request, the US Navy allowed the documents into the public domain.
Since then, people have debated whether this was the first microprocessor. Holt has stated that no one has
compared this microprocessor with those that came later.[15]
According to Parab et al. (2007), "The scientific papers and literature published around 1971 reveal
that the MP944 digital processor used for the F-14 Tomcat aircraft of the US Navy qualifies as the first
microprocessor. Although interesting, it was not a single-chip processor, as was not the Intel 4004 -- they
both were more like a set of parallel building blocks you could use to make a general-purpose form. It
contains a CPU, RAM, ROM, and two other support chips like the Intel 4004. It was made from the
same P-channel technology, operated at military specifications and had larger chips -- an excellent
computer engineering design by any standards. Its design indicates a major advance over Intel, and two
years earlier. It actually worked and was flying in the F-14 when the Intel 4004 was announced. It
indicates that today's industry theme of converging DSP-microcontroller architectures was started in
1971."[16] This convergence of DSP and microcontroller architectures is known as a digital signal
controller.[17]
Four-Phase Systems AL1
[Photo: The AL1 by Four-Phase Systems Inc., one of the earliest devices in the field of microprocessor technology]
The Four-Phase Systems AL1 was an 8-bit bit-slice chip containing eight registers and an ALU.[18] It was
designed by Lee Boysel in 1969.[19][20][21] At the time, it formed part of a nine-chip, 24-bit CPU with three
AL1s, but it was later called a microprocessor when, in response to 1990s litigation by Texas Instruments,
a demonstration system was constructed where a single AL1 formed part of a courtroom demonstration
computer system, together with RAM, ROM, and an input-output device. [22]
Pico/General Instrument
[Photo: The PICO1/GI250 chip, introduced in 1971; designed by Pico Electronics (Glenrothes, Scotland) and manufactured by General Instrument of Hicksville, NY]
In 1971, Pico Electronics[23] and General Instrument (GI) introduced their first collaboration in ICs, a
complete single-chip calculator IC for the Monroe/Litton Royal Digital III calculator. This chip could also
arguably lay claim to be one of the first microprocessors or microcontrollers, having ROM, RAM and
a RISC instruction set on-chip. The layout for the four layers of the PMOS process was hand drawn at
x500 scale on Mylar film, a significant task at the time given the complexity of the chip.
Pico was a spinout by five GI design engineers whose vision was to create single chip calculator ICs.
They had significant previous design experience on multiple calculator chipsets with both GI
and Marconi-Elliott.[24] The key team members had originally been tasked by Elliott Automation to create
an 8-bit computer in MOS and had helped establish a MOS Research Laboratory in Glenrothes, Scotland
in 1967.
Calculators were becoming the largest single market for semiconductors so Pico and GI went on to have
significant success in this burgeoning market. GI continued to innovate in microprocessors and
microcontrollers with products including the CP1600, IOB1680 and PIC1650. [25] In 1987, the GI
Microelectronics business was spun out into the Microchip PIC microcontroller business.
Intel 4004
[Photo: The 4004 with cover removed (left) and as actually used (right)]
The Intel 4004 is generally regarded as the first commercially available microprocessor, [26][27] and
cost US$60 (equivalent to $351.18 in 2015).[28] The first known advertisement for the 4004 is dated
November 15, 1971 and appeared in Electronic News.[29] The project that produced the 4004 originated in
1969, when Busicom, a Japanese calculator manufacturer, asked Intel to build a chipset for
high-performance desktop calculators. Busicom's original design called for a programmable chip set consisting
of seven different chips. Three of the chips were to make a special-purpose CPU with its program stored
in ROM and its data stored in shift register read-write memory. Ted Hoff, the Intel engineer assigned to
evaluate the project, believed the Busicom design could be simplified by using dynamic RAM storage for
data, rather than shift register memory, and a more traditional general-purpose CPU architecture. Hoff
came up with a four-chip architectural proposal: a ROM chip for storing the programs, a dynamic RAM
chip for storing data, a simple I/O device and a 4-bit central processing unit (CPU). Although not a chip
designer, he felt the CPU could be integrated into a single chip, but as he lacked the technical know-how
the idea remained just a wish for the time being.
While the architecture and specifications of the MCS-4 came from the interaction of Hoff with Stanley
Mazor, a software engineer reporting to him, and with Busicom engineer Masatoshi Shima, during 1969,
Mazor and Hoff moved on to other projects. In April 1970, Intel hired Italian-born engineer Federico
Faggin as project leader, a move that ultimately made the single-chip CPU final design a reality (Shima
meanwhile designed the Busicom calculator firmware and assisted Faggin during the first six months of
the implementation). Faggin, who originally developed the silicon gate technology (SGT) in 1968
at Fairchild Semiconductor[30] and designed the world's first commercial integrated circuit using SGT, the
Fairchild 3708, had the correct background to lead the project into what would become the first
commercial general purpose microprocessor. Since SGT was his very own invention, Faggin also used it
to create his new methodology for random logic design that made it possible to implement a single-chip
CPU with the proper speed, power dissipation and cost. The manager of Intel's MOS Design Department
at the time of the MCS-4 development was Leslie L. Vadász, but Vadász's attention was completely
focused on the mainstream business of semiconductor memories, so he left the leadership and the
management of the MCS-4 project to Faggin, who was ultimately responsible for leading the 4004 project
to its realization. Production units of the 4004 were first delivered to Busicom in March 1971 and shipped
to other customers in late 1971.
Gilbert Hyatt
Gilbert Hyatt was awarded a patent claiming an invention pre-dating both TI and Intel, describing a
"microcontroller".[31] The patent was later invalidated, but not before substantial royalties were paid out. [32]
[33]
8-bit designs
The Intel 4004 was followed in 1972 by the Intel 8008, the world's first 8-bit microprocessor. The 8008
was not, however, an extension of the 4004 design, but instead the culmination of a separate design
project at Intel, arising from a contract with Computer Terminal Corporation (CTC), of San Antonio, TX,
for a chip for a terminal they were designing,[35] the Datapoint 2200; fundamental aspects of the design came
not from Intel but from CTC. In 1968, CTC's Vic Poor and Harry Pyle developed the original design for
the instruction set and operation of the processor. In 1969, CTC contracted two
companies, Intel and Texas Instruments, to make a single-chip implementation, known as the CTC
1201.[36] In late 1970 or early 1971, TI dropped out, being unable to make a reliable part. In 1970, with Intel yet
to deliver the part, CTC opted to use their own implementation in the Datapoint 2200, using traditional
TTL logic instead (thus the first machine to run "8008 code" was not in fact a microprocessor at all and
was delivered a year earlier). Intel's version of the 1201 microprocessor arrived in late 1971, but was too
late, slow, and required a number of additional support chips. CTC had no interest in using it. CTC had
originally contracted Intel for the chip, and would have owed them US$50,000 (equivalent to $292,646 in
2015) for their design work.[36] To avoid paying for a chip they did not want (and could not use), CTC
released Intel from their contract and allowed them free use of the design.[36] Intel marketed it as the 8008
in April 1972, as the world's first 8-bit microprocessor. It was the basis for the famous "Mark-8"
computer kit advertised in the magazine Radio-Electronics in 1974. This processor had an 8-bit data bus
and a 14-bit address bus.[37]
The 8008 was the precursor to the successful Intel 8080 (1974), which offered improved performance
over the 8008 and required fewer support chips. Federico Faggin conceived and designed it using high
voltage N channel MOS. The Zilog Z80 (1976) was also a Faggin design, using low voltage N channel
with depletion load and derivative Intel 8-bit processors: all designed with the methodology Faggin
created for the 4004. Motorola released the competing 6800 in August 1974, and the similar MOS
Technology 6502 in 1975 (both designed largely by the same people). The 6502 family rivaled the Z80 in
popularity during the 1980s.
A low overall cost, small packaging, simple computer bus requirements, and sometimes the integration of
extra circuitry (e.g. the Z80's built-in memory refresh circuitry) allowed the home computer "revolution"
to accelerate sharply in the early 1980s. This delivered such inexpensive machines as the Sinclair ZX-81,
which sold for US$99 (equivalent to $258.12 in 2015). A variation of the 6502, the MOS Technology
6510 was used in the Commodore 64 and yet another variant, the 8502, powered the Commodore 128.
The Western Design Center, Inc (WDC) introduced the CMOS 65C02 in 1982 and licensed the design to
several firms. It was used as the CPU in the Apple IIe and IIc personal computers as well as in medical
implantable grade pacemakers and defibrillators, automotive, industrial and consumer devices. WDC
pioneered the licensing of microprocessor designs, later followed by ARM (32-bit) and other
microprocessor intellectual property (IP) providers in the 1990s.
Motorola introduced the MC6809 in 1978. It was an ambitious and well thought-through 8-bit design that
was source compatible with the 6800, and implemented using purely hard-wired logic (subsequent 16-bit
microprocessors typically used microcode to some extent, as CISC design requirements were becoming
too complex for pure hard-wired logic).
Another early 8-bit microprocessor was the Signetics 2650, which enjoyed a brief surge of interest due to
its innovative and powerful instruction set architecture.
A seminal microprocessor in the world of spaceflight was RCA's RCA 1802 (aka CDP1802, RCA
COSMAC) (introduced in 1976), which was used on board the Galileo probe to Jupiter (launched 1989,
arrived 1995). RCA COSMAC was the first to implement CMOS technology. The CDP1802 was used
because it could be run at very low power, and because a variant was available fabricated using a special
production process, silicon on sapphire (SOS), which provided much better protection against cosmic
radiation and electrostatic discharge than that of any other processor of the era. Thus, the SOS version of
the 1802 was said to be the first radiation-hardened microprocessor.
The RCA 1802 had what is called a static design, meaning that the clock frequency could be made
arbitrarily low, even to 0 Hz, a total stop condition. This let the Galileo spacecraft use minimum electric
power for long uneventful stretches of a voyage. Timers or sensors would awaken the processor in time
for important tasks, such as navigation updates, attitude control, data acquisition, and radio
communication. Current versions of the Western Design Center 65C02 and 65C816 have static cores, and
thus retain data even when the clock is completely halted.
12-bit designs
The Intersil 6100 family consisted of a 12-bit microprocessor (the 6100) and a range of peripheral support
and memory ICs. The microprocessor recognised the DEC PDP-8 minicomputer instruction set. As such it
was sometimes referred to as the CMOS-PDP8. Since it was also produced by Harris Corporation, it was
also known as the Harris HM-6100. By virtue of its CMOS technology and associated benefits, the 6100
was being incorporated into some military designs until the early 1980s.
16-bit designs
The first multi-chip 16-bit microprocessor was the National Semiconductor IMP-16, introduced in early
1973. An 8-bit version of the chipset was introduced in 1974 as the IMP-8.
Other early multi-chip 16-bit microprocessors include one that Digital Equipment Corporation
(DEC) used in the LSI-11 OEM board set and the packaged PDP-11/03 minicomputer, and the Fairchild
Semiconductor MicroFlame 9440, both introduced in 1975-76. In 1975, National introduced the first
16-bit single-chip microprocessor, the National Semiconductor PACE, which was later followed by
an NMOS version, the INS8900.
Another early single-chip 16-bit microprocessor was TI's TMS 9900, which was also compatible with
their TI-990 line of minicomputers. The 9900 was used in the TI 990/4 minicomputer, the TI-99/4A home
computer, and the TM990 line of OEM microcomputer boards. The chip was packaged in a large ceramic
64-pin DIP package, while most 8-bit microprocessors such as the Intel 8080 used the more common,
smaller, and less expensive plastic 40-pin DIP. A follow-on chip, the TMS 9980, was designed to compete
with the Intel 8080, had the full TI 990 16-bit instruction set, used a plastic 40-pin package, moved data
8 bits at a time, but could only address 16 KB. A third chip, the TMS 9995, was a new design. The family
later expanded to include the 99105 and 99110.
The Western Design Center (WDC) introduced the CMOS 65816 16-bit upgrade of the WDC
CMOS 65C02 in 1984. The 65816 16-bit microprocessor was the core of the Apple IIgs and later
the Super Nintendo Entertainment System, making it one of the most popular 16-bit designs of all time.
Intel "upsized" their 8080 design into the 16-bit Intel 8086, the first member of the x86 family, which
powers most modern PC type computers. Intel introduced the 8086 as a cost-effective way of porting
software from the 8080 lines, and succeeded in winning much business on that premise. The 8088, a
version of the 8086 that used an 8-bit external data bus, was the microprocessor in the first IBM PC. Intel
then released the 80186 and 80188, the 80286 and, in 1985, the 32-bit 80386, cementing their PC market
dominance with the processor family's backwards compatibility. The 80186 and 80188 were essentially
versions of the 8086 and 8088, enhanced with some onboard peripherals and a few new instructions.
Although Intel's 80186 and 80188 were not used in IBM PC type designs, second source versions from
NEC, the V20 and V30 frequently were. The 8086 and successors had an innovative but limited method
of memory segmentation, while the 80286 introduced a full-featured segmented memory management
unit (MMU). The 80386 introduced a flat 32-bit memory model with paged memory management.
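As a rough illustration of the 8086 segmentation scheme just described (the particular values are
hypothetical): a 16-bit segment register is shifted left four bits and added to a 16-bit offset to form a
20-bit physical address:

        physical address = segment * 10H + offset
        e.g.  1234H * 10H + 0010H = 12340H + 0010H = 12350H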
The Intel x86 processors up to and including the 80386 do not include floating-point units (FPUs).
Intel introduced the 8087, 80187, 80287 and 80387 math coprocessors to add hardware floating-point and
transcendental function capabilities to the 8086 through 80386 CPUs. The 8087 works with the
8086/8088 and 80186/80188,[38] the 80187 works with the 80186 but not the 80188,[39] the 80287 works
with the 80286 and the 80387 works with the 80386. The combination of an x86 CPU and an x87
coprocessor forms a single multi-chip microprocessor; the two chips are programmed as a unit using a
single integrated instruction set. [40] The 8087 and 80187 coprocessors are connected in parallel with the
data and address buses of their parent processor and directly execute instructions intended for them. The
80287 and 80387 coprocessors are interfaced to the CPU through I/O ports in the CPU's address space;
this is transparent to the program, which does not need to know about or access these I/O ports directly;
the program accesses the coprocessor and its registers through normal instruction opcodes.
32-bit designs
16-bit designs had only been on the market briefly when 32-bit implementations started to appear.
The most significant of the 32-bit designs is the Motorola MC68000, introduced in 1979. The
68k, as it was widely known, had 32-bit registers in its programming model but used 16-bit internal data
paths, three 16-bit arithmetic logic units, and a 16-bit external data bus (to reduce pin count), and
externally supported only 24-bit addresses (internally it worked with full 32-bit addresses). In PC-based
IBM-compatible mainframes the MC68000 internal microcode was modified to emulate the 32-bit
System/370 IBM mainframe.[41] Motorola generally described it as a 16-bit processor. The combination of
high performance, large (16 megabytes, or 2^24 bytes) memory space and fairly low cost made it the most
popular CPU design of its class. The Apple Lisa and Macintosh designs made use of the 68000, as did a
host of other designs in the mid-1980s, including the Atari ST and Commodore Amiga.
The world's first single-chip fully 32-bit microprocessor, with 32-bit data paths, 32-bit buses, and 32-bit
addresses, was the AT&T Bell LabsBELLMAC-32A, with first samples in 1980, and general production
in 1982.[42][43] After the divestiture of AT&T in 1984, it was renamed the WE 32000 (WE for Western
Electric), and had two follow-on generations, the WE 32100 and WE 32200. These microprocessors were
used in the AT&T 3B5 and 3B15 minicomputers; in the 3B2, the world's first desktop super
microcomputer; in the "Companion", the world's first 32-bit laptop computer; and in "Alexander", the
world's first book-sized super microcomputer, featuring ROM-pack memory cartridges similar to today's
gaming consoles. All these systems ran the UNIX System V operating system.
The first commercial, single chip, fully 32-bit microprocessor available on the market was the HP
FOCUS.
Intel's first 32-bit microprocessor was the iAPX 432, which was introduced in 1981 but was not a
commercial success. It had an advanced capability-based object-oriented architecture, but poor
performance compared to contemporary architectures such as Intel's own 80286 (introduced 1982), which
was almost four times as fast on typical benchmark tests. However, the results for the iAPX 432 were partly
due to a rushed and therefore suboptimal Ada compiler.
Motorola's success with the 68000 led to the MC68010, which added virtual memory support.
The MC68020, introduced in 1984, added full 32-bit data and address buses. The 68020 became hugely
popular in the Unix supermicrocomputer market, and many small companies (e.g., Altos, Charles River
Data Systems, Cromemco) produced desktop-size systems. The MC68030 was introduced next,
improving upon the previous design by integrating the MMU into the chip. The continued success led to
the MC68040, which included an FPU for better math performance. A 68050 failed to achieve its
performance goals and was not released, and the follow-up MC68060 was released into a market
saturated by much faster RISC designs. The 68k family faded from use in the early 1990s.
Other large companies designed the 68020 and follow-ons into embedded equipment. At one point, there
were more 68020s in embedded equipment than there were Intel Pentiums in PCs.[44]
The ColdFire processor cores are derivatives of the venerable 68020.
During this time (early to mid-1980s), National Semiconductor introduced a very similar 16-bit pinout,
32-bit internal microprocessor called the NS 16032 (later renamed 32016), the full 32-bit version named
the NS 32032. Later, National Semiconductor produced the NS 32132, which allowed two CPUs to reside
on the same memory bus with built-in arbitration. The NS32016/32 outperformed the MC68000/10, but
the NS32332, which arrived at approximately the same time as the MC68020, did not have enough
performance. The third-generation chip, the NS32532, was different. It had about double the performance
of the MC68030, which was released around the same time. The appearance of RISC processors like the
AM29000 and MC88000 (now both dead) influenced the architecture of the final core, the NS32764.
Technically advanced, with a superscalar RISC core, a 64-bit bus, and internal overclocking, it could
still execute Series 32000 instructions through real-time translation.
When National Semiconductor decided to leave the Unix market, the chip was redesigned into the
Swordfish Embedded processor with a set of on chip peripherals. The chip turned out to be too expensive
for the laser printer market and was killed. The design team went to Intel and there designed the Pentium
processor, which is very similar to the NS32764 core internally. The big success of the Series 32000 was
in the laser printer market, where the NS32CG16 with microcoded BitBlt instructions had very good
price/performance and was adopted by large companies like Canon. By the mid-1980s, Sequent
introduced the first SMP server-class computer using the NS 32032. This was one of the
design's few wins, and it disappeared in the late 1980s. The MIPS R2000 (1984) and R3000 (1989) were
highly successful 32-bit RISC microprocessors. They were used in high-end workstations and servers
by SGI, among others. Other designs included the Zilog Z80000, which arrived too late to market to stand
a chance and disappeared quickly.
The ARM first appeared in 1985.[45] This is a RISC processor design, which has since come to dominate
the 32-bit embedded systems processor space due in large part to its power efficiency, its licensing model,
and its wide selection of system development tools. Semiconductor manufacturers generally license cores
and integrate them into their own system on a chip products; only a few such vendors are licensed to
modify the ARM cores. Most cell phones include an ARM processor, as do a wide variety of other
products. There are microcontroller-oriented ARM cores without virtual memory support, as well
as symmetric multiprocessor (SMP) applications processors with virtual memory.
From 1993 to 2003, the 32-bit x86 architectures became increasingly dominant in desktop, laptop, and
server markets, and these microprocessors became faster and more capable. Intel had licensed early
versions of the architecture to other companies, but declined to license the Pentium,
so AMD and Cyrix built later versions of the architecture based on their own designs. During this span,
these processors increased in complexity (transistor count) and capability (instructions/second) by at least
three orders of magnitude. Intel's Pentium line is probably the most famous and recognizable 32-bit
processor model, at least with the public at large.
64-bit designs in personal computers
With 64-bit operating systems running natively, software is also geared to fully utilize the capabilities of
such processors. The move to 64 bits is more than just an increase in register size from the IA-32, as it
also doubles the number of general-purpose registers.
The move to 64 bits by PowerPC had been intended since the architecture's design in the early 90s and
was not a major cause of incompatibility. Existing integer registers are extended as are all related data
pathways, but, as was the case with IA-32, both floating point and vector units had been operating at or
above 64 bits for several years. Unlike what happened when IA-32 was extended to x86-64, no new
general-purpose registers were added in 64-bit PowerPC, so any performance gained when using the
64-bit mode for applications making no use of the larger address space is minimal.
In 2011, ARM introduced a new 64-bit ARM architecture.
RISC
In the mid-1980s to early 1990s, a crop of new high-performance reduced instruction set computer
(RISC) microprocessors appeared, influenced by discrete RISC-like CPU designs such as the IBM
801 and others. RISC microprocessors were initially used in special-purpose machines
and Unix workstations, but then gained wide acceptance in other roles.
The first commercial RISC microprocessor design was released in 1984, by MIPS Computer Systems, the
32-bit R2000 (the R1000 was not released). In 1986, HP released its first system with a PA-RISC CPU. In
1987, the non-Unix, 32-bit, then cache-less, ARM2-based Acorn Archimedes became
the first commercial success using the ARM architecture, then known as Acorn RISC Machine (ARM);
its first silicon, the ARM1, appeared in 1985. The R3000 made the design truly practical, and the R4000 introduced the
world's first commercially available 64-bit RISC microprocessor. Competing projects would result in the
IBM POWER and Sun SPARC architectures. Soon every major vendor was releasing a RISC design,
including the AT&T CRISP, AMD 29000, Intel i860 and Intel i960, Motorola 88000, DEC Alpha.
In the late 1990s, only two 64-bit RISC architectures were still produced in volume for non-embedded
applications: SPARC and Power ISA, but as ARM has become increasingly powerful, in the early 2010s,
it became the third RISC architecture in the general computing segment.
Multi-core designs
In 2001, IBM introduced the first commercial multi-core processor, the monolithic two-core POWER4.
Personal computers did not receive multi-core processors until the 2005 introduction of the two-core
Intel Pentium D. The Pentium D, however, was not a monolithic multi-core processor. It was
constructed from two dies, each containing a core, packaged on a multi-chip module. The first monolithic
multi-core processor in the personal computer market was the AMD Athlon X2, which was introduced a
few weeks after the Pentium D. As of 2012, dual- and quad-core processors are widely used in home PCs
and laptops, while quad-, six-, eight-, ten-, twelve-, and sixteen-core processors are common in the
professional and enterprise markets with workstations and servers.
Sun Microsystems has released the Niagara and Niagara 2 chips, both of which feature an eight-core
design. The Niagara 2 supports more threads and operates at 1.6 GHz.
High-end Intel Xeon processors that are on the LGA 775, LGA 1366, and LGA 2011 sockets and high-end
AMD Opteron processors that are on the C32 and G34 sockets are DP (dual processor) capable, as well as
the older Intel Core 2 Extreme QX9775 also used in an older Mac Pro by Apple and the Intel Skulltrail
motherboard. AMD's G34 motherboards can support up to four CPUs and Intel's LGA 1567 motherboards
can support up to eight CPUs.
Modern desktop computers support systems with multiple CPUs, but few applications outside of the
professional market can make good use of more than four cores. Both Intel and AMD currently offer fast
quad, hex and octa-core desktop CPUs, making multi-CPU systems obsolete for many purposes. The
desktop market has been in a transition towards quad-core CPUs since Intel's Core 2 Quad was released
and are now common, although dual-core CPUs are still more prevalent. Older and mobile computers are
less likely than newer desktops to have more than two cores. Not all software is optimised for multi-core
CPUs, making fewer, more powerful cores preferable.
AMD offers CPUs with more cores for a given amount of money than similarly priced Intel CPUs, but
the AMD cores are somewhat slower, so the two trade blows in different applications depending on how
well-threaded the running programs are. For example, Intel's cheapest Sandy Bridge quad-core CPUs
often cost almost twice as much as AMD's cheapest Athlon II, Phenom II, and FX quad-core CPUs, but
Intel has dual-core CPUs in the same price ranges as AMD's cheaper quad-core CPUs. In an application
that uses one or two threads, the Intel dual-core CPUs outperform AMD's similarly priced quad-core
CPUs; and if a program supports three or four threads, the cheap AMD quad-core CPUs outperform the
similarly priced Intel dual-core CPUs.
Historically, AMD and Intel have switched places as the company with the fastest CPU several times.
Intel currently leads on the desktop side of the computer CPU market, with their Sandy Bridge and Ivy
Bridge series. In servers, AMD's new Opterons seem to have superior performance for their price point.
This means that AMD are currently more competitive in low- to mid-end servers and workstations that
more effectively use fewer cores and threads.
Taken to the extreme, this trend also includes manycore designs, with hundreds of cores, with
qualitatively different architectures.
Market statistics
In 1997, about 55% of all CPUs sold in the world were 8-bit microcontrollers, of which over two billion
were sold.[46]
In 2002, less than 10% of all the CPUs sold in the world were 32-bit or more. Of all the 32-bit CPUs sold,
about 2% were used in desktop or laptop personal computers. Most microprocessors are used in embedded
control applications such as household appliances, automobiles, and computer peripherals. Taken as a
whole, the average price for a microprocessor, microcontroller, or DSP is just over US$6 (equivalent to
$7.91 in 2015).[47]
In 2003, about US$44 billion (equivalent to $56.70 billion in 2015) worth of microprocessors were
manufactured and sold.[48] Although about half of that money was spent on CPUs used in desktop or
laptop personal computers, they account for only about 2% of all CPUs sold.[47] The quality-adjusted
price of laptop microprocessors improved -25% to -35% per year in 2004-10, and the rate of improvement
slowed to -15% to -25% per year in 2010-13.[49]
About ten billion CPUs were manufactured in 2008. About 98% of new CPUs produced each year are
embedded.[50]