2. Objectives
• Know the difference between computer
organization and computer architecture.
• Understand units of measurement common to
computer systems.
• Appreciate the evolution of computers.
• Understand the computer as a layered system.
• Be able to explain the von Neumann architecture
and the function of basic computer components.
3. 1.1 Overview (1 of 2)
• Why study computer organization and
architecture?
– Design better programs, including system software
such as compilers, operating systems, and device
drivers.
– Optimize program behavior.
– Evaluate (benchmark) computer system
performance.
– Understand time, space, and price tradeoffs.
4. 1.1 Overview (2 of 2)
• Computer organization
– Encompasses all physical aspects of computer systems
(e.g., circuit design, control signals, memory types).
– How does a computer work?
• Computer architecture
– Logical aspects of system implementation as seen by
the programmer (e.g., instruction sets, instruction
formats, data types, addressing modes).
– How do I design a computer?
5. 1.2 Computer Systems (1 of 2)
• There is no clear distinction between matters
related to computer organization and matters
relevant to computer architecture.
• Principle of Equivalence of Hardware and
Software:
– Any task done by software can also be done using
hardware, and any operation performed directly
by hardware can be done using software.*
* Assuming speed is not a concern.
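To make the principle concrete, here is a small sketch (not part of the original slides): multiplication can be done by the CPU's hardware multiply instruction, or emulated in software with nothing but shifts and adds, as was common on processors that lacked a multiplier. The function name soft_mul is ours.

```c
#include <stdint.h>
#include <stdio.h>

/* "Software" multiplication: shift-and-add, using only addition and
   bit operations -- the kind of routine used on CPUs without a
   hardware multiply instruction. */
uint32_t soft_mul(uint32_t a, uint32_t b)
{
    uint32_t result = 0;
    while (b != 0) {
        if (b & 1)          /* if the low bit of b is set ...        */
            result += a;    /* ... add the shifted multiplicand      */
        a <<= 1;            /* shift multiplicand left               */
        b >>= 1;            /* shift multiplier right                */
    }
    return result;
}

int main(void)
{
    uint32_t x = 37, y = 29;
    /* "Hardware" multiplication: the * operator compiles to the CPU's
       multiply instruction on machines that have one. */
    printf("hardware: %u\n", x * y);
    printf("software: %u\n", soft_mul(x, y));
    return 0;
}
```

Both calls print the same product; the difference, as the slide's footnote says, is speed.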
6. 1.2 Computer Systems (2 of 2)
• At the most basic level, a computer is a
device consisting of three pieces:
– A processor to interpret and execute programs
– A memory to store both data and programs
– A mechanism for transferring data to and from
the outside world
7. 1.3 An Example System (1 of 19)
• Consider this advertisement:
8. 1.3 An Example System (2 of 19)
• Measures of capacity and speed:
– Kilo- (K) = 1 thousand = 10^3 and 2^10
– Mega- (M) = 1 million = 10^6 and 2^20
– Giga- (G) = 1 billion = 10^9 and 2^30
– Tera- (T) = 1 trillion = 10^12 and 2^40
– Peta- (P) = 1 quadrillion = 10^15 and 2^50
– Exa- (E) = 1 quintillion = 10^18 and 2^60
– Zetta- (Z) = 1 sextillion = 10^21 and 2^70
– Yotta- (Y) = 1 septillion = 10^24 and 2^80
• Whether a metric refers to a power of ten or a power of
two typically depends upon what is being measured.
9. 1.3 An Example System (3 of 19)
• Hertz = clock cycles per second (frequency)
– 1MHz = 1,000,000Hz
– Processor speeds are measured in MHz or GHz.
• Byte = a unit of storage
– 1KB = 2^10 = 1,024 Bytes
– 1MB = 2^20 = 1,048,576 Bytes
– 1GB = 2^30 = 1,073,741,824 Bytes
– Main memory (RAM) is measured in GB.
– Disk storage is measured in GB for small systems, TB (2^40)
for large systems.
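As a quick illustration (a sketch added here, not from the slides), the following program prints the decimal and binary readings of each prefix side by side, showing how the two interpretations drift apart as the prefixes grow:

```c
#include <stdio.h>
#include <math.h>

int main(void)
{
    const char *prefix[] = { "Kilo", "Mega", "Giga", "Tera" };

    for (int i = 0; i < 4; i++) {
        double decimal = pow(10.0, 3 * (i + 1));   /* 10^3, 10^6, 10^9, 10^12 */
        double binary  = pow(2.0, 10 * (i + 1));   /* 2^10, 2^20, 2^30, 2^40  */
        printf("%s-: 10^%d = %.0f   2^%d = %.0f   (binary is %.1f%% larger)\n",
               prefix[i], 3 * (i + 1), decimal, 10 * (i + 1), binary,
               100.0 * (binary - decimal) / decimal);
    }
    return 0;
}
```

The gap grows from about 2.4% at Kilo- to about 10% at Tera-, which is why it matters whether a quoted capacity is a power of ten or a power of two.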
11. 1.3 An Example System (5 of 19)
• Millisecond = 1 thousandth of a second
– Hard disk drive access times are often 10 to 20
milliseconds.
• Nanosecond = 1 billionth of a second
– Main memory access times are often 50 to 70
nanoseconds.
• Micron (micrometer) = 1 millionth of a meter
– Circuits on computer chips are measured in
microns.
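Putting these magnitudes side by side (a sketch using the rough access times quoted above; the 15 ms and 60 ns midpoints are our assumption):

```c
#include <stdio.h>

int main(void)
{
    double disk_access_s = 15e-3;   /* ~15 milliseconds, mid-range of 10-20 ms */
    double ram_access_s  = 60e-9;   /* ~60 nanoseconds, mid-range of 50-70 ns  */

    /* How many main-memory accesses fit in the time of one disk access? */
    printf("one disk access ~= %.0f RAM accesses\n",
           disk_access_s / ram_access_s);
    return 0;
}
```

The answer is on the order of a quarter of a million, which is why the memory hierarchy matters so much for performance.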
12. 1.3 An Example System (6 of 19)
• We note that cycle time is the reciprocal of
clock frequency.
• A bus operating at 133MHz has a cycle time
of 7.52 nanoseconds:
• 1 second / 133,000,000 cycles ≈ 7.52 ns/cycle
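The same calculation in code (a sketch; the 133 MHz figure is taken from the example above):

```c
#include <stdio.h>

int main(void)
{
    double bus_freq_hz   = 133e6;                     /* 133 MHz bus clock      */
    double cycle_time_ns = 1.0 / bus_freq_hz * 1e9;   /* reciprocal, in ns      */

    printf("cycle time = %.2f ns\n", cycle_time_ns);  /* prints 7.52 ns         */
    return 0;
}
```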
Now back to the advertisement ...
15. 1.3 An Example System (9 of 19)
• Computers with large main memory capacity can
run larger programs with greater speed than
computers having small memories.
• RAM is an acronym for random access memory.
Random access means that memory contents
can be accessed directly if you know their location.
• Cache is a type of temporary memory that can
be accessed faster than RAM.
20. 1.3 An Example System (14 of 19)
• Serial ports send data as a series of pulses
along one or two data lines.
• Parallel ports send data as a single pulse
along at least eight data lines.
• USB, Universal Serial Bus, is an intelligent
serial interface that is self-configuring. (It
supports “plug and play.”)
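A toy sketch of the difference (not a real port driver): a serial transfer shifts the byte out one bit at a time, while a parallel transfer drives all eight bits onto eight lines at once. The byte value 0x5A is arbitrary.

```c
#include <stdio.h>
#include <stdint.h>

/* Serial: shift the byte out one bit at a time, least significant bit first. */
void send_serial(uint8_t byte)
{
    printf("serial  : ");
    for (int i = 0; i < 8; i++)
        printf("%d ", (byte >> i) & 1);   /* one bit per "clock tick" */
    printf("\n");
}

/* Parallel: all eight bits are presented on eight lines in a single step. */
void send_parallel(uint8_t byte)
{
    printf("parallel: ");
    for (int i = 7; i >= 0; i--)          /* show line 7 .. line 0 together */
        printf("%d", (byte >> i) & 1);
    printf("  (one transfer)\n");
}

int main(void)
{
    send_serial(0x5A);
    send_parallel(0x5A);
    return 0;
}
```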
25. 1.3 An Example System (19 of 19)
• Throughout the remainder of the book you will
see how these components work and how they
interact with software to make complete
computer systems.
• This statement raises two important questions:
– What assurance do we have that computer
components will operate as we expect?
– What assurance do we have that computer
components will operate together?
26. 1.4 Standards Organizations (1 of 4)
• There are many organizations that set
computer hardware standards, including
standards for the interoperability of
computer components.
• Throughout this book, and in your career,
you will encounter many of them.
• Some of the most important standards-
setting groups include the following.
27. 1.4 Standards Organizations (2 of 4)
• The Institute of Electrical and Electronics
Engineers (IEEE)
– Promotes the interests of the worldwide
electrical engineering community.
– Establishes standards for computer
components, data representation, and
signaling protocols, among many other things.
28. 1.4 Standards Organizations (3 of 4)
• The International Telecommunication Union
(ITU)
– Concerns itself with the interoperability of
telecommunications systems, including data
communications and telephony.
• National groups establish standards within their
respective countries:
– The American National Standards Institute (ANSI)
– The British Standards Institution (BSI)
29. 1.4 Standards Organizations (4 of 4)
• The International Organization for
Standardization (ISO)
– Establishes worldwide standards for everything
from screw threads to photographic film.
– Is influential in formulating standards for
computer hardware and software, including
their methods of manufacture.
Note: ISO is not an acronym. ISO comes from the Greek,
isos, meaning “equal.”
30. 1.6 The Computer Level Hierarchy
(1 of 7)
• Computers consist of many things besides chips.
• Before a computer can do anything worthwhile, it
must also use software.
• Writing complex programs requires a “divide and
conquer” approach, where each program module
solves a smaller problem.
• Complex computer systems employ a similar
technique through a series of virtual machine
layers.
31. 1.6 The Computer Level Hierarchy
(2 of 7)
• Each virtual machine layer is an
abstraction of the level below
it.
• The machines at each level
execute their own particular
instructions, calling upon
machines at lower levels to
perform tasks as required.
• Computer circuits ultimately
carry out the work.
32. 1.6 The Computer Level Hierarchy
(3 of 7)
• Level 6: The User Level
– Program execution and user interface level
– The level with which we are most familiar
• Level 5: High-Level Language Level
– The level with which we interact when we
write programs in languages such as C, Pascal,
Lisp, and Java.
33. 1.6 The Computer Level Hierarchy
(4 of 7)
• Level 4: Assembly Language Level
– Acts upon assembly language produced from Level
5, as well as instructions programmed directly at
this level.
• Level 3: System Software Level
– Controls executing processes on the system.
– Protects system resources.
– Assembly language instructions often pass through
Level 3 without modification.
34. 1.6 The Computer Level Hierarchy
(5 of 7)
• Level 2: Machine Level
– Also known as the Instruction Set Architecture
(ISA) Level.
– Consists of instructions that are particular to
the architecture of the machine.
– Programs written in machine language need no
compilers, interpreters, or assemblers.
35. 1.6 The Computer Level Hierarchy
(6 of 7)
• Level 1: Control Level
– A control unit decodes and executes instructions
and moves data through the system.
– Control units can be microprogrammed or
hardwired.
– A microprogram is a program written in a low-level
language that is implemented by the hardware.
– Hardwired control units consist of hardware that
directly executes machine instructions.
36. 1.6 The Computer Level Hierarchy
(7 of 7)
• Level 0: Digital Logic Level
– This level is where we find digital circuits (the
chips).
– Digital circuits consist of gates and wires.
– These components implement the
mathematical logic of all other levels.
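One way to see the layers at work is to follow a single statement down the hierarchy. The sketch below is illustrative; the assembly mnemonic and encoding mentioned in the comments assume a MIPS-style machine and are not taken from the slides.

```c
#include <stdio.h>

/* Level 5 (high-level language): an ordinary C function. */
int sum(int a, int b)
{
    /* Level 4/2: the compiler translates this statement into an "add"
       instruction such as  add $v0, $a0, $a1  (MIPS-style assembly),
       which the assembler encodes as one 32-bit machine word.
       Level 1/0: the control unit decodes that word and the ALU's adder
       circuit -- gates and wires -- produces the result. */
    return a + b;
}

int main(void)
{
    printf("%d\n", sum(2, 3));   /* Level 6: the user runs the program */
    return 0;
}
```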
37. 1.9 The von Neumann Model (1 of 8)
• On the ENIAC, all programming was done at
the digital logic level.
• Programming the computer involved moving
plugs and wires.
• A different hardware configuration was
needed to solve every unique problem type.
– Configuring the ENIAC to solve a “simple” problem
required many days' labor by skilled technicians.
38. 1.9 The von Neumann Model (2 of 8)
• Inventors of the ENIAC, John Mauchly and J.
Presper Eckert, conceived of a computer that
could store instructions in memory.
• The invention of this idea has since been
attributed to the mathematician John von
Neumann, who was a contemporary of Mauchly
and Eckert.
• Stored-program computers have become known
as von Neumann Architecture systems.
39. 1.9 The von Neumann Model (3 of 8)
• Today’s stored-program computers have the
following characteristics:
– Three hardware systems:
• A central processing unit (CPU)
• A main memory system
• An I/O system
– The capacity to carry out sequential instruction
processing.
– A single data path between the CPU and main memory.
• This single path is known as the von Neumann bottleneck.
40. 1.9 The von Neumann Model (4 of 8)
• This is a general depiction of a von Neumann
system:
• These computers employ a fetch-decode-
execute cycle to run programs as follows . . .
41. 1.9 The von Neumann Model (5 of 8)
• The control unit fetches the next instruction
from memory using the program counter to
determine where the instruction is located.
42. 1.9 The von Neumann Model (6 of 8)
• The instruction is decoded into a language
that the ALU can understand.
43. 1.9 The von Neumann Model (7 of 8)
• Any data operands required to execute the
instruction are fetched from memory and
placed into registers within the CPU.
44. 1.9 The von Neumann Model (8 of 8)
• The ALU executes the instruction and
places results in registers or memory.
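The whole cycle can be sketched as a toy von Neumann machine in a few lines of C. The three-instruction ISA below (LOAD, ADD, HALT) is invented purely for illustration; note that a single memory array holds both the program and its data, and the program counter drives each fetch.

```c
#include <stdio.h>
#include <stdint.h>

/* A toy von Neumann machine: one memory holds both instructions and data,
   a program counter selects the next instruction, and a fetch-decode-execute
   loop runs the program. */

enum { HALT = 0, LOAD = 1, ADD = 2 };     /* invented opcodes */

int main(void)
{
    /* Memory: instructions first (opcode, operand address pairs), then data. */
    uint8_t mem[16] = {
        LOAD, 8,      /* acc <- mem[8]             */
        ADD,  9,      /* acc <- acc + mem[9]       */
        HALT, 0,
        0, 0,
        5, 7          /* data at addresses 8 and 9 */
    };

    uint8_t pc  = 0;     /* program counter      */
    int     acc = 0;     /* accumulator register */

    for (;;) {
        uint8_t opcode  = mem[pc];        /* 1. fetch: PC locates the instruction */
        uint8_t operand = mem[pc + 1];
        pc += 2;                          /*    advance PC to the next instruction */

        switch (opcode) {                 /* 2. decode                             */
        case LOAD: acc = mem[operand];  break;   /* 3. fetch operand from memory   */
        case ADD:  acc += mem[operand]; break;   /* 4. execute and store the result */
        case HALT: printf("acc = %d\n", acc); return 0;
        }
    }
}
```

Running it prints acc = 12: the machine loads 5, adds 7, and halts, all by repeating the fetch-decode-execute cycle described above.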
45. 1.10 Non–von Neumann Models
(1 of 2)
• Conventional stored-program computers have
undergone many incremental improvements over the
years.
• These improvements include adding specialized buses,
floating-point units, and cache memories, to name only
a few.
• But enormous improvements in computational power
require departure from the classic von Neumann
architecture.
• Adding processors is one approach.
46. 1.10 Non–von Neumann Models
(2 of 2)
• Some of today’s systems have separate buses for
data and instructions.
– Called a Harvard architecture
• Other non-von Neumann systems provide special-
purpose processors to offload work from the
main CPU.
• More radical departures include dataflow
computing, quantum computing, cellular
automata, and parallel computing.