
Perfect 👍 I'll now give you very detailed notes on Flynn's Classification, using the flow and content in your material (exam-writing style).

I'll first give a short definition for each category, then a detailed explanation with examples, characteristics, advantages, and limitations.

Flynn’s Classification of Parallel Computers


Flynn’s Taxonomy (1972) is one of the most widely used classifications of parallel
computer architectures. It is based on the number of instruction streams and
data streams that a computer can process simultaneously.
 Instruction stream: Sequence of instructions executed by the control
unit.
 Data stream: Sequence of data items that the instructions operate on.
According to Flynn, computers are classified into four categories:

1. SISD (Single Instruction, Single Data)


Small Definition:
A single processor executes a single instruction stream on a single data
stream.
Detailed Explanation:
 Represents the classical von Neumann architecture.
 A single control unit (CU) fetches and decodes instructions.
 A single processing unit executes the instructions.
 A single memory unit stores both data and instructions.
 No parallelism is available.
Characteristics:
 Sequential execution of instructions.
 Only one operation is performed at a time.
 Simple to design and program.
Examples:
 Traditional PCs before multicore processors.
 Early mainframes.

2. SIMD (Single Instruction, Multiple Data)


Small Definition:
A single control unit executes a single instruction simultaneously on multiple
data streams.
Detailed Explanation:
 Contains a single control unit (CU) and multiple processing elements
(PEs).
 Each processing element has its own local memory to store its data.
 CU broadcasts the same instruction to all PEs.
 Each PE executes the instruction on its own local data.
 In conditional instructions, a PE whose data does not satisfy the condition remains idle.
Characteristics:
 Synchronous execution: All PEs operate in lockstep (under the same
clock).
 Data parallelism: Same instruction applied on different data elements.
 Performance degradation: Occurs if conditional operations exist (some
PEs idle).
 Very efficient for vector operations and array-based computations.
Vector Addition Example:
for (i = 0; i < n; i++)
    x[i] += y[i];
 If system has n datapaths → all n additions are performed simultaneously.
 If system has m datapaths (m < n) → additions are performed in blocks of
m elements.
 In conditional operations like
if (y[i] > 0) x[i] += y[i];
→ datapaths whose y[i] is not positive remain idle → efficiency drops.
Applications:
 Matrix and vector operations.
 Image and signal processing.
 Scientific computations.
Modern Usage:
 Vector Processors:
o Use vector registers, pipelined functional units, and interleaved memory.
o Operate on arrays (vectors) instead of single elements.
o Efficient for regular, predictable memory access.
 Graphics Processing Units (GPUs):
o Use massive SIMD parallelism with hundreds of datapaths per core.
o Ideal for rendering images, graphics, and large-scale data-parallel workloads.
o Not purely SIMD, since modern GPUs also allow limited MIMD features.
Limitations:
 Poor handling of irregular data structures.
 Conditional execution leads to idle PEs.
 Limited flexibility compared to MIMD.

3. MISD (Multiple Instruction, Single Data)


Small Definition:
Multiple instruction streams operate on the same data stream.
Detailed Explanation:
 Multiple processing elements execute different instructions on the
same data.
 Very few practical uses because most applications require multiple data
streams.
Characteristics:
 Used mainly in special-purpose systems for fault tolerance.
 Redundancy: Same data processed using different algorithms to verify
correctness.
 Reliability is more important than speed.
Applications:
 Fault-tolerant systems (e.g., aerospace, nuclear control systems).
 Space Shuttle computers used a similar concept for safety.
Limitations:
 Rare in general-purpose computing.
 Expensive and inefficient for normal applications.

4. MIMD (Multiple Instruction, Multiple Data)


Small Definition:
Multiple processors execute different instructions on different data streams
simultaneously.
Detailed Explanation:
 Consists of multiple independent control units and processing
elements.
 Each PE has its own local memory (distributed memory) or can share
global memory (shared memory).
 Most flexible and widely used architecture in modern computing.
Characteristics:
 Asynchronous operation: Each processor can work independently at its
own speed.
 Supports both task parallelism (different tasks on different processors)
and data parallelism.
 Most modern parallel systems are based on MIMD.
Types of MIMD Systems:
1. Shared Memory Systems:
o All processors share access to the same global memory.
o Communication is implicit, done through shared variables.
o Two forms:
 UMA (Uniform Memory Access): equal access time for all processors.
 NUMA (Non-Uniform Memory Access): access time depends on the location of memory.
o Example: multicore CPUs.
2. Distributed Memory Systems:
o Each processor has its own private memory.
o Communication through message passing or special network functions.
o Examples: computer clusters, grids, supercomputers.
Applications:
 High-performance computing (HPC).
 Scientific simulations, weather modeling, AI/ML workloads.
 Server farms and cloud computing.
Limitations:
 Programming complexity (synchronization, communication).
 More expensive interconnection networks.

Comparison Table (Flynn’s Taxonomy)

Model | Instruction Streams | Data Streams | Examples | Characteristics
SISD | 1 | 1 | Traditional PC, uniprocessor | Sequential, no parallelism
SIMD | 1 | Many | Vector processors, GPUs | Data parallelism, synchronous, efficient for arrays
MISD | Many | 1 | Space shuttle computers | Rare, used for reliability and fault tolerance
MIMD | Many | Many | Multicore CPUs, clusters, supercomputers | Most common, asynchronous, supports task/data parallelism

✅ This version:
 Starts with a short definition for each category.
 Expands into a detailed explanation.
 Covers examples, characteristics, applications, and limitations.
 Matches closely with your provided material (especially SIMD with vector addition, performance issues, GPUs, vector processors; MIMD with UMA/NUMA and distributed systems).
Would you like me to now draw labeled diagrams for SISD, SIMD, MISD, MIMD (as in your textbook figures) so you can copy them directly in exams?
