
Notes of Microprocessor [EX 551]

Advanced Topics


Multiprocessing System

- Multiprocessing is a form of parallelism in which multiple CPUs share common resources such as memory and storage devices.
- The characteristics are as follows:
a) It contains more than one general-purpose processor.
b) All processors share access to common memory.
c) All processors share access to I/O devices through the same channels.
d) The system is controlled by an integrated operating system.

Organization of Multiprocessing System:

1. Time Shared or Common Bus:
- A number of CPUs, I/O modules and memory modules are connected to the same bus.
- While one module is controlling the bus, the others are locked out.
- Simple, flexible and reliable approach (failure of one module does not affect the whole system).
- The speed of the system is limited by the shared bus.

2. Multiport Memory:
- Each processor and I/O module has a dedicated path to each memory module.
- It gives high performance but is complex.
- Security can be increased, since a portion of memory can be configured as private to one or more CPUs.

3. Central Control Unit:
- Manages transfers of data between otherwise independent modules.
- It can pass status and control messages between CPUs.

Real and Pseudo Parallelism

Traditionally, problems are solved by serial computation: a single CPU executes instructions one after another. Over time, the idea of using multiple computing resources to solve a problem simultaneously arose; this is known as parallelism. Multiple CPUs are used, and a problem is divided into parts that can be solved concurrently.

Real parallelism uses physically separate devices, each carrying out its operations in parallel with the others. Multi-core processing is an example of real parallelism: the cores are physically distinct and perform their operations in parallel.

Pseudo parallelism uses a single device to give the appearance of parallel operation. Concurrent processing based on time-division (time-slicing) is an example of pseudo parallelism.
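The contrast can be sketched in Python (an assumed illustration, not from the notes): separate processes may run on physically distinct cores (real parallelism), while CPython threads share one interpreter and are time-sliced for CPU-bound work (pseudo parallelism).

```python
# Real parallelism via processes vs pseudo parallelism via threads.
from multiprocessing import Pool
from threading import Thread

def square(n):
    return n * n

def real_parallel(nums):
    # Real parallelism: worker processes can run on physically distinct cores.
    with Pool(processes=2) as pool:
        return pool.map(square, nums)

def pseudo_parallel(nums):
    # Pseudo parallelism: CPython threads share one interpreter, so CPU-bound
    # work is time-sliced rather than truly simultaneous.
    results = [None] * len(nums)

    def worker(i, n):
        results[i] = square(n)

    threads = [Thread(target=worker, args=(i, n)) for i, n in enumerate(nums)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

if __name__ == "__main__":
    print(real_parallel([1, 2, 3]))    # [1, 4, 9]
    print(pseudo_parallel([1, 2, 3]))  # [1, 4, 9]
```

Both give the same answer; the difference lies only in whether the work can proceed truly simultaneously on separate cores.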

Flynn's Classification

Flynn's classification divides multiprocessor systems along two independent dimensions: the instruction stream and the data stream.

1. Single Instruction Single Data (SISD)
- Serial computer.
- Only one instruction is executed by the CPU in any one clock cycle.
- Only one data stream is used as input.
- Eg: PCs, older generation mainframes, etc.

2. Single Instruction Multiple Data (SIMD)
- All processing units execute the same instruction at a given clock cycle.
- Each processing unit can operate on a different data element.
- Eg: modern computers with a Graphics Processing Unit (GPU).

3. Multiple Instruction Single Data (MISD)
- Each processor executes its own instruction stream independently.
- A single data stream is fed to all processors.
- Eg: multiple cryptographic algorithms attempting to crack a single coded message.

4. Multiple Instruction Multiple Data (MIMD)
- Every processor has a different instruction stream.
- Every processor operates on a different data stream.
- Eg: modern supercomputers, multi-processor SMP computers, etc.
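MIMD behaviour can be sketched with Python's `multiprocessing` module (an assumed illustration): each process runs a different function (its own instruction stream) on different input (its own data stream).

```python
# MIMD sketch: two processes, each with its own instruction stream
# (different function) and its own data stream (different input list).
from multiprocessing import Process, Queue

def summer(data, q):      # instruction stream 1
    q.put(("sum", sum(data)))

def maxer(data, q):       # instruction stream 2
    q.put(("max", max(data)))

def run_mimd():
    q = Queue()
    p1 = Process(target=summer, args=([1, 2, 3], q))   # data stream 1
    p2 = Process(target=maxer, args=([7, 4, 9], q))    # data stream 2
    p1.start(); p2.start()
    results = dict(q.get() for _ in range(2))  # drain before joining
    p1.join(); p2.join()
    return results

if __name__ == "__main__":
    print(run_mimd())  # {'sum': 6, 'max': 9} (arrival order may vary)
```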

Instruction, Thread and Process Level Parallelism

Instruction Level Parallelism (ILP)

- Measure of how many operations can be performed simultaneously in a program.
- Proper use of ILP can reduce the execution time of a program.
- Existence of ILP is application specific.
- Eg: Consider program:
1. e = a + b
2. c = d + f
3. g = e – c
Here, operation 3 depends on the results of operations 1 and 2, but operations 1 and 2 are independent and can be executed simultaneously. Assuming each instruction completes in one unit of time, the three instructions finish in two time units, so the program has an ILP of 3/2.
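The same dependency structure can be mimicked in Python (a sketch; real ILP is exploited by the hardware, not by application code): the two independent operations are submitted concurrently, and the third waits on both.

```python
# Operations 1 and 2 are independent, so they may be issued concurrently;
# operation 3 must wait for both of their results.
from concurrent.futures import ThreadPoolExecutor

def ilp_example(a, b, d, f):
    with ThreadPoolExecutor(max_workers=2) as ex:
        future_e = ex.submit(lambda: a + b)  # operation 1: e = a + b
        future_c = ex.submit(lambda: d + f)  # operation 2: c = d + f
        e = future_e.result()
        c = future_c.result()
    return e - c                             # operation 3: g = e - c

print(ilp_example(1, 2, 3, 4))  # (1 + 2) - (3 + 4) = -4
```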

Thread Level Parallelism

- A single program may consist of many threads or functions that can be executed independently and simultaneously, giving rise to thread-level parallelism.
- Emphasizes the distributed nature of threads.
- Consider a pseudocode:
If CPU = “a” then
Do task “A”
Else if CPU = “b” then
Do task “B”
End if
When the program is launched on a two-processor system, tasks A and B are performed simultaneously by processors a and b respectively.
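A runnable version of the pseudocode can be sketched with Python threads standing in for the two CPUs (the "tasks" here simply record that they ran):

```python
# Thread-level parallelism sketch: each thread carries out its own task
# independently, like tasks A and B on processors a and b above.
import threading

log = []
log_lock = threading.Lock()

def task(name):
    with log_lock:           # serialize appends to the shared list
        log.append(f"task {name} done")

thread_a = threading.Thread(target=task, args=("A",))  # "processor a"
thread_b = threading.Thread(target=task, args=("B",))  # "processor b"
thread_a.start(); thread_b.start()
thread_a.join(); thread_b.join()
print(sorted(log))  # ['task A done', 'task B done']
```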

Data Level Parallelism

- Distributing data across different computing nodes to be processed in parallel.
- Common in program loops.
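A loop exhibiting data-level parallelism can be sketched with a process pool (an assumed illustration): the loop body is the same everywhere, and only the data is distributed across workers.

```python
# Data-level parallelism sketch: the same loop body runs on every element,
# with the data split across worker processes.
from multiprocessing import Pool

def body(x):          # the loop body applied to every element
    return x * x

def parallel_loop(data, workers=2):
    with Pool(processes=workers) as pool:
        return pool.map(body, data)  # each worker handles a slice of the data

if __name__ == "__main__":
    print(parallel_loop(range(5)))  # [0, 1, 4, 9, 16]
```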

Process Level Parallelism

- Use of multiple CPUs in a single computer, each able to run its own process.

Interprocess Communication, Resource Allocation and Deadlock

- IPC is a set of methods for exchanging data among multiple threads in one or more processes.
- IPC methods are divided into message passing, synchronization, shared memory and remote procedure call (RPC).
- The choice of IPC method depends on bandwidth, the type of data being exchanged, and so on.
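Message passing, one of the IPC methods listed above, can be sketched with a pipe between a parent and a child process (an assumed illustration using Python's `multiprocessing.Pipe`):

```python
# IPC by message passing: parent and child exchange data through a pipe.
from multiprocessing import Process, Pipe

def child(conn):
    msg = conn.recv()        # wait for a message from the parent
    conn.send(msg.upper())   # reply over the same pipe
    conn.close()

def demo_ipc():
    parent_end, child_end = Pipe()
    p = Process(target=child, args=(child_end,))
    p.start()
    parent_end.send("hello")
    reply = parent_end.recv()
    p.join()
    return reply

if __name__ == "__main__":
    print(demo_ipc())  # HELLO
```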


- A process requests resources and may enter a wait state if the resources are not available.
- A waiting process may never change state again, because the requested resources are held by other waiting processes in a cyclic manner.
- This situation is called deadlock.
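The cyclic-wait condition can be sketched with two locks (an assumed illustration): if each worker grabbed the locks in opposite orders they could deadlock, so both follow one global lock order, a standard avoidance strategy that breaks the cycle.

```python
# Deadlock avoidance by resource ordering: both workers need both locks,
# and both acquire them in the same fixed order (lock_a, then lock_b),
# so no circular wait can form.
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()
finished = []

def worker(name):
    # Acquiring in opposite orders in the two workers could deadlock.
    with lock_a:
        with lock_b:
            finished.append(name)

t1 = threading.Thread(target=worker, args=("t1",))
t2 = threading.Thread(target=worker, args=("t2",))
t1.start(); t2.start()
t1.join(); t2.join()
print(sorted(finished))  # ['t1', 't2']
```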

Features of Typical Operating System

1. Process Management
- Manages processes at the hardware level and the user level.
- Handles scheduling, process synchronization and deadlock strategy.
- Allows multiple processes to exist at a time.
- On a single CPU, only one process can execute at a time while others perform I/O or wait.

2. File Management
- Manages the creation, deletion and copying of files.
- The file manager also provides network connectivity.

3. Memory Management
- Allocates processes to main memory and minimizes access time.
- Handles relocation, protection and sharing.

4. Device Management
- Controls devices by enabling, disabling or ignoring them.
- Manages device configuration, inventory collection and software distribution.

5. Resource Management
- Creates, manages and allocates resources.
- Handles resource allocation and deallocation.

RISC and CISC Architecture

RISC Architecture:

- Reduced Instruction Set Computer
- Multiple register sets
- Pipelining
- 3 register operands
- Complexity lies in the compiler
- Simple and few instructions
- Hardwired control
- Fixed-length instructions
- Single machine cycle per instruction

CISC Architecture:

- Complex Instruction Set Computer
- Single register set
- No pipelining
- 2 register operands
- Complexity lies in the microcode
- Many complex instructions
- Micro-programmed control
- Variable-length instructions
- Multiple machine cycles per instruction
