Notes of Operating System [CT 656]

Memory Management

Memory Address, Swapping

The memory manager is the part of the OS that manages the memory hierarchy. It keeps track of which memory space is in use and which is free, allocates and deallocates memory to processes, and manages swapping between main memory and disk.


- Swapping is the mechanism in which a process can be swapped temporarily out of main memory to a backing store and brought back into memory for continued execution.
- It is needed when there is not enough main memory to hold all currently active processes in a time-sharing system.
- Context-switch time in a swapping system is relatively high.

Memory Management Schemes



- While processes are loaded into and removed from memory, the free space is broken into pieces, and sometimes no process can be allocated to a memory block because it is too small. This problem is called fragmentation.
- External Fragmentation:
Free holes exist but are not contiguous, so a process cannot reside in them.
- Internal Fragmentation:
Memory is broken into fixed-size blocks. Memory is allocated, but the allocated space is larger than the requested memory. The remaining allocated memory is wasted.

Managing Free Memory Space

With bitmaps

- Memory is divided into allocation units, and each allocation unit has a bit in the bitmap.
- If the bit is 0, the unit is free.
- A bitmap requires comparatively little memory to store.
- Searching a bitmap for a run of free units of a given length is a slow operation.
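A minimal Python sketch of bitmap-based free-space tracking (the function names are illustrative, not from the text). The linear scan in `find_run` is exactly why searching a bitmap for a run of a given length is slow:

```python
# Each bit tracks one fixed-size allocation unit: 0 = free, 1 = in use.

def find_run(bitmap, length):
    """Scan for a run of `length` free units; return start index or -1.
    This linear scan is why bitmap search is a slow operation."""
    run_start, run_len = 0, 0
    for i, bit in enumerate(bitmap):
        if bit == 0:
            if run_len == 0:
                run_start = i
            run_len += 1
            if run_len == length:
                return run_start
        else:
            run_len = 0
    return -1

def allocate(bitmap, length):
    """Mark a run of free units as used; return its start or -1."""
    start = find_run(bitmap, length)
    if start != -1:
        for i in range(start, start + length):
            bitmap[i] = 1
    return start

bitmap = [1, 1, 0, 0, 0, 1, 0, 0]   # units 2-4 and 6-7 are free
print(allocate(bitmap, 3))          # first run of 3 free units starts at 2
```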

With linked list

- A linked list of allocated and free memory segments is maintained.
- Each entry in the list specifies a hole or a process, the address at which it starts, its length, and a pointer to the next entry.
- Processes and holes are kept on a list sorted by address.
- The algorithms used to allocate memory are:
a) First fit
b) Next fit
c) Best fit
d) Worst fit
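Two of these strategies can be sketched over a simple sorted hole list, with each hole stored as a `(start, length)` pair (the representation is an illustration, not the full linked-list structure described above):

```python
# Illustrative hole-list allocation: holes are (start, length), sorted by address.

def first_fit(holes, size):
    """Allocate from the first hole large enough; return start or None."""
    for i, (start, length) in enumerate(holes):
        if length >= size:
            if length == size:
                del holes[i]                              # hole exactly consumed
            else:
                holes[i] = (start + size, length - size)  # shrink the hole
            return start
    return None

def best_fit(holes, size):
    """Allocate from the smallest hole that is still large enough."""
    candidates = [(length, i) for i, (_, length) in enumerate(holes) if length >= size]
    if not candidates:
        return None
    _, i = min(candidates)
    start, length = holes[i]
    if length == size:
        del holes[i]
    else:
        holes[i] = (start + size, length - size)
    return start

holes = [(0, 5), (10, 3), (20, 8)]
print(first_fit(holes, 3))   # 0: the first hole (0, 5) is big enough
print(best_fit(holes, 3))    # 10: hole (10, 3) fits exactly
```

Next fit works like first fit but resumes scanning from where the previous search stopped; worst fit always picks the largest hole.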



Resident Monitor (Monoprogramming)

- In this scheme, only one program is allowed to run at a time, with memory shared between the program and the OS.
- When the user gives a command, the OS copies the requested program from disk to memory and executes it. When the program completes, the OS displays a prompt character and waits for the next command. When the next command is given, it loads the new program into memory, overwriting the previous one.


Multiprogramming with Fixed Partition

- Multiprogramming increases CPU utilization by keeping multiple processes in memory at once, so that when one process is blocked, another process can use the CPU.
- It can be achieved by dividing memory into 'n' partitions.
- When a process requests memory, it can be put into the input queue for the smallest partition large enough to hold it.
- Any space in a partition not used by its process is wasted.
- It can also be done by maintaining a single input queue for all partitions.
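The per-partition queueing rule above can be sketched as follows (the partition sizes and job names are made up for illustration):

```python
# Fixed-partition placement: a job joins the input queue of the smallest
# partition large enough to hold it.

partitions = [100, 200, 400, 800]            # fixed partition sizes
queues = {size: [] for size in partitions}   # one input queue per partition

def enqueue(job, size):
    """Queue `job` (needing `size` units) at the smallest adequate partition."""
    fits = [p for p in partitions if p >= size]
    if not fits:
        raise ValueError("job too large for any partition")
    best = min(fits)                 # smallest partition large enough
    queues[best].append(job)
    return best

print(enqueue("A", 180))   # 200
print(enqueue("B", 350))   # 400
```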


Multiprogramming with Variable Partition

- Memory is partitioned dynamically as processes come and go.
- The number, location and size of partitions vary.
- Swapping can create multiple holes in memory; they can be combined into one big hole by moving all processes downward. This is called memory compaction (coalescing adjacent holes).
- Memory compaction requires a lot of CPU time.
- It is inappropriate when a process can grow.
- This can be resolved by allocating some extra memory for growth as well.

Problems with Multiprogramming:

1) Relocation:
It is the method of shifting a user program from one memory location to another. To achieve relocation, the CPU can be equipped with a base register loaded with the address of the start of its partition and a limit register loaded with the length of the partition.

2) Protection:
When absolute memory addresses are used, a malicious program can read or write any word in memory. Base and limit registers confine each program to its own partition.

Multiple Base Register

- Some processors provide multiple base registers.
- Each base register is used for a different kind of memory access.
- E.g. one base register for data access and another for program access.

Virtual Memory Management - Paging, Segmentation and Paged Segmentation

Virtual Memory

- Virtual memory is a technique to execute a program whose instructions may not fit entirely in main memory.
- The combined size of program, data and stack may exceed the physical memory available.
- The OS keeps the currently used parts of the program in memory and the rest on disk.
- Virtual memory can be implemented using paging and segmentation.
- The address referenced by a running process is a virtual address.
- The address available in primary memory is a real (physical) address.
- Virtual addresses must be mapped to real addresses as a process executes.


- The program-generated addresses are virtual addresses.
- A virtual address goes to the MMU, which maps virtual addresses onto physical memory addresses.
- The virtual address space is divided into units called pages.
- The corresponding units of physical memory are page frames.
- Pages and page frames are the same size.
- Transfers between memory and disk are always in units of pages.

Example of Paging:

Consider a computer that generates 16-bit addresses (a 64 KB virtual address space) with only 32 KB of physical memory. Assume a page size of 4 KB.
It means:
No. of virtual pages = 64 KB / 4 KB = 16
No. of page frames = 32 KB / 4 KB = 8
Assume the mapping as shown in the figure below:


From the above figure,
the virtual address 8192 corresponds to virtual page 2, which according to the mapping is on page frame 6.
So, the physical address is 6 × 4096 = 24576.
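The translation arithmetic of the example can be replayed in a few lines of Python. Only the page-2 to frame-6 mapping comes from the text; the rest of the page table is left out:

```python
# Worked translation for the example above (page size 4 KB = 4096 bytes).

PAGE_SIZE = 4096
page_table = {2: 6}                # virtual page 2 maps to page frame 6

def translate(vaddr):
    page = vaddr // PAGE_SIZE      # high-order bits: virtual page number
    offset = vaddr % PAGE_SIZE     # low-order bits: page offset
    frame = page_table[page]
    return frame * PAGE_SIZE + offset

print(translate(8192))   # page 2, offset 0 -> frame 6 -> 6 * 4096 = 24576
```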

Page fault

- When a program tries to use an unmapped page, the MMU notices that the page is unmapped and causes the CPU to trap to the OS. This trap is called a page fault.
- The OS picks a little-used page frame and writes its contents back to disk. It then fetches the page just referenced into that page frame, changes the map and restarts the trapped instruction.

Working of MMU

- For a logical address space of size 2^m and a page size of 2^n:
a) The high-order m − n bits designate the page number.
b) The low-order n bits designate the page offset.
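Using the earlier 16-bit / 4 KB figures (m = 16, n = 12), the split is a shift and a mask:

```python
# Splitting a virtual address into page number and offset, as the MMU does.

m, n = 16, 12   # 16-bit addresses, 4 KB (2^12) pages, as in the example above

def split(vaddr):
    assert 0 <= vaddr < (1 << m)     # address must fit in m bits
    page = vaddr >> n                # high-order m - n bits: page number
    offset = vaddr & ((1 << n) - 1)  # low-order n bits: page offset
    return page, offset

print(split(8192))   # (2, 0): address 8192 is page 2, offset 0
```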


Page Table

- A page table is a function that takes a virtual page number as argument and yields a physical frame number as result.
- It maps virtual pages onto page frames.
- Each page table entry typically has: a page frame number, a present/absent bit, protection bits, a modified (dirty) bit, a referenced bit, and a caching-disabled bit.


Disadvantages of Paging

a) The mapping may not be fast, as it must be done for every memory reference.
b) With a smaller page size, the page table can be extremely large.
c) With a larger page size, there is a possibility of internal fragmentation.

Speeding up paging

1. Multilevel page table
- It avoids keeping all the page tables in memory all the time.

2. Translation lookaside buffer
- The TLB is a device within the MMU that contains a small number of page table entries.
- Each entry holds information about one page.
- When a virtual address is presented to the MMU, it first checks whether the virtual page is in the TLB by comparing it to all entries simultaneously.
- If there is a match, the page frame is taken directly from the TLB.
- If not found, the MMU performs an ordinary page table lookup. It then evicts one entry from the TLB and replaces it with the one just looked up.
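A toy software model of this lookup path is sketched below. FIFO eviction is an assumption for the sketch, and a real TLB compares all entries in parallel in hardware rather than with a dictionary probe:

```python
from collections import OrderedDict

class TLB:
    """Tiny TLB model sitting in front of a page table."""
    def __init__(self, capacity, page_table):
        self.capacity = capacity
        self.page_table = page_table
        self.entries = OrderedDict()   # virtual page -> frame, oldest first
        self.hits = self.misses = 0

    def lookup(self, page):
        if page in self.entries:       # hardware checks all entries at once
            self.hits += 1
            return self.entries[page]
        self.misses += 1
        frame = self.page_table[page]  # fall back to the full page table
        if len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)   # evict one (oldest) entry
        self.entries[page] = frame
        return frame

tlb = TLB(2, page_table={0: 3, 1: 7, 2: 6})
for p in (2, 2, 0, 1, 2):
    tlb.lookup(p)
print(tlb.hits, tlb.misses)   # 1 4: only the repeated lookup of page 2 hits
```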


Segmentation

- A segment is a logical grouping of instructions or data, such as a subroutine, array or method.
- Segmentation is a memory management technique in which memory is divided into variable-sized logical pieces.
- Each piece is a collection of related information.
- Segmentation eliminates internal fragmentation.
- The address generated by the CPU contains a segment number and a segment offset.

Mapping process

- The MMU maintains a segment table.
- Each entry of the segment table contains base and limit columns.
- The virtual address consists of a segment number and an offset.
- The segment number is used as an index into the segment table.
- If the offset is within the limit, the offset is added to the base of the segment to produce the physical address.
- It allows segments to grow.
- It supports dynamic linking and loading of segments.
- Dynamic growing is expensive.
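The base/limit check described above can be sketched in a few lines (the table values are made up for illustration):

```python
# Segment-table translation: index by segment number, check offset
# against the limit, then add the base.

segment_table = [
    (1000, 400),   # segment 0: base 1000, limit 400
    (5000, 112),   # segment 1: base 5000, limit 112
]

def translate_segment(seg, offset):
    base, limit = segment_table[seg]
    if offset >= limit:
        raise MemoryError("segmentation fault: offset outside segment")
    return base + offset    # offset within limit: add it to the base

print(translate_segment(0, 53))   # 1000 + 53 = 1053
```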

Paged segmentation

- Memory is divided into variable-sized logical segments.
- Each segment is paged.
- Each segment has its own page table.
- The logical address contains a segment number, a page number and an offset.
- It avoids external fragmentation and supports the user's view of memory.

Demand Paging

- It is a scheme in which a page is not loaded into main memory from disk until it is needed.
- Pages are loaded only on demand.
- A pager is used, which handles the swapping of individual pages.
- Access to a page marked invalid causes a page fault.


- If there is no page fault, effective access time = memory access time (m).
- If page faults occur with probability p, effective access time = (1 - p) * m + p * page fault time.
- Page fault time is the time required to service a page fault.
- In pure demand paging, not a single page is loaded into memory initially, causing a page fault on the very first instruction.
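The effective access time formula is worth evaluating with concrete numbers (the timings below are illustrative, not from the text) to see how strongly even a tiny fault rate dominates:

```python
# Effective access time under demand paging: (1 - p) * m + p * fault_time.

def effective_access_time(p, mem_access_ns, fault_time_ns):
    """p = page fault probability, all times in nanoseconds."""
    return (1 - p) * mem_access_ns + p * fault_time_ns

# With a ~200 ns memory access and a ~8 ms fault service time, one fault
# per 1000 accesses already pushes the average to roughly 8200 ns:
print(effective_access_time(0.001, 200, 8_000_000))   # ~8199.8 ns
```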

Page Replacement Algorithm

- A page replacement algorithm determines how to select a page to replace when a page fault occurs.
- Page replacement is done by swapping the required page from the backing store into main memory and a victim page back out.


1) Find location of desired page on disk.
2) Find a free frame:
a. If one is present, use it.
b. If absent, use a replacement algorithm to select a victim frame.
3) Read the desired page into the newly freed frame.
4) Update the page and frame tables.
5) Restart the process.

First In First Out (FIFO)
- When memory is full, the oldest page in main memory is selected for replacement.
- It is easy to implement using a list.

Optimal Page Replacement
- It has the lowest fault rate.
- Replace the page that will not be used for the longest period of time.
- It is difficult to know the future of the reference string.
- It is unrealizable in practice.

Second Chance
- Searches for an old page that has not been referenced.
- A reference bit R is used.
- Examine the R bit of the oldest page.
- If 0, select it for replacement.
- If 1, set it to 0, move the page to the tail of the FIFO queue, and continue searching.

Clock page replacement
- Arrange pages on a circular linked list with a hand pointing to the oldest page.
- Replacement is the same as second chance, but no pages need to be moved on the list.
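The second-chance victim selection described above can be sketched over a queue of `(page, R)` pairs, oldest first:

```python
# Second-chance victim selection: FIFO plus a reference bit R.
from collections import deque

def second_chance_victim(queue):
    """queue holds (page, R) pairs, oldest first; returns the victim page,
    giving referenced pages a second chance at the tail of the queue."""
    while True:
        page, r = queue.popleft()        # examine the oldest page
        if r == 0:
            return page                  # not recently referenced: evict it
        queue.append((page, 0))          # clear R and move to the tail

q = deque([(1, 1), (2, 0), (3, 1)])
print(second_chance_victim(q))   # 2: page 1 is referenced, so it is spared
```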

Least recently used (LRU)
- The page that has not been used for the longest time in main memory is selected for replacement.

Not recently used (NRU)
- Replace a page that has not been accessed in the recent past.
- It uses a reference bit and a modified bit.

Least frequently used (LFU)
- Replace the page with the minimum frequency count of all pages in memory.

Q) Consider the page reference string 1,2,3,4,2,1,5,6,2,1,2,3,7,6,3,2,1,2,3,6. How many page faults would occur if we have five frames?
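One way to answer the question (and to check any hand-worked trace) is to simulate the algorithms directly. The sketch below runs FIFO, plus LRU for comparison, over the given reference string with five frames:

```python
from collections import OrderedDict, deque

REFS = [1, 2, 3, 4, 2, 1, 5, 6, 2, 1, 2, 3, 7, 6, 3, 2, 1, 2, 3, 6]

def fifo_faults(refs, n_frames):
    """Count page faults under FIFO replacement."""
    frames, queue, faults = set(), deque(), 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == n_frames:
                frames.remove(queue.popleft())   # evict the oldest page
            frames.add(page)
            queue.append(page)
    return faults

def lru_faults(refs, n_frames):
    """Count page faults under LRU replacement."""
    frames, faults = OrderedDict(), 0            # keys ordered LRU -> MRU
    for page in refs:
        if page in frames:
            frames.move_to_end(page)             # page becomes most recent
        else:
            faults += 1
            if len(frames) == n_frames:
                frames.popitem(last=False)       # evict least recently used
            frames[page] = True
    return faults

print(fifo_faults(REFS, 5))   # 10 page faults
print(lru_faults(REFS, 5))    # 8 page faults
```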




