Memory Management, Virtual Memory, File Systems, Protection & Security

By Mona Kumari|Updated : June 27th, 2021

Memory Management

MEMORY MANAGEMENT TECHNIQUES

Memory Partition Techniques are of two types-

I. Contiguous Partition Scheme

II. Non-Contiguous Partition Scheme

1.1.     Contiguous Partition Scheme

In contiguous memory allocation, when a process arrives from the ready queue to main memory for execution, contiguous memory blocks are allocated to it according to its requirement. To allocate contiguous space to user processes, memory can be divided into either fixed-sized or variable-sized partitions.

Contiguous memory partitioning is of the following types-

  1. Fixed-partition scheme
  2. Variable-partition scheme

1.1.1 Fixed-partition scheme

  • Memory is divided into fixed-sized partitions.
  • A process is assigned to a partition when one is free.
  • Two queueing mechanisms are possible: a separate input queue for each partition, or a single input queue, which gives a better ability to optimize CPU usage.


Advantages:

  • Simple to implement
  • Little OS overhead

Disadvantages:

  • Internal fragmentation causes inefficient use of memory: every program occupies an entire partition regardless of its size, so main memory is utilized in an extremely inefficient manner. The mechanism in which space is wasted internally to a partition, because the block of data loaded is smaller than the partition, is known as internal fragmentation.

Internal fragmentation-

A larger memory block is assigned to a process, and some portion of that block is left unused; it cannot be used by any other process.
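As a quick illustration (with hypothetical partition and process sizes), internal fragmentation in a fixed partition is simply the gap between the partition size and the process size:

```python
# Internal fragmentation in a fixed-partition scheme: each process
# occupies a whole partition, so the leftover space inside the
# partition is wasted and unusable by any other process.
def internal_fragmentation(partition_size, process_size):
    """Unused units inside one allocated fixed partition."""
    if process_size > partition_size:
        raise ValueError("process does not fit in the partition")
    return partition_size - process_size

# A 4096 KB partition holding a 3072 KB process wastes 1024 KB internally.
print(internal_fragmentation(4096, 3072))  # 1024
```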

 


 

1.1.2. Variable-partition scheme

  • In the variable-partition scheme, memory initially is a single continuous free block.
  • Whenever a process request arrives, a partition of the required size is made in the memory.
  • If smaller processes keep arriving, larger partitions keep being split into smaller ones.


An example using 64 MB of main memory is shown in the figure.

Eventually this leads to a situation in which there are many small holes and partitions in the memory. Memory becomes more and more fragmented and underutilized with time. This is known as external fragmentation: the memory external to all the partitions becomes increasingly fragmented.

External fragmentation

External fragmentation occurs when free memory is divided into small blocks scattered between allocated regions. It is a disadvantage of storage allocation algorithms that fail to keep memory ordered and efficiently usable by programs.

Comparison between Internal & External Fragmentations-

Internal Fragmentation:

  • The difference between the memory allocated and the required memory is called internal fragmentation.
  • It occurs when main memory is divided into fixed-size blocks regardless of the size of the process.
  • It refers to the unused space in the partition which resides within an allocated region, hence the name.
  • It can be eliminated by allocating memory to processes dynamically.

External Fragmentation:

  • The unused spaces formed between non-contiguous memory fragments are too small to serve a new process request; this is called external fragmentation.
  • It occurs when memory is allocated to processes dynamically based on process requests.
  • It refers to the unused memory blocks that are too small to handle a request.
  • It can be eliminated by compaction, segmentation and paging.

 

1.2.     Compaction

Compaction is the mechanism of collecting free memory chunks together so that they form one larger chunk usable by processes. In memory management, swapping creates multiple fragments in the memory as processes move in and out. Compaction simply combines all the empty spaces into one large free space that can then be used for processes.
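The idea can be sketched as follows; the layout and block sizes are hypothetical, and memory is modelled simply as a list of (owner, size) pairs:

```python
# Compaction sketch: slide all allocated blocks to the low end of
# memory so that the free holes coalesce into one large free block.
def compact(blocks):
    """blocks: list of (owner, size); owner is None for a hole.
    Returns the compacted layout: allocated blocks first, then one hole."""
    allocated = [(o, s) for o, s in blocks if o is not None]
    free = sum(s for o, s in blocks if o is None)
    return allocated + ([(None, free)] if free else [])

layout = [("A", 8), (None, 6), ("B", 14), (None, 22), ("C", 6)]
# The two holes (6 + 22 units) merge into a single 28-unit free block.
print(compact(layout))  # [('A', 8), ('B', 14), ('C', 6), (None, 28)]
```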


 

2.     ALLOCATION

Some popular algorithms used for allocating the memory partitions to the processes are as follows-

  1. First Fit Algorithm
  2. Best Fit Algorithm
  3. Worst Fit Algorithm

2.1.     First Fit Algorithm

  • The first fit algorithm starts scanning from the beginning of memory and checks the partitions serially.
  • The first partition that is large enough to hold the process is allocated to it.
  • The partition size must be greater than or at least equal to the process size.

2.2.     Best Fit Algorithm

  • The best fit algorithm scans all the empty partitions.
  • It then allocates the process to the smallest partition that is still large enough to hold it.

2.3.     Worst Fit Algorithm

  • The worst fit algorithm also scans all the empty partitions.
  • It then allocates the process to the largest partition.
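The three algorithms above can be sketched over a list of free-partition sizes (the hole sizes here are hypothetical):

```python
# First, best and worst fit over a list of free-partition sizes.
# Each returns the index of the chosen partition, or None if no fit.
def first_fit(parts, size):
    # First partition large enough, scanning serially from the start.
    return next((i for i, p in enumerate(parts) if p >= size), None)

def best_fit(parts, size):
    # Smallest partition that is still large enough.
    fits = [(p, i) for i, p in enumerate(parts) if p >= size]
    return min(fits)[1] if fits else None

def worst_fit(parts, size):
    # Largest partition of all.
    fits = [(p, i) for i, p in enumerate(parts) if p >= size]
    return max(fits)[1] if fits else None

parts = [100, 500, 200, 300, 600]   # hypothetical hole sizes (KB)
print(first_fit(parts, 212))  # 1 -> 500 KB is the first hole that fits
print(best_fit(parts, 212))   # 3 -> 300 KB is the smallest adequate hole
print(worst_fit(parts, 212))  # 4 -> 600 KB is the largest hole
```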
3.     NEED OF PAGING

The main disadvantage of variable-sized partitioning is that it causes external fragmentation. Although external fragmentation can be removed by compaction, compaction itself makes the system inefficient.

That is why the concept of paging was introduced. Paging is a dynamic and flexible technique that can load processes into memory partitions in a more optimal manner. The basic idea behind paging is to divide the process into pages, so that these pages can be stored in different holes (partitions) in memory and those holes can be used efficiently.

3.1.     Page

A page, virtual page, or memory page is a fixed-length contiguous block of virtual memory, described by a single entry in the page table. The page is the smallest unit of data for memory management in a virtual memory system.

4.     PAGING: NON-CONTIGUOUS PARTITION SCHEME

4.1.     Address Space

An address space is defined as a range of valid addresses available in the memory for a program or process. It is the memory space accessible to a program or process. The memory can be physical or virtual and is used for storing data and executing instructions.

4.2.     Physical Address Space

Physical address space is defined as the size of the main memory. It matters when comparing the process size with the physical address space: the process size should always be less than the physical address space.

4.3       Logical Address Space

Logical address space is the size of the process. The process must be small enough to reside in the main memory.
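To make the logical-to-physical mapping concrete, here is a minimal sketch of paging address translation; the page size and page-table contents are hypothetical:

```python
# Paging address translation: split a logical address into a page
# number and an offset, then look the page up in a page table that
# maps page numbers to frame numbers.
PAGE_SIZE = 1024  # bytes (hypothetical)

page_table = {0: 5, 1: 2, 2: 7}  # page number -> frame number

def translate(logical_addr):
    page, offset = divmod(logical_addr, PAGE_SIZE)
    frame = page_table[page]      # raises KeyError for an unmapped page
    return frame * PAGE_SIZE + offset

# page 1, offset 26 -> frame 2 -> 2*1024 + 26 = 2074
print(translate(1050))  # 2074
```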

TYPES OF PAGE TABLE (PT) 

  • Single Level PT
  • Multi-level PT
  • Inverted PT

Drawbacks of Paging Mechanism

  • For a larger process, the page table itself will be very large and will waste main memory.
  • The CPU takes more time to read a single word from main memory, because it must first consult the page table.

How to decrease the page table size?

  • The page table size can be reduced by increasing the page size, but this causes internal fragmentation and page wastage.
  • Another method is multilevel paging, but this increases the effective access time, so it is not a practical method.

How to decrease the effective access time?

  • The CPU could use a register with the page table stored in it so that access time becomes small, but registers are costly and very small compared to the size of a page table, so this is also not practical.
  • To overcome these drawbacks of paging, we need a memory that is cheaper than a register and faster than main memory, so that the CPU does not waste time repeatedly accessing the page table and can focus on accessing the actual word.
5.     Locality of Reference

The concept of locality of reference says that, instead of loading the entire process into main memory, the OS can load only those pages that are frequently accessed by the CPU, and along with them, only the page table entries corresponding to those pages.

6.     Translation Lookaside Buffer (TLB)

  • A translation lookaside buffer (TLB) is a hardware memory cache used to reduce the page table access time when the page table is accessed again and again.
  • The TLB is a memory cache that sits nearer to the CPU, so the CPU takes less time to access the TLB than to access main memory.

In other words, the TLB is smaller and faster than the main memory, while at the same time cheaper and bigger than a CPU register.

TLB uses the concept of locality of reference which means that it contains the entries of only those pages that are frequently accessed by the CPU.

In the TLB, address mapping is done using keys and tags.

  • A TLB hit is the situation in which the desired entry is found in the TLB. On a hit, the CPU can simply access the actual address in main memory.
  • If the entry is not in the TLB, it is a TLB miss. In this situation the CPU has to access the page table in main memory first, and only then access the actual frame in main memory.

So, the effective memory access time will be less in the case of TLB hit as compared to the case of TLB miss.

If the probability of TLB hit is x% (TLB hit rate) then the probability of TLB miss (TLB miss rate) will be (1-x) %.

Therefore, the effective memory access time can be formulated as:

EMAT = x (c + m) + (1 - x) (c + k·m + m)

Where,

x → TLB hit rate,

c → time taken to access the TLB,

m → time taken to access the main memory,

k → number of page table levels (k = 1 for single-level paging).

With the help of the above formula, we can see that

  • if the TLB hit rate increases, EMAT decreases.
  • in the case of multilevel paging, the effective access time increases.
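A small worked example of the EMAT formula (the timing figures are hypothetical):

```python
# Effective memory access time (EMAT) with a TLB:
# EMAT = x*(c + m) + (1 - x)*(c + k*m + m)
def emat(x, c, m, k=1):
    """x: TLB hit rate (0..1), c: TLB access time,
    m: main memory access time, k: page table levels."""
    return x * (c + m) + (1 - x) * (c + k * m + m)

# Hypothetical timings: 80% hit rate, 10 ns TLB, 100 ns main memory.
# 0.8*(10 + 100) + 0.2*(10 + 100 + 100) = 88 + 42 = 130 ns
print(emat(0.8, 10, 100))
```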
7.     VIRTUAL MEMORY

Virtual memory is a memory space where large programs can be stored in the form of pages during their execution, with only the important pages or portions of processes loaded into the main memory. It is a very useful technique because a large virtual memory is provided for user programs even when the system has a very small physical memory.

Advantages of Virtual Memory

  • The degree of multiprogramming is increased.
  • Large programs can be run, as the virtual address space is huge compared to physical memory.
  • There is no need to purchase more RAM.
  • More physical memory is available, since programs are stored in virtual memory and occupy less space in actual physical memory.
  • Less I/O is required, which leads to faster and easier swapping of processes.

Disadvantages of Virtual Memory

  • The system becomes slower, as page swapping takes time.
  • Switching between different applications requires more time.
  • Hard disk space is consumed for swap, so users cannot use all of it.


8.     Demand Paging

According to the concept discussed under virtual memory, in order to execute a process, only a part of it needs to be present in the main memory; that is, only a few of its pages reside in the main memory at any time.

However, deciding which pages to keep in the main memory and which in the secondary memory is difficult, because we cannot predict which particular page a process will require at a particular time.

To overcome this problem, the concept of demand paging is introduced. Demand paging says: keep all pages of a process in the secondary memory until they are required. In other words, do not load any page into the main memory unless and until it is required.

Whenever a page is referenced for the first time, it is found in the secondary memory and brought into the main memory.

9.     WHAT IS A PAGE FAULT?

If a referenced page is not found in the main memory, there is a page miss; this situation is known as a page fault.

When a page fault occurs, the CPU has to fetch the missed page from the secondary memory. If the number of page faults is very high, the effective memory access time of the system also increases.

If the page fault rate is P (as a fraction), the time taken to get the page from the secondary memory and restart the access (the service time) is S, and the memory access time is m, then the effective access time is given by:

EAT = P × S + (1 - P) × m
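A worked example of this formula (with hypothetical timings, all in nanoseconds):

```python
# Effective access time with page faults: EAT = P*S + (1 - P)*m,
# where P is the page-fault rate expressed as a fraction.
def eat(p, service_time, mem_time):
    return p * service_time + (1 - p) * mem_time

# Hypothetical figures (in ns): fault rate 0.1%, service time 8 ms,
# memory access 100 ns -> 0.001*8_000_000 + 0.999*100 ≈ 8099.9 ns
print(eat(0.001, 8_000_000, 100))
```

Even a tiny fault rate dominates the effective access time, because the service time is millions of times larger than a memory access.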

10.     PAGE REPLACEMENT

Page replacement is defined as a process of swapping out an existing page from the main memory frame and replacing it with the desired page. 

Page replacement is needed when-

  • All the page frames of the main memory are already occupied.
  • So, a page has to be replaced to create a space for the desired page.

Page replacement algorithms help to select which page should be swapped out of the main memory to create a space for the incoming required page.

The various page replacement algorithms are as follows-

  • FIFO Page Replacement Algorithm
  • LIFO Page Replacement Algorithm
  • LRU Page Replacement Algorithm
  • Optimal Page Replacement Algorithm
  • Random Page Replacement Algorithm
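Two of these policies, FIFO and LRU, can be sketched as page-fault counters over a reference string (the reference string below is hypothetical):

```python
from collections import OrderedDict

def fifo_faults(refs, frames):
    """Count page faults under FIFO replacement."""
    mem, faults = [], 0
    for page in refs:
        if page not in mem:
            faults += 1
            if len(mem) == frames:
                mem.pop(0)                # evict the oldest-loaded page
            mem.append(page)
    return faults

def lru_faults(refs, frames):
    """Count page faults under LRU replacement."""
    mem, faults = OrderedDict(), 0
    for page in refs:
        if page in mem:
            mem.move_to_end(page)         # mark as most recently used
        else:
            faults += 1
            if len(mem) == frames:
                mem.popitem(last=False)   # evict least recently used
            mem[page] = True
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4]
print(fifo_faults(refs, 3))  # 7 faults under FIFO
print(lru_faults(refs, 3))   # 6 faults under LRU
```

On this particular reference string LRU faults less than FIFO because it keeps the recently reused page 0 resident; in general the better policy depends on the workload.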

 
