Memory Management MCQ Quiz - Objective Questions with Answers for Memory Management

Last updated on Jun 26, 2025

Latest Memory Management MCQ Objective Questions

Memory Management Question 1:

Paging:

  1. is a method of memory allocation by which the program is subdivided into equal portions, or pages, and core is subdivided into equal portions, or blocks.
  2. consists of those addresses that may be generated by a processor during execution of a computation.
  3. is a method of allocating processor time.
  4. allows multiple programs to reside in separate areas of core at the same time.
  5. None of the above

Answer (Detailed Solution Below)

Option 1 : is a method of memory allocation by which the program is subdivided into equal portions, or pages and core is subdivided into equal portions or blocks.

Memory Management Question 1 Detailed Solution

The correct answer is Option 1: paging is a method of memory allocation by which the program is subdivided into equal portions, or pages, and core is subdivided into equal portions, or blocks.

Key Points

  • Paging is a memory management scheme that eliminates the need for contiguous allocation of physical memory.
  • In paging, the operating system retrieves data from secondary storage in same-size blocks called pages.
  • It divides the program into fixed-size pages and the main memory into blocks of the same size, called frames.
  • When a program is to be executed, its pages are loaded into any available memory frames from the secondary storage.

Memory Management Question 2:

Dirty bit is used to show the

  1. Page with low frequency occurrence
  2. Wrong page
  3. Page with corrupted data
  4. Page that is modified after being loaded into cache memory
  5. None of the above

Answer (Detailed Solution Below)

Option 4 : Page that is modified after being loaded into cache memory

Memory Management Question 2 Detailed Solution

Concept:

Dirty bit: A dirty bit is associated with a block of cache memory and is used to indicate whether that block has been modified after being loaded into the cache.

Explanation:

The dirty-bit concept is used with the write-back policy in caches.
Write-back means updates are written only to the cache. When a line is modified, its dirty bit is set; when the line is later selected for replacement, it needs to be written to main memory only if its dirty bit is set.

Writing every evicted line back to memory would be wasteful. The number of write-backs is reduced by writing back only when the cache copy differs from memory, which is tracked by the dirty (modified) bit: a line is written back to main memory only when its dirty bit is set to 1. Thus, a write-back cache requires two bits per line: a valid bit and a dirty bit.
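The write-back behaviour described above can be sketched as a tiny model cache; all names and addresses here are illustrative, not from the original question:

```python
# Minimal write-back cache sketch: a line is flushed to main memory
# on eviction only if its dirty bit is set.

class WriteBackCache:
    def __init__(self, memory):
        self.memory = memory          # backing store: addr -> value
        self.lines = {}               # addr -> {"value": v, "dirty": bool}
        self.writebacks = 0           # count of flushes to main memory

    def write(self, addr, value):
        # An update goes only to the cache; the line is marked dirty.
        self.lines[addr] = {"value": value, "dirty": True}

    def evict(self, addr):
        line = self.lines.pop(addr)
        if line["dirty"]:             # flush only dirty lines
            self.memory[addr] = line["value"]
            self.writebacks += 1

mem = {0x10: 1}
cache = WriteBackCache(mem)
cache.write(0x10, 99)    # written to cache only; dirty bit set
cache.evict(0x10)        # dirty bit set -> line written back
print(mem[0x10], cache.writebacks)   # 99 1
```

A clean (unmodified) line would be evicted without touching main memory, which is exactly the saving the dirty bit provides.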


Memory Management Question 3:

Which of the following memory allocation algorithms will search all available blocks to find the largest block and allocate it to the process?

  1. Next fit
  2. First fit
  3. Worst fit
  4. Best fit

Answer (Detailed Solution Below)

Option 3 : Worst fit

Memory Management Question 3 Detailed Solution

The correct answer is Worst Fit.

Key Points

  • Worst Fit memory allocation algorithm searches all available memory blocks and selects the largest block for allocation.
    • This approach ensures that a process is allocated to the largest available block, leaving smaller blocks free for future allocations.
    • The algorithm scans the entire list of memory blocks and identifies the block with the maximum size.
    • The rationale is that the leftover portion of the largest block remains large enough to be useful for future allocations.
    • Worst fit can sometimes lead to inefficient memory usage, as the largest blocks may be split into smaller fragments.

Additional Information

  • In comparison:
    • First Fit: Allocates the first block that is large enough to satisfy the request.
    • Best Fit: Allocates the smallest block that is large enough to satisfy the request, minimizing wasted space.
    • Next Fit: Similar to First Fit but starts searching from the last allocated block.
  • Worst Fit is less commonly used compared to First Fit and Best Fit due to its tendency to create larger leftover fragments.
  • It is ideal for processes requiring large memory blocks while leaving smaller blocks available for other processes.
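The four placement strategies compared above can be sketched as follows; the hole list and sizes are illustrative only:

```python
# Placement strategies over a list of free-hole sizes.
# Each function returns the index of the chosen hole, or None if none fits.

def first_fit(holes, size):
    for i, h in enumerate(holes):
        if h >= size:                  # first block large enough
            return i
    return None

def best_fit(holes, size):
    fits = [i for i, h in enumerate(holes) if h >= size]
    return min(fits, key=lambda i: holes[i]) if fits else None   # smallest fit

def worst_fit(holes, size):
    fits = [i for i, h in enumerate(holes) if h >= size]
    return max(fits, key=lambda i: holes[i]) if fits else None   # largest fit

def next_fit(holes, size, start):
    n = len(holes)
    for k in range(n):                 # wrap around from the last position
        i = (start + k) % n
        if holes[i] >= size:
            return i
    return None

holes = [200, 100, 500, 300]
print(first_fit(holes, 150))    # 0 (200 is the first that fits)
print(best_fit(holes, 150))     # 0 (200 is the smallest that fits)
print(worst_fit(holes, 150))    # 2 (500 is the largest block)
print(next_fit(holes, 150, 1))  # 2 (search starts at index 1)
```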

Memory Management Question 4:

Paging technique is used for effective management of?

  1. RAM
  2. Cache memory
  3. Disk space
  4. ROM

Answer (Detailed Solution Below)

Option 1 : RAM

Memory Management Question 4 Detailed Solution

The correct answer is RAM.

Key Points

  • Paging is a memory management technique used by operating systems to efficiently utilize RAM.
    • In paging, the physical memory (RAM) is divided into fixed-size blocks called frames.
    • Similarly, the logical memory (virtual memory) is divided into blocks of the same size called pages.
    • When a program is executed, its pages are loaded into available frames in RAM.
    • Paging ensures that the operating system can manage memory efficiently by allowing programs to use non-contiguous memory locations.
    • It eliminates external fragmentation (though some internal fragmentation can occur in a process's last page) and optimizes the use of limited RAM resources.

Additional Information

  • Paging allows processes to execute even if they are larger than the available physical memory, by utilizing virtual memory.
  • The operating system uses a page table to keep track of the mapping between pages and frames.
  • When a program accesses memory that is not currently in RAM, a page fault occurs, prompting the operating system to load the required page from the disk.
  • Paging improves multitasking capabilities, enabling multiple processes to run simultaneously without interference.
  • Modern operating systems, such as Windows, Linux, and macOS, implement paging as part of their memory management strategies.
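The page-table lookup described above can be sketched in a few lines; the page size and table contents are illustrative assumptions:

```python
# Paging address translation sketch: split a logical address into
# (page number, offset) and map the page to a frame via the page table.

PAGE_SIZE = 2048                      # 2 kB pages -> 11 offset bits
page_table = {0: 5, 1: 9, 2: 3}       # page number -> frame number (illustrative)

def translate(logical_addr):
    page = logical_addr // PAGE_SIZE
    offset = logical_addr % PAGE_SIZE
    if page not in page_table:
        # The page is not resident: this is where a page fault would
        # trigger loading the page from disk.
        raise LookupError("page fault: page %d not resident" % page)
    return page_table[page] * PAGE_SIZE + offset

print(translate(2048 + 100))   # page 1 maps to frame 9: 9*2048 + 100 = 18532
```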

Memory Management Question 5:

If a processor has a 32-bit virtual address, a 28-bit physical address, and a 2 kB page size, how many bits are required for the virtual and physical page numbers, respectively?

  1. 17, 21
  2. 21, 17
  3. 16, 10
  4. More than one of the above
  5. None of the above

Answer (Detailed Solution Below)

Option 2 : 21, 17

Memory Management Question 5 Detailed Solution

Data:

Virtual address space (VAS) = 2^32 bytes

Physical address space (PAS) = 2^28 bytes

Page size (PS) = 2 kB = 2^11 bytes

Formula:

Number of virtual pages P = VAS / PS; bits required for the virtual page number = ⌈log2 P⌉

Number of frames F = PAS / PS; bits required for the physical page number = ⌈log2 F⌉

Calculation:

P = 2^32 / 2^11 = 2^21, so bits required for the virtual page number = ⌈log2 2^21⌉ = 21

F = 2^28 / 2^11 = 2^17, so bits required for the physical page number = ⌈log2 2^17⌉ = 17
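The calculation can be checked directly:

```python
import math

# Recompute the page/frame counts and the bits needed to index them.
VAS = 2 ** 32        # virtual address space in bytes
PAS = 2 ** 28        # physical address space in bytes
PAGE = 2 ** 11       # 2 kB page size

pages = VAS // PAGE                        # 2^21 virtual pages
frames = PAS // PAGE                       # 2^17 frames

vpn_bits = math.ceil(math.log2(pages))     # bits for the virtual page number
ppn_bits = math.ceil(math.log2(frames))    # bits for the physical page number
print(vpn_bits, ppn_bits)                  # 21 17
```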

Top Memory Management MCQ Objective Questions

_________ is a memory management scheme that permits the physical address space of a process to be noncontiguous.

  1. Segmentation 
  2. Paging
  3. Fragmentation 
  4. Swapping

Answer (Detailed Solution Below)

Option 2 : Paging

Memory Management Question 6 Detailed Solution


Explanation:

Paging

  • Paging is a memory management scheme by which a computer stores and retrieves data from secondary storage for use in the main memory.
  • In this scheme, the operating system retrieves data from secondary storage in same-size blocks called pages.
  • Paging is an important part of virtual memory implementations in modern operating systems, using secondary storage to let programs exceed the size of available physical memory.

Segmentation

  • Segmentation is a memory-management scheme that supports the user view of memory.
  • A logical address space is a collection of segments. Each segment has a name and a length.
  • The addresses specify both the segment name and the offset within the segment. The user, therefore, specifies each address.

Fragmentation 

  • In memory management, fragmentation refers to memory that becomes unusable for allocation: external fragmentation, where free memory is broken into small non-contiguous holes, and internal fragmentation, where space inside an allocated block goes unused.
  • It is a condition of memory, not a memory-management scheme, so it does not answer the question.

Swapping

  • The medium-term scheduler reduces the degree of multiprogramming. Some processes are removed from memory to reduce multiprogramming. Later, the process can be reintroduced into memory, and its execution can be continued where it left off. This scheme is called swapping.
  • The long-term scheduler, or job scheduler, selects processes from a mass-storage device and loads them into memory for execution.
  • The short-term scheduler, or CPU scheduler, selects from among the processes that are ready to execute and allocates the CPU to one of them.

In the context of operating systems, which of the following statements is/are correct with respect to paging?

  1. Paging incurs memory overheads.
  2. Paging helps solve the issue of external fragmentation.
  3. Page size has no impact on internal fragmentation.
  4. Multi-level paging is necessary to support pages of different sizes.

Answer (Detailed Solution Below)

Options 1 and 2

Memory Management Question 7 Detailed Solution


Key Points

  • Memory is divided into fixed-size frames and each process into pages of the same size, so external fragmentation cannot occur. But a process that does not completely fill its last page wastes the remainder of that page, so page size does affect internal fragmentation (statement 3 is incorrect).
  • Page tables themselves occupy extra memory, so paging incurs a memory overhead.

Therefore, options 1 and 2 are correct.

The primary objective of a time-sharing operating system is to

  1. Avoid thrashing
  2. Provide fast response to the user of the computer
  3. Provide fast execution of processes
  4. Optimize computer memory usage

Answer (Detailed Solution Below)

Option 2 : Provide fast response to the user of the computer

Memory Management Question 8 Detailed Solution

  • Time-sharing (or multitasking) is a logical extension of multiprogramming.
  • In time-sharing systems, the CPU executes multiple jobs by switching among them, but the switches occur so frequently that the users can interact with each program while it is running.
  • Time-sharing requires an interactive computer system, which provides direct communication between the user and the system.
  • The user gives instructions to the operating system or to a program directly, using an input device such as a keyboard, mouse, touchpad, or touch screen, and waits for immediate results on an output device.

Contiguous memory allocation having variable size partition suffers from:

  1. External Fragmentation
  2. Internal Fragmentation
  3. Both External and Internal Fragmentation
  4. None of the options

Answer (Detailed Solution Below)

Option 1 : External Fragmentation

Memory Management Question 9 Detailed Solution


The correct solution is 'option 1'.

Key Points

  • Contiguous allocation of memory results in fragmentation, which may be external or internal.
  • This leads to inflexibility and memory wastage.
  • In variable partitioning, space in main memory is allocated according to the requirement of each process, so there is no internal fragmentation: no unused space is left inside a partition.
  • The absence of internal fragmentation does not mean that external fragmentation will not occur.

Consider three processes of 10 MB, 20 MB, and 30 MB loaded contiguously into memory. After some time, the 10 MB and 30 MB processes are freed. There are now two free partitions totalling 40 MB, but a 40 MB process cannot be loaded because the free space is not contiguous.

Thus, the correct answer is: External Fragmentation
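The example above can be checked numerically: the total free memory equals the request, yet no single hole can hold it.

```python
# External fragmentation demonstration: total free space is sufficient,
# but no contiguous hole is large enough for the request.

holes_mb = [10, 30]          # sizes of the freed partitions
request_mb = 40

total_free = sum(holes_mb)
fits_somewhere = any(h >= request_mb for h in holes_mb)
print(total_free, fits_somewhere)    # 40 False
```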

NOTE:
This question was dropped and marks were allocated to all

Assume that in a certain computer, the virtual addresses are 64 bits long and the physical addresses are 48 bits long. The memory is word addressable. The page size is 8 kB and the word size is 4 bytes. The Translation Look-aside Buffer (TLB) in the address translation path has 128 valid entries. At most how many distinct virtual addresses can be translated without any TLB miss?

  1. 16 × 2^10
  2. 256 × 2^10
  3. 4 × 2^20
  4. 8 × 2^20

Answer (Detailed Solution Below)

Option 2 : 256 × 2^10

Memory Management Question 10 Detailed Solution


Memory is word addressable.

1 word = 4 bytes

Virtual Address (VA) = 64 bits

∴ Virtual Address Space (VAS) = 2^64 words

Physical Address (PA) = 48 bits

∴ Physical Address Space (PAS) = 2^48 words

Page size (PS) = 8 kB = 2^13 bytes = 2^11 words

∴ page offset = 11 bits

∴ number of pages possible = 2^64 / 2^11 = 2^53

∴ number of frames possible = 2^48 / 2^11 = 2^37

VA = Page number + page offset

Translation Lookaside Buffer (TLB): each valid entry maps a page number to a frame number.

Entries in TLB = 128 = 2^7

If a page number is found in the TLB, then there will be a hit for all the word addresses of that page.

1 page hit implies 2^11 distinct virtual address hits.

So 2^7 page hits imply 2^7 × 2^11 = 2^18 = 256 × 2^10 virtual address hits.

Therefore, at most 256 × 2^10 distinct virtual addresses can be translated without any TLB miss.

Tips and Tricks:

The number of distinct virtual addresses that can be translated without any TLB miss is

(number of entries in the TLB) × (page size in words)
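The shortcut can be verified with the numbers from this question:

```python
# Maximum distinct word addresses translatable without a TLB miss:
# (TLB entries) x (words per page).

tlb_entries = 128                 # 2^7 valid entries
page_size_words = 2 ** 11         # 8 kB page / 4-byte words = 2^11 words

max_hits = tlb_entries * page_size_words
print(max_hits, max_hits == 256 * 2 ** 10)   # 262144 True
```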

The memory management scheme that allows the processes to be stored non-contiguously in memory :

  1. Spooling
  2. Swapping
  3. Paging
  4. Relocation

Answer (Detailed Solution Below)

Option 3 : Paging

Memory Management Question 11 Detailed Solution


Spooling - It is a process in which data is temporarily held to be used and executed by a device, program, or the system.

Swapping - It is a memory reclamation method wherein memory contents not currently in use are swapped to a disk to make the memory available for other applications or processes.

Paging - It is a memory management scheme that allows the processes to be stored non-contiguously in the memory.

Relocation - Sometimes, as per the requirements, data is transferred from one location to another. This is called memory relocation.

If main memory access time is 400 μs, TLB access time is 50 μs, considering TLB hit as 90%, what will be the overall access time?  

  1. 800 μs 
  2. 490 μs
  3. 485 μs
  4. 450 μs

Answer (Detailed Solution Below)

Option 2 : 490 μs

Memory Management Question 12 Detailed Solution


Data:

TLB hit ratio = p = 90% = 0.9

TLB access time = t = 50 μs

Memory access time = m = 400 μs

Effective memory access time = EMAT

Formula:

EMAT = p × (t + m) + (1 – p) × (t + m + m)

Calculation:

EMAT = 0.9 × (50 + 400) + (1 – 0.9) × (50 + 400 + 400)

EMAT = 490 μs

∴ the overall access time is 490 μs

Important Points

During TLB hit

Frame number is fetched from the TLB (50 μs)

and page is fetched from physical memory (400 μs)

During TLB miss

TLB no entry matches (50 μs)

Frame number is fetched from the physical memory (400 μs)

and pages are fetched from physical memory (400 μs)
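The two cases above combine into the EMAT formula, which can be checked directly:

```python
# Effective memory access time with a TLB, using the figures from
# this question (times in microseconds).

p = 0.9      # TLB hit ratio
t = 50       # TLB access time
m = 400      # main memory access time

# Hit: TLB lookup + one memory access. Miss: TLB lookup + page-table
# access in memory + the actual memory access.
emat = p * (t + m) + (1 - p) * (t + m + m)
print(emat)
```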

Consider allocation of memory to a new process. Assume that none of the existing holes in the memory will exactly fit the process’s memory requirement. Hence, a new hole of smaller size will be created if allocation is made in any of the existing holes. Which one of the following statements is TRUE?

  1. The hole created by first fit is always larger than the hole created by next fit.
  2. The hole created by worst fit is always larger than the hole created by first fit.
  3. The hole created by best fit is never larger than the hole created by first fit.
  4. The hole created by next fit is never larger than the hole created by best fit.

Answer (Detailed Solution Below)

Option 3 : The hole created by best fit is never larger than the hole created by first fit.

Memory Management Question 13 Detailed Solution


Concept:

Best fit allocation:

The best fit allocation strategy chooses the smallest available memory partition that can satisfy the memory requirement. It creates the smallest hole.

First fit allocation:

The first fit chooses the first available memory partition that can satisfy the requirement.

Worst fit allocation:

The worst fit allocation strategy chooses the largest available memory partition that can satisfy the memory requirement. It creates the largest hole.

Next fit allocation:

It works the same as first fit, except that it maintains a pointer to the location of the last allocation and begins its search from there when a new request arrives, whereas first fit always starts from the beginning of memory.

Explanation:

Option 1 and Option 4 : FALSE

The hole created by first fit may or may not be larger than the hole created by next fit

Option 2: FALSE

The hole created by worst fit is always larger than or equal to the hole created by first fit

Option 3: TRUE

The hole created by best fit can never be larger than the hole created by first fit, although the two may be equal.
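A quick numeric check of the claim, with illustrative hole sizes:

```python
# Leftover hole per strategy for one request. Best fit's hole can never
# exceed first fit's, because best fit picks the smallest block that fits.

holes = [300, 150, 600]      # free block sizes, illustrative
request = 120

first = next(h for h in holes if h >= request)    # first block that fits
best = min(h for h in holes if h >= request)      # smallest block that fits
worst = max(h for h in holes if h >= request)     # largest block

print(first - request, best - request, worst - request)   # 180 30 480
```

Since the first-fitting block is one of the candidates best fit chooses from, the best-fit leftover is at most the first-fit leftover, matching option 3.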

Consider the following statements:

S1: A small page size causes large page tables.

S2: Internal fragmentation is increased with small pages.

S3: I/O transfers are more efficient with large pages.

Which of the following is true?

  1. S1 and S2 are true  
  2. S1 is true and S2 is false 
  3. S2 and S3 are true 
  4. S1 is true S3 is false

Answer (Detailed Solution Below)

Option 2 : S1 is true and S2 is false 

Memory Management Question 14 Detailed Solution


Concept:

Paging is a memory management scheme. Paging reduces the external fragmentation. Size of the page table depends upon the number of entries in the table and bytes stored in one entry.

Explanation:

S1: A small page size causes large page tables.

This statement is correct. Smaller page size means more pages required per process. It means large page tables are needed.

S2: Internal fragmentation is increased with small pages.

This statement is incorrect. Internal fragmentation is the unused space inside an allocated page when a process does not fill it completely. With smaller pages, the maximum wasted space (less than one page per process) is smaller, so internal fragmentation decreases.

S3: I/O transfers are more efficient with large pages.

An I/O system is required to take an application I/O request and send it to the physical device. Transferring data in larger units amortizes the per-transfer overhead, so I/O transfers are more efficient with large pages. The given statement is correct.
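The trade-off behind S1 and S2 can be made concrete with two candidate page sizes (the address-space size and page sizes here are illustrative assumptions):

```python
# Page-size trade-off sketch: smaller pages mean more page-table entries
# (S1 true), but less worst-case waste in the last page (S2 false).

VAS = 2 ** 32                       # 32-bit virtual address space, illustrative

for page in (2 ** 12, 2 ** 20):     # 4 kB pages vs 1 MB pages
    entries = VAS // page           # page-table entries needed
    max_internal_waste = page - 1   # worst-case unused bytes in the last page
    print(page, entries, max_internal_waste)
```

With 4 kB pages the table needs 2^20 entries but wastes at most 4095 bytes per process; with 1 MB pages only 4096 entries are needed but up to ~1 MB can be wasted.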

Page information in memory is also called the page table. The essential content in each entry of a page table is:

  1. Page Access information
  2. Virtual Page number
  3. Page Frame number
  4. Both virtual page number and Page Frame Number

Answer (Detailed Solution Below)

Option 3 : Page Frame number

Memory Management Question 15 Detailed Solution


The essential content in each entry of a page table is the page frame number.

Explanation:

In paging, physical memory is divided into fixed-size blocks called page frames and logical memory is divided into fixed-size blocks called pages which are of the same size as that of frames. When a process is to be executed, its pages can be loaded into any unallocated frames from the disk.

In paging, mapping of logical addresses to physical addresses is performed at the page level.

  • When CPU generates a logical address, it is divided into two parts: page number and offset
  • Page size is always in the power of 2.
  • Address translation is performed using the page table (mapping table).
  • It stores the frame number allocated to each page and the page number is used as an index to the page table.

 

When the CPU generates a logical address, that address is sent to the MMU (memory management unit). The MMU uses the page number to find the corresponding page frame number in the page table. The page frame number is attached to the high-order end of the page offset to form the physical address, which is sent to memory.

