Searching, Sorting and Hashing MCQ Quiz - Objective Question with Answer for Searching, Sorting and Hashing - Download Free PDF

Last updated on Jun 10, 2025

Latest Searching, Sorting and Hashing MCQ Objective Questions

Searching, Sorting and Hashing Question 1:

The time complexities of the Merge Sort algorithm and the Binary Search algorithm are, respectively:

  1. O(log2 n) and O(n log2 n)
  2. O(n log2 n) and O(log2 n)
  3. O(n²) and O(log2 n)
  4. O(2ⁿ) and O(n²)
  5. None of the above

Answer (Detailed Solution Below)

Option 2 : O(n log2 n) and O(log2 n)

Searching, Sorting and Hashing Question 1 Detailed Solution

Merge sort 

It is based on the divide-and-conquer approach.

Recurrence relation for merge sort will become:

T(n) = 2T (n/2) + Θ (n)

Using Master’s theorem

T(n) = Θ(n log2 n)

Therefore, the time complexity of Merge Sort is Θ(n log n).
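The divide-and-conquer structure behind this recurrence can be seen in a minimal merge sort sketch (Python, illustrative):

```python
def merge_sort(a):
    """Divide-and-conquer sort: split, sort each half recursively, merge."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    # Merging two sorted halves takes linear time -> the Θ(n) term of the recurrence
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]
```

The two recursive calls on halves of the input give the 2T(n/2) term, and the merge loop gives the Θ(n) term.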

Binary Search

Search a sorted array by repeatedly dividing the search interval in half.

Recurrence for binary search is T(n) = T(n/2) + θ(1)

Using Master’s theorem

T(n) = Θ(log2 n)

Each step discards half of the remaining input, hence the time complexity is O(log n).
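The halving of the search interval can be sketched with a minimal iterative binary search (Python, illustrative):

```python
def binary_search(a, key):
    """Return the index of key in sorted list a, or -1 if absent.
    Each iteration halves the interval [lo, hi] -> O(log n) iterations."""
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if a[mid] == key:
            return mid
        if a[mid] < key:
            lo = mid + 1   # key can only be in the right half
        else:
            hi = mid - 1   # key can only be in the left half
    return -1
```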

Searching, Sorting and Hashing Question 2:

Which of the following algorithm design techniques is used in designing the quick sort algorithm?

  1. Dynamic programming method
  2. Back tracking strategy
  3. Divide and conquer strategy
  4. Greedy strategy
  5. None of the above

Answer (Detailed Solution Below)

Option 3 : Divide and conquer strategy

Searching, Sorting and Hashing Question 2 Detailed Solution

Key Points
  • Quicksort is a divide-and-conquer algorithm: a pivot element is chosen, and partitioning around the pivot reduces the given problem into two smaller subproblems.
  • Quick Sort is useful for sorting arrays.
  • In efficient implementations, Quick Sort is not a stable sort, meaning that the relative order of equal sort items is not preserved.
  • The average-case time complexity of Quick Sort is O(n log n). In the worst case, it makes O(n²) comparisons, though this behavior is rare.
  • Quick Sort sorts in place; apart from the O(log n) recursion stack, it does not require any extra storage.

Hence the correct answer is Divide and conquer strategy.
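The divide-and-conquer structure can be sketched as follows (Python, illustrative; this list-based version trades the in-place partition for clarity, and the last element is an arbitrary pivot choice):

```python
def quicksort(a):
    """Pick a pivot, partition into smaller/larger sets, recurse on each part."""
    if len(a) <= 1:
        return a
    pivot = a[-1]  # pivot choice is arbitrary; last element used here
    smaller = [x for x in a[:-1] if x <= pivot]
    larger = [x for x in a[:-1] if x > pivot]
    # The pivot lands in its final position between the two recursively sorted parts
    return quicksort(smaller) + [pivot] + quicksort(larger)
```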

Searching, Sorting and Hashing Question 3:

Which data structure often results in a time-space tradeoff by using extra memory to speed up operations?

  1. Arrays 
  2. Linked lists
  3. Hash tables
  4. Stacks
  5. None of the above

Answer (Detailed Solution Below)

Option 3 : Hash tables

Searching, Sorting and Hashing Question 3 Detailed Solution

The correct answer is Hash tables

Key Points

  • Hash tables:
    • Hash tables use extra memory to achieve faster access times for certain operations.
    • They employ a hash function to map keys to indices in an array, providing constant time average-case access for search, insertion, and deletion.
    • However, there's a tradeoff in terms of space because collisions (two keys hashing to the same index) may occur, leading to the need for additional structures like linked lists or open addressing.

Additional Information

  • Arrays:
    • Arrays provide constant time access to elements using an index.
    • They have a fixed size, which can lead to memory wastage if the array is larger than the actual number of elements it holds.
    • No extra memory is used to speed up operations, and access time is already efficient.
  • Linked lists:
    • Linked lists provide dynamic memory allocation, allowing for efficient insertion and deletion of elements.
    • However, accessing an element at a specific index takes linear time, as you need to traverse the list from the beginning.
    • Linked lists do not use extra memory to speed up operations.
  • Stacks:
    • Stacks follow the Last-In-First-Out (LIFO) principle and allow efficient insertion and deletion of elements at one end.
    • They are typically implemented using arrays or linked lists.
    • Stacks do not use extra memory to speed up operations; their efficiency comes from the nature of the LIFO structure.

Searching, Sorting and Hashing Question 4:

Which collision resolution technique involves placing collided elements in the next available empty slot in the hash table? 

  1. Linear probing
  2. Quadratic probing 
  3. Separate chaining
  4. Double hashing 
  5. None of the above

Answer (Detailed Solution Below)

Option 1 : Linear probing

Searching, Sorting and Hashing Question 4 Detailed Solution

The correct answer is Linear probing

Key Points

  • Linear Probing:
    • When a collision occurs (two elements hash to the same location), linear probing involves placing the collided element in the next available (unoccupied) slot in the hash table.
    • If that slot is also occupied, it continues to search for the next empty slot in a linear fashion (moving one slot at a time) until an empty slot is found.
    • Linear probing can lead to clustering, where consecutive elements form clusters in the hash table.
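A minimal sketch of linear-probing insertion (Python, illustrative; `None` marks an empty slot, table size 7 chosen for the example):

```python
def insert_linear_probing(table, key):
    """Insert key into an open-addressed table via linear probing; return the slot used."""
    size = len(table)
    i = key % size
    while table[i] is not None:  # collision: move one slot at a time
        i = (i + 1) % size
    table[i] = key
    return i

table = [None] * 7
for k in (15, 11, 25):  # 15 -> slot 1, 11 -> slot 4, 25 collides at 4 -> slot 5
    insert_linear_probing(table, k)
```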

Additional Information

  • Quadratic Probing:
    • Similar to linear probing, but instead of moving one slot at a time, quadratic probing uses a quadratic function to determine the next position to check.
    • If the slot at the hash index is occupied, it checks slots at positions incremented by successive squares.
    • Quadratic probing helps to reduce clustering compared to linear probing.
  • Separate Chaining:
    • Instead of placing collided elements in the next available slot in the hash table, separate chaining involves maintaining a linked list at each slot in the hash table.
    • When a collision occurs, the collided elements are added to the linked list at that slot.
    • Each slot contains a separate data structure (like a linked list or a tree) to handle collisions.
  • Double Hashing:
    • In double hashing, a secondary hash function is used to determine the step size between probe attempts.
    • If a collision occurs, the algorithm uses the secondary hash function to calculate a new index to probe.
    • This helps to avoid clustering and provides a more systematic way to find an empty slot.

Searching, Sorting and Hashing Question 5:

Which of the following searching techniques is not affected by an increase in the size of the search list?

  1. Binary Search
  2. Linear Search
  3. Search by Hashing
  4. More than one of the above
  5. None of the above

Answer (Detailed Solution Below)

Option 3 : Search by Hashing

Searching, Sorting and Hashing Question 5 Detailed Solution

The correct option is Search by Hashing.

CONCEPT:

In linear search, every element of a given list is compared one by one with the given key without skipping any element.

It is useful when we need to search for an item in a small unsorted list, but the time taken to search the list increases as the size of the list increases.

For example, consider a list of length 5 with the key element at the end of the list. The number of comparisons required to find it equals the size of the list, i.e., 5.

If the same list grows to size 15 and the key element is again at the end, the number of comparisons required equals the new size, i.e., 15.

 

In binary search, the key to be searched is compared with the middle element of a sorted list, If the element at the middle position:

i)  Matches the key the search is successful

ii) Is greater than the key then the key element may be present in the left part of the list

iii) Is smaller than the key, then the key element may be present in the right part of the list

This process continues till the element is found or the list is traversed completely.

Thus the time taken to search the list increases as the size of the list increases. But it will not be as large as the time required by linear search

 

Hash-based searching requires only one key comparison to find the position of a key, provided every element is present at its designated position decided by a hash function. 

For example, consider a list of length 5 and the hash function: h(element) = element % size(list)

A hash function is a mathematical function that, ideally, maps each distinct key to a distinct position in constant time.

For example: Consider a list of length 5 and if we want to search for key = 12, the index returned by the hash function is h(12) = 12 % 5  = 2  and requires only one key comparison to find the key at that index

Similarly, after increasing the size of the list (say to 15), searching for key = 12 gives the index h(12) = 12 % 15 = 12, and again only one key comparison is needed to find the key at that index.

Thus it is independent of the length of the list.
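Under the example's no-collision assumption, this independence from list size can be sketched as (Python, illustrative):

```python
def hash_search(table, key):
    """One key comparison at the slot decided by h(key) = key % size (no collisions assumed)."""
    i = key % len(table)
    return i if table[i] == key else -1

small = [None] * 5
small[12 % 5] = 12    # h(12) = 12 % 5 = 2
big = [None] * 15
big[12 % 15] = 12     # h(12) = 12 % 15 = 12
```

In both tables, locating key 12 costs exactly one comparison, regardless of the table size.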

Top Searching, Sorting and Hashing MCQ Objective Questions

Consider a hash table of size 7, with hash function H (k) = k % 7, and pseudo random i = (i + 5) % 7. We want to insert the following keys one by one from left to right.

15, 11, 25, 16, 9, 8, 12

What will be the position of the key 25, if we use random probing?

  1. 4
  2. 5
  3. 1
  4. 2

Answer (Detailed Solution Below)

Option 4 : 2

Searching, Sorting and Hashing Question 6 Detailed Solution

Since we are using random probing:

Insert 15:

(15)%7 = 1

Insert 11:

(11)%7 = 4

Insert 25:

(25) % 7 = 4 → collision

i = 4

i = (i + 5) % 7    // using the given pseudo-random function

i = (4 + 5) % 7 = 2

Hence key 25 is placed at position 2.
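The insertions can be simulated directly (Python, illustrative; the probe step is the question's pseudo-random function i = (i + 5) % 7):

```python
def insert_all(keys, size=7):
    """Insert keys with H(k) = k % size; on collision, probe with i = (i + 5) % size.
    Returns a dict mapping each key to its final slot."""
    table = [None] * size
    pos = {}
    for k in keys:
        i = k % size
        while table[i] is not None:
            i = (i + 5) % size  # pseudo-random probe from the question
        table[i] = k
        pos[k] = i
    return pos

positions = insert_all([15, 11, 25, 16, 9, 8, 12])
```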

The k-Means algorithm is an ______  algorithm.

  1. Supervised Learning
  2. Unsupervised Learning
  3. Semi-supervised Learning
  4. Reinforcement Learning

Answer (Detailed Solution Below)

Option 2 : Unsupervised Learning

Searching, Sorting and Hashing Question 7 Detailed Solution

The correct answer is Unsupervised Learning.

Key Points 

1. Supervised Learning:

  • In supervised learning, the algorithm is trained on a labeled dataset, where the input data is paired with corresponding output labels.
  • The goal is to learn a mapping function from input to output so that the algorithm can make predictions or classifications on new, unseen data.

2. Unsupervised Learning:

  • In unsupervised learning, the algorithm is given data without explicit instructions on what to do with it.
  • The algorithm tries to find patterns, relationships, or structures within the data on its own.
  • Clustering algorithms, like k-Means, fall under unsupervised learning because they group similar data points together without using labeled output information.

3. Semi-supervised Learning:

  • Semi-supervised learning is a combination of supervised and unsupervised learning.
  • It involves a dataset that contains both labeled and unlabeled examples.
  • The algorithm is trained on the labeled data, and then it tries to make predictions on the unlabeled data by leveraging the patterns learned from the labeled data.

4. Reinforcement Learning:

  • Reinforcement learning involves an agent interacting with an environment and learning to make decisions by receiving feedback in the form of rewards or punishments.
  • The agent learns to take actions that maximize the cumulative reward over time.
  • Unlike supervised learning, where the algorithm is provided with explicit labeled examples, in reinforcement learning, the algorithm learns by trial and error.

In the case of the k-Means algorithm, it is unsupervised learning because it doesn't rely on labeled output data. Instead, it aims to partition the input data into clusters based on similarity, without using predefined class labels.

What is the best-case time complexity of the Linear search?

  1. O(n)
  2. O(1)
  3. O(n log n)
  4. O(n²)

Answer (Detailed Solution Below)

Option 2 : O(1)

Searching, Sorting and Hashing Question 8 Detailed Solution

Concept:

  • A linear search or sequential search is a method for finding an element within an array or linked list or any data structure.
  • It sequentially checks each element of the list until a match is found or the whole list has been searched.


Explanation:

int A[] = {2, 1, 4, 5, 6, 7}

Array name: A

index:   0  1  2  3  4  5
element: 2  1  4  5  6  7

Search: 2

In the first comparison, 2 is found at index 0.

Hence the best-case time complexity is O(1).
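A minimal linear search sketch (Python, illustrative) shows why the best case needs only one comparison:

```python
def linear_search(a, key):
    """Compare key with each element in order; return the index of the first match, or -1."""
    for i, x in enumerate(a):
        if x == key:
            return i
    return -1

A = [2, 1, 4, 5, 6, 7]
```

Searching for 2 succeeds on the first comparison (best case, O(1)); searching for 7 scans the whole list (worst case, O(n)).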

Consider the array A = <4, 1, 3, 2, 16, 9, 10, 14, 8, 7>. After building a heap from the array A, the depth of the heap and the right child of the root of the max-heap are _______ and _____ respectively. (Root is at level 0.)

  1. 3, 14 
  2. 3, 10
  3. 4, 14
  4. 4, 10

Answer (Detailed Solution Below)

Option 2 : 3, 10

Searching, Sorting and Hashing Question 9 Detailed Solution

Concept:

Max-heap: In a max-heap, every node is greater than or equal to its children, so the root holds the maximum element of the tree.

Explanation:

A = <4, 1, 3, 2, 16, 9, 10, 14, 8, 7>.

STEP 1 (diagram omitted):

Insert the elements one by one, level by level. When 2 is inserted as a child of 1, it is greater than its parent, so the two nodes must be exchanged.

STEP 2 (diagram omitted):

Similarly, when 16 is inserted it is exchanged upward (first with 2, then with 4) until the max-heap property holds. Insert the remaining nodes the same way.

STEPS 3–5 (diagrams omitted):

After inserting all the nodes in this way, the final max-heap, read level by level, is:

16, 14, 10, 8, 7, 3, 9, 1, 4, 2

Since the root is at level 0, the deepest node is at level 3, so the depth of this max-heap is 3. Also, the right child of the root is 10.
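The result can be checked with a short bottom-up heapify sketch (Python, illustrative; bottom-up heapify may order the lower levels differently from one-by-one insertion, but the root and its children come out the same):

```python
def build_max_heap(a):
    """Build a max-heap in place, sifting down from the last internal node to the root."""
    n = len(a)

    def sift_down(i):
        while True:
            l, r, largest = 2 * i + 1, 2 * i + 2, i
            if l < n and a[l] > a[largest]:
                largest = l
            if r < n and a[r] > a[largest]:
                largest = r
            if largest == i:
                return
            a[i], a[largest] = a[largest], a[i]
            i = largest

    for i in range(n // 2 - 1, -1, -1):
        sift_down(i)
    return a

A = build_max_heap([4, 1, 3, 2, 16, 9, 10, 14, 8, 7])
depth = len(A).bit_length() - 1  # deepest level of a 10-node heap, root at level 0
```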

The number of swappings needed to sort the numbers 8, 22, 7, 9, 31, 5, 13 in ascending order, using bubble sort is:

  1. 11
  2. 12
  3. 13
  4. 10

Answer (Detailed Solution Below)

Option 4 : 10

Searching, Sorting and Hashing Question 10 Detailed Solution

Bubble sort repeatedly steps through the list to be sorted, compares each pair of adjacent items and swaps them if they are in the wrong order. The pass through the list is repeated until no swaps are needed, which indicates that the list is sorted.

Array elements: 8, 22, 7, 9, 31, 5, 13

1st pass = 8, 7, 9, 22, 5, 13, 31

4 swaps

2nd pass = 7, 8, 9, 5, 13, 22, 31

3 swaps

3rd pass = 7, 8, 5, 9, 13, 22, 31

1 swap

4th pass = 7, 5, 8, 9, 13, 22, 31

1 swap

5th pass = 5, 7, 8, 9, 13, 22, 31

1 swap

Since array is sorted after 5th pass

∴ No further swaps possible

Total number of swaps = 4 + 3 + 1 + 1 + 1 = 10 
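The pass-by-pass count can be verified with a small bubble sort sketch that tallies swaps (Python, illustrative; the swap count equals the number of inversions in the input):

```python
def bubble_sort_swaps(a):
    """Bubble sort a copy of a, counting adjacent swaps; stop when a pass makes no swap."""
    a = list(a)
    total = 0
    swapped = True
    while swapped:
        swapped = False
        for i in range(len(a) - 1):
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
                total += 1
                swapped = True
    return a, total
```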

Algorithm design technique used in quick sort algorithm is

  1. Dynamic programming
  2. Backtracking
  3. Divide and Conquer
  4. Greedy Method
  5. Merging

Answer (Detailed Solution Below)

Option 3 : Divide and Conquer

Searching, Sorting and Hashing Question 11 Detailed Solution

The correct answer is option 3

Key Points

  • Quicksort is an efficient sorting algorithm also known as a partition-exchange sort
  • Quicksort is a comparison sort, meaning that it can sort items of any type for which a "less-than" relation is defined
  • Quicksort is a divide-and-conquer algorithm: a pivot element is chosen, and partitioning around the pivot reduces the given problem into two smaller subproblems
  • In the efficient implementations, it is not a stable sort, meaning that the relative order of equal sort items is not preserved
  • Quicksort can operate in-place on an array, requiring small additional amounts of memory to perform the sorting

Additional Information

Quicksort:

In Quicksort, the worst case takes Θ(n²) time. The worst case occurs when the smallest or largest element is chosen as the pivot, e.g., when the first or the last element of an already sorted array is used.

(partition diagram omitted)

\(\mathop \sum \limits_{{\rm{p}} = 1}^{{\rm{N}} - 1} {\rm{p}} = 1 + 2 + 3 + \ldots + \left( {{\rm{N}} - 1} \right) = {\rm{\;}}\frac{{{\rm{N}}\left( {{\rm{N}} - 1} \right)}}{2}\)

This gives Θ(n²) time complexity.

The recurrence relation for quick sort in this worst case is

T(n) = T(n − 1) + Θ(n)

which solves to a worst-case time complexity of Θ(n²).

Which of the following algorithms use recursion for sorting an array of integers?

  1. Bubble sort and Insertion sort
  2. Bubble sort and Quicksort
  3. Bubble sort and merge sort
  4. Quicksort and merge sort

Answer (Detailed Solution Below)

Option 4 : Quicksort and merge sort

Searching, Sorting and Hashing Question 12 Detailed Solution

  • Quick sort and merge sort are based on the divide-and-conquer paradigm, which works recursively.
  • Recursion is used in both Quick sort and Merge sort.
  • In Quick sort and Merge sort, the larger problem is divided into smaller subproblems; after solving the subproblems, their solutions are combined into the final solution.


Therefore option 4 is the correct answer.

An array of 25 distinct elements is to be sorted using quick sort. Assume that the pivot element is chosen uniformly at random. The probability that the pivot element gets placed in the worst possible location in the first round of partitioning (rounded off to 2 decimal places) is _________.

Answer (Detailed Solution Below) 0.08

Searching, Sorting and Hashing Question 13 Detailed Solution

Since the elements are distinct, assume without loss of generality that the array is

1, 2, 3, 4, 5, 6, 7, …, 25

The probability of choosing each element as pivot is \(\frac{1}{25}\)

The worst case for quick sort arises only if 1 or 25 is chosen as the pivot element.

Assume 1 is chosen as a pivot element

After the first round of partitioning, the pivot remains in the same (first) position; this gives rise to the worst case in quick sort, since the complexity becomes O(n²).

Assume 25 is chosen as a pivot element

After the first round of partitioning, the pivot remains in the same (last) position; this again gives rise to the worst case, with complexity O(n²).

P(X = 1) = \(\frac{1}{25}\) = 0.04

and P(X = 25) = \(\frac{1}{25}\) = 0.04

P(X = 1 or X = 25)

= 0.04 + 0.04

= 0.08

Consider the following array.

23, 32, 45, 69, 72, 73, 89, 97

Which algorithm out of the following options uses the least number of comparisons (among the array elements) to sort above array in ascending order?

  1. Insertion sort
  2. Selection sort
  3. Quicksort using the last element as pivot
  4. Merge sort

Answer (Detailed Solution Below)

Option 1 : Insertion sort

Searching, Sorting and Hashing Question 14 Detailed Solution

Insertion sort:

In Insertion sort, the best case takes Θ(n) time; it occurs when the elements are already sorted in ascending order. In that case, the number of comparisons is n − 1 = 8 − 1 = 7.

This is the least number of comparisons (among the array elements) needed to sort the above array in ascending order.

The number of swaps needed is zero.
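The count can be verified with a small insertion sort sketch that tallies element comparisons (Python, illustrative):

```python
def insertion_sort_comparisons(a):
    """Insertion-sort a copy of a, counting key-vs-element comparisons."""
    a = list(a)
    comparisons = 0
    for i in range(1, len(a)):
        key, j = a[i], i - 1
        while j >= 0:
            comparisons += 1        # compare key with a[j]
            if a[j] <= key:
                break               # already in place: best case stops after one comparison
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a, comparisons
```

On the already-sorted 8-element array above, each of the 7 insertions stops after a single comparison, for n − 1 = 7 in total.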

Additional Information

In Insertion sort, the worst case takes Θ(n²) time; it occurs when the elements are sorted in reverse order. In that case, the number of comparisons is:

\(\mathop \sum \limits_{{\rm{p}} = 1}^{{\rm{N}} - 1} {\rm{p}} = 1 + 2 + 3 + \ldots + \left( {{\rm{N}} - 1} \right) = {\rm{\;}}\frac{{{\rm{N}}\left( {{\rm{N}} - 1} \right)}}{2}\)

This gives Θ(n²) time complexity.

Suppose P, Q, R, S, T are sorted sequences having lengths 20, 24, 30, 35, 50 respectively. They are to be merged into a single sequence by merging together two sequences at a time. The number of comparisons that will be needed in the worst case by the optimal algorithm for doing this is ____.

Answer (Detailed Solution Below) 358

Searching, Sorting and Hashing Question 15 Detailed Solution

Given sequence: 20, 24, 30, 35, 50

Ascending order: 20, 24, 30, 35, 50

Merge(20, 24) → new size (44)

Number of comparisons = 20 + 24 – 1 = 43  

Sequence: 30, 35, 44, 50

Merge(30, 35) → new size (65)

Number of comparisons = 30 + 35 – 1 = 64  

Sequence:  44, 50, 65

Merge(44, 50) → new size (94)

Number of comparisons = 44 + 50 – 1 = 93

Sequence:  65, 94

Merge(65, 94) → new size (150)

Number of comparisons = 65 + 94 – 1 = 158  

Therefore, total number of comparisons, 43 + 64 + 93 + 158 = 358
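The greedy "merge the two smallest sequences first" strategy can be sketched with a min-heap (Python, illustrative; merging lists of sizes a and b needs at most a + b − 1 comparisons):

```python
import heapq

def optimal_merge_comparisons(sizes):
    """Total worst-case comparisons when repeatedly merging the two smallest sequences."""
    heap = list(sizes)
    heapq.heapify(heap)
    total = 0
    while len(heap) > 1:
        a, b = heapq.heappop(heap), heapq.heappop(heap)
        total += a + b - 1          # worst-case cost of one two-way merge
        heapq.heappush(heap, a + b)  # merged sequence rejoins the pool
    return total
```

This is the same pattern as Huffman's algorithm, applied to sequence lengths instead of symbol frequencies.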