Feb 25

Algo: Monotone Stack and Queue Techniques

Mindli Team

AI-Generated Content


Maintaining extremal elements efficiently during sequential array processing is a cornerstone of solving many complex algorithmic problems. Mastering monotone stacks and monotone queues allows you to compute crucial information like nearest smaller elements or sliding window maxima in linear time, transforming naive solutions into elegant and performant algorithms. These structures are not just abstract concepts but powerful tools for tackling problems ranging from histogram area calculation to dynamic programming optimization, making them essential for technical interviews and competitive programming.

The Principle of Monotonicity

At their core, both structures enforce a monotonic property—either strictly increasing or decreasing—on a sequence of elements as you process an input array. The key insight is that while processing elements sequentially, you can often discard data that will never be the answer to future queries. This selective retention, managed via pops from the data structure, leads to amortized constant-time operations. Amortized analysis considers the total cost over a sequence of operations; even if a single pop might remove many elements, each element is added once and removed at most once, giving an overall O(n) time complexity. This principle of maintaining a "relevant" subset of processed data is what powers the efficiency of these techniques.

Monotone Stack: Finding Boundaries and Spans

A monotone stack is typically used to find the "nearest" element on either side that satisfies a relative property (e.g., smaller or greater) for each element in an array. It processes elements in a single pass, maintaining a stack where elements are kept in monotonic order (e.g., increasing). When a new element violates this order, elements are popped from the stack, and during this pop operation, you resolve the query for the popped element using the new element as the boundary.

A classic application is finding the Nearest Smaller Element (NSE) to the left. For each index i in an array arr, you want the index of the first element to the left that is smaller than arr[i]. You maintain a stack of indices whose nearest-smaller boundary on the right has not yet been found; these indices are kept in increasing order of their values. For a new element, you pop all indices from the stack where the corresponding element is greater than or equal to the new element. For each popped index, its NSE to the right is the current element. The NSE to the left for the current element is then the index at the top of the stack, and finally you push the current index onto the stack. This process runs in O(n) time.

Another seminal problem is the Largest Rectangle in Histogram. Given an array of bar heights, find the area of the largest rectangle that can be formed within the histogram. The solution involves using a monotone increasing stack to find the left and right boundaries (the first smaller bar) for each bar acting as the rectangle's height. For each bar index i, while the stack is not empty and the current bar's height is less than the height of the bar at the index stored at the stack's top, you pop the top. The right boundary for the popped bar is i. Its left boundary is the new index at the top of the stack (or -1 if the stack is empty). The width is right - left - 1, and the area is height * width. This elegantly considers every candidate maximal rectangle in O(n).
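A minimal Python sketch of this approach, including the sentinel trick (a trailing bar of height 0) that forces the final cleanup pops:

```python
def largest_rectangle(heights):
    """Largest rectangle in a histogram via an increasing monotone stack.
    A sentinel bar of height 0 is appended so every bar is eventually popped."""
    stack = []                    # indices of bars with increasing heights
    best = 0
    for i, h in enumerate(heights + [0]):     # sentinel forces final pops
        while stack and heights[stack[-1]] > h:
            height = heights[stack.pop()]     # popped bar is the rectangle's height
            left = stack[-1] if stack else -1 # first smaller bar on the left
            width = i - left - 1
            best = max(best, height * width)
        stack.append(i)
    return best
```

For example, `largest_rectangle([2, 1, 5, 6, 2, 3])` returns 10: the bars of heights 5 and 6 form a 5 × 2 rectangle.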

The Stock Span Problem is a direct variant. It calculates, for each day's stock price, how many consecutive previous days had prices less than or equal to the current price. Here, you maintain a monotone decreasing stack of indices (prices). For the current day, you pop from the stack while the price at the stacked index is less than or equal to the current price. The span for the current day is i - stack.top() if the stack is not empty after popping, else i + 1. You then push the current index. This efficiently computes all spans in one pass.
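A compact sketch of the stock span computation in Python (function name is illustrative):

```python
def stock_spans(prices):
    """Span of each day's price: the number of consecutive preceding days
    (including today) with price <= today's, via a decreasing monotone stack."""
    spans = []
    stack = []                    # indices of days with strictly decreasing prices
    for i, price in enumerate(prices):
        # Days with price <= today's can never bound a future span.
        while stack and prices[stack[-1]] <= price:
            stack.pop()
        spans.append(i - stack[-1] if stack else i + 1)
        stack.append(i)
    return spans
```

Each day's index is pushed once and popped at most once, so all spans are computed in O(n) total.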

Monotone Queue: Sliding Window Extrema

A monotone queue extends the concept to efficiently track minimum or maximum values within a sliding window of fixed size k as it moves across an array. It is typically implemented using a double-ended queue (deque) that stores indices, maintaining the elements' values in monotonic order within the current window.

To find the maximum in each sliding window, you maintain a deque where potential maximum indices are stored in decreasing order of their corresponding array values. As you process each element at index i:

  1. Remove indices from the front of the deque if they are outside the current window (deque.front() <= i - k).
  2. Remove indices from the back of the deque while the element at that index is less than or equal to the new element arr[i]. This ensures the deque remains monotonically decreasing.
  3. Add the current index i to the back of the deque.
  4. Once i >= k-1, arr[deque.front()] is the maximum for the window ending at i.

The key is that the deque's front always holds the index of the current window's maximum. The monotonic property ensures that any element smaller than a newer element can never be a future maximum once the newer element enters the window, justifying its removal. This yields an O(n) solution versus the naive O(nk) approach. The process for finding the minimum is analogous, maintaining an increasing monotone queue.
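The four steps above translate directly into Python using collections.deque:

```python
from collections import deque

def sliding_window_max(arr, k):
    """Maximum of every window of size k, using a deque of indices whose
    corresponding values are kept in decreasing order."""
    dq = deque()
    result = []
    for i, x in enumerate(arr):
        while dq and dq[0] <= i - k:      # 1. evict indices outside the window
            dq.popleft()
        while dq and arr[dq[-1]] <= x:    # 2. drop dominated candidates
            dq.pop()
        dq.append(i)                      # 3. admit the new index
        if i >= k - 1:                    # 4. front holds the window maximum
            result.append(arr[dq[0]])
    return result
```

For the window minimum, flip the comparison in step 2 so the deque stays increasing.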

Application in Dynamic Programming Optimization

Monotone stacks and queues are not limited to direct queries; they are powerful tools for Dynamic Programming (DP) optimization, particularly for recurrences of the form dp[i] = min over j in some sliding range of (dp[j] + cost(j, i)). When the cost(j, i) function and the search space for j satisfy certain monotonicity properties (such as the quadrangle inequality), the optimal j (the decision point) changes predictably. A monotone queue can be used to maintain a set of candidate j values in the order they become optimal for future i. As i increases, you remove candidates from the front that have fallen out of the valid range and from the back that are dominated by newer, better candidates. This transforms an O(nk) or O(n^2) DP into an O(n) one. Recognizing when a DP cost function allows for this "monotonic decision point" optimization is an advanced skill where these data structures shine.
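As a concrete sketch, consider a hypothetical recurrence dp[i] = scores[i] + max(dp[i-k .. i-1]), where cost(j, i) depends only on i. A naive scan over j is O(nk); a monotone queue of candidate decision points makes each transition amortized O(1):

```python
from collections import deque

def max_score_path(scores, k):
    """Illustrative DP: dp[i] = scores[i] + max(dp[i-k .. i-1]),
    optimized from O(nk) to O(n) with a monotone queue of decision points."""
    n = len(scores)
    dp = [0] * n
    dp[0] = scores[0]
    dq = deque([0])                        # indices j with decreasing dp[j]
    for i in range(1, n):
        while dq and dq[0] < i - k:        # drop decision points out of range
            dq.popleft()
        dp[i] = scores[i] + dp[dq[0]]      # front holds the optimal j
        while dq and dp[dq[-1]] <= dp[i]:  # drop dominated decision points
            dq.pop()
        dq.append(i)
    return dp[n - 1]
```

The same skeleton applies whenever cost(j, i) separates so that candidates can be compared independently of i; when cost genuinely couples j and i, stronger conditions like the quadrangle inequality are needed.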

Common Pitfalls

  1. Incorrect Monotonic Property Direction: Confusing whether to maintain an increasing or decreasing stack/queue is a frequent error. For a "next greater element" problem, you typically need a decreasing stack (so the next greater element can break the sequence). For a sliding window minimum, you need an increasing queue. Always reason about what condition causes you to pop elements. A good rule is: you pop when the new element violates the order needed to answer the future query correctly.
  2. Storing Values vs. Indices: While you can sometimes store values, storing indices is almost universally more powerful. Indices provide direct access to the element's value and convey positional information critical for calculating widths, distances, and checking if an element is still within a sliding window. Forgetting to store indices will cripple your ability to solve problems like the histogram or sliding window.
  3. Forgetting to Pop After the Main Loop: In problems like the Largest Rectangle in Histogram, after processing all elements, a non-empty stack remains. These indices represent bars for which no smaller right boundary was ever found (meaning their rectangle can extend to the end of the histogram). Failing to process these remaining bars in the stack will give an incorrect answer. The standard pattern is to append a sentinel value (e.g., 0 for the histogram) to the input to force a final cleanup pop of all remaining elements.
  4. Misunderstanding Amortized Complexity: It's easy to look at a nested while loop inside a for loop and incorrectly conclude O(n^2) complexity. Remember the amortized argument: each element is pushed once and popped at most once. The total number of operations across the entire algorithm is proportional to n, hence O(n). Internalizing this justification is key to confidently using these techniques.

Summary

  • Monotone stacks maintain elements in sorted order (increasing or decreasing) to solve "next greater/smaller" and boundary-finding problems in O(n) time, with critical applications in the Nearest Smaller Element, Largest Rectangle in Histogram, and Stock Span problems.
  • Monotone queues, implemented with a deque, efficiently track the minimum or maximum within a sliding window by maintaining a monotonic sequence of candidate indices, removing obsolete or dominated elements.
  • The efficiency of both structures stems from amortized O(1) operations; each element is added once and removed at most once, leading to linear overall time complexity.
  • These techniques are powerful tools for optimizing certain dynamic programming recurrences where the cost function exhibits monotonic properties, allowing you to maintain optimal decision points in a queue.
  • Avoid common pitfalls by carefully choosing the monotonic direction, storing indices instead of just values, ensuring complete processing of the data structure, and trusting the amortized analysis.
