Data Structures: Stacks, Queues, and Deques
Stacks, queues, and deques are foundational abstract data types (ADTs) that show up everywhere in real systems: from parsing expressions and managing function calls to scheduling jobs and traversing graphs. They are simple to describe, but their discipline about how elements are added and removed makes them powerful building blocks for algorithms.
This article covers what each structure guarantees, how they are typically implemented, and why those guarantees matter in practical applications such as parsing, BFS/DFS, and scheduling.
Abstract data types vs. implementations
An abstract data type defines behavior (operations and their meaning) without prescribing how it is stored in memory. For example, a stack is defined by the rule that the most recently pushed item is the first popped item. Whether you store elements in an array, a linked list, or a resizable buffer is an implementation detail.
When choosing an implementation, the main concerns are:
- Time complexity of core operations (usually constant time, O(1), when implemented well)
- Space overhead (arrays are compact; linked structures add pointer overhead)
- Practical constraints (maximum size known or unknown, cache friendliness, concurrency needs)
Stacks (LIFO)
A stack follows LIFO: last in, first out. Think of a stack of plates: you add a plate to the top, and you remove from the top as well.
Core operations
Typical stack operations include:
- push(x): add an element to the top
- pop(): remove and return the top element
- peek() or top(): return the top element without removing it
- isEmpty(): check whether the stack has no elements
With good implementations, push and pop run in O(1) time.
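As a minimal sketch, a Python list can serve as a stack directly, since append and pop at the end are amortized O(1):

```python
# A stack sketch using a Python list; the end of the list is the top.
stack = []
stack.append(3)     # push(3)
stack.append(7)     # push(7)
top = stack[-1]     # peek() -> 7 (stack unchanged)
item = stack.pop()  # pop()  -> 7
empty = not stack   # isEmpty() -> False (3 remains)
```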
Common implementations
Array-based stack
An array plus an index (often called top) is the simplest approach.
- Pros: very cache-friendly, minimal per-element overhead
- Cons: fixed capacity unless you use a resizable array; resizing costs O(n) occasionally but amortizes to O(1) per push
Linked-list stack
Each node points to the next.
- Pros: grows as needed without resizing
- Cons: extra memory per element, worse cache locality, allocator overhead
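A linked-list stack can be sketched as follows; the class and method names are illustrative, not from any particular library. Pushing creates a node that points at the old top, so both operations touch only the head of the list:

```python
class Node:
    def __init__(self, value, next_node=None):
        self.value = value
        self.next = next_node

class LinkedStack:
    """Linked-list stack sketch: push and pop at the head are O(1)."""
    def __init__(self):
        self._top = None

    def push(self, x):
        self._top = Node(x, self._top)  # new node points to the old top

    def pop(self):
        if self._top is None:
            raise IndexError("pop from empty stack")
        node = self._top
        self._top = node.next
        return node.value
```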
In practice, many standard libraries implement stacks using dynamic arrays or by adapting an existing container.
Applications of stacks
Expression parsing and evaluation
Stacks are central to parsing tasks because nested structures map naturally to LIFO behavior.
- Matching parentheses: push opening symbols; pop when a closing symbol is encountered. Any mismatch indicates invalid structure.
- Converting infix to postfix (Reverse Polish Notation): operator stacks manage precedence and associativity.
- Evaluating postfix: push operands; when an operator appears, pop required operands, compute, push the result.
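The parenthesis-matching technique can be sketched like this; the function name and bracket set are illustrative choices:

```python
def is_balanced(expr: str) -> bool:
    """Check that (), [], {} are properly nested using a stack."""
    pairs = {')': '(', ']': '[', '}': '{'}
    stack = []
    for ch in expr:
        if ch in '([{':
            stack.append(ch)           # push opening symbols
        elif ch in pairs:
            # A closing symbol must match the most recent opener.
            if not stack or stack.pop() != pairs[ch]:
                return False
    return not stack                   # leftover openers mean imbalance
```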
These techniques underpin compilers, interpreters, and calculators.
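As a concrete instance, postfix evaluation can be sketched as below (a minimal version supporting four binary operators; the operator table is an illustrative choice):

```python
def eval_postfix(tokens):
    """Evaluate a postfix (RPN) expression given as a list of tokens."""
    ops = {'+': lambda a, b: a + b, '-': lambda a, b: a - b,
           '*': lambda a, b: a * b, '/': lambda a, b: a / b}
    stack = []
    for tok in tokens:
        if tok in ops:
            b = stack.pop()            # right operand is on top
            a = stack.pop()
            stack.append(ops[tok](a, b))
        else:
            stack.append(float(tok))   # operand: push it
    return stack.pop()
```

For example, the tokens ["3", "4", "+", "2", "*"] represent (3 + 4) * 2 and evaluate to 14.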
Function calls and recursion
Most runtimes implement a call stack to track active function invocations. Each call pushes a frame containing local variables and return addresses; returning pops the frame. This is why deep recursion can cause stack overflow: too many frames are pushed without being popped.
Depth-first search (DFS)
DFS often uses a stack explicitly (iterative DFS) or implicitly through recursion. The stack captures the path being explored and enables backtracking. Iterative DFS is especially useful when recursion depth might exceed limits.
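An iterative DFS can be sketched as follows, assuming the graph is given as an adjacency dict (an illustrative representation):

```python
def dfs(graph, start):
    """Iterative DFS; graph maps each node to a list of neighbors."""
    visited, stack, order = set(), [start], []
    while stack:
        node = stack.pop()
        if node in visited:
            continue
        visited.add(node)
        order.append(node)
        # Push neighbors; the last one pushed is explored first (LIFO).
        for nbr in graph.get(node, []):
            if nbr not in visited:
                stack.append(nbr)
    return order
```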
Queues (FIFO)
A queue follows FIFO: first in, first out. This mirrors a real-world line. You enqueue at the back and dequeue from the front.
Core operations
- enqueue(x): add an element to the rear
- dequeue(): remove and return the element at the front
- front(): inspect the element at the front without removing it
- isEmpty(): check whether the queue has no elements
As with stacks, these operations are typically O(1).
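In Python, collections.deque works well as a queue; this sketch maps the operations above onto its methods (popleft() is O(1), unlike list.pop(0), which shifts every remaining element):

```python
from collections import deque

q = deque()
q.append('job1')      # enqueue('job1')
q.append('job2')      # enqueue('job2')
first = q[0]          # front()   -> 'job1'
served = q.popleft()  # dequeue() -> 'job1'
```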
Common implementations
Circular buffer (array-based queue)
A robust queue uses a fixed-size array with two indices: head and tail, wrapping around when reaching the end. This avoids shifting elements on dequeue.
- Pros: O(1) operations, excellent cache locality
- Cons: capacity management; if the buffer fills, you need resizing or bounded behavior
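A bounded circular-buffer queue can be sketched as below. This is an illustrative class, not a standard API; it tracks the head index and the current size, so "full" and "empty" are unambiguous, and both indices wrap with modular arithmetic instead of shifting elements:

```python
class CircularQueue:
    """Bounded FIFO queue on a fixed-size array (circular buffer)."""
    def __init__(self, capacity):
        self._buf = [None] * capacity
        self._head = 0   # index of the next element to dequeue
        self._size = 0   # number of stored elements

    def enqueue(self, x):
        if self._size == len(self._buf):
            raise OverflowError("queue is full")
        tail = (self._head + self._size) % len(self._buf)  # wrap around
        self._buf[tail] = x
        self._size += 1

    def dequeue(self):
        if self._size == 0:
            raise IndexError("queue is empty")
        x = self._buf[self._head]
        self._head = (self._head + 1) % len(self._buf)     # wrap around
        self._size -= 1
        return x
```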
Linked-list queue
Maintain pointers to both head and tail nodes.
- Pros: grows dynamically, straightforward logic
- Cons: pointer overhead and allocation costs
Many production queues are implemented with circular buffers due to predictable performance.
Applications of queues
Breadth-first search (BFS)
BFS uses a queue to explore a graph level by level. When you discover a node, you enqueue it. You then repeatedly dequeue the next node to visit and enqueue its unvisited neighbors.
This FIFO order is what guarantees BFS finds the shortest path in an unweighted graph: nodes are processed in increasing distance from the start.
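A BFS that records shortest hop counts can be sketched as follows, again assuming an adjacency-dict graph representation:

```python
from collections import deque

def bfs_distances(graph, start):
    """BFS from start; returns shortest hop counts in an unweighted graph."""
    dist = {start: 0}
    q = deque([start])
    while q:
        node = q.popleft()           # FIFO: closest nodes come out first
        for nbr in graph.get(node, []):
            if nbr not in dist:      # discovered for the first time
                dist[nbr] = dist[node] + 1
                q.append(nbr)
    return dist
```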
Scheduling and buffering
Queues are the default model for work that should be processed in arrival order:
- Printer spooling: jobs queued as submitted
- Network packet processing: buffering bursts and draining in order (subject to QoS rules)
- Task scheduling: simple round-robin systems often keep runnable tasks in a queue
A key practical concern here is capacity. Bounded queues provide backpressure when full, which is essential in systems where producers can outrun consumers.
Deques (double-ended queues)
A deque allows insertion and removal at both ends. That flexibility makes it a generalization of both stacks and queues: you can use one end only to behave like a stack, or opposite ends to behave like a queue.
Core operations
- pushFront(x), pushBack(x): add an element at the front or back
- popFront(), popBack(): remove and return the element at the front or back
- front(), back(): inspect either end without removing
- isEmpty(): check whether the deque has no elements
With the right representation, each operation is O(1).
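Python's collections.deque provides exactly these operations; this sketch maps the names above onto its methods:

```python
from collections import deque

d = deque()
d.append(1)          # pushBack(1)
d.appendleft(0)      # pushFront(0)
d.append(2)          # pushBack(2) -> deque([0, 1, 2])
front = d.popleft()  # popFront()  -> 0
back = d.pop()       # popBack()   -> 2
```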
Common implementations
Doubly linked list
Each node points to both previous and next.
- Pros: true O(1) insertion and removal at both ends
- Cons: higher memory overhead; worse locality
Circular buffer with head and tail
A deque can also be implemented with a resizable circular array.
- Pros: fast in practice, good locality
- Cons: resizing and more careful index management
Standard libraries often provide deques implemented as segmented arrays (blocks) to balance fast random access with efficient growth at both ends, but the core idea remains: constant-time end operations.
Applications of deques
Sliding window algorithms
Many “sliding window” problems maintain candidates for a maximum or minimum over a moving range. A deque can store indices in a way that supports:
- removing elements that fall out of the window from the front
- maintaining a monotonic order by popping from the back
This yields linear-time solutions that would otherwise be slower.
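The classic sliding-window maximum illustrates both moves; this sketch keeps indices in decreasing order of value, so the front index always holds the current window's maximum:

```python
from collections import deque

def sliding_window_max(nums, k):
    """Maximum of each length-k window in O(n) using a monotonic deque."""
    dq, result = deque(), []
    for i, x in enumerate(nums):
        if dq and dq[0] <= i - k:        # front index fell out of the window
            dq.popleft()
        while dq and nums[dq[-1]] <= x:  # pop smaller values from the back
            dq.pop()
        dq.append(i)
        if i >= k - 1:                   # window is fully formed
            result.append(nums[dq[0]])
    return result
```

Each index is appended and popped at most once, which is why the total work is linear despite the inner while loop.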
Work-stealing and flexible scheduling
Some schedulers use deques so a worker can push and pop its own tasks from one end while other workers steal from the opposite end. The ability to operate at both ends supports different policies without changing the data structure.
Palindrome checks and bidirectional processing
When you need to compare items from both ends (for example, checking whether a sequence is a palindrome), a deque supports efficient pops from front and back.
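A palindrome check using a deque can be sketched as follows; each step compares and removes one item from each end in O(1):

```python
from collections import deque

def is_palindrome(seq):
    """Check whether a sequence reads the same from both ends."""
    d = deque(seq)
    while len(d) > 1:
        if d.popleft() != d.pop():  # compare front against back
            return False
    return True
```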
Choosing the right structure
A good rule is to select the structure whose constraints match the algorithm’s logic:
- Use a stack when you need to reverse order, backtrack, or model nesting (parsing, DFS, undo operations).
- Use a queue when fairness and arrival order matter (BFS, buffering, producer-consumer pipelines).
- Use a deque when you need both behaviors or want to optimize end operations for more advanced patterns (sliding windows, hybrid schedulers).
Also consider implementation realities. An array-backed structure tends to be faster in practice due to memory locality, even when a linked structure has the same big-O complexity. When capacity is uncertain, resizable buffers or linked implementations avoid hard limits but may trade predictability.
Summary
Stacks, queues, and deques are simple ADTs with clear rules: LIFO, FIFO, and double-ended access. Those rules are not just academic. They directly shape correctness and performance in real tasks such as parsing expressions, exploring graphs with DFS and BFS, and scheduling work in systems. Understanding their operations and implementations helps you pick the right tool and reason about algorithm behavior with confidence.