
A stack is a fundamental LIFO (Last In, First Out) data structure: this means that the last element added to the stack is the first one to be removed. You can think of it like a stack of plates: you always add new plates on top, and when you need a plate, you take the one on top first.
Stacks are simple yet powerful, and they form the basis for many algorithms and data structures. Their strength comes from the fact that operations are always performed at one end, called the top of the stack, which allows for constant-time insertion and removal. For use cases that only need access to the most recent element, this is far more efficient than searching or shifting elements in a general-purpose array or list.
There are three main operations in a stack:

- **Push**: add an element to the top of the stack.
- **Pop**: remove and return the element at the top of the stack.
- **Peek** (or **Top**): return the element at the top without removing it.
These operations make stacks ideal for temporary storage of data where you only care about the last item added, not the entire collection.
Stacks are often implicit in algorithms, even if you don’t see a formal stack object:

- The **call stack** tracks function invocations and their local state during recursion.
- **Depth-first search** explores the most recently discovered node first, whether via recursion or an explicit stack.
- **Backtracking** pushes choices as they are made and pops them when a decision is undone.
Stacks can be implemented using arrays or linked lists, but the key principle remains: all operations are performed at the top. This simplicity is why stacks are often a first step in algorithmic thinking, allowing you to manage data in a controlled, sequential manner.
```typescript
class Stack<T> {
  private items: T[] = [];

  push(element: T): void {
    this.items.push(element);
  }

  pop(): T | undefined {
    return this.items.pop();
  }

  peek(): T | undefined {
    return this.items[this.items.length - 1];
  }

  isEmpty(): boolean {
    return this.items.length === 0;
  }

  size(): number {
    return this.items.length;
  }
}
```
In this example the stack is backed by a dynamic array, so it has no fixed capacity and grows as elements are pushed.
Stacks are not just a basic data structure. They encode a very specific way of thinking about problems.
Whenever the logic of a problem depends on what happened most recently, or when elements must be processed in a strictly reversed order,
stacks naturally emerge as the right abstraction.
Over time, a small number of recurring stack patterns appear again and again across very different problems. Recognizing these patterns is often the key step toward a clean and efficient solution.
One of the most common uses of stacks is to verify that elements are properly matched and balanced.
These problems usually involve some notion of opening and closing elements, and correctness depends on the order in which they appear.
The core idea is simple: when you encounter an opening element, you push it onto the stack.
When you encounter a closing element, you check whether it correctly matches the element on top of the stack.
If it does, you pop the stack; if it doesn’t, the structure is invalid.
At the end of the process, the stack must be empty. Any leftover elements indicate that something was opened but never closed.
This pattern appears in problems such as validating parentheses, removing adjacent duplicates in strings, or computing the longest valid parentheses substring. In all of these cases, the crucial observation is that the last opened element must be the first one to close, which is exactly the LIFO behavior provided by a stack.
A minimal sketch of this matching pattern applied to parentheses:

```typescript
function isValid(s: string): boolean {
  const stack: string[] = [];
  for (const ch of s) {
    if (ch === "(") {
      stack.push(ch);
    } else {
      // A closing parenthesis must match a pending opening one
      if (stack.length === 0) {
        return false;
      }
      stack.pop();
    }
  }
  // Leftover elements mean something was opened but never closed
  return stack.length === 0;
}
```

Stacks are also fundamental when dealing with expression parsing and evaluation, especially when operators and operands are interleaved.
In these problems, you typically process tokens from left to right. Operands are pushed onto the stack, while operators trigger computations using the most recent operands. Once a computation is performed, the result is pushed back onto the stack and may itself be used by later operators.
This approach is particularly clear in postfix (Reverse Polish) notation, where operator precedence is encoded in the order of tokens rather than explicit parentheses. However, the same principle also applies to calculator-style problems where you must respect precedence rules while scanning the expression.
Stacks work well here because intermediate results must be remembered temporarily, and operators always depend on the most recently seen operands.
```typescript
const isNumber = (t: string): boolean => !Number.isNaN(Number(t));

function applyOperator(a: number, b: number, op: string): number {
  if (op === "+") return a + b;
  if (op === "-") return a - b;
  if (op === "*") return a * b;
  return Math.trunc(a / b); // "/" with truncation toward zero
}

function evalRPN(tokens: string[]): number {
  const stack: number[] = [];
  for (const token of tokens) {
    if (isNumber(token)) {
      stack.push(Number(token));
    } else {
      const b = stack.pop()!; // right operand is on top
      const a = stack.pop()!;
      stack.push(applyOperator(a, b, token));
    }
  }
  return stack[0];
}
```
Another powerful application of stacks is modeling backtracking and undo behavior.
Any system where actions must be reversible, such as undoing edits, navigating backward through a history, or reverting to a previous state, can be naturally represented using a stack. Each action is pushed onto the stack as it happens. When an undo is required, the most recent action is popped and reverted.
```typescript
// Each action is recorded as it is performed
doAction(action);
stack.push(action);

// ... later, when an undo is requested
const last = stack.pop();
undo(last);
```
This idea extends beyond user interfaces. Many recursive algorithms, especially depth-first search, rely on an implicit call stack to keep track of execution state. Making this stack explicit often leads to iterative solutions that are easier to control and reason about.
The reason stacks fit so well here is intuitive: when undoing, you always revert the last action first, exactly matching the stack’s behavior.
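To make the idea of replacing the implicit call stack concrete, here is a hedged sketch of iterative depth-first search. The `dfs` and `graph` names are illustrative, and the graph is assumed to be an adjacency-list `Map`; the point is that popping always resumes from the most recently discovered node, exactly as recursion would.

```typescript
// Sketch: depth-first traversal with an explicit stack instead of recursion.
// `graph` is an assumed adjacency-list representation (node -> neighbors).
function dfs(graph: Map<number, number[]>, start: number): number[] {
  const visited = new Set<number>();
  const order: number[] = [];
  const stack: number[] = [start]; // replaces the implicit call stack

  while (stack.length > 0) {
    const node = stack.pop()!; // most recently discovered node first (LIFO)
    if (visited.has(node)) continue;
    visited.add(node);
    order.push(node);
    // Push neighbors; they will be explored before older stack entries
    for (const next of graph.get(node) ?? []) {
      if (!visited.has(next)) stack.push(next);
    }
  }
  return order;
}
```

Because the stack is explicit, the traversal can be paused, bounded, or inspected at any point, which is harder with plain recursion.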
A monotonic stack is a conceptual variation of the classic stack in which elements are maintained according to an ordering invariant. The stack can be either monotonically increasing or monotonically decreasing, meaning that elements are always kept in sorted order (from bottom to top) according to that rule.
What makes a monotonic stack special is not the data structure itself (it is still a normal stack) but the logic that governs when elements are pushed or popped. Unlike a regular stack, a push operation may trigger multiple pop operations in order to restore the invariant.
The typical pattern is to iterate over a sequence (array, list, or stream) one element at a time and compare each value with the element at the top of the stack.
For example, in a monotonically increasing stack, every new element must be greater than or equal to the top of the stack. If it is smaller, larger elements are popped because they are no longer useful for the problem being solved.
At first glance, this approach might seem inefficient, since a single iteration step can involve multiple pop operations. However, the key insight is that each element is pushed onto the stack exactly once and popped at most once.
As a result, even though individual steps may perform several pops, the total number of operations across the entire iteration is linear. This is why monotonic stack solutions typically run in O(n) time, making them a practical and elegant application of amortized analysis.
Monotonic stacks are especially powerful in problems where, for each element, we need information about the nearest greater or smaller element to the left or right, or about how long a certain condition remains valid.
Conceptually, the stack acts as a filtered memory of the past: it keeps only the elements that can still influence future results and discards all others as soon as they become irrelevant.
Typical problems that benefit from a monotonic stack include:

- **Next Greater Element**: for each element, find the first larger element to its right.
- **Daily Temperatures**: for each day, find how many days until a warmer temperature.
- **Online Stock Span**: count consecutive previous days with a price no higher than today’s.
- **Largest Rectangle in Histogram**: determine how far each bar can extend before meeting a shorter one.
In all these scenarios, a monotonic stack avoids repeated comparisons and quadratic solutions, reducing seemingly complex problems to simple linear scans. Below you can find an example implementation of the Next Greater Element problem using a monotonic stack.
```typescript
function nextGreaterElements(nums: number[]): number[] {
  const n = nums.length;
  const result: number[] = Array(n).fill(-1); // default -1 if no greater element
  const stack: number[] = []; // store indices

  for (let i = 0; i < n; i++) {
    // Pop elements that are smaller than current number
    while (stack.length > 0 && nums[i] > nums[stack[stack.length - 1]]) {
      const idx = stack.pop()!;
      result[idx] = nums[i];
    }
    // Push current index
    stack.push(i);
  }
  return result;
}

// Example usage
const nums = [2, 1, 2, 4, 3];
console.log(nextGreaterElements(nums)); // Output: [4, 2, 4, -1, -1]
```
When working with stacks, especially beyond the most basic problems, the real challenge is not mastering push and pop, but understanding what role the stack plays in the algorithm.
In many problems, the stack is responsible for maintaining order, while additional data structures are used to keep track of state. For example, frequency maps or auxiliary stacks are often introduced when the problem requires more information than ordering alone can provide. A classic case is removing duplicate characters, where the stack enforces lexicographical order while a frequency map tracks remaining occurrences. Similarly, in problems like Min Stack, an auxiliary stack is used to preserve historical minimum values.
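The Min Stack case mentioned above can be sketched as follows. This is a minimal illustration, not a full implementation of the classic problem: the auxiliary `mins` stack mirrors the main stack, so the historical minimum is always available in constant time.

```typescript
// Sketch: Min Stack with an auxiliary stack preserving historical minimums.
class MinStack {
  private items: number[] = [];
  private mins: number[] = []; // mins top = minimum of everything currently stored

  push(x: number): void {
    this.items.push(x);
    // Record the smaller of x and the current minimum, so mins mirrors items
    const m = this.mins.length === 0 ? x : Math.min(x, this.mins[this.mins.length - 1]);
    this.mins.push(m);
  }

  pop(): number | undefined {
    this.mins.pop(); // keep both stacks in sync
    return this.items.pop();
  }

  top(): number | undefined {
    return this.items[this.items.length - 1];
  }

  getMin(): number | undefined {
    return this.mins[this.mins.length - 1]; // O(1), even after pops
  }
}
```

The key design choice is that popping the main stack automatically restores the previous minimum, because the auxiliary stack records one minimum per pushed element.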
A central concept in stack-based algorithms is the idea of an invariant.
Each stack maintains a specific property that must always hold true. In monotonic stacks, this invariant is ordering (either increasing or decreasing).
In parentheses-matching problems, the invariant is that the stack contains only valid unmatched opening symbols.
Every push or pop operation exists solely to preserve this invariant, and understanding it is often the key to designing the solution.
Another recurring technique is maintaining running values while traversing the input. Maximums, minimums, distances, or areas are frequently computed when elements are popped from the stack, not when they are pushed. This is particularly evident in problems like Largest Rectangle in Histogram or Stock Span, where the moment an invariant is violated reveals meaningful information about the elements being removed.
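As a sketch of computing values at pop time, consider the stock-span pattern: each day that gets popped is "absorbed" into the span of the current day, and the span is read off from the distance to the nearest remaining greater price. Function and variable names here are illustrative.

```typescript
// Sketch: stock span via a monotonically decreasing stack of indices.
// spans[i] = number of consecutive days ending at i with price <= prices[i].
function stockSpans(prices: number[]): number[] {
  const spans: number[] = [];
  const stack: number[] = []; // indices whose prices are strictly decreasing

  for (let i = 0; i < prices.length; i++) {
    // Pop days with price <= today's: they can never bound a future span
    while (stack.length > 0 && prices[stack[stack.length - 1]] <= prices[i]) {
      stack.pop();
    }
    // Distance to the previous strictly greater price (or to the start)
    spans.push(stack.length === 0 ? i + 1 : i - stack[stack.length - 1]);
    stack.push(i);
  }
  return spans;
}
```

Note that the useful quantity, the span, is derived exactly when the invariant (strictly decreasing prices) would be violated, which is the moment the popped days stop mattering.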
Finally, it is useful to think of the stack as dynamically expanding and shrinking. The stack grows while incoming elements respect its invariant and contracts when that invariant is broken. This behavior mirrors other patterns, such as dynamic sliding windows, but instead of moving across indices, the stack operates over previously seen states. Viewing stack operations through this lens helps unify many seemingly different problems under a single conceptual framework.
Stacks provide constant-time operations by restricting access to one end of the data structure. This simplicity allows many problems to be solved efficiently in a single pass.
| Operation / Pattern | Time Complexity | Space Complexity |
|---|---|---|
| Push | O(1) | O(1) |
| Pop | O(1) | O(1) |
| Peek / Top | O(1) | O(1) |
| Stack traversal | O(n) | O(1) |
| Matching / balancing (parentheses, duplicates) | O(n) | O(n) |
| Expression evaluation (RPN, calculators) | O(n) | O(n) |
| Monotonic stack (next greater / smaller) | O(n) amortized | O(n) |
| Span problems (stock span, temperatures) | O(n) amortized | O(n) |
| Range / area problems (histogram, max rectangle) | O(n) amortized | O(n) |
| Problem | Technique | Solution |
|---|---|---|
| Valid Parentheses | Stack | Solution |
| Remove All Adjacent Duplicates in String | Stack | Solution |
| Min Stack | Stack | Solution |
| Remove Duplicate Letters | Stack | Solution |
| Removing Stars From a String | Stack | Solution |
| Evaluate Reverse Polish Notation | Stack | Solution |
| Basic Calculator II | Stack | Solution |
| Longest Valid Parentheses | Stack | Solution |
| Next Greater Element I | Monotonic Stack | Solution |
| Daily Temperatures | Monotonic Stack | Solution |
| Online Stock Span | Monotonic Stack | Solution |
| 132 Pattern | Monotonic Stack | Solution |
| Number of Visible People in a Queue | Monotonic Stack | Solution |
| Largest Rectangle in Histogram | Monotonic Stack | Solution |