Feb 9

Algorithms: Greedy Algorithms

Mindli AI

Greedy algorithms solve problems by making a sequence of choices that look best at the moment. Each step commits to a locally optimal decision, with the hope that these decisions add up to a globally optimal solution. When this approach works, it produces efficient and elegant algorithms. When it does not, it can fail dramatically because early choices may block better outcomes later.

What separates a correct greedy algorithm from a tempting but wrong heuristic is structure. Greedy algorithms are correct only for problem classes with specific properties, most notably the greedy choice property and optimal substructure. Understanding these properties, and seeing them in canonical examples like activity selection, Huffman coding, and minimum spanning trees (MSTs), is the key to using greedy methods confidently.

What Makes an Algorithm “Greedy”?

A greedy algorithm typically follows this pattern:

  1. Start with an empty (or partial) solution.
  2. Repeatedly choose the “best” next option according to a rule.
  3. Add it to the solution if it remains feasible.
  4. Stop when the solution is complete.

The “best” option is defined by a local criterion: earliest finishing time, smallest weight edge, shortest code length contribution, and so on. The algorithm never revisits prior choices. That irreversibility is both its strength (speed, simplicity) and its weakness (potential incorrectness).
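
As a sketch, this pattern translates almost directly into code. In the Python below, candidates, score, and feasible are placeholder names for the problem-specific pieces, not parts of any library:

  def greedy(candidates, score, feasible):
      # Generic greedy skeleton: repeatedly take the best remaining
      # option that keeps the partial solution feasible.
      solution = []
      # Rank options once by the local criterion (higher score = better).
      for option in sorted(candidates, key=score, reverse=True):
          if feasible(solution, option):
              solution.append(option)  # commit; this choice is never revisited
      return solution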

The Two Core Conditions for Correctness

Greedy choice property

A problem has the greedy choice property if there exists an optimal solution that begins with a greedy choice. In other words, picking the locally best option does not rule out achieving a global optimum.

This is not about “greedy works often.” It is a statement that can be proven. Proofs usually use an exchange argument: take an optimal solution that does not follow the greedy choice first, and show how to swap in the greedy choice without making the solution worse.

Optimal substructure

A problem has optimal substructure if an optimal solution contains within it optimal solutions to subproblems. After making a greedy choice, the remaining work must still be optimally solvable in the same sense.

Greedy algorithms rely on both properties: you choose something confidently (greedy choice property), then the remainder is “the same kind of problem” and can be solved optimally by repeating the process (optimal substructure).

Activity Selection: A Classic Greedy Success

The activity selection problem is a standard example where greedy algorithms are provably optimal.

Problem statement

You are given activities, each with a start time and a finish time. You want to select the maximum number of non-overlapping activities. Two activities are compatible if one finishes no later than the other starts.

Greedy strategy

Sort activities by increasing finish time. Repeatedly choose the activity that finishes earliest among those compatible with what you have already chosen.

Why “earliest finish”? Because finishing sooner leaves the most room for future activities. The greedy choice property holds: among all activities that could be chosen first, selecting the one with the smallest finish time can be shown (via exchange argument) not to reduce the maximum achievable count.
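
Here is a minimal Python sketch of the rule; the activity times in the example are made up:

  def select_activities(activities):
      # activities: list of (start, finish) pairs.
      # Sort by finish time, then keep each activity that starts at or
      # after the finish of the last one selected.
      chosen = []
      last_finish = float("-inf")
      for start, finish in sorted(activities, key=lambda a: a[1]):
          if start >= last_finish:  # compatible with everything chosen so far
              chosen.append((start, finish))
              last_finish = finish
      return chosen

  # Picks (1, 4), (5, 7), (8, 11): three activities, which is optimal here.
  print(select_activities([(1, 4), (3, 5), (0, 6), (5, 7), (8, 11), (3, 9)]))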

Practical insight

This greedy rule is a model for scheduling problems where the objective is to maximize the number of tasks, not to maximize value or minimize idle time. If each activity had a different profit and you wanted maximum total profit, this exact greedy strategy would not generally be correct; the problem changes, and so do the correctness conditions.
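
A tiny made-up instance shows the failure. Running earliest-finish on profit-bearing activities, it trades one long activity worth 100 for two short ones worth 1 each:

  # Hypothetical weighted instance: (start, finish, profit).
  activities = [(0, 1, 1), (1, 2, 1), (0, 10, 100)]

  chosen, last_finish, profit = [], float("-inf"), 0
  for start, finish, value in sorted(activities, key=lambda a: a[1]):
      if start >= last_finish:
          chosen.append((start, finish, value))
          last_finish, profit = finish, profit + value

  print(chosen, "total profit:", profit)  # profit 2; taking (0, 10, 100) alone earns 100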

Huffman Coding: Greedy Construction of Optimal Prefix Codes

Huffman coding uses a greedy algorithm to build an optimal prefix-free binary code given symbol frequencies.

Goal

Given characters (or symbols) with frequencies, assign binary codes so that:

  • No code is a prefix of another (prefix-free), enabling unambiguous decoding.
  • The expected code length is minimized.

The expected length is Σᵢ fᵢ ℓᵢ, where fᵢ is the frequency (or probability weight) of symbol i and ℓᵢ is the length of its code.

Greedy strategy

Repeatedly take the two least frequent symbols (or subtrees) and merge them into a new node whose frequency is their sum. Continue until one tree remains. Assign 0/1 along edges to get codes.

This is greedy because at each step it makes a local choice: combine the two lowest weights now.
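
A compact sketch using Python’s heapq; representing each subtree as a dict of partial codes is just one convenient encoding, and the frequencies below are illustrative:

  import heapq

  def huffman_codes(freqs):
      # freqs: dict mapping symbol -> frequency.
      # Heap entries are (weight, tiebreaker, codes); the integer
      # tiebreaker prevents Python from ever comparing two dicts.
      heap = [(f, i, {sym: ""}) for i, (sym, f) in enumerate(freqs.items())]
      heapq.heapify(heap)
      counter = len(heap)
      while len(heap) > 1:
          f1, _, left = heapq.heappop(heap)   # lightest subtree
          f2, _, right = heapq.heappop(heap)  # second lightest
          # Merging prepends one more bit to every code in each subtree.
          merged = {s: "0" + c for s, c in left.items()}
          merged.update({s: "1" + c for s, c in right.items()})
          heapq.heappush(heap, (f1 + f2, counter, merged))
          counter += 1
      return heap[0][2]

  codes = huffman_codes({"a": 45, "b": 13, "c": 12, "d": 16, "e": 9, "f": 5})
  print(codes)  # frequent symbols get short codes, e.g. "a" gets a 1-bit code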

Why it works

The core structural fact is that in an optimal prefix code, the two least frequent symbols can be placed at the deepest level and share the same parent. That observation supports the greedy choice property: merging the two lowest-frequency items is consistent with some optimal solution. After merging, the problem reduces to a smaller instance with one fewer symbol, demonstrating optimal substructure.

Practical insight

Huffman coding’s greedy process is robust and widely used in real compression systems because it balances optimality with efficiency. It is also a reminder that “greedy” can mean building a structure (a tree) incrementally, not just selecting items from a list.

Minimum Spanning Trees: Greedy by Safe Edges

A minimum spanning tree connects all vertices in a connected, weighted, undirected graph with minimum total edge weight and no cycles.

Two famous greedy algorithms solve MST optimally:

  • Kruskal’s algorithm: sort edges by weight and add the next lightest edge that does not form a cycle.
  • Prim’s algorithm: start from any vertex and repeatedly add the lightest edge that connects the growing tree to a new vertex.

The greedy idea: safe edges

Both algorithms rely on the concept of a “safe” edge: an edge that can be added to the partial solution while keeping it a subset of some MST.

A key MST property: for any cut that partitions the vertices into two sets (and respects the edges chosen so far), a lightest edge crossing that cut is safe. This cut property is what makes the greedy step provably correct. Each step chooses a locally minimal edge under a constraint (no cycles for Kruskal, or crossing the cut defined by the current tree for Prim), and that choice is guaranteed not to exclude an optimal global solution.
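
As one concrete instance, here is a short Kruskal sketch in Python with a minimal union-find serving as the cycle check; the four-vertex graph is made up for illustration:

  def kruskal(num_vertices, edges):
      # edges: list of (weight, u, v) with vertices numbered 0..num_vertices-1.
      parent = list(range(num_vertices))

      def find(x):  # union-find root lookup with path halving
          while parent[x] != x:
              parent[x] = parent[parent[x]]
              x = parent[x]
          return x

      mst = []
      for weight, u, v in sorted(edges):  # lightest edges first
          root_u, root_v = find(u), find(v)
          if root_u != root_v:  # joins two components: a safe edge
              parent[root_u] = root_v
              mst.append((weight, u, v))
      return mst

  edges = [(1, 0, 1), (4, 0, 2), (3, 1, 2), (2, 1, 3), (5, 2, 3)]
  print(kruskal(4, edges))  # [(1, 0, 1), (2, 1, 3), (3, 1, 2)], total weight 6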

Practical insight

MST algorithms show how greedy methods often succeed when a problem has a strong matroid-like structure, where local feasibility and exchange properties guarantee global optimality. In practice, MSTs appear in network design, clustering, and approximation pipelines where you want a cheap connected backbone.

How Greedy Algorithms Differ from Dynamic Programming

Greedy and dynamic programming (DP) can look similar because both often use optimal substructure. The difference is commitment.

  • DP explores multiple possibilities and stores results to avoid recomputation.
  • Greedy commits to one choice per step, based on a rule.

When greedy is correct, it typically beats DP on simplicity and speed. When greedy is not correct, DP is often the next tool to consider, because it can handle cases where local choices interact in complex ways.

How to Recognize When Greedy Might Apply

Greedy algorithms are a good candidate when:

  • The objective can be optimized by repeatedly taking a best-looking next step.
  • You can argue an exchange property: any optimal solution can be transformed to include the greedy choice without worsening it.
  • The remaining problem after a choice has the same structure as the original.

A useful discipline is to treat the greedy rule as a hypothesis and try to break it with a counterexample. If you cannot find one, you still need a proof, but the search often reveals whether the problem has the right structure.
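
Coin change is the classic cautionary tale: “take the largest coin that fits” is optimal for some denomination sets and wrong for others. A brute-force comparison against dynamic programming, using the deliberately awkward denominations 1, 3, and 4, finds a counterexample immediately:

  def greedy_coins(amount, coins):
      # Largest-coin-first change making; not always optimal.
      used = 0
      for c in sorted(coins, reverse=True):
          used += amount // c
          amount %= c
      return used

  def optimal_coins(amount, coins):
      # Minimum number of coins by dynamic programming, for comparison.
      best = [0] + [float("inf")] * amount
      for a in range(1, amount + 1):
          best[a] = min(best[a - c] + 1 for c in coins if c <= a)
      return best[amount]

  coins = (1, 3, 4)
  for amount in range(1, 20):
      if greedy_coins(amount, coins) != optimal_coins(amount, coins):
          print("counterexample at amount", amount)  # 6: greedy 4+1+1, optimal 3+3
          break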

Common Pitfalls

  • Mistaking intuition for proof: “earliest start time” feels reasonable for scheduling, but it is not the correct criterion for maximizing the number of activities.
  • Changing the objective: a greedy algorithm tailored for maximizing count may fail if you shift to maximizing total value.
  • Ignoring constraints: feasibility checks (like cycle detection in Kruskal) are integral to correctness, not implementation details.

Closing Perspective

Greedy algorithms are not a universal shortcut; they are a precise tool for problems with specific mathematical structure. Activity selection demonstrates greedy choice in scheduling, Huffman coding shows greedy optimality in tree construction, and MST algorithms illustrate greedy selection of safe edges guided by cut properties. Learning greedy algorithms well means learning to justify them, not just implement them. When the greedy choice property and optimal substructure are truly present, greedy methods deliver solutions that are both fast and optimal.
