
Advanced Algorithms: Approximation


Many of the most important optimization problems in computing are NP-hard. For these problems, it is unrealistic to expect an algorithm that always finds the true optimum efficiently, at least under standard complexity assumptions. Approximation algorithms provide a different kind of guarantee: they run in polynomial time and return solutions that are provably close to optimal. This shift from exactness to near-optimality is not a compromise in rigor. It is a well-developed area with precise performance bounds, a toolkit of design techniques, and deep ties to linear programming and combinatorial optimization.

Why approximation matters for NP-hard problems

NP-hard problems appear in routing, scheduling, clustering, resource allocation, network design, and many other domains. In practice, teams often rely on heuristics. Heuristics can work well but usually come with no worst-case guarantees. Approximation algorithms fill that gap by offering:

  • Predictable quality: a bound on how far the returned solution can be from optimal.
  • Predictable runtime: typically polynomial in input size.
  • Transferable techniques: LP rounding and primal-dual methods apply across many problems.

The real power of approximation is that it gives you a controlled trade-off between runtime and solution quality.

Approximation ratios: the basic guarantee

For minimization problems, an algorithm is said to have approximation ratio α ≥ 1 if it always returns a solution with cost at most α · OPT, where OPT is the optimal cost. For maximization problems, the analogous guarantee is typically a factor α ≤ 1 such that the algorithm’s value is at least α · OPT.

A few points are worth emphasizing:

  • Approximation ratios are worst-case guarantees, not average-case claims.
  • The ratio is meaningful only when the objective is nonnegative and the optimum is well-defined.
  • Tightness matters: an algorithm may be “2-approximate” even if it often does much better, but the factor 2 is the promise you can rely on.

Classic examples include the 2-approximation for Vertex Cover and the O(log n)-approximation for Set Cover. The exact constants and limits depend on the problem, but the structure of the guarantee is consistent across the field.
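
As a first concrete example, here is a minimal sketch of the classic 2-approximation for Vertex Cover, which repeatedly picks an uncovered edge and takes both endpoints (the function name and edge-list input format are illustrative choices, not fixed by the text above):

```python
def vertex_cover_2approx(edges):
    """Classic 2-approximation for minimum Vertex Cover.

    Repeatedly pick an uncovered edge and add BOTH endpoints to the
    cover. The picked edges are vertex-disjoint (a matching), and any
    optimal cover must contain at least one endpoint of each of them,
    so |cover| = 2 * |matching| <= 2 * OPT.
    """
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.add(u)
            cover.add(v)
    return cover

# Example: the path 1-2-3-4. The optimum is {2, 3} (size 2); this
# algorithm returns {1, 2, 3, 4} (size 4), exactly the factor-2 promise.
print(vertex_cover_2approx([(1, 2), (2, 3), (3, 4)]))
```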

Linear programming relaxations and LP rounding

One of the most broadly useful strategies is to model the NP-hard problem as an integer program and then relax it into a linear program (LP) by allowing variables that should be integral (often 0 or 1) to take fractional values in [0, 1]. The LP can be solved efficiently, and its optimal value is a lower bound (for minimization) on the true optimum.

The remaining task is to turn the fractional solution into an integral one without losing too much in the objective. That is the role of LP rounding.

How LP rounding works in practice

LP rounding typically follows a pattern:

  1. Formulate an LP relaxation that captures the constraints but drops integrality.
  2. Solve the LP to get a fractional solution.
  3. Round the fractional values into an integral solution, using a rounding rule designed to preserve feasibility and control cost.

Rounding can be deterministic (thresholding, greedy rounding) or randomized (probabilistic rounding followed by concentration bounds). The central idea is to relate the cost of the rounded solution to the LP value. Since the LP value bounds OPT from below, this yields an approximation ratio.
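
As a concrete instance of deterministic thresholding, consider the standard Vertex Cover relaxation: minimize Σ w_v · x_v subject to x_u + x_v ≥ 1 for every edge and 0 ≤ x_v ≤ 1. Every edge constraint forces at least one endpoint to value 1/2 or more, so rounding those vertices up to 1 stays feasible and at most doubles the cost, giving a 2-approximation. A minimal sketch, assuming scipy is available as the LP solver (the function name and input format are illustrative):

```python
import numpy as np
from scipy.optimize import linprog

def vertex_cover_lp_rounding(n, edges, weights):
    """2-approximation for weighted Vertex Cover via LP threshold rounding.

    LP relaxation: minimize sum_v w_v * x_v
    subject to x_u + x_v >= 1 for every edge (u, v), and 0 <= x_v <= 1.
    """
    # linprog expects A_ub @ x <= b_ub, so negate the covering constraints.
    A_ub = np.zeros((len(edges), n))
    for i, (u, v) in enumerate(edges):
        A_ub[i, u] = A_ub[i, v] = -1.0
    b_ub = -np.ones(len(edges))
    res = linprog(c=weights, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, 1)] * n, method="highs")
    # Threshold rounding: take every vertex with fractional value >= 1/2.
    # Feasibility: each edge has an endpoint with x >= 1/2.
    # Cost: rounding at most doubles each x_v, so cost <= 2 * LP <= 2 * OPT.
    return {v for v in range(n) if res.x[v] >= 0.5}

# Example: a 4-cycle where vertex 3 is expensive; a good cover avoids it.
print(vertex_cover_lp_rounding(4, [(0, 1), (1, 2), (2, 3), (3, 0)],
                               weights=[1.0, 1.0, 1.0, 5.0]))
```

Randomized rounding would instead include each vertex with a probability derived from x_v and use concentration bounds to argue feasibility and cost; thresholding is simply the cleanest rule for this covering LP.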

Integrality gap and what it tells you

The integrality gap is the worst-case ratio between the best integral solution and the best fractional (LP) solution. It acts as a ceiling on what you can prove using that relaxation: if the integrality gap is large, then no rounding scheme can consistently beat it without strengthening the relaxation.

In algorithm design, this becomes a feedback loop. If rounding cannot deliver a good ratio, the next step is often to refine the LP by adding valid inequalities or moving to stronger relaxations.
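
To make this concrete with the Vertex Cover relaxation sketched above: on the complete graph K_n, setting every x_v = 1/2 is feasible and achieves the LP optimum of n/2, while any integral cover must use n − 1 vertices, so the integrality gap tends to 2 and no rounding of that LP can prove a factor better than 2. A tiny brute-force check (purely illustrative):

```python
from itertools import combinations

def vertex_cover_gap_Kn(n):
    """Ratio of integral to LP optimum for Vertex Cover on K_n.

    The all-1/2 assignment is LP-feasible (every edge sees 1/2 + 1/2 = 1)
    and attains the LP optimum of n / 2. Integrally, excluding any two
    vertices leaves their shared edge uncovered, so every cover needs
    at least n - 1 vertices.
    """
    edges = list(combinations(range(n), 2))
    lp_opt = n / 2
    # Brute-force the smallest integral cover (fine for tiny n).
    ilp_opt = next(k for k in range(n + 1)
                   if any(all(u in S or v in S for u, v in edges)
                          for S in combinations(range(n), k)))
    return ilp_opt / lp_opt

print(vertex_cover_gap_Kn(6))  # (6 - 1) / 3 = 1.67; tends to 2 as n grows
```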

Primal-dual methods: building solutions while tracking bounds

Primal-dual approximation algorithms take a different route than “solve the LP, then round.” They simultaneously construct:

  • A feasible (or nearly feasible) primal solution, which corresponds to a combinatorial object like a cover or a tree.
  • A feasible dual solution, which provides a lower bound on the optimal primal value.

The approximation guarantee comes from showing that the cost of the primal solution is within a bounded factor of the dual objective value. Since the dual objective is a lower bound on OPT (for minimization), this establishes the ratio.

Why primal-dual is more than an LP trick

Primal-dual algorithms are often implementable as purely combinatorial procedures, even though their analysis is LP-based. They are especially effective when the constraints have a natural “covering” interpretation. A typical approach is to start with zero dual variables and increase them until some primal constraint becomes tight, at which point you commit to a primal decision. The dual solution then “pays for” the primal structure you build.
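
A minimal combinatorial sketch of this pattern for weighted Set Cover follows (names and input format are illustrative). Each tight set bought costs exactly the dual value its elements have accumulated, so the total cost is at most f · Σ y_e ≤ f · OPT, where f is the maximum number of sets any single element belongs to:

```python
def primal_dual_set_cover(universe, sets, costs):
    """f-approximation for weighted Set Cover via the primal-dual method.

    Dual LP: maximize sum_e y_e subject to sum_{e in S} y_e <= cost(S).
    Raise y_e for one uncovered element e until the constraint of some
    set containing e becomes tight, then commit to that set.
    """
    slack = dict(costs)              # remaining slack of each dual constraint
    y = {e: 0.0 for e in universe}   # dual variables, feasible throughout
    cover, covered = [], set()
    while covered != set(universe):
        e = next(el for el in universe if el not in covered)
        candidates = [s for s in sets if e in sets[s]]
        # Raising y_e by the minimum slack makes that set tight and keeps
        # every other constraint containing e feasible.
        tight = min(candidates, key=lambda s: slack[s])
        raised = slack[tight]
        y[e] += raised
        for s in candidates:
            slack[s] -= raised
        cover.append(tight)          # the dual increase "pays for" this set
        covered |= sets[tight]
    # cost(cover) = sum over bought sets of their elements' duals
    #            <= f * sum_e y_e <= f * OPT.
    return cover

sets = {"A": {1, 2}, "B": {2, 3}, "C": {3, 4}}
print(primal_dual_set_cover({1, 2, 3, 4}, sets,
                            {"A": 1.0, "B": 3.0, "C": 1.0}))  # ['A', 'C']
```

Note that the code never solves an LP: the duals are just numbers it raises, which is exactly the “purely combinatorial procedure with an LP-based analysis” described above.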

This style is valued for its clarity and for producing strong constant-factor approximations for several network design and covering problems.

PTAS and FPTAS: approximation schemes

Constant-factor approximations are useful, but sometimes you want accuracy arbitrarily close to optimal. Approximation schemes formalize that.

PTAS: Polynomial-Time Approximation Scheme

A PTAS is a family of algorithms parameterized by ε > 0 that returns a solution within a factor of (1 + ε) of optimal for minimization (or (1 − ε) for maximization), and runs in time polynomial in the input size for any fixed ε.

The catch is that the degree of the polynomial can depend heavily on ε; runtimes of the form n^O(1/ε) are common. In practice, this means a PTAS can be theoretically elegant but not always practical for very small ε.

FPTAS: Fully Polynomial-Time Approximation Scheme

An FPTAS strengthens the runtime requirement: it must be polynomial in both the input size and 1/ε. This is a much more demanding standard and is often achieved through dynamic programming with scaling.

A common pattern is to scale and round numerical values (like weights or profits) so the dynamic program runs faster, and then bound the error introduced by scaling. The analysis carefully ensures the final solution is within the promised (1 ± ε) factor of optimal while keeping runtime manageable.
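
The textbook instance is the Knapsack FPTAS: scale profits down by K = ε · p_max / n, run the exact profit-indexed dynamic program on the scaled values, and bound the total scaling error by ε · p_max ≤ ε · OPT. A minimal sketch, assuming integer item weights (names are illustrative):

```python
def knapsack_fptas(profits, weights, capacity, eps):
    """(1 - eps)-approximation for 0/1 Knapsack via profit scaling.

    The exact DP over profits runs in time polynomial in p_max; scaling
    each profit by K = eps * p_max / n shrinks the profit range to
    O(n / eps), so the runtime is polynomial in n and 1/eps while the
    value lost to rounding is at most eps * OPT.
    """
    n, p_max = len(profits), max(profits)
    K = eps * p_max / n
    scaled = [int(p / K) for p in profits]

    # dp[p] = minimum weight achieving scaled profit exactly p.
    INF = float("inf")
    dp = [0.0] + [INF] * sum(scaled)
    choice = [set() for _ in range(len(dp))]
    for i in range(n):
        for p in range(len(dp) - 1, scaled[i] - 1, -1):
            cand = dp[p - scaled[i]] + weights[i]
            if cand < dp[p]:
                dp[p] = cand
                choice[p] = choice[p - scaled[i]] | {i}

    # Best scaled profit reachable within capacity; report true profits.
    best = max(p for p in range(len(dp)) if dp[p] <= capacity)
    items = choice[best]
    return items, sum(profits[i] for i in items)

items, value = knapsack_fptas([60, 100, 120], [10, 20, 30],
                              capacity=50, eps=0.1)
print(items, value)  # here the optimum itself: items {1, 2}, value 220
```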

Choosing the right approximation technique

Approximation algorithms are not one-size-fits-all. The right method depends on the problem’s structure.

  • If the problem has a natural integer programming formulation with a “tight” relaxation, LP rounding can be direct and powerful.
  • If the constraints look like covering, packing, or network design and there is a clean dual interpretation, primal-dual methods often lead to strong and implementable algorithms.
  • If the problem admits a pseudo-polynomial dynamic program and involves numeric parameters, an FPTAS may be achievable through scaling.
  • If geometric or low-dimensional structure is present, a PTAS can sometimes be built using careful decomposition and local optimization ideas.

A practical way to evaluate options is to ask two questions early: what is a strong, efficiently computable bound on OPT, and how can you transform that bound into a feasible solution with controlled loss?

What approximation does and does not promise

Approximation guarantees are precise, but they are not a universal fix.

  • They provide worst-case bounds, not instance-specific optimality certificates.
  • The approximation ratio may be tight only on contrived inputs, yet still crucial for reliability in adversarial settings.
  • Not every NP-hard problem admits good approximations; some have strong inapproximability results. Even without diving into those results, it is important to recognize that the landscape varies widely by problem.

Closing perspective

Approximation is a disciplined response to computational hardness. Approximation ratios make solution quality measurable, LP rounding turns fractional structure into feasible decisions, primal-dual methods tie algorithm design to provable lower bounds, and PTAS/FPTAS frameworks let you trade time for accuracy in a controlled way. Together, these ideas form a mature toolkit for building algorithms that are efficient, explainable, and near-optimal where exact optimization is out of reach.
