Linear Algebra: Systems of Linear Equations
Systems of linear equations sit at the practical heart of linear algebra. They show up whenever several unknowns must satisfy several constraints at the same time, and each constraint is linear. In engineering, that might mean currents and voltages in an electrical network. In structural analysis, it might mean forces and displacements linked by linear relationships. In data work, it often appears as fitting a model with linear parameters.
A system can be written in many forms, but the standard algebraic version looks like this:
\[ \begin{aligned} a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n &= b_1 \\ a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n &= b_2 \\ &\vdots \\ a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n &= b_m \end{aligned} \]
The goal is to determine whether solutions exist, whether they are unique, and how to describe them efficiently when there are infinitely many.
From equations to matrices
Linear systems become much easier to manage once they are translated into matrix form. Collect the coefficients into a matrix \(A\), the unknowns into a vector \(\mathbf{x}\), and the constants into a vector \(\mathbf{b}\). Then the entire system is:
\[ A\mathbf{x} = \mathbf{b} \]
This compact form is not just notation. It enables systematic solution methods, clear criteria for existence and uniqueness, and computational algorithms that scale.
A common tool is the augmented matrix, which places \(A\) and \(\mathbf{b}\) side by side:
\[ \left[\begin{array}{cccc|c} a_{11} & a_{12} & \cdots & a_{1n} & b_1 \\ a_{21} & a_{22} & \cdots & a_{2n} & b_2 \\ \vdots & \vdots & & \vdots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} & b_m \end{array}\right] \]
Solving the system becomes the task of transforming this matrix into a simpler, equivalent one.
Gaussian elimination: the workhorse method
Gaussian elimination is the standard procedure for solving linear systems by systematically eliminating variables. It uses three elementary row operations:
- Swap two rows.
- Multiply a row by a nonzero constant.
- Add a multiple of one row to another row.
These operations do not change the solution set of the system; they simply rewrite the same constraints in a more convenient form.
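As a concrete sketch, the three operations can be written as small helper functions acting on an augmented matrix stored as a list of rows (the example system here is made up for illustration):

```python
from fractions import Fraction

def swap_rows(M, i, j):
    """Elementary operation 1: swap rows i and j."""
    M[i], M[j] = M[j], M[i]

def scale_row(M, i, c):
    """Elementary operation 2: multiply row i by a nonzero constant c."""
    assert c != 0, "scaling by zero would destroy a constraint"
    M[i] = [c * x for x in M[i]]

def add_multiple(M, i, j, c):
    """Elementary operation 3: add c times row j to row i."""
    M[i] = [a + c * b for a, b in zip(M[i], M[j])]

# Augmented matrix for the system x + 2y = 5, 3x + 4y = 6.
M = [[Fraction(1), Fraction(2), Fraction(5)],
     [Fraction(3), Fraction(4), Fraction(6)]]
# Eliminate x from the second equation: row 1 <- row 1 - 3 * row 0.
add_multiple(M, 1, 0, -3)
```

Because each operation is reversible (swap again, scale by \(1/c\), add \(-c\) times the row), the new matrix encodes exactly the same solution set.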
Elimination to row echelon form
The first stage reduces the matrix to row echelon form (REF), where:
- All nonzero rows are above any all-zero rows.
- Each leading entry (pivot) of a row is to the right of the leading entry in the row above.
- Entries below each pivot are zero.
Once in REF, you can solve by back-substitution, starting from the bottom equation and working upward. This approach is efficient and mirrors how many numerical solvers operate internally.
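A minimal sketch of this two-stage process, using exact rational arithmetic and assuming a square system with a unique solution:

```python
from fractions import Fraction

def solve(A, b):
    """Gaussian elimination to row echelon form, then back-substitution.
    Assumes a square system with a unique solution."""
    n = len(A)
    # Build the augmented matrix with exact Fraction entries.
    M = [[Fraction(x) for x in row] + [Fraction(y)] for row, y in zip(A, b)]
    for col in range(n):
        # Bring a row with a nonzero entry into the pivot position.
        pivot = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[pivot] = M[pivot], M[col]
        # Zero out every entry below the pivot.
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            M[r] = [a - factor * p for a, p in zip(M[r], M[col])]
    # Back-substitution: solve the last equation first, then move upward.
    x = [Fraction(0)] * n
    for r in range(n - 1, -1, -1):
        tail = sum(M[r][c] * x[c] for c in range(r + 1, n))
        x[r] = (M[r][n] - tail) / M[r][r]
    return x

# 2x + y = 5 and x + 3y = 10 have the unique solution x = 1, y = 3.
print(solve([[2, 1], [1, 3]], [5, 10]))
```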
Row reduction to reduced row echelon form
A further step, often called Gauss-Jordan elimination or row reduction, transforms the matrix into reduced row echelon form (RREF), where:
- Each pivot is 1.
- Each pivot is the only nonzero entry in its column.
RREF makes the structure of the solution immediate. You can read off pivot variables and free variables directly, which is especially valuable when the system has infinitely many solutions.
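A sketch of Gauss-Jordan reduction in the same list-of-rows style (the pivot search and column order here are one reasonable implementation choice, not the only one):

```python
from fractions import Fraction

def rref(M):
    """Reduce a matrix (given as a list of rows) to reduced row echelon form."""
    M = [[Fraction(x) for x in row] for row in M]
    rows, cols = len(M), len(M[0])
    r = 0
    for c in range(cols):
        # Find a pivot in column c, at or below row r.
        pivot = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if pivot is None:
            continue                           # free-variable column: no pivot
        M[r], M[pivot] = M[pivot], M[r]
        M[r] = [x / M[r][c] for x in M[r]]     # scale so the pivot becomes 1
        for i in range(rows):
            if i != r and M[i][c] != 0:        # clear the rest of the column
                M[i] = [a - M[i][c] * p for a, p in zip(M[i], M[r])]
        r += 1
        if r == rows:
            break
    return M
```

Applied to the augmented matrix of x + 2y = 5, 3x + 4y = 6, this yields rows \([1, 0, -4]\) and \([0, 1, 9/2]\), so the solution can be read off directly.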
Existence and uniqueness: what the row-reduced matrix reveals
After elimination, three broad outcomes are possible.
1) A unique solution
A unique solution occurs when every variable is a pivot variable. In an \(n\)-variable system, that means there are \(n\) pivots. In RREF, you will see a clean identity-like structure in the coefficient part.
Practically, this is the “fully determined” case. In a circuit with as many independent equations as unknown currents, for example, the currents are fixed by the constraints.
2) Infinitely many solutions
Infinitely many solutions occur when there are fewer pivots than variables. Some variables become free variables, which can take arbitrary values, and the remaining variables depend on them.
This is common in underdetermined models: not enough independent constraints to pin down a single answer. In structural problems, it can appear as a mechanism or unrestrained motion, where the equations do not fully prevent certain displacements.
3) No solution (inconsistency)
A system has no solution when elimination produces a row corresponding to a contradiction, such as:
\[ [0 \ \ 0 \ \ \cdots \ \ 0 \mid 1] \]
This represents the impossible equation \(0 = 1\). In applied settings, inconsistency often signals conflicting measurements, incompatible constraints, or modeling errors.
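A simple check for such a contradiction row in an augmented matrix might look like:

```python
def is_inconsistent(aug):
    """True if any row of the augmented matrix reads 0 = nonzero,
    i.e. all coefficient entries are zero but the right-hand side is not."""
    return any(all(x == 0 for x in row[:-1]) and row[-1] != 0 for row in aug)

# Eliminating x from x + y = 1 and x + y = 2 leaves the row [0, 0, 1]:
print(is_inconsistent([[1, 1, 1], [0, 0, 1]]))   # True
print(is_inconsistent([[1, 1, 1], [0, 1, 2]]))   # False
```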
Solution spaces and how to describe them
When a system has solutions, the set of all solutions has geometric structure.
Homogeneous systems:
A homogeneous system \(A\mathbf{x} = \mathbf{0}\) always has at least one solution: the trivial solution \(\mathbf{x} = \mathbf{0}\). If there are free variables, it has infinitely many solutions.
The solution set of \(A\mathbf{x} = \mathbf{0}\) is a subspace of \(\mathbb{R}^n\) called the null space (or kernel) of \(A\). It is closed under addition and scalar multiplication, which is why it is so useful in theory and applications.
In RREF, you can express the solutions in parametric vector form. Free variables become parameters, and the solution is written as a linear combination of direction vectors.
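As an illustration, here is one way to read a null space basis straight off a coefficient matrix that is already in RREF (the example matrix is invented for this sketch):

```python
from fractions import Fraction

def null_space_basis(R):
    """Basis for the null space, given a coefficient matrix already in RREF.
    Each free variable produces one direction vector."""
    rows, cols = len(R), len(R[0])
    # Locate the pivot column of each nonzero row.
    pivots = {}
    for r, row in enumerate(R):
        for c, x in enumerate(row):
            if x != 0:
                pivots[c] = r
                break
    free = [c for c in range(cols) if c not in pivots]
    basis = []
    for f in free:
        v = [Fraction(0)] * cols
        v[f] = Fraction(1)          # set this free variable to 1, others to 0
        for c, r in pivots.items():
            v[c] = -R[r][f]         # pivot variables follow from the RREF rows
        basis.append(v)
    return basis

# RREF with pivots in columns 0 and 2; columns 1 and 3 are free.
R = [[1, 2, 0, -1],
     [0, 0, 1, 2]]
print(null_space_basis(R))
```

Every homogeneous solution is then a linear combination of the returned vectors, which is exactly the parametric vector form.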
Non-homogeneous systems:
For \(A\mathbf{x} = \mathbf{b}\) with \(\mathbf{b} \neq \mathbf{0}\), the solution set (if nonempty) is an affine set: a particular solution plus all solutions of the corresponding homogeneous system. Concretely, if \(\mathbf{x}_p\) is one solution to \(A\mathbf{x} = \mathbf{b}\), then every solution has the form:
\[ \mathbf{x} = \mathbf{x}_p + \mathbf{x}_h \]
where \(\mathbf{x}_h\) runs over all solutions to \(A\mathbf{x} = \mathbf{0}\).
This decomposition matters in practice. It separates what is forced by the external inputs (\(\mathbf{b}\)) from what the system can vary internally without changing the outputs.
Circuits and structures: why linear systems appear so often
Linear systems are not just classroom artifacts. They are the natural language of many physical balance laws and linear constitutive relationships.
Circuit analysis
In resistive circuits, Kirchhoff’s laws and Ohm’s law lead directly to linear equations.
- Kirchhoff’s Current Law (KCL): currents entering a node equal currents leaving it. These are linear constraints.
- Ohm’s law: \(V = IR\), linear in \(V\) and \(I\) for fixed resistance \(R\).
- Kirchhoff’s Voltage Law (KVL): sums of voltage drops around loops equal zero, again linear.
Depending on how you set up the unknowns (node voltages or loop currents), you obtain a matrix system. Gaussian elimination then becomes a systematic way to compute all node voltages or branch currents.
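As a hypothetical example, consider a 5 V source feeding a chain of three resistors; writing KCL at the two unknown node voltages gives a 2-by-2 linear system, solved here with a small exact elimination routine:

```python
from fractions import Fraction

def solve(A, b):
    """Tiny exact Gaussian-elimination solver (square, unique solution)."""
    n = len(A)
    M = [[Fraction(x) for x in row] + [Fraction(y)] for row, y in zip(A, b)]
    for c in range(n):
        p = next(r for r in range(c, n) if M[r][c] != 0)
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            M[r] = [a - f * q for a, q in zip(M[r], M[c])]
    x = [Fraction(0)] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

# Hypothetical circuit: 5 V source -- R1 = 1 ohm -- node 1 -- R2 = 2 ohm --
# node 2 -- R3 = 2 ohm -- ground.  KCL at each node, cleared of fractions:
#    3*v1 -   v2 = 10        (node 1)
#   -1*v1 + 2*v2 = 0         (node 2)
v1, v2 = solve([[3, -1], [-1, 2]], [10, 0])
print(v1, v2)   # node voltages: 4 volts and 2 volts
```

The same setup scales to any number of nodes: one KCL equation per unknown node voltage produces one row of the matrix.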
Structural equilibrium and stiffness models
In statics, equilibrium conditions enforce that net force and net moment are zero. In linear truss analysis, member forces contribute linearly to nodal equilibria. In displacement-based methods, the global stiffness relationship is often written as:
\[ K\mathbf{u} = \mathbf{f} \]
where \(K\) is the stiffness matrix, \(\mathbf{u}\) is the displacement vector, and \(\mathbf{f}\) is the force vector. Solving for \(\mathbf{u}\) is a linear system problem. If the structure is improperly constrained, \(K\) is singular, and elimination leads to free variables (mechanisms), which show up as non-unique solutions.
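A worked sketch with a hypothetical two-spring chain (the spring constants and load are invented for illustration):

```python
from fractions import Fraction

# Hypothetical two-spring chain fixed at the left wall: k1 = k2 = 2 N/mm,
# with a 4 N load at the free end.  The assembled stiffness system is
#   [ 4  -2 ] [u1]   [0]
#   [-2   2 ] [u2] = [4]
K = [[Fraction(4), Fraction(-2)], [Fraction(-2), Fraction(2)]]
f = [Fraction(0), Fraction(4)]

# One elimination step: add (1/2) * row 0 to row 1 to clear the -2.
m = -K[1][0] / K[0][0]
K[1] = [a + m * b for a, b in zip(K[1], K[0])]
f[1] = f[1] + m * f[0]

# Back-substitute for the nodal displacements.
u2 = f[1] / K[1][1]
u1 = (f[0] - K[0][1] * u2) / K[0][0]
print(u1, u2)   # displacements: 2 mm and 4 mm
```

If the left support were removed, the first row would lose its extra stiffness term, \(K\) would become singular, and the elimination would expose a free variable: the whole chain could translate rigidly.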
Practical insight: what to watch for when solving systems
Pivot positions matter more than arithmetic
The elimination steps are important, but the key information is where the pivots land. Pivot columns correspond to dependent variables; non-pivot columns correspond to free variables. That single structural fact determines uniqueness and the dimension of the solution space.
Scaling and numerical stability
In hand calculations, any elimination path works. In computation, the choice can affect accuracy. Swapping rows to choose a better pivot (often called pivoting) helps avoid dividing by very small numbers and reduces numerical error. Even though the underlying mathematics is exact, computed results depend on finite precision.
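The classic small-pivot example makes this concrete. This is a minimal float sketch, assuming a 2-by-2 system whose true solution is close to \(x_1 = x_2 = 1\): naive elimination through a pivot of \(10^{-20}\) loses \(x_1\) entirely, while a single row swap recovers it.

```python
def eliminate(M, pivot_rows=False):
    """2x2 float elimination; optionally swap rows to use the larger pivot."""
    if pivot_rows and abs(M[1][0]) > abs(M[0][0]):
        M[0], M[1] = M[1], M[0]
    f = M[1][0] / M[0][0]                      # multiplier for the elimination
    M[1] = [a - f * b for a, b in zip(M[1], M[0])]
    x2 = M[1][2] / M[1][1]                     # back-substitution
    x1 = (M[0][2] - M[0][1] * x2) / M[0][0]
    return x1, x2

# System: 1e-20*x1 + x2 = 1 and x1 + x2 = 2 (true solution near x1 = x2 = 1).
naive = eliminate([[1e-20, 1.0, 1.0], [1.0, 1.0, 2.0]])
pivoted = eliminate([[1e-20, 1.0, 1.0], [1.0, 1.0, 2.0]], pivot_rows=True)
print(naive)     # x1 collapses to 0.0: the huge multiplier swamps row 2
print(pivoted)   # both unknowns come out as 1.0
```

The mathematics is identical in both runs; only the order of rows changed. That is why production solvers pivot by default.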
Interpretation beats the final numbers
A solved system is not just a list of values. It answers:
- Are the constraints consistent?
- How many independent constraints are present?
- Which degrees of freedom remain?
Row reduction exposes those answers transparently, which is why it is as much an analysis tool as it is a solution technique.
Closing perspective
Systems of linear equations are where linear algebra becomes operational: constraints become matrices, elimination becomes insight, and solution sets become geometric objects. Gaussian elimination and row reduction provide a systematic route from a messy collection of equations to a clear description of what is possible, what is determined, and what is impossible. Whether you are balancing currents in a circuit or forces in a structure, the same core ideas apply: pivot structure governs solvability, and the solution space tells the real story.