This article is about the expression of a determinant in terms of minors. For the approximation of radial potentials, see
Laplace expansion (potential).
In linear algebra, the Laplace expansion, named after Pierre-Simon Laplace, also called cofactor expansion, is an expression of the determinant of an n × n matrix B as a weighted sum of minors, which are the determinants of some (n − 1) × (n − 1) submatrices of B. Specifically, for every i, the Laplace expansion along the ith row is the equality

$$|B| = \sum_{j=1}^{n} (-1)^{i+j} b_{i,j} m_{i,j},$$

where $b_{i,j}$ is the entry of the ith row and jth column of B, and $m_{i,j}$ is the determinant of the submatrix obtained by removing the ith row and the jth column of B. Similarly, the Laplace expansion along the jth column is the equality

$$|B| = \sum_{i=1}^{n} (-1)^{i+j} b_{i,j} m_{i,j}.$$

(Each identity implies the other, since the determinants of a matrix and its transpose are the same.)

The coefficient $(-1)^{i+j} m_{i,j}$ of $b_{i,j}$ in the above sum is called the cofactor of $b_{i,j}$ in B.
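As a small illustration of these definitions, the minor and cofactor of an entry can be computed in a few lines of Python (a sketch; `delete_row_col` and `cofactor` are ad-hoc helper names, not a standard API):

```python
def delete_row_col(B, i, j):
    # Submatrix of B with (1-based) row i and column j removed.
    return [row[:j - 1] + row[j:] for r, row in enumerate(B, start=1) if r != i]

def cofactor(B, i, j, det):
    # (-1)^(i+j) times the minor m_ij; `det` computes the smaller determinant.
    return (-1) ** (i + j) * det(delete_row_col(B, i, j))

B = [[1, 2],
     [3, 4]]
# The minor of b_11 is the 1x1 matrix [[4]], so the cofactor is +4.
print(cofactor(B, 1, 1, lambda m: m[0][0]))  # 4
```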
The Laplace expansion is often useful in proofs, for example in allowing recursion on the size of matrices. It is also of didactic interest for its simplicity, and as one of several ways to view and compute the determinant. For large matrices, however, it quickly becomes inefficient to compute when compared to Gaussian elimination.
Examples

Consider the matrix

$$B = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{bmatrix}.$$
The determinant of this matrix can be computed by using the Laplace expansion along any one of its rows or columns. For instance, an expansion along the first row yields:
$$\begin{aligned}
|B| &= 1 \cdot \begin{vmatrix} 5 & 6 \\ 8 & 9 \end{vmatrix} - 2 \cdot \begin{vmatrix} 4 & 6 \\ 7 & 9 \end{vmatrix} + 3 \cdot \begin{vmatrix} 4 & 5 \\ 7 & 8 \end{vmatrix} \\
&= 1 \cdot (-3) - 2 \cdot (-6) + 3 \cdot (-3) = 0.
\end{aligned}$$
Laplace expansion along the second column yields the same result:
$$\begin{aligned}
|B| &= -2 \cdot \begin{vmatrix} 4 & 6 \\ 7 & 9 \end{vmatrix} + 5 \cdot \begin{vmatrix} 1 & 3 \\ 7 & 9 \end{vmatrix} - 8 \cdot \begin{vmatrix} 1 & 3 \\ 4 & 6 \end{vmatrix} \\
&= -2 \cdot (-6) + 5 \cdot (-12) - 8 \cdot (-6) = 0.
\end{aligned}$$
It is easy to verify that the result is correct: the matrix is singular because the sum of its first and third columns is twice its second column, and hence its determinant is zero.
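The first-row expansion above is short enough to check directly in Python (a quick sketch; `det2` is a small helper defined here, not a library function):

```python
def det2(a, b, c, d):
    # Determinant of the 2x2 matrix [[a, b], [c, d]].
    return a * d - b * c

B = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]

# Expansion along the first row: alternating signs (+, -, +), each entry
# times the determinant of its complementary 2x2 submatrix.
det_B = (+ B[0][0] * det2(B[1][1], B[1][2], B[2][1], B[2][2])
         - B[0][1] * det2(B[1][0], B[1][2], B[2][0], B[2][2])
         + B[0][2] * det2(B[1][0], B[1][1], B[2][0], B[2][1]))
print(det_B)  # 0, matching the computation above
```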
Proof

Suppose $B$ is an n × n matrix and $i, j \in \{1, 2, \ldots, n\}.$ For clarity we also label the entries of $B$ that compose its $(i, j)$ minor matrix $M_{ij}$ as $(a_{st})$ for $1 \le s, t \le n - 1.$

Consider the terms in the expansion of $|B|$ that have $b_{ij}$ as a factor. Each has the form

$$\operatorname{sgn} \tau \, b_{1,\tau(1)} \cdots b_{i,j} \cdots b_{n,\tau(n)} = \operatorname{sgn} \tau \, b_{ij} a_{1,\sigma(1)} \cdots a_{n-1,\sigma(n-1)}$$

for some permutation τ ∈ S_n with $\tau(i) = j$, and a unique and evidently related permutation $\sigma \in S_{n-1}$ which selects the same minor entries as τ. Similarly, each choice of σ determines a corresponding τ; i.e., the correspondence $\sigma \leftrightarrow \tau$ is a bijection between $S_{n-1}$ and $\{\tau \in S_n : \tau(i) = j\}.$
Using Cauchy's two-line notation, the explicit relation between $\tau$ and $\sigma$ can be written as

$$\sigma = \begin{pmatrix} 1 & 2 & \cdots & i & \cdots & n-1 \\ (\leftarrow)_j(\tau(1)) & (\leftarrow)_j(\tau(2)) & \cdots & (\leftarrow)_j(\tau(i+1)) & \cdots & (\leftarrow)_j(\tau(n)) \end{pmatrix}$$

where $(\leftarrow)_j$ is a temporary shorthand notation for the cycle $(n, n-1, \ldots, j)$. This operation decrements all indices larger than j, so that every index fits in the set {1, 2, ..., n − 1}.
The permutation τ can be derived from σ as follows. Define $\sigma' \in S_n$ by $\sigma'(k) = \sigma(k)$ for $1 \le k \le n-1$ and $\sigma'(n) = n.$ Then $\sigma'$ is expressed as

$$\sigma' = \begin{pmatrix} 1 & 2 & \cdots & i & \cdots & n-1 & n \\ (\leftarrow)_j(\tau(1)) & (\leftarrow)_j(\tau(2)) & \cdots & (\leftarrow)_j(\tau(i+1)) & \cdots & (\leftarrow)_j(\tau(n)) & n \end{pmatrix}$$
Now, the operation which applies $(\leftarrow)_i$ first and then applies $\sigma'$ is (notice that applying A before B is equivalent to applying the inverse of A to the upper row of B in two-line notation)

$$\sigma'(\leftarrow)_i = \begin{pmatrix} 1 & 2 & \cdots & i+1 & \cdots & n & i \\ (\leftarrow)_j(\tau(1)) & (\leftarrow)_j(\tau(2)) & \cdots & (\leftarrow)_j(\tau(i+1)) & \cdots & (\leftarrow)_j(\tau(n)) & n \end{pmatrix}$$

where $(\leftarrow)_i$ is a temporary shorthand notation for the cycle $(n, n-1, \ldots, i)$.
Similarly, the operation which applies $\tau$ first and then applies $(\leftarrow)_j$ is

$$(\leftarrow)_j \tau = \begin{pmatrix} 1 & 2 & \cdots & i & \cdots & n-1 & n \\ (\leftarrow)_j(\tau(1)) & (\leftarrow)_j(\tau(2)) & \cdots & n & \cdots & (\leftarrow)_j(\tau(n-1)) & (\leftarrow)_j(\tau(n)) \end{pmatrix}$$

The above two are equal; thus,

$$(\leftarrow)_j \tau = \sigma'(\leftarrow)_i$$
$$\tau = (\rightarrow)_j \sigma' (\leftarrow)_i$$

where $(\rightarrow)_j$ is the inverse of $(\leftarrow)_j$, which is the cycle $(j, j+1, \ldots, n)$. Thus

$$\tau = (j, j+1, \ldots, n)\,\sigma'\,(n, n-1, \ldots, i)$$
Since the two cycles can be written respectively as $n - j$ and $n - i$ transpositions,

$$\operatorname{sgn} \tau = (-1)^{2n-(i+j)} \operatorname{sgn} \sigma' = (-1)^{i+j} \operatorname{sgn} \sigma.$$
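This sign relation can be checked by brute force with a short script (a sketch; the helpers `sign`, `cycle`, and `compose` are ad-hoc, not from any library): for every i, j and every σ ∈ S_{n−1}, build τ = (j, j+1, …, n) σ′ (n, n−1, …, i) and compare signs.

```python
from itertools import permutations

def sign(p):
    # Sign of a permutation given as a tuple with p[k-1] = p(k), via inversion count.
    n = len(p)
    inversions = sum(1 for a in range(n) for b in range(a + 1, n) if p[a] > p[b])
    return -1 if inversions % 2 else 1

def cycle(seq, n):
    # The cycle seq[0] -> seq[1] -> ... -> seq[-1] -> seq[0], as an element of S_n.
    nxt = {seq[k]: seq[(k + 1) % len(seq)] for k in range(len(seq))}
    return tuple(nxt.get(k, k) for k in range(1, n + 1))

def compose(p, q):
    # Functional composition: (p o q)(k) = p(q(k)).
    return tuple(p[q[k - 1] - 1] for k in range(1, len(p) + 1))

n = 4
for i in range(1, n + 1):
    for j in range(1, n + 1):
        left = cycle(list(range(n, i - 1, -1)), n)   # (n, n-1, ..., i)
        right = cycle(list(range(j, n + 1)), n)      # (j, j+1, ..., n)
        for s in permutations(range(1, n)):          # sigma in S_{n-1}
            s_ext = s + (n,)                         # sigma' fixes n
            tau = compose(right, compose(s_ext, left))
            assert tau[i - 1] == j                   # tau(i) = j
            assert sign(tau) == (-1) ** (i + j) * sign(s)
```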
And since the map $\sigma \mapsto \tau$ is bijective,

$$\begin{aligned}
\sum_{i=1}^{n} \sum_{\tau \in S_n : \tau(i) = j} \operatorname{sgn} \tau \, b_{1,\tau(1)} \cdots b_{n,\tau(n)} &= \sum_{i=1}^{n} \sum_{\sigma \in S_{n-1}} (-1)^{i+j} \operatorname{sgn} \sigma \, b_{ij} a_{1,\sigma(1)} \cdots a_{n-1,\sigma(n-1)} \\
&= \sum_{i=1}^{n} b_{ij} (-1)^{i+j} \sum_{\sigma \in S_{n-1}} \operatorname{sgn} \sigma \, a_{1,\sigma(1)} \cdots a_{n-1,\sigma(n-1)} \\
&= \sum_{i=1}^{n} b_{ij} (-1)^{i+j} M_{ij}
\end{aligned}$$

from which the result follows. Similarly, the result holds if the index of the outer summation is replaced with $j$.[1]
Laplace expansion of a determinant by complementary minors
Laplace's cofactor expansion can be generalised as follows.
Consider the matrix
$$A = \begin{bmatrix} 1 & 2 & 3 & 4 \\ 5 & 6 & 7 & 8 \\ 9 & 10 & 11 & 12 \\ 13 & 14 & 15 & 16 \end{bmatrix}.$$
The determinant of this matrix can be computed by using Laplace's cofactor expansion along the first two rows as follows. First note that there are 6 sets of two distinct numbers in {1, 2, 3, 4}; let $S = \{\{1,2\}, \{1,3\}, \{1,4\}, \{2,3\}, \{2,4\}, \{3,4\}\}$ be the set of them.
Define the complementary cofactors to be

$$b_{\{j,k\}} = \begin{vmatrix} a_{1j} & a_{1k} \\ a_{2j} & a_{2k} \end{vmatrix},$$

$$c_{\{p,q\}} = \begin{vmatrix} a_{3p} & a_{3q} \\ a_{4p} & a_{4q} \end{vmatrix},$$

and the sign of their permutation to be

$$\varepsilon^{\{j,k\},\{p,q\}} = \operatorname{sgn} \begin{bmatrix} 1 & 2 & 3 & 4 \\ j & k & p & q \end{bmatrix}, \text{ where } p \neq j,\ q \neq k.$$
The determinant of A can be written out as

$$|A| = \sum_{H \in S} \varepsilon^{H, H'} b_H c_{H'},$$

where $H'$ is the complementary set to $H$.
In our explicit example this gives us
$$\begin{aligned}
|A| &= b_{\{1,2\}} c_{\{3,4\}} - b_{\{1,3\}} c_{\{2,4\}} + b_{\{1,4\}} c_{\{2,3\}} + b_{\{2,3\}} c_{\{1,4\}} - b_{\{2,4\}} c_{\{1,3\}} + b_{\{3,4\}} c_{\{1,2\}} \\
&= \begin{vmatrix} 1 & 2 \\ 5 & 6 \end{vmatrix} \cdot \begin{vmatrix} 11 & 12 \\ 15 & 16 \end{vmatrix} - \begin{vmatrix} 1 & 3 \\ 5 & 7 \end{vmatrix} \cdot \begin{vmatrix} 10 & 12 \\ 14 & 16 \end{vmatrix} + \begin{vmatrix} 1 & 4 \\ 5 & 8 \end{vmatrix} \cdot \begin{vmatrix} 10 & 11 \\ 14 & 15 \end{vmatrix} + \begin{vmatrix} 2 & 3 \\ 6 & 7 \end{vmatrix} \cdot \begin{vmatrix} 9 & 12 \\ 13 & 16 \end{vmatrix} - \begin{vmatrix} 2 & 4 \\ 6 & 8 \end{vmatrix} \cdot \begin{vmatrix} 9 & 11 \\ 13 & 15 \end{vmatrix} + \begin{vmatrix} 3 & 4 \\ 7 & 8 \end{vmatrix} \cdot \begin{vmatrix} 9 & 10 \\ 13 & 14 \end{vmatrix} \\
&= -4 \cdot (-4) - (-8) \cdot (-8) + (-12) \cdot (-4) + (-4) \cdot (-12) - (-8) \cdot (-8) + (-4) \cdot (-4) \\
&= 16 - 64 + 48 + 48 - 64 + 16 = 0.
\end{aligned}$$
As above, it is easy to verify that the result is correct: the matrix is singular because the sum of its first and third columns is twice its second column, and hence its determinant is zero.
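The six-term expansion above can also be generated programmatically. The following sketch (the helper name `minor2` is ad-hoc) enumerates the 2-element column subsets with `itertools.combinations`, applies the sign rule ε = (−1) raised to the sum of the 1-based row and column indices, and sums the products of complementary minors:

```python
from itertools import combinations

def minor2(m, rows, cols):
    # 2x2 minor taken from the given (0-based) row and column indices.
    (r1, r2), (c1, c2) = rows, cols
    return m[r1][c1] * m[r2][c2] - m[r1][c2] * m[r2][c1]

A = [[ 1,  2,  3,  4],
     [ 5,  6,  7,  8],
     [ 9, 10, 11, 12],
     [13, 14, 15, 16]]

det_A = 0
for H in combinations(range(4), 2):                  # column pair for rows 1, 2
    Hc = tuple(c for c in range(4) if c not in H)    # complementary columns
    # The fixed rows {1, 2} contribute 3 to the exponent; columns are 1-based.
    eps = (-1) ** (3 + sum(c + 1 for c in H))
    det_A += eps * minor2(A, (0, 1), H) * minor2(A, (2, 3), Hc)

print(det_A)  # 0, as computed above
```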
Let $B = [b_{ij}]$ be an n × n matrix, $S$ the set of k-element subsets of {1, 2, ..., n}, and $H$ an element in it. Then the determinant of $B$ can be expanded along the k rows identified by $H$ as follows:

$$|B| = \sum_{L \in S} \varepsilon^{H,L} b_{H,L} c_{H,L}$$

where $\varepsilon^{H,L}$ is the sign of the permutation determined by $H$ and $L$, equal to $(-1)^{\left(\sum_{h \in H} h\right) + \left(\sum_{l \in L} l\right)}$; $b_{H,L}$ is the square minor of $B$ obtained by deleting from $B$ the rows and columns with indices in $H$ and $L$ respectively; and $c_{H,L}$ (called the complement of $b_{H,L}$) is defined to be $b_{H',L'}$, with $H'$ and $L'$ being the complements of $H$ and $L$ respectively.

This coincides with the theorem above when $k = 1$. The same thing holds for any fixed k columns.
Computational expense
The Laplace expansion is computationally inefficient for high-dimensional matrices, with a time complexity in big O notation of O(n!). Alternatively, using a decomposition into triangular matrices, as in the LU decomposition, yields determinants with a time complexity of O(n³).[2] The following Python code implements the Laplace expansion:
```python
def determinant(M):
    # Base case of recursive function: 1x1 matrix
    if len(M) == 1:
        return M[0][0]

    total = 0
    for column, element in enumerate(M[0]):
        # Exclude first row and current column.
        K = [x[:column] + x[column + 1:] for x in M[1:]]
        s = 1 if column % 2 == 0 else -1
        total += s * element * determinant(K)
    return total
```
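For comparison, the O(n³) route mentioned above can be sketched with plain Gaussian elimination (a minimal illustration with partial pivoting, not a production LU routine):

```python
def determinant_elimination(M):
    # Determinant in O(n^3) by reducing a copy of M to upper-triangular form.
    A = [[float(x) for x in row] for row in M]
    n = len(A)
    det = 1.0
    for k in range(n):
        # Partial pivoting: bring the largest entry of column k to the diagonal.
        p = max(range(k, n), key=lambda r: abs(A[r][k]))
        if A[p][k] == 0.0:
            return 0.0                     # the matrix is singular
        if p != k:
            A[k], A[p] = A[p], A[k]
            det = -det                     # each row swap flips the sign
        det *= A[k][k]
        for r in range(k + 1, n):
            f = A[r][k] / A[k][k]
            for c in range(k, n):
                A[r][c] -= f * A[k][c]
    return det
```

On well-conditioned inputs this agrees with the recursive expansion up to floating-point round-off; on the singular example matrices above it returns a value that is zero up to round-off.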
- ^ Walter, Dan; Tytun, Alex (1949). "Elementary problem 834". American Mathematical Monthly. 56 (6). American Mathematical Society: 409. doi:10.2307/2306289. JSTOR 2306289.
- ^ Stoer, Josef; Bulirsch, Roland. Introduction to Numerical Mathematics.