# Computing determinants: expansions

I’ve spoken vehemently before about my dislike of determinants, and in a straightforward computational lower level matrices-and-vectors subject I’ll have nothing to do with them. However, I am currently teaching engineers, who seem enamoured of determinants. They love things like Cramer’s rule (hideously inefficient as it is), and it must be admitted that determinants do make their presence felt in other places: Jacobians for multiple changes of variables, Wronskians, etc. So I’ve been teaching determinants and how to compute them by hand.

I assumed that, having taught determinants off and on for years, I’d come across most methods for their computation. But no!

So this post gathers a few methods together. I’ll be brief; plenty of sordid details can be found for each method online.

## Laplace expansion
This is the one we learn at our mothers’ knees, so to speak, as mathematical toddlers. It can be defined recursively as follows

$|a| = a$

For an $n\times n$ matrix, pick any row or column. For simplicity’s sake, suppose we pick row $i$, with elements $a_{i1},a_{i2},\ldots,a_{in}$. Then

$\displaystyle{|A| = \sum_{j=1}^n(-1)^{i+j}a_{ij}M_{ij}}$

where $M_{ij}$ is the minor of $a_{ij}$, defined to be the determinant of what’s left when row $i$ and column $j$ are removed.
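As a sanity check, this recursive definition translates directly into a few lines of Python. This is a sketch; the function name is mine, and exact integer entries are assumed:

```python
def det_laplace(A):
    """Determinant by Laplace (cofactor) expansion along the first row."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        # The minor of the (0, j) entry: delete row 0 and column j (0-based).
        minor = [row[:j] + row[j + 1:] for row in A[1:]]
        total += (-1) ** j * A[0][j] * det_laplace(minor)
    return total
```

Expanding along the first row rather than an arbitrary row $i$ loses no generality, since any row can be brought to the top with a known sign change.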

It’s instructive to determine the cost of this calculation. Suppose $M_n, A_n$ are the numbers of multiplications and additions, respectively, required to compute an $n\times n$ determinant. We have $M_1=A_1=0$ and $M_2 = 2,A_2=1$. (We shall treat subtraction as having the same cost as addition.)

By the expansion rule above, an $n\times n$ determinant requires the computation of $n$ different $(n-1)\times(n-1)$ determinants, with $n$ extra multiplications and $n-1$ additions. Thus:

$M_n = n(M_{n-1}+1)$

$A_n = n(A_{n-1}+1)-1.$
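These recurrences are easy to tabulate; a minimal Python sketch (the function name is mine):

```python
def laplace_costs(nmax):
    """M_n and A_n: multiplication and addition counts for an n-by-n
    determinant computed by the one-row Laplace expansion."""
    M, A = {1: 0}, {1: 0}
    for n in range(2, nmax + 1):
        M[n] = n * (M[n - 1] + 1)      # n sub-determinants, n extra products
        A[n] = n * (A[n - 1] + 1) - 1  # plus n - 1 additions to sum the terms
    return M, A
```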

The following table shows how these values increase:

$\displaystyle{\begin{array}{c|c|c}n&M_n&A_n\\ \hline 2&2&1\\ 3&9&5\\ 4&40&23\\ 5&205&119\\ 6 &1236 &719\\ 7 &8659& 5039\\ 8 &69280 &40319\\ 9 &623529 &362879\\ 10 &6235300 &3628799\end{array}}$

It’s clear from the definition, and rammed home by this table, that the work required to compute a determinant by this method is of order $n!$. Thus, very inefficient.

## General Laplace expansion
Some authors call this the Laplace expansion, and the version above is then just a particular case of it. Curiously enough I’d never come across it until quite recently. The idea is quite simple; the main difficulty is finding good notation to describe it. Right, here goes.

Suppose we have an $n\times n$ matrix $A$, and suppose we pick $k$ rows, with indices

$R = r_1, r_2,\ldots, r_k,\quad 1\le r_1< r_2<\ldots< r_k\le n$

and also pick $k$ columns, with indices

$C = c_1, c_2,\ldots, c_k,\quad 1\le c_1< c_2<\ldots< c_k\le n$.

So $R,C$ are simple lists of length $k$ containing strictly increasing row and column indices. We then define the matrix

$A[RC]$

to be the $k\times k$ matrix whose $ij$-th element is $A[r_i][c_j]$. That is, $A[RC]$ consists of all the "intersections" of rows in $R$ with columns in $C$. We also define $\overline{A}[RC]$ to be the $(n-k)\times(n-k)$ matrix obtained when all the rows in $R$ and columns in $C$ are removed.

For example, suppose

$\displaystyle{A = \left[\begin{array}{ccccc}1&2&3&4&5\\ 6&7&8&9&10\\ 11&12&13&14&15\\ 16&17&18&19&20\\ 21&22&23&24&25\end{array}\right]}$

Let $R = 1,4$ and $C = 2, 3$. Then

$\displaystyle{A[RC]=\left[\begin{array}{cc}2&3\\ 17&18\end{array}\right]}$

and

$\displaystyle{\overline{A}[RC]=\left[\begin{array}{ccc}6&9&10\\ 11&14&15\\ 21&24&25\end{array}\right]}$.
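Extracting $A[RC]$ and its complement is easy to express in Python. Here is a hedged sketch using the 1-based index lists from above (function names are mine):

```python
def submatrix(A, R, C):
    """A[RC]: the intersections of the rows in R with the columns in C
    (R and C are 1-based, strictly increasing index lists)."""
    return [[A[r - 1][c - 1] for c in C] for r in R]

def complement(A, R, C):
    """The complementary matrix: A with the rows in R and columns in C removed."""
    n = len(A)
    rows = [r for r in range(1, n + 1) if r not in R]
    cols = [c for c in range(1, n + 1) if c not in C]
    return [[A[r - 1][c - 1] for c in cols] for r in rows]
```

With the $5\times 5$ matrix above, `submatrix(A, [1, 4], [2, 3])` and `complement(A, [1, 4], [2, 3])` reproduce the two matrices just shown.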

Given all this, the generalized version of the Laplace expansion theorem claims that

$\displaystyle{\mbox{det}(A) = \sum (-1)^\sigma\mbox{det}(A[RC])\mbox{det}(\overline{A}[RC])}$

where the sum is taken over all possible ${n\choose k}$ sets of $k$ columns $C$ (with the rows $R$ held fixed), and $\sigma = r_1+\cdots+r_k+c_1+\cdots+c_k$ is the sum of all the chosen row and column indices. Clearly the original Laplace expansion is just this with $k=1$. For some nice examples, with pictures, see this elegant exposition by David Eberly.
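To convince myself the signs work out, here is a sketch of the theorem in Python (the naming is my own), expanding along the first $k$ rows and falling back to the ordinary $k=1$ expansion for the sub-determinants:

```python
from itertools import combinations

def det_gen(A, k=1):
    """Determinant via the generalized Laplace expansion along the first
    k rows (R = 1, ..., k); sub-determinants recurse with k = 1."""
    n = len(A)
    if n == 1:
        return A[0][0]
    k = min(k, n - 1)
    R = tuple(range(1, k + 1))
    total = 0
    for C in combinations(range(1, n + 1), k):
        rows = [r for r in range(1, n + 1) if r not in R]
        cols = [c for c in range(1, n + 1) if c not in C]
        sub = [[A[r - 1][c - 1] for c in C] for r in R]          # A[RC]
        comp = [[A[r - 1][c - 1] for c in cols] for r in rows]   # complement
        # sigma is the sum of all chosen row and column indices
        total += (-1) ** (sum(R) + sum(C)) * det_gen(sub) * det_gen(comp)
    return total
```

With $k=1$ this reduces to the cofactor expansion along the first row, so the two methods can be checked against each other on any small matrix.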

But is this generalized method any better than simply using $k=1$? Well, suppose that the least possible numbers of multiplications and additions needed to obtain an $n\times n$ determinant using any Laplace expansion are $M_n, A_n$ respectively.

Suppose we perform the generalized expansion for an $n\times n$ determinant with a particular $1\le k\le n-1$. There will be ${n\choose k}$ $k\times k$ determinants to evaluate, and the same number of $(n-k)\times(n-k)$ determinants. These all need to be multiplied in pairs, and the products added. This means that

$\displaystyle{(M_n,A_n) = {n\choose k}\left((M_k,A_k)+(M_{n-k},A_{n-k})\right)+\left({n\choose k},{n\choose k}-1\right)}$

More neatly:

$\displaystyle{M_n = {n\choose k}(M_k+M_{n-k}+1)}$

$\displaystyle{A_n = {n\choose k}(A_k+A_{n-k}+1)-1}$

By symmetry, these values will be the same for $k$ and $n-k$. So the smallest values will be obtained by computing them for all $k = 1,2,\ldots,\lfloor n/2\rfloor$ and choosing the $k$ which produces the smallest value of $M_n$ (since multiplication is more computationally expensive than addition, we minimize the number of multiplications).

This gives the following table:

$\displaystyle{\begin{array}{c|c|c}n&M_n&A_n\\ \hline 1&0&0\\2&2&1\\ 3&9&5\\ 4&30&17\\ 5&120&69\\6&380&219\\ 7&1400&804\\ 8&4270&2449\\ 9&19026&10961\\ 10&60732&35027\end{array}}$
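The table can be generated by the obvious dynamic programming over $k$; a sketch (the function name is mine), using `math.comb` for the binomial coefficients:

```python
from math import comb

def best_costs(nmax):
    """Minimal (M_n, A_n) over all generalized Laplace splits,
    choosing at each n the k that minimizes the multiplication count."""
    M, A = {1: 0}, {1: 0}
    for n in range(2, nmax + 1):
        best = None
        for k in range(1, n // 2 + 1):
            m = comb(n, k) * (M[k] + M[n - k] + 1)
            a = comb(n, k) * (A[k] + A[n - k] + 1) - 1
            if best is None or m < best[0]:
                best = (m, a)
        M[n], A[n] = best
    return M, A
```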

We can see that these values are smaller than those in the first table above, but they are still not sub-exponential (at least, I don’t think they are).

I think I’ll split this into multiple posts. That’s enough for one.