Inverse of a larger matrix and Power Series


Most textbooks I have on linear algebra discuss finding the inverse of a 2×2 matrix, and appear to have little to say about inverting 3×3 matrices and above. I was trying to invert a 5×5 matrix, but all I could find after looking through two linear algebra textbooks and the internet was info for 2×2 matrices.

    \begin{equation*} \begin{bmatrix}a & b\\c & d\end{bmatrix}^{-1} = \frac{1}{ad-bc} \begin{bmatrix}d & -b\\-c & a\end{bmatrix} \end{equation*}

… except the web page I read used Comic Sans. Then I came across a really good linear algebra textbook with a technique, demonstrated on a 3×3 example, that scales to any size. Bingo. The next problem is presenting it here: after an online lookup, there appears to be no satisfying way to typeset the usual augmented-matrix notation, so I will separate the two matrices with a right arrow (\longrightarrow) instead of using a single matrix with a vertical bar down the middle. I also have to reckon with the limitations of LaTeX notation in a blog article. So, please excuse any makeshift notation used in this article.

    \begin{equation*} \begin{bmatrix} 1&0&0&0&0 \\ 1&1&0&0&0 \\ 1&2&1&0&0 \\ 1&3&3&1&0 \\ 1&4&6&4&1 \\ \end{bmatrix}\longrightarrow \begin{bmatrix} 1&0&0&0&0 \\ 0&1&0&0&0 \\ 0&0&1&0&0 \\ 0&0&0&1&0 \\ 0&0&0&0&1 \\ \end{bmatrix} \end{equation*}

The idea is to turn the left matrix into the right matrix (the identity) using Gauss-Jordan elimination, with every row operation applied to the left matrix also applied to the right matrix. The right matrix starts out as the identity, but it does not stay that way: by the time the left matrix has been reduced to the identity, the right matrix has become the inverse. First, I subtract the first row from each of the rows below it:

    \begin{equation*} \begin{bmatrix} 1&0&0&0&0 \\ 0&1&0&0&0 \\ 0&2&1&0&0 \\ 0&3&3&1&0 \\ 0&4&6&4&1 \end{bmatrix}\longrightarrow \begin{bmatrix} 1&0&0&0&0 \\ -1&1&0&0&0 \\ -1&0&1&0&0 \\ -1&0&0&1&0 \\ -1&0&0&0&1 \\ \end{bmatrix} \end{equation*}

Then I apply the same thinking with the second row to eliminate the entries below the pivot in the second column of the left matrix, and proceed similarly for the remaining columns of the left-hand matrix:

    \begin{equation*} \begin{bmatrix} 1&0&0&0&0 \\ 0&1&0&0&0 \\ 0&0&1&0&0 \\ 0&0&3&1&0 \\ 0&0&6&4&1 \\ \end{bmatrix} \longrightarrow \begin{bmatrix} 1&0&0&0&0 \\ -1&1&0&0&0 \\ 1&-2&1&0&0 \\ 2&-3&0&1&0 \\ 3&-4&0&0&1 \\ \end{bmatrix} \end{equation*}

    \begin{equation*} \begin{bmatrix} 1&0&0&0&0 \\ 0&1&0&0&0 \\ 0&0&1&0&0 \\ 0&0&0&1&0 \\ 0&0&0&4&1 \\ \end{bmatrix}\longrightarrow \begin{bmatrix} 1&0&0&0&0 \\ -1&1&0&0&0 \\ 1&-2&1&0&0 \\ -1&3&-3&1&0 \\ -3&8&-6&0&1 \\ \end{bmatrix} \end{equation*}

    \begin{equation*} \begin{bmatrix} 1&0&0&0&0 \\ 0&1&0&0&0 \\ 0&0&1&0&0 \\ 0&0&0&1&0 \\ 0&0&0&0&1 \\ \end{bmatrix}\longrightarrow \begin{bmatrix} 1&0&0&0&0 \\ -1&1&0&0&0 \\ 1&-2&1&0&0 \\ -1&3&-3&1&0 \\ 1&-4&6&-4&1 \\ \end{bmatrix} \end{equation*}

Thus:

    \begin{equation*} \begin{bmatrix} 1&0&0&0&0 \\ 1&1&0&0&0 \\ 1&2&1&0&0 \\ 1&3&3&1&0 \\ 1&4&6&4&1 \\ \end{bmatrix}^{-1} = \begin{bmatrix} 1&0&0&0&0 \\ -1&1&0&0&0 \\ 1&-2&1&0&0 \\ -1&3&-3&1&0 \\ 1&-4&6&-4&1 \\ \end{bmatrix} \end{equation*}
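
If you would rather let a computer grind through the row operations, here is a rough Python sketch of the same Gauss-Jordan procedure, using exact fractions so no rounding creeps in. The name gauss_jordan_inverse is just my label for this post, and the sketch does no pivot swapping, so it assumes the natural pivots are nonzero (which holds here).

    from fractions import Fraction

    def gauss_jordan_inverse(matrix):
        """Row-reduce [A | I] until A becomes I; the right block is then A^-1."""
        n = len(matrix)
        left = [[Fraction(x) for x in row] for row in matrix]
        right = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
        for col in range(n):
            # Scale the pivot row so the pivot entry becomes 1
            # (no row swapping, so a zero pivot would break this sketch).
            pivot = left[col][col]
            left[col] = [x / pivot for x in left[col]]
            right[col] = [x / pivot for x in right[col]]
            # Subtract multiples of the pivot row from every other row,
            # doing the same operation to both the left and right blocks.
            for row in range(n):
                if row != col and left[row][col] != 0:
                    factor = left[row][col]
                    left[row] = [a - factor * b for a, b in zip(left[row], left[col])]
                    right[row] = [a - factor * b for a, b in zip(right[row], right[col])]
        return right

    pascal = [[1, 0, 0, 0, 0],
              [1, 1, 0, 0, 0],
              [1, 2, 1, 0, 0],
              [1, 3, 3, 1, 0],
              [1, 4, 6, 4, 1]]

    for row in gauss_jordan_inverse(pascal):
        print([int(x) for x in row])  # this particular inverse happens to be all integers

Running it prints the same five rows as the inverse above.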

You might recognize the number pattern in the very first matrix as part of Pascal’s Triangle:

    \begin{equation*} \begin{bmatrix} 1& 0& 0& 0& 0 \\ 1& 1& 0& 0& 0 \\ 1& 2& 1& 0& 0 \\ 1& 3& 3& 1& 0 \\ 1& 4& 6& 4& 1 \\ \end{bmatrix} \end{equation*}
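
If you would rather generate that matrix than copy it out of the triangle, the entries are simply binomial coefficients. A tiny sketch of my own:

    from math import comb

    n = 5
    pascal = [[comb(i, j) for j in range(n)] for i in range(n)]  # comb(i, j) is 0 when j > i
    for row in pascal:
        print(row)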

There was a Mathologer video on YouTube I saw recently which modified this Pascal-triangle-based matrix. First, it changed the main diagonal from 1s to 0s; then it removed the (now all-zero) top row, moved the remaining rows up, and added a new bottom row continuing the pattern. Then every second diagonal is “decorated” (Burkhard Polster’s words, not mine) with minus signs, in the following manner:

    \begin{equation*} \begin{bmatrix} 1& 0& 0& 0& 0 \\ -1& 2& 0& 0& 0 \\ 1& -3& 3& 0& 0 \\ -1& 4& -6& 4& 0 \\ 1& -5&10&-10& 5 \\ \end{bmatrix} \end{equation*}
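
Following the video's recipe, here is a small sketch of my own that builds the same matrix in two steps: shift the rows of Pascal's triangle up (each row losing its trailing 1), then decorate every second diagonal with minus signs:

    from math import comb

    n = 5
    # Each row is the next row of Pascal's triangle with its final 1 dropped
    # (equivalently, the Pascal matrix above shifted up by one row).
    shifted = [[comb(i + 1, j) if j <= i else 0 for j in range(n)] for i in range(n)]

    # The sign alternates with the distance from the main diagonal, so every
    # second diagonal gets "decorated" with minus signs.
    decorated = [[(-1) ** (i - j) * shifted[i][j] if j <= i else 0
                  for j in range(n)] for i in range(n)]

    for row in decorated:
        print(row)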

This weird concoction of Pascal’s triangle is built from the coefficients of the expansion of x^k - (x-1)^k, written with the constant term first, where k is the row number of the matrix counting from 1. (The x^k terms cancel, which is why each row stops at the main diagonal.) The inverse of this matrix looks quite different, but it has a special property. When you find the inverse and multiply it by a 5×1 column of successive powers of n, you get:

    \begin{equation*} \begin{bmatrix} 1& 0& 0& 0& 0 \\ -1& 2& 0& 0& 0 \\ 1& -3& 3& 0& 0 \\ -1& 4& -6& 4& 0 \\ 1& -5&10&-10& 5 \\ \end{bmatrix}^{-1} \times \begin{bmatrix} n \\ n^2 \\ n^3 \\ n^4 \\ n^5 \end{bmatrix} = \begin{bmatrix} 1& 0& 0& 0& 0 \\ \frac{1}{2}& \frac{1}{2}& 0& 0& 0 \\ \frac{1}{6}& \frac{1}{2}& \frac{1}{3}& 0& 0 \\ 0& \frac{1}{4}& \frac{1}{2}& \frac{1}{4}& 0 \\ -\frac{1}{30}& 0& \frac{1}{3}& \frac{1}{2}& \frac{1}{5} \\ \end{bmatrix} \times \begin{bmatrix} n \\ n^2 \\ n^3 \\ n^4 \\ n^5 \end{bmatrix} \end{equation*}

We multiply by the column of powers of n to lead into the next step, which reveals that each row of this inverse matrix supplies the coefficients of the formula for a sum of powers:

    \begin{align*} S_0 &= \sum_{k=1}^{n} 1 = n \\ S_1 &= \sum_{k=1}^{n} k = \frac{n}{2} + \frac{n^2}{2} \\ S_2 &= \sum_{k=1}^{n} k^2 = \frac{n}{6} + \frac{n^2}{2} + \frac{n^3}{3} \\ S_3 &= \sum_{k=1}^{n} k^3 = \frac{n^2}{4} + \frac{n^3}{2} + \frac{n^4}{4} \\ S_4 &= \sum_{k=1}^{n} k^4 = -\frac{n}{30} + \frac{n^3}{3} + \frac{n^4}{2} + \frac{n^5}{5} \end{align*}
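
To convince myself these really are the right coefficients, here is a sketch that inverts the decorated matrix with exact fractions and checks each row against a brute-force sum. The invert helper is a condensed copy of the Gauss-Jordan routine from earlier, repeated so the snippet runs on its own:

    from fractions import Fraction
    from math import comb

    def invert(matrix):
        """Gauss-Jordan inversion with exact fractions (assumes nonzero pivots)."""
        n = len(matrix)
        left = [[Fraction(x) for x in row] for row in matrix]
        right = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
        for col in range(n):
            pivot = left[col][col]
            left[col] = [x / pivot for x in left[col]]
            right[col] = [x / pivot for x in right[col]]
            for row in range(n):
                if row != col and left[row][col] != 0:
                    f = left[row][col]
                    left[row] = [a - f * b for a, b in zip(left[row], left[col])]
                    right[row] = [a - f * b for a, b in zip(right[row], right[col])]
        return right

    size = 5
    decorated = [[(-1) ** (i - j) * comb(i + 1, j) if j <= i else 0
                  for j in range(size)] for i in range(size)]

    # Row k of the inverse holds the coefficients of n, n^2, ..., n^(k+1) in S_k.
    for k, coeffs in enumerate(invert(decorated)):
        for n in range(1, 8):
            brute = sum(m ** k for m in range(1, n + 1))
            formula = sum(c * n ** power for power, c in enumerate(coeffs, start=1))
            assert formula == brute
    print("S_0 through S_4 agree with brute-force sums for n = 1..7")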

The payoff here is that you can make the matrix as large as you want and obtain summation formulae for ever higher powers, ad infinitum. For example, a 10×10 matrix yields the formula for S_9:

    \begin{align*} S_9 = \sum_{k=1}^{n} k^9 &= 1^9 + 2^9 + 3^9 + \cdots + n^9 \\ &= -\frac{3n^2}{20} + \frac{n^4}{2} - \frac{7n^6}{10} + \frac{3n^8}{4} + \frac{n^9}{2} + \frac{n^{10}}{10} \end{align*}
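
Here is the same idea scaled up as a sketch: build the (m+1)×(m+1) decorated matrix, invert it exactly, and read the coefficients of S_m off the last row of the inverse. The name power_sum_coefficients is my own choice:

    from fractions import Fraction
    from math import comb

    def power_sum_coefficients(m):
        """Coefficients of n, n^2, ..., n^(m+1) in S_m = 1^m + 2^m + ... + n^m."""
        size = m + 1
        left = [[Fraction((-1) ** (i - j) * comb(i + 1, j)) if j <= i else Fraction(0)
                 for j in range(size)] for i in range(size)]
        right = [[Fraction(int(i == j)) for j in range(size)] for i in range(size)]
        for col in range(size):  # Gauss-Jordan elimination, exactly as before
            pivot = left[col][col]
            left[col] = [x / pivot for x in left[col]]
            right[col] = [x / pivot for x in right[col]]
            for row in range(size):
                if row != col and left[row][col] != 0:
                    f = left[row][col]
                    left[row] = [a - f * b for a, b in zip(left[row], left[col])]
                    right[row] = [a - f * b for a, b in zip(right[row], right[col])]
        return right[m]  # the last row of the inverse gives S_m

    for power, c in enumerate(power_sum_coefficients(9), start=1):
        if c != 0:
            print(f"({c}) n^{power}")

For m = 9 this prints exactly the six nonzero coefficients shown above.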

More generally:

    \begin{equation*} S_m = \sum_{k=1}^{n} k^m \end{equation*}

for any non-negative integer m.

What are your thoughts?