GATE Linear Algebra Solutions

Elegant solutions to selected questions from the Linear Algebra portion of the GATE exam.

This post is born out of my frustration at how many people do not realize there are elegant answers to these GATE questions. GATE is an Indian exam used to admit students to Masters and PhD programmes. Some PSUs also use the exam score as a filter in hiring.

I often see horrible answers to GATE questions in the back of those books released by coaching institutes. They take publicly available question papers, sort the questions by topic, then pay someone to brute-force everything and present that as the "answers".

Remember the total duration of the GATE examination is 180 minutes. The subject-specific section is worth only 85 of the 100 marks, so we should spend only $180\times 0.85 = 153$ minutes earning those marks. That works out to 1.8 minutes per mark: under 2 minutes for each of the 25 1-mark questions and 3.6 minutes for each of the 30 2-mark questions. But that is cutting it too close, leaving no time to check answers, deal with harder questions, and so on. So ideally, for concepts you are clear on, you should spend less than a minute on a 1-mark question and less than 2 minutes on a 2-mark question. Solutions that let you go this fast are elegant.

Since I love Linear Algebra so much, I decided to start with this topic. Without further ado, let me present some solutions that I think qualify as elegant. If you can think of other solutions that I missed, please comment below or message me.

The problem with elegant solutions is that they seem to need a flash of inspiration, a sense of intuition that is only developed by practice. Brute-force solutions can be applied mechanically and work for a larger class of problems, but they are typically slower. Elegant solutions appear to those who can see beauty in the subject; they usually work by exploiting some special feature of the problem at hand.

EC below stands for the Electronics and Communication Engineering paper. Many thanks to Adarsh for curating these questions.

Q19 2013 EC GATE PAPER

Q 1) (1 mark) The minimum eigenvalue of the following matrix is

$$
\left [ \begin{matrix}
3 & 5 & 2 \\
5 & 12 & 7 \\
2 & 7 & 5 \\
\end{matrix} \right ]
$$

(A) 0            (B) 1            (C) 2            (D) 3

Ans 1)

Notice that the sum of rows 1 and 3 of the matrix equals row 2. This means the number of linearly independent rows in this matrix is strictly less than 3.

Since the matrix is not full rank, we can be sure that at least one eigenvalue is $0$. Why? The determinant is the product of the eigenvalues, and it is $0$ when the rows are not linearly independent (think of the volume of the parallelepiped formed by these rows, or simply note that a rank-deficient matrix is not invertible).

So immediately we can be sure the answer is (A) since there are no negative numbers among the options and we are sure $0$ is an eigenvalue.
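
While practising (obviously not in the exam hall), it is worth confirming such answers numerically. A minimal numpy sanity check, nothing more:

```python
import numpy as np

M = np.array([[3, 5, 2],
              [5, 12, 7],
              [2, 7, 5]])

# eigvalsh is meant for symmetric/Hermitian matrices and
# returns the eigenvalues in ascending order.
print(np.linalg.eigvalsh(M))  # ~ [0.0, 1.81, 18.19], so the minimum is 0
```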

Now compare the above solution to the one provided by Made Easy (a coaching company that releases "solved" GATE papers).

[Image: Made Easy's brute-force solution]
Would you be able to do those steps in under a minute with 90%+ accuracy? I did not think so.

As an aside, for completeness: to be sure that $0$ is the minimum eigenvalue of this matrix, we need to check that this symmetric (a special case of Hermitian) matrix is positive semi-definite, i.e. that all its eigenvalues are greater than or equal to $0$.

I thought we could use Sylvester's criterion for this: delete any $n-k$ rows and the corresponding columns, and the determinant of what remains is a principal minor $D_k$; check that every one of them is non-negative. The entire $3 \times 3$ matrix ($k=3$) has determinant zero, as we found above, so that one is fine. All the diagonal entries ($k=1$) are positive, so those are fine too. For $k=2$ we have to check every $2 \times 2$ principal minor, which can be time consuming (here they all turn out to be $11$). There are alternative ways, which require you to row reduce the matrix.
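
If you want to see the full principal-minor check spelled out, here is a small Python sketch; the helper name `is_psd_by_minors` and the tolerance are my own choices, not anything standard:

```python
from itertools import combinations
import numpy as np

def is_psd_by_minors(M, tol=1e-9):
    # A symmetric matrix is positive semi-definite iff EVERY principal
    # minor (not just the leading ones) is non-negative.
    n = M.shape[0]
    for k in range(1, n + 1):
        for idx in combinations(range(n), k):
            # Principal minor: determinant of the submatrix on rows = cols = idx.
            if np.linalg.det(M[np.ix_(idx, idx)]) < -tol:
                return False
    return True

M = np.array([[3, 5, 2],
              [5, 12, 7],
              [2, 7, 5]])
print(is_psd_by_minors(M))  # True, so 0 really is the minimum eigenvalue
```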

Q1 2017 EC GATE PAPER SESSION 2

Q 2) (1 mark) The rank of the following matrix is ___

$$
\left [ \begin{matrix}
1 & -1 & 0 & 0 & 0 \\
0 & 0 & 1 & -1 & 0 \\
0 & 1 & -1 & 0 & 0 \\
-1 & 0 & 0 & 0 & 1 \\
0 & 0 & 0 & 1 & -1 \\
\end{matrix} \right ]
$$

Ans 2) Notice that the rows sum to $\vec 0$. Performing the operation $R_5 = R_1 + R_2 + R_3 + R_4 + R_5$ produces a zero row, so the rank is $4$ or less. The other four rows are left unchanged by this row operation, and no linear combination of them gives $\vec 0$ unless all the coefficients are $0$ (reasoning below). This means the rank is exactly $4$.

$$
a \left [ \begin{matrix}
1  \\
-1  \\
0 \\
0 \\
0  \\
\end{matrix} \right ] + b \left [ \begin{matrix}
0 \\
0  \\
1 \\
-1 \\
0  \\
\end{matrix} \right ]  + c \left [ \begin{matrix}
0  \\
1  \\
-1 \\
0 \\
0  \\
\end{matrix} \right ] + d \left [ \begin{matrix}
-1  \\
0  \\
0 \\
0 \\
1  \\
\end{matrix} \right ] =  \left [ \begin{matrix}
0  \\
0  \\
0 \\
0 \\
0  \\
\end{matrix} \right ]
$$

$d$ needs to be zero to keep the last component zero, since R4 is the only row with a nonzero last component. In the same way, $b$ needs to be zero so that the fourth component, which is nonzero only in R2, does not contribute; we have no other way of controlling those components. With $b$ forced to zero, $c$ is also forced to zero to stop the third component of R3 from contributing. In the same way, $a$ needs to be zero to stop the second component of R1 from contributing. So these 4 rows are linearly independent.

You don't need pen and paper to go through this logic. Just look at the rows and ask what weighted sum could possibly work. The matrix is sitting there on the screen with all the rows aligned, and you can convince yourself that each coefficient is forced to be zero, in order.
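
At home, the same conclusion drops out of one numpy call; again a practice-time sanity check, not an exam technique:

```python
import numpy as np

A = np.array([[ 1, -1,  0,  0,  0],
              [ 0,  0,  1, -1,  0],
              [ 0,  1, -1,  0,  0],
              [-1,  0,  0,  0,  1],
              [ 0,  0,  0,  1, -1]])

print(A.sum(axis=0))             # [0 0 0 0 0]: the rows sum to the zero vector
print(np.linalg.matrix_rank(A))  # 4
```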

The solution given by Made Easy is shown below.

[Image: Made Easy's brute-force solution]
This method would work on any matrix, and it does not show that the student understood what is special about this particular one.

Q29 2014 EC GATE PAPER

Q 3) (2 marks) Consider this matrix,

$$ \quad J_{{6}} = \begin{pmatrix}0&0&0&0&0&1\\0&0&0&0&1&0\\0&0&0&1&0&0\\0&0&1&0&0&0\\0&1&0&0&0&0\\1&0&0&0&0&0\end{pmatrix}$$

which is obtained by reversing the order of the columns of the identity matrix $I_6$. Let $P_6 = I_6 + \alpha J_6$, where $\alpha$ is a non-negative real number. The value of $\alpha$ for which $det(P_6) = 0$ is ____.

Ans 3)

a)

Consider what $P_6$ looks like. Remember the eigenvalue equation for a matrix? If $\lambda$ is an eigenvalue of $J_6$ then it satisfies the characteristic polynomial,

$$det(\lambda I_6 - J_6) = 0$$

To bring this to the form of $P_6$, note that $J_6$ is full rank, so $0$ is not an eigenvalue and $\lambda \neq 0$. We can therefore take $\lambda$ common from both terms.

$$det\left (\lambda(I_6 + \frac{-1}{\lambda}J_6) \right ) = 0$$

Since we can take scalars out of determinants,

$$\lambda^6 det\left (I_6 + \frac{-1}{\lambda}J_6\right ) = 0$$

We know $\lambda \neq 0$ so,

$$ det\left (I_6 + \frac{-1}{\lambda}J_6\right ) = 0$$

So comparing the above equation with $P_6$, we see that $det(P_6) = 0$ iff $\alpha = \frac{-1}{\lambda}$ for some eigenvalue $\lambda$ of $J_6$.

So what are the eigenvalues of $J_6$? This is an exchange matrix: it flips a vector about its middle. So what kind of vectors remain invariant? Vectors that are symmetric about the centre, like the all-ones vector, so $1$ is surely an eigenvalue. But that gives $\alpha = -1$, and the question demands a non-negative $\alpha$.

If instead the vector is antisymmetric about the centre (flipping it negates it), then $-1$ is an eigenvalue, which gives $\alpha = 1$. Think of a vector which would be such an eigenvector,

$$v = \left [ \begin{matrix}
-1  \\
-1  \\
-1 \\
1 \\
1  \\
1  \\
\end{matrix} \right ]$$

Since the matrix is symmetric, all eigenvalues are real. The eigenspace for each of these two eigenvalues has dimension 3, since we can pick 3 linearly independent eigenvectors for each (shown below). That accounts for the whole 6-dimensional space, so we can conclude that $1$ and $-1$ are the only eigenvalues, and the required non-negative $\alpha$ is $\frac{-1}{-1} = 1$.

For eigenvalue $1$,

$$\left [ \begin{matrix}
1  \\
0  \\
0 \\
0 \\
0  \\
1  \\
\end{matrix} \right ], \left [ \begin{matrix}
0  \\
1  \\
0 \\
0 \\
1  \\
0  \\
\end{matrix} \right ], \left [ \begin{matrix}
0  \\
0  \\
1 \\
1 \\
0  \\
0  \\
\end{matrix} \right ]$$

For eigenvalue $-1$,

$$\left [ \begin{matrix}
-1  \\
0  \\
0 \\
0 \\
0  \\
1  \\
\end{matrix} \right ], \left [ \begin{matrix}
0  \\
-1  \\
0 \\
0 \\
1  \\
0  \\
\end{matrix} \right ], \left [ \begin{matrix}
0  \\
0  \\
-1 \\
1 \\
0  \\
0  \\
\end{matrix} \right ]$$

Let's check what solution Made Easy offers.

[Image: Made Easy's solution, part 1]
It makes no sense how they got all those terms; it would take far too long to find all the minors.
[Image: Made Easy's solution, part 2]
I have no clue about the steps they used. It just seems wrong and impractical.

b) If you wanted to brute-force this, you could remember that adding a multiple of one row to another does not change the determinant. Just perform $R_{6-i+1} = R_{6-i+1} - \alpha R_{i}$ for $i = 1, 2, 3$. Now you have an upper triangular matrix; take the product of the diagonal elements to find its determinant.

$$det( I_6 + \alpha J_6) = (1-\alpha^2)^3 = 0$$

So $\alpha = \pm 1$, and since $\alpha$ must be non-negative, $\alpha = 1$.
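
A quick numerical spot check of that closed form for a few values of $\alpha$, just to be safe:

```python
import numpy as np

J6 = np.fliplr(np.eye(6))      # the exchange matrix J_6
for alpha in (0.0, 0.5, 1.0, 2.0):
    d = np.linalg.det(np.eye(6) + alpha * J6)
    print(alpha, round(d, 6), round((1 - alpha**2)**3, 6))
# The two columns agree, and among non-negative alphas only alpha = 1 gives 0.
```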

Q43 2015 EC GATE PAPER SET 1

Q 4) Two sequences $[a,b,c]$ and $[A,B,C]$ are related as,

$$\left [ \begin{matrix}
A \\
B \\
C  \\
\end{matrix} \right ] =
\left [ \begin{matrix}
1 & 1 & 1 \\
1 & W_3^{-1} & W_3^{-2} \\
1 & W_3^{-2} & W_3^{-4}\\
\end{matrix} \right ] \left [ \begin{matrix}
a \\
b \\
c  \\
\end{matrix} \right ]  \ \ \ \ \ \ \ \ \ \ \ (1)
$$

where $W_3 = e^{j\frac{2\pi}{3}}$. If another sequence $[p,q,r]$ is derived as,

$$\left [ \begin{matrix}
p \\
q \\
r  \\
\end{matrix} \right ] =
\left [ \begin{matrix}
1 & 1 & 1 \\
1 & W_3^{1} & W_3^{2} \\
1 & W_3^{2} & W_3^{4}\\
\end{matrix} \right ] \left [ \begin{matrix}
1 & 0 & 0 \\
0 & W_3^{2} & 0 \\
0 & 0 & W_3^{4}\\
\end{matrix} \right ] \left [ \begin{matrix}
\frac{A}{3} \\
\frac{B}{3}  \\
\frac{C}{3}   \\
\end{matrix} \right ]   \ \ \ \ \ \ \ \ \ \ \ (2)
$$

The relationship between the sequences $[p,q,r]$ and $[a,b,c]$ is,

(a) $[p,q,r] = [b,a,c]$

(b) $[p,q,r] = [b,c,a]$

(c) $[p,q,r] = [c,a,b]$

(d) $[p,q,r] = [c,b,a]$

Ans 4)

First we notice that $W_3 = e^{2\pi j\frac{1}{3}}$ is a cube root of unity. Next, if we take the $\frac{1}{3}$ out of the vector on the right of (2), we can substitute (1) into (2).

$$\left [ \begin{matrix}
p \\
q \\
r  \\
\end{matrix} \right ] =
\left [ \begin{matrix}
1 & 1 & 1 \\
1 & W_3^{1} & W_3^{2} \\
1 & W_3^{2} & W_3^{4}\\
\end{matrix} \right ] \left [ \begin{matrix}
\frac{1}{3} & 0 & 0 \\
0 & \frac{W_3^{2} }{3}& 0 \\
0 & 0 & \frac{W_3^{4}}{3}\\
\end{matrix} \right ] \left [ \begin{matrix}
1 & 1 & 1 \\
1 & W_3^{-1} & W_3^{-2} \\
1 & W_3^{-2} & W_3^{-4}\\
\end{matrix} \right ] \left [ \begin{matrix}
a \\
b \\
c  \\
\end{matrix} \right ]   \ \ \ \ \ \ \ \ \ \ \ (3)
$$

At this point it should be clear that what we are seeing is a similarity transform of a diagonal matrix, which (judging by the options) must produce a permutation matrix.

Since it is a diagonalisation, the eigenvector matrix used to diagonalize the permutation matrix is actually the 3-point DFT matrix. With the $\frac{1}{\sqrt{3}}$ normalization this matrix is unitary, so its inverse is just the conjugate transpose; the two $\frac{1}{\sqrt{3}}$ factors combine into the $\frac{1}{3}$ seen in (3). So the eigenvalues of that permutation matrix are the diagonal elements, and the eigenvectors are the column vectors.

Remember, if we construct a matrix $W$ with the eigenvectors of $P_\pi$ as its columns, then the matrix eigenvalue equation is,

$$P_\pi W = W\Lambda $$

where $\Lambda$ is the diagonal matrix holding the eigenvalues of $P_\pi$. Rearranging,

$$P_\pi = W\Lambda W^{-1} $$

So (3) is actually,

$$\left [ \begin{matrix}
p \\
q \\
r  \\
\end{matrix} \right ] = P_\pi \left [ \begin{matrix}
a \\
b \\
c  \\
\end{matrix} \right ]   = W\Lambda W^{-1} \left [ \begin{matrix}
a \\
b \\
c  \\
\end{matrix} \right ]  $$

Expanding the terms (writing $\omega = W_3$ and splitting the $\frac{1}{3}$ into two $\frac{1}{\sqrt{3}}$ normalization factors),

$$\left [ \begin{matrix}
p \\
q \\
r  \\
\end{matrix} \right ] = \frac{1}{\sqrt{3}}\left [ \begin{matrix}
1 & 1 & 1 \\
1 & \omega & \omega ^{2} \\
1 & \omega ^{2} & \omega ^{4}\\
\end{matrix} \right ]
\left [ \begin{matrix}
1 & 0 & 0 \\
0 & \omega ^{2}  & 0 \\
0 & 0 & \omega ^{4}\\
\end{matrix} \right ] \frac{1}{\sqrt{3}} \left [ \begin{matrix}
1 & 1 & 1 \\
1 & \omega^{-1} & \omega ^{-2} \\
1 & \omega ^{-2} & \omega ^{-4}\\
\end{matrix} \right ]\left [ \begin{matrix}
a \\
b \\
c  \\
\end{matrix} \right ]   \ \ \ \ \ \ \ \ \ \ \ (4)
$$

We will need the fact that the powers of an $n$th root of unity are $n$-periodic. Consider the sequence

$$\omega^{-5}, \omega ^{-4}, \omega^{-3}, \omega ^{-2}, \omega^{-1}, \omega ^{0}, \omega^{1}, \omega ^{2}, \omega^{3}, \omega ^{4}, \omega^{5}, \omega ^{6}$$

Since $\omega$ here is a cube root of unity, the above sequence is the same as,

$$\omega^{1}, \omega ^{2}, 1, \omega ^{1}, \omega^{2}, 1, \omega, \omega ^{2}, 1, \omega, \omega^{2}, 1$$

Now the crucial observation: $\omega^{2}$ is the eigenvalue paired with the 2nd column of the eigenvector matrix, so multiplying that column by $\omega^{2}$ is the same as applying the permutation matrix $P_\pi$ to that eigenvector.

$$P_\pi \left [ \begin{matrix} 1  \\  \omega^{1}  \\ \omega ^{2} \\ \end{matrix} \right ]  = \omega^{2} \left [  \begin{matrix}  1  \\ \omega^{1}  \\ \omega ^{2} \\ \end{matrix} \right ] = \left [ \begin{matrix} \omega^{2} \\ \omega  \\ 1 \end{matrix} \right ] $$

We can say the same for the 3rd eigenvector too.

$$P_\pi \left [ \begin{matrix}
1 \\
\omega ^{2} \\
\omega ^{4}\\
\end{matrix} \right ]   = \omega^{4} \left [ \begin{matrix}
1 \\
\omega ^{2} \\
\omega ^{4}\\
\end{matrix} \right ]   = \left [ \begin{matrix}
\omega^{4} \\
1  \\
\omega^{2} \\
\end{matrix} \right ] $$

So we can conclude the permutation matrix is,

$$P_\pi = \left [ \begin{matrix}
0 & 0 & 1 \\
1 & 0 & 0 \\
0 & 1 & 0 \\
\end{matrix} \right ]$$

So the answer is (c) $[p,q,r] = [c,a,b]$.


I write so much for the sake of clarity. If you want to solve this quickly, just check what the eigenvalue $W_3^{2}$ does to the second eigenvector.

$$P_\pi \left [ \begin{matrix}
1 \\
W_3\\
W_3^{2}\\
\end{matrix} \right ]   = W_3^{2} \left [ \begin{matrix}
1 \\
W_3\\
W_3^{2}\\
\end{matrix} \right ]    = \left [ \begin{matrix}
W_3^{2} \\
1\\
W_3\\
\end{matrix} \right ] $$

So we immediately see that the 3rd element becomes the 1st, the 1st becomes the 2nd, and the 2nd becomes the 3rd; applied to $[a,b,c]$ this gives $[c,a,b]$.

The official answer key (p. 19) also confirms that it is (c).

Made Easy went for the classic brute force on equation (3),

[Image: Made Easy's solution, part 1]
That matrix-matrix multiply would take some time.
[Image: Made Easy's solution, part 2]
Then they just write the "answer", with no steps in between.

I did the correct brute force over at my answer on Stack Exchange.
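
If you want to verify the whole chain numerically, here is a numpy sketch of equation (3), with the matrices copied straight from the question:

```python
import numpy as np

w = np.exp(2j * np.pi / 3)                 # W_3, a primitive cube root of unity

F = np.array([[1, 1,    1   ],
              [1, w,    w**2],
              [1, w**2, w**4]])            # eigenvector matrix from (2)
Finv = np.array([[1, 1,     1    ],
                 [1, w**-1, w**-2],
                 [1, w**-2, w**-4]]) / 3   # matrix from (1), with the 1/3 absorbed
L = np.diag([1, w**2, w**4])               # diagonal matrix from (2)

P = np.round((F @ L @ Finv).real, 6)
print(P)  # [[0. 0. 1.], [1. 0. 0.], [0. 1. 0.]] -> [p,q,r] = [c,a,b], option (c)
```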

Q1 2017 EC GATE PAPER SET 1

Q 5) Consider the $5 \times 5$ matrix,

$$ A = \begin{pmatrix}1&2&3&4&5\\5&1&2&3&4\\4&5&1&2&3\\3&4&5&1&2\\2&3&4&5&1 \end{pmatrix}$$

It is given that $A$ has only one real eigenvalue. Then the real eigenvalue of $A$ is

(a) -2.5

(b) 0

(c) 15

(d) 25

Ans 5)

Notice that the sum of each row is equal to $15$. So if you multiply this matrix by the all-ones vector, you get a vector of all $15$s, which means $15$ is an eigenvalue of $A$. Since we are told $A$ has only one real eigenvalue, it must be this one, so (c) is the answer.

$$ A \vec 1 = \left [ \begin{matrix}1&2&3&4&5\\5&1&2&3&4\\4&5&1&2&3\\3&4&5&1&2\\2&3&4&5&1 \end{matrix} \right ] \left [ \begin{matrix}1\\1\\1\\1\\1 \end{matrix}  \right ] = 15 \left [ \begin{matrix}1\\1\\1\\1\\1 \end{matrix} \right ] $$

The good thing about this approach is that it works on any matrix whose rows all sum to the same number.
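
The check takes two lines of numpy if you want to see it while practising:

```python
import numpy as np

A = np.array([[1, 2, 3, 4, 5],
              [5, 1, 2, 3, 4],
              [4, 5, 1, 2, 3],
              [3, 4, 5, 1, 2],
              [2, 3, 4, 5, 1]])

print(A @ np.ones(5))  # [15. 15. 15. 15. 15.] -> 15 is an eigenvalue
```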

[Image: Made Easy's solution]
The solution by Made Easy is wrong, perhaps because of a typo.

They claim the sum of all elements in any one row must be zero since the determinant is zero. The implication actually runs the other way: if all rows sum to zero, then zero is an eigenvalue (same logic as above, with the all-ones vector as the eigenvector), which means the determinant, being the product of the eigenvalues, is zero.

But the determinant being zero does not imply that all the row sums are zero. Check the example below, where the determinant is zero but the row sums are non-zero.

$$\left | \begin{matrix}1&-4\\1&-4 \end{matrix} \right | = 0$$

If we can find a $\lambda$ such that all rows of $A - \lambda I$ sum to $0$, then that $\lambda$ satisfies the characteristic polynomial and is an eigenvalue.


Matrices of this kind form a class known as circulant matrices. The permutation matrices we discussed above are a special case. All matrices of this type can be diagonalized by the DFT matrix, also shown above.

I want to derive the eigenvectors and eigenvalues for these matrices, based on this answer on Stack Exchange.

We can decompose any circulant matrix as a sum of powers of the right permutation matrix. Put a $1$ wherever the largest number, $5$, appears in $A$.

$$\phi =  \left [ \begin{matrix}0&0&0&0&1\\1&0&0&0&0\\0&1&0&0&0\\0&0&1&0&0\\0&0&0&1&0 \end{matrix} \right ]$$

Think about what happens if we apply this matrix twice to a vector. The first time, it pushes the first row down to the second, the second to the third, and so on, until the last row is brought up to the first. Doing it one more time brings the last row to the 2nd position, the 1st row to the 3rd position, etc.

$$\phi^2 =  \left [ \begin{matrix}0&0&0&0&1\\1&0&0&0&0\\0&1&0&0&0\\0&0&1&0&0\\0&0&0&1&0 \end{matrix} \right ] \left [ \begin{matrix}0&0&0&0&1\\1&0&0&0&0\\0&1&0&0&0\\0&0&1&0&0\\0&0&0&1&0 \end{matrix} \right ]$$

$$\phi^2 =  \left [ \begin{matrix}0&0&0&1&0\\0&0&0&0&1\\1&0&0&0&0\\0&1&0&0&0\\0&0&1&0&0 \end{matrix} \right ] $$

Similarly,

$$\phi^3 =  \left [ \begin{matrix}0&0&1&0&0\\0&0&0&1&0\\0&0&0&0&1\\1&0&0&0&0\\0&1&0&0&0 \end{matrix} \right ] $$

So I hope you see the pattern: the $1$s travel backward, because what this matrix does is a wrap-around downshift. The periodicity is clear: $\phi^5 = I$, so $\phi^6 = \phi$.
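
If you want to play with this, np.roll builds the same shift matrix directly; a tiny sketch confirming the periodicity:

```python
import numpy as np

# Cyclic downshift: row i of the identity moves to row i+1 (mod 5).
phi = np.roll(np.eye(5), 1, axis=0)
print(phi)  # matches the matrix written above
print(np.allclose(np.linalg.matrix_power(phi, 5), np.eye(5)))  # True: phi^5 = I, so phi^6 = phi
```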

Now I can use these matrices to construct my $A$, and it should be clear why I chose the locations of the $5$s for the ones.

$$A = 5\phi + 4\phi^2 + 3\phi^3 + 2\phi^4 + 1I$$

Why? Look at the positions that $5$, $4$, $3$, $2$ and $1$ occupy in $A$. These matrices have ones in exactly those positions. It just so happens that $\phi^5 = I$, the identity matrix.

We can do a similar decomposition for any circulant matrix. Why did we write $A$ in terms of $\phi$? Because we can find the eigenvalues and eigenvectors of $\phi$ easily.

If we construct a vector out of the five 5th roots of unity ($\omega = e^{\frac{2\pi i}{5}}$),

$$v_j = \left [ \begin{matrix}1\\\omega^{j}\\\omega^{2j}\\\omega^{3j}\\\omega^{4j} \end{matrix} \right ]$$

It makes sense that these vectors are eigenvectors: multiplying such a vector by $\omega^{4j}$ is the same as a single downshift of the vector, because these roots are $5$-periodic.

$$\phi v_j =  \left [ \begin{matrix}0&0&0&0&1\\1&0&0&0&0\\0&1&0&0&0\\0&0&1&0&0\\0&0&0&1&0 \end{matrix} \right ] \left [ \begin{matrix}1\\\omega^{j}\\\omega^{2j}\\\omega^{3j}\\\omega^{4j} \end{matrix} \right ] = \left [ \begin{matrix}\omega^{4j}\\1 \\\omega^{j}\\\omega^{2j}\\\omega^{3j} \end{matrix} \right ] = \omega^{4j} \left [ \begin{matrix}1\\\omega^{j}\\\omega^{2j}\\\omega^{3j}\\\omega^{4j} \end{matrix} \right ] $$


So $v_j$ for $j \in \{0, 1, 2, 3, 4\}$ are the 5 eigenvectors of the matrix $\phi$. When $j$ is zero, you get the all-ones vector. When it is one, the vector is actually the slowest sinusoid. You can visualize these complex exponentials here.

$$v_0 = \left [ \begin{matrix}1\\\omega^{0}\\\omega^{0}\\\omega^{0}\\\omega^{0} \end{matrix} \right ]$$

$$v_2= \left [ \begin{matrix}1\\\omega^{2}\\\omega^{4}\\\omega^{6}\\\omega^{8} \end{matrix} \right ]$$

Now from this we can write what the eigenvalues of powers of $\phi$ would be,

$$\phi^k v_j = \omega^{4kj}v_j$$

Using the above equation we can find the eigenvalues of $A$, since it is a linear combination of powers of $\phi$. The eigenvalue attached to $v_j$ is,

$$1+5\omega^{4j}+4\omega^{8j}+3\omega^{12j}+2\omega^{16j}$$

For the real eigenvalue we can take $j=0$,

$$1+5+4+3+2 = 15$$

which matches the answer we got from the row-sum trick.
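
To close the loop, here is a numpy sketch checking that this formula reproduces the entire spectrum of $A$, and that $j=0$ gives the only real eigenvalue:

```python
import numpy as np

A = np.array([[1, 2, 3, 4, 5],
              [5, 1, 2, 3, 4],
              [4, 5, 1, 2, 3],
              [3, 4, 5, 1, 2],
              [2, 3, 4, 5, 1]])

w = np.exp(2j * np.pi / 5)
# Eigenvalue attached to eigenvector v_j, using A = I + 5*phi + 4*phi^2 + 3*phi^3 + 2*phi^4:
formula = [1 + 5*w**(4*j) + 4*w**(8*j) + 3*w**(12*j) + 2*w**(16*j) for j in range(5)]

key = lambda z: z.imag  # the imaginary parts are all distinct, so this order is stable
print(np.round(sorted(formula, key=key), 3))
print(np.round(sorted(np.linalg.eigvals(A), key=key), 3))  # the two lists match
```

As a bonus, the four complex eigenvalues each have real part exactly $-2.5$ (they must, since the eigenvalues sum to the trace, $5$, and $15 + 4 \times (-2.5) = 5$), which is probably where the distractor option (a) comes from.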