Introduction to Linear Algebra

  • Linear Algebra is the study of vectors, vector spaces and the mappings between them.
  • In this blog, we'll study vectors, transformations of vectors and basis vectors.

1. Vectors and their properties

1.1 Getting a handle on vectors

  • Let's suppose we have a dataset containing the heights of people.

  • The histogram of heights of people can be viewed as follows.

  • The above histogram almost looks like a normal distribution ie., $X \sim \mathcal{N}(\mu,\,\sigma^{2})\ $.

  • Hence we can fit a curve describing the data, instead of keeping all the data points.

  • The normal distribution can be described by just two parameters $(\mu, \sigma) $.

  • $\mu$ : denotes the center of the distribution.

  • $\sigma$ : denotes the width of the distribution.

  • The equation of the normal distribution can be written as
    $\hspace{1cm} f(x) = \dfrac{1}{\sigma \sqrt{2\pi}} \ e^{-\dfrac{(x - \mu)^2}{2 \sigma^2}}$
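
  • As a quick illustration, here is a minimal NumPy sketch of the same density formula:

```python
import numpy as np

def normal_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2) evaluated at x."""
    return np.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))

# peak of the curve for (mu=74, sigma=2): 1 / (2 * sqrt(2*pi)) ≈ 0.199
print(normal_pdf(74, mu=74, sigma=2))
```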

FITTING THE DISTRIBUTION

  • There is always a $(\mu, \sigma)$ that fits the data best, and we can find it by searching over the space of all possible $(\mu, \sigma)$ values.

  • A vector can be used to describe the best-fitting values of the distribution; that vector can be denoted by $\begin{bmatrix}\mu \\ \sigma \\ \end{bmatrix}$.

  • The space of fitting parameters can be thought of as a function of $(\mu, \sigma)$, with the vectors $\begin{bmatrix}\mu \\ \sigma \\ \end{bmatrix}$
    as points that move around that parameter space.

  • Below, the fitted distribution with the values $(\mu=74, \sigma=2)$ is shown over the histogram.
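
  • A minimal sketch of finding a best-fitting $(\mu, \sigma)$ from data; the heights array below is made up for illustration:

```python
import numpy as np

# hypothetical height data (the real dataset isn't shown here)
heights = np.array([71.0, 72.8, 73.5, 74.0, 74.2, 74.9, 75.1, 76.0])

mu = heights.mean()    # centre of the fitted distribution
sigma = heights.std()  # width of the fitted distribution
print(np.array([mu, sigma]))  # the vector of fitting parameters
```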

1.2 Operations of vectors

  • The name vector has multiple interpretations.

  • A vector can be described as an object that moves freely around space; geometrically we usually draw it with its tail at the origin (0, 0).

  • The number of dimensions aka the number of elements in a vector denote
    the dimension of the space in which the vector operates.

  • If the size of the vector is 2, then the vector operates in 2D space.

  • A vector can also be a list of numbers. For example, we can describe a house using a vector.
$$\begin{bmatrix}\text{sqft} \\ \text{\#bedrooms} \\ \text{\#bathrooms} \\ \text{price} \end{bmatrix} = \begin{bmatrix}120 \\ 2 \\ 2 \\ 150000 \end{bmatrix}$$


  • A vector is something that obeys two rules.
    • Addition
    • Multiplication by a scalar
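
  • A minimal NumPy sketch of those two rules:

```python
import numpy as np

a = np.array([1.0, 2.0])
b = np.array([3.0, -1.0])

print(a + b)   # addition:              [4. 1.]
print(2 * a)   # scalar multiplication: [2. 4.]
```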

1.3 Defining the co-ordinate system/basis

  • A vector in space does not have any meaning on its own.
    So a basis is defined; any vector can then be constructed using the basis vectors.

  • If we have a 2D space, then we can have two basis vectors of unit length.

    • One in horizontal direction denoted by $\hat{i} = \begin{bmatrix} 1 \\ 0\\ \end {bmatrix}$.
    • One in vertical direction denoted by $\hat{j} = \begin{bmatrix} 0 \\ 1\\ \end {bmatrix}$.

  • An n-D space has $n$ basis vectors; any vector in it is a combination of those $n$ basis vectors.

  • If $\vec{r} = \begin{bmatrix}3 \\ 2 \end{bmatrix}$, then it can be described as the linear combination of the two basis vectors,
    i.e., $ \vec{r} = 3\hat{i} + 2\hat{j}$.

  • Move 3 steps along the direction of $\hat{i}$ and then 2 steps in the direction of $\hat{j}$,
    i.e., $ 3 \begin{bmatrix}1 \\ 0 \end{bmatrix} + 2 \begin{bmatrix}0 \\ 1 \end{bmatrix} $
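
  • The same linear combination as a small NumPy sketch:

```python
import numpy as np

i_hat = np.array([1.0, 0.0])
j_hat = np.array([0.0, 1.0])

r = 3 * i_hat + 2 * j_hat
print(r)  # [3. 2.]
```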

  • The basis vectors we used above are called orthonormal basis vectors.
    They are of unit length and perpendicular to each other.

  • For now, there is no reason to prefer these orthonormal vectors as the basis; any two linearly independent vectors
    can be chosen as the set of basis vectors. (There is a reason, and we'll see it later.)

1.4 Unit Vector

  • A unit vector ( $\hat{u}$ ) can be described as the vector which lies in the same direction as ( $\vec{u}$ )
    but normalized to length 1.
  • The length of a unit vector is always one, and it is computed as $\hat{u} = \dfrac{\vec{u}}{\left\lVert \vec{u} \right\rVert}$

  • If $ \vec{u} = \begin{bmatrix} 1\\ 2\\ 3\\ \end{bmatrix}$, then the unit vector in the direction of $ \vec{u} $ is $\hat{u} = \dfrac{1}{\sqrt{14}}\begin{bmatrix} 1\\ 2\\ 3\\ \end{bmatrix}$
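
  • A small sketch normalizing the vector from the example above:

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])
u_hat = u / np.linalg.norm(u)          # divide by ||u|| = sqrt(14)
print(u_hat, np.linalg.norm(u_hat))    # unit vector, length 1.0
```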

1.5 Size of the vector

  • A vector has two properties
    • Magnitude
    • Direction
  • The magnitude of the vector denotes how big the vector is (size).
  • If $\vec{r} = \begin{bmatrix} a\\ b\\ \end{bmatrix}$, then the magnitude or length of the vector is ${\left\lVert \vec{r} \right\rVert} = \sqrt{a^2 + b^2}$

1.6 Dot product

  • The dot product between two vectors $(\vec{a}, \vec{b})$ measures how much the two vectors point in the same direction.
    It is denoted by ($\vec{a} \cdot \vec{b}$).

  • $ \vec{a} = \begin{bmatrix} a_{1}\\ a_{2}\\ \end{bmatrix}, \vec{b} = \begin{bmatrix} b_{1}\\ b_{2}\\ \end{bmatrix}$, then ($\vec{a} \cdot \vec{b}) = a_{1}\times b_{1} + a_{2}\times b_{2}$

  • If $(\vec{a}, \ \vec{b}) \in \mathbb{R}^N$, then ($\vec{a} \cdot \vec{b}) = \sum_{i=1}^{N} a_{i}\times b_{i} $

  • If $\vec{a} \cdot \vec{b} > 0$, then $(\vec{a}, \vec{b})$ point in broadly the same direction (the angle between them is less than 90°).

  • If $\vec{a} \cdot \vec{b} < 0$, then $(\vec{a}, \vec{b})$ point in broadly opposite directions (the angle between them is greater than 90°).

  • If $\vec{a} \cdot \vec{b} = 0$, then $(\vec{a}, \vec{b})$ are perpendicular to each other.
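
  • A quick sketch of the dot product and its sign (the vectors are arbitrary examples):

```python
import numpy as np

a = np.array([1.0, 2.0])
print(np.dot(a, np.array([ 2.0,  1.0])))  #  4.0 -> broadly the same direction
print(np.dot(a, np.array([-2.0, -1.0])))  # -4.0 -> broadly opposite directions
print(np.dot(a, np.array([ 2.0, -1.0])))  #  0.0 -> perpendicular
```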

Properties

  • The dot product of two vectors is commutative $(\vec{a} \cdot \vec{b}) = (\vec{b} \cdot \vec{a})$.

  • The dot product of two vectors is distributive $\vec{r} \cdot (\vec{s} + \vec{t}) = \vec{r} \cdot \vec{s} + \vec{r} \cdot \vec{t}$.

  • The dot product of two vectors is associative over scalar multiplication: $\vec{r} \cdot (a\times\vec{s}) = a \times (\vec{r} \cdot \vec{s})$

  • A vector dotted with itself gives the squared length of the vector $ (\vec{r} \cdot \vec{r}) = {{\left\lVert \vec{r} \right\rVert}}^2 $

1.7 Cosine and Dot product

Dot product in terms of cosine angle between the vectors

  • According to the cosine rule, $c^2 = a^2 + b^2 - 2ab\cos\theta$.

  • From above, the sides $(a, b, c)$ are ($\vec{s},\ \vec{r},\ \vec{r} - \vec{s}$) respectively.

  • The length of side ( $c$ ) can be written as
    ${{\left\lVert \vec{r} - \vec{s} \right\rVert}}^2 = {{\left\lVert \vec{r} \right\rVert}}^2 + {{\left\lVert \vec{s} \right\rVert}}^2 - 2 \ {{\left\lVert \vec{r} \right\rVert}} \ {{\left\lVert \vec{s} \right\rVert}} \ \cos\theta $

  • The vector ($\vec{r} - \vec{s}$) dotted with itself yields
    $(\vec{r} - \vec{s}) \cdot (\vec{r} - \vec{s}) = (\vec{r} \cdot \vec{r}) - (\vec{r} \cdot \vec{s}) - (\vec{s} \cdot\vec{r}) + (\vec{s} \cdot \vec{s}) = {{\left\lVert \vec{r} \right\rVert}}^2 + {{\left\lVert \vec{s} \right\rVert}}^2 - 2 \ \vec{r} \cdot \vec{s} $

  • Comparing the above two equations, we can write $ {\left\lVert \vec{r} \right\rVert} \ {\left\lVert \vec{s} \right\rVert} \ \cos\theta = \vec{r} \cdot \vec{s}$.

  • From the above comparison, the cosine of the angle between two vectors can be defined as
    $\cos\theta = \dfrac{\vec{r} \cdot \vec{s}}{{\left\lVert \vec{r} \right\rVert} \ {\left\lVert \vec{s} \right\rVert}}$.

  • The dot product tells us the extent to which two vectors point in the same direction.
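
  • A small sketch computing the angle between two vectors with this formula (example vectors chosen for a 45° angle):

```python
import numpy as np

r = np.array([1.0, 0.0])
s = np.array([1.0, 1.0])

cos_theta = np.dot(r, s) / (np.linalg.norm(r) * np.linalg.norm(s))
print(np.degrees(np.arccos(cos_theta)))  # 45.0
```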

1.8 Projections

  • Let's consider two vectors $\vec{r}$ and $\vec{s}$; the above figure shows the projection of $\vec{s}$ onto $\vec{r}$
    (imagine shining a light perpendicular to $\vec{r}$, so that $\vec{s}$ casts a shadow onto $\vec{r}$).

  • From the above figure, $\cos\theta = \dfrac{Adj}{{\left\lVert \vec{s} \right\rVert}} $, so $Adj = {\left\lVert \vec{s} \right\rVert} \ \cos\theta$ (the side adjacent to $\theta$, i.e., the shadow of $\vec{s}$ on $\vec{r}$).

  • Then $\vec{r} \cdot \vec{s} = {\left\lVert \vec{r} \right\rVert} \times Adj$.

  • $\vec{r} \cdot \vec{s} = {\left\lVert \vec{r} \right\rVert} \ \times$ projection of $\vec{s}$ onto $\vec{r}$.

  • If the vectors ( $\vec{r}, \ \vec{s}$ ) are perpendicular to each other, there is no component of $\vec{s}$
    in the direction of $\vec{r}$; that is why the dot product of any two perpendicular vectors is zero.

SCALAR PROJECTION

  • The scalar projection of $\vec{s}$ onto $\vec{r}$ denotes how much of $\vec{s}$ is in the direction of $\vec{r}$:
    $\text{scalar proj}_{r}^{s} = {\left\lVert \vec{s} \right\rVert} \ \cos\theta = \dfrac{\vec{r} \cdot \vec{s}}{{\left\lVert \vec{r} \right\rVert}}$

  • The scalar projection is just a number.

VECTOR PROJECTION

  • The vector projection of $\vec{s}$ onto $\vec{r}$ is just the scalar projection of $\vec{s}$ onto $\vec{r}$, pointed in the direction of $\vec{r}$:
    $\text{vector proj}_{r}^{s} = \dfrac{\vec{r} \cdot \vec{s}}{{\left\lVert \vec{r} \right\rVert}} \ \dfrac{\vec{r}}{{\left\lVert \vec{r} \right\rVert}}$
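
  • A minimal sketch of both projections (the vectors are example values):

```python
import numpy as np

r = np.array([2.0, 0.0])
s = np.array([3.0, 4.0])

scalar_proj = np.dot(r, s) / np.linalg.norm(r)    # 3.0, just a number
vector_proj = (np.dot(r, s) / np.dot(r, r)) * r   # [3. 0.], a vector along r
print(scalar_proj, vector_proj)
```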

2. Changing the Reference

2.1 Changing the Basis

  • Let's consider a 2D space in which two basis vectors exist in two directions.

  • Any vector in the 2D space can be derived as a linear combination of those two basis vectors.

  • The choice of our basis vectors is arbitrary; we can pick any two vectors as long as
    they are not linearly dependent on each other.

  • Let's say we have a set of basis vectors, $\hat{e_{1}} = \begin{bmatrix} 1 \\ 0 \\ \end{bmatrix}$, $\hat{e_{2}} = \begin{bmatrix} 0 \\ 1 \\ \end{bmatrix}$ and $\vec{r_{e}} = \begin{bmatrix} 3 \\ 4 \\ \end{bmatrix}$,
    i.e., $\vec{r_e} = 3 \hat{e_1} + 4 \hat{e_2}$ is the linear combination of these two basis vectors.

  • Here we'll define a new set of basis vectors in terms of our original basis: $\vec{b_{1}} = \begin{bmatrix} 2 \\ 1 \\ \end{bmatrix}$ and $\vec{b_{2}} = \begin{bmatrix} -2 \\ 4 \\ \end{bmatrix}$.
    How do we express $\vec{r_e}$ as a linear combination of $\vec{b_{1}}$ and $\vec{b_{2}}$, i.e., what is $\vec{r_{b}} = \begin{bmatrix} ? \\ ? \\ \end{bmatrix}$?

  • If the two basis vectors are orthogonal to each other, i.e., ($\vec{b_1} \cdot \vec{b_2}=0$),
    then we can use the scalar projection we saw above to project $\vec{r_e}$ onto $\vec{b_1}$ and $\vec{b_2}$.

  • $\vec{b_1} \cdot \vec{b_2} = \begin{bmatrix} 2 \\ 1 \\ \end{bmatrix} \cdot \begin{bmatrix} -2 \\ 4 \\ \end{bmatrix} = 2 \times -2 + 1 \times 4 = 0$, so the basis vectors are orthogonal.
    Now we can compute the vector projections of $\vec{r_e}$ onto $\vec{b_1}$ and $\vec{b_2}$; their coefficients give us $\vec{r_b}$.

  • Vector projection of $\vec{r_e}$ onto $\vec{b_1}$ is
    $\dfrac{\vec{r_e} \cdot \vec{b_1}} {{\left\lVert \vec{b_1} \right\rVert}} \cdot \dfrac{\vec{b_1}}{{\left\lVert \vec{b_1} \right\rVert}} = 2\vec{b_1} = \begin{bmatrix} 4 \\ 2 \\ \end{bmatrix}$

  • Vector projection of $\vec{r_e}$ onto $\vec{b_2}$ is
    $\dfrac{\vec{r_e} \cdot \vec{b_2}} {{\left\lVert \vec{b_2} \right\rVert}} \cdot \dfrac{\vec{b_2}}{{\left\lVert \vec{b_2} \right\rVert}} = 0.5\vec{b_2} = \begin{bmatrix} -1 \\ 2 \\ \end{bmatrix}$

  • $\vec{r_e}$ in the basis $\vec{b_1}$ and $\vec{b_2}$ is $\vec{r_b} = \begin{bmatrix} 2 \\ 0.5 \\ \end{bmatrix}$
    $\vec{r_e} = 2\vec{b_1} + 0.5\vec{b_2} ; \ \ \ 2\begin{bmatrix} 2 \\ 1 \\ \end{bmatrix} + 0.5\begin{bmatrix} -2 \\ 4 \\ \end{bmatrix} = \begin{bmatrix} 3 \\ 4 \\ \end{bmatrix}$
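
  • The same change of basis in code, using the numbers above:

```python
import numpy as np

r_e = np.array([3.0, 4.0])
b1 = np.array([2.0, 1.0])
b2 = np.array([-2.0, 4.0])

# since b1 . b2 == 0, each new coordinate is a projection of r_e onto that basis vector
r_b = np.array([np.dot(r_e, b1) / np.dot(b1, b1),
                np.dot(r_e, b2) / np.dot(b2, b2)])
print(r_b)                         # [2.  0.5]
print(r_b[0] * b1 + r_b[1] * b2)   # [3. 4.] -> recovers r_e
```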

  • The most fascinating thing to take away here is that the original vector $\vec{r_e}$
    isn't tied at all to the axes we used to describe it.

  • We can re-describe any vector floating in space, written in one set of basis vectors, in terms of another set of basis vectors.

2.2 Basis

  • The choice of basis vectors plays a huge role in linear algebra.

  • In an n-D space, the basis is a set of $n$ vectors that are not linear combinations of each other,
    i.e., they must be linearly independent.

  • They span the whole n-D space.

  • For a 3D space, we have a set of three basis vectors {$\vec{b_1}, \vec{b_2}, \vec{b_3}$}.

  • For $\vec{b_3}$ to exist in the third dimension, it shouldn't be a linear combination of $\vec{b_1}$ and ${\vec{b_2}}$,
    i.e., $\ \ \ \ \vec{b_3} \neq a \ \vec{b_1} + b \ \vec{b_2} \hspace{1cm} \text{where } (a, b) \text{ are scalars}$

  • If it happens that $\vec{b_3} = a \ \vec{b_1} + b \ \vec{b_2}$, then our 3D space collapses to a 2D space and
    we can't span the third dimension.

  • If the basis vectors are orthogonal, then changing the basis of any vector is just a set of dot products (projections).

  • One of the popular scenarios where we change the basis is Principal Component Analysis (PCA).

3. Matrices

3.1 Introduction

  • Matrices are numerical objects that apply a linear transformation to a vector.

  • To multiply a matrix by a vector, their inner dimensions must match.

  • Transformations like scaling, stretching, rotating, mirroring, flipping etc. can be applied to a vector by a matrix,
    e.g., $\begin{bmatrix} 2 & 3 \\ 10 & 1 \end{bmatrix} \times \begin{bmatrix} a \\ b \end{bmatrix} = \begin{bmatrix} 2a + 3b \\ 10a + b \end{bmatrix}$

  • Above, there is a $2 \times 2$ matrix which applies some transformation on a $2 \times 1$ vector, resulting in a $2 \times 1$ vector.

  • The rows of the matrix are dotted with the columns of the vector.

  • Let's apply some transformations. Let our basis vectors be $\hat{e_{1}} = \begin{bmatrix} 1 \\ 0 \\ \end{bmatrix}$, $\hat{e_{2}} = \begin{bmatrix} 0 \\ 1 \\ \end{bmatrix}$
    and the transformation matrix be $M = \begin{bmatrix} 2 & 2 \\ 4 & 1 \\ \end{bmatrix}$;
    the columns of the matrix M are the new basis vectors.

  • If we apply the transformations to each of our basis vectors.

$\hspace{1cm} \begin{bmatrix} 2 & 2 \\ 4 & 1 \\ \end{bmatrix} \times \begin{bmatrix} 1 \\ 0 \\ \end{bmatrix} = \begin{bmatrix} 2 \\ 4 \\ \end{bmatrix}$

$\hspace{1cm} \begin{bmatrix} 2 & 2 \\ 4 & 1 \\ \end{bmatrix} \times \begin{bmatrix} 0 \\ 1 \\ \end{bmatrix} = \begin{bmatrix} 2 \\ 1 \\ \end{bmatrix}$

  • The matrix M moves the basis vectors to some other vectors.
    If the basis vectors change, then the whole space changes (or) transforms.
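
  • The same movement of the basis vectors, in a small NumPy sketch:

```python
import numpy as np

M = np.array([[2.0, 2.0],
              [4.0, 1.0]])

print(M @ np.array([1.0, 0.0]))  # [2. 4.] -> where e1 lands (first column of M)
print(M @ np.array([0.0, 1.0]))  # [2. 1.] -> where e2 lands (second column of M)
```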

3.2 Operations on Vectors

  • Let's consider a matrix $ A = \begin{bmatrix} 2 & 3 \\ 10 & 1 \\ \end{bmatrix}$, the basis vectors $\hat{e_1} = \begin{bmatrix} 1 \\ 0 \\ \end{bmatrix} \ \ \hat{e_2} = \begin{bmatrix} 0 \\ 1 \\ \end{bmatrix}$
    and a vector $\vec{r} = \begin{bmatrix} 3 \\ 2 \\ \end{bmatrix}$.

  • Let the transformation be $A \ \vec{r} = \vec{r^\prime}$

PROPERTIES

  • $A \ (n \times \vec{r}) = n \times (A \ \vec{r}) = n \times \vec{r^\prime}$

  • $A \ (\vec{r} + \vec{s}) = A \ \vec{r} + A \ \vec{s}$

  • $A \ (n \times \hat{e_1} + m \times \hat{e_2}) = n\times A \ \hat{e_1} + m \times A \ \hat{e_2} = n \times \hat{e_1}' + m \times \hat{e_2}'$

  • An example

$\hspace{1cm} \begin{bmatrix} 2 & 3 \\ 10 & 1 \\ \end{bmatrix} \times \begin{bmatrix} 3 \\ 2 \\ \end{bmatrix} = \begin{bmatrix} 12 \\ 32 \\ \end{bmatrix}$

$\hspace{1cm} \begin{bmatrix} 2 & 3 \\ 10 & 1 \\ \end{bmatrix} \times \bigg[ 3 \ \begin{bmatrix} 1 \\ 0 \\ \end{bmatrix} + 2 \ \begin{bmatrix} 0 \\ 1 \\ \end{bmatrix}\bigg] = \begin{bmatrix} 12 \\ 32 \\ \end{bmatrix} $

$\hspace{1cm} 3 \bigg[ \begin{bmatrix} 2 & 3 \\ 10 & 1 \end{bmatrix} \times \begin{bmatrix} 1 \\ 0 \end{bmatrix}\bigg] + 2 \bigg[ \begin{bmatrix} 2 & 3 \\ 10 & 1 \end{bmatrix} \times \begin{bmatrix} 0 \\ 1 \end{bmatrix}\bigg] = \begin{bmatrix} 12 \\ 32 \\ \end{bmatrix}$

$\hspace{1cm} 3 \begin{bmatrix} 2 \\ 10 \\ \end{bmatrix} + 2 \begin{bmatrix} 3 \\ 1 \\ \end{bmatrix} = \begin{bmatrix} 12 \\ 32 \\ \end{bmatrix}$

$\hspace{1cm} \begin{bmatrix} 12 \\ 32 \\ \end{bmatrix} = \begin{bmatrix} 12 \\ 32 \\ \end{bmatrix}$

  • From the above, it is evident that the transformation of the vector $\vec{r}$
    is just the linear combination of the transformed basis vectors.
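
  • Verifying the worked example above in code:

```python
import numpy as np

A = np.array([[ 2.0, 3.0],
              [10.0, 1.0]])
e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
r = 3 * e1 + 2 * e2

print(A @ r)                         # [12. 32.]
print(3 * (A @ e1) + 2 * (A @ e2))   # [12. 32.] -> the same result
```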

3.3 Types of Matrix Transformations

IDENTITY MATRIX

  • This matrix doesn't change the vector. Its columns are the standard basis vectors.

$\hspace{1cm} \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ \end{bmatrix} \times \begin{bmatrix} 3 \\ 2 \\ \end{bmatrix} = \begin{bmatrix} 3 \\ 2 \\ \end{bmatrix}$

FLIPPING MATRIX

  • This matrix inverts the vector, i.e., a 180° rotation.

$\hspace{1cm} \begin{bmatrix} -1 & 0 \\ 0 & -1 \\ \end{bmatrix} \times \begin{bmatrix} 3 \\ 2 \\ \end{bmatrix} = \begin{bmatrix} -3 \\ -2 \\ \end{bmatrix}$

90 DEGREE ROTATION

  • This matrix rotates the vector by 90° anti-clockwise.

$\hspace{1cm} \begin{bmatrix} 0 & -1 \\ 1 & 0 \\ \end{bmatrix} \times \begin{bmatrix} 3 \\ 2 \\ \end{bmatrix} = \begin{bmatrix} -2 \\ 3 \\ \end{bmatrix}$

  • Try plotting the above vectors on a paper to get some intuition.

  • Matrix operations can be combined to apply a combination of transformations.
    The operations are applied from right to left, e.g., in $ \hspace{0.2cm} A_2 \ (A_1 \ \vec{r})$ the matrix $A_1$ is applied first, then $A_2$.

  • In general $A_2 \ (A_1 \ \vec{r}) \neq A_1 \ (A_2 \ \vec{r})$; the order of the matrices matters.
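
  • A small sketch showing that the order matters, using the 90° rotation from above and a mirroring matrix (the mirror is an extra example, not one of the transformations listed earlier):

```python
import numpy as np

rot90 = np.array([[0.0, -1.0],
                  [1.0,  0.0]])    # 90 deg anti-clockwise rotation
mirror = np.array([[-1.0, 0.0],
                   [ 0.0, 1.0]])   # mirror about the vertical axis
r = np.array([3.0, 2.0])

print(mirror @ (rot90 @ r))  # [ 2.  3.]
print(rot90 @ (mirror @ r))  # [-2. -3.] -> a different result
```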

3.4 Inverse of a Matrix

  • The Inverse of a matrix A operates on a vector to undo the transformation caused by the matrix A.

  • An inverse exists only for square matrices, so there is no inverse for non-square matrices.

  • The Inverse of a matrix cannot be obtained if the matrix alters the number of dimensions of the vector.
    If the matrix reduces a 3D vector to a 2D vector, then there is no way to transform points from 2D to 3D.

  • When a matrix A is multiplied by its inverse, the result is an identity matrix.
    $\hspace{1.2cm} A \ A^{-1} = A^{-1} \ A = I$

  • There are various ways to find the inverse of a matrix. One of them is Gaussian Elimination.

  • If $ \hspace{1cm} A \ \vec{r} = \vec{s}$, then
    $\hspace{1.2cm} A^{-1} \ A \ \vec{r} = A^{-1} \ \vec{s}$
    $\hspace{1.2cm} I \ \vec{r} = A^{-1} \ \vec{s}$
    $\hspace{1.2cm} \vec{r} = A^{-1} \ \vec{s}$
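
  • In code, the inverse (or, better, a direct solve) undoes the transformation; the matrix and vector below are the ones from section 3.2:

```python
import numpy as np

A = np.array([[ 2.0, 3.0],
              [10.0, 1.0]])
s = np.array([12.0, 32.0])       # s = A r, with r = [3, 2]

print(np.linalg.inv(A) @ s)      # [3. 2.] -> r recovered via A^-1 s
print(np.linalg.solve(A, s))     # same result, without forming the inverse explicitly
```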

3.5 Determinants

  • The determinant describes the change in the area (in 2D) or volume (in 3D) spanned by the basis vectors after a transformation.

  • The determinant of a matrix A is denoted by det(A).

  • Determinants are generally computed for matrices to check whether all the basis vectors (columns) are independent or not.
    The determinant is zero if the matrix has linearly dependent columns or rows.

  • The determinant of a $2 \times 2$ matrix

$\hspace{1cm} A = \begin{bmatrix} a & b \\ c & d \\ \end{bmatrix}$ is $|A| = ad - bc$

  • The matrix $A = \begin{bmatrix} 1 & 1 & 3 \\ 1 & 2 & 4 \\ 2 & 3 & 7 \\\end{bmatrix}$ collapses the vector space from 3D to 2D.

    col3 is a combination of col1 and col2: $\text{col}_3 = 2\,\text{col}_1 + \text{col}_2$, so the columns are not independent.

  • For a matrix A, if det(A) = 0, then the matrix A is said to be singular.
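
  • Checking these determinants in code:

```python
import numpy as np

A = np.array([[1.0, 1.0, 3.0],
              [1.0, 2.0, 4.0],
              [2.0, 3.0, 7.0]])

print(np.linalg.det(A))              # ~0.0 -> singular, col3 = 2*col1 + col2
print(np.linalg.det(np.array([[ 2.0, 3.0],
                              [10.0, 1.0]])))  # 2*1 - 3*10 = -28.0
```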

3.6 Orthonormal Basis

  • It can be very helpful to construct a matrix whose column vectors, which make up the new basis, are perpendicular to each other.

  • The transpose of a matrix is just the matrix with rows interchanged with columns.

$\hspace{1cm} A = \begin{bmatrix} 1 & 2 \\ 3 & 4 \\ \end{bmatrix}, \ \ A^{T} = \begin{bmatrix} 1 & 3 \\ 2 & 4 \\ \end{bmatrix}$

  • Let's say we have a matrix A of size $n \times n$, whose column vectors are the basis vectors of
    the new transformed space and are perpendicular to each other. Each basis vector is of unit length.

$\hspace{1cm} A = \begin{bmatrix} a_{11} & a_{12} & \dots & a_{1n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \dots & a_{nn} \end{bmatrix} = \begin{bmatrix} \hat{a_1} & \hat{a_2} & \dots & \hat{a_n} \end{bmatrix}$

  • If we multiply A by its transpose, then

$\hspace{1cm} A^{T} \ A = \begin{bmatrix} \hat{a_1}^{T} \\ \hat{a_2}^{T} \\ \vdots \\ \hat{a_n}^{T} \end{bmatrix} \times \begin{bmatrix} \hat{a_1} & \hat{a_2} & \dots & \hat{a_n} \end{bmatrix} = \begin{bmatrix} 1 & 0 & \dots & 0 \\ 0 & 1 & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & 1 \end{bmatrix} = I$

  • From above, $a_{i} \cdot a_{j} = 0 \ \ \ \text{if} \ (i \neq j)$
    $\hspace{2.1cm} a_{i} \cdot a_{j} = 1 \ \ \ \text{if} \ (i = j)$

  • $A \ A^{T} = A^{T} \ A = I$. This shows that $A^{T}$ is the inverse of A.
    If the basis vectors are orthonormal, then the inverse can be computed easily: $A^{-1} = A^{T}$.
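
  • A quick numeric check, using a rotation matrix as an example of an orthonormal basis:

```python
import numpy as np

theta = np.radians(30)
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # orthonormal columns

print(np.allclose(A.T @ A, np.eye(2)))       # True: A^T A = I
print(np.allclose(A.T, np.linalg.inv(A)))    # True: A^-1 = A^T
```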

REASONS TO USE ORTHONORMAL BASIS

  • $A^{-1} = A^{T}$

  • The transformation is always reversible $|A| \neq 0$.

  • The projection is just the dot product.
