What are Linear Transformations?

What is Vector Space?

How to do Matrix Multiplication?

This post continues from the last post, which is a prerequisite for reading this one. In the last post, I covered basis vectors, span and linear combinations.

## Conceptualizing Linear Transformations

A __Linear Transformation__ is just a function, a function $f(x)$. It takes an input, a number $x$, and gives us an output for that number. In Linear Algebra though, we use the letter *T* for transformation.

$$T(\text{input}) = \text{output}$$

Or with vector coordinates as input and the corresponding transformed vector coordinates as output:

$$T\left( \begin{bmatrix}x\\y\end{bmatrix} \right) = \begin{bmatrix}x_T\\y_T\end{bmatrix}$$

We might think of a transformation as transforming a vector: we essentially transform its vector coordinates, or even the basis vectors behind it. The idea is that we give some vector coordinates as input, and we get transformed vector coordinates as output.

Any time we do that, we can visualize the transformation by imagining a vector moving from one position to another — every vector in the space moves with a transformation.

The nice thing about transformations is that we just need

- Any vector coordinates in our space, and
- The basis vectors

Then when we do a transformation, we transform all vectors in our space, along with the basis vectors. __That means we just need to find the transformed basis vectors to calculate any transformed vector in our space.__
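This idea can be sketched in plain Python (a minimal sketch; the function name and the rotation example are my own, not from the post): once we know where the basis vectors land, any transformed vector is just a combination of them.

```python
def transform(v, i_hat_t, j_hat_t):
    """Transform v = (x, y), given only the transformed basis vectors."""
    x, y = v
    # The transformed vector is x times transformed i-hat plus y times transformed j-hat.
    return (x * i_hat_t[0] + y * j_hat_t[0],
            x * i_hat_t[1] + y * j_hat_t[1])

# Example: a 90-degree counterclockwise rotation sends
# i-hat (1, 0) to (0, 1) and j-hat (0, 1) to (-1, 0).
print(transform((3, 2), (0, 1), (-1, 0)))  # (-2, 3)
```

Note that the function never needs the transformation rule itself, only the two transformed basis vectors, which is exactly the point above.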

## How to do a Linear Transformation?

We can write a general equation like this for a vector $\vec{v}$ with vector coordinates $\begin{bmatrix}x\\y\end{bmatrix}$ and basis vectors $\hat{i} = \begin{bmatrix}i_1\\i_2\end{bmatrix}$ and $\hat{j} = \begin{bmatrix}j_1\\j_2\end{bmatrix}$

$$\vec{v} = x \hat{i} + y \hat{j} = x \begin{bmatrix}i_1\\i_2\end{bmatrix} + y \begin{bmatrix}j_1\\j_2\end{bmatrix} = \begin{bmatrix}x i_1 + y j_1\\x i_2 + y j_2\end{bmatrix}$$

So that means we would just have to replace $\hat{i}$ and $\hat{j}$ with their transformed versions, and then do the multiplication as learned in Linear Algebra Basics 1.

An alternative way to represent the above, and perhaps a more intuitive way of understanding a transformation numerically, would be something like this (where the subscript T stands for transformed):

$$T(\vec{v}) = x \hat{i}_T + y \hat{j}_T$$

To get the transformed vector (output), we take the input vector coordinates $x$ and $y$ and scale the transformed basis vectors $\hat{i}$ and $\hat{j}$ by them. What $x$ and $y$ end up as are the transformed vector coordinates for the vector $\vec{v}$.

### What happens numerically?

That is, what happens between input and output?

All that happens numerically is that we define a rule which dictates how we transform any vector. That is how you do a transformation numerically. Such a rule can be wrong, but hang with me. So what could a rule be? Often, we start by defining which dimension our current vector space should transform to.

__Vector space__ is the set of all vectors in our space. We can do operations on these vectors, e.g. vector addition or scaling.

So what happens is that all of vector space transforms as we do a linear transformation. Therefore, we want to define which dimension our current vector space should transform from and to.

$$
T: \mathbb{R}^{2} \rightarrow \mathbb{R}^{2}
$$

**The above reads:** *our transformation T maps from 2-dimensional space to 2-dimensional space. The* $\mathbb{R}^{2}$ *means the set of all 2-dimensional vectors with real-number coordinates.*

Before we head further: we have not yet touched upon multiplying a matrix by a vector. So how do we go about that? Let me expand upon what you learned in Linear Algebra Basics 2, scaling a vector by a number.

Say we have a matrix *A* defined as such, where a, b, c and d are real numbers

$$A = \begin{bmatrix}a & b\\c & d\end{bmatrix}$$

And a vector $\vec{v}$ defined by

$$\vec{v} = \begin{bmatrix}x\\y\end{bmatrix}$$

Then we could multiply them together, exactly like this, using matrix-vector multiplication:

$$A\vec{v} = \begin{bmatrix}a & b\\c & d\end{bmatrix} \begin{bmatrix}x\\y\end{bmatrix} = \begin{bmatrix}ax + by\\cx + dy\end{bmatrix}$$

Note that we could define the vector as a 2×1 matrix, so we could also call this matrix multiplication.

__Matrix Multiplication:__ We multiply rows by columns. Take the first row of the first matrix and multiply its entries, one by one, with the entries of the first column of the second matrix, then add the products. Repeat this for every combination of a row from the first matrix and a column from the second.
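The rows-by-columns rule for the 2D case can be written out in plain Python (a sketch; the function name and sample numbers are my own):

```python
def mat_vec(A, v):
    """Multiply a 2x2 matrix A (a list of rows) by a vector v = (x, y).
    Each output entry is a row of A multiplied entrywise with v, then summed."""
    (a, b), (c, d) = A
    x, y = v
    return (a * x + b * y, c * x + d * y)

A = [[1, 2], [3, 4]]
print(mat_vec(A, (5, 6)))  # (1*5 + 2*6, 3*5 + 4*6) = (17, 39)
```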

Now we can define the linear transformation. We can start by giving the matrix *A* concrete numbers and then letting the vector $\vec{v}$ be any possible vector in our vector space.

**Then** we can choose to define our linear transformation by $T(\vec{v}) = A \vec{v}$. That means, to transform any vector $\vec{v}$, we multiply it by the matrix *A*.

As I just showed you above, where we defined the matrix *A* by a, b, c and d, the multiplication works out as

$$T(\vec{v}) = A\vec{v} = \begin{bmatrix}ax + by\\cx + dy\end{bmatrix}$$

### How to calculate a transformed vector?

To calculate any vector after a transformation, all we need to do, as described further up in this post, is record the basis vectors. If we know where the basis vectors are after a transformation, calculating any transformed vector is almost infuriatingly simple.

I have been emphasizing this matrix *A* throughout this whole post for a very specific reason. Imagine the first column of the matrix being $\hat{i}$ and the second column being $\hat{j}$

$$A = \begin{bmatrix}i_1 & j_1\\i_2 & j_2\end{bmatrix}$$

Now it becomes clear that, given any vector, we can calculate its image in the transformed vector space. So, if we have $\hat{i}$ and $\hat{j}$ transformed, we can just pass any vector to this formula along with the transformed basis vectors, and it gives us the transformed vector (where the subscript T stands for transformed):

$$T(\vec{v}) = x \hat{i}_T + y \hat{j}_T$$
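To connect the two views, here is a small plain-Python sketch (the 90-degree-rotation values are my own example, not from the post) checking that the matrix whose columns are the transformed basis vectors gives the same result as the basis-vector formula:

```python
i_hat_t = (0, 1)   # transformed i-hat (a 90-degree rotation, chosen as an example)
j_hat_t = (-1, 0)  # transformed j-hat

# A's first column is the transformed i-hat, its second column the transformed j-hat.
A = [[i_hat_t[0], j_hat_t[0]],
     [i_hat_t[1], j_hat_t[1]]]

x, y = 3, 2
via_matrix = (A[0][0] * x + A[0][1] * y,   # rows of A times (x, y)
              A[1][0] * x + A[1][1] * y)
via_basis = (x * i_hat_t[0] + y * j_hat_t[0],  # x * i-hat_T + y * j-hat_T
             x * i_hat_t[1] + y * j_hat_t[1])
print(via_matrix == via_basis)  # True; both give (-2, 3)
```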

### What about 2x2 matrix multiplication?

This case is a simple matter too. Suppose we have a matrix *A* (left) and a matrix *B* (right). We would simply multiply the rows in the first matrix by the columns in the second matrix:

$$AB = \begin{bmatrix}a & b\\c & d\end{bmatrix} \begin{bmatrix}e & f\\g & h\end{bmatrix} = \begin{bmatrix}ae + bg & af + bh\\ce + dg & cf + dh\end{bmatrix}$$

**BUT!** That is not easy to remember, so here is the intuitive way, as I showed earlier. We split the process into two steps:

- Multiply matrix *A* by the first column in matrix *B*
- Multiply matrix *A* by the second column in matrix *B*

This is a known scenario; I showed you this earlier:

$$A \begin{bmatrix}e\\g\end{bmatrix} = \begin{bmatrix}ae + bg\\ce + dg\end{bmatrix}, \qquad A \begin{bmatrix}f\\h\end{bmatrix} = \begin{bmatrix}af + bh\\cf + dh\end{bmatrix}$$

Placing the two resulting columns side by side, we get the product from further up:

$$AB = \begin{bmatrix}ae + bg & af + bh\\ce + dg & cf + dh\end{bmatrix}$$
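The two-step, column-at-a-time view of 2×2 multiplication can be sketched in plain Python (the helper names and sample numbers are my own):

```python
def mat_vec(A, v):
    """Multiply a 2x2 matrix A (a list of rows) by a vector v = (x, y)."""
    (a, b), (c, d) = A
    x, y = v
    return (a * x + b * y, c * x + d * y)

def mat_mul(A, B):
    """Multiply A by each column of B, then place the results side by side."""
    col1 = mat_vec(A, (B[0][0], B[1][0]))  # A times the first column of B
    col2 = mat_vec(A, (B[0][1], B[1][1]))  # A times the second column of B
    return [[col1[0], col2[0]],
            [col1[1], col2[1]]]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(mat_mul(A, B))  # [[19, 22], [43, 50]]
```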

This was clearly a lot to learn. But we could also go further and ask the obvious question: when is a transformation *not* linear? Because we just defined some linear transformation and assumed that it is right. For now, I will not go deeper into the subject, but as WolframAlpha suggests, two conditions need to hold before we can call a transformation linear: $T(\vec{u} + \vec{v}) = T(\vec{u}) + T(\vec{v})$ and $T(c\vec{v}) = cT(\vec{v})$ for any scalar $c$.
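Those two linearity conditions, additivity $T(\vec{u} + \vec{v}) = T(\vec{u}) + T(\vec{v})$ and homogeneity $T(c\vec{v}) = cT(\vec{v})$, can be spot-checked numerically. A minimal plain-Python sketch, with a matrix and sample values of my own choosing:

```python
def T(v):
    """T(v) = A v for the (arbitrarily chosen) matrix A = [[2, 1], [0, 3]]."""
    a, b, c, d = 2, 1, 0, 3
    x, y = v
    return (a * x + b * y, c * x + d * y)

u, v, c = (1, 2), (3, -1), 5

# Additivity: transforming the sum equals summing the transforms.
additive = T((u[0] + v[0], u[1] + v[1])) == (T(u)[0] + T(v)[0], T(u)[1] + T(v)[1])

# Homogeneity: transforming a scaled vector equals scaling the transform.
homogeneous = T((c * v[0], c * v[1])) == (c * T(v)[0], c * T(v)[1])

print(additive and homogeneous)  # True
```

A spot check with a few vectors is not a proof, but any $T$ of the form $A\vec{v}$ satisfies both conditions exactly.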

Summary (of the questions at the top):

**What are Linear Transformations?**

A linear transformation is a function $T(x)$, where we take some input and transform it by some definition of a rule. An example is $T(\vec{v}) = A \vec{v}$, where we transform a vector $\vec{v}$ by multiplying it by the matrix *A*.

**What is Vector Space?**

Vector space is the set of all vectors in our space, which we define in dimensions. We can do operations on these vectors, e.g. vector addition or scaling.

**How to do Matrix Multiplication?**

With a matrix $A = \begin{bmatrix}a & b\\c & d \end{bmatrix}$, where a, b, c and d are real numbers, just remember this: __we multiply rows by columns__. Take the first row of the first matrix and multiply its entries, one by one, with the entries of the first column of the second matrix, then add the products. Repeat this for each row and column combination.

The columns of *A* could be two vectors, where a and c are one vector and b and d are another vector. With a vector $\vec{v}=\begin{bmatrix}x\\y \end{bmatrix}$ ($\vec{v}$ implied being a matrix), we would multiply them like this:

$$A\vec{v} = \begin{bmatrix}a & b\\c & d\end{bmatrix} \begin{bmatrix}x\\y\end{bmatrix} = \begin{bmatrix}ax + by\\cx + dy\end{bmatrix}$$