**Linear Algebra**

Linear algebra is the branch of mathematics concerning linear equations, linear maps and their representations in vector spaces and through matrices.

Linear algebra is central to almost all areas of mathematics. For instance, linear algebra is fundamental in modern presentations of geometry, including for defining basic objects such as lines, planes and rotations. Also, functional analysis, a branch of mathematical analysis, may be viewed as basically the application of linear algebra to spaces of functions.

Linear algebra is also used in most sciences and fields of engineering, because it allows modeling many natural phenomena and computing efficiently with such models. Nonlinear systems, which cannot be modeled directly with linear algebra, are often handled via first-order approximations, using the fact that the differential of a multivariate function at a point is the linear map that best approximates the function near that point.

The field of **data science** also leans on many different applications of linear algebra. This does not mean that every data scientist needs to have an extraordinary mathematical background, since the amount of math you will be dealing with depends a lot on your role. However, a good understanding of linear algebra really enhances the understanding of many **machine learning** algorithms. Above all, linear algebra is essential for really understanding deep learning algorithms.

# Matrices and Vectors

In short, we can say that linear algebra is the ‘*math of vectors and matrices*’. We make use of vectors and matrices because they are convenient mathematical ways of representing large amounts of information.

A **matrix** is an array of numbers, symbols or expressions, made up of rows and columns. A matrix is characterized by the number of rows, m, and the number of columns, n, it has. In general, a matrix of order ‘**m x n**’ (read: “m by n”) has m rows and n columns. Below, we display an example 2 x 3 matrix A:

We can refer to individual elements of the matrix through their corresponding row and column. For example, A[1, 2] = 2, since the entry in the first row and second column is 2.

A matrix with only a single column is called a **vector**. For example, every column of the matrix A above is a vector. Let us take the first column of the matrix A as the vector v:

In a vector, we can also refer to individual elements. Here, we only have to make use of a single index. For example, v[2] = 4, since 4 is the second element of the vector v.
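The indexing above can be sketched in NumPy. The entries of the matrix below are made up for this sketch, chosen only to be consistent with the examples in the text (A[1, 2] = 2 and v[2] = 4). Note that NumPy uses 0-based indexing, so the textbook entry A[1, 2] corresponds to `A[0, 1]` in code:

```python
import numpy as np

# A hypothetical 2 x 3 matrix consistent with the examples in the text
A = np.array([[1, 2, 3],
              [4, 5, 6]])

print(A.shape)   # (2, 3): m = 2 rows, n = 3 columns

# The text's A[1, 2] (first row, second column) is A[0, 1] in 0-based NumPy
print(A[0, 1])   # 2

# The first column of A as the vector v
v = A[:, 0]
print(v[1])      # 4 -- the text's v[2], i.e. the second element of v
```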

# Matrix Operations

Our ability to analyze and solve particular problems within the field of linear algebra will be greatly enhanced when we can perform algebraic operations with matrices. Here, the most important basic tools for performing these operations are listed.

# (i) Matrix Sums

If A and B are m x n matrices, then the **sum** A+B is the m x n matrix whose columns are the sums of the corresponding columns in A and B. The sum A+B is defined only when A and B are the same size.

Of course, subtraction of matrices, A-B, works in the same way: each column of B is subtracted from the corresponding column of A.
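A small sketch of matrix sums and differences in NumPy, using two hypothetical 2 x 3 matrices (the entries are made up for illustration):

```python
import numpy as np

# Two matrices of the same size (2 x 3); only same-size matrices can be added
A = np.array([[1, 2, 3],
              [4, 5, 6]])
B = np.array([[6, 5, 4],
              [3, 2, 1]])

# The sum and difference are computed entrywise, column by column
print(A + B)   # [[7 7 7], [7 7 7]]
print(A - B)   # [[-5 -3 -1], [ 1  3  5]]
```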

# (ii) Scalar Multiples

If r is a scalar, then the scalar multiple of the matrix A is r*A, which is the matrix whose columns are r times the corresponding columns in A.
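A quick sketch of a scalar multiple, again with made-up entries:

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])
r = 2

# Every entry (and hence every column) of A is scaled by r
print(r * A)   # [[ 2  4  6], [ 8 10 12]]
```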

# (iii) Matrix-Vector Multiplication

If the matrix A is of size m x n (thus, it has n columns), and u is a vector of size n, then the product of A and u, denoted by Au, is the **linear combination** of the columns of A using the corresponding entries in u as weights.

**Note**: The product Au is defined only if the number of columns of the matrix A equals the number of entries in the vector u!

**Properties**: If A is an m x n matrix, u and v are vectors of size n, and r is a scalar, then A(u + v) = Au + Av and A(r*u) = r*(Au).
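The sketch below (with made-up entries) checks that the product Au really is the linear combination of the columns of A weighted by the entries of u, and verifies the two linearity properties:

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])
u = np.array([2, 0, 1])   # one weight per column of A

# Au as a linear combination of the columns of A with weights from u
combo = u[0] * A[:, 0] + u[1] * A[:, 1] + u[2] * A[:, 2]
print(A @ u)    # [ 5 14]
print(combo)    # the same vector

# Linearity properties of the matrix-vector product
v = np.array([1, 1, 1])
r = 3
assert np.array_equal(A @ (u + v), A @ u + A @ v)
assert np.array_equal(A @ (r * u), r * (A @ u))
```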

# (iv) Matrix Multiplication

If A is an m x n matrix and B = [**b**1, **b**2, …, **b**p] is an n x p matrix where **b**i is the i-th column of the matrix B, then the matrix product AB is the m x p matrix whose columns are A**b**1, A**b**2, …, A**b**p. So, essentially, we perform the same procedure as in **(iii)** with matrix-vector multiplication, where each column of the matrix B is a vector.


**Note**: The number of columns in A must match the number of rows in B in order to perform matrix multiplication.

**Properties**: Let A be an m x n matrix, let B and C have sizes such that the indicated sums and products are defined, and let r be a scalar. Then:

- A(BC) = (AB)C (associativity)
- A(B + C) = AB + AC (left distributivity)
- (B + C)A = BA + CA (right distributivity)
- r*(AB) = (r*A)B = A(r*B)
- I_m A = A = A I_n, where I_m denotes the m x m identity matrix
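The column-by-column view of matrix multiplication can be checked directly in NumPy (the matrices below are made up for the sketch): column i of AB equals A times column i of B.

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])        # 2 x 3
B = np.array([[1, 0],
              [0, 1],
              [2, 2]])           # 3 x 2: columns of A must match rows of B

AB = A @ B                       # the 2 x 2 product

# Build AB one column at a time: column i of AB is A @ (column i of B)
cols = np.column_stack([A @ B[:, i] for i in range(B.shape[1])])
print(AB)                        # [[ 7  8], [16 17]]
assert np.array_equal(AB, cols)
```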

# (v) Powers of a Matrix

If A is an n x n matrix and *k* is a positive integer, then A^k (A to the power k) is the product of *k* copies of A: A^k = A · A · ⋯ · A (*k* factors).
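A short sketch with a made-up 2 x 2 matrix, computing A^3 both as a repeated product and with NumPy's built-in `matrix_power`:

```python
import numpy as np

A = np.array([[1, 1],
              [0, 1]])

# A^3 as the product of three copies of A
A_cubed = A @ A @ A
print(A_cubed)                          # [[1 3], [0 1]]
print(np.linalg.matrix_power(A, 3))     # the same matrix
```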

# (vi) Matrix Transpose

Suppose we have a matrix A of size m x n, then the **transpose** of A (denoted by A^T) is the n x m matrix whose columns are formed from the corresponding rows of A.
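A final sketch (entries made up) showing that the transpose turns the rows of a 2 x 3 matrix into the columns of a 3 x 2 matrix:

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])       # 2 x 3

# The transpose swaps rows and columns: A^T is 3 x 2
print(A.T)                      # [[1 4], [2 5], [3 6]]
assert A.T.shape == (3, 2)

# Each column of A^T is the corresponding row of A
assert np.array_equal(A.T[:, 0], A[0, :])
```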

# Conclusion

Math is everywhere within the field of data science. To be a successful data scientist, you definitely do not need to know all the ins and outs of the math behind each concept. However, to be able to make better decisions when dealing with data and algorithms, you need a solid understanding of the math and statistics behind them. This article focused on the basics of linear algebra, a very important discipline of math for a data scientist to understand.