How A Good Data Scientist Looks At Matrix Multiplication

Four different ways to look at it


Introduction:

Data Science is the field that extracts knowledge and insights from structured or unstructured data using scientific computing methods, processes and algorithms.

The data, be it structured (such as data tables) or unstructured (such as images), is represented as matrices.

The operations this data undergoes, as it is processed by machine learning models and other computing processes, predominantly involve matrix multiplication.

So, a deeper insight into the matrix multiplication operation benefits anyone pursuing data science and machine learning.

The multiplication of two matrices can be looked at in four different ways:

  1. Dot Products of Rows and Columns
  2. Linear Combination of Columns
  3. Linear Combination of Rows
  4. Sum of Rank-1 Matrices

Let A and B be the matrices being multiplied as AB. Let the dimensions of A and B be m×p and p×n, respectively. We know that, to be able to multiply A and B, the number of columns in A must match the number of rows in B.

Let us consider the example dimensions below for simplicity and without loss of generality.

And as we know, AB below is the product of the two matrices, A and B.
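
To make this concrete, here is a minimal NumPy sketch of the running example. The 2×2 shapes and the specific entries are illustrative assumptions, not values taken from the original figures.

```python
import numpy as np

# Assumed 2x2 example matrices (entries chosen purely for illustration)
A = np.array([[1., 2.],
              [3., 4.]])
B = np.array([[5., 6.],
              [7., 8.]])

AB = A @ B  # standard matrix multiplication
print(AB)
# [[19. 22.]
#  [43. 50.]]
```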

1. Dot Products of Rows and Columns:

In matrix AB, element-11 can be seen as the dot product of row-1 from A and column-1 from B.

Similarly, element-12 can be seen as the dot product of row-1 from A and column-2 from B.

Matrix Multiplication as a Dot Product of Rows and Columns (Image by Author)

Perspective:

Element-ij in AB is the dot product of row-i from A and column-j from B.
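
A quick NumPy check of this perspective, reusing the assumed 2×2 example from above:

```python
import numpy as np

# Same assumed example matrices as above
A = np.array([[1., 2.], [3., 4.]])
B = np.array([[5., 6.], [7., 8.]])
AB = A @ B

# Element-ij of AB equals the dot product of row-i of A and column-j of B.
for i in range(A.shape[0]):
    for j in range(B.shape[1]):
        assert np.isclose(AB[i, j], A[i, :] @ B[:, j])
```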

2. Linear Combination of Columns:

To form the column perspective of matrix multiplication, reorganize the matrix AB as shown below.

Column-1 of AB can be seen as the sum of b11 times column-1 of A and b21 times column-2 of A. That is, column-1 of AB is the linear combination (weighted sum) of the columns of A, where the weights of the combination are the elements of column-1 of B.

Similarly, column-2 of AB is the linear combination of the columns of A, where the weights of the combination are the elements of column-2 of B.

Matrix Multiplication as a Linear Combination of Columns (Image by Author)

Perspective:

Each column in AB is a linear combination of columns of A, where the weights of the combination are the elements of the corresponding column in B.
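
The same check in NumPy, again with the assumed example matrices:

```python
import numpy as np

# Same assumed example matrices as above
A = np.array([[1., 2.], [3., 4.]])
B = np.array([[5., 6.], [7., 8.]])
AB = A @ B

# Column-j of AB is a weighted sum of the columns of A,
# with weights taken from column-j of B.
for j in range(B.shape[1]):
    combo = sum(B[k, j] * A[:, k] for k in range(A.shape[1]))
    assert np.allclose(AB[:, j], combo)
```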

3. Linear Combination of Rows:

Now, let us look at AB from the row perspective by rewriting it as shown below.

Row-1 of AB can be seen as the sum of a11 times row-1 of B and a12 times row-2 of B. That is, row-1 of AB is the linear combination (weighted sum) of the rows of B, where the weights of the combination are the elements of row-1 of A.

Similarly, row-2 of AB is the linear combination of the rows of B, where the weights of the combination are the elements of row-2 of A.

Matrix Multiplication as a Linear Combination of Rows (Image by Author)

Perspective:

Each row in AB is a linear combination of rows of B, where the weights of the combination are the elements of the corresponding row in A.
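
And the mirror-image check for the row perspective, with the same assumed example:

```python
import numpy as np

# Same assumed example matrices as above
A = np.array([[1., 2.], [3., 4.]])
B = np.array([[5., 6.], [7., 8.]])
AB = A @ B

# Row-i of AB is a weighted sum of the rows of B,
# with weights taken from row-i of A.
for i in range(A.shape[0]):
    combo = sum(A[i, k] * B[k, :] for k in range(A.shape[1]))
    assert np.allclose(AB[i, :], combo)
```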

4. Sum of Rank-1 Matrices:

Rewriting AB as shown below gives us two rank-1 matrices, each of the same size as AB.

The above two matrices are of rank 1 since their rows (and columns) are all linearly dependent: every row (column) is a multiple of a single row (column).

Matrix Multiplication as a Sum of Rank-1 Matrices (Image by Author)

Perspective:

Matrix AB is a sum of p rank-1 matrices of size m×n, where the i-th matrix is the result of multiplying column-i of A with row-i of B (an outer product).
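
A NumPy sketch of this last perspective, using the assumed example and np.outer for the column-times-row products:

```python
import numpy as np

# Same assumed example matrices as above
A = np.array([[1., 2.], [3., 4.]])
B = np.array([[5., 6.], [7., 8.]])

# The k-th term is the outer product of column-k of A with row-k of B;
# each term has rank 1, and the p terms sum to AB.
terms = [np.outer(A[:, k], B[k, :]) for k in range(A.shape[1])]
assert all(np.linalg.matrix_rank(t) == 1 for t in terms)
assert np.allclose(A @ B, sum(terms))
```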

Conclusion:

These different perspectives prove useful on different occasions.

For example, in the attention mechanism of the Transformer neural network architecture, the attention matrix calculation can be seen as a matrix multiplication from the ‘dot product of rows and columns’ perspective.
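
As a rough sketch of that connection (the shapes and names below are illustrative assumptions, not details from this article): in scaled dot-product attention, entry-ij of the score matrix QKᵀ is the dot product of query-i and key-j.

```python
import numpy as np

rng = np.random.default_rng(0)
Q = rng.standard_normal((4, 8))  # 4 queries of dimension 8 (assumed shapes)
K = rng.standard_normal((6, 8))  # 6 keys of dimension 8 (assumed shapes)

# Score-ij is the dot product of query-i (row-i of Q) and key-j
# (column-j of K.T), i.e., the 'dot product of rows and columns' view.
scores = Q @ K.T / np.sqrt(Q.shape[1])
assert np.isclose(scores[0, 0], (Q[0] @ K[0]) / np.sqrt(8))
```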

More about the attention mechanism and the Transformer can be found in the article below.

Into The Transformer

I hope these perspectives on matrix multiplication enable readers to gain a more intuitive understanding of data flow in machine learning and data science algorithms and models.

