
Indeed, and this leads to another important interpretation: matrix multiplication is function evaluation.

Arbitrary functions that take in vectors and output vectors can be very complex and thus difficult to reason about. A useful simplifying assumption is linearity: f applied to a linear combination is the same linear combination of f applied to each piece separately, i.e. f(a*u + b*v) = a*f(u) + b*f(v). Linear algebra, broadly speaking, is the study of functions of this kind and the properties that emerge from the linearity assumption.
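
As a concrete check, here's a minimal sketch in NumPy. The map f (a 90-degree rotation of the plane) is chosen arbitrarily for illustration; any linear map would do:

    import numpy as np

    # An arbitrary example of a linear map: rotation of R^2 by 90 degrees.
    def f(v):
        return np.array([-v[1], v[0]])

    u = np.array([1.0, 2.0])
    v = np.array([3.0, -1.0])
    a, b = 2.0, -3.0

    # Linearity: f(a*u + b*v) == a*f(u) + b*f(v)
    print(f(a*u + b*v))      # [-7. -7.]
    print(a*f(u) + b*f(v))   # [-7. -7.]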

It turns out that, if we assume a function f is linear, all of the information about that function is contained in what it does to a set of basis vectors. We can in essence "encode" the function as a table of numbers (a matrix), where the kth column contains the result of f applied to the kth basis vector, written in that same basis. In this way, given a basis, any linear transformation f has a matrix A which compactly represents it.
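
A sketch of that encoding, continuing with the same rotation f and the standard basis of R^2 (both chosen just for illustration):

    import numpy as np

    def f(v):
        return np.array([-v[1], v[0]])

    # Standard basis of R^2: the columns of the identity matrix.
    e = np.eye(2)

    # kth column of A is f applied to the kth basis vector.
    A = np.column_stack([f(e[:, k]) for k in range(2)])
    print(A)
    # [[ 0. -1.]
    #  [ 1.  0.]]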

Since f is linear, to compute f(v) I could write v in my chosen basis, apply f to each basis vector, and recombine the results with the same coefficients. Alternatively, I could write down the matrix A representing f in that basis and multiply Av. The two are equivalent: Av = f(v). And so matrix-vector multiplication is "just" evaluating the function f.
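
And a quick check of that equivalence, self-contained so it runs on its own (A is the matrix built in the previous sketch):

    import numpy as np

    def f(v):
        return np.array([-v[1], v[0]])

    # Matrix encoding of f in the standard basis, as constructed above.
    A = np.array([[0.0, -1.0],
                  [1.0,  0.0]])

    v = np.array([3.0, -1.0])
    print(A @ v)   # [1. 3.]  -- matrix-vector multiplication...
    print(f(v))    # [1. 3.]  -- ...agrees with evaluating f directly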


