PrimeCalcPro
Math · 10 min read · April 4, 2026

Vectors and Matrices Explained: Linear Algebra for the Rest of Us

A ground-up explanation of linear algebra — vectors, dot products, cross products, matrices, determinants, inverses, and eigenvalues — with real-world applications in physics, graphics, and machine learning.

Linear algebra sounds intimidating, but its core ideas are remarkably concrete. Vectors, matrices, and the operations between them describe everything from physics simulations to machine learning models. This guide makes the fundamentals accessible — no advanced notation required.

What Is a Vector?

A vector is simply a quantity with both magnitude (size) and direction. In 2D, a vector like v = [3, 4] means "move 3 units right and 4 units up." In 3D, you add a third component: v = [3, 4, 2].

Geometrically, a vector is an arrow from the origin to a point. Algebraically, it's an ordered list of numbers (components). Both views are equally valid and you'll switch between them constantly.

Magnitude (length) of a vector uses the Pythagorean theorem generalised to n dimensions:

|v| = √(v₁² + v₂² + … + vₙ²)

For v = [3, 4]: |v| = √(9 + 16) = √25 = 5

A unit vector has magnitude exactly 1. To convert any vector to a unit vector, divide each component by the magnitude: û = v / |v|.
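The magnitude and normalisation formulas above translate directly into code. Here is a minimal Python sketch (the function names `magnitude` and `normalize` are just illustrative choices):

```python
import math

def magnitude(v):
    """Euclidean length: square root of the sum of squared components."""
    return math.sqrt(sum(c * c for c in v))

def normalize(v):
    """Scale v to a unit vector by dividing each component by |v|."""
    m = magnitude(v)
    return [c / m for c in v]

print(magnitude([3, 4]))   # → 5.0
print(normalize([3, 4]))   # → [0.6, 0.8]
```

Note that `normalize([3, 4])` returns a vector whose magnitude is exactly 1, as the definition requires.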

Vector Addition and Scalar Multiplication

Two vectors add component-wise:

[1, 2, 3] + [4, 5, 6] = [5, 7, 9]

Geometrically this is the "head-to-tail" rule — place the second vector's tail at the first vector's head.

Multiplying by a scalar (ordinary number) scales each component:

3 × [1, 2, 3] = [3, 6, 9]

Positive scalars stretch the vector; a scalar of −1 reverses its direction; scalars between 0 and 1 shrink it.
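Both operations are one-liners in Python; this sketch mirrors the worked examples above (the helper names `add` and `scale` are illustrative):

```python
def add(a, b):
    """Component-wise vector addition."""
    return [x + y for x, y in zip(a, b)]

def scale(k, v):
    """Multiply each component by the scalar k."""
    return [k * x for x in v]

print(add([1, 2, 3], [4, 5, 6]))  # → [5, 7, 9]
print(scale(3, [1, 2, 3]))        # → [3, 6, 9]
print(scale(-1, [1, 2, 3]))       # → [-1, -2, -3] (direction reversed)
```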

The Dot Product

The dot product of two vectors produces a scalar (single number):

A·B = a₁b₁ + a₂b₂ + a₃b₃

For A = [1, 2, 3] and B = [4, 5, 6]:

A·B = (1×4) + (2×5) + (3×6) = 4 + 10 + 18 = 32

The geometric meaning is more revealing:

A·B = |A| × |B| × cos(θ)

Where θ is the angle between the vectors. This gives us a critical insight:

  • A·B > 0: Angle < 90° — vectors point roughly the same direction
  • A·B = 0: Angle = 90° — vectors are perpendicular (orthogonal)
  • A·B < 0: Angle > 90° — vectors point roughly opposite directions

The dot product is everywhere in applied mathematics. Machine learning uses cosine similarity (dot product divided by the product of magnitudes) to compare documents and user preferences. Physics uses it to calculate work: W = F·d (force dot displacement).
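Both the dot product and the cosine-similarity measure mentioned above can be sketched in a few lines of Python (function names here are illustrative):

```python
import math

def dot(a, b):
    """Sum of component-wise products: a₁b₁ + a₂b₂ + ..."""
    return sum(x * y for x, y in zip(a, b))

def cosine_similarity(a, b):
    """dot(a, b) / (|a| · |b|) — equals cos(θ) between the vectors."""
    mag = lambda v: math.sqrt(sum(c * c for c in v))
    return dot(a, b) / (mag(a) * mag(b))

print(dot([1, 2, 3], [4, 5, 6]))          # → 32
print(cosine_similarity([1, 0], [0, 1]))  # → 0.0 (perpendicular)
```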

The Cross Product

The cross product works only in 3D and produces a vector (not a scalar) perpendicular to both inputs:

A × B = [a₂b₃ - a₃b₂, a₃b₁ - a₁b₃, a₁b₂ - a₂b₁]

The direction follows the right-hand rule: point your fingers in the direction of A, curl them toward B, and your thumb points in the direction of A × B.

The magnitude of A × B equals the area of the parallelogram spanned by the two vectors:

|A × B| = |A| × |B| × sin(θ)

Unlike the dot product, the cross product is anti-commutative: A × B = −(B × A).

Applications: Torque in physics is τ = r × F. Surface normals in computer graphics (the direction a surface faces) are computed as cross products of edge vectors.
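The component formula for the cross product is easy to get wrong by hand; a short Python sketch makes the pattern explicit and lets you check the right-hand rule on the standard basis vectors:

```python
def cross(a, b):
    """3D cross product: returns a vector perpendicular to both a and b."""
    return [
        a[1] * b[2] - a[2] * b[1],
        a[2] * b[0] - a[0] * b[2],
        a[0] * b[1] - a[1] * b[0],
    ]

# x̂ × ŷ = ẑ, consistent with the right-hand rule:
print(cross([1, 0, 0], [0, 1, 0]))  # → [0, 0, 1]
# Anti-commutativity: swapping the arguments flips the sign.
print(cross([0, 1, 0], [1, 0, 0]))  # → [0, 0, -1]
```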

What Is a Matrix?

A matrix is a rectangular array of numbers, organised in rows and columns. A 3×2 matrix has 3 rows and 2 columns.

Matrices represent linear transformations — functions that stretch, rotate, reflect, or shear vectors. Multiplying a vector by a matrix transforms it.

For a 2×2 matrix A and vector v:

A = [[3, 0],    v = [1]    Av = [3×1 + 0×2] = [3]
     [0, 2]]        [2]         [0×1 + 2×2]   [4]

This transformation scales the x-component by 3 and the y-component by 2.
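Matrix–vector multiplication is just one dot product per row, which the example above shows. A minimal Python sketch of the same computation (the name `mat_vec` is illustrative):

```python
def mat_vec(A, v):
    """Apply matrix A to vector v: each output component is the
    dot product of one row of A with v."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

A = [[3, 0],
     [0, 2]]
print(mat_vec(A, [1, 2]))  # → [3, 4]
```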

Matrix Multiplication

Two matrices A and B multiply to give matrix C = AB, where each element c_ij is the dot product of row i of A with column j of B.

[1, 2] × [5, 6] = [(1×5 + 2×7), (1×6 + 2×8)] = [19, 22]
[3, 4]   [7, 8]   [(3×5 + 4×7), (3×6 + 4×8)]   [43, 50]

Critical rules:

  • AB is only defined when the number of columns in A equals the number of rows in B
  • Matrix multiplication is generally not commutative: AB ≠ BA
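The row-times-column rule, and the failure of commutativity, can both be verified with a short Python sketch (the helper name `mat_mul` is illustrative):

```python
def mat_mul(A, B):
    """C = AB, where c_ij is the dot product of row i of A with
    column j of B. Requires cols(A) == rows(B)."""
    cols_B = list(zip(*B))  # transpose B to iterate over its columns
    return [[sum(a * b for a, b in zip(row, col)) for col in cols_B]
            for row in A]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(mat_mul(A, B))  # → [[19, 22], [43, 50]]
print(mat_mul(B, A))  # → [[23, 34], [31, 46]]  — AB ≠ BA
```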

The Determinant

The determinant of a square matrix is a scalar that tells you how much the matrix scales area (in 2D) or volume (in 3D).

For a 2×2 matrix:

det [[a, b]] = ad - bc
    [[c, d]]

| Determinant value | Meaning |
|-------------------|---------|
| det > 0 | Transformation preserves orientation |
| det < 0 | Transformation reflects (flips orientation) |
| \|det\| > 1 | Transformation expands area/volume |
| \|det\| < 1 | Transformation contracts area/volume |
| det = 0 | Transformation is singular — squashes to lower dimension |

When det = 0, the matrix is singular — it has no inverse, and the system of equations it represents has either no solution or infinitely many.
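The 2×2 formula ad − bc is simple enough to sketch directly; the second example below shows a singular matrix whose rows are parallel (the name `det2` is illustrative):

```python
def det2(M):
    """Determinant of a 2×2 matrix [[a, b], [c, d]]: ad - bc."""
    (a, b), (c, d) = M
    return a * d - b * c

print(det2([[3, 0], [0, 2]]))  # → 6   (scales area by a factor of 6)
print(det2([[1, 2], [2, 4]]))  # → 0   (singular: second row = 2 × first)
```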

The Matrix Inverse

The inverse A⁻¹ satisfies AA⁻¹ = I (the identity matrix). It exists only when det(A) ≠ 0.

For a 2×2 matrix:

A = [[a, b]]    A⁻¹ = (1/det) × [[ d, -b]]
    [[c, d]]                     [[-c,  a]]

Matrix inverses are used to solve systems of linear equations: if Ax = b, then x = A⁻¹b.

In practice, large systems are solved by Gaussian elimination rather than by computing A⁻¹ directly, because elimination is both more efficient and more numerically stable.
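For a 2×2 system, though, the adjugate formula above is perfectly usable. A minimal sketch, assuming the illustrative names `inv2` for the inverse and `b` for the right-hand side:

```python
def inv2(M):
    """Inverse of a 2×2 matrix via the adjugate formula;
    raises ValueError when det = 0 (singular matrix)."""
    (a, b), (c, d) = M
    det = a * d - b * c
    if det == 0:
        raise ValueError("singular matrix has no inverse")
    return [[ d / det, -b / det],
            [-c / det,  a / det]]

# Solve Ax = b as x = A⁻¹b:
A = [[3, 0], [0, 2]]
b = [6, 4]
x = [sum(m * v for m, v in zip(row, b)) for row in inv2(A)]
print(x)  # close to [2.0, 2.0], up to floating-point rounding
```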

Eigenvalues and Eigenvectors

An eigenvector of a matrix A is a special vector v that, when transformed by A, only gets scaled (not rotated):

Av = λv

The scalar λ is the corresponding eigenvalue — it tells you how much the eigenvector gets stretched or shrunk.

To find eigenvalues, solve the characteristic equation:

det(A - λI) = 0

For a 2×2 matrix this gives a quadratic equation with (usually) two solutions.
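For a 2×2 matrix [[a, b], [c, d]], that quadratic is λ² − (a + d)λ + (ad − bc) = 0, i.e. the coefficients are the trace and the determinant. A Python sketch of solving it with the quadratic formula (the name `eig2` is illustrative, and this version handles only real eigenvalues):

```python
import math

def eig2(M):
    """Real eigenvalues of a 2×2 matrix, from the characteristic
    equation λ² - trace·λ + det = 0 solved by the quadratic formula."""
    (a, b), (c, d) = M
    tr, det = a + d, a * d - b * c
    disc = tr * tr - 4 * det
    if disc < 0:
        raise ValueError("eigenvalues are complex")
    root = math.sqrt(disc)
    return [(tr + root) / 2, (tr - root) / 2]

# A diagonal matrix's eigenvalues are simply its diagonal entries:
print(eig2([[3, 0], [0, 2]]))  # → [3.0, 2.0]
```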

Why do eigenvalues matter?

  • Principal Component Analysis (PCA): The eigenvectors of the data covariance matrix define the directions of maximum variance — the "principal components" that reduce dimensionality while preserving information
  • Google PageRank: The dominant eigenvector of the web link matrix gives the stationary distribution of a random web surfer
  • Quantum mechanics: Observable quantities (energy levels, spin states) are eigenvalues of operators

Polar Coordinates

While not strictly part of linear algebra, coordinate systems are related to transformations. Polar coordinates represent any 2D point by its distance r from the origin and angle θ from the positive x-axis.

Conversion between systems:

Cartesian → Polar:   r = √(x² + y²),  θ = atan2(y, x)
Polar → Cartesian:   x = r cos(θ),    y = r sin(θ)
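Both conversions map directly onto Python's `math` module — `hypot` for r and `atan2` for θ (the function names `to_polar` and `to_cartesian` are illustrative):

```python
import math

def to_polar(x, y):
    """Cartesian → polar: r = √(x² + y²), θ = atan2(y, x) in radians."""
    return math.hypot(x, y), math.atan2(y, x)

def to_cartesian(r, theta):
    """Polar → Cartesian: x = r·cos(θ), y = r·sin(θ)."""
    return r * math.cos(theta), r * math.sin(theta)

r, theta = to_polar(3, 4)
print(r)                       # → 5.0
print(to_cartesian(r, theta))  # close to (3.0, 4.0), up to rounding
```

`atan2` is preferred over `atan(y / x)` because it handles all four quadrants and the x = 0 case correctly.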

Polar coordinates simplify many problems involving circles and rotation — equations that are complex in Cartesian become elegant in polar form.

Putting It All Together

Linear algebra's power comes from the fact that it lets you work with many variables simultaneously as a single mathematical object. A machine learning model with millions of parameters is just a sequence of matrix multiplications and non-linear functions. A 3D game engine is transforming millions of vertices per second with rotation, scaling, and projection matrices.

The fundamentals — vectors, dot products, matrices, determinants — are the foundation for all of it.

Use our Dot Product Calculator, Cross Product Calculator, Matrix Determinant Calculator, Matrix Inverse Calculator, and Eigenvalue Calculator to explore these concepts interactively.

Tags: linear algebra, vectors, matrices, dot product, cross product, eigenvalue, determinant, machine learning
