Of course! Finding eigenvalues is a fundamental task in linear algebra, and Python has excellent libraries for it. The most common and powerful library for this is NumPy.

Here's a complete guide on how to compute eigenvalues in Python, from the basic concept to practical examples.
The Core Concept
For a square matrix A, an eigenvector v is a non-zero vector that only changes by a scalar factor when A is applied to it.
The equation is: Av = λv
Where:

- A is the square matrix.
- v is the eigenvector.
- λ (lambda) is the scalar eigenvalue corresponding to the eigenvector v.
Our goal in Python is to find the eigenvalues (λ) and their corresponding eigenvectors (v) for a given matrix A.
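To connect the definition to code: the eigenvalues are exactly the roots of the characteristic polynomial det(A − λI) = 0. The sketch below illustrates this with NumPy's np.poly and np.roots, using the 2x2 matrix from the examples that follow. This is for intuition only; in practice np.linalg.eig is more accurate and efficient.

```python
import numpy as np

A = np.array([[4, 2],
              [1, 3]])

# np.poly(A) returns the coefficients of the characteristic polynomial
# det(A - lambda*I): here lambda^2 - 7*lambda + 10
coeffs = np.poly(A)
print("Characteristic polynomial coefficients:", coeffs)

# The eigenvalues are the roots of that polynomial
print("Roots (eigenvalues):", np.roots(coeffs))
```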
Method 1: Using NumPy (The Standard & Recommended Way)
NumPy is the cornerstone of numerical computing in Python. Its linalg (linear algebra) module provides a highly optimized function for this.
Step 1: Install NumPy
If you don't have it installed, open your terminal or command prompt and run:
pip install numpy
Step 2: Use numpy.linalg.eig()
The function numpy.linalg.eig() takes a square matrix as input and returns two things:

- A 1D array containing the eigenvalues.
- A 2D array (a matrix) where each column is the corresponding eigenvector.
Let's see it in action.
Example 1: A Simple 2x2 Matrix
import numpy as np
# Define a 2x2 matrix
A = np.array([[4, 2],
              [1, 3]])
# Calculate eigenvalues and eigenvectors
eigenvalues, eigenvectors = np.linalg.eig(A)
print("Matrix A:")
print(A)
print("\nEigenvalues:")
print(eigenvalues)
print("\nEigenvectors (each column is an eigenvector):")
print(eigenvectors)
Output:
Matrix A:
[[4 2]
[1 3]]
Eigenvalues:
[5. 2.]
Eigenvectors (each column is an eigenvector):
[[ 0.89442719 -0.70710678]
 [ 0.4472136   0.70710678]]
How to Interpret the Output:

- Eigenvalues: The eigenvalues are 5 and 2.
- Eigenvectors:
  - The first column [0.894, 0.447] is the eigenvector corresponding to the eigenvalue 5.
  - The second column [-0.707, 0.707] is the eigenvector corresponding to the eigenvalue 2.
Verification: Let's manually verify one of them. The eigenvector for λ=5 should satisfy the equation Av = 5v.
# Let's pick the first eigenvalue and eigenvector
lambda1 = eigenvalues[0]
v1 = eigenvectors[:, 0] # The first column
# Calculate A @ v1
result_Av1 = A @ v1
# Calculate lambda1 * v1
result_lambda1_v1 = lambda1 * v1
print("A @ v1 =", result_Av1)
print("lambda1 * v1 =", result_lambda1_v1)
print("Are they equal?", np.allclose(result_Av1, result_lambda1_v1))
Output:
A @ v1 = [4.47213595 2.23606798]
lambda1 * v1 = [4.47213595 2.23606798]
Are they equal? True
The np.allclose() function is used for safe floating-point comparison.
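You can also verify all eigenpairs at once using the matrix identity A V = V diag(λ), where V holds the eigenvectors as columns. A minimal check:

```python
import numpy as np

A = np.array([[4, 2],
              [1, 3]])
eigenvalues, eigenvectors = np.linalg.eig(A)

# A @ eigenvectors applies A to every eigenvector column at once;
# eigenvectors * eigenvalues scales each column by its eigenvalue.
print(np.allclose(A @ eigenvectors, eigenvectors * eigenvalues))  # True
```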
Method 2: Using SciPy (For Advanced Cases)
SciPy is built on top of NumPy and provides more advanced functions. For standard eigenvalue problems, scipy.linalg.eig is an excellent alternative to numpy.linalg.eig. It can sometimes be faster or offer more options.
Installation
pip install scipy
Example: Using scipy.linalg.eig
The usage is very similar.
import numpy as np
from scipy import linalg
# Define the same matrix
A = np.array([[4, 2],
              [1, 3]])
# Calculate eigenvalues and eigenvectors using SciPy
eigenvalues, eigenvectors = linalg.eig(A)
print("Matrix A:")
print(A)
print("\nEigenvalues (from SciPy):")
print(eigenvalues)
print("\nEigenvectors (from SciPy):")
print(eigenvectors)
The output will be identical to the NumPy example.
When to choose SciPy over NumPy?
- Performance: For very large matrices, SciPy's underlying LAPACK routines might have a slight edge.
- Specialized Problems: SciPy has functions for more complex problems, like generalized eigenvalue problems (linalg.eig(A, B) for Av = λBv) or eigenvalues of large sparse matrices (scipy.sparse.linalg.eigs).
Practical Applications of Eigenvalues
Eigenvalues are not just a mathematical curiosity; they have powerful real-world applications.
Principal Component Analysis (PCA) - Dimensionality Reduction
PCA is a technique used to reduce the dimensionality of a dataset while preserving as much variance (information) as possible. It does this by finding the eigenvectors of the covariance matrix of the data. The eigenvectors (called principal components) point in the directions of maximum variance, and the corresponding eigenvalues tell you how much variance is captured in each direction.
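A minimal PCA sketch along those lines, using randomly generated toy data (the data is purely synthetic, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)
# 200 correlated 2D points (synthetic example data)
X = rng.normal(size=(200, 2)) @ np.array([[3.0, 1.0],
                                          [1.0, 0.5]])

X_centered = X - X.mean(axis=0)
cov = np.cov(X_centered, rowvar=False)           # 2x2 covariance matrix

# eigh is NumPy's routine for symmetric matrices like a covariance matrix;
# it returns eigenvalues in ascending order, so reverse for PCA convention.
eigenvalues, eigenvectors = np.linalg.eigh(cov)
order = np.argsort(eigenvalues)[::-1]
eigenvalues, components = eigenvalues[order], eigenvectors[:, order]

explained = eigenvalues / eigenvalues.sum()
print("Fraction of variance per principal component:", explained)
```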
Google's PageRank Algorithm
The original Google algorithm modeled the entire web as a giant matrix. The importance (rank) of a webpage was determined by finding the principal eigenvector of this matrix. The eigenvalue corresponding to this eigenvector is related to the overall "importance" of the web graph.
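Here's a toy sketch of that idea with a hypothetical three-page web: power iteration (repeated matrix-vector multiplication) converges to the principal eigenvector of a column-stochastic link matrix, whose dominant eigenvalue is 1.

```python
import numpy as np

# Tiny 3-page web (illustrative graph): M[i, j] is the probability of
# following a link from page j to page i; each column sums to 1.
M = np.array([[0.0, 0.5, 1.0],
              [0.5, 0.0, 0.0],
              [0.5, 0.5, 0.0]])

# Power iteration: repeated multiplication converges to the principal
# eigenvector (eigenvalue 1), which gives the page ranks.
rank = np.ones(3) / 3
for _ in range(100):
    rank = M @ rank

print("PageRank scores:", rank)
print("Satisfies M @ rank = rank:", np.allclose(M @ rank, rank))
```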
Vibration Analysis (Mechanical Engineering)
In structural engineering, eigenvalues represent the natural frequencies at which a structure will vibrate. The corresponding eigenvectors (mode shapes) show how the structure deforms at each of these frequencies. This is crucial for designing buildings, bridges, and aircraft to avoid resonant frequencies that could cause catastrophic failure.
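As a simplified sketch: for an undamped system with unit masses, the natural frequencies are the square roots of the eigenvalues of the stiffness matrix K (the 2-degree-of-freedom K below is an assumed example, not a real structure):

```python
import numpy as np

# Assumed stiffness matrix for a 2-DOF spring-mass system with unit masses
K = np.array([[ 2.0, -1.0],
              [-1.0,  2.0]])

eigenvalues, mode_shapes = np.linalg.eigh(K)  # K is symmetric
natural_frequencies = np.sqrt(eigenvalues)    # omega = sqrt(lambda), rad/s

print("Natural frequencies (rad/s):", natural_frequencies)
print("Mode shapes (columns):")
print(mode_shapes)
```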
Stability Analysis in Differential Equations
In systems of differential equations (e.g., modeling populations, chemical reactions, or electrical circuits), the eigenvalues of the system's Jacobian matrix determine the stability of its equilibrium points. If all eigenvalues have negative real parts, the system is stable.
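A quick sketch of that stability test, using the Jacobian of a damped oscillator as an assumed example system:

```python
import numpy as np

# Jacobian of a damped oscillator at its equilibrium (illustrative values)
J = np.array([[ 0.0,  1.0],
              [-1.0, -0.5]])

eigenvalues = np.linalg.eigvals(J)   # eigvals: eigenvalues only, no vectors
stable = np.all(eigenvalues.real < 0)

print("Eigenvalues:", eigenvalues)
print("Equilibrium is stable:", stable)
```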
Handling Special Cases
Complex Eigenvalues
If a matrix is not symmetric, its eigenvalues (and eigenvectors) can be complex numbers. NumPy handles this seamlessly.
import numpy as np
# A matrix with complex eigenvalues
B = np.array([[0, -1],
              [1, 0]])  # This is a rotation matrix
eigenvalues, eigenvectors = np.linalg.eig(B)
print("Matrix B:")
print(B)
print("\nComplex Eigenvalues:")
print(eigenvalues)
print("\nComplex Eigenvectors:")
print(eigenvectors)
Output:
Matrix B:
[[ 0 -1]
[ 1 0]]
Complex Eigenvalues:
[0.+1.j 0.-1.j]
Complex Eigenvectors:
[[0.70710678+0.j         0.70710678-0.j        ]
 [0.        +0.70710678j 0.        -0.70710678j]]
Symmetric Matrices
For real, symmetric matrices, the eigenvalues are always real, and the eigenvectors are orthogonal. NumPy will return real-valued arrays for these cases.
import numpy as np
# A symmetric matrix
C = np.array([[2, 1],
              [1, 2]])
eigenvalues, eigenvectors = np.linalg.eig(C)
print("Matrix C (Symmetric):")
print(C)
print("\nReal Eigenvalues:")
print(eigenvalues)
print("\nOrthogonal Eigenvectors (dot product of columns is ~0):")
# Check orthogonality: dot product of the two eigenvector columns
dot_product = np.dot(eigenvectors[:, 0], eigenvectors[:, 1])
print("Dot product of eigenvectors:", dot_product)
Output:
Matrix C (Symmetric):
[[2 1]
[1 2]]
Real Eigenvalues:
[3. 1.]
Orthogonal Eigenvectors (dot product of columns is ~0):
Dot product of eigenvectors: -4.440892098500626e-17
The dot product is extremely close to zero (due to floating-point precision), confirming orthogonality.
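Tip: for symmetric (or complex Hermitian) matrices, NumPy also provides np.linalg.eigh, which exploits the symmetry. It is generally faster than eig, guarantees real eigenvalues, and returns them sorted in ascending order:

```python
import numpy as np

C = np.array([[2, 1],
              [1, 2]])

# eigh is the specialized routine for symmetric/Hermitian matrices
eigenvalues, eigenvectors = np.linalg.eigh(C)

print("Eigenvalues (ascending):", eigenvalues)  # [1. 3.]
print("Eigenvectors:")
print(eigenvectors)
```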
