Numerical Solution of Linear Algebraic
Equations with Direct Methods
- Introduction
- Backsubstitution method
- Gauss-Jordan Elimination
- Cramer's rule
- Inverse of a Matrix
- What is an ill-conditioned matrix?
- Linear Algebra framework
Introduction
The most basic task in linear algebra, and perhaps in all of scientific computing, is to solve for the unknowns in a set of linear algebraic equations. Linear algebraic equations occur in almost all branches of numerical analysis. But their most visible application in engineering is in the analysis of linear systems. Any system whose response is proportional to the input is deemed to be linear. Linear systems include structures, elastic solids, heat flow, seepage of fluids, electromagnetic fields and electric circuits. If the system is discrete, such as a truss or an electric circuit, then its analysis leads directly to linear algebraic equations.

In the case of a statically determinate truss, like the Eiffel tower which is a three-dimensional truss structure, the equations arise when the equilibrium conditions of the joints are written down. The unknowns are then the member forces and the support reactions.
In summary, the modeling of linear systems invariably gives rise to equations of the form $A\mathbf{x} = \mathbf{b}$, where $\mathbf{b}$ is the input and $\mathbf{x}$ represents the response of the system.
Therefore, it is desirable to have an equation-solving algorithm that can handle any number of constant vectors with minimal computational effort. In general, a set of linear algebraic equations looks like this:
$$\begin{aligned}
a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n &= b_1\\
a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n &= b_2\\
&\ \,\vdots\\
a_{n1}x_1 + a_{n2}x_2 + \cdots + a_{nn}x_n &= b_n
\end{aligned}$$
The coefficients $a_{ij}$ and the constants $b_i$ are known; the $n$ unknowns $x_1, \dots, x_n$ are to be determined. In matrix notation the system reads $A\mathbf{x} = \mathbf{b}$.
With Python, we will use the numpy library to deal with matrices. For instance, the so-called Wilson matrix
$$A = \begin{pmatrix} 10 & 7 & 8 & 7\\ 7 & 5 & 6 & 5\\ 8 & 6 & 10 & 9\\ 7 & 5 & 9 & 10 \end{pmatrix}$$
and a right-hand side vector $\mathbf{b}$ can be defined as follows:
import numpy as np
""" Define the linear system elements: matrix A and vector b """
A = np.array([[10, 7, 8, 7],[7, 5, 6, 5],[8, 6, 10, 9],[7, 5, 9, 10]])
b = np.array([1, 1, 1, 1])
We can also create a matrix by using some NumPy functions:
import numpy as np
A = np.zeros((3,3), dtype=np.float64)  # note: np.float was removed from NumPy; use float or np.float64
B = np.zeros((3,3), dtype=np.float64)
# Modification of the diagonal of matrix A
n = A.shape[0]
for i in range(n):
    A[i,i] = 2.0
print(A)
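The same diagonal modification can also be written without an explicit loop; a minimal alternative sketch using np.fill_diagonal, which modifies the array in place:
import numpy as np

# Equivalent, loop-free construction of the same matrix
A = np.zeros((3, 3), dtype=np.float64)
np.fill_diagonal(A, 2.0)   # set every diagonal entry to 2.0 in place
print(A)                   # identical to the loop-based version above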
A particularly useful representation of the equations for computational purposes is the augmented coefficient matrix, obtained by adjoining the constant vector $\mathbf{b}$ to the coefficient matrix $A$. For the Wilson system above, the augmented matrix is
$$[A \mid \mathbf{b}] = \left(\begin{array}{cccc|c} 10 & 7 & 8 & 7 & 1\\ 7 & 5 & 6 & 5 & 1\\ 8 & 6 & 10 & 9 & 1\\ 7 & 5 & 9 & 10 & 1 \end{array}\right).$$
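In NumPy, the augmented matrix can be formed with np.hstack by adjoining $\mathbf{b}$ as an extra column; a small sketch with the Wilson system:
import numpy as np
A = np.array([[10, 7, 8, 7],[7, 5, 6, 5],[8, 6, 10, 9],[7, 5, 9, 10]])
b = np.array([1, 1, 1, 1])
# adjoin b as an extra column to form the augmented matrix [A | b]
Ab = np.hstack((A, b.reshape(-1, 1)))
print(Ab)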
Backsubstitution method
If the matrix $A$ is upper triangular, the system $A\mathbf{x} = \mathbf{b}$ can be solved directly, starting from the last equation and working upwards: $x_i = \left(b_i - \sum_{j=i+1}^{n} a_{ij}x_j\right)/a_{ii}$ for $i = n, n-1, \dots, 1$.
Exercise [PY-1]
In this exercise we want to define the Backsubstitution function. For this, we need to know the dimension of the matrix, which is given by the shape attribute. We assume that the matrix is square.
We will also test the accuracy of the solution, and hence the sensitivity of the method with respect to the dimension of the matrix.
1) Define the Python Backsubstitution function. This function will return the solution vector x of the upper triangular system.
import numpy as np
""" Definition of the Backsubstitution function """
def Backsubstitution(A,b):
    n = A.shape[0]
    if b.shape[0] == n:
        x = np.zeros(n, dtype=np.float64)
        for i in range(n-1,-1,-1):
            pass  # TO DO
    else:
        raise ValueError("shape index error")
    return x
# Use this function
A = np.array([[10, 7, 8, 7],[0, 5, 6, 5],[0, 0, 10, 9],[0, 0, 0, 10]])
b = np.array([1, 1, 1, 1])
print(A)
print(b)
X = Backsubstitution(A,b)
print("Approximation solution of the system: ",X)
def Backsubstitution(A,b):
    n = A.shape[0]
    if b.shape[0] == n:
        x = np.zeros(n, dtype=np.float64)
        for i in range(n-1,-1,-1):
            try:
                # sum of the already-computed unknowns times their coefficients
                s = 0.0
                for j in range(n-1,i,-1):
                    s = s + x[j]*A[i,j]
                x[i] = (b[i]-s)/A[i,i]
            except ZeroDivisionError:
                print("Error: Division by Zero")
            except IndexError:
                print("Error: index ",i," out of bounds")
    else:
        raise ValueError("shape index error")
    return x
2) In order to test the accuracy of the function, we define the coefficients of a random upper triangular matrix A, build the right-hand side b = Ax for a known solution x, and compare the computed solution with x.
import numpy as np
# define a random upper triangular matrix A
n = 50
A = np.zeros((n,n), dtype=np.float64)
for i in range(n):
    for j in range(i,n):
        # select one random value according to the continuous
        # uniform distribution on -1.0 and 1.0
        A[i,j] = np.random.uniform(-1.0,1.0)
# Create a vector with only "1"
x = np.ones(n, dtype=np.float64)
# Calculate the associated vector b such that b=Ax
b = np.dot(A,x)
X = Backsubstitution(A,b)
# Compare the exact solution x with X
# which is obtained by the use of the previous function
err = np.max(np.abs(x-X))
print("Max error of the solution = ",err)
The np.dot function computes the product of a matrix with another matrix or with a vector.
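A minimal illustration:
import numpy as np
A = np.array([[1.0, 2.0], [3.0, 4.0]])
x = np.array([1.0, 1.0])
print(np.dot(A, A))  # matrix-matrix product
print(np.dot(A, x))  # matrix-vector product, here [3. 7.]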
# create an empty list
ERR = []
# define the maximum dimension of the linear systems
dim = 50
# define the number of random systems for each value of n
nb = 100
for n in range(1,dim+1):
    temp = []
    for k in range(nb):
        A = np.zeros((n,n), dtype=np.float64)
        for i in range(n):
            for j in range(i,n):
                # select one random value according to the uniform distribution
                # on the interval -1.0 and 1.0
                A[i,j] = np.random.uniform(-1.0,1.0)
        # Create a vector with only "1"
        x = np.ones(n, dtype=np.float64)
        b = np.dot(A,x)
        X = Backsubstitution(A,b)
        err = np.max(np.abs(X-x))
        temp.append(err)
    ERR.append(temp)

import matplotlib.pyplot as plt
# transpose the ERR list: ERR[k] now holds the errors of trial k for every dimension
ERR = np.transpose(ERR)
# Display grid
plt.grid(True, which="both")
# create the list of dimensions 1..dim for the x axis
t = np.arange(1,dim+1)
# Linear X axis, Logarithmic Y axis
for k in range(nb):
    plt.semilogy(t,ERR[k],'ro')
#plt.ylim([0,1000])
plt.xlim([1,51])
# Provide the title for the semilog plot
plt.title('Axis in Semilog')
# Give x axis label for the semilog plot
plt.xlabel('dimension of the system')
# Give y axis label for the semilog plot
plt.ylabel('Backsubstitution Error')
# Display the semilog plot
plt.show()
#plt.savefig('BacksubstitutionErrors.png')
Gauss-Jordan elimination method
The idea behind Gauss-Jordan elimination is to use elementary row operations that replace the system with an equivalent but simpler system, for which the answers to the questions about solutions are "obvious". The Gauss elimination method needs three steps:
- algebraic operations (possibly swapping rows) for successively eliminating the unknowns below the diagonal, which amounts to defining an invertible matrix $M$ such that $MA = U$ becomes an upper triangular matrix
- calculating the vector $M\mathbf{b}$
- solving the linear system $U\mathbf{x} = M\mathbf{b}$, where $U$ is an upper triangular matrix
By applying the backsubstitution algorithm we can then solve this linear system.
In case one pivot is null, we search for a column with a non-null pivot and we swap these columns. If this is impossible, then the matrix is singular and the method fails. In pseudocode, the elimination step reads:
for k = 1 to n-1 do
    if |a_kk| < eps then
        Error message
    else
        for i = k+1 to n do
            c = a_ik / a_kk
            b_i = b_i - c * b_k
            a_ik = 0
            for j = k+1 to n do
                a_ij = a_ij - c * a_kj
if |a_nn| < eps then
    Error message
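The pseudocode above simply stops when a pivot is (almost) zero. A common safeguard, not implemented in the exercise below, is partial pivoting: swap the current row with the row holding the largest pivot candidate. A minimal sketch (the function name gauss_pivot and its tolerance are illustrative choices, not part of the course code):
import numpy as np

def gauss_pivot(A, b, eps=1.0e-12):
    """ Gauss elimination with partial (row) pivoting; modifies A and b in place. """
    n = A.shape[0]
    for k in range(n - 1):
        # choose the row with the largest pivot candidate in column k
        p = k + np.argmax(np.abs(A[k:, k]))
        if np.abs(A[p, k]) < eps:
            raise ValueError("matrix is singular to working precision")
        if p != k:
            A[[k, p]] = A[[p, k]]      # swap rows k and p of A
            b[k], b[p] = b[p], b[k]    # and the corresponding entries of b
        for i in range(k + 1, n):
            c = A[i, k] / A[k, k]
            A[i, k:] = A[i, k:] - c * A[k, k:]
            b[i] = b[i] - c * b[k]
    return A, b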
Exercise [PY-2]
Define the GaussJordan function, which transforms the system into an upper triangular one; then combine it with Backsubstitution to solve a random linear system and measure the computation time.
import numpy as np
import time
def GaussJordan(A,b, eps = 0.000001):
    n = A.shape[0]
    test = True
    # TO DO
    return test
n = 50
# select random values according to the uniform distribution
# between -1.0 and 1.0 for each coefficient of matrix A
A = np.random.uniform(-1.0,1.0,size=(n,n))
# Create a random vector according to the uniform distribution
x = np.random.uniform(-1.0,1.0,n)
b = np.dot(A,x)
# Solve the linear system and measure the time needed to solve it
start_time = time.time()
if GaussJordan(A,b):
    X = Backsubstitution(A,b)
    t = (time.time() - start_time)
    err = np.max(np.abs(X-x))
    print("n= ",n," time= ",t," err= ",err)
def GaussJordan(A,b, eps = 0.000001):
    """ Transform A in place into an upper triangular matrix (no pivoting). """
    n = A.shape[0]
    test = True
    if b.shape[0] == n:
        for k in range(n-1):
            if np.abs(A[k,k]) > eps:
                for i in range(k+1,n):
                    c = A[i,k]/A[k,k]
                    b[i] = b[i] - c * b[k]
                    A[i,k] = 0.0
                    for j in range(k+1,n):
                        A[i,j] = A[i,j] - c * A[k,j]
            else:
                # (almost) zero pivot: elimination fails without pivoting
                test = False
                print("Error: Division by Zero")
                break
        if np.abs(A[n-1,n-1]) < eps:
            test = False
    else:
        raise ValueError("shape index error")
    return test
We assume now that we have a computer with three significant digits. If we solve a small linear system on such a machine by applying the Gauss elimination method without pivoting, the computed solution can be far from the exact one: with so few digits, a small pivot amplifies the roundoff errors. This is precisely the situation that the pivoting strategy above is designed to avoid.
Exercise [PY-3]
Cramer's rule
We are now going to derive explicit formulas for the solution of $A\mathbf{x} = \mathbf{b}$. Cramer's rule states that, when $\det(A) \neq 0$, the components of the solution are $x_i = \det(A_i)/\det(A)$, where $A_i$ is the matrix obtained by replacing the $i$-th column of $A$ with the vector $\mathbf{b}$.
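As a sketch of how this translates into NumPy (np.linalg.det is used for the determinants; the helper name cramer is ours, and the approach is only practical for small n, since each determinant costs about as much as solving the whole system):
import numpy as np

def cramer(A, b):
    """ Solve Ax = b by Cramer's rule: x_i = det(A_i) / det(A). """
    n = A.shape[0]
    d = np.linalg.det(A)
    x = np.zeros(n)
    for i in range(n):
        Ai = A.copy()
        Ai[:, i] = b        # replace column i of A by b
        x[i] = np.linalg.det(Ai) / d
    return x

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([3.0, 5.0])
print(cramer(A, b))  # expected [0.8, 1.4]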
Inverse of a matrix
What is a singular or noninvertible matrix? There might be cases where the matrix is not invertible, and then the system cannot be solved uniquely. Consider, for example, a system whose equations are multiples of one another, such as $x + y = 1$ and $2x + 2y = 2$: its coefficient matrix is singular, so the system has no unique solution.
Now a piece of important trivia regarding matrix inversion:
- If a matrix is non-invertible, its transpose is non-invertible too
- Equivalently, the columns of a non-invertible matrix are linearly dependent
- If the determinant of a matrix is zero, then the matrix is not invertible
- The rank of an invertible matrix of size $n \times n$ is $n$ (full rank)
- The eigenvalues of an invertible matrix are all different from zero
While not exact linear combinations of each other, some of the equations may be so close to linearly dependent that roundoff errors in the machine render them linearly dependent at some stage in the solution process. In this case your numerical procedure will fail, and it can tell you that it has failed.
Accumulated roundoff errors in the solution process can swamp the true solution. This problem particularly emerges if the dimension $n$ is too large. The numerical procedure does not fail algorithmically. However, it returns a set of $x_i$'s that are wrong, as can be discovered by direct substitution back into the original equations. The closer a set of equations is to being singular, the more likely this is to happen, since increasingly close cancellations will occur during the solution. In fact, the preceding item can be viewed as the special case in which the loss of significance is unfortunately total.
Much of the sophistication of well-written linear equation-solving packages
is devoted to the detection and/or correction of these two pathologies. It is difficult
to give any firm guidelines for when such sophistication is needed, since there is no
such thing as a “typical” linear problem. But here is a rough idea: well-conditioned linear sets of moderate size are routinely solved in double precision without special precautions, whereas nearly singular or very large systems call for the safeguards discussed in this section.
There are many algorithms dedicated to the solution of large sets of equations, each one being tailored to a particular form of the coefficient matrix (symmetric, banded, sparse, ...). A well-known collection of these routines is LAPACK—Linear Algebra PACKage, originally written in Fortran77. The NumPy linear algebra functions rely on BLAS and LAPACK to provide efficient low-level implementations of standard linear algebra algorithms. Those libraries may be provided by NumPy itself using C versions of a subset of their reference implementations but, when possible, highly optimized libraries that take advantage of specialized processor functionality are preferred (see numpy.linalg).
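For instance, the general-purpose solver np.linalg.solve (which relies on LAPACK) solves the Wilson system used throughout this section in one call; the right-hand side below is the one used in Exercise [PY-4]:
import numpy as np
A = np.array([[10., 7., 8., 7.],[7., 5., 6., 5.],[8., 6., 10., 9.],[7., 5., 9., 10.]])
b = np.array([32., 23., 33., 31.])
x = np.linalg.solve(A, b)   # calls LAPACK routines under the hood
print(x)                    # exact solution of this system is [1, 1, 1, 1]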
A system of $n$ linear equations can be transformed into an equivalent one, having the same solutions, by elementary row operations. Given such a system, the allowed operations are:
- Swap two rows
- Multiply a row by a non-zero scalar
- Sum two rows and replace one of them with the result
Operation 3, for example, sums rows 2 and 3 and stores the result in row 3, without changing the solution of the system.
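These three operations map directly onto NumPy array manipulations; a small illustration on an arbitrary 3×3 matrix:
import numpy as np
M = np.array([[1., 2., 3.],[4., 5., 6.],[7., 8., 10.]])
M[[0, 1]] = M[[1, 0]]      # operation 1: swap rows 0 and 1
M[0] = 2.0 * M[0]          # operation 2: multiply row 0 by a non-zero scalar
M[2] = M[1] + M[2]         # operation 3: replace row 2 by the sum of rows 1 and 2
print(M)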
To compute the inverse of a matrix $A$, one can solve $n$ linear systems, one per column of the identity matrix. Let $\mathbf{e}_i$ denote the $i$-th column of the identity; the solution of $A\mathbf{x}_i = \mathbf{e}_i$ is the $i$-th column of $A^{-1}$.
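A minimal sketch of this column-by-column idea, using np.linalg.solve for each system (in practice np.linalg.inv does this work for you):
import numpy as np
A = np.array([[4.0, 7.0], [2.0, 6.0]])
n = A.shape[0]
I = np.eye(n)
# solve A x_i = e_i for each column e_i of the identity matrix
Ainv = np.column_stack([np.linalg.solve(A, I[:, i]) for i in range(n)])
print(Ainv)                  # compare with np.linalg.inv(A)
print(np.dot(A, Ainv))       # should be (numerically) the identity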
What is an ill-conditioned matrix?
Let us consider that the vector $\mathbf{b}$ is slightly perturbed. For some matrices, a tiny change in $\mathbf{b}$ produces a huge change in the solution $\mathbf{x}$: such matrices are said to be ill-conditioned.
In order to evaluate whether a matrix is well-conditioned or not, we need to introduce the matrix norm framework.
Let us recall that a function $\|\cdot\| : \mathbb{R}^n \to \mathbb{R}_+$ is a vector norm if $\|x\| = 0$ if and only if $x = 0$, $\|\lambda x\| = |\lambda|\,\|x\|$ for every scalar $\lambda$, and $\|x + y\| \leq \|x\| + \|y\|$.
Definition. Let $\|\cdot\|$ be a vector norm on $\mathbb{R}^n$. The induced (or subordinate) matrix norm is defined by $\|A\| = \sup_{x \neq 0} \frac{\|Ax\|}{\|x\|}$.
Theorem. If $\|\cdot\|$ is an induced matrix norm, then $\|Ax\| \leq \|A\|\,\|x\|$ for every vector $x$, and $\|AB\| \leq \|A\|\,\|B\|$ for all matrices $A$ and $B$.
Theorem. Let $A$ be an $n \times n$ matrix.
The following properties are all equivalent:
a) $A$ is invertible
b) $\det(A) \neq 0$
c) the kernel of $A$ reduces to $\{0\}$
d) the image of $A$ is all of $\mathbb{R}^n$
e) there exists a real value $c > 0$ such that $\|Ax\| \geq c\,\|x\|$ for every $x$
The fact that such a constant $c$ exists measures how far $A$ is from being singular; for an induced norm, the best possible constant is $c = 1/\|A^{-1}\|$.
Theorem. Let $A$ be an $n \times n$ matrix. The matrix norms induced by the vector norms $\|\cdot\|_1$, $\|\cdot\|_2$ and $\|\cdot\|_\infty$ are given by $\|A\|_1 = \max_j \sum_i |a_{ij}|$ (maximum absolute column sum), $\|A\|_\infty = \max_i \sum_j |a_{ij}|$ (maximum absolute row sum), and $\|A\|_2 = \sqrt{\rho(A^T A)}$, where $\rho$ denotes the spectral radius.
Remark.
There exist matrix norms that are not induced by any vector norm. For example, the Frobenius norm $\|A\|_F = \left(\sum_{i,j} |a_{ij}|^2\right)^{1/2}$ is a matrix norm, but it is not induced by a vector norm (indeed $\|I\|_F = \sqrt{n} \neq 1$).
Condition number of a linear system
Let $A$ be an invertible matrix and $\|\cdot\|$ an induced matrix norm. The condition number of $A$ is defined by $\mathrm{cond}(A) = \|A\|\,\|A^{-1}\|$. If $A\mathbf{x} = \mathbf{b}$ and $A(\mathbf{x} + \delta\mathbf{x}) = \mathbf{b} + \delta\mathbf{b}$, then $\frac{\|\delta\mathbf{x}\|}{\|\mathbf{x}\|} \leq \mathrm{cond}(A)\,\frac{\|\delta\mathbf{b}\|}{\|\mathbf{b}\|}$: the larger the condition number, the more a small perturbation of the data can be amplified in the solution.
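NumPy exposes this quantity directly through np.linalg.cond, which is a convenient check for the hand-written norm functions of the next exercise; for the Wilson matrix:
import numpy as np
A = np.array([[10., 7., 8., 7.],[7., 5., 6., 5.],[8., 6., 10., 9.],[7., 5., 9., 10.]])
print(np.linalg.cond(A, 1))   # condition number in the 1-norm
print(np.linalg.cond(A, 2))   # condition number in the 2-norm (about 3.0e3 here)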
Exercise [PY-4]
1) Define the Wilson matrix and solve the system for the right-hand side b and for the slightly perturbed right-hand side c; compare the two solutions:
import numpy as np
from numpy import linalg as LA
A=np.array([# To Do])
b=np.array([32,23,33,31])
c=np.array([32.1,22.9,33.1,30.9])
# Solve the linear system
x=LA.solve(A,b)
# solve the perturbated linear system
y=LA.solve(A,c)
print("Error : ",x-y)
2) Calculate the condition number of the Wilson matrix for the 1-norm and the 2-norm, using the following functions:
""" 1-Norm function """
def Norm1(A):
#TO DO
return
""" 2-Norm function """
def Norm2(A):
#TO DO
return value
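A possible implementation of these two functions, assuming the induced-norm formulas recalled above (maximum absolute column sum for the 1-norm, square root of the spectral radius of A^T A for the 2-norm):
import numpy as np

""" 1-Norm function: maximum absolute column sum """
def Norm1(A):
    value = np.max(np.sum(np.abs(A), axis=0))
    return value

""" 2-Norm function: square root of the spectral radius of A^T A """
def Norm2(A):
    eigenvalues = np.linalg.eigvals(np.dot(A.T, A))
    value = np.sqrt(np.max(np.abs(eigenvalues)))
    return value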
Exercise [PY-5]
Define the Hilbert(n) function, which creates the Hilbert matrix of dimension $n$, whose entries are $H_{ij} = \frac{1}{i+j-1}$ with 1-based indices $i, j$ (see Wikipedia for the definition of the Hilbert matrix).
import numpy as np
from numpy import linalg as LA
""" Hilbert matrix function """
def Hilbert(n):
    #TO DO
    return value
n=5
H=Hilbert(n)
x=np.ones(n)
b=np.dot(H,x)
X=LA.solve(H,b)
print("Error = ",x-X)
Exercise [AL-1]
Let
By substituting
Exercise [AL-2]
Let
Show that, for any
Exercise [AL-3]
Let
The main objective of this exercise is to study how the solution
Moreover, we assume that
Linear Algebra framework
Let
Let
Consequently
is then called the
If
References
- P.G. Ciarlet (1982), Introduction à l’analyse numérique matricielle et à l’optimisation, Masson
- J.J. Dongarra, C.B. Moler, J.R. Bunch & G.W. Stewart (1979), LINPACK Users’ Guide. SIAM.
- D.K. Faddeev & V.N. Faddeeva (1963), Computational Methods of Linear Algebra. Freeman & Co.
- G.H. Golub & C.F. Van Loan (1989), Matrix Computations. Second edition. Johns Hopkins Univ. Press.
- N.J. Higham (1996), Accuracy and Stability of Numerical Algorithms. SIAM.
- A.S. Householder (1964), The Theory of Matrices in Numerical Analysis. Blaisdell Publ. Comp.
- G.W. Stewart (1973), Introduction to Matrix Computations. Academic Press.
- L.N. Trefethen & D. Bau (1997), Numerical Linear Algebra. SIAM.
- J.H. Wilkinson (1969), Rundungsfehler. Springer-Verlag.
- J.H. Wilkinson & C. Reinsch (1971), Handbook for Automatic Computation, Volume II, Linear Algebra. Springer-Verlag.