Polynomial Interpolation
- Introduction
- Canonical Interpolation
- Lagrange Interpolation
- Divided differences of a function
- Newton Interpolation
- Interpolation Polynomial Errors
- Sensitivity of the solution to roundoff errors
- Cubic Spline Interpolation
Introduction
Discrete data sets, or tables of the form
Curve fitting is applied to data that contain scatter (noise), usually due to
measurement errors. Here we want to find a smooth polynomial curve that approximates the data
in some sense. Thus the curve does not necessarily hit the data points. The difference
between interpolation and polynomial curve fitting is illustrated in the following figure:
We denote by
Canonical Interpolation Problem
We want to define a continuous curve which goes through all points without a break or a kink. The problem is that there are infinitely many curves that go smoothly through the points, and unless we can define what the French curve (or the artistic hand) does in a precise mathematical way, the computer is stuck.
For this, we use polynomial interpolation. It is often needed to estimate the value of a function
This is particularly useful if
Specifically, to find the coefficients of
In practice, however, this method is not widely used, for two
reasons: (1) the high computational complexity of solving the linear system, and (2) the poor conditioning of the resulting matrix, which makes the computed coefficients sensitive to roundoff errors.
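As a quick, hedged illustration of the conditioning issue (a sketch, not part of the original exercises): the condition number of the Vandermonde matrix for uniformly spaced points on [-1,1] grows rapidly with the degree, so the coefficients obtained from the linear system become increasingly sensitive to roundoff.
import numpy as np

# condition number of the Vandermonde matrix for increasing polynomial degree
for n in (5, 10, 15, 20):
    X = np.linspace(-1.0, 1.0, n+1)
    V = np.vander(X, increasing=True)    # V[i,j] = X[i]**j
    print("degree", n, "-> cond(V) =", np.linalg.cond(V))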
Exercise [PY-1]
We want to interpolate the following data by using a polynomial function with the appropriate degree
-
Let
be a polynomial function. Write the linear system which allows the function to interpolate the previous data. Choose the appropriate value for . -
Solve this system by using the linalg library of numpy.
-
Define the Pol(x,coeff) function which returns, for a given
the value of the polynomial function for which the coefficients are given in the coeff list. You can use the Horner algorithm. -
By using X=np.linspace(a,b,p), which generates p uniformly spaced values between
and , draw the previous polynomial function. You can use the list t=np.linspace(a,b,100) and Pol(t,coeff) to draw this polynomial function. -
We assume now that the given values
come from a function . By modifying the value of or , what kind of mathematical or numerical phenomenon do we observe?
import numpy as np
import time
from numpy import linalg as LA
import matplotlib.pyplot as plt
def f(x):
    return np.abs(x)

def Pol(x,coeff):
    s = 0.0
    # TO DO
    return s

# Construction of the interpolation matrix
def Mat(X):
    n=len(X)-1
    A=np.zeros((n+1,n+1))
    # TO DO
    return A

# number of interpolation points
p=5
# list of data to interpolate
a=-1.0
b= 1.0
X=np.linspace(a,b,p)
Y=f(X)
# plot the data points
plt.ylim(-1,2)
p1=plt.plot(X,Y,'bo')
# degree of the polynomial
n=p-1
# list of abscissae used to plot the function
t=np.linspace(a,b,100)
p2=plt.plot(t,f(t),'k')
# TO DO: solve the linear system to compute the coefficients of the interpolation polynomial
# plot of the interpolation polynomial
p3=plt.plot(t,Pol(t,coeff),'r')
import numpy as np
from numpy import linalg as LA
import matplotlib.pyplot as plt
# Polynomial interpolation
print("Polynomial interpolation")

def f(x):
    return np.abs(x)
    #return (x**2-4*x+4)*(x**2+4*x+2)

# number of interpolation points
p=5
# degree of the polynomial
n=p-1
# interval containing the data to interpolate
a=-1.0
b= 1.0

# Horner algorithm: evaluate the polynomial with coefficients coeff at x
def Pol(x,coeff):
    p=len(coeff)
    s=coeff[p-1]
    for i in range(p-1):
        s=x*s+coeff[p-2-i]
    return s

X=np.linspace(a,b,p)
Y=f(X)
t=np.linspace(a,b,1000)
plt.ylim(-1,2)
p1=plt.plot(X,Y,'bo')
# Construction of the interpolation (Vandermonde) matrix
A=np.eye(n+1)
for i in range(n+1):
    for j in range(n+1):
        A[i,j]=X[i]**j
# Solve A*coeff = Y
coeff=LA.solve(A,Y)
print("coefficients = ",coeff)
p2=plt.plot(t,f(t),'k')
p3=plt.plot(t,Pol(t,coeff),'r')
plt.show()
-
Instead of using uniform abscissae, choose the Chebyshev abscissae between
and for the canonical polynomial basis interpolation. What are the differences for or ?
""" The Chebyshev abscissae """
X = [np.cos(np.pi*(2*k-1)/(2*n)) for k in range(1, n+1)]
Lagrange Interpolation
In order to get the Lagrange interpolating polynomial
Exercise [PY-2]
In this exercise, we want to calculate the interpolation polynomial by using the Lagrange basis. -
Create the Lagrange(t,X,Y) function which calculates for any value
the value of the Lagrange interpolation of the data. You can create a function LP(i,t,X) which calculates the value of the -th Lagrange polynomial for and the given abscissae in the list . -
Apply this method for
or .
import numpy as np

""" i-th Lagrange polynomial """
def LP(i,t,X):
    # TO DO
    return

""" Lagrange interpolation function """
def Lagrange(t,X,Y):
    # TO DO
    return value
# Interpolation with the Lagrange polynomial
def LP(i,x,X):
    res=1.0
    for j in range(len(X)):
        if j!=i:
            res=res*(x-X[j])/(X[i]-X[j])
    return res

def Lagrange(x,X,Y):
    s=0.0
    for i in range(len(X)):
        s=s+Y[i]*LP(i,x,X)
    return s
p5=plt.plot(t,Lagrange(t,X,Y),'y')
Divided differences of a function
In the following, we need to calculate the interpolation polynomial in the Newton basis. Moreover, we will calculate a bound on the error between the interpolation polynomial and a given function. We therefore define the concept of "divided differences" of a function.
The divided differences do not depend on the order of the abscissae:
If
For any
For any
Let
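As a sketch (assuming the usual recursive definition of the divided differences; the helper name DividedDifferences is not from the text), the whole table can be computed column by column:
import numpy as np

def DividedDifferences(X, Y):
    """Table d[i,k] = divided difference of order k based on X[i],...,X[i+k]."""
    n = len(X)
    d = np.zeros((n, n))
    d[:, 0] = Y                       # order-0 differences are the data values
    for k in range(1, n):
        for i in range(n - k):
            d[i, k] = (d[i+1, k-1] - d[i, k-1]) / (X[i+k] - X[i])
    return d
The first row d[0, :] then contains the coefficients used by the Newton form below.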
Newton Interpolation
Let
Coefficient
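A minimal sketch of Newton interpolation (the helper names NewtonCoeff and Newton are not from the text): the coefficients are the divided differences based on the leading nodes, and the polynomial is evaluated with nested multiplications, in the spirit of the Horner algorithm.
import numpy as np

def NewtonCoeff(X, Y):
    # coefficients c[k] = divided difference of order k based on X[0],...,X[k]
    c = np.array(Y, dtype=float)
    for k in range(1, len(X)):
        for i in range(len(X)-1, k-1, -1):
            c[i] = (c[i] - c[i-1]) / (X[i] - X[i-k])
    return c

def Newton(x, X, c):
    # nested evaluation of the Newton form
    s = c[-1]
    for k in range(len(c)-2, -1, -1):
        s = s*(x - X[k]) + c[k]
    return s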
Interpolation Polynomial Errors
As with any approximate method, the utility of polynomial interpolation cannot be stretched too far. Here, we shall quantify the errors that can occur
in polynomial interpolation and develop techniques to minimize such errors.
Say you have a function
The error of the polynomial interpolation is defined as the difference between the function and its interpolating polynomial.
Exercise [M-1]
-
If
is a polynomial of degree show that is exactly interpolated by . -
If
is a polynomial of degree show that the interpolation has a non-zero error term. Moreover for show that: -
Approximate function
by an interpolating polynomial of degree , based on the following node points and .
Draw this function and the interpolating polynomial . -
Calculate the approximation
of the 2-norm of ; a sketch of this computation is given below.
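A minimal sketch of the last question, reusing f, X, Y, a, b and the Lagrange function defined in the Python exercises above (the fine grid and the quadrature rule are assumptions):
t = np.linspace(a, b, 2000)
err = f(t) - Lagrange(t, X, Y)
norm2 = np.sqrt(np.trapz(err**2, t))   # approximate 2-norm of the error on [a,b]
print("approximate 2-norm of the error =", norm2)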
Exercise [M-2]
-
If
calculate a bound on which depends on for and .
Draw this error bound on . -
If
calculate a bound on which depends on .
Exercise [PY-3]
In this exercise, we want to calculate the interpolation polynomial by using the Lagrange basis. -
Here is a famous example due to Carl Runge. Let
and be the polynomial that interpolates at evenly spaced nodes in the interval . Show that the fit becomes worse when or becomes larger (apply an interpolation method for or ). -
Apply the Chebyshev abscissae of Exercise [PY-1] between
and and compare this interpolation polynomial with the previous one. -
What if the function we’re interpolating isn’t smooth? If we consider the step function
if and elsewhere, show that the polynomial interpolation exhibits the Gibbs phenomenon.
In the previous examples, we note that the approximation is very poor towards the two ends, where the error
The smooth blue line is
The function in Runge’s example has a singularity at
For smooth functions, interpolating at more points does improve the fit, provided you interpolate at the roots of Chebyshev polynomials.
As you interpolate at the roots of higher and higher degree Chebyshev polynomials,
the interpolants converge to the function being interpolated. The plot below shows how interpolating at the roots of
To make this plot, we replaced
""" The chebychev abscissa """
n = 16
x = [np.cos(pi*(2*k-1)/(2*n)) for k in range(1, n+1)]
x = 5*np.array(x)
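As a hedged sketch (assuming the Runge function is f(x) = 1/(1+x**2) on [-5,5], consistent with the scaling by 5 above, and reusing the Lagrange function of Exercise [PY-2]), the following compares interpolation at uniform and at Chebyshev nodes:
import numpy as np
import matplotlib.pyplot as plt

def runge(x):
    return 1.0/(1.0 + x**2)    # assumed Runge function

n = 16
Xu = np.linspace(-5.0, 5.0, n)                                            # uniform nodes
Xc = 5.0*np.array([np.cos(np.pi*(2*k-1)/(2*n)) for k in range(1, n+1)])   # Chebyshev nodes scaled to [-5,5]

t = np.linspace(-5.0, 5.0, 1000)
plt.plot(t, runge(t), 'k')                     # the function itself
plt.plot(t, Lagrange(t, Xu, runge(Xu)), 'r')   # large oscillations near the ends
plt.plot(t, Lagrange(t, Xc, runge(Xc)), 'b')   # much better behaved
plt.show()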
Here’s the result of interpolating the indicator function of the interval
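A hedged sketch of that experiment, assuming the indicator function of [-1,1] inside [-5,5] (the actual interval is not specified above) and reusing t, Xc and Lagrange from the previous sketch:
def indicator(x):
    return np.where(np.abs(x) <= 1.0, 1.0, 0.0)   # assumed indicator of [-1,1]

plt.plot(t, indicator(t), 'k')
plt.plot(t, Lagrange(t, Xc, indicator(Xc)), 'r')  # oscillations persist near the jumps
plt.show()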
Sensitivity of the interpolation polynomial solution to roundoff errors
We assume that the data
We then want to study the difference between the polynomial solutions of
Uniform data points case:
for
Chebyshev data points case:
for
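The sensitivity to perturbed data is governed by the Lebesgue function, the sum of the absolute values of the Lagrange basis polynomials, and by its maximum, the Lebesgue constant. A minimal numerical sketch, reusing the LP function of Exercise [PY-2] (the helper name Lebesgue is not from the text):
import numpy as np

def Lebesgue(t, X):
    s = np.zeros_like(t)
    for i in range(len(X)):
        s = s + np.abs(LP(i, t, X))   # |L_i(t)| summed over the basis
    return s

n = 15
t = np.linspace(-1.0, 1.0, 5000)
Xu = np.linspace(-1.0, 1.0, n+1)                                           # uniform points
Xc = np.array([np.cos(np.pi*(2*k-1)/(2*(n+1))) for k in range(1, n+2)])    # Chebyshev points
print("uniform points   : Lebesgue constant ~", np.max(Lebesgue(t, Xu)))
print("Chebyshev points : Lebesgue constant ~", np.max(Lebesgue(t, Xc)))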
Exercise [M-3]
Let-
For any subdivision of the interval, calculate a bound on
(use the Interpolation Error Theorem), the interpolation error of this function by . Apply it for . -
If
, calculate the Lebesgue constant for a Chebyshev subdivision on . Draw the interpolation polynomial solution.
If
Exercise [M-4]
Let-
Draw the associated Lebesgue constant
. -
Show that for
we have: where if and if -
Show that
has a unique local maximum on each interval . For this, you can study the extrema of the .
Cubic Spline Interpolation
All previously discussed methods of polynomial interpolation fit a set of
Now we consider a different method, spline interpolation, which fits the given points by a piecewise polynomial function
-
For
to be continuous, two consecutive polynomials and must join at : or, equivalently, must pass through the two end-points, i.e., -
For
to be smooth, they need to have the same derivatives at the point where they join, i.e., must hold for some order . The higher the order, the smoother the spline.
-
Linear spline:
with two parameters and can only satisfy the following two equations required for to be continuous: or, equivalently, has to pass through the two end points and : Solving either of the two problems above, we get: and which is represented in the form of by the first expression, or a linear interpolation of the two end points and in the second expression. As , the linear spline is continuous at . But as in general, is not smooth, i.e., .
Quadratic spline:
with three parameters and can satisfy the following three equations required for to be smooth ( ) as well as continuous: To obtain the three parameters , and in , we consider , which, as a linear function, can be linearly fit by the two end points and : Integrating, we get As , we have Solving this for and substituting it back into the expression of , we get Also, as , we have or Given , we can get iteratively all subsequent and thereby . Alternatively, given , we can also get iteratively all previous . It is obvious that with only three free parameters, the quadratic polynomials cannot satisfy both boundary conditions and . -
Cubic spline:
with four parameters , and can satisfy the following four equations required for to be continuous and smooth ( ): and .
To obtain the four parameters , , and in , we first consider , which, as a linear function, can be linearly fit by the two end points and : Integrating twice, we get As and , we have: Solving these two equations, we get the two coefficients and : Substituting them back into and rearranging the terms, we get
To find , we take the derivative of and rearrange terms to get which, when evaluated at and , becomes: Replacing by in the second equation, we also get To satisfy , we equate the above to the first equation to get: Multiplying both sides by and rearranging, we get: Note that here is simply the second divided difference. We can rewrite the equation above as where
Here we have equations but unknowns . To obtain these unknowns, we need two additional equations based on certain assumed boundary conditions.
Assume the first order derivatives at both ends and are known. In particular, is called the clamped boundary condition. At the front end, we set Multiplying we get Similarly, at the back end, we also set Multiplying we get We can combine these equations into a linear system of equations with the same number of unknowns:
Alternatively, we can also assume and are known. In particular, is called the natural boundary condition. Now we can simply get and and solve the following system for the unknowns :
A numerical sketch of this construction, using the natural boundary condition, follows this list.
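A hedged sketch of the construction above with the natural boundary condition (zero second derivatives at both ends); this from-scratch CubicSpline is an illustration, not scipy's routine of the same name, and it assumes at least three data points:
import numpy as np

def CubicSpline(x, X, Y):
    """Natural cubic spline: the second derivatives at the end points are set to zero."""
    n = len(X) - 1
    h = np.diff(X)                       # interval lengths h_i = X[i+1]-X[i]
    # tridiagonal system for the interior second derivatives M_1,...,M_{n-1}
    A = np.zeros((n-1, n-1))
    b = np.zeros(n-1)
    for i in range(1, n):
        A[i-1, i-1] = 2.0*(h[i-1] + h[i])
        if i > 1:
            A[i-1, i-2] = h[i-1]
        if i < n-1:
            A[i-1, i] = h[i]
        b[i-1] = 6.0*((Y[i+1]-Y[i])/h[i] - (Y[i]-Y[i-1])/h[i-1])
    M = np.zeros(n+1)
    M[1:n] = np.linalg.solve(A, b)       # natural boundary: M[0] = M[n] = 0
    # evaluate the piecewise cubic at the points x
    x = np.atleast_1d(x)
    s = np.zeros_like(x, dtype=float)
    for k, xv in enumerate(x):
        i = min(max(np.searchsorted(X, xv) - 1, 0), n-1)   # interval containing xv
        dx1 = X[i+1] - xv
        dx0 = xv - X[i]
        s[k] = (M[i]*dx1**3 + M[i+1]*dx0**3)/(6.0*h[i]) \
             + (Y[i]/h[i] - M[i]*h[i]/6.0)*dx1 \
             + (Y[i+1]/h[i] - M[i+1]*h[i]/6.0)*dx0
    return s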
Exercise [PY-4]
In this exercise, we want to compute the polynomial spline interpolation of the Runge function. -
Define the LinearSpline(x,X,Y), QuadraticSpline(x,X,Y) and CubicSpline(x,X,Y) functions.
Apply these functions to the interpolation of uniform data from the Runge function. Draw the resulting curves. A starting sketch is given below.
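As a starting point, a hedged sketch for the linear spline, again assuming the Runge function f(x) = 1/(1+x**2) on [-5,5]; np.interp already performs piecewise linear interpolation, and the CubicSpline sketch above can be reused for comparison.
import numpy as np
import matplotlib.pyplot as plt

def runge(x):
    return 1.0/(1.0 + x**2)          # assumed Runge function

def LinearSpline(x, X, Y):
    return np.interp(x, X, Y)        # piecewise linear interpolation

X = np.linspace(-5.0, 5.0, 11)       # uniform data points
Y = runge(X)
t = np.linspace(-5.0, 5.0, 1000)
plt.plot(t, runge(t), 'k')
plt.plot(t, LinearSpline(t, X, Y), 'b')
plt.plot(X, Y, 'bo')
plt.show()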