Polynomial Interpolation

Introduction

Discrete data sets, or tables of the form
$$\begin{array}{c|ccccc} j & 1 & 2 & 3 & \cdots & p\\ \hline x_j & x_1 & x_2 & x_3 & \cdots & x_p\\ y_j & y_1 & y_2 & y_3 & \cdots & y_p \end{array}$$
are commonly involved in technical calculations. We assume that $x_1<x_2<\cdots<x_p$. The source of the data may be experimental observations or numerical computations. There is a distinction between interpolation and curve fitting. In interpolation, we construct a polynomial curve through the data points. In doing so, we make the implicit assumption that the data points are accurate and distinct.

Curve fitting is applied to data that contain scatter (noise), usually due to measurement errors. Here we want to find a smooth polynomial curve that approximates the data in some sense. Thus the curve does not necessarily hit the data points. The difference between interpolation and polynomial curve fitting is illustrated in the following figure:

We denote by $\mathcal{P}_n$ the linear space of polynomials of degree at most $n$.

Canonical Interpolation Problem

We want to define a continuous curve which goes through all the points without a break or a kink. The problem is that there are an infinite number of curves that go smoothly through the points, and unless we can define what the French curve (or the artistic hand) does in a precise mathematical way, the computer is stuck.

For this, we use polynomial interpolation. It is often necessary to estimate the value of a function $y=f(x)$ at a certain point $x$ based on the known values of the function $f(x_0),\dots,f(x_n)$ at a set of node points $a=x_0<x_1<\dots<x_n=b$ in the interval $[a,b]$. This process is called interpolation if $a\le x\le b$, or extrapolation if either $x<a$ or $x>b$. One way to carry out these operations is to approximate the function $f(x)$ by an $n$-th degree polynomial
$$f(x)\approx P_n(x)=a_0+a_1x+a_2x^2+\dots+a_{n-1}x^{n-1}+a_nx^n=\sum_{j=0}^{n}a_jx^j,$$
where the $n+1$ coefficients $a_0,\dots,a_n$ can be obtained from the given points. Once $P_n(x)$ is available, any operation applied to $f(x)$, such as differentiation, integration, or root finding, can be carried out approximately based on $P_n(x)\approx f(x)$.
This is particularly useful if $f(x)$ is non-elementary and therefore difficult to manipulate, or if it is only available as a set of discrete samples without a closed-form expression.

Specifically, to find the coefficients of $P_n(x)$, we require it to pass through all node points $\{(x_i,\,y_i=f(x_i)),\ i=0,\dots,n\}$, i.e. the following $n+1$ linear equations hold:
$$P_n(x_i)=\sum_{j=0}^{n}a_jx_i^{j}=f(x_i)=y_i,\qquad (i=0,\dots,n).$$
The coefficients $a_0,\dots,a_n$ can now be found by solving these $n+1$ linear equations, which can be expressed in matrix form as
$$\begin{bmatrix}1 & x_0 & x_0^2 & \cdots & x_0^n\\ 1 & x_1 & x_1^2 & \cdots & x_1^n\\ 1 & x_2 & x_2^2 & \cdots & x_2^n\\ \vdots & \vdots & \vdots & & \vdots\\ 1 & x_n & x_n^2 & \cdots & x_n^n\end{bmatrix}\begin{bmatrix}a_0\\ a_1\\ a_2\\ \vdots\\ a_n\end{bmatrix}=\begin{bmatrix}y_0\\ y_1\\ y_2\\ \vdots\\ y_n\end{bmatrix}=\mathbf{b},$$
where $\mathbf{x}=[a_0,\dots,a_n]^T$, $\mathbf{b}=[y_0,\dots,y_n]^T$, and
$$A=\begin{bmatrix}1 & x_0 & x_0^2 & \cdots & x_0^n\\ 1 & x_1 & x_1^2 & \cdots & x_1^n\\ 1 & x_2 & x_2^2 & \cdots & x_2^n\\ \vdots & \vdots & \vdots & & \vdots\\ 1 & x_n & x_n^2 & \cdots & x_n^n\end{bmatrix}$$
is known as the Vandermonde matrix. Solving this linear system, we get the coefficients $[a_0,\dots,a_n]^T=\mathbf{x}=A^{-1}\mathbf{b}$. Here the $n+1$ monomials $x^0,x^1,x^2,\dots,x^n$ can be considered as a set of polynomial basis functions that span the space of all polynomials of degree at most $n$ (which can also be spanned by other bases). If the node points $x_0,\dots,x_n$ are distinct, then $A$ has full rank and its inverse $A^{-1}$ exists, so the solution $\mathbf{x}=A^{-1}\mathbf{b}$ is unique, and so is $P_n(x)$.

In practice, however, this method is not widely used, for two reasons: (1) the high computational complexity, $O(n^3)$, of computing $A^{-1}$ (or solving the linear system), and (2) the matrix $A$ becomes increasingly ill-conditioned as $n$ grows. Instead, the methods discussed in the following sections are used in practice.
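As a quick numerical illustration of point (2), here is a minimal sketch (assuming only numpy; it is not part of the original exercises) that prints the condition number of the Vandermonde matrix built on p equally spaced nodes in $[-1,1]$; the growth with p is dramatic.

import numpy as np

# Condition number of the Vandermonde matrix on p equally spaced nodes in [-1, 1].
# A large condition number means the canonical-basis system is ill-conditioned.
for p in (5, 10, 20, 55):
    X = np.linspace(-1.0, 1.0, p)
    A = np.vander(X, increasing=True)      # rows [1, x_i, x_i^2, ..., x_i^(p-1)]
    print(f"p = {p:2d}   cond(A) = {np.linalg.cond(A):.3e}")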

Exercise [PY-1]

We want to interpolate the following data by using a polynomial function of the appropriate degree:
$$\begin{array}{c|ccccc} x_j & -1.0 & -1/2 & 0 & 1/2 & 1.0\\ \hline y_j & 1.0 & 1/2 & 0 & 1/2 & 1.0 \end{array}$$

  • Let $P_n(x)=\sum_{i=0}^{n}a_ix^i$ be a polynomial function. Write the linear system that allows this function to interpolate the previous data. Choose the appropriate value for $n$.

  • Solve this system by using the linalg library of numpy.

  • Define the Pol(x,coeff) function, which returns for a given x the value of the polynomial whose coefficients are given in the list coeff. You can use Horner's algorithm.

  • By using X=np.linspace(a,b,p), which generates p uniformly spaced values between a=-1 and b=1, draw the previous polynomial function. You can use the list t=np.linspace(a,b,100) and Pol(t,coeff) to draw this polynomial function.

  • We now assume that the given values $(x_j,y_j)$ are sampled from the function $f(x)=|x|$. By modifying the value of $p$ ($p=5$, $10$, $20$ or $p=55$), what kind of mathematical or numerical phenomena do we observe?

        
import numpy as np
import time
from numpy import linalg as LA
import matplotlib.pyplot as plt


def f(x):
    return np.abs(x)

def Pol(x,coeff):
    s = 0.0
    # TO DO
    return s

# Create the interpolation (Vandermonde) matrix
def Mat(X):
    n=len(X)-1
    A=np.zeros((n+1,n+1))
    # TO DO
    return A

# number of interpolation points
p=5
# create the list of data to interpolate
a=-1.0
b= 1.0
X=np.linspace(a,b,p)
Y=f(X)
# plot the data points
plt.ylim(-1,2)
p1=plt.plot(X,Y,'bo')

# degree of the polynomial
n=p-1

# create the list of abscissas for plotting the function
t=np.linspace(a,b,100)
p2=plt.plot(t,f(t),'k')

# TO DO: solve the linear system to compute the coefficients of the interpolation polynomial

# plot the interpolation polynomial
p3=plt.plot(t,Pol(t,coeff),'r')
        
    

                        
import numpy as np
from numpy import linalg as LA
import matplotlib.pyplot as plt

# Polynomial interpolation
print("Polynomial interpolation")
def f(x):
    return np.abs(x)
    #return (x**2-4*x+4)*(x**2+4*x+2) 

# number of interpolation points
p=5
# degree of the polynomial
n=p-1
# create the list of data to interpolate
a=-1.0
b= 1.0

# Horner Algorithm
def Pol(x,coeff):
    p=len(coeff)
    s=coeff[p-1]
    for i in range(p-1):
        s=x*s+coeff[p-2-i]
    return s


X=np.linspace(a,b,p)
Y=f(X)

t=np.linspace(a,b,1000)
plt.ylim(-1,2)
p1=plt.plot(X,Y,'bo')


# Create the interpolation (Vandermonde) matrix
A=np.eye(n+1)
for i in range(n+1):
    for j in range(n+1):
        A[i,j]=X[i]**j    

# Solve A x = Y
coeff=LA.solve(A,Y)
print("coefficients = ",coeff)

p2=plt.plot(t,f(t),'k')
p3=plt.plot(t,Pol(t,coeff),'r')
plt.show()
                        
                    
  • Instead of using uniform abscissas, choose the Chebyshev abscissas between $-1$ and $1$ for the canonical polynomial basis interpolation. What are the differences for $p=5$, $10$, $20$ or $55$?

    
    """ The chebychev abscissa """
    X = [np.cos(pi*(2*k-1)/(2*n)) for k in range(1, n+1)]
    

Lagrange Interpolation

In order to get the Lagrange interpolating polynomial $P_n(x)=\sum_{i=0}^{n}L_i^n(x)\,y_i$, the $i$-th basis function must equal one at the node $x_i$ and zero at all the other nodes. This particular value of the parameter $x$ is associated with each interpolated value $y_i$ and is called a knot. The Lagrange polynomials
$$L_i^n(x)=\frac{(x-x_0)\cdots(x-x_{i-1})(x-x_{i+1})\cdots(x-x_n)}{(x_i-x_0)\cdots(x_i-x_{i-1})(x_i-x_{i+1})\cdots(x_i-x_n)}$$
satisfy $L_i^n(x_i)=1$ and $L_i^n(x_k)=0$ for $k\neq i$, which gives the interpolation property at all node points.

Exercise [PY-2]

In this exercise, we want to calculate the interpolation polynomial by using the Lagrange basis.
  • Create the Lagrange(t,X,Y) function which calculates, for any value $t\in[-1,1]$, the value of the Lagrange interpolation of the data. You can create a function LP(i,t,X) which calculates the value of the $i$-th Lagrange polynomial at t for the given abscissas in the list X.

  • Apply this method for p=5,10,20 or p=55.


        
        import numpy as np
        
        """ i-Lagrange polynomial """
        def LP(i,t,X):
            #TO DO 
            return
        
        """ Lagrange interpolation function """
        def Lagrange(t,X,Y):
            #TO DO 
            return value
        
                        
# interpolation by Lagrange polynomials
def LP(i,x,X):
    res=1.0
    for j in range(len(X)):
        if j!=i:
            res=res*(x-X[j])/(X[i]-X[j])
    return res
    
def Lagrange(x,X,Y):
    s=0.0
    for i in range(len(X)):
        s=s+Y[i]*LP(i,x,X)
    return s

p5=plt.plot(t,Lagrange(t,X,Y),'y')
                        
                    

Divided differences of a function

In the following, we will need to compute the interpolation polynomial in the Newton basis. Moreover, we will derive a bound on the error between the interpolation polynomial and a given function. To this end, we define the concept of "divided differences" of a function.

We call divided differences of $f$ of order $0$, $1$, $2$ and $n$ the following terms:
$$f[x_i]=f(x_i),$$
$$f[x_i,x_j]=\frac{f(x_i)-f(x_j)}{x_i-x_j}\quad (j\neq i),$$
$$f[x_i,x_j,x_k]=\frac{f[x_i,x_j]-f[x_j,x_k]}{x_i-x_k}\quad (j\neq i,\ j\neq k,\ k\neq i),$$
$$f[x_0,x_1,\dots,x_n]=\frac{f[x_0,x_1,\dots,x_{n-1}]-f[x_1,\dots,x_n]}{x_0-x_n}.$$

The divided differences do not depend on the order of the abscissas:
$$f[x_0,\dots,x_m]=\sum_{i=0}^{m}\frac{f(x_i)}{\prod_{j=0,\,j\neq i}^{m}(x_i-x_j)}.$$

If $P_k$ is the interpolation polynomial of $f$ at the data points $x_0,x_1,\dots,x_k$, then $f[x_0,x_1,\dots,x_k]$ is the leading coefficient of $P_k$ (i.e. the coefficient of the monomial $x^k$).

For any $x$ which is different from all the $x_i$ we have the following identities.

For any $x$ and $x_0$, at order 0:
$$f(x)=f(x_0)+(x-x_0)\,f[x,x_0].$$
For $x$, $x_0$ and $x_1$, at order 2:
$$f[x,x_0,x_1]=\frac{f[x,x_0]-f[x_0,x_1]}{x-x_1}.$$
It yields
$$f(x)=f(x_0)+(x-x_0)\,f[x_0,x_1]+(x-x_0)(x-x_1)\,f[x,x_0,x_1].$$
By iterating this procedure we obtain:
$$f(x)=f(x_0)+(x-x_0)\,f[x_0,x_1]+(x-x_0)(x-x_1)\,f[x_0,x_1,x_2]+\cdots+(x-x_0)\cdots(x-x_{n-1})\,f[x_0,x_1,\dots,x_n]+(x-x_0)\cdots(x-x_n)\,f[x_0,x_1,\dots,x_n,x].$$

Let $f$ be an $n$-times differentiable function and let $(x_i,\,y_i=f(x_i))$, $i=0,1,\dots,n$, with $x_0<x_1<\dots<x_n$. Then there exists $\xi\in[x_0,x_n]$ such that:
$$f[x_0,x_1,\dots,x_n]=\frac{f^{(n)}(\xi)}{n!}.$$

Newton Interpolation

Let
$$\pi_0(x)=1,\quad \pi_1(x)=x-x_0,\quad \pi_2(x)=(x-x_0)(x-x_1),\quad \dots,\quad \pi_n(x)=(x-x_0)(x-x_1)\cdots(x-x_{n-1})$$
be a basis of $\mathcal{P}_n$. Then the Newton interpolation polynomial can be written in this basis:
$$P_n(x)=\sum_{i=0}^{n}\alpha_i\,\pi_i(x).$$
We have to solve the following lower triangular linear system:
$$\begin{aligned} f(x_0)&=\alpha_0\\ f(x_1)&=\alpha_0+\alpha_1(x_1-x_0)\\ &\;\;\vdots\\ f(x_n)&=\alpha_0+\cdots+\alpha_n(x_n-x_0)\cdots(x_n-x_{n-1}) \end{aligned}$$
The solution is given in terms of the divided differences of the function $f$:
$$\alpha_0=f[x_0],\quad \alpha_1=f[x_0,x_1],\quad \dots,\quad \alpha_n=f[x_0,x_1,\dots,x_n].$$
We deduce the Newton formula (1669) for the interpolation polynomial of $f$:
$$P_n(x)=f[x_0]+(x-x_0)f[x_0,x_1]+(x-x_0)(x-x_1)f[x_0,x_1,x_2]+\cdots+(x-x_0)\cdots(x-x_{n-1})f[x_0,x_1,\dots,x_n].$$

The coefficient $\alpha_k=f[x_0,x_1,\dots,x_k]$ depends only on the data points $x_i$ and $f(x_i)$ for $i\le k$. Consequently, we can easily add or delete data points.
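To make the construction concrete, here is a minimal sketch in the spirit of the numpy code used in the exercises (the helper names newton_coefficients and newton_eval are illustrative, not taken from the original text): it builds the coefficients $\alpha_k=f[x_0,\dots,x_k]$ with an in-place divided-difference table and evaluates $P_n$ in the Newton basis with a Horner-like nested scheme.

import numpy as np

def newton_coefficients(X, Y):
    """Divided differences f[x0], f[x0,x1], ..., f[x0,...,xn], computed in place."""
    c = np.array(Y, dtype=float)
    n = len(X)
    for k in range(1, n):
        # after this pass, c[i] holds f[x_{i-k}, ..., x_i]
        for i in range(n - 1, k - 1, -1):
            c[i] = (c[i] - c[i - 1]) / (X[i] - X[i - k])
    return c

def newton_eval(t, X, coeff):
    """Evaluate P_n(t) = sum_k coeff[k] * prod_{j<k}(t - x_j) by nested multiplication."""
    s = coeff[-1]
    for k in range(len(coeff) - 2, -1, -1):
        s = s * (t - X[k]) + coeff[k]
    return s

X = [0.0, 1.0, 2.0, 3.0]
Y = [0.0, 1.0, 4.0, 9.0]                     # samples of f(x) = x^2
coeff = newton_coefficients(X, Y)
print(newton_eval(2.5, X, coeff))            # prints 6.25, exact for f(x) = x^2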

Interpolation Polynomial Errors

As with any approximate method, the utility of polynomial interpolation cannot be stretched too far. Here, we shall quantify the errors that can occur in polynomial interpolation and develop techniques to minimize such errors. Say you have a function $f(x)$ and you want to find a polynomial $P_n(x)$ that agrees with $f(x)$ at several points. Given a set of points $x_1,x_2,\dots,x_p$, you can always find a polynomial of degree $n=p-1$ such that $P_n(x_i)=f(x_i)$ for $i=1,2,\dots,p$. It seems reasonable that the more points you pick, the better the interpolating polynomial $P_n(x)$ will match the function $f(x)$: if the two functions match exactly at a lot of points, they should match well everywhere. Sometimes that's true and sometimes it's not.

The error of the polynomial interpolation is defined as $R_n(x)=f(x)-P_n(x)$, which is non-zero in general, except when $x=x_i$, at which $R_n(x_i)=0$ $(i=0,\dots,n)$. In other words, the error function $R_n(x)$ has $n+1$ zeros at $x_0,\dots,x_n$, and can therefore be written as
$$R_n(x)=u(x)\prod_{i=0}^{n}(x-x_i)=u(x)\,l(x),$$
where $u(x)$ is an unknown function and $l(x)$ is a polynomial of degree $n+1$ defined as
$$l(x)=\prod_{i=0}^{n}(x-x_i).$$
To find $u(x)$, we construct another function $F(t)$ of the variable $t$, with any $x\in(a,b)$ as a parameter:
$$F(t)=R_n(t)-u(x)\prod_{i=0}^{n}(t-x_i)=f(t)-P_n(t)-u(x)\prod_{i=0}^{n}(t-x_i),$$
which is zero when $t=x$:
$$F(x)=R_n(x)-u(x)\prod_{i=0}^{n}(x-x_i)=R_n(x)-R_n(x)=0.$$
We therefore see that $F(t)$ has $n+2$ zeros, at $x_0,\dots,x_n$ and $x$. According to Rolle's theorem, which states that the derivative $f'(x)$ of any differentiable function satisfying $f(a)=f(b)=0$ must have at least one zero at some point $c\in(a,b)$ at which $f'(c)=0$, $F'(t)$ has at least $n+1$ zeros, each between two consecutive zeros of $F(t)$; $F''(t)$ has at least $n$ zeros; and $F^{(n+1)}(t)$ has at least one zero at some $\xi\in(a,b)$:
$$F^{(n+1)}(\xi)=0=\frac{d^{n+1}}{dt^{n+1}}\left[f(t)-P_n(t)-u(x)\prod_{i=0}^{n}(t-x_i)\right]_{t=\xi}=\left[f^{(n+1)}(t)-P_n^{(n+1)}(t)-u(x)\,\frac{d^{n+1}}{dt^{n+1}}\prod_{i=0}^{n}(t-x_i)\right]_{t=\xi}=f^{(n+1)}(\xi)-u(x)\,(n+1)!$$
The last equality follows from the fact that $P_n(t)$ and $\prod_{i=0}^{n}(t-x_i)$ are respectively an $n$-th and an $(n+1)$-th degree polynomial of $t$. Solving the above, we get
$$u(x)=\frac{f^{(n+1)}(\xi)}{(n+1)!}.$$
Now the error function can be written as
$$R_n(x)=u(x)\prod_{i=0}^{n}(x-x_i)=\frac{f^{(n+1)}(\xi_x)}{(n+1)!}\,l(x),$$
where $\xi_x$ is a point located somewhere between $a=x_0$ and $b=x_n$, depending on $x$. The "global" error can be quantitatively measured by the 2-norm of $R_n(x)$:
$$\epsilon=\|R_n(x)\|_2=\left(\int_a^b R_n^2(x)\,dx\right)^{1/2}.$$
We then have the following result:

Theorem (Interpolation Error). Let $f$ be a function in $C^{n+1}[a,b]$, and let $P_n$ be a polynomial of degree at most $n$ that interpolates the function $f$ at $n+1$ distinct points $x_0,x_1,\dots,x_n\in[a,b]$. Then to each $x\in[a,b]$ there exists a point $\xi_x\in[a,b]$ such that
$$f(x)-P_n(x)=\frac{1}{(n+1)!}\,f^{(n+1)}(\xi_x)\prod_{i=0}^{n}(x-x_i).$$

Due to the uniqueness of the polynomial interpolation, the error analysis above also applies to all other methods such as the Lagrange and Newton interpolations.
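As a sanity check on this theorem, the following sketch (assuming numpy; np.polyfit/np.polyval are used here only as a convenient interpolation routine, and the choice of $f(x)=\sin(x)$ on $[0,5]$ with 7 uniform nodes is just an example) compares the actual interpolation error with the pointwise bound $|l(x)|/(n+1)!$, which is valid since $|\sin^{(n+1)}|\le 1$.

import numpy as np
from math import factorial

f = np.sin
a, b, n = 0.0, 5.0, 6                   # degree-6 interpolation on 7 uniform nodes
X = np.linspace(a, b, n + 1)
coeff = np.polyfit(X, f(X), n)          # with n+1 nodes and degree n this interpolates exactly

t = np.linspace(a, b, 2001)
err = np.abs(f(t) - np.polyval(coeff, t))
l = np.prod([t - xi for xi in X], axis=0)          # l(x) = prod_i (x - x_i)
bound = np.abs(l) / factorial(n + 1)               # |f^(n+1)| <= 1 for sin
print("max actual error :", err.max())
print("max error bound  :", bound.max())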

Exercise [M-1]

  • If $f(x)$ is a polynomial of degree $k\le n$, show that $f$ is exactly interpolated by $P_n(x)$.

  • If $f(x)$ is a polynomial of degree $k>n$, show that the interpolation has a non-zero error term. Moreover, for $f(x)=x^{n+1}$ show that: $R_n(x)=l(x)$.

  • Approximate the function $f(x)=x\sin(2x+\pi/4)+1$ by an interpolating polynomial $P_3(x)$ of degree $n=3$, based on the following node points: $x_0=-1$, $x_1=0$, $x_2=1$ and $x_3=2$.
    Draw this function and the interpolating polynomial $P_3(x)$.

  • Calculate the approximation $\epsilon$ of the 2-norm of $R_3(x)$.

  • $f^{(n+1)}(x)=0$, hence $R_n(x)=0$.
  • $f^{(n+1)}(x)=(n+1)!$ and $R_n(x)=\frac{f^{(n+1)}(\xi)}{(n+1)!}\,l(x)=l(x)$.
  • We first form the Vandermonde matrix:
$$A=\begin{bmatrix}1&x_0&x_0^2&x_0^3\\ 1&x_1&x_1^2&x_1^3\\ 1&x_2&x_2^2&x_2^3\\ 1&x_3&x_3^2&x_3^3\end{bmatrix}=\begin{bmatrix}1&-1&1&-1\\ 1&0&0&0\\ 1&1&1&1\\ 1&2&4&8\end{bmatrix}$$
and get the coefficients:
$$\mathbf{x}=A^{-1}\mathbf{b}=A^{-1}\begin{bmatrix}f(x_0)\\ f(x_1)\\ f(x_2)\\ f(x_3)\end{bmatrix}=A^{-1}\begin{bmatrix}1.937\\ 1.000\\ 1.349\\ -0.995\end{bmatrix}=\begin{bmatrix}1.000\\ 0.369\\ 0.643\\ -0.663\end{bmatrix},$$
and then the interpolating polynomial can be found as a weighted sum of the first $n+1=4$ power functions used as basis functions to span the polynomial space:
$$P_3(x)=\sum_{j=0}^{n}a_jx^j=1.0+0.369x+0.643x^2-0.663x^3.$$
The error $\epsilon$ can be approximated using a set of $k\gg n$ discrete samples $u_1,\dots,u_k$ of the function $f(x)$ and the interpolating polynomial $P_3(x)$:
$$\epsilon=\|R_3(x)\|_2=\left(\int_a^b R_3^2(x)\,dx\right)^{1/2}\approx\left(\frac{1}{k}\sum_{i=1}^{k}\left[f(u_i)-P_3(u_i)\right]^2\right)^{1/2}=0.3063.$$
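The numbers above can be checked with a short script (a sketch only, assuming numpy; the estimate of $\epsilon$ follows the sampling formula quoted in the solution).

import numpy as np

def f(x):
    return x * np.sin(2.0 * x + np.pi / 4.0) + 1.0

X = np.array([-1.0, 0.0, 1.0, 2.0])
A = np.vander(X, increasing=True)            # columns 1, x, x^2, x^3
coeff = np.linalg.solve(A, f(X))
print("coefficients:", coeff)                # approximately [1.0, 0.369, 0.643, -0.663]

u = np.linspace(-1.0, 2.0, 10000)            # k >> n sample points in [a, b] = [-1, 2]
P3 = np.polyval(coeff[::-1], u)              # np.polyval expects highest-degree coefficient first
eps = np.sqrt(np.mean((f(u) - P3) ** 2))
print("epsilon estimate:", eps)
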
Exercise [M-2]

  • If $n=6$, calculate a bound on $|P_6(x)-\sin(x)|$ which depends on $x$, for $x_0=0$, $x_1=1.5$, $x_2=3$, $x_3=4.5$, $x_4=6$, $x_5=7.5$ and $x_6=9$.
    Draw this error bound on $[0,9]$.

  • If $n=6$, calculate a bound on $\left|P_6(x)-\frac{1}{1+x^2}\right|$ which depends on $x$.

  • As $|f^{(7)}(x)|\le 1$ for $f(x)=\sin(x)$, we have the following bound on the error:
$$|P_6(x)-\sin(x)|\le \frac{|x(x-1.5)(x-3)(x-4.5)(x-6)(x-7.5)(x-9)|}{7!}.$$
(A plotting sketch of this bound is given below.)
  • As
$$f^{(7)}(x)=-8!\,\frac{(x+1)(x-1)\,x\,(x^2-2x-1)(x^2+2x-1)}{(1+x^2)^8},$$
whose absolute value is maximal for $x=0.17632698$ or $x=-0.17632698$, we then have the following error bound (for the nodes $0,\pm1.5,\pm3,\pm4.5$):
$$\left|P_6(x)-\frac{1}{1+x^2}\right|\le \frac{\left|(x^2-20.25)(x^2-9)(x^2-2.25)\,x\right|\cdot 4392}{7!}.$$
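A short plotting sketch (assuming numpy and matplotlib) for drawing the first bound on $[0,9]$:

import numpy as np
import matplotlib.pyplot as plt
from math import factorial

X = np.array([0.0, 1.5, 3.0, 4.5, 6.0, 7.5, 9.0])      # the 7 nodes of Exercise [M-2]
x = np.linspace(0.0, 9.0, 1000)
bound = np.abs(np.prod([x - xi for xi in X], axis=0)) / factorial(7)

plt.plot(x, bound)
plt.title("Error bound |l(x)|/7! for sin(x) interpolated at 7 uniform nodes")
plt.show()
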
Exercise [PY-3]

In this exercise, we want to calculate the interpolation polynomial by using the Lagrange basis.
  • Here is a famous example due to Carl Runge. Let $f(x)=\frac{1}{1+x^2}$ and let $P_n$ be the polynomial that interpolates $f(x)$ at $n+1$ evenly spaced nodes in the interval $[-5,5]$. Show that the fit becomes worse as $n$ (or $p$) becomes larger (apply an interpolation method for $p=5$, $10$, $20$ or $p=55$).

  • In the previous examples, we note that the approximation is very poor towards the two ends, where the error $R_n(x)=f(x)-P_n(x)$ is disappointingly high. This is known as Runge's phenomenon, indicating that higher-degree polynomial interpolation does not necessarily produce a more accurate result: the degree of the interpolating polynomial may become unnecessarily high and the polynomial may become oscillatory. Runge's phenomenon is a typical example of overfitting, due to an excessively complex model with too many parameters relative to the observed data; here, specifically, a polynomial of too high a degree (requiring too many coefficients) to model the given data points.

  • Apply the Chebyshev abscissas of Exercise [PY-1] between $-5$ and $5$ and compare this interpolation polynomial with the previous one.

  • What if the function we're interpolating isn't smooth? If we consider the step function $f(x)=1$ if $x\in[-1,1]$ and $f(x)=0$ elsewhere, show that the polynomial interpolation exhibits a Gibbs phenomenon.

  • The smooth blue line is $f(x)$ and the wiggly orange line is $P_9(x)$. Runge showed that polynomial interpolation at evenly spaced points can fail spectacularly to converge. In the following example we use the function $f(x)=\frac{1}{1+25x^2}$ on the interval $[-1,1]$. Here we have 16 interpolation nodes. Runge found that in order for interpolation at evenly spaced nodes in $[-1,1]$ to converge, the function being interpolated needs to be analytic inside a "football-shaped" region of the complex plane with major axis $[-1,1]$ on the real axis and minor axis approximately $[-0.5255,\,0.5255]$ on the imaginary axis.

    The function in Runge's example has singularities at $\pm 0.2\,i$, which lie inside the "football region". Interpolation at evenly spaced points would converge for the function $f(x)=\frac{1}{1+x^2}$ on $[-1,1]$, since its singularities $\pm i$ lie outside the "football region".

  • For smooth functions, interpolating at more points does improve the fit if you interpolate at the roots of Chebyshev polynomials. As you interpolate at the roots of higher and higher degree Chebyshev polynomials, the interpolants converge to the function being interpolated. The plot below shows how interpolating at the roots of $T_{16}$, the 16th Chebyshev polynomial, eliminates the bad behavior at the ends.

    To make this plot, we replaced the abscissas above with the roots of $T_{16}$, rescaled from the interval $[-1,1]$ to the interval $[-5,5]$ to match the example above.

                    
                    """ The chebychev abscissa """
                    n = 16
                    x = [np.cos(pi*(2*k-1)/(2*n)) for k in range(1, n+1)]
                    x = 5*np.array(x)
                    
  • Here's the result of interpolating the indicator function of the interval $[-1,1]$ at 100 Chebyshev points. We get the same "bat ears" as before.
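The whole discussion can be reproduced with a compact sketch (assuming numpy and matplotlib, and re-implementing the Lagrange-form evaluation of Exercise [PY-2] so the block is self-contained) comparing evenly spaced and Chebyshev nodes for Runge's function on $[-5,5]$.

import numpy as np
import matplotlib.pyplot as plt

def runge(x):
    return 1.0 / (1.0 + x * x)

def lagrange(x, X, Y):
    """Lagrange-form evaluation of the interpolation polynomial (as in Exercise [PY-2])."""
    s = np.zeros_like(x)
    for i in range(len(X)):
        Li = np.ones_like(x)
        for j in range(len(X)):
            if j != i:
                Li = Li * (x - X[j]) / (X[i] - X[j])
        s = s + Y[i] * Li
    return s

a, b, p = -5.0, 5.0, 20                      # try p = 5, 10, 20, 55
t = np.linspace(a, b, 1000)

Xu = np.linspace(a, b, p)                    # evenly spaced nodes: Runge's phenomenon
k = np.arange(1, p + 1)                      # Chebyshev nodes rescaled to [a, b]
Xc = 0.5 * (a + b) + 0.5 * (b - a) * np.cos((2 * k - 1) * np.pi / (2 * p))

plt.plot(t, runge(t), 'k', label='f(x) = 1/(1+x^2)')
plt.plot(t, lagrange(t, Xu, runge(Xu)), 'r', label='uniform nodes')
plt.plot(t, lagrange(t, Xc, runge(Xc)), 'b', label='Chebyshev nodes')
plt.ylim(-1, 2)
plt.legend()
plt.show()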

Sensitivity of the interpolation polynomial solution to roundoff errors

We assume that the data $y_i$ are corrupted by noise such that $\hat{y}_i=y_i(1+\epsilon_i)$, where $|\epsilon_i|\le \mathrm{eps}$, with $\mathrm{eps}$ being the smallest number in the computer such that the rounded value of $1+\mathrm{eps}$ is greater than 1.
We then want to study the difference between the polynomial solutions of $(x_i,y_i)$ and $(x_i,\hat{y}_i)$. We have the following result.

Let $P_n$ (resp. $\hat{P}_n$) be the polynomial interpolating the values $y_i$ (resp. $\hat{y}_i=y_i(1+\epsilon_i)$), where $|\epsilon_i|\le\mathrm{eps}$ for $i=0,\dots,n$. We then have:
$$|\hat{P}_n(x)-P_n(x)|\le \mathrm{eps}\cdot\Lambda_n(x)\cdot\max_{k=0,\dots,n}|y_k|,$$
where $\Lambda_n(x):=\sum_{i=0}^{n}|L_i^n(x)|$ is the Lebesgue function associated to the values $x_0,\dots,x_n$ on the interval $[a,b]$ (its maximum over $[a,b]$ is the Lebesgue constant), and the $L_i^n(x)$ are the Lagrange polynomials.
Uniform data points case:
for $x_k=a+\frac{k}{n}(b-a)$ on $[a,b]$ we have $\Lambda_n\approx\frac{2^{n+1}}{e\,n\,\log(n)}$.
Chebyshev data points case:
for $x_k=\frac{a+b}{2}+\frac{b-a}{2}\cos\!\left(\frac{(2k+1)\pi}{2n+2}\right)$ we have $\Lambda_n\approx\frac{2}{\pi}\log(n)$.
Consequently, the Lebesgue constant gives us a way to quantify how the data errors are amplified.
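The Lebesgue function can be estimated numerically with a small sketch (assuming numpy, and re-implementing the LP function of Exercise [PY-2] so the block is self-contained; lebesgue_constant is an illustrative helper name): it evaluates $\Lambda_n(x)=\sum_i|L_i^n(x)|$ on a fine grid and returns its maximum, for uniform and Chebyshev nodes on $[0,5]$.

import numpy as np

def LP(i, x, X):
    """i-th Lagrange basis polynomial evaluated at x (as in Exercise [PY-2])."""
    res = 1.0
    for j in range(len(X)):
        if j != i:
            res = res * (x - X[j]) / (X[i] - X[j])
    return res

def lebesgue_constant(X, a, b, m=5000):
    """Approximate max over [a,b] of the Lebesgue function sum_i |L_i(x)| on a grid of m points."""
    x = np.linspace(a, b, m)
    lam = sum(np.abs(LP(i, x, X)) for i in range(len(X)))
    return lam.max()

a, b, n = 0.0, 5.0, 20
X_uni = np.linspace(a, b, n + 1)
k = np.arange(n + 1)
X_che = 0.5 * (a + b) + 0.5 * (b - a) * np.cos((2 * k + 1) * np.pi / (2 * n + 2))
print("uniform   :", lebesgue_constant(X_uni, a, b))
print("Chebyshev :", lebesgue_constant(X_che, a, b))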

Exercise [M-3]

Let $f(x)=\sin(x)$ be defined on $[0,5]$.
  • For any subdivision of the interval, calculate a bound on $R_n$, the interpolation error of this function by $P_n(x)$ (use the Interpolation Error Theorem). Apply it for $n=50$.

  • If $n=50$, calculate the Lebesgue constant for a uniform subdivision of $[0,5]$. Draw the interpolation polynomial solution to these data.

  • If $n=50$, calculate the Lebesgue constant for a Chebyshev subdivision of $[0,5]$. Draw the interpolation polynomial solution.

Use the estimates of the previous Theorem to calculate the Lebesgue constants.

Exercise [M-4]

Let $x_0=-1$, $x_1=0$ and $x_2=1$ be a subdivision of $[-1,1]$.
  • Draw the associated Lebesgue function $\Lambda_2(x)$.

  • Show that for $x\in[x_{j-1},x_j]$ we have: $\Lambda_n(x)=\sum_{i=0}^{n}\delta_i\,L_i^n(x)$, where $\delta_i=(-1)^{j-i+1}$ if $i\le j-1$ and $\delta_i=(-1)^{i-j}$ if $i\ge j$.

  • Show that $\Lambda_n(x)$ has a unique local maximum on each interval $[x_{j-1},x_j]$. For this, you can study the extrema of the $L_i^n(x)$.

Use the previous Theorem to calculate the Lebesgue constants.

Cubic Spline Interpolation

All previously discussed methods of polynomial interpolation fit a set of $n+1$ given points $y_i=f(x_i)$, $(i=0,\dots,n)$, by an $n$-th degree polynomial, and a higher-degree polynomial is needed to fit a larger set of data points. A major drawback of such methods is overfitting, as demonstrated in the previous exercise.

Now we consider a different method, spline interpolation, which fits the given points by a piecewise polynomial function $S(x)$, known as the spline: a composite function formed by $n$ low-degree polynomials $P_i(x)$, each fitting $f(x)$ in the interval between $x_{i-1}$ and $x_i$, $(i=1,\dots,n)$:
$$S(x)=\begin{cases}P_1(x) & x_0\le x\le x_1\\ \;\;\vdots & \\ P_i(x) & x_{i-1}\le x\le x_i\\ \;\;\vdots & \\ P_n(x) & x_{n-1}\le x\le x_n\end{cases}$$
As this method does not use a single polynomial of degree $n$ to fit all $n+1$ points at once, it avoids high-degree polynomials and thereby the potential problem of overfitting. These low-degree polynomials need to be such that the spline $S(x)$ they form is not only continuous but also smooth.

  • For $S(x)$ to be continuous, two consecutive polynomials $P_i(x)$ and $P_{i+1}(x)$ must join at $x_i$:
$$P_i(x_i)=P_{i+1}(x_i)=f(x_i)=y_i.$$
Or, equivalently, $P_i(x)$ must pass through the two end-points, i.e.,
$$P_i(x_{i-1})=f(x_{i-1})=y_{i-1},\qquad P_i(x_i)=f(x_i)=y_i.$$

  • For $S(x)$ to be smooth, consecutive polynomials need to have the same derivatives at the point where they join, i.e., $P_i^{(k)}(x_i)=P_{i+1}^{(k)}(x_i)$ must hold up to some order $k$. The higher the order $k$, the smoother the spline $S(x)$ becomes.

In the following we consider approximating $f(x)$ between any two consecutive points $x_{i-1}$ and $x_i$ by a linear, quadratic, and cubic polynomial $P_i(x)$ (of first, second, and third degree).
  • Linear spline: $P_i(x)=a_ix+b_i$, with two parameters $a_i$ and $b_i$, can only satisfy the two equations required for $S(x)$ to be continuous:
$$P_i(x_i)=a_ix_i+b_i=f(x_i)=y_i,\qquad P_i(x_{i-1})=a_ix_{i-1}+b_i=f(x_{i-1})=y_{i-1},$$
or, equivalently, $P_i(x)$ has to pass through the two end points $(x_{i-1},y_{i-1})$ and $(x_i,y_i)$:
$$\frac{P_i(x)-y_{i-1}}{x-x_{i-1}}=\frac{y_i-y_{i-1}}{x_i-x_{i-1}}.$$
Solving either of the two problems above, we get
$$P_i(x)=a_ix+b_i=\frac{y_i-y_{i-1}}{h_i}\,x+\frac{x_iy_{i-1}-x_{i-1}y_i}{h_i}$$
and
$$P_i(x)=\frac{x-x_{i-1}}{h_i}\,y_i+\frac{x_i-x}{h_i}\,y_{i-1},\qquad (h_i=x_i-x_{i-1}),$$
which is represented in the form $P_i(x)=a_ix+b_i$ by the first expression, or as a linear interpolation of the two end points $y_{i-1}=f(x_{i-1})$ and $y_i=f(x_i)$ by the second expression. As $P_i(x_i)=P_{i+1}(x_i)=y_i$, the linear spline $S(x)$ is continuous at $x_i$. But as in general
$$P_i'(x_i)=\frac{y_i-y_{i-1}}{h_i}\neq P_{i+1}'(x_i)=\frac{y_{i+1}-y_i}{h_{i+1}},$$
$S(x)$ is not smooth, i.e., $k=0$.

  • Quadratic spline: $Q_i(x)=a_ix^2+b_ix+c_i$, with three parameters $a_i$, $b_i$ and $c_i$, can satisfy the following three equations required for $S(x)$ to be smooth ($k=1$) as well as continuous:
$$Q_i(x_i)=y_i,\qquad Q_i(x_{i-1})=y_{i-1},\qquad Q_i'(x_i)=Q_{i+1}'(x_i).$$
To obtain the three parameters $a_i$, $b_i$ and $c_i$ in $Q_i(x)$, we consider $Q_i'(x)=2a_ix+b_i$, which, as a linear function, can be fit linearly by the two end points $f'(x_{i-1})=D_{i-1}$ and $f'(x_i)=D_i$:
$$Q_i'(x)=\frac{x-x_{i-1}}{h_i}\,D_i+\frac{x_i-x}{h_i}\,D_{i-1}.$$
Integrating, we get
$$Q_i(x)=\int Q_i'(x)\,dx=\frac{D_i}{2h_i}(x-x_{i-1})^2-\frac{D_{i-1}}{2h_i}(x_i-x)^2+c_i.$$
As $Q_i(x_i)=y_i$, we have
$$Q_i(x_i)=\frac{D_i}{2h_i}(x_i-x_{i-1})^2+c_i=\frac{D_ih_i}{2}+c_i=y_i.$$
Solving this for $c_i$,
$$c_i=y_i-\frac{D_ih_i}{2},$$
and substituting it back into the expression of $Q_i(x)$, we get
$$Q_i(x)=\frac{D_i}{2h_i}(x-x_{i-1})^2-\frac{D_{i-1}}{2h_i}(x_i-x)^2+y_i-\frac{D_ih_i}{2}.$$
Also, as $Q_i(x_{i-1})=y_{i-1}$, we have
$$Q_i(x_{i-1})=-\frac{D_{i-1}h_i}{2}+y_i-\frac{D_ih_i}{2}=y_{i-1},$$
or
$$D_i=2\,\frac{y_i-y_{i-1}}{h_i}-D_{i-1},\qquad (i=1,\dots,n).$$
Given $f'(x_0)=D_0$, we can get iteratively all subsequent $D_1,\dots,D_n$ and thereby all the $Q_i(x)$. Alternatively, given $f'(x_n)=D_n$, we can also get iteratively all previous $D_{n-1},\dots,D_0$. It is obvious that with only three free parameters, the quadratic polynomials cannot satisfy both boundary conditions $Q_1'(x_0)=f'(x_0)$ and $Q_n'(x_n)=f'(x_n)$.

  • Cubic spline: $C_i(x)=a_ix^3+b_ix^2+c_ix+d_i$, with four parameters $a_i$, $b_i$, $c_i$ and $d_i$, can satisfy the following four equations required for $S(x)$ to be continuous and smooth ($k=2$):
$$C_i(x_i)=y_i,\quad C_i(x_{i-1})=y_{i-1},\quad C_i'(x_i)=C_{i+1}'(x_i),\quad C_i''(x_i)=C_{i+1}''(x_i).$$
To obtain the four parameters, we first consider $C_i''(x)=(a_ix^3+b_ix^2+c_ix+d_i)''=6a_ix+2b_i$, which, as a linear function, can be fit linearly by the two end points $f''(x_{i-1})=M_{i-1}$ and $f''(x_i)=M_i$:
$$C_i''(x)=\frac{x_i-x}{h_i}\,M_{i-1}+\frac{x-x_{i-1}}{h_i}\,M_i.$$
Integrating $C_i''(x)$ twice, we get
$$C_i(x)=\int\!\!\left(\int C_i''(x)\,dx\right)dx=\frac{(x_i-x)^3}{6h_i}\,M_{i-1}+\frac{(x-x_{i-1})^3}{6h_i}\,M_i+c_ix+d_i.$$
As $C_i(x_{i-1})=y_{i-1}$ and $C_i(x_i)=y_i$, we have:
$$C_i(x_{i-1})=\frac{h_i^2}{6}M_{i-1}+c_ix_{i-1}+d_i=y_{i-1},\qquad C_i(x_i)=\frac{h_i^2}{6}M_i+c_ix_i+d_i=y_i.$$
Solving these two equations, we get the two coefficients $c_i$ and $d_i$:
$$c_i=\frac{y_i-y_{i-1}}{h_i}-\frac{h_i}{6}(M_i-M_{i-1}),\qquad d_i=\frac{x_iy_{i-1}-x_{i-1}y_i}{h_i}-\frac{h_i}{6}(x_iM_{i-1}-x_{i-1}M_i).$$
Substituting them back into $C_i(x)$ and rearranging the terms, we get
$$C_i(x)=\frac{(x_i-x)^3}{6h_i}\,M_{i-1}+\frac{(x-x_{i-1})^3}{6h_i}\,M_i+\left(\frac{y_i-y_{i-1}}{h_i}-\frac{h_i}{6}(M_i-M_{i-1})\right)x+\frac{x_iy_{i-1}-x_{i-1}y_i}{h_i}-\frac{h_i}{6}(x_iM_{i-1}-x_{i-1}M_i)$$
$$=\frac{(x_i-x)^3}{6h_i}\,M_{i-1}+\frac{(x-x_{i-1})^3}{6h_i}\,M_i+\left(\frac{y_{i-1}}{h_i}-\frac{M_{i-1}h_i}{6}\right)(x_i-x)+\left(\frac{y_i}{h_i}-\frac{M_ih_i}{6}\right)(x-x_{i-1}).$$
To find the $M_i$ $(i=1,\dots,n-1)$, we take the derivative of $C_i(x)$ and rearrange terms to get
$$C_i'(x)=-\frac{(x_i-x)^2}{2h_i}\,M_{i-1}+\frac{(x-x_{i-1})^2}{2h_i}\,M_i-\frac{1}{h_i}\left(y_{i-1}-\frac{M_{i-1}h_i^2}{6}\right)+\frac{1}{h_i}\left(y_i-\frac{M_ih_i^2}{6}\right)$$
$$=-\frac{(x_i-x)^2}{2h_i}\,M_{i-1}+\frac{(x-x_{i-1})^2}{2h_i}\,M_i+\frac{y_i-y_{i-1}}{h_i}-\frac{h_i}{6}(M_i-M_{i-1}),$$
which, when evaluated at $x=x_i$ and $x=x_{i-1}$, becomes:
$$C_i'(x_i)=\frac{h_i}{3}M_i+\frac{y_i-y_{i-1}}{h_i}+\frac{h_i}{6}M_{i-1}=\frac{h_i}{6}(2M_i+M_{i-1})+f[x_{i-1},x_i],$$
$$C_i'(x_{i-1})=-\frac{h_i}{3}M_{i-1}+\frac{y_i-y_{i-1}}{h_i}-\frac{h_i}{6}M_i=-\frac{h_i}{6}(2M_{i-1}+M_i)+f[x_{i-1},x_i].$$
Replacing $i$ by $i+1$ in the second equation, we also get
$$C_{i+1}'(x_i)=-\frac{h_{i+1}}{3}M_i+\frac{y_{i+1}-y_i}{h_{i+1}}-\frac{h_{i+1}}{6}M_{i+1}.$$
To satisfy $C_i'(x_i)=C_{i+1}'(x_i)$, we equate the above to the first equation to get:
$$\frac{h_i}{3}M_i+\frac{y_i-y_{i-1}}{h_i}+\frac{h_i}{6}M_{i-1}=-\frac{h_{i+1}}{3}M_i+\frac{y_{i+1}-y_i}{h_{i+1}}-\frac{h_{i+1}}{6}M_{i+1}.$$
Multiplying both sides by $6/(h_{i+1}+h_i)=6/(x_{i+1}-x_{i-1})$ and rearranging, we get:
$$\frac{h_i}{h_{i+1}+h_i}M_{i-1}+2M_i+\frac{h_{i+1}}{h_{i+1}+h_i}M_{i+1}=6\,f[x_{i-1},x_i,x_{i+1}].$$
Note that here
$$\frac{1}{x_{i+1}-x_{i-1}}\left(\frac{y_{i+1}-y_i}{x_{i+1}-x_i}-\frac{y_i-y_{i-1}}{x_i-x_{i-1}}\right)=\frac{f[x_i,x_{i+1}]-f[x_{i-1},x_i]}{x_{i+1}-x_{i-1}}=f[x_{i-1},x_i,x_{i+1}]$$
is simply the second divided difference. We can rewrite the equation above as
$$\mu_iM_{i-1}+2M_i+\lambda_iM_{i+1}=6\,f[x_{i-1},x_i,x_{i+1}],\qquad (i=1,\dots,n-1),$$
where
$$\mu_i=\frac{h_i}{h_{i+1}+h_i},\qquad \lambda_i=\frac{h_{i+1}}{h_{i+1}+h_i}=1-\mu_i.$$
Here we have $n-1$ equations but $n+1$ unknowns $M_0,\dots,M_n$. To obtain these unknowns, we need two additional equations based on certain assumed boundary conditions. Assume the first-order derivatives at both ends, $f'(x_0)$ and $f'(x_n)$, are known. The special case $f'(x_0)=f'(x_n)=0$ is called the clamped boundary condition. At the front end, we set
$$C_1'(x_0)=-\frac{h_1}{3}M_0+\frac{y_1-y_0}{h_1}-\frac{h_1}{6}M_1=-\frac{h_1}{3}M_0-\frac{h_1}{6}M_1+f[x_0,x_1]=f'(x_0).$$
Multiplying by $6/h_1=6/(x_1-x_0)$ and rearranging, we get
$$2M_0+M_1=\frac{6}{x_1-x_0}\left[f[x_0,x_1]-f'(x_0)\right]=6\,f[x_0,x_0,x_1].$$
Similarly, at the back end, we also set
$$C_n'(x_n)=\frac{h_n}{3}M_n+\frac{y_n-y_{n-1}}{h_n}+\frac{h_n}{6}M_{n-1}=\frac{h_n}{3}M_n+\frac{h_n}{6}M_{n-1}+f[x_{n-1},x_n]=f'(x_n).$$
Multiplying by $6/h_n=6/(x_n-x_{n-1})$, we get
$$2M_n+M_{n-1}=\frac{6}{x_n-x_{n-1}}\left[f'(x_n)-f[x_{n-1},x_n]\right]=6\,f[x_{n-1},x_n,x_n].$$
We can combine these equations into a linear system of $n+1$ equations and the same number of unknowns:
$$\begin{bmatrix}2 & 1 & & & \\ \mu_1 & 2 & \lambda_1 & & \\ & \ddots & \ddots & \ddots & \\ & & \mu_{n-1} & 2 & \lambda_{n-1}\\ & & & 1 & 2\end{bmatrix}\begin{bmatrix}M_0\\ M_1\\ \vdots\\ M_{n-1}\\ M_n\end{bmatrix}=6\begin{bmatrix}f[x_0,x_0,x_1]\\ f[x_0,x_1,x_2]\\ \vdots\\ f[x_{n-2},x_{n-1},x_n]\\ f[x_{n-1},x_n,x_n]\end{bmatrix}.$$
Alternatively, we can assume that $f''(x_0)$ and $f''(x_n)$ are known. The special case $f''(x_0)=f''(x_n)=0$ is called the natural boundary condition. Now we can simply set $M_0=f''(x_0)$ and $M_n=f''(x_n)$ and solve the following system for the $n+1$ unknowns $M_0,\dots,M_n$:
$$\begin{bmatrix}1 & 0 & & & \\ \mu_1 & 2 & \lambda_1 & & \\ & \ddots & \ddots & \ddots & \\ & & \mu_{n-1} & 2 & \lambda_{n-1}\\ & & & 0 & 1\end{bmatrix}\begin{bmatrix}M_0\\ M_1\\ \vdots\\ M_{n-1}\\ M_n\end{bmatrix}=\begin{bmatrix}f''(x_0)\\ 6f[x_0,x_1,x_2]\\ \vdots\\ 6f[x_{n-2},x_{n-1},x_n]\\ f''(x_n)\end{bmatrix}.$$
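Here is a minimal sketch of the natural cubic spline construction just derived (assuming numpy; for the linear spline, np.interp can be used directly). The helper names natural_cubic_spline and spline_eval are illustrative; a complete answer to Exercise [PY-4] below would also cover the quadratic spline and the clamped boundary conditions.

import numpy as np

def natural_cubic_spline(X, Y):
    """Second derivatives M_0..M_n of the natural cubic spline (M_0 = M_n = 0)."""
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    n = len(X) - 1
    h = np.diff(X)                                   # h[i-1] = x_i - x_{i-1}
    A = np.zeros((n + 1, n + 1))
    rhs = np.zeros(n + 1)
    A[0, 0] = A[n, n] = 1.0                          # natural boundary conditions
    for i in range(1, n):
        A[i, i - 1] = h[i - 1] / (h[i - 1] + h[i])   # mu_i
        A[i, i] = 2.0
        A[i, i + 1] = h[i] / (h[i - 1] + h[i])       # lambda_i
        # 6 * second divided difference f[x_{i-1}, x_i, x_{i+1}]
        rhs[i] = 6.0 * ((Y[i + 1] - Y[i]) / h[i] - (Y[i] - Y[i - 1]) / h[i - 1]) / (h[i - 1] + h[i])
    return np.linalg.solve(A, rhs)

def spline_eval(x, X, Y, M):
    """Evaluate the cubic spline at x (scalar or array) using the C_i(x) formula above."""
    X, Y, M = np.asarray(X, float), np.asarray(Y, float), np.asarray(M, float)
    x = np.atleast_1d(np.asarray(x, float))
    i = np.clip(np.searchsorted(X, x), 1, len(X) - 1)    # x lies in [x_{i-1}, x_i]
    h = X[i] - X[i - 1]
    return ((M[i - 1] * (X[i] - x) ** 3 + M[i] * (x - X[i - 1]) ** 3) / (6.0 * h)
            + (Y[i - 1] / h - M[i - 1] * h / 6.0) * (X[i] - x)
            + (Y[i] / h - M[i] * h / 6.0) * (x - X[i - 1]))

# Example: natural cubic spline through 11 uniform samples of the Runge function on [-5, 5]
X = np.linspace(-5.0, 5.0, 11)
Y = 1.0 / (1.0 + X ** 2)
M = natural_cubic_spline(X, Y)
print(spline_eval([-4.5, 0.0, 2.3], X, Y, M))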

Exercise [PY-4]

In this exercise, we want to compute the polynomial spline interpolation of the Runge function.
  • Define the LinearSpline(x,X,Y), QuadraticSpline(x,X,Y) and CubicSpline(x,X,Y) functions.

  • Apply these functions to the interpolation of uniform data from the Runge function. Draw the resulting curves.

See above.
