Polynomial interpolation

Theorem: Given a table of n (n >= 1) data points (x[1], y[1]), ..., (x[n], y[n]), with all the x[i]'s distinct, there is a unique polynomial p of degree no more than n-1 which passes through (that is, interpolates) the data.

Proof outline: Write p(t) = a[1] + a[2]*t + ... + a[n]*t^(n-1). We have n coefficients a[1], ..., a[n] to determine. Evaluate p at each x[i] to get the equations p(x[i]) = y[i]. This gives a system of n linear equations in n unknowns. There will be a unique solution if and only if the determinant of the matrix of the linear system is nonzero. One does some algebra and computes that this determinant is the product of the pairwise differences (x[i] - x[j]) with i > j. Since all the x[i]'s are assumed distinct, none of these factors is zero, so the determinant is nonzero and we are done. QED
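
To make the proof concrete, here is a small sketch (not part of the original worksheet) that sets up and solves the linear system of the proof for three made-up data points, using the linalg package that is loaded below.

> # A sketch: the data points (1,3), (2,5), (4,4) are chosen only for illustration.
> with(linalg):
> X := [1, 2, 4]:  Y := [3, 5, 4]:
> A := vandermonde(X):           # the matrix of the linear system for a1, a2, a3
> a := linsolve(A, vector(Y));   # its solution: the coefficients of 1, t, t^2
> q := t -> a[1] + a[2]*t + a[3]*t^2:
> map(q, X);                     # returns Y, so q interpolates the data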

The algebra one does to compute the determinant in the above theorem is fairly formidable. We can use Maple to compute it for any particular value of n (as long as n isn't too big).

> with(linalg);

[Maple lists the names of the procedures in the linalg package.]

> i := 'i': mat := vandermonde([seq(x.i, i=1..3)]);

        [ 1   x1   x1^2 ]
mat :=  [ 1   x2   x2^2 ]
        [ 1   x3   x3^2 ]

> dmat := det(mat);

dmat := x2 x3^2 - x2^2 x3 - x1 x3^2 + x1^2 x3 + x1 x2^2 - x1^2 x2

> factor(dmat);

(x2 - x1) (x3 - x1) (x3 - x2)

Exercise: At some value of n, your system will no longer be able to compute the determinant of the n x n Vandermonde matrix in a reasonable amount of time. Discover that value of n.
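
One way to experiment (a sketch, not part of the original worksheet) is to time the symbolic determinant as n grows. The names x1, x2, ... are built with x.i just as in the vandermonde call above, and the upper limit 8 is only a guess to be adjusted for your system.

> for n from 3 to 8 do
    st := time():
    det(vandermonde([seq(x.i, i=1..n)])):
    print(n, time() - st);          # n followed by the CPU seconds used
  od;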

Fortunately, there are ways to compute the interpolating polynomial which don't involve computing large determinants.

Lagrange's form of the interpolating polynomial

Here is Lagrange's construction of the interpolating polynomial: For each i from 1 to n let l[i] be the polynomial

l[i](t) = ((t - x[1]) ... (t - x[i-1]) (t - x[i+1]) ... (t - x[n])) / ((x[i] - x[1]) ... (x[i] - x[i-1]) (x[i] - x[i+1]) ... (x[i] - x[n]))

By inspection we see that l[i](x[i]) = 1 and l[i](x[j]) = 0 for j <> i. Hence the polynomial

l(t) = y[1] l[1](t) + y[2] l[2](t) + ... + y[n] l[n](t)

is a polynomial of degree no more than n-1 which interpolates the data. We can construct l in Maple like so. First, define the unit polynomials l[i].

> lu := (i,x,n,`t`) -> (product(t-x[j],j=1..i-1)*product(t-x[j],j=i+1..n))/(product(x[i]-x[j],j=1..i-1)*product(x[i]-x[j],j=i+1..n));

lu := (i, x, n, t) -> product(t - x[j], j = 1 .. i-1) * product(t - x[j], j = i+1 .. n) / (product(x[i] - x[j], j = 1 .. i-1) * product(x[i] - x[j], j = i+1 .. n))

Just to check, we take some specific x data and compute one of the polynomials for that x data.

> x := [x1,x2,x3,x4]:
lu(2,x,4,t);

(t - x1) (t - x3) (t - x4) / ((x2 - x1) (x2 - x3) (x2 - x4))

Next, tell lagrange to construct his polynomial, given the data x, y and the name of the variable to use in the polynomial.

> lagrange := proc(x,y,`t`)
local i;
# sum y[i]*l[i](t) over the data and return the result as a function of t
unapply(sum(y[i]*lu(i,x,nops(x),t),i=1..nops(x)),t) end:

Test with some data:

> x := [1,2,5,7]: y := [4,3,1,4]:


> p1 := lagrange(x,y,w);

p1 := w -> - 1/6 (w - 2) (w - 5) (w - 7) + 1/5 (w - 1) (w - 5) (w - 7)
           - 1/24 (w - 1) (w - 2) (w - 7) + 1/15 (w - 1) (w - 2) (w - 5)

> map(p1,x);

[4, 3, 1, 4]

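As an extra check (a sketch, not part of the original worksheet), Maple's built-in interp command should produce the same polynomial; the difference expands to 0.

> expand(p1(w) - interp(x, y, w));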

This works very nicely. But Lagrange's form has two major defects: (1) it is very expensive to compute, and (2) it is not extensible; that is, if we add a data point to the data, we have to recompute everything all over again. Newton had a better idea.

Newton's form of the interpolating polynomial

Instead of writing the interpolating polynomial p(t) as a linear combination of the n polynomials l[1](t), ..., l[n](t), Newton says write it as a linear combination of the polynomials

1, (t - x[1]), (t - x[1])(t - x[2]), ..., (t - x[1])(t - x[2]) ... (t - x[n-1]). Thus

p(t) = b[1] + b[2] (t - x[1]) + b[3] (t - x[1])(t - x[2]) + ... + b[n] (t - x[1])(t - x[2]) ... (t - x[n-1])

This gives us a triangular system of linear equations to solve for b[1] through b[n]:

y[1] = b[1]

y[2] = b[1] + b[2] (x[2] - x[1])

y[3] = b[1] + b[2] (x[3] - x[1]) + b[3] (x[3] - x[1]) (x[3] - x[2])

. . .

y[n] = b[1] + b[2] (x[n] - x[1]) + ... + b[n] (x[n] - x[1]) (x[n] - x[2]) ... (x[n] - x[n-1])
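
For small n we can also hand this system straight to solve (a sketch, not part of the original worksheet); the solution already shows the divided-difference pattern. Plain names x1, y1, b1, ... are used here so that they stay symbolic.

> eqs := { y1 = b1,
          y2 = b1 + b2*(x2 - x1),
          y3 = b1 + b2*(x3 - x1) + b3*(x3 - x1)*(x3 - x2) }:
> solve(eqs, {b1, b2, b3});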

Such systems can be solved easily, even by hand, by computing a divided difference table. The solutions b[1], ..., b[n] lie along the superdiagonal of the matrix computed by the Maple procedure divdiff, which we define below.

> divdiff := proc(x,y)
local n, nt, j, i, k;
n := nops(x);
# column 1 holds the x data, column 2 the y data, and columns 3..n+1 the
# divided differences; start from a zero matrix of the right shape
nt := matrix(n,n+1,[seq(0,k=1..n*(n+1))]);
for i from 1 to n do nt[i,1]:= x[i]: nt[i,2]:= y[i]: od:
for j from 3 to n+1 do
  for i from j-1 to n do
    # the usual recursion: difference the previous column, divide by the x spread
    nt[i,j]:= (nt[i,j-1]-nt[i-1,j-1])/(x[i]-x[i-(j-2)]);
  od;
od;
evalm(nt) end:

> divdiff([seq('x'[i],i=1..3)],[seq('y'[j],j=1..3)]);

[ x[1]   y[1]   0                           0                                                                ]
[ x[2]   y[2]   (y[2]-y[1])/(x[2]-x[1])     0                                                                ]
[ x[3]   y[3]   (y[3]-y[2])/(x[3]-x[2])     ((y[3]-y[2])/(x[3]-x[2]) - (y[2]-y[1])/(x[2]-x[1]))/(x[3]-x[1])  ]
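
For instance (a sketch, not part of the original worksheet), running divdiff on the data used to test lagrange above produces the numerical table; the superdiagonal entries 4, -1, 1/12, 7/120 are the Newton coefficients b[1], ..., b[4] for that data.

> divdiff([1,2,5,7], [4,3,1,4]);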

The procedure newtinterp computes these solutions and returns the interpolating polynomial.

> newtinterp := proc(x,y,`t`)
local n, nt, j, i, k, p;
n := nops(x);
# build the divided difference table exactly as in divdiff
nt := matrix(n,n+1,[seq(0,k=1..n*(n+1))]);
for i from 1 to n do nt[i,1]:= x[i]: nt[i,2]:= y[i]: od:
# start Newton's form with b[1] = y[1] and add one term per column
p := nt[1,2]:
for j from 3 to n+1 do
  for i from j-1 to n do
    nt[i,j]:= (nt[i,j-1]-nt[i-1,j-1])/(x[i]-x[i-(j-2)]);
  od;
  # the superdiagonal entry nt[j-1,j] multiplies (t-x[1])...(t-x[j-2])
  p := p + nt[j-1,j]*convert([seq(t-x[k],k=1..j-2)],`*`);
od;
p end:
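
Here is a small illustration (a sketch, not part of the original worksheet) of the extensibility Lagrange's form lacks: appending one extra data point, say the arbitrary point (8, 0), changes Newton's form only by one additional term, so the difference of the two interpolants is a single constant multiple of (t-1)(t-2)(t-5)(t-7).

> p4 := newtinterp([1,2,5,7], [4,3,1,4], t):
> p5 := newtinterp([1,2,5,7,8], [4,3,1,4,0], t):
> factor(p5 - p4);     # a constant times (t-1)*(t-2)*(t-5)*(t-7)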

We can test this out by generating the polynomial of smallest degree which is 0 at the integers from 0 to 8, except at 4 where it takes the value 1.

> x := [seq(i,i=0..8)]: y := [0,0,0,0,1,0,0,0,0]:


> sol := newtinterp(x, y, t);

sol := 1/24 t (t - 1) (t - 2) (t - 3) - 1/24 t (t - 1) (t - 2) (t - 3) (t - 4)
       + 1/48 t (t - 1) (t - 2) (t - 3) (t - 4) (t - 5) - 1/144 t (t - 1) (t - 2) (t - 3) (t - 4) (t - 5) (t - 6)
       + 1/576 t (t - 1) (t - 2) (t - 3) (t - 4) (t - 5) (t - 6) (t - 7)

> p := unapply(sol,t);

p := t -> 1/24 t (t - 1) (t - 2) (t - 3) - 1/24 t (t - 1) (t - 2) (t - 3) (t - 4)
          + 1/48 t (t - 1) (t - 2) (t - 3) (t - 4) (t - 5) - 1/144 t (t - 1) (t - 2) (t - 3) (t - 4) (t - 5) (t - 6)
          + 1/576 t (t - 1) (t - 2) (t - 3) (t - 4) (t - 5) (t - 6) (t - 7)

> map(p,[0,1,2,3,4,5,6,7]);

[0, 0, 0, 0, 1, 0, 0, 0]
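
The map above only checks the values at 0 through 7; as one more quick check (a sketch, not part of the original worksheet), the value at 8 should also be 0.

> p(8);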

> plot(p,1..7);

[Maple Plot]

Problems:

1. Verify that the Lagrange interpolation polynomial which is 0 at the integers from 0 to 8, except at 4 where it takes the value 1, is the same as the polynomial p found above.

2. For the function f(x) = 1/(1+x^2), calculate the interpolation polynomial p for equally spaced data x in the interval [-5, 5] (nine points, say) and y = map(f, x). Make p into a function with unapply and plot f and p from -5 to 5. You should see the polynomial start to wiggle out near the endpoints of the interval. Experiment with this by increasing the number of data points to 18. What happens? What if we change the interval to [0,1]? What if we change the function to the sin function? (A possible setup sketch follows the problem list.)

3. Export to C++: What do we want to export? If all we want to export is the final function routine for the polynomial p we found above, that can proceed as before. But suppose we want to export the procedures defined in divdiff and newtinterp to C++. How would we do that?
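
A possible starting point for Problem 2 (a sketch, not part of the original worksheet; the function 1/(1+x^2) is the classical Runge example and, like the choice of nine equally spaced sample points, is an assumption rather than something recovered from the worksheet):

> f := x -> 1/(1 + x^2):
> xd := [seq(-5 + 10*(i-1)/8, i=1..9)]:     # nine equally spaced points on [-5,5]
> yd := map(f, xd):
> p2 := unapply(newtinterp(xd, yd, t), t):
> plot([f, p2], -5..5);                     # look for wiggles near the endpoints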