(from Joshua Scholar about Spline interpolation in general...)
According to sampling theory, a perfect interpolation can be found by replacing each
sample with a sinc function centered on that sample, ringing at your target Nyquist
frequency; at each target point you just sum the contributions from the sinc
functions of every single point in the source.
The sinc function's ringing dies away very slowly, so each target sample has to take
contributions from a large neighborhood of source samples. Luckily, the sinc function
is by definition bandwidth limited, so once we have a source that is prefiltered for
our target Nyquist frequency and reasonably oversampled relative to that frequency,
ordinary interpolation techniques are quite fruitful, even though they would be pretty
useless if we hadn't oversampled.
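To make the sinc-sum idea concrete, here is a minimal Python sketch (my own naming; the test tone, sample count, and evaluation point are made up for the demo). It interpolates a well-oversampled sine by summing a sinc centered on every source sample; the residual error comes purely from truncating the infinite sinc tails at the ends of the buffer.

```python
import math

def sinc(x):
    # normalized sinc: sin(pi*x) / (pi*x), with the removable singularity at 0
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def sinc_interpolate(samples, t):
    # t is in source-sample units; every source sample contributes
    return sum(s * sinc(t - n) for n, s in enumerate(samples))

freq = 0.05           # cycles per sample: 10x oversampled vs. Nyquist (0.5)
N = 400
samples = [math.sin(2 * math.pi * freq * n) for n in range(N)]

t = 200.37            # off-grid point near the middle, far from edge truncation
approx = sinc_interpolate(samples, t)
exact = math.sin(2 * math.pi * freq * t)
print(abs(approx - exact))   # small; only tail-truncation error remains
```

At an on-grid point every sinc but one evaluates to (nearly) zero, so the routine reproduces the source samples exactly, as an ideal interpolator must.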
We want an interpolation routine that at the very least has the following characteristics:
1. Obviously it's continuous. But since finite differencing a signal (I don't really know
about true differentiation) is equivalent to a low-frequency attenuator that rolls off at
only about 6 dB per octave, continuity in the higher derivatives is important too.
2. It has to be stiff enough to find peaks when our oversampling missed them. This is
where what I said about the combination of the sinc function's limited bandwidth and
oversampling making interpolation possible comes into play.
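The 6 dB per octave figure in point 1 can be checked exactly: a first difference y[n] = x[n] - x[n-1] applied to a unit sinusoid has amplitude gain 2*sin(omega/2), which is proportional to frequency at the low end. A throwaway Python check (names are mine):

```python
import math

def diff_gain(freq):
    # amplitude gain of the first difference y[n] = x[n] - x[n-1]
    # applied to a unit sinusoid at `freq` cycles per sample
    omega = 2 * math.pi * freq
    return 2 * math.sin(omega / 2)

g1 = diff_gain(0.01)
g2 = diff_gain(0.02)            # one octave up
ratio_db = 20 * math.log10(g2 / g1)
print(ratio_db)                 # close to 6 dB per octave at low frequencies
```

So differencing attenuates low frequencies relative to high ones by only that gentle 6 dB/octave slope, which is why discontinuities in higher derivatives still leak audible energy.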
I've read some papers on splines, but most material on splines relates to graphics and
uses a control-point description that is completely irrelevant to our sort of
interpolation. In reading this material I quickly came to the conclusion that splines:
1. Are just piecewise functions made of polynomials designed to have some higher order
continuity at the transition points.
2. Are highly arbitrary, because you can choose arbitrary derivatives (to any order) at
each transition. Of course, the more you specify, the higher the order of the polynomials
will be.
3. I already know enough about polynomials to construct any sort of spline. A polynomial
through 'n' points with a first derivative specified at 'm[1]' points, second derivatives
specified at 'm[2]' points, etc. will be a polynomial of order n-1+m[1]+m[2]+...
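To make that counting concrete: two points (n=2) with a first derivative pinned at both (m[1]=2) gives order 2-1+2 = 3, i.e. the familiar cubic Hermite segment. A quick Python sketch (my own naming, unit interval):

```python
def hermite_cubic(p0, p1, d0, d1):
    # order = (2 points) - 1 + (2 first derivatives) = 3
    # returns coefficients of a + b*t + c*t^2 + e*t^3 on t in [0, 1]
    a = p0
    b = d0
    c = 3 * (p1 - p0) - 2 * d0 - d1
    e = 2 * (p0 - p1) + d0 + d1
    return (a, b, c, e)

def eval_poly(coeffs, t):
    return sum(k * t**i for i, k in enumerate(coeffs))

def eval_deriv(coeffs, t):
    return sum(i * k * t**(i - 1) for i, k in enumerate(coeffs) if i)

co = hermite_cubic(1.0, 4.0, 0.5, -2.0)
print(eval_poly(co, 0.0), eval_poly(co, 1.0))    # hits both points
print(eval_deriv(co, 0.0), eval_deriv(co, 1.0))  # hits both derivatives
```

Four constraints, four coefficients: the cubic is uniquely determined, exactly as the counting rule predicts.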
A way to construct third-order splines (that admittedly doesn't help you construct higher
order splines) is to linearly interpolate between two parabolas. At each point (they are
called knots) you have a parabola going through that point, the previous point, and the
next point. Between each pair of points you linearly interpolate between the two knots'
parabolas. This may help you imagine splines.
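That two-parabola construction can be written down directly. The Python sketch below (my own naming, unit knot spacing assumed) builds one segment; as far as I know this blend is one way to arrive at the classic Catmull-Rom cubic:

```python
def parabola(y_prev, y0, y_next):
    # quadratic q(t) with q(-1) = y_prev, q(0) = y0, q(1) = y_next
    a = y0
    b = (y_next - y_prev) / 2
    c = (y_next - 2 * y0 + y_prev) / 2
    return lambda t: a + b * t + c * t * t

def spline_segment(ym1, y0, y1, y2):
    # cubic segment between y0 and y1, local t in [0, 1]:
    # linearly blend the parabola centered on y0 with the one centered on y1
    qa = parabola(ym1, y0, y1)      # its local t matches ours
    qb = parabola(y0, y1, y2)       # its local t is shifted by -1 here
    return lambda t: (1 - t) * qa(t) + t * qb(t - 1)

seg = spline_segment(1.0, 4.0, 3.0, 2.0)
print(seg(0.0), seg(1.0))   # passes through both knots
print(seg(0.5))             # a smooth in-between value
```

A linear blend of two quadratics is a cubic, and since adjacent segments share each knot's parabola (value and slope) there, the result is continuous in the first derivative.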
As a starting point I used a polynomial through 5 points for each knot, and used MuPAD (a
free Mathematica-like program) to derive a polynomial going through two points (knots)
that at each knot has the same first two derivatives as the 4th order polynomial through
the surrounding 5 points. My intuition was that basing it on polynomials through 3 points
wouldn't give a big enough neighborhood for good continuity. When I tested it, I found
not only that basing it on 5-point polynomials did much better than basing it on 3-point
ones, but that 7-point ones did nearly as badly as 3-point ones. 5 points seems to be a
sweet spot.
However, I could have set the derivatives to nearly arbitrary values - basing the values
on those of polynomials through the surrounding points was just a guess.
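For what it's worth, that construction fits in a few lines of Python (my own naming and algebra, not the MuPAD derivation; unit knot spacing assumed). The finite-difference weights below are exactly the derivatives of the degree-4 polynomial through the 5 surrounding samples, and the quintic has order 2-1+2+2 = 5, matching value, first, and second derivative at both knots:

```python
import math

def knot_derivs(y, i):
    # 1st and 2nd derivative at knot i, taken from the degree-4
    # polynomial through the 5 surrounding samples (unit spacing)
    d1 = (y[i-2] - 8*y[i-1] + 8*y[i+1] - y[i+2]) / 12.0
    d2 = (-y[i-2] + 16*y[i-1] - 30*y[i] + 16*y[i+1] - y[i+2]) / 12.0
    return d1, d2

def quintic_segment(p0, p1, v0, v1, a0, a1):
    # quintic matching value, 1st and 2nd derivative at both ends, t in [0, 1]
    d = p1 - p0
    c = [p0, v0, a0 / 2,
         10*d - 6*v0 - 4*v1 - 1.5*a0 + 0.5*a1,
         -15*d + 8*v0 + 7*v1 + 1.5*a0 - a1,
         6*d - 3*v0 - 3*v1 - 0.5*a0 + 0.5*a1]
    return lambda t: sum(ck * t**k for k, ck in enumerate(c))

# try it on a 10x-oversampled tone
freq = 0.05
y = [math.sin(2 * math.pi * freq * n) for n in range(10)]
v0, a0 = knot_derivs(y, 4)
v1, a1 = knot_derivs(y, 5)
seg = quintic_segment(y[4], y[5], v0, v1, a0, a1)
err = abs(seg(0.5) - math.sin(2 * math.pi * freq * 4.5))
print(err)   # small off-grid error on an oversampled tone
```

Because neighboring segments pull their end derivatives from the same 5-point fits, the assembled spline is continuous through the second derivative at every knot.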
I've read that the math of sampling theory has a different interpretation from the sinc
function one: you can upsample by fitting a single polynomial through every point, of
order one less than the number of points, and this gives the same answer as sinc
function interpolation (but it only converges perfectly when there are an infinite
number of points). Your head is probably spinning right now - the only point of
mentioning that is to point out that perfect interpolation is exactly as stiff as a
polynomial of that full order through the target points.
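You can see that equivalence in miniature with a Lagrange fit (illustrative Python; the tone and evaluation point are made up): the full-order polynomial through a modest window of an oversampled tone lands extremely close to the true (sinc-interpolated) value at an off-grid point near the center.

```python
import math

def lagrange_eval(ys, t):
    # evaluate the unique polynomial of order len(ys)-1 through
    # the points (0, ys[0]), (1, ys[1]), ... at position t
    n = len(ys)
    total = 0.0
    for i in range(n):
        term = ys[i]
        for j in range(n):
            if j != i:
                term *= (t - j) / (i - j)
        total += term
    return total

freq = 0.05   # 10x oversampled tone
ys = [math.sin(2 * math.pi * freq * n) for n in range(15)]
t = 7.3       # off-grid point near the center of the window
err = abs(lagrange_eval(ys, t) - math.sin(2 * math.pi * freq * t))
print(err)    # tiny: the full-order polynomial is already near-perfect here
```

Near the edges of a finite window this breaks down (high-order polynomials oscillate there), which is the finite-points caveat above in numerical form.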