
Lecture 9, Review 1


Go over WEB pages Lecture 1 through 8.
Work homework 1 and homework 2.

Open book, open notes, exam.
Multiple choice and short answer.
Read the instructions.
Follow the instructions, else lose points.
Read the question carefully.
Answer the question that is asked, not the question you want to answer.
No programming required.
Some code may appear as part of a question.

Example questions and answers will be presented in class.
Some things you should know for the exam:

IEEE float has 6 or 7 decimal digits of precision

IEEE double has 15 or 16 decimal digits of precision

Adding or subtracting numbers of large differences in magnitude
causes a loss of precision, known as roundoff error.

RMS error is root mean square error, a reasonable intuitive measure

Maximum error may be the most important measure for some applications

Average error is not very useful; positive and negative errors cancel,
so it is typically the smallest of the error measures.

The machine "epsilon" for a specific floating point arithmetic is
the smallest positive machine number added to exactly 1.0 that
results in a sum that is not exactly 1.0, actually greater than 1.0 .
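
A minimal C sketch of one common way to measure the machine epsilon
(the variable names and loop are illustrative, not the lecture's code):

#include <stdio.h>

int main(void)
{
    double eps = 1.0;
    /* halve eps until adding the half to exactly 1.0 no longer
       gives a sum greater than 1.0                              */
    while (1.0 + eps / 2.0 > 1.0)
        eps = eps / 2.0;
    printf("double epsilon about %g\n", eps);   /* about 2.22e-16 */

    float feps = 1.0f;
    /* note: some compilers evaluate float expressions in extra
       precision, which can change this result slightly          */
    while (1.0f + feps / 2.0f > 1.0f)
        feps = feps / 2.0f;
    printf("float epsilon about %g\n", (double)feps);   /* about 1.19e-7 */
    return 0;
}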

Be careful reading programs that use "epsilon" or "eps". Sometimes
it may be the "machine epsilon" and other times it may be a
tolerance on how close something should be.

Most languages include the elementary functions sin and cos, yet many
do not include a complete set of reciprocal functions such as cosecant
or inverse hyperbolic functions such as inverse hyperbolic cotangent.

If given only a natural logarithm function, log2(x) can be computed
as  log(x)/log(2.0)  and log10(x) as  log(x)/log(10.0).
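
For example, in C (log2 and log10 do exist in <math.h>, so this is
only an illustration of the identities):

#include <math.h>
#include <stdio.h>

int main(void)
{
    double x = 8.0;
    printf("log2(8)  = %f\n", log(x) / log(2.0));    /* 3.0         */
    printf("log10(8) = %f\n", log(x) / log(10.0));   /* about 0.903 */
    return 0;
}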

Homework 1 used a simple approximation for integration:
position s = integral from time=0 to time=t of velocity(t) dt
approximated by the recurrence  s_i = s_i-1 + v_i-1 * dt,  i = 1..n,
with s_0 = 0, s = s_n, and n*dt = t.
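
A small C sketch of that recurrence; the velocity function below is
made up just for the example:

#include <stdio.h>

static double velocity(double t)      /* hypothetical velocity      */
{
    return 3.0 * t * t;               /* exact integral is t cubed  */
}

int main(void)
{
    double t = 2.0, s = 0.0;          /* integrate from 0 to t      */
    int n = 1000;
    double dt = t / n;
    for (int i = 1; i <= n; i++)
        s = s + velocity((i - 1) * dt) * dt;   /* s_i = s_i-1 + v_i-1*dt */
    printf("approximate position %f, exact %f\n", s, t * t * t);
    return 0;
}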

In order to guarantee a mathematical unique solution to a set of
linear simultaneous equations, two requirements are needed:
There must be the same number of equations as unknown variables and
the equations must be linearly independent.

For two equations to be linearly independent, there must be no
constant p such that equation1 * p = equation2; more generally, no
equation may be a linear combination of the others.

The numerical solution of linear simultaneous equations can fail even
though the mathematical condition for a unique solution exists.

Surprisingly, a good numerical solution usually results from
linear simultaneous equations based on random numbers.

The Gauss Jordan method of solving simultaneous equations A x = y produces
the solution x by performing row operations on the augmented matrix [A y],
reducing A to the identity matrix such that y is replaced by x at the end
of the computation.

Improved numerical accuracy is obtained in the solution of linear
systems of equations by interchanging rows so that the largest
magnitude element in the column, on or below the diagonal, becomes
the pivot element (partial pivoting).
Some very large systems of equations can be solved accurately.
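
A compact C sketch of Gauss Jordan elimination with partial pivoting
on a small test system; the 3 by 3 example values are arbitrary:

#include <stdio.h>
#include <math.h>

#define N 3

/* solve A x = y in place: A becomes the identity, y becomes x */
static void gauss_jordan(double A[N][N], double y[N])
{
    for (int k = 0; k < N; k++) {
        /* partial pivoting: bring the largest magnitude element in
           column k, on or below the diagonal, up to row k          */
        int p = k;
        for (int i = k + 1; i < N; i++)
            if (fabs(A[i][k]) > fabs(A[p][k])) p = i;
        for (int j = 0; j < N; j++) {
            double t = A[k][j]; A[k][j] = A[p][j]; A[p][j] = t;
        }
        double t = y[k]; y[k] = y[p]; y[p] = t;

        /* scale the pivot row, then eliminate column k elsewhere */
        double piv = A[k][k];              /* nonzero if A is nonsingular */
        for (int j = 0; j < N; j++) A[k][j] /= piv;
        y[k] /= piv;
        for (int i = 0; i < N; i++) {
            if (i == k) continue;
            double m = A[i][k];
            for (int j = 0; j < N; j++) A[i][j] -= m * A[k][j];
            y[i] -= m * y[k];
        }
    }
}

int main(void)
{
    double A[N][N] = {{ 2.0,  1.0, -1.0},
                      {-3.0, -1.0,  2.0},
                      {-2.0,  1.0,  2.0}};
    double y[N] = {8.0, -11.0, -3.0};      /* solution is 2, 3, -1 */
    gauss_jordan(A, y);
    printf("x = %f %f %f\n", y[0], y[1], y[2]);
    return 0;
}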

The system of linear equations is solved by the same method when
the equations have complex values.

It is better to directly solve simultaneous equations rather than
invert the "A" matrix and multiply by the "y" vector to find "x".
There are small matrices that are very hard to invert accurately.
Matrix inversion uses very similar computation to direct solving
of simultaneous equations. The extra matrix times vector product,
A^-1 * y, introduces more error.

Given data, a least square fit of the data minimizes the RMS error
for a given degree polynomial at the data points. Between the
data points there may be large variations.

When trying to fit large degree polynomials, there may be numerical
errors such that the approximation becomes worse.

A polynomial of degree n will exactly fit n+1 points.
It may have extreme humps and valleys between the points.

Mathematically, a least square fit of n data points with an n-1 degree
polynomial should fit the data exactly. The numerical computation
may not give this result.

A least square fit may use terms: powers, sine, cosine or any other
smooth function of the data. The basic requirement is that
all functions must be linearly independent of each other.

A least square fit uses a solution of simultaneous equations
(the normal equations).

Least square fit is the easiest method of fitting data that
is given with unequal spacing.
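
A minimal C sketch for the simplest case, fitting a straight line
c0 + c1*x to unequally spaced data by solving the 2 by 2 normal
equations in closed form; the data values are invented:

#include <stdio.h>

int main(void)
{
    double x[] = {0.0, 0.5, 1.3, 2.0, 3.7};   /* unequal spacing */
    double y[] = {1.1, 2.0, 3.9, 5.2, 8.3};
    int n = 5;

    /* sums that form the 2 by 2 normal equations */
    double sx = 0.0, sy = 0.0, sxx = 0.0, sxy = 0.0;
    for (int i = 0; i < n; i++) {
        sx += x[i]; sy += y[i]; sxx += x[i] * x[i]; sxy += x[i] * y[i];
    }
    /* solve  n*c0 + sx*c1 = sy   and   sx*c0 + sxx*c1 = sxy */
    double det = n * sxx - sx * sx;
    double c1 = (n * sxy - sx * sy) / det;
    double c0 = (sxx * sy - sx * sxy) / det;
    printf("least square line: %f + %f * x\n", c0, c1);
    return 0;
}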

A polynomial of degree n has n+1 coefficients. Thus n+1 data
points can be exactly fit by an n degree polynomial.

A polynomial of degree n has exactly n roots (possibly complex,
counted with multiplicity).

Given roots r1, r2, ... rn a polynomial with these roots is created by
(x-r1)*(x-r2)* ... *(x-rn)
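
A short C sketch that builds the coefficients by multiplying out one
factor (x - r) at a time; the root values are arbitrary examples:

#include <stdio.h>

int main(void)
{
    double r[] = {1.0, 2.0, 3.0};         /* example roots           */
    int n = 3;
    double c[4] = {1.0};                  /* start with polynomial 1 */
    int deg = 0;

    for (int k = 0; k < n; k++) {         /* multiply by (x - r[k])  */
        c[deg + 1] = 0.0;
        for (int j = deg + 1; j >= 1; j--)
            c[j] = c[j - 1] - r[k] * c[j];
        c[0] = -r[k] * c[0];
        deg++;
    }
    /* c[j] is the coefficient of x^j: expect x^3 - 6x^2 + 11x - 6 */
    for (int j = deg; j >= 0; j--)
        printf("c[%d] = %f\n", j, c[j]);
    return 0;
}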

After finding a root r, divide the polynomial by (x-r) to reduce the
degree by one.

Newton's method x_next = x - P(x)/P'(x) will have quadratic
convergence except near multiple roots or where the derivative P'(x)
is small. Quadratic convergence roughly squares the error each
iteration: an error of 1/16 will reduce to an error of about 1/256
on the next iteration.

Horner's method of evaluating polynomials provides accuracy and
efficiency by never directly computing high powers of the variable.
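
A C sketch combining the two ideas: Horner evaluation of P(x) and
P'(x), then the Newton iteration; the example polynomial and starting
guess are arbitrary:

#include <stdio.h>
#include <math.h>

/* evaluate p(x) and p'(x) by Horner's method;
   c[0..n] holds the coefficients, c[n] multiplies x^n */
static void horner(const double c[], int n, double x, double *p, double *dp)
{
    double v = c[n], d = 0.0;
    for (int i = n - 1; i >= 0; i--) {
        d = d * x + v;                /* derivative uses the previous value */
        v = v * x + c[i];
    }
    *p = v;
    *dp = d;
}

int main(void)
{
    double c[] = {-6.0, 11.0, -6.0, 1.0};    /* x^3 - 6x^2 + 11x - 6 */
    double x = 3.5, p, dp;                   /* starting guess       */
    for (int iter = 0; iter < 20; iter++) {
        horner(c, 3, x, &p, &dp);
        if (fabs(p) < 1.0e-12) break;        /* a tolerance, not machine epsilon */
        x = x - p / dp;                      /* Newton step          */
    }
    printf("root near %f\n", x);             /* expect 3.0           */
    return 0;
}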

Given the numerical coefficients of a polynomial, the numerical
coefficients of the integral, derivative are easily computed.
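
A brief C sketch of those coefficient manipulations, using the
convention that c[j] is the coefficient of x^j; the sample polynomial
is arbitrary:

#include <stdio.h>

int main(void)
{
    double c[] = {-6.0, 11.0, -6.0, 1.0};    /* x^3 - 6x^2 + 11x - 6    */
    int n = 3;

    double d[3];                             /* derivative, degree n-1  */
    for (int j = 1; j <= n; j++)
        d[j - 1] = j * c[j];

    double g[5];                             /* integral, degree n+1    */
    g[0] = 0.0;                              /* constant of integration */
    for (int j = 0; j <= n; j++)
        g[j + 1] = c[j] / (j + 1);

    for (int j = 0; j < n; j++)      printf("d[%d] = %f\n", j, d[j]);
    for (int j = 0; j <= n + 1; j++) printf("g[%d] = %f\n", j, g[j]);
    return 0;
}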

Given the numerical coefficients of two polynomials, the
sum, difference, product and ratio are easily computed.

Any function that can be continuously differentiated can be
approximated by a Taylor series expansion. The error that comes from
evaluating the series with only a specific number of terms is called
"truncation error".
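
A tiny C illustration of truncation error, comparing a truncated
Maclaurin series for exp(1) against the library value; the number of
terms kept is arbitrary:

#include <stdio.h>
#include <math.h>

int main(void)
{
    double x = 1.0, term = 1.0, sum = 1.0;
    int nterms = 6;                          /* keep only a few terms */
    for (int k = 1; k < nterms; k++) {
        term = term * x / k;                 /* x^k / k!              */
        sum += term;
    }
    printf("series %f  exp(1) %f  truncation error %e\n",
           sum, exp(x), exp(x) - sum);
    return 0;
}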

Orthogonal polynomials and related basis functions are used to fit data
and perform numerical integration. Examples include:
Legendre, Chebyshev and Laguerre polynomials, Lagrange interpolation,
and Fourier series.

Chebyshev polynomials are used to approximate smooth functions
while minimizing the maximum error over the interval of approximation.

Legendre polynomials are used to approximate smooth functions
while minimizing the RMS error over the interval of approximation.

Lagrange polynomials are used to approximate smooth functions
while exactly fitting the given data points.
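
A short C sketch of Lagrange interpolation evaluated at one point;
the data points are invented for the example:

#include <stdio.h>

int main(void)
{
    double x[] = {0.0, 1.0, 2.0, 4.0};        /* points fit exactly      */
    double y[] = {1.0, 3.0, 2.0, 5.0};
    int n = 4;
    double xe = 1.5;                          /* evaluate the fit here   */

    double p = 0.0;
    for (int i = 0; i < n; i++) {
        double li = 1.0;                      /* basis polynomial l_i(xe) */
        for (int j = 0; j < n; j++)
            if (j != i)
                li *= (xe - x[j]) / (x[i] - x[j]);
        p += y[i] * li;
    }
    printf("interpolated value at %f is %f\n", xe, p);
    return 0;
}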

For non-smooth functions, including square waves and pulses,
a Fourier series approximation may be used.

The Fejer approximation can be used to eliminate many oscillations
in the Fourier approximation at the expense of a less accurate fit.

Numerical integration is typically called numerical quadrature.
 
The Trapezoidal integration method requires a small step size
and many function evaluations to get accuracy.
Using a uniform step size, h = (b-a)/n, the method is easy to code:
area = h * ( (f(a)+f(b))/2 + sum i=1..n-1  f(a+i*h) )
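
A direct C transcription of that formula; the integrand is just an
example:

#include <stdio.h>
#include <math.h>

static double f(double x) { return sin(x); }     /* example integrand */

int main(void)
{
    double a = 0.0, b = acos(-1.0);   /* integral of sin over [0,pi] is 2 */
    int n = 1000;
    double h = (b - a) / n;
    double area = (f(a) + f(b)) / 2.0;
    for (int i = 1; i <= n - 1; i++)
        area += f(a + i * h);
    area *= h;
    printf("trapezoidal area %f\n", area);
    return 0;
}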

In general, a smaller step size, usually called "h", will
result in a more accurate value for the integral. This
implies that more evaluations of the function to be
integrated are needed.

The Gauss Legendre integration of smooth functions provides
good accuracy with a minimum number of function evaluations.
The weights and ordinates are needed for the summation:
area = sum i=1..n  w[i]*f(x[i]);
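
A minimal C sketch using the standard 3 point Gauss Legendre weights
and ordinates on [-1,1]; the integrand is just an example:

#include <stdio.h>
#include <math.h>

static double f(double x) { return x * x * x * x; }   /* example integrand */

int main(void)
{
    /* 3 point Gauss Legendre ordinates and weights on [-1,1] */
    double xg[3] = {-sqrt(3.0 / 5.0), 0.0, sqrt(3.0 / 5.0)};
    double wg[3] = {5.0 / 9.0, 8.0 / 9.0, 5.0 / 9.0};

    double area = 0.0;
    for (int i = 0; i < 3; i++)
        area += wg[i] * f(xg[i]);             /* area = sum w[i]*f(x[i]) */
    printf("area %f (exact 0.4)\n", area);
    return 0;
}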

An adaptive quadrature integration method is needed to
get reasonable accuracy of functions with large derivatives
or large variation in derivatives.

Adaptive quadrature uses a variable step size and at least
two methods of integration to determine when the desired
accuracy is achieved. This method, as with all integration
methods, can fail to give accurate results.
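
One common way to implement the idea is recursive adaptive Simpson
quadrature; this C sketch is a generic version, not the lecture's
particular code:

#include <stdio.h>
#include <math.h>

static double f(double x) { return sqrt(x); }   /* large derivative near 0 */

/* one Simpson estimate on [a,b] */
static double simpson(double a, double b)
{
    double m = (a + b) / 2.0;
    return (b - a) / 6.0 * (f(a) + 4.0 * f(m) + f(b));
}

/* compare the whole-interval estimate with the two half-interval
   estimates; recurse only where the accuracy is not yet reached   */
static double adaptive(double a, double b, double whole, double tol)
{
    double m = (a + b) / 2.0;
    double left = simpson(a, m), right = simpson(m, b);
    if (fabs(left + right - whole) < 15.0 * tol)
        return left + right + (left + right - whole) / 15.0;
    return adaptive(a, m, left, tol / 2.0) +
           adaptive(m, b, right, tol / 2.0);
}

int main(void)
{
    double a = 0.0, b = 1.0;
    double area = adaptive(a, b, simpson(a, b), 1.0e-8);
    printf("area %f (exact 2/3)\n", area);
    return 0;
}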

Two dimensional and higher dimensional integrals can use simple
extensions of one dimensional numerical quadrature methods, except
for adaptive quadrature.

The difference between a Taylor Series and a Maclaurin Series
is that the Maclaurin Series always expands a function about
zero.

The two equations for Standard Deviation are mathematically
equivalent, yet may give different numerical results.
sigma = sqrt((sum(x^2)- sum(x)^2/n)/n) = sqrt(sum((x-mean)^2)/n)
For n samples of x whose mean value is mean.
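
A small C check of the two forms; with a large mean and small spread
the one pass form can lose accuracy, which is the point of the
comparison (the data values are invented):

#include <stdio.h>
#include <math.h>

int main(void)
{
    /* large mean, small spread: hard case for the one pass form */
    double x[] = {1000000.1, 1000000.2, 1000000.3, 1000000.4, 1000000.5};
    int n = 5;

    double s = 0.0, s2 = 0.0;
    for (int i = 0; i < n; i++) { s += x[i]; s2 += x[i] * x[i]; }
    double sigma1 = sqrt((s2 - s * s / n) / n);          /* one pass form */

    double mean = s / n, d2 = 0.0;
    for (int i = 0; i < n; i++) d2 += (x[i] - mean) * (x[i] - mean);
    double sigma2 = sqrt(d2 / n);                        /* two pass form */

    printf("one pass %.10f   two pass %.10f\n", sigma1, sigma2);
    return 0;
}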

RMS error is computed from a set of n error measurements using
rms_error = sqrt(sum(err^2)/n)
Note that this is the same value as the Standard Deviation when
the mean value of the errors is zero. Thus it is considered a
reasonable intuitive measure of error.

