
Assignment

MATH1220

Joel Wong: 3128783

1. Fourier Series
Background
While Euler, D. Bernoulli and Gauss had used trigonometric series earlier in relation to problems such as vibrating strings, Joseph Fourier (1768-1830) believed that these trigonometric series could represent arbitrary functions. He first introduced them in order to solve the heat equation for a metal plate.

Definition
Also known as a trigonometric series, a Fourier series is an infinite series of sine and cosine functions. If we let $f$ be a piecewise continuous function on $[-\pi, \pi]$ with period $2\pi$ (i.e. $f(x + 2\pi) = f(x)$), the Fourier series that converges to $f(x)$ may be expressed as:

$$f(x) = a_0 + \sum_{n=1}^{\infty}\left(a_n \cos nx + b_n \sin nx\right)$$

where $a_0$, $a_n$ and $b_n$ (called the Fourier coefficients of $f$) are defined by:

$$a_0 = \frac{1}{2\pi}\int_{-\pi}^{\pi} f(x)\,dx, \qquad a_n = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\cos nx\,dx, \qquad b_n = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\sin nx\,dx$$

Note that if we do not know whether the Fourier series converges to the function $f$, we say it is the Fourier series associated with the function. Also note that this definition may be extended to include functions that have a period other than $2\pi$ by the use of a substitution. Finally, note that the Fourier series may also be expressed using complex exponentials in place of the trigonometric functions.
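As an added illustration (not part of the original assignment), the coefficient formulas above can be approximated numerically. The sketch below uses NumPy's trapezoidal rule on a sampled function; the test function and the grid size are arbitrary choices.

```python
import numpy as np

def fourier_coefficients(f, n_max, samples=4096):
    """Approximate a_0, a_n, b_n for a 2*pi-periodic function f
    using the trapezoidal rule on [-pi, pi]."""
    x = np.linspace(-np.pi, np.pi, samples)
    fx = f(x)
    a0 = np.trapz(fx, x) / (2 * np.pi)
    a = [np.trapz(fx * np.cos(n * x), x) / np.pi for n in range(1, n_max + 1)]
    b = [np.trapz(fx * np.sin(n * x), x) / np.pi for n in range(1, n_max + 1)]
    return a0, np.array(a), np.array(b)

# Example: f(x) = x on (-pi, pi) has a_n = 0 and b_n = 2*(-1)^(n+1)/n.
a0, a, b = fourier_coefficients(lambda x: x, n_max=5)
print(a0, a, b)
```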

Convergence
Since a Fourier series is only really useful if it can be shown that it converges to the function it is meant to be representing, it is important to know the conditions under which a Fourier series converges to its associated function.

The Fourier Convergence Theorem may be stated as follows (Stewart, J. 2008 [sixth edition], p. 5 of online appendix): If $f$ is a periodic function with period $2\pi$, and $f$ and $f'$ are piecewise continuous on $[-\pi, \pi]$, then the Fourier series is convergent. The sum of the Fourier series is equal to $f(x)$ at all numbers $x$ where $f$ is continuous. At the numbers $x$ where $f$ is discontinuous, the sum of the Fourier series is the average of the right and left limits, that is, $\tfrac{1}{2}\left[f(x^{+}) + f(x^{-})\right]$.

Note: piecewise continuous means that the function is continuous except for a finite number of jump discontinuities. A jump discontinuity occurs where the function jumps abruptly from one value to another, so that the left and right limits exist but are not equal.

It is also possible to use Dini's test to test the convergence of a Fourier series at a point more precisely (Weisstein, E. W., 2010): Let

$$\phi_x(t) = f(x + t) + f(x - t) - 2f(x).$$

Then if

$$\int_0^{\pi} \frac{\left|\phi_x(t)\right|}{t}\,dt$$

is finite, the Fourier series converges to $f(x)$ at $x$.
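As a rough numerical illustration (an added sketch with an arbitrarily chosen test function and grid; not from the original), Dini's integral can be estimated with a simple midpoint rule that avoids $t = 0$:

```python
import numpy as np

def dini_integral(f, x, n=100_000):
    """Estimate int_0^pi |f(x+t) + f(x-t) - 2 f(x)| / t dt with a midpoint rule."""
    t = (np.arange(n) + 0.5) * (np.pi / n)   # midpoints, so t = 0 is never evaluated
    phi = f(x + t) + f(x - t) - 2 * f(x)
    return np.sum(np.abs(phi) / t) * (np.pi / n)

# For f(x) = |x| at x = 0, phi_0(t) = 2t, so the integrand is the constant 2
# and the integral is finite (2*pi), so the Fourier series converges there.
print(dini_integral(np.abs, 0.0))   # approximately 6.283
```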

Example
A classic example of where Fourier series are used is the square-wave function, which may be defined by:

$$f(x) = \begin{cases} 0 & \text{if } -\pi \le x < 0 \\ 1 & \text{if } 0 \le x < \pi \end{cases}, \qquad f(x + 2\pi) = f(x)$$

If you have done ELEC1700 (digital circuits), you might recognise this as the timing diagram of a clock. If you take the graph as representing a sound wave, it has the tone colour of a computer's internal speaker (think of early computer game music for a comparison). To find the associated Fourier series, we compute the Fourier coefficients:

$$a_0 = \frac{1}{2\pi}\int_{-\pi}^{\pi} f(x)\,dx = \frac{1}{2\pi}\int_{0}^{\pi} 1\,dx = \frac{1}{2}$$

$$a_n = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\cos nx\,dx = \frac{1}{\pi}\int_{0}^{\pi}\cos nx\,dx = \frac{1}{\pi}\left[\frac{\sin nx}{n}\right]_{0}^{\pi} = 0$$

$$b_n = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\sin nx\,dx = \frac{1}{\pi}\int_{0}^{\pi}\sin nx\,dx = \frac{1}{\pi}\left[-\frac{\cos nx}{n}\right]_{0}^{\pi} = \frac{1-\cos n\pi}{n\pi} = \begin{cases} 0 & \text{if } n \text{ is even} \\ \dfrac{2}{n\pi} & \text{if } n \text{ is odd} \end{cases}$$

So the Fourier series is:

$$f(x) \sim \frac{1}{2} + \frac{2}{\pi}\sin x + \frac{2}{3\pi}\sin 3x + \frac{2}{5\pi}\sin 5x + \cdots = \frac{1}{2} + \sum_{k=0}^{\infty}\frac{2}{(2k+1)\pi}\sin\bigl((2k+1)x\bigr)$$

By the Fourier Convergence Theorem, we can say:

$$\frac{1}{2} + \sum_{k=0}^{\infty}\frac{2}{(2k+1)\pi}\sin\bigl((2k+1)x\bigr) = \begin{cases} f(x) & \text{if } x \ne n\pi,\ n \in \mathbb{Z} \\ \tfrac{1}{2} & \text{if } x = n\pi,\ n \in \mathbb{Z} \end{cases}$$
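To see this convergence concretely, here is an added sketch (not part of the original; the truncation point of 200 terms is an arbitrary choice) that evaluates the partial sums numerically:

```python
import numpy as np

def square_wave_partial_sum(x, terms=200):
    """Partial sum of the square-wave series: 1/2 + sum of 2/((2k+1)pi) * sin((2k+1)x)."""
    total = np.full_like(np.asarray(x, dtype=float), 0.5)
    for k in range(terms):
        n = 2 * k + 1
        total += (2.0 / (n * np.pi)) * np.sin(n * x)
    return total

print(square_wave_partial_sum(np.pi / 2))   # close to 1 (a point of continuity)
print(square_wave_partial_sum(0.0))         # exactly 0.5 (the jump at x = 0)
```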

Applications
Fourier series are very useful in modelling or analysing periodic phenomena, such as sound waves, electricity, heartbeats (e.g. ECG) and tides. Fourier series can also be used to solve some classes of ODEs. One interesting application of Fourier series is in analysing the harmonics of an instrument (the greater the number of harmonics in an instrument, the more complex the sound; think of the sound of a flute vs. the sound of a violin vs. the sound of a French horn). Since the harmonics of an instrument are the second biggest factor in the instrument's sound (the biggest being the player's technique), analysing harmonics gives a precise basis for why instruments are different (e.g. why do Stradivarius violins tend to sound better than modern violins, and how can we replicate the waveform of these instruments?).

A pure sound wave is basically isomorphic to a sine wave, so by expressing a sound wave as a sum of trigonometric functions, we are essentially superimposing many pure sound waves to make a complex sound wave. Given the Fourier series

$$f(x) = a_0 + \sum_{n=1}^{\infty}\left(a_n \cos nx + b_n \sin nx\right),$$

we are able to separate out the $n$th term, $a_n \cos nx + b_n \sin nx$ (called the $n$th harmonic), and thus analyse the strengths of the different harmonics in an instrument's waveform.

The amplitude (strength) of the $n$th harmonic is given by $A_n = \sqrt{a_n^2 + b_n^2}$. Upon computing the amplitude of several harmonics, it becomes clear that instruments with a more complex tone colour have more amplitude in higher harmonics than instruments with a purer tone colour. The other trend that would be interesting to investigate is the effect of having certain harmonics emphasised. As well as allowing more precise analysis of musical waves, Fourier series allow us to generate new tone colours by experimenting with different combinations of waves, and possibly to replicate a given tone colour.
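As an added sketch of this idea (the synthetic "instrument" waveform below is an invented example, not from the original), the harmonic amplitudes $A_n$ can be estimated from numerically computed coefficients:

```python
import numpy as np

def harmonic_amplitudes(f, n_max, samples=8192):
    """A_n = sqrt(a_n^2 + b_n^2) for a 2*pi-periodic waveform f, via the trapezoidal rule."""
    x = np.linspace(-np.pi, np.pi, samples)
    fx = f(x)
    amps = []
    for n in range(1, n_max + 1):
        a_n = np.trapz(fx * np.cos(n * x), x) / np.pi
        b_n = np.trapz(fx * np.sin(n * x), x) / np.pi
        amps.append(np.hypot(a_n, b_n))
    return np.array(amps)

# A made-up "complex" tone: a fundamental plus strong upper harmonics.
tone = lambda x: np.sin(x) + 0.6 * np.sin(3 * x) + 0.3 * np.cos(5 * x)
print(np.round(harmonic_amplitudes(tone, 6), 3))   # approximately [1, 0, 0.6, 0, 0.3, 0]
```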

Conclusion
A Fourier series is an infinite series of trigonometric functions and, being made up of periodic functions, is very useful for investigating periodic phenomena. The Fourier series converges to its associated function (at the points where that function is continuous) provided the function and its derivative are piecewise continuous over the repeating interval.

2. Least Squares Approximations


Background
The idea of least squares approximation was independently formulated by Gauss, Legendre and Adrain between 1795 and 1808. The motivation for developing such a method of approximation was driven by the large amount of exploration taking place in that period. Explorers needed to be able to navigate accurately, and to do so they needed to be able to accurately predict the locations of celestial bodies and also to estimate the size of the earth. Experimental error was a large obstacle to answering both of these questions, requiring better methods of approximation.

Motivation
A problem that most people who have done at least high-school science would recognise is how to find the line of best fit for a set of experimental data. Say you have the following data:

x    1   2   3   4   5
y    1   2   4   6   7

[Scatter plot of the data points omitted.]

If you were to draw a line based on the first two points (the line y = x), you would get the following:

x             1   2   3   4   5
y             1   2   4   6   7
y prediction  1   2   3   4   5
error         0   0   1   2   2
error²        0   0   1   4   4

[Plot of the data and the line y = x omitted.]

So the sum of the squares of the errors is 9.
Now, if you were to draw a line with the same gradient that passes through (3, 4) (the line y = x + 1), then you would get the following results:

x             1   2   3   4   5
y             1   2   4   6   7
y prediction  2   3   4   5   6
error         1   1   0   1   1
error²        1   1   0   1   1

[Plot of the data and the line y = x + 1 omitted.]

With this line, the sum of the squares of the errors is only 4. Now, is there a methodical way to find the line with the smallest sum of the squares of the errors, giving us the least squares approximation? It turns out that solving the system of equations

$$A^{T}A\hat{x} = A^{T}b$$

(where $Ax = b$ is the matrix form of the system the data gives us, as set out in the definition below) will give the required best approximation.
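A small added sketch (with the two candidate lines hard-coded, as in the tables above) confirms the two error sums and previews the methodical solution:

```python
import numpy as np

x = np.array([1, 2, 3, 4, 5], dtype=float)
y = np.array([1, 2, 4, 6, 7], dtype=float)

# Sum of squared errors for the two candidate lines discussed above.
print(np.sum((y - x) ** 2))        # line y = x      -> 9.0
print(np.sum((y - (x + 1)) ** 2))  # line y = x + 1  -> 4.0

# The methodical answer: solve the normal equations A^T A c = A^T y.
A = np.column_stack([np.ones_like(x), x])      # columns: [1, x]
c = np.linalg.solve(A.T @ A, A.T @ y)          # c = [intercept, gradient]
print(c, np.sum((y - A @ c) ** 2))             # best line and its (smaller) error sum
```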

Definition
A least squares approximation is a procedure for numerically approximating the solution to a system that has no solution. The approximation generated minimises the sum of the squares of the errors. The least squares approximation may be computed by using the following theorem (Anton, H. & Rorres, C., 2005): If $A$ is an $m \times n$ matrix with linearly independent column vectors, then for every $m \times 1$ matrix $b$, the linear system $Ax = b$ has a unique least squares solution. This solution is given by:

$$\hat{x} = (A^{T}A)^{-1}A^{T}b$$

(which is a rearranged form of $A^{T}A\hat{x} = A^{T}b$). If we wish to think of the least squares method in terms of finding the vector in a given subspace that is closest to a given vector, we come to the conclusion that any vector $x$ that is a least squares solution of $Ax = b$ must satisfy $Ax = \mathrm{proj}_{W}\,b$, where $W$ is the column space of $A$, since the shortest distance between the given vector and the subspace is the orthogonal distance between them.
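Here is an added sketch of the theorem in code (the matrix and right-hand side are illustrative only). The formula is applied by solving the normal equations rather than forming the inverse explicitly, and the answer is cross-checked against NumPy's built-in least squares routine:

```python
import numpy as np

def least_squares(A, b):
    """Unique least squares solution x_hat = (A^T A)^(-1) A^T b,
    assuming the columns of A are linearly independent."""
    return np.linalg.solve(A.T @ A, A.T @ b)   # solve, rather than invert explicitly

A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0, 2.0])

x_hat = least_squares(A, b)
print(x_hat)                                   # best-fit [intercept, gradient]
print(np.linalg.lstsq(A, b, rcond=None)[0])    # same answer from the library routine
```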

Example: Line of Best Fit


(This example is based on Carter, T., 2000.) Suppose we have the set of data contained in the table:

x    1   2   3   4   5
y    2   3   7   8   9

Then we can express a system of equations based on gradient-intercept form ($y = c_0 + c_1 x$) such that it is true for all of the points in the data:

$$c_0 + c_1(1) = 2, \quad c_0 + c_1(2) = 3, \quad c_0 + c_1(3) = 7, \quad c_0 + c_1(4) = 8, \quad c_0 + c_1(5) = 9$$

Since it is very unlikely that there will be a $c_0$ and a $c_1$ that will fit all the equations simultaneously, we will use the least squares method to find the best approximation. First we will express the system in matrix format (i.e. $Ax = b$):

$$A = \begin{bmatrix} 1 & 1 \\ 1 & 2 \\ 1 & 3 \\ 1 & 4 \\ 1 & 5 \end{bmatrix}, \qquad x = \begin{bmatrix} c_0 \\ c_1 \end{bmatrix}, \qquad b = \begin{bmatrix} 2 \\ 3 \\ 7 \\ 8 \\ 9 \end{bmatrix}$$

Now to find the best approximation:

$$A^{T}A = \begin{bmatrix} 1 & 1 & 1 & 1 & 1 \\ 1 & 2 & 3 & 4 & 5 \end{bmatrix}\begin{bmatrix} 1 & 1 \\ 1 & 2 \\ 1 & 3 \\ 1 & 4 \\ 1 & 5 \end{bmatrix} = \begin{bmatrix} 5 & 15 \\ 15 & 55 \end{bmatrix}, \qquad A^{T}b = \begin{bmatrix} 1 & 1 & 1 & 1 & 1 \\ 1 & 2 & 3 & 4 & 5 \end{bmatrix}\begin{bmatrix} 2 \\ 3 \\ 7 \\ 8 \\ 9 \end{bmatrix} = \begin{bmatrix} 29 \\ 106 \end{bmatrix}$$

Now this gives the augmented matrix for $A^{T}A\hat{x} = A^{T}b$, which we reduce by Gauss-Jordan elimination:

$$\left[\begin{array}{cc|c} 5 & 15 & 29 \\ 15 & 55 & 106 \end{array}\right] \to \left[\begin{array}{cc|c} 5 & 15 & 29 \\ 0 & 10 & 19 \end{array}\right] \to \left[\begin{array}{cc|c} 1 & 0 & 0.1 \\ 0 & 1 & 1.9 \end{array}\right]$$

So the best approximation for $c_0$ is 0.1, the best approximation for $c_1$ is 1.9, and the line of best fit is given by $y = 0.1 + 1.9x$.
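A quick added check of this worked example, using NumPy's built-in least squares solver rather than hand reduction:

```python
import numpy as np

x = np.array([1, 2, 3, 4, 5], dtype=float)
y = np.array([2, 3, 7, 8, 9], dtype=float)

A = np.column_stack([np.ones_like(x), x])        # columns: [1, x]
(c0, c1), *_ = np.linalg.lstsq(A, y, rcond=None)
print(c0, c1)                                    # -> approximately 0.1 and 1.9, so y = 0.1 + 1.9x
```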

Application: Fourier Series


Least squares approximations can be used to check that the Fourier series for a function actually converges to that function. This is done by checking that the mean square error approaches 0 as $n$ approaches $\infty$.
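Here is an added sketch of this check for the square-wave example from Part 1 (the sampling grid and the range of n are arbitrary choices):

```python
import numpy as np

x = np.linspace(-np.pi, np.pi, 20001)
f = np.where(x >= 0, 1.0, 0.0)                  # the square wave from Part 1

def mean_square_error(n_terms):
    """Mean square error between f and the n_terms-term Fourier partial sum."""
    s = np.full_like(x, 0.5)
    for k in range(n_terms):
        n = 2 * k + 1
        s += (2.0 / (n * np.pi)) * np.sin(n * x)
    return np.mean((f - s) ** 2)

for n in (1, 10, 100, 1000):
    print(n, mean_square_error(n))              # decreases towards 0 as n grows
```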

The least squares approximation procedure gives us a tool for handling the problem of whether a Fourier series converges to its associated function, because we may think of the problem as finding the best possible approximation to $f$ using only functions from a specified subspace (in this case, only trigonometric functions). This allows us to use orthogonal projections onto the specified subspace to calculate the closest approximation in that subspace.
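To make the connection concrete, here is an added sketch (under the assumption that we sample the square wave on a fine uniform grid): fitting a finite sine/cosine basis by least squares reproduces, approximately, the Fourier coefficients computed analytically in Part 1.

```python
import numpy as np

x = np.linspace(-np.pi, np.pi, 4001)
f = np.where(x >= 0, 1.0, 0.0)                  # the square wave again

# Basis: 1, cos(x)..cos(Nx), sin(x)..sin(Nx); least squares projects f onto this subspace.
N = 5
columns = [np.ones_like(x)]
columns += [np.cos(n * x) for n in range(1, N + 1)]
columns += [np.sin(n * x) for n in range(1, N + 1)]
A = np.column_stack(columns)

coeffs, *_ = np.linalg.lstsq(A, f, rcond=None)
a0, a, b = coeffs[0], coeffs[1:N + 1], coeffs[N + 1:]
print(round(a0, 3))             # approximately 0.5
print(np.round(a, 3))           # approximately [0, 0, 0, 0, 0]
print(np.round(b, 3))           # approximately [0.637, 0, 0.212, 0, 0.127], i.e. 2/(n*pi) for odd n
```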

Conclusion
The least squares approximation procedure allows us to methodically find a good approximation to an unsolvable system of equations. The applications of this are wide and varied, including allowing us to work with empirical data, to reduce the effects of interference on an analogue signal, to know when we have found the minimum-error solution, and giving us tools with which we can work on other theoretical problems (such as the convergence of Fourier series).

Bibliography
Anton, H. & Rorres, C. (2005) Elementary Linear Algebra (Applications Version) (9th edition), USA, John Wiley & Sons, Inc.

Carter, T. (2000) 8. Least Squares Approximation. Retrieved 23/09/2010 from: http://ceee.rice.edu/Books/LA/leastsq/index.html

Fourier series (n.d.) Retrieved 23/09/2010 from: http://en.wikipedia.org/wiki/Fourier_series

Fourier Series (n.d.) Retrieved 23/09/2010 from: http://www.physics.miami.edu/~nearing/mathmethods/fourier_series.pdf

Least squares (n.d.) Retrieved 23/09/2010 from: http://en.wikipedia.org/wiki/Least_squares

Lecture 2.4: Least squares approximations (n.d.) Retrieved 23/09/2010 from: http://dmpeli.mcmaster.ca/Matlab/Math4Q3/NumMethods/Lecture2-4.html

Niu, R. (2006) Fourier Series and Their Applications. Retrieved 23/09/2010 from: http://ocw.mit.edu/courses/mathematics/18-100c-analysis-i-spring-2006/projects/niu.pdf

Stewart, J. (2008) Calculus (6th edition), online appendix on Fourier series. Retrieved 23/09/2010 from: http://www.stewartcalculus.com/data/CALCULUS%206E/upfiles/topics/6e_at_01_fs_stu.pdf

Stewart, J. (2008) Calculus (6th edition), USA, Thompson Higher Education.

Walker, J. (n.d.) Fourier Series. Retrieved 23/09/2010 from: http://www.uwec.edu/walkerjs/media/fseries.pdf

Weisstein, E. W. (2010) Dini's Test. Retrieved 23/09/2010 from: http://mathworld.wolfram.com/DinisTest.html

Weisstein, E. W. (2010) Fourier Series. Retrieved 23/09/2010 from: http://mathworld.wolfram.com/FourierSeries.html
