2.3E: Exercises - Mathematics

Exercise \(\PageIndex{1}\)

For the following exercises, the given functions represent the position of a particle traveling along a horizontal line.

a. Find the velocity and acceleration functions.

b. Determine the time intervals when the object is slowing down or speeding up.

1) \(s(t)=2t^3−3t^2−12t+8\)

2) \(s(t)=2t^3−15t^2+36t−10\)

2a. \(v(t)=6t^2−30t+36,\; a(t)=12t−30\)

b. speeds up \((2,2.5)∪(3,∞)\), slows down \((0,2)∪(2.5,3)\)
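As a quick sanity check (a sketch added here, not part of the original exercise set), the answer for the second position function can be verified numerically; the object speeds up exactly when \(v(t)\) and \(a(t)\) share a sign:

```python
# Check of the answer for s(t) = 2t^3 - 15t^2 + 36t - 10.
def s(t): return 2*t**3 - 15*t**2 + 36*t - 10
def v(t): return 6*t**2 - 30*t + 36   # s'(t)
def a(t): return 12*t - 30            # v'(t)

# The particle speeds up when v and a share a sign, slows down otherwise.
def speeding_up(t):
    return v(t) * a(t) > 0

print(v(2), v(3), a(2.5))   # zeros of v at t = 2, 3; zero of a at t = 2.5
print(speeding_up(2.25))    # inside (2, 2.5): speeding up
print(speeding_up(1.0))     # inside (0, 2): slowing down
```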

Exercise \(\PageIndex{2}\)

A rocket is fired vertically upward from the ground. The distance \(s\) in feet that the rocket travels from the ground after \(t\) seconds is given by \(s(t)=−16t^2+560t\).

a. Find the velocity of the rocket 3 seconds after being fired.

b. Find the acceleration of the rocket 3 seconds after being fired.

a. 464 ft/s

b. −32 ft/s^2

Exercise \(\PageIndex{3}\)

A ball is thrown downward with a speed of 8 ft/s from the top of a 64-foot-tall building. After \(t\) seconds, its height above the ground is given by \(s(t)=−16t^2−8t+64\).

a. Determine how long it takes for the ball to hit the ground.

b. Determine the velocity of the ball when it hits the ground.

Under Construction

Exercise \(\PageIndex{4}\)

The position function \(s(t)=t^2−3t−4\) represents the position of the back of a car backing out of a driveway and then driving in a straight line, where \(s\) is in feet and \(t\) is in seconds. In this case, \(s(t)=0\) represents the time at which the back of the car is at the garage door, so \(s(0)=−4\) is the starting position of the car, 4 feet inside the garage.

a. Determine the velocity of the car when \(s(t)=0\).

b. Determine the velocity of the car when \(s(t)=14\).

a. 5 ft/s

b. 9 ft/s

Exercise \(\PageIndex{5}\)

The position of a hummingbird flying along a straight line in \(t\) seconds is given by \(s(t)=3t^3−7t\) meters.

a. Determine the velocity of the bird at \(t=1\) sec.

b. Determine the acceleration of the bird at \(t=1\) sec.

c. Determine the acceleration of the bird when the velocity equals 0.

Under Construction

Exercise \(\PageIndex{6}\)

A potato is launched vertically upward with an initial velocity of 100 ft/s from a potato gun at the top of an 85-foot-tall building. The distance in feet that the potato travels from the ground after \(t\) seconds is given by \(s(t)=−16t^2+100t+85\).

a. Find the velocity of the potato after \(0.5\) s and \(5.75\) s.

b. Find the speed of the potato at 0.5 s and 5.75 s.

c. Determine when the potato reaches its maximum height.

d. Find the acceleration of the potato at 0.5 s and 1.5 s.

e. Determine how long the potato is in the air.

f. Determine the velocity of the potato upon hitting the ground.

a. 84 ft/s, −84 ft/s

b. 84 ft/s at both times

c. \(\frac{25}{8}\) s

d. \(−32\) ft/s^2 in both cases

e. \(\frac{1}{8}(25+\sqrt{965})\) s

f. \(−4\sqrt{965}\) ft/s
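These answers can be checked with a short script (a sketch added here, not part of the original text):

```python
import math

# Numerical check of the potato answers.
def s(t): return -16*t**2 + 100*t + 85
def v(t): return -32*t + 100          # s'(t); a(t) = -32 everywhere

print(v(0.5), v(5.75))                # 84.0 -84.0  (part a)
print(abs(v(0.5)), abs(v(5.75)))      # speed: 84.0 at both times (part b)

t_max = 100 / 32                      # v = 0  ->  t = 25/8 s (part c)
t_land = (25 + math.sqrt(965)) / 8    # positive root of s(t) = 0 (part e)
print(t_max, t_land)
print(s(t_land))                      # ~0: the potato is on the ground
print(v(t_land))                      # part f: equals -4*sqrt(965)
```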

Exercise \(\PageIndex{7}\)

The position function \(s(t)=t^3−8t\) gives the position in miles of a freight train, where east is the positive direction and \(t\) is measured in hours.

a. Determine the direction the train is traveling when \(s(t)=0\).

b. Determine the direction the train is traveling when \(a(t)=0\).

c. Determine the time intervals when the train is slowing down or speeding up.

Under Construction

Exercise \(\PageIndex{8}\)

The following graph shows the position \(y=s(t)\) of an object moving along a straight line.

a. Use the graph of the position function to determine the time intervals when the velocity is positive, negative, or zero.

b. Sketch the graph of the velocity function.

c. Use the graph of the velocity function to determine the time intervals when the acceleration is positive, negative, or zero.

d. Determine the time intervals when the object is speeding up or slowing down.

a. Velocity is positive on \((0,1.5)∪(6,7)\), negative on \((1.5,2)∪(5,6)\), and zero on \((2,5)\)


Showing that $int_0^1 log(sin pi x)dx=-log2$

I need help with a textbook exercise (Stein's Complex Analysis, Chapter 3, Exercise 9). This exercise requires me to show that $\int_0^1 \log(\sin \pi x)\,dx=-\log 2$. A hint is given as "Use the contour shown in Figure 9."

Since this is an exercise from Chapter 3, I think I should use the residue formula or something like that. But the function $f(x)=\log(\sin \pi x)$ becomes singular at $x=0$ and $x=1$, which makes the contour illegal for the residue theorem. Can anyone give me a further hint on this problem? Many thanks in advance!

P.S. This is my first time on Math Stack Exchange. If you find my post ambiguous, let me know.
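(Editorial note: before attacking the contour integral, the identity is easy to sanity-check numerically. The midpoint rule is used below because its sample points stay away from the integrable singularities at the endpoints; this is only a check, not the contour proof the hint asks for.)

```python
import math

# Midpoint-rule check of the identity: integral of log(sin(pi x)) over [0,1].
N = 100_000
h = 1.0 / N
approx = sum(math.log(math.sin(math.pi * (k + 0.5) * h)) for k in range(N)) * h
print(approx, -math.log(2))   # both ~ -0.693147
```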


Answers to exercises will only be provided during class time. If you cannot make it to class, you will need to see me during consultation times and we will work through the exercises together. (When you see me during consultation times, I expect you to be prepared. I will never merely provide answers to exercises. Instead, I want to see a good faith effort on your part, in which case I will be more than happy to help you work through the exercises.)

1. Prove that the sample average \(\bar{Y}\) is an unbiased estimator for the population mean.
2. Prove that in the linear model \(Y_i = \mu + \varepsilon_i\) the ordinary least squares estimator of \(\mu\) is equal to the sample average. Mathematically, minimize the sum of squares \(\sum_{i=1}^n (Y_i - \hat{\mu})^2\) and show that the minimum is obtained by setting \(\hat{\mu} = \bar{Y}\).
3. Is there a difference between an estimator and an estimate?
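A small simulation can illustrate the first two claims before proving them. The sketch below (with made-up parameters \(\mu = 5\), \(\sigma = 2\)) checks that the average of many sample means is close to \(\mu\) and that the sum of squares is minimized at the sample average:

```python
import random

random.seed(0)
mu, sigma, n, reps = 5.0, 2.0, 20, 10_000

# 1. Unbiasedness: the average of many sample means is close to mu.
means = []
for _ in range(reps):
    sample = [random.gauss(mu, sigma) for _ in range(n)]
    means.append(sum(sample) / n)
print(sum(means) / reps)   # ~ 5.0

# 2. Least squares: sum of (Y_i - m)^2 is smallest at m = sample average.
sample = [random.gauss(mu, sigma) for _ in range(n)]
ybar = sum(sample) / n
ssr = lambda m: sum((y - m) ** 2 for y in sample)
print(ssr(ybar) < ssr(ybar + 0.1) and ssr(ybar) < ssr(ybar - 0.1))  # True
```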

We will use this first practice session to become familiar with Stata.

I will use this exercise to teach you some basic tricks you should know about Stata. For Stata help and support I can highly recommend the following two sources:

The Social Science Computing Cooperative at the University of Wisconsin provides excellent support for Stata beginners. Check out their website "Stata for Students":

Also, check out their website "Stata for Researchers":

This website should be your first port of call in all things Stata.

Furthermore, the UCLA Institute for Digital Research and Education provides fantastic resources for people who are interested in learning Stata:

Feel free to use these links throughout the semester to improve your Stata skills!

Let \(Y_i \sim \text{iid}(\mu, \sigma^2)\). You have learnt in the lecture that \(\hat{\mu}_1 := \bar{Y}_n\) is an unbiased and consistent estimator for the population mean \(\mu\). Are the following estimators also unbiased or consistent for \(\mu\)? Discuss!

2. \(\hat{\mu}_3 := \bar{Y}_n + 3/n\)
3. \(\hat{\mu}_4 := (Y_1 + Y_2 + Y_3 + Y_4 + Y_5)/5\)

Excerpt from the website of the Australian Bureau of Statistics:

In a sample of 1,000 randomly selected Australians, the average numeracy score was 312 and the sample standard deviation was 41. Construct a 95% confidence interval for the population mean of the numeracy score.
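A minimal sketch of the computation (assuming the usual large-sample normal critical value 1.96):

```python
import math

# 95% CI for the mean numeracy score: xbar +/- 1.96 * s / sqrt(n).
n, xbar, s = 1000, 312.0, 41.0
se = s / math.sqrt(n)                      # standard error of the mean
lo, hi = xbar - 1.96 * se, xbar + 1.96 * se
print(round(lo, 2), round(hi, 2))          # 309.46 314.54
```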

Exercise 3.3 parts a and b Exercise 3.4.

1. Continue working through the "Stata for Researchers" website (as started last week).
2. Empirical Exercise E4.2 parts a and b.

Consider the following linear model for heights:

\(Y_i = \beta_0 + \beta_1 X_i + u_i\)

where \(Y_i\) is the height of person \(i\) and \(X_i\) is a gender dummy variable that takes on the value 1 if person \(i\) is male and zero otherwise.

1. In that model, what does \(\beta_0\) capture? What does \(\beta_0 + \beta_1\) capture?
2. Define and derive (mathematically) the OLS estimators of \(\beta_0\) and \(\beta_1\).
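One way to see what the OLS estimators do here: with a single dummy regressor, the fitted intercept equals the female group mean and the intercept plus the slope equals the male group mean. The sketch below checks this on made-up heights:

```python
# With a single dummy regressor, OLS reproduces the group means:
# b0 = mean height for X=0 (female), b0 + b1 = mean height for X=1 (male).
# Heights are made up for illustration.
X = [0, 0, 0, 1, 1, 1]
Y = [160.0, 165.0, 170.0, 175.0, 180.0, 185.0]

n = len(X)
xbar = sum(X) / n
ybar = sum(Y) / n
b1 = (sum((x - xbar) * (y - ybar) for x, y in zip(X, Y))
      / sum((x - xbar) ** 2 for x in X))
b0 = ybar - b1 * xbar

mean_f = sum(y for x, y in zip(X, Y) if x == 0) / 3
mean_m = sum(y for x, y in zip(X, Y) if x == 1) / 3
print(b0, b1)                           # 165.0 15.0
print(b0 == mean_f, b0 + b1 == mean_m)  # the OLS fit is the group means
```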

Empirical Exercise E3.1 part d.

The following Stata do-file is a solution to this exercise. Please feel free to use this as the starting point for all your future do-files. Just copy and paste it into your Stata do-file editor and save it as a new do-file. Since it contains the answer to Empirical Exercise 4.4, I gave it the name "E4_4.do", but you can choose whatever name you want.

What's important here is that you need to customize the code below in one place:

• Work directory: This is the location on your computer where you access and store all your files. This includes the textbook's data files (the files with a .dta suffix), your self-written Stata do-files, as well as the log-files that are created by your do-files. You choose the work directory; it will likely be different on different computers. For example, on my office desktop computer I created a work directory called /Users/juergen/EMET2008/Stata .

For the code below to work, you need to keep ALL files in the same work directory! Again, this includes your textbook's data files as well as the Stata do-files and log-files that you create.

Derive the bias from omitted variables.
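The omitted-variable-bias formula can be illustrated by simulation. In the sketch below the outcome is generated without an error term, so the identity "short slope = \(\beta_1 + \beta_2 \delta\)" (with \(\delta\) the slope of the omitted regressor on the included one) holds exactly; all numbers are made up:

```python
import random

# Omitted-variable bias: y = b0 + b1*x1 + b2*x2 (no noise term, so the
# algebra is exact). Regressing y on x1 alone yields
#   slope_short = b1 + b2 * delta,  where delta = slope of x2 on x1.
random.seed(1)
b0, b1, b2 = 1.0, 2.0, 3.0
x1 = [random.gauss(0, 1) for _ in range(500)]
x2 = [0.5 * a + random.gauss(0, 1) for a in x1]   # x2 correlated with x1
y = [b0 + b1 * a + b2 * b for a, b in zip(x1, x2)]

def slope(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))

short = slope(x1, y)    # biased "short" regression of y on x1 alone
delta = slope(x1, x2)   # slope of the omitted regressor on x1
print(short, b1 + b2 * delta)   # equal: the bias is exactly b2 * delta
```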

In a recent applied econometrics research project, I have been interested in the causal effect of academic fraud on labor market outcomes. The broad research question is: Do people who commit academic fraud (at university) benefit significantly from it? Sounds like a straightforward research question, but answering it is quite challenging econometrically.

Let's say the model looks like

where \(Y_i\) are weekly earnings (full time), \(Fraud_i\) is a dummy variable that is equal to one if a person reported that s/he committed academic fraud during university and zero otherwise. (All other right-hand-side variables are self-explanatory.)

If I run this regression and obtain the estimate \(\hat{\beta}_1\) for \(\beta_1\), can I interpret this as the causal effect of academic fraud on earnings? Discuss!

In EMET2007 you (hopefully!) have learned how to test for homoskedasticity versus heteroskedasticity. How would you do this with Stata? (Use the Growth data set from the previous exercise to illustrate the test.) If you indeed find that the data is heteroskedastic, how would you correct for it with Stata?

Consider the simple linear model \(Y_i = \beta_0 + \beta_1 X_i + u_i\).

Mathematically define the OLS estimator and prove that it is inconsistent under endogeneity.

Mathematically define the TSLS estimator and prove that it is consistent under endogeneity.

Which of the two estimators is consistent under exogeneity?
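A simulation sketch of the contrast (simple IV with a single instrument; all data made up): OLS is inconsistent when the regressor is correlated with the error, while the sample analogue of \(\text{cov}(Z,Y)/\text{cov}(Z,X)\) recovers the true slope:

```python
import random

# Endogeneity sketch: x is correlated with the error u; z is a valid
# instrument (relevant and exogenous). OLS is biased; simple IV is not.
random.seed(2)
beta0, beta1, n = 1.0, 2.0, 20_000
z = [random.gauss(0, 1) for _ in range(n)]
u = [random.gauss(0, 1) for _ in range(n)]
x = [zi + ui for zi, ui in zip(z, u)]             # endogenous: corr(x,u) > 0
y = [beta0 + beta1 * xi + ui for xi, ui in zip(x, u)]

def cov(a, b):
    ma, mb = sum(a) / n, sum(b) / n
    return sum((p - ma) * (q - mb) for p, q in zip(a, b)) / n

ols = cov(x, y) / cov(x, x)   # plim = beta1 + cov(x,u)/var(x) = 2.5 here
iv = cov(z, y) / cov(z, x)    # plim = beta1 = 2
print(ols, iv)
```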

Research question: Do girls who attend girls' schools do better in math than girls who attend coed schools? I give you a data set that includes the following variables:

• score: score in a standardized math test
• girlshs: dummy variable which is equal to 1 if a person attended girls' school or zero otherwise
• feduc: father's education
• meduc: mother's education
• hhinc: household income
1. You run an OLS estimation of score on girlshs and all the other variables. Will your OLS estimate of the coefficient on girlshs capture the causal effect of girls' school on math score? If not, why not?
2. What would be a good instrumental variable for girlshs?

Note: this exercise is based on Wooldridge, Introductory Econometrics, A Modern Approach, 5th edition, chapter 15.

Cool things can be done with randomized control trials. Here I expose you to the work of two recent economics papers published in a top field journal.

We will read and discuss the paper on the effects of home computer use on academic achievement of school children (written by Fairlie and Robinson).

We will read and discuss the paper on the effects of dropping schools by helicopter on rural villages in Afghanistan (written by Burde and Linden).

We will review the midterm exam. In particular: Q1, Q2 and Q5. (The other two questions are easy to answer if you have read the papers.)

Empirical Exercise E11.1 (Stock and Watson book)

Maximum likelihood estimation of probit and logit coefficients.

1. Define the maximum likelihood estimator.
2. Derive the maximum likelihood estimator.
3. Discuss statistical inference for the probit and logit coefficients.
4. Discuss consistency of the probit and logit estimators.

Note: In contrast to the linear probability model (which is a linear model that can be estimated straightforwardly by OLS) the probit and logit models are non-linear (remember that S-shaped curve from the lecture?). Non-linear models are considerably more difficult to estimate. In this problem solving session I will try to explain to you the principle idea and math of maximum likelihood estimation of probit and logit models. In the end, the estimation will need to be done by computers. Luckily, Stata offers a nice set of commands to help out.
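As a minimal illustration of the principle (not Stata, and not from the course materials): a two-parameter logit fitted by Newton's method on made-up data:

```python
import math

# From-scratch ML estimation of a logit model by Newton's method
# (two parameters, made-up data; illustrates the principle only).
x = [-2, -1, -1, 0, 0, 1, 1, 2]
y = [0, 0, 1, 0, 1, 0, 1, 1]

def p(b0, b1, xi):   # P(y=1|x) = 1 / (1 + exp(-(b0 + b1*x))), the S-curve
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * xi)))

b0 = b1 = 0.0
for _ in range(25):  # Newton-Raphson on the log-likelihood
    g0 = sum(yi - p(b0, b1, xi) for xi, yi in zip(x, y))          # score
    g1 = sum(xi * (yi - p(b0, b1, xi)) for xi, yi in zip(x, y))
    w = [p(b0, b1, xi) * (1 - p(b0, b1, xi)) for xi in x]
    h00 = -sum(w)                                                  # Hessian
    h01 = -sum(wi * xi for wi, xi in zip(w, x))
    h11 = -sum(wi * xi * xi for wi, xi in zip(w, x))
    det = h00 * h11 - h01 * h01
    b0 -= (h11 * g0 - h01 * g1) / det    # (b0, b1) -= H^(-1) g
    b1 -= (h00 * g1 - h01 * g0) / det
print(b0, b1)   # b1 > 0: higher x raises P(y = 1)
```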

Empirical Exercise E11.2 (Stock and Watson book)

We will briefly revisit last week's problem solving session to summarize ML estimation of probit and logit models.

Revisit Empirical Exercise E11.2 (Stock and Watson book)

Empirical Exercise E10.1 (Stock and Watson book)

Regress lnvio on shall separately for the years 1977 and 1999. What is the causal effect?

Run a pooled regression across all years.

Can you think of an unobserved variable that varies by state but not across time? How about one that varies across time but not by state?

Reshape your data from long format to wide format. Use the reshaped data to create differenced variables (between 1999 and 1977) for lnvio and shall .

Run a regression of the differences. What is the causal effect? How does it compare to part (a)? Why should the estimate be different theoretically?

For the rest of this exercise, reshape your data back into long format. (Simply reload the original data set.)

Run an (n-1)-binary regressors estimation of lnvio on shall .

Run a fixed effects estimation of lnvio on shall . Do it in two different ways:

1. Hard way: demean the variables yourself and regress demeaned variables on each other.
2. Lazy way: use Stata's inbuilt fixed effect estimation command.

How do the results differ to part (f)?
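The logic of the "hard way" can be sketched outside Stata (made-up data with no error term, so the results are exact): entity demeaning removes the fixed effects that bias pooled OLS:

```python
# Fixed-effects "hard way": with entity effects correlated with x,
# pooled OLS is biased, but the within (demeaning) estimator recovers beta.
# Data are made up; y = alpha_i + 2*x with no noise.
beta = 2.0
panel = {  # entity -> (alpha_i, x observations)
    "A": (10.0, [1.0, 2.0, 3.0]),
    "B": (0.0,  [4.0, 5.0, 6.0]),   # high-x entity has low alpha_i
}

xs, ys, xd, yd = [], [], [], []
for alpha, x_list in panel.values():
    y_list = [alpha + beta * x for x in x_list]
    mx = sum(x_list) / len(x_list)
    my = sum(y_list) / len(y_list)
    xs += x_list; ys += y_list
    xd += [x - mx for x in x_list]   # entity-demeaned x
    yd += [y - my for y in y_list]   # entity-demeaned y

def slope(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    return (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))

print(slope(xs, ys))   # pooled OLS: pulled away from 2 by the alpha_i
print(slope(xd, yd))   # within estimator: exactly 2.0
```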

Add the explanatory variables incarc_rate , density , avginc , pop , pb1064 , pw1064 , and pm1029 to the estimation.

Now also control for time fixed effects. Do it in three different (yet equivalent) ways:

1. Entity demeaning with (T-1)-binary time indicators
2. Time demeaning with (n-1)-binary entity indicators
3. (T-1)-binary time indicators and (n-1)-binary entity indicators

Redo the main estimation using the logarithms of rob and mur instead of vio as outcome variables. How do your findings change?

Define and derive the fixed effect estimator.

Continue working on Empirical Exercise E10.1 (Stock and Watson book), see previous week.

No problem solving this week.

Empirical Exercise E15.1 (Stock and Watson book).

(Note: you need to import the Excel spreadsheet that holds the data and save the imported data as a dta-file before you start working.)

Syllabus matching grid
1 Algebra
1.1: The modulus function
1.2: Division of polynomials
1.3: The remainder theorem
1.4: The factor theorem
2 Logarithms and exponential functions
2.1: Continuous exponential growth and decay
2.2: The logarithmic function
2.3: e^x and logarithms to base e
2.4: Equations and inequalities using logarithms
2.5: Using logarithms to reduce equations to linear form
3 Trigonometry
3.1: Secant, cosecant, and cotangent
3.2: Further trigonometric identities
3.4: Double angle formulae
3.5: Expressing a sin θ + b cos θ in the form R sin(θ ± α) or R cos(θ ± α)
Review exercise A - Pure 2
Review exercise A - Pure 3
Maths in real-life: Predicting tidal behaviour
4 Differentiation
4.1: Differentiating the exponential function
4.2: Differentiating the natural logarithmic function
4.3: Differentiating products
4.4: Differentiating quotients
4.5: Differentiating sin x, cos x, and tan x
4.6: Implicit differentiation
4.7: Parametric differentiation
5 Integration
5.1: Integration of e^(ax + b)
5.2: Integration of 1/(ax + b)
5.3: Integration of sin(ax + b), cos(ax + b), sec²(ax + b)
5.4: Extending integration of trigonometric functions
5.5: Numerical integration using the trapezium rule
6 Numerical solution of equations
6.1: Finding approximate roots by change of sign or graphical methods
6.2: Finding roots using iterative relationships
6.3: Convergence behaviour of iterative functions
Review exercise B - Pure 2
Review exercise B - Pure 3

Maths in real-life: Nature of Mathematics
7 Further algebra
7.1: Partial fractions
7.2: Binomial expansions of the form (1 + x)^n when n is not a positive integer
7.3: Binomial expansions of the form (a + x)^n where n is not a positive integer
7.4: Binomial expansions and partial fractions
8 Further integration
8.1: Integration using partial fractions
8.2: Integration of f′(x)/f(x)
8.3: Integration by parts
8.4: Integration using substitution
Review exercise C - Pure 3
9 Vectors
9.1: The equation of a straight line
9.2: Intersecting lines
9.3: The angle between two straight lines
9.4: The equation of a plane
9.5: Configurations of a line and a plane
9.6: Configurations of two planes
9.7: The distance from a point to a plane or line
10 Differential equations
10.1: Forming simple differential equations (DEs)
10.2: Solving first-order differential equations with separable variables
10.3: Finding particular solutions to differential equations
10.4: Modelling with differential equations
11 Complex numbers
11.1: Introducing complex numbers
11.2: Calculating with complex numbers
11.3: Solving equations involving complex numbers
11.4: Representing complex numbers geometrically
11.5: Polar form and exponential form
11.6: Loci in the Argand diagram
Review exercise D - Pure 3
Maths in real-life: Electrifying, magnetic and damp: how complex mathematics makes life simpler
Exam-style paper A - Pure 2
Exam-style paper B - Pure 2
Exam-style paper C - Pure 3
Exam-style paper D - Pure 3
Glossary of terms
Index

Classwork Series and Exercises : Probability, Indices and Factorisation

This is the measure of the likelihood of a required outcome happening. It is the "chance" of an event happening. For example, a student might ask himself while preparing for an exam, "what is the probability that I will score a hundred percent?" This means that the student is asking himself what chance he has of scoring 100 out of 100.

Probability is often represented as a fraction: Probability = (number of required outcomes) / (number of possible outcomes)

For the example of the student mentioned earlier, the boy has just 1 required outcome, which is to score 100%, whilst the possible outcomes are 2: he either scores 100% or he does not.

So the probability of scoring 100% = ½

1. When an event is certain to happen, the probability is 1, e.g. if Ade is 5 years old this year, the probability that he would be 6 years old next year is 1.

2. When an event is certain not to happen, the probability is zero, e.g. if Ade is 5 years old now, the probability that he would be 8 years old next year is 0.

So if p is the probability that an event would happen and q is the probability that it would not happen, then p + q = 1, i.e. q = 1 − p. Thus if p is the probability of occurrence of a desired outcome, then the probability that the desired outcome would not occur is 1 − p.

A die has six faces numbered 1 to 6. If the die is rolled once, find the probability of

(a) obtaining the number 6 (b) obtaining the number 10 (c) not obtaining the number 6

(d) obtaining one of the numbers 1, 2, 3, 4, 5, 6

(a) Probability of obtaining 6 = (number of 6s on the faces of the die) / (total number of faces) = 1/6

(b) Probability of obtaining 10 is 0 since there is no 10 on the face of the die

(c) Probability of not obtaining 6 = 1 − probability of obtaining 6 = 1 − 1/6 = 5/6

(d) Since the numbers on the faces of the die are 1, 2, 3, 4, 5 and 6, we can ONLY obtain one of these numbers, so the probability is 1.
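The four answers can be reproduced by enumerating the sample space (a sketch added for illustration):

```python
from fractions import Fraction

# Enumerate the sample space of one roll of a fair die.
faces = [1, 2, 3, 4, 5, 6]
prob = lambda event: Fraction(sum(1 for f in faces if event(f)), len(faces))

print(prob(lambda f: f == 6))       # (a) 1/6
print(prob(lambda f: f == 10))      # (b) 0
print(prob(lambda f: f != 6))       # (c) 5/6
print(prob(lambda f: f in faces))   # (d) 1
```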

Factorisation means writing an expression in terms of its factors. When an expression with two or more parts is being factorised, we remove all the factors common to the parts and place these factors outside the bracket leaving only the remainders in the brackets.

Example: Factorise the following

i. 3abx + 5adx = ax(3b + 5d) : a and x are common

ii. 3d^2e − 8d^2 = d^2(3e − 8) : d^2 is common

iii. −18fg − 12g = −6g(3f + 2) : −6g is common

An index (plural: indices) is a short form of writing powers of a number. Example: 5^3 = 5 × 5 × 5

The use of indices is guided by some simple laws which are given below.

Example: 4^1 × 4^3 = 4^(1+3) = 4^4

Example: 4^3 ÷ 4^3 = 4^(3−3) = 4^0 = 1

EXAMPLES : Simplify the following

a. 10^4 × 10^5 = 10^(4+5) = 10^9

c. A^(−2) ÷ B^0 = (1/A^2) ÷ 1 = 1/A^2

Guideline: 10^2 × 10^5 = 10^(2+5) = 10^7

1. A card is picked at random from a pack of 52 playing cards. What is the probability that it is a 6? (a) 1/52 (b) 1/13 (c) 3/26 (d) 1/4

Guideline: In a pack of playing cards there are 4 of every number, therefore P(6) = 4/52 = 1/13, i.e. option (b).


A place to keep quick notes about math that I keep forgetting. This is meant to be scratch notes and a cheat sheet where I write math notes before I forget them or move them somewhere else. It can and will contain errors and/or incomplete descriptions in a number of places. Use at your own risk.

1 general notes

\(\blacksquare\) A Lyapunov function is used to determine the stability of an equilibrium point. Taking this equilibrium point to be zero, suppose someone gives us a set of differential equations \(\begin{pmatrix} x'(t) \\ y'(t) \\ z'(t) \end{pmatrix} = \begin{pmatrix} f_1(x,y,z,t) \\ f_2(x,y,z,t) \\ f_3(x,y,z,t) \end{pmatrix}\) and assume \((0,0,0)\) is an equilibrium point. The question is, how to determine if it is stable or not? There are two main ways to do this. One is by linearization of the system around the origin. This means we find the Jacobian matrix, evaluate it at the origin, and check the signs of the real parts of the eigenvalues. This is the common way to do this. Another method, called Lyapunov's, is more direct. There is no linearization needed, but we need to find a function \(V(x,y,z)\), called a Lyapunov function for the system, which meets the following conditions:

1. \(V(x,y,z)\) is a continuously differentiable function in \(\mathbb{R}^3\) and \(V(x,y,z) \geq 0\) (positive definite or positive semidefinite) for all \((x,y,z)\) away from the origin, or everywhere inside some fixed region around the origin. This function represents the total energy of the system (for Hamiltonian systems). Hence \(V(x,y,z)\) can be zero away from the origin, but it can never be negative.
2. \(V(0,0,0) = 0\). This says the system has no energy when it is at the equilibrium point (rest state).
3. The orbital derivative \(\frac{dV}{dt} \leq 0\) (i.e. negative definite or negative semidefinite) for all \((x,y,z)\), or inside some fixed region around the origin. The orbital derivative is the same as \(\frac{dV}{dt}\) along any solution trajectory. This condition says that the total energy is either constant in time (the zero case) or decreasing in time (the negative definite case). Both indicate that the origin is a stable equilibrium point.

If \(\frac{dV}{dt}\) is negative semidefinite then the origin is stable in the Lyapunov sense. If \(\frac{dV}{dt}\) is negative definite then the origin is an asymptotically stable equilibrium. Negative semidefinite means that when the system is perturbed away from the origin, a trajectory will remain around the origin since its energy does not increase; so it is stable. Asymptotic stability is a stronger form of stability: it means that when perturbed from the origin the solution will eventually return to the origin, since the energy is decreasing. Global stability means \(\frac{dV}{dt} \leq 0\) everywhere, not just in some closed region around the origin. Local stability means \(\frac{dV}{dt} \leq 0\) in some closed region around the origin. Global stability is stronger than local stability.

The main difficulty with this method is finding \(V(x,y,z)\). If the system is Hamiltonian, then \(V\) is the same as the total energy. Otherwise, one has to guess. Typically a quadratic form such as \(V = ax^2 + cxy + dy^2\) is used (for a system in \(x,y\)); then we try to find \(a, c, d\) which make \(V\) positive definite everywhere away from the origin and, more importantly, make \(\frac{dV}{dt} \leq 0\). If so, we say the origin is stable. Most of the problems we had start by giving us \(V\) and then ask to show it is a Lyapunov function and what kind of stability it implies.

To determine whether \(V\) is positive definite, the common way is to find the Hessian and check the signs of the eigenvalues. Another way is to find the Hessian and check the signs of the leading minors: for a \(2 \times 2\) matrix, this means the determinant is positive and the \((1,1)\) entry of the matrix is positive. A similar check applies to \(\frac{dV}{dt} \leq 0\): we find the Hessian of \(\frac{dV}{dt}\) and do the same thing, but now we check for negative eigenvalues instead.
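A numeric spot-check of the recipe on a made-up linear system (my example, not from the notes):

```python
# Spot-check on the sample system x' = -x + y, y' = -x - y with the
# candidate V(x, y) = x^2 + y^2. Analytically the orbital derivative is
#   dV/dt = 2x(-x + y) + 2y(-x - y) = -2(x^2 + y^2) <= 0,
# so the origin is asymptotically stable.

def V(x, y): return x**2 + y**2

def orbital_derivative(x, y):
    fx, fy = -x + y, -x - y        # the vector field
    return 2*x * fx + 2*y * fy     # grad(V) . f

ok = True
for i in range(-10, 11):
    for j in range(-10, 11):
        x, y = i / 5.0, j / 5.0
        if (x, y) != (0.0, 0.0):
            ok = ok and V(x, y) > 0 and orbital_derivative(x, y) < 0
print(ok)   # True on the sampled grid
```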

\(\blacksquare\) Methods to find a Green's function are

1. Fredholm theory
2. methods of images
3. separation of variables
4. Laplace transform

Reference: Wikipedia. I need to make one example and apply each of the above methods to it.

\(\blacksquare\) In solving an ODE with constant coefficients, just use the characteristic equation to find the solution.

\(\blacksquare\) In solving an ODE whose coefficients are functions of the independent variable, as in \(y''(x) + q(x) y'(x) + p(x) y(x) = 0\), first classify the type of the point \(x_0\). This means checking how \(p(x)\) and \(q(x)\) behave at \(x_0\). We are talking about the ODE here, not the solution yet.

There are 3 kinds of points: \(x_0\) can be a normal point, a regular singular point, or an irregular singular point. A normal point \(x_0\) means \(p(x)\) and \(q(x)\) are well behaved (analytic) there, so the solution has a Taylor series expansion \(y(x) = \sum_{n=0}^{\infty} a_n (x - x_0)^n\) that converges to \(y(x)\) at \(x_0\).

A regular singular point \(x_0\) means that the above test fails, but \(\lim_{x \to x_0} (x - x_0)\, q(x)\) has a convergent Taylor series, and \(\lim_{x \to x_0} (x - x_0)^2\, p(x)\) also now has a convergent Taylor series at \(x_0\). This also means the limits exist.

All this just means we can get rid of the singularity, i.e. \(x_0\) is a removable singularity. If this is the case, then the solution at \(x_0\) can be assumed to have a Frobenius series \(y(x) = \sum_{n=0}^{\infty} a_n (x - x_0)^{n+\alpha}\), where \(a_0 \neq 0\) and \(\alpha\) need not be an integer.

The third type of point is the hard one, called an irregular singular point. We can't get rid of it using the above, so we also say the ODE has an essential singularity at \(x_0\) (another fancy name for irregular singular point). What this means is that we can't approximate the solution at \(x_0\) using either a Taylor or a Frobenius series.

If the point is an irregular singular point, then use the methods of asymptotics. See Advanced Mathematical Methods for Scientists and Engineers, chapter 3. For a normal point, use \(y(x) = \sum_{n=0}^{\infty} a_n x^n\); for a regular singular point use \(y(x) = \sum_{n=0}^{\infty} a_n x^{n+r}\). Remember to solve for \(r\) first. This should give two values. If you get one repeated root, then use reduction of order to find the second solution.

\(\blacksquare\) An asymptotic series \(S(z) = c_0 + \frac{c_1}{z} + \frac{c_2}{z^2} + \cdots\) is a series expansion of \(f(z)\) which gives a good and rapid approximation for large \(z\), as long as we know when to truncate \(S(z)\) before it becomes divergent. This is the main difference between an asymptotic series expansion and a Taylor series expansion.

\(S(z)\) is used to approximate a function for large \(z\), while a Taylor (or power) series is used for local approximation, a small distance away from the point of expansion. \(S(z)\) will become divergent, hence it needs to be truncated at some \(n\) to be useful, where \(n\) is the number of terms in \(S_n(z)\). It is optimally truncated when \(n \approx |z|^2\).

\(S(z)\) has the following two important properties:

1. \(\lim_{|z| \to \infty} z^n \left( f(z) - S_n(z) \right) = 0\) for fixed \(n\).
2. \(\lim_{n \to \infty} z^n \left( f(z) - S_n(z) \right) = \infty\) for fixed \(z\).

We write \(S(z) \sim f(z)\) when \(S(z)\) is the asymptotic series expansion of \(f(z)\) for large \(z\). The most common method to find \(S(z)\) is integration by parts. At least this is what we did in the class I took.

\(\blacksquare\) For a Taylor series, the leading behavior is \(a_0\) (no controlling factor?). For a Frobenius series, the leading behavior term is \(a_0 x^{\alpha}\) and the controlling factor is \(x^{\alpha}\). For an asymptotic series, the controlling factor is assumed to be \(e^{S(x)}\) always, as proposed by Carlini (1817).

\(\blacksquare\) The method to find the leading behavior of the solution \(y(x)\) near an irregular singular point using asymptotics is called the method of dominant balance.

\(\blacksquare\) When solving \(\epsilon y'' + p(x) y' + q(x) y = 0\) for very small \(\epsilon\), use the WKB method, provided there is no boundary layer between the boundary conditions. If the ODE is non-linear, WKB can't be used; one has to use boundary layer (B.L.) analysis. Example: \(\epsilon y'' + y y' - y = 0\) with \(y(0) = 0,\; y(1) = -2\); use B.L.

\(\blacksquare\) A good exercise is to solve, say, \(\epsilon y'' + (1+x) y' + y = 0\) with \(y(0) = y(1) = 1\) using both B.L. and WKB and compare the solutions; they should come out the same: \(y \sim \frac{2}{1+x} - \exp\left( -\frac{x}{\epsilon} - \frac{x^2}{2\epsilon} \right) + O(\epsilon)\). With B.L. one has to do the matching between the outer and inner solutions. WKB is easier, but can't be used for non-linear ODEs.

\(\blacksquare\) When there is rapid oscillation over the entire domain, WKB is better. Use WKB to solve the Schrodinger equation, where \(\epsilon\) becomes a function of \(\hbar\) (Planck's constant, \(6.62606957 \times 10^{-34}\) m\(^2\)kg/s).

\(\blacksquare\) In a second order ODE with non-constant coefficients, \(y''(x) + p(x) y'(x) + q(x) y(x) = 0\), if we know one solution \(y_1(x)\), then a method called reduction of order can be used to find the second solution \(y_2(x)\). Write \(y_2(x) = u(x)\, y_1(x)\), plug this into the ODE, and solve for \(u(x)\). The final solution will be \(y(x) = c_1 y_1(x) + c_2 y_2(x)\). Now apply the initial conditions to find \(c_1, c_2\).
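A short worked example (the ODE is my choice, added for illustration):

```latex
\[
\begin{aligned}
&\text{Example: } y'' - 2y' + y = 0,\qquad y_1 = e^{x}.\\
&\text{Set } y_2 = u(x)\,e^{x}:\quad
  y_2' = (u' + u)e^{x},\qquad y_2'' = (u'' + 2u' + u)e^{x}.\\
&\text{Substituting: } (u'' + 2u' + u) - 2(u' + u) + u = u'' = 0
  \;\Rightarrow\; u = c_1 + c_2 x.\\
&\text{Taking } u = x \text{ gives } y_2 = x e^{x},\qquad
  y = c_1 e^{x} + c_2 x e^{x}.
\end{aligned}
\]
```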

\(\blacksquare\) To find a particular solution to \(y''(x) + p(x) y'(x) + q(x) y(x) = f(x)\), we can use a method called undetermined coefficients. But a better method is called variation of parameters. In this method, assume \(y_p(x) = u_1(x) y_1(x) + u_2(x) y_2(x)\), where \(y_1(x), y_2(x)\) are the two linearly independent solutions of the homogeneous ODE and \(u_1(x), u_2(x)\) are to be determined. This ends up with \(u_1(x) = -\int \frac{y_2(x) f(x)}{W} dx\) and \(u_2(x) = \int \frac{y_1(x) f(x)}{W} dx\). Remember to put the ODE in standard form first, so \(a = 1\) in \(a y''(x) + \cdots\). Here \(W\) is the Wronskian \(W = \begin{vmatrix} y_1(x) & y_2(x) \\ y_1'(x) & y_2'(x) \end{vmatrix}\).
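The formulas can be verified numerically on a sample ODE (my choice): for \(y'' + y = x\) with \(y_1 = \cos x\), \(y_2 = \sin x\), \(W = 1\), the construction reproduces the known particular solution \(x - \sin x\):

```python
import math

# Variation of parameters for y'' + y = x, with y1 = cos, y2 = sin, W = 1.
# Taking u1(0) = u2(0) = 0, the exact particular solution is x - sin(x).
def f(t): return t

def trapezoid(g, a, b, n=4000):
    h = (b - a) / n
    return h * (0.5 * g(a) + sum(g(a + k * h) for k in range(1, n)) + 0.5 * g(b))

def y_p(x):
    u1 = -trapezoid(lambda t: math.sin(t) * f(t), 0.0, x)   # -int y2*f / W
    u2 = trapezoid(lambda t: math.cos(t) * f(t), 0.0, x)    #  int y1*f / W
    return u1 * math.cos(x) + u2 * math.sin(x)

for x in (0.5, 1.0, 2.0):
    print(y_p(x), x - math.sin(x))   # the two columns agree
```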

\(\blacksquare\) Two solutions of \(y''(x) + p(x) y'(x) + q(x) y(x) = 0\) are linearly independent if \(W(x) \neq 0\), where \(W\) is the Wronskian.

\(\blacksquare\) For a second order linear ODE defined over the whole real line, the Wronskian is either always zero or never zero. This comes from Abel's formula for the Wronskian, which is \(W(x) = k \exp\left( -\int \frac{B(x)}{A(x)}\, dx \right)\) for an ODE of the form \(A(x) y'' + B(x) y' + C(x) y = 0\). Since \(\exp\left( -\int \frac{B(x)}{A(x)}\, dx \right) > 0\), the result is decided by \(k\), the constant of integration: if \(k = 0\) then \(W(x) = 0\) everywhere, else it is nonzero everywhere.

\(\blacksquare\) For a linear PDE, if the boundary conditions are time dependent, separation of variables cannot be used. Try a transform method (Laplace or Fourier) to solve the PDE.

\(\blacksquare\) If unable to invert a Laplace transform analytically, try numerical inversion or asymptotic methods. Need to find an example of this.

\(\blacksquare\) The Green function takes the homogeneous solution and the forcing function and constructs a particular solution. For PDE's, we always want a symmetric Green's function.

\(\blacksquare\) To get a symmetric Green's function for an ODE, start by converting the ODE to Sturm-Liouville form first. This way the Green's function comes out symmetric.

\(\blacksquare\) For numerical solutions of field problems, there are basically two different classes of problems: those with closed boundaries, and those with open boundaries but with initial conditions. Closed-boundary problems are elliptic problems, which can be cast in the form \(Au=f\); the others are either hyperbolic or parabolic.

\(\blacksquare\) For the numerical solution of elliptic problems, the basic layout is something like this:

Always start with a trial solution \(u(x)\) such that \(u_{N}(x)=\sum _{i=1}^{N}C_{i}\phi _{i}(x)\), where the \(C_{i}\) are the unknowns to be determined and the \(\phi _{i}\) are a set of linearly independent functions (polynomials) in \(x\).

How to determine those \(C_{i}\) comes next. Use either a residual method (Galerkin) or a variational method (Ritz). For the residual method, we form a function based on the error \(R=Au_{N}-f\). It all comes down to solving \(\int w_{i}R\,dx=0\) over the domain, for a set of weight functions \(w_{i}\).

\(\blacksquare\) Geometric probability distribution. Use it when you want an answer to the question: what is the probability that the experiment has to be done \(N\) times to finally get the outcome you are looking for, given a probability \(p\) of that outcome showing up from doing one experiment.

For example: what is the probability one has to flip a fair coin \(N\) times to get the first head? The answer is \(P(X=N)=(1-p)^{N-1}p\). For a fair coin, \(p=\frac{1}{2}\) is the probability that a head shows up from one flip. So the probability that we have to flip a coin \(10\) times to get the first head is \(P(X=10)=(1-0.5)^{9}(0.5)=0.00097\), which is very low, as expected.
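The numbers above can be reproduced in a short sketch (the function name is mine):

```python
def geometric_pmf(N, p):
    """P(X = N) = (1 - p)**(N - 1) * p: first success on trial N."""
    return (1 - p) ** (N - 1) * p

# Fair coin: probability the first head appears on flip 10.
print(geometric_pmf(10, 0.5))  # 0.0009765625, i.e. about 0.00097

# The probabilities over all N sum to 1.
print(sum(geometric_pmf(n, 0.5) for n in range(1, 200)))  # essentially 1
```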

\(\blacksquare\) To generate a random variable drawn from some distribution different from the uniform distribution, using only the uniform distribution \(U(0,1)\), do this. Let us say we want to generate a random number from the exponential distribution with mean \(\mu \).

This distribution has \(pdf(x)=\frac{1}{\mu }e^{-x/\mu }\). The first step is to find the CDF of the exponential distribution, which is known to be \(F(x)=P(X\leq x)=1-e^{-x/\mu }\).

Now find the inverse of this, which is \(F^{-1}(x)=-\mu \ln (1-x)\). Then generate a random number from the uniform distribution \(U(0,1)\). Let this value be called \(z\).

Now plug this value into \(F^{-1}(z)\). This gives a random number from the exponential distribution, which will be \(-\mu \ln (1-z)\) (obtained by taking the natural log of both sides of \(F(x)\)).

This method can be used to generate random variables from any other distribution using only \(U(0,1)\), but it requires knowing the CDF and the inverse of the CDF of the other distribution. This is called the inverse CDF method. Another method is called the rejection method.
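A minimal sketch of the inverse CDF method for the exponential case (all names are my own; the sample mean should come out close to \(\mu \)):

```python
import math
import random
import statistics

def exp_inverse_cdf(mu, u):
    """Map u ~ U(0,1) to an Exp(mean mu) draw via F^{-1}(u) = -mu*ln(1 - u)."""
    return -mu * math.log(1.0 - u)

rng = random.Random(42)
mu = 2.0
samples = [exp_inverse_cdf(mu, rng.random()) for _ in range(200_000)]
print(statistics.fmean(samples))  # close to the requested mean mu = 2.0
```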

\(\blacksquare\) Given \(u\), a r.v. from the uniform distribution over \([0,1]\), to obtain \(v\), a r.v. from the uniform distribution over \([A,B]\), the relation is \(v=A+(B-A)u\).
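A one-line check of this mapping (my own sketch; the interval \([-3,5]\) is just an example):

```python
import random

def uniform_ab(A, B, u):
    """Map u ~ U(0,1) to v ~ U(A, B) via v = A + (B - A)*u."""
    return A + (B - A) * u

rng = random.Random(1)
vs = [uniform_ab(-3.0, 5.0, rng.random()) for _ in range(10_000)]
print(min(vs), max(vs))  # everything lies inside [-3, 5]
```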

\(\blacksquare\) When solving using F.E.M., it is best to do everything using isoparametric elements (natural coordinates), then find the Jacobian of the transformation between the natural and physical coordinates to evaluate the integrals needed. For the force function, use the Gaussian quadrature method.

\(\blacksquare\) A solution to a differential equation is a function that can be expressed as a convergent series. (Cauchy; Briot and Bouquet; Picard)

\(\blacksquare\) To solve a first order ODE using an integrating factor:\[ x'(t)+p(t)x(t)=f(t) \] As long as it is linear and \(p(t),f(t)\) are integrable functions in \(t\), follow these steps.

1. Multiply the ODE by a function \(I(t)\); this is called the integrating factor.\[ I(t)x'(t)+I(t)p(t)x(t)=I(t)f(t) \]
2. We solve for \(I(t)\) such that the left side satisfies\[ \frac{d}{dt}\left( I(t)x(t)\right) =I(t)x'(t)+I(t)p(t)x(t) \]
3. Solving the above for \(I(t)\) gives\begin{align*} I'(t)x(t)+I(t)x'(t) & =I(t)x'(t)+I(t)p(t)x(t)\\ I'(t)x(t) & =I(t)p(t)x(t)\\ I'(t) & =I(t)p(t)\\ \frac{dI}{I} & =p(t)dt \end{align*}

Integrating both sides gives\begin{align*} \ln (I) & =\int p(t)dt\\ I(t) & =e^{\int p(t)dt} \end{align*}

Where \(I(t)\) is the integrating factor found above. Hence\[ x(t)=\frac{\int I(t)f(t)\,dt+C}{I(t)}\] \(\blacksquare\) A polynomial is called ill-conditioned if we make a small change to one of its coefficients and this causes a large change to one of its roots.
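A small sketch in the spirit of Wilkinson's classic example (my own construction, using \(p(x)=(x-1)(x-2)\cdots (x-10)\)): nudging the \(x^{9}\) coefficient by \(10^{-5}\) moves the root at \(x=10\) by roughly \(0.03\), thousands of times the size of the perturbation.

```python
def poly_eval(coeffs, x):
    """Horner evaluation; coeffs are listed highest power first."""
    acc = 0.0
    for c in coeffs:
        acc = acc * x + c
    return acc

def bisect(f, a, b, iters=100):
    """Simple bisection; assumes f changes sign on [a, b]."""
    for _ in range(iters):
        m = (a + b) / 2
        if f(a) * f(m) <= 0:
            b = m
        else:
            a = m
    return (a + b) / 2

# Build p(x) = (x - 1)(x - 2)...(x - 10), coefficients highest power first.
coeffs = [1.0]
for r in range(1, 11):
    coeffs = [hi - r * lo for hi, lo in zip(coeffs + [0.0], [0.0] + coeffs)]

perturbed = coeffs[:]
perturbed[1] += 1e-5    # tiny change to the x^9 coefficient

root = bisect(lambda x: poly_eval(coeffs, x), 9.5, 10.5)
root_p = bisect(lambda x: poly_eval(perturbed, x), 9.5, 10.5)
print(root, root_p, abs(root_p - root))  # the root moves by ~0.03, not ~1e-5
```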

\(\blacksquare\) To find the rank of a matrix \(A\) by hand, find the row echelon form, then count how many zero rows there are and subtract that from the number of rows, i.e. \(n\).

\(\blacksquare\) To find a basis of the column space of \(A\), find the row echelon form and pick the columns of \(A\) corresponding to the pivots; these are the basis (the linearly independent columns of \(A\)).
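Both of the last two recipes can be checked with a small hand-rolled row reduction (my own sketch; the matrix is just an example with one dependent row):

```python
from fractions import Fraction

def row_echelon(rows):
    """Gaussian elimination to row echelon form; returns (matrix, pivot_columns)."""
    M = [[Fraction(x) for x in row] for row in rows]
    pivots = []
    r = 0
    for c in range(len(M[0])):
        pivot_row = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if pivot_row is None:
            continue                      # no pivot in this column
        M[r], M[pivot_row] = M[pivot_row], M[r]
        for i in range(r + 1, len(M)):
            factor = M[i][c] / M[r][c]
            M[i] = [a - factor * b for a, b in zip(M[i], M[r])]
        pivots.append(c)
        r += 1
    return M, pivots

A = [[1, 2, 3],
     [2, 4, 6],    # twice the first row, so rank drops to 2
     [1, 0, 1]]
E, pivots = row_echelon(A)
print(len(pivots), pivots)  # rank 2; columns 0 and 1 of A give a basis
```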

\(\blacksquare\) For a symmetric matrix \(A\), its second norm is its spectral radius \(\rho (A)\), which is the largest eigenvalue of \(A\) (in absolute value).

\(\blacksquare\) The eigenvalues of the inverse of a matrix \(A\) are the inverses of the eigenvalues of \(A\).
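A quick check on a \(2\times 2\) example (my own sketch; eigenvalues via the characteristic polynomial):

```python
import math

def eig2x2(a, b, c, d):
    """Eigenvalues of [[a, b], [c, d]] from the characteristic polynomial
    (assumes they are real)."""
    tr, det = a + d, a * d - b * c
    s = math.sqrt(tr * tr - 4 * det)
    return sorted(((tr - s) / 2, (tr + s) / 2))

A = (2.0, 1.0, 1.0, 2.0)                  # eigenvalues 1 and 3
det = A[0] * A[3] - A[1] * A[2]
Ainv = (A[3] / det, -A[1] / det, -A[2] / det, A[0] / det)

print(eig2x2(*A))      # [1.0, 3.0]
print(eig2x2(*Ainv))   # [1/3, 1.0] -- the reciprocals
```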

\(\blacksquare\) If a matrix \(A\) of order \(n\times n\) has \(n\) distinct eigenvalues, then it can be diagonalized as \(A=V\Lambda V^{-1}\), where \[ \Lambda =\begin{pmatrix} \lambda _{1} & 0 & 0\\ 0 & \ddots & 0\\ 0 & 0 & \lambda _{n}\end{pmatrix} \] and \(V\) is the matrix that has the \(n\) eigenvectors as its columns.

\(\blacksquare\) \(\lim _{n\rightarrow \infty }\int _{x_{1}}^{x_{2}}f_{n}\left( x\right) dx=\int _{x_{1}}^{x_{2}}\lim _{n\rightarrow \infty }f_{n}\left( x\right) dx\) only if \(f_{n}\left( x\right) \) converges uniformly over \(\left[ x_{1},x_{2}\right] \).

\(\blacksquare\) \(A^{3}=I\) has an infinite number of solutions \(A\). Think of \(A^{3}\) as 3 rotations, each of \(120^{\circ }\), going back to where we started. Each rotation is around a straight line (the axis), and any axis will do, hence the infinite number of solutions.
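A quick check with a \(2\times 2\) rotation (my own sketch; in 2D there is only one rotation "axis", but the \(A^{3}=I\) property is easy to see):

```python
import math

def rot(theta):
    """2x2 rotation matrix through angle theta."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = rot(2 * math.pi / 3)        # rotation by 120 degrees
A3 = matmul(matmul(A, A), A)
print(A3)  # the identity matrix, up to roundoff
```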

\(\blacksquare\) How to integrate \(I=\int \frac{x^{2}\sqrt{x^{3}+1}}{x^{3}+2}\,dx\).

Let \(u=x^{3}+1\), then \(du=3x^{2}dx\) and the above becomes\[ I=\frac{1}{3}\int \frac{\sqrt{u}}{u+1}\,du \] Now let \(\sqrt{u}=\tan v\), hence \(\frac{1}{2}\frac{1}{\sqrt{u}}du=\sec ^{2}v\,dv\) and the above becomes\begin{align*} I & =\frac{1}{3}\int \frac{\tan v}{\tan ^{2}v+1}\left( 2\sqrt{u}\sec ^{2}v\right) \,dv\\ & =\frac{2}{3}\int \frac{\tan ^{2}v}{\tan ^{2}v+1}\sec ^{2}v\,dv \end{align*}

But \(\tan ^{2}v+1=\sec ^{2}v\), hence\begin{align*} I & =\frac{2}{3}\int \tan ^{2}v\,dv\\ & =\frac{2}{3}\left( \tan v-v\right) \end{align*}

Substituting back\[ I=\frac{2}{3}\left( \sqrt{u}-\arctan \sqrt{u}\right) \] Substituting back again\[ I=\frac{2}{3}\left( \sqrt{x^{3}+1}-\arctan \sqrt{x^{3}+1}\right) \]

\(\blacksquare\) (added Nov. 4, 2015) Made a small diagram to help me remember the long division terms used.

\(\blacksquare\) If a linear ODE is equidimensional, as in \(a_{n}x^{n}y^{(n)}+a_{n-1}x^{n-1}y^{(n-1)}+\dots \), for example \(x^{2}y''-2y=0\), then use the ansatz \(y=x^{r}\). This gives an equation in \(r\) only. Solve for \(r\) and obtain \(y_{1}=x^{r_{1}},y_{2}=x^{r_{2}}\), and the solution will be \[ y=c_{1}y_{1}+c_{2}y_{2}\] For example, for the above ODE the solution is \(c_{1}x^{2}+\frac{c_{2}}{x}\). This ansatz works only if the ODE is equidimensional. So we can't use it on \(xy''+y=0\) for example.

If \(r\) is a multiple root, use \(x^{r},x^{r}\log (x),x^{r}(\log (x))^{2}\dots \) as solutions.

\(\blacksquare\) For \(x^{i}\), where \(i=\sqrt{-1}\), write \(x=e^{\log x}\), hence \(x^{i}=e^{i\log x}=\cos (\log x)+i\,\sin (\log x)\).

\(\blacksquare\) Some integral tricks: for \(\int \sqrt{a^{2}-x^{2}}\,dx\) use \(x=a\sin \theta \). For \(\int \sqrt{a^{2}+x^{2}}\,dx\) use \(x=a\tan \theta \), and for \(\int \sqrt{x^{2}-a^{2}}\,dx\) use \(x=a\sec \theta \).

\(\blacksquare\) \(y''+ax^{n}y^{m}=0\) is called the Emden-Fowler form.

\(\blacksquare\) For a second order ODE boundary value problem with an eigenvalue (Sturm-Liouville), remember that having two boundary conditions is not enough to fully solve it.

One boundary condition is used to find the first constant of integration, and the second boundary condition is used to find the eigenvalues.

We still need another input to find the second constant of integration. This is normally done by giving an initial value. This situation arises in combined initial value, boundary value problems. The point is, with a boundary value and an eigenvalue also present, we need 3 inputs to fully solve it. Two boundary conditions are not enough.

\(\blacksquare\) If given the ODE \(y''(x)+p(x)y'(x)+q(x)y(x)=0\) and we are asked to classify whether it is singular at \(x=\infty \), then let \(x=\frac{1}{t}\) and check what happens at \(t=0\). The \(\frac{d^{2}}{dx^{2}}\) operator becomes \(\left( 2t^{3}\frac{d}{dt}+t^{4}\frac{d^{2}}{dt^{2}}\right) \) and the \(\frac{d}{dx}\) operator becomes \(-t^{2}\frac{d}{dt}\). Write the ODE with \(t\) as the independent variable and follow the standard procedure, i.e. look at the limits \(\lim _{t\rightarrow 0}t\,p(t)\) and \(\lim _{t\rightarrow 0}t^{2}q(t)\) for the transformed ODE and see if these are finite or not. To see how the operators are mapped, always start with \(x=\frac{1}{t}\), then write \(\frac{d}{dx}=\frac{dt}{dx}\frac{d}{dt}\) and \(\frac{d^{2}}{dx^{2}}=\left( \frac{d}{dx}\right) \left( \frac{d}{dx}\right) \). Since \(\frac{dt}{dx}=-t^{2}\), we have \(\frac{d}{dx}=-t^{2}\frac{d}{dt}\) and\begin{align*} \frac{d^{2}}{dx^{2}} & =\left( -t^{2}\frac{d}{dt}\right) \left( -t^{2}\frac{d}{dt}\right) \\ & =-t^{2}\left( -2t\frac{d}{dt}-t^{2}\frac{d^{2}}{dt^{2}}\right) \\ & =2t^{3}\frac{d}{dt}+t^{4}\frac{d^{2}}{dt^{2}} \end{align*}

Then the new ODE becomes\begin{align*} \left( 2t^{3}\frac{d}{dt}+t^{4}\frac{d^{2}}{dt^{2}}\right) y\left( t\right) +p\left( t\right) \left( -t^{2}\frac{d}{dt}y\left( t\right) \right) +q\left( t\right) y\left( t\right) & =0\\ t^{4}\frac{d^{2}y}{dt^{2}}+\left( 2t^{3}-t^{2}p\left( t\right) \right) \frac{dy}{dt}+q\left( t\right) y & =0\\ \frac{d^{2}y}{dt^{2}}+\frac{2t^{3}-t^{2}p\left( t\right) }{t^{4}}\frac{dy}{dt}+\frac{q\left( t\right) }{t^{4}}y & =0 \end{align*}

The above is how the ODE will always look after the transformation. Remember to change \(p\left( x\right) \) to \(p\left( t\right) \) using \(x=\frac{1}{t}\), and the same for \(q\left( x\right) \). The new \(p\) is \(\frac{2t^{3}-t^{2}p\left( t\right) }{t^{4}}\) and the new \(q\) is \(\frac{q\left( t\right) }{t^{4}}\). Then check \(\lim _{t\rightarrow 0}t\,\frac{2t^{3}-t^{2}p\left( t\right) }{t^{4}}\) and \(\lim _{t\rightarrow 0}t^{2}\frac{q\left( t\right) }{t^{4}}\) as before.

\(\blacksquare\) If the ODE is \(a\left( x\right) y''+b\left( x\right) y'+c\left( x\right) y=0\), with say \(0\leq x\leq 1\), and there is an essential singularity at either end, then use boundary layer or WKB. The boundary layer method works on non-linear ODE's (and also on linear ODE's), but only if the boundary layer is at an end of the domain, i.e. at \(x=0\) or \(x=1\).

The WKB method, on the other hand, works only on linear ODE's, but the singularity can be anywhere (i.e. inside the domain). As a rule of thumb: if the ODE is linear, use WKB; if the ODE is non-linear, we must use boundary layer.

Another difference is that with boundary layer, we need to do a matching phase at the interface between the boundary layer and the outer layer in order to find the constants of integration. This can be tricky, and it is the hardest part of solving using boundary layer.

Using WKB, no matching phase is needed. We apply the boundary conditions to the whole solution obtained. See my HWs for NE 548 for problems solved from the Bender and Orszag textbook.

\(\blacksquare\) In numerical analysis, to determine whether a scheme will converge, check that it is stable and also check that it is consistent.

It could also be conditionally stable, unconditionally stable, or unstable.

Checking consistency is the same as finding the LTE (local truncation error) and checking that as the time step and the space step both go to zero, the LTE goes to zero. What is the LTE? Take the scheme and plug the actual solution into it. An example is better to explain this part. Let us solve \(u_{t}=u_{xx}\). Using forward difference in time and centered difference in space, the (explicit) numerical scheme is\[ U_{j}^{n+1}=U_{j}^{n}+\frac{k}{h^{2}}\left( U_{j-1}^{n}-2U_{j}^{n}+U_{j+1}^{n}\right) \] The LTE is the difference between these two (the error)\[ LTE=U_{j}^{n+1}-\left( U_{j}^{n}+\frac{k}{h^{2}}\left( U_{j-1}^{n}-2U_{j}^{n}+U_{j+1}^{n}\right) \right) \] Now plug in \(u\left( t^{n},x_{j}\right) \) in place of \(U_{j}^{n}\), \(u\left( t^{n}+k,x_{j}\right) \) in place of \(U_{j}^{n+1}\), \(u\left( t^{n},x_{j}+h\right) \) in place of \(U_{j+1}^{n}\), and \(u\left( t^{n},x_{j}-h\right) \) in place of \(U_{j-1}^{n}\) in the above. It becomes\begin{equation} LTE=u\left( t^{n}+k,x_{j}\right) -\left( u\left( t^{n},x_{j}\right) +\frac{k}{h^{2}}\left( u\left( t^{n},x_{j}-h\right) -2u\left( t^{n},x_{j}\right) +u\left( t^{n},x_{j}+h\right) \right) \right) \tag{1}\end{equation} Where \(k\) is the time step (also written as \(\Delta t\)) and \(h\) is the space step size. Now comes the main trick: expand the term \(u\left( t^{n}+k,x_{j}\right) \) in a Taylor series,\begin{equation} u\left( t^{n}+k,x_{j}\right) =u\left( t^{n},x_{j}\right) +k\left . \frac{\partial u}{\partial t}\right \vert _{t^{n},x_{j}}+\frac{k^{2}}{2}\left . \frac{\partial ^{2}u}{\partial t^{2}}\right \vert _{t^{n},x_{j}}+O\left( k^{3}\right) \tag{2}\end{equation} And expand\begin{equation} u\left( t^{n},x_{j}+h\right) =u\left( t^{n},x_{j}\right) +h\left . \frac{\partial u}{\partial x}\right \vert _{t^{n},x_{j}}+\frac{h^{2}}{2}\left . \frac{\partial ^{2}u}{\partial x^{2}}\right \vert _{t^{n},x_{j}}+O\left( h^{3}\right) \tag{3}\end{equation} And\begin{equation} u\left( t^{n},x_{j}-h\right) =u\left( t^{n},x_{j}\right) -h\left . \frac{\partial u}{\partial x}\right \vert _{t^{n},x_{j}}+\frac{h^{2}}{2}\left . \frac{\partial ^{2}u}{\partial x^{2}}\right \vert _{t^{n},x_{j}}-O\left( h^{3}\right) \tag{4}\end{equation} Now plug (2,3,4) back into (1). Simplifying (and using \(u_{t}=u_{xx}\)), many things drop out, and after dividing by \(k\) we should obtain\[ LTE=O(k)+O\left( h^{2}\right) \] Which says that \(LTE\rightarrow 0\) as \(h\rightarrow 0,k\rightarrow 0\). Hence the scheme is consistent.

To check that it is stable, use the Von Neumann method for stability. This checks whether the solution at the next time step becomes larger than the solution at the current time step. There can be a condition for this, such as: it is stable if \(k\leq \frac{h^{2}}{2}\). This says that using this scheme, it will be stable as long as the time step is no larger than \(\frac{h^{2}}{2}\). This makes the time step much smaller than the space step.
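The stability condition \(k\leq \frac{h^{2}}{2}\) can be watched in action with a small experiment (my own sketch; the initial condition \(\sin (\pi x)\) and step counts are arbitrary choices):

```python
import math

def ftcs_max(h, k, steps):
    """Explicit FTCS scheme for u_t = u_xx on [0,1], u = 0 at both ends,
    u(x,0) = sin(pi*x); returns max|u| after the given number of steps."""
    n = round(1 / h)
    u = [math.sin(math.pi * i * h) for i in range(n + 1)]
    r = k / h ** 2
    for _ in range(steps):
        u = ([0.0]
             + [u[i] + r * (u[i - 1] - 2 * u[i] + u[i + 1]) for i in range(1, n)]
             + [0.0])
    return max(abs(v) for v in u)

h = 0.1
print(ftcs_max(h, 0.4 * h ** 2, 200))  # decays: k < h^2/2, stable
print(ftcs_max(h, 0.6 * h ** 2, 200))  # blows up: k > h^2/2, unstable
```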

\(\blacksquare\) For \(ax^{2}+bx+c=0\) with roots \(\alpha ,\beta \), the relation between the roots and the coefficients is\begin{align*} \alpha +\beta & =-\frac{b}{a}\\ \alpha \beta & =\frac{c}{a}\end{align*}
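A quick numerical check (my own example polynomial, \(2x^{2}-10x+8\) with roots \(1\) and \(4\)):

```python
import math

# Check the root/coefficient relations for 2x^2 - 10x + 8 = 0.
a, b, c = 2.0, -10.0, 8.0
disc = math.sqrt(b * b - 4 * a * c)
r1, r2 = (-b - disc) / (2 * a), (-b + disc) / (2 * a)
print(r1 + r2, -b / a)  # both 5.0
print(r1 * r2, c / a)   # both 4.0
```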

\(\blacksquare\) Leibniz rules for integration:\begin{align*} \frac{d}{dx}\int _{a\left( x\right) }^{b\left( x\right) }f\left( t\right) dt & =f\left( b\left( x\right) \right) b'\left( x\right) -f\left( a\left( x\right) \right) a'\left( x\right) \\ \frac{d}{dx}\int _{a\left( x\right) }^{b\left( x\right) }f\left( t,x\right) dt & =f\left( b\left( x\right) ,x\right) b'\left( x\right) -f\left( a\left( x\right) ,x\right) a'\left( x\right) +\int _{a\left( x\right) }^{b\left( x\right) }\frac{\partial }{\partial x}f\left( t,x\right) dt \end{align*}

\(\blacksquare\) A differentiable function is continuous, but a continuous function is not necessarily differentiable. An example is the \(\left \vert x\right \vert \) function.

\(\blacksquare\) Mean curvature being zero is a characteristic of minimal surfaces.

\(\blacksquare\) How to find the phase difference between 2 signals \(x_{1}(t),x_{2}(t)\)? One way is to find the DFT of both signals (in Mathematica this is Fourier, in Matlab fft()), then find the bin where the peak frequency is located (in either output), then find the phase difference between the 2 bins at that location. The value of the DFT at that bin is a complex number; use Arg in Mathematica to find its phase. The difference gives the phase difference between the original signals in the time domain. See https://mathematica.stackexchange.com/questions/11046/how-to-find-the-phase-difference-of-two-sampled-sine-waves for an example.
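The same idea can be sketched without any CAS, computing just the one DFT bin directly (all names and signal parameters here are my own):

```python
import cmath
import math

def bin_phase(signal, k):
    """Phase of DFT bin k of a sampled signal (direct one-bin DFT)."""
    N = len(signal)
    X = sum(signal[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
    return cmath.phase(X)

N, k, shift = 256, 5, 0.7          # exactly k whole cycles in N samples
x1 = [math.sin(2 * math.pi * k * n / N) for n in range(N)]
x2 = [math.sin(2 * math.pi * k * n / N + shift) for n in range(N)]
dphi = bin_phase(x2, k) - bin_phase(x1, k)
print(dphi)  # recovers the 0.7 rad phase difference
```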

\(\blacksquare\) Watch out when squaring both sides of an equation. For example, given \(y=\sqrt{x}\), squaring both sides gives \(y^{2}=x\). But this is only equivalent to the original for \(y\geq 0\). Why? Take the square root of both sides in order to get back to the original equation. This gives \(\sqrt{y^{2}}=\sqrt{x}\). And here is the problem: \(\sqrt{y^{2}}=y\) only for \(y\geq 0\). Let us assume \(y=-1\). Then \(\sqrt{y^{2}}=\sqrt{\left( -1\right) ^{2}}=\sqrt{1}=1\), which is not \(-1\). So when squaring both sides of an equation, remember this condition.

\(\blacksquare\) Do not replace \(\sqrt{x^{2}}\) by \(x\), but by \(|x|\), since \(x=\sqrt{x^{2}}\) only for non-negative \(x\).

\(\blacksquare\) Given an equation, and we want to solve for \(x\). We can square both sides in order to get rid of a square root on one side if needed. But be careful: even though after squaring both sides the new equation is still true, the new equation can have extraneous solutions that do not satisfy the original equation. Here is an example I saw on the internet which illustrates this. Given \(\sqrt{x}=x-6\), and we want to solve for \(x\). Squaring both sides gives \(x=\left( x-6\right) ^{2}\). This has solutions \(x=9,x=4\). But only \(x=9\) is a valid solution of the original equation before squaring; the solution \(x=4\) is extraneous. So we need to check all solutions found after squaring against the original equation and remove the extraneous ones. In summary, \(a^{2}=b^{2}\) does not mean that \(a=b\), but \(a=b\) does mean \(a^{2}=b^{2}\). For example \(\left( -5\right) ^{2}=5^{2}\), but \(-5\neq 5\).
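The checking step can be automated (a tiny sketch of the same example):

```python
import math

# sqrt(x) = x - 6. Squaring gives x = (x - 6)^2, i.e. x^2 - 13x + 36 = 0,
# with candidates x = 9 and x = 4; only x = 9 satisfies the original equation.
candidates = [9.0, 4.0]
valid = [x for x in candidates if math.isclose(math.sqrt(x), x - 6)]
print(valid)  # [9.0]
```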

2 Converting a first order ODE which is homogeneous to a separable ODE

If the ODE \(M\left( x,y\right) +N\left( x,y\right) \frac{dy}{dx}=0\) has both \(M\) and \(N\) homogeneous functions of the same power, then this ODE can be converted to a separable one. Here is an example. We want to solve\begin{equation} \left( x^{3}+8x^{2}y\right) +\left( 4xy^{2}-y^{3}\right) y'=0 \tag{1}\end{equation} The above is homogeneous in \(M,N\), since the total power of each term in them is \(3\): in \(M\), the terms \(x^{3}\) and \(8x^{2}y\) each have total power \(3\), and in \(N\), the terms \(4xy^{2}\) and \(y^{3}\) each have total power \(3\). So we look at each term in \(N\) and \(M\) and add all the powers of \(x,y\) in it. All powers should add to the same value, which is \(3\) in this case. Of course \(N,M\) should be polynomials for this to work, so one should check that they are polynomials in \(x,y\) before starting this process. Once we check \(M,N\) are homogeneous, we let \[ y=xv \] Therefore now\begin{align} M & =x^{3}+8x^{2}\left( xv\right) \nonumber \\ & =x^{3}+8x^{3}v \tag{2}\end{align}

And\begin{align} N & =4x\left( xv\right) ^{2}-\left( xv\right) ^{3}\nonumber \\ & =4x^{3}v^{2}-x^{3}v^{3} \tag{3}\end{align}

And\begin{equation} y'=v+xv' \tag{4}\end{equation} Substituting (2,3,4) into (1) gives\begin{align*} \left( x^{3}+8x^{3}v\right) +\left( 4x^{3}v^{2}-x^{3}v^{3}\right) \left( v+xv'\right) & =0\\ \left( x^{3}+8x^{3}v\right) +\left( 4x^{3}v^{3}-x^{3}v^{4}\right) +\left( 4x^{4}v^{2}-x^{4}v^{3}\right) v' & =0 \end{align*}

Dividing by \(x^{3}\neq 0\), it simplifies to\[ \left( 1+8v\right) +\left( 4v^{3}-v^{4}\right) +x\left( 4v^{2}-v^{3}\right) v'=0 \] Which can be written as\begin{align*} x\left( 4v^{2}-v^{3}\right) v' & =-\left( \left( 1+8v\right) +\left( 4v^{3}-v^{4}\right) \right) \\ v' & =\frac{-\left( \left( 1+8v\right) +\left( 4v^{3}-v^{4}\right) \right) }{\left( 4v^{2}-v^{3}\right) }\left( \frac{1}{x}\right) \end{align*}

We see that it is now separable. We now solve this for \(v\left( x\right) \) by direct integration of both sides, and then using \(y=xv\) find \(y\left( x\right) \).

3 Direct solving of some simple PDE's

Some simple PDE's can be solved by direct integration. Here are a few examples.

Example 1 \[ \frac{\partial z}{\partial x}=0 \] Integrating w.r.t. \(x\), and remembering that the constant of integration will now be a function of \(y\), gives\[ z\left( x,y\right) =f\left( y\right) \] Example 2 \[ \frac{\partial ^{2}z\left( x,y\right) }{\partial x^{2}}=x \] Integrating once w.r.t. \(x\) gives\[ \frac{\partial z}{\partial x}=\frac{x^{2}}{2}+f\left( y\right) \] Integrating again gives\[ z\left( x,y\right) =\frac{x^{3}}{6}+xf\left( y\right) +g\left( y\right) \] Example 3 \[ \frac{\partial ^{2}z\left( x,y\right) }{\partial y^{2}}=y \] Integrating once w.r.t. \(y\) gives\[ \frac{\partial z}{\partial y}=\frac{y^{2}}{2}+f\left( x\right) \] Integrating again gives\[ z\left( x,y\right) =\frac{y^{3}}{6}+yf\left( x\right) +g\left( x\right) \] Example 4 \[ \frac{\partial ^{2}z\left( x,y\right) }{\partial x\partial y}=0 \] Integrating once w.r.t. \(x\) gives\[ \frac{\partial z}{\partial y}=f\left( y\right) \] Integrating again w.r.t. \(y\) gives\[ z\left( x,y\right) =\int f\left( y\right) dy+g\left( x\right) \] Example 5

Solve \(u_{t}+u_{x}=0\) with \(u\left( x,1\right) =\frac{1}{1+x^{2}}\). Let \(u\equiv u\left( x\left( t\right) ,t\right) \), therefore\[ \frac{du}{dt}=\frac{\partial u}{\partial t}+\frac{\partial u}{\partial x}\frac{dx}{dt}\] Comparing the above with the given PDE, we see that if \(\frac{dx}{dt}=1\) then \(\frac{du}{dt}=0\), i.e. \(u\left( x\left( t\right) ,t\right) \) is constant along characteristics. At \(t=1\) we are given that\begin{equation} u=\frac{1}{1+x\left( 1\right) ^{2}} \tag{1}\end{equation} To find \(x\left( 1\right) \), from \(\frac{dx}{dt}=1\) we obtain \(x\left( t\right) =t+c\). At \(t=1\), \(c=x\left( 1\right) -1\). Hence \(x\left( t\right) =t+x\left( 1\right) -1\), or \[ x\left( 1\right) =x\left( t\right) +1-t \] Hence the solution from (1) becomes\[ u=\frac{1}{1+\left( x-t+1\right) ^{2}}\] Example 6

Solve \(u_{t}+u_{x}=-u^{2}\). Let \(u\equiv u\left( x\left( t\right) ,t\right) \), therefore\[ \frac{du}{dt}=\frac{\partial u}{\partial t}+\frac{\partial u}{\partial x}\frac{dx}{dt}\] Comparing the above with the given PDE, we see that if \(\frac{dx}{dt}=1\) then \(\frac{du}{dt}=-u^{2}\), or \(\frac{-1}{u}=-t+c\). At \(t=0\), \(c=\frac{-1}{u\left( x\left( 0\right) ,0\right) }\). Let \(u\left( x\left( 0\right) ,0\right) =f\left( x\left( 0\right) \right) \). Therefore\[ u=\frac{1}{t+\frac{1}{f\left( x\left( 0\right) \right) }}\] Now we need to find \(x\left( 0\right) \). From \(\frac{dx}{dt}=1\), \(x=t+c\) or \(c=x\left( 0\right) \), hence \(x\left( 0\right) =x-t\) and the above becomes\[ u\left( x,t\right) =\frac{1}{t+\frac{1}{f\left( x-t\right) }}=\frac{f\left( x-t\right) }{1+tf\left( x-t\right) }\]
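The solution of Example 5 can be sanity-checked numerically (my own sketch, using central differences for the derivatives):

```python
def u(x, t):
    """Solution of u_t + u_x = 0 with u(x,1) = 1/(1+x^2)."""
    return 1.0 / (1.0 + (x - t + 1.0) ** 2)

# Check the PDE at an arbitrary point with central differences.
h = 1e-6
x0, t0 = 0.3, 2.0
u_t = (u(x0, t0 + h) - u(x0, t0 - h)) / (2 * h)
u_x = (u(x0 + h, t0) - u(x0 - h, t0)) / (2 * h)
print(u_t + u_x)                         # ~0, so u_t + u_x = 0 holds
print(u(0.3, 1.0), 1 / (1 + 0.3 ** 2))   # initial data matches at t = 1
```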

4 Fourier series flow chart

4.1 Theorem on when we can do term by term differentiation

If \(f\left( x\right) \) on \(-L\leq x\leq L\) is continuous (notice: NOT merely piecewise continuous), meaning \(f\left( x\right) \) has no jumps in it, and \(f'\left( x\right) \) exists on \(-L<x<L\) and \(f'\left( x\right) \) is either continuous or piecewise continuous (notice that \(f'\left( x\right) \) can be piecewise continuous (P.W.C.), i.e. have a finite number of jump discontinuities), and also, and this is very important, \(f\left( -L\right) =f\left( L\right) \), then we can do term by term differentiation of the Fourier series of \(f\left( x\right) \) and use \(=\) instead of \(\sim \). Not only that, but the term by term differentiation of the Fourier series of \(f\left( x\right) \) will give the Fourier series of \(f'\left( x\right) \) itself.

So the main restrictions here are that \(f\left( x\right) \) on \(-L\leq x\leq L\) is continuous (no jump discontinuities) and that \(f\left( -L\right) =f\left( L\right) \). So look at \(f\left( x\right) \) first and see if it is continuous or not (remember, the whole of \(f\left( x\right) \) has to be continuous, not just piecewise, so no jump discontinuities). If this condition is met, check whether \(f\left( -L\right) =f\left( L\right) \).

For example, \(f\left( x\right) =x\) on \(-1\leq x\leq 1\) is continuous, but \(f\left( -1\right) \neq f\left( 1\right) \), so the F.S. of \(f\left( x\right) \) can't be term by term differentiated (well, it can, but the result will not be the Fourier series of \(f'\left( x\right) \)). So we should not do term by term differentiation in this case.

But the Fourier series of \(f\left( x\right) =x^{2}\) can be term by term differentiated, since it meets all the conditions; its \(f'\left( x\right) \) is continuous. Also the Fourier series of \(f\left( x\right) =\left \vert x\right \vert \) can be term by term differentiated. This has its \(f'\left( x\right) \) being P.W.C. due to a jump at \(x=0\), but that is OK, as \(f'\left( x\right) \) is allowed to be P.W.C.; it is \(f\left( x\right) \) which is not allowed to be P.W.C.

There is a useful corollary that comes from the above. If \(f\left( x\right) \) meets all the conditions above, then its Fourier series is absolutely convergent and also uniformly convergent. The M-test can be used to verify that the Fourier series is uniformly convergent.

4.2 Relation between the Fourier series coefficients of \(f\left( x\right) \) and of \(f'\left( x\right) \)

If term by term differentiation is allowed, then let\begin{align*} f\left( x\right) & =\frac{a_{0}}{2}+\sum _{n=1}^{\infty }a_{n}\cos \left( n\frac{\pi }{L}x\right) +b_{n}\sin \left( n\frac{\pi }{L}x\right) \\ f'\left( x\right) & =\frac{\alpha _{0}}{2}+\sum _{n=1}^{\infty }\alpha _{n}\cos \left( n\frac{\pi }{L}x\right) +\beta _{n}\sin \left( n\frac{\pi }{L}x\right) \end{align*}

Term by term differentiation then relates the coefficients by \(\alpha _{0}=0\), \(\alpha _{n}=n\frac{\pi }{L}b_{n}\) and \(\beta _{n}=-n\frac{\pi }{L}a_{n}\).

And Bessel's inequality, instead of \(\frac{a_{0}^{2}}{2}+\sum _{n=1}^{\infty }\left( a_{n}^{2}+b_{n}^{2}\right) <\infty \), now becomes \(\sum _{n=1}^{\infty }n^{2}\left( a_{n}^{2}+b_{n}^{2}\right) <\infty \). So it is stronger.

4.3 Theorem on convergence of Fourier series

If \(f\left( x\right) \) is piecewise continuous on \(-L<x<L\), periodic with period \(2L\), and at every point \(x\) in \(-\infty <x<\infty \) both the left sided derivative and the right sided derivative exist (but these do not have to be the same!), then the Fourier series of \(f\left( x\right) \) converges, and it converges to the average of the left and right limits of \(f\left( x\right) \) at each point, including points that have jump discontinuities.

6 Linear combination of two solutions is a solution to the ODE

Let \(y_{1},y_{2}\) be two solutions of \(ay''+by'+cy=0\). Multiply the first ODE by \(c_{1}\) and the second ODE by \(c_{2}\):\begin{align*} a\left( c_{1}y_{1}\right) ''+b\left( c_{1}y_{1}\right) '+c\left( c_{1}y_{1}\right) & =0\\ a\left( c_{2}y_{2}\right) ''+b\left( c_{2}y_{2}\right) '+c\left( c_{2}y_{2}\right) & =0 \end{align*}

Add the above two equations; using linearity of differentiation,\[ a\left( c_{1}y_{1}+c_{2}y_{2}\right) ''+b\left( c_{1}y_{1}+c_{2}y_{2}\right) '+c\left( c_{1}y_{1}+c_{2}y_{2}\right) =0 \] Therefore \(c_{1}y_{1}+c_{2}y_{2}\) satisfies the original ODE, hence it is a solution.

7 To find the Wronskian ODE

For \(ay''+py'+qy=0\) with solutions \(y_{1},y_{2}\), substituting the two solutions into the ODE and eliminating \(q\) gives the Wronskian differential equation\[ aW'+pW=0 \]

Remember: \(W\left( x_{0}\right) =0\) does not mean the two functions are linearly dependent. The functions can still be linearly independent on another interval; it just means \(x_{0}\) can't be in the domain of the solution for the two functions to both be solutions. However, if the two functions are linearly dependent, then this implies \(W=0\) everywhere. So to check whether two functions are L.D., we need to show that \(W=0\) everywhere.

8 Green functions notes

\(\blacksquare\) The Green function is what is called the impulse response in controls. But it is more general, and can be used for solving PDE's also.

Given a differential equation with some forcing function on the right side: to solve it, we replace the forcing function with an impulse. The solution of the DE is now called the impulse response, which is the Green's function of the differential equation.

Now to find the solution to the original problem with the original forcing function, we just convolve the Green function with the original forcing function. Here is an example. Suppose we want to solve \(L\left[ y\left( t\right) \right] =f\left( t\right) \) with zero initial conditions. Then we solve \(L\left[ g\left( t\right) \right] =\delta \left( t\right) \). The solution is \(g\left( t\right) \). Now \(y\left( t\right) =g\left( t\right) \circledast f\left( t\right) \). This is for the initial value problem. For example, take \(y'\left( t\right) +ky=f\left( t\right) \) with \(y\left( 0\right) =0\). We solve \(g'\left( t\right) +kg=\delta \left( t\right) \). The solution is \[ g\left( t\right) =\begin{cases} e^{-kt} & t>0\\ 0 & t<0 \end{cases}\] (this is for a causal system). Hence \(y\left( t\right) =g\left( t\right) \circledast f\left( t\right) \). The nice thing here is that once we find \(g\left( t\right) \), we can solve \(y'\left( t\right) +ky=f\left( t\right) \) for any \(f\left( t\right) \) by just convolving the Green function (impulse response) with the new \(f\left( t\right) \).

\(\blacksquare\) We can think of the Green function as an inverse operator. Given \(L\left[ y\left( t\right) \right] =f\left( t\right) \), we want to find the solution \(y\left( t\right) =\int _{-\infty }^{\infty }G\left( t,\tau \right) f\left( \tau \right) d\tau \). So in a sense, \(G\left( t,\tau \right) \) is like \(L^{-1}\).

\(\blacksquare\) Need to add notes for the Green function for the Sturm-Liouville boundary value ODE. Need to be clear on what boundary conditions to use. What if the B.C. is not homogeneous?
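A numerical sketch of this convolution idea (the forcing \(f(t)=e^{-at}\) is my own choice, picked because the exact answer is known in closed form):

```python
import math

# y' + k*y = f(t), y(0) = 0. The impulse response (Green's function) is
# g(t) = e^{-k t} for t > 0, and the solution is the convolution
# y(t) = integral_0^t g(t - tau) f(tau) dtau.
k, a, t = 2.0, 1.0, 1.5
f = lambda s: math.exp(-a * s)          # example forcing
g = lambda s: math.exp(-k * s)          # impulse response

n = 20000                               # midpoint-rule quadrature
h = t / n
y = h * sum(g(t - (i + 0.5) * h) * f((i + 0.5) * h) for i in range(n))

exact = (math.exp(-a * t) - math.exp(-k * t)) / (k - a)  # closed form
print(y, exact)  # agree
```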

\(\blacksquare\) Green function properties:

1. \(G\left( t,\tau \right) \) is continuous at \(t=\tau \). This is where the impulse is located.
2. The derivative \(G'\left( t,\tau \right) \) just before \(t=\tau \) is not the same as just after \(t=\tau \), i.e. \(G'\left( \tau -\varepsilon ,\tau \right) -G'\left( \tau +\varepsilon ,\tau \right) \neq 0\). This means there is a discontinuity in the derivative.
3. \(G\left( t,\tau \right) \) should satisfy the same boundary conditions as the original PDE or ODE (this is for Sturm-Liouville or boundary value problems).
4. \(L\left[ G\left( t,\tau \right) \right] =0\) for \(t\neq \tau \).
5. \(G\left( x,\tau \right) \) is symmetric, i.e. \(G\left( x,\tau \right) =G\left( \tau ,x\right) \).

\(\blacksquare\) When solving for \(G\left( t,\tau \right) \), in the context of 1D with two boundary conditions, one at each end, and a second order ODE (Sturm-Liouville), we get two solutions, one for \(t<\tau \) and one for \(t>\tau \).

So we have 4 constants of integration to find (this is for a second order ODE), not just two constants as one would normally get, since now we have 2 different solutions. Two of these constants come from the two boundary conditions, and two more come from the properties of the Green function mentioned above:\[ G\left( t,\tau \right) =\begin{cases} A_{1}y_{1}+A_{2}y_{2} & 0<t<\tau \\ A_{3}y_{1}+A_{4}y_{2} & \tau <t<L \end{cases}\]

9 Laplace transform notes

\(\blacksquare\) Remember that \(u_{c}\left( t\right) f\left( t-c\right) \Longleftrightarrow e^{-cs}F\left( s\right) \) and \(u_{c}\left( t\right) f\left( t\right) \Longleftrightarrow e^{-cs}\mathcal{L}\left \{ f\left( t+c\right) \right \} \). For example, if we are given \(u_{2}\left( t\right) t\), then \(\mathcal{L}\left( u_{2}\left( t\right) t\right) =e^{-2s}\mathcal{L}\left \{ t+2\right \} =e^{-2s}\left( \frac{1}{s^{2}}+\frac{2}{s}\right) =e^{-2s}\left( \frac{1+2s}{s^{2}}\right) \). Do not do \(u_{c}\left( t\right) f\left( t\right) \Longleftrightarrow e^{-cs}\mathcal{L}\left \{ f\left( t\right) \right \} \)! That would be a big error. We use this a lot when asked to write a piecewise function using Heaviside functions.

10 Series, power series, Laurent series notes

\(\blacksquare\) If we have a function \(f\left( x\right) \) represented as a series (say a power series or a Fourier series), then we say the series converges to \(f\left( x\right) \) uniformly in a region \(D\) if, given \(\varepsilon >0\), we can find a number \(N\) which depends only on \(\varepsilon \), such that \(\left \vert f\left( x\right) -S_{N}\left( x\right) \right \vert <\varepsilon \) for every \(x\) in \(D\).

Here \(S_{N}\left( x\right) \) is the partial sum of the series using \(N\) terms. The difference between uniform convergence and non-uniform convergence is that with uniform convergence the number \(N\) depends only on \(\varepsilon \) and not on which \(x\) we are trying to approximate \(f\left( x\right) \) at. In non-uniform convergence, the number \(N\) depends on both \(\varepsilon \) and \(x\). This means at some locations in \(D\) we need a much larger \(N\) than at other locations to converge to \(f\left( x\right) \) with the same accuracy. Uniform convergence is better. It depends on the basis functions used to approximate \(f\left( x\right) \) in the series.

If the function \(f\left( x\right) \) is discontinuous at some point, then it is not possible to have uniform convergence there. As we get closer and closer to the discontinuity, more and more terms are needed to obtain the same approximation accuracy as away from the discontinuity, hence the convergence is not uniform. For example, the Fourier series approximation of a step function cannot be uniformly convergent due to the discontinuity in the step function.

\(\blacksquare\) Binomial series:

The general binomial is\[ \left( x+y\right) ^{n}=x^{n}+nx^{n-1}y+\frac{n\left( n-1\right) }{2!}x^{n-2}y^{2}+\frac{n\left( n-1\right) \left( n-2\right) }{3!}x^{n-3}y^{3}+\cdots \] From the above we can generate all the other special cases. For example,\[ \left( 1+x\right) ^{n}=1+nx+\frac{n\left( n-1\right) x^{2}}{2!}+\frac{n\left( n-1\right) \left( n-2\right) x^{3}}{3!}+\cdots \] This works for positive and negative \(n\), rational or not. The sum converges only for \(\left \vert x\right \vert <1\). From this we can also derive the geometric series. For example, for \(n=-1\) the above becomes\begin{align*} \frac{1}{1+x} & =1-x+x^{2}-x^{3}+\cdots \qquad \left \vert x\right \vert <1\\ \frac{1}{1-x} & =1+x+x^{2}+x^{3}+\cdots \qquad \left \vert x\right \vert <1 \end{align*}

For \(\left \vert x\right \vert >1\), we can still find a series expansion in negative powers of \(x\) as follows\begin{align*} \left( 1+x\right) ^{n} & =\left( x\left( 1+\frac{1}{x}\right) \right) ^{n}\\ & =x^{n}\left( 1+\frac{1}{x}\right) ^{n} \end{align*}

And now, since \(\left \vert \frac{1}{x}\right \vert <1\), we can use the binomial expansion on the term \(\left( 1+\frac{1}{x}\right) ^{n}\) in the above and obtain a convergent series, since now \(\left \vert \frac{1}{x}\right \vert <1\). This gives the following expansion\begin{align*} \left( 1+x\right) ^{n} & =x^{n}\left( 1+\frac{1}{x}\right) ^{n}\\ & =x^{n}\left( 1+n\left( \frac{1}{x}\right) +\frac{n\left( n-1\right) }{2!}\left( \frac{1}{x}\right) ^{2}+\frac{n\left( n-1\right) \left( n-2\right) }{3!}\left( \frac{1}{x}\right) ^{3}+\cdots \right) \end{align*}

So everything is the same; we just replace \(x\) with \(\frac{1}{x}\) and remember to multiply the whole expansion by \(x^{n}\). For example, for \(n=-1\)\begin{align*} \frac{1}{1+x} & =\frac{1}{x\left( 1+\frac{1}{x}\right) }=\frac{1}{x}\left( 1-\frac{1}{x}+\left( \frac{1}{x}\right) ^{2}-\left( \frac{1}{x}\right) ^{3}+\cdots \right) \qquad \left \vert x\right \vert >1\\ \frac{1}{x-1} & =\frac{1}{x\left( 1-\frac{1}{x}\right) }=\frac{1}{x}\left( 1+\frac{1}{x}+\left( \frac{1}{x}\right) ^{2}+\left( \frac{1}{x}\right) ^{3}+\cdots \right) \qquad \left \vert x\right \vert >1 \end{align*}

These tricks are very useful when working with Laurent series.
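A quick check of the \(\left \vert x\right \vert >1\) expansion for \(n=-1\) (my own sketch; \(x=3\) is an arbitrary choice):

```python
# For |x| > 1, expand 1/(1+x) in powers of 1/x and compare a partial sum
# with the exact value.
x = 3.0
terms = 40
partial = (1 / x) * sum((-1) ** n * x ** (-n) for n in range(terms))
print(partial, 1 / (1 + x))  # both 0.25
```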

\(\blacksquare\) Arithmetic series:\begin{align*} \sum _{n=1}^{N}n & =\frac{1}{2}N\left( N+1\right) \\ \sum _{n=1}^{N}a_{n} & =N\left( \frac{a_{1}+a_{N}}{2}\right) \end{align*}

i.e. the sum is \(N\) times the arithmetic mean.

(lacksquare ) ( )Taylor series: Expanded around (x=a) is [ fleft ( x ight ) =fleft ( a ight ) +left ( x-a ight ) f^left ( a ight ) +frac f^left ( a ight ) ><2!>+frac f^left ( a ight ) ><3!>+cdots +R_] Where (R_) is remainder (R_=frac >f^left ( x_<0> ight ) ) where (x_<0>) is some point between (x) and (a).

\(\blacksquare\) Maclaurin series: just the Taylor series expanded around zero, i.e. \(a=0\):
\[ f\left(x\right) = f\left(0\right) + xf'\left(0\right) + \frac{x^2 f''\left(0\right)}{2!} + \frac{x^3 f'''\left(0\right)}{3!} + \cdots \]
\(\blacksquare\) This diagram shows the different kinds of convergence of series and the relations between them.

The above shows that an absolutely convergent series (\(B\)) is also convergent, and a uniformly convergent series (\(D\)) is also convergent. But the series \(B\) is absolutely convergent and not uniformly convergent, while \(D\) is uniformly convergent and not absolutely convergent.

The series \(C\) is both absolutely and uniformly convergent. And finally the series \(A\) is convergent, but not absolutely (called conditionally convergent). An example of \(B\) (converges absolutely but not uniformly) is
\begin{align*}
\sum_{n=0}^{\infty}x^2\frac{1}{\left(1+x^2\right)^n} &= x^2\left(1+\frac{1}{1+x^2}+\frac{1}{\left(1+x^2\right)^2}+\frac{1}{\left(1+x^2\right)^3}+\cdots\right) \\
&= x^2+\frac{x^2}{1+x^2}+\frac{x^2}{\left(1+x^2\right)^2}+\frac{x^2}{\left(1+x^2\right)^3}+\cdots
\end{align*}

An example of \(D\) (converges uniformly but not absolutely) is
\[ \sum_{n=1}^{\infty}\left(-1\right)^{n+1}\frac{1}{x^2+n}=\frac{1}{x^2+1}-\frac{1}{x^2+2}+\frac{1}{x^2+3}-\frac{1}{x^2+4}+\cdots \]
An example of \(A\) (converges but not absolutely) is the alternating harmonic series
\[ \sum_{n=1}^{\infty}\left(-1\right)^{n+1}\frac{1}{n}=1-\frac{1}{2}+\frac{1}{3}-\frac{1}{4}+\cdots \]
The above converges to \(\ln\left(2\right)\), but taken absolutely it becomes the harmonic series, which diverges:
\[ \sum_{n=1}^{\infty}\frac{1}{n}=1+\frac{1}{2}+\frac{1}{3}+\frac{1}{4}+\cdots \]
For uniform convergence we really need an \(x\) in the series, not just numbers, since the idea behind uniform convergence is that the series converges to within an error tolerance \(\varepsilon\) using the same number of terms, independent of the point \(x\) in the region.

\(\blacksquare\) Using partial sums. Let \(\sum_{n=1}^{\infty}a_n\) be some series. The partial sum is \(S_N=\sum_{n=1}^{N}a_n\). Then
\[ \sum_{n=1}^{\infty}a_n=\lim_{N\to\infty}S_N \]
If \(\lim_{N\to\infty}S_N\) exists and is finite, then we can say that \(\sum_{n=1}^{\infty}a_n\) converges. So here we set up a sequence whose terms are the partial sums, and then look at what happens to such a term in the limit as \(N\to\infty\). Need to find an example where this method is easier to use to test for convergence than the other methods below.
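The partial-sum definition can be illustrated directly; a small Python sketch (function names are mine), using \(a_n=2^{-n}\), whose partial sums telescope to \(1-2^{-N}\to1\):

```python
# Convergence via partial sums: S_N = sum_{n=1}^{N} a_n, and the series
# converges when lim S_N exists and is finite.

def partial_sum(a, N):
    """S_N for the series with terms a(1), a(2), ..., a(N)."""
    return sum(a(n) for n in range(1, N + 1))

geometric = lambda n: 0.5 ** n
S = [partial_sum(geometric, N) for N in (10, 20, 40)]
assert S[0] < S[1] < S[2]            # monotone increase toward the limit
assert abs(S[-1] - 1.0) < 1e-10      # the limit is 1
```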

\(\blacksquare\) Given a series, we are allowed to rearrange the order of its terms only when the series is absolutely convergent. Therefore for the alternating series \(1-\frac{1}{2}+\frac{1}{3}-\frac{1}{4}+\cdots\), do not rearrange terms, since it is not absolutely convergent. This means the series sum is independent of the order in which terms are added only when the series is absolutely convergent.

\(\blacksquare\) An infinite series of complex numbers converges if the real part of the series and the imaginary part of the series each converge on their own.

\(\blacksquare\) Power series: \(f\left(z\right)=\sum_{n=0}^{\infty}a_n\left(z-z_0\right)^n\). This series is centered at \(z_0\), i.e. expanded around \(z_0\). It has a radius of convergence \(R\): the series converges for \(\left|z-z_0\right|<R\) and diverges for \(\left|z-z_0\right|>R\).

\(\blacksquare\) Tests for convergence.

1. Always start with the preliminary test. If \(\lim_{n\to\infty}a_n\) does not go to zero, then there is no need to do anything else: the series \(\sum_{n=1}^{\infty}a_n\) does not converge; it diverges. But if \(\lim_{n\to\infty}a_n=0\), the series can still diverge, so this is a necessary but not sufficient condition for convergence. An example is \(\sum\frac{1}{n}\): here \(a_n\to0\) in the limit, but we know this series does not converge.
2. For uniform convergence there is the Weierstrass M-test, which can be used to check whether a series is uniformly convergent. But if this test fails, it does not necessarily mean the series is not uniformly convergent; it can still be uniformly convergent. (Need an example.)
3. To test for absolute convergence, use the ratio test. If \(L=\lim_{n\to\infty}\left|\frac{a_{n+1}}{a_n}\right|<1\) the series is absolutely convergent. If \(L=1\) the test is inconclusive; try the integral test. If \(L>1\) the series is not absolutely convergent. There is also the root test: \(L=\lim_{n\to\infty}\sqrt[n]{\left|a_n\right|}=\lim_{n\to\infty}\left|a_n\right|^{\frac{1}{n}}\).
4. The integral test: use it when the ratio test is inconclusive. Compute \(\int^{N}f\left(x\right)dx\) as \(N\to\infty\), where \(a\left(n\right)\) becomes \(f\left(x\right)\). Remember to use this only if the terms of the sequence are monotonically decreasing and all positive. For example, for \(\sum_{n=1}^{\infty}\ln\left(1+\frac{1}{n}\right)\), use \(\int^{N}\ln\left(1+\frac{1}{x}\right)dx=\left(1+x\right)\ln\left(1+x\right)-x\ln x-1\,\Big|^{N}\). Notice we only use the upper limit in the integral. After simplification this grows like \(\ln N\) as \(N\to\infty\), so the integral diverges and hence the series diverges (indeed its partial sums telescope to \(\ln\left(N+1\right)\)).
5. The radius of convergence is \(R=\frac{1}{L}\), where \(L\) is from the ratio test in (3) above.
6. Comparison test. Compare the series with one we already know converges. Let \(\sum b_n\) be a series which we know is convergent (for example \(\sum\frac{1}{n^2}\)), and suppose we want to find out whether \(\sum a_n\) converges. If all terms of both series are positive and \(a_n\leq b_n\) for each \(n\), then we conclude that \(\sum a_n\) converges also.
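The ratio test in item 3 can be illustrated by estimating \(L\) at a large finite \(n\); a rough Python sketch (the helper name is mine, and a finite \(n\) only approximates the limit):

```python
# Rough numeric illustration of the ratio test: estimate
# L = lim |a_{n+1}/a_n| at a moderately large n.
# L < 1 suggests absolute convergence, L > 1 divergence, L near 1 is
# inconclusive (as for both 1/n and 1/n^2).
from math import factorial

def ratio_estimate(a, n=20):
    return abs(a(n + 1) / a(n))

assert ratio_estimate(lambda n: 1 / factorial(n)) < 1     # converges, L = 0
assert ratio_estimate(lambda n: 2.0 ** n / n) > 1         # diverges,  L = 2
assert abs(ratio_estimate(lambda n: 1 / n) - 1) < 0.06    # inconclusive, L = 1
```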

\(\blacksquare\) For Laurent series, say the singularities are at \(z=0\) and \(z=1\). To expand about \(z=0\), get \(f\left(z\right)\) to look like \(\frac{1}{1-z}\) and use the geometric series, valid for \(\left|z\right|<1\). To expand about \(z=1\), there are two choices: to the inside and to the outside. For the outside, i.e. \(\left|z\right|>1\), get \(f\left(z\right)\) into the form \(\frac{1}{1-\frac{1}{z}}\), since this is now valid for \(\left|z\right|>1\).

\(\blacksquare\) We can use a power series \(\sum a_n\left(z-z_0\right)^n\) to expand \(f\left(z\right)\) around \(z_0\) only if \(f\left(z\right)\) is analytic at \(z_0\). If \(f\left(z\right)\) is not analytic at \(z_0\), we need to use a Laurent series. Think of the Laurent series as an extension of the power series that handles singularities.

10.1 Some tricks to ﬁnd sums

10.1.1 Example 1

Solution (find a closed form for \(\sum_{n=1}^{\infty}\frac{e^{inx}}{n}\)). Let \(f\left(x\right)=\sum_{n=1}^{\infty}\frac{e^{inx}}{n}\). Taking the derivative gives
\begin{align*}
f'\left(x\right) &= i\sum_{n=1}^{\infty}e^{inx} = i\sum_{n=1}^{\infty}\left(e^{ix}\right)^n = i\left(\sum_{n=0}^{\infty}\left(e^{ix}\right)^n-1\right) \\
&= \frac{i}{1-e^{ix}}-i
\end{align*}

Hence egin fleft ( x ight ) & =int left ( frac <1-e^>-i ight ) dx & =iint frac <1-e^>-ix+C & =ileft ( x+iln left ( 1-e^ ight ) ight ) -ix+C & =ix-ln left ( 1-e^ ight ) -ix+C & =-ln left ( 1-e^ ight ) +C end

We can set \(C=0\) to obtain
\[ \sum_{n=1}^{\infty}\frac{e^{inx}}{n}=-\ln\left(1-e^{ix}\right) \]
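This closed form can be checked by brute-force summation; a Python sketch (direct summation converges slowly, so many terms are needed):

```python
# Numeric check of  sum_{n>=1} e^{inx}/n = -ln(1 - e^{ix})  at x = 1.
import cmath

x = 1.0
s = sum(cmath.exp(1j * n * x) / n for n in range(1, 100001))
closed = -cmath.log(1 - cmath.exp(1j * x))
assert abs(s - closed) < 1e-4   # tail of the partial sum is O(1/N)
```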

10.2 Methods to ﬁnd Laurent series

Let us find the Laurent series for \(f\left(z\right)=\frac{5z-2}{z\left(z-1\right)}\). There is a singularity of order \(1\) at \(z=0\) and at \(z=1\).

10.2.1 Method one

Expansion around \(z=0\). Let
\begin{align*}
g\left(z\right) &= zf\left(z\right) = \frac{5z-2}{z-1}
\end{align*}

This makes \(g\left(z\right)\) analytic around \(z=0\), since \(g\left(z\right)\) does not have a pole at \(z=0\); therefore it has a power series expansion around \(z=0\), given by
\begin{align}
g\left(z\right)=\sum_{n=0}^{\infty}a_n z^n \tag{1}
\end{align}
where
\[ a_n=\frac{1}{n!}\left.g^{\left(n\right)}\left(z\right)\right|_{z=0} \]
But
\[ g\left(0\right)=2 \]
and
\begin{align*}
g'\left(z\right) &= \frac{5\left(z-1\right)-\left(5z-2\right)}{\left(z-1\right)^2}=\frac{-3}{\left(z-1\right)^2} \\
g'\left(0\right) &= -3
\end{align*}

And so on. Therefore, from (1),
\begin{align*}
g\left(z\right) &= g\left(0\right)+g'\left(0\right)z+\frac{1}{2!}g''\left(0\right)z^2+\frac{1}{3!}g'''\left(0\right)z^3+\cdots \\
&= 2-3z-\frac{6}{2}z^2-\frac{18}{3!}z^3-\cdots \\
&= 2-3z-3z^2-3z^3-\cdots
\end{align*}
Hence
\[ f\left(z\right)=\frac{g\left(z\right)}{z}=\frac{2}{z}-3-3z-3z^2-\cdots \]

The residue is \(2\). The above expansion is valid around \(z=0\), up to but not including the next singularity, which is at \(z=1\). Now we find the expansion of \(f\left(z\right)\) around \(z=1\). Let
\begin{align*}
g\left(z\right) &= \left(z-1\right)f\left(z\right) = \frac{5z-2}{z}
\end{align*}

This makes \(g\left(z\right)\) analytic around \(z=1\), since \(g\left(z\right)\) does not have a pole at \(z=1\). Therefore it has a power series expansion about \(z=1\), given by
\begin{align}
g\left(z\right)=\sum_{n=0}^{\infty}a_n\left(z-1\right)^n \tag{1}
\end{align}
where
\[ a_n=\frac{1}{n!}\left.g^{\left(n\right)}\left(z\right)\right|_{z=1} \]
But
\[ g\left(1\right)=3 \]
and
\begin{align*}
g'\left(z\right) &= \frac{5z-\left(5z-2\right)}{z^2}=\frac{2}{z^2} \\
g'\left(1\right) &= 2
\end{align*}

And so on. Therefore, from (1),
\begin{align*}
g\left(z\right) &= g\left(1\right)+g'\left(1\right)\left(z-1\right)+\frac{1}{2!}g''\left(1\right)\left(z-1\right)^2+\frac{1}{3!}g'''\left(1\right)\left(z-1\right)^3+\cdots \\
&= 3+2\left(z-1\right)-\frac{4}{2}\left(z-1\right)^2+\frac{12}{3!}\left(z-1\right)^3-\cdots \\
&= 3+2\left(z-1\right)-2\left(z-1\right)^2+2\left(z-1\right)^3-\cdots
\end{align*}

Thereforeegin fleft ( z ight ) & =frac & =frac <3>+2-2left ( z-1 ight ) +2left ( z-1 ight ) ^<2>-2left ( z-1 ight ) ^<3>+cdots end

The residue is \(3\). The above expansion is valid around \(z=1\), up to but not including the next singularity, which is at \(z=0\), i.e. inside a circle of radius \(1\).

Putting the above two regions together, we see there is a series expansion of \(f\left(z\right)\) that is shared between the two regions, in the shaded region below.

Let us check that in the shared region the two series give the same values. Using the series expansion about \(z=0\) to find \(f\left(z\right)\) at the point \(z=\frac{1}{2}\) gives \(-2\) when using \(10\) terms of the series. Using the series expansion around \(z=1\) to find \(f\left(\frac{1}{2}\right)\) with \(10\) terms also gives \(-2\). So both series are valid and produce the same result.
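This agreement check is easy to reproduce in Python (function names are mine); the point \(z=\frac{1}{2}\) lies in both regions \(0<\left|z\right|<1\) and \(0<\left|z-1\right|<1\):

```python
# The two expansions of f(z) = (5z-2)/(z(z-1)) must agree where both are valid.

def f(z):
    return (5 * z - 2) / (z * (z - 1))

def series_about_0(z, terms=30):
    # f(z) = 2/z - 3 - 3z - 3z^2 - ...
    return 2 / z - 3 * sum(z ** n for n in range(terms))

def series_about_1(z, terms=30):
    # f(z) = 3/(z-1) + 2 - 2(z-1) + 2(z-1)^2 - ...
    w = z - 1
    return 3 / w + 2 * sum((-w) ** n for n in range(terms))

z = 0.5
assert abs(f(z) + 2) < 1e-12                  # exact value is -2
assert abs(series_about_0(z) - f(z)) < 1e-6
assert abs(series_about_1(z) - f(z)) < 1e-6
```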

10.2.2 Method Two

This method is simpler than the above, but it results in different regions. It is based on rewriting the expression so that the geometric series expansion can be used on it:
\[ f\left(z\right)=\frac{5z-2}{z\left(z-1\right)} \]
Since there is a pole at \(z=0\) and at \(z=1\), we first find the expansion for \(0<\left|z\right|<1\). To do this, we write the above as
\begin{align*}
f\left(z\right) &= \frac{5z-2}{z\left(z-1\right)} \\
&= \frac{2-5z}{z}\left(\frac{1}{1-z}\right)
\end{align*}

And now expand \(\frac{1}{1-z}\) using the geometric series, which is valid for \(\left|z\right|<1\). This gives
\begin{align*}
f\left(z\right) &= \frac{2-5z}{z}\left(1+z+z^2+z^3+\cdots\right) \\
&= \frac{2}{z}\left(1+z+z^2+z^3+\cdots\right)-5\left(1+z+z^2+z^3+\cdots\right) \\
&= \left(\frac{2}{z}+2+2z+2z^2+\cdots\right)-\left(5+5z+5z^2+5z^3+\cdots\right) \\
&= \frac{2}{z}-3-3z-3z^2-3z^3-\cdots
\end{align*}

The above is valid for \(0<\left|z\right|<1\), which agrees with the result of method one.

Now, to find the expansion for \(\left|z\right|>1\), we need a term that looks like \(\frac{1}{1-\frac{1}{z}}\), since it can be expanded for \(\left|\frac{1}{z}\right|<1\), i.e. \(\left|z\right|>1\), which is what we want. Therefore, writing \(f\left(z\right)\) as
\[ f\left(z\right)=\frac{5z-2}{z\left(z-1\right)}=\frac{5z-2}{z^2\left(1-\frac{1}{z}\right)}=\frac{5z-2}{z^2}\left(\frac{1}{1-\frac{1}{z}}\right) \]
then for \(\left|\frac{1}{z}\right|<1\) the above becomes
\begin{align*}
f\left(z\right) &= \frac{5z-2}{z^2}\left(1+\frac{1}{z}+\frac{1}{z^2}+\frac{1}{z^3}+\cdots\right) \\
&= \frac{5}{z}\left(1+\frac{1}{z}+\frac{1}{z^2}+\frac{1}{z^3}+\cdots\right)-\frac{2}{z^2}\left(1+\frac{1}{z}+\frac{1}{z^2}+\frac{1}{z^3}+\cdots\right) \\
&= \left(\frac{5}{z}+\frac{5}{z^2}+\frac{5}{z^3}+\frac{5}{z^4}+\cdots\right)-\left(\frac{2}{z^2}+\frac{2}{z^3}+\frac{2}{z^4}+\frac{2}{z^5}+\cdots\right) \\
&= \frac{5}{z}+\frac{3}{z^2}+\frac{3}{z^3}+\frac{3}{z^4}+\cdots
\end{align*}

The coefficient of \(\frac{1}{z}\) here is \(5\) (note this outer expansion picks up the sum of both residues, \(2+3\)). The above is valid for \(\left|z\right|>1\). The following diagram illustrates the result obtained from method two.
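The outer expansion can be verified numerically at points with \(\left|z\right|>1\); a short Python sketch (names are mine):

```python
# Check the expansion valid for |z| > 1:  f(z) = 5/z + 3/z^2 + 3/z^3 + ...

def f(z):
    return (5 * z - 2) / (z * (z - 1))

def outer_series(z, terms=60):
    return 5 / z + 3 * sum(z ** (-n) for n in range(2, terms))

assert abs(outer_series(2.0) - f(2.0)) < 1e-10
assert abs(outer_series(-3.0) - f(-3.0)) < 1e-10
```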

10.2.3 Method Three

For the expansion about \(z=0\), this uses the same method as above, giving the same series valid for \(\left|z\right|<1\). The method differs a little for points other than zero. The idea is to replace \(z\) by \(\xi+z_0\), where \(z_0\) is the point we want to expand about, making this replacement in \(f\left(z\right)\) itself. So for \(z=1\) in this example, we let \(\xi=z-1\), hence \(z=\xi+1\). Then \(f\left(z\right)\) becomes
\[ f\left(z\right)=\frac{5\left(\xi+1\right)-2}{\left(\xi+1\right)\xi}=\frac{5\xi+3}{\xi}\left(\frac{1}{1+\xi}\right) \]

Now we expand \(\frac{1}{1+\xi}\) for \(\left|\xi\right|<1\), and the above becomes
\begin{align*}
f\left(z\right) &= \frac{5\xi+3}{\xi}\left(1-\xi+\xi^2-\xi^3+\xi^4-\cdots\right) \\
&= \left(5+\frac{3}{\xi}\right)\left(1-\xi+\xi^2-\xi^3+\xi^4-\cdots\right) \\
&= 5+\frac{3}{\xi}-5\xi-3+5\xi^2+3\xi-5\xi^3-3\xi^2+\cdots \\
&= \frac{3}{\xi}+2-2\xi+2\xi^2-2\xi^3+\cdots
\end{align*}

We now replace \(\xi=z-1\) and the above becomes
\[ f\left(z\right)=\frac{3}{z-1}+2-2\left(z-1\right)+2\left(z-1\right)^2-2\left(z-1\right)^3+2\left(z-1\right)^4-\cdots \]
The above is valid for \(\left|\xi\right|<1\), i.e. \(\left|z-1\right|<1\). This gives the same series, for the same region, as method one. But it is a little faster, since it uses the geometric series as a shortcut to find the expansion instead of computing derivatives as in method one.

10.2.4 Conclusion

Method one and method three give the same series for the same regions. Method three uses the binomial (geometric) expansion as a shortcut and requires converting \(f\left(z\right)\) into a form that allows using it. Method one does not use the binomial expansion, but requires taking many derivatives to evaluate the terms of the power series; it is the more direct method.

Method two also uses the binomial expansion, but gives different regions than methods one and three.

If one is good at differentiation, method one seems the most direct. Otherwise, the choice is between methods two and three, as they both use the binomial expansion; method two seems a little more direct than method three. It also depends on what the problem is asking for: whether it asks to expand around \(z_0\), or to find the expansion in \(\left|z\right|>1\), for example, decides which method to use.

11 Gamma function notes

\(\blacksquare\) The Gamma function is defined by
\[ \Gamma\left(x\right)=\int_{0}^{\infty}t^{x-1}e^{-t}\,dt\qquad x>0 \]
The above is called the Euler representation. If we want it defined in the complex domain, the above becomes
\[ \Gamma\left(z\right)=\int_{0}^{\infty}t^{z-1}e^{-t}\,dt\qquad \operatorname{Re}\left(z\right)>0 \]
Since the above is defined only on the right half plane, there is a way to extend it to the left half plane, using what is called analytic continuation. More on this below. First, some relations involving \(\Gamma\left(x\right)\):
\begin{align*}
\Gamma\left(z\right) &= \left(z-1\right)\Gamma\left(z-1\right)\qquad \operatorname{Re}\left(z\right)>1 \\
\Gamma\left(1\right) &= 1 \\
\Gamma\left(2\right) &= 1 \\
\Gamma\left(3\right) &= 2 \\
\Gamma\left(4\right) &= 3! \\
\Gamma\left(n\right) &= \left(n-1\right)! \\
\Gamma\left(n+1\right) &= n! \\
\Gamma\left(\tfrac{1}{2}\right) &= \sqrt{\pi} \\
\Gamma\left(z+1\right) &= z\Gamma\left(z\right)\qquad \text{recursive formula} \\
\Gamma\left(\bar{z}\right) &= \overline{\Gamma\left(z\right)} \\
\Gamma\left(n+\tfrac{1}{2}\right) &= \frac{1\cdot3\cdot5\cdots\left(2n-1\right)}{2^{n}}\sqrt{\pi}
\end{align*}

\(\blacksquare\) To extend \(\Gamma\left(z\right)\) to the left half plane, i.e. to negative values, define, using the above recursive formula,
\[ \bar{\Gamma}\left(z\right)=\frac{\Gamma\left(z+1\right)}{z}\qquad \operatorname{Re}\left(z\right)>-1 \]
For example,
\[ \bar{\Gamma}\left(-\tfrac{1}{2}\right)=\frac{\Gamma\left(-\tfrac{1}{2}+1\right)}{-\tfrac{1}{2}}=-2\Gamma\left(\tfrac{1}{2}\right)=-2\sqrt{\pi} \]
And for \(\operatorname{Re}\left(z\right)>-2\),
\[ \bar{\Gamma}\left(-\tfrac{3}{2}\right)=\frac{\bar{\Gamma}\left(-\tfrac{3}{2}+1\right)}{-\tfrac{3}{2}}=\left(\frac{1}{-\tfrac{3}{2}}\right)\bar{\Gamma}\left(-\tfrac{1}{2}\right)=\left(\frac{1}{-\tfrac{3}{2}}\right)\left(\frac{1}{-\tfrac{1}{2}}\right)\Gamma\left(\tfrac{1}{2}\right)=\frac{4}{3}\sqrt{\pi} \]
And so on. Notice that \(\Gamma\left(x\right)\) remains undefined at \(x=0\) and at all the negative integers \(x=-1,-2,\cdots\).
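The two continuation values computed above can be checked against Python's `math.gamma`, which already handles negative non-integer arguments; a quick sketch (the helper name is mine):

```python
# Check Gamma(-1/2) = -2*sqrt(pi) and Gamma(-3/2) = (4/3)*sqrt(pi),
# and that the recursion Gamma(z) = Gamma(z+1)/z reproduces them.
import math

assert abs(math.gamma(-0.5) - (-2 * math.sqrt(math.pi))) < 1e-12
assert abs(math.gamma(-1.5) - (4 / 3) * math.sqrt(math.pi)) < 1e-12

def gamma_ext(z):
    """Euler's continuation step: Gamma(z) = Gamma(z+1)/z."""
    return math.gamma(z + 1) / z

assert abs(gamma_ext(-0.5) - math.gamma(-0.5)) < 1e-12
```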

\(\blacksquare\) The above method of extending (analytically continuing) the Gamma function to negative values is due to Euler. Another method is due to Weierstrass. It starts by rewriting the definition as follows, where \(a>0\):
\begin{align}
\Gamma\left(z\right) &= \int_{0}^{\infty}t^{z-1}e^{-t}\,dt\nonumber \\
&= \int_{0}^{a}t^{z-1}e^{-t}\,dt+\int_{a}^{\infty}t^{z-1}e^{-t}\,dt \tag{1}
\end{align}
Expanding \(e^{-t}\) in the first integral and integrating term by term gives
\[ \int_{0}^{a}t^{z-1}e^{-t}\,dt=\int_{0}^{a}t^{z-1}\sum_{n=0}^{\infty}\frac{\left(-t\right)^{n}}{n!}\,dt=\sum_{n=0}^{\infty}\frac{\left(-1\right)^{n}}{n!}\frac{a^{n+z}}{n+z} \]

This takes care of the first integral in (1). Now, since the lower limit of the second integral in (1) is not zero, there is no problem integrating it directly. Remember that the Euler definition had zero as the lower limit; that is why it required \(\operatorname{Re}\left(z\right)>0\). We can choose any value for \(a\); Weierstrass chose \(a=1\). Hence (1) becomes
\begin{align}
\Gamma\left(z\right) &= \int_{0}^{1}t^{z-1}e^{-t}\,dt+\int_{1}^{\infty}t^{z-1}e^{-t}\,dt\nonumber \\
&= \sum_{n=0}^{\infty}\frac{\left(-1\right)^{n}}{n!\left(n+z\right)}+\int_{1}^{\infty}t^{z-1}e^{-t}\,dt \tag{2}
\end{align}

Notice the term \(a^{n+z}\) is now just \(1\), since \(a=1\). The second integral above can be integrated directly. Let us verify that the Euler continuation \(\bar{\Gamma}\left(z\right)\) for, say, \(z=-\frac{1}{2}\) gives the same result as the Weierstrass formula. From above, we found \(\bar{\Gamma}\left(-\frac{1}{2}\right)=-2\sqrt{\pi}\). Equation (2) for \(z=-\frac{1}{2}\) becomes
\begin{align}
\bar{\Gamma}\left(-\tfrac{1}{2}\right)=\sum_{n=0}^{\infty}\frac{\left(-1\right)^{n}}{n!\left(n-\tfrac{1}{2}\right)}+\int_{1}^{\infty}t^{-\frac{3}{2}}e^{-t}\,dt \tag{3}
\end{align}
Using the computer,
\[ \sum_{n=0}^{\infty}\frac{\left(-1\right)^{n}}{n!\left(n-\tfrac{1}{2}\right)}=-2\sqrt{\pi}+2\sqrt{\pi}\left(1-\operatorname{erf}\left(1\right)\right)-\frac{2}{e} \]
and by direct integration
\[ \int_{1}^{\infty}t^{-\frac{3}{2}}e^{-t}\,dt=-2\sqrt{\pi}+2\sqrt{\pi}\operatorname{erf}\left(1\right)+\frac{2}{e} \]
Hence (3) becomes
\begin{align*}
\bar{\Gamma}\left(-\tfrac{1}{2}\right) &= \left(-2\sqrt{\pi}+2\sqrt{\pi}\left(1-\operatorname{erf}\left(1\right)\right)-\frac{2}{e}\right)+\left(-2\sqrt{\pi}+2\sqrt{\pi}\operatorname{erf}\left(1\right)+\frac{2}{e}\right) \\
&= -2\sqrt{\pi}
\end{align*}

Which is the same as the Euler method. Let us also check \(z=-\frac{3}{2}\). We found above that \(\bar{\Gamma}\left(-\frac{3}{2}\right)=\frac{4}{3}\sqrt{\pi}\) using the Euler method of analytic continuation. Now we check using the Weierstrass method. Equation (2) for \(z=-\frac{3}{2}\) becomes
\[ \bar{\Gamma}\left(-\tfrac{3}{2}\right)=\sum_{n=0}^{\infty}\frac{\left(-1\right)^{n}}{n!\left(n-\tfrac{3}{2}\right)}+\int_{1}^{\infty}t^{-\frac{5}{2}}e^{-t}\,dt \]
Using the computer,
\[ \sum_{n=0}^{\infty}\frac{\left(-1\right)^{n}}{n!\left(n-\tfrac{3}{2}\right)}=\frac{4\sqrt{\pi}}{3}-\frac{4\sqrt{\pi}\left(1-\operatorname{erf}\left(1\right)\right)}{3}+\frac{2}{3e} \]
and
\[ \int_{1}^{\infty}t^{-\frac{5}{2}}e^{-t}\,dt=-\frac{4\sqrt{\pi}\operatorname{erf}\left(1\right)}{3}+\frac{4\sqrt{\pi}}{3}-\frac{2}{3e} \]
Hence
\begin{align*}
\bar{\Gamma}\left(-\tfrac{3}{2}\right) &= \left(\frac{4\sqrt{\pi}}{3}-\frac{4\sqrt{\pi}\left(1-\operatorname{erf}\left(1\right)\right)}{3}+\frac{2}{3e}\right)+\left(-\frac{4\sqrt{\pi}\operatorname{erf}\left(1\right)}{3}+\frac{4\sqrt{\pi}}{3}-\frac{2}{3e}\right) \\
&= \frac{4}{3}\sqrt{\pi}
\end{align*}
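The Weierstrass mixed representation (sum plus tail integral, with \(a=1\)) can itself be evaluated numerically; a crude Python sketch (function name, step counts, and the midpoint quadrature are my choices, not from the notes):

```python
# Weierstrass form with a = 1:
#   Gamma(z) = sum_{n>=0} (-1)^n / (n! (n+z)) + integral_1^inf t^(z-1) e^(-t) dt
# checked at z = -1/2, where Gamma(-1/2) = -2*sqrt(pi).
import math

def mixed_gamma(z, terms=40, steps=20000, upper=40.0):
    s = sum((-1) ** n / (math.factorial(n) * (n + z)) for n in range(terms))
    # midpoint rule for the rapidly decaying integral on [1, upper]
    h = (upper - 1.0) / steps
    integ = sum(h * (1 + (k + 0.5) * h) ** (z - 1) * math.exp(-(1 + (k + 0.5) * h))
                for k in range(steps))
    return s + integ

assert abs(mixed_gamma(-0.5) - (-2 * math.sqrt(math.pi))) < 1e-4
```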

Which is the same as the Euler method. Clearly the Euler method of analytic continuation of the Gamma function is simpler to compute.

\(\blacksquare\) Euler reflection formula:
\begin{align*}
\Gamma\left(x\right)\Gamma\left(1-x\right) &= \int_{0}^{\infty}\frac{t^{x-1}}{1+t}\,dt\qquad 0<x<1 \\
&= \frac{\pi}{\sin\left(\pi x\right)}
\end{align*}

Contour integration was used to derive the above; see Mary Boas's textbook, second edition, page 607, example 5 for the full derivation.

\(\blacksquare\) \(\Gamma\left(z\right)\) has singularities at \(z=0,-1,-2,\cdots\) and \(\Gamma\left(1-z\right)\) has singularities at \(z=1,2,3,\cdots\), so in the above reflection formula the zeros of \(\sin\left(\pi x\right)\) cancel the singularities of \(\Gamma\left(x\right)\) when it is written as
\[ \Gamma\left(1-x\right)=\frac{\pi}{\sin\left(\pi x\right)\Gamma\left(x\right)} \]
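The reflection formula is easy to spot-check numerically on \(0<x<1\):

```python
# Check Euler's reflection formula Gamma(x) Gamma(1-x) = pi / sin(pi x).
import math

for x in (0.1, 0.25, 0.5, 0.9):
    lhs = math.gamma(x) * math.gamma(1 - x)
    rhs = math.pi / math.sin(math.pi * x)
    assert abs(lhs - rhs) < 1e-9 * abs(rhs)
```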

\(\blacksquare\) There are other representations of \(\Gamma\left(x\right)\). One that uses products, also due to Euler, is
\begin{align*}
\Gamma\left(z\right) &= \frac{1}{z}\prod_{n=1}^{\infty}\frac{\left(1+\frac{1}{n}\right)^{z}}{1+\frac{z}{n}} \\
&= \lim_{n\to\infty}\frac{n!\,n^{z}}{z\left(z+1\right)\left(z+2\right)\cdots\left(z+n\right)}
\end{align*}

12 Riemann zeta function notes

\(\blacksquare\) Given by \(\zeta\left(s\right)=\sum_{n=1}^{\infty}\frac{1}{n^{s}}\) for \(\operatorname{Re}\left(s\right)>1\). Euler studied this, and it was extended to the whole complex plane by Riemann; the Riemann zeta function refers to the one with the extension to the whole complex plane, while Euler only looked at it on the real line. It has a pole at \(s=1\). It has trivial zeros at \(s=-2,-4,-6,\cdots\), and all its nontrivial zeros lie inside the critical strip \(0<\operatorname{Re}\left(s\right)<1\); the Riemann hypothesis asserts that they all lie on the critical line \(\operatorname{Re}\left(s\right)=\frac{1}{2}\). \(\zeta\left(s\right)\) is also defined by the integral formula
\[ \zeta\left(s\right)=\frac{1}{\Gamma\left(s\right)}\int_{0}^{\infty}\frac{t^{s-1}}{e^{t}-1}\,dt\qquad \operatorname{Re}\left(s\right)>1 \]

\(\blacksquare\) The connection between \(\zeta\left(s\right)\) and the prime numbers is given by the Euler product formula
\begin{align*}
\zeta\left(s\right) &= \prod_{p}\frac{1}{1-p^{-s}} \\
&= \left(\frac{1}{1-2^{-s}}\right)\left(\frac{1}{1-3^{-s}}\right)\left(\frac{1}{1-5^{-s}}\right)\left(\frac{1}{1-7^{-s}}\right)\cdots \\
&= \left(\frac{1}{1-\frac{1}{2^{s}}}\right)\left(\frac{1}{1-\frac{1}{3^{s}}}\right)\left(\frac{1}{1-\frac{1}{5^{s}}}\right)\left(\frac{1}{1-\frac{1}{7^{s}}}\right)\cdots \\
&= \left(\frac{2^{s}}{2^{s}-1}\right)\left(\frac{3^{s}}{3^{s}-1}\right)\left(\frac{5^{s}}{5^{s}-1}\right)\left(\frac{7^{s}}{7^{s}-1}\right)\cdots
\end{align*}
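The Euler product can be checked numerically at \(s=2\), where \(\zeta\left(2\right)=\frac{\pi^2}{6}\); a Python sketch (sieve helper is mine, and both product and sum are truncated):

```python
# Truncated Euler product over primes p <= 10000 at s = 2,
# compared with zeta(2) = pi^2 / 6.
import math

def primes_up_to(n):
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p:: p] = [False] * len(sieve[p * p:: p])
    return [p for p, ok in enumerate(sieve) if ok]

s = 2.0
product = 1.0
for p in primes_up_to(10000):
    product *= 1 / (1 - p ** (-s))

assert abs(product - math.pi ** 2 / 6) < 1e-3
```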

\(\blacksquare\) The \(\zeta\left(s\right)\) functional equation is
\[ \zeta\left(s\right)=2^{s}\pi^{s-1}\sin\left(\frac{\pi s}{2}\right)\Gamma\left(1-s\right)\zeta\left(1-s\right) \]

13 Complex functions notes

\(\blacksquare\) Complex identities:
\begin{align*}
\left|z\right|^{2} &= z\bar{z} \\
\overline{\left(z_{1}z_{2}\right)} &= \bar{z}_{1}\bar{z}_{2} \\
\overline{\left(z_{1}+z_{2}\right)} &= \bar{z}_{1}+\bar{z}_{2} \\
\left|\bar{z}\right| &= \left|z\right| \\
\left|z_{1}z_{2}\right| &= \left|z_{1}\right|\left|z_{2}\right| \\
\operatorname{Re}\left(z\right) &= \frac{z+\bar{z}}{2} \\
\operatorname{Im}\left(z\right) &= \frac{z-\bar{z}}{2i} \\
\arg\left(z_{1}z_{2}\right) &= \arg\left(z_{1}\right)+\arg\left(z_{2}\right)
\end{align*}

\(\blacksquare\) A complex function \(f\left(z\right)\) is analytic in a region \(D\) if it is defined and differentiable at all points in \(D\). One way to check for analyticity is to use the Cauchy-Riemann (CR) equations (this is a necessary but not sufficient condition). If \(f\left(z\right)\) satisfies CR everywhere in the region (and the partial derivatives are continuous there), then it is analytic. Let \(f\left(z\right)=u\left(x,y\right)+iv\left(x,y\right)\); then in Cartesian coordinates the two equations are
\begin{align*}
\frac{\partial u}{\partial x} &= \frac{\partial v}{\partial y} \\
\frac{\partial u}{\partial y} &= -\frac{\partial v}{\partial x}
\end{align*}

Sometimes it is easier to use the polar form of these. Let \(f\left(z\right)=u\left(r,\theta\right)+iv\left(r,\theta\right)\); then the equations become
\begin{align*}
\frac{\partial u}{\partial r} &= \frac{1}{r}\frac{\partial v}{\partial\theta} \\
\frac{\partial v}{\partial r} &= -\frac{1}{r}\frac{\partial u}{\partial\theta}
\end{align*}

To remember them, think of the (r) as the (x) and ( heta ) as the (y).

Let us apply these to \(\sqrt{z}\) to see how it works. Since \(z=re^{i\theta}\), we have \(f\left(z\right)=\sqrt{r}\,e^{i\left(\frac{\theta}{2}+n\pi\right)}\). This is a multi-valued function: one value for \(n=0\) and another for \(n=1\). The first step is to make it single valued. Choosing \(n=0\) gives the principal value; then \(f\left(z\right)=\sqrt{r}\,e^{i\frac{\theta}{2}}\). Now we find the branch points. \(z=0\) is a branch point. We can pick \(-\pi<\theta<\pi\) and take the negative real axis as the branch cut (the other branch point being at infinity). This is one choice.

We could have picked \(0<\theta<2\pi\) and had the positive real axis as the branch cut, with the cut now running out to \(+\infty\), but in both cases the origin is still part of the branch cut. Let us stick with \(-\pi<\theta<\pi\).

Given all of this, \(\sqrt{z}=\sqrt{r}\,e^{i\frac{\theta}{2}}=\sqrt{r}\left(\cos\left(\frac{\theta}{2}\right)+i\sin\left(\frac{\theta}{2}\right)\right)\), hence \(u=\sqrt{r}\cos\left(\frac{\theta}{2}\right)\) and \(v=\sqrt{r}\sin\left(\frac{\theta}{2}\right)\). Therefore \(\frac{\partial u}{\partial r}=\frac{1}{2}\frac{1}{\sqrt{r}}\cos\left(\frac{\theta}{2}\right)\), \(\frac{\partial v}{\partial\theta}=\frac{1}{2}\sqrt{r}\cos\left(\frac{\theta}{2}\right)\), \(\frac{\partial u}{\partial\theta}=-\frac{1}{2}\sqrt{r}\sin\left(\frac{\theta}{2}\right)\), and \(\frac{\partial v}{\partial r}=\frac{1}{2}\frac{1}{\sqrt{r}}\sin\left(\frac{\theta}{2}\right)\). Applying Cauchy-Riemann above gives
\begin{align*}
\frac{1}{2}\frac{1}{\sqrt{r}}\cos\left(\frac{\theta}{2}\right) &= \frac{1}{r}\,\frac{1}{2}\sqrt{r}\cos\left(\frac{\theta}{2}\right) \\
&= \frac{1}{2}\frac{1}{\sqrt{r}}\cos\left(\frac{\theta}{2}\right)
\end{align*}

which is satisfied. And for the second equation,
\begin{align*}
-\frac{1}{r}\left(-\frac{1}{2}\sqrt{r}\sin\left(\frac{\theta}{2}\right)\right) &= \frac{1}{2}\frac{1}{\sqrt{r}}\sin\left(\frac{\theta}{2}\right) \\
\frac{1}{2}\frac{1}{\sqrt{r}}\sin\left(\frac{\theta}{2}\right) &= \frac{1}{2}\frac{1}{\sqrt{r}}\sin\left(\frac{\theta}{2}\right)
\end{align*}

So \(\sqrt{z}\) is analytic in the region \(-\pi<\theta<\pi\), not including the branch points and the branch cut.
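The polar CR equations for \(\sqrt{z}\) can be verified with finite differences; a small Python sketch (the sample point and step size are my choices):

```python
# Numeric Cauchy-Riemann check for the principal sqrt(z) = sqrt(r) e^{i theta/2}
# in polar form: u_r = (1/r) v_theta  and  v_r = -(1/r) u_theta.
import math

def u(r, t): return math.sqrt(r) * math.cos(t / 2)
def v(r, t): return math.sqrt(r) * math.sin(t / 2)

r, t, h = 2.0, 0.7, 1e-6
u_r = (u(r + h, t) - u(r - h, t)) / (2 * h)   # central differences
u_t = (u(r, t + h) - u(r, t - h)) / (2 * h)
v_r = (v(r + h, t) - v(r - h, t)) / (2 * h)
v_t = (v(r, t + h) - v(r, t - h)) / (2 * h)

assert abs(u_r - v_t / r) < 1e-6
assert abs(v_r + u_t / r) < 1e-6
```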

\(\blacksquare\) We can't just say \(f\left(z\right)\) is analytic and stop. We have to say \(f\left(z\right)\) is analytic in a region or at a point. When we say \(f\left(z\right)\) is analytic at a point, we mean analytic in a small region around the point.

If \(f\left(z\right)\) is defined only at an isolated point \(z_0\) and not defined anywhere around it, then the function cannot be analytic at \(z_0\), since it is not differentiable at \(z_0\). Also, \(f\left(z\right)\) is analytic at a point \(z_0\) if the power series for \(f\left(z\right)\) expanded around \(z_0\) converges to \(f\left(z\right)\) near \(z_0\). An analytic complex function is infinitely differentiable in the region, which means the limit \(\lim_{\Delta z\to0}\frac{f\left(z+\Delta z\right)-f\left(z\right)}{\Delta z}\) exists and does not depend on the direction of approach.

\(\blacksquare\) Before applying the Cauchy-Riemann equations, make sure the complex function is first made single valued.

\(\blacksquare\) Remember that the Cauchy-Riemann equations are a necessary but not sufficient condition for a function to be analytic. The extra condition needed is that all the partial derivatives are continuous. Need to find an example where CR is satisfied but the partial derivatives are not continuous. Most of the HW problems just need CR, but it is good to keep an eye on this other condition.

\(\blacksquare\) Cauchy-Goursat: if \(f\left(z\right)\) is analytic on and inside a closed contour \(C\), then \(\oint_C f\left(z\right)dz=0\). But remember that \(\oint_C f\left(z\right)dz=0\) does not necessarily imply \(f\left(z\right)\) is analytic on and inside \(C\). So this is an IF and not an IFF relation. For example, \(\oint_C\frac{1}{z^2}dz=0\) around the unit circle centered at the origin, but clearly \(\frac{1}{z^2}\) is not analytic everywhere inside \(C\), since it has a singularity at \(z=0\).

Proof of Cauchy-Goursat: the proof uses two main ideas, the Cauchy-Riemann equations and Green's theorem. Green's theorem says
\begin{align}
\oint_C P\,dx+Q\,dy=\iint_D\left(\frac{\partial Q}{\partial x}-\frac{\partial P}{\partial y}\right)dA \tag{1}
\end{align}
So Green's theorem transforms integration over the boundary \(C\) of a region \(D\) into integration over the area inside \(C\). Let \(f\left(z\right)=u+iv\), and since \(z=x+iy\), \(dz=dx+i\,dy\). Therefore
\begin{align}
\oint_C f\left(z\right)dz &= \oint_C\left(u+iv\right)\left(dx+i\,dy\right)\nonumber \\
&= \oint_C u\,dx+iu\,dy+iv\,dx-v\,dy\nonumber \\
&= \oint_C\left(u\,dx-v\,dy\right)+i\oint_C v\,dx+u\,dy \tag{2}
\end{align}

We now apply (1) to each of the two integrals in (2). The first integral in (2) becomes
\[ \oint_C\left(u\,dx-v\,dy\right)=\iint_D\left(-\frac{\partial v}{\partial x}-\frac{\partial u}{\partial y}\right)dA \]
But from CR we know that \(-\frac{\partial v}{\partial x}=\frac{\partial u}{\partial y}\), hence the above is zero. The second integral in (2) becomes
\[ \oint_C v\,dx+u\,dy=\iint_D\left(\frac{\partial u}{\partial x}-\frac{\partial v}{\partial y}\right)dA \]
But from CR we know that \(\frac{\partial u}{\partial x}=\frac{\partial v}{\partial y}\), hence the above is zero. Therefore the whole integral in (2) is zero, so \(\oint_C f\left(z\right)dz=0\). QED.
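Cauchy-Goursat is easy to see numerically: discretize the unit circle and sum \(f\left(z\right)dz\). A Python sketch (the quadrature and function names are mine):

```python
# Numeric illustration of Cauchy-Goursat: integrate the entire function
# f(z) = z^2 e^z around the unit circle; the result should vanish.
import cmath

def contour_integral(f, n=4096):
    total = 0j
    for k in range(n):
        t = 2 * cmath.pi * k / n
        z = cmath.exp(1j * t)               # point on the unit circle
        dz = 1j * z * (2 * cmath.pi / n)    # dz = i e^{it} dt
        total += f(z) * dz
    return total

assert abs(contour_integral(lambda z: z ** 2 * cmath.exp(z))) < 1e-10

# By contrast 1/z, which has a pole inside, gives 2*pi*i (residue theorem):
assert abs(contour_integral(lambda z: 1 / z) - 2j * cmath.pi) < 1e-10
```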

\(\blacksquare\) Cauchy residue theorem: if \(f\left(z\right)\) is analytic on and inside a closed contour \(C\) except at some isolated points \(z_1,z_2,\cdots,z_n\), then \(\oint_C f\left(z\right)dz=2\pi i\sum_{j=1}^{n}\operatorname{Res}\left(f\left(z\right)\right)_{z=z_j}\). The term \(\operatorname{Res}\left(f\left(z\right)\right)_{z=z_j}\) is the residue of \(f\left(z\right)\) at the point \(z_j\). Use the Laurent expansion of \(f\left(z\right)\) to find residues; see above for methods of finding Laurent series.

\(\blacksquare\) Maximum modulus principle: if \(f\left(z\right)\) is analytic in some region \(D\) and is not constant inside \(D\), then its maximum value must be on the boundary. Its minimum is also on the boundary, as long as \(f\left(z\right)\neq0\) anywhere inside \(D\). On the other hand, if \(f\left(z\right)\) happens to attain a maximum at some point \(z_0\) inside \(D\), then this implies \(f\left(z\right)\) is constant everywhere, with the value \(f\left(z_0\right)\) everywhere. What all this really means is that if \(f\left(z\right)\) is analytic and not constant in \(D\), then its maximum is on the boundary and not inside.

There is a complicated proof of this. See my notes for Physics 501. Hopefully this will not come up in the exam since I did not study the proof.

\(\blacksquare\) These definitions are from the book by Joseph Bak:

1. \(f\) is analytic at \(z\) if \(f\) is differentiable in a neighborhood of \(z\). Similarly, \(f\) is analytic on a set \(S\) if \(f\) is differentiable at all points of some open set containing \(S\).
2. \(f\left(z\right)\) is analytic on an open set \(U\) if \(f\left(z\right)\) is differentiable at each point of \(U\) and \(f'\left(z\right)\) is continuous on \(U\).

\(\blacksquare\) Some important formulas.

1. If \(f\left(z\right)\) is analytic on and inside \(C\), then
\[ \oint_C f\left(z\right)dz=0 \]
2. If \(f\left(z\right)\) is analytic on and inside \(C\) and \(z_0\) is a point inside \(C\), then
\begin{align*}
2\pi i\,f\left(z_0\right) &= \oint_C\frac{f\left(z\right)}{z-z_0}\,dz \\
2\pi i\,f'\left(z_0\right) &= \oint_C\frac{f\left(z\right)}{\left(z-z_0\right)^2}\,dz \\
\frac{2\pi i}{2!}f''\left(z_0\right) &= \oint_C\frac{f\left(z\right)}{\left(z-z_0\right)^3}\,dz \\
&\;\;\vdots \\
\frac{2\pi i}{n!}f^{\left(n\right)}\left(z_0\right) &= \oint_C\frac{f\left(z\right)}{\left(z-z_0\right)^{n+1}}\,dz
\end{align*}
3. From the above we find, with \(f\left(z\right)=1\),
\[ \oint_C\frac{1}{\left(z-z_0\right)^{n+1}}\,dz=\left\{\begin{array}[c]{lll}2\pi i & & n=0 \\ 0 & & n=1,2,\cdots\end{array}\right. \]

14 Hints to solve some problems

14.1 Complex analysis and power and Laurent series

1. The Laurent series of \(f\left(z\right)\) around a point \(z_0\) is \(\sum_{n=-\infty}^{\infty}a_n\left(z-z_0\right)^n\), with \(a_n=\frac{1}{2\pi i}\oint_C\frac{f\left(z\right)}{\left(z-z_0\right)^{n+1}}\,dz\); the integration is over a path enclosing \(z_0\), taken counterclockwise.
2. The power series of \(f\left(z\right)\) around \(z_0\) is \(\sum_{n=0}^{\infty}a_n\left(z-z_0\right)^n\), where \(a_n=\frac{1}{n!}\left.f^{\left(n\right)}\left(z\right)\right|_{z=z_0}\).
3. A problem asks to use the Cauchy integral formula \(\oint_C\frac{f\left(z\right)}{z-z_0}\,dz=2\pi i\,f\left(z_0\right)\) to evaluate another integral \(\oint_C g\left(z\right)dz\), both over the same \(C\). The idea is to rewrite \(g\left(z\right)\) as \(\frac{f\left(z\right)}{z-z_0}\) by factoring out the poles of \(g\left(z\right)\) that are outside \(C\), leaving one inside \(C\). Then we can write
\begin{align*}
\oint_C g\left(z\right)dz &= \oint_C\frac{f\left(z\right)}{z-z_0}\,dz \\
&= 2\pi i\,f\left(z_0\right)
\end{align*}

For example, to evaluate \(\oint_C\frac{1}{z^{2}\left(z+1\right)}\,dz\) around a contour \(C\) that encloses \(z=-1\) but not \(z=0\): rewrite it as \(\oint_C\frac{f\left(z\right)}{z+1}\,dz\) where now \(f\left(z\right)=\frac{1}{z^{2}}\), and we can use the Cauchy integral formula. So all we have to do is evaluate \(\frac{1}{z^{2}}\) at \(z=-1\), which gives \(\oint_C\frac{1}{z^{2}\left(z+1\right)}\,dz=2\pi i\). This works if \(g\left(z\right)\) can be factored into \(\frac{f\left(z\right)}{z-z_0}\) where \(f\left(z\right)\) is analytic on and inside \(C\). It would not work if \(g\left(z\right)\) had more than one pole inside \(C\).

To evaluate \(\int_C f\left(z\right)dz\) along a straight line, parameterize the path as \(x\left(t\right)=x_0+t\left(x_1-x_0\right)\) and \(y\left(t\right)=y_0+t\left(y_1-y_0\right)\) for \(0\leq t\leq1\), where \(\left(x_0,y_0\right)\) is the line's initial point and \(\left(x_1,y_1\right)\) is its end point. This works for straight lines. Now rewrite \(z=x+iy\) as \(z\left(t\right)=x\left(t\right)+iy\left(t\right)\) and plug this \(z\left(t\right)\) into \(f\left(z\right)\) to obtain \(f\left(t\right)\); then the integral becomes
\[ \int_C f\left(z\right)dz=\int_{t_0}^{t_1}f\left(t\right)z'\left(t\right)dt \]
And now evaluate this integral using the normal integration rules. If the path is a circular arc, there is no need to use \(t\): just use \(\theta\). Rewrite \(z=re^{i\theta}\), use \(\theta\) instead of \(t\), and follow the same steps as above.

For a simple pole at \(z_0\), the residue is \(R\left(z_0\right)=\lim_{z\to z_0}\left(z-z_0\right)f\left(z\right)\), which is often immediate. But if the denominator vanishes in a way that leaves a \(\frac{0}{0}\) form, we need to apply L'Hopital, like this: if \(f\left(z\right)=\frac{\sin z}{1-z^4}\) and we want the residue at \(z=i\), then do as above, but with an extra step:
\begin{align*}
R\left(i\right) &= \lim_{z\to i}\left(z-i\right)\frac{\sin z}{1-z^4} \\
&= \left(\lim_{z\to i}\sin z\right)\left(\lim_{z\to i}\frac{z-i}{1-z^4}\right) \\
&= \sin i\left(\lim_{z\to i}\frac{1}{-4z^3}\right)\qquad\text{(L'Hopital)} \\
&= \frac{\sin i}{-4i^3} = \frac{\sin i}{4i} = \frac{1}{4}\sinh\left(1\right)
\end{align*}

Now if the pole is not a simple pole (order one) but of order \(m\), then first multiply \(f\left(z\right)\) by \(\left(z-z_0\right)^m\), differentiate the result \(m-1\) times, divide by \(\left(m-1\right)!\), and then evaluate at \(z=z_0\). In other words,
\[ R\left(z_0\right)=\lim_{z\to z_0}\frac{1}{\left(m-1\right)!}\frac{d^{m-1}}{dz^{m-1}}\left(\left(z-z_0\right)^m f\left(z\right)\right) \]
For example, if \(f\left(z\right)=\frac{z\sin z}{\left(z-\pi\right)^3}\) and we want the residue at \(z=\pi\): since the order is \(m=3\),
\begin{align*}
R\left(\pi\right) &= \lim_{z\to\pi}\frac{1}{2!}\frac{d^2}{dz^2}\left(\left(z-\pi\right)^3\frac{z\sin z}{\left(z-\pi\right)^3}\right) \\
&= \lim_{z\to\pi}\frac{1}{2}\frac{d^2}{dz^2}\left(z\sin z\right) \\
&= \lim_{z\to\pi}\frac{1}{2}\left(-z\sin z+2\cos z\right) \\
&= -1
\end{align*}
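The order-3 residue above can be cross-checked by computing \(\frac{1}{2\pi i}\oint f\left(z\right)dz\) on a small circle around \(z=\pi\); a Python sketch (circle radius and point count are my choices):

```python
# Check the residue of f(z) = z sin z / (z - pi)^3 at z = pi, which the
# derivative formula gives as -1, via a small contour integral.
import cmath

def f(z):
    return z * cmath.sin(z) / (z - cmath.pi) ** 3

n, radius = 2000, 0.5
total = 0j
for k in range(n):
    t = 2 * cmath.pi * k / n
    z = cmath.pi + radius * cmath.exp(1j * t)          # circle around pi
    dz = 1j * radius * cmath.exp(1j * t) * (2 * cmath.pi / n)
    total += f(z) * dz

residue = total / (2j * cmath.pi)
assert abs(residue - (-1)) < 1e-8
```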

The above methods will work on most of the HW problems I've seen so far, but if all else fails, try the Laurent series; that always works.

14.2 Errors and relative errors

A problem gives an expression in \(x,y\) such as \(f(x,y)\) and asks how much a relative error in both \(x\) and \(y\) will affect \(f(x,y)\) in the worst case. For these problems, find \(df\) and then find \(\frac{df}{f}\). For example, if \(f(x,y)=\sqrt{\frac{x}{y^{3}}}\) and the relative error in \(x\) and \(y\) is \(2\%\), then what is the worst relative error in \(f(x,y)\)? Since
\begin{align*}
df &= \frac{\partial f}{\partial x}dx+\frac{\partial f}{\partial y}dy\\
&= \frac{1}{2}x^{-\frac{1}{2}}y^{-\frac{3}{2}}\,dx-\frac{3}{2}x^{\frac{1}{2}}y^{-\frac{5}{2}}\,dy
\end{align*}

Then \[ \frac{df}{f} = \frac{1}{2}\frac{dx}{x}-\frac{3}{2}\frac{dy}{y} \] But \(\frac{dx}{x}\) and \(\frac{dy}{y}\) are the relative errors in \(x\) and \(y\). So if we plug in \(2\) for \(\frac{dx}{x}\) and \(-2\) for \(\frac{dy}{y}\), we get \(4\%\) as the worst relative error in \(f(x,y)\). Notice we used a \(-2\%\) relative error for \(y\) and a \(+2\%\) relative error for \(x\) since we wanted the worst (largest) relative error. If we wanted the least relative error in \(f(x,y)\), then we would use \(+2\%\) for \(y\) also, which gives \(1-3=-2\), or a \(-2\%\) relative error in \(f(x,y)\).
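A quick numerical check of this first-order estimate (a sketch; the class name and the sample point are my own): perturb \(x\) up by \(2\%\) and \(y\) down by \(2\%\) and compare \(f\) directly. The exact change is about \(4.1\%\), close to the differential's predicted \(4\%\):

```java
public class RelativeError {
    // f(x, y) = sqrt(x / y^3), the example function from the text
    static double f(double x, double y) { return Math.sqrt(x / (y * y * y)); }
    public static void main(String[] args) {
        double x = 3.0, y = 2.0;              // arbitrary sample point
        double f0 = f(x, y);
        // worst case: x high by 2%, y low by 2%
        double rel = (f(x * 1.02, y * 0.98) - f0) / f0;
        System.out.println(rel);              // approximately 0.041 (first-order estimate: 0.04)
    }
}
```

Because \(f\) is a monomial in \(x\) and \(y\), the relative change is independent of the sample point chosen.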

15 Some CAS notes

\(\blacksquare\) In Mathematica, Exp is a symbol (Head[Exp] gives Symbol), but in Maple it is not.

2.3E: Exercises - Mathematics

I can solve problems involving arithmetic and geometric sequences and series given a variety of information.

I can use a variety of words, symbols, and notation to describe or find the sum of a series or solve sequence problems.

Complete Review Set 2C on pages 91-92 and check answers on p. 700

Study for Chapter Two Test tomorrow!

I can solve problems involving arithmetic and geometric sequences and series given a variety of information.

I can use a variety of words, symbols, and notation to describe or find the sum of a series or solve sequence problems.

Study for Chapter 2 Test RETAKE if necessary!

I can write a product of factors using index notation.

I can evaluate powers of numbers with positive or negative bases or exponents.

I can understand a variety of symbols and words involving powers: bases, index, exponent, positive, negative, numerator, denominator, etc.

Chapter Three: Exponentials

Lesson 3A: Index Notation

Lesson 3B: Evaluating Powers

WAAST IB Mathematics SL Mrs. Hanson Week 13 November 21st-22nd, 2011

Learning Target

Language Target

I can show that a sequence is arithmetic OR geometric, find the formula for the general term, find any term of a sequence, and determine if a number is a member of a sequence.

I understand a variety of words and symbols that describe an arithmetic OR geometric sequence and communicate my understanding written with words and symbols.

Review IB Exam Questions worksheet #1-5

Review exercises from the

I can use compound interest concepts to solve for the amount of an investment after a fixed time period, a percentage interest rate, or the amount of time for an investment to reach a fixed amount.

I can understand how to apply compound interest rates, its components, and other geometric sequence applications.

Answer any remaining questions from review set exercises

Begin Applications of Sequences

2D.1: Compound Interest

WAAST IB Mathematics SL Mrs. Hanson Week 12 November 14th-18th, 2011

Learning Target

Language Target

I can recognize a pattern in a set of numbers, describe the pattern in words, and continue the pattern.

I can describe a pattern in a set of numbers using words and symbols.

Review Chapter One Test on Functions

I can understand a pattern given by listing terms, using words, using an explicit formula, and by using a pictorial or graphical representation.

I can describe a pattern in a set of numbers using words and symbols.

Review homework from Lesson 2A

Sequences of Numbers

I can show that a sequence is arithmetic, find the formula for the general term, find any term of a sequence, and determine if a number is a member of a sequence.

I understand a variety of words and symbols that describe an arithmetic sequence and communicate my understanding written with words and symbols.

Review homework from Lesson 2B

Arithmetic Sequences

I can show that a sequence is geometric, find the formula for the general term, find any term of a sequence, and determine if a number is a member of a sequence.

I understand a variety of words and symbols that describe a geometric sequence and communicate my understanding written with words and symbols.

Review homework from Lesson 2C

Geometric Sequences

I can show that a sequence is arithmetic OR geometric, find the formula for the general term, find any term of a sequence, and determine if a number is a member of a sequence.

I understand a variety of words and symbols that describe an arithmetic OR geometric sequence and communicate my understanding written with words and symbols.

Review homework from Lesson 2D.1

Arithmetic and Geometric Sequences

WAAST IB Mathematics SL Mrs. Hanson Week 11 November 7th-10th, 2011

Learning Target

Language Target

I can discuss the characteristics of a reciprocal function.

I can determine if a function has an inverse and, if so, find it.

I can understand what a reciprocal is and what inverse functions are and how to find them.

I can determine if a relation is a function, use function notation to evaluate and solve equations and compose functions, state the domain and range of a relation, create a sign diagram for a graph, discuss the characteristics of a reciprocal function, and find the inverse of a function.

I can use function notation and technical words to describe functions.

I understand the meaning of technical words such as a relation, function, domain, range, asymptote, intercept, reciprocal, and inverse.

Refer to Tuesday 11/08 Learning Target

Refer to Tuesday 11/08 Language Target

Refer to Tuesday 11/08 Learning Target

Refer to Tuesday 11/08 Language Target

Complete Set of IB Exam Questions on Ch 1 Functions

WAAST IB Mathematics SL Mrs. Hanson Week 10 October 31st-November 4th, 2011

Learning Target

Language Target

I can find the composition of two functions, draw sign diagrams for functions, and analyze reciprocal functions’ graphs.

I can communicate my knowledge in writing about a graph’s sign and asymptote using words and symbols.

*Paste weekly agenda into notebook

*Review Lessons D and E, study for Quiz on 1 D-F tomorrow

*Read and take notes on page 60, Lesson F: The Reciprocal Function

* Exit Slip on 1E exercises

I can find the composition of two functions, draw sign diagrams for functions, and analyze reciprocal functions’ graphs.

I can communicate my knowledge in writing about a graph’s sign and asymptote using words and symbols.

*Complete student behavior form and submit to teacher.

*Submit graph composition notebook to portfolio folder.

Complete Investigation 1: Fluid Filling Functions on Textbook Page 54 while fulfilling the Internal Assessment Grading Criteria and submit via email to [email protected] before 9pm tonight.

I can determine the equations of asymptotes, discuss the behavior of the function as it approaches the asymptote, find axes intercepts, and graph reciprocal functions.

I can understand what a reciprocal function is and its characteristics.

*Lesson 1G: Asymptotes of Other Rational Functions

Study for Chapter One Test: Functions on Wed, Nov 9th

WAAST IB Mathematics SL Mrs. Hanson Week 8 October 17th-21st, 2011

I can understand the notation used in the textbook and on my calculator to carry out a variety of tasks.

I am familiar with the presumed knowledge section on my formula sheet.

I understand the different symbols and prompts used in the textbook, calculator, and formula sheet to carry out a variety of tasks.

10-11: Symbols and Notation used in this book and formula sheet

12-15: Background Knowledge and Geometric Facts

16-44: Graphics Calculator Instructions

Read through pages 10-44 of textbook and spend time reviewing unfamiliar content.

Preview Chapter One: Functions

I can determine if a graph, set of points, or equation describes a function or relation.

I can understand the meaning of a relation and a function.

I can use set notation to describe a function and evaluate it for different values.

I can understand the meaning of set notation, its usefulness.

I can determine the domain and range of a relation or function.

I can understand the meaning of domain and range.

I can write a brief report on the connection between the shape of a vessel and the corresponding shape of its depth-time graph.

I can understand expectations of the internal assessment and communicate my understanding in a written report.

Fluid Filling Functions on

Complete Investigation 1: Fluid Filling Functions on Textbook Page 54 while fulfilling the Internal Assessment Grading Criteria.

Submit report via email to [email protected] before Tuesday, Oct 25th.

WAAST Mathematics SL Mrs. Hanson Week 7 October 10th-13th, 2011

I can find the missing side lengths and angle measures of objects by creating a right triangle with the object and then using the Pythagorean theorem or trigonometric ratios.

I can understand the meaning of the words ‘isosceles’, ‘tangent’, ‘chord’, ‘angle of elevation’, ‘angle of depression’, ‘altitude’.

Lesson BK N.5: Right Angled Triangle Trigonometry, Problem Solving Using Trigonometry

I can find the missing side lengths and angle measures of objects by creating a right triangle with the object and then using the Pythagorean theorem or trigonometric ratios.

I can understand the meaning of the words ‘isosceles’, ‘tangent’, ‘chord’, ‘angle of elevation’, ‘angle of depression’, ‘altitude’.

Lesson BK N.5: Right Angled Triangle Trigonometry, Problem Solving Using Trigonometry

I can find the missing side lengths and angle measures of objects by creating a right triangle with the object and then using the Pythagorean theorem or trigonometric ratios.

I can understand the meaning of the words ‘isosceles’, ‘tangent’, ‘chord’, ‘angle of elevation’, ‘angle of depression’, ‘altitude’.

Quiz on Lesson N: Right Angled Triangle Trigonometry

Study any sections you would like to retake quizzes on tomorrow

I can find the missing side lengths and angle measures of objects by creating a right triangle with the object and then using the Pythagorean theorem or trigonometric ratios.

I can understand the meaning of the words ‘isosceles’, ‘tangent’, ‘chord’, ‘angle of elevation’, ‘angle of depression’, ‘altitude’.

Retake a Quiz or Test, Review content up to this point, preview textbook pages 12-44 and review formula sheet

Study any sections you would like to retake quizzes on

Prepare for Chapter One: Functions

WAAST Mathematics SL Mrs. Hanson Week 5 September 26th-30th, 2011

I can find the distance, midpoint, and slope given two points or a graph. I can write equations of lines (in a variety of forms given different information) and equations of tangent lines and circles.

I can understand the meaning of the words ‘distance’, ‘midpoint’, ‘gradient’ or ‘slope’, ‘parallel’, ‘perpendicular’, ‘intercept’, ‘gradient’, ‘horizontal’, ‘vertical’, ‘tangent’ and ‘circle’.

Review and complete these exercises which should have been completed last week!

I can find the distance, midpoint, and slope given two points or a graph. I can write equations of lines (in a variety of forms given different information) and equations of tangent lines and circles.

I can understand the meaning of the words ‘distance’, ‘midpoint’, ‘gradient’ or ‘slope’, ‘parallel’, ‘perpendicular’, ‘intercept’, ‘gradient’, ‘horizontal’, ‘vertical’, ‘tangent’ and ‘circle’.

Review and complete these exercises which should have been completed last week!

I can find the distance between two points, the midpoint coordinate, the slope of a line, and determine if two lines are perpendicular, parallel, or neither.

I can understand the meaning of the words ‘distance’, ‘midpoint’, ‘gradient’ or ‘slope’, ‘parallel’, and ‘perpendicular’.

Quiz on Lessons M

I can write the equation of a line given a.) the gradient and a point, and b.) two points. I can write the equation of a line in gradient-intercept form or general form. I can use graphical methods to find where two lines intersect.

I can understand the meaning of the words ‘intercept’, ‘gradient’, ‘horizontal’, and ‘vertical’.

I can find the distance, midpoint, and slope given two points or a graph. I can write equations of lines (in a variety of forms given different information) and equations of tangent lines and circles.

I can understand the meaning of the words ‘distance’, ‘midpoint’, ‘gradient’ or ‘slope’, ‘parallel’, ‘perpendicular’, ‘intercept’, ‘gradient’, ‘horizontal’, ‘vertical’, ‘tangent’ and ‘circle’.

Review and complete these exercises which should have been completed last week! Study for a Quiz M retake on Monday!

WAAST Mathematics SL Mrs. Hanson Week 4 September 19th-22nd, 2011

I can determine if two or more geometric figures are congruent, by using SSS, SAS, AAS, or RHS rules.

I can understand what the abbreviations SSS, SAS, AAS, and RHS mean and how to use these rules. I can understand the difference between congruent and similar.

Congruence and Similarity

I can use Pythagoras’ Theorem to solve parts of a right triangle in two and three dimensional problems.

I can understand the words used to describe parts of a right triangle, such as “leg” and “hypotenuse”, and “converse”.

Study for Quiz on Lessons I-L on Wednesday

I can find the distance between two points, the midpoint coordinate, the slope of a line, and determine if two lines are perpendicular, parallel, or neither.

I can understand the meaning of the words ‘distance’, ‘midpoint’, ‘gradient’ or ‘slope’, ‘parallel’, and ‘perpendicular’.

Quiz on Lessons I-L

I can write the equation of a line given a.) the gradient and a point, and b.) two points. I can write the equation of a line in gradient-intercept form or general form. I can use graphical methods to find where two lines intersect.

I can understand the meaning of the words ‘intercept’, ‘gradient’, ‘horizontal’, and ‘vertical’.

I can write equations of tangents lines and equations of circles.

I can understand the meaning of the words ‘tangent’ and ‘circle’.

WAAST Mathematics SL Mrs. Hanson Week 3 September 12th-16th, 2011

I can factorize any quadratic expression, using a variety of methods.

I can understand the meaning of the math term ‘factorize’.

Lesson BK H: Factorization

Quiz on Lessons G-H during enrichment

Review Lesson G-H and prepare for retake if necessary

I can rearrange formulas for a variety of variables.

I can understand prompts for instructions of ‘make a variable the subject of another’ or ‘write the formula in terms of a specific variable’.

I can add and subtract algebraic fractions by creating a least common denominator.

I can understand the meaning of a least common denominator and what it means for a fraction to be in simplest form or as a single fraction.

I can determine if two or more geometric figures are congruent, by using SSS, SAS, AAS, or RHS rules.

I can understand what the abbreviations SSS, SAS, AAS, and RHS mean and how to use these rules. I can understand the difference between congruent and similar.

Congruence and Similarity

I can use Pythagoras’ Theorem to solve parts of a right triangle in two and three dimensional problems.

I can understand the words used to describe parts of a right triangle, such as “leg” and “hypotenuse”, and “converse”.

Study for Quiz on Lessons I-L on Monday

WAAST Mathematics SL Mrs. Hanson Week 2 September 6th-9th, 2011

I will be able to simplify expressions by combining like terms and using power rules. I will be able to solve linear equations and inequalities with one or two variables. I will be able to evaluate absolute value expressions and solve equations.

I will understand words which are prompts to perform operations such as ‘simplify’ or ‘solve’. I will understand the concept of absolute value, and be able to express it geometrically and algebraically. I will understand the meaning of the math term ‘expand’.

Lesson BK D: Algebraic Simplification

Lesson BK E: Linear Equations and Inequalities Lesson BK F: Modulus or Absolute Value

I will be able to expand and simplify quadratic expressions and equations.

I will understand the meaning of the math term ‘expand’.

Quiz on Lessons A-F

Lesson BK G: Product Expansion

I will be able to factorize any quadratic expression, using a variety of methods.

I will understand the meaning of the math term ‘factorize’.

Lesson BK H: Factorization

I will be able to rearrange formulas for a variety of variables.

I will understand prompts for instructions of ‘make a variable the subject of another’ or ‘write the formula in terms of a variable’.

Quiz on Lessons G-H

Lesson I: Formula Rearrangement

WAAST Mathematics SL Mrs. Hanson Week 1 August 30th-September 2nd, 2011

I will understand the syllabus content: materials, expectations, and grading for the course.

I will listen carefully, follow spoken and written instructions, and ask questions.

Welcome back to school activities

Lesson BK A: Surds and Radicals

Review, discuss, and sign the syllabus with your guardian and return it before Friday 09/02

I will understand how to write numbers in scientific notation and in ordinary decimal form interchangeably.

I will understand mathematical words (scientific, decimal, power, etc.) which prompt instructions.

Lesson BK B: Scientific Notation

I will use letters to represent number sets and use symbols to group sets together.

I will understand the meaning of words to describe number sets and operations.

Lesson BK C: Number Systems and Set Notation

I will be able to simplify expressions by combining like terms and using power rules. I will be able to solve linear equations and inequalities with one or two variables. I will be able to evaluate absolute value expressions and solve equations.

I will understand words which are prompts to perform operations such as ‘simplify’ or ‘solve’. I will understand the concept of absolute value, and be able to express it geometrically and algebraically. I will understand the meaning of the math term ‘expand’.


The form a + b i is called the rectangular coordinate form of a complex number because to plot the number we imagine a rectangle of width a and height b, as shown in the graph in the previous section.

But complex numbers, just like vectors, can also be expressed in polar coordinate form, r ∠ θ. (This is spoken as “r at angle θ”.) The figure to the right shows an example. The number r in front of the angle symbol is called the magnitude of the complex number and is the distance of the complex number from the origin. The angle θ after the angle symbol is the direction of the complex number from the origin, measured counterclockwise from the positive part of the real axis.

Rectangular → Polar Conversion

Example: Convert the complex number 5 ∠ 53° to rectangular form.

Solution: We have r = 5 and θ = 53°. We compute a = 5 cos(53°) = 3 and b = 5 sin(53°) = 4, so the complex number in rectangular form must be 3 + 4 i.

Example: Convert the complex number 5 + 2 i to polar form.

Solution: We have a = 5 and b = 2. We compute

so the complex number in polar form must be 5.39 ∠ 21.8°.

Example: Convert the complex number −5 − 2 i to polar form.

Solution: We have a = −5 and b = −2. We compute

which is exactly the same answer as for the previous example! What went wrong? The answer is that the arctan function always returns an angle in the first or fourth quadrants and we need an angle in the third quadrant. So we must add 180° to the angle by hand. Thus the complex number in polar form must be 5.39 ∠ 201.8°.
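In code, the quadrant correction can be avoided entirely by using atan2, which inspects the signs of both arguments; a sketch in Java (class and method names are my own):

```java
public class RectToPolar {
    // Convert a + bi to polar form {r, theta in degrees}.
    static double[] toPolar(double a, double b) {
        double r = Math.hypot(a, b);                    // sqrt(a^2 + b^2), overflow-safe
        double thetaDeg = Math.toDegrees(Math.atan2(b, a)); // correct quadrant automatically
        return new double[]{r, thetaDeg};
    }
    public static void main(String[] args) {
        double[] p = toPolar(-5, -2);
        System.out.println(p[0] + " at angle " + p[1] + " degrees");
        // approximately 5.39 at angle -158.2 degrees (the same direction as 201.8)
    }
}
```

atan2 returns angles in (−180°, 180°], so the third-quadrant answer appears as −158.2°, which is the same direction as the 201.8° found by hand.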

Multiplying and Dividing Complex Numbers in Polar Form

Complex numbers in polar form are especially easy to multiply and divide. The rules are:

Multiplication rule: To form the product, multiply the magnitudes and add the angles.

Division rule: To form the quotient, divide the magnitudes and subtract the angles.

Example: divide 15 ∠ 32° by 3 ∠ 25°

Example: divide 5 + 3 i by 2 − 4 i

Just for fun we converted both numbers to polar form (with angles in radians), then did the division in polar, then converted the result back to rectangular form.
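The polar multiply/divide rules are one-liners in code; a sketch (names are illustrative), using the 15 ∠ 32° ÷ 3 ∠ 25° example, which gives 5 ∠ 7°:

```java
public class PolarOps {
    // multiply r1∠t1 by r2∠t2: multiply magnitudes, add angles
    static double[] multiply(double r1, double t1, double r2, double t2) {
        return new double[]{r1 * r2, t1 + t2};
    }
    // divide r1∠t1 by r2∠t2: divide magnitudes, subtract angles
    static double[] divide(double r1, double t1, double r2, double t2) {
        return new double[]{r1 / r2, t1 - t2};
    }
    public static void main(String[] args) {
        double[] q = divide(15, 32, 3, 25);
        System.out.println(q[0] + " at angle " + q[1] + " degrees"); // 5.0 at angle 7.0 degrees
    }
}
```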

Complex Numbers in Exponential Form

Notice that in the exponential form we need nothing but the familiar properties of exponents to obtain the result of the multiplication. That is much more pleasing than the polar form where we have to introduce strange rules about multiplying lengths and adding angles.

Proof that r ∠ θ = r e^(iθ)

It is due to Leonhard Euler and it shows that there is a deep connection between exponential growth and sinusoidal oscillations. To prove it we need a way to calculate the sine, cosine and exponential of any value of θ just like a calculator does. Taylor series provides a way.

The Taylor series for e^x is:

e^x = 1 + x + x²/2! + x³/3! + x⁴/4! + ⋯

Substituting x = iθ and simplifying the powers of i (using i² = −1) gives

e^(iθ) = (1 − θ²/2! + θ⁴/4! − ⋯) + i (θ − θ³/3! + θ⁵/5! − ⋯)

Now notice that the terms in the first brackets are just the Taylor series for cos(θ) and that the terms in the second brackets are just the Taylor series for sin(θ). Thus we have converted the left side of Euler’s formula to read cos(θ) + i sin(θ), which equals the right side of Euler’s formula. This ends the proof of Euler’s formula.

Now that we have Euler’s formula it is easy to prove that r ∠ θ = r e^(iθ). Again we will manipulate the left side until it equals the right side. First, convert the left side to rectangular form:

r ∠ θ = r cos(θ) + i r sin(θ) = r (cos(θ) + i sin(θ)) = r e^(iθ)

The result is that we have converted the left side to equal the right side. This proves that r ∠ θ = r e^(iθ).

Examples Using Rectangular, Polar and Exponential Form

When doing calculations with complex numbers which form should you use? Generally speaking use rectangular form for adding and subtracting, polar form for multiplying and dividing, and exponential form for exponentiating or manipulating literal expressions. Here are some examples.

Example 1: Show that e^(iπ) = −1. This is known as Euler’s identity.

Solution: Simply convert to polar and then to rectangular:

Example 2: Calculate i^n for n = 1, 2, 3, …, plot the numbers on the complex plane, and spot the pattern.

Example 3: Evaluate the exponential (3 + 4 i)^(6 + 7 i)

Solution: Do the following manipulations:

1. Convert the base to exponential form. Remember the angle must be in radians. The base now contains two factors.
2. Apply the property of exponents (a·b)^m = a^m · b^m.
3. Apply the property of exponents b^(m + n) = b^m · b^n.
4. Move the real factors to the front and evaluate them.
5. Change the base from 5 to e by using the identity 5 = e^(ln 5).
6. Combine the exponents and evaluate.
7. Express the answer in exponential, polar or rectangular form.

Many trigonometric laws and identities are easy to prove with complex numbers expressed in polar form. Among them are the sine and cosine laws, the sum-of-angles trigonometric identities, and De Moivre’s formula.

Proof of the sine and cosine laws

This proof uses the complex conjugate, denoted z*.

Recall that if z = a + b i is a complex number expressed in rectangular coordinates then z* = a − b i.

In general, to get the complex conjugate of any complex expression just change every i in the expression to −i and change every angle θ to −θ. The complex conjugate is useful because:

The figure to the right shows three complex numbers (the red arrows) satisfying the relationship

This is the cosine law, c² = a² + b² − 2 a b cos(θ).

which is the sine law.

Proof of the sum-of-angles identity sin(θ + φ) = sin(θ)·cos(φ) + cos(θ)·sin(φ).

Expand the LHS and simplify.

Equating imaginary parts gives the sum-of-angles identity for sine. Equating real parts gives the sum-of-angles identity for cosine.

De Moivre&rsquos formula

One application of De Moivre’s formula is to prove the double, triple and higher multiple-angle identities. For example, start with (1 ∠ θ) · (1 ∠ θ) = 1 ∠ (2θ) and convert to rectangular.

Expand the LHS and simplify.

Equating imaginary parts gives the double-angle identity for sin and equating real parts gives the double-angle identity for cosine.

De Moivre’s formula can also be used to find the nth roots of any complex number r ∠ θ (more precisely, it can be used to solve the equation zⁿ = r ∠ θ, where n is a positive integer, for z). There are n roots in all. One root (called the principal root) can be found by taking the nth root of the magnitude and one-nth of the angle. Thus it is r^(1/n) ∠ (θ/n). The other n − 1 roots are distributed uniformly in a circle about the origin in the complex plane. This uniform distribution guarantees that they all produce the same nth power.

For example let’s find the cube roots of 8 (i.e. let’s solve the equation z³ = 8 for z). Write 8 as 8 ∠ 0°. The principal cube root is 8^(1/3) ∠ (0°/3), or 2 ∠ 0°, or 2. Now distribute the three roots uniformly in a circle about the origin as illustrated by the red dots in the figure. We can verify that they are cube roots of 8 by cubing them using De Moivre’s theorem:
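The nth-root recipe translates directly to code; a sketch (names are my own): the k-th root of r ∠ θ has magnitude r^(1/n) and angle (θ + 360°k)/n:

```java
public class CubeRoots {
    // nth roots of r∠theta (degrees), returned in rectangular form {re, im}
    static double[][] roots(double r, double thetaDeg, int n) {
        double[][] out = new double[n][2];
        double mag = Math.pow(r, 1.0 / n);                  // nth root of the magnitude
        for (int k = 0; k < n; k++) {
            double ang = Math.toRadians((thetaDeg + 360.0 * k) / n);
            out[k][0] = mag * Math.cos(ang);                // real part
            out[k][1] = mag * Math.sin(ang);                // imaginary part
        }
        return out;
    }
    public static void main(String[] args) {
        for (double[] z : roots(8, 0, 3))
            System.out.println(z[0] + " + " + z[1] + "i");
        // the three cube roots of 8: 2, -1 + 1.732i, -1 - 1.732i
    }
}
```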


BestMaths Online

3. Given a random variable Z with expected value E(Z) = 3 and variance VAR(Z) = 2, find the following:

4. In a game of DrawBall, two balls are drawn without replacement from a bag containing 4 green balls and 5 red balls. The player receives $12 for each green ball drawn and $6 for each red ball drawn.

(i) Draw a probability tree to show the outcomes of the game.

(ii) If the random variable W = amount won in the game, draw up a probability distribution table, and hence find the expected value and variance of W.

(iii) It costs \$16 to play a game of DrawBall. If the random variable Z = profit from a game, find the mean and variance of Z.

5. X is a random variable with E(X) = 2 and VAR(X) = 3.

a. Find the mean of the random variable 2X + 5.

b. Find the variance of the random variable 2X + 5.
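For exercises like 3 and 5, the standard linearity rules E(aX + b) = a·E(X) + b and Var(aX + b) = a²·Var(X) do all the work. A minimal sketch (class name illustrative), using E(X) = 2 and Var(X) = 3 from exercise 5:

```java
public class LinearTransform {
    // For Y = aX + b: E(Y) = a*E(X) + b, Var(Y) = a^2 * Var(X)
    static double mean(double a, double b, double ex)   { return a * ex + b; }
    static double variance(double a, double varx)       { return a * a * varx; }
    public static void main(String[] args) {
        System.out.println(mean(2, 5, 2));      // E(2X + 5) = 9.0
        System.out.println(variance(2, 3));     // Var(2X + 5) = 12.0
    }
}
```

Note that the additive constant b shifts the mean but does not affect the variance.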


It may be necessary in a program to use common mathematical functions such as the sine, square root, or log of a number. There are some standard methods Java provides to do this simply. To calculate y = x² it is easiest to use the expression y = x*x ; however, for higher powers of x this is awkward, so in general y = xⁿ is calculated using y = Math.pow(x,n) .

The values or variables placed in the round brackets, () , are known as arguments .

These useful methods providing mathematical functions exist: except for Math.abs() , these mathematical functions all give result values of type double . They also all take double variables or values as their arguments -- both x and n in the above are double variables. Note that Math.pow() requires two arguments , whereas Math.random() requires none, although the brackets must still be included. Math.abs() is slightly different -- using it on a float variable will give a float value, on an int variable it will return an int value, and so on.
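A minimal sketch exercising several of these methods (the values are arbitrary):

```java
public class MathDemo {
    public static void main(String[] args) {
        double y = Math.pow(2.0, 10);        // 1024.0 (two arguments)
        double r = Math.sqrt(16.0);          // 4.0
        int m = Math.abs(-7);                // int in, int out: 7
        double s = Math.sin(Math.PI / 6);    // radians, not degrees: approximately 0.5
        double u = Math.random();            // no arguments; 0.0 <= u < 1.0
        System.out.println(y + " " + r + " " + m + " " + s + " " + u);
    }
}
```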

Note that all the trigonometric functions work in radians.

Write a program that reads in a value for an angle from the keyboard, computes and writes out the sine and cosine and, as a check, the value of sin²x + cos²x for that angle. The value of sin²x + cos²x, of course, should be very close to 1. In your laboratory notebook, record the results of entering the angles 3.5 and 2.3e-3. You should use variables of type double and include all significant figures in your written answer. Stick a printout of your program in your lab notebook.
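One possible sketch of such a program (here the two test angles named in the exercise are hardcoded for reproducibility; the lab version would read the angle from the keyboard with java.util.Scanner as the exercise asks):

```java
public class TrigCheck {
    // sin^2 x + cos^2 x, which should be very close to 1 for any x
    static double check(double x) {
        double s = Math.sin(x), c = Math.cos(x);
        return s * s + c * c;
    }
    public static void main(String[] args) {
        // the two angles from the exercise, in radians
        for (double x : new double[]{3.5, 2.3e-3}) {
            System.out.println("x = " + x
                + "  sin = " + Math.sin(x)
                + "  cos = " + Math.cos(x)
                + "  sin^2 + cos^2 = " + check(x));
        }
    }
}
```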


In this section we will take a look at the first method that can be used to find a particular solution to a nonhomogeneous differential equation.

\[ y'' + p(t)y' + q(t)y = g(t) \]

One of the main advantages of this method is that it reduces the problem down to an algebra problem. The algebra can get messy on occasion, but for most of the problems it will not be terribly difficult. Another nice thing about this method is that the complementary solution will not be explicitly required, although as we will see knowledge of the complementary solution will be needed in some cases and so we’ll generally find that as well.

There are two disadvantages to this method. First, it will only work for a fairly small class of \(g(t)\)'s. The class of \(g(t)\)'s for which the method works does include some of the more common functions; however, there are many functions out there for which undetermined coefficients simply won't work. Second, it is generally only useful for constant coefficient differential equations.

The method is quite simple. All that we need to do is look at \(g(t)\) and make a guess as to the form of \(Y_P(t)\), leaving the coefficient(s) undetermined (and hence the name of the method). Plug the guess into the differential equation and see if we can determine values of the coefficients. If we can determine values for the coefficients then we guessed correctly; if we can't find values for the coefficients then we guessed incorrectly.

It’s usually easier to see this method in action rather than to try and describe it, so let’s jump into some examples.

The point here is to find a particular solution, however the first thing that we’re going to do is find the complementary solution to this differential equation. Recall that the complementary solution comes from solving,

The characteristic equation for this differential equation and its roots are.

The complementary solution is then,

At this point the reason for doing this first will not be apparent, however we want you in the habit of finding it before we start the work to find a particular solution. Eventually, as we’ll see, having the complementary solution in hand will be helpful and so it’s best to be in the habit of finding it first prior to doing the work for undetermined coefficients.

Now, let’s proceed with finding a particular solution. As mentioned prior to the start of this example we need to make a guess as to the form of a particular solution to this differential equation. Since (g(t)) is an exponential and we know that exponentials never just appear or disappear in the differentiation process it seems that a likely form of the particular solution would be

Now, all that we need to do is do a couple of derivatives, plug this into the differential equation and see if we can determine what (A) needs to be.

Plugging into the differential equation gives

So, in order for our guess to be a solution we will need to choose (A) so that the coefficients of the exponentials on either side of the equal sign are the same. In other words we need to choose (A) so that,

\[ -7A = 3 \quad\Rightarrow\quad A = -\frac{3}{7} \]

Okay, we found a value for the coefficient. This means that we guessed correctly. A particular solution to the differential equation is then,
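The particular solution just found can also be checked numerically. A sketch (the differential equation shown is my assumption, reconstructed to be consistent with the \(-7A = 3\) step above; the class name is illustrative): confirm that \(Y_P(t) = -\frac{3}{7}e^{5t}\) satisfies \(y'' - 4y' - 12y = 3e^{5t}\):

```java
public class ParticularCheck {
    // Assumed ODE (consistent with -7A = 3): y'' - 4y' - 12y = 3 e^{5t}
    // Candidate particular solution: Y_P(t) = -(3/7) e^{5t}
    static double lhs(double t) {
        double y = -3.0 / 7.0 * Math.exp(5 * t);
        double yp = 5 * y;      // first derivative of A e^{5t}
        double ypp = 25 * y;    // second derivative
        return ypp - 4 * yp - 12 * y;
    }
    public static void main(String[] args) {
        double t = 0.3;  // arbitrary test point
        // residual: lhs minus the forcing term, should be essentially zero
        System.out.println(lhs(t) - 3 * Math.exp(5 * t));
    }
}
```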

Before proceeding any further let's again note that we started off the solution above by finding the complementary solution. This is not technically part of the method of Undetermined Coefficients; however, as we'll eventually see, having this in hand before we make our guess for the particular solution can save us a lot of work and/or headache. Finding the complementary solution first is simply a good habit to have, so we'll try to get you in the habit over the course of the next few examples. At this point do not worry about why it is a good habit. We'll eventually see why it is a good habit.

Now, back to the work at hand. Notice in the last example that we kept saying “a” particular solution, not “the” particular solution. This is because there are other possibilities out there for the particular solution; we've just managed to find one of them. Any of them will work when it comes to writing down the general solution to the differential equation.

Speaking of which… This section is devoted to finding particular solutions, and most of the examples will be finding only the particular solution. However, we should do at least one full-blown IVP to make sure that we can say that we’ve done one.

We know that the general solution will be of the form,

[y(t) = y_c(t) + Y_P(t)]

and we already have both the complementary and particular solution from the first example so we don’t really need to do any extra work for this problem.

One of the more common mistakes in these problems is to find the complementary solution and then, because we’re probably in the habit of doing it, apply the initial conditions to the complementary solution to find the constants. This, however, is incorrect. The complementary solution is only the solution to the homogeneous differential equation; we are after a solution to the nonhomogeneous differential equation, and the initial conditions must be applied to that solution rather than to the complementary solution.

So, we need the general solution to the nonhomogeneous differential equation. Taking the complementary solution and the particular solution that we found in the previous example we get the following for a general solution and its derivative.

Now, apply the initial conditions to these.

Solving this system gives (c_1 = 2) and (c_2 = 1). The actual solution is then.

This will be the only IVP in this section so don’t forget how these are done for nonhomogeneous differential equations!

Let’s take a look at another example that will give the second type of (g(t)) for which undetermined coefficients will work.

Again, let’s note that we should probably find the complementary solution before we proceed onto the guess for a particular solution. However, because the homogeneous differential equation for this example is the same as that for the first example we won’t bother with that here.

Now, let’s take our experience from the first example and apply that here. The first example had an exponential function in the (g(t)) and our guess was an exponential. This differential equation has a sine so let’s try the following guess for the particular solution.

[Y_P(t) = A\sin(2t)]

Differentiating and plugging into the differential equation gives,

[ - 4A\sin(2t) - 4\left(2A\cos(2t)\right) - 12\left(A\sin(2t)\right) = \sin(2t)]

Collecting like terms yields

[ - 16A\sin(2t) - 8A\cos(2t) = \sin(2t)]

We need to pick (A) so that we get the same function on both sides of the equal sign. This means that the coefficients of the sines and cosines must be equal. Or,

[\cos(2t):\; -8A = 0 \hspace{0.5in} \sin(2t):\; -16A = 1]

Notice two things. First, since there is no cosine on the right hand side this means that the coefficient must be zero on that side. More importantly we have a serious problem here. In order for the cosine to drop out, as it must in order for the guess to satisfy the differential equation, we need to set (A = 0), but if (A = 0), the sine will also drop out and that can’t happen. Likewise, choosing (A) to keep the sine around will also keep the cosine around.

What this means is that our initial guess was wrong. If we get multiple values of the same constant or are unable to find the value of a constant then we have guessed wrong.

One of the nicer aspects of this method is that when we guess wrong our work will often suggest a fix. In this case the problem was the cosine that cropped up. So, to counter this let’s add a cosine to our guess. Our new guess is

[Y_P(t) = A\cos(2t) + B\sin(2t)]

Plugging this into the differential equation and collecting like terms gives,

[\begin{align*} - 4A\cos(2t) - 4B\sin(2t) - 4\left( - 2A\sin(2t) + 2B\cos(2t)\right) - 12\left(A\cos(2t) + B\sin(2t)\right) & = \sin(2t)\\ \left( - 4A - 8B - 12A\right)\cos(2t) + \left( - 4B + 8A - 12B\right)\sin(2t) & = \sin(2t)\\ \left( - 16A - 8B\right)\cos(2t) + \left(8A - 16B\right)\sin(2t) & = \sin(2t)\end{align*}]

Now, set the coefficients equal,

[\cos(2t):\; -16A - 8B = 0 \hspace{0.5in} \sin(2t):\; 8A - 16B = 1]

Solving this system gives us

[A = \frac{1}{40} \hspace{0.5in} B = -\frac{1}{20}]

We found constants and this time we guessed correctly. A particular solution to the differential equation is then,

[Y_P(t) = \frac{1}{40}\cos(2t) - \frac{1}{20}\sin(2t)]
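As with the exponential case, the algebra here can be verified numerically. This is a sketch under the same assumption about the left-hand side, (y'' - 4y' - 12y), which is not restated in this part of the notes:

```python
import math

# Assumption: the equation for this example is y'' - 4y' - 12y = sin(2t).
A, B = 1 / 40, -1 / 20  # coefficients found above

def residual(t):
    """LHS minus RHS at Y_P(t) = A cos(2t) + B sin(2t), closed-form derivatives."""
    c, s = math.cos(2 * t), math.sin(2 * t)
    y = A * c + B * s
    dy = -2 * A * s + 2 * B * c
    d2y = -4 * A * c - 4 * B * s
    return (d2y - 4 * dy - 12 * y) - s

print(max(abs(residual(t)) for t in (0.0, 0.7, 1.5, 3.0)))
```

Note that the check also confirms the collected coefficient equations: (-16A - 8B = 0) and (8A - 16B = 1) hold for these values.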

Notice that if we had had a cosine instead of a sine in the last example then our guess would have been the same. In fact, if both a sine and a cosine had shown up, the same guess would still work, as we will see.

Let’s take a look at the third and final type of basic (g(t)) that we can have. There are other types of (g(t)) that we can have, but as we will see they will all come back to the two types that we’ve already done, as well as the next one.

Once again, we will generally want the complementary solution in hand first, but again we’re working with the same homogeneous differential equation (you’ll eventually see why we keep working with the same homogeneous problem) so we’ll again just refer to the first example.

For this example, (g(t)) is a cubic polynomial. For this we will need the following guess for the particular solution.

[Y_P(t) = At^3 + Bt^2 + Ct + D]

Notice that even though (g(t)) doesn’t have a (t^2) in it our guess will still need one! So, differentiate and plug into the differential equation.

[\begin{align*}6At + 2B - 4\left(3At^2 + 2Bt + C\right) - 12\left(At^3 + Bt^2 + Ct + D\right) & = 2t^3 - t + 3\\ - 12At^3 + \left( - 12A - 12B\right)t^2 + \left(6A - 8B - 12C\right)t + 2B - 4C - 12D & = 2t^3 - t + 3\end{align*}]

Now, as we’ve done in the previous examples, we will need the coefficients of the terms on both sides of the equal sign to be the same, so set coefficients equal and solve,

[t^3:\; -12A = 2 \hspace{0.5in} t^2:\; -12A - 12B = 0 \hspace{0.5in} t^1:\; 6A - 8B - 12C = -1 \hspace{0.5in} t^0:\; 2B - 4C - 12D = 3]

Notice that in this case it was very easy to solve for the constants. The first equation gave (A). Then once we knew (A) the second equation gave (B), etc. A particular solution for this differential equation is then

[Y_P(t) = -\frac{1}{6}t^3 + \frac{1}{6}t^2 - \frac{1}{9}t - \frac{5}{27}]
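The forward substitution just described is mechanical enough to script and check. A sketch, again assuming the left-hand side in these examples is (y'' - 4y' - 12y) (an assumption; the equation is not restated here) with (g(t) = 2t^3 - t + 3):

```python
# Solve the triangular system from matching coefficients, top down:
#   t^3: -12A = 2          t^2: -12A - 12B = 0
#   t^1: 6A - 8B - 12C = -1    t^0: 2B - 4C - 12D = 3
A = 2 / -12
B = -A
C = (6 * A - 8 * B + 1) / 12
D = (2 * B - 4 * C - 3) / 12

def residual(t):
    """y'' - 4y' - 12y - (2t^3 - t + 3) for Y_P = At^3 + Bt^2 + Ct + D."""
    y = ((A * t + B) * t + C) * t + D
    dy = (3 * A * t + 2 * B) * t + C
    d2y = 6 * A * t + 2 * B
    return (d2y - 4 * dy - 12 * y) - (2 * t**3 - t + 3)

print(A, B, C, D)  # -1/6, 1/6, -1/9, -5/27 as decimals
```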

Now that we’ve gone over the three basic kinds of functions that we can use undetermined coefficients on let’s summarize.

(g(t)) | (Y_P(t)) guess

(ae^{\beta t}) | (Ae^{\beta t})
(a\cos(\beta t)) | (A\cos(\beta t) + B\sin(\beta t))
(b\sin(\beta t)) | (A\cos(\beta t) + B\sin(\beta t))
(a\cos(\beta t) + b\sin(\beta t)) | (A\cos(\beta t) + B\sin(\beta t))
(n)th degree polynomial | (A_n t^n + A_{n-1}t^{n-1} + \cdots + A_1 t + A_0)

Notice that there are really only three kinds of functions given above. If you think about it the single cosine and single sine functions are really special cases of the case where both the sine and cosine are present. Also, we have not yet justified the guess for the case where both a sine and a cosine show up. We will justify this later.

We now need to move on to some more complicated functions. The more complicated functions arise by taking products and sums of the basic kinds of functions. Let’s first look at products.

You’re probably getting tired of the opening comment, but finding the complementary solution first really is a good idea. We’ve already done the work in the first example, though, so we won’t do it again here. We promise that eventually you’ll see why we keep using the same homogeneous problem and why we say it’s a good idea to have the complementary solution in hand first. At this point all we’re trying to do is reinforce the habit of finding the complementary solution first.

Okay, let’s start off by writing down the guesses for the individual pieces of the function. The guess for the (t) would be

while the guess for the exponential would be

Now, since we’ve got a product of two functions it seems like taking a product of the guesses for the individual pieces might work. Doing this would give

However, we will have problems with this. As we will see, when we plug our guess into the differential equation we will only get two equations out of this. The problem is that with this guess we’ve got three unknown constants. With only two equations we won’t be able to solve for all the constants.

This is easy to fix however. Let’s notice that we could do the following

If we multiply the (C) through, we can see that the guess can be written in such a way that there are really only two constants. So, we will use the following for our guess.

Notice that this is nothing more than the guess for the (t) with an exponential tacked on for good measure.

Now that we’ve got our guess, let’s differentiate, plug into the differential equation and collect like terms.

Note that when we’re collecting like terms we want the coefficient of each term to have only constants in it. Following this rule we will get two terms when we collect like terms. Now, set coefficients equal.

A particular solution for this differential equation is then

This last example illustrated the general rule that we will follow when products involve an exponential. When a product involves an exponential we will first strip out the exponential and write down the guess for the portion of the function without the exponential, then we will go back and tack on the exponential without any leading coefficient.

Let’s take a look at some more products. In the interest of brevity we will just write down the guess for a particular solution and not go through all the details of finding the constants. Also, because we aren’t going to give an actual differential equation we can’t deal with finding the complementary solution first.

1. (g(t) = 16e^{7t}\sin(10t))
2. (g(t) = \left(9t^2 - 103t\right)\cos t)
3. (g(t) = -e^{-2t}\left(3 - 5t\right)\cos(9t))

So, we have an exponential in the function. Remember the rule. We will ignore the exponential and write down a guess for (16\sin(10t)), then put the exponential back in.

The guess for the sine is

[A\cos(10t) + B\sin(10t)]

Now, for the actual guess for the particular solution we’ll take the above guess and tack an exponential onto it. This gives,

[Y_P(t) = e^{7t}\left(A\cos(10t) + B\sin(10t)\right)]

One final note before we move onto the next part. The 16 in front of the function has absolutely no bearing on our guess. Any constants multiplying the whole function are ignored.

We will start this one the same way that we initially started the previous example. The guess for the polynomial is

[At^2 + Bt + C]

and the guess for the cosine is

[D\cos t + E\sin t]

If we multiply the two guesses we get,

[\left(At^2 + Bt + C\right)\left(D\cos t + E\sin t\right)]

Let’s simplify things up a little. First multiply the polynomial through as follows.

[\begin{align*}\left(At^2 + Bt + C\right)\left(D\cos t\right) + \left(At^2 + Bt + C\right)\left(E\sin t\right)\\ \left(ADt^2 + BDt + CD\right)\cos t + \left(AEt^2 + BEt + CE\right)\sin t\end{align*}]

Notice that everywhere one of the unknown constants occurs it is in a product of unknown constants. This means that if we went through and used this as our guess the system of equations that we would need to solve for the unknown constants would have products of the unknowns in them. These types of systems are generally very difficult to solve.

So, to avoid this we will do the same thing that we did in the previous example. Everywhere we see a product of constants we will rename it and call it a single constant. The guess that we’ll use for this function will be.

[Y_P(t) = \left(At^2 + Bt + C\right)\cos t + \left(Dt^2 + Et + F\right)\sin t]

This is a general rule that we will use when faced with a product of a polynomial and a trig function. We write down the guess for the polynomial and then multiply that by a cosine. We then write down the guess for the polynomial again, using different coefficients, and multiply this by a sine.
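Because the rule just stated is completely mechanical, it can be expressed as a small routine that builds the guess as a string. The function name and the A, B, C, … lettering scheme here are illustrative choices, not notation from the notes:

```python
def poly_trig_guess(degree, arg="t"):
    """Guess for (degree-n polynomial) times cos(arg) or sin(arg), per the rule:
    a polynomial guess times cosine, plus a second polynomial guess with
    fresh coefficient letters times sine."""
    letters = "ABCDEFGHIJ"
    n = degree + 1

    def poly(coeffs):
        terms = []
        for i, c in enumerate(coeffs):
            power = degree - i
            if power == 0:
                terms.append(c)
            elif power == 1:
                terms.append(f"{c}t")
            else:
                terms.append(f"{c}t^{power}")
        return " + ".join(terms)

    return f"({poly(letters[:n])})cos({arg}) + ({poly(letters[n:2 * n])})sin({arg})"

print(poly_trig_guess(2))        # (At^2 + Bt + C)cos(t) + (Dt^2 + Et + F)sin(t)
print(poly_trig_guess(1, "9t"))  # (At + B)cos(9t) + (Ct + D)sin(9t)
```

The second call reproduces the guess used in the next part of this example.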

This final part has all three pieces to it. First, we will ignore the exponential and write down a guess for (\left(3 - 5t\right)\cos(9t)).

The minus sign can also be ignored. The guess for this is

[\left(At + B\right)\cos(9t) + \left(Ct + D\right)\sin(9t)]

Now, tack an exponential back on and we’re done.

[Y_P(t) = e^{-2t}\left(\left(At + B\right)\cos(9t) + \left(Ct + D\right)\sin(9t)\right)]

Notice that we put the exponential on both terms.

There are a couple of general rules that you need to remember for products.

If (g(t)) contains an exponential, ignore it and write down the guess for the remainder. Then tack the exponential back on without any leading coefficient.

For products of polynomials and trig functions, write down the guess for the polynomial and multiply it by a cosine, then write down the guess for the polynomial again, with different coefficients, and multiply it by a sine.

If you can remember these two rules you can’t go wrong with products. Writing down the guesses for products is usually not that difficult. The difficulty arises when you need to actually find the constants.

Now, let’s take a look at sums of the basic components and/or products of the basic components. To do this we’ll need the following fact.

If (Y_{P_1}(t)) is a particular solution for

[y'' + p(t)y' + q(t)y = g_1(t)]

and if (Y_{P_2}(t)) is a particular solution for

[y'' + p(t)y' + q(t)y = g_2(t)]

then (Y_{P_1}(t) + Y_{P_2}(t)) is a particular solution for

[y'' + p(t)y' + q(t)y = g_1(t) + g_2(t)]

This fact can be used both to find particular solutions to differential equations that have sums in them and to write down guesses for functions that have sums in them.
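The superposition fact is easy to check numerically. The sketch below reuses the two particular solutions found earlier and, as before, assumes the left-hand side in those examples was (y'' - 4y' - 12y) (my assumption; the equations are not restated in this part of the notes):

```python
import math

def lhs(y, dy, d2y):
    """Assumed operator y'' - 4y' - 12y."""
    return d2y - 4 * dy - 12 * y

def exp_part(t):
    """Y_P1 = -(3/7) e^{5t}, returned as (y, y', y'') in closed form."""
    e = -3 / 7 * math.exp(5 * t)
    return e, 5 * e, 25 * e

def trig_part(t):
    """Y_P2 = (1/40) cos(2t) - (1/20) sin(2t), as (y, y', y'')."""
    A, B = 1 / 40, -1 / 20
    c, s = math.cos(2 * t), math.sin(2 * t)
    return (A * c + B * s, -2 * A * s + 2 * B * c, -4 * A * c - 4 * B * s)

def sum_residual(t):
    """L[Y_P1 + Y_P2] - (3 e^{5t} + sin(2t)); zero if superposition holds."""
    total = lhs(*(a + b for a, b in zip(exp_part(t), trig_part(t))))
    return total - (3 * math.exp(5 * t) + math.sin(2 * t))

print(max(abs(sum_residual(t)) for t in (0.0, 0.5, 1.0)))
```

The residual vanishes because the operator is linear: it acts on the sum term by term.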

This example is the reason that we’ve been using the same homogeneous differential equation for all the previous examples. There is very little to do with this problem. All that we need to do is go back to the appropriate examples above, get the particular solution from each example, and add them all together.

Let’s take a look at a couple of other examples. As with the products we’ll just get guesses here and not worry about actually finding the coefficients.

1. (g(t) = 4\cos(6t) - 9\sin(6t))
2. (g(t) = -2\sin t + \sin(14t) - 5\cos(14t))
3. (g(t) = e^{7t} + 6)
4. (g(t) = 6t^2 - 7\sin(3t) + 9)
5. (g(t) = 10e^{t} - 5te^{-8t} + 2e^{-8t})
6. (g(t) = t^2\cos t - 5t\sin t)
7. (g(t) = 5e^{-3t} + e^{-3t}\cos(6t) - \sin(6t))

This first one we’ve actually already told you how to do. It is in the table of the basic functions. However, we wanted to justify the guess that we put down there. Using the fact on sums of functions we would be tempted to write down a guess for the cosine and a guess for the sine. This would give,

[A\cos(6t) + B\sin(6t) + C\cos(6t) + D\sin(6t)]

So, we would get a cosine from each guess and a sine from each guess. The problem with this as a guess is that we are only going to get two equations to solve after plugging into the differential equation and yet we have 4 unknowns. We will never be able to solve for each of the constants.

To fix this notice that we can combine some terms as follows,

[\left(A + C\right)\cos(6t) + \left(B + D\right)\sin(6t)]

Upon doing this we can see that we’ve really got a single cosine with a coefficient and a single sine with a coefficient and so we may as well just use

[Y_P(t) = A\cos(6t) + B\sin(6t)]

The general rule of thumb for writing down guesses for functions that involve sums is to always combine like terms into single terms with single coefficients. This will greatly simplify the work required to find the coefficients.

For this one we will get two sets of sines and cosines. This arises because they have two different arguments. We will get one set for the sine with just a (t) as its argument, and we’ll get another set for the sine and cosine with (14t) as their argument.

The guess for this function is

[Y_P(t) = A\cos t + B\sin t + C\cos(14t) + D\sin(14t)]

The main point of this problem is dealing with the constant. But that isn’t too bad. We just wanted to make sure that an example of that is somewhere in the notes. If you recall that a constant is nothing more than a zeroth degree polynomial the guess becomes clear.

The guess for this function is

[Y_P(t) = Ae^{7t} + B]

This one can be a little tricky if you aren’t paying attention. Let’s first rewrite the function

[\begin{align*}g(t) & = 6t^2 - 7\sin(3t) + 9\\ g(t) & = 6t^2 + 9 - 7\sin(3t)\end{align*}]

All we did was move the 9. However, upon doing that we see that the function is really a sum of a quadratic polynomial and a sine. The guess for this is then

[Y_P(t) = At^2 + Bt + C + D\cos(3t) + E\sin(3t)]

If we don’t do this and treat the function as the sum of three terms we would get

[At^2 + Bt + C + D\cos(3t) + E\sin(3t) + G]

and as with the first part in this example we would end up with two terms that are essentially the same (the (C) and the (G)) and so would need to be combined, an added step that isn’t really necessary if we first rewrite the function.

Look for problems where rearranging the function can simplify the initial guess.

So, this looks like we’ve got a sum of three terms here. Let’s write down a guess for that,

[Ae^{t} + \left(Bt + C\right)e^{-8t} + De^{-8t}]

Notice however that if we were to multiply the exponential in the second term through we would end up with two terms that are essentially the same and would need to be combined. This is a case where the guess for one term is completely contained in the guess for a different term. When this happens we just drop the guess that’s already included in the other term.

So, the guess here is actually,

[Y_P(t) = Ae^{t} + \left(Bt + C\right)e^{-8t}]

Notice that this arose because we had two terms in our (g(t)) whose only difference was the polynomial that sat in front of them. When this happens we look at the term that contains the largest degree polynomial, write down the guess for that and don’t bother writing down the guess for the other term as that guess will be completely contained in the first guess.

In this case we’ve got two terms whose guess without the polynomials in front of them would be the same. Therefore, we will take the one with the largest degree polynomial in front of it and write down the guess for that one and ignore the other term. So, the guess for the function is

[Y_P(t) = \left(At^2 + Bt + C\right)\cos t + \left(Dt^2 + Et + F\right)\sin t]

This last part is designed to make sure you understand the general rule that we used in the last two parts. This time there really are three terms and we will need a guess for each term. The guess here is

[Y_P(t) = Ae^{-3t} + e^{-3t}\left(B\cos(6t) + C\sin(6t)\right) + D\cos(6t) + E\sin(6t)]

We can only combine guesses if they are identical up to the constant. So, we can’t combine the first exponential with the second because the second is really multiplied by a cosine and a sine and so the two exponentials are in fact different functions. Likewise, the last sine and cosine can’t be combined with those in the middle term because the sine and cosine in the middle term are in fact multiplied by an exponential and so are different.

So, when dealing with sums of functions make sure that you look for identical guesses that may or may not be contained in other guesses and combine them. This will simplify your work later on.

We have one last topic in this section that needs to be dealt with. In the first few examples we were constantly harping on the usefulness of having the complementary solution in hand before making the guess for a particular solution. We never gave any reason for this other than “trust us”. It is now time to see why having the complementary solution in hand first is useful. This is best shown with an example so let’s jump into one.

This problem seems almost too simple to be given this late in the section. This is especially true given the ease of finding a particular solution for (g(t))’s that are just exponential functions. Also, because the point of this example is to illustrate why it is generally a good idea to have the complementary solution in hand first, let’s go ahead and recall the complementary solution. Here it is,

Now, without worrying about the complementary solution for a couple more seconds let’s go ahead and get to work on the particular solution. There is not much to the guess here. From our previous work we know that the guess for the particular solution should be,

Plugging this into the differential equation gives,

Hmmmm…. Something seems wrong here. Clearly an exponential can’t be zero. So, what went wrong? We finally need the complementary solution. Notice that the second term in the complementary solution (listed above) is exactly our guess for the form of the particular solution and now recall that both portions of the complementary solution are solutions to the homogeneous differential equation,

In other words, we had better have gotten zero by plugging our guess into the differential equation, it is a solution to the homogeneous differential equation!

So, how do we fix this? The way that we fix this is to add a (t) to our guess as follows.

Plugging this into our differential equation gives,

Now, we can set coefficients equal.

So, the particular solution in this case is,

So, what did we learn from this last example? While technically we don’t need the complementary solution to do undetermined coefficients, you can go through a lot of work only to figure out at the end that you needed to add a (t) to the guess because it appeared in the complementary solution. This work is avoidable if we first find the complementary solution, compare our guess to it, and see if any portion of the guess shows up in the complementary solution.

If a portion of your guess does show up in the complementary solution then we’ll need to modify that portion of the guess by adding in a (t) to the portion of the guess that is causing the problems. We do need to be a little careful and make sure that we add the (t) in the correct place however. The following set of examples will show you how to do this.
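To make the failure-and-fix above concrete, here is a numeric sketch. The equation is not restated in this part of the notes, so the sketch assumes it was (y'' - 4y' - 12y = e^{6t}), whose complementary solution contains (e^{6t}):

```python
import math

def L(y, dy, d2y):
    """Assumed left-hand side y'' - 4y' - 12y (an assumption, not restated)."""
    return d2y - 4 * dy - 12 * y

def apply_to_exp(t):
    """L applied to the naive guess e^{6t} (closed-form derivatives)."""
    e = math.exp(6 * t)
    return L(e, 6 * e, 36 * e)

def apply_to_t_exp(t):
    """L applied to the fixed guess t e^{6t}."""
    e = math.exp(6 * t)
    return L(t * e, e * (1 + 6 * t), e * (12 + 36 * t))

# e^{6t} solves the homogeneous equation, so L annihilates it; no choice of A
# in A e^{6t} can ever produce the forcing term.
assert all(abs(apply_to_exp(t)) < 1e-8 for t in (0.0, 0.5, 1.0))
# By contrast L(t e^{6t}) = 8 e^{6t}, so A t e^{6t} works once 8A = 1.
A = 1 / 8
print(apply_to_t_exp(0.0))  # 8.0, i.e. 8 e^{0}
```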

1. (y'' + 3y' - 28y = 7t + e^{-7t} - 1)
2. (y'' - 100y = 9t^2e^{10t} + \cos t - t\sin t)
3. (4y'' + y = e^{-2t}\sin\left(\frac{t}{2}\right) + 6t\cos\left(\frac{t}{2}\right))
4. (4y'' + 16y' + 17y = e^{-2t}\sin\left(\frac{t}{2}\right) + 6t\cos\left(\frac{t}{2}\right))
5. (y'' + 8y' + 16y = e^{-4t} + \left(t^2 + 5\right)e^{-4t})

In these solutions we’ll leave the details of checking the complementary solution to you.

The complementary solution is

[y_c(t) = c_1 e^{4t} + c_2 e^{-7t}]

Remembering to put the “-1” with the (7t) gives a first guess for the particular solution.

[Y_P(t) = At + B + Ce^{-7t}]

Notice that the last term in the guess is the last term in the complementary solution. The first two terms however aren’t a problem and don’t appear in the complementary solution. Therefore, we will only add a (t) onto the last term.

The correct guess for the form of the particular solution is

[Y_P(t) = At + B + Cte^{-7t}]
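For this first part we can push all the way through and find the constants, which also confirms that the modified guess (At + B + Cte^{-7t}) works. The values below are computed here rather than quoted from the notes:

```python
import math

# For y'' + 3y' - 28y = 7t + e^{-7t} - 1 with guess Y_P = At + B + C t e^{-7t},
# matching coefficients gives -28A = 7, 3A - 28B = -1, and -11C = 1
# (the t e^{-7t} terms cancel because e^{-7t} solves the homogeneous equation).
A = -7 / 28
B = (3 * A + 1) / 28
C = -1 / 11

def residual(t):
    """y'' + 3y' - 28y - (7t + e^{-7t} - 1), with closed-form derivatives."""
    e = math.exp(-7 * t)
    y = A * t + B + C * t * e
    dy = A + C * e * (1 - 7 * t)
    d2y = C * e * (49 * t - 14)
    return (d2y + 3 * dy - 28 * y) - (7 * t + e - 1)

print(A, B, C)  # -0.25, 1/112, -1/11 as decimals
```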

The complementary solution is

[y_c(t) = c_1 e^{10t} + c_2 e^{-10t}]

A first guess for the particular solution is

[Y_P(t) = \left(At^2 + Bt + C\right)e^{10t} + \left(Dt + E\right)\cos t + \left(Ft + G\right)\sin t]

Notice that if we multiplied the exponential term through the parenthesis that we would end up getting part of the complementary solution showing up. Since the problem part arises from the first term the whole first term will get multiplied by (t). The second and third terms are okay as they are.

The correct guess for the form of the particular solution in this case is.

[Y_P(t) = t\left(At^2 + Bt + C\right)e^{10t} + \left(Dt + E\right)\cos t + \left(Ft + G\right)\sin t]

So, in general, if you were to multiply out a guess and if any term in the result shows up in the complementary solution, then the whole term will get a (t) not just the problem portion of the term.

The complementary solution is

[y_c(t) = c_1\cos\left(\frac{t}{2}\right) + c_2\sin\left(\frac{t}{2}\right)]

A first guess for the particular solution is

[Y_P(t) = e^{-2t}\left(A\cos\left(\frac{t}{2}\right) + B\sin\left(\frac{t}{2}\right)\right) + \left(Ct + D\right)\cos\left(\frac{t}{2}\right) + \left(Et + F\right)\sin\left(\frac{t}{2}\right)]

In this case both the second and third terms contain portions of the complementary solution. The first term doesn’t however, since upon multiplying out, both the sine and the cosine would have an exponential with them and that isn’t part of the complementary solution. We only need to worry about terms showing up in the complementary solution if the only difference between the complementary solution term and the particular guess term is the constant in front of them.

So, in this case the second and third terms will get a (t) while the first won’t.

The correct guess for the form of the particular solution is

[Y_P(t) = e^{-2t}\left(A\cos\left(\frac{t}{2}\right) + B\sin\left(\frac{t}{2}\right)\right) + t\left(Ct + D\right)\cos\left(\frac{t}{2}\right) + t\left(Et + F\right)\sin\left(\frac{t}{2}\right)]

To get this problem we changed the differential equation from the last example and left the (g(t)) alone. The complementary solution this time is

[y_c(t) = c_1 e^{-2t}\cos\left(\frac{t}{2}\right) + c_2 e^{-2t}\sin\left(\frac{t}{2}\right)]

As with the last part, a first guess for the particular solution is

[Y_P(t) = e^{-2t}\left(A\cos\left(\frac{t}{2}\right) + B\sin\left(\frac{t}{2}\right)\right) + \left(Ct + D\right)\cos\left(\frac{t}{2}\right) + \left(Et + F\right)\sin\left(\frac{t}{2}\right)]

This time however it is the first term that causes problems and not the second or third. In fact, the first term is exactly the complementary solution and so it will need a (t). Recall that we will only have a problem with a term in our guess if it only differs from the complementary solution by a constant. The second and third terms in our guess don’t have the exponential in them and so they don’t differ from the complementary solution by only a constant.

The correct guess for the form of the particular solution is

[Y_P(t) = te^{-2t}\left(A\cos\left(\frac{t}{2}\right) + B\sin\left(\frac{t}{2}\right)\right) + \left(Ct + D\right)\cos\left(\frac{t}{2}\right) + \left(Et + F\right)\sin\left(\frac{t}{2}\right)]

The complementary solution is

[y_c(t) = c_1 e^{-4t} + c_2 te^{-4t}]

The two terms in (g(t)) are identical with the exception of a polynomial in front of them. This means that we only need to look at the term with the highest degree polynomial in front of it. A first guess for the particular solution is

[Y_P(t) = \left(At^2 + Bt + C\right)e^{-4t}]

Notice that if we multiplied the exponential term through the parenthesis the last two terms would be the complementary solution. Therefore, we will need to multiply this whole thing by a (t).

The next guess for the particular solution is then

[Y_P(t) = t\left(At^2 + Bt + C\right)e^{-4t}]

This still causes problems however. If we multiplied the (t) and the exponential through, the last term will still be in the complementary solution. In this case, unlike the previous ones, a (t) wasn’t sufficient to fix the problem. So, we will add in another (t) to our guess.

The correct guess for the form of the particular solution is

[Y_P(t) = t^2\left(At^2 + Bt + C\right)e^{-4t}]

Upon multiplying this out none of the terms are in the complementary solution and so it will be okay.

As this last set of examples has shown, we really should have the complementary solution in hand before even writing down the first guess for the particular solution. By doing this we can compare our guess to the complementary solution, and if any of the terms from the guess show up we will know that we’ll have problems. Once the problem is identified we can add a (t) to the problem term(s) and compare our new guess to the complementary solution. If there are no problems we can proceed; if there are problems, add in another (t) and compare again.
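The bookkeeping in these examples (compare, multiply by (t), compare again) can be sketched as a loop. Terms are encoded here as (t)-power / exponent / frequency triples; this encoding is my own illustration, not notation from the notes:

```python
def t_power_needed(poly_degree, beta, omega, comp_terms):
    """Smallest s such that t^s * (polynomial of degree poly_degree) * e^{beta t}
    * {cos,sin}(omega t) shares no monomial with the complementary solution.
    comp_terms is a set of (t_power, beta, omega) triples; omega == 0 means no
    trig factor. The loop mirrors "add another t and compare again"."""
    s = 0
    while any((j + s, beta, omega) in comp_terms
              for j in range(poly_degree + 1)):
        s += 1
    return s

# Last part above: y_c = c1 e^{-4t} + c2 t e^{-4t}, guess (At^2 + Bt + C)e^{-4t}.
comp = {(0, -4, 0), (1, -4, 0)}
print(t_power_needed(2, -4, 0, comp))  # 2: a single t is not enough here
```

A single (t) shifts the guess's monomials onto (te^{-4t}), which is still in the complementary solution, so the loop bumps the power once more.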

Can you see a general rule as to when a (t) will be needed and when a (t^2) will be needed for second order differential equations?