
5.5E: The Method of Undetermined Coefficients II (Exercises) - Mathematics


Q5.5.1

In Exercises 5.5.1-5.5.17 find a particular solution.

1. \(y''+3y'+2y=7\cos x-\sin x\)

2. \(y''+3y'+y=(2-6x)\cos x-9\sin x\)

3. \(y''+2y'+y=e^x(6\cos x+17\sin x)\)

4. \(y''+3y'-2y=-e^{2x}(5\cos2x+9\sin2x)\)

5. \(y''-y'+y=e^x(2+x)\sin x\)

6. \(y''+3y'-2y=e^{-2x}\left[(4+20x)\cos 3x+(26-32x)\sin 3x\right]\)

7. \(y''+4y=-12\cos2x-4\sin2x\)

8. \(y''+y=(-4+8x)\cos x+(8-4x)\sin x\)

9. \(4y''+y=-4\cos(x/2)-8x\sin(x/2)\)

10. \(y''+2y'+2y=e^{-x}(8\cos x-6\sin x)\)

11. \(y''-2y'+5y=e^x\left[(6+8x)\cos 2x+(6-8x)\sin2x\right]\)

12. \(y''+2y'+y=8x^2\cos x-4x\sin x\)

13. \(y''+3y'+2y=(12+20x+10x^2)\cos x+8x\sin x\)

14. \(y''+3y'+2y=(1-x-4x^2)\cos2x-(1+7x+2x^2)\sin2x\)

15. \(y''-5y'+6y=-e^x\left[(4+6x-x^2)\cos x-(2-4x+3x^2)\sin x\right]\)

16. \(y''-2y'+y=-e^x\left[(3+4x-x^2)\cos x+(3-4x-x^2)\sin x\right]\)

17. \(y''-2y'+2y=e^x\left[(2-2x-6x^2)\cos x+(2-10x+6x^2)\sin x\right]\)

Q5.5.2

In Exercises 5.5.18-5.5.21 find a particular solution and graph it.

18. \(y''+2y'+y=e^{-x}\left[(5-2x)\cos x-(3+3x)\sin x\right]\)

19. \(y''+9y=-6\cos 3x-12\sin 3x\)

20. \(y''+3y'+2y=(1-x-4x^2)\cos2x-(1+7x+2x^2)\sin2x\)

21. \(y''+4y'+3y=e^{-x}\left[(2+x+x^2)\cos x+(5+4x+2x^2)\sin x\right]\)

Q5.5.3

In Exercises 5.5.22-5.5.26 solve the initial value problem.

22. \(y''-7y'+6y=-e^x(17\cos x-7\sin x), \quad y(0)=4,\; y'(0)=2\)

23. \(y''-2y'+2y=-e^x(6\cos x+4\sin x), \quad y(0)=1,\; y'(0)=4\)

24. \(y''+6y'+10y=-40e^x\sin x, \quad y(0)=2,\quad y'(0)=-3\)

25. \(y''-6y'+10y=-e^{3x}(6\cos x+4\sin x), \quad y(0)=2,\quad y'(0)=7\)

26. \(y''-3y'+2y=e^{3x}\left[21\cos x-(11+10x)\sin x\right], \; y(0)=0, \quad y'(0)=6\)

Q5.5.4

In Exercises 5.5.27-5.5.32 use the principle of superposition to find a particular solution. Where indicated, solve the initial value problem.

27. \(y''-2y'-3y=4e^{3x}+e^x(\cos x-2\sin x)\)

28. \(y''+y=4\cos x-2\sin x+xe^x+e^{-x}\)

29. \(y''-3y'+2y=xe^x+2e^{2x}+\sin x\)

30. \(y''-2y'+2y=4xe^x\cos x+xe^{-x}+1+x^2\)

31. \(y''-4y'+4y=e^{2x}(1+x)+e^{2x}(\cos x-\sin x)+3e^{3x}+1+x\)

32. \(y''-4y'+4y=6e^{2x}+25\sin x, \quad y(0)=5,\; y'(0)=3\)

Q5.5.5

In Exercises 5.5.33-5.5.35 solve the initial value problem and graph the solution.

33. \(y''+4y=-e^{-2x}\left[(4-7x)\cos x+(2-4x)\sin x\right], \; y(0)=3, \quad y'(0)=1\)

34. \(y''+4y'+4y=2\cos2x+3\sin2x+e^{-x}, \quad y(0)=-1,\; y'(0)=2\)

35. \(y''+4y=e^x(11+15x)+8\cos2x-12\sin2x, \quad y(0)=3,\; y'(0)=5\)

Q5.5.6

36.

  1. Verify that if \[y_p=A(x)\cos\omega x+B(x)\sin\omega x\] where \(A\) and \(B\) are twice differentiable, then \[\begin{aligned} y_p'&=(A'+\omega B)\cos\omega x+(B'-\omega A)\sin\omega x\quad\mbox{and}\\ y_p''&=(A''+2\omega B'-\omega^2A)\cos\omega x+(B''-2\omega A'-\omega^2B)\sin\omega x.\end{aligned}\]
  2. Use the results of (a) to verify that \[\begin{aligned} ay_p''+by_p'+cy_p=&\left[(c-a\omega^2)A+b\omega B+2a\omega B'+bA'+aA''\right]\cos\omega x+\\ &\left[-b\omega A+(c-a\omega^2)B-2a\omega A'+bB'+aB''\right]\sin\omega x.\end{aligned}\]
  3. Use the results of (a) to verify that \[y_p''+\omega^2 y_p=(A''+2\omega B')\cos\omega x+(B''-2\omega A')\sin\omega x.\]
  4. Prove Theorem 5.5.2.
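Parts (a)-(c) are identities and can be spot-checked numerically. The sketch below (my own check, with the arbitrary sample choices \(A(x)=x^2\), \(B(x)=x^3\), \(\omega=2\)) compares both sides of the identity in part (c), using a central finite difference for the second derivative:

```python
import math

# Arbitrary sample choices to spot-check the identity in part (c):
# A(x) = x^2, B(x) = x^3, omega = 2.
omega = 2.0

def A(x):   return x**2
def Ap(x):  return 2*x      # A'
def App(x): return 2.0      # A''

def B(x):   return x**3
def Bp(x):  return 3*x**2   # B'
def Bpp(x): return 6*x      # B''

def yp(x):
    return A(x)*math.cos(omega*x) + B(x)*math.sin(omega*x)

def yp_second(x, h=1e-4):
    # second derivative of y_p by central finite differences
    return (yp(x + h) - 2*yp(x) + yp(x - h)) / h**2

x = 0.7
lhs = yp_second(x) + omega**2 * yp(x)
rhs = ((App(x) + 2*omega*Bp(x)) * math.cos(omega*x)
       + (Bpp(x) - 2*omega*Ap(x)) * math.sin(omega*x))
print(abs(lhs - rhs))  # small: only finite-difference error remains
```

Any twice-differentiable \(A\) and \(B\) would do here; the agreement up to discretization error is what the identity predicts.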

37. Let \(a\), \(b\), \(c\), and \(\omega\) be constants, with \(a\ne0\) and \(\omega>0\), and let

\[P(x)=p_0+p_1x+\cdots+p_kx^k \quad\text{and}\quad Q(x)=q_0+q_1x+\cdots+q_kx^k,\]

where at least one of the coefficients \(p_k\), \(q_k\) is nonzero, so \(k\) is the larger of the degrees of \(P\) and \(Q\).

  1. Show that if \(\cos\omega x\) and \(\sin\omega x\) are not solutions of the complementary equation \[ay''+by'+cy=0,\] then there are polynomials \[A(x)=A_0+A_1x+\cdots+A_kx^k \quad\text{and}\quad B(x)=B_0+B_1x+\cdots+B_kx^k \tag{A}\] such that \[\begin{array}{rcl} (c-a\omega^2)A+b\omega B+2a\omega B'+bA'+aA''&=&P\\ -b\omega A+(c-a\omega^2)B-2a\omega A'+bB'+aB''&=&Q, \end{array}\] where \((A_k,B_k)\), \((A_{k-1},B_{k-1})\), …, \((A_0,B_0)\) can be computed successively by solving the systems \[\begin{array}{rcl} (c-a\omega^2)A_k+b\omega B_k&=&p_k\\ -b\omega A_k+(c-a\omega^2)B_k&=&q_k, \end{array}\] and, if \(1\le r\le k\), \[\begin{array}{rcl} (c-a\omega^2)A_{k-r}+b\omega B_{k-r}&=&p_{k-r}+\cdots\\ -b\omega A_{k-r}+(c-a\omega^2)B_{k-r}&=&q_{k-r}+\cdots, \end{array}\] where the terms indicated by "\(\cdots\)" depend upon the previously computed coefficients with subscripts greater than \(k-r\). Conclude from this and Exercise 5.5.36b that \[y_p=A(x)\cos\omega x+B(x)\sin\omega x \tag{B}\] is a particular solution of \[ay''+by'+cy=P(x)\cos\omega x+Q(x)\sin\omega x.\]
  2. Conclude from Exercise 5.5.36c that the equation \[a(y''+\omega^2 y)=P(x)\cos\omega x+Q(x)\sin\omega x \tag{C}\] does not have a solution of the form (B) with \(A\) and \(B\) as in (A). Then show that there are polynomials \[A(x)=A_0x+A_1x^2+\cdots+A_kx^{k+1}\quad\text{and}\quad B(x)=B_0x+B_1x^2+\cdots+B_kx^{k+1}\] such that \[\begin{array}{rcl} a(A''+2\omega B')&=&P\\ a(B''-2\omega A')&=&Q, \end{array}\] where the pairs \((A_k,B_k)\), \((A_{k-1},B_{k-1})\), …, \((A_0,B_0)\) can be computed successively as follows: \[\begin{aligned} A_k&=-{q_k\over2a\omega(k+1)}\\ B_k&=\phantom{-}{p_k\over2a\omega(k+1)},\end{aligned}\] and, if \(k\ge 1\), \[\begin{aligned} A_{k-j}&=-{1\over2\omega}\left[{q_{k-j}\over a(k-j+1)}-(k-j+2)B_{k-j+1}\right]\\ B_{k-j}&=\phantom{-}{1\over2\omega}\left[{p_{k-j}\over a(k-j+1)}-(k-j+2)A_{k-j+1}\right]\end{aligned}\] for \(1\le j\le k\). Conclude that (B) with this choice of the polynomials \(A\) and \(B\) is a particular solution of (C).
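For the non-resonant case in part (a), the 2×2 systems are easy to solve directly. As an illustration (my choice of example, not part of the exercise), here is the \(k=0\) computation applied to Exercise 5.5.1:

```python
import math

# Exercise 5.5.1: y'' + 3y' + 2y = 7 cos x - sin x, so a, b, c = 1, 3, 2,
# omega = 1, and P, Q are the constants p0 = 7, q0 = -1 (k = 0).
a, b, c, omega = 1.0, 3.0, 2.0, 1.0
p0, q0 = 7.0, -1.0

# Constant-term system from part (a):
#    (c - a*omega^2)*A0 + (b*omega)*B0 = p0
#   -(b*omega)*A0 + (c - a*omega^2)*B0 = q0
m, w = c - a*omega**2, b*omega
det = m*m + w*w
A0 = (m*p0 - w*q0) / det
B0 = (w*p0 + m*q0) / det
print(A0, B0)  # 1.0 2.0, i.e. y_p = cos x + 2 sin x

# Verify by substituting y_p into the left side at an arbitrary point
x = 0.3
yp   =  A0*math.cos(x) + B0*math.sin(x)
dyp  = -A0*math.sin(x) + B0*math.cos(x)
ddyp = -A0*math.cos(x) - B0*math.sin(x)
assert abs(ddyp + 3*dyp + 2*yp - (7*math.cos(x) - math.sin(x))) < 1e-12
```

The determinant \((c-a\omega^2)^2+(b\omega)^2\) is nonzero exactly when \(\cos\omega x\), \(\sin\omega x\) are not solutions of the complementary equation, which is why the recursion never breaks down in this case.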

38. Show that Theorem 5.5.1 implies the next theorem:

Theorem \(\PageIndex{1}\)

Suppose \(\omega\) is a positive number and \(P\) and \(Q\) are polynomials. Let \(k\) be the larger of the degrees of \(P\) and \(Q\). Then the equation

\[ay''+by'+cy=e^{\lambda x}\left(P(x)\cos\omega x+Q(x)\sin\omega x\right)\]

has a particular solution

\[y_p=e^{\lambda x}\left(A(x)\cos\omega x+B(x)\sin\omega x\right), \tag{A}\]

where

\[A(x)=A_0+A_1x+\cdots+A_kx^k \quad\text{and}\quad B(x)=B_0+B_1x+\cdots+B_kx^k,\]

provided that \(e^{\lambda x}\cos\omega x\) and \(e^{\lambda x}\sin\omega x\) are not solutions of the complementary equation. The equation

\[a\left[y''-2\lambda y'+(\lambda^2+\omega^2)y\right]=e^{\lambda x}\left(P(x)\cos\omega x+Q(x)\sin\omega x\right)\]

\((\)for which \(e^{\lambda x}\cos\omega x\) and \(e^{\lambda x}\sin\omega x\) are solutions of the complementary equation\()\) has a particular solution of the form (A), where

\[A(x)=A_0x+A_1x^2+\cdots+A_kx^{k+1} \quad\text{and}\quad B(x)=B_0x+B_1x^2+\cdots+B_kx^{k+1}.\]
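As a numerical illustration of the resonant half of this statement (with \(\lambda=0\)), here is Exercise 5.5.19 solved with the leading-coefficient formulas of Exercise 5.5.37(b); the choice of example is mine:

```python
import math

# Resonant case: Exercise 5.5.19, y'' + 9y = -6 cos 3x - 12 sin 3x,
# so a = 1, omega = 3, p0 = -6, q0 = -12, k = 0.
a, omega = 1.0, 3.0
p0, q0 = -6.0, -12.0

# Leading coefficients from Exercise 5.5.37(b); for k = 0 the
# denominator 2*a*omega*(k+1) is just 2*a*omega.
A0 = -q0 / (2*a*omega)
B0 =  p0 / (2*a*omega)
print(A0, B0)  # 2.0 -1.0, i.e. y_p = x*(2 cos 3x - sin 3x)

def yp(x):
    # one extra power of x, as the resonant case requires
    return x*(A0*math.cos(omega*x) + B0*math.sin(omega*x))

def yp_second(x, h=1e-4):
    return (yp(x + h) - 2*yp(x) + yp(x - h)) / h**2

x = 0.4
residual = yp_second(x) + 9*yp(x) - (-6*math.cos(3*x) - 12*math.sin(3*x))
```

The residual is zero up to finite-difference error, confirming that \(y_p=x(2\cos 3x-\sin 3x)\) solves the equation.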

39. This exercise presents a method for evaluating the integral

\[y=\int e^{\lambda x}\left(P(x)\cos\omega x+Q(x)\sin\omega x\right)\,dx\]

where \(\omega\ne0\) and

\[P(x)=p_0+p_1x+\cdots+p_kx^k,\quad Q(x)=q_0+q_1x+\cdots+q_kx^k.\]

  1. Show that \(y=e^{\lambda x}u\), where \[u'+\lambda u=P(x)\cos\omega x+Q(x)\sin\omega x. \tag{A}\]
  2. Show that (A) has a particular solution of the form \[u_p=A(x)\cos\omega x+B(x)\sin\omega x,\] where \[A(x)=A_0+A_1x+\cdots+A_kx^k,\quad B(x)=B_0+B_1x+\cdots+B_kx^k,\] and the pairs of coefficients \((A_k,B_k)\), \((A_{k-1},B_{k-1})\), …, \((A_0,B_0)\) can be computed successively as the solutions of pairs of equations obtained by equating the coefficients of \(x^r\cos\omega x\) and \(x^r\sin\omega x\) for \(r=k\), \(k-1\), …, \(0\).
  3. Conclude that \[\int e^{\lambda x}\left(P(x)\cos\omega x+Q(x)\sin\omega x\right)\,dx = e^{\lambda x}\left(A(x)\cos\omega x+B(x)\sin\omega x\right)+c,\] where \(c\) is a constant of integration.

40. Use the method of Exercise 5.5.39 to evaluate the integral.

  1. \(\int x^{2}\cos x\,dx\)
  2. \(\int x^{2}e^{x}\cos x\,dx\)
  3. \(\int xe^{-x}\sin 2x\,dx\)
  4. \(\int x^{2}e^{-x}\sin x\,dx\)
  5. \(\int x^{3}e^{x}\sin x\,dx\)
  6. \(\int e^{x}\left[x\cos x-(1+3x)\sin x\right]\,dx\)
  7. \(\int e^{-x}\left[(1+x^{2})\cos x+(1-x^{2})\sin x\right]\,dx\)
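For instance, applying the method of Exercise 5.5.39 to part (a) (here \(\lambda=0\), so \(u'=x^2\cos x\)) and equating coefficients gives \(A(x)=2x\) and \(B(x)=x^2-2\). The antiderivative below is my worked result for that case, verified by differentiation:

```python
import math

# Candidate antiderivative for 40(a) from the method of Exercise 5.5.39:
#   integral of x^2 cos x dx = 2x cos x + (x^2 - 2) sin x + c
def F(x):
    return 2*x*math.cos(x) + (x**2 - 2)*math.sin(x)

def f(x):
    return x**2 * math.cos(x)

# Check F' = f by central differences at several points
h = 1e-6
err = max(abs((F(x + h) - F(x - h))/(2*h) - f(x)) for x in (0.0, 0.5, 1.3, 2.0))
print(err < 1e-6)  # True
```

The same pattern (solve a small triangular system of coefficient equations, then differentiate to check) works for the remaining parts, with the extra factor \(e^{\lambda x}\) carried along when \(\lambda\ne0\).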



CLASSICAL SOLUTION OF DIFFERENTIAL EQUATIONS

In the classical method we solve the differential equation to find the natural and forced components of the response rather than the zero-input and zero-state components. Although this method is relatively simple compared with the methods discussed so far, as we shall see, it also has several glaring drawbacks.

As Section 2.4-5 showed, when all the characteristic mode terms of the total system response are lumped together, they form the system's natural response y_n(t) (also known as the homogeneous solution or complementary solution). The remaining portion of the response consists entirely of noncharacteristic mode terms and is called the system's forced response y_φ(t) (also known as the particular solution). Equation (2.52b) showed these two components for the loop current in the RLC circuit of Fig. 2.1a.

The total system response is y(t) = y_n(t) + y_φ(t). Since y(t) must satisfy the system equation [Eq. (2.1)], Q(D)y(t) = P(D)x(t), we have Q(D)[y_n(t) + y_φ(t)] = P(D)x(t). But y_n(t) is composed entirely of characteristic modes, so Q(D)y_n(t) = 0, and therefore Q(D)y_φ(t) = P(D)x(t).

The natural response, being a linear combination of the system's characteristic modes, has the same form as the zero-input response; only its arbitrary constants are different. These constants are determined from auxiliary conditions, as explained later. We shall now discuss a method of determining the forced response.

2.5-1 Forced Response: The Method of Undetermined Coefficients

It is a relatively simple task to determine y_φ(t), the forced response of an LTIC system, when the input x(t) is such that it yields only a finite number of independent derivatives. Inputs having the form e^ζt or t^r fall into this category. For example, repeated differentiation of e^ζt yields the same form as the input, that is, e^ζt. Similarly, repeated differentiation of t^r yields only r independent derivatives. The forced response to such an input can be expressed as a linear combination of the input and its independent derivatives. Consider, for example, the input x(t) = at^2 + bt + c. The successive derivatives of this input are 2at + b and 2a. In this case, the input has only two independent derivatives. Therefore the forced response can be assumed to be a linear combination of x(t) and its two derivatives. The suitable form for y_φ(t) in this case is, therefore,

y_φ(t) = β_2 t^2 + β_1 t + β_0

The undetermined coefficients β_0, β_1, and β_2 are determined by substituting this expression for y_φ(t) in Eq. (2.53) and then equating coefficients of similar terms on both sides of the resulting expression. Although this method can be used only for inputs with a finite number of derivatives, this class of inputs includes a wide variety of the most commonly encountered signals in practice. Table 2.2 shows a variety of such inputs and the form of the forced response corresponding to each input. We shall demonstrate this procedure with an example.


Pair 5 of Table 2.2: for the input (t^r + α_{r-1} t^{r-1} + ⋯ + α_1 t + α_0)e^ζt, the forced response is (β_r t^r + β_{r-1} t^{r-1} + ⋯ + β_1 t + β_0)e^ζt.

Note: By definition, y_φ(t) cannot have any characteristic mode terms. If any term appearing in the right-hand column for the forced response is also a characteristic mode of the system, the correct form of the forced response must be modified to t^i y_φ(t), where i is the smallest integer that prevents t^i y_φ(t) from having a characteristic mode term. For example, when the input is e^ζt, the forced response (right-hand column) has the form βe^ζt. But if e^ζt happens to be a characteristic mode of the system, the correct form of the forced response is βte^ζt (see pair 2). If te^ζt also happens to be a characteristic mode of the system, the correct form of the forced response is βt^2 e^ζt, and so on.

Solve the differential equation if the input is x(t) = t^2 + 5t + 3 and the initial conditions are y(0+) = 2 and ẏ(0+) = 3.

The characteristic polynomial of the system is λ^2 + 3λ + 2. Therefore the characteristic modes are e^(-t) and e^(-2t). The natural response is then a linear combination of these modes:

y_n(t) = K_1 e^(-t) + K_2 e^(-2t)

Here the arbitrary constants K_1 and K_2 must be determined from the system's initial conditions. The forced response to the input t^2 + 5t + 3, according to Table 2.2 (pair 5 with ζ = 0), is

y_φ(t) = β_2 t^2 + β_1 t + β_0

Moreover, y_φ(t) satisfies the system equation [Eq. (2.53)]. Substituting this expression into Eq. (2.54) and equating coefficients of similar powers on both sides of the resulting expression yields three simultaneous equations, whose solution is β_0 = 1, β_1 = 1, and β_2 = 0. Therefore

y_φ(t) = t + 1

The total system response y(t) is the sum of the natural and forced solutions. Therefore

y(t) = K_1 e^(-t) + K_2 e^(-2t) + t + 1

so that ẏ(t) = -K_1 e^(-t) - 2K_2 e^(-2t) + 1. Setting t = 0 and substituting y(0) = 2 and ẏ(0) = 3 in these equations, we have

K_1 + K_2 + 1 = 2 and -K_1 - 2K_2 + 1 = 3

The solution of these two simultaneous equations is K_1 = 4 and K_2 = -3. Therefore

y(t) = 4e^(-t) - 3e^(-2t) + t + 1
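The final 2×2 system for K_1 and K_2 can be checked with a few lines of Python; the forced response y_φ(t) = t + 1 used below is taken from the β values above (a sketch, not the textbook's own code):

```python
# Total response y(t) = K1*e^-t + K2*e^-2t + t + 1, with forced response
# y_phi(t) = t + 1 read off from beta0 = 1, beta1 = 1, beta2 = 0.
# Initial conditions y(0) = 2 and dy/dt(0) = 3 give:
#    K1 +   K2 + 1 = 2   ->   K1 +   K2 = 1
#   -K1 - 2*K2 + 1 = 3   ->  -K1 - 2*K2 = 2
a11, a12, b1 = 1.0, 1.0, 1.0
a21, a22, b2 = -1.0, -2.0, 2.0

# Cramer's rule for the 2x2 system
det = a11*a22 - a12*a21
K1 = (b1*a22 - a12*b2) / det
K2 = (a11*b2 - b1*a21) / det
print(K1, K2)  # 4.0 -3.0

# Sanity check: the initial conditions are reproduced
assert K1 + K2 + 1 == 2.0
assert -K1 - 2*K2 + 1 == 3.0
```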

COMMENTS ON INITIAL CONDITIONS

In the classical method, the initial conditions are required at t = 0+. The reason is that at t = 0−, only the zero-input component exists, and the initial conditions at t = 0− can be applied to the zero-input component only. In the classical method, the zero-input and zero-state components cannot be separated. Consequently, the initial conditions must be applied to the total response, which begins at t = 0+.

An LTIC system is specified by the equation

The input is x(t) = 6t^2. Find the following: a. the forced response y_φ(t); b. the total response y(t) if the initial conditions are y(0+) = 25/18 and ẏ(0+) = −2/3.

THE EXPONENTIAL INPUT e^ζt

The exponential signal is the most important signal in the study of LTIC systems. Interestingly, the forced response for an exponential input signal turns out to be very simple. From Table 2.2 we see that the forced response for the input e^ζt has the form βe^ζt. We now show that β = P(ζ)/Q(ζ).[†]

To determine the constant β, we substitute y_φ(t) = βe^ζt in the system equation [Eq. (2.53)]. Now observe that differentiating e^ζt simply multiplies it by ζ, so that Q(D)[βe^ζt] = βQ(ζ)e^ζt and P(D)[e^ζt] = P(ζ)e^ζt. Consequently Eq. (2.53) becomes βQ(ζ)e^ζt = P(ζ)e^ζt, and β = P(ζ)/Q(ζ).

Thus, for the input x(t) = e^ζt u(t), the forced response is given by

y_φ(t) = H(ζ)e^ζt, where H(ζ) = P(ζ)/Q(ζ)

This is an interesting and significant result. It states that for an exponential input e^ζt the forced response y_φ(t) is the same exponential multiplied by H(ζ) = P(ζ)/Q(ζ). The total system response y(t) to an exponential input e^ζt is then given by[‡] the sum of the natural modes and H(ζ)e^ζt, where the arbitrary constants K_1, K_2, …, K_N are determined from auxiliary conditions. The form of Eq. (2.57) assumes N distinct roots. If the roots are not distinct, the proper form of the modes should be used.

Recall that the exponential signal includes a large variety of signals, such as a constant (ζ = 0), a sinusoid (ζ = ±jω), and an exponentially growing or decaying sinusoid (ζ = σ ± jω). Let us consider the forced response for some of these cases.

THE CONSTANT INPUT x(t) = C

Because C = Ce^0t, the constant input is a special case of the exponential input Ce^ζt with ζ = 0. The forced response to this input is then given by y_φ(t) = CH(0).

THE EXPONENTIAL INPUT e^jωt

Here ζ = jω, and y_φ(t) = H(jω)e^jωt.

THE SINUSOIDAL INPUT x(t) = cos ωt

We know that the forced response for the input e^±jωt is H(±jω)e^±jωt. Since cos ωt = (e^jωt + e^−jωt)/2, the forced response to cos ωt is

y_φ(t) = (1/2)[H(jω)e^jωt + H(−jω)e^−jωt]

Because the two terms on the right-hand side are conjugates,

y_φ(t) = Re{H(jω)e^jωt}

But H(jω) = |H(jω)|e^(j∠H(jω)), so that

y_φ(t) = |H(jω)| cos[ωt + ∠H(jω)]

This result can be generalized for the input x(t) = cos(ωt + θ). The forced response in this case is

y_φ(t) = |H(jω)| cos[ωt + θ + ∠H(jω)]

Solve the differential equation if the initial conditions are y(0+) = 2 and ẏ(0+) = 3, for each of the following inputs:

a. 10e^(−3t) b. 5 c. e^(−2t) d. 10 cos(3t + 30°)

According to Example 2.10, the natural response for this case is y_n(t) = K_1 e^(−t) + K_2 e^(−2t). For this case H(ζ) = ζ/(ζ^2 + 3ζ + 2).

a. For input x(t) = 10e^(−3t), ζ = −3, and

y_φ(t) = 10H(−3)e^(−3t) = 10[−3/((−3)^2 + 3(−3) + 2)]e^(−3t) = −15e^(−3t)

The total solution (the sum of the forced and the natural response) is y(t) = K_1 e^(−t) + K_2 e^(−2t) − 15e^(−3t), and ẏ(t) = −K_1 e^(−t) − 2K_2 e^(−2t) + 45e^(−3t). The initial conditions are y(0+) = 2 and ẏ(0+) = 3. Setting t = 0 in the foregoing equations and then substituting the initial conditions yields

K_1 + K_2 − 15 = 2 and −K_1 − 2K_2 + 45 = 3

Solution of these equations yields K_1 = −8 and K_2 = 25. Therefore

y(t) = −8e^(−t) + 25e^(−2t) − 15e^(−3t)

b. For input x(t) = 5 = 5e^0t, ζ = 0, and y_φ(t) = 5H(0) = 0. The complete solution is K_1 e^(−t) + K_2 e^(−2t). Using the initial conditions, we determine K_1 and K_2 as in part (a).

c. Here ζ = −2, which is also a characteristic root of the system. Hence the forced response has the form y_φ(t) = βte^(−2t) (see pair 2, Table 2.2, or the note at the bottom of the table). To find β, we substitute y_φ(t) in the system equation and equate coefficients, which yields β = 2, so that

y_φ(t) = 2te^(−2t)

The complete solution is K_1 e^(−t) + K_2 e^(−2t) + 2te^(−2t). Using the initial conditions, we determine K_1 and K_2 as in part (a).

d. For the input x(t) = 10 cos(3t + 30°), the forced response [see Eq. (2.61)] is

y_φ(t) = 10|H(j3)| cos[3t + 30° + ∠H(j3)]

where |H(j3)| = 0.263 and ∠H(j3) = −37.9°, so that y_φ(t) = 2.63 cos(3t − 7.9°). The complete solution is K_1 e^(−t) + K_2 e^(−2t) + 2.63 cos(3t − 7.9°). We then use the initial conditions to determine K_1 and K_2 as in part (a).
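The magnitude and phase in part (d) can be reproduced with complex arithmetic. The transfer function H(s) = s/(s^2 + 3s + 2) used below is my assumption (it is not written out explicitly above), chosen because it reproduces the constants in parts (a)-(c):

```python
import cmath, math

# Assumed transfer function consistent with parts (a)-(c):
#   H(s) = s / (s^2 + 3s + 2)
# For x(t) = 10 cos(3t + 30 deg) we evaluate H at s = j3.
H = (3j) / ((3j)**2 + 3*(3j) + 2)

amplitude = 10 * abs(H)                        # output amplitude
phase_deg = 30 + math.degrees(cmath.phase(H))  # input phase + system phase
print(round(amplitude, 2), round(phase_deg, 1))  # 2.63 -7.9
```

This is exactly the rule y_φ(t) = |H(jω)| · (input amplitude) · cos(ωt + θ + ∠H(jω)) from the sinusoidal-input discussion above.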

Use the classical method to find the loop current y(t) in the RLC circuit of Example 2.2 (Fig. 2.1) if the input voltage is x(t) = 10e^(−3t) and the initial conditions are y(0−) = 0 and v_C(0−) = 5.

The zero-input and zero-state responses for this problem are found in Examples 2.2 and 2.6, respectively. The natural and forced responses appear in Eq. (2.52b). Here we shall solve this problem by the classical method, which requires the initial conditions at t = 0+. These conditions, already found in Eq. (2.15), are

y(0+) = 0 and ẏ(0+) = 5

The loop equation for this system [see Example 2.2 or Eq. (1.55)] has the characteristic polynomial λ^2 + 3λ + 2 = (λ + 1)(λ + 2). Therefore, the natural response is

y_n(t) = K_1 e^(−t) + K_2 e^(−2t)

The forced response, already found in part (a) of Example 2.11, is y_φ(t) = −15e^(−3t). The total response is

y(t) = K_1 e^(−t) + K_2 e^(−2t) − 15e^(−3t)

Differentiation of this equation yields ẏ(t) = −K_1 e^(−t) − 2K_2 e^(−2t) + 45e^(−3t). Setting t = 0+ and substituting y(0+) = 0 and ẏ(0+) = 5 in these equations yields

K_1 + K_2 − 15 = 0 and −K_1 − 2K_2 + 45 = 5

Therefore K_1 = −10 and K_2 = 25, so that

y(t) = −10e^(−t) + 25e^(−2t) − 15e^(−3t)

which agrees with the solution found earlier in Eq. (2.52b).

Solve the differential equation

y''(t) + 3ẏ(t) + 2y(t) = x(t)

using the input x(t) = 5t + 3 and initial conditions y(0) = 2 and ẏ(0) = 3.

>> y = dsolve('D2y+3*Dy+2*y=5*t+3', 'y(0)=2', 'Dy(0)=3', 't');
>> disp(['y(t) = (', char(y), ')u(t)'])
y(t) = (-9/4+5/2*t+9*exp(-t)-19/4*exp(-2*t))u(t)

ASSESSMENT OF THE CLASSICAL METHOD

The development in this section shows that the classical method is relatively simple compared with the method of finding the response as a sum of the zero-input and zero-state components. Unfortunately, the classical method has a serious drawback: it yields the total response, which cannot be separated into components arising from the internal conditions and the external input. In the study of systems it is important to be able to express the system response to an input x(t) as an explicit function of x(t). This is not possible in the classical method. Moreover, the classical method is restricted to the limited class of inputs described earlier (those with a finite number of independent derivatives).

If we must solve a particular linear differential equation or find a response of a particular LTIC system, the classical method may be the best. In the theoretical study of linear systems, however, the classical method is not so valuable.

Caution: We have shown in Eq. (2.52a) that the total response of an LTI system can be expressed as a sum of the zero-input and the zero-state components. In Eq. (2.52b), we showed that the same response can also be expressed as a sum of the natural and the forced components. We have also seen that generally the zero-input response is not the same as the natural response (although both are made up of natural modes). Similarly, the zero-state response is not the same as the forced response. Unfortunately, such erroneous claims are often encountered in the literature.

[†] This result is valid only if ζ is not a characteristic root of the system.

[‡] Observe the closeness of Eqs. (2.57) and (2.47). Why is there a difference between the two equations? Equation (2.47) is the response to an exponential that started at −∞, while Eq. (2.57) is the response to an exponential that starts at t = 0. As t → ∞, Eq. (2.57) approaches Eq. (2.47). In Eq. (2.47), the term y_n(t), which starts at t = −∞, has already decayed at t = 0 and, hence, is missing.


3. Random Walk And Gambler’s Ruin

3.1 Random Walks on a Regular Grid

I've noted that the above method of counting up ways to get the results in question will not solve the problem at hand. As noted above, however, several LetsSolveMathProblems subscribers assumed it would. They just needed to figure out the pattern behind the number of ways each outcome could come about—that is, a pattern that tells you the number of ways to win (or lose) in a given number of moves, and thus the probability (or weight) corresponding to that number of moves.

The problem is that there is no such pattern for some arbitrary n (if you know of one, let me know!), though it will work for a specific, reasonably sized n. It’s instructive to look at what’s going on here.

I will consider n = 2, 3, 4, and 5. We can use good old-fashioned expectation and counting methods for these, just as above, but different n's require different formulas. Actually, n = 2 and n = 3 use the same formula (they follow the same pattern), but 4 and 5 each produce a new pattern.

To get started in figuring these out, let’s review random walks. A random walk is a situation in which you start at some point on the number line and end up at another point in a certain number of steps. There’s a lot to cover in this simple idea. I’ll only review the basics.

Let’s first look at the simple case of going from 2 to 4 in two steps. We don’t need a sophisticated diagram to see the number of ways to do this:

Next we'll work out the number of ways (or paths) of getting from 2 to 4 in four steps (notice that it is impossible to get there in three steps!): from 2 to 3 to 4 to 5 to 4; or from 2 to 3 to 4 to 3 to 4; or from 2 to 1 to 2 to 3 to 4; or from 2 to 3 to 2 to 3 to 4. Here's an illustration:

There are four ways to do that. Notice that in each one of these four ways, we go one to the left (or back) and three to the right (or forward). In other words, we can count how many ways to get from 2 to 4 in four steps by just choosing which one of the four steps is our leftward move: 4C1 = 4*.

[*This denotes a combination, and is usually stated as "four-choose-one." Of course, we could just as well do 4C3. You'll more often see these represented in math problems as binomial coefficients.

The formula for these calculations is:

nCk = n!/(k!(n−k)!)

If you’re not familiar with, or need to review, basic combinatorics, I recommend starting with this Khan Academy video, which deals with Permutations, and working through to the next section on Combinations: “Factorial and Counting Seat Arrangements.”]
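In Python these combinations are available directly, which makes it easy to check the counts used here (a quick sketch of my own):

```python
from math import comb, factorial

# Ways to choose which of the four steps is the single leftward move:
print(comb(4, 1))   # 4
print(comb(4, 3))   # 4  (choosing the three rightward moves instead)
print(comb(6, 2))   # 15 (2 to 4 in six steps: choose the two leftward moves)

# The underlying formula: nCk = n! / (k! * (n - k)!)
n, k = 6, 2
assert comb(n, k) == factorial(n) // (factorial(k) * factorial(n - k))
```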

Now, if we want to get from 2 to 4 in six steps (notice again, we can’t do it in five steps), we can just figure out how many forward and back steps are needed. For example, we can go from 2 to 3 to 4 to 5 to 6 to 5 to 4. That’s four forward and two back. We see the same result if we go 2 to 1 to 0 to 1 to 2 to 3 to 4.

This gives us 6C2 = 15. Instead of writing out those 15 paths with the same method as above, I'll use a grid (made at GeoGebra) in which the x-axis represents the first, second, third, fourth, etc. step taken and the y-axis represents location. For example, here's a diagram for starting at 2 and ending at 4 in four steps:

The above graph contains all possible paths for getting from 2 to 4 in four steps. Here is the same diagram with two paths highlighted:

The pink path goes from 2 to 1 to 2 to 3 to 4; the green path goes from 2 to 3 to 4 to 3 to 4. Because the location axis is vertical, we can think of this as up three, down one (note that each new move must go to the right with respect to the x-axis; otherwise, it would be like going back in time).

Luckily, there is a simple way to count the number of paths on a grid. We can call this the Pascal's Triangle method of counting; as such, it's also intimately related to the binomial theorem and to combinations (which makes sense, given that we've been using combinations as a shortcut thus far).*

[*I’ll try to explain the method clearly here, but it might be extra helpful to see this method explained and performed in real-time, which you can do in the following YouTube videos, which also cover the irregular cases—e.g., cases in which grids overlap or appear to have chunks removed (it will soon be clear that our Gambler’s Ruin problem involves an irregular random walk):

“Math Principles: Paths on a Grid: Two Approaches” by readingrocks88: Nicely gets across the basic idea in just over five minutes.

“Math 12 Apr 7 U4L5 Pascal and Pathways” by Kelvin Dueck: A longer video (about 37 minutes) that goes more into Pascal’s Triangle.

“The mathematical secrets of Pascal’s triangle – Wajdi Mohamed Ratemi” by TED-ed: This short video (just under five minutes) isn’t about paths on a grid, but gives a nice overview of Pascal’s Triangle (includes a bit about binomial expansion and points out that Pascal was by no means the first to discover this mathematical tool).

A search will yield many other tutorials.]

All we need to do is count the number of ways to get to each point, until we’ve arrived at the final point. Here again is the same diagram from 2 to 4 in four steps, with the counting method included (if this doesn’t clear things up right away, it will hopefully get clearer as we look at more examples):

Now let's apply this to getting from 2 to 4 in six steps. I already calculated that this is 6C2 = 15. When drawing such a diagram, I like to start with the path that includes the lowest location I can get to—here that's 0, as we can go from 2 to 1 to 0 to 1 to 2 to 3 to 4—along with the highest location we can get to, which is 6 (i.e., from 2 to 3 to 4 to 5 to 6 to 5 to 4):

Then I fill in the rest of the grid. Here’s the whole grid, with numbers filled in to indicate the path to each point:
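The same counts can be generated programmatically. The little dynamic-programming sketch below is exactly the Pascal's Triangle method: it propagates the number of ways to reach each location, one step at a time:

```python
from collections import Counter
from math import comb

def count_paths(start, end, steps):
    """Number of +1/-1 walks on the number line from start to end in `steps` moves."""
    ways = Counter({start: 1})
    for _ in range(steps):
        nxt = Counter()
        for pos, n in ways.items():
            nxt[pos + 1] += n   # step right
            nxt[pos - 1] += n   # step left
        ways = nxt
    return ways[end]

print(count_paths(2, 4, 4))   # 4
print(count_paths(2, 4, 6))   # 15
assert count_paths(2, 4, 6) == comb(6, 2)
```

Each pass of the loop is one column of the grid diagram; the Counter plays the role of the numbers written next to each grid point.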

3.2 Random Walks to a Gambler’s Ruin: Counting Paths on an Irregular Grid

We are finally ready to return to our Gambler’s Ruin problem. As I noted above, the problem we’re interested in does not produce a regular grid, due to our boundary conditions.

Recall that we are starting with n chips (i.e., at location n on our grid), and the game ends when we hit 2n or 0. In other words, if we start at 2 chips, the game ends when we have 4 chips or 0 chips. Let’s focus for now just on what this means for winning. In fact, I’m only going to represent winning in these diagrams (an easy thing to account for later, when calculating the overall expectation).

If the game ends when we hit 4, that means we have a boundary condition that we do not have with basic random walks. For example, look again at getting from 2 to 4 in four steps. We can take a path from 2 to 3 to 4 to 5 to 4; or from 2 to 3 to 4 to 3 to 4; or from 2 to 1 to 2 to 3 to 4; or from 2 to 3 to 2 to 3 to 4. But only the last two of these are valid, because once you hit 4, the game ends! I've crossed out the invalid paths. If we don't remove those, we over-count. And if we were looking at getting from 2 to 4 in six steps (or turns, really), we'd have to remove anything that hits 0 or 4 before turn number six.

What to do? First, let's take a step back to remind ourselves of what we're looking for. We're trying to formulate a simple bit of expectation, as described in the dice scenarios above. This means that, for starting with two chips, we need to fill in the following (again, I'm focused for now only on winning; recall that "winning" here means getting from n to 2n, which in this particular case means going from 2 chips to 4 chips):

[2 × (probability of winning in two turns)] + [4 × (probability of winning in four turns)] + [6 × (probability of winning in six turns)] + … on forever (we can't restrict how long the game will last; it can be shown that the game won't last forever, but we need not worry about this here).

All we're missing is the probability of winning in each number of turns. This breaks down to (the number of paths to win in that many turns) × (1/2 raised to the power of the number of turns). Where did the 1/2 come from? Easy: the probability of winning a turn is 1/2. So the probability of winning in two turns is (the number of paths to win in two turns) × (1/2)^2. This is all analogous to what we saw above with the probability of two rolled dice summing to a given number.

I should also note that for more than two turns, the 1/2 will still be raised to the number of turns, because there is also a 1/2 probability of losing a given turn. For example, the probability of winning two turns and then losing four turns is (1/2)^6; this would of course work out differently were the probability of winning and losing not the same! (See below for a little more on that point.)

This in mind, we end up with:

[(2) × (number of paths that win in two turns) × (1/2)^2] + [(4) × (number of paths that win in four turns) × (1/2)^4] + [(6) × (number of paths that win in six turns) × (1/2)^6] + … and onward with this pattern.

There is an obvious pattern here for everything except the number of paths to win in a certain number of turns. Let’s find that pattern. I’ll make a diagram for winning in two moves, four moves, six moves, and eight moves, and will count the paths for each exactly as I did with the regular grids above.

There is one way (or path) to win (i.e., to get from 2 to 4) in two moves (notice that the path to losing is symmetrical to the path for winning; I'm not including losing paths here, but this fact will help us count the paths to game's end later):

There are two paths for winning in four turns:

There are four paths for winning in six turns:

A pattern is emerging. A new square is added each time we increase the number of turns. And, for anything greater than four moves, that new square bumps what we did in the preceding graph two units to the right. It’s safe to assume that this will keep happening over and over again, but let’s do a couple more.

Notice, too, that we are essentially bouncing around between 1 and 3 before crossing over to 4 for the win. In other words, if we pass 3, we win (and if we pass 1, we lose). I’ll demarcate those boundaries here as well:

I'd say the pattern is now well established. In fact, we could have just mapped out several steps at once in order to see the pattern. Here is a diagram for winning in up to ten turns (or "moves," as I put it in the diagram):

We now have 1 way to win in two turns; 2 ways to win in four turns; 4 ways to win in six turns; 8 ways to win in eight turns; 16 ways to win in ten turns. The ways to win are doubling, which is to say we have this sequence: 2^0 ways to win in two moves, then 2^1, 2^2, 2^3, 2^4, 2^5, … and so on.
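This doubling can be confirmed by brute force. The sketch below does the same Pascal-style counting, but with the absorbing boundaries built in: positions 0 and 2n end the game, so they are never propagated to the next turn:

```python
def winning_path_counts(n, max_turns):
    """Count walks from n that first hit 2n at each turn (absorbing at 0 and 2n)."""
    top = 2 * n
    ways = {n: 1}          # ways to reach each still-live position
    wins = {}
    for turn in range(1, max_turns + 1):
        nxt, won = {}, 0
        for pos, cnt in ways.items():
            for p in (pos - 1, pos + 1):
                if p == top:
                    won += cnt            # the game ends with a win at this turn
                elif p > 0:               # p == 0 ends with a loss; drop it
                    nxt[p] = nxt.get(p, 0) + cnt
        if won:
            wins[turn] = won
        ways = nxt
    return wins

print(winning_path_counts(2, 10))  # {2: 1, 4: 2, 6: 4, 8: 8, 10: 16}
```

The output reproduces the 1, 2, 4, 8, 16 read off the diagrams above.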

This makes it easy to find expectation with a calculator. Recall what we are missing:

[(2) × (number of paths that win in two turns) × (1/2) 2 ] + [(4) × (number of paths that win in four turns) × (1/2) 4 ] + [(6) × (number of paths that win in six turns) × (1/2) 6 ] + …. and onward with this pattern.

[(2) × (2 0 ) × (1/2) 2 ] + [(4) × (2 1 ) × (1/2) 4 ] + [(6) × (2 2 ) × (1/2) 6 ] + ….

I’m going to rewrite this which different bracketing (and a few more terms) in order to make the probabilities more salient:

(2 × [2 0 × (1/2) 2 ]) + (4 × [2 1 × (1/2) 4 ]) + (6 × [2 2 × (1/2) 6 ]) + (8 × [2 3 × (1/2) 8 ]) + (10 × [2 4 × (1/2) 10 ]) + (12 × [2 5 × (1/2) 12 ]) + …

The probabilities (i.e., the weights for our average) are in square brackets. The probabilities alone continue indefinitely in the form 2 i × (1/2) 2+2i , where i starts at 0, which is one less than the turn number. Just as with the dice examples above, these probability are multiplied by the number of turns involved: 2, 4, 6, 8, 10, 12, …

Since the power to which we raise 1/2 is equal to the number of turns involved, we can represent the number of turns with (2+2i).

I’m probably making this more complicated than it needs to be. The upshot is that we end up with a final expression of: (2+2i) × 2^i × (1/2)^(2+2i). To add this up for large i‘s, simply pop this into a sigma calculator (if you need a refresher on sigma notation, see this page at Math Is Fun; calculator-wise, I like Desmos):

I constructed it so that the number of turns is on the left and the probability (slightly simplified) is on the right. Recall that we know the final answer should be 2^2, or 4. The answer here is 2 because we’ve only included the number of ways to win. Losing here is symmetrical to winning, so all we need to do now is double the number of ways to win. This updates our probabilities to reflect the number of ways to win or lose, which effectively doubles the final result of the above expectation, yielding 4. And that’s that.

When each player has two chips, we expect the game to last about four turns on average.
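As a check on that four-turn answer, the series can also be summed numerically instead of with a sigma calculator. This is my own sketch in Python (the function name is mine, not the post’s):

```python
# Expected game length for n = 2 chips: each term of the series is
# (turns) * (ways to win in that many turns) * (probability of one path),
# i.e., (2 + 2*i) * 2**i * (1/2)**(2 + 2*i).
def expected_turns_n2(terms=200):
    total = 0.0
    for i in range(terms):
        turns = 2 + 2 * i        # games end on even turns: 2, 4, 6, ...
        ways = 2**i              # ways to win double at each step
        prob = (1 / 2) ** turns  # probability of any single path
        total += turns * ways * prob
    return 2 * total             # double so losing paths count too

print(expected_turns_n2())  # -> 4.0
```

Two hundred terms is far more than needed; the tail of the series is negligible long before that.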

(Note that if we remove the (2+2i) from the expression, the series converges to 1, which is something we require of a valid probability distribution. That is, the probability that you win or lose in 2 or 4 or 6 or 8 or so on turns is 1:

This time I put the 2 out front to account for both winning and losing. This approach also works for the series considered below.)
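That convergence-to-1 claim is also easy to verify numerically; a small Python sketch (mine, not the post’s):

```python
# Dropping the (2 + 2i) turn factor and putting the 2 out front, the
# probabilities of ending (win or lose) in 2, 4, 6, ... turns sum to 1.
total = sum(2 * 2**i * (1 / 2) ** (2 + 2 * i) for i in range(200))
print(total)  # -> 1.0
```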

Interestingly, we get the same pattern when starting with three chips; just replace the 2 with 3. I’ll go right to the extended diagram this time:

Pop this into a sigma calculator with appropriate adjustments (and multiplying by 2):

As expected, we get n^2, which in this case is 3^2 = 9.
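Reading the diagram the way the post describes (3^i ways to win, games ending on turns 3 + 2i), a numeric sum reproduces the nine-turn answer. A Python sketch under that assumption:

```python
# n = 3 chips: assuming 3**i ways to win in 3 + 2i turns, each path
# occurring with probability (1/2)**(3 + 2i); double to include losses.
total = sum((3 + 2 * i) * 3**i * (1 / 2) ** (3 + 2 * i) for i in range(200))
print(round(2 * total, 6))  # -> 9.0 (= 3**2)
```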

But look at what happens when n = 4:

This is a new pattern: 1, 4, 14, 48, 164, 560, 1912 (I calculated the 1912 in order to refine the search I’m about to do).

To figure out what sequence this is, I’ll pop it into The On-Line Encyclopedia of Integer Sequences, which yields a sequence that starts: 1, 4, 14, 48, 164, 560, 1912, 6528, …

This is sequence number A007070, which can be found here with discussion and formulas and whatnot (also included is a nearly identical sequence, but with an extra 1 at the beginning). I’ll go ahead and pop a formula for this sequence into our sigma notation as well, just for comparison’s sake.

Let’s express the sequence as a function f:

This gives: f(0) = 1, f(1) = 4, f(2) = 14, f(3) = 48, … and so on. So, f(i) will be used in the sigma notation instead of 2^i or 3^i:

As assumed, we get 4^2 = 16. The main point, however, is that we can’t use the same setup for n = 4 that we used for n = 2 or 3. This is also true for n = 5, which I’ll graph for up to 15 turns:

Here we get: 1, 5, 20, 75, 275, 1000. A search for this at the OEIS brings up sequence number A030191, which is actually the same as sequence A093131, but without the starting 0; both sequences can be found here. I won’t bother with popping this one into our sigma notation.
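As a numeric check on the n = 4 result above, the recurrence OEIS lists for A007070, f(i) = 4f(i-1) - 2f(i-2), can stand in for the sequence. A Python sketch (function name mine):

```python
# n = 4 chips: ways to win in 4 + 2i turns follow OEIS A007070:
# f(0) = 1, f(1) = 4, f(i) = 4*f(i-1) - 2*f(i-2).
def expected_turns_n4(terms=400):
    f_prev2, f_prev1 = 1, 4                 # f(0) and f(1)
    total = 4 * 1 * (1 / 2) ** 4            # i = 0 term
    total += 6 * 4 * (1 / 2) ** 6           # i = 1 term
    for i in range(2, terms):
        f_i = 4 * f_prev1 - 2 * f_prev2     # A007070 recurrence
        total += (4 + 2 * i) * f_i * (1 / 2) ** (4 + 2 * i)
        f_prev2, f_prev1 = f_prev1, f_i
    return 2 * total                        # double to include losses

print(round(expected_turns_n4(), 6))  # -> 16.0
```

The term ratio settles near (2 + √2)/4 ≈ 0.85, so a few hundred terms are plenty for six-decimal accuracy.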

It’s clear by now that there’s no obvious pattern (to me) that we can use by this method to answer the question at hand, though it’s been instructive (to me) to work through the diagrams: they build intuition for the problem and give practice with counting and with alternative ways of modeling such problems. It has also answered the question of why we can’t solve it the same way we would a basic random walk: we cannot pass 1 or 2n-1 until the next-to-last move before game’s end, which leads to over-counting when using standard methods. Still, it’s a pretty deterministic and well-controlled setup, and thus good practice, but also a nice reminder of how much messier and more complex real-world situations get.

At any rate, while I’m tempted to see how much more I can pull out of the above patterns, I’ll move on to finally solving the problem.




(a) Interpretation:

The value of ξ , if 3 .5   mol of Al reacts to make products, is to be calculated.

Concept introduction:

The extent of the reaction is a physical quantity that measures the progress of the chemical reaction. It is represented by ξ. At any given moment, the value of ξ is the same no matter which chemical species is used to compute it.

The value of ξ is given by the expression

ξ = (n_i − n_(i,0)) / ν_i

Where,

• ξ is the extent of the reaction.

• n_i is the number of moles of the i-th chemical species at time t.

• n_(i,0) is the number of moles of the i-th chemical species at time t = 0.

• ν_i is the stoichiometric coefficient of the i-th species.

(b) Interpretation:

Whether 5 is a possible value of ξ for the given reaction is to be stated, along with the reason.


CHAPTER V Solar and Terrestrial Radiation

This chapter discusses the solar and terrestrial radiation. The source of solar energy is believed to be fusion of four hydrogen atoms to form one helium atom, and the slight decrease in mass which occurs in this reaction accounts for the energy released in the solar interior. This energy is transferred by radiation and convection to the surface, and is then emitted as both electromagnetic and particulate radiation. The distribution of electromagnetic radiation emitted by the Sun approximates black-body radiation for a temperature of about 6000 K. Even though solar radiation is attenuated by scattering and absorption in passing through the atmosphere, the irradiance of solar radiation at the top of the atmosphere, the solar constant, may be calculated from measurements made at the Earth's surface. Satellite instruments provide measurements of long-wave radiation from the Earth and atmosphere and measurements of direct solar radiation and solar radiation reflected from the Earth and atmosphere. The flux of solar radiation per unit horizontal area at the top of the atmosphere depends strongly on zenith angle of the Sun and much less strongly on the variable distance of the Earth from the Sun.


Abstract

The data from published studies were used to derive systematic relationships between learning outcomes and air quality in classrooms. Psychological tests measuring cognitive abilities and skills, school tasks including mathematical and language-based tasks, rating schemes, and tests used to assess progress in learning including end-of-year grades and exam scores were used to quantify learning outcomes. Short-term sick leave was also included because it may influence progress in learning. Classroom indoor air quality was characterized by the concentration of carbon dioxide (CO2). For psychological tests and school tasks, fractional changes in performance were regressed against the average concentrations of CO2 at which they occurred; all data reported in studies meeting the inclusion criteria were used to derive the relationship, regardless of whether the change in performance was statistically significant at the examined levels of classroom air quality. The analysis predicts that reducing CO2 concentration from 2,100 ppm to 900 ppm would improve the performance of psychological tests and school tasks by 12% with respect to the speed at which the tasks are performed and by 2% with respect to errors made. For other learning outcomes and short-term sick leave, only the relationships published in the original studies were available. They were therefore used to make predictions. These relationships show that reducing the CO2 concentration from 2,300 ppm to 900 ppm would improve performance on the tests used to assess progress in learning by 5% and that reducing CO2 from 4,100 ppm to 1,000 ppm would increase daily attendance by 2.5%. These results suggest that increasing the ventilation rate in classrooms in the range from 2 L/s-person to 10 L/s-person can bring significant benefits in terms of learning performance and pupil attendance; no data are available for higher rates.
The results provide a strong incentive for improving classroom air quality and can be used in cost-benefit analyses.



Whether the value of the equilibrium constant remains the same or changes when 0.50 atm of krypton is added to the given equilibrium reaction while the volume of the reaction remains the same is to be predicted. Whether this case differs from the condition in which the volume changes is also to be predicted.

Concept introduction:

When a reaction is at equilibrium, a constant expresses the relationship between the reactant side and the product side. This constant is known as the equilibrium constant and is denoted by K. The equilibrium constant is independent of the initial amounts of the reactants and products.


Ordinary Differential Equations (ISBN 9781461436171, 9781461436188, 1461436176)

Table of contents :
Preface. Page 6
Contents. Page 10
List of Tables. Page 14
1.1 An Introduction to Differential Equations. Page 15
1.2 Direction Fields. Page 31
1.3 Separable Differential Equations. Page 41
1.4 Linear First Order Equations. Page 59
1.5 Substitutions. Page 77
1.6 Exact Equations. Page 87
1.7 Existence and Uniqueness Theorems. Page 99
2.1 Laplace Transform Method: Introduction. Page 115
2.2 Definitions, Basic Formulas, and Principles. Page 125
2.3 Partial Fractions: A Recursive Algorithm for Linear Terms. Page 143
2.4 Partial Fractions: A Recursive Algorithm for Irreducible Quadratics. Page 157
2.5 Laplace Inversion. Page 165
2.6 The Linear Spaces Eq: Special Cases. Page 181
2.7 The Linear Spaces Eq: The General Case. Page 193
2.8 Convolution. Page 201
2.9 Summary of Laplace Transforms and Convolutions. Page 213
Chapter 3 Second Order Constant Coefficient Linear Differential Equations. Page 217
3.1 Notation, Definitions, and Some Basic Results. Page 219
3.2 Linear Independence. Page 231
3.3 Linear Homogeneous Differential Equations. Page 243
3.4 The Method of Undetermined Coefficients. Page 251
3.5 The Incomplete Partial Fraction Method. Page 259
3.6 Spring Systems. Page 267
3.7 RCL Circuits. Page 281
Chapter 4 Linear Constant Coefficient Differential Equations. Page 288
4.1 Notation, Definitions, and Basic Results. Page 290
4.2 Linear Homogeneous Differential Equations. Page 298
4.3 Nonhomogeneous Differential Equations. Page 306
4.4 Coupled Systems of Differential Equations. Page 314
4.5 System Modeling. Page 326
Chapter 5 Second Order Linear Differential Equations. Page 343
5.1 The Existence and Uniqueness Theorem. Page 345
5.2 The Homogeneous Case. Page 353
5.3 The Cauchy–Euler Equations. Page 361
5.4 Laplace Transform Methods. Page 367
5.5 Reduction of Order. Page 379
5.6 Variation of Parameters. Page 385
5.7 Summary of Laplace Transforms. Page 393
Chapter 6 Discontinuous Functions and the Laplace Transform. Page 394
6.1 Calculus of Discontinuous Functions. Page 396
6.2 The Heaviside Class H. Page 410
6.3 Laplace Transform Method for f(t) ∈ H. Page 426
6.4 The Dirac Delta Function. Page 438
6.5 Convolution. Page 450
6.6 Periodic Functions. Page 464
6.7 First Order Equations with Periodic Input. Page 476
6.8 Undamped Motion with Periodic Input. Page 484
6.9 Summary of Laplace Transforms. Page 496
Chapter 7 Power Series Methods. Page 498
7.1 A Review of Power Series. Page 500
7.2 Power Series Solutions About an Ordinary Point. Page 516
7.3 Regular Singular Points and the Frobenius Method. Page 530
7.4 Application of the Frobenius Method:Laplace Inversion Involving Irreducible Quadratics. Page 550
7.5 Summary of Laplace Transforms. Page 566
Chapter 8 Matrices. Page 567
8.1 Matrix Operations. Page 569
8.2 Systems of Linear Equations. Page 579
8.3 Invertible Matrices. Page 603
8.4 Determinants. Page 615
8.5 Eigenvectors and Eigenvalues. Page 629
9.1 Introduction. Page 639
9.2 Linear Systems of Differential Equations. Page 643
9.3 The Matrix Exponential and Its Laplace Transform. Page 659
9.4 Fulmer's Method for Computing e^At. Page 667
9.5 Constant Coefficient Linear Systems. Page 675
9.6 The Phase Plane. Page 691
9.7 General Linear Systems. Page 711
A.1 The Laplace Transform is Injective. Page 732
A.2 Polynomials and Rational Functions. Page 734
A.3 Bq Is Linearly Independent and Spans Eq. Page 736
A.4 The Matrix Exponential. Page 741
A.5 The Cayley–Hamilton Theorem. Page 742
AppendixB Selected Answers. Page 745
C.1 Laplace Transforms. Page 793
C.2 Convolutions. Page 797
Symbol Index. Page 799
Index. Page 801


Undergraduate Texts in Mathematics


Series Editors: Sheldon Axler San Francisco State University, San Francisco, CA, USA Kenneth Ribet University of California, Berkeley, CA, USA

Advisory Board: Colin Adams, Williams College, Williamstown, MA, USA Alejandro Adem, University of British Columbia, Vancouver, BC, Canada Ruth Charney, Brandeis University, Waltham, MA, USA Irene M. Gamba, The University of Texas at Austin, Austin, TX, USA Roger E. Howe, Yale University, New Haven, CT, USA David Jerison, Massachusetts Institute of Technology, Cambridge, MA, USA Jeffrey C. Lagarias, University of Michigan, Ann Arbor, MI, USA Jill Pipher, Brown University, Providence, RI, USA Fadil Santosa, University of Minnesota, Minneapolis, MN, USA Amie Wilkinson, University of Chicago, Chicago, IL, USA

Undergraduate Texts in Mathematics are generally aimed at third- and fourthyear undergraduate mathematics students at North American universities. These texts strive to provide students and teachers with new perspectives and novel approaches. The books include motivation that guides the reader to an appreciation of interrelations among different aspects of the subject. They feature examples that illustrate key concepts as well as exercises that strengthen understanding.

For further volumes: http://www.springer.com/series/666

William A. Adkins • Mark G. Davidson

Ordinary Differential Equations

William A. Adkins Department of Mathematics Louisiana State University Baton Rouge, LA USA

Mark G. Davidson Department of Mathematics Louisiana State University Baton Rouge, LA USA

ISSN 0172-6056 ISBN 978-1-4614-3617-1 ISBN 978-1-4614-3618-8 (eBook) DOI 10.1007/978-1-4614-3618-8 Springer New York Heidelberg Dordrecht London Library of Congress Control Number: 2012937994 Mathematics Subject Classification (2010): 34-01 © Springer Science+Business Media New York 2012 This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. Exempted from this legal reservation are brief excerpts in connection with reviews or scholarly analysis or material supplied specifically for the purpose of being entered and executed on a computer system, for exclusive use by the purchaser of the work. Duplication of this publication or parts thereof is permitted only under the provisions of the Copyright Law of the Publisher’s location, in its current version, and permission for use must always be obtained from Springer. Permissions for use may be obtained through RightsLink at the Copyright Clearance Center. Violations are liable to prosecution under the respective Copyright Law. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. While the advice and information in this book are believed to be true and accurate at the date of publication, neither the authors nor the editors nor the publisher can accept any legal responsibility for any errors or omissions that may be made. 
The publisher makes no warranty, express or implied, with respect to the material contained herein. Printed on acid-free paper Springer is part of Springer Science+Business Media (www.springer.com)

This text is intended for the introductory three- or four-hour one-semester sophomore level differential equations course traditionally taken by students majoring in science or engineering. The prerequisite is the standard course in elementary calculus. Engineering students frequently take a course on and use the Laplace transform as an essential tool in their studies. In most differential equations texts, the Laplace transform is presented, usually toward the end of the text, as an alternative method for the solution of constant coefficient linear differential equations, with particular emphasis on discontinuous or impulsive forcing functions. Because of its placement at the end of the course, this important concept is not as fully assimilated as one might hope for continued applications in the engineering curriculum. Thus, a goal of the present text is to present the Laplace transform early in the text, and use it as a tool for motivating and developing much of the remaining differential equation concepts for which it is particularly well suited. There are several rewards for investing in an early development of the Laplace transform. The standard solution methods for constant coefficient linear differential equations are immediate and simplified. We are able to provide a proof of the existence and uniqueness theorems which are not usually given in introductory texts. The solution method for constant coefficient linear systems is streamlined, and we avoid having to introduce the notion of a defective or nondefective matrix or develop generalized eigenvectors. Even the Cayley–Hamilton theorem, used in Sect. 9.6, is a simple consequence of the Laplace transform. In short, the Laplace transform is an effective tool with surprisingly diverse applications. Mathematicians are well aware of the importance of transform methods to simplify mathematical problems. 
For example, the Fourier transform is extremely important and has extensive use in more advanced mathematics courses. The wavelet transform has received much attention from both engineers and mathematicians recently. It has been applied to problems in signal analysis, storage and transmission of data, and data compression. We believe that students should be introduced to transform methods early on in their studies and to that end, the Laplace transform is particularly well suited for a sophomore level course in differential

equations. It has been our experience that by introducing the Laplace transform near the beginning of the text, students become proficient in its use and comfortable with this important concept, while at the same time learning the standard topics in differential equations. Chapter 1 is a conventional introductory chapter that includes solution techniques for the most commonly used first order differential equations, namely, separable and linear equations, and some substitutions that reduce other equations to one of these. There are also the Picard approximation algorithm and a description, without proof, of an existence and uniqueness theorem for first order equations. Chapter 2 starts immediately with the introduction of the Laplace transform as an integral operator that turns a differential equation in t into an algebraic equation in another variable s. A few basic calculations then allow one to start solving some differential equations of order greater than one. The rest of this chapter develops the necessary theory to be able to efficiently use the Laplace transform. Some proofs, such as the injectivity of the Laplace transform, are delegated to an appendix. Sections 2.6 and 2.7 introduce the basic function spaces that are used to describe the solution spaces of constant coefficient linear homogeneous differential equations. With the Laplace transform in hand, Chap. 3 efficiently develops the basic theory for constant coefficient linear differential equations of order 2. For example, the homogeneous equation q(D)y = 0 has the solution space Eq that has already been described in Sect. 2.6. The Laplace transform immediately gives a very easy procedure for finding the test function when teaching the method of undetermined coefficients. Thus, it is unnecessary to develop a rule-based procedure or the annihilator method that is common in many texts. Chapter 4 extends the basic theory developed in Chap. 3 to higher order equations.
All of the basic concepts and procedures naturally extend. If desired, one can simultaneously introduce the higher order equations as Chap. 3 is developed or very briefly mention the differences following Chap. 3. Chapter 5 introduces some of the theory for second order linear differential equations that are not constant coefficient. Reduction of order and variation of parameters are topics that are included here, while Sect. 5.4 uses the Laplace transform to transform certain second order nonconstant coefficient linear differential equations into first order linear differential equations that can then be solved by the techniques described in Chap. 1. We have broken up the main theory of the Laplace transform into two parts for simplicity. Thus, the material in Chap. 2 only uses continuous input functions, while in Chap. 6 we return to develop the theory of the Laplace transform for discontinuous functions, most notably, the step functions and functions with jump discontinuities that can be expressed in terms of step functions in a natural way. The Dirac delta function and differential equations that use the delta function are also developed here. The Laplace transform works very well as a tool for solving such differential equations. Sections 6.6–6.8 are a rather extensive treatment of periodic functions, their Laplace transform theory, and constant coefficient linear differential equations with periodic input function. These sections make for a good supplemental project for a motivated student.

Chapter 7 is an introduction to power series methods for linear differential equations. As a nice application of the Frobenius method, explicit Laplace inversion formulas involving rational functions with denominators that are powers of an irreducible quadratic are derived. Chapter 8 is primarily included for completeness. It is a standard introduction to some matrix algebra that is needed for systems of linear differential equations. For those who have already had exposure to this basic algebra, it can be safely skipped or given as supplemental reading. Chapter 9 is concerned with solving systems of linear differential equations. By the use of the Laplace transform, it is possible to give an explicit formula for the matrix exponential e^At = L^(-1)[(sI - A)^(-1)] that does not involve the use of eigenvectors or generalized eigenvectors. Moreover, we are then able to develop an efficient method for computing e^At known as Fulmer's method. Another thing which is somewhat unique is that we use the matrix exponential in order to solve a constant coefficient system y' = Ay + f(t), y(t_0) = y_0 by means of an integrating factor. An immediate consequence of this is the existence and uniqueness theorem for higher order constant coefficient linear differential equations, a fact that is not commonly proved in texts at this level. The text has numerous exercises, with answers to most odd-numbered exercises in the appendix. Additionally, a student solutions manual is available with solutions to most odd-numbered problems, and an instructor's solution manual includes solutions to most exercises.

Chapter Dependence The following diagram illustrates interdependence among the chapters.