4.4.1: Autonomous Second Order Equations (Exercises) - Mathematics


In Exercises 4.4.1-4.4.4 find the equations of the trajectories of the given undamped equation. Identify the equilibrium solutions, determine whether they are stable or unstable, and plot some trajectories.

1. \(y''+y^3=0\)

2. \(y''+y^2=0\)

3. \(y''+y|y|=0\)

4. \(y''+ye^{-y}=0\)
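Multiplying \(y''+p(y)=0\) by \(y'\) and integrating shows that every trajectory satisfies \(v^2/2+\int_0^y p(s)\,ds=c\). As a quick numerical sanity check for Exercise 1 (a sketch, not part of the exercise set; the RK4 integrator, starting point, and step size are arbitrary choices), the quantity \(v^2/2+y^4/4\) should stay constant along any computed trajectory:

```python
def rk4_step(f, state, h):
    """One classical fourth-order Runge-Kutta step for state' = f(state)."""
    k1 = f(state)
    k2 = f([s + 0.5 * h * k for s, k in zip(state, k1)])
    k3 = f([s + 0.5 * h * k for s, k in zip(state, k2)])
    k4 = f([s + h * k for s, k in zip(state, k3)])
    return [s + h / 6 * (a + 2 * b + 2 * c + d)
            for s, a, b, c, d in zip(state, k1, k2, k3, k4)]

def field(state):
    y, v = state
    return [v, -y**3]            # y'' + y^3 = 0 written as y' = v, v' = -y^3

def energy(state):
    y, v = state
    return v**2 / 2 + y**4 / 4   # constant along every trajectory

state = [1.0, 0.0]               # start on the trajectory through (1, 0)
e0 = energy(state)               # = 0.25
for _ in range(2000):            # integrate to t = 20
    state = rk4_step(field, state, 0.01)
drift = abs(energy(state) - e0)  # should be tiny along the whole run
```

The level curves \(v^2/2+y^4/4=c\) are closed ovals around the origin, consistent with \(\overline y=0\) being a stable equilibrium.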


In Exercises 4.4.5–4.4.8 find the equations of the trajectories of the given undamped equation. Identify the equilibrium solutions, determine whether they are stable or unstable, and find the equations of the separatrices (that is, the curves through the unstable equilibria). Plot the separatrices and some trajectories in each of the regions of the Poincaré plane determined by them.

5. \(y''-y^3+4y=0\)

6. \(y''+y^3-4y=0\)

7. \(y''+y(y^2-1)(y^2-4)=0\)

8. \(y''+y(y-2)(y-1)(y+2)=0\)
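To illustrate the separatrix computation on Exercise 6 (a sketch with arbitrarily chosen sample points, not part of the exercise set): for \(y''+y^3-4y=0\) the trajectories are level curves of \(E(y,v)=v^2/2+y^4/4-2y^2\), the equilibria are the roots of \(p(y)=y^3-4y\), and the separatrix is the level curve of \(E\) through the unstable equilibrium \((0,0)\):

```python
def p(y):
    return y**3 - 4*y                # restoring term of y'' + y^3 - 4y = 0

def dp(y):
    return 3*y**2 - 4

def E(y, v):                         # conserved along trajectories
    return v**2 / 2 + y**4 / 4 - 2 * y**2

equilibria = [-2.0, 0.0, 2.0]        # the roots of p
assert all(p(y) == 0 for y in equilibria)

# stable where p' > 0 (restoring force), unstable where p' < 0
stability = {y: ("stable" if dp(y) > 0 else "unstable") for y in equilibria}

# separatrix: the level curve of E through the unstable equilibrium (0, 0),
# i.e. E(y, v) = 0, equivalently v^2 = 4y^2 - y^4/2
on_separatrix = []
for i in range(-28, 29):
    y = i / 10
    v2 = 4 * y**2 - y**4 / 2
    if v2 >= 0:
        on_separatrix.append(abs(E(y, v2**0.5)))
```

Every sampled point of the curve \(v^2=4y^2-y^4/2\) lies on the zero level set of \(E\), so it is the separatrix through the unstable equilibrium.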


In Exercises 4.4.9–4.4.12 plot some trajectories of the given equation for various values (positive, negative, zero) of the parameter \(a\). Find the equilibria of the equation and classify them as stable or unstable. Explain why the phase plane plots corresponding to positive and negative values of \(a\) differ so markedly. Can you think of a reason why zero deserves to be called the critical value of \(a\)?

9. \(y''+y^2-a=0\)

10. \(y''+y^3-ay=0\)

11. \(y''-y^3+ay=0\)

12. \(y''+y-ay^3=0\)


In Exercises 4.4.13-4.4.18 plot trajectories of the given equation for \(c=0\) and small nonzero (positive and negative) values of \(c\) to observe the effects of damping.

13. \(y''+cy'+y^3=0\)

14. \(y''+cy'-y=0\)

15. \(y''+cy'+y^3=0\)

16. \(y''+cy'+y^2=0\)

17. \(y''+cy'+y|y|=0\)

18. \(y''+y(y-1)+cy'=0\)


19. The van der Pol equation

\[y''-\mu(1-y^2)y'+y=0, \tag{A}\]

where \(\mu\) is a positive constant and \(y\) is electrical current (Section 6.3), arises in the study of an electrical circuit whose resistive properties depend upon the current. The damping term \(-\mu(1-y^2)y'\) works to reduce \(|y|\) if \(|y|<1\) or to increase \(|y|\) if \(|y|>1\). It can be shown that van der Pol’s equation has exactly one closed trajectory, which is called a limit cycle. Trajectories inside the limit cycle spiral outward to it, while trajectories outside the limit cycle spiral inward to it (Figure 4.4.16). Use your favorite differential equations software to verify this for \(\mu=0.5,\,1,\,1.5,\,2\). Use a grid with \(-4<y<4\) and \(-4<v<4\).
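For readers without a dedicated phase-plane package, the limit-cycle behavior can also be checked with a hand-rolled integrator (a sketch, not part of the exercise; the choice \(\mu=1\), the RK4 scheme, the step sizes, and the starting points are all arbitrary, and the amplitude near 2 is an observed value):

```python
def vdp_step(y, v, h, mu=1.0):
    """One RK4 step for the van der Pol system y' = v, v' = mu*(1 - y^2)*v - y."""
    def f(y, v):
        return v, mu * (1 - y**2) * v - y
    k1 = f(y, v)
    k2 = f(y + 0.5*h*k1[0], v + 0.5*h*k1[1])
    k3 = f(y + 0.5*h*k2[0], v + 0.5*h*k2[1])
    k4 = f(y + h*k3[0], v + h*k3[1])
    return (y + h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
            v + h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))

def settled_amplitude(y, v, h=0.01, settle=20000, observe=1000):
    """Integrate past the transient, then record max |y| on the attractor."""
    for _ in range(settle):
        y, v = vdp_step(y, v, h)
    amp = 0.0
    for _ in range(observe):
        y, v = vdp_step(y, v, h)
        amp = max(amp, abs(y))
    return amp

# one trajectory starting inside the limit cycle and one starting outside
inside = settled_amplitude(0.1, 0.0)
outside = settled_amplitude(3.0, 3.0)
```

Both runs settle onto the same closed curve of amplitude roughly 2: the trajectory from inside spirals outward to it and the one from outside spirals inward.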

20. Rayleigh’s equation,

\[y''-\mu\left(1-\frac{(y')^2}{3}\right)y'+y=0\]

also has a limit cycle. Follow the directions of Exercise 4.4.19 for this equation.

21. In connection with Equation 4.4.16, suppose \(y(0)=0\) and \(y'(0)=v_0\), where \(0<v_0<v_c\).

  1. Let \(T_1\) be the time required for \(y\) to increase from zero to \(y_{\max}=2\sin^{-1}(v_0/v_c)\). Show that \[\frac{dy}{dt}=\sqrt{v_0^2-v_c^2\sin^2(y/2)},\quad 0\le t<T_1.\tag{A}\]
  2. Separate variables in (A) and show that \[T_1=\int_0^{y_{\max}}\frac{du}{\sqrt{v_0^2-v_c^2\sin^2(u/2)}}.\tag{B}\]
  3. Substitute \(\sin(u/2)=(v_0/v_c)\sin\theta\) in (B) to obtain \[T_1=2\int_0^{\pi/2}\frac{d\theta}{\sqrt{v_c^2-v_0^2\sin^2\theta}}.\tag{C}\]
  4. Conclude from symmetry that the time required for \((y(t),v(t))\) to traverse the trajectory \[v^2=v_0^2-v_c^2\sin^2(y/2)\] is \(T=4T_1\), and that consequently \(y(t+T)=y(t)\) and \(v(t+T)=v(t)\); that is, the oscillation is periodic with period \(T\).
  5. Show that if \(v_0=v_c\), the integral in (C) is improper and diverges to \(\infty\). Conclude from this that \(y(t)<\pi\) for all \(t>0\) and \(\lim_{t\to\infty}y(t)=\pi\).
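The period formula in part 4 can be checked numerically (a sketch, not part of the exercise; the values \(v_c=2\), \(v_0=1\), the Simpson rule, and the RK4 integrator are arbitrary choices). Differentiating the trajectory equation of part 4 recovers \(y''=-(v_c^2/4)\sin y\), so the quadrature value of \(T=4T_1\) from (C) can be compared with a period measured by direct integration:

```python
import math

vc, v0 = 2.0, 1.0      # sample values with 0 < v0 < vc

def simpson(f, a, b, n=1000):
    """Composite Simpson rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

# period from (C): T = 4*T1 = 8 * integral_0^{pi/2} dθ / sqrt(vc^2 - v0^2 sin^2 θ)
T_quad = 8 * simpson(lambda th: 1 / math.sqrt(vc**2 - v0**2 * math.sin(th)**2),
                     0.0, math.pi / 2)

# direct RK4 integration of y'' = -(vc^2/4) sin y, y(0) = 0, y'(0) = v0
def step(y, v, h):
    def f(y, v):
        return v, -(vc**2 / 4) * math.sin(y)
    k1 = f(y, v)
    k2 = f(y + h/2*k1[0], v + h/2*k1[1])
    k3 = f(y + h/2*k2[0], v + h/2*k2[1])
    k4 = f(y + h*k3[0], v + h*k3[1])
    return (y + h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
            v + h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))

h, t, y, v = 1e-3, 0.0, 0.0, v0
while True:
    y_prev = y
    y, v = step(y, v, h)
    t += h
    if y_prev < 0 <= y:                    # upward crossing of y = 0: one period
        T_sim = t - h * y / (y - y_prev)   # linear interpolation of crossing time
        break
```

The two period estimates agree to well within the integration tolerances, supporting the claim that the oscillation is periodic with period \(T=4T_1\).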

22. Give a direct definition of an unstable equilibrium of \(y''+p(y)=0\).

23. Let \(p\) be continuous for all \(y\) and \(p(0)=0\). Suppose there’s a positive number \(\rho\) such that \(p(y)>0\) if \(0<y\le\rho\) and \(p(y)<0\) if \(-\rho\le y<0\). For \(0<r\le\rho\), let

\[\alpha(r)=\min\left\{\int_0^r p(x)\,dx,\ \int_{-r}^0 |p(x)|\,dx\right\}\quad\text{and}\quad \beta(r)=\max\left\{\int_0^r p(x)\,dx,\ \int_{-r}^0 |p(x)|\,dx\right\}.\]

Let \(y\) be the solution of the initial value problem

\[y''+p(y)=0,\quad y(0)=y_0,\quad y'(0)=v_0,\]

and define \(c(y_0,v_0)=v_0^2+2\int_0^{y_0}p(x)\,dx\).

  1. Show that \[0<\alpha(r)\le\beta(r),\quad 0<r\le\rho.\]
  2. Show that \[v^2+2\int_0^y p(x)\,dx=c(y_0,v_0),\quad t>0.\]
  3. Conclude from (b) that if \(c(y_0,v_0)<2\alpha(r)\) then \(|y|<r,\ t>0\).
  4. Given \(\epsilon>0\), let \(\delta>0\) be chosen so that \[\delta^2+2\beta(\delta)<\min\left\{\epsilon^2/2,\ 2\alpha\left(\epsilon/\sqrt2\right)\right\}.\] Show that if \(\sqrt{y_0^2+v_0^2}<\delta\) then \(\sqrt{y^2+v^2}<\epsilon\) for \(t>0\), which implies that \(\overline y=0\) is a stable equilibrium of \(y''+p(y)=0\).
  5. Now let \(p\) be continuous for all \(y\) and \(p(\overline y)=0\), where \(\overline y\) is not necessarily zero. Suppose there’s a positive number \(\rho\) such that \(p(y)>0\) if \(\overline y<y\le\overline y+\rho\) and \(p(y)<0\) if \(\overline y-\rho\le y<\overline y\). Show that \(\overline y\) is a stable equilibrium of \(y''+p(y)=0\).

24. Let \(p\) be continuous for all \(y\).

  1. Suppose \(p(0)=0\) and there’s a positive number \(\rho\) such that \(p(y)<0\) if \(0<y\le\rho\). Show that no matter how small \(y_0>0\) is, the solution with \(y(0)=y_0\) and \(y'(0)=0\) satisfies \(y(t)=\rho\) for some \(t>0\). Conclude that \(\overline y=0\) is an unstable equilibrium of \(y''+p(y)=0\).
  2. Now let \(p(\overline y)=0\), where \(\overline y\) isn’t necessarily zero. Suppose there’s a positive number \(\rho\) such that \(p(y)<0\) if \(\overline y<y\le\overline y+\rho\). Show that \(\overline y\) is an unstable equilibrium of \(y''+p(y)=0\).
  3. Modify your proofs of (a) and (b) to show that if there’s a positive number \(\rho\) such that \(p(y)>0\) if \(\overline y-\rho\le y<\overline y\), then \(\overline y\) is an unstable equilibrium of \(y''+p(y)=0\).

Second order Autonomous Differential Equations

Introduce v = dR/dt. Then the differential equation is
v dv/dR = W^2 R.

Integrating once gives

v^2 - v0^2 = W^2 R^2 - W^2 R0^2,

where I have assumed v(t=0) = v0 and R(t=0) = R0.

Rearranging,

v = +/- sqrt( v0^2 - W^2 R0^2 + W^2 R^2 ),

and thus dR/dt = +/- sqrt( v0^2 - W^2 R0^2 + W^2 R^2 ).

This is a separable first-order ODE.

Define a such that W^2 a^2 = v0^2 - W^2 R0^2 (assuming v0^2 > W^2 R0^2, so that a is real). Then

dR/dt = +/- sqrt(W^2 a^2 + W^2 R^2) = +/- W sqrt(a^2 + R^2).

Separating variables gives dR/sqrt(a^2 + R^2) = +/- W dt, and integrating the right-hand side yields +/- Wt.
To integrate the left-hand side, put R = a*sinh(x), so that
dR = a cosh(x) dx; then
sqrt( a^2 + R^2 ) = sqrt(a^2 + a^2 sinh^2(x)) = a sqrt(1 + sinh^2(x))
= a sqrt(cosh^2(x)) = a*cosh(x).

Thus dR/sqrt(a^2 + R^2) -> dx.
The integral is thus

arcsinh(R/a) - arcsinh(R0 / a) = +/- Wt,

and therefore

R = a sinh( +/- Wt + arcsinh(R0 / a) ),

with a = sqrt(v0^2/W^2 - R0^2).

Plugging this in and checking shows that the minus sign gives v(0) = -v0.
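A direct check of this solution (a sketch; the parameter values are arbitrary but chosen so that v0^2 > W^2 R0^2 and a is real) confirms the initial conditions and verifies the equation R'' = W^2 R by finite differences:

```python
import math

W, R0, v0 = 2.0, 1.0, 3.0      # arbitrary values with v0^2 > W^2 R0^2
a = math.sqrt(v0**2 / W**2 - R0**2)

def R(t, sign=+1):
    """The claimed solution R = a sinh(±Wt + arcsinh(R0/a))."""
    return a * math.sinh(sign * W * t + math.asinh(R0 / a))

h = 1e-5

# initial conditions: R(0) = R0, and R'(0) = +v0 or -v0 depending on the sign
for sign in (+1, -1):
    Rp0 = (R(h, sign) - R(-h, sign)) / (2 * h)   # central difference for R'(0)
    assert abs(R(0.0, sign) - R0) < 1e-12
    assert abs(Rp0 - sign * v0) < 1e-6

# the ODE R'' = W^2 R holds along the solution (second central difference)
ode_residuals = []
for t in (0.0, 0.3, 1.0):
    Rpp = (R(t + h) - 2 * R(t) + R(t - h)) / h**2
    ode_residuals.append(abs(Rpp - W**2 * R(t)))
```

The minus-sign branch reproduces R(0) = R0 with R'(0) = -v0, matching the remark above.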

Math 2552: Differential Equations - Fall 2018 Sec T

Click here for the syllabus, where you can find the grading scheme and class policy.

Recitations and TA office hours

  • T1: MW 4:30-5:20pm, Skiles 170. TA: Alexander Winkles (awinkles3 AT). Office hours: Thursday 11:15-12:15, Clough 280 (MathLab).
  • T2: MW 4:30-5:20pm, Skiles 255. TA: Tao Yu (tyu70 AT). Office hours: Thursday 3-4, Clough 280 (MathLab).
  • T3: MW 4:30-5:20pm, Skiles 257. TA: Renyi Chen (rchen342 AT). Office hours: Thursday 1:45-2:45pm, Clough 280 (MathLab).

Where to get help

  • My office hours and your TA's office hours: see above.
  • The Math Lab (Clough 280): free service provided by the School of Math.
  • PLUS sessions: free service by GT. PLUS leader: Samantha Bordy, sbordy3 at. Session time and location: Tuesday and Thursday 6-7pm in CULC 262.
  • Piazza (online discussion forum): I have created a class page on Piazza, which provides a free online forum for course-related discussions. The system is geared toward getting you help quickly and efficiently from classmates, the TAs, and myself. Rather than emailing questions to the teaching staff, I encourage you to post your questions on Piazza, as long as they do not involve private matters. You may post on Piazza anonymously if that makes you more comfortable. Everyone in class should feel absolutely free to ask questions, discuss, help, comment, explore, and exchange ideas on Piazza. You can find our Piazza class page at:


Our final exam will be on Tuesday (12/11) 2:40-5:30pm, at Boggs B9. It is closed-book, closed-notes, and calculators are not allowed. You are allowed to bring one double-sided letter-size (8.5 by 11 inches) cheat sheet, which must be hand-written by yourself. (A printed or photocopied cheat sheet is not allowed.)
My office hours in this and next week will be: Wed (12/5) 3-4pm, Fri (12/7) 2-3pm, Mon (12/10) 11am-12pm.

Elementary Differential Equations

Office : ACD 114A
Phone : (860) 405-9294
Office Hours : TTh 9:30 - 10:30am. and by appointment
Open Door Policy: You are welcome to drop by to discuss any aspect of the course, anytime, on the days I am on campus-- Tuesday, Thursday, and Friday.

MATH 2410 covers material mainly from chapters 1-5 of the textbook.

Class Meeting Times/Place: Tuesday, Thursday 2:00 - 3:15 p.m. Classroom ACD 206.

All classes start in ACD 206.

Elementary Differential Equations by William F. Trench.

It is an open-source textbook, available free online.

Homework is assigned every class and collected every Thursday. It is returned the following Tuesday, graded and with remarks. The homework grades carry a total weight of 50 points out of the 500 course points.

Exam Schedule: Exam 1: Tuesday, February 9 ,  2:00 - 3:15p.m., Room: ACD 206
Exam 2: Thursday, March 9 ,  2:00 - 3:15p.m., Room: ACD 206
Exam 3: Tuesday, April 11 ,  2:00 - 3:15p.m., Room: ACD 206
Final Exam: Tuesday, May 2, 1:30 - 3:30 p.m., Room: ACD 206

Grading Policy: Homework: 50, Exam 1: 100, Exam 2: 100, Exam 3: 100, Final Exam: 150.

Date Chapter Topic Homework
Week 1 Tues. 1/17 1.1 Applications leading to differential equations

Thur. 1/19 1.2 Basic concepts Ch. 1.2: Exercises 1,2,4(a-d),5,7,9

Week 2 Tues. 1/24 1.3 Direction fields for first order ODEs Ch. 1.3: Exercises 1,2,3,4,5,12,13,14,15

Thur. 1/26 2.1 Linear first order equations Ch. 2.1 Exercises: 4,5,6,9,16,18,20,21

Week 3 Tues. 1/31 2.2 Separable equations Ch. 2.2 Exercises: 1,3,4,6,11,12,17,18

Thur. 2/2 2.3 Existence and uniqueness of solutions Ch. 2.3 Exercises: 1,2,3,4,14,16,17,20

Week 4 Tues. 2/7
Practice Exam 1
Practice Exam 1. Solutions

Thur. 2/9
Snow Day!

Week 5 Tues. 2/14
Exam 1

Thur. 2/16 3.1 Euler's Method. Ch. 3.1 Exercises: 1,4,6,14

Week 6 Tues. 2/21 4.1 Growth and decay Ch. 4.1 Exercises: 2,3,5,11

Thur. 2/23 4.2-4.3 Cooling, mixing and elementary mechanics Ch. 4.2 Exercises: 2,3,5,12
Ch. 4.3 Exercises:4,10

Week 7 Tues. 2/28 4.4 Autonomous second order equations Ch. 4.4 Exercises:

Thur. 3/2 4.5 Applications to curves Ch. 4.5 Exercises:

Week 8 Tues. 3/7
Review. Practice Exam 2
Practice Exam 2. Solutions

Thur. 3/9
Exam 2

Week 9 Tues. 3/14
Spring recess

Thur. 3/16
Spring recess

Week 10 Tues. 3/21 5.1 Homogeneous linear equations Ch. 5.1 Exercises:

Thur. 3/23 5.2 Constant coefficient homogeneous equations Ch. 5.2 Exercises:

Week 11 Tues. 3/28 5.3 Nonhomogeneous linear equations Ch. 5.3 Exercises:

Thur. 3/30 5.4 The method of undetermined coefficients 1 Ch. 5.4 Exercises:

Week 12 Tues. 4/4 5.5 The method of undetermined coefficients 2 Ch. 5.5 Assignment 4
Week 12 Thur. 4/6 6.1-6.2 Spring problems Ch. 6.1

Week 12 Tues. 4/11
Review. Practice Exam 3
Practice Exam 3. Solutions

Thur. 4/13
Exam 3

Week 14 Tues. 4/18 8.1-8.2 Laplace Transforms. Inverse transforms Ch. 8.1-8.2 Assignment 5
Thur. 4/20 8.3-8.4 Solutions to initial value problem Ch. 8.3 Exercises:
Ch. 8.4 Exercises:

Week 15 Tues. 4/25 8.5 Constant coefficient equations with piecewise continuous forcing functions Ch. 8.5 Exercises:
Thur. 4/27
Review. Practice Final Exam
Practice Final Exam. Solutions

Week 16 Tues. 5/2
Final Exam, 1:30 p.m.-3:30 p.m.

This page is maintained by Dmitriy Leykekhman
Last modified: 4/27/2017

This ODE is linearizable by differentiation.

As you have seen, applied like that this substitution only increases the number of variables. You get a homogeneous pattern similar to an Euler-Cauchy equation by multiplying the original equation by $x^2$,
$$0=3(x^2y'')^2-2(3xy'+y)(x^2y'')+4(xy')^2.$$
One can now make this autonomous by borrowing the substitution $u(t)=y(e^t)$ from the Cauchy-Euler equation, with $u'(t)=e^ty'(e^t)=xy'(x)$ and $u''(t)=e^{2t}y''(e^t)+e^ty'(e^t)=x^2y''(x)+u'(t)$:
\begin{align}
0&=3(u''(t)-u'(t))^2-2(3u'(t)+u(t))(u''(t)-u'(t))+4u'(t)^2\\
&=3u''^2-6u''u'+3u'^2-6u''u'+6u'^2-2u''u+2u'u+4u'^2\\
&=3u''^2-12u''u'+13u'^2-2u''u+2u'u.
\end{align}
As this is autonomous, one could insert $u'=v(u)$, $u''=v'(u)v(u)$. But even that does not appear helpful.

The expression can be rewritten as:

$3x^2(y'')^2 - 6 x y'y'' - 2yy'' + 4(y')^2 = 0.$

We make an observation as follows. Suppose that $y'' \neq 0$ for all $x$ in the appropriate domain. Dividing the above equation by $y''$ gives

$3x^2y'' - 6xy' - 2y + \frac{4(y')^2}{y''} = 0.$

This implies that we can guess a possible solution in the form of a simple polynomial $y(x) = C_0 + C_1 x + C_2 x^2 + C_3 x^3 + \ldots$ (This is due to the fact that the derivative of a polynomial is always one degree lower than the original polynomial.)

Recall that we have assumed $y'' \neq 0$ for all $x$. This inspires the guess $y'' = D$ for some constant $D$. The equivalent "guess" for the solution would be

$y(x) = A + Bx + Cx^2,$

with $D = 2C$. Substituting this guess into the main equation, we obtain:

$ 3x^2(2C)^2 - 6 x(B+2Cx)(2C) = 2(A + Bx + Cx^2)(2C) - 4(B+2Cx)^2.$

Surprisingly, we obtain that the coefficients of $x^2$ and $x$ are both $0$. The equation is thus reduced to

$B^2 = AC.$

Recall that we are free to choose the values of $A,B$ and $C$ , as long as they satisfy the constraint that we have just derived. In particular, if you have tried solving the equation using WolframAlpha, the suggested solution is given by:

$y[x] \rightarrow x c_1 + x^2 \frac{c_1^2}{c_2} + c_2.$

Since $A = c_2$, $B = c_1$, $C = \frac{c_1^2}{c_2}$, we indeed have $B^2 = AC$. The parameterization of the solution from WolframAlpha is just one particular parameterization of the family $B^2 = AC$.
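The constraint is easy to confirm by direct substitution (a sketch; the numeric values of $A$, $B$, $C$ and the sample points are arbitrary): for the quadratic guess, the residual of the equation is the constant $4(B^2-AC)$, independent of $x$:

```python
def residual(A, B, C, x):
    """Left-hand side of 3x^2 (y'')^2 - 6x y' y'' - 2y y'' + 4 (y')^2 = 0
    for the quadratic guess y = A + B x + C x^2."""
    y = A + B * x + C * x**2
    yp = B + 2 * C * x
    ypp = 2 * C
    return 3 * x**2 * ypp**2 - 6 * x * yp * ypp - 2 * y * ypp + 4 * yp**2

# with B^2 = A*C the residual vanishes for every x
A, B = 2.0, 3.0
C = B**2 / A
solved = max(abs(residual(A, B, C, x)) for x in (-2.0, 0.0, 1.0, 5.7))

# otherwise the residual equals the constant 4*(B^2 - A*C), independent of x
off = residual(1.0, 0.0, 1.0, 2.0)   # here 4*(0 - 1) = -4
```

Expanding by hand shows why: the $x^2$ and $x$ coefficients cancel identically, leaving only the constant term $4B^2-4AC$.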


Section 2.1 introduces the terminology and ideas via a natural predator-prey example. In Sections 2.2 and 2.3, the notions of autonomous systems, vector fields, direction fields, phase planes, solutions and equilibrium points are presented. These sections are closely related and can be thought of as one long section which requires a couple of classes to cover. Linear, homogeneous, constant coefficient, second-order equations (harmonic oscillators) are introduced and related to systems in Section 2.4. Euler's method for first-order systems and second-order equations is covered in Section 2.5, and in Section 2.6 we discuss qualitative methods for drawing phase planes. Section 2.7 begins the qualitative discussion of the Lorenz system as an introduction to three-dimensional systems.

The availability of some sort of technology that students can use to draw vector fields, direction fields and phase planes is essential. In order to begin to get a feel for what pictures to expect, students must see many examples accurately drawn. Software is available from several sources (check our web page for more specific information).
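As a minimal example of what such software computes (a sketch; the predator-prey coefficients are made up for illustration, not taken from the text), a direction field is just the vector field normalized to unit length on a grid, with equilibria as the points where the field vanishes:

```python
import math

# a hypothetical predator-prey system of the kind used in Section 2.1:
# prey R' = 2R - 1.2*R*F, predators F' = -F + 0.9*R*F
def field(R, F):
    return 2 * R - 1.2 * R * F, -F + 0.9 * R * F

def direction_field(Rs, Fs):
    """Unit direction vectors on a grid; None marks an equilibrium point."""
    arrows = {}
    for R in Rs:
        for F in Fs:
            u, v = field(R, F)
            n = math.hypot(u, v)
            arrows[(R, F)] = None if n == 0 else (u / n, v / n)
    return arrows

grid = [i * 0.5 for i in range(7)]           # 0.0, 0.5, ..., 3.0
arrows = direction_field(grid, grid)

# the interior equilibrium (R, F) = (1/0.9, 2/1.2) makes both rates vanish
ueq, veq = field(1 / 0.9, 2 / 1.2)
```

Plotting packages draw exactly these unit arrows; normalizing separates the direction information (the direction field) from the speed information (the vector field).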

2.1 The Predator-Prey Model

Using the analysis of a single predator-prey model, the basic ideas of first-order systems are introduced in this section. Our goal is to introduce the relationship between various graphical representations of the system, its solutions, and the interpretation of the solutions in terms of the model. The graphical representations include the phase plane, the graphs of the component functions of the solutions, the vector field, and the direction field.

The transition to systems is natural for students. Relating the solution in the phase plane to the graphs of the components is considered difficult by some, but is generally mastered after sufficient effort.

Some instructors feel that too much new material is presented in this section. However, we repeat the basic ideas in a more general setting in Sections 2.2 and 2.3, and we cover Section 2.1 quickly. We like to use the material of Section 2.1 as a running example for the more formal definitions in the next two sections.

Comments on selected exercises

Exercises 1 and 15 involve the interpretation of the parameters in a system while Exercises 9--14 involve the interpretation of the equations.

Exercises 2--6 require an analysis of a predator-prey system similar to that carried out in the section.

Exercises 7, 8, 16 and 17 give practice in going from the phase plane to the graphs of component functions and in the interpretation of solutions. Exercise 17 is particularly good for assigning an essay. (This predator-prey phenomenon really does occur.)

In Exercises 9--14 and 18, modifications are made to a predator prey model. This is easier than developing models from scratch, but it is still challenging. In Exercise 18, there is more than one reasonable answer.

In Exercises 19--24, models for concentrations of reactants in simple chemical reactions are developed. These models reappear in subsequent sections (Section 2.3, Exercises 22--26, and Section 2.6, Exercises 14--18).

2.2 Systems of Differential Equations

This section establishes notation and terminology for systems. Vectors are introduced along with many adjectives for describing systems. Vector fields, direction fields, and equilibrium points are also discussed.

One major goal of the section is to develop an understanding of what initial conditions and solutions are and how to check (by substituting into the system) that a given vector-valued function is a solution. Once students master the ability to check solutions of systems (and come to think of the idea as natural), they have reached an important milestone in their understanding of systems.

Comments on selected exercises

Exercises 1--8 concern the vocabulary for systems. In Exercises 36 and 37, the relationship between the systems in Exercises 5 and 7 and the systems in Exercises 6 and 8 (respectively) is explored.

Exercises 9--16 and 21--27 involve checking that given functions are solutions of a given system. This task is straightforward but extremely necessary.

In Exercises 17--20, direction fields are matched to systems. This type of exercise is less tedious than sketching direction fields by hand. If essays are required to justify why a given system corresponds to a particular direction field, students must examine the fields closely.

Exercises 28--35 ask for equilibrium points and sketches of direction fields for given systems. These systems reappear in Section 2.3, Exercises 1--8, where the phase plane is also requested.

2.3 Graphical Representation of Solutions of Systems

In this section we look at the various graphs of solutions of systems and how sketches of these graphs can be generated from the direction field. The relationship between a solution curve in the phase plane and the graphs of the component functions is difficult at first, but it is eventually mastered by most students. It is important for students to realize that both types of graphs are necessary because neither graph alone contains all of the information about a solution. A good analogy is trying to understand a slinky from its shadows (see Figure 2.26).

Some examples are given for which formulas for solutions can be found, and the idea of the general solution of a system is briefly introduced. Also, the Existence-Uniqueness Theorem for systems is briefly stated. As with first-order equations, it is the uniqueness half of the theorem that is stressed, since it is the most useful for drawing phase planes of autonomous systems.

Comments on selected exercises

Exercises 1--8 ask for a detailed analysis of the given complicated systems. Equilibrium points can be found by hand. Ideally, technology should be used to draw the direction fields, then the students should sketch the solution curves on top of these fields.

The systems in Exercises 9--12 can be explicitly solved because the systems decouple. These exercises are a good review of separable and linear equations as discussed in Chapter 1.

The systems in Exercises 13--16 can be explicitly solved for the given initial condition due to some special geometry of the system. These also give a good review of separable and linear equations and are quite difficult.

Exercises 17--21 concern a model of an arms race. Use of technology for drawing direction fields and phase planes should be encouraged, e.g., to approximate the coordinates of the equilibrium points.

Exercises 22--26 begin the analysis of the chemical reaction models introduced in the exercise set in Section 2.1. These models reappear in Section 2.6, Exercises 14--18.

Exercises 27--31 concern the Uniqueness Theorem for systems.

Exercise 32 gives an example of a solution that is not defined for all real numbers.

2.4 Second Order Equations and the Harmonic Oscillator

In this section, we derive the second-order equation for the motion of a harmonic oscillator using Newton's and Hooke's laws. This second-order equation is then converted into a first-order system, and we analyze examples using the vector field. We present solution techniques in detail in Chapter 3.

We relate the qualitative description of solutions to what is physically reasonable for the mass-spring harmonic oscillator. This approach is both important and dangerous. Students sometimes think that the physical argument is the analysis of the system rather than simply a check of the results of the mathematical analysis.

We chose not to include the standard ``guess-and-test'' method for solving the harmonic oscillator equation at this point for several reasons. First, we wish to maintain the emphasis on the qualitative analysis of solutions. Second, since we have not discussed the significance of linearity, it is difficult to do much with guess-and-test at this point. We return to this discussion in Section 3.1 after the Linearity Principle is discussed.

It is certainly possible to skip directly to Chapter 3 at this point, but we prefer to cover the material (with the possible exception of Section 2.7) in the order given in order to maintain the balance among the analytic, numeric, and qualitative approaches.

Comments on selected exercises

Exercises 1--4 involve the conversion of second-, third-, and fourth-order equations into first-order systems.

Exercises 5--7 derive the equations for a hanging spring with gravity as extra force.

Exercises 8--11 ask for the direction field and qualitative behavior of solutions for the harmonic oscillator with given coefficients. We encourage the use of technology in these exercises.

Exercises 12--14 are similar to Exercises 5--7. A system involving two opposing springs is considered.

Exercise 15 asks for the model of hard and soft springs (see also Lab 4.1).

Exercises 16--20 concern a model for a flexible suspension bridge. This model is covered in detail in Section 5.4, and these problems are quite challenging at this point in the course.

2.5 Euler's Method for Autonomous Systems

The approach to Euler's method is kept as simple and geometric as possible. Again, the most difficult point is the relationship between the graphs. For example, solutions near an equilibrium point move slowly (the vectors in the vector field are small), so the Euler approximation in the phase plane is made up of small steps.
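The point about step lengths can be seen in a few lines (a sketch; the harmonic-oscillator system and step size are arbitrary choices). Euler's method for a system advances each component by h times the corresponding component of the vector field, so a step taken near an equilibrium is short:

```python
import math

def euler(f, state, h, steps):
    """Euler's method for a first-order system state' = f(state)."""
    trajectory = [state]
    for _ in range(steps):
        state = [s + h * k for s, k in zip(state, f(state))]
        trajectory.append(state)
    return trajectory

def oscillator(state):           # the harmonic oscillator y' = v, v' = -y
    y, v = state
    return [v, -y]

# near the equilibrium (0, 0) the field vectors are short, so one Euler step
# covers little ground; far from it the same h produces a much longer step
near = euler(oscillator, [0.01, 0.0], 0.1, 1)
far = euler(oscillator, [1.0, 0.0], 0.1, 1)
near_step = math.dist(near[0], near[1])
far_step = math.dist(far[0], far[1])
```

Both runs use the same h; the difference in step length comes entirely from the magnitude of the vector field, which is the geometric point students should take away.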

The example of a swaying building is presented in this section because quantitative information determines which model is more appropriate.

Comments on selected exercises

Exercises 1--6 involve computing Euler's method solutions with fairly large step sizes and comparing the results with the direction field and/or actual solutions. The computations are tedious but manageable if done by hand.

Exercises 7--11 refer to the swaying building model. One of two models is to be selected by comparing to given numerical data. Exercise 11 asks what experiment should be done to distinguish between the two systems.

2.6 Qualitative Analysis

In this section we use the direction field, along with some numerics when necessary, to study the long-term behavior of solutions of nonlinear systems. The only new technique introduced is the location of nullclines in the phase plane. Unfortunately, many students are confused initially about the difference between nullclines and solution curves.
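A small worked example of the nullcline idea (a sketch; this competitive system is made up for illustration and is not one of the text's exercises): equilibria are exactly the intersections of an x'-nullcline with a y'-nullcline, while other nullcline points are merely crossed vertically or horizontally by solutions:

```python
# x' = x*(2 - x - y), y' = y*(3 - 2x - y)
def f(x, y):
    return x * (2 - x - y), y * (3 - 2 * x - y)

# the x'-nullcline is the pair of lines x = 0 and x + y = 2;
# the y'-nullcline is the pair y = 0 and 2x + y = 3.
# equilibria sit where an x'-nullcline meets a y'-nullcline:
equilibria = [(0.0, 0.0), (2.0, 0.0), (0.0, 3.0), (1.0, 1.0)]
assert all(f(x, y) == (0.0, 0.0) for x, y in equilibria)

# a nullcline point that is NOT an equilibrium: (0.5, 1.5) lies on x + y = 2
# only, so solutions cross it vertically (x' = 0, y' != 0) rather than stop
u, v = f(0.5, 1.5)
```

This is exactly the distinction students miss: a nullcline is a curve where one component of the field vanishes, while a solution curve is tangent to the full field everywhere.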

Geometric analysis of this sort is particularly hard for students because it involves many steps and many different ideas and techniques. (They keep hoping you will just give them the magic bullet for understanding systems and are skeptical when you say there isn't one.) Extended projects are particularly helpful in making students realize that there is no template that leads to a complete analysis of a phase plane.

Comments on selected exercises

In Exercises 1--6, 10--13, and 14--18, a qualitative analysis of the given system is requested. This analysis should go beyond what a student can print out from a good numerical solver. Exercises 14--18 relate to the chemical reaction systems of Section 2.1 (Exercises 19--24) and Section 2.3 (Exercises 22--26).

Exercise 7 is a fairly hard problem on geometry of solutions in the phase plane.

Exercises 8 and 9 concern the general Volterra-Lotka models of a pair of species.

Exercises 19--21 study a nonlinear saddle.

2.7 The Lorenz Equations

We introduce the Lorenz system here mainly because it is possible to do so. Almost none of our students have seen any modern (i.e., post-1800) mathematics, and they are surprised to learn that there are unanswered questions and that there is active research in mathematics. At this point, we can only describe the Lorenz system and display some numerical solutions. Consequently, this is something of a "golly-gee-whiz" section. Three-dimensional linear systems are discussed in Section 3.7, and the Lorenz system is studied more carefully in Sections 4.4 and 6.4.

If you cover this section, we recommend your mentioning James Gleick's book "Chaos". There are also a number of interesting videos that have been produced. They usually do a better job of illustrating the solution curves than we can do with our solvers.

Comments on selected exercises

Exercises 1--5 cover details of the Lorenz system that can be easily verified by hand.

Exercise 6 requires some fairly sophisticated numerics to compare solutions of the Lorenz system.

All of these labs require technology capable of sketching solutions in the phase plane. The ability to draw graphs of the coordinate functions is also very useful.

Lab 2.1: Cooperative and Competitive Species Population Models

This lab can be started as soon as Section 2.1 is covered. It can either be a purely computer exploration, or if Section 2.6 has been covered, it can include some more careful qualitative analysis. Particular attention should be paid to the interpretation of the solutions in physical terms.

Lab 2.2: The Harmonic Oscillator with Modified Damping

Section 2.4 must be covered before this lab can be assigned. The first part relates to the harmonic oscillator. Hence, that portion of the lab should be complete before going far into Chapter 3.

Lab 2.3: Swaying Building Models

This lab requires a solver that produces numeric data (rather than just graphs). It can be done with a programmable calculator.

Second Order Ordinary Differential Equations in Jet Bundles and the Inverse Problem of the Calculus of Variations

O. Krupková , G.E. Prince , in Handbook of Global Analysis , 2008

6.1 History and setting of the problem

The inverse problem for second order equations in normal form has a rather different history and current state to the problem in covariant form as discussed in section 3.2 and elsewhere in section 3. This is essentially because in the covariant case we ask if the system as it stands is variational and in the contravariant case we have to search for a variational covariant form. So the inverse problem for semisprays involves deciding whether the solutions of a given system of second-order ordinary differential equations (49), namely

\[\ddot x^a=f^a(t,x^b,\dot x^b),\]

are the solutions of a set of Euler-Lagrange equations

\[\frac{d}{dt}\left(\frac{\partial L}{\partial\dot x^a}\right)-\frac{\partial L}{\partial x^a}=0,\]

for some Lagrangian function \(L(t,x^b,\dot x^b)\).

Because the Euler-Lagrange equations are not generally in normal form, the problem is to find a so-called (non-degenerate) multiplier matrix \(g_{ab}(t,x^c,\dot x^c)\) such that

\[g_{ab}\left(\ddot x^b-f^b\right)=\frac{d}{dt}\left(\frac{\partial L}{\partial\dot x^a}\right)-\frac{\partial L}{\partial x^a}.\]

As in previous sections we use the notation \(g_{ab}\) to stress that the multipliers we consider are regular (non-degenerate).

The most commonly used set of necessary and sufficient conditions for the existence of the \(g_{ab}\) are the so-called Helmholtz conditions due to Douglas [30] and put in the following form by Sarlet [118]:

\[g_{ab}=g_{ba},\qquad \nabla g_{ab}=0,\qquad g_{ac}\Phi^c_{\ b}=g_{bc}\Phi^c_{\ a},\qquad \frac{\partial g_{ab}}{\partial u^c}=\frac{\partial g_{ac}}{\partial u^b},\]

where we have replaced \(\dot x\) by \(u\) and utilised all our notations to date.

These algebraic-differential conditions ultimately require the application of a theory of integrability in order to determine the existence and uniqueness of their solutions. To date the integrability theories that have been used are associated with the names of Riquier-Janet, Cartan-Kähler and Spencer. Of these we will outline only the use of the Cartan-Kähler theorem in its exterior differential systems manifestation (in section 6.3 ).

Before proceeding with the mathematical description and analysis, we provide the reader with some historical perspective into this local inverse problem for second order ordinary differential equations.

Helmholtz [47] first discussed whether systems of second order ordinary differential equations are Euler-Lagrange for a first-order Lagrangian (that is, one depending on velocities but not accelerations) in the form presented (the covariant inverse problem), and found necessary conditions for this to be true. Mayer [99] later proved that the conditions are also sufficient.

However, in 1886, a year earlier than Helmholtz published his celebrated result, in a paper that unfortunately remained unknown for years, Sonin [133] found out that one sode

\[\ddot x=f(t,x,\dot x)\tag{63}\]

can always be put into the form of an Euler-Lagrange equation by multiplying \(\ddot x - f\) by a suitable function \(g\neq 0\). He also characterized the multiplicity of the solution, i.e. provided a description of all Lagrangians for (63). Now, Sonin's result can be proved easily using the Helmholtz conditions, that for one equation (63) reduce to a single partial differential equation for the unknown function \(g(t,x,\dot x)\):

\[\frac{\partial g}{\partial t}+\dot x\frac{\partial g}{\partial x}+f\frac{\partial g}{\partial\dot x}+g\frac{\partial f}{\partial\dot x}=0.\]

Since \(g\neq 0\), this equation takes the form

\[\frac{\partial\ln|g|}{\partial t}+\dot x\frac{\partial\ln|g|}{\partial x}+f\frac{\partial\ln|g|}{\partial\dot x}=-\frac{\partial f}{\partial\dot x},\]

which is well known to be solvable: its general solution depends upon a single arbitrary function of any two specific solutions of the corresponding homogeneous equation. Consequently, the most general Lagrangian for (63) depends upon one arbitrary function of two parameters.
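Sonin's one-equation result can be sanity-checked for a concrete choice (a sketch; the sode \(f=-\dot x\), the multiplier \(g=e^t\), and the Lagrangian \(L=e^t\dot x^2/2\) are illustrative assumptions, not taken from the text):

```python
import math

# hypothetical sode x'' = f(t, x, v) with f = -v; candidate multiplier g = e^t,
# arising from the (assumed) Lagrangian L(t, x, v) = e^t * v^2 / 2
def g(t):
    return math.exp(t)

# the single Helmholtz condition g_t + v g_x + f g_v + g f_v = 0:
# here g_x = g_v = 0 and f_v = -1, so it reads e^t - e^t = 0
helmholtz_residuals = [g(t) + v * 0.0 + (-v) * 0.0 + g(t) * (-1.0)
                       for t, v in [(0.0, 1.0), (0.5, -2.0)]]

# Euler-Lagrange check along an arbitrary test curve x(t) = sin t:
# d/dt(dL/dv) - dL/dx should equal g * (x'' - f), with dL/dx = 0 here
def momentum(t):
    return math.exp(t) * math.cos(t)      # dL/dv evaluated along the curve

h = 1e-5
el_residuals = []
for t in (0.2, 1.0):
    lhs = (momentum(t + h) - momentum(t - h)) / (2 * h)
    rhs = g(t) * (-math.sin(t) - (-math.cos(t)))   # x'' - f = -sin t + cos t
    el_residuals.append(abs(lhs - rhs))
```

Both checks vanish to numerical precision: the multiplier satisfies the single Helmholtz equation, and \(g(\ddot x-f)\) coincides with the Euler-Lagrange expression of the assumed Lagrangian.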

Later Hirsch [50] formulated independently, and in a more general setting, the multiplier problem, that is, the question of the existence of multiplier functions which convert a system of second order ordinary differential equations in normal form into Euler-Lagrange equations. Surprisingly it turned out that a solution to the multiplier problem need not exist if there is more than one equation. Hirsch gave certain self-adjointness conditions for the problem but they are not effective in classifying second order equations according to the existence and uniqueness of the corresponding multipliers.

This multiplier problem was completely solved by Douglas in 1941 [30] for two degrees of freedom, that is, a pair of second order equations on the plane. He produced an exhaustive classification of all such equations in normal form. In each case Douglas identified all (if any) Lagrangians producing Euler-Lagrange equations whose normal form is that of the equations in that particular case. His method avoided Hirsch's self-adjointness conditions and he produced his own necessary and sufficient algebraic-differential conditions. His approach was to generate a sequence of integrability conditions, solving these using Riquier-Janet theory. While this approach is singularly effective and forms the basis of current efforts, it has been particularly difficult to see how to cast it into a form suitable for higher dimensions.

Interest from the physics community in the non-uniqueness aspects of the inverse problem provided the next contribution to solving the Helmholtz conditions. Henneaux [48] and Henneaux and Shepley [49] developed an algorithm for solving the Helmholtz conditions for any given system of second order equations. In particular, they solved the problem for spherically symmetric problems in dimension 3. In this fundamental case Henneaux and Shepley showed that a two-parameter family of Lagrangians produce the same equations of motion. Startlingly, these Lagrangians produced inequivalent quantum mechanical hydrogen atoms. Further mathematical aspects of this case were elaborated by Crampin and Prince [17, 19].

At around the same time Sarlet [118] showed that the part of the Helmholtz conditions which ensures the correct time evolution of the multiplier matrix could be replaced by a possibly infinite sequence of purely algebraic initial conditions. Along with the work of Henneaux this provided a prototype for geometrising Douglas's Helmholtz conditions.

Over the next 10 years or so Cantrijn, Cariñena, Crampin, Ibort, Marmo, Prince, Sarlet, Saunders and Thompson explored the tangent bundle geometry of second order ordinary differential equations in general and the Euler-Lagrange equations in particular. The inverse problem provided central inspiration for their examination of the integrability theorems of classical mechanics, multi-Lagrangian systems, geodesic first integrals and equations with symmetry. Using the geometrical approach to second order equations of Klein and Grifone [41, 42, 56, 57], the Helmholtz conditions for non-autonomous second order equations on a manifold were reformulated in terms of the corresponding non-linear connection on its tangent bundle (see section 5.1). This occurred in 1985 after a sequence of papers [15, 21, 118]. The work of Sarlet [121], and collectively Martínez, Cariñena and Sarlet [95, 96, 97], on derivations along the tangent bundle projection (see section 5.2) opened the way to the geometrical reformulation of Douglas's solution of the two degree of freedom case. This was achieved in 1993 by Crampin, Sarlet, Martínez, Byrnes and Prince and is reported in [23]. A number of dimension n classes were subsequently solved ([124, 123, 20]). The reader is directed to the review by Prince [110] for more details of this program up to the turn of the current century.

Separately Anderson and Thompson [7] applied exterior differential systems theory to some special cases of the geometrised problem with considerable success. In order to pursue the EDS approach Aldridge [1] used the Massa and Pagani connection of section 5.3 and recovered all the dimension n results to date along with an overall classification scheme for this general case. It appears that the inverse problem still holds many accessible secrets.

4.4.1: Autonomous Second Order Equations (Exercises) - Mathematics

In the introduction to this section we briefly discussed how a system of differential equations can arise from a population problem in which we keep track of the populations of both the prey and the predator. It makes sense that the number of prey present will affect the number of predators present. Likewise, the number of predators present will affect the number of prey present. Therefore the differential equation that governs the population of either the prey or the predator should in some way depend on the population of the other. This leads to two differential equations that must be solved simultaneously in order to determine the populations of the prey and the predator.

The whole point of this is to notice that systems of differential equations can arise quite easily from naturally occurring situations. Developing an effective predator-prey system of differential equations is not the subject of this chapter. However, systems can arise from (n^{\text{th}}) order linear differential equations as well. Before we get into this, however, let's write down a system and get some terminology out of the way.

We are going to be looking at first order, linear systems of differential equations. These terms mean the same thing that they have meant up to this point. The largest derivative anywhere in the system will be a first derivative, and all unknown functions and their derivatives will only occur to the first power and will not be multiplied by other unknown functions. Here is an example of a system of first order, linear differential equations.

[\begin{aligned} x_1' &= x_1 + 2x_2 \\ x_2' &= 3x_1 + 2x_2 \end{aligned}]

We call this kind of system a coupled system since knowledge of (x_2) is required in order to find (x_1) and likewise knowledge of (x_1) is required to find (x_2). We will worry about how to go about solving these later. At this point we are only interested in becoming familiar with some of the basics of systems.

Now, as mentioned earlier, we can write an (n^{\text{th}}) order linear differential equation as a system. Let's see how that can be done.

We can write higher order differential equations as a system with a very simple change of variable. We’ll start by defining the following two new functions.

[\begin{aligned} x_1(t) &= y(t) \\ x_2(t) &= y'(t) \end{aligned}]

Now notice that if we differentiate both sides of these we get,

[\begin{aligned} x_1' &= y' = x_2 \\ x_2' &= y'' \end{aligned}]

where (y'') is then rewritten in terms of (x_1) and (x_2) using the differential equation itself. Note the use of the differential equation in the second equation. We can also convert the initial conditions over to the new functions.

[\begin{aligned} x_1(3) &= y(3) = 6 \\ x_2(3) &= y'(3) = -1 \end{aligned}]

Putting all of this together gives the following system of differential equations.

We will call the system in the above example an Initial Value Problem just as we did for differential equations with initial conditions.
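Once a second order equation has been rewritten as a first order system, any first order numerical method applies to it directly. The following is a minimal sketch of that idea; the test equation (y'' = -y), its initial data, and the forward-Euler stepper are illustrative choices, not taken from the examples above.

```python
import math

def euler_system(f, x0, t0, t1, h):
    """Integrate the first order system x' = f(t, x) with forward Euler."""
    x = list(x0)
    t = t0
    while t < t1 - 1e-12:
        dx = f(t, x)
        x = [xi + h * dxi for xi, dxi in zip(x, dx)]
        t += h
    return x

# Reduce y'' = -y via x1 = y, x2 = y' to the system x1' = x2, x2' = -x1.
def rhs(t, x):
    x1, x2 = x
    return [x2, -x1]

# With y(0) = 1, y'(0) = 0 the exact solution is y = cos(t).
y, yp = euler_system(rhs, [1.0, 0.0], 0.0, math.pi / 2, 1e-4)
print(y)  # close to cos(pi/2) = 0
```

With the step size shown, the computed value of (y(\pi/2)) agrees with the exact value to a few decimal places; a smaller step or a higher order method tightens this further.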

Let’s take a look at another example.

Just as we did in the last example we’ll need to define some new functions. This time we’ll need 4 new functions.

[\begin{aligned} x_1 &= y &\Rightarrow\quad x_1' &= y' = x_2 \\ x_2 &= y' &\Rightarrow\quad x_2' &= y'' = x_3 \\ x_3 &= y'' &\Rightarrow\quad x_3' &= y''' = x_4 \\ x_4 &= y''' &\Rightarrow\quad x_4' &= y'''' = -8y + \sin(t)\,y' - 3y'' + t^2 = -8x_1 + \sin(t)\,x_2 - 3x_3 + t^2 \end{aligned}]

The system along with the initial conditions is then,

[\begin{aligned} x_1' &= x_2 & x_1(0) &= 1 \\ x_2' &= x_3 & x_2(0) &= 2 \\ x_3' &= x_4 & x_3(0) &= 3 \\ x_4' &= -8x_1 + \sin(t)\,x_2 - 3x_3 + t^2 & x_4(0) &= 4 \end{aligned}]

Now, when we finally get around to solving these we will see that we generally don’t solve systems in the form that we’ve given them in this section. Systems of differential equations can be converted to matrix form and this is the form that we usually use in solving systems.

First write the system so that each side is a vector.

Now the right side can be written as a matrix multiplication,

The system can then be written in the matrix form,

We’ll start with the system from Example 1.

Now, let’s do the system from Example 2.

In this case we need to be careful with the (t^2) in the last equation. We'll start by writing the system as a vector again and then break it up into two vectors, one vector that contains the unknown functions and the other that contains any known functions.

The first vector can now be written as a matrix multiplication and we'll leave the second vector alone.

Note that occasionally for “large” systems such as this we will go one step farther and write the system as,

[\vec{x}' = A\vec{x} + \vec{g}(t)]

The last thing that we need to do in this section is get a bit of terminology out of the way. Starting with

[\vec{x}' = A\vec{x} + \vec{g}(t)]

we say that the system is homogeneous if (\vec{g}(t) = \vec{0}) and we say the system is nonhomogeneous if (\vec{g}(t) \ne \vec{0}).
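The matrix form is easy to check numerically: multiplying out (A(t)\vec{x} + \vec{g}(t)) must reproduce the right-hand sides of the scalar equations. The sketch below assumes a fourth-order equation whose normal form is (y'''' = -8y + \sin(t)\,y' - 3y'' + t^2); treat the specific coefficients as illustrative.

```python
import math

def matvec(A, x):
    """Multiply a matrix (given as a list of rows) by a vector."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

# Coefficient matrix for x' = A(t) x + g(t); the sin(t) entry makes A depend on t.
def A(t):
    return [
        [0, 1, 0, 0],
        [0, 0, 1, 0],
        [0, 0, 0, 1],
        [-8, math.sin(t), -3, 0],
    ]

# g(t) collects the known (forcing) functions; here only the t**2 term.
def g(t):
    return [0, 0, 0, t ** 2]

# The same right-hand side written out component by component, for comparison.
def rhs(t, x):
    x1, x2, x3, x4 = x
    return [x2, x3, x4, -8 * x1 + math.sin(t) * x2 - 3 * x3 + t ** 2]

t0, x0 = 0.7, [1.0, 2.0, 3.0, 4.0]
matrix_form = [a + b for a, b in zip(matvec(A(t0), x0), g(t0))]
assert all(abs(u - v) < 1e-12 for u, v in zip(matrix_form, rhs(t0, x0)))
```

Since (\vec{g}(t) = (0, 0, 0, t^2) \ne \vec{0}), this particular system is nonhomogeneous in the sense defined above.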


Recall that we call a differential equation autonomous if it doesn't depend on the independent variable (the $x$) except in that the derivatives are taken with respect to $x$. For example, $\frac{d^2y}{dx^2}+3\frac{dy}{dx}-5y=2$ and $\frac{d^2y}{dx^2}-2y\frac{dy}{dx}=0$ are both autonomous equations. There is a special trick that will let us reduce a second-order autonomous equation to a pair of first-order equations. This trick isn't important when we are dealing with linear equations, since a linear autonomous equation must have constant coefficients and can be more easily solved using the techniques of this section. But the trick for autonomous second-order equations also applies to non-linear equations, like the second example above, which can't be solved by the other techniques we've learned.

Consider the autonomous equation $\frac{d^2y}{dx^2}=f\left(y,\frac{dy}{dx}\right)$. We will make the substitution $v=\frac{dy}{dx}$, just as in the development of numerical methods for second-order equations. This will give us $\frac{dv}{dx}=f(y,v). \tag{1}$ Now by the chain rule $\frac{dv}{dx}=\frac{dv}{dy}\frac{dy}{dx}=\frac{dv}{dy}v.$ Substituting this into equation (1), we now have $v\frac{dv}{dy}=f(y,v)$ and we have reduced our problem from a second-order equation in $y$ and $x$ to a first-order equation in $v$ and $y$. We now solve this using our techniques for first order equations to get a solution $v=g(y)$ for some function $g$. But recalling that $v=\frac{dy}{dx}$, this solution $v=g(y)$ becomes the first-order equation $\frac{dy}{dx}=g(y).$ We solve this equation, which is separable, and we have the solution of our original equation with $y$ as a function of $x$.
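The two-stage procedure can be checked numerically on an equation where every step is explicit. As an illustrative choice (not from the text), take $y''=-y$ with $y(0)=0$, $y'(0)=1$: the reduced equation $v\frac{dv}{dy}=-y$ integrates to $v^2=1-y^2$, so $g(y)=\sqrt{1-y^2}$, and the separable equation $\frac{dy}{dx}=g(y)$ has exact solution $y=\sin x$.

```python
import math

# Stage two of the reduction: integrate dy/dx = g(y) = sqrt(1 - y**2),
# obtained from y'' = -y with y(0) = 0, y'(0) = 1 (illustrative data).
def g(y):
    return math.sqrt(max(0.0, 1.0 - y * y))

# Forward Euler on the separable first-order equation, then compare to sin(x).
h, x, y = 1e-4, 0.0, 0.0
while x < 1.0 - 1e-12:
    y += h * g(y)
    x += h

print(y, math.sin(1.0))  # the two values agree to about 4 decimal places
```

Of course, here the separable equation can also be solved in closed form; the numerical check is only meant to confirm that the reduction preserves the solution.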
