## Q7.6.1

In *Exercises 7.6.1-7.6.11* find a fundamental set of Frobenius solutions. Optionally, write a computer program to implement the applicable recurrence formulas and take (N>7).

1. (x^2y''-x(1-x)y'+(1-x^2)y=0)

2. (x^2(1+x+2x^2)y'+x(3+6x+7x^2)y'+(1+6x-3x^2)y=0)

3. (x^2(1+2x+x^2)y''+x(1+3x+4x^2)y'-x(1-2x)y=0)

4. (4x^2(1+x+x^2)y''+12x^2(1+x)y'+(1+3x+3x^2)y=0)

5. (x^2(1+x+x^2)y''-x(1-4x-2x^2)y'+y=0)

6. (9x^2y''+3x(5+3x-2x^2)y'+(1+12x-14x^2)y=0)

7. (x^2y''+x(1+x+x^2)y'+x(2-x)y=0)

8. (x^2(1+2x)y''+x(5+14x+3x^2)y'+(4+18x+12x^2)y=0)

9. (4x^2y''+2x(4+x+x^2)y'+(1+5x+3x^2)y=0)

10. (16x^2y''+4x(6+x+2x^2)y'+(1+5x+18x^2)y=0)

11. (9x^2(1+x)y''+3x(5+11x-x^2)y'+(1+16x-7x^2)y=0)

## Q7.6.2

In *Exercises 7.6.12-7.6.22* find a fundamental set of Frobenius solutions. Give explicit formulas for the coefficients.

12. (4x^2y''+(1+4x)y=0)

13. (36x^2(1-2x)y''+24x(1-9x)y'+(1-70x)y=0)

14. (x^2(1+x)y''-x(3-x)y'+4y=0)

15. (x^2(1-2x)y''-x(5-4x)y'+(9-4x)y=0)

16. (25x^2y''+x(15+x)y'+(1+x)y=0)

17. (2x^2(2+x)y''+x^2y'+(1-x)y=0)

18. (x^2(9+4x)y''+3xy'+(1+x)y=0)

19. (x^2y''-x(3-2x)y'+(4+3x)y=0)

20. (x^2(1-4x)y''+3x(1-6x)y'+(1-12x)y=0)

21. (x^2(1+2x)y''+x(3+5x)y'+(1-2x)y=0)

22. (2x^2(1+x)y''-x(6-x)y'+(8-x)y=0)

## Q7.6.3

In *Exercises 7.6.23-7.6.27* find a fundamental set of Frobenius solutions. Compare the terms involving (x^{n+r_{1}}), where (0leq nleq N) ((N) at least (7)) and (r_{1}) is the root of the indicial equation. Optionally, write a computer program to implement the applicable recurrence formulas and take (N>7).

23. (x^2(1+2x)y''+x(5+9x)y'+(4+3x)y=0)

24. (x^2(1-2x)y''-x(5+4x)y'+(9+4x)y=0)

25. (x^2(1+4x)y''-x(1-4x)y'+(1+x)y=0)

26. (x^2(1+x)y''+x(1+2x)y'+xy=0)

27. (x^2(1-x)y''+x(7+x)y'+(9-x)y=0)

## Q7.6.4

In *Exercises 7.6.28-7.6.38* find a fundamental set of Frobenius solutions. Give explicit formulas for the coefficients.

28. (x^2y''-x(1-x^2)y'+(1+x^2)y=0)

29. (x^2(1+x^2)y''-3x(1-x^2)y'+4y=0)

30. (4x^2y''+2x^3y'+(1+3x^2)y=0)

31. (x^2(1+x^2)y''-x(1-2x^2)y'+y=0)

32. (2x^2(2+x^2)y''+7x^3y'+(1+3x^2)y=0)

33. (x^2(1+x^2)y''-x(1-4x^2)y'+(1+2x^2)y=0)

34. (4x^2(4+x^2)y''+3x(8+3x^2)y'+(1-9x^2)y=0)

35. (3x^2(3+x^2)y''+x(3+11x^2)y'+(1+5x^2)y=0)

36. (4x^2(1+4x^2)y''+32x^3y'+y=0)

37. (9x^2y''-3x(7-2x^2)y'+(25+2x^2)y=0)

38. (x^2(1+2x^2)y''+x(3+7x^2)y'+(1-3x^2)y=0)

## Q7.6.5

In *Exercises 7.6.39-7.6.43* find a fundamental set of Frobenius solutions. Compute the terms involving (x^{2m+r_{1}}), where (0 ≤ m ≤ M) ((M) at least (3)) and (r_{1}) is the root of the indicial equation. Optionally, write a computer program to implement the applicable recurrence formulas and take (M > 3).

39. (x^2(1+x^2)y''+x(3+8x^2)y'+(1+12x^2)y)

40. (x^2y''-x(1-x^2)y'+(1+x^2)y=0)

41. (x^2(1-2x^2)y''+x(5-9x^2)y'+(4-3x^2)y=0)

42. (x^2(2+x^2)y''+x(14-x^2)y'+2(9+x^2)y=0)

43. (x^2(1+x^2)y''+x(3+7x^2)y'+(1+8x^2)y=0)

## Q7.6.6

In *Exercises 7.6.44-7.6.52* find a fundamental set of Frobenius solutions. Give explicit formulas for the coefficients.

44. (x^2(1-2x)y''+3xy'+(1+4x)y=0)

45. (x(1+x)y''+(1-x)y'+y=0)

46. (x^2(1-x)y''+x(3-2x)y'+(1+2x)y=0)

47. (4x^2(1+x)y''-4x^2y'+(1-5x)y=0)

48. (x^2(1-x)y''-x(3-5x)y'+(4-5x)y=0)

49. (x^2(1+x^2)y''-x(1+9x^2)y'+(1+25x^2)y=0)

50. (9x^2y''+3x(1-x^2)y'+(1+7x^2)y=0)

51. (x(1+x^2)y''+(1-x^2)y'-8xy=0)

52. (4x^2y''+2x(4-x^2)y'+(1+7x^2)y=0)

## Q7.6.7

53. Under the assumptions of Theorem 7.6.2, suppose the power series

[sum_{n=0}^infty a_n(r_1)x^n quadmbox{ and }quad sum_{n=1}^infty a_n'(r_1)x^n onumber ]

converge on ((- ho, ho)).

- Show that [y_1=x^{r_1}sum_{n=0}^infty a_n(r_1)x^nquadmbox{ and }quad y_2=y_1ln x+x^{r_1}sum_{n=1}^infty a_n'(r_1)x^n
onumber] are linearly independent on ((0,
ho)). HINT: Show that if (c_{1}) and (c_{2}) are constants such that (c_{1}y_{1}+c_{2}y_{2}≡0) on ((0,
ho )), then [(c_{1}+c_{2}ln x)sum_{n=0}^{infty}a_{n}(r_{1})x^{n}+c_{2}sum_{n=1}^{infty}a_{n}'(r_{1})x^{n}=0,quad 0
- Use the result of (a) to complete the proof of Theorem 7.6.2.

54. Let

[Ly=x^2(alpha_0+alpha_1x)y''+x(eta_0+eta_1x)y'+(gamma_0+gamma_1x)y onumber]

and define

[p_0(r)=alpha_0r(r-1)+eta_0r+gamma_0quadmbox{ and }quad p_1(r)=alpha_1r(r-1)+eta_1r+gamma_1. onumber]

Theorem 7.6.1 and *Exercise 7.5.55a*

imply that if

[y(x,r)=x^rsum_{n=0}^infty a_n(r)x^n onumber]

where

[a_n(r)=(-1)^nprod_{j=1}^n{p_1(j+r-1)over p_0(j+r)}, onumber]

then

[Ly(x,r)=p_0(r)x^r. onumber]

Now suppose (p_0(r)=alpha_0(r-r_1)^2) and (p_1(k+r_1) e0) if (k) is a nonnegative integer.

- Show that (Ly=0) has the solution [y_1=x^{r_1}sum_{n=0}^infty a_n(r_1)x^n, onumber] where [a_n(r_1)={(-1)^noveralpha_0^n(n!)^2}prod_{j=1}^np_1(j+r_1-1). onumber]
- Show that (Ly=0) has the second solution [y_2=y_1ln x+x^{r_1}sum_{n=1}^infty a_n(r_1)J_nx^n, onumber] where [J_n=sum_{j=1}^n{p_1'(j+r_1-1)over p_1(j+r_1-1)}-2sum_{j=1}^n{1over j}. onumber]
- Conclude from (a) and (b) that if (gamma_1 e0) then [y_1=x^{r_1}sum_{n=0}^infty {(-1)^nover(n!)^2}left(gamma_1overalpha_0 ight)^nx^n onumber] and [y_2=y_1ln x-2x^{r_1}sum_{n=1}^infty {(-1)^nover(n!)^2}left(gamma_1overalpha_0 ight)^n left(sum_{j=1}^n{1over j} ight)x^n onumber] are solutions of [alpha_0x^2y''+eta_0xy'+(gamma_0+gamma_1x)y=0. onumber] (The conclusion is also valid if (gamma_1=0). Why?)

55. Let

[Ly=x^2(alpha_0+alpha_qx^q)y''+x(eta_0+eta_qx^q)y'+(gamma_0+gamma_qx^q)y onumber]

where (q) is a positive integer, and define

[p_0(r)=alpha_0r(r-1)+eta_0r+gamma_0quadmbox{ and }quad p_q(r)=alpha_qr(r-1)+eta_qr+gamma_q. onumber]

Suppose

[p_0(r)=alpha_0(r-r_1)^2 quadmbox{ and }quad p_q(r) otequiv0. onumber]

- Recall from
*Exercise 7.5.59*that (Ly~=0) has the solution [y_1=x^{r_1}sum_{m=0}^infty a_{qm}(r_1)x^{qm}, onumber] where [a_{qm}(r_1)={(-1)^mover (q^2alpha_0)^m(m!)^2}prod_{j=1}^mp_qleft(q(j-1)+r_1 ight). onumber] - Show that (Ly=0) has the second solution [y_2=y_1ln x+x^{r_1}sum_{m=1}^infty a_{qm}'(r_1)J_mx^{qm}, onumber] where [J_m=sum_{j=1}^m{p_q'left(q(j-1)+r_1 ight)over p_qleft(q(j-1)+r_1 ight)}-{2over q}sum_{j=1}^m{1over j}. onumber]
- Conclude from (a) and (b) that if (gamma_q e0) then [y_1=x^{r_1}sum_{m=0}^infty {(-1)^mover(m!)^2}left(gamma_qover q^2alpha_0 ight)^mx^{qm} onumber] and [y_2=y_1ln x-{2over q}x^{r_1}sum_{m=1}^infty {(-1)^mover(m!)^2}left(gamma_qover q^2alpha_0 ight)^mleft(sum_{j=1}^m{1over j} ight)x^{qm} onumber] are solutions of [alpha_0x^2y''+eta_0xy'+(gamma_0+gamma_qx^q)y=0. onumber]

56. The equation

[xy''+y'+xy=0 onumber]

is *Bessel’s equation of order zero.* (See *Exercise 7.5.53*.) Find two linearly independent Frobenius solutions of this equation.

57. Suppose the assumptions of *Exercise 7.5.53* hold, except that

[p_0(r)=alpha_0(r-r_1)^2. onumber]

Show that

[y_1={x^{r_1}overalpha_0+alpha_1x+alpha_2x^2}quadmbox{ and }quad y_2={x^{r_1}ln xoveralpha_0+alpha_1x+alpha_2x^2} onumber]

are linearly independent Frobenius solutions of

[x^2(alpha_0+alpha_1x+alpha_2 x^2)y''+x(eta_0+eta_1x+eta_2x^2)y'+ (gamma_0+gamma_1x+gamma_2x^2)y=0 onumber]

on any interval ((0, ho)) on which (alpha_0+alpha_1x+alpha_2x^2) has no zeros.

## Q7.6.8

58. (4x^2(1+x)y''+8x^2y'+(1+x)y=0)

59. (9x^2(3+x)y''+3x(3+7x)y'+(3+4x)y=0)

60. (x^2(2-x^2)y''-x(2+3x^2)y'+(2-x^2)y=0)

61. (16x^2(1+x^2)y''+8x(1+9x^2)y'+(1+49x^2)y=0)

62. (x^2(4+3x)y''-x(4-3x)y'+4y=0)

63. (4x^2(1+3x+x^2)y''+8x^2(3+2x)y'+(1+3x+9x^2)y=0)

64. (x^2(1-x)^2y''-x(1+2x-3x^2)y'+(1+x^2)y=0)

65. (9x^2(1+x+x^2)y''+3x(1+7x+13x^2)y'+(1+4x+25x^2)y=0)

## Q7.6.9

66.

- Let (L) and (y(x,r)) be as in
*Exercises 7.5.57*and*7.5.58*. Extend Theorem 7.6.1 by showing that [Lleft({partial yover partial r}(x,r) ight)=p'_0(r)x^r+x^rp_0(r)ln x. onumber] - Show that if [p_0(r)=alpha_0(r-r_1)^2 onumber] then [y_1=y(x,r_1) quad ext{and} quad y_2={partial yoverpartial r}(x,r_1) onumber] are solutions of (Ly=0).

## 7.7E: The Method of Frobenius II (Exercises) - Mathematics

Calculus solution manual 7e James Stewart [PDF]

### Calculus Early Transcendentals (7E Solution) by James Stewart

MathSchoolinternational contain 5000+ of Mathematics Free PDF Books and Physics Free PDF Books . Which cover almost all topics for students of Mathematics, Physics and Engineering. Here is extisive list of Calculus ebooks . We hope students and teachers like these textbooks , notes and solution manuals.

As an engineering, economics, mathematics or physics student you need to take calculus course. mathschoolinternational provide comprehensive collection of following best textbooks , best solution manuals and solved notes which will helpful for you and increase your efficiency.

• Top 10 Best Single-Variable Calculus

• Top 10 Best Mutlivariable Calculus

• Top 5 Best AP Calculus

• Top 7 Best Calculus with Analytic Geometry

• Early Transcendentals Calculus

• Top 25 Best Solved Calculus

• Top 50 Best Advance Calculus

Congratulations, the link is avaliable for free download.

###### How to Download a Book?, . Need Help?

About this book :-

Calculus Early Transcendentals (7E) written by James Stewart .

Success in your calculus course starts here! James Stewart's CALCULUS: EARLY TRANSCENDENTALS texts are world-wide best-sellers for a reason: they are clear, accurate, and filled with relevant, real-world examples. With CALCULUS: EARLY TRANSCENDENTALS, Eighth Edition, Stewart conveys not only the utility of calculus to help you develop technical competence, but also gives you an appreciation for the intrinsic beauty of the subject. His patient examples and built-in learning aids will help you build your mathematical confidence and achieve your goals in the course

Book Detail :-

**Title:** Calculus Early Transcendentals (Solution)

**Edition:** 7th

**Author(s):** James Stewart

**Publisher:** Brooks/Cole

**Series:**

**Year:** 2015

**Pages:** 1778

**Type:** PDF

**Language:** English

**ISBN:** 1285741552,978-1-285-74155-0,978-1-305-27235-4

**Country:** Canada

Get this book from Amazon

About Author :-

Calculus Early Transcendentals (7E Solution) written by James Stewart .

He received his M. S. (master of science) at Stanford University and his Ph.D. (doctor of philosophy) from the University of Toronto.

He worked as a postdoctoral fellow at the University of London, where his research focused on harmonic and functional analysis. Stewart was most recently Professor of Mathematics at McMaster University, and his research field was harmonic analysis. Stewart was the author of a best calculus textbook series published by Cengage Learning, including Calculus, Calculus: Early Transcendentals and Calculus: Concepts and Contexts, as well as a series of precalculus texts.

Join our new updates, alerts:-

For new updates and alerts join our WhatsApp Group and Telegram Group (you can also ask any [pdf] book/notes/solutions manual).

Join WhatsApp Group

Join Telegram Group

Book Contents :-

Calculus Early Transcendentals (7E Solution) written by James Stewart cover the following topics. **1. FUNCTIONS AND MODELS.**

Four Ways to Represent a Function. Mathematical Models: A Catalog of Essential Functions. New Functions from Old Functions. Graphing Calculators and Computers. Principles of Problem Solving. **2. LIMITS.**

The Tangent and Velocity Problems. The Limit of a Function. Calculating Limits Using the Limit Laws. The Precise Definition of a Limit. Continuity. **3. DERIVATIVES.**

Derivatives and Rates of Change. Writing Project: Early Methods for Finding Tangents. The Derivative as a Function. Differentiation Formulas. Applied Project: Building a Better Roller Coaster. Derivatives of Trigonometric Functions. The Chain Rule. Applied Project: Where Should a Pilot Start Descent?. Imlicit Differentiation. Rates of Change in the Natural and Social Sciences. Related Rates. Linear Approximations and Differentials. Laboratory Project: Taylor Polynomials. **4. APPLICATIONS OF DIFFERENTIATION.**

Maximum and Minimum Values. Applied Project: The Calculus of Rainbows. The Mean Value Theorem. How Derivatives Affect the Shape of a Graph. Limits at Infinity Horizontal Asymptotes. Summary of Curve Sketching. Graphing with Calculus and Calculators. Optimization Problems. Applied Project: The Shape of a Can. Newton's Method. Antiderivatives. **5. INTEGRALS.**

Areas and Distances. The Definite Integral. Discovery Project: Area Functions. The Fundamental Theorem of Calculus. Indefinite Integrals and the Net Change Theorem. Writing Project: Newton, Leibniz, and the Invention of Calculus. The Substitution Rule. **6. APPLICATIONS OF INTEGRATION.**

Areas between Curves. Volume. Volumes by Cylindrical Shells. Work. Average Value of a Function. **7. INVERSE FUNCTIONS:**

EXPONENTIAL, LOGARITHMIC, AND INVERSE TRIGONOMETRIC FUNCTIONS. Inverse Functions. (Instructors may cover either Sections 7.2-7.4 or Sections 7.2*-7.4*). Exponential Functions and Their Derivatives. Logarithmic Functions. Derivatives of Logarithmic Functions. *The Natural Logarithmic Function. *The Natural Exponential Function. *General Logarithmic and Exponential Functions. Exponential Growth and Decay. Inverse Trigonometric Functions. Applied Project: Where to Sit at the Movies. Hyperbolic Functions. Indeterminate Forms and L'Hospital's Rule. Writing Project: The Origins of L'Hospital's Rule. **8. TECHNIQUES OF INTEGRATION.**

Integration by Parts. Trigonometric Integrals. Trigonometric Substitution. Integration of Rational Functions by Partial Fractions. Strategy for Integration. Integration Using Tables and Computer Algebra Systems. Discovery Project: Patterns in Integrals. Approximate Integration. Improper Integrals. **9. FURTHER APPLICATIONS OF INTEGRATION.**

Arc Length. Discovery Project: Arc Length Contest. Area of a Surface of Revolution. Discovery Project: Rotating on a Slant. Applications to Physics and Engineering. Discovery Project: Complementary Coffee Cups. Applications to Economics and Biology. Probability. **10. DIFFERENTIAL EQUATIONS.**

Modeling with Differential Equations. Direction Fields and Euler's Method. Separable Equations. Applied Project: Which is Faster, Going Up or Coming Down? Models for Population Growth. Applied Project: Calculus and Baseball. Linear Equations. Predator-Prey Systems. **PARAMETRIC Equations and Polar Coordinates.**

Curves Defined by Parametric Equations. Laboratory Project: Families of Hypocycloids. Calculus with Parametric Curves. Laboratory Project: Bezier Curves. Polar Coordinates. Areas and Lengths in Polar Coordinates. Conic Sections. Conic Sections in Polar Coordinats. **11. INFINITE SEQUENCES AND SERIES.**

Sequences. Laboratory Project: Logistic Sequences. Series. The Integral Test and Estimates of Sums. The Comparison Tests. Alternating Series. Absolute Convergence and the Ratio and Root Tests. Strategy for Testing Series. Power Series. Representation of Functions as Power Series. Taylor and Maclaurin Series. Writing Project: How Newton Discovered the Binomial Series. Applications of Taylor Polynomials. Applied Project: Radiation from the Stars. **12. VECTORS AND THE GEOMETRY OF SPACE.**

Three-Dimensional Coordinate Systems. Vectors. The Dot Product. The Cross Product. Discovery Project: The Geometry of a Tetrahedron. Equations of Lines and Planes. Cylinders and Quadric Surfaces. **13. VECTOR FUNCTIONS.**

Vector Functions and Space Curves. Derivatives and Integrals of Vector Functions. Arc Length and Curvature. Motion in Space: Velocity and Acceleration. Applied Project: Kepler's Laws. **14. PARTIAL DERIVATIVES.**

Functions of Several Variables. Limits and Continuity. Partial Derivatives. Tangent Planes and Linear Approximations. The Chain Rule. Directional Derivatives and the Gradient Vector. Maximum and Minimum Values. Applied Project: Designing a Dumpster. Discovery Project: Quadratic Approximations and Critical Points. Lagrange Multipliers. Applied Project: Rocket Science. Applied Project: Hydro-Turbine Optimization. **15. MULTIPLE INTEGRALS.**

Double Integrals over Rectangles. Iterated Integrals. Double Integrals over General Regions. Double Integrals in Polar Coordinates. Applications of Double Integrals. Triple Integrals. Discovery Project: Volumes of Hyperspheres. Triple Integrals in Cylindrical. Discovery Project: The Intersection of Three Cylinders. Triple Integrals in Spherical Coordinates. Applied Project: Roller Derby. Change of Variables in Multiple Integrals. 17. VECTOR CALCULUS. Vector Fields. Line Integrals. The Fundamental Theorem for Line Integrals. Green's Theorem. Curl and Divergence. Parametric Surfaces and Their Areas. Surface Integrals. Stokes' Theorem. Writing Project: Three Men and Two Theorems. The Divergence Theorem. Summary. **16. VECTOR CALCULUS**

Vector Fields, Line Integrals, The Fundamental Theorem for Line Integrals, Green’s Theorem, Curl and Divergence, Parametric Surfaces and Their Areas, Surface Integrals, Stokes’ Theorem, Writing Project • Three Men and Two Theorems, The Divergence Theorem **17. SECOND-ORDER DIFFERENTIAL EQUATIONS.**

Second-Order Linear Equations. Nonhomogeneous Linear Equations. Applications of Second-Order Differential Equations. Series Solutions. **Appendixes.**

A: Intervals, Inequalities, and Absolute Values. B: Coordinate Geometry and Lines. C: Graphs of Second-Degree Equations. D: Trigonometry. E: Sigma Notation. F: Proofs of Theorems. G. Complex Numbers. H: Answers to Odd-Numbered Exercises.

We are not the owner of this book/notes. We provide it which is already avialable on the internet. For any further querries please contact us. We never SUPPORT PIRACY. This copy was provided for students who are financially troubled but want studeing to learn. If You Think This Materials Is Useful, Please get it legally from the PUBLISHERS. Thank you.

## Linear Codes

MinJia Shi , . Patrick Sole , in Codes and Rings , 2017

### 5.2 Modular Independence

In general, over a finite semilocal ring a simple matrix form like in the previous section does not exist. A standard form, using the CRT, was defined in [5] for the case of rings Z m , and generalized to finite PIRs in [1] . Our expositions follows [1] .

**Exercise 5.6**

Let R = Z 4 [ x ] / ( x 2 ) . Show that *R* is local Frobenius but not a chain ring.

First, define **modular independence** over a local Frobenius ring *R*, with maximal ideal *M*. A family of *s* vectors w 1 , … , w s is said to be modular independent if any linear combination relation

We define a **basis** of a code over a finite Frobenius ring as a system of vectors that is independent, modular independent, and spanning. As noted in [1, Remark 2] , the two properties of independence and modular independence are logically independent.

**Exercise 5.7**

Let R = Z 12 , and w 1 = ( 11 , 7 ) , w 2 = ( 3 , 9 ) . Show that the system < w 1 , w 2 >is modular independent, but not independent.

**Exercise 5.8**

Let R = Z 12 , and w 1 = ( 4 , 0 ) , w 2 = ( 0 , 3 ) . Show that the system < w 1 , w 2 >is independent, but not modular independent.

The following result is derived from [4, Th. 25.4.6.B] and [4, Th. 25.3.3] in [1, Th. 4.4] .

*If C is a code of length n over a finite PIR R, then there exists a tower of ideals* ( d 1 ) ⊆ ( d 2 ) ⊆ ⋯ ⊆ ( d r ) *, such that we have the isomorphism of R-modules*

Denoting the above isomorphism as *ϕ*, and by e i the image of the canonical basis of R r , in the direct product R / ( d 1 ) × ⋯ × R / ( d r ) , we have the following existence result for a basis.

**Theorem 5.10**

*[1, Th. 4.6]* *Let* v i = ϕ − 1 ( e i ) *for* i = 1 , … , r *. The system* v 1 , … , v r *is a basis of C.*

## Related Books

#### Aspects of Quantum Field Theory in Curved Spacetime

#### What Is a Quantum Field Theory?

A First Introduction for Mathematicians

#### Quantum Field Theory for Mathematicians

#### Topological and Non-Topological Solitons in Scalar Field Theories

#### Topology, Geometry and Quantum Field Theory

Proceedings of the 2002 Oxford Symposium in Honour of the 60th Birthday of Graeme Segal

#### New Directions in Hopf Algebras

## Author(s)

### Biography

Kenneth B. Howell earned bachelor degrees in both mathematics and physics from Rose-Hulman Institute of Technology, and master’s and doctoral degrees in mathematics from Indiana University. For more than thirty years, he was a professor in the Department of Mathematical Sciences of the University of Alabama in Huntsville (retiring in 2014). During his academic career, Dr. Howell published numerous research articles in applied and theoretical mathematics in prestigious journals, served as a consulting research scientist for various companies and federal agencies in the space and defense industries, and received awards from the College and University for outstanding teaching. He is also the author of Principles of Fourier Analysis (Chapman & Hall/CRC, 2001).

## 7.7E: The Method of Frobenius II (Exercises) - Mathematics

mathematical logic part II, rene cori, daniel lascar [pdf]

### Mathematical Logic: A Course with Exercises Part II by Rene Cori, Daniel Lascar, Donald H. Pelletier

MathSchoolinternational contain 5000+ of Mathematics Free PDF Books and Physics Free PDF Books . Which cover almost all topics for students of Mathematics, Physics and Engineering. Here is extisive list of Basic Mathematics ebooks . We hope students and teachers like these textbooks , notes and solution manuals.

Congratulations, the link is avaliable for free download.

###### How to Download a Book?, . Need Help?

About this book :-

Mathematical Logic: A Course with Exercises Part II written by Rene Cori, Daniel Lascar, Donald H. Pelletier

This book is based upon several years' experience teaching logic at the UFR of Mathematics of the University of Paris 7, at the beginning graduate level as well as within the DEA of Logic and the Foundations of Computer Science. As soon as the author began to prepare our first lectures, he realized that it was going to be very difficult to introduce our students to general works about logic written in (or even translated into) French. The authors therefore decided to take advantage of this opportunity to correct the situation. Thus the first versions of the eight chapters that you are about to read were drafted at the same time that their content was being taught. The authors insist on warmly thanking all the students who contributed thereby to a tangible improvement of the initial presentation.

Logic forms the basis of mathematics, and is hence a fundamental part of any mathematics course. It is a major element in theoretical computer science and has undergone a huge revival with the every- growing importance of computer science. This text is based on a course to undergraduates and provides a clear and accessible introduction to mathematical logic. The concept of model provides the underlying theme, giving the text a theoretical coherence whilst still covering a wide area of logic. The foundations having been laid in "Part I", this book starts with recursion theory, a topic essential for the complete scientist. Then follows Godel's incompleteness theorems and axiomatic set theory. Chapter 8 provides an introduction to model theory. There are examples throughout each section, and varied selection of exercises at the end. Answers to the exercises are given in the appendix.

Rene Cori and Daniel Lascar, Equipe de Logique Mathematique, Universite Paris VII, Translated by Donald H. Pelletier, York University, Toronto

Book Detail :-

**Title:** Mathematical Logic: A Course with Exercises Part II

**Edition:**

**Author(s):** Rene Cori, Daniel Lascar, Donald H. Pelletier

**Publisher:** Oxford University Press

**Series:**

**Year:** 2001

**Pages:** 347

**Type:** PDF

**Language:** English

**ISBN:** 0198500505,9780198500506

**Country:** US

Get this Books from Amazon

About Author :-

The author Rene Cori is french mathematician.

Join our new updates, alerts:-

For new updates and alerts join our WhatsApp Group and Telegram Group (you can also ask any [pdf] book/notes/solutions manual).

Join WhatsApp Group

Join Telegram Group

Book Contents :- Mathematical Logic: A Course with Exercises Part II written by Rene Cori, Daniel Lascar, Donald H. Pelletier cover the following topics.

Introduction

Part-I

1. Propositional calculus

2. Boolean algebras

3. Predicate calculus

4. The completeness theorems

Solutions to the exercises of Part I

Part-II

5. Recursion theory

6. Formalization of arithmetic, Godel's theorems

7. Set theory

8. Some model theory

Solutiions to the exercises of Part II

Bibliography

Index

We are not the owner of this book/notes. We provide it which is already avialable on the internet. For any further querries please contact us. We never SUPPORT PIRACY. This copy was provided for students who are financially troubled but want studeing to learn. If You Think This Materials Is Useful, Please get it legally from the PUBLISHERS. Thank you.

## 7.7E: The Method of Frobenius II (Exercises) - Mathematics

THE PERRON-FROBENIUS THEOREM

The projects in this collection are concerned with models from many different areas that is part of their purpose, to show that linear algebra is a broadly applicable branch of mathematics. If one reviews them as a whole, they do have a couple of common mathematical characteristics: eigenvalues are very useful and the matrices studied are almost all nonnegative (all entries in them 0 or greater). To be a bit more precise about the former, it is often only the largest, or dominant , eigenvalue that we need to know.

This makes life much easier. Computing all the eigenvalues of a matrix may be a difficult task. For a 10x10 matrix from a population model, Maple often has trouble computing all of them. Yet we only need one of the 10 and the methods shown in Project #10 work reliably.

The mathematical question that all of this raises is: how do we know if the dominant eigenvalue of a matrix is positive ? Another question is: could there ever be positive and negative eigenvalues of the same size? If so, the behavior of the associated system could be quite different. The answer to this question is yes, as is shown in problem 3 of Project #10

Both of these questions are answered by the Perron-Frobenius Theorem for nonnegative matrices. The results of the theorem depend upon what kind of nonnegative matrix one has. The first kind we look at are called irreducible .

DEFINITION An nxn nonnegative matrix A is said to be irreducible if there is no permutation of coordinates such that

where P is an nxn permutation matrix (each row and each column have exactly one 1 entry and all others 0), A 11 is rxr, and A 22 is (n-r)x(n-r). This is not a particularly inspiring definition in view of the fact it only tells us what is NOT irreducible. About the only thing we know for sure is that a matrix with all positive entries is irreducible.

To clarify the notion of irreducibility, we examine it in three different contexts:

1. Markov Chains . Suppose A is the transition matrix of a Markov chain. Then it is nonnegative and suppose further it is set up so that

a ij = the probability of going from state j to state i

If one looks at, say, the fourth row of A, then one sees the probabilities of going from state 4 to the various other states (as well as remaining in state 4). Any entry that is zero indicates that one cannot go to that state from state 4, at least in one step.

Now if A is reducible, for instance,

then we can see that it is possible to go from states 1,2 or 3 to any states but only from states 4 or 5 to themselves. This is, of course, the traditional definition of "absorbing" states in a Markov Chain. The above mentioned permutation matrix P simply amounts to labeling the states so the absorbing ones are last.

2. Graphs . If we considered directed graphs then each has associated with it a nonnegative matrix with all entries 0 or 1 with

a ij = 1 if there is an arc from vertex i to vertex j.

If the associated matrix is irreducible then one can get from any vertex to any other vertex (perhaps in several steps) whereas if it is reducible then (like the Markov Chain case), there are vertices from which one cannot travel to all other vertices.

The former case, in the realm of graph theory, is called a "strongly connected" graph.

3. Dynamical Systems . Suppose, as in the population case we have a system of the form

and that A is reducible as in the definition above. Then in the manner of partitioned matrices, one can rewrite this system as

where Y has the first r components of X and Z has the last n-r so together they comprise the original vector X . (The reader who has either not worked with partitioned matrices or is rusty on the subject is urged to sketch the details out at the component level and verify it the result should follow in a straightforward manner.) While this may not seem helpful at first glance, it means the solution for Z may be obtained first, with no reference to the system governing Y , and then the solution for Y obtained from the (nonhomogeneous) system that treats Z as known. In the early language of such a problem, the original system has been "reduced" to two simpler systems. To physicists, it has been partially decoupled .

Saying, then, that the matrix A is irreducible means that the system cannot be reduced it must be treated as a whole in studying its behavior.

While the above discussion may clarify the notion of an irreducible matrix, it is not much help in verifying that a given matrix actually is irreducible. For example, it is not blatantly obvious that of the two following matrices

the latter is irreducible while the former is not. If we went strictly by the definition, we would have to keep trying permutations and looking for the critical zero submatrix to appear. But for an nxn matrix there are, of course, n! possible such matrices P making for a great deal of work ( if A is 10 x 10 then there are over 3 million possibilities!).

The following theorem is thus quite direct and helpful.

THEOREM . A is a nonnegative irreducible nxn matrix if and only if

(I n + A) n-1 > 0. (see [12], p.6 for details and proof)

We note that the power in the above expression contains the same n as in the size of the matrix. Since computer software is readily available to compute matrix powers, the above may be readily checked. Note the result is also an nxn matrix, and if any entry is zero, then the contrapositive of the theorem says A is reducible. Referring to the two matrices above, we find that by direct computation

At this point, it seems appropriate to finally state the

PERRON-FROBENIUS THEOREM FOR IRREDUCIBLE MATRICES

If A is nxn, nonnegative, irreducible, then

1 . one of its eigenvalues is positive and greater than or equal to (in absolute value) all other eigenvalues

2 . there is a positive eigenvector corresponding to that eigenvalue

and 3 . that eigenvalue is a simple root of the characteristic equation of A.

Such an eigenvalue is called the "dominant eigenvalue" of A and we assume hereafter we have numbered the eigenvalues so it is the first one. We should point out that other eigenvalues may be positive, negative or complex (and if they are complex then by "absolute value" we mean modulus, or distance in the complex plane from the origin). Complex eigenvalues are a real possibility as only symmetric matrices are guaranteed to not have them, and very few of the matrices we have been discussing, in application, will be symmetric with the notable exception of undirected graphs. Part 3 of the theorem also merits brief comment. One ramification of it is that the dominant eigenvalue cannot be a multiple root. One will not be left with the classic situation of having more roots than corresponding linearly independent eigenvectors and hence having to worry about or calculate generalized eigenvectors and/or work with Jordan blocks. The same may not be said for the other eigenvalues of A but in the models here, they do not concern us.

Primitive Matrices and the Perron-Frobenius Theorem

Irreducible matrices are not the only nonnegative matrices of possible interest in the models we have looked at. Suppose we have a dynamical system of the form

(This matrix, while containing many suspicious-looking zeroes, is indeed irreducible. The easiest way to see this is to construct the associated graph and check that you can get from any vertex to any other vertex.) We calculate that its dominant eigenvalue is 1.653 and that an associated eigenvector is (.29, .48, .29, .35, .5, .48)^t, so based on the above discussion we expect the long-term behavior of the system to be of the form:

x(k) = c1 (1.653)^k (.29, .48, .29, .35, .5, .48)^t    (c1 determined from the initial conditions)

Simulation of the system is, of course, quite easy, as one just needs to keep multiplying the current solution by A to get the solution one time period later. However, in this case doing so does not seem to validate the predicted behavior and in fact does not seem to show any sort of limit at all! (The reader is encouraged to fire up some software, pick an initial condition, and observe this behavior.)

So what went wrong? If one calculates all of the eigenvalues of the matrix, they turn out to be 1.653, 0, 0, ±.856i, and −1.653. The last of these is where the limit problem arises: since we are taking integral powers of the eigenvalues, we get an algebraic "flip-flop" effect:

x(k) = c1 (1.653)^k e1 + … + c6 (−1.653)^k e6

(It is, by the way, a known result that an irreducible matrix cannot have two independent eigenvectors that are both nonnegative; see [16], Chapter 2. Thus e6 in the above expansion has components of mixed sign.)

Thus 1.653 did not turn out to be as neatly "dominant" as we would have liked. If we look back at the statement of the Perron-Frobenius Theorem, we see it guaranteed a positive eigenvalue (with positive eigenvector) with absolute value greater than or equal to that of any other eigenvalue. In the example just considered, the equality was met.
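The flip-flop is easy to reproduce in miniature. Since the 6×6 matrix itself is not reproduced here, the sketch below uses a hypothetical 2×2 stand-in with eigenvalues +2 and −2, so that, as with ±1.653 above, the dominant modulus is attained twice.

```python
import numpy as np

# Stand-in for the 6x6 example: eigenvalues +2 and -2 share the same
# modulus, so repeated multiplication never settles onto one direction.
A = np.array([[0., 2.],
              [2., 0.]])

x = np.array([1., 0.])
history = []
for _ in range(6):
    x = A @ x                  # advance one time period
    history.append(x.copy())
# The iterates alternate between the axes: (0,2), (4,0), (0,8), (16,0), ...
```

The magnitudes grow like 2^k, but the direction of x(k) oscillates between the two axes, which is exactly the behavior the simulation of the 6×6 system exhibits.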

So the question comes up: what stronger condition than irreducibility should one impose so that a nonnegative matrix has a truly dominant eigenvalue strictly greater in absolute value than any other eigenvalue? The answer is that the matrix needs to be "primitive". While there are several possible definitions of "primitive", most of which have a graphical context in terms of cycles, we will state a more general, algebraic definition as the models we may wish to look at are from a diverse group.

DEFINITION. An n×n nonnegative matrix A is primitive iff A^k > 0 for some power k.

We note the strict inequality: all n² entries of A^k must be positive for some power k. Such a condition, again considering the availability and ease of use of computer software, is easy to check. If one experiments with the 6×6 matrix from the last example, one never finds a power for which all 36 entries are positive. The question might come up: how many powers of A does one have to examine before concluding that it is not primitive? If A is primitive, then the power which has all positive entries is less than or equal to n² − 2n + 2 (this is due to Wielandt in 1950; see [17]). Also, it can easily be shown that if A is primitive then A is irreducible; thus the class of irreducible matrices contains the class of primitive matrices as a subset. Finally, primitive matrices indeed have the desired property in terms of a dominant eigenvalue:

PERRON-FROBENIUS THEOREM FOR PRIMITIVE MATRICES

If A is an n×n nonnegative primitive matrix, then

1. one of its eigenvalues is positive and greater than (in absolute value) all other eigenvalues

2. there is a positive eigenvector corresponding to that eigenvalue

3. that eigenvalue is a simple root of the characteristic equation of A.
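The primitivity check described above, using Wielandt's bound n² − 2n + 2 on how far one must look, can be sketched as follows; the two test matrices are illustrative, not from the text.

```python
import numpy as np

def is_primitive(A):
    """Primitivity test for a nonnegative matrix A: by Wielandt's bound,
    if A is primitive then A**k > 0 entrywise for some
    k <= n**2 - 2n + 2, so it suffices to inspect powers up to that bound."""
    n = A.shape[0]
    M = np.eye(n)
    for _ in range(n * n - 2 * n + 2):
        M = M @ A
        if (M > 0).all():
            return True
    return False

# The first matrix is primitive (its square is strictly positive);
# the second, a 2-cycle, is irreducible but not primitive.
F = np.array([[1., 1.],
              [1., 0.]])
C = np.array([[0., 1.],
              [1., 0.]])
```

Note that once some power of A is strictly positive, all higher powers are too, so stopping at the Wielandt bound is a genuine decision procedure, not a heuristic.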

In addition to the various projects, some other applications that involve the Perron–Frobenius Theorem deserve mention:

Application #1: Ranking of Football Teams.

James P. Keener has developed several models of schemes for ranking college football teams which may be found in [6].

In general, it should be remarked that graph theory and nonnegative matrices have a very strong relationship and that the Perron–Frobenius Theorem is often a powerful tool in graph theory. The interested reader is referred, for example, to the excellent books by Minc [12] and Varga [16] for an in-depth discussion.

As stated above, a graph (directed or not) has associated with it a nonnegative, "adjacency" matrix whose entries are 0s and 1s. A fundamental result about lengths of cycles in the graph may be obtained by determining whether the matrix is primitive or not. The very elegant result which occurs with the help of the Perron-Frobenius Theorem is this:

* if the matrix is primitive (hence has a dominant eigenvalue with absolute value strictly greater than that of all other eigenvalues), then the greatest common divisor (gcd) of the lengths of all cycles is 1

* if the matrix is irreducible but not primitive, then the greatest common divisor of the lengths of all cycles equals the number of eigenvalues whose magnitude matches that of the dominant eigenvalue (the dominant eigenvalue included).

It is natural to refer to graphs whose matrices are irreducible but not primitive as imprimitive, and to the aforementioned gcd as the index of the graph. It should also be mentioned that the eigenvalues of maximal magnitude lie equally spaced in the complex plane on a circle whose radius equals the dominant eigenvalue.

The interested reader is encouraged to examine the following pair of graphs in light of this result:

In the case of the first graph the eigenvalues are 1, i, −i, and −1, while in the second they are 1.221, −.248 ± 1.034i, and −.724, consistent with the gcd of cycle lengths being 4 for graph 1 and 1 for graph 2.
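The first graph's count can be checked numerically. The sketch below assumes the first graph is the directed 4-cycle, which is consistent with the quoted eigenvalues 1, i, −i, −1: its only cycles have lengths 4, 8, 12, …, so the index is 4.

```python
import numpy as np

# Adjacency matrix of a directed 4-cycle (assumed form of "graph 1").
A = np.array([[0., 1., 0., 0.],
              [0., 0., 1., 0.],
              [0., 0., 0., 1.],
              [1., 0., 0., 0.]])

vals = np.linalg.eigvals(A)                     # 1, i, -1, -i
r = np.abs(vals).max()                          # dominant modulus: 1
index = int(np.isclose(np.abs(vals), r).sum())  # eigenvalues on that circle
# index == 4, matching the gcd of the cycle lengths.
```

All four eigenvalues lie equally spaced on the unit circle, exactly as the remark about imprimitive graphs predicts.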

The Perron–Frobenius Theorem has proven to be a consistently powerful result for examining certain nonnegative matrices arising in discrete models. We have seen that careful consideration must be given to which hypothesis is used, depending on whether one has an irreducible or a primitive matrix. In applications, knowledge of the dominant eigenvalue and eigenvector is both very helpful and attainable, while knowledge of the rest of the "spectrum" is both unnecessary and computationally expensive.

The author wishes to thank Dr. Kenneth Lane of KDLLabs, Satellite Beach, Florida, for many inspiring insights and conversations concerning the power and richness of the Perron-Frobenius Theorem.

1. Berman, A. and Plemmons R. 1979. Nonnegative Matrices in the Mathematical Sciences. New York: Academic Press.

2. Chartrand, G. 1977. Graphs as Mathematical Models . Boston: Prindle, Weber and Schmidt.

3. Gould, Peter. 1967. The Geographic Interpretation of Eigenvalues. Transactions of the Institute of British Geographers 42: 53-85.

4. Goulet, J. 1982. Mathematical Systems Analysis - A Course. The UMAP Journal 3 (4):395-406.

6. Keener, James P., 1993. The Perron-Frobenius Theorem and the Ranking of Football Teams. SIAM Review 35 (1): 80-93.

7. Kemeny, John and Snell, Laurie. 1960. Finite Markov Chains . New York: Van Nostrand Reinhold.

8. Lane, Kenneth D. 1983. Computing the Index of a Graph. Congressus Numerantium 40: 143-154.

9. Luenberger, D.G. 1979. Dynamic Systems . New York: John Wiley.

11. Maki, D.P. and Thompson, M. 1973. Mathematical Models and Applications . Englewood Cliffs, New Jersey: Prentice-Hall.

12. Minc, Henryk. 1988. Nonnegative Matrices . New York: John Wiley and Sons.

14. Straffin, Philip D. 1980. Linear Algebra in Geography: Eigenvectors of Networks. Mathematics Magazine 53 (5): 269-276.

16. Varga, Richard S. 1962. Matrix Iterative Analysis . Englewood Cliffs N.J.: Prentice-Hall.

17. Wielandt, H. 1950. Unzerlegbare, nicht negative Matrizen. Math. Z. 52: 642-648.

## Table of Contents

Contents

Preface

Chapter I. Introduction To Partial Differential Equations

1. Introduction

2. The One-Dimensional Wave Equation

3. Method Of Separation Of Variables

4. The Two-Dimensional Wave Equation

5. Three-Dimensional Wave Equation

6. The Wave Equation In Plane And Cylindrical Polar Coordinates

A. Plane Polars

B. Cylindrical Polars

7. The Wave Equation In Spherical Polar Coordinates

8. Laplace's Equation In Two Dimensions

A. Cartesian Coordinates

B. Polar Coordinates

9. Laplace's Equation In Three Dimensions

10. The Diffusion Or Heat Flow Equation

10.1. Neutron Diffusion

11. A Fourth Order Partial Differential Equation

12. The Bending Of An Elastic Plate — The Biharmonic Equation

13. Characteristics

13.1. Cauchy's Problem

13.2. Reduction Of (13.1.1) To The Standard Form

13.3. Riemann's Method Of Solution Of (13.1.1)

13.4. Numerical Integration Of Hyperbolic Differential Equations

Problems

General References

Chapter II. Ordinary Differential Equations: Frobenius' And Other Methods Of Solution

1. Introduction

2. Solution In Series By The Method Of Frobenius

3. Bessel's Equation

4. Legendre's Equation

5. Hypergeometric Equation

6. Series Solution About A Point Other Than The Origin

6.1. The Transformation X = (1 - ξ)/2

7. Series Solution In Descending Powers Of X

8. Confluent Hypergeometric Equation

8.1. Laguerre Polynomials

8.2. Hermite Polynomials

9. Asymptotic Or Semi-Convergent Series

10. Change Of Dependent Variable

11. Change Of The Independent Variable

12. Exact Equations

13. The Inhomogeneous Linear Equation

14. Perturbation Theory For Non-Linear Differential Equations

14.1. The Perturbation Method

14.2. Periodic Solutions

Problems

General References

Chapter III. Bessel And Legendre Functions

1. Definition Of Special Functions

2. Jn(X), The Bessel Function Of The First Kind Of Order N

2.1. Recurrence Relations: Jn(X)

3. Bessel Function Of The Second Kind Of Order N, Yn(X)

4. Equations Reducible To Bessel's Equation

5. Applications

6. Modified Bessel Functions: In(X), Kn(X)

6.1. Recurrence Relations For In(X) And Kn(X)

6.2. Equations Reducible To Bessel's Modified Equation

6.3. Bessel Functions Of The Third Kind (Hankel Functions)

7. Illustrations Involving Modified Bessel Functions

8. Orthogonal Properties

8.1. Expansion Of F(X) In Terms Of Jn(ξix)

8.2. Jn(X) As An Integral (Where N Is Zero Or An Integer)

8.3. Other Important Integrals

9. Integrals Involving The Modified Bessel Functions

10. Zeros Of The Bessel Functions

11. A Generating Function For The Legendre Polynomials

11.1. Recurrence Relations

11.2. Orthogonality Relations For The Legendre Polynomials

11.3. Associated Legendre Functions

12. Applications From Electromagnetism

13. Spherical Harmonics

14. The Addition Theorem For Spherical Harmonics

Problems

General References

Chapter IV. The Laplace And Other Transforms

1. Introduction

2. Laplace Transforms And Some General Properties

3. Solution Of Linear Differential Equations With Constant Coefficients

4. Further Theorems And Their Application

5. Solution Of The Equation Φ(D)x(t) = F(t) By Means Of The Convolution Theorem

6. Application To Partial Differential Equations

7. The Finite Sine Transform

8. The Simply Supported Rectangular Plate

9. Free Oscillations Of A Rectangular Plate

10. Plate Subject To Combined Lateral Load And A Uniform Compression

11. The Fourier Transform

Problems

Chapter V. Matrices

1. Introduction

1.1. Definitions

2. Determinants

2.1. Evaluation Of Determinants

3. Reciprocal Of A Square Matrix

3.1. Determinant Of The Adjoint Matrix

4. Solution Of Simultaneous Linear Equations

4.1. Choleski-Turing Method

4.2. A Special Case: The Matrix A Is Symmetric

5. Eigenvalues (Latent Roots)

5.1. The Cayley-Hamilton Theorem

5.2. Iterative Method For Determination Of Eigenvalues

5.3. Evaluation Of Subdominant Eigenvalue

6. Special Types Of Matrices

6.1. Orthogonal Matrix

6.2. Hermitian Matrix

7. Simultaneous Diagonalization Of Two Symmetric Matrices

Problems

General References

Chapter VI. Analytical Methods In Classical And Wave Mechanics

1. Introduction

2. Definitions

3. Lagrange's Equations Of Motion For Holonomic Systems

3.1. Derivation Of The Equations

3.2. Conservative Forces

3.3. Illustrative Examples

3.4. Energy Equation

3.5. Orbital Motion

3.6. The Symmetrical Top

3.7. The Two-Body Problem

3.8. Velocity-Dependent Potentials

3.9. The Relativistic Lagrangian

4. Hamilton's Equations Of Motion

5. Motion Of A Charged Particle In An Electromagnetic Field

6. The Solution Of The Schrödinger Equation

6.1. The Linear Harmonic Oscillator

6.2. Spherically Symmetric Potentials In Three Dimensions

6.3. Two-Body Problems

Problems

General References

Chapter VII. Calculus Of Variations

1. Introduction

2. The Fundamental Problem: Fixed End-Points

2.1. Special Cases

2.2. Variable End-Points

2.3. A Generalization Of The Fixed End-Point Problem

2.4. One Independent, Several Dependent Variables

2.5. One Dependent And Several Independent Variables

3. Isoperimetric Problems

4. Rayleigh-Ritz Method

4.1. Sturm-Liouville Theory For Fourth-Order Equations

5. Torsion And Viscous Flow Problems

5.1. Torsional Rigidity

5.2. Trefftz Method

5.3. Generalization To Three Dimensions

6. Variational Approach To Elastic Plate Problems

6.1. Boundary Conditions

6.2. Buckling Of Plates

7. Binding Energy Of The He4 Nucleus

8. The Approximate Solution Of Differential Equations

Problems

General References

Chapter VIII. Complex Variable Theory And Conformal Transformations

1. The Argand Diagram

2. Definitions Of Fundamental Operations

3. Function Of A Complex Variable

3.1. Cauchy-Riemann Equations

4. Geometry Of Complex Plane

5. Complex Potential

5.1. Uniform Stream

5.2. Source, Sink And Vortex

5.3. Doublet (Dipole)

5.4. Uniform Flow + Doublet + Vortex. Flow Past A Cylinder

5.5. A Torsion Problem In Elasticity

6. Conformal Transformation

6.1. Bilinear (Möbius) Transformation

7. Schwarz-Christoffel Transformation

7.1. Applications

7.2. The Kirchhoff Plane

8. Transformation Of A Circle Into An Aerofoil

Problems

General References

Chapter IX. The Calculus Of Residues

1. Definition Of Integration

2. Cauchy's Theorem

3. Cauchy's Integral

3.1. Differentiation

4. Series Expansions

4.1. Laurent's Theorem

5. Zeros And Singularities

5.1. Residues

6. Cauchy Residue Theorem

6.1. Application Of Cauchy's Theorem

6.2. Flow Round A Cylinder

6.3. Definite Integrals. Integration Round Unit Circle

6.4. Infinite Integrals

6.5. Jordan's Lemma

6.6. Another Type Of Infinite Integral

7. Harnack's Theorem And Applications

7.1. The Schwarz And Poisson Formulas

7.2. Application Of Conformal Transformation To Solution Of A Torsion Problem

8. Location Of Zeros Of f(z)

8.1. Nyquist Stability Criterion

9. Summation Of Series By Contour Integration

10. Representation Of Functions By Contour Integrals

10.1. Gamma Function

10.2. Bessel Functions

10.3. Legendre's Function As A Contour Integral

11. Asymptotic Expansions

12. Saddle-Point Method

Problems

General References

Chapter X. Transform Theory

1. Introduction

1.1. Complex Fourier Transform

1.2. Laplace Transform

1.3. Hilbert Transform

1.4. Hankel Transform

1.5. Mellin Transform.

2. Fourier's Integral Theorem

3. Inversion Formulas

3.1. Complex Fourier Transform

3.2. Fourier Sine And Cosine Transforms

3.3. Convolution Theorems For Fourier Transforms

4. Laplace Transform

4.1. The Inversion Integral On The Infinite Circle

4.2. Exercises In The Use Of The Laplace Transform

4.3. Linear Approximation To Axially Symmetrical Supersonic Flow

4.4. Supersonic Flow Round A Slender Body Of Revolution

5. Mixed Transforms

5.1. Linearized Supersonic Flow Past Rectangular Symmetrical Aerofoil

5.2. Heat Conduction In A Wedge

6. Integral Equations

6.1. The Solution Of A Certain Type Of Integral Equation Of The First Kind

6.2. Poisson's Integral Equation

6.3. Abel's Integral Equation

7. Hilbert Transforms

7.1. Infinite Hilbert Transform

7.2. Finite Hilbert Transform

7.3. Alternative Forms Of The Finite Hilbert Transform

Problems

General References

Chapter XI. Numerical Methods

1. Introduction

1.1. Finite Difference Operators

2. Interpolation And Extrapolation

2.1. Linear Interpolation

2.2. Everett's And Bessel's Interpolation Formulas

2.3. Inverse Interpolation

2.4. Lagrange Interpolation Formula

2.5. Formulas Involving Forward Or Backward Differences

3. Some Basic Expansions

4. Numerical Differentiation

5. Numerical Evaluation Of Integrals

5.1. Note On Limits Of Integration

5.2. Evaluation Of Double Integrals

6. Euler-Maclaurin Integration Formula

6.1. Summation Of Series

7. Solution Of Ordinary Differential Equations By Means Of Taylor Series

8. Step-By-Step Method Of Integration For First-Order Equations

8.1. Simultaneous First-Order Equations And Second-Order Equations With The First Derivative Present

8.2. The Second-Order Equation y" = f(x,y)

8.3. Alternative Method For The Linear Equation y" = g(x)y + h(x)

9. Boundary Value Problems For Ordinary Differential Equations Of The Second Order

9.1. Approximate Solution Of Eigenvalue Problems By Finite Differences

9.2. Numerical Solution Of Eigenvalue Equations

10. Linear Difference Equations With Constant Coefficients

11. Finite Differences In Two Dimensions

Problems

General References

Chapter XII. Integral Equations

1. Introduction

1.1. Types Of Integral Equations

1.2. Some Simple Examples Of Linear Integral Equations

2. Volterra Integral Equation Form For A Differential Equation

2.1. Higher Order Equations

3. Fredholm Integral Equation Form For Sturm-Liouville Differential Equations

3.1. The Modified Green's Function

3.2. Green's Function For Fourth-Order Differential Equations

4. Numerical Solution

4.1. The Numerical Solution Of The Homogeneous Equation

4.2. The Volterra Equation

4.3. Iteration Method Of Solution

5. The Variation-Iteration Method For Eigenvalue Problems

Problems

General References

Appendix

1. ∇²Φ In Spherical And Cylindrical Polar Coordinates

1.1. Plane Polar Coordinates

1.2. Cylindrical Polar Coordinates

1.3. Spherical Polar Coordinates

2. Partial Fractions

3. Sequences, Series, And Products

3.1. Sequences

3.2. Series

3.3. Infinite Products

4. Maxima And Minima For Functions Of Two Variables

4.1. Euler's Theorem Of Homogeneous Functions

4.2. The Expansion Of (Sinh aU)/(Sinh U) In Powers Of 2 Sinh (U/2)

5. Integration

5.1. Uniform Convergence Of Infinite Integrals

5.2. Change Of Variables In A Double Integral

5.3. Special Integrals

5.4. Elliptic Integrals

6. Principal Valued Integrals

7. Vector Algebra And Calculus

7.1. Curvilinear Coordinates

7.2. The Equation Of Heat Conduction

7.3. Components Of Velocity And Acceleration In Plane Polar Coordinates

7.4. Vectors, Dyads And Tensors

8. Legendre Functions Of Non-Integral Order

8.1. The Value Of Pv(0)

9. An Equivalent Form For F(a,b;c;x)

10. Integrals Involving Ln(k)(x)

Problems

General References

Solutions Of Problems

Chapter I

Chapter II

Chapter III

Chapter IV

Chapter V

Chapter VI

Chapter VII

Chapter VIII

Chapter IX

Chapter X

Chapter XI

Chapter XII

Appendix

Subject Index

## Handbook of the Geometry of Banach Spaces

### 5 Invariant subspaces of positive operators

In this section we will discuss the Invariant Subspace Problem for operators that are either positive or closely associated with positive operators. The general theory concerning the invariant subspace problem will be presented in a separate article prepared for this volume by Enflo and Lomonosov.

The invariant subspace problem. *Does a continuous linear operator T : X → X on a Banach space have a non-trivial closed invariant subspace?*

A vector subspace is “non-trivial” if it is different from {0} and *X*. A subspace *V* of *X* is *T-invariant* if *T*(*V*) ⊆ *V*. If *V* is invariant under every continuous operator that commutes with *T*, then *V* is called *T-hyperinvariant*.

If *X* is a finite dimensional complex Banach space of dimension greater than one, then each non-zero operator *T* has a non-trivial closed invariant subspace. On the other hand, if *X* is non-separable, then the closed vector subspace generated by the orbit {x, Tx, T²x, …} of any non-zero vector *x* is a non-trivial closed *T*-invariant subspace. Thus, the “invariant subspace problem” is of substance only when *X* is an infinite dimensional separable Banach space. Accordingly, without any further mention, all Banach spaces under consideration in this section will be assumed to be *infinite dimensional separable real or complex Banach spaces*. The only exception will be made while discussing the Perron–Frobenius Theorem.

In 1976, Enflo [ 56 ] was the first to construct an example of a continuous operator on a separable Banach space without a non-trivial closed invariant subspace, and thus he demonstrated that in this general form the invariant subspace problem has a negative answer. Subsequently, Read [ 115–117 ] has constructed a class of bounded operators on ℓ_{1} without invariant subspaces. For operators on a Hilbert space, the existence of an invariant subspace is still unknown and is one of the famous unsolved problems of modern mathematics. Due to the above counterexamples, the present study of the invariant subspace problem for operators on Banach spaces has been focused on various classes of operators for which one can expect the existence of an invariant subspace.

We start our invariant subspace results with a version of the classical Perron–Frobenius theorem for positive matrices. As usual, we denote by *r*(*T*) the spectral radius of an operator *T*.

*If A is a non-negative n × n matrix such that for some k ≥ 1 the matrix A^k has strictly positive entries, then the spectral radius of A is a strictly positive eigenvalue of multiplicity one having a strictly positive eigenvector.*

The proof of this theorem, discovered by Frobenius [60] and Perron [108], is available in practically every book treating non-negative matrices, for instance in [33,35,98]. One more proof of the Perron–Frobenius theorem as well as many interesting generalizations can be found in [112]. If all entries of A are strictly positive, then the sequence {[r(A)]^{−k} A^k u} converges to a unique strictly positive eigenvector corresponding to the eigenvalue r(A), no matter which initial vector u > 0 is chosen. This fact has numerous applications. A major step in extending the Perron–Frobenius Theorem to infinite dimensional settings was made by Krein and Rutman [82], who proved the following theorem.

*For any positive operator T : X → X on a Banach lattice, r(T) ∈ σ(T), i.e., the spectral radius of T belongs to the spectrum of T. Furthermore, if T is also compact and r(T) > 0, then there exists some x > 0 such that Tx = r(T)x.*

Proof. We will sketch a proof. The inclusion r(T) ∈ σ(T) is caused merely by the positivity of T. Indeed, if we denote by R(λ) the resolvent of T, then clearly R(λ) > 0 for each λ > r(T). Also, for each λ with |λ| > r(T) the inequality ‖R(|λ|)‖ ≥ ‖R(λ)‖ holds. Therefore, for λ_n = r(T) + 1/n we have ‖R(λ_n)‖ → ∞, whence r(T) ∈ σ(T).

Assume further that T is compact and r(T) > 0. There exist unit vectors y_n ∈ X₊ such that ‖R(λ_n)y_n‖ → ∞. Using the vectors y_n, we introduce the unit vectors x_n = R(λ_n)y_n/‖R(λ_n)y_n‖ ∈ X₊. Since T is compact, we can assume that Tx_n → x ∈ X₊. Finally, using the identity (λ_n − T)R(λ_n)y_n = y_n, i.e., λ_n x_n − Tx_n = y_n/‖R(λ_n)y_n‖ → 0, one obtains Tx = r(T)x with x > 0.

The conclusion of the previous theorem remains valid if we replace the compactness of T by the compactness of some power of T. Indeed, assume that T^k is compact for some k. Since r(T^k) = [r(T)]^k > 0, the previous theorem implies that there is a vector x > 0 such that T^k x = [r(T)]^k x. It remains to verify that the non-zero positive vector y = Σ_{i=0}^{k−1} [r(T)]^i T^{k−1−i} x is an eigenvector of T corresponding to the eigenvalue r(T).
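The verification at the end of this remark is easy to confirm numerically. In the sketch below, T, r, k, and x are a hypothetical finite-dimensional illustration (not taken from the text) satisfying T^k x = r^k x with x ≥ 0.

```python
import numpy as np

# If T**k x = r**k x with x >= 0 non-zero, then
# y = sum_{i=0}^{k-1} r**i * T**(k-1-i) x satisfies T y = r y.
T = np.array([[0., 2.],
              [2., 0.]])
r = 2.0                        # spectral radius of T
k = 2                          # T**2 = 4*I, so T**k x = r**k x below
x = np.array([1., 0.])

y = sum(r**i * np.linalg.matrix_power(T, k - 1 - i) @ x for i in range(k))
# Here y = T x + r x = (2., 2.), and T y = (4., 4.) = r y.
```

The telescoping is visible directly: T y − r y = T^k x − r^k x = 0, so y is indeed the promised eigenvector.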

The reader is referred to [121,123,144] for complete proofs and many pertinent results concerning the Krein–Rutman theorem. Some relevant results can be found in [4]. Note that the Krein–Rutman theorem holds not only for Banach lattices but for ordered Banach spaces as well. There is an interesting approach that allows one to relax the compactness assumption. Namely, as shown by Zabreĭko and Smickih [146] and independently by Nussbaum [102], instead of the compactness of T it is enough to assume only that the essential spectral radius r_e(T) is strictly less than the spectral radius r(T). A different type of relaxation is considered in [121], where the restriction of T to X₊ is assumed to be compact, that is, T maps the positive part of the unit ball into a precompact set. A version of this result, given in terms of r_e(T), can be found in [102].

Another classical result by M. Krein [ 82 , Theorem 6.3] is the following.

Let T : C(Ω) → C(Ω) be a positive operator, where Ω is a compact Hausdorff space. Then T*, the adjoint of T, has a positive eigenvector corresponding to a non-negative eigenvalue.

Proof. Consider the set G = {f ∈ C(Ω)₊* : f(**1**) = 1}, where **1** denotes the constant function one on Ω. Clearly, G is a nonempty, convex, and w*-compact subset of C(Ω)*. Next, define the mapping F : G → G by

A proof of Theorem 34 that does not use fixed point theorems can be found in [ 77 ].

*Every positive operator on a C(Ω)-space (where Ω is Hausdorff, compact and not a singleton) which is not a multiple of the identity has a non-trivial hyperinvariant closed subspace.*

Proof. Let T : C(Ω) → C(Ω) be a positive operator which is not a multiple of the identity. By Theorem 34 the adjoint operator T* has a positive eigenvector. If λ denotes the corresponding eigenvalue, then the norm closure of the subspace (T − λI)(X) has the desired properties.

Recall that a continuous operator T : X → X on a Banach space is said to be *quasinilpotent* if its spectral radius is zero. It is well known that T is quasinilpotent if and only if lim_{n→∞} ‖T^n x‖^{1/n} = 0 for each x ∈ X. It can happen that a continuous operator T : X → X is not quasinilpotent but, nevertheless, lim_{n→∞} ‖T^n x‖^{1/n} = 0 for some x ≠ 0. In this case we say that T is *locally quasinilpotent* at x. This property was introduced in [6], where it was found to be useful in the study of the invariant subspace problem. The set of points at which T is locally quasinilpotent is denoted by Q_T, i.e., Q_T = {x ∈ X : lim_{n→∞} ‖T^n x‖^{1/n} = 0}.

It is easy to prove that the set Q_T is a T-hyperinvariant vector subspace. We formulate below a few simple properties of the vector space Q_T.

* The operator T is quasinilpotent if and only if Q_T = X.

* Q_T = {0} is possible: every isometry T satisfies Q_T = {0}. Notice also that even a compact positive operator can fail to be locally quasinilpotent at every non-zero vector. For instance, consider the compact positive operator T : ℓ_{2} → ℓ_{2} defined by T(x_1, x_2, x_3, …) = (x_1, x_2/2, x_3/3, …). For each non-zero x ∈ ℓ_{2} pick some k for which x_k ≠ 0 and note that ‖T^n x‖^{1/n} ≥ (1/k)|x_k|^{1/n} for each n, from which it follows that T is not locally quasinilpotent at x.

* Q_T can be dense without being equal to X. For instance, the left shift S : ℓ_{2} → ℓ_{2}, defined by S(x_1, x_2, x_3, …) = (x_2, x_3, …), has this property.

* If Q_T ≠ {0} and the closure of Q_T is not all of X, then that closure is a non-trivial closed T-hyperinvariant subspace of X.

The above properties show that, as far as the invariant subspace problem is concerned, we need only consider the two extreme cases: Q_T = {0} and Q_T dense in X.
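The diagonal-operator example above can be illustrated with a finite-dimensional truncation; the truncation size N and the test coordinates are arbitrary choices for the sketch, not part of the text.

```python
import numpy as np

# Truncation of T(x1, x2, x3, ...) = (x1, x2/2, x3/3, ...): for a vector
# supported on coordinate k, ||T**n x||**(1/n) = |x_k|**(1/n) / k, which
# tends to 1/k > 0, so T is not locally quasinilpotent at x.
N = 50
T = np.diag(1.0 / np.arange(1, N + 1))

x = np.zeros(N)
x[2] = 1.0                    # support on coordinate k = 3
norms = [np.linalg.norm(np.linalg.matrix_power(T, n) @ x) ** (1.0 / n)
         for n in (5, 20, 100)]
# Each value equals 1/3 (up to rounding); the sequence does not tend to 0.
```

The n-th roots stabilize at 1/k rather than decaying, matching the lower bound ‖T^n x‖^{1/n} ≥ (1/k)|x_k|^{1/n} used in the argument.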

We are now ready to state the main result about the existence of invariant subspaces of positive operators on ℓ_{p}-spaces. It implies, in particular, that if a positive operator is locally quasinilpotent at a non-zero positive vector, then the operator has an invariant subspace. This is an improvement of the main result in [6].

*Let T : ℓ_{p} → ℓ_{p} (1 ≤ p < ∞) be a continuous operator with modulus. If there exists a non-zero positive operator S : ℓ_{p} → ℓ_{p} which is locally quasinilpotent at a non-zero positive vector and satisfies S|T| ≤ |T|S, then T has a non-trivial closed invariant ideal.*

We do not presently know whether an analogue of the previous result is true when the inequality S|T| ≤ |T|S is replaced by S|T| ≥ |T|S. However, if we assume additionally that S is quasinilpotent (rather than just locally quasinilpotent), then such an analogue is true.

We now state several immediate consequences of the preceding results.

*Assume that a positive matrix A = [a_{ij}] defines an operator on an ℓ_{p}-space (1 ≤ p < ∞) which is locally quasinilpotent at some non-zero positive vector. If w = {w_{ij} : i, j = 1, 2, …} is an arbitrary bounded double sequence of complex numbers, then the continuous operator defined by the weighted matrix A_{w} = [w_{ij} a_{ij}] has a non-trivial closed invariant subspace. Moreover, all these operators A_{w} have a common non-trivial closed invariant subspace.*

Proof. By Theorem 36, the operator A has a non-trivial closed invariant ideal J. Now if B = [b_{ij}] is a matrix such that |B| ≤ cA, then from |Bx| ≤ |B|(|x|) ≤ cA(|x|) it follows that Bx ∈ J for each x ∈ J, i.e., J is B-invariant. It remains to let B = A_{w}.

_{w}It is worth mentioning that in the preceding corollary our assumption that the weights are bounded is not necessary. It suffices to assume only that the modulus of the matrix *A _{w}* defines an operator on ℓ

*.*

_{p}*If the modulus of a bounded operator* T : ℓ p → ℓ p ( 1 ≤ p < ∞ ) *exists and is locally quasinilpotent at a non-zero positive vector, then T has a non-trivial closed invariant subspace.*

*Every positive operator on an* ℓ* _{p}*-

*space*(1 ≤

*p*< ∞)

*which is locally quasinilpotent at a non-zero positive vector has a non-trivial closed invariant subspace.*

For quasinilpotent positive operators on ℓ_{2}, Corollary 39 was also obtained in [48]. Although Theorem 36 and its corollaries are new even for a quasinilpotent operator on ℓ_{p}, their main attractiveness is due to the fact that we do not really need to know that a positive operator T : ℓ_{p} → ℓ_{p} is quasinilpotent. The only thing needed is the existence of a single vector x_0 > 0 for which ‖T^n x_0‖^{1/n} → 0. This alone implies that T has a non-trivial closed invariant subspace of a simple geometric form. In view of this, the following important question arises: how can we recognize, by “looking at” a matrix [t_{ij}] defining a positive operator T : ℓ_{p} → ℓ_{p}, whether the set of positive vectors at which T is locally quasinilpotent is non-trivial? This question is addressed in [9].

A very interesting open problem related to Corollary 39 is whether or not each positive operator on ℓ_{1} has a nontrivial closed invariant subspace. Since each continuous operator on ℓ_{1} has a modulus (see Theorem 10 ), a natural candidate to test this problem is the modulus of any operator on ℓ_{1} without a nontrivial closed invariant subspace. In particular, each operator on ℓ_{1} without invariant subspace constructed by Read [ 115,117 ] is such a candidate. Troitsky [ 131 ] has recently managed to handle the case of the quasinilpotent operator *T* constructed by Read in [ 117 ]. Not only does |*T*| have a nontrivial closed invariant subspace, but |*T*| also has a positive eigenvector.

In our previous discussion we considered only operators on ℓ_{p}-spaces. However, since we used only the discreteness of ℓ_{p}-spaces, the above results remain true for operators on arbitrary discrete Banach lattices, in particular for operators on Lorentz and Orlicz sequence spaces. For instance, the following analogue of Theorem 36 is true.

*Let T : X → X be a continuous operator with modulus, where X is a discrete Banach lattice. If there exists a non-zero positive operator S : X → X which is locally quasinilpotent at a non-zero positive vector and S|T| ≤ |T|S (in particular this holds if S commutes with |T|), then T has a non-trivial closed invariant subspace.*

Generalizing the approach developed in [5–7] for individual operators, Drnovšek [53] and Jahandideh [69] considered various collections of positive operators (for instance, semigroups of operators) and proved the existence of common invariant subspaces for such collections. The main result in [53] is given next.

*Let S be a* (*multiplicative or additive*) *semigroup of positive operators on a Banach lattice X. If there exists a discrete element x*_{0} ∈ *X such that each operator in* S *is locally quasinilpotent at x*_{0}, *then* S *has a common non-trivial closed invariant ideal.*

We refer to [8] for generalizations of some of the results in this section to Banach spaces with a Schauder basis. The situation with non-discrete spaces is considerably more complicated and will be discussed in the next section.

## Frobenius Splitting Methods in Geometry and Representation Theory

The theory of Frobenius splittings has made a significant impact in the study of the geometry of flag varieties and representation theory. This work, unique in book literature, systematically develops the theory and covers all its major developments.

* Concise, efficient exposition unfolds from basic introductory material on Frobenius splittings—definitions, properties and examples—to cutting edge research

* Studies in detail the geometry of Schubert varieties, their syzygies, equivariant embeddings of reductive groups, Hilbert Schemes, canonical splittings, good filtrations, among other topics

* Applies Frobenius splitting methods to algebraic geometry and various problems in representation theory

* Many examples, exercises, and open problems suggested throughout

* Comprehensive bibliography and index

This book will be an excellent resource for mathematicians and graduate students in algebraic geometry and representation theory of algebraic groups.