7.6: The Method of Frobenius II

In this section we discuss a method for finding two linearly independent Frobenius solutions of a homogeneous linear second order equation near a regular singular point in the case where the indicial equation has a repeated real root. As in the preceding section, we consider equations that can be written as

\[\label{eq:7.6.1} x^2(\alpha_0+\alpha_1x+\alpha_2x^2)y''+x(\beta_0+\beta_1x+\beta_2x^2)y'+(\gamma_0+\gamma_1x+\gamma_2x^2)y=0,\]

where \(\alpha_0\ne0\). We assume that the indicial equation \(p_0(r)=0\) has a repeated real root \(r_1\). In this case Theorem 7.5.3 implies that Equation \ref{eq:7.6.1} has one solution of the form

\[y_1=x^{r_1}\sum_{n=0}^\infty a_nx^n,\nonumber\]

but does not provide a second solution \(y_2\) such that \(\{y_1,y_2\}\) is a fundamental set of solutions. The following extension of Theorem 7.5.2 provides a way to find a second solution.

Theorem \(\PageIndex{1}\)

Let

\[\label{eq:7.6.2} Ly=x^2(\alpha_0+\alpha_1x+\alpha_2x^2)y''+x(\beta_0+\beta_1x+\beta_2x^2)y'+(\gamma_0+\gamma_1x+\gamma_2x^2)y,\]

where \(\alpha_0\ne0\), and define

\[\begin{align*} p_0(r)&=\alpha_0r(r-1)+\beta_0r+\gamma_0,\\[4pt] p_1(r)&=\alpha_1r(r-1)+\beta_1r+\gamma_1,\\[4pt] p_2(r)&=\alpha_2r(r-1)+\beta_2r+\gamma_2.\end{align*}\nonumber\]

Suppose \(r\) is a real number such that \(p_0(n+r)\) is nonzero for all positive integers \(n\), and define

\[\begin{align*} a_0(r)&=1,\\ a_1(r)&=-{p_1(r)\over p_0(r+1)},\\[10pt] a_n(r)&=-{p_1(n+r-1)a_{n-1}(r)+p_2(n+r-2)a_{n-2}(r)\over p_0(n+r)},\quad n\ge2.\end{align*}\nonumber\]

Then the Frobenius series

\[\label{eq:7.6.3} y(x,r)=x^r\sum_{n=0}^\infty a_n(r)x^n\]

satisfies

\[\label{eq:7.6.4} Ly(x,r)=p_0(r)x^r.\]

Moreover,

\[\label{eq:7.6.5} {\partial y\over\partial r}(x,r)=y(x,r)\ln x+x^r\sum_{n=1}^\infty a_n'(r)x^n,\]

and

\[\label{eq:7.6.6} L\left({\partial y\over\partial r}(x,r)\right)=p_0'(r)x^r+x^rp_0(r)\ln x.\]

Proof

Theorem 7.5.2 implies Equation \ref{eq:7.6.4}. Differentiating formally with respect to \(r\) in Equation \ref{eq:7.6.3} yields

\[\begin{aligned} {\partial y\over\partial r}(x,r)&={\partial\over\partial r}(x^r)\sum_{n=0}^\infty a_n(r)x^n+x^r\sum_{n=1}^\infty a_n'(r)x^n\\[10pt] &=x^r\ln x\sum_{n=0}^\infty a_n(r)x^n+x^r\sum_{n=1}^\infty a_n'(r)x^n\\[10pt] &=y(x,r)\ln x+x^r\sum_{n=1}^\infty a_n'(r)x^n,\end{aligned}\nonumber\]

which proves Equation \ref{eq:7.6.5}.

To prove that \(\partial y(x,r)/\partial r\) satisfies Equation \ref{eq:7.6.6}, we view \(y\) in Equation \ref{eq:7.6.2} as a function \(y=y(x,r)\) of two variables, where the prime indicates partial differentiation with respect to \(x\); thus,

\[y'={\partial y\over\partial x}(x,r)\quad\text{and}\quad y''={\partial^2y\over\partial x^2}(x,r).\nonumber\]

With this notation we can use Equation \ref{eq:7.6.2} to rewrite Equation \ref{eq:7.6.4} as

\[\label{eq:7.6.7} x^2q_0(x){\partial^2y\over\partial x^2}(x,r)+xq_1(x){\partial y\over\partial x}(x,r)+q_2(x)y(x,r)=p_0(r)x^r,\]

where

\[\begin{aligned} q_0(x)&=\alpha_0+\alpha_1x+\alpha_2x^2,\\[4pt] q_1(x)&=\beta_0+\beta_1x+\beta_2x^2,\\[4pt] q_2(x)&=\gamma_0+\gamma_1x+\gamma_2x^2.\end{aligned}\nonumber\]

Differentiating both sides of Equation \ref{eq:7.6.7} with respect to \(r\) yields

\[x^2q_0(x){\partial^3y\over\partial r\,\partial x^2}(x,r)+xq_1(x){\partial^2y\over\partial r\,\partial x}(x,r)+q_2(x){\partial y\over\partial r}(x,r)=p_0'(r)x^r+p_0(r)x^r\ln x.\nonumber\]

By changing the order of differentiation in the first two terms on the left we can rewrite this as

\[x^2q_0(x){\partial^3y\over\partial x^2\,\partial r}(x,r)+xq_1(x){\partial^2y\over\partial x\,\partial r}(x,r)+q_2(x){\partial y\over\partial r}(x,r)=p_0'(r)x^r+p_0(r)x^r\ln x,\nonumber\]

or

\[x^2q_0(x){\partial^2\over\partial x^2}\left({\partial y\over\partial r}(x,r)\right)+xq_1(x){\partial\over\partial x}\left({\partial y\over\partial r}(x,r)\right)+q_2(x){\partial y\over\partial r}(x,r)=p_0'(r)x^r+p_0(r)x^r\ln x,\nonumber\]

which is equivalent to Equation \ref{eq:7.6.6}.

Theorem \(\PageIndex{2}\)

Let \(L\) be as in Theorem \(\PageIndex{1}\) and suppose the indicial equation \(p_0(r)=0\) has a repeated real root \(r_1.\) Then

\[y_1(x)=y(x,r_1)=x^{r_1}\sum_{n=0}^\infty a_n(r_1)x^n\nonumber\]

and

\[\label{eq:7.6.8} y_2(x)={\partial y\over\partial r}(x,r_1)=y_1(x)\ln x+x^{r_1}\sum_{n=1}^\infty a_n'(r_1)x^n\]

form a fundamental set of solutions of \(Ly=0.\)

Proof

Since \(r_1\) is a repeated root of \(p_0(r)=0\), the indicial polynomial can be factored as

\[p_0(r)=\alpha_0(r-r_1)^2,\nonumber\]

so

\[p_0(n+r_1)=\alpha_0n^2,\nonumber\]

which is nonzero if \(n>0\). Therefore the assumptions of Theorem \(\PageIndex{1}\) hold with \(r=r_1\), and Equation \ref{eq:7.6.4} implies that \(Ly_1=p_0(r_1)x^{r_1}=0\). Since

\[p_0'(r)=2\alpha_0(r-r_1),\nonumber\]

it follows that \(p_0'(r_1)=0\), so Equation \ref{eq:7.6.6} implies that

\[Ly_2=p_0'(r_1)x^{r_1}+x^{r_1}p_0(r_1)\ln x=0.\nonumber\]

This proves that \(y_1\) and \(y_2\) are both solutions of \(Ly=0\). We leave the proof that \(\{y_1,y_2\}\) is a fundamental set as Exercise 7.6.53.
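The recurrence of Theorem \(\PageIndex{1}\) is easy to run mechanically. The sketch below (our own illustration; the class and function names are hypothetical, not part of the text) evaluates \(a_n(r)\) and \(a_n'(r)\) simultaneously and exactly by running the recurrence on dual numbers over rational arithmetic, so the derivative formulas in the examples that follow can be cross-checked.

```python
from fractions import Fraction as F

class Dual:
    """Dual number u + v*eps with eps^2 = 0, over exact rationals: carrying
    v along with u differentiates every rational expression with respect to r."""
    def __init__(self, u, v=0):
        self.u, self.v = F(u), F(v)
    def _coerce(self, o):
        return o if isinstance(o, Dual) else Dual(o)
    def __add__(self, o):
        o = self._coerce(o)
        return Dual(self.u + o.u, self.v + o.v)
    __radd__ = __add__
    def __sub__(self, o):
        o = self._coerce(o)
        return Dual(self.u - o.u, self.v - o.v)
    def __mul__(self, o):
        o = self._coerce(o)
        return Dual(self.u * o.u, self.u * o.v + self.v * o.u)
    __rmul__ = __mul__
    def __truediv__(self, o):
        o = self._coerce(o)
        return Dual(self.u / o.u, (self.v * o.u - self.u * o.v) / (o.u * o.u))
    def __neg__(self):
        return Dual(-self.u, -self.v)

def frobenius_coefficients(alpha, beta, gamma, r, N):
    """Return [a_0(r),...,a_N(r)] and [a_0'(r),...,a_N'(r)] for
    Ly = x^2(sum alpha_i x^i)y'' + x(sum beta_i x^i)y' + (sum gamma_i x^i)y,
    using the recurrence of the theorem.  Requires N >= 1 and
    p_0(n + r) != 0 for n = 1, ..., N."""
    def p(i, s):  # p_i(s) = alpha_i s(s-1) + beta_i s + gamma_i
        return alpha[i] * s * (s - 1) + beta[i] * s + gamma[i]
    rd = Dual(r, 1)                       # seed: d(r)/dr = 1
    a = [Dual(1), -p(1, rd) / p(0, rd + 1)]
    for n in range(2, N + 1):
        a.append(-(p(1, rd + (n - 1)) * a[n - 1]
                   + p(2, rd + (n - 2)) * a[n - 2]) / p(0, rd + n))
    return [t.u for t in a], [t.v for t in a]
```

With the data of Example \(\PageIndex{1}\) below, i.e. \(\alpha=(1,-2,1)\), \(\beta=(-3,-1,0)\), \(\gamma=(4,1,0)\) and \(r=2\), this reproduces the coefficients \(a_n(2)\) and \(a_n'(2)\) computed there.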

Example \(\PageIndex{1}\)

Find a fundamental set of solutions of

\[\label{eq:7.6.9} x^2(1-2x+x^2)y''-x(3+x)y'+(4+x)y=0.\]

Compute just the terms involving \(x^{n+r_1}\), where \(0\le n\le4\) and \(r_1\) is the root of the indicial equation.

Solution

For the given equation, the polynomials defined in Theorem \(\PageIndex{1}\) are

\[\begin{align*} p_0(r)&=r(r-1)-3r+4=(r-2)^2,\\[4pt] p_1(r)&=-2r(r-1)-r+1=-(r-1)(2r+1),\\[4pt] p_2(r)&=r(r-1).\end{align*}\nonumber\]

Since \(r_1=2\) is a repeated root of the indicial polynomial \(p_0\), Theorem \(\PageIndex{2}\) implies that

\[\label{eq:7.6.10} y_1=x^2\sum_{n=0}^\infty a_n(2)x^n\quad\text{and}\quad y_2=y_1\ln x+x^2\sum_{n=1}^\infty a_n'(2)x^n\]

form a fundamental set of Frobenius solutions of Equation \ref{eq:7.6.9}. To find the coefficients in these series, we use the recurrence formulas from Theorem \(\PageIndex{1}\):

\[\label{eq:7.6.11} \begin{array}{rcl} a_0(r)&=&1,\\ a_1(r)&=&-{p_1(r)\over p_0(r+1)}={(r-1)(2r+1)\over(r-1)^2}={2r+1\over r-1},\\[10pt] a_n(r)&=&-{p_1(n+r-1)a_{n-1}(r)+p_2(n+r-2)a_{n-2}(r)\over p_0(n+r)}\\[10pt] &=&{(n+r-2)\left[(2n+2r-1)a_{n-1}(r)-(n+r-3)a_{n-2}(r)\right]\over(n+r-2)^2}\\[10pt] &=&{2n+2r-1\over n+r-2}a_{n-1}(r)-{n+r-3\over n+r-2}a_{n-2}(r),\quad n\ge2.\end{array}\]

Differentiating yields

\[\label{eq:7.6.12} \begin{array}{rcl} a_1'(r)&=&-{3\over(r-1)^2},\\[10pt] a_n'(r)&=&{2n+2r-1\over n+r-2}a_{n-1}'(r)-{n+r-3\over n+r-2}a_{n-2}'(r)\\[10pt] &&-{3\over(n+r-2)^2}a_{n-1}(r)-{1\over(n+r-2)^2}a_{n-2}(r),\quad n\ge2.\end{array}\]

Setting \(r=2\) in Equation \ref{eq:7.6.11} and Equation \ref{eq:7.6.12} yields

\[\label{eq:7.6.13} \begin{array}{rcl} a_0(2)&=&1,\\ a_1(2)&=&5,\\[10pt] a_n(2)&=&{2n+3\over n}a_{n-1}(2)-{n-1\over n}a_{n-2}(2),\quad n\ge2,\end{array}\]

and

\[\label{eq:7.6.14} \begin{array}{rcl} a_1'(2)&=&-3,\\[10pt] a_n'(2)&=&{2n+3\over n}a_{n-1}'(2)-{n-1\over n}a_{n-2}'(2)-{3\over n^2}a_{n-1}(2)-{1\over n^2}a_{n-2}(2),\quad n\ge2.\end{array}\]

Computing recursively with Equation \ref{eq:7.6.13} and Equation \ref{eq:7.6.14} yields

\[a_0(2)=1,\quad a_1(2)=5,\quad a_2(2)=17,\quad a_3(2)={143\over3},\quad a_4(2)={355\over3},\nonumber\]

and

\[a_1'(2)=-3,\quad a_2'(2)=-{29\over2},\quad a_3'(2)=-{859\over18},\quad a_4'(2)=-{4693\over36}.\nonumber\]

Substituting these coefficients into Equation \ref{eq:7.6.10} yields

\[y_1=x^2\left(1+5x+17x^2+{143\over3}x^3+{355\over3}x^4+\cdots\right)\nonumber\]

and

\[y_2=y_1\ln x-x^3\left(3+{29\over2}x+{859\over18}x^2+{4693\over36}x^3+\cdots\right).\nonumber\]
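The recursive computation above is easy to reproduce by machine. The sketch below (the helper name is ours, chosen for this illustration) implements the specialized recurrences (7.6.13) and (7.6.14) with exact rational arithmetic, so the listed fractions can be confirmed without rounding error.

```python
from fractions import Fraction as F

def example1_coefficients(N):
    """Run the r = 2 recurrences of this example exactly.

    Returns ([a_0(2), ..., a_N(2)], [a_0'(2), ..., a_N'(2)])."""
    a = [F(1), F(5)]       # a_0(2) = 1, a_1(2) = 5
    d = [F(0), F(-3)]      # a_0(r) = 1 for all r, so a_0'(2) = 0; a_1'(2) = -3
    for n in range(2, N + 1):
        a.append(F(2 * n + 3, n) * a[n - 1] - F(n - 1, n) * a[n - 2])
        d.append(F(2 * n + 3, n) * d[n - 1] - F(n - 1, n) * d[n - 2]
                 - F(3, n * n) * a[n - 1] - F(1, n * n) * a[n - 2])
    return a, d
```

Running `example1_coefficients(4)` reproduces the values \(1, 5, 17, 143/3, 355/3\) and \(0, -3, -29/2, -859/18, -4693/36\) used above.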

Since the recurrence formula Equation \ref{eq:7.6.11} involves three terms, it is not possible to obtain a simple explicit formula for the coefficients in the Frobenius solutions of Equation \ref{eq:7.6.9}. However, as we saw in the preceding sections, the recurrence formula for \(\{a_n(r)\}\) involves only two terms if either \(\alpha_1=\beta_1=\gamma_1=0\) or \(\alpha_2=\beta_2=\gamma_2=0\) in Equation \ref{eq:7.6.1}. In this case, it is often possible to find explicit formulas for the coefficients. The next two examples illustrate this.

Example \(\PageIndex{2}\)

Find a fundamental set of Frobenius solutions of

\[\label{eq:7.6.15} 2x^2(2+x)y''+5x^2y'+(1+x)y=0.\]

Give explicit formulas for the coefficients in the solutions.

Solution

For the given equation, the polynomials defined in Theorem \(\PageIndex{1}\) are

\[\begin{align*} p_0(r)&=4r(r-1)+1=(2r-1)^2,\\[4pt] p_1(r)&=2r(r-1)+5r+1=(r+1)(2r+1),\\[4pt] p_2(r)&=0.\end{align*}\nonumber\]

Since \(r_1=1/2\) is a repeated zero of the indicial polynomial \(p_0\), Theorem \(\PageIndex{2}\) implies that

\[\label{eq:7.6.16} y_1=x^{1/2}\sum_{n=0}^\infty a_n(1/2)x^n\]

and

\[\label{eq:7.6.17} y_2=y_1\ln x+x^{1/2}\sum_{n=1}^\infty a_n'(1/2)x^n\]

form a fundamental set of Frobenius solutions of Equation \ref{eq:7.6.15}. Since \(p_2\equiv0\), the recurrence formulas in Theorem \(\PageIndex{1}\) reduce to

\[\begin{align*} a_0(r)&=1,\\ a_n(r)&=-{p_1(n+r-1)\over p_0(n+r)}a_{n-1}(r)\\[10pt] &=-{(n+r)(2n+2r-1)\over(2n+2r-1)^2}a_{n-1}(r)\\[10pt] &=-{n+r\over2n+2r-1}a_{n-1}(r),\quad n\ge1.\end{align*}\nonumber\]

We leave it to you to show that

\[\label{eq:7.6.18} a_n(r)=(-1)^n\prod_{j=1}^n{j+r\over2j+2r-1},\quad n\ge0.\]

Setting \(r=1/2\) yields

\[\label{eq:7.6.19} \begin{array}{rcl} a_n(1/2)&=&(-1)^n\prod_{j=1}^n{j+1/2\over2j}=(-1)^n\prod_{j=1}^n{2j+1\over4j}\\[10pt] &=&{(-1)^n\prod_{j=1}^n(2j+1)\over4^nn!},\quad n\ge0.\end{array}\]

Substituting this into Equation \ref{eq:7.6.16} yields

\[y_1=x^{1/2}\sum_{n=0}^\infty{(-1)^n\prod_{j=1}^n(2j+1)\over4^nn!}x^n.\nonumber\]

To obtain \(y_2\) in Equation \ref{eq:7.6.17}, we must compute \(a_n'(1/2)\) for \(n=1\), \(2\), …. We'll do this by logarithmic differentiation. From Equation \ref{eq:7.6.18},

\[|a_n(r)|=\prod_{j=1}^n{|j+r|\over|2j+2r-1|},\quad n\ge1.\nonumber\]

Therefore

\[\ln|a_n(r)|=\sum_{j=1}^n\left(\ln|j+r|-\ln|2j+2r-1|\right).\nonumber\]

Differentiating with respect to \(r\) yields

\[{a_n'(r)\over a_n(r)}=\sum_{j=1}^n\left({1\over j+r}-{2\over2j+2r-1}\right).\nonumber\]

Therefore

\[a_n'(r)=a_n(r)\sum_{j=1}^n\left({1\over j+r}-{2\over2j+2r-1}\right).\nonumber\]

Setting \(r=1/2\) here and recalling Equation \ref{eq:7.6.19} yields

\[\label{eq:7.6.20} a_n'(1/2)={(-1)^n\prod_{j=1}^n(2j+1)\over4^nn!}\left(\sum_{j=1}^n{1\over j+1/2}-\sum_{j=1}^n{1\over j}\right).\]

Since

\[{1\over j+1/2}-{1\over j}={j-(j+1/2)\over j(j+1/2)}=-{1\over j(2j+1)},\nonumber\]

Equation \ref{eq:7.6.20} can be rewritten as

\[a_n'(1/2)=-{(-1)^n\prod_{j=1}^n(2j+1)\over4^nn!}\sum_{j=1}^n{1\over j(2j+1)}.\nonumber\]

Therefore, from Equation \ref{eq:7.6.17},

\[y_2=y_1\ln x-x^{1/2}\sum_{n=1}^\infty{(-1)^n\prod_{j=1}^n(2j+1)\over4^nn!}\left(\sum_{j=1}^n{1\over j(2j+1)}\right)x^n.\nonumber\]
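The logarithmic differentiation can be sanity-checked numerically: a finite-difference derivative of the product formula for \(a_n(r)\) at \(r=1/2\) should agree with the closed form just obtained. The helper names below are ours; this is a sketch assuming the product formula (7.6.18) derived above.

```python
import math

def a_ex2(r, n):
    """a_n(r) = (-1)^n * prod_{j=1}^n (j + r)/(2j + 2r - 1), from (7.6.18)."""
    prod = 1.0
    for j in range(1, n + 1):
        prod *= (j + r) / (2 * j + 2 * r - 1)
    return (-1) ** n * prod

def a_prime_half(n):
    """Claimed a_n'(1/2) = -(-1)^n prod(2j+1)/(4^n n!) * sum 1/(j(2j+1))."""
    prod = 1.0
    for j in range(1, n + 1):
        prod *= 2 * j + 1
    s = sum(1.0 / (j * (2 * j + 1)) for j in range(1, n + 1))
    return -((-1) ** n) * prod / (4 ** n * math.factorial(n)) * s

def central_diff(f, r, h=1e-6):
    """Two-sided difference quotient approximating f'(r)."""
    return (f(r + h) - f(r - h)) / (2 * h)
```

For small \(n\) the two computations agree to many digits, which is a useful guard against sign slips in the telescoped sum.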

Example \(\PageIndex{3}\)

Find a fundamental set of Frobenius solutions of

\[\label{eq:7.6.21} x^2(2-x^2)y''-2x(1+2x^2)y'+(2-2x^2)y=0.\]

Give explicit formulas for the coefficients in the solutions.

Solution

For Equation \ref{eq:7.6.21}, the polynomials defined in Theorem \(\PageIndex{1}\) are

\[\begin{align*} p_0(r)&=2r(r-1)-2r+2=2(r-1)^2,\\[4pt] p_1(r)&=0,\\[4pt] p_2(r)&=-r(r-1)-4r-2=-(r+1)(r+2).\end{align*}\nonumber\]

As in Section 7.5, since \(p_1\equiv0\), the recurrence formulas of Theorem \(\PageIndex{1}\) imply that \(a_n(r)=0\) if \(n\) is odd, and

\[\begin{align*} a_0(r)&=1,\\ a_{2m}(r)&=-{p_2(2m+r-2)\over p_0(2m+r)}a_{2m-2}(r)\\[10pt] &={(2m+r-1)(2m+r)\over2(2m+r-1)^2}a_{2m-2}(r)\\[10pt] &={2m+r\over2(2m+r-1)}a_{2m-2}(r),\quad m\ge1.\end{align*}\nonumber\]

Since \(r_1=1\) is a repeated root of the indicial polynomial \(p_0\), Theorem \(\PageIndex{2}\) implies that

\[\label{eq:7.6.22} y_1=x\sum_{m=0}^\infty a_{2m}(1)x^{2m}\]

and

\[\label{eq:7.6.23} y_2=y_1\ln x+x\sum_{m=1}^\infty a_{2m}'(1)x^{2m}\]

form a fundamental set of Frobenius solutions of Equation \ref{eq:7.6.21}. We leave it to you to show that

\[\label{eq:7.6.24} a_{2m}(r)={1\over2^m}\prod_{j=1}^m{2j+r\over2j+r-1}.\]

Setting \(r=1\) yields

\[\label{eq:7.6.25} a_{2m}(1)={1\over2^m}\prod_{j=1}^m{2j+1\over2j}={\prod_{j=1}^m(2j+1)\over4^mm!},\]

and substituting this into Equation \ref{eq:7.6.22} yields

\[y_1=x\sum_{m=0}^\infty{\prod_{j=1}^m(2j+1)\over4^mm!}x^{2m}.\nonumber\]

To obtain \(y_2\) in Equation \ref{eq:7.6.23}, we must compute \(a_{2m}'(1)\) for \(m=1\), \(2\), …. Again we use logarithmic differentiation. From Equation \ref{eq:7.6.24},

\[|a_{2m}(r)|={1\over2^m}\prod_{j=1}^m{|2j+r|\over|2j+r-1|}.\nonumber\]

Taking logarithms yields

\[\ln|a_{2m}(r)|=-m\ln2+\sum_{j=1}^m\left(\ln|2j+r|-\ln|2j+r-1|\right).\nonumber\]

Differentiating with respect to \(r\) yields

\[{a_{2m}'(r)\over a_{2m}(r)}=\sum_{j=1}^m\left({1\over2j+r}-{1\over2j+r-1}\right).\nonumber\]

Therefore

\[a_{2m}'(r)=a_{2m}(r)\sum_{j=1}^m\left({1\over2j+r}-{1\over2j+r-1}\right).\nonumber\]

Setting \(r=1\) and recalling Equation \ref{eq:7.6.25} yields

\[\label{eq:7.6.26} a_{2m}'(1)={\prod_{j=1}^m(2j+1)\over4^mm!}\sum_{j=1}^m\left({1\over2j+1}-{1\over2j}\right).\]

Since

\[{1\over2j+1}-{1\over2j}=-{1\over2j(2j+1)},\nonumber\]

Equation \ref{eq:7.6.26} can be rewritten as

\[a_{2m}'(1)=-{\prod_{j=1}^m(2j+1)\over2\cdot4^mm!}\sum_{j=1}^m{1\over j(2j+1)}.\nonumber\]

Substituting this into Equation \ref{eq:7.6.23} yields

\[y_2=y_1\ln x-{x\over2}\sum_{m=1}^\infty{\prod_{j=1}^m(2j+1)\over4^mm!}\left(\sum_{j=1}^m{1\over j(2j+1)}\right)x^{2m}.\nonumber\]
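As in the previous example, the derivative formula can be checked against a finite-difference derivative of the product formula (7.6.24). The helper names below are ours, chosen for this sketch.

```python
import math

def a2m_ex3(r, m):
    """a_{2m}(r) = 2^{-m} * prod_{j=1}^m (2j + r)/(2j + r - 1), from (7.6.24)."""
    prod = 1.0
    for j in range(1, m + 1):
        prod *= (2 * j + r) / (2 * j + r - 1)
    return prod / 2 ** m

def a2m_prime_1(m):
    """Claimed a_{2m}'(1) = -prod(2j+1)/(2 * 4^m * m!) * sum 1/(j(2j+1))."""
    prod = 1.0
    for j in range(1, m + 1):
        prod *= 2 * j + 1
    s = sum(1.0 / (j * (2 * j + 1)) for j in range(1, m + 1))
    return -prod / (2 * 4 ** m * math.factorial(m)) * s

def central_diff(f, r, h=1e-6):
    """Two-sided difference quotient approximating f'(r)."""
    return (f(r + h) - f(r - h)) / (2 * h)
```

For instance, \(m=1\) gives \(a_2'(1)=-1/8\) both ways.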

If the solution \(y_1=y(x,r_1)\) of \(Ly=0\) reduces to a finite sum, there is a difficulty in using logarithmic differentiation to obtain the coefficients \(\{a_n'(r_1)\}\) in the second solution. The next example illustrates this difficulty and shows how to overcome it.

Example \(\PageIndex{4}\)

Find a fundamental set of Frobenius solutions of

\[\label{eq:7.6.27} x^2y''-x(5-x)y'+(9-4x)y=0.\]

Give explicit formulas for the coefficients in the solutions.

Solution

For Equation \ref{eq:7.6.27}, the polynomials defined in Theorem \(\PageIndex{1}\) are

\[\begin{align*} p_0(r)&=r(r-1)-5r+9=(r-3)^2,\\[4pt] p_1(r)&=r-4,\\[4pt] p_2(r)&=0.\end{align*}\nonumber\]

Since \(r_1=3\) is a repeated zero of the indicial polynomial \(p_0\), Theorem \(\PageIndex{2}\) implies that

\[\label{eq:7.6.28} y_1=x^3\sum_{n=0}^\infty a_n(3)x^n\]

and

\[\label{eq:7.6.29} y_2=y_1\ln x+x^3\sum_{n=1}^\infty a_n'(3)x^n\]

are linearly independent Frobenius solutions of Equation \ref{eq:7.6.27}. To find the coefficients in Equation \ref{eq:7.6.28} we use the recurrence formulas

\[\begin{align*} a_0(r)&=1,\\ a_n(r)&=-{p_1(n+r-1)\over p_0(n+r)}a_{n-1}(r)\\[10pt] &=-{n+r-5\over(n+r-3)^2}a_{n-1}(r),\quad n\ge1.\end{align*}\nonumber\]

We leave it to you to show that

\[\label{eq:7.6.30} a_n(r)=(-1)^n\prod_{j=1}^n{j+r-5\over(j+r-3)^2}.\]

Setting \(r=3\) here yields

\[a_n(3)=(-1)^n\prod_{j=1}^n{j-2\over j^2},\nonumber\]

so \(a_1(3)=1\) and \(a_n(3)=0\) if \(n\ge2\). Substituting these coefficients into Equation \ref{eq:7.6.28} yields

\[y_1=x^3(1+x).\nonumber\]

To obtain \(y_2\) in Equation \ref{eq:7.6.29} we must compute \(a_n'(3)\) for \(n=1\), \(2\), …. Let's first try logarithmic differentiation. From Equation \ref{eq:7.6.30},

\[|a_n(r)|=\prod_{j=1}^n{|j+r-5|\over(j+r-3)^2},\quad n\ge1,\nonumber\]

so

\[\ln|a_n(r)|=\sum_{j=1}^n\left(\ln|j+r-5|-2\ln|j+r-3|\right).\nonumber\]

Differentiating with respect to \(r\) yields

\[{a_n'(r)\over a_n(r)}=\sum_{j=1}^n\left({1\over j+r-5}-{2\over j+r-3}\right).\nonumber\]

Therefore

\[\label{eq:7.6.31} a_n'(r)=a_n(r)\sum_{j=1}^n\left({1\over j+r-5}-{2\over j+r-3}\right).\]

However, we can't simply set \(r=3\) here if \(n\ge2\), since the bracketed expression in the sum corresponding to \(j=2\) contains the term \(1/(r-3)\). In fact, since \(a_n(3)=0\) for \(n\ge2\), the formula Equation \ref{eq:7.6.31} for \(a_n'(r)\) is actually an indeterminate form at \(r=3\).

We overcome this difficulty as follows. From Equation \ref{eq:7.6.30} with \(n=1\),

\[a_1(r)=-{r-4\over(r-2)^2}.\nonumber\]

Therefore

\[a_1'(r)={r-6\over(r-2)^3},\nonumber\]

so

\[\label{eq:7.6.32} a_1'(3)=-3.\]

From Equation \ref{eq:7.6.30} with \(n\ge2\),

\[a_n(r)=(-1)^n(r-4)(r-3)\,{\prod_{j=3}^n(j+r-5)\over\prod_{j=1}^n(j+r-3)^2}=(r-3)c_n(r),\nonumber\]

where

\[c_n(r)=(-1)^n(r-4)\,{\prod_{j=3}^n(j+r-5)\over\prod_{j=1}^n(j+r-3)^2},\quad n\ge2.\nonumber\]

Therefore

\[a_n'(r)=c_n(r)+(r-3)c_n'(r),\nonumber\]

which implies that \(a_n'(3)=c_n(3)\) if \(n\ge2\). We leave it to you to verify that

\[a_n'(3)=c_n(3)={(-1)^{n+1}\over n(n-1)n!},\quad n\ge2.\nonumber\]
Substituting this and Equation \ref{eq:7.6.32} into Equation \ref{eq:7.6.29} yields

\[y_2=x^3(1+x)\ln x-3x^4-x^3\sum_{n=2}^\infty{(-1)^n\over n(n-1)n!}x^n.\nonumber\]
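The workaround for the indeterminate form can also be checked numerically: even though \(a_n(3)=0\) for \(n\ge2\), a finite-difference derivative of the product formula (7.6.30) near \(r=3\) remains well defined and should match \(a_1'(3)=-3\) and \(c_n(3)\). The helper names below are ours.

```python
import math

def a_ex4(r, n):
    """a_n(r) = (-1)^n * prod_{j=1}^n (j + r - 5)/(j + r - 3)^2, from (7.6.30)."""
    prod = 1.0
    for j in range(1, n + 1):
        prod *= (j + r - 5) / (j + r - 3) ** 2
    return (-1) ** n * prod

def a_prime_3(n):
    """Claimed derivatives at the repeated root r = 3."""
    if n == 1:
        return -3.0
    return (-1) ** (n + 1) / (n * (n - 1) * math.factorial(n))

def central_diff(f, r, h=1e-6):
    """Two-sided difference quotient approximating f'(r)."""
    return (f(r + h) - f(r - h)) / (2 * h)
```

For example, \(n=2\) gives \(a_2'(3)=-1/4\) by both routes.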

40 CFR § 112.7 - General requirements for Spill Prevention, Control, and Countermeasure Plans.

If you are the owner or operator of a facility subject to this part you must prepare a Plan in accordance with good engineering practices. The Plan must have the full approval of management at a level of authority to commit the necessary resources to fully implement the Plan. You must prepare the Plan in writing. If you do not follow the sequence specified in this section for the Plan, you must prepare an equivalent Plan acceptable to the Regional Administrator that meets all of the applicable requirements listed in this part, and you must supplement it with a section cross-referencing the location of requirements listed in this part and the equivalent requirements in the other prevention plan. If the Plan calls for additional facilities or procedures, methods, or equipment not yet fully operational, you must discuss these items in separate paragraphs, and must explain separately the details of installation and operational start-up. As detailed elsewhere in this section, you must also:

(1) Include a discussion of your facility's conformance with the requirements listed in this part.

(2) Comply with all applicable requirements listed in this part. Except as provided in § 112.6, your Plan may deviate from the requirements in paragraphs (g), (h)(2) and (3), and (i) of this section and the requirements in subparts B and C of this part, except the secondary containment requirements in paragraphs (c) and (h)(1) of this section, and §§ 112.8(c)(2), 112.8(c)(11), 112.9(c)(2), 112.9(d)(3), 112.10(c), 112.12(c)(2), and 112.12(c)(11), where applicable to a specific facility, if you provide equivalent environmental protection by some other means of spill prevention, control, or countermeasure. Where your Plan does not conform to the applicable requirements in paragraphs (g), (h)(2) and (3), and (i) of this section, or the requirements of subparts B and C of this part, except the secondary containment requirements in paragraph (c) and (h)(1) of this section, and §§ 112.8(c)(2), 112.8(c)(11), 112.9(c)(2), 112.10(c), 112.12(c)(2), and 112.12(c)(11), you must state the reasons for nonconformance in your Plan and describe in detail alternate methods and how you will achieve equivalent environmental protection. If the Regional Administrator determines that the measures described in your Plan do not provide equivalent environmental protection, he may require that you amend your Plan, following the procedures in § 112.4(d) and (e).

(3) Describe in your Plan the physical layout of the facility and include a facility diagram, which must mark the location and contents of each fixed oil storage container and the storage area where mobile or portable containers are located. The facility diagram must identify the location of and mark as “exempt” underground tanks that are otherwise exempted from the requirements of this part under § 112.1(d)(4). The facility diagram must also include all transfer stations and connecting pipes, including intra-facility gathering lines that are otherwise exempted from the requirements of this part under § 112.1(d)(11). You must also address in your Plan:

(i) The type of oil in each fixed container and its storage capacity. For mobile or portable containers, either provide the type of oil and storage capacity for each container or provide an estimate of the potential number of mobile or portable containers, the types of oil, and anticipated storage capacities;

(iii) Discharge or drainage controls such as secondary containment around containers and other structures, equipment, and procedures for the control of a discharge;

(iv) Countermeasures for discharge discovery, response, and cleanup (both the facility's capability and those that might be required of a contractor);

(v) Methods of disposal of recovered materials in accordance with applicable legal requirements; and

(vi) Contact list and phone numbers for the facility response coordinator, National Response Center, cleanup contractors with whom you have an agreement for response, and all appropriate Federal, State, and local agencies who must be contacted in case of a discharge as described in § 112.1(b).

(4) Unless you have submitted a response plan under § 112.20, provide information and procedures in your Plan to enable a person reporting a discharge as described in § 112.1(b) to relate information on the exact address or location and phone number of the facility; the date and time of the discharge; the type of material discharged; estimates of the total quantity discharged; estimates of the quantity discharged as described in § 112.1(b); the source of the discharge; a description of all affected media; the cause of the discharge; any damages or injuries caused by the discharge; actions being used to stop, remove, and mitigate the effects of the discharge; whether an evacuation may be needed; and the names of individuals and/or organizations who have also been contacted.

(5) Unless you have submitted a response plan under § 112.20, organize portions of the Plan describing procedures you will use when a discharge occurs in a way that will make them readily usable in an emergency, and include appropriate supporting material as appendices.

(b) Where experience indicates a reasonable potential for equipment failure (such as loading or unloading equipment, tank overflow, rupture, or leakage, or any other equipment known to be a source of a discharge), include in your Plan a prediction of the direction, rate of flow, and total quantity of oil which could be discharged from the facility as a result of each type of major equipment failure.

(c) Provide appropriate containment and/or diversionary structures or equipment to prevent a discharge as described in § 112.1(b), except as provided in paragraph (k) of this section for qualified oil-filled operational equipment, and except as provided in § 112.9(d)(3) for flowlines and intra-facility gathering lines at an oil production facility. The entire containment system, including walls and floor, must be capable of containing oil and must be constructed so that any discharge from a primary containment system, such as a tank, will not escape the containment system before cleanup occurs. In determining the method, design, and capacity for secondary containment, you need only to address the typical failure mode, and the most likely quantity of oil that would be discharged. Secondary containment may be either active or passive in design. At a minimum, you must use one of the following prevention systems or its equivalent:

(i) Dikes, berms, or retaining walls sufficiently impervious to contain oil;

(iii) Sumps and collection systems;

(iv) Culverting, gutters, or other drainage systems;

(v) Weirs, booms, or other barriers;

(2) For offshore facilities:

(i) Curbing or drip pans; or

(ii) Sumps and collection systems.

(d) Provided your Plan is certified by a licensed Professional Engineer under § 112.3(d), or, in the case of a qualified facility that meets the criteria in § 112.3(g), the relevant sections of your Plan are certified by a licensed Professional Engineer under § 112.6(d), if you determine that the installation of any of the structures or pieces of equipment listed in paragraphs (c) and (h)(1) of this section, and §§ 112.8(c)(2), 112.8(c)(11), 112.9(c)(2), 112.10(c), 112.12(c)(2), and 112.12(c)(11) to prevent a discharge as described in § 112.1(b) from any onshore or offshore facility is not practicable, you must clearly explain in your Plan why such measures are not practicable; for bulk storage containers, conduct both periodic integrity testing of the containers and periodic integrity and leak testing of the valves and piping; and, unless you have submitted a response plan under § 112.20, provide in your Plan the following:

(1) An oil spill contingency plan following the provisions of part 109 of this chapter.

(2) A written commitment of manpower, equipment, and materials required to expeditiously control and remove any quantity of oil discharged that may be harmful.

(e) Inspections, tests, and records. Conduct inspections and tests required by this part in accordance with written procedures that you or the certifying engineer develop for the facility. You must keep these written procedures and a record of the inspections and tests, signed by the appropriate supervisor or inspector, with the SPCC Plan for a period of three years. Records of inspections and tests kept under usual and customary business practices will suffice for purposes of this paragraph.

(f) Personnel, training, and discharge prevention procedures.

(1) At a minimum, train your oil-handling personnel in the operation and maintenance of equipment to prevent discharges; discharge procedure protocols; applicable pollution control laws, rules, and regulations; general facility operations; and the contents of the facility SPCC Plan.

(2) Designate a person at each applicable facility who is accountable for discharge prevention and who reports to facility management.

(3) Schedule and conduct discharge prevention briefings for your oil-handling personnel at least once a year to assure adequate understanding of the SPCC Plan for that facility. Such briefings must highlight and describe known discharges as described in § 112.1(b) or failures, malfunctioning components, and any recently developed precautionary measures.

(g) Security (excluding oil production facilities). Describe in your Plan how you: secure and control access to the oil handling, processing and storage areas; secure master flow and drain valves; prevent unauthorized access to starter controls on oil pumps; secure out-of-service and loading/unloading connections of oil pipelines; and address the appropriateness of security lighting to both prevent acts of vandalism and assist in the discovery of oil discharges.

(1) Where loading/unloading rack drainage does not flow into a catchment basin or treatment facility designed to handle discharges, use a quick drainage system for tank car or tank truck loading/unloading racks. You must design any containment system to hold at least the maximum capacity of any single compartment of a tank car or tank truck loaded or unloaded at the facility.

(2) Provide an interlocked warning light or physical barrier system, warning signs, wheel chocks or vehicle brake interlock system in the area adjacent to a loading/unloading rack, to prevent vehicles from departing before complete disconnection of flexible or fixed oil transfer lines.

(3) Prior to filling and departure of any tank car or tank truck, closely inspect for discharges the lowermost drain and all outlets of such vehicles, and if necessary, ensure that they are tightened, adjusted, or replaced to prevent liquid discharge while in transit.

(i) If a field-constructed aboveground container undergoes a repair, alteration, reconstruction, or a change in service that might affect the risk of a discharge or failure due to brittle fracture or other catastrophe, or has discharged oil or failed due to brittle fracture failure or other catastrophe, evaluate the container for risk of discharge or failure due to brittle fracture or other catastrophe, and as necessary, take appropriate action.

(j) In addition to the minimal prevention standards listed under this section, include in your Plan a complete discussion of conformance with the applicable requirements and other effective discharge prevention and containment procedures listed in this part or any applicable more stringent State rules, regulations, and guidelines.

(k) Qualified Oil-filled Operational Equipment. The owner or operator of a facility with oil-filled operational equipment that meets the qualification criteria in paragraph (k)(1) of this sub-section may choose to implement for this qualified oil-filled operational equipment the alternate requirements as described in paragraph (k)(2) of this sub-section in lieu of general secondary containment required in paragraph (c) of this section.

(1) Qualification Criteria - Reportable Discharge History: The owner or operator of a facility that has had no single discharge as described in § 112.1(b) from any oil-filled operational equipment exceeding 1,000 U.S. gallons or no two discharges as described in § 112.1(b) from any oil-filled operational equipment each exceeding 42 U.S. gallons within any twelve month period in the three years prior to the SPCC Plan certification date, or since becoming subject to this part if the facility has been in operation for less than three years (other than oil discharges as described in § 112.1(b) that are the result of natural disasters, acts of war or terrorism); and

(2) Alternative Requirements to General Secondary Containment. If secondary containment is not provided for qualified oil-filled operational equipment pursuant to paragraph (c) of this section, the owner or operator of a facility with qualified oil-filled operational equipment must:

(i) Establish and document the facility procedures for inspections or a monitoring program to detect equipment failure and/or a discharge; and

(ii) Unless you have submitted a response plan under § 112.20, provide in your Plan the following:

(A) An oil spill contingency plan following the provisions of part 109 of this chapter.

(B) A written commitment of manpower, equipment, and materials required to expeditiously control and remove any quantity of oil discharged that may be harmful.


Let positive and non-negative respectively describe matrices with exclusively positive real numbers as elements and matrices with exclusively non-negative real numbers as elements. The eigenvalues of a real square matrix A are complex numbers that make up the spectrum of the matrix. The exponential growth rate of the matrix powers A^k as k → ∞ is controlled by the eigenvalue of A with the largest absolute value (modulus). The Perron–Frobenius theorem describes the properties of the leading eigenvalue and of the corresponding eigenvectors when A is a non-negative real square matrix. Early results were due to Oskar Perron (1907) and concerned positive matrices. Later, Georg Frobenius (1912) found their extension to certain classes of non-negative matrices.

Positive matrices

1. There is a positive real number r, called the Perron root or the Perron–Frobenius eigenvalue (also called the leading eigenvalue or dominant eigenvalue), such that r is an eigenvalue of A and any other eigenvalue λ (possibly complex) is strictly smaller than r in absolute value, |λ| < r. Thus, the spectral radius ρ(A) is equal to r. If the matrix coefficients are algebraic, this implies that the eigenvalue is a Perron number.
2. The Perron–Frobenius eigenvalue is simple: r is a simple root of the characteristic polynomial of A. Consequently, the eigenspace associated to r is one-dimensional. (The same is true for the left eigenspace, i.e., the eigenspace for A T , the transpose of A.)
3. There exists an eigenvector v = (v_1, …, v_n)^T of A with eigenvalue r such that all components of v are positive: A v = r v, v_i > 0 for 1 ≤ i ≤ n. (Respectively, there exists a positive left eigenvector w: w^T A = r w^T, w_i > 0.) It is known in the literature under many variations as the Perron vector, Perron eigenvector, Perron–Frobenius eigenvector, leading eigenvector, or dominant eigenvector.
4. There are no other positive (moreover non-negative) eigenvectors except positive multiples of v (respectively, left eigenvectors except w), i.e., all other eigenvectors must have at least one negative or non-real component.
5. lim_{k → ∞} A^k / r^k = v w^T, where the left and right eigenvectors for A are normalized so that w^T v = 1. Moreover, the matrix v w^T is the projection onto the eigenspace corresponding to r. This projection is called the Perron projection.
6. Collatz–Wielandt formula: for all non-negative non-zero vectors x, let f(x) be the minimum value of [Ax]_i / x_i taken over all those i such that x_i ≠ 0. Then f is a real-valued function whose maximum over all non-negative non-zero vectors x is the Perron–Frobenius eigenvalue.
7. A "Min-max" Collatz–Wielandt formula takes a form similar to the one above: for all strictly positive vectors x, let g(x) be the maximum value of [Ax]_i / x_i taken over i. Then g is a real-valued function whose minimum over all strictly positive vectors x is the Perron–Frobenius eigenvalue.
8. Birkhoff–Varga formula: Let x and y be strictly positive vectors. Then r = sup_{x > 0} inf_{y > 0} (y^T A x)/(y^T x) = inf_{x > 0} sup_{y > 0} (y^T A x)/(y^T x) = inf_{x > 0} sup_{y > 0} Σ_{i,j=1}^n x_i A_{ij} y_j / Σ_{i=1}^n y_i x_i.[8]
9. Donsker–Varadhan–Friedland formula: Let p be a probability vector and x a strictly positive vector. Then r = sup_p inf_{x > 0} ∑_{i=1}^n p_i [Ax]_i / x_i. [9][10]
10. Fiedler formula: r = sup_{z > 0} inf_{x > 0, y > 0, x∘y = z} (y^T A x)/(y^T x) = sup_{z > 0} inf_{x > 0, y > 0, x∘y = z} ∑_{i,j=1}^n x_i A_{ij} y_j / ∑_{i=1}^n y_i x_i. [11]
11. The Perron–Frobenius eigenvalue satisfies the inequalities

min_i ∑_j A_{ij} ≤ r ≤ max_i ∑_j A_{ij}.

All of these properties extend beyond strictly positive matrices to primitive matrices (see below). Facts 1–7 can be found in Meyer [12], chapter 8, claims 8.2.11–8.2.15, page 667, and exercises 8.2.5, 8.2.7, 8.2.9, pages 668–669.
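Several of these properties can be checked numerically. The sketch below, in Python with NumPy, verifies positivity of the Perron vector and the Collatz–Wielandt characterization for a small positive matrix; the matrix and tolerances are illustrative assumptions, not from the text:

```python
import numpy as np

# A hypothetical positive matrix, used purely for illustration.
A = np.array([[2.0, 1.0, 1.0],
              [1.0, 3.0, 1.0],
              [1.0, 1.0, 4.0]])

eigvals, eigvecs = np.linalg.eig(A)
r = max(eigvals.real)  # Perron root of a positive matrix
# The Perron root is also the spectral radius (strictly dominant in modulus).
assert np.isclose(r, max(abs(eigvals)))

# Perron vector: eigenvector for r, scaled so its components are positive.
v = eigvecs[:, np.argmax(eigvals.real)].real
v = v / v.sum()
assert (v > 0).all()

# Collatz–Wielandt: f(x) = min_i [Ax]_i / x_i is maximized at the Perron root.
def f(x):
    y = A @ x
    return min(y[i] / x[i] for i in range(len(x)) if x[i] != 0)

assert np.isclose(f(v), r)
```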

The left and right eigenvectors w and v are sometimes normalized so that the sum of their components is equal to 1; in this case, they are sometimes called stochastic eigenvectors. Often they are normalized so that the right eigenvector v sums to one, while w^T v = 1.

Non-negative matrices

However, Frobenius found a special subclass of non-negative matrices, the irreducible matrices, for which a non-trivial generalization is possible. For such a matrix, although the eigenvalues attaining the maximal absolute value might not be unique, their structure is under control: they have the form ωr, where r is a real strictly positive eigenvalue and ω ranges over the complex h-th roots of unity for some positive integer h called the period of the matrix. The eigenvector corresponding to r has strictly positive components (in contrast with the general case of non-negative matrices, where components are only non-negative). Also, all such eigenvalues are simple roots of the characteristic polynomial. Further properties are described below.

Classification of matrices

Let A be a square matrix (not necessarily positive or even real). The matrix A is irreducible if any of the following equivalent properties holds.

Definition 2: A cannot be conjugated into block upper triangular form by a permutation matrix P:

where E and G are non-trivial (i.e. of size greater than zero) square matrices.

If A is non-negative another definition holds:

Definition 3: One can associate with a matrix A a certain directed graph G_A. It has exactly n vertices, where n is the size of A, and there is an edge from vertex i to vertex j precisely when A_{ij} > 0. Then the matrix A is irreducible if and only if its associated graph G_A is strongly connected.

A matrix is reducible if it is not irreducible.

A matrix A is primitive if it is non-negative and its mth power is positive for some natural number m (i.e. all entries of A m are positive).

Let A be non-negative. Fix an index i and define the period of index i to be the greatest common divisor of all natural numbers m such that (A m )ii > 0. When A is irreducible, the period of every index is the same and is called the period of A. In fact, when A is irreducible, the period can be defined as the greatest common divisor of the lengths of the closed directed paths in GA (see Kitchens [15] page 16). The period is also called the index of imprimitivity (Meyer [12] page 674) or the order of cyclicity. If the period is 1, A is aperiodic. It can be proved that primitive matrices are the same as irreducible aperiodic non-negative matrices.
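The gcd characterization of the period suggests a direct computation: scan powers of A and take the gcd of those exponents m for which (A^m)_{ii} > 0. A minimal NumPy sketch; the helper `period` and the test matrices are hypothetical examples, not from the text:

```python
import numpy as np
from math import gcd

def period(A, max_power=None):
    """Period of index 0 of a non-negative matrix A: the gcd of all m
    with (A^m)[0,0] > 0. For an irreducible A this equals the period
    of the whole matrix (scanning finitely many powers suffices in practice)."""
    n = A.shape[0]
    if max_power is None:
        max_power = n * n
    g = 0
    P = np.eye(n)
    for m in range(1, max_power + 1):
        P = P @ A
        if P[0, 0] > 0:
            g = gcd(g, m)
    return g

# A 3-cycle permutation matrix: irreducible with period 3.
C = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [1., 0., 0.]])
assert period(C) == 3

# Adding self-loops makes it aperiodic (period 1), hence primitive.
assert period(C + np.eye(3)) == 1
```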

All statements of the Perron–Frobenius theorem for positive matrices remain true for primitive matrices. The same statements also hold for a non-negative irreducible matrix, except that it may possess several eigenvalues whose absolute value is equal to its spectral radius, so the statements need to be correspondingly modified. In fact the number of such eigenvalues is equal to the period.

Results for non-negative matrices were first obtained by Frobenius in 1912.

Perron–Frobenius theorem for irreducible non-negative matrices

Let A be an irreducible non-negative n × n matrix with period h and spectral radius ρ(A) = r. Then the following statements hold.

1. The number r is a positive real number and it is an eigenvalue of the matrix A, called the Perron–Frobenius eigenvalue.
2. The Perron–Frobenius eigenvalue r is simple. Both right and left eigenspaces associated with r are one-dimensional.
3. A has a right eigenvector v with eigenvalue r whose components are all positive.
4. Likewise, A has a left eigenvector w with eigenvalue r whose components are all positive.
5. The only eigenvectors whose components are all positive are those associated with the eigenvalue r.
6. The matrix A has exactly h (where h is the period) complex eigenvalues with absolute value r. Each of them is a simple root of the characteristic polynomial and is the product of r with an hth root of unity.
7. Let ω = 2π/h. Then the matrix A is similar to e^{iω}A; consequently the spectrum of A is invariant under multiplication by e^{iω} (corresponding to the rotation of the complex plane by the angle ω).
8. If h > 1 then there exists a permutation matrix P such that

Further properties

Let A be an irreducible non-negative matrix, then:

1. (I + A)^{n−1} is a positive matrix (Meyer [12], claim 8.3.5, p. 672).
2. Wielandt's theorem. If |B| < A, then ρ(B) ≤ ρ(A). If equality holds (i.e. if μ = ρ(A)e^{iφ} is an eigenvalue of B), then B = e^{iφ} D A D^{−1} for some diagonal unitary matrix D (i.e. the diagonal elements of D equal e^{iθ_l} and the off-diagonal elements are zero). [16]
3. If some power A^q is reducible, then it is completely reducible, i.e. for some permutation matrix P,

P A^q P^{−1} = diag(A_1, A_2, …, A_d),

a block-diagonal matrix where the A_i are irreducible matrices having the same maximal eigenvalue. The number d of these matrices is the greatest common divisor of q and h, where h is the period of A. [17]
4. If c(x) = x^n + c_{k_1} x^{n−k_1} + c_{k_2} x^{n−k_2} + ⋯ + c_{k_s} x^{n−k_s} is the characteristic polynomial of A in which only the non-zero terms are listed, then the period of A equals the greatest common divisor of k_1, k_2, …, k_s. [18]
5. Cesàro averages: lim_{k → ∞} (1/k) ∑_{i=0}^{k} A^i/r^i = vw^T, where the left and right eigenvectors for A are normalized so that w^T v = 1. Moreover, the matrix vw^T is the spectral projection corresponding to r, the Perron projection. [19]
6. Let r be the Perron–Frobenius eigenvalue; then the adjoint matrix for (r − A) is positive. [20]
7. If A has at least one non-zero diagonal element, then A is primitive. [21]
8. If 0 ≤ A < B, then r_A ≤ r_B. Moreover, if B is irreducible, then the inequality is strict: r_A < r_B.

A matrix A is primitive provided it is non-negative and A^m is positive for some m, and hence A^k is positive for all k ≥ m. To check primitivity, one needs a bound on how large the minimal such m can be, depending on the size of A: [22]

• If A is a non-negative primitive matrix of size n, then A^{n² − 2n + 2} is positive. Moreover, this is the best possible result, since for the matrix M below, the power M^k is not positive for every k < n² − 2n + 2, because (M^{n² − 2n + 1})_{11} = 0.

Numerous books have been written on the subject of non-negative matrices, and Perron–Frobenius theory is invariably a central feature. The examples given below only scratch the surface of its vast application domain.

Non-negative matrices

The Perron–Frobenius theorem does not apply directly to non-negative matrices. Nevertheless, any reducible square matrix A may be written in upper-triangular block form (known as the normal form of a reducible matrix) [23]

where P is a permutation matrix and each B_i is a square matrix that is either irreducible or zero. Now if A is non-negative then so too is each block of PAP^{−1}; moreover, the spectrum of A is just the union of the spectra of the B_i.

The invertibility of A can also be studied. The inverse of PAP^{−1} (if it exists) must have diagonal blocks of the form B_i^{−1}, so if any B_i isn't invertible then neither is PAP^{−1} or A. Conversely, let D be the block-diagonal matrix corresponding to PAP^{−1}, in other words PAP^{−1} with the asterisks zeroised. If each B_i is invertible then so is D, and D^{−1}(PAP^{−1}) is equal to the identity plus a nilpotent matrix. But such a matrix is always invertible (if N^k = 0, the inverse of I − N is I + N + N² + ⋯ + N^{k−1}), so PAP^{−1} and A are both invertible.

Therefore, many of the spectral properties of A may be deduced by applying the theorem to the irreducible Bi. For example, the Perron root is the maximum of the ρ(Bi). While there will still be eigenvectors with non-negative components it is quite possible that none of these will be positive.

Stochastic matrices

A row (column) stochastic matrix is a square matrix each of whose rows (columns) consists of non-negative real numbers whose sum is unity. The theorem cannot be applied directly to such matrices because they need not be irreducible.

If A is row-stochastic then the column vector with each entry 1 is an eigenvector corresponding to the eigenvalue 1, which is also ρ(A) by the remark above. It might not be the only eigenvalue on the unit circle, and the associated eigenspace can be multi-dimensional. If A is row-stochastic and irreducible then the Perron projection is also row-stochastic and all its rows are equal.
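For an irreducible and aperiodic (hence primitive) row-stochastic matrix, the powers A^k converge to the Perron projection, whose rows are all equal to the stationary distribution. A small NumPy sketch with a hypothetical chain:

```python
import numpy as np

# A hypothetical irreducible, aperiodic row-stochastic matrix.
A = np.array([[0.5, 0.5, 0.0],
              [0.2, 0.3, 0.5],
              [0.4, 0.0, 0.6]])
assert np.allclose(A.sum(axis=1), 1.0)

# For a primitive stochastic matrix, A^k converges to the Perron projection.
P = np.linalg.matrix_power(A, 200)

# All rows of the projection are equal (the stationary distribution),
# so the projection is itself row-stochastic.
assert np.allclose(P, P[0])
assert np.allclose(P.sum(axis=1), 1.0)
```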

Algebraic graph theory

The theorem has particular use in algebraic graph theory. The "underlying graph" of a non-negative n-square matrix is the graph with vertices numbered 1, …, n and an arc from i to j if and only if A_{ij} ≠ 0. If the underlying graph of such a matrix is strongly connected, then the matrix is irreducible, and thus the theorem applies. In particular, the adjacency matrix of a strongly connected graph is irreducible. [24] [25]

Finite Markov chains

The theorem has a natural interpretation in the theory of finite Markov chains (where it is the matrix-theoretic equivalent of the convergence of an irreducible finite Markov chain to its stationary distribution, formulated in terms of the transition matrix of the chain; see, for example, the article on the subshift of finite type).

Compact operators

More generally, it can be extended to the case of non-negative compact operators, which, in many ways, resemble finite-dimensional matrices. These are commonly studied in physics, under the name of transfer operators, or sometimes Ruelle–Perron–Frobenius operators (after David Ruelle). In this case, the leading eigenvalue corresponds to the thermodynamic equilibrium of a dynamical system, and the lesser eigenvalues to the decay modes of a system that is not in equilibrium. Thus, the theory offers a way of discovering the arrow of time in what would otherwise appear to be reversible, deterministic dynamical processes, when examined from the point of view of point-set topology. [26]

A common thread in many proofs is the Brouwer fixed point theorem. Another popular method is that of Wielandt (1950). He used the Collatz–Wielandt formula described above to extend and clarify Frobenius's work. [27] Another proof is based on the spectral theory [28] from which part of the arguments are borrowed.

Perron root is strictly maximal eigenvalue for positive (and primitive) matrices

If A is a positive (or more generally primitive) matrix, then there exists a real positive eigenvalue r (Perron–Frobenius eigenvalue or Perron root), which is strictly greater in absolute value than all other eigenvalues, hence r is the spectral radius of A.

This statement does not hold for general non-negative irreducible matrices, which have h eigenvalues with the same absolute value as r, where h is the period of A.

Proof for positive matrices

Let A be a positive matrix, and assume that its spectral radius ρ(A) = 1 (otherwise consider A/ρ(A)). Hence, there exists an eigenvalue λ on the unit circle, and all the other eigenvalues are less than or equal to 1 in absolute value. Suppose that another eigenvalue λ ≠ 1 also falls on the unit circle. Then there exists a positive integer m such that A^m is a positive matrix and the real part of λ^m is negative. Let ε be half the smallest diagonal entry of A^m and set T = A^m − εI, which is yet another positive matrix. Moreover, if Ax = λx then A^m x = λ^m x, thus λ^m − ε is an eigenvalue of T. Because of the choice of m, this point lies outside the unit disk; consequently ρ(T) > 1. On the other hand, all the entries in T are positive and less than or equal to those in A^m, so by Gelfand's formula ρ(T) ≤ ρ(A^m) ≤ ρ(A)^m = 1. This contradiction means that λ = 1 and there can be no other eigenvalues on the unit circle.

The same arguments can be applied to the case of primitive matrices; we just need the following simple lemma, which clarifies the properties of primitive matrices.

Lemma

Given a non-negative A, assume there exists m such that A^m is positive; then A^{m+1}, A^{m+2}, A^{m+3}, … are all positive.

A^{m+1} = A·A^m, so it can have a zero element only if some row of A is entirely zero, but in that case the same row of A^m would be zero, contradicting the positivity of A^m.

Applying the same argument as above for primitive matrices then proves the main claim.

Power method and the positive eigenpair

For a positive (or, more generally, an irreducible non-negative) matrix A, the dominant eigenvector is real and strictly positive (for merely non-negative A, respectively, non-negative).

This can be established using the power method, which states that for a sufficiently generic (in the sense below) matrix A the sequence of vectors bk+1 = Abk / | Abk | converges to the eigenvector with the maximum eigenvalue. (The initial vector b0 can be chosen arbitrarily except for some measure zero set). Starting with a non-negative vector b0 produces the sequence of non-negative vectors bk. Hence the limiting vector is also non-negative. By the power method this limiting vector is the dominant eigenvector for A, proving the assertion. The corresponding eigenvalue is non-negative.
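The power iteration just described can be sketched as follows; `power_method` and the 2×2 example matrix are illustrative assumptions, not from the text:

```python
import numpy as np

def power_method(A, iters=500):
    """Power iteration: b_{k+1} = A b_k / ||A b_k||. For a primitive
    non-negative A this converges to the Perron eigenvector."""
    b = np.ones(A.shape[0])  # non-negative start keeps all iterates non-negative
    for _ in range(iters):
        b = A @ b
        b = b / np.linalg.norm(b)
    r = b @ A @ b  # Rayleigh quotient approximates the dominant eigenvalue
    return r, b

# Hypothetical positive matrix for illustration.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
r, v = power_method(A)
assert np.isclose(r, 3.0)  # eigenvalues of A are 3 and 1
assert (v > 0).all()       # the limiting eigenvector is strictly positive
```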

The proof requires two additional arguments. First, the power method converges for matrices which do not have several eigenvalues of the same absolute value as the maximal one. The previous section's argument guarantees this.

Second, we must ensure strict positivity of all components of the eigenvector in the case of irreducible matrices. This follows from the following fact, which is of independent interest:

Lemma: given a positive (or, more generally, an irreducible non-negative) matrix A and any non-negative eigenvector v for A, v is necessarily strictly positive and the corresponding eigenvalue is also strictly positive.

Proof. One of the definitions of irreducibility for non-negative matrices is that for all indices i, j there exists m such that (A^m)_{ij} is strictly positive. Given a non-negative eigenvector v, at least one of its components, say the i-th, is strictly positive. First, the corresponding eigenvalue is strictly positive: indeed, given n such that (A^n)_{ii} > 0, we have r^n v_i = (A^n v)_i ≥ (A^n)_{ii} v_i > 0, hence r is strictly positive. Second, the eigenvector is strictly positive: given any index j, take m such that (A^m)_{ji} > 0; then r^m v_j = (A^m v)_j ≥ (A^m)_{ji} v_i > 0, hence v_j is strictly positive.

Multiplicity one

This section proves that the Perron–Frobenius eigenvalue is a simple root of the characteristic polynomial of the matrix. Hence the eigenspace associated to Perron–Frobenius eigenvalue r is one-dimensional. The arguments here are close to those in Meyer. [12]

Suppose there is a strictly positive eigenvector v corresponding to r and another eigenvector w with the same eigenvalue. (The vectors v and w can be chosen to be real, because A and r are both real, so the null space of A − rI has a basis consisting of real vectors.) Assume at least one of the components of w is positive (otherwise multiply w by −1). Take the maximal possible α such that u = v − αw is non-negative; then one of the components of u is zero (otherwise α would not be maximal). If u were non-zero, it would be a non-negative eigenvector, and by the lemma in the previous section it would be strictly positive; but at least one of its components is zero. The contradiction implies u = 0, so w is a multiple of v and the eigenspace is one-dimensional.

Case: There are no Jordan cells corresponding to the Perron–Frobenius eigenvalue r and all other eigenvalues which have the same absolute value.

If there is a Jordan cell, then the infinity norm ‖(A/r)^k‖_∞ tends to infinity for k → ∞, but that contradicts the existence of the positive eigenvector.

Assume r = 1 (otherwise consider A/r). Let v be a Perron–Frobenius strictly positive eigenvector, so Av = v; then:

‖A^k‖_∞ ≤ ‖v‖_∞ / min_i(v_i), so A^k is bounded for all k. This gives another proof that there are no eigenvalues of absolute value greater than the Perron–Frobenius one. It also contradicts the existence of a Jordan cell for any eigenvalue of absolute value equal to 1 (in particular for the Perron–Frobenius one), because existence of a Jordan cell implies that A^k is unbounded. For a two-by-two Jordan cell:

hence ‖J^k‖ ≥ k|λ|^{k−1} = k (for |λ| = 1), so it tends to infinity as k does. Since J^k = C^{−1} A^k C, we have ‖A^k‖ ≥ ‖J^k‖/(‖C^{−1}‖ ‖C‖), so it also tends to infinity. The resulting contradiction implies that there are no Jordan cells for the corresponding eigenvalues.

Combining the two claims above reveals that the Perron–Frobenius eigenvalue r is a simple root of the characteristic polynomial. In the case of non-primitive matrices, there exist other eigenvalues which have the same absolute value as r. The same claim is true for them, but requires more work.

No other non-negative eigenvectors

Given a positive (or, more generally, an irreducible non-negative) matrix A, the Perron–Frobenius eigenvector is the only (up to multiplication by a constant) non-negative eigenvector for A.

Other eigenvectors must contain negative or complex components: eigenvectors for different eigenvalues are orthogonal in some sense, but two positive eigenvectors cannot be orthogonal, so they must correspond to the same eigenvalue; and the eigenspace for the Perron–Frobenius eigenvalue is one-dimensional.

Assume there exists an eigenpair (λ, y) for A such that the vector y is positive, and consider (r, x), where x is the left Perron–Frobenius eigenvector for A (i.e. an eigenvector for A^T). Then r x^T y = (x^T A) y = x^T (A y) = λ x^T y; also x^T y > 0, so one has r = λ. Since the eigenspace for the Perron–Frobenius eigenvalue r is one-dimensional, the non-negative eigenvector y is a multiple of the Perron–Frobenius one. [29]

Collatz–Wielandt formula

Given a positive (or more generally irreducible non-negative matrix) A, one defines the function f on the set of all non-negative non-zero vectors x such that f(x) is the minimum value of [Ax]i / xi taken over all those i such that xi ≠ 0. Then f is a real-valued function, whose maximum is the Perron–Frobenius eigenvalue r.

For the proof, denote the maximum value of f by R; the proof requires showing R = r. Inserting the Perron–Frobenius eigenvector v into f, we obtain f(v) = r and conclude r ≤ R. For the opposite inequality, consider an arbitrary non-negative vector x and let ξ = f(x). The definition of f gives 0 ≤ ξx ≤ Ax (componentwise). Now take the positive left eigenvector w for A for the Perron–Frobenius eigenvalue r; then ξ w^T x = w^T(ξx) ≤ w^T(Ax) = (w^T A)x = r w^T x. Hence f(x) = ξ ≤ r, which implies R ≤ r. [30]

Perron projection as a limit: A k /r k

Let A be a positive (or more generally, primitive) matrix, and let r be its Perron–Frobenius eigenvalue.

1. The limit A^k/r^k for k → ∞ exists; denote it by P.
2. P is a projection operator: P² = P, which commutes with A: AP = PA.
3. The image of P is one-dimensional and spanned by the Perron–Frobenius eigenvector v (respectively for P^T, by the Perron–Frobenius eigenvector w for A^T).
4. P = vw^T, where v, w are normalized such that w^T v = 1.
5. Hence P is a positive operator.

Hence P is a spectral projection for the Perron–Frobenius eigenvalue r, and is called the Perron projection. The above assertion is not true for general non-negative irreducible matrices.

Actually the claims above (except claim 5) are valid for any matrix M such that there exists an eigenvalue r which is strictly greater than the other eigenvalues in absolute value and is a simple root of the characteristic polynomial. (These requirements hold for primitive matrices as above.)

If M is diagonalizable, then M is conjugate to a diagonal matrix with eigenvalues r_1, …, r_n on the diagonal (denote r_1 = r). The matrix M^k/r^k will be conjugate to the diagonal matrix diag(1, (r_2/r)^k, …, (r_n/r)^k), which tends to diag(1, 0, …, 0) for k → ∞, so the limit exists. The same method works for general M (without assuming that M is diagonalizable).

The projection and commutativity properties are elementary corollaries of the definition: M·(M^k/r^k) = (M^k/r^k)·M, and P² = lim M^{2k}/r^{2k} = P. The third fact is also elementary: M(Pu) = M lim (M^k/r^k)u = lim r·(M^{k+1}/r^{k+1})u, so taking the limit yields M(Pu) = r(Pu); so the image of P lies in the r-eigenspace for M, which is one-dimensional by the assumptions.

Denote by v the r-eigenvector for M (and by w the one for M^T). The columns of P are multiples of v, because the image of P is spanned by it; respectively, the rows of P are multiples of w^T. So P takes the form a·vw^T for some a, and hence its trace equals a·(w^T v). The trace of a projector equals the dimension of its image; it was proved before that the image is at most one-dimensional, and since P acts identically on the r-eigenvector for M, the image is exactly one-dimensional. So choosing w^T v = 1 implies P = vw^T.

Inequalities for Perron–Frobenius eigenvalue

For any non-negative matrix A its Perron–Frobenius eigenvalue r satisfies the inequality

min_i ∑_j A_{ij} ≤ r ≤ max_i ∑_j A_{ij}.
This fact is specific to non-negative matrices; for general matrices there is nothing similar. Given that A is positive (not just non-negative), there exists a positive eigenvector w such that Aw = rw and whose smallest component (say w_i) is 1. Then r = (Aw)_i ≥ the sum of the numbers in row i of A. Thus the minimum row sum gives a lower bound for r, and this observation can be extended to all non-negative matrices by continuity.

Another way to argue it is via the Collatz–Wielandt formula: taking the vector x = (1, 1, …, 1) immediately yields the inequality.
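The row-sum bounds are easy to confirm numerically; the matrix below is a hypothetical example:

```python
import numpy as np

# Hypothetical non-negative matrix.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

r = max(abs(np.linalg.eigvals(A)))  # Perron–Frobenius eigenvalue (spectral radius)
row_sums = A.sum(axis=1)            # row sums are 3 and 7

# The Perron root is bracketed by the minimum and maximum row sums.
assert row_sums.min() <= r <= row_sums.max()
```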

Further proofs

Perron projection

The proof now proceeds using spectral decomposition. The trick here is to split the Perron root from the other eigenvalues. The spectral projection associated with the Perron root is called the Perron projection and it enjoys the following property:

The Perron projection of an irreducible non-negative square matrix is a positive matrix.

Perron's findings and also (1)–(5) of the theorem are corollaries of this result. The key point is that a positive projection always has rank one. This means that if A is an irreducible non-negative square matrix then the algebraic and geometric multiplicities of its Perron root are both one. Also, if P is its Perron projection then AP = PA = ρ(A)P, so every column of P is a positive right eigenvector of A and every row is a positive left eigenvector. Moreover, if Ax = λx then PAx = λPx = ρ(A)Px, which means Px = 0 if λ ≠ ρ(A). Thus the only positive eigenvectors are those associated with ρ(A). If A is a primitive matrix with ρ(A) = 1, then it can be decomposed as P ⊕ (1 − P)A so that A^n = P + ((1 − P)A)^n. As n increases the second of these terms decays to zero, leaving P as the limit of A^n as n → ∞.

The power method is a convenient way to compute the Perron projection of a primitive matrix. If v and w are the positive row and column vectors that it generates then the Perron projection is just wv/vw. The spectral projections aren't neatly blocked as in the Jordan form. Here they are overlaid and each generally has complex entries extending to all four corners of the square matrix. Nevertheless, they retain their mutual orthogonality which is what facilitates the decomposition.

Peripheral projection

The analysis when A is irreducible and non-negative is broadly similar. The Perron projection is still positive but there may now be other eigenvalues of modulus ρ(A) that negate use of the power method and prevent the powers of (1 − P)A decaying as in the primitive case whenever ρ(A) = 1. So we consider the peripheral projection, which is the spectral projection of A corresponding to all the eigenvalues that have modulus ρ(A). It may then be shown that the peripheral projection of an irreducible non-negative square matrix is a non-negative matrix with a positive diagonal.

Cyclicity

Suppose in addition that ρ(A) = 1 and A has h eigenvalues on the unit circle. If P is the peripheral projection then the matrix R = AP = PA is non-negative and irreducible, R^h = P, and the cyclic group P, R, R², …, R^{h−1} represents the harmonics of A. The spectral projection of A at the eigenvalue λ on the unit circle is given by the formula h^{−1} ∑_{k=1}^{h} λ^{−k} R^k. All of these projections (including the Perron projection) have the same positive diagonal; moreover, choosing any one of them and then taking the modulus of every entry invariably yields the Perron projection. Some donkey work is still needed in order to establish the cyclic properties (6)–(8), but it's essentially just a matter of turning the handle. The spectral decomposition of A is given by A = R ⊕ (1 − P)A, so the difference between A^n and R^n is A^n − R^n = ((1 − P)A)^n, representing the transients of A^n, which eventually decay to zero. P may be computed as the limit of A^{nh} as n → ∞.

A problem that causes confusion is a lack of standardisation in the definitions. For example, some authors use the terms strictly positive and positive to mean > 0 and ≥ 0 respectively. In this article positive means > 0 and non-negative means ≥ 0. Another vexed area concerns decomposability and reducibility: irreducible is an overloaded term. For avoidance of doubt a non-zero non-negative square matrix A such that 1 + A is primitive is sometimes said to be connected. Then irreducible non-negative square matrices and connected matrices are synonymous. [31]

The non-negative eigenvector is often normalized so that the sum of its components is equal to unity; in this case, the eigenvector is the vector of a probability distribution and is sometimes called a stochastic eigenvector.

Perron–Frobenius eigenvalue and dominant eigenvalue are alternative names for the Perron root. Spectral projections are also known as spectral projectors and spectral idempotents. The period is sometimes referred to as the index of imprimitivity or the order of cyclicity.

Advanced Linear Algebra: Foundations to Frontiers

One can think of the Frobenius norm as taking the columns of the matrix, stacking them on top of each other to create a vector of size \(m \cdot n\text{,}\) and then taking the vector 2-norm of the result.

Homework 1.3.3.1 .

Partition the \(m \times n\) matrix \(A\) by columns:

Homework 1.3.3.2 .

Prove that the Frobenius norm is a norm.

Establishing that this function is positive definite and homogeneous is straightforward. To show that the triangle inequality holds, it helps to realize that if \(A = \left( \begin{array}{cccc} a_0 & a_1 & \cdots & a_{n-1} \end{array} \right)\text{,}\) then

In other words, it equals the vector 2-norm of the vector that is created by stacking the columns of \(A\) on top of each other. One can then exploit the fact that the vector 2-norm obeys the triangle inequality.
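This characterization is easy to confirm numerically: the Frobenius norm of a matrix equals the 2-norm of its entries stacked into a single vector. A NumPy sketch with an illustrative matrix:

```python
import numpy as np

# Hypothetical matrix used for illustration.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

fro = np.linalg.norm(A, 'fro')
# Stack the entries into one long vector and take its 2-norm.
# (The stacking order does not matter for the norm.)
stacked = np.linalg.norm(A.flatten())

assert np.isclose(fro, stacked)
assert np.isclose(fro, np.sqrt(1 + 4 + 9 + 16))  # sqrt(30)
```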

Homework 1.3.3.3 .

Partition the \(m \times n\) matrix \(A\) by rows:

Let us review the definition of the transpose of a matrix (which we have already used when defining the dot product of two real-valued vectors and when identifying a row in a matrix):

Definition 1.3.3.2 . Transpose.

For complex-valued matrices, it is important to also define the Hermitian transpose of a matrix:

Definition 1.3.3.3 . Hermitian transpose.

where \(\overline{A}\) denotes the complex conjugate of \(A\text{,}\) in which each element of the matrix is conjugated.

\(\overline{A}^T = \overline{A^T}\text{.}\)

If \(A \in \mathbb{R}^{m \times n}\text{,}\) then \(A^H = A^T\text{.}\)

If \(x \in \mathbb{C}^m\text{,}\) then \(x^H\) is defined consistent with how we have used it before.

If \(\alpha \in \mathbb{C}\text{,}\) then \(\alpha^H = \overline{\alpha}\text{.}\)

(If you view the scalar as a matrix and then Hermitian transpose it, you get the matrix with \(\overline{\alpha}\) as its only element.)

Don't Panic! While working with complex-valued scalars, vectors, and matrices may appear a bit scary at first, you will soon notice that it is not really much more complicated than working with their real-valued counterparts.

Homework 1.3.3.4 .

Let \(A \in \mathbb{C}^{m \times k}\) and \(B \in \mathbb{C}^{k \times n}\text{.}\) Using what you once learned about matrix transposition and matrix-matrix multiplication, reason that \((AB)^H = B^H A^H\text{.}\)

\((AB)^H = \overline{(AB)^T}\)  \(\langle\, X^H = \overline{X^T} \,\rangle\)

\(= \overline{B^T A^T}\)  \(\langle\, (XY)^T = Y^T X^T \,\rangle\)

\(= \overline{B^T}\,\overline{A^T}\)  \(\langle\, \text{you may check separately that } \overline{XY} = \overline{X}\,\overline{Y} \,\rangle\)

\(= B^H A^H\)  \(\langle\, \overline{X^T} = \overline{X}^T \,\rangle\)
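The identity \((AB)^H = B^H A^H\) can also be sanity-checked numerically; the random complex matrices below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical complex matrices of compatible sizes (3x4 and 4x2).
A = rng.standard_normal((3, 4)) + 1j * rng.standard_normal((3, 4))
B = rng.standard_normal((4, 2)) + 1j * rng.standard_normal((4, 2))

# .conj().T is the Hermitian (conjugate) transpose.
lhs = (A @ B).conj().T
rhs = B.conj().T @ A.conj().T
assert np.allclose(lhs, rhs)
```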

Definition 1.3.3.4 . Hermitian.

A matrix \(A \in \mathbb{C}^{m \times m}\) is Hermitian if and only if \(A = A^H\text{.}\)

Obviously, if \(A \in \mathbb{R}^{m \times m}\text{,}\) then \(A\) is a Hermitian matrix if and only if \(A\) is a symmetric matrix.

Let R be a commutative ring with prime characteristic p (an integral domain of positive characteristic always has prime characteristic, for example). The Frobenius endomorphism F is defined by

for all r in R. It respects the multiplication of R:

and F(1) is clearly 1 also. What is interesting, however, is that it also respects the addition of R. The expression (r + s)^p can be expanded using the binomial theorem. Because p is prime, it divides p! but not any q! for q < p; it therefore divides the numerator, but not the denominator, of the explicit formula for the binomial coefficients

if 1 ≤ kp − 1 . Therefore, the coefficients of all the terms except r p and s p are divisible by p , the characteristic, and hence they vanish. [1] Thus

This shows that F is a ring homomorphism.
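The "freshman's dream" identity underlying this argument can be checked exhaustively for a small prime using Python's built-in modular exponentiation (the prime p = 7 is an arbitrary choice):

```python
# Verify (r + s)^p = r^p + s^p in Z/pZ for a small prime p.
p = 7
for r in range(p):
    for s in range(p):
        assert pow(r + s, p, p) == (pow(r, p, p) + pow(s, p, p)) % p

# Fermat's little theorem: the Frobenius map x -> x^p is the identity on Z/pZ.
assert all(pow(x, p, p) == x % p for x in range(p))
```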

If φ : RS is a homomorphism of rings of characteristic p , then

If FR and FS are the Frobenius endomorphisms of R and S , then this can be rewritten as:

This means that the Frobenius endomorphism is a natural transformation from the identity functor on the category of characteristic p rings to itself.

If the ring R is a ring with no nilpotent elements, then the Frobenius endomorphism is injective: F(r) = 0 means r p = 0 , which by definition means that r is nilpotent of order at most p . In fact, this is necessary and sufficient, because if r is any nilpotent, then one of its powers will be nilpotent of order at most p . In particular, if R is a field then the Frobenius endomorphism is injective.

The Frobenius morphism is not necessarily surjective, even when R is a field. For example, let K = Fp(t) be the finite field of p elements together with a single transcendental element; equivalently, K is the field of rational functions with coefficients in Fp. Then the image of F does not contain t. If it did, then there would be a rational function q(t)/r(t) whose p-th power q(t)^p/r(t)^p would equal t. But the degree of this p-th power is p·deg(q) − p·deg(r), which is a multiple of p. In particular, it can't be 1, which is the degree of t. This is a contradiction; so t is not in the image of F.

A field K is called perfect if either it is of characteristic zero or it is of positive characteristic and its Frobenius endomorphism is an automorphism. For example, all finite fields are perfect.

Consider the finite field Fp . By Fermat's little theorem, every element x of Fp satisfies x p = x . Equivalently, it is a root of the polynomial X pX . The elements of Fp therefore determine p roots of this equation, and because this equation has degree p it has no more than p roots over any extension. In particular, if K is an algebraic extension of Fp (such as the algebraic closure or another finite field), then Fp is the fixed field of the Frobenius automorphism of K .

Let R be a ring of characteristic p > 0. If R is an integral domain, then by the same reasoning, the fixed points of Frobenius are the elements of the prime field. However, if R is not a domain, then X^p − X may have more than p roots; for example, this happens if R = Fp × Fp.

Iterating the Frobenius map gives a sequence of elements in R:

x, x^p, x^(p^2), x^(p^3), …

This sequence of iterates is used in defining the Frobenius closure and the tight closure of an ideal.

The Galois group of an extension of finite fields is generated by an iterate of the Frobenius automorphism. First, consider the case where the ground field is the prime field Fp. Let Fq be the finite field of q elements, where q = p^n. The Frobenius automorphism F of Fq fixes the prime field Fp, so it is an element of the Galois group Gal(Fq/Fp). In fact, since the multiplicative group Fq^× is cyclic with q − 1 elements, we know that the Galois group is cyclic and F is a generator. The order of F is n because F^n acts on an element x by sending it to x^q, and this is the identity on elements of Fq. Every automorphism of Fq is a power of F, and the generators are the powers F^i with i coprime to n.
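A small computational sketch of this, representing F8 = F2[x]/(x^3 + x + 1) by 3-bit integers; this encoding and the helper name gf_mul are choices made for the example:

```python
# F_8 = F_2[x]/(x^3 + x + 1); elements encoded as 3-bit integers
# (bit i holds the coefficient of x^i).
MOD = 0b1011  # x^3 + x + 1, irreducible over F_2

def gf_mul(a: int, b: int) -> int:
    """Carry-less polynomial product over F_2, reduced mod x^3 + x + 1."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0b1000:  # degree 3 appeared: reduce
            a ^= MOD
    return r

def frob(a: int) -> int:
    """Frobenius F(a) = a^2 (here p = 2)."""
    return gf_mul(a, a)

# F generates the cyclic group Gal(F_8/F_2) of order 3:
nontrivial = any(frob(a) != a for a in range(8))             # F is not the identity
order_divides_3 = all(frob(frob(frob(a))) == a for a in range(8))  # F^3 is
```

The fixed points of F are exactly 0 and 1, i.e. the prime field F2, matching the fixed-field statement above.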

Now consider the finite field Fq^f (the field with q^f elements) as an extension of Fq, where q = p^n as above. If n > 1, then the Frobenius automorphism F of Fq^f does not fix the ground field Fq, but its n-th iterate F^n does. The Galois group Gal(Fq^f/Fq) is cyclic of order f and is generated by F^n; it is the subgroup of Gal(Fq^f/Fp) generated by F^n. The generators of Gal(Fq^f/Fq) are the powers F^(ni) where i is coprime to f.

The Frobenius automorphism is not a generator of the absolute Galois group of Fq, because this Galois group is isomorphic to the profinite integers Ẑ, which are not cyclic. However, because the Frobenius automorphism is a generator of the Galois group of every finite extension of Fq, it is a generator of every finite quotient of the absolute Galois group. Consequently, it is a topological generator in the usual Krull topology on the absolute Galois group.

There are several different ways to define the Frobenius morphism for a scheme. The most fundamental is the absolute Frobenius morphism. However, the absolute Frobenius morphism behaves poorly in the relative situation because it pays no attention to the base scheme. There are several different ways of adapting the Frobenius morphism to the relative situation, each of which is useful in certain situations.

The absolute Frobenius morphism

Suppose that X is a scheme of characteristic p > 0 . Choose an open affine subset U = Spec A of X . The ring A is an Fp -algebra, so it admits a Frobenius endomorphism. If V is an open affine subset of U , then by the naturality of Frobenius, the Frobenius morphism on U , when restricted to V , is the Frobenius morphism on V . Consequently, the Frobenius morphism glues to give an endomorphism of X . This endomorphism is called the absolute Frobenius morphism of X , denoted FX . By definition, it is a homeomorphism of X with itself. The absolute Frobenius morphism is a natural transformation from the identity functor on the category of Fp -schemes to itself.

The absolute Frobenius morphism is a purely inseparable morphism of degree p . Its differential is zero. It preserves products, meaning that for any two schemes X and Y , FX×Y = FX × FY .

Restriction and extension of scalars by Frobenius

Suppose that φ : X → S is the structure morphism for an S-scheme X. The base scheme S has a Frobenius morphism FS. Composing φ with FS results in an S-scheme XF called the restriction of scalars by Frobenius. The restriction of scalars is actually a functor, because an S-morphism X → Y induces an S-morphism XF → YF.

For example, consider a ring A of characteristic p > 0 and a finitely presented algebra over A:

R = A[X1, …, Xn] / (f1, …, fm).

The action of A on R is given by:

c · Σα aα X^α = Σα c aα X^α,

where α is a multi-index. Let X = Spec R. Then XF is the affine scheme Spec R, but its structure morphism Spec R → Spec A, and hence the action of A on R, is different: because XF is X composed with the Frobenius of the base, an element c of A now acts through its p-th power,

c · Σα aα X^α = Σα c^p aα X^α.
Because restriction of scalars by Frobenius is simply composition, many properties of X are inherited by XF under appropriate hypotheses on the Frobenius morphism. For example, if X and SF are both finite type, then so is XF.

The extension of scalars by Frobenius is defined to be:

X (p) = X ×S S,

where the fiber product is taken along the Frobenius morphism FS : S → S on the second factor.

The projection onto the S factor makes X (p) an S -scheme. If S is not clear from the context, then X (p) is denoted by X (p/S) . Like restriction of scalars, extension of scalars is a functor: An S -morphism XY determines an S -morphism X (p) → Y (p) .

As before, consider a ring A and a finitely presented algebra R over A, and again let X = Spec R . Then:

A global section of X (p) is of the form:

where α is a multi-index and every a and bi is an element of A. The action of an element c of A on this section is:

Consequently, X (p) is isomorphic to:

Spec A[X1, …, Xn] / (f1^(p), …, fm^(p)),

where fi^(p) denotes fi with each coefficient raised to the p-th power.

A similar description holds for arbitrary A-algebras R.

Because extension of scalars is base change, it preserves limits and coproducts. This implies in particular that if X has an algebraic structure defined in terms of finite limits (such as being a group scheme), then so does X (p) . Furthermore, being a base change means that extension of scalars preserves properties such as being of finite type, finite presentation, separated, affine, and so on.

Extension of scalars is well-behaved with respect to base change: Given a morphism S′ → S , there is a natural isomorphism:

X (p/S) ×S S′ ≅ (X ×S S′) (p/S′).

Relative Frobenius

Let X be an S-scheme with structure morphism φ. The relative Frobenius morphism of X is the morphism:

FX/S : X → X (p)

defined by the universal property of the pullback X (p): it is the unique morphism whose composition with the projection X (p) → X is the absolute Frobenius FX, and whose composition with the structure morphism X (p) → S is φ.

Because the absolute Frobenius morphism is natural, the relative Frobenius morphism is a morphism of S -schemes.

Consider, for example, the A-algebra:

R = A[X1, …, Xn] / (f1, …, fm).

The relative Frobenius morphism is the homomorphism R (p) → R defined by:

x ⊗ a ↦ a x^p.

Relative Frobenius is compatible with base change in the sense that, under the natural isomorphism of X (p/S) ×S S′ and (X ×S S′) (p/S′) , we have:

FX/S × 1S′ = F(X×SS′)/S′.

Relative Frobenius is a universal homeomorphism. If X → S is an open immersion, then it is the identity. If X → S is a closed immersion determined by an ideal sheaf I of OS , then X (p) is determined by the ideal sheaf I^p and relative Frobenius is the augmentation map OS/I^p → OS/I .

X is unramified over S if and only if FX/S is unramified, if and only if FX/S is a monomorphism. X is étale over S if and only if FX/S is étale, if and only if FX/S is an isomorphism.

Arithmetic Frobenius

The arithmetic Frobenius morphism of an S-scheme X is a morphism:

X (p) → X,

the base change of FS by 1X.

then the arithmetic Frobenius is the homomorphism:

If we rewrite R (p) as:

then this homomorphism is:

Geometric Frobenius

Suppose that the Frobenius morphism FS of S is an isomorphism. Then extending scalars by its inverse FS^(−1) gives:

and then there is an isomorphism:

The geometric Frobenius morphism of an S-scheme X is a morphism:

X (1/p) → X.

Continuing our example of A and R above, geometric Frobenius is defined to be:

Arithmetic and geometric Frobenius as Galois actions

Suppose that the Frobenius morphism of S is an isomorphism. Then it generates a subgroup of the automorphism group of S . If S = Spec k is the spectrum of a finite field, then its automorphism group is the Galois group of the field over the prime field, and the Frobenius morphism and its inverse are both generators of the automorphism group. In addition, X (p) and X (1/p) may be identified with X . The arithmetic and geometric Frobenius morphisms are then endomorphisms of X , and so they lead to an action of the Galois group of k on X.

Consider the set of K-points X(K) . This set comes with a Galois action: Each such point x corresponds to a homomorphism OXK from the structure sheaf to K, which factors via k(x), the residue field at x, and the action of Frobenius on x is the application of the Frobenius morphism to the residue field. This Galois action agrees with the action of arithmetic Frobenius: The composite morphism

is the same as the composite morphism:

by the definition of the arithmetic Frobenius. Consequently, arithmetic Frobenius explicitly exhibits the action of the Galois group on points as an endomorphism of X.

Given an unramified finite extension L/K of local fields, there is a concept of Frobenius endomorphism which induces the Frobenius endomorphism in the corresponding extension of residue fields. [2]

Suppose L/K is an unramified extension of local fields, with ring of integers OK of K such that the residue field (the integers of K modulo their unique maximal ideal φ) is a finite field of order q , where q is a power of a prime. If Φ is a prime of L lying over φ , then L/K being unramified means by definition that the residue field of L (the integers of L modulo Φ) is a finite field of order q^f extending the residue field of K, where f is the degree of L/K . We may define the Frobenius map for elements of the ring of integers OL of L as an automorphism sΦ of L such that

sΦ(x) ≡ x^q (mod Φ).

In algebraic number theory, Frobenius elements are defined for extensions L/K of global fields that are finite Galois extensions for prime ideals Φ of L that are unramified in L/K . Since the extension is unramified, the decomposition group of Φ is the Galois group of the extension of residue fields. The Frobenius element can then be defined for elements of the ring of integers of L as in the local case, by

sΦ(x) ≡ x^q (mod Φ),

where q is the order of the residue field OK/(Φ ∩ OK) .

Lifts of the Frobenius are in correspondence with p-derivations.

The polynomial x^5 − x − 1 has discriminant 19 × 151, and so is unramified at the prime 3; it is also irreducible mod 3. Hence adjoining a root ρ of it to the field of 3-adic numbers Q3 gives an unramified extension Q3(ρ) of Q3. We may find the image of ρ under the Frobenius map by locating the root nearest to ρ^3, which we may do by Newton's method. We obtain an element of the ring of integers Z3[ρ] in this way; this is a polynomial of degree four in ρ with coefficients in the 3-adic integers Z3. Modulo 3^8 this polynomial is

This is algebraic over Q and is the correct global Frobenius image in terms of the embedding of Q into Q3; moreover, the coefficients are algebraic and the result can be expressed algebraically. However, they are of degree 120, the order of the Galois group, illustrating the fact that explicit computations are much more easily accomplished if p-adic results will suffice.

If L/K is an abelian extension of global fields, we get a much stronger congruence since it depends only on the prime φ in the base field K . For an example, consider the extension Q(β) of Q obtained by adjoining a root β satisfying

β^5 + β^4 − 4β^3 − 3β^2 + 3β + 1 = 0
to Q. This extension is cyclic of order five, with roots

2 cos(2πn/11)

for integer n. The Chebyshev polynomials of β:

β^2 − 2, β^3 − 3β, β^5 − 5β^3 + 5β

give the result of the Frobenius map for the primes 2, 3 and 5 respectively, and so on for larger primes not equal to 11 or of the form 22n + 1 (which split). It is immediately apparent how the Frobenius map gives a result equal mod p to the p-th power of the root β.


The simplest primality test is trial division: given an input number, n, check whether it is evenly divisible by any prime number between 2 and √ n (i.e. that the division leaves no remainder). If so, then n is composite. Otherwise, it is prime. [1]

For example, consider the number 100, which is evenly divisible by these numbers:

Note that the largest factor, 50, is half of 100. This holds true for all n: all divisors are less than or equal to n/2.

Actually, when we test all possible divisors up to n/2, we will discover some factors twice. To observe this, rewrite the list of divisors as a list of products, each equal to 100:

2 × 50, 4 × 25, 5 × 20, 10 × 10, 20 × 5, 25 × 4, 50 × 2

Notice that products past 10 × 10 merely repeat numbers which appeared in earlier products. For example, 5 × 20 and 20 × 5 consist of the same numbers. This holds true for all n: all unique divisors of n are numbers less than or equal to √n, so we need not search past that. [1] (In this example, √n = √100 = 10.)

All even numbers greater than 2 can also be eliminated since, if an even number can divide n, so can 2.

We can improve this method further. Observe that all primes greater than 3 are of the form 6k ± 1, where k is any integer greater than 0. This is because all integers can be expressed as (6k + i), where i = −1, 0, 1, 2, 3, or 4. Note that 2 divides (6k + 0), (6k + 2), and (6k + 4), and 3 divides (6k + 3). So, a more efficient method is to test whether n is divisible by 2 or 3, then to check through all numbers of the form 6k ± 1 up to √n. This is 3 times faster than testing all numbers up to √n.

Generalising further, all primes greater than c# (c primorial) are of the form c# · k + i, for i < c#, where c and k are integers and i represents the numbers that are coprime to c#. For example, let c = 6. Then c# = 2 · 3 · 5 = 30. All integers are of the form 30k + i for i = 0, 1, 2, …, 29 and k an integer. However, 2 divides 0, 2, 4, …, 28; 3 divides 0, 3, 6, …, 27; and 5 divides 0, 5, 10, …, 25. So all prime numbers greater than 30 are of the form 30k + i for i = 1, 7, 11, 13, 17, 19, 23, 29 (i.e. for i < 30 such that gcd(i, 30) = 1). Note that if i and 30 were not coprime, then 30k + i would be divisible by a prime divisor of 30, namely 2, 3 or 5, and would therefore not be prime. (Note: Not all numbers which meet the above conditions are prime. For example, 437 is of the form c# · k + i for c#(7) = 210, k = 2, i = 17; however, 437 = 19 × 23 is composite.)

As c → ∞ , the number of values that c#k + i can take over a certain range decreases, and so the time to test n decreases. For this method, it is also necessary to check for divisibility by all primes that are less than c. Observations analogous to the preceding can be applied recursively, giving the Sieve of Eratosthenes.
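The admissible residues i coprime to c# can be enumerated directly; a quick sketch for c# = 30:

```python
from math import gcd

c_primorial = 30  # 2 * 3 * 5, the primorial for c = 6
# Candidates of the form 30k + i survive only when gcd(i, 30) = 1.
residues = [i for i in range(c_primorial) if gcd(i, c_primorial) == 1]
# residues is the list 1, 7, 11, 13, 17, 19, 23, 29 from the text above.
```

Only 8 of every 30 integers survive the wheel, versus 10 of every 30 for the 6k ± 1 wheel, which is where the speedup comes from.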

A good way to speed up these methods (and all the others mentioned below) is to pre-compute and store a list of all primes up to a certain bound, say all primes up to 200. (Such a list can be computed with the Sieve of Eratosthenes or by an algorithm that tests each incremental m against all known primes < √ m ). Then, before testing n for primality with a serious method, n can first be checked for divisibility by any prime from the list. If it is divisible by any of those numbers then it is composite, and any further tests can be skipped.

A simple, but very inefficient primality test uses Wilson's theorem, which states that p is prime if and only if:

(p − 1)! ≡ −1 (mod p)

Although this method requires about p modular multiplications, rendering it impractical, theorems about primes and modular residues form the basis of many more practical methods.
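A direct transcription of the test (impractical for large p, as noted; the function name is illustrative):

```python
from math import factorial

def is_prime_wilson(p: int) -> bool:
    """Wilson's theorem: p is prime iff (p - 1)! ≡ -1 (mod p)."""
    # -1 mod p is p - 1, so compare against that.
    return p > 1 and factorial(p - 1) % p == p - 1
```

The factorial grows so quickly that this is only usable for tiny p, but it is a faithful statement of the theorem.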

Python code

The following is a simple primality test in Python using the simple 6k ± 1 optimization mentioned earlier. More sophisticated methods described below are much faster for large n.
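A minimal implementation of this test:

```python
def is_prime(n: int) -> bool:
    """Trial division using the 6k ± 1 optimization."""
    if n <= 3:
        return n > 1
    if n % 2 == 0 or n % 3 == 0:
        return False
    # Remaining candidate divisors are of the form 6k - 1 and 6k + 1.
    i = 5
    while i * i <= n:
        if n % i == 0 or n % (i + 2) == 0:
            return False
        i += 6
    return True
```

The loop condition i * i <= n implements the √n bound without computing a square root.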

C# code

The following is a primality test in C# using the same optimization as above.

Metals

With the exception of hydrogen, all elements that form positive ions by losing electrons during chemical reactions are called metals. Thus metals are electropositive elements with relatively low ionization energies. They are characterized by bright luster, hardness, the ability to resonate sound, and excellent conduction of heat and electricity. Metals are solids under normal conditions, except for mercury.

Physical Properties of Metals

Metals are lustrous, malleable, ductile, good conductors of heat and electricity. Other properties include:

• State: Metals are solids at room temperature with the exception of mercury, which is liquid at room temperature (Gallium is liquid on hot days).
• Luster: Metals have the quality of reflecting light from their surface and can be polished e.g., gold, silver and copper.
• Malleability: Metals have the ability to withstand hammering and can be made into thin sheets known as foils. For example, a sugar cube sized chunk of gold can be pounded into a thin sheet that will cover a football field.
• Ductility: Metals can be drawn into wires. For example, 100 g of silver can be drawn into a thin wire about 200 meters long.
• Hardness: All metals are hard except sodium and potassium, which are soft and can be cut with a knife.
• Valency: Metals typically have 1 to 3 electrons in the outermost shell of their atoms.
• Conduction: Metals are good conductors because they have free electrons. Silver and copper are the two best conductors of heat and electricity. Lead is the poorest conductor of heat; bismuth, mercury and iron are also poor conductors.
• Density: Metals have high density and are very heavy. Iridium and osmium have the highest densities whereas lithium has the lowest density.
• Melting and Boiling Points: Metals have high melting and boiling points. Tungsten has the highest melting and boiling points whereas mercury has the lowest. Sodium and potassium also have low melting points.

Chemical Properties of Metals

Metals are electropositive elements that generally form basic or amphoteric oxides with oxygen. Other chemical properties include:

• Electropositive Character: Metals tend to have low ionization energies, and typically lose electrons (i.e. are oxidized) when they undergo chemical reactions. They normally do not accept electrons. For example:
• Alkali metals are always 1+ (lose the electron in the s subshell)
• Alkaline earth metals are always 2+ (lose both electrons in the s subshell)
• Transition metal ions do not follow an obvious pattern; 2+ is common (lose both electrons in the s subshell), and 1+ and 3+ are also observed

Compounds of metals with non-metals tend to be ionic in nature. Most metal oxides are basic oxides and dissolve in water to form metal hydroxides:

Metal oxides exhibit their basic chemical nature by reacting with acids to form metal salts and water:

What is the chemical formula for aluminum oxide?

Al has a 3+ charge, the oxide ion is O^2−, thus Al2O3.

Would you expect it to be solid, liquid or gas at room temperature?

Oxides of metals are characteristically solid at room temperature.

Write the balanced chemical equation for the reaction of aluminum oxide with nitric acid:

Metal oxide + acid → salt + water:

Al2O3(s) + 6 HNO3(aq) → 2 Al(NO3)3(aq) + 3 H2O(l)


Solving a single-variable equation

In the secant method, we replace the first derivative f′ at xn with the finite-difference approximation:

f′(xn) ≈ (f(xn) − f(xn−1)) / (xn − xn−1),

where n is the iteration index.
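As a short illustration, a sketch of the resulting iteration (the function name secant and the test function x² − 2 are chosen for this example):

```python
def secant(f, x0, x1, tol=1e-12, max_iter=100):
    """Root finding by the secant method: Newton's method with the
    derivative replaced by a finite-difference approximation."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        if f1 == f0:
            break  # flat secant line; cannot take a step
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < tol:
            return x2
        x0, x1 = x1, x2
    return x1

root = secant(lambda x: x * x - 2.0, 1.0, 2.0)  # converges to sqrt(2)
```

Unlike Newton's method, no derivative is supplied; the two starting points play the role of the initial derivative estimate.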

Solving a system of nonlinear equations

Consider a system of k nonlinear equations

f(x) = 0,

where f is a vector-valued function of vector x:

x = (x1, x2, …, xk),  f(x) = (f1(x), f2(x), …, fk(x)).

For such problems, Broyden gives a generalization of the one-dimensional Newton's method, replacing the derivative with the Jacobian J . The Jacobian matrix is determined iteratively, based on the secant equation in the finite-difference approximation:

Jn (xn − xn−1) ≈ f(xn) − f(xn−1),

where n is the iteration index. For clarity, let us define:

fn = f(xn), Δxn = xn − xn−1, Δfn = fn − fn−1,

so the above may be rewritten as

Jn Δxn ≈ Δfn.

The above equation is underdetermined when k is greater than one. Broyden suggests using the current estimate of the Jacobian matrix Jn−1 and improving upon it by taking the solution to the secant equation that is a minimal modification to Jn−1 :

Jn = Jn−1 + ( (Δfn − Jn−1 Δxn) Δxnᵀ ) / (Δxnᵀ Δxn)

This minimizes the following Frobenius norm:

‖Jn − Jn−1‖F.

We may then proceed in the Newton direction:

xn+1 = xn − Jn⁻¹ f(xn).

Broyden also suggested using the Sherman–Morrison formula to update directly the inverse of the Jacobian matrix:

Jn⁻¹ = Jn−1⁻¹ + ( (Δxn − Jn−1⁻¹ Δfn) Δxnᵀ Jn−1⁻¹ ) / (Δxnᵀ Jn−1⁻¹ Δfn)

This first method is commonly known as the "good Broyden's method".
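A sketch of the "good" method in code. Assumptions made for this example: numpy is available, the initial Jacobian is estimated by forward differences (the text leaves the initialization open), and the test system and starting point are chosen for illustration:

```python
import numpy as np

def broyden_good(f, x0, tol=1e-10, max_iter=100, eps=1e-8):
    """'Good' Broyden's method: forward-difference initial Jacobian, then
    Sherman-Morrison rank-one updates applied directly to its inverse."""
    x = np.asarray(x0, dtype=float)
    n = x.size
    fx = f(x)
    J = np.empty((n, n))
    for j in range(n):                 # initial Jacobian, column by column
        e = np.zeros(n)
        e[j] = eps
        J[:, j] = (f(x + e) - fx) / eps
    J_inv = np.linalg.inv(J)
    for _ in range(max_iter):
        dx = -J_inv @ fx               # quasi-Newton step
        x_new = x + dx
        fx_new = f(x_new)
        if np.linalg.norm(fx_new) < tol:
            return x_new
        df = fx_new - fx
        u = J_inv @ df
        denom = dx @ u                 # dx^T J_inv df
        if abs(denom) < 1e-14:
            break                      # update would be numerically singular
        # Sherman-Morrison update of the inverse Jacobian
        J_inv += np.outer(dx - u, dx @ J_inv) / denom
        x, fx = x_new, fx_new
    return x

# Example system: x^2 + y^2 = 2, x = y, with root (1, 1).
sol = broyden_good(lambda v: np.array([v[0] ** 2 + v[1] ** 2 - 2.0,
                                       v[0] - v[1]]), [1.5, 0.5])
```

Only one n × n inversion is ever performed; every subsequent iteration costs a few matrix-vector products, which is the point of updating the inverse directly.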

A similar technique can be derived by using a slightly different modification to Jn−1 . This yields a second method, the so-called "bad Broyden's method" (but see [3] ):

Jn⁻¹ = Jn−1⁻¹ + ( (Δxn − Jn−1⁻¹ Δfn) Δfnᵀ ) / (Δfnᵀ Δfn)

This minimizes a different Frobenius norm:

‖Jn⁻¹ − Jn−1⁻¹‖F.

Many other quasi-Newton schemes have been suggested in optimization, where one seeks a maximum or minimum by finding the root of the first derivative (gradient in multiple dimensions). The Jacobian of the gradient is called Hessian and is symmetric, adding further constraints to its update.

Broyden has defined not only two methods, but a whole class of methods. Other members of this class have been added by other authors.


This article will use the following notational conventions: matrices are represented by capital letters in bold, e.g. A vectors in lowercase bold, e.g. a and entries of vectors and matrices are italic (since they are numbers from a field), e.g. A and a . Index notation is often the clearest way to express definitions, and is used as standard in the literature. The i, j entry of matrix A is indicated by (A)ij , Aij or aij , whereas a numerical label (not matrix entries) on a collection of matrices is subscripted only, e.g. A1, A2 , etc.

If A is an m × n matrix and B is an n × p matrix,

the matrix product C = AB (denoted without multiplication signs or dots) is defined to be the m × p matrix [6] [7] [8] [9] with entries

cij = ai1 b1j + ai2 b2j + ⋯ + ain bnj = Σk aik bkj,

for i = 1, …, m and j = 1, …, p .

Therefore, AB can also be written as

Thus the product AB is defined if and only if the number of columns in A equals the number of rows in B , [2] in this case n .

In most scenarios, the entries are numbers, but they may be any kind of mathematical objects for which an addition and a multiplication are defined, that are associative, and such that the addition is commutative, and the multiplication is distributive with respect to the addition. In particular, the entries may be matrices themselves (see block matrix).
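The definition can be transcribed directly; a minimal sketch in Python (the helper name matmul is illustrative):

```python
def matmul(A, B):
    """Schoolbook matrix product: C[i][j] = sum_k A[i][k] * B[k][j].
    A is m x n, B is n x p; the inner dimensions must agree."""
    m, n = len(A), len(A[0])
    if len(B) != n:
        raise ValueError("number of columns of A must equal number of rows of B")
    p = len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(p)]
            for i in range(m)]

C = matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]])
# C == [[19, 22], [43, 50]]
```

The triple loop makes the m · p · n scalar multiplications of the definition explicit, which is the count used in the complexity discussion below.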

Illustration

The figure to the right illustrates diagrammatically the product of two matrices A and B , showing how each intersection in the product matrix corresponds to a row of A and a column of B .

The values at the intersections marked with circles are:

Historically, matrix multiplication has been introduced for facilitating and clarifying computations in linear algebra. This strong relationship between matrix multiplication and linear algebra remains fundamental in all mathematics, as well as in physics, engineering and computer science.

Linear maps

If a vector space has a finite basis, its vectors are each uniquely represented by a finite sequence of scalars, called a coordinate vector, whose elements are the coordinates of the vector on the basis. These coordinate vectors form another vector space, which is isomorphic to the original vector space. A coordinate vector is commonly organized as a column matrix (also called column vector), which is a matrix with only one column. So, a column vector represents both a coordinate vector, and a vector of the original vector space.

A linear map A from a vector space of dimension n into a vector space of dimension m maps a column vector

The linear map A is thus defined by the matrix

System of linear equations

The general form of a system of linear equations is

Using the same notation as above, such a system is equivalent to the single matrix equation

Ax = b.

Dot product, bilinear form and inner product

The dot product of two column vectors is the matrix product

x · y = xᵀy,

where xᵀ is the row vector obtained by transposing x.

More generally, any bilinear form over a vector space of finite dimension may be expressed as a matrix product

and any inner product may be expressed as

Matrix multiplication shares some properties with usual multiplication. However, matrix multiplication is not defined if the number of columns of the first factor differs from the number of rows of the second factor, and it is non-commutative, [10] even when the product remains defined after changing the order of the factors. [11] [12]

Non-commutativity

One special case where commutativity does occur is when D and E are two (square) diagonal matrices (of the same size); then DE = ED . [10] Again, if the matrices are over a general ring rather than a field, the corresponding entries in each must also commute with each other for this to hold.

Distributivity

The matrix product is distributive with respect to matrix addition. That is, if A, B, C, D are matrices of respective sizes m × n , n × p , n × p , and p × q , one has (left distributivity)

A(B + C) = AB + AC,

and (right distributivity)

(B + C)D = BD + CD.

This results from the distributivity for coefficients by

Product with a scalar

Given a scalar c, one may form the products c(AB), (cA)B, A(cB), and (AB)c. If the scalars have the commutative property, then all four matrices are equal. More generally, all four are equal if c belongs to the center of a ring containing the entries of the matrices, because in this case, cX = Xc for all matrices X .

These properties result from the bilinearity of the product of scalars:

Transpose

If the scalars have the commutative property, the transpose of a product of matrices is the product, in the reverse order, of the transposes of the factors. That is

(AB)ᵀ = BᵀAᵀ,

where T denotes the transpose, that is the interchange of rows and columns.

This identity does not hold for noncommutative entries, since the order between the entries of A and B is reversed, when one expands the definition of the matrix product.

Complex conjugate

If A and B have complex entries, then

(AB)* = A* B*,

where * denotes the entry-wise complex conjugate of a matrix.

This results from applying to the definition of matrix product the fact that the conjugate of a sum is the sum of the conjugates of the summands and the conjugate of a product is the product of the conjugates of the factors.

Transposition acts on the indices of the entries, while conjugation acts independently on the entries themselves. It results that, if A and B have complex entries, one has

(AB)† = B†A†,

where † denotes the conjugate transpose (conjugate of the transpose, or equivalently transpose of the conjugate).

Associativity

Given three matrices A, B and C , the products (AB)C and A(BC) are defined if and only if the number of columns of A equals the number of rows of B , and the number of columns of B equals the number of rows of C (in particular, if one of the products is defined, then the other is also defined). In this case, one has the associative property

(AB)C = A(BC).

As for any associative operation, this allows omitting parentheses, and writing the above products as ABC.

This extends naturally to the product of any number of matrices provided that the dimensions match. That is, if A1, A2, …, An are matrices such that the number of columns of Ai equals the number of rows of Ai+1 for i = 1, …, n − 1 , then the product

A1 A2 ⋯ An

is defined and does not depend on the order of the multiplications, if the order of the matrices is kept fixed.

These properties may be proved by straightforward but complicated summation manipulations. This result also follows from the fact that matrices represent linear maps. Therefore, the associative property of matrices is simply a specific case of the associative property of function composition.

Complexity is not associative

Although the result of a sequence of matrix products does not depend on the order of operation (provided that the order of the matrices is not changed), the computational complexity may depend dramatically on this order.

For example, if A, B and C are matrices of respective sizes 10×30, 30×5, 5×60 , computing (AB)C needs 10×30×5 + 10×5×60 = 4,500 multiplications, while computing A(BC) needs 30×5×60 + 10×30×60 = 27,000 multiplications.
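The arithmetic can be checked directly:

```python
# Cost of multiplying an (a x b) matrix by a (b x c) matrix with the
# schoolbook algorithm: a * b * c scalar multiplications.
cost = lambda a, b, c: a * b * c

# A: 10x30, B: 30x5, C: 5x60
ab_then_c = cost(10, 30, 5) + cost(10, 5, 60)   # (AB)C: AB is 10x5
a_then_bc = cost(30, 5, 60) + cost(10, 30, 60)  # A(BC): BC is 30x60
# ab_then_c == 4500, a_then_bc == 27000: a factor-of-6 difference.
```

The intermediate result's shape drives the difference: (AB) is a small 10×5 matrix, while (BC) is a large 30×60 one.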

Algorithms have been designed for choosing the best order of products; see Matrix chain multiplication. When the number n of matrices increases, it has been shown that the choice of the best order has a complexity of O(n log n).

Application to similarity

Similarity transformations map products to products: that is, for an invertible matrix P,

(P⁻¹AP)(P⁻¹BP) = P⁻¹(AB)P.

If n > 1 , many matrices do not have a multiplicative inverse. For example, a matrix such that all entries of a row (or a column) are 0 does not have an inverse. If it exists, the inverse of a matrix A is denoted A⁻¹, and thus verifies

A A⁻¹ = A⁻¹ A = I.

A matrix that has an inverse is an invertible matrix. Otherwise, it is a singular matrix.

A product of matrices is invertible if and only if each factor is invertible. In this case, one has

(AB)⁻¹ = B⁻¹A⁻¹.

When R is commutative, and, in particular, when it is a field, the determinant of a product is the product of the determinants. As determinants are scalars, and scalars commute, one has thus

det(AB) = det(A) det(B) = det(B) det(A) = det(BA).

The other matrix invariants do not behave as well with products. Nevertheless, if R is commutative, AB and BA have the same trace, the same characteristic polynomial, and the same eigenvalues with the same multiplicities. However, the eigenvectors are generally different if AB ≠ BA .

Powers of a matrix

One may raise a square matrix to any nonnegative integer power, multiplying it by itself repeatedly in the same way as for ordinary numbers. That is,

A⁰ = I,  A¹ = A,  Aᵏ = A A ⋯ A (k factors).

Computing the k th power of a matrix needs k – 1 times the time of a single matrix multiplication, if it is done with the trivial algorithm (repeated multiplication). As this may be very time consuming, one generally prefers using exponentiation by squaring, which requires less than 2 log2 k matrix multiplications, and is therefore much more efficient.
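A sketch of exponentiation by squaring (the helper names are illustrative; a schoolbook matmul is included to keep the block self-contained):

```python
def matmul(A, B):
    """Schoolbook product of two square matrices given as lists of lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_pow(A, k):
    """A^k by exponentiation by squaring: O(log k) matrix products
    instead of the k - 1 products of repeated multiplication."""
    n = len(A)
    result = [[int(i == j) for j in range(n)] for i in range(n)]  # identity
    while k > 0:
        if k & 1:              # current binary digit of k is 1
            result = matmul(result, A)
        A = matmul(A, A)       # square for the next binary digit
        k >>= 1
    return result

# Powers of the Fibonacci matrix: [[1,1],[1,0]]^k = [[F(k+1),F(k)],[F(k),F(k-1)]]
fib10 = mat_pow([[1, 1], [1, 0]], 10)
```

Scanning the binary digits of k performs at most 2 log2 k multiplications, matching the bound stated above.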

An easy case for exponentiation is that of a diagonal matrix. Since the product of diagonal matrices amounts to simply multiplying corresponding diagonal elements together, the k-th power of a diagonal matrix is obtained by raising the entries to the power k:

diag(d1, …, dn)^k = diag(d1^k, …, dn^k).

The definition of matrix product requires that the entries belong to a semiring, and does not require multiplication of elements of the semiring to be commutative. In many applications, the matrix elements belong to a field, although the tropical semiring is also a common choice for graph shortest path problems. [13] Even in the case of matrices over fields, the product is not commutative in general, although it is associative and is distributive over matrix addition. The identity matrices (which are the square matrices whose entries are zero outside of the main diagonal and 1 on the main diagonal) are identity elements of the matrix product. It follows that the n × n matrices over a ring form a ring, which is noncommutative except if n = 1 and the ground ring is commutative.

A square matrix may have a multiplicative inverse, called an inverse matrix. In the common case where the entries belong to a commutative ring R , a matrix has an inverse if and only if its determinant has a multiplicative inverse in R . The determinant of a product of square matrices is the product of the determinants of the factors. The n × n matrices that have an inverse form a group under matrix multiplication, the subgroups of which are called matrix groups. Many classical groups (including all finite groups) are isomorphic to matrix groups; this is the starting point of the theory of group representations.

Rather surprisingly, this complexity is not optimal, as shown in 1969 by Volker Strassen, who provided an algorithm, now called Strassen's algorithm, with a complexity of O(n^(log2 7)) ≈ O(n^2.8074). [14] As of December 2020, the best matrix multiplication algorithm is by Josh Alman and Virginia Vassilevska Williams and has complexity O(n^2.3728596). [15] It is not known whether matrix multiplication can be performed in O(n^(2 + o(1))) time. This would be optimal, since one must read the n^2 elements of a matrix in order to multiply it with another matrix.

Since matrix multiplication forms the basis for many algorithms, and many operations on matrices even have the same complexity as matrix multiplication (up to a multiplicative constant), the computational complexity of matrix multiplication appears throughout numerical linear algebra and theoretical computer science.


Happel, D.: Triangulated categories in the representation theory of finite-dimensional algebras. Volume 119 of London Mathematical Society Lecture Note Series. Cambridge University Press, Cambridge (1988)

Hashimoto, M.: Auslander–Buchweitz approximations of equivariant modules. Volume 282 of London Mathematical Society Lecture Note Series. Cambridge University Press, Cambridge (2000)

Holm, H.: Gorenstein homological dimensions. J. Pure Appl. Algebra 189(1–3), 167–193 (2004)

Holm, H.: The structure of balanced big Cohen–Macaulay modules over Cohen-Macaulay rings. Glasg. Math. J., In: press (2016)

Holm, H., Jørgensen, P.: Covers, precovers, and purity. Illinois J. Math. 52(2), 691–703 (2008)

Holm, H., Jørgensen, P.: Cotorsion pairs induced by duality pairs. J. Commut. Algebra 1(4), 621–633 (2009)

Hovey, M.: Model categories, Volume 63 of Mathematical Surveys and Monographs. American Mathematical Society, Providence, RI (1999)

Hovey, M.: Cotorsion pairs, model category structures, and representation theory. Math. Z. 241(3), 553–592 (2002)

Krause, H.: Krull–Schmidt categories and projective covers. Expo. Math. 33(4), 535–549 (2015)

Krause, H., Solberg, Ø.: Applications of cotorsion pairs. J. Lond. Math. Soc. (2) 68(3), 631–650 (2003)

Marcos, E.N., Mendoza, O., Sáenz, C., Santiago, V.: Wide subcategories of finitely generated () -modules. J. Algebra Appl. https://doi.org/10.1142/S0219498818500822 (2018)

Matsumura, H.: Commutative ring theory. Volume 8 of Cambridge Studies in Advanced Mathematics. Cambridge University Press, Cambridge, second edition (1989). Translated from the Japanese by M. Reid

Mendoza, O., Sáenz, C.: Tilting categories with applications to stratifying systems. J. Algebra 302(1), 419–449 (2006)

Murfet, D., Salarian, S.: Totally acyclic complexes over noetherian schemes. Adv. Math. 226(2), 1096–1133 (2011)

Quillen, D.G.: Homotopical algebra. Lecture Notes in Mathematics, No. 43. Springer, Berlin-New York (1967)

Quillen, D.G.: Higher Algebraic (K) -Theory. I. pp. 85–147. Lecture Notes in Math., vol. 341 (1973)

Reiten, I.: Tilting theory and homologically finite subcategories with applications to quasihereditary algebras. In: Handbook of Tilting Theory, volume 332 of London Math. Soc. Lecture Note Ser., pp. 179–214. Cambridge Univ. Press, Cambridge (2007)

Salce, L.: Cotorsion theories for abelian groups. In Symposia Mathematica, Vol. XXIII (Conf. Abelian Groups and their Relationship to the Theory of Modules, INDAM, Rome, 1977), pp. 11–32. Academic Press, London-New York (1979)

Sather-Wagstaff, S., Sharif, T., White, D.: Stability of Gorenstein categories. J. Lond. Math. Soc., II. Ser. 77(2), 481–502 (2008)

Sieg, D.: A Homological Approach to the Splitting Theory of PLS-spaces. PhD thesis, Universität Trier, Universitätsring 15, 54296 Trier (2010)

Stenström, B.: Coherent rings and (F, P) -injective modules. J. Lond. Math. Soc. 2(2), 323–329 (1970)

Verdier, J.-L.: Des Catégories Dérivées des Catégories Abéliennes. Paris: Société Mathématique de France (1996)

Zhang, P.: A brief introduction to Gorenstein projective modules. www.math.uni-bielefeld.de/sek/sem/abs/zhangpu4.pdf