While the study of linear transformations from one vector space to another is important, the central problem of linear algebra is to understand the structure of a linear transformation (T : V \to V) from a space (V) to itself. Such transformations are called **linear operators**. If (T : V \to V) is a linear operator where (\func{dim}(V) = n), it is possible to choose bases (B) and (D) of (V) such that the matrix (M_{DB}(T)) has a very simple form: (M_{DB}(T) = \leftB \begin{array}{cc} I_r & 0 \\ 0 & 0 \end{array} \rightB) where (r = \func{rank } T) (see Example [exa:028178]). We begin this task in this section.

### The B-matrix of an Operator

(\bm{B})-matrix (M_{B}(T)) of an operator (T : V \to V)028619 If (T : V \to V) is an operator on a vector space (V), and if (B) is an ordered basis of (V), define (M_{B}(T) = M_{BB}(T)) and call this the (\bm{B})-**matrix** of (T).

Recall that if (T : \RR^n \to \RR^n) is a linear operator and (E = \{\vect{e}_{1}, \vect{e}_{2}, \dots, \vect{e}_{n}\}) is the standard basis of (\RR^n), then (C_{E}(\vect{x}) = \vect{x}) for every (\vect{x} \in \RR^n), so (M_{E}(T) = \leftB T(\vect{e}_{1}), T(\vect{e}_{2}), \dots, T(\vect{e}_{n}) \rightB) is the matrix obtained in Theorem [thm:005789]. Hence (M_{E}(T)) will be called the **standard matrix** of the operator (T).

For reference, the following theorem collects some results from Theorem [thm:027955], Theorem [thm:028067], and Theorem [thm:028086], specialized for operators. As before, (C_{B}(\vect{v})) denotes the coordinate vector of (\vect{v}) with respect to the basis (B).

028640 Let (T : V \to V) be an operator where (\func{dim } V = n), and let (B) be an ordered basis of (V).

- (C_{B}(T(\vect{v})) = M_{B}(T)C_{B}(\vect{v})) for all (\vect{v}) in (V).
- If (S : V \to V) is another operator on (V), then (M_{B}(ST) = M_{B}(S)M_{B}(T)).
- (T) is an isomorphism if and only if (M_{B}(T)) is invertible. In this case (M_{D}(T)) is invertible for every ordered basis (D) of (V).
- If (T) is an isomorphism, then (M_{B}(T^{-1}) = [M_{B}(T)]^{-1}).
- If (B = \{\vect{b}_{1}, \vect{b}_{2}, \dots, \vect{b}_{n}\}), then (M_B(T) = \leftB \begin{array}{cccc} C_B[T(\vect{b}_1)] & C_B[T(\vect{b}_2)] & \cdots & C_B[T(\vect{b}_n)] \end{array} \rightB).

For a fixed operator (T) on a vector space (V), we are going to study how the matrix (M_{B}(T)) changes when the basis (B) changes. This turns out to be closely related to how the coordinates (C_{B}(\vect{v})) change for a vector (\vect{v}) in (V). If (B) and (D) are two ordered bases of (V), and if we take (T = 1_{V}) in Theorem [thm:027955], we obtain [C_D(\vect{v}) = M_{DB}(1_V)C_B(\vect{v}) \quad \mbox{ for all } \vect{v} \mbox{ in } V]

Change Matrix (P_{D \gets B}) for bases (B) and (D)028675 With this in mind, define the **change matrix** (P_{D \gets B}) by [P_{D \gets B} = M_{DB}(1_V) \quad \mbox{ for any ordered bases } B \mbox{ and } D \mbox{ of } V]

This proves equation [eqn:thm_9_2_2_b] in the following theorem:

028683 Let (B = \{\vect{b}_{1}, \vect{b}_{2}, \dots, \vect{b}_{n}\}) and (D) denote ordered bases of a vector space (V). Then the change matrix (P_{D \gets B}) is given in terms of its columns by [\label{eqn:thm_9_2_2_a} P_{D \gets B} = \leftB \begin{array}{cccc} C_D(\vect{b}_1) & C_D(\vect{b}_2) & \cdots & C_D(\vect{b}_n) \end{array} \rightB] and has the property that [\label{eqn:thm_9_2_2_b} C_D(\vect{v}) = P_{D \gets B}C_B(\vect{v}) \mbox{ for all } \vect{v} \mbox{ in } V] Moreover, if (E) is another ordered basis of (V), we have

- (P_{B \gets B} = I_{n})
- (P_{D \gets B}) is invertible and ((P_{D \gets B})^{-1} = P_{B \gets D})
- (P_{E \gets D}P_{D \gets B} = P_{E \gets B})

The formula [eqn:thm_9_2_2_b] is derived above, and [eqn:thm_9_2_2_a] is immediate from the definition of (P_{D \gets B}) and the formula for (M_{DB}(T)) in Theorem [thm:027955].

- (P_{B \gets B} = M_{BB}(1_{V}) = I_{n}) as is easily verified.
- This follows from (1) and (3).
- Let (V \stackrel{T}{\to} W \stackrel{S}{\to} U) be operators, and let (B), (D), and (E) be ordered bases of (V), (W), and (U) respectively. We have (M_{EB}(ST) = M_{ED}(S)M_{DB}(T)) by Theorem [thm:028067]. Now (3) is the result of specializing (V = W = U) and (T = S = 1_{V}).

Property (3) in Theorem [thm:028683] explains the notation (P_{D \gets B}).

028754 In (\vectspace{P}_{2}) find (P_{D \gets B}) if (B = \{1, x, x^{2}\}) and (D = \{1, (1 - x), (1 - x)^{2}\}). Then use this to express (p = p(x) = a + bx + cx^{2}) as a polynomial in powers of ((1 - x)).

To compute the change matrix (P_{D \gets B}), express (1, x, x^{2}) in the basis (D): [\begin{aligned} 1 & = 1 + 0(1 - x) + 0(1 - x)^2 \\ x & = 1 - 1(1 - x) + 0(1 - x)^2 \\ x^2 & = 1 - 2(1 - x) + 1(1 - x)^2\end{aligned}] Hence (P_{D \gets B} = \leftB C_D(1), C_D(x), C_D(x^2) \rightB = \leftB \begin{array}{rrr} 1 & 1 & 1 \\ 0 & -1 & -2 \\ 0 & 0 & 1 \end{array} \rightB). We have (C_B(p) = \leftB \begin{array}{c} a \\ b \\ c \end{array} \rightB), so [C_D(p) = P_{D \gets B}C_B(p) = \leftB \begin{array}{rrr} 1 & 1 & 1 \\ 0 & -1 & -2 \\ 0 & 0 & 1 \end{array} \rightB \leftB \begin{array}{c} a \\ b \\ c \end{array} \rightB = \leftB \begin{array}{c} a + b + c \\ -b - 2c \\ c \end{array} \rightB] Hence (p(x) = (a + b + c) - (b + 2c)(1 - x) + c(1 - x)^{2}) by Definition [def:027894].
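The change matrix computed above can be checked numerically. The following sketch (not part of the text; the sample coefficients are arbitrary) multiplies (C_B(p)) by (P_{D \gets B}) and confirms that both expansions give the same polynomial values:

```python
import numpy as np

# Change matrix P_{D<-B} from the example: columns are C_D(1), C_D(x), C_D(x^2)
P = np.array([[1, 1, 1],
              [0, -1, -2],
              [0, 0, 1]])

# Coordinates of p(x) = a + bx + cx^2 in the basis B = {1, x, x^2}
a, b, c = 2.0, -3.0, 5.0
C_B = np.array([a, b, c])

# C_D(p) = P_{D<-B} C_B(p) gives the coefficients in powers of (1 - x)
C_D = P @ C_B   # algebraically: [a + b + c, -b - 2c, c]

# Verify by evaluating both expansions at a sample point x
x = 0.7
p_B = a + b * x + c * x**2
p_D = C_D[0] + C_D[1] * (1 - x) + C_D[2] * (1 - x)**2
assert abs(p_B - p_D) < 1e-12
```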

Now let (B = \{\vect{b}_{1}, \vect{b}_{2}, \dots, \vect{b}_{n}\}) and (B_{0}) be two ordered bases of a vector space (V). An operator (T : V \to V) has different matrices (M_{B}(T)) and (M_{B_0}(T)) with respect to (B) and (B_{0}). We can now determine how these matrices are related. Theorem [thm:028683] asserts that [C_{B_0}(\vect{v}) = P_{B_0 \gets B}C_B(\vect{v}) \mbox{ for all } \vect{v} \mbox{ in } V] On the other hand, Theorem [thm:028640] gives [C_B[T(\vect{v})] = M_B(T)C_B(\vect{v}) \mbox{ for all } \vect{v} \mbox{ in } V] Combining these (and writing (P = P_{B_{0} \gets B}) for convenience) gives [\begin{aligned} PM_B(T)C_B(\vect{v}) & = PC_B[T(\vect{v})] \\ & = C_{B_0}[T(\vect{v})] \\ & = M_{B_0}(T)C_{B_0}(\vect{v}) \\ & = M_{B_0}(T)PC_B(\vect{v})\end{aligned}] This holds for all (\vect{v}) in (V). Because (C_{B}(\vect{b}_{j})) is the (j)th column of the identity matrix, it follows that [PM_B(T) = M_{B_0}(T)P] Moreover (P) is invertible (in fact, (P^{-1} = P_{B \gets B_0}) by Theorem [thm:028683]), so this gives [M_B(T) = P^{-1}M_{B_0}(T)P] This asserts that (M_{B_0}(T)) and (M_{B}(T)) are similar matrices, and proves Theorem [thm:028802].

Similarity Theorem028802 Let (B_{0}) and (B) be two ordered bases of a finite dimensional vector space (V). If (T : V \to V) is any linear operator, the matrices (M_{B}(T)) and (M_{B_0}(T)) of (T) with respect to these bases are similar. More precisely, [M_B(T) = P^{-1}M_{B_0}(T)P] where (P = P_{B_0 \gets B}) is the change matrix from (B) to (B_{0}).

028812 Let (T : \RR^3 \to \RR^3) be defined by (T(a, b, c) = (2a - b, b + c, c - 3a)). If (B_{0}) denotes the standard basis of (\RR^3) and (B = \{(1, 1, 0), (1, 0, 1), (0, 1, 0)\}), find an invertible matrix (P) such that (P^{-1}M_{B_0}(T)P = M_B(T)).

We have [\begin{gathered} M_{B_0}(T) = \leftB \begin{array}{ccc} C_{B_0}(2, 0, -3) & C_{B_0}(-1, 1, 0) & C_{B_0}(0, 1, 1) \end{array} \rightB = \leftB \begin{array}{rrr} 2 & -1 & 0 \\ 0 & 1 & 1 \\ -3 & 0 & 1 \end{array} \rightB \\ M_{B}(T) = \leftB \begin{array}{ccc} C_{B}(1, 1, -3) & C_{B}(2, 1, -2) & C_{B}(-1, 1, 0) \end{array} \rightB = \leftB \begin{array}{rrr} 4 & 4 & -1 \\ -3 & -2 & 0 \\ -3 & -3 & 2 \end{array} \rightB \\ P = P_{B_0 \gets B} = \leftB \begin{array}{ccc} C_{B_0}(1, 1, 0) & C_{B_0}(1, 0, 1) & C_{B_0}(0, 1, 0) \end{array} \rightB = \leftB \begin{array}{rrr} 1 & 1 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 0 \end{array} \rightB\end{gathered}] The reader can verify that (P^{-1}M_{B_0}(T)P = M_B(T)); equivalently that (M_{B_0}(T)P = PM_B(T)).
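The verification left to the reader can be done in a few lines of numpy; this sketch (not part of the text) checks the equivalent identity (M_{B_0}(T)P = PM_B(T)), which avoids computing an inverse:

```python
import numpy as np

M_B0 = np.array([[2, -1, 0],
                 [0, 1, 1],
                 [-3, 0, 1]])   # standard matrix of T
M_B = np.array([[4, 4, -1],
                [-3, -2, 0],
                [-3, -3, 2]])   # matrix of T in the basis B
P = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 0]])       # change matrix P_{B0 <- B}

# Similarity theorem: M_B(T) = P^{-1} M_{B0}(T) P, equivalently M_{B0}(T) P = P M_B(T)
assert np.allclose(M_B0 @ P, P @ M_B)
```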

A square matrix is diagonalizable if and only if it is similar to a diagonal matrix. Theorem [thm:028802] comes into this as follows: Suppose an (n \times n) matrix (A = M_{B_0}(T)) is the matrix of some operator (T : V \to V) with respect to an ordered basis (B_{0}). If another ordered basis (B) of (V) can be found such that (M_{B}(T) = D) is diagonal, then Theorem [thm:028802] shows how to find an invertible (P) such that (P^{-1}AP = D). In other words, the “algebraic” problem of finding (P) such that (P^{-1}AP) is diagonal comes down to the “geometric” problem of finding a basis (B) such that (M_{B}(T)) is diagonal. This shift of emphasis is one of the most important techniques in linear algebra.

Each (n \times n) matrix (A) can easily be realized as the matrix of an operator. In fact (Example [exa:028025]), [M_E(T_A) = A] where (T_{A} : \RR^n \to \RR^n) is the matrix operator given by (T_{A}(\vect{x}) = A\vect{x}), and (E) is the standard basis of (\RR^n). The first part of the next theorem gives the converse of Theorem [thm:028802]: any pair of similar matrices can be realized as the matrices of the same linear operator with respect to different bases.

028841 Let (A) be an (n \times n) matrix and let (E) be the standard basis of (\RR^n).

- Let (A^{\prime}) be similar to (A), say (A^{\prime} = P^{-1}AP), and let (B) be the ordered basis of (\RR^n) consisting of the columns of (P) in order. Then (T_{A} : \RR^n \to \RR^n) is linear and [M_E(T_A) = A \quad \mbox{ and } \quad M_B(T_A) = A^{\prime}]
- If (B) is any ordered basis of (\RR^n), let (P) be the (invertible) matrix whose columns are the vectors in (B) in order. Then [M_B(T_A) = P^{-1}AP]

- We have (M_{E}(T_{A}) = A) by Example [exa:028025]. Write (P = \leftB \begin{array}{ccc} \vect{b}_{1} & \cdots & \vect{b}_{n} \end{array} \rightB) in terms of its columns, so (B = \{\vect{b}_{1}, \dots, \vect{b}_{n}\}) is a basis of (\RR^n). Since (E) is the standard basis, [P_{E \gets B} = \leftB \begin{array}{ccc} C_E(\vect{b}_1) & \cdots & C_E(\vect{b}_n) \end{array} \rightB = \leftB \begin{array}{ccc} \vect{b}_1 & \cdots & \vect{b}_n \end{array} \rightB = P] Hence Theorem [thm:028802] (with (B_{0} = E)) gives (M_{B}(T_{A}) = P^{-1}M_{E}(T_{A})P = P^{-1}AP = A^{\prime}).
- Here (P) and (B) are as above, so again (P_{E \gets B} = P) and (M_{B}(T_{A}) = P^{-1}AP).

028887 Given (A = \leftB \begin{array}{rr} 10 & 6 \\ -18 & -11 \end{array} \rightB), (P = \leftB \begin{array}{rr} 2 & -1 \\ -3 & 2 \end{array} \rightB), and (D = \leftB \begin{array}{rr} 1 & 0 \\ 0 & -2 \end{array} \rightB), verify that (P^{-1}AP = D) and use this fact to find a basis (B) of (\RR^2) such that (M_{B}(T_{A}) = D).

(P^{-1}AP = D) holds if and only if (AP = PD) (because (P) is invertible); this verification is left to the reader. Let (B) consist of the columns of (P) in order, that is (B = \left\{ \leftB \begin{array}{r} 2 \\ -3 \end{array} \rightB, \leftB \begin{array}{r} -1 \\ 2 \end{array} \rightB \right\}). Then Theorem [thm:028841] gives (M_{B}(T_{A}) = P^{-1}AP = D). More explicitly, [M_B(T_A) = \leftB C_B \left( T_A \leftB \begin{array}{r} 2 \\ -3 \end{array} \rightB \right) \quad C_B \left( T_A \leftB \begin{array}{r} -1 \\ 2 \end{array} \rightB \right) \rightB = \leftB C_B \leftB \begin{array}{r} 2 \\ -3 \end{array} \rightB \quad C_B \leftB \begin{array}{r} 2 \\ -4 \end{array} \rightB \rightB = \leftB \begin{array}{rr} 1 & 0 \\ 0 & -2 \end{array} \rightB = D]
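The verification (AP = PD), and the fact that (T_A) scales each basis vector in (B) by the corresponding diagonal entry, can be confirmed numerically (a quick check, not part of the text):

```python
import numpy as np

A = np.array([[10, 6], [-18, -11]])
P = np.array([[2, -1], [-3, 2]])
D = np.array([[1, 0], [0, -2]])

# verify AP = PD (equivalent to P^{-1}AP = D since P is invertible)
assert np.allclose(A @ P, P @ D)

# the columns of P are the basis B; T_A scales them by the diagonal entries of D
b1, b2 = P[:, 0], P[:, 1]
assert np.allclose(A @ b1, 1 * b1)
assert np.allclose(A @ b2, -2 * b2)
```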

Let (A) be an (n \times n) matrix. As in Example [exa:028887], Theorem [thm:028841] provides a new way to find an invertible matrix (P) such that (P^{-1}AP) is diagonal. The idea is to find a basis (B = \{\vect{b}_{1}, \vect{b}_{2}, \dots, \vect{b}_{n}\}) of (\RR^n) such that (M_{B}(T_{A}) = D) is diagonal and take (P = \leftB \begin{array}{cccc} \vect{b}_{1} & \vect{b}_{2} & \cdots & \vect{b}_{n} \end{array} \rightB) to be the matrix with the (\vect{b}_{j}) as columns. Then, by Theorem [thm:028841], [P^{-1}AP = M_B(T_A) = D] As mentioned above, this converts the algebraic problem of diagonalizing (A) into the geometric problem of finding the basis (B). This new point of view is very powerful and will be explored in the next two sections.
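Finding such a basis (B) is exactly an eigenvector computation, which is treated in later sections; as a preview, here is a minimal numpy sketch (using the matrix of Example [exa:028887]; `np.linalg.eig` returns the eigenvalues together with a matrix whose columns serve as the (\vect{b}_j)):

```python
import numpy as np

A = np.array([[10., 6.], [-18., -11.]])

# np.linalg.eig returns eigenvalues and a matrix whose columns are eigenvectors;
# those columns form the basis B, so P is the matrix with the b_j as columns
eigvals, P = np.linalg.eig(A)

D = np.linalg.inv(P) @ A @ P          # = M_B(T_A), should be diagonal
assert np.allclose(D, np.diag(eigvals))
```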

Theorem [thm:028841] enables facts about matrices to be deduced from the corresponding properties of operators. Here is an example.

028921

- If (T : V \to V) is an operator where (V) is finite dimensional, show that (TST = T) for some invertible operator (S : V \to V).
- If (A) is an (n \times n) matrix, show that (AUA = A) for some invertible matrix (U).

- Let (B = \{\vect{b}_{1}, \dots, \vect{b}_{r}, \vect{b}_{r+1}, \dots, \vect{b}_{n}\}) be a basis of (V) chosen so that (\func{ker } T = \func{span}\{\vect{b}_{r+1}, \dots, \vect{b}_{n}\}). Then (\{T(\vect{b}_{1}), \dots, T(\vect{b}_{r})\}) is independent (Theorem [thm:021572]), so complete it to a basis (\{T(\vect{b}_{1}), \dots, T(\vect{b}_{r}), \vect{f}_{r+1}, \dots, \vect{f}_{n}\}) of (V). By Theorem [thm:020916], define (S : V \to V) by [\begin{aligned} S[T(\vect{b}_i)] & = \vect{b}_i \quad \mbox{ for } 1 \leq i \leq r \\ S(\vect{f}_j) & = \vect{b}_j \quad \mbox{ for } r < j \leq n\end{aligned}] Then (S) is an isomorphism by Theorem [thm:022044], and (TST = T) because these operators agree on the basis (B). In fact, [\begin{gathered} (TST)(\vect{b}_i) = T[ST(\vect{b}_i)] = T(\vect{b}_i) \mbox{ if } 1 \leq i \leq r \mbox{, and} \\ (TST)(\vect{b}_j) = TS[T(\vect{b}_j)] = TS(\vect{0}) = \vect{0} = T(\vect{b}_j) \mbox{ for } r < j \leq n\end{gathered}]

- Given (A), let (T = T_{A} : \RR^n \to \RR^n). By (1), let (TST = T) where (S : \RR^n \to \RR^n) is an isomorphism. If (E) is the standard basis of (\RR^n), then (A = M_{E}(T)) by Theorem [thm:028841]. If (U = M_{E}(S)) then, by Theorem [thm:028640], (U) is invertible and [AUA = M_E(T)M_E(S)M_E(T) = M_E(TST) = M_E(T) = A] as required.

The reader will appreciate the power of these methods by trying to find (U) directly in part 2 of Example [exa:028921], even if (A) is (2 \times 2).

A property of (n \times n) matrices is called a **similarity invariant** if, whenever a given (n \times n) matrix (A) has the property, every matrix similar to (A) also has the property. Theorem [thm:016008] shows that (\func{rank}), determinant, trace, and characteristic polynomial are all similarity invariants.

To illustrate how such similarity invariants are related to linear operators, consider the case of (\func{rank}). If (T : V \to V) is a linear operator, the matrices of (T) with respect to various bases of (V) all have the same (\func{rank}) (being similar), so it is natural to regard the common (\func{rank}) of all these matrices as a property of (T) itself and not of the particular matrix used to describe (T). Hence the (\func{rank}) of (T) could be *defined* to be the (\func{rank}) of (A), where (A) is *any* matrix of (T). This would be unambiguous because (\func{rank}) is a similarity invariant. Of course, this is unnecessary in the case of (\func{rank}) because (\func{rank } T) was defined earlier to be the dimension of (\func{im } T), and this was *proved* to equal the (\func{rank}) of every matrix representing (T) (Theorem [thm:028139]). This definition of (\func{rank } T) is said to be *intrinsic* because it makes no reference to the matrices representing (T). However, the technique serves to identify an intrinsic property of (T) with *every* similarity invariant, and some of these properties are not so easily defined directly.

In particular, if (T : V \to V) is a linear operator on a finite dimensional space (V), define the **determinant** of (T) (denoted (\func{det } T)) by [\func{det } T = \func{det } M_B(T), \quad B \mbox{ any basis of } V] This is independent of the choice of basis (B) because, if (D) is any other basis of (V), the matrices (M_{B}(T)) and (M_{D}(T)) are similar and so have the same determinant. In the same way, the **trace** of (T) (denoted (\func{tr } T)) can be defined by [\func{tr } T = \func{tr } M_B(T), \quad B \mbox{ any basis of } V] This is unambiguous for the same reason.
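The basis-independence of these two definitions is easy to probe numerically. The following sketch (the random change of basis and the seed are arbitrary choices, not from the text) conjugates a matrix by a random invertible (P) and checks that determinant and trace survive:

```python
import numpy as np

rng = np.random.default_rng(0)

A = np.array([[0., 1., 1.],
              [1., 0., 1.],
              [1., 1., 0.]])        # matrix of an operator in one basis

# change to a "random" basis: a random real matrix is invertible with probability 1
P = rng.standard_normal((3, 3))
A_new = np.linalg.inv(P) @ A @ P    # matrix of the same operator in the new basis

# determinant and trace are similarity invariants
assert np.isclose(np.linalg.det(A_new), np.linalg.det(A))
assert np.isclose(np.trace(A_new), np.trace(A))
```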

Theorems about matrices can often be translated to theorems about linear operators. Here is an example.

028977 Let (S) and (T) denote linear operators on the finite dimensional space (V). Show that [\func{det }(ST) = \func{det } S \func{det } T]

Choose a basis (B) of (V) and use Theorem [thm:028640]: [\begin{aligned} \func{det }(ST) = \func{det } M_B(ST) & = \func{det } [M_B(S)M_B(T)] \\ & = \func{det } [M_B(S)] \func{det } [M_B(T)] = \func{det } S \func{det } T\end{aligned}]

Recall next that the characteristic polynomial of a matrix is another similarity invariant: If (A) and (A^{\prime}) are similar matrices, then (c_{A}(x) = c_{A^{\prime}}(x)) (Theorem [thm:016008]). As discussed above, the discovery of a similarity invariant means the discovery of a property of linear operators. In this case, if (T : V \to V) is a linear operator on the finite dimensional space (V), define the **characteristic polynomial** of (T) by [c_T(x) = c_A(x) \mbox{ where } A = M_B(T) \mbox{, } B \mbox{ any basis of } V] In other words, the characteristic polynomial of an operator (T) is the characteristic polynomial of *any* matrix representing (T). This is unambiguous because any two such matrices are similar by Theorem [thm:028802].

028991 Compute the characteristic polynomial (c_{T}(x)) of the operator (T : \vectspace{P}_{2} \to \vectspace{P}_{2}) given by (T(a + bx + cx^{2}) = (b + c) + (a + c)x + (a + b)x^{2}).

If (B = \{1, x, x^{2}\}), the corresponding matrix of (T) is [M_B(T) = \leftB \begin{array}{ccc} C_B[T(1)] & C_B[T(x)] & C_B[T(x^2)] \end{array} \rightB = \leftB \begin{array}{rrr} 0 & 1 & 1 \\ 1 & 0 & 1 \\ 1 & 1 & 0 \end{array} \rightB] Hence (c_{T}(x) = \func{det}[xI - M_{B}(T)] = x^{3} - 3x - 2 = (x + 1)^{2}(x - 2)).
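The characteristic polynomial and its factorization can be confirmed with numpy (a sanity check, not part of the text; `np.poly` returns the coefficients of (\det(xI - M)), highest degree first):

```python
import numpy as np

M = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]])

# np.poly applied to a square matrix gives the characteristic polynomial coefficients
coeffs = np.poly(M)                 # x^3 + 0x^2 - 3x - 2
assert np.allclose(coeffs, [1, 0, -3, -2])

# its roots are the eigenvalues: -1 (twice) and 2
assert np.allclose(np.sort(np.roots(coeffs).real), [-1, -1, 2])
```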

In Section [sec:4_4] we computed the matrix of various projections, reflections, and rotations in (\RR^3). However, the methods available then were not adequate to find the matrix of a rotation about a line through the origin. We conclude this section with an example of how Theorem [thm:028802] can be used to compute such a matrix.

029011 Let (L) be the line in (\RR^3) through the origin with (unit) direction vector (\vect{d} = \frac{1}{3}\leftB \begin{array}{ccc} 2 & 1 & 2 \end{array} \rightB^T). Compute the matrix of the rotation about (L) through an angle (\theta) measured counterclockwise when viewed in the direction of (\vect{d}).


Let (R : \RR^3 \to \RR^3) be the rotation. The idea is to first find a basis (B_{0}) for which the matrix (M_{B_0}(R)) of (R) is easy to compute, and then use Theorem [thm:028802] to compute the “standard” matrix (M_{E}(R)) with respect to the standard basis (E = \{\vect{e}_{1}, \vect{e}_{2}, \vect{e}_{3}\}) of (\RR^3).

To construct the basis (B_{0}), let (K) denote the plane through the origin with (\vect{d}) as normal, shaded in the diagram. Then the vectors (\vect{f} = \frac{1}{3}\leftB \begin{array}{ccc} -2 & 2 & 1 \end{array} \rightB^T) and (\vect{g} = \frac{1}{3}\leftB \begin{array}{ccc} 1 & 2 & -2 \end{array} \rightB^T) are both in (K) (they are orthogonal to (\vect{d})) and are independent (they are orthogonal to each other).

Hence (B_{0} = \{\vect{d}, \vect{f}, \vect{g}\}) is an orthonormal basis of (\RR^3), and the effect of (R) on (B_{0}) is easy to determine. In fact (R(\vect{d}) = \vect{d}) and (as in Theorem [thm:006021]) the second diagram gives [R(\vect{f}) = \cos\theta \vect{f} + \sin\theta \vect{g} \quad \mbox{ and } \quad R(\vect{g}) = -\sin\theta \vect{f} + \cos\theta \vect{g}]


because (\vectlength\vect{f}\vectlength = 1 = \vectlength\vect{g}\vectlength). Hence [M_{B_0}(R) = \leftB \begin{array}{ccc} C_{B_0}(\vect{d}) & C_{B_0}(\vect{f}) & C_{B_0}(\vect{g}) \end{array} \rightB = \leftB \begin{array}{ccc} 1 & 0 & 0 \\ 0 & \cos\theta & -\sin\theta \\ 0 & \sin\theta & \cos\theta \end{array} \rightB] Now Theorem [thm:028802] (with (B = E)) asserts that (M_E(R) = P^{-1}M_{B_0}(R)P) where [P = P_{B_0 \gets E} = \leftB \begin{array}{ccc} C_{B_0}(\vect{e}_1) & C_{B_0}(\vect{e}_2) & C_{B_0}(\vect{e}_3) \end{array} \rightB = \frac{1}{3} \leftB \begin{array}{rrr} 2 & 1 & 2 \\ -2 & 2 & 1 \\ 1 & 2 & -2 \end{array} \rightB] using the expansion theorem (Theorem [thm:015082]). Since (P^{-1} = P^{T}) ((P) is orthogonal), the matrix of (R) with respect to (E) is [\begin{aligned} M_E(R) & = P^TM_{B_0}(R)P \\ & = \frac{1}{9} \leftB \begin{array}{ccc} 5\cos\theta + 4 & 6\sin\theta - 2\cos\theta + 2 & 4 - 3\sin\theta - 4\cos\theta \\ 2 - 6\sin\theta - 2\cos\theta & 8\cos\theta + 1 & 6\sin\theta - 2\cos\theta + 2 \\ 3\sin\theta - 4\cos\theta + 4 & 2 - 6\sin\theta - 2\cos\theta & 5\cos\theta + 4 \end{array} \rightB\end{aligned}] As a check, one verifies that this is the identity matrix when (\theta = 0), as it should be.
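Beyond the (\theta = 0) check in the text, the final matrix can be tested against the defining properties of the rotation: it fixes the axis (\vect{d}), and it is orthogonal with determinant (1). A numerical sketch (the test angle is arbitrary):

```python
import numpy as np

def M_E_R(t):
    """Standard matrix of the rotation in the example (angle t about d = (2, 1, 2)/3)."""
    c, s = np.cos(t), np.sin(t)
    return (1 / 9) * np.array([
        [5*c + 4,         6*s - 2*c + 2,  4 - 3*s - 4*c],
        [2 - 6*s - 2*c,   8*c + 1,        6*s - 2*c + 2],
        [3*s - 4*c + 4,   2 - 6*s - 2*c,  5*c + 4],
    ])

t = 0.83                      # arbitrary test angle
R = M_E_R(t)
d = np.array([2, 1, 2]) / 3   # unit direction vector of the axis L

assert np.allclose(M_E_R(0.0), np.eye(3))   # theta = 0 gives the identity
assert np.allclose(R @ d, d)                # the axis is fixed
assert np.allclose(R.T @ R, np.eye(3))      # R is orthogonal
assert np.isclose(np.linalg.det(R), 1.0)    # proper rotation, not a reflection
```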

Note that in Example [exa:029011] not much motivation was given for the choice of the (orthonormal) vectors (\vect{f}) and (\vect{g}) in the basis (B_{0}), which is the key to the solution. However, if we begin with *any* basis containing (\vect{d}), the Gram-Schmidt algorithm will produce an orthogonal basis containing (\vect{d}), and the other two vectors will automatically be in (L^{\perp} = K).

## Exercises for 9.2


In each case find (P_{D \gets B}), where (B) and (D) are ordered bases of (V). Then verify that (C_{D}(\vect{v}) = P_{D \gets B}C_{B}(\vect{v})).

- (V = \RR^2), (B = \{(0, -1), (2, 1)\}), (D = \{(0, 1), (1, 1)\}), (\vect{v} = (3, -5))
- (V = \vectspace{P}_2), (B = \{x, 1 + x, x^2\}), (D = \{2, x + 3, x^2 - 1\}), (\vect{v} = 1 + x + x^2)
- (V = \vectspace{M}_{22}), (B = \left\{ \leftB \begin{array}{rr} 1 & 0 \\ 0 & 0 \end{array} \rightB, \leftB \begin{array}{rr} 0 & 1 \\ 0 & 0 \end{array} \rightB, \leftB \begin{array}{rr} 0 & 0 \\ 0 & 1 \end{array} \rightB, \leftB \begin{array}{rr} 0 & 0 \\ 1 & 0 \end{array} \rightB \right\}), (D = \left\{ \leftB \begin{array}{rr} 1 & 1 \\ 0 & 0 \end{array} \rightB, \leftB \begin{array}{rr} 1 & 0 \\ 1 & 0 \end{array} \rightB, \leftB \begin{array}{rr} 1 & 0 \\ 0 & 1 \end{array} \rightB, \leftB \begin{array}{rr} 0 & 1 \\ 1 & 0 \end{array} \rightB \right\}), (\vect{v} = \leftB \begin{array}{rr} 3 & -1 \\ 1 & 4 \end{array} \rightB)

- (\frac{1}{2} \leftB \begin{array}{rrr} -3 & -2 & 1 \\ 2 & 2 & 0 \\ 0 & 0 & 2 \end{array} \rightB)

In (\RR^3) find (P_{D \gets B}), where (B = \{(1, 0, 0), (1, 1, 0), (1, 1, 1)\}) and (D = \{(1, 0, 1), (1, 0, -1), (0, 1, 0)\}). If (\vect{v} = (a, b, c)), show that (C_D(\vect{v}) = \frac{1}{2} \leftB \begin{array}{c} a + c \\ a - c \\ 2b \end{array} \rightB) and (C_B(\vect{v}) = \leftB \begin{array}{c} a - b \\ b - c \\ c \end{array} \rightB), and verify that (C_{D}(\vect{v}) = P_{D \gets B}C_{B}(\vect{v})).

In (\vectspace{P}_{3}) find (P_{D \gets B}) if (B = \{1, x, x^{2}, x^{3}\}) and (D = \{1, (1 - x), (1 - x)^{2}, (1 - x)^{3}\}). Then express (p = a + bx + cx^{2} + dx^{3}) as a polynomial in powers of ((1 - x)).

In each case verify that (P_{D \gets B}) is the inverse of (P_{B \gets D}) and that (P_{E \gets D}P_{D \gets B} = P_{E \gets B}), where (B), (D), and (E) are ordered bases of (V).

- (V = \RR^3), (B = \{(1, 1, 1), (1, -2, 1), (1, 0, -1)\}), (D =) standard basis, (E = \{(1, 1, 1), (1, -1, 0), (-1, 0, 1)\})
- (V = \vectspace{P}_2), (B = \{1, x, x^2\}), (D = \{1 + x + x^2, 1 - x, -1 + x^2\}), (E = \{x^2, x, 1\})

- (P_{B \gets D} = \leftB \begin{array}{rrr} 1 & 1 & -1 \\ 1 & -1 & 0 \\ 1 & 0 & 1 \end{array} \rightB), (P_{D \gets B} = \frac{1}{3} \leftB \begin{array}{rrr} 1 & 1 & 1 \\ 1 & -2 & 1 \\ -1 & -1 & 2 \end{array} \rightB), (P_{E \gets D} = \leftB \begin{array}{rrr} 1 & 0 & 1 \\ 1 & -1 & 0 \\ 1 & 1 & -1 \end{array} \rightB), (P_{E \gets B} = \leftB \begin{array}{rrr} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{array} \rightB)

Use property (2) of Theorem [thm:028683], with (D) the standard basis of (\RR^n), to find the inverse of:

- (A = \leftB \begin{array}{rrr} 1 & 1 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 1 \end{array} \rightB)
- (A = \leftB \begin{array}{rrr} 1 & 2 & 1 \\ 2 & 3 & 0 \\ -1 & 0 & 2 \end{array} \rightB)

- (A = P_{D \gets B}), where (B = \{(1, 2, -1), (2, 3, 0), (1, 0, 2)\}). Hence (A^{-1} = P_{B \gets D} = \leftB \begin{array}{rrr} 6 & -4 & -3 \\ -4 & 3 & 2 \\ 3 & -2 & -1 \end{array} \rightB)

Find (P_{D \gets B}) if (B = \{\vect{b}_{1}, \vect{b}_{2}, \vect{b}_{3}, \vect{b}_{4}\}) and (D = \{\vect{b}_{2}, \vect{b}_{3}, \vect{b}_{1}, \vect{b}_{4}\}). Change matrices arising when the bases differ only in the *order* of the vectors are called **permutation matrices**.

In each case, find (P = P_{B_0 \gets B}) and verify that (P^{-1}M_{B_0}(T)P = M_B(T)) for the given operator (T).

- (T : \RR^3 \to \RR^3), (T(a, b, c) = (2a - b, b + c, c - 3a)); (B_{0} = \{(1, 1, 0), (1, 0, 1), (0, 1, 0)\}) and (B) is the standard basis.
- (T : \vectspace{P}_2 \to \vectspace{P}_2), (T(a + bx + cx^2) = (a + b) + (b + c)x + (c + a)x^2); (B_0 = \{1, x, x^2\}) and (B = \{1 - x^2, 1 + x, 2x + x^2\})
- (T : \vectspace{M}_{22} \to \vectspace{M}_{22}), (T\leftB \begin{array}{cc} a & b \\ c & d \end{array} \rightB = \leftB \begin{array}{cc} a + d & b + c \\ a + c & b + d \end{array} \rightB); (B_0 = \left\{ \leftB \begin{array}{rr} 1 & 0 \\ 0 & 0 \end{array} \rightB, \leftB \begin{array}{rr} 0 & 1 \\ 0 & 0 \end{array} \rightB, \leftB \begin{array}{rr} 0 & 0 \\ 1 & 0 \end{array} \rightB, \leftB \begin{array}{rr} 0 & 0 \\ 0 & 1 \end{array} \rightB \right\}), and (B = \left\{ \leftB \begin{array}{rr} 1 & 1 \\ 0 & 0 \end{array} \rightB, \leftB \begin{array}{rr} 0 & 0 \\ 1 & 1 \end{array} \rightB, \leftB \begin{array}{rr} 1 & 0 \\ 0 & 1 \end{array} \rightB, \leftB \begin{array}{rr} 0 & 1 \\ 1 & 1 \end{array} \rightB \right\})

- (P = \leftB \begin{array}{rrr} 1 & 1 & 0 \\ 0 & 1 & 2 \\ -1 & 0 & 1 \end{array} \rightB)

In each case, verify that (P^{-1}AP = D) and find a basis (B) of (\RR^2) such that (M_{B}(T_{A}) = D).

- (A = \leftB \begin{array}{rr} 11 & -6 \\ 12 & -6 \end{array} \rightB), (P = \leftB \begin{array}{rr} 2 & 3 \\ 3 & 4 \end{array} \rightB), (D = \leftB \begin{array}{rr} 2 & 0 \\ 0 & 3 \end{array} \rightB)
- (A = \leftB \begin{array}{rr} 29 & -12 \\ 70 & -29 \end{array} \rightB), (P = \leftB \begin{array}{rr} 3 & 2 \\ 7 & 5 \end{array} \rightB), (D = \leftB \begin{array}{rr} 1 & 0 \\ 0 & -1 \end{array} \rightB)

- (B = \left\{ \leftB \begin{array}{c} 3 \\ 7 \end{array} \rightB, \leftB \begin{array}{c} 2 \\ 5 \end{array} \rightB \right\})

In each case, compute the characteristic polynomial (c_{T}(x)).

- (T : \RR^2 \to \RR^2), (T(a, b) = (a - b, 2b - a))
- (T : \RR^2 \to \RR^2), (T(a, b) = (3a + 5b, 2a + 3b))
- (T : \vectspace{P}_{2} \to \vectspace{P}_{2}), (T(a + bx + cx^{2}) = (a - 2c) + (2a + b + c)x + (c - a)x^{2})
- (T : \vectspace{P}_{2} \to \vectspace{P}_{2}), (T(a + bx + cx^{2}) = (a + b - 2c) + (a - 2b + c)x + (b - 2a)x^{2})
- (T : \RR^3 \to \RR^3), (T(a, b, c) = (b, c, a))
- (T : \vectspace{M}_{22} \to \vectspace{M}_{22}), (T \leftB \begin{array}{cc} a & b \\ c & d \end{array} \rightB = \leftB \begin{array}{cc} a - c & b - d \\ a - c & b - d \end{array} \rightB)

- (c_{T}(x) = x^{2} - 6x - 1)
- (c_{T}(x) = x^{3} + x^{2} - 8x - 3)
- (c_{T}(x) = x^{4})

If (V) is finite dimensional, show that a linear operator (T) on (V) has an inverse if and only if (\func{det } T \neq 0).

Let (S) and (T) be linear operators on (V) where (V) is finite dimensional.

- Show that (\func{tr}(ST) = \func{tr}(TS)). [*Hint*: Lemma [lem:015978].]
- [See Exercise [ex:ex9_1_19].] For (a) in (\RR), show that (\func{tr}(S + T) = \func{tr } S + \func{tr } T), and (\func{tr}(aT) = a \func{tr}(T)).

If (A) and (B) are (n \times n) matrices, show that they have the same null space if and only if (A = UB) for some invertible matrix (U). [*Hint*: Exercise [ex:ex7_3_28].]

Define (T_{A} : \RR^n \to \RR^n) by (T_{A}(\vect{x}) = A\vect{x}) for all (\vect{x}) in (\RR^n). If (\func{null } A = \func{null } B), then (\func{ker}(T_{A}) = \func{null } A = \func{null } B = \func{ker}(T_{B})) so, by Exercise [ex:ex7_3_28], (T_{A} = ST_{B}) for some isomorphism (S : \RR^n \to \RR^n). If (B_{0}) is the standard basis of (\RR^n), we have (A = M_{B_0}(T_{A}) = M_{B_0}(ST_{B}) = M_{B_0}(S)M_{B_0}(T_{B}) = UB) where (U = M_{B_0}(S)) is invertible by Theorem [thm:028640]. Conversely, if (A = UB) with (U) invertible, then (A\vect{x} = \vect{0}) if and only if (B\vect{x} = \vect{0}), so (\func{null } A = \func{null } B).

If (A) and (B) are (n \times n) matrices, show that they have the same column space if and only if (A = BU) for some invertible matrix (U). [*Hint*: Exercise [ex:ex7_3_28].]

Let (E = \{\vect{e}_{1}, \dots, \vect{e}_{n}\}) be the standard ordered basis of (\RR^n), written as columns. If (D = \{\vect{d}_{1}, \dots, \vect{d}_{n}\}) is any ordered basis, show that (P_{E \gets D} = \leftB \begin{array}{ccc} \vect{d}_{1} & \cdots & \vect{d}_{n} \end{array} \rightB).

Let (B = \{\vect{b}_{1}, \vect{b}_{2}, \dots, \vect{b}_{n}\}) be any ordered basis of (\RR^n), written as columns. If (Q = \leftB \begin{array}{cccc} \vect{b}_{1} & \vect{b}_{2} & \cdots & \vect{b}_{n} \end{array} \rightB) is the matrix with the (\vect{b}_{i}) as columns, show that (QC_{B}(\vect{v}) = \vect{v}) for all (\vect{v}) in (\RR^n).

Given a complex number (w), define (T_{w} : \mathbb{C} \to \mathbb{C}) by (T_{w}(z) = wz) for all (z) in (\mathbb{C}).

- Show that (T_{w}) is a linear operator for each (w) in (\mathbb{C}), viewing (\mathbb{C}) as a real vector space.
- If (B) is any ordered basis of (\mathbb{C}), define (S : \mathbb{C} \to \vectspace{M}_{22}) by (S(w) = M_{B}(T_{w})) for all (w) in (\mathbb{C}). Show that (S) is a one-to-one linear transformation with the additional property that (S(wv) = S(w)S(v)) holds for all (w) and (v) in (\mathbb{C}).
- Taking (B = \{1, i\}), show that (S(a + bi) = \leftB \begin{array}{rr} a & -b \\ b & a \end{array} \rightB) for all complex numbers (a + bi). This is called the **regular representation** of the complex numbers as (2 \times 2) matrices. If (\theta) is any angle, describe (S(e^{i\theta})) geometrically. Show that (S(\overline{w}) = S(w)^T) for all (w) in (\mathbb{C}); that is, that conjugation corresponds to transposition.

- Showing (S(w + v) = S(w) + S(v)) means (M_{B}(T_{w+v}) = M_{B}(T_{w}) + M_{B}(T_{v})). If (B = \{b_{1}, b_{2}\}), then column (j) of (M_{B}(T_{w+v})) is (C_{B}[(w + v)b_{j}] = C_{B}(wb_{j} + vb_{j}) = C_{B}(wb_{j}) + C_{B}(vb_{j})) because (C_{B}) is linear. This is column (j) of (M_{B}(T_{w}) + M_{B}(T_{v})). Similarly (M_{B}(T_{aw}) = aM_{B}(T_{w})), so (S(aw) = aS(w)). Finally (T_{w}T_{v} = T_{wv}), so (S(wv) = M_{B}(T_{w}T_{v}) = M_{B}(T_{w})M_{B}(T_{v}) = S(w)S(v)) by Theorem [thm:028640].
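The properties of the regular representation in this exercise are easy to check concretely for (B = \{1, i\}); a small numerical sketch (the sample numbers and test angle are arbitrary):

```python
import numpy as np

def S(w: complex) -> np.ndarray:
    """Regular representation of a complex number as a 2x2 real matrix (basis B = {1, i})."""
    a, b = w.real, w.imag
    return np.array([[a, -b],
                     [b, a]])

w, v = 3 - 2j, 1 + 4j

# S is multiplicative: S(wv) = S(w) S(v)
assert np.allclose(S(w * v), S(w) @ S(v))

# conjugation corresponds to transposition: S(conj(w)) = S(w)^T
assert np.allclose(S(w.conjugate()), S(w).T)

# S(e^{i theta}) is the standard rotation matrix through theta
t = 0.6
assert np.allclose(S(np.exp(1j * t)),
                   [[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
```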

Let (B = \{\vect{b}_{1}, \vect{b}_{2}, \dots, \vect{b}_{n}\}) and (D = \{\vect{d}_{1}, \vect{d}_{2}, \dots, \vect{d}_{n}\}) be two ordered bases of a vector space (V). Prove that (C_{D}(\vect{v}) = P_{D \gets B}C_{B}(\vect{v})) holds for all (\vect{v}) in (V) as follows: Express each (\vect{b}_{j}) in the form (\vect{b}_j = p_{1j}\vect{d}_1 + p_{2j}\vect{d}_2 + \cdots + p_{nj}\vect{d}_n) and write (P = \leftB p_{ij} \rightB). Show that (P = \leftB \begin{array}{cccc} C_D(\vect{b}_1) & C_D(\vect{b}_2) & \cdots & C_D(\vect{b}_n) \end{array} \rightB) and that (C_{D}(\vect{v}) = PC_{B}(\vect{v})) for all (\vect{v}) in (V).

Find the standard matrix of the rotation (R) about the line through the origin with direction vector (\vect{d} = \leftB \begin{array}{ccc} 2 & 3 & 6 \end{array} \rightB^{T}). [*Hint*: Consider (\vect{f} = \leftB \begin{array}{ccc} 6 & 2 & -3 \end{array} \rightB^{T}) and (\vect{g} = \leftB \begin{array}{ccc} 3 & -6 & 2 \end{array} \rightB^{T}).]

## 9.2: Operators and Similarity

**pg_similarity** is an extension that adds support for similarity queries to PostgreSQL. The implementation is tightly integrated into the RDBMS in the sense that it defines operators: instead of the traditional comparison operators (= and <>), you can use similarity operators, each of which represents a similarity function.

**pg_similarity** has three main components:

- **Functions**: a set of functions that implement similarity algorithms available in the literature. These functions can be used as UDFs and are the basis for implementing the similarity operators.
- **Operators**: a set of operators defined on top of the similarity functions. They use a similarity function to obtain a similarity value and compare it to a user-defined threshold to decide whether or not it is a match.
- **Session variables**: a set of variables that store similarity function parameters. These variables can be set at run time.

**pg_similarity** is supported on the same platforms as PostgreSQL itself. The installation steps depend on your operating system.

You can also keep up with the latest fixes and features by cloning the Git repository.

UNIX-based operating systems

Before you can use the extension, you must build it and load it into the desired database.

The typical usage is to copy the sample file from the tarball (*pg_similarity.conf.sample*) to PGDATA (as *pg_similarity.conf*) and include the following line in *postgresql.conf*:

Sorry, never tried^H^H^H^H^H Actually I tried it, but it is not as easy as on UNIX. :( There are two ways to build PostgreSQL on Windows: (i) MinGW and (ii) MSVC. The former is supported but not widely used; the latter is popular because the officially distributed Windows binaries are built with MSVC. If you choose to use MinGW, just follow the UNIX instructions above to build pg_similarity. Otherwise, the MSVC steps are below:

- Edit pg_similarity.vcxproj, replacing c:\postgres\pg130 with your PostgreSQL prefix directory
- Open this project file in MS Visual Studio and build it
- Copy pg_similarity.dll to the directory reported by `pg_config --pkglibdir`
- Copy pg_similarity.control and pg_similarity--*.sql to SHAREDIR/extension (SHAREDIR is the directory reported by `pg_config --sharedir`).

This extension supports a set of similarity algorithms, covering the best-known algorithms in the literature. Be aware that each algorithm is suited to a specific domain. The following algorithms are provided:

- L1 Distance (also known as City Block or Manhattan Distance)
- Cosine Distance
- Dice Coefficient
- Euclidean Distance
- Hamming Distance
- Jaccard Coefficient
- Jaro Distance
- Jaro-Winkler Distance
- Levenshtein Distance
- Matching Coefficient
- Monge-Elkan Coefficient
- Needleman-Wunsch Coefficient
- Overlap Coefficient
- Q-Gram Distance
- Smith-Waterman Coefficient
- Smith-Waterman-Gotoh Coefficient
- Soundex Distance.
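To make the flavor of these measures concrete, here is an illustrative Python sketch of two of them operating on token sets (this is not the extension's C implementation, just the underlying formulas):

```python
# Illustrative token-set versions of two of the listed measures.

def jaccard(a, b):
    """Jaccard coefficient: |A ∩ B| / |A ∪ B|."""
    sa, sb = set(a), set(b)
    return len(sa & sb) / len(sa | sb) if (sa | sb) else 1.0

def dice(a, b):
    """Dice coefficient: 2|A ∩ B| / (|A| + |B|)."""
    sa, sb = set(a), set(b)
    return 2 * len(sa & sb) / (len(sa) + len(sb)) if (sa or sb) else 1.0

t1 = "euler taveira de oliveira".split()
t2 = "euler de oliveira".split()
print(jaccard(t1, t2))  # 0.75 (3 shared tokens, 4 in the union)
print(dice(t1, t2))     # ≈ 0.857 (2*3 / (4 + 3))
```

Set-based measures like these ignore token order, which is why the tokenizer choice (described below) matters so much in practice.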

Several parameters control the behavior of the pg_similarity functions and operators. I don't explain each parameter in detail; instead, they can be classified into three classes: **tokenizer**, **threshold**, and **normalized**.

**tokenizer**: controls how strings are tokenized. The valid values are **alnum**, **gram**, **word**, and **camelcase**. All tokens are converted to lowercase (this option can be set at compile time; see PGS_IGNORE_CASE in the source code). The default is **alnum**.

- **alnum**: delimiters are any non-alphanumeric characters; only alphabetic characters in the standard C locale and digits (0-9) are accepted in tokens. For example, the string "Euler_Taveira_de_Oliveira 22/02/2011" is tokenized as "Euler", "Taveira", "de", "Oliveira", "22", "02", "2011".
- **gram**: an n-gram is a subsequence of length n. N-grams can be extracted from a string using the sliding-by-one technique, that is, sliding a window of length n throughout the string, one character at a time. For example, the string "euler taveira" (using n = 3) is tokenized as "eul", "ule", "ler", "er ", "r t", " ta", "tav", "ave", "vei", "eir", and "ira". Some authors also add " e", " eu", "ra ", and "a " to the set of tokens; this is called full n-grams (this option can be set at compile time; see PGS_FULL_NGRAM in the source code).
- **word**: delimiters are white-space characters (space, form feed, newline, carriage return, horizontal tab, and vertical tab). For example, the string "Euler Taveira de Oliveira 22/02/2011" is tokenized as "Euler", "Taveira", "de", "Oliveira", and "22/02/2011".
- **camelcase**: delimiters are capitalized characters, but they are also included as the first character of the following token. For example, the string "EulerTaveira de Oliveira" is tokenized as "Euler", "Taveira de ", and "Oliveira".
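The sliding-by-one extraction described for **gram** can be sketched in a few lines of Python (illustrative; the extension implements this in C):

```python
# Sliding-by-one n-gram tokenizer, as described for the gram tokenizer.

def ngrams(s, n=3):
    """Slide a window of length n through s, one character at a time."""
    return [s[i:i + n] for i in range(len(s) - n + 1)]

print(ngrams("euler taveira"))
# ['eul', 'ule', 'ler', 'er ', 'r t', ' ta', 'tav', 'ave', 'vei', 'eir', 'ira']
```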

Set parameters at run time.

Simple tables for examples.

*Example 1*: Using the similarity functions **cosine**, **jaro**, and **euclidean**.

*Example 2*: Using the operator **levenshtein** (==) and changing its threshold at run time.

*Example 3*: Using the operator **qgram** and changing its threshold at run time.

*Example 4*: Using a set of operators with the same threshold (0.7) to illustrate that some similarity functions are appropriate for certain data domains.

Copyright © 2008-2020 Euler Taveira de Oliveira. All rights reserved.

Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

- Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
- Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
- Neither the name of Euler Taveira de Oliveira nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

## Major new features of the 3.9 series, compared to 3.8

Some of the major new features and changes in Python 3.9 are:

- Module State Access from C Extension Methods
- Union Operators in dict
- Type Hinting Generics In Standard Collections
- Flexible function and variable annotations
- Python adopts a stable annual release cadence
- Relaxing Grammar Restrictions On Decorators
- Support for the IANA Time Zone Database in the Standard Library
- String methods to remove prefixes and suffixes
- New PEG parser for CPython
- Garbage collection does not block on resurrected objects
- os.pidfd_open added, which allows process management without races and signals
- Unicode support updated to version 13.0.0
- When Python is initialized multiple times in the same process, it no longer leaks memory

- A number of Python builtins (range, tuple, set, frozenset, list, dict) are now sped up using PEP 590 vectorcall
- A number of Python modules (_abc, audioop, _bz2, _codecs, _contextvars, _crypt, _functools, _json, _locale, operator, resource, time, _weakref) now use multiphase initialization as defined by PEP 489
- A number of standard library modules (audioop, ast, grp, _hashlib, pwd, _posixsubprocess, random, select, struct, termios, zlib) are now using the stable ABI defined by PEP 384.
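Two of the features above can be tried directly (requires Python 3.9 or later):

```python
# Dict union operators (PEP 584) and the new string prefix/suffix
# methods (PEP 616), both introduced in Python 3.9.

defaults = {"host": "localhost", "port": 5432}
overrides = {"port": 5433}

merged = defaults | overrides  # right operand wins on conflicting keys
print(merged)                  # {'host': 'localhost', 'port': 5433}

name = "pg_similarity"
print(name.removeprefix("pg_"))           # similarity
print("report.txt".removesuffix(".txt"))  # report
```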

You can find a more comprehensive list in this release's "What's New" document.

When defining a linear transformation, it can be the case that a change of basis results in a simpler form of the same transformation. For example, the matrix representing a rotation in ℝ3 when the axis of rotation is not aligned with a coordinate axis can be complicated to compute. If the axis of rotation were aligned with the positive z-axis, then it would simply be

    S = [ cos θ   −sin θ   0 ]
        [ sin θ    cos θ   0 ]
        [ 0        0       1 ]

so that y′ = S x′, where x′ and y′ are respectively the original and transformed vectors in a new basis containing a vector parallel to the axis of rotation. In the original basis, the transform would be written as

    y = T x

where the vectors x and y and the unknown transform matrix T are in the original basis. To write T in terms of the simpler matrix S, we use the change-of-basis matrix P that transforms x and y as x′ = P x and y′ = P y; substituting into y′ = S x′ gives P y = S P x, hence y = (P⁻¹ S P) x, so T = P⁻¹ S P.
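This change-of-basis relation T = P⁻¹ S P can be checked numerically. A NumPy sketch, where S is the rotation-about-z matrix and P is an invertible change-of-basis matrix chosen for illustration:

```python
import numpy as np

theta = 0.7
# Rotation about the z-axis in the adapted basis.
S = np.array([[np.cos(theta), -np.sin(theta), 0],
              [np.sin(theta),  np.cos(theta), 0],
              [0,              0,             1]])

# An arbitrary invertible change-of-basis matrix (assumed for illustration).
P = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])

# The same operator expressed in the original basis.
T = np.linalg.inv(P) @ S @ P

x = np.array([1.0, -2.0, 0.5])
# Changing basis then transforming agrees with transforming then changing basis.
assert np.allclose(P @ (T @ x), S @ (P @ x))
# Similar matrices share the trace.
assert np.isclose(np.trace(T), np.trace(S))
```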

Similarity is an equivalence relation on the space of square matrices.

Because matrices are similar if and only if they represent the same linear operator with respect to (possibly) different bases, similar matrices share all properties of their shared underlying operator: they have the same rank, the same characteristic polynomial (and hence the same determinant, trace, and eigenvalues with the same algebraic multiplicities), and the same minimal polynomial.

Because of this, for a given matrix *A*, one is interested in finding a simple "normal form" *B* which is similar to *A*—the study of *A* then reduces to the study of the simpler matrix *B*. For example, *A* is called diagonalizable if it is similar to a diagonal matrix. Not all matrices are diagonalizable, but at least over the complex numbers (or any algebraically closed field), every matrix is similar to a matrix in Jordan form. Neither of these forms is unique (diagonal entries or Jordan blocks may be permuted), so they are not really normal forms; moreover, their determination depends on being able to factor the minimal or characteristic polynomial of *A* (equivalently, to find its eigenvalues). The rational canonical form does not have these drawbacks: it exists over any field, is truly unique, and can be computed using only arithmetic operations in the field; *A* and *B* are similar if and only if they have the same rational canonical form. The rational canonical form is determined by the elementary divisors of *A*; these can be read off immediately from a matrix in Jordan form, but they can also be determined directly for any matrix by computing the Smith normal form, over the ring of polynomials, of the matrix (with polynomial entries) *XI*_{n} − *A* (the same one whose determinant defines the characteristic polynomial). Note that this Smith normal form is not a normal form of *A* itself; moreover, it is not similar to *XI*_{n} − *A* either, but is obtained from the latter by left and right multiplications by different invertible matrices (with polynomial entries).
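For a diagonalizable matrix, the similarity *P* can be taken to be the matrix of eigenvectors. A NumPy sketch (the example matrix is assumed; it has distinct eigenvalues, so it is diagonalizable):

```python
import numpy as np

# A 2x2 matrix with distinct eigenvalues (5 and 2), hence diagonalizable.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

eigvals, V = np.linalg.eig(A)  # columns of V are eigenvectors
D = np.diag(eigvals)

# A = V D V^{-1}, i.e. A is similar to the diagonal matrix D.
assert np.allclose(A, V @ D @ np.linalg.inv(V))
# Similar matrices share trace and determinant.
assert np.isclose(np.trace(A), np.trace(D))
assert np.isclose(np.linalg.det(A), np.linalg.det(D))
```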

Similarity of matrices does not depend on the base field: if *L* is a field containing *K* as a subfield, and *A* and *B* are two matrices over *K*, then *A* and *B* are similar as matrices over *K* if and only if they are similar as matrices over *L*. This is so because the rational canonical form over *K* is also the rational canonical form over *L*. This means that one may use Jordan forms that only exist over a larger field to determine whether the given matrices are similar.

In the definition of similarity, if the matrix *P* can be chosen to be a permutation matrix, then *A* and *B* are **permutation-similar**; if *P* can be chosen to be a unitary matrix, then *A* and *B* are **unitarily equivalent**. The spectral theorem says that every normal matrix is unitarily equivalent to some diagonal matrix. Specht's theorem states that two matrices are unitarily equivalent if and only if they satisfy certain trace equalities.


This section describes the Overlap Similarity algorithm in the Neo4j Labs Graph Algorithms library.

This is documentation for the Graph Algorithms Library, which has been deprecated by the Graph Data Science Library (GDS).

Overlap similarity measures overlap between two sets. It is defined as the size of the intersection of two sets, divided by the size of the smaller of the two sets.

The Overlap Similarity algorithm was developed by the Neo4j Labs team and is not officially supported.

#### 9.5.5.1. History and explanation

Overlap similarity is computed using the following formula:

    O(A, B) = |A ∩ B| / min(|A|, |B|)

The library contains both procedures and functions to calculate similarity between sets of data. The function is best used when calculating the similarity between small numbers of sets. The procedures parallelize the computation, and are therefore more appropriate for computing similarities on bigger datasets.

#### 9.5.5.2. Use-cases - when to use the Overlap Similarity algorithm

We can use the Overlap Similarity algorithm to work out which things are subsets of others. We might then use these computed subsets to learn a taxonomy from tagged data, as described by Jesús Barrasa.

#### 9.5.5.3. Overlap Similarity algorithm function sample

The following will return the Overlap similarity of two lists of numbers:

These two lists of numbers have an overlap similarity of 0.66. We can see how this result is derived by breaking down the formula:
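A Python sketch of the same computation (the two input lists are assumed for illustration; they reproduce the 0.66 above, since the intersection {1, 2} has size 2 and the smaller set has size 3):

```python
def overlap(a, b):
    """Overlap coefficient: |A ∩ B| / min(|A|, |B|)."""
    sa, sb = set(a), set(b)
    if not sa or not sb:
        return 0.0
    return len(sa & sb) / min(len(sa), len(sb))

# Hypothetical inputs chosen to reproduce the 0.66 above.
print(overlap([1, 2, 3], [1, 2, 4, 5]))  # 0.6666...
```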

#### 9.5.5.4. Overlap Similarity algorithm procedures sample

The following will create a sample graph:

The following will return a stream of node pairs, along with their intersection and overlap similarities:

Fantasy and Dystopia are both clear subgenres of Science Fiction - 100% of the books that list those as genres also list Science Fiction as a genre. Dystopia is also a subgenre of Classics. The others are less obvious: Dystopia probably isn't a subgenre of Fantasy, but the other two pairs could be subgenres.

The following will return a stream of node pairs that have a similarity of at least 0.75, along with their intersection and overlap similarities:

We can see that those genres with lower similarity have been filtered out. If we’re implementing a k-Nearest Neighbors type query we might instead want to find the most similar k super genres for a given genre. We can do that by passing in the topK parameter.

The following will return a stream of genres, along with the two most similar super genres to them (i.e. k=2 ):

The following will find the most similar genre for each genre, and store a relationship between those genres:

We then could write a query to find out the genre hierarchy for a specific genre.

The following will find the genre hierarchy for the Fantasy genre.

["Fantasy", "Science Fiction", "Classics"]

#### 9.5.5.5. Specifying source and target ids

Sometimes, we don’t want to compute all pairs similarity, but would rather specify subsets of items to compare to each other. We do this using the sourceIds and targetIds keys in the config.

We could use this technique to compute the similarity of a subset of items to all other items.

The following will return the super genres for the Fantasy and Classics genres:

## 12 CFR § 9.2 - Definitions.

For the purposes of this part, the following definitions apply:

(a) Affiliate has the same meaning as in 12 U.S.C. 221a(b).

(b) Applicable law means the law of a state or other jurisdiction governing a national bank's fiduciary relationships, any applicable Federal law governing those relationships, the terms of the instrument governing a fiduciary relationship, or any court order pertaining to the relationship.

(c) Custodian under a uniform gifts to minors act means a fiduciary relationship established pursuant to a state law substantially similar to the Uniform Gifts to Minors Act or the Uniform Transfers to Minors Act as published by the American Law Institute.

(d) Fiduciary account means an account administered by a national bank acting in a fiduciary capacity.

(e) Fiduciary capacity means: trustee, executor, administrator, registrar of stocks and bonds, transfer agent, guardian, assignee, receiver, or custodian under a uniform gifts to minors act; investment adviser, if the bank receives a fee for its investment advice; any capacity in which the bank possesses investment discretion on behalf of another; or any other similar capacity that the OCC authorizes pursuant to 12 U.S.C. 92a.

(f) Fiduciary officers and employees means all officers and employees of a national bank to whom the board of directors or its designee has assigned functions involving the exercise of the bank's fiduciary powers.

(g) Fiduciary powers means the authority the OCC permits a national bank to exercise pursuant to 12 U.S.C. 92a.

(h) Guardian means the guardian or conservator, by whatever name used by state law, of the estate of a minor, an incompetent person, an absent person, or a person over whose estate a court has taken jurisdiction, other than under bankruptcy or insolvency laws.

(i) Investment discretion means, with respect to an account, the sole or shared authority (whether or not that authority is exercised) to determine what securities or other assets to purchase or sell on behalf of the account. A bank that delegates its authority over investments and a bank that receives delegated authority over investments are both deemed to have investment discretion.

## Order of evaluation of logical operators

When an expression contains multiple logical operators, Python evaluates it from left to right, stopping as soon as the result is determined (short-circuiting). This can be verified by the example below.
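A sketch of such a check, using a helper function (name assumed) that records when each operand is evaluated:

```python
# Track evaluation order with side effects: each call records its name.
order = []

def t(name, value):
    order.append(name)
    return value

# 'or' evaluates left to right and stops at the first truthy value,
# so the third operand is never evaluated.
result = t("a", False) or t("b", True) or t("c", True)

print(result)  # True
print(order)   # ['a', 'b'] - left to right, and 'c' was short-circuited
```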


## Assigning Collections

One collection can be assigned to another by an INSERT , UPDATE , FETCH , or SELECT statement, an assignment statement, or a subprogram call.

You can assign the value of an expression to a specific element in a collection using the syntax:

where *expression* yields a value of the type specified for elements in the collection type definition.

#### Example: Datatype Compatibility

This example shows that collections must have the same datatype for an assignment to work. Having the same element type is not enough.

#### Example: Assigning a Null Value to a Nested Table

You assign an atomically null nested table or varray to a second nested table or varray. In this case, the second collection must be reinitialized:

In the same way, assigning the value NULL to a collection makes it atomically null.

#### Example: Possible Exceptions for Collection Assignments

Assigning a value to a collection element can cause various exceptions:

- If the subscript is null or is not convertible to the right datatype, PL/SQL raises the predefined exception VALUE_ERROR . Usually, the subscript must be an integer. Associative arrays can also be declared to have VARCHAR2 subscripts.
- If the subscript refers to an uninitialized element, PL/SQL raises SUBSCRIPT_BEYOND_COUNT .
- If the collection is atomically null, PL/SQL raises COLLECTION_IS_NULL .

## Chapter 12 Functions and Operators

Expressions can be used at several points in SQL statements, such as in the ORDER BY or HAVING clauses of SELECT statements, in the WHERE clause of a SELECT , DELETE , or UPDATE statement, or in SET statements. Expressions can be written using values from several sources, such as literal values, column values, NULL , variables, built-in functions and operators, loadable functions, and stored functions (a type of stored object).

This chapter describes the built-in functions and operators that are permitted for writing expressions in MySQL. For information about loadable functions and stored functions, see Section 5.7, “MySQL Server Loadable Functions”, and Section 25.2, “Using Stored Routines”. For the rules describing how the server interprets references to different kinds of functions, see Section 9.2.5, “Function Name Parsing and Resolution”.

An expression that contains NULL always produces a NULL value unless otherwise indicated in the documentation for a particular function or operator.

By default, there must be no whitespace between a function name and the parenthesis following it. This helps the MySQL parser distinguish between function calls and references to tables or columns that happen to have the same name as a function. However, spaces around function arguments are permitted.

To tell the MySQL server to accept spaces after function names, start it with the --sql-mode=IGNORE_SPACE option. (See Section 5.1.11, “Server SQL Modes”.) Individual client programs can request this behavior by using the CLIENT_IGNORE_SPACE option for mysql_real_connect() . In either case, all function names become reserved words.

For the sake of brevity, some examples in this chapter display the output from the **mysql** program in abbreviated form. Rather than showing examples in this format:

## Relational and Logical Operators in C

Relational operators are used to compare two values in the C language; they check the relationship between the two values. If the relation is true, the result is 1; if the relation is false, the result is 0.

Here is the table of relational operators in the C language:

| Operator | Operator Name |
|---|---|
| == | Equal to |
| > | Greater than |
| < | Less than |
| != | Not equal to |
| >= | Greater than or equal to |
| <= | Less than or equal to |

Here is an example of relational operators in the C language: