Articles

2: Vectors in Space


Learning Objectives

  • Describe three-dimensional space mathematically.
  • Locate points in space using coordinates.
  • Write the distance formula in three dimensions.
  • Write the equations for simple planes and spheres.
  • Perform vector operations in (ℝ^3).

Vectors are useful tools for solving two-dimensional problems. To expand the use of vectors to more realistic applications, it is necessary to create a framework for describing three-dimensional space. For example, although a two-dimensional map is a useful tool for navigating from one place to another, in some cases the topography of the land is important. Does your planned route go through the mountains? Do you have to cross a river? To appreciate fully the impact of these geographic features, you must use three dimensions. This section presents a natural extension of the two-dimensional Cartesian coordinate plane into three dimensions.

Three-Dimensional Coordinate Systems

As we have learned, the two-dimensional rectangular coordinate system contains two perpendicular axes: the horizontal (x)-axis and the vertical (y)-axis. We can add a third dimension, the (z)-axis, which is perpendicular to both the (x)-axis and the (y)-axis. We call this system the three-dimensional rectangular coordinate system. It represents the three dimensions we encounter in real life.

Definition: Three-dimensional Rectangular Coordinate System

The three-dimensional rectangular coordinate system consists of three perpendicular axes: the (x)-axis, the (y)-axis, and the (z)-axis. Because each axis is a number line representing all real numbers in (ℝ), the three-dimensional system is often denoted by (ℝ^3).

In Figure (PageIndex{1a}), the positive (z)-axis is shown above the plane containing the (x)- and (y)-axes. The positive (x)-axis appears to the left and the positive (y)-axis is to the right. A natural question to ask is: How was this arrangement determined? The system displayed follows the right-hand rule. If we take our right hand and align the fingers with the positive (x)-axis, then curl the fingers so they point in the direction of the positive (y)-axis, our thumb points in the direction of the positive (z)-axis (Figure (PageIndex{1b})). In this text, we always work with coordinate systems set up in accordance with the right-hand rule. Some systems do follow a left-hand rule, but the right-hand rule is considered the standard representation.

In two dimensions, we describe a point in the plane with the coordinates ((x,y)). Each coordinate describes how the point aligns with the corresponding axis. In three dimensions, a new coordinate, (z), is appended to indicate alignment with the (z)-axis: ((x,y,z)). A point in space is identified by all three coordinates (Figure (PageIndex{2})). To plot the point ((x,y,z)), go (x) units along the (x)-axis, then (y) units in the direction of the (y)-axis, then (z) units in the direction of the (z)-axis.

Example (PageIndex{1}): Locating Points in Space

Sketch the point ((1,−2,3)) in three-dimensional space.

Solution

To sketch a point, start by sketching three sides of a rectangular prism along the coordinate axes: one unit in the positive (x) direction, (2) units in the negative (y) direction, and (3) units in the positive (z) direction. Complete the prism to plot the point.
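For readers following along in software, the same construction translates directly into code. A minimal sketch using matplotlib (an assumed, illustrative choice):

```python
import matplotlib.pyplot as plt

x, y, z = 1, -2, 3

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.scatter([x], [y], [z], color="red")   # the point (1, -2, 3)
ax.plot([0, x], [0, 0], [0, 0])          # 1 unit along the positive x-axis
ax.plot([x, x], [0, y], [0, 0])          # 2 units in the negative y direction
ax.plot([x, x], [y, y], [0, z])          # 3 units in the positive z direction
ax.set_xlabel("x"); ax.set_ylabel("y"); ax.set_zlabel("z")
plt.show()
```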

Exercise (PageIndex{1})

Sketch the point ((−2,3,−1)) in three-dimensional space.

Hint

Start by sketching the coordinate axes (see, e.g., Figure (PageIndex{3})). Then sketch a rectangular prism to help find the point in space.

Answer

In two-dimensional space, the coordinate plane is defined by a pair of perpendicular axes. These axes allow us to name any location within the plane. In three dimensions, we define coordinate planes by the coordinate axes, just as in two dimensions. There are three axes now, so there are three intersecting pairs of axes. Each pair of axes forms a coordinate plane: the (xy)-plane, the (xz)-plane, and the (yz)-plane (Figure (PageIndex{3})). We define the (xy)-plane formally as the following set: ({(x,y,0):x,y∈ℝ}.) Similarly, the (xz)-plane and the (yz)-plane are defined as ({(x,0,z):x,z∈ℝ}) and ({(0,y,z):y,z∈ℝ},) respectively.

To visualize this, imagine you’re building a house and are standing in a room with only two of the four walls finished. (Assume the two finished walls are adjacent to each other.) If you stand with your back to the corner where the two finished walls meet, facing out into the room, the floor is the (xy)-plane, the wall to your right is the (xz)-plane, and the wall to your left is the (yz)-plane.

In two dimensions, the coordinate axes partition the plane into four quadrants. Similarly, the coordinate planes divide space into eight regions around the origin, called octants. The octants fill (ℝ^3) in the same way that quadrants fill (ℝ^2), as shown in Figure (PageIndex{4}).

Most work in three-dimensional space is a comfortable extension of the corresponding concepts in two dimensions. In this section, we use our knowledge of circles to describe spheres, then we expand our understanding of vectors to three dimensions. To accomplish these goals, we begin by adapting the distance formula to three-dimensional space.

If two points lie in the same coordinate plane, then it is straightforward to calculate the distance between them. We know that the distance (d) between two points ((x_1,y_1)) and ((x_2,y_2)) in the (xy)-coordinate plane is given by the formula

\[d=\sqrt{(x_2−x_1)^2+(y_2−y_1)^2}.\]

The formula for the distance between two points in space is a natural extension of this formula.

The Distance between Two Points in Space

The distance (d) between points ((x_1,y_1,z_1)) and ((x_2,y_2,z_2)) is given by the formula

\[d=\sqrt{(x_2−x_1)^2+(y_2−y_1)^2+(z_2−z_1)^2}. \label{distanceForm}\]

The proof of this theorem is left as an exercise. (Hint: First find the distance (d_1) between the points ((x_1,y_1,z_1)) and ((x_2,y_2,z_1)) as shown in Figure (PageIndex{5}).)

Example (PageIndex{2}): Distance in Space

Find the distance between points (P_1=(3,−1,5)) and (P_2=(2,1,−1).)

Solution

Substitute values directly into the distance formula (Equation \ref{distanceForm}):

\[\begin{align*} d(P_1,P_2) &=\sqrt{(x_2−x_1)^2+(y_2−y_1)^2+(z_2−z_1)^2} \\[4pt] &=\sqrt{(2−3)^2+(1−(−1))^2+(−1−5)^2} \\[4pt] &=\sqrt{(−1)^2+2^2+(−6)^2} \\[4pt] &=\sqrt{41}. \end{align*}\]
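The formula translates directly into code; a minimal Python sketch (the helper name is illustrative):

```python
import math

def distance(p, q):
    # Distance between two points in R^3.
    return math.sqrt(sum((qi - pi) ** 2 for pi, qi in zip(p, q)))

print(distance((3, -1, 5), (2, 1, -1)))   # sqrt(41) ≈ 6.4031
```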

Exercise (PageIndex{2})

Find the distance between points (P_1=(1,−5,4)) and (P_2=(4,−1,−1)).

Hint

\(d=\sqrt{(x_2−x_1)^2+(y_2−y_1)^2+(z_2−z_1)^2}\)

Answer

\(5\sqrt{2}\)

Before moving on to the next section, let’s get a feel for how (ℝ^3) differs from (ℝ^2). For example, in (ℝ^2), lines that are not parallel must always intersect. This is not the case in (ℝ^3). For example, consider the lines shown in Figure (PageIndex{7}). These two lines are not parallel, nor do they intersect.

Figure (PageIndex{7}): These two lines are not parallel, but still do not intersect.

You can also have circles that are interconnected but have no points in common, as in Figure (PageIndex{8}).

Figure (PageIndex{8}): These circles are interconnected, but have no points in common.

We have a lot more flexibility working in three dimensions than we would if we were stuck with only two dimensions.

Writing Equations in (ℝ^3)

Now that we can represent points in space and find the distance between them, we can learn how to write equations of geometric objects such as lines, planes, and curved surfaces in (ℝ^3). First, we start with a simple equation. Compare the graphs of the equation (x=0) in (ℝ), (ℝ^2), and (ℝ^3) (Figure (PageIndex{9})). From these graphs, we can see the same equation can describe a point, a line, or a plane.

In space, the equation (x=0) describes all points ((0,y,z)). This equation defines the (yz)-plane. Similarly, the (xy)-plane contains all points of the form ((x,y,0)). The equation (z=0) defines the (xy)-plane and the equation (y=0) describes the (xz)-plane (Figure (PageIndex{10})).

Understanding the equations of the coordinate planes allows us to write an equation for any plane that is parallel to one of the coordinate planes. When a plane is parallel to the (xy)-plane, for example, the (z)-coordinate of each point in the plane has the same constant value. Only the (x)- and (y)-coordinates of points in that plane vary from point to point.

Equations of Planes Parallel to Coordinate Planes

  1. The plane in space that is parallel to the (xy)-plane and contains point ((a,b,c)) can be represented by the equation (z=c).
  2. The plane in space that is parallel to the (xz)-plane and contains point ((a,b,c)) can be represented by the equation (y=b).
  3. The plane in space that is parallel to the (yz)-plane and contains point ((a,b,c)) can be represented by the equation (x=a).

Example (PageIndex{3}): Writing Equations of Planes Parallel to Coordinate Planes

  1. Write an equation of the plane passing through point ((3,11,7)) that is parallel to the (yz)-plane.
  2. Find an equation of the plane passing through points ((6,−2,9), (0,−2,4),) and ((1,−2,−3).)

Solution

  1. When a plane is parallel to the (yz)-plane, only the (y)- and (z)-coordinates may vary. The (x)-coordinate has the same constant value for all points in this plane, so this plane can be represented by the equation (x=3).
  2. Each of the points ((6,−2,9), (0,−2,4),) and ((1,−2,−3)) has the same (y)-coordinate. This plane can be represented by the equation (y=−2).

Exercise (PageIndex{3})

Write an equation of the plane passing through point ((1,−6,−4)) that is parallel to the (xy)-plane.

Hint

If a plane is parallel to the (xy)-plane, the z-coordinates of the points in that plane do not vary.

Answer

(z=−4)

As we have seen, in (ℝ^2) the equation (x=5) describes the vertical line passing through point ((5,0)). This line is parallel to the (y)-axis. In a natural extension, the equation (x=5) in (ℝ^3) describes the plane passing through point ((5,0,0)), which is parallel to the (yz)-plane. Another natural extension of a familiar equation is found in the equation of a sphere.

Definition: Sphere

A sphere is the set of all points in space equidistant from a fixed point, the center of the sphere (Figure (PageIndex{11})), just as the set of all points in a plane that are equidistant from the center represents a circle. In a sphere, as in a circle, the distance from the center to a point on the sphere is called the radius.

The equation of a circle is derived using the distance formula in two dimensions. In the same way, the equation of a sphere is based on the three-dimensional formula for distance.

Standard Equation of a Sphere

The sphere with center ((a,b,c)) and radius (r) can be represented by the equation

\[(x−a)^2+(y−b)^2+(z−c)^2=r^2.\]

This equation is known as the standard equation of a sphere.

Example (PageIndex{4}): Finding an Equation of a Sphere

Find the standard equation of the sphere with center ((10,7,4)) and point ((−1,3,−2)), as shown in Figure (PageIndex{12}).

Figure (PageIndex{12}): The sphere centered at ((10,7,4)) containing point ((−1,3,−2).)

Solution

Use the distance formula to find the radius (r) of the sphere:

\[\begin{align*} r &=\sqrt{(−1−10)^2+(3−7)^2+(−2−4)^2} \\[4pt] &=\sqrt{(−11)^2+(−4)^2+(−6)^2} \\[4pt] &=\sqrt{173}. \end{align*}\]

The standard equation of the sphere is

\[(x−10)^2+(y−7)^2+(z−4)^2=173. \nonumber\]

Exercise (PageIndex{4})

Find the standard equation of the sphere with center ((−2,4,−5)) containing point ((4,4,−1).)

Hint

First use the distance formula to find the radius of the sphere.

Answer

\[(x+2)^2+(y−4)^2+(z+5)^2=52 \nonumber\]

Example (PageIndex{5}): Finding the Equation of a Sphere

Let (P=(−5,2,3)) and (Q=(3,4,−1)), and suppose line segment (overline{PQ}) forms the diameter of a sphere (Figure (PageIndex{13})). Find the equation of the sphere.

Solution:

Since (overline{PQ}) is a diameter of the sphere, we know the center of the sphere is the midpoint of (overline{PQ}). Then,

\[C=\left(\dfrac{−5+3}{2},\dfrac{2+4}{2},\dfrac{3+(−1)}{2}\right)=(−1,3,1). \nonumber\]

Furthermore, we know the radius of the sphere is half the length of the diameter. This gives

\[\begin{align*} r &=\dfrac{1}{2}\sqrt{(−5−3)^2+(2−4)^2+(3−(−1))^2} \\[4pt] &=\dfrac{1}{2}\sqrt{64+4+16} \\[4pt] &=\sqrt{21}. \end{align*}\]

Then, the equation of the sphere is ((x+1)^2+(y−3)^2+(z−1)^2=21.)
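A quick numerical check of this example in Python (assuming Python 3.8+ for math.dist; variable names are illustrative):

```python
import math

P = (-5, 2, 3)
Q = (3, 4, -1)

center = tuple((p + q) / 2 for p, q in zip(P, Q))   # midpoint: (-1.0, 3.0, 1.0)
radius = math.dist(P, Q) / 2                        # half the diameter: sqrt(21)
print(center, radius ** 2)                          # r^2 = 21.0
```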

Exercise (PageIndex{5})

Find the equation of the sphere with diameter (overline{PQ}), where (P=(2,−1,−3)) and (Q=(−2,5,−1).)

Hint

Find the midpoint of the diameter first.

Answer

\[x^2+(y−2)^2+(z+2)^2=14 \nonumber\]

Example (PageIndex{6}): Graphing Other Equations in Three Dimensions

Describe the set of points that satisfies ((x−4)(z−2)=0,) and graph the set.

Solution

We must have either (x−4=0) or (z−2=0), so the set of points forms the two planes (x=4) and (z=2) (Figure (PageIndex{14})).

Exercise (PageIndex{6})

Describe the set of points that satisfies ((y+2)(z−3)=0,) and graph the set.

Hint

One of the factors must be zero.

Answer

The set of points forms the two planes (y=−2) and (z=3).

Example (PageIndex{7}): Graphing Other Equations in Three Dimensions

Describe the set of points in three-dimensional space that satisfies ((x−2)^2+(y−1)^2=4,) and graph the set.

Solution

The (x)- and (y)-coordinates form a circle in the (xy)-plane of radius (2), centered at ((2,1)). Since there is no restriction on the (z)-coordinate, the three-dimensional result is a circular cylinder of radius (2) centered on the line with (x=2) and (y=1). The cylinder extends indefinitely in the (z)-direction (Figure (PageIndex{15})).

Exercise (PageIndex{7})

Describe the set of points in three-dimensional space that satisfies (x^2+(z−2)^2=16), and graph the surface.

Hint

Think about what happens if you plot this equation in two dimensions in the (xz)-plane.

Answer

A cylinder of radius 4 centered on the line with (x=0) and (z=2).

Working with Vectors in (ℝ^3)

Just like two-dimensional vectors, three-dimensional vectors are quantities with both magnitude and direction, and they are represented by directed line segments (arrows). With a three-dimensional vector, we use a three-dimensional arrow.

Three-dimensional vectors can also be represented in component form. The notation (vecs{v}=⟨x,y,z⟩) is a natural extension of the two-dimensional case, representing a vector with the initial point at the origin, ((0,0,0)), and terminal point ((x,y,z)). The zero vector is (vecs{0}=⟨0,0,0⟩). So, for example, the three-dimensional vector (vecs{v}=⟨2,4,1⟩) is represented by a directed line segment from point ((0,0,0)) to point ((2,4,1)) (Figure (PageIndex{16})).

Vector addition and scalar multiplication are defined analogously to the two-dimensional case. If (vecs{v}=⟨x_1,y_1,z_1⟩) and (vecs{w}=⟨x_2,y_2,z_2⟩) are vectors, and (k) is a scalar, then

\[\vecs{v}+\vecs{w}=⟨x_1+x_2,y_1+y_2,z_1+z_2⟩\]

and

\[k\vecs{v}=⟨kx_1,ky_1,kz_1⟩.\]

If (k=−1,) then (kvecs{v}=(−1)vecs{v}) is written as (−vecs{v}), and vector subtraction is defined by (vecs{v}−vecs{w}=vecs{v}+(−vecs{w})=vecs{v}+(−1)vecs{w}).

The standard unit vectors extend easily into three dimensions as well, (hat{mathbf i}=⟨1,0,0⟩), (hat{mathbf j}=⟨0,1,0⟩), and (hat{mathbf k}=⟨0,0,1⟩), and we use them in the same way we used the standard unit vectors in two dimensions. Thus, we can represent a vector in (ℝ^3) in the following ways:

\[\vecs{v}=⟨x,y,z⟩=x\hat{\mathbf i}+y\hat{\mathbf j}+z\hat{\mathbf k}.\]

Example (PageIndex{8}): Vector Representations

Let (vecd{PQ}) be the vector with initial point (P=(3,12,6)) and terminal point (Q=(−4,−3,2)) as shown in Figure (PageIndex{17}). Express (vecd{PQ}) in both component form and using standard unit vectors.

Solution

In component form,

\[\begin{align*} \vecd{PQ} &=⟨x_2−x_1,y_2−y_1,z_2−z_1⟩ \\[4pt] &=⟨−4−3,−3−12,2−6⟩ \\[4pt] &=⟨−7,−15,−4⟩. \end{align*}\]

In standard unit form,

\[\vecd{PQ}=−7\hat{\mathbf i}−15\hat{\mathbf j}−4\hat{\mathbf k}. \nonumber\]

Exercise (PageIndex{8})

Let (S=(3,8,2)) and (T=(2,−1,3)). Express (vecd{ST}) in component form and in standard unit form.

Hint

Write (vecd{ST}) in component form first. (T) is the terminal point of (vecd{ST}).

Answer

\(\vecd{ST}=⟨−1,−9,1⟩=−\hat{\mathbf i}−9\hat{\mathbf j}+\hat{\mathbf k}\)

As described earlier, vectors in three dimensions behave in the same way as vectors in a plane. The geometric interpretation of vector addition, for example, is the same in both two- and three-dimensional space (Figure (PageIndex{18})).

We have already seen how some of the algebraic properties of vectors, such as vector addition and scalar multiplication, can be extended to three dimensions. Other properties can be extended in similar fashion. They are summarized here for our reference.

Properties of Vectors in Space

Let (vecs{v}=⟨x_1,y_1,z_1⟩) and (vecs{w}=⟨x_2,y_2,z_2⟩) be vectors, and let (k) be a scalar.

  • Scalar multiplication: \[k\vecs{v}=⟨kx_1,ky_1,kz_1⟩\]
  • Vector addition: \[\vecs{v}+\vecs{w}=⟨x_1,y_1,z_1⟩+⟨x_2,y_2,z_2⟩=⟨x_1+x_2,y_1+y_2,z_1+z_2⟩\]
  • Vector subtraction: \[\vecs{v}−\vecs{w}=⟨x_1,y_1,z_1⟩−⟨x_2,y_2,z_2⟩=⟨x_1−x_2,y_1−y_2,z_1−z_2⟩\]
  • Vector magnitude: \[\|\vecs{v}\|=\sqrt{x_1^2+y_1^2+z_1^2}\]
  • Unit vector in the direction of (vecs{v}): \[\dfrac{1}{\|\vecs{v}\|}\vecs{v}=\dfrac{1}{\|\vecs{v}\|}⟨x_1,y_1,z_1⟩=\left\langle\dfrac{x_1}{\|\vecs{v}\|},\dfrac{y_1}{\|\vecs{v}\|},\dfrac{z_1}{\|\vecs{v}\|}\right\rangle, \quad \text{if} \, \vecs{v}≠\vecs{0}\]
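Each of these operations maps directly onto array arithmetic; a brief sketch with NumPy (an assumed, illustrative choice), using the vectors from Example (PageIndex{9}) below:

```python
import numpy as np

v = np.array([-2.0, 9.0, 5.0])
w = np.array([1.0, -1.0, 0.0])

print(3 * v - 2 * w)            # scalar multiplication and subtraction
print(v + w)                    # vector addition
print(np.linalg.norm(v))        # magnitude: sqrt(110)
print(v / np.linalg.norm(v))    # unit vector in the direction of v
```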

We have seen that vector addition in two dimensions satisfies the commutative, associative, and additive inverse properties. These properties of vector operations are valid for three-dimensional vectors as well. Scalar multiplication of vectors satisfies the distributive property, and the zero vector acts as an additive identity. The proofs to verify these properties in three dimensions are straightforward extensions of the proofs in two dimensions.

Example (PageIndex{9}): Vector Operations in Three Dimensions

Let (vecs{v}=⟨−2,9,5⟩) and (vecs{w}=⟨1,−1,0⟩) (Figure (PageIndex{19})). Find the following vectors.

  1. (3vecs{v}−2vecs{w})
  2. (5|vecs{w}|)
  3. (|5 vecs{w}|)
  4. A unit vector in the direction of (vecs{v})

Solution

a. First, use scalar multiplication of each vector, then subtract:

\[\begin{align*} 3\vecs{v}−2\vecs{w} &=3⟨−2,9,5⟩−2⟨1,−1,0⟩ \\[4pt] &=⟨−6,27,15⟩−⟨2,−2,0⟩ \\[4pt] &=⟨−6−2,27−(−2),15−0⟩ \\[4pt] &=⟨−8,29,15⟩. \end{align*}\]

b. Write the equation for the magnitude of the vector, then use scalar multiplication:

\[5\|\vecs{w}\|=5\sqrt{1^2+(−1)^2+0^2}=5\sqrt{2}. \nonumber\]

c. First, use scalar multiplication, then find the magnitude of the new vector. Note that the result is the same as for part b:

\[\|5\vecs{w}\|=\|⟨5,−5,0⟩\|=\sqrt{5^2+(−5)^2+0^2}=\sqrt{50}=5\sqrt{2}. \nonumber\]

d. Recall that to find a unit vector in two dimensions, we divide a vector by its magnitude. The procedure is the same in three dimensions:

\[\begin{align*} \dfrac{\vecs{v}}{\|\vecs{v}\|} &=\dfrac{1}{\|\vecs{v}\|}⟨−2,9,5⟩ \\[4pt] &=\dfrac{1}{\sqrt{(−2)^2+9^2+5^2}}⟨−2,9,5⟩ \\[4pt] &=\dfrac{1}{\sqrt{110}}⟨−2,9,5⟩ \\[4pt] &=\left\langle\dfrac{−2}{\sqrt{110}},\dfrac{9}{\sqrt{110}},\dfrac{5}{\sqrt{110}}\right\rangle. \end{align*}\]

Exercise (PageIndex{9})

Let (vecs{v}=⟨−1,−1,1⟩) and (vecs{w}=⟨2,0,1⟩). Find a unit vector in the direction of (5vecs{v}+3vecs{w}.)

Hint

Start by writing (5vecs{v}+3vecs{w}) in component form.

Answer

\(\left\langle\dfrac{1}{3\sqrt{10}},−\dfrac{5}{3\sqrt{10}},\dfrac{8}{3\sqrt{10}}\right\rangle\)

Example (PageIndex{10}): Throwing a Forward Pass

A quarterback is standing on the football field preparing to throw a pass. His receiver is standing 20 yd down the field and 15 yd to the quarterback’s left. The quarterback throws the ball at a velocity of 60 mph toward the receiver at an upward angle of (30°) (see the following figure). Write the initial velocity vector of the ball, (vecs{v}), in component form.

Solution

The first thing we want to do is find a vector in the same direction as the velocity vector of the ball. We then scale the vector appropriately so that it has the right magnitude. Consider the vector (vecs{w}) extending from the quarterback’s arm to a point directly above the receiver’s head at an angle of (30°) (see the following figure). This vector would have the same direction as (vecs{v}), but it may not have the right magnitude.

The receiver is 20 yd down the field and 15 yd to the quarterback’s left. Therefore, the straight-line distance from the quarterback to the receiver is

Distance from QB to receiver \(=\sqrt{15^2+20^2}=\sqrt{225+400}=\sqrt{625}=25\) yd.

We have \(\dfrac{25}{\|\vecs{w}\|}=\cos 30°.\) Then the magnitude of \(\vecs{w}\) is given by

\(\|\vecs{w}\|=\dfrac{25}{\cos 30°}=\dfrac{25⋅2}{\sqrt{3}}=\dfrac{50}{\sqrt{3}}\) yd

and the vertical distance from the receiver to the terminal point of (vecs{w}) is

\(\|\vecs{w}\|\sin 30°=\dfrac{50}{\sqrt{3}}⋅\dfrac{1}{2}=\dfrac{25}{\sqrt{3}}\) yd.

Then \(\vecs{w}=\left\langle 20,15,\dfrac{25}{\sqrt{3}}\right\rangle\), which has the same direction as \(\vecs{v}\).

Recall, though, that we calculated the magnitude of \(\vecs{w}\) to be \(\|\vecs{w}\|=\dfrac{50}{\sqrt{3}}\), and \(\vecs{v}\) has magnitude \(60\) mph. So, we need to multiply vector \(\vecs{w}\) by an appropriate constant, \(k\). We want to find a value of \(k\) so that \(\|k\vecs{w}\|=60\) mph. We have

\(\|k\vecs{w}\|=k\|\vecs{w}\|=k\dfrac{50}{\sqrt{3}}\) mph,

so we want

\(k\dfrac{50}{\sqrt{3}}=60\)

\(k=\dfrac{60\sqrt{3}}{50}\)

\(k=\dfrac{6\sqrt{3}}{5}\).

Then

\(\vecs{v}=k\vecs{w}=k\left\langle 20,15,\dfrac{25}{\sqrt{3}}\right\rangle=\dfrac{6\sqrt{3}}{5}\left\langle 20,15,\dfrac{25}{\sqrt{3}}\right\rangle=⟨24\sqrt{3},18\sqrt{3},30⟩\).

Let’s double-check that \(\|\vecs{v}\|=60.\) We have

\(\|\vecs{v}\|=\sqrt{(24\sqrt{3})^2+(18\sqrt{3})^2+(30)^2}=\sqrt{1728+972+900}=\sqrt{3600}=60\) mph.

So, we have found the correct components for (vecs{v}).
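The same computation can be scripted; a minimal Python sketch of this example (names are illustrative):

```python
import math

speed = 60.0                           # mph
angle = math.radians(30)

dist = math.hypot(15, 20)              # 25 yd from quarterback to receiver
w = (20, 15, dist * math.tan(angle))   # direction vector toward the receiver
k = speed / (dist / math.cos(angle))   # rescale |w| = 50/sqrt(3) to 60 mph
v = tuple(k * c for c in w)
print(v)                                 # ≈ (41.57, 31.18, 30.0) = ⟨24√3, 18√3, 30⟩
print(math.sqrt(sum(c * c for c in v)))  # 60.0
```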

Exercise (PageIndex{10})

Assume the quarterback and the receiver are in the same place as in the previous example. This time, however, the quarterback throws the ball at a velocity of (40) mph and an angle of (45°). Write the initial velocity vector of the ball, (vecs{v}), in component form.

Hint

Follow the process used in the previous example.

Answer

\(\vecs{v}=⟨16\sqrt{2},12\sqrt{2},20\sqrt{2}⟩\)

Key Concepts

  • The three-dimensional coordinate system is built around a set of three axes that intersect at right angles at a single point, the origin. Ordered triples ((x,y,z)) are used to describe the location of a point in space.
  • The distance (d) between points ((x_1,y_1,z_1)) and ((x_2,y_2,z_2)) is given by the formula \[d=\sqrt{(x_2−x_1)^2+(y_2−y_1)^2+(z_2−z_1)^2}. \nonumber\]
  • In three dimensions, the equations (x=a,y=b,) and (z=c) describe planes that are parallel to the coordinate planes.
  • The standard equation of a sphere with center ((a,b,c)) and radius (r) is \[(x−a)^2+(y−b)^2+(z−c)^2=r^2. \nonumber\]
  • In three dimensions, as in two, vectors are commonly expressed in component form, (vecs{v}=⟨x,y,z⟩), or in terms of the standard unit vectors, \(x\hat{\mathbf i}+y\hat{\mathbf j}+z\hat{\mathbf k}\).
  • Properties of vectors in space are a natural extension of the properties for vectors in a plane. Let (v=⟨x_1,y_1,z_1⟩) and (w=⟨x_2,y_2,z_2⟩) be vectors, and let (k) be a scalar.

Scalar multiplication:

\[k\vecs{v}=⟨kx_1,ky_1,kz_1⟩ \nonumber\]

Vector addition:

\[\vecs{v}+\vecs{w}=⟨x_1,y_1,z_1⟩+⟨x_2,y_2,z_2⟩=⟨x_1+x_2,y_1+y_2,z_1+z_2⟩ \nonumber\]

Vector subtraction:

\[\vecs{v}−\vecs{w}=⟨x_1,y_1,z_1⟩−⟨x_2,y_2,z_2⟩=⟨x_1−x_2,y_1−y_2,z_1−z_2⟩ \nonumber\]

Vector magnitude:

\[\|\vecs{v}\|=\sqrt{x_1^2+y_1^2+z_1^2} \nonumber\]

Unit vector in the direction of (vecs{v}):

\[\dfrac{\vecs{v}}{\|\vecs{v}\|}=\dfrac{1}{\|\vecs{v}\|}⟨x_1,y_1,z_1⟩=\left\langle\dfrac{x_1}{\|\vecs{v}\|},\dfrac{y_1}{\|\vecs{v}\|},\dfrac{z_1}{\|\vecs{v}\|}\right\rangle, \quad \vecs{v}≠\vecs{0} \nonumber\]

Key Equations

Distance between two points in space:

\[d=\sqrt{(x_2−x_1)^2+(y_2−y_1)^2+(z_2−z_1)^2}\]

Sphere with center ((a,b,c)) and radius (r):

\[(x−a)^2+(y−b)^2+(z−c)^2=r^2\]

Glossary

coordinate plane
a plane containing two of the three coordinate axes in the three-dimensional coordinate system, named by the axes it contains: the (xy)-plane, (xz)-plane, or the (yz)-plane
right-hand rule
a common way to define the orientation of the three-dimensional coordinate system; when the right hand is curved around the (z)-axis in such a way that the fingers curl from the positive (x)-axis to the positive (y)-axis, the thumb points in the direction of the positive (z)-axis
octants
the eight regions of space created by the coordinate planes
sphere
the set of all points equidistant from a given point known as the center
standard equation of a sphere
((x−a)^2+(y−b)^2+(z−c)^2=r^2) describes a sphere with center ((a,b,c)) and radius (r)
three-dimensional rectangular coordinate system
a coordinate system defined by three lines that intersect at right angles; every point in space is described by an ordered triple ((x,y,z)) that plots its location relative to the defining axes

Angle Between Two Vectors Calculator

With this angle between two vectors calculator, you'll quickly learn how to find the angle between two vectors. It doesn't matter if your vectors are in 2D or 3D, nor if their representations are coordinates or initial and terminal points - our tool is a safe bet in every case. Play with the calculator and check the definitions and explanations below; if you're searching for the angle between two vectors formulas, you'll definitely find them there.

Since you're here, hunting down solutions to your vector problems, can we assume that you're also interested in vector operations? If you want to start from the basics, have a look at our unit vector calculator. For those who want to dig even more into vector algebra, we recommend the vector projection tool and the cross product calculator.


Calculate rotation matrix to align two vectors in 3D space?

I have two separate vectors of 3D data points that represent curves and I'm plotting these as scatter data in a 3D plot with matplotlib.

Both the vectors start at the origin, and both are of unit length. The curves are similar to each other; however, there is typically a rotation between the two curves (for test purposes, I've actually been using one curve and applying a rotation matrix to it to create the second curve).

I want to align the two curves so that they line up in 3D, e.g. rotate curve b so that its start and end points line up with curve a. I've been trying to do this by subtracting the final point from the first, to get a direction vector representing the straight line from the start to the end of each curve, converting these to unit vectors, and then calculating the cross and dot products and using the methodology outlined in this answer (https://math.stackexchange.com/a/476311/357495) to calculate a rotation matrix.
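A minimal sketch of that methodology (assuming NumPy and Python 3; the helper name and test values are illustrative):

```python
import numpy as np

def rotation_matrix_from_vectors(a, b):
    # Rotation matrix taking unit vector a onto unit vector b, using
    # R = I + [v]_x + [v]_x^2 * (1 - c) / s^2 (the math.stackexchange answer),
    # where v = a x b, c = a . b, s = ||v||.
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    v = np.cross(a, b)
    c = np.dot(a, b)
    s = np.linalg.norm(v)
    if s == 0:
        # a and b are parallel (c > 0) or anti-parallel (c < 0); the
        # anti-parallel case has no unique axis and needs special handling.
        return np.eye(3)
    vx = np.array([[0, -v[2], v[1]],
                   [v[2], 0, -v[0]],
                   [-v[1], v[0], 0]])
    return np.eye(3) + vx + vx @ vx * ((1 - c) / s**2)

# Direction vector of each curve: last point minus first, normalized.
dir_a = np.array([0.0, 0.0, 1.0])
dir_b = np.array([0.0, -np.sin(np.pi / 3), np.cos(np.pi / 3)])  # 60° about x
R = rotation_matrix_from_vectors(dir_b, dir_a)
print(R @ dir_b)   # ≈ dir_a
```

Note that matching only the start-to-end direction vectors determines the rotation only up to a twist about that axis, so the full curves can still disagree even when the direction vectors align.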

However, when I do this, the calculated rotation matrix is wrong, and I'm not sure why (I'm using Python 2.7).

In my test case, curve 2 is simply curve 1 with a rotation matrix applied: just a 60 degree rotation around the x axis.

The rotation matrix computed by my code to align the two vectors is:

The plot of the direction vectors for the two original curves (a and b in blue and green respectively) and the result of b transformed with the computed rotation matrix (red) is below. I'm trying to compute the rotation matrix to align the green vector to the blue.



Linear algebra principles are crucial for understanding the concepts behind machine learning and deep learning. Linear algebra is a branch of mathematics that allows us to define and perform operations on higher-dimensional coordinates and plane interactions in a concise way. Its main focus is on systems of linear equations.

  • Idea behind basis vectors
  • Definition of basis vectors
  • Properties of basis vectors
  • Basis vectors for a given space
  • Why basis vectors are important from a data science viewpoint

What’s the idea behind basis vectors?
So, the idea here is the following.

Let us take an R-squared space, which basically means that we are looking at vectors in 2 dimensions, so there are 2 components in each of these vectors. We can take many, many such vectors; there will be an infinite number of vectors in 2 dimensions. So, the point is: can we represent all of these vectors using some basic elements and combinations of those elements?

Now, let us consider 2 vectors, for example, v1 = (1, 0) and v2 = (0, 1).

Now, if you take any vector in R-squared space, let us say the vector (2, 1), we can write it as a linear combination of these two vectors:

(2, 1) = 2(1, 0) + 1(0, 1).

Similarly, if you take the vector (4, 4), we can also write it as a linear combination:

(4, 4) = 4(1, 0) + 4(0, 1).

And that would be true for any vector that you have in this space.

So, in some sense what we say is that these 2 vectors (v1 and v2) characterize the space, or they form a basis for the space, and any vector in this space can simply be written as a linear combination of these 2 vectors. Now you can notice, the coefficients of the linear combinations are actually the numbers themselves. So, for example, if I want vector (2, 1) to be written as a linear combination of the vector (1, 0) and vector (0, 1), the scalar multiples are 2 and 1, and similarly for vector (4, 4) and so on.

So, the key point is that while we have an infinite number of vectors here, they can all be generated as linear combinations of just 2 vectors, and we have seen here that these 2 vectors are vector (1, 0) and vector (0, 1). These 2 vectors are called a basis for the whole space.

Definition of basis vectors: If you can write every vector in a given space as a linear combination of some vectors, and these vectors are independent of each other, then we call them basis vectors for that given space.

  1. Basis vectors must be linearly independent of each other:
    If I multiply v1 by any scalar, I will never be able to get the vector v2. That proves that v1 and v2 are linearly independent of each other. We want basis vectors to be linearly independent of each other because we want every vector in the basis to generate unique information. If they become dependent on each other, then a dependent vector does not bring in anything unique.
  2. Basis vectors must span the whole space:
    The word span basically means that any vector in that space can be written as a linear combination of the basis vectors, as we saw in our previous example.
  3. Basis vectors are not unique: One can find many, many sets of basis vectors. The only conditions are that they have to be linearly independent and should span the whole space. So let’s understand this property in detail by taking the same example as before.

Let us consider 2 other vectors which are linearly independent of each other, say v1 = (1, 1) and v2 = (1, -1).

First, we have to check: do these 2 vectors obey the properties of basis vectors?
You can see that these 2 vectors are linearly independent of each other, as multiplying v1 by any scalar can never give the vector v2. For example, if I multiply v1 by -1 I will get the vector (-1, -1), but not the vector (1, -1).

To verify the second property, let’s take the vector (2, 1). Now, let us see whether we can represent this vector (2, 1) as a linear combination of the vector (1, 1) and vector (1, -1):

(2, 1) = 1.5(1, 1) + 0.5(1, -1).

So, if you take a look at this, we have successfully represented the vector (2, 1) as a linear combination of the vector (1, 1) and vector (1, -1). You can notice that in the previous case, when we used the vector (1, 0) and vector (0, 1), we said this can be written as 2 times vector (1, 0) plus 1 times vector (0, 1); the numbers have changed now. Nonetheless, I can write this as a linear combination of these 2 basis vectors.
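The coefficients in such a combination can be found by solving a small linear system; a minimal NumPy sketch:

```python
import numpy as np

# Columns of B are the basis vectors v1 = (1, 1) and v2 = (1, -1).
B = np.array([[1.0, 1.0],
              [1.0, -1.0]])

coeffs = np.linalg.solve(B, np.array([2.0, 1.0]))
print(coeffs)   # [1.5 0.5]  ->  (2, 1) = 1.5*(1, 1) + 0.5*(1, -1)
```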

Similarly, if you take the vector (1, 3): (1, 3) = 2(1, 1) − 1(1, -1).

Similarly, if you take the vector (4, 4): (4, 4) = 4(1, 1) + 0(1, -1).
So, this is another linear combination of the same basis vectors. So, the key point that I want to make here is that basis vectors are not unique. There are many ways in which you can define the basis vectors; however, they all share the same property: if I have a set of vectors which I call a basis, those vectors have to be independent of each other and they should span the whole space.

Point to remember:
An interesting thing to note here is that we cannot have 2 basis sets with a different number of vectors. In the previous example the basis vectors were v1 = (1, 0) and v2 = (0, 1): only 2 vectors. Similarly, in this case the basis vectors are v1 = (1, 1) and v2 = (1, -1): still only 2 vectors. So, while you could have many sets of basis vectors, the number of vectors in each set will always be the same. Something to keep in mind: for the same space you cannot have 2 basis sets, one with n vectors and another with m vectors; that is not possible. If they are basis sets for the same space, the number of vectors in each set must be the same.

Find basis vectors:

  • Step 1: To find basis vectors of the given set of vectors, arrange the vectors in matrix form, with each vector as a column.
  • Step 2: Find the rank of this matrix.
    The rank of this matrix gives the number of linearly independent columns. The rank tells us how many columns are fundamental to explaining all of the columns, i.e., how many columns we need so that we can generate the remaining columns as linear combinations of them.

Explanation:
If the rank of the matrix is 1, then we have only 1 basis vector; if the rank is 2, then there are 2 basis vectors; if 3, then there are 3 basis vectors, and so on. In this case, since the rank of the matrix turns out to be 2, there are only 2 column vectors that I need to represent every column in this matrix. So, the basis set has size 2, and we can pick any 2 linearly independent columns to be the basis vectors.

So, for example, we could choose v1 = (6, 5, 8, 11) and v2 = (1, 2, 3, 4) and say this is a basis for all of these columns, or we could choose v1 = (3, -1, -1, -1) and v2 = (7, 7, 11, 15), and so on. We can choose any 2 columns as long as they are linearly independent of each other, and this is something we know from above: the basis vectors need not be unique. So, I pick any 2 linearly independent columns that represent this data.
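As a quick check, the rank computation in Step 2 is a single call in NumPy (a sketch; the matrix below uses the four columns named above):

```python
import numpy as np

# Columns are the sample vectors discussed above.
A = np.array([[6.0, 1.0, 3.0, 7.0],
              [5.0, 2.0, -1.0, 7.0],
              [8.0, 3.0, -1.0, 11.0],
              [11.0, 4.0, -1.0, 15.0]])

print(np.linalg.matrix_rank(A))   # 2: any 2 independent columns form a basis
```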

Important from a data science viewpoint
Now, let me explain why this basis vector concept is very important from a data science viewpoint. Just take a look at the previous example. We have 10 samples, and since each sample has 4 numbers, storing them all means storing 4 x 10 = 40 numbers.
Now, suppose we do the same exercise for these 10 samples and find that there are only 2 basis vectors, which are 2 vectors out of this set. We could store these 2 basis vectors, which would be 2 x 4 = 8 numbers, and then for each of the remaining 8 samples, instead of storing all 4 numbers, we could store just the 2 constants of the linear combination used to construct it. Since we have already stored the basis vectors, whenever we want to reconstruct a sample, we simply take the first constant times v1 plus the second constant times v2.

We store 2 basis vectors, which gives: 4 x 2 = 8 numbers.
Then for the remaining 8 samples, we store 2 constants each: 8 x 2 = 16 numbers.
This gives us: 8 + 16 = 24 numbers.
Hence, instead of storing 4 x 10 = 40 numbers, we can store only 24 numbers, a reduction of 40%, and we are still able to reconstruct the whole data set.

So, for example, if you have 30-dimensional vectors and just 3 basis vectors, then you can see the kind of reduction you will get in terms of data storage. So, this is one viewpoint of data science.

  • You can use this basis to identify a model underlying the data.
  • You can use a basis to do noise reduction in the data.



In science, mathematics, and engineering, we distinguish two important quantities: scalars and vectors. A scalar is simply a real number or a quantity that has magnitude. For example, length, temperature, and blood pressure are represented by numbers such as 80 m, 20°C, and the systolic/diastolic ratio 120/80, respectively. A vector, on the other hand, is usually described as a quantity that has both magnitude and direction.

Geometric Vectors

Geometrically, a vector can be represented by a directed line segment—that is, by an arrow—and is denoted either by a boldface symbol or by a symbol with an arrow over it.



LINEARLY INDEPENDENT SETS OF VECTORS

That is, an infinite number of solutions can be constructed in terms of just two vectors, and analysis of the solutions can be performed by considering just these two vectors. To use similar methods of analysis in vector spaces, we will need the concepts of span and linear independence of sets of vectors. Both concepts involve linear combinations of vectors.

where the numbers are called the coefficients of the linear combination.

Solution Five possibilities are

Note that the first and last linear combinations yield the same vector (0,0), even though the coefficients are not the same. The last four linear combinations are called nontrivial because in each at least one coefficient is nonzero.

Solution We want to find so that

Solution We check to see whether the equation

has a solution. This is equivalent to

and there is no solution. Hence the given vector cannot be written as a linear combination of the given vectors.

The span of a set of vectors from is actually a subspace of .

which are also in span . Therefore, span is a subspace of . It is called the subspace of spanned by .

for all complex and . Another way to say this is that all solutions form the subspace span .

In some instances span may be all of .

where , and can be any real numbers.

Solution We must show that any vector in can be written as a linear combination of the three given vectors. That is, we must show that there are constants so that

regardless of what real values , and take. The last equation is equivalent to

Therefore , and the span is all of .

Solution Let be an arbitrary vector in . We want to know whether it is possible to write

The last equation is equivalent to

Therefore a solution exists only if but this places a restriction on , and so the very first equation cannot be solved for an arbitrary vector . Therefore the span of the given vectors is not all .

The question in Example 10 could have been asked in a slightly different way.

Solution Suppose is in span . Then the equation

must be solvable. Working as in Example 10, we conclude that . Thus span . That is, the span of is all vectors whose third component is the sum of the first two components. So, for example, and .

The vector space is spanned by . It is also spanned by a larger set . As we will see later, and functions on can be analyzed by using spanning sets; hence, for economy's sake, we want to be able to find the smallest possible spanning sets for vector spaces. To do this, the idea of linear independence is required.

If the set is not linearly independent, it is called linearly dependent.

To determine whether a set is linearly independent or linearly dependent, we need to find out about the solution of

c₁v₁ + c₂v₂ + ⋯ + cₙvₙ = 0.

If we find (by actually solving the resulting system or by any other technique) that only the trivial solution exists, then the set is linearly independent. However, if one or more of the coefficients is nonzero, then the set is linearly dependent.
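Numerically, this test amounts to a rank computation; a minimal sketch in NumPy (the example vectors are illustrative, not the ones from the examples here):

```python
import numpy as np

def is_linearly_independent(vectors):
    # Independent iff c1*v1 + ... + cn*vn = 0 has only the trivial solution,
    # i.e. iff the matrix with the vectors as columns has full column rank.
    A = np.column_stack(vectors)
    return np.linalg.matrix_rank(A) == len(vectors)

print(is_linearly_independent([(1, 0, 2), (0, 1, 1)]))   # True
print(is_linearly_independent([(1, 2, 3), (2, 4, 6)]))   # False: multiples
```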

This equation is equivalent to

which has only the trivial solution. Therefore, the set is linearly independent.

This system has a nontrivial solution, and so the set is not linearly independent; it is linearly dependent.

By collecting terms on the left-hand side, this equation can be rewritten

From algebra we know that a polynomial is identically zero only when all the coefficients are zero. So we have

which has only the trivial solution. Therefore, the set is linearly independent.

Linear dependence of a set of two or more vectors means that at least one of the vectors in the set can be written as a linear combination of the others. Recall Example 13 and the set . In Fig. 3.4.1 we have shown geometrically the dependence of the vectors in . A general statement of this situation is as follows:

Suppose is a nonzero coefficient in the linear combination. Then

Therefore, that vector is a linear combination of the other vectors in the set.

Suppose . Then, adding to both sides, we have

Because the coefficient of is nonzero, the set is linearly dependent.

is linearly dependent in . Write one of the vectors as a linear combination of the others.

This equation is equivalent to

Therefore is a solution, where is arbitrary. Thus the set is linearly dependent. Choosing , we have

Of course, we could also write

Some Geometry of Spanning Sets in and The span of a single nonzero vector is a line containing the origin. Span is all multiples of v , which is all position vectors in the same direction as v (see Fig. 3.4.2). The terminal points of these vectors form the line with vector equation

The span of two independent vectors is a plane containing the origin. To see this in , let v and w be given by and , respectively. The plane containing v and w has normal vector and vector equation

If we calculate and write the vector equation, we find

where is a vector in the plane. However, if is to be a linear combination of v and w , we must have

which has a solution if and only if

which holds if and only if the vector equation holds. So the span of two independent vectors is the plane containing the vectors. See Fig. 3.4.3.

The span of three nonzero vectors in can be a line, a plane, or all of , depending on the degree of dependence of the three vectors. If all three are multiples of each other, we have only a line. If two of the vectors are independent but the entire set is linearly dependent, then the third is a linear combination of the other two and lies in the plane they define. That is, the vectors are coplanar. Lay three pencils on a tabletop with erasers joined for a graphic example of coplanar vectors. If the set is linearly independent, then the span is all of . This can be verified directly in individual cases; to show it in general requires methods of the next section.

Linear combinations in complex vector spaces have important applications, as the next examples illustrate.

Show that is linearly independent in . Discuss the importance of the independence.


2-Hilbert spaces

2-vector spaces have to a large extent been motivated by and applied in (2-dimensional) quantum field theory. In that context it is usually not the concept of a plain vector space which needs to be categorified, but that of a Hilbert space.

2-Hilbert spaces as Hilb-enriched categories with some extra properties were described in

In applications one often assumes these 2-Hilbert spaces to be semisimple, in which case such a 2-Hilbert space is a Kapranov–Voevodsky 2-vector space equipped with extra structure.

A review of these ideas of 2-Hilbert spaces as well as applications of 2-Hilbert spaces to finite group representation theory are in

  • Bruce Bartlett, On unitary 2-representations of finite groups and topological quantum field theory (arXiv)


The cross product of two vectors a and b is defined only in three-dimensional space and is denoted by a × b . [1] In physics and applied mathematics, the wedge notation a ∧ b is often used (in conjunction with the name vector product), [5] [6] [7] although in pure mathematics such notation is usually reserved for just the exterior product, an abstraction of the vector product to n dimensions.

The cross product a × b is defined as a vector c that is perpendicular (orthogonal) to both a and b, with a direction given by the right-hand rule [2] and a magnitude equal to the area of the parallelogram that the vectors span. [3]

The cross product is defined by the formula [8] [9]

a × b = ‖a‖ ‖b‖ sin(θ) n,

where:

  • θ is the angle between a and b in the plane containing them (hence, it is between 0° and 180°),
  • ‖a‖ and ‖b‖ are the magnitudes of vectors a and b,
  • and n is a unit vector perpendicular to the plane containing a and b, in the direction given by the right-hand rule (illustrated). [3]

If the vectors a and b are parallel (i.e., the angle θ between them is either 0° or 180°), by the above formula, the cross product of a and b is the zero vector 0.

Direction

By convention, the direction of the vector n is given by the right-hand rule, where one simply points the forefinger of the right hand in the direction of a and the middle finger in the direction of b. Then, the vector n is coming out of the thumb (see the adjacent picture). Using this rule implies that the cross product is anti-commutative, that is, b × a = −(a × b) . By pointing the forefinger toward b first, and then pointing the middle finger toward a, the thumb will be forced in the opposite direction, reversing the sign of the product vector.

As the cross product operator depends on the orientation of the space (as explicit in the definition above), the cross product of two vectors is not a "true" vector, but a pseudovector. See § Handedness for more detail.

In 1881, Josiah Willard Gibbs, and independently Oliver Heaviside, introduced both the dot product and the cross product using a period ( a . b ) and an "x" ( a x b ), respectively, to denote them. [10]

In 1877, to emphasize the fact that the result of a dot product is a scalar while the result of a cross product is a vector, William Kingdon Clifford coined the alternative names scalar product and vector product for the two operations. [10] These alternative names are still widely used in the literature.

Both the cross notation ( a × b ) and the name cross product were possibly inspired by the fact that each scalar component of a × b is computed by multiplying non-corresponding components of a and b. Conversely, a dot product a ⋅ b involves multiplications between corresponding components of a and b. As explained below, the cross product can be expressed in the form of a determinant of a special 3 × 3 matrix. According to Sarrus's rule, this involves multiplications between matrix elements identified by crossed diagonals.

Coordinate notation

The standard basis vectors i, j, and k satisfy the following equalities in a right-handed coordinate system: [2]

i × j = k,  j × k = i,  k × i = j,

which imply, by the anticommutativity of the cross product, that

j × i = −k,  k × j = −i,  i × k = −j.

The anticommutativity of the cross product (and the obvious lack of linear independence) also implies that

i × i = j × j = k × k = 0 (the zero vector).

These equalities, together with the distributivity and linearity of the cross product (but neither follows easily from the definition given above), are sufficient to determine the cross product of any two vectors a and b. Each vector can be defined as the sum of three orthogonal components parallel to the standard basis vectors:

Their cross product a × b can be expanded using distributivity:

This can be interpreted as the decomposition of a × b into the sum of nine simpler cross products involving vectors aligned with i, j, or k. Each one of these nine cross products operates on two vectors that are easy to handle as they are either parallel or orthogonal to each other. From this decomposition, by using the above-mentioned equalities and collecting similar terms, we obtain:

a × b = (a₂b₃ − a₃b₂) i + (a₃b₁ − a₁b₃) j + (a₁b₂ − a₂b₁) k,

meaning that the three scalar components of the resulting vector s = s₁i + s₂j + s₃k = a × b are

s₁ = a₂b₃ − a₃b₂,
s₂ = a₃b₁ − a₁b₃,
s₃ = a₁b₂ − a₂b₁.

Using column vectors, we can represent the same result as follows:
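These component formulas are easy to verify numerically; a short sketch with NumPy (example vectors are illustrative):

```python
import numpy as np

a = np.array([2.0, 3.0, 4.0])
b = np.array([5.0, 6.0, 7.0])

s = np.array([a[1] * b[2] - a[2] * b[1],   # s1 = a2*b3 - a3*b2
              a[2] * b[0] - a[0] * b[2],   # s2 = a3*b1 - a1*b3
              a[0] * b[1] - a[1] * b[0]])  # s3 = a1*b2 - a2*b1
print(s)                # [-3.  6. -3.]
print(np.cross(a, b))   # the same result from NumPy's built-in
```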

Matrix notation

The cross product can also be expressed as the formal determinant: [note 1] [2]

a × b = | i   j   k  |
        | a₁  a₂  a₃ |
        | b₁  b₂  b₃ |

This determinant can be computed using Sarrus's rule or cofactor expansion. Using Sarrus's rule, it expands to

a × b = (a₂b₃ i + a₃b₁ j + a₁b₂ k) − (a₃b₂ i + a₁b₃ j + a₂b₁ k).

Using cofactor expansion along the first row instead, it expands to [11]

a × b = (a₂b₃ − a₃b₂) i − (a₁b₃ − a₃b₁) j + (a₁b₂ − a₂b₁) k,

which gives the components of the resulting vector directly.

Using Levi-Civita tensors

  • In any basis, the cross product a × b is given by the tensorial formula E_ijk aⁱ bʲ, where E_ijk is the covariant Levi-Civita tensor (note the position of the indices). That corresponds to the intrinsic formula given here.
  • In an orthonormal basis having the same orientation as the space, a × b is given by the pseudo-tensorial formula ε_ijk aⁱ bʲ, where ε_ijk is the Levi-Civita symbol (which is a pseudo-tensor). That is the formula used for everyday physics, but it works only in this special case of basis.
  • In any orthonormal basis, a × b is given by the pseudo-tensorial formula (−1)^B ε_ijk aⁱ bʲ, where (−1)^B = ±1 according to whether the basis has the same orientation as the space or not.

The last formula avoids changing the orientation of the space when we invert an orthonormal basis.

Geometric meaning

The magnitude of the cross product can be interpreted as the positive area of the parallelogram having a and b as sides (see Figure 1): [2]

‖a × b‖ = ‖a‖ ‖b‖ sin θ.

Indeed, one can also compute the volume V of a parallelepiped having a, b and c as edges by using a combination of a cross product and a dot product, called scalar triple product (see Figure 2):

V = |a ⋅ (b × c)|.

Since the result of the scalar triple product may be negative, the volume of the parallelepiped is given by its absolute value. For instance,

Because the magnitude of the cross product goes by the sine of the angle between its arguments, the cross product can be thought of as a measure of perpendicularity in the same way that the dot product is a measure of parallelism. Given two unit vectors, their cross product has a magnitude of 1 if the two are perpendicular and a magnitude of zero if the two are parallel. The dot product of two unit vectors behaves just oppositely: it is zero when the unit vectors are perpendicular and 1 if the unit vectors are parallel.

Unit vectors enable two convenient identities: the dot product of two unit vectors yields the cosine (which may be positive or negative) of the angle between the two unit vectors. The magnitude of the cross product of the two unit vectors yields the sine (which will always be positive).

Algebraic properties

If the cross product of two vectors is the zero vector (i.e. a × b = 0 ), then either one or both of the inputs is the zero vector, ( a = 0 or b = 0 ) or else they are parallel or antiparallel ( a ∥ b ) so that the sine of the angle between them is zero ( θ = 0° or θ = 180° and sin θ = 0 ).

The self cross product of a vector is the zero vector:

a × a = 0.

The cross product is anticommutative, distributive over addition, and compatible with scalar multiplication so that

(r a) × b = a × (r b) = r (a × b).

It is not associative, but satisfies the Jacobi identity:

a × (b × c) + b × (c × a) + c × (a × b) = 0.

Distributivity, linearity and Jacobi identity show that the R 3 vector space together with vector addition and the cross product forms a Lie algebra, the Lie algebra of the real orthogonal group in 3 dimensions, SO(3). The cross product does not obey the cancellation law: that is, a × b = a × c with a ≠ 0 does not imply b = c , but only that:

a × (b − c) = 0.

This can be the case where b and c cancel, but additionally where a and b − c are parallel; that is, they are related by a scale factor t, leading to:

b = c + t a.

If, in addition to a × b = a × c and a ≠ 0 as above, it is the case that a ⋅ b = a ⋅ c then

a ⋅ (b − c) = 0 and a × (b − c) = 0.

As b − c cannot be simultaneously parallel (for the cross product to be 0 ) and perpendicular (for the dot product to be 0 ) to a, it must be the case that b and c cancel: b = c .

From the geometrical definition, the cross product is invariant under proper rotations about the axis defined by a × b . In formulae:

(R a) × (R b) = R (a × b), for a rotation matrix R with det(R) = 1.

More generally, the cross product obeys the following identity under matrix transformations:

(M a) × (M b) = (det M) (M⁻¹)ᵀ (a × b),

where M is an invertible 3-by-3 matrix and (M⁻¹)ᵀ is its inverse transpose.

The cross product of two vectors lies in the null space of the 2 × 3 matrix with the vectors as rows:

For the sum of two cross products, the following identity holds:

a × b + c × d = (a − c) × (b − d) + a × d + c × b.

Differentiation

The product rule of differential calculus applies to any bilinear operation, and therefore also to the cross product:

d/dt (a × b) = (da/dt) × b + a × (db/dt),

where a and b are vectors that depend on the real variable t.

Triple product expansion

The cross product is used in both forms of the triple product. The scalar triple product of three vectors is defined as

a ⋅ (b × c).

It is the signed volume of the parallelepiped with edges a, b and c and as such the vectors can be used in any order that's an even permutation of the above ordering. The following therefore are equal:

a ⋅ (b × c) = b ⋅ (c × a) = c ⋅ (a × b).

The vector triple product is the cross product of a vector with the result of another cross product, and is related to the dot product by the following formula:

a × (b × c) = b (a ⋅ c) − c (a ⋅ b).

The mnemonic "BAC minus CAB" is used to remember the order of the vectors in the right hand member. This formula is used in physics to simplify vector calculations. A special case, regarding gradients and useful in vector calculus, is

∇ × (∇ × f) = ∇(∇ ⋅ f) − ∇²f,

where ∇² is the vector Laplacian operator.

Other identities relate the cross product to the scalar triple product:

where I is the identity matrix.
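Both triple products are straightforward to check numerically; a short NumPy sketch (example vectors are illustrative):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([0.0, 1.0, 4.0])
c = np.array([2.0, 0.0, 1.0])

print(np.dot(a, np.cross(b, c)))             # scalar triple product: signed volume
print(np.cross(a, np.cross(b, c)))           # vector triple product ...
print(b * np.dot(a, c) - c * np.dot(a, b))   # ... equals "BAC minus CAB"
```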

Alternative formulation

The cross product and the dot product are related by:

‖a × b‖² = ‖a‖² ‖b‖² − (a ⋅ b)².

The right-hand side is the Gram determinant of a and b, the square of the area of the parallelogram defined by the vectors. This condition determines the magnitude of the cross product. Namely, since the dot product is defined, in terms of the angle θ between the two vectors, as:

a ⋅ b = ‖a‖ ‖b‖ cos θ,

the above given relationship can be rewritten as follows:

‖a × b‖ = ‖a‖ ‖b‖ |sin θ|,

which is the magnitude of the cross product expressed in terms of θ, equal to the area of the parallelogram defined by a and b (see definition above).

The combination of this requirement and the property that the cross product be orthogonal to its constituents a and b provides an alternative definition of the cross product. [13]

Lagrange's identity

The relation

‖a × b‖² = ‖a‖² ‖b‖² − (a ⋅ b)²

can be compared with another relation involving the right-hand side, namely Lagrange's identity expressed as: [14]

Σ₁≤i<j≤n (aᵢbⱼ − aⱼbᵢ)² = ‖a‖² ‖b‖² − (a ⋅ b)²,

where a and b may be n-dimensional vectors. This also shows that the Riemannian volume form for surfaces is exactly the surface element from vector calculus. In the case where n = 3 , combining these two equations results in the expression for the magnitude of the cross product in terms of its components: [15]

‖a × b‖² = (a₂b₃ − a₃b₂)² + (a₃b₁ − a₁b₃)² + (a₁b₂ − a₂b₁)².

The same result is found directly using the components of the cross product found from:

a × b = (a₂b₃ − a₃b₂, a₃b₁ − a₁b₃, a₁b₂ − a₂b₁).

In R 3 , Lagrange's equation is a special case of the multiplicativity | vw | = | v || w | of the norm in the quaternion algebra.

It is a special case of another formula, also sometimes called Lagrange's identity, which is the three dimensional case of the Binet–Cauchy identity: [16] [17]

If a = c and b = d this simplifies to the formula above.

Infinitesimal generators of rotations

The cross product conveniently describes the infinitesimal generators of rotations in R 3 . Specifically, if n is a unit vector in R 3 and R(φ, n) denotes a rotation about the axis through the origin specified by n, with angle φ (measured in radians, counterclockwise when viewed from the tip of n), then

d/dφ |₍φ₌₀₎ R(φ, n) x = n × x

for every vector x in R 3 . The cross product with n therefore describes the infinitesimal generator of the rotations about n. These infinitesimal generators form the Lie algebra so(3) of the rotation group SO(3), and we obtain the result that the Lie algebra R 3 with cross product is isomorphic to the Lie algebra so(3).

Conversion to matrix multiplication

The vector cross product also can be expressed as the product of a skew-symmetric matrix and a vector: [16]

a × b = [a]× b = ([b]×)ᵀ a,

where superscript T refers to the transpose operation, and [a]× is defined by:

[a]× = |  0   −a₃   a₂ |
       |  a₃   0   −a₁ |
       | −a₂   a₁   0  |

The columns [a]×,i of the skew-symmetric matrix for a vector a can also be obtained by calculating the cross product with unit vectors, i.e.:

[a]×,i = a × êᵢ,  i ∈ {1, 2, 3}.

Also, if a is itself expressed as a cross product:

a = c × d,

then

[a]× = d cᵀ − c dᵀ.

Hence, the left hand side equals

Now, for the right hand side,

Evaluation of the right hand side gives

Comparison shows that the left hand side equals the right hand side.
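A minimal NumPy sketch of the skew-symmetric representation (the helper name is illustrative):

```python
import numpy as np

def skew(a):
    # [a]_x, the skew-symmetric matrix with skew(a) @ b == np.cross(a, b).
    return np.array([[0.0, -a[2], a[1]],
                     [a[2], 0.0, -a[0]],
                     [-a[1], a[0], 0.0]])

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])
print(skew(a) @ b)      # [-3.  6. -3.]
print(np.cross(a, b))   # identical
```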

This result can be generalized to higher dimensions using geometric algebra. In particular in any dimension bivectors can be identified with skew-symmetric matrices, so the product between a skew-symmetric matrix and vector is equivalent to the grade-1 part of the product of a bivector and vector. [18] In three dimensions bivectors are dual to vectors so the product is equivalent to the cross product, with the bivector instead of its vector dual. In higher dimensions the product can still be calculated but bivectors have more degrees of freedom and are not equivalent to vectors. [18]

This notation is also often much easier to work with, for example, in epipolar geometry.

From the general properties of the cross product it follows immediately that

[a]× a = 0

and from the fact that [a]× is skew-symmetric it follows that

bᵀ [a]× b = 0
The above-mentioned triple product expansion (bac–cab rule) can be easily proven using this notation.

As mentioned above, the Lie algebra R 3 with cross product is isomorphic to the Lie algebra so(3), whose elements can be identified with the 3×3 skew-symmetric matrices. The map a → [a]× provides an isomorphism between R 3 and so(3). Under this map, the cross product of 3-vectors corresponds to the commutator of 3×3 skew-symmetric matrices.
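The correspondence is easy to verify numerically. The following sketch (illustrative; the helper name `skew` is chosen here) builds [a]× and checks both the matrix form of the cross product and the commutator correspondence:

```python
import numpy as np

def skew(a):
    """Return the skew-symmetric matrix [a]_x, so that skew(a) @ b == a x b."""
    return np.array([[0.0, -a[2], a[1]],
                     [a[2], 0.0, -a[0]],
                     [-a[1], a[0], 0.0]])

a = np.array([1.0, 2.0, 3.0])
b = np.array([-4.0, 5.0, 0.5])

# Cross product as a matrix-vector product.
assert np.allclose(skew(a) @ b, np.cross(a, b))

# a -> [a]_x sends cross products to matrix commutators in so(3).
commutator = skew(a) @ skew(b) - skew(b) @ skew(a)
assert np.allclose(commutator, skew(np.cross(a, b)))
```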

These matrices share the following properties (writing u = a / ‖a‖ for the unit vector along a):

  • [u]×³ = −[u]×
  • [u]×² = u uᵀ − I, so that −[u]×² = I − u uᵀ is the orthogonal projection onto the plane perpendicular to u

For other properties of orthogonal projection matrices, see projection (linear algebra).

Index notation for tensors

The cross product can alternatively be defined in terms of the Levi-Civita symbol εijk and a dot product η^mi (= δ^mi for an orthonormal basis), which are useful in converting vector notation for tensor applications:

c = a × b  ⇔  c^m = η^mi ε_ijk a^j b^k

in which repeated indices are summed over the values 1 to 3. This representation is another form of the skew-symmetric representation of the cross product:

([a]×)_ij = −ε_ijk a_k
In classical mechanics: representing the cross product with the Levi-Civita symbol can make mechanical symmetries obvious when physical systems are isotropic. (An example: consider a particle in a Hooke's law potential in three-space, free to oscillate in three dimensions; none of these dimensions is "special" in any sense, so symmetries lie in the cross-product-represented angular momentum, which are made clear by the above-mentioned Levi-Civita representation.)
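For concreteness, the index definition can be spelled out in code. The sketch below (illustrative; it assumes an orthonormal basis, so η reduces to the Kronecker delta) builds εijk explicitly and contracts it with NumPy's einsum:

```python
import numpy as np

# Levi-Civita symbol eps[i, j, k] in three dimensions.
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0   # even permutations of (1, 2, 3)
    eps[i, k, j] = -1.0  # odd permutations

a = np.array([1.0, -2.0, 3.0])
b = np.array([4.0, 0.5, -1.0])

# (a x b)_i = eps_ijk a_j b_k, summing over the repeated indices j and k.
c = np.einsum('ijk,j,k->i', eps, a, b)
assert np.allclose(c, np.cross(a, b))
```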

Mnemonic

The word "xyzzy" can be used to remember the definition of the cross product.

The second and third equations can be obtained from the first by simply vertically rotating the subscripts, xyzx . The problem, of course, is how to remember the first equation, and two options are available for this purpose: either to remember the relevant two diagonals of Sarrus's scheme (those containing i), or to remember the xyzzy sequence.

Since the first diagonal in Sarrus's scheme is just the main diagonal of the above-mentioned 3×3 matrix, the first three letters of the word xyzzy can be very easily remembered.

Cross visualization

Similarly to the mnemonic device above, a "cross" or X can be visualized between the two vectors in the equation. This may be helpful for remembering the correct cross product formula.

Applications

The cross product has applications in various contexts: e.g., it is used in computational geometry, physics and engineering. A non-exhaustive list of examples follows.

Computational geometry

The cross product appears in the calculation of the distance between two skew lines (lines not in the same plane) in three-dimensional space.

The cross product can be used to calculate the normal for a triangle or polygon, an operation frequently performed in computer graphics. For example, the winding of a polygon (clockwise or anticlockwise) about a point within the polygon can be calculated by triangulating the polygon (like spoking a wheel) and summing the angles (between the spokes) using the cross product to keep track of the sign of each angle.

In the plane, the position of a point p3 = (x3, y3) relative to the directed segment from p1 = (x1, y1) to p2 = (x2, y2) is determined by the sign of

P = (x2 − x1)(y3 − y1) − (y2 − y1)(x3 − x1)

which is the signed length of the cross product of the two vectors.
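A minimal implementation of this orientation test (an illustrative sketch; the function name is chosen here):

```python
def orientation(p1, p2, p3):
    """Signed length of the cross product of p1->p2 and p1->p3.

    Positive for a counterclockwise turn, negative for clockwise,
    zero when the three points are collinear.
    """
    return ((p2[0] - p1[0]) * (p3[1] - p1[1])
            - (p2[1] - p1[1]) * (p3[0] - p1[0]))

print(orientation((0, 0), (1, 0), (0, 1)))  # 1: counterclockwise
print(orientation((0, 0), (0, 1), (1, 0)))  # -1: clockwise
```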

The cross product is used in calculating the volume of a polyhedron such as a tetrahedron or parallelepiped.
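For instance, the volume of a tetrahedron is one sixth of the absolute value of the scalar triple product of the three edge vectors leaving one vertex. A small illustrative sketch:

```python
import numpy as np

def tetrahedron_volume(p0, p1, p2, p3):
    # One sixth of |(p1-p0) . ((p2-p0) x (p3-p0))|.
    a, b, c = p1 - p0, p2 - p0, p3 - p0
    return abs(np.dot(a, np.cross(b, c))) / 6.0

pts = [np.array(p, dtype=float)
       for p in [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]]
print(tetrahedron_volume(*pts))  # 1/6 for the unit corner tetrahedron
```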

Angular momentum and torque

The angular momentum L of a particle about a given origin is defined as:

L = r × p

where r is the position vector of the particle relative to the origin and p is the linear momentum of the particle.

In the same way, the moment M of a force FB applied at point B around point A is given as:

M_A = r_AB × FB

where r_AB is the vector from point A to point B. In mechanics the moment of a force is also called torque and written as τ.

Since position r , linear momentum p and force F are all true vectors, both the angular momentum L and the moment of a force M are pseudovectors or axial vectors.
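A minimal numerical illustration of both definitions (the values are arbitrary examples):

```python
import numpy as np

r = np.array([1.0, 0.0, 0.0])  # position of the particle
p = np.array([0.0, 2.0, 0.0])  # linear momentum
F = np.array([0.0, 0.0, 3.0])  # applied force

L = np.cross(r, p)    # angular momentum about the origin
tau = np.cross(r, F)  # torque (moment of force) about the origin

print(L)    # [0. 0. 2.]
print(tau)  # [ 0. -3.  0.]
```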

Rigid body

The cross product frequently appears in the description of rigid motions. Two points P and Q on a rigid body can be related by:

v_P = v_Q + ω × (r_P − r_Q)

where r is a point's position, v is its velocity and ω is the body's angular velocity.

Lorentz force

The cross product is used to describe the Lorentz force experienced by a moving electric charge qe:

F = qe (E + v × B)

Since velocity v , force F and electric field E are all true vectors, the magnetic field B is a pseudovector.
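A direct transcription of the force law (illustrative example values for the charge, fields and velocity):

```python
import numpy as np

q = 1.602e-19                     # charge in coulombs
E = np.array([0.0, 0.0, 1.0e3])   # electric field, V/m
B = np.array([0.0, 1.0e-2, 0.0])  # magnetic field, T
v = np.array([1.0e5, 0.0, 0.0])   # velocity, m/s

F = q * (E + np.cross(v, B))      # Lorentz force
print(F)                          # [0. 0. 3.204e-16]
```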

Other

In vector calculus, the cross product is used to define the formula for the vector operator curl, ∇ × F.

The trick of rewriting a cross product in terms of a matrix multiplication appears frequently in epipolar and multi-view geometry, in particular when deriving matching constraints.

The cross product can be defined in terms of the exterior product. It can be generalized to an external product in other than three dimensions. [19] This view allows for a natural geometric interpretation of the cross product. In exterior algebra the exterior product of two vectors is a bivector. A bivector is an oriented plane element, in much the same way that a vector is an oriented line element. Given two vectors a and b, one can view the bivector a ∧ b as the oriented parallelogram spanned by a and b. The cross product is then obtained by taking the Hodge star of the bivector a ∧ b, mapping 2-vectors to vectors:

a × b = ⋆(a ∧ b)

This can be thought of as the oriented multi-dimensional element "perpendicular" to the bivector. Only in three dimensions is the result an oriented one-dimensional element (a vector), whereas, for example, in four dimensions the Hodge dual of a bivector is two-dimensional (another bivector). So, only in three dimensions can a vector cross product of a and b be defined as the vector dual to the bivector a ∧ b: it is perpendicular to the bivector, with orientation dependent on the coordinate system's handedness, and has the same magnitude relative to the unit normal vector as a ∧ b has relative to the unit bivector: precisely the properties described above.

Consistency

When the laws of physics are written as equations, it is possible to make an arbitrary choice of the coordinate system, including handedness. One should be careful never to write down an equation where the two sides do not behave equally under all transformations that need to be considered. For example, if one side of the equation is a cross product of two polar vectors, one must take into account that the result is an axial vector. Therefore, for consistency, the other side must also be an axial vector. More generally, the result of a cross product may be either a polar vector or an axial vector, depending on the type of its operands (polar vectors or axial vectors). Namely, polar vectors and axial vectors are interrelated in the following ways under application of the cross product:

  • polar vector × polar vector = axial vector
  • axial vector × axial vector = axial vector
  • polar vector × axial vector = polar vector
  • axial vector × polar vector = polar vector

Because the cross product may also be a polar vector, it may not change direction with a mirror image transformation. This happens, according to the above relationships, if one of the operands is a polar vector and the other one is an axial vector (e.g., one that is itself the cross product of two polar vectors). For instance, a vector triple product involving three polar vectors is a polar vector.
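This behaviour can be demonstrated numerically: under an improper orthogonal map M (a mirror), the cross product of two polar vectors satisfies (Ma) × (Mb) = det(M) M (a × b). A short sketch (illustrative, using a reflection through the xy-plane):

```python
import numpy as np

M = np.diag([1.0, 1.0, -1.0])   # mirror reflection through the xy-plane
a = np.array([1.0, 2.0, 3.0])   # polar vector (example values)
b = np.array([-1.0, 0.5, 2.0])  # polar vector

# Axial behaviour: reflecting the factors is NOT the same as
# reflecting the cross product itself (an extra sign appears).
assert np.allclose(np.cross(M @ a, M @ b), -M @ np.cross(a, b))

# Polar x axial = polar: an axial vector w transforms as w -> -M w,
# and then the cross product transforms like an ordinary (polar) vector.
w = np.cross(a, b)  # axial vector
assert np.allclose(np.cross(M @ a, -(M @ w)), M @ np.cross(a, w))
```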

A handedness-free approach is possible using exterior algebra.

The paradox of the orthonormal basis

Let (i, j, k) be an orthonormal basis. The vectors i, j and k do not depend on the orientation of the space. They can even be defined in the absence of any orientation. They cannot therefore be axial vectors. But if i and j are polar vectors, then k is an axial vector, since i × j = k (or, with the opposite orientation convention, j × i = k). This is a paradox.

"Axial" and "polar" are physical qualifiers for physical vectors, that is to say vectors which represent physical quantities such as the velocity or the magnetic field. The vectors i, j and k are mathematical vectors, neither axial nor polar. In mathematics, the cross-product of two vectors is a vector. There is no contradiction.

Generalizations

There are several ways to generalize the cross product to higher dimensions.

Lie algebra

The cross product can be seen as one of the simplest Lie products, and is thus generalized by Lie algebras, which are axiomatized as binary products satisfying the axioms of multilinearity, skew-symmetry, and the Jacobi identity. Many Lie algebras exist, and their study is a major field of mathematics, called Lie theory.

Quaternions

The cross product can also be described in terms of quaternions. In general, if a vector [a1, a2, a3] is represented as the quaternion a1i + a2j + a3k , the cross product of two vectors can be obtained by taking their product as quaternions and deleting the real part of the result. The real part will be the negative of the dot product of the two vectors.
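A minimal sketch of this recipe (the quaternion-multiplication helper is written out here for illustration):

```python
def quat_mul(p, q):
    """Hamilton product of quaternions given as (w, x, y, z) tuples."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw)

a = (0.0, 1.0, 2.0, 3.0)   # pure quaternion for the vector (1, 2, 3)
b = (0.0, -4.0, 5.0, 0.5)  # pure quaternion for the vector (-4, 5, 0.5)

w, x, y, z = quat_mul(a, b)
print((x, y, z))  # vector part: the cross product, here (-14.0, -12.5, 13.0)
print(-w)         # negated real part: the dot product, here 7.5
```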

Octonions

A cross product for 7-dimensional vectors can be obtained in the same way by using the octonions instead of the quaternions. The nonexistence of nontrivial vector-valued cross products of two vectors in other dimensions is related to the result from Hurwitz's theorem that the only normed division algebras are the ones with dimension 1, 2, 4, and 8.

Exterior product

In general dimension, there is no direct analogue of the binary cross product that yields specifically a vector. There is however the exterior product, which has similar properties, except that the exterior product of two vectors is now a 2-vector instead of an ordinary vector. As mentioned above, the cross product can be interpreted as the exterior product in three dimensions by using the Hodge star operator to map 2-vectors to vectors. The Hodge dual of the exterior product yields an (n − 2) -vector, which is a natural generalization of the cross product in any number of dimensions.

The exterior product and dot product can be combined (through summation) to form the geometric product in geometric algebra.

External product

As mentioned above, the cross product can be interpreted in three dimensions as the Hodge dual of the exterior product. In any finite n dimensions, the Hodge dual of the exterior product of n − 1 vectors is a vector. So, instead of a binary operation, in arbitrary finite dimensions, the cross product is generalized as the Hodge dual of the exterior product of some given n − 1 vectors. This generalization is called external product. [20]

Commutator product

The commutator product can be generalised to arbitrary multivectors in three dimensions, which results in a multivector consisting only of elements of grades 1 (1-vectors/true vectors) and 2 (2-vectors/pseudovectors). While the commutator product of two 1-vectors is indeed the same as the exterior product and yields a 2-vector, the commutator of a 1-vector and a 2-vector yields a true vector, corresponding instead to the left and right contractions in geometric algebra. The commutator product of two 2-vectors has no corresponding equivalent product, which is why the commutator product is defined in the first place for 2-vectors. Furthermore, the commutator triple product of three 2-vectors is the same as the vector triple product of the same three pseudovectors in vector algebra. However, the commutator triple product of three 1-vectors in geometric algebra is instead the negative of the vector triple product of the same three true vectors in vector algebra.

Generalizations to higher dimensions are provided by the same commutator product of 2-vectors in higher-dimensional geometric algebras, but the 2-vectors are no longer pseudovectors. Just as the commutator product/cross product of 2-vectors in three dimensions corresponds to the simplest Lie algebra, the 2-vector subalgebras of higher-dimensional geometric algebra equipped with the commutator product also correspond to Lie algebras. [22] Also as in three dimensions, the commutator product could be further generalised to arbitrary multivectors.

Multilinear algebra

In the context of multilinear algebra, the cross product can be seen as the (1,2)-tensor (a mixed tensor, specifically a bilinear map) obtained from the 3-dimensional volume form, [note 2] a (0,3)-tensor, by raising an index.

In the same way, in higher dimensions one may define generalized cross products by raising indices of the n-dimensional volume form, which is a (0, n)-tensor. The most direct generalizations of the cross product are to define either:

  • a (1, n − 1)-tensor, which takes as input n − 1 vectors, and gives as output 1 vector – an (n − 1)-ary vector-valued product, or
  • an (n − 2, 2)-tensor, which takes as input 2 vectors and gives as output a skew-symmetric tensor of rank n − 2 – a binary product with rank n − 2 tensor values. One can also define (k, n − k)-tensors for other k.

These products are all multilinear and skew-symmetric, and can be defined in terms of the determinant and parity.

This formula is identical in structure to the determinant formula for the normal cross product in R 3 except that the row of basis vectors is the last row in the determinant rather than the first. The reason for this is to ensure that the ordered vectors (v1, …, vn−1, v1 × ⋯ × vn−1) have a positive orientation with respect to (e1, …, en). If n is odd, this modification leaves the value unchanged, so this convention agrees with the normal definition of the binary product. In the case that n is even, however, the distinction must be kept. This (n − 1)-ary form enjoys many of the same properties as the vector cross product: it is alternating and linear in its arguments, it is perpendicular to each argument, and its magnitude gives the hypervolume of the region bounded by the arguments. And just like the vector cross product, it can be defined in a coordinate independent way as the Hodge dual of the wedge product of the arguments.
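The determinant description translates directly into code. The sketch below (illustrative; it appends each standard basis vector as the last row, matching the convention just described) computes the (n − 1)-ary product and checks the n = 3 and n = 4 cases:

```python
import numpy as np

def cross_n(*vectors):
    """(n-1)-ary cross product in R^n, basis row placed last."""
    n = len(vectors) + 1
    vs = np.array(vectors, dtype=float)
    assert vs.shape == (n - 1, n)
    # The i-th component is the determinant with the i-th standard
    # basis vector appended as the final row.
    return np.array([np.linalg.det(np.vstack([vs, np.eye(n)[i]]))
                     for i in range(n)])

# n = 3: reduces to the ordinary binary cross product (n odd).
print(cross_n([1, 0, 0], [0, 1, 0]))                      # [0. 0. 1.]

# n = 4: a ternary product, perpendicular to all three arguments.
print(cross_n([1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0]))  # [0. 0. 0. 1.]
```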

History

In 1773, Joseph-Louis Lagrange introduced the component form of both the dot and cross products in order to study the tetrahedron in three dimensions. [23] In 1843, William Rowan Hamilton introduced the quaternion product, and with it the terms "vector" and "scalar". Given two quaternions [0, u] and [0, v], where u and v are vectors in R 3, their quaternion product can be summarized as [−u ⋅ v, u × v]. James Clerk Maxwell used Hamilton's quaternion tools to develop his famous electromagnetism equations, and for this and other reasons quaternions for a time were an essential part of physics education.

In 1878 William Kingdon Clifford published his Elements of Dynamic which was an advanced text for its time. He defined the product of two vectors [24] to have magnitude equal to the area of the parallelogram of which they are two sides, and direction perpendicular to their plane.

Oliver Heaviside and Josiah Willard Gibbs also felt that quaternion methods were too cumbersome, often requiring the scalar or vector part of a result to be extracted. Thus, about forty years after the quaternion product, the dot product and cross product were introduced — to heated opposition. Pivotal to (eventual) acceptance was the efficiency of the new approach, allowing Heaviside to reduce the equations of electromagnetism from Maxwell's original 20 to the four commonly seen today. [25]

Largely independent of this development, and largely unappreciated at the time, Hermann Grassmann created a geometric algebra not tied to dimension two or three, with the exterior product playing a central role. In 1853 Augustin-Louis Cauchy, a contemporary of Grassmann, published a paper on algebraic keys which were used to solve equations and had the same multiplication properties as the cross product. [26] [27] Clifford combined the algebras of Hamilton and Grassmann to produce Clifford algebra, where in the case of three-dimensional vectors the bivector produced from two vectors dualizes to a vector, thus reproducing the cross product.

The cross notation and the name "cross product" began with Gibbs. Originally they appeared in privately published notes for his students in 1881 as Elements of Vector Analysis. The utility for mechanics was noted by Aleksandr Kotelnikov. Gibbs's notation and the name "cross product" later reached a wide audience through Vector Analysis, a textbook by Edwin Bidwell Wilson, a former student. Wilson rearranged material from Gibbs's lectures, together with material from publications by Heaviside, Föppl, and Hamilton. He divided vector analysis into three parts:

First, that which concerns addition and the scalar and vector products of vectors. Second, that which concerns the differential and integral calculus in its relations to scalar and vector functions. Third, that which contains the theory of the linear vector function.

Two main kinds of vector multiplications were defined, and they were named as follows:

  • The direct, scalar, or dot product of two vectors
  • The skew, vector, or cross product of two vectors

Several kinds of triple products and products of more than three vectors were also examined. The above-mentioned triple product expansion was also included.


The simplest example of a vector space is the trivial one: {0}, which contains only the zero vector (see the third axiom in the Vector space article). Both vector addition and scalar multiplication are trivial. A basis for this vector space is the empty set, so that {0} is the 0-dimensional vector space over F. Every vector space over F contains a subspace isomorphic to this one.

The zero vector space is conceptually different from the null space of a linear operator L, which is the kernel of L. (Incidentally, the null space of L is a zero space if and only if L is injective.)

The next simplest example is the field F itself. Vector addition is just field addition, and scalar multiplication is just field multiplication. These properties can be used to show that a field is a vector space over itself. Any non-zero element of F serves as a basis, so F is a 1-dimensional vector space over itself.

The field is a rather special vector space; in fact it is the simplest example of a commutative algebra over F. Also, F has just two subspaces: {0} and F itself.

The original example of a vector space is the following. For any positive integer n, the set of all n-tuples of elements of F forms an n-dimensional vector space over F, sometimes called coordinate space and denoted F n. An element of F n is written

x = (x1, x2, …, xn)

where each xi is an element of F. The operations on F n are defined by

x + y = (x1 + y1, x2 + y2, …, xn + yn)
α x = (α x1, α x2, …, α xn)

Commonly, F is the field of real numbers, in which case we obtain real coordinate space R n . The field of complex numbers gives complex coordinate space C n . The a + bi form of a complex number shows that C itself is a two-dimensional real vector space with coordinates (a,b). Similarly, the quaternions and the octonions are respectively four- and eight-dimensional real vector spaces, and C n is a 2n-dimensional real vector space.

The vector space F n has a standard basis:

e1 = (1, 0, …, 0)
e2 = (0, 1, …, 0)
⋮
en = (0, 0, …, 1)

where 1 denotes the multiplicative identity in F.

Let F ∞ denote the space of infinite sequences of elements from F such that only finitely many elements are nonzero. That is, if we write an element of F ∞ as

x = (x1, x2, x3, …)
then only a finite number of the xi are nonzero (i.e., the coordinates become all zero after a certain point). Addition and scalar multiplication are given as in finite coordinate space. The dimensionality of F ∞ is countably infinite. A standard basis consists of the vectors ei which contain a 1 in the i-th slot and zeros elsewhere. This vector space is the coproduct (or direct sum) of countably many copies of the vector space F.

Note the role of the finiteness condition here. One could consider arbitrary sequences of elements in F, which also constitute a vector space with the same operations, often denoted by F N (see below). F N is the product of countably many copies of F.

By Zorn's lemma, F N has a basis (there is no obvious basis), and that basis has uncountably many elements. Since the dimensions are different, F N is not isomorphic to F ∞. It is worth noting that F N is (isomorphic to) the dual space of F ∞, because a linear map T from F ∞ to F is determined uniquely by its values T(ei) on the basis elements of F ∞, and these values can be arbitrary. Thus one sees that a vector space need not be isomorphic to its double dual if it is infinite dimensional, in contrast to the finite dimensional case.

Starting from n vector spaces, or a countably infinite collection of them, each with the same field, we can define the product space as above.

Let F m×n denote the set of m×n matrices with entries in F. Then F m×n is a vector space over F. Vector addition is just matrix addition and scalar multiplication is defined in the obvious way (by multiplying each entry by the same scalar). The zero vector is just the zero matrix. The dimension of F m×n is mn. One possible choice of basis is the matrices with a single entry equal to 1 and all other entries 0.

When m = n the matrix is square and matrix multiplication of two such matrices produces a third. This vector space of dimension n 2 forms an algebra over a field.

One variable

The set of polynomials with coefficients in F is a vector space over F, denoted F[x]. Vector addition and scalar multiplication are defined in the obvious manner. If the degree of the polynomials is unrestricted then the dimension of F[x] is countably infinite. If instead one restricts to polynomials with degree less than or equal to n, then we have a vector space with dimension n + 1.

One possible basis for F[x] is the monomial basis 1, x, x², x³, …: the coordinates of a polynomial with respect to this basis are its coefficients, and the map sending a polynomial to the sequence of its coefficients is a linear isomorphism from F[x] to the infinite coordinate space F ∞.

The vector space of polynomials with real coefficients and degree less than or equal to n is often denoted by Pn.

Several variables

The set of polynomials in several variables with coefficients in F is a vector space over F, denoted F[x1, x2, …, xr]. Here r is the number of variables.

Let X be a non-empty arbitrary set and V an arbitrary vector space over F. The space of all functions from X to V is a vector space over F under pointwise addition and scalar multiplication. That is, let f : X → V and g : X → V denote two functions, and let α ∈ F. We define

(f + g)(x) = f(x) + g(x)
(α f)(x) = α f(x)
where the operations on the right hand side are those in V. The zero vector is given by the constant function sending everything to the zero vector in V. The space of all functions from X to V is commonly denoted V X .

If X is finite and V is finite-dimensional then V X has dimension |X|(dim V), otherwise the space is infinite-dimensional (uncountably so if X is infinite).

Many of the vector spaces that arise in mathematics are subspaces of some function space. We give some further examples.

Generalized coordinate space

Let X be an arbitrary set. Consider the space of all functions from X to F which vanish on all but a finite number of points in X. This space is a vector subspace of F X , the space of all possible functions from X to F. To see this, note that the union of two finite sets is finite, so that the sum of two functions in this space will still vanish outside a finite set.

The space described above is commonly denoted (F X )0 and is called generalized coordinate space for the following reason. If X is the set of numbers between 1 and n then this space is easily seen to be equivalent to the coordinate space F n . Likewise, if X is the set of natural numbers, N, then this space is just F ∞ .

A canonical basis for (F X )0 is the set of functions {δx | x ∈ X} defined by

δx(y) = 1 if y = x, and δx(y) = 0 otherwise.
The dimension of (F X )0 is therefore equal to the cardinality of X. In this manner we can construct a vector space of any dimension over any field. Furthermore, every vector space is isomorphic to one of this form. Any choice of basis determines an isomorphism by sending the basis onto the canonical one for (F X )0.

Generalized coordinate space may also be understood as the direct sum of |X| copies of F (i.e. one for each point in X):

(F X )0 = ⊕x∈X F
The finiteness condition is built into the definition of the direct sum. Contrast this with the direct product of |X| copies of F which would give the full function space F X .

Linear maps

An important example arising in the context of linear algebra itself is the vector space of linear maps. Let L(V,W) denote the set of all linear maps from V to W (both of which are vector spaces over F). Then L(V,W) is a subspace of W V, the space of all functions from V to W, since it is closed under addition and scalar multiplication.

Note that L(F n ,F m ) can be identified with the space of matrices F m×n in a natural way. In fact, by choosing appropriate bases for finite-dimensional spaces V and W, L(V,W) can also be identified with F m×n . This identification normally depends on the choice of basis.

Continuous functions

If X is some topological space, such as the unit interval [0,1], we can consider the space of all continuous functions from X to R. This is a vector subspace of R X, since the sum of any two continuous functions is continuous and any scalar multiple of a continuous function is continuous.

Differential equations

The subset of the space of all functions from R to R consisting of (sufficiently differentiable) functions that satisfy a certain differential equation is a subspace of R R if the equation is linear. This is because differentiation is a linear operation, i.e., (a f + b g)′ = a f ′ + b g′, where ′ is the differentiation operator.
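This subspace property can be illustrated numerically for the linear equation y″ + y = 0 (an illustrative sketch; the finite-difference check and tolerance are choices made here):

```python
import numpy as np

t = np.linspace(0.0, 2.0 * np.pi, 2001)
h = t[1] - t[0]

def second_derivative(y):
    # Central finite differences on the interior grid points.
    return (y[2:] - 2.0 * y[1:-1] + y[:-2]) / h**2

y1, y2 = np.sin(t), np.cos(t)  # two known solutions of y'' + y = 0
y = 2.5 * y1 - 1.75 * y2       # an arbitrary linear combination

# The combination satisfies the same linear equation (up to grid error).
residual = second_derivative(y) + y[1:-1]
assert np.max(np.abs(residual)) < 1e-5
```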

Suppose K is a subfield of F (cf. field extension). Then F can be regarded as a vector space over K by restricting scalar multiplication to elements in K (vector addition is defined as normal). The dimension of this vector space, if it exists, [a] is called the degree of the extension. For example, the complex numbers C form a two-dimensional vector space over the real numbers R. Likewise, the real numbers R form a vector space over the rational numbers Q which has (uncountably) infinite dimension, if a Hamel basis exists. [b]

If V is a vector space over F it may also be regarded as vector space over K. The dimensions are related by the formula

dimKV = (dimFV)(dimKF)

For example C n , regarded as a vector space over the reals, has dimension 2n.

Apart from the trivial case of a zero-dimensional space over any field, a vector space over a field F has a finite number of elements if and only if F is a finite field and the vector space has a finite dimension. Thus we have Fq, the unique finite field (up to isomorphism) with q elements. Here q must be a power of a prime (q = p m with p prime). Then any n-dimensional vector space V over Fq will have q n elements. Note that the number of elements in V is also a power of a prime (because a power of a prime power is again a prime power). The primary example of such a space is the coordinate space (Fq) n .

These vector spaces are of critical importance in the representation theory of finite groups, number theory, and cryptography.



As $Y$ is a $6$-dimensional linear space over $\mathbb{F}$, it is sufficient to prove that $\{u, v, T(u), T(v), T^2(u), T^2(v)\}$ is linearly independent. Suppose that $a_u, a_v, b_u, b_v, c_u, c_v \in \mathbb{F}$ are such that

$a_u u + a_v v + b_u T(u) + b_v T(v) + c_u T^2(u) + c_v T^2(v) = 0.$

Applying $T^2$ to both sides of this equality, you get $a_u = a_v = 0$ according to the hypothesis. Then applying $T$ to the remaining terms, $b_u = b_v = 0$. And applying $T$ again, $c_u = c_v = 0$.

