Articles

Introduction to Vector Field Chapter - Mathematics


Hurricanes are huge storms that can produce tremendous amounts of damage to life and property, especially when they reach land. Predicting where and when they will strike and how strong the winds will be is of great importance for preparing for protection or evacuation. Scientists rely on studies of rotational vector fields for their forecasts.

In this chapter, we learn to model new kinds of integrals over fields such as magnetic fields, gravitational fields, or velocity fields. We also learn how to calculate the work done on a charged particle traveling through a magnetic field, the work done on a particle with mass traveling through a gravitational field, and the volume per unit time of water flowing through a net dropped in a river.

All these applications are based on the concept of a vector field, which we explore in this chapter. Vector fields have many applications because they can be used to model real fields such as electromagnetic or gravitational fields. A deep understanding of physics or engineering is impossible without an understanding of vector fields. Furthermore, vector fields have mathematical properties that are worthy of study in their own right. In particular, vector fields can be used to develop several higher-dimensional versions of the Fundamental Theorem of Calculus.


GeoGebra (named for "geometry" and "algebra" together) is a very powerful tool and is free to use online. Follow the link above to its 3D graphing tool, and just type in a function, such as f(t)=(3t,2t+1,sin(6t)) to graph it.

GeoGebra automatically converts this to its own internal code for you, which looks like the following, and lets you specify the lower and upper bounds on the variable t .
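
For instance, typing the function above produces a curve command along the lines of Curve(3t, 2t + 1, sin(6t), t, 0, 10); the exact form and the default bounds shown here are illustrative rather than copied from GeoGebra's output, and the last two arguments are the lower and upper bounds on t.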

You can type such code in directly, with or without the f = , if you prefer.

You can also use GeoGebra to graph 2D parametric plots.


Vectors and Scalars

A vector quantity, or vector, provides information about not just the magnitude but also the direction of the quantity. When giving directions to a house, it isn't enough to say that it's 10 miles away, but the direction of those 10 miles must also be provided for the information to be useful. Variables that are vectors will be indicated with a boldface variable, although it is common to see vectors denoted with small arrows above the variable.

Just as we don't say the other house is -10 miles away, the magnitude of a vector is always a positive number, or rather the absolute value of the "length" of the vector (although the quantity may not be a length; it may be a velocity, acceleration, force, etc.). A negative sign in front of a vector doesn't indicate a change in the magnitude, but rather in the direction of the vector.

In the examples above, distance is the scalar quantity (10 miles) but displacement is the vector quantity (10 miles to the northeast). Similarly, speed is a scalar quantity while velocity is a vector quantity.

A unit vector is a vector that has a magnitude of one. A vector representing a unit vector is usually also boldface, although it will have a carat (^) above it to indicate the unit nature of the variable. The unit vector x, when written with a carat, is generally read as "x-hat" because the carat looks kind of like a hat on the variable.

The zero vector, or null vector, is a vector with a magnitude of zero. It is written as 0 in this article.
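
As a quick numerical illustration of magnitude, unit vectors, and the zero vector (a minimal sketch assuming NumPy; the example vector is arbitrary):

    import numpy as np

    v = np.array([3.0, -4.0])            # an arbitrary example vector
    magnitude = np.linalg.norm(v)        # always non-negative: 5.0 here
    unit = v / magnitude                 # unit vector: same direction, length 1

    print(magnitude)                     # 5.0
    print(np.linalg.norm(-v))            # negating v flips its direction, not its magnitude
    print(np.linalg.norm(unit))          # 1.0
    print(np.linalg.norm(np.zeros(2)))   # the zero vector has magnitude 0.0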


Vectors

You use vectors in almost every activity you do. A vector is a quantity that has size and direction. The fancy word for size is "magnitude".

Examples of everyday activities that involve vectors include:

  • Breathing (your diaphragm muscles exert a force that has a magnitude and direction)
  • Walking (you walk at a velocity of around 6 km/h in the direction of the bathroom)
  • Going to school (the bus has a length of about 20 m and is headed towards your school)
  • Lunch (the displacement from your class room to the canteen is about 40 m in a northerly direction)

Each vector quantity has a magnitude and a direction.


Introduction to Vector Field Chapter - Mathematics

We need to start this chapter off with the definition of a vector field as they will be a major component of both this chapter and the next. Let’s start off with the formal definition of a vector field.

Definition

A vector field on two (or three) dimensional space is a function $\vec F$ that assigns to each point $(x,y)$ (or $(x,y,z)$) a two (or three) dimensional vector given by $\vec F(x,y)$ (or $\vec F(x,y,z)$).

That may not make a lot of sense, but most people do know what a vector field is, or at least they’ve seen a sketch of a vector field. If you’ve seen a current sketch giving the direction and magnitude of a flow of a fluid or the direction and magnitude of the winds then you’ve seen a sketch of a vector field.

The standard notation for the function $\vec F$ is,

\[\begin{align*}\vec F(x,y) & = P(x,y)\,\vec i + Q(x,y)\,\vec j\\ \vec F(x,y,z) & = P(x,y,z)\,\vec i + Q(x,y,z)\,\vec j + R(x,y,z)\,\vec k\end{align*}\]

depending on whether we're in two or three dimensions. The functions $P$, $Q$, and $R$ (if it is present) are sometimes called scalar functions.

Let’s take a quick look at a couple of examples.

Okay, to graph the vector field $\vec F(x,y) = -y\,\vec i + x\,\vec j$ we need to get some "values" of the function. This means plugging in some points into the function. Here are a couple of evaluations.

\[\begin{align*}\vec F\left(\frac{1}{2},\frac{1}{2}\right) & = -\frac{1}{2}\,\vec i + \frac{1}{2}\,\vec j\\ \vec F\left(\frac{1}{2},-\frac{1}{2}\right) & = -\left(-\frac{1}{2}\right)\vec i + \frac{1}{2}\,\vec j = \frac{1}{2}\,\vec i + \frac{1}{2}\,\vec j\\ \vec F\left(\frac{3}{2},\frac{1}{4}\right) & = -\frac{1}{4}\,\vec i + \frac{3}{2}\,\vec j\end{align*}\]

So, just what do these evaluations tell us? Well the first one tells us that at the point $\left(\frac{1}{2},\frac{1}{2}\right)$ we will plot the vector $-\frac{1}{2}\,\vec i + \frac{1}{2}\,\vec j$. Likewise, the third evaluation tells us that at the point $\left(\frac{3}{2},\frac{1}{4}\right)$ we will plot the vector $-\frac{1}{4}\,\vec i + \frac{3}{2}\,\vec j$.

We can continue in this fashion plotting vectors for several points and we’ll get the following sketch of the vector field.

If we want significantly more points plotted, then it is usually best to use a computer aided graphing system such as Maple or Mathematica. Here is a sketch with many more vectors included that was generated with Mathematica.
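
If Maple or Mathematica is not available, a short script gives a comparable picture. Here is a minimal sketch (assuming Python with NumPy and Matplotlib installed) that plots the field $\vec F(x,y) = -y\,\vec i + x\,\vec j$ from the example above on a grid of points:

    import numpy as np
    import matplotlib.pyplot as plt

    # grid of points at which to evaluate the field
    x, y = np.meshgrid(np.linspace(-1, 1, 15), np.linspace(-1, 1, 15))

    # components of F(x, y) = -y i + x j
    u = -y
    v = x

    plt.quiver(x, y, u, v)            # draw one arrow per grid point
    plt.gca().set_aspect("equal")
    plt.title("F(x, y) = -y i + x j")
    plt.show()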

In the case of three dimensional vector fields it is almost always better to use Maple, Mathematica, or some other such tool. Despite that let's go ahead and do a couple of evaluations anyway, this time for the field $\vec F(x,y,z) = 2x\,\vec i - 2y\,\vec j - 2x\,\vec k$.

\[\begin{align*}\vec F(1,-3,2) & = 2\,\vec i + 6\,\vec j - 2\,\vec k\\ \vec F(0,5,3) & = -10\,\vec j\end{align*}\]

Notice that (z) only affects the placement of the vector in this case and does not affect the direction or the magnitude of the vector. Sometimes this will happen so don’t get excited about it when it does.

Here are a couple of sketches generated by Mathematica. The sketch on the left is from the "front" and the sketch on the right is from "above".

Now that we've seen a couple of vector fields let's notice that we've already seen a vector field function. In the second chapter we looked at the gradient vector. Recall that given a function $f(x,y,z)$ the gradient vector is defined by,

\[\nabla f = \left\langle f_x, f_y, f_z \right\rangle\]

This is a vector field and is often called a gradient vector field.

In these cases, the function $f(x,y,z)$ is often called a scalar function to differentiate it from the vector field.

Note that we only gave the gradient vector definition for a three dimensional function, but don't forget that there is also a two dimensional definition. All that we need to do is drop off the third component of the vector.

Here is the gradient vector field for the function $f(x,y) = x^2\sin(5y)$.

\[\nabla f = \left\langle 2x\sin(5y),\,5x^2\cos(5y) \right\rangle\]

There isn’t much to do here other than take the gradient.

Let's do another example that will illustrate the relationship between the gradient vector field of a function and its contours. We will use the function $f(x,y) = x^2 + y^2$.

Recall that the contours for a function are nothing more than curves defined by,

\[f(x,y) = k\]

for various values of $k$. So, for our function the contours are defined by the equation,

\[x^2 + y^2 = k\]

and so they are circles centered at the origin with radius $\sqrt{k}$.

Here is the gradient vector field for this function.

\[\nabla f(x,y) = 2x\,\vec i + 2y\,\vec j\]

Here is a sketch of several of the contours as well as the gradient vector field.

Notice that the vectors of the vector field are all orthogonal (or perpendicular) to the contours. This will always be the case when we are dealing with the contours of a function as well as its gradient vector field.

The $k$'s we used for the graph above were 1.5, 3, 4.5, 6, 7.5, 9, 10.5, 12, and 13.5. Now notice that as we increased $k$ by 1.5 the contour curves get closer together, and that as the contour curves get closer together the larger the vectors become. In other words, the closer the contour curves are (as $k$ is increased by a fixed amount) the faster the function is changing at that point. Also recall that the direction of fastest change for a function is given by the gradient vector at that point. Therefore, it should make sense that the two ideas should match up as they do here.
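
A picture along these lines can be reproduced with a short script (a sketch assuming NumPy and Matplotlib; the contour levels are the $k$ values listed above):

    import numpy as np
    import matplotlib.pyplot as plt

    # contours of f(x, y) = x^2 + y^2 for the k's used above
    x, y = np.meshgrid(np.linspace(-4, 4, 200), np.linspace(-4, 4, 200))
    plt.contour(x, y, x**2 + y**2, levels=[1.5, 3, 4.5, 6, 7.5, 9, 10.5, 12, 13.5])

    # gradient vector field of f, plotted on a coarser grid
    xq, yq = np.meshgrid(np.linspace(-4, 4, 13), np.linspace(-4, 4, 13))
    plt.quiver(xq, yq, 2 * xq, 2 * yq)

    plt.gca().set_aspect("equal")
    plt.show()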

The final topic of this section is that of conservative vector fields. A vector field $\vec F$ is called a conservative vector field if there exists a function $f$ such that $\vec F = \nabla f$. If $\vec F$ is a conservative vector field then the function, $f$, is called a potential function for $\vec F$.

All this definition is saying is that a vector field is conservative if it is also a gradient vector field for some function.

For instance the vector field $\vec F = y\,\vec i + x\,\vec j$ is a conservative vector field with a potential function of $f(x,y) = xy$ because $\nabla f = \left\langle y, x \right\rangle$.

On the other hand, $\vec F = -y\,\vec i + x\,\vec j$ is not a conservative vector field since there is no function $f$ such that $\vec F = \nabla f$. If you're not sure that you believe this at this point be patient, we will be able to prove this in a couple of sections. In that section we will also show how to find the potential function for a conservative vector field.
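
A small symbolic check of both claims (a sketch assuming SymPy; the cross-partial comparison used for the second field is one standard test, developed later in the text):

    import sympy as sp

    x, y = sp.symbols("x y")

    # F = y i + x j is conservative: f(x, y) = x*y is a potential function
    f = x * y
    print(sp.diff(f, x), sp.diff(f, y))   # prints: y x, i.e. the gradient is <y, x>

    # F = -y i + x j: compare dP/dy and dQ/dx for P = -y, Q = x
    P, Q = -y, x
    print(sp.diff(P, y), sp.diff(Q, x))   # prints: -1 1, never equal, so F is not a gradient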


Math Insight

Many physical force fields (vector fields) that you are familiar with are conservative vector fields. The term comes from the fact that some kind of energy is conserved by these force fields. The important consequence for us, though, is that as you move an object from point $\vec a$ to point $\vec b$, the work performed by a conservative force field does not depend on the path taken from point $\vec a$ to point $\vec b$. For this reason, we often refer to such vector fields as path-independent vector fields. Path-independent and conservative are just two terms that mean the same thing.

For example, imagine you have to carry a heavy box from your front door to your bedroom upstairs. Because of the gravity (which can be viewed as a force field), you have to do work to carry the box up.

Here we mean the scientific definition of work, which is force times distance. Although it may feel like work to move the box from one room to another on the same floor, the actual work done against gravity is zero.

Next, imagine that you have two stairways in your house: a gently sloping front staircase, and a steep back staircase. Since the gravitational field is a conservative vector field, the work you must do against gravity is exactly the same if you take the front or the back staircase. As long as the box starts in the same position and ends in the same position, the total work is the same. (In fact, if you decided to first carry the box to your neighbor's house, then carry it up and down your backyard tree, and then in your back door before taking it upstairs, it wouldn't make a difference for this scientific definition of work. The net work you performed against gravity would be the same.)

The line integral of a vector field can be viewed as the total work performed by the force field on an object moving along the path. For the above gravity example, we discussed the work you performed against the gravity field, which is exactly opposite the work performed by the gravity field. We'd need to multiply the line integral by $-1$ to get the work you performed against the gravity field, but that's a technical point we don't need to worry much about.

The vector field $\vec F(x,y)=(x,y)$ is a conservative vector field. (You can read how to test for path-independence later. For now, take it on faith.) It is illustrated by the black arrows in the below figure. We want to compute the integral
\[\int_C \vec F \cdot d\vec s\]
where $C$ is a path from the point $\vec a=(3,-3)$ (shown by the cyan square) to the point $\vec b=(2,4)$ (shown by the magenta square). Since $\vec F$ is path-independent, we don't need to know anything else about the path $C$ to compute the line integral. Later, you'll learn to compute that the value of the integral is 1, as shown by the magenta line on the slider below the figure.

The below applet demonstrates the path-independence of $\vec F$, as one can see that the integrals along three different paths give the same value. The vector field appears to be path-independent, as promised. (You'd have to check all the infinite number of possible paths from all points $\vec a$ to all points $\vec b$ to determine that $\vec F$ was really path-independent. Fortunately, you'll learn some simpler methods.)
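
A quick numerical way to see this path-independence (a minimal sketch assuming NumPy; the two paths below are arbitrary choices joining $\vec a$ and $\vec b$):

    import numpy as np

    def line_integral(F, path, n=2001):
        """Approximate the line integral of F along path(t), t in [0, 1]."""
        t = np.linspace(0.0, 1.0, n)
        pts = path(t)                               # shape (2, n)
        dp = np.gradient(pts, t, axis=1)            # derivative of the path
        integrand = np.sum(F(pts) * dp, axis=0)     # F(c(t)) . c'(t)
        return np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t))

    F = lambda p: p                                 # the field F(x, y) = (x, y)
    a, b = np.array([3.0, -3.0]), np.array([2.0, 4.0])

    straight = lambda t: a[:, None] + (b - a)[:, None] * t
    bent = lambda t: straight(t) + np.array([[0.0], [5.0]]) * np.sin(np.pi * t)

    print(line_integral(F, straight))   # approximately 1
    print(line_integral(F, bent))       # approximately 1 as well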


Introduction to Vector Field Chapter - Mathematics

This chapter is concerned with applying calculus in the context of vector fields. A two-dimensional vector field is a function $f$ that maps each point $(x,y)$ in $R^2$ to a two-dimensional vector $\langle u,v\rangle$, and similarly a three-dimensional vector field maps $(x,y,z)$ to $\langle u,v,w\rangle$. Since a vector has no position, we typically indicate a vector field in graphical form by placing the vector $f(x,y)$ with its tail at $(x,y)$. Figure 16.1.1 shows a representation of the vector field $f(x,y)=\langle -x/\sqrt{x^2+y^2+4},\, y/\sqrt{x^2+y^2+4}\rangle$. For such a graph to be readable, the vectors must be fairly short, which is accomplished by using a different scale for the vectors than for the axes. Such graphs are thus useful for understanding the sizes of the vectors relative to each other but not their absolute size.

Vector fields have many important applications, as they can be used to represent many physical quantities: the vector at a point may represent the strength of some force (gravity, electricity, magnetism) or a velocity (wind speed or the velocity of some other fluid).

We have already seen a particularly important kind of vector field: the gradient. Given a function $f(x,y)$, recall that the gradient is $\langle f_x(x,y),f_y(x,y)\rangle$, a vector that depends on (is a function of) $x$ and $y$. We usually picture the gradient vector with its tail at $(x,y)$, pointing in the direction of maximum increase. Vector fields that are gradients have some particularly nice properties, as we will see. An important example is
$${\bf F}= \left\langle {-x\over (x^2+y^2+z^2)^{3/2}},\ {-y\over (x^2+y^2+z^2)^{3/2}},\ {-z\over (x^2+y^2+z^2)^{3/2}} \right\rangle,$$
which points from the point $(x,y,z)$ toward the origin and has length
$${\sqrt{x^2+y^2+z^2}\over (x^2+y^2+z^2)^{3/2}} = {1\over\left(\sqrt{x^2+y^2+z^2}\right)^2},$$
which is the reciprocal of the square of the distance from $(x,y,z)$ to the origin; in other words, ${\bf F}$ is an "inverse square law". The vector field ${\bf F}$ is a gradient:
$${\bf F}= \nabla {1\over\sqrt{x^2+y^2+z^2}}, \qquad (16.1.1)$$
which turns out to be extremely useful.
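
Equation 16.1.1 is easy to verify symbolically (a short sketch assuming SymPy):

    import sympy as sp

    x, y, z = sp.symbols("x y z")
    r = sp.sqrt(x**2 + y**2 + z**2)

    # gradient of 1/r, component by component
    grad = [sp.simplify(sp.diff(1 / r, var)) for var in (x, y, z)]
    print(grad)   # [-x/(x**2 + y**2 + z**2)**(3/2), -y/(...), -z/(...)]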



Vector fields on subsets of Euclidean space

Given a subset S of R^n, a vector field is represented by a vector-valued function V : S → R^n in standard Cartesian coordinates (x1, …, xn). If each component of V is continuous, then V is a continuous vector field, and more generally V is a C^k vector field if each component of V is k times continuously differentiable.

A vector field can be visualized as assigning a vector to individual points within an n-dimensional space. [1]

Given two C^k vector fields V, W defined on S and a real-valued C^k function f defined on S, the two operations scalar multiplication and vector addition

(fV)(p) := f(p)V(p)
(V + W)(p) := V(p) + W(p)

define the module of C^k vector fields over the ring of C^k functions, where the multiplication of the functions is defined pointwise (therefore, it is commutative with the multiplicative identity being f_id(p) := 1).

Coordinate transformation law

In physics, a vector is additionally distinguished by how its coordinates change when one measures the same vector with respect to a different background coordinate system. The transformation properties of vectors distinguish a vector as a geometrically distinct entity from a simple list of scalars, or from a covector.

Thus, suppose that (x1, …, xn) is a choice of Cartesian coordinates, in terms of which the components of the vector V are

V_x = (V_{1,x}, …, V_{n,x}),

and suppose that (y1, …, yn) are n functions of the xi defining a different coordinate system. Then the components of the vector V in the new coordinates are required to satisfy the transformation law

V_{i,y} = Σ_{j=1}^{n} (∂y_i/∂x_j) V_{j,x}.     (1)

Such a transformation law is called contravariant. A similar transformation law characterizes vector fields in physics: specifically, a vector field is a specification of n functions in each coordinate system subject to the transformation law (1) relating the different coordinate systems.
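
For a linear change of coordinates y = Ax the partial derivatives ∂y_i/∂x_j are just the entries of A, so the law (1) becomes matrix multiplication. A minimal numerical sketch (assuming NumPy; the rotation is an arbitrary example of a new coordinate system):

    import numpy as np

    theta = np.pi / 6
    A = np.array([[np.cos(theta), -np.sin(theta)],   # y = A x, so dy_i/dx_j = A[i, j]
                  [np.sin(theta),  np.cos(theta)]])

    V_x = np.array([2.0, 1.0])   # components of V in the x coordinates
    V_y = A @ V_x                # transformation law (1)

    print(V_y)
    print(np.linalg.norm(V_x), np.linalg.norm(V_y))   # a rotation preserves the length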

Vector fields are thus contrasted with scalar fields, which associate a number or scalar to every point in space, and are also contrasted with simple lists of scalar fields, which do not transform under coordinate changes.

Vector fields on manifolds

If the manifold M is smooth or analytic—that is, the change of coordinates is smooth (analytic)—then one can make sense of the notion of smooth (analytic) vector fields. The collection of all smooth vector fields on a smooth manifold M is often denoted by Γ(TM) or C^∞(M, TM) (especially when thinking of vector fields as sections); the collection of all smooth vector fields is also denoted by 𝔛(M) (a fraktur "X").

  • A vector field for the movement of air on Earth will associate for every point on the surface of the Earth a vector with the wind speed and direction for that point. This can be drawn using arrows to represent the wind; the length (magnitude) of the arrow will be an indication of the wind speed. A "high" on the usual barometric pressure map would then act as a source (arrows pointing away), and a "low" would be a sink (arrows pointing towards), since air tends to move from high pressure areas to low pressure areas.
  • Velocity field of a moving fluid. In this case, a velocity vector is associated to each point in the fluid.
  • Streamlines, streaklines and pathlines are 3 types of lines that can be made from (time-dependent) vector fields.
  • Magnetic fields. The field lines can be revealed using small iron filings.
  • Maxwell's equations allow us to use a given set of initial and boundary conditions to deduce, for every point in Euclidean space, a magnitude and direction for the force experienced by a charged test particle at that point; the resulting vector field is the electromagnetic field.
  • A gravitational field generated by any massive object is also a vector field. For example, the gravitational field vectors for a spherically symmetric body would all point towards the sphere's center with the magnitude of the vectors reducing as radial distance from the body increases.

Gradient field in Euclidean spaces

Vector fields can be constructed out of scalar fields using the gradient operator (denoted by the del: ∇). [4]

A vector field V defined on an open set S is called a gradient field or a conservative field if there exists a real-valued function (a scalar field) f on S such that

V = ∇f = (∂f/∂x1, ∂f/∂x2, …, ∂f/∂xn).

The associated flow is called the gradient flow, and is used in the method of gradient descent.

The path integral along any closed curve γ (γ(0) = γ(1)) in a conservative field is zero:

∮_γ V(x) · dx = ∮_γ ∇f(x) · dx = f(γ(1)) − f(γ(0)) = 0.
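
A numerical illustration of this fact (a sketch assuming NumPy; the potential f and the closed curve are arbitrary choices):

    import numpy as np

    # gradient field V = grad f for f(x, y) = x**2 * y, so V = (2xy, x**2)
    def V(x, y):
        return np.array([2 * x * y, x**2])

    # a closed curve: the unit circle, gamma(0) = gamma(1)
    t = np.linspace(0.0, 1.0, 4001)
    gamma = np.array([np.cos(2 * np.pi * t), np.sin(2 * np.pi * t)])
    dgamma = np.gradient(gamma, t, axis=1)

    integrand = np.sum(V(*gamma) * dgamma, axis=0)                       # V(gamma(t)) . gamma'(t)
    print(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t)))   # approximately 0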

Central field in Euclidean spaces

A C^∞ vector field over R^n \ {0} is called a central field if

V(T(p)) = T(V(p))    for all T ∈ O(n, R),

where O(n, R) is the orthogonal group. We say central fields are invariant under orthogonal transformations around 0.

The point 0 is called the center of the field.

Since orthogonal transformations are actually rotations and reflections, the invariance conditions mean that vectors of a central field are always directed towards, or away from, 0; this is an alternate (and simpler) definition. A central field is always a gradient field, since defining it on one semiaxis and integrating gives an antigradient.

Line integral

A common technique in physics is to integrate a vector field along a curve, also called determining its line integral. Intuitively this is summing up all vector components in line with the tangents to the curve, expressed as their scalar products. For example, given a particle in a force field (e.g. gravitation), where each vector at some point in space represents the force acting there on the particle, the line integral along a certain path is the work done on the particle, when it travels along this path. Intuitively, it is the sum of the scalar products of the force vector and the small tangent vector in each point along the curve.

The line integral is constructed analogously to the Riemann integral and it exists if the curve is rectifiable (has finite length) and the vector field is continuous.

Given a vector field V and a curve γ, parametrized by t in [a, b] (where a and b are real numbers), the line integral is defined as

∫_γ V(x) · dx = ∫_a^b V(γ(t)) · γ′(t) dt.
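
As a concrete instance of this definition (a sketch assuming SymPy; the field V(x, y) = (−y, x) and the quarter circle are arbitrary choices):

    import sympy as sp

    t = sp.symbols("t")

    # curve gamma(t) = (cos t, sin t) for t in [0, pi/2]
    gamma = sp.Matrix([sp.cos(t), sp.sin(t)])
    V = sp.Matrix([-gamma[1], gamma[0]])          # V(x, y) = (-y, x) evaluated on the curve

    integrand = V.dot(sp.diff(gamma, t))          # V(gamma(t)) . gamma'(t)
    print(sp.integrate(integrand, (t, 0, sp.pi / 2)))   # pi/2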

Divergence

The divergence of a vector field on Euclidean space is a function (or scalar field). In three dimensions, the divergence of a field F = (F1, F2, F3) is defined by

div F = ∇ · F = ∂F1/∂x + ∂F2/∂y + ∂F3/∂z,

with the obvious generalization to arbitrary dimensions. The divergence at a point represents the degree to which a small volume around the point is a source or a sink for the vector flow, a result which is made precise by the divergence theorem.

The divergence can also be defined on a Riemannian manifold, that is, a manifold with a Riemannian metric that measures the length of vectors.
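
A small symbolic computation of a divergence (a sketch assuming SymPy; the field is an arbitrary example):

    import sympy as sp

    x, y, z = sp.symbols("x y z")
    F1, F2, F3 = x * y, y * z, z * x              # the field F = (xy, yz, zx)

    div_F = sp.diff(F1, x) + sp.diff(F2, y) + sp.diff(F3, z)
    print(div_F)                                  # x + y + z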

Curl in three dimensions

The curl is an operation which takes a vector field and produces another vector field. The curl is defined only in three dimensions, but some properties of the curl can be captured in higher dimensions with the exterior derivative. In three dimensions, it is defined for a field F = (F1, F2, F3) by

curl F = ∇ × F = (∂F3/∂y − ∂F2/∂z, ∂F1/∂z − ∂F3/∂x, ∂F2/∂x − ∂F1/∂y).

The curl measures the density of the angular momentum of the vector flow at a point, that is, the amount to which the flow circulates around a fixed axis. This intuitive description is made precise by Stokes' theorem.
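
A matching sketch for the curl (again assuming SymPy, with the same example field as in the divergence sketch above):

    import sympy as sp

    x, y, z = sp.symbols("x y z")
    F1, F2, F3 = x * y, y * z, z * x              # the field F = (xy, yz, zx)

    curl_F = (sp.diff(F3, y) - sp.diff(F2, z),
              sp.diff(F1, z) - sp.diff(F3, x),
              sp.diff(F2, x) - sp.diff(F1, y))
    print(curl_F)                                 # (-y, -z, -x)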

Index of a vector field

The index of a vector field is an integer that helps to describe the behaviour of a vector field around an isolated zero (i.e., an isolated singularity of the field). In the plane, the index takes the value -1 at a saddle singularity but +1 at a source or sink singularity.

Let the dimension of the manifold on which the vector field is defined be n. Take a small sphere S around the zero so that no other zeros lie in the interior of S. A map from this sphere to a unit sphere of dimension n − 1 can be constructed by dividing each vector on this sphere by its length to form a unit length vector, which is a point on the unit sphere S^(n−1). This defines a continuous map from S to S^(n−1). The index of the vector field at the point is the degree of this map. It can be shown that this integer does not depend on the choice of S, and therefore depends only on the vector field itself.

The index of the vector field as a whole is defined when it has just a finite number of zeroes. In this case, all zeroes are isolated, and the index of the vector field is defined to be the sum of the indices at all zeroes.

The index is not defined at any non-singular point (i.e., a point where the vector is non-zero). It is equal to +1 around a source, and more generally equal to (−1)^k around a saddle that has k contracting dimensions and n − k expanding dimensions. For an ordinary (2-dimensional) sphere in three-dimensional space, it can be shown that the index of any vector field on the sphere must be 2. This shows that every such vector field must have a zero. This implies the hairy ball theorem, which states that if a vector in R^3 is assigned to each point of the unit sphere S^2 in a continuous manner, then it is impossible to "comb the hairs flat", i.e., to choose the vectors in a continuous way such that they are all non-zero and tangent to S^2.

For a vector field on a compact manifold with a finite number of zeroes, the Poincaré-Hopf theorem states that the index of the vector field is equal to the Euler characteristic of the manifold.

Michael Faraday, in his concept of lines of force, emphasized that the field itself should be an object of study, which it has become throughout physics in the form of field theory.

In addition to the magnetic field, other phenomena that were modeled by Faraday include the electrical field and light field.

Consider the flow of a fluid through a region of space. At any given time, any point of the fluid has a particular velocity associated with it; thus there is a vector field associated to any flow. The converse is also true: it is possible to associate a flow to a vector field having that vector field as its velocity.

Given a vector field V defined on S, one defines curves γ(t) on S such that for each t in an interval I,

γ′(t) = V(γ(t)).

By the Picard–Lindelöf theorem, if V is Lipschitz continuous there is a unique C^1 curve γx for each point x in S so that, for some ε > 0,

γx(0) = x and γx′(t) = V(γx(t)) for t ∈ (−ε, +ε).

The curves γx are called integral curves or trajectories (or less commonly, flow lines) of the vector field V and partition S into equivalence classes. It is not always possible to extend the interval (−ε, +ε) to the whole real number line. The flow may for example reach the edge of S in a finite time. In two or three dimensions one can visualize the vector field as giving rise to a flow on S. If we drop a particle into this flow at a point p it will move along the curve γp in the flow depending on the initial point p. If p is a stationary point of V (i.e., the vector field is equal to the zero vector at the point p), then the particle will remain at p.
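
Tracing an integral curve numerically is straightforward (a minimal sketch assuming SciPy; the rotational field V(x, y) = (−y, x) and the starting point p = (1, 0) are arbitrary choices):

    import numpy as np
    from scipy.integrate import solve_ivp

    def V(t, p):                      # the vector field V(x, y) = (-y, x)
        x, y = p
        return [-y, x]

    p0 = [1.0, 0.0]                   # drop a particle into the flow at p = (1, 0)
    sol = solve_ivp(V, (0.0, 2 * np.pi), p0, rtol=1e-8)

    # for this field the trajectory gamma_p traces out the unit circle
    print(sol.y[:, -1])               # approximately back at (1, 0) after t = 2*pi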

Given a smooth function between manifolds, f : M → N, the derivative is an induced map on tangent bundles, f∗ : TM → TN. Given vector fields V : M → TM and W : N → TN, we say that W is f-related to V if the equation W ∘ f = f∗ ∘ V holds.

If Vi is f-related to Wi, i = 1, 2, then the Lie bracket [V1, V2] is f-related to [W1, W2].

Replacing vectors by p-vectors (pth exterior power of vectors) yields p-vector fields; taking the dual space and exterior powers yields differential k-forms, and combining these yields general tensor fields.

Algebraically, vector fields can be characterized as derivations of the algebra of smooth functions on the manifold, which leads to defining a vector field on a commutative algebra as a derivation on the algebra, which is developed in the theory of differential calculus over commutative algebras.


Math Insight

The line integral of a vector field $\vec F$ could be interpreted as the work done by the force field $\vec F$ on a particle moving along the path. The surface integral of a vector field $\vec F$ actually has a simpler explanation. If the vector field $\vec F$ represents the flow of a fluid, then the surface integral of $\vec F$ will represent the amount of fluid flowing through the surface (per unit time).

The amount of the fluid flowing through the surface per unit time is also called the flux of fluid through the surface. For this reason, we often call the surface integral of a vector field a flux integral.

If water is flowing perpendicular to the surface, a lot of water will flow through the surface and the flux will be large. On the other hand, if water is flowing parallel to the surface, water will not flow through the surface, and the flux will be zero. To calculate the total amount of water flowing through the surface, we want to add up the component of the vector $\vec F$ that is perpendicular to the surface.

Let $\vec n$ be a unit normal vector to the surface. The choice of normal vector orients the surface and determines the sign of the fluid flux. The flux of fluid through the surface is determined by the component of $\vec F$ that is in the direction of $\vec n$, i.e. by $\vec F \cdot \vec n$. Note that $\vec F \cdot \vec n$ will be zero if $\vec F$ and $\vec n$ are perpendicular, positive if $\vec F$ and $\vec n$ are pointing the same direction, and negative if $\vec F$ and $\vec n$ are pointing in opposite directions.

Let's illustrate this with the function
\[\Phi(u,v) = (u\cos v,\; u\sin v,\; v)\]
that parametrizes a helicoid for $(u,v) \in D = [0,1] \times [0, 2\pi]$. As shown in the following figure, we chose the upward pointing normal vector. (We could have used the downward pointing normal instead. If we did, our fluid flux calculation would have the opposite sign.)


A parametrized helicoid with normal vector. The function $\Phi(u,v) = (u\cos v, u\sin v, v)$ parametrizes a helicoid when $(u,v) \in D$, where $D$ is the rectangle $[0,1] \times [0, 2\pi]$ shown in the first panel. The cyan vector at the blue point $\Phi(u,v)$ is the upward pointing unit normal vector at that point. You can drag the blue point in $D$ or on the helicoid to specify both $u$ and $v$.

Given some fluid flow $\vec F$, if we integrate $\vec F \cdot \vec n$, we will determine the total flux of fluid through the helicoid, counting flux in the direction of $\vec n$ as positive and flux in the opposite direction as negative.

We represent the fluid flow vector field $\vec F$ by magenta arrows in the following applet.


Fluid flow through oriented helicoid. The function $\Phi(u,v) = (u\cos v, u\sin v, v)$ parametrizes a helicoid when $(u,v) \in D$, where $D$ is the rectangle $[0,1] \times [0, 2\pi]$ shown in the first panel. The cyan vector at the blue point $\Phi(u,v)$ is the upward pointing unit normal vector at that point. The magenta vector field represents fluid flow that passes through the surface. In this example, the vector field is the constant $\vec F=(0,1,1)$. You can drag the blue point in $D$ or on the helicoid to specify both $u$ and $v$.

It appears that the fluid is flowing generally in the same direction as $\vec n$ (for the most part $\vec F$ and $\vec n$ are closer to pointing in the same direction than pointing in the opposite direction). However, notice, for example, that when $u=0$ and $v=2\pi$ (or when $u=0$ and $v=0$), the fluid is flowing in the opposite direction of $\vec n$ (at least the flow is closer to the opposite direction than the same direction). At these points, the fluid is crossing the surface in the opposite direction than it is at most points on the surface.

The below figure demonstrates this more clearly. Here, you can see the fluid vector $\vec F$ (in magenta) at the same point as the normal vector (in cyan). The value of the flux $\vec F \cdot \vec n$ across the surface at the blue point is shown in the lower right corner. Note that $\vec F \cdot \vec n$ is usually positive, but is negative at a few points, such as those mentioned above. When $\vec F \cdot \vec n=0$, what is the relationship between the fluid vector $\vec F$ and the surface?


Fluid flow through a point of oriented helicoid. The function $\Phi(u,v) = (u\cos v, u\sin v, v)$ parametrizes a helicoid when $(u,v) \in D$, where $D$ is the rectangle $[0,1] \times [0, 2\pi]$ shown in the first panel. The cyan vector at the blue point $\Phi(u,v)$ is the upward pointing unit normal vector at that point. The magenta vector at that point represents fluid flow that passes through the surface. In this case, the fluid flow is the constant $\vec F=(0,1,1)$ at every point. Even though the fluid flow is constant, the flux through the surface changes, as it is the component of the flow normal to the surface. At the location of the blue point, the flux through the surface, $\vec F \cdot \vec n$, is shown in the lower right corner. You can drag the blue point in $D$ or on the helicoid to specify both $u$ and $v$.

The total flux of fluid flow through the surface $S$, denoted by $\iint_S \vec F \cdot d\vec S$, is the integral of the vector field $\vec F$ over $S$. It is defined as the integral of the scalar function $\vec F \cdot \vec n$ over $S$:
\[\text{total flux} = \iint_S \vec F \cdot d\vec S = \iint_S \vec F \cdot \vec n\, dS.\]
The formula for a surface integral of a scalar function $f$ over a surface $S$ parametrized by $\Phi$ is
\[\iint_S f\, dS = \iint_D f(\Phi(u,v))\left\| \frac{\partial \Phi}{\partial u}(u,v) \times \frac{\partial \Phi}{\partial v}(u,v) \right\| \, du\, dv.\]

Plugging in $f = \vec F \cdot \vec n$, the total flux of the fluid is
\[\iint_S \vec F \cdot d\vec S = \iint_D (\vec F \cdot \vec n)\left\| \frac{\partial \Phi}{\partial u} \times \frac{\partial \Phi}{\partial v} \right\| \, du\, dv.\]

Lastly, the formula for a unit normal vector of the surface is
\[\vec n = \frac{\dfrac{\partial \Phi}{\partial u} \times \dfrac{\partial \Phi}{\partial v}}{\left\| \dfrac{\partial \Phi}{\partial u} \times \dfrac{\partial \Phi}{\partial v} \right\|}.\]
If we plug in this expression for $\vec n$, the $\left\| \frac{\partial \Phi}{\partial u} \times \frac{\partial \Phi}{\partial v} \right\|$ factors cancel, and we obtain the final expression for the surface integral:
\[\iint_S \vec F \cdot d\vec S = \iint_D \vec F(\Phi(u,v)) \cdot \left( \frac{\partial \Phi}{\partial u}(u,v) \times \frac{\partial \Phi}{\partial v}(u,v) \right) du\, dv.\]

Note how the equation for a surface integral is similar to the equation for the line integral of a vector field
\[\int_C \vec F \cdot d\vec s = \int_a^b \vec F(\vec c(t)) \cdot \vec c\,'(t)\, dt.\]
For line integrals, we integrate the component of the vector field in the tangent direction given by $\vec c\,'(t)$. For surface integrals, we integrate the component of the vector field in the normal direction given by $\frac{\partial \Phi}{\partial u}(u,v) \times \frac{\partial \Phi}{\partial v}(u,v)$.
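
As a check of the final formula, the flux of the constant field $\vec F = (0,1,1)$ through the helicoid above can be computed symbolically (a sketch assuming SymPy):

    import sympy as sp

    u, v = sp.symbols("u v")

    Phi = sp.Matrix([u * sp.cos(v), u * sp.sin(v), v])   # the helicoid parametrization
    normal = sp.diff(Phi, u).cross(sp.diff(Phi, v))      # dPhi/du x dPhi/dv, the upward normal
    F = sp.Matrix([0, 1, 1])                             # the constant fluid flow

    flux = sp.integrate(F.dot(normal), (u, 0, 1), (v, 0, 2 * sp.pi))
    print(flux)   # pi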

You can read some examples of calculating surface integrals of vector fields.


Introductory Mathematics For Engineers Lectures In Higher Mathematics

In this post, we will see the book Introductory Mathematics for Engineers: Lectures in Higher Mathematics by A. D. Myškis.

The present book is based on lectures given by the author over a number of years to students of various engineering and physics specialities. The book includes some optional material that can be skipped on a first reading. The corresponding items in the table of contents are marked by an asterisk.

.

The book is composed in such a way that it is possible to use it both for studying in a college under the guidance of a teacher and for self-education. The subject matter of the book is divided into small sections so that the reader could study the material in suitable order and to any extent depending on the profession and the needs of the reader. It is also intended that the book can be used by students taking a correspondence course and by the readers who have some prerequisites in higher mathematics and want to perfect their knowledge by reading some chapters of the book.

.

The book can be of use to readers of various professions dealing with applications of mathematics in their work. Modern applied mathematics includes many important special divisions which are not included in this book. The author intends to write another book devoted to some supplementary topics such as the theory of functions of a complex argument, variational calculus, mathematical physics, some special questions of the theory of ordinary differential equations and so on.

The book has interesting ways to treat affine mappings (pages 344-345) and non-linear mappings (pages 358-359).

The book was translated from the Russian by V. M. Volosov and was first published by Mir in 1972.

Note: quite a few pages are missing from the scan:

56-57 70-71 210-211 240-241 312-313 315 320-321 337-338 338-339-340 418-419 464-465 759-760 764-765

Credits to the original scanner. The original scan was not clean or bookmarked. We cleaned, OCRed and bookmarked the original scan.

Front Cover
Title Page
Preface 5
Contents
Introduction 19
1. The Subject of Mathematics 19
2. The Importance of Mathematics and Mathematical Education 20
3. Abstractness 20
4. Characteristic Features of Higher Mathematics 22
5. Mathematics in the Soviet Union 23
CHAPTER I. VARIABLES AND FUNCTIONS 25
§ 1. Quantities 25
1. Concept of a Quantity 25
2. Dimensions of Quantities 25
3. Constants and Variables 26
4. Number Scale. Slide Rule 27
5. Characteristics of Variables 29
§ 2. Approximate Values of Quantities 32
6. The Notion of an Approximate Value 32
7. Errors 32
8. Writing Approximate Numbers 33
9. Addition and Subtraction of Approximate Numbers 34
10. Multiplication and Division of Approximate Numbers Remarks 36
§ 3. Functions and Graphs 39
11. Functional Relation 39
12. Notation 40
13. Methods of Representing Functions 42
14. Graphs of Functions 45
15. The Domain of Definition of a Function 47
16. Characteristics of Behaviour of Functions 48
17. Algebraic Classification of Functions 51
18. Elementary Functions 53
19. Transforming Graphs 54
20. Implicit Functions 56
21. Inverse Functions 58
§ 4. Review of Basic Functions 60
22. Linear Function 60
23. Quadratic Function 62
24. Power Function 63
25. Linear-Fractional Function 66
26. Logarithmic Function 68
27. Exponential Function 69
28. Hyperbolic Functions 70
29. Trigonometric Functions 72
30. Empirical Formulas 75
CHAPTER II. PLANE ANALYTIC GEOMETRY 78
§ 1. Plane Coordinates 78
1. Cartesian Coordinates 78
2. Some Simple Problems Concerning Cartesian Coordinates 79
3. Polar Coordinates 81
§ 2. Curves in Plane 82
4. Equation of a Curve in Cartesian Coordinates 82
5. Equation of a Curve in Polar Coordinates 84
6. Parametric Representation of Curves and Functions 87
7. Algebraic Curves 90
8. Singular Cases 92
§ 3. First-Order and Second-Order Algebraic Curves 94
9. Curves of the First Order 94
10. Ellipse 96
11. Hyperbola 99
12. Relationship Between Ellipse, Hyperbola and Parabola 102
13. General Equation of a Curve of the Second Order 105
CHAPTER III. LIMIT. CONTINUITY 109
§ 1. Infinitesimal and Infinitely Large Variables 109
1. Infinitesimal Variables 109
2. Properties of Infinitesimals 111
3. Infinitely Large Variables 112
§ 2. Limits 113
4. Definition 113
5. Properties of Limits 115
6. Sum of a Numerical Series 117
§ 3. Comparison of Variables 121
7. Comparison of Infinitesimals 121
8. Properties of Equivalent Infinitesimals 122
9. Important Examples 122
10. Orders of Smallness 124
11. Comparison of Infinitely Large Variables 125
§ 4. Continuous and Discontinuous Functions 125
12. Definition of a Continuous Function 125
13. Points of Discontinuity 126
14. Properties of Continuous Functions 129
15. Some Applications 131
CHAPTER IV. DERIVATIVES, DIFFERENTIALS, INVESTIGATION OF THE BEHAVIOUR OF FUNCTIONS 134
§ 1. Derivative 134
1. Some Problems Leading to the Concept of a Derivative 134
2. Definition of Derivative 136
3. Geometrical Meaning of Derivative 137
4. Basic Properties of Derivatives 139
5. Derivatives of Basic Elementary Functions 142
6. Determining Tangent in Polar Coordinates 146
§ 2. Differential 148
7. Physical Examples 148
8. Definition of Differential and Its Connection with Increment 149
9. Properties of Differential 152
10. Application of Differentials to Approximate Calculations 153
§ 3. Derivatives and Differentials of Higher Orders 155
11. Derivatives of Higher Orders 155
12. Higher-Order Differentials 156
§ 4. L'Hospital's Rule 158
13. Indeterminate Forms of the Type 0/0 158
14. Indeterminate Forms of the Type ∞/∞ 160
§ 5. Taylor's Formula and Series 161
15. Taylor's Formula 161
16. Taylor's Series 163
§ 6. Intervals of Monotonicity. Extremum 165
17. Sign of Derivative 165
18. Points of Extremum 166
19. The Greatest and the Least Values of a Function 168
§ 7. Constructing Graphs of Functions 173
20. Intervals of Convexity of a Graph and Points of Inflection 173
21. Asymptotes of a Graph 174
22. General Scheme for Investigating a Function and Constructing Its Graph 175
CHAPTER V. APPROXIMATING ROOTS OF EQUATIONS. INTERPOLATION 179
§ 1. Approximating Roots of Equations 179
1. Introduction 179
2. Cut-and-Try Method. Method of Chords. Method of Tangents 181
3. Iterative Method 185
4. Formula of Finite Increments 187
5*. Small Parameter Method 189
§ 2. Interpolation 191
6. Lagrange's Interpolation Formula 191
7. Finite Differences and Their Connection with Derivatives 192
8. Newton's Interpolation Formulas 196
9. Numerical Differentiation 198
CHAPTER VI. DETERMINANTS AND SYSTEMS OF LINEAR ALGEBRAIC EQUATIONS 200
§ 1. Determinants 200
1. Definition 200
2. Properties 201
3. Expanding a Determinant in Minors of Its Row or Column 203
§ 2. Systems of Linear Algebraic Equations 206
4. Basic Case 206
5. Numerical Solution 208
6. Singular Case 209
CHAPTER VII. VECTORS 212
§ 1. Linear Operations on Vectors 212
1. Scalar and Vector Quantities 212
2. Addition of Vectors 213
3. Zero Vector and Subtraction of Vectors 215
4. Multiplying a Vector by a Scalar 215
5. Linear Combination of Vectors 216
§ 2. Scalar Product of Vectors 219
6. Projection of Vector on Axis 219
7. Scalar Product 220
8. Properties of Scalar Product 221
§ 3. Cartesian Coordinates in Space 222
9. Cartesian Coordinates in Space 222
10. Some Simple Problems Concerning Cartesian Coordinates 223
§ 4. Vector Product of Vectors 227
11. Orientation of Surface and Vector of an Area 227
12. Vector Product 228
13. Properties of Vector Products 230
14*. Pseudovectors 233
§ 5. Products of Three Vectors 235
15. Triple Scalar Product 235
16. Triple Vector Product 236
§ 6. Linear Spaces 237
17. Concept of Linear Space 237
18. Examples 239
19. Dimension of Linear Space 241
20. Concept of Euclidean Space 244
21. Orthogonality 245
§ 7. Vector Functions of Scalar Argument. Curvature 248
22. Vector Variables 248
23. Vector Functions of Scalar Argument 248
24. Some Notions Related to the Second Derivative 251
25. Osculating Circle 252
26. Evolute and Evolvent 255
CHAPTER VIII. COMPLEX NUMBERS AND FUNCTIONS 259
§ 1. Complex Numbers 259
1. Complex Plane 259
2. Algebraic Operations on Complex Numbers 261
3. Conjugate Complex Numbers 263
4. Euler's Formula 264
5. Logarithms of Complex Numbers 266
§ 2. Complex Functions of a Real Argument 267
6. Definition and Properties 267
7*. Applications to Describing Oscillations 269
§ 3. The Concept of a Function of a Complex Variable 271
8. Factorization of a Polynomial 271
9*. Numerical Methods of Solving Algebraic Equations 273
10. Decomposition of a Rational Fraction into Partial Rational Fractions 277
11*. Some General Remarks on Functions of a Complex Variable 280
CHAPTER IX. FUNCTIONS OF SEVERAL VARIABLES 283
§ 1. Functions of Two Variables 283
1. Methods of Representing 283
2. Domain of Definition 286
3. Linear Function 287
4. Continuity and Discontinuity 288
5. Implicit Functions 291
§ 2. Functions of Arbitrary Number of Variables 291
6. Methods of Representing 291
7. Functions of Three Arguments 292
8. General Case 292
9. Concept of Field 293
§ 3. Partial Derivatives and Differentials of the First Order 294
10. Basic Definitions 294
11. Total Differential 296
12. Derivative of Composite Function 298
13. Derivative of Implicit Function 300
§ 4. Partial Derivatives and Differentials of Higher Orders 303
14. Definitions 303
15. Equality of Mixed Derivatives 304
16. Total Differentials of Higher Order 305
CHAPTER X. SOLID ANALYTIC GEOMETRY 307
§ 1. Space Coordinates 307
1. Coordinate Systems in Space 307
2*. Degrees of Freedom 309
4. Cylinders, Cones and Surfaces of Revolution 314
5. Curves In Space 316
6. Parametric Representation of Surfaces in Space. Parametric Representation of Functions of Several Variables 317
§ 3. Algebraic Surfaces of the First and of the Second Orders 319
7. Algebraic Surfaces of the First Order 319
8. Ellipsoids 322
9. Hyperboloids 324
10. Paraboloids 326
11. General Review of the Algebraic Surfaces of the Second Order 327
CHAPTER XI. MATRICES AND THEIR APPLICATIONS 329
§ 1. Matrices 329
1. Definitions 329
2. Operations on Matrices 331
3. Inverse Matrix 333
4. Eigenvectors and Eigenvalues of a Matrix 335
5. The Rank of a Matrix 337
7. Transformation of the Matrix of a Linear Mapping When the Basis Is Changed 347
8. The Matrix of a Mapping Relative to the Basis Consisting of Its Eigenvectors 350
9. Transforming Cartesian Basis 352
10. Symmetric Matrices 353
§ 3. Quadratic Forms 355
11. Quadratic Forms 355
12. Simplification of Equations of Second-Order Curves and Surfaces 357
§ 4. Non-Linear Mappings 358
13*. General Notions 358
14*. Non-Linear Mapping in the Small 360
15*. Functional Relation Between Functions 362
CHAPTER XII. APPLICATIONS OF PARTIAL DERIVATIVES 365
§ 1. Scalar Field 365
1. Directional Derivative. Gradient 365
2. Level Surfaces 368
3. Implicit Functions of Two Independent Variables 370
4. Plane Fields 371
5. Envelope of One-Parameter Family of Curves 372
§ 2. Extremum of a Function of Several Variables 374
6. Taylor's Formula for a Function of Several Variables 374
7. Extremum 375
8. The Method of Least Squares 380
9*. Curvature of Surfaces 381
10. Conditional Extremum 384
11. Extremum with Unilateral Constraints 388
12*. Numerical Solution of Systems of Equations 390
CHAPTER XIII. INDEFINITE INTEGRAL 393
§ 1. Elementary Methods of Integration 393
1. Basic Definitions 393
2. The Simplest Integrals 394
3. The Simplest Properties of an Indefinite Integral 397
4. Integration by Parts 399
5. Integration by Change of Variable (by Substitution) 402
§ 2. Standard Methods of Integration 404
6. Integration of Rational Functions 405
7. Integration of Irrational Functions Involving Linear and Linear-Fractional Expressions 407
8. Integration of Irrational Expressions Containing Quadratic Trinomials 408
9. Integrals of Binomial Differentials 411
10. Integration of Functions Rationally Involving Trigonometric Functions 412
11. General Remarks 415
CHAPTER XIV. DEFINITE INTEGRAL 417
§ 1. Definition and Basic Properties 417
1. Examples Leading to the Concept of Definite Integral 417
3. Relationship Between Definite Integral and Indefinite Integral 426
4. Basic Properties of Definite Integral 433
5. Integrating Inequalities 436
§ 2. Applications of Definite Integral 436
6. Two Schemes of Application 436
7. Differential Equations with Variables Separable 437
8. Computing Areas of Plane Geometric Figures 443
9. The Arc Length of a Curve 445
10. Computing Volumes of Solids 447
11. Computing Area of Surface of Revolution 448
§ 3. Numerical Integration 448
12. General Remarks 448
13. Formulas of Numerical Integration 450
§ 4. Improper Integrals 454
14. Integrals with Infinite Limits of Integration 455
15. Basic Properties of Integrals with Infinite Limits 464
16. Other Types of Improper Integral 468
17*. Gamma Function 468
18*. Beta Function 471
19*. Principal Value of Divergent Integral 473
§ 5. Integrals Dependent on Parameters 474
20*. Proper Integrals 474
21*. Improper Integrals 476
§ 6. Line Integrals 478
22. Line Integrals of the First Type 482
23. Line Integrals of the Second Type 484
24. Conditions for a Line Integral of the Second Type to Be Independent of the Path of Integration 488
§ 7. The Concept of Generalized Function 488
25*. Delta Function 488
26*. Application to Constructing Influence Function 492
27*. Other Generalized Functions 495
CHAPTER XV. DIFFERENTIAL EQUATIONS 497
§ 1. General Notions 497
1. Examples 497
2. Basic Definitions 498
§ 2. First-Order Differential Equations 500
3. Geometric Meaning 500
4. Integrable Types of Equations 503
5*. Equation for Exponential Function 506
6. Integrating Exact Differential Equations 509
7. Singular Points and Singular Solutions 512
8. Equations Not Solved for the Derivative 516
9. Method of Integration by Means of Differentiation 517
§ 3. Higher-Order Equations and Systems of Differential Equations 519
10. Higher-Order Differential Equations 519
11*. Connection Between Higher-Order Equations and Systems of First-Order Equations 521
12*. Geometric Interpretation of System of First-Order Equations 522
13*. First Integrals 526
§ 4. Linear Equations of General Form 528
14. Homogeneous Linear Equations 528
15. Non-Homogeneous Equations 530
16*. Boundary-Value Problems 535
§ 5. Linear Equations with Constant Coefficients 541
17. Homogeneous Equations 541
18. Non-Homogeneous Equations with Right-Hand Sides of Special Form 545
19. Euler's Equations 548
20*. Operators and the Operator Method of Solving Differential Equations 549
§ 6. Systems of Linear Equations 553
21. Systems of Linear Equations 553
22*. Applications to Testing Lyapunov Stability of Equilibrium State 558
§ 7. Approximate and Numerical Methods of Solving Differential Equations 562
23. Iterative Method 562
24*. Application of Taylor's Series 564
25. Application of Power Series with Undetermined Coefficients 565
26*. Bessel's Functions 566
27*. Small Parameter Method 569
28*. General Remarks on Dependence of Solutions on Parameters 572
29*. Methods of Minimizing Discrepancy 575
30*. Simplification Method 576
31. Euler's Method 578
32. Runge-Kutta Method 580
33. Adams Method 582
34. Milne's Method 583
CHAPTER XVI. Multiple Integrals 585
§ 1. Definition and Basic Properties of Multiple Integrals 585
1. Some Examples Leading to the Notion of a Multiple Integral 585
2. Definition of a Multiple Integral 586
3. Basic Properties of Multiple Integrals 587
4. Methods of Applying Multiple Integrals 589
5. Geometric Meaning of an Integral Over a Plane Region 591
§ 2. Two Types of Physical Quantities 592
6*. Basic Example. Mass and Its Density 592
7*. Quantities Distributed in Space 594
§ 3. Computing Multiple Integrals in Cartesian Coordinates 596
8. Integral Over Rectangle 596
9. Integral Over an Arbitrary Plane Region 599
10. Integral Over an Arbitrary Surface 602
11. Integral Over a Three-Dimensional Region 604
§ 4. Change of Variables in Multiple Integrals 605
12. Passing to Polar Coordinates in Plane 605
13. Passing to Cylindrical and Spherical Coordinates 606
14*. Curvilinear Coordinates in Plane 608
15*. Curvilinear Coordinates in Space 611
16*. Coordinates on a Surface 612
§ 5. Other Types of Multiple Integrals 615
17*. Improper Integrals 615
18*. Integrals Dependent on a Parameter 617
19*. Integrals with Respect to Measure. Generalized Functions 620
20*. Multiple Integrals of Higher Order 622
§ 6. Vector Field 626
21*. Vector Lines 626
22*. The Flux of a Vector Through a Surface 627
23*. Divergence 629
24*. Expressing Divergence in Cartesian Coordinates 632
25. Line Integral and Circulation 634
26*. Rotation 634
27. Green's Formula. Stokes' Formula 638
28*. Expressing Differential Operations on Vector Fields in a Curvilinear Orthogonal Coordinate System 641
29*. General Formula for Transforming Integrals 642
CHAPTER XVII. SERIES 645
§ 1. Number Series 645
1. Positive Series 645
2. Series with Terms of Arbitrary Signs 650
3. Operations on Series 652
4*. Speed of Convergence of a Series 654
5. Series with Complex, Vector and Matrix Terms 658
6. Multiple Series 659
§ 2. Functional Series 661
7. Deviation of Functions 661
8. Convergence of a Functional Series 662
9. Properties of Functional Series 664
§ 3. Power Series 666
10. Interval of Convergence 666
11. Properties of Power Series 667
12. Algebraic Operations on Power Series 671
13. Power Series as a Taylor Series 675
14. Power Series with Complex Terms 676
15*. Bernoullian Numbers 677
16*. Applying Series to Solving Difference Equations 678
17*. Multiple Power Series 680
18*. Functions of Matrices 681
19*. Asymptotic Expansions 685
§ 4. Trigonometric Series 686
20. Orthogonality 686
21. Series in Orthogonal Functions 689
22. Fourier Series 690
23. Expanding a Periodic Function 695
24*. Example. Bessel's Functions as Fourier Coefficients 697
25. Speed of Convergence of a Fourier Series 698
26. Fourier Series in Complex Form 702
27*. Parseval Relation 704
28*. Hilbert Space 706
29*. Orthogonality with Weight Function 708
30*. Multiple Fourier Series 710
31*. Application to the Equation of Oscillations of a String 711
§ 5. Fourier Transformation 713
32*. Fourier Transform 713
33*. Properties of Fourier Transforms 717
34*. Application to Oscillations of Infinite String 719
CHAPTER XVIII. ELEMENTS OF THE THEORY OF PROBABILITY 721
§ 1. Random Events and Their Probabilities 721
1. Random Events 721
2. Probability 722
3. Basic Properties of Probabilities 725
4. Theorem of Multiplication of Probabilities 727
5. Theorem of Total Probability 729
6*. Formulas for the Probability of Hypotheses 730
7. Disregarding Low-Probability Events 731
§ 2. Random Variables 732
8. Definitions 732
9. Examples of Discrete Random Variables 734
10. Examples of Continuous Random Variables 736
11. Joint Distribution of Several Random Variables 737
12. Functions of Random Variables 739
§ 3. Numerical Characteristics of Random Variables 741
13. The Mean Value 741
14. Properties of the Mean Value 742
15. Variance 744
16*. Correlation 746
17. Characteristic Functions 748
§ 4. Applications of the Normal Law 750
18. The Normal Law as the Limiting One 750
19. Confidence Interval 752
20. Data Processing 754
CHAPTER XIX. COMPUTERS 757
§ 1. Two Classes of Computers 757
1. Analogue Computers 758
2. Digital Computers 762
§ 2. Programming 764
3. Number Systems 764
4. Representing Numbers in a Computer 766
5. Instructions 769
6. Examples of Programming 772
Appendix. Equations of Mathematical Physics 780
1*. Derivation of Some Equations 780
2*. Some Other Equations 783
3*. Initial and Boundary Conditions 784
§ 2. Method of Separation of Variables 786
4*. Basic Example 786
5*. Some Other Problems 791
Bibliography 796
Name Index 798
Subject Index 800
List of Symbols 81


Unit Vectors

A unit vector has length 1 unit and can take any direction.

A one-dimensional unit vector is usually written i.

Example 5 - Unit Vector

In the following diagram, we see the unit vector (in red, labeled i) and two other vectors that have been obtained from i using scalar multiplication (2i and 7i).

We have seen how to draw and write vectors. We now learn how to add vectors.