# 1.12: Higher-Order Derivatives

Given two quantities, \(y\) and \(x\), with \(y\) a function of \(x\), we know that the derivative \(\frac{dy}{dx}\) is the rate of change of \(y\) with respect to \(x\). Since \(\frac{dy}{dx}\) is then itself a function of \(x\), we may ask for its rate of change with respect to \(x\), which we call the second-order derivative of \(y\) with respect to \(x\) and denote \(\frac{d^{2}y}{dx^{2}}\).

Example \(\PageIndex{1}\)

If \(y = 4x^{5} - 3x^{2} + 4\), then

\[\frac{dy}{dx} = 20x^{4} - 6x,\]

and so

\[\frac{d^{2}y}{dx^{2}} = 80x^{3} - 6.\]

Of course, we could continue to differentiate: the third derivative of \(y\) with respect to \(x\) is

\[\frac{d^{3}y}{dx^{3}} = 240x^{2},\]

the fourth derivative of \(y\) with respect to \(x\) is

\[\frac{d^{4}y}{dx^{4}} = 480x,\]

and so on.
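These repeated derivatives are easy to check with a computer algebra system; here is a quick sketch using the sympy library (an assumption of this sketch; the text itself does not use software):

```python
import sympy as sp

x = sp.symbols('x')
y = 4*x**5 - 3*x**2 + 4

# sp.diff(y, x, n) returns the nth derivative of y with respect to x.
derivs = [sp.diff(y, x, n) for n in range(1, 5)]
for n, d in enumerate(derivs, start=1):
    print(n, d)
```

The four printed expressions match the derivatives computed by hand above.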

If \(y\) is a function of \(x\) with \(y = f(x)\), then we may also denote the second derivative of \(y\) with respect to \(x\) by \(f''(x)\), the third derivative by \(f'''(x)\), and so on. The prime notation becomes cumbersome after a while, and so we may replace the primes with the corresponding number in parentheses; that is, we may write, for example, \(f''''(x)\) as \(f^{(4)}(x)\).

Example \(\PageIndex{2}\)

If

\[f(x) = \frac{1}{x},\]

then

\[\begin{aligned} f'(x) &= -\frac{1}{x^{2}}, \\[12pt] f''(x) &= \frac{2}{x^{3}}, \\[12pt] f'''(x) &= -\frac{6}{x^{4}}, \end{aligned}\]

and

\[f^{(4)}(x) = \frac{24}{x^{5}}.\]

Exercise \(\PageIndex{1}\)

Find the first, second, and third-order derivatives of \(y = \sin(2x)\).

\(\frac{dy}{dx} = 2\cos(2x)\), \(\frac{d^{2}y}{dx^{2}} = -4\sin(2x)\), \(\frac{d^{3}y}{dx^{3}} = -8\cos(2x)\)

Exercise \(\PageIndex{2}\)

Find the first, second, and third-order derivatives of \(f(x) = \sqrt{4x+1}\).

\(f'(x) = \frac{2}{\sqrt{4x+1}}\), \(f''(x) = -\frac{4}{(4x+1)^{3/2}}\), \(f'''(x) = \frac{24}{(4x+1)^{5/2}}\)

## Acceleration

If \(x\) is the position, at time \(t\), of an object moving along a straight line, then we know that

\[v = \frac{dx}{dt}\]

is the velocity of the object at time \(t\). Since acceleration is the rate of change of velocity, it follows that the acceleration of the object is

\[a = \frac{dv}{dt} = \frac{d^{2}x}{dt^{2}}.\]

Example \(\PageIndex{3}\)

Suppose an object, such as a lead ball, is dropped from a height of 100 meters. Ignoring air resistance, the height of the ball above the earth after \(t\) seconds is given by

\[x(t) = 100 - 4.9t^{2} \text{ meters},\]

as we discussed in Section 1.2. Hence the velocity of the object after \(t\) seconds is

\[v(t) = -9.8t \text{ meters/second}\]

and the acceleration of the object is

\[a(t) = -9.8 \text{ meters/second}^{2}.\]

Thus the acceleration of an object in free fall near the surface of the earth, ignoring air resistance, is constant. Historically, Galileo started with this observation about the acceleration of objects in free fall and worked in the other direction to discover the formulas for velocity and position.
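One can also confirm numerically that differentiating \(x(t)\) once gives \(-9.8t\) and twice gives the constant \(-9.8\); the sketch below (plain Python; the step size \(h\) is an illustrative choice, not from the text) uses central differences:

```python
# Central-difference check that x(t) = 100 - 4.9 t^2 has
# v(t) = -9.8 t and a(t) = -9.8.
def x(t):
    return 100 - 4.9 * t**2

h = 1e-3  # illustrative step size

def v(t):  # first derivative: (x(t+h) - x(t-h)) / (2h)
    return (x(t + h) - x(t - h)) / (2 * h)

def a(t):  # second derivative: (x(t+h) - 2x(t) + x(t-h)) / h^2
    return (x(t + h) - 2 * x(t) + x(t - h)) / h**2

print(v(2.0), a(2.0))
```

For a quadratic, both difference formulas are exact up to rounding, so the printed values agree with \(v(2) = -19.6\) and \(a(2) = -9.8\).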

Exercise \(\PageIndex{3}\)

Suppose an object oscillating at the end of a spring has position \(x = 10\cos(\pi t)\) (measured in centimeters from the equilibrium position) at time \(t\) seconds. Find the acceleration of the object at time \(t = 1.25\).

\(69.79 \text{ cm/sec}^{2}\)
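To see where this number comes from: differentiating twice gives \(a(t) = -10\pi^{2}\cos(\pi t)\), and \(\cos(1.25\pi) = -\sqrt{2}/2\). A quick check in plain Python:

```python
import math

# a(t) = d^2x/dt^2 for x(t) = 10 cos(pi t), so a(t) = -10 * pi**2 * cos(pi t).
def accel(t):
    return -10 * math.pi**2 * math.cos(math.pi * t)

print(round(accel(1.25), 2))  # 69.79 (cm/sec^2)
```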

### Concavity

The second derivative of a function (f) tells us the rate at which the slope of the graph of (f) is changing. Geometrically, this translates into measuring the concavity of the graph of the function.

Definition

We say the graph of a function \(f\) is concave upward on an open interval \((a, b)\) if \(f'\) is an increasing function on \((a, b)\). We say the graph of a function \(f\) is concave downward on an open interval \((a, b)\) if \(f'\) is a decreasing function on \((a, b)\).

To determine the concavity of the graph of a function \(f\), we need to determine the intervals on which \(f'\) is increasing and the intervals on which \(f'\) is decreasing. Hence, from our earlier work, we need to identify when the derivative of \(f'\), that is, \(f''\), is positive and when it is negative.

Theorem \(\PageIndex{1}\)

If \(f\) is twice differentiable on \((a, b)\), then the graph of \(f\) is concave upward on \((a, b)\) if \(f''(x) > 0\) for all \(x\) in \((a, b)\), and concave downward on \((a, b)\) if \(f''(x) < 0\) for all \(x\) in \((a, b)\).

Example \(\PageIndex{4}\)

If \(f(x) = 2x^{3} - 3x^{2} - 12x + 1\), then

\[f'(x) = 6x^{2} - 6x - 12\]

and

\[f''(x) = 12x - 6.\]

Hence \(f''(x) < 0\) when \(x < \frac{1}{2}\) and \(f''(x) > 0\) when \(x > \frac{1}{2}\), and so the graph of \(f\) is concave downward on the interval \(\left(-\infty, \frac{1}{2}\right)\) and concave upward on the interval \(\left(\frac{1}{2}, \infty\right)\). One may see the distinction between concave downward and concave upward very clearly in the graph of \(f\) shown in Figure \(1.12.1\).

We call a point on the graph of a function \(f\) at which the concavity changes, either from upward to downward or from downward to upward, a point of inflection. In the previous example, \(\left(\frac{1}{2}, -\frac{11}{2}\right)\) is a point of inflection.

Exercise \(\PageIndex{4}\)

Find the intervals on which the graph of \(f(x) = 5x^{3} - 3x^{5}\) is concave upward and the intervals on which the graph is concave downward. What are the points of inflection?

Concave upward on \(\left(-\infty, -\frac{1}{\sqrt{2}}\right)\) and \(\left(0, \frac{1}{\sqrt{2}}\right)\); concave downward on \(\left(-\frac{1}{\sqrt{2}}, 0\right)\) and \(\left(\frac{1}{\sqrt{2}}, \infty\right)\); points of inflection: \(\left(-\frac{1}{\sqrt{2}}, -\frac{7\sqrt{2}}{8}\right)\), \((0, 0)\), and \(\left(\frac{1}{\sqrt{2}}, \frac{7\sqrt{2}}{8}\right)\). (Here \(f''(x) = 30x - 60x^{3} = 30x(1 - 2x^{2})\), which changes sign at \(x = 0\) and \(x = \pm\frac{1}{\sqrt{2}}\).)

#### The Second-Derivative Test

Suppose \(c\) is a stationary point of \(f\) and \(f''(c) > 0\). Then, since \(f''\) is the derivative of \(f'\) and \(f'(c) = 0\), for any infinitesimal \(dx \neq 0\),

\[\frac{f'(c+dx) - f'(c)}{dx} = \frac{f'(c+dx)}{dx} > 0.\]

It follows that \(f'(c+dx) > 0\) when \(dx > 0\) and \(f'(c+dx) < 0\) when \(dx < 0\). Hence \(f\) is decreasing to the left of \(c\) and increasing to the right of \(c\), and so \(f\) has a local minimum at \(c\). Similarly, if \(f''(c) < 0\) at a stationary point \(c\), then \(f\) has a local maximum at \(c\). This result is the second-derivative test.

Example \(\PageIndex{5}\)

If \(f(x) = x^{4} - x^{3}\), then

\[f'(x) = 4x^{3} - 3x^{2} = x^{2}(4x - 3)\]

and

\[f''(x) = 12x^{2} - 6x = 6x(2x - 1).\]

Hence \(f\) has stationary points \(x = 0\) and \(x = \frac{3}{4}\). Since

\[f''(0) = 0\]

and

\[f''\left(\frac{3}{4}\right) = \frac{9}{4} > 0,\]

we see that \(f\) has a local minimum at \(x = \frac{3}{4}\). Although the second-derivative test tells us nothing about the nature of the critical point \(x = 0\), we know, since \(f\) has a local minimum at \(x = \frac{3}{4}\), that \(f\) is decreasing on \(\left(0, \frac{3}{4}\right)\) and increasing on \(\left(\frac{3}{4}, \infty\right)\). Moreover, since \(4x - 3 < 0\) for all \(x < 0\), it follows that \(f'(x) < 0\) for all \(x < 0\), and so \(f\) is also decreasing on \((-\infty, 0)\). Hence \(f\) has neither a local maximum nor a local minimum at \(x = 0\). Finally, since \(f''(x) < 0\) for \(0 < x < \frac{1}{2}\) and \(f''(x) > 0\) for all other \(x\), we see that the graph of \(f\) is concave downward on the interval \(\left(0, \frac{1}{2}\right)\) and concave upward on the intervals \((-\infty, 0)\) and \(\left(\frac{1}{2}, \infty\right)\). See Figure \(1.12.2\).
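The computation in this example can be reproduced with sympy (an assumption of this sketch; any CAS would do): solve \(f'(x) = 0\), then evaluate \(f''\) at each stationary point.

```python
import sympy as sp

x = sp.symbols('x')
f = x**4 - x**3

fp = sp.diff(f, x)       # 4*x**3 - 3*x**2
fpp = sp.diff(f, x, 2)   # 12*x**2 - 6*x

stationary = sp.solve(fp, x)                      # stationary points 0 and 3/4
tests = {c: fpp.subs(x, c) for c in stationary}   # second-derivative test values
print(stationary, tests)
```

The test value \(9/4 > 0\) at \(x = 3/4\) confirms the local minimum, while the value \(0\) at \(x = 0\) shows the test is inconclusive there, exactly as in the text.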

Exercise \(\PageIndex{5}\)

Use the second-derivative test to find all local maximums and minimums of

\[f(x) = x + \frac{1}{x}.\]

Local maximum of \(-2\) at \(x = -1\); local minimum of \(2\) at \(x = 1\)

Exercise \(\PageIndex{6}\)

Find all local maximums and minimums of \(g(t) = 5t^{7} - 7t^{5}\).

Local maximum of \(2\) at \(t = -1\); local minimum of \(-2\) at \(t = 1\)


The partial sums of the series 1 + 2 + 3 + 4 + 5 + 6 + ⋯ are 1, 3, 6, 10, 15, etc. The nth partial sum is given by a simple formula:

1 + 2 + 3 + ⋯ + n = n(n + 1)/2.

This equation was known to the Pythagoreans as early as the sixth century BCE. Numbers of this form are called triangular numbers, because they can be arranged as an equilateral triangle.

The infinite sequence of triangular numbers diverges to +∞, so by definition, the infinite series 1 + 2 + 3 + 4 + ⋯ also diverges to +∞. The divergence is a simple consequence of the form of the series: the terms do not approach zero, so the series diverges by the term test.

### Heuristics

The first key insight is that the series of positive numbers 1 + 2 + 3 + 4 + ⋯ closely resembles the alternating series 1 − 2 + 3 − 4 + ⋯. The latter series is also divergent, but it is much easier to work with; there are several classical methods that assign it a value, which have been explored since the 18th century.

In order to transform the series 1 + 2 + 3 + 4 + ⋯ into 1 − 2 + 3 − 4 + ⋯, one can subtract 4 from the second term, 8 from the fourth term, 12 from the sixth term, and so on. The total amount to be subtracted is 4 + 8 + 12 + 16 + ⋯, which is 4 times the original series. These relationships can be expressed using algebra. Whatever the "sum" of the series might be, call it c = 1 + 2 + 3 + 4 + ⋯. Then multiply this equation by 4 and subtract the second equation from the first:

c = 1 + 2 + 3 + 4 + ⋯
4c = 0 + 4 + 0 + 8 + ⋯
c − 4c = 1 − 2 + 3 − 4 + ⋯

so that −3c equals the alternating series.

Generally speaking, it is incorrect to manipulate infinite series as if they were finite sums. For example, if zeroes are inserted into arbitrary positions of a divergent series, it is possible to arrive at results that are not self-consistent, let alone consistent with other methods. In particular, the step 4c = 0 + 4 + 0 + 8 + ⋯ is not justified by the additive identity law alone. For an extreme example, appending a single zero to the front of the series can lead to a different result. 

One way to remedy this situation, and to constrain the places where zeroes may be inserted, is to keep track of each term in the series by attaching a dependence on some function. In the series 1 + 2 + 3 + 4 + ⋯, each term n is just a number. If the term n is promoted to a function n^(−s), where s is a complex variable, then one can ensure that only like terms are added. The resulting series may be manipulated in a more rigorous fashion, and the variable s can be set to −1 later. The implementation of this strategy is called zeta function regularization.

### Zeta function regularization

In zeta function regularization, the series ∑_{n=1}^{∞} n is replaced by the series ∑_{n=1}^{∞} n^(−s). The latter series is an example of a Dirichlet series. When the real part of s is greater than 1, the Dirichlet series converges, and its sum is the Riemann zeta function ζ(s). On the other hand, the Dirichlet series diverges when the real part of s is less than or equal to 1; in particular, the series 1 + 2 + 3 + 4 + ⋯ that results from setting s = −1 does not converge. The benefit of introducing the Riemann zeta function is that it can be defined for other values of s by analytic continuation. One can then define the zeta-regularized sum of 1 + 2 + 3 + 4 + ⋯ to be ζ(−1).
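The value of the analytic continuation at s = −1 can be checked numerically; the sketch below uses the mpmath library (an assumption of this sketch), whose `zeta` implements the analytically continued Riemann zeta function:

```python
from mpmath import mp, zeta

mp.dps = 15  # working precision in decimal digits

# zeta(-1) is defined by analytic continuation, not by the divergent series;
# its value is -1/12 ≈ -0.0833.
val = zeta(-1)
print(val)
```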


Our previous way of finding the derivative was based on its definition,

\[\frac{df}{dx} = \lim_{\Delta x \to 0} \frac{f(x+\Delta x) - f(x)}{\Delta x},\]

such that, if we don't take the limit, we can approximate the derivative by

\[f'_{i} \approx \frac{f_{i+1} - f_{i}}{\Delta x}.\]

Here we look to the right of the current position \(i\) and divide by the interval \(\Delta x\).

Some better notation may be found by the Taylor expansion

\[f(x + \Delta x) = f(x) + f'(x)\,\Delta x + \frac{f''(x)}{2}\,\Delta x^{2} + \cdots,\]

which gives in discrete notation

\[f_{i+1} = f_{i} + f'_{i}\,\Delta x + \frac{f''_{i}}{2}\,\Delta x^{2} + \cdots.\]

The same can be done to obtain the function value at \(i-1\):

\[f_{i-1} = f_{i} - f'_{i}\,\Delta x + \frac{f''_{i}}{2}\,\Delta x^{2} - \cdots.\]

Subtracting the two last equations, i.e. calculating \(f_{i+1} - f_{i-1}\), results in

\[f_{i+1} - f_{i-1} = 2 f'_{i}\,\Delta x + O(\Delta x^{3}),\]

which when neglecting the last term on the right side yields

\[f'_{i} \approx \frac{f_{i+1} - f_{i-1}}{2\,\Delta x}.\]

This is similar to the formula we obtained from the definition of the derivative. Here, however, we use the function values to the left and the right of the position \(i\) and twice the interval, which actually improves accuracy.

One may continue this type of derivation to obtain higher-order approximations of the first derivative with better accuracy. For that purpose you may calculate \(f_{i+2} - f_{i-2}\) as well, and combining that with \(f_{i+1} - f_{i-1}\) leads to

\[f'_{i} \approx \frac{f_{i-2} - 8f_{i-1} + 8f_{i+1} - f_{i+2}}{12\,\Delta x},\]

which can be used to give better values for the first derivative.
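The accuracy claim is easy to verify: the sketch below (plain Python; \(f = \sin\) and \(h = 10^{-4}\) are illustrative choices) compares the one-sided and central formulas against the exact derivative.

```python
import math

f, exact = math.sin, math.cos(1.0)   # test function and its exact derivative at x = 1
h = 1e-4

forward = (f(1.0 + h) - f(1.0)) / h             # one-sided, O(h) error
central = (f(1.0 + h) - f(1.0 - h)) / (2 * h)   # central, O(h^2) error

print(abs(forward - exact), abs(central - exact))
```

The central-difference error is several orders of magnitude smaller at the same step size.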

### Matrix version of the first derivative

If we supply to the above function an array of positions \(x_{i}\) at which we would like to calculate the derivative, then we obtain an array of derivative values. We can also write this procedure in a different way, which will be helpful for solving differential equations later.

If we consider the above finite difference formulas for a set of positions \(x_{i}\), we can represent the first derivative at these positions by a matrix operation as well:

\[\frac{df}{dx} \approx \frac{1}{\Delta x}\begin{pmatrix} -1 & 1 & & \\ & -1 & 1 & \\ & & \ddots & \ddots \\ & & & -1 \end{pmatrix} \begin{pmatrix} f_{1} \\ f_{2} \\ \vdots \\ f_{N} \end{pmatrix}\]

Note that we took here the derivative only to the right side! Each row of the matrix, multiplied by the vector of function values, then contains the derivative of the function \(f\) at the position \(x_{i}\), and the resulting vector represents the derivative over a certain region of positions.

We will demonstrate how to generate such a matrix with the SciPy module below.
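As a sketch of that demonstration (the grid size, spacing, and test function below are illustrative assumptions), the one-sided difference matrix can be built with `scipy.sparse.diags`:

```python
import numpy as np
from scipy.sparse import diags

N, dx = 6, 0.1
# Bidiagonal forward-difference matrix: row i computes (f[i+1] - f[i]) / dx.
D = diags([-np.ones(N), np.ones(N - 1)], offsets=[0, 1]) / dx

xs = np.arange(N) * dx
f = xs**2                 # test function with known derivative f'(x) = 2x
df = D @ f                # last entry is meaningless (no right neighbour)
print(df[:-1])            # equals 2x + dx at the interior points
```

For \(f(x) = x^{2}\) the forward difference gives exactly \(2x_{i} + \Delta x\), illustrating the \(O(\Delta x)\) error of the one-sided formula.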

Mistake 1: The function $(-1)^x$ is not well-defined, even if $x$ is a complex number.

Mistake 2: You cannot say $\left(e^{i\pi}\right)^x = e^{i\pi x}$. The multiplication-of-exponents rule does not extend to complex numbers. Indeed, otherwise you have problems like $1^{\pi} = e^{2\pi^{2} i}$, which is clearly false. Once a branch with $(-1)^x = e^{i\pi x}$ is fixed, it's easy to see that $\frac{d^{n}}{dx^{n}}(-1)^x = (i\pi)^{n}(-1)^x$.

Observe the function $w = (-1)^z$. This is equivalent to $w = e^{z\ln(-1)}$. Now, recalling that the logarithm is a multivalued function, $\ln(-1) = \ln\vert{-1}\vert + i\arg(-1) = i(\pi + 2\pi n)$. We see that $(-1)^z$ is equivalent to $w = e^{i\pi(2n+1)z}$ for all integers $n$. This means, except in special cases, that $(-1)^z$ has an infinite number of complex values. If we choose to let $n$ be constant, and define $(-1)^z$ as the resulting single-valued function, the complex derivative is equal to $i\pi(2n+1)e^{i\pi(2n+1)z}$.
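A numeric sanity check of the principal branch (taking n = 0, so \((-1)^z = e^{i\pi z}\); the sample point and step size are illustrative assumptions):

```python
import cmath

def f(z):
    # principal branch of (-1)**z
    return cmath.exp(1j * cmath.pi * z)

z0, h = 0.3, 1e-6
numeric = (f(z0 + h) - f(z0 - h)) / (2 * h)   # central difference in the complex plane
analytic = 1j * cmath.pi * f(z0)              # claimed derivative for n = 0
print(abs(numeric - analytic))
```

The two values agree to roughly the accuracy of the central-difference scheme.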

## Second-Order Derivatives

The second-order derivative is defined as the derivative of the first derivative of the given function. The first-order derivative at a given point gives us information about the slope of the tangent at that point, that is, the instantaneous rate of change of the function at that point. The second-order derivative gives us an idea of the shape of the graph of the given function. The second derivative of a function f(x) is usually denoted f''(x). It is also denoted by D²y, or y₂, or y'' if y = f(x).

Let y = f(x).

Then, dy/dx = f'(x) …(1)

If f'(x) is differentiable, we may differentiate (1) again w.r.t. x. Then the left-hand side becomes d/dx (dy/dx), which is called the second-order derivative of y w.r.t. x.

Example 1: Find d²y/dx², if y = x³.

Given that y = x³, the first derivative is

dy/dx = d/dx (x³) = 3x²

Differentiating again gives the second derivative:

d²y/dx² = d/dx (dy/dx)

= d/dx (3x²)

= 6x

Note: d/dx (xⁿ) = nxⁿ⁻¹, where n is the power to which x is raised.

Example 2: Find d²y/dx², if y = A sin x + B cos x, where A and B are constants.

Given that y = A sin x + B cos x, the first derivative is

dy/dx = d/dx (A sin x + B cos x)

= A d/dx (sin x) + B d/dx (cos x)

= A cos x − B sin x

Differentiating again to find the second derivative:

d²y/dx² = d/dx (dy/dx)

= d/dx (A cos x − B sin x)

= A d/dx (cos x) − B d/dx (sin x)

= A(−sin x) − B(cos x)

= −(A sin x + B cos x)

= −y

Note:

d/dx (sin x) = cos x

d/dx (cos x) = −sin x
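The identity d²y/dx² = −y is why A sin x + B cos x solves the differential equation y″ + y = 0; a quick symbolic check (using sympy, an assumption of this sketch):

```python
import sympy as sp

x, A, B = sp.symbols('x A B')
y = A*sp.sin(x) + B*sp.cos(x)

# The second derivative collapses back to -y, so the residual y'' + y is 0.
d2 = sp.diff(y, x, 2)
print(sp.simplify(d2 + y))  # 0
```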

Example 3: If y = log x, find d²y/dx².

Given that y = log x, the first derivative is

dy/dx = d/dx (log x) = 1/x

Differentiating again to find the second derivative:

d²y/dx² = d/dx (dy/dx)

= d/dx (1/x)

= −1/x²

Example 4: If y = eˣ sin 5x, find d²y/dx².

Given that y = eˣ sin 5x, the first derivative is

dy/dx = d/dx (eˣ sin 5x)

= eˣ d/dx (sin 5x) + sin 5x d/dx (eˣ) (product rule)

= eˣ (5 cos 5x) + eˣ sin 5x

= eˣ (5 cos 5x + sin 5x)

Differentiating again to find the second derivative:

d²y/dx² = d/dx (eˣ (5 cos 5x + sin 5x))

= eˣ d/dx (5 cos 5x + sin 5x) + (5 cos 5x + sin 5x) d/dx (eˣ)

= eˣ (−25 sin 5x + 5 cos 5x) + eˣ (5 cos 5x + sin 5x)

= eˣ (10 cos 5x − 24 sin 5x)

= 2eˣ (5 cos 5x − 12 sin 5x)

Note (product rule): d(uv)/dx = u (dv/dx) + v (du/dx)
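A quick verification of this computation with sympy (an assumption of this sketch; the example computes it by hand):

```python
import sympy as sp

x = sp.symbols('x')
y = sp.exp(x) * sp.sin(5*x)

# Compare the symbolic second derivative with the hand-derived result.
d2 = sp.diff(y, x, 2)
expected = 2*sp.exp(x)*(5*sp.cos(5*x) - 12*sp.sin(5*x))
print(sp.simplify(d2 - expected))  # 0
```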

### Second-Order Derivatives of a Function in Parametric Form

To calculate the second derivative of the function in the parametric form we use the chain rule twice. Hence to find the second derivative, we find the derivative with respect to t of the first derivative and then divide by the derivative of x with respect to t. Suppose that x = x(t) and y = y(t), then its Parametric form in Second Order:

First derivative: dy/dx = (dy/dt) / (dx/dt)

Second derivative: d²y/dx² = d/dx (dy/dx)

= [d/dt (dy/dx)] / (dx/dt)

Note: It is totally wrong to write the above formula as d²y/dx² = (d²y/dt²) / (d²x/dt²).

Example: If x = t + cos t, y = sin t, find the second derivative.

Given that x = t + cos t and y = sin t.

First derivative:

dy/dx = (dy/dt) / (dx/dt)

= [d/dt (sin t)] / [d/dt (t + cos t)]

= (cos t) / (1 − sin t) …(1)

Second derivative:

d²y/dx² = d/dx (dy/dx)

= d/dx [(cos t) / (1 − sin t)] (from eq. (1))

= d/dt [(cos t) / (1 − sin t)] / (dx/dt) (chain rule)

= [(1 − sin t)(−sin t) − (cos t)(−cos t)] / (1 − sin t)² / (dx/dt) (quotient rule)

= (−sin t + sin²t + cos²t) / (1 − sin t)² / (1 − sin t)

= (1 − sin t) / (1 − sin t)³

= 1 / (1 − sin t)²
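The parametric formula can also be verified symbolically (with sympy, an assumption of this sketch): compute dy/dx = y′(t)/x′(t), differentiate it again with respect to t, and divide by x′(t).

```python
import sympy as sp

t = sp.symbols('t')
x = t + sp.cos(t)
y = sp.sin(t)

dydx = sp.diff(y, t) / sp.diff(x, t)                  # cos(t) / (1 - sin(t))
d2ydx2 = sp.simplify(sp.diff(dydx, t) / sp.diff(x, t))

# The residual against the hand-derived answer 1 / (1 - sin t)^2 should be 0.
print(sp.simplify(d2ydx2 - 1/(1 - sp.sin(t))**2))  # 0
```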

Note:

1) Quotient rule: d(u/v)/dx = [v (du/dx) − u (dv/dx)] / v²

2) Chain rule: dy/dx = (dy/du) · (du/dx)

### Graphical Representation of Second-Order Derivatives

Graphically, the first derivative represents the slope of the function at a point, and the second derivative describes how that slope changes over the independent variable. (In the accompanying graph, the blue line indicates the slope, i.e. the first derivative of the given function.) For example, we use the second-derivative test to determine a maximum, a minimum, or a point of inflection. The second derivative of a given function corresponds to the curvature or concavity of its graph. If the second-order derivative is positive, the graph of the function is concave upward; if the second-order derivative is negative, the graph is concave downward.

Concavity of Function

Let f(x) be a differentiable function on a suitable interval. Then the graph of f(x) can be categorized as:

Concave up: a section of a curve is concave up if the y-value grows at a faster and faster rate moving from left to right.

Concave down: the opposite of concave up; the y-value grows at a slower and slower rate (or falls at a faster and faster rate) moving from left to right.

Points of inflection: inflection points are points where the function changes concavity, i.e. from being "concave up" to being "concave down" or vice versa.

The second derivative of a function helps locate local maximum, local minimum, and inflection point values.

## An explicit form for higher order approximations of fractional derivatives

An explicit form for coefficients of shifted Grünwald type approximations of fractional derivatives is presented. This form directly gives approximations for any order of accuracy with any desired shift leading to efficient and automatic computations of the coefficients. To achieve this, we consider generating functions in the form of power of a polynomial. Then, an equivalent characterization for consistency and order of accuracy established on a general generating function is used to form a linear system of equations with Vandermonde matrix. This linear system is solved for the coefficients of the polynomial in the generating function. These generating functions completely characterize Grünwald type approximations with shifts and order of accuracy. Incidentally, the constructed generating functions happen to be a generalization of the previously known Lubich forms of generating functions without shift. We also present a formula to compute leading and some successive error terms from the coefficients. We further show that finite difference formulas for integer-order derivatives with desired shift and order of accuracy are some special cases of our explicit form.

## Proof of some technical estimates

In this section, we establish the claimed estimates that were used in Sect. 2; that is to say, we establish estimates (2.18), (2.21), (2.22), (2.30) and (2.36).

### Proof of inequality (2.18)

Let \(\zeta = \xi\sqrt{\,\cdot\,}\); we obtain

where there exists a positive large time \(t_{*}\) such that for \(t \ge t_{*}\) we can obtain

with \(C\) a positive constant that is independent of time. \(\square\)

### Proof of inequality (2.21)

Multiplying the first and second equations of (2.19) by \(\varrho_{\delta}\) and \(m_{\delta}\), respectively, it holds that

Due to the fact that \(P'(1) = 1\), we can use the Taylor expansion formula to get

which, together with Sobolev's inequality, directly yields

where the symbol \(\sim\) represents the equivalence relation. By virtue of integration by parts and the transport equation, it is easy to get

Applying equation (2.19), it is easy to obtain, for \(k = 1, 2, 3\),

First, we estimate the term \(\int \nabla^{k}\Delta\varrho_{\delta} \cdot \nabla^{k} m_{\delta}\,\mathrm{d}x\) for \(k = 1, 2, 3\). Using integration by parts and the transport equation, we can obtain

Now we give the estimates for \(\Vert \nabla^{k} S \Vert^{2}\), \(k = 1, 2, 3\). Indeed, when \(k = 1\), we apply Sobolev's inequality to obtain

Similarly, we also have, for \(k = 1\),

By virtue of the Taylor expansion formula, we get

where we have used the fact that \(P'(1) = 1\). Then we use Sobolev's inequality to obtain

Then we use the Cauchy inequality to get

Next, employing Sobolev's inequality, it is easy to get

Then we obtain the following estimates:

Therefore, we complete the proof of the claimed estimate (2.21). \(\square\)

### Proof of inequality (2.22)

Taking \(k\)th (\(k = 0, 1, 2, 3\)) spatial derivatives of the second equation of (2.19) and multiplying the result by \(\nabla^{k}\varrho_{\delta}\), we have

Using the first equation of (2.19), it holds that

After integration by parts, it holds that

Thus, we combine the above three equalities to obtain

which, together with integration by parts and the Cauchy inequality, directly yields

which implies (2.22). Therefore, we complete the proof of the claimed estimate (2.22). \(\square\)

### Proof of inequality (2.30)

In , the following estimate holds:

and \(\delta\) is a small positive number. Multiplying the inequality (3.1) with \(l = 3\) by an exponential weight in \(t\), it holds that

then, by integrating in time over \([0, t]\), similarly to (2.29), one arrives at

where we utilize the decay estimate (1.7). Therefore, we complete the proof of the claimed estimate (2.30). \(\square\)

### Proof of inequality (2.36)

Replacing \(l\) by \(l + 1\) in (3.1), and then multiplying both sides by \((1+t)^{2}\), one arrives at

The integration of the above inequality with respect to time over \([0, t]\) implies that

where we used the estimate \(\mathcal{E}^{3}(t) \le C(1+t)^{-\frac{3+2l}{2}}\) for \(l = 0, 1, 2, 3\). Therefore, we complete the proof of the claimed estimate (2.36). \(\square\)

## =DERIVF(f, x, p, [options])

Use DERIVF to compute first- or higher-order derivatives of a function f(x) at x = p using a highly accurate adaptive algorithm. With optional arguments, you can specify a higher derivative order, as well as override the default algorithm parameters.

DERIVF can be nested to compute partial derivatives of any order.

### Required Inputs

f a reference to the function formula.

If your function is too complex to represent by nested formulas, you can code it in a VBA function (see Example 3).

x a reference to the variable of differentiation.

p the point at which to compute the derivative.

### Optional Inputs

n the derivative order. You can enter an integer number from 1 to 4. Default is 1.

ctrl a set of key/value pairs for algorithmic control as detailed below.

| Key | Admissible Values | Default Value | Remarks |
| --- | --- | --- | --- |
| INITSTEP | (Real) > 0 | 0.05 | This parameter is very influential. Try different small and large values when encountering convergence difficulties. |
| ITRNMAX | (Integer) >= 3 | 50 | Sets an upper bound on the maximum size of the Neville tableau generated by Ridders' algorithm. |

The analytic derivatives of f(x) = x sin(x²) up to order four can be computed by hand; the first is

\[f'(x) = \sin(x^{2}) + 2x^{2}\cos(x^{2}).\]

We compute the numerical derivatives of f at x = 0 and at x = 1 for orders 1 to 4 and compare them to the analytical values shown in column B of the tables below:

#### At x=1

| Order | A (DERIVF) | B (analytical) |
| --- | --- | --- |
| 1 | 1.922075597 | 1.922075597 |
| 2 | -0.124070104 | -0.124070104 |
| 3 | -21.27590825 | -21.27590825 |
| 4 | -80.24890792 | -80.24890780 |

Consider the partial derivative:

\[\frac{\partial}{\partial y}\frac{\partial}{\partial x}\cos(xy) = -\sin(xy) - xy\cos(xy)\]

We compute the partial derivative of cos(xy) at (π, π) by nesting DERIVF and compare the result with the analytical value shown in B3 below:

#### Partial derivative at (π, π)

We demonstrate how to compute the derivative for a user-defined VBA function with DERIVF. You can define your own VBA functions in Excel, which is quite powerful when your function is difficult to define with standard formulas. For illustration, we compute the derivative for

VBA is supported in ExceLab 7.0 only. ExceLab 365, which is based on cross-platform Office JS technology, is not compatible with VBA.

### Solution

Insert a Module from the Insert Tab, then code the following function:

##### Compute derivative at x=2

Your VBA function name must be prefixed with "vb" to be used with ExceLab solvers.

X1 is just a dummy variable for the function and its value is ignored.

DERIVF implements Ridders' method, which uses an adaptive step to produce much higher precision than a simple finite-differencing method with a fixed step. It employs Neville's algorithm and polynomial extrapolation to drive the step size to zero within machine accuracy.

The starting step size for Ridders' algorithm is an important parameter that can aid successful convergence of the algorithm. You can override the default value of this parameter using the control key INITSTEP (for example, DERIVF(A1, X1, P1, 1, {"INITSTEP", 0.1})). The starting step size need not necessarily be small but rather should scale with a range around the point p over which the function changes noticeably. Ridders' method attempts to drive the step size to zero by polynomial extrapolation using Neville's algorithm.
The starting step size for Ridders's algorithm is an important parameter that can aid successful convergence of the algorithm. You can override the default value of this parameter using the control key INITSTEP (for example, DERIVF(A1, X1, P1, 1, <"INITSTEP",0.1>) ). The starting step size need not necessarily be small but rather should scale with a range around the point p over which the function changes noticeably. Ridders' method attempts to drive the step size to zero by polynomial extrapolation using Neville' algorithm.