Articles

3.3: The Runge-Kutta Method - Mathematics


In general, if (k) is any positive integer and (f) satisfies appropriate assumptions, there are numerical methods with local truncation error (O(h^{k+1})) for solving an initial value problem

\[\label{eq:3.3.1} y'=f(x,y),\quad y(x_0)=y_0.\]

Moreover, it can be shown that a method with local truncation error (O(h^{k+1})) has global truncation error (O(h^k)). In Sections 3.1 and 3.2 we studied numerical methods where (k=1) and (k=2). We'll skip methods for which (k=3) and proceed to the Runge-Kutta method, the most widely used method, for which (k=4). The magnitude of the local truncation error is determined by the fifth derivative (y^{(5)}) of the solution of the initial value problem, so the local truncation error will be larger where (|y^{(5)}|) is large and smaller where (|y^{(5)}|) is small. The Runge-Kutta method computes approximate values (y_1), (y_2), …, (y_n) of the solution of Equation \ref{eq:3.3.1} at (x_0), (x_0+h), …, (x_0+nh) as follows: Given (y_i), compute

\[\begin{align*}
k_{1i}&=f(x_i,y_i),\\
k_{2i}&=f\left(x_i+\frac{h}{2},\,y_i+\frac{h}{2}k_{1i}\right),\\
k_{3i}&=f\left(x_i+\frac{h}{2},\,y_i+\frac{h}{2}k_{2i}\right),\\
k_{4i}&=f(x_i+h,\,y_i+hk_{3i}),
\end{align*}\]

and

\[y_{i+1}=y_i+\frac{h}{6}(k_{1i}+2k_{2i}+2k_{3i}+k_{4i}).\nonumber\]

The next example, which deals with an initial value problem considered in earlier examples, illustrates the computational procedure of the Runge-Kutta method.

Example (PageIndex{1})

Use the Runge-Kutta method with (h=0.1) to find approximate values for the solution of the initial value problem

\[\label{eq:3.3.2} y'+2y=x^3e^{-2x},\quad y(0)=1,\]

at (x=0.1,0.2).

Solution

Again we rewrite Equation \ref{eq:3.3.2} as

\[y'=-2y+x^3e^{-2x},\quad y(0)=1,\nonumber\]

which is of the form Equation \ref{eq:3.3.1}, with

\[f(x,y)=-2y+x^3e^{-2x},\quad x_0=0,\mbox{ and }y_0=1.\nonumber\]

The Runge-Kutta method yields

\[\begin{aligned}
k_{10} & = f(x_0,y_0) = f(0,1)=-2,\\
k_{20} & = f(x_0+h/2,y_0+hk_{10}/2)=f(.05,1+(.05)(-2))\\
&= f(.05,.9)=-2(.9)+(.05)^3e^{-.1}=-1.799886895,\\
k_{30} & = f(x_0+h/2,y_0+hk_{20}/2)=f(.05,1+(.05)(-1.799886895))\\
&= f(.05,.910005655)=-2(.910005655)+(.05)^3e^{-.1}=-1.819898206,\\
k_{40} & = f(x_0+h,y_0+hk_{30})=f(.1,1+(.1)(-1.819898206))\\
&=f(.1,.818010179)=-2(.818010179)+(.1)^3e^{-.2}=-1.635201628,\\
y_1&=y_0+\frac{h}{6}(k_{10}+2k_{20}+2k_{30}+k_{40})\\
&=1+\frac{.1}{6}(-2+2(-1.799886895)+2(-1.819898206)-1.635201628)=.818753803,\\[4pt]
k_{11} & = f(x_1,y_1) = f(.1,.818753803)=-2(.818753803)+(.1)^3e^{-.2}=-1.636688875,\\
k_{21} & = f(x_1+h/2,y_1+hk_{11}/2)=f(.15,.818753803+(.05)(-1.636688875))\\
&= f(.15,.736919359)=-2(.736919359)+(.15)^3e^{-.3}=-1.471338457,\\
k_{31} & = f(x_1+h/2,y_1+hk_{21}/2)=f(.15,.818753803+(.05)(-1.471338457))\\
&= f(.15,.745186880)=-2(.745186880)+(.15)^3e^{-.3}=-1.487873498,\\
k_{41} & = f(x_1+h,y_1+hk_{31})=f(.2,.818753803+(.1)(-1.487873498))\\
&=f(.2,.669966453)=-2(.669966453)+(.2)^3e^{-.4}=-1.334570346,\\
y_2&=y_1+\frac{h}{6}(k_{11}+2k_{21}+2k_{31}+k_{41})\\
&=.818753803+\frac{.1}{6}(-1.636688875+2(-1.471338457)+2(-1.487873498)-1.334570346)\\
&=.670592417.
\end{aligned}\]
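The hand computation above is easy to check with a short script (a Python sketch; the helper name `rk4_step` is ours):

```python
import math

def f(x, y):
    # right-hand side of y' = -2y + x^3 e^{-2x}
    return -2*y + x**3 * math.exp(-2*x)

def rk4_step(f, x, y, h):
    k1 = f(x, y)
    k2 = f(x + h/2, y + h/2 * k1)
    k3 = f(x + h/2, y + h/2 * k2)
    k4 = f(x + h, y + h * k3)
    return y + h/6 * (k1 + 2*k2 + 2*k3 + k4)

h = 0.1
y1 = rk4_step(f, 0.0, 1.0, h)   # ≈ 0.818753803
y2 = rk4_step(f, 0.1, y1, h)    # ≈ 0.670592417
```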

The Runge-Kutta method is sufficiently accurate for most applications.

Example (PageIndex{2})

Table (PageIndex{1}) shows results of using the Runge-Kutta method with step sizes (h=0.1) and (h=0.05) to find approximate values of the solution of the initial value problem

\[y'+2y=x^3e^{-2x},\quad y(0)=1\nonumber\]

at (x=0), (0.1), (0.2), (0.3), …, (1.0). For comparison, it also shows the corresponding approximate values obtained with the improved Euler method in Example 3.2.2, and the values of the exact solution

\[y=\frac{e^{-2x}}{4}(x^4+4).\nonumber\]

The results obtained by the Runge-Kutta method are clearly better than those obtained by the improved Euler method; in fact, the results obtained by the Runge-Kutta method with (h=0.1) are better than those obtained by the improved Euler method with (h=0.05).

              Improved Euler               Runge-Kutta
x      h=0.1        h=0.05       h=0.1        h=0.05       "exact"
0.0    1.000000000  1.000000000  1.000000000  1.000000000  1.000000000
0.1    0.820040937  0.819050572  0.818753803  0.818751370  0.818751221
0.2    0.672734445  0.671086455  0.670592417  0.670588418  0.670588174
0.3    0.552597643  0.550543878  0.549928221  0.549923281  0.549922980
0.4    0.455160637  0.452890616  0.452210430  0.452205001  0.452204669
0.5    0.376681251  0.374335747  0.373633492  0.373627899  0.373627557
0.6    0.313970920  0.311652239  0.310958768  0.310953242  0.310952904
0.7    0.264287611  0.262067624  0.261404568  0.261399270  0.261398947
0.8    0.225267702  0.223194281  0.222575989  0.222571024  0.222570721
0.9    0.194879501  0.192981757  0.192416882  0.192412317  0.192412038
1.0    0.171388070  0.169680673  0.169173489  0.169169356  0.169169104

Table (PageIndex{1}): Numerical solution of (y'+2y=x^3e^{-2x}, y(0)=1), by the Runge-Kutta method and the improved Euler method.
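The Runge-Kutta column of Table (PageIndex{1}) with (h=0.1) can be regenerated, and compared against the exact solution (y=e^{-2x}(x^4+4)/4), with a few lines of Python (a sketch; all names are ours):

```python
import math

def f(x, y):
    return -2*y + x**3 * math.exp(-2*x)

def exact(x):
    return math.exp(-2*x) * (x**4 + 4) / 4

x, y, h = 0.0, 1.0, 0.1
for _ in range(10):                     # ten steps from x = 0 to x = 1
    k1 = f(x, y)
    k2 = f(x + h/2, y + h/2 * k1)
    k3 = f(x + h/2, y + h/2 * k2)
    k4 = f(x + h, y + h * k3)
    y += h/6 * (k1 + 2*k2 + 2*k3 + k4)
    x += h
# y now holds the h = 0.1 Runge-Kutta approximation at x = 1.0
```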

Example (PageIndex{3})

Table (PageIndex{2}) shows analogous results for the nonlinear initial value problem

\[y'=-2y^2+xy+x^2,\quad y(0)=1.\nonumber\]

We applied the improved Euler method to this problem in Example 3.2.3.

              Improved Euler               Runge-Kutta
x      h=0.1        h=0.05       h=0.1        h=0.05       "exact"
0.0    1.000000000  1.000000000  1.000000000  1.000000000  1.000000000
0.1    0.840500000  0.838288371  0.837587192  0.837584759  0.837584494
0.2    0.733430846  0.730556677  0.729644487  0.729642155  0.729641890
0.3    0.661600806  0.658552190  0.657582449  0.657580598  0.657580377
0.4    0.615961841  0.612884493  0.611903380  0.611901969  0.611901791
0.5    0.591634742  0.588558952  0.587576716  0.587575635  0.587575491
0.6    0.586006935  0.582927224  0.581943210  0.581942342  0.581942225
0.7    0.597712120  0.594618012  0.593630403  0.593629627  0.593629526
0.8    0.626008824  0.622898279  0.621908378  0.621907553  0.621907458
0.9    0.670351225  0.667237617  0.666251988  0.666250942  0.666250842
1.0    0.730069610  0.726985837  0.726017378  0.726015908  0.726015790

Table (PageIndex{2}): Numerical solution of (y'=-2y^2+xy+x^2, y(0)=1), by the Runge-Kutta method and the improved Euler method.
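Table (PageIndex{2})'s Runge-Kutta column with (h=0.1) can be reproduced the same way (a Python sketch; names are ours):

```python
def f(x, y):
    # right-hand side of the nonlinear problem y' = -2y^2 + xy + x^2
    return -2*y**2 + x*y + x**2

x, y, h = 0.0, 1.0, 0.1
for _ in range(10):                     # ten steps from x = 0 to x = 1
    k1 = f(x, y)
    k2 = f(x + h/2, y + h/2 * k1)
    k3 = f(x + h/2, y + h/2 * k2)
    k4 = f(x + h, y + h * k3)
    y += h/6 * (k1 + 2*k2 + 2*k3 + k4)
    x += h
# y approximates the solution at x = 1.0 (table value 0.726017378)
```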

Example (PageIndex{4})

Tables (PageIndex{3}) and (PageIndex{4}) show results obtained by applying the Runge-Kutta and Runge-Kutta semilinear methods to the initial value problem

\[y'-2xy=1,\quad y(0)=3,\nonumber\]

which we considered in Examples (PageIndex{3}) and (PageIndex{4}).

x      h=0.2          h=0.1          h=0.05         "Exact"
0.0    3.000000000    3.000000000    3.000000000    3.000000000
0.2    3.327846400    3.327851633    3.327851952    3.327851973
0.4    3.966044973    3.966058535    3.966059300    3.966059348
0.6    5.066996754    5.067037123    5.067039396    5.067039535
0.8    6.936534178    6.936690679    6.936700320    6.936700945
1.0    10.184232252   10.184877733   10.184920997   10.184923955
1.2    16.064344805   16.066915583   16.067098699   16.067111677
1.4    27.278771833   27.288605217   27.289338955   27.289392347
1.6    49.960553660   49.997313966   50.000165744   50.000377775
1.8    98.834337815   98.971146146   98.982136702   98.982969504
2.0    211.393800152  211.908445283  211.951167637  211.954462214

Table (PageIndex{3}): Numerical solution of (y'-2xy=1, y(0)=3), by the Runge-Kutta method.
x      h=0.2          h=0.1          h=0.05         "Exact"
0.0    3.000000000    3.000000000    3.000000000    3.000000000
0.2    3.327853286    3.327852055    3.327851978    3.327851973
0.4    3.966061755    3.966059497    3.966059357    3.966059348
0.6    5.067042602    5.067039725    5.067039547    5.067039535
0.8    6.936704019    6.936701137    6.936700957    6.936700945
1.0    10.184926171   10.184924093   10.184923963   10.184923955
1.2    16.067111961   16.067111696   16.067111678   16.067111677
1.4    27.289389418   27.289392167   27.289392335   27.289392347
1.6    50.000370152   50.000377302   50.000377745   50.000377775
1.8    98.982955511   98.982968633   98.982969450   98.982969504
2.0    211.954439983  211.954460825  211.954462127  211.954462214

Table (PageIndex{4}): Numerical solution of (y'-2xy=1, y(0)=3), by the Runge-Kutta semilinear method.

The Case Where (x_0) Isn't the Left Endpoint

So far in this chapter we’ve considered numerical methods for solving an initial value problem

\[\label{eq:3.3.3} y'=f(x,y),\quad y(x_0)=y_0\]

on an interval ([x_0,b]), for which (x_0) is the left endpoint. We haven't discussed numerical methods for solving Equation \ref{eq:3.3.3} on an interval ([a,x_0]), for which (x_0) is the right endpoint. To be specific, how can we obtain approximate values (y_{-1}), (y_{-2}), …, (y_{-n}) of the solution of Equation \ref{eq:3.3.3} at (x_0-h, \dots, x_0-nh), where (h=(x_0-a)/n)? Here's the answer to this question:

Question

Consider the initial value problem

\[\label{eq:3.3.4} z'=-f(-x,z),\quad z(-x_0)=y_0,\] on the interval ([-x_0,-a]), for which (-x_0) is the left endpoint. Use a numerical method to obtain approximate values (z_1), (z_2), …, (z_n) of the solution of Equation \eqref{eq:3.3.4} at (-x_0+h), (-x_0+2h), …, (-x_0+nh=-a). Then (y_{-1}=z_1), (y_{-2}=z_2), …, (y_{-n}=z_n) are approximate values of the solution of Equation \eqref{eq:3.3.3} at (x_0-h), (x_0-2h), …, (x_0-nh=a).

The justification for this answer is sketched in Exercise 3.3.23. Note how easy it is to change the given problem Equation \ref{eq:3.3.3} to the modified problem Equation \ref{eq:3.3.4}: first replace (f) by (-f) and then replace (x), (x_0), and (y) by (-x), (-x_0), and (z), respectively.

Example (PageIndex{5})

Use the Runge-Kutta method with step size (h=0.1) to find approximate values of the solution of

\[\label{eq:3.3.5} (y-1)^2y'=2x+3,\quad y(1)=4\]

at (x=0), (0.1), (0.2), …, (1).

Solution

We first rewrite Equation \ref{eq:3.3.5} in the form Equation \ref{eq:3.3.3} as

\[\label{eq:3.3.6} y'=\frac{2x+3}{(y-1)^2},\quad y(1)=4.\]

Since the initial condition (y(1)=4) is imposed at the right endpoint of the interval ([0,1]), we apply the Runge-Kutta method to the initial value problem

\[\label{eq:3.3.7} z'=\frac{2x-3}{(z-1)^2},\quad z(-1)=4\]

on the interval ([-1,0]). (You should verify that Equation \ref{eq:3.3.7} is related to Equation \ref{eq:3.3.6} as Equation \ref{eq:3.3.4} is related to Equation \ref{eq:3.3.3}.) Table (PageIndex{5}) shows the results. Reversing the order of the rows in Table (PageIndex{5}) and changing the signs of the values of (x) yields the first two columns of Table (PageIndex{6}). The last column of Table (PageIndex{6}) shows the exact values of (y), which are given by

\[y=1+(3x^2+9x+15)^{1/3}.\nonumber\]

(Since the differential equation in Equation \ref{eq:3.3.6} is separable, this formula can be obtained by the method of Section 2.2.)

x       z
-1.0    4.000000000
-0.9    3.944536474
-0.8    3.889298649
-0.7    3.834355648
-0.6    3.779786399
-0.5    3.725680888
-0.4    3.672141529
-0.3    3.619284615
-0.2    3.567241862
-0.1    3.516161955
 0.0    3.466212070

Table (PageIndex{5}): Numerical solution of (z'= {2x-3over(z-1)^2}, z(-1)=4), on ([-1,0]).

x       y            Exact
0.00    3.466212070  3.466212074
0.10    3.516161955  3.516161958
0.20    3.567241862  3.567241864
0.30    3.619284615  3.619284617
0.40    3.672141529  3.672141530
0.50    3.725680888  3.725680889
0.60    3.779786399  3.779786399
0.70    3.834355648  3.834355648
0.80    3.889298649  3.889298649
0.90    3.944536474  3.944536474
1.00    4.000000000  4.000000000

Table (PageIndex{6}): Numerical solution of ((y-1)^2y'=2x+3, y(1)=4), on ([0,1]).
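Example (PageIndex{5}) can be verified numerically (a Python sketch; names are ours): integrate the transformed equation (z'=(2x-3)/(z-1)^2), (z(-1)=4), rightward on ([-1,0]) with the Runge-Kutta method, then compare with the exact formula.

```python
import math

def g(x, z):
    # right-hand side of the transformed (leftward) problem
    return (2*x - 3) / (z - 1)**2

x, z, h = -1.0, 4.0, 0.1
for _ in range(10):                 # ten RK4 steps from x = -1 to x = 0
    k1 = g(x, z)
    k2 = g(x + h/2, z + h/2 * k1)
    k3 = g(x + h/2, z + h/2 * k2)
    k4 = g(x + h, z + h * k3)
    z += h/6 * (k1 + 2*k2 + 2*k3 + k4)
    x += h

exact_at_0 = 1 + 15**(1/3)          # exact solution y = 1 + (3x^2+9x+15)^{1/3} at x = 0
```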

We leave it to you to develop a procedure for handling the numerical solution of Equation \ref{eq:3.3.3} on an interval ([a,b]) such that (a<x_0<b) (Exercises 3.3.26 and 3.3.27).



1 Department of Mathematics, Lagos State University, Lagos, Nigeria

2 Department of Mathematics, University of Lagos, Lagos, Nigeria

Copyright © 2013 Ashiribo Senapon Wusu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Received January 10, 2013; revised February 16, 2013; accepted March 18, 2013.

Keywords: Multiderivative; Autonomous; Runge-Kutta; Stability; Convergence; Initial Value Problems

In recent years, the derivation of Runge-Kutta methods with higher derivatives has been on the increase. In this paper, we present a new class of three-stage Runge-Kutta methods with first and second derivatives. The consistency and stability of the method are analyzed. Numerical examples with excellent results are shown to verify the accuracy of the proposed method compared with some existing methods.

The derivation of Runge-Kutta schemes involving higher derivatives is now on the increase. Traditionally, given an initial value problem (IVP), classical explicit Runge-Kutta methods are derived with the intention of performing multiple evaluations of f in each internal stage for a given accuracy. Recently, Akanbi [1,2] derived a multi-derivative explicit Runge-Kutta method involving up to the second derivative. Goeken and Johnson [3] also derived explicit Runge-Kutta schemes of up to four stages using the first derivative of f. The new scheme, however, is derived by incorporating derivatives of f up to the second. The cost of internal stage evaluations is greatly reduced, and there is an appreciable improvement in the attainable order of accuracy of the method.

2. Derivation of the Proposed Scheme

The general form of a single step method for solving the Initial Value Problem (IVP)

(1)

(2)

where the increment function is obtained using the Taylor series expansion of an arbitrary function:

(3)

and, for the autonomous case of (1), this becomes

(4)

The proposed scheme of this paper is of the form

(5)

(6)

Expanding the stages in Taylor series and substituting the result into (5), the coefficients of the powers of h are then compared with those of (3) to obtain the following system of equations:

Solving the above system of equations, we have the set of solutions in Table 1 .

The above solution set gives rise to a family of 3-stage multi-derivative explicit Runge-Kutta schemes. The proposed scheme, denoted by 3sMERK, is thus given by

3. Convergence and Stability of the Method

3.1. Existence and Uniqueness of Solution

The properties of the incremental function of the newly derived scheme are, in general, very crucial to its stability and convergence characteristics [1,4-8].

Let f(x,y) be defined and continuous for all points (x,y) in the region defined by a ≤ x ≤ b, where a and b are finite, and let there exist a constant L such that

(7)

holds for every pair of points in the region; then there exists a unique solution of the problem (1), which is continuous and differentiable throughout the region.

The requirement (7) is known as the Lipschitz condition, and the constant L is a Lipschitz constant [6,7,9-11]. We shall assume that the hypothesis of this theorem is satisfied by the IVP (1). The following lemma will be useful for establishing the aforementioned characteristics.

Let be a set of real numbers. If there exist finite constants and such that

(8)

(9)

Proof. In the base case, (9) is satisfied identically.

Suppose (9) holds up to a given index, so that

(10)

Then (8) implies that

(11)

On substituting (10) into (11), we have

(12)

Hence, (9) holds for all indices.

3.2. Accuracy and Stability

Usually, during the implementation of a computational scheme, errors are generated. The magnitude of the error determines how accurate and stable a scheme is. If the magnitude of the error is sufficiently small, the computational results will be accurate; if it becomes too large, it can make the method unstable. The sources of error for these schemes and their principal error functions are discussed in Butcher [5,6], Fatunla [7] and Lambert [9,10]. The following theorem guarantees the stability of the 3sMERK methods.

Suppose the IVP (1) satisfies the hypotheses of Theorem 3.1; then the new 3sMERK algorithm is stable.

Table 1 . Examples of three-stage MERK methods.

Proof. Let and be two sets of solutions generated recursively by the 3sMERK method with the initial condition, and

(13)

(14)

(15)

Applying the triangle inequality and (13), we have

If we assume, and, then Lemma 3.2 implies that, where. This implies the stability of the 3sMERK scheme.

The proposed 3sMERK scheme (6) is applied to the two IVPs below, and the results obtained are compared with the standard 3-stage Runge-Kutta (Heun's) method [7,9,10] and that of Goeken and Johnson [3], stated in (16) and (17) respectively.

(16)

(17)

Table 2. The absolute values of the error of y(x) in Problem 1 using the proposed scheme and other methods, for h = 0.001, 0.005, 0.025, 0.125.

Table 3. The absolute values of the error of y(x) in Problem 2 using the proposed scheme and other methods, for h = 0.001, 0.005, 0.025, 0.125.

(18)

with the theoretical solution.

(19)

with the theoretical solution

.

The results generated by the proposed scheme, when applied to the problems above, clearly demonstrate its accuracy. Tables 2 and 3 above show the absolute error associated with the schemes for the test problems as the step length varies. The standard Heun's (third order) method grows faster in error than the method of Goeken and the newly derived scheme. Overall, 3sMERK performed best among the three methods.

Based on the two problems solved above, it follows that the scheme is quite efficient. We therefore conclude that the proposed 3sMERK method is reliable, stable, and highly accurate.



This method is a third order procedure for which Richardson extrapolation can be used.

Function List

  • double Runge_Kutta_v1_3( double (*f)(double, double), double y0, double x0, double h, int number_of_steps )

This function uses the above third order method to return the estimate of the solution of the initial value problem, y' = f(x,y) y(x0) = y0, at x = x0 + h * n, where h is the step size and n is number_of_steps.

This function uses the above third order method together with Richardson extrapolation to the limit to return the estimate of the solution of the initial value problem, y' = f(x,y), y(x0) = y0, at x = x0 + h * n, where h is the step size and n is number_of_steps. The argument richardson_columns is the number of step-size halvings + 1 used in Richardson extrapolation, so that if richardson_columns = 1 then no extrapolation to the limit is performed.

This function uses the above third order method to estimate the solution of the initial value problem, y' = f(x,y), y(x0) = y0, at x = x0 + h * n * m, where h is the step size, n is the interval number (0 ≤ n ≤ number_of_intervals), and m is the number_of_steps_per_interval. The values are returned in the array y[ ], i.e.
y[n] = y(x0 + h * m * n), where m, n are as above.

This function uses the above third order method together with Richardson extrapolation to the limit to estimate the solution of the initial value problem, y' = f(x,y), y(x0) = y0, at x = x0 + h * n * m, where h is the step size, n is the interval number (0 ≤ n ≤ number_of_intervals), and m is the number_of_steps_per_interval. The values are returned in the array y[ ], i.e. y[n] = y(x0 + h * m * n), where m, n are as above. The argument richardson_columns is the number of step-size halvings + 1 used in Richardson extrapolation, so that if richardson_columns = 1 then no extrapolation to the limit is performed.
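The extrapolation these routines perform can be sketched as follows. For a third-order one-step method the global error is O(h^3), so results computed with steps h and h/2 satisfy y_h ≈ y + Ch^3 and y_{h/2} ≈ y + Ch^3/8, and the combination (8*y_{h/2} - y_h)/7 cancels the leading error term; this is one column of Richardson extrapolation. (A Python sketch under that assumption; the third-order stepper shown is Heun's RK3, chosen for illustration, and is not necessarily the method implemented in runge_kutta_v1_3.c.)

```python
import math

def rk3_step(f, x, y, h):
    # Heun's third-order method (an illustrative third-order stepper)
    k1 = f(x, y)
    k2 = f(x + h/3, y + h/3 * k1)
    k3 = f(x + 2*h/3, y + 2*h/3 * k2)
    return y + h/4 * (k1 + 3*k3)

def integrate(f, x0, y0, h, n):
    y = y0
    for i in range(n):
        y = rk3_step(f, x0 + i*h, y, h)
    return y

f = lambda x, y: y                        # test problem y' = y, y(0) = 1; exact value at x = 1 is e
y_h    = integrate(f, 0.0, 1.0, 0.10, 10)
y_h2   = integrate(f, 0.0, 1.0, 0.05, 20)
y_rich = (8*y_h2 - y_h) / 7               # leading O(h^3) error cancelled
```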

C Source Code

  • The file, runge_kutta_v1_3.c, contains versions of Runge_Kutta_v1_3( ), Runge_Kutta_v1_3_Richardson( ), Runge_Kutta_v1_3_Integral_Curve( ), and Runge_Kutta_v1_3_Richardson_Integral_Curve( ) written in C.


Runge-Kutta Methods.

The Runge-Kutta methods are a family of numerical methods for solving differential equations and systems of differential equations.
We will look at the Runge-Kutta methods and their main variants in detail in the following sections.

One-step Linear methods

One-step methods are numerical methods in which only the information from the previous step is needed to advance a step; that is, step (n+1) depends only on step (n). More precisely, they are methods of the form

(x_{n+1} = x_n + h\,F(x_n, t_n, h),)

where (x_n) is a vector of (R^n), (t_n) is the real, independent variable, (h) the step size, and (F) is a vector function of (x_n), (t_n), (h).

Note that this problem is really a system of equations.

There are other methods, called multi-step methods, in which advancing a step requires two or more previous steps, and there are also nonlinear methods; we will not discuss either kind here.

Extended Theory

Runge-Kutta methods are a specialization of one-step numerical methods. Essentially, what characterizes Runge-Kutta methods is that the error is of the form

(e \le Ch^k,)

where (C) is a positive real constant; the number (k) is called the order of the method.

The number of stages of a Runge-Kutta method is the number of times the function is evaluated at each step. This concept is important because each function evaluation has a computational cost (sometimes a high one), so methods with as few stages as possible are preferred.

Examples of Runge-Kutta Methods

The Euler Method (Runge-Kutta method with order 1)

The error is of the form (e \le Ch), and so this method has order 1.

Note: the function is evaluated once at each step, so the number of stages is 1.

The midpoint rule (Runge-Kutta method with order two)

The error is of the form (e \le Ch^2), and so this method has order 2.

Note: the function is evaluated twice at each step, so the number of stages is 2.
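The order-2 claim can be checked empirically: halving the step size should divide the global error by roughly 2² = 4. (A Python sketch; names are ours, and the test problem x' = x, x(0) = 1 is our own choice for illustration.)

```python
import math

def midpoint_step(f, t, x, h):
    # one step of the midpoint rule (two stages, order 2)
    k1 = f(t, x)
    k2 = f(t + h/2, x + h/2 * k1)
    return x + h * k2

def error_at_1(h):
    # integrate x' = x, x(0) = 1 up to t = 1 and compare with the exact value e
    t, x = 0.0, 1.0
    for _ in range(round(1/h)):
        x = midpoint_step(lambda t, x: x, t, x, h)
        t += h
    return abs(x - math.e)

ratio = error_at_1(0.02) / error_at_1(0.01)   # ≈ 4 for an order-2 method
```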

Standard fourth-order Runge-Kutta (Runge-Kutta method with order four)

(x_{n+1} = x_n + \frac{h}{6}(k_1 + 2k_2 + 2k_3 + k_4),) where the stages are the standard ones,

(k_1 = f(t_n, x_n),\quad k_2 = f(t_n + \tfrac{h}{2}, x_n + \tfrac{h}{2}k_1),\quad k_3 = f(t_n + \tfrac{h}{2}, x_n + \tfrac{h}{2}k_2),\quad k_4 = f(t_n + h, x_n + hk_3).)

Now the error is of the form (e \le Ch^4), so the method has order 4.

Observation: the number of stages is 4.

(Figure: graph of the error as a function of the step size (h).)

We adopt the following definition of Runge-Kutta methods:

Runge-Kutta methods definition

A Runge-Kutta method with (s) stages and order (p) is a method of the form

and the error satisfies the condition (e \le Ch^p).

So, to specify a Runge-Kutta method it is necessary to give the (s^2+2s) numbers (a_{ij}), (b_i), (c_i).

An interesting feature of Runge-Kutta methods is that no derivatives of (f) need to be calculated to advance. The price to pay for this is evaluating the function (f) more times, with the consequent operational cost.

Convergence Theorem for Runge-Kutta methods

A method is more efficient if it has a reduced number of stages while maintaining its order. For example, between a 3-stage method of order 3 and a 4-stage method of order 3, the first is much more interesting: for a given step (h), the number of calculations to be done is lower.
Butcher Boards
Given a Runge-Kutta method, we construct a board as

It is also possible to write the Butcher board compactly as

where (A \in M_{s \times s}), (b \in R^s), (c \in R^s).

For example, the Butcher board for the Euler method is

For the midpoint rule of order 2,

And for the standard Runge-Kutta method of order 4,

A Runge-Kutta method is said to be consistent if the global truncation error tends to zero as the step size tends to zero.
It can be shown that a necessary and sufficient condition for the consistency of a Runge-Kutta method is that the sum of the (b_i)'s equal 1, i.e., that it satisfies

(\sum_{i=1}^{s} b_i = 1.)

In addition, the method is of order 2 if it also satisfies

(1 = 2\sum_{i=1}^{s}\sum_{j=1}^{s} a_{ij} b_i.)

Similar conditions can be given for methods of orders 3, 4, …
Explicit Runge-Kutta Methods
In an explicit Runge-Kutta method, the definition of each (k_i) involves only the previously computed stages, never (k_i) itself or later ones. Equivalently, the matrix (A) in the Butcher board is strictly lower triangular: it is lower triangular and its diagonal elements are zero as well.

It is known that there are no explicit Runge-Kutta methods with (s) stages of order (s) for (s) greater than or equal to 5.
It is also known that there are no explicit (s)-stage Runge-Kutta methods of order (s-1) for (s) greater than or equal to 7.

More generally we have the following table

What step size is necessary? The answer depends on the specific problem and on the desired degree of accuracy.

One thing to consider is that Runge-Kutta methods lose some precision when the derivative of the function being analyzed is very large or changes sign frequently; such cases require a very small step size to obtain an acceptable degree of accuracy.

In the next section we will see the embedded Fehlberg pairs: methods in which the step size varies automatically depending (among other things) on changes in the derivative of the function.

If you want to see an example of how these methods work now, go to the RungeKutta Calculator, where you will see the default problem.


V. Method Comparison

You also may want to compare some of the different methods to see how
they perform for a specific problem.

Implicit Runge--Kutta methods have a number of desirable properties.

The Gauss--Legendre methods, for example, are self-adjoint, meaning
that they provide the same solution when integrating forward or
backward in time.


A generic framework for implicit Runge--Kutta methods has been implemented. The focus so far is on methods with interesting geometric properties and currently covers the following schemes:

"ImplicitRungeKuttaGaussCoefficients"
"ImplicitRungeKuttaLobattoIIIACoefficients"
"ImplicitRungeKuttaLobattoIIIBCoefficients"
"ImplicitRungeKuttaLobattoIIICCoefficients"
"ImplicitRungeKuttaRadauIACoefficients"
"ImplicitRungeKuttaRadauIIACoefficients"

The derivation of the method coefficients can be carried out to arbitrary order and arbitrary precision.



3rd Order Runge-Kutta - HP 15C

This program uses a 3rd-order Runge-Kutta method to assist in solving a first-order differential equation.

Given the initial condition (x0, y0) to the differential equation:

Find the value of y1 where h is a given step size (preferably small). The value of y1 is given by the following equations:

Where:
k1 = h * f(x0, y0)
k2 = h * f(x0 + h/3, y0 + k1/3)
k3 = h * f(x0 + (2 * h)/3, y0 + (2 * k2)/3)
y1 = y0 + (k1 + 3 * k3)/4

The error is estimated to be on the order of h^4 per step.

Source: Smith, Jon M., Scientific Analysis on the Pocket Calculator, John Wiley & Sons: New York, 1975, p. 174.

Memory Registers Used

R0 = h
R1 = x
R2 = y
R3 = x1 (result)
R4 = y1 (result)
R5 = x0 (copied from R1)
R6 = y0 (copied from R2)
R7 = k1
R8 = k2
R9 = k3

Label A = Main Program
Label 0 = Program where equation is maintained

1. Store h, x0, and y0 in memory registers R0, R1, and R2 respectively.
2. Enter or edit the equation in program mode using Label 0. Use RCL 1 for x and RCL 2 for y.
3. Run Program A. The first answer is x1. Press [R/S] for y1.
4. To find (x2, y2), store x1 in R1 and y1 in R2 and run Program A again. You can do this by pressing [STO] [ 2 ] [x<>y] [STO] [ 1 ] [ f ] [ √ ] (A) immediately following program execution.


Kansas State University

Euler's method and the improved Euler's method both try to approximate $y(x_0+h)$ by approximating the slope $m$ of the secant line from $(x_0,y(x_0))$ to $(x_0+h,y(x_0+h))$ and using the formula $y(x_0+h)=y(x_0)+mh$ (see figure 1). Euler's method approximates the slope of the secant line by the slope of the tangent line at the left endpoint $(x_0,y(x_0))$. The improved Euler's method uses the average of the slopes at the left endpoint and the approximate right endpoint (that is, the right endpoint as computed by Euler's method) to approximate the slope of the secant line. We don't have to stop there either. We can keep finding slopes at different points and computing weighted averages to approximate the slope of the secant line. Numerical methods that approximate the solution of differential equations in this fashion are called Runge-Kutta methods (after the mathematicians Runge and Kutta).

Third Order Runge-Kutta Method

To approximate the solution of $\frac{dy}{dx}=f(x,y),\qquad y(x_0)=y_0,$ compute $\begin{aligned} x_1&=x_0+h\\ k_1&=f(x_0,y_0)\\ k_2&=f(x_0+h,\,y_0+hk_1)\\ k_3&=f(x_0+h/2,\,y_0+(h/2)(k_1+k_2)/2)\\ y_1&=y_0+\frac{k_1+4k_3+k_2}{6}h \end{aligned}$ Then $y(x_1)\approx y_1$.

This method uses the improved Euler method to find an approximate midpoint of the secant line and then takes a weighted average of the slopes at the left and right endpoints and the midpoint. Note that if $f(x,y)$ is just $f(x)$, a function of $x$ alone, then solving the differential equation $\frac{dy}{dx}=f(x)$ is just evaluating the integral $\int_{x_0}^{x_1} f(x)\,dx$. In this case, the third order Runge-Kutta method is the same as Simpson's rule for numerical approximation of integrals from Calculus 2. Just as for Euler's method and the improved Euler's method, the RK3 method can be easily put in a spreadsheet. The first part of a spreadsheet for the RK3 method applied to $\frac{dy}{dx}=2xy$, $y(0)=1$ is shown below. Note that this is the problem for exercise 1, so you can use this spreadsheet to check your work.
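The Simpson's-rule connection is easy to verify in code: when $f$ depends on $x$ alone, the three slopes are just $f$ at the left endpoint, the right endpoint, and the midpoint, and the weighted average (weights 1, 4, 1 over 6) is exactly Simpson's rule. (A Python sketch; names are ours.)

```python
def rk3_step(f, x0, y0, h):
    # the third-order Runge-Kutta method described in the text
    k1 = f(x0, y0)
    k2 = f(x0 + h, y0 + h*k1)
    k3 = f(x0 + h/2, y0 + (h/2)*(k1 + k2)/2)
    return y0 + (k1 + 4*k3 + k2) / 6 * h

# For f(x, y) = x**2, one step from 0 to 1 applies Simpson's rule to
# the integral of x**2 over [0, 1], which Simpson's rule computes exactly.
area = rk3_step(lambda x, y: x**2, 0.0, 0.0, 1.0)   # the integral is 1/3
```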

Fourth Order Runge-Kutta method

Just as with Euler's method and the improved Euler's method, we can repeat the process to find an $x_2=x_1+h=x_0+2h$ and $y_2\approx y(x_2)=y(x_0+2h)$ and so on. The fourth order Runge-Kutta method listed above is powerful enough that it was a popular technique for practical computation (and not just for educational purposes). The steps are well suited to programming into a computer: just loop through the basic algorithm as many times as needed to get to the values you are interested in. The first part of a spreadsheet for the RK4 method applied to $\frac{dy}{dx}=2xy$, $y(0)=1$ is shown below. Note that this is the problem for exercise 2, so you can use this spreadsheet to check your work.

Exercises

  1. Use the third order Runge-Kutta method with step sizes $h=0.1$ and $h=0.05$ to approximate $y(2)$ for the initial value problem $\frac{dy}{dx}=2xy$, $y(0)=1$. Find the error in both approximations.

Runge-Kutta-Fehlberg Methods

Discussion

  1. The error in a single step of the improved Euler's method is about $C'h^3$ and the error in a single step of the third order Runge-Kutta method is about $C''h^4$ where $C'$ and $C''$ are constants that depend on the problem but not the step size. (The error in a single step is the error in $y(x_0+h)$ computed with a step-size of $h$. It is also called the local error.)
  2. $C''h^4$ is much smaller than $C'h^3$.

To see how Fehlberg used these assumptions to compute a good step-size, consider the following equations, where $y_1'$ is the approximation from the improved Euler's method, $y_1''$ is the approximation from the third order Runge-Kutta method, and $y(h)$ is the true value. $\begin{aligned} y_1'&\approx y(h)+C'h^3\\ y_1''&\approx y(h)+C''h^4\\ |y_1'-y_1''|&\approx|C'h^3-C''h^4|\approx |C'|h^3 \end{aligned}$ where the last line uses the assumption that $C''h^4$ is very small compared to $C'h^3$. Now we can solve this last line for the constant $C'$ to get $ |C'|\approx \frac{|y_1'-y_1''|}{h^3}. $ Once we have this approximation for $C'$, we can pick a step-size $h_1$ to get the local error of the size we want. If we want the local error to be about size $T$, we just take a step-size $h_{\text{new}}$ where $ h_{\text{new}}=h\left(\frac{T}{|y_1'-y_1''|}\right)^{1/3} $ and then the error should be about $ \begin{aligned} |\text{error}|&\approx |C'h_{\text{new}}^3|\\ &\approx\frac{|y_1'-y_1''|}{h^3}\,h^3\,\frac{T}{|y_1'-y_1''|}\\ &\approx T. \end{aligned} $ You might be a little worried about how all the errors in the different approximations mount up as we carry out all these computations to get our new step-size $h_{\text{new}}$. This is a serious consideration and is dealt with by introducing a chicken factor, usually taken to be 0.9. We actually use a step-size $ h_1=0.9h\left(\frac{T}{|y_1'-y_1''|}\right)^{1/3}. $ This is a bit smaller than the approximation we computed above, and thus a bit safer too.

The Runge-Kutta-Fehlberg 2(3) method uses exactly this technique to pick the right step-size. Suppose the initial value problem we want to solve is $ \frac{dy}{dx}=f(x,y),\qquad y(x_0)=y_0. $ We have an initial step-size $h$ (taken to be whatever value you fancy; we will update it automatically as needed). We compute the improved Euler's and RK3 estimates in the usual fashion. $\begin{aligned} k_1&=f(x_0,y_0)\\ k_2&=f(x_0+h,y_0+hk_1)\\ k_3&=f(x_0+h/2,y_0+h(k_1+k_2)/4)\\ y_1'&=y_0+h(k_1+k_2)/2\\ y_1''&=y_0+h\,\frac{k_1+k_2+4k_3}{6} \end{aligned}$ Next we compute the estimated error $ |\text{error}|\approx|y_1'-y_1''|. $ If this error is small enough, say within a tolerance of $T=.001\max(|y_0|,1)$ (which is the tolerance used by the matlab program ode23), then we accept this step-size for the current step and let $\begin{aligned} x_1&=x_0+h\\ y_1&=y_1'' \end{aligned}$ If the error is greater than $T$, we reject this step-size for the current step and leave $x_0$ and $y_0$ as they are. In either case, we choose a new step-size $ h_{\text{new}}=.9h\left(\frac{T}{|y_1'-y_1''|}\right)^{1/3}. $ We then either compute the next step with the new step-size (if our error was less than $T$) or we repeat the current step with the new step-size $h_{\text{new}}$ (if the error was greater than $T$) and try again to find $x_1$ and $y_1$.


Second Order Runge-Kutta Method (The Math)

The Second Order Runge-Kutta algorithm described above was developed in a purely ad hoc way. It seemed reasonable that using an estimate for the derivative at the midpoint of the interval between t₀ and t₀+h (i.e., at t₀+½h) would result in a better approximation for the function at t₀+h than would using the derivative at t₀ (i.e., Euler's Method, the First Order Runge-Kutta). And, in fact, we showed that the resulting approximation was better. But is there a way to derive the Second Order Runge-Kutta from first principles? If so, we might be able to develop even better algorithms.

Aside: Some Useful Math Review

In the following derivation we will use two math facts that are reviewed here. You should be familiar with this from a course in multivariable calculus.

1) If g(x,y) is a function of two variables, its Taylor Series about (x₀, y₀) is given by:

$$\begin{aligned} g(x_0+\Delta x,\, y_0+\Delta y) &= g(x_0,y_0) + \frac{\partial g}{\partial x}\,\Delta x + \frac{\partial g}{\partial y}\,\Delta y + \cdots \\ &= g + g_x\,\Delta x + g_y\,\Delta y + \cdots \end{aligned}$$

where Δx is the increment in the first dimension, and Δy is the increment in the second dimension. In the last line we use a shorthand notation that removes the explicit functional notation, and also represents the partial derivative of g with respect to x as g_x (and likewise for g_y).

2) If z is a function of two variables, z = g(x,y), where x = r(t) and y = s(t), then by the chain rule for partial derivatives

$$\frac{dz}{dt} = \frac{\partial g}{\partial x}\,\frac{dx}{dt} + \frac{\partial g}{\partial y}\,\frac{dy}{dt}.$$

To start, recall that we are expressing our differential equation as

$$\frac{dy}{dt} = f(y(t), t).$$

Now define two approximations to the derivative:

$$\begin{aligned} k_1 &= f\left(y^*(t_0),\; t_0\right) \\ k_2 &= f\left(y^*(t_0) + \beta h k_1,\; t_0 + \alpha h\right) \end{aligned}$$

In all cases α and β will represent fractional values between 0 and 1. These equations state that k₁ is the approximation to the derivative based on the estimated value of y(t) at t=t₀ (i.e., y*(t₀)) and the time t₀. The value of k₂ is based upon the estimated value, y*(t₀), plus some fraction of the step size, βh, times the slope k₁, and the time t₀+αh (i.e., some time between t₀ and t₀+h). In the method described previously α=β=½, but other choices are possible.

To update our solution with the next estimate of y(t) at t₀+h we use the equation

$$y^*(t_0+h) = y^*(t_0) + h\left(a k_1 + b k_2\right).$$

This equation states that we get the value of y*(t₀+h) from the value of y*(t₀) plus the time step, h, multiplied by a slope that is a weighted sum of k₁ and k₂. In the method described previously a=0 and b=1, so we used only the second estimate for the slope. (Note that Euler's Method (First Order Runge-Kutta) is a special case of this method with a=1, b=0; α and β don't matter because k₂ is not used in the update equation.)

Our goal now is to determine, from first principles, how to find the values of a, b, α and β that result in low error. Starting with the update equation (above) we can substitute the previously given expressions for k₁ and k₂, which yields

$$y^*(t_0+h) = y^*(t_0) + h\left(a\,f\left(y^*(t_0), t_0\right) + b\,f\left(y^*(t_0) + \beta h k_1,\; t_0 + \alpha h\right)\right).$$

We can use a two-dimensional Taylor Series (where the increment in the first dimension is βhk₁, and the increment in the second dimension is αh) to expand the rightmost term.

$$\begin{aligned} f\left(y^*(t_0) + \beta h k_1,\; t_0 + \alpha h\right) &= f + f_y\,\beta h k_1 + f_t\,\alpha h + \cdots \\ &= f + f f_y\,\beta h + f_t\,\alpha h + \cdots \end{aligned}$$

In the last line we used the fact that k₁ = f. The ellipsis denotes terms that are second order or higher. Substituting this into the previous expression for y*(t₀+h) and rearranging, we get

$$y^*(t_0+h) = y^*(t_0) + (a+b)\,h f + h^2\left(b\alpha\,f_t + b\beta\,f f_y\right) + \cdots$$

The ellipsis was multiplied by h between the first and second lines and now represents terms that are third order or higher.

To finish, we compare this approximation with the expression for a Taylor Expansion of the exact solution (going from the first line to the second we used the chain rule for partial derivatives):

$$\begin{aligned} y(t_0+h) &= y(t_0) + h\,\frac{dy}{dt} + \frac{h^2}{2}\,\frac{d^2y}{dt^2} + \cdots \\ &= y(t_0) + h f + \frac{h^2}{2}\left(f_t + f f_y\right) + \cdots \end{aligned}$$

In this expression the ellipsis represents terms that are third order or higher.

Comparing this expression with our final expression for the approximation, we see that they agree up to the error terms (third order and higher) represented by the ellipsis if we define the constants a, b, α and β such that

$$\begin{aligned} a + b &= 1 \\ b\alpha &= \frac{1}{2} \\ b\beta &= \frac{1}{2} \end{aligned}$$

This system is underspecified: there are four unknowns and only three equations, so more than one solution is possible. Clearly the choice we made, a=0, b=1 and α=β=½, is one of these solutions. Because all of the terms of the approximation are equal to the terms in the exact solution up to the error terms, the local error of this method is O(h³) (O(h²) globally, hence the term "second order" Runge-Kutta).

Key Concept: Error of Second Order Runge-Kutta

The global error of the Second Order Runge-Kutta algorithm is O(h²).
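This claim is easy to check numerically. The sketch below (the helper name `rk2_midpoint` and the test problem y′ = y are my own choices) uses the parameters a=0, b=1, α=β=½ from the derivation; halving the step size should cut the global error by roughly a factor of 4.

```python
import math

def rk2_midpoint(f, y0, t0, h, n):
    """Second Order Runge-Kutta with a = 0, b = 1, alpha = beta = 1/2."""
    y, t = y0, t0
    for _ in range(n):
        k1 = f(y, t)
        k2 = f(y + h * k1 / 2, t + h / 2)   # slope estimated at the midpoint
        y = y + h * k2                       # a = 0, b = 1: only k2 is used
        t = t + h
    return y

# Test problem (my choice): y' = y, y(0) = 1, exact solution e^t at t = 1.
f = lambda y, t: y
e1 = abs(rk2_midpoint(f, 1.0, 0.0, 0.1, 10) - math.e)
e2 = abs(rk2_midpoint(f, 1.0, 0.0, 0.05, 20) - math.e)
ratio = e1 / e2   # expect roughly 4 for an O(h^2) method
```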


Euler's Formula: A Numerical Method

Euler's Method is one of the simplest and oldest numerical methods for approximating solutions to differential equations that cannot be solved with a nice formula. It is also called the tangent line method; in essence, it is an algorithmic way of plotting an approximate solution to an initial value problem through the direction field. It can also be used to get an estimated value for that solution at a given value of x. Here is how it works:

Say we have a generic initial value problem of the form dy/dx = f(x,y) with y(x0) = y0. We may not be able to find a formula for the solution y = φ(x), but if f(x,y) is a continuous function, we do know that such a solution exists. (See Theorem 2.8.1 in Boyce & DiPrima.) We can therefore talk about the solution y = φ(x) and its properties. Our goal is to say as much as we can about this solution, even though we may not be able to write down a formula for it.

At any pair of values for x and y, the direction field gives us the slope of the tangent, and therefore the direction, of the solution curve y = φ(x) passing through that point. We know that the point (x0, y0) lies on the solution curve, and at that point the slope of the tangent is f(x0, y0). Therefore, the equation of the tangent line to the solution at the point (x0, y0) is:

y = y0 + f(x0, y0)(x - x0)

Since a tangent line is a good approximation to a curve near the point (x0, y0), we can move a short distance along the tangent line by increasing x by some small amount h; then we arrive at a new point, which we will call (x1, y1). That is, we choose x1 = x0 + h, and then we obtain y1 by plugging x1 into the equation for the tangent line. Since we arrived at this new point by moving from (x0, y0) a small distance along the tangent line at that point, we can say that the point (x1, y1) is almost on the solution curve y = φ(x).

We can now find the tangent line to the solution at this new point, y = y1 + f(x1, y1)(x - x1), then increase x by the small amount h again, and come up with a new point (x2, y2). Continuing with this method, we use the value of y found at each step to calculate the next value of y. This whole process can be stated more succinctly by Euler's Formula:

yn+1 = yn + f(xn, yn) · h

Another way of writing this is as follows:

yn+1 = yn + f(xn, yn)(xn+1 - xn)

A more complete description of the formula is given in the textbook. As a side remark, note that a natural variant of this algorithm updates the x coordinate with its own slope function as well, which is general enough to handle first-order two-dimensional systems.

To see Euler's Method in use, let's try an example.

Consider the following initial value problem:

y′ = x² - 1,  y(0) = 1

Suppose we want to use Euler's Method to graph an estimate for the initial value problem with f(x,y) = x² - 1 given above, over the interval 0 ≤ x ≤ 2. From the initial value condition, we know that when x = 0, the value of y is 1. Hence we will start at the initial point (x0, y0) = (0, 1). The tangent line at this point is y = 1 - x. If we use a "step size" of h = 1, then our x coordinates will be x0 = 0, x1 = 1, and x2 = 2. Euler's Method will give us estimates for the y values corresponding to each of the x values. For this step size, Euler's Method takes just two steps:

x0 = 0,  y0 = 1
x1 = 1,  y1 = 1 + f(0, 1) · 1 = 0
x2 = 2,  y2 = 0 + f(1, 0) · 1 = 0

So for h = 1, Euler's Method is estimating that our solution curve goes through the three points (0,1), (1,0), and (2,0).

What if we used a smaller step size, say h = 0.5? Then we would get y values corresponding to x0 = 0, x1 = 0.5, x2 = 1, x3 = 1.5, and x4 = 2:

x0 = 0,    y0 = 1
x1 = 0.5,  y1 = 1 + f(0, 1) · (0.5) = 0.5
x2 = 1,    y2 = 0.5 + f(0.5, 0.5) · (0.5) = 0.125
x3 = 1.5,  y3 = 0.125 + f(1, 0.125) · (0.5) = 0.125
x4 = 2,    y4 = 0.125 + f(1.5, 0.125) · (0.5) = 0.75

With a step size of h = 0.5, we now get the five points (0,1), (0.5, 0.5), (1, 0.125), (1.5, 0.125), and (2, 0.75).
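These hand computations are easy to check in code. Here is a minimal sketch (the helper name `euler` is my own) that reproduces the h = 0.5 table above.

```python
def euler(f, x0, y0, h, n):
    """Euler's Method: y_{k+1} = y_k + f(x_k, y_k) * h, for n steps."""
    xs, ys = [x0], [y0]
    x, y = x0, y0
    for _ in range(n):
        y = y + f(x, y) * h   # move along the tangent line at (x, y)
        x = x + h
        xs.append(x)
        ys.append(y)
    return xs, ys

f = lambda x, y: x**2 - 1      # the example IVP: y' = x^2 - 1, y(0) = 1
xs, ys = euler(f, 0.0, 1.0, 0.5, 4)
# ys comes out as [1.0, 0.5, 0.125, 0.125, 0.75], matching the table above.
```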

Keep this example in mind; we will soon verify these calculations using MATLAB.


3.3: The Runge-Kutta Method - Mathematics

Given the IVP of Eq. 6, a time step h, and the solution yn at the nth time step, let's say that we wish to compute yn+1 in the following fashion:

k1 = hf(yn, tn)
k2 = hf(yn + βk1, tn + αh)
yn+1 = yn + a k1 + b k2  (13)

where the constants α, β, a and b have to be evaluated so that the resulting method has a LTE of O(h³). Note that if b = 0 and a = 1, then Eq. 13 reduces to the forward Euler method.

Now, let's write down the Taylor series expansion of y in the neighborhood of tn correct to the h² term, i.e.,

yn+1 = yn + h (dy/dt)|n + (h²/2)(d²y/dt²)|n + O(h³).  (14)

However, we know from the IVP (Eq. 6) that dy/dt = f(y, t), so that

(d²y/dt²)|n = (df/dt)|n = (f_t + f f_y)|n.  (15)

So from the above analysis, i.e., Eqs. 14 and 15, we get

yn+1 = yn + h f_n + (h²/2)(f_t + f f_y)|n + O(h³).  (16)

However, the term k2 in the proposed RK method of Eq. 13 can be expanded correct to O(h³) as

k2 = hf(yn + βk1, tn + αh) = h (f + αh f_t + βk1 f_y)|n + O(h³).  (17)

Now, substituting for k2 from Eq. 17 in Eq. 13, we get

yn+1 = yn + (a + b) h f_n + h² b (α f_t + β f f_y)|n + O(h³).  (18)

Comparing the terms with identical coefficients in Eqs. 16 and 18 gives us the following system of equations to determine the constants:

a + b = 1,  bα = 1/2,  bβ = 1/2.  (19)

There are infinitely many choices of a, b, α and β which satisfy Eq. 19; we can choose, for instance, α = β = 1 and a = b = 1/2. With this choice, we have the classical second order accurate Runge-Kutta method (RK2), which is summarized as follows:

k1 = hf(yn, tn)
k2 = hf(yn + k1, tn + h)
yn+1 = yn + (k1 + k2)/2.
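For concreteness, this RK2 can be sketched as follows (the function name `rk2_heun` and the test problem are my own; note that this source writes f with arguments (y, t)).

```python
import math

def rk2_heun(f, y0, t0, h, n):
    """RK2 with alpha = beta = 1, a = b = 1/2 (also known as Heun's method)."""
    y, t = y0, t0
    for _ in range(n):
        k1 = h * f(y, t)
        k2 = h * f(y + k1, t + h)   # slope re-evaluated at the far endpoint
        y = y + (k1 + k2) / 2
        t = t + h
    return y

# Example (my choice): y' = y, y(0) = 1, integrated to t = 1 with h = 0.1;
# the result should be close to e.
approx = rk2_heun(lambda y, t: y, 1.0, 0.0, 0.1, 10)
```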

In a similar fashion, Runge-Kutta methods of higher order can be developed. One of the most widely used methods for the solution of IVPs is the fourth order Runge-Kutta (RK4) technique. The LTE of this method is of order O(h⁵). The method is given below:

k1 = hf(yn, tn)
k2 = hf(yn + k1/2, tn + h/2)
k3 = hf(yn + k2/2, tn + h/2)
k4 = hf(yn + k3, tn + h)
yn+1 = yn + (k1 + 2k2 + 2k3 + k4)/6.  (20)
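A direct transcription of Eq. 20 (the helper name `rk4_step` and the test problem are illustrative choices):

```python
import math

def rk4_step(f, y, t, h):
    """One step of the classical fourth order Runge-Kutta method (Eq. 20)."""
    k1 = h * f(y, t)
    k2 = h * f(y + k1 / 2, t + h / 2)
    k3 = h * f(y + k2 / 2, t + h / 2)
    k4 = h * f(y + k3, t + h)
    return y + (k1 + 2 * k2 + 2 * k3 + k4) / 6

# Example (my choice): y' = y, y(0) = 1; ten steps of h = 0.1 should land
# very close to e, reflecting the O(h^5) local truncation error.
y, t = 1.0, 0.0
for _ in range(10):
    y = rk4_step(lambda y, t: y, y, t, 0.1)
    t += 0.1
```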

Note that the RK methods are explicit techniques, hence they are only conditionally stable.

