
Differential Equations: Honors Project (Working Title)


By: Brock Skaggs



Throughout the semester most of our time has been spent solving differential equations whose solutions could be represented in closed form in terms of elementary functions. Ideally every differential equation would fit into this nice category, but unfortunately most real-world problems modeled with differential equations do not. One way to manage such equations is to obtain a solution in the form of an infinite series, as was done in chapter six; yet since the series is infinite, we would ultimately have to settle for a partial sum unless we could find the value to which it converged. An alternative, which employs the raw computing power of digital computers, is to use a numerical method to obtain a close approximation. This paper focuses on numerical methods for first order initial value problems (IVPs).

The basic idea when using numerical methods to solve differential equations is to use the differential equation itself to obtain information about the solution. For instance, in the case of a first order initial value problem, a point on the solution curve is known, as is the slope of the curve at any other point. Therefore, if the incremental distance traveled along the x axis, or step size (h), is controlled, then the differential equation (y' = f(x, y)) can be used to provide the amount of rise or fall over that small increment. This comes directly from the definition of slope: f(x, y) = Δy/h, and consequently Δy = f(x, y)*h. Having arrived at the next point (x + h, y + Δy), the process can then be repeated to obtain the following point, and so on. This simple train of thought practically defines the most basic numerical method, known as Euler's Method (Arthur).

Since numerical methods always approximate the actual solution curve, the amount of error in the computed values is quite important. The dominant error, defined as the difference between the actual solution value and the approximated solution value at a specific point, is known as discretization or truncation error (Boor, 333). This error can be visualized in the figure below, which shows the results of solving the IVP y' = 2y, y(0) = 1. The blue curve corresponds to the familiar analytic solution (y = e^(2x)) while the red curve is an approximation obtained through Euler's Method with a step size of 0.1. The truncation error is then simply the vertical distance at each point between the two curves. A second type of error that occurs in numerical methods, usually much smaller than truncation error, is known as round-off error. Round-off error arises from the fact that the computer has only a finite number of digits with which to perform calculations, forcing it to round at some point. Factors that play into round-off error include the type of arithmetic used by the computer, the order of the arithmetic operations, and the numerical procedure being used (Boor, 362). A common way to denote the error of a given method is the notation O(h^n), which represents an nth-order numerical method (Zill, 341). This essentially means that the error is proportional to the nth power of h. Thus, if the method in question were second order and the step size were halved, the resulting error would decrease to one-fourth of its previous size (Arthur).

Figure 1. Euler’s Method Example
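The data behind Figure 1 can be reproduced with a short program in the style of the appendix code. The following is a minimal C++ sketch, assuming the IVP y' = 2y, y(0) = 1 and the step size of 0.1 described above, which prints the Euler approximation, the analytic value, and the truncation error at each step.

//Program: Euler's Method truncation error demonstration (sketch)

#include <cmath>    //Used for exp()
#include <iostream> //Used for input and output

using namespace std;

int main()
{
//Initial condition y(0) = 1 for the IVP y' = 2y
double x_value = 0.0;
double y_value = 1.0;

//Step size
double step = 0.1;

//Ten Euler steps carry the solution from x = 0 to x = 1
for(int n = 0; n < 10; n++)
{
//Euler step using the slope f(x, y) = 2y
y_value = y_value + step*(2*y_value);
x_value = x_value + step;

//Truncation error is the distance from the analytic solution e^(2x)
double actual = exp(2*x_value);
cout << "x: " << x_value << "  Euler: " << y_value
<< "  actual: " << actual
<< "  error: " << actual - y_value << endl;
}

return 0;
}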

As mentioned earlier, the most basic numerical method is Euler's Method. This method simply uses a fixed step size and the known slope at each point (from evaluating the differential equation) to find the next point. As with other numerical methods, Euler's Method can be described through the recursive formulas presented below (Zill, 76).
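x_(n+1) = x_n + h
y_(n+1) = y_n + h*f(x_n, y_n)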



The error associated with Euler's Method makes it a first-order method, O(h), which explains why it is not used in everyday practice. To obtain an accurate solution one would have to use an extremely small step size, which forces the computer to perform more function evaluations to reach the solution since more steps are involved. This inevitably results in a longer program runtime (Boor, 336). Furthermore, the additional calculations increase the accumulated round-off error in the computation, which could become significant (Boor, 363).

From looking at the figure above and studying the recursive formulas, a few thoughts come to mind about the errors in the Euler's Method approximation. One such thought is that the only chance Euler's Method has of producing a value that lies exactly on the solution curve is if the curve happens to be a straight line over that step interval. In addition to knowing that this error exists, the differential equation provides enough information to tell whether the Euler approximation is above or below the actual value. This information is exposed by evaluating the second derivative of the actual function at the point in question. If this value is positive, then the solution curve is convex and Euler's Method will produce a value that is too low. Conversely, if the value is negative, then the solution curve is concave and Euler's Method produces a value that is too high (Arthur). For example, for the IVP y' = 2y above, differentiating gives y'' = 2y' = 4y > 0, so the solution curve is convex and the red Euler curve in Figure 1 lies below the true solution.

In light of the flaws associated with Euler's Method, other methods were developed to improve the approximation of the solution curve. The main goal of these methods is to improve the slope of the line segment that is produced at each increment. The next evolutionary step toward this improvement was a method known as the Improved Euler's Method, or Second-Order Runge-Kutta Method (RK2). This method can be defined by the set of recursive formulas below (Zill, 342).
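y*_(n+1) = y_n + h*f(x_n, y_n)
y_(n+1) = y_n + (h/2)*[f(x_n, y_n) + f(x_(n+1), y*_(n+1))]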

As can be seen from examining the formulas, the slope used in moving to the next value is actually an average. This average consists of the value of the differential equation evaluated at the present point and its value evaluated at the predicted upcoming point. This averaging of slopes allows the method to be classified as O(h^2) (Arthur).

Although RK2 is definitely an improvement over the original Euler's Method, other methods have been developed to gain an even better approximation of the solution. One such method, which is successfully used in practice, is known as the Fourth-Order Runge-Kutta Method (RK4) (Boor, 338). This method gets its improved accuracy from taking a weighted average of four slopes which are sampled between the starting point and a full step size away. This can be seen in the recursive formulas below, which define RK4 (Zill, 346).
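k1 = f(x_n, y_n)
k2 = f(x_n + h/2, y_n + (h/2)*k1)
k3 = f(x_n + h/2, y_n + (h/2)*k2)
k4 = f(x_n + h, y_n + h*k3)
y_(n+1) = y_n + (h/6)*(k1 + 2*k2 + 2*k3 + k4)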

By taking a weighted average of four slopes within one step, the RK4 method earns the classification of a fourth-order method, O(h^4) (Zill). Even though the RK4 method can obtain great accuracy, this extra refinement does come at a computational price. As can be seen from the equations, there are four function evaluations per time step, which inevitably slows down overall program execution. Furthermore, this method, like all of the previous methods, has the disadvantage of essentially starting the entire process over after each time step. This flaw is especially pronounced in RK4 with its four function evaluations (Hamming, 212).

In order to combat the inefficiencies associated with the RK4 method, other types of methods were conceived which do not have to start from square one after each time step. These methods, called Multistep or Continuing Methods in general, use information from past steps to aid in the present step's calculation. One popular example of such a method is the Adams-Bashforth-Moulton Method, whose recursive equations are listed below (Zill, 351).

Adams-Bashforth Predictor:
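y*_(n+1) = y_n + (h/24)*(55*y'_n - 59*y'_(n-1) + 37*y'_(n-2) - 9*y'_(n-3)), where y'_k = f(x_k, y_k)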

Adams-Moulton Corrector:
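y_(n+1) = y_n + (h/24)*(9*f(x_(n+1), y*_(n+1)) + 19*y'_n - 5*y'_(n-1) + y'_(n-2))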

As can be seen from the grouping of the formulas, this method is broken into two separate parts, the predictor and the corrector. First the predictor portion is carried out, and its result is then substituted into the corrector formula. Here the predicted slope is adjusted to produce a better fit to the solution curve. An obvious advantage of this method is that it uses information from previous time steps to generate the slope at the current one, which saves computation time. At the same time, however, this requires the values of previous time steps to already be known. This is usually accomplished by first using a "self-starting" method, such as RK4, to generate the initial values needed to feed the Adams-Bashforth-Moulton Method (Boor, 354). Besides less computation time, this method provides a certain measure of error by looking at the difference between the predicted and corrected values. This information can then be used to control the step size in order to generate the best possible approximation. Unfortunately, this method also tends to be difficult to program (Hamming, 194).
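Although the appendix does not include a multistep program, the following minimal C++ sketch illustrates one predictor-corrector cycle. It assumes the same test IVP y' = 2xy, y(1) = 1 used in the appendix programs; the step size of 0.1 and the use of RK4 to generate the three starting values are illustrative choices.

//Program: Adams-Bashforth-Moulton Method (sketch)

#include <iostream> //Used for input and output

using namespace std;

//Right-hand side of the test IVP: f(x, y) = 2xy
double f(double x, double y)
{
return 2*x*y;
}

//One RK4 step, used only as the "self-starting" method
double rk4_step(double x, double y, double h)
{
double k1 = f(x, y);
double k2 = f(x + 0.5*h, y + 0.5*h*k1);
double k3 = f(x + 0.5*h, y + 0.5*h*k2);
double k4 = f(x + h, y + h*k3);
return y + (h/6)*(k1 + 2*k2 + 2*k3 + k4);
}

int main()
{
//Step size
double h = 0.1;

//The predictor needs four known points
double x[4], y[4];

//Self-starting phase: RK4 generates y_1, y_2, y_3 from y(1) = 1
x[0] = 1.0;
y[0] = 1.0;
for(int i = 1; i < 4; i++)
{
x[i] = x[i-1] + h;
y[i] = rk4_step(x[i-1], y[i-1], h);
}

//Slopes at the four known points
double yp0 = f(x[0], y[0]);
double yp1 = f(x[1], y[1]);
double yp2 = f(x[2], y[2]);
double yp3 = f(x[3], y[3]);

//Adams-Bashforth predictor for the value at x_4 = x_3 + h
double y_pred = y[3] + (h/24)*(55*yp3 - 59*yp2 + 37*yp1 - 9*yp0);

//Adams-Moulton corrector using the slope at the predicted point
double y_corr = y[3] + (h/24)*(9*f(x[3] + h, y_pred) + 19*yp3 - 5*yp2 + yp1);

//The predictor-corrector difference provides a measure of the error
cout << "x value: " << x[3] + h << endl;
cout << "predicted y: " << y_pred << endl;
cout << "corrected y: " << y_corr << endl;
cout << "difference: " << y_corr - y_pred << endl;

return 0;
}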



In looking at the equations in our introductory Differential Equations course, it is obvious that in most cases a numerical method of solution would not be the best choice. Yet outside of the classroom, many differential equations must be solved through numerical approximation because their actual solutions cannot be written in terms of elementary functions. Numerical solutions also become increasingly attractive as these equations become quite complex and lengthy in their attempts to model physical phenomena. In such cases, the computational power of a computer is utilized to laboriously produce a satisfactory approximation to the solution.

Bibliography

Arthur Mattuck, 18.03 Differential Equations, Spring 2006 (Massachusetts Institute of Technology: MIT OpenCourseWare), http://ocw.mit.edu (accessed April 15, 2010). License: Creative Commons BY-NC-SA.

Boor, Carl de, and S. D. Conte. Elementary Numerical Analysis: An Algorithmic Approach. United States: McGraw-Hill Book Company, 1965.

Hamming, R. W. Numerical Methods for Scientists and Engineers. United States: McGraw-Hill Book Company, 1962.

Zill, Dennis G. A First Course in Differential Equations with Modeling Applications. Belmont, CA: Brooks/Cole, 2009.

Appendix

This appendix contains source code from C++ programs which were written to solve some of the
problems in the text.

This program uses Euler's Method to solve problem one of section 2.6 with a step size of 0.05.
//Program: Euler's Method (based on #1 p.79)
//Author: Brock Skaggs

#include <iostream> //Used for input and output

using namespace std;

int main()
{
//Incrementing values
float x_value;
float y_value;

//step size
float step;

//Terminating value
float dest;

//Initial Values
float x_intvalue;
float y_intvalue;

//Problem specific
x_intvalue = 1;
y_intvalue = 5;
dest = 1.2;
step = 0.05;

//Setting y value to initial value
y_value = y_intvalue;

//Recursive for loop to find approximation
//The step/2 tolerance keeps floating-point drift in x_value from
//adding or skipping a step
for(x_value = x_intvalue; x_value + step/2 < dest; x_value += step)
{
//Finds the next y value with one Euler step for y' = 2x - 3y + 1
y_value = y_value + (step*(2*x_value - 3*y_value + 1));
}

//Print out of results to screen
cout << "SOLUTION: " << endl;
cout << "x value: " << x_value << endl;
cout << "y value: " << y_value << endl;

return 0;
}

This program uses the Improved Euler’s Method to solve example two of section 9.1.
//Program: Improved Euler's Method (based on ex.2 p.343)
//Author: Brock Skaggs

#include <iostream> //Used for input and output

using namespace std;

int main()
{
//Incrementing values
float x_value;
float y_value;

//Step Size
float step;

//Terminating value
float dest;

//Initial Values
float x_intvalue;
float y_intvalue;

//Predicting value
float y_pred;

//Solution value
float y_value2;

//Problem Specific
x_intvalue = 1;
y_intvalue = 1;
dest = 1.50;
step = 0.1;

//Setting y value to initial value
y_value = y_intvalue;

//Recursive for loop
//The step/2 tolerance keeps floating-point drift in x_value from
//adding or skipping a step, whether the step size is 0.1 or 0.05
for(x_value = x_intvalue; x_value + step/2 < dest; x_value += step)
{
//Predictor: Euler step using the slope f(x, y) = 2xy
y_pred = y_value + (step*(2*x_value*y_value));

//Corrector: average of the slopes at the current and predicted points
y_value2 = y_value + step*(0.5*((2*x_value*y_value) + 2*(x_value + step)*y_pred));

//Resetting y_value for next loop
y_value = y_value2;
}

cout << "x value: " << x_value << endl;


cout << "y value: " << y_value2 << endl;

return 0;
}

This program uses the RK4 method to solve example 1 in section 9.2.
//Program: Fourth-Order Runge-Kutta Method (based on ex.1 pg. 347)
//Author: Brock Skaggs

#include <iostream> //Used for input and output

using namespace std;



int main()
{
//Incrementing values
float x_value, y_value;

//Step size
float step;

//Terminating value
float dest;

//Initial Values
float x_intvalue, y_intvalue;

//Parameters for RK4 Method
float k1, k2, k3, k4;

//Problem Specific
x_intvalue = 1;
y_intvalue = 1;
step = 0.05;
dest = 1.5;

//Setting y_value to initial y value
y_value = y_intvalue;

//Recursive for loop
//The step/2 tolerance keeps floating-point drift in x_value from
//adding or skipping a step, whether the step size is 0.1 or 0.05
for(x_value = x_intvalue; x_value + step/2 < dest; x_value += step)
{
//Evaluation of Parameters
k1 = 2*x_value*y_value;
k2 = 2*(x_value + 0.5*step)*(y_value + 0.5*step*k1);
k3 = 2*(x_value + 0.5*step)*(y_value + 0.5*step*k2);
k4 = 2*(x_value + step)*(y_value + step*k3);

//Evaluation of the next y value
y_value = y_value + (step/6)*(k1 + 2*k2 + 2*k3 + k4);
}

//Printing out value of solution
cout << "x value: " << x_value << endl;
cout << "y value: " << y_value << endl;

return 0;
}
