Adaptive step size Runge-Kutta method

I am still trying to implement an adaptive step size RK routine. So far, I've been able to implement the step-halving method but not Runge-Kutta-Fehlberg. I haven't been able to figure out how to increase the step size again after reducing it.

To give some background on the topic, Runge-Kutta methods are used to solve ordinary differential equations (higher-order equations can be rewritten as systems of first-order ones). For a first order differential equation, the method uses the derivative of the function to predict what the function value at the next step should be; Euler's method is the most rudimentary member of the family. Adaptive step size RK means changing the step size depending on how quickly or slowly the function is changing. If a function is rising or falling rapidly, it is in a region we should sample carefully, so we reduce the step size; if the rate of change is small, we can afford to increase it. I've been able to implement the reduction of the step size based on the rate of change of the function, but not the converse. I'll keep trying till the end of the day and then get some help. Searching for a solution online hasn't helped much.
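For concreteness, here is a minimal sketch (in Python, my choice, not from the post) of how a step-halving adaptive RK4 loop can do both directions: compare one full step against two half steps to estimate the local error, halve the step when the error exceeds the tolerance, and double it for the next step when the error is comfortably small. The function names and the `tol / 32` growth threshold are my own illustrative choices, not a standard prescription.

```python
import math

def rk4_step(f, t, y, h):
    """One classical RK4 step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def adaptive_rk4(f, t0, y0, t_end, h=0.1, tol=1e-6):
    """Step-halving adaptive RK4: one full step vs. two half steps.

    Shrinks h when the error estimate is too large, and (the part
    I was missing) grows it again when the error is well below tol.
    """
    t, y = t0, y0
    ts, ys = [t], [y]
    while t < t_end:
        h = min(h, t_end - t)              # don't overshoot the endpoint
        y_full = rk4_step(f, t, y, h)      # one step of size h
        y_half = rk4_step(f, t, y, h / 2)  # two steps of size h/2
        y_two = rk4_step(f, t + h / 2, y_half, h / 2)
        err = abs(y_two - y_full)          # local error estimate
        if err > tol:
            h /= 2                          # too inaccurate: halve and retry
            continue
        t += h
        y = y_two                           # keep the more accurate result
        ts.append(t)
        ys.append(y)
        if err < tol / 32:                  # comfortably accurate: double h
            h *= 2
    return ts, ys
```

As a quick check, solving y' = y from y(0) = 1 up to t = 1 should land close to e ≈ 2.71828. The Fehlberg method refines this same idea by getting the error estimate from two embedded formulas of different order within a single set of stage evaluations, instead of recomputing with half steps.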

I really wish there were well-documented code available. Rosetta Code seems like an interesting place to start; maybe I should contribute to it.

Update: I was still bored after writing this post, so I started adding labels to all of the posts so far. I used to label posts religiously, but I haven't been as diligent since I started writing daily. Also, I forgot to add references to RK4 methods. Here and here are two good introductions to the method and its applications.
