The multiprocessing library in Python

As part of a course on computational methods, I am simulating molecular dynamics using the velocity Verlet algorithm. Simply put, given an initial distribution of particle positions and momenta and an interaction potential, we use the momenta to update the positions, and the forces, derived from the potential, to update the momenta. Depending on which is updated first, the positions or the momenta, the scheme is called Verlet or velocity Verlet.

One of the major hurdles, given a large system size, is measuring the forces on all particles, both the x and y components, simultaneously. Note that one cannot measure the force on a particle, update its position and momentum, and then move on to the next particle! That's wrongedy wrong wrong! One needs to measure the forces on every particle given the current positions/configuration, then update the momenta, followed by an update to the positions. And then the cycle repeats.

Like I was saying, one of the time-consuming steps is measuring the forces on each particle, and I was vexed having to wait. I tried making the code lean, but I couldn't get beyond a point conventionally. Then it struck me that computing the forces could be parallelized. I started looking for examples of multiprocessing in Python and came across this excellent introduction. And voila! Four lines of code, and I cut the time it took to compute the forces in half! Maybe if I try a bit more I can bring it down further. And all it took was 4 lines of code -

import multiprocessing as mp

pool = mp.Pool(processes=4)
# Pass the callable itself; writing local_force(i) would run the function
# serially and hand apply_async its return value instead of the function.
results = [pool.apply_async(local_force, args=(i,)) for i in range(399)]
output = [p.get() for p in results]

* - local_force(i) is a function that I had defined earlier that measures the force on a given particle i
