I defined a Python function that runs a bash script. Let's say the function is `calc(x, y, z)`. If I run this function in Python with some arguments,

```python
>>> calc(1, 2, 3)
```

it generates C code that simulates something using the variables (x=1, y=2, z=3), compiles the C code, and executes the compiled output file.
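For reference, `calc` does something along these lines. This is only a simplified sketch: the real work is done through a bash script, and the C template, file names, and compiler flags here are placeholders.

```python
import subprocess
import textwrap

def calc(x, y, z):
    # Simplified sketch: the real function delegates to a bash script.
    # 1) Generate a C source file from the parameters (placeholder template).
    source = textwrap.dedent(f"""
        #include <stdio.h>
        int main(void) {{
            /* ... simulation using x={x}, y={y}, z={z} ... */
            printf("%d\\n", {x} + {y} + {z});
            return 0;
        }}
        """)
    src, exe = f"sim_{x}_{y}_{z}.c", f"./sim_{x}_{y}_{z}"
    with open(src, "w") as f:
        f.write(source)
    # 2) Compile the generated C code and 3) run the compiled output file.
    subprocess.run(["gcc", src, "-o", exe], check=True)
    subprocess.run([exe], check=True)
```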
I want to run multiple `calc(x, y, z)` calls with different `(x, y, z)` values in a Jupyter notebook simultaneously. As you may have noticed, the problem is that cells in a Jupyter notebook are executed sequentially, so if I run three `calc` calls, it takes three times as long as running just one.
I tried two ways, but they didn't work well.

- Use the `multiprocessing` module: with this module it is possible to execute multiple `calc`s simultaneously in "one cell" (a sketch of that cell is shown after this list). But for later analysis, I would like to execute multiple cells simultaneously, each containing only one `calc`, using different processors (or CPU cores).

- Use the `ipyparallel` cell magic (inspired by this answer): after importing `ipyparallel`, I tried the following:

  Cell 1:

  ```python
  %%px --targets 0  # use processor 0
  calc(1, 1, 1)
  ```

  Cell 2:

  ```python
  %%px --targets 1  # use processor 1
  calc(2, 2, 2)
  ```

  Cell 3:

  ```python
  %%px --targets 2  # use processor 2
  calc(3, 3, 3)
  ```

  But the cells are executed sequentially: Cell 2 is executed only after the Cell 1 simulation has finished, and similarly for Cell 3.
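For completeness, the single-cell `multiprocessing` version looked roughly like this. It is only a sketch, assuming `calc` is already defined in the notebook and using `Pool.starmap` as one possible way to fan out the calls:

```python
# One cell: run all three simulations in parallel, each in its own process.
from multiprocessing import Pool

params = [(1, 1, 1), (2, 2, 2), (3, 3, 3)]

with Pool(processes=len(params)) as pool:
    pool.starmap(calc, params)  # blocks until all three calc() runs finish
```

This runs the three simulations concurrently, but everything has to live in a single cell, which is exactly what I want to avoid.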
How can I run multiple Jupyter cells simultaneously, using different cores?