In my project I need to compute the Euclidean distance between each pair of points stored in an array. The input array is a 2D NumPy array with 3 columns, which are the coordinates (x, y, z), and each row defines a new point.
I'm usually working with 5000-6000 points in my test cases.
My first algorithm uses Cython and my second NumPy. I find that my NumPy algorithm is faster than the Cython one.
Edit: with 6000 points:
numpy 1.76 s / cython 4.36 s
Here's my Cython code:
cimport cython
from libc.math cimport sqrt
@cython.boundscheck(False)
@cython.wraparound(False)
cdef void calcul1(double[::1] M, double[::1] R):
    cdef int i = 0
    cdef int max = M.shape[0]
    cdef int x, y
    cdef int start = 3
    for x in range(0, max, 3):          # x is the base index of one point
        for y in range(start, max, 3):  # y is the base index of every later point
            R[i] = sqrt((M[y] - M[x])**2 + (M[y+1] - M[x+1])**2 + (M[y+2] - M[x+2])**2)
            i += 1
        start += 3                      # skip the pairs already computed
M is a memoryview of the initial input array, flattened by NumPy before the call to calcul1(); R is a memoryview of a 1D output array that stores all the results.
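To make the layout concrete, here is a small sketch (toy values, not my real data) of what calcul1() receives:

import numpy as np

M = np.array([[0.0, 0.0, 0.0],   # point 0
              [3.0, 4.0, 0.0],   # point 1
              [1.0, 1.0, 1.0]])  # point 2
Mf = M.flatten()                 # [x0, y0, z0, x1, y1, z1, x2, y2, z2]
# point k's coordinates sit at Mf[3*k], Mf[3*k + 1], Mf[3*k + 2]
out = np.empty((3*3 - 3) // 2)   # one slot per pair: (0,1), (0,2), (1,2)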
Here's my NumPy code:
import numpy as np

def calcul2(M):
    return np.sqrt(((M[:, :, np.newaxis] - M[:, np.newaxis, :])**2).sum(axis=0))
Here M is the initial input array, but transposed by NumPy before the function call so that the coordinates (x, y, z) are the rows and the points are the columns.
Moreover, this NumPy function is quite convenient because the array it returns is well organised: it's an n by n array, where n is the number of points, and each point has a row and a column. So, for example, the distance AB is stored at the intersection of row A and column B.
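For example, with three toy points (repeating calcul2() from above so the snippet runs on its own):

import numpy as np

def calcul2(M):
    return np.sqrt(((M[:, :, np.newaxis] - M[:, np.newaxis, :])**2).sum(axis=0))

P = np.array([[0.0, 0.0, 0.0],   # point A
              [3.0, 4.0, 0.0],   # point B
              [1.0, 1.0, 1.0]])  # point C
D = calcul2(P.transpose())       # coordinates as rows, points as columns
print(D[0, 1])                   # distance AB -> 5.0
print(D[1, 0])                   # symmetric, same value
print(D[0, 0])                   # the diagonal is 0.0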
Here's how I call them (from a Cython function):
cpdef test():
    cdef double[::1] Mf
    cdef double[::1] out = np.empty(17997000, dtype=np.float64)  # (6000² - 6000) / 2
    M = np.arange(6000*3, dtype=np.float64).reshape(6000, 3)     # example array with 6000 points
    Mf = M.flatten()      # because my Cython algorithm needs a 1D array
    Mt = M.transpose()    # because my NumPy algorithm needs coordinates as rows
    calcul2(Mt)
    calcul1(Mf, out)
Am I doing something wrong here? For my project, neither is fast enough.
1: Is there a way to improve my Cython code so that it beats NumPy's speed?
2: Is there a way to improve my NumPy code to compute even faster?
3: Or any other solution, as long as it stays in Python/Cython (parallel computing, for example; see the sketch below for the kind of thing I mean)?
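To give an idea of what I mean by parallel computing, here is the rough, untested direction I'm imagining with Cython's prange (calcul1_par is just a name I made up, and it would need to be compiled with OpenMP flags such as -fopenmp):

# cython: boundscheck=False, wraparound=False
from cython.parallel import prange
from libc.math cimport sqrt

cpdef void calcul1_par(double[::1] M, double[::1] R):
    cdef Py_ssize_t n = M.shape[0] // 3   # number of points
    cdef Py_ssize_t i, j, base
    cdef double dx, dy, dz
    # each i writes to a disjoint slice of R, so the threads never collide;
    # later rows are shorter, so dynamic scheduling balances the work
    for i in prange(n, nogil=True, schedule='dynamic'):
        base = i * n - i * (i + 1) // 2 - i - 1   # start of row i in the condensed output
        for j in range(i + 1, n):
            dx = M[3*j]     - M[3*i]
            dy = M[3*j + 1] - M[3*i + 1]
            dz = M[3*j + 2] - M[3*i + 2]
            R[base + j] = sqrt(dx*dx + dy*dy + dz*dz)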
Thank you.