I am a little confused by NumPy's broadcasting rules. Suppose you want to perform an axis-wise scalar product on a higher-dimensional array to reduce its dimension by one (basically a weighted summation along one axis):
from numpy import *

A = ones((3,3,2))
v = array([1,2])
B = zeros((3,3))

# V01: this works
B[0,0] = v.dot(A[0,0])

# V02: this works
B[:,:] = v[0]*A[:,:,0] + v[1]*A[:,:,1]

# V03: this doesn't
B[:,:] = v.dot(A[:,:])
Why does V03 not work?
Cheers
np.dot(a, b) operates over the last axis of a and the second-to-last axis of b. That is exactly why V03 fails: the last axis of v has length 2, while the second-to-last axis of A has length 3, so the shapes are not aligned. For your particular case, you could always go with:
>>> a.dot(v)
array([[ 3.,  3.,  3.],
       [ 3.,  3.,  3.],
       [ 3.,  3.,  3.]])
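To make the mismatch concrete, you can compare the two axes that np.dot pairs up. This is a minimal check of my own (using lowercase a and v for the question's arrays, as in the snippets here), not part of the original answer:

>>> import numpy as np
>>> a = np.ones((3, 3, 2))
>>> v = np.array([1, 2])
>>> v.shape[-1], a.shape[-2]  # the axes np.dot pairs up for v.dot(a)
(2, 3)

Since 2 != 3, v.dot(a) raises a ValueError about misaligned shapes, whereas a.dot(v) pairs a's last axis (length 2) with v (length 2) and succeeds.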
If you want to keep the v.dot(a) order, you need to get the axis into position, which can easily be achieved with np.rollaxis:
>>> v.dot(np.rollaxis(a, 2, 1))
array([[ 3.,  3.,  3.],
       [ 3.,  3.,  3.],
       [ 3.,  3.,  3.]])
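If the rollaxis call looks opaque, checking the intermediate shape helps: v.dot needs its partner's second-to-last axis to have length 2. A quick sketch (the np.moveaxis line is my addition as an arguably more readable equivalent; the original answer only uses np.rollaxis):

>>> np.rollaxis(a, 2, 1).shape  # axis 2 rolled to position 1
(3, 2, 3)
>>> v.dot(np.moveaxis(a, 2, 1)).shape  # moveaxis produces the same layout here
(3, 3)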
I don't like np.dot too much, unless it is for the obvious matrix or vector multiplication, because it is very strict about the output dtype when using the optional out parameter. Joe Kington has mentioned it already, but if you are going to be doing this type of thing, get used to np.einsum: once you get the hang of the syntax, it will cut the amount of time you spend worrying about reshaping things to a minimum:
>>> a = np.ones((3, 3, 2))
>>> np.einsum('i, jki', v, a)
array([[ 3.,  3.,  3.],
       [ 3.,  3.,  3.],
       [ 3.,  3.,  3.]])
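If the implicit output of 'i, jki' (the unrepeated indices, in alphabetical order) feels too magical, the same contraction can be spelled with an explicit output signature; this equivalent form is my addition, not from the original answer:

>>> np.einsum('i,jki->jk', v, a)  # sum over i, keep axes j and k
array([[ 3.,  3.,  3.],
       [ 3.,  3.,  3.],
       [ 3.,  3.,  3.]])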
Not that it is too relevant in this case, but it is also ridiculously fast:
In [4]: %timeit a.dot(v)
100000 loops, best of 3: 2.43 us per loop

In [5]: %timeit v.dot(np.rollaxis(a, 2, 1))
100000 loops, best of 3: 4.49 us per loop

In [7]: %timeit np.tensordot(v, a, axes=(0, 2))
100000 loops, best of 3: 14.9 us per loop

In [8]: %timeit np.einsum('i, jki', v, a)
100000 loops, best of 3: 2.91 us per loop