I don't get how the line `results[i, sequence] = 1` works in the following.
I am following along in the debugger with some sample code from the Manning book "Deep Learning with Python" (example 3.5-classifying-movie-reviews.ipynb from the book). While I understand what the code does, I don't get the syntax or how it works. I'm trying to learn Python and deep learning together and want to understand what the code is doing.
```python
import numpy as np

def vectorize_sequences(sequences, dimension=10000):
    # Create an all-zero matrix of shape (len(sequences), dimension)
    results = np.zeros((len(sequences), dimension))
    for i, sequence in enumerate(sequences):
        results[i, sequence] = 1.  # <--- How does this work?
    return results
```
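To have something concrete to look at, here is a scaled-down sketch of the same function; the dimension of 8 and the sample sequence are just made up by me so the whole output fits on screen:

```python
import numpy as np

def vectorize_sequences(sequences, dimension=8):
    results = np.zeros((len(sequences), dimension))
    for i, sequence in enumerate(sequences):
        results[i, sequence] = 1.  # same line I'm asking about
    return results

# Made-up sample sequence (the real ones come from the IMDB data)
sample = [[3, 0, 5]]
print(vectorize_sequences(sample))
# [[1. 0. 0. 1. 0. 1. 0. 0.]]
```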
- This creates `results`, a 25,000 x 10,000 array of zeros. `sequences` is a list of tuples, for example (3, 0, 5). It then walks `sequences` and, for each non-zero value of each sequence, sets the corresponding index in `results[i]` to 1.0. They call it one-hot encoding.
- I don't understand how the line `results[i, sequence] = 1` accomplishes this in numpy.
- I get the `for i, sequence in enumerate(sequences)` part: that just enumerates the `sequences` list while keeping track of `i`.
- I'm guessing there is some numpy magic that is somehow setting values in `results[i]` based on examining `sequence[n]` element by element and inserting a 1.0 whenever `sequence[n]` is non-zero(?) I just want to understand the syntax; my best guess is sketched in code below.
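To make that guess concrete, here is the element-by-element loop I have in my head, compared against the book's one-liner on some made-up sample data (the variable names and the toy dimension of 8 are mine):

```python
import numpy as np

sequences = [[3, 0, 5], [1, 2]]   # made-up sample data
dimension = 8                     # shrunk from 10000 so the arrays stay readable

# My guess, spelled out element by element:
looped = np.zeros((len(sequences), dimension))
for i, sequence in enumerate(sequences):
    for n in range(len(sequence)):
        looped[i][sequence[n]] = 1.   # use each element of `sequence` as a column index

# The book's one-liner:
fancy = np.zeros((len(sequences), dimension))
for i, sequence in enumerate(sequences):
    fancy[i, sequence] = 1.

print(np.array_equal(looped, fancy))  # True on this example
```

On this toy input the two give the same array, so I suspect the one-liner is doing something like my loop, but I'd like to understand what this indexing syntax is called and how it actually works.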