I am new to TensorFlow and am having difficulty understanding the RNN module. I am trying to extract the hidden/cell states from an LSTM. My code uses the implementation from https://github.com/aymericdamien/TensorFlow-Examples.
# tf Graph input
x = tf.placeholder("float", [None, n_steps, n_input])
y = tf.placeholder("float", [None, n_classes])

# Define weights
weights = {'out': tf.Variable(tf.random_normal([n_hidden, n_classes]))}
biases = {'out': tf.Variable(tf.random_normal([n_classes]))}

def RNN(x, weights, biases):
    # Prepare data shape to match `rnn` function requirements
    # Current data input shape: (batch_size, n_steps, n_input)
    # Required shape: 'n_steps' tensors list of shape (batch_size, n_input)

    # Permuting batch_size and n_steps
    x = tf.transpose(x, [1, 0, 2])
    # Reshaping to (n_steps*batch_size, n_input)
    x = tf.reshape(x, [-1, n_input])
    # Split to get a list of 'n_steps' tensors of shape (batch_size, n_input)
    x = tf.split(0, n_steps, x)

    # Define a lstm cell with tensorflow
    #with tf.variable_scope('RNN'):
    lstm_cell = rnn_cell.BasicLSTMCell(n_hidden, forget_bias=1.0, state_is_tuple=True)

    # Get lstm cell output
    outputs, states = rnn.rnn(lstm_cell, x, dtype=tf.float32)

    # Linear activation, using rnn inner loop last output
    return tf.matmul(outputs[-1], weights['out']) + biases['out'], states

pred, states = RNN(x, weights, biases)

# Define loss and optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(pred, y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)

# Evaluate model
correct_pred = tf.equal(tf.argmax(pred,1), tf.argmax(y,1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
# Initializing the variables
init = tf.initialize_all_variables()
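To check my understanding of the transpose/reshape/split sequence inside `RNN()`, here is the same shape transformation in plain NumPy (the concrete sizes are just illustrative values for `batch_size`, `n_steps`, and `n_input`):

```python
import numpy as np

batch_size, n_steps, n_input = 4, 28, 28
# Dummy input with the same layout as the placeholder x: (batch_size, n_steps, n_input)
x = np.arange(batch_size * n_steps * n_input, dtype=np.float32)
x = x.reshape(batch_size, n_steps, n_input)

# Permute batch_size and n_steps: (n_steps, batch_size, n_input)
x_t = np.transpose(x, (1, 0, 2))
# Reshape to (n_steps * batch_size, n_input)
x_flat = x_t.reshape(-1, n_input)
# Split into a list of n_steps arrays, each of shape (batch_size, n_input)
x_list = np.split(x_flat, n_steps, axis=0)

print(len(x_list), x_list[0].shape)  # -> 28 (4, 28)
# Each list entry t holds the t-th time step for the whole batch:
print(np.array_equal(x_list[5], x[:, 5, :]))  # -> True
```

So the list handed to `rnn.rnn` contains one `(batch_size, n_input)` tensor per time step, which matches the comment block in the example code.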
Now I want to extract the cell/hidden state for each time step of a prediction. The state is stored in an LSTMStateTuple of the form (c, h), which I can confirm by evaluating print states
. However, calling print states.c.eval()
(which, according to the documentation, should give me the values of the tensor states.c
) yields an error saying that my variables are not initialized, even though I call it right after running a prediction. The code for this is here:
# Launch the graph
with tf.Session() as sess:
    sess.run(init)
    step = 1

    for v in tf.get_collection(tf.GraphKeys.VARIABLES, scope='RNN'):
        print v.name

    # Keep training until reach max iterations
    while step * batch_size < training_iters:
        batch_x, batch_y = mnist.train.next_batch(batch_size)
        # Reshape data to get 28 seq of 28 elements
        batch_x = batch_x.reshape((batch_size, n_steps, n_input))
        # Run optimization op (backprop)
        sess.run(optimizer, feed_dict={x: batch_x, y: batch_y})
        print states.c.eval()
        # Calculate batch accuracy
        acc = sess.run(accuracy, feed_dict={x: batch_x, y: batch_y})
        step += 1
    print "Optimization Finished!"
and the error message is:
InvalidArgumentError: You must feed a value for placeholder tensor 'Placeholder' with dtype float
[[Node: Placeholder = Placeholder[dtype=DT_FLOAT, shape=[], _device="/job:localhost/replica:0/task:0/cpu:0"]()]]
The states are also not visible in tf.all_variables()
; only the trained weight/bias tensors appear (as described here: Tensorflow: show or save forget gate values in LSTM). I don't want to build the whole LSTM from scratch, though, since I already have the states in the states
variable; I just need to fetch its values.
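For reference, my understanding of what the LSTMStateTuple (c, h) holds, sketched as a single LSTM cell step in plain NumPy (the function `lstm_step`, the random weights, and the sizes are my own illustration, not from the example code):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, b, forget_bias=1.0):
    """One basic LSTM step; returns the new (c, h) state pair."""
    # One matmul on [x_t, h_prev] yields all four gate pre-activations.
    z = np.concatenate([x_t, h_prev], axis=1).dot(W) + b
    i, j, f, o = np.split(z, 4, axis=1)  # input gate, candidate, forget gate, output gate
    c = c_prev * sigmoid(f + forget_bias) + sigmoid(i) * np.tanh(j)  # new cell state
    h = sigmoid(o) * np.tanh(c)                                      # new hidden state
    return c, h

n_input, n_hidden, batch_size = 28, 128, 4
rng = np.random.RandomState(0)
W = 0.1 * rng.randn(n_input + n_hidden, 4 * n_hidden).astype(np.float32)
b = np.zeros(4 * n_hidden, dtype=np.float32)

x_t = rng.randn(batch_size, n_input).astype(np.float32)
c0 = np.zeros((batch_size, n_hidden), dtype=np.float32)
h0 = np.zeros((batch_size, n_hidden), dtype=np.float32)

c1, h1 = lstm_step(x_t, h0, c0, W, b)
print(c1.shape, h1.shape)  # -> (4, 128) (4, 128)
```

So both members of the tuple should be (batch_size, n_hidden) arrays per step; that is what I expect states.c to contain once I manage to fetch it.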