I have a bug whose cause I cannot find. Here is the code:

```python
with tf.Graph().as_default():
    global_step = tf.Variable(0, trainable=False)

    images = tf.placeholder(tf.float32, shape=[FLAGS.batch_size, 33, 33, 1])
    labels = tf.placeholder(tf.float32, shape=[FLAGS.batch_size, 21, 21, 1])

    logits = inference(images)
    losses = loss(logits, labels)
    train_op = train(losses, global_step)

    saver = tf.train.Saver(tf.all_variables())
    summary_op = tf.merge_all_summaries()
    init = tf.initialize_all_variables()

    sess = tf.Session()
    sess.run(init)
    summary_writer = tf.train.SummaryWriter(FLAGS.train_dir, sess.graph)

    for step in xrange(FLAGS.max_steps):
        start_time = time.time()
        data_batch, label_batch = SRCNN_inputs.next_batch(np_data, np_label,
                                                          FLAGS.batch_size)
        _, loss_value = sess.run([train_op, losses],
                                 feed_dict={images: data_batch, labels: label_batch})
        duration = time.time() - start_time


def next_batch(np_data, np_label, batchsize,
               training_number=NUM_EXAMPLES_PER_EPOCH_TRAIN):
    perm = np.arange(training_number)
    np.random.shuffle(perm)
    data = np_data[perm]
    label = np_label[perm]
    data_batch = data[0:batchsize, :]
    label_batch = label[0:batchsize, :]
    return data_batch, label_batch
```
where `np_data` is the whole set of training samples read from an HDF5 file, and likewise for `np_label`.
After I run the code, I get an error like this:
```
2016-07-07 11:16:36.900831: step 0, loss = 55.22 (218.9 examples/sec; 0.585 sec/batch)
Traceback (most recent call last):
  File "<ipython-input-1-19672e1f8f12>", line 1, in <module>
    runfile('/home/kang/Documents/work_code_PC1/tf_SRCNN/SRCNN_train.py', wdir='/home/kang/Documents/work_code_PC1/tf_SRCNN')
  File "/usr/lib/python3/dist-packages/spyderlib/widgets/externalshell/sitecustomize.py", line 685, in runfile
    execfile(filename, namespace)
  File "/usr/lib/python3/dist-packages/spyderlib/widgets/externalshell/sitecustomize.py", line 85, in execfile
    exec(compile(open(filename, 'rb').read(), filename, 'exec'), namespace)
  File "/home/kang/Documents/work_code_PC1/tf_SRCNN/SRCNN_train.py", line 155, in <module>
    train_test()
  File "/home/kang/Documents/work_code_PC1/tf_SRCNN/SRCNN_train.py", line 146, in train_test
    summary_str = sess.run(summary_op)
  File "/usr/local/lib/python3.4/dist-packages/tensorflow/python/client/session.py", line 372, in run
    run_metadata_ptr)
  File "/usr/local/lib/python3.4/dist-packages/tensorflow/python/client/session.py", line 636, in _run
    feed_dict_string, options, run_metadata)
  File "/usr/local/lib/python3.4/dist-packages/tensorflow/python/client/session.py", line 708, in _do_run
    target_list, options, run_metadata)
  File "/usr/local/lib/python3.4/dist-packages/tensorflow/python/client/session.py", line 728, in _do_call
    raise type(e)(node_def, op, message)
InvalidArgumentError: You must feed a value for placeholder tensor 'Placeholder' with dtype float and shape [128,33,33,1]
  [[Node: Placeholder = Placeholder[dtype=DT_FLOAT, shape=[128,33,33,1], _device="/job:localhost/replica:0/task:0/gpu:0"]()]]
  [[Node: truediv/_74 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/cpu:0", send_device="/job:localhost/replica:0/task:0/gpu:0", send_device_incarnation=1, tensor_name="edge_56_truediv", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"]()]]
Caused by op 'Placeholder', defined at:
```
So at step 0 the run succeeds, which means the data was fed into the placeholders.
But why does the next run then complain that no data was fed into the placeholder?
When I comment out the line `summary_op = tf.merge_all_summaries()`, the code works fine. Why is that?
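From the traceback, my guess is that the problem is the call `summary_str = sess.run(summary_op)` at line 146: `summary_op` merges the loss summaries, which depend on the placeholders, so it would presumably need the same `feed_dict` as `train_op`. Here is a toy model of that dependency in plain Python (no TensorFlow; the `Placeholder` class and `run` function are made up for illustration) showing why evaluating a node that indirectly reaches an unfed input fails:

```python
# Toy model of graph evaluation with feeds: evaluating a node pulls in every
# node it depends on, including placeholders reached only indirectly.

class Placeholder:
    def __init__(self, name):
        self.name = name

def run(node, feed_dict=None):
    """Evaluate `node`; any Placeholder it reaches must be in feed_dict."""
    feed_dict = feed_dict or {}
    if isinstance(node, Placeholder):
        if node not in feed_dict:
            raise ValueError(
                "You must feed a value for placeholder %r" % node.name)
        return feed_dict[node]
    # a non-placeholder node is (function, list of input nodes)
    fn, inputs = node
    return fn(*[run(i, feed_dict) for i in inputs])

images = Placeholder("images")
loss = (lambda x: x * 2.0, [images])        # loss depends on images
summary = (lambda l: "loss=%s" % l, [loss]) # summary -> loss -> images

print(run(summary, {images: 3.0}))  # works: the feed reaches the placeholder
try:
    run(summary)                    # fails, like sess.run(summary_op) did
except ValueError as e:
    print(e)
```

If that reading is right, the fix would be to pass the batch to the summary run as well, e.g. `sess.run(summary_op, feed_dict={images: data_batch, labels: label_batch})`, which would also explain why removing `tf.merge_all_summaries()` makes the error disappear.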