Welcome to ShenZhenJia Knowledge Sharing Community for programmer and developer-Open, Learning and Share
Welcome To Ask or Share your Answers For Others


I want to compute and display accuracy on the test set while the network is training.

In the MNIST tutorial that uses feeds, one can see that it can be done easily by feeding test data rather than train data. Simple solution to a simple problem.

However I am not able to find such an easy example when using queues for batching. AFAICS, the documentation proposes two solutions:

  1. Offline testing with saved states. I don't want offline.
  2. Making a second 'test' network that shares weights with the network being trained. That doesn't sound simple, and I have not seen an example of it.

Is there a third, easy way to compute test metrics at run time? Or is there an example somewhere of the second, test network with shared weights that proves me wrong by being super simple to implement?



1 Answer

If I understand your question correctly, you want to validate your model during training with queue inputs rather than feed_dict. See my program, which does this. Here is a short explanation:

First, convert your data into separate train and validation files, such as 'train.tfrecords' and 'valid.tfrecords'.
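The conversion step might look like the following minimal sketch. The feature names `image_raw` and `label`, the helper `write_tfrecords`, and the dummy data are my own choices for illustration, not from the answer:

```python
import numpy as np
import tensorflow as tf

def write_tfrecords(filename, images, labels):
    # Serialize each (image, label) pair as a tf.train.Example record.
    with tf.io.TFRecordWriter(filename) as writer:
        for img, lab in zip(images, labels):
            example = tf.train.Example(features=tf.train.Features(feature={
                "image_raw": tf.train.Feature(
                    bytes_list=tf.train.BytesList(value=[img.tobytes()])),
                "label": tf.train.Feature(
                    int64_list=tf.train.Int64List(value=[int(lab)])),
            }))
            writer.write(example.SerializeToString())

# Dummy stand-ins for real MNIST images (28x28 uint8) and labels.
images = np.zeros((4, 28, 28), dtype=np.uint8)
labels = [0, 1, 2, 3]
write_tfrecords("train.tfrecords", images[:3], labels[:3])
write_tfrecords("valid.tfrecords", images[3:], labels[3:])

# Count the records written to the train file.
print(sum(1 for _ in tf.data.TFRecordDataset("train.tfrecords")))  # -> 3
```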

Second, in your training program, start two queues that parse these two files, and use variable sharing to get two logits, one for train and one for validation.
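A sketch of what the two queue pipelines could look like, assuming the `image_raw`/`label` feature names from the conversion step and flat 784-pixel MNIST images; `input_pipeline` is a hypothetical helper, and `tf.compat.v1` is used because the queue APIs are TF1-era:

```python
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

def input_pipeline(filename, batch_size):
    # Queue that cycles through the file's records indefinitely.
    filename_queue = tf.train.string_input_producer([filename])
    reader = tf.TFRecordReader()
    _, serialized = reader.read(filename_queue)
    features = tf.parse_single_example(serialized, features={
        "image_raw": tf.FixedLenFeature([], tf.string),
        "label": tf.FixedLenFeature([], tf.int64),
    })
    image = tf.decode_raw(features["image_raw"], tf.uint8)
    image.set_shape([784])  # flattened 28x28 image
    image = tf.cast(image, tf.float32) / 255.0
    label = tf.cast(features["label"], tf.int32)
    return tf.train.shuffle_batch([image, label], batch_size=batch_size,
                                  capacity=1000, min_after_dequeue=100)

# One pipeline per file; both feed the same shared-weight model.
images, labels = input_pipeline("train.tfrecords", 32)
validation_images, validation_labels = input_pipeline("valid.tfrecords", 32)
```

At run time you would start the queue runners with `tf.train.start_queue_runners` inside a session before evaluating either batch.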

In my program this is done by

with tf.variable_scope("inference") as scope:
    logits = mnist.inference(images)
    scope.reuse_variables()
    validation_logits = mnist.inference(validation_images)

then use logits to compute the training loss and minimize it, and use validation_logits to compute the validation accuracy.
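For that last step, a minimal illustration of deriving the loss from one logits tensor and the accuracy from the other; the constant label and logit batches here are stand-ins for the real queue outputs:

```python
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

# Stand-ins for the batches the two queues would produce.
labels = tf.constant([0, 1], dtype=tf.int32)
validation_labels = tf.constant([1, 0], dtype=tf.int32)
logits = tf.constant([[2.0, 0.1], [0.3, 1.5]])
validation_logits = tf.constant([[0.2, 1.9], [1.1, 0.4]])

# Training loss on the train-queue logits.
loss = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(
    labels=labels, logits=logits))

# Validation accuracy on the shared-weight validation logits.
correct = tf.nn.in_top_k(validation_logits, validation_labels, 1)
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))

with tf.Session() as sess:
    # Both validation predictions match their labels here.
    print(sess.run(accuracy))  # -> 1.0
```

In the real program, both ops are simply fetched from the same session during the training loop, so no second network or feed_dict is needed.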

