I am currently stuck on a problem that I haven't been able to figure out for a couple of days with a self-implemented neural network in Java. The network has two input neurons corresponding to the x and y coordinates of a pixel in a given greyscale image, and one output representing the pixel's grey tone. The learning algorithm I'm using is RPROP.

The problem is that after numerous iterations of feeding the network the whole training data set, it converges to a point very far from the one expected, and after analysing the weight structure of the converged network, I could see that all neurons in the hidden layer had exactly the same set of input weights. This happens independently of the number of neurons in the hidden layer. Is it caused by the fact that I'm initializing all weights in the network with the same value? I tried randomizing the weights, but it didn't make things any better. I am using sigmoidal (tanh) activation functions in all layers except the output layer.

I don't know whether I made a bug in the implementation or misunderstood some part of the mathematical description of the neural network learning process. Does anyone know what might cause such strange behaviour?
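To make the randomization step concrete, here is a simplified sketch of what I mean by randomizing the weights (the identifiers are illustrative, not taken verbatim from my code):

```java
import java.util.Random;

// Simplified sketch of symmetry-breaking weight initialization.
// With identical initial weights, all hidden neurons receive identical
// gradients and identical updates, so they can never differentiate.
public class WeightInit {

    // Fill a layer's weight matrix with small random values so that
    // each hidden neuron starts from a different point in weight space.
    static double[][] randomWeights(int neurons, int inputs, Random rng) {
        double[][] w = new double[neurons][inputs];
        for (int i = 0; i < neurons; i++) {
            for (int j = 0; j < inputs; j++) {
                // Keep values small so tanh stays out of its flat
                // saturated region, where gradients become tiny.
                w[i][j] = (rng.nextDouble() * 2.0 - 1.0) * 0.1;
            }
        }
        return w;
    }
}
```

My understanding is that randomization like this should break the symmetry, which is why I'm surprised the hidden neurons still end up with identical weights after training.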


1 Answer

Are you using bias units? I would look up the use of bias units in neural networks. Also, if you're doing a simple out-of-the-box implementation, you may want to test the iterative results of your work against a known NN library.
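To make the bias suggestion concrete, here is a minimal sketch of a single neuron's forward pass with an explicit bias term; the class and method names are illustrative, not from any particular library:

```java
public class BiasDemo {
    // Illustrative single-neuron forward pass with an explicit bias.
    // The bias acts like a weight on a constant input of 1.0; without
    // it, an all-zero input is forced to produce tanh(0) = 0 no matter
    // what the learned weights are.
    static double activate(double[] inputs, double[] weights, double bias) {
        double sum = bias;
        for (int i = 0; i < inputs.length; i++) {
            sum += inputs[i] * weights[i];
        }
        return Math.tanh(sum);
    }
}
```

The bias is trained just like any other weight, so it also needs its own entry in the RPROP update step.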

