For a couple of days I've been stuck on a problem with a self-implemented neural network in Java. The network has two input neurons, corresponding to the x and y coordinates of a pixel in a given greyscale image, and one output neuron representing the tone of that pixel. The learning algorithm I'm using is RPROP.

The problem: after numerous iterations of feeding the network the whole training data set, it converges to a point very far from the expected one, and when I analyse the weight structure of the converged network, every neuron in the hidden layer has exactly the same set of input weights. This happens regardless of the number of neurons in the hidden layer.

Could this be caused by the fact that I initialize all weights in the network to the same value? I tried randomizing the weights, but it didn't make things any better. I am using sigmoidal (tanh) activation functions in all layers except the output layer.

I don't know whether I made a bug in the implementation or whether I misunderstood some part of the mathematical description of the neural network learning process. Does anyone know what might cause such strange behaviour?
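For context, this is roughly what I mean by randomizing the weights. If all hidden neurons start with identical weights, they receive identical gradients and identical RPROP steps every epoch, so they can never become different; random initialization is supposed to break that symmetry. The class and helper names below are hypothetical, a minimal sketch rather than my actual code:

```java
import java.util.Arrays;
import java.util.Random;

public class WeightInit {
    // Hypothetical helper: fill a [hidden x (inputs+1)] weight matrix
    // (the extra column is the bias) with small random values, so no two
    // hidden neurons start out with the same weight vector.
    static double[][] randomWeights(int hidden, int inputs, long seed) {
        Random rng = new Random(seed);
        double[][] w = new double[hidden][inputs + 1];
        double range = 1.0 / Math.sqrt(inputs); // common heuristic scale
        for (int i = 0; i < hidden; i++) {
            for (int j = 0; j <= inputs; j++) {
                // uniform in [-range, range]
                w[i][j] = (rng.nextDouble() * 2.0 - 1.0) * range;
            }
        }
        return w;
    }

    public static void main(String[] args) {
        // 4 hidden neurons, 2 inputs (x and y), like my setup.
        double[][] w = randomWeights(4, 2, 42L);
        boolean allDistinct = true;
        for (int a = 0; a < w.length; a++) {
            for (int b = a + 1; b < w.length; b++) {
                if (Arrays.equals(w[a], w[b])) {
                    allDistinct = false;
                }
            }
        }
        System.out.println(allDistinct ? "distinct" : "duplicated");
    }
}
```

The point is that every layer's weights must be randomized, not just one of them; if any layer still starts with uniform weights, the neurons feeding through it remain interchangeable.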