python - Activation for pybrain LSTM layer is zero
I created an LSTM network to regress a sequence of data. When I try to get the activation of the hidden layer (which is the LSTM layer), it gives zero. The network has only one hidden layer, plus one input and one output layer.
I try to get the hidden layer values with the following snippet.
    print(net.activate(data))
    print(net['in'].outputbuffer[net['in'].offset])
    print(net['hidden0'].outputbuffer[net['hidden0'].offset])
Any ideas why? Below is a more complete code snippet:
    RopewayIn = RopewayOverallData[:-1, :]
    RopewayOut = RopewayOverallData[1:, :]

    ds.newSequence()
    for i in range(noDataFrames):
        ds.appendLinked([RopewayIn[i, 0], RopewayIn[i, 1], RopewayIn[i, 2], RopewayIn[i, 3], RopewayIn[i, 4],
                         RopewayIn[i, 5], RopewayIn[i, 6], RopewayIn[i, 7], RopewayIn[i, 8], RopewayIn[i, 9]],
                        [RopewayOut[i, 0], RopewayOut[i, 1], RopewayOut[i, 2], RopewayOut[i, 3], RopewayOut[i, 4],
                         RopewayOut[i, 5], RopewayOut[i, 6], RopewayOut[i, 7], RopewayOut[i, 8], RopewayOut[i, 9]])

    net = buildNetwork(10, 20, 10, hiddenclass=LSTMLayer, outclass=LinearLayer, bias=True, recurrent=True)
    trainer = RPropMinusTrainer(net, dataset=ds, verbose=True, weightdecay=0.01)

    for i in range(10001):
        trainer.trainEpochs(2)

    print(net.activate(RopewayOverallData[0, 4]))
    print(net['in'].outputbuffer[net['in'].offset])
    print(net['hidden0'].outputbuffer[net['hidden0'].offset])
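A quick way to see what the buffers actually contain is to print a module's whole outputbuffer together with its offset after a single activation. The following is a minimal sketch, assuming the same buildNetwork call as above; the input vector is made up for illustration:

    from pybrain.tools.shortcuts import buildNetwork
    from pybrain.structure.modules import LSTMLayer, LinearLayer

    # same shape of network as in the question
    net = buildNetwork(10, 20, 10, hiddenclass=LSTMLayer, outclass=LinearLayer,
                       bias=True, recurrent=True)

    sample = [0.1] * 10            # made-up 10-dimensional input frame
    out = net.activate(sample)     # one forward step

    hidden = net['hidden0']
    print(hidden.offset)           # which time-step row pybrain is currently pointing at
    print(hidden.outputbuffer)     # in a recurrent net the buffer holds one row per time step
    print(out)

If the row at `offset` is all zeros, printing the full buffer this way at least shows whether the activations you expect ended up in a different row.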
This is not really an answer, but it won't fit in a comment. I tried to run a mixture of your code and the code from your previous question:
    from pybrain.tools.shortcuts import buildNetwork
    from pybrain.datasets import SupervisedDataSet
    from pybrain.supervised.trainers import BackpropTrainer
    from pybrain.structure.modules import LSTMLayer, LinearLayer

    net = buildNetwork(3, 3, 3, hiddenclass=LSTMLayer, outclass=LinearLayer, bias=True, recurrent=True)

    dataSet = SupervisedDataSet(3, 3)
    dataSet.addSample((0, 0, 0), (0, 0, 0))
    dataSet.addSample((1, 1, 1), (0, 0, 0))
    dataSet.addSample((0, 0, 0), (1, 0, 0))

    trainer = BackpropTrainer(net, dataSet)

    trained = False
    acceptableError = 0.001
    howmanytries = 0

    # train until the error is acceptable or we give up
    while (trained == False) and (howmanytries < 1000):
        error = trainer.train()
        if error < acceptableError:
            trained = True
        else:
            howmanytries += 1

    result = net.activate([0.5, 0.4, 0.7])
    print(net['in'].outputbuffer[net['in'].offset])
    print(net['hidden0'].outputbuffer[net['hidden0'].offset])
    print(result)
... and it just printed fine, non-zero results. I would start from that and change the pieces back towards your code, and see where it breaks.
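For what it's worth, one intermediate step in that direction could be to keep the tiny 3-3-3 network but swap in a SequentialDataSet and an RPropMinusTrainer, as in your setup. A minimal sketch (the sequence values below are made up; only the pybrain calls mirror the question):

    from pybrain.tools.shortcuts import buildNetwork
    from pybrain.datasets import SequentialDataSet
    from pybrain.supervised.trainers import RPropMinusTrainer
    from pybrain.structure.modules import LSTMLayer, LinearLayer

    net = buildNetwork(3, 3, 3, hiddenclass=LSTMLayer, outclass=LinearLayer,
                       bias=True, recurrent=True)

    # swap the supervised dataset for a sequential one, as in the question
    ds = SequentialDataSet(3, 3)
    ds.newSequence()
    ds.appendLinked((0, 0, 0), (1, 0, 0))   # made-up sequence data
    ds.appendLinked((1, 1, 1), (0, 1, 0))
    ds.appendLinked((0, 1, 0), (0, 0, 1))

    # swap the trainer for RPropMinusTrainer, as in the question
    trainer = RPropMinusTrainer(net, dataset=ds, verbose=True, weightdecay=0.01)
    trainer.trainEpochs(10)

    print(net.activate((0.5, 0.4, 0.7)))
    print(net['hidden0'].outputbuffer[net['hidden0'].offset])

If the hidden activations are still non-zero here, the remaining differences to check would be the real data and the much longer training loop.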