Abstract:
Deep neural networks (DNNs) and recurrent neural networks (RNNs) are becoming popular for solving complex problems such as speech recognition, machine translation, natural language processing, and image recognition. A major strength of RNNs is their ability to learn through the propagation of errors to their weights. Weights are central to reducing error and to optimization.
Weights are normally initialized randomly and adjusted during training. We present a parameter-splitting concept for training DNNs and RNNs on sequence-based problems such as speech recognition. We start from a small model and gradually increase the number of parameters of the RNN by splitting the weights and appending the resulting copies. Random noise is added to boost the split weights. Our splitting technique yields a better word error rate and reduces overtraining of the model.
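The abstract describes the splitting step only at a high level. As a rough illustration of the general idea, and not the paper's exact procedure, a recurrent weight matrix can be grown by duplicating (splitting) it along both dimensions and perturbing the result with small random noise so the duplicated units can diverge during further training. The function name split_and_grow, the doubling factor, and the noise scale below are illustrative assumptions.

    import numpy as np

    def split_and_grow(W, noise_std=0.01, rng=None):
        """Grow a square recurrent weight matrix by splitting (duplicating)
        it and adding small random noise to break symmetry.

        W: (h, h) weight matrix of the smaller model.
        Returns a (2h, 2h) matrix used to initialise the larger model.
        """
        rng = np.random.default_rng() if rng is None else rng
        # Append copies of the weights along both dimensions ("splitting"),
        # halving them so the total incoming signal stays comparable.
        W_big = np.block([[W, W], [W, W]]) / 2.0
        # Boost the split weights with random noise so the copies can diverge.
        W_big += rng.normal(0.0, noise_std, size=W_big.shape)
        return W_big

    # Example: grow a 4-unit recurrent layer to 8 units.
    W_small = np.random.randn(4, 4) * 0.1
    W_large = split_and_grow(W_small)
    print(W_small.shape, "->", W_large.shape)   # (4, 4) -> (8, 8)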