This paper studies the BP network, implements the gradient descent training method, and obtains better results than the traditional approach.
An adaptive gradient descent algorithm for training simplified internally recurrent networks (SIRN) is developed, and a new SIRN-based method for reconciling nonlinear dynamic data is proposed.
For the AC position servo system used in packaging and printing, the paper proposes an adaptive neural network PID controller based on a weight-learning algorithm that uses the gradient descent method.
Because the BP neural network is in essence a gradient descent method, it easily falls into local optima.
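The local-optimum problem can be illustrated with a minimal sketch (not from any of the cited papers): gradient descent on a nonconvex one-dimensional "error surface" with two minima converges to a different minimum depending on where it starts, just as BP training can get trapped by its initial weights. The function `f` below is an arbitrary illustrative choice.

```python
# Illustrative only: gradient descent on a nonconvex 1-D error surface.
# f(x) = x^4 - 3x^2 + x has two minima; which one we reach depends on
# the starting point, mirroring how BP can get stuck in a local optimum.
def grad(x):
    return 4 * x**3 - 6 * x + 1   # derivative of x^4 - 3x^2 + x

def descend(x, lr=0.01, steps=2000):
    for _ in range(steps):
        x -= lr * grad(x)         # step against the gradient
    return x

x_left = descend(-2.0)   # settles near x ≈ -1.30 (the deeper, global minimum)
x_right = descend(2.0)   # settles near x ≈ 1.13 (a shallower, local minimum)
print(x_left, x_right)
```

Both runs use the same update rule and learning rate; only the initial point differs, yet they end in different basins.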
The essence of back-propagation networks is that gradient descent changes the weights so that the error always moves in a decreasing direction, finally reaching the minimum error.
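As a minimal sketch of that weight-update rule (a single linear neuron rather than a full BP network; the data and learning rate are arbitrary), each step subtracts the gradient of the squared error from the weights, so the recorded error decreases toward its minimum:

```python
import numpy as np

# Minimal sketch: gradient descent on the weights of one linear neuron.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))          # inputs
w_true = np.array([1.5, -2.0, 0.5])    # weights that generated the targets
y = X @ w_true                         # targets

w = np.zeros(3)                        # initial weights
lr = 0.05
errors = []
for _ in range(200):
    err = X @ w - y
    errors.append(0.5 * np.mean(err**2))
    w -= lr * (X.T @ err) / len(X)     # step along the negative error gradient

print(errors[0], errors[-1])           # error shrinks toward the minimum
```

Each update moves the weights in the direction that locally decreases the error, which is exactly the behavior the sentence above describes.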
In this scheme, the inputs of the hidden-layer neurons are obtained by the gradient descent method, and the weights and thresholds of each neuron are trained by the linear least-squares method.
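The least-squares half of such a hybrid scheme can be sketched as follows (a hypothetical example, not the cited scheme: the hidden layer here is a fixed random `tanh` projection): with the hidden-layer outputs held fixed, the output weights that minimize the squared error have a closed-form linear least-squares solution.

```python
import numpy as np

# Hypothetical sketch: solve for output-layer weights by linear least squares
# once the hidden-layer outputs H are fixed.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4))                    # network inputs
H = np.tanh(X @ rng.normal(size=(4, 10)))        # fixed hidden-layer outputs
w_out = rng.normal(size=10)                      # weights that generated y
y = H @ w_out + 0.01 * rng.normal(size=200)      # slightly noisy targets

# Closed-form least-squares solve instead of iterative gradient steps.
w_fit, *_ = np.linalg.lstsq(H, y, rcond=None)
print(np.mean((H @ w_fit - y) ** 2))             # small residual error
```

Solving this layer in closed form avoids iterating gradient descent on weights whose error surface is, given fixed hidden outputs, purely quadratic.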
The weights are trained with the gradient descent method; the growth algorithm of BVS and the restricted-memory recursive formula for network training are also derived.