However, because the BP neural network is in essence a gradient descent method, it easily falls into local optima.
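To make this concrete, here is a minimal sketch (not taken from any of the cited works) of plain gradient descent on a simple non-convex one-dimensional function with two minima; started in the wrong basin, the update converges to the local rather than the global optimum. The function, learning rate, and starting point are all assumptions chosen for illustration.

```python
# Minimal sketch (assumed function, step size and start point): plain gradient
# descent on the non-convex function f(w) = w**4 - 3*w**2 + w, which has a
# global minimum near w = -1.30 and a local minimum near w = 1.13.

def f(w):
    return w**4 - 3*w**2 + w

def grad_f(w):
    return 4*w**3 - 6*w + 1

w = 1.0                      # start inside the basin of the local minimum
lr = 0.01                    # learning rate (assumed value)
for _ in range(1000):
    w -= lr * grad_f(w)      # gradient descent update

# The iterate stalls at the local minimum instead of the global one.
print(f"w = {w:.3f}, f(w) = {f(w):.3f}")
```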
The network weights are trained with the gradient descent method, and the growth algorithm of BVS, together with a restricted-memory recursive formula for network training, is derived.
Two methods can be used to design the network: one is the direct energy descent method, and the other is the gradient descent method.
The essence of the back-propagation network is that the gradient descent method always adjusts the weights in the direction of decreasing error, finally reaching the minimum error.
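As an illustration of that weight-update rule, the following sketch trains a small one-hidden-layer network by back-propagation on a toy regression problem; the architecture, data, and learning rate are assumed for the example and are not the network described above.

```python
import numpy as np

# Minimal sketch (assumed architecture and data): one hidden layer trained by
# back-propagation, i.e. gradient descent on the squared error, so every weight
# change moves in the direction of decreasing error.

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (200, 1))
y = np.sin(np.pi * X)                      # toy regression target

W1 = rng.normal(0, 0.5, (1, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.1                                   # learning rate (assumed value)

for epoch in range(2000):
    h = np.tanh(X @ W1 + b1)               # forward pass
    y_hat = h @ W2 + b2
    err = y_hat - y
    # back-propagate the error and take a gradient descent step
    dW2 = h.T @ err / len(X);  db2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h**2)
    dW1 = X.T @ dh / len(X);   db1 = dh.mean(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print("final MSE:", float((err**2).mean()))
```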
The three optimization problems above are solved by the conjugate gradient method, the adaptive bivariate shrinking method, and the gradient descent method, respectively.
For the AC position servo system used in binding and printing, the paper proposes an adaptive neural network PID controller based on a weight-learning algorithm that uses the gradient descent method.
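The sketch below shows only the general idea of tuning PID gains online by a gradient descent rule on the squared tracking error; the servo plant, its Jacobian sign, and all numerical values are placeholders, not the controller or system from the cited paper.

```python
# Sketch of the general idea only: the PID gains Kp, Ki, Kd are adjusted online
# by gradient descent on 0.5*e**2, assuming the sign of the plant Jacobian dy/du
# is positive (a common simplification). The first-order plant is a stand-in,
# not the AC position servo from the paper.

def simulate(steps=500, lr=1e-4, dt=0.01):
    Kp, Ki, Kd = 1.0, 0.1, 0.05          # initial gains (assumed values)
    y, integ, prev_e = 0.0, 0.0, 0.0
    setpoint = 1.0
    for _ in range(steps):
        e = setpoint - y
        integ += e * dt
        deriv = (e - prev_e) / dt
        u = Kp * e + Ki * integ + Kd * deriv          # PID control law
        # gradient descent on the squared error w.r.t. the gains (sign(dy/du) = +1)
        Kp += lr * e * e
        Ki += lr * e * integ
        Kd += lr * e * deriv
        # toy first-order plant y' = (-y + u) / tau standing in for the servo
        tau = 0.2
        y += dt * (-y + u) / tau
        prev_e = e
    return y, (Kp, Ki, Kd)

print(simulate())
```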
In this scheme, the inputs of the hidden-layer neurons are obtained by the gradient descent method, and the weights and thresholds of each neuron are trained by the linear least squares method.
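One plausible reading of such a hybrid scheme is sketched below: the hidden-layer parameters are refined by gradient descent while the output weights are re-solved in closed form by linear least squares at every iteration. The network size, data, and update details are assumptions and need not match the paper's algorithm.

```python
import numpy as np

# Rough sketch of a hybrid training scheme (assumed interpretation): gradient
# descent refines the hidden layer, linear least squares solves the output weights.

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, (300, 2))
y = np.sin(X[:, :1]) + X[:, 1:]**2                   # toy target

W = rng.normal(0, 1, (2, 10)); b = np.zeros(10)      # hidden-layer parameters
lr = 0.05                                            # learning rate (assumed value)

for it in range(200):
    H = np.tanh(X @ W + b)                            # hidden-layer outputs
    Ha = np.hstack([H, np.ones((len(X), 1))])         # append bias column
    beta, *_ = np.linalg.lstsq(Ha, y, rcond=None)     # output weights by least squares
    err = Ha @ beta - y
    # gradient descent step on the hidden-layer parameters for the squared error
    dH = (err @ beta[:-1].T) * (1 - H**2)
    W -= lr * X.T @ dH / len(X)
    b -= lr * dH.mean(axis=0)

print("final MSE:", float((err**2).mean()))
```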
This paper studies the BP network, implements the gradient descent training method, and obtains better results than the traditional method.
A hybrid optimization algorithm combining the steepest descent method and the conjugate gradient method is constructed in this paper, and its global convergence is proved.
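A common way to combine the two methods, shown in the sketch below, is to take Polak-Ribiere conjugate gradient directions with a periodic restart to the steepest descent direction and a backtracking line search; the paper's exact hybridization rule and convergence argument are not reproduced here, and the test function and restart period are assumptions.

```python
import numpy as np

# Sketch only (assumed test function and restart period; not the paper's hybrid rule):
# Polak-Ribiere conjugate gradient directions with a periodic steepest descent restart
# and a backtracking (Armijo) line search, demonstrated on the Rosenbrock function.

def f(x):
    return (1 - x[0])**2 + 100*(x[1] - x[0]**2)**2

def grad(x):
    return np.array([-2*(1 - x[0]) - 400*x[0]*(x[1] - x[0]**2),
                     200*(x[1] - x[0]**2)])

def backtrack(x, d, g, alpha=1.0, rho=0.5, c=1e-4):
    # shrink the step until the Armijo sufficient-decrease condition holds
    while f(x + alpha*d) > f(x) + c*alpha*(g @ d) and alpha > 1e-12:
        alpha *= rho
    return alpha

x = np.array([-1.2, 1.0])
g = grad(x)
d = -g                                   # first direction: steepest descent
for k in range(200):
    x = x + backtrack(x, d, g) * d
    g_new = grad(x)
    if k % 5 == 4:                       # periodic restart with the steepest descent direction
        d = -g_new
    else:                                # Polak-Ribiere conjugate gradient direction
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))
        d = -g_new + beta*d
        if g_new @ d >= 0:               # safeguard: keep d a descent direction
            d = -g_new
    g = g_new

print("x =", x, "f(x) =", f(x))
```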
An adaptive gradient descent algorithm for training the simplified internally recurrent network (SIRN) is developed, and a new method for reconciling nonlinear dynamic data based on the SIRN is proposed.
The gradient descent algorithm is an effective method for training multilayer feedforward neural networks (FNN), and it can be implemented in either batch or incremental mode.
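The difference between the two modes can be seen in the following toy sketch for a linear model: batch mode accumulates the gradient over the whole training set before each update, while incremental (online) mode updates the weights after every sample. The data and learning rates are assumed values.

```python
import numpy as np

# Minimal sketch (toy linear model, assumed data) contrasting the two learning modes.

rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, (100, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + 0.01 * rng.normal(size=100)

def batch_gd(epochs=200, lr=0.5):
    w = np.zeros(3)
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(X)    # gradient over the full training set
        w -= lr * grad                        # one update per epoch
    return w

def incremental_gd(epochs=200, lr=0.05):
    w = np.zeros(3)
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            w -= lr * (xi @ w - yi) * xi      # update after each sample
    return w

print("batch:      ", batch_gd())
print("incremental:", incremental_gd())
```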
The traditional one-dimensional gradient descent search algorithm is first improved, and five decimation patterns based on the gradient descent direction are proposed; the pattern is selected adaptively during motion estimation to remove redundant computation of the matching criterion.