However, because the BP neural network is in essence a gradient descent method, it easily falls into local optima.
However, the gradient-descent-based BP network has defects such as slow convergence and a tendency to fall into local minima.
This paper studies the BP network, implements the gradient descent training method for it, and obtains better results than the traditional approach.
The gradient descent algorithm is an effective method for training multilayer feedforward neural networks, and it can be implemented in either batch or incremental learning mode.
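To illustrate the two learning modes mentioned above, the following sketch contrasts a batch update (one gradient computed over all samples) with an incremental update (one gradient per sample). The linear model, toy data, and learning rate are illustrative assumptions, not details of the cited work.

```python
import numpy as np

# Toy linear regression data (an illustrative assumption, not from the cited work).
X = np.array([[0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
y = np.array([1.0, 2.0, 3.0])
lr = 0.1  # learning rate

def batch_step(w):
    # Batch mode: one update from the gradient of the mean squared error over all samples.
    grad = 2.0 * X.T @ (X @ w - y) / len(y)
    return w - lr * grad

def incremental_epoch(w):
    # Incremental (online) mode: one update after each individual sample.
    for xi, yi in zip(X, y):
        grad = 2.0 * xi * (xi @ w - yi)
        w = w - lr * grad
    return w

w_batch, w_online = np.zeros(2), np.zeros(2)
for _ in range(200):
    w_batch = batch_step(w_batch)
    w_online = incremental_epoch(w_online)
print(w_batch, w_online)  # both approach the least-squares solution [2.0, 1.0]
```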
The network weights are trained with the gradient descent method, and the growth algorithm of the BVS as well as a limited-memory recursive formula for network training are derived.
The model is a five-layer feedforward fuzzy neural network, and gradient descent is adopted as its learning algorithm.
Some or all of the controller parameters are iteratively adjusted with the gradient descent algorithm to minimize the error between the computed and desired outputs, and an iterative algorithmic model is established.
Two methods can be used to design the network: one is the direct energy descent method, and the other is the gradient descent method.
This paper investigates the application of the stochastic parallel gradient descent (SPGD) optimization algorithm to a beam cleanup system.
The stochastic parallel gradient descent (SPGD) algorithm can directly optimize system performance metrics in order to correct wavefront aberrations.
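As a rough illustration of how SPGD optimizes a measured performance metric directly, the sketch below applies random bipolar perturbations to a control vector, uses the resulting two-sided change in the metric as a gradient estimate, and updates the controls accordingly. The quadratic stand-in metric, gain, and perturbation amplitude are assumptions made for the example; in an adaptive optics system the metric would be a measured image-quality index and the controls would be deformable-mirror voltages.

```python
import numpy as np

rng = np.random.default_rng(0)

def metric(u):
    # Stand-in performance metric to be maximized (an assumption); in a real
    # system this would be a measured quantity such as focal-spot intensity.
    target = np.array([0.5, -1.0, 0.25, 0.0])
    return -np.sum((u - target) ** 2)

u = np.zeros(4)          # control signals (e.g., deformable-mirror voltages)
gain, amp = 0.5, 0.05    # update gain and perturbation amplitude (assumed values)

for _ in range(2000):
    delta = amp * rng.choice([-1.0, 1.0], size=u.shape)  # random bipolar perturbation
    dJ = metric(u + delta) - metric(u - delta)           # two-sided metric difference
    u = u + gain * dJ * delta                            # stochastic parallel update
print(u)  # approaches the maximizer of the stand-in metric, [0.5, -1.0, 0.25, 0.0]
```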
For parameter learning, a new differentiable empirical risk function suited to the classifier is proposed, which can be minimized effectively by a gradient descent strategy.
A new neural network servo controller is presented: the neural network model is built with a BP network, the optimizer is derived from the gradient descent rule, and the controller can track the state and the control setpoint variables simultaneously.
The essence of back-propagation networks is that the gradient descent method always changes the weights in the direction of decreasing error, finally reaching the minimum error.
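A minimal sketch of that weight-update rule is given below for a single sigmoid neuron trained with squared error: each weight is moved a small step against the error gradient, so the weights always move in the direction of decreasing error. The AND-style toy data, learning rate, and number of epochs are assumptions for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy AND-style training data (an illustrative assumption).
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
t = np.array([0.0, 0.0, 0.0, 1.0])

w, b, lr = np.zeros(2), 0.0, 1.0

for epoch in range(5000):
    y = sigmoid(X @ w + b)            # forward pass
    delta = (y - t) * y * (1.0 - y)   # dE/dnet for the squared error E = 0.5*sum((y-t)^2)
    w -= lr * (X.T @ delta)           # step against the gradient: the weight change
    b -= lr * np.sum(delta)           # points in the error-decreasing direction
print(sigmoid(X @ w + b).round(2))    # outputs move toward the targets 0, 0, 0, 1
```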
The three optimization problems are solved by the conjugate gradient method, adaptive bivariate shrinkage, and the gradient descent method, respectively.
The most widely used learning algorithms for fuzzy neural network parameters are still the gradient-descent-based BP algorithm and the genetic algorithm.
The method improves the traditional one-dimensional gradient descent search by proposing five decimation patterns based on the gradient descent direction, which are selected adaptively during motion estimation to remove redundant computation of the matching criterion.
Furthermore, the condition under which the proposed algorithm converges faster than the conventional gradient descent algorithm with a momentum term is derived.
By treating the gradient optimization procedure as a feedback control system, a PID gradient descent algorithm with a momentum term (PIDGDM) is proposed.
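The PIDGDM algorithm itself is not reproduced here; the sketch below only shows the conventional gradient descent update with a momentum term that such methods build on, where the velocity accumulates past gradients. The quadratic objective, step size, and momentum coefficient are assumptions.

```python
import numpy as np

def grad_f(w):
    # Gradient of an illustrative quadratic objective f(w) = 0.5 * w^T A w (assumed).
    A = np.array([[3.0, 0.0], [0.0, 1.0]])
    return A @ w

w = np.array([2.0, 2.0])   # initial point
v = np.zeros_like(w)       # velocity accumulating past gradients
lr, beta = 0.1, 0.9        # step size and momentum coefficient (assumed values)

for _ in range(300):
    g = grad_f(w)
    v = beta * v - lr * g  # momentum term: a decaying sum of past gradient steps
    w = w + v              # conventional gradient descent with momentum
print(w)                   # approaches the minimizer at the origin
```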
Based on the stochastic parallel gradient descent (SPGD) control algorithm, a wavefront-sensorless adaptive optics test bed was built with a 32-element deformable mirror and a CCD imaging device.
For the AC position servo system of a binding and printing drive, an adaptive neural network PID controller based on a gradient descent weight-learning algorithm is presented.
Over the past thirty years, the workhorse algorithms in the fields of digital signal processing and communications have been the gradient descent algorithm and the least squares algorithm.
An adaptive gradient descent algorithm for training simplified internally recurrent networks (SIRN) is developed, and a new SIRN-based method for reconciling nonlinear dynamic data is proposed.
In this scheme, the inputs of the hidden-layer neurons are obtained with the gradient descent method, while the weights and thresholds of each neuron are trained with the linear least squares method.
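The cited scheme is not specified in detail here, so the sketch below only illustrates the general idea of such a hybrid: the output-layer (linear) parameters of a small network are solved by regularized linear least squares at each epoch, while the hidden-layer (nonlinear) parameters are refined by gradient descent. The network size, toy data, regularization, and learning rate are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data (assumed): samples of y = sin(x).
x = np.linspace(-3.0, 3.0, 40).reshape(-1, 1)
y = np.sin(x)

n_hidden, lr, lam = 8, 0.05, 1e-3
W1 = rng.normal(scale=0.5, size=(1, n_hidden))   # input-to-hidden weights
b1 = np.zeros(n_hidden)                          # hidden thresholds

for epoch in range(500):
    H = np.tanh(x @ W1 + b1)                     # hidden-layer outputs
    Phi = np.hstack([H, np.ones((len(x), 1))])   # hidden outputs plus a bias column
    # Linear part: output weights from regularized linear least squares.
    W2 = np.linalg.solve(Phi.T @ Phi + lam * np.eye(Phi.shape[1]), Phi.T @ y)
    err = Phi @ W2 - y
    # Nonlinear part: one gradient descent step on the hidden-layer parameters.
    dH = (err @ W2[:n_hidden].T) * (1.0 - H ** 2)
    W1 -= lr * (x.T @ dH) / len(x)
    b1 -= lr * dH.mean(axis=0)
print(float(np.mean(err ** 2)))                  # final training mean squared error
```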
It was originally built because vehicles could not cope with the 27% gradient, so it was suggested that switchbacks of hairpin bends be built, making the ascent or descent more manageable.
Based on the steepest descent method and the conjugate gradient method, a hybrid optimization algorithm is constructed in this paper, and its global convergence is proved.
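The specific hybrid algorithm and its convergence proof are in the cited paper; the sketch below only illustrates one common way of combining the two ingredients, namely a Fletcher-Reeves conjugate gradient iteration on a quadratic test problem that periodically restarts with a pure steepest descent direction. The test problem, restart interval, and stopping tolerance are assumptions.

```python
import numpy as np

# Illustrative convex quadratic f(x) = 0.5 * x^T A x - b^T x (an assumed test problem).
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])

x = np.zeros(2)
g = A @ x - b                # gradient of f at x
d = -g                       # first search direction: steepest descent
for k in range(1, 30):
    if np.linalg.norm(g) < 1e-10:
        break                                # gradient is (numerically) zero: done
    alpha = (g @ g) / (d @ A @ d)            # exact line search along d (valid for quadratics)
    x = x + alpha * d
    g_new = A @ x - b
    if k % 5 == 0:
        d = -g_new                           # periodic restart with a steepest descent step
    else:
        beta_fr = (g_new @ g_new) / (g @ g)  # Fletcher-Reeves coefficient
        d = -g_new + beta_fr * d             # conjugate gradient direction
    g = g_new
print(x)   # approaches the solution of A x = b, roughly [0.09, 0.64]
```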