梯度下降法与混沌优化法均具有各自的缺点。
The gradient descent method and the chaos optimization method each have their own shortcomings.
但BP神经网络本质是梯度下降法,容易陷入局部最优。
However, the BP neural network is essentially a gradient descent method and therefore tends to fall into local optima.
该模型采用五层前向模糊神经网络,学习算法为梯度下降法。
The model is a five-layer feedforward fuzzy neural network, and gradient descent is adopted as the learning algorithm.
然后利用梯度下降法推导了基于最优模式中心的NLDA算法。
Then, an NLDA algorithm based on the optimal pattern centres is derived using the gradient descent method.
有两种方法可以用来设计网络,一是能量下降法,另一个是梯度下降法。
Two methods can be used to design the network: one is the energy descent method and the other is the gradient descent method.
将权矩阵的学习过程归结为用梯度下降法求一组矛盾线性方程组的过程;
The learning of the weight matrix is reduced to the process of solving a set of inconsistent systems of linear equations by the gradient descent method.
首先研究了离线训练的滑模控制器,然后,给出了利用梯度下降法的在线训练方法。
First, a sliding-mode controller with off-line training is studied, and then an on-line training method using the gradient descent method is presented.
并提出分别采用共轭梯度法、二元自适应收缩法以及梯度下降法对以上优化问题求解。
The three optimization problems are solved by the conjugate gradient method, adaptive bivariate shrinkage, and the gradient descent method, respectively.
其实质是采用梯度下降法使权值的改变总是朝着误差变小的方向改进,最终达到最小误差。
The essence of back-propagation networks is that the gradient descent method always adjusts the weights in the direction that reduces the error, so that the minimum error is finally reached.
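As an illustration of this idea, the following is a minimal sketch of a gradient-descent weight update for a single linear unit trained on mean-squared error (all names and values are illustrative, not taken from the cited work):

    import numpy as np

    def gradient_descent_step(w, x, y, lr=0.1):
        # One gradient-descent update: move the weights along the negative
        # gradient of the mean squared error, i.e. in the direction that
        # reduces the error.
        y_pred = x @ w                 # forward pass
        error = y_pred - y             # prediction error
        grad = x.T @ error / len(y)    # gradient of the loss w.r.t. w
        return w - lr * grad           # step against the gradient

    # Toy usage: fit y = 2*x with a single weight.
    rng = np.random.default_rng(0)
    x = rng.normal(size=(100, 1))
    y = 2.0 * x[:, 0]
    w = np.zeros(1)
    for _ in range(200):
        w = gradient_descent_step(w, x, y)
    print(w)   # approaches [2.0]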
本文研究了BP网络,实现了“梯度下降法”的网络训练方法,获得了较传统方法好的效果。
This paper studies the BP network, implements a gradient-descent training method for the network, and obtains better results than traditional methods.
运用最优梯度下降法使目标的计算值与期望值误差最小来迭代优选权重,建立迭代算法模型。
The optimal gradient descent method is used to iteratively optimize the weights by minimizing the error between the calculated and expected values of the objective, and an iterative algorithm model is established.
对小波神经网络采用最速梯度下降法优化网络参数,并对学习率采用自适应学习速率方法自动调节。
In the wavelet neural network (WNN), the steepest gradient descent method is adopted to optimize the network parameters, and the learning rate is adjusted automatically by an adaptive learning-rate method.
RBF神经网络采用离线学习在线修正权值和阈值,为加快收敛速度,应用带惯性项的梯度下降法。
The RBF neural network adopts off-line training with on-line adaptation of the weights and thresholds; to speed up convergence, the gradient descent method with a momentum (inertia) term is used.
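A minimal sketch of how a momentum (inertia) term is typically added to the gradient-descent update to accelerate convergence; the function and parameter names below are illustrative only:

    def momentum_step(w, grad, velocity, lr=0.05, beta=0.9):
        # Gradient descent with a momentum (inertia) term: the velocity keeps
        # a decaying sum of past gradients, which damps oscillations and
        # speeds up progress along consistently descending directions.
        velocity = beta * velocity - lr * grad
        return w + velocity, velocity

Setting beta to 0 recovers the plain gradient-descent update.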
然后通过梯度下降法和最小二乘法相结合的混合学习算法,对控制器参数进行调整以提高其控制精度。
Then the controller parameters are adjusted by a hybrid learning algorithm combining gradient descent and the least-squares method, so as to improve control precision.
它对一定时间间隔内两次采样得到的图象进行运算,用最陡梯度下降法直接迭代出图象位移的估计值。
The algorithm operates on two images sampled at a given time interval and uses the steepest gradient descent method to iterate directly to an estimate of the image displacement.
利用梯度下降法对网络的权值进行训练,并且推导了BVS的增长算法,以及网络训练的限制记忆递推公式。
The network weights are trained with the gradient descent method, and the growth algorithm of BVS as well as the limited-memory recursive formula for network training is derived.
对于参数的学习,提出了一种适用于分类器的可微经验风险函数,该函数能够有效地利用梯度下降法进行最小化。
For parameter learning, a differentiable empirical risk function suited to the classifier is proposed, which can be minimized effectively by the gradient descent method.
分析了BP神经网络和混沌优化的特点,并将混沌优化方法和梯度下降法结合起来构成一种新的组合搜索优化方法。
The characteristics of the BP neural network and chaos optimization are analyzed, and a new combined-search optimization method is constructed by integrating the chaos optimization method with the gradient descent method.
通过对泛函网络的分析,提出了一种序列泛函网络模型及学习算法,而网络的泛函参数利用梯度下降法来进行学习。
In this paper, by analyzing functional networks, a serial functional network model and its learning algorithm are proposed, in which the functional parameters of the network are learned by the gradient descent method.
研究了简化型内回归神经网络基于自适应梯度下降法的训练算法,并提出了一种基于简化型内回归神经网络的非线性动态数据校核新方法。
An adaptive gradient descent algorithm for training simplified internally recurrent networks (SIRN) is developed and a new method of reconciling nonlinear dynamic data based on SIRN is proposed.
考虑神经网络在训练大规模样品时易陷入局部极小,用梯度下降法与混沌优化方法相结合,使神经网络实现快速训练的同时,避免陷入局部极小。
Considering that neural networks tend to fall into local minima when trained on large sample sets, the gradient descent method is combined with the chaos optimization method so that the network trains rapidly while avoiding local minima.
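A sketch of one common way to combine the two, assuming a logistic-map chaotic scan for the global search followed by gradient descent for local refinement; this is a generic construction, not the specific scheme of the cited work, and all names are illustrative:

    import numpy as np

    def chaos_then_gradient(f, grad_f, bounds, n_chaos=200, n_gd=100, lr=0.01):
        # Global stage: an ergodic logistic-map scan of the search interval
        # picks a promising starting point and helps escape local minima.
        lo, hi = bounds
        z = 0.345                      # chaotic variable in (0, 1)
        best_x, best_val = None, np.inf
        for _ in range(n_chaos):
            z = 4.0 * z * (1.0 - z)    # logistic map in its chaotic regime
            x = lo + (hi - lo) * z
            val = f(x)
            if val < best_val:
                best_x, best_val = x, val
        # Local stage: plain gradient descent refines the best chaotic point.
        x = best_x
        for _ in range(n_gd):
            x = x - lr * grad_f(x)
        return x

    # Toy usage on a multimodal 1-D function.
    f = lambda x: np.sin(3 * x) + 0.1 * x**2
    df = lambda x: 3 * np.cos(3 * x) + 0.2 * x
    print(chaos_then_gradient(f, df, bounds=(-5.0, 5.0)))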
将最速下降法与共轭梯度法有机结合起来,构造出一种混合优化算法,并证明其全局收敛性。
Based on the steepest descent method and the conjugate gradient method, a hybrid algorithm is proposed in this paper, and its global convergence is proved.
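One standard way to combine the two methods is a nonlinear conjugate-gradient iteration that periodically restarts with the steepest-descent direction; the sketch below (with a fixed step size for brevity) illustrates that generic construction rather than the specific algorithm of the cited paper:

    import numpy as np

    def hybrid_sd_cg(grad_f, x0, iters=100, restart=10, lr=0.05):
        # Conjugate-gradient (Fletcher-Reeves) search directions, restarted
        # every `restart` iterations to the plain steepest-descent direction.
        x = np.asarray(x0, dtype=float)
        g = grad_f(x)
        d = -g                                    # first direction: steepest descent
        for k in range(iters):
            x = x + lr * d                        # fixed step size for simplicity
            g_new = grad_f(x)
            if (k + 1) % restart == 0:
                d = -g_new                        # restart with steepest descent
            else:
                beta = (g_new @ g_new) / (g @ g)  # Fletcher-Reeves coefficient
                d = -g_new + beta * d
            g = g_new
        return x

    # Toy usage on a quadratic bowl; the minimizer is the origin.
    print(hybrid_sd_cg(lambda x: 2 * x, x0=[3.0, -2.0]))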
运用梯度最陡下降法,推导出能量函数曲线演化方程,并应用于图像分割。
Using the steepest gradient descent method, the curve evolution equation of the energy function is derived and applied to image segmentation.
本文根据常微分方程参数反问题的数学理论,将正交化方法同有限差分法结合用于确定水质模型参数,并与正则化方法、最速下降法和共轭梯度法作了比较。
Based on the mathematical theory of parameter inverse problems for ordinary differential equations, this paper combines the orthogonalization method with the finite difference method to determine water quality model parameters, and compares it with the regularization method, the steepest descent method and the conjugate gradient method.