The standard particle swarm algorithm easily falls into local optima.
标准粒子群算法易陷入局部最优值。
However, the BP neural network is in essence a gradient descent method and easily falls into local optima.
但BP神经网络本质是梯度下降法,容易陷入局部最优。
Most swarm intelligence algorithms easily fall into local optima and converge slowly.
大多数群体智能算法容易陷入局部最优,且收敛速度较慢。
The prominent drawback of conventional optimization methods, the so-called deterministic methods, is that they easily fall into local optima.
传统的优化方法,即所谓的确定性优化方法的突出缺陷是容易陷入局部最优解。
This method retains the simplicity and low computational cost of the tracing method while overcoming the tendency of the G-S algorithm to become trapped in local optima.
该方法不仅具有追迹法计算简单、运算量小的优点,而且克服了G - S算法容易陷入局部最优的缺点。
However, standard Particle Swarm Optimization easily falls into local optima and converges slowly in later iterations.
然而,标准粒子群算法存在容易陷入局部最优,后期收敛过慢等问题。
In addition, the hybrid algorithm effectively avoids becoming trapped in local optima and requires no initial feasible solution.
另外,该算法可有效避免陷入局部最优,也不要求提供初始可行解。
Experimental results indicate that the modified PSO is markedly better at escaping local optima.
实验结果表明,改进后的粒子群算法防止陷入局部最优的能力有了明显的增强。
Approaches based on artificial neural networks or gradient hill climbing both restrict the form of the objective function and are prone to local optima.
采用人工神经网或梯度爬山算法均存在对优化函数形式有限制及陷入局部最优等局限性。
A local optimum for permutation-based chromosomes is defined, and on this basis a hill-climbing algorithm is constructed to find it.
定义了基于排列的染色体的局部极值,并以此为基础构造了求极值的爬山算法。
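The pair above describes hill climbing over permutations that terminates at a local optimum. A minimal sketch, assuming a pairwise-swap neighborhood and a caller-supplied cost function (both illustrative; the cited paper's actual neighborhood is not given here):

```python
import itertools

def hill_climb(perm, cost):
    """Greedy hill climbing over permutations using a pairwise-swap
    neighborhood; stops at a local optimum (no single swap improves cost)."""
    perm = list(perm)
    best = cost(perm)
    improved = True
    while improved:
        improved = False
        for i, j in itertools.combinations(range(len(perm)), 2):
            perm[i], perm[j] = perm[j], perm[i]  # try one swap
            c = cost(perm)
            if c < best:
                best, improved = c, True         # keep the improvement
            else:
                perm[i], perm[j] = perm[j], perm[i]  # undo the swap
    return perm, best

# Example cost: number of inversions (the sorted order has cost 0).
def inversions(p):
    return sum(p[i] > p[j] for i in range(len(p)) for j in range(i + 1, len(p)))

print(hill_climb([3, 1, 4, 2, 0], inversions))
```

For this particular cost every swap of an inverted pair strictly improves, so the only local optimum is the sorted permutation; on harder permutation costs (e.g. tour length) the same loop stops at genuinely suboptimal points.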
Experimental results show that the optimized BP neural network effectively avoids converging to local optima and greatly reduces training time.
实验结果证明,优化后的BP网络可有效地避免收敛于局部最优值,大大地缩短了训练时间。
Moreover, in training fuzzy neural network weights, the improved PSO outperforms the BP algorithm in both convergence speed and the ability to escape local optima.
且改进的粒子群算法在模糊神经网络权值的训练中收敛速度和跳出局部最优的能力都要比BP算法更优。
However, a genetic algorithm with a randomly chosen initial population easily falls into local optima, and more control points are required to achieve higher fitting accuracy.
但随机选取初始种群的遗传算法,容易使得结果陷入局部最优。要达到较高的拟合精度,则需要增加更多的控制顶点。
For the objective function of the insulation design, a local-optimization, stepwise iterative method was adopted to compute the optimum economic thickness of multilayer insulation.
对保温设计的目标函数,采用局部求优,逐步迭代的方法,实现了多层保温经济厚度计算机求解。
Experimental results show that, compared with traditional PSO, the improved algorithm searches more effectively and can, to some extent, avoid falling into local optima.
实验结果证明,与传统PSO算法相比,改进算法的寻优效果较好,可在一定程度上避免陷入局部最优。
Tests show, however, that the common genetic algorithm converges slowly and easily falls into local optima.
但在试验中发现普通遗传算法存在收敛速度较慢且容易陷入局部最优解等问题。
The memory-guided search strategy adopted in the algorithm concentrates on the local optimum within each memory segment, avoiding the blindness of a global search.
算法中采用的记忆指导搜索策略重点搜索了各记忆段的局部最优值,避免了全局搜索的盲目性;
To address the tendency of Particle Swarm Optimization (PSO) to fall into local optima, this paper presents a sub-region-based PSO algorithm.
针对粒子群优化(PSO)算法在寻优时容易陷入局部最优的不足,提出一种基于子区域的PSO算法。
Experimental results show that the proposed method accurately segments lesion areas in PET images, avoids falling into local optima, and performs well in real time.
其实验结果表明,本文提出的方法能够对PET图像病灶区域进行精确的分割,避免陷入局部最优且具有良好的实时性。
The new algorithm adds a random mutation operator at run time, randomly mutating the current best particle to strengthen the ability of PSO to break away from local optima.
该算法在运行过程中增加了随机变异算子,通过对当前最佳粒子进行随机变异来增强粒子群优化算法跳出局部最优解的能力。
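The mechanism above (global-best PSO plus occasional random mutation of the current best particle) can be sketched as follows. This is an illustration under assumptions, not the cited algorithm: the inertia weight `w`, acceleration coefficients `c1`/`c2`, mutation probability `pm`, and Gaussian mutation scale are all hypothetical choices.

```python
import random

def pso_with_mutation(f, dim, n=30, iters=200, w=0.7, c1=1.5, c2=1.5,
                      pm=0.1, lo=-5.0, hi=5.0):
    """Global-best PSO; with probability pm per iteration, the best
    particle is randomly mutated to help the swarm escape local optima."""
    xs = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vs = [[0.0] * dim for _ in range(n)]
    pbest = [x[:] for x in xs]
    pcost = [f(x) for x in xs]
    g = min(range(n), key=lambda i: pcost[i])
    gbest, gcost = pbest[g][:], pcost[g]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vs[i][d] = (w * vs[i][d]
                            + c1 * r1 * (pbest[i][d] - xs[i][d])
                            + c2 * r2 * (gbest[d] - xs[i][d]))
                xs[i][d] = min(hi, max(lo, xs[i][d] + vs[i][d]))
            c = f(xs[i])
            if c < pcost[i]:
                pbest[i], pcost[i] = xs[i][:], c
                if c < gcost:
                    gbest, gcost = xs[i][:], c
        # Randomly mutate the current best particle (accept only improvements).
        if random.random() < pm:
            cand = [min(hi, max(lo, v + random.gauss(0.0, 0.5))) for v in gbest]
            cc = f(cand)
            if cc < gcost:
                gbest, gcost = cand, cc
    return gbest, gcost

random.seed(0)  # for reproducibility of the demo run
# Example: sphere function, whose global optimum is the origin.
best, val = pso_with_mutation(lambda x: sum(v * v for v in x), dim=3)
print(val)
```

Accepting the mutated best only when it improves keeps the sketch monotone; some published variants accept it unconditionally to force diversification.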
Examples verify that the algorithm accelerates convergence, improves search efficiency, and raises solution precision while avoiding entrapment in local optima.
实例证明该算法求解加速了收敛过程,提高了搜索效率,在避免陷入局部最优的同时提高了求解精度。
To solve the problem that the basic particle swarm optimization algorithm has difficulty escaping local optima, a cooperative particle swarm optimization algorithm is proposed.
为了解决基本粒子群算法不易跳出局部最优的问题,提出了一种协同粒子群优化算法。
The BP training algorithm for neural networks, being based on gradient descent, may become trapped in local minima, so that the network fails to classify input patterns accurately.
基于梯度下降的神经网络训练算法易于陷入局部最小,从而使网络不能对输入模式进行准确分类。
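The trap described above is easy to demonstrate in one dimension: on a function with two minima, plain gradient descent lands in whichever basin the starting point lies in. The function and step size below are illustrative, not from the cited work.

```python
def grad_desc(x, lr=0.01, steps=2000):
    """Gradient descent on f(x) = x^4 - 3x^2 + x, which has two minima:
    a global one near x = -1.30 and a local one near x = 1.13.
    The start point alone decides which minimum we converge to."""
    for _ in range(steps):
        x -= lr * (4 * x**3 - 6 * x + 1)  # f'(x)
    return x

left = grad_desc(-2.0)   # converges to the global minimum near -1.30
right = grad_desc(2.0)   # gets stuck in the local minimum near 1.13
print(left, right)
```

BP training is this same dynamic in a weight space with many such basins, which is why random restarts or global methods (PSO, GA) are layered on top of it throughout these examples.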
General dynamic clustering algorithms are designed for static sample data; their results not only depend on the initial classification but also easily fall into local minima.
一般的动态聚类算法都是针对静态样本数据的,其聚类结果不仅依赖初始分类,而且易陷入局部极小。