information gain
  • Concise
  • Information gain: the increase in information about the data that is obtained by acquiring a particular piece of information when making a decision or performing classification.
  • Professional definitions
  • 1

    [Computing; Library & Information Science] Information gain

    ... The core problem of the ID3 algorithm described above is choosing which attribute to test at each node of the tree. We want to choose the attribute that is most useful for classifying the examples; information gain (Information Gain) measures how well a given attribute separates the training examples, and at every step of growing the tree ID3 uses information gain to select an attribute from the candidates (a short code sketch of this selection step is given after this list).

  • 2

     Information gain method

    The information gain method (Information Gain): a document's category is judged by counting how often a given feature term does or does not appear in documents. The information gain of feature t with respect to category L is computed by the following formula: ... (a commonly used form is given after this list)

  • 3

     Information gain (资讯获利)

    Information gain (资讯获利) is determined by the entropy of the sub-trees produced by taking a given attribute as a decision-tree node, together with the entropy of the object set. Suppose the training data form a set S containing n classes Ci, i = ... (see the code sketch after this list)
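
    The formula referenced in definition 2 above is not reproduced in the excerpt. A commonly used form for the information gain of a term t over categories C_1, ..., C_n in text classification (an assumption here, following standard feature-selection practice rather than the original source) is

        IG(t) = -\sum_{i=1}^{n} P(C_i) \log P(C_i)
                + P(t) \sum_{i=1}^{n} P(C_i \mid t) \log P(C_i \mid t)
                + P(\bar{t}) \sum_{i=1}^{n} P(C_i \mid \bar{t}) \log P(C_i \mid \bar{t}),

    where P(t) and P(\bar{t}) are the probabilities that a document does or does not contain t.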
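
    Definitions 1 and 3 describe the same computation: the entropy of the whole example set minus the weighted entropy of the subsets produced by splitting on a candidate attribute. Below is a minimal Python sketch of that selection step; the data and helper names are illustrative assumptions, not taken from the original entry.

        import math
        from collections import Counter

        def entropy(labels):
            # Shannon entropy (in bits) of a sequence of class labels.
            total = len(labels)
            return -sum((n / total) * math.log2(n / total)
                        for n in Counter(labels).values())

        def information_gain(examples, labels, attribute):
            # Entropy of the whole set minus the weighted entropy of the
            # subsets obtained by splitting on `attribute`.
            total = len(labels)
            subsets = {}
            for example, label in zip(examples, labels):
                subsets.setdefault(example[attribute], []).append(label)
            remainder = sum(len(sub) / total * entropy(sub)
                            for sub in subsets.values())
            return entropy(labels) - remainder

        # ID3 grows the tree by picking, at each node, the candidate
        # attribute with the largest information gain.
        examples = [{"outlook": "sunny", "windy": False},
                    {"outlook": "sunny", "windy": True},
                    {"outlook": "rain",  "windy": False},
                    {"outlook": "rain",  "windy": True}]
        labels = ["no", "no", "yes", "yes"]
        best = max(examples[0], key=lambda a: information_gain(examples, labels, a))
        print(best, information_gain(examples, labels, best))  # outlook 1.0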

  • Bilingual examples
  • 1
    The results show that information gain decreases before a strong earthquake.
  • 2
    This paper proposes a new feature selection algorithm based on information gain and the chi-square test.
  • 3
    An optimization algorithm based on information gain is put forward for multisensor detection and classification of multiple targets.
  • Encyclopedia
  • Information gain

    In probability theory and information theory, the Kullback–Leibler divergence (also called information divergence, information gain, relative entropy, or KLIC; abbreviated here as KL divergence) is a non-symmetric measure of the difference between two probability distributions P and Q. Specifically, the Kullback–Leibler divergence of Q from P, denoted D_KL(P‖Q), measures the information lost when Q is used to approximate P: the expected number of extra bits required to code samples from P when using a code optimized for Q rather than the code optimized for P (it is therefore non-negative, which can be verified with Jensen's inequality). Typically P represents the "true" distribution of data, observations, or a precisely calculated theoretical distribution, while Q represents a theory, model, description, or approximation of P.

    Although it is often intuited as a metric or distance, the KL divergence is not a true metric; for example, it is not symmetric: the KL divergence from P to Q is generally not the same as that from Q to P. However, its infinitesimal form, specifically its Hessian, is a metric tensor: the Fisher information metric. KL divergence is a special case of a broader class of divergences called f-divergences. It was originally introduced by Solomon Kullback and Richard Leibler in 1951 as the directed divergence between two distributions, and it can be derived from a Bregman divergence.
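
    For reference, the standard discrete form of this definition (the usual textbook statement, not quoted from the excerpt above) is

        D_KL(P \| Q) = \sum_x P(x) \log \frac{P(x)}{Q(x)},

    which with base-2 logarithms is measured in bits: the expected number of extra bits needed to encode samples drawn from P with a code optimized for Q. It equals zero exactly when P = Q.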
