• In the current n-gram situation, the grammar is fixed.

    youdao

  • If you work with a context-free grammar, the n-gram structure (see Resources) might be what you require.


  • The myroot rule contains one or more rulerefs, each of which has a URI that points to another rule, in this case command1, which is the master rule for an n-gram.


  • The segmentation system selects three kinds of statistical principles to compute separately: Mutual Information, N-Gram, and t-test.


  • In the candidate sentence selection module, the methods we use to compute the similarity between sentences and the query are the N-gram model and the Vector Space model.

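The bullet above mentions scoring sentence/query similarity with an N-gram model. A minimal illustrative sketch (function names are my own, not taken from the cited work), using a Dice coefficient over bigram sets:

```python
# Illustrative sketch: n-gram overlap similarity between a sentence
# and a query, via the Dice coefficient over sets of bigrams.
# All names here are hypothetical, not from any particular library.

def ngrams(tokens, n=2):
    """Return the set of n-grams (as tuples) of a token list."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def ngram_similarity(sentence, query, n=2):
    """Dice coefficient between the n-gram sets of sentence and query."""
    a, b = ngrams(sentence.split(), n), ngrams(query.split(), n)
    if not a and not b:
        return 0.0
    return 2 * len(a & b) / (len(a) + len(b))

sim = ngram_similarity("the cat sat on the mat", "the cat sat down", n=2)
```

A Vector Space model would instead compare tf-idf vectors with cosine similarity; the n-gram variant above rewards shared local word order.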

  • The key problem in the N-gram method is data sparseness, which still cannot be solved effectively.

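The sparse-data problem mentioned above is classically mitigated with smoothing. A minimal sketch of add-one (Laplace) smoothing for bigram probabilities (names and corpus are illustrative):

```python
from collections import Counter

# Illustrative sketch: add-one (Laplace) smoothing, a classic answer to
# sparse data in n-gram models. Unseen bigrams get a small nonzero
# probability instead of zero.

def laplace_bigram_prob(corpus_tokens, w1, w2):
    """P(w2 | w1) with add-one smoothing over the corpus vocabulary."""
    bigrams = Counter(zip(corpus_tokens, corpus_tokens[1:]))
    unigrams = Counter(corpus_tokens)
    vocab_size = len(set(corpus_tokens))
    return (bigrams[(w1, w2)] + 1) / (unigrams[w1] + vocab_size)

tokens = "a b a b a c".split()
p_seen = laplace_bigram_prob(tokens, "a", "b")    # bigram (a, b) seen twice
p_unseen = laplace_bigram_prob(tokens, "a", "a")  # never seen, still > 0
```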

  • NIST is another statistical method counting n-gram co-occurrence, based on BLEU, which assigns a higher weight to a more informative n-gram co-occurrence, one that occurs fewer times in the references.

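The weighting idea in the sentence above, that an n-gram occurring rarely in the references is more informative, can be sketched as the NIST-style information weight, log2(count(prefix) / count(n-gram)) over reference counts. This is an illustrative toy, not the full NIST metric:

```python
import math
from collections import Counter

# Illustrative sketch of a NIST-style information weight: an n-gram that
# is rare in the references, relative to its (n-1)-gram prefix, carries
# more information and so gets a higher weight.

def info_weight(ref_tokens, ngram):
    """log2(count(prefix) / count(ngram)) over the reference tokens."""
    n = len(ngram)
    ngram_counts = Counter(
        tuple(ref_tokens[i:i + n]) for i in range(len(ref_tokens) - n + 1)
    )
    prefix_counts = Counter(
        tuple(ref_tokens[i:i + n - 1]) for i in range(len(ref_tokens) - n + 2)
    )
    return math.log2(prefix_counts[ngram[:-1]] / ngram_counts[ngram])

refs = "the cat sat on the mat the cat ran".split()
w = info_weight(refs, ("the", "cat"))  # common prefix, rarer bigram
```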

