In the current n-gram scenario, the grammar is fixed.
If you are working with a context-free grammar, the n-gram structure (see Resources) might be what you require.
The myroot rule contains one or more rulerefs, each of which has a URI pointing to another rule; in this case that rule is command1, the master rule for an n-gram.
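A minimal sketch of the structure the sentence describes, assuming an SRGS-style XML grammar (the rule names myroot and command1 come from the sentence; the grammar body itself is my own illustration, not the original file):

```python
import xml.etree.ElementTree as ET

# Hypothetical grammar fragment: the root rule "myroot" holds a <ruleref>
# whose URI points at another rule, "command1", the master rule for the n-gram.
GRAMMAR_XML = """
<grammar root="myroot">
  <rule id="myroot">
    <ruleref uri="#command1"/>
  </rule>
  <rule id="command1">
    <item>open the file</item>
  </rule>
</grammar>
"""

root = ET.fromstring(GRAMMAR_XML)
# List every ruleref and the rule it points to.
for rule in root.findall("rule"):
    for ref in rule.findall(".//ruleref"):
        print(f'rule "{rule.get("id")}" references {ref.get("uri")}')
```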
The segmentation system applies three statistical measures separately: mutual information, the n-gram statistical model, and the t-test.
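A toy sketch, not the system described in the sentence, of two of those measures applied to adjacent-character candidates: pointwise mutual information and a t-score comparing the observed bigram frequency with the frequency expected under independence. The corpus string and all counts below are illustrative values of my own.

```python
import math
from collections import Counter

corpus = list("研究生命起源研究生物研究生命科学")  # toy character stream
N = len(corpus)

unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))

def pmi(a, b):
    """Pointwise mutual information of adjacent characters a, b."""
    p_ab = bigrams[(a, b)] / (N - 1)
    p_a, p_b = unigrams[a] / N, unigrams[b] / N
    return math.log2(p_ab / (p_a * p_b))

def t_score(a, b):
    """t-test statistic: observed bigram frequency vs. independence."""
    x_bar = bigrams[(a, b)] / (N - 1)           # observed bigram probability
    mu = (unigrams[a] / N) * (unigrams[b] / N)  # expected under independence
    return (x_bar - mu) / math.sqrt(x_bar / (N - 1))

for (a, b), c in bigrams.most_common(3):
    print(f"{a}{b}: count={c}  PMI={pmi(a, b):.2f}  t={t_score(a, b):.2f}")
```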
In the candidate-sentence selection module, the similarity between sentences and the query is computed with the n-gram model and the vector space model.
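A minimal sketch of those two similarity measures under my own simplifications (the paper's exact weighting is not given here): n-gram overlap between the query and a candidate sentence, and cosine similarity between bag-of-words term-frequency vectors. The example query and sentence are invented.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def ngram_overlap(query, sentence, n=2):
    """Fraction of the query's n-grams that also occur in the sentence."""
    q, s = Counter(ngrams(query, n)), Counter(ngrams(sentence, n))
    if not q:
        return 0.0
    return sum(min(c, s[g]) for g, c in q.items()) / sum(q.values())

def cosine_similarity(query, sentence):
    """Cosine similarity of bag-of-words term-frequency vectors."""
    q, s = Counter(query), Counter(sentence)
    dot = sum(q[t] * s[t] for t in q)
    norm = math.sqrt(sum(v * v for v in q.values())) * \
           math.sqrt(sum(v * v for v in s.values()))
    return dot / norm if norm else 0.0

query = "n gram language model".split()
sentence = "a statistical n gram language model for retrieval".split()
print(ngram_overlap(query, sentence, n=2), cosine_similarity(query, sentence))
```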
The key problem with the n-gram method is data sparsity, which still cannot be solved effectively.
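A toy illustration of that sparsity problem: a bigram never seen in training gets zero maximum-likelihood probability, so any sentence containing it is scored as impossible. Add-one (Laplace) smoothing is shown only as one common workaround, not as the fix the sentence has in mind; the training text is invented.

```python
from collections import Counter

train = "the cat sat on the mat".split()
unigrams = Counter(train)
bigrams = Counter(zip(train, train[1:]))
vocab = len(unigrams)

def mle(prev, word):
    """Maximum-likelihood bigram probability P(word | prev)."""
    return bigrams[(prev, word)] / unigrams[prev]

def laplace(prev, word):
    """Add-one smoothed estimate: unseen bigrams keep a small probability."""
    return (bigrams[(prev, word)] + 1) / (unigrams[prev] + vocab)

print(mle("the", "dog"))      # 0.0 -> unseen bigram collapses the sentence score
print(laplace("the", "dog"))  # small but non-zero
```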
NIST is another statistical method, built on BLEU, that counts n-gram co-occurrences; it treats an n-gram that occurs less often in the reference translations as more informative and therefore assigns it a higher weight.
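A minimal sketch of the information weight behind that idea, assuming the standard NIST formulation info(w1..wn) = log2(count(w1..wn-1) / count(w1..wn)) with counts taken over the reference translations (for unigrams the numerator is the total number of reference words); the reference sentences below are toy data of my own.

```python
import math
from collections import Counter

references = ["the cat sat on the mat".split(),
              "the cat is on the mat".split()]

def counts(n):
    """n-gram counts pooled over all reference translations."""
    c = Counter()
    for ref in references:
        c.update(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
    return c

def info(ngram):
    """NIST-style information weight: rarer continuations score higher."""
    n = len(ngram)
    num = sum(len(r) for r in references) if n == 1 else counts(n - 1)[ngram[:-1]]
    return math.log2(num / counts(n)[ngram])

print(info(("on", "the")))   # "the" follows every "on"      -> weight 0.0
print(info(("cat", "sat")))  # continuation seen only once   -> weight 1.0
```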