A listing of supported lexer types follows.
How to specify TAB width for Alex lexer?
How to use an Alex monadic lexer with Happy?
How to parse C-style comments with Alex lexer?
KOREAN_LEXER: a lexer for extracting tokens from Korean text.
CHINESE_VGRAM: a lexer for extracting tokens from Chinese text.
JAPANESE_VGRAM: a lexer for extracting tokens from Japanese text.
JAPANESE_LEXER: a lexer for extracting tokens from Japanese text.
These directives are used to define the tokens the lexer can return.
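In a yacc/bison-style grammar, such directives are `%token` declarations; the sketch below is illustrative, and the token names are invented:

```
%token NUMBER IDENT STRING   /* each name becomes an int constant > 255 */
%token PLUS MINUS            /* operators can also be declared as tokens */
```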
Thus, P::RD (Parse::RecDescent) on top of Perl 5 is a powerful parser and lexer combination.
A good lexer example can help a lot with learning how to write a tokenizer.
Whenever possible, try to develop separate tests for your lexer and your parser.
Why does the lexer rule for strings take precedence over all my other rules?
It uses start states, a feature allowing the lexer to match some rules only some of the time.
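In flex, for example, start states let a group of rules apply only inside a C-style comment. This is a minimal sketch, and the state name `COMMENT` is an illustrative choice:

```lex
%x COMMENT
%%
"/*"              BEGIN(COMMENT);   /* enter the comment state */
<COMMENT>"*/"     BEGIN(INITIAL);   /* leave it at the closing marker */
<COMMENT>.|\n     ;                 /* discard everything in between */
```

The `%x` declaration makes the state exclusive, so the two `<COMMENT>` rules can only match after `BEGIN(COMMENT)` has run.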
The grammar shown above in typographify.def provides guidance for the design of a Spark lexer/scanner.
BASIC_LEXER: the lexer for English and most Western European languages that use white-space-delimited words.
This does mean that the lexer and the parser both have to know that an empty line separates a header from a body.
The lexer actually does some of the work of figuring out where in a message it is, but the parser still ties everything together.
The lexer is the part of our language knowledge that says "this is a sentence; this is punctuation; twenty-three is a single word."
A lexer is a software component that divides text strings into individual words, or tokens, so that the individual words can be indexed.
The Gang advocates abandoning your core language and building an entirely new one atop it, creating your own lexer, parser, grammar, and so on.
Similarly, the lexer has to know about the structure of continuations; the parser only knows that it sometimes gets additional text to add on to a header.
For lexing purposes, this means that the lexer definition of an integer number, for example, can be used to build the lexer definition of a real number and a fraction.
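In Alex, for instance, an integer macro can be reused in the definitions of reals and fractions. This sketch uses illustrative macro names:

```
$digit = [0-9]
@int   = $digit+
@real  = @int \. @int    -- an integer part, a dot, a fractional part
@frac  = @int \/ @int    -- built from the same @int definition
```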
These numbers will be outside the range of possible valid characters; thus, the lexer can return individual characters as themselves, or special tokens using these defined values.
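With a yacc-style interface this is conventionally done by starting the token codes above 255, as in this minimal C sketch (the token names and the toy `classify` dispatcher are invented for illustration, not a real lexer):

```c
/* Token codes start above 255, the largest possible character value,
   so they can never collide with a character returned as itself. */
enum token {
    TOK_NUMBER = 258,
    TOK_IDENT,
    TOK_STRING
};

/* Toy dispatch: punctuation is returned as its own character code;
   anything else maps to one of the named tokens above. */
int classify(int c)
{
    if (c == '+' || c == '(' || c == ')')
        return c;               /* the character stands for itself */
    if (c >= '0' && c <= '9')
        return TOK_NUMBER;      /* a defined value outside char range */
    return TOK_IDENT;
}
```

Because every named token compares greater than any `unsigned char`, a parser can switch on the return value without ambiguity.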
There is no completely standardized way to do this; the most portable solution is to open a temporary file, write data to it, and hand it to the lexer. Here's a sample bit of code to do this.
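The original code sample did not survive extraction; a minimal C sketch of the approach might look like the following. The global `yyin` mirrors flex's standard input stream, and `lex_string` is an invented helper name:

```c
#include <stdio.h>
#include <string.h>

/* flex reads from the global FILE *yyin; declared here as a stand-in
   so the sketch compiles without a generated lexer. */
FILE *yyin;

/* Feed an in-memory string to a file-based lexer through a temporary
   file. tmpfile() is ISO C; the file is removed automatically. */
int lex_string(const char *input)
{
    FILE *fp = tmpfile();
    if (fp == NULL)
        return -1;
    fwrite(input, 1, strlen(input), fp);
    rewind(fp);        /* seek back so the lexer reads from the start */
    yyin = fp;         /* hand the stream to the lexer */
    /* a real program would now call yylex() */
    return 0;
}
```

In a real flex program the `return 0;` would be replaced by a call to `yylex()` once `yyin` is set.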