The rule DSL for specifying the grammar.
template <typename T> concept rule = …;
The grammar in lexy is specified as a set of productions, each of which defines an associated rule.
This rule is an object built from the objects and functions of namespace
lexy::dsl; it defines some (implementation-defined) parsing function.
Parsing a rule takes the reader, which remembers the current position of the input, and the context, which stores information about the current production and whitespace rules, and is responsible for handling errors and values.
Parsing can have one of the following results:
Parsing can succeed. Then it consumes some input by advancing the reader position and produces zero or more values.
Parsing can fail. Then it reports an error, potentially after having consumed some input, but without producing values. The parent rule can react to the failure either by recovering from it or by failing itself.
Parsing can fail, but then recover. Then it has reported an error, but has since consumed enough input to be in a known good state, and parsing continues normally. See error recovery for details.
A branch rule is a special kind of rule that has an easy-to-check condition. Branch rules are used to guide decisions in the parsing algorithm. Every branch rule defines some (implementation-defined) branch parsing function. It mostly behaves the same as the normal parsing function, but can have one additional result: branch parsing can backtrack. If it backtracks, it has not consumed any input, raised errors, or produced values. The parsing algorithm is then free to try another branch.
Note: The idea is that a branch rule can relatively quickly decide whether or not it should backtrack. If a branch rule does not backtrack, but fails instead, this failure is propagated and the parsing algorithm does not try another branch.
A token rule is a special kind of rule that describes the atomic elements of the input. Parsing a token rule never produces values and can be attempted cheaply; as such, token rules are also branch rules where the entire rule is used as the condition. Because they are the atomic elements of the input, token rules also participate in automatic whitespace skipping: after every token, lexy will automatically skip whitespace, if a whitespace rule has been defined.
The parse context stores state that can be accessed during parsing.
This includes the current recursion depth,
whether or not automatic whitespace skipping is currently enabled (see whitespace skipping),
and arbitrary user-defined context variables.
When a rule modifies the context during parsing, by adding an additional context variable for example,
this modification is available for all following rules in the current production and all child productions.
In particular, the modification is no longer visible in any parent production.
If a rule is parsed in a loop,
any context modification does not persist between loop iterations, and is also not available outside the loop.
How to read the DSL documentation 
The behavior of a rule is described by the following sections.
- Matching/Parsing
This section describes what input is matched for the rule to succeed, and what input is consumed. For token rules it is called "matching", otherwise "parsing".
It often delegates to the behavior of other rules. Here, the term "parsing" refers to the parsing operation of a rule; "branch parsing" or "try to parse" refers to the special parsing operation of a branch rule, which can backtrack; "matching" refers to the parsing operation of a token rule, which cannot produce values; and "try matching" refers to the branch parsing operation of a token rule, which cannot produce values or raise errors.
- Branch parsing
This section describes what input is matched, what is consumed, and what leads to backtracking for a branch rule. Note that a rule can parse something different here than during non-branch parsing.
- Errors
This section describes what errors are raised, when, and where. It also describes whether the rule can recover after the error.
- Values
This section describes what values are produced during a successful parsing operation. It is omitted for token rules, which never produce values.
- Parse tree
This section describes what nodes are created in the
lexy::parse_tree. If omitted, a token rule creates a single token node covering everything consumed, and a rule produces no extra nodes besides the ones created by the other rules it parses.
If a rule parses another rule in a new context,
the other rule does not have access to context variables, and any context modification is not visible outside of the rule.
The rule DSL
- match a single character
- match character sequences
- match a sequence of bytes
- match a code point with the specified value
- match common punctuation
- match one of the specified literals
- ensure a literal is (not) followed by a char class
- match a literal case-insensitively
- match specific Unicode code points
- match ASCII char classes
- match Unicode char classes
- lexy::dsl::operator/ (char class): combine char classes
- create a named char class
- turn a rule into a token
- parse a sequence of rules
- parse one of the specified (branch) rules
- parse all (some) of the (branch) rules in arbitrary order
- parse a branch rule if its condition matches
- parse a rule repeatedly
- parse a branch rule while its condition matches
- parse a list of things
- parse a rule
- skip everything until a rule matches
Brackets and delimited
- capture everything consumed by a token rule
- produce the current input position
- produce an empty placeholder value
- parse something into a member variable
- parse a completely user-defined rule
- parse a rule, ensuring it always produces a specific value
Errors and error recovery
- parse a digit
- parse one or more digits
- parse N digits
- convert digits to an integer
- parse a sign
- convert N digits into a code point