Header lexy/dsl.hpp

The rule DSL for specifying the grammar.

template <typename T>
concept rule = /* implementation-defined */;

The grammar in lexy is specified in several productions, where each one defines an associated rule. This rule is an object built from the objects and functions of namespace lexy::dsl; it defines some (implementation-defined) parsing function. Parsing a rule takes the reader, which remembers the current position in the input, and the context, which stores information about the current production and whitespace rules and is responsible for handling errors and values.

Parsing can have one of the following results:

  • Parsing can succeed. Then it consumes some input by advancing the reader position and produces zero or more values.

  • Parsing can fail. Then it reports an error, potentially after having consumed some input, but without producing values. The parent rule can react to the failure by recovering from it or by failing itself.

  • Parsing can fail, but then recover. Then it has reported an error, but now it has consumed enough input to be in a known good state and parsing continues normally. See error recovery for details.

A branch rule is a special kind of rule that has an easy-to-check condition. Branch rules are used to guide decisions in the parsing algorithm. Every branch rule defines some (implementation-defined) branch parsing function. It mostly behaves the same as the normal parse rule, but can have one additional result: branch parsing can backtrack. If it backtracks, it hasn’t consumed any input, raised any errors, or produced any values. The parsing algorithm is then free to try another branch.

The idea is that a branch rule can relatively quickly decide whether or not it should backtrack. If a branch rule does not backtrack, but fails instead, this failure is propagated and the parsing algorithm does not try another branch.

A token rule is a special kind of rule that describes the atomic elements of the input. Parsing a token never produces any values and is cheap to attempt; as such, token rules are also branch rules where the entire rule serves as the condition. Because they’re atomic elements of the input, they also participate in automatic whitespace skipping: after every token, lexy will automatically skip whitespace, if a whitespace rule has been defined.

The parse context stores state that can be accessed during parsing. This includes things like the current recursion depth (see lexy::dsl::recurse) and whether or not automatic whitespace skipping is currently enabled (see whitespace skipping), but also arbitrary user-defined variables (see lexy::dsl::context_flag, lexy::dsl::context_counter, and lexy::dsl::context_identifier).

When a rule modifies the context during parsing, for example by adding an additional context variable, this modification is available to all following rules in the current production and to all child productions. In particular, the modification is no longer visible in any parent production. If a rule is parsed in a loop, e.g. by lexy::dsl::loop or lexy::dsl::list, any context modification does not persist between loop iterations and is also not available outside the loop.

How to read the DSL documentation

The behavior of a rule is described by the following sections.


Parsing/Matching

This section describes what input is matched for the rule to succeed, and what input is consumed. For token rules this process is called "matching", otherwise "parsing".

It often delegates to the behavior of other rules. Here,

  • "parsing" refers to the parsing operation of a rule;

  • "branch parsing" or "try to parse" refers to the special parsing operation of a branch rule, which can backtrack;

  • "matching" refers to the parsing operation of a token rule, which cannot produce values;

  • "try matching" refers to the branch parsing operation of a token rule, which cannot produce values or raise errors.

Branch parsing

This section describes what input is matched, what is consumed, and what leads to backtracking for a branch rule. Note that a rule can parse something different here than during non-branch parsing.


Errors

This section describes what errors are raised, when, and where. It also describes whether the rule can recover after the error.


Values

This section describes what values are produced during a successful parsing operation. It is omitted for token rules, which never produce values.

Parse tree

This section describes what nodes are created in the lexy::parse_tree. If omitted, a token rule creates a single token node covering everything consumed, and a rule produces no extra nodes besides the ones created by the other rules it parses.

If a rule parses another rule in a new context (e.g. lexy::dsl::peek), the other rule does not have access to context variables, and any context modification is not visible outside of the rule.

The rule DSL

Primitive tokens
lexy::dsl::lit and lexy::dsl::lit_c

match character sequences


match anything


match EOF

lexy::dsl::newline and lexy::dsl::eol

match the end of a line


turn a rule into a token

Character classes

match (specific) Unicode code points


match ASCII character classes


match Unicode character classes


exclude some characters


combine character classes

Branch conditions

add a branch condition to a rule


branch condition that is always taken

lexy::dsl::peek and lexy::dsl::peek_not

check whether something matches without consuming it


check whether something matches somewhere in the input without consuming it


parse a sequence of rules


parse one of the specified (branch) rules

lexy::dsl::combination and lexy::dsl::partial_combination

parse all (some) of the (branch) rules in arbitrary order

lexy::dsl::if_ and lexy::dsl::opt

parse a branch rule if its condition matches


parse a rule repeatedly

lexy::dsl::while_ and lexy::dsl::while_one

parse a branch rule while its condition matches


parse a list of things

lexy::dsl::times and lexy::dsl::repeat

parse a rule N times


skip everything until a rule matches

Brackets and delimited

parse something that ends with a terminator


parse something surrounded by brackets

lexy::dsl::delimited and lexy::dsl::escape

parse everything between two delimiters, with optional escape sequences

lexy::dsl::p and lexy::dsl::recurse

parse another production


parse another production’s rule inline


exit early from parsing a production

lexy::dsl::capture and lexy::dsl::capture_token

capture everything consumed by a rule


produce the current input position


produce an empty placeholder value


parse something into a member variable


parse a completely user-defined rule

Errors and error recovery

explicitly raise an error


raise an error if a branch backtracks


recover from a failed rule


recover by looking and then continuing with some other rule


recover by looking for synchronization tokens


explicitly skip whitespace


do not skip whitespace


parse an identifier


parse a keyword


parse one of the specified symbols and produce their value


parse zero


parse a digit


parse one or more digits


parse N digits


convert digits to an integer

lexy::dsl::sign, lexy::dsl::plus_sign and lexy::dsl::minus_sign

parse a sign


convert N digits into a code point

Context-sensitive parsing

a boolean flag


an integer counter


an identifier variable

Byte input
lexy::dsl::bytes and lexy::dsl::padding_bytes

parse N bytes

lexy::dsl::bint8, lexy::dsl::bint16, …​

parse a little/big endian integer


parse a byte with specific bit patterns


parse a byte-order mark (BOM)

Input and action specific rules

match the argument separator of a lexy::argv_input


generate a debug event that is visualized by lexy::trace