
Earley parser

Algorithm for parsing context-free languages

  • Name: Earley parser
  • Class: Parsing context-free grammars
  • Data structure: String
  • Worst-case time: O(n^3)
  • Best-case time: Ω(n) for all deterministic context-free grammars; Ω(n^2) for unambiguous grammars
  • Average time: Θ(n^3)

In computer science, the Earley parser is an algorithm for parsing strings that belong to a given context-free language, though (depending on the variant) it may suffer problems with certain nullable grammars. The algorithm, named after its inventor Jay Earley, is a chart parser that uses dynamic programming; it is mainly used for parsing in computational linguistics. It was first introduced in Earley's 1968 dissertation.

Earley parsers are appealing because they can parse all context-free languages, unlike LR parsers and LL parsers, which are more typically used in compilers but which can only handle restricted classes of languages. The Earley parser executes in cubic time O(n^3) in the general case, where n is the length of the parsed string, in quadratic time O(n^2) for unambiguous grammars, and in linear time for all deterministic context-free grammars. It performs particularly well when the rules are written left-recursively.

Earley recogniser

The following algorithm describes the Earley recogniser. The recogniser can be modified to create a parse tree as it recognises, and in that way can be turned into a parser.

The algorithm

In the following descriptions, α, β, and γ represent any string of terminals/nonterminals (including the empty string), X and Y represent single nonterminals, and a represents a terminal symbol.

Earley's algorithm is a top-down dynamic programming algorithm. In the following, we use Earley's dot notation: given a production X → αβ, the notation X → α • β represents a condition in which α has already been parsed and β is expected.

Input position 0 is the position prior to input. Input position n is the position after accepting the nth token. (Informally, input positions can be thought of as locations at token boundaries.) For every input position, the parser generates a state set. Each state is a tuple (X → α • β, i), consisting of

  • the production currently being matched (X → α β)
  • the current position in that production (visually represented by the dot •)
  • the position i in the input at which the matching of this production began: the origin position
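Concretely, a state can be represented as a small immutable record. A minimal sketch in Python (the names here are illustrative, not taken from any particular implementation):

```python
from collections import namedtuple

# A state (X -> alpha . beta, i): head nonterminal, production body,
# dot position within the body, and origin position i.
State = namedtuple("State", ["head", "body", "dot", "origin"])

def next_symbol(state):
    """Return the symbol just after the dot, or None if the state is finished."""
    return state.body[state.dot] if state.dot < len(state.body) else None

# The state (S -> S . + M, 0): "S" has been matched, "+" is expected next.
s = State(head="S", body=("S", "+", "M"), dot=1, origin=0)
```

With this representation, a state is finished exactly when `next_symbol` returns `None`.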

(Earley's original algorithm included a look-ahead in the state; later research showed this to have little practical effect on the parsing efficiency, and it has subsequently been dropped from most implementations.)

A state is finished when its current position is the last position of the right side of the production, that is, when there is no symbol to the right of the dot • in the visual representation of the state.

The state set at input position k is called S(k). The parser is seeded with S(0) consisting of only the top-level rule. The parser then repeatedly executes three operations: prediction, scanning, and completion.

  • Prediction: For every state in S(k) of the form (X → α • Y β, j) (where j is the origin position as above), add (Y → • γ, k) to S(k) for every production in the grammar with Y on the left-hand side (Y → γ).
  • Scanning: If a is the next symbol in the input stream, for every state in S(k) of the form (X → α • a β, j), add (X → α a • β, j) to S(k+1).
  • Completion: For every state in S(k) of the form (Y → γ •, j), find all states in S(j) of the form (X → α • Y β, i) and add (X → α Y • β, i) to S(k).

Duplicate states are not added to the state set, only new ones. These three operations are repeated until no new states can be added to the set. The set is generally implemented as a queue of states to process, with the operation to be performed depending on what kind of state it is.
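One common way to realise this "ordered set" is a work list paired with a membership set, so each state is enqueued at most once and states appended mid-scan are still processed. A sketch, assuming states are hashable values:

```python
class StateSet:
    """An 'ordered set' of states: a list preserving processing order,
    plus a set for constant-time duplicate checks."""

    def __init__(self):
        self._order = []
        self._seen = set()

    def add(self, state):
        """Add a state only if it has not been seen before; return True if added."""
        if state in self._seen:
            return False
        self._seen.add(state)
        self._order.append(state)
        return True

    def __iter__(self):
        # Iterate by index so states appended during processing are still visited.
        i = 0
        while i < len(self._order):
            yield self._order[i]
            i += 1
```

Iteration terminates once every queued state has been processed and no operation adds a new one.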

The algorithm accepts if (X → γ •, 0) ends up in S(n), where (X → γ) is the top-level rule and n is the input length; otherwise it rejects.

Pseudocode

Adapted from Speech and Language Processing by Daniel Jurafsky and James H. Martin:

DECLARE ARRAY S;

function INIT(words)
    S ← CREATE_ARRAY(LENGTH(words) + 1)
    for k ← from 0 to LENGTH(words) do
        S[k] ← EMPTY_ORDERED_SET

function EARLEY_PARSE(words, grammar)
    INIT(words)
    ADD_TO_SET((γ → •S, 0), S[0])
    for k ← from 0 to LENGTH(words) do
        for each state in S[k] do  // S[k] can expand during this loop
            if not FINISHED(state) then
                if NEXT_ELEMENT_OF(state) is a nonterminal then
                    PREDICTOR(state, k, grammar)         // non_terminal
                else do
                    SCANNER(state, k, words)             // terminal
            else do
                COMPLETER(state, k)
        end
    end
    return S  // the completed chart

procedure PREDICTOR((A → α•Bβ, j), k, grammar)
    for each (B → γ) in GRAMMAR_RULES_FOR(B, grammar) do
        ADD_TO_SET((B → •γ, k), S[k])
    end

procedure SCANNER((A → α•aβ, j), k, words)
    if k < LENGTH(words) and a ∈ PARTS_OF_SPEECH(words[k]) then
        ADD_TO_SET((A → αa•β, j), S[k+1])
    end

procedure COMPLETER((B → γ•, x), k)
    for each (A → α•Bβ, j) in S[x] do
        ADD_TO_SET((A → αB•β, j), S[k])
    end
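For concreteness, the pseudocode can be turned into a runnable recogniser. The sketch below is illustrative rather than a reference implementation: it represents states as plain tuples, and it simplifies SCANNER to compare the expected terminal directly against the current token instead of consulting parts of speech. Like the textbook algorithm, it does not include the extra handling needed for grammars with nullable rules.

```python
def earley_recognise(words, grammar, start="S"):
    """Build the chart S[0..n]. `grammar` maps each nonterminal to a list of
    right-hand sides (tuples of symbols); symbols not in `grammar` are terminals."""
    n = len(words)
    chart = [[] for _ in range(n + 1)]
    seen = [set() for _ in range(n + 1)]

    def add(k, state):
        if state not in seen[k]:
            seen[k].add(state)
            chart[k].append(state)

    add(0, ("GAMMA", (start,), 0, 0))            # seed: (GAMMA -> . S, 0)
    for k in range(n + 1):
        i = 0
        while i < len(chart[k]):                 # chart[k] can grow during this loop
            head, body, dot, origin = chart[k][i]
            if dot < len(body):
                sym = body[dot]
                if sym in grammar:               # PREDICTOR
                    for rhs in grammar[sym]:
                        add(k, (sym, rhs, 0, k))
                elif k < n and words[k] == sym:  # SCANNER
                    add(k + 1, (head, body, dot + 1, origin))
            else:                                # COMPLETER
                for h, b, d, o in list(chart[origin]):
                    if d < len(b) and b[d] == head:
                        add(k, (h, b, d + 1, o))
            i += 1
    return chart

def accepts(words, grammar, start="S"):
    """True iff the finished top-level state (GAMMA -> S ., 0) is in S(n)."""
    return ("GAMMA", (start,), 1, 0) in earley_recognise(words, grammar, start)[len(words)]
```

Run on the arithmetic grammar used later in this article, `accepts("2 + 3 * 4".split(), grammar)` succeeds, while an ill-formed input such as `2 + +` leaves a later state set empty and is rejected.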

Example

Consider the following simple grammar for arithmetic expressions:

<P> ::= <S>      # the start rule
<S> ::= <S> "+" <M> | <M>
<M> ::= <M> "*" <T> | <T>
<T> ::= "1" | "2" | "3" | "4"

With the input: 2 + 3 * 4

This is the sequence of state sets:

Each state is listed as: (state no.)  production  (origin)  comment.

S(0): • 2 + 3 * 4
  (1)  P → • S       (0)  start rule
  (2)  S → • S + M   (0)  predict from (1)
  (3)  S → • M       (0)  predict from (1)
  (4)  M → • M * T   (0)  predict from (3)
  (5)  M → • T       (0)  predict from (3)
  (6)  T → • number  (0)  predict from (5)

S(1): 2 • + 3 * 4
  (1)  T → number •  (0)  scan from S(0)(6)
  (2)  M → T •       (0)  complete from (1) and S(0)(5)
  (3)  M → M • * T   (0)  complete from (2) and S(0)(4)
  (4)  S → M •       (0)  complete from (2) and S(0)(3)
  (5)  S → S • + M   (0)  complete from (4) and S(0)(2)
  (6)  P → S •       (0)  complete from (4) and S(0)(1)

S(2): 2 + • 3 * 4
  (1)  S → S + • M   (0)  scan from S(1)(5)
  (2)  M → • M * T   (2)  predict from (1)
  (3)  M → • T       (2)  predict from (1)
  (4)  T → • number  (2)  predict from (3)

S(3): 2 + 3 • * 4
  (1)  T → number •  (2)  scan from S(2)(4)
  (2)  M → T •       (2)  complete from (1) and S(2)(3)
  (3)  M → M • * T   (2)  complete from (2) and S(2)(2)
  (4)  S → S + M •   (0)  complete from (2) and S(2)(1)
  (5)  S → S • + M   (0)  complete from (4) and S(0)(2)
  (6)  P → S •       (0)  complete from (4) and S(0)(1)

S(4): 2 + 3 * • 4
  (1)  M → M * • T   (2)  scan from S(3)(3)
  (2)  T → • number  (4)  predict from (1)

S(5): 2 + 3 * 4 •
  (1)  T → number •  (4)  scan from S(4)(2)
  (2)  M → M * T •   (2)  complete from (1) and S(4)(1)
  (3)  M → M • * T   (2)  complete from (2) and S(2)(2)
  (4)  S → S + M •   (0)  complete from (2) and S(2)(1)
  (5)  S → S • + M   (0)  complete from (4) and S(0)(2)
  (6)  P → S •       (0)  complete from (4) and S(0)(1)

The state (P → S •, 0) in S(5) represents a completed parse of the entire input. This state also appears in S(1) and S(3), since the prefixes "2" and "2 + 3" are themselves complete sentences of the grammar.

Constructing the parse forest

Earley's dissertation briefly describes a method for constructing parse trees by adding, to each item, a set of pointers back to the items that caused it to be recognized.

Another method is to build the parse forest as you go, augmenting each Earley item with a pointer to a shared packed parse forest (SPPF) node labelled with a triple (s, i, j) where s is a symbol or an LR(0) item (production rule with dot), and i and j give the section of the input string derived by this node. A node's contents are either a pair of child pointers giving a single derivation, or a list of "packed" nodes each containing a pair of pointers and representing one derivation. SPPF nodes are unique (there is only one with a given label), but may contain more than one derivation for ambiguous parses. So even if an operation does not add an Earley item (because it already exists), it may still add a derivation to the item's parse forest.

  • Predicted items have a null SPPF pointer.
  • The scanner creates an SPPF node representing the non-terminal it is scanning.
  • Then when the scanner or completer advance an item, they add a derivation whose children are the node from the item whose dot was advanced, and the one for the new symbol that was advanced over (the non-terminal or completed item).

SPPF nodes are never labeled with a completed LR(0) item: instead they are labelled with the symbol that is produced so that all derivations are combined under one node regardless of which alternative production they come from.
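The node-sharing discipline described above can be sketched with a cache keyed on the (s, i, j) label, so requesting a node for an existing label always returns the same object, while distinct derivations accumulate as packed children. This is a hypothetical simplification for illustration, not the structure from the cited paper:

```python
class SPPFNode:
    """A shared packed parse forest node, unique per (symbol_or_item, i, j) label."""
    _cache = {}

    def __new__(cls, label):
        # One node exists per label; repeated construction returns the cached node.
        if label not in cls._cache:
            node = super().__new__(cls)
            node.label = label
            node.packed = []   # each entry: a (left_child, right_child) derivation
            cls._cache[label] = node
        return cls._cache[label]

    def add_derivation(self, left, right):
        """Record one derivation of this node; exact duplicates are ignored.
        A second distinct entry in `packed` marks the node as ambiguous."""
        if (left, right) not in self.packed:
            self.packed.append((left, right))
```

Because nodes are unique per label, an Earley operation that finds an item already present can still call `add_derivation` on the item's node to record an additional way of deriving the same span.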

Optimizations

In their paper "A Faster Earley Parser", Philippe McLean and R. Nigel Horspool combine Earley parsing with LR parsing and report an order-of-magnitude speedup.


References

  1. Kegler, Jeffrey. "What is the Marpa algorithm?".
  2. Hopcroft, John E. and Ullman, Jeffrey D. (1979). Introduction to Automata Theory, Languages, and Computation. Addison-Wesley.
  3. Jurafsky, D. and Martin, J. H. (2009). Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics, and Speech Recognition. Pearson Prentice Hall.
  4. (April 17, 2013). Efficient Parsing for Natural Language: A Fast Algorithm for Practical Systems. Springer Science and Business Media.
  5. (April 1, 2008). "SPPF-Style Parsing From Earley Recognizers". Electronic Notes in Theoretical Computer Science.
Info: Wikipedia Source

This article was imported from Wikipedia and is available under the Creative Commons Attribution-ShareAlike 4.0 License. Content has been adapted to SurfDoc format. Original contributors can be found on the article history page.
