...



Lesson - #543 Natural Language Toolkit - Parsing


Parsing and its relevance in NLP

The word 'parsing', which comes from the Latin word 'pars' (meaning 'part'), is used to draw exact meaning or dictionary meaning from the text. It is also called syntactic analysis or syntax analysis. By comparing the text against the rules of formal grammar, syntax analysis checks the text for meaningfulness. A sentence like "Give me hot ice-cream", for example, would be rejected by the parser or syntactic analyzer.

In this sense, we can define parsing, syntactic analysis, or syntax analysis as follows −

It may be defined as the process of analyzing strings of symbols in natural language, conforming to the rules of formal grammar.

We can understand the relevance of parsing in NLP with the help of the following points −
  • A parser is used to report any syntax error.
  • It helps to recover from commonly occurring errors so that the processing of the remainder of the program can continue.
  • A parse tree is created with the help of a parser.
  • A parser is used to create a symbol table, which plays an important role in NLP.
  • A parser is also used to produce intermediate representations (IR).


Deep Vs Shallow Parsing

Deep Parsing
  • In deep parsing, the search strategy gives a complete syntactic structure of a sentence.
  • It is suitable for complex NLP applications.
  • Dialogue systems and summarization are examples of NLP applications where deep parsing is used.
  • It is also called full parsing.

Shallow Parsing
  • It is the task of parsing a limited part of the syntactic information from the given task.
  • It can be used for less complex NLP applications.
  • Information extraction and text mining are examples of NLP applications where shallow parsing is used.
  • It is also called chunking.


Various types of parsers

Recursive descent parser
Recursive descent parsing is one of the most straightforward forms of parsing. Following are some important points about the recursive descent parser (a short NLTK sketch follows this list) −
  • It follows a top-down process.
  • It attempts to verify whether the syntax of the input stream is correct or not.
  • It reads the input sentence from left to right.
  • One necessary operation for the recursive descent parser is to read characters from the input stream and match them with the terminals of the grammar.
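
For illustration, here is a minimal sketch of recursive descent parsing with NLTK's RecursiveDescentParser. The toy grammar and the sentence "John hit the ball" are assumptions made for this example, not data from the lesson.

import nltk

# A toy context-free grammar, assumed for illustration only.
grammar = nltk.CFG.fromstring("""
S -> NP VP
NP -> DT N | 'John'
VP -> V NP
DT -> 'the'
N -> 'ball'
V -> 'hit'
""")

# The parser works top down, reading the tokens from left to right and
# matching them against the terminals of the grammar.
rd_parser = nltk.RecursiveDescentParser(grammar)
for tree in rd_parser.parse(['John', 'hit', 'the', 'ball']):
   print(tree)

Note that a recursive descent parser cannot handle left-recursive productions, which is why the toy grammar above avoids them.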


Shift-reduce parser
Following are some important points about the shift-reduce parser (a short NLTK sketch follows this list) −
  • It follows a simple bottom-up process.
  • It tries to find a sequence of words and phrases that corresponds to the right-hand side of a grammar production and replaces them with the left-hand side of that production.
  • This attempt to find a sequence of words continues until the whole sentence is reduced.
  • In other simple words, the shift-reduce parser starts with the input symbols and tries to build the parse tree up to the start symbol.
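
A minimal sketch of the same idea with NLTK's ShiftReduceParser follows; the toy grammar and sentence are assumptions made for this example.

import nltk

# A toy context-free grammar, assumed for illustration only.
grammar = nltk.CFG.fromstring("""
S -> NP VP
NP -> DT N | 'John'
VP -> V NP
DT -> 'the'
N -> 'ball'
V -> 'hit'
""")

# The parser shifts tokens onto a stack and reduces them whenever the top of the
# stack matches the right-hand side of a production, until only the start symbol S remains.
sr_parser = nltk.ShiftReduceParser(grammar)
for tree in sr_parser.parse(['John', 'hit', 'the', 'ball']):
   print(tree)

Unlike a chart parser, this simple shift-reduce parser does not backtrack, so for some grammars it can fail to find a parse even when one exists.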


Chart parser
Following are some important points about the chart parser (a short NLTK sketch follows this list) −
  • It is mainly useful or suitable for ambiguous grammars, including the grammars of natural languages.
  • It applies dynamic programming to the parsing problems.
  • Because of dynamic programming, partial hypothesized results are stored in a structure called a 'chart'.
  • The 'chart' can also be re-used.
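
A minimal sketch with NLTK's ChartParser follows. The ambiguous toy grammar and the sentence "I shot an elephant in my pajamas" are used here only for illustration.

import nltk

# An ambiguous toy grammar, assumed for illustration: the PP 'in my pajamas'
# can attach either to the verb phrase or to the noun phrase.
grammar = nltk.CFG.fromstring("""
S -> NP VP
PP -> P NP
NP -> Det N | Det N PP | 'I'
VP -> V NP | VP PP
Det -> 'an' | 'my'
N -> 'elephant' | 'pajamas'
V -> 'shot'
P -> 'in'
""")

# Partial results are stored in a chart (dynamic programming) and re-used,
# so both parses of the ambiguous sentence are found.
chart_parser = nltk.ChartParser(grammar)
for tree in chart_parser.parse(['I', 'shot', 'an', 'elephant', 'in', 'my', 'pajamas']):
   print(tree)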


Regexp parser
Regexp parsing is one of the most widely used parsing techniques. Following are some important points about the regexp parser (a short NLTK sketch follows this list) −
  • As the name implies, it uses a regular expression defined in the form of a grammar on top of a POS-tagged string.
  • It basically uses these regular expressions to parse the input sentences and generate a parse tree out of this.
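
A minimal chunking sketch with NLTK's RegexpParser follows. The chunk pattern and the hand-tagged sentence are assumptions for this example; in practice the tags would come from a POS tagger.

import nltk

# A regular-expression grammar over POS tags: an NP chunk is an optional
# determiner, any number of adjectives, and a noun. The pattern is illustrative.
chunk_grammar = "NP: {<DT>?<JJ>*<NN.*>}"
chunk_parser = nltk.RegexpParser(chunk_grammar)

# A POS-tagged sentence (tagged by hand here to keep the sketch self-contained).
tagged = [('John', 'NNP'), ('hit', 'VBD'), ('the', 'DT'), ('ball', 'NN')]
print(chunk_parser.parse(tagged))   # prints a tree with the NP chunks marked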


Dependency Parsing
Dependency Parsing (DP) is a modern parsing mechanism whose main concept is that each linguistic unit, i.e. each word, relates to the others by a direct link. These direct links are called 'dependencies' in linguistics. For example, a dependency grammar for the sentence "John can hit the ball" links every word to the word it depends on.
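
As a sketch of this idea, the dependencies of "John can hit the ball" can be written as an NLTK DependencyGrammar and parsed with the (non-probabilistic) ProjectiveDependencyParser; the head/dependent choices below are one plausible analysis assumed for illustration.

import nltk

# One plausible dependency analysis, assumed for illustration:
# 'hit' heads 'John', 'can' and 'ball'; 'ball' heads 'the'.
dep_grammar = nltk.DependencyGrammar.fromstring("""
'hit' -> 'John' | 'can' | 'ball'
'ball' -> 'the'
""")

pdp = nltk.ProjectiveDependencyParser(dep_grammar)
for tree in pdp.parse(['John', 'can', 'hit', 'the', 'ball']):
   print(tree)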


NLTK Package

We have the following two ways to do dependency parsing with NLTK −

Probabilistic, projective dependency parser
This is the first way we can do dependency parsing with NLTK. However, this parser has the restriction of training with a limited set of training data.
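
A hedged sketch of this first way follows. It assumes NLTK's bundled dependency treebank sample as training data (a choice made only for this example); run nltk.download('dependency_treebank') once before using it.

from nltk.corpus import dependency_treebank
from nltk.parse import ProbabilisticProjectiveDependencyParser

# Train the parser on the dependency graphs of the treebank sample.
graphs = dependency_treebank.parsed_sents()
ppdp = ProbabilisticProjectiveDependencyParser()
ppdp.train(graphs)

# Because the training data is limited, the parser can only handle words it has seen;
# here we re-parse a sentence taken from the treebank itself.
sentence = dependency_treebank.sents()[0]
parses = list(ppdp.parse(sentence))
if parses:
   print(parses[0])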

Stanford parser
This is another way we can do dependency parsing with NLTK. The Stanford parser is a state-of-the-art dependency parser, and NLTK has a wrapper around it. To use it, we need to download the following two things −

  • The Stanford CoreNLP parser
  • The language model for the desired language, for example, the English language model

Example
Once you have downloaded the model, you can use it through NLTK as follows −
from nltk.parse.stanford import StanfordDependencyParser
path_jar = 'path_to/stanford-parser-full-2014-08-27/stanford-parser.jar'
path_models_jar = 'path_to/stanford-parser-full-2014-08-27/stanford-parser-3.4.1-models.jar'
dep_parser = StanfordDependencyParser(
   path_to_jar = path_jar, path_to_models_jar = path_models_jar
)
result = dep_parser.raw_parse('I shot an elephant in my sleep')
dependency = next(result)   # raw_parse() returns an iterator of DependencyGraph objects
list(dependency.triples())

Output
[
   ((u'shot', u'VBD'), u'nsubj', (u'I', u'PRP')),
   ((u'shot', u'VBD'), u'dobj', (u'elephant', u'NN')),
   ((u'elephant', u'NN'), u'det', (u'an', u'DT')),
   ((u'shot', u'VBD'), u'prep', (u'in', u'IN')),
   ((u'in', u'IN'), u'pobj', (u'sleep', u'NN')),
   ((u'sleep', u'NN'), u'poss', (u'my', u'PRP$'))
]