Learning Constraint Grammars

Constraint Grammar style part-of-speech disambiguation rules can be learned automatically from disambiguated parallel corpora.


== Statistical approach ==

In the statistical approach, Constraint Grammar style rules are learned by calculating n-gram probabilities of word and part-of-speech tag groups. Current work on implementing such a system is at [https://github.com/nuboro/CG-generator nuboro's Github repository], and it is based on the paper [https://archive.org/details/arxiv-cmp-lg9607002 Inducing Constraint Grammars].
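
As a rough illustration of the idea (a sketch only, not the code in the repository above), the Python fragment below proposes REMOVE rules for tag readings that are very unlikely given the preceding tag. The input format, the function name, and the thresholds are assumptions made for this example.

```python
from collections import Counter

def induce_remove_rules(tagged_sents, min_count=10, max_prob=0.01):
    """Propose CG-style REMOVE rules from a disambiguated corpus.

    tagged_sents is assumed to be a list of sentences, each a list of
    POS tags; a real system would read an Apertium tagged stream.
    """
    left_counts = Counter()    # occurrences of each tag as a left neighbour
    bigram_counts = Counter()  # (left_tag, tag) co-occurrence counts

    for sent in tagged_sents:
        for left, tag in zip(sent, sent[1:]):
            left_counts[left] += 1
            bigram_counts[(left, tag)] += 1

    rules = []
    for (left, tag), n in bigram_counts.items():
        total = left_counts[left]
        # a reading that is very rare after `left` is a candidate for
        # removal whenever the preceding word is unambiguously `left`
        if total >= min_count and n / total <= max_prob:
            rules.append(f"REMOVE ({tag}) IF (-1C ({left})) ;")
    return rules
```

A real induction system would at least smooth the counts, handle unseen n-grams, and validate each candidate rule against held-out data before keeping it.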

== Machine Learning ==

Inductive Logic Programming, a subfield of Machine Learning, has been used to learn Constraint Grammar style disambiguation rules; see for example the branch mil-pos-tagger.
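
In ILP terms, the learner generalises rules from positive and negative examples under background knowledge. The toy search below is only a sketch of that framing, not the method used in mil-pos-tagger: it looks for a single left-context condition that covers the contexts where a reading was wrong and none where it was right. All names and data here are hypothetical.

```python
def learn_rule(target_tag, positives, negatives):
    """Find a left-context clause separating positive from negative examples.

    positives: tags seen at position -1 where `target_tag` was a wrong
    reading (so it should be removed); negatives: tags at position -1
    where it was the correct reading. Hypothetical single-feature
    hypothesis space, for illustration only.
    """
    for left_tag in set(positives):
        # accept a clause only if it contradicts no negative example
        if left_tag not in negatives:
            return f"REMOVE ({target_tag}) IF (-1 ({left_tag})) ;"
    return None  # no consistent single-condition rule exists

# Hypothetical examples: "vblex" was a wrong reading after a determiner
# but the correct one after a pronoun.
print(learn_rule("vblex", positives=["det"], negatives=["prn"]))
# -> REMOVE (vblex) IF (-1 (det)) ;
```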

== Papers ==