Learning Constraint Grammars

Constraint Grammar-style part-of-speech disambiguation rules can be learned automatically from disambiguated parallel corpora.


Statistical approach

In the statistical approach, Constraint Grammar-style rules are learned by estimating n-gram probabilities over groups of words and part-of-speech tags. Current work on implementing such a system can be found in nuboro's GitHub repository and is based on the paper Inducing Constraint Grammars.
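
The sketch below illustrates the general idea on a toy, hand-disambiguated corpus: count how often each candidate reading survives manual disambiguation in a given left context, and emit a REMOVE rule when a reading is almost never the correct one there. It is not the code from nuboro's repository; the corpus format, the bigram (left-neighbour) context and the 0.1 threshold are illustrative assumptions only.

```python
# Minimal sketch of n-gram-based Constraint Grammar rule induction.
# Not the nuboro implementation: the corpus format, the bigram context
# and the confidence threshold are illustrative assumptions.

from collections import defaultdict

# Toy hand-disambiguated corpus: each token is (surface form, ambiguity
# class from the lexicon, correct tag chosen by the annotator).
corpus = [
    [("the", {"det"}, "det"),
     ("book", {"n", "vblex"}, "n"),
     ("sells", {"vblex"}, "vblex")],
    [("they", {"prn"}, "prn"),
     ("book", {"n", "vblex"}, "vblex"),
     ("flights", {"n"}, "n")],
    [("the", {"det"}, "det"),
     ("plan", {"n", "vblex"}, "n"),
     ("works", {"n", "vblex"}, "vblex")],
]

# Count, for every (preceding tag, candidate tag) pair, how often the
# candidate reading was kept versus discarded by the annotator.
kept = defaultdict(int)
seen = defaultdict(int)
for sentence in corpus:
    for i, (form, readings, gold) in enumerate(sentence):
        if len(readings) < 2:
            continue                                    # unambiguous: nothing to learn
        left = sentence[i - 1][2] if i > 0 else ">>>"   # sentence boundary marker
        for tag in readings:
            seen[(left, tag)] += 1
            if tag == gold:
                kept[(left, tag)] += 1

# Emit a REMOVE rule whenever a reading is almost never correct in a
# given left context; 0.1 is an arbitrary illustrative threshold.
for (left, tag), n in sorted(seen.items()):
    if n >= 2 and kept[(left, tag)] / n < 0.1:
        print(f"REMOVE ({tag}) IF (-1 ({left})) ;")
```

A real implementation would use longer n-gram contexts and smoothed probability estimates rather than raw counts and a fixed cut-off.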

Machine Learning

A subfield of Machine Learning called Inductive Logic Programming has been used to learn Constraint Grammar-style disambiguation rules. See, for example, the mil-pos-tagger branch.
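
The toy sketch below shows only the ILP framing of the task, not the learner used in mil-pos-tagger: readings that the annotator discarded act as positive examples, readings that were kept act as negative examples, and candidate rule bodies are built from simple context predicates (the tag at position -1 or +1). The example data and the rule-selection criterion are made-up assumptions.

```python
# Toy generate-and-test sketch in the spirit of Inductive Logic Programming.
# Positive examples: readings the annotator discarded; negative examples:
# readings that were kept.  Illustration only, not the mil-pos-tagger learner.

from itertools import product

# (candidate tag, {"left": tag at -1, "right": tag at +1}, removed by annotator?)
examples = [
    ("vblex", {"left": "det", "right": "vblex"}, True),
    ("vblex", {"left": "det", "right": "n"},     True),
    ("n",     {"left": "det", "right": "vblex"}, False),
    ("vblex", {"left": "prn", "right": "n"},     False),
    ("n",     {"left": "prn", "right": "n"},     True),
]

positives = [e for e in examples if e[2]]
negatives = [e for e in examples if not e[2]]

# Hypothesis space: one context literal per rule, about the tag at -1 or +1.
def candidate_rules(examples):
    tags = {e[0] for e in examples}
    contexts = {(pos, ctx[pos]) for _, ctx, _ in examples for pos in ctx}
    for target, (pos, tag) in product(tags, contexts):
        yield (target, pos, tag)

def covers(rule, example):
    target, pos, tag = rule
    cand, ctx, _ = example
    return cand == target and ctx[pos] == tag

# Greedy covering: keep any rule that covers at least one still-uncovered
# positive example and no negative example (a deliberately strict toy criterion).
learned = []
uncovered = list(positives)
for rule in candidate_rules(examples):
    if any(covers(rule, neg) for neg in negatives):
        continue
    hit = [p for p in uncovered if covers(rule, p)]
    if hit:
        learned.append(rule)
        uncovered = [p for p in uncovered if p not in hit]

for target, pos, tag in learned:
    position = "-1" if pos == "left" else "1"
    print(f"REMOVE ({target}) IF ({position} ({tag})) ;")
```

A full ILP system searches a far richer space of clauses over proper background knowledge; only the positive/negative-example setup carries over from this sketch.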

External links

* http://swarm.cs.pub.ro/~asfrent/msc/thesis.pdf – Inductive Logic Programming
* http://ucrel.lancs.ac.uk/acl/C/C98/C98-2123.pdf – Inductive Logic Programming