User:Gang Chen/GSoC 2013 Application: "Sliding Window PoS Tagger"

Name

Name: Gang Chen

Contact Information

Email: pkuchengang@gmail.com

IRC: Gang

GitHub Repo: https://github.com/elephantgcc

Why is it you are interested in machine translation?

My major is Natural Language Processing, and I became interested in Machine Translation after taking a course on it. It sits at the center of many NLP techniques, and has very promising applications in real life.

Two years ago, I started an internship in the statistical machine translation team of an Internet company. Since then, I have been in closer touch with the classic IBM models, n-gram language models, various decoding algorithms, and many excellent open source toolkits. I was greatly attracted by the ideas and applications of machine translation. Reading the papers and getting the toolkits to work properly brought me much fun, and I still remember the sleepless nights thinking about the neural network language model.

A language is a very complicated system, and translating a sentence from one language to another has always been a challenging task. Meanwhile, the demand for translation is growing drastically, so developing better machine translation systems can also benefit society.

Why is it you are interested in the Apertium project?

I came across the Apertium project a year ago, when I learned about last year's GSoC.

Firstly, its rule-based methodology attracted me most. I studied linguistics as an undergraduate, and during that time I formed an interest in analysing language phenomena. A language system is structural in nature, so studying its underlying rules is the most direct way to unveil the mysteries of the system. Although the rules in Apertium are currently approximations to the real structural rules of a language system, many language pairs have demonstrated high translation quality. So hopefully, we can expect even higher quality as more sophisticated rules are integrated.

Secondly, Apertium is open to statistical methods. Although it is a rule-based project, some of its components are statistical models, and they do help to improve translation quality. I think this openness encourages more developers to contribute to the project.

Thirdly, Apertium is an open source project, so people with the necessary skills can join in and contribute ideas to improve it, providing more and better machine translation services. What is also important is that these services can be used by people all around the world.

Which of the published tasks are you interested in? What do you plan to do?

I am interested in the project "Sliding-window part-of-speech tagger".

Currently the PoS tagger in Apertium is an HMM tagger, while this project aims to implement a sliding-window PoS tagger, for better quality and efficiency.

Sliding Window PoS Tagger

Overview

The task of a PoS tagger is to choose the best PoS tag for each position of the input. According to the documentation, the input to the tagger is an "ambiguity class sequence", and the output is a normal fine-grained tag sequence.

Firstly, a "Category" is generated by grouping similar fine-grained tags together, because the information provided by a morphological analyser is too detailed for the PoS disambiguation.

For example, the morphological analysis (i.e. the normal fine-grained tags) of the input sentence "I have seen it" is:

^I/I<num><mf><sg>/prpers<prn><subj><p1><mf><sg>$ ^have/have<vbhaver><inf>/have<vbhaver><pres>/have<vblex><inf>/have<vblex><pres>$ ^seen/see<vblex><pp>$ ^it/prpers<prn><subj><p3><nt><sg>/prpers<prn><obj><p3><nt><sg>$

According to the tagger configuration file (or the TSX file), "num.*" will be grouped into a category called "NUM", "prn.subj.*" will be grouped into a category called "PRNSUBJ", etc. So we finally get a category sequence:

^I/NUM/PRNSUBJ/$ ^have/VHAVEINF/VHAVEPRES/INF/PRES$ ^seen/VLEXPP$ ^it/PRNSUBJ/PRNOBJ/$

Secondly, a partition of the vocabulary is established so that two words belong to the same class if and only if they are assigned the same subset of categories. Each class of the partition is called an ambiguity class.
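
To make the grouping concrete, here is a minimal Python sketch of the idea. The category patterns and the dot-separated tag strings below are simplified, hypothetical stand-ins for a real TSX file and real Apertium analyses, not the actual implementation:

import re

# Hypothetical coarse-category patterns, in the spirit of a TSX file.
CATEGORY_PATTERNS = [
    (re.compile(r'num'), 'NUM'),
    (re.compile(r'prn\.subj'), 'PRNSUBJ'),
    (re.compile(r'prn\.obj'), 'PRNOBJ'),
    (re.compile(r'vbhaver\.inf'), 'VHAVEINF'),
    (re.compile(r'vbhaver\.pres'), 'VHAVEPRES'),
    (re.compile(r'vblex\.inf'), 'INF'),
    (re.compile(r'vblex\.pres'), 'PRES'),
    (re.compile(r'vblex\.pp'), 'VLEXPP'),
]

def categories(analyses):
    """Map a word's fine-grained analyses to its set of coarse categories."""
    cats = set()
    for analysis in analyses:
        for pattern, cat in CATEGORY_PATTERNS:
            if pattern.search(analysis):
                cats.add(cat)
                break
    return frozenset(cats)

# Fine-grained analyses for "I have seen it" (dots replace tag separators).
words = {
    'I':    ['num.mf.sg', 'prn.subj.p1.mf.sg'],
    'have': ['vbhaver.inf', 'vbhaver.pres', 'vblex.inf', 'vblex.pres'],
    'seen': ['vblex.pp'],
    'it':   ['prn.subj.p3.nt.sg', 'prn.obj.p3.nt.sg'],
}

# Two words fall into the same ambiguity class iff they receive the same
# set of categories, so the frozen category set itself names the class.
for w, analyses in words.items():
    print(w, sorted(categories(analyses)))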

Problem Formulation

For the convenience of discussion, we formulate the PoS tagging problem in Apertium as follows:

Input: an ambiguity class sequence $\sigma_1 \sigma_2 \cdots \sigma_N$,

where each $\sigma_i$ has a set of candidate PoS tags; let $T(\sigma_i)$ denote this set, namely $T(\sigma_i) = \{\gamma_1, \gamma_2, \ldots, \gamma_{|T(\sigma_i)|}\}$.

Output: the best PoS tag sequence $\gamma_1 \gamma_2 \cdots \gamma_N$,

where each $\gamma_i \in T(\sigma_i)$.

Tagging procedure

The core idea of the SWPoST is easy to understand. For the input ambiguity class $\sigma_i$, the tagger predicts its PoS tag by looking at the left context and the right context. In the paper, the algorithm works best when $N_{(-)}$ (the left context length) is set to 1, and $N_{(+)}$ (the right context length) is also set to 1.

For example, if we have the input as:

^A/a/b$ ^B/x/y$ ^C/m/n$

where $A$, $B$ and $C$ are the input ambiguity classes ($\sigma$-s), while $a$, $b$, $x$, $y$, $m$ and $n$ are candidate tags ($\gamma$-s). The context of $B$ is then $A\,\_\,C$.

In the tagging procedure, the algorithm first gets the list of tags that may be assigned, i.e. $x$ and $y$; it then consults the model for the context $A\,\_\,C$ to see which tag it supports most. For example, if $\tilde{n}_x(A, C)$ is bigger than $\tilde{n}_y(A, C)$, then the PoS tag chosen for $B$ is $x$.
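
As an illustration, the disambiguation step for $B$ could look like the Python sketch below. The count table and its values are made up for this example; this is not Apertium's API:

# Hypothetical trained counts n~_tag(left, right): how strongly each tag
# is supported in the context  left _ right.
counts = {
    ('A', 'x', 'C'): 7.2,
    ('A', 'y', 'C'): 1.8,
}

def tag(left, candidates, right):
    """Pick the candidate tag with the highest count for this context."""
    return max(candidates, key=lambda t: counts.get((left, t, right), 0.0))

# For the input  ^A/a/b$ ^B/x/y$ ^C/m/n$, the context of B is A _ C,
# and its candidate tags are x and y.
print(tag('A', ['x', 'y'], 'C'))  # -> 'x', since 7.2 > 1.8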


Unsupervised training procedure

In the training procedure, the key problem is to estimate the probability that a particular context supports a particular tag, based on unannotated data. The paper provides an iterative method to approximate that probability.

For computational convenience, the algorithm estimates $\tilde{n}_\gamma(C_{(-)}, C_{(+)})$, the expected number of times that tag $\gamma$ appears in the context $C_{(-)}\,\_\,C_{(+)}$, instead of the probability $p(\gamma \mid C_{(-)}, C_{(+)})$. The two are interchangeable, since the counts only need to be normalized.

We use a very simple example to describe how the iterative procedure runs.

Suppose we had only two training cases:

(1) A ^B/x/y$   C
(2) A ^D/x/y/z$ C

where we focus only on the context $A\,\_\,C$. Whether $A$ or $C$ is itself ambiguous does not affect the context.

Firstly, for each training case, we assume that every candidate tag is equally probable, because we don't have any other information. That is, the first case contributes $\frac{1}{2}$ to each of $\tilde{n}_x$ and $\tilde{n}_y$; the second case contributes $\frac{1}{3}$ to each of $\tilde{n}_x$, $\tilde{n}_y$ and $\tilde{n}_z$. This is the initial step (or 0-th step) of the algorithm, and we finally get:

$\tilde{n}_x = \frac{1}{2} + \frac{1}{3} = \frac{5}{6}, \qquad \tilde{n}_y = \frac{5}{6}, \qquad \tilde{n}_z = \frac{1}{3}$

Then, we can use these counts to estimate the following probabilities:

$p_x = \frac{\tilde{n}_x}{\tilde{n}_x + \tilde{n}_y + \tilde{n}_z} = \frac{5/6}{2} = \frac{5}{12}, \qquad p_y = \frac{5}{12}, \qquad p_z = \frac{1/3}{2} = \frac{1}{6}$

And then, using these probabilities, we can re-estimate the counts as expectations, case by case:

$\tilde{n}_x = \frac{p_x}{p_x + p_y} + \frac{p_x}{p_x + p_y + p_z} = \frac{1}{2} + \frac{5}{12} = \frac{11}{12}, \qquad \tilde{n}_y = \frac{11}{12}, \qquad \tilde{n}_z = \frac{p_z}{p_x + p_y + p_z} = \frac{1}{6}$

Now we have finished one iteration, with all the values updated. We can observe that $\tilde{n}_x$ and $\tilde{n}_y$ are growing bigger and $\tilde{n}_z$ is growing smaller, while the total count still remains the same: $\tilde{n}_x + \tilde{n}_y + \tilde{n}_z = 2$.

Repeating the above procedure, the process converges to the point:

$\tilde{n}_x = 1, \qquad \tilde{n}_y = 1, \qquad \tilde{n}_z = 0$
This is a simple example to show the iterative procedure of the unsupervised training algorithm. Real data may be more complex, but the core mechanism remains the same.
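
The whole iteration on this toy example fits in a few lines of Python. This is an illustrative sketch of the re-estimation loop under the simplifications above, not the real implementation:

# Two training cases sharing the context A _ C:
# case 1: B is ambiguous between x and y; case 2: D between x, y and z.
cases = [['x', 'y'], ['x', 'y', 'z']]
tags = ['x', 'y', 'z']

# 0-th step: every candidate tag of a case is equally probable.
n = {t: sum(1.0 / len(c) for c in cases if t in c) for t in tags}

for step in range(30):
    total = sum(n.values())                      # stays 2 throughout
    p = {t: n[t] / total for t in tags}          # counts -> probabilities
    n = {t: 0.0 for t in tags}                   # re-estimate as expectations
    for c in cases:
        z = sum(p[t] for t in c)                 # normalize over this case's tags
        for t in c:
            n[t] += p[t] / z

print({t: round(v, 4) for t, v in n.items()})
# converges towards n_x = 1, n_y = 1, n_z = 0, with the total fixed at 2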

FORBID restrictions

TODO

Basic plans

My plans are basically as follows:

1) Read the paper and gain a full understanding of the algorithm, and familiarize myself with the Apertium pipeline.
2) Implement an unsupervised version of the algorithm.
3) Implement FORBID restrictions, probably by introducing Parameter Reduction into the unsupervised algorithm. The SWPoST will use the same options and the same TSX file as the existing HMM tagger, and will be entirely parallel to it.
4) Implement the minimized FST version.
5) Optionally, implement a supervised version of the algorithm, and re-study the algorithm to explore possible improvements.

Proposal Title

A sliding-window drop-in replacement for the HMM part-of-speech tagger in Apertium

Why Google and Apertium should sponsor it? How and who it will benefit in society?

The PoS tagger plays an important role in the Apertium translation pipeline. A sliding-window PoS tagger can improve tagging quality compared with the current HMM tagger. Besides, the algorithm is intuitive, and easy to understand and modify. So its implementation will benefit both Apertium developers and users.

I would be really thankful if Google and Apertium would sponsor this project. I have understood the mathematical model behind the algorithm; its main idea is described above. I think I have the knowledge and skills to accomplish the task, plus the enthusiasm for machine translation. And Apertium is a very nice community that I would like to stay with for a long time.

Work plan

The task of this project is to implement a new PoS tagger. The outline of the development procedure is given above as the basic plans. The detailed work plan is as follows:

Coding Challenge

Community bonding period

Get in closer contact with the community, and familiarize myself with the Apertium platform and the project.

Week Plan

Week 01: Learn about the code and file formats for the tagger. Learn from and run the current HMM tagger to get a baseline.
Week 02: Implement the supervised training algorithm. Get a training corpus ready for experiments.
Week 03: Learn about finite-state transducers. Implement an FST for the new tagger to use.
Week 04: Continue implementation. Run tests, check for bugs, and write documentation.
Deliverable #1: A working supervised training algorithm, with a usable FST. The tagger has good quality and efficiency.
Week 05: Re-read the paper and documentation. Write a first version of the unsupervised algorithm.
Week 06: Continue with the unsupervised algorithm. Survey techniques for minimizing FSTs.
Week 07: Implement the FST minimization.
Week 08: Continue implementation. Run tests, check for bugs, and write documentation.
Deliverable #2: A working unsupervised training algorithm, with FST minimization. The tagger has good quality and efficiency.
Week 09: Study different ways to incorporate FORBID rules. Implement the FORBID restrictions.
Week 10: Continue implementation. Run tests, check for bugs, and write documentation.
Deliverable #3: An XML-based format for writing FORBID rules.
Week 11: Study further possible improvements.
Week 12: Check the code and documentation.
Deliverable #Final: A full implementation of the sliding-window PoS tagger in Apertium.

Include time needed to think, to program, to document and to disseminate.

Overall:

Programming and thinking will take roughly 8 weeks.

Documentation, surveying, and dissemination will share the remaining 4 weeks.

List your skills and give evidence of your qualifications.

I am currently a second-year postgraduate student majoring in Natural Language Processing.

For the past two years, I have been interning in a statistical machine translation team at an Internet company.

There, I independently brought two language pairs (Spanish-Chinese and Russian-Chinese) into online service (http://fanyi.youdao.com). With that chance, I explored the whole pipeline of a statistical machine translation system, including bilingual data crawling and filtering, word alignment, model extraction, and parameter tuning.

I also used the MapReduce framework to process large amounts of text to improve the n-gram language models, which brought a significant +0.5 BLEU quality improvement for Chinese-English translation. During that project, I examined and analysed a lot of raw text and n-gram data, read many papers on data selection and domain adaptation, and conducted various kinds of experiments on text filtering and cleaning. Thanks to that project, I formed the habit of analysing bad cases and data, instead of blindly doing black-box parameter tuning without thinking.

Besides, I also took part in a project to speed up the translation decoder, which gained a significant 2x efficiency improvement at very little quality cost. We examined a lot of bad cases, analysed them linguistically, and finally struck a good balance between translation quality and speed.

During my undergraduate period, I took courses on linguistics, mathematics and computer science, where I acquired basic knowledge of linguistic analysis and programming. I also took part in the development of some NLP systems, such as the construction of the Chinese Concept Dictionary (a WordNet-like Chinese ontology), a lexical similarity computation tool written in Java, and, especially, a supervised HMM PoS tagger written in C++, which I think may help with the implementation of the sliding-window PoS tagger.

As for coding skills, I have 3 years' experience programming in Java and C++, and I am also familiar with basic Python and shell scripting for dealing with light-weight tasks.

With this knowledge and experience in natural language processing, I am confident that I can accomplish the task well.

This is the first time I have taken part in an open source project, and I'm excited about it! I have long been using open source toolkits, for example the Moses machine translation toolkit, the SRILM language model toolkit, and the OpenNLP toolkit, and they have all been a great help to me. I'd be happy to make some contributions to the open source community, and will try my best to adapt to the open source development environment.

List any non-Summer-of-Code plans you have for the Summer