Difference between revisions of "User talk:Rlopez/Application"

From Apertium

Revision as of 00:53, 21 March 2014

=== Contact information ===

* Name: Roque Enrique López Condori
* Email: rlopezc27@gmail.com
* IRC: Roque
* Personal page: http://maskaygroup.com/rlopez/
* Github repo: https://github.com/rlopezc27
* Assembla repo: https://www.assembla.com/profile/roque27

=== Why is it that you are interested in machine translation? ===

I am a master's student majoring in Natural Language Processing, and I like many of the tasks in this area. Machine translation is a task with various challenges. Nowadays, with the growth of the internet, it is very common to find non-standard texts containing spelling mistakes, internet abbreviations, etc. These texts represent a big challenge for machine translation. I am very interested in obtaining better translations through good preprocessing of these kinds of texts.

=== Why is it that you are interested in the Apertium project? ===

I think Apertium is one of the most important open-source NLP projects. I am very interested in the NLP area and I have worked on some projects in it. Unfortunately, I haven't had the opportunity to contribute to any open-source project yet, and I think that Apertium is the right place to start. In addition, Apertium has many tasks which are amazing to work on. I would really like to participate in this organization because I have a strong desire to contribute to the open-source community.

=== Which of the published tasks are you interested in? What do you plan to do? ===

I am interested in the “Improving support for non-standard text input” task. I plan to work with the English, Spanish and Portuguese languages.

=== Description ===

After reading some papers about text normalization (see the References section) and doing some experiments with Apertium, I have identified 10 main sub-modules for normalizing non-standard texts. The figure below shows the architecture of my proposal:

==== 1. Tokenizer and NSW Detection ====

First, the text is broken up into whitespace-separated tokens; regular expressions can be used to extract them. For each token we evaluate whether or not it is an NSW (Non-Standard Word) using the dictionary criterion (searching for the word in a dictionary of standard words). If the word is an NSW, we pass it to the next sub-module; otherwise we evaluate the next token.
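As a minimal sketch in Python (assuming the dictionary of standard words is available as a set of lowercased entries), this step could look like:

```python
import re

def detect_nsw(text, dictionary):
    """Split text on whitespace and flag tokens absent from the dictionary.

    `dictionary` is assumed to be a set of lowercased standard words.
    """
    tokens = re.findall(r'\S+', text)
    # A token is a candidate NSW if its form (minus trailing punctuation)
    # is not found in the dictionary of standard words.
    return [(tok, tok.strip('.,!?').lower() not in dictionary) for tok in tokens]

print(detect_nsw("gooood morning", {"good", "morning"}))
# [('gooood', True), ('morning', False)]
```

Only tokens flagged as NSW would be passed on to the following sub-modules.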

==== 2. Links, hashtags and @-tokens Identification ====

Links, hashtags and @-tokens must not be translated. For example:

<pre>
http://www.timeanddate.com/astronomy/moon/blue-moon.html
#playing
@worker
</pre>

Using Apertium we get these translations:

<pre>
http://Www.timeanddate.com/luna/de astronomía/azul-luna.html
#Jugando
@Trabajador
</pre>

That is wrong. To solve this problem we have to identify these kinds of tokens and put them in superblanks (http://wiki.apertium.org/wiki/Superblanks), so that they are not translated. Regular expressions are a good alternative for identifying these tokens.
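A rough sketch of this step, assuming a deformatter that treats square-bracketed material as superblanks, and using a deliberately simplified pattern for URLs, hashtags and @-tokens:

```python
import re

# Pattern for URLs, #hashtags and @-mentions; a simplification for illustration.
NSW_PASSTHROUGH = re.compile(r'(https?://\S+|www\.\S+|[#@]\w+)')

def protect(text):
    # Wrap each match in square brackets so that it can be handled
    # as a superblank and left untranslated by the pipeline.
    return NSW_PASSTHROUGH.sub(lambda m: '[' + m.group(0) + ']', text)

print(protect("see http://example.com #playing @worker"))
# see [http://example.com] [#playing] [@worker]
```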

==== 3. Emoticons Detection ====

==== 4. Apostrophe Correction ====

<pre>
wouldnt --> wouldn't
dont --> don't
shell --> she'll or shell?
</pre>
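A minimal sketch of the lookup approach; the `APOSTROPHE_FIXES` table here is a hypothetical hand-written sample, while a real module would generate candidates by inserting an apostrophe and validating the result against the dictionary:

```python
# Hypothetical sample entries; a real module would derive candidates by
# inserting an apostrophe and checking them against the dictionary.
APOSTROPHE_FIXES = {
    "wouldnt": "wouldn't",
    "dont": "don't",
}

def fix_apostrophe(token, dictionary):
    # Ambiguous cases like "shell" (she'll or shell?) are left untouched,
    # since the token is already a standard word.
    if token in dictionary:
        return token
    return APOSTROPHE_FIXES.get(token, token)

print(fix_apostrophe("dont", {"shell"}))   # don't
print(fix_apostrophe("shell", {"shell"}))  # shell
```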

==== 5. Spelling Mistakes Correction ====

<pre>
smokin --> smoking
gooood --> god or good?
</pre>
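One common heuristic for elongated words is to squeeze runs of a repeated letter until a dictionary word appears. The sketch below resolves the god/good ambiguity by simply preferring the longer candidate; a real module could instead rank candidates with a language model:

```python
import re

def squeeze(token, dictionary, max_repeat=2):
    # Try collapsing letter runs to max_repeat characters first, then to
    # fewer, returning the first candidate found in the dictionary.
    for n in range(max_repeat, 0, -1):
        candidate = re.sub(r'(.)\1{%d,}' % n, lambda m: m.group(1) * n, token)
        if candidate in dictionary:
            return candidate
    return token

print(squeeze("gooood", {"god", "good"}))  # good
```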

==== 6. Number Substitution ====

<pre>
2gether --> together
be4 --> before
</pre>
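A sketch of this step using a hypothetical digit-to-sound table; real substitution rules would be language-dependent, and each candidate is validated against the dictionary before being accepted:

```python
# Hypothetical digit-to-sound substitutions for English; rules for
# Spanish and Portuguese would use their own digit readings.
DIGIT_SOUNDS = {"2": "to", "4": "fore"}

def expand_digits(token, dictionary):
    candidate = token
    for digit, sound in DIGIT_SOUNDS.items():
        candidate = candidate.replace(digit, sound)
    # Only accept the expansion if it yields a standard word.
    return candidate if candidate in dictionary else token

print(expand_digits("be4", {"before", "together"}))      # before
print(expand_digits("2gether", {"before", "together"}))  # together
```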

==== 7. Abbreviation Mapping ====

<pre>
u --> you
tgthr --> together
</pre>
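A minimal sketch of the mapping; the lexicon shown is a hypothetical sample, while a real module would load a per-language abbreviation list from a resource file:

```python
# Hypothetical sample entries of a per-language abbreviation lexicon.
ABBREVIATIONS = {"u": "you", "tgthr": "together"}

def expand_abbreviation(token):
    expansion = ABBREVIATIONS.get(token.lower())
    if expansion is None:
        return token
    # Preserve the capitalization of the original token ("U" -> "You").
    return expansion.capitalize() if token[0].isupper() else expansion

print(expand_abbreviation("u"))      # you
print(expand_abbreviation("Tgthr"))  # Together
```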

==== 8. Token Splitter ====

<pre>
youand --> you and
twoworkers --> two workers
</pre>
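A brute-force sketch of the splitter: try every split point and keep the first one where both halves are standard words. A real module would need a tie-breaking strategy when several splits are valid:

```python
def split_token(token, dictionary):
    # Scan split points left to right; accept the first split whose two
    # halves are both found in the dictionary of standard words.
    for i in range(1, len(token)):
        left, right = token[:i], token[i:]
        if left in dictionary and right in dictionary:
            return left + " " + right
    return token

print(split_token("youand", {"you", "and", "two", "workers"}))      # you and
print(split_token("twoworkers", {"you", "and", "two", "workers"}))  # two workers
```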

==== 9. Punctuation Correction ====

==== 10. Format Data ====

==== How will it work? ====

Initially, like this:

<pre>
echo 'helllooo worrrddd!!!' | python normalizer_module.py 'en' | apertium en-es
</pre>


=== Why should Google and Apertium sponsor this project? ===

Machine translation systems are pretty fragile when working with non-standard input such as spelling mistakes, internet abbreviations, etc. These types of input reduce machine translation performance. Currently Apertium has no preprocessing module to normalize this kind of non-standard input. With a good text normalization module, Apertium can significantly increase translation quality and produce more human-like translations. This module would also be helpful for other NLP tasks.

=== Whom will it benefit in society, and how? ===

English, Spanish and Portuguese are among the most widely spoken languages. The implementation of this module will improve translations and benefit Apertium users who are learning these languages.

=== Work Plan ===

==== Coding Challenge ====

I have finished the Coding Challenge and, in addition to English, I added support for two more languages (Spanish and Portuguese). All the instructions and explanations are in my Github repo.

==== Community Bonding Period ====

* Familiarization with the Apertium tools and community.
* Find and analyze language resources for English, Spanish and Portuguese.
* Make preparations for the implementation.

==== Week Plan ====

{| class="wikitable"
! GSoC Week !! Tasks
|-
| Week 1 || A
|-
| Week 2 || A
|-
| Week 3 || A
|-
| Week 4 || A
|-
! colspan="2" | First Deliverable
|-
| Week 5 || A
|-
| Week 6 || A
|-
| Week 7 || A
|-
| Week 8 || A
|-
! colspan="2" | Second Deliverable
|-
| Week 9 || A
|-
| Week 10 || A
|-
| Week 11 || A
|-
| Week 12 || A
|-
! colspan="2" | Finalization
|}

=== List your skills and give evidence of your qualifications ===

I am currently a second-year master's student majoring in Natural Language Processing. After my graduation I worked for over two years on three NLP projects that are directly related to this project. Through my jobs and studies I have gained the following skills:

Programming skills: During my undergraduate studies, I took courses on Python, Java and C++ programming. I finished my undergraduate degree ranked in the top 5. In my second and third jobs I used Python as the main programming language. In my master's program, I am using Python even more frequently. Some of my work is in my Github repo.

Natural Language Processing: I have worked on three NLP projects. Some of the main topics are text processing, sentiment analysis, text classification, summarization, etc.

Spanish, Portuguese and English: I am a native Spanish speaker. I have been living in Brazil for over a year, which has improved my Portuguese. As for my English, I studied it for two years.

I develop all of my own software under free licenses and make an effort to work in groups as often as possible. However, unfortunately I can't claim much experience with open-source projects.

CV: More about me in my complete CV.

=== List any non-Summer-of-Code plans you have for the Summer ===

Between May 19 and August 18, my activities will focus mainly on the GSoC project and the progress of my master's work. This year I don't have any courses in my master's program, so my tasks are limited to research activities. I do not plan to travel; I will stay in São Paulo, Brazil. I plan to work at least 30 hours a week on this project, but I can work more hours if necessary.

=== About me ===

I studied at San Agustin National University in Perú, where I gained my BA (Hons) degree in Systems Engineering. After my studies I worked for two years. In the first year, I worked on a research project about automatic summarization of medical records; as a result of this work, I obtained some publications about medical record classification. In the second year I was a member of the Lindexa startup (a Natural Language Processing startup), which was one of the 10 startups that won the Wayra-Peru 2011 competition.

From Perú I moved to Brazil, where I am doing my Master's in Computer Science at the University of São Paulo (http://www.nilc.icmc.usp.br/nilc/index.php). My research topic is opinion summarization. Last year in Brazil, I worked at the DicionarioCriativo startup, an online dictionary that relates words, concepts, phrases, quotes, images and other content through semantic fields.

This is my first year applying to GSoC. I hope to be part of your team this summer.

=== References ===