User:Aboelhamd


GSoC 2019: Extend weighted transfer rules

Personal Details

General Summary

I am Aboelhamd Aly, a 24-year-old Egyptian computer engineer. My mother tongue is Arabic, not hieroglyphics :) . I currently live in Alexandria, Egypt, and I intend to study for a master's degree abroad after finishing my undergraduate studies. I love languages, AI and hence NLP. I have some research and industry experience in NLP, machine learning, parallel programming and optimization. I have been working alongside Sevilay Bayatli (piraye) on introducing a new module (weighted transfer rules) to Apertium, and that encouraged me to choose the idea "Extend weighted transfer rules" so I can continue our work and extend, integrate and deploy the full module.


Contacts

Email : aboelhamd.abotreka@gmail.com
Facebook : https://www.facebook.com/aboelhamd
LinkedIn : https://www.linkedin.com/in/aboelhamd-aly-76384a102/
IRC : aboelhamd
Github : https://github.com/aboelhamd


Education

I am a senior bachelor's student at Alexandria University in Egypt. I have recently been granted a scholarship to study for a master's in data science at Innopolis University in Russia. My undergraduate major is computer engineering, which covers everything in computers, from the lowest level of zeros and ones to the highest level of HCI (human-computer interaction, which mainly deals with user interfaces).
The subjects I loved most were artificial intelligence, machine learning, data mining and deep learning, because I see great potential in the AI field to solve many of the problems humans face today.


Love of Languages

I love languages very much, especially Arabic, because it is a very beautiful language and, of course, because it is the language of our holy scripture (the Quran), more than half of which I have memorized. I also love Arabic literature, and I have written several Arabic poems and short stories. All of that gave me very good knowledge of classical and modern Arabic morphology, syntax and derivation. After Arabic comes English, which I also love very much, though I am surely not as proficient at it as at Arabic.
My love of languages and AI led me to work in the natural language processing field, where I can combine my passion and knowledge.


Last Year GSoC

Last year I tried to contribute to Apertium by introducing a new pair (Arabic-Syriac), but I failed miserably: I was not familiar with Syriac or with Apertium at all, and I also started late, which made me hasty, so I needed a less overwhelming project. I then applied to the Classical Language Toolkit project to enhance some of its Classical Arabic functionality; that was my proposal[1]. Unfortunately I was not accepted into the program, though my mentor told me that Google gave them fewer slots than they had asked for :( , and that the other three applicants were postgraduate students with more experience in the field and in open-source projects :( .
After that I decided to contribute to an open-source project to gain both kinds of experience and to try again the next year, and here I am now :) .


Experience

Apertium

Sevilay and I have been working on introducing weighted transfer rules for months now. We implemented a new module to handle ambiguous transfer rules: it parses, matches and applies the transfer rules to the source and target sentences, then trains maximum entropy models that can choose the best of the ambiguous rules for any given pattern.


Online courses

I have taken many online courses covering a wide spectrum of computer engineering. One that I am very proud of is Udacity's machine learning nanodegree[2], a six-month program consisting of many courses and practical projects on machine learning.


Industry

Last summer I was hired as a software engineering intern at Brightskies, a tech company. After the internship I was hired as a part-time software engineer.
Our team works on parallel programming, optimization and machine learning projects. The two biggest companies we work with are Intel and Aramco.
My role is to understand, implement and optimize seismic algorithms and kernels, besides doing research on some machine learning algorithms and topics.


Why am I interested in Apertium?

- I am very interested in NLP in general.
- Apertium has a very noble goal: bringing languages with scarce data to life by linking them, through machine translation, to other languages.
- I have contributed to Apertium before and am willing to build on that.



Project Idea

Weighted transfer rules

When more than one transfer rule can be applied to a given pattern, we call this an ambiguous situation. Apertium resolves it by applying the left-to-right longest-match (LRLM) rule(s). That is of course not adequate for every word sequence that matches such patterns. To solve this problem, we introduced a way to weight the ambiguous rules for the specific words that match the ambiguous patterns. This is done by training on a very large corpus to capture expressive weights, as follows (a short code sketch follows the steps):
1- First we train an n-gram language model (we use n=5) on the target language.
2- For each ambiguous pattern in the given sentence, we apply all of its ambiguous transfer rules, one pattern at a time (applying the LRLM rules to all other ambiguous patterns), and get a score from the n-gram model for each of the resulting sentences.
3- These scores are then written to files, one file per ambiguous pattern. These files are the datasets for the training tool (we use yasmet), which trains the maximum entropy models.
4- Once we have the models, our module is ready for use: using a beam search algorithm, we choose the best possible target.
For a more detailed explanation, you can refer to this documentation[2].
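To make step 4 concrete, here is a minimal, self-contained C++ sketch of beam search over ambiguous-rule choices. It is illustrative only, not the module's actual code: the weights matrix stands in for the scores that the trained maximum entropy models would assign, and all names (Hypothesis, beamSearch) are hypothetical.

#include <algorithm>
#include <iostream>
#include <vector>

struct Hypothesis {
    std::vector<int> ruleChoices; // chosen rule index per ambiguous pattern
    double score = 0.0;           // cumulative weight of the choices so far
};

// weights[i][r] is the (assumed) maxent weight of applying candidate rule r
// to the i-th ambiguous pattern of the sentence.
std::vector<int> beamSearch(const std::vector<std::vector<double>>& weights,
                            std::size_t beamSize) {
    std::vector<Hypothesis> beam{Hypothesis{}};
    for (const auto& ruleWeights : weights) {
        std::vector<Hypothesis> expanded;
        for (const auto& hyp : beam) {
            for (std::size_t r = 0; r < ruleWeights.size(); ++r) {
                Hypothesis next = hyp;
                next.ruleChoices.push_back(static_cast<int>(r));
                next.score += ruleWeights[r];
                expanded.push_back(std::move(next));
            }
        }
        // Keep only the beamSize highest-scoring partial hypotheses.
        std::sort(expanded.begin(), expanded.end(),
                  [](const Hypothesis& a, const Hypothesis& b) {
                      return a.score > b.score;
                  });
        if (expanded.size() > beamSize) expanded.resize(beamSize);
        beam = std::move(expanded);
    }
    return beam.front().ruleChoices;
}

int main() {
    // Two ambiguous patterns: three candidate rules for the first,
    // two for the second.
    std::vector<std::vector<double>> weights = {{0.2, 0.5, 0.3}, {0.6, 0.4}};
    for (int r : beamSearch(weights, 2)) std::cout << r << ' ';
    std::cout << '\n'; // prints: 1 0
}

In this simplified sketch the scores are independent and additive, so even a beam of size 1 would find the best combination; the beam matters in the real setting, where choices for nearby or overlapping patterns can interact.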


Weighted transfer rules extension

The weighted transfer module we worked on so far was built to apply only chunker transfer rules, the first stage of Apertium's structural transfer. The goal of this extension is to make the module support the remaining transfer stages (interchunk and postchunk) as well, and to integrate and deploy the full module so it can be used with any language pair.
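Since the weighting and selection logic does not depend on which transfer stage a rule belongs to, one possible design (a sketch under my own assumptions, not the module's actual interface) is to put each stage behind a common interface, so the same beam search shown above could serve the chunker, interchunk and postchunk stages alike:

#include <string>
#include <vector>

struct RuleMatch {
    int ruleId;     // which transfer rule matched
    int start, end; // token span of the matched pattern
};

// Hypothetical interface a transfer stage would implement; the
// weighted-selection code then works unchanged for all three stages.
class TransferStage {
public:
    virtual ~TransferStage() = default;
    // All rules matching at position pos; more than one match means ambiguity.
    virtual std::vector<RuleMatch> matches(const std::vector<std::string>& input,
                                           int pos) const = 0;
    // Maxent weight of applying this match, from the models trained for the stage.
    virtual double weight(const RuleMatch& match) const = 0;
};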

Latest updates on the WTR module

The module is now finished and in the testing phase. It performs well on the Kazakh-Turkish pair, and we hope it will do as well with other pairs such as Spanish-English, which has more transfer rules than any other pair.

Additional thoughts

Why this idea?

Why should Google and Apertium sponsor it?

- The project enhances Apertium's translation across all pairs, bringing it closer to human translation.
- I have the right experience and qualifications to complete it successfully, and since I participated in building the module, I will easily be able to extend it.

Whom will it benefit in society, and how?

As the project will enhance Apertium's translation and bring it closer to human translation, Apertium will be more reliable and efficient for daily use, especially for document translation. In the long term this will enrich the data of languages that suffer from data scarcity, and hence help the speakers of such languages enrich and preserve their languages from extinction.

Other ideas?

Work plan

Schedule

First milestone

Week 1

(From - To)

Week 2

(From - To)

Week 3

(From - To)

Week 4

(From - To)

Deliverable

Second milestone

Week 5

(From - To)

Week 6

(From - To)

Week 7

(From - To)

Week 8

(From - To)

Deliverable

Third milestone

Week 9

(From - To)

Week 10

(From - To)

Week 11

(From - To)

Week 12

(From - To)

Deliverable

Other summer plans