
Revision as of 13:42, 7 April 2019

GSoC 2019: Extend weighted transfer rules[1]

Personal Details

General Summary

I am Aboelhamd Aly, a 24-year-old Egyptian computer engineer. My mother tongue is Arabic, not hieroglyphics :) . I currently live in Alexandria, Egypt, and I intend to pursue a master's degree abroad after finishing my undergraduate studies. I love languages, AI, and hence NLP. I have some research and industry experience in NLP, machine learning, parallel programming, and optimization. I have been working alongside Sevilay Bayatli (piraye) on introducing a new module (weighted transfer rules) to Apertium, and that encouraged me to choose the idea "Extend weighted transfer rules" so I can continue our work and extend, integrate, and deploy the full module.


Contacts

Email : aboelhamd.abotreka@gmail.com
Facebook : https://www.facebook.com/aboelhamd
LinkedIn : https://www.linkedin.com/in/aboelhamd-aly-76384a102/
IRC : aboelhamd
Github : https://github.com/aboelhamd
Time zone : GMT+2


Education

I am a senior bachelor's student at Alexandria University in Egypt. Recently I was granted a scholarship to study for a master's in data science at Innopolis University in Russia.
My undergraduate major is computer engineering, which exposed me to almost everything in computers, from the lowest level of zeros and ones to the highest level of HCI (human-computer interaction, which mainly deals with user interfaces).
The subjects I loved most were artificial intelligence, machine learning, data mining, and deep learning, because of the great potential of the AI field, which has already solved, and could solve, many of the problems humans face today.


Love of Languages

I love languages very much, especially Arabic, because it is a very beautiful language and, of course, because it is the language of our holy scripture (the Quran), more than half of which I have memorized. I also love Arabic literature, and I have written several Arabic poems and short stories. All of that gave me a very good knowledge of classical and modern Arabic morphology, syntax, and derivation. After Arabic comes English, which I also love very much, though I am surely not as proficient in it as in Arabic.
And so my love of languages and AI led me to work in the natural language processing field, combining my passion and knowledge in both.


Last Year GSoC

Last year I tried to contribute to Apertium by introducing a new pair (Arabic-Syriac), but I failed because I was not familiar with Syriac or with Apertium, and I also started late, which made me hasty; I needed a less overwhelming project. I then applied to the Classical Language Toolkit project (cltk)[2] to enhance some Classical Arabic functionality there, and that was my proposal[3]. Unfortunately I was not accepted into the program, though my mentor told me that Google gave them fewer slots than they had asked for :( , and that the other three accepted applicants were postgraduate students with more experience in the field and in open-source projects than me :( .
After that I decided to contribute to an open-source project to gain both kinds of experience and to try again the next year, and here I am now :) .


Experience

Apertium

Sevilay and I have been working on introducing weighted transfer rules for months now. We re-implemented a new module to handle ambiguous transfer rules: it parses, matches, and applies transfer rules, then trains maximum entropy models to choose the best ambiguous rule for any given pattern. Lastly, it uses these models to get the best possible target sentence.


Industry

Last summer I was hired as a software engineer intern at Brightskies Technology, and after the internship I was hired as a part-time software engineer; I am still working there.
Our team works on parallel programming, optimization, and machine learning projects. The two biggest companies we work with are Intel and Aramco.
My role involves understanding, implementing, and optimizing seismic algorithms and kernels, besides doing research on machine learning algorithms and topics.


Online courses

I have taken many online courses across the computer engineering tracks. One that I am very proud of is Udacity's machine learning nanodegree[4], a six-month program consisting of many courses and practical projects on machine learning.


Why am I interested in Apertium?

- I am very interested in NLP in general.
- Apertium has a very noble goal: bringing languages with scarce data to life by linking them with other languages through machine translation.
- I have previous contributions to Apertium and am willing to build on them.



Project Idea

Weighted transfer rules

When more than one transfer rule could be applied to a given pattern, we call the pattern ambiguous. Apertium resolves this ambiguity by applying the left-to-right longest match (LRLM) rule, which is not the best choice for every word or words that follow such a pattern.
To enhance this resolution, a new module was introduced that weights the ambiguous rules according to the words that follow the ambiguous pattern. This is done by training on a corpus to generate maximum entropy models, which are then used to choose the best (highest-weight) ambiguous rule to apply.
The module works as follows:
1- First, we train an n-gram (we use n=5) source language model.
2- We split the corpus into sentences. For a given sentence, we apply all ambiguous transfer rules for each ambiguous pattern, separately from the other ambiguous patterns (to which we apply the LRLM rule), and then get a score from the n-gram model for each of the resulting ambiguous sentences for that pattern.
3- These scores are then written to files, where each file contains the scores for one ambiguous pattern. These files serve as the datasets for the yasmet tool, which trains target-language maximum entropy models.
4- After the models are trained, the module is ready for use: using a beam search algorithm, we choose the best possible ambiguous rules to apply, and hence get the best translation.
For a more detailed explanation, you can refer to this documentation[5].
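As a rough illustration of steps 1-4, here is a toy end-to-end sketch in Python. It is not the module's actual code (which is C++): the bigram model stands in for the 5-gram language model, the dataset-line format is only indicative of the real yasmet input, and the per-pattern weights stand in for the trained maximum entropy models.

```python
import math
from collections import Counter

# Step 1: train a toy bigram LM with add-one smoothing
# (stand-in for the real 5-gram model).
def train_lm(corpus):
    unigrams, bigrams, vocab = Counter(), Counter(), set()
    for sentence in corpus:
        tokens = ["<s>"] + sentence.split() + ["</s>"]
        vocab.update(tokens)
        unigrams.update(tokens[:-1])
        bigrams.update(zip(tokens, tokens[1:]))
    V = len(vocab)
    def logprob(sentence):
        tokens = ["<s>"] + sentence.split() + ["</s>"]
        return sum(math.log((bigrams[a, b] + 1) / (unigrams[a] + V))
                   for a, b in zip(tokens, tokens[1:]))
    return logprob

# Steps 2-3: score each candidate sentence produced by an ambiguous
# rule, and emit one training line per pattern (illustrative format).
def dataset_line(candidates, logprob):
    scores = [(rule, logprob(sent)) for rule, sent in candidates]
    best = max(range(len(scores)), key=lambda i: scores[i][1])
    return f"{best} $ " + " ".join(f"{r}:{s:.3f}" for r, s in scores)

# Step 4: beam search over per-pattern rule weights (in the real
# module the weights come from the trained maximum entropy models).
def beam_search(pattern_weights, beam_size=3):
    beam = [(0.0, [])]  # (total weight, rules chosen so far)
    for weights in pattern_weights:
        grown = [(total + w, rules + [rule])
                 for total, rules in beam
                 for rule, w in weights.items()]
        grown.sort(key=lambda c: c[0], reverse=True)
        beam = grown[:beam_size]  # keep only the best partial choices
    return max(beam, key=lambda c: c[0])[1]
```

For example, with two ambiguous patterns, `beam_search([{"r1": 0.7, "r2": 0.3}, {"r3": 0.1, "r4": 0.9}])` picks `["r1", "r4"]`.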


Weighted transfer rules extension

The weighted transfer module was built to apply only chunker transfer rules. This idea is to extend the module so that it also applies to interchunk and postchunk transfer rules.
Both are similar to the chunker, but with some differences. For example, interchunk def-cats refer to the tags of the chunk itself rather than the lexical forms it contains, as in the chunker, and in postchunk they refer to the name of the chunk and have nothing to do with tags. The chunk element also has a different use, because it deals with chunks, not words. There are also some differences in the clip element's attributes between the three transfer files.
All these differences may be considered minor with respect to the whole module that handles the chunker transfer rules, and I think adding these modifications will not take long.
So in addition to this extension, I think introducing new ideas or modifications that could enhance the accuracy and efficiency of the whole module would be worthwhile to do alongside the extension. I may also work on related (or unrelated) ideas to make full use of the three-month period.
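To make the def-cat difference between the three stages concrete, here is a toy matcher. It is purely illustrative: the data shapes and function names are my own simplification, and real category items also support lemmas and richer wildcard behaviour than sketched here.

```python
def tags_match(pattern_tags, item_tags):
    """True if the pattern's tag sequence matches the item's tags.
    A '*' in the pattern matches any remaining tags (simplified)."""
    for i, tag in enumerate(pattern_tags):
        if tag == "*":
            return True
        if i >= len(item_tags) or item_tags[i] != tag:
            return False
    return len(pattern_tags) == len(item_tags)

def matches_category(stage, cat_item, unit):
    """Toy def-cat matching for the three transfer stages:
    'chunker'    matches the tags of a lexical form,
    'interchunk' matches the tags of the chunk itself,
    'postchunk'  matches the chunk's name only."""
    if stage == "chunker":
        return tags_match(cat_item["tags"], unit["lemma_tags"])
    if stage == "interchunk":
        return tags_match(cat_item["tags"], unit["chunk_tags"])
    if stage == "postchunk":
        return cat_item["name"] == unit["chunk_name"]
    raise ValueError(stage)
```

The point of the sketch is that the matching machinery is shared; what changes per stage is which part of the unit a category item is compared against, which is why the extension is mostly a set of localized modifications.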


Latest updates on WTR module

The module is now finished and in the testing phase. It does well with the Kazakh-Turkish pair, and we hope it does as well with other pairs such as Spanish-English, which has more transfer rules than any other pair in Apertium.
The latest code is uploaded in this repo[6]. The module is separated from the Apertium core; that is, installing Apertium alone is not enough, as one must download and install our module separately to use it along with Apertium.


Coding Challenge

The coding challenge was to set up a pair and train the existing weighted transfer rule code, which I had already done several times while testing and debugging the code.
Since I didn't have a coding challenge, and since the module was separated from the Apertium core as mentioned before, Francis Tyers (spectei) told me to integrate the module (without the training part) with apertium-transfer, and I did that in this pull request[7].
Then he told me to make the module depend on libraries already used in Apertium rather than external ones, as I had used two libraries not used in Apertium: pugixml to handle XML files and the ICU library to handle upper and lower case. Kevin Unhammer (unhammer) also gave me some helpful review of the code, and these issues were resolved.


Additional thoughts

There are additional thoughts on and modifications to the weighted transfer rules proposed in the aforementioned documentation[8].
If some of them are valid, they could be applied along with the extension too. I am also now looking into newer machine learning or deep learning methods to apply as an alternative to yasmet and the maximum entropy method.
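For reference, what yasmet trains is essentially a multinomial logistic regression ("maximum entropy") model over rule choices, so that is the baseline any replacement method would have to beat. A toy version of that idea, with made-up features and my own function names, looks like this:

```python
import math

def softmax(zs):
    """Numerically stable softmax over a list of scores."""
    m = max(zs)
    exps = [math.exp(z - m) for z in zs]
    s = sum(exps)
    return [e / s for e in exps]

def train_maxent(data, n_rules, n_feats, epochs=200, lr=0.5):
    """Tiny multinomial logistic regression trained by gradient ascent
    on the log-likelihood. data: list of (features, best_rule) pairs,
    where features is a list of n_feats floats. Returns W[rule][feat]."""
    W = [[0.0] * n_feats for _ in range(n_rules)]
    for _ in range(epochs):
        for x, y in data:
            probs = softmax([sum(w * xi for w, xi in zip(W[r], x))
                             for r in range(n_rules)])
            for r in range(n_rules):
                grad = (1.0 if r == y else 0.0) - probs[r]
                for j in range(n_feats):
                    W[r][j] += lr * grad * x[j]
    return W

def choose_rule(W, x):
    """Pick the rule with the highest score for feature vector x."""
    scores = [sum(w * xi for w, xi in zip(wr, x)) for wr in W]
    return max(range(len(scores)), key=lambda r: scores[r])
```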


Why should Google and Apertium sponsor it?

- The project enhances Apertium's translation for all pairs, making it closer to human translation.
- I have previous experience and the required qualifications to complete the project successfully. And since I participated in building the module, I will be able to extend it without much difficulty.
- Being accepted into and successful in the GSoC program would make a huge impact on my CV and hence my career.
- The stipend and the opportunity to have a job interview with Google are huge benefits to a fresh graduate like me.


Who will it benefit in society, and how?

As the project will hopefully enhance Apertium's translation and bring it closer to human translation, Apertium will become more reliable and efficient to use in daily life and for document translation. In the long term, this will enrich the data of languages suffering from data scarcity, and hence help the speakers of such languages enrich their languages and preserve them from extinction.


Other ideas?

I would also love to work on the "Light alternative format for all XML files in an Apertium language pair"[9] idea alongside the weighted transfer rules idea, if there is enough time. The two ideas intersect in the XML transfer files, and since I am already familiar with the documentation of these files and have written a module to handle, match and apply the rules, I think I could design another, lighter format than XML and write converter scripts between the two formats.
I hope to finish the coding challenge for this idea in the next few days, so that I could be considered for working on it too if no one else has applied for it.
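To give a feel for what such converters could look like, here is a minimal hypothetical Python sketch that flattens a tiny rule fragment into an indented, line-based form and parses it back. A real design based on the interNOSTRUM style would have to cover the full transfer DTD, text content, and attribute values containing spaces; none of that is handled here:

```python
import xml.etree.ElementTree as ET

def to_light(elem, depth=0):
    """Serialize an element tree as indented 'tag attr=value' lines."""
    attrs = " ".join(f"{k}={v}" for k, v in elem.attrib.items())
    line = "  " * depth + elem.tag + ((" " + attrs) if attrs else "")
    return "\n".join([line] + [to_light(c, depth + 1) for c in elem])

def to_xml(text):
    """Parse the indented form back into an element tree."""
    stack = []
    root = None
    for raw in text.splitlines():
        depth = (len(raw) - len(raw.lstrip())) // 2
        parts = raw.split()
        elem = ET.Element(parts[0],
                          dict(p.split("=", 1) for p in parts[1:]))
        if depth == 0:
            root = elem
        else:
            stack[depth - 1].append(elem)
        stack[depth:] = [elem]  # this element is now the open node at its depth
    return root

rule = ET.fromstring(
    '<rule comment="nom"><pattern><pattern-item n="nom"/></pattern></rule>')
light = to_light(rule)
roundtrip = to_xml(light)
print(light)
print(ET.tostring(roundtrip).decode())
```

The point of the sketch is the round trip: as long as the light format losslessly converts back to the XML the existing tools expect, pair developers could edit whichever form they prefer.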



Work plan

Exams and community bonding

I have my final exams from May 27 to June 20, which coincides almost exactly with the first phase of GSoC this year. Since I will not be able to work during my exams, and I want at least one free week before the first exam, I will start earlier, even before the announcement of accepted students; I will continue contributing to the module anyway, whether I am accepted or not.
So I will work on the first phase from April 19 to May 16. From May 17 to June 20 I will be taking my exams, during which I will still be able to make minor changes if necessary, and will remain open for discussions and chats about the first phase and the next one, so that I am ready to design and implement the code when I come back.


Schedule

Pre-GSoC

Week 1

(From April 5 - To April 11)

Continue code reformatting as proposed by mentors.

Week 2

(From April 12 - To April 18)

See what the mentors suggest modifying next in the code.
Discuss with them some of the thoughts in the proposed documentation.
Begin the documentation of the new refactored module.

Deliverable

Weighted transfer rules module is integrated with apertium-transfer.

First milestone

Week 1

(From April 19 - To April 25)

If the code needs further refactoring, bug/issue fixing, polishing, documentation, etc., start on it.

Week 2

(From April 26 - To May 2)

Start designing and implementing some of the valid thoughts and ideas proposed or discussed with the mentors. For now I think sentence splitting is the most promising idea, and maybe also substituting yasmet with another tool or method.

Week 3

(From May 3 - To May 9)

Continue coding and start testing and debugging.

Week 4

(From May 10 - To May 16)

Finish coding, testing and debugging. Write documentation. Train one chosen pair and evaluate its accuracy.

Deliverable

Hopefully, a more accurate, clean and robust weighted transfer rules module.

Week 5

(From June 21 - To June 27)

After my exams, I will familiarize myself with the code again, because my memory is not good enough :) . I will also write the mentor evaluation, complete any unfinished documentation, tests or evaluations, and fix any reported issues or bugs.


Second milestone

Week 5

(From June 28 - To July 4)

Read the Apertium 2 document again, read the deprecated or out-of-date parts from different sources, and collect all the up-to-date transfer file specifications in a new document.

Week 6

(From July 5 - To July 11)

Fix any errors found in the module after collecting the up-to-date specifications.
Update and modify the ambiguous transfer file code to handle both interchunk and postchunk transfer files.

Week 7

(From July 12 - To July 18)

Continue coding and start testing and debugging.

Week 8

(From July 19 - To July 25)

Fix any reported bugs or issues.
Finish coding, testing and debugging. Write documentation. Train one chosen pair and evaluate its accuracy.
Write the mentor evaluation.

Deliverable

Extended weighted transfer rules module.


Third milestone

Week 9

(From July 26 - To August 1)

Fix any reported bugs or issues in the previous deliverable.
Start on a newly proposed idea, regarding either the weighted transfer rules or the lightweight alternative to XML.
If the latter is chosen, I will start familiarizing myself with the interNOSTRUM style.
Start designing and documenting an interNOSTRUM-style format for at least the transfer rules XML files.

Week 10

(From August 2 - To August 8)

Start writing converters to and from XML.

Week 11

(From August 9 - To August 15)

Continue coding. Fix any reported bugs or issues.
Finish coding, debugging and testing, and compare results with XML.

Week 12

(From August 16 - To August 19)

Write documentation.
Write mentor evaluation.

Deliverable

A new light interNOSTRUM-style format as an alternative to the XML format, with converters between the two formats.


Other summer plans

- As for my part-time job: since Francis told me it is not compatible with GSoC, I decided to leave the job by April 15, before the first phase.
- During the first phase of GSoC I will still be in college, but I will be able to allocate at least 30 hours per week to GSoC.
- By the second and third phases, college will have finished, and I will be able to allocate at least 40 hours per week to GSoC.