Difference between revisions of "User:Iamas/GSoC13 Application: "Improved Bilingual Dictionary Induction""

From Apertium
Revision as of 15:46, 3 May 2013

Name

Arnav Sharma

Contact Information

Why am I interested in Machine Translation?

Machine Translation is an important technology for localization, and is particularly relevant in a linguistically diverse country like India, where it can help reduce the language barrier. This motivated me to study Computational Linguistics at IIIT-H, where I am currently working in the Machine Translation Department.

Why am I interested in the Apertium Project?

I have been fascinated by free and open-source software since I first heard about it. As mentioned above, Machine Translation and computational linguistics interest me a lot, and Apertium combines both. Plus, I really like Begiak.

Which of the published tasks am I interested in? What do I plan to do?

I am interested in the project Improved Bilingual Dictionary Induction. The aim is to write a set of scripts that generate valid entries for a bidix from a word-aligned parallel corpus, and to evaluate the reliability of the extracted translations. No existing method enables the fully automatic production of dictionaries, so creating a completely clean lexicographical resource with appropriate coverage requires a manual post-editing phase. Accordingly, my goal is to provide lexicographers with resources that reduce, as much as possible, the amount of labor required to prepare full-fledged dictionaries for use in Apertium.

Advantages of using parallel corpora in dictionary creation

  • High-quality dictionaries are based on corpora. This linguistic data decreases the role of human intuition in the lexicographic process.
  • The corpus-driven nature of this method ensures that human intuition is also set aside when hunting for translation candidates, that is, when establishing possible pairings of source-language and target-language expressions.
  • The method will rank translation candidates by how likely they are, based on automatically determined translational probabilities. This in turn makes it possible to determine which sense of a given lemma is used most frequently. A representative corpus thus guarantees that not only the most important source lemmata are included in the dictionary, as in traditional corpus-based lexicography, but also the translations of their most relevant senses.
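As a rough illustration of the ranking step described above, the sketch below (my own illustration, not an existing Apertium script; the lemma pairs are invented) estimates p(target | source) by relative frequency over alignment links and sorts the candidates for each source lemma:

```python
from collections import Counter, defaultdict

def rank_translations(aligned_pairs):
    """Estimate p(target | source) by relative frequency over the
    alignment links and return, per source lemma, the candidate
    translations sorted from most to least probable."""
    pair_counts = Counter(aligned_pairs)
    source_counts = Counter(s for s, _ in aligned_pairs)
    ranked = defaultdict(list)
    for (s, t), n in pair_counts.items():
        ranked[s].append((t, n / source_counts[s]))
    for candidates in ranked.values():
        candidates.sort(key=lambda c: -c[1])
    return dict(ranked)

# Toy alignment links (source lemma, target lemma); invented data.
pairs = [("casa", "house"), ("casa", "house"), ("casa", "home"),
         ("perro", "dog")]
print(rank_translations(pairs)["casa"])
# 'house' ranks above 'home' (probability 2/3 vs 1/3)
```

In the real pipeline the pairs would come from the word-aligned, morphologically analysed corpus, and a low probability would flag an entry for manual review rather than exclude it outright.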

Proposal Title

Improved bilingual dictionary induction

Why should Apertium and Google sponsor it?

The bilingual dictionary is one of the five main dictionaries used in Apertium. This project involves generating valid and consistent Apertium bilingual dictionary entries from a word-aligned parallel corpus. Tools for this exist, but most of the generated entries have to be checked manually, which can greatly increase the time it takes to build a new translation system. This project will benefit lexicographers and other contributors by reducing the effort and time needed to create a new translation system.
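For concreteness, a generated entry would follow the standard Apertium bidix XML shape, pairing a left-language lemma with a right-language lemma under a shared part-of-speech symbol. A minimal sketch of the formatting step (the helper name is my own; real entries may also need direction restrictions and multiword handling):

```python
def bidix_entry(left, right, pos):
    """Format one Apertium bidix entry (<e>) pairing a left-language
    lemma with a right-language lemma, both tagged with the same
    part-of-speech symbol (e.g. "n" for noun)."""
    return ('<e><p><l>{l}<s n="{pos}"/></l>'
            '<r>{r}<s n="{pos}"/></r></p></e>').format(l=left, r=right, pos=pos)

print(bidix_entry("house", "casa", "n"))
# <e><p><l>house<s n="n"/></l><r>casa<s n="n"/></r></p></e>
```

The generation scripts would emit lines like this into the `<section>` element of the pair's `.dix` file, leaving doubtful entries commented out for the lexicographer to confirm.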

Work Plan

Coding Challenge

The coding challenge involved:


I have finished the coding challenge.

  • The code can be found on GitHub here.
  • Please refer to the README for further details.


Interim period and community bonding period

  • Get to know the community better
  • Habituate myself with the Apertium platform and project
  • Make preparations and gain necessary information that will help me in the coding period.
  • Contribute by fixing bugs, rewriting scripts and contributing to the Hindi-Punjabi and Hindi-Urdu language pairs.

Week Plan

WEEK            DATE         PLANS
Week 01         06.17-06.23  Choose at least three language pairs with varying degrees of relatedness and produce word-aligned data after running a morphological analyser on a parallel corpus for each pair.
Week 02         06.24-06.30  Write a script to generate the list of word alignments.
Week 03         07.01-07.07  Write a script to extract the most frequent combinations of Source Language - Transfer Language paradigms.
Week 04         07.08-07.14  Complete the script that generates templates for the user's selection.
Deliverable #1               Developed templating system.
Week 05 & 06    07.15-07.28  Write a script that checks whether the source-language paradigm has a template with the transfer-language paradigm.
Week 07 & 08    07.29-08.11  Write a script to make bidix entries in an incremental fashion.
Deliverable #2               Script that makes bidix entries in an incremental fashion.
Week 09         08.12-08.18  Create a mini-testvoc for the added words and verify that the entries pass it.
Week 10         08.19-08.24  Compare the bidix against some online web dictionaries.
Week 11         08.25-09.01  Combine all of the scripts above to automate all the tasks.
Week 12         09.02-09.08  Improve the code, add comments, and write wiki documentation for all of the scripts.
Deliverable #3               Final project.
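The Week 03 step, extracting the most frequent Source Language - Transfer Language paradigm combinations, reduces to counting co-occurring paradigm labels in the aligned, analysed corpus. A minimal sketch under that assumption (the paradigm names and the `__` naming convention are invented for illustration):

```python
from collections import Counter

def frequent_paradigm_pairs(aligned_analyses, top_n=10):
    """Count how often each (source-language paradigm, transfer-language
    paradigm) combination co-occurs in the aligned, morphologically
    analysed corpus, and return the most frequent combinations."""
    counts = Counter(tuple(pair) for pair in aligned_analyses)
    return counts.most_common(top_n)

# Toy aligned analyses as (source paradigm, transfer paradigm);
# invented data and paradigm names.
analyses = [("house__n", "casa__n"), ("house__n", "casa__n"),
            ("walk__vblex", "caminar__vblex")]
print(frequent_paradigm_pairs(analyses))
# the ("house__n", "casa__n") pair comes first, with count 2
```

The Week 04 templating script would then turn the top combinations into entry templates offered for the user's selection.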

Biography

I am currently pursuing a Bachelor of Technology in Computer Science and an MS by Research in Computational Linguistics at IIIT-H, and have just finished my second year. I have been studying various fields of Computational Linguistics for the past two years and cannot wait to study more. I am proficient in Python, C/C++, Bash, SQL and HTML5. I have developed an Urdu-Hindi transliterator using NLP tools, which achieved an accuracy of 75%.

Non-Summer-of-Code plans for the summer

I might have to go on a three-day social entrepreneurship trip in July. I also plan to improve my programming skills by taking part in algorithmic coding competitions. Otherwise, I have nothing else planned for the summer; this project will be my main priority.