Difference between revisions of "Ideas for Google Summer of Code"

From Apertium
{{TOCD}}
 
This is the ideas page for [[Google Summer of Code]]. Here you can find ideas for interesting projects that would make Apertium more useful for people and improve or expand our functionality. If you have an idea, please add it below; if you think you could mentor someone in a particular area, add your name to "Interested mentors" using <nowiki>~~~</nowiki>
 
 
The page is intended as an overview of the kind of projects we have in mind. If one of them particularly piques your interest, please come and discuss with us on <code>#apertium</code> on <code>irc.freenode.net</code>, mail the [[Contact|mailing list]], or draw attention to yourself in some other way.
 
 
Note that, if you have an idea that isn't mentioned here, we would be very interested to hear about it.
 
 
Here are some more things you could look at:
 
 
* the [http://bugs.apertium.org/cgi-bin/bugzilla/buglist.cgi?cmdtype=runnamed&namedcmd=Open%20Bugs open bugs] page on Bugzilla
 
* pages in the [[:Category:Development|development category]]
 
* resources that could be converted or expanded in the [[incubator]]. Consider doing or improving a language pair.
 
* Unhammer's [[User:Unhammer/wishlist|wishlist]]
 
* [[Top tips for GSOC applications]]
 
 
<!--
 
See the list sorted by: {{comment|Do we need this? - [[User:Francis Tyers|Francis Tyers]]}}
 
 
* [[/Difficulty|difficulty level]],
 
* [[/Thematic|theme]]
 
-->
 
==List==
 
 
''Note: The table below is sortable by column. Click on the little squares below or next to the headers.''
 
 
{|class="wikitable sortable"
 
! Task !! Difficulty !! Description !! Rationale !! Requirements !! Interested<br/>mentors
 
|-
 
| '''Text tokenisation in HFST''' <small>[[/Morphology with HFST|read more...]]</small> || 3.&nbsp;Medium || Modify the [[HFST|Helsinki Finite State Toolkit]] to work nicely with Apertium. This will involve implementing the ''tokenise-as-you-analyse'' algorithm as presented in this paper.<ref>Alicia Garrido-Alenda, Mikel L. Forcada, Rafael C. Carrasco (2002) [http://www.dlsi.ua.es/~mlf/docum/garrido02p.pdf Incremental construction and maintenance of morphological analysers based on augmented letter transducers]</ref> For bonus points, do the same for [[Foma]]. || [[HFST]] can be used instead of [[lttoolbox]] to handle languages that have much more complex word structure. This would let us expand Apertium to a wider range of languages, and there are many freely available HFST-compatible dictionaries "in the wild". However, HFST is not yet well integrated with Apertium. || C++, knowledge of finite state transducers a plus || [[User:Francis Tyers|Francis Tyers]], [[User:TommiPirinen|Tommi Pirinen]], [[User:Trondtr|Trondtr]], [[User:Unhammer|Unhammer]]
 
|-
 
| '''Java port of Apertium runtime''' || 3.&nbsp;Medium || We have a Java version of 3/4 of the core elements of Apertium. Do the rest and integrate. This consists mainly of porting the HMM [[tagger]] (some parts have already been ported, but the HMM and data loading are incomplete and untested). You can read more on: [[Java port of Apertium runtime]] || A "Java port" of Apertium would enable use on J2ME/Android phones, web pages (applets), desktop applications, and Java server applications. || Java (reading C++ code required) || [[User:Jacob Nordfalk|Jacob Nordfalk]], [[User:Jimregan|Jimregan]]
 
|-
 
| '''Easy dictionary maintenance''' <small>[[/Easy dictionary maintenance|read more...]]</small> || 3.&nbsp;Medium || Write code that simplifies the maintenance of the single-word part of Apertium monolingual and bilingual dictionaries. || Apertium dictionaries are very heterogeneous, but a great part of the development of a language pair consists in adding single words to monolingual and bilingual dictionaries, and, indeed, work on this part of the dictionaries is crucial for coverage and usefulness. Currently, dictionary maintenance is difficult because it involves editing an XML file. This may be slowing down the development of many language pairs. || Java or any language which can parse XML || [[User:Mlforcada|Mlforcada]], [[User:Jimregan|Jimregan]], [[User:Jacob Nordfalk|Jacob Nordfalk]], [[User:Japerez|Japerez]]
 
|-
 
| '''Discontiguous multiwords''' <small>[[/Discontiguous multiwords|read more...]]</small> || 3.&nbsp;Medium || The task will be to develop, or adapt, a module to deal with this kind of discontiguous multiword expression, for example taking 'liggja ekki fyrir' and reordering it as 'liggja# fyrir ekki'. || In many languages, such as English, Norwegian and Icelandic, there are discontiguous multiwords, e.g. phrasal verbs, that we cannot easily support. For example, 'liggja ekki fyrir' in Icelandic should be translated into English as 'to be not clear', but we cannot have 'liggja fyrir' as a traditional multiword because of the intervening 'adverb', which could even be a whole NP. || C++, knowledge of FSTs || [[User:Francis Tyers|Francis Tyers]], [[User:Jimregan|Jimregan]]
 
|-
 
| '''Flag diacritics in lttoolbox''' <small>[[/Flag diacritics in lttoolbox|read more...]]</small> || 2.&nbsp;Hard || Adapt [[lttoolbox]] to elegantly use flag diacritics. Flag diacritics are a way of avoiding transducer size blow-up by discarding impossible paths at runtime as opposed to compile time. || This will involve designing some changes to our XML dictionary format (see [[lttoolbox]]), and implementing the associated changes in the FST compilation and processing code. The reason behind this is that many languages have prefix inflection, and we cannot currently deal with this without either making paradigms useless or overanalysing (e.g. returning analyses where none exist). Flag diacritics (or constraints) would allow us to restrict overanalysis without blowing up the size of our dictionaries. || C++, XML, knowledge of FSTs || [[User:Francis Tyers|Francis Tyers]], [[User:Jacob Nordfalk|Jacob Nordfalk]]
 
|-
 
| '''Flag diacritics in lttoolbox-java''' <small>[[/Flag diacritics in lttoolbox|read more...]]</small> || 2.&nbsp;Hard || Adapt [[lttoolbox-java]] to elegantly use flag diacritics. Flag diacritics are a way of avoiding transducer size blow-up by discarding impossible paths at runtime as opposed to compile time. || This will involve designing some changes to our XML dictionary format (see [[lttoolbox]]), and implementing the associated changes in the FST compilation and processing code. The reason behind this is that many languages have prefix inflection, and we cannot currently deal with this without either making paradigms useless or overanalysing (e.g. returning analyses where none exist). Flag diacritics (or constraints) would allow us to restrict overanalysis without blowing up the size of our dictionaries. || Java, XML, knowledge of FSTs || [[User:Jacob Nordfalk|Jacob Nordfalk]]
 
|-
 
| '''Linguistically-driven bilingual-phrase filtering for inferring transfer rules''' || 3.&nbsp;Medium || Re-working apertium-transfer-training-tools to filter the set of bilingual phrases automatically obtained from a word-aligned sentence pair by using linguistic criteria. || Apertium-transfer-training-tools is a cool piece of software that generates shallow-transfer rules from aligned parallel corpora. It could greatly speed up the creation of new language pairs by generating rules that would otherwise have to be written by human linguists. || C++, general knowledge of GIZA++, Perl considered a plus. || [[User:Jimregan|Jimregan]]
 
|-
 
| '''Context-dependent lexicalised categories for inferring transfer rules''' || 2.&nbsp;Hard || Re-working apertium-transfer-training-tools to use context-dependent lexicalised categories in the inference of shallow-transfer rules. || Apertium-transfer-training-tools generates shallow-transfer rules from aligned parallel corpora. It uses a small set of lexicalised categories: categories that are usually involved in lexical changes, such as prepositions, pronouns or auxiliary verbs. Lexicalised categories differ from the other categories in that their lemmas are taken into account in the generation of rules. || C++, general knowledge of GIZA++, XML. || [[User:Jimregan|Jimregan]]
 
|-
 
| '''Improve integration of'''<br/>'''lttoolbox in libvoikko''' <small>[[/Improve integration of lttoolbox in libvoikko|read more...]]</small> || 3.&nbsp;Medium || Dictionaries from [[lttoolbox]] can now be used for spellchecking directly with [[libvoikko]] (see [[Spell checking]]). The idea of this project is to improve the integration: fix bugs, and look at ways of codifying "standard"/"sub-standard" forms in our dictionaries. || Spell checkers can be useful, especially for languages other than English. They are one of the "must have" items of language technology. If we can re-use Apertium data for this purpose, it will help both the project (by making the creation of new language pairs more rewarding) and the language communities (by making more useful software). || XML, C++. || [[User:Francis Tyers|Francis Tyers]]
 
|-
 
| '''Complex multiwords''' || 2.&nbsp;Hard || Write a bidirectional module for specifying complex multiword units, for example ''dirección general'' and ''zračna luka''. See ''[[Multiwords]]'' for more information. || Although in the Romance languages it is not a big problem, as soon as you start to get to languages with cases (e.g. Serbo-Croatian, Slovenian, German, etc.) the problem comes that you can't define a multiword of <code>adj nom</code> because the adjective has a lot of inflection. || Java or C++, XML || [[User:Jimregan|Jimregan]]
 
|-
 
 
| '''Adopt a'''<br/>'''language pair''' <small>[[/Adopt a language pair|read more...]]</small> || 4.&nbsp;Entry&nbsp;level || Take on an orphaned language pair, and bring it up to release-quality results. What this quality will be will depend on the language pair adopted, and will need to be discussed with the prospective mentor. This will involve writing linguistic data (including morphological rules and transfer rules &mdash; which are specified in a declarative language &mdash; and possibly [[Constraint Grammar]] rules if that is relevant). || Apertium has a few language pairs (e.g. sh-mk, en-af, ga-gd, etc.) that are orphaned: they don't have active maintainers. Many of these pairs already have a lot of work put in, and just need another few months to reach release quality. See also [[Incubator]]. || XML, a scripting language (Python, Perl), good knowledge of the language pair adopted. || [[User:Francis Tyers|Francis Tyers]], [[User:Jimregan|Jimregan]], [[User:Kevin Scannell|Kevin&nbsp;Scannell]], [[User:Trondtr|Trondtr]], [[User:Niunniminuni|Niunniminuni]], [[User:Unhammer|Unhammer]]
 
|-
 
| '''Post-edition '''<br/>'''tool''' || 3.&nbsp;Medium || Make a post-edition tool to speed up revision of Apertium translations. It would likely include at least support for spelling and grammar checking, along with user-specified rules for fixing translations (search and replace, etc.). This tool could reuse an existing grammar checker such as [http://www.languagetool.org LanguageTool]. || After translating with Apertium, revision work has to be done before a translation can be considered "adequate". An intelligent post-edition environment would help with this task. In this environment, typical mistakes in the translation process that can be automatically detected (for example unknown words and homographs) could be highlighted for consideration during post-edition. Typical mistakes could also be defined so that the post-editor is advised to check them. The application would be more useful if built as a web application. || XML; any of PHP, Python, Java, C or C++; platforms for building web applications. || [[User:Japerez|Japerez]], [[User:Jimregan|Jimregan]], [[User:villarejo|villarejo]], [[User:Acorbi|Acorbi]]
 
|-
 
| '''Detect 'hidden' unknown words''' <small>[[/Detect hidden unknown words|read more...]]</small>|| 3.&nbsp;Medium || The part-of-speech tagger of Apertium can be modified to work out the likelihood of each 'tag' in a certain context; this can be used to detect missing entries in the dictionary. || Apertium dictionaries may have incomplete entries, that is, surface forms (lexical units as they appear in running text) for which the dictionary does not provide all the possible lexical forms (consisting of lemma, part-of-speech and morphological inflection information). As surface forms with at least one lexical form cannot be considered unknown words, it is difficult to know whether all lexical forms for a given surface form have been included in the dictionaries. This feature would detect 'possible' missing lexical forms for the surface forms in the dictionaries. || C++ if you plan to modify the part-of-speech tagger; whatever you like if rewriting it from scratch. || [[User:Fsanchez|Felipe Sánchez-Martínez]]
 
|-
 
| '''Format filters''' || 4.&nbsp;Entry&nbsp;level || Make Apertium capable of dealing with more formats; at minimum, files marked up with LaTeX and MediaWiki. || Apertium can currently deal with texts in plain-text, RTF, HTML and ODT formats by means of a format definition file. It should be easy to use the same language to define filters for other formats. || Apertium format definition language and/or scripting languages. || [[User:Villarejo|Villarejo]]
 
|-
 
| '''Geriaoueg<br/>vocabulary assistant''' || 4.&nbsp;Entry&nbsp;level || Extend [[Geriaoueg]] so that it works more reliably with broken HTML and with any given language pair. || [[Geriaoueg]] is a program that provides "popup" vocabulary assistance, something like BBC Vocab or Lingro. Currently it only works with Breton--French, Welsh--English and Spanish--Breton. This task would be to develop it to work with any language in our SVN and fix problems with processing and displaying non-standard HTML. || PHP, C++, XML || [[User:Francis Tyers|Francis Tyers]]
 
|-
 
| '''Corpus-assisted dictionary expansion''' || 4.&nbsp;Entry&nbsp;level || Semi-automatic bilingual word-equivalence retrieval from a bitext and a monolingual word list. || Improve an existing Python script to retrieve the best translations (suggestions) of a word (typically an unknown word) given a particular parallel text corpus. Perhaps combine the result with automatic paradigm guessing (also as suggestions) to improve the productivity of the lexical work for most contributors. || Python, C/C++, AWK, Bash, perhaps a web interface in PHP, Python, or Ruby on Rails || [[User:Sortiz|Sortiz]], [[User:Jimregan|Jimregan]], [[User:Villarejo|Villarejo]]
 
|-
 
| '''Improvements to target-language tagger training''' <small>[[/Improvements to target-language tagger training|read more...]]</small> || 2.&nbsp;Hard || Modify apertium-tagger-training-tools so that it can deal with n-stage transfer rules when segmenting the input source-language text, and apply a k-best Viterbi pruning approach that does not require computing the a-priori likelihood of every disambiguation path before pruning. || apertium-tagger-training-tools is a program for doing [[target-language tagger training]], meaning it improves POS-tagging performance specifically for the translation task, achieving a result for unsupervised training comparable with supervised training. This task would also require switching the default Perl-based language model to either IRSTLM or RandLM (or both!). || C++, XML, XSLT || [[User:Fsanchez|Felipe Sánchez-Martínez]]
 
|-
 
| '''Accent and diacritic'''<br/>'''restoration''' <small>[[/Automatic diacritic restoration|read more...]]</small> || 3.&nbsp;Medium || Create an optional module to restore diacritics and accents on input text, and integrate it into the Apertium pipeline. || Many languages use diacritics and accents in normal writing, and Apertium is designed to use them; however, in some settings, for example instant messaging, IRC and web searches, they are often omitted. This causes problems because, to the engine, ''traduccion'' is not the same as ''traducción''. || C, C++, XML, familiarity with linguistic issues || [[User:Kevin Scannell|Kevin&nbsp;Scannell]], [[User:Trondtr|Trondtr]]
 
|-
 
| '''VM for the transfer module''' <small>[[/VM for transfer|read more...]]</small> || 3.&nbsp;Medium || A virtual machine for the current transfer architecture of Apertium and for future transfer formats, in pure C++. || Define an instruction set for a virtual machine that processes transfer code, implement a prototype in Python, then port it to C++. The rationale is that XML tree-walking is quite slow and CPU-intensive: in modern (3-or-more-stage) pairs, transfer takes up most of the CPU time. There are other options, like [[Bytecode for transfer]], but we would like something that does not require external libraries and is adapted specifically to Apertium. || Python, C/C++, XML, XSLT, code optimisation, JIT techniques, etc. || [[User:Sortiz|Sortiz]]
 
|-
 
| '''Hybrid MT''' || 2.&nbsp;Hard || Building Apertium-[[Marclator]] rule-based/corpus-based hybrids || Both the rule-based machine translation system Apertium and the corpus-based machine translation system [http://www.openmatrex.org/marclator/marclator.html Marclator] do some kind of chunking of the input as well as use a relatively straightforward left-to-right machine translation strategy. This has been explored [http://www.dlsi.ua.es/~fsanchez/pub/pdf/sanchez-martinez09d.pdf before] but there are other ways to organize hybridization which should be explored (the mentor is happy to discuss). Hybridization may make it easier to adapt Apertium to a particular corpus by using chunk pairs derived from it. || Knowledge of Java, C++, and scripting languages, and appreciation for research-like coding projects || [[User:Mlforcada|Mlforcada]], [[User:Jimregan|Jimregan]]
 
|-
 
|}
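To make the ''tokenise-as-you-analyse'' idea from the HFST task above more concrete, here is a minimal sketch in Python. It is not the algorithm from the cited paper, only an illustration of the core idea: the analyser itself decides token boundaries by taking the longest analysable prefix of the input, rather than splitting on whitespace first. The `ANALYSES` dictionary is a hypothetical stand-in for a real finite-state transducer.

```python
# Hypothetical stand-in for an FST analyser: surface form -> analysis.
ANALYSES = {
    "liggja": "liggja<vblex>",
    "ekki": "ekki<adv>",
    "fyrir": "fyrir<pr>",
}

def tokenise_as_you_analyse(text):
    """Greedy longest-match tokenisation driven by the analyser."""
    tokens, i = [], 0
    while i < len(text):
        if text[i].isspace():          # skip separators
            i += 1
            continue
        # Try the longest analysable prefix starting at position i.
        for j in range(len(text), i, -1):
            candidate = text[i:j]
            if candidate in ANALYSES:
                tokens.append((candidate, ANALYSES[candidate]))
                i = j
                break
        else:
            # No analysable prefix: emit an unknown word (marked with *).
            j = i
            while j < len(text) and not text[j].isspace():
                j += 1
            tokens.append((text[i:j], "*" + text[i:j]))
            i = j
    return tokens

print(tokenise_as_you_analyse("liggja ekki fyrir"))
# [('liggja', 'liggja<vblex>'), ('ekki', 'ekki<adv>'), ('fyrir', 'fyrir<pr>')]
```

A real implementation would walk the transducer incrementally instead of testing every prefix against a dictionary, but the boundary-finding behaviour is the same.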
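The two flag-diacritics tasks above may be easier to picture with an example of what a flag check does at runtime. The sketch below, a simplification and not the lttoolbox design, evaluates Xerox-style flag symbols along one candidate path: <code>@P.F.V@</code> sets feature F to value V, <code>@R.F.V@</code> requires it, and <code>@D.F@</code> requires F to be unset. Paths that fail a check are discarded at lookup time instead of being compiled away, which is what keeps the transducer small.

```python
import re

# Matches @P.FEAT.VAL@, @R.FEAT.VAL@ and @D.FEAT@ flag symbols.
FLAG_RE = re.compile(r"@([PRD])\.([^.@]+)(?:\.([^@]+))?@")

def passes_flags(symbols):
    """Return True if a path of transducer symbols satisfies its
    flag diacritics; ordinary symbols are ignored."""
    flags = {}
    for sym in symbols:
        m = FLAG_RE.fullmatch(sym)
        if not m:
            continue                       # ordinary symbol, not a flag
        op, feat, val = m.groups()
        if op == "P":                      # Positive set: F := V
            flags[feat] = val
        elif op == "R" and flags.get(feat) != val:
            return False                   # Require failed
        elif op == "D" and feat in flags:
            return False                   # Disallow failed
    return True

# A hypothetical prefix 'un-' sets NEG; a stem that disallows NEG
# rejects the combined path, without enumerating it at compile time.
print(passes_flags(["@P.NEG.on@", "un", "@D.NEG@", "word"]))  # False
print(passes_flags(["un", "@D.NEG@", "word"]))                # True
```

The real task would also involve deciding how such constraints are expressed in the XML dictionary format and enforced during lookup in C++ (or Java for lttoolbox-java).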
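For the accent and diacritic restoration task, a minimal baseline can be sketched with a unigram frequency model: strip diacritics from corpus words, index the accented forms by their bare spellings, and restore each input word to its most frequent accented candidate. The word counts below are invented for illustration; a serious module would also use context, as the task description implies.

```python
import unicodedata

def strip_diacritics(word):
    """Remove combining marks: 'traducción' -> 'traduccion'."""
    decomposed = unicodedata.normalize("NFD", word)
    return "".join(c for c in decomposed if not unicodedata.combining(c))

# Hypothetical frequency counts from a corpus of correctly written text.
COUNTS = {"traducción": 120, "traduccion": 2, "si": 500, "sí": 300}

# Index candidate forms by their diacritic-stripped spelling.
CANDIDATES = {}
for w, n in COUNTS.items():
    CANDIDATES.setdefault(strip_diacritics(w), []).append((n, w))

def restore(word):
    """Pick the most frequent attested form matching the bare input."""
    options = CANDIDATES.get(strip_diacritics(word))
    if not options:
        return word                    # unseen word: leave unchanged
    return max(options)[1]

print(restore("traduccion"))   # 'traducción'
```

Note the limitation this exposes: a unigram model restores ''si'' to the majority form even when context demands ''sí'', which is exactly why the project calls for familiarity with linguistic issues.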
 
 
==Notes==
 
<references/>
 
==Further reading==
 
 
; Transfer rule learning
 
* Sánchez-Martínez, F. and Forcada, M.L. (2009) "[http://www.dlsi.ua.es/~fsanchez/pub/pdf/sanchez-martinez09b.pdf Inferring shallow-transfer machine translation rules from small parallel corpora]" In Journal of Artificial Intelligence Research. volume 34, p. 605-635.
 
 
; Target-language driven part-of-speech tagger training
 
* Sánchez-Martínez, F.; Pérez-Ortiz, J.A. and Forcada, M.L. (2008) "[http://www.springerlink.com/content/m452802q3536044v/fulltext.pdf Using target-language information to train part-of-speech taggers for machine translation]". In Machine Translation, volume 22, numbers 1-2, p. 29-66.
 
[[Category:Development]]
 

Revision as of 14:20, 2 April 2010