==2017==

=== <u>Back-off morphological generation with RNNs</u> ===
* '''Difficulty''':<br><span style="background-color: #ffbdbd">1. Hard</span>
* '''Required skills''':<br>C++
* '''Description''':<br>Write a pared-down RNN library to do morphological generation, and add it as a backoff to [[lt-proc]] when generation fails (see the sketch below).
* '''Rationale''':<br>One of the most frustrating things when developing a new language pair is that you have to get the tags ''just right'' in order to generate: if they come in the wrong order, or if a tag is missing or superfluous, no surface form is generated at all. For many languages it is possible to train RNNs to a high level of accuracy.
* '''Mentors''':<br>[[User:Francis Tyers]] [[User:Mlforcada]]
* '''[[Morphological generation using RNNs|read more...]]'''
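As a sketch of how such a backoff could hook into the pipeline, assuming the usual convention that the generator marks forms it cannot generate with a leading <code>#</code>; <code>rnn_generate</code> is a hypothetical stand-in for the pared-down RNN library:

<pre>
import subprocess

def rnn_generate(analysis):
    """Hypothetical hook into the pared-down RNN generator."""
    raise NotImplementedError

def generate_with_backoff(transfer_output, dix="en.autogen.bin"):
    # Run the normal finite-state generator first.
    proc = subprocess.run(["lt-proc", "-g", dix],
                          input=transfer_output, capture_output=True, text=True)
    out = []
    for form in proc.stdout.split():   # crude: real code would honour superblanks
        if form.startswith("#"):       # generation failed for this form
            out.append(rnn_generate(form.lstrip("#")))
        else:
            out.append(form)
    return " ".join(out)
</pre>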
=== <u>Shallow-function labeller</u> ===
* '''Difficulty''':<br><span style="background-color: #cdefcd">2. Medium</span>
* '''Required skills''':<br>Python, shell scripting
* '''Description''':<br>Implement a prototype shallow syntactic function labeller for Apertium
* '''Rationale''':<br>In many pairs it is useful to know, in addition to the morphological tags of a word, its syntactic function tags in order to produce an adequate translation. For instance, when translating from an ergative language you might want to know whether an absolutive is subject or object. A shallow function labeller takes an annotated corpus and produces a model which can annotate new text (see the sketch below).
* '''Mentors''':<br>[[User:Unhammer|Unhammer]], [[User:Francis Tyers|Francis Tyers]], [[User:Mlforcada|Mikel Forcada]]
* '''[[/Shallow-function labeller|read more...]]'''
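A minimal sketch of the statistical core, assuming training data where every token is a (morphological tags, function label) pair; the feature set and the toy tags are illustrative only:

<pre>
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

def features(tags, i):
    # A window of morphological tags around token i.
    return {"cur": tags[i],
            "prev": tags[i - 1] if i > 0 else "BOS",
            "next": tags[i + 1] if i + 1 < len(tags) else "EOS"}

def train(corpus):
    """corpus: list of sentences, each a list of (tags, label) pairs."""
    X, y = [], []
    for sent in corpus:
        tags = [t for t, _ in sent]
        for i, (_, label) in enumerate(sent):
            X.append(features(tags, i))
            y.append(label)
    vec = DictVectorizer()
    clf = LogisticRegression(max_iter=1000).fit(vec.fit_transform(X), y)
    return vec, clf

def label(vec, clf, tags):
    return list(clf.predict(vec.transform(
        [features(tags, i) for i in range(len(tags))])))

# Toy example: decide whether an absolutive is subject or object.
corpus = [[("n.abs", "@OBJ"), ("n.erg", "@SUBJ"), ("v.tv", "@FMV")]]
vec, clf = train(corpus)
print(label(vec, clf, ["n.abs", "n.erg", "v.tv"]))
</pre>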

=== <u>Discontiguous multiwords</u> ===
* '''Difficulty''':<br><span style="background-color: #cdefcd">2. Medium</span>
* '''Required skills''':<br>C++, Knowledge of FSTs
* '''Description''':<br>The task will be to develop, or adapt, a module to deal with these kinds of discontiguous multiword expressions, for example taking 'liggja ekki fyrir' and reordering it as 'liggja# fyrir ekki' (see the sketch below).
* '''Rationale''':<br>In many languages, such as English, Norwegian and Icelandic, there are discontiguous multiwords, e.g. phrasal verbs, that we cannot easily support. For example, 'liggja ekki fyrir' in Icelandic should be translated into English as 'to not be clear', but we cannot have 'liggja fyrir' as a traditional multiword because of the intervening adverb, which could even be a whole NP.
* '''Mentors''':<br>[[User:Francis Tyers|Francis Tyers]]
* '''[[/Discontiguous multiwords|read more...]]'''
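A sketch of the reordering step on Apertium stream units; the particle list and the verb-tag test are illustrative, and a real module would read the multiword entries from the dictionaries:

<pre>
PARTICLES = {"fyrir", "saman"}   # illustrative set of Icelandic particles

def lemma(unit):
    return unit.split("<", 1)[0]

def reorder(units):
    """Merge verb + separated particle across one intervening word:
    ['liggja<vblex>...', 'ekki<adv>', 'fyrir<pr>'] ->
    ['liggja# fyrir<vblex>...', 'ekki<adv>']"""
    out, i = [], 0
    while i < len(units):
        if i + 2 < len(units) and lemma(units[i + 2]) in PARTICLES \
           and "<vblex>" in units[i]:
            head, tags = units[i].split("<", 1)
            out.append(head + "# " + lemma(units[i + 2]) + "<" + tags)
            out.append(units[i + 1])
            i += 3
        else:
            out.append(units[i])
            i += 1
    return out

print(reorder(["liggja<vblex><pres>", "ekki<adv>", "fyrir<pr>"]))
# ['liggja# fyrir<vblex><pres>', 'ekki<adv>']
</pre>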

==2014 ideas==

{|
|-
!colspan=4 style="background-color: #cdcdcd"|'''Apertium in chat clients''' ([[Google_Summer_of_Code/Report_2014|2014 project]])
|-
|align=center| '''How ?'''<br/><small>(required skills)</small> ||align=center| '''What ?'''<br/><small>(description)</small> ||align=center| '''Why ?'''<br/><small>(rationale)</small> ||align=center| '''Who ?'''<br/><small>(mentors)</small>
|-
| Good command of Java, interfaces, Android, and scripting languages || Make it possible to use Apertium seamlessly from inside Telegram, XChat, Pidgin. || Telegram is a free/open-source, documented-API alternative to WhatsApp. Using the existing offline Apertium code base, it should be possible to integrate it in the Android version of Telegram or in the Chrome/Chromium plugin. XChat is one of the most popular IRC programs. Apertium has come a long way towards becoming a machine translation system that may be easily installed (e.g. [[apertium-caffeine]]). This means that it should be easy to interface it so that it works as a plugin to XChat (see [http://xchat.org/docs/plugin20.html XChat 2.0 Plugin interface]) || [[User:mlforcada|mlforcada]], other mentors wanted.
|-
!style="background-color: #cdefcd"|2.&nbsp;Medium ||colspan=2| || [[/Apertium in chat clients|read more...]]
|-
!colspan=4 style="background-color: #cdcdcd"|'''Unify the metadix formats'''
|-
|align=center| '''How ?'''<br/><small>(required skills)</small> ||align=center| '''What ?'''<br/><small>(description)</small> ||align=center| '''Why ?'''<br/><small>(rationale)</small> ||align=center| '''Who ?'''<br/><small>(mentors)</small>
|-
| Good command of XML, XSLT, scripting languages || Unify and extend the various "[[metadix]]" formats used in Apertium and deploy the modifications || In some language pairs, dictionaries are not written directly in the .dix format understood by lt-comp, but rather in a higher-level format called .metadix, which is converted to .dix using a cascade of XSLT stylesheets and scripts. However, the .metadix format is different in each language pair, and therefore each language pair contains its own scripts and XSLT stylesheets. There are basically two such formats. The idea is to unify them in a single format that can be processed with scripts that would then be part of lttoolbox or apertium, and, if possible, extend it so that it allows for "variables" in bilingual dictionaries, so that one can have a single entry for (e.g.) 'foodstock'/'foodstocks' (en) = 'matèria primera'/'matèries primeres' (ca), which is currently not possible. || [[User:mlforcada|mlforcada]]
|-
!style="background-color: #cdefcd"|2.&nbsp;Medium ||colspan=2| || [[/Unify the metadix formats|read more...]]
|-
!colspan=4 style="background-color: #cdcdcd"|'''Command-line translation memory fuzzy-match repair''' ([[Google_Summer_of_Code/Report_2014|2014 project]])
|-
|align=center| '''How ?'''<br/><small>(required skills)</small> ||align=center| '''What ?'''<br/><small>(description)</small> ||align=center| '''Why ?'''<br/><small>(rationale)</small> ||align=center| '''Who ?'''<br/><small>(mentors)</small>
|-
| Any command-line scripting language, or Java, or C++ ... || Extend Apertium's capability to deal with translation memory so that it can "repair" some fuzzy matches when it is "safe" to do so. || Currently Apertium has support for translation memories, basically as follows: if an input sentence is found exactly in the translation memory, it is not machine translated but instead retrieved from the translation memory. However, one may find, for instance, sentences that differ only in one or two words. In that case, it may make sense to use Apertium only to "patch" the translation in the memory. It is actually possible to do this in a "safe" way (a sketch of the idea follows this table). || [[User:mlforcada|mlforcada]]
|-
!style="background-color: #efcdcd"|1.&nbsp;Hard ||colspan=2| || [[/Command-line translation memory fuzzy-match repair|read more...]]
|-
!colspan=4 style="background-color: #cdcdcd"|'''Rule-based finite-state disambiguation''' ([[Google_Summer_of_Code/Report_2013|2013 project]])
|-
|align=center| '''How ?'''<br/><small>(required skills)</small> ||align=center| '''What ?'''<br/><small>(description)</small> ||align=center| '''Why ?'''<br/><small>(rationale)</small> ||align=center| '''Who ?'''<br/><small>(mentors)</small>
|-
| XML, C++ or Java || Implement a disambiguation framework for Apertium that can be expressed as a finite-state transducer. It might be a good idea to express this as constraint rules, in a novel XML-based file format. It would be a good idea to look at LanguageTool, and IceParser and Apertium's own [[apertium-lex-tools]] to get ideas on how this might be accomplished. || Currently Apertium only has a bigram/trigram part-of-speech tagger. For most languages, bigram/trigram POS disambiguation really doesn't work, especially when you want to disambiguate morphology (e.g. number, case) along with part-of-speech. So far we've been using [[constraint grammar]] for some of these languages. But although Constraint Grammar is great and powerful, it is also pretty slow. || [[User:Francis Tyers|Francis Tyers]] (C++), [[User:Jacob Nordfalk|Jacob Nordfalk]] (Java)
|-
!style="background-color: #efcdcd"|1.&nbsp;Hard ||colspan=2| || [[/Rule-based finite-state disambiguation|read&nbsp;more...]]
|-
!colspan=4 style="background-color: #cdcdcd"|'''Complex multiwords''' ([[Google_Summer_of_Code/Report_2014|2014 project]])
|-
|align=center| '''How ?'''<br/><small>(required skills)</small> ||align=center| '''What ?'''<br/><small>(description)</small> ||align=center| '''Why ?'''<br/><small>(rationale)</small> ||align=center| '''Who ?'''<br/><small>(mentors)</small>
|-
| Java or C++, XML, Knowledge of FSTs || Write a bidirectional module for specifying complex multiword units, for example ''dirección general'' and ''zračna luka''. See ''[[Multiwords]]'' for more information. || Although in the Romance languages it is not a big problem, as soon as you start to get to languages with cases (e.g. Serbo-Croatian, Slovenian, German, etc.) the problem comes that you can't define a multiword of <code>adj nom</code> because the adjective has a lot of inflection. || [[User:Jimregan|Jimregan]]
|-
!style="background-color: #efcdcd"|1.&nbsp;Hard ||colspan=2| || [[/Complex multiwords|read&nbsp;more...]]
|-
!colspan=4 style="background-color: #cdcdcd"|'''Optimise the VM for transfer''' ([[Google_Summer_of_Code/Report_2014|2014 project]])
|-
|align=center| '''How ?'''<br/><small>(required skills)</small> ||align=center| '''What ?'''<br/><small>(description)</small> ||align=center| '''Why ?'''<br/><small>(rationale)</small> ||align=center| '''Who ?'''<br/><small>(mentors)</small>
|-
| Python, C++, XML, code optimisation, JIT techniques, etc. || The current VM for the transfer architecture of Apertium is up to five times slower than the XML tree-walking implementation. The job of this task is to optimise the C++ code to make it faster than XML tree-walking. || The rationale behind this is that XML tree-walking is quite slow and CPU intensive. In modern (3 or more stage) pairs, transfer takes up most of the CPU. There are other options, like [[Bytecode for transfer]], but we would like something that does not require external libraries and is adapted specifically for Apertium. || [[User:Sortiz|Sortiz]]
|-
!style="background-color: #cdefcd"|2.&nbsp;Medium ||colspan=2| || [[/Optimise the VM for transfer|read&nbsp;more...]]
|-
!colspan=4 style="background-color: #cdcdcd"|'''Accent and diacritic restoration'''
|-
|align=center| '''How ?'''<br/><small>(required skills)</small> ||align=center| '''What ?'''<br/><small>(description)</small> ||align=center| '''Why ?'''<br/><small>(rationale)</small> ||align=center| '''Who ?'''<br/><small>(mentors)</small>
|-
| C, C++, XML, familiarity with linguistic issues, knowledge of FSTs preferable || Create an optional module to restore diacritics and accents on input text, and integrate it into the Apertium pipeline. || Many languages use diacritics and accents in normal writing, and Apertium is designed to use these; however, in some settings, for example instant messaging, IRC and web search, they are often left out. This causes problems for the engine: ''traduccion'' is not the same as ''traducción''. || [[User:Kevin Scannell|Kevin&nbsp;Scannell]], [[User:Trondtr|Trondtr]]
|-
!style="background-color: #cdcdef"|3.&nbsp;Entry&nbsp;level ||colspan=2| || [[/Accent and diacritic restoration|read&nbsp;more...]]
|-
!colspan=4 style="background-color: #cdcdcd"|'''Geriaoueg vocabulary assistant'''
|-
|align=center| '''How ?'''<br/><small>(required skills)</small> ||align=center| '''What ?'''<br/><small>(description)</small> ||align=center| '''Why ?'''<br/><small>(rationale)</small> ||align=center| '''Who ?'''<br/><small>(mentors)</small>
|-
| PHP, C++, XML || Extend [[Geriaoueg]] so that it works more reliably with broken HTML and with any given language pair (e.g. support for both [[lttoolbox]] and [[HFST]]). || [[Geriaoueg]] is a program that provides "popup" vocabulary assistance, something like BBC Vocab or Lingro. Currently it only works with Breton--French, Welsh--English and Spanish--Breton. This task would be to develop it to work with any language in our SVN and fix problems with processing and displaying non-standard HTML. || [[User:Francis Tyers|Francis&nbsp;Tyers]]
|-
!style="background-color: #cdcdef"|3.&nbsp;Entry&nbsp;level ||colspan=2| || [[/Geriaoueg vocabulary assistant|read&nbsp;more...]]
|-
!colspan=4 style="background-color: #cdcdcd"|'''Closer integration with HFST'''
|-
|align=center| '''How ?'''<br/><small>(required skills)</small> ||align=center| '''What ?'''<br/><small>(description)</small> ||align=center| '''Why ?'''<br/><small>(rationale)</small> ||align=center| '''Who ?'''<br/><small>(mentors)</small>
|-
| C++, Autotools, XML || This is a set of subtasks to make it easier for Apertium developers to use the Helsinki Finite-State Toolkit (HFST). It will involve: Adjusting the HFST build process to allow for an Apertium-tailored install. Making an XML format for [[lexc]] designed with machine translation in mind. Adjusting the tokenisation code in <code>hfst-proc</code>. Making [[lttoolbox]] a possible backend for HFST. || HFST is a great toolkit for working with morphological transducers, but it is pretty difficult to install, and also not very well integrated with Apertium / doesn't really follow the Apertium way of doing things. We'd like to make it more closely integrated. || [[User:Francis Tyers|Francis&nbsp;Tyers]], [[User:TommiPirinen|Tommi A Pirinen]]
|-
!style="background-color: #cdefcd"|2.&nbsp;Medium ||colspan=2| || [[/Closer integration with HFST|read&nbsp;more...]]
|-
!colspan=4 style="background-color: #cdcdcd"|'''Plain-text formats for Apertium data'''
|-
|align=center| '''How ?'''<br/><small>(required skills)</small> ||align=center| '''What ?'''<br/><small>(description)</small> ||align=center| '''Why ?'''<br/><small>(rationale)</small> ||align=center| '''Who ?'''<br/><small>(mentors)</small>
|-
| XSLT, XML, flex, bison || Apertium data is currently largely encoded in XML-based formats. These are very overt and clear, but clumsy and hard to write. The idea is to make a plain-text format (based on the old [http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/download/3355/1843 MorphTrans] format) and write converters to/from the existing XML based format. || Many of our developers like the XML-based transfer and dictionary formats, but there are always some who would prefer a more texty format. This idea would make them happier. Happy developers write more code! || [[User:Mlforcada|Mlforcada]]
|-
!style="background-color: #cdefcd"|2.&nbsp;Medium ||colspan=2| || [[/Plain-text formats for Apertium data|read&nbsp;more...]]
|-
!colspan=4 style="background-color: #cdcdcd"|'''Improving support for non-standard text input''' ([[Google_Summer_of_Code/Report_2014|2014 project]])
|-
|align=center| '''How ?'''<br/><small>(required skills)</small> ||align=center| '''What ?'''<br/><small>(description)</small> ||align=center| '''Why ?'''<br/><small>(rationale)</small> ||align=center| '''Who ?'''<br/><small>(mentors)</small>
|-
| Python, XML, familiarity with linguistic issues, knowledge of FSTs preferable || Create a module that will standardise non-standard input, for example slang and abbreviations. || Machine translation systems, especially rule-based systems, are pretty fragile when it comes to non-standard input. Get a comma, space, apostrophe or hyphen in the wrong place and it can come out all wrong. But we definitely want to be able to translate IRC, SMS, tweets and YouTube comments... || [[User:Francis Tyers|Francis&nbsp;Tyers]]
|-
!style="background-color: #cdcdef"|3.&nbsp;Entry&nbsp;level ||colspan=2| || [[/Improving support for non-standard text input|read&nbsp;more...]]
|-
!colspan=4 style="background-color: #cdcdcd"|'''Apertium assimilation evaluation toolkit''' ([[Google_Summer_of_Code/Report_2014|2014 project]])
|-
|align=center| '''How ?'''<br/><small>(required skills)</small> ||align=center| '''What ?'''<br/><small>(description)</small> ||align=center| '''Why ?'''<br/><small>(rationale)</small> ||align=center| '''Who ?'''<br/><small>(mentors)</small>
|-
| A scripting language || Starting from files containing sentences in the source language and reference translations, generate tests for human evaluation consisting of: (1) (optionally) the source sentence, (2) (optionally) the machine-translated version of the source sentence, and (3) a reference translation of the sentence in which one or more content words have been deleted. The idea is to measure how the ability of human subjects to fill in the holes improves when the source sentence or a machine translation of it is presented. The task also involves writing a program that computes the success rate as a function of the information presented to the user, and utilities to make the whole process automatic given an Apertium language pair. || Many Apertium language pairs are designed for assimilation (gisting) purposes. The evaluation described would measure how helpful they are in that task. || [[User:Francis Tyers|Francis&nbsp;Tyers]], [[User:mlforcada| Mikel Forcada]]
|-
!style="background-color: #cdcdef"| 3.&nbsp;Entry&nbsp;level ||colspan=2| || [[/Apertium assimilation evaluation toolkit|read&nbsp;more...]]
|-
!colspan=4 style="background-color: #cdcdcd"|'''Corpus-based lexicalised feature transfer'''
|-
|align=center| '''How ?'''<br/><small>(required skills)</small> ||align=center| '''What ?'''<br/><small>(description)</small> ||align=center| '''Why ?'''<br/><small>(rationale)</small> ||align=center| '''Who ?'''<br/><small>(mentors)</small>
|-
| C++, NLP || Make a module that sits somewhere in the Apertium pipeline (somewhere after lexical selection and before morphological generation) that sets features (e.g. tags) based on a model generated from a corpus. || Sometimes we get really inadequate translations, of a kind you would never hear from a native speaker, for example outputting something as definite when it is never used as definite. One way of dealing with this is a lot of rules and lists in transfer, but those are hard to write and maintain. So, how about looking at a corpus for information about features like definiteness, aspect, evidentiality, or impersonal/reflexive pronoun use in Romance languages? || [[User:Francis Tyers|Francis&nbsp;Tyers]], [[User:Jimregan|Jimregan]]
|-
!style="background-color: #efcdcd"|1.&nbsp;Hard ||colspan=2| || [[/Corpus-based lexicalised feature transfer|read&nbsp;more...]] ([[Google_Summer_of_Code/Report_2012|2012 project]])
|-
|}
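For the fuzzy-match repair row above, a minimal sketch of the idea using <code>difflib</code>; <code>mt_translate</code> stands in for a call to Apertium, and the notion of "safe" here (the old fragment's translation must occur verbatim in the memory's target side) is a deliberately naive assumption:

<pre>
import difflib

def best_match(source, memory):
    """memory: list of (src, tgt) pairs; returns (ratio, src, tgt) of the closest."""
    return max((difflib.SequenceMatcher(None, source, src).ratio(), src, tgt)
               for src, tgt in memory)

def repair(source, memory, mt_translate, threshold=0.8):
    ratio, tm_src, tm_tgt = best_match(source, memory)
    if ratio == 1.0:
        return tm_tgt                    # exact match: reuse the memory as-is
    if ratio < threshold:
        return mt_translate(source)      # too different: plain machine translation
    src_words, tm_words = source.split(), tm_src.split()
    out = tm_tgt
    for op, i1, i2, j1, j2 in difflib.SequenceMatcher(
            None, tm_words, src_words).get_opcodes():
        if op == "equal":
            continue
        old = " ".join(tm_words[i1:i2])
        new = " ".join(src_words[j1:j2])
        # Naively "safe": only patch if the old fragment's translation
        # occurs verbatim in the memory's target side.
        if old and mt_translate(old) in out:
            out = out.replace(mt_translate(old), mt_translate(new))
        else:
            return mt_translate(source)  # cannot patch safely: fall back
    return out
</pre>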

== 2020 ideas ==

=== Language Ideas ===

These are ideas that involve working with particular languages.

<!-- See https://github.com/apertium/apertium-anaphora
{{IdeaSummary
| name = Anaphora resolution for machine translation
| difficulty = hard
| skills = C++, XML, Python
| description = Write a program to resolve anaphora and include it in the Apertium translation pipeline.
| rationale = Apertium has a problem with long distance dependencies in terms of agreement and co-reference. For example, deciding which determiner to use when translating from Spanish "su" to English "his, her, its". The objective of this task is to make a system to resolve anaphora and integrate it into a translation pipeline.
| mentors = [[User:Francis Tyers|Francis Tyers]]
| more = /Anaphora resolution
}}
-->

{{IdeaSummary
| name = Bring a released language pair up to state-of-the-art quality
| difficulty = medium
| skills = XML, a scripting language (Python, Perl), good knowledge of the language pair adopted.
| description = Take a released language pair, and drastically improve the performance both in terms of coverage and in terms of translation quality. This will involve working with dictionaries, transfer rules, scripting, and corpora. The objective is to make an Apertium language pair state-of-the-art, or close to state-of-the-art, in terms of translation quality. This will involve improving coverage to 95-98% on a range of corpora and decreasing [[word error rate]] by 30-50%. For example, if the current word error rate is 30%, then it should be reduced to 15-20%. (A minimal WER implementation is sketched after this summary.)
| rationale = Apertium has quite a broad coverage of language pairs, but few of these pairs offer state-of-the-art translation quality. We think broad is important, but deep coverage is important too.
| mentors = [[User:Francis Tyers|Francis Tyers]], [[User:Mlforcada|Mikel Forcada]], [[User:Xavivars|Xavi Ivars]], [[User:Ilnar.salimzyan|Ilnar Salimzianov]]
| more = /Make a language pair state-of-the-art
}}
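For reference, [[word error rate]] as used above is just word-level Levenshtein distance normalised by the reference length; a minimal sketch:

<pre>
def wer(hypothesis, reference):
    """Word error rate: word-level edit distance / reference length."""
    h, r = hypothesis.split(), reference.split()
    # dp[i][j] = edits to turn the first i words of h into the first j words of r
    dp = [[0] * (len(r) + 1) for _ in range(len(h) + 1)]
    for i in range(len(h) + 1):
        dp[i][0] = i
    for j in range(len(r) + 1):
        dp[0][j] = j
    for i in range(1, len(h) + 1):
        for j in range(1, len(r) + 1):
            cost = 0 if h[i - 1] == r[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution
    return dp[len(h)][len(r)] / len(r)

print(wer("the cat sat on mat", "the cat sat on the mat"))  # 1/6, about 0.17
</pre>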

{{IdeaSummary
| name = Adopt an unreleased language pair
| difficulty = easy
| skills = XML, a scripting language (Python, Perl), good knowledge of the language pair adopted.
| description = Take on an orphaned unreleased language pair, and bring it up to release quality results. What this quality will be will depend on the language pair adopted, and will need to be discussed with the prospective mentor. This will involve writing linguistic data (including morphological rules and transfer rules &mdash; which are specified in a declarative language &mdash; and possibly [[Constraint Grammar]] rules if that is relevant)
| rationale = Apertium has a few pairs of languages (e.g. mt-he, ga-gd, ur-hi, pl-cs, sh-ru, etc.) that are orphaned: they don't have active maintainers. A lot of these pairs already have a lot of work put in, and just need another few months to get them to release quality. See also [[Incubator]]
| mentors = [[User:Francis Tyers|Francis Tyers]], [[User:Jimregan|Jimregan]], [[User:Kevin Scannell|Kevin Scannell]], [[User:Trondtr|Trondtr]], [[User:Unhammer|Unhammer]], [[User:Darthxaher|Darthxaher]], [[User:Firespeaker|Firespeaker]], [[User:Hectoralos|Hectoralos]], [[User:Krvoje|Hrvoje Peradin]], [[User:Jacob Nordfalk|Jacob Nordfalk]], [[User:Mlforcada|Mikel Forcada]], [[User:Vin-ivar|Vinit Ravishankar]], [[User:Aida|Aida Sundetova]], [[User:Xavivars|Xavi Ivars]], [[User:Ilnar.salimzyan|Ilnar Salimzianov]], [[User:Sevilay bayatlı|Sevilay Bayatlı]]
| more = /Adopt a language pair
}}

{{IdeaSummary
| name = apertium-separable language-pair integration
| difficulty = Medium
| skills = XML, a scripting language (Python, Perl), some knowledge of linguistics and/or at least one relevant natural language
| description = Choose a language you can identify as having a good number of "multiwords" in the lexicon. Modify all language pairs in Apertium to use the [[Apertium-separable]] module to process the multiwords, and clean up the dictionaries accordingly.
| rationale = Apertium-separable is a newly developed module to process lexical items with discontiguous dependencies, an area where Apertium has traditionally fallen short. Despite all the module has to offer, it has only been put to use in small test cases, and hasn't been integrated into any translation pair's development cycle.
| mentors = [[User:Francis Tyers|Francis Tyers]], [[User:Firespeaker]]
| more = /Apertium separable
}}

{{IdeaSummary
| name = Bring Apertium Occitan--French closer to posteditable quality.
| difficulty = medium
| skills = GNU/Linux advanced user, bash, git, XML editing, standard Occitan, French.
| description = The idea is to make Occitan output easier to postedit and French output easier to understand. This entails increasing the monolingual and bilingual dictionaries, improving disambiguation, and writing new structural transfer rules.
| rationale = The [https://github.com/apertium/apertium-oci-fra Occitan--French language pair] has been recently published. This language pair is of strategic importance for the Occitan language, as Apertium offers the only machine translation system for this language pair.
| mentors = [[User:Mlforcada|Mikel Forcada]], mentors welcome.
| more = /Apertium_Occitan_French
}}

{{IdeaSummary
| name = Create a usable version of one of these language pairs: English--Igbo, English--Yoruba, English--Tigrinya, English--Swahili, English--Hausa
| difficulty = medium
| skills = GNU/Linux advanced user, bash, git, XML editing, English, Igbo/Yoruba/Tigrinya/Swahili/Hausa
| description = The objective is to start one of these language pairs (which either haven't been started or currently have very little data in Apertium) and write a usable version which provides intelligible output.
| rationale = African languages are not particularly well served by Apertium. The five languages listed are quite important, and are currently served only by commercial machine translation companies such as Google, which makes these language communities dependent on a specific commercial provider.
| mentors = [[User:Mlforcada|Mikel Forcada]], mentors welcome.
| more = /Apertium_African
}}

=== Module/Pipeline Ideas ===

These are ideas for modifying things in the translation pipeline.

{{IdeaSummary
| name = Robust tokenisation in lttoolbox
| difficulty = Medium
| skills = C++, XML, Python
| description = Improve the longest-match left-to-right tokenisation strategy in [[lttoolbox]] to be fully Unicode compliant.
| rationale = One of the most frustrating things about working with Apertium on texts "in the wild" is the way that tokenisation works. If a letter is not specified in the alphabet, it is treated as whitespace, so unknown words get split apart and you can end up with output like ^G$ö^k$ı^rmak$, which is terrible for further processing. (A sketch of the intended behaviour follows this summary.)
| mentors = [[User:Francis Tyers|Francis Tyers]], [[User:TommiPirinen|Flammie]]
| more = /Robust tokenisation
}}
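A sketch of the intended behaviour: any Unicode letter counts as word material, so an out-of-alphabet letter no longer splits a word. The <code>*</code>-marking of unknown words follows Apertium convention; the trie-free lexicon lookup is a simplification:

<pre>
import unicodedata

def is_letter(ch):
    # Any Unicode letter is word material, not just letters in a fixed alphabet.
    return unicodedata.category(ch).startswith("L")

def tokenise(text, lexicon):
    """Longest-match left-to-right over `lexicon`; unknown letter runs stay whole."""
    tokens, i = [], 0
    while i < len(text):
        if not is_letter(text[i]):
            tokens.append(text[i])
            i += 1
            continue
        end = i
        while end < len(text) and is_letter(text[end]):
            end += 1                       # full run of letters starting at i
        match = None
        for j in range(end, i, -1):        # longest known word first
            if text[i:j] in lexicon:
                match = j
                break
        if match:
            tokens.append(text[i:match])
            i = match
        else:
            tokens.append("*" + text[i:end])  # unknown word, kept intact
            i = end
    return tokens

print(tokenise("Gökırmak akar", {"akar"}))
# ['*Gökırmak', ' ', 'akar'] - the unknown word is not split apart
</pre>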

{{IdeaSummary
| name = Extend lttoolbox to have the power of HFST
| difficulty = Hard
| skills = C++, XSLT, XML
| description = Extend lttoolbox (perhaps writing a preprocessor for it) so that it can be used to do the morphological transformations currently done with HFST. And yes, of course, writing something that translates the current HFST format to the new lttoolbox format. Proof of concept: come up with a new format that can express all of the features found in the Kazakh transducer; implement this format in Apertium; implement the Kazakh transducer in this format and integrate it in the English--Kazakh pair.
| rationale = Some language pairs in Apertium use HFST where most language pairs use Apertium's own lttoolbox. This is due to the fact that writing morphologies for languages that have features such as the vowel harmony found in Turkic languages is very hard with the current format supported by lttoolbox. The mixture of HFST and lttoolbox makes it harder for people to develop some language pairs.
| mentors = [[User:Mlforcada|Mikel Forcada]], [[User:TommiPirinen|Tommi A Pirinen]], [[User:Unhammer]], mentors wanted
| more = /Extend lttoolbox to have the power of HFST
}}

<!--

DANGER TERROR HORROR !!!!!!

The task above has subsumed these two

{{IdeaSummary
| name = Flag diacritics in lttoolbox
| difficulty = Hard
| skills = C++ or Java, XML, Knowledge of FSTs
| description = Adapt [[lttoolbox]] to elegantly use flag diacritics. Flag diacritics are a way of avoiding transducer size blow-up by discarding impossible paths at runtime as opposed to compile time. Some work has already been done, see [[Flag diacritics]].
| rationale = This will involve designing some changes to our XML dictionary format (see [[lttoolbox]], and implementing the associated changes in the FST compiling processing code. The reason behind this is that many languages have prefix inflection, and we cannot currently deal with this without either making paradigms useless, or overanalysing (e.g. returning analyses where none exist). Flag diacritics (or constraints) would allow us to restrict overanalysis without blowing up the size of our dictionaries.
| mentors = [[User:Francis Tyers|Francis Tyers]] (C++), [[User:Jacob Nordfalk|Jacob Nordfalk]] (Java)
| more = /Flag diacritics in lttoolbox
}}

{{IdeaSummary
| name = Weights in lttoolbox
| difficulty = Medium
| skills = C++, XML, FSTs
| description = [[lttoolbox]] is a set of tools for building finite-state transducers. As part of Apertium's long-term strategy we would like to include probabilistic information into more stages of the pipeline to allow generic tools to be optimised for machine translation. This task involves adding the possibility of weighting lexemes and analyses in our finite-state transducer toolbox.
| rationale = Weighting information for lexical forms will be useful for morphological disambiguation, and for work on [[spellchecking]].
| mentors = [[User:Francis Tyers|Francis Tyers]], [[User:TommiPirinen|Tommi Pirinen]]
| more = /Add weights to lttoolbox
}}
-->

{{IdeaSummary
| name = Extend weighted transfer rules
| difficulty = Hard
| skills = Python, C++, linguistics
| description = The purpose of this task is to extend weighted transfer rules to all transfer files and to allow conflicting rule patterns to be handled by combining (lexicalised) weights.
| rationale = Currently our transfer rules are applied longest-match left-to-right ([[LRLM]]). When two rule patterns conflict, the first one is chosen. We have a prototype for selecting based on lexicalised weights, but it only applies to the first stage of transfer. (A sketch of weight-based rule selection follows this summary.)
| mentors = [[User:Francis Tyers|Francis Tyers]], [[User:TommiPirinen|Tommi Pirinen]], [[User:Sevilay Bayatlı|Sevilay Bayatlı]]
| more = /Weighted transfer rules
}}
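A sketch of weight-based selection among conflicting rule patterns; the rule names and the weights table are illustrative, and a per-rule default weight stands in for proper smoothing:

<pre>
def score(rule, words, weights):
    """Sum lexicalised weights for applying `rule` to `words`;
    fall back to a per-rule default for unseen (rule, word) pairs."""
    return sum(weights.get((rule, w), weights.get((rule, None), 0.0))
               for w in words)

def choose_rule(candidates, words, weights):
    """candidates: rules whose patterns all match at this position.
    Classic LRLM would just take candidates[0]."""
    return max(candidates, key=lambda r: score(r, words, weights))

weights = {
    ("nom-adj", "casa"): 1.2,   # illustrative lexicalised weights
    ("nom-adj", None): 0.1,     # per-rule default
    ("nom-nom", "casa"): 0.3,
    ("nom-nom", None): 0.2,
}
print(choose_rule(["nom-adj", "nom-nom"], ["casa", "blanca"], weights))
# nom-adj (score 1.3 vs 0.5)
</pre>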

{{IdeaSummary
| name = Light alternative format for all XML files in an Apertium language pair
| difficulty = Hard
| skills = Python, C++, shell scripting, XSLT, flex
| description = Make it possible to edit and develop language data using a format that is lighter than XML
| rationale = In most Apertium language pairs, monolingual dictionaries, bilingual dictionaries, post-generation rule files and structural transfer rule files are all written in XML. While XML is easy to process due to explicit tagging of every element, it is tedious to deal with, particularly when it comes to structural transfer rules. Apertium's precursor, interNOSTRUM, had lighter text-based formats. The task involves: (a) designing and documenting an interNOSTRUM-style format for all of the XML language data files in a language pair; (b) writing converters to XML and from XML that are fully roundtrip-compliant; (c) designing a way to synchronize changes when both the XML and the non-XML format are used simultaneously in a specific language pair.
| mentors = [[User:Mlforcada|Mikel Forcada]], [[User:Japerez|Juan Antonio Pérez]]
| more = /Plain-text_formats_for_Apertium_data
}}

{{IdeaSummary
| name = Eliminate dictionary trimming
| difficulty = Very Hard
| skills = C++, Finite-State Transducers
| description = Eliminate the need for trimming the monolingual dictionaries, in order to preserve and take advantage of maximal source language analysis.
| rationale = [[Why we trim]] mentions several technical reasons for why trimming away monolingual information is currently needed. Unfortunately, this limitation means that a lot of useful contextual information is lost. It would be ideal if the source language could be fully analyzed independent of target language, with any untranslated part fed back into the source language generator.
* '''Work around everything in [[Why we trim]]'''
| mentors = [[User:TommiPirinen|Flammie]], +1 '''You need to find at least one more mentor to apply for this task'''
| more = /Eliminate trimming
}}

<!--
{{IdeaSummary
| name = Add weights to lttoolbox
| difficulty = Hard
| skills = c++
| description = Add support for weighted transducers to lttoolbox
| rationale = This will either involve implementing it from scratch or adding OpenFST as a backend. We would like to be able to use it both in the bilingual dictionaries, and in the morphological analysers, to be able to order analyses/translations by their probability/weight instead of by the random topological order.
| mentors = [[User:Francis Tyers]] [[User:Unhammer]]
| more = /Add weights to lttoolbox
}}
-->

{{IdeaSummary
| name = Create FST-based module for disambiguating
| difficulty = medium
| skills = XML, a scripting language (Python, Perl), C++, finite-state transducers
| description = Implement a [[Constraint Grammar]]-like module based on finite-state transducers.
| rationale = Currently, many language pairs use [[Constraint grammar]] as a pre-disambiguator for the Apertium tagger, allowing the imposition of more fine-grained constraints than would be otherwise possible. However, the current implementation of CG is much slower than most of the other modules in the Apertium pipeline, and it's also very different in terms of syntax from other Apertium modules (dictionaries, lexical selection, transfer rules, etc.). There have been a few attempts to create FST versions of CG (see [[User:David_Nemeskey/GSOC_progress_2013]]), but they haven't succeeded. The hypothesis is that a simpler version of CG that supports its main features (no need for feature parity) would see better adoption and integration within the Apertium pipeline.
| mentors = [[User:Xavivars|Xavi Ivars]], [[User:Francis Tyers|Francis Tyers]]
| more = /Apertium FST GC
}}

{{IdeaSummary
| name = Learning distributed representations for Apertium modules
| difficulty = hard
| skills = Python, neural networks
| description =
| rationale =
| mentors = [[User:Francis Tyers|Francis Tyers]]
| more = /Distributed representations and Apertium
}}

=== Tool Ideas ===

These are ideas for creating tools to help build modules and pairs.

{{IdeaSummary
| name = User-friendly lexical selection training
| difficulty = Medium
| skills = Python, C++, shell scripting
| description = Make it so that training/inference of lexical selection rules is a more user-friendly process
| rationale = Our lexical selection module allows for inferring rules from corpora and word alignments, but the procedure is currently a bit messy, with various scripts involved that require lots of manual tweaking, and many third party tools to be installed. The goal of this task is to make the procedure as user-friendly as possible, so that ideally only a simple config file would be needed, and a driver script would take care of the rest.
| mentors = [[User:Unhammer|Unhammer]], [[User:Mlforcada|Mikel Forcada]]
| more = /User-friendly lexical selection training
}}

{{IdeaSummary
| name = Bilingual dictionary enrichment via graph completion
| difficulty = Very Hard
| skills = shell scripting, python, XSLT, XML
| description = Generate new entries for existing or new bilingual dictionaries using graph representations of the bilingual correspondences found in all existing dictionaries (note that this idea defines a rather open-ended task, to be discussed in detail with mentors). A sketch of the basic pivot idea follows this summary.
| rationale = Apertium bilingual dictionaries establish correspondences between lexical forms in a number of language pairs. Connections among them may be used to infer new entries for existing or new language pairs using graphs. The graphs may be generated directly from Apertium bidixes, exploiting [[Bilingual_dictionary_discovery|ideas that have already been proposed in Apertium]], or from existing [http://linguistic.linkeddata.es/apertium/ RDF representations] of parts of their content, which may benefit from being linked to other resources. Some previous progress can be found [[Bilingual_dictionary_enrichment_via_graph_completion|here]].
| mentors = [[User:Mlforcada|Mikel Forcada]], [[User:Francis Tyers|Francis Tyers]], [[User:Jorge Gracia|Jorge Gracia]]
| more = Bilingual_dictionary_discovery
}}
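A sketch of the simplest graph-completion idea: treat words as nodes and bidix entries as edges, and propose a new pair when at least two distinct pivot words connect two unconnected words of different languages. The data is illustrative:

<pre>
from collections import defaultdict
from itertools import combinations

def infer_pairs(bidixes, min_pivots=2):
    """bidixes: dict mapping (langA, langB) -> set of (wordA, wordB) pairs.
    Proposes new pairs supported by at least `min_pivots` pivot words."""
    graph = defaultdict(set)                  # node -> its translation nodes
    for (la, lb), pairs in bidixes.items():
        for wa, wb in pairs:
            graph[(la, wa)].add((lb, wb))
            graph[(lb, wb)].add((la, wa))
    support = defaultdict(set)
    for pivot, neighbours in graph.items():
        for u, v in combinations(sorted(neighbours), 2):
            if u[0] != v[0] and v not in graph[u]:  # different langs, no edge yet
                support[(u, v)].add(pivot)
    return {pair: pivots for pair, pivots in support.items()
            if len(pivots) >= min_pivots}

bidixes = {
    ("en", "es"): {("dog", "perro"), ("hound", "perro")},
    ("es", "ca"): {("perro", "gos")},
    ("en", "fr"): {("dog", "chien")},
    ("fr", "ca"): {("chien", "gos")},
}
print(infer_pairs(bidixes))
# proposes dog-gos (via pivots perro, chien) and perro-chien (via dog, gos)
</pre>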

<!--
{{IdeaSummary
| name = Transfer rule induction from comparable parsed corpora
| difficulty = Hard
| skills = shell scripting, python, XSLT, XML
| description = A system to infer transfer rules from comparable corpora that have both been deeply parsed (with e.g. CG)
| rationale = Many languages have good CG's and fairly large monolingual corpora, but little parallel material. Given a small bidix, fairly large monolingual corpora and good analysers/CG's, we should be able to parse both corpora, translate lemmas and look for similar sentences, turning the differences in their parses into transfer rules.
| mentors = [[User:Unhammer]]
| more = Transfer_induction_from_comparable_parsed_corpora
}}
-->

{{IdeaSummary
| name = A web interface for expanding dictionary lemmas, integrated with GitLab/GitHub
| difficulty = hard
| skills = Java, git, XML, Node.js, Angular 8
| description = Given that Apertium has a few dozen contributors and thousands of users, we propose a web graphical user interface (GUI) that enables lay users to contribute to the expansion of the dictionaries that make up the knowledge base of Apertium.
The main premise of the solution is that users with minimal knowledge of the language can contribute easily, and that it must integrate with the current workflow of expert users. Some prior progress can be found [https://web-dix-maintenance.appspot.com/ here].
| rationale =
| mentors = [[User:Alessio|Aléssio Jr.]], mentors welcome.
| more = /Easy_dictionary_maintenance
}}

{{IdeaSummary
| name = Dictionary lookup with editing
| difficulty = hard
| skills = XML, git, JavaScript, any language for backend (Python?)
| description = A bilingual dictionary (the kind for people, not a bidix) contains various kinds of information ([http://perseus.uchicago.edu/cgi-bin/philologic/getobject.pl?c.17:3:39.LSJ example here]). Possible things to find in such a dictionary include inflected forms, translations, and phrases that the word might occur in. It should be possible to extract this information from various files within a translation pair. Within the interface, users could make changes to that information which can then be automatically converted to a pull request on Github. Some prior efforts at various kinds of dictionary lookup have been attempted [https://github.com/apertium/apertium-html-tools/issues/105 here].
| rationale = Dictionary lookup is something that would be useful to a lot of users and fixing bilingual dictionary entries is something new people frequently want to do.
| mentors = [[User:Popcorndude|Popcorndude]], mentors welcome
| more = /Bidix_lookup_and_maintenance
}}

{{IdeaSummary
| name = Extract morphological data from FLEx
| difficulty = hard
| skills = python, XML parsing
| description = Write a program to extract data from [https://software.sil.org/fieldworks/ SIL FieldWorks] and convert as much as possible to monodix (and maybe bidix).
| rationale = There's a lot of potentially useful data in FieldWorks files that might be enough to build a whole monodix for some languages but it's currently really hard to use
| mentors = [[User:Popcorndude|Popcorndude]], [[User:TommiPirinen|Flammie]]
| more = /FieldWorks_data_extraction
}}

=== Integration Ideas ===

These are ideas for making Apertium more useful in other places.

{{IdeaSummary
| name = Improvements to the Apertium website
| difficulty = Entry level
| skills = Python, HTML, JS
| description = Our web site is pretty cool already, but it's missing things like dictionary/synonym lookup, support for several variants of one language, reliability visualisation, (reliable) webpage translation, feedback, etc.
| rationale = [https://apertium.org https://apertium.org] / [http://beta.apertium.org http://beta.apertium.org] is what most people know us by; it should show off more of the things we are capable of :-)
| mentors = [[User:Firespeaker|Jonathan]], [[User:Sushain|Sushain]]
| more = /Apertium website improvements
}}

{{IdeaSummary
| name = UD and Apertium integration
| difficulty = Entry level
| skills = python, javascript, HTML, (C++)
| description = Create a range of tools for making Apertium compatible with Universal Dependencies
| rationale = Universal Dependencies is a fast-growing project aimed at creating a unified annotation scheme for treebanks. This includes both part-of-speech and morphological features. Their annotated corpora could be extremely useful for Apertium for training models for translation. In addition, Apertium's rule-based morphological descriptions could be useful for software that relies on Universal Dependencies. (A conversion sketch follows this summary.)
| mentors = [[User:Francis Tyers]] [[User:Firespeaker]]
| more = /UD and Apertium integration
}}
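One such tool might convert Apertium analyses to CoNLL-U rows; in this sketch the tag mapping is illustrative and incomplete, and FEATS are left as raw Apertium tags rather than proper Feature=Value pairs:

<pre>
TAG_MAP = {"n": "NOUN", "vblex": "VERB", "adj": "ADJ", "det": "DET"}  # illustrative

def lu_to_conllu(index, unit):
    """unit: 'surface/lemma<tag1><tag2>' as produced by an Apertium analyser."""
    surface, analysis = unit.split("/", 1)
    lemma, _, tagstr = analysis.partition("<")
    tags = tagstr.rstrip(">").split("><") if tagstr else []
    upos = TAG_MAP.get(tags[0], "X") if tags else "X"
    feats = "|".join(tags[1:]) or "_"
    # Columns: ID FORM LEMMA UPOS XPOS FEATS HEAD DEPREL DEPS MISC
    return "\t".join([str(index), surface, lemma, upos, "_", feats,
                      "_", "_", "_", "_"])

print(lu_to_conllu(1, "dogs/dog<n><pl>"))
# 1	dogs	dog	NOUN	_	pl	_	_	_	_
</pre>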

{{IdeaSummary
| name = Improving language pairs mining Mediawiki Content Translation postedits
| difficulty = Hard
| skills = Python, shell scripting, some statistics
| description = Implement a toolkit that allows mining existing machine translation postediting data in [https://www.mediawiki.org/wiki/Content_translation Mediawiki Content Translation] to generate (as automatically as possible, and as complete as possible) monodix and bidix entries to improve the performance of an Apertium language pair. Data is available from Wikimedia Content Translation through an [https://www.mediawiki.org/wiki/Content_translation/Published_translations#API API] or in the form of [https://dumps.wikimedia.org/other/contenttranslation/ dumps] available in JSON and TMX format. This project is rather experimental and involves some research in addition to coding. (A sketch of TMX mining follows this summary.)
| rationale = Apertium is used to generate new Wikipedia content: machine-translated content is postedited (and perhaps adapted) before publishing. Postediting information may contain information that can be used to help improve the lexical components of an Apertium language pair.
| mentors = [[User:Mlforcada|Mikel Forcada]], (more mentors to be added)
| more = /automatic-postediting
}}
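A sketch of the mining step: pull aligned segment pairs out of a TMX dump with the standard-library XML parser. The <code>tu</code>/<code>tuv</code>/<code>seg</code> element names come from the TMX standard, but the language codes and the word-level diff heuristic are illustrative:

<pre>
import xml.etree.ElementTree as ET

XML_LANG = "{http://www.w3.org/XML/1998/namespace}lang"

def read_tmx_pairs(path, src_lang="es", tgt_lang="ca"):
    """Yield (source, postedited) segment pairs from a TMX file."""
    tree = ET.parse(path)
    for tu in tree.iter("tu"):
        segs = {}
        for tuv in tu.iter("tuv"):
            lang, seg = tuv.get(XML_LANG), tuv.find("seg")
            if lang and seg is not None and seg.text:
                segs[lang.lower()] = seg.text
        if src_lang in segs and tgt_lang in segs:
            yield segs[src_lang], segs[tgt_lang]

def diff_words(mt_output, postedited):
    """Words the posteditor introduced: candidates for new dictionary entries."""
    return set(postedited.split()) - set(mt_output.split())
</pre>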

{{IdeaSummary
| name = Improvements to UD Annotatrix
| difficulty = Medium
| skills = JavaScript, jQuery, HTML, Python
| description = UD Annotatrix is an interface by Apertium for annotating dependency trees in CoNLL-U format. The system is currently in beta, but is getting traction as more people start using it.
| rationale = Universal Dependencies is a very widely used standard for annotating data, the kind of annotated data that can be used to train part of speech taggers. There is still a lot of work that could be done to improve it.
| mentors = [[User:Francis Tyers|Francis Tyers]], [[User:Firespeaker]]
| more = /UD Annotatrix
}}

<!-- done as of 2019 I think?
{{IdeaSummary
| name = Python API/library for Apertium
| difficulty = medium
| skills = Python, C++, SWIG
| description = Implement a Python library for Apertium and Lttoolbox.
| rationale = Lots of people use Python, they like to use it in their Jupyter notebooks and on Microsoft Windows™. Apertium is really hard to get going in these kind of environments. So it would be cool if we could make Apertium work for them too. It has a lot of nice language processing tools that we would like more people to use. I'm sure people would love to "pip install apertium" or "pip install apertium-ava". The API/implementation should be pythonistic and should use C++ bindings to directly perform morphological functions, avoiding the overhead of a separate process. Prior GSoC project work is available on [https://github.com/apertium/apertium-python GitHub].
| mentors = [[User:Sushain|Sushain]], [[User:Francis Tyers|Francis Tyers]], [[User:Unhammer|Unhammer]], [[User:Xavivars|Xavi Ivars]]
| more = /Python library
}}
-->

{{IdeaSummary
| name = TIPP functionality for Apertium
| difficulty = medium
| skills = Python, C++, XML, XLIFF (a subset called XLIFF:doc)
| description = TIPP, the TMS Interoperability Protocol Package (where TMS means translation management system), [https://github.com/tingley/interoperability-now/blob/master/releases/tipp/1.5/The_TMS_Interoperability_Protocol_Package-1.5.pdf], currently in version 1.5 but being upgraded to 2.0, specifies a container (package format) that allows the interchange of information along a translation value chain. There are various container varieties for different tasks. One such variety, called Translate-Strict-Bitext, represents a bilingual translation job. The 'request' TIPP would contain an XLIFF:doc ([https://github.com/tingley/interoperability-now/tree/master/releases/xliffdoc/1.0.1], a subset of XLIFF 1 [http://docs.oasis-open.org/xliff/v1.2/os/xliff-core.html]) file with the document to be translated and the corresponding metadata, and the corresponding 'response' TIPP would contain the results of Apertium MT applied to it, but taking into account the translation memory provided in the TIPP, if any (using Apertium's -m switch). Apertium should be endowed with the capacity to manage TIPP packages: unpack the request package, parse it, process it, and repack it.
| mentors = [[User:Mlforcada|Mikel Forcada]]
| more = /Apertium_TIPP
}}

{{IdeaSummary
| name = Set up gap-filling machine-translation-for-gisting evaluation on a recent version of Appraise
| difficulty = hard
| skills = python, bash, git, XML editing.
| description = [http://github.com/mlforcada/Appraise http://github.com/mlforcada/Appraise], the work of a GSoC student, contains an adaptation of an old (2014) version of [http://github.com/cfedermann/Appraise http://github.com/cfedermann/Appraise] to implement gap-filling evaluation as described in [https://export.arxiv.org/pdf/1809.00315 this WMT2018 paper]. The objective is to bring the gap-filling functionality in [http://github.com/mlforcada/Appraise] up to date with the latest versions of Appraise. (A sketch of gap-filling item generation follows this summary.)
| rationale = Many language pairs in Apertium are unique, such as Breton-French, and many of them are used for gisting (understanding) purposes. Gap-filling provides a simple way to evaluate the usefulness of machine translation for gisting. The implementation used in recent experiments is based on an outdated version of the Appraise platform.
| mentors = [[User:Mlforcada|Mikel Forcada]], mentors welcome.
| more = /Appraise_gisting
}}
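A sketch of gap-filling item generation from a (source, MT output, reference) triple; the content-word test is a placeholder for a real POS-based check:

<pre>
import random

def is_content_word(word):
    # Placeholder: real use would check POS tags from an analyser.
    return len(word) > 3

def make_gap_item(source, mt_output, reference, n_gaps=2, show_mt=True, seed=0):
    """Build one gap-filling evaluation item."""
    rng = random.Random(seed)
    words = reference.split()
    candidates = [i for i, w in enumerate(words) if is_content_word(w)]
    gaps = set(rng.sample(candidates, min(n_gaps, len(candidates))))
    gapped = [("____" if i in gaps else w) for i, w in enumerate(words)]
    return {
        "source": source,
        "hint": mt_output if show_mt else None,   # the MT "help" being evaluated
        "gapped_reference": " ".join(gapped),
        "answers": {i: words[i] for i in gaps},
    }
</pre>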


So, was your organization a part of the Google Summer of Code last year too?

:Nope, but we're hoping to be included this year -- Francis Tyers 02:45, 16 March 2008 (UTC)

== From old Projects page ==

Writing extensions to Apertium could be the ideal undergraduate (major) project. Here are some suggestions, along with brief outlines of how you might go about starting them.

=== A word compounder for Germanic languages ===

Most Germanic languages have compound words. We can analyse the compounds using LRLM (see [[Agglutination and compounds]]), but we cannot generate them without having them in the dictionary (a laborious task). The idea of this project is to create a post-generation module that takes a series of words, e.g. in Afrikaans:

 vlote bestorming fase
 naval assault    phase

and turn them into compounds:

 vlootbestormingfase
 naval+assault+phase

We don't want to compound all words, but it might be a good idea to compound those which have been seen before. There are many large wordlists of compound words that could be used for this. Of course, if a compound isn't found, maybe some kind of heuristic could be used. Probably we'd only want to compound where words are >= 5 characters long. A sketch of this heuristic follows.
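A sketch of that heuristic, assuming a wordlist of known compounds; note it ignores the linking-form change (''vlote'' → ''vloot'') that a real module would also have to model:

<pre>
def compound(words, known_compounds, min_len=5):
    """Greedily join adjacent words whose concatenation is a known compound.
    The length heuristic gates whether we try to compound at all."""
    out, i = [], 0
    while i < len(words):
        j = i
        if len(words[i]) >= min_len:
            # extend the compound as far as the wordlist allows
            while j + 1 < len(words) and "".join(words[i:j + 2]) in known_compounds:
                j += 1
        out.append("".join(words[i:j + 1]))
        i = j + 1
    return out

known = {"vlootbestorming", "vlootbestormingfase"}
print(compound(["vloot", "bestorming", "fase"], known))
# ['vlootbestormingfase']
</pre>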

=== Automatic accent and diacritic insertion ===

One of the problems in machine translating text in real-time chat environments (and generally) is the lack of accents or diacritic marks. This makes machine translation hard, because without the acute accent, ''traduccion'' is an unknown word.

There is a need for a module for Apertium which would automatically restore the accents/diacritics on unaccented words; a sketch of a simple word-level baseline follows.
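A sketch of the simplest word-level baseline: map each de-accented form to its most frequent accented form in a corpus (the letter-level learners in the references below do considerably better):

<pre>
import unicodedata
from collections import Counter, defaultdict

def strip_diacritics(word):
    return "".join(c for c in unicodedata.normalize("NFD", word)
                   if unicodedata.category(c) != "Mn")

def build_table(corpus_words):
    """Map de-accented form -> most frequent accented form in the corpus."""
    counts = defaultdict(Counter)
    for w in corpus_words:
        counts[strip_diacritics(w)][w] += 1
    return {bare: c.most_common(1)[0][0] for bare, c in counts.items()}

def restore(text, table):
    return " ".join(table.get(w, w) for w in text.split())

table = build_table(["traducción", "traducción", "nación"])
print(restore("la traduccion", table))   # la traducción
</pre>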

'''References'''
* Simard, Michel (1998). "Automatic Insertion of Accents in French Texts". Proceedings of EMNLP-3. Granada, Spain.
* Rada F. Mihalcea (2002). "Diacritics Restoration: Learning from Letters versus Learning from Words". ''Lecture Notes in Computer Science'' 2276/2002, pp. 96–113.


Old ideas[edit]

Task Difficulty Description Rationale Requirements Interested
mentors
Porting read more... 4. Entry level Port Apertium to Windows complete with nice installers and all that jazz. Apertium currently compiles on Windows (see Apertium on Windows) While we all might use GNU/Linux, there are a lot of people out there who don't, some of them use Microsoft's Windows. It would be nice for these people to be able use Apertium too. C++, autotools, experience in programming on Windows.
Tree-based transfer read more... 1. Very hard Create a new XML-based transfer language for tree-based transfer and a prototype implementation, and transfer rules for an existing language pair. Apertium currently works on finite-state chunking, which works well, but is problematic for less-closely related languages and for getting the final few percent in closely-related languages. A tree-based transfer would allow us to work on real syntactic constituents, and probably simplify many existing pairs. There are some existing non-free implementations.[1] [2] XML, Knowledge of parsing, implementation language largely free.
Interfaces 4. Entry level Create plugins or extensions for popular free software applications to include support for translation using Apertium. We'd expect at least Firefox and Evolution (or Thunderbird), but to start with something more easy we have half-finished plugins for Pidgin and XChat that could use some love. The more the better! Further ideas on plugins page Apertium currently runs as a stand alone translator. It would be great if it was integrated in other free software applications. For example so instead of copy/pasting text out of your email, you could just click a button and have it translated in place. This should use a local installation with optional fallback to the webservice. Depends on the application chosen, but probably Java, C, C++, Python or Perl.
Automated lexical
extraction
2. Hard Writing a C++ wrapper around Markus Forsberg's Extract tool (version 2.0) as a library to allow it to be used with Apertium paradigms and TSX files / Constraint grammars as input into its paradigms and constraints. One of the things that takes a lot of time when creating a new language pair is constructing the monodices. The extract tool can greatly reduce the time this takes by matching lemmas to paradigms based on distribution in a corpus. Haskell, C++, XML
Bytecode for transfer 2. Hard Adapt transfer to use bytecode instead of tree walking. Apertium is pretty fast, but it could be faster, and the transfer is dominating the CPU usage. This task would be write a compiler and interpreter for Apertium transfer rules into the format of an an off-the-shelf bytecode engine (e.g. Java, v8, kjs, ...). If Java bytecode was chosen this might eventually make Apertium run on J2ME devices. See also: Bytecode for transfer C++ and for the bytecode Java or Javascript
VM for the transfer module read more... 3. Medium VM for the current transfer architecture of Apertium and for the future transfers, pure C++ Define an instruction set for a virtual machine that processes transfer code, then implement a prototype in Python, then porting to C++. The rationale behind this is that XML tree-walking is quite slow and CPU intensive. In modern (3 or more stage) pairs, transfer takes up most of the CPU. There are other options, like Bytecode for transfer, but we would like something that does not require external libraries and is adapted specifically for Apertium. Python, C/C++, XML, XSLT, code optimisation, JIT techniques, etc. Sortiz
Linguistically-driven bilingual-phrase filtering for inferring transfer rules 3. Medium Re-working apertium-transfer-training-tools to filter the set of bilingual phrases automatically obtained from a word-aligned sentence pair by using linguistic criteria. Apertium-transfer-training-tools is a cool piece of software that generates shallow-transfer rules from aligned parallel corpora. It could greatly speed up the creation of new language pairs by generating rules that would otherwise have to be written by human linguists C++, general knowledge of GIZA++, Perl considered a plus. Jimregan
Context-dependent lexicalised categories for inferring transfer rules 2. Hard Re-working apertium-transfer-training-tools to use context-dependent lexicalised categories in the inference of shallow-transfer rules. Apertium-transfer-training-tools generates shallow-transfer rules from aligned parallel corpora. It uses an small set of lexicalised categories, categories that are usually involved in lexical changes, such as prepositions, pronouns or auxiliary verbs. Lexicalised categories differentiate from the rest of categories because their lemmas are taken into account in the generation of rules. C++, general knowledge of GIZA++, XML. Jimregan
Corpus-assisted dictionary expansion 4. Entry level Semi-automatic bilingual word equivalence retrieval from a bitext and a monolingual word list. Improve an existing Python script to retrieve the best translations (suggestions) of a word (typically an unknown word) given a particular parallel text corpus. Perhaps combine the result with automatic paradigm guessing (also suggestions) to improve the productivity of the lexical work for most contributors Python, C/C++, AWK, Bash, perhaps web interface in PHP, Python, Ruby on Rails Sortiz, Jimregan
Detect 'hidden' unknown words read more... 3. Medium The part-of-speech tagger of Apertium can be modified to work out the likelihood of each 'tag' in a certain context, this can be used to detect missing entries in the dictionary. Apertium dictionaries may have incomplete entries, that is, surface forms (lexical units as they appear in running texts) for which the dictionary does not provide all the possible lexical forms (consisting of lemma, part-of-speech and morphological inflection information). As those surface form for which there is at least one lexical form cannot be considered unknown words, it is difficult to know whether all lexical forms for a given surface form have been included in the dictionaries or not. This feature will detect 'possible' missing lexical forms for those surface forms in the dictionaries. C++ if you plan to modify the part-of-speech tagger; whatever if rewriting it from scratch. Felipe Sánchez-Martínez
Improvements to target-language tagger training read more... 2. Hard Modify apertium-tagger-training-tools so that it can deals with n-stage transfer rules when segmenting the input source-language text, and applies a k-best viterbi pruning approach that does not require to compute the a-priori likelihood of every disambiguation path before pruning. apertium-tagger-training-tools is a program for doing target-language tagger training, meaning it improves POS tagging performance specifically for the translation task, achieving a result for unsupervised training comparable with supervised training. This task would also require switching the default perl-based language model to either IRSTLM or RandLM (or both!). C++, XML, XSLT Felipe Sánchez-Martínez
Hybrid MT
  • Difficulty: 2. Hard
  • Required skills: Knowledge of Java, C++ and scripting languages; an appreciation for research-like coding projects.
  • Description: Build Apertium-Marclator rule-based/corpus-based hybrids.
  • Rationale: Both the rule-based machine translation system Apertium and the corpus-based machine translation system Marclator do some kind of chunking of the input, and both use a relatively straightforward left-to-right machine translation strategy. This has been explored before, but there are other ways to organise hybridisation which should be explored (the mentor is happy to discuss). Hybridisation may make it easier to adapt Apertium to a particular corpus by using chunk pairs derived from it.
  • Mentors: Mlforcada, Jimregan

List


Improve integration of lttoolbox in libvoikko and libhfst
  • Difficulty: 3. Medium
  • Required skills: XML, C++.
  • Description: Dictionaries from lttoolbox can now be used for spellchecking directly with libvoikko (see Spell checking). The idea of this project is to improve the integration: fix bugs, and look at ways of codifying "standard"/"sub-standard" forms in our dictionaries.
  • Rationale: Spell checkers can be useful, even more so for languages other than English. They are one of the "must have" items of language technology. If we can re-use Apertium data for this purpose, it will help both the project (by making the creation of new language pairs more rewarding) and the language communities (by giving them more useful software).
  • Mentors: Francis Tyers
  • read more...

Regular expressions in lt-tmxproc
  • Difficulty: 2. Hard
  • Required skills: C++, knowledge of FSTs
  • Description: Adding regex support to lt-tmxproc would maximise the number of translations we can get from an available TMX.
  • Rationale: lt-tmxproc already includes some limited support for turning translation units in a TMX file into something of a template, but only for digits. Gintrowicz and Jassem describe an interesting idea, using user-definable regular expressions, to turn items such as dates into templates. lttoolbox already has support for a subset of regular expressions; add a mechanism to allow the user to make use of this, and to include these regular expressions in processing.
  • Mentors: Jimregan
  • read more...

Quality control framework
  • Difficulty: 3. Medium
  • Required skills: PHP or Python
  • Description: Write a unified testing framework for released language pairs in Apertium. The system should be able to track both regressions with respect to previous versions and quality checks with respect to previous quality evaluations.
  • Rationale: We are gradually improving our quality control, with (semi-)automated tests, but these are done on the wiki on an ad-hoc basis. Having a unified testing framework would allow us to track quality improvements over all language pairs more easily, and to deal with regressions more easily. See [1]
  • Mentors: Francis Tyers

Tree-based reordering
  • Difficulty: 2. Hard
  • Required skills: XML, C++
  • Description: Create a module that comes before or after apertium-transfer and reorders a dependency graph of words. (A toy sketch follows below.)
  • Rationale: Currently we have a problem with very distantly related languages that have long-distance constituent reordering. Some languages have dependency parsers which create graphs of words; these could be used to drive the reordering.
  • Mentors: Francis Tyers
  • read more...

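A toy sketch of what reordering over a parsed graph could look like, assuming dependency-parsed input in a simple nested-dict format (an assumption for illustration, not an Apertium format): core arguments are linearised before the verb, as a hypothetical verb-final target language would require.

<pre>
# Reorder a dependency subtree for a hypothetical SOV target language:
# core arguments (subject, objects) are emitted before their head verb.
def reorder(node):
    """node: {"form": str, "deprel": str, "children": [node, ...]}.
    Returns the subtree as a flat word list."""
    pre, post = [], []
    for child in node["children"]:
        if child["deprel"] in ("nsubj", "obj", "iobj"):
            pre.append(child)
        else:
            post.append(child)
    words = [w for c in pre for w in reorder(c)]
    words.append(node["form"])
    words += [w for c in post for w in reorder(c)]
    return words

tree = {"form": "saw", "deprel": "root", "children": [
    {"form": "I", "deprel": "nsubj", "children": []},
    {"form": "her", "deprel": "obj", "children": []}]}
print(" ".join(reorder(tree)))  # "I her saw"
</pre>
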
Dictionary induction from wikis
  • Difficulty: 3. Medium
  • Required skills: MySQL, MediaWiki syntax, Perl, maybe C++ or Java; Java, Scala, RDF and DBpedia for the DBpedia extraction.
  • Description: Extract dictionaries from linguistic wikis.
  • Rationale: Wiki dictionaries and encyclopedias (e.g. OmegaWiki, Wiktionary, Wikipedia, DBpedia) contain information (e.g. bilingual equivalences, morphological features, conjugations) that could be exploited to speed up the development of dictionaries for Apertium. This task aims at automatically building dictionaries by extracting different pieces of information from wiki structures such as interlingual links and infoboxes, and/or from DBpedia RDF datasets.
  • Mentors: Atoral, Jimregan (DBpedia extraction only)

Inferring transfer rules with active learning
  • Difficulty: 2. Hard
  • Required skills: C++, general knowledge of GIZA++, XML.
  • Description: Re-work apertium-transfer-training-tools to get more general rules. The right level of generalisation can be achieved by asking non-expert users to validate examples.
  • Rationale: apertium-transfer-training-tools generates shallow-transfer rules from aligned parallel corpora. The corpus-extracted rules provide translation quality comparable with expert-written ones. However, human-written rules are more general and easier to maintain. The purpose of this task is to reduce the number of inferred rules while keeping translation quality. Gaps in the information elicited from the parallel corpus are filled with knowledge from non-expert users to achieve the right degree of generalisation.
  • Mentors: Víctor M. Sánchez-Cartagena

Active learning to choose among paradigms which share surface forms
  • Difficulty: 2. Hard
  • Required skills: XML, Java, web technologies.
  • Description: Develop an active learning algorithm to choose the most appropriate paradigm (in a monolingual dictionary) for a new word. Previous research allows us to reduce the problem to a set of paradigms which share surface forms but not lexical forms.
  • Rationale: Current research on the addition of entries to a monolingual dictionary by non-expert users has partially solved the problem of choosing the best paradigm for a new word. However, when different paradigms share all their surface forms, the problem becomes harder. In that case it is necessary to ask the users to validate sentences in which the word to be added acts as different lexical forms. Such sentences may be extracted from a monolingual corpus.
  • Mentors: Víctor M. Sánchez-Cartagena, Miquel Esplà-Gomis

Optimising paradigms
  • Difficulty: 3. Medium
  • Required skills: XML, Java.
  • Description: Develop a tool to simplify the paradigms in monolingual dictionaries in order to minimise redundancy.
  • Rationale: The collaborative improvement of dictionaries can cause some degree of redundancy in paradigms (it is frequent to find paradigms generating a very similar set of lexical forms). Although apertium-dixtools includes an option to remove identical paradigms, this task consists in implementing a tool that reduces redundancy in paradigms by generating all the lexical forms and restructuring the paradigms.
  • Mentors: Miquel Esplà-Gomis

Corpus-based distinction between verb/pronoun combinations in Romance languages
  • Difficulty: 2. Hard
  • Required skills: C++, XML
  • Description: Write a module which learns to distinguish between different translations of pronoun/verb combinations in Romance languages, e.g. "se come bien", "él se come", etc.
  • Rationale: Some constructions are ambiguous in Romance languages (e.g. Spanish). Verb/pronoun combinations are one of these; however, with any given verb, some combinations are more likely than others.

Corpus-based definiteness transfer
  • Difficulty: 3. Medium
  • Required skills: C++, XML, Python, linguistics
  • Description: Develop a program that uses information from corpora to improve the transfer of definiteness between language pairs which have it.
  • Rationale: Languages treat definiteness differently, some using it more than others; in Apertium we typically just transfer it as-is, but this has problems.
  • Mentors: Francis Tyers
  • read more...


Old further reading

Automated lexical extraction
Support for agglutinative languages
Transfer rule learning
Target-language driven part-of-speech tagger training
Regular expressions in lt-tmxproc

Apertium on Android with Necessitas QT

First off: "work on a smartphone and would be the first translator to work without Internet connection" - not true. There's at least one for the iPhone.

"A mechanism (i.e. temporary files) must be instrumented to overcome the unavailability of pipelines in Android." -- Pasquale implemented a pipeline replacement using memory-based pseudo files in apertium-service. Consider that problem solved. (Also, pipelines are not unavailable, they're just not standard).

Beyond that, I think the premise is flawed. The primary issue(s) with Apertium on Android were, chronologically: 1) lack of C++ support, 2) lack of STL support, 3) lack of wstring support. The first two are solved, the third, I'm not sure. Qt cannot help with that.

I don't see any major appeal in using a non-standard GUI, particularly when it's likely to involve as much effort as building a JNI interface. An optimal Android version would provide a translation interface as an Intent, rather than tightly coupling a GUI.

If the wstring issue no longer exists, it should be possible to run apertium-service unmodified. A JNI wrapper around it is the best way forward for a C++ port. Necessitas QT is a dead end idea. -- Jimregan 15:53, 29 February 2012 (UTC)

According to Boyán: "It seems that it does support STL. ... apparently Sergio's proposal to port Apertium to Android is still viable, and sooner or later it will have to be done. Jim is right that QT would not be used at all, but rather an Android Intent" - Francis Tyers 21:08, 29 February 2012 (UTC)
STL support: that's what I was saying. Rephrased: first, the primary issue was lack of C++ support; that resolved, the primary issue became lack of STL support; that resolved, the primary issue became lack of wstring support. I've done some checking, and can't seem to find a straight answer, so "probably not". There is an alternative version of stlport for android, that seems to have the necessary pieces, here. I still say Qt is a dead end. Even if it's possible to create an Intent with Qt -- and it's by no means certain that it is: http://comments.gmane.org/gmane.comp.lib.qt.android/2719 -- it seems a lot of trouble to go to for an outcome that will never be truly satisfactory. Either way, there's going to be a lot of dicking around with JNI, so it would probably just be best to wrap Apertium in a Java shim. On the plus side, there are tools like jnaerator to automate most of that. -- Jimregan 19:39, 2 March 2012 (UTC)

2013 ideas

Template-based bilingual dictionary
  • Difficulty: 1. Hard
  • Required skills: XML, C++, python
  • Description: Design a format similar to bidix (declarative XML establishing language 1 <> language 2 correspondences) that allows the use of templates, as well as the back-end to process it (i.e., it should compile into an FST). It should deal with discontiguous multiwords and complex multiwords, allowing them to be easily translated, and should provide some mechanism (some sort of ranking) to deal with multiple matching sets of templates for a given translation (similar to CG). It should essentially allow one to bypass transfer rules and disambiguation and produce similar (if not better) accuracy in translation.
  • Rationale: A templatic bidix forces the designer of a language pair to be more explicit, and also consolidates pair development. Furthermore, there are several types of phenomena such a system could deal with that are currently highly problematic.
  • Mentors: Firespeaker, Francis Tyers
  • read more...

Improved bilingual dictionary induction
  • Difficulty: 3. Entry level
  • Required skills: Python, XML
  • Description: Write a set of scripts that can generate valid and consistent Apertium bilingual dictionary entries from a word-aligned parallel corpus. This will involve making a basic templating system. The scripts should ideally be able to incorporate quality measures to determine how reliable the translations extracted from the corpus are.
  • Rationale: There are some tools to make bilingual dictionaries from parallel corpora (such as retratos), but they don't take into account that words in different languages can require different entries in the bilingual dictionary depending on their morphological characteristics. This means that although finding the translations is automatic, most generated entries have to be checked, which can greatly increase the amount of time it takes to make a new translation system.
  • Mentors: Francis Tyers
  • read more...

Improvements in lexical-selection module
  • Difficulty: 2. Medium
  • Required skills: C++, python
  • Description: Implement a number of improvements to the lexical selection module, particularly involving the rule-learning scripts.
  • Rationale: The lexical selection module in Apertium is currently a prototype. There are many optimisations that could be made to make it faster and more efficient. There are a number of scripts which can be used for learning lexical-selection rules, but the scripts are not particularly well written. Part of the task will be to rewrite the scripts, taking into account all possible corner cases.
  • Mentors: Francis Tyers
  • read more...

Visual interface to write structural transfer rules
  • Difficulty: 1. Hard
  • Required skills: C++, scripting languages, GUI design
  • Description: Write a graphical user interface for writing structural transfer rules: one that reads in (a subset of) the current XML-based language, allows graphical, intuitive editing of the rules, and writes compilable .t1x, .t2x or .t3x files.
  • Rationale: Apertium structural transfer rules are currently encoded in XML-based formats. These are very overt and clear, but clumsy and may be hard to write. The idea is to design a visual programming language in the style of Scratch, where jigsaw-puzzle-style pieces corresponding to statements and control structures fit together only if the syntax is right.
  • Mentors: Mikel Forcada, mentors wanted!
  • read more...

2014 ideas

Apertium in chat clients (2014 project)
  • Difficulty: 2. Medium
  • Required skills: Good command of Java, interfaces, Android, and scripting languages
  • Description: Make it possible to use Apertium seamlessly from inside chat clients such as Telegram, XChat and Pidgin. Telegram is a free/open-source alternative to WhatsApp with a documented API. Using the existing offline Apertium code base, it should be possible to integrate it in the Android version of Telegram or in the Chrome/Chromium plugin. XChat is one of the most popular IRC programs.
  • Rationale: Apertium has come a long way towards becoming a machine translation system that may be easily installed (e.g. apertium-caffeine). This means that it should be easy to interface it so that it works as a plugin to XChat (see the XChat 2.0 Plugin interface).
  • Mentors: mlforcada, other mentors wanted.
  • read more...

Unify the metadix formats
  • Difficulty: 2. Medium
  • Required skills: Good command of XML, XSLT, scripting languages
  • Description: Unify and extend the various "metadix" formats used in Apertium and deploy the modifications.
  • Rationale: In some language pairs, dictionaries are not written directly in the .dix format understood by lt-comp, but rather in a higher-level format called .metadix which is converted to .dix using a cascade of XSLT stylesheets and scripts. However, the .metadix format is different in each language pair, and therefore each language pair contains its own scripts and XSLT stylesheets. There are basically two such formats. The idea is to unify them in a single format that can be processed with scripts that would then be part of lttoolbox or apertium, and, if possible, extend it so that it allows for "variables" in bilingual dictionaries, so that one can have a single entry for (e.g.) 'foodstock'/'foodstocks' (en) = 'matèria primera'/'matèries primeres' (ca), which is currently not possible.
  • Mentors: mlforcada
  • read more...

Command-line translation memory fuzzy-match repair (2014 project)
  • Difficulty: 1. Hard
  • Required skills: Any command-line scripting language, or Java, or C++
  • Description: Extend Apertium's translation-memory capability so that it can "repair" some fuzzy matches when it is "safe" to do so.
  • Rationale: Currently Apertium has support for translation memories, basically as follows: if an input sentence is found exactly in the translation memory, it is not machine translated but instead retrieved from the translation memory. However, it may be the case that one finds, for instance, sentences that differ in only one or two words. In that case, it may make sense to use Apertium only to "patch" the translation in the memory. It is actually possible to do this in a "safe" way.
  • Mentors: mlforcada
  • read more...

Rule-based finite-state disambiguation (2013 project)
  • Difficulty: 1. Hard
  • Required skills: XML, C++ or Java
  • Description: Implement a disambiguation framework for Apertium that can be expressed as a finite-state transducer. It might be a good idea to express this as constraint rules, in a novel XML-based file format. It would be a good idea to look at LanguageTool, IceParser and Apertium's own apertium-lex-tools for ideas on how this might be accomplished.
  • Rationale: Currently Apertium only has a bigram/trigram part-of-speech tagger. For most languages, bigram/trigram POS disambiguation really doesn't work, especially when you want to disambiguate morphology (e.g. number, case) along with part of speech. So far we've been using Constraint Grammar for some of these languages. But although Constraint Grammar is great and powerful, it is also pretty slow.
  • Mentors: Francis Tyers (C++), Jacob Nordfalk (Java)
  • read more...

Complex multiwords (2014 project)
  • Difficulty: 1. Hard
  • Required skills: Java or C++, XML, knowledge of FSTs
  • Description: Write a bidirectional module for specifying complex multiword units, for example 'dirección general' and 'zračna luka'. See Multiwords for more information.
  • Rationale: Although in the Romance languages this is not a big problem, as soon as you get to languages with cases (e.g. Serbo-Croatian, Slovenian, German, etc.) the problem arises that you can't define a multiword of adj nom because the adjective has a lot of inflection.
  • Mentors: Jimregan
  • read more...

Optimise the VM for transfer (2014 project)
  • Difficulty: 2. Medium
  • Required skills: Python, C++, XML, code optimisation, JIT techniques, etc.
  • Description: The current VM for the transfer architecture of Apertium is up to five times slower than the XML tree-walking implementation. The job of this task is to optimise the C++ code to make it faster than XML tree-walking.
  • Rationale: XML tree-walking is quite slow and CPU-intensive. In modern (3-or-more-stage) pairs, transfer takes up most of the CPU. There are other options, like Bytecode for transfer, but we would like something that does not require external libraries and is adapted specifically for Apertium.
  • Mentors: Sortiz
  • read more...

Accent and diacritic restoration
  • Difficulty: 3. Entry level
  • Required skills: C, C++, XML, familiarity with linguistic issues, knowledge of FSTs preferable
  • Description: Create an optional module to restore diacritics and accents on input text, and integrate it into the Apertium pipeline. (A minimal baseline sketch follows below.)
  • Rationale: Many languages use diacritics and accents in normal writing, and Apertium is designed to use them; however, in some settings, for example instant messaging, IRC and web searches, they are often omitted. This causes problems, as for the engine 'traduccion' is not the same as 'traducción'.
  • Mentors: Kevin Scannell, Trondtr
  • read more...

Geriaoueg vocabulary assistant
  • Difficulty: 3. Entry level
  • Required skills: PHP, C++, XML
  • Description: Extend Geriaoueg so that it works more reliably with broken HTML and with any given language pair (e.g. support for both lttoolbox and HFST).
  • Rationale: Geriaoueg is a program that provides "popup" vocabulary assistance, something like BBC Vocab or Lingro. Currently it only works with Breton--French, Welsh--English and Spanish--Breton. This task would be to develop it to work with any language pair in our SVN and to fix problems with processing and displaying non-standard HTML.
  • Mentors: Francis Tyers
  • read more...

Closer integration with HFST
  • Difficulty: 2. Medium
  • Required skills: C++, Autotools, XML
  • Description: This is a set of subtasks to make it easier for Apertium developers to use the Helsinki Finite-State Toolkit (HFST): adjusting the HFST build process to allow for an Apertium-tailored install; making an XML format for lexc designed with machine translation in mind; adjusting the tokenisation code in hfst-proc; and making lttoolbox a possible backend for HFST.
  • Rationale: HFST is a great toolkit for working with morphological transducers, but it is pretty difficult to install, and also not very well integrated with Apertium / doesn't really follow the Apertium way of doing things. We'd like to make it more closely integrated.
  • Mentors: Francis Tyers, Tommi A Pirinen
  • read more...

Plain-text formats for Apertium data
  • Difficulty: 2. Medium
  • Required skills: XSLT, XML, flex, bison
  • Description: Apertium data is currently largely encoded in XML-based formats. These are very overt and clear, but clumsy and hard to write. The idea is to make a plain-text format (based on the old MorphTrans format) and write converters to/from the existing XML-based formats.
  • Rationale: Many of our developers like the XML-based transfer and dictionary formats, but there are always some who would prefer a more texty format. This idea would make them happier. Happy developers write more code!
  • Mentors: Mlforcada
  • read more...

Improving support for non-standard text input (2014 project)
  • Difficulty: 3. Entry level
  • Required skills: Python, XML, familiarity with linguistic issues, knowledge of FSTs preferable
  • Description: Create a module that will standardise non-standard input, for example slang and abbreviations.
  • Rationale: Machine translation systems, especially rule-based systems, are pretty fragile when it comes to non-standard input. Get a comma, space, apostrophe or hyphen in the wrong place and it can come out all wrong. But we definitely want to be able to translate IRC, SMS, tweets and YouTube comments...
  • Mentors: Francis Tyers
  • read more...

Apertium assimilation evaluation toolkit (2014 project)
  • Difficulty: 3. Entry level
  • Required skills: A scripting language
  • Description: Starting from files containing sentences in the source language and reference translations, generate tests for human evaluation consisting of: (1) (optionally) the source sentence, (2) (optionally) the machine-translated version of the source sentence, and (3) a reference translation of the sentence in which one or more content words have been deleted. The idea is to measure how the ability of human subjects to fill in the holes improves when the source sentence, or a machine translation of it, is presented. The task also involves writing a program that computes the success rate as a function of the information presented to the user, and utilities to make the whole process automatic given an Apertium language pair. (A sketch of item generation and scoring follows below.)
  • Rationale: Many Apertium language pairs are designed for assimilation (gisting) purposes. The evaluation described would measure how helpful they are for that task.
  • Mentors: Francis Tyers, Mikel Forcada
  • read more...

Corpus-based lexicalised feature transfer
  • Difficulty: 1. Hard
  • Required skills: C++, NLP
  • Description: Make a module that sits somewhere in the Apertium pipeline (after lexical selection and before morphological generation) and sets features (e.g. tags) based on a model generated from a corpus.
  • Rationale: Let's get down to brass tacks: sometimes we get really inadequate translations even though you'd never hear stuff like that. One of those cases is when we output something as definite when it is never used as definite. One way of dealing with this is a lot of rules and lists in transfer, but those are hard to write. So, how about looking at a corpus for information about features like definiteness, aspect, evidentiality, impersonal/reflexive pronoun use in Romance languages, etc.?
  • Mentors: Francis Tyers, Jimregan
  • read more... (2012 project)

2020 ideas

Language Ideas

These are ideas that involve working with particular languages.


Bring a released language pair up to state-of-the-art quality

  • Difficulty:
    2. Medium
  • Size: Unknown
  • Required skills:
    XML, a scripting language (Python, Perl), good knowledge of the language pair adopted.
  • Description:
    Take a released language pair, and drastically improve the performance both in terms of coverage and in terms of translation quality. This will involve working with dictionaries, transfer rules, scripting and corpora. The objective is to make an Apertium language pair state-of-the-art, or close to state-of-the-art, in terms of translation quality. This will involve improving coverage to 95-98% on a range of corpora and decreasing the word error rate by 30-50%. For example, if the current word error rate is 30%, it should be reduced to 15-20%. (A sketch of the word-error-rate computation follows this entry.)
  • Rationale:
    Apertium has quite a broad coverage of language pairs, but few of these pairs offer state-of-the-art translation quality. We think broad is important, but deep coverage is important too.
  • Mentors:
    Francis Tyers, Mikel Forcada, Xavi Ivars, Ilnar Salimzianov
  • read more...
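
For reference, the word error rate used informally above can be computed as the word-level edit distance between the MT output and a reference translation, divided by the reference length; a minimal sketch:

<pre>
# Word error rate: word-level Levenshtein distance / reference length.
def wer(hypothesis, reference):
    h, r = hypothesis.split(), reference.split()
    # d[i][j]: edit distance between the first i reference words
    # and the first j hypothesis words.
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            sub = d[i - 1][j - 1] + (r[i - 1] != h[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(r)][len(h)] / len(r)

print(wer("the cat sat", "the cat sat on the mat"))  # 0.5
</pre>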


Adopt an unreleased language pair


apertium-separable language-pair integration

  • Difficulty:
    2. Medium
  • Size: Unknown
  • Required skills:
    XML, a scripting language (Python, Perl), some knowledge of linguistics and/or at least one relevant natural language
  • Description:
    Choose a language you can identify as having a good number of "multiwords" in the lexicon. Modify the Apertium language pairs that involve it to use the apertium-separable module to process the multiwords, and clean up the dictionaries accordingly.
  • Rationale:
    Apertium-separable is a newly developed module to process lexical items with discontiguous dependencies, an area where Apertium has traditionally fallen short. Despite all the module has to offer, it has only been put to use in small test cases, and hasn't been integrated into any translation pair's development cycle.
  • Mentors:
    Francis Tyers, User:Firespeaker
  • read more...


Bring Apertium Occitan--French closer to posteditable quality

  • Difficulty:
    2. Medium
  • Size: Unknown
  • Required skills:
    GNU/Linux advanced user, bash, git, XML editing, standard Occitan, French.
  • Description:
    The idea is to make Occitan output easier to postedit and French output easier to understand. This entails increasing the monolingual and bilingual dictionaries, improving disambiguation, and writing new structural transfer rules.
  • Rationale:
    The Occitan--French language pair has been recently published. This language pair is of strategic importance for the Occitan language, as Apertium offers the only machine translation system for this language pair.
  • Mentors:
    Mikel Forcada, mentors welcome.
  • read more...


Create a usable version of one of these language pairs: English--Igbo, English--Yoruba, English--Tigrinya, English--Swahili, English--Hausa

  • Difficulty:
    2. Medium
  • Size: Unknown
  • Required skills:
    GNU/Linux advanced user, bash, git, XML editing, English, Igbo/Yoruba/Tigrinya/Swahili/Hausa
  • Description:
    The objective is to start these language pairs (which haven't been started, or currently have very little data, in Apertium) and write a usable version which provides intelligible output.
  • Rationale:
    African languages are not particularly well served by Apertium. The languages listed are quite important, and are currently served only by commercial machine translation companies such as Google, which makes these language communities dependent on a specific commercial provider.
  • Mentors:
    Mikel Forcada, mentors welcome.
  • read more...

Module/Pipeline Ideas

These are ideas for modifying things in the translation pipeline.


Robust tokenisation in lttoolbox

  • Difficulty:
    2. Medium
  • Size: Unknown
  • Required skills:
    C++, XML, Python
  • Description:
    Improve the longest-match left-to-right tokenisation strategy in lttoolbox to be fully Unicode compliant.
  • Rationale:
    One of the most frustrating things about working with Apertium on texts "in the wild" is the way that tokenisation works. If a letter is not specified in the alphabet, it is treated as whitespace, so unknown words get split in two and you can end up with stuff like ^G$ö^k$ı^rmak$, which is terrible for further processing. (A sketch of the desired behaviour follows this entry.)
  • Mentors:
    Francis Tyers, Flammie
  • read more...
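
A sketch of the desired behaviour, with a toy word list standing in for the transducer: longest-match left-to-right lookup, plus Unicode-aware grouping of unmatched alphabetic runs instead of splitting at letters missing from the alphabet.

<pre>
# Longest-match left-to-right tokenisation with Unicode-aware handling
# of unknown words (kept whole via str.isalpha(), not split at
# "missing" letters). LEXICON is a toy stand-in for the FST.
LEXICON = {"river", "the"}

def tokenise(text):
    tokens, i = [], 0
    while i < len(text):
        if text[i].isspace():
            i += 1
            continue
        # Try the longest known word starting at position i.
        for j in range(len(text), i, -1):
            if text[i:j] in LEXICON:
                tokens.append((text[i:j], "known"))
                i = j
                break
        else:
            # Group the whole alphabetic run as one unknown token.
            j = i
            while j < len(text) and text[j].isalpha():
                j += 1
            tokens.append((text[i:max(j, i + 1)], "unknown"))
            i = max(j, i + 1)
    return tokens

print(tokenise("the Gökırmak river"))
# [('the', 'known'), ('Gökırmak', 'unknown'), ('river', 'known')]
</pre>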


Extend lttoolbox to have the power of HFST

  • Difficulty:
    1. Hard
  • Size: Unknown
  • Required skills:
    C++, XSLT, XML
  • Description:
    Extend lttoolbox (perhaps writing a preprocessor for it) so that it can be used to do the morphological transformations currently done with HFST. And yes, of course, write something that translates the current HFST format to the new lttoolbox format. Proof of concept: come up with a new format that can express all of the features found in the Kazakh transducer; implement this format in Apertium; implement the Kazakh transducer in this format and integrate it in the English--Kazakh pair.
  • Rationale:
    Some language pairs in Apertium use HFST where most language pairs use Apertium's own lttoolbox. This is due to the fact that writing morphologies for languages that have features such as the vowel harmony found in Turkic languages is very hard with the current format supported by lttoolbox. The mixture of HFST and lttoolbox makes it harder for people to develop some language pairs.
  • Mentors:
    Mikel Forcada, Tommi A Pirinen, User:Unhammer, mentors wanted
  • read more...


Extend weighted transfer rules

  • Difficulty:
    1. Hard
  • Size: Unknown
  • Required skills:
    Python, C++, linguistics
  • Description:
    The purpose of this task is to extend weighted transfer rules to all transfer files and to allow conflicting rule patterns to be handled by combining (lexicalised) weights. (A sketch of weight-based rule selection follows this entry.)
  • Rationale:
    Currently our transfer rules are applied longest-match left-to-right (LRLM). When two rule patterns conflict the first one is chosen. We have a prototype for selecting based on lexicalised weights, but it only applies to the first stage of transfer.
  • Mentors:
    Francis Tyers, Tommi Pirinen, Sevilay Bayatlı
  • read more...
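
A minimal sketch of weight-based selection among conflicting pattern matches; the rule identifiers, pattern lengths and weight table are invented for illustration and are not the prototype's actual data model.

<pre>
# Choose among conflicting rule matches at the same position by the sum
# of lexicalised weights over the words each rule's pattern covers,
# instead of always taking the first/longest match.
def best_rule(rules, words, weights):
    """rules: list of (rule_id, pattern_length);
    weights: (rule_id, lemma) -> float."""
    def score(rule):
        rule_id, length = rule
        return sum(weights.get((rule_id, w), 0.0)
                   for w in words[:length])
    return max(rules, key=score)

rules = [("r1", 2), ("r2", 2)]   # two conflicting two-word patterns
weights = {("r1", "casa"): 0.3,
           ("r2", "casa"): 0.9, ("r2", "blanca"): 0.2}
print(best_rule(rules, ["casa", "blanca"], weights))  # ('r2', 2)
</pre>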


Light alternative format for all XML files in an Apertium language pair

  • Difficulty:
    1. Hard
  • Size: Unknown
  • Required skills:
    Python, C++, shell scripting, XSLT, flex
  • Description:
    Make it possible to edit and develop language data using a format that is lighter than XML
  • Rationale:
    In most Apertium language pairs, monolingual dictionaries, bilingual dictionaries, post-generation rule files and structural transfer rule files are all written in XML. While XML is easy to process due to explicit tagging of every element, it is tedious to deal with, particularly when it comes to structural transfer rules. Apertium's precursor, interNOSTRUM, had lighter text-based formats. The task involves: (a) designing and documenting an interNOSTRUM-style format for all of the XML language data files in a language pair; (b) writing converters to and from XML that are fully roundtrip-compliant; and (c) designing a way to synchronise changes when both the XML and the non-XML formats are used simultaneously in a specific language pair.
  • Mentors:
    Mikel Forcada, Juan Antonio Pérez
  • read more...


Eliminate dictionary trimming

  • Difficulty:
    0. Very Hard
  • Size: Unknown
  • Required skills:
    C++, Finite-State Transducers
  • Description:
    Eliminate the need for trimming the monolingual dictionaries, in order to preserve and take advantage of maximal source language analysis.
  • Rationale:
    Why we trim mentions several technical reasons for why trimming away monolingual information is currently needed. Unfortunately, this limitation means that a lot of useful contextual information is lost. It would be ideal if the source language could be fully analyzed independent of target language, with any untranslated part fed back into the source language generator.
  • Work around everything in Why we trim
  • Mentors:
    Flammie, +1 (you need to find at least one more mentor to apply for this task)
  • read more...


Create FST-based module for disambiguating

  • Difficulty:
    2. Medium
  • Size: Unknown
  • Required skills:
    XML, a scripting language (Python, Perl), C++, finite-state transducers
  • Description:
    Implement a Constraint Grammar-like module based on finite-state transducers. (A toy illustration of constraint application follows this entry.)
  • Rationale:
    Currently, many language pairs use Constraint Grammar as a pre-disambiguator for the Apertium tagger, allowing the imposition of more fine-grained constraints than would otherwise be possible. However, the current implementation of CG is much slower than most of the other modules in the Apertium pipeline, and it is also very different in terms of syntax from other Apertium modules (dictionaries, lexical selection, transfer rules, etc.). There have been a few attempts to create FST versions of CG (see User:David_Nemeskey/GSOC_progress_2013), but they haven't succeeded. The hypothesis is that a simpler version of CG that supports its main features (no need for feature parity) would see better adoption and integration within the Apertium pipeline.
  • Mentors:
    Xavi Ivars, Francis Tyers
  • read more...
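
To make the idea concrete, here is a toy constraint applied in a single left-to-right pass over ambiguous readings; a real module would compile many such rules into one transducer, and the tag names below are only illustrative.

<pre>
# Toy CG-style constraint: remove verb readings immediately after an
# unambiguous determiner. One rule, one pass; a full module would be a
# compiled cascade of such rules.
def apply_rule(cohorts):
    """cohorts: list of sets of POS readings, one set per word."""
    out = []
    for i, readings in enumerate(cohorts):
        if i > 0 and out[i - 1] == {"det"} and len(readings) > 1:
            # Drop the verb reading, but never empty a cohort.
            readings = readings - {"vblex"} or readings
        out.append(readings)
    return out

# "the book": 'book' is noun/verb ambiguous until the determiner
# context rules the verb reading out.
print(apply_rule([{"det"}, {"n", "vblex"}]))  # [{'det'}, {'n'}]
</pre>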


Learning distributed representations for Apertium modules

  • Difficulty:
    1. Hard
  • Size: Unknown
  • Required skills:
    Python, neural networks
  • Description:
  • Rationale:
  • Mentors:
    Francis Tyers
  • read more...

Tool Ideas

These are ideas for creating tools to help build modules and pairs.


User-friendly lexical selection training

  • Difficulty:
    2. Medium
  • Size: Unknown
  • Required skills:
    Python, C++, shell scripting
  • Description:
    Make it so that training/inference of lexical selection rules is a more user-friendly process
  • Rationale:
    Our lexical selection module allows for inferring rules from corpora and word alignments, but the procedure is currently a bit messy, with various scripts involved that require lots of manual tweaking, and many third party tools to be installed. The goal of this task is to make the procedure as user-friendly as possible, so that ideally only a simple config file would be needed, and a driver script would take care of the rest.
  • Mentors:
    Unhammer, Mikel Forcada
  • read more...


Bilingual dictionary enrichment via graph completion

  • Difficulty:
    0. Very Hard
  • Size: Unknown
  • Required skills:
    shell scripting, python, XSLT, XML
  • Description:
    Generate new entries for existing or new bilingual dictionaries using graph representations of the bilingual correspondences found in all existing dictionaries (note that this idea defines a rather open-ended task, to be discussed in detail with mentors). (A toy pivot-based sketch follows this entry.)
  • Rationale:
    Apertium bilingual dictionaries establish correspondences between lexical forms in a number of language pairs. Connections among them may be used to infer new entries for existing or new language pairs using graphs. The graphs may be generated directly from Apertium bidixes, exploiting ideas that have already been proposed in Apertium, or from existing RDF representations of parts of their content, which may benefit from being linked to other resources. Some previous progress can be found here.
  • Mentors:
    Mikel Forcada, Francis Tyers, Jorge Gracia
  • read more...
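
A toy pivot-based sketch of the graph idea: build a graph with (language, lemma) nodes and bidix entries as undirected edges, and propose a new entry when two or more independent pivot languages connect the same source and target words. The data format is an assumption for illustration.

<pre>
# Propose new bidix entries from pivot paths through existing bidixes.
from collections import defaultdict

def infer_pairs(bidix_edges, src, tgt, min_pivots=2):
    """bidix_edges: iterable of ((lang1, word1), (lang2, word2))."""
    adj = defaultdict(set)
    for a, b in bidix_edges:
        adj[a].add(b)
        adj[b].add(a)
    proposals = defaultdict(set)
    for node in list(adj):
        if node[0] != src:
            continue
        for pivot in adj[node]:
            for end in adj[pivot]:
                if end[0] == tgt:
                    # Record which pivot language supports this pair.
                    proposals[(node, end)].add(pivot[0])
    return {pair: pivots for pair, pivots in proposals.items()
            if len(pivots) >= min_pivots}

edges = [(("en", "dog"), ("es", "perro")),
         (("es", "perro"), ("ca", "gos")),
         (("en", "dog"), ("fr", "chien")),
         (("fr", "chien"), ("ca", "gos"))]
print(infer_pairs(edges, "en", "ca"))
# {(('en', 'dog'), ('ca', 'gos')): {'es', 'fr'}}
</pre>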


A web interface for expanding dictionary lemmas, integrated with GitLab/GitHub

  • Difficulty:
    1. Hard
  • Size: Unknown
  • Required skills:
    Java, git, XML, NodeJS, Angular 8.
  • Description:
    Given that Apertium has a few dozen contributors and thousands of users, we propose a web graphical user interface (GUI) that enables lay users to contribute to the expansion of the dictionaries that make up the knowledge base of Apertium.

The main premise of the solution is that users with minimal knowledge of the language can contribute easily and that it must be integrated with the current form of development of expert users. Some prior progress can be found here.


Dictionary lookup with editing

  • Difficulty:
    1. Hard
  • Size: Unknown
  • Required skills:
    XML, git, JavaScript, any language for backend (Python?)
  • Description:
    A bilingual dictionary (the kind for people, not a bidix) contains various kinds of information (example here). Possible things to find in such a dictionary include inflected forms, translations, and phrases that the word might occur in. It should be possible to extract this information from various files within a translation pair. Within the interface, users could make changes to that information which can then be automatically converted to a pull request on Github. Some prior efforts at various kinds of dictionary lookup have been attempted here.
  • Rationale:
    Dictionary lookup is something that would be useful to a lot of users and fixing bilingual dictionary entries is something new people frequently want to do.
  • Mentors:
    Popcorndude, mentors welcome
  • read more...


Extract morphological data from FLEx

  • Difficulty:
    1. Hard
  • Size: Unknown
  • Required skills:
    python, XML parsing
  • Description:
    Write a program to extract data from SIL FieldWorks and convert as much as possible to monodix (and maybe bidix).
  • Rationale:
    There's a lot of potentially useful data in FieldWorks files that might be enough to build a whole monodix for some languages, but it's currently really hard to use.
  • Mentors:
    Popcorndude, Flammie
  • read more...

Integration Ideas

These are ideas for making Apertium more useful in other places.


Improvements to the Apertium website

  • Difficulty:
    3. Entry level
  • Size: Unknown
  • Required skills:
    Python, HTML, JS
  • Description:
    Our web site is pretty cool already, but it's missing things like dictionary/synonym lookup, support for several variants of one language, reliability visualisation, (reliable) webpage translation, feedback, etc.
  • Rationale:
    https://apertium.org / http://beta.apertium.org is what most people know us by; it should show off more of the things we are capable of :-)
  • Mentors:
    Jonathan, Sushain
  • read more...


UD and Apertium integration

  • Difficulty:
    3. Entry level
  • Size: Unknown
  • Required skills:
    python, javascript, HTML, (C++)
  • Description:
    Create a range of tools for making Apertium compatible with Universal Dependencies
  • Rationale:
    Universal Dependencies is a fast-growing project aimed at creating a unified annotation scheme for treebanks. This includes both part-of-speech and morphological features. Their annotated corpora could be extremely useful to Apertium for training models for translation. In addition, Apertium's rule-based morphological descriptions could be useful for software that relies on Universal Dependencies.
  • Mentors:
    User:Francis Tyers User:Firespeaker
  • read more...


Improving language pairs by mining Mediawiki Content Translation postedits

  • Difficulty:
    1. Hard
  • Size: Unknown
  • Required skills:
    Python, shell scripting, some statistics
  • Description:
    Implement a toolkit that allows mining existing machine translation postediting data in Mediawiki Content Translation (https://www.mediawiki.org/wiki/Content_translation) to generate (as automatically and as completely as possible) monodix and bidix entries to improve the performance of an Apertium language pair. Data is available from Wikimedia Content Translation through an API (https://www.mediawiki.org/wiki/Content_translation/Published_translations#API) or in the form of dumps (https://dumps.wikimedia.org/other/contenttranslation/) available in JSON and TMX format. This project is rather experimental and involves some research in addition to coding. (One possible mining strategy is sketched after this entry.)
  • Rationale:
    Apertium is used to generate new Wikipedia content: machine-translated content is postedited (and perhaps adapted) before publishing. Postediting information may contain information that can be used to help improve the lexical components of an Apertium language pair.
  • Mentors:
    Mikel Forcada, (more mentors to be added)
  • read more...
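
One simple mining strategy, sketched below: align each raw MT sentence with its postedited version and harvest one-token substitutions as candidate lexical fixes. The parallel sentence-pair input is an assumption for illustration; real data would come from the dumps' JSON/TMX.

<pre>
# Harvest single-token substitutions from MT output vs. postedited text
# as candidate bidix corrections, ranked by frequency.
import difflib
from collections import Counter

def mine_substitutions(mt_sentences, postedited_sentences):
    candidates = Counter()
    for mt, pe in zip(mt_sentences, postedited_sentences):
        a, b = mt.split(), pe.split()
        matcher = difflib.SequenceMatcher(a=a, b=b)
        for tag, i1, i2, j1, j2 in matcher.get_opcodes():
            # Single-token replacements are the safest candidates;
            # longer edits tend to be stylistic rewrites.
            if tag == "replace" and i2 - i1 == 1 and j2 - j1 == 1:
                candidates[(a[i1], b[j1])] += 1
    return candidates.most_common()

print(mine_substitutions(["he plays the guitar"],
                         ["he plays the violin"]))
# [(('guitar', 'violin'), 1)]
</pre>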


Improvements to UD Annotatrix

  • Difficulty:
    2. Medium
  • Size: Unknown
  • Required skills:
    JavaScript, jQuery, HTML, Python
  • Description:
    UD Annotatrix is an interface by Apertium for annotating dependency trees in CoNLL-U format. The system is currently in beta, but is getting traction as more people start using it.
  • Rationale:
    Universal Dependencies is a very widely used standard for annotating data, the kind of annotated data that can be used to train part of speech taggers. There is still a lot of work that could be done to improve it.
  • Mentors:
    Francis Tyers, User:Firespeaker
  • read more...


TIPP functionality for Apertium

  • Difficulty:
    2. Medium
  • Size: Unknown
  • Required skills:
    Python, C++, XML, XLIFF (a subset called XLIFF:doc)
  • Description:
    TIPP, the TMS Interoperability Protocol Package (where TMS means translation management system), [2], currently in version 1.5 but being upgraded to 2.0, specifies a container (package format) that allows the interchange of information along a translation value chain. There are various container varieties for different tasks. One such variety, called Translate-Strict-Bitext, represents a bilingual translation job. The 'request' TIPP would contain an XLIFF:doc ([3], a subset of XLIFF 1 [4]) file with the document to be translated and the corresponding metadata, and the corresponding 'response' TIPP would contain the results of Apertium MT applied to it, but taking into account the translation memory provided in the TIPP, if any (using Apertium's -m switch). Apertium should be endowed with the capacity to manage TIPP packages: unpack the request package, parse it, process it, and repack it.
  • Rationale:
  • Mentors:
    Mikel Forcada
  • read more...


Set up gap-filling machine-translation-for-gisting evaluation on a recent version of Appraise

  • Difficulty:
    1. Hard
  • Size: Unknown
  • Required skills:
    python, bash, git, XML editing.
  • Description:
    http://github.com/mlforcada/Appraise, a fork of a GSoC student's work, contains an adaptation of an old (2014) version of http://github.com/cfedermann/Appraise that implements gap-filling evaluation as described in this WMT2018 paper. The objective is to bring the gap-filling functionality in [5] to be compatible with the latest versions of Appraise.
  • Rationale:
    Many language pairs in Apertium are unique, such as Breton-French, and many of them are used for gisting (understanding) purposes. Gap-filling provides a simple way to evaluate the usefulness of machine translation for gisting. The implementation used in recent experiments is based on an outdated version of the Appraise platform.
  • Mentors:
    Mikel Forcada, mentors welcome.
  • read more...
  1. Koichi Takeda. "Pattern-Based Context-Free Grammars for Machine Translation".
  2. Gábor Prószéky and László Tihanyi. "MetaMorpho: A Pattern-Based Machine Translation System".