Ideas for Google Summer of Code

This is the ideas page for Google Summer of Code. Here you can find ideas for interesting projects that would make Apertium more useful for people and improve or expand our functionality. If you have an idea, please add it below; if you think you could mentor someone in a particular area, add your name to "Interested mentors" using ~~~

The page is intended as an overview of the kind of projects we have in mind. If one of them particularly piques your interest, please come and discuss it with us on #apertium on irc.freenode.net, mail the mailing list, or draw attention to yourself in some other way.

Note that if you have an idea that isn't mentioned here, we would be very interested to hear about it.

Here are some more things you could look at:
* Resources that could be converted or expanded in the [[incubator]]. Consider doing or improving a language pair (see [[incubator]], [[nursery]] and [[staging]] for pairs that need work)

* Unhammer's [[User:Unhammer/wishlist|wishlist]]

* The open issues [https://github.com/search?q=org%3Aapertium&state=open&type=Issues on Github] (or [http://sourceforge.net/p/apertium/tickets/search/?q=!status%3Awont-fix+%26%26+!status%3Aclosed on Sourceforge]). The latter are probably out of date now that we have migrated to Github.
 
__TOC__
If you're a student trying to propose a topic, the recommended way is to request a wiki account and then go to <pre>http://wiki.apertium.org/wiki/User:[[your username]]/GSoC2020Proposal</pre> and click the "create" button near the top of the page. It's also nice to include <code><nowiki>[[Category:GSoC_2020_student_proposals]]</nowiki></code> to help organize submitted proposals.
== Language Ideas ==

These are ideas that involve working with particular languages.
<!-- See https://github.com/apertium/apertium-anaphora

{{IdeaSummary
| name = Anaphora resolution for machine translation
| more = /Anaphora resolution
}}
-->
{{IdeaSummary
| name = Bring a released language pair up to state-of-the-art quality
| difficulty = medium
| skills = XML, a scripting language (Python, Perl), good knowledge of the language pair adopted.
| description = Take a released language pair and drastically improve its performance, both in terms of coverage and in terms of translation quality. This will involve working with dictionaries, transfer rules, scripting and corpora. The objective is to make an Apertium language pair state-of-the-art, or close to state-of-the-art, in terms of translation quality. This will involve improving coverage to 95-98% on a range of corpora and decreasing the [[word error rate]] by 30-50%. For example, if the current word error rate is 30%, it should be reduced to 15-20%.
| rationale = Apertium has quite broad coverage of language pairs, but few of these pairs offer state-of-the-art translation quality. We think breadth is important, but depth is important too.
| mentors = [[User:Francis Tyers|Francis Tyers]], [[User:Mlforcada|Mikel Forcada]], [[User:Xavivars|Xavi Ivars]], [[User:Ilnar.salimzyan|Ilnar Salimzianov]]
| more = /Make a language pair state-of-the-art
}}
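For a sense of what "coverage" means operationally: a common naive measure is the share of tokens the analyser recognises. Below is a minimal Python sketch, assuming analyser output in the Apertium stream format (where unknown words are marked with <code>*</code>); the sample string is made up:

<pre>
# Count naive coverage from analysed text in Apertium stream format,
# where unknown words come out as ^word/*word$.
import re

def coverage(analysed_text):
    units = re.findall(r"\^(.*?)\$", analysed_text)
    known = [u for u in units if "/*" not in u]
    return len(known) / len(units) if units else 0.0

sample = "^the/the<det><def>$ ^blarg/*blarg$ ^cat/cat<n><sg>$"
print(f"{coverage(sample):.0%}")  # 67%
</pre>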
   
{{IdeaSummary
| name = Adopt an unreleased language pair
| difficulty = easy
| skills = XML, a scripting language (Python, Perl), good knowledge of the language pair adopted.
| description = Take on an orphaned, unreleased language pair and bring it up to release quality. What that quality threshold is will depend on the language pair adopted, and will need to be discussed with the prospective mentor. This will involve writing linguistic data (including morphological rules and transfer rules &mdash; which are specified in a declarative language &mdash; and possibly [[Constraint Grammar]] rules, if relevant)
| rationale = Apertium has a few language pairs (e.g. mt-he, ga-gd, ur-hi, pl-cs, sh-ru, etc.) that are orphaned: they have no active maintainers. Many of these pairs already have a lot of work put into them and just need another few months to reach release quality. See also [[Incubator]].
| mentors = [[User:Francis Tyers|Francis Tyers]], [[User:Jimregan|Jimregan]], [[User:Kevin Scannell|Kevin Scannell]], [[User:Trondtr|Trondtr]], [[User:Unhammer|Unhammer]], [[User:Darthxaher|Darthxaher]], [[User:Firespeaker|Firespeaker]], [[User:Hectoralos|Hectoralos]], [[User:Krvoje|Hrvoje Peradin]], [[User:Jacob Nordfalk|Jacob Nordfalk]], [[User:Mlforcada|Mikel Forcada]], [[User:Vin-ivar|Vinit Ravishankar]], [[User:Aida|Aida Sundetova]], [[User:Xavivars|Xavi Ivars]], [[User:Ilnar.salimzyan|Ilnar Salimzianov]], [[User:Sevilay bayatlı|Sevilay Bayatlı]]
| more = /Adopt a language pair
}}
   
{{IdeaSummary
| name = apertium-separable language-pair integration
| difficulty = Medium
| skills = XML, a scripting language (Python, Perl), some knowledge of linguistics and/or at least one relevant natural language
| description = Choose a language you can identify as having a good number of "multiwords" in the lexicon. Modify all language pairs in Apertium to use the [[Apertium-separable]] module to process the multiwords, and clean up the dictionaries accordingly.
| rationale = Apertium-separable is a newly developed module for processing lexical items with discontiguous dependencies, an area where Apertium has traditionally fallen short. Despite all the module has to offer, it has only been put to use in small test cases and hasn't been integrated into any translation pair's development cycle.
| mentors = [[User:Francis Tyers|Francis Tyers]], [[User:Firespeaker]]
| more = /Apertium separable
}}
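As an illustration of the phenomenon (this sketch is not apertium-separable's actual algorithm or data format; the toy lexicon is made up), matching a discontiguous multiword like English "take ... out":

<pre>
# Minimal sketch of discontiguous multiword matching (illustrative only;
# apertium-separable's real behaviour is defined by its rule files).
MULTIWORDS = {("take", "out"): "take out"}  # hypothetical toy lexicon

def match_separable(tokens, max_gap=3):
    """Rewrite 'take the rubbish out' -> 'take out the rubbish'."""
    out = list(tokens)
    i = 0
    while i < len(out):
        key = next((k for k in MULTIWORDS if k[0] == out[i]), None)
        if key:
            first, second = key
            # look for the detached particle within a small window
            for j in range(i + 1, min(i + 1 + max_gap, len(out))):
                if out[j] == second:
                    del out[j]
                    out[i] = MULTIWORDS[key]
                    break
        i += 1
    return out

print(match_separable(["take", "the", "rubbish", "out"]))
# ['take out', 'the', 'rubbish']
</pre>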
   
{{IdeaSummary
| name = Bring Apertium Occitan--French closer to posteditable quality
| difficulty = medium
| skills = GNU/Linux advanced user, bash, git, XML editing, standard Occitan, French.
| description = The idea is to make Occitan output easier to postedit and French output easier to understand. This entails expanding the monolingual and bilingual dictionaries, improving disambiguation, and writing new structural transfer rules.
| rationale = The [https://github.com/apertium/apertium-oci-fra Occitan--French language pair] has recently been published. This language pair is of strategic importance for the Occitan language, as Apertium offers the only machine translation system for it.
| mentors = [[User:Mlforcada|Mikel Forcada]], mentors welcome.
| more = /Apertium_Occitan_French
}}
   
{{IdeaSummary
| name = Create a usable version of one of these language pairs: English--Igbo, English--Yoruba, English--Tigrinya, English--Swahili, English--Hausa
| difficulty = medium
| skills = GNU/Linux advanced user, bash, git, XML editing, English, Igbo/Yoruba/Tigrinya/Swahili/Hausa
| description = The objective is to start these language pairs (which either haven't been started or currently have very little data in Apertium) and write a usable version which provides intelligible output.
| rationale = African languages are not particularly well served by Apertium. The five languages listed are quite important, and are currently served only by commercial machine translation companies such as Google, which makes these language communities dependent on a specific commercial provider.
| mentors = [[User:Mlforcada|Mikel Forcada]], mentors welcome.
| more = /Apertium_African
}}
   
== Module/Pipeline Ideas ==

These are ideas for modifying things in the translation pipeline.
{{IdeaSummary
| name = Robust tokenisation in lttoolbox
| difficulty = Medium
| skills = C++, XML, Python
| description = Improve the longest-match left-to-right tokenisation strategy in [[lttoolbox]] to be fully Unicode compliant.
| rationale = One of the most frustrating things about working with Apertium on texts "in the wild" is the way tokenisation works. If a letter is not specified in the alphabet, it is treated as whitespace, so unknown words can get split in two: you can end up with output like ^G$ö^k$ı^rmak$, which is terrible for further processing.
| mentors = [[User:Francis Tyers|Francis Tyers]], [[User:TommiPirinen|Flammie]]
| more = /Robust tokenisation
}}
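A minimal sketch of the desired Unicode-aware behaviour, in Python for illustration (the real fix belongs in lttoolbox's C++ tokeniser): treat every Unicode letter as word material, rather than only letters listed in the dictionary alphabet:

<pre>
# Sketch: any Unicode letter counts as part of a token, instead of only
# letters whitelisted in the dictionary's alphabet. (Illustrative only;
# the actual change would be made in lttoolbox's C++ code.)
import unicodedata

def is_letter(ch):
    return unicodedata.category(ch).startswith("L")

def tokenise(text):
    tokens, current = [], ""
    for ch in text:
        if is_letter(ch):
            current += ch
        else:
            if current:
                tokens.append(current)
                current = ""
            if not ch.isspace():
                tokens.append(ch)
    if current:
        tokens.append(current)
    return tokens

print(tokenise("Gökırmak akar."))  # ['Gökırmak', 'akar', '.'] - no mid-word splits
</pre>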
{{IdeaSummary
| name = Extend lttoolbox to have the power of HFST
| difficulty = Hard
| skills = C++, XSLT, XML
| description = Extend lttoolbox (perhaps by writing a preprocessor for it) so that it can be used to do the morphological transformations currently done with HFST, and, of course, write something that translates the current HFST format to the new lttoolbox format. Proof of concept: come up with a new format that can express all of the features found in the Kazakh transducer; implement this format in Apertium; implement the Kazakh transducer in this format and integrate it in the English--Kazakh pair.
| rationale = Some language pairs in Apertium use HFST where most language pairs use Apertium's own lttoolbox. This is because writing morphologies for languages with features such as the vowel harmony found in Turkic languages is very hard in the current format supported by lttoolbox. The mixture of HFST and lttoolbox makes it harder for people to develop some language pairs.
| mentors = [[User:Mlforcada|Mikel Forcada]], [[User:TommiPirinen|Tommi A Pirinen]], [[User:Unhammer]], mentors wanted
| more = /Extend lttoolbox to have the power of HFST
}}
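As a hint of why this is hard in a purely concatenative paradigm format, here is a toy sketch of Turkic-style two-way vowel harmony (simplified vowel classes and made-up data; real Kazakh morphology also conditions on consonants and more):

<pre>
# Toy illustration of front/back vowel harmony: a plural-like suffix
# surfaces as -ler after front vowels and -lar after back vowels.
# Each interacting condition like this multiplies the number of
# paradigm variants needed in a plain concatenative dictionary.
FRONT = set("eiöü")
BACK = set("aıou")

def plural(stem):
    for ch in reversed(stem):
        if ch in FRONT:
            return stem + "ler"
        if ch in BACK:
            return stem + "lar"
    return stem + "lar"  # arbitrary default when no vowel is found

print(plural("üy"))    # üyler   (front harmony)
print(plural("bala"))  # balalar (back harmony)
</pre>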
   
 
<!--
The task above has subsumed these two:
   
{{IdeaSummary
| name = Flag diacritics in lttoolbox
| difficulty = Hard
| skills = C++ or Java, XML, Knowledge of FSTs
| description = Adapt [[lttoolbox]] to elegantly use flag diacritics. Flag diacritics are a way of avoiding transducer size blow-up by discarding impossible paths at runtime as opposed to compile time. Some work has already been done, see [[Flag diacritics]].
| rationale = This will involve designing some changes to our XML dictionary format (see [[lttoolbox]]) and implementing the associated changes in the FST compiling and processing code. The reason is that many languages have prefix inflection, and we cannot currently deal with this without either making paradigms useless or overanalysing (e.g. returning analyses where none exist). Flag diacritics (or constraints) would allow us to restrict overanalysis without blowing up the size of our dictionaries.
| mentors = [[User:Francis Tyers|Francis Tyers]] (C++), [[User:Jacob Nordfalk|Jacob Nordfalk]] (Java)
| more = /Flag diacritics in lttoolbox
}}
 
   
{{IdeaSummary
| name = Weights in lttoolbox
| difficulty = Medium
| skills = C++, XML, FSTs
| description = [[lttoolbox]] is a set of tools for building finite-state transducers. As part of Apertium's long-term strategy we would like to include probabilistic information into more stages of the pipeline to allow generic tools to be optimised for machine translation. This task involves adding the possibility of weighting lexemes and analyses in our finite-state transducer toolbox.
| rationale = Weighting information for lexical forms will be useful for morphological disambiguation, and for work on [[spellchecking]].
| mentors = [[User:Francis Tyers|Francis Tyers]], [[User:TommiPirinen|Tommi Pirinen]]
| more = /Add weights to lttoolbox
}}
-->
   
{{IdeaSummary
| name = Extend weighted transfer rules
| difficulty = Hard
| skills = Python, C++, linguistics
| description = The purpose of this task is to extend weighted transfer rules to all transfer files and to allow conflicting rule patterns to be handled by combining (lexicalised) weights.
| rationale = Currently our transfer rules are applied longest-match left-to-right ([[LRLM]]). When two rule patterns conflict, the first one is chosen. We have a prototype for selecting based on lexicalised weights, but it only applies to the first stage of transfer.
| mentors = [[User:Francis Tyers|Francis Tyers]], [[User:TommiPirinen|Tommi Pirinen]], [[User:Sevilay Bayatlı|Sevilay Bayatlı]]
| more = /Weighted transfer rules
}}
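A minimal sketch of the intended selection mechanism (rule names, context conditioning and weights below are all made up; the real prototype works on transfer rule patterns):

<pre>
# Sketch: two rules match the same input span; instead of always taking
# the first (current LRLM behaviour), pick the one whose lexicalised
# weight for this context is highest. Rule names and weights are toys.
RULE_WEIGHTS = {
    ("adj-noun", "casa"): 0.9,   # weight of a rule given a context word
    ("noun-adj", "casa"): 0.3,
}

def choose_rule(matching_rules, context_word):
    def weight(rule):
        return RULE_WEIGHTS.get((rule, context_word), 0.0)
    return max(matching_rules, key=weight)

print(choose_rule(["noun-adj", "adj-noun"], "casa"))  # adj-noun
</pre>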
   
{{IdeaSummary
| name = Light alternative format for all XML files in an Apertium language pair
| difficulty = Hard
| skills = Python, C++, shell scripting, XSLT, flex
| description = Make it possible to edit and develop language data using a format that is lighter than XML
| rationale = In most Apertium language pairs, monolingual dictionaries, bilingual dictionaries, post-generation rule files and structural transfer rule files are all written in XML. While XML is easy to process due to explicit tagging of every element, it is tedious to deal with, particularly when it comes to structural transfer rules. Apertium's precursor, interNOSTRUM, had lighter text-based formats. The task involves: (a) designing and documenting an interNOSTRUM-style format for all of the XML language data files in a language pair; (b) writing converters to and from XML that are fully roundtrip-compliant; (c) designing a way to synchronize changes when both the XML and the non-XML format are used simultaneously in a specific language pair.
| mentors = [[User:Mlforcada|Mikel Forcada]], [[User:Japerez|Juan Antonio Pérez]]
| more = /Plain-text_formats_for_Apertium_data
}}
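A toy illustration of the roundtrip requirement, using a drastically simplified monodix-like entry (real .dix entries are much richer than this):

<pre>
# Toy roundtrip between a simplified monodix-like entry and a compact
# one-line format. Real Apertium .dix entries carry far more structure.
import xml.etree.ElementTree as ET

def to_text(xml_entry):
    e = ET.fromstring(xml_entry)
    lm = e.get("lm")
    par = e.find("par").get("n")
    return f"{lm} :: {par}"

def to_xml(line):
    lm, par = [s.strip() for s in line.split("::")]
    return f'<e lm="{lm}"><i>{lm}</i><par n="{par}"/></e>'

entry = '<e lm="house"><i>house</i><par n="house__n"/></e>'
line = to_text(entry)                 # 'house :: house__n'
assert to_text(to_xml(line)) == line  # the roundtrip must hold
print(line)
</pre>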
   
{{IdeaSummary
| name = Eliminate dictionary trimming
| difficulty = Very Hard
| skills = C++, Finite-State Transducers
| description = Eliminate the need for trimming the monolingual dictionaries, in order to preserve and take advantage of maximal source-language analysis.
| rationale = [[Why we trim]] mentions several technical reasons why trimming away monolingual information is currently needed. Unfortunately, this limitation means that a lot of useful contextual information is lost. It would be ideal if the source language could be fully analyzed independently of the target language, with any untranslated part fed back into the source-language generator. The task is to work around everything in [[Why we trim]].
| mentors = [[User:TommiPirinen|Flammie]], +1 '''You need to find at least 1 more mentor to apply for this task'''
| more = /Eliminate trimming
}}
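A minimal sketch of the intended behaviour, with toy data: analyse with the full monolingual dictionary, and fall back gracefully when the bilingual dictionary has no entry. The <code>*</code> and <code>@</code> marks loosely echo Apertium's stream-format markers for unknown and ungenerable words; everything else here is made up:

<pre>
# Sketch of "analyse fully, translate what you can": if a lemma is
# missing from the bidix, fall back to the source-language form instead
# of trimming the analysis away at compile time.
BIDIX = {"casa": "house"}              # toy bilingual dictionary
MONODIX = {"casas": ("casa", "n.pl")}  # toy full monolingual analyser

def translate_word(surface):
    if surface not in MONODIX:
        return "*" + surface           # truly unknown word
    lemma, tags = MONODIX[surface]
    if lemma in BIDIX:
        return f"{BIDIX[lemma]}<{tags}>"
    return "@" + surface               # analysed, but untranslatable

print(translate_word("casas"))   # house<n.pl>
print(translate_word("perros"))  # *perros
</pre>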
   
<!--
{{IdeaSummary
| name = Add weights to lttoolbox
| difficulty = Hard
| skills = C++
| description = Add support for weighted transducers to lttoolbox
| rationale = This will either involve implementing it from scratch or adding OpenFST as a backend. We would like to be able to use it both in the bilingual dictionaries and in the morphological analysers, to be able to order analyses/translations by their probability/weight instead of by the random topological order.
| mentors = [[User:Francis Tyers]] [[User:Unhammer]]
| more = /Add weights to lttoolbox
}}
-->
   
{{IdeaSummary
| name = Create FST-based module for disambiguating
| difficulty = medium
| skills = XML, a scripting language (Python, Perl), C++, finite-state transducers
| description = Implement a [[Constraint Grammar]]-like module based on finite-state transducers.
| rationale = Currently, many language pairs use [[Constraint grammar]] as a pre-disambiguator for the Apertium tagger, allowing the imposition of more fine-grained constraints than would otherwise be possible. However, the current implementation of CG is much slower than most of the other modules in the Apertium pipeline, and it is also very different in terms of syntax from other Apertium modules (dictionaries, lexical selection, transfer rules, etc.). There have been a few attempts to create FST versions of CG (see [[User:David_Nemeskey/GSOC_progress_2013]]), but they haven't succeeded. The hypothesis is that a simpler version of CG, supporting its main features (no need for feature parity), would see better adoption and integration within the Apertium pipeline.
| mentors = [[User:Xavivars|Xavi Ivars]], [[User:Francis Tyers|Francis Tyers]]
| more = /Apertium FST GC
}}
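For flavour, a toy sketch of the kind of constraint such a module would apply (made-up rule and tags; real CG is far richer): REMOVE verb readings after a determiner:

<pre>
# Toy CG-style disambiguation: REMOVE verb readings when the previous
# word is a determiner. Readings are (lemma, POS) pairs; data is made up.
def remove_verb_after_det(sentence):
    out = []
    prev_pos = None
    for readings in sentence:
        if prev_pos == "det" and len(readings) > 1:
            readings = [r for r in readings if r[1] != "vblex"] or readings
        out.append(readings)
        # only trust the POS of fully disambiguated words (simplification)
        prev_pos = readings[0][1] if len(readings) == 1 else None
    return out

sent = [[("the", "det")], [("book", "n"), ("book", "vblex")]]
print(remove_verb_after_det(sent))
# [[('the', 'det')], [('book', 'n')]]
</pre>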
   
{{IdeaSummary
| name = Learning distributed representations for Apertium modules
| difficulty = hard
| skills = Python, neural networks
| description =
| rationale =
| mentors = [[User:Francis Tyers|Francis Tyers]]
| more = /Distributed representations and Apertium
}}
   
== Tool Ideas ==

These are ideas for creating tools to help build modules and pairs.
{{IdeaSummary
| name = User-friendly lexical selection training
| difficulty = Medium
| skills = Python, C++, shell scripting
| description = Make training/inference of lexical selection rules a more user-friendly process
| rationale = Our lexical selection module allows rules to be inferred from corpora and word alignments, but the procedure is currently a bit messy, with various scripts involved that require lots of manual tweaking, and many third-party tools to be installed. The goal of this task is to make the procedure as user-friendly as possible, so that ideally only a simple config file would be needed, and a driver script would take care of the rest.
| mentors = [[User:Unhammer|Unhammer]], [[User:Mlforcada|Mikel Forcada]]
| more = /User-friendly lexical selection training
}}
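To suggest the intended user experience (the config keys, paths and step names here are hypothetical placeholders, to be designed during the project):

<pre>
# Hypothetical driver sketch: one config, one command. Step names and
# config keys are placeholders, not the real training tools.
config = {
    "pair": "eng-spa",
    "corpus": "corpus/europarl.eng-spa.txt",
    "aligner": "fast_align",
    "iterations": 3,
}

def tokenise(cfg): print("tokenising", cfg["corpus"])
def align(cfg): print("aligning with", cfg["aligner"])
def extract_rules(cfg): print("extracting rules,", cfg["iterations"], "iterations")

for step in (tokenise, align, extract_rules):
    step(config)  # today each of these is a separate script to tweak by hand
</pre>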
   
{{IdeaSummary
| name = Bilingual dictionary enrichment via graph completion
| difficulty = Very Hard
| skills = shell scripting, python, XSLT, XML
| description = Generate new entries for existing or new bilingual dictionaries using graph representations of the bilingual correspondences found in all existing dictionaries (note that this idea defines a rather open-ended task, to be discussed in detail with mentors).
| rationale = Apertium bilingual dictionaries establish correspondences between lexical forms in a number of language pairs. Connections among them may be used to infer new entries for existing or new language pairs using graphs. The graphs may be generated directly from Apertium bidixes and exploited using [[Bilingual_dictionary_discovery|ideas that have already been proposed in Apertium]], or using existing [http://linguistic.linkeddata.es/apertium/ RDF representations] of parts of their content, which may benefit from being linked to other resources. Some previous progress can be found [[Bilingual_dictionary_enrichment_via_graph_completion|here]].
| mentors = [[User:Mlforcada|Mikel Forcada]], [[User:Francis Tyers|Francis Tyers]], [[User:Jorge Gracia|Jorge Gracia]]
| more = Bilingual_dictionary_discovery
}}
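A minimal sketch of the core intuition with toy data: treat bidix entries as edges of a graph and propose new pairs through pivots. Real inference must handle polysemy, parts of speech and confidence scoring:

<pre>
# Toy graph completion over bidixes: if eng-spa has house-casa and
# spa-cat has casa-casa, propose eng-cat house-casa. Data is made up,
# and real inference must filter out false friends and polysemy.
from collections import defaultdict

edges = [
    ("eng:house", "spa:casa"),
    ("spa:casa", "cat:casa"),
]

graph = defaultdict(set)
for a, b in edges:
    graph[a].add(b)
    graph[b].add(a)

def propose(word, target_lang):
    found = set()
    for pivot in graph[word]:
        for nxt in graph[pivot]:
            if nxt.startswith(target_lang + ":") and nxt != word:
                found.add(nxt)
    return found

print(propose("eng:house", "cat"))  # {'cat:casa'}
</pre>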
<!--
{{IdeaSummary
| name = Transfer rule induction from comparable parsed corpora
| difficulty = Hard
| skills = shell scripting, python, XSLT, XML
| description = A system to infer transfer rules from comparable corpora that have both been deeply parsed (with e.g. CG)
| rationale = Many languages have good CGs and fairly large monolingual corpora, but little parallel material. Given a small bidix, fairly large monolingual corpora and good analysers/CGs, we should be able to parse both corpora, translate lemmas and look for similar sentences, turning the differences in their parses into transfer rules.
| mentors = [[User:Unhammer]]
| more = Transfer_induction_from_comparable_parsed_corpora
}}
-->
   
{{IdeaSummary
| name = A web interface for expanding dictionary lemmas, integrated with GitLab/GitHub
| difficulty = hard
| skills = Java, git, XML, NodeJS, Angular 8.
| description = Given that Apertium has a few dozen contributors and thousands of users, we propose a web graphical user interface (GUI) that enables lay users to contribute to the expansion of the dictionaries that make up the knowledge base of Apertium. The main premise of the solution is that users with minimal knowledge of the language can contribute easily, and that it must integrate with the current way expert users develop. Some prior progress can be found [https://web-dix-maintenance.appspot.com/ here].
| rationale = .
| mentors = [[User:Alessio|Aléssio Jr.]], mentors welcome.
| more = /Easy_dictionary_maintenance
}}
   
{{IdeaSummary
| name = Dictionary lookup with editing
| difficulty = hard
| skills = XML, git, JavaScript, any language for backend (Python?)
| description = A bilingual dictionary (the kind for people, not a bidix) contains various kinds of information ([http://perseus.uchicago.edu/cgi-bin/philologic/getobject.pl?c.17:3:39.LSJ example here]). Possible things to find in such a dictionary include inflected forms, translations, and phrases the word might occur in. It should be possible to extract this information from various files within a translation pair. Within the interface, users could make changes to that information, which can then be automatically converted to a pull request on Github. Some prior efforts at various kinds of dictionary lookup have been attempted [https://github.com/apertium/apertium-html-tools/issues/105 here].
| rationale = Dictionary lookup is something that would be useful to a lot of users, and fixing bilingual dictionary entries is something new people frequently want to do.
| mentors = [[User:Popcorndude|Popcorndude]], mentors welcome
| more = /Bidix_lookup_and_maintenance
}}
   
{{IdeaSummary
| name = Extract morphological data from FLEx
| difficulty = hard
| skills = python, XML parsing
| description = Write a program to extract data from [https://software.sil.org/fieldworks/ SIL FieldWorks] and convert as much of it as possible to monodix (and maybe bidix).
| rationale = There's a lot of potentially useful data in FieldWorks files, possibly enough to build a whole monodix for some languages, but it's currently really hard to use.
| mentors = [[User:Popcorndude|Popcorndude]], [[User:TommiPirinen|Flammie]]
| more = /FieldWorks_data_extraction
}}
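A rough sketch of the extraction direction (the XML element and attribute names below are simplified placeholders; the real FLEx/LIFT schema is different and much richer):

<pre>
# Rough sketch: walk a FieldWorks-style XML export and print monodix-like
# entries. Element names here are invented for illustration only.
import xml.etree.ElementTree as ET

LEXICON = """
<lexicon>
  <entry lemma="run" pos="v"/>
  <entry lemma="dog" pos="n"/>
</lexicon>
"""

root = ET.fromstring(LEXICON)
for entry in root.iter("entry"):
    lemma, pos = entry.get("lemma"), entry.get("pos")
    print(f'<e lm="{lemma}"><i>{lemma}</i><par n="{lemma}__{pos}"/></e>')
</pre>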
   
== Integration Ideas ==

These are ideas for making Apertium more useful in other places.
{{IdeaSummary
| name = Improvements to the Apertium website
| difficulty = Entry level
| skills = Python, HTML, JS
| description = Our web site is pretty cool already, but it's missing things like dictionary/synonym lookup, support for several variants of one language, reliability visualisation, (reliable) webpage translation, feedback, etc.
| rationale = [https://apertium.org https://apertium.org] / [http://beta.apertium.org http://beta.apertium.org] is what most people know us by; it should show off more of the things we are capable of :-)
| mentors = [[User:Firespeaker|Jonathan]], [[User:Sushain|Sushain]]
| more = /Apertium website improvements
}}
   
{{IdeaSummary
| name = UD and Apertium integration
| difficulty = Entry level
| skills = python, javascript, HTML, (C++)
| description = Create a range of tools for making Apertium compatible with Universal Dependencies
| rationale = Universal Dependencies is a fast-growing project aimed at creating a unified annotation scheme for treebanks. This includes both part-of-speech and morphological features. Their annotated corpora could be extremely useful to Apertium for training translation models. In addition, Apertium's rule-based morphological descriptions could be useful for software that relies on Universal Dependencies.
| mentors = [[User:Francis Tyers]] [[User:Firespeaker]]
| more = /UD and Apertium integration
}}
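One example of the kind of tool in scope: converting CoNLL-U part-of-speech tags to Apertium-style lexical units. The UPOS-to-Apertium mapping below is a partial guess for illustration; a real converter would need to handle full morphological features:

<pre>
# Sketch: read CoNLL-U lines and emit Apertium-style lexical units.
# The UPOS -> Apertium tag mapping is deliberately incomplete.
UPOS2APERTIUM = {"NOUN": "n", "VERB": "vblex", "DET": "det", "ADJ": "adj"}

def conllu_to_stream(conllu):
    units = []
    for line in conllu.strip().splitlines():
        if line.startswith("#") or not line.strip():
            continue
        cols = line.split("\t")
        form, lemma, upos = cols[1], cols[2], cols[3]
        tag = UPOS2APERTIUM.get(upos, upos.lower())
        units.append(f"^{form}/{lemma}<{tag}>$")
    return " ".join(units)

sample = "1\tdogs\tdog\tNOUN\t_\t_\t0\troot\t_\t_"
print(conllu_to_stream(sample))  # ^dogs/dog<n>$
</pre>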
 
{{IdeaSummary
| name = Improving language pairs by mining Mediawiki Content Translation postedits
| difficulty = Hard
| skills = Python, shell scripting, some statistics
| description = Implement a toolkit for mining existing machine translation postediting data in [https://www.mediawiki.org/wiki/Content_translation Mediawiki Content Translation] to generate (as automatically and as completely as possible) monodix and bidix entries to improve the performance of an Apertium language pair. Data is available from Wikimedia Content Translation through an [https://www.mediawiki.org/wiki/Content_translation/Published_translations#API API] or in the form of [https://dumps.wikimedia.org/other/contenttranslation/ dumps] available in JSON and TMX format. This project is rather experimental and involves some research in addition to coding.
| rationale = Apertium is used to generate new Wikipedia content: machine-translated content is postedited (and perhaps adapted) before publishing. Postediting data may contain information that can be used to improve the lexical components of an Apertium language pair.
| mentors = [[User:Mlforcada|Mikel Forcada]], (more mentors to be added)
| more = /automatic-postediting
}}
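A minimal sketch of the mining idea with toy sentences (real postedits come from the TMX/JSON dumps and need proper alignment and filtering):

<pre>
# Toy postedit mining: compare MT output with its postedited version and
# collect word substitutions as candidate dictionary fixes.
import difflib

mt       = "the casa is big".split()
postedit = "the house is big".split()

candidates = []
matcher = difflib.SequenceMatcher(a=mt, b=postedit)
for op, i1, i2, j1, j2 in matcher.get_opcodes():
    if op == "replace" and i2 - i1 == j2 - j1:
        candidates.extend(zip(mt[i1:i2], postedit[j1:j2]))

print(candidates)  # [('casa', 'house')]
</pre>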
 
 
 
{{IdeaSummary
| name = Improvements to UD Annotatrix
| difficulty = Medium
| skills = JavaScript, jQuery, HTML, Python
| description = UD Annotatrix is an interface by Apertium for annotating dependency trees in CoNLL-U format. The system is currently in beta, but it is getting traction as more people start using it.
| rationale = Universal Dependencies is a very widely used standard for annotating data, the kind of annotated data that can be used to train part-of-speech taggers. There is still a lot of work that could be done to improve it.
| mentors = [[User:Francis Tyers|Francis Tyers]], [[User:Firespeaker]]
| more = /UD Annotatrix
}}
   
<!-- done as of 2019 I think?

{{IdeaSummary
| name = Python API/library for Apertium
| skills = Python, C++, SWIG
| description = Implement a Python library for Apertium and Lttoolbox.
| rationale = Lots of people use Python; they like to use it in their Jupyter notebooks and on Microsoft Windows™. Apertium is really hard to get going in these kinds of environments, so it would be cool if we could make Apertium work for them too. It has a lot of nice language processing tools that we would like more people to use. I'm sure people would love to "pip install apertium" or "pip install apertium-ava". The API/implementation should be pythonistic and should use C++ bindings to perform morphological functions directly, avoiding the overhead of a separate process. Prior GSoC project work is available on [https://github.com/apertium/apertium-python GitHub].
| mentors = [[User:Sushain|Sushain]], [[User:Francis Tyers|Francis Tyers]], [[User:Unhammer|Unhammer]], [[User:Xavivars|Xavi Ivars]]
| more = /Python library
}}
-->
   
 
   
{{IdeaSummary
| name = Set up gap-filling machine-translation-for-gisting evaluation on a recent version of Appraise
| difficulty = hard
| skills = python, bash, git, XML editing.
| description = [http://github.com/mlforcada/Appraise http://github.com/mlforcada/Appraise], in turn forked from [http://github.com/mlforcada/Appraise http://github.com/mlforcada/Appraise], the work of a GSoC student, contains an adaptation of an old (2014) version of [http://github.com/cfedermann/Appraise http://github.com/cfedermann/Appraise] implementing gap-filling evaluation as described in [https://export.arxiv.org/pdf/1809.00315 this WMT2018 paper]. The objective is to make the gap-filling functionality in [http://github.com/mlforcada/Appraise] compatible with the latest versions of Appraise.
| rationale = Many language pairs in Apertium are unique, such as Breton-French, and many of them are used for gisting (understanding) purposes. Gap-filling provides a simple way to evaluate the usefulness of machine translation for gisting. The implementation used in recent experiments is based on an outdated version of the Appraise platform.
| mentors = [[User:Mlforcada|Mikel Forcada]], mentors welcome.
| more = /Appraise_gisting
}}
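The gap-filling protocol itself is easy to sketch (illustrative only; the linked WMT2018 paper defines the actual gap-selection methodology): hide content words in a reference translation and see whether readers of the MT output can restore them:

<pre>
# Toy gap-filling item generator: blank out every third word of a
# reference sentence. Real gap selection is more careful (see the
# WMT2018 paper linked above).
def make_gaps(reference, every=3):
    words = reference.split()
    gapped = [("____" if i % every == every - 1 else w)
              for i, w in enumerate(words)]
    return " ".join(gapped)

print(make_gaps("the cat sat on the mat"))
# the cat ____ on the ____
</pre>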
   
 
[[Category:Development]]

Revision as of 19:54, 29 March 2020

This is the ideas page for Google Summer of Code, here you can find ideas on interesting projects that would make Apertium more useful for people and improve or expand our functionality. If you have an idea please add it below, if you think you could mentor someone in a particular area, add your name to "Interested mentors" using ~~~

The page is intended as an overview of the kind of projects we have in mind. If one of them particularly piques your interest, please come and discuss with us on #apertium on irc.freenode.net, mail the mailing list, or draw attention to yourself in some other way.

Note that, if you have an idea that isn't mentioned here, we would be very interested to hear about it.

Here are some more things you could look at:


If you're a student trying to propose a topic, the recommended way is to request a wiki account and then go to

http://wiki.apertium.org/wiki/User:[[your username]]/GSoC2020Proposal

and click the "create" button near the top of the page. It's also nice to include [[Category:GSoC_2020_student_proposals]] to help organize submitted proposals.

Language Ideas

These are ideas that involve working with particular languages.


Bring a released language pair up to state-of-the-art quality

  • Difficulty:
    2. Medium
  • Size: default Unknown size
  • Required skills:
    XML, a scripting language (Python, Perl), good knowledge of the language pair adopted.
  • Description:
    Take a released language pair, and drastically improve the performance both in terms of coverage, and in terms of translation quality. This will involve working with dictionaries, transfer rules, scripting, corpora. The objective is to make an Apertium language pair state-of-the-art, or close to state-of-the-art in terms of translation quality. This will involve improving coverage to 95-98% on a range of corpora and decreasing word error rate by 30-50%. For example if the current word error rate is 30%, then it should be reduced to 15-20%.
  • Rationale:
    Apertium has quite a broad coverage of language pairs, but few of these pairs offer state-of-the-art translation quality. We think broad is important, but deep coverage is important too.
  • Mentors:
    Francis Tyers, Mikel Forcada, Xavi Ivars, Ilnar Salimzianov
  • read more...


Adopt an unreleased language pair


apertium-separable language-pair integration

  • Difficulty:
    2. Medium
  • Size: default Unknown size
  • Required skills:
    XML, a scripting language (Python, Perl), some knowledge of linguistics and/or at least one relevant natural language
  • Description:
    Choose a language you can identify as having a good number of "multiwords" in the lexicon. Modify all language pairs in Apertium to use the Apertium-separable module to process the multiwords, and clean up the dictionaries accordingly.
  • Rationale:
    Apertium-separable is a newly developed module to process lexical items with discontinguous dependencies, an area where Apertium has traditionally fallen short. Despite all the module has to offer, it has only been put to use in small test cases, and hasn't been integrated into any translation pair's development cycle.
  • Mentors:
    Francis Tyers, User:Firespeaker
  • read more...


Bring Apertium Occitan--French closer to posteditable quality.

  • Difficulty:
    2. Medium
  • Size: default Unknown size
  • Required skills:
    GNU/Linux advanced user, bash, git, XML editing, standard Occitan, French.
  • Description:
    The idea is to make Occitan output easier to postedit and French output easier to understand. This entails increasing the monolingual and bilingual dictionaries, improving disambiguation, and writing new structural transfer rules.
  • Rationale:
    The Occitan--French language pair has been recently published. This language pair is of strategic importance for the Occitan language, as Apertium offers the only machine translation system for this language pair.
  • Mentors:
    Mikel Forcada, mentors welcome.
  • read more...


Create a usable version of one of these language pairs: English--Igbo, English--Yoruba, English--Tigrinya, English--Swahili, English-Hausa

  • Difficulty:
    2. Medium
  • Size: Unknown
  • Required skills:
    GNU/Linux advanced user, bash, git, XML editing, English, Igbo/Yoruba/Tigrinya/Swahili/Hausa
  • Description:
    The objective is to start these language pairs (which haven't been started, or currently have very little data, in Apertium) and write a usable version which provides intelligible output.
  • Rationale:
    African languages are not particularly well served by Apertium. The five languages listed are quite important, and are currently served only by commercial machine translation companies such as Google, which makes these language communities dependent on a specific commercial provider.
  • Mentors:
    Mikel Forcada, mentors welcome.
  • read more...

Module/Pipeline Ideas

These are ideas for modifying things in the translation pipeline.


Robust tokenisation in lttoolbox

  • Difficulty:
    2. Medium
  • Size: Unknown
  • Required skills:
    C++, XML, Python
  • Description:
    Improve the longest-match left-to-right tokenisation strategy in lttoolbox to be fully Unicode compliant. (A toy tokeniser sketch follows this list.)
  • Rationale:
    One of the most frustrating things about working with Apertium on texts "in the wild" is the way tokenisation works. If a letter is not specified in the alphabet, it is treated as whitespace, so unknown words get split apart and you can end up with output like ^G$ö^k$ı^rmak$, which is terrible for further processing.
  • Mentors:
    Francis Tyers, Flammie
  • read more...
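
A toy illustration of the desired behaviour, with a hand-written LEXICON standing in for the compiled transducer: unknown words are kept in one piece regardless of which alphabet their letters belong to.

# Toy longest-match left-to-right tokeniser with a Unicode-safe fallback:
# letters outside the known alphabet no longer split a token.
LEXICON = {"Gökırmak", "river", "the"}
MAX_LEN = max(len(w) for w in LEXICON)

def tokenise(text):
    tokens, i = [], 0
    while i < len(text):
        if text[i].isspace():
            i += 1
            continue
        for length in range(min(MAX_LEN, len(text) - i), 0, -1):
            if text[i:i + length] in LEXICON:  # longest known match first
                tokens.append((text[i:i + length], "known"))
                i += length
                break
        else:
            # unknown word: consume every non-space character, whatever
            # alphabet it belongs to, instead of splitting it apart
            j = i
            while j < len(text) and not text[j].isspace():
                j += 1
            tokens.append((text[i:j], "unknown"))
            i = j
    return tokens

print(tokenise("the Gökırmak river"))   # all three tokens known
print(tokenise("the Çoruh river"))      # 'Çoruh' stays in one piece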


Extend lttoolbox to have the power of HFST

  • Difficulty:
    1. Hard
  • Size: Unknown
  • Required skills:
    C++, XSLT, XML
  • Description:
    Extend lttoolbox (perhaps writing a preprocessor for it) so that it can be used to do the morphological transformations currently done with HFST. This includes, of course, writing something that translates the current HFST format to the new lttoolbox format. Proof of concept: come up with a new format that can express all of the features found in the Kazakh transducer; implement this format in Apertium; implement the Kazakh transducer in this format and integrate it in the English--Kazakh pair.
  • Rationale:
    Some language pairs in Apertium use HFST where most language pairs use Apertium's own lttoolbox. This is because writing morphologies for languages with features such as the vowel harmony found in Turkic languages is very hard in the current format supported by lttoolbox (a toy harmony example follows this list). The mixture of HFST and lttoolbox makes it harder for people to develop some language pairs.
  • Mentors:
    Mikel Forcada, Tommi A Pirinen, Unhammer, mentors wanted
  • read more...
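
As a toy illustration of the kind of phenomenon involved (real Kazakh morphology also conditions suffixes on final consonants, so this is deliberately simplified): a Turkic-style plural suffix whose vowel harmonises with the last stem vowel. In plain lttoolbox paradigms this tends to mean duplicated entries; with harmony rules it is one suffix.

BACK, FRONT = set("aıou"), set("eiöü")

def plural(stem: str) -> str:
    # find the last vowel and pick -lar (back) or -ler (front)
    for ch in reversed(stem):
        if ch in BACK:
            return stem + "lar"
        if ch in FRONT:
            return stem + "ler"
    return stem + "lar"  # default for vowel-less stems

print(plural("bala"))  # balalar (back harmony)
print(plural("üy"))    # üyler   (front harmony)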


Extend weighted transfer rules

  • Difficulty:
    1. Hard
  • Size: Unknown
  • Required skills:
    Python, C++, linguistics
  • Description:
    The purpose of this task is to extend weighted transfer rules to all transfer files and to allow conflicting rule patterns to be handled by combining (lexicalised) weights. (A weight-based selection sketch follows this list.)
  • Rationale:
    Currently our transfer rules are applied longest-match left-to-right (LRLM). When two rule patterns conflict, the first one is chosen. We have a prototype for selecting rules based on lexicalised weights, but it only applies to the first stage of transfer.
  • Mentors:
    Francis Tyers, Tommi Pirinen, Sevilay Bayatlı
  • read more...
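
A sketch of weight-based rule selection: when several rule patterns match the same input span, pick the one whose combined (lexicalised) weights score highest instead of always taking the first. Rule names, patterns and weights are invented for illustration.

RULES = [
    {"name": "det-adj-n", "pattern": ("det", "adj", "n"), "weight": 1.2},
    {"name": "det-adj",   "pattern": ("det", "adj"),      "weight": 0.7},
]
# per-lemma extra weight (the lexicalised part)
LEX_WEIGHTS = {("det-adj-n", "big"): 0.9}

def best_rule(tags, lemmas):
    candidates = []
    for rule in RULES:
        n = len(rule["pattern"])
        if tuple(tags[:n]) == rule["pattern"]:
            score = rule["weight"] + sum(
                LEX_WEIGHTS.get((rule["name"], lem), 0.0)
                for lem in lemmas[:n])
            candidates.append((score, rule["name"]))
    return max(candidates) if candidates else None

print(best_rule(["det", "adj", "n"], ["the", "big", "dog"]))
# (2.1, 'det-adj-n') beats the shorter 'det-adj' match at 0.7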


Light alternative format for all XML files in an Apertium language pair

  • Difficulty:
    1. Hard
  • Size: Unknown
  • Required skills:
    Python, C++, shell scripting, XSLT, flex
  • Description:
    Make it possible to edit and develop language data using a format that is lighter than XML
  • Rationale:
    In most Apertium language pairs, monolingual dictionaries, bilingual dictionaries, post-generation rule files and structural transfer rule files are all written in XML. While XML is easy to process due to explicit tagging of every element, it is tedious to deal with, particularly when it comes to structural transfer rules. Apertium's precursor, interNOSTRUM, had lighter text-based formats. The task involves: (a) designing and documenting an interNOSTRUM-style format for all of the XML language data files in a language pair; (b) writing converters to and from XML that are fully roundtrip-compliant (a toy roundtrip sketch follows this list); (c) designing a way to synchronize changes when both the XML and the non-XML format are used simultaneously in a specific language pair.
  • Mentors:
    Mikel Forcada, Juan Antonio Pérez
  • read more...
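
A toy roundtrip sketch for requirement (b), with an invented one-line-per-entry "light" syntax ("lemma:paradigm"); the real task covers full monodix, bidix, post-generation and transfer files.

import xml.etree.ElementTree as ET

def light_to_xml(lines):
    section = ET.Element("section")
    for line in lines:
        lemma, paradigm = line.split(":")
        e = ET.SubElement(section, "e", lm=lemma)
        ET.SubElement(e, "par", n=paradigm)
    return section

def xml_to_light(section):
    return [f'{e.get("lm")}:{e.find("par").get("n")}' for e in section]

original = ["beer:beer__n", "school:house__n"]  # paradigm named after model word
assert xml_to_light(light_to_xml(original)) == original  # roundtrip-compliant
print(ET.tostring(light_to_xml(original), encoding="unicode"))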


Eliminate dictionary trimming

  • Difficulty:
    0. Very Hard
  • Size: Unknown
  • Required skills:
    C++, Finite-State Transducers
  • Description:
    Eliminate the need for trimming the monolingual dictionaries, in order to preserve and take advantage of maximal source language analysis.
  • Rationale:
    The wiki page "Why we trim" mentions several technical reasons why trimming away monolingual information is currently needed. Unfortunately, this limitation means that a lot of useful contextual information is lost. It would be ideal if the source language could be fully analyzed independently of the target language, with any untranslated part fed back into the source-language generator. (A toy sketch of this behaviour follows this list.)
  • Work around everything in "Why we trim"
  • Mentors:
    Flammie; at least one more mentor is needed to apply for this task
  • read more...
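
A toy sketch of the behaviour the task aims at, with dictionaries reduced to Python dicts: analyse with the full monolingual data, and only at translation time fall back for analyses the bidix cannot handle, rather than trimming them away up front.

ANALYSES = {"dog": ("dog", "n.sg"), "dogs": ("dog", "n.pl"),
            "aardwolf": ("aardwolf", "n.sg")}       # full monodix
BIDIX = {"dog": "gos"}                              # smaller bidix

def translate_word(surface):
    if surface not in ANALYSES:
        return "*" + surface                        # genuinely unknown
    lemma, tags = ANALYSES[surface]
    if lemma in BIDIX:
        return f"{BIDIX[lemma]}<{tags}>"            # normal transfer path
    # analysed but untranslatable: the analysis stays available (e.g. for
    # context in transfer rules) and the surface form is regenerated
    return "@" + surface

print([translate_word(w) for w in ("dogs", "aardwolf", "xyzzy")])
# ['gos<n.pl>', '@aardwolf', '*xyzzy']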


Create FST-based module for disambiguating

  • Difficulty:
    2. Medium
  • Size: Unknown
  • Required skills:
    XML, a scripting language (Python, Perl), C++, finite-state transducers
  • Description:
    Implement a Constraint Grammar-like module based on finite-state transducers.
  • Rationale:
    Currently, many language pairs use Constraint Grammar as a pre-disambiguator for the Apertium tagger, allowing the imposition of more fine-grained constraints than would otherwise be possible. However, the current implementation of CG is much slower than most of the other modules in the Apertium pipeline, and its syntax is very different from that of other Apertium modules (dictionaries, lexical selection, transfer rules, etc.). There have been a few attempts to create FST versions of CG (see User:David_Nemeskey/GSOC_progress_2013), but they haven't succeeded. The hypothesis is that a simpler version of CG, one that supports the main features of CG without aiming for feature parity, would see better adoption and integration within the Apertium pipeline. (A sketch of the kind of rule involved follows this list.)
  • Mentors:
    Xavi Ivars, Francis Tyers
  • read more...
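
A sketch of the kind of constraint such a module would compile into a transducer, applied here naively over per-word reading lists; the rule syntax and tag set are simplified for illustration.

# each cohort: (surface, [readings]); a reading is a tuple of tags
sentence = [
    ("the",  [("det",)]),
    ("book", [("n", "sg"), ("vblex", "inf")]),
]

def remove_if_prev(sentence, target, prev):
    """REMOVE `target` readings IF (-1 `prev`), but never the last reading."""
    out = []
    for i, (surface, readings) in enumerate(sentence):
        if i > 0 and any(prev in r for r in sentence[i - 1][1]):
            kept = [r for r in readings if target not in r]
            readings = kept or readings  # CG never empties a cohort
        out.append((surface, readings))
    return out

print(remove_if_prev(sentence, target="vblex", prev="det"))
# [('the', [('det',)]), ('book', [('n', 'sg')])]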


Learning distributed representations for Apertium modules

  • Difficulty:
    1. Hard
  • Size: Unknown
  • Required skills:
    Python, neural networks
  • Description:
  • Rationale:
  • Mentors:
    Francis Tyers
  • read more...

Tool Ideas

These are ideas for creating tools to help build modules and pairs.


User-friendly lexical selection training

  • Difficulty:
    2. Medium
  • Size: Unknown
  • Required skills:
    Python, C++, shell scripting
  • Description:
    Make it so that training/inference of lexical selection rules is a more user-friendly process
  • Rationale:
    Our lexical selection module allows for inferring rules from corpora and word alignments, but the procedure is currently a bit messy, with various scripts involved that require lots of manual tweaking, and many third-party tools that need to be installed. The goal of this task is to make the procedure as user-friendly as possible, so that ideally only a simple config file would be needed and a driver script would take care of the rest. (A sketch of that experience follows this list.)
  • Mentors:
    Unhammer, Mikel Forcada
  • read more...
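
A sketch of the intended user experience; the config keys and step names are hypothetical placeholders for calls to the real corpus, alignment and rule-inference tools.

CONFIG = {
    "pair": "apertium-eng-spa",
    "corpus_source": "corpus.eng.txt",
    "corpus_target": "corpus.spa.txt",
    "aligner": "fast_align",
    "min_rule_count": 5,
    "output": "eng-spa.lrx",
}

def run(config):
    # a real driver would shell out to the analyser, the word aligner and
    # the rule-inference scripts; here each step just reports its inputs
    steps = [
        ("analyse corpora", [config["corpus_source"],
                             config["corpus_target"]]),
        ("word-align", [config["aligner"]]),
        ("extract candidate rules", [f"min_count={config['min_rule_count']}"]),
        ("write rules", [config["output"]]),
    ]
    for name, args in steps:
        print(f"[{config['pair']}] {name}: {', '.join(args)}")

run(CONFIG)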


Bilingual dictionary enrichment via graph completion

  • Difficulty:
    0. Very Hard
  • Size: Unknown
  • Required skills:
    shell scripting, python, XSLT, XML
  • Description:
    Generate new entries for existing or new bilingual dictionaries using graph representations of the bilingual correspondences found in all existing dictionaries (note that this idea defines a rather open-ended task to be discussed in detail with mentors).
  • Rationale:
    Apertium bilingual dictionaries establish correspondences between lexical forms in a number of language pairs. Connections among them may be used to infer new entries for existing or new language pairs using graphs. The graphs may be generated directly from Apertium bidixes, exploiting ideas that have already been proposed in Apertium, or from existing RDF representations of parts of their content, which may benefit from being linked to other resources. Some previous progress can be found here. (A toy inference sketch follows this list.)
  • Mentors:
    Mikel Forcada, Francis Tyers, Jorge Gracia
  • read more...
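
A toy sketch of the graph idea: treat every bidix entry as an edge between (language, lemma) nodes and propose translations for unconnected nodes that share a neighbour. Real inference must also handle part of speech, word senses and confidence scoring; the data here is invented.

from collections import defaultdict
from itertools import combinations

EDGES = [  # (lang1, lemma1, lang2, lemma2) drawn from existing bidixes
    ("eng", "beer", "spa", "cerveza"),
    ("spa", "cerveza", "cat", "cervesa"),
    ("eng", "beer", "fra", "bière"),
]

graph = defaultdict(set)
for l1, w1, l2, w2 in EDGES:
    graph[(l1, w1)].add((l2, w2))
    graph[(l2, w2)].add((l1, w1))

def candidates(graph):
    # pairs of nodes in different languages that share a neighbour
    # but are not yet connected themselves
    for pivot, neighbours in graph.items():
        for a, b in combinations(sorted(neighbours), 2):
            if a[0] != b[0] and b not in graph.get(a, set()):
                yield a, b, pivot

for a, b, pivot in candidates(dict(graph)):
    print(f"candidate {a} ~ {b} (via {pivot})")
# candidate ('fra', 'bière') ~ ('spa', 'cerveza') (via ('eng', 'beer'))
# candidate ('cat', 'cervesa') ~ ('eng', 'beer') (via ('spa', 'cerveza'))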


A web interface for expanding dictionary lemmas, integrated with GitLab/GitHub

  • Difficulty:
    1. Hard
  • Size: Unknown
  • Required skills:
    Java, git, XML, Node.js, Angular 8.
  • Description:
    Given that Apertium has a few dozen contributors and thousands of users, we propose a web graphical user interface (GUI) that enables lay users to contribute to the expansion of the dictionaries that make up the knowledge base of Apertium.

The main premise of the solution is that users with minimal knowledge of the language can contribute easily, and that it must integrate with the current development workflow of expert users. Some prior progress can be found here.


Dictionary lookup with editing

  • Difficulty:
    1. Hard
  • Size: Unknown
  • Required skills:
    XML, git, JavaScript, any language for backend (Python?)
  • Description:
    A bilingual dictionary (the kind for people, not a bidix) contains various kinds of information (example here). Possible things to find in such a dictionary include inflected forms, translations, and phrases that the word might occur in. It should be possible to extract this information from various files within a translation pair. Within the interface, users could make changes to that information, which could then be automatically converted into a pull request on GitHub. Some prior efforts at various kinds of dictionary lookup have been attempted here. (A sketch of the lookup index follows this list.)
  • Rationale:
    Dictionary lookup is something that would be useful to a lot of users and fixing bilingual dictionary entries is something new people frequently want to do.
  • Mentors:
    Popcorndude, mentors welcome
  • read more...
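
A sketch of the lookup backend, indexing the expanded forms of a monolingual dictionary. The "surface:lemma<tags>" lines mimic lt-expand output, but the exact format should be checked against a real pair.

from collections import defaultdict

EXPANDED = """\
beer:beer<n><sg>
beers:beer<n><pl>
brew:brew<vblex><inf>
"""

def index_forms(expanded):
    # map each lemma to all of its (surface form, analysis) pairs
    by_lemma = defaultdict(list)
    for line in expanded.splitlines():
        surface, analysis = line.split(":", 1)
        lemma = analysis.split("<", 1)[0]
        by_lemma[lemma].append((surface, analysis))
    return by_lemma

print(index_forms(EXPANDED)["beer"])
# [('beer', 'beer<n><sg>'), ('beers', 'beer<n><pl>')]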


Extract morphological data from FLEx

  • Difficulty:
    1. Hard
  • Size: Unknown
  • Required skills:
    python, XML parsing
  • Description:
    Write a program to extract data from SIL FieldWorks and convert as much as possible to monodix (and maybe bidix).
  • Rationale:
    There's a lot of potentially useful data in FieldWorks files; it might be enough to build a whole monodix for some languages, but it's currently really hard to use. (A sketch of the extraction step follows this list.)
  • Mentors:
    Popcorndude, Flammie
  • read more...
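
A sketch of the extraction step, assuming the FLEx lexicon has been exported to LIFT (SIL's XML interchange format). The element names and the tag mapping are assumptions to verify against real exports; output is a crude monodix entry per lexeme.

import xml.etree.ElementTree as ET

POS_MAP = {"Noun": "n", "Verb": "vblex"}  # assumed tag mapping

def lift_to_monodix(path):
    entries = []
    for entry in ET.parse(path).getroot().iter("entry"):
        form = entry.find("./lexical-unit/form/text")
        gram = entry.find("./sense/grammatical-info")
        if form is None or gram is None:
            continue  # skip incomplete entries
        pos = POS_MAP.get(gram.get("value"))
        if pos:
            entries.append(f'<e lm="{form.text}"><i>{form.text}</i>'
                           f'<par n="__{pos}"/></e>')
    return entries

# print("\n".join(lift_to_monodix("lexicon.lift")))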

Integration Ideas

These are ideas for making Apertium more useful in other places.


Improvements to the Apertium website

  • Difficulty:
    3. Entry level
  • Size: Unknown
  • Required skills:
    Python, HTML, JS
  • Description:
    Our web site is pretty cool already, but it's missing things like dictionary/synonym lookup, support for several variants of one language, reliability visualisation, (reliable) webpage translation, feedback, etc.
  • Rationale:
    https://apertium.org / http://beta.apertium.org is what most people know us by; it should show off more of the things we are capable of :-)
  • Mentors:
    Jonathan, Sushain
  • read more...


UD and Apertium integration

  • Difficulty:
    3. Entry level
  • Size: Unknown
  • Required skills:
    python, javascript, HTML, (C++)
  • Description:
    Create a range of tools for making Apertium compatible with Universal Dependencies
  • Rationale:
    Universal Dependencies is a fast-growing project aimed at creating a unified annotation scheme for treebanks, covering both part-of-speech and morphological features. Its annotated corpora could be extremely useful to Apertium for training translation models. In addition, Apertium's rule-based morphological descriptions could be useful to software that relies on Universal Dependencies. (A sketch of one conversion direction follows this list.)
  • Mentors:
    Francis Tyers, Firespeaker
  • read more...
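
A sketch of one direction of the integration, turning Apertium lexical units into CoNLL-U rows; the tag map is a tiny assumed subset of what a real converter (with full feature handling) would need.

import re

APERTIUM_TO_UPOS = {"n": "NOUN", "vblex": "VERB", "det": "DET", "adj": "ADJ"}

def stream_to_conllu(stream):
    rows = []
    # lexical units look like ^surface/lemma<tag1><tag2>$
    for i, m in enumerate(re.finditer(r"\^(.+?)/(.+?)\$", stream), start=1):
        surface, analysis = m.group(1), m.group(2)
        lemma = analysis.split("<")[0]
        tags = re.findall(r"<(.+?)>", analysis)
        upos = APERTIUM_TO_UPOS.get(tags[0], "X") if tags else "X"
        # ID FORM LEMMA UPOS then six unfilled CoNLL-U columns
        rows.append(f"{i}\t{surface}\t{lemma}\t{upos}\t_\t_\t_\t_\t_\t_")
    return "\n".join(rows)

print(stream_to_conllu("^The/the<det><def>$ ^dogs/dog<n><pl>$"))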


Improving language pairs by mining Mediawiki Content Translation postedits

  • Difficulty:
    1. Hard
  • Size: Unknown
  • Required skills:
    Python, shell scripting, some statistics
  • Description:
    Implement a toolkit for mining existing machine translation postediting data in Mediawiki Content Translation (https://www.mediawiki.org/wiki/Content_translation) to generate, as automatically and as completely as possible, monodix and bidix entries that improve the performance of an Apertium language pair. Data is available from Wikimedia Content Translation through an API (https://www.mediawiki.org/wiki/Content_translation/Published_translations#API) or in the form of dumps (https://dumps.wikimedia.org/other/contenttranslation/) available in JSON and TMX format. This project is rather experimental and involves some research in addition to coding. (A sketch of the mining step follows this list.)
  • Rationale:
    Apertium is used to generate new Wikipedia content: machine-translated content is postedited (and perhaps adapted) before publishing. Postediting information may contain information that can be used to help improve the lexical components of an Apertium language pair.
  • Mentors:
    Mikel Forcada, (more mentors to be added)
  • read more...
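
A sketch of the mining step on a single segment pair: diff the raw MT output against its postedited version and collect single-token substitutions as candidate lexical fixes. Real mining would work on the TMX/JSON dumps and filter far more aggressively.

from collections import Counter
from difflib import SequenceMatcher

def substitutions(mt, postedit):
    # yield single-token replacements between MT output and its postedit
    a, b = mt.split(), postedit.split()
    for op, i1, i2, j1, j2 in SequenceMatcher(None, a, b).get_opcodes():
        if op == "replace" and i2 - i1 == 1 and j2 - j1 == 1:
            yield a[i1], b[j1]

counts = Counter()
pairs = [("she drank a beer last night", "she drank a lager last night")]
for mt, pe in pairs:
    counts.update(substitutions(mt, pe))
print(counts.most_common())  # [(('beer', 'lager'), 1)]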


Improvements to UD Annotatrix

  • Difficulty:
    2. Medium
  • Size: Unknown
  • Required skills:
    JavaScript, Jquery, HTML, Python
  • Description:
    UD Annotatrix is an interface by Apertium for annotating dependency trees in CoNLL-U format. The system is currently in beta, but is getting traction as more people start using it.
  • Rationale:
    Universal Dependencies is a very widely used standard for annotating data, the kind of annotated data that can be used to train part-of-speech taggers. There is still a lot of work that could be done to improve Annotatrix.
  • Mentors:
    Francis Tyers, User:Firespeaker
  • read more...


TIPP functionality for Apertium

  • Difficulty:
    2. Medium
  • Size: Unknown
  • Required skills:
    Python, C++, XML, XLIFF (a subset called XLIFF:doc)
  • Description:
    TIPP, the TMS Interoperability Protocol Package (where TMS means translation management system) [1], currently in version 1.5 but being upgraded to 2.0, specifies a container (package format) that allows the interchange of information along a translation value chain. There are various container varieties for different tasks. One such variety, called Translate-Strict-Bitext, represents a bilingual translation job. The 'request' TIPP would contain an XLIFF:doc ([2], a subset of XLIFF 1 [3]) file with the document to be translated and the corresponding metadata, and the corresponding 'response' TIPP would contain the results of Apertium MT applied to it, taking into account the translation memory provided in the TIPP, if any (using Apertium's -m switch). Apertium should be endowed with the capacity to manage TIPP packages: unpack the request package, parse it, process it, and repack it. (A sketch of that cycle follows this list.)
  • Rationale:
  • Mentors:
    Mikel Forcada
  • read more...
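
A sketch of the unpack, translate and repack cycle. TIPP packages are zip containers, but the member names used here ('pobjects/...') and the translate() hook are placeholders; the real layout must follow the TIPP and XLIFF:doc specifications.

import zipfile
import xml.etree.ElementTree as ET

XLF = "{urn:oasis:names:tc:xliff:document:1.2}"  # XLIFF 1.2 namespace

def process_tipp(request_path, response_path, translate):
    # unpack: read the payload XLIFF out of the request package
    with zipfile.ZipFile(request_path) as req:
        with req.open("pobjects/input.xlf") as f:  # placeholder member name
            tree = ET.parse(f)
    # process: fill each trans-unit's target with the MT output
    for unit in tree.iter(f"{XLF}trans-unit"):
        source = unit.find(f"{XLF}source")
        target = unit.find(f"{XLF}target")
        if target is None:
            target = ET.SubElement(unit, f"{XLF}target")
        target.text = translate(source.text or "")
    # repack: write the translated XLIFF into the response package
    with zipfile.ZipFile(response_path, "w") as resp:
        resp.writestr("pobjects/output.xlf",
                      ET.tostring(tree.getroot(), encoding="unicode"))

# process_tipp("request.tipp", "response.tipp", translate=lambda s: s.upper())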


Set up gap-filling machine-translation-for-gisting evaluation on a recent version of Appraise

  • Difficulty:
    1. Hard
  • Size: Unknown
  • Required skills:
    python, bash, git, XML editing.
  • Description:
    http://github.com/mlforcada/Appraise, the work of a GSoC student, contains an adaptation of an old (2014) version of http://github.com/cfedermann/Appraise that implements gap-filling evaluation as described in this WMT2018 paper. The objective is to bring the gap-filling functionality in [4] up to date with the latest versions of Appraise. (A sketch of how gap-filling items are built follows this list.)
  • Rationale:
    Many language pairs in Apertium are unique, such as Breton-French, and many of them are used for gisting (understanding) purposes. Gap-filling provides a simple way to evaluate the usefulness of machine translation for gisting. The implementation used in recent experiments is based on an outdated version of the Appraise platform.
  • Mentors:
    Mikel Forcada, mentors welcome.
  • read more...
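
A sketch of how gap-filling items could be built from a reference translation: hide every n-th content word, then ask evaluators to fill the gaps with and without seeing the MT output; the success-rate difference measures gisting usefulness. The stopword list and gap rate are illustrative choices, not what the WMT2018 paper prescribes.

STOPWORDS = {"the", "a", "an", "of", "to", "in", "and", "is"}

def make_gapped(reference, rate=3):
    # hide every `rate`-th content word; return the gapped item and the key
    tokens, answers, nth = [], [], 0
    for tok in reference.split():
        if tok.lower() not in STOPWORDS:
            nth += 1
            if nth % rate == 0:
                answers.append(tok)
                tokens.append("____")
                continue
        tokens.append(tok)
    return " ".join(tokens), answers

item, key = make_gapped("the fisherman repaired the old boat in the harbour")
print(item)  # the fisherman repaired the ____ boat in the harbour
print(key)   # ['old']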