Ideas for Google Summer of Code

From Apertium
{{TOCD}}
 
This is the ideas page for [[Google Summer of Code]]. Here you can find ideas for interesting projects that would make Apertium more useful for people and improve or expand our functionality.
   
'''Current Apertium contributors''': If you have an idea, please add it below. If you think you could mentor someone in a particular area, add your name to "Interested mentors" using <code><nowiki>~~~</nowiki></code>.
'''Prospective GSoC contributors''': The page is intended as an overview of the kind of projects we have in mind. If one of them particularly piques your interest, please come and discuss with us on <code>#apertium</code> on <code>irc.oftc.net</code> ([[IRC|more on IRC]]), mail the [[Contact|mailing list]], or draw attention to yourself in some other way.
Note that if you have an idea that isn't mentioned here, we would be very interested to hear about it.
   
 
Here are some more things you could look at:
 
* Resources that could be converted or expanded in the [[incubator]]. Consider doing or improving a language pair (see [[incubator]], [[nursery]] and [[staging]] for pairs that need work)
 
* Unhammer's [[User:Unhammer/wishlist|wishlist]]
 
<!--* The open issues [https://github.com/search?q=org%3Aapertium&state=open&type=Issues on Github] - especially the [https://github.com/search?q=org%3Aapertium+label%3A%22good+first+issue%22&state=open&type=Issues Good First Issues]. -->
   
 
__TOC__
 
If you're a prospective GSoC contributor trying to propose a topic, the recommended way is to request a wiki account and then go to <pre>http://wiki.apertium.org/wiki/User:[[your username]]/GSoC2023Proposal</pre> and click the "create" button near the top of the page. It's also nice to include <code><nowiki>[[</nowiki>[[:Category:GSoC_2023_student_proposals|Category:GSoC_2023_student_proposals]]<nowiki>]]</nowiki></code> to help organize submitted proposals.
   
== Language Data ==
Can you read or write a language other than English (and we do mean any language)? If so, you can help with one of these and we can help you figure out the technical parts.
   
 
{{IdeaSummary
| name = Develop a morphological analyser
| difficulty = easy
| size = either
| skills = XML or HFST or lexd
| description = Write a morphological analyser and generator for a language that does not yet have one
| rationale = A key part of an Apertium machine translation system is a morphological analyser and generator. The objective of this task is to create an analyser for a language that does not yet have one.
| mentors = [[User:Francis Tyers|Francis Tyers]], [[User:Firespeaker|Jonathan Washington]], [[User:Sevilay Bayatlı|Sevilay Bayatlı]], Hossep, nlhowell, [[User:Popcorndude]]
| more = /Morphological analyser
}}
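To give a sense of what this involves, here is a minimal, illustrative [[lttoolbox]] monodix sketch (toy entries invented for this example, not taken from any real language module). Running it through <code>lt-proc</code> should analyse ''houses'' as something like <code>^houses/house&lt;n&gt;&lt;pl&gt;$</code>:

```xml
<dictionary>
  <alphabet>abcdefghijklmnopqrstuvwxyz</alphabet>
  <sdefs>
    <sdef n="n"  c="noun"/>
    <sdef n="sg" c="singular"/>
    <sdef n="pl" c="plural"/>
  </sdefs>
  <pardefs>
    <!-- inflection paradigm: zero suffix for singular, -s for plural -->
    <pardef n="house__n">
      <e><p><l></l><r><s n="n"/><s n="sg"/></r></p></e>
      <e><p><l>s</l><r><s n="n"/><s n="pl"/></r></p></e>
    </pardef>
  </pardefs>
  <section id="main" type="standard">
    <e lm="house"><i>house</i><par n="house__n"/></e>
  </section>
</dictionary>
```

A real analyser is mostly a matter of writing many such paradigms and lemma entries; the [[Morphological dictionary]] documentation covers the format in detail.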
   
 
{{IdeaSummary
| name = apertium-separable language-pair integration
| difficulty = Medium
| size = small
| skills = XML, a scripting language (Python, Perl), some knowledge of linguistics and/or at least one relevant natural language
| description = Choose a language you can identify as having a good number of "multiwords" in the lexicon. Modify all language pairs in Apertium to use the [[Apertium-separable]] module to process the multiwords, and clean up the dictionaries accordingly.
| rationale = Apertium-separable is a newish module to process lexical items with discontinuous dependencies, an area where Apertium has traditionally fallen short. Despite all the module has to offer, many translation pairs still don't use it.
| mentors = [[User:Firespeaker|Jonathan Washington]], [[User:Popcorndude]]
| more = /Apertium separable
}}
   
 
{{IdeaSummary
| name = Bring an unreleased translation pair to releasable quality
| difficulty = Medium
| size = large
| skills = shell scripting
| description = Take an unstable language pair and improve its quality, focusing on testvoc
| rationale = Many Apertium language pairs have large dictionaries and have otherwise seen much development, but are not of releasable quality. The point of this project would be to bring one translation pair to releasable quality. This would entail obtaining good naïve coverage and a clean [[testvoc]].
| mentors = [[User:Firespeaker|Jonathan Washington]], [[User:Sevilay Bayatlı|Sevilay Bayatlı]], [[User:Unhammer]], [[User:hectoralos|Hèctor Alòs i Font]]
| more = /Make a language pair state-of-the-art
}}
   
 
{{IdeaSummary
| name = Develop a prototype MT system for a strategic language pair
| difficulty = Medium
| size = large
| skills = XML, some knowledge of linguistics and of one relevant natural language
| description = Create a translation pair based on two existing language modules, focusing on the dictionary and structural transfer
| rationale = Choose a strategic set of languages to develop an MT system for, such that you know the target language well and morphological transducers for each language are part of Apertium. Develop an Apertium MT system by focusing on writing a bilingual dictionary and structural transfer rules. Expanding the transducers and disambiguation, and writing lexical selection rules and multiword sequences may also be part of the work. The pair may be an existing prototype, but if it's a heavily developed but unreleased pair, consider applying for "Bring an unreleased translation pair to releasable quality" instead.
| mentors = [[User:Firespeaker|Jonathan Washington]], [[User:Sevilay Bayatlı|Sevilay Bayatlı]], [[User:Unhammer]], [[User:hectoralos|Hèctor Alòs i Font]]
| more = /Adopt a language pair
}}
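The core of such a pair is the bilingual dictionary (bidix), which maps source-language lemmas and tags to target-language ones. A minimal illustrative sketch (hypothetical Spanish–English entries, toy data for this example only):

```xml
<dictionary>
  <sdefs>
    <sdef n="n" c="noun"/>
  </sdefs>
  <section id="main" type="standard">
    <!-- left side = source-language lemma and tags, right side = target-language -->
    <e><p><l>casa<s n="n"/></l><r>house<s n="n"/></r></p></e>
    <e><p><l>perro<s n="n"/></l><r>dog<s n="n"/></r></p></e>
  </section>
</dictionary>
```

Structural transfer rules then handle reordering and agreement; see [[Bilingual dictionary]] and [[Transfer rules]] for the full formats.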
{{IdeaSummary
| name = Add a new variety to an existing language
| difficulty = easy
| size = either
| skills = XML, some knowledge of linguistics and of one relevant natural language
| description = Add a language variety to one or more released pairs, focusing on the dictionary and lexical selection
| rationale = Take a released language and define a new language variety for it: e.g. Quebec French or Provençal Occitan. Then add the new variety to one or more released language pairs, without diminishing the quality of the pre-existing variety(ies). The objective is to facilitate the generation of varieties for languages with a weak standardisation and/or pluricentric languages.
| mentors = [[User:hectoralos|Hèctor Alòs i Font]], [[User:Firespeaker|Jonathan Washington]], [[User:piraye|Sevilay Bayatlı]]
| more = /Add a new variety to an existing language
}}
   
 
{{IdeaSummary
| name = Leverage and integrate language preferences into language pairs
| difficulty = easy
| size = medium
| skills = XML, some knowledge of linguistics and of one relevant natural language
| description = Update language pairs with lexical and orthographical variations to leverage the new [[Dialectal_or_standard_variation|preferences]] functionality
| rationale = Currently, preferences are implemented via language variants, which rely on multiple dictionaries, increasing compilation time exponentially every time a new preference gets introduced.
| mentors = [[User:Xavivars|Xavi Ivars]], [[User:Unhammer]]
| more = /Use preferences in pair
}}
   
 
{{IdeaSummary
| name = Add Capitalization Handling Module to a Language Pair
| difficulty = easy
| size = small
| skills = XML, knowledge of some relevant natural language
| description = Update a language pair to make use of the new [[Capitalization_restoration|Capitalization handling module]]
| rationale = Correcting capitalization via transfer rules is tedious and error-prone, but putting the corrections in a separate set of rules should allow them to be handled in a more concise and maintainable way. Additionally, it is possible that capitalization rules could be moved to monolingual modules, thus reducing development effort on translators.
| mentors = [[User:Popcorndude]]
| more = /Capitalization
}}
== Data Extraction ==

A lot of the language data we need to make our analyzers and translators work already exists in other forms and we just need to figure out how to convert it. If you know of another source of data that isn't listed, we'd love to hear about it.
   
 
{{IdeaSummary
| name = dictionary induction from wikis
| difficulty = Medium
| size = either
| skills = MySQL, mediawiki syntax, perl, maybe C++ or Java; Java, Scala, RDF, and DBpedia to use DBpedia extraction
| description = Extract dictionaries from linguistic wikis
| rationale = Wiki dictionaries and encyclopedias (e.g. omegawiki, wiktionary, wikipedia, dbpedia) contain information (e.g. bilingual equivalences, morphological features, conjugations) that could be exploited to speed up the development of dictionaries for Apertium. This task aims at automatically building dictionaries by extracting different pieces of information from wiki structures such as interlingual links, infoboxes and/or from dbpedia RDF datasets.
| mentors = [[User:Firespeaker|Jonathan Washington]], [[User:Popcorndude]]
| more = /Dictionary induction from wikis
}}
   
 
{{IdeaSummary
| name = Dictionary induction from parallel corpora / Revive ReTraTos
| difficulty = Medium
| size = medium
| skills = C++, perl, python, xml, scripting, machine learning
| description = Extract dictionaries from parallel corpora
| rationale = Given a pair of monolingual modules and a parallel corpus, we should be able to run a program to align tagged sentences and give us the best entries that are missing from bidix. [[ReTraTos]] did this back in 2008, but it's from 2008. We want a program which builds and runs in 2022, and does all the steps for the user.
| mentors = [[User:Unhammer]], [[User:Popcorndude]]
| more = /Dictionary induction from parallel corpora
}}
   
 
{{IdeaSummary
| name = Extract morphological data from FLEx
| difficulty = hard
| size = large
| skills = python, XML parsing
| description = Write a program to extract data from [https://software.sil.org/fieldworks/ SIL FieldWorks] and convert as much as possible to monodix (and maybe bidix).
| rationale = There's a lot of potentially useful data in FieldWorks files that might be enough to build a whole monodix for some languages, but it's currently really hard to use.
| mentors = [[User:Popcorndude|Popcorndude]], [[User:TommiPirinen|Flammie]]
| more = /FieldWorks_data_extraction
}}
== Tooling ==

These are projects for people who would be comfortable digging through our C++ codebases (you will be doing a lot of that).
{{IdeaSummary
| name = Python API for Apertium
| difficulty = medium
| size = medium
| skills = C++, Python
| description = Update the Python API for Apertium to expose all Apertium modes and test with all major OSes
| rationale = The current Python API misses out on a lot of functionality, like phonemicisation, segmentation, and transliteration, and doesn't work on some OSes <s>like Debian</s>.
| mentors = [[User:Francis Tyers|Francis Tyers]]
| more = /Python API
}}

{{IdeaSummary
| name = Robust tokenisation in lttoolbox
| difficulty = Medium
| size = large
| skills = C++, XML, Python
| description = Improve the longest-match left-to-right tokenisation strategy in [[lttoolbox]] to handle spaceless orthographies.
| rationale = One of the most frustrating things about working with Apertium on texts "in the wild" is the way that tokenisation works. If a letter is not specified in the alphabet, it is treated as whitespace, so unknown words get split in two and you can end up with output like ^G$ö^k$ı^rmak$, which is terrible for further processing. Additionally, the system is nearly impossible to use for languages that don't use spaces, such as Japanese.
| mentors = [[User:Francis Tyers|Francis Tyers]], [[User:TommiPirinen|Flammie]]
| more = /Robust tokenisation
}}
   
{{IdeaSummary
| name = rule visualization tools
| difficulty = Medium
| size = either
| skills = python? javascript? XML
| description = make tools to help visualize the effect of various rules
| rationale = TODO see https://github.com/Jakespringer/dapertium for an example
| mentors = [[User:Firespeaker|Jonathan Washington]], [[User:Sevilay Bayatlı|Sevilay Bayatlı]], [[User:Popcorndude]]
| more = /Visualization tools
}}
   
 
{{IdeaSummary
| name = Extend Weighted transfer rules
| difficulty = Medium
| size = medium
| skills = C++, python
| description = The weighted transfer module is already applied to the chunker transfer rules. The idea here is to extend that module so that it also applies to interchunk and postchunk transfer rules.
| rationale = As a resource see https://github.com/aboelhamd/Weighted-transfer-rules-module
| mentors = [[User:Sevilay Bayatlı|Sevilay Bayatlı]]
| more = /Make a module
}}
   
 
{{IdeaSummary
| name = Automatic Error-Finder / Pseudo-Backpropagation
| difficulty = Hard
| size = large
| skills = python?
| description = Develop a tool to locate the approximate source of translation errors in the pipeline.
| rationale = Being able to generate a list of probable error sources automatically makes it possible to prioritize issues by frequency, frees up developer time, and is a first step towards automated generation of better rules.
| mentors = [[User:Popcorndude]]
| more = /Backpropagation
}}
   
 
{{IdeaSummary
| name = More Robust Recursive Transfer
| difficulty = Hard
| size = large
| skills = C++
| description = Ensure [[Apertium-recursive#Further_Documentation|Recursive Transfer]] survives ambiguous or incomplete parse trees
| rationale = Currently, one has to be very careful in writing recursive transfer rules to ensure they don't get too deep or ambiguous, and that they cover full sentences. See in particular issues [https://github.com/apertium/apertium-recursive/issues/97 97] and [https://github.com/apertium/apertium-recursive/issues/80 80]. We would like linguists to be able to fearlessly write recursive (rtx) rules based on what makes linguistic sense, and have rtx-proc/rtx-comp deal with the computational/performance side.
| mentors =
| more = /More_robust_recursive_transfer
}}
   
 
{{IdeaSummary
| name = CG-based Transfer
| difficulty = Hard
| size = large
| skills = C++
| description = Linguists already write dependency trees in [[Constraint Grammar]]. A following step could use these to reorder into target language trees.
| mentors =
| more =
}}

{{IdeaSummary
| name = Language Server Protocol
| difficulty = Medium
| size = medium
| skills = any programming language
| description = Build a [https://microsoft.github.io/language-server-protocol/ Language Server] for the various Apertium rule formats
| rationale = We have some static analysis tools and syntax highlighters already and it would be great if we could combine and expand them to support more text editors.
| mentors = [[User:Popcorndude]]
| more = /Language Server Protocol
}}

{{IdeaSummary
| name = WASM Compilation
| difficulty = hard
| size = medium
| skills = C++, Javascript
| description = Compile the pipeline modules to WASM and provide JS wrappers for them.
| rationale = There are situations where it would be nice to be able to run the entire pipeline in the browser
| mentors = [[User:Tino Didriksen|Tino Didriksen]]
| more = /WASM
}}
== Web ==

If you know Python and JavaScript, here are some ideas for improving our [https://apertium.org website]. Some of these should be fairly short, and it would be a good idea to talk to the mentors about doing a couple of them together.
{{IdeaSummary
| name = Web API extensions
| difficulty = medium
| size = small
| skills = Python
| description = Update the web API for Apertium to expose all Apertium modes
| rationale = The current Web API misses out on a lot of functionality, like phonemicisation, segmentation, transliteration, and paradigm generation.
| mentors = [[User:Francis Tyers|Francis Tyers]], [[User:Firespeaker|Jonathan Washington]], [[User:Xavivars|Xavi Ivars]]
| more = /Apertium APY
}}
   
{{IdeaSummary
| name = Website Improvements: Misc
| difficulty = Medium
| size = small
| skills = html, css, js, python
| description = Improve elements of Apertium's web infrastructure
}}
 
{{IdeaSummary
| name = Website Improvements: Dictionary Lookup
| difficulty = Medium
| size = small
| skills = html, css, js, python
| description = Finish implementing dictionary lookup mode in Apertium's web infrastructure
| rationale = Apertium's website infrastructure [[Apertium-html-tools]] and its supporting API [[APy|Apertium APy]] have numerous open issues, including half-completed features like dictionary lookup. This project would entail completing the dictionary lookup feature. Some additional features which would be good to work on include automatic reverse lookups (so that a user has a better understanding of the results), grammatical information (such as the gender of nouns or the conjugation paradigms of verbs), and information about MWEs.
| mentors = [[User:Firespeaker|Jonathan Washington]], [[User:Xavivars|Xavi Ivars]], [[User:Popcorndude]]
| more = [https://github.com/apertium/apertium-html-tools/issues/105 the open issue on GitHub]
}}
   
{{IdeaSummary
| name = Website Improvements: Spell checking
| difficulty = Medium
| size = small
| skills = html, js, css, python
| description = Add a spell-checking interface to Apertium's web tools
}}
{{IdeaSummary
| name = Website Improvements: Suggestions
| difficulty = Medium
| size = small
| skills = html, css, js, python
| description = Finish implementing a suggestions interface for Apertium's web infrastructure
}}
{{IdeaSummary
| name = Website Improvements: Orthography conversion interface
| difficulty = Medium
| size = small
| skills = html, js, css, python
| description = Add an orthography conversion interface to Apertium's web tools
}}
   
 
{{IdeaSummary
| name = Add support for NMT to web API
| difficulty = Medium
| size = medium
| skills = python, NMT
| description = Add support for a popular NMT engine to Apertium's web API
| rationale = Currently Apertium's web API, [[APy|Apertium APy]], supports only Apertium language modules. But the front end could just as easily interface with an API that supports trained NMT models. The point of the project is to add support for one popular NMT package (e.g., translateLocally/Bergamot, OpenNMT or JoeyNMT) to the APy.
| mentors = [[User:Firespeaker|Jonathan Washington]], [[User:Xavivars|Xavi Ivars]]
| more =
}}
  +
  +
== Integrations ==
  +
  +
In addition to incorporating data from other projects, it would be nice if we could also make our data useful to them.
   
 
{{IdeaSummary
 
{{IdeaSummary
| name = Extend Weighted transfer rules
+
| name = OmniLingo and Apertium
| difficulty = Medium
+
| difficulty = medium
| skills = C++, python
+
| size = either
  +
| skills = JS, Python
| description = The weighted transfer module is already applied to the chunker transfer rules. And the idea here is to extend that module to be applied to interchunk and postchunk transfer rules too.
 
  +
| description = OmniLingo is a language learning system for practicing listening comprehension using Common Voice data. There is a lot of text processing involved (for example tokenisation) that could be aided by Apertium tools.
| rationale = As a resource see https://github.com/aboelhamd/Weighted-transfer-rules-module
 
  +
| rationale =
| mentors = [[User: Sevilay Bayatlı|Sevilay Bayatlı]]
 
  +
| mentors = [[User:Francis Tyers|Francis Tyers]]
| more = /Make a module
 
  +
| more = /OmniLingo
 
}}
 
}}
   
 
{{IdeaSummary
 
{{IdeaSummary
  +
| name = Support for Enhanced Dependencies in UD Annotatrix
| name = Automatic Error-Finder / Backpropagation
 
| difficulty = Medium
+
| difficulty = medium
| skills = python?
+
| size = medium
  +
| skills = NodeJS
| description = Develop a tool to locate the approximate source of translation errors in the pipeline.
 
  +
| description = UD Annotatrix is an annotation interface for Universal Dependencies, but does not yet support all functionality.
| rationale = Being able to generate a list of probable error sources automatically makes it possible to prioritize issues by frequency, frees up developer time, and is a first step towards automated generation of better rules.
 
| mentors = ???
+
| rationale =
  +
| mentors = [[User:Francis Tyers|Francis Tyers]]
| more = /Backpropagation
 
  +
| more = /Annotatrix enhanced dependencies
 
}}
 
}}
  +
  +
<!--
  +
This one was done, but could do with more work. Not sure if it's a full gsoc though?
   
 
{{IdeaSummary
 
{{IdeaSummary
| name = Add support for NMT to web API
+
| name = User-friendly lexical selection training
 
| difficulty = Medium
 
| difficulty = Medium
| skills = python, NMT
+
| skills = Python, C++, shell scripting
| description = Add support for a popular NMT engine to Apertium's web API
+
| description = Make it so that training/inference of lexical selection rules is a more user-friendly process
  +
| rationale = Our lexical selection module allows for inferring rules from corpora and word alignments, but the procedure is currently a bit messy, with various scripts involved that require lots of manual tweaking, and many third party tools to be installed. The goal of this task is to make the procedure as user-friendly as possible, so that ideally only a simple config file would be needed, and a driver script would take care of the rest.
| rationale = Currently Apertium's web API [[APy|Apertium APy]], supports only Apertium language modules. But the front end could just as easily interface with an API that supports trained NMT models. The point of the project is to add support for one popular NMT package (e.g., OpenNMT or JoeyNMT) to the APy.
 
| mentors = [[User:Firespeaker|Jonathan Washington]]
+
| mentors = [[User:Unhammer|Unhammer]], [[User:Mlforcada|Mikel Forcada]]
  +
| more = /User-friendly lexical selection training
| more =
 
 
}}
 
}}
  +
-->
   
 
{{IdeaSummary
 
{{IdeaSummary
| name = Localization (l10n) / Internationalization (i18n) of Apertium tools
+
| name = UD and Apertium integration
| difficulty = Medium
+
| difficulty = Entry level
| skills = C++
+
| size = medium
  +
| skills = python, javascript, HTML, (C++)
| description = All our command line tools are currently hardcoded as English-only and it would be good if this were otherwise.
 
  +
| description = Create a range of tools for making Apertium compatible with Universal Dependencies
| mentors = [[User:Tino_Didriksen|Tino Didriksen]]
 
  +
| rationale = Universal dependencies is a fast growing project aimed at creating a unified annotation scheme for treebanks. This includes both part-of-speech and morphological features. Their annotated corpora could be extremely useful for Apertium for training models for translation. In addition, Apertium's rule-based morphological descriptions could be useful for software that relies on Universal dependencies.
| more = https://github.com/apertium/organisation/issues/28 Github
 
  +
| mentors = [[User:Francis Tyers]], [[User:Firespeaker| Jonathan Washington]], [[User:Popcorndude]]
  +
| more = /UD and Apertium integration
 
}}
 
}}
   

Latest revision as of 09:15, 4 March 2024



If you're a prospective GSoC contributor trying to propose a topic, the recommended way is to request a wiki account and then go to

http://wiki.apertium.org/wiki/User:[[your username]]/GSoC2023Proposal

and click the "create" button near the top of the page. It's also nice to include [[Category:GSoC_2023_student_proposals]] to help organize submitted proposals.

Language Data

Can you read or write a language other than English (and we do mean any language)? If so, you can help with one of these and we can help you figure out the technical parts.


Develop a morphological analyser

  • Difficulty:
    3. Entry level
  • Size: Multiple lengths possible (discuss with the mentors which option is better for you)
  • Required skills:
    XML or HFST or lexd
  • Description:
    Write a morphological analyser and generator for a language that does not yet have one
  • Rationale:
    A key part of an Apertium machine translation system is a morphological analyser and generator. The objective of this task is to create an analyser for a language that does not yet have one.
  • Mentors:
    Francis Tyers, Jonathan Washington, Sevilay Bayatlı, Hossep, nlhowell, User:Popcorndude
  • read more...


apertium-separable language-pair integration

  • Difficulty:
    2. Medium
  • Size: Small
  • Required skills:
    XML, a scripting language (Python, Perl), some knowledge of linguistics and/or at least one relevant natural language
  • Description:
    Choose a language you can identify as having a good number of "multiwords" in the lexicon. Modify all language pairs in Apertium to use the Apertium-separable module to process the multiwords, and clean up the dictionaries accordingly.
  • Rationale:
    Apertium-separable is a newish module to process lexical items with discontiguous dependencies, an area where Apertium has traditionally fallen short. Despite all the module has to offer, many translation pairs still don't use it.
  • Mentors:
    Jonathan Washington, User:Popcorndude
  • read more...
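The kind of discontiguous multiword this idea is about can be illustrated with a small sketch. The mini-lexicon, gap limit, and `take# out` joining format below are simplified stand-ins for illustration, not the real lsx formalism:

```python
# Simplified sketch of joining a discontiguous multiword into one unit,
# roughly what apertium-separable does on the analysed stream.
SEPARABLE = {("take", "out"), ("turn", "off")}  # invented mini-lexicon of (verb, particle)
MAX_GAP = 3  # give up if the particle is too far from the verb

def join_separable(lemmas):
    """lemmas: list of lemmas; matched pairs are joined as e.g. 'take# out'."""
    out = []
    i = 0
    while i < len(lemmas):
        merged = False
        for head, particle in SEPARABLE:
            if lemmas[i] != head:
                continue
            # look ahead for the particle within MAX_GAP tokens
            for j in range(i + 1, min(i + 1 + MAX_GAP, len(lemmas))):
                if lemmas[j] == particle:
                    out.append(head + "# " + particle)
                    out.extend(lemmas[i + 1:j])  # keep the intervening words
                    i = j + 1
                    merged = True
                    break
            if merged:
                break
        if not merged:
            out.append(lemmas[i])
            i += 1
    return out

print(join_separable(["she", "take", "the", "rubbish", "out"]))
# ['she', 'take# out', 'the', 'rubbish']
```

In a real pair, entries like these live in an lsx dictionary and the matching is done by the compiled transducer in the pipeline.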


Bring an unreleased translation pair to releasable quality

  • Difficulty:
    2. Medium
  • Size: Large
  • Required skills:
    shell scripting
  • Description:
    Take an unstable language pair and improve its quality, focusing on testvoc
  • Rationale:
    Many Apertium language pairs have large dictionaries and have otherwise seen much development, but are not of releasable quality. The point of this project would be to bring one translation pair to releasable quality. This would entail obtaining good naïve coverage and a clean testvoc.
  • Mentors:
    Jonathan Washington, Sevilay Bayatlı, User:Unhammer, Hèctor Alòs i Font
  • read more...
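Naïve coverage, mentioned above, is straightforward to measure: analyse a corpus and count how many tokens receive at least one analysis. A rough sketch over Apertium's stream format, where unknown words carry a `*` in the analysis:

```python
import re

def naive_coverage(analysed):
    """Share of tokens with at least one analysis, given Apertium stream
    output like '^dog/dog<n><sg>$'; unknowns come out as '^blarg/*blarg$'."""
    tokens = re.findall(r"\^([^$]*)\$", analysed)
    if not tokens:
        return 0.0
    known = sum(1 for t in tokens if "/*" not in t)
    return known / len(tokens)

sample = "^the/the<det><def><sp>$ ^blarg/*blarg$ ^dog/dog<n><sg>$"
print(round(naive_coverage(sample), 2))  # 0.67: two of three tokens analysed
```

Testvoc ("test vocabulary") is the stricter check: every analysis must also pass transfer and generation without debug symbols appearing in the output.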


Develop a prototype MT system for a strategic language pair

  • Difficulty:
    2. Medium
  • Size: Large
  • Required skills:
    XML, some knowledge of linguistics and of one relevant natural language
  • Description:
    Create a translation pair based on two existing language modules, focusing on the dictionary and structural transfer
  • Rationale:
    Choose a strategic set of languages to develop an MT system for, such that you know the target language well and morphological transducers for each language are part of Apertium. Develop an Apertium MT system by focusing on writing a bilingual dictionary and structural transfer rules. Expanding the transducers and disambiguation, and writing lexical selection rules and multiword sequences may also be part of the work. The pair may be an existing prototype, but if it's a heavily developed but unreleased pair, consider applying for "Bring an unreleased translation pair to releasable quality" instead.
  • Mentors:
    Jonathan Washington, Sevilay Bayatlı, User:Unhammer, Hèctor Alòs i Font
  • read more...


Add a new variety to an existing language

  • Difficulty:
    3. Entry level
  • Size: Multiple lengths possible (discuss with the mentors which option is better for you)
  • Required skills:
    XML, some knowledge of linguistics and of one relevant natural language
  • Description:
    Add a language variety to one or more released pairs, focusing on the dictionary and lexical selection
  • Rationale:
    Take a released language, and define a new language variety for it: e.g. Quebec French or Provençal Occitan. Then add the new variety to one or more released language pairs, without diminishing the quality of the pre-existing variety(ies). The objective is to facilitate the generation of varieties for languages with a weak standardisation and/or pluricentric languages.
  • Mentors:
    Hèctor Alòs i Font, Jonathan Washington, Sevilay Bayatlı
  • read more...


Leverage and integrate language preferences into language pairs

  • Difficulty:
    3. Entry level
  • Size: Medium
  • Required skills:
    XML, some knowledge of linguistics and of one relevant natural language
  • Description:
    Update language pairs with lexical and orthographical variations to leverage the new preferences functionality
  • Rationale:
    Currently, preferences are implemented via language variant, which relies on multiple dictionaries, increasing compilation time exponentially every time a new preference gets introduced.
  • Mentors:
    Xavi Ivars User:Unhammer
  • read more...
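The blowup is easy to see with a little arithmetic: n binary variations compiled as language variants mean 2^n dictionary combinations, while runtime preferences keep one compiled dictionary and n independent toggles. The variation names below are invented for illustration:

```python
from itertools import product

# Three binary variations, e.g. spelling or lexical choices (names invented):
choices = ["oxford_spelling", "ize_suffix", "serial_comma"]

# As language variants, every combination is a separately compiled dictionary:
variant_dictionaries = len(list(product([False, True], repeat=len(choices))))
# As runtime preferences: one compiled dictionary, one toggle per choice:
preference_toggles = len(choices)

print(variant_dictionaries, preference_toggles)  # 8 3
```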


Add Capitalization Handling Module to a Language Pair

  • Difficulty:
    3. Entry level
  • Size: Small
  • Required skills:
    XML, knowledge of some relevant natural language
  • Description:
    Update a language pair to make use of the new Capitalization handling module
  • Rationale:
    Correcting capitalization via transfer rules is tedious and error-prone, but putting them in a separate set of rules should allow them to be handled in a more concise and maintainable way. Additionally, it is possible that capitalization rules could be moved to monolingual modules, thus reducing development effort on translators.
  • Mentors:
    User:Popcorndude
  • read more...
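At its simplest, capitalization handling means copying the source token's pattern onto its translation. A minimal sketch of that idea (the real module also has to deal with sentence-initial position, proper nouns, acronyms, and caseless scripts):

```python
def match_capitalization(source, target):
    """Copy a source token's capitalisation pattern onto its translation.
    Simplified: real rules must also handle sentence position, acronyms,
    proper nouns, and scripts without case."""
    if source.isupper() and len(source) > 1:
        return target.upper()          # EU -> UE
    if source[:1].isupper():
        return target[:1].upper() + target[1:]  # Maison -> House
    return target

print(match_capitalization("Maison", "house"))  # House
```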

Data Extraction

A lot of the language data we need to make our analyzers and translators work already exists in other forms and we just need to figure out how to convert it. If you know of another source of data that isn't listed, we'd love to hear about it.


dictionary induction from wikis

  • Difficulty:
    2. Medium
  • Size: Multiple lengths possible (discuss with the mentors which option is better for you)
  • Required skills:
    MySQL, MediaWiki syntax, Perl, maybe C++ or Java; Java, Scala, RDF, and DBpedia for the DBpedia extraction framework
  • Description:
    Extract dictionaries from linguistic wikis
  • Rationale:
    Wiki dictionaries and encyclopedias (e.g. omegawiki, wiktionary, wikipedia, dbpedia) contain information (e.g. bilingual equivalences, morphological features, conjugations) that could be exploited to speed up the development of dictionaries for Apertium. This task aims at automatically building dictionaries by extracting different pieces of information from wiki structures such as interlingual links, infoboxes and/or from dbpedia RDF datasets.
  • Mentors:
    Jonathan Washington, User:Popcorndude
  • read more...
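As a taste of the extraction involved: Wiktionary translation sections use templates like `{{t|fr|chien}}` or `{{t+|fr|chien|m}}`. A regex sketch of pulling out (language, translation) pairs; a real extractor should use a proper wikitext parser and handle many more template variants:

```python
import re

# Matches {{t|fr|chien}} and {{t+|fr|chien|m}}: group 1 = language code,
# group 2 = translation.  Real dumps need a wikitext parser, not a regex.
T_TEMPLATE = re.compile(r"\{\{t\+?\|([a-z-]+)\|([^|}]+)")

wikitext = """
* French: {{t+|fr|chien|m}}
* Kazakh: {{t|kk|ит}}
"""

pairs = T_TEMPLATE.findall(wikitext)
print(pairs)  # [('fr', 'chien'), ('kk', 'ит')]
```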


Dictionary induction from parallel corpora / Revive ReTraTos

  • Difficulty:
    2. Medium
  • Size: Medium
  • Required skills:
    C++, perl, python, xml, scripting, machine learning
  • Description:
    Extract dictionaries from parallel corpora
  • Rationale:
    Given a pair of monolingual modules and a parallel corpus, we should be able to run a program to align tagged sentences and give us the best entries that are missing from bidix. ReTraTos (from 2008) did this back in 2008, but it's from 2008. We want a program which builds and runs in 2022, and does all the steps for the user.
  • Mentors:
    User:Unhammer, User:Popcorndude
  • read more...
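The core induction step can be sketched as counting aligned lemma pairs; frequent pairs become candidate bidix entries. The toy data below stands in for real tagger and aligner output:

```python
from collections import Counter

# Toy stand-in for tagger + aligner output: (source lemmas, target lemmas,
# word alignment as (source index, target index) pairs).
corpus = [
    (["gos<n>", "córrer<vblex>"], ["dog<n>", "run<vblex>"], [(0, 0), (1, 1)]),
    (["gos<n>", "gran<adj>"], ["big<adj>", "dog<n>"], [(0, 1), (1, 0)]),
]

counts = Counter()
for src, trg, alignment in corpus:
    for i, j in alignment:
        counts[(src[i], trg[j])] += 1

# A real tool would also filter by alignment probability and check
# candidates against the entries already in the bidix.
print(counts.most_common(1))  # [(('gos<n>', 'dog<n>'), 2)]
```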


Extract morphological data from FLEx

  • Difficulty:
    1. Hard
  • Size: Large
  • Required skills:
    python, XML parsing
  • Description:
    Write a program to extract data from SIL FieldWorks and convert as much as possible to monodix (and maybe bidix).
  • Rationale:
    There's a lot of potentially useful data in FieldWorks files that might be enough to build a whole monodix for some languages but it's currently really hard to use
  • Mentors:
    Popcorndude, Flammie
  • read more...
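FLEx lexicons can be exported to SIL's LIFT XML interchange format, which is one plausible entry point for this project. A sketch of pulling lemma and part-of-speech pairs from a minimal LIFT fragment; real exports are much richer and messier than this, so treat the layout here as an assumption to verify against actual data:

```python
import xml.etree.ElementTree as ET

# Minimal LIFT fragment for illustration; real FLEx exports carry much
# more (variants, multilingual senses, custom fields).
lift = """<lift>
  <entry id="dog">
    <lexical-unit><form lang="en"><text>dog</text></form></lexical-unit>
    <sense><grammatical-info value="Noun"/></sense>
  </entry>
</lift>"""

root = ET.fromstring(lift)
entries = []
for entry in root.iter("entry"):
    lemma = entry.findtext("lexical-unit/form/text")
    gi = entry.find("sense/grammatical-info")
    entries.append((lemma, gi.get("value") if gi is not None else None))

print(entries)  # [('dog', 'Noun')]
```

The (lemma, POS) pairs would then need mapping onto monodix paradigms, which is where most of the real work lies.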

Tooling

These are projects for people who would be comfortable digging through our C++ codebases (you will be doing a lot of that).


Python API for Apertium

  • Difficulty:
    2. Medium
  • Size: Medium
  • Required skills:
    C++, Python
  • Description:
    Update the Python API for Apertium to expose all Apertium modes and test with all major OSes
  • Rationale:
    The current Python API misses out on a lot of functionality, like phonemicisation, segmentation, and transliteration, and doesn't work for some OSes like Debian.
  • Mentors:
    Francis Tyers
  • read more...


Robust tokenisation in lttoolbox

  • Difficulty:
    2. Medium
  • Size: Large
  • Required skills:
    C++, XML, Python
  • Description:
    Improve the longest-match left-to-right tokenisation strategy in lttoolbox to handle spaceless orthographies.
  • Rationale:
    One of the most frustrating things about working with Apertium on texts "in the wild" is the way that the tokenisation works. If a letter is not specified in the alphabet, it is dealt with as whitespace, so e.g. you get unknown words split in two so you can end up with stuff like ^G$ö^k$ı^rmak$ which is terrible for further processing. Additionally, the system is nearly impossible to use for languages that don't use spaces, such as Japanese.
  • Mentors:
    Francis Tyers, Flammie
  • read more...
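The current strategy can be sketched in a few lines, which also shows why unknown letters split tokens: anything that cannot start a dictionary match falls through as a single character. (This is a simplification over a set of forms; lttoolbox actually matches against a transducer.)

```python
def lrlm_tokenise(text, lexicon):
    """Longest-match left-to-right tokenisation over a set of known forms.
    Anything that cannot start a match falls through as a single character,
    which is how an out-of-alphabet letter ends up splitting a word."""
    tokens = []
    i = 0
    while i < len(text):
        match = None
        for j in range(len(text), i, -1):  # try the longest candidate first
            if text[i:j] in lexicon:
                match = text[i:j]
                break
        if match:
            tokens.append(match)
            i += len(match)
        else:
            tokens.append(text[i])  # fall back to a single character
            i += 1
    return tokens

lexicon = {"foo", "foobar", "bar"}
print(lrlm_tokenise("foobarbar", lexicon))  # ['foobar', 'bar']
print(lrlm_tokenise("fooxbar", lexicon))   # ['foo', 'x', 'bar']: 'x' splits the input
```

For spaceless orthographies the same greedy strategy has to be combined with backtracking or lattice-based segmentation, which is the hard part of this project.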


rule visualization tools


Extend Weighted transfer rules


Automatic Error-Finder / Pseudo-Backpropagation

  • Difficulty:
    1. Hard
  • Size: Large
  • Required skills:
    python?
  • Description:
    Develop a tool to locate the approximate source of translation errors in the pipeline.
  • Rationale:
    Being able to generate a list of probable error sources automatically makes it possible to prioritize issues by frequency, frees up developer time, and is a first step towards automated generation of better rules.
  • Mentors:
    User:Popcorndude
  • read more...


More Robust Recursive Transfer

  • Difficulty:
    1. Hard
  • Size: Large
  • Required skills:
    C++
  • Description:
    Ensure Recursive Transfer survives ambiguous or incomplete parse trees
  • Rationale:
    Currently, one has to be very careful in writing recursive transfer rules to ensure they don't get too deep or ambiguous, and that they cover full sentences. See in particular issues 97 and 80. We would like linguists to be able to fearlessly write recursive (rtx) rules based on what makes linguistic sense, and have rtx-proc/rtx-comp deal with the computational/performance side.
  • Mentors:
  • read more...


CG-based Transfer

  • Difficulty:
    1. Hard
  • Size: Large
  • Required skills:
    C++
  • Description:
    Linguists already write dependency trees in Constraint Grammar. A following step could use these to reorder into target language trees.
  • Rationale:
  • Mentors:
  • read more...


Language Server Protocol


WASM Compilation

  • Difficulty:
    1. Hard
  • Size: Medium
  • Required skills:
    C++, Javascript
  • Description:
    Compile the pipeline modules to WASM and provide JS wrappers for them.
  • Rationale:
    There are situations where it would be nice to be able to run the entire pipeline in the browser
  • Mentors:
    Tino Didriksen
  • read more...

Web

If you know Python and JavaScript, here's some ideas for improving our website. Some of these should be fairly short and it would be a good idea to talk to the mentors about doing a couple of them together.


Web API extensions

  • Difficulty:
    2. Medium
  • Size: Small
  • Required skills:
    Python
  • Description:
    Update the web API for Apertium to expose all Apertium modes
  • Rationale:
    The current Web API misses out on a lot of functionality, like phonemicisation, segmentation, transliteration, and paradigm generation.
  • Mentors:
    Francis Tyers, Jonathan Washington, Xavi Ivars
  • read more...


Website Improvements: Misc

  • Difficulty:
    2. Medium
  • Size: Small
  • Required skills:
    html, css, js, python
  • Description:
    Improve elements of Apertium's web infrastructure
  • Rationale:
    Apertium's website infrastructure Apertium-html-tools and its supporting API Apertium APy have numerous open issues. This project would entail choosing a subset of open issues and features that could realistically be completed in the summer. You're encouraged to speak with the Apertium community to see which features and issues are the most pressing.
  • Mentors:
    Jonathan Washington, Xavi Ivars
  • read more...


Website Improvements: Dictionary Lookup

  • Difficulty:
    2. Medium
  • Size: Small
  • Required skills:
    html, css, js, python
  • Description:
    Finish implementing dictionary lookup mode in Apertium's web infrastructure
  • Rationale:
    Apertium's website infrastructure Apertium-html-tools and its supporting API Apertium APy have numerous open issues, including half-completed features like dictionary lookup. This project would entail completing the dictionary lookup feature. Some additional features which would be good to work on would include automatic reverse lookups (so that a user has a better understanding of the results), grammatical information (such as the gender of nouns or the conjugation paradigms of verbs), and information about MWEs.
  • Mentors:
    Jonathan Washington, Xavi Ivars, User:Popcorndude
  • read more... (see the open issue on GitHub)


Website Improvements: Spell checking

  • Difficulty:
    2. Medium
  • Size: Small
  • Required skills:
    html, js, css, python
  • Description:
    Add a spell-checking interface to Apertium's web tools
  • Rationale:
    Apertium-html-tools has seen some prototypes for spell-checking interfaces (all in stale PRs and branches on GitHub), but none have ended up being quite ready to integrate into the tools. This project would entail polishing up or recreating an interface, and making sure APy has a mode that allows access to Apertium voikkospell modules. The end result should be a slick, easy-to-use interface for proofing text, with intuitive underlining of text deemed to be misspelled and intuitive presentation and selection of alternatives. See the open issue on GitHub.
  • Mentors:
    Jonathan Washington, Xavi Ivars
  • read more...
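On the APy side, one likely sub-task is parsing the speller's output into something the web interface can underline. A sketch assuming voikkospell-style line output (`C:` for correct words, `W:` plus following `S:` suggestion lines for misspellings); this format is an assumption to verify against the actual tool:

```python
def parse_speller_output(output):
    """Parse voikkospell-style output into {word: None} for correct words
    and {word: [suggestions]} for misspellings.  The 'C:'/'W:'/'S:' line
    format is an assumption; check the real tool's output first."""
    results = {}
    current = None
    for line in output.splitlines():
        if line.startswith("C: "):
            results[line[3:]] = None   # spelled correctly
            current = None
        elif line.startswith("W: "):
            current = line[3:]
            results[current] = []      # misspelled; suggestions follow
        elif line.startswith("S: ") and current is not None:
            results[current].append(line[3:])
    return results

sample = "C: talo\nW: tallo\nS: talo\nS: tallot"
print(parse_speller_output(sample))  # {'talo': None, 'tallo': ['talo', 'tallot']}
```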


Website Improvements: Suggestions

  • Difficulty:
    2. Medium
  • Size: Small
  • Required skills:
    html, css, js, python
  • Description:
    Finish implementing a suggestions interface for Apertium's web infrastructure
  • Rationale:
    Some work has been done to add a "suggestions" interface to Apertium's website infrastructure Apertium-html-tools and its supporting API Apertium APy, whereby users can suggest corrected translations. This project would entail finishing that feature. There are some related issues and PRs on GitHub.
  • Mentors:
    Jonathan Washington, Xavi Ivars
  • read more...


Website Improvements: Orthography conversion interface

  • Difficulty:
    2. Medium
  • Size: Small
  • Required skills:
    html, js, css, python
  • Description:
    Add an orthography conversion interface to Apertium's web tools
  • Rationale:
    Several Apertium language modules (like Kazakh, Kyrgyz, Crimean Tatar, and Hñähñu) have orthography conversion modes in their mode definition files. This project would be to expose those modes through Apertium APy and provide a simple interface in Apertium-html-tools to use them.
  • Mentors:
    Jonathan Washington, Xavi Ivars
  • read more...


Add support for NMT to web API

  • Difficulty:
    2. Medium
  • Size: Medium
  • Required skills:
    python, NMT
  • Description:
    Add support for a popular NMT engine to Apertium's web API
  • Rationale:
    Currently Apertium's web API, Apertium APy, supports only Apertium language modules. But the front end could just as easily interface with an API that supports trained NMT models. The point of the project is to add support for one popular NMT package (e.g., translateLocally/Bergamot, OpenNMT or JoeyNMT) to the APy.
  • Mentors:
    Jonathan Washington, Xavi Ivars
  • read more...

Integrations

In addition to incorporating data from other projects, it would be nice if we could also make our data useful to them.


OmniLingo and Apertium

  • Difficulty:
    2. Medium
  • Size: Multiple lengths possible (discuss with the mentors which option is better for you)
  • Required skills:
    JS, Python
  • Description:
    OmniLingo is a language learning system for practicing listening comprehension using Common Voice data. There is a lot of text processing involved (for example tokenisation) that could be aided by Apertium tools.
  • Rationale:
  • Mentors:
    Francis Tyers
  • read more...


Support for Enhanced Dependencies in UD Annotatrix

  • Difficulty:
    2. Medium
  • Size: Medium
  • Required skills:
    NodeJS
  • Description:
    UD Annotatrix is an annotation interface for Universal Dependencies, but does not yet support all functionality.
  • Rationale:
  • Mentors:
    Francis Tyers
  • read more...


UD and Apertium integration

  • Difficulty:
    3. Entry level
  • Size: Medium
  • Required skills:
    python, javascript, HTML, (C++)
  • Description:
    Create a range of tools for making Apertium compatible with Universal Dependencies
  • Rationale:
    Universal Dependencies is a fast-growing project aimed at creating a unified annotation scheme for treebanks. This includes both part-of-speech and morphological features. Their annotated corpora could be extremely useful for Apertium for training models for translation. In addition, Apertium's rule-based morphological descriptions could be useful for software that relies on Universal Dependencies.
  • Mentors:
    User:Francis Tyers, Jonathan Washington, User:Popcorndude
  • read more...
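One concrete starting point for such tools is format conversion, e.g. turning CoNLL-U token lines into Apertium stream units. The UPOS mapping below is a tiny invented sample; a real converter needs full tag and feature mapping tables (and the reverse direction):

```python
# Tiny invented sample mapping; a real converter needs full UPOS and
# morphological-feature tables.
UPOS_TO_APERTIUM = {"NOUN": "n", "VERB": "vblex", "DET": "det"}

def conllu_token_to_stream(line):
    """CoNLL-U columns: ID FORM LEMMA UPOS XPOS FEATS HEAD DEPREL DEPS MISC."""
    cols = line.split("\t")
    form, lemma, upos = cols[1], cols[2], cols[3]
    tag = UPOS_TO_APERTIUM.get(upos, upos.lower())
    return "^%s/%s<%s>$" % (form, lemma, tag)

print(conllu_token_to_stream("1\tdogs\tdog\tNOUN\tNN\tNumber=Plur\t2\tnsubj\t_\t_"))
# ^dogs/dog<n>$
```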