{{TOCD}}
This is the ideas page for [[Google Summer of Code]]. Here you can find ideas for interesting projects that would make Apertium more useful for people and improve or expand our functionality.
'''Current Apertium contributors''': If you have an idea, please add it below. If you think you could mentor someone in a particular area, add your name to "Interested mentors" using <code><nowiki>~~~</nowiki></code>.
'''Prospective GSoC contributors''': The page is intended as an overview of the kind of projects we have in mind. If one of them particularly piques your interest, please come and discuss it with us on <code>#apertium</code> on <code>irc.oftc.net</code> ([[IRC|more on IRC]]), mail the [[Contact|mailing list]], or draw attention to yourself in some other way.
Note that if you have an idea that isn't mentioned here, we would be very interested to hear about it.
Here are some more things you could look at:
* [[Top tips for GSOC applications]]
* Get in contact with one of our long-serving mentors — they are nice, honest!
* Pages in the [[:Category:Development|development category]]
* Resources that could be converted or expanded in the [[incubator]]. Consider doing or improving a language pair (see [[incubator]], [[nursery]] and [[staging]] for pairs that need work)
* Unhammer's [[User:Unhammer/wishlist|wishlist]]
<!--* The open issues [https://github.com/search?q=org%3Aapertium&state=open&type=Issues on Github] - especially the [https://github.com/search?q=org%3Aapertium+label%3A%22good+first+issue%22&state=open&type=Issues Good First Issues]. -->
__TOC__
If you're a prospective GSoC contributor trying to propose a topic, the recommended way is to request a wiki account and then go to <pre>http://wiki.apertium.org/wiki/User:[[your username]]/GSoC2023Proposal</pre> and click the "create" button near the top of the page. It's also nice to include <code><nowiki>[[</nowiki>[[:Category:GSoC_2023_student_proposals|Category:GSoC_2023_student_proposals]]<nowiki>]]</nowiki></code> to help organize submitted proposals.
== Language Data ==
Can you read or write a language other than English (and we do mean any language)? If so, you can help with one of these and we can help you figure out the technical parts.
<!-- See https://github.com/apertium/apertium-anaphora -->

{{IdeaSummary
| name = Develop a morphological analyser
| difficulty = easy
| size = either
| skills = XML or HFST or lexd
| description = Write a morphological analyser and generator for a language that does not yet have one
| rationale = A key part of an Apertium machine translation system is a morphological analyser and generator. The objective of this task is to create an analyser for a language that does not yet have one.
| mentors = [[User:Francis Tyers|Francis Tyers]], [[User:Firespeaker|Jonathan Washington]], [[User:Sevilay Bayatlı|Sevilay Bayatlı]], Hossep, nlhowell, [[User:Popcorndude]]
| more = /Morphological analyser
}}
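For a flavour of what this format looks like, here is a minimal [[lttoolbox]] monodix sketch; the paradigm and entry are invented for illustration (a real analyser has many paradigms and thousands of entries, and HFST/lexd-based analysers use different formats):

<pre>
<dictionary>
  <alphabet>abcdefghijklmnopqrstuvwxyz</alphabet>
  <sdefs>
    <sdef n="n"  c="noun"/>
    <sdef n="sg" c="singular"/>
    <sdef n="pl" c="plural"/>
  </sdefs>
  <pardefs>
    <!-- invented noun paradigm: bare singular, plural in -s -->
    <pardef n="house__n">
      <e><p><l></l><r><s n="n"/><s n="sg"/></r></p></e>
      <e><p><l>s</l><r><s n="n"/><s n="pl"/></r></p></e>
    </pardef>
  </pardefs>
  <section id="main" type="standard">
    <e lm="house"><i>house</i><par n="house__n"/></e>
  </section>
</dictionary>
</pre>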
{{IdeaSummary
| name = apertium-separable language-pair integration
| difficulty = Medium
| size = small
| skills = XML, a scripting language (Python, Perl), some knowledge of linguistics and/or at least one relevant natural language
| description = Choose a language you can identify as having a good number of "multiwords" in the lexicon. Modify all language pairs in Apertium to use the [[Apertium-separable]] module to process the multiwords, and clean up the dictionaries accordingly.
| rationale = Apertium-separable is a newish module to process lexical items with discontiguous dependencies, an area where Apertium has traditionally fallen short. Despite all the module has to offer, many translation pairs still don't use it.
| mentors = [[User:Firespeaker|Jonathan Washington]], [[User:Popcorndude]]
| more = /Apertium separable
}}
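As a rough sketch of what the module works with, an [[Apertium-separable]] dictionary pairs a multiword pattern with a single reordered lexical unit. The entry below is invented and simplified to the adjacent-words case; see the [[Apertium-separable]] documentation for the exact element inventory, including how to match intervening words:

<pre>
<dictionary>
  <sdefs>
    <sdef n="vblex"/>
    <sdef n="adv"/>
  </sdefs>
  <section id="main" type="standard">
    <!-- match "take" + "out" and emit a single lexical unit "take out" -->
    <e lm="take out">
      <p>
        <l>take<s n="vblex"/><b/>out<s n="adv"/></l>
        <r>take<b/>out<s n="vblex"/></r>
      </p>
    </e>
  </section>
</dictionary>
</pre>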
{{IdeaSummary
| name = Bring an unreleased translation pair to releasable quality
| difficulty = Medium
| size = large
| skills = shell scripting
| description = Take an unstable language pair and improve its quality, focusing on testvoc
| rationale = Many Apertium language pairs have large dictionaries and have otherwise seen much development, but are not of releasable quality. The point of this project would be to bring one translation pair to releasable quality. This would entail obtaining good naïve coverage and a clean [[testvoc]].
| mentors = [[User:Firespeaker|Jonathan Washington]], [[User:Sevilay Bayatlı|Sevilay Bayatlı]], [[User:Unhammer]], [[User:hectoralos|Hèctor Alòs i Font]]
| more = /Make a language pair state-of-the-art
}}
{{IdeaSummary
| name = Develop a prototype MT system for a strategic language pair
| difficulty = Medium
| size = large
| skills = XML, some knowledge of linguistics and of one relevant natural language
| description = Create a translation pair based on two existing language modules, focusing on the dictionary and structural transfer
| rationale = Choose a strategic set of languages to develop an MT system for, such that you know the target language well and morphological transducers for each language are part of Apertium. Develop an Apertium MT system by focusing on writing a bilingual dictionary and structural transfer rules. Expanding the transducers and disambiguation, and writing lexical selection rules and multiword sequences may also be part of the work. The pair may be an existing prototype, but if it's a heavily developed but unreleased pair, consider applying for "Bring an unreleased translation pair to releasable quality" instead.
| mentors = [[User:Firespeaker|Jonathan Washington]], [[User:Sevilay Bayatlı|Sevilay Bayatlı]], [[User:Unhammer]], [[User:hectoralos|Hèctor Alòs i Font]]
| more = /Adopt a language pair
}}
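The heart of such a pair is the bilingual dictionary (bidix), which maps lexical forms between the two languages. A minimal sketch with invented Spanish–English entries:

<pre>
<dictionary>
  <sdefs>
    <sdef n="n"/>
    <sdef n="vblex"/>
  </sdefs>
  <section id="main" type="standard">
    <e><p><l>casa<s n="n"/></l><r>house<s n="n"/></r></p></e>
    <e><p><l>cantar<s n="vblex"/></l><r>sing<s n="vblex"/></r></p></e>
    <!-- r="LR" restricts an entry to one translation direction -->
    <e r="LR"><p><l>hogar<s n="n"/></l><r>home<s n="n"/></r></p></e>
  </section>
</dictionary>
</pre>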
{{IdeaSummary
| name = Add a new variety to an existing language
| difficulty = easy
| size = either
| skills = XML, some knowledge of linguistics and of one relevant natural language
| description = Add a language variety to one or more released pairs, focusing on the dictionary and lexical selection
| rationale = Take a released language, and define a new language variety for it: e.g. Quebec French or Provençal Occitan. Then add the new variety to one or more released language pairs, without diminishing the quality of the pre-existing variety(ies). The objective is to facilitate the generation of varieties for languages with a weak standardisation and/or pluricentric languages.
| mentors = [[User:hectoralos|Hèctor Alòs i Font]], [[User:Firespeaker|Jonathan Washington]], [[User:piraye|Sevilay Bayatlı]]
| more = /Add a new variety to an existing language
}}
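Much of the mechanical side of this lives in the dictionaries: the lttoolbox dix format lets individual entries be restricted to a named variant, so one dictionary can serve several varieties. A sketch, with invented lexical items and the variant-restriction attribute as described on the wiki's dictionary documentation:

<pre>
<!-- shared entry, generated for every variety -->
<e><p><l>coche<s n="n"/></l><r>car<s n="n"/></r></p></e>
<!-- entries restricted to a named right-language variety -->
<e vr="us"><p><l>camión<s n="n"/></l><r>truck<s n="n"/></r></p></e>
<e vr="gb"><p><l>camión<s n="n"/></l><r>lorry<s n="n"/></r></p></e>
</pre>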
{{IdeaSummary
| name = Leverage and integrate language preferences into language pairs
| difficulty = easy
| size = medium
| skills = XML, some knowledge of linguistics and of one relevant natural language
| description = Update language pairs with lexical and orthographical variations to leverage the new [[Dialectal_or_standard_variation|preferences]] functionality
| rationale = Currently, preferences are implemented via language variant, which relies on multiple dictionaries, increasing compilation time exponentially every time a new preference gets introduced.
| mentors = [[User:Xavivars|Xavi Ivars]], [[User:Unhammer]]
| more = /Use preferences in pair
}}
||
{{IdeaSummary |
{{IdeaSummary |
||
| name = Add Capitalization Handling Module to a Language Pair |
|||
| name = Create a usable version of one of these language pairs: English--Igbo, English--Yoruba, English--Tigrinya, English--Swahili, English-Hausa |
|||
| difficulty = |
| difficulty = easy |
||
| size = small |
|||
| skills = GNU/Linux advanced user, bash, git, XML editing, English, Igbo/Yoruba/Tigrinya/Swahili/Hausa |
|||
| skills = XML, knowledge of some relevant natural language |
|||
| description = The objective is to start these language pairs (which haven't been started or have currentlu very little data in Apertium) and write an usable version which provides intelligible output. |
|||
| description = Update a language pair to make use make use of the new [[Capitalization_restoration|Capitalization handling module]] |
|||
| rationale = African languages are not particularly well served by Apertium. The four languages listed are quite important, and are only currently served by commercial machine translation companies such as Google, which makes these language communities dependent on a specific commercial provider. |
|||
| rationale = Correcting capitalization via transfer rules is tedious and error prone, but putting them in a separate set of rules should allow them to be handled in a more concise and maintainable way. Additionally, it is possible that capitalization rule could be moved to monolingual modules, thus reducing development effort on translators. |
|||
| mentors = [[User:Mlforcada|Mikel Forcada]], mentors welcome. |
|||
| mentors = [[User:Popcorndude]] |
|||
| more = /Apertium_African |
|||
| more = /Capitalization |
|||
}} |
}} |
||
== Data Extraction ==

A lot of the language data we need to make our analyzers and translators work already exists in other forms, and we just need to figure out how to convert it. If you know of another source of data that isn't listed, we'd love to hear about it.
{{IdeaSummary
| name = dictionary induction from wikis
| difficulty = Medium
| size = either
| skills = MySQL, mediawiki syntax, perl, maybe C++ or Java; Java, Scala, RDF, and DBpedia to use DBpedia extraction
| description = Extract dictionaries from linguistic wikis
| rationale = Wiki dictionaries and encyclopedias (e.g. omegawiki, wiktionary, wikipedia, dbpedia) contain information (e.g. bilingual equivalences, morphological features, conjugations) that could be exploited to speed up the development of dictionaries for Apertium. This task aims at automatically building dictionaries by extracting different pieces of information from wiki structures such as interlingual links, infoboxes and/or from dbpedia RDF datasets.
| mentors = [[User:Firespeaker|Jonathan Washington]], [[User:Popcorndude]]
| more = /Dictionary induction from wikis
}}
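As a concrete instance, an English Wiktionary translation line already encodes a lemma, its target-language equivalent and the gender; the extraction task is essentially mapping the first line below (real Wiktionary markup) to the second (an illustrative bidix entry):

<pre>
* French: {{t+|fr|maison|f}}

<e><p><l>house<s n="n"/></l><r>maison<s n="n"/><s n="f"/></r></p></e>
</pre>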
{{IdeaSummary
| name = Dictionary induction from parallel corpora / Revive ReTraTos
| difficulty = Medium
| size = medium
| skills = C++, perl, python, xml, scripting, machine learning
| description = Extract dictionaries from parallel corpora
| rationale = Given a pair of monolingual modules and a parallel corpus, we should be able to run a program to align tagged sentences and give us the best entries that are missing from bidix. [[ReTraTos]] did this back in 2008, but it hasn't been maintained since. We want a program which builds and runs today, and does all the steps for the user.
| mentors = [[User:Unhammer]], [[User:Popcorndude]]
| more = /Dictionary induction from parallel corpora
}}
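In outline: analyse both halves of the corpus, align the tagged streams, and harvest repeated lemma-to-lemma correspondences that the bidix lacks. An invented example, in the Apertium stream format the pipeline already uses:

<pre>
SL: ^el<det><def><f><sg>$ ^casa<n><f><sg>$ ^blanco<adj><f><sg>$
TL: ^the<det><def><sp>$ ^white<adj>$ ^house<n><sg>$

harvested candidate entry:
<e><p><l>casa<s n="n"/></l><r>house<s n="n"/></r></p></e>
</pre>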
{{IdeaSummary
| name = Extract morphological data from FLEx
| difficulty = hard
| size = large
| skills = python, XML parsing
| description = Write a program to extract data from [https://software.sil.org/fieldworks/ SIL FieldWorks] and convert as much as possible to monodix (and maybe bidix).
| rationale = There's a lot of potentially useful data in FieldWorks files that might be enough to build a whole monodix for some languages, but it's currently really hard to use.
| mentors = [[User:Popcorndude|Popcorndude]], [[User:TommiPirinen|Flammie]]
| more = /FieldWorks_data_extraction
}}

== Tooling ==

These are projects for people who would be comfortable digging through our C++ codebases (you will be doing a lot of that).
{{IdeaSummary
| name = Python API for Apertium
| difficulty = medium
| size = medium
| skills = C++, Python
| description = Update the Python API for Apertium to expose all Apertium modes and test with all major OSes
| rationale = The current Python API misses out on a lot of functionality, like phonemicisation, segmentation, and transliteration, and doesn't work for some OSes <s>like Debian</s>.
| mentors = [[User:Francis Tyers|Francis Tyers]]
| more = /Python API
}}
{{IdeaSummary
| name = Robust tokenisation in lttoolbox
| difficulty = Medium
| size = large
| skills = C++, XML, Python
| description = Improve the longest-match left-to-right tokenisation strategy in [[lttoolbox]] to handle spaceless orthographies.
| rationale = One of the most frustrating things about working with Apertium on texts "in the wild" is the way that the tokenisation works. If a letter is not specified in the alphabet, it is dealt with as whitespace, so e.g. you get unknown words split in two so you can end up with stuff like ^G$ö^k$ı^rmak$ which is terrible for further processing. Additionally, the system is nearly impossible to use for languages that don't use spaces, such as Japanese.
| mentors = [[User:Francis Tyers|Francis Tyers]], [[User:TommiPirinen|Flammie]]
| more = /Robust tokenisation
}}
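To make the failure mode concrete, suppose ö and ı are missing from a dictionary's <code>&lt;alphabet&gt;</code>; the current and the desired tokenisations of one Turkish name differ like this (Apertium stream format, unknown words marked with *):

<pre>
input:    Gökırmak

current:  ^G$ö^k$ı^rmak$              (out-of-alphabet letters split the token)
desired:  ^Gökırmak/*Gökırmak$        (one unknown-word token)
</pre>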
{{IdeaSummary
| name = rule visualization tools
| difficulty = Medium
| size = either
| skills = python? javascript? XML
| description = make tools to help visualize the effect of various rules
| rationale = TODO: see https://github.com/Jakespringer/dapertium for an example
| mentors = [[User:Firespeaker|Jonathan Washington]], [[User:Sevilay Bayatlı|Sevilay Bayatlı]], [[User:Popcorndude]]
| more = /Visualization tools
}}
{{IdeaSummary
| name = Extend Weighted transfer rules
| difficulty = Medium
| size = medium
| skills = C++, python
| description = The weighted transfer module is already applied to the chunker transfer rules. The idea here is to extend that module so it also applies to interchunk and postchunk transfer rules.
| rationale = As a resource, see https://github.com/aboelhamd/Weighted-transfer-rules-module
| mentors = [[User:Sevilay Bayatlı|Sevilay Bayatlı]]
| more = /Make a module
}}
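For background, chunker (t1x) rules are selected by matching category patterns left to right; weighting matters exactly where two patterns can match at the same position, as in this skeletal pair of rules (category names illustrative, actions omitted):

<pre>
<rule comment="noun">
  <pattern>
    <pattern-item n="nom"/>
  </pattern>
  <action><!-- output chunk omitted --></action>
</rule>

<rule comment="noun + adjective">
  <pattern>
    <pattern-item n="nom"/>
    <pattern-item n="adj"/>
  </pattern>
  <action><!-- output chunk omitted --></action>
</rule>
</pre>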
{{IdeaSummary
| name = Automatic Error-Finder / Pseudo-Backpropagation
| difficulty = Hard
| size = large
| skills = python?
| description = Develop a tool to locate the approximate source of translation errors in the pipeline.
| rationale = Being able to generate a list of probable error sources automatically makes it possible to prioritize issues by frequency, frees up developer time, and is a first step towards automated generation of better rules.
| mentors = [[User:Popcorndude]]
| more = /Backpropagation
}}
{{IdeaSummary
| name = More Robust Recursive Transfer
| difficulty = Hard
| size = large
| skills = C++
| description = Ensure [[Apertium-recursive#Further_Documentation|Recursive Transfer]] survives ambiguous or incomplete parse trees
| rationale = Currently, one has to be very careful in writing recursive transfer rules to ensure they don't get too deep or ambiguous, and that they cover full sentences. See in particular issues [https://github.com/apertium/apertium-recursive/issues/97 97] and [https://github.com/apertium/apertium-recursive/issues/80 80]. We would like linguists to be able to fearlessly write recursive (rtx) rules based on what makes linguistic sense, and have rtx-proc/rtx-comp deal with the computational/performance side.
| mentors = [[User:TommiPirinen|Flammie]], +1 '''You need to find at least 1 more mentor to apply for this task'''
| more = /More_robust_recursive_transfer
}}
<!--
{{IdeaSummary
| name = CG-based Transfer
| difficulty = Hard
| size = large
| skills = C++
| description = Linguists already write dependency trees in [[Constraint Grammar]]. A following step could use these to reorder into target language trees.
| mentors =
| more =
}}
-->
{{IdeaSummary
| name = Language Server Protocol
| difficulty = Medium
| size = medium
| skills = any programming language
| description = Build a [https://microsoft.github.io/language-server-protocol/ Language Server] for the various Apertium rule formats
| rationale = We have some static analysis tools and syntax highlighters already, and it would be great if we could combine and expand them to support more text editors.
| mentors = [[User:Popcorndude]]
| more = /Language Server Protocol
}}
{{IdeaSummary
| name = WASM Compilation
| difficulty = hard
| size = medium
| skills = C++, Javascript
| description = Compile the pipeline modules to WASM and provide JS wrappers for them.
| rationale = There are situations where it would be nice to be able to run the entire pipeline in the browser.
| mentors = [[User:Tino Didriksen|Tino Didriksen]]
| more = /WASM
}}
== Web ==

If you know Python and JavaScript, here are some ideas for improving our [https://apertium.org website]. Some of these should be fairly short, and it would be a good idea to talk to the mentors about doing a couple of them together.
{{IdeaSummary
| name = Web API extensions
| difficulty = medium
| size = small
| skills = Python
| description = Update the web API for Apertium to expose all Apertium modes
| rationale = The current Web API misses out on a lot of functionality, like phonemicisation, segmentation, transliteration, and paradigm generation.
| mentors = [[User:Francis Tyers|Francis Tyers]], [[User:Firespeaker|Jonathan Washington]], [[User:Xavivars|Xavi Ivars]]
| more = /Apertium APY
}}
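For orientation, translation is already exposed along these lines (endpoint and parameter names as used by [[APy|Apertium APy]]; double-check the details against the APy documentation), and this project would add comparable endpoints for the remaining modes:

<pre>
GET /translate?langpair=eng|spa&q=The+house+is+white
-> {"responseData": {"translatedText": "La casa es blanca"}, ...}
</pre>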
{{IdeaSummary
| name = Website Improvements: Misc
| difficulty = Medium
| size = small
| skills = html, css, js, python
| description = Improve elements of Apertium's web infrastructure
| rationale = Apertium's website infrastructure [[Apertium-html-tools]] and its supporting API [[APy|Apertium APy]] have numerous open issues. This project would entail choosing a subset of open issues and features that could realistically be completed in the summer. You're encouraged to speak with the Apertium community to see which features and issues are the most pressing.
| mentors = [[User:Firespeaker|Jonathan Washington]], [[User:Xavivars|Xavi Ivars]]
| more = /Website improvements
}}
{{IdeaSummary
| name = Website Improvements: Dictionary Lookup
| difficulty = Medium
| size = small
| skills = html, css, js, python
| description = Finish implementing dictionary lookup mode in Apertium's web infrastructure
| rationale = Apertium's website infrastructure [[Apertium-html-tools]] and its supporting API [[APy|Apertium APy]] have numerous open issues, including half-completed features like dictionary lookup. This project would entail completing the dictionary lookup feature. Some additional features which would be good to work on include automatic reverse lookups (so that a user has a better understanding of the results), grammatical information (such as the gender of nouns or the conjugation paradigms of verbs), and information about MWEs.
| mentors = [[User:Firespeaker|Jonathan Washington]], [[User:Xavivars|Xavi Ivars]], [[User:Popcorndude]]
| more = [https://github.com/apertium/apertium-html-tools/issues/105 the open issue on GitHub]
}}
{{IdeaSummary
| name = Website Improvements: Spell checking
| difficulty = Medium
| size = small
| skills = html, js, css, python
| description = Add a spell-checking interface to Apertium's web tools
| rationale = [[Apertium-html-tools]] has seen some prototypes for spell-checking interfaces (all in stale PRs and branches on GitHub), but none have ended up being quite ready to integrate into the tools. This project would entail polishing up or recreating an interface, and making sure [[APy]] has a mode that allows access to Apertium voikospell modules. The end result should be a slick, easy-to-use interface for proofing text, with intuitive underlining of text deemed to be misspelled and intuitive presentation and selection of alternatives. See [https://github.com/apertium/apertium-html-tools/issues/390 the open issue on GitHub].
| mentors = [[User:Firespeaker|Jonathan Washington]], [[User:Xavivars|Xavi Ivars]]
| more = /Spell checker web interface
}}
{{IdeaSummary
| name = Website Improvements: Suggestions
| difficulty = Medium
| size = small
| skills = html, css, js, python
| description = Finish implementing a suggestions interface for Apertium's web infrastructure
| rationale = Some work has been done to add a "suggestions" interface to Apertium's website infrastructure [[Apertium-html-tools]] and its supporting API [[APy|Apertium APy]], whereby users can suggest corrected translations. This project would entail finishing that feature. There are some related [https://github.com/apertium/apertium-html-tools/issues/55 issues] and [https://github.com/apertium/apertium-html-tools/pull/252 PRs] on GitHub.
| mentors = [[User:Firespeaker|Jonathan Washington]], [[User:Xavivars|Xavi Ivars]]
| more = /Website improvements
}}
{{IdeaSummary
| name = Website Improvements: Orthography conversion interface
| difficulty = Medium
| size = small
| skills = html, js, css, python
| description = Add an orthography conversion interface to Apertium's web tools
| rationale = Several Apertium language modules (like Kazakh, Kyrgyz, Crimean Tatar, and Hñähñu) have orthography conversion modes in their mode definition files. This project would be to expose those modes through [[APy|Apertium APy]] and provide a simple interface in [[Apertium-html-tools]] to use them.
| mentors = [[User:Firespeaker|Jonathan Washington]], [[User:Xavivars|Xavi Ivars]]
| more = /Website improvements
}}

{{IdeaSummary
| name = Dictionary lookup with editing
| difficulty = Medium
| skills = XML, git, JavaScript, any language for backend (Python?)
| description = A bilingual dictionary (the kind for people, not a bidix) contains various kinds of information ([http://perseus.uchicago.edu/cgi-bin/philologic/getobject.pl?c.17:3:39.LSJ example here]). Possible things to find in such a dictionary include inflected forms, translations, and phrases that the word might occur in. It should be possible to extract this information from various files within a translation pair. Within the interface, users could make changes to that information which can then be automatically converted to a pull request on Github. Some prior efforts at various kinds of dictionary lookup have been attempted [https://github.com/apertium/apertium-html-tools/issues/105 here].
| rationale = Dictionary lookup is something that would be useful to a lot of users, and fixing bilingual dictionary entries is something new people frequently want to do.
| mentors = [[User:Popcorndude|Popcorndude]], mentors welcome
| more = /Bidix_lookup_and_maintenance
}}
{{IdeaSummary
| name = Add support for NMT to web API
| difficulty = Medium
| size = medium
| skills = python, NMT
| description = Add support for a popular NMT engine to Apertium's web API
| rationale = Currently Apertium's web API, [[APy|Apertium APy]], supports only Apertium language modules. But the front end could just as easily interface with an API that supports trained NMT models. The point of the project is to add support for one popular NMT package (e.g., translateLocally/Bergamot, OpenNMT or JoeyNMT) to the APy.
| mentors = [[User:Firespeaker|Jonathan Washington]], [[User:Xavivars|Xavi Ivars]]
| more =
}}
== Integrations ==

In addition to incorporating data from other projects, it would be nice if we could also make our data useful to them.
{{IdeaSummary
| name = OmniLingo and Apertium
| difficulty = medium
| size = either
| skills = JS, Python
| description = OmniLingo is a language learning system for practicing listening comprehension using Common Voice data. There is a lot of text processing involved (for example tokenisation) that could be aided by Apertium tools.
| rationale =
| mentors = [[User:Francis Tyers|Francis Tyers]]
| more = /OmniLingo
}}
{{IdeaSummary
| name = Support for Enhanced Dependencies in UD Annotatrix
| difficulty = medium
| size = medium
| skills = NodeJS
| description = UD Annotatrix is an annotation interface for Universal Dependencies, but does not yet support all functionality.
| rationale =
| mentors = [[User:Francis Tyers|Francis Tyers]]
| more = /Annotatrix enhanced dependencies
}}
<!-- This one was done, but could do with more work. Not sure if it's a full GSoC though?

{{IdeaSummary
| name = Improving language pairs mining Mediawiki Content Translation postedits
| difficulty = Hard
| skills = Python, shell scripting, some statistics
| description = Implement a toolkit that allows mining existing machine translation postediting data in [https://www.mediawiki.org/wiki/Content_translation Mediawiki Content Translation] to generate (as automatically as possible, and as complete as possible) monodix and bidix entries to improve the performance of an Apertium language pair. Data is available from Wikimedia content translation through an [https://www.mediawiki.org/wiki/Content_translation/Published_translations#API API] or in the form of [https://dumps.wikimedia.org/other/contenttranslation/ dumps] available in JSON and TMX format. This project is rather experimental and involves some research in addition to coding.
| rationale = Apertium is used to generate new Wikipedia content: machine-translated content is postedited (and perhaps adapted) before publishing. Postediting information may contain information that can be used to help improve the lexical components of an Apertium language pair.
| mentors = [[User:Mlforcada|Mikel Forcada]], (more mentors to be added)
| more = /automatic-postediting
}}

{{IdeaSummary
| name = User-friendly lexical selection training
| difficulty = Medium
| skills = Python, C++, shell scripting
| description = Make it so that training/inference of lexical selection rules is a more user-friendly process
| rationale = Our lexical selection module allows for inferring rules from corpora and word alignments, but the procedure is currently a bit messy, with various scripts involved that require lots of manual tweaking, and many third party tools to be installed. The goal of this task is to make the procedure as user-friendly as possible, so that ideally only a simple config file would be needed, and a driver script would take care of the rest.
| mentors = [[User:Unhammer|Unhammer]], [[User:Mlforcada|Mikel Forcada]]
| more = /User-friendly lexical selection training
}}
-->
{{IdeaSummary
| name = UD and Apertium integration
| difficulty = Entry level
| size = medium
| skills = python, javascript, HTML, (C++)
| description = Create a range of tools for making Apertium compatible with Universal Dependencies
| rationale = Universal dependencies is a fast growing project aimed at creating a unified annotation scheme for treebanks. This includes both part-of-speech and morphological features. Their annotated corpora could be extremely useful for Apertium for training models for translation. In addition, Apertium's rule-based morphological descriptions could be useful for software that relies on Universal dependencies.
| mentors = [[User:Francis Tyers]], [[User:Firespeaker|Jonathan Washington]], [[User:Popcorndude]]
| more = /UD and Apertium integration
}}
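To make the correspondence concrete, the same analysed token carries much the same information in Apertium's stream format and in UD's ten-column CoNLL-U format, which is what conversion tools would map between (single-token sketch; the tag-to-feature mapping shown is illustrative):

<pre>
Apertium:  ^houses/house<n><pl>$

CoNLL-U:
1   houses   house   NOUN   _   Number=Plur   0   root   _   _
</pre>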
Latest revision as of 09:15, 4 March 2024
This is the ideas page for Google Summer of Code, here you can find ideas on interesting projects that would make Apertium more useful for people and improve or expand our functionality.
Current Apertium contributors: If you have an idea please add it below, if you think you could mentor someone in a particular area, add your name to "Interested mentors" using ~~~
.
Prospective GSoC contributors: The page is intended as an overview of the kind of projects we have in mind. If one of them particularly piques your interest, please come and discuss with us on #apertium
on irc.oftc.net
(more on IRC), mail the mailing list, or draw attention to yourself in some other way.
Note that if you have an idea that isn't mentioned here, we would be very interested to hear about it.
Here are some more things you could look at:
- Top tips for GSOC applications
- Get in contact with one of our long-serving mentors — they are nice, honest!
- Pages in the development category
- Resources that could be converted or expanded in the incubator. Consider doing or improving a language pair (see incubator, nursery and staging for pairs that need work)
- Unhammer's wishlist
If you're a prospective GSoC contributor trying to propose a topic, the recommended way is to request a wiki account and then go to
http://wiki.apertium.org/wiki/User:[[your username]]/GSoC2023Proposal
and click the "create" button near the top of the page. It's also nice to include [[Category:GSoC_2023_student_proposals]]
to help organize submitted proposals.
Language Data[edit]
Can you read or write a language other than English (and we do mean any language)? If so, you can help with one of these and we can help you figure out the technical parts.
Develop a morphological analyser[edit]
- Difficulty:
3. Entry level - Size: Multiple lengths possible (discuss with the mentors which option is better for you)
- Required skills:
XML or HFST or lexd - Description:
Write a morphological analyser and generator for a language that does not yet have one - Rationale:
A key part of an Apertium machine translation system is a morphological analyser and generator. The objective of this task is to create an analyser for a language that does not yet have one. - Mentors:
Francis Tyers, Jonathan Washington, Sevilay Bayatlı, Hossep, nlhowell, User:Popcorndude - read more...
apertium-separable language-pair integration[edit]
- Difficulty:
2. Medium - Size: Small
- Required skills:
XML, a scripting language (Python, Perl), some knowledge of linguistics and/or at least one relevant natural language - Description:
Choose a language you can identify as having a good number of "multiwords" in the lexicon. Modify all language pairs in Apertium to use the Apertium-separable module to process the multiwords, and clean up the dictionaries accordingly. - Rationale:
Apertium-separable is a newish module to process lexical items with discontinguous dependencies, an area where Apertium has traditionally fallen short. Despite all the module has to offer, many translation pairs still don't use it. - Mentors:
Jonathan Washington, User:Popcorndude - read more...
Bring an unreleased translation pair to releasable quality
- Difficulty: 2. Medium
- Size: Large
- Required skills: shell scripting
- Description: Take an unstable language pair and improve its quality, focusing on testvoc
- Rationale: Many Apertium language pairs have large dictionaries and have otherwise seen much development, but are not of releasable quality. The point of this project would be to bring one translation pair to releasable quality. This would entail obtaining good naïve coverage and a clean testvoc (the idea behind testvoc is sketched below).
- Mentors: Jonathan Washington, Sevilay Bayatlı, User:Unhammer, Hèctor Alòs i Font
- read more...
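The idea behind testvoc, in miniature: expand every surface form the source dictionary accepts, push each one through the full pipeline, and flag debug symbols in the output. File and mode names below are placeholders, and real testvoc scripts batch the translation step rather than spawning one process per form.

import subprocess

def testvoc(dix="apertium-eng-spa.eng.dix", mode="eng-spa", d="."):
    """Report surface forms whose translation leaks debug symbols:
    * = unknown word, @ = transfer error, # = generation error."""
    forms = subprocess.run(["lt-expand", dix],
                           capture_output=True, text=True, check=True)
    for line in forms.stdout.splitlines():
        surface = line.split(":")[0]
        out = subprocess.run(["apertium", "-d", d, mode],
                             input=surface, capture_output=True,
                             text=True).stdout.strip()
        if any(sym in out for sym in "*@#"):
            print(f"{surface}\t{out}")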
Develop a prototype MT system for a strategic language pair
- Difficulty: 2. Medium
- Size: Large
- Required skills: XML, some knowledge of linguistics and of one relevant natural language
- Description: Create a translation pair based on two existing language modules, focusing on the dictionary and structural transfer
- Rationale: Choose a strategic set of languages to develop an MT system for, such that you know the target language well and morphological transducers for each language are part of Apertium. Develop an Apertium MT system by focusing on writing a bilingual dictionary and structural transfer rules. Expanding the transducers and disambiguation, and writing lexical selection rules and multiword sequences may also be part of the work. The pair may be an existing prototype, but if it's a heavily developed but unreleased pair, consider applying for "Bring an unreleased translation pair to releasable quality" instead.
- Mentors: Jonathan Washington, Sevilay Bayatlı, User:Unhammer, Hèctor Alòs i Font
- read more...
Add a new variety to an existing language
- Difficulty: 3. Entry level
- Size: Multiple lengths possible (discuss with the mentors which option is better for you)
- Required skills: XML, some knowledge of linguistics and of one relevant natural language
- Description: Add a language variety to one or more released pairs, focusing on the dictionary and lexical selection
- Rationale: Take a released language, and define a new language variety for it: e.g. Quebec French or Provençal Occitan. Then add the new variety to one or more released language pairs, without diminishing the quality of the pre-existing variety(ies). The objective is to facilitate the generation of varieties for languages with a weak standardisation and/or pluricentric languages.
- Mentors: Hèctor Alòs i Font, Jonathan Washington, Sevilay Bayatlı
- read more...
Leverage and integrate language preferences into language pairs
- Difficulty: 3. Entry level
- Size: Medium
- Required skills: XML, some knowledge of linguistics and of one relevant natural language
- Description: Update language pairs with lexical and orthographical variations to leverage the new preferences functionality
- Rationale: Currently, preferences are implemented via language variants, which rely on multiple dictionaries, increasing compilation time exponentially every time a new preference gets introduced.
- Mentors: Xavi Ivars, User:Unhammer
- read more...
Add Capitalization Handling Module to a Language Pair
- Difficulty: 3. Entry level
- Size: Small
- Required skills: XML, knowledge of some relevant natural language
- Description: Update a language pair to make use of the new capitalization handling module
- Rationale: Correcting capitalization via transfer rules is tedious and error-prone, but putting them in a separate set of rules should allow them to be handled in a more concise and maintainable way. Additionally, capitalization rules could possibly be moved to monolingual modules, thus reducing development effort on translators.
- Mentors: User:Popcorndude
- read more...
Data Extraction
A lot of the language data we need to make our analyzers and translators work already exists in other forms and we just need to figure out how to convert it. If you know of another source of data that isn't listed, we'd love to hear about it.
dictionary induction from wikis
- Difficulty: 2. Medium
- Size: Multiple lengths possible (discuss with the mentors which option is better for you)
- Required skills: MySQL, mediawiki syntax, perl, maybe C++ or Java; Java, Scala, RDF, and DBpedia to use DBpedia extraction
- Description: Extract dictionaries from linguistic wikis (a starting-point sketch is given below)
- Rationale: Wiki dictionaries and encyclopedias (e.g. omegawiki, wiktionary, wikipedia, dbpedia) contain information (e.g. bilingual equivalences, morphological features, conjugations) that could be exploited to speed up the development of dictionaries for Apertium. This task aims at automatically building dictionaries by extracting different pieces of information from wiki structures such as interlingual links, infoboxes and/or from dbpedia RDF datasets.
- Mentors: Jonathan Washington, User:Popcorndude
- read more...
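As one concrete direction, a script can stream a Wiktionary dump and pull translations out of the wikitext. A rough sketch; the dump filename, the export namespace version, and the {{t|...}} template pattern are assumptions that vary by wiki and would need adjusting.

import re
import xml.etree.ElementTree as ET

NS = "{http://www.mediawiki.org/xml/export-0.10/}"  # check your dump
# English Wiktionary marks translations with templates like {{t|es|casa}}.
T_TEMPLATE = re.compile(r"\{\{t\+?\|([a-z-]+)\|([^|}]+)")

def translations(dump_path):
    """Stream pages from a MediaWiki XML dump, yielding
    (title, language code, translation) triples."""
    for _, elem in ET.iterparse(dump_path):
        if elem.tag == NS + "page":
            title = elem.findtext(NS + "title")
            text = elem.findtext(f"{NS}revision/{NS}text") or ""
            for lang, word in T_TEMPLATE.findall(text):
                yield title, lang, word
            elem.clear()  # keep memory bounded on multi-GB dumps

for title, lang, word in translations("enwiktionary-latest-pages-articles.xml"):
    if lang == "es":
        print(f"<e><p><l>{title}</l><r>{word}</r></p></e>")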
Dictionary induction from parallel corpora / Revive ReTraTos
- Difficulty: 2. Medium
- Size: Medium
- Required skills: C++, perl, python, xml, scripting, machine learning
- Description: Extract dictionaries from parallel corpora
- Rationale: Given a pair of monolingual modules and a parallel corpus, we should be able to run a program to align tagged sentences and give us the best entries that are missing from bidix. ReTraTos did exactly this, but it's from 2008. We want a program which builds and runs in 2022, and does all the steps for the user (the general idea is sketched below).
- Mentors: User:Unhammer, User:Popcorndude
- read more...
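The general idea in miniature: count which source and target lemmas co-occur in aligned sentences and keep the strongest pairings as candidate bidix entries. This toy sketch assumes pre-lemmatised, sentence-aligned input; a real tool would use proper word alignment (e.g. GIZA++ or eflomal) rather than raw co-occurrence counts.

from collections import Counter
from itertools import product

def propose_pairs(src_lines, trg_lines, min_count=3):
    """Count lemma co-occurrences across aligned sentences and keep,
    for each source lemma, its most frequent target lemma."""
    cooc = Counter()
    for src, trg in zip(src_lines, trg_lines):
        for s, t in product(set(src.split()), set(trg.split())):
            cooc[(s, t)] += 1
    best = {}
    for (s, t), n in cooc.items():
        if n >= min_count and n > best.get(s, (None, 0))[1]:
            best[s] = (t, n)
    return {s: t for s, (t, _) in best.items()}

src = ["the house is red", "the house is big"]
trg = ["la casa es roja", "la casa es grande"]
print(propose_pairs(src, trg, min_count=2))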
Extract morphological data from FLEx
- Difficulty: 1. Hard
- Size: Large
- Required skills: python, XML parsing
- Description: Write a program to extract data from SIL FieldWorks and convert as much as possible to monodix (and maybe bidix).
- Rationale: There's a lot of potentially useful data in FieldWorks files that might be enough to build a whole monodix for some languages, but it's currently really hard to use (a starting point is sketched below).
- Mentors: Popcorndude, Flammie
- read more...
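FieldWorks can export its lexicon as LIFT, an XML interchange format that is much easier to start from than the native project files. A sketch based on the common LIFT layout (entry → lexical-unit/form/text, sense → grammatical-info); real exports should be checked against this assumption, and the paradigm names emitted here are deliberately placeholders.

import xml.etree.ElementTree as ET

def lift_entries(path):
    """Yield (lemma, part-of-speech) pairs from a LIFT lexicon export."""
    for entry in ET.parse(path).iter("entry"):
        form = entry.findtext("lexical-unit/form/text")
        if not form:
            continue
        gi = entry.find("sense/grammatical-info")
        pos = gi.get("value") if gi is not None else "unknown"
        yield form, pos

# Emit skeleton monodix entries; a human still needs to assign
# real paradigms in place of the FIXME names.
for lemma, pos in lift_entries("lexicon.lift"):
    print(f'<e lm="{lemma}"><i>{lemma}</i><par n="FIXME__{pos}"/></e>')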
Tooling
These are projects for people who would be comfortable digging through our C++ codebases (you will be doing a lot of that).
Python API for Apertium
- Difficulty: 2. Medium
- Size: Medium
- Required skills: C++, Python
- Description: Update the Python API for Apertium to expose all Apertium modes and test with all major OSes
- Rationale: The current Python API misses out on a lot of functionality, like phonemicisation, segmentation, and transliteration, and doesn't work on some OSes, like Debian (one possible shape for the missing pieces is sketched below).
- Mentors: Francis Tyers
- read more...
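To make the goal concrete, here is one hypothetical shape the extended API could take, implemented naively over the command-line tools; none of the names here are the package's actual API.

import subprocess

def run_mode(mode_dir, mode, text):
    """Generic wrapper: run any Apertium mode (translation,
    transliteration, segmentation, ...) over a piece of text."""
    return subprocess.run(
        ["apertium", "-d", mode_dir, mode],
        input=text, capture_output=True, text=True, check=True,
    ).stdout

# A fuller Python API could expose each mode as its own call, e.g.:
#   apertium.phonemicise('tat', 'алма')    # hypothetical
#   apertium.segment('jpn', '日本語の文')  # hypothetical
print(run_mode(".", "eng-spa", "The house is red."))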
Robust tokenisation in lttoolbox
- Difficulty: 2. Medium
- Size: Large
- Required skills: C++, XML, Python
- Description: Improve the longest-match left-to-right tokenisation strategy in lttoolbox to handle spaceless orthographies (a toy version of the strategy is sketched below).
- Rationale: One of the most frustrating things about working with Apertium on texts "in the wild" is the way that the tokenisation works. If a letter is not specified in the alphabet, it is dealt with as whitespace, so e.g. you get unknown words split in two, and you can end up with stuff like ^G$ö^k$ı^rmak$, which is terrible for further processing. Additionally, the system is nearly impossible to use for languages that don't use spaces, such as Japanese.
- Mentors: Francis Tyers, Flammie
- read more...
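The strategy in question is longest-match left-to-right (LRLM) matching against the lexicon itself, rather than a whitespace-plus-alphabet heuristic. A toy version over a plain set of known words; lttoolbox would do the same walk over the transducer.

def lrlm_tokenise(text, lexicon):
    """Longest-match left-to-right tokenisation.

    At each position, take the longest known word starting there;
    otherwise emit a single character as unknown. No notion of an
    'alphabet' is needed, so spaceless orthographies just work."""
    tokens, i = [], 0
    longest = max(map(len, lexicon))
    while i < len(text):
        for j in range(min(len(text), i + longest), i, -1):
            if text[i:j] in lexicon:
                tokens.append((text[i:j], "known"))
                i = j
                break
        else:
            tokens.append((text[i], "unknown"))
            i += 1
    return tokens

# Japanese-style spaceless input with a tiny toy lexicon.
print(lrlm_tokenise("日本語の文", {"日本語", "の", "文", "日本"}))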
rule visualization tools
- Difficulty: 2. Medium
- Size: Multiple lengths possible (discuss with the mentors which option is better for you)
- Required skills: python? javascript? XML
- Description: make tools to help visualize the effect of various rules
- Rationale: Rule writers would benefit from being able to see what their rules actually do to the input; see https://github.com/Jakespringer/dapertium for an example.
- Mentors: Jonathan Washington, Sevilay Bayatlı, User:Popcorndude
- read more...
Extend Weighted transfer rules
- Difficulty: 2. Medium
- Size: Medium
- Required skills: C++, python
- Description: The weighted transfer module is already applied to the chunker transfer rules; the idea here is to extend that module so it also applies to interchunk and postchunk transfer rules.
- Rationale: As a resource, see https://github.com/aboelhamd/Weighted-transfer-rules-module
- Mentors: Sevilay Bayatlı
- read more...
Automatic Error-Finder / Pseudo-Backpropagation
- Difficulty: 1. Hard
- Size: Large
- Required skills: python?
- Description: Develop a tool to locate the approximate source of translation errors in the pipeline.
- Rationale: Being able to generate a list of probable error sources automatically makes it possible to prioritize issues by frequency, frees up developer time, and is a first step towards automated generation of better rules.
- Mentors: User:Popcorndude
- read more...
More Robust Recursive Transfer
- Difficulty: 1. Hard
- Size: Large
- Required skills: C++
- Description: Ensure Recursive Transfer survives ambiguous or incomplete parse trees
- Rationale: Currently, one has to be very careful in writing recursive transfer rules to ensure they don't get too deep or ambiguous, and that they cover full sentences. See in particular issues 97 and 80. We would like linguists to be able to fearlessly write recursive (rtx) rules based on what makes linguistic sense, and have rtx-proc/rtx-comp deal with the computational/performance side.
- Mentors:
- read more...
CG-based Transfer
- Difficulty: 1. Hard
- Size: Large
- Required skills: C++
- Description: Linguists already write dependency trees in Constraint Grammar. A following step could use these to reorder into target-language trees.
- Mentors:
Language Server Protocol
- Difficulty: 2. Medium
- Size: Medium
- Required skills: any programming language
- Description: Build a language server (https://microsoft.github.io/language-server-protocol/) for Apertium's rule and dictionary formats (a minimal sketch follows below)
- Rationale: We have some static analysis tools and syntax highlighters already and it would be great if we could combine and expand them to support more text editors.
- Mentors: User:Popcorndude
- read more...
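A minimal sketch of what such a server could look like in Python, using the third-party pygls library (import paths and class names vary between pygls versions, so treat these details as assumptions); the tab-character check is a stand-in for real dix/transfer-rule analysis.

from pygls.server import LanguageServer
from lsprotocol.types import (
    TEXT_DOCUMENT_DID_OPEN,
    Diagnostic, DiagnosticSeverity, DidOpenTextDocumentParams,
    Position, Range,
)

server = LanguageServer("apertium-ls", "v0.1")

@server.feature(TEXT_DOCUMENT_DID_OPEN)
def did_open(ls: LanguageServer, params: DidOpenTextDocumentParams):
    """Toy check: flag tab characters in an opened file as warnings."""
    diagnostics = []
    for lineno, line in enumerate(params.text_document.text.splitlines()):
        col = line.find("\t")
        if col != -1:
            diagnostics.append(Diagnostic(
                range=Range(start=Position(line=lineno, character=col),
                            end=Position(line=lineno, character=col + 1)),
                message="Tab character (placeholder for a real lint)",
                severity=DiagnosticSeverity.Warning,
            ))
    ls.publish_diagnostics(params.text_document.uri, diagnostics)

if __name__ == "__main__":
    server.start_io()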
WASM Compilation
- Difficulty: 1. Hard
- Size: Medium
- Required skills: C++, Javascript
- Description: Compile the pipeline modules to WASM and provide JS wrappers for them.
- Rationale: There are situations where it would be nice to be able to run the entire pipeline in the browser.
- Mentors: Tino Didriksen
- read more...
Web
If you know Python and JavaScript, here are some ideas for improving our website. Some of these should be fairly short, and it would be a good idea to talk to the mentors about doing a couple of them together.
Web API extensions
- Difficulty: 2. Medium
- Size: Small
- Required skills: Python
- Description: Update the web API for Apertium to expose all Apertium modes (the request pattern new modes would follow is sketched below)
- Rationale: The current Web API misses out on a lot of functionality, like phonemicisation, segmentation, transliteration, and paradigm generation.
- Mentors: Francis Tyers, Jonathan Washington, Xavi Ivars
- read more...
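For context, a running APy instance answers simple GET requests; new modes would follow the same pattern. The /translate endpoint below is real, while the /phonemicise call at the end is hypothetical, the kind of mode this project would add.

import json
import urllib.parse
import urllib.request

APY = "http://localhost:2737"  # apertium-apy's default port

def apy_get(endpoint, **params):
    """Issue a GET request to a running APy and decode the JSON reply."""
    url = f"{APY}/{endpoint}?{urllib.parse.urlencode(params)}"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

# Existing mode: translation.
resp = apy_get("translate", langpair="eng|spa", q="The house is red.")
print(resp["responseData"]["translatedText"])

# Hypothetical new mode this project would add:
#   apy_get("phonemicise", lang="tat", q="алма")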
Website Improvements: Misc
- Difficulty: 2. Medium
- Size: Small
- Required skills: html, css, js, python
- Description: Improve elements of Apertium's web infrastructure
- Rationale: Apertium's website infrastructure Apertium-html-tools and its supporting API Apertium APy have numerous open issues. This project would entail choosing a subset of open issues and features that could realistically be completed in the summer. You're encouraged to speak with the Apertium community to see which features and issues are the most pressing.
- Mentors: Jonathan Washington, Xavi Ivars
- read more...
Website Improvements: Dictionary Lookup
- Difficulty: 2. Medium
- Size: Small
- Required skills: html, css, js, python
- Description: Finish implementing dictionary lookup mode in Apertium's web infrastructure
- Rationale: Apertium's website infrastructure Apertium-html-tools and its supporting API Apertium APy have numerous open issues, including half-completed features like dictionary lookup. This project would entail completing the dictionary lookup feature. Some additional features which would be good to work on include automatic reverse lookups (so that a user has a better understanding of the results), grammatical information (such as the gender of nouns or the conjugation paradigms of verbs), and information about MWEs. See the open issue on GitHub.
- Mentors: Jonathan Washington, Xavi Ivars, User:Popcorndude
- read more...
Website Improvements: Spell checking
- Difficulty: 2. Medium
- Size: Small
- Required skills: html, js, css, python
- Description: Add a spell-checking interface to Apertium's web tools
- Rationale: Apertium-html-tools has seen some prototypes for spell-checking interfaces (all in stale PRs and branches on GitHub), but none have ended up being quite ready to integrate into the tools. This project would entail polishing up or recreating an interface, and making sure APy has a mode that allows access to Apertium voikospell modules. The end result should be a slick, easy-to-use interface for proofing text, with intuitive underlining of text deemed to be misspelled and intuitive presentation and selection of alternatives. See the open issue on GitHub.
- Mentors: Jonathan Washington, Xavi Ivars
- read more...
Website Improvements: Suggestions
- Difficulty: 2. Medium
- Size: Small
- Required skills: html, css, js, python
- Description: Finish implementing a suggestions interface for Apertium's web infrastructure
- Rationale: Some work has been done to add a "suggestions" interface to Apertium's website infrastructure Apertium-html-tools and its supporting API Apertium APy, whereby users can suggest corrected translations. This project would entail finishing that feature. There are some related issues and PRs on GitHub.
- Mentors: Jonathan Washington, Xavi Ivars
- read more...
Website Improvements: Orthography conversion interface
- Difficulty: 2. Medium
- Size: Small
- Required skills: html, js, css, python
- Description: Add an orthography conversion interface to Apertium's web tools
- Rationale: Several Apertium language modules (like Kazakh, Kyrgyz, Crimean Tatar, and Hñähñu) have orthography conversion modes in their mode definition files. This project would be to expose those modes through Apertium APy and provide a simple interface in Apertium-html-tools to use them.
- Mentors: Jonathan Washington, Xavi Ivars
- read more...
Add support for NMT to web API
- Difficulty: 2. Medium
- Size: Medium
- Required skills: python, NMT
- Description: Add support for a popular NMT engine to Apertium's web API
- Rationale: Currently Apertium's web API, Apertium APy, supports only Apertium language modules. But the front end could just as easily interface with an API that supports trained NMT models. The point of the project is to add support for one popular NMT package (e.g., translateLocally/Bergamot, OpenNMT or JoeyNMT) to the APy.
- Mentors: Jonathan Washington, Xavi Ivars
Integrations
In addition to incorporating data from other projects, it would be nice if we could also make our data useful to them.
OmniLingo and Apertium
- Difficulty: 2. Medium
- Size: Multiple lengths possible (discuss with the mentors which option is better for you)
- Required skills: JS, Python
- Description: OmniLingo is a language learning system for practicing listening comprehension using Common Voice data. There is a lot of text processing involved (for example tokenisation) that could be aided by Apertium tools.
- Mentors: Francis Tyers
- read more...
Support for Enhanced Dependencies in UD Annotatrix
- Difficulty: 2. Medium
- Size: Medium
- Required skills: NodeJS
- Description: UD Annotatrix is an annotation interface for Universal Dependencies, but does not yet support all functionality, such as enhanced dependencies.
- Mentors: Francis Tyers
- read more...
UD and Apertium integration
- Difficulty: 3. Entry level
- Size: Medium
- Required skills: python, javascript, HTML, (C++)
- Description: Create a range of tools for making Apertium compatible with Universal Dependencies (one such tool is sketched below)
- Rationale: Universal Dependencies is a fast-growing project aimed at creating a unified annotation scheme for treebanks. This includes both part-of-speech and morphological features. Their annotated corpora could be extremely useful for Apertium for training models for translation. In addition, Apertium's rule-based morphological descriptions could be useful for software that relies on Universal Dependencies.
- Mentors: User:Francis Tyers, Jonathan Washington, User:Popcorndude
- read more...
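As an example of such a tool, here is a toy converter from the tagged Apertium stream format to CoNLL-U rows; the hard part in practice is mapping Apertium tags to UD part-of-speech and feature values, which this sketch sidesteps by just copying the first tag.

import re

LU = re.compile(r"\^([^/]+)/([^<$]+)((?:<[^>]+>)*)\$")

def stream_to_conllu(stream):
    """Convert a tagged Apertium stream line like
    ^the/the<det><def>$ ^cat/cat<n><sg>$
    into CoNLL-U rows (FEATS, HEAD, etc. left unfilled)."""
    rows = []
    for i, m in enumerate(LU.finditer(stream), start=1):
        surface, lemma, tagstr = m.group(1), m.group(2), m.group(3)
        tags = [t.strip("<>") for t in re.findall(r"<[^>]+>", tagstr)]
        upos = tags[0].upper() if tags else "X"  # naive tag mapping
        rows.append(f"{i}\t{surface}\t{lemma}\t{upos}\t_\t_\t_\t_\t_\t_")
    return "\n".join(rows)

print(stream_to_conllu("^the/the<det><def>$ ^cat/cat<n><sg>$"))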