{{TOCD}}
This is the ideas page for [[Google Summer of Code]]. Here you can find ideas for interesting projects that would make Apertium more useful for people and improve or expand our functionality.

'''Current Apertium contributors''': If you have an idea, please add it below. If you think you could mentor someone in a particular area, add your name to "Interested mentors" using <code><nowiki>~~~</nowiki></code>.

'''Prospective GSoC contributors''': The page is intended as an overview of the kind of projects we have in mind. If one of them particularly piques your interest, please come and discuss with us on <code>#apertium</code> on <code>irc.oftc.net</code> ([[IRC|more on IRC]]), mail the [[Contact|mailing list]], or draw attention to yourself in some other way.
 
   
Note that if you have an idea that isn't mentioned here, we would be very interested to hear about it.

Here are some more things you could look at:
 
   
* [[Top tips for GSOC applications]]
* Get in contact with one of our long-serving [[List of Apertium mentors|mentors]] &mdash; they are nice, honest!
* Pages in the [[:Category:Development|development category]]
* Resources that could be converted or expanded in the [[incubator]]. Consider doing or improving a language pair (see [[incubator]], [[nursery]] and [[staging]] for pairs that need work)
* Unhammer's [[User:Unhammer/wishlist|wishlist]]
<!--* The open issues [https://github.com/search?q=org%3Aapertium&state=open&type=Issues on Github] - especially the [https://github.com/search?q=org%3Aapertium+label%3A%22good+first+issue%22&state=open&type=Issues Good First Issues]. -->

__TOC__
 
   
If you're a prospective GSoC contributor trying to propose a topic, the recommended way is to request a wiki account and then go to <pre>http://wiki.apertium.org/wiki/User:[[your username]]/GSoC2023Proposal</pre> and click the "create" button near the top of the page. It's also nice to include <code><nowiki>[[</nowiki>[[:Category:GSoC_2023_student_proposals|Category:GSoC_2023_student_proposals]]<nowiki>]]</nowiki></code> to help organize submitted proposals.

== Language Data ==

Can you read or write a language other than English (and we do mean any language)? If so, you can help with one of these and we can help you figure out the technical parts.

{{IdeaSummary
| name = Develop a morphological analyser
| difficulty = easy
| size = either
| skills = XML or HFST or lexd
| description = Write a morphological analyser and generator for a language that does not yet have one
| rationale = A key part of an Apertium machine translation system is a morphological analyser and generator. The objective of this task is to create an analyser for a language that does not yet have one.
| mentors = [[User:Francis Tyers|Francis Tyers]], [[User:Firespeaker|Jonathan Washington]], [[User: Sevilay Bayatlı|Sevilay Bayatlı]], Hossep, nlhowell, [[User:Popcorndude]]
| more = /Morphological analyser
}}
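
For a sense of what "having an analyser" means concretely: in the lttoolbox XML formalism, an analyser is a set of inflection paradigms plus lemma entries that point at them. A minimal sketch (toy English-like noun; the paradigm name is invented):

<pre>
<dictionary>
  <alphabet>abcdefghijklmnopqrstuvwxyz</alphabet>
  <sdefs>
    <sdef n="n"  c="noun"/>
    <sdef n="sg" c="singular"/>
    <sdef n="pl" c="plural"/>
  </sdefs>
  <pardefs>
    <!-- toy paradigm: add nothing for singular, "s" for plural -->
    <pardef n="house__n">
      <e><p><l></l><r><s n="n"/><s n="sg"/></r></p></e>
      <e><p><l>s</l><r><s n="n"/><s n="pl"/></r></p></e>
    </pardef>
  </pardefs>
  <section id="main" type="standard">
    <e lm="house"><i>house</i><par n="house__n"/></e>
  </section>
</dictionary>
</pre>

<code>lt-comp lr</code> compiles this into a transducer and <code>lt-proc</code> runs it over text; HFST and lexd offer equivalent formalisms that scale better to complex morphologies.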
 
   
{{IdeaSummary
| name = apertium-separable language-pair integration
| difficulty = Medium
| size = small
| skills = XML, a scripting language (Python, Perl), some knowledge of linguistics and/or at least one relevant natural language
| description = Choose a language you can identify as having a good number of "multiwords" in the lexicon. Modify all language pairs in Apertium to use the [[Apertium-separable]] module to process the multiwords, and clean up the dictionaries accordingly.
| rationale = Apertium-separable is a newish module to process lexical items with discontiguous dependencies, an area where Apertium has traditionally fallen short. Despite all the module has to offer, many translation pairs still don't use it.
| mentors = [[User:Firespeaker|Jonathan Washington]], [[User:Popcorndude]]
| more = /Apertium separable
}}
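
To make the task concrete, this is roughly what [[Apertium-separable]] does in the pipeline, sketched for an English phrasal verb (tags simplified, analyses abbreviated):

<pre>
Input:                  He takes the rubbish out
After disambiguation:   ^he<prn>$ ^take<vblex><pri><p3><sg>$ ^the<det><def>$ ^rubbish<n><sg>$ ^out<adv>$
After lsx-proc:         ^he<prn>$ ^take# out<vblex><pri><p3><sg>$ ^the<det><def>$ ^rubbish<n><sg>$
</pre>

The discontiguous "takes ... out" becomes a single multiword lexical unit that the rest of the pipeline can translate as a unit; integrating this into a pair means writing the lsx dictionary and removing the old multiword workarounds from its dictionaries.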
   
{{IdeaSummary
| name = Bring an unreleased translation pair to releasable quality
| difficulty = Medium
| size = large
| skills = shell scripting
| description = Take an unstable language pair and improve its quality, focusing on testvoc
| rationale = Many Apertium language pairs have large dictionaries and have otherwise seen much development, but are not of releasable quality. The point of this project would be to bring one translation pair to releasable quality. This would entail obtaining good naïve coverage and a clean [[testvoc]].
| mentors = [[User:Firespeaker|Jonathan Washington]], [[User:Sevilay Bayatlı|Sevilay Bayatlı]], [[User:Unhammer]], [[User:hectoralos|Hèctor Alòs i Font]]
| more = /Make a language pair state-of-the-art
}}
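
The central tool here is [[testvoc]], which is conceptually just a few lines of shell (a rough sketch with a hypothetical pair; the real testvoc scripts in each pair's repository handle direction restrictions and tag filtering more carefully):

<pre>
# Expand every surface form the analyser knows, translate each one, and
# flag anything that comes out with debug symbols:
#   @ = missing from the bilingual dictionary, # = generation failure
lt-expand apertium-foo-bar.foo.dix \
  | cut -d: -f1 \
  | apertium -d . foo-bar \
  | grep -E '[@#]' \
  | sort -u > testvoc-gaps.txt
</pre>

A "clean testvoc" means that list of gaps ends up empty.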
   
{{IdeaSummary
| name = Develop a prototype MT system for a strategic language pair
| difficulty = Medium
| size = large
| skills = XML, some knowledge of linguistics and of one relevant natural language
| description = Create a translation pair based on two existing language modules, focusing on the dictionary and structural transfer
| rationale = Choose a strategic set of languages to develop an MT system for, such that you know the target language well and morphological transducers for each language are part of Apertium. Develop an Apertium MT system by focusing on writing a bilingual dictionary and structural transfer rules. Expanding the transducers and disambiguation, and writing lexical selection rules and multiword sequences may also be part of the work. The pair may be an existing prototype, but if it's a heavily developed but unreleased pair, consider applying for "Bring an unreleased translation pair to releasable quality" instead.
| mentors = [[User:Firespeaker|Jonathan Washington]], [[User:Sevilay Bayatlı| Sevilay Bayatlı]], [[User:Unhammer]], [[User:hectoralos|Hèctor Alòs i Font]]
| more = /Adopt a language pair
}}
   
{{IdeaSummary
| name = Add a new variety to an existing language
| difficulty = easy
| size = either
| skills = XML, some knowledge of linguistics and of one relevant natural language
| description = Add a language variety to one or more released pairs, focusing on the dictionary and lexical selection
| rationale = Take a released language, and define a new language variety for it: e.g. Quebec French or Provençal Occitan. Then add the new variety to one or more released language pairs, without diminishing the quality of the pre-existing variety(ies). The objective is to facilitate the generation of varieties for languages with a weak standardisation and/or pluricentric languages.
| mentors = [[User:hectoralos|Hèctor Alòs i Font]], [[User:Firespeaker|Jonathan Washington]], [[User:piraye|Sevilay Bayatlı]]
| more = /Add a new variety to an existing language
}}
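
As an illustration of the mechanism: lttoolbox entries can carry variant attributes (<code>v</code>, <code>vl</code>, <code>vr</code>), so much of the work is tagging entries by variety. A hypothetical French/Quebec French bidix fragment (variant names invented; see a released pair with varieties for the full compilation pattern):

<pre>
<!-- generated in all varieties -->
<e><p><l>car<s n="n"/></l><r>voiture<s n="n"/></r></p></e>
<!-- only used when generating the hypothetical "frc_QC" variety -->
<e vr="frc_QC"><p><l>car<s n="n"/></l><r>char<s n="n"/></r></p></e>
</pre>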
   
{{IdeaSummary
| name = Leverage and integrate language preferences into language pairs
| difficulty = easy
| size = medium
| skills = XML, some knowledge of linguistics and of one relevant natural language
| description = Update language pairs with lexical and orthographical variations to leverage the new [[Dialectal_or_standard_variation|preferences]] functionality
| rationale = Currently, preferences are implemented via language variants, which rely on multiple dictionaries, increasing compilation time exponentially every time a new preference gets introduced.
| mentors = [[User:Xavivars|Xavi Ivars]], [[User:Unhammer]]
| more = /Use preferences in pair
}}
   
{{IdeaSummary
| name = Add Capitalization Handling Module to a Language Pair
| difficulty = easy
| size = small
| skills = XML, knowledge of some relevant natural language
| description = Update a language pair to make use of the new [[Capitalization_restoration|Capitalization handling module]]
| rationale = Correcting capitalization via transfer rules is tedious and error prone, but putting capitalization handling in a separate set of rules should allow it to be done in a more concise and maintainable way. Additionally, it is possible that capitalization rules could be moved to monolingual modules, thus reducing development effort on translators.
| mentors = [[User:Popcorndude]]
| more = /Capitalization
}}
   
== Data Extraction ==

A lot of the language data we need to make our analyzers and translators work already exists in other forms and we just need to figure out how to convert it. If you know of another source of data that isn't listed, we'd love to hear about it.

{{IdeaSummary
| name = dictionary induction from wikis
| difficulty = Medium
| size = either
| skills = MySQL, mediawiki syntax, perl, maybe C++ or Java; Java, Scala, RDF, and DBpedia to use DBpedia extraction
| description = Extract dictionaries from linguistic wikis
| rationale = Wiki dictionaries and encyclopedias (e.g. omegawiki, wiktionary, wikipedia, dbpedia) contain information (e.g. bilingual equivalences, morphological features, conjugations) that could be exploited to speed up the development of dictionaries for Apertium. This task aims at automatically building dictionaries by extracting different pieces of information from wiki structures such as interlingual links, infoboxes and/or from dbpedia RDF datasets.
| mentors = [[User:Firespeaker|Jonathan Washington]], [[User:Popcorndude]]
| more = /Dictionary induction from wikis
}}
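
As a taste of the extraction side, here is a minimal sketch of harvesting translation pairs from English Wiktionary wikitext with the mwparserfromhell library (<code>t</code>/<code>t+</code> are the English Wiktionary translation templates; other wikis use other conventions, and fetching the dumps is a separate step):

<pre>
import mwparserfromhell

def extract_translations(wikitext, target_lang="es"):
    """Yield target-language lemmas from {{t|...}} / {{t+|...}} templates."""
    for tpl in mwparserfromhell.parse(wikitext).filter_templates():
        if str(tpl.name).strip() in ("t", "t+"):
            if tpl.has("1") and str(tpl.get("1").value).strip() == target_lang:
                if tpl.has("2"):
                    yield str(tpl.get("2").value).strip()

# toy example: one line of the translations section for "house"
print(list(extract_translations("* Spanish: {{t+|es|casa|f}}")))  # ['casa']
</pre>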
   
{{IdeaSummary
| name = Dictionary induction from parallel corpora / Revive ReTraTos
| difficulty = Medium
| size = medium
| skills = C++, perl, python, xml, scripting, machine learning
| description = Extract dictionaries from parallel corpora
| rationale = Given a pair of monolingual modules and a parallel corpus, we should be able to run a program to align tagged sentences and give us the best entries that are missing from bidix. [[ReTraTos]] did this back in 2008, but it's from 2008. We want a program which builds and runs today, and does all the steps for the user.
| mentors = [[User:Unhammer]], [[User:Popcorndude]]
| more = /Dictionary induction from parallel corpora
}}
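
Most of the pieces (taggers, word aligners such as fast_align or eflomal) already exist; the project is the glue around them. A sketch of the final step, turning word alignments over tagged sentences into candidate bidix entries (data structures and thresholds invented for illustration):

<pre>
from collections import Counter

def candidate_entries(alignments, min_count=5):
    """alignments: iterable of (src, trg, links), where src/trg are lists of
    (lemma, pos) pairs and links are (i, j) index pairs from the aligner."""
    counts = Counter()
    for src, trg, links in alignments:
        for i, j in links:
            (sl, sp), (tl, tp) = src[i], trg[j]
            if sp == tp:  # keep only same-POS alignments
                counts[(sl, tl, sp)] += 1
    for (sl, tl, pos), n in counts.most_common():
        if n >= min_count:
            yield f'<e><p><l>{sl}<s n="{pos}"/></l><r>{tl}<s n="{pos}"/></r></p></e>'
</pre>

The real task also includes filtering against the existing bidix and handling multiwords, which is where it stops being a one-afternoon script.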
   
{{IdeaSummary
| name = Extract morphological data from FLEx
| difficulty = hard
| size = large
| skills = python, XML parsing
| description = Write a program to extract data from [https://software.sil.org/fieldworks/ SIL FieldWorks] and convert as much as possible to monodix (and maybe bidix).
| rationale = There's a lot of potentially useful data in FieldWorks files that might be enough to build a whole monodix for some languages, but it's currently really hard to use.
| mentors = [[User:Popcorndude|Popcorndude]], [[User:TommiPirinen|Flammie]]
| more = /FieldWorks_data_extraction
}}
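
FieldWorks can export a lexicon as LIFT XML, which is probably the sanest entry point. A minimal sketch of pulling lemma, part of speech and gloss out of such an export (element names follow the LIFT spec, but real exports vary a lot in practice):

<pre>
import xml.etree.ElementTree as ET

def lift_entries(path):
    """Yield (lemma, pos, gloss) triples from a LIFT lexicon export."""
    for entry in ET.parse(path).getroot().iter("entry"):
        form = entry.find("lexical-unit/form/text")
        sense = entry.find("sense")
        if form is None or sense is None:
            continue
        pos = sense.find("grammatical-info")
        gloss = sense.find("gloss/text")
        yield (form.text,
               pos.get("value") if pos is not None else None,
               gloss.text if gloss is not None else None)
</pre>

Mapping the free-form grammatical categories onto Apertium tags and paradigms is the hard (and interesting) part.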
   
== Tooling ==

These are projects for people who would be comfortable digging through our C++ codebases (you will be doing a lot of that).

{{IdeaSummary
| name = Python API for Apertium
| difficulty = medium
| size = medium
| skills = C++, Python
| description = Update the Python API for Apertium to expose all Apertium modes and test with all major OSes
| rationale = The current Python API misses out on a lot of functionality, like phonemicisation, segmentation, and transliteration, and doesn't work for some OSes <s>like Debian</s>.
| mentors = [[User:Francis Tyers|Francis Tyers]]
| more = /Python API
}}
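
For context, the existing apertium-python package already wraps a couple of modes; the project is adding the rest. A sketch of the current surface and the kind of additions meant (the commented-out calls are hypothetical, named only to illustrate the goal):

<pre>
import apertium

# works today, given the relevant pair/analyser is installed:
print(apertium.translate('eng', 'spa', 'The cats sleep'))
print(apertium.analyze('eng', 'cats'))

# the kind of API this project would add (hypothetical names):
# apertium.transliterate('kaz', ...)
# apertium.segment('jpn', ...)
</pre>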
   
{{IdeaSummary
| name = Robust tokenisation in lttoolbox
| difficulty = Medium
| size = large
| skills = C++, XML, Python
| description = Improve the longest-match left-to-right tokenisation strategy in [[lttoolbox]] to handle spaceless orthographies.
| rationale = One of the most frustrating things about working with Apertium on texts "in the wild" is the way that the tokenisation works. If a letter is not specified in the alphabet, it is dealt with as whitespace, so e.g. you get unknown words split in two and you can end up with stuff like ^G$ö^k$ı^rmak$, which is terrible for further processing. Additionally, the system is nearly impossible to use for languages that don't use spaces, such as Japanese.
| mentors = [[User:Francis Tyers|Francis Tyers]], [[User:TommiPirinen|Flammie]]
| more = /Robust tokenisation
}}
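
To see the strategy in miniature, here is greedy longest-match left-to-right tokenisation over a plain set of known forms (a toy model: lt-proc matches against the compiled transducer, and the actual task is doing this robustly without treating unlisted letters as delimiters):

<pre>
def lmlr_tokenise(text, lexicon):
    """Greedy longest-match left-to-right; unknown characters
    fall through as single-character tokens."""
    tokens, i = [], 0
    while i < len(text):
        for j in range(len(text), i, -1):  # longest candidate first
            if text[i:j] in lexicon:
                tokens.append(text[i:j])
                i = j
                break
        else:  # no known form starts at i
            tokens.append(text[i])
            i += 1
    return tokens

# spaceless input, as in Japanese:
print(lmlr_tokenise("猫が眠る", {"猫", "が", "眠る"}))  # ['猫', 'が', '眠る']
</pre>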

{{IdeaSummary
| name = rule visualization tools
| difficulty = Medium
| size = either
| skills = python? javascript? XML
| description = make tools to help visualize the effect of various rules
| rationale = TODO: see https://github.com/Jakespringer/dapertium for an example
| mentors = [[User:Firespeaker|Jonathan Washington]], [[User:Sevilay Bayatlı|Sevilay Bayatlı]], [[User:Popcorndude]]
| more = /Visualization tools
}}

{{IdeaSummary
| name = Extend weighted transfer rules
| difficulty = Medium
| size = medium
| skills = C++, python
| description = The weighted transfer module is already applied to the chunker transfer rules; the idea here is to extend it so it can be applied to the interchunk and postchunk transfer rules too.
| rationale = As a resource, see https://github.com/aboelhamd/Weighted-transfer-rules-module
| mentors = [[User: Sevilay Bayatlı|Sevilay Bayatlı]]
| more = /Make a module
}}

{{IdeaSummary
| name = Automatic Error-Finder / Pseudo-Backpropagation
| difficulty = Hard
| size = large
| skills = python?
| description = Develop a tool to locate the approximate source of translation errors in the pipeline.
| rationale = Being able to generate a list of probable error sources automatically makes it possible to prioritize issues by frequency, frees up developer time, and is a first step towards automated generation of better rules.
| mentors = [[User:Popcorndude]]
| more = /Backpropagation
}}

{{IdeaSummary
| name = More Robust Recursive Transfer
| difficulty = Hard
| size = large
| skills = C++
| description = Ensure [[Apertium-recursive#Further_Documentation|Recursive Transfer]] survives ambiguous or incomplete parse trees
| rationale = Currently, one has to be very careful in writing recursive transfer rules to ensure they don't get too deep or ambiguous, and that they cover full sentences. See in particular issues [https://github.com/apertium/apertium-recursive/issues/97 97] and [https://github.com/apertium/apertium-recursive/issues/80 80]. We would like linguists to be able to fearlessly write recursive (rtx) rules based on what makes linguistic sense, and have rtx-proc/rtx-comp deal with the computational/performance side.
| mentors =
| more = /More_robust_recursive_transfer
}}

{{IdeaSummary
| name = CG-based Transfer
| difficulty = Hard
| size = large
| skills = C++
| description = Linguists already write dependency trees in [[Constraint Grammar]]. A subsequent step could use these trees to reorder the source into target-language trees.
| mentors =
| more =
}}

{{IdeaSummary
| name = Language Server Protocol
| difficulty = Medium
| size = medium
| skills = any programming language
| description = Build a [https://microsoft.github.io/language-server-protocol/ Language Server] for the various Apertium rule formats
| rationale = We have some static analysis tools and syntax highlighters already and it would be great if we could combine and expand them to support more text editors.
| mentors = [[User:Popcorndude]]
| more = /Language Server Protocol
}}
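
A minimal skeleton of such a server in Python with the pygls library (library choice and the diagnostics logic are purely illustrative; any LSP implementation in any language would do):

<pre>
from pygls.server import LanguageServer
from lsprotocol import types  # pygls 1.x; older versions used pygls.lsp.types

server = LanguageServer("apertium-ls", "0.1")

@server.feature(types.TEXT_DOCUMENT_DID_OPEN)
def did_open(ls, params):
    # A real server would parse the opened .dix/.lexd/.rtx file here
    # and publish diagnostics; this stub only logs the event.
    ls.show_message_log(f"opened {params.text_document.uri}")

if __name__ == "__main__":
    server.start_io()
</pre>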

{{IdeaSummary
| name = WASM Compilation
| difficulty = hard
| size = medium
| skills = C++, Javascript
| description = Compile the pipeline modules to WASM and provide JS wrappers for them.
| rationale = There are situations where it would be nice to be able to run the entire pipeline in the browser.
| mentors = [[User:Tino Didriksen|Tino Didriksen]]
| more = /WASM
}}
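
Since the core modules are autotools C++ projects, the starting point is probably Emscripten's build wrappers, roughly like this (a sketch; in practice the dependency chain (ICU, libxml2 and friends) is where most of the work will be):

<pre>
# hypothetical WASM build of lttoolbox with Emscripten
git clone https://github.com/apertium/lttoolbox
cd lttoolbox
./autogen.sh
emconfigure ./configure --disable-shared
emmake make
# then wrap the resulting artefacts in a JS API (e.g. with MODULARIZE=1)
</pre>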

== Web ==

If you know Python and JavaScript, here are some ideas for improving our [https://apertium.org website]. Some of these should be fairly short, and it would be a good idea to talk to the mentors about doing a couple of them together.

{{IdeaSummary
| name = Web API extensions
| difficulty = medium
| size = small
| skills = Python
| description = Update the web API for Apertium to expose all Apertium modes
| rationale = The current web API misses out on a lot of functionality, like phonemicisation, segmentation, transliteration, and paradigm generation.
| mentors = [[User:Francis Tyers|Francis Tyers]], [[User:Firespeaker|Jonathan Washington]], [[User:Xavivars|Xavi Ivars]]
| more = /Apertium APY
}}
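
For reference, existing [[APy]] endpoints look like the first call below; the project is adding analogous endpoints for the missing modes (the second call is hypothetical, shown only to illustrate the shape):

<pre>
# with APy running locally on its default port:
curl 'http://localhost:2737/translate?langpair=eng|spa&q=hello'
# the kind of endpoint this project would add (hypothetical):
# curl 'http://localhost:2737/transliterate?lang=kaz&q=...'
</pre>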

{{IdeaSummary
| name = Website Improvements: Misc
| difficulty = Medium
| size = small
| skills = html, css, js, python
| description = Improve elements of Apertium's web infrastructure
| rationale = Apertium's website infrastructure [[Apertium-html-tools]] and its supporting API [[APy|Apertium APy]] have numerous open issues. This project would entail choosing a subset of open issues and features that could realistically be completed in the summer. You're encouraged to speak with the Apertium community to see which features and issues are the most pressing.
| mentors = [[User:Firespeaker|Jonathan Washington]], [[User:Xavivars|Xavi Ivars]]
| more = /Website improvements
}}

{{IdeaSummary
| name = Website Improvements: Dictionary Lookup
| difficulty = Medium
| size = small
| skills = html, css, js, python
| description = Finish implementing dictionary lookup mode in Apertium's web infrastructure
| rationale = Apertium's website infrastructure [[Apertium-html-tools]] and its supporting API [[APy|Apertium APy]] have numerous open issues, including half-completed features like dictionary lookup. This project would entail completing the dictionary lookup feature. Additional features which would be good to work on include automatic reverse lookups (so that a user has a better understanding of the results), grammatical information (such as the gender of nouns or the conjugation paradigms of verbs), and information about multiword expressions.
| mentors = [[User:Firespeaker|Jonathan Washington]], [[User:Xavivars|Xavi Ivars]], [[User:Popcorndude]]
| more = [https://github.com/apertium/apertium-html-tools/issues/105 the open issue on GitHub]
}}

{{IdeaSummary
| name = Website Improvements: Spell checking
| difficulty = Medium
| size = small
| skills = html, js, css, python
| description = Add a spell-checking interface to Apertium's web tools
| rationale = [[Apertium-html-tools]] has seen some prototypes for spell-checking interfaces (all in stale PRs and branches on GitHub), but none have ended up being quite ready to integrate into the tools. This project would entail polishing up or recreating an interface, and making sure [[APy]] has a mode that allows access to Apertium voikospell modules. The end result should be a slick, easy-to-use interface for proofing text, with intuitive underlining of text deemed to be misspelled and intuitive presentation and selection of alternatives. See [https://github.com/apertium/apertium-html-tools/issues/390 the open issue on GitHub].
| mentors = [[User:Firespeaker|Jonathan Washington]], [[User:Xavivars|Xavi Ivars]]
| more = /Spell checker web interface
}}

{{IdeaSummary
| name = Website Improvements: Suggestions
| difficulty = Medium
| size = small
| skills = html, css, js, python
| description = Finish implementing a suggestions interface for Apertium's web infrastructure
| rationale = Some work has been done to add a "suggestions" interface to Apertium's website infrastructure [[Apertium-html-tools]] and its supporting API [[APy|Apertium APy]], whereby users can suggest corrected translations. This project would entail finishing that feature. There are some related [https://github.com/apertium/apertium-html-tools/issues/55 issues] and [https://github.com/apertium/apertium-html-tools/pull/252 PRs] on GitHub.
| mentors = [[User:Firespeaker|Jonathan Washington]], [[User:Xavivars|Xavi Ivars]]
| more = /Website improvements
}}

{{IdeaSummary
| name = Website Improvements: Orthography conversion interface
| difficulty = Medium
| size = small
| skills = html, js, css, python
| description = Add an orthography conversion interface to Apertium's web tools
| rationale = Several Apertium language modules (like Kazakh, Kyrgyz, Crimean Tatar, and Hñähñu) have orthography conversion modes in their mode definition files. This project would be to expose those modes through [[APy|Apertium APy]] and provide a simple interface in [[Apertium-html-tools]] to use them.
| mentors = [[User:Firespeaker|Jonathan Washington]], [[User:Xavivars|Xavi Ivars]]
| more = /Website improvements
}}

{{IdeaSummary
| name = Add support for NMT to web API
| difficulty = Medium
| size = medium
| skills = python, NMT
| description = Add support for a popular NMT engine to Apertium's web API
| rationale = Currently Apertium's web API, [[APy|Apertium APy]], supports only Apertium language modules. But the front end could just as easily interface with an API that supports trained NMT models. The point of the project is to add support for one popular NMT package (e.g. translateLocally/Bergamot, OpenNMT or JoeyNMT) to APy.
| mentors = [[User:Firespeaker|Jonathan Washington]], [[User:Xavivars|Xavi Ivars]]
| more =
}}

== Integrations ==

In addition to incorporating data from other projects, it would be nice if we could also make our data useful to them.

{{IdeaSummary
| name = OmniLingo and Apertium
| difficulty = medium
| size = either
| skills = JS, Python
| description = OmniLingo is a language learning system for practicing listening comprehension using Common Voice data. There is a lot of text processing involved (for example tokenisation) that could be aided by Apertium tools.
| rationale =
| mentors = [[User:Francis Tyers|Francis Tyers]]
| more = /OmniLingo
}}

{{IdeaSummary
| name = Support for Enhanced Dependencies in UD Annotatrix
| difficulty = medium
| size = medium
| skills = NodeJS
| description = UD Annotatrix is an annotation interface for Universal Dependencies, but does not yet support all functionality, such as enhanced dependencies.
| rationale =
| mentors = [[User:Francis Tyers|Francis Tyers]]
| more = /Annotatrix enhanced dependencies
}}

<!--
This one was done, but could do with more work. Not sure if it's a full gsoc though?

{{IdeaSummary
| name = User-friendly lexical selection training
| difficulty = Medium
| skills = Python, C++, shell scripting
| description = Make it so that training/inference of lexical selection rules is a more user-friendly process
| rationale = Our lexical selection module allows for inferring rules from corpora and word alignments, but the procedure is currently a bit messy, with various scripts involved that require lots of manual tweaking, and many third party tools to be installed. The goal of this task is to make the procedure as user-friendly as possible, so that ideally only a simple config file would be needed, and a driver script would take care of the rest.
| mentors = [[User:Unhammer|Unhammer]], [[User:Mlforcada|Mikel Forcada]]
| more = /User-friendly lexical selection training
}}
-->
   
{{IdeaSummary
| name = UD and Apertium integration
| difficulty = Entry level
| size = medium
| skills = python, javascript, HTML, (C++)
| description = Create a range of tools for making Apertium compatible with Universal Dependencies
| rationale = Universal Dependencies is a fast-growing project aimed at creating a unified annotation scheme for treebanks. This includes both part-of-speech and morphological features. Their annotated corpora could be extremely useful for Apertium for training models for translation. In addition, Apertium's rule-based morphological descriptions could be useful for software that relies on Universal Dependencies.
| mentors = [[User:Francis Tyers]], [[User:Firespeaker|Jonathan Washington]], [[User:Popcorndude]]
| more = /UD and Apertium integration
}}
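
To give a flavour of the conversion problem, here is the same token in the two representations (both abbreviated; encoding the tag mapping in both directions is exactly what these tools would have to do):

<pre>
Apertium stream:  ^sleeps/sleep<vblex><pri><p3><sg>$
CoNLL-U:          3  sleeps  sleep  VERB  _  Mood=Ind|Number=Sing|Person=3|Tense=Pres  0  root  _  _
</pre>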
   
[[Category:Development]]
 
[[Category:Google Summer of Code]]

Latest revision as of 09:15, 4 March 2024

This is the ideas page for Google Summer of Code, here you can find ideas on interesting projects that would make Apertium more useful for people and improve or expand our functionality.

Current Apertium contributors: If you have an idea please add it below, if you think you could mentor someone in a particular area, add your name to "Interested mentors" using ~~~.

Prospective GSoC contributors: The page is intended as an overview of the kind of projects we have in mind. If one of them particularly piques your interest, please come and discuss with us on #apertium on irc.oftc.net (more on IRC), mail the mailing list, or draw attention to yourself in some other way.

Note that if you have an idea that isn't mentioned here, we would be very interested to hear about it.

Here are some more things you could look at:


If you're a prospective GSoC contributor trying to propose a topic, the recommended way is to request a wiki account and then go to

http://wiki.apertium.org/wiki/User:[[your username]]/GSoC2023Proposal

and click the "create" button near the top of the page. It's also nice to include [[Category:GSoC_2023_student_proposals]] to help organize submitted proposals.

Language Data[edit]

Can you read or write a language other than English (and we do mean any language)? If so, you can help with one of these and we can help you figure out the technical parts.


Develop a morphological analyser[edit]

  • Difficulty:
    3. Entry level
  • Size: Multiple lengths possible (discuss with the mentors which option is better for you)
  • Required skills:
    XML or HFST or lexd
  • Description:
    Write a morphological analyser and generator for a language that does not yet have one
  • Rationale:
    A key part of an Apertium machine translation system is a morphological analyser and generator. The objective of this task is to create an analyser for a language that does not yet have one.
  • Mentors:
    Francis Tyers, Jonathan Washington, Sevilay Bayatlı, Hossep, nlhowell, User:Popcorndude
  • read more...


apertium-separable language-pair integration[edit]

  • Difficulty:
    2. Medium
  • Size: Small
  • Required skills:
    XML, a scripting language (Python, Perl), some knowledge of linguistics and/or at least one relevant natural language
  • Description:
    Choose a language you can identify as having a good number of "multiwords" in the lexicon. Modify all language pairs in Apertium to use the Apertium-separable module to process the multiwords, and clean up the dictionaries accordingly.
  • Rationale:
    Apertium-separable is a newish module to process lexical items with discontinguous dependencies, an area where Apertium has traditionally fallen short. Despite all the module has to offer, many translation pairs still don't use it.
  • Mentors:
    Jonathan Washington, User:Popcorndude
  • read more...


Bring an unreleased translation pair to releasable quality[edit]

  • Difficulty:
    2. Medium
  • Size: Large
  • Required skills:
    shell scripting
  • Description:
    Take an unstable language pair and improve its quality, focusing on testvoc
  • Rationale:
    Many Apertium language pairs have large dictionaries and have otherwise seen much development, but are not of releasable quality. The point of this project would be bring one translation pair to releasable quality. This would entail obtaining good naïve coverage and a clean testvoc.
  • Mentors:
    Jonathan Washington, Sevilay Bayatlı, User:Unhammer, Hèctor Alòs i Font
  • read more...


Develop a prototype MT system for a strategic language pair[edit]

  • Difficulty:
    2. Medium
  • Size: Large
  • Required skills:
    XML, some knowledge of linguistics and of one relevant natural language
  • Description:
    Create a translation pair based on two existing language modules, focusing on the dictionary and structural transfer
  • Rationale:
    Choose a strategic set of languages to develop an MT system for, such that you know the target language well and morphological transducers for each language are part of Apertium. Develop an Apertium MT system by focusing on writing a bilingual dictionary and structural transfer rules. Expanding the transducers and disambiguation, and writing lexical selection rules and multiword sequences may also be part of the work. The pair may be an existing prototype, but if it's a heavily developed but unreleased pair, consider applying for "Bring an unreleased translation pair to releasable quality" instead.
  • Mentors:
    Jonathan Washington, Sevilay Bayatlı, User:Unhammer, Hèctor Alòs i Font
  • read more...


Add a new variety to an existing language[edit]

  • Difficulty:
    3. Entry level
  • Size: Multiple lengths possible (discuss with the mentors which option is better for you)
  • Required skills:
    XML, some knowledge of linguistics and of one relevant natural language
  • Description:
    Add a language variety to one or more released pairs, focusing on the dictionary and lexical selection
  • Rationale:
    Take a released language, and define a new language variety for it: e.g. Quebec French or Provençal Occitan. Then add the new variety to one or more released language pairs, without diminishing the quality of the pre-existing variety(ies). The objective is to facilitate the generation of varieties for languages with a weak standardisation and/or pluricentric languages.
  • Mentors:
    Hèctor Alòs i Font, Jonathan Washington, Sevilay Bayatlı
  • read more...


Leverage and integrate language preferences into language pairs

  • Difficulty:
    3. Entry level
  • Size: Medium
  • Required skills:
    XML, some knowledge of linguistics and of one relevant natural language
  • Description:
    Update language pairs with lexical and orthographical variations to leverage the new preferences functionality
  • Rationale:
    Currently, preferences are implemented via language variants, which rely on multiple dictionaries: each new binary preference doubles the number of dictionary combinations (three preferences already mean 2^3 = 8 compiled dictionaries), so compilation time grows exponentially as preferences are introduced.
  • Mentors:
    Xavi Ivars, User:Unhammer
  • read more...


Add Capitalization Handling Module to a Language Pair

  • Difficulty:
    3. Entry level
  • Size: Small
  • Required skills:
    XML, knowledge of some relevant natural language
  • Description:
    Update a language pair to make use of the new capitalization handling module
  • Rationale:
    Correcting capitalization via transfer rules is tedious and error-prone, but putting capitalization logic in a separate set of rules should allow it to be handled in a more concise and maintainable way. Additionally, capitalization rules could potentially be moved to monolingual modules, reducing duplicated effort across translators. A toy sketch of the pattern bookkeeping involved follows this entry.
  • Mentors:
    User:Popcorndude
  • read more...
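
A toy sketch of the pattern bookkeeping involved (this is not the actual module's API): record a capitalization pattern on the source side and re-apply it on the target side after translation.

  # Toy capitalization restoration (illustration only).
  def cap_pattern(word):
      if word.isupper():
          return "AA"   # all caps
      if word[:1].isupper():
          return "Aa"   # initial capital
      return "aa"       # lowercase

  def apply_pattern(word, pattern):
      if pattern == "AA":
          return word.upper()
      if pattern == "Aa":
          return word[:1].upper() + word[1:]
      return word

  src = ["The", "EU", "treaty"]
  tgt = ["el", "tratado", "de la UE"]  # word order changed by transfer
  # The hard part a real module must solve is deciding which target word
  # inherits which source pattern; here we naively copy only the
  # sentence-initial pattern.
  tgt[0] = apply_pattern(tgt[0], cap_pattern(src[0]))
  print(tgt)  # ['El', 'tratado', 'de la UE']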

Data Extraction

A lot of the language data we need to make our analyzers and translators work already exists in other forms and we just need to figure out how to convert it. If you know of another source of data that isn't listed, we'd love to hear about it.


dictionary induction from wikis

  • Difficulty:
    2. Medium
  • Size: Multiple lengths possible (discuss with the mentors which option is better for you)
  • Required skills:
    MySQL, MediaWiki syntax, perl, maybe C++ or Java; Java, Scala, RDF, and DBpedia if using the DBpedia extraction framework
  • Description:
    Extract dictionaries from linguistic wikis
  • Rationale:
    Wiki dictionaries and encyclopedias (e.g. OmegaWiki, Wiktionary, Wikipedia, DBpedia) contain information (e.g. bilingual equivalences, morphological features, conjugations) that could be exploited to speed up the development of dictionaries for Apertium. This task aims at automatically building dictionaries by extracting different pieces of information from wiki structures such as interlingual links and infoboxes, and/or from DBpedia RDF datasets. A small sketch using interlanguage links follows this entry.
  • Mentors:
    Jonathan Washington, User:Popcorndude
  • read more...
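
A hedged sketch of the interlanguage-link case, using the standard MediaWiki API. The titles are illustrative, and real induction would need heavy filtering (namespaces, redirects, multiword titles) before anything lands in a dictionary.

  import requests

  API = "https://en.wikipedia.org/w/api.php"

  def langlink(title, target_lang="es"):
      """Return the interlanguage-link title in target_lang, if any."""
      params = {"action": "query", "titles": title, "prop": "langlinks",
                "lllang": target_lang, "format": "json"}
      pages = requests.get(API, params=params).json()["query"]["pages"]
      for page in pages.values():
          for link in page.get("langlinks", []):
              return link["*"]
      return None

  for word in ["Dog", "House"]:
      translation = langlink(word)
      if translation:
          # emit a skeletal bidix entry (tags still need to be added)
          print(f"<e><p><l>{word}</l><r>{translation}</r></p></e>")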


Dictionary induction from parallel corpora / Revive ReTraTos

  • Difficulty:
    2. Medium
  • Size: Medium
  • Required skills:
    C++, perl, python, xml, scripting, machine learning
  • Description:
    Extract dictionaries from parallel corpora
  • Rationale:
    Given a pair of monolingual modules and a parallel corpus, we should be able to run a program that aligns tagged sentences and gives us the best entries that are missing from bidix. ReTraTos did this back in 2008, but it hasn't been maintained since; we want a program which builds and runs today, and does all the steps for the user. A toy sketch of the underlying co-occurrence idea follows this entry.
  • Mentors:
    User:Unhammer, User:Popcorndude
  • read more...
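
A toy sketch of the simplest co-occurrence idea behind such induction: Dice scores over sentence-aligned tagged lemmas. A real tool would use proper alignment models; the two-sentence corpus here is invented.

  from collections import Counter
  from itertools import product

  src_corpus = [["dog<n>", "bark<vblex>"], ["dog<n>", "sleep<vblex>"]]
  trg_corpus = [["perro<n>", "ladrar<vblex>"], ["perro<n>", "dormir<vblex>"]]

  src_freq, trg_freq, pair_freq = Counter(), Counter(), Counter()
  for s_sent, t_sent in zip(src_corpus, trg_corpus):
      src_freq.update(set(s_sent))
      trg_freq.update(set(t_sent))
      pair_freq.update(product(set(s_sent), set(t_sent)))

  def dice(s, t):
      return 2 * pair_freq[s, t] / (src_freq[s] + trg_freq[t])

  for s, t in pair_freq:
      if dice(s, t) >= 0.9:  # arbitrary threshold
          print(f"{s} -> {t}")
  # prints dog<n> -> perro<n>, bark<vblex> -> ladrar<vblex>, etc.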


Extract morphological data from FLEx

  • Difficulty:
    1. Hard
  • Size: Large
  • Required skills:
    python, XML parsing
  • Description:
    Write a program to extract data from SIL FieldWorks and convert as much as possible to monodix (and maybe bidix).
  • Rationale:
    There's a lot of potentially useful data in FieldWorks files that might be enough to build a whole monodix for some languages, but it's currently really hard to use. A hedged sketch of converting a LIFT export follows this entry.
  • Mentors:
    Popcorndude, Flammie
  • read more...
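
A hedged sketch of one plausible first step, assuming a LIFT export (the XML interchange format FLEx can produce). The element paths and POS values below are assumptions to verify against real exports, and paradigm assignment, the genuinely hard part, is left to the linguist.

  import xml.etree.ElementTree as ET

  POS_MAP = {"Noun": "n", "Verb": "vblex"}  # project-specific mapping

  def lift_to_monodix(path):
      """Yield skeletal monodix entries with placeholder paradigms."""
      for entry in ET.parse(path).getroot().iter("entry"):
          lemma = entry.findtext("lexical-unit/form/text")
          gram = entry.find("sense/grammatical-info")
          if lemma is None or gram is None:
              continue
          pos = POS_MAP.get(gram.get("value"))
          if pos:
              yield f'<e lm="{lemma}"><i>{lemma}</i><par n="TODO__{pos}"/></e>'

  # for line in lift_to_monodix("export.lift"):
  #     print(line)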

Tooling

These are projects for people who would be comfortable digging through our C++ codebases (you will be doing a lot of that).


Python API for Apertium

  • Difficulty:
    2. Medium
  • Size: Medium
  • Required skills:
    C++, Python
  • Description:
    Update the Python API for Apertium to expose all Apertium modes and test with all major OSes
  • Rationale:
    The current Python API misses out on a lot of functionality, like phonemicisation, segmentation, and transliteration, and doesn't work on some OSes, such as Debian. A subprocess-based sketch of what exposing a mode means follows this entry.
  • Mentors:
    Francis Tyers
  • read more...
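
As a sketch of what "exposing a mode" means, here is a subprocess-based stand-in. This is not the apertium-python API; the mode name and data directory are placeholders that depend on what is installed locally.

  import subprocess

  def run_mode(text, mode, data_dir="/usr/share/apertium"):
      """Pipe text through an installed Apertium mode (translation,
      analysis, generation, or whatever the pair defines)."""
      return subprocess.run(["apertium", "-d", data_dir, mode],
                            input=text, capture_output=True, text=True,
                            check=True).stdout

  # print(run_mode("cats", "eng-spa"))  # assuming that pair is installed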


Robust tokenisation in lttoolbox

  • Difficulty:
    2. Medium
  • Size: Large
  • Required skills:
    C++, XML, Python
  • Description:
    Improve the longest-match left-to-right tokenisation strategy in lttoolbox to handle spaceless orthographies.
  • Rationale:
    One of the most frustrating things about working with Apertium on texts "in the wild" is the way tokenisation works. If a letter is not specified in the alphabet, it is treated as whitespace, so unknown words containing it get split apart and you can end up with output like ^G$ö^k$ı^rmak$, which is terrible for further processing. Additionally, the system is nearly impossible to use for languages that don't use spaces, such as Japanese. A toy sketch of longest-match left-to-right tokenisation follows this entry.
  • Mentors:
    Francis Tyers, Flammie
  • read more...
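
A toy sketch of longest-match left-to-right tokenisation over a plain lexicon (with a real transducer the lexicon is the analyser itself; the Japanese entries are invented). The point is the graceful fallback: an unknown character is emitted as an unknown token instead of being treated as whitespace.

  LEXICON = {"東京", "都", "に", "住む"}  # invented entries

  def lmlr_tokenise(text):
      tokens, i = [], 0
      while i < len(text):
          # try the longest possible match first
          for j in range(len(text), i, -1):
              if text[i:j] in LEXICON:
                  tokens.append(text[i:j])
                  i = j
                  break
          else:
              tokens.append(text[i] + "*")  # unknown, kept as its own token
              i += 1
      return tokens

  print(lmlr_tokenise("東京都に住むX"))
  # ['東京', '都', 'に', '住む', 'X*']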


rule visualization tools


Extend Weighted transfer rules


Automatic Error-Finder / Pseudo-Backpropagation

  • Difficulty:
    1. Hard
  • Size: Large
  • Required skills:
    python?
  • Description:
    Develop a tool to locate the approximate source of translation errors in the pipeline.
  • Rationale:
    Being able to generate a list of probable error sources automatically makes it possible to prioritize issues by frequency, frees up developer time, and is a first step towards automated generation of better rules. A sketch of stage-by-stage error localisation follows this entry.
  • Mentors:
    User:Popcorndude
  • read more...
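
One hedged sketch of the idea at the pipeline level: run the input through successive stages and report the first one whose output carries an Apertium debug mark (* unknown word, @ and # transfer or generation trouble). The three-stage command list is hypothetical and abridged; a real tool would read the full pipeline from the pair's modes.xml.

  import subprocess

  PIPELINE = [                                   # hypothetical pair
      ["lt-proc", "eng-spa.automorf.bin"],       # morphological analysis
      ["lt-proc", "-b", "eng-spa.autobil.bin"],  # lexical transfer
      ["lt-proc", "-g", "eng-spa.autogen.bin"],  # generation
  ]

  def first_bad_stage(text, marks="*@#"):
      """Index of the first stage whose output carries a debug mark,
      or None if the text comes through clean."""
      for n, cmd in enumerate(PIPELINE):
          text = subprocess.run(cmd, input=text, capture_output=True,
                                text=True, check=True).stdout
          if any(m in text for m in marks):
              return n
      return None

  # print(first_bad_stage("the cats sleep"))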


More Robust Recursive Transfer

  • Difficulty:
    1. Hard
  • Size: Large
  • Required skills:
    C++
  • Description:
    Ensure Recursive Transfer survives ambiguous or incomplete parse trees
  • Rationale:
    Currently, one has to be very careful in writing recursive transfer rules to ensure they don't get too deep or ambiguous, and that they cover full sentences. See in particular issues 97 and 80. We would like linguists to be able to fearlessly write recursive (rtx) rules based on what makes linguistic sense, and have rtx-proc/rtx-comp deal with the computational/performance side.
  • Mentors:
  • read more...


CG-based Transfer

  • Difficulty:
    1. Hard
  • Size: Large
  • Required skills:
    C++
  • Description:
    Linguists already write dependency trees in Constraint Grammar. A following step could use these to reorder into target-language trees; a toy reordering sketch follows this entry.
  • Rationale:
  • Mentors:
  • read more...
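
A toy sketch of the reordering step. The tree representation and the SOV-to-SVO rule are invented for illustration; real input would come from CG dependency annotation.

  from dataclasses import dataclass, field

  @dataclass
  class Node:
      form: str
      rel: str                       # dependency relation label
      children: list = field(default_factory=list)

  def reorder(node, order=("subj", "head", "obj")):
      """Flatten a subtree in target-language order; leaves pass through."""
      if not node.children:
          return [node.form]
      slots = {"head": [node.form]}
      for child in node.children:
          slots.setdefault(child.rel, []).extend(reorder(child))
      return [form for slot in order for form in slots.get(slot, [])]

  # "I apples eat" (SOV source) -> "I eat apples" (SVO target)
  tree = Node("eat", "head", [Node("I", "subj"), Node("apples", "obj")])
  print(" ".join(reorder(tree)))  # I eat apples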


Language Server Protocol


WASM Compilation

  • Difficulty:
    1. Hard
  • Size: Medium
  • Required skills:
    C++, Javascript
  • Description:
    Compile the pipeline modules to WASM and provide JS wrappers for them.
  • Rationale:
    There are situations where it would be nice to be able to run the entire pipeline in the browser
  • Mentors:
    Tino Didriksen
  • read more...

Web

If you know Python and JavaScript, here are some ideas for improving our website. Some of these should be fairly short, and it would be a good idea to talk to the mentors about doing a couple of them together.


Web API extensions

  • Difficulty:
    2. Medium
  • Size: Small
  • Required skills:
    Python
  • Description:
    Update the web API for Apertium to expose all Apertium modes
  • Rationale:
    The current Web API misses out on a lot of functionality, like phonemicisation, segmentation, transliteration, and paradigm generation. A sketch of what a new endpoint might look like follows this entry.
  • Mentors:
    Francis Tyers, Jonathan Washington, Xavi Ivars
  • read more...
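
A hedged sketch of what a new endpoint might look like. APy is a Tornado application, but the handler name, URL and response keys here are invented, and the real code base has its own base handlers, threading and error handling.

  import subprocess
  import tornado.web

  class TransliterateHandler(tornado.web.RequestHandler):
      def get(self):
          text = self.get_argument("q")
          mode = self.get_argument("mode")  # an installed translit mode
          out = subprocess.run(["apertium", "-d", ".", mode], input=text,
                               capture_output=True, text=True).stdout
          self.write({"responseData": {"translitText": out.strip()}})

  def make_app():
      return tornado.web.Application([(r"/transliterate",
                                       TransliterateHandler)])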


Website Improvements: Misc

  • Difficulty:
    2. Medium
  • Size: Small
  • Required skills:
    html, css, js, python
  • Description:
    Improve elements of Apertium's web infrastructure
  • Rationale:
    Apertium's website infrastructure Apertium-html-tools and its supporting API Apertium APy have numerous open issues. This project would entail choosing a subset of open issues and features that could realistically be completed in the summer. You're encouraged to speak with the Apertium community to see which features and issues are the most pressing.
  • Mentors:
    Jonathan Washington, Xavi Ivars
  • read more...


Website Improvements: Dictionary Lookup

  • Difficulty:
    2. Medium
  • Size: Small
  • Required skills:
    html, css, js, python
  • Description:
    Finish implementing dictionary lookup mode in Apertium's web infrastructure
  • Rationale:
    Apertium's website infrastructure Apertium-html-tools and its supporting API Apertium APy have numerous open issues, including half-completed features like dictionary lookup. This project would entail completing the dictionary lookup feature. Additional features which would be good to work on include automatic reverse lookups (so that a user has a better understanding of the results), grammatical information (such as the gender of nouns or the conjugation paradigms of verbs), and information about MWEs. A sketch of the underlying bidix lookup follows this entry.
  • Mentors:
    Jonathan Washington, Xavi Ivars, User:Popcorndude
  • read more... (see the open issue on GitHub)
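
A sketch of the server-side half of lookup, assuming a compiled bilingual dictionary and using lt-proc's biltrans mode (-b), which maps an analysed unit to its bidix translations. The file name and example output are placeholders.

  import subprocess

  def bidix_lookup(lemma, tags, bidix="eng-spa.autobil.bin"):
      unit = f"^{lemma}{tags}$"
      out = subprocess.run(["lt-proc", "-b", bidix], input=unit,
                           capture_output=True, text=True, check=True)
      return out.stdout.strip()

  # print(bidix_lookup("cat", "<n><pl>"))
  # e.g. ^cat<n><pl>/gato<n><m><pl>$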


Website Improvements: Spell checking

  • Difficulty:
    2. Medium
  • Size: Small
  • Required skills:
    html, js, css, python
  • Description:
    Add a spell-checking interface to Apertium's web tools
  • Rationale:
    Apertium-html-tools has seen some prototypes for spell-checking interfaces (all in stale PRs and branches on GitHub), but none have ended up being quite ready to integrate into the tools. This project would entail polishing up or recreating an interface, and making sure APy has a mode that allows access to Apertium voikospell modules. The end result should be a slick, easy-to-use interface for proofing text, with intuitive underlining of text deemed to be misspelled and intuitive presentation and selection of alternatives. See the open issue on GitHub.
  • Mentors:
    Jonathan Washington, Xavi Ivars
  • read more...


Website Improvements: Suggestions

  • Difficulty:
    2. Medium
  • Size: Small
  • Required skills:
    html, css, js, python
  • Description:
    Finish implementing a suggestions interface for Apertium's web infrastructure
  • Rationale:
    Some work has been done to add a "suggestions" interface to Apertium's website infrastructure Apertium-html-tools and its supporting API Apertium APy, whereby users can suggest corrected translations. This project would entail finishing that feature. There are some related issues and PRs on GitHub.
  • Mentors:
    Jonathan Washington, Xavi Ivars
  • read more...


Website Improvements: Orthography conversion interface

  • Difficulty:
    2. Medium
  • Size: Small
  • Required skills:
    html, js, css, python
  • Description:
    Add an orthography conversion interface to Apertium's web tools
  • Rationale:
    Several Apertium language modules (like Kazakh, Kyrgyz, Crimean Tatar, and Hñähñu) have orthography conversion modes in their mode definition files. This project would be to expose those modes through Apertium APy and provide a simple interface in Apertium-html-tools to use them.
  • Mentors:
    Jonathan Washington, Xavi Ivars
  • read more...


Add support for NMT to web API

  • Difficulty:
    2. Medium
  • Size: Medium
  • Required skills:
    python, NMT
  • Description:
    Add support for a popular NMT engine to Apertium's web API
  • Rationale:
    Currently Apertium's web API, Apertium APy, supports only Apertium language modules, but the front end could just as easily interface with an API that serves trained NMT models. The point of the project is to add support for one popular NMT package (e.g. translateLocally/Bergamot, OpenNMT or JoeyNMT) to APy. A hedged sketch of a backend-adapter interface follows this entry.
  • Mentors:
    Jonathan Washington, Xavi Ivars
  • read more...
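
A hedged sketch of the adapter idea: give APy a second backend type alongside Apertium pipelines. Everything engine-specific below is hypothetical; the real work is mapping one NMT engine's actual loading and translation API onto whatever interface APy settles on.

  from abc import ABC, abstractmethod
  import subprocess

  class TranslationBackend(ABC):
      @abstractmethod
      def translate(self, text: str, pair: str) -> str: ...

  class ApertiumBackend(TranslationBackend):
      def translate(self, text, pair):
          return subprocess.run(["apertium", pair], input=text,
                                capture_output=True, text=True).stdout

  class NMTBackend(TranslationBackend):
      def __init__(self, model_loader):
          self.models = {}          # pair -> loaded model
          self.load = model_loader  # engine-specific loader (hypothetical)

      def translate(self, text, pair):
          if pair not in self.models:
              self.models[pair] = self.load(pair)
          return self.models[pair].translate(text)  # hypothetical call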

Integrations

In addition to incorporating data from other projects, it would be nice if we could also make our data useful to them.


OmniLingo and Apertium

  • Difficulty:
    2. Medium
  • Size: Multiple lengths possible (discuss with the mentors which option is better for you)
  • Required skills:
    JS, Python
  • Description:
    OmniLingo is a language learning system for practicing listening comprehension using Common Voice data. There is a lot of text processing involved (for example tokenisation) that could be aided by Apertium tools.
  • Rationale:
  • Mentors:
    Francis Tyers
  • read more...


Support for Enhanced Dependencies in UD Annotatrix

  • Difficulty:
    2. Medium
  • Size: Medium
  • Required skills:
    NodeJS
  • Description:
    UD Annotatrix is an annotation interface for Universal Dependencies, but does not yet support all functionality, notably enhanced dependencies.
  • Rationale:
  • Mentors:
    Francis Tyers
  • read more...


UD and Apertium integration

  • Difficulty:
    3. Entry level
  • Size: Medium
  • Required skills:
    python, javascript, HTML, (C++)
  • Description:
    Create a range of tools for making Apertium compatible with Universal Dependencies
  • Rationale:
    Universal Dependencies is a fast-growing project aimed at creating a unified annotation scheme for treebanks. This includes both part-of-speech and morphological features. Their annotated corpora could be extremely useful for Apertium for training models for translation. In addition, Apertium's rule-based morphological descriptions could be useful for software that relies on Universal Dependencies. A sketch of converting the Apertium stream format to CoNLL-U follows this entry.
  • Mentors:
    Francis Tyers, Jonathan Washington, User:Popcorndude
  • read more...
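
As a sketch of one such tool, here is a toy converter from tagged Apertium stream units to CoNLL-U rows. The tag-to-UPOS table is a stub; a real converter needs a full mapping of Apertium tags to UD parts of speech and features.

  import re

  UPOS = {"n": "NOUN", "vblex": "VERB", "adj": "ADJ", "prn": "PRON"}  # stub

  def stream_to_conllu(line):
      rows = []
      for i, unit in enumerate(re.findall(r"\^([^$]+)\$", line), start=1):
          surface, analysis = unit.split("/", 1)  # first analysis only
          lemma, first_tag = re.match(r"([^<]+)<([^>]+)>", analysis).groups()
          rows.append("\t".join([str(i), surface, lemma,
                                 UPOS.get(first_tag, "X"),
                                 "_", "_", "_", "_", "_", "_"]))
      return "\n".join(rows)

  print(stream_to_conllu("^cats/cat<n><pl>$ ^sleep/sleep<vblex><pres>$"))
  # 1  cats   cat    NOUN  _ _ _ _ _ _
  # 2  sleep  sleep  VERB  _ _ _ _ _ _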