Task ideas for Google Code-in

From Apertium
{{TOCD}}
This is the task ideas page for [https://developers.google.com/open-source/gci/ Google Code-in]. Here you can find ideas for interesting tasks that will improve your knowledge of Apertium and help you get into the world of open-source development.
   
 
The people column lists people who you should contact for further information. Each task is estimated to take an experienced developer at most 2 hours; however:
<!--# '''this does not include time taken to [[Minimal installation from SVN|install]] / set up apertium (and relevant tools)'''.-->
 
# this is the time expected for an experienced developer; you may find that you spend more time on the task because of the learning curve.
   
 
* {{sc|research}}: Tasks related to community management, outreach/marketing, or studying problems and recommending solutions
 
* {{sc|quality}}: Tasks related to testing and ensuring code is of high quality.
* {{sc|design}}: Tasks related to user experience research or user interface design and interaction
   
'''Clarification of "multiple task" types'''
 
* multi = number of students who can do a given task (GCI's "max instances")
* dup = number of times a student can do the same task

You can find descriptions of some of the mentors [[List_of_Apertium_mentors | here]].
== Task ideas ==
'''The current task ideas here are for 2019.''' See [[Talk:Task ideas for Google Code-in]] for task ideas from previous years.

<table class="sortable wikitable" style="display: none">
<tr><th>type</th><th>title</th><th>description</th><th>tags</th><th>mentors</th><th>bgnr?</th><th>multi?</th><th>duplicates</th></tr>

{{Taskidea
|type=research, quality, documentation
|title=Adopt a Wiki page
|description=Request an Apertium wiki account and adopt a wiki page by updating and fixing any issues with it. Examples of things to update might be documentation that still refers to our SVN repo (we're on GitHub now), documentation of new features, clarification of unclear things, indicating that a page no longer reflects how things are done, "archiving" a page that represents deprecated information, or updating documentation to reflect the current options and defaults of various tools.
|tags=wiki
|mentors=*
|multi=150
|beginner=yes
}}

{{Taskidea
|type=research, quality, documentation
|title=Test instructions on Apertium wiki
|description=Find a page on the Apertium wiki that documents how to do something (hint: check the [http://wiki.apertium.org/wiki/Category:Documentation Documentation] category). Then try to follow the instructions. Check with your mentor when you get stuck. Modify the instructions as necessary. If the instructions are for something that is deprecated or no longer used by the community, either mark them as deprecated (category, banner at top of page, fix links to page) and/or modify them to match current practices.
|tags=wiki
|mentors=*
|multi=150
|beginner=yes
}}

{{Taskidea
|type=research, code
|title=Expand coverage of Kyrgyz to English structural transfer
|description=Find a sentence in Kyrgyz that, once the lexical items are added to the bilingual dictionary, is not fully (or correctly) parsed by the <tt>kir-eng-transfer</tt> Apertium mode. Determine what rule(s) need(s) to be added (or fixed) to cover this structure, and update <tt>apertium-eng-kir.kir-eng.rtx</tt> accordingly. You will first want to clone and compile [https://github.com/apertium/apertium-eng-kir apertium-eng-kir].
|tags=Kyrgyz, English, recursive transfer, pairs
|mentors=JNW, popcorndude
|multi=150
|dup=10
}}

{{Taskidea
|type=code
|title=Add recursive transfer support to a language pair that doesn't support it
|description=Make a branch of an Apertium language pair that doesn't support recursive transfer and call it "recursive transfer". Add vanilla <tt>.rtx</tt> files for both directions, and modify <tt>Makefile.am</tt> and <tt>modes.xml</tt> so that the branch compiles and runs. See [http://wiki.apertium.org/wiki/Apertium-recursive#Incorporating_Into_a_Pair this page] for instructions on how to do this.
|tags=recursive transfer, pairs
|mentors=JNW, popcorndude
|multi=150
|dup=10
|beginner=yes
}}

{{Taskidea
|type=code
|title=Add 2 recursive transfer rules to a language pair
|description=Add two recursive transfer rules to an Apertium language pair. These rules consist of, at minimum, a syntactic pattern to match, a phrase to combine the matched items into, and an output pattern ([http://wiki.apertium.org/wiki/Apertium-recursive#Further_Documentation more documentation here]). If the language pair does not support recursive transfer, make sure [http://wiki.apertium.org/wiki/Apertium-recursive#Incorporating_Into_a_Pair to set it up] first. Submit your work as a pull request to a new branch ("recursive", "rtx", or similar) of the repository on GitHub.
|tags=recursive transfer, pairs
|mentors=JNW, popcorndude
|multi=150
|dup=20
}}

{{Taskidea
|type=code
|mentors=JNW, wei2912, padth4i, popcorndude
|title=Use apertium-init to bootstrap a new language pair
|description=Use the [[Apertium-init]] script to bootstrap a new translation pair between two languages which have monolingual modules already in Apertium. To see if a translation pair has already been made, search our repositories on [https://github.com/apertium/ GitHub], and especially ask on IRC. Add 100 common stems to the dictionary. Your submission should be in the form of a repository on GitHub that we can fork to the Apertium organisation.
|tags=languages, bootstrap, dictionaries, translators
|beginner=yes
|multi=25
}}

{{Taskidea
|type=code
|mentors=JNW, wei2912, padth4i, popcorndude
|title=Use apertium-init to bootstrap a new language module
|description=Use the [[Apertium-init]] script to bootstrap a new language module that doesn't currently exist in Apertium. To see if a language is available, search our repositories on [https://github.com/apertium/ GitHub], and especially ask on IRC. Add enough stems and morphology to the module so that it analyses and generates at least 100 correct forms. Your submission should be in the form of a repository on GitHub that we can fork to the Apertium organisation. [[Task ideas for Google Code-in/Add words from frequency list|Read more about adding stems...]]
|tags=languages, bootstrap, dictionaries
|beginner=yes
|multi=25
}}

{{Taskidea
|type=code
|mentors=JNW, sevilay, Unhammer, marcriera, padth4i, Oguz, popcorndude
|title=Write 10 lexical selection rules for an existing translation pair
|description=Add 10 lexical selection rules to an existing translation pair. Submit your work as a GitHub pull request to that pair. [[Task ideas for Google Code-in/Add lexical-select rules|Read more...]]
|tags=languages, bootstrap, lexical selection, translators
|multi=25
|dup=5
}}

{{Taskidea
|type=code
|mentors=JNW, Unhammer, padth4i, Oguz, popcorndude
|title=Write 10 constraint grammar rules for an existing language module
|description=Add 10 constraint grammar rules to an existing language module for a language that you know. Submit your work as a GitHub pull request to that module. [[Task ideas for Google Code-in/Add constraint-grammar rules|Read more...]]
|tags=languages, bootstrap, constraint grammar
|multi=25
|dup=5
}}

{{Taskidea
|type=research
|mentors=anakuz, fotonzade
|title=Syntactic annotation of text
|description=Pick a text of about 200 words and make a syntactic annotation for it according to the Universal Dependencies treebank guidelines. UD Annotatrix can be used for visualisation. Consult with your mentor about the language.
|tags=UD, trees, annotation
}}

{{Taskidea
|type=research
|mentors=JNW, ftyers, fotonzade, anakuz, Oguz
|title=Create a UD-Apertium morphology mapping
|description=Choose a language that has a Universal Dependencies treebank and tabulate a potential set of Apertium morph labels based on the (universal) UD morph labels. See Apertium's [[list of symbols]] and [http://universaldependencies.org/ UD]'s POS and feature tags for the labels.
|tags=morphology, ud, dependencies
|beginner=
|multi=5
}}

{{Taskidea
|type=research
|mentors=JNW, ftyers, fotonzade, anakuz
|title=Create an Apertium-UD morphology mapping
|description=Choose a language that has an Apertium morphological analyser and adapt it to convert the morphology to UD morphology.
|tags=morphology, ud, dependencies
|beginner=
|multi=5
}}

{{Taskidea
|type=quality
|title=Install Apertium and verify that it works
|description=See [[Installation]] for instructions, and if you encounter any issues along the way, document them and/or improve the wiki instructions!
|tags=bash
|mentors=ftyers, JNW, Unhammer, anakuz, Josh, fotonzade, sevilay, eirien, wei2912, padth4i, jjjppp
|multi=150
|beginner=yes
}}

{{Taskidea
|type=research
|title=Write a contrastive grammar
|description=Document 6 differences between two (preferably related) languages and where they would need to be addressed in the [[Apertium pipeline]] (morph analysis, transfer, etc.). Use a grammar book/resource for inspiration. Each difference should have no fewer than 3 examples. Put your work on the Apertium wiki under [[Language1_and_Language2/Contrastive_grammar]]. See [[Farsi_and_English/Pending_tests]] for an example of a contrastive grammar that a previous GCI student made.
|mentors=mlforcada, JNW, Josh, xavivars, fotonzade, sevilay, khannatanmai, dolphingarlic, padth4i
|tags=wiki, languages
|beginner=yes
|multi=40
}}

{{Taskidea
|type=quality
|mentors=mlforcada, anakuz, xavivars, fotonzade, sevilay, Unhammer, eirien, dolphingarlic, wei2912, marcriera, padth4i, Oguz, JNW, jjjppp
|tags=xml, dictionaries
|title=Identify and add 100 new entries to the bilingual dictionary for the %AAA%-%BBB% language pair
|description=Our translation systems require large lexicons so as to provide production-quality coverage of any input data. This task requires the student to add 100 new words to a bidirectional dictionary. With the help of your mentor, identify some text in either %AAA% or %BBB% and run it through Apertium's %AAA%-%BBB% translator to identify 100 unknown forms. Add the stems of these forms to the analyser in an appropriate way so that these words are analysed correctly. Your submission should be in the form of a pull request to each of the appropriate repositories on GitHub. [http://wiki.apertium.org/wiki/Task_ideas_for_Google_Code-in/Grow_bilingual More instructions for this task here]...
|multi=40
|beginner=yes
}}

{{Taskidea
|type=quality
|mentors=mlforcada, anakuz, xavivars, fotonzade, ftyers, sevilay, eirien, dolphingarlic, wei2912, marcriera, padth4i, Oguz, JNW
|tags=xml, dictionaries
|title=Identify and add 250 new entries to the bilingual dictionary for the %AAA%-%BBB% language pair
|description=Our translation systems require large lexicons so as to provide production-quality coverage of any input data. This task requires the student to add 250 new words to a bidirectional dictionary. With the help of your mentor, identify some text in either %AAA% or %BBB% and run it through Apertium's %AAA%-%BBB% translator to identify 250 unknown forms. Add the stems of these forms to the analyser in an appropriate way so that these words are analysed correctly. Your submission should be in the form of a pull request to each of the appropriate repositories on GitHub. [http://wiki.apertium.org/wiki/Task_ideas_for_Google_Code-in/Grow_bilingual More instructions for this task here]...
|dup=20
|beginner=no
}}

{{Taskidea
|type=quality
|mentors=fotonzade, JNW, ftyers, anakuz, xavivars, mlforcada, shardulc, sevilay, Unhammer, dolphingarlic, wei2912, marcriera
|tags=xml, dictionaries
|title=Post-edit 500 sentences of any public domain text from %AAA% to %BBB%
|description=Many of our systems benefit from statistical methods used with (ideally public domain) bilingual data. For this task, you need to translate a public domain text from %AAA% to %BBB% using any available machine translation system and clean up the translations yourself manually. Commit the post-edited texts (in plain text format) to an existing GitHub repository for the language pair (via pull request), or to a new one if needed, in the dev/ or texts/ folder. The texts are subject to mentor approval.
|multi=10
|beginner=yes
}}

{{Taskidea
|type=quality
|mentors=mlforcada, anakuz, xavivars, fotonzade, sevilay, dolphingarlic, wei2912, marcriera, padth4i
|tags=disambiguation
|title=Disambiguate 500 tokens of text in %AAA%
|description=Run some text through a morphological analyser and disambiguate the output. Discuss with the mentor beforehand to approve the choice of language and text. [http://wiki.apertium.org/wiki/Task_ideas_for_Google_Code-in/Manually_disambiguate_text Read more]...
|multi=yes
}}

{{Taskidea
|type=research
|mentors=eirien, anakuz, marcriera, padth4i
|tags=dictionaries
|title=Categorise 100 words from a frequency list in %AAA%
|description=Categorise words from a frequency list into one of the major part-of-speech categories. You will receive a frequency list; work from top to bottom. At the beginning of each line, put a letter which categorises the word form by its part of speech, e.g. n for noun, v for verb, etc. [http://wiki.apertium.org/wiki/Task_ideas_for_Google_Code-in/Categorise_words_from_frequency_list Read more]... <!-- Wouldn't it be better for them to add directly to the analyser? Easier for us, and more educational for them! -JNW -->
|multi=yes
|beginner=yes
}}

{{Taskidea
|type=research
|mentors=eirien, anakuz, sevilay, marcriera
|tags=dictionaries
|title=Categorise 500 words from a frequency list in %AAA%
|description=Categorise words from a frequency list into one of the major part-of-speech categories. You will receive a frequency list; work from top to bottom. At the beginning of each line, put a letter which categorises the word form by its part of speech, e.g. n for noun, v for verb, etc. [http://wiki.apertium.org/wiki/Task_ideas_for_Google_Code-in/Categorise_words_from_frequency_list Read more]... <!-- Wouldn't it be better for them to add directly to the analyser? Easier for us, and more educational for them! -JNW -->
|multi=yes
}}

{{Taskidea
|type=research
|mentors=khannatanmai, sevilay, padth4i
|tags=evaluation
|title=Evaluate an existing Apertium pair %AAA% to %BBB% on a text
|description=Pick an existing Apertium pair and get a parallel text for that language pair. Translate %AAA% to %BBB% and evaluate the translation using an automatic evaluation metric like BLEU, and/or evaluate it manually. [http://wiki.apertium.org/wiki/Task_ideas_for_Google_Code-in/Evaluation_of_translation_of_an_existing_pair Read more]...
}}

{{Taskidea
|type=code, research
|title=Add apertium-anaphora support to a new language pair
|description=Make a branch of an Apertium language pair that doesn't use apertium-anaphora yet, and call it "anaphora". Manually add the correct antecedent to the ref side of the anaphors in the output of biltrans, and modify t1x to change the anaphor based on its antecedent. Verify that it runs and gives the correct anaphor. See the [http://wiki.apertium.org/wiki/Anaphora_Resolution_Module documentation] of the apertium-anaphora module for help.
|tags=apertium-anaphora, transfer
|mentors=khannatanmai
}}

{{Taskidea
|type=code, research
|title=Add one markable to the arx file for a language pair
|description=The arx file is where we tell the anaphora resolution algorithm which patterns to detect and score when we want to find the antecedent of an anaphor. Add one rule to this file which can help the algorithm find the antecedent of an anaphor: specify the pattern to detect and the positive or negative score you want to give to the noun in this pattern. See the [http://wiki.apertium.org/wiki/Anaphora_Resolution_Module documentation] of the apertium-anaphora module for help.
|tags=apertium-anaphora, anaphora resolution
|mentors=khannatanmai, popcorndude
}}

{{Taskidea
|type=code
|title=Write a TextMate grammar for CG-3 syntax highlighting
|description=We want CG-3 files to be syntax highlighted on GitHub, which uses TextMate-compatible grammars. See [https://github.com/TinoDidriksen/cg3/issues/48 issue 48] in the CG-3 repo.
|tags=cg, editors, tools
|mentors=Unhammer
}}

{{Taskidea
|type=code
|title=Make Apertium IRC bot's messaging system case-insensitive
|description=[[Begiak]] is Apertium's IRC bot. It has a messaging system, where you can say e.g. "begiak, tell randomuser thanks for the tip!" or "begiak: ask randomuser where they filed that issue", and the bot will deliver the message next time it sees randomuser say something. There's been [https://github.com/apertium/phenny/issues/488 a request] for begiak to recognise case-insensitive commands. Your job is to create a fix for this and submit a pull request to the repository with the fix.
|tags=python, irc
|beginner=yes
|mentors=JNW, popcorndude
}}

{{Taskidea
|type=code
|title=Improve how Apertium's IRC bot formats numbers when updating the Apertium wiki
|description=[[Begiak]] is Apertium's IRC bot. It has a module that allows users on IRC to trigger a script that updates the Apertium wiki with statistics about Apertium modules. There have been [https://github.com/apertium/phenny/issues/485 complaints about the formatting of the numbers it writes]. Your job is to create a fix for this and submit a pull request to the repository with the fix.
|tags=python, irc, wiki
|beginner=yes
|mentors=JNW, popcorndude
}}

{{Taskidea
|type=code
|title=Create a new init script for Apertium's IRC bot
|description=[[Begiak]] is Apertium's IRC bot. It runs on a low-power server that runs Debian. Sometimes that server is reset, and we have to manually restart begiak. The init script that used to control begiak no longer seems to work. Your task is to create a new init script that supports the normal sort of actions that init scripts do, and also the following options for begiak: the specific path to run it from, the user to run it as, miscellaneous arguments, and a log file to log its output. This init script should live in a reasonable place in the repository. There is a [https://github.com/apertium/phenny/issues/484 github issue describing this task].
|tags=python, debian, init, irc
|mentors=JNW
}}

{{Taskidea
|type=documentation
|mentors=JNW, flammie, popcorndude
|title=Add comments to a dictionary defining the symbols used in it
|description=Add comments to a monolingual or bilingual dictionary file (.lexc or .dix) in the symbol definitions area that clarify what each symbol stands for. Also direct the comment reader to the Apertium wiki page on symbol definitions for more information.
|tags=dictionaries
|dup=10
}}

{{Taskidea
|type=documentation
|mentors=JNW, popcorndude
|title=Find symbols that aren't on the list of symbols page
|description=Go through the symbol definitions in Apertium dictionaries on GitHub (.lexc and .dix format), and document any symbols you don't find on the [[List of symbols]] page. This task is fulfilled by adding at least one class of related symbols (e.g., xyz_*) or one major symbol (e.g., abc), along with notes about what it means.
|tags=wiki, dictionaries
}}

{{Taskidea
|type=documentation
|title=Document usage of the apertium-separable module
|mentors=JNW, khannatanmai, popcorndude
|description=Document which language pairs have included the [[apertium-separable]] module in their packages, which have beta-tested the lsx module, and which are good candidates for including support for lsx. Add your findings to [[Lsx_module/supported_languages|this wiki page]].
|tags=lsx, dictionaries, wiki
}}

{{Taskidea
|type=quality
|title=Beta-test the apertium-separable module
|mentors=JNW, ftyers, wei2912, khannatanmai, dolphingarlic, popcorndude
|description=[[Lsx_module#Creating_the_lsx-dictionary|Create an lsx dictionary]] in both directions for any relevant and existing language pair that doesn't yet support it (as a "separable" branch in its GitHub repository), adding 10-30 entries to it in one or both directions. Test thoroughly to make sure the output is as expected. Report bugs/non-supported features and add them to [[Lsx_module#Future_work|future work]]. Document your tested language pair by listing it under [[Lsx_module#Beta_testing]] and on [[Lsx_module/supported_languages|this wiki page]].
|tags=lsx, dictionaries
|multi=yes
|dup=20
}}

{{Taskidea
|type=code, quality
|title=Script to test coverage of an analyser over the corresponding Wikipedia corpus
|mentors=JNW, wei2912
|description=Write a script (in Python or Ruby) that tests the coverage of an Apertium analyser over the latest Wikipedia corpus in that language. One mode of this script should check out a specified language module to a given directory, compile it (or update it if it already exists), then fetch the most recent Wikipedia nightly archive for that language and run coverage over it (as much in RAM as possible). In another mode, it should compile the language pair in a Docker instance that it then disposes of after successfully running coverage. Scripts already exist in Apertium for finding where a Wikipedia is, extracting a Wikipedia archive into a text file, and running coverage. Ask a mentor for help finding these scripts.
|tags=python, ruby, wikipedia
}}

{{Taskidea
|type=code, design
|title=Make source browser headings sticky at bottom of window
|description=Make headings that are out of view (either below when at the top, or above when scrolled down) sticky on the [https://apertium.github.io/apertium-on-github/source-browser.html Apertium source browser], so that it's clear what other headings exist. There is a [https://github.com/apertium/apertium-on-github/issues/22 github issue for this].
|tags=css, javascript, html, web
|mentors=sushain, JNW, xavivars
|multi=
|beginner=no
}}

{{Taskidea
|type=code, design
|mentors=JNW, jjjppp, sushain, dolphingarlic
|tags=d3, javascript
|title=Integrate globe viewer into language family visualiser interface
|description=The [https://github.com/apertium/family-visualizations family visualiser interface] has four info boxes when a language is clicked on, and one of those boxes is empty. The [https://github.com/jonorthwash/Apertium-Global-PairViewer globe viewer] provides a globe visualisation of the languages that a given language can be translated to and from. This task is to integrate the globe viewer for a specific language into the fourth box in the family visualiser. There is an [https://github.com/jonorthwash/Apertium-Global-PairViewer/issues/32 associated GitHub issue].
|multi=no
|beginner=no
}}

{{Taskidea
|type=code
|mentors=JNW, jjjppp, dolphingarlic
|tags=d3, javascript
|title=Change hard-coded values to dynamic in the globe viewer's code
|description=The [https://github.com/jonorthwash/Apertium-Global-PairViewer globe viewer] provides a globe visualisation of the languages that a given language can be translated to and from. This task is to clean up its source code by changing hard-coded values (e.g. scales and object sizes) to dynamic values so that the code will be easier to maintain in the future. There is an [https://github.com/jonorthwash/Apertium-Global-PairViewer/issues/24 associated GitHub issue].
|multi=no
|beginner=no
}}

{{Taskidea
|type=code
|mentors=JNW, jjjppp, dolphingarlic
|tags=d3, javascript
|title=Fix fading for flyers in globe viewer
|description=The [https://github.com/jonorthwash/Apertium-Global-PairViewer globe viewer] provides a globe visualisation of the languages that a given language can be translated to and from. Currently, the flyers, which are the 3D colored connections, fade as either end of the connection goes out of the current scope of the globe. However, this causes flyers that connect two far-away languages to be invisible (see the issue for an example). This task is to change the current fading function to account for far-away connections and allow them to stay visible. There is an [https://github.com/jonorthwash/Apertium-Global-PairViewer/issues/22 associated GitHub issue].
|multi=no
|beginner=yes
}}

{{Taskidea
|type=design
|mentors=JNW, ftyers
|tags=UD, design, svg
|title=Design a logo for UD Annotatrix
|description=UD Annotatrix needs a better logo, or set of logos. Have a look at the [https://github.com/jonorthwash/ud-annotatrix/tree/master/server/public current logos] and [https://jonorthwash.github.io/ud-annotatrix/ see them in use]. Design a potential replacement logo that meets the following requirements: somehow incorporates what UD Annotatrix is / is for, is not "cluttered" (like the current cat logo), and can be used at different sizes.
|multi=yes
|beginner=no
}}

{{Taskidea
|type=quality, documentation, design
|mentors=*
|tags=video, tutorial
|title=Video tutorial: installing Apertium, adding to a dictionary, and submitting a PR
|description=Post a video online that (1) demonstrates how to install Apertium on an operating system of your choice, (2) demonstrates how to clone and compile an Apertium translation pair of your choice, (3) shows how to add a new word to the dictionary (categorised correctly), and (4) shows how to submit the updated dictionary as a pull request to Apertium's git repository. Add a link to the video on the [http://wiki.apertium.org/wiki/Installation#Installation_Videos installation videos page] of the Apertium wiki.<br/>The title of the video should make it easy to find, and so should probably be similar to the title of this task. We recommend a screencast with voice-over posted to YouTube, but the format and venue are up to you as long as it is publicly accessible for the long term. Here are [https://www.youtube.com/playlist?list=PLHldb9r6QkVFsuxlAoVS-OL32aurUOZLC some example videos] that are relevant but that could probably be improved upon.<br/>The video '''does not have to be in English'''; we can evaluate it in any of the following languages: %ZZZ%. Please let us know when you claim the task what language you plan to create the video in, so that we know which mentor(s) should primarily work to evaluate your task.
|multi=200
|beginner=yes
}}
  +
  +
{{Taskidea
  +
|type=quality
  +
|mentors=mlforcada, anakuz, xavivars, fotonzade, sevilay, Unhammer, eirien, dolphingarlic, wei2912, marcriera, padth4i, Oguz, JNW, jjjppp
  +
|tags=xml, dictionaries
  +
|title=Identify and add 100 new entries to a bilingual dictionary
  +
|description=Our translation systems require large lexicons so as to provide production-quality coverage of any input data. This task requires the student to add 100 new words to a bidirectional dictionary. Choose one of the language pairs listed below, and with the help of your mentor, identify some text in one of the two languages, and run the text through Apertium's translator for that language pair to identify 100 unknown forms. As needed, add the stems of these forms to the individual languages' analysers in an appropriate way so that these words are analysed correctly. Your submission should be in the form of a pull request to each of the appropriate repositories on GitHub.<br/>The language pairs we can mentor for this task are the following: %ALLPAIRS%.<br/> [http://wiki.apertium.org/wiki/Task_ideas_for_Google_Code-in/Grow_bilingual More instructions for this task here]...
  +
|multi=40
  +
|dup=10
  +
|beginner=yes
  +
}}
  +
  +
{{Taskidea
  +
|type=quality
  +
|mentors=mlforcada, anakuz, xavivars, fotonzade, ftyers, sevilay, eirien, dolphingarlic, wei2912, marcriera, padth4i, Oguz, JNW
  +
|tags=xml, dictionaries
  +
|title=Identify and add 250 new entries to a bilingual dictionary
  +
|description=Our translation systems require large lexicons so as to provide production-quality coverage of any input data. This task requires the student to add 250 new words to a bidirectional dictionary. Choose one of the language pairs listed below, and with the help of your mentor, identify some text in one of the two languages, and run the text through Apertium's translator for that language pair to identify 250 unknown forms. As needed, add the stems of these forms to the individual languages' analysers in an appropriate way so that these words are analysed correctly. Your submission should be in the form of a pull request to each of the appropriate repositories on GitHub.<br/>The language pairs we can mentor for this task are the following: %ALLPAIRS%.<br/> [http://wiki.apertium.org/wiki/Task_ideas_for_Google_Code-in/Grow_bilingual More instructions for this task here]...
  +
|multi=40
  +
|dup=10
  +
|beginner=no
  +
}}


{{Taskidea
|type=quality
|mentors=fotonzade, JNW, ftyers, anakuz, xavivars, mlforcada, shardulc, sevilay, Unhammer, dolphingarlic, wei2912, marcriera
|tags=xml, dictionaries
|title=Post-edit 500 sentences of a public domain text
|description=Many of our systems benefit from statistical methods that use (ideally public domain) bilingual data. For this task, translate a public domain text with an available machine translation system (Apertium preferred) and manually post-edit the translation yourself. Commit the source text and the post-edited translation (in plain text format) to the language pair's GitHub repository, in its dev/ or texts/ folder, via a pull request (creating a new repository if needed). The texts are subject to mentor approval.<br/>The language pairs we can hypothetically mentor for this task (pending their existence) are the following: %ALLPAIRS%.
|multi=40
|dup=10
|beginner=yes
}}


{{Taskidea
|type=research
|mentors=khannatanmai, sevilay, padth4i
|tags=evaluation
|title=Evaluate an existing Apertium translation pair on a text
|description=Pick an existing Apertium language pair and obtain a parallel text for that pair. Translate the source text with the Apertium pair and evaluate the result with an automatic evaluation metric such as BLEU, and/or evaluate it manually.<br/>The language pairs we can mentor for this task (pending their existence) are the following: %ALLPAIRS%.<br/>[http://wiki.apertium.org/wiki/Task_ideas_for_Google_Code-in/Evaluation_of_translation_of_an_existing_pair Read more]...
|multi=40
|dup=10
}}


<!-- {{Taskidea
|type=quality
|mentors=*
|tags=localisation
|title=Complete website localisation in a language not fully localised
|description=
|multi=
}} -->


<!-- NEW TASKS BELOW -->


</table>
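For the dictionary-growing task above, Apertium conventionally prefixes forms its analyser does not recognise with <code>*</code> in the translator output, so tallying starred tokens is a quick way to build a list of candidate entries, most frequent first. A minimal sketch in Python (the sample string is invented for illustration):

```python
import re
from collections import Counter

def unknown_forms(apertium_output: str) -> Counter:
    """Tally forms that Apertium marked as unknown (prefixed with '*')."""
    return Counter(m.group(1) for m in re.finditer(r"\*(\w+)", apertium_output))

# Invented sample of translator output with unknown words starred:
sample = "Esto es *una prueba con *palabras desconocidas y *una más"
print(unknown_forms(sample).most_common())
```

In practice you would pipe real translator output over your chosen text into this and work through the most frequent unknown forms first.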
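For the post-editing task above, one way to sanity-check how much editing a text actually needed is the word error rate (WER) between the raw MT output and your post-edited version: word-level Levenshtein distance divided by the length of the post-edited reference. A rough, self-contained sketch (not a requirement of the task):

```python
def wer(hypothesis: str, reference: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    hyp, ref = hypothesis.split(), reference.split()
    # dp[i][j] = edit distance between hyp[:i] and ref[:j]
    dp = [[0] * (len(ref) + 1) for _ in range(len(hyp) + 1)]
    for i in range(len(hyp) + 1):
        dp[i][0] = i
    for j in range(len(ref) + 1):
        dp[0][j] = j
    for i in range(1, len(hyp) + 1):
        for j in range(1, len(ref) + 1):
            cost = 0 if hyp[i - 1] == ref[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,      # deletion
                           dp[i][j - 1] + 1,      # insertion
                           dp[i - 1][j - 1] + cost)  # substitution/match
    return dp[len(hyp)][len(ref)] / max(len(ref), 1)
```

A high WER between MT output and post-edited text simply means the sentences needed heavy editing; it can help you and your mentor gauge how useful the raw translations were.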
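For the evaluation task above, BLEU is normally computed with an existing tool, but the core of the metric is simple enough to sketch. The following is a bare-bones sentence-level BLEU (geometric mean of 1- to 4-gram precisions times a brevity penalty, no smoothing), assuming whitespace tokenisation:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate: str, reference: str, max_n: int = 4) -> float:
    """Sentence-level BLEU: geometric mean of n-gram precisions x brevity penalty."""
    cand, ref = candidate.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        cand_counts, ref_counts = Counter(ngrams(cand, n)), Counter(ngrams(ref, n))
        overlap = sum((cand_counts & ref_counts).values())  # clipped matches
        total = len(cand) - n + 1
        if total <= 0 or overlap == 0:
            return 0.0  # no smoothing: any zero precision zeroes the score
        precisions.append(overlap / total)
    brevity = 1.0 if len(cand) >= len(ref) else math.exp(1 - len(ref) / len(cand))
    return brevity * math.exp(sum(math.log(p) for p in precisions) / max_n)
```

Real evaluations normally use corpus-level BLEU with smoothing (e.g. via an established implementation); this sketch is only to make the metric concrete before you run it on your translated parallel text.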

==Mentors==

These are the languages that can be substituted for AAA and/or BBB in the tasks each mentor is listed for above.

If you do not see your language here, ask. We may be able to mentor or find you a mentor.

{|class=wikitable
! Mentor !! Languages
|-
| ftyers || eng, spa, cat, fra, nor, rus, por, swe, tur, gag, aze
|-
| JNW || eng, spa, fra, rus, tur, gag, aze, kaz, kir, kaa, tat, bak, kum, nog, uzb, uig, crh, khk, yid
|-
| anakuz || grn, spa, por, rus
|-
| fotonzade || eng, tur, aze, uig, tat, crh, kmr, ckb, fas
|-
| xavivars || cat, spa, eng, fra
|-
| Unhammer || nno, nob, swe, dan, fao, sme, ovd
|-
| shardulc || eng, fra, mar, hin, urd, kan
|-
| m-alpha || eng, fra, byv
|-
| popcorndude || eng, spa, cym, heb
|-
| sevilay || eng, ara, tur, kaz, aze, tat, gag, uig, uzb, crh, kum
|-
| eirien || sah, rus, eng
|-
| khannatanmai || eng, hin
|-
| flammie || fin, krl, olo, hun, nio, kpv, mdf, tlh, fra, swe, eng, est, ekk, vro
|-
| dolphingarlic || afr, deu, eng
|-
| wei2912 || eng, zho
|-
| marcriera || cat, spa, eng, ron
|-
| padth4i || eng, mal, hin
|-
| Oguz || eng, tur, uig, aze, crh
|-
| mlforcada || eng, cat, eus, fra, por, glg, spa, gle, bre
|-
| ayushjain || eng, hin
|-
| jjjppp || eng, lat
|}

== Counts ==

Last updated by [[User:Firespeaker|Firespeaker]] ([[User talk:Firespeaker|talk]]) 07:30, 28 October 2019 (CET).

{| class="sortable wikitable"
 
|-
 
| {{sc|research}} || Categorise Russian verbs || Categorise 150 verbs by inflectional endings for the [[Russian and Ukrainian]] pair. || [[User:Francis Tyers]] [[User:Trondtr]]
 
|-
 
| {{sc|code}} || Syntax tree visualisation using GNU bison || Write a program which reads a grammar using bison, parses a sentence, and outputs the syntax tree as text or in Graphviz format. Some example bison code can be found [https://svn.code.sf.net/p/apertium/svn/branches/transfer4 here]. || [[User:Francis Tyers]] [[User:Mlforcada]]
 
|-
 
| {{sc|documentation}} || Document how to install [http://www.wikibhasha.org/index.htm WikiBhasha] with MediaWiki || WikiBhasha is an extension for MediaWiki to allow translation of content using Microsoft's translator. As the first step to getting it to work with Apertium, we'd like to find out how to install it. || [[User:Francis Tyers]]
 
|-
 
| {{sc|code}} || Apertium plugin for WikiBhasha || Make a plugin for [http://www.mediawiki.org/wiki/Extension:WikiBhasha WikiBhasha] that can be used to translate content using the [[API|apertium API]] (or [[apertium-apy]]), with a way to specify the API url to use in a configuration option. || [[User:Francis Tyers]] [[User:Firespeaker]]
 
|-
 
| {{sc|code}} || Make WikiBhasha take content from any language's wikipedia || Modify the code of [http://www.mediawiki.org/wiki/Extension:WikiBhasha WikiBhasha] so that it can use ("collect") content from an arbitrary language's wikipedia. Currently it only takes data from the English-language wikipedia. || [[User:Francis Tyers]] [[User:Firespeaker]]
 
|-
 
| {{sc|code}} || Bilingual dictionary from word alignments script || Write a script which takes [[GIZA++]] alignments and outputs a <code>.dix</code> file. The script should be able to reduce the number of tags, and also have some heuristics to test if a word is too-frequently aligned. || [[User:Francis Tyers]]
 
|-
 
| {{sc|code}} || {{sc|multi}} Scraper for free forum content || Write a script to scrape/capture all freely available content for a forum or forum category and dump it to an xml corpus file or text file. || [[User:Firespeaker]]
 
|-
 
| {{sc|research}} || Investigate how orthographic modes on kk.wikipedia.org are implemented || [http://kk.wikipedia.org The Kazakh-language wikipedia] has a menu at the top for selecting alphabet (Кирил, Latın, توتە - for Cyrillic-, Latin-, and Arabic-script modes). This appears to be some sort of plugin that transliterates the text on the fly. Find out what it is and how it works, and then document it somewhere on the wiki. If this has already been documented elsewhere, link to that, but you should still summarise in your own words what exactly it is. || [[User:Firespeaker]]
 
|-
 
| {{sc|code}} || Write a transliteration plugin for mediawiki || Write a plugin similar in functionality (and perhaps implementation) to the way the [http://kk.wikipedia.org Kazakh-language wikipedia]'s orthography changing system works. It should be able to be directed to use any arbitrary mode from an apertium mode file installed in a pre-specified path on a server.|| [[User:Firespeaker]]
 
|-
 
| {{sc|code}} || Implement the intersection operator in lttoolbox || Write a method for [[lttoolbox]] which can intersect two <code>Transducer</code> classes. || [[User:Francis Tyers|Francis Tyers]]
 
|-
 
| {{sc|code}} || Intersection of [[ATT format]] transducers || Write a python program to intersect two transducers in ATT format. One transducer will be a morphological analyser and the other a bilingual dictionary. The bilingual dictionary should be considered to be a set of prefixes. || [[User:Francis Tyers|Francis Tyers]]
 
|-
 
| {{sc|quality}} || Generalise phenny/begiak git plugin || Rename the module to git (instead of github), and test it to make sure it's general enough for at least three common git services (should already be supported, but double check) || [[User:Firespeaker]]
 
|-
 
| {{sc|code}} || phenny/begiak git plugin commit info function || Add a function to get the status of a commit by repository name and commit identifier (similar to what the svn module does), and then find out why commit 6a54157b89aee88511a260a849f104ae546e3a65 in turkiccorpora resulted in the following output, and fix it: Something went wrong: dict_keys(['commits', 'user', 'canon_url', 'repository', 'truncated']) || [[User:Firespeaker]]
 
|-
 
| {{sc|code}} || phenny/begiak git plugin recent function || Find out why the recent function (begiak: recent) returns "urllib.error.HTTPError: HTTP Error 401: UNAUTHORIZED (file "/usr/lib/python3.1/urllib/request.py", line 476, in http_error_default)" for one of the repos and fix it so it returns status instead. || [[User:Firespeaker]]
 
|-
 
| {{sc|code}} || phenny/begiak git plugin status || Add a function that lets anyone (not just admin) get the status of the git event server. || [[User:Firespeaker]]
 
|-
 
| {{sc|documentation}} || Document phenny/begiak git plugin || Document the module: how to use it with each service it supports, and the various ways the module can be interacted with (by administrators and anyone) || [[User:Firespeaker]]
 
|-
 
| {{sc|code}}, {{sc|quality}} || phenny/begiak svn plugin info function || Find out why the info function ("begiak info [repo] [rev]") doesn't work and fix it. || [[User:Firespeaker]]
 
|-
 
| {{sc|research}} || {{sc|multi}} train tesseract on a language with no available tesseract data || Train tesseract (the OCR software) on a language that it hasn't previously been trained on. We're especially interested in languages with some coverage in apertium. We can provide images of text to train on. || [[User:Firespeaker]] [[User:Francis Tyers]]
 
|-
 
| {{sc|research}} || {{sc|multi}} scrape a freely available dictionary using tesseract || Use tesseract to scrape a freely available dictionary that exists in some image format (pdf, djvu, etc.). Be sure to scrape grammatical information if available, as well as stems (e.g., some dictionaries might provide entries like АЗНА·Х, where the stem is азна), and all possible translations. Ideally it should dump into something resembling [[bidix]] format, but if there's no grammatical information and no way to guess at it, some flat machine-readable format is fine. || [[User:Firespeaker]] [[User:Francis Tyers]]
 
|-
 
| {{sc|code}} || make scraper plugin for azadliq.org || Using the directions at [[Writing a scraper]], make an RFERL scraper for azadliq.org , as a file that loops through pages (like the existing scrp-* files) plus a class to be included in the scraper_classes file. || [[User:Firespeaker]]
 
|-
 
| {{sc|documentation}} || enhance documentation on RFERL scraper to make it read like a HOWTO guide || Enhance the documentation at [[Writing a scraper]] to read/flow more like a HOWTO guide, keeping the current documentation (cleaning/reorganising as needed) as more of a reference guide. It should include information about what needs to be done for RFERL content and non-RFERL content (more general) || [[User:Firespeaker]]
 
|-
 
| {{sc|code}} || Write an aligner for UDHR || Write a script to align two translations of the [[UDHR]] (final destination: trunk/apertium-tools/udhr_aligner.py). It should take two of the XML-formatted UDHR translations available from [http://www.unicode.org/udhr/index_by_name.html http://www.unicode.org/udhr/index_by_name.html] as input and output the aligned texts as a tmx file with one article per entry. || [[User:Firespeaker]]
 
|-
 
| {{sc|quality}} || Import nouns from azmorph into apertium-aze || Take the nouns (excluding proper nouns) from [https://svn.code.sf.net/p/apertium/svn/branches/azmorph https://svn.code.sf.net/p/apertium/svn/branches/azmorph] and put them into [[lexc]] format in [https://svn.code.sf.net/p/apertium/svn/incubator/apertium-aze https://svn.code.sf.net/p/apertium/svn/incubator/apertium-aze]. || [[User:Firespeaker]] [[User:Francis Tyers]]
 
|-
 
| {{sc|quality}} || Import adjectives from azmorph into apertium-aze || Take the adjectives from [https://svn.code.sf.net/p/apertium/svn/branches/azmorph https://svn.code.sf.net/p/apertium/svn/branches/azmorph] and put them into [[lexc]] format in [https://svn.code.sf.net/p/apertium/svn/incubator/apertium-aze https://svn.code.sf.net/p/apertium/svn/incubator/apertium-aze]. || [[User:Firespeaker]] [[User:Francis Tyers]]
 
|-
 
| {{sc|quality}} || Import adverbs from azmorph into apertium-aze || Take the adverbs from [https://svn.code.sf.net/p/apertium/svn/branches/azmorph https://svn.code.sf.net/p/apertium/svn/branches/azmorph] and put them into [[lexc]] format in [https://svn.code.sf.net/p/apertium/svn/incubator/apertium-aze https://svn.code.sf.net/p/apertium/svn/incubator/apertium-aze]. || [[User:Firespeaker]] [[User:Francis Tyers]]
 
|-
 
| {{sc|quality}} || Import verbs from azmorph into apertium-aze || Take the verbs from [https://svn.code.sf.net/p/apertium/svn/branches/azmorph https://svn.code.sf.net/p/apertium/svn/branches/azmorph] and put them into [[lexc]] format in [https://svn.code.sf.net/p/apertium/svn/incubator/apertium-aze https://svn.code.sf.net/p/apertium/svn/incubator/apertium-aze]. || [[User:Firespeaker]] [[User:Francis Tyers]]
 
|-
 
| {{sc|quality}} || Import misc categories from azmorph into apertium-aze || Take the categories that aren't nouns, proper nouns, adjectives, adverbs, and verbs from [https://svn.code.sf.net/p/apertium/svn/branches/azmorph https://svn.code.sf.net/p/apertium/svn/branches/azmorph] and put them into [[lexc]] format in [https://svn.code.sf.net/p/apertium/svn/incubator/apertium-aze https://svn.code.sf.net/p/apertium/svn/incubator/apertium-aze]. || [[User:Firespeaker]] [[User:Francis Tyers]]
 
|-
 
| {{sc|code}} || script to generate dictionary from IDS data || Write a script that takes two lg_id codes, scrapes those dictionaries at [http://lingweb.eva.mpg.de/ids/ IDS], matches entries, and outputs a dictionary in [[bidix]] format || [[User:Francis Tyers]] [[User:Firespeaker]]
 
|-
 
| {{sc|code}} || make concordancer work with output of analyser || Allow [http://pastebin.com/raw.php?i=KG8ydLPZ spectie's concordancer] to accept an optional apertium mode and directory (implement via argparse). When it has these, it should run the corpus through that apertium mode and search against the resulting tags and lemmas as well as the surface forms. E.g., the form алдым might have the analysis via an apertium mode of ^алдым/алд{{tag|n><px1sg}}{{tag|nom}}/ал{{tag|v><tv}}{{tag|ifi><p1}}{{tag|sg}}, so a search for "px1sg" should bring up this word. || [[User:Francis Tyers]] [[User:Firespeaker]]
 
|-
 
| {{sc|code}} || scraper for all text urls from kumukia.ru/adabiat || Write a scraper that gets the urls of all texts at kumukia.ru/adabiat . It should look into all categories on the navigation bar at the left, and get all urls to texts from each target page. || [[User:Firespeaker]] [[User:Francis Tyers]]
 
|-
 
| {{sc|code}} || scraper for all article urls from kumukia.ru/cat-qumuq.html || Write a scraper that recursively gets the urls of all the articles in the categories linked to at kumukia.ru/cat-qumuq.html . It should load each page in each category, and get the link to each article (at the text "далее..." = "further..."). || [[User:Firespeaker]] [[User:Francis Tyers]]
 
|-
 
| {{sc|code}} || write a scraper plugin for kumukia.ru/adabiat texts || [[Writing a scraper|Write a scraper plugin]] that extracts the raw text from the texts linked to from kumukia.ru/adabiat . || [[User:Firespeaker]] [[User:Francis Tyers]]
 
|-
 
| {{sc|code}} || write a scraper plugin for kumukia.ru/cat-qumuq.html articles|| [[Writing a scraper|Write a scraper plugin]] that extracts the raw text from the articles linked to from kumukia.ru/cat-qumuq.html . || [[User:Firespeaker]] [[User:Francis Tyers]]
 
|-
 
| {{sc|research}} || apache forwarding to apertium-apy || Find out how to get apache to forward an arbitrary url (e.g., http://example.com/UrlForAPI) to [[apertium-apy]], which is a stand-alone service that runs on an arbitrary port. Document in detail on the wiki. || [[User:Firespeaker]] [[User:Francis Tyers]]
 
|-
 
| {{sc|quality}} || fix pairviewer's 2- and 3-letter code conflation problems || [[pairviewer]] doesn't always conflate languages that have two codes. E.g. sv/swe, nb/nob, de/deu, da/dan, uk/ukr, et/est, nl/nld, he/heb, ar/ara, eus/eu are each two separate nodes, but should instead each be collapsed into one node. Figure out why this isn't happening and fix it. Also, implement an algorithm to generate 2-to-3-letter mappings for available languages based on having the identical language name in languages.json instead of loading the huge list from codes.json; try to make this as processor- and memory-efficient as possible. || [[User:Firespeaker]]
 
|-
 
| {{sc|code}} || add language searching to pairviewer || Add some way to search for language (by name or code) to [[pairviewer]]. When your search is matched, it should highlight the node in some clever way (e.g., how OS X dims everything but what you searched for). || [[User:Firespeaker]]
 
|-
 
| {{sc|quality}} || come up with better colours for pairviewer || [[Pairviewer]] currently has a set of colours that's semantically annoying. Ideally we would have something clearer. The main idea is that the darker colours represent more-worked-on pairs, and "good" colours (e.g., green) represent more production-ready pairs. The current set of colour scales, instead of relying solely on darkness of colour within each scale, rely on hue also. Make it so that these scales internally rely more exclusively on darkness. Try a few different variants and run them by your mentor, who will have final say as to what's best. (And if you can leave a few variants around that can be switched out easily in the code, that would be good too.) You can merge the 1-9 and 10-99 categories if you want (so you'll only need 4 shades per hue); anything under 100 stems is a very small language pair, and we don't need any more detail than that. || [[User:Firespeaker]]
 
|-
 
| {{sc|code}} || map support for pairviewer ("pairmapper") || Write a version of [[pairviewer]] that instead of connecting floating nodes, connects nodes on a map. I.e., it should plot the nodes to an interactive world map (only for languages whose coordinates are provided, in e.g. GeoJSON format), and then connect them with straight lines (as opposed to the current curved lines). Use an open map framework, like [http://leafletjs.com leaflet], [http://polymaps.org polymaps], or [http://openlayers.org openlayers] || [[User:Firespeaker]]
 
|-
 
| {{sc|code}} || coordinates for Caucasian languages || Using the map [https://commons.wikimedia.org/wiki/File:Caucasus-ethnic_en.svg Caucasus-ethnic_en.svg], write a file in [https://en.wikipedia.org/wiki/GeoJSON GeoJSON] (or similar) format that can be loaded by pairmapper (or, e.g., converted to kml and loaded in google maps). The file should contain points that are a geographic "center" (locus) for where each language on that map is spoken. Exclude Kurdish (since it's off the map), and keep Azeri in Azerbaycan and Iran separate. You can use a capital city for bigger, national languages if you'd like (think Paris as a locus for French). || [[User:Firespeaker]]
 
|-
 
| {{sc|code}} || coordinates for Central Asian languages || Using the map [https://commons.wikimedia.org/wiki/File:Central_Asia_Ethnic_en.svg Central_Asia_Ethnic_en.svg], write a file in [https://en.wikipedia.org/wiki/GeoJSON GeoJSON] (or similar) format that can be loaded by pairmapper (or, e.g., converted to kml and loaded in google maps). The file should contain points that are a geographic "center" (locus) for where each Turkic language on that map (and also Tajik) is spoken. You can use a capital city for bigger, national languages if you'd like (think Paris as a locus for French). || [[User:Firespeaker]]
 
|-
 
| {{sc|code}} || coordinates for Mongolic languages || Using the map [https://en.wikipedia.org/wiki/File:Linguistic_map_of_the_Mongolic_languages.png Linguistic map of the Mongolic languages.png], write a file in [https://en.wikipedia.org/wiki/GeoJSON GeoJSON] (or similar) format that can be loaded by pairmapper (or, e.g., converted to kml and loaded in google maps). The file should contain points that are a geographic "center" (locus) for where each Mongolic language on that map is spoken. Use the term "Khalkha" (iso 639-3 khk) for "Mongolisch", and find a better map for Buryat. You can use a capital city for bigger, national languages if you'd like (think Paris as a locus for French). || [[User:Firespeaker]]
 
|-
 
| {{sc|code}} || draw languages as areas for pairmapper || Make a map interface that loads data (in e.g. GeoJSON or KML format) specifying areas where languages are spoken, as well as a single-point locus for the language, and displays the areas on the map (something like [http://leafletjs.com/examples/choropleth.html the way the states are displayed here]) with a node with language code (like for [[pairviewer]]) at the locus. This should be able to be integrated into pairmapper, the planned map version of pairviewer. || [[User:Firespeaker]]
 
|-
 
| {{sc|code}} || georeference language areas for Tatar, Bashqort, and Chuvash || Using the maps listed here, try to define rough areas for where Tatar, Bashqort, and Chuvash are spoken. These areas should be specified in a file in [https://en.wikipedia.org/wiki/GeoJSON GeoJSON] (or similar) format for use by pairmapper's languages-as-areas plugin. Try to be fairly accurate and detailed. Maps to consult include [https://commons.wikimedia.org/wiki/File:Tatarbashkirs1989ru.PNG Tatarsbashkirs1989ru], [https://commons.wikimedia.org/wiki/File:NarodaCCCP.jpg NarodaCCCP] || [[User:Firespeaker]]
 
|-
 
| {{sc|code}} || georeference language areas for North Caucasus Turkic languages || Using the map [https://commons.wikimedia.org/wiki/File:Caucasus-ethnic_en.svg Caucasus-ethnic_en.svg], write a file in [https://en.wikipedia.org/wiki/GeoJSON GeoJSON] (or similar) format for use by pairmapper's languages-as-areas plugin. The file should contain specifications for the area(s) the following languages are spoken in: Kumyk, Nogay, Karachay, Balkar. There should be a certain level of detail (e.g., don't just make a shape matching Kazakhstan for Kazakh) and accuracy (i.e., don't just put a square over Kazakhstan and call it the area for Kazakh). || [[User:Firespeaker]]
 
|-
 
| {{sc|code}} || georeference language areas for IE and Mongolic Caucasus-area languages || Using the map [https://commons.wikimedia.org/wiki/File:Caucasus-ethnic_en.svg Caucasus-ethnic_en.svg], write a file in [https://en.wikipedia.org/wiki/GeoJSON GeoJSON] (or similar) format for use by pairmapper's languages-as-areas plugin. The file should contain specifications for the area(s) the following languages are spoken in: Ossetian, Armenian, Kalmyk. There should be a certain level of detail (e.g., don't just make a shape matching Kazakhstan for Kazakh) and accuracy (i.e., don't just put a square over Kazakhstan and call it the area for Kazakh). || [[User:Firespeaker]]
 
|-
 
| {{sc|code}} || georeference language areas for North Caucasus languages || Using the map [https://commons.wikimedia.org/wiki/File:Caucasus-ethnic_en.svg Caucasus-ethnic_en.svg], write a file in [https://en.wikipedia.org/wiki/GeoJSON GeoJSON] (or similar) format for use by pairmapper's languages-as-areas plugin. The file should contain specifications for the area(s) the following languages are spoken in: Avar, Chechen, Abkhaz, Georgian. There should be a certain level of detail (e.g., don't just make a shape matching Kazakhstan for Kazakh) and accuracy (i.e., don't just put a square over Kazakhstan and call it the area for Kazakh). || [[User:Firespeaker]]
 
|-
 
| {{sc|code}} || georeference language areas for misc Caucasus-area languages || Using the maps [https://commons.wikimedia.org/wiki/File:Caucasus-ethnic_en.svg Caucasus-ethnic_en.svg] and [https://commons.wikimedia.org/wiki/File:Lezgin_map.png Lezgin_map], write a file in [https://en.wikipedia.org/wiki/GeoJSON GeoJSON] (or similar) format for use by pairmapper's languages-as-areas plugin. The file should contain specifications for the area(s) the following languages are spoken in: Lezgi, Azeri (Azerbaycan), Azeri (Iran), Ingush. There should be a certain level of detail (e.g., don't just make a shape matching Kazakhstan for Kazakh) and accuracy (i.e., don't just put a square over Kazakhstan and call it the area for Kazakh). || [[User:Firespeaker]]
 
|-
 
| {{sc|code}} || georeference language areas for Central Asian languages: Kazakh || Using the map [https://commons.wikimedia.org/wiki/File:Central_Asia_Ethnic_en.svg Central_Asia_Ethnic_en.svg], write a file in [https://en.wikipedia.org/wiki/GeoJSON GeoJSON] (or similar) format for use by pairmapper's languages-as-areas plugin. The file should contain specifications for the areas Kazakh is spoken in, with a certain level of detail (e.g., don't just make a shape matching Kazakhstan for Kazakh) and accuracy (i.e., don't just put a square over Kazakhstan and call it the area for Kazakh). || [[User:Firespeaker]]
 
|-
 
| {{sc|code}} || georeference language areas for Central Asian languages: Karakalpak and Kyrgyz || Using the map [https://commons.wikimedia.org/wiki/File:Central_Asia_Ethnic_en.svg Central_Asia_Ethnic_en.svg], write a file in [https://en.wikipedia.org/wiki/GeoJSON GeoJSON] (or similar) format for use by pairmapper's languages-as-areas plugin. The file should contain specifications for the areas Karakalpak and Kyrgyz are spoken in, with a certain level of detail (e.g., don't just make a shape matching Kazakhstan for Kazakh) and accuracy (i.e., don't just put a square over Kazakhstan and call it the area for Kazakh). || [[User:Firespeaker]]
 
|-
 
| {{sc|code}} || georeference language areas for Central Asian languages: Uzbek and Uyghur || Using the map [https://commons.wikimedia.org/wiki/File:Central_Asia_Ethnic_en.svg Central_Asia_Ethnic_en.svg], write a file in [https://en.wikipedia.org/wiki/GeoJSON GeoJSON] (or similar) format for use by pairmapper's languages-as-areas plugin. The file should contain specifications for the areas Uzbek and Uyghur are spoken in, with a certain level of detail (e.g., don't just make a shape matching Kazakhstan for Kazakh) and accuracy (i.e., don't just put a square over Kazakhstan and call it the area for Kazakh). || [[User:Firespeaker]]
 
|-
 
| {{sc|code}} || georeference language areas for Central Asian languages: Tajik and Turkmen || Using the map [https://commons.wikimedia.org/wiki/File:Central_Asia_Ethnic_en.svg Central_Asia_Ethnic_en.svg], write a file in [https://en.wikipedia.org/wiki/GeoJSON GeoJSON] (or similar) format for use by pairmapper's languages-as-areas plugin. The file should contain specifications for the areas Tajik and Turkmen are spoken in, with a certain level of detail (e.g., don't just make a shape matching Kazakhstan for Kazakh) and accuracy (i.e., don't just put a square over Kazakhstan and call it the area for Kazakh). || [[User:Firespeaker]]
 
|-
 
| {{sc|code}} || georeference areas where Russian is spoken || Assume areas in Central Asia with any sort of measurable Russian population speak Russian. Use the following maps to create a file in [https://en.wikipedia.org/wiki/GeoJSON GeoJSON] (or similar) format for use by pairmapper's languages-as-areas plugin: [https://commons.wikimedia.org/wiki/File:Kazakhstan_European_2012_Rus.png Kazakhstan_European_2012_Rus], [https://commons.wikimedia.org/wiki/File:Ethnicrussians1989ru.PNG Ethnicrussians1989ru], [https://commons.wikimedia.org/wiki/File:Lenguas_eslavas_orientales.PNG Lenguas_eslavas_orientales], [https://commons.wikimedia.org/wiki/File:NarodaCCCP.jpg NarodaCCCP]. Try to cover all the areas where Russian is spoken at least as a major language. || [[User:Firespeaker]]
 
|-
 
| {{sc|code}} || georeference areas where Ukrainian and Belarusian are spoken || Use the following maps to create a file in [https://en.wikipedia.org/wiki/GeoJSON GeoJSON] (or similar) format for use by pairmapper's languages-as-areas plugin that defines where Belarusian and Ukrainian are spoken: [https://commons.wikimedia.org/wiki/File:Lenguas_eslavas_orientales.PNG Lenguas_eslavas_orientales], [https://commons.wikimedia.org/wiki/File:NarodaCCCP.jpg NarodaCCCP]. || [[User:Firespeaker]]
 
|-
 
| {{sc|quality}}, {{sc|code}} || split nor into nob and nno in pairviewer || Currently in [[pairviewer]], nor is displayed as a language separately from nob and nno. However, the nor pair actually consists of both an nob and an nno component. Figure out a way for pairviewer (or pairsOut.py / get_all_lang_pairs.py) to detect this split. So instead of having swe-nor, there would be swe-nob and swe-nno displayed (connected seamlessly with other nob-* and nno-* pairs), though the paths between the nodes would each still give information about the swe-nor pair. Implement a solution, trying to make sure it's future-proof (i.e., will work with similar sorts of things in the future). || [[User:Firespeaker]] [[User:Francis Tyers]] [[User:Unhammer]]
 
 
|-

| {{sc|quality}}, {{sc|code}} || add support to pairviewer for regional and alternate orthographic modes || Currently in [[pairviewer]], there is no way to detect or display modes like zh_TW. Add support to pairsOut.py / get_all_lang_pairs.py to detect pairs containing abbreviations like this, as well as alternate orthographic modes in pairs (e.g. uzb_Latn and uzb_Cyrl). Also, figure out a way to display these nicely in the pairviewer's front-end. Get creative. I can imagine something like zh_CN and zh_TW nodes that are in some fixed relation to zho (think Mickey Mouse configuration?). Run some ideas by your mentor and implement what's decided on. || [[User:Firespeaker]] [[User:Francis Tyers]]
 
|-

| {{sc|research}} || using language transducers for predictive text on Android || Investigate what it would take to add some sort of plugin to existing Android predictive text / keyboard framework(s?) that would allow the use of lttoolbox (or hfst? or libvoikko stuff?) transducers to be used to predict text and/or guess swipes (in "swype" or similar). Document your findings on the apertium wiki. || [[User:Firespeaker]]
 
|-

| {{sc|research}} || custom predictive text keyboards for Android || Research and document on apertium's wiki the steps needed to design an application for Android that could load arbitrarily defined / pre-specified keyboard layouts (e.g., say I want to make custom keyboard layouts for [[Kumyk]] and [[Guaraní]], and load either one into the same program) as well as either an lttoolbox-format transducer or a file easily generated from one that could be paired with a keyboard layout and used to predict text in that language. || [[User:Firespeaker]]
 
|-

| {{sc|code}} || phenny/begiak ethnologue plugin || Make a phenny plugin for use in [[begiak]] that looks up language names and codes on [http://www.ethnologue.com ethnologue] and returns basic info matching the search: language name, iso 639-3 code, where spoken, total number of speakers, and url to main page on ethnologue (note that irc text is character-limited, so it should be concise, e.g. "12,500,000 native speakers" could be abbreviated as just "12.5M L1"). This should work kind of like the .wik and .iso639 plugins (feel free to steal code from those—note that the .wik/.awik plugins chop text (rather well) to fit in a single irc message). || [[User:Firespeaker]]
 
|-

| {{sc|quality}}, {{sc|code}} || phenny/begiak mediawiki plugin(s) support for subsections || Add support to the .wik and .awik phenny/[[begiak]] modules for subsections. Currently if you do something like ".wik French language#Phonology" it brings up random text from the French_language article; instead it should find the Phonology section and return the first bit of text from it (the same as with just an article). || [[User:Firespeaker]]
 
|-

| {{sc|code}} || monodix support for stem-counting script || Add support to the [https://svn.code.sf.net/p/apertium/svn/trunk/apertium-tools/get_stems.py dix stem-counter] for [[monodix]] format. It currently only works on [[bidix]] format. For monodix format, it should count {{tag|e}} elements that specify lm="something". || [[User:Firespeaker]]
 
|}
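For the apache-forwarding research task, the usual answer is mod_proxy. A hedged sketch of the relevant configuration (the path and port are examples; apertium-apy listens on whatever port the instance was started with, commonly 2737):

```apache
# Assumes mod_proxy and mod_proxy_http are enabled (a2enmod proxy proxy_http).
# "/UrlForAPI" and port 2737 are illustrative.
ProxyPass        "/UrlForAPI/" "http://localhost:2737/"
ProxyPassReverse "/UrlForAPI/" "http://localhost:2737/"
```

The task still asks for wiki documentation of the details, e.g. trailing-slash behaviour and access control, which a two-line snippet does not cover.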
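Several of the corpus tasks above end with GIZA++ word alignment, and a later task asks for a bilingual dictionary built from those alignments. As a rough sketch of the post-processing step (assuming the common A3.final output format; the tag reduction and frequency heuristics the bidix task asks for are left out), extracting aligned word pairs might look like this:

```python
import re
from collections import Counter

def aligned_pairs(a3_text):
    """Yield (source, target) word pairs from GIZA++ A3.final-style output.

    Each sentence pair occupies three lines: a comment line, the target
    sentence, and the source sentence annotated with the target
    positions each source word aligns to.
    """
    lines = a3_text.strip().splitlines()
    for i in range(0, len(lines), 3):
        target = lines[i + 1].split()
        for word, positions in re.findall(r'(\S+) \(\{([\d ]*)\}\)', lines[i + 2]):
            if word == 'NULL':  # NULL collects unaligned target words
                continue
            for pos in positions.split():
                yield word, target[int(pos) - 1]  # positions are 1-based

sample = """# Sentence pair (1) source length 4 target length 4 alignment score : 0.01
the house is red
NULL ({ }) la ({ 1 }) casa ({ 2 }) es ({ 3 }) roja ({ 4 })"""

pair_counts = Counter(aligned_pairs(sample))
```

Counting pairs over a whole corpus like this is the natural input to the "too-frequently aligned" heuristic mentioned in the bidix task.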
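For the Wiktionary inflection-extraction tasks, the central step is pulling arguments out of inflection-table template calls in the page wikitext. The template name used below (el-decl-example) is invented for illustration; the real Greek template names must be checked on Wiktionary itself. A minimal sketch, run on a saved wikitext snippet:

```python
import re

# 'el-decl-example' is a made-up template name; substitute the real one.
SAMPLE = "'''θάλασσα''' f\n{{el-decl-example|θάλασσ|ες|nom-sg=θάλασσα}}"

def parse_templates(wikitext):
    """Return (name, positional_args, named_args) for each non-nested {{...}} call."""
    calls = []
    for body in re.findall(r'\{\{([^{}]*)\}\}', wikitext):
        name, *args = body.split('|')
        positional = [a for a in args if '=' not in a]
        named = dict(a.split('=', 1) for a in args if '=' in a)
        calls.append((name, positional, named))
    return calls

templates = parse_templates(SAMPLE)
```

A real solution would fetch pages for every entry in the relevant category via the MediaWiki API and handle nested templates, which this regex deliberately skips.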
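For the two transducer-intersection tasks, the textbook approach is a product construction: walk both transducers in lockstep and keep only arcs whose input:output labels match. The sketch below is deliberately simplified (no epsilons, no weights, and none of the prefix handling the ATT-format task explicitly requires for the bilingual dictionary):

```python
def read_att(att_text):
    """Parse ATT lines: 'src<TAB>dst<TAB>in<TAB>out[<TAB>weight]', or final-state lines."""
    arcs, finals = {}, set()
    for line in att_text.strip().splitlines():
        parts = line.split('\t')
        if len(parts) >= 4:
            src, dst, sym_in, sym_out = parts[:4]
            arcs.setdefault(src, []).append((dst, sym_in, sym_out))
        else:  # '3' or '3<TAB>0.0' marks state 3 as final
            finals.add(parts[0])
    return arcs, finals

def intersect(t1, f1, t2, f2):
    """Product construction over identical in:out labels (epsilon-free)."""
    start = ('0', '0')
    seen, stack = {start}, [start]
    arcs, finals = [], set()
    while stack:
        p, q = stack.pop()
        if p in f1 and q in f2:
            finals.add((p, q))
        for d1, i1, o1 in t1.get(p, []):
            for d2, i2, o2 in t2.get(q, []):
                if (i1, o1) == (i2, o2):
                    arcs.append(((p, q), (d1, d2), i1, o1))
                    if (d1, d2) not in seen:
                        seen.add((d1, d2))
                        stack.append((d1, d2))
    return arcs, finals

cat = "0\t1\tc\tc\n1\t2\ta\ta\n2\t3\tt\tt\n3"
cat_dog = cat + "\n0\t4\td\td\n4\t5\to\to\n5\t6\tg\tg\n6"
arcs, finals = intersect(*read_att(cat_dog), *read_att(cat))
```

Here a toy analyser accepting "cat" and "dog" is intersected with a toy bidix accepting only "cat", so only the c-a-t path survives.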
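For the UDHR aligner, once the articles have been pulled out of the two XML files, emitting TMX is mechanical. The sketch below shows only that half, pairing article texts by position; the header attributes are a minimal guess at what a TMX 1.4 consumer expects:

```python
import xml.etree.ElementTree as ET

def articles_to_tmx(arts_a, arts_b, lang_a, lang_b):
    """Pair two lists of article texts by position into a TMX document string."""
    tmx = ET.Element('tmx', version='1.4')
    ET.SubElement(tmx, 'header', srclang=lang_a, segtype='block',
                  datatype='plaintext', adminlang='en',
                  creationtool='udhr_aligner', creationtoolversion='0.1')
    body = ET.SubElement(tmx, 'body')
    for text_a, text_b in zip(arts_a, arts_b):
        tu = ET.SubElement(body, 'tu')
        for lang, text in ((lang_a, text_a), (lang_b, text_b)):
            tuv = ET.SubElement(tu, 'tuv')
            tuv.set('xml:lang', lang)
            ET.SubElement(tuv, 'seg').text = text
    return ET.tostring(tmx, encoding='unicode')

tmx_out = articles_to_tmx(['All human beings are born free.'],
                          ['Tous les êtres humains naissent libres.'],
                          'en', 'fr')
```

The real script still needs to parse the unicode.org XML files and check that both translations actually contain the same articles before zipping them together.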
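The coordinates-file tasks above all want the same output shape. A minimal sketch of the GeoJSON side (the language code and coordinates below are placeholders, not researched loci; note that GeoJSON orders coordinates longitude-first):

```python
import json

def loci_to_geojson(loci):
    """loci maps an ISO 639-3 code to a (lat, lon) locus point."""
    return json.dumps({
        'type': 'FeatureCollection',
        'features': [
            {'type': 'Feature',
             'properties': {'iso639-3': code},
             'geometry': {'type': 'Point',
                          'coordinates': [lon, lat]}}  # GeoJSON is lon-first
            for code, (lat, lon) in sorted(loci.items())
        ],
    })

geojson_text = loci_to_geojson({'kum': (43.0, 46.8)})  # placeholder coordinates
```

The languages-as-areas tasks would use Polygon geometries instead of Points, but the FeatureCollection wrapper stays the same.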
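For the monodix support task on the stem-counter, the rule is already given above: count {{tag|e}} elements that carry an lm attribute. A self-contained sketch of that check (the sample dictionary fragment is illustrative, not a complete monodix):

```python
import xml.etree.ElementTree as ET

def count_monodix_stems(monodix_xml):
    """Count <e> elements that carry an lm="..." attribute."""
    root = ET.fromstring(monodix_xml)
    return sum(1 for e in root.iter('e') if 'lm' in e.attrib)

sample = """<dictionary>
  <section id="main" type="standard">
    <e lm="house"><i>house</i></e>
    <e lm="cat"><i>cat</i></e>
    <e><re>regex entry without a lemma</re></e>
  </section>
</dictionary>"""

n_stems = count_monodix_stems(sample)
```

In get_stems.py this would sit alongside the existing bidix counting, selected by a format flag or by sniffing the root element.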
   

Latest revision as of 06:23, 5 December 2019

