{{TOCD}}
==Task ideas (2018)==

This is the task ideas page for [https://developers.google.com/open-source/gci/ Google Code-in]. Here you can find ideas for interesting tasks that will improve your knowledge of Apertium and help you get into the world of open-source development.

The people column lists the people you should get in contact with to request further information. Each task is estimated at a maximum of two hours of work for an experienced developer; however:

<!--# '''this does not include time taken to [[Minimal installation from SVN|install]] / set up apertium (and relevant tools)'''.-->

# this is the time an experienced developer is expected to take; you may find that you spend more time on the task because of the learning curve.

<!--Если ты не понимаешь английский язык или предпочитаешь работать над русским языком или другими языками России, смотри: [[Task ideas for Google Code-in/Russian]]-->

'''Categories:'''

* {{sc|code}}: Tasks related to writing or refactoring code
* {{sc|documentation}}: Tasks related to creating/editing documents and helping others learn more
* {{sc|research}}: Tasks related to community management, outreach/marketing, or studying problems and recommending solutions
* {{sc|quality}}: Tasks related to testing and ensuring code is of high quality
* {{sc|design}}: Tasks related to user experience research or user interface design and interaction

'''Clarification of "multiple task" types:'''

* multi = number of students who can do a given task
* dup = number of times a student can do the same task

You can find descriptions of some of the mentors [[List_of_Apertium_mentors|here]].
|||
=== Task ideas === |
|||
<table class="sortable wikitable"> |
|||
<tr><th>type</th><th>title</th><th>description</th><th>tags</th><th>mentors</th><th>bgnr?</th><th>multi?</th><th>duplicates</th></tr> |
|||
{{Taskidea |
|||
|type=research |
|||
|title=Join us on IRC |
|||
|description=Use an IRC client to log onto our IRC channel and stick around for four hours. |
|||
|tags=irc |
|||
|mentors=* |
|||
|multi=150 |
|||
|beginner=yes |
|||
}} |
|||
{{Taskidea |
|||
|type=research, quality |
|||
|title=Adopt a Wiki page |
|||
|description=Request an Apertium Wiki account and adopt a Wiki page by updating it and fixing any issues with it. |
|||
|tags=wiki |
|||
|mentors=* |
|||
|multi=150 |
|||
|beginner=yes |
|||
}} |
|||
{{Taskidea |
|||
|type=code, design |
|||
|title=Make source browser headings sticky at bottom of window |
|||
|description=Make headings that are out of view (either below when at the top, or above when scrolled down) sticky on [https://apertium.github.io/apertium-on-github/source-browser.html Apertium source browser], so that it's clear what other headings exist. There is a [https://github.com/apertium/apertium-on-github/issues/22 github issue for this]. |
|||
|tags=css, javascript, html, web |
|||
|mentors=sushain, JNW, xavivars, shardulc |
|||
|multi= |
|||
|beginner=no |
|||
}} |
|||
{{Taskidea |
|||
|type=code, quality |
|||
|title=Increase test coverage of [[begiak]], our IRC bot, by at least 10% |
|||
|description=There are many modules without any tests at all, unfortunately. See the associated [https://github.com/apertium/phenny/issues/348 GitHub issue] for more details and discussion. |
|||
|tags=python, bot |
|||
|mentors=sushain, JNW, wei2912, Josh, shardulc |
|||
|multi=4 |
|||
|dup=4 |
|||
|beginner=no |
|||
}} |
|||
{{Taskidea |
|||
|type=code |
|||
|title=Improve the .logs command of [[begiak]], our IRC bot |
|||
|description=Currently, the .logs command just links to the root logs. Ideally, it would link to the channel specific logs, support a time being handed to it and have tests. See the associated [https://github.com/apertium/phenny/issues/435 GitHub issue] for more details and discussion. |
|||
|tags=python, bot |
|||
|mentors=sushain, JNW, wei2912, Josh, shardulc |
|||
|beginner=no |
|||
}} |
|||
{{Taskidea |
|||
|type=code |
|||
|title=Update the awikstats module of [[begiak]], our IRC bot, for GitHub |
|||
|description=There are a couple of steps remaining in this process, mostly small modifications to the existing code; they are enumerated in the associated [https://github.com/apertium/phenny/issues/389 GitHub issue], which also contains more context and discussion. |
|||
|tags=python, bot |
|||
|mentors=sushain, JNW, wei2912, shardulc |
|||
|beginner=no |
|||
}} |
|||
{{Taskidea |
|||
|type=code, research |
|||
|title=Research and propose a flood control system for [[begiak]], our IRC bot |
|||
|description=Begiak often floods the channel with notifications from modules such as git. Compile a list of modules which flood Begiak, write a mini-report on the associated [https://github.com/apertium/phenny/issues/159 GitHub issue] and propose changes to be made to the modules. For each module, there should be an issue created with a list of proposed changes, referenced from the main issue. The issue should be added to the associated [https://github.com/apertium/phenny/projects/1 GitHub project]. |
|||
|tags=python, bot |
|||
|mentors=sushain, JNW, wei2912, shardulc |
|||
|beginner=no |
|||
}} |
|||
{{Taskidea |
|||
|type=code, research, quality |
|||
|title=Clean up obsolete modules for [[begiak]], our IRC bot |
|||
|description=Refer to the associated [https://github.com/apertium/phenny/issues/436 GitHub issue] for more details. After this task, the list of modules on [http://wiki.apertium.org/wiki/Begiak Begiak's wiki page] should be updated. |
|||
|tags=python, bot, wiki |
|||
|mentors=sushain, JNW, wei2912, shardulc |
|||
|beginner=yes |
|||
}} |
|||
{{Taskidea |
|||
|type=code |
|||
|title=Add GitHub issue creation functionality to [[begiak]], our IRC bot |
|||
|description=Ideally, this would be added to an existing module. If that doesn't make sense, a new module is acceptable as well. The associated [https://github.com/apertium/phenny/issues/433 GitHub issue] includes an example of the command's usage and reply. |
|||
|tags=python, github, bot |
|||
|mentors=sushain, JNW, wei2912, Josh, shardulc |
|||
|beginner=no |
|||
}} |
|||
{{Taskidea |
|||
|type=code |
|||
|title=Support GitHub modules in [[apertium-get]] |
|||
|description=Unfortunately, the transition from SVN to GitHub means that this script, which is very handy for downloading an Apertium language or pair, no longer fetches the newest packages. This also means that beta.apertium.org is out of date. See the associated [https://github.com/apertium/apertium-get/issues/7 GitHub issue] for more details and discussion. |
|||
|tags=bash, github |
|||
|mentors=sushain, Unhammer, wei2912, Josh, xavivars, shardulc |
|||
|beginner=no |
|||
}} |
|||
{{Taskidea |
|||
|type=code, quality |
|||
|title=Add default CI configs to Apertium packages via [[Apertium-init]] |
|||
|description=Currently, some Apertium pairs/language modules use CI, but it's very inconsistent and doesn't come by default. Apertium-init is the official way to bootstrap a new Apertium package, so if it came with CI support by default, that would be great. See the associated [https://github.com/apertium/apertium-init/issues/51 GitHub issue] for more details and discussion. |
|||
|tags=ci, circleci, yaml |
|||
|mentors=sushain, Unhammer, wei2912, xavivars, shardulc |
|||
|beginner=no |
|||
}} |
|||
{{Taskidea |
|||
|type=code |
|||
|title=Ensure XML produced by [[Apertium-init]] has consistent XML declarations |
|||
|description=Currently, some XML produced by [[Apertium-init]], a script which allows bootstrapping Apertium packages easily, has declarations and some doesn't. Moreover, the declarations are sometimes inconsistent. All XML files should have the same declaration. Note that not all of the XML files in [[Apertium-init]] use the .xml file extension. See the associated [https://github.com/apertium/apertium-init/issues/49 GitHub issue] for more details and discussion. |
|||
|tags=xml, python |
|||
|mentors=sushain, Unhammer, wei2912, xavivars |
|||
|beginner=yes |
|||
}} |
|||
{{Taskidea |
|||
|type=code, quality |
|||
|title=Make [[Apertium-init]]'s default Makefiles and config files pass make distcheck |
|||
|description=Currently, packages created with [[Apertium-init]], a script which allows bootstrapping Apertium packages easily, do not pass the distcheck target. This task requires fixing that. See the associated [https://github.com/apertium/apertium-init/issues/50 GitHub issue] for more details and discussion. |
|||
|tags=python, autotools, make, bash |
|||
|mentors=Unhammer, Flammie, Josh |
|||
|beginner=no |
|||
}} |
|||
{{Taskidea |
|||
|type=code, quality |
|||
|title=Increase [[Apertium-init]] test coverage |
|||
|description=Currently, we have a decent set of tests for the script, but there are some more complex behaviors, such as GitHub interaction, that we don't test. This task requires making substantial improvements to the test coverage numbers. See the associated [https://github.com/apertium/apertium-init/issues/48 GitHub issue] for more details and discussion. |
|||
|tags=python, unittest |
|||
|mentors=sushain, Unhammer, wei2912 |
|||
|beginner=no |
|||
}} |
|||
{{Taskidea |
|||
|type=code |
|||
|title=Ignore .prob files in bilingual modules created by [[Apertium-init]] |
|||
|description=[[Apertium-init]] bootstraps Apertium packages and comes with a default gitignore. This gitignore could be improved by making it ignore *.prob files, but only for pairs, since they are meaningful for language modules. It would be extra cool if we had some tests for this functionality that weren't too contrived. See the associated [https://github.com/apertium/apertium-init/issues/42 GitHub issue] for more details and discussion. |
|||
|tags=python, git |
|||
|mentors=sushain, Unhammer, wei2912, Josh |
|||
|beginner=no |
|||
}} |
|||
{{Taskidea |
|||
|type=code |
|||
|title=Set repository topic on repos created by [[Apertium-init]] |
|||
|description=[[Apertium-init]] bootstraps Apertium packages and supports creating an associated GitHub repository. Our source browser and other scripts expect a GitHub repository topic like "apertium-incubator". This task requires creating the incubator topic by default on repo push, with an option for custom topics. See the associated [https://github.com/apertium/apertium-init/issues/36 GitHub issue] for more details and discussion. |
|||
|tags=python, github, http |
|||
|mentors=sushain, Unhammer, wei2912, xavivars, shardulc, JNW |
|||
|beginner=no |
|||
}} |
|||
{{Taskidea |
|||
|type=code |
|||
|title=Install Apertium and verify that it works |
|||
|description=See [[Installation]] for instructions and if you encounter any issues along the way, document them and/or improve the Wiki! |
|||
|tags=bash |
|||
|mentors=ftyers, JNW, Unhammer, anakuz, Josh, fotonzade |
|||
|multi=150 |
|||
|beginner=yes |
|||
}} |
|||
{{Taskidea |
|||
|type=research |
|||
|title=Write a contrastive grammar |
|||
|description=Document 6 differences between two (preferably related) languages and where they would need to be addressed in the [[Apertium pipeline]] (morph analysis, transfer, etc). Use a grammar book/resource for inspiration. Each difference should have no fewer than 3 examples. Put your work on the Apertium wiki under [[Language1_and_Language2/Contrastive_grammar]]. See [[Farsi_and_English/Pending_tests]] for an example of a contrastive grammar that a previous GCI student made. |
|||
|mentors=Mikel, JNW, Josh, xavivars, fotonzade |
|||
|tags=wiki, languages |
|||
|beginner=yes |
|||
|multi=40 |
|||
}} |
|||
{{Taskidea |
|||
|type=quality |
|||
|mentors=Mikel, anakuz, xavivars, fotonzade |
|||
|tags=xml, dictionaries, svn |
|||
|title=Add 200 new entries to the bidix of language pair %AAA%-%BBB% |
|||
|description=Our translation systems require large lexicons so as to provide production-quality coverage of any input data. This task requires the student to add 200 new words to a bidirectional dictionary. [http://wiki.apertium.org/wiki/Task_ideas_for_Google_Code-in/Grow_bilingual Read more]... |
|||
|multi=40 |
|||
|beginner=yes |
|||
}} |
|||
{{Taskidea |
|||
|type=quality |
|||
|mentors=Mikel, anakuz, xavivars, fotonzade, ftyers |
|||
|tags=xml, dictionaries, svn |
|||
|title=Add 500 new entries to the bidix of language pair %AAA%-%BBB% |
|||
|description=Our translation systems require large lexicons so as to provide production-quality coverage of any input data. This task requires the student to add 500 new words to a bidirectional dictionary. [http://wiki.apertium.org/wiki/Task_ideas_for_Google_Code-in/Grow_bilingual Read more]... |
|||
|dup=10 |
|||
|beginner=no |
|||
}} |
|||
{{Taskidea |
|||
|type=quality |
|||
|mentors=fotonzade, JNW, ftyers, anakuz, xavivars, Mikel, shardulc |
|||
|tags=xml, dictionaries, svn |
|||
|title=Post-edit 100 sentences of any public domain text from %AAA% to %BBB% |
|||
|description=Many of our systems benefit from statistical methods used with (ideally public domain) bilingual data. For this task, you need to translate a public domain text from %AAA% to %BBB% using any available machine translation system and then manually clean up the translations. Commit the post-edited texts (in plain text format) to the pair's existing GitHub repository (via pull request) or, if needed, a new one, in the dev/ or texts/ folder. The texts are subject to mentor approval. |
|||
|multi=10 |
|||
|beginner=yes |
|||
}} |
|||
{{Taskidea |
|||
|type=quality |
|||
|mentors=Mikel, anakuz, xavivars, fotonzade |
|||
|tags=disambiguation, svn |
|||
|title=Disambiguate 500 tokens of text in %AAA% |
|||
|description=Run some text through a morphological analyser and disambiguate the output. Contact the mentor beforehand to approve the choice of language and text. [http://wiki.apertium.org/wiki/Task_ideas_for_Google_Code-in/Manually_disambiguate_text Read more]... |
|||
|multi=yes |
|||
}} |
|||
{{Taskidea |
|||
|type=code |
|||
|title=Use SWIG or equivalent to add C++ bindings for text analysis in [https://github.com/apertium/apertium-python apertium-python] |
|||
|description=Currently, apertium-python just pipes text through the binaries in each [[mode]] file. We would like to directly execute the associated C++ function instead. See the associated [https://github.com/apertium/apertium-python/issues/16 GitHub issue] for more details and discussion. |
|||
|tags=python, c++, swig |
|||
|mentors=sushain |
|||
|beginner=no |
|||
}} |
|||
{{Taskidea |
|||
|type=code |
|||
|title=Use cgo to integrate apertium and lttoolbox C++ libraries in Go |
|||
|description=Currently, all Apertium core libraries are written in C++. There are other languages, like Go, where concurrency is at the very core of the language itself. It would be great to be able to write small programs, like the new lt-proc intergeneration, in Go, using [https://golang.org/cmd/cgo/ cgo] as a way to bind both languages. |
|||
|tags=go, c++, cgo |
|||
|mentors=xavivars |
|||
|beginner=no |
|||
}} |
|||
{{Taskidea |
|||
|type=code |
|||
|title=Integrate [https://github.com/hfst/python HFST's C++ Python] bindings into [https://github.com/apertium/apertium-python apertium-python] |
|||
|description=Currently, apertium-python just pipes text through the binaries in each [[mode]] file. Where appropriate, i.e. where a mode accesses HFST binaries, we would like to directly execute the associated C++ function instead. |
|||
|tags=python, c++, swig |
|||
|mentors=sushain |
|||
|beginner=no |
|||
}} |
|||
{{Taskidea |
|||
|type=code |
|||
|title=Improve the [https://github.com/apertium/apertium-python apertium-python] Windows installation process |
|||
|description=Currently, apertium-python requires a complex installation process for Windows (and Linux). The goal is something that works out-of-the-box with pip. See the associated [https://github.com/apertium/apertium-python/issues/6 GitHub issue] for more details and discussion. |
|||
|tags=python, windows |
|||
|mentors=sushain, wei2912, arghya |
|||
|beginner=no |
|||
}} |
|||
{{Taskidea |
|||
|type=code |
|||
|title=Set up scripts for [https://github.com/apertium/apertium-python/blob/windows/windows.py windows.py] in apertium-python (the relevant issue for questions is [https://github.com/apertium/apertium-python/issues/14 #14]) |
|||
|description=Write a setup.py script that installs the current Apertium+Python setup, and make it work on Windows as well. |
|||
|tags=python, windows |
|||
|mentors=sushain, arghya |
|||
|beginner=yes |
|||
}} |
|||
{{Taskidea |
|||
|type=documentation, code |
|||
|title=Setup documentation generation for [https://github.com/apertium/apertium-python apertium-python] |
|||
|description=Currently, there are some docstrings attached to functions and constants. This task requires setting up Sphinx/readthedocs for apertium-python so these docs are easily accessible. Types should also be visible and documentation should support being written in Markdown, not RST. See the associated [https://github.com/apertium/apertium-python/issues/4 GitHub issue] for more details and discussion. |
|||
|tags=python, sphinx |
|||
|mentors=sushain, Josh, arghya |
|||
|beginner=no |
|||
}} |
|||
{{Taskidea |
|||
|type=code, design |
|||
|title=Upgrade apertium.org ([[html-tools]]) to Bootstrap 4 |
|||
|description=Currently, we are on a frankensteined version of Bootstrap 3. See the associated [https://github.com/apertium/apertium-html-tools/issues/200 GitHub issue] for more details and discussion. Note that the [https://github.com/apertium/apertium-html-tools/issues/314 frankenstein'd CSS] will likely need to be fixed and theme support should be retained (should be simple). |
|||
|tags=javascript, css, web, bootstrap |
|||
|mentors=sushain, xavivars, shardulc |
|||
|beginner=no |
|||
}} |
|||
{{Taskidea |
|||
|type=code, quality |
|||
|title=Get apertium.org ([[html-tools]]) QUnit testing coverage working |
|||
|description=Currently, we have a QUnit testing framework mostly complete. There are some fixes that need to be made in discussion with the mentor and existing comments and JS coverage checking needs to be added so that we can burn down existing debt. See the associated [https://github.com/apertium/apertium-html-tools/pull/268 GitHub PR] for more details and discussion. |
|||
|tags=javascript, jquery, web |
|||
|mentors=sushain, jjjppp |
|||
|beginner=no |
|||
}} |
|||
{{Taskidea |
|||
|type=code, quality |
|||
|title=Fix/prevent apertium.org ([[html-tools]])'s recursive website translation |
|||
|description=Currently, if you try translating Apertium's website with Apertium's website, bad things happen. This 'exploit' is also possible through mutual recursion with another site that offers similar behavior. See the associated [https://github.com/apertium/apertium-html-tools/issues/203 GitHub issue] for more details and discussion. |
|||
|tags=javascript, jquery, web, bootstrap |
|||
|mentors=sushain, Unhammer, shardulc |
|||
|beginner=no |
|||
}} |
|||
{{Taskidea |
|||
|type=code |
|||
|title=Convert apertium.org's API ([[APy]])'s language name storage from SQL to TSV |
|||
|description=Currently, language names that power part of the Apertium HTTP API are stored and updated in SQL. It would be nice if they were stored in a more human-readable format like TSV, with the SQLite database generated at build time; a minimal sketch of such a build step is given below this task. See the associated [https://github.com/apertium/apertium-apy/issues/115 GitHub issue] for more details and discussion. |
|||
|tags=python, sql, tsv |
|||
|mentors=sushain, Unhammer, xavivars |
|||
|beginner=no |
|||
}} |
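For the task above, the build-time step could be as small as reading the TSV with the csv module and writing it into SQLite with sqlite3. A minimal sketch, in which the file names, column layout and table name are only placeholders for illustration (the real schema should be taken from APy itself):

<pre>
#!/usr/bin/env python3
# Sketch: regenerate an SQLite language-name database from a TSV file at build time.
# File names, columns and table name below are assumptions; check apertium-apy
# for the real schema before reusing this.
import csv
import sqlite3

def build_db(tsv_path='language_names.tsv', db_path='langNames.db'):
    conn = sqlite3.connect(db_path)
    cur = conn.cursor()
    cur.execute('DROP TABLE IF EXISTS languageNames')
    cur.execute('CREATE TABLE languageNames (lg TEXT, inLg TEXT, name TEXT)')
    with open(tsv_path, encoding='utf-8') as f:
        for row in csv.reader(f, delimiter='\t'):
            if len(row) == 3:  # expected columns: lg, inLg, name
                cur.execute('INSERT INTO languageNames VALUES (?, ?, ?)', row)
    conn.commit()
    conn.close()

if __name__ == '__main__':
    build_db()
</pre>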
|||
{{Taskidea |
|||
|type=code |
|||
|title=Support unicode without escape sequences in apertium.org's API ([[APy]]) |
|||
|description=Currently, HTTP responses with unicode characters are emitted as \uNNNN by the Apertium API. Ideally, the characters would just be emitted as UTF-8 text; a minimal illustration of the underlying fix is sketched below this task. See the associated [https://github.com/apertium/apertium-apy/issues/60 GitHub issue] for more details and discussion. |
|||
|tags=python, api, unicode, json, http |
|||
|mentors=sushain, Unhammer, shardulc |
|||
|beginner=no |
|||
}} |
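On the Python side this usually comes down to how the JSON is serialised: json.dumps escapes non-ASCII characters by default, while ensure_ascii=False keeps them as real UTF-8 text. A minimal illustration of the idea (not APy's actual response code):

<pre>
import json

payload = {'responseData': {'translatedText': 'наука'}}

# Default serialisation escapes non-ASCII characters:
print(json.dumps(payload))                      # ... "\u043d\u0430\u0443\u043a\u0430" ...
# Keeping them as real UTF-8 text instead:
print(json.dumps(payload, ensure_ascii=False))  # ... "наука" ...
</pre>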
|||
{{Taskidea |
|||
|type=code |
|||
|title=Make apertium.org ([[html-tools]]) fail more gracefully when the API is down |
|||
|tags=javascript, html, css, web |
|||
|description=Currently, [https://github.com/goavki/apertium-html-tools html-tools] relies on an API endpoint to translate documents, files, etc. However, when this API is down the interface also breaks! This task requires fixing this breakage. See the associated [https://github.com/apertium/apertium-html-tools/issues/207 GitHub issue] for more details and discussion. |
|||
|mentors=sushain, Unhammer, shardulc |
|||
|beginner=no |
|||
}} |
|||
{{Taskidea |
|||
|type=code, design |
|||
|title=Refine the apertium.org ([[html-tools]]) dictionary interface |
|||
|tags=javascript, html, css, web |
|||
|description=Significant progress has been made towards providing a dictionary-style interface within [https://github.com/apertium/apertium-html-tools html-tools]. This task requires refining the existing [https://github.com/goavki/apertium-html-tools/pull/184 PR] by de-conflicting it with master and resolving the interface concerns discussed [https://github.com/goavki/apertium-html-tools/pull/184#issuecomment-323597780 here]. See the associated [https://github.com/apertium/apertium-html-tools/issues/105 GitHub issue] for more details and discussion. |
|||
|mentors=sushain, shardulc |
|||
|beginner=no |
|||
}} |
|||
{{Taskidea |
|||
|type=code, design |
|||
|title=Chained translation path interface for apertium.org ([[html-tools]]) |
|||
|tags=javascript, html, css, web |
|||
|description=Significant progress has been made towards providing an interface for selecting a path for chained (multi-step) translation in [https://github.com/apertium/apertium-html-tools html-tools]. The code is currently in a [https://github.com/apertium/apertium-html-tools/tree/issue-91-translation-chain-graph branch] that needs to be de-conflicted with master, refined to accommodate changes in the main interface since the code was written, tested, and finally merged. |
|||
|mentors=sushain, shardulc |
|||
|beginner=no |
|||
}} |
|||
{{Taskidea |
|||
|type=code, documentation |
|||
|title=Add a Swagger/OpenAPI specification for apertium.org's API ([[APy]]) |
|||
|description=There's been some work towards this already but it's outdated. This task requires updating it and for bonus points ensuring at build time that all paths are minimally present in the Swagger spec. Furthermore, it would be awesome if a simple HTTP page could be made that loads the spec (e.g. [https://github.com/apertium/apertium-stats-service/blob/master/api.html this page for another service]). See the associated [https://github.com/apertium/apertium-apy/issues/12 GitHub issue] for more details and discussion. |
|||
|tags=python, api, http, openapi, swagger |
|||
|mentors=sushain, xavivars |
|||
|beginner=no |
|||
}} |
|||
{{Taskidea |
|||
|type=code |
|||
|title=Accept ISO-639-1 codes in [https://github.com/apertium/apertium-stats-service apertium-stats-service] |
|||
|description=This task requires making /en-es, /en-spa, etc. work the same as /eng-spa and then adding tests that verify the behavior. See the associated [https://github.com/apertium/apertium-stats-service/issues/32 GitHub issue] for more details and discussion. |
|||
|tags=rust, api, http |
|||
|mentors=sushain |
|||
|beginner=no |
|||
}} |
|||
{{Taskidea |
|||
|type=code |
|||
|title=Support listing of packages in [https://github.com/apertium/apertium-stats-service apertium-stats-service] |
|||
|description=This information is something that is useful in a lot of different places, for example, our source browser. By having the stats service implement it, everyone doesn't have to write the same code in different languages and the information gets cached. For GCI task credit, the last commit info is not required (another task can be made for that feature). This task requires implementing the initial feature, adding some basic tests and tweaking the swagger spec. One or more of those tasks can be broken into other task(s) if the mentor sees fit and the student requests it. See the associated [https://github.com/apertium/apertium-stats-service/issues/37 GitHub issue] for more details and discussion. |
|||
|tags=rust, api, http, rest |
|||
|mentors=sushain |
|||
|beginner=no |
|||
}} |
|||
{{Taskidea |
|||
|type=code |
|||
|title=Add configurable timeout support to [https://github.com/apertium/apertium-stats-service apertium-stats-service] |
|||
|description=Currently, a stats request has no clear timeout and can take ~forever if the async option is not present. This task requires adding a timeout option, adding tests, and then tweaking the swagger spec. See the associated [https://github.com/apertium/apertium-stats-service/issues/14 GitHub issue] for more details and discussion. |
|||
|tags=rust, api, http, rest |
|||
|mentors=sushain |
|||
|beginner=no |
|||
}} |
|||
{{Taskidea |
|||
|type=code |
|||
|title=Surface errors to the client in [https://github.com/apertium/apertium-stats-service apertium-stats-service] |
|||
|description=Right now, errors are logged and swallowed. The client never knows what happened. This task requires implementing the feature, adding some basic tests and tweaking the swagger spec. One or more of those tasks can be broken into other task(s) if the mentor sees fit and the student requests it. See the associated [https://github.com/apertium/apertium-stats-service/issues/30 GitHub issue] for more details and discussion. |
|||
|tags=rust, api, http, rest |
|||
|mentors=sushain |
|||
|beginner=no |
|||
}} |
|||
{{Taskidea |
|||
|type=code |
|||
|title=Include Git SHA in [https://github.com/apertium/apertium-stats-service apertium-stats-service]'s file info |
|||
|description=Right now, only the SVN revision number is provided but that doesn't help with mapping back on to a SHA in Git/GH for the client. This task requires implementing the feature, adding some basic tests and tweaking the swagger spec. See the associated [https://github.com/apertium/apertium-stats-service/issues/41 GitHub issue] for more details and discussion. |
|||
|tags=rust, api, http, rest, git, svn |
|||
|mentors=sushain |
|||
|beginner=no |
|||
}} |
|||
{{Taskidea |
|||
|type=documentation |
|||
|mentors=Flammie, JNW, anakuz, Josh, shardulc |
|||
|tags=xml, dictionaries, screencast |
|||
|title=Create a screencast on how to add new entries to an apertium dictionary |
|||
|description=Screencasts are a popular and accessible way to create a tutorial. Show a narrated start-to-end workflow of adding new words to a dictionary, compiling it, and then using it to translate. This task is probably easiest after completing an "Add 200 new entries to the bidix of language pair" task. |
|||
|multi=no |
|||
|beginner=yes |
|||
}} |
|||
{{Taskidea |
|||
|type=documentation |
|||
|mentors=Flammie, Josh, shardulc |
|||
|tags=disambiguation, screencast |
|||
|title=Create a screencast on how to disambiguate tokens of text |
|||
|description=Screencasts are a popular and accessible way to create a tutorial. Show a narrated start-to-end workflow of disambiguating the words in a text. This task is probably easiest after completing a "Disambiguate 500 tokens" task. |
|||
|multi=no |
|||
}} |
|||
{{Taskidea |
|||
|type=quality |
|||
|mentors=Flammie, wei2912 |
|||
|tags=test, python, bash |
|||
|title=Create automated (travis-ci) test to ensure naïve coverage |
|||
|description=Dictionaries can be tested for [[Coverage]]. The idea is to write a test that runs a frequency word list through the analyser to compute the dictionary's naïve coverage, and then integrate it into the Makefile's check target for Travis to use. A rough sketch of the counting step is given below this task. |
|||
|multi=no |
|||
|beginner=no |
|||
}} |
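One way to measure naïve coverage for the task above is to run the frequency word list through the compiled analyser with lt-proc and count how many tokens receive at least one analysis (lt-proc marks unknown words with a *). A rough sketch of that counting step; the file names, the assumed "count word" line format, and the 90% threshold are placeholders, and a real test would batch the lt-proc calls rather than running one per word:

<pre>
#!/usr/bin/env python3
# Sketch of a naive-coverage check: feed a frequency word list through lt-proc
# and count how many tokens get at least one analysis.
import subprocess
import sys

def coverage(freq_list='hitparade.txt', analyser='lg.automorf.bin'):
    known = total = 0
    with open(freq_list, encoding='utf-8') as f:
        for line in f:
            parts = line.split()
            if len(parts) < 2:
                continue
            count, word = int(parts[0]), parts[1]   # assumed "count word" lines
            analysis = subprocess.run(['lt-proc', analyser], input=word,
                                      capture_output=True, text=True).stdout
            total += count
            if '/*' not in analysis:                # lt-proc marks unknowns with '*'
                known += count
    return known / total if total else 0.0

if __name__ == '__main__':
    cov = coverage()
    print('naive coverage: {:.2%}'.format(cov))
    sys.exit(0 if cov >= 0.9 else 1)                # placeholder threshold for `make check`
</pre>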
|||
{{Taskidea |
|||
|type=code |
|||
|mentors=JNW, sushain, wei2912, shardulc |
|||
|tags=python, git, json |
|||
|title=Scrape Apertium repo information into json |
|||
|description=Write a script to scrape information about Apertium's translation pairs as they exist in GitHub repositories into a json file like [https://github.com/apertium/pairviewer/blob/master/pairs.json.txt this one]. A rough starting point is sketched below this task. |
|||
|multi=no |
|||
|beginner=no |
|||
}} |
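A rough starting point: GitHub's REST API can list the organisation's repositories, and pair repositories follow the apertium-xxx-yyy naming pattern. The output fields below are only a guess at what pairs.json.txt needs, so check the real file for the expected structure:

<pre>
#!/usr/bin/env python3
# Sketch: list Apertium pair repositories via the GitHub API and dump them to JSON.
# Unauthenticated requests are rate-limited; pass a token for real use.
import json
import re
import urllib.request

def fetch_repos(org='apertium'):
    repos, page = [], 1
    while True:
        url = 'https://api.github.com/orgs/{}/repos?per_page=100&page={}'.format(org, page)
        with urllib.request.urlopen(url) as resp:
            batch = json.load(resp)
        if not batch:
            return repos
        repos.extend(batch)
        page += 1

def main():
    pair_re = re.compile(r'^apertium-([a-z_]+)-([a-z_]+)$')
    pairs = []
    for repo in fetch_repos():
        m = pair_re.match(repo['name'])
        if m:
            # Output fields are illustrative; match them to pairs.json.txt.
            pairs.append({'lg1': m.group(1), 'lg2': m.group(2), 'repo': repo['html_url']})
    with open('pairs.json', 'w', encoding='utf-8') as f:
        json.dump(pairs, f, indent=2)

if __name__ == '__main__':
    main()
</pre>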
|||
{{Taskidea |
|||
|type=code, design |
|||
|mentors=JNW, sushain, jjjppp |
|||
|tags=d3, javascript |
|||
|title=Integrate globe viewer into language family visualiser interface |
|||
|description=The [https://github.com/apertium/family-visualizations family visualiser interface] has four info boxes when a language is clicked on, and one of those boxes is empty. The [https://github.com/jonorthwash/Apertium-Global-PairViewer globe viewer] provides a globe visualisation of languages that we can translate a given language to and from. This task is to integrate the globe viewer for a specific language into the fourth box in the family visualiser. There is an [https://github.com/jonorthwash/Apertium-Global-PairViewer/issues/32 associated GitHub issue]. |
|||
|multi=no |
|||
|beginner=no |
|||
}} |
|||
{{Taskidea |
|||
|type=documentation |
|||
|mentors=JNW, sushain, wei2912, Josh, xavivars, shardulc |
|||
|tags=wiki, github, svn |
|||
|title=Fix (or document the blockers for) five mentions of SVN on the Apertium wiki |
|||
|description=Apertium recently [https://github.com/apertium/apertium-on-github migrated] to GitHub from SVN. There are unfortunately still a lot of pages on the Wiki whose references to SVN URLs, and to SVN in general, need updating. This task requires finding five such pages and either outright fixing them or documenting the difficulty involved in fixing them. Note that [[:Category:GitHub_migration_updates]] lists articles currently marked as needing migration, but it is not exhaustive. |
|||
|multi=5 |
|||
|beginner=no |
|||
}} |
|||
{{Taskidea |
|||
|type=research |
|||
|mentors=JNW,ftyers, Josh, fotonzade, anakuz |
|||
|tags=wiki, languages, grammar |
|||
|title=Document resources for a language |
|||
|description=Document resources for a language without resources already documented on the Apertium wiki. [[Task_ideas_for_Google_Code-in/Documentation_of_resources|read more]]... |
|||
|multi=10 |
|||
|beginner=yes |
|||
}} |
|||
{{Taskidea |
|||
|type=research |
|||
|mentors=JNW, Josh |
|||
|tags=ocr |
|||
|title=tesseract (OCR) interface for apertium languages |
|||
|description=Find out what it would take to integrate apertium or voikkospell into tesseract OCR (image to text). Thoroughly document the available options on the wiki. |
|||
|multi=no |
|||
|beginner= |
|||
}} |
|||
{{Taskidea |
|||
|type=research |
|||
|mentors=JNW,ftyers, fotonzade |
|||
|title=Create a UD-Apertium morphology mapping |
|||
|description=Choose a language that has a Universal Dependencies treebank and tabulate a potential set of Apertium morph labels based on the (universal) UD morph labels. See Apertium's [[list of symbols]] and [http://universaldependencies.org/ UD]'s POS and feature tags for the labels. |
|||
|tags=morphology, ud, dependencies |
|||
|beginner= |
|||
|multi=5 |
|||
}} |
|||
{{Taskidea |
|||
|type=research |
|||
|mentors=JNW, ftyers, fotonzade |
|||
|title=Create an Apertium-UD morphology mapping |
|||
|description=Choose a language that has an Apertium morphological analyser and adapt it to convert the morphology to UD morphology |
|||
|tags=morphology, ud, dependencies |
|||
|beginner= |
|||
|multi=5 |
|||
}} |
|||
{{Taskidea |
|||
|type=code,design |
|||
|mentors=JNW,ftyers |
|||
|title=Paradigm generator browser interface |
|||
|description=Write a standalone webpage that makes queries (through JavaScript) to an [[apertium-apy]] server to fill in morphological forms based on morphological tags that are hidden throughout the body of the page. For example, say you have the verb "say" and some tags like inf, past, pres.p3.sg: these forms would get filled in as "say", "said", "says". A rough sketch of the kind of query involved is given below this task. |
|||
|tags=javascript, html, apy |
|||
}} |
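The page would presumably sit on top of APy's generation endpoint. As a back-of-the-envelope sketch of the query involved, shown in Python for brevity rather than the in-page JavaScript the task asks for; the /generate endpoint name, its lang/q parameters, the response shape, and the localhost:2737 address are all assumptions to verify against your APy instance:

<pre>
#!/usr/bin/env python3
# Sketch: ask an APy server to generate a surface form from a lemma plus tags.
# Endpoint name, parameters, and port are assumptions; check your APy instance.
import json
import urllib.parse
import urllib.request

APY = 'http://localhost:2737'

def generate(lang, lemma, tags):
    unit = '^{}<{}>$'.format(lemma, '><'.join(tags))   # e.g. ^say<vblex><past>$
    query = urllib.parse.urlencode({'lang': lang, 'q': unit})
    with urllib.request.urlopen('{}/generate?{}'.format(APY, query)) as resp:
        return json.load(resp)

if __name__ == '__main__':
    print(generate('eng', 'say', ['vblex', 'past']))
</pre>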
|||
{{Taskidea |
|||
|type=Research |
|||
|mentors=anakuz, fotonzade |
|||
|title=Syntactic trees |
|||
|description=Pick a text of ~200 words and annotate it syntactically, as for a Universal Dependencies treebank. UD Annotatrix can be used for visualisation. Consult with the mentor about the choice of language. |
|||
|tags=UD, trees, markup |
|||
}} |
|||
{{Taskidea |
|||
|type=Code |
|||
|mentors=xavivars |
|||
|title=Improve apertium-tagger's man page |
|||
|description=The man page for apertium-tagger is outdated and does not mention some options that --help does, like -x. They should be synced. See https://github.com/apertium/apertium/issues/10 |
|||
|tags=C++ |
|||
}} |
|||
{{Taskidea |
|||
|type=Code |
|||
|mentors=ftyers |
|||
|title=Port pragmatic segmenter core code to Python |
|||
|description=Pragmatic segmenter (https://github.com/diasks2/pragmatic_segmenter) is a sentence segmenter written in Ruby. The objective of this task is to port the core code to Python. |
|||
|tags=Python, Ruby |
|||
}} |
|||
{{Taskidea |
|||
|type=Code |
|||
|mentors=ftyers |
|||
|title=Port a language model from pragmatic segmenter to Python |
|||
|description=Pragmatic segmenter (https://github.com/diasks2/pragmatic_segmenter) is a sentence segmenter written in Ruby. The objective of this task is to port a given language (e.g. Armenian) to Python. |
|||
|tags=Python, Ruby |
|||
|multi=21 |
|||
}} |
|||
{{Taskidea |
|||
|type=Code |
|||
|mentors=ftyers |
|||
|title=Write a language model for pragmatic segmenter in Python or Ruby |
|||
|description=Pragmatic segmenter (https://github.com/diasks2/pragmatic_segmenter) is a sentence segmenter written in Ruby. The objective of this task is to write a language model for a new language. |
|||
|tags=Python, Ruby |
|||
|multi=21 |
|||
}} |
|||
{{Taskidea |
|||
|type=Code,Documentation |
|||
|mentors=ftyers, shardulc |
|||
|title=Write a program to add a dev branch for each of the released language pairs |
|||
|description=At the moment, Apertium language pairs are generally developed in the master branch. We would like to move to having a dev/master split, but we need to make a new dev branch for each pair, and also write documentation explaining that people should send PRs to dev or commit to dev. |
|||
|tags=python |
|||
}} |
|||
{{Taskidea |
|||
|type=code |
|||
|mentors=JNW |
|||
|title=Use apertium-init to bootstrap a new language pair |
|||
|description=Use the [[Apertium-init]] script to bootstrap a new translation pair between two languages which have monolingual modules already in Apertium. To see if a translation pair has already been made, search our repositories on [https://github.com/apertium/ github], and especially ask on IRC. Add 100 common stems to the dictionary. Your submission should be in the form of a repository on github that we can fork to the Apertium organisation. |
|||
|tags=languages, bootstrap, dictionaries, translators |
|||
|beginner=yes |
|||
|multi=25 |
|||
}} |
|||
{{Taskidea |
|||
|type=code |
|||
|mentors=JNW |
|||
|title=Use apertium-init to bootstrap a new language module |
|||
|description=Use the [[Apertium-init]] script to bootstrap a new language module that doesn't currently exist in Apertium. To see if a language is available, search our repositories on [https://github.com/apertium/ github], and especially ask on IRC. Add enough stems and morphology to the module so that it analyses and generates at least 100 correct forms. Your submission should be in the form of a repository on github that we can fork to the Apertium organisation. [[Task ideas for Google Code-in/Add words from frequency list|Read more about adding stems...]] |
|||
|tags=languages, bootstrap, dictionaries |
|||
|beginner=yes |
|||
|multi=25 |
|||
}} |
|||
{{Taskidea |
|||
|type=code |
|||
|mentors=JNW, mlforcada |
|||
|title=Add a transfer rule to an existing translation pair |
|||
|description=Add a transfer rule to an existing translation pair that fixes an error in translation. Document the rule on the [http://wiki.apertium.org/ Apertium wiki] by adding a [[regression testing|regression tests]] page similar to [[English_and_Portuguese/Regression_tests]] or [[Icelandic_and_English/Regression_tests]]. Check your code into Apertium's codebase. [[Task ideas for Google Code-in/Add transfer rule|Read more...]] |
|||
|tags=languages, bootstrap, transfer |
|||
|multi=25 |
|||
|dup=5 |
|||
}} |
|||
{{Taskidea |
|||
|type=code |
|||
|mentors=JNW |
|||
|title=Write 10 lexical selection rules for an existing translation pair |
|||
|description=Add 10 lexical selection rules to an existing translation pair. Submit your work as a github pull request to that pair. [[Task ideas for Google Code-in/Add lexical-select rules|Read more...]] |
|||
|tags=languages, bootstrap, lexical selection, translators |
|||
|multi=25 |
|||
|dup=5 |
|||
}} |
|||
{{Taskidea |
|||
|type=code |
|||
|mentors=JNW |
|||
|title=Write 10 constraint grammar rules for an existing language module |
|||
|description=Add 10 constraint grammar rules to an existing language module for a language that you know. Submit your work as a github pull request to that module. [[Task ideas for Google Code-in/Add constraint-grammar rules|Read more...]] |
|||
|tags=languages, bootstrap, constraint grammar |
|||
|multi=25 |
|||
|dup=5 |
|||
}} |
|||
</table> |
|||
== Checklist (2018) == |
|||
''Please remove things from this list as the tasks are added'' |
|||
* [[Pairviewer]] needs both tasks and issues in [https://github.com/apertium/pairviewer GitHub]. Some historical tasks are below. |
|||
* [https://github.com/apertium/family-visualizations/issues/3 Family-visualizations] needs both tasks and issues in GitHub. Some historical tasks are below. |
|||
* [[Annotatrix]] needs [https://github.com/jonorthwash/ud-annotatrix/issues issues] to be converted into tasks. Some historical tasks are below. |
|||
* The task on joining the channel with an IRC client will require [https://github.com/apertium/phenny/issues/437] to be completed. |
|||
* The task on adopting a Wiki page will require a list of suitable pages to be compiled. |
|||
==Task ideas (2017)== |
|||
<table class="sortable wikitable"> |
|||
<!-- THE TASKS NEED TO BE HIDDEN FOR NOW, |
|||
but feel free to remove style="display: none" to preview changes to this page. |
|||
Just remember to put it back before saving |
|||
JNW 2017-10-30 |
|||
--> |
|||
<tr><th>type</th><th>title</th><th>description</th><th>tags</th><th>mentors</th><th>bgnr?</th><th>multi?</th><th>duplicates</th></tr> |
|||
{{Taskidea |
|||
|type=research |
|||
|title=Document resources for a language |
|||
|description=Document resources for a language without resources already documented on the Apertium wiki. [[Task ideas for Google Code-in/Documentation of resources|read more...]] |
|||
|tags=wiki, languages |
|||
|mentors=Jonathan, Vin, Xavivars, Marc Riera |
|||
|multi=40 |
|||
|beginner=yes |
|||
}}{{Taskidea |
|||
|type=research |
|||
|title=Write a contrastive grammar |
|||
|description=Document 6 differences between two (preferably related) languages and where they would need to be addressed in the [[Apertium pipeline]] (morph analysis, transfer, etc). Use a grammar book/resource for inspiration. Each difference should have no fewer than 3 examples. Put your work on the Apertium wiki under [[Language1_and_Language2/Contrastive_grammar]]. See [[Farsi_and_English/Pending_tests]] for an example of a contrastive grammar that a previous GCI student made. |
|||
|mentors=Vin, Jonathan, Fran, mlforcada |
|||
|tags=wiki, languages |
|||
|beginner=yes |
|||
|multi=40 |
|||
}} |
|||
{{Taskidea|type=interface|mentors=Fran, Masha, Jonathan |
|||
|tags=annotation, annotatrix |
|||
|title=Nicely laid out interface for ud-annotatrix |
|||
|description=Design an HTML layout for the annotatrix tool that makes best use of the space and functions nicely at different screen resolutions. |
|||
}} |
|||
{{Taskidea|type=interface|mentors=Fran, Masha, Jonathan, Vin |
|||
|tags=annotation, annotatrix, css |
|||
|title=Come up with a CSS style for annotatrix |
|||
|description= |
|||
}} |
|||
{{Taskidea|type=code|mentors=Fran, Masha, Jonathan, Vin |
|||
|tags=annotation, annotatrix, javascript, dependencies |
|||
|title=SDparse to CoNLL-U converter in JavaScript |
|||
|description=SDparse is a format for describing dependency trees; entries look like relation(head, dependent). CoNLL-U is another format for describing dependency trees. Make a converter between the two formats. You will probably need to learn more about the specifics of these formats; a rough sketch of the mapping is given below this task. The GitHub issue is [https://github.com/jonorthwash/ud-annotatrix/issues/88 here]. |
|||
}} |
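The converter itself belongs in JavaScript inside ud-annotatrix, but the mapping can be sketched compactly. Assuming SDparse lines of the shape relation(head-2, dependent-1), something like the following (shown in Python purely to illustrate the idea) fills in the HEAD and DEPREL columns of CoNLL-U and leaves the other columns as underscores:

<pre>
import re

# Sketch of the SDparse -> CoNLL-U mapping (illustrative only; the real task
# wants this in JavaScript inside ud-annotatrix).
SD = """nsubj(ran-2, dog-1)
punct(ran-2, .-3)"""

rows = {}   # token index -> [form, head, deprel]
for line in SD.splitlines():
    m = re.match(r'(\w+)\((.+?)-(\d+),\s*(.+?)-(\d+)\)', line)
    if not m:
        continue
    rel, head_form, head_i, dep_form, dep_i = m.groups()
    rows.setdefault(int(head_i), [head_form, 0, 'root'])   # heads default to root
    rows[int(dep_i)] = [dep_form, int(head_i), rel]

for i in sorted(rows):
    form, head, rel = rows[i]
    # Columns: ID FORM LEMMA UPOS XPOS FEATS HEAD DEPREL DEPS MISC
    print('\t'.join([str(i), form, '_', '_', '_', '_', str(head), rel, '_', '_']))
</pre>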
|||
{{Taskidea|type=quality|mentors=Fran, Masha, Vin |
|||
|tags=annotation, annotatrix |
|||
|title=Write a test for the format converters in annotatrix |
|||
|description= |
|||
|multi=yes |
|||
}} |
|||
{{Taskidea|type=code|mentors=Fran, Masha, Jonathan |
|||
|tags=annotation, annotatrix, javascript |
|||
|title=Write a function to detect invalid trees in the UD annotatrix software and advise the user about it |
|||
|description=It is possible to detect invalid trees (such as those that have cycles). We would like to write a function to detect those kinds of trees and advise the user. The GitHub issue is [https://github.com/jonorthwash/ud-annotatrix/issues/96 here]. |
|||
}} |
|||
{{Taskidea|type=documentation|mentors=Fran, Masha, Jonathan, Vin |
|||
|tags=annotation, annotatrix, dependencies |
|||
|title=Write a tutorial on how to use annotatrix to annotate a dependency tree |
|||
|description=Give step-by-step instructions for annotating a dependency tree with Annotatrix. Make sure you include all the possibilities in the app, for example tokenisation options. |
|||
}} |
|||
{{Taskidea|type=documentation|mentors=Fran, Masha, Vin |
|||
|tags=annotation, annotatrix, video, dependencies |
|||
|title=Make a video tutorial on annotating a dependency tree using the [https://github.com/jonorthwash/ud-annotatrix/ UD annotatrix software]. |
|||
|description=Give step-by-step instructions for annotating a dependency tree with Annotatrix. Make sure you include all the possibilities available in the app, for example tokenisation options. |
|||
}} |
|||
{{Taskidea|type=quality|mentors=Masha|tags=xml, dictionaries, svn |
|||
|title=Merge two versions of the Polish morphological dictionary |
|||
|description=At some point in the past, someone deleted a lot of entries from the Polish morphological dictionary, and unfortunately we didn't notice at the time and have since added entries to it. The objective of this task is to take the last version before the mass deletion and the current version and merge them. To get the list of changes: |
 $ svn diff --old apertium-pol.pol.dix@73196 --new apertium-pol.pol.dix@73199 > changes.diff |
|||
}} |
|||
{{Taskidea|type=quality|mentors=fotonzade, Jonathan, Xavivars, Marc Riera, mlforcada |
|||
|tags=xml, dictionaries, svn |
|||
|title=Add 200 new entries to the bidix of language pair %AAA%-%BBB% |
|||
|description=Our translation systems require large lexicons so as to provide production-quality coverage of any input data. This task requires the student to add 200 new words to a bidirectional dictionary. |
|||
|multi=yes |
|||
|bgnr=yes |
|||
}} |
|||
{{Taskidea|type=quality|mentors=fotonzade, Jonathan, Xavivars, Marc Riera, mlforcada |
|||
|tags=xml, dictionaries, svn |
|||
|title=Add 500 new entries to the bidix of language pair %AAA%-%BBB% |
|||
|description=Our translation systems require large lexicons so as to provide production-quality coverage of any input data. This task requires the student to add 500 new words to a bidirectional dictionary. |
|||
|multi=yes |
|||
}} |
|||
{{Taskidea|type=quality|mentors=fotonzade, Xavivars, Marc Riera, mlforcada |
|||
|tags=disambiguation, svn |
|||
|title=Disambiguate 500 tokens of text in %AAA% |
|||
|description=Run some text through a morphological analyser and disambiguate the output. Contact the mentor beforehand to approve the choice of language and text. |
|||
|multi=yes |
|||
}} |
|||
{{Taskidea|type=code|mentors=Fran, Katya|tags=morphology, languages, finite-state, fst |
|||
|title=Use apertium-init to start a new morphological analyser for %AAA% |
|||
|description=Use apertium-init to start a new morphological analyser (for a language we don't already have, e.g. %AAA%) and add 100 words. |
|||
|multi=yes |
|||
}} |
|||
{{Taskidea |
|||
|type=documentation |
|||
|mentors=Jonathan, Flammie |
|||
|title=add comments to .dix file symbol definitions |
|||
|tags=dix |
|||
}} |
|||
{{Taskidea |
|||
|type=documentation |
|||
|mentors=Jonathan |
|||
|title=find symbols that aren't on the list of symbols page |
|||
|description=Go through symbol definitions in Apertium dictionaries in svn (.lexc and .dix format), and document any symbols you don't find on the [[List of symbols]] page. This task is fulfilled by adding at least one class of related symbols (e.g., xyz_*) or one major symbol (e.g., abc), along with notes about what it means. |
|||
|tags=wiki,lexc,dix |
|||
}} |
|||
{{Taskidea |
|||
|type=code |
|||
|title=conllu parser and searching |
|||
|description=Write a script (preferably in python3) that will parse files in CoNLL-U format and perform basic searches, such as "find a node that has an nsubj relation to another node that has a noun POS" or "find all nodes with a cop label and a past feature". A minimal parsing-and-searching sketch is given below this task. |
|||
|tags=python, dependencies |
|||
|mentors=Jonathan, Fran, Wei En, Anna |
|||
}} |
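A minimal sketch of what the parsing half could look like, assuming the standard ten-column CoNLL-U layout; the two search helpers are only examples of the kind of queries the task asks for, and example.conllu is a hypothetical input file:

<pre>
#!/usr/bin/env python3
# Minimal CoNLL-U parser plus two example searches; a real solution would want
# a nicer query interface and proper error handling.
FIELDS = ['id', 'form', 'lemma', 'upos', 'xpos', 'feats', 'head', 'deprel', 'deps', 'misc']

def parse(path):
    sentences, current = [], []
    with open(path, encoding='utf-8') as f:
        for line in f:
            line = line.rstrip('\n')
            if not line:
                if current:
                    sentences.append(current)
                    current = []
            elif not line.startswith('#'):
                current.append(dict(zip(FIELDS, line.split('\t'))))
    if current:
        sentences.append(current)
    return sentences

def with_deprel(sentences, rel):
    return [tok for sent in sentences for tok in sent if tok['deprel'] == rel]

def with_feature(sentences, key, value):
    return [tok for sent in sentences for tok in sent
            if '{}={}'.format(key, value) in tok['feats'].split('|')]

if __name__ == '__main__':
    sents = parse('example.conllu')          # hypothetical input file
    print(len(with_deprel(sents, 'cop')), 'cop tokens')
    print(len(with_feature(sents, 'Tense', 'Past')), 'past-tense tokens')
</pre>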
|||
{{Taskidea |
|||
|type=code |
|||
|title=group and count possible lemmas output by guesser |
|||
|mentors=Jonathan, Fran, Wei En |
|||
|description=Currently a "guesser" version of Apertium transducers can output a list of possible analyses for unknown forms. Develop a new pipleine, preferably with shell scripts or python, that uses a guesser on all unknown forms in a corpus, and takes the list of all possible analyses, and output a hit count of the most common combinations of lemma and POS tag. |
|||
|tags=guesser, transducers, shellscripts |
|||
}} |
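Assuming the guesser end of the pipeline emits standard Apertium stream output (^surface/lemma&lt;tags&gt;/...$) on standard output, the counting step might look like the sketch below; the guessing pipeline that produces that stream and the corpus handling are left out:

<pre>
#!/usr/bin/env python3
# Sketch: count (lemma, first tag) combinations in Apertium stream output read
# from stdin, e.g. the guesser's analyses of unknown forms, and print the most common.
import re
import sys
from collections import Counter

counts = Counter()
for unit in re.findall(r'\^(.*?)\$', sys.stdin.read()):
    readings = unit.split('/')[1:]             # drop the surface form
    for reading in readings:
        m = re.match(r'([^<]+)<([^>]+)>', reading)
        if m:
            counts[(m.group(1), m.group(2))] += 1

for (lemma, pos), n in counts.most_common(20):
    print('{}\t{}\t{}'.format(n, lemma, pos))
</pre>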
|||
{{Taskidea |
|||
|type=code |
|||
|title=vim mode/tools for annotating dependency corpora in CG3 format |
|||
|mentors=Jonathan, Fran |
|||
|description=includes formatting, syntax highlighting, navigation, adding/removing nodes, updating node numbers, etc. |
|||
|tags=vim, dependencies, CG3 |
|||
}} |
|||
{{Taskidea |
|||
|type=code |
|||
|title=vim mode/tools for annotating dependency corpora in CoNLL-U format |
|||
|mentors=Jonathan, Fran |
|||
|description=includes formatting, syntax highlighting, navigation, adding/removing nodes, updating node numbers, etc. |
|||
|tags=vim, dependencies, conllu |
|||
}}{{Taskidea |
|||
|type=quality |
|||
|title=figure out one-to-many bug in the [[lsx module]] |
|||
|mentors=Jonathan, Fran, Wei En, Irene |
|||
|description=There is a bug in the [[lsx module]] referred to as the [http://wiki.apertium.org/wiki/Lsx_module#The_one-to-many_bug one-to-many bug] because lsx-proc will not convert one form to many given an appropriately compiled transducer. Your job is to figure out why this happens and fix it. |
|||
|tags=C++, transducers, lsx |
|||
}}{{Taskidea |
|||
|type=code |
|||
|title=add an option for reverse compiling to the [[lsx module]] |
|||
|mentors=Jonathan, Fran, Wei En, Irene, Xavivars |
|||
|description=this should be simple as it can just leverage the existing lttoolbox options for left-right / right-left compiling |
|||
|tags=C++, transducers, lsx |
|||
}}{{Taskidea |
|||
|type=quality, code |
|||
|title=clean up lsx-comp |
|||
|mentors=Jonathan, Fran, Wei En, Irene, Xavivars |
|||
|description=remove extraneous functions from lsx-comp and clean up the code |
|||
|tags=C++, transducers, lsx |
|||
}}{{Taskidea |
|||
|type=quality, code |
|||
|title=clean up lsx-proc |
|||
|mentors=Jonathan, Fran, Wei En, Irene, Xavivars |
|||
|description=remove extraneous functions from lsx-proc and clean up the code |
|||
|tags=C++, transducers, lsx |
|||
}}{{Taskidea |
|||
|type=documentation |
|||
|title=document usage of the lsx module |
|||
|mentors= Irene |
|||
|description= Document which language pairs have included the lsx module in their packages, which have beta-tested the lsx module, and which are good candidates for including support for lsx. Add them to [[Lsx_module/supported_languages|this wiki page]]. |
|||
|tags=C++, transducers, lsx |
|||
|beginner=yes |
|||
}}{{Taskidea |
|||
|type=quality |
|||
|title=beta testing the lsx-module |
|||
|mentors=Jonathan, Fran, Wei En, Irene |
|||
|description= [[Lsx_module#Creating_the_lsx-dictionary|Create an lsx dictionary]] for any relevant and existing language pair that doesn't yet support it, adding 10-30 entries to it. Thoroughly test to make sure the output is as expected. Report bugs/non-supported features and add them to [[Lsx_module#Future_work|future work]]. Document your tested language pair by listing it under [[Lsx_module#Beta_testing]] and on [[Lsx_module/supported_languages|this wiki page]]. |
|||
|tags=C++, transducers, lsx |
|||
|multi=yes |
|||
|dup=yes |
|||
}}{{Taskidea |
|||
|type=code |
|||
|title=fix an lsx bug / add an lsx feature |
|||
|mentors=Jonathan, Fran, Wei En, Irene |
|||
|description= if you've done the above task (beta testing the lsx-module) and discovered any bugs or unsupported features, fix them. |
|||
|tags=C++, transducers, lsx |
|||
|multi=yes |
|||
|dup=yes |
|||
}}{{Taskidea |
|||
|type=code |
|||
|title=script to test coverage over wikipedia corpus |
|||
|mentors=Jonathan, Wei En, Shardul |
|||
|description=Write a script (in python or ruby) that in one mode checks out a specified language module to a given directory, compiles it (or updates it if it already exists), and then gets the most recent nightly Wikipedia archive for that language and runs coverage over it (as much in RAM as possible). In another mode, it compiles the language pair in a docker instance that it then disposes of after successfully running coverage. Scripts exist in Apertium already for finding where a wikipedia is, extracting a wikipedia archive into a text file, and running coverage. |
|||
|tags=python, ruby, wikipedia |
|||
}}{{Taskidea |
|||
|type=quality,code |
|||
|tag=issues |
|||
|title=fix any open ticket |
|||
|description=Fix any open ticket in any of our issue trackers: [https://sourceforge.net/p/apertium/tickets/ main], [https://github.com/goavki/apertium-html-tools/issues html-tools], [https://github.com/goavki/phenny/issues begiak]. When you claim this task, let your mentor know which issue you plan to work on. |
|||
|mentors=Jonathan, Wei En, Sushain, Shardul |
|||
|multi=25 |
|||
|dup=10 |
|||
}} |
|||
{{Taskidea |
|||
|type=quality,code |
|||
|title=make html-tools do better on Chrome's audit |
|||
|tags=javascript, html, css, web |
|||
|description=Currently, apertium.org and generally any [https://github.com/goavki/apertium-html-tools html-tools] installation fails lots of Chrome audit tests. As many as possible should be fixed. Ones that require substantial work should be filed as tickets and measures should be taken to prevent problems from reappearing (e.g. a test or linter rule). More information is available in the issue tracker ([https://github.com/goavki/apertium-html-tools/issues/201 #201]) and asynchronous discussion should occur there. |
|||
|mentors=Jonathan, Sushain, Shardul |
|||
}} |
|||
{{Taskidea |
|||
|type=code,interface |
|||
|title=upgrade html-tools to Bootstrap 4 |
|||
|tags=javascript, html, css, web |
|||
|description=Currently, [https://github.com/goavki/apertium-html-tools html-tools] uses Bootstrap 3.x. Bootstrap 4 beta is out and we can upgrade (hopefully)! If an upgrade is not possible, you should document why it's not and ensure that it's easy to upgrade when the blockers are removed. More information may be available in the issue tracker ([https://github.com/goavki/apertium-html-tools/issues/200 #200]) and asynchronous discussion should occur there. |
|||
|mentors=Sushain, Shardul |
|||
|bgnr=yes |
|||
}} |
|||
{{Taskidea |
|||
|type=code,interface |
|||
|title=display API endpoint on sandbox |
|||
|tags=javascript, html, css, web |
|||
|description=Currently, [https://github.com/goavki/apertium-html-tools html-tools] has an "APy" mode where users can easily test out the API. However, it doesn't display the actual URL of the API endpoint and it would be nice to show that to the user. More information is available in the issue tracker ([https://github.com/goavki/apertium-html-tools/issues/147 #147]) and asynchronous discussion should occur there. |
|||
|mentors=Sushain, Jonathan, Shardul |
|||
|bgnr=yes |
|||
}} |
|||
{{Taskidea |
|||
|type=code,quality,research |
|||
|title=set up a testing framework for html-tools |
|||
|tags=javascript, html, css, web |
|||
|description=Currently, [https://github.com/goavki/apertium-html-tools html-tools] has no tests (sad!). This task requires researching what solutions there are for testing jQuery based web applications and putting one into place with a couple tests as a proof of concept. More information is available in the issue tracker ([https://github.com/goavki/apertium-html-tools/issues/116 #116]) and asynchronous discussion should occur there. |
|||
|mentors=Sushain, Shardul |
|||
}} |
|||
{{Taskidea |
|||
|type=code,research |
|||
|title=make html-tools automatically download translated files in Safari, IE, etc. |
|||
|tags=javascript, html, css, web |
|||
|description=Currently, [https://github.com/goavki/apertium-html-tools html-tools] is capable of translating files. However, this translation does not always result in the file immediately being download to the user on all browsers. It would be awesome if it did! This task requires researching what solutions there are, evaluating them against each other and it may result in a conclusion that it just isn't possible (yet). More information is available in the issue tracker ([https://github.com/goavki/apertium-html-tools/issues/97 #97]) and asynchronous discussion should occur there. |
|||
|mentors=Sushain, Jonathan, Unhammer, Shardul |
|||
}} |
|||
{{Taskidea |
|||
|type=code,interface |
|||
|title=make html-tools fail more gracefully when API is down |
|||
|tags=javascript, html, css, web |
|||
|description=Currently, [https://github.com/goavki/apertium-html-tools html-tools] relies on an API endpoint to translate documents, files, etc. However, when this API is down the interface also breaks! This task requires fixing this breakage. More information is available in the issue tracker ([https://github.com/goavki/apertium-html-tools/issues/207 #207]) and asynchronous discussion should occur there. |
|||
|mentors=Sushain, Jonathan, Shardul |
|||
|bgnr=yes |
|||
}} |
|||
{{Taskidea |
|||
|type=code,interface |
|||
|title=make html-tools properly align text in mixed RTL/LTR contexts |
|||
|tags=javascript, html, css, web |
|||
|description=Currently, [https://github.com/goavki/apertium-html-tools html-tools] is capable of displaying results/allowing input for RTL languages in a LTR context (e.g. we're translating Arabic in an English website). However, this doesn't always look exactly how it should look, i.e. things are not aligned correctly. More information is available in the issue tracker ([https://github.com/goavki/apertium-html-tools/issues/49 #49]) and asynchronous discussion should occur there. |
|||
|mentors=Sushain, Jonathan, Shardul |
|||
|bgnr=yes |
|||
}} |
|||
{{Taskidea |
|||
|type=code,interface |
|||
|title=de-conflict the 'make a suggestion' interface in html-tools |
|||
|tags=javascript, html, css, web |
|||
|description=There has been much demand for [https://github.com/goavki/apertium-html-tools html-tools] to support an interface for users making suggestions regarding e.g. incorrect translations (c.f. Google translate). An interface was designed for this purpose. However, since it has been a while since anyone touched it, the code now conflicts with the current master branch. This task requires de-conflicting this [https://github.com/goavki/apertium-html-tools/pull/74 branch] with master and providing screenshot/video(s) of the interface to show that it functions. More information is available in the issue tracker ([https://github.com/goavki/apertium-html-tools/issues/74 #74]) and asynchronous discussion should occur there. |
|||
|mentors=Sushain, Jonathan, Shardul |
|||
}} |
|||
{{Taskidea |
|||
|type=code,quality |
|||
|title=make html-tools capable of translating itself |
|||
|tags=javascript, html, css, web |
|||
|description=Currently, [https://github.com/goavki/apertium-html-tools html-tools] supports website translation. However, if asked to translate itself, weird things happen and the interface does not properly load. This task requires figuring out the root problem and correcting the fault. More information is available in the issue tracker ([https://github.com/goavki/apertium-html-tools/issues/203 #203]) and asynchronous discussion should occur there. |
|||
|mentors=Sushain, Jonathan, Shardul |
|||
|bgnr=yes |
|||
}} |
|||
{{Taskidea |
|||
|type=interface |
|||
|title=create mock-ups for variant support in html-tools |
|||
|tags=javascript, html, css, web |
|||
|description=Currently, [https://github.com/goavki/apertium-html-tools html-tools] supports translation using language variants. However, we do not have first-class style/interface support for it. This task requires speaking with mentors/reading existing discussion to understand the problem and then produce design mockups for a solution. More information is available in the issue tracker ([https://github.com/goavki/apertium-html-tools/issues/82 #82]) and asynchronous discussion should occur there. |
|||
|mentors=Sushain, Jonathan, Fran, Shardul, Xavivars |
|||
}} |
|||
{{Taskidea |
|||
|type=code,interface |
|||
|title=refine the html-tools dictionary interface |
|||
|tags=javascript, html, css, web |
|||
|description=Significant progress has been made towards providing a dictionary-style interface within [https://github.com/goavki/apertium-html-tools html-tools]. This task requires refining the existing [https://github.com/goavki/apertium-html-tools/pull/184 PR] by de-conflicting it with master and resolving the interface concerns discussed [https://github.com/goavki/apertium-html-tools/pull/184#issuecomment-323597780 here]. More information is available in the issue tracker ([https://github.com/goavki/apertium-html-tools/issues/105 #105]) and asynchronous discussion should occur there. |
|||
|mentors=Sushain, Jonathan, Xavivars |
|||
}} |
|||
{{Taskidea |
|||
|type=code,quality,interface |
|||
|title=eliminate inline styles from html-tools |
|||
|tags=html, css, web |
|||
|description=Currently, [https://github.com/goavki/apertium-html-tools html-tools] has inline styles. These are not very maintainable and are widely considered bad style. This task requires surveying the uses, removing all of them in a clean manner, i.e. semantically, and re-enabling the linter rule that will prevent them going forward. More information is available in the issue tracker ([https://github.com/goavki/apertium-html-tools/issues/114 #114]) and asynchronous discussion should occur there. |
|||
|mentors=Sushain, Shardul, Xavivars |
|||
|bgnr=yes |
|||
}} |
|||
{{Taskidea |
|||
|type=code,interface |
|||
|title=refine the html-tools spell checking interface |
|||
|tags=html, css, web |
|||
|description=Spell checking is a feature that would greatly benefit [https://github.com/goavki/apertium-html-tools html-tools]. Significant effort has been put towards implementing an effective interface to provide spelling suggestions to users (this [https://github.com/goavki/apertium-html-tools/pull/176 PR] contains the current progress). This task requires solving the problems highlighted in the code review on the PR and fixing any other bugs uncovered in conversations with the mentors. More information is available in the issue tracker ([https://github.com/goavki/apertium-html-tools/issues/12 #12]) and asynchronous discussion should occur there. |
|||
|mentors=Sushain, Jonathan |
|||
}} |
|||
{{Taskidea |
|||
|type=quality |
|||
|title=find an apertium module not developed in svn and import it |
|||
|description=Find an Apertium module developed elsewhere (e.g., github) released under a compatible open license, and import it into [http://wiki.apertium.org/wiki/SVN Apertium's svn], being sure to attribute any authors (in an AUTHORS file) and to keep the original license. One place to look for such modules might be among the [https://wikis.swarthmore.edu/ling073/Category:Sp17_FinalProjects final projects] in a recent Computational Linguistics course. |
|||
|mentors=Jonathan, Wei En |
|||
|multi=10 |
|||
|dup=2 |
|||
}}{{Taskidea |
|||
|type=code |
|||
|title=add an incubator mode to the wikipedia scraper |
|||
|tags=wikipedia, python |
|||
|description=Add a mode to scrape a Wikipedia in the incubator (e.g., [https://incubator.wikimedia.org/wiki/Wp/inh/Main_page the Ingush incubator]) to the [http://svn.code.sf.net/p/apertium/svn/trunk/apertium-tools/WikiExtractor.py WikiExtractor] script |
|||
|mentors=Jonathan, Wei En |
|||
}}{{Taskidea |
|||
|type=code,interface |
|||
|title=add a translation mode interface to the geriaoueg plugin for firefox |
|||
|description=Fork the [https://github.com/vigneshv59/geriaoueg-firefox geriaoueg firefox plugin] and add an interface for translation mode. It doesn't have to translate at this point, but it should communicate with the server (as it currently does) to load available languages. |
|||
|tags=javascript |
|||
|mentors=Jonathan |
|||
}}{{Taskidea |
|||
|type=code, interface |
|||
|title=add a translation mode interface to the geriaoueg plugin for chrome |
|||
|description=Fork the [https://github.com/vigneshv59/geriaoueg-chrome geriaoueg chrome plugin] and add an interface for translation mode. It doesn't have to translate at this point, but it should communicate with the server (as it currently does) to load available languages. |
|||
|tags=javascript |
|||
|mentors=Jonathan |
|||
}}{{Taskidea |
|||
|type=quality |
|||
|title=update bidix included in apertium-init |
|||
|description=There are some issues with the bidix currently included in [https://github.com/goavki/bootstrap/ apertium-init]: the alphabet should be empty (or non-existent?) and the "sg" tags shouldn't be in the example entries. It would also be good to have entries in two different languages, especially ones with incompatible POS sub-categories (e.g. casa{{tag|n}}{{tag|f}}). There is [https://github.com/goavki/bootstrap/issues/24 a github issue for this task]. |
|||
|tags=python, xml, dix |
|||
|beginner=yes |
|||
|mentors=Jonathan, Sushain |
|||
}}{{Taskidea |
|||
|type=code |
|||
|title=apertium-init support for more features in hfst modules |
|||
|description=Add optional support to hfst modules for enabling spelling modules, an extra twoc module for morphotactic constraints, and spellrelax. You'll want to figure out how to integrate this into the Makefile template. There is [https://github.com/goavki/bootstrap/issues/23 a github issue for this task]. |
|||
|tags=python, xml, Makefile |
|||
|mentors=Jonathan |
|||
}}{{Taskidea |
|||
|type=code, quality |
|||
|title=make apertium-init README files show only relevant dictionary file |
|||
|description=Currently in [https://github.com/goavki/bootstrap/ apertium-init], the README files for HFST modules show the "dix" file in the list of files, and it's likely that lttoolbox modules show "hfst" files in their README too. Check this and make it so that READMEs for these two types of monolingual modules display only the right dictionary files. There is [https://github.com/goavki/bootstrap/issues/26 a github issue for this task]. |
|||
|tags=python, xml, Makefile |
|||
|mentors=Jonathan, Sushain |
|||
}}{{Taskidea |
|||
|type=code, quality |
|||
|title=Write a script to add glosses to a monolingual dictionary from a bilingual dictionary |
|||
|description=Write a script that matches bilingual dictionary entries (in dix format) to monolingual dictionary entries in one of the languages (in [[Apertium-specific conventions for lexc|lexc]] format) and adds glosses from the other side of the bilingual dictionary if not already there. The script should combine glosses into one when there's more than one in the bilingual dictionary. Some level of user control might be justified, from simply defaulting to a dry run unless otherwise specified, to controls for adding to versus replacing versus leaving alone existing glosses, and the like. A [http://svn.code.sf.net/p/apertium/svn/trunk/apertium-tools/inject-words-from-bidix-to-lexc.py prototype of this script] is available in SVN, though it's buggy and doesn't fully work—so this task may just end up being to debug it and make it work as intended. A good test case might be the [http://svn.code.sf.net/p/apertium/svn/trunk/apertium-eng-kaz/apertium-eng-kaz.eng-kaz.dix English-Kazakh bilingual dictionary] and the [http://svn.code.sf.net/p/apertium/svn/languages/apertium-kaz/apertium-kaz.kaz.lexc Kazakh monolingual dictionary]. |
|||
|tags=python, lexc, dix, xml |
|||
|mentors=Jonathan |
|||
}}{{Taskidea |
|||
|type=code |
|||
|title=Write a script to deduplicate and/or sort individual lexc lexica. |
|||
|description=The lexc format is a way to specify a monolingual dictionary that gets compiled into a transducer: see [[Apertium-specific conventions for lexc]] and [[Lttoolbox and lexc#lexc]]. A single lexc file may contain quite a few individual lexicons of stems, e.g. for nouns, verbs, prepositions, etc. Write a script (in python or ruby) that reads a specified lexicon, and based on which option the user specifies, identifies and removes duplicates from the lexicon, and/or sorts the entries in the lexicon. Be sure to make a dry-run (i.e., do not actually make the changes) the default, and add different levels of debugging (such as displaying the number of duplicates versus printing each duplicate). Also consider allowing for different criteria for matching duplicates: e.g., whether or not the comment matches too. There are two scripts that parse lexc files already that would be a good point to start from: [http://svn.code.sf.net/p/apertium/svn/trunk/apertium-tools/lexccounter.py lexccounter.py] and [http://svn.code.sf.net/p/apertium/svn/trunk/apertium-tools/inject-words-from-bidix-to-lexc.py inject-words-from-bidix-to-lexc.py] (not fully functional). A rough sketch of such a script is also given just after this table. |
|||
|tags=python, ruby, lexc |
|||
|mentors=Jonathan |
|||
}}{{Taskidea |
|||
|type=quality, interface |
|||
|title=Interface improvement for Apertium Globe Viewer |
|||
|description=The [https://github.com/jonorthwash/Apertium-Global-PairViewer Apertium Globe Viewer] is a tool to visualise the translation pairs that Apertium currently offers, similar to the [http://svn.code.sf.net/p/apertium/svn/trunk/apertium-tools/pairviewer/apertium.html apertium pair viewer]. Choose any [https://wikis.swarthmore.edu/ling073/User:Cpillsb1/Final_project interface or usability issue] listed in the tool's documentation in consultation with your mentor, file an [https://github.com/jonorthwash/Apertium-Global-PairViewer/issues issue], and fix it. |
|||
|tags=javascript, maps |
|||
|multi=3 |
|||
|dup=5 |
|||
|mentors=Jonathan |
|||
}}{{Taskidea |
|||
|type=quality, code |
|||
|title=Separate geographic and module data for Apertium Globe Viewer |
|||
|description=The [https://github.com/jonorthwash/Apertium-Global-PairViewer Apertium Globe Viewer] is a tool to visualise the translation pairs that Apertium currently offers, similar to the [http://svn.code.sf.net/p/apertium/svn/trunk/apertium-tools/pairviewer/apertium.html apertium pair viewer]. Currently, geographic data for languages and pairs (latitude, longitude) is stored with the size of the dictionary, etc. Find a way to separate this data into distinct files (named sensibly), and at the same time make it possible to specify only the points for each language and not the endpoints for the arcs for language pairs (those should be trivial to generate dynamically). |
|||
|tags=javascript, json |
|||
|mentors=Jonathan |
|||
}}{{Taskidea |
|||
|type=quality, code |
|||
|title=Scraper of information needed for Apertium visualisers |
|||
|description=There are currently three prototype visualisers for the translation pairs Apertium offers: [https://github.com/jonorthwash/Apertium-Global-PairViewer Apertium Globe Viewer], [http://svn.code.sf.net/p/apertium/svn/trunk/apertium-tools/pairviewer/apertium.html apertium pair viewer] and [http://svn.code.sf.net/p/apertium/svn/trunk/apertium-tools/family-visualizations/ language family visualisation tool]. They all rely on data about Apertium linguistic modules, and that data has to be scraped. There are scripts that do different pieces of this already, but they are not unified: [http://svn.code.sf.net/p/apertium/svn/trunk/apertium-tools/wiki-tools/dixTable.py queries svn], [http://svn.code.sf.net/p/apertium/svn/trunk/apertium-tools/family-visualizations/overtime.rb queries svn revisions], [http://wiki.apertium.org/wiki/The_Right_Way_to_count_dix_stems counting bidix stems]. Evaluate how well these scripts work, and attempt to make them output data that will be compatible with all viewers (and/or modify the viewers to make sure they are compatible with the general output format). |
|||
|tags=python, json, scrapers |
|||
|mentors=Jonathan |
|||
}}{{Taskidea |
|||
|type=quality |
|||
|title=fix pairviewer's 2- and 3-letter code conflation problems |
|||
|description=[[pairviewer]] doesn't always conflate languages that have two codes. E.g. sv/swe, nb/nob, de/deu, da/dan, uk/ukr, et/est, nl/nld, he/heb, ar/ara and eus/eu each currently appear as two separate nodes, but should each be collapsed into one node. Figure out why this isn't happening and fix it. Also, implement an algorithm to generate 2-to-3-letter mappings for available languages based on having the identical language name in languages.json instead of loading the huge list from codes.json; try to make this as processor- and memory-efficient as possible. |
|||
|tags=javascript |
|||
|mentors=Jonathan |
|||
}}{{Taskidea |
|||
|type=quality, code |
|||
|title=split nor into nob and nno in pairviewer |
|||
|description=Currently in [[pairviewer]], nor is displayed as a language separately from nob and nno. However, the nor pair actually consists of both an nob and an nno component. Figure out a way for pairviewer (or pairsOut.py / get_all_lang_pairs.py) to detect this split. So instead of having swe-nor, there would be swe-nob and swe-nno displayed (connected seamlessly with other nob-* and nno-* pairs), though the paths between the nodes would each still give information about the swe-nor pair. Implement a solution, trying to make sure it's future-proof (i.e., will work with similar sorts of things in the future). |
|||
|mentors=Jonathan, Fran, Unhammer |
|||
|tags=javascript |
|||
}}{{Taskidea |
|||
|type=quality, code |
|||
|title=add support to pairviewer for regional and alternate orthographic modes |
|||
|description=Currently in [[pairviewer]], there is no way to detect or display modes like zh_TW. Add support to pairsOut.py / get_all_lang_pairs.py to detect pairs containing abbreviations like this, as well as alternate orthographic modes in pairs (e.g. uzb_Latn and uzb_Cyrl). Also, figure out a way to display these nicely in the pairviewer's front-end. Get creative. I can imagine something like zh_CN and zh_TW nodes that are in some fixed relation to zho (think Mickey Mouse configuration?). Run some ideas by your mentor and implement what's decided on. |
|||
|mentors=Jonathan, Fran |
|||
|tags=javascript |
|||
}}{{Taskidea |
|||
|type=code |
|||
|title=Extend visualisation of pairs involving a language in language family visualisation tool |
|||
|description=The [http://svn.code.sf.net/p/apertium/svn/trunk/apertium-tools/family-visualizations/ language family visualisation tool] currently has a visualisation of all pairs involving the language. Extend this to include pairs that involve those languages, and so on, until there are no more pairs. This should result in a graph of quite a few languages, with the current language in the middle. Note that if language x is the center, and there are x-y and x-z pairs, but also a y-z pair, this should display the y-z pair with a link, not with an extra z and y node each, connected to the original y and z nodes, respectively. The best way to do this may involve some sort of filtering of the data. |
|||
|mentors=Jonathan |
|||
|tags=javascript |
|||
}}{{Taskidea |
|||
|type=code |
|||
|title=Scrape Crimean Tatar Quran translation from a website |
|||
|description=Bible and Quran translations often serve as a parallel corpus useful for solving NLP tasks because both texts are available in many languages. Your goal in this task is to write a program in the language of your choice which scrapes the Quran translation in the Crimean Tatar language available on the following website: http://crimean.org/islam/koran/dizen-qurtnezir/. You can adapt the scraper described on the [[Writing a scraper]] page or write your own from scratch. The output should be plain text in Tanzil format ('text with aya numbers'). You can see examples of that format on http://tanzil.net/trans/ page. When scraping, please be polite and request data at a reasonable rate. |
|||
|mentors=Ilnar, Jonathan, fotonzade |
|||
|tags=scraper |
|||
}}{{Taskidea |
|||
|type=code |
|||
|title=Scrape Quran translations from a website |
|||
|description=Bible and Quran translations often serve as a parallel corpus useful for solving NLP tasks because both texts are available in many languages. Your goal in this task is to write a program in the language of your choice which scrapes the Quran translations available on the following website: http://www.quran-ebook.com/. You can adapt the scraper described on the [[Writing a scraper]] page or write your own from scratch. The output should be plain text in Tanzil format ('text with aya numbers'). You can see examples of that format on the http://tanzil.net/trans/ page. Before starting, check whether the translation is already available on the Tanzil project's page (no need to re-scrape those, but you should use them to test the output of your program). Although the format of the translations seems to be the same and thus your program is expected to work for all of them, the translations we are most interested in are the following: [http://www.quran-ebook.com/azerbaijan_version2/1.html Azerbaijani version 2], [http://www.quran-ebook.com/bashkir_version/index_ba.html Bashkir], [http://www.quran-ebook.com/chechen_version/index_cech.html Chechen], [http://www.quran-ebook.com/karachayevo_version/index_krc.html Karachay] and [http://www.quran-ebook.com/kyrgyzstan_version/index_kg.html Kyrgyz]. When scraping, please be polite and request data at a reasonable rate. |
|||
|mentors=Ilnar, Jonathan, fotonzade |
|||
|tags=scraper |
|||
}}{{Taskidea |
|||
|type=documentation |
|||
|title=Unified documentation on Apertium visualisers |
|||
|description=There are currently three prototype visualisers for the translation pairs Apertium offers: [https://github.com/jonorthwash/Apertium-Global-PairViewer Apertium Globe Viewer] and [http://svn.code.sf.net/p/apertium/svn/trunk/apertium-tools/pairviewer/apertium.html apertium pair viewer] and [http://svn.code.sf.net/p/apertium/svn/trunk/apertium-tools/family-visualizations/ language family visualisation tool]. Make a page on the Apertium wiki that showcases these three visualisers and links to further documentation on each. If documentation for any of them is available somewhere other than the Apertium wiki, then (assuming compatible licenses) integrate it into the Apertium wiki, with a link back to the original. |
|||
|mentors=Jonathan |
|||
|tags=wiki, visualisers |
|||
}}{{Taskidea|type=research|mentors=Jonathan |
|||
|title=Investigate FST backends for Swype-type input |
|||
|description=Investigate what options exist for implementing an FST (of the sort used in Apertium [[spell checking]]) for auto-correction into an existing open source Swype-type input method on Android. You don't need to do any coding, but you should determine what would need to be done to add an FST backend into the software. Write up your findings on the Apertium wiki. |
|||
|mentors=Jonathan |
|||
|tags=spelling,android |
|||
}}{{Taskidea|type=research|mentors=Jonathan |
|||
|title=tesseract interface for apertium languages |
|||
|description=Find out what it would take to integrate apertium or voikkospell into tesseract. Document thoroughly available options on the wiki. |
|||
|tags=spelling,ocr |
|||
}}{{Taskidea |
|||
|type=documentation |
|||
|mentors=Jonathan, Shardul |
|||
|title=Integrate documentation of the Apertium deformatter/reformatter into system architecture page |
|||
|description=Integrate documentation of the Apertium deformatter and reformatter into the wiki page on the [[Apertium system architecture]]. |
|||
|tags=wiki, architecture |
|||
}}{{Taskidea |
|||
|type=documentation |
|||
|mentors=Jonathan, Shardul |
|||
|title=Document a full example through the Apertium pipeline |
|||
|description=Come up with an example sentence that could hypothetically rely on each stage of the [[Apertium pipeline]], and show the input and output of each stage under the [[Apertium_system_architecture#Example_translation_at_each_stage|Example translation at each stage]] section on the Apertium wiki. |
|||
|tags=wiki, architecture |
|||
}}{{Taskidea |
|||
|type=documentation |
|||
|mentors=Jonathan, Shardul |
|||
|title=Create a visual overview of structural transfer rules |
|||
|description=Based on an [https://wikis.swarthmore.edu/ling073/Structural_transfer existing overview of Apertium structural transfer rules], come up with a visual presentation of transfer rules that shows what parts of a set of rules correspond to which changes in input and output, and also which definitions are used where in the rules. Get creative—you can do this all in any format easily viewed across platforms, especially as a webpage using modern effects like those offered by d3 or similar. |
|||
|tags=wiki, architecture, visualisations, transfer |
|||
}}{{Taskidea |
|||
|type=documentation |
|||
|mentors=Jonathan |
|||
|title=Complete the Linguistic Data chart on Apertium system architecture wiki page |
|||
|description=With the assistance of the Apertium community (our [[IRC]] channel) and the resources available on the Apertium wiki, fill in the remaining cells of the table in the "Linguistic data" section of the [[Apertium system architecture]] wiki page. |
|||
|tags=wiki, architecture |
|||
|beginner=yes |
|||
}}{{Taskidea |
|||
|type=research |
|||
|mentors=Fran |
|||
|title=Do a literature review on anaphora resolution |
|||
|description=Anaphora resolution (see the [[anaphora resolution|wiki page]]) is the task of determining, for a pronoun or other item with reference, what it refers to. Do a literature review and write up common methods with their success rates. |
|||
|tags=anaphora, rbmt, engine |
|||
|beginner= |
|||
}}{{Taskidea |
|||
|type=research |
|||
|mentors=Fran |
|||
|title=Write up grammatical tables for a grammar of a language that Apertium doesn't have an analyser for |
|||
|description=Many descriptive grammars have useful tables that can be used for building morphological analysers. Unfortunately they are often only available in Google Books or on paper and are not easily machine-processable. The objective is to find a grammar of a language for which Apertium doesn't have a morphological analyser and write up the tables on a Wiki page. |
|||
|tags=grammar, books, data-entry |
|||
|beginner= |
|||
}}{{Taskidea |
|||
|type=research |
|||
|mentors=Fran, Xavivars |
|||
|title=Phrasebooks and frequency |
|||
|description=Apertium is quite terrible in general with phrasebook style sentences in most languages. Try translating "what's up" from English to Spanish. The objective of this task is to look for phrasebook/filler type sentences/utterances in parallel corpora of film subtitles and on the internet and order them by frequency/generality. Frequency is the number of times you see the utterance; generality is how many different places you see it in. |
|||
|tags=phrasebook, translation |
|||
|beginner= |
|||
}} |
|||
{{Taskidea |
|||
|type=research |
|||
|mentors=Flammie |
|||
|title=Hungarian Open Source dictionaries |
|||
|description=There are currently at least three open-source Hungarian resources for morphological analysis/dictionaries. Study and document how to install these and how to get the words and their inflectional information out, and e.g. tabulate some examples of similarities and differences in word classes/tags. See [[Hungarian]] for more info. |
|||
|tags=hungarian |
|||
|beginner= |
|||
}} |
|||
{{Taskidea |
|||
|type=research |
|||
|mentors=Vin, Jonathan, Anna |
|||
|title=Create a UD-Apertium morphology mapping |
|||
|description=Choose a language that has a Universal Dependencies treebank and tabulate a potential set of Apertium morph labels based on the (universal) UD morph labels. See Apertium's [[list of symbols]] and [http://universaldependencies.org/ UD]'s POS and feature tags for the labels. |
|||
|tags=morphology, ud, dependencies |
|||
|beginner= |
|||
|multi=5 |
|||
}} |
|||
{{Taskidea |
|||
|type=research |
|||
|mentors=Vin, Jonathan, Anna |
|||
|title=Create an Apertium-UD morphology mapping |
|||
|description=Choose a language that has an Apertium morphological analyser and adapt it to convert the morphology to UD morphology |
|||
|tags=morphology, ud, dependencies |
|||
|beginner= |
|||
|multi=5 |
|||
}} |
|||
{{Taskidea |
|||
|type=research |
|||
|mentors=Vin |
|||
|title=Create a full verbal paradigm for an Indo-Aryan language |
|||
|description=Choose a regular verb and create a paradigm with all possible tense/aspect/mood inflections for an Indo-Aryan language (except Hindi or Marathi). Use Masica's grammar as a reference. |
|||
|tags=morphology, indo-aryan |
|||
|beginner= |
|||
|multi=10 |
|||
}} |
|||
{{Taskidea |
|||
|type=code |
|||
|mentors=Vin |
|||
|title=Create a syntactic analogy corpus for a particular POS/language. |
|||
|description=Refer to the syntactic section of [https://www.aclweb.org/anthology/N/N16/N16-2002.pdf this paper]. Try to create a data set with more than 2000 * 8 = 16000 entries for a particular POS with any language, using a large corpus for frequency. |
|||
|tags=morphology, embeddings |
|||
|beginner= |
|||
|multi=5 |
|||
}} |
|||
{{Taskidea |
|||
|type=code |
|||
|mentors=Vin |
|||
|title=Envision and create a quick utility for tasks like morphological lookup |
|||
|description=Many tasks like morphological analysis are annoying to do by navigating to the right directory, typing out an entire pipeline, etc. Write a bash script to simplify some of these procedures, taking into account the install paths and prefixes if necessary. E.g. echo "hargle" \| ~/analysers/apertium-eng/eng.automorf.bin ==> morph "hargle" eng |
|||
|tags=bash, scripting |
|||
|beginner=yes |
|||
|multi=10 |
|||
}} |
|||
{{Taskidea |
|||
|type=research,code |
|||
|mentors=Vin |
|||
|title=Use open-source OCR to convert open-source non-text news corpora to text. Evaluate an analyser's coverage on them. |
|||
|description=Many languages that have online newspapers do not use actual text to store the news but instead use images or GIFs :((( Find a newspaper for a language that lacks news text online (e.g. Marathi), check the licences, find an OCR tool, and scrape a reasonably large corpus from the images if doing so would not violate the CC/GPL licence. Evaluate the morphological analyser on it. |
|||
|tags=python,morphology |
|||
|beginner= |
|||
}} |
|||
{{Taskidea |
|||
|type=research,quality |
|||
|mentors=Shardul, Jonathan |
|||
|tags=issues, python |
|||
|title=Clean up open issues in [https://github.com/goavki/apertium-html-tools/issues html-tools], [https://github.com/goavki/phenny/issues begiak], or [https://github.com/goavki/apertium-apy/issues APy] |
|||
|description=Go through issue threads for [https://github.com/goavki/apertium-html-tools/issues html-tools], [https://github.com/goavki/phenny/issues begiak], or [https://github.com/goavki/apertium-apy/issues APy], and find issues that have been solved in the code but are still open on GitHub. (The fact that they have been solved may not be evident from the comments thread alone.) Once you find such an issue, comment on the thread explaining what code/commit fixed it and how it behaves at the latest revision. |
|||
|multi=15 |
|||
}} |
|||
{{Taskidea |
|||
|type=code,quality |
|||
|mentors=Shardul, Jonathan |
|||
|tags=tests, python, IRC |
|||
|title=Get [https://github.com/goavki/phenny begiak] to build cleanly |
|||
|description=Currently, [https://github.com/goavki/phenny begiak] does not build cleanly because of a number of failing tests. Find what is causing the tests to fail, and either fix the code or the tests if the code has changed its behavior. Document all your changes in the PR that you create. |
|||
}} |
|||
{{Taskidea |
|||
|type=quality |
|||
|mentors=Jonathan, Ilnar |
|||
|title=Find stems in the Kazakh treebank that are not in the Kazakh analyser |
|||
|description=There are quite a few analyses in the [http://svn.code.sf.net/p/apertium/svn/languages/apertium-kaz/texts/puupankki/puupankki.kaz.conllu Kazakh treebank] that don't exist in the [[apertium-kaz|Kazakh analyser]]. Find as many examples of missing stems as you can. Feel free to write a script to automate this so it's as exhaustive (and non-exhausting:) as possible. You may either add what you find to the analyser yourself, commit a list of the missing stems to apertium-kaz/dev, or send a list to your mentor so that they may do one of these. A rough sketch of such a script is given just after this table. |
|||
|tags=treebank, Kazakh, analyses |
|||
|beginner=yes |
|||
}} |
|||
{{Taskidea |
|||
|type=quality |
|||
|mentors=Jonathan, Ilnar |
|||
|title=Find missing analyses in the Kazakh treebank that are not in the Kazakh analyser |
|||
|description=There are quite a few analyses in the [http://svn.code.sf.net/p/apertium/svn/languages/apertium-kaz/texts/puupankki/puupankki.kaz.conllu Kazakh treebank] that don't exist in the [[apertium-kaz|Kazakh analyser]]. Find as many examples of missing analyses (for existing stems) as you can. Feel free to write a script to automate this so it's as exhaustive (and non-exhausting:) as possible. You may commit a list of the missing stems to apertium-kaz/dev or send a list to your mentor so that they may do this. |
|||
|tags=treebank, Kazakh, analyses |
|||
|beginner=yes |
|||
}} |
|||
{{Taskidea |
|||
|type=code |
|||
|mentors=Jonathan |
|||
|title=Use apertium-init to bootstrap a new language module |
|||
|description=Use [[Apertium-init]] to bootstrap a new language module that doesn't currently exist in Apertium. To see if a language is available, check [[languages]] and [[incubator]], and especially ask on IRC. Add enough stems and morphology to the module so that it analyses and generates at least 100 correct forms. Check your code into Apertium's codebase. [[Task ideas for Google Code-in/Add words from frequency list|Read more about adding stems...]] |
|||
|tags=languages, bootstrap, dictionaries |
|||
|beginner=yes |
|||
|multi=25 |
|||
}} |
|||
{{Taskidea |
|||
|type=code |
|||
|mentors=Jonathan |
|||
|title=Use apertium-init to bootstrap a new language pair |
|||
|description=Use [[Apertium-init]] to bootstrap a new translation pair between two languages which have monolingual modules already in Apertium. To see if a translation pair has already been made, check our [[SVN]] repository, and especially ask on IRC. Add 100 common stems to the dictionary. Check your work into Apertium's codebase. |
|||
|tags=languages, bootstrap, dictionaries, translators |
|||
|beginner=yes |
|||
|multi=25 |
|||
}} |
|||
{{Taskidea |
|||
|type=code |
|||
|mentors=Jonathan, mlforcada |
|||
|title=Add a transfer rule to an existing translation pair |
|||
|description=Add a transfer rule to an existing translation pair that fixes an error in translation. Document the rule on the [http://wiki.apertium.org/ Apertium wiki] by adding a [[regression testing|regression tests]] page similar to [[English_and_Portuguese/Regression_tests]] or [[Icelandic_and_English/Regression_tests]]. Check your code into Apertium's codebase. [[Task ideas for Google Code-in/Add transfer rule|Read more...]] |
|||
|tags=languages, bootstrap, transfer |
|||
|multi=25 |
|||
|dup=5 |
|||
}} |
|||
{{Taskidea |
|||
|type=code |
|||
|mentors=Jonathan |
|||
|title=Add stems to an existing translation pair |
|||
|description=Add 1000 common stems to the dictionary of an existing translation pair. Check your work into Apertium's codebase. [[Task ideas for Google Code-in/Add words from frequency list|Read more about adding stems...]] |
|||
|tags=languages, bootstrap, dictionaries, translators |
|||
|multi=25 |
|||
|dup=5 |
|||
}} |
|||
{{Taskidea |
|||
|type=code |
|||
|mentors=Jonathan |
|||
|title=Add 10 lexical selection rules to an existing translation pair |
|||
|description=Add 10 lexical selection rules to an existing translation pair. Check your work into Apertium's codebase. [[Task ideas for Google Code-in/Add lexical-select rules|Read more...]] |
|||
|tags=languages, bootstrap, lexical selection, translators |
|||
|multi=25 |
|||
|dup=5 |
|||
}} |
|||
{{Taskidea |
|||
|type=code |
|||
|mentors=Jonathan |
|||
|title=Write 10 constraint grammar rules for an existing language module |
|||
|description=Add 10 constraint grammar rules to an existing language that you know. Check your work into Apertium's codebase. [[Task ideas for Google Code-in/Add constraint-grammar rules|Read more...]] |
|||
|tags=languages, bootstrap, constraint grammar |
|||
|multi=25 |
|||
|dup=5 |
|||
}} |
|||
{{Taskidea |
|||
|type=code,interface |
|||
|mentors=Jonathan |
|||
|title=Paradigm generator webpage |
|||
|description=Write a standalone webpage that makes queries (through javascript) to an [[apertium-apy]] server to fill in morphological forms based on morphological tags that are hidden throughout the body of the page. For example, say you have the verb "say" and some tags like inf, past, pres.p3.sg; these forms would get filled in as "say", "said", "says". |
|||
|tags=javascript, html, apy |
|||
}} |
|||
{{Taskidea |
|||
|type=code |
|||
|mentors=Anna |
|||
|title=Train a new model for syntactic function labeller |
|||
|description=Choose one of the languages Apertium uses in language pairs and prepare training data for the labeller from its UD-treebank: replace UD tags with Apertium tags, parse the treebank, create fastText embeddings (a rough sketch of this step is given just after this table). Then train a new model on this data and evaluate its accuracy. |
|||
|tags=python, UD, embeddings, machine learning |
|||
|multi=5 |
|||
}} |
|||
{{Taskidea |
|||
|type=code,quality |
|||
|mentors=Anna |
|||
|title=Tuning a learning rate for syntactic function labeller's RNN |
|||
|description=The syntactic function labeller uses an RNN for training and predicting the syntactic functions of words. Current models can be improved by tuning training parameters, e.g. the learning rate. |
|||
|tags=python, machine learning |
|||
}} |
|||
</table> |
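A very rough, untested sketch (in python) for the "Write a script to deduplicate and/or sort individual lexc lexica" task above. The flag names and the simplified treatment of lexc entries (everything after "!" counts as a comment, a lexicon runs until the next LEXICON line) are assumptions for illustration, not requirements.

<pre>
#!/usr/bin/env python3
"""Sketch: deduplicate and/or sort one lexicon in a lexc file (dry run by default)."""
import argparse
import re
import sys

ENTRY_RE = re.compile(r'^(?P<body>[^!]*?)\s*(?P<comment>!.*)?$')

def read_lexicon(path, name):
    """Split the file into (before, entries, after) around LEXICON <name>."""
    before, entries, after = [], [], []
    state = 'before'
    with open(path, encoding='utf-8') as f:
        for line in f:
            stripped = line.strip()
            if state == 'before':
                before.append(line)
                if stripped.startswith('LEXICON') and stripped.split()[1:2] == [name]:
                    state = 'inside'
            elif state == 'inside' and stripped.startswith('LEXICON'):
                state = 'after'
                after.append(line)
            elif state == 'inside':
                entries.append(line)
            else:
                after.append(line)
    return before, entries, after

def dedupe(entries, ignore_comments=True):
    """Keep the first copy of each entry; report the rest."""
    seen, kept, dropped = set(), [], []
    for line in entries:
        m = ENTRY_RE.match(line.rstrip('\n'))
        key = m.group('body').strip() if ignore_comments else line.strip()
        if key and key in seen:
            dropped.append(line)
        else:
            seen.add(key)
            kept.append(line)
    return kept, dropped

if __name__ == '__main__':
    ap = argparse.ArgumentParser()
    ap.add_argument('lexcfile')
    ap.add_argument('lexicon', help='name of the LEXICON to process')
    ap.add_argument('--sort', action='store_true')
    ap.add_argument('--write', action='store_true',
                    help='actually modify the file (default: dry run)')
    args = ap.parse_args()

    before, entries, after = read_lexicon(args.lexcfile, args.lexicon)
    kept, dropped = dedupe(entries)
    if args.sort:
        # Note: this also sorts blank and comment-only lines in with entries.
        kept = sorted(kept, key=str.casefold)

    print('%d duplicate(s) in LEXICON %s' % (len(dropped), args.lexicon), file=sys.stderr)
    for line in dropped:
        print('  dup: ' + line.rstrip(), file=sys.stderr)
    if args.write:
        with open(args.lexcfile, 'w', encoding='utf-8') as f:
            f.writelines(before + kept + after)
</pre>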
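For the "Find stems in the Kazakh treebank that are not in the Kazakh analyser" task, a possible starting point is sketched below. It assumes lt-proc is on your PATH and that you have a compiled analyser (the path kaz.automorf.bin is a placeholder); lt-proc marks unknown forms with an asterisk, which is what the script looks for. Treat it as a sketch, not a finished solution.

<pre>
#!/usr/bin/env python3
"""Sketch: list treebank forms that the analyser doesn't recognise.

Assumptions: lt-proc is on PATH, the analyser path is a placeholder, and
forms containing stream-format metacharacters (^, $, /, [, ]) are rare
enough to ignore here."""
import subprocess
import sys
from collections import Counter

def conllu_tokens(path):
    """Yield (form, lemma) pairs from a CoNLL-U file."""
    with open(path, encoding='utf-8') as f:
        for line in f:
            if not line.strip() or line.startswith('#'):
                continue
            cols = line.rstrip('\n').split('\t')
            if len(cols) >= 3 and cols[0].isdigit():   # skip ranges/empty nodes
                yield cols[1], cols[2]

def main(conllu_path, automorf_path):
    forms = {}                       # surface form -> treebank lemma
    for form, lemma in conllu_tokens(conllu_path):
        forms.setdefault(form, lemma)

    # One form per line, analysed in a single lt-proc call.
    out = subprocess.run(['lt-proc', automorf_path],
                         input='\n'.join(forms) + '\n',
                         capture_output=True, text=True, check=True).stdout

    unknown = Counter()
    for form, analysis in zip(forms, out.splitlines()):
        if '/*' in analysis:         # lt-proc marks unknowns as ^form/*form$
            unknown[forms[form]] += 1
    for lemma, freq in unknown.most_common():
        print('%d\t%s' % (freq, lemma))

if __name__ == '__main__':
    main(sys.argv[1], sys.argv[2])
</pre>

Usage would be something like <code>python3 find_missing.py puupankki.kaz.conllu kaz.automorf.bin</code>, with the second argument pointing at your own compiled analyser.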
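For the "Train a new model for syntactic function labeller" task, the embedding step might look roughly like the sketch below. It assumes the fasttext Python package (pip install fasttext) and a corpus that has already been converted to Apertium-style tags; the file names and the example token are placeholders, and the tag-conversion step (the real work of the task) is not shown.

<pre>
#!/usr/bin/env python3
"""Sketch of the 'create fastText embeddings' step for the labeller task."""
import fasttext

def train_embeddings(corpus_path, out_path, dim=100):
    # Skip-gram with subword information, which is what makes fastText
    # attractive for morphologically rich languages.
    model = fasttext.train_unsupervised(corpus_path, model='skipgram',
                                        dim=dim, minCount=2)
    model.save_model(out_path)
    return model

if __name__ == '__main__':
    model = train_embeddings('corpus.apertium-tagged.txt', 'embeddings.bin')
    print(model.get_word_vector('алма<n><nom>')[:5])  # placeholder tagged token
</pre>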
|||
==Task ideas (2016)== |
|||
<table class="sortable wikitable"> |
|||
<tr><th>type</th><th>title</th><th>description</th><th>tags</th><th>mentors</th><th>bgnr?</th><th>multi?</th></tr> |
|||
{{Taskidea|type=code|mentors=Fran, Unhammer|tags=c++ |
|||
|title=Refactor/merge the main "processing" functions of lrx-proc |
|||
|description=[[lrx-proc]] has two modes, "-m" mode and default mode. Each mode is implemented by its own huge function, and the two are nearly identical. Refactor the code to remove the redundancy, and run tests on lots of text with several language pairs to ensure no regressions. |
|||
}} |
|||
{{Taskidea|type=code|mentors=Fran, Unhammer|tags=c++ |
|||
|title=Profile and improve speed of lrx-proc |
|||
|description=[[lrx-proc]] is slower than it should be. There is probably some low-hanging fruit. Try profiling it and implementing an improvement. |
|||
}} |
|||
{{Taskidea|type=research|mentors=Fran|tags=parsing |
|||
|title=See if you can precompile xpath expressions or xslt stylesheets |
|||
|description=An XSLT stylesheet is a program for transforming XML trees. An XPath expression is a way of specifying a node set in an XML tree. Investigate the possibility of pre-compiling either stylesheets or XPath expressions; a small lxml-based illustration is given just after this table. |
|||
}} |
|||
{{Taskidea|type=research|mentors=Fran, Schindler|tags=parsing |
|||
|title=Review literature on linearisation of dependency trees |
|||
|description=A dependency tree is an intermediate representation of a sentence with no implicit word order. Linearisation is finding the appropriate word order for a dependency tree. Do a survey of the available literature and write up a review. |
|||
}} |
|||
{{Taskidea|type=research|mentors=Fran |
|||
|title=Manually annotate/Tag text in Apertium format |
|||
|description=Take some running text, analyse it using an Apertium analyser then manually disambiguate the result.}} |
|||
<!-- Convert Chukchi lexicon to HFST/lexc --> |
|||
{{Taskidea|type=code|mentors=Fran|tags= |
|||
|title=Convert Chukchi Nouns to HFST/lexc |
|||
|description=There is a freely available lexicon of Chukchi, a language spoken in the north-east of Russia. The objective of this task is to convert part of the lexicon covering nouns to [[lexc]] format, which is a formalism for specifying concatenative morphology.}} |
|||
{{Taskidea|type=code|mentors=Fran|tags= |
|||
|title=Convert Chukchi Numerals to HFST/lexc |
|||
|description=There is a freely available lexicon of Chukchi, a language spoken in the north-east of Russia. The objective of this task is to convert part of the lexicon covering numerals to [[lexc]] format, which is a formalism for specifying concatenative morphology.}} |
|||
{{Taskidea|type=code|mentors=Fran|tags= |
|||
|title=Convert Chukchi Adjectives to HFST/lexc |
|||
|description=There is a freely available lexicon of Chukchi, a language spoken in the north-east of Russia. The objective of this task is to convert part of the lexicon covering adjectives to [[lexc]] format, which is a formalism for specifying concatenative morphology.}} |
|||
{{Taskidea|type=interface|mentors=Fran, Schindler|tags=HTML,CSS |
|||
|title=Make a design for a web-based viewer for parallel treebanks |
|||
|description=(also for viewing differing annotations of the same sentence) |
|||
|tags=dependencies,parallel,web |
|||
|mentors=Fran,Jonathan}} |
|||
{{Taskidea|type=code |
|||
|title=Write a script to convert a UD treebank |
|||
|description=Convert a UD treebank for a given language to a format suitable for training the perceptron tagger.}} |
|||
{{Taskidea|type=research |
|||
|title=Train the perceptron tagger for a language |
|||
|description=The perceptron tagger is a new part-of-speech tagger that was developed for Apertium in the Summer of Code. Take a language from [[languages]] and train the tagger for that language. |
|||
|mentors=Fran}} |
|||
{{Taskidea|type=interface |
|||
|title=Design an annotation tool for disambiguation |
|||
|description=Cf. webanno, corpus.mari-language.org, brat. |
|||
|tags=disambiguation,annotation |
|||
|mentors=Fran,Jonathan}} |
|||
{{Taskidea|type=interface |
|||
|title=Design an annotation tool for adding dependencies |
|||
|description=Like c.f. brat |
|||
|tags=dependencies,annotation |
|||
|mentors=Fran,Jonathan}} |
|||
{{Taskidea|type=code |
|||
|title=Train lexical selection rules |
|||
|description= from a large parallel corpus for a language pair |
|||
|mentors=Fran}} |
|||
{{Taskidea|type=documentation |
|||
|title=Document how to set up the experiments for weighted transfer rules |
|||
|mentors=Fran}} |
|||
{{Taskidea|type=code |
|||
|title=convert UD treebank to apertium tags, use unigram tagger |
|||
|description=(see #apertium logs 2016-06-22)}} |
|||
{{Taskidea|type=code |
|||
|title=Write a script to extract sentences from CoNLL-U |
|||
|description=where they have the same tokenisation as Apertium. |
|||
|mentors=Fran, wei2912}} |
|||
{{Taskidea|type=documentation|mentors=Schindler |
|||
|title=convert [http://youssefsan.eu/wiki/index.php?title=Wolof] to apertium-style documentation |
|||
|description= |
|||
}} |
|||
{{Taskidea|type=code|tags=c++ |
|||
|title=Implement `lt-print --strings` / `lt-print -s`|mentors=Fran, wei2912}} |
|||
{{Taskidea|type=code|tags=c++ |
|||
|title=Implement lt-expand -n |
|||
|description=Implement an algorithm that prints out a transducer but only follows ''n'' cycles. |
|||
|mentors=Fran, wei2912}} |
|||
{{Taskidea |
|||
|title=in-browser globe with apertium languages as points |
|||
|description=Use d3 globe to make an apertium language/pair viewer (like [[pairviewer]]), maybe based on [https://www.jasondavies.com/maps/rotate/ this] or [http://bl.ocks.org/KoGor/5994804 this] or [http://bl.ocks.org/dwtkns/4973620 this]. [http://svn.code.sf.net/p/apertium/svn/trunk/apertium-tools/mapviewer/langdata/apertium-languages.tsv This file] contains coordinates of Apertium languages.|mentors=Jonathan, kvld|type=code|tags=js,html,maps}} |
|||
{{Taskidea|type=code|tags=c++ |
|||
|title=Write a program to detect contexts where a path in a compiled transducer begins with a whitespace |
|||
|description=When a transducer contains a path that begins with whitespace, the tools refuse to load it, but the user has no idea which entry in the dictionary caused the error. If we gave some context with the error, it would be easier to find the offending entry in the dictionary.}} |
|||
{{Taskidea|type=code|tags=c++ |
|||
|title=Make the lt-comp compiler print a warning when a path begins with a whitespace. |
|||
|description=A common mistake in dix files is stray whitespace in some places; this should be automatically detected by the compilation tool and a warning issued to the user.}} |
|||
{{Taskidea |
|||
|title=apertium-mar-hin: make the TL morph for any part of speech less daft |
|||
|description=Some of the morphology generated for Marathi or Hindi is currently daft; pick a part of speech and make it less so. |
|||
|tags=morphology|mentors=vin-ivar}} |
|||
{{Taskidea |
|||
|title=add indic scripts/formal latin transliterations |
|||
|description=Transliteration is a way of writing text in different scripts. Currently some Indic scripts are only handled via a WX transliterator.|tags=python|mentors=vin-ivar}} |
|||
{{Taskidea| |
|||
title=apertium-hin: more consistency with apertium-mar for verbs|tags=morphology|mentors=vin-ivar |
|||
|description=Verbs in Marathi and Hindi are handled inconsistently. |
|||
|type=code}} |
|||
{{Taskidea| |
|||
title=apertium-mar: replace cases with postpositions|tags=morphology|mentors=vin-ivar |
|||
|description=Marathi case markers should be treated as postpositions. |
|||
|type=code}} |
|||
{{Taskidea| |
|||
title=apertium-mar: fix modals and quasi-modals|tags=morphology|mentors=vin-ivar |
|||
|description=Modals in Marathi need fixing |
|||
|type=code}} |
|||
{{Taskidea |
|||
|type=code |
|||
|title=refactor x file in apy |
|||
|description=Reorganise apy code to be more readable, maintainable and so forth. |
|||
|mentors=Putti}} |
|||
{{Taskidea |
|||
|type=documentation |
|||
|title=add docstrings to x file in apy |
|||
|description=Docstrings are a way to document Python code; they can be turned into documentation on the web or viewed from within Python. See the relevant PEPs on python.org (e.g. PEP 257). |
|||
|mentors=Putti, vin-ivar}} |
|||
{{Taskidea |
|||
|type=quality |
|||
|title=write 10 unit tests for apy |
|||
|mentors=Putti, Unhammer, (sushain?)}} |
|||
{{Taskidea |
|||
|type=code |
|||
|title=add 1 transfer rule |
|||
|description=Transfer rules are parts of translation process dealing with re-arranging, adding and deleting words. See also [[Short introduction to transfer]] |
|||
|mentors=Fran, vin-ivar, zfe, kvld}} |
|||
{{Taskidea |
|||
|type=code |
|||
|title=add 50 entries to a bidix |
|||
|description=Bilingual dictionary (bidix) contains word-to-word translations between languages, e.g. cat-chat or cat-Katze in English to French or German respectively. Add 50 of such word-translations to languages you know. |
|||
|mentors=Fran, vin-ivar, zfe, kvld, Schindler}} |
|||
{{Taskidea |
|||
|type=code |
|||
|title=write 10 lexical selection rules |
|||
|description=Write 10 lexical selection rules for a pair already set up with [[lexical selection]] |
|||
|mentors=Fran, vin-ivar, zfe, Unhammer}} |
|||
{{Taskidea |
|||
|type=code |
|||
|title=write 10 constraint grammar rules |
|||
|description=[[Constraint grammar]] is a rule-based approach to selecting linguistic readings in ambiguous cases, to improve translation quality etc. See the [[Constraint grammar]] page for an introduction. |
|||
|mentors=Fran, vin-ivar, zfe, kvld, Unhammer}} |
|||
{{Taskidea |
|||
|type=research |
|||
|title=Document resources for a language |
|||
|description=Document resources for a language without resources already documented on the wiki. [[Task ideas for Google Code-in/Documentation of resources|read more...]] |
|||
|mentors=Jonathan, vin-ivar, zfe, Schindler |
|||
|multi=X|beginner=X}} |
|||
{{Taskidea |
|||
|type=research |
|||
|title=Write a contrastive grammar |
|||
|description=Document 6 differences between two (preferably related) languages and where they would need to be addressed (morph analysis, transfer, etc). Use a grammar book/resource for inspiration. Each difference should have no fewer than 3 examples. Put your work on the Apertium wiki under [[Language1_and_Language2/Contrastive_grammar]]. See [[Farsi_and_English/Pending_tests]] for an example of a contrastive grammar that a previous GCI student made. |
|||
|mentors=vin-ivar, Jonathan, Fran, zfe, Schindler |
|||
|beginner=X |
|||
|multi=X |
|||
}} |
|||
{{Taskidea|type=research|mentors=Flammie|tags=hun,dix |
|||
|title=apertium-hun: match existing apertium-hun paradigms with morphdb.hu |
|||
|description=Morphdb.hu is another implementation of Hungarian morphology that has a large lexicon. In order to convert it to Apertium format, the classification of the words needs to be mapped to the one used in Apertium.}} |
|||
{{Taskidea|type=code|mentors=Flammie|tags= |
|||
|title=apertium-hun: convert hunmorph.db into apertium |
|||
|description=See the prerequisite task above. }} |
|||
{{Taskidea|type=code|mentors=Flammie|tags=fin,dix |
|||
|title=apertium-fin-eng: go through the lexicon for potential rubbish words |
|||
|description=Apertium's Finnish–English dictionary has been converted from projects, like FinnWordNet, that have a lot of pairs unsuitable for MT; find and delete them from the file. |
|||
}} |
|||
{{Taskidea|type=code|mentors=Flammie|tags=eng,dix |
|||
|title=apertium-fin-eng: add words from apertium-fin-eng to apertium-eng |
|||
|description=grep for English words in apertium-fin-eng.fin-eng.dix and classify them according to paradigms. See also: [[Apertium English]]}} |
|||
{{Taskidea|type=code|mentors=Flammie|tags=apy |
|||
|title=apertium-apy: add i/o formats |
|||
|description=Currently APy web queries get responses in an ad hoc JSON format. Research and implement interoperability with further formats.}} |
|||
{{Taskidea|type=code|mentors=Flammie|tags=apy |
|||
|title=apertium-apy: write metadata about apertium language pairs |
|||
|description=Produce metadata in CMDI format that can be deployed for CLARIN.}} |
|||
{{Taskidea|type=code|mentors=Flammie|tags=apy |
|||
|title=apertium-apy: make more parts of apertium-pipeline on web |
|||
|description=apertium.org has a web service interface for getting translations or morphological analyses. This should be extended to other functions of Apertium as well. More information: [[Apertium Apy]].}} |
|||
{{Taskidea|type=code|mentors=Flammie|tags=apy |
|||
|title=Finish suggest-a-word feature so it can be deployed to apertium.org |
|||
|description=A version of the apertium.org translator from last GSoC lets the user suggest fixes for unknown-word translations, among other things, but it is not deployed to the server.}} |
|||
{{Taskidea|type=code|mentors=Flammie|tags=apy |
|||
|title=Further developments to suggest a word |
|||
|description=Currently suggested words may be added to the wiki by a service; it would also make sense to e.g. let users log in and get attributed as contributors, as well as other improvements.}} |
|||
{{Taskidea|type=code|mentors=Fran |
|||
|title=Fix ordering of dependencies in CG matxin format |
|||
}} |
|||
{{Taskidea|type=code|mentors=vin-ivar, Unhammer, (Flammie?) |
|||
|title=CG syntax highlighting plugin for a text editor |
|||
|description=Write a syntax file for your favourite text editor that provides fancy syntax highlighting for Constraint Grammar |
|||
}} |
|||
{{Taskidea|type=code|mentors=vin-ivar |
|||
|title=Package apertium-lint to install to a prefix |
|||
|description=apertium-lint currently installs with pip, modify that to allow passing a flag for installing it to a prefix |
|||
}} |
|||
{{Taskidea|type=quality|mentors=Unhammer, Jonathan, Kira |
|||
|title=Fix a bug in Apertium html-tools |
|||
|description=Fix a currently open issue with [https://github.com/goavki/apertium-html-tools/issues html-tools] in consultation with your mentor. |
|||
|tags=multi,html,js,html-tools |
|||
|multi=X}} |
|||
{{Taskidea|type=quality|mentors=Unhammer, Jonathan, Kira |
|||
|title=Fix a bug in Apertium APy |
|||
|description=Fix a currently open issue with [https://github.com/goavki/apertium-apy/issues APy] in consultation with your mentor. |
|||
|tags=multi,python,apy |
|||
|multi=X}} |
|||
{{Taskidea|type=code|mentors=vin-ivar |
|||
|title=Script to get resources from GF |
|||
|description=Write a script to scrape words from one particular paradigm in GF and make it usable in Apertium. |
|||
}} |
|||
{{Taskidea|type=code|mentors=vin-ivar |
|||
|title=Create a list of text editors compatible with different scripts |
|||
|description=Create a list of ten text editors and document how well they handle Latin-script text, RTL text (Arabic), combining characters (Devanagari), etc. Document any bugs with e.g. copy/paste and tab indentation. |
|||
}} |
|||
{{Taskidea|type=code|mentors=vin-ivar |
|||
|title=Write a script to strip apertium morphological information from CONLL-U files |
|||
|description=Write a script to strip apertium morphological information from CONLL-U files so the dependency trees can be rendered okay by the online tools. |
|||
}} |
|||
{{Taskidea|type=research|mentors=Jonathan |
|||
|title=Investigate FST backends for Swype-type input |
|||
|description=Investigate what options exist for implementing an FST (of the sort used in Apertium [[spell checking]]) for auto-correction into an existing open source Swype-type input method on Android. You don't need to do any coding, but you should determine what would need to be done to add an FST backend into the software. Write up your findings on the Apertium wiki. |
|||
|mentors=Jonathan |
|||
|tags=spelling,android |
|||
}} |
|||
{{Taskidea|type=code|mentors=Fran|tags=c++ |
|||
|title=Fix a memory leak in matxin-transfer |
|||
|description=The matxin-transfer program is a component of the [[Matxin]] MT system, a sister system to Apertium. Run valgrind on the code and find and fix a memory leak. There may be several. |
|||
}} |
|||
{{Taskidea|type=code|mentors=Bech |
|||
|title=Write a tool to help test bidix coherence |
|||
|description=The tool should generate a file with each lemma of the main categories (at least nouns, adjectives and verbs) found in a bidix. This file is then translated to the second language and back to the first one; looking for changes makes it possible to detect transfer problems and changes of meaning. |
|||
}} |
|||
{{Taskidea|type=quality|mentors=Jonathan, sushain, wei2912 |
|||
|title=fix any begiak issue |
|||
|description=Fix any open issue for [https://github.com/goavki/phenny/issues begiak] (Apertium's IRC bot), to be chosen in consultation with your mentor. |
|||
|tags=python,irc |
|||
|multi=X}} |
|||
{{Taskidea|type=quality|mentors=Jonathan, sushain, Unhammer, wei2912 |
|||
|title=merge phenny upstream into begiak |
|||
|description=Merge upstream patches etc. into [https://github.com/goavki/phenny/issues begiak] (Apertium's IRC bot). |
|||
|tags=git,irc}} |
|||
{{Taskidea|type=quality|mentors=Jonathan, sushain, Unhammer, wei2912 |
|||
|title=open a pull request for merging begiak modules into upstream |
|||
|description=Open a pull request to merge features from [https://github.com/goavki/phenny/issues begiak] (Apertium's IRC bot) into upstream. |
|||
|tags=git,irc}} |
|||
{{Taskidea|type=code|mentors=Jonathan, sushain, Unhammer, wei2912 |
|||
|title=begiak interface to Apertium's web API |
|||
|description=Write a module for [[begiak]] (Apertium's IRC bot) that provides access to at least one feature of [[APy]] (Apertium's web API). You may want to base the code off begiak's Apertium translation module (which may not be in 100% working order...). |
|||
|tags=irc,apy |
|||
|multi=X}} |
|||
{{Taskidea|type=research|mentors=Jonathan |
|||
|title=tesseract interface for apertium languages |
|||
|description=Find out what it would take to integrate apertium or voikkospell into tesseract. Document thoroughly available options on the wiki. |
|||
|tags=spelling,ocr}} |
|||
{{Taskidea|type=interface|mentors=Jonathan,sushain |
|||
|title=Abstract the formatting for the Html-tools interface. |
|||
|description=The interface for [[html-tools]] (Apertium's website framework) should be easily customisable so that people can make it look how they want. The task is to abstract the formatting and make one or more new stylesheets to change the appearance. This is basically making a way of "skinning" the interface. |
|||
|tags=css,html-tools}} |
|||
{{Taskidea|title=improvements to lexc plugin for vim|type=quality|mentors=Jonathan |
|||
|description=A vim syntax definition file for lexc is presented on the following wiki page: [[Apertium-specific conventions for lexc#Syntax highlighting in vim]]. This plugin works, but it has some issues: (1) comments on LEXICON lines are not highlighted as comments, (2) editing lines with comments (or similar) can be really slow, (3) the lexicon a form points at is not highlighted distinctly from the form (e.g., in the line «асқабақ:асқабақ N1 ; ! "pumpkin"», N1 should be highlighted somehow). Modify or rewrite the plugin to fix these issues. |
|||
|tags=vim |
|||
}} |
|||
{{Taskidea|title=Write a transliteration plugin for mediawiki|mentors=Jonathan|type=code |
|||
|description=Write a mediawiki plugin similar in functionality (and perhaps implementation) to the way the [http://kk.wikipedia.org/ Kazakh-language wikipedia]'s orthography changing system works ([http://wiki.apertium.org/wiki/User:Stan88#How_to_enable_multiple_Kazakh_language-variants_on_a_mediawiki_instance_.3F documented by a previous GCI student]). It should be able to be directed to use any arbitrary mode from an apertium mode file installed in a pre-specified path on a server. |
|||
|tags=php}} |
|||
{{Taskidea |
|||
|type=documentation |
|||
|mentors=Schindler |
|||
|title=add comments to .dix file symbol definitions |
|||
|tags=dix |
|||
}} |
|||
{{Taskidea |
|||
|type=documentation |
|||
|mentors=Schindler |
|||
|title=find symbols that aren't on the list of symbols page |
|||
|description=Go through symbol definitions in Apertium dictionaries in svn (.lexc and .dix format), and document any symbols you don't find on the [[List of symbols]] page. This task is fulfilled by adding at least one class of related symbols (e.g., xyz_*) or one major symbol (e.g., abc), along with notes about what it means. |
|||
|tags=wiki,lexc,dix |
|||
}} |
|||
{{Taskidea |
|||
|type=code |
|||
|title=conllu parser and searching |
|||
|description=Write a script (preferably in python3) that will parse files in conllu format, and perform basic searches, such as "find a node that has an nsubj relation to another node that has a noun POS" or "find all nodes with a cop label and a past feature". A rough sketch is given just after this table. |
|||
|tags=python,dependencies |
|||
|mentors=Jonathan, Fran |
|||
}} |
|||
{{Taskidea |
|||
|type=code |
|||
|title=group and count possible lemmas output by guesser |
|||
|mentors=Jonathan, Fran |
|||
|description=Currently a "guesser" version of Apertium transducers can output a list of possible analyses for unknown forms. Develop a new pipleine, preferably with shell scripts or python, that uses a guesser on all unknown forms in a corpus, and takes the list of all possible analyses, and output a hit count of the most common combinations of lemma and POS tag. |
|||
|tags=guesser,transducers,shellscripts |
|||
}} |
|||
{{Taskidea |
|||
|type=code |
|||
|title=vim mode/tools for annotating dependency corpora in CG3 format |
|||
|mentors=Jonathan, Fran |
|||
|description=includes formatting, syntax highlighting, navigation, adding/removing nodes, updating node numbers, etc. |
|||
|tags=vim,dependencies,CG3 |
|||
}} |
|||
{{Taskidea |
|||
|type=code |
|||
|title=vim mode/tools for annotating dependency corpora in CoNLL-U format |
|||
|mentors=Jonathan, Fran |
|||
|description=includes formatting, syntax highlighting, navigation, adding/removing nodes, updating node numbers, etc. |
|||
|tags=vim,dependencies,conllu |
|||
}}

{{Taskidea
|type=quality
|title=figure out one-to-many bug in the [[lsx module]]
|mentors=Jonathan, Fran
|description=
|tags=C++,transducers,lsx
}}

{{Taskidea
|type=code
|title=add an option for reverse compiling to the [[lsx module]]
|mentors=Jonathan, Fran
|description=This should be simple, as it can just leverage the existing lttoolbox options for left-right / right-left compiling.
|tags=C++,transducers,lsx
}}

{{Taskidea
|type=quality
|title=remove extraneous functions from lsx-comp and clean up the code
|mentors=Jonathan, Fran
|description=
|tags=C++,transducers,lsx
}}

{{Taskidea
|type=quality
|title=remove extraneous functions from lsx-proc and clean up the code
|mentors=Jonathan, Fran
|description=
|tags=C++,transducers,lsx
}}

{{Taskidea
|type=code
|title=script to test coverage over wikipedia corpus
|mentors=Jonathan
|description=Write a script (in python or ruby) that in one mode checks out a specified language module to a given directory, compiles it (or updates it if it already exists), and then gets the most recent nightly wikipedia archive for that language and runs coverage over it (doing as much as possible in RAM). In another mode, it compiles the language pair in a docker instance that it then disposes of after successfully running coverage. Scripts already exist in Apertium for finding where a wikipedia is, extracting a wikipedia archive into a text file, and running coverage.
|tags=python,ruby,wikipedia
}}</table> |
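
The "conllu parser and searching" task above asks for exactly the kind of script sketched below. This is only a minimal illustration, assuming standard ten-column CoNLL-U input and Universal Dependencies-style labels such as nsubj, cop and Tense=Past; the query functions and their names are placeholders, not a required API.

<pre>
#!/usr/bin/env python3
"""Tiny CoNLL-U reader with two example searches (illustrative sketch only)."""
import sys

FIELDS = ["id", "form", "lemma", "upos", "xpos", "feats", "head", "deprel", "deps", "misc"]

def read_conllu(path):
    """Yield sentences as lists of token dicts."""
    sent = []
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            line = line.rstrip("\n")
            if not line:                    # blank line ends a sentence
                if sent:
                    yield sent
                    sent = []
            elif line.startswith("#"):      # sentence-level comments
                continue
            else:
                cols = line.split("\t")
                if len(cols) != 10 or "-" in cols[0] or "." in cols[0]:
                    continue                # skip malformed, multiword-token and empty-node lines
                tok = dict(zip(FIELDS, cols))
                tok["feats"] = set(tok["feats"].split("|")) if tok["feats"] != "_" else set()
                sent.append(tok)
    if sent:
        yield sent

def nsubj_of_noun(sent):
    """Tokens that stand in an nsubj relation to a head with a noun POS."""
    by_id = {t["id"]: t for t in sent}
    return [t for t in sent
            if t["deprel"] == "nsubj" and by_id.get(t["head"], {}).get("upos") == "NOUN"]

def past_copulas(sent):
    """Tokens with a cop label and a past-tense feature."""
    return [t for t in sent if t["deprel"] == "cop" and "Tense=Past" in t["feats"]]

if __name__ == "__main__":
    for sentence in read_conllu(sys.argv[1]):
        for tok in nsubj_of_noun(sentence):
            print(tok["form"], "is an nsubj of a noun")
</pre>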
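
For the "group and count possible lemmas output by guesser" task, the counting step could look roughly like the sketch below. It assumes guesser output in the Apertium stream format (e.g. ^form/lemma<n><pl>/lemma2<v><past>$) on standard input and treats the first tag of each analysis as the POS; both assumptions would need to be checked against the real guesser output.

<pre>
#!/usr/bin/env python3
"""Count (lemma, first tag) combinations in guesser output (illustrative sketch only)."""
import re
import sys
from collections import Counter

# one lexical unit: ^surface/analysis1/analysis2/...$
UNIT = re.compile(r"\^[^/$]*/([^$]*)\$")

counts = Counter()
for line in sys.stdin:
    for m in UNIT.finditer(line):
        for analysis in m.group(1).split("/"):        # each alternative analysis
            lemma, _, tags = analysis.partition("<")
            pos = tags.split(">")[0] if tags else ""  # first tag as a rough POS
            counts[(lemma, pos)] += 1

for (lemma, pos), n in counts.most_common(20):
    print("%d\t%s\t<%s>" % (n, lemma, pos))
</pre>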

==Task drafts==
* (multiple tasks) Take one of the old and abandoned GSoC projects and get it compiling/running/working/documented
* Make a wiki page of GSoC projects that were never "integrated" (i.e. turned into abandonware)
* Make a regex printer for the binary transfer files, e.g. given <def-cat n="foo"><cat-item n="n.*"/></def-cat> it will print foo\t<n>(<[^>]+>)+ (see the sketch at the end of this list)
* a)
** Parse language ''a'' to abstract syntax.
*** Take ''n''-best AS trees, and linearise to language ''b''
*** Score all linearisations on a language model of ''b''
** Choose the AS tree which is ranked top by the language model
* Do the same, but for LEXC
* Write a program to guess the transitivity of Turkic verbs based on a corpus.
* Do something with Scandinavian triplets (e.g. triplets of [nno, nob, dan] words) to get equivs in Swedish.
* Categorise Turkic adjectives automatically
* Find and fix errors in Swedish->{Nynorsk,Bokmål,Danish} translation. |
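
For the regex-printer draft above, the core of the idea could be sketched as below with Python's standard XML parser. The rule that maps a tag pattern such as n.* to <n>(<[^>]+>)+ follows the single example given in the draft; real .t1x files also use lemma-based cat-items, which this sketch simply skips.

<pre>
#!/usr/bin/env python3
"""Print a rough stream-format regex for each <def-cat> in a transfer file (sketch only)."""
import sys
import xml.etree.ElementTree as ET

def tags_to_regex(tags):
    """Turn a cat-item tag pattern like 'n.*' into something like '<n>(<[^>]+>)+'."""
    parts = []
    for t in tags.split("."):
        parts.append("(<[^>]+>)+" if t == "*" else "<%s>" % t)
    return "".join(parts)

root = ET.parse(sys.argv[1]).getroot()
for def_cat in root.iter("def-cat"):
    patterns = []
    for item in def_cat.iter("cat-item"):
        # the draft's example uses n="..."; real transfer files use tags="..."
        tags = item.get("tags") or item.get("n") or ""
        if tags:
            patterns.append(tags_to_regex(tags))
    if patterns:
        print("%s\t%s" % (def_cat.get("n"), "|".join(patterns)))
</pre>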

== Corrections to Russian tasks ==

[[Category:Google Code-in]]

== Old tasks (2011)==

|-
|align=center| {{sc|code}} || 1. Hard || Even up the coverage of the Serbo-Croatian and Macedonian morphological analysers || There are words in the Macedonian morphological analyser which do not have a pair in the Serbo-Croatian analyser. Extract 100 of them, translate them, add them to the bidix and assign a paradigm to each one of them in the Serbo-Croatian analyser. ||align=center| 13-15 hours || [[User:Krvoje|Hrvoje Peradin]], [[User:Francis Tyers|Francis Tyers]]
|} |

==Task list==

BULK IMPORT COMPLETE.

Any edits you make to the below tables will not have any impact on the contents of the task tracker. Please edit tasks there.

=== Misc tools ===

{|class="wikitable sortable"
! Category !! Title !! Description !! Mentors
|-
| {{sc|code}} || Unigram tagging mode for <code>apertium-tagger</code> || Edit the <code>apertium-tagger</code> code to allow for lexicalised unigram tagging. This would basically choose the most frequent analysis for each surface form of a word. <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. '''up2015'''|| [[User:Francis Tyers|Francis Tyers]] [[User:Wei2912|Wei En]] |
|-
| {{sc|code}} || Data format for the unigram tagger || Come up with a binary storage format for the data used for the unigram tagger. It could be based on the existing <code>.prob</code> format. <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. '''up2015''' || [[User:Francis Tyers|Francis Tyers]] [[User:Wei2912|Wei En]] |
|-
| {{sc|code}} || Add tag combination back-off to unigram tagger. || Modify the unigram tagger to allow for back-off to tag sequence in the case that a given form is not found. <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. '''up2015''' || [[User:Francis Tyers|Francis Tyers]] [[User:Wei2912|Wei En]] |
|-
| {{sc|code}} || Prototype unigram tagger. || Write a simple unigram tagger in a language of your choice. A minimal sketch appears below this table. <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. '''up2015''' || [[User:Francis Tyers|Francis Tyers]] [[User:Wei2912|Wei En]]
|-
| {{sc|code}} || Training for unigram tagger || Write a program that trains a model suitable for use with the unigram tagger. <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. '''up2015''' || [[User:Francis Tyers|Francis Tyers]] [[User:Wei2912|Wei En]] |
|-
| {{sc|code}} || make voikkospell understand apertium stream format input || Make voikkospell understand apertium stream format input, e.g. ^word/analysis1/analysis2$, voikkospell should only interpret the 'word' part to be spellchecked. <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. '''up2015''' || [[User:Francis Tyers]] [[User:Firespeaker]] |
|-
| {{sc|code}} || make voikkospell return output in apertium stream format || make voikkospell return output suggestions in apertium stream format, e.g. ^correctword$ or ^incorrectword/correct1/correct2$ <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. '''up2015''' || [[User:Francis Tyers]] [[User:Firespeaker]] |
|-
| {{sc|code}} || libvoikko support for OS X || Make a spell server for OS X's system-wide spell checker to use arbitrary languages through libvoikko. See https://developer.apple.com/library/mac/documentation/Cocoa/Conceptual/SpellCheck/Tasks/CreatingSpellServer.html#//apple_ref/doc/uid/20000770-BAJFBAAH for more information <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. '''up2015''' || [[User:Francis Tyers]] [[User:Firespeaker]] |
|-
| {{sc|documentation}} || document: setting up libreoffice voikko on Ubuntu/debian || document how to set up libreoffice voikko working with a language on Ubuntu and debian <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. '''up2015''' || [[User:Francis Tyers]] [[User:Firespeaker]] |
|-
| {{sc|documentation}} || document: setting up libreoffice voikko on Fedora || document how to set up libreoffice voikko working with a language on Fedora <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. '''up2015''' || [[User:Francis Tyers]] [[User:Firespeaker]] |
|-
| {{sc|documentation}} || document: setting up libreoffice voikko on Windows || document how to set up libreoffice voikko working with a language on Windows <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. '''up2015''' || [[User:Francis Tyers]] [[User:Firespeaker]] |
|-
| {{sc|documentation}} || document: setting up libreoffice voikko on OS X || document how to set up libreoffice voikko working with a language on OS X <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. '''up2015''' || [[User:Francis Tyers]] [[User:Firespeaker]] |
|-
| {{sc|documentation}} || document how to set up libenchant to work with libvoikko || Libenchant is a spellchecking wrapper. Set it up to work with libvoikko, a spellchecking backend, and document how you did it. You may want to use a spellchecking module available in apertium for testing. <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. '''up2015''' || [[User:Francis Tyers]] [[User:Firespeaker]] |
|-
| {{sc|code}} || geriaoueg lookup code || firefox/iceweasel plugin which queries apertium API for a word by sending a context (±n words) and the position of the word in the context and gets translation for language pair xxx-yyy <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. '''up2015''' || [[User:Francis Tyers]] [[user:Firespeaker]] |
|-
| {{sc|code}} || geriaoueg hovering the right way || Fix the geriaoueg plugins so that the popup stays there until you hover off a word, just like normal hovering. This will involve a redesign of the way the hovering code works. The plugin also crashes sometimes when dealing with urls, but it seems to be related to this issue. It'd be good if it stops crashing in those cases.<br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. '''up2015''' || [[User:Francis Tyers]] [[User:Firespeaker]] |
|-
| {{sc|code}} || Translate page feature for geriaoueg firefox & chrome plugins || Add functionality to [[Geriaoueg]] plugins for chrome and firefox that lets them not just gloss words but translate an entire page with apertium, much like existing corporate browser plugins. Don't worry about language detection and other complicated problems for now.<br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. '''up2015''' || [[User:Francis Tyers]] [[User:Firespeaker]] |
|-
| {{sc|quality}} || make apertium-quality work with python3.3 on all platforms || migrate apertium-quality away from distribute to newer setup-tools so it installs correctly in more recent versions of python (known incompatible: python3.3 OS X, known compatible: MacPorts python3.2) <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. '''up2015''' || [[User:Firespeaker]] |
|-
| {{sc|quality}}, {{sc|code}} || Get bible aligner working (or rewrite it) || trunk/apertium-tools/bible_aligner.py - Should take two bible translations and output a tmx file with one verse per entry. There is a standard-ish plain-text bible translation format that we have bible translations in, and we have files that contain the names of verses of various languages mapped to English verse names <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. '''up2015''' || [[User:Firespeaker]] [[User:Sereni]] |
|-
| {{sc|research}} || tesseract interface for apertium languages || Find out what it would take to integrate apertium or voikkospell into tesseract. Document thoroughly available options on the wiki. <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. '''up2015''' || [[User:Firespeaker]] |
|-
| {{sc|code}} || Syntax tree visualisation using GNU bison || Write a program which reads a grammar using bison, parses a sentence and outputs the syntax tree as text, or graphViz or something. Some example bison code can be found [https://svn.code.sf.net/p/apertium/svn/branches/transfer4 here]. <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. '''up2015''' || [[User:Francis Tyers]] [[User:Mlforcada]] |
|-
| {{sc|code}} || make concordancer work with output of analyser || Allow [http://pastebin.com/raw.php?i=KG8ydLPZ spectie's concordancer] to accept an optional apertium mode and directory (implement via argparse). When it has these, it should run the corpus through that apertium mode and search against the resulting tags and lemmas as well as the surface forms. E.g., the form алдым might have the analysis via an apertium mode of ^алдым/алд{{tag|n><px1sg}}{{tag|nom}}/ал{{tag|v><tv}}{{tag|ifi><p1}}{{tag|sg}}, so a search for "px1sg" should bring up this word. <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. '''up2015''' || [[User:Francis Tyers]] [[User:Firespeaker]] |
|-
| {{sc|code}} || convert a current transducer for a language using lexc+twol to a guesser || Figure out how to generate a guesser for a language module that uses lexc for morphotactics and twol for morphophonology (e.g., apertium-kaz). One approach to investigate would be to generate all the possible archiphoneme representations of a given form and run the lexc guesser on that. <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. '''up2015''' || [[User:Firespeaker]] [[User:Flammie]] |
|-
| {{sc|code}} || let apertium-init support giella pairs || [[Apertium-init]] is a tool to bootstrap a new language module or translation pair, with build rules and minimal data. It doesn't yet support pairs that depend on [http://wiki.apertium.org/w/index.php?title=Category:Giellatekno Giellatekno] language modules, we would like this. <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. '''up2015''' || [[User:Unhammer]] [[User:Francis Tyers]] |
|-
| {{sc|code}} || create lt-compose tool to compose two transducers || This should do what [https://kitwiki.csc.fi/twiki/bin/view/KitWiki/HfstCompose hfst-compose] does, but for [[lttoolbox]] transducers. <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. '''up2015''' || [[User:Unhammer]] |
|-
| {{sc|documentation}},{{sc|research}} || create and test a configuration file for simpledix || [http://wiki.apertium.org/wiki/User:Dtr5 Simpledix] tries to help inexperienced users on the task of inserting words into Apertium dictionaries. But it needs paradigm description files for generating meaningful configuration files. Write and test a description file for the Apertium pair of your choice, and report possible improvements for the procedure. || [[User:dtr5]] |
|-
|} |
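
For the "Prototype unigram tagger" task above, a minimal sketch is given below. It assumes a toy training corpus with one sentence per line and tokens written as form/TAG; this is only an illustration of lexicalised unigram tagging, not the format or behaviour of the real apertium-tagger.

<pre>
#!/usr/bin/env python3
"""Toy lexicalised unigram tagger (illustrative sketch only)."""
import sys
from collections import Counter, defaultdict

def train(path):
    counts = defaultdict(Counter)   # surface form -> Counter of tags
    tag_totals = Counter()          # overall tag frequencies, used for unknown forms
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            for token in line.split():
                form, _, tag = token.rpartition("/")
                counts[form][tag] += 1
                tag_totals[tag] += 1
    return counts, tag_totals

def tag(tokens, counts, tag_totals):
    fallback = tag_totals.most_common(1)[0][0] if tag_totals else "UNK"
    return [(t, counts[t].most_common(1)[0][0] if counts[t] else fallback) for t in tokens]

if __name__ == "__main__":
    model = train(sys.argv[1])               # training corpus: "the/DET dog/NOUN barks/VERB"
    for line in sys.stdin:
        print(" ".join("%s/%s" % pair for pair in tag(line.split(), *model)))
</pre>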

=== Website and apy ===

{|class="wikitable sortable"
! Category !! Title !! Description !! Mentors
|-
| {{sc|code}} || apertium-apy mode for geriaoueg (biltrans in context) || apertium-apy function that accepts a context (e.g., ±n ~words around word) and a position in the context of a word, gets biltrans output on entire context, and returns translation for the word <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. '''up2015''' || [[User:Francis Tyers]] [[User:Firespeaker]] [[User:Unhammer]], [[User:Sushain]] |
|-
| {{sc|code}} || Website translation in Html-tools || [[Html-tools]] should detect when the user wants to translate a website (similar to how Google Translate does it) and switch to an interface (See "Website translation in [[Html-tools]] (interface)" task) and perform the translation. It should also make it so that new pages that the user navigates to are translated. See [http://sourceforge.net/p/apertium/tickets/50/ ticket 50] for details and progress tracking. <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. '''up2015''' || [[User:Firespeaker]] [[User:Unhammer]], [[User:Sushain]] |
|-
| {{sc|interface}} || Website translation in Html-tools || Add an interface to [[Html-tools]] that shows a webpage in an <iframe> with translation options and a back button to return to text/document translation. See [http://sourceforge.net/p/apertium/tickets/50/ ticket 50] for details and progress tracking. <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel.'''up2015''' || [[User:Firespeaker]] [[User:Unhammer]], [[User:Sushain]] |
|-
| {{sc|code}} || Fix Html-tools crashing on iPads when copying text || Fix [[Html-tools]] so that the Apertium site does not crash on iPads when copying text on any of the modes while maintaining semantic HTML. This task requires having access to an iPad. See [http://sourceforge.net/p/apertium/tickets/42 ticket 42] for details and progress tracking. <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. '''up2015''' || [[User:Firespeaker]] [[User:Unhammer]], [[User:Sushain]] |
|-
| {{sc|code}} || Fix Html-tools copying text on Windows Phone IE || Fix [[Html-tools]] so that the Apertium site allows copying text on WP while maintaining semantic HTML. This task requires having access to a Windows Phone. See [http://sourceforge.net/p/apertium/tickets/42 ticket 42] for details and progress tracking. <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. '''up2015''' || [[User:Firespeaker]] [[User:Unhammer]], [[User:Sushain]]
|-
| {{sc|code}} || APY API keys || Add API key support to [[APY]] but don't overengineer it. See [http://sourceforge.net/p/apertium/tickets/31/ ticket 31] for details and progress tracking. <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. '''up2015''' || [[User:Firespeaker]] [[User:Unhammer]] [[User:Xavivars]], [[User:Sushain]] |
|-
| {{sc|code}} || Localisation of tag attributes on Html-tools || In [[Html-tools]], the meta description tag isn't localized as of now since the text is an attribute. Search engines often display this as their snippet. A possible way to achieve this is using data-text="@content@description". See [http://sourceforge.net/p/apertium/tickets/29/ ticket 29] for details and progress tracking. <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. '''up2015''' || [[User:Firespeaker]] [[User:Unhammer]], [[User:Sushain]] |
|-
| {{sc|code}} || Html-tools font issues || This task concerns a font issue in [[Html-tools]]. See [http://sourceforge.net/p/apertium/tickets/27/ ticket 27] for details and progress tracking. <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel.'''up2015''' || [[User:Firespeaker]] [[User:Unhammer]], [[User:Sushain]] |
|-
| {{sc|code}}, {{sc|interface}} || Auto-select target language || [http://sourceforge.net/p/apertium/tickets/25/ ticket 25] made apertium-html-tools show the available target languages first, but preferably, one of them would be auto-selected as well (maybe with a single visual "blink" to show that something happened there). <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. '''up2015''' || [[User:Firespeaker]] [[User:Unhammer]] [[User:Francis Tyers]], [[User:Sushain]] |
|-
| {{sc|code}} || Maintaining order of user interactions on Html-tools || In [[Html-tools]], if a user clicks a new language choice while translation or detection is proceeding (AJAX callback has not yet returned), the original action will not be cancelled. Make it so that the first action is canceled and overridden by the second. See [http://sourceforge.net/p/apertium/tickets/9/ ticket 9] for details and progress tracking. <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. '''up2015''' || [[User:Firespeaker]] [[User:Unhammer]], [[User:Sushain]] |
|-
| {{sc|code}} || More file formats for APY || [[APY]] does not support DOC, XLS, PPT file translation that require the file being converted to the newer XML based formats through LibreOffice or equivalent and then back. See [http://sourceforge.net/p/apertium/tickets/7/ ticket 7] for details and progress tracking. <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. '''up2015''' || [[User:Firespeaker]] [[User:Unhammer]] [[User:Francis Tyers]], [[User:Sushain]] |
|-
| {{sc|code}} || Improved file translation functionality for APY || [[APY]] needs logging and to be non-blocking for file translation. See [http://sourceforge.net/p/apertium/tickets/7/ ticket 7] for details and progress tracking. <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. '''up2015''' || [[User:Firespeaker]] [[User:Unhammer]] [[User:Francis Tyers]], [[User:Sushain]] |
|-
| {{sc|interface}} || Abstract the formatting for the Html-tools interface. || The [[html-tools]] interface should be easily customisable so that people can make it look how they want. The task is to abstract the formatting and make one or more new stylesheets to change the appearance. This is basically making a way of "skinning" the interface. <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. '''up2015''' || [[User:Firespeaker]], [[User:Sushain]] |
|-
| {{sc|interface}} || Html-tools spell-checker interface || Integrate the spell-checker interface that was designed for [[html-tools]]. It should be enablable in the [[html-tools]] config. See [http://sourceforge.net/p/apertium/tickets/6/ ticket 6] for details and progress tracking. <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. '''up2015''' || [[User:Firespeaker]], [[User:Sushain]] |
|-
| {{sc|code}} || Html-tools spell-checker code || Add code to the [[html-tools]] interface that allows spell checking to be performed. Should send entire string, and be able to match each returned result to its appropriate input word. Should also update as new words are typed (but [https://sourceforge.net/p/apertium/svn/HEAD/tree/trunk/apertium-tools/apertium-html-tools/assets/js/translator.js#l42 not on every keystroke]). See [http://sourceforge.net/p/apertium/tickets/6/ ticket 6] for details and progress tracking. <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. '''up2015''' || [[User:Firespeaker]], [[User:Sushain]] |
|-
| {{sc|code}} || libvoikko support for APY || Write a function for [[APY]] that checks the spelling of an input string via [[libvoikko]] and for each word returns whether the word is correct, and if unknown returns suggestions. Whether segmentation is done by the client or by apertium-apy will have to be figured out. You will also need to add scanning for spelling modes to the initialisation section. Try to find a sensible way to structure the requests and returned data with JSON. Add a switch to allow someone to turn off support for this (use argparse set_false). See [http://sourceforge.net/p/apertium/tickets/6/ ticket 6] for details and progress tracking. <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. '''up2015''' || [[User:Firespeaker]] [[User:Unhammer]], [[User:Sushain]] |
|-
| {{sc|code}} || Html-tools expanding textareas || The input textarea in the [[html-tools]] translation interface does not expand depending on the user's input even when there is significant whitespace remaining on the page. Improvements include varying the length of the textareas to fill up the viewport or expanding depending on input. Both the input and output textareas would have to maintain the same length for interface consistency. Different behavior may be desired on mobile. See [http://sourceforge.net/p/apertium/tickets/4/ ticket 4] for details and progress tracking. <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. '''up2015''' || [[User:Firespeaker]], [[User:Sushain]] |
|-
| {{sc|code}} || Performance tracking in APY || Add a way for [[APY]] to keep track of the number of words in the input and the time between sending input to a pipeline and receiving output, for the last n (e.g., 100) requests, and write a function to return the average words per second over the most recent m < n (e.g., 10) requests. A minimal bookkeeping sketch appears below this table. <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. '''up2015''' || [[User:Firespeaker]] [[User:Unhammer]]
|-
| {{sc|code}} || Language variant picker in Html-tools || In [[html-tools]], displaying language variants as distinct languages in the translator language selector is awkward and repetitive. Allowing users to first select a language and then display radio buttons for choosing a variant below the relevant translation box, if relevant, provides a better user interface. See [http://sourceforge.net/p/apertium/tickets/1/ ticket 1] for details and progress tracking. <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. '''up2015''' || [[User:Firespeaker]] [[User:Unhammer]] [[User:Francis Tyers]], [[User:Sushain]]
|-
| {{sc|research}} || Investigate how to implement HTML-translation that can deal with broken HTML || The old Apertium website had a 'surf-and-translate' feature, but it frequently broke on badly-behaved HTML. Investigate how similar web sites deal with broken HTML when rewriting the internal content of a (possible automatically generated) HTML page. <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. '''up2015''' || [[User:Francis Tyers]] |
|-
| {{sc|code}} || Add pipeline debug action to APY || Add a '''/pipedebug''' action to [[APY]] so that, given a text and a language pair, it does not return only the translation, but the whole flow (like [[Apertium-viewer]] does). That would help identify exactly where APY-only (or null-flush-only) errors happen, and could be useful for debugging in general. [[/Apy pipedebug|Read more...]] <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. '''up2015''' || [[User:Xavivars]] [[User:Unhammer]] [[User:Firespeaker]]
|-
| {{sc|interface}} || Grammar checker interface || Create a grammar checker / proofing html interface. It should send the user input through a given pipeline, and parse the [[Constraint Grammar]] output, turning this back into readable output with underlined words. <!-- next task: clickable words, suggestions --> <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. '''up2015''' || [[User:Unhammer]] |
|-
| {{sc|code}} || Suggest a word to html-tools || The apertium web-translator should have clickable links for different problems in the translation pipeline (marked by #*@) that could lead to a simple form to collect new word suggestions from people. <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. '''up2015''' || [[User:TommiPirinen]], more mentors plz
|-
| {{sc|code}} || Abumatran paradigm guesser integration to html-tools|| The apertium web-translator could link unknown words to some web based word-classification tool that can add them to dixes<br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. '''up2015''' || [[User:TommiPirinen]], more mentors plz |
|-
| {{sc|code}} || User management for paradigm guesser || The abumatran paradigm guesser currently has only admin-driven user management. To let lots of people contribute, with proper attribution but not too much vandalism, an automated lightweight user registration system should be created. <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. '''up2015''' || [[User:TommiPirinen]], more mentors plz
|} |
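
For the "Performance tracking in APY" task above, the bookkeeping itself is small. The sketch below is a plain standalone class, not APY code; the real task would be wiring something like this into APY's request handling.

<pre>
#!/usr/bin/env python3
"""Rolling words-per-second bookkeeping (illustrative sketch, not APY internals)."""
from collections import deque

class PerfTracker:
    def __init__(self, max_requests=100):
        # keep (word_count, seconds) for the last max_requests requests
        self.samples = deque(maxlen=max_requests)

    def record(self, word_count, seconds):
        self.samples.append((word_count, seconds))

    def words_per_second(self, last=10):
        recent = list(self.samples)[-last:]
        words = sum(w for w, _ in recent)
        secs = sum(s for _, s in recent)
        return words / secs if secs else 0.0

if __name__ == "__main__":
    tracker = PerfTracker()
    # pretend three requests were timed elsewhere and recorded here
    for words, seconds in [(120, 0.8), (45, 0.3), (300, 1.9)]:
        tracker.record(words, seconds)
    print("avg words/s over last 10 requests:", round(tracker.words_per_second(10), 1))
</pre>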

=== Pair visualisations ===

{|class="wikitable sortable"
! Category !! Title !! Description !! Mentors
|-
| {{sc|quality}} || fix pairviewer's 2- and 3-letter code conflation problems || [[pairviewer]] doesn't always conflate languages that have two codes. E.g. sv/swe, nb/nob, de/deu, da/dan, uk/ukr, et/est, nl/nld, he/heb, ar/ara, eus/eu are each two separate nodes, but should instead each be collapsed into one node. Figure out why this isn't happening and fix it. Also, implement an algorithm to generate 2-to-3-letter mappings for available languages based on having the identical language name in languages.json instead of loading the huge list from codes.json; try to make this as processor- and memory-efficient as possible. <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. '''up2015''' || [[User:Firespeaker]] |
|-
| {{sc|code}}, {{sc|interface}} || map support for pairviewer ("pairmapper") || Write a version of [[pairviewer]] that instead of connecting floating nodes, connects nodes on a map. I.e., it should plot the nodes to an interactive world map (only for languages whose coordinates are provided, in e.g. GeoJSON format), and then connect them with straight-lines (as opposed to the current curved lines). Use an open map framework, like [http://leafletjs.com leaflet], [http://polymaps.org polymaps], or [http://openlayers.org openlayers] <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. '''up2015''' || [[User:Firespeaker]] |
|-
| {{sc|code}} || coordinates for Mongolic languages || Using the map [https://en.wikipedia.org/wiki/File:Linguistic_map_of_the_Mongolic_languages.png Linguistic map of the Mongolic languages.png], write a file in [https://en.wikipedia.org/wiki/GeoJSON GeoJSON] (or similar) format that can be loaded by pairmapper (or, e.g., converted to kml and loaded in google maps). The file should contain points that are a geographic "center" (locus) for where each Mongolic language on that map is spoken. Use the term "Khalkha" (iso 639-3 khk) for "Mongolisch", and find a better map for Buryat. You can use a capital city for bigger, national languages if you'd like (think Paris as a locus for French). A minimal GeoJSON-writing sketch appears below this table. <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. '''up2015''' || [[User:Firespeaker]] [[User:Sereni]]
|-
| {{sc|code}} || draw languages as areas for pairmapper || Make a map interface that loads data (in e.g. GeoJSON or KML format) specifying areas where languages are spoken, as well as a single-point locus for the language, and displays the areas on the map (something like [http://leafletjs.com/examples/choropleth.html the way the states are displayed here]) with a node with language code (like for [[pairviewer]]) at the locus. This should be able to be integrated into pairmapper, the planned map version of pairviewer. <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. '''up2015''' || [[User:Firespeaker]] |
|-
| {{sc|code}} || georeference language areas for Tatar, Bashqort, and Chuvash || Using the maps listed here, try to define rough areas for where Tatar, Bashqort, and Chuvash are spoken. These areas should be specified in a file in [https://en.wikipedia.org/wiki/GeoJSON GeoJSON] (or similar) format for use by pairmapper's languages-as-areas plugin. Try to be fairly accurate and detailed. Maps to consult include [https://commons.wikimedia.org/wiki/File:Tatarbashkirs1989ru.PNG Tatarsbashkirs1989ru], [https://commons.wikimedia.org/wiki/File:NarodaCCCP.jpg NarodaCCCP] <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. '''up2015''' || [[User:Firespeaker]] [[User:Sereni]] |
|-
| {{sc|code}} || georeference language areas for North Caucasus Turkic languages || Using the map [https://commons.wikimedia.org/wiki/File:Caucasus-ethnic_en.svg Caucasus-ethnic_en.svg], write a file in [https://en.wikipedia.org/wiki/GeoJSON GeoJSON] (or similar) format for use by pairmapper's languages-as-areas plugin. The file should contain specifications for the area(s) the following languages are spoken in: Kumyk, Nogay, Karachay, Balkar. There should be a certain level of detail (e.g., don't just make a shape matching Kazakhstan for Kazakh) and accuracy (i.e., don't just put a square over Kazakhstan and call it the area for Kazakh). <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. '''up2015''' || [[User:Firespeaker]] [[User:Sereni]] |
|-
| {{sc|code}} || georeference language areas for IE and Mongolic Caucasus-area languages || Using the map [https://commons.wikimedia.org/wiki/File:Caucasus-ethnic_en.svg Caucasus-ethnic_en.svg], write a file in [https://en.wikipedia.org/wiki/GeoJSON GeoJSON] (or similar) format for use by pairmapper's languages-as-areas plugin. The file should contain specifications for the area(s) the following languages are spoken in: Ossetian, Armenian, Kalmyk. There should be a certain level of detail (e.g., don't just make a shape matching Kazakhstan for Kazakh) and accuracy (i.e., don't just put a square over Kazakhstan and call it the area for Kazakh). <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. '''up2015''' || [[User:Firespeaker]] [[User:Sereni]] |
|-
| {{sc|code}} || georeference language areas for North Caucasus languages || Using the map [https://commons.wikimedia.org/wiki/File:Caucasus-ethnic_en.svg Caucasus-ethnic_en.svg], write a file in [https://en.wikipedia.org/wiki/GeoJSON GeoJSON] (or similar) format for use by pairmapper's languages-as-areas plugin. The file should contain specifications for the area(s) the following languages are spoken in: Avar, Chechen, Abkhaz, Georgian. There should be a certain level of detail (e.g., don't just make a shape matching Kazakhstan for Kazakh) and accuracy (i.e., don't just put a square over Kazakhstan and call it the area for Kazakh). <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel.'''up2015''' || [[User:Firespeaker]] [[User:Sereni]] |
|-
| {{sc|code}} || georeference language areas for Central Asian languages: Uzbek and Uyghur || Using the map [https://commons.wikimedia.org/wiki/File:Central_Asia_Ethnic_en.svg Central_Asia_Ethnic_en.svg], write a file in [https://en.wikipedia.org/wiki/GeoJSON GeoJSON] (or similar) format for use by pairmapper's languages-as-areas plugin. The file should contain specifications for the areas Uzbek and Uyghur are spoken in, with a certain level of detail (e.g., don't just make a shape matching Kazakhstan for Kazakh) and accuracy (i.e., don't just put a square over Kazakhstan and call it the area for Kazakh). <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. '''up2015''' || [[User:Firespeaker]] [[User:Sereni]] |
|-
| {{sc|quality}}, {{sc|code}} || split nor into nob and nno in pairviewer || Currently in [[pairviewer]], nor is displayed as a language separately from nob and nno. However, the nor pair actually consists of both an nob and an nno component. Figure out a way for pairviewer (or pairsOut.py / get_all_lang_pairs.py) to detect this split. So instead of having swe-nor, there would be swe-nob and swe-nno displayed (connected seamlessly with other nob-* and nno-* pairs), though the paths between the nodes would each still give information about the swe-nor pair. Implement a solution, trying to make sure it's future-proof (i.e., will work with similar sorts of things in the future). <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. '''up2015''' || [[User:Firespeaker]] [[User:Francis Tyers]] [[User:Unhammer]]
|-
| {{sc|quality}}, {{sc|code}} || add support to pairviewer for regional and alternate orthographic modes || Currently in [[pairviewer]], there is no way to detect or display modes like zh_TW. Add support to pairsOut.py / get_all_lang_pairs.py to detect pairs containing abbreviations like this, as well as alternate orthographic modes in pairs (e.g. uzb_Latn and uzb_Cyrl). Also, figure out a way to display these nicely in the pairviewer's front-end. Get creative. I can imagine something like zh_CN and zh_TW nodes that are in some fixed relation to zho (think Mickey Mouse configuration?). Run some ideas by your mentor and implement what's decided on. <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. '''up2015''' || [[User:Firespeaker]] [[User:Francis Tyers]]
|-
| {{sc|code}} || Function that counts stems at all revisions of each bidix involving a specific language || Write a function in python or ruby that takes a language code as input, queries svn to find all language pairs that involve that language (note that there are both two- and three-letter abbreviations in use), count the number of stems in the bilingual dictionary for revision in its history, and output all of this data in a simple json variable. There are scripts that do different pieces of all of this already: [http://svn.code.sf.net/p/apertium/svn/trunk/apertium-tools/wiki-tools/dixTable.py queries svn], [http://svn.code.sf.net/p/apertium/svn/trunk/apertium-tools/family-visualizations/overtime.rb queries svn revisions], [http://wiki.apertium.org/wiki/The_Right_Way_to_count_dix_stems counting bidix stems]. <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. '''up2015''' || [[User:Firespeaker]] |
|-
| {{sc|code}} || Extend visualisation of pairs involving a language in language family visualisation tool || The [http://svn.code.sf.net/p/apertium/svn/trunk/apertium-tools/family-visualizations/ language family visualisation tool] currently has a visualisation of all pairs involving the language. Extend this to include pairs that involve those languages, and so on, until there are no more pairs. This should result in a graph of quite a few languages, with the current language in the middle. Note that if language x is the center, and there are x-y and x-z pairs, but also a y-z pair, this should display the y-z pair with a link, not with an extra z and y node each, connected to the original y and z nodes, respectively. The best way to do this may involve some sort of filtering of the data. <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. '''up2015''' || [[User:Firespeaker]] |
|} |
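
For the georeferencing tasks above, the GeoJSON side of the job is straightforward; the sketch below writes one point locus per language code. The coordinates here are made-up placeholders, not real data for any language, and the property names are only a suggestion.

<pre>
#!/usr/bin/env python3
"""Write language loci as a GeoJSON FeatureCollection (sketch; data is placeholder)."""
import json

# language code -> (longitude, latitude); values are invented for illustration
loci = {
    "xxx": (100.0, 45.0),
    "yyy": (105.5, 47.2),
}

features = [
    {
        "type": "Feature",
        "geometry": {"type": "Point", "coordinates": [lon, lat]},
        "properties": {"iso639-3": code},
    }
    for code, (lon, lat) in loci.items()
]

with open("loci.geojson", "w", encoding="utf-8") as fh:
    json.dump({"type": "FeatureCollection", "features": features}, fh, indent=2)
</pre>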

=== Begiak ===

{|class="wikitable sortable"
! Category !! Title !! Description !! Mentors
|-
| {{sc|quality}} || Generalise phenny/begiak git plugin || Rename the [[begiak]] module to git (instead of github), and test it to make sure it's general enough for at least three common git services (there should already be that many supported, but make sure they all work). For this task, you should fork [https://github.com/jonorthwash/phenny the bot on github] and send a pull request when you're done. <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. || [[User:Firespeaker]] |
|-
| {{sc|quality}} ||fix .randquote || The .randquote function currently fails with "'module' object has no attribute 'Grab'". Fix it. For this task, you should fork [https://github.com/jonorthwash/phenny the bot on github] and send a pull request when you're done. <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. || [[User:Firespeaker]] |
|-
| {{sc|code}} || phenny/begiak git plugin commit info function || Add a function to the [[begiak]] github module to get the status of a commit by reponame and name (similar to what the svn module does), and then find out why commit 6a54157b89aee88511a260a849f104ae546e3a65 in turkiccorpora resulted in the following output, and fix it: Something went wrong: dict_keys(['commits', 'user', 'canon_url', 'repository', 'truncated']). For this task, you should fork [https://github.com/jonorthwash/phenny the bot on github] and send a pull request when you're done. <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. || [[User:Firespeaker]] |
|-
| {{sc|code}} || make begiak use pm's when doing "follow" || The .follow function currently uses notify, which makes everyone have to see the translations. Make it use PM's (/msg) instead; but if several people follow the same person in the same direction, begiak should not make duplicate translation requests. For this task, you should fork [https://github.com/jonorthwash/phenny the bot on github] and send a pull request when you're done. <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. || [[User:Unhammer]] |
|-
| {{sc|code}} || make begiak use ISO 639-3 codes for "follow" || The .follow function currently doesn't understand "swe-dan" for language pairs that use ISO-639-1 codes like "sv-da". Make it understand the 639-3 code. For this task, you should fork [https://github.com/jonorthwash/phenny the bot on github] and send a pull request when you're done. <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. || [[User:Unhammer]]
|-
| {{sc|code}} || phenny/begiak git plugin recent function || Find out why [[begiak]]'s "recent" function (begiak: recent) returns "ValueError: No JSON object could be decoded (file "/usr/lib/python3.2/json/decoder.py", line 371, in raw_decode)" for one of the repos (no permission) and find a way to fix it so it returns the status instead. For this task, you should fork [https://github.com/jonorthwash/phenny the bot on github] and send a pull request when you're done. <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. || [[User:Firespeaker]] |
|-
| {{sc|code}}, {{sc|quality}} || phenny/begiak svn plugin info function || Find out why [[begiak]]'s info function ("begiak info [repo] [rev]") doesn't work and fix it. For this task, you should fork [https://github.com/jonorthwash/phenny the bot on github] and send a pull request when you're done. <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. || [[User:Firespeaker]] |
|-
| {{sc|documentation}} || document any phenny/begiak command that does not have information || Find a command that our IRC bot ([[begiak]]) uses that is not documented, and document how it works both on the [http://wiki.apertium.org/wiki/Begiak Begiak wiki page] and in the code. This will require you to fork [https://github.com/jonorthwash/phenny the bot on github] and send a pull request when you're done. <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. || [[User:Firespeaker]] |
|-
| {{sc|code}} || phenny/begiak wiki modules tell result || Make a function for our IRC bot ([[begiak]]) that allows someone to point another user to a wiki page (apertium wiki or wikipedia), and have it give them the results (e.g. for mentors to point students to resources). It could be an extra function on the .wik and .awik modules. Make sure it allows for all wiki modes in those modules (e.g., .wik.ru) and is intuitive to use. For this task, you should fork [https://github.com/jonorthwash/phenny the bot on github] and send a pull request when you're done. <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. || [[User:Firespeaker]] |
|-
| {{sc|quality}} || find content that phenny/begiak wiki modules don't do a good job with || Identify at least 10 pages or sections on Wikipedia or the apertium wiki that the respective [[begiak]] module doesn't return good output for. These may include content where there's immediately a subsection, content where the first thing is a table or infobox, or content where the first . doesn't end the sentence. Document generalisable scenarios about what the preferred behaviour would be. <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. || [[User:Firespeaker]] |
|-
| {{sc|code}} || make phenny/begiak git and svn modules display urls || When a user asks to display revision information, have [[begiak]] (our IRC bot) include a link to information on the revision. For example, when displaying information for apertium repo revision r57171, include the url http://sourceforge.net/p/apertium/svn/57171/ , maybe even a shortened version. For this task, you should fork [https://github.com/jonorthwash/phenny the bot on github] and send a pull request when you're done. <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. || [[User:Firespeaker]] |
|-
| {{sc|code}} || improve phenny/begiak timezone math || Currently [[begiak]] (our IRC bot) is able to scrape and use data on timezone names, but it can't do math, e.g. CEST-5, GMT+3, etc. Make it support this. A minimal offset-parsing sketch appears below this table. For this task, you should fork [https://github.com/jonorthwash/phenny the bot on github] and send a pull request when you're done. <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. || [[User:Firespeaker]]
|-
| {{sc|code}} || make timezone conversion for phenny/begiak support city names too || Add city name support for timezone conversion in the time plugin for [[begiak]] (our IRC bot). It currently accepts a time in one timezone and a destination timezone, and converts the time, e.g. ".tz 335EST in CET" returns "835CET". But it can't do ".tz 335Indianapolis in CET". You should have it rely on the city support code that's already there. For this task, you should fork [https://github.com/jonorthwash/phenny the bot on github] and send a pull request when you're done. <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. || [[User:Firespeaker]] |
|-
| {{sc|code}} || make city name support phenny/begiak timezone plugin work better || Find a source that maps city names to timezone abbreviations and have the .tz command for [[begiak]] (our IRC bot) scrape and use that data (e.g., ".time Barcelona" should give the current time in CET). The current timezone plugin works, but doesn't support a lot of cities—make it support a lot. For this task, you should fork [https://github.com/jonorthwash/phenny the bot on github] and send a pull request when you're done. <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. || [[User:Firespeaker]] |
|-
| {{sc|code}} || add analysis and generation modes to apertium translation begiak module || Add the ability for the apertium translation module that's part of [[begiak]] (our IRC bot) to query morphological analysis and generation modes. For this task, you should fork [https://github.com/jonorthwash/phenny the bot on github] and send a pull request when you're done. <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. || [[User:Firespeaker]]
|-
| {{sc|code}} || make begiak's version control monitoring channel specific || Our IRC bot ([[begiak]]) currently monitors a series of git and svn repositories. When a commit is made to a repository, the bot displays the commit in all channels. For this task, you should modify both of these modules (svn and git) so that repositories being monitored (listed in the config file) can be specified in a channel-specific way. However, it should default to the current behaviour—channel-specific settings should just override the global monitoring pattern. You should fork [https://github.com/jonorthwash/phenny the bot on github] to work on this task and send a pull request when you're done. <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. || [[User:Firespeaker]] |
|-
| {{sc|code}} || allow admins to modify and delete other people's queues in begiak || Modify the queue module for [[begiak]] (our IRC bot) to let admins (as defined by begiak's config file—there should be a function that'll just check if the person issuing a command is an admin) modify and delete queues for other users. For this task, you should fork [https://github.com/jonorthwash/phenny the bot on github] and send a pull request when you're done. <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. || [[User:Firespeaker]] |
|-
| {{sc|quality}} || Sync begiak with origin and submit PRs back for our changes || For this task, sync [[begiak]] with origin, and send them pull requests for our local changes of relevance. The synching will probably get a little messy, and the pull requests should ideally be one PR per feature (if possible). [http://mispdev.blogspot.com/2013/02/github-cherry-picking-commits-from-pull.html This document] may be of use.<br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. || [[User:Firespeaker]], [[User:Unhammer]] |
|} |
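
For the "improve phenny/begiak timezone math" task above, the offset arithmetic itself is simple. The sketch below is plain Python, not the bot's plugin code, and the base-offset table is a stub standing in for the data the bot already scrapes.

<pre>
#!/usr/bin/env python3
"""Parse expressions like 'GMT+3' or 'CEST-5' and convert a time (illustrative sketch)."""
import re
from datetime import datetime, timedelta, timezone

# stub of base-zone offsets in hours; the real plugin already has this data
BASE_OFFSETS = {"GMT": 0, "UTC": 0, "CET": 1, "CEST": 2, "EST": -5}

def parse_zone(expr):
    """Return a tzinfo for expressions like 'GMT', 'GMT+3' or 'CEST-5'."""
    m = re.fullmatch(r"([A-Z]+)([+-]\d+)?", expr.strip())
    if not m or m.group(1) not in BASE_OFFSETS:
        raise ValueError("unknown timezone expression: %r" % expr)
    hours = BASE_OFFSETS[m.group(1)] + int(m.group(2) or 0)
    return timezone(timedelta(hours=hours), expr.strip())

if __name__ == "__main__":
    # what is 15:35 in GMT+3 expressed in CEST-5?
    t = datetime(2015, 6, 1, 15, 35, tzinfo=parse_zone("GMT+3"))
    print(t.astimezone(parse_zone("CEST-5")).strftime("%H:%M %Z"))
</pre>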

=== Apertium linguistic data ===

{|class="wikitable sortable"
! Category !! Title !! Description !! Mentors
|-
| {{sc|code}}, {{sc|quality}} || {{sc|multi}} Improve the bilingual dictionary of a language pair XX-YY in the incubator by adding 50 word correspondences to it || Languages XX and YY may have rather large dictionaries but a small bilingual dictionary. Add words to the bilingual dictionary and test that the new vocabulary works. Check [http://opus.lingfil.uu.se The OPUS bilingual corpus repository] for sentence-aligned corpora such as Tatoeba. [[/Grow bilingual|Read more]]... <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. '''up2015'''(some) || [[User:Mlforcada]] [[User:Raveesh]] [[User:Vin-ivar]] [[User:Aida]] [[User:Putti]] |
|-
| {{sc|code}}, {{sc|quality}} || {{sc|multi}} Improve the quality of a language pair XX-YY by adding 50 words to its vocabulary || Add words to language pair XX-YY and test that the new vocabulary works. Check [http://opus.lingfil.uu.se The OPUS bilingual corpus repository] for sentence-aligned corpora such as Tatoeba. [[/Add words|Read more]]... <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. '''up2015'''(some) || [[User:Mlforcada]] [[User:ilnar.salimzyan]] [[User:Xavivars]] [[User:Bech]] [[User:Jimregan|Jimregan]] [[User:Unhammer]] [[User:Nikant]] [[User:Fulup|Fulup]] [[User:tunedal]] [[User:Juanpabl]][[User:Youssefsan|Youssefsan]] [[User:Firespeaker]] [[User:Raveesh]] [[User:vin-ivar]] [[User:Aida]] [[User:Putti]] |
|-
| {{sc|code}}, {{sc|quality}} || {{sc|multi}}=2 Find translation bugs by using LanguageTool, and correct them || The LanguageTool grammar/style checker has great rule sets for Catalan and French. Run it on output from Apertium translation into Catalan/French and fix 5 mistakes. '''up2015''' [[/Fix using LanguageTool|Read more]]... || [[User:Xavivars]] |
|-
| {{sc|code}}, {{sc|quality}} || {{sc|multi}} Add/correct one structural transfer rule to an existing language pair || Add or correct a structural transfer rule to an existing language pair and test that it works. [[/Add transfer rule|Read more]]... <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. '''up2015'''[some] || [[User:Mlforcada]] [[User:ilnar.salimzyan]] [[User:Unhammer]] [[User:Nikant]] [[User:Fulup|Fulup]] [[User:Juanpabl]] [[User:Raveesh]] [[User:vin-ivar]] [[User:Aida]] |
|-
| {{sc|code}}, {{sc|quality}} || {{sc|multi}} Write 10 lexical selection rules for a language pair already set up with lexical selection || Add 10 lexical selection rules to improve the lexical selection quality of a pair and test them to ensure that they work. [[/Add lexical-select rules|Read more]]... <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. '''up2015''' (a few: need to add more LPs) || [[User:Mlforcada]], [[User:Francis Tyers]] [[User:ilnar.salimzyan]] [[User:Unhammer]] [[User:Nikant]] [[User:Firespeaker]] [[User:Putti]] [[User:Raveesh]] [[User:vin-ivar]] [[User:Aida]] (more mentors welcome) |
|-
| {{sc|code}} || {{sc|multi}} Set up a language pair to use lexical selection and write 5 rules || First set up a language pair to use the new lexical selection module (this will involve changing configure scripts, makefile and [[modes]] file). Then write 5 lexical selection rules. [[/Setup and add lexical selection|Read more]]... <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. '''up2015''' || [[User:Mlforcada]], [[User:Francis Tyers]] [[User:Unhammer]] [[User:Fulup|Fulup]] [[User:pankajksharma]] [[User:Aida]] (more mentors welcome) |
|-
| {{sc|code}}, {{sc|quality}} || {{sc|multi}} Write 10 constraint grammar rules to repair part-of-speech tagging errors || Find some tagging errors and write 10 constraint grammar rules to fix the errors. [[/Add constraint-grammar rules|Read more]]... <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. '''up2015''' (some) || [[User:Mlforcada]], [[User:Francis Tyers]] [[User:ilnar.salimzyan]] [[User:Unhammer]] [[User:Fulup|Fulup]] [[User:Aida]] (more mentors welcome) |
|-
| {{sc|code}} || {{sc|multi}} Set up a language pair such that it uses constraint grammar for part-of-speech tagging || Find a language pair that does not yet use constraint grammar, and set it up to use constraint grammar. After doing this, find some tagging errors and write five rules for resolving them. [[/Setup constraint grammar for a pair|Read more]]... <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. '''up2015''' (some) || [[User:Mlforcada]], [[User:Francis Tyers]] [[User:Unhammer]] [[User:Aida]] |
|-
| {{sc|quality}} || {{sc|multi}} Compare Apertium with another MT system and improve it || This task aims at improving an Apertium language pair when a web-accessible system exists for it in the 'net. Particularly good if the system is (approximately) rule-based such as [http://www.lucysoftware.com/english/machine-translation/lucy-lt-kwik-translator-/ Lucy], [http://www.reverso.net/text_translation.aspx?lang=EN Reverso], [http://www.systransoft.com/free-online-translation Systran] or [http://www.freetranslation.com/ SDL Free Translation]: (1) Install the Apertium language pair, ideally such that the source language is a language you know (L₂) and the target language a language you use every day (L₁). (2) Collect a corpus of text (newspaper, wikipedia). Segment it into sentences (using e.g., libsegment-java or a similar processor and an [https://en.wikipedia.org/wiki/Segmentation_Rules_eXchange SRX] segmentation rule file borrowed from e.g. OmegaT) and put each sentence on a line. Run the corpus through Apertium and through the other system. Select those sentences where both outputs are very similar (e.g., 90% coincident). Decide which one is better. If the other system's output is better than Apertium's, think of what modification could be done for Apertium to produce the same output, and make 3 such modifications.<br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. '''up2015''' || [[User:Mlforcada]] [[User:Jimregan|Jimregan]] [[User:Aida]] (alternative mentors welcome)
|||
|- |
|||
| {{sc|documentation}} || {{sc|multi}} What's difficult about this language pair? || For a language pair that is not in trunk or staging and whose two languages you know well, write a document describing the main problems that Apertium developers would encounter when developing that language pair (for that, you need to know very well how Apertium works). Note that there may be two such documents, one for A→B and the other for B→A. Prepare it in your user space in the Apertium wiki; it may be moved to the main wiki when approved. <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. '''up2015''' || [[User:Mlforcada]] [[User:Jimregan|Jimregan]] [[User:Youssefsan|Youssefsan]] [[User:Aida]] (alternative mentors welcome) |
|||
|- |
|||
| {{sc|research}} || {{sc|multi}} Write a contrastive grammar || Using a grammar book or other resource, document 10 ways in which the grammars of two languages differ, with no fewer than 3 examples of each difference. Put it on the wiki under Language1_and_Language2/Contrastive_grammar. See [[Farsi_and_English/Pending_tests]] for an example of a contrastive grammar that a previous GCI student made. <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. '''up2015''' || [[User:Francis Tyers]] [[User:Firespeaker]] [[User:Sereni]] [[User:Aida]] |
|||
|- |
|||
| {{sc|research}} || {{sc|multi}} Hand annotate 250 words of running text. || Use [[apertium annotatrix]] to hand-annotate 250 words of running text from Wikipedia for a language of your choice. <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. '''up2015''' || [[User:Francis Tyers]] |
|||
|- |
|||
| {{sc|research}} || The most frequent Romance-to-Romance transfer rules || Study the .t1x transfer rule files of Romance language pairs and distill 5-10 rules that are common to all of them, perhaps by rewriting them in some equivalent form. <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. '''up2015''' || [[User:Mlforcada]] |
|||
|- |
|||
| {{sc|research}} || {{sc|multi}} Tag and align Macedonian--Bulgarian corpus || Take a Macedonian--Bulgarian corpus, for example SETimes, tag it using the [[apertium-mk-bg]] pair, and word-align it using GIZA++ (a minimal scripting sketch for this family of tasks follows this table). <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. '''up2015''' || [[User:Francis Tyers]] |
|||
|- |
|||
| {{sc|code}} || Write a program to extract Bulgarian inflections || Write a program to extract Bulgarian inflection information for nouns from Wiktionary, see [https://en.wiktionary.org/wiki/Category:Bulgarian_nouns Category:Bulgarian nouns] <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. || [[User:Francis Tyers]] |
|||
|- |
|||
| {{sc|quality}} || {{sc|multi}} Improve the quality of a language pair by allowing for alternative translations || Improve the quality of a language pair by (a) detecting 5 cases where the (only) translation provided by the bilingual dictionary is not adequate in a given context, (b) adding the lexical selection module to the language pair, and (c) writing effective lexical selection rules that exploit that context to select a better translation. <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. '''up2015''' || [[User:Francis Tyers]] [[User:Mlforcada]] [[User:Unhammer]] [[User:Aida]] |
|||
|- |
|||
| {{sc|code}} || {{sc|multi}} {{sc|depend}} Make sure an Apertium language pair does not mess up (X)HTML formatting || (Depends on someone having performed the task 'Examples of files where an Apertium language pair messes up (X)HTML formatting' above). The task: (1) run the file through Apertium and try to identify where the tags are broken or lost: this is most likely to happen in a structural transfer step; try to identify the rule in which the tag is broken or lost (2) repair the rule: a conservative strategy is to make sure that all superblanks (<b pos="..."/>) are output and are in the same order as in the source file. This may involve introducing new simple blanks (<b/>) and advancing the output of the superblanks coming from the source. (3) test again (4) Submit a patch to your mentor (or commit it if you have already gained developer access) <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. '''up2015''' || [[User:Mlforcada]] (alternative mentors welcome) |
|||
|- |
|||
| {{sc|quality}} || Examples of minimum files where an Apertium language pair messes up wordprocessor (ODT, RTF) formatting || Sometimes, an Apertium language pair takes a valid ODT or RTF source file but delivers an invalid ODT or RTF target file, regardless of translation quality. This can usually be blamed on incorrect handling of superblanks in structural transfer rules. The task: (1) select a language pair (2) Install Apertium locally from the Subversion repository; install the language pair; make sure that it works (3) download a series of ODT or RTF files for testing purposes. Make sure they are opened using LibreOffice/OpenOffice.org (4) translate the valid files with the language pair (5) check if the translated files are also valid ODT or RTF files; select those that aren't (6) find the first source of non-validity and study it, and strip the source file until you just have a small (valid!) source file with some text around the minimum possible example of problematic tags; save each such file and describe the error. <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. '''up2015''' || [[User:Mlforcada]] (alternative mentors welcome) |
|||
|- |
|||
| {{sc|code}} || {{sc|multi}} {{sc|depend}} Make sure an Apertium language pair does not mess up wordprocessor (ODT, RTF) formatting || (Depends on someone having performed the task 'Examples of files where an Apertium language pair messes up wordprocessor formatting' above). The task: (1) run the file through Apertium and try to identify where the tags are broken or lost: this is most likely to happen in a structural transfer step; try to identify the rule in which the tag is broken or lost (2) repair the rule: a conservative strategy is to make sure that all superblanks (<b pos="..."/>) are output and are in the same order as in the source file. This may involve introducing new simple blanks (<b/>) and advancing the output of the superblanks coming from the source. (3) test again (4) Submit a patch to your mentor (or commit it if you have already gained developer access) <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. '''up2015''' || [[User:Mlforcada]] (alternative mentors welcome) |
|||
|- |
|||
| {{sc|code}} || {{sc|multi}} Start a language pair involving Interlingua || Start a new language pair involving [https://en.wikipedia.org/wiki/Interlingua Interlingua] using the [http://wiki.apertium.org/wiki/Apertium_New_Language_Pair_HOWTO Apertium new language HOWTO]. Interlingua is the second most used "artificial" language, after Esperanto. As Interlingua is basically a Romance language, you can use a Romance language as the other language, and Romance-language dictionaries and rules may be easily adapted. Include at least 50 very frequent words (including some grammatical words) and at least one noun-phrase transfer rule in the ia→X direction. <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. '''up2015'''|| [[User:Mlforcada]] [[User:Youssefsan|Youssefsan]] (will reach out also to the interlingua community) |
|||
|- |
|||
| {{sc|research}} || Document materials for a language not yet on our wiki || Document materials for a language not yet on our wiki. This should look something like the page on [[Aromanian]]—i.e., all available dictionaries, grammars, corpora, machine translators, etc., print or digital, where available, whether Free, etc., as well as some scholarly articles regarding the language, especially if about computational resources. <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. || [[User:Firespeaker]] [[User:Francis Tyers]] [[User:Raveesh]] [[User:Aida]] [[User:Unhammer]] |
|||
|- |
|||
| {{sc|research}} || Gujarati Parallel Corpus and Alignment || Collect some parallel corpus for guj-hin, run GIZA++ and verify the alignments. || [[User:Raveesh]] |
|||
|- |
|||
| {{sc|research}} || Urdu-Sindhi Bilingual Dictionary || Add words to bilingual dictionary for Urdu-Sindhi || [[User:Raveesh]] |
|||
|- |
|||
| {{sc|research}} || Hindi-Sindhi Bilingual Dictionary || Create a bilingual dictionary for Hindi-Sindhi (with at least 20 words in each lexical category, such as nouns, verbs, adjectives, adverbs, conjunctions, etc.) || [[User:Raveesh]] |
|||
|- |
|||
| {{sc|research}} || Hindi-Gujarati Bilingual Dictionary || Create a small bilingual dictionary for Hindi-Gujarati || [[User:Raveesh]] |
|||
|- |
|||
| {{sc|research}} || Gujarati morphology || Define some morphological paradigms of Gujarati nouns or verbs (or any other category) and provide some Gujarati words (around 50) belonging to those paradigms. '''up2015''' || [[User:Raveesh]] [[User:Vin-ivar]] |
|||
|- |
|||
| {{sc|research}} || Marathi evaluation || Manually tag 500 random Marathi words (based on the monodix) for evaluation '''up2015''' || [[User:Vin-ivar]] |
|||
|- |
|||
| {{sc|research}} || Swedish tagging evaluation || Run a 500 word Wikipedia page through the Swedish tagger (languages/apertium-swe), and correct the mistakes it makes '''up2015''' || [[User:Unhammer]] |
|||
|- |
|||
| {{sc|research}} || Tag and align Albanian--Macedonian corpus || Take an Albanian--Macedonian corpus, for example SETimes, tag it using the [[apertium-sq-mk]] pair, and word-align it using GIZA++. <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. || [[User:Francis Tyers]] [[User:Sereni]] |
|||
|- |
|||
| {{sc|research}} || Tag and align Albanian--Serbo-Croatian corpus || Take an Albanian--Serbo-Croatian corpus, for example SETimes, tag it using the [[apertium-sq-sh]] pair, and word-align it using GIZA++. <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. || [[User:Francis Tyers]] [[User:Sereni]] |
|||
|- |
|||
| {{sc|research}} || Tag and align Albanian--Bulgarian corpus || Take an Albanian--Bulgarian corpus, for example SETimes, tag it using the [[apertium-sq-bg]] pair, and word-align it using GIZA++. <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. || [[User:Francis Tyers]] [[User:Sereni]] |
|||
|- |
|||
| {{sc|research}} || Tag and align Albanian--English corpus || Take an Albanian--English corpus, for example SETimes, tag it using the [[apertium-sq-en]] pair, and word-align it using GIZA++. <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. || [[User:Francis Tyers]] [[User:Sereni]] |
|||
|- |
|||
| {{sc|research}} || Tag and align Danish--Norwegian corpus || Take a Danish--Norwegian corpus, for example OpenSubtitles (da-nb only), tag it using the [[apertium-dan-nor]] pair, and word-align it using GIZA++. <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. '''up2015''' || [[User:Unhammer]] |
|||
|- |
|||
| {{sc|research}} || Tag and align Swedish--Norwegian corpus || Take a Swedish--Norwegian corpus, for example OpenSubtitles (sv-nb only), tag it using the [[apertium-swe-nor]] pair, and word-align it using GIZA++. <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. '''up2015''' || [[User:Unhammer]] |
|||
|- |
|||
| {{sc|research}} || Tag and align Macedonian--Serbo-Croatian corpus || Take a Macedonian--Serbo-Croatian corpus, for example SETimes, tag it using the [[apertium-mk-sh]] pair, and word-align it using GIZA++. <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. '''up2015''' || [[User:Francis Tyers]] [[User:Sereni]] |
|||
|- |
|||
| {{sc|research}} || Tag and align Macedonian--English corpus || Take a Macedonian--English corpus, for example SETimes, tag it using the [[apertium-mk-en]] pair, and word-align it using GIZA++. <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. '''up2015''' || [[User:Francis Tyers]] [[User:Sereni]] |
|||
|- |
|||
| {{sc|research}} || Tag and align Serbo-Croatian--Bulgarian corpus || Take a Serbo-Croatian--Bulgarian corpus, for example SETimes, tag it using the [[apertium-sh-bg]] pair, and word-align it using GIZA++. <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. '''up2015''' || [[User:Francis Tyers]] [[User:Sereni]] |
|||
|- |
|||
| {{sc|research}} || Tag and align Serbo-Croatian--English corpus || Take a Serbo-Croatian--English corpus, for example SETimes, tag it using the [[apertium-sh-en]] pair, and word-align it using GIZA++. <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. '''up2015''' || [[User:Francis Tyers]] [[User:Sereni]] |
|||
|- |
|||
| {{sc|research}} || Tag and align Bulgarian--English corpus || Take a Bulgarian--English corpus, for example SETimes, tag it using the [[apertium-bg-en]] pair, and word-align it using GIZA++. <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. '''up2015'''|| [[User:Francis Tyers]] [[User:Sereni]] |
|||
|- |
|||
| {{sc|code}} || Write a program to extract Greek noun inflections || Write a program to extract Greek inflection information for nouns from Wiktionary, see [https://en.wiktionary.org/wiki/Category:Greek_nouns Category:Greek nouns] <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. || [[User:Francis Tyers]] |
|||
|- |
|||
| {{sc|code}} || Write a program to extract Greek verb inflections || Write a program to extract Greek inflection information for verbs from Wiktionary, see [https://en.wiktionary.org/wiki/Category:Greek_verbs Category:Greek verbs] <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. || [[User:Francis Tyers]] |
|||
|- |
|||
| {{sc|code}} || Write a program to extract Greek adjective inflections || Write a program to extract Greek inflection information for adjectives from Wiktionary, see [https://en.wiktionary.org/wiki/Category:Greek_adjectives Category:Greek adjectives] <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. || [[User:Francis Tyers]] |
|||
|- |
|||
| {{sc|code}} || Write a program to convert the Giellatekno Faroese CG to Apertium tags || Write a program which converts the tagset of the Giellatekno Faroese constraint grammar. <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. || [[User:Francis Tyers]] [[User:Trondtr]] |
|||
|- |
|||
| {{sc|quality}} || Import nouns from azmorph into apertium-aze || Take the nouns (excluding proper nouns) from [https://svn.code.sf.net/p/apertium/svn/branches/azmorph https://svn.code.sf.net/p/apertium/svn/branches/azmorph] and put them into [[lexc]] format in [https://svn.code.sf.net/p/apertium/svn/incubator/apertium-aze https://svn.code.sf.net/p/apertium/svn/incubator/apertium-aze]. <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. || [[User:Firespeaker]] [[User:Francis Tyers]] |
|||
|- |
|||
| {{sc|quality}} || Import misc categories from azmorph into apertium-aze || Take the categories that aren't nouns, proper nouns, adjectives, adverbs, and verbs from [https://svn.code.sf.net/p/apertium/svn/branches/azmorph https://svn.code.sf.net/p/apertium/svn/branches/azmorph] and put them into [[lexc]] format in [https://svn.code.sf.net/p/apertium/svn/incubator/apertium-aze https://svn.code.sf.net/p/apertium/svn/incubator/apertium-aze]. <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. || [[User:Firespeaker]] [[User:Francis Tyers]] |
|||
|- |
|||
| {{sc|research}} || Build a clean Kazakh--English sentence-aligned bilingual corpus for testing purposes using official information from Kazakh websites (minimum 50 bilingual sentences). || Download and align the Kazakh and English versions of the same page, divide them into sentences, and build two plain text files (eng.FILENAME.txt and kaz.FILENAME.txt) with one sentence per line so that the lines correspond to each other. <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. '''up2015'''|| [[User:mlforcada]] [[User:Sereni]] [[User:Firespeaker]] [[User:Aida]] |
|||
|- |
|||
| {{sc|research}} || Build a clean Kazakh--Russian sentence-aligned bilingual corpus for testing purposes using official information from Kazakh websites (minimum 50 bilingual sentences). || Download and align the Kazakh and Russian versions of the same page, divide them into sentences, and build two plain text files (kaz.FILENAME.txt and rus.FILENAME.txt) with one sentence per line so that the lines correspond to each other. <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. '''up2015''' || [[User:mlforcada]] [[User:Sereni]] [[User:Firespeaker]] [[User:Aida]] |
|||
|- |
|||
| {{sc|code}} || Make a script to generate a table on the wiki of all transducers for a language family || Make a script to go with the other [http://svn.code.sf.net/p/apertium/svn/trunk/apertium-tools/wiki-tools/ wiki-tools] scripts that finds all the apertium single-language transducers for each language in a given family and writes a table describing them to the wiki. The table should be in roughly the same format as that on the [[Turkic languages]] or [[Celtic languages]] pages, and the script can be based off some of the other scripts. || [[User:Firespeaker]] |
|||
|- |
|||
| {{sc|code}} || Combine available wiki-tools scripts into a script that writes a complete language family page || Write a script that generates mostly complete language family pages given dixtable, langtable, and udhrtable, etc. You'll need to combine, and perhaps make more abstract, the existing [http://svn.code.sf.net/p/apertium/svn/trunk/apertium-tools/wiki-tools/ wiki-tools] scripts. || [[User:Firespeaker]] |
|||
|- |
|||
| {{sc|documentation}} || Manually spell-check running text in an apertium language of your choice || Take 500 words from a public source of user contributed content (such as a forum or a comments section of a website) in a language supported by Apertium (other than English) and <em>manually</em> correct all orthographical and typographical errors. Allow for some variation in terms of what is proper spelling, such as regional differences, etc. (e.g., in English, both "color" and "colour" are correct, but "colur" isn't). If you've found fewer than 20 errors, do this for another 500 words (and so on) until you've identified at least 20 errors. Submit a link to the source(s) you used, and a list of only the words you've corrected (one entry per line like "computre,computer" in a text file).<br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. || [[User:Firespeaker]] [[User:Francis Tyers]] [[User:Ksnmi]] |
|||
|- |
|||
| {{sc|quality}} || Check the performance of an Apertium spell checker in an apertium language of your choice || Take 500 words from a public source of user-contributed content (such as a forum or a comments section of a website) in a language supported by Apertium that you know (other than English) and put them through one of our spell checkers (libreoffice, MS Word, firefox, command line voikko, or the website if that task has been done already). Then make a list of all the words it marked wrong, and for each word indicate whether it is (1) a word that is misspelled (provide the correctly spelled form), (2) a word that is spelled correctly, (3) a form from another language that is never used in the language you are checking. Allow for some variation in terms of what is proper spelling, such as regional differences, etc. (e.g., in English, both "color" and "colour" are correct, but "colur" isn't). If you've found fewer than 20 words that fit the first two categories, do this for another 500 words (and so on) until you've identified at least 20 words of types (1) and (2). Submit a link to the source(s) you used, and a list of only the words the spell checker marked wrong (one entry per line like (1) "computre,computer", (2) "Computer,CORRECT", (3) "計算機,FOREIGN", in a text file).<br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. || [[User:Firespeaker]] [[User:Francis Tyers]] [[User:Ksnmi]] |
|||
|- |
|||
| {{sc|research}}, {{sc|documentation}} || Categorise 5 twol rules || Choose 5 rules from a twol file for a well-developed hfst pair. For each rule, state what kind of process it is (insertion, deletion, symbol change), and whether it's phonologically conditioned or morphologically conditioned. If it's a phonologically conditioned symbol change, write whether one character is changing to another, or whether the rule is part of a one-to-many or many-to-one correspondence. Write your findings on the apertium wiki at [[Examples_of_twol_rules/Language]] (replacing "Language" with the name of the language).<br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. || [[User:Firespeaker]] |
|||
|} |
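For the "Tag and align ... corpus" tasks above, the workflow is the same whatever the pair: tag both halves of a sentence-aligned corpus with the pair's tagger modes, reduce each token to something GIZA++ can align (e.g. lemma plus first tag), and only then run the GIZA++ tools. The following is a minimal Python sketch of that preparation step only; the mode names (mk-bg-tagger, bg-mk-tagger), the corpus file names and the token clean-up are placeholders and assumptions, not a prescribed recipe.

<pre>
#!/usr/bin/env python3
"""Sketch: tag a sentence-per-line corpus with an Apertium tagger mode and
emit space-separated lemma<tag> tokens, one sentence per line, for GIZA++.
Mode and file names below are placeholders."""

import re
import subprocess

def tag_corpus(infile, outfile, mode, pair_dir='.'):
    with open(infile, encoding='utf-8') as src:
        # Tagger modes emit a disambiguated stream such as ^lemma<n><pl>$ ...
        stream = subprocess.run(['apertium', '-d', pair_dir, mode],
                                stdin=src, capture_output=True,
                                text=True).stdout
    with open(outfile, 'w', encoding='utf-8') as dst:
        for line in stream.splitlines():
            # Keep only the lemma and the first tag of each lexical unit;
            # GIZA++ just needs whitespace-separated tokens.
            tokens = re.findall(r'\^([^<$/]+)(<[^>]+>)?', line)
            dst.write(' '.join(lemma + tag for lemma, tag in tokens) + '\n')

if __name__ == '__main__':
    # Hypothetical mode names; see the modes/ directory of the checked-out pair.
    tag_corpus('setimes.mk.txt', 'setimes.mk.tok', 'mk-bg-tagger')
    tag_corpus('setimes.bg.txt', 'setimes.bg.tok', 'bg-mk-tagger')
</pre>

The two .tok files can then be converted with plain2snt.out and word-aligned with mkcls and GIZA++ as usual.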
|||
=== Data mangling === |
|||
{|class="wikitable sortable" |
|||
! Category !! Title !! Description !! Mentors |
|||
|- |
|||
| {{sc|code}} || {{sc|multi}} Dictionary conversion || Write a conversion module for an existing dictionary for apertium-dixtools. <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. || [[User:Firespeaker]] |
|||
|- |
|||
| {{sc|code}} || {{sc|multi}} Dictionary conversion in python || Write a conversion module for an existing free bilingual dictionary to [[lttoolbox]] format using Python. <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. || [[User:Firespeaker]] |
|||
|- |
|||
| {{sc|code}} || Write a program to extract Faroese noun inflections || Write a program to extract Faroese inflection information for nouns from Wiktionary, see [https://en.wiktionary.org/wiki/Category:Faroese_nouns Category:Faroese nouns] <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. || [[User:Francis Tyers]] [[User:vin-ivar]] |
|||
|- |
|||
| {{sc|code}} || Write a program to extract Faroese verb inflections || Write a program to extract Faroese inflection information for verbs from Wiktionary, see [https://en.wiktionary.org/wiki/Category:Faroese_verbs Category:Faroese verbs] <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. || [[User:Francis Tyers]] [[User:vin-ivar]] |
|||
|- |
|||
| {{sc|code}} || Write a program to extract Faroese adjective inflections || Write a program to extract Faroese inflection information for adjectives from Wiktionary, see [https://en.wiktionary.org/wiki/Category:Faroese_adjectives Category:Faroese adjectives] <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. || [[User:Francis Tyers]] [[User:vin-ivar]] |
|||
|- |
|||
| {{sc|code}} || Bilingual dictionary from word alignments script || Write a script which takes [[GIZA++]] alignments and outputs a <code>.dix</code> file. The script should be able to reduce the number of tags, and also have some heuristics to test if a word is too frequently aligned (a minimal sketch follows this table). <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. || [[User:Francis Tyers]] [[User:Ksnmi]] |
|||
|- |
|||
| {{sc|code}} || {{sc|multi}} Scraper for free forum content || Write a script to scrape/capture all freely available content for a forum or forum category and dump it to an xml corpus file or text file. <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. || [[User:Firespeaker]] [[User:Ksnmi]] |
|||
|- |
|||
| {{sc|research}} || {{sc|multi}} scrape a freely available dictionary using tesseract || Use tesseract to scrape a freely available dictionary that exists in some image format (pdf, djvu, etc.). Be sure to scrape grammatical information if available, as well as stems (e.g., some dictionaries might provide entries like АЗНА·Х, where the stem is азна), and all possible translations. Ideally it should dump into something resembling [[bidix]] format, but if there's no grammatical information and no way to guess at it, some flat machine-readable format is fine. <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. || [[User:Firespeaker]] [[User:Francis Tyers]] [[User:Ksnmi]] |
|||
|- |
|||
| {{sc|code}} || script to generate dictionary from IDS data || Write a script that takes two lg_id codes, scrapes those dictionaries at [http://lingweb.eva.mpg.de/ids/ IDS], matches entries, and outputs a dictionary in [[bidix]] format <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. || [[User:Francis Tyers]] [[User:Firespeaker]] [[User:Ksnmi]] |
|||
|- |
|||
| {{sc|code}} || Script to convert rapidwords dictionary to apertium bidix || Write a script (preferably in python3) that converts an arbitrary dictionary from [http://rapidwords.net/reports rapidwords.net] to apertium bidix format. Keep in mind that rapidwords dictionaries may contain more than two languages, while apertium dictionaries may only contain two languages, so the script should take an argument allowing the user to specify which languages to extract. Ideally, there should also be an argument that lists the languages available. <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. || [[User:Firespeaker]] |
|||
|- |
|||
| {{sc|code}} || Script to convert simple bilingual dictionary entries to lttoolbox-style entries || Write a simple converter for lists of bilingual dictionary entries (one per line) so that one can use the shorthand notation <code>perro.n.m:dog.n</code> to generate lttoolbox-style entries of the form <code><e><l>perro<s n="n"/><s n="m"/></l><r>dog<s n="n"/></r></e></code> (a minimal sketch follows this table). You may start from [https://github.com/jimregan/internostrum-to-lttoolbox] if you wish. <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. '''up2015''' || [[User:mlforcada]] [[User:Raveesh]] |
|||
|- |
|||
| {{sc|code}} || {{sc|multi}} Convert one part-of-speech from SALDO to Apertium .dix format|| Take the [http://spraakbanken.gu.se/resurs/saldo SALDO] lexicon of Swedish and convert one of the classes of parts-of-speech to Apertium's [[lttoolbox]] format. (Nouns and verbs already done, see [https://svn.code.sf.net/p/apertium/svn/languages/apertium-swe/dev/saldo swe/dev].) <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. '''up2015'''|| [[User:Francis Tyers|Francis Tyers]], [[User:Unhammer|Unhammer]], [[User:Putti]] |
|||
|} |
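The "Bilingual dictionary from word alignments script" task above admits many designs; below is one possible minimal sketch. It assumes the GIZA++ alignments have already been symmetrised into the one-line-per-sentence "i-j" (Pharaoh) format alongside tagged token files in lemma&lt;tag&gt; style; the file names, the frequency threshold and the fan-out heuristic are placeholders to be tuned, and a complete bidix would additionally need the <alphabet> and <sdefs> header that this sketch leaves out.

<pre>
#!/usr/bin/env python3
"""Sketch: turn symmetrised word alignments plus tagged token files into a
rough .dix section.  File names, thresholds and the tag-reduction step are
assumptions, not a finished design."""

from collections import Counter
import re

def lemma_and_first_tag(token):
    """Reduce dog<n><pl> to ('dog', '<n>'); untagged tokens keep no tag."""
    m = re.match(r'([^<]+)(<[^>]+>)?', token)
    return (token, '') if m is None else (m.group(1), m.group(2) or '')

def collect_pairs(src_file, trg_file, align_file):
    counts = Counter()
    with open(src_file) as s, open(trg_file) as t, open(align_file) as a:
        for src, trg, align in zip(s, t, a):
            src_toks, trg_toks = src.split(), trg.split()
            for link in align.split():
                i, j = (int(x) for x in link.split('-'))
                if i < len(src_toks) and j < len(trg_toks):
                    counts[(lemma_and_first_tag(src_toks[i]),
                            lemma_and_first_tag(trg_toks[j]))] += 1
    return counts

def side(lemma, tag):
    # Multiword lemmas use <b/> instead of spaces in .dix files.
    return lemma.replace(' ', '<b/>') + \
        ('<s n="%s"/>' % tag.strip('<>') if tag else '')

def write_dix(counts, outfile, min_count=3, max_fanout=5):
    """Drop rare pairs and source words aligned to too many translations."""
    fanout = Counter(src for src, _trg in counts)
    with open(outfile, 'w', encoding='utf-8') as out:
        out.write('<dictionary>\n  <section id="main" type="standard">\n')
        for (src, trg), n in counts.most_common():
            if n < min_count or fanout[src] > max_fanout:
                continue
            out.write('    <e><p><l>%s</l><r>%s</r></p></e>\n'
                      % (side(*src), side(*trg)))
        out.write('  </section>\n</dictionary>\n')

if __name__ == '__main__':
    write_dix(collect_pairs('corpus.sl.tok', 'corpus.tl.tok', 'corpus.align'),
              'generated.dix')
</pre>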
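For the "Script to convert simple bilingual dictionary entries to lttoolbox-style entries" task above, a minimal sketch could look like the following. It assumes lemma and tags are separated by full stops exactly as in the perro.n.m:dog.n example, reads entries from standard input, wraps each pair in the <p> element that lttoolbox expects, and turns spaces in multiword lemmas into <b/>.

<pre>
#!/usr/bin/env python3
"""Sketch: convert "perro.n.m:dog.n"-style shorthand (one entry per line on
stdin) into lttoolbox <e> entries on stdout."""

import sys

def side(spec):
    """Turn "perro.n.m" into 'perro<s n="n"/><s n="m"/>'."""
    lemma, *tags = spec.strip().split('.')
    # Multiword lemmas use <b/> instead of spaces in .dix files.
    return lemma.replace(' ', '<b/>') + ''.join('<s n="%s"/>' % t for t in tags)

def convert(line):
    left, right = line.split(':', 1)
    return '<e><p><l>%s</l><r>%s</r></p></e>' % (side(left), side(right))

if __name__ == '__main__':
    for line in sys.stdin:
        if line.strip():                    # skip blank lines
            print(convert(line))
</pre>

For the example entry this prints <e><p><l>perro<s n="n"/><s n="m"/></l><r>dog<s n="n"/></r></p></e>.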
|||
=== Misc === |
|||
{|class="wikitable sortable" |
|||
! Category !! Title !! Description !! Mentors |
|||
|- |
|||
| {{sc|documentation}} || Installation instructions for missing GNU/Linux distributions or versions || Adapt installation instructions for a particular GNU/Linux or Unix-like distribution if the existing instructions in the Apertium wiki do not work or have bugs of some kind. Prepare it in your user space in the Apertium wiki. It may be uploaded to the main wiki when approved. <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. '''up2015'''|| [[User:Mlforcada]] [[User:Firespeaker]] [[User:Wei2912|Wei En]] (alternative mentors welcome) |
|||
|- |
|||
| {{sc|documentation}} || Installing Apertium in lightweight GNU/Linux distributions || Give instructions on how to install Apertium in one of the small or lightweight GNU/Linux distributions such as [https://en.wikipedia.org/wiki/Damn_Small_Linux Damn Small Linux] in the style of the description for [[Apertium on SliTaz]], so that it may be used on older machines. <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. '''up2015'''|| [[User:Mlforcada]] [[User:Bech]] [[User:Youssefsan|Youssefsan]] [[User:Wei2912|Wei En]] (alternative mentors welcome) |
|||
|- |
|||
| {{sc|documentation}} || Video guide to installation || Prepare a screencast or video about installing Apertium; make sure it uses a format that may be viewed with Free software. When approved by your mentor, upload it to Youtube, making sure that you use the HTML5 format which may be viewed by modern browsers without having to use proprietary plugins such as Adobe Flash. An example may be found [https://www.youtube.com/watch?v=h7SjWvPSvp4 here].<br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. '''up2015'''|| [[User:Mlforcada]] [[User:Firespeaker]] [[User:Wei2912|Wei En]] (alternative mentors welcome) |
|||
|- |
|||
| {{sc|documentation}} || Apertium in 5 slides || Write a 5-slide HTML presentation (only needing a modern browser to be viewed and ready to be effectively "karaoked" by someone else in 5 minutes or less: you can prove this with a screencast) in the language in which you write most fluently, which describes Apertium, how it works, and what makes it different from other machine translation systems. <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. '''up2015''' || [[User:Mlforcada]] [[User:Firespeaker]] [[User:Wei2912|Wei En]] (alternative mentors welcome) |
|||
|- |
|||
| {{sc|documentation}} || Improved "Become a language-pair developer" document || Read the document [[Become_a_language_pair_developer_for_Apertium]] and think of ways to improve it (don't do this if you have not done any of the language pair tasks). Send comments to your mentor and/or prepare it in your user space in the Apertium wiki. There will be a chance to change the document later in the Apertium Wiki. <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. || [[User:Mlforcada]] [[User:Bech]] [[User:Firespeaker]] |
|||
|- |
|||
| {{sc|documentation}} || An entry test for Apertium || Write 20 multiple-choice questions about Apertium. Each question will give 3 options of which only one is true, so that we can build an "Apertium exam" for future GSoC/GCI/developers. Optionally, add an explanation for the correct answer. <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. || [[User:Mlforcada]] |
|||
|- |
|||
| {{sc|code}} || Apertium development on Windows || The [[Apertium on Windows]] guide is severely outdated; developers tend to use a [[Virtualbox]] virtual machine (users have a nice [[Apertium Simpleton UI|GUI]]). But some developers might want to use their Windows tools and environment. Go through the guide to install Apertium on Windows, updating the guide where things have changed. <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. || [[User:Francis Tyers]] |
|||
|- |
|||
| {{sc|code}} || Light Apertium bootable ISO for small machines || Using [https://en.wikipedia.org/wiki/Damn_Small_Linux Damn Small Linux] or [https://en.wikipedia.org/wiki/SliTaz_GNU/Linux SliTaz] or a similar lightweight GNU/Linux, produce the minimum-possible bootable live ISO or live USB image that contains the OS, minimum editing facilities, Apertium, and a language pair of your choice. Make sure no package that is not strictly necessary for Apertium to run is included.<br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. || [[User:Mlforcada]] [[User:Firespeaker]] [[User:Wei2912|Wei En]] (alternative mentors welcome) |
|||
|- |
|||
| {{sc|code}} || Apertium in XLIFF workflows || Write a shell script and (if possible, using the filter definition files found in the documentation) a filter that takes an [https://en.wikipedia.org/wiki/XLIFF XLIFF] file such as the ones representing a computer-aided translation job and populates it with translations of all segments that are not yet translated, marking them clearly as machine-translated (a minimal Python sketch follows this table). <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. || [[User:Mlforcada]] [[User:Espla]] [[User:Fsanchez]] (alternative mentors welcome) |
|||
|- |
|||
| {{sc|quality}} || Examples of minimum files where an Apertium language pair messes up (X)HTML formatting || Sometimes, an Apertium language pair takes a valid HTML/XHTML source file but delivers an invalid HTML/XHTML target file, regardless of translation quality. This can usually be blamed on incorrect handling of superblanks in structural transfer rules. The task: (1) select a language pair (2) Install Apertium locally from the Subversion repository; install the language pair; make sure that it works (3) download a series of HTML/XHTML files for testing purposes. Make sure they are valid using an HTML/XHTML validator (4) translate the valid files with the language pair (5) check if the translated files are also valid HTML/XHTML files; select those that aren't (6) find the first source of non-validity and study it, and strip the source file until you just have a small (valid!) source file with some text around the minimum possible example of problematic tags; save each such file and describe the error. <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. || [[User:Mlforcada]] (alternative mentors welcome) |
|||
|- |
|||
| {{sc|code}} || Write a transliteration plugin for mediawiki || Write a mediawiki plugin similar in functionality (and perhaps implementation) to the way the [http://kk.wikipedia.org Kazakh-language wikipedia]'s orthography changing system works ([http://wiki.apertium.org/wiki/User:Stan88#How_to_enable_multiple_Kazakh_language-variants_on_a_mediawiki_instance_.3F documented last year here]). It should be able to be directed to use any arbitrary mode from an apertium mode file installed in a pre-specified path on a server.<br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. || [[User:Firespeaker]] |
|||
|- |
|||
| {{sc|research}} || {{sc|multi}} train tesseract on a language with no available tesseract data || Train tesseract (the OCR software) on a language that it hasn't previously been trained on. We're especially interested in languages with some coverage in apertium. We can provide images of text to train on. <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. '''up2015''' || [[User:Firespeaker]], [[User:Unhammer]] |
|||
|- |
|||
| {{sc|research}} || using language transducers for predictive text on Android || Investigate what it would take to add some sort of plugin to existing Android predictive text / keyboard framework(s?) that would allow lttoolbox (or hfst? or libvoikko?) transducers to be used to predict text and/or support gesture typing (swipe typing). Document your findings on the apertium wiki. <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. || [[User:Firespeaker]] |
|||
|- |
|||
| {{sc|research}} || research gesture typing back end for Android || Research and document on [http://wiki.apertium.org/ apertium's wiki] how recent versions of Android's built-in keyboard interface to a spelling dictionary to guess words with gesture typing. You should state in some combination of broad and specific terms what steps would be needed to connect this to a custom back end, e.g. how it could call some other program that looked up words for a given language (e.g., a keyboard layout which currently does not have an Android-supported gesture keyboard).<br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. || [[User:Firespeaker]] |
|||
|- |
|||
| {{sc|research}} || {{sc|multi}} identify 75 substitutions for conversion from colloquial Finnish to book Finnish || Colloquial Finnish can be written and pronounced differently to book Finnish (e.g. "ei oo" = "ei ole"; "mä oon" = "minä olen"). The objective of this task is to come up with 75 examples of differences between colloquial Finnish and book Finnish. <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. || [[User:Francis Tyers]] [[User:Inariksit]] |
|||
|- |
|||
| {{sc|research}} || {{sc|multi}} Disambiguate 500 words of Russian text. || The objective of this task is to disambiguate by hand 500 words of text in Russian. You can find a Wikipedia article you are interested in, or you can be assigned one. You will be given the output of a morphological analyser for Russian, and your task is to select the most adequate analysis in context. <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. || [[User:Francis Tyers]] [[User:Beboppinbobby]] [[User:Sereni]] |
|||
|- |
|||
| {{sc|research}}, {{sc|quality}} || improvements to lexc plugin for vim || A vim syntax definition file for [[lexc]] is presented on the following wiki page: [[Apertium-specific conventions for lexc#Syntax highlighting in vim]]. This plugin works, but it has some issues: (1) comments on LEXICON lines are not highlighted as comments, (2) editing lines with comments (or similar) can be really slow, (3) the lexicon a form points at is not highlighted distinctly from the form (e.g., in the line «асқабақ:асқабақ N1 ; ! "pumpkin"», N1 should be highlighted somehow). Modify or rewrite the plugin to fix these issues. <br />For further information and guidance on this task, you are encouraged to come to our [[IRC]] channel. || [[User:Firespeaker]] [[User:vin-ivar]] [[User:TommiPirinen]] |
|||
|- |
|||
| {{sc|code}} || make reproducible builds for core tools || Normally, when you compile software on different machines, the byte-for-byte output will differ, making it hard to verify that the code hasn't been tampered with. With a reproducible build, the output is byte-for-byte equal even though built on different machines. Using https://gitian.org, create reproducible builds of the latest releases of lttoolbox / apertium / apertium-lex-tools / vislcg3. '''up2015''' || [[User:Unhammer]] |
|||
|- |
|||
| {{sc|code}} || test and clean up the wx-utf8 script || The script converts text written in [https://en.wikipedia.org/wiki/WX_notation WX notation] to Devanagari. It ''should'' be bug-free, but someone needs to test it with unusual words and fix any bugs found. '''up2015''' || [[User:vin-ivar]] |
|||
|- |
|||
| {{sc|code}} || make improvements to the wx-utf8 script || Add support for other encoding standards and other Indic scripts to the Python script to make it a generic multi-way X-Y transliterator. '''up2015''' || [[User:vin-ivar]] |
|||
|- |
|||
| {{sc|quality}}, {{sc|code}} || {{sc|multi}} fix any open ticket || Fix any open ticket in any of our issues trackers: [https://sourceforge.net/p/apertium/tickets/ main], [https://github.com/goavki/apertium-html-tools/issues html-tools], [https://github.com/goavki/phenny/issues begiak]. When you claim this task, let your mentor know which issue you plan to work on. || [[User:Firespeaker]] [[User:Unhammer]] [[User:Sushain]] |
|||
|} |
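For the "Apertium in XLIFF workflows" task above, the sketch below shows the general idea in Python rather than shell. It assumes an XLIFF 1.2 file and an installed pair invoked as <code>apertium en-es</code> (both placeholders), it flattens any inline markup inside <source> (a real filter would preserve it), and the state / state-qualifier values used to mark segments as machine-translated are assumptions — check what the spec and your CAT tool expect.

<pre>
#!/usr/bin/env python3
"""Sketch: fill every untranslated <trans-unit> of an XLIFF 1.2 file with
Apertium output and mark it for human revision.  Pair, file names and the
state values are assumptions."""

import subprocess
import xml.etree.ElementTree as ET

NS = 'urn:oasis:names:tc:xliff:document:1.2'
ET.register_namespace('', NS)

def translate(text, pair='en-es'):
    """Translate plain text with an installed Apertium pair."""
    return subprocess.run(['apertium', pair], input=text,
                          capture_output=True, text=True).stdout.strip()

def fill_xliff(infile, outfile, pair='en-es'):
    tree = ET.parse(infile)
    for unit in tree.iter('{%s}trans-unit' % NS):
        source = unit.find('{%s}source' % NS)
        target = unit.find('{%s}target' % NS)
        # Naive check: a segment counts as translated if its target has text.
        if source is None or (target is not None and (target.text or '').strip()):
            continue
        if target is None:
            target = ET.SubElement(unit, '{%s}target' % NS)
        # Inline markup in <source> is flattened here.
        target.text = translate(''.join(source.itertext()), pair)
        # Mark clearly as MT so a human reviser can find these segments.
        target.set('state', 'needs-review-translation')
        target.set('state-qualifier', 'mt-suggestion')
    tree.write(outfile, encoding='utf-8', xml_declaration=True)

if __name__ == '__main__':
    fill_xliff('job.xlf', 'job.mt.xlf')
</pre>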
|} |
Latest revision as of 11:22, 10 December 2019
Task ideas
type | title | description | tags | mentors | bgnr? | multi? | duplicates |
---|---|---|---|---|---|---|---|
research | Join us on IRC | Use an IRC client to log onto our IRC channel and stick around for four hours. | irc | * | yes | 150 | |
research, quality | Adopt a Wiki page | Request for an Apertium Wiki account and adopt a Wiki page by updating and fixing any issues with it. | wiki | * | yes | 150 | |
code, design | Make source browser headings sticky at bottom of window | Make headings that are out of view (either below when at the top, or above when scrolled down) sticky on Apertium source browser, so that it's clear what other headings exist. There is a github issue for this. | css, javascript, html, web | sushain, JNW, xavivars, shardulc | no | ||
code, quality | Increase test coverage of begiak, our IRC bot, by at least 10% | There are many modules without any tests at all, unfortunately. See the associated GitHub issue for more details and discussion. | python, bot | sushain, JNW, wei2912, Josh, shardulc | no | 4 | 4 |
code | Improve the .logs command of begiak, our IRC bot | Currently, the .logs command just links to the root logs. Ideally, it would link to the channel specific logs, support a time being handed to it and have tests. See the associated GitHub issue for more details and discussion. | python, bot | sushain, JNW, wei2912, Josh, shardulc | no | ||
code | Update the awikstats module of begiak, our IRC bot, for GitHub | There are a couple steps remaining for this process, mostly small modifications to the existing code which are enumerated in the associated GitHub issue which also contains more context and discussion. | python, bot | sushain, JNW, wei2912, shardulc | no | ||
code, research | Research and propose a flood control system for begiak, our IRC bot | Begiak often floods the channel with notifications from modules such as git. Compile a list of modules which flood Begiak, write a mini-report on the associated GitHub issue and propose changes to be made to the modules. For each module, there should be an issue created with a list of proposed changes, referenced from the main issue. The issue should be added to the associated GitHub project. | python, bot | sushain, JNW, wei2912, shardulc | no | ||
code, research, quality | Clean up obsolete modules for begiak, our IRC bot | Refer to the associated GitHub issue for more details. After this task, the list of modules on Begiak's wiki page should be updated. | python, bot, wiki | sushain, JNW, wei2912, shardulc | yes | ||
code | Add GitHub issue creation functionality to begiak, our IRC bot | Ideally, this would be added to an existing module. If that doesn't make sense, a new module is acceptable as well. The associated GitHub issue includes an example of the command's usage and reply. | python, github, bot | sushain, JNW, wei2912, Josh, shardulc | no | ||
code | Support GitHub modules in apertium-get | Unfortunately, the transition to GitHub from SVN made it so that this script, which is very handy for downloading an Apertium language/pair, doesn't fetch the newest packages anymore. This also means that beta.apertium.org is out of date. See the associated GitHub issue for more details and discussion. | bash, github | sushain, Unhammer, wei2912, Josh, xavivars, shardulc | no | | |
code, quality | Add default CI configs to Apertium packages via Apertium-init | Currently, some Apertium pairs/language modules use CI, but it's very inconsistent and doesn't come by default. Apertium-init is the official way to bootstrap a new Apertium package, so if it came with CI support by default, that would be great. See the associated GitHub issue for more details and discussion. | ci, circleci, yaml | sushain, Unhammer, wei2912, xavivars, shardulc | no | | |
code | Ensure XML produced by Apertium-init has consistent XML declarations | Currently, some XML produced by Apertium-init, a script which allows bootstrapping Apertium packages easily, has declarations and some doesn't. Moreover, the declarations are sometimes inconsistent. All XML files should have the same declaration. Note that not all of the XML files in Apertium-init use the .xml file extension. See the associated GitHub issue for more details and discussion. | xml, python | sushain, Unhammer, wei2912, xavivars | yes | | |
code, quality | Make Apertium-init's default Makefiles and config files pass make distcheck | Currently, packages created with Apertium-init, a script which allows bootstrapping Apertium packages easily, do not pass the distcheck target. This task requires fixing that. See the associated GitHub issue for more details and discussion. | python, autotools, make, bash | Unhammer, Flammie, Josh | no | | |
code, quality | Increase Apertium-init test coverage | Currently, we have a decent set of tests for the script, but there are some more complex behaviors, such as GitHub interaction, that we don't test. This task requires making substantial improvements to the test coverage numbers. See the associated GitHub issue for more details and discussion. | python, unittest | sushain, Unhammer, wei2912 | no | | |
code | Ignore .prob files in bilingual modules created by Apertium-init | Apertium-init bootstraps Apertium packages and comes with a default gitignore. This gitignore could be improved by making it ignore *.prob files, but only for pairs, since they are meaningful for language modules. It would be extra cool if we had some tests for this functionality that weren't too contrived. See the associated GitHub issue for more details and discussion. | python, git | sushain, Unhammer, wei2912, Josh | no | | |
code | Set repository topic on repos created by Apertium-init | Apertium-init bootstraps Apertium packages and supports creating an associated GitHub repository. Our source browser and other scripts expect a GitHub repository topic like "apertium-incubator". This task requires creating the incubator topic by default on repo push, with an option for custom topics. See the associated GitHub issue for more details and discussion. | python, github, http | sushain, Unhammer, wei2912, xavivars, shardulc, JNW | no | | |
code | Install Apertium and verify that it works | See Installation for instructions and if you encounter any issues along the way, document them and/or improve the Wiki! | bash | ftyers, JNW, Unhammer, anakuz, Josh, fotonzade | yes | 150 | |
research | Write a contrastive grammar | Document 6 differences between two (preferably related) languages and where they would need to be addressed in the Apertium pipeline (morph analysis, transfer, etc). Use a grammar book/resource for inspiration. Each difference should have no fewer than 3 examples. Put your work on the Apertium wiki under Language1_and_Language2/Contrastive_grammar. See Farsi_and_English/Pending_tests for an example of a contrastive grammar that a previous GCI student made. | wiki, languages | Mikel, JNW, Josh, xavivars, fotonzade | yes | 40 | |
quality | Add 200 new entries to the bidix of language pair %AAA%-%BBB% | Our translation systems require large lexicons so as to provide production-quality coverage of any input data. This task requires the student to add 200 new words to a bidirectional dictionary. Read more... | xml, dictionaries, svn | Mikel, anakuz, xavivars, fotonzade | yes | 40 | |
quality | Add 500 new entries to the bidix of language pair %AAA%-%BBB% | Our translation systems require large lexicons so as to provide production-quality coverage of any input data. This task requires the student to add 500 new words to a bidirectional dictionary. Read more... | xml, dictionaries, svn | Mikel, anakuz, xavivars, fotonzade, ftyers | no | 10 | |
quality | Post-edit 100 sentences of any public domain text from %AAA% to %BBB% | Many of our systems benefit from statistical methods used with (ideally public domain) bilingual data. For this task, you need to translate a public domain text from %AAA% to %BBB% using any available machine translation system and clean up the translations yourself manually. Commit the post-edited texts (in plain text format) to an existing (via pull request) or if needed new github repository for the pair in dev/ or texts/ folder. The texts are subject to mentor approval. | xml, dictionaries, svn | fotonzade, JNW, ftyers, anakuz, xavivars, Mikel, shardulc | yes | 10 | |
quality | Disambiguate 500 tokens of text in %AAA% | Run some text through a morphological analyser and disambiguate the output. Contact the mentor beforehand to approve the choice of language and text. Read more... | disambiguation, svn | Mikel, anakuz, xavivars, fotonzade | yes | ||
code | Use SWIG or equivalent to add C++ bindings for text analysis in apertium-python | Currently, apertium-python just pipes text through the binaries in each mode file. We would like to directly execute the associated C++ function instead. See the associated GitHub issue for more details and discussion. | python, c++, swig | sushain | no | ||
code | Use cgo to integrate apertium and lttoolbox C++ libraries in Go | Currently, all apertium core libraries are written in C++. There are other languages, like Go, where concurrency is at the very core of the language itself. It would be great to be able to write small programs like the new lt-proc intergeneration in Go, using cgo as a way to bind both languages. | go, c++, cgo | xavivars | | |
code | Integrate HFST's C++ Python bindings into apertium-python | Currently, apertium-python just pipes text through the binaries in each mode file. Where appropriate, i.e. a mode accesses HFST binaries, we would like to directly execute the associated C++ function instead. | python, c++, swig | sushain | no | ||
code | Improve the apertium-python Windows installation process | Currently, apertium-python requires a complex installation process for Windows (and Linux). The goal is something that works out-of-the-box with pip. See the associated GitHub issue for more details and discussion. | python, windows | sushain, wei2912, arghya | no | ||
code | Setup scripts for Apertium+Python [1] (the relevant issue to ask questions on is [2]) | Write setup.py scripts that install the current Apertium+Python setup and also make the setup.py script work on Windows. | python, windows | sushain, arghya | yes | | |
documentation, code | Setup documentation generation for apertium-python | Currently, there are some docstrings attached to functions and constants. This task requires setting up Sphinx/readthedocs for apertium-python so these docs are easily accessible. Types should also be visible and documentation should support being written in Markdown, not RST. See the associated GitHub issue for more details and discussion. | python, sphinx | sushain, Josh, arghya | no | ||
code, design | Upgrade apertium.org (html-tools) to Bootstrap 4 | Currently, we are on a frankensteined version of Bootstrap 3. See the associated GitHub issue for more details and discussion. Note that the frankenstein'd CSS will likely need to be fixed and theme support should be retained (should be simple). | javascript, css, web, bootstrap | sushain, xavivars, shardulc | no | ||
code, quality | Get apertium.org (html-tools) QUnit testing coverage working | Currently, we have a QUnit testing framework mostly complete. There are some fixes that need to be made in discussion with the mentor and existing comments and JS coverage checking needs to be added so that we can burn down existing debt. See the associated GitHub PR for more details and discussion. | javascript, jquery, web | sushain, jjjppp | no | ||
code, quality | Fix/prevent apertium.org (html-tools)'s recursive website translation | Currently, if you try translating Apertium's website with Apertium's website, bad things happen. This 'exploit' is also possible through mutual recursion with another site that offers similar behavior. See the associated GitHub issue for more details and discussion. | javascript, jquery, web, bootstrap | sushain, Unhammer, shardulc | no | ||
code | Convert apertium.org's API (APy)'s language name storage from SQL to TSV | Currently, language names that power part of the Apertium HTTP API are stored and updated in SQL. It would be nice if they were stored in a more human readable format like TSV and the SQLite were generated at build time. See the associated GitHub issue for more details and discussion. | python, sql, tsv | sushain, Unhammer, xavivars | no | ||
code | Support unicode without escape sequences in apertium.org's API (APy) | Currently, HTTP responses with unicode characters are emitted as \uNNNN by the Apertium API. Ideally, the character could just be decoded. See the associated GitHub issue for more details and discussion. | python, api, unicode, json, api, http | sushain, Unhammer, shardulc | no | ||
code | Make apertium.org (html-tools) fail more gracefully when the API is down | Currently, html-tools relies on an API endpoint to translate documents, files, etc. However, when this API is down the interface also breaks! This task requires fixing this breakage. See the associated GitHub issue for more details and discussion. | javascript, html, css, web | sushain, Unhammer, shardulc | no | ||
code, design | Refine the apertium.org (html-tools) dictionary interface | Significant progress has been made towards providing a dictionary-style interface within html-tools. This task requires refining the existing PR by de-conflicting it with master and resolving the interface concerns discussed here. See the associated GitHub issue for more details and discussion. | javascript, html, css, web | sushain, shardulc | no | ||
code, design | Chained translation path interface for apertium.org (html-tools) | Significant progress has been made towards providing an interface for selecting a path for chained (multi-step) translation in html-tools. The code is currently in a branch that needs to be de-conflicted with master, refined to accommodate changes in the main interface since the code was written, tested, and finally merged. | javascript, html, css, web | sushain, shardulc | no | ||
code, documentation | Add a Swagger/OpenAPI specification for apertium.org's API (APy) | There's been some work towards this already but it's outdated. This task requires updating it and, for bonus points, ensuring at build time that all paths are minimally present in the Swagger spec. Furthermore, it would be awesome if a simple HTML page could be made that loads the spec (e.g. this page for another service). See the associated GitHub issue for more details and discussion. | python, api, http, openapi, swagger | sushain, xavivars | no | | |
code | Accept ISO-639-1 codes in apertium-stats-service | This task requires making /en-es, /en-spa, etc. work the same as /eng-spa and then adding tests that verify the behavior. See the associated GitHub issue for more details and discussion. | rust, api, http | sushain | no | ||
code | Support listing of packages in apertium-stats-service | This information is useful in a lot of different places, for example, our source browser. By having the stats service implement it, no one has to rewrite the same code in different languages, and the information gets cached. For GCI task credit, the last commit info is not required (another task can be made for that feature). This task requires implementing the initial feature, adding some basic tests and tweaking the swagger spec. One or more of those tasks can be broken into other task(s) if the mentor sees fit and the student requests it. See the associated GitHub issue for more details and discussion. | rust, api, http, rest | sushain | no | ||
code | Add configurable timeout support to apertium-stats-service | Currently, a stats request has no clear timeout and can take ~forever if the async option is not present. This task requires adding a timeout option, adding tests and then tweaking the swagger spec. See the associated GitHub issue for more details and discussion. | rust, api, http, rest | sushain | no | ||
code | Surface errors to the client in apertium-stats-service | Right now, errors are logged and swallowed. The client never knows what happened. This task requires implementing the feature, adding some basic tests and tweaking the swagger spec. One or more of those tasks can be broken into other task(s) if the mentor sees fit and the student requests it. See the associated GitHub issue for more details and discussion. | rust, api, http, rest | sushain | no | ||
code | Include Git SHA in apertium-stats-service's file info | Right now, only the SVN revision number is provided but that doesn't help with mapping back on to a SHA in Git/GH for the client. This task requires implementing the feature, adding some basic tests and tweaking the swagger spec. See the associated GitHub issue for more details and discussion. | rust, api, http, rest, git, svn | sushain | no | ||
documentation | Create a screencast on how to add new entries to an apertium dictionary | Screencasts are a popular and engaging way to create a tutorial. Show a narrated start-to-end workflow of adding new words to a dictionary, compiling it, and then using it to translate. This task is probably easiest after completing an "Add 200 new entries to a bidix" task. | xml, dictionaries, screencast | Flammie, JNW, anakuz, Josh, shardulc | yes | no |
documentation | Create a screencast on how to disambiguate tokens of text | Screencasts are a popular and engaging way to create a tutorial. Show a narrated start-to-end workflow of disambiguating the tokens of a text. This task is probably easiest after completing a "Disambiguate 500 tokens" task. | disambiguation, screencast | Flammie, Josh, shardulc | no | ||
quality | Create an automated (travis-ci) test to ensure naïve coverage | Dictionaries can be tested for naïve coverage. The idea is to write a test that runs over a frequency word list, counts the coverage of a dictionary, and integrates that into the Makefile's `check` target for travis to use; see the sketch after this table. | test, python, bash | Flammie, wei2912 | no | no |
code | Scrape Apertium repo information into json | Write a script to scrape information about Apertium's translation pairs, as they exist in GitHub repositories, into a json file like this one. A sketch using the GitHub API is given after this table. | python, git, json | JNW, sushain, wei2912, shardulc | no | no |
code, design | Integrate globe viewer into language family visualiser interface | The family visualiser interface has four info boxes when a language is clicked on, and one of those boxes is empty. The globe viewer provides a globe visualisation of languages that we can translate a given language to and from. This task is to integrate the globe viewer for a specific language into the fourth box in the family visualiser. There is an associated GitHub issue. | d3, javascript | JNW, sushain, jjjppp | no | no | |
documentation | Fix (or document the blockers for) five mentions of SVN on the Apertium wiki | Apertium recently migrated to GitHub from SVN. There are unfortunately still a lot of pages on the Wiki whose references to SVN URLs and SVN in general need updating. This task requires finding five such pages and either outright fixing them or documenting the difficulty involved in fixing the issues. Note that Category:GitHub_migration_updates lists articles currently marked as needing migration but is not exhaustive. | wiki, github, svn | JNW, sushain, wei2912, Josh, xavivars, shardulc | no | 5 |
research | Document resources for a language | Document resources for a language without resources already documented on the Apertium wiki. read more... | wiki, languages, grammar | JNW,ftyers, Josh, fotonzade, anakuz | yes | 10 | |
research | tesseract (OCR) interface for apertium languages | Find out what it would take to integrate apertium or voikkospell into tesseract OCR (image to text). Document the available options thoroughly on the wiki. | ocr | JNW, Josh | no | ||
research | Create a UD-Apertium morphology mapping | Choose a language that has a Universal Dependencies treebank and tabulate a potential set of Apertium morph labels based on the (universal) UD morph labels. See Apertium's list of symbols and UD's POS and feature tags for the labels. | morphology, ud, dependencies | JNW,ftyers, fotonzade | 5 | ||
research | Create an Apertium-UD morphology mapping | Choose a language that has an Apertium morphological analyser and adapt it to convert the morphology to UD morphology | morphology, ud, dependencies | JNW, ftyers, fotonzade | 5 | ||
code,design | Paradigm generator browser interface | Write a standalone webpage that makes queries (through JavaScript) to an apertium-apy server to fill in morphological forms based on morphological tags that are hidden throughout the body of the page. For example, say you have the verb "say" and some tags like inf, past, pres.p3.sg; these forms would get filled in as "say", "said", "says". A sketch of the APy query is given after this table. | javascript, html, apy | JNW,ftyers |||
Research | Syntactic trees | Pick a text of ~200 words and annotate it syntactically as for a Universal Dependencies treebank. UD Annotatrix can be used for visualisation. Consult with the mentor about the language. | UD, trees, markup | anakuz, fotonzade |||
Code | Improve apertium-tagger's man page | The man page for apertium-tagger is outdated and does not mention some options that --help does, like -x. They should be synced. See https://github.com/apertium/apertium/issues/10 | C++ | xavivars |||
Code | Port pragmatic segmenter core code to Python | Pragmatic segmenter (https://github.com/diasks2/pragmatic_segmenter) is a sentence segmenter written in Ruby. The objective of this task is to port the core code to Python. | Python, Ruby | ftyers | |||
Code | Port a language model from pragmatic segmenter to Python | Pragmatic segmenter (https://github.com/diasks2/pragmatic_segmenter) is a sentence segmenter written in Ruby. The objective of this task is to port a given language (e.g. Armenian) to Python. | Python, Ruby | ftyers | 21 | ||
Code | Write a language model for pragmatic segmenter in Python or Ruby | Pragmatic segmenter (https://github.com/diasks2/pragmatic_segmenter) is a sentence segmenter written in Ruby. The objective of this task is to write a language model for a new language. | Python, Ruby | ftyers | 21 | ||
Code,Documentation | Write a program to add a dev branch for each of the released language pairs | At the moment, Apertium language pairs are generally developed in the master branch. We would like to move to a dev/master split, but we need to make a new dev branch for each pair, and also write documentation explaining that people should send PRs to dev or commit to dev. A sketch of creating the branches via the GitHub API is given after this table. | python | ftyers, shardulc |||
code | Use apertium-init to bootstrap a new language pair | Use the Apertium-init script to bootstrap a new translation pair between two languages which have monolingual modules already in Apertium. To see if a translation pair has already been made, search our repositories on github, and especially ask on IRC. Add 100 common stems to the dictionary. Your submission should be in the form of a repository on github that we can fork to the Apertium organisation. | languages, bootstrap, dictionaries, translators | JNW | yes | 25 | |
code | Use apertium-init to bootstrap a new language module | Use the Apertium-init script to bootstrap a new language module that doesn't currently exist in Apertium. To see if a language is available, search our repositories on github, and especially ask on IRC. Add enough stems and morphology to the module so that it analyses and generates at least 100 correct forms. Your submission should be in the form of a repository on github that we can fork to the Apertium organisation. Read more about adding stems... | languages, bootstrap, dictionaries | JNW | yes | 25 | |
code | Add a transfer rule to an existing translation pair | Add a transfer rule to an existing translation pair that fixes an error in translation. Document the rule on the Apertium wiki by adding a regression tests page similar to English_and_Portuguese/Regression_tests or Icelandic_and_English/Regression_tests. Check your code into Apertium's codebase. Read more... | languages, bootstrap, transfer | JNW, mlforcada | 25 | 5 | |
code | Write 10 lexical selection rules for an existing translation pair | Add 10 lexical selection rules to an existing translation pair. Submit your work as a github pull request to that pair. Read more... | languages, bootstrap, lexical selection, translators | JNW | 25 | 5 |
code | Write 10 constraint grammar rules for an existing language module | Add 10 constraint grammar rules to an existing language that you know. Submit your work as a github pull request to that pair. Read more... | languages, bootstrap, constraint grammar | JNW | 25 | 5 |
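For the "Convert apertium.org's API (APy)'s language name storage from SQL to TSV" task: a minimal sketch of both directions of the conversion, assuming the data lives in SQLite. The table and column names (`language_names`, `lg`, `inLg`, `name`) and the file names are placeholders, not APy's actual schema; check the APy source and the linked GitHub issue for the real ones.

```python
#!/usr/bin/env python3
"""Sketch: dump a language-name table to TSV, and rebuild SQLite from the TSV at build time."""
import csv
import sqlite3

def sql_to_tsv(db_path: str, tsv_path: str) -> None:
    # Export every row of the (assumed) language_names table as tab-separated values.
    with sqlite3.connect(db_path) as conn, open(tsv_path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f, delimiter="\t")
        writer.writerow(["lg", "inLg", "name"])
        for row in conn.execute("SELECT lg, inLg, name FROM language_names ORDER BY lg, inLg"):
            writer.writerow(row)

def tsv_to_sql(tsv_path: str, db_path: str) -> None:
    # Regenerate the SQLite database from the human-editable TSV.
    with open(tsv_path, encoding="utf-8") as f, sqlite3.connect(db_path) as conn:
        reader = csv.reader(f, delimiter="\t")
        next(reader)  # skip the header row
        conn.execute("CREATE TABLE IF NOT EXISTS language_names (lg TEXT, inLg TEXT, name TEXT)")
        conn.executemany("INSERT INTO language_names VALUES (?, ?, ?)", reader)

if __name__ == "__main__":
    sql_to_tsv("language_names.db", "language_names.tsv")
    tsv_to_sql("language_names.tsv", "rebuilt.db")
```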
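For the "Support unicode without escape sequences" task: the \uNNNN output comes from the default behaviour of Python's JSON serialisation, shown below. APy is built on Tornado, so the real fix probably lives in or alongside its response-writing code rather than being a one-liner; the field names below only mimic a translate response for illustration.

```python
import json

data = {"responseData": {"translatedText": "naïve ünïcödé tëxt"}}

# Default behaviour: every non-ASCII character is escaped as \uNNNN.
print(json.dumps(data))
# {"responseData": {"translatedText": "na\u00efve \u00fcn\u00efc\u00f6d\u00e9 t\u00ebxt"}}

# With ensure_ascii=False the characters are written out directly as UTF-8.
print(json.dumps(data, ensure_ascii=False))
# {"responseData": {"translatedText": "naïve ünïcödé tëxt"}}
```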
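For the naïve-coverage task: a rough sketch of the core check, assuming a compiled lttoolbox analyser (e.g. `xxx.automorf.bin`) and a frequency list with `freq<TAB>word` lines. It relies on lt-proc marking unknown surface forms with a `*` analysis; the pass/fail threshold and the wiring into a Makefile `check` target for Travis are left out.

```python
#!/usr/bin/env python3
"""Naive-coverage sketch: what fraction of a frequency list gets an analysis."""
import re
import subprocess
import sys

def coverage(analyser: str, freqlist: str) -> float:
    freqs, words = [], []
    with open(freqlist, encoding="utf-8") as f:
        for line in f:
            freq, word = line.split(None, 1)
            freqs.append(int(freq))
            words.append(word.strip())
    # Analyse all words in one lt-proc call; each comes back as ^surface/analyses$.
    out = subprocess.run(["lt-proc", analyser], input="\n".join(words) + "\n",
                         capture_output=True, text=True, check=True).stdout
    units = re.findall(r"\^[^$]*\$", out)
    total = known = 0
    for freq, unit in zip(freqs, units):   # assumes one lexical unit per input word
        total += freq
        if "/*" not in unit:               # '/*' marks an unknown surface form
            known += freq
    return known / total if total else 0.0

if __name__ == "__main__":
    cov = coverage(sys.argv[1], sys.argv[2])
    print(f"naive coverage: {cov:.2%}")
    sys.exit(0 if cov >= 0.80 else 1)      # placeholder threshold for `make check`
```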
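For the "Scrape Apertium repo information into json" task: a sketch that walks the GitHub API for the apertium organisation and keeps repositories whose names look like translation pairs. The output fields are illustrative rather than the schema of the existing JSON file, and unauthenticated requests are rate-limited, so a real script should send a token.

```python
#!/usr/bin/env python3
"""Sketch: list Apertium translation-pair repositories and dump basic info to JSON."""
import json
import re
import urllib.request

API = "https://api.github.com/orgs/apertium/repos?per_page=100&page={}"
PAIR = re.compile(r"^apertium-([a-z]{2,3})-([a-z]{2,3})$")

def fetch_repos():
    page, repos = 1, []
    while True:
        with urllib.request.urlopen(API.format(page)) as resp:
            batch = json.load(resp)
        if not batch:                     # empty page means we have everything
            return repos
        repos.extend(batch)
        page += 1

pairs = []
for repo in fetch_repos():
    m = PAIR.match(repo["name"])
    if m:
        pairs.append({"pair": [m.group(1), m.group(2)],
                      "repo": repo["html_url"],
                      "updated": repo["updated_at"]})

with open("pairs.json", "w", encoding="utf-8") as f:
    json.dump(pairs, f, indent=2)
```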
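For the paradigm-generator interface: the page itself should query APy from JavaScript, but the shape of the query is the interesting part, sketched here in Python. It assumes APy exposes a `/generate` endpoint taking `lang` and `q` parameters and that the lexical-unit tags below match the target analyser; confirm both with the mentor or the APy documentation before building on this.

```python
import json
import urllib.parse
import urllib.request

APY = "http://localhost:2737"   # assumed default APy port; point at your own instance

def generate(lang: str, lexical_unit: str):
    # Ask APy to generate the surface form(s) for a tagged lexical unit.
    query = urllib.parse.urlencode({"lang": lang, "q": lexical_unit})
    with urllib.request.urlopen(f"{APY}/generate?{query}") as resp:
        return json.load(resp)

# e.g. fill in the hidden tags for the verb "say" (tag names are illustrative only)
for tags in ("inf", "past", "pres.p3.sg"):
    lu = "^say<vblex><{}>$".format("><".join(tags.split(".")))
    print(tags, generate("eng", lu))
```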
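For the dev-branch task: a sketch of the branch-creation half using the GitHub REST API (read the ref of master, then create `refs/heads/dev` at the same commit). The list of released pairs is a placeholder, the default branch is assumed to be master, and the documentation half of the task is not covered here; a real script would derive the pair list from Apertium's packaging metadata.

```python
#!/usr/bin/env python3
"""Sketch: create a dev branch in each released language-pair repository."""
import json
import os
import urllib.request

TOKEN = os.environ["GITHUB_TOKEN"]                       # needs push access to the repos
RELEASED_PAIRS = ["apertium-en-es", "apertium-es-ca"]    # placeholder list

def api(method, path, body=None):
    req = urllib.request.Request(
        "https://api.github.com" + path,
        data=json.dumps(body).encode() if body else None,
        method=method,
        headers={"Authorization": f"token {TOKEN}",
                 "Accept": "application/vnd.github+json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

for repo in RELEASED_PAIRS:
    master = api("GET", f"/repos/apertium/{repo}/git/ref/heads/master")
    sha = master["object"]["sha"]
    api("POST", f"/repos/apertium/{repo}/git/refs",
        {"ref": "refs/heads/dev", "sha": sha})
    print(f"created dev branch in {repo} at {sha[:7]}")
```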
Checklist (2018)
Please remove things from this list as the tasks are added
- Pairviewer needs both tasks and issues in GitHub. Some historical tasks are below.
- Family-visualizations needs both tasks and issues in GitHub. Some historical tasks are below.
- Annotatrix needs issues to be converted into tasks. Some historical tasks are below.
- Task on joining the channel with an IRC client will require [3] to be completed.
- Task on adopting a Wiki page will require a list of suitable pages to be compiled.
Task ideas (2017)
type | title | description | tags | mentors | bgnr? | multi? | duplicates |
---|---|---|---|---|---|---|---|
research | Document resources for a language | Document resources for a language without resources already documented on the Apertium wiki. read more... | wiki, languages | Jonathan, Vin, Xavivars, Marc Riera | yes | 40 | |
research | Write a contrastive grammar | Document 6 differences between two (preferably related) languages and where they would need to be addressed in the Apertium pipeline (morph analysis, transfer, etc). Use a grammar book/resource for inspiration. Each difference should have no fewer than 3 examples. Put your work on the Apertium wiki under Language1_and_Language2/Contrastive_grammar. See Farsi_and_English/Pending_tests for an example of a contrastive grammar that a previous GCI student made. | wiki, languages | Vin, Jonathan, Fran, mlforcada | yes | 40 | |
interface | Nicely laid out interface for ud-annotatrix | Design an HTML layout for the annotatrix tool that makes best use of the space and functions nicely at different screen resolutions. | annotation, annotatrix | Fran, Masha, Jonathan | |||
interface | Come up with a CSS style for annotatrix | annotation, annotatrix, css | Fran, Masha, Jonathan, Vin | ||||
code | SDparse to CoNLL-U converter in JavaScript | SDparse is a format for describing dependency trees; entries look like relation(head, dependent). CoNLL-U is another format for describing dependency trees. Make a converter between the two formats. You will probably need to learn more about the specifics of these formats. The GitHub issue is here. A sketch of the conversion logic is given after this table. | annotation, annotatrix, javascript, dependencies | Fran, Masha, Jonathan, Vin |||
quality | Write a test for the format converters in annotatrix | annotation, annotatrix | Fran, Masha, Vin | yes | |||
code | Write a function to detect invalid trees in the UD annotatrix software and advise the user about it | It is possible to detect invalid trees (such as those that have cycles). We would like to write a function to detect those kinds of trees and advise the user. The GitHub issue is here. | annotation, annotatrix, javascript | Fran, Masha, Jonathan | |||
documentation | Write a tutorial on how to use annotatrix to annotate a dependency tree | Give step by step instructions to annotating a dependency tree with Annotatrix. Make sure you include all possibilities in the app, for example tokenisation options. | annotation, annotatrix, dependencies | Fran, Masha, Jonathan, Vin | |||
documentation | Make a video tutorial on annotating a dependency tree using the UD annotatrix software. | Give step by step instructions to annotating a dependency tree with Annotatrix. Make sure you include all possibilities available in the app, for example tokenisation options. | annotation, annotatrix, video, dependencies | Fran, Masha, Vin | |||
quality | Merge two versions of the Polish morphological dictionary | At some point in the past, someone deleted a lot of entries from the Polish morphological dictionary; unfortunately we didn't notice at the time and have since added entries to it. The objective of this task is to take the last version before the mass deletion and the current version and merge them. To get a list of the changes: $ svn diff --old apertium-pol.pol.dix@73196 --new apertium-pol.pol.dix@73199 > changes.diff | xml, dictionaries, svn | Masha |||
quality | Add 200 new entries to a bidix for language pair %AAA%-%BBB% | Our translation systems require large lexicons so as to provide production-quality coverage of any input data. This task requires the student to add 200 new words to a bidirectional dictionary. | xml, dictionaries, svn | fotonzade, Jonathan, Xavivars, Marc Riera, mlforcada | yes | yes |
quality | Add 500 new entries to a bidix for language pair %AAA%-%BBB% | Our translation systems require large lexicons so as to provide production-quality coverage of any input data. This task requires the student to add 500 new words to a bidirectional dictionary. | xml, dictionaries, svn | fotonzade, Jonathan, Xavivars, Marc Riera, mlforcada | yes | ||
quality | Disambiguate 500 tokens of text in %AAA% | Run some text through a morphological analyser and disambiguate the output. Contact the mentor beforehand to approve the choice of language and text. | disambiguation, svn | fotonzade, Xavivars, Marc Riera, mlforcada | yes | ||
code | Use apertium-init to start a new morphological analyser for %AAA% | Use apertium-init to start a new morphological analyser (for a language we don't already have, e.g. %AAA%) and add 100 words. | morphology, languages, finite-state, fst | Fran, Katya | yes | ||
documentation | add comments to .dix file symbol definitions | dix | Jonathan, Flammie | ||||
documentation | find symbols that aren't on the list of symbols page | Go through symbol definitions in Apertium dictionaries in svn (.lexc and .dix format), and document any symbols you don't find on the List of symbols page. This task is fulfilled by adding at least one class of related symbols (e.g., xyz_*) or one major symbol (e.g., abc), along with notes about what it means. | wiki,lexc,dix | Jonathan | |||
code | conllu parser and searching | Write a script (preferably in python3) that parses files in CoNLL-U format and performs basic searches, such as "find a node that has an nsubj relation to another node that has a noun POS" or "find all nodes with a cop label and a past feature". A starting-point sketch is given after this table. | python, dependencies | Jonathan, Fran, Wei En, Anna |||
code | group and count possible lemmas output by guesser | Currently a "guesser" version of Apertium transducers can output a list of possible analyses for unknown forms. Develop a new pipeline, preferably with shell scripts or python, that runs a guesser on all unknown forms in a corpus, takes the list of all possible analyses, and outputs a hit count of the most common combinations of lemma and POS tag. | guesser, transducers, shellscripts | Jonathan, Fran, Wei En |||
code | vim mode/tools for annotating dependency corpora in CG3 format | includes formatting, syntax highlighting, navigation, adding/removing nodes, updating node numbers, etc. | vim, dependencies, CG3 | Jonathan, Fran | |||
code | vim mode/tools for annotating dependency corpora in CoNLL-U format | includes formatting, syntax highlighting, navigation, adding/removing nodes, updating node numbers, etc. | vim, dependencies, conllu | Jonathan, Fran | |||
quality | figure out one-to-many bug in the lsx module | There is a bug in the lsx module referred to as the one-to-many bug because lsx-proc will not convert one form to many given an appropriately compiled transducer. Your job is to figure out why this happens and fix it. | C++, transducers, lsx | Jonathan, Fran, Wei En, Irene | |||
code | add an option for reverse compiling to the lsx module | this should be simple as it can just leverage the existing lttoolbox options for left-right / right-left compiling | C++, transducers, lsx | Jonathan, Fran, Wei En, Irene, Xavivars | |||
quality, code | clean up lsx-comp | remove extraneous functions from lsx-comp and clean up the code | C++, transducers, lsx | Jonathan, Fran, Wei En, Irene, Xavivars | |||
quality, code | clean up lsx-proc | remove extraneous functions from lsx-proc and clean up the code | C++, transducers, lsx | Jonathan, Fran, Wei En, Irene, Xavivars | |||
documentation | document usage of the lsx module | Document which language pairs have included the lsx module in their packages, which have beta-tested the lsx module, and which are good candidates for including support for lsx. Add your findings to this wiki page | C++, transducers, lsx | Irene | yes | ||
quality | beta testing the lsx-module | Create an lsx dictionary for any relevant existing language pair that doesn't yet support it, adding 10-30 entries to it. Test thoroughly to make sure the output is as expected. Report bugs/unsupported features and add them to future work. Document your tested language pair by listing it under Lsx_module#Beta_testing and in this wiki page | C++, transducers, lsx | Jonathan, Fran, Wei En, Irene | yes | yes |
code | fix an lsx bug / add an lsx feature | if you've done the above task (beta testing the lsx-module) and discovered any bugs or unsupported features, fix them. | C++, transducers, lsx | Jonathan, Fran, Wei En, Irene | yes | yes | |
code | script to test coverage over wikipedia corpus | Write a script (in python or ruby) that in one mode checks out a specified language module to a given directory, compiles it (or updates it if it already exists), and then gets the most recent nightly Wikipedia archive for that language and runs coverage over it (as much in RAM as possible). In another mode, it compiles the language pair in a docker instance that it then disposes of after successfully running coverage. Scripts exist in Apertium already for finding where a wikipedia is, extracting a wikipedia archive into a text file, and running coverage. | python, ruby, wikipedia | Jonathan, Wei En, Shardul |||
quality,code | fix any open ticket | Fix any open ticket in any of our issues trackers: main, html-tools, begiak. When you claim this task, let your mentor know which issue you plan to work on. | Jonathan, Wei En, Sushain, Shardul | 25 | 10 | ||
quality,code | make html-tools do better on Chrome's audit | Currently, apertium.org and generally any html-tools installation fails lots of Chrome audit tests. As many as possible should be fixed. Ones that require substantial work should be filed as tickets and measures should be taken to prevent problems from reappearing (e.g. a test or linter rule). More information is available in the issue tracker (#201) and asynchronous discussion should occur there. | javascript, html, css, web | Jonathan, Sushain, Shardul | |||
code,interface | upgrade html-tools to Bootstrap 4 | Currently, html-tools uses Bootstrap 3.x. Bootstrap 4 beta is out and we can upgrade (hopefully)! If an upgrade is not possible, you should document why it's not and ensure that it's easy to upgrade when the blockers are removed. More information may be available in the issue tracker (#200) and asynchronous discussion should occur there. | javascript, html, css, web | Sushain, Shardul | yes | ||
code,interface | display API endpoint on sandbox | Currently, html-tools has an "APy" mode where users can easily test out the API. However, it doesn't display the actual URL of the API endpoint and it would be nice to show that to the user. More information is available in the issue tracker (#147) and asynchronous discussion should occur there. | javascript, html, css, web | Sushain, Jonathan, Shardul | yes | ||
code,quality,research | set up a testing framework for html-tools | Currently, html-tools has no tests (sad!). This task requires researching what solutions there are for testing jQuery based web applications and putting one into place with a couple tests as a proof of concept. More information is available in the issue tracker (#116) and asynchronous discussion should occur there. | javascript, html, css, web | Sushain, Shardul | |||
code,research | make html-tools automatically download translated files in Safari, IE, etc. | Currently, html-tools is capable of translating files. However, this translation does not always result in the file immediately being downloaded to the user on all browsers. It would be awesome if it did! This task requires researching what solutions there are, evaluating them against each other and it may result in a conclusion that it just isn't possible (yet). More information is available in the issue tracker (#97) and asynchronous discussion should occur there. | javascript, html, css, web | Sushain, Jonathan, Unhammer, Shardul |||
code,interface | make html-tools fail more gracefully when API is down | Currently, html-tools relies on an API endpoint to translate documents, files, etc. However, when this API is down the interface also breaks! This task requires fixing this breakage. More information is available in the issue tracker (#207) and asynchronous discussion should occur there. | javascript, html, css, web | Sushain, Jonathan, Shardul | yes | ||
code,interface | make html-tools properly align text in mixed RTL/LTR contexts | Currently, html-tools is capable of displaying results/allowing input for RTL languages in a LTR context (e.g. we're translating Arabic in an English website). However, this doesn't always look exactly how it should look, i.e. things are not aligned correctly. More information is available in the issue tracker (#49) and asynchronous discussion should occur there. | javascript, html, css, web | Sushain, Jonathan, Shardul | yes | ||
code,interface | de-conflict the 'make a suggestion' interface in html-tools | There has been much demand for html-tools to support an interface for users making suggestions regarding e.g. incorrect translations (c.f. Google translate). An interface was designed for this purpose. However, since it has been a while since anyone touched it, the code now conflicts with the current master branch. This task requires de-conflicting this branch with master and providing screenshot/video(s) of the interface to show that it functions. More information is available in the issue tracker (#74) and asynchronous discussion should occur there. | javascript, html, css, web | Sushain, Jonathan, Shardul | |||
code,quality | make html-tools capable of translating itself | Currently, html-tools supports website translation. However, if asked to translate itself, weird things happen and the interface does not properly load. This task requires figuring out the root problem and correcting the fault. More information is available in the issue tracker (#203) and asynchronous discussion should occur there. | javascript, html, css, web | Sushain, Jonathan, Shardul | yes | ||
interface | create mock-ups for variant support in html-tools | Currently, html-tools supports translation using language variants. However, we do not have first-class style/interface support for it. This task requires speaking with mentors/reading existing discussion to understand the problem and then produce design mockups for a solution. More information is available in the issue tracker (#82) and asynchronous discussion should occur there. | javascript, html, css, web | Sushain, Jonathan, Fran, Shardul, Xavivars | |||
code,interface | refine the html-tools dictionary interface | Significant progress has been made towards providing a dictionary-style interface within html-tools. This task requires refining the existing PR by de-conflicting it with master and resolving the interface concerns discussed here. More information is available in the issue tracker (#105) and asynchronous discussion should occur there. | javascript, html, css, web | Sushain, Jonathan, Xavivars | |||
code,quality,interface | eliminate inline styles from html-tools | Currently, html-tools has inline styles. These are not very maintainable and widely considered as bad style. This task requires surveying the uses, removing all of them in a clean manner, i.e. semantically, and re-enabling the linter rule that will prevent them going forward. More information is available in the issue tracker (#114) and asynchronous discussion should occur there. | html, css, web | Sushain, Shardul, Xavivars | yes | ||
code,interface | refine the html-tools spell checking interface | Spell checking is a feature that would greatly benefit html-tools. Significant effort has been put towards implementing an effective interface to provide spelling suggestions to users (this PR contains the current progress). This task requires solving the problems highlighted in the code review on the PR and fixing any other bugs uncovered in conversations with the mentors. More information is available in the issue tracker (#12) and asynchronous discussion should occur there. | html, css, web | Sushain, Jonathan | |||
quality | find an apertium module not developed in svn and import it | Find an Apertium module developed elsewhere (e.g., github) released under a compatible open license, and import it into Apertium's svn, being sure to attribute any authors (in an AUTHORS file) and keeping the original license. One place to look for such modules might be among the final projects in a recent Computational Linguistics course. | Jonathan, Wei En | 10 | 2 ||
code | add an incubator mode to the wikipedia scraper | Add a mode to scrape a Wikipedia in incubator (e.g,. the Ingush incubator) to the WikiExtractor script | wikipedia, python | Jonathan, Wei En | |||
code,interface | add a translation mode interface to the geriaoueg plugin for firefox | Fork the geriaoueg firefox plugin and add an interface for translation mode. It doesn't have to translate at this point, but it should communicate with the server (as it currently does) to load available languages. | javascript | Jonathan | |||
code, interface | add a translation mode interface to the geriaoueg plugin for chrome | Fork the geriaoueg chrome plugin and add an interface for translation mode. It doesn't have to translate at this point, but it should communicate with the server (as it currently does) to load available languages. | javascript | Jonathan | |||
quality | update bidix included in apertium-init | There are some issues with the bidix currently included in apertium-init: the alphabet should be empty (or non-existent?) and the "sg" tags shouldn't be in the example entries. It would also be good to have entries in two different languages, especially ones with incompatible POS sub-categories (e.g. casa<n> <f> ). There is a github issue for this task. | python, xml, dix | Jonathan, Sushain | yes ||
code | apertium-init support for more features in hfst modules | Add optional support to hfst modules for enabling spelling modules, an extra twoc module for morphotactic constraints, and spellrelax. You'll want to figure out how to integrate this into the Makefile template. There is a github issue for this task. | python, xml, Makefile | Jonathan | |||
code, quality | make apertium-init README files show only relevant dictionary file | Currently in apertium-init, the README files for HFST modules show the "dix" file in the list of files, and it's likely that lttoolbox modules show "hfst" files in their README too. Check this and make it so that READMEs for these two types of monolingual modules display only the right dictionary files. There is a github issue for this task. | python, xml, Makefile | Jonathan, Sushain | |||
code, quality | Write a script to add glosses to a monolingual dictionary from a bilingual dictionary | Write a script that matches bilingual dictionary entries (in dix format) to monolingual dictionary entries in one of the languages (in lexc format) and adds glosses from the other side of the bilingual dictionary if not already there. The script should combine glosses into one when there's more than one in the bilingual dictionary. Some level of user control might be justified, from simply defaulting to a dry run unless otherwise specified, to controls for adding to versus replacing versus leaving alone existing glosses, and the like. A prototype of this script is available in SVN, though it's buggy and doesn't fully work—so this task may just end up being to debug it and make it work as intended. A good test case might be the English-Kazakh bilingual dictionary and the Kazakh monolingual dictionary. | python, lexc, dix, xml | Jonathan | |||
code | Write a script to deduplicate and/or sort individual lexc lexica. | The lexc format is a way to specify a monolingual dictionary that gets compiled into a transducer: see Apertium-specific conventions for lexc and Lttoolbox and lexc#lexc. A single lexc file may contain quite a few individual lexicons of stems, e.g. for nouns, verbs, prepositions, etc. Write a script (in python or ruby) that reads a specified lexicon, and based on which option the user specifies, identifies and removes duplicates from the lexicon, and/or sorts the entries in the lexicon. Be sure to make a dry run (i.e., do not actually make the changes) the default, and add different levels of debugging (such as displaying a number of duplicates versus printing each duplicate). Also consider allowing for different criteria for matching duplicates: e.g., whether or not the comment matches too. There are two scripts that parse lexc files already that would be a good point to start from: lexccounter.py and inject-words-from-bidix-to-lexc.py (not fully functional). A minimal starting-point sketch is given after this table. | python, ruby, lexc | Jonathan |||
quality, interface | Interface improvement for Apertium Globe Viewer | The Apertium Globe Viewer is a tool to visualise the translation pairs that Apertium currently offers, similar to the apertium pair viewer. Choose any interface or usability issue listed in the tool's documentation in consultation with your mentor, file an issue, and fix it. | javascript, maps | Jonathan | 3 | 5 | |
quality, code | Separate geographic and module data for Apertium Globe Viewer | The Apertium Globe Viewer is a tool to visualise the translation pairs that Apertium currently offers, similar to the apertium pair viewer. Currently, geographic data for languages and pairs (latitude, longitude) is stored with the size of the dictionary, etc. Find a way to separate this data into distinct files (named sensibly), and at the same time make it possible to specify only the points for each language and not the endpoints for the arcs for language pairs (those should be trivial to generate dynamically). | javascript, json | Jonathan | |||
quality, code | Scraper of information needed for Apertium visualisers | There are currently three prototype visualisers for the translation pairs Apertium offers: Apertium Globe Viewer, apertium pair viewer and the language family visualisation tool. They all rely on data about Apertium linguistic modules, and that data has to be scraped. There are scripts that do different pieces of this already (queries svn, queries svn revisions, counting bidix stems), but they are not unified. Evaluate how well these scripts work, and attempt to make them output data that will be compatible with all viewers (and/or modify the viewers to make sure they are compatible with the general output format). | python, json, scrapers | Jonathan |||
quality | fix pairviewer's 2- and 3-letter code conflation problems | pairviewer doesn't always conflate languages that have two codes. E.g. sv/swe, nb/nob, de/deu, da/dan, uk/ukr, et/est, nl/nld, he/heb, ar/ara, eus/eu each appear as two separate nodes, but should each be collapsed into one node. Figure out why this isn't happening and fix it. Also, implement an algorithm to generate 2-to-3-letter mappings for available languages based on having an identical language name in languages.json instead of loading the huge list from codes.json; try to make this as processor- and memory-efficient as possible. | javascript | Jonathan |||
quality, code | split nor into nob and nno in pairviewer | Currently in pairviewer, nor is displayed as a language separately from nob and nno. However, the nor pair actually consists of both an nob and an nno component. Figure out a way for pairviewer (or pairsOut.py / get_all_lang_pairs.py) to detect this split. So instead of having swe-nor, there would be swe-nob and swe-nno displayed (connected seamlessly with other nob-* and nno-* pairs), though the paths between the nodes would each still give information about the swe-nor pair. Implement a solution, trying to make sure it's future-proof (i.e., will work with similar sorts of things in the future). | javascript | Jonathan, Fran, Unhammer |||
quality, code | add support to pairviewer for regional and alternate orthographic modes | Currently in pairviewer, there is no way to detect or display modes like zh_TW. Add support to pairsOut.py / get_all_lang_pairs.py to detect pairs containing abbreviations like this, as well as alternate orthographic modes in pairs (e.g. uzb_Latn and uzb_Cyrl). Also, figure out a way to display these nicely in the pairviewer's front-end. Get creative. I can imagine something like zh_CN and zh_TW nodes that are in some fixed relation to zho (think Mickey Mouse configuration?). Run some ideas by your mentor and implement what's decided on. | javascript | Jonathan, Fran |||
code | Extend visualisation of pairs involving a language in language family visualisation tool | The language family visualisation tool currently has a visualisation of all pairs involving the language. Extend this to include pairs that involve those languages, and so on, until there are no more pairs. This should result in a graph of quite a few languages, with the current language in the middle. Note that if language x is the center, and there are x-y and x-z pairs, but also a y-z pair, this should display the y-z pair with a link, not with an extra z and y node each, connected to the original y and z nodes, respectively. The best way to do this may involve some sort of filtering of the data. | javascript | Jonathan | |||
code | Scrape Crimean Tatar Quran translation from a website | Bible and Quran translations often serve as a parallel corpus useful for solving NLP tasks because both texts are available in many languages. Your goal in this task is to write a program in the language of your choice which scrapes the Quran translation in the Crimean Tatar language available on the following website: http://crimean.org/islam/koran/dizen-qurtnezir/. You can adapt the scraper described on the Writing a scraper page or write your own from scratch. The output should be plain text in Tanzil format ('text with aya numbers'). You can see examples of that format on the http://tanzil.net/trans/ page. When scraping, please be polite and request data at a reasonable rate; a polite-scraper skeleton is given after this table. | scraper | Ilnar, Jonathan, fotonzade |||
code | Scrape Quran translations from a website | Bible and Quran translations often serve as a parallel corpus useful for solving NLP tasks because both texts are available in many languages. Your goal in this task is to write a program in the language of your choice which scrapes the Quran translations available on the following website: http://www.quran-ebook.com/. You can adapt the scraper described on the Writing a scraper page or write your own from scratch. The output should be plain text in Tanzil format ('text with aya numbers'). You can see examples of that format on http://tanzil.net/trans/ page. Before starting, check whether the translation is not already available on the Tanzil project's page (no need to re-scrape those, but you should use them to test the output of your program). Although the format of the translations seems to be the same and thus your program is expected to work for all of them, translations we are interested the most are the following: Azerbaijani version 2, Bashkir, Chechen, Karachay and Kyrgyz. When scraping, please be polite and request data at a reasonable rate. | scraper | Ilnar, Jonathan, fotonzade | |||
documentation | Unified documentation on Apertium visualisers | There are currently three prototype visualisers for the translation pairs Apertium offers: Apertium Globe Viewer and apertium pair viewer and language family visualisation tool. Make a page on the Apertium wiki that showcases these three visualisers and links to further documentation on each. If documentation for any of them is available somewhere other than the Apertium wiki, then (assuming compatible licenses) integrate it into the Apertium wiki, with a link back to the original. | wiki, visualisers | Jonathan | |||
research | Investigate FST backends for Swype-type input | Investigate what options exist for implementing an FST (of the sort used in Apertium spell checking) for auto-correction into an existing open source Swype-type input method on Android. You don't need to do any coding, but you should determine what would need to be done to add an FST backend into the software. Write up your findings on the Apertium wiki. | spelling,android | Jonathan | |||
research | tesseract interface for apertium languages | Find out what it would take to integrate apertium or voikkospell into tesseract. Document thoroughly available options on the wiki. | spelling,ocr | Jonathan | |||
documentation | Integrate documentation of the Apertium deformatter/reformatter into system architecture page | Integrate documentation of the Apertium deformatter and reformatter into the wiki page on the Apertium system architecture. | wiki, architecture | Jonathan, Shardul | |||
documentation | Document a full example through the Apertium pipeline | Come up with an example sentence that could hypothetically rely on each stage of the Apertium pipeline, and show the input and output of each stage under the Example translation at each stage section on the Apertium wiki. | wiki, architecture | Jonathan, Shardul | |||
documentation | Create a visual overview of structural transfer rules | Based on an existing overview of Apertium structural transfer rules, come up with a visual presentation of transfer rules that shows what parts of a set of rules correspond to which changes in input and output, and also which definitions are used where in the rules. Get creative—you can do this all in any format easily viewed across platforms, especially as a webpage using modern effects like those offered by d3 or similar. | wiki, architecture, visualisations, transfer | Jonathan, Shardul | |||
documentation | Complete the Linguistic Data chart on Apertium system architecture wiki page | With the assistance of the Apertium community (our IRC channel) and the resources available on the Apertium wiki, fill in the remaining cells of the table in the "Linguistic data" section of the Apertium system architecture wiki page. | wiki, architecture | Jonathan | yes | ||
research | Do a literature review on anaphora resolution | Anaphora resolution (see the wiki page) is the task of determining what a pronoun or other referring item refers to. Do a literature review and write up common methods with their success rates. | anaphora, rbmt, engine | Fran |||
research | Write up grammatical tables for a grammar of a language that Apertium doesn't have an analyser for | Many descriptive grammars have useful tables that can be used for building morphological analysers. Unfortunately they are in Google Books or in paper and not easily processable by machine. The objective is to find a grammar of a language for which Apertium doesn't have a morphological analyser and write up the tables on a Wiki page. | grammar, books, data-entry | Fran | |||
research | Phrasebooks and frequency | Apertium is quite terrible in general with phrasebook-style sentences in most languages. Try translating "what's up" from English to Spanish. The objective of this task is to look for phrasebook/filler type sentences/utterances in parallel corpora of film subtitles and on the internet and order them by frequency/generality. Frequency is the number of times you see the utterance; generality is how many different places you see it in. | phrasebook, translation | Fran, Xavivars |||
research | Hungarian Open Source dictionaries | There are currently 3+ open-source Hungarian resources for morphological analysis/dictionaries. Study and document how to install these and get the words and their inflectional information out, and e.g. tabulate some examples of similarities and differences in word classes/tags/etc. See Hungarian for more info. | hungarian | Flammie |||
research | Create a UD-Apertium morphology mapping | Choose a language that has a Universal Dependencies treebank and tabulate a potential set of Apertium morph labels based on the (universal) UD morph labels. See Apertium's list of symbols and UD's POS and feature tags for the labels. | morphology, ud, dependencies | Vin, Jonathan, Anna | 5 | ||
research | Create an Apertium-UD morphology mapping | Choose a language that has an Apertium morphological analyser and adapt it to convert the morphology to UD morphology | morphology, ud, dependencies | Vin, Jonathan, Anna | 5 | ||
research | Create a full verbal paradigm for an Indo-Aryan language | Choose a regular verb and create a paradigm with all possible tense/aspect/mood inflections for an Indo-Aryan language (except Hindi or Marathi). Use Masica's grammar as a reference. | morphology, indo-aryan | Vin | 10 | ||
code | Create a syntactic analogy corpus for a particular POS/language. | Refer to the syntactic section of this paper. Try to create a data set with more than 2000 * 8 = 16000 entries for a particular POS with any language, using a large corpus for frequency. | morphology, embeddings | Vin | 5 | ||
code | Envision and create a quick utility for tasks like morphological lookup | Many tasks like morphological analysis are annoying to do by navigating to the right directory, typing out an entire pipeline, etc. Write a bash script to simplify some of these procedures, taking into account the install paths and prefixes if necessary. e.g. echo "hargle" \ | bash, scripting | Vin | yes | 10 |
research,code | Use open-source OCR to convert open-source non-text news corpora to text. Evaluate an analyser's coverage on them. | Many languages that have online newspapers do not use actual text to store the news but instead use images or GIFs :((( Find a newspaper for a language that lacks news text online (e.g. Marathi), check licenses, find an OCR tool, and scrape a reasonably large corpus from the images if doing so would not violate CC/GPL. Evaluate the morphological analyser on it. | python,morphology | Vin |||
research,quality | Clean up open issues in html-tools, begiak, or APy | Go through issue threads for html-tools, begiak, or APy, and find issues that have been solved in the code but are still open on GitHub. (The fact that they have been solved may not be evident from the comments thread alone.) Once you find such an issue, comment on the thread explaining what code/commit fixed it and how it behaves at the latest revision. | issues, python | Shardul, Jonathan | 15 | ||
code,quality | Get begiak to build cleanly | Currently, begiak does not build cleanly because of a number of failing tests. Find what is causing the tests to fail, and either fix the code or the tests if the code has changed its behavior. Document all your changes in the PR that you create. | tests, python, IRC | Shardul, Jonathan | |||
quality | Find stems in the Kazakh treebank that are not in the Kazakh analyser | There are quite a few analyses in the Kazakh treebank that don't exist in the Kazakh analyser. Find as many examples of missing stems as you can. Feel free to write a script to automate this so it's as exhaustive (and non-exhausting:) as possible. You may either add what you find to the analyser yourself, commit a list of the missing stems to apertium-kaz/dev, or send a list to your mentor so that they may do one of these. | treebank, Kazakh, analyses | Jonathan, Ilnar | yes | ||
quality | Find missing analyses in the Kazakh treebank that are not in the Kazakh analyser | There are quite a few analyses in the Kazakh treebank that don't exist in the Kazakh analyser. Find as many examples of missing analyses (for existing stems) as you can. Feel free to write a script to automate this so it's as exhaustive (and non-exhausting:) as possible. You may commit a list of the missing stems to apertium-kaz/dev or send a list to your mentor so that they may do this. | treebank, Kazakh, analyses | Jonathan, Ilnar | yes | ||
code | Use apertium-init to bootstrap a new language module | Use Apertium-init to bootstrap a new language module that doesn't currently exist in Apertium. To see if a language is available, check languages and incubator, and especially ask on IRC. Add enough stems and morphology to the module so that it analyses and generates at least 100 correct forms. Check your code into Apertium's codebase. Read more about adding stems... | languages, bootstrap, dictionaries | Jonathan | yes | 25 | |
code | Use apertium-init to bootstrap a new language pair | Use Apertium-init to bootstrap a new translation pair between two languages which have monolingual modules already in Apertium. To see if a translation pair has already been made, check our SVN repository, and especially ask on IRC. Add 100 common stems to the dictionary. Check your work into Apertium's codebase. | languages, bootstrap, dictionaries, translators | Jonathan | yes | 25 | |
code | Add a transfer rule to an existing translation pair | Add a transfer rule to an existing translation pair that fixes an error in translation. Document the rule on the Apertium wiki by adding a regression tests page similar to English_and_Portuguese/Regression_tests or Icelandic_and_English/Regression_tests. Check your code into Apertium's codebase. Read more... | languages, bootstrap, transfer | Jonathan, mlforcada | 25 | 5 | |
code | Add stems to an existing translation pair | Add 1000 common stems to the dictionary of an existing translation pair. Check your work into Apertium's codebase. Read more about adding stems... | languages, bootstrap, dictionaries, translators | Jonathan | 25 | 5 | |
code | Write 10 lexical selection rules for an existing translation pair | Add 10 lexical selection rules to an existing translation pair. Check your work into Apertium's codebase. Read more... | languages, bootstrap, lexical selection, translators | Jonathan | 25 | 5 |
code | Write 10 constraint grammar rules for an existing language module | Add 10 constraint grammar rules to an existing language that you know. Check your work into Apertium's codebase. Read more... | languages, bootstrap, constraint grammar | Jonathan | 25 | 5 | |
code,interface | Paradigm generator webpage | Write a standalone webpage that makes queries (through JavaScript) to an apertium-apy server to fill in morphological forms based on morphological tags that are hidden throughout the body of the page. For example, say you have the verb "say" and some tags like inf, past, pres.p3.sg; these forms would get filled in as "say", "said", "says". | javascript, html, apy | Jonathan |||
code | Train a new model for syntactic function labeller | Choose one of the languages Apertium uses in language pairs and prepare training data for the labeller from its UD treebank: replace UD tags with Apertium tags, parse the treebank, create fastText embeddings. Then train a new model on this data and evaluate its accuracy. | python, UD, embeddings, machine learning | Anna | 5 | ||
code,quality | Tuning a learning rate for syntactic function labeller's RNN | The syntactic function labeller uses an RNN for training and predicting the syntactic functions of words. Current models can be improved by tuning training parameters, e.g. the learning rate. | python, machine learning | Anna
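For the SDparse to CoNLL-U converter task in the table above: the task asks for JavaScript, but the conversion logic is language-independent, so here is a small Python illustration. It assumes SDparse lines of the form `deprel(headform-i, depform-j)` plus the raw sentence text, and leaves the lemma/POS columns as underscores.

```python
import re

def sd_to_conllu(text: str, sd_lines):
    # One CoNLL-U row per whitespace token: ID FORM LEMMA UPOS XPOS FEATS HEAD DEPREL DEPS MISC
    rows = {i + 1: [str(i + 1), form, "_", "_", "_", "_", "0", "root", "_", "_"]
            for i, form in enumerate(text.split())}
    for line in sd_lines:
        m = re.match(r"(\S+)\((.+)-(\d+),\s*(.+)-(\d+)\)\s*$", line.strip())
        if not m:
            continue
        deprel, head, dep = m.group(1), int(m.group(3)), int(m.group(5))
        rows[dep][6], rows[dep][7] = str(head), deprel
    return "\n".join("\t".join(rows[i]) for i in sorted(rows)) + "\n"

print(sd_to_conllu("the dog ran",
                   ["det(dog-2, the-1)", "nsubj(ran-3, dog-2)", "root(ROOT-0, ran-3)"]))
```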
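For the "conllu parser and searching" task: a minimal CoNLL-U parser with one example query, as a starting point. The field layout follows the CoNLL-U specification (10 tab-separated columns); multiword ranges and empty nodes are skipped for simplicity.

```python
#!/usr/bin/env python3
"""Minimal CoNLL-U parser with a simple dependency query."""
import sys

FIELDS = ["id", "form", "lemma", "upos", "xpos", "feats", "head", "deprel", "deps", "misc"]

def read_conllu(path):
    sent = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.rstrip("\n")
            if not line:
                if sent:
                    yield sent
                    sent = []
            elif not line.startswith("#"):
                cols = line.split("\t")
                if cols[0].isdigit():            # skip 1-2 ranges and 1.1 empty nodes
                    sent.append(dict(zip(FIELDS, cols)))
    if sent:
        yield sent

# Example query: "find a node with deprel nsubj whose head is a NOUN"
for sent in read_conllu(sys.argv[1]):
    by_id = {tok["id"]: tok for tok in sent}
    for tok in sent:
        if tok["deprel"] == "nsubj" and by_id.get(tok["head"], {}).get("upos") == "NOUN":
            print(tok["form"], "->", by_id[tok["head"]]["form"])
```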
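For the lexc deduplication/sorting task: a dry-run-by-default sketch that treats every non-comment line between `LEXICON Foo` and the next LEXICON header as an entry and reports duplicates within one named lexicon. A real script would also handle sorting, multi-line entries, and the comment-matching options mentioned in the task.

```python
#!/usr/bin/env python3
"""Sketch of a lexc lexicon de-duplicator (dry run by default)."""
import argparse
import sys

def dedupe(path, lexicon, apply_changes=False):
    out, seen, in_target, dropped = [], set(), False, 0
    with open(path, encoding="utf-8") as f:
        for line in f:
            stripped = line.strip()
            if stripped.startswith("LEXICON"):
                parts = stripped.split()
                in_target = len(parts) > 1 and parts[1] == lexicon
            elif in_target and stripped and not stripped.startswith("!"):
                key = stripped.split("!")[0].strip()   # ignore trailing comments when matching
                if key in seen:
                    dropped += 1
                    print(f"duplicate: {stripped}", file=sys.stderr)
                    continue                           # drop the duplicate line from the output
                seen.add(key)
            out.append(line)
    if apply_changes:
        with open(path, "w", encoding="utf-8") as f:
            f.writelines(out)
    print(f"{dropped} duplicate(s) {'removed' if apply_changes else 'found (dry run)'}")

if __name__ == "__main__":
    ap = argparse.ArgumentParser()
    ap.add_argument("lexcfile")
    ap.add_argument("lexicon")
    ap.add_argument("--apply", action="store_true", help="actually modify the file")
    args = ap.parse_args()
    dedupe(args.lexcfile, args.lexicon, args.apply)
```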
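For the Quran-scraping tasks: a skeleton of a polite scraper that writes Tanzil-style 'text with aya numbers' output (sura|aya|text). The URL pattern and the HTML handling are placeholders only; inspect the real site, target the right elements, and keep the delay between requests.

```python
#!/usr/bin/env python3
"""Skeleton of a polite scraper producing Tanzil-style sura|aya|text lines."""
import time
import urllib.request
from html.parser import HTMLParser

BASE = "http://example.org/quran/sura-{}.html"   # placeholder URL pattern

class TextGrabber(HTMLParser):
    """Collects all text content; a real scraper would target specific elements."""
    def __init__(self):
        super().__init__()
        self.chunks = []
    def handle_data(self, data):
        if data.strip():
            self.chunks.append(data.strip())

with open("translation.txt", "w", encoding="utf-8") as out:
    for sura in range(1, 115):                    # 114 suras
        with urllib.request.urlopen(BASE.format(sura)) as resp:
            parser = TextGrabber()
            parser.feed(resp.read().decode("utf-8"))
        for aya, text in enumerate(parser.chunks, start=1):
            out.write(f"{sura}|{aya}|{text}\n")   # Tanzil format: sura|aya|text
        time.sleep(2)                             # be polite: throttle requests
```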
Task ideas (2016)
type | title | description | tags | mentors | bgnr? | multi? | |
---|---|---|---|---|---|---|---|
code | Refactor/merge the main "processing" functions of lrx-proc | lrx-proc has two modes, "-m" mode and default mode. They are each implemented by their own huge function, nearly identical to the other. Refactor the code to remove the redundancy, and run tests on lots of text with several language pairs to ensure no regressions. | c++ | Fran, Unhammer |||
code | Profile and improve speed of lrx-proc | lrx-proc is slower than it should be. There is probably some low-hanging fruit. Try profiling it and implementing an improvement. | c++ | Fran, Unhammer | |||
research | See if you can precompile xpath expressions or xslt stylesheets | An XSLT stylesheet is a program for transforming XML trees. An Xpath expression is a way of specifying a node set in an XML tree. Investigate the possibility of pre-compiling either stylesheets or xpath expressions. | parsing | Fran | |||
research | Review literature on linearisation of dependency trees | A dependency tree is an intermediate representation of a sentence with no implicit word order. Linearisation is finding the appropriate word order for a dependency tree. Do a survey of the available literature and write up a review. | parsing | Fran, Schindler | |||
research | Manually annotate/Tag text in Apertium format | Take some running text, analyse it using an Apertium analyser then manually disambiguate the result. | Fran | ||||
code | Convert Chukchi Nouns to HFST/lexc | There is a freely available lexicon of Chukchi, a language spoken in the north-east of Russia. The objective of this task is to convert part of the lexicon covering nouns to lexc format, which is a formalism for specifying concatenative morphology. | Fran | ||||
code | Convert Chukchi Numerals to HFST/lexc | There is a freely available lexicon of Chukchi, a language spoken in the north-east of Russia. The objective of this task is to convert part of the lexicon covering numerals to lexc format, which is a formalism for specifying concatenative morphology. | Fran |||
code | Convert Chukchi Adjectives to HFST/lexc | There is a freely available lexicon of Chukchi, a language spoken in the north-east of Russia. The objective of this task is to convert part of the lexicon covering adjectives to lexc format, which is a formalism for specifying concatenative morphology. | Fran |||
interface | Make a design for a web-based viewer for parallel treebanks | (also for viewing differing annotations of the same sentence) | dependencies,parallel,web | Fran,Jonathan |||
code | Write a script to convert a UD treebank | for a given language to a format suitable for training the perceptron tagger | |||||
research | Train the perceptron tagger for a language | The perceptron tagger is a new part-of-speech tagger that was developed for Apertium in the Summer of Code. Take a language from languages and train the tagger for that language. | Fran | ||||
interface | Design an annotation tool for disambiguation | cf. webanno, corpus.mari-language.org, brat | disambiguation,annotation | Fran,Jonathan |||
interface | Design an annotation tool for adding dependencies | cf. brat | dependencies,annotation | Fran,Jonathan |||
code | Train lexical selection rules | from a large parallel corpus for a language pair | Fran | ||||
documentation | Document how to set up the experiments for weighted transfer rules | Fran | |||||
code | convert UD treebank to apertium tags, use unigram tagger | (see #apertium logs 2016-06-22) | |||||
code | Write a script to extract sentences from CoNLL-U | where they have the same tokenisation as Apertium. A sketch of one approach is given after this table. | Fran, wei2912 |||
documentation | convert [4] to apertium-style documentation | Schindler | |||||
code | Implement `lt-print --strings` / `lt-print -s` | c++ | Fran, wei2912 |||
code | Implement lt-expand -n | Implement an algorithm that prints out a transducer but only follows n cycles. | c++ | Fran, wei2912 | |||
code | in-browser globe with apertium languages as points | Use d3 globe to make an apertium language/pair viewer (like pairviewer), maybe based on this or this or this. This file contains coordinates of Apertium languages. | js,html,maps | Jonathan, kvld | |||
code | Write a program to detect contexts where a path in a compiled transducer begins with a whitespace | c++ | |||||
code | Make the lt-comp compiler print a warning when a path begins with a whitespace. | A common mistake in dix files is to have stray whitespace in places; this needs to be automatically detected by the compilation tool and a warning issued to the user. | c++ |||
apertium-mar-hin: make the TL morph for any part of speech less daft | Some morphology in Marathi or Hindi is currently daft. | morphology | vin-ivar |||
add indic scripts/formal latin transliterations | Transliteration is a way to write text in a different script. Currently some Indic scripts are handled only via a WX transliterator. A generic table-driven transliteration sketch is given after this table. | python | vin-ivar |||
code | apertium-hin: more consistency with apertium-mar for verbs | Verbs in Marathi and Hindi are handled inconsistently. | morphology | vin-ivar |||
code | apertium-mar: replace cases with postpositions | Marathi cases are postpositions | morphology | vin-ivar | |||
code | apertium-mar: fix modals and quasi-modals | Modals in Marathi need fixing | morphology | vin-ivar | |||
code | refactor x file in apy | Reorganise apy code to be more readable, maintainable and so forth. | Putti | ||||
documentation | add docstrings to x file in apy | Docstrings are a way to document Python code; they can be turned into documentation for the web or read from within Python. See the relevant PEPs on python.org. | Putti, vin-ivar |||
quality | write 10 unit tests for apy | Putti, Unhammer, (sushain?) | |||||
code | add 1 transfer rule | Transfer rules are parts of translation process dealing with re-arranging, adding and deleting words. See also Short introduction to transfer | Fran, vin-ivar, zfe, kvld | ||||
code | add 50 entries to a bidix | Bilingual dictionary (bidix) contains word-to-word translations between languages, e.g. cat-chat or cat-Katze in English to French or German respectively. Add 50 of such word-translations to languages you know. | Fran, vin-ivar, zfe, kvld, Schindler | ||||
code | write 10 lexical selection rules | Write 10 lexical selection rules for a pair already set up with lexical selection | Fran, vin-ivar, zfe, Unhammer | ||||
code | write 10 constraint grammar rules | Constraint grammar is a rule-based approach to selecting linguistic readings in ambiguous cases, to improve translation quality etc. See the introduction to CG here: | Fran, vin-ivar, zfe, kvld, Unhammer |||
research | Document resources for a language | Document resources for a language without resources already documented on the wiki. read more... | Jonathan, vin-ivar, zfe, Schindler | X | X | ||
research | Write a contrastive grammar | Document 6 differences between two (preferably related) languages and where they would need to be addressed (morph analysis, transfer, etc). Use a grammar book/resource for inspiration. Each difference should have no fewer than 3 examples. Put your work on the Apertium wiki under Language1_and_Language2/Contrastive_grammar. See Farsi_and_English/Pending_tests for an example of a contrastive grammar that a previous GCI student made. | vin-ivar, Jonathan, Fran, zfe, Schindler | X | X | ||
research | apertium-hun: match existing apertium-hun paradigms with morphdb.hu | Morphdb.hu is another implementation of Hungarian morphology with a large lexicon. In order to convert it to Apertium format, its word classification needs to be mapped to the one used in Apertium. | hun,dix | Flammie | |||
code | apertium-hun: convert hunmorph.db into apertium | See the prerequisite task above. | Flammie | ||||
code | apertium-fin-eng: go through the lexicon for potential rubbish words | Apertium's Finnish–English dictionary has been converted from projects, like FinnWordNet, that have a lot of pairs unsuitable for MT; find and delete them from the file. | fin,dix | Flammie | |||
code | apertium-fin-eng: add words from apertium-fin-eng to apertium-eng | grep for English words in apertium-fin-eng.fin-eng.dix and classify them according to paradigms. See also: Apertium English. | eng,dix | Flammie | |||
code | apertium-apy: add i/o formats | Currently APy web queries get responses in an ad hoc JSON format. Research and implement interoperability with further formats. | apy | Flammie | |||
code | apertium-apy: write metadata about apertium language pairs | Write metadata in the CMDI format so that it can be used for CLARIN services. | apy | Flammie | |||
code | apertium-apy: expose more parts of the Apertium pipeline on the web | apertium.org has a web service interface for getting translations or morphological analyses. This should be extended to other functions of Apertium as well. More information: Apertium APy. | apy | Flammie | |||
code | Finish suggest-a-word feature so it can be deployed to apertium.org | A version of the apertium.org translator from last GSoC lets the user suggest fixes for unknown-word translations, among other things, but it is not yet deployed to the server. | apy | Flammie | |||
code | Further developments to suggest-a-word | Currently suggested words can be added to the wiki by a service; it would also make sense to e.g. let users log in and be attributed as contributors, among other improvements. | apy | Flammie | |||
code | Fix ordering of dependencies in CG matxin format | Fran | |||||
code | CG syntax highlighting plugin for a text editor | Write a syntax file for your favourite text editor that provides fancy syntax highlighting for Constraint Grammar | vin-ivar, Unhammer, (Flammie?) | ||||
code | Package apertium-lint to install to a prefix | apertium-lint currently installs with pip, modify that to allow passing a flag for installing it to a prefix | vin-ivar | ||||
quality | Fix a bug in Apertium html-tools | Fix a currently open issue with html-tools in consultation with your mentor. | multi,html,js,html-tools | Unhammer, Jonathan, Kira | X | ||
quality | Fix a bug in Apertium APy | Fix a currently open issue with APy in consultation with your mentor. | multi,python,apy | Unhammer, Jonathan, Kira | X | ||
code | Script to get resources from GF | Write a script to scrape words from one particular paradigm in GF and make it usable in Apertium. | vin-ivar | ||||
code | Create a list of text editors compatible with different scripts | Create a list of ten text editors and document how well they handle Latin text, RTL text (Arabic), combining characters (Devanagari), etc. Document any bugs with e.g. copy/paste and tab indentation. | vin-ivar | ||||
code | Write a script to strip apertium morphological information from CONLL-U files | Write a script to strip apertium morphological information from CONLL-U files so the dependency trees can be rendered okay by the online tools. | vin-ivar | ||||
research | Investigate FST backends for Swype-type input | Investigate what options exist for implementing an FST (of the sort used in Apertium spell checking) for auto-correction into an existing open source Swype-type input method on Android. You don't need to do any coding, but you should determine what would need to be done to add an FST backend into the software. Write up your findings on the Apertium wiki. | spelling,android | Jonathan | |||
code | Fix a memory leak in matxin-transfer | The matxin-transfer program is a component of the Matxin MT system, a sister system to Apertium. Run valgrind on the code and find and fix a memory leak. There may be several. | c++ | Fran | |||
code | Write a tool helping to test bidix coherence | This tool will generate a file with each lemma of the main categories (at least nouns, adjectives and verbs) found in a bidix. The file is then translated to the second language and back to the first one; looking for changes makes it possible to detect transfer problems and changes of meaning. (A sketch of the round-trip check appears after this table.) | Bech | ||||
quality | fix any begiak issue | Fix any open issue for begiak (Apertium's IRC bot), to be chosen in consultation with your mentor. | python,irc | Jonathan, sushain, wei2912 | X | ||
quality | merge phenny upstream into begiak | Merge upstream patches etc. into begiak (Apertium's IRC bot). | git,irc | Jonathan, sushain, Unhammer, wei2912 | |||
quality | open a pull request for merging begiak modules into upstream | Open a pull request to merge features from begiak (Apertium's IRC bot) into upstream. | git,irc | Jonathan, sushain, Unhammer, wei2912 | |||
code | begiak interface to Apertium's web API | Write a module for begiak (Apertium's IRC bot) that provides access to at least one feature of APy (Apertium's web API). You may want to base the code off begiak's Apertium translation module (which may not be in 100% working order...). | irc,apy | Jonathan, sushain, Unhammer, wei2912 | X | ||
research | tesseract interface for apertium languages | Find out what it would take to integrate apertium or voikkospell into tesseract. Document the available options thoroughly on the wiki. | spelling,ocr | Jonathan | |||
interface | Abstract the formatting for the Html-tools interface. | The interface for html-tools (Apertium's website framework) should be easily customisable so that people can make it look how they want. The task is to abstract the formatting and make one or more new stylesheets to change the appearance. This is basically making a way of "skinning" the interface. | css,html-tools | Jonathan,sushain | |||
quality | improvements to lexc plugin for vim | A vim syntax definition file for lexc is presented on the following wiki page: Apertium-specific conventions for lexc#Syntax highlighting in vim. This plugin works, but it has some issues: (1) comments on LEXICON lines are not highlighted as comments, (2) editing lines with comments (or similar) can be really slow, (3) the lexicon a form points at is not highlighted distinctly from the form (e.g., in the line «асқабақ:асқабақ N1 ; ! "pumpkin"», N1 should be highlighted somehow). Modify or rewrite the plugin to fix these issues. | vim | Jonathan | |||
code | Write a transliteration plugin for mediawiki | Write a mediawiki plugin similar in functionality (and perhaps implementation) to the way the Kazakh-language wikipedia's orthography changing system works (documented by a previous GCI student). It should be able to be directed to use any arbitrary mode from an apertium mode file installed in a pre-specified path on a server. | php | Jonathan | |||
documentation | add comments to .dix file symbol definitions | dix | Schindler | ||||
documentation | find symbols that aren't on the list of symbols page | Go through symbol definitions in Apertium dictionaries in svn (.lexc and .dix format), and document any symbols you don't find on the List of symbols page. This task is fulfilled by adding at least one class of related symbols (e.g., xyz_*) or one major symbol (e.g., abc), along with notes about what it means. | wiki,lexc,dix | Schindler | |||
code | conllu parser and searching | Write a script (preferably in python3) that will parse files in CoNLL-U format and perform basic searches, such as "find a node that has an nsubj relation to another node that has a noun POS" or "find all nodes with a cop label and a past feature" (a parsing and search sketch appears after this table). | python,dependencies | Jonathan, Fran | |||
code | group and count possible lemmas output by guesser | Currently a "guesser" version of Apertium transducers can output a list of possible analyses for unknown forms. Develop a new pipeline, preferably with shell scripts or Python, that runs a guesser on all unknown forms in a corpus, takes the list of all possible analyses, and outputs a hit count of the most common combinations of lemma and POS tag (a counting sketch appears after this table). | guesser,transducers,shellscripts | Jonathan, Fran | |||
code | vim mode/tools for annotating dependency corpora in CG3 format | includes formatting, syntax highlighting, navigation, adding/removing nodes, updating node numbers, etc. | vim,dependencies,CG3 | Jonathan, Fran | |||
code | vim mode/tools for annotating dependency corpora in CoNLL-U format | includes formatting, syntax highlighting, navigation, adding/removing nodes, updating node numbers, etc. | vim,dependencies,conllu | Jonathan, Fran | |||
quality | figure out one-to-many bug in the lsx module | C++,transducers,lsx | Jonathan, Fran | ||||
code | add an option for reverse compiling to the lsx module | this should be simple as it can just leverage the existing lttoolbox options for left-right / right-left compiling | C++,transducers,lsx | Jonathan, Fran | |||
quality | remove extraneous functions from lsx-comp and clean up the code | C++,transducers,lsx | Jonathan, Fran | ||||
quality | remove extraneous functions from lsx-proc and clean up the code | C++,transducers,lsx | Jonathan, Fran | ||||
code | script to test coverage over wikipedia corpus | Write a script (in python or ruby) that in one mode checks out a specified language module to a given directory, compiles it (or updates it if already present), and then gets the most recent nightly wikipedia archive for that language and runs coverage over it (as much in RAM as possible). In another mode, it compiles the language pair in a docker instance that it then disposes of after successfully running coverage. Scripts exist in Apertium already for finding where a wikipedia is, extracting a wikipedia archive into a text file, and running coverage. | python,ruby,wikipedia | Jonathan
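For the "Implement lt-expand -n" row above, here is a minimal sketch of what cycle-limited expansion could look like. It runs on a toy transducer stored as a Python adjacency list, not on the real lttoolbox binary format, so the data structure, the arc-counting policy and the example transducer are illustrative assumptions only.

```python
# Sketch of cycle-limited expansion for the "lt-expand -n" task.
# The transducer is a toy adjacency list (state -> [(in, out, target)]),
# NOT the real lttoolbox binary format; the interpretation chosen here
# is "traverse each arc at most max_cycles + 1 times along a path".

def expand(transducer, start, finals, max_cycles):
    """Print every input:output string pair reachable from `start`."""
    def walk(state, path, seen):
        if state in finals and path:
            print("".join(i for i, _ in path) + ":" +
                  "".join(o for _, o in path))
        for (inp, outp, target) in transducer.get(state, []):
            arc = (state, inp, outp, target)
            if seen.get(arc, 0) > max_cycles:
                continue              # arc already unrolled max_cycles extra times
            new_seen = dict(seen)
            new_seen[arc] = new_seen.get(arc, 0) + 1
            walk(target, path + [(inp, outp)], new_seen)

    walk(start, [], {})

# Toy "ba(na)+" transducer: the arc 3 -> 2 creates the cycle.
toy = {
    0: [("b", "b", 1)],
    1: [("a", "a", 2)],
    2: [("n", "n", 3)],
    3: [("a", "a", 2), ("a", "a", 4)],
}
expand(toy, start=0, finals={4}, max_cycles=2)   # bana, banana, bananana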
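The "Write a tool helping to test bidix coherence" row describes a round trip: translate each lemma into the other language and back, and flag anything that changes. Below is a minimal sketch of that round trip. It assumes the lemma list has already been extracted into a plain-text file (one lemma per line) and that the pair is installed and reachable through the `apertium` command-line front end; the pair names and file name on the command line are placeholders, and lemma extraction from the .dix itself is not shown.

```python
# Round-trip check sketch for the bidix coherence tool, e.g.
#   python3 roundtrip.py eng-spa spa-eng lemmas.txt
import subprocess
import sys

def translate(text, pair):
    """Run plain text through an installed Apertium pair (reads stdin)."""
    result = subprocess.run(["apertium", pair], input=text,
                            capture_output=True, text=True, check=True)
    return result.stdout

def main():
    there, back, lemma_file = sys.argv[1], sys.argv[2], sys.argv[3]
    with open(lemma_file, encoding="utf-8") as f:
        lemmas = [line.strip() for line in f if line.strip()]

    source = "\n".join(lemmas)
    forward = translate(source, there)
    round_trip = translate(forward, back)

    # Compare line by line and report lemmas that changed on the way back.
    for orig, fwd, rt in zip(lemmas, forward.splitlines(),
                             round_trip.splitlines()):
        rt_clean = rt.strip().lstrip("*#@")   # drop unknown/generation marks
        if orig.lower() != rt_clean.lower():
            print(f"{orig}\t->\t{fwd.strip()}\t->\t{rt.strip()}")

if __name__ == "__main__":
    main()
```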
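For the "conllu parser and searching" row, here is a small python3 sketch with the two example searches from the task description. The token fields follow the standard CoNLL-U column layout; the feature name Tense=Past used for the "past feature" search is an assumption about the annotation scheme in use.

```python
# Minimal CoNLL-U parser and search sketch.
# Column layout: ID FORM LEMMA UPOS XPOS FEATS HEAD DEPREL DEPS MISC

def read_conllu(path):
    """Yield sentences as lists of token dicts."""
    sent = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.rstrip("\n")
            if not line:
                if sent:
                    yield sent
                    sent = []
                continue
            if line.startswith("#"):
                continue
            cols = line.split("\t")
            if len(cols) < 10 or "-" in cols[0] or "." in cols[0]:
                continue   # skip malformed, multiword-token and empty-node lines
            if cols[5] == "_":
                feats = {}
            else:
                feats = dict(kv.split("=", 1) for kv in cols[5].split("|"))
            sent.append({"id": int(cols[0]), "form": cols[1],
                         "lemma": cols[2], "upos": cols[3],
                         "feats": feats, "head": int(cols[6]),
                         "deprel": cols[7]})
    if sent:
        yield sent

def nsubj_of_noun(sent):
    """Nodes with an nsubj relation to a head that is a NOUN."""
    by_id = {tok["id"]: tok for tok in sent}
    return [tok for tok in sent
            if tok["deprel"] == "nsubj"
            and by_id.get(tok["head"], {}).get("upos") == "NOUN"]

def cop_and_past(sent):
    """Nodes with a cop label and a past-tense feature (assuming Tense=Past)."""
    return [tok for tok in sent
            if tok["deprel"] == "cop" and tok["feats"].get("Tense") == "Past"]

if __name__ == "__main__":
    import sys
    for sentence in read_conllu(sys.argv[1]):
        for tok in nsubj_of_noun(sentence) + cop_and_past(sentence):
            print(tok["form"], tok["lemma"], tok["deprel"])
```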
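For the "group and count possible lemmas output by guesser" row, the following sketches only the counting step, assuming the guesser output is piped in on stdin in Apertium stream format (^surface/analysis1/analysis2$, one cohort per line); running the guesser itself is not shown.

```python
# Count the most common (lemma, first POS tag) combinations in guesser output.
import re
import sys
from collections import Counter

COHORT = re.compile(r"\^([^/]+)/([^$]+)\$")

def lemma_pos_counts(stream):
    counts = Counter()
    for line in stream:
        for match in COHORT.finditer(line):
            for ana in match.group(2).split("/"):
                if ana.startswith("*"):
                    continue                       # still unknown, no guess made
                lemma = ana.split("<", 1)[0]
                tags = re.findall(r"<([^>]+)>", ana)
                pos = tags[0] if tags else "?"
                counts[(lemma, pos)] += 1
    return counts

if __name__ == "__main__":
    for (lemma, pos), n in lemma_pos_counts(sys.stdin).most_common(20):
        print(f"{n}\t{lemma}\t{pos}")
```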
Task drafts
- (multiple tasks) Take one of the old and abandoned GSoC projects and get it compiling/running/working/documented
- Make a wiki page of GSoC projects that were never "integrated" (i.e. turned into abandonware)
- Make a regex printer for the binary transfer files, e.g. given <def-cat n="foo"><cat-item n="n.*"/></def-cat> it will print foo\t<n>(<[^>]+>)+ (see the sketch after this list)
- Make a program that checks to see that your attributes and categories are defined before using them in transfer files.
- Make a script to check that multicharacter symbols (tags, archiphonemes) are declared in both lexc and twol files (input: lexc, twol, .hfst file)
- Compile and install Matxin, and document steps, for en-eu
- Compile and install Matxin, and document steps, for es-eu
- Get HTML translation working for www.apertium.org
- Write a script to go from a bilingual Apertium tagged corpus to a word-aligned corpus with fast_align
- Write a script that will, given a bilingual dictionary, remove the TL side of any non-ambiguous word in the previous corpus
- Write transfer rules to correctly convert time and date expressions from language 'a' to language 'b'
- Use word2vec for something
- Lexical selection by linearisation majority voting in GF:
  - a)
    - Parse language a to abstract syntax.
    - Take n-best AS trees, and linearise to language b.
    - Score all linearisations on a language model of b.
    - Choose the AS tree which is ranked top by the language model.
    - Linearise to language b.
  - b)
    - Parse language a to abstract syntax.
    - Take n-best AS trees, and linearise to all languages.
    - For each language, score all linearisations on a language model of that language.
    - Choose the AS tree which is ranked top by the majority of the language models.
    - Linearise to language b.
- Write a program to take two monolingual dictionaries and a bilingual word list (or bidix) and create a list of sl_pardef:tl_pardef mappings; each mapping will get a unique_id.
- Write a script that, given a sample of unique_ids and an XML template, will generate accurate bilingual dictionary entries.
- Do the same, but for LEXC
- Write a program to guess the transitivity of Turkic verbs based on a corpus.
- Do something with Scandinavian triplets (e.g. triplets of [nno, nob, dan] words) to get equivs in Swedish.
- Categorise Turkic adjectives automatically
- Find and fix errors in Swedish->{Nynorsk,Bokmål,Danish} translation.
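For the regex-printer draft above (the <def-cat> item with the forward reference), here is a minimal sketch that reads a transfer rule file as XML rather than from the compiled binary and follows the output format given in the example. Real transfer files use the "tags" attribute on cat-item while the example uses n="...", so the code accepts either; the handling of lemma-only cat-items and of multiple cat-items per def-cat is a guess.

```python
# Sketch: print a regex over tag sequences for each def-cat, e.g.
#   foo\t<n>(<[^>]+>)+
import sys
import xml.etree.ElementTree as ET

def item_to_regex(name):
    """Turn a cat-item tag pattern like 'n.*' or 'vblex.pres' into a regex."""
    out = ""
    for part in name.split("."):
        if part == "*":
            out += "(<[^>]+>)+"      # any further tags, as in the example
        else:
            out += "<" + part + ">"
    return out

def main(path):
    root = ET.parse(path).getroot()
    for defcat in root.iter("def-cat"):
        patterns = []
        for item in defcat.iter("cat-item"):
            # accept both the real "tags" attribute and the "n" of the example
            name = item.get("tags") or item.get("n") or ""
            if name:
                patterns.append(item_to_regex(name))
        print(defcat.get("n") + "\t" + "|".join(patterns))

if __name__ == "__main__":
    main(sys.argv[1])
```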
Corrections to Russian tasks
- (17:06:14) firespeaker: "Эти тесты должны содержать как можно больше особенностей этих языков" ("These tests should cover as many features of these languages as possible")
- (17:06:59) firespeaker: удмурский >> удмуртский (spelling fix for "Udmurt")
- (17:11:27) firespeaker: "Create a set of test sentences": "Создать множество тестовых фраз" > "Создать ряд тестовых предложении" (better Russian wording)
- (17:12:23) jimregan: the tasks have both 'турецкий' and 'турецский' (the latter is a misspelling of "Turkish")
—Firespeaker 22:09, 20 November 2011 (UTC)
Old tasks (2012)
Category | Title | Description | Mentors |
---|---|---|---|
code | Write lexical selection rules | Write 50 lexical selection rules for 10 words. For further information, you can consult the getting started guide. | Francis Tyers |
code | Add entries to transfer lexicon | Add 100 entries to a transfer lexicon of your choice. This will involve adding lexical transfer entries, which consist of a translation, and its corresponding grammatical features. | Francis Tyers - Hrvoje Peradin - Unhammer - Firespeaker |
code | Write transfer rules | Write 5 transfer rules for a new language pair. You can find basic documentation in the New language pair HOWTO and more in-depth (but incomplete) documentation in the long introduction to transfer rules. | Jimregan - Hrvoje Peradin - Unhammer - Firespeaker |
code | Dictionary conversion | Write a conversion module for an existing dictionary for apertium-dixtools. | Jimregan |
code | Dictionary conversion in python | Write a conversion module for an existing free bilingual dictionary to lttoolbox format using Python. | Firespeaker |
code | Apertium support for Morfologik | Add support to morfologik for reading Apertium-format dictionaries. | Jimregan |
code | Write disambiguation rules for apertium-tur | Write 3 disambiguation rules for apertium-tur. For further information, contact your mentor. | zfe |
code | Write disambiguation rules for apertium-sh-sl | Write 5 disambiguation rules for Slovene. For further information contact your mentor. | Zfe - Hrvoje Peradin |
research | Write a contrastive grammar | Using a grammar, document 20 sample cases of grammatical differences between two languages | Francis Tyers - Zfe - Hrvoje Peradin - Unhammer - Firespeaker |
research | Disambiguating text | You will be given ambiguous sentences in a language, totalling 500 words. Your job is to pick the correct morphological reading in context. First read the page "morphologically disambiguating text" and then ask your mentor for more information. | Francis Tyers - Zfe - Hrvoje Peradin - Unhammer |
quality | Profile apertium-lex-tools | Take a large corpus, and run it through apertium-lex-tools to find out how long the program spends in each part of the code. You can use tools such as valgrind and gprof. | Francis Tyers |
interface | Design a nice javascript drop-down box | Taking as inspiration the Google Translate drop-down box, design a similar drop-down for Apertium. Will require knowledge of JavaScript and possibly jQuery. | Francis Tyers - Hrvoje Peradin - Firespeaker |
interface | Dictionary lookup | Integrate the Javascript dictionary lookup tool into the translation interface (AWI), to offer alternative translations where available | Jimregan |
interface | Google n-grams visualisation | Design an interface to compare possible translations using Google N-Grams | Jimregan |
quality | System quality control | Read 500 words of machine translation output and report on translation errors | Francis Tyers - Zfe - Hrvoje Peradin - Unhammer - Firespeaker |
quality | Input correction | Write 10 rules for LanguageTool for common errors that affect translation | Jimregan |
quality | Post-correction rules | Write 10 rules for LanguageTool to fix Apertium-generated errors | Jimregan |
quality | New release check | Compare released language pairs with their SVN version, to see which language pairs need a new release | Jimregan |
quality | Testvoc | Help prepare a language pair for a new release by fixing 20 dictionary entries with generation errors | Jimregan - Hrvoje Peradin - Unhammer |
documentation | Check installation instructions | Check that the installation instructions are up to date and work. Report any problems. | Francis Tyers - Zfe - Unhammer |
documentation | Check new language pair howto | Read through the new language pair howto, follow the steps, and check to see if it works. | Francis Tyers - Zfe - Hrvoje Peradin - Unhammer |
documentation | Check new language with lttoolbox howto | Read through the new language with lttoolbox howto, follow the steps, and check to see if it works. | Francis Tyers - Zfe - Hrvoje Peradin - Unhammer |
documentation | Check new language with HFST howto | Read through the new language with HFST howto, follow the steps, and check to see if it works. | Francis Tyers - Zfe - Hrvoje Peradin - Unhammer |
interface | Design the new webpage of apertium.org | Design, using HTML+CSS. The page should validate with the W3C validator. | Francis Tyers - Zfe - Hrvoje Peradin |
interface | Interface the new website with the webservice | Interface the new website with the webservice provided by Apertium. JavaScript knowledge required. | Francis Tyers - Zfe - Hrvoje Peradin |
research | Categorise words | For 100 words find the right inflection paradigm. You can learn more about inflectional paradigms on the page "Monodix basics". | Francis Tyers - Zfe - Hrvoje Peradin - Unhammer |
research | Catalogue resources | Catalogue all the available resources (grammatical descriptions, wordlists, dictionaries, spellcheckers, papers, corpora, etc.), along with the licences they are under. See for example the page Aromanian which was documented last year. | Francis Tyers - Zfe - Unhammer |
research | Make a list of potential language pairs. | Make a list of pairs of closely-related languages. For each pair of languages, collect information about Wikipedia size, number of editors, if there are existing MT systems or not. Contact your mentor for further information. | Francis Tyers - Zfe |
research | Make a 50-sentence translation memory | Make a 50-sentence translation memory using text found on Wikipedia. This will involve finding articles which are translations of each other, and putting the equivalent sentences in an XML file. | Francis Tyers - Zfe - Unhammer |
documentation | Document the mecab tag set | see Japanese. Mecab is a morphological analysis and part-of-speech tagging module for Japanese. The tags are written in Japanese. We'd like to find translations for each of the tags, along with example word forms for each tag. This task requires knowledge of Japanese. | Francis Tyers - Kanmuri |
documentation | Document the Turmorph tag set | Document, with the use of sample sentences, the tag set used by turmorph | Zfe |
research | Investigate CIA replacement options | The CIA bot was an IRC bot which reported SVN commits to our IRC channel. Unfortunately the service has been offline for some time. This task is to investigate other options for commit reporting to IRC which are compatible with having our SVN in SourceForge. | Francis Tyers - Unhammer |
code | Write a morphological transducer | Write a morphological analyser to analyse a paragraph of text. This will involve reading through the morphological analyser Howtos (e.g. HFST, lttoolbox), choosing a language you want to work with, and going through the process for that language. It is not expected to be complete, but it should be able to analyse a paragraph of text of your choosing. | Firespeaker - Francis Tyers - Unhammer |
research | Design some localised stickers | Design some Apertium stickers with the Apertium logo, localised to your country or region. | Francis Tyers - Firespeaker |
code | Implement IBM model 1 alignment | Implement IBM model 1 alignment for a tagged parallel corpus in Apertium stream format. Use python or C++. (A minimal Model 1 sketch appears after this table.) | Francis Tyers |
code | Implement IBM model 2 alignment | Implement IBM model 2 alignment for a tagged parallel corpus in Apertium stream format. Use python or C++. | Francis Tyers |
code | Implement IBM model 3 alignment | Implement IBM model 3 alignment for a tagged parallel corpus in Apertium stream format. Use python or C++. | Francis Tyers |
code | Implement IBM model 4 alignment | Implement IBM model 4 alignment for a tagged parallel corpus in Apertium stream format. Use python or C++. | Francis Tyers |
code | Implement IBM model 5 alignment | Implement IBM model 5 alignment for a tagged parallel corpus in Apertium stream format. Use python or C++. | Francis Tyers |
research | Build a translation memory | You will be given some free parallel text in two languages (e.g. the Bible, Parliament proceedings, etc.) and sentence align it. The difference between this task and the Wikipedia one is that in the Wikipedia one you will need to search for the documents, in this one, the documents will be provided. | Francis Tyers - Zfe |
code | Design a testvoc script for biltrans | Design a testvoc script which can deal with pairs which have ambiguous bilingual dictionaries. It should test each of the entries in turn. | Francis Tyers |
code | Port paradigm chopper to python3/elementtree | Take the paradigm-chopper.py from Speling tools and port it to use python 3 and ElementTree instead of python 2 and 4suite. | Francis Tyers |
code | Port speling tools to python3 | Port the speling tools (except paradigm chopper) to python 3. | Francis Tyers |
code | Improve paradigm review | Make paradigm review in speling tools sort paradigms by frequency of use. | Francis Tyers |
code | Clean up and document yasmet | YASMET is a small toolkit for maximum entropy modelling, only around 130 lines of code. The task is to deobfuscate and document it. The .cc file can be found in Apertium SVN. | Francis Tyers |
code | Extract a category from Wiktionary | Screen scrape and extract inflectional information in speling format for a given category from Wiktionary. | Francis Tyers |
code | Extract translations from Wiktionary | Screen scrape and extract translations for a given category from Wiktionary. | Francis Tyers |
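The IBM model alignment tasks above (models 1–5) all build on model 1, so here is a minimal sketch of model 1 EM training on already-tokenised sentence pairs. Reading the Apertium stream format and NULL-word handling are left out of the sketch, and the toy corpus is purely illustrative.

```python
# Minimal IBM Model 1 EM sketch for the alignment tasks above.
from collections import defaultdict

def train_model1(pairs, iterations=10):
    """Estimate translation probabilities t(f|e) with EM."""
    f_vocab = {f for (fs, es) in pairs for f in fs}
    t = defaultdict(lambda: 1.0 / len(f_vocab))   # uniform initialisation

    for _ in range(iterations):
        count = defaultdict(float)   # expected counts c(f, e)
        total = defaultdict(float)   # expected counts c(e)
        for fs, es in pairs:
            for f in fs:
                z = sum(t[(f, e)] for e in es)    # normalisation for this f
                for e in es:
                    delta = t[(f, e)] / z
                    count[(f, e)] += delta
                    total[e] += delta
        for (f, e), c in count.items():
            t[(f, e)] = c / total[e]
    return t

if __name__ == "__main__":
    toy = [
        ("la casa".split(), "the house".split()),
        ("la llave".split(), "the key".split()),
        ("una casa".split(), "a house".split()),
    ]
    t = train_model1(toy)
    for (f, e), p in sorted(t.items(), key=lambda kv: -kv[1]):
        if p > 0.1:
            print(f"t({f}|{e}) = {p:.2f}")
```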
Old tasks (2011)
Area | Difficulty | Title | Description | Time (hours) | People |
---|---|---|---|---|---|
code | 1. Hard | Convert existing resource: Urdu morphological analyser | Take Muhammad Humayoun's Urdu Morphology and convert to lttoolbox format. | 8–10 | Francis Tyers |
code | 1. Hard | Convert existing resource: Punjabi morphological analyser | Take Muhammad Humayoun's Punjabi Morphology and convert to lttoolbox format. | 8–10 | Francis Tyers |
code | 1. Hard | Convert existing resource: Kurdish morphological analyser | Take the Alexina Kurdish Morphology and convert to lttoolbox format. | 8–10 | Francis Tyers |
code | 3. Easy | Convert existing resource: Reta Vortaro Belarusian-Esperanto | Take the Belarusian-Esperanto lexicon and convert to lttoolbox format. | 2–4 | Jimregan |
code | 3. Easy | Convert existing resource: Reta Vortaro Breton-Esperanto | Take the Breton-Esperanto lexicon and convert to lttoolbox format. | 2–4 | Jacob_Nordfalk |
code | 3. Easy | Convert existing resource: Reta Vortaro Bulgarian-Esperanto | Take the Bulgarian-Esperanto lexicon and convert to lttoolbox format. | 2–4 | Hèctor Alòs |
code | 3. Easy | Convert existing resource: Reta Vortaro Czech-Esperanto | Take the Czech-Esperanto lexicon and convert to lttoolbox format. | 2–4 | Jimregan |
code | 3. Easy | Convert existing resource: Reta Vortaro Dutch-Esperanto | Take the Dutch-Esperanto lexicon and convert to lttoolbox format. | 2–4 | Hèctor Alòs |
code | 3. Easy | Convert existing resource: Reta Vortaro Finnish-Esperanto | Take the Finnish-Esperanto lexicon and convert to lttoolbox format. | 2–4 | Hèctor Alòs |
code | 3. Easy | Convert existing resource: Reta Vortaro German-Esperanto | Take the German-Esperanto lexicon and convert to lttoolbox format. | 2–4 | Hèctor Alòs |
code | 3. Easy | Convert existing resource: Reta Vortaro Greek-Esperanto | Take the Greek-Esperanto lexicon and convert to lttoolbox format. | 2–4 | Jacob_Nordfalk |
code | 3. Easy | Convert existing resource: Reta Vortaro Hebrew-Esperanto | Take the Hebrew-Esperanto lexicon and convert to lttoolbox format. | 2–4 | Jacob_Nordfalk |
code | 3. Easy | Convert existing resource: Reta Vortaro Hungarian-Esperanto | Take the Hungarian-Esperanto lexicon and convert to lttoolbox format. | 2–4 | Jacob_Nordfalk |
code | 3. Easy | Convert existing resource: Reta Vortaro Italian-Esperanto | Take the Italian-Esperanto lexicon and convert to lttoolbox format. | 2–4 | Hèctor Alòs |
code | 3. Easy | Convert existing resource: Reta Vortaro Persian-Esperanto | Take the Persian-Esperanto lexicon and convert to lttoolbox format. | 2–4 | Hèctor Alòs |
code | 3. Easy | Convert existing resource: Reta Vortaro Portuguese-Esperanto | Take the Portuguese-Esperanto lexicon and convert to lttoolbox format. | 2–4 | Hèctor Alòs |
code | 3. Easy | Convert existing resource: Reta Vortaro Polish-Esperanto | Take the Polish-Esperanto lexicon and convert to lttoolbox format. | 2–4 | Hèctor Alòs |
code | 3. Easy | Convert existing resource: Reta Vortaro Portuguese-Esperanto | Take the Portuguese-Esperanto lexicon and convert to lttoolbox format. | 2–4 | Hèctor Alòs |
code | 3. Easy | Convert existing resource: Reta Vortaro Russian-Esperanto | Take the Russian-Esperanto lexicon and convert to lttoolbox format. | 2–4 | Jimregan |
code | 3. Easy | Convert existing resource: Reta Vortaro Slovakian-Esperanto | Take the Slovakian-Esperanto lexicon and convert to lttoolbox format. | 2–4 | Jimregan |
code | 3. Easy | Convert existing resource: Reta Vortaro Swedish-Esperanto | Take the Swedish-Esperanto lexicon and convert to lttoolbox format. | 2–4 | Jacob_Nordfalk |
code | 3. Easy | Convert existing resource: Reta Vortaro Turkish-Esperanto | Take the Turkish-Esperanto lexicon and convert to lttoolbox format. | 2–4 | Jimregan |
code | 2. Medium | Convert Apertium resources: nn-nb for Freedict | Apertium's lexicons would make an excellent start for bilingual dictionaries. FreeDict currently has no Nynorsk-Bokmal dictionary. | 2–4 | Piotr Bański |
code | 2. Medium | Convert Apertium resources: es-ca for Freedict | Apertium's lexicons would make an excellent start for bilingual dictionaries. FreeDict currently has no Spanish-Catalan dictionary. | 2–4 | Piotr Bański |
code | 2. Medium | Convert Apertium resources: is-en for Freedict | Apertium's lexicons would make an excellent start for bilingual dictionaries. FreeDict currently has no Icelandic-English dictionary. | 2–4 | Piotr Bański |
code | 2. Medium | Convert Apertium resources: es-ast for Freedict | Apertium's lexicons would make an excellent start for bilingual dictionaries. FreeDict currently has no Asturian-Spanish dictionary. | 2–4 | Piotr Bański |
code | 2. Medium | Convert Apertium resources: oc-ca for Freedict | Apertium's lexicons would make an excellent start for bilingual dictionaries. FreeDict currently has no Occitan-Catalan dictionary. | 2–4 | Piotr Bański |
code | 2. Medium | Convert Apertium resources: mk-bg for Freedict | Apertium's lexicons would make an excellent start for bilingual dictionaries. FreeDict currently has no Macedonian-Bulgarian dictionary. | 2–4 | Piotr Bański |
code | 2. Medium | Convert Apertium resources: mk-en for Freedict | Apertium's lexicons would make an excellent start for bilingual dictionaries. FreeDict currently has no Macedonian-English dictionary. | 2–4 | Piotr Bański |
code | 3. Easy | Convert existing resource: English-Slovakian dictionary | Take MSAS/MASS and convert to lttoolbox format. | 1–4 | Zdenko Podobný |
code | 2. Medium | Convert existing resource: Slovakian morphological analyser | Take the morphological analyser distributed with LanguageTool and convert to lttoolbox format. | 1–4 | Zdenko Podobný |
code | 3. Easy | Convert existing resource: FreeDict afr-deu | Take the Freedict afr-deu dictionary and convert to lttoolbox format. | 2–4 | Jimregan |
code | 3. Easy | Convert existing resource: FreeDict ckb-kmr | Take the Freedict ckb-kmr dictionary and convert to lttoolbox format. | 2–4 | Jimregan |
code | 3. Easy | Convert existing resource: FreeDict dan-eng | Take the Freedict dan-eng dictionary and convert to lttoolbox format. | 2–4 | Jimregan |
code | 3. Easy | Convert existing resource: FreeDict eng-ell | Take the Freedict eng-ell dictionary and convert to lttoolbox format. | 2–4 | Jimregan |
code | 3. Easy | Convert existing resource: FreeDict eng-hin | Take the Freedict eng-hin dictionary and convert to lttoolbox format. | 2–4 | Jimregan |
code | 3. Easy | Convert existing resource: FreeDict eng-hrv | Take the Freedict eng-hrv dictionary and convert to lttoolbox format. | 2–4 | Jimregan |
code | 3. Easy | Convert existing resource: FreeDict eng-pol | Take the Freedict eng-pol dictionary and convert to lttoolbox format. | 2–4 | Jimregan |
code | 3. Easy | Convert existing resource: FreeDict eng-rom | Take the Freedict eng-rom dictionary and convert to lttoolbox format. | 2–4 | Jimregan |
code | 3. Easy | Convert existing resource: FreeDict eng-rus | Take the Freedict eng-rus dictionary and convert to lttoolbox format. | 2–4 | Jimregan |
code | 3. Easy | Convert existing resource: FreeDict gla-deu | Take the Freedict gla-deu dictionary and convert to lttoolbox format. | 2–4 | Jimregan |
code | 3. Easy | Convert existing resource: FreeDict hrv-eng | Take the Freedict hrv-eng dictionary and convert to lttoolbox format. | 2–4 | Jimregan |
code | 3. Easy | Convert existing resource: FreeDict jpn-deu | Take the Freedict jpn-deu dictionary and convert to lttoolbox format. | 2–4 | Jimregan |
code | 3. Easy | Convert existing resource: FreeDict kha-deu | Take the Freedict kha-deu dictionary and convert to lttoolbox format. | 2–4 | Jimregan |
code | 3. Easy | Convert existing resource: FreeDict kha-eng | Take the Freedict kha-eng dictionary and convert to lttoolbox format. | 2–4 | Jimregan |
code | 3. Easy | Convert existing resource: FreeDict kur-eng | Take the Freedict kur-eng dictionary and convert to lttoolbox format. | 2–4 | Jimregan |
code | 3. Easy | Convert existing resource: FreeDict kur-tur | Take the Freedict kur-tur dictionary and convert to lttoolbox format. | 2–4 | Jimregan |
code | 3. Easy | Convert existing resource: FreeDict lat-deu | Take the Freedict lat-deu dictionary and convert to lttoolbox format. | 2–4 | Jimregan |
code | 3. Easy | Convert existing resource: FreeDict san-deu | Take the Freedict san-deu dictionary and convert to lttoolbox format. | 2–4 | Jimregan |
code | 3. Easy | Convert existing resource: FreeDict slo-eng | Take the Freedict slo-eng dictionary and convert to lttoolbox format. | 2–4 | Jimregan |
code | 3. Easy | Convert existing resource: FreeDict swh-pol | Take the Freedict swh-pol dictionary and convert to lttoolbox format. | 2–4 | Jimregan |
code | 3. Easy | Convert existing resource: FreeDict ara-eng and eng-ara | Take the Freedict ara-eng and eng-ara dictionaries, convert to lttoolbox format, and merge them. | 2–4 | Jimregan |
code | 3. Easy | Convert existing resource: FreeDict bre-fra and fra-bre | Take the Freedict bre-fra and fra-bre dictionaries, convert to lttoolbox format, and merge them. | 2–4 | Jimregan |
code | 3. Easy | Convert existing resource: FreeDict deu-fra and fra-deu | Take the Freedict deu-fra and fra-deu dictionaries, convert to lttoolbox format, and merge them. | 2–4 | Jimregan |
code | 3. Easy | Convert existing resource: FreeDict deu-ita and ita-deu | Take the Freedict deu-ita and ita-deu dictionaries, convert to lttoolbox format, and merge them. | 2–4 | Jimregan |
code | 3. Easy | Convert existing resource: FreeDict deu-kur and kur-deu | Take the Freedict deu-kur and kur-deu dictionaries, convert to lttoolbox format, and merge them. | 2–4 | Jimregan |
code | 3. Easy | Convert existing resource: FreeDict deu-nld and nld-deu | Take the Freedict deu-nld and nld-deu dictionaries, convert to lttoolbox format, and merge them. | 2–4 | Jimregan |
code | 3. Easy | Convert existing resource: FreeDict deu-por and por-deu | Take the Freedict deu-por and por-deu dictionaries, convert to lttoolbox format, and merge them. | 2–4 | Jimregan |
code | 3. Easy | Convert existing resource: FreeDict deu-tur and tur-deu | Take the Freedict deu-tur and tur-deu dictionaries, convert to lttoolbox format, and merge them. | 2–4 | Jimregan |
code | 3. Easy | Convert existing resource: FreeDict eng-afr and afr-eng | Take the Freedict eng-afr and afr-eng dictionaries, convert to lttoolbox format, and merge them. | 2–4 | Jimregan |
code | 3. Easy | Convert existing resource: FreeDict eng-cym and cym-eng | Take the Freedict eng-cym and cym-eng dictionaries, convert to lttoolbox format, and merge them. | 2–4 | Jimregan |
code | 3. Easy | Convert existing resource: FreeDict eng-cze and ces-eng | Take the Freedict eng-cze and ces-eng dictionaries, convert to lttoolbox format, and merge them. | 2–4 | Jimregan |
code | 3. Easy | Convert existing resource: FreeDict eng-deu and deu-eng | Take the Freedict eng-deu and deu-eng dictionaries, convert to lttoolbox format, and merge them. | 2–4 | Jimregan |
code | 3. Easy | Convert existing resource: FreeDict eng-fra and fra-eng | Take the Freedict eng-fra and fra-eng dictionaries, convert to lttoolbox format, and merge them. | 2–4 | Jimregan |
code | 3. Easy | Convert existing resource: FreeDict eng-gle and gle-eng | Take the Freedict eng-gle and gle-eng dictionaries, convert to lttoolbox format, and merge them. | 2–4 | Jimregan |
code | 3. Easy | Convert existing resource: FreeDict eng-hun and hun-eng | Take the Freedict eng-hun and hun-eng dictionaries, convert to lttoolbox format, and merge them. | 2–4 | Jimregan |
code | 3. Easy | Convert existing resource: FreeDict eng-ita and ita-eng | Take the Freedict eng-ita and ita-eng dictionaries, convert to lttoolbox format, and merge them. | 2–4 | Jimregan |
code | 3. Easy | Convert existing resource: FreeDict eng-lat and lat-eng | Take the Freedict eng-lat and lat-eng dictionaries, convert to lttoolbox format, and merge them. | 2–4 | Jimregan |
code | 3. Easy | Convert existing resource: FreeDict eng-lit and lit-eng | Take the Freedict eng-lit and lit-eng dictionaries, convert to lttoolbox format, and merge them. | 2–4 | Jimregan |
code | 3. Easy | Convert existing resource: FreeDict eng-nld and nld-eng | Take the Freedict eng-nld and nld-eng dictionaries, convert to lttoolbox format, and merge them. | 2–4 | Jimregan |
code | 3. Easy | Convert existing resource: FreeDict eng-por and por-eng | Take the Freedict eng-por and por-eng dictionaries, convert to lttoolbox format, and merge them. | 2–4 | Jimregan |
code | 3. Easy | Convert existing resource: FreeDict eng-scr and scr-eng | Take the Freedict eng-scr and scr-eng dictionaries, convert to lttoolbox format, and merge them. | 2–4 | Jimregan |
code | 3. Easy | Convert existing resource: FreeDict eng-spa and spa-eng | Take the Freedict eng-spa and spa-eng dictionaries, convert to lttoolbox format, and merge them. | 2–4 | Jimregan |
code | 3. Easy | Convert existing resource: FreeDict eng-swa and swa-eng | Take the Freedict eng-swa and swa-eng dictionaries, convert to lttoolbox format, and merge them. | 2–4 | Jimregan |
code | 3. Easy | Convert existing resource: FreeDict eng-swe and swe-eng | Take the Freedict eng-swe and swe-eng dictionaries, convert to lttoolbox format, and merge them. | 2–4 | Jimregan |
code | 3. Easy | Convert existing resource: FreeDict eng-tur and tur-eng | Take the Freedict eng-tur and tur-eng dictionaries, convert to lttoolbox format, and merge them. | 2–4 | Jimregan |
code | 3. Easy | Convert existing resource: FreeDict eng-wel and wel-eng | Take the Freedict eng-wel and wel-eng dictionaries, convert to lttoolbox format, and merge them. | 2–4 | Jimregan |
code | 3. Easy | Convert existing resource: FreeDict fra-nld and nld-fra | Take the Freedict fra-nld and nld-fra dictionaries, convert to lttoolbox format, and merge them. | 2–4 | Jimregan |
code | 3. Easy | Convert existing resource: FreeDict gle-pol and pol-gle | Take the Freedict gle-pol and pol-gle dictionaries, convert to lttoolbox format, and merge them. | 2–4 | Jimregan |
code | 2. Medium | Convert existing resource: Polish-Slovakian transfer rules | Much of the existing rules in Apertium's pl-cs system originated in pl-sk. Take the new rules in pl-cs and apply them to pl-sk. No knowledge of Polish, Slovakian, or Czech is required, though it will help | 1–4 | Zdenko Podobný |
outreach | 3. Easy | Apertium on Macedonian Wikipedia | Bulgarian WP has 107,355 articles, Macedonian WP has 42,112, less than half as many. Translate some articles from Bulgarian Wikipedia to Macedonian Wikipedia using Apertium, and then postedit them. Explain to the local Wikipedia community what you are doing beforehand. | 1–4 | Francis Tyers |
outreach | 3. Easy | Apertium on Occitan Wikipedia | Catalan WP has 350,000 articles, Occitan WP has 55,000. Translate some articles from Catalan Wikipedia to Occitan Wikipedia using Apertium, and then postedit them. Explain to the local Wikipedia community what you are doing beforehand. | 1–4 | Francis Tyers |
outreach | 3. Easy | Apertium on Asturian Wikipedia | Spanish WP has 840,000 articles, Asturian WP has 15,000, less than a fiftieth as many. Translate some articles from Spanish Wikipedia to Asturian Wikipedia using Apertium, and then postedit them. Explain to the local Wikipedia community what you are doing beforehand. | 1–4 | Francis Tyers |
outreach | 3. Easy | Apertium on Aragonese Wikipedia | Spanish WP has 840,000 articles, Aragonese WP has 26,000. Translate some articles from Spanish Wikipedia to Aragonese Wikipedia using Apertium, and then postedit them. Explain to the local Wikipedia community what you are doing beforehand. | 1–4 | Francis Tyers |
outreach | 3. Easy | Apertium on Esperanto Wikipedia: Catalan | Catalan WP has 350,000 articles, Esperanto WP has 150,000. Translate some articles from Catalan Wikipedia to Esperanto Wikipedia using Apertium, and then postedit them. You can use the utility Vikitradukilo. Explain to the Esperanto Wikipedia community what you are doing beforehand. | 1–4 | Hèctor Alòs |
outreach | 3. Easy | Apertium on Esperanto Wikipedia: Spanish | Spanish WP has 840,000 articles, Esperanto WP has 150,000. Translate some articles from Spanish Wikipedia to Esperanto Wikipedia using Apertium, and then postedit them. You can use the utility Vikitradukilo. Explain to the Esperanto Wikipedia community what you are doing beforehand. | 1–4 | Hèctor Alòs |
outreach | 3. Easy | Apertium on Esperanto Wikipedia: French | French WP has 1,200,000 articles, Esperanto WP has 150,000. Translate some articles from French Wikipedia to Esperanto Wikipedia using Apertium, and then postedit them. You can use the utility Vikitradukilo. Explain to the Esperanto Wikipedia community what you are doing beforehand. | 1–4 | Hèctor Alòs |
outreach | 3. Easy | Apertium on Esperanto Wikipedia: English | English WP has 3,800,000 articles, Esperanto WP has 150,000. Translate some articles from English Wikipedia to Esperanto Wikipedia using Apertium, and then postedit them. You can use the utility Vikitradukilo. Explain to the Esperanto Wikipedia community what you are doing beforehand. | 1–4 | Hèctor Alòs |
outreach | 3. Easy | Apertium on Portuguese Wikitravel: Spanish | Translate some articles from Spanish Wikitravel to Portuguese Wikitravel using Apertium, and then postedit them. Explain to the Portuguese Wikitravel community what you are doing beforehand. | 1–4 | Gramirez |
outreach | 2. Medium | LUG Flyer | Design a flyer that briefly explains Apertium, suitable for handing out at Linux User Group meetings | 1–4 | Jimregan |
outreach | 2. Medium | School Flyer | Design a flyer that briefly explains Apertium, suitable for handing out at your school | 1–4 | Jimregan |
quality | 3. Easy | Thorough checkup of bn-en morphological analyser | While the current bn-en morphological analyser has pretty good coverage, it should be higher. Part of the reason is that a lot of verbs have one or two slightly different surface forms that differ from the regular ones, and the analyser misses them. Using lt-expand it is possible to generate all forms of the verbs, then manually check these and, using another script (already in the pair), rebuild the analyser file. This checking will require a native speaker or expert in Bengali. | 2–4 | Abu Zaher |
code | 2. Medium | Dixtools: TEI export | Take the code from Dix2CC.java or Dix2Tiny.java and adapt to export TEI P5 format dictionaries, suitable for FreeDict. This project is suitable for someone interested in learning Java. | 2–4 | Jimregan |
research | 2. Medium | Contrastive analysis: Macedonian and Albanian | Create a set of test sentences (see various pages of 'Pending tests' and 'Regression tests' on the Wiki) for translation between Macedonian and Albanian. The tests should cover as many features of the languages as possible. Some of the examples might be able to be found in a grammar, others might need to be invented. This will not involve programming, only grammatical analysis. | 4–6 | Francis Tyers |
research | 2. Medium | Contrastive analysis: Kurdish and Persian | Create a set of test sentences (see various pages of 'Pending tests' and 'Regression tests' on the Wiki) for translation between Kurdish and Persian. The tests should cover as many features of the languages as possible. Some of the examples might be able to be found in a grammar, others might need to be invented. This will not involve programming, only grammatical analysis. | 4–6 | Francis Tyers |
research | 2. Medium | Contrastive analysis: Hindi and Urdu | Create a set of test sentences (see various pages of 'Pending tests' and 'Regression tests' on the Wiki) for translation between Hindi and Urdu. The tests should cover as many features of the languages as possible. Some of the examples might be able to be found in a grammar, others might need to be invented. This will not involve programming, only grammatical analysis. | 4–6 | Francis Tyers |
research | 2. Medium | Contrastive analysis: Finnish and Estonian | Create a set of test sentences (see various pages of 'Pending tests' and 'Regression tests' on the Wiki) for translation between Finnish and Estonian. The tests should cover as many features of the languages as possible. Some of the examples might be able to be found in a grammar, others might need to be invented. This will not involve programming, only grammatical analysis. | 4–6 | Francis Tyers |
research | 2. Medium | Contrastive analysis: Spanish and Italian | Create a set of test sentences (see various pages of 'Pending tests' and 'Regression tests' on the Wiki) for translation between Spanish and Italian. The tests should cover as many features of the languages as possible. Some of the examples might be able to be found in a grammar, others might need to be invented. This will not involve programming, only grammatical analysis. | 4–6 | Gramirez |
research | 2. Medium | Contrastive analysis: Catalan and Sardinian | Create a set of test sentences (see various pages of 'Pending tests' and 'Regression tests' on the Wiki) for translation between Catalan and Sardinian. The tests should cover as many features of the languages as possible. Some of the examples might be able to be found in a grammar, others might need to be invented. This will not involve programming, only grammatical analysis. | 4–6 | Francis Tyers |
research | 2. Medium | Contrastive analysis: Italian and Sardinian | Create a set of test sentences (see various pages of 'Pending tests' and 'Regression tests' on the Wiki) for translation between Italian and Sardinian. The tests should cover as many features of the languages as possible. Some of the examples might be able to be found in a grammar, others might need to be invented. This will not involve programming, only grammatical analysis. | 4–6 | Deadbeef |
research | 2. Medium | Contrastive analysis: Belorussian and Esperanto | Create a set of test sentences (see various pages of 'Pending tests' and 'Regression tests' on the Wiki) for translation between Belorussian and Esperanto. The tests should cover as many features of the languages as possible. Some of the examples might be able to be found in a grammar, others might need to be invented. This will not involve programming, only grammatical analysis. | 4–6 | Hèctor Alòs |
research | 2. Medium | Contrastive analysis: Breton and Esperanto | Create a set of test sentences (see various pages of 'Pending tests' and 'Regression tests' on the Wiki) for translation between Breton and Esperanto. The tests should cover as many features of the languages as possible. Some of the examples might be able to be found in a grammar, others might need to be invented. This will not involve programming, only grammatical analysis. | 4–6 | Hèctor Alòs |
research | 2. Medium | Contrastive analysis: Bulgarian and Esperanto | Create a set of test sentences (see various pages of 'Pending tests' and 'Regression tests' on the Wiki) for translation between Bulgarian and Esperanto. The tests should cover as many features of the languages as possible. Some of the examples might be able to be found in a grammar, others might need to be invented. This will not involve programming, only grammatical analysis. | 4–6 | Hèctor Alòs |
research | 2. Medium | Contrastive analysis: Czech and Esperanto | Create a set of test sentences (see various pages of 'Pending tests' and 'Regression tests' on the Wiki) for translation between Czech and Esperanto. The tests should cover as many features of the languages as possible. Some of the examples might be able to be found in a grammar, others might need to be invented. This will not involve programming, only grammatical analysis. | 4–6 | Hèctor Alòs |
research | 2. Medium | Contrastive analysis: Dutch and Esperanto | Create a set of test sentences (see various pages of 'Pending tests' and 'Regression tests' on the Wiki) for translation between Dutch and Esperanto. The tests should cover as many features of the languages as possible. Some of the examples might be able to be found in a grammar, others might need to be invented. This will not involve programming, only grammatical analysis. | 4–6 | Hèctor Alòs |
research | 2. Medium | Contrastive analysis: German and Esperanto | Create a set of test sentences (see various pages of 'Pending tests' and 'Regression tests' on the Wiki) for translation between German and Esperanto. The tests should cover as many features of the languages as possible. Some of the examples might be able to be found in a grammar, others might need to be invented. This will not involve programming, only grammatical analysis. | 4–6 | Hèctor Alòs |
research | 2. Medium | Contrastive analysis: Greek and Esperanto | Create a set of test sentences (see various pages of 'Pending tests' and 'Regression tests' on the Wiki) for translation between Greek and Esperanto. The tests should cover as many features of the languages as possible. Some of the examples might be able to be found in a grammar, others might need to be invented. This will not involve programming, only grammatical analysis. | 4–6 | Hèctor Alòs |
research | 2. Medium | Contrastive analysis: Italian and Esperanto | Create a set of test sentences (see various pages of 'Pending tests' and 'Regression tests' on the Wiki) for translation between Italian and Esperanto. The tests should cover as many features of the languages as possible. Some of the examples might be able to be found in a grammar, others might need to be invented. This will not involve programming, only grammatical analysis. | 4–6 | Hèctor Alòs |
research | 2. Medium | Contrastive analysis: Persian and Esperanto | Create a set of test sentences (see various pages of 'Pending tests' and 'Regression tests' on the Wiki) for translation between Persian and Esperanto. The tests should cover as many features of the languages as possible. Some of the examples might be able to be found in a grammar, others might need to be invented. This will not involve programming, only grammatical analysis. | 4–6 | Hèctor Alòs |
research | 2. Medium | Contrastive analysis: Polish and Esperanto | Create a set of test sentences (see various pages of 'Pending tests' and 'Regression tests' on the Wiki) for translation between Polish and Esperanto. The tests should cover as many features of the languages as possible. Some of the examples might be able to be found in a grammar, others might need to be invented. This will not involve programming, only grammatical analysis. | 4–6 | Hèctor Alòs |
research | 2. Medium | Contrastive analysis: Portuguese and Esperanto | Create a set of test sentences (see various pages of 'Pending tests' and 'Regression tests' on the Wiki) for translation between Portuguese and Esperanto. The tests should cover as many features of the languages as possible. Some of the examples might be able to be found in a grammar, others might need to be invented. This will not involve programming, only grammatical analysis. | 4–6 | Hèctor Alòs |
research | 2. Medium | Contrastive analysis: Russian and Esperanto | Create a set of test sentences (see various pages of 'Pending tests' and 'Regression tests' on the Wiki) for translation between Russian and Esperanto. The tests should cover as many features of the languages as possible. Some of the examples might be able to be found in a grammar, others might need to be invented. This will not involve programming, only grammatical analysis. | 4–6 | Hèctor Alòs |
research | 2. Medium | Contrastive analysis: Slovak and Esperanto | Create a set of test sentences (see various pages of 'Pending tests' and 'Regression tests' on the Wiki) for translation between Slovak and Esperanto. The tests should cover as many features of the languages as possible. Some of the examples might be able to be found in a grammar, others might need to be invented. This will not involve programming, only grammatical analysis. | 4–6 | Hèctor Alòs |
research | 2. Medium | Contrastive analysis: Swedish and Esperanto | Create a set of test sentences (see various pages of 'Pending tests' and 'Regression tests' on the Wiki) for translation between Swedish and Esperanto. The tests should cover as many features of the languages as possible. Some of the examples might be able to be found in a grammar, others might need to be invented. This will not involve programming, only grammatical analysis. | 4–6 | Hèctor Alòs |
research | 2. Medium | Contrastive analysis: Spanish and Aragonese | Create a set of test sentences (see various pages of 'Pending tests' and 'Regression tests' on the Wiki) for translation between Spanish and Aragonese. The tests should cover as many features of the languages as possible. Some of the examples might be able to be found in a grammar, others might need to be invented. This will not involve programming, only grammatical analysis. | 4–6 | Juan Pablo Martínez |
исследование | 2. Нормальное | Противопоставление: Русский язык и эсперанто | Создать множество тестовых фраз (посмотрите страницы 'Pending tests' и 'Regression tests' в Вики) для перевода с русского на эсперанто. Тесты должны содержать как можно больше черт языков. Некоторые из примеров можно найти в грамматике, другие могут быть придуманы. Без использования программирования, только грамматический анализ. | 4–6 | Hèctor Alòs |
исследование | 2. Нормальное | Противопоставление: Чувашский и русский языки | Создать множество тестовых фраз (посмотрите страницы 'Pending tests' и 'Regression tests' в Вики) для перевода с чувашского на русский язык. Тесты должны содержать как можно больше черт языков. Некоторые из примеров можно найти в грамматике, другие могут быть придуманы. Без использования программирования, только грамматический анализ. | 4–6 | Hèctor Alòs |
исследование | 2. Нормальное | Противопоставление: Чувашский и татарский языки | Создать множество тестовых фраз (посмотрите страницы 'Pending tests' и 'Regression tests' в Вики) для перевода с татарского на чувашский язык. Тесты должны содержать как можно больше черт языков. Некоторые из примеров можно найти в грамматике, другие могут быть придуманы. Без использования программирования, только грамматический анализ. | 4–6 | Hèctor Alòs |
исследование | 2. Нормальное | Противопоставление: Чувашский и башкирский языки | Создать множество тестовых фраз (посмотрите страницы 'Pending tests' и 'Regression tests' в Вики) для перевода с башкирского на чувашский язык. Тесты должны содержать как можно больше черт языков. Некоторые из примеров можно найти в грамматике, другие могут быть придуманы. Без использования программирования, только грамматический анализ. | 4–6 | Hèctor Alòs |
исследование | 2. Нормальное | Противопоставление: Чувашский и турецкий языки | Создать множество тестовых фраз (посмотрите страницы 'Pending tests' и 'Regression tests' в Вики) для перевода с турецкого на чувашский язык. Тесты должны содержать как можно больше черт языков. Некоторые из примеров можно найти в грамматике, другие могут быть придуманы. Без использования программирования, только грамматический анализ. | 4–6 | Hèctor Alòs |
исследование | 2. Нормальное | Противопоставление: Татарский и русский языки | Создать множество тестовых фраз (посмотрите страницы 'Pending tests' и 'Regression tests' в Вики) для перевода с татарского на русский язык. Тесты должны содержать как можно больше черт языков. Некоторые из примеров можно найти в грамматике, другие могут быть придуманы. Без использования программирования, только грамматический анализ. | 4–6 | Hèctor Alòs |
исследование | 2. Нормальное | Противопоставление: Башкирский и татарский языки | Создать множество тестовых фраз (посмотрите страницы 'Pending tests' и 'Regression tests' в Вики) для перевода с башкирского на татарский язык. Тесты должны содержать как можно больше черт языков. Некоторые из примеров можно найти в грамматике, другие могут быть придуманы. Без использования программирования, только грамматический анализ. | 4–6 | Hèctor Alòs |
исследование | 2. Нормальное | Противопоставление: Башкирский и турецкий языки | Создать множество тестовых фраз (посмотрите страницы 'Pending tests' и 'Regression tests' в Вики) для перевода с турецкого на башкирский язык. Тесты должны содержать как можно больше черт языков. Некоторые из примеров можно найти в грамматике, другие могут быть придуманы. Без использования программирования, только грамматический анализ. | 4–6 | Hèctor Alòs |
исследование | 2. Нормальное | Противопоставление: Якутский и татарский языки | Создать множество тестовых фраз (посмотрите страницы 'Pending tests' и 'Regression tests' в Вики) для перевода с якутского на русский язык. Тесты должны содержать как можно больше черт языков. Некоторые из примеров можно найти в грамматике, другие могут быть придуманы. Без использования программирования, только грамматический анализ. | 4–6 | Hèctor Alòs |
research | 2. Medium | Contrastive analysis: Yakut and Tatar | Create a set of test phrases (see the 'Pending tests' and 'Regression tests' pages on the wiki) for translation from Tatar into Yakut. The tests should cover as many features of the languages as possible. Some of the examples can be found in grammars, others can be made up. No programming involved, only grammatical analysis. | 4–6 | Hèctor Alòs
research | 2. Medium | Contrastive analysis: Yakut and Turkish | Create a set of test phrases (see the 'Pending tests' and 'Regression tests' pages on the wiki) for translation from Turkish into Yakut. The tests should cover as many features of the languages as possible. Some of the examples can be found in grammars, others can be made up. No programming involved, only grammatical analysis. | 4–6 | Hèctor Alòs
research | 2. Medium | Contrastive analysis: Yakut and Russian | Create a set of test phrases (see the 'Pending tests' and 'Regression tests' pages on the wiki) for translation from Russian into Yakut. The tests should cover as many features of the languages as possible. Some of the examples can be found in grammars, others can be made up. No programming involved, only grammatical analysis. | 4–6 | Hèctor Alòs, Halan
research | 2. Medium | Contrastive analysis: Yakut and English | Create a set of test phrases (see the 'Pending tests' and 'Regression tests' pages on the wiki) for translation from English into Yakut. The tests should cover as many features of the languages as possible. Some of the examples can be found in grammars, others can be made up. No programming involved, only grammatical analysis. | 4–6 | Hèctor Alòs, Halan
research | 2. Medium | Contrastive analysis: Kumyk and Nogai | Create a set of test phrases (see the 'Pending tests' and 'Regression tests' pages on the wiki) for translation from Kumyk into Nogai. The tests should cover as many features of the languages as possible. Some of the examples can be found in grammars, others can be made up. No programming involved, only grammatical analysis. | 4–6 | Hèctor Alòs
research | 2. Medium | Contrastive analysis: Kumyk and Tatar | Create a set of test phrases (see the 'Pending tests' and 'Regression tests' pages on the wiki) for translation from Tatar into Kumyk. The tests should cover as many features of the languages as possible. Some of the examples can be found in grammars, others can be made up. No programming involved, only grammatical analysis. | 4–6 | Hèctor Alòs
research | 2. Medium | Contrastive analysis: Kumyk and Turkish | Create a set of test phrases (see the 'Pending tests' and 'Regression tests' pages on the wiki) for translation from Turkish into Kumyk. The tests should cover as many features of the languages as possible. Some of the examples can be found in grammars, others can be made up. No programming involved, only grammatical analysis. | 4–6 | Hèctor Alòs
research | 2. Medium | Contrastive analysis: Karachay-Balkar and Tatar | Create a set of test phrases (see the 'Pending tests' and 'Regression tests' pages on the wiki) for translation from Tatar into Karachay-Balkar. The tests should cover as many features of the languages as possible. Some of the examples can be found in grammars, others can be made up. No programming involved, only grammatical analysis. | 4–6 | Hèctor Alòs
research | 2. Medium | Contrastive analysis: Karachay-Balkar and Turkish | Create a set of test phrases (see the 'Pending tests' and 'Regression tests' pages on the wiki) for translation from Turkish into Karachay-Balkar. The tests should cover as many features of the languages as possible. Some of the examples can be found in grammars, others can be made up. No programming involved, only grammatical analysis. | 4–6 | Hèctor Alòs
research | 2. Medium | Contrastive analysis: Tuvan and Khakas | Create a set of test phrases (see the 'Pending tests' and 'Regression tests' pages on the wiki) for translation from Tuvan into Khakas. The tests should cover as many features of the languages as possible. Some of the examples can be found in grammars, others can be made up. No programming involved, only grammatical analysis. | 4–6 | Hèctor Alòs
research | 2. Medium | Contrastive analysis: Tuvan and Tatar | Create a set of test phrases (see the 'Pending tests' and 'Regression tests' pages on the wiki) for translation from Tatar into Tuvan. The tests should cover as many features of the languages as possible. Some of the examples can be found in grammars, others can be made up. No programming involved, only grammatical analysis. | 4–6 | Hèctor Alòs
research | 2. Medium | Contrastive analysis: Tuvan and Turkish | Create a set of test phrases (see the 'Pending tests' and 'Regression tests' pages on the wiki) for translation from Turkish into Tuvan. The tests should cover as many features of the languages as possible. Some of the examples can be found in grammars, others can be made up. No programming involved, only grammatical analysis. | 4–6 | Hèctor Alòs
research | 2. Medium | Contrastive analysis: Ossetian and Russian | Create a set of test phrases (see the 'Pending tests' and 'Regression tests' pages on the wiki) for translation from Ossetian into Russian. The tests should cover as many features of the languages as possible. Some of the examples can be found in grammars, others can be made up. No programming involved, only grammatical analysis. | 4–6 | Hèctor Alòs, Amikeco
research | 2. Medium | Contrastive analysis: Ossetian and English | Create a set of test phrases (see the 'Pending tests' and 'Regression tests' pages on the wiki) for translation from Ossetian into English. The tests should cover as many features of the languages as possible. Some of the examples can be found in grammars, others can be made up. No programming involved, only grammatical analysis. | 4–6 | Hèctor Alòs, Amikeco
research | 2. Medium | Contrastive analysis: Ossetian and Esperanto | Create a set of test phrases (see the 'Pending tests' and 'Regression tests' pages on the wiki) for translation from Ossetian into Esperanto. The tests should cover as many features of the languages as possible. Some of the examples can be found in grammars, others can be made up. No programming involved, only grammatical analysis. | 4–6 | Hèctor Alòs, Amikeco
research | 2. Medium | Contrastive analysis: Buryat and Kalmyk | Create a set of test phrases (see the 'Pending tests' and 'Regression tests' pages on the wiki) for translation from Buryat into Kalmyk. The tests should cover as many features of the languages as possible. Some of the examples can be found in grammars, others can be made up. No programming involved, only grammatical analysis. | 4–6 | Hèctor Alòs, Jargal
research | 2. Medium | Contrastive analysis: Buryat and Yakut | Create a set of test phrases (see the 'Pending tests' and 'Regression tests' pages on the wiki) for translation from Buryat into Yakut. The tests should cover as many features of the languages as possible. Some of the examples can be found in grammars, others can be made up. No programming involved, only grammatical analysis. | 4–6 | Hèctor Alòs, Jargal
research | 2. Medium | Contrastive analysis: Buryat and Tuvan | Create a set of test phrases (see the 'Pending tests' and 'Regression tests' pages on the wiki) for translation from Buryat into Tuvan. The tests should cover as many features of the languages as possible. Some of the examples can be found in grammars, others can be made up. No programming involved, only grammatical analysis. | 4–6 | Hèctor Alòs, Jargal
research | 2. Medium | Contrastive analysis: Udmurt and Russian | Create a set of test phrases (see the 'Pending tests' and 'Regression tests' pages on the wiki) for translation from Udmurt into Russian. The tests should cover as many features of the languages as possible. Some of the examples can be found in grammars, others can be made up. No programming involved, only grammatical analysis. | 4–6 | Hèctor Alòs
research | 2. Medium | Contrastive analysis: Udmurt and Finnish | Create a set of test phrases (see the 'Pending tests' and 'Regression tests' pages on the wiki) for translation from Finnish into Udmurt. The tests should cover as many features of the languages as possible. Some of the examples can be found in grammars, others can be made up. No programming involved, only grammatical analysis. | 4–6 | Hèctor Alòs
research | 2. Medium | Contrastive analysis: Udmurt and Komi | Create a set of test phrases (see the 'Pending tests' and 'Regression tests' pages on the wiki) for translation from Komi into Udmurt. The tests should cover as many features of the languages as possible. Some of the examples can be found in grammars, others can be made up. No programming involved, only grammatical analysis. | 4–6 | Hèctor Alòs
research | 2. Medium | Contrastive analysis: Meadow Mari and Hill Mari | Create a set of test phrases (see the 'Pending tests' and 'Regression tests' pages on the wiki) for translation from Hill Mari into Meadow Mari. The tests should cover as many features of the languages as possible. Some of the examples can be found in grammars, others can be made up. No programming involved, only grammatical analysis. | 4–6 | Hèctor Alòs
research | 2. Medium | Contrastive analysis: Meadow Mari and Erzya | Create a set of test phrases (see the 'Pending tests' and 'Regression tests' pages on the wiki) for translation from Erzya into Meadow Mari. The tests should cover as many features of the languages as possible. Some of the examples can be found in grammars, others can be made up. No programming involved, only grammatical analysis. | 4–6 | Hèctor Alòs
research | 2. Medium | Contrastive analysis: Meadow Mari and Moksha | Create a set of test phrases (see the 'Pending tests' and 'Regression tests' pages on the wiki) for translation from Moksha into Meadow Mari. The tests should cover as many features of the languages as possible. Some of the examples can be found in grammars, others can be made up. No programming involved, only grammatical analysis. | 4–6 | Hèctor Alòs
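For the contrastive-analysis tasks above, test phrases on the Apertium wiki 'Pending tests' and 'Regression tests' pages are normally written as a bulleted list of source sentence and expected translation. The sketch below is only illustrative: the exact layout (and any templates) varies from page to page, and the Tatar→Russian sentences are made-up examples, not entries from an existing test page.

<pre>
== Pending tests ==
* (tat) Мин китап укыйм. → Я читаю книгу.
* (tat) Ул мәктәптә эшли. → Он работает в школе.
</pre>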
research | 3. Easy | Catalogue resources: Aromanian | Catalogue all the available resources (grammatical descriptions, wordlists, dictionaries, spellcheckers, papers, corpora, etc.) for Aromanian, along with the licences they are under. | | Francis Tyers
translation | 2. Medium | Translate the HOWTO: Norwegian | Translate the new language pair HOWTO into Nynorsk. | 5–8 | Unhammer |
translation | 2. Medium | Translate the HOWTO: Dutch | Translate the new language pair HOWTO into Dutch. | 5–8 | Pim Otte |
translation | 2. Medium | Translate the HOWTO: Aragonese | Translate the new language pair HOWTO into Aragonese. | 5–8 | Juan Pablo Martínez |
translation | 2. Medium | Translate the HOWTO: Turkish | Translate the new language pair HOWTO into Turkish. | 5–8 | Zfe |
translation | 2. Medium | Translate the HOWTO: Esperanto | Finish the translation of the Kiel aldoni novan lingvoparon into Esperanto. | 4–6 | Hèctor Alòs |
quality | 3. Easy | Update test pages: Esperanto and Catalan | Test the outstanding tests in the outstanding test page and put the ones which work in the regression test page. Test the regression tests in the regression test page and put the ones which don't work in the outstanding test page. | 1–2 | Hèctor Alòs
quality | 3. Easy | Update test pages: Esperanto and Spanish | Test the outstanding tests in the outstanding test page and put the ones which work in the regression test page. Test the regression tests in the regression test page and put the ones which don't work in the outstanding test page. | 1–2 | Hèctor Alòs
quality | 3. Easy | Update test pages: Esperanto and French | Test the outstanding tests in the outstanding test page and put the ones which work in the regression test page. Test the regression tests in the regression test page and put the ones which don't work in the outstanding test page. | 1–2 | Hèctor Alòs
quality | 3. Easy | Add new tests: Esperanto and Catalan | Add 10 new constructions which aren't correctly translated in the outstanding test page. | 1–2 | Hèctor Alòs |
quality | 3. Easy | Add new tests: Esperanto and Spanish | Add 10 new constructions which aren't correctly translated in the outstanding test page. | 1–2 | Hèctor Alòs
quality | 3. Easy | Add new tests: Esperanto and French | Add 10 new constructions which aren't correctly translated in the outstanding test page. | 1–2 | Hèctor Alòs |
quality | 3. Easy | Quality evaluation: Spanish and French | Perform a human post-edition evaluation of the Spanish and French language pair. This will involve taking some free text (e.g. from Wikipedia or Wikinews), running it through the translator and then altering the output to be correct. Then use apertium-eval-translator to calculate the Word Error Rate. The minimum amount of text should be 2,000 words. | 4–8 | Francis Tyers
quality | 3. Easy | Quality evaluation: Spanish and Occitan | Perform a human post-edition evaluation of the Spanish and Occitan language pair. This will involve taking some free text (e.g. from Wikipedia or Wikinews), running it through the translator and then altering the output to be correct. Then use apertium-eval-translator to calculate the Word Error Rate. The minimum amount of text should be 2,000 words. | 4–8 | Mireia Ginestí
quality | 3. Easy | Quality evaluation: Spanish and Asturian | Perform a human post-edition evaluation of the Spanish and Asturian language pair. This will involve taking some free text (e.g. from Wikipedia or Wikinews), running it through the translator and then altering the output to be correct. Then use apertium-eval-translator to calculate the Word Error Rate. The minimum amount of text should be 2,000 words. | 4–8 | Francis Tyers
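For reference, the Word Error Rate used in these post-edition evaluations follows the standard definition: count the word substitutions <math>S</math>, deletions <math>D</math> and insertions <math>I</math> needed to turn the raw machine translation into the post-edited reference, and divide by the number of words <math>N</math> in the reference:

<math>\mathrm{WER} = \frac{S + D + I}{N}</math>

The lower the WER, the less post-editing effort the pair required. apertium-eval-translator may also print related statistics (such as a position-independent error rate), so quote the exact figures the script reports rather than recomputing them by hand.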
user interface | 2. Medium | Design a user-friendly Glade interface for Apertium | Apertium does not currently have a friendly user interface for translators. Look at other translation software on the market, and sketch out some ideas for how to design a user interface. We don't require an implementation, just the XML-based interface mockup | 2–4 | Jimregan |
user interface | 2. Medium | Design a user-friendly web interface for Apertium | Apertium has a friendly user interface for translators, but more attention needs to be paid to its visual appearance. This will involve either a user interface mockup (preferably using GWT), or a "theme" using CSS for the existing interface. | 2–4 | Jimregan |
user interface | 2. Medium | Design a user-friendly interface for a web-based dictionary management tool. | Apertium does not currently have a friendly user interface for adding new words to the dictionaries. We need someone with a good sense of design to provide us with a mockup for a web interface for managing a dictionary. | 2–4 | Jimregan |
user interface | 2. Medium | Design a user-friendly interface for an Android version of TinyLex | TinyLex is a dictionary tool for J2ME. We would like to port it to Android, but as we are not UI designers, we would prefer it if someone with a sense for visual design took on this task. There are tools available for drawing an interface in XML - it would be better if they were used. | 2–4 | Jimregan |
user interface | 2. Medium | Design a user-friendly Android interface for Apertium | Design a mockup of a GUI for Apertium for Android. We don't run on Android yet, but work is ongoing. We would like some ideas for an interface that makes sense on phones, primarily, but taking the tablet form factor into account is also an option. | 2–4 | Jimregan |
training | 3. Easy | Step-by-step "become a developer" guide | Write a simple step-by-step guide (on the wiki) for pre-university students (of varying levels of computer literacy) to install a development version of Apertium and make a single change in a language pair. This should include everything, from checking out with SVN to requesting committer access on SourceForge. Document everything you do! | 2–3 | Mikel L. Forcada |
training | 3. Easy | Step-by-step "constraint grammar" guide | Write a simple step-by-step guide (on the wiki) for pre-university students (of varying levels of computer literacy) to install Constraint Grammar and fix 5 disambiguation problems in a single sentence, then committing to the incubator. This should include everything, from checking out with SVN to requesting committer access on SourceForge. Document everything you do! | 2–3 | Unhammer |
training | 3. Easy | Basics of grammar guide | Write a basic guide that teaches the basics of grammar, with reference to the part-of-speech tags used in Apertium. | 2–3 | Jimregan
training | 2. Medium | Moodle course | Design a Moodle-based course for beginning a new language pair. The New Language Pair HOWTO can be used as a guide | 4–6 | Jimregan |
training | 3. Easy | Apertium AWI screencast | Create a screencast that gives a step-by-step guide to using Apertium via Apertium AWI. It's OK to assume that it has already been set up. | 2–3 | Jimregan
training | 1. Hard | Apertium Regular Expressions guide | lttoolbox allows a limited subset of POSIX regular expressions. Create a guide to the regexes allowed, and to using them for common tasks, such as matching dates. | 2–3 | Jimregan |
quality | 3. Easy | Release freshness | Go through all 25 released pairs and note down the date of their last release and how many dictionary entries and rules the release has. Then look at the corresponding module in SVN and find out how many dictionary entries and rules it has. Put this into a spreadsheet and email the mailing list. Why? Our release cycle is very slow, and we often have pairs in trunk with substantial improvements that have not been released. | 2–4 | Francis Tyers
outreach | 3. Easy | Translate the Wikipedia article on Apertium: Aragonese | Translate the article on Apertium into Aragonese for the Aragonese Wikipedia | 1h | Juan Pablo Martínez |
outreach | 3. Easy | Translate the Wikipedia article on Apertium: Chuvash | Translate the article on Apertium into Chuvash for the Chuvash Wikipedia | 1h | Hèctor Alòs |
outreach | 3. Easy | Translate the Wikipedia article on Apertium: Tatar | Translate the article on Apertium into Tatar for the Tatar Wikipedia | 1h | Hèctor Alòs |
outreach | 3. Easy | Translate the Wikipedia article on Apertium: Bashkir | Translate the article on Apertium into Bashkir for the Bashkir Wikipedia | 1h | Hèctor Alòs |
outreach | 3. Easy | Translate the Wikipedia article on Apertium: Yakut | Translate the article on Apertium into Yakut for the Yakut Wikipedia | 1h | Hèctor Alòs, Halan
outreach | 3. Easy | Translate the Wikipedia article on Apertium: Komi | Translate the article on Apertium into Komi for the Komi Wikipedia | 1h | Hèctor Alòs |
outreach | 3. Easy | Translate the Wikipedia article on Apertium: Udmurt | Translate the article on Apertium into Udmurt for the Udmurt Wikipedia | 1h | Hèctor Alòs |
outreach | 3. Easy | Translate the Wikipedia article on Apertium: Meadow Mari | Translate the article on Apertium into Meadow Mari for the Meadow Mari Wikipedia | 1h | Hèctor Alòs |
outreach | 3. Easy | Translate the Wikipedia article on Apertium: Hill Mari | Translate the article on Apertium into Hill Mari for the Hill Mari Wikipedia | 1h | Hèctor Alòs |
outreach | 3. Easy | Translate the Wikipedia article on Apertium: Ossetian | Translate the article on Apertium into Ossetian for the Ossetian Wikipedia | 1h | Hèctor Alòs
outreach | 3. Easy | Translate the Wikipedia article on Apertium: Buryat | Translate the article on Apertium into Buryat for the Buryat Wikipedia | 1h | Hèctor Alòs, Jargal
documentation | 3. Easy | Create a dictionary crossing guide | Create a full guide to crossing dictionaries, using notes that will be provided. | 2–3 | Jimregan |
documentation | 3. Easy | Create an installation guide for Windows users | We have some installation notes, but they were not written with an average user in mind. Write a new installation guide, specifically for Windows users, that doesn't presume a high level of technical knowledge. | 2–3 | Jimregan
documentation | 3. Easy | Create an installation guide for Mac users | We have some installation notes, but they were not written with an average user in mind. Write a new installation guide, specifically for Mac users, that doesn't presume a high level of technical knowledge. | 2–3 | Jimregan
documentation | 3. Easy | Create an installation guide for Ubuntu users | We have some installation notes, but they were not written with an average user in mind. Write a new installation guide, specifically for Ubuntu users, that doesn't presume a high level of technical knowledge. Specifically, steer people away from installing the dated Debian/Ubuntu packages. | 2–3 | Jimregan
outreach | 3. Easy | Write a quick guide on 'What Apertium can and cannot do to help you with your homework' | Students around the world use Apertium (and other MT systems) to do their second-language homework. The document would summarize the do's and don'ts, and could even elaborate on how students using Apertium for their homework could discover ways in which Apertium could be improved. | 2–3 | Mikel L. Forcada
documentation | 3. Easy | Document undocumented features: manpages | Work through each of the manpages in apertium and lttoolbox, checking that each of the options listed by --help is documented. | 2–4 | Jimregan |
research | 3. Easy | Create manually tagged corpora: Occitan | Fix tagging errors in a piece of analysed text, for use in tagger training. This will involve taking some free text (such as from Wikipedia), running it through the analyser and tagger, and replacing incorrect analyses with the correct one. | 2–4 | Mireia Ginestí |
research | 3. Easy | Create manually tagged corpora: Italian | Fix tagging errors in a piece of analysed text, for use in tagger training. This will involve taking some free text (such as from Wikipedia), running it through the analyser and tagger, and replacing incorrect analyses with the correct one. | 2–4 | Mireia Ginestí |
research | 3. Easy | Create manually tagged corpora: Catalan | Fix tagging errors in a piece of analysed text, for use in tagger training. This will involve taking the corpus in the es-ca package, and adapting it in terms of the multiwords present in en-ca, but absent in es-ca. | 2–4 | Mireia Ginestí |
research | 3. Easy | Create manually tagged corpora: Polish | Fix tagging errors in a piece of analysed text, for use in tagger training. This will involve taking some free text (such as from Wikipedia), running it through the analyser and tagger, and replacing incorrect analyses with the correct one. It may be preferable to use LanguageTool's tagger. | 2–4 | Jimregan |
research | 3. Easy | Create manually tagged corpora: Czech | Fix tagging errors in a piece of analysed text, for use in tagger training. This will involve taking some free text (such as from Wikipedia), running it through the analyser and tagger, and replacing incorrect analyses with the correct one. It may be preferable to use LanguageTool's tagger. | 2–4 | Jimregan |
research | 3. Easy | Create manually tagged corpora: Slovakian | Fix tagging errors in a piece of analysed text, for use in tagger training. This will involve taking some free text (such as from Wikipedia), running it through the analyser and tagger, and replacing incorrect analyses with the correct one. It may be preferable to use LanguageTool's tagger. | 2–4 | Jimregan |
research | 3. Easy | Create manually tagged corpora: Russian | Fix tagging errors in a piece of analysed text, for use in tagger training. This will involve taking some free text (such as from Wikipedia), running it through the analyser and tagger, and replacing incorrect analyses with the correct one. It may be preferable to use LanguageTool's tagger. | 2–4 | Jimregan |
research | 3. Easy | Create manually tagged corpora: Ukrainian | Fix tagging errors in a piece of analysed text, for use in tagger training. This will involve taking some free text (such as from Wikipedia), running it through the analyser and tagger, and replacing incorrect analyses with the correct one. It may be preferable to use LanguageTool's tagger. | 2–4 | Jimregan |
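For the tagged-corpus tasks above, "fixing tagging errors" in practice means keeping only the correct analysis for each ambiguous token in the analyser/tagger output (Apertium stream format). A minimal illustrative sketch, using a Spanish token chosen only as an example (the exact tag inventory depends on the language pair):

<pre>
Analyser output (ambiguous):  ^casa/casa<n><f><sg>/casar<vblex><pri><p3><sg>$
Hand-disambiguated form:      ^casa/casa<n><f><sg>$
</pre>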
quality | 1. Hard | Improve a language pair: Welsh-English | Find some faults in Welsh-English and fix them. | 8–12 | Francis Tyers |
quality | 1. Hard | Improve a language pair: Breton-French | Find some faults in Breton-French and fix them. | 8–12 | Francis Tyers |
quality | 1. Hard | Improve a language pair: Basque-Spanish | Find some faults in Basque-Spanish and fix them. | 8–12 | Mireia Ginestí |
quality | 1. Hard | Improve a language pair: French-Esperanto | Find some faults in French-Esperanto and fix them. | 8–12 | Hèctor Alòs |
quality | 1. Hard | Improve a language pair: Spanish-Esperanto | Find some faults in Spanish-Esperanto and fix them. | 8–12 | Hèctor Alòs |
quality | 1. Hard | Improve a language pair: Catalan-Esperanto | Find some faults in Catalan-Esperanto and fix them. | 8–12 | Hèctor Alòs |
quality | 1. Hard | Improve a language pair: English-Esperanto | Find some faults in English-Esperanto and fix them. | 8–12 | Hèctor Alòs |
documentation | 2. Medium | Document undocumented features: cascaded interchunk | Update the Apertium manual to document cascaded interchunk. | 4–8 | Mikel L. Forcada |
documentation | 2. Medium | Document undocumented features: transliteration | Update the Apertium manual to document the transliteration features in lttoolbox. | 4–8 | Francis Tyers |
quality | 1. Hard | Fix some tagger errors in Swedish->Danish | apertium-sv-da could be improved with a Constraint Grammar. Find 10 sentences that get wrong translations due to tagging, and write CG rules to fix them. The student should have good knowledge of Swedish, or at least some Scandinavian language. | 8–12 | Unhammer |
quality | 3. Easy | Improve Swedish-Danish dictionaries | Add 50 nouns you feel are missing in translations from Swedish to Danish. | 3–6 | Jacob Nordfalk |
quality | 3. Easy | Improve English-Esperanto dictionaries | Add 50 words you feel are missing in translations from English to Esperanto. | 3–6 | Jacob Nordfalk |
quality | 3. Easy | Improve Spanish-Esperanto dictionaries | Add 50 words you feel are missing in translations from Spanish to Esperanto. | 3–6 | Hèctor Alòs |
quality | 3. Easy | Improve Catalan-Esperanto dictionaries | Add 50 words you feel are missing in translations from Catalan to Esperanto. | 3–6 | Hèctor Alòs |
quality | 3. Easy | Improve French-Esperanto dictionaries | Add 50 words you feel are missing in translations from French to Esperanto. | 3–6 | Hèctor Alòs |
quality | 3. Easy | Improve Spanish-Aragonese | Add 50 nouns you feel are missing in translations from Aragonese to Spanish. | 3–6 | Juan Pablo Martínez |
quality | 3. Easy | Improve Spanish-Aragonese | Add 50 verbs you feel are missing in translations from Aragonese to Spanish. | 3–6 | Juan Pablo Martínez |
quality | 3. Easy | Improve Spanish-Aragonese | Add 50 adjectives you feel are missing in translations from Aragonese to Spanish. | 3–6 | Juan Pablo Martínez |
quality | 3. Easy | Improve Afrikaans-Dutch dictionaries | Add 50 words you feel are missing in translations from Afrikaans to Dutch. A list of unknown words can be provided. | 3–6 | Pim Otte
quality | 3. Easy | Improve Spanish-Portuguese dictionaries for tourist domain | Add 50 nouns you feel are missing in translations in the touristic domain from Spanish to Portuguese. | 3–6 | Gramirez |
quality | 3. Easy | Afrikaans-Dutch tests | Finish the Afrikaans-Dutch pending tests and move the passing tests to a separate page for regression testing. | 3–6 | Pim Otte
quality | 3. Easy | Afrikaans-Dutch cleanup | Add about 100 missing words to the bidix of Afrikaans-Dutch and possibly the Dutch side | 3–6 | Pim Otte |
code | 3. Easy | Create a Chuvash-Russian dictionary | Create a 100-word Chuvash-Russian dictionary in the Apertium lttoolbox format. | 2–4 | Hèctor Alòs
code | 3. Easy | Create a Chuvash-Tatar dictionary | Create a 100-word Chuvash-Tatar dictionary in the Apertium lttoolbox format. | 2–4 | Hèctor Alòs
code | 3. Easy | Create a Chuvash-Bashkir dictionary | Create a 100-word Chuvash-Bashkir dictionary in the Apertium lttoolbox format. | 2–4 | Hèctor Alòs
code | 3. Easy | Create a Chuvash-Yakut dictionary | Create a 100-word Chuvash-Yakut dictionary in the Apertium lttoolbox format. | 2–4 | Hèctor Alòs
code | 3. Easy | Create a Tatar-Russian dictionary | Create a 100-word Tatar-Russian dictionary in the Apertium lttoolbox format. | 2–4 | Hèctor Alòs
code | 3. Easy | Create a Tatar-Turkish dictionary | Create a 100-word Tatar-Turkish dictionary in the Apertium lttoolbox format. | 2–4 | Hèctor Alòs
code | 3. Easy | Create a Tatar-Bashkir dictionary | Create a 100-word Tatar-Bashkir dictionary in the Apertium lttoolbox format. | 2–4 | Hèctor Alòs
code | 3. Easy | Create a Tatar-Yakut dictionary | Create a 100-word Tatar-Yakut dictionary in the Apertium lttoolbox format. | 2–4 | Hèctor Alòs
code | 3. Easy | Create a Bashkir-Russian dictionary | Create a 100-word Bashkir-Russian dictionary in the Apertium lttoolbox format. | 2–4 | Hèctor Alòs
code | 3. Easy | Create a Bashkir-Turkish dictionary | Create a 100-word Bashkir-Turkish dictionary in the Apertium lttoolbox format. | 2–4 | Hèctor Alòs
code | 3. Easy | Create a Bashkir-Yakut dictionary | Create a 100-word Bashkir-Yakut dictionary in the Apertium lttoolbox format. | 2–4 | Hèctor Alòs
code | 3. Easy | Create an Ossetian-Russian dictionary | Create a 100-word Ossetian-Russian dictionary in the Apertium lttoolbox format. | 2–4 | Hèctor Alòs, Amikeco
code | 3. Easy | Create an Ossetian-English dictionary | Create a 100-word Ossetian-English dictionary in the Apertium lttoolbox format. | 2–4 | Hèctor Alòs, Amikeco
code | 3. Easy | Create an Ossetian-Esperanto dictionary | Create a 100-word Ossetian-Esperanto dictionary in the Apertium lttoolbox format. | 2–4 | Hèctor Alòs, Amikeco
code | 3. Easy | Create a Buryat-Kalmyk dictionary | Create a 100-word Buryat-Kalmyk dictionary in the Apertium lttoolbox format. | 2–4 | Hèctor Alòs, Jargal
code | 3. Easy | Create a Buryat-Yakut dictionary | Create a 100-word Buryat-Yakut dictionary in the Apertium lttoolbox format. | 2–4 | Hèctor Alòs, Jargal
code | 3. Easy | Create a Buryat-Tuvan dictionary | Create a 100-word Buryat-Tuvan dictionary in the Apertium lttoolbox format. | 2–4 | Hèctor Alòs, Jargal
code | 3. Easy | Create a Russian-Yakut dictionary | Create a 100-word Russian-Yakut dictionary in the Apertium lttoolbox format. | 2–4 | Hèctor Alòs, Halan
code | 3. Easy | Create an English-Yakut dictionary | Create a 100-word English-Yakut dictionary in the Apertium lttoolbox format. | 2–4 | Hèctor Alòs, Halan
code | 3. Easy | Create a Russian-Udmurt dictionary | Create a 100-word Russian-Udmurt dictionary in the Apertium lttoolbox format. | 2–4 | Hèctor Alòs
code | 3. Easy | Create a Finnish-Udmurt dictionary | Create a 100-word Finnish-Udmurt dictionary in the Apertium lttoolbox format. | 2–4 | Hèctor Alòs
code | 3. Easy | Create a Komi-Udmurt dictionary | Create a 100-word Komi-Udmurt dictionary in the Apertium lttoolbox format. | 2–4 | Hèctor Alòs
code | 3. Easy | Create a Meadow Mari-Hill Mari dictionary | Create a 100-word Meadow Mari-Hill Mari dictionary in the Apertium lttoolbox format. | 2–4 | Hèctor Alòs
code | 3. Easy | Create a Finnish-Hill Mari dictionary | Create a 100-word Finnish-Hill Mari dictionary in the Apertium lttoolbox format. | 2–4 | Hèctor Alòs
code | 3. Easy | Create a Meadow Mari-Erzya dictionary | Create a 100-word Meadow Mari-Erzya dictionary in the Apertium lttoolbox format. | 2–4 | Hèctor Alòs
code | 3. Easy | Create a Meadow Mari-Moksha dictionary | Create a 100-word Meadow Mari-Moksha dictionary in the Apertium lttoolbox format. | 2–4 | Hèctor Alòs
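For the dictionary-creation tasks above, a 100-word bilingual dictionary in the lttoolbox format is a plain XML .dix file. The sketch below shows the overall shape only; the symbol definitions and the single Tatar-Russian entry ("китап" ~ "книга", "book") are illustrative and not taken from any existing pair.

<pre>
<?xml version="1.0" encoding="UTF-8"?>
<dictionary>
  <alphabet/>
  <sdefs>
    <sdef n="n" c="Noun"/>
  </sdefs>
  <section id="main" type="standard">
    <!-- left side: source-language lemma, right side: target-language lemma -->
    <e><p><l>китап<s n="n"/></l><r>книга<s n="n"/></r></p></e>
  </section>
</dictionary>
</pre>

Once the corresponding monolingual dictionaries exist, such a file can be compiled with lt-comp and tested with lt-proc to check the entries.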
quality | 3. Easy | Improve Irish-Manx Gaelic coverage | I can provide a list of the most common Irish words not covered by the bilingual dictionary, and their English translations. Manx translations needed for these. | 3–6 | Kevin Scannell |
quality | 3. Easy | Add gender information to Manx dictionary | Most of the nouns in the Manx dictionary have gender information in place - look up and add any that are missing. | 3–6 | Kevin Scannell |
quality | 3. Easy | Proofread Albanian analyser | We have a morphological analyser for Albanian, but it has been written by a non-native speaker and needs to be checked. | 6–10 | Francis Tyers |
translation | 3. Easy | Proofread Catalan-Sardinian dictionary | Go through the Catalan-Sardinian dictionary and check the entries; there are only around a thousand or so. | 1–2 | Francis Tyers
quality | 2. Medium | Improve Spanish-Aragonese coverage | Create a corpus from the Aragonese Wikipedia. Then add the 50–100 most frequently used words which are not yet covered in the Apertium es-an dictionaries. | 6–10 | Juan Pablo Martínez
code | 2. Medium | Add toponyms to the Spanish-Aragonese dictionaries | Extract from Wikipedia the names in Aragonese for countries of the world, their capital cities, main Spanish cities, and municipalities in Aragon, and add them to the es-an dictionaries. | 6–10 | Juan Pablo Martínez
code | 2. Medium | Update the Apertium TinyLex J2ME apps for one language pair | Update Apertium TinyLex J2ME packages (http://www.tinylex.com/) to contain the most recent versions of dictionaries for one language pair | 4–6 per package | Mikel L. Forcada |
code | 2. Medium | Create an Apertium TinyLex J2ME app for a new language pair | Create an Apertium TinyLex J2ME package (http://www.tinylex.com/) from an existing Apertium language pair for a language pair not offered yet in Tinylex | 6–10 per package | Mikel L. Forcada |
research | 2. Medium | Contrastive analysis: Russian--Spanish (Simple noun phrases) | Write a contrastive grammar of Russian and Spanish for the translation of noun phrases from Russian to Spanish. The grammar should be written as a series of human readable rules, with example sentences. | 3 hours | Francis Tyers |
research | 2. Medium | Contrastive analysis: Russian--Spanish (Prepositional phrases) | Write a contrastive grammar of Russian and Spanish for the translation of prepositions/prepositional phrases from Russian to Spanish. The grammar should be written as a series of human readable rules, with example sentences. | 3 hours | Francis Tyers |
research | 2. Medium | Contrastive analysis: Russian--Spanish (Tenses) | Write a contrastive grammar of Russian and Spanish for the translation of verb tenses from Russian to Spanish. The grammar should be written as a series of human readable rules, with example sentences. | 3 hours | Francis Tyers |
research | 2. Medium | Contrastive analysis: Russian--Spanish (Aspect) | Write a contrastive grammar of Russian and Spanish for the translation of verbal aspect from Russian to Spanish. The grammar should be written as a series of human readable rules, with example sentences. | 3 hours | Francis Tyers |
research | 2. Medium | Contrastive analysis: Russian--Spanish (Pronouns) | Write a comprehensive contrastive grammar of Russian and Spanish for the translation of pronouns from Russian to Spanish. The grammar should be written as a series of human readable rules, with example sentences. | 3 hours | Francis Tyers |
research | 2. Medium | Contrastive analysis: Russian--Spanish (Impersonal constructions) | Write a comprehensive contrastive grammar of Russian and Spanish for the translation of impersonal constructions from Russian to Spanish. The grammar should be written as a series of human readable rules, with example sentences. | 3 hours | Francis Tyers |
research | 2. Medium | Contrastive analysis: Russian--Spanish (Verbs of motion) | Write a comprehensive contrastive grammar of Russian and Spanish for the translation of verbs of motion from Russian to Spanish. The grammar should be written as a series of human readable rules, with example sentences. | 3 hours | Francis Tyers |
research | 2. Medium | Contrastive analysis: Russian--Spanish (Particles and adverbs) | Write a comprehensive contrastive grammar of Russian and Spanish for the translation of particles and adverbs from Russian to Spanish, paying special attention to word/constituent order. The grammar should be written as a series of human readable rules, with example sentences. | 3 hours | Francis Tyers |
code | 2. Medium | TRmorph phonological rule conversion: 010-exception_deye.fst | Convert the TRmorph 010-exception_deye.fst into XFST syntax and test it. | 5-8 hours | Francis Tyers, Firespeaker, Zfe, Hèctor Alòs
code | 2. Medium | TRmorph phonological rule conversion: 015-exception_obs.fst | Convert the TRmorph 015-exception_obs.fst into XFST syntax and test it. | 5-8 hours | Francis Tyers, Firespeaker, Zfe, Hèctor Alòs
code | 2. Medium | TRmorph phonological rule conversion: 017-exception_i.fst | Convert the TRmorph 017-exception_i.fst into XFST syntax and test it. | 5-8 hours | Francis Tyers, Firespeaker, Zfe, Hèctor Alòs
code | 2. Medium | TRmorph phonological rule conversion: 020-compn.fst | Convert the TRmorph 020-compn.fst into XFST syntax and test it. | 5-8 hours | Francis Tyers, Firespeaker, Zfe, Hèctor Alòs
code | 2. Medium | TRmorph phonological rule conversion: 040-exception_ben.fst | Convert the TRmorph 040-exception_ben.fst into XFST syntax and test it. | 5-8 hours | Francis Tyers, Firespeaker, Zfe, Hèctor Alòs
code | 2. Medium | TRmorph phonological rule conversion: 050-exception_su.fst | Convert the TRmorph 050-exception_su.fst into XFST syntax and test it. | 5-8 hours | Francis Tyers, Firespeaker, Zfe, Hèctor Alòs
code | 2. Medium | TRmorph phonological rule conversion: 060-xception_del_bS.fst | Convert the TRmorph 060-xception_del_bS.fst into XFST syntax and test it. | 5-8 hours | Francis Tyers, Firespeaker, Zfe, Hèctor Alòs
code | 2. Medium | TRmorph phonological rule conversion: 070-exception_del_buff.fst | Convert the TRmorph 070-exception_del_buff.fst into XFST syntax and test it. | 5-8 hours | Francis Tyers, Firespeaker, Zfe, Hèctor Alòs
code | 2. Medium | TRmorph phonological rule conversion: 080-vowel_epenth.fst | Convert the TRmorph 080-vowel_epenth.fst into XFST syntax and test it. | 5-8 hours | Francis Tyers, Firespeaker, Zfe, Hèctor Alòs
code | 2. Medium | TRmorph phonological rule conversion: 090-duplication.fst | Convert the TRmorph 090-duplication.fst into XFST syntax and test it. | 5-8 hours | Francis Tyers, Firespeaker, Zfe, Hèctor Alòs
code | 2. Medium | TRmorph phonological rule conversion: 100-fs_devoicing.fst | Convert the TRmorph 100-fs_devoicing.fst into XFST syntax and test it. | 5-8 hours | Francis Tyers, Firespeaker, Zfe, Hèctor Alòs
code | 2. Medium | TRmorph phonological rule conversion: 110-v_assimilation.fst | Convert the TRmorph 110-v_assimilation.fst into XFST syntax and test it. | 5-8 hours | Francis Tyers, Firespeaker, Zfe, Hèctor Alòs
code | 2. Medium | TRmorph phonological rule conversion: 120-passive_ln.fst | Convert the TRmorph 120-passive_ln.fst into XFST syntax and test it. | 5-8 hours | Francis Tyers, Firespeaker, Zfe, Hèctor Alòs
code | 2. Medium | TRmorph phonological rule conversion: 130-exception_yor.fst | Convert the TRmorph 130-exception_yor.fst into XFST syntax and test it. | 5-8 hours | Francis Tyers, Firespeaker, Zfe, Hèctor Alòs
code | 2. Medium | TRmorph phonological rule conversion: 140-v_harmony.fst | Convert the TRmorph 140-v_harmony.fst into XFST syntax and test it. | 5-8 hours | Francis Tyers, Firespeaker, Zfe, Hèctor Alòs
code | 2. Medium | TRmorph phonological rule conversion: phon+bm.fst | Convert the TRmorph phon+bm.fst into XFST syntax and test it. | 5-8 hours | Francis Tyers, Firespeaker, Zfe, Hèctor Alòs
code | 2. Medium | TRmorph phonological rule conversion: phon.fst | Convert the TRmorph phon.fst into XFST syntax and test it. | 5-8 hours | Francis Tyers, Firespeaker, Zfe, Hèctor Alòs
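The conversion tasks above re-express TRmorph's phonological rule files in Xerox-style (xfst) syntax. Purely as an illustration of what the target syntax looks like, and not the content of any actual TRmorph rule, a final-devoicing replace rule might be sketched like this:

<pre>
! illustrative only: voiced stops become voiceless word-finally
define FinalDevoicing b -> p , c -> ç , d -> t , g -> k || _ .#. ;

! such rules are then composed onto the lexicon transducer, e.g.
! read regex Lexicon .o. FinalDevoicing ;
</pre>

The converted rules need to reproduce the behaviour of the original .fst files exactly, so each conversion should be checked against TRmorph's own test cases where they exist.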
code | 2. Medium | TRmorph lexicon conversion: Nouns | Convert the TRmorph noun lexicon into lexc syntax and test it. | 5-8 hours | Francis Tyers, Firespeaker, Zfe, Hèctor Alòs |
code | 2. Medium | TRmorph lexicon conversion: Verbs | Convert the TRmorph verb lexicon into lexc syntax and test it. | 5-8 hours | Francis Tyers, Firespeaker, Zfe, Hèctor Alòs |
code | 2. Medium | TRmorph lexicon conversion: Adjectives | Convert the TRmorph adjective lexicon into lexc syntax and test it. | 5-8 hours | Francis Tyers, Firespeaker, Zfe, Hèctor Alòs |
code | 2. Medium | TRmorph lexicon conversion: Closed categories and adverbs | Convert the TRmorph closed categories and adverb lexicons into lexc syntax and test them. | 5-8 hours | Francis Tyers, Firespeaker, Zfe, Hèctor Alòs |
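The lexicon conversion tasks target the lexc format, where each entry names a stem and a continuation class that supplies the inflection. The fragment below is a bare-bones sketch of that structure only; the tags, class names and the two Turkish stems are illustrative, not TRmorph's actual inventory.

<pre>
Multichar_Symbols %<n%> %<pl%>

LEXICON Root
Nouns ;

LEXICON Nouns
ev:ev N ;       ! "house"
kitap:kitap N ; ! "book"

LEXICON N
%<n%>: # ;
</pre>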
code | 1. Hard | Kazakh lexicon: add 500 nouns (1) | Add 500 nouns to the Kazakh lexicon in lexc. | 5-8 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Kazakh lexicon: add 500 nouns (2) | Add 500 nouns to the Kazakh lexicon in lexc. | 5-8 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Kazakh lexicon: add 500 nouns (3) | Add 500 nouns to the Kazakh lexicon in lexc. | 5-8 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Kazakh lexicon: add 500 nouns (4) | Add 500 nouns to the Kazakh lexicon in lexc. | 5-8 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Kazakh lexicon: add 500 verbs (1) | Add 500 verbs to the Kazakh lexicon in lexc. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Kazakh lexicon: add 500 verbs (2) | Add 500 verbs to the Kazakh lexicon in lexc. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Kazakh lexicon: add 500 verbs (3) | Add 500 verbs to the Kazakh lexicon in lexc. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Kazakh lexicon: add 500 verbs (4) | Add 500 verbs to the Kazakh lexicon in lexc. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Kazakh lexicon: add 500 adjectives (1) | Add 500 adjectives to the Kazakh lexicon in lexc. | 12-16 hours | Francis Tyers, Firespeaker
code | 1. Hard | Kazakh lexicon: add 500 adjectives (2) | Add 500 adjectives to the Kazakh lexicon in lexc. | 12-16 hours | Francis Tyers, Firespeaker
code | 1. Hard | Kazakh lexicon: add 500 adjectives (3) | Add 500 adjectives to the Kazakh lexicon in lexc. | 12-16 hours | Francis Tyers, Firespeaker
code | 1. Hard | Kazakh lexicon: add 500 adjectives (4) | Add 500 adjectives to the Kazakh lexicon in lexc. | 12-16 hours | Francis Tyers, Firespeaker
code | 1. Hard | Kazakh lexicon: add 50 adverbs | Add 50 adverbs to the Kazakh lexicon in lexc, avoiding compositional forms of verbs and nouns. | 3-5 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Kazakh--Mongolian bilingual dictionary: add 500 nouns (1) | Add 500 nouns to the Kazakh--Mongolian bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Kazakh--Mongolian bilingual dictionary: add 500 nouns (2) | Add 500 nouns to the Kazakh--Mongolian bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Kazakh--Mongolian bilingual dictionary: add 500 nouns (3) | Add 500 nouns to the Kazakh--Mongolian bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Kazakh--Mongolian bilingual dictionary: add 500 nouns (4) | Add 500 nouns to the Kazakh--Mongolian bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Kazakh--Mongolian bilingual dictionary: add 500 verbs (1) | Add 500 verbs to the Kazakh--Mongolian bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Kazakh--Mongolian bilingual dictionary: add 500 verbs (2) | Add 500 verbs to the Kazakh--Mongolian bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Kazakh--Mongolian bilingual dictionary: add 500 verbs (3) | Add 500 verbs to the Kazakh--Mongolian bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Kazakh--Mongolian bilingual dictionary: add 500 verbs (4) | Add 500 verbs to the Kazakh--Mongolian bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Kazakh--Mongolian bilingual dictionary: add 500 adjectives (1) | Add 500 adjectives to the Kazakh--Mongolian bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Kazakh--Mongolian bilingual dictionary: add 500 adjectives (2) | Add 500 adjectives to the Kazakh--Mongolian bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Kazakh--Mongolian bilingual dictionary: add 500 adjectives (3) | Add 500 adjectives to the Kazakh--Mongolian bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Kazakh--Mongolian bilingual dictionary: add 500 adjectives (4) | Add 500 adjectives to the Kazakh--Mongolian bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Kazakh--Mongolian bilingual dictionary: add 50 adverbs | Add 50 adverbs to the Kazakh--Mongolian bidix. | 3-5 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Kazakh--Russian bilingual dictionary: add 500 nouns (1) | Add 500 nouns to the Kazakh--Russian bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Kazakh--Russian bilingual dictionary: add 500 nouns (2) | Add 500 nouns to the Kazakh--Russian bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Kazakh--Russian bilingual dictionary: add 500 nouns (3) | Add 500 nouns to the Kazakh--Russian bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Kazakh--Russian bilingual dictionary: add 500 nouns (4) | Add 500 nouns to the Kazakh--Russian bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Kazakh--Russian bilingual dictionary: add 500 verbs (1) | Add 500 verbs to the Kazakh--Russian bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Kazakh--Russian bilingual dictionary: add 500 verbs (2) | Add 500 verbs to the Kazakh--Russian bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Kazakh--Russian bilingual dictionary: add 500 verbs (3) | Add 500 verbs to the Kazakh--Russian bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Kazakh--Russian bilingual dictionary: add 500 verbs (4) | Add 500 verbs to the Kazakh--Russian bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Kazakh--Russian bilingual dictionary: add 500 adjectives (1) | Add 500 adjectives to the Kazakh--Russian bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Kazakh--Russian bilingual dictionary: add 500 adjectives (2) | Add 500 adjectives to the Kazakh--Russian bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Kazakh--Russian bilingual dictionary: add 500 adjectives (3) | Add 500 adjectives to the Kazakh--Russian bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Kazakh--Russian bilingual dictionary: add 500 adjectives (4) | Add 500 adjectives to the Kazakh--Russian bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Kazakh--Russian bilingual dictionary: add 50 adverbs | Add 50 adverbs to the Kazakh--Russian bidix. | 3-5 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Kazakh--Kyrgyz bilingual dictionary: add 500 nouns (1) | Add 500 nouns to the Kazakh--Kyrgyz bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Kazakh--Kyrgyz bilingual dictionary: add 500 nouns (2) | Add 500 nouns to the Kazakh--Kyrgyz bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Kazakh--Kyrgyz bilingual dictionary: add 500 nouns (3) | Add 500 nouns to the Kazakh--Kyrgyz bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Kazakh--Kyrgyz bilingual dictionary: add 500 nouns (4) | Add 500 nouns to the Kazakh--Kyrgyz bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Kazakh--Kyrgyz bilingual dictionary: add 500 verbs (1) | Add 500 verbs to the Kazakh--Kyrgyz bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Kazakh--Kyrgyz bilingual dictionary: add 500 verbs (2) | Add 500 verbs to the Kazakh--Kyrgyz bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Kazakh--Kyrgyz bilingual dictionary: add 500 verbs (3) | Add 500 verbs to the Kazakh--Kyrgyz bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Kazakh--Kyrgyz bilingual dictionary: add 500 verbs (4) | Add 500 verbs to the Kazakh--Kyrgyz bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Kazakh--Kyrgyz bilingual dictionary: add 500 adjectives (1) | Add 500 adjectives to the Kazakh--Kyrgyz bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Kazakh--Kyrgyz bilingual dictionary: add 500 adjectives (2) | Add 500 adjectives to the Kazakh--Kyrgyz bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Kazakh--Kyrgyz bilingual dictionary: add 500 adjectives (3) | Add 500 adjectives to the Kazakh--Kyrgyz bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Kazakh--Kyrgyz bilingual dictionary: add 500 adjectives (4) | Add 500 adjectives to the Kazakh--Kyrgyz bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Kazakh--Kyrgyz bilingual dictionary: add 50 adverbs | Add 50 adverbs to the Kazakh--Kyrgyz bidix. | 3-5 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Kyrgyz--Russian bilingual dictionary: add 500 nouns (1) | Add 500 nouns to the Kyrgyz--Russian bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Kyrgyz--Russian bilingual dictionary: add 500 nouns (2) | Add 500 nouns to the Kyrgyz--Russian bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Kyrgyz--Russian bilingual dictionary: add 500 nouns (3) | Add 500 nouns to the Kyrgyz--Russian bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Kyrgyz--Russian bilingual dictionary: add 500 nouns (4) | Add 500 nouns to the Kyrgyz--Russian bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Kyrgyz--Russian bilingual dictionary: add 500 verbs (1) | Add 500 verbs to the Kyrgyz--Russian bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Kyrgyz--Russian bilingual dictionary: add 500 verbs (2) | Add 500 verbs to the Kyrgyz--Russian bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Kyrgyz--Russian bilingual dictionary: add 500 verbs (3) | Add 500 verbs to the Kyrgyz--Russian bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Kyrgyz--Russian bilingual dictionary: add 500 verbs (4) | Add 500 verbs to the Kyrgyz--Russian bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Kyrgyz--Russian bilingual dictionary: add 500 adjectives (1) | Add 500 adjectives to the Kyrgyz--Russian bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Kyrgyz--Russian bilingual dictionary: add 500 adjectives (2) | Add 500 adjectives to the Kyrgyz--Russian bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Kyrgyz--Russian bilingual dictionary: add 500 adjectives (3) | Add 500 adjectives to the Kyrgyz--Russian bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Kyrgyz--Russian bilingual dictionary: add 500 adjectives (4) | Add 500 adjectives to the Kyrgyz--Russian bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Kyrgyz--Russian bilingual dictionary: add 50 adverbs | Add 50 adverbs to the Kyrgyz--Russian bidix. | 3-5 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Mongolian lexicon: add 500 nouns (1) | Add 500 nouns to the Mongolian lexicon in lexc. | 5-8 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Mongolian lexicon: add 500 nouns (2) | Add 500 nouns to the Mongolian lexicon in lexc. | 5-8 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Mongolian lexicon: add 500 nouns (3) | Add 500 nouns to the Mongolian lexicon in lexc. | 5-8 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Mongolian lexicon: add 500 nouns (4) | Add 500 nouns to the Mongolian lexicon in lexc. | 5-8 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Mongolian lexicon: add 100 nouns with complete paradigms (1) | Add 100 nouns to the Mongolian lexicon in lexc, along with all paradigm information. | 5-8 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Mongolian lexicon: add 100 nouns with complete paradigms (2) | Add 100 nouns to the Mongolian lexicon in lexc, along with all paradigm information. | 5-8 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Mongolian lexicon: add 100 nouns with complete paradigms (3) | Add 100 nouns to the Mongolian lexicon in lexc, along with all paradigm information. | 5-8 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Mongolian lexicon: add 500 verbs (1) | Add 500 verbs to the Mongolian lexicon in lexc. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Mongolian lexicon: add 500 verbs (2) | Add 500 verbs to the Mongolian lexicon in lexc. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Mongolian lexicon: add 500 verbs (3) | Add 500 verbs to the Mongolian lexicon in lexc. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Mongolian lexicon: add 500 verbs (4) | Add 500 verbs to the Mongolian lexicon in lexc. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Mongolian lexicon: add 500 adjectives (1) | Add 500 adjectives to the Mongolian lexicon in lexc. | 12-16 hours | Francis Tyers, Firespeaker
code | 1. Hard | Mongolian lexicon: add 500 adjectives (2) | Add 500 adjectives to the Mongolian lexicon in lexc. | 12-16 hours | Francis Tyers, Firespeaker
code | 1. Hard | Mongolian lexicon: add 500 adjectives (3) | Add 500 adjectives to the Mongolian lexicon in lexc. | 12-16 hours | Francis Tyers, Firespeaker
code | 1. Hard | Mongolian lexicon: add 500 adjectives (4) | Add 500 adjectives to the Mongolian lexicon in lexc. | 12-16 hours | Francis Tyers, Firespeaker
code | 1. Hard | Mongolian lexicon: add 50 adverbs | Add 50 adverbs to the Mongolian lexicon in lexc, avoiding compositional forms of verbs and nouns. | 3-5 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Buriad lexicon: add 500 nouns (1) | Add 500 nouns to the Buriad lexicon in lexc. | 5-8 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Buriad lexicon: add 500 nouns (2) | Add 500 nouns to the Buriad lexicon in lexc. | 5-8 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Buriad lexicon: add 500 nouns (3) | Add 500 nouns to the Buriad lexicon in lexc. | 5-8 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Buriad lexicon: add 500 nouns (4) | Add 500 nouns to the Buriad lexicon in lexc. | 5-8 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Buriad lexicon: add 500 verbs (1) | Add 500 verbs to the Buriad lexicon in lexc. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Buriad lexicon: add 500 verbs (2) | Add 500 verbs to the Buriad lexicon in lexc. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Buriad lexicon: add 500 verbs (3) | Add 500 verbs to the Buriad lexicon in lexc. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Buriad lexicon: add 500 verbs (4) | Add 500 verbs to the Buriad lexicon in lexc. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Buriad lexicon: add 500 adjectives (1) | Add 500 adjectives to the Buriad lexicon in lexc. | 12-16 hours | Francis Tyers, Firespeaker
code | 1. Hard | Buriad lexicon: add 500 adjectives (2) | Add 500 adjectives to the Buriad lexicon in lexc. | 12-16 hours | Francis Tyers, Firespeaker
code | 1. Hard | Buriad lexicon: add 500 adjectives (3) | Add 500 adjectives to the Buriad lexicon in lexc. | 12-16 hours | Francis Tyers, Firespeaker
code | 1. Hard | Buriad lexicon: add 500 adjectives (4) | Add 500 adjectives to the Buriad lexicon in lexc. | 12-16 hours | Francis Tyers, Firespeaker
code | 1. Hard | Buriad lexicon: add 50 adverbs | Add 50 adverbs to the Buriad lexicon in lexc, avoiding compositional forms of verbs and nouns. | 3-5 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Buriad--Mongolian bilingual dictionary: add 500 nouns (1) | Add 500 nouns to the Buriad--Mongolian bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Buriad--Mongolian bilingual dictionary: add 500 nouns (2) | Add 500 nouns to the Buriad--Mongolian bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Buriad--Mongolian bilingual dictionary: add 500 nouns (3) | Add 500 nouns to the Buriad--Mongolian bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Buriad--Mongolian bilingual dictionary: add 500 nouns (4) | Add 500 nouns to the Buriad--Mongolian bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Buriad--Mongolian bilingual dictionary: add 500 verbs (1) | Add 500 verbs to the Buriad--Mongolian bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Buriad--Mongolian bilingual dictionary: add 500 verbs (2) | Add 500 verbs to the Buriad--Mongolian bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Buriad--Mongolian bilingual dictionary: add 500 verbs (3) | Add 500 verbs to the Buriad--Mongolian bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Buriad--Mongolian bilingual dictionary: add 500 verbs (4) | Add 500 verbs to the Buriad--Mongolian bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Buriad--Mongolian bilingual dictionary: add 500 adjectives (1) | Add 500 adjectives to the Buriad--Mongolian bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Buriad--Mongolian bilingual dictionary: add 500 adjectives (2) | Add 500 adjectives to the Buriad--Mongolian bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Buriad--Mongolian bilingual dictionary: add 500 adjectives (3) | Add 500 adjectives to the Buriad--Mongolian bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Buriad--Mongolian bilingual dictionary: add 500 adjectives (4) | Add 500 adjectives to the Buriad--Mongolian bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Buriad--Mongolian bilingual dictionary: add 50 adverbs | Add 50 adverbs to the Buriad--Mongolian bidix. | 3-5 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Buriad--Russian bilingual dictionary: add 500 nouns (1) | Add 500 nouns to the Buriad--Russian bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Buriad--Russian bilingual dictionary: add 500 nouns (2) | Add 500 nouns to the Buriad--Russian bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Buriad--Russian bilingual dictionary: add 500 nouns (3) | Add 500 nouns to the Buriad--Russian bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Buriad--Russian bilingual dictionary: add 500 nouns (4) | Add 500 nouns to the Buriad--Russian bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Buriad--Russian bilingual dictionary: add 500 verbs (1) | Add 500 verbs to the Buriad--Russian bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Buriad--Russian bilingual dictionary: add 500 verbs (2) | Add 500 verbs to the Buriad--Russian bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Buriad--Russian bilingual dictionary: add 500 verbs (3) | Add 500 verbs to the Buriad--Russian bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Buriad--Russian bilingual dictionary: add 500 verbs (4) | Add 500 verbs to the Buriad--Russian bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Buriad--Russian bilingual dictionary: add 500 adjectives (1) | Add 500 adjectives to the Buriad--Russian bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Buriad--Russian bilingual dictionary: add 500 adjectives (2) | Add 500 adjectives to the Buriad--Russian bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Buriad--Russian bilingual dictionary: add 500 adjectives (3) | Add 500 adjectives to the Buriad--Russian bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Buriad--Russian bilingual dictionary: add 500 adjectives (4) | Add 500 adjectives to the Buriad--Russian bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Buriad--Russian bilingual dictionary: add 50 adverbs | Add 50 adverbs to the Buriad--Russian bidix. | 3-5 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Altay--Russian bilingual dictionary: add 500 nouns (1) | Add 500 nouns to the Altay--Russian bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Altay--Russian bilingual dictionary: add 500 nouns (2) | Add 500 nouns to the Altay--Russian bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Altay--Russian bilingual dictionary: add 500 nouns (3) | Add 500 nouns to the Altay--Russian bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Altay--Russian bilingual dictionary: add 500 nouns (4) | Add 500 nouns to the Altay--Russian bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Altay--Russian bilingual dictionary: add 500 verbs (1) | Add 500 verbs to the Altay--Russian bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Altay--Russian bilingual dictionary: add 500 verbs (2) | Add 500 verbs to the Altay--Russian bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Altay--Russian bilingual dictionary: add 500 verbs (3) | Add 500 verbs to the Altay--Russian bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Altay--Russian bilingual dictionary: add 500 verbs (4) | Add 500 verbs to the Altay--Russian bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Altay--Russian bilingual dictionary: add 500 adjectives (1) | Add 500 adjectives to the Altay--Russian bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Altay--Russian bilingual dictionary: add 500 adjectives (2) | Add 500 adjectives to the Altay--Russian bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Altay--Russian bilingual dictionary: add 500 adjectives (3) | Add 500 adjectives to the Altay--Russian bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Altay--Russian bilingual dictionary: add 500 adjectives (4) | Add 500 adjectives to the Altay--Russian bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Altay--Russian bilingual dictionary: add 50 adverbs | Add 50 adverbs to the Altay--Russian bidix. | 3-5 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Tuvan--Russian bilingual dictionary: add 500 nouns (1) | Add 500 nouns to the Tuvan--Russian bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Tuvan--Russian bilingual dictionary: add 500 nouns (2) | Add 500 nouns to the Tuvan--Russian bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Tuvan--Russian bilingual dictionary: add 500 nouns (3) | Add 500 nouns to the Tuvan--Russian bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Tuvan--Russian bilingual dictionary: add 500 nouns (4) | Add 500 nouns to the Tuvan--Russian bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Tuvan--Russian bilingual dictionary: add 500 verbs (1) | Add 500 verbs to the Tuvan--Russian bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Tuvan--Russian bilingual dictionary: add 500 verbs (2) | Add 500 verbs to the Tuvan--Russian bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Tuvan--Russian bilingual dictionary: add 500 verbs (3) | Add 500 verbs to the Tuvan--Russian bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Tuvan--Russian bilingual dictionary: add 500 verbs (4) | Add 500 verbs to the Tuvan--Russian bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Tuvan--Russian bilingual dictionary: add 500 adjectives (1) | Add 500 adjectives to the Tuvan--Russian bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Tuvan--Russian bilingual dictionary: add 500 adjectives (2) | Add 500 adjectives to the Tuvan--Russian bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Tuvan--Russian bilingual dictionary: add 500 adjectives (3) | Add 500 adjectives to the Tuvan--Russian bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Tuvan--Russian bilingual dictionary: add 500 adjectives (4) | Add 500 adjectives to the Tuvan--Russian bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Tuvan--Russian bilingual dictionary: add 50 adverbs | Add 50 adverbs to the Tuvan--Russian bidix. | 3-5 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Uzbek lexicon: add 500 nouns (1) | Add 500 nouns to the Uzbek lexicon in lexc. | 5-8 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Uzbek lexicon: add 500 nouns (2) | Add 500 nouns to the Uzbek lexicon in lexc. | 5-8 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Uzbek lexicon: add 500 nouns (3) | Add 500 nouns to the Uzbek lexicon in lexc. | 5-8 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Uzbek lexicon: add 500 nouns (4) | Add 500 nouns to the Uzbek lexicon in lexc. | 5-8 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Uzbek lexicon: add 500 verbs (1) | Add 500 verbs to the Uzbek lexicon in lexc. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Uzbek lexicon: add 500 verbs (2) | Add 500 verbs to the Uzbek lexicon in lexc. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Uzbek lexicon: add 500 verbs (3) | Add 500 verbs to the Uzbek lexicon in lexc. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Uzbek lexicon: add 500 verbs (4) | Add 500 verbs to the Uzbek lexicon in lexc. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Uzbek lexicon: add 500 adjectives (1) | Add 500 adjectives to the Uzbek lexicon in lexc. | 12-16 hours | Francis Tyers, Firespeaker
code | 1. Hard | Uzbek lexicon: add 500 adjectives (2) | Add 500 adjectives to the Uzbek lexicon in lexc. | 12-16 hours | Francis Tyers, Firespeaker
code | 1. Hard | Uzbek lexicon: add 500 adjectives (3) | Add 500 adjectives to the Uzbek lexicon in lexc. | 12-16 hours | Francis Tyers, Firespeaker
code | 1. Hard | Uzbek lexicon: add 500 adjectives (4) | Add 500 adjectives to the Uzbek lexicon in lexc. | 12-16 hours | Francis Tyers, Firespeaker
code | 1. Hard | Uzbek lexicon: add 50 adverbs | Add 50 adverbs to the Uzbek lexicon in lexc, avoiding compositional forms of verbs and nouns. | 3-5 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Uzbek--Russian bilingual dictionary: add 500 nouns (1) | Add 500 nouns to the Uzbek--Russian bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Uzbek--Russian bilingual dictionary: add 500 nouns (2) | Add 500 nouns to the Uzbek--Russian bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Uzbek--Russian bilingual dictionary: add 500 nouns (3) | Add 500 nouns to the Uzbek--Russian bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Uzbek--Russian bilingual dictionary: add 500 nouns (4) | Add 500 nouns to the Uzbek--Russian bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Uzbek--Russian bilingual dictionary: add 500 verbs (1) | Add 500 verbs to the Uzbek--Russian bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Uzbek--Russian bilingual dictionary: add 500 verbs (2) | Add 500 verbs to the Uzbek--Russian bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Uzbek--Russian bilingual dictionary: add 500 verbs (3) | Add 500 verbs to the Uzbek--Russian bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Uzbek--Russian bilingual dictionary: add 500 verbs (4) | Add 500 verbs to the Uzbek--Russian bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Uzbek--Russian bilingual dictionary: add 500 adjectives (1) | Add 500 adjectives to the Uzbek--Russian bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Uzbek--Russian bilingual dictionary: add 500 adjectives (2) | Add 500 adjectives to the Uzbek--Russian bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Uzbek--Russian bilingual dictionary: add 500 adjectives (3) | Add 500 adjectives to the Uzbek--Russian bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Uzbek--Russian bilingual dictionary: add 500 adjectives (4) | Add 500 adjectives to the Uzbek--Russian bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Uzbek--Russian bilingual dictionary: add 50 adverbs | Add 50 adverbs to the Uzbek--Russian bidix. | 3-5 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Uzbek--Kazakh bilingual dictionary: add 500 nouns (1) | Add 500 nouns to the Uzbek--Kazakh bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Uzbek--Kazakh bilingual dictionary: add 500 nouns (2) | Add 500 nouns to the Uzbek--Kazakh bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Uzbek--Kazakh bilingual dictionary: add 500 nouns (3) | Add 500 nouns to the Uzbek--Kazakh bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Uzbek--Kazakh bilingual dictionary: add 500 nouns (4) | Add 500 nouns to the Uzbek--Kazakh bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Uzbek--Kazakh bilingual dictionary: add 500 verbs (1) | Add 500 verbs to the Uzbek--Kazakh bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Uzbek--Kazakh bilingual dictionary: add 500 verbs (2) | Add 500 verbs to the Uzbek--Kazakh bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Uzbek--Kazakh bilingual dictionary: add 500 verbs (3) | Add 500 verbs to the Uzbek--Kazakh bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Uzbek--Kazakh bilingual dictionary: add 500 verbs (4) | Add 500 verbs to the Uzbek--Kazakh bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Uzbek--Kazakh bilingual dictionary: add 500 adjectives (1) | Add 500 adjectives to the Uzbek--Kazakh bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Uzbek--Kazakh bilingual dictionary: add 500 adjectives (2) | Add 500 adjectives to the Uzbek--Kazakh bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Uzbek--Kazakh bilingual dictionary: add 500 adjectives (3) | Add 500 adjectives to the Uzbek--Kazakh bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Uzbek--Kazakh bilingual dictionary: add 500 adjectives (4) | Add 500 adjectives to the Uzbek--Kazakh bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Uzbek--Kazakh bilingual dictionary: add 50 adverbs | Add 50 adverbs to the Uzbek--Kazakh bidix. | 3-5 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Uzbek--Kyrgyz bilingual dictionary: add 500 nouns (1) | Add 500 nouns to the Uzbek--Kyrgyz bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Uzbek--Kyrgyz bilingual dictionary: add 500 nouns (2) | Add 500 nouns to the Uzbek--Kyrgyz bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Uzbek--Kyrgyz bilingual dictionary: add 500 nouns (3) | Add 500 nouns to the Uzbek--Kyrgyz bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Uzbek--Kyrgyz bilingual dictionary: add 500 nouns (4) | Add 500 nouns to the Uzbek--Kyrgyz bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Uzbek--Kyrgyz bilingual dictionary: add 500 verbs (1) | Add 500 verbs to the Uzbek--Kyrgyz bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Uzbek--Kyrgyz bilingual dictionary: add 500 verbs (2) | Add 500 verbs to the Uzbek--Kyrgyz bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Uzbek--Kyrgyz bilingual dictionary: add 500 verbs (3) | Add 500 verbs to the Uzbek--Kyrgyz bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Uzbek--Kyrgyz bilingual dictionary: add 500 verbs (4) | Add 500 verbs to the Uzbek--Kyrgyz bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Uzbek--Kyrgyz bilingual dictionary: add 500 adjectives (1) | Add 500 adjectives to the Uzbek--Kyrgyz bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Uzbek--Kyrgyz bilingual dictionary: add 500 adjectives (2) | Add 500 adjectives to the Uzbek--Kyrgyz bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Uzbek--Kyrgyz bilingual dictionary: add 500 adjectives (3) | Add 500 adjectives to the Uzbek--Kyrgyz bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Uzbek--Kyrgyz bilingual dictionary: add 500 adjectives (4) | Add 500 adjectives to the Uzbek--Kyrgyz bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Uzbek--Kyrgyz bilingual dictionary: add 50 adverbs | Add 50 adverbs to the Uzbek--Kyrgyz bidix. | 3-5 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Uzbek--Turkish bilingual dictionary: add 500 nouns (1) | Add 500 nouns to the Uzbek--Turkish bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Uzbek--Turkish bilingual dictionary: add 500 nouns (2) | Add 500 nouns to the Uzbek--Turkish bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Uzbek--Turkish bilingual dictionary: add 500 nouns (3) | Add 500 nouns to the Uzbek--Turkish bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Uzbek--Turkish bilingual dictionary: add 500 nouns (4) | Add 500 nouns to the Uzbek--Turkish bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Uzbek--Turkish bilingual dictionary: add 500 verbs (1) | Add 500 verbs to the Uzbek--Turkish bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Uzbek--Turkish bilingual dictionary: add 500 verbs (2) | Add 500 verbs to the Uzbek--Turkish bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Uzbek--Turkish bilingual dictionary: add 500 verbs (3) | Add 500 verbs to the Uzbek--Turkish bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Uzbek--Turkish bilingual dictionary: add 500 verbs (4) | Add 500 verbs to the Uzbek--Turkish bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Uzbek--Turkish bilingual dictionary: add 500 adjectives (1) | Add 500 adjectives to the Uzbek--Turkish bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Uzbek--Turkish bilingual dictionary: add 500 adjectives (2) | Add 500 adjectives to the Uzbek--Turkish bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Uzbek--Turkish bilingual dictionary: add 500 adjectives (3) | Add 500 adjectives to the Uzbek--Turkish bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Uzbek--Turkish bilingual dictionary: add 500 adjectives (4) | Add 500 adjectives to the Uzbek--Turkish bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Uzbek--Turkish bilingual dictionary: add 50 adverbs | Add 50 adverbs to the Uzbek--Turkish bidix. | 3-5 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Udmurt lexicon: add 500 nouns (1) | Add 500 nouns to the Udmurt lexicon in lexc. | 5-8 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Udmurt lexicon: add 500 nouns (2) | Add 500 nouns to the Udmurt lexicon in lexc. | 5-8 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Udmurt lexicon: add 500 nouns (3) | Add 500 nouns to the Udmurt lexicon in lexc. | 5-8 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Udmurt lexicon: add 500 nouns (4) | Add 500 nouns to the Udmurt lexicon in lexc. | 5-8 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Udmurt lexicon: add 500 verbs (1) | Add 500 verbs to the Udmurt lexicon in lexc. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Udmurt lexicon: add 500 verbs (2) | Add 500 verbs to the Udmurt lexicon in lexc. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Udmurt lexicon: add 500 verbs (3) | Add 500 verbs to the Udmurt lexicon in lexc. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Udmurt lexicon: add 500 verbs (4) | Add 500 verbs to the Udmurt lexicon in lexc. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Udmurt lexicon: add 500 adjectives (1) | Add 500 adjectives to the Udmurt lexicon in lexc. | 12-16 hours | Francis Tyers, Firespeaker
code | 1. Hard | Udmurt lexicon: add 500 adjectives (2) | Add 500 adjectives to the Udmurt lexicon in lexc. | 12-16 hours | Francis Tyers, Firespeaker
code | 1. Hard | Udmurt lexicon: add 500 adjectives (3) | Add 500 adjectives to the Udmurt lexicon in lexc. | 12-16 hours | Francis Tyers, Firespeaker
code | 1. Hard | Udmurt lexicon: add 500 adjectives (4) | Add 500 adjectives to the Udmurt lexicon in lexc. | 12-16 hours | Francis Tyers, Firespeaker
code | 1. Hard | Udmurt lexicon: add 50 adverbs | Add 50 adverbs to the Udmurt lexicon in lexc, avoiding compositional forms of verbs and nouns. | 3-5 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Udmurt--Komi bilingual dictionary: add 500 nouns (1) | Add 500 nouns to the Udmurt--Komi bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Udmurt--Komi bilingual dictionary: add 500 nouns (2) | Add 500 nouns to the Udmurt--Komi bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Udmurt--Komi bilingual dictionary: add 500 nouns (3) | Add 500 nouns to the Udmurt--Komi bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Udmurt--Komi bilingual dictionary: add 500 nouns (4) | Add 500 nouns to the Udmurt--Komi bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Udmurt--Komi bilingual dictionary: add 500 verbs (1) | Add 500 verbs to the Udmurt--Komi bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Udmurt--Komi bilingual dictionary: add 500 verbs (2) | Add 500 verbs to the Udmurt--Komi bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Udmurt--Komi bilingual dictionary: add 500 verbs (3) | Add 500 verbs to the Udmurt--Komi bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Udmurt--Komi bilingual dictionary: add 500 verbs (4) | Add 500 verbs to the Udmurt--Komi bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Udmurt--Komi bilingual dictionary: add 500 adjectives (1) | Add 500 adjectives to the Udmurt--Komi bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Udmurt--Komi bilingual dictionary: add 500 adjectives (2) | Add 500 adjectives to the Udmurt--Komi bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Udmurt--Komi bilingual dictionary: add 500 adjectives (3) | Add 500 adjectives to the Udmurt--Komi bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Udmurt--Komi bilingual dictionary: add 500 adjectives (4) | Add 500 adjectives to the Udmurt--Komi bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Udmurt--Komi bilingual dictionary: add 50 adverbs | Add 50 adverbs to the Udmurt--Komi bidix. | 3-5 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Udmurt--Mari bilingual dictionary: add 500 nouns (1) | Add 500 nouns to the Udmurt--Mari bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Udmurt--Mari bilingual dictionary: add 500 nouns (2) | Add 500 nouns to the Udmurt--Mari bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Udmurt--Mari bilingual dictionary: add 500 nouns (3) | Add 500 nouns to the Udmurt--Mari bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Udmurt--Mari bilingual dictionary: add 500 nouns (4) | Add 500 nouns to the Udmurt--Mari bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Udmurt--Mari bilingual dictionary: add 500 verbs (1) | Add 500 verbs to the Udmurt--Mari bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Udmurt--Mari bilingual dictionary: add 500 verbs (2) | Add 500 verbs to the Udmurt--Mari bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Udmurt--Mari bilingual dictionary: add 500 verbs (3) | Add 500 verbs to the Udmurt--Mari bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Udmurt--Mari bilingual dictionary: add 500 verbs (4) | Add 500 verbs to the Udmurt--Mari bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Udmurt--Mari bilingual dictionary: add 500 adjectives (1) | Add 500 adjectives to the Udmurt--Mari bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Udmurt--Mari bilingual dictionary: add 500 adjectives (2) | Add 500 adjectives to the Udmurt--Mari bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Udmurt--Mari bilingual dictionary: add 500 adjectives (3) | Add 500 adjectives to the Udmurt--Mari bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Udmurt--Mari bilingual dictionary: add 500 adjectives (4) | Add 500 adjectives to the Udmurt--Mari bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Udmurt--Mari bilingual dictionary: add 50 adverbs | Add 50 adverbs to the Udmurt--Mari bidix. | 3-5 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Udmurt--Finnish bilingual dictionary: add 500 nouns (1) | Add 500 nouns to the Udmurt--Finnish bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Udmurt--Finnish bilingual dictionary: add 500 nouns (2) | Add 500 nouns to the Udmurt--Finnish bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Udmurt--Finnish bilingual dictionary: add 500 nouns (3) | Add 500 nouns to the Udmurt--Finnish bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Udmurt--Finnish bilingual dictionary: add 500 nouns (4) | Add 500 nouns to the Udmurt--Finnish bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Udmurt--Finnish bilingual dictionary: add 500 verbs (1) | Add 500 verbs to the Udmurt--Finnish bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Udmurt--Finnish bilingual dictionary: add 500 verbs (2) | Add 500 verbs to the Udmurt--Finnish bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Udmurt--Finnish bilingual dictionary: add 500 verbs (3) | Add 500 verbs to the Udmurt--Finnish bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Udmurt--Finnish bilingual dictionary: add 500 verbs (4) | Add 500 verbs to the Udmurt--Finnish bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Udmurt--Finnish bilingual dictionary: add 500 adjectives (1) | Add 500 adjectives to the Udmurt--Finnish bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Udmurt--Finnish bilingual dictionary: add 500 adjectives (2) | Add 500 adjectives to the Udmurt--Finnish bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Udmurt--Finnish bilingual dictionary: add 500 adjectives (3) | Add 500 adjectives to the Udmurt--Finnish bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Udmurt--Finnish bilingual dictionary: add 500 adjectives (4) | Add 500 adjectives to the Udmurt--Finnish bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Udmurt--Finnish bilingual dictionary: add 50 adverbs | Add 50 adverbs to the Udmurt--Finnish bidix. | 3-5 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Udmurt--Mordvin bilingual dictionary: add 500 nouns (1) | Add 500 nouns to the Udmurt--Mordvin bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Udmurt--Mordvin bilingual dictionary: add 500 nouns (2) | Add 500 nouns to the Udmurt--Mordvin bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Udmurt--Mordvin bilingual dictionary: add 500 nouns (3) | Add 500 nouns to the Udmurt--Mordvin bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Udmurt--Mordvin bilingual dictionary: add 500 nouns (4) | Add 500 nouns to the Udmurt--Mordvin bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Udmurt--Mordvin bilingual dictionary: add 500 verbs (1) | Add 500 verbs to the Udmurt--Mordvin bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Udmurt--Mordvin bilingual dictionary: add 500 verbs (2) | Add 500 verbs to the Udmurt--Mordvin bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Udmurt--Mordvin bilingual dictionary: add 500 verbs (3) | Add 500 verbs to the Udmurt--Mordvin bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Udmurt--Mordvin bilingual dictionary: add 500 verbs (4) | Add 500 verbs to the Udmurt--Mordvin bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Udmurt--Mordvin bilingual dictionary: add 500 adjectives (1) | Add 500 adjectives to the Udmurt--Mordvin bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Udmurt--Mordvin bilingual dictionary: add 500 adjectives (2) | Add 500 adjectives to the Udmurt--Mordvin bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Udmurt--Mordvin bilingual dictionary: add 500 adjectives (3) | Add 500 adjectives to the Udmurt--Mordvin bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Udmurt--Mordvin bilingual dictionary: add 500 adjectives (4) | Add 500 adjectives to the Udmurt--Mordvin bidix. | 12-16 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Udmurt--Mordvin bilingual dictionary: add 50 adverbs | Add 50 adverbs to the Udmurt--Mordvin bidix. | 3-5 hours | Francis Tyers, Firespeaker |
code | 1. Hard | Fix memory hogging in lttoolbox compound analyser | As described in bug report 109, the compound analyser in lttoolbox seems to cache too much without releasing memory. Fix this bug so that it keeps memory usage constant and does not run slower and slower for every line of input. Requires C++ knowledge. | 12-16 hours | User:Unhammer
code | 3. Easy | Proofread 100 entries in the Serbo-Croatian morphological analyser | Go through a list of 100 words and check their morphological paradigms. Correct typos and other errors in word entries and paradigms. If a word is in the wrong paradigm, assign it another one. | 3-5 hours | Hrvoje Peradin, Francis Tyers
code | 2. Medium | Proofread 200 entries in the Serbo-Croatian morphological analyser | Go through a list of 200 words and check their morphological paradigms. Correct typos and other errors in word entries and paradigms. If a word is in the wrong paradigm, assign it another one. | 8-10 hours | Hrvoje Peradin, Francis Tyers
code | 1. Hard | Proofread 400 entries in the Serbo-Croatian morphological analyser | Go through a list of 400 words and check their morphological paradigms. Correct typos and other errors in word entries and paradigms. If a word is in the wrong paradigm, assign it another one. | 13-15 hours | Hrvoje Peradin, Francis Tyers
code | 1. Hard | Increase coverage for the Serbo-Croatian - Macedonian language pair | Add 80 words from a frequency list, assign them a paradigm in the Serbo-Croatian analyser, translate them, put the translations in the bidix and assign each translation a paradigm in the Macedonian analyser (a sketch of the frequency-list step appears below). | 13-15 hours | Hrvoje Peradin, Francis Tyers
code | 1. Hard | Even up the coverage of the Serbo-Croatian and Macedonian morphological analysers | There are words in the Macedonian morphological analyser which do not have a pair in the Serbo-Croatian analyser. Extract 100 of them, translate them, add them to the bidix and assign each of them a paradigm in the Serbo-Croatian analyser. | 13-15 hours | Hrvoje Peradin, Francis Tyers
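The coverage rows above come down to one recurring step: finding frequent surface forms that the monolingual analyser does not yet recognise. Below is a minimal sketch of that step, assuming (hypothetically) a frequency list with one "count form" pair per line and a compiled lttoolbox analyser; the file names are placeholders, and the check relies on lt-proc marking unknown forms with an asterisk in its output.

<pre>
#!/usr/bin/env python3
"""Print the most frequent forms a monolingual analyser cannot analyse.

Assumptions (hypothetical): hitparade.txt has lines of the form
"<count> <surface form>", and hr.automorf.bin is a compiled lttoolbox
analyser.  lt-proc marks unknown words as ^form/*form$ in its output.
"""
import subprocess

FREQ_LIST = "hitparade.txt"     # hypothetical frequency list
ANALYSER = "hr.automorf.bin"    # hypothetical compiled analyser
TOP_N = 80                      # as in the coverage task above

def is_known(form):
    """Run one form through lt-proc and check whether it gets an analysis."""
    out = subprocess.run(["lt-proc", ANALYSER], input=form + "\n",
                         capture_output=True, text=True).stdout
    return "/*" not in out      # "/*form" marks an unknown word

unknown = []
with open(FREQ_LIST, encoding="utf-8") as f:
    for line in f:
        parts = line.split()
        if len(parts) < 2:
            continue
        count, form = parts[0], parts[1]
        # One lt-proc call per form is slow but keeps the sketch simple.
        if not is_known(form):
            unknown.append((count, form))
        if len(unknown) >= TOP_N:
            break

for count, form in unknown:
    print(count + "\t" + form)
</pre>

The forms this prints are the ones the coverage tasks then ask you to assign a paradigm, translate, and add to the bidix.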
== Task list ==
BULK IMPORT COMPLETE.
Any edits you make to the tables below will have no effect on the contents of the task tracker; please edit tasks there instead.
=== Misc tools ===
Category | Title | Description | Mentors |
---|---|---|---|
code | Unigram tagging mode for apertium-tagger |
Edit the apertium-tagger code to allow for lexicalised unigram tagging. This would basically choose the most frequent analysis for each surface form of a word. For further information and guidance on this task, you are encouraged to come to our IRC channel. up2015 |
Francis Tyers Wei En |
code | Data format for the unigram tagger | Come up with a binary storage format for the data used for the unigram tagger. It could be based on the existing .prob format. For further information and guidance on this task, you are encouraged to come to our IRC channel. up2015 |
Francis Tyers Wei En |
code | Add tag combination back-off to unigram tagger. | Modify the unigram tagger to allow for back-off to tag sequence in the case that a given form is not found. For further information and guidance on this task, you are encouraged to come to our IRC channel. up2015 |
Francis Tyers Wei En |
code | Prototype unigram tagger. | Write a simple unigram tagger in a language of your choice (a minimal sketch of the idea follows this table). For further information and guidance on this task, you are encouraged to come to our IRC channel. up2015 |
Francis Tyers Wei En |
code | Training for unigram tagger | Write a program that trains a model suitable for use with the unigram tagger. For further information and guidance on this task, you are encouraged to come to our IRC channel. up2015 |
Francis Tyers Wei En |
code | make voikkospell understand apertium stream format input | Make voikkospell understand apertium stream format input, e.g. ^word/analysis1/analysis2$; voikkospell should interpret only the 'word' part as the string to be spellchecked (a parsing sketch follows this table). For further information and guidance on this task, you are encouraged to come to our IRC channel. up2015 |
User:Francis Tyers User:Firespeaker |
code | make voikkospell return output in apertium stream format | make voikkospell return output suggestions in apertium stream format, e.g. ^correctword$ or ^incorrectword/correct1/correct2$. For further information and guidance on this task, you are encouraged to come to our IRC channel. up2015 |
User:Francis Tyers User:Firespeaker |
code | libvoikko support for OS X | Make a spell server for OS X's system-wide spell checker to use arbitrary languages through libvoikko. See https://developer.apple.com/library/mac/documentation/Cocoa/Conceptual/SpellCheck/Tasks/CreatingSpellServer.html#//apple_ref/doc/uid/20000770-BAJFBAAH for more information For further information and guidance on this task, you are encouraged to come to our IRC channel. up2015 |
User:Francis Tyers User:Firespeaker |
documentation | document: setting up libreoffice voikko on Ubuntu/debian | document how to set up libreoffice voikko working with a language on Ubuntu and debian For further information and guidance on this task, you are encouraged to come to our IRC channel. up2015 |
User:Francis Tyers User:Firespeaker |
documentation | document: setting up libreoffice voikko on Fedora | document how to set up libreoffice voikko working with a language on Fedora For further information and guidance on this task, you are encouraged to come to our IRC channel. up2015 |
User:Francis Tyers User:Firespeaker |
documentation | document: setting up libreoffice voikko on Windows | document how to set up libreoffice voikko working with a language on Windows For further information and guidance on this task, you are encouraged to come to our IRC channel. up2015 |
User:Francis Tyers User:Firespeaker |
documentation | document: setting up libreoffice voikko on OS X | document how to set up libreoffice voikko working with a language on OS X For further information and guidance on this task, you are encouraged to come to our IRC channel. up2015 |
User:Francis Tyers User:Firespeaker |
documentation | document how to set up libenchant to work with libvoikko | Libenchant is a spellchecking wrapper. Set it up to work with libvoikko, a spellchecking backend, and document how you did it. You may want to use a spellchecking module available in apertium for testing. For further information and guidance on this task, you are encouraged to come to our IRC channel. up2015 |
User:Francis Tyers User:Firespeaker |
code | geriaoueg lookup code | firefox/iceweasel plugin which queries apertium API for a word by sending a context (±n words) and the position of the word in the context and gets translation for language pair xxx-yyy For further information and guidance on this task, you are encouraged to come to our IRC channel. up2015 |
User:Francis Tyers user:Firespeaker |
code | geriaoueg hovering the right way | Fix the geriaoueg plugins so that the popup stays there until you hover off a word, just like normal hovering. This will involve a redesign of the way the hovering code works. The plugin also crashes sometimes when dealing with urls, but it seems to be related to this issue. It'd be good if it stops crashing in those cases. For further information and guidance on this task, you are encouraged to come to our IRC channel. up2015 |
User:Francis Tyers User:Firespeaker |
code | Translate page feature for geriaoueg firefox & chrome plugins | Add functionality to Geriaoueg plugins for chrome and firefox that lets them not just gloss words but translate an entire page with apertium, much like existing corporate browser plugins. Don't worry about language detection and other complicated problems for now. For further information and guidance on this task, you are encouraged to come to our IRC channel. up2015 |
User:Francis Tyers User:Firespeaker |
quality | make apertium-quality work with python3.3 on all platforms | migrate apertium-quality away from distribute to newer setuptools so it installs correctly in more recent versions of python (known incompatible: python3.3 OS X, known compatible: MacPorts python3.2) For further information and guidance on this task, you are encouraged to come to our IRC channel. up2015 |
User:Firespeaker |
quality, code | Get bible aligner working (or rewrite it) | trunk/apertium-tools/bible_aligner.py - Should take two bible translations and output a tmx file with one verse per entry. There is a standard-ish plain-text bible translation format that we have bible translations in, and we have files that contain the names of verses of various languages mapped to English verse names For further information and guidance on this task, you are encouraged to come to our IRC channel. up2015 |
User:Firespeaker User:Sereni |
research | tesseract interface for apertium languages | Find out what it would take to integrate apertium or voikkospell into tesseract. Document thoroughly available options on the wiki. For further information and guidance on this task, you are encouraged to come to our IRC channel. up2015 |
User:Firespeaker |
code | Syntax tree visualisation using GNU bison | Write a program which reads a grammar using bison, parses a sentence and outputs the syntax tree as text, or graphViz or something. Some example bison code can be found here. For further information and guidance on this task, you are encouraged to come to our IRC channel. up2015 |
User:Francis Tyers User:Mlforcada |
code | make concordancer work with output of analyser | Allow spectie's concordancer to accept an optional apertium mode and directory (implement via argparse). When it has these, it should run the corpus through that apertium mode and search against the resulting tags and lemmas as well as the surface forms. E.g., the form алдым might have the analysis via an apertium mode of ^алдым/алд<n><px1sg><nom>/ал<v><tv><ifi><p1><sg>$, so a search for "px1sg" should bring up this word. For further information and guidance on this task, you are encouraged to come to our IRC channel. up2015 |
User:Francis Tyers User:Firespeaker |
code | convert a current transducer for a language using lexc+twol to a guesser | Figure out how to generate a guesser for a language module that uses lexc for morphotactics and twol for morphophonology (e.g., apertium-kaz). One approach to investigate would be to generate all the possible archiphoneme representations of a given form and run the lexc guesser on that. For further information and guidance on this task, you are encouraged to come to our IRC channel. up2015 |
User:Firespeaker User:Flammie |
code | let apertium-init support giella pairs | Apertium-init is a tool to bootstrap a new language module or translation pair, with build rules and minimal data. It doesn't yet support pairs that depend on Giellatekno language modules; we would like it to. For further information and guidance on this task, you are encouraged to come to our IRC channel. up2015 |
User:Unhammer User:Francis Tyers |
code | create lt-compose tool to compose two transducers | This should do what hfst-compose does, but for lttoolbox transducers. For further information and guidance on this task, you are encouraged to come to our IRC channel. up2015 |
User:Unhammer |
documentation,research | create and test a configuration file for simpledix | Simpledix tries to help inexperienced users with the task of inserting words into Apertium dictionaries, but it needs paradigm description files to generate meaningful configuration files. Write and test a description file for the Apertium pair of your choice, and report possible improvements to the procedure. | User:dtr5
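The unigram-tagger rows above (prototype, training, tag-combination back-off) revolve around one idea: for each surface form, remember which analysis was most frequent in a hand-tagged corpus, and fall back to coarser tag-sequence counts when the form itself was never seen. The sketch below only illustrates that idea; the tab-separated training format and the function names are invented for the example and are not the apertium-tagger .prob format.

<pre>
#!/usr/bin/env python3
"""Toy lexicalised unigram tagger with a tag-sequence back-off.

Assumes a hypothetical training file where each line is
"surface_form<TAB>correct_analysis" and every analysis carries at
least one <tag>; this is an illustration only, not the apertium-tagger
data format.
"""
from collections import Counter, defaultdict

def train(path):
    by_form = defaultdict(Counter)   # surface form -> Counter of analyses
    by_tags = Counter()              # tag sequence (lemma stripped) -> count
    with open(path, encoding="utf-8") as f:
        for line in f:
            form, analysis = line.rstrip("\n").split("\t")
            by_form[form][analysis] += 1
            by_tags[analysis[analysis.find("<"):]] += 1
    return by_form, by_tags

def tag(form, analyses, by_form, by_tags):
    """Pick one analysis out of the ambiguity class `analyses` for `form`."""
    if form in by_form:
        # lexicalised unigram: most frequent analysis seen for this exact form
        seen = [a for a in analyses if a in by_form[form]]
        if seen:
            return max(seen, key=lambda a: by_form[form][a])
    # back-off: most frequent tag combination, ignoring the lemma
    return max(analyses, key=lambda a: by_tags[a[a.find("<"):]])
</pre>

In a real implementation, train() would be replaced by a separate training program writing a binary model (the "Data format" and "Training" rows above), and tag() would run once per ambiguous cohort coming out of the morphological analyser.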
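The voikkospell rows above likewise depend on picking the surface form out of Apertium stream-format units (^surface/analysis1/analysis2$). A minimal reading-side sketch follows; it deliberately ignores superblanks and escaped characters, which a real implementation would have to handle.

<pre>
#!/usr/bin/env python3
"""Extract surface forms from Apertium stream format on stdin (a sketch)."""
import re
import sys

# ^surface/analysis1/analysis2$ -- capture everything up to the first '/'
LU = re.compile(r"\^([^/$]+)(?:/[^$]*)?\$")

for line in sys.stdin:
    for match in LU.finditer(line):
        surface = match.group(1)
        # The surface form is what would be handed to the spellchecker;
        # for the output task, the verdict would be re-wrapped as
        # ^correctword$ or ^incorrectword/correct1/correct2$.
        print(surface)
</pre>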
=== Website and APY ===
Category | Title | Description | Mentors |
---|---|---|---|
code | apertium-apy mode for geriaoueg (biltrans in context) | apertium-apy function that accepts a context (e.g., ±n ~words around word) and a position in the context of a word, gets biltrans output on entire context, and returns translation for the word For further information and guidance on this task, you are encouraged to come to our IRC channel. up2015 |
User:Francis Tyers User:Firespeaker User:Unhammer, User:Sushain |
code | Website translation in Html-tools | Html-tools should detect when the user wants to translate a website (similar to how Google Translate does it) and switch to an interface (See "Website translation in Html-tools (interface)" task) and perform the translation. It should also make it so that new pages that the user navigates to are translated. See ticket 50 for details and progress tracking. For further information and guidance on this task, you are encouraged to come to our IRC channel. up2015 |
User:Firespeaker User:Unhammer, User:Sushain |
interface | Website translation in Html-tools (interface) | Add an interface to Html-tools that shows a webpage in an <iframe> with translation options and a back button to return to text/document translation. See ticket 50 for details and progress tracking. For further information and guidance on this task, you are encouraged to come to our IRC channel. up2015 |
User:Firespeaker User:Unhammer, User:Sushain |
code | Fix Html-tools crashing on iPads when copying text | Fix Html-tools so that the Apertium site does not crash on iPads when copying text on any of the modes while maintaining semantic HTML. This task requires having access to an iPad. See ticket 42 for details and progress tracking. For further information and guidance on this task, you are encouraged to come to our IRC channel. up2015 |
User:Firespeaker User:Unhammer, User:Sushain |
code | Fix Html-tools copying text on Windows Phone IE | Fix Html-tools so that the Apertium site allows copying text on Windows Phone while maintaining semantic HTML. This task requires having access to a Windows Phone. See ticket 42 for details and progress tracking. For further information and guidance on this task, you are encouraged to come to our IRC channel. up2015 |
User:Firespeaker User:Unhammer, User:Sushain |
code | APY API keys | Add API key support to APY but don't overengineer it. See ticket 31 for details and progress tracking. For further information and guidance on this task, you are encouraged to come to our IRC channel. up2015 |
User:Firespeaker User:Unhammer User:Xavivars, User:Sushain |
code | Localisation of tag attributes on Html-tools | In Html-tools, the meta description tag isn't localized as of now since the text is an attribute. Search engines often display this as their snippet. A possible way to achieve this is using data-text="@content@description". See ticket 29 for details and progress tracking. For further information and guidance on this task, you are encouraged to come to our IRC channel. up2015 |
User:Firespeaker User:Unhammer, User:Sushain |
code | Html-tools font issues | This task concerns a font issue in Html-tools. See ticket 27 for details and progress tracking. For further information and guidance on this task, you are encouraged to come to our IRC channel.up2015 |
User:Firespeaker User:Unhammer, User:Sushain |
code, interface | Auto-select target language | ticket 25 made apertium-html-tools show the available target languages first, but preferably, one of them would be auto-selected as well (maybe with a single visual "blink" to show that something happened there). For further information and guidance on this task, you are encouraged to come to our IRC channel. up2015 |
User:Firespeaker User:Unhammer User:Francis Tyers, User:Sushain |
code | Maintaining order of user interactions on Html-tools | In Html-tools, if a user clicks a new language choice while translation or detection is proceeding (AJAX callback has not yet returned), the original action will not be cancelled. Make it so that the first action is canceled and overridden by the second. See ticket 9 for details and progress tracking. For further information and guidance on this task, you are encouraged to come to our IRC channel. up2015 |
User:Firespeaker User:Unhammer, User:Sushain |
code | More file formats for APY | APY does not support DOC, XLS, or PPT file translation, which requires converting the file to the newer XML-based formats through LibreOffice (or equivalent) and then back. See ticket 7 for details and progress tracking. For further information and guidance on this task, you are encouraged to come to our IRC channel. up2015 |
User:Firespeaker User:Unhammer User:Francis Tyers, User:Sushain |
code | Improved file translation functionality for APY | APY needs logging and to be non-blocking for file translation. See ticket 7 for details and progress tracking. For further information and guidance on this task, you are encouraged to come to our IRC channel. up2015 |
User:Firespeaker User:Unhammer User:Francis Tyers, User:Sushain |
interface | Abstract the formatting for the Html-tools interface. | The html-tools interface should be easily customisable so that people can make it look how they want. The task is to abstract the formatting and make one or more new stylesheets to change the appearance. This is basically making a way of "skinning" the interface. For further information and guidance on this task, you are encouraged to come to our IRC channel. up2015 |
User:Firespeaker, User:Sushain |
interface | Html-tools spell-checker interface | Integrate the spell-checker interface that was designed for html-tools. It should be enablable in the html-tools config. See ticket 6 for details and progress tracking. For further information and guidance on this task, you are encouraged to come to our IRC channel. up2015 |
User:Firespeaker, User:Sushain |
code | Html-tools spell-checker code | Add code to the html-tools interface that allows spell checking to be performed. Should send entire string, and be able to match each returned result to its appropriate input word. Should also update as new words are typed (but not on every keystroke). See ticket 6 for details and progress tracking. For further information and guidance on this task, you are encouraged to come to our IRC channel. up2015 |
User:Firespeaker, User:Sushain |
code | libvoikko support for APY | Write a function for APY that checks the spelling of an input string via libvoikko and for each word returns whether the word is correct, and if unknown returns suggestions. Whether segmentation is done by the client or by apertium-apy will have to be figured out. You will also need to add scanning for spelling modes to the initialisation section. Try to find a sensible way to structure the requests and returned data with JSON. Add a switch to allow someone to turn off support for this (use argparse set_false). See ticket 6 for details and progress tracking. For further information and guidance on this task, you are encouraged to come to our IRC channel. up2015 |
User:Firespeaker User:Unhammer, User:Sushain |
code | Html-tools expanding textareas | The input textarea in the html-tools translation interface does not expand depending on the user's input even when there is significant whitespace remaining on the page. Improvements include varying the length of the textareas to fill up the viewport or expanding depending on input. Both the input and output textareas would have to maintain the same length for interface consistency. Different behavior may be desired on mobile. See ticket 4 for details and progress tracking. For further information and guidance on this task, you are encouraged to come to our IRC channel. up2015 |
User:Firespeaker, User:Sushain |
code | Performance tracking in APY | Add a way for APY to keep track of the number of words in the input and the time between sending input to a pipeline and receiving output, for the last n (e.g., 100) requests, and write a function to return the average words per second over the most recent m ≤ n (e.g., 10) requests (a sketch follows this table). For further information and guidance on this task, you are encouraged to come to our IRC channel. up2015 |
User:Firespeaker User:Unhammer |
code | Language variant picker in Html-tools | In html-tools, displaying language variants as distinct languages in the translator language selector is awkward and repetitive. Letting users first select a language and then, where relevant, choose a variant via radio buttons below the translation box provides a better user interface. See ticket 1 for details and progress tracking. For further information and guidance on this task, you are encouraged to come to our IRC channel. up2015 |
User:Firespeaker User:Unhammer User:Francis Tyers, User:Sushain |
research | Investigate how to implement HTML-translation that can deal with broken HTML | The old Apertium website had a 'surf-and-translate' feature, but it frequently broke on badly-behaved HTML. Investigate how similar web sites deal with broken HTML when rewriting the internal content of a (possibly automatically generated) HTML page. For further information and guidance on this task, you are encouraged to come to our IRC channel. up2015 |
User:Francis Tyers |
code | Add pipeline debug action to APY | Add a /pipedebug action to APY so that, given a text and a language pair, it returns not only the translation but the whole flow (like Apertium-viewer does). That would help identify exactly where APY-only (or null-flush-only) errors happen, and could be useful for debugging in general. Read more... For further information and guidance on this task, you are encouraged to come to our IRC channel. up2015 |
User:Xavivars User:Unhammer User:Firespeaker |
interface | Grammar checker interface | Create a grammar checker / proofing html interface. It should send the user input through a given pipeline, and parse the Constraint Grammar output, turning this back into readable output with underlined words. For further information and guidance on this task, you are encouraged to come to our IRC channel. up2015 |
User:Unhammer |
code | Suggest a word to html-tools | The apertium web-translator should have clickable links for the different problems in the translation pipeline (marked by #, * and @) that lead to a simple form for collecting new word suggestions from users. For further information and guidance on this task, you are encouraged to come to our IRC channel. up2015 |
User:TommiPirinen, more mentors plz |
code | Abumatran paradigm guesser integration to html-tools | The apertium web-translator could link unknown words to a web-based word-classification tool that can add them to the dixes. For further information and guidance on this task, you are encouraged to come to our IRC channel. up2015 |
User:TommiPirinen, more mentors plz |
code | User management for paradigm guesser | The Abumatran paradigm guesser currently has only admin-driven user management. For lots of people to be able to contribute with proper attribution, but without too much vandalism, an automated lightweight user registration system should be created. For further information and guidance on this task, you are encouraged to come to our IRC channel. up2015 |
User:TommiPirinen, more mentors plz |
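For the libvoikko spellchecker row above, the task leaves the JSON layout open. One possible per-word response shape, offered purely as a suggestion and not as an existing APY format or endpoint, could look like the following.

<pre>
import json

# Hypothetical response shape for a spelling request: one entry per word,
# with suggestions only when the word is unknown.  This is a design sketch,
# not an existing APY endpoint or format.
response = {
    "lang": "fin",
    "words": [
        {"form": "talo",  "known": True},
        {"form": "taloo", "known": False, "suggestions": ["talo", "taloon"]},
    ],
}
print(json.dumps(response, ensure_ascii=False, indent=2))
</pre>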
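The performance-tracking row above boils down to a bounded history of (word count, elapsed time) pairs and an average over the most recent entries. Here is a minimal sketch of that bookkeeping, independent of the actual APY code base; the class and method names are invented for illustration.

<pre>
#!/usr/bin/env python3
"""Bounded request-timing history and words-per-second average (a sketch)."""
from collections import deque
import time

class PerfTracker:
    def __init__(self, max_requests=100):
        # keep only the last `max_requests` (n_words, seconds) pairs
        self.history = deque(maxlen=max_requests)

    def record(self, text, start_time, end_time):
        self.history.append((len(text.split()), end_time - start_time))

    def words_per_second(self, last=10):
        recent = list(self.history)[-last:]
        if not recent:
            return 0.0
        words = sum(w for w, _ in recent)
        seconds = sum(s for _, s in recent)
        return words / seconds if seconds > 0 else 0.0

# usage sketch
tracker = PerfTracker()
t0 = time.time()
# ... send the input through the pipeline and receive the output ...
tracker.record("a short test sentence", t0, time.time())
print(tracker.words_per_second(last=10))
</pre>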
=== Pair visualisations ===
Category | Title | Description | Mentors |
---|---|---|---|
quality | fix pairviewer's 2- and 3-letter code conflation problems | pairviewer doesn't always conflate languages that have two codes. E.g. sv/swe, nb/nob, de/deu, da/dan, uk/ukr, et/est, nl/nld, he/heb, ar/ara, eus/eu are each two separate nodes, but should instead each be collapsed into one node. Figure out why this isn't happening and fix it. Also, implement an algorithm to generate 2-to-3-letter mappings for available languages based on having the identical language name in languages.json instead of loading the huge list from codes.json; try to make this as processor- and memory-efficient as possible (a sketch of the name-matching idea follows this table). For further information and guidance on this task, you are encouraged to come to our IRC channel. up2015 |
User:Firespeaker |
code, interface | map support for pairviewer ("pairmapper") | Write a version of pairviewer that instead of connecting floating nodes, connects nodes on a map. I.e., it should plot the nodes to an interactive world map (only for languages whose coordinates are provided, in e.g. GeoJSON format), and then connect them with straight-lines (as opposed to the current curved lines). Use an open map framework, like leaflet, polymaps, or openlayers For further information and guidance on this task, you are encouraged to come to our IRC channel. up2015 |
User:Firespeaker |
code | coordinates for Mongolic languages | Using the map Linguistic map of the Mongolic languages.png, write a file in GeoJSON (or similar) format that can be loaded by pairmapper (or, e.g., converted to kml and loaded in google maps). The file should contain points that are a geographic "center" (locus) for where each Mongolic language on that map is spoken. Use the term "Khalkha" (iso 639-3 khk) for "Mongolisch", and find a better map for Buryat. You can use a capital city for bigger, national languages if you'd like (think Paris as a locus for French). For further information and guidance on this task, you are encouraged to come to our IRC channel.up2015 |
User:Firespeaker User:Sereni |
code | draw languages as areas for pairmapper | Make a map interface that loads data (in e.g. GeoJSON or KML format) specifying areas where languages are spoken, as well as a single-point locus for the language, and displays the areas on the map (something like the way the states are displayed here) with a node with language code (like for pairviewer) at the locus. This should be able to be integrated into pairmapper, the planned map version of pairviewer. For further information and guidance on this task, you are encouraged to come to our IRC channel. up2015 |
User:Firespeaker |
code | georeference language areas for Tatar, Bashqort, and Chuvash | Using the maps listed here, try to define rough areas for where Tatar, Bashqort, and Chuvash are spoken. These areas should be specified in a file in GeoJSON (or similar) format for use by pairmapper's languages-as-areas plugin. Try to be fairly accurate and detailed. Maps to consult include Tatarsbashkirs1989ru, NarodaCCCP For further information and guidance on this task, you are encouraged to come to our IRC channel. up2015 |
User:Firespeaker User:Sereni |
code | georeference language areas for North Caucasus Turkic languages | Using the map Caucasus-ethnic_en.svg, write a file in GeoJSON (or similar) format for use by pairmapper's languages-as-areas plugin. The file should contain specifications for the area(s) the following languages are spoken in: Kumyk, Nogay, Karachay, Balkar. There should be a certain level of detail (e.g., don't just make a shape matching Kazakhstan for Kazakh) and accuracy (i.e., don't just put a square over Kazakhstan and call it the area for Kazakh). For further information and guidance on this task, you are encouraged to come to our IRC channel. up2015 |
User:Firespeaker User:Sereni |
code | georeference language areas for IE and Mongolic Caucasus-area languages | Using the map Caucasus-ethnic_en.svg, write a file in GeoJSON (or similar) format for use by pairmapper's languages-as-areas plugin. The file should contain specifications for the area(s) the following languages are spoken in: Ossetian, Armenian, Kalmyk. There should be a certain level of detail (e.g., don't just make a shape matching Kazakhstan for Kazakh) and accuracy (i.e., don't just put a square over Kazakhstan and call it the area for Kazakh). For further information and guidance on this task, you are encouraged to come to our IRC channel. up2015 |
User:Firespeaker User:Sereni |
code | georeference language areas for North Caucasus languages | Using the map Caucasus-ethnic_en.svg, write a file in GeoJSON (or similar) format for use by pairmapper's languages-as-areas plugin. The file should contain specifications for the area(s) the following languages are spoken in: Avar, Chechen, Abkhaz, Georgian. There should be a certain level of detail (e.g., don't just make a shape matching Kazakhstan for Kazakh) and accuracy (i.e., don't just put a square over Kazakhstan and call it the area for Kazakh). For further information and guidance on this task, you are encouraged to come to our IRC channel.up2015 |
User:Firespeaker User:Sereni |
code | georeference language areas for Central Asian languages: Uzbek and Uyghur | Using the map Central_Asia_Ethnic_en.svg, write a file in GeoJSON (or similar) format for use by pairmapper's languages-as-areas plugin. The file should contain specifications for the areas Uzbek and Uyghur are spoken in, with a certain level of detail (e.g., don't just make a shape matching Kazakhstan for Kazakh) and accuracy (i.e., don't just put a square over Kazakhstan and call it the area for Kazakh). For further information and guidance on this task, you are encouraged to come to our IRC channel. up2015 |
User:Firespeaker User:Sereni |
quality, code | split nor into nob and nno in pairviewer | Currently in pairviewer, nor is displayed as a language separately from nob and nno. However, the nor pair actually consists of both an nob and an nno component. Figure out a way for pairviewer (or pairsOut.py / get_all_lang_pairs.py) to detect this split. So instead of having swe-nor, there would be swe-nob and swe-nno displayed (connected seamlessly with other nob-* and nno-* pairs), though the paths between the nodes would each still give information about the swe-nor pair. Implement a solution, trying to make sure it's future-proof (i.e., will work with similar sorts of things in the future). For further information and guidance on this task, you are encouraged to come to our IRC channel. up2015 |
User:Firespeaker User:Francis Tyers User:Unhammer |
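One possible shape of the nor → nob/nno split, sketched under the assumption that the pair list is available as simple (lang1, lang2) tuples; the actual data structures in pairsOut.py / get_all_lang_pairs.py will differ.
<pre>
# Sketch: expand pairs involving the macrolanguage "nor" into "nob" and "nno"
# edges while remembering which repository pair each edge came from, so the
# front-end can still show swe-nor metadata on the swe-nob and swe-nno links.
# A general macrolanguage table keeps this future-proof for similar cases.
MACRO = {"nor": ["nob", "nno"]}  # extend as needed for similar macrolanguages

def expand_pairs(pairs):
    """pairs: iterable of (lang1, lang2); returns a list of edges with provenance."""
    edges = []
    for l1, l2 in pairs:
        for e1 in MACRO.get(l1, [l1]):
            for e2 in MACRO.get(l2, [l2]):
                edges.append({"pair": (e1, e2), "repo_pair": (l1, l2)})
    return edges

if __name__ == "__main__":
    print(expand_pairs([("swe", "nor"), ("nno", "nob")]))
    # -> swe-nob and swe-nno edges, both tagged with repo_pair ("swe", "nor")
</pre>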
quality, code | add support to pairviewer for regional and alternate orthographic modes | Currently in pairviewer, there is no way to detect or display modes like zh_TW. Add support to pairsOut.py / get_all_lang_pairs.py to detect pairs containing abbreviations like this, as well as alternate orthographic modes in pairs (e.g. uzb_Latn and uzb_Cyrl). Also, figure out a way to display these nicely in the pairviewer's front-end. Get creative. I can imagine something like zh_CN and zh_TW nodes that are in some fixed relation to zho (think Mickey Mouse configuration?). Run some ideas by your mentor and implement what's decided on. For further information and guidance on this task, you are encouraged to come to our IRC channel. up2015 |
User:Firespeaker User:Francis Tyers |
code | Function that counts stems at all revisions of each bidix involving a specific language | Write a function in Python or Ruby that takes a language code as input, queries svn to find all language pairs that involve that language (note that there are both two- and three-letter abbreviations in use), counts the number of stems in the bilingual dictionary at each revision in its history, and outputs all of this data as a simple JSON variable. There are scripts that do different pieces of this already: querying svn, querying svn revisions, and counting bidix stems. For further information and guidance on this task, you are encouraged to come to our IRC channel. up2015 |
User:Firespeaker |
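A rough outline of the pieces such a function needs, assuming the SourceForge SVN layout and treating the number of &lt;e&gt; elements as the stem count; discovering all pairs for a given language code (including the two-/three-letter variants) is left to the existing scripts mentioned above.
<pre>
# Sketch: for one language pair, count bidix <e> entries at every revision of
# the bidix file and emit JSON.  The repository URL and the assumption that
# "number of <e> elements" == "number of stems" are simplifications.
import json
import re
import subprocess

SVN_ROOT = "https://svn.code.sf.net/p/apertium/svn/trunk"  # assumption

def revisions_of(url):
    """Return the revision numbers in which a file was changed, oldest first."""
    log = subprocess.run(["svn", "log", "-q", url],
                         capture_output=True, text=True, check=True).stdout
    return sorted(int(m.group(1)) for m in re.finditer(r"^r(\d+)", log, re.M))

def stems_at(url, rev):
    """Count <e ...> entries in the bidix as checked in at a given revision."""
    xml = subprocess.run(["svn", "cat", "-r", str(rev), url],
                         capture_output=True, text=True, check=True).stdout
    return len(re.findall(r"<e[ >]", xml))

def count_pair(pair):  # e.g. "apertium-kaz-tat"
    # e.g. .../apertium-kaz-tat/apertium-kaz-tat.kaz-tat.dix
    bidix = "%s/%s/%s.%s.dix" % (SVN_ROOT, pair, pair, pair.split("apertium-")[1])
    return {str(r): stems_at(bidix, r) for r in revisions_of(bidix)}

if __name__ == "__main__":
    print(json.dumps({"apertium-kaz-tat": count_pair("apertium-kaz-tat")}, indent=2))
</pre>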
code | Extend visualisation of pairs involving a language in language family visualisation tool | The language family visualisation tool currently has a visualisation of all pairs involving the language. Extend this to include pairs that involve those languages, and so on, until there are no more pairs. This should result in a graph of quite a few languages, with the current language in the middle. Note that if language x is the center, and there are x-y and x-z pairs, but also a y-z pair, this should display the y-z pair with a link, not with an extra z and y node each, connected to the original y and z nodes, respectively. The best way to do this may involve some sort of filtering of the data. For further information and guidance on this task, you are encouraged to come to our IRC channel. up2015 |
User:Firespeaker |
Begiak
Category | Title | Description | Mentors |
---|---|---|---|
quality | Generalise phenny/begiak git plugin | Rename the begiak module to git (instead of github), and test it to make sure it's general enough for at least three common git services (there should already be that many supported, but make sure they all work). For this task, you should fork the bot on github and send a pull request when you're done. For further information and guidance on this task, you are encouraged to come to our IRC channel. |
User:Firespeaker |
quality | fix .randquote | The .randquote function currently fails with "'module' object has no attribute 'Grab'". Fix it. For this task, you should fork the bot on github and send a pull request when you're done. For further information and guidance on this task, you are encouraged to come to our IRC channel. |
User:Firespeaker |
code | phenny/begiak git plugin commit info function | Add a function to the begiak github module to get the status of a commit by reponame and name (similar to what the svn module does), and then find out why commit 6a54157b89aee88511a260a849f104ae546e3a65 in turkiccorpora resulted in the following output, and fix it: Something went wrong: dict_keys(['commits', 'user', 'canon_url', 'repository', 'truncated']). For this task, you should fork the bot on github and send a pull request when you're done. For further information and guidance on this task, you are encouraged to come to our IRC channel. |
User:Firespeaker |
code | make begiak use pm's when doing "follow" | The .follow function currently uses notify, which makes everyone have to see the translations. Make it use PM's (/msg) instead; but if several people follow the same person in the same direction, begiak should not make duplicate translation requests. For this task, you should fork the bot on github and send a pull request when you're done. For further information and guidance on this task, you are encouraged to come to our IRC channel. |
User:Unhammer |
code | make begiak use ISO 639-3 codes for "follow" | The .follow function currently doesn't understand "swe-dan" for language pairs that use ISO 639-1 codes like "sv-da". Make it understand the ISO 639-3 codes too. For this task, you should fork the bot on github and send a pull request when you're done. For further information and guidance on this task, you are encouraged to come to our IRC channel. |
User:Unhammer |
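The core of the fix is a code-normalisation step before the pair lookup. A sketch, with a deliberately tiny mapping table that a real fix would replace with a full ISO 639-1/639-3 table:
<pre>
# Sketch: normalise ISO 639-3 pair names like "swe-dan" to whatever codes the
# installed pair actually uses (here ISO 639-1 "sv-da").  Only a handful of
# codes are listed; a real fix would load a complete 639-1 <-> 639-3 table.
ISO639_3_TO_1 = {"swe": "sv", "dan": "da", "nob": "nb", "nno": "nn", "eng": "en"}

def candidate_pairs(pair):
    """Yield the pair as given plus its 639-1 spelling, e.g. swe-dan -> sv-da."""
    yield pair
    if "-" not in pair:
        return
    src, trg = pair.split("-", 1)
    if src in ISO639_3_TO_1 and trg in ISO639_3_TO_1:
        yield "%s-%s" % (ISO639_3_TO_1[src], ISO639_3_TO_1[trg])

def resolve(pair, installed):
    """Return the name the installed pair list actually uses, or None."""
    return next((p for p in candidate_pairs(pair) if p in installed), None)

if __name__ == "__main__":
    print(resolve("swe-dan", {"sv-da", "nb-nn"}))  # -> "sv-da"
</pre>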
code | phenny/begiak git plugin recent function | Find out why begiak's "recent" function (begiak: recent) returns "ValueError: No JSON object could be decoded (file "/usr/lib/python3.2/json/decoder.py", line 371, in raw_decode)" for one of the repos (no permission) and find a way to fix it so it returns the status instead. For this task, you should fork the bot on github and send a pull request when you're done. For further information and guidance on this task, you are encouraged to come to our IRC channel. |
User:Firespeaker |
code, quality | phenny/begiak svn plugin info function | Find out why begiak's info function ("begiak info [repo] [rev]") doesn't work and fix it. For this task, you should fork the bot on github and send a pull request when you're done. For further information and guidance on this task, you are encouraged to come to our IRC channel. |
User:Firespeaker |
documentation | document any phenny/begiak command that does not have information | Find a command that our IRC bot (begiak) uses that is not documented, and document how it works both on the Begiak wiki page and in the code. This will require you to fork the bot on github and send a pull request when you're done. For further information and guidance on this task, you are encouraged to come to our IRC channel. |
User:Firespeaker |
code | phenny/begiak wiki modules tell result | Make a function for our IRC bot (begiak) that allows someone to point another user to a wiki page (apertium wiki or wikipedia), and have it give them the results (e.g. for mentors to point students to resources). It could be an extra function on the .wik and .awik modules. Make sure it allows for all wiki modes in those modules (e.g., .wik.ru) and is intuitive to use. For this task, you should fork the bot on github and send a pull request when you're done. For further information and guidance on this task, you are encouraged to come to our IRC channel. |
User:Firespeaker |
quality | find content that phenny/begiak wiki modules don't do a good job with | Identify at least 10 pages or sections on Wikipedia or the apertium wiki that the respective begiak module doesn't return good output for. These may include content where there's immediately a subsection, content where the first thing is a table or infobox, or content where the first . doesn't end the sentence. Document generalisable scenarios about what the preferred behaviour would be. For further information and guidance on this task, you are encouraged to come to our IRC channel. |
User:Firespeaker |
code | make phenny/begiak git and svn modules display urls | When a user asks to display revision information, have begiak (our IRC bot) include a link to information on the revision. For example, when displaying information for apertium repo revision r57171, include the url http://sourceforge.net/p/apertium/svn/57171/ , maybe even a shortened version. For this task, you should fork the bot on github and send a pull request when you're done. For further information and guidance on this task, you are encouraged to come to our IRC channel. |
User:Firespeaker |
code | improve phenny/begiak timezone math | Currently begiak (our IRC bot) is able to scrape and use data on timezone names, but it can't do math, e.g. CEST-5, GMT+3, etc. Make it support this. For this task, you should fork the bot on github and send a pull request when you're done. For further information and guidance on this task, you are encouraged to come to our IRC channel. |
User:Firespeaker |
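A sketch of the parsing side only, assuming begiak already has a table of base offsets for timezone abbreviations; the small BASE dict below stands in for its scraped data, and fractional offsets (e.g. +5:30) are ignored for brevity.
<pre>
# Sketch: turn strings like "GMT+3" or "CEST-5" into a UTC offset in hours,
# given a table of base offsets.  BASE is a stand-in for begiak's scraped
# timezone data.
import re

BASE = {"GMT": 0, "UTC": 0, "CET": 1, "CEST": 2, "EST": -5}  # stand-in data

def parse_offset(tz):
    m = re.fullmatch(r"([A-Za-z]+)(?:([+-])(\d{1,2}))?", tz.strip())
    if not m or m.group(1).upper() not in BASE:
        raise ValueError("unknown timezone: %r" % tz)
    offset = BASE[m.group(1).upper()]
    if m.group(2):
        delta = int(m.group(3))
        offset += delta if m.group(2) == "+" else -delta
    return offset

if __name__ == "__main__":
    for tz in ("CEST-5", "GMT+3", "EST"):
        print(tz, "=> UTC%+d" % parse_offset(tz))
</pre>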
code | make timezone conversion for phenny/begiak support city names too | Add city name support for timezone conversion in the time plugin for begiak (our IRC bot). It currently accepts a time in one timezone and a destination timezone, and converts the time, e.g. ".tz 335EST in CET" returns "835CET". But it can't do ".tz 335Indianapolis in CET". You should have it rely on the city support code that's already there. For this task, you should fork the bot on github and send a pull request when you're done. For further information and guidance on this task, you are encouraged to come to our IRC channel. |
User:Firespeaker |
code | make city-name support in the phenny/begiak timezone plugin work better | Find a source that maps city names to timezone abbreviations and have the .tz command for begiak (our IRC bot) scrape and use that data (e.g., ".time Barcelona" should give the current time in CET). The current timezone plugin works, but doesn't support many cities; make it support many more. For this task, you should fork the bot on github and send a pull request when you're done. For further information and guidance on this task, you are encouraged to come to our IRC channel. |
User:Firespeaker |
code | add analysis and generation modes to apertium translation begiak module | Add the ability for the apertium translation module that's part of begiak (our IRC bot) to query morphological analysis and generation modes. For this task, you should fork the bot on github and send a pull request when you're done. For further information and guidance on this task, you are encouraged to come to our IRC channel. |
User:Firespeaker |
code | make begiak's version control monitoring channel specific | Our IRC bot (begiak) currently monitors a series of git and svn repositories. When a commit is made to a repository, the bot displays the commit in all channels. For this task, you should modify both of these modules (svn and git) so that repositories being monitored (listed in the config file) can be specified in a channel-specific way. However, it should default to the current behaviour—channel-specific settings should just override the global monitoring pattern. You should fork the bot on github to work on this task and send a pull request when you're done. For further information and guidance on this task, you are encouraged to come to our IRC channel. |
User:Firespeaker |
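A sketch of the configuration-resolution logic only; the config keys shown are invented for illustration and do not match begiak's actual config format.
<pre>
# Sketch: decide which channels a commit to a given repository should be
# announced in.  Channel-specific lists override the global list; channels
# without an override keep the current behaviour (the global monitoring list).
# The "config" layout below is purely illustrative.
config = {
    "channels": ["#apertium", "#apertium-offtopic"],      # channels the bot joins
    "watched_repos": ["apertium-trunk", "phenny"],        # global monitoring
    "channel_repos": {"#apertium-offtopic": ["phenny"]},  # per-channel override
}

def channels_for(repo, cfg):
    targets = []
    for chan in cfg["channels"]:
        override = cfg.get("channel_repos", {}).get(chan)
        watched = override if override is not None else cfg["watched_repos"]
        if repo in watched:
            targets.append(chan)
    return targets

if __name__ == "__main__":
    print(channels_for("apertium-trunk", config))  # ['#apertium']
    print(channels_for("phenny", config))          # ['#apertium', '#apertium-offtopic']
</pre>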
code | allow admins to modify and delete other people's queues in begiak | Modify the queue module for begiak (our IRC bot) to let admins (as defined by begiak's config file—there should be a function that'll just check if the person issuing a command is an admin) modify and delete queues for other users. For this task, you should fork the bot on github and send a pull request when you're done. For further information and guidance on this task, you are encouraged to come to our IRC channel. |
User:Firespeaker |
quality | Sync begiak with origin and submit PRs back for our changes | For this task, sync begiak with origin, and send them pull requests for our local changes of relevance. The synching will probably get a little messy, and the pull requests should ideally be one PR per feature (if possible). This document may be of use. For further information and guidance on this task, you are encouraged to come to our IRC channel. |
User:Firespeaker, User:Unhammer |
Apertium linguistic data
Category | Title | Description | Mentors |
---|---|---|---|
code, quality | multi Improve the bilingual dictionary of a language pair XX-YY in the incubator by adding 50 word correspondences to it | Languages XX and YY may have rather large dictionaries but a small bilingual dictionary. Add words to the bilingual dictionary and test that the new vocabulary works. Check The OPUS bilingual corpus repository for sentence-aligned corpora such as Tatoeba. Read more... For further information and guidance on this task, you are encouraged to come to our IRC channel. up2015(some) |
User:Mlforcada User:Raveesh User:Vin-ivar User:Aida User:Putti |
code, quality | multi Improve the quality of a language pair XX-YY by adding 50 words to its vocabulary | Add words to language pair XX-YY and test that the new vocabulary works. Check The OPUS bilingual corpus repository for sentence-aligned corpora such as Tatoeba. Read more... For further information and guidance on this task, you are encouraged to come to our IRC channel. up2015(some) |
User:Mlforcada User:ilnar.salimzyan User:Xavivars User:Bech Jimregan User:Unhammer User:Nikant Fulup User:tunedal User:JuanpablYoussefsan User:Firespeaker User:Raveesh User:vin-ivar User:Aida User:Putti |
code, quality | multi=2 Find translation bugs by using LanguageTool, and correct them | The LanguageTool grammar/style checker has great rule sets for Catalan and French. Run it on output from Apertium translation into Catalan/French and fix 5 mistakes. up2015 Read more... | User:Xavivars |
code, quality | multi Add/correct one structural transfer rule to an existing language pair | Add or correct a structural transfer rule to an existing language pair and test that it works. Read more... For further information and guidance on this task, you are encouraged to come to our IRC channel. up2015[some] |
User:Mlforcada User:ilnar.salimzyan User:Unhammer User:Nikant Fulup User:Juanpabl User:Raveesh User:vin-ivar User:Aida |
code, quality | multi Write 10 lexical selection rules for a language pair already set up with lexical selection | Add 10 lexical selection rules to improve the lexical selection quality of a pair and test them to ensure that they work. Read more... For further information and guidance on this task, you are encouraged to come to our IRC channel. up2015 (a few: need to add more LPs) |
User:Mlforcada, User:Francis Tyers User:ilnar.salimzyan User:Unhammer User:Nikant User:Firespeaker User:Putti User:Raveesh User:vin-ivar User:Aida (more mentors welcome) |
code | multi Set up a language pair to use lexical selection and write 5 rules | First set up a language pair to use the new lexical selection module (this will involve changing configure scripts, makefile and modes file). Then write 5 lexical selection rules. Read more... For further information and guidance on this task, you are encouraged to come to our IRC channel. up2015 |
User:Mlforcada, User:Francis Tyers User:Unhammer Fulup User:pankajksharma User:Aida (more mentors welcome) |
code, quality | multi Write 10 constraint grammar rules to repair part-of-speech tagging errors | Find some tagging errors and write 10 constraint grammar rules to fix the errors. Read more... For further information and guidance on this task, you are encouraged to come to our IRC channel. up2015 (some) |
User:Mlforcada, User:Francis Tyers User:ilnar.salimzyan User:Unhammer Fulup User:Aida (more mentors welcome) |
code | multi Set up a language pair such that it uses constraint grammar for part-of-speech tagging | Find a language pair that does not yet use constraint grammar, and set it up to use constraint grammar. After doing this, find some tagging errors and write five rules for resolving them. Read more... For further information and guidance on this task, you are encouraged to come to our IRC channel. up2015 (some) |
User:Mlforcada, User:Francis Tyers User:Unhammer User:Aida |
quality | multi Compare Apertium with another MT system and improve it | This task aims at improving an Apertium language pair when a web-accessible system exists for it on the net. It is particularly good if the system is (approximately) rule-based, such as Lucy, Reverso, Systran or SDL Free Translation. (1) Install the Apertium language pair, ideally such that the source language is a language you know (L₂) and the target language a language you use every day (L₁). (2) Collect a corpus of text (newspaper, Wikipedia), segment it into sentences (using e.g. libsegment-java or a similar processor and an SRX segmentation rule file borrowed from e.g. OmegaT), and put each sentence on a line. (3) Run the corpus through Apertium and through the other system. (4) Select those sentences where both outputs are very similar (e.g., 90% coincident) and decide which one is better. (5) If the other system's output is better than Apertium's, think of what modification could be done for Apertium to produce the same output, and make 3 such modifications. For further information and guidance on this task, you are encouraged to come to our IRC channel. up2015 |
User:Mlforcada Jimregan User:Aida (alternative mentors welcome) |
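For the "select those sentences where both outputs are very similar" step above, a simple character-level similarity measure is usually enough. A sketch using difflib, with the 0.9 threshold mirroring the "90% coincident" figure:
<pre>
# Sketch: read two files of machine-translated sentences (one per line, same
# order) and print the lines where the two systems agree on roughly 90% of the
# text, so a human can judge which output is better.
import difflib
import sys

def similar_lines(path_a, path_b, threshold=0.9):
    with open(path_a, encoding="utf-8") as fa, open(path_b, encoding="utf-8") as fb:
        for n, (a, b) in enumerate(zip(fa, fb), start=1):
            ratio = difflib.SequenceMatcher(None, a.strip(), b.strip()).ratio()
            if ratio >= threshold:
                yield n, ratio, a.strip(), b.strip()

if __name__ == "__main__":
    for n, ratio, a, b in similar_lines(sys.argv[1], sys.argv[2]):
        print("%d\t%.2f\n  apertium: %s\n  other:    %s" % (n, ratio, a, b))
</pre>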
documentation | multi What's difficult about this language pair? | For a language pair that is not in trunk or staging, and for which you know well the two languages involved, write a document describing the main problems that Apertium developers would encounter when developing that language pair (for that, you need to know very well how Apertium works). Note that there may be two such documents, one for A→B and the other for B→A. Prepare it in your user space in the Apertium wiki. It may be uploaded to the main wiki when approved. For further information and guidance on this task, you are encouraged to come to our IRC channel. up2015 |
User:Mlforcada Jimregan Youssefsan User:Aida (alternative mentors welcome) |
research | multi Write a contrastive grammar | Using a grammar book or other resource, document 10 ways in which the grammars of two languages differ, with no fewer than 3 examples of each difference. Put it on the wiki under Language1_and_Language2/Contrastive_grammar. See Farsi_and_English/Pending_tests for an example of a contrastive grammar that a previous GCI student made. For further information and guidance on this task, you are encouraged to come to our IRC channel. up2015 |
User:Francis Tyers User:Firespeaker User:Sereni User:Aida |
research | multi Hand annotate 250 words of running text. | Use apertium annotatrix to hand-annotate 250 words of running text from Wikipedia for a language of your choice. For further information and guidance on this task, you are encouraged to come to our IRC channel. up2015 |
User:Francis Tyers |
research | The most frequent Romance-to-Romance transfer rules | Study the .t1x transfer rule files of Romance language pairs and distill 5-10 rules that are common to all of them, perhaps by rewriting them into some equivalent form. For further information and guidance on this task, you are encouraged to come to our IRC channel. up2015 |
User:Mlforcada |
research | multi Tag and align Macedonian--Bulgarian corpus | Take a Macedonian--Bulgarian corpus, for example SETimes, tag it using the apertium-mk-bg pair, and word-align it using GIZA++. For further information and guidance on this task, you are encouraged to come to our IRC channel. up2015 |
User:Francis Tyers |
code | Write a program to extract Bulgarian inflections | Write a program to extract Bulgarian inflection information for nouns from Wiktionary, see Category:Bulgarian nouns For further information and guidance on this task, you are encouraged to come to our IRC channel. |
User:Francis Tyers |
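A sketch of the data-gathering half of this kind of Wiktionary task, using the standard MediaWiki API on en.wiktionary.org; parsing the inflection templates out of the wikitext is the real work and is only hinted at with a TODO. The same approach applies to the Greek and Faroese extraction tasks further down.
<pre>
# Sketch: list pages in Category:Bulgarian nouns on en.wiktionary.org and fetch
# their wikitext via the MediaWiki API, using only the standard library.
# Extracting the actual inflection tables from the wikitext is left as a stub.
import json
import urllib.parse
import urllib.request

API = "https://en.wiktionary.org/w/api.php"

def api(**params):
    params["format"] = "json"
    url = API + "?" + urllib.parse.urlencode(params)
    with urllib.request.urlopen(url) as r:
        return json.loads(r.read().decode("utf-8"))

def category_members(category, limit=20):
    data = api(action="query", list="categorymembers",
               cmtitle="Category:" + category, cmlimit=limit)
    return [m["title"] for m in data["query"]["categorymembers"]]

def wikitext(title):
    data = api(action="parse", page=title, prop="wikitext")
    return data["parse"]["wikitext"]["*"]

if __name__ == "__main__":
    for title in category_members("Bulgarian nouns", limit=5):
        text = wikitext(title)
        # TODO: locate the noun/declension templates in `text` and extract forms
        print(title, len(text), "characters of wikitext")
</pre>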
quality | multi Improve the quality of a language pair by allowing for alternative translations | Improve the quality of a language pair by (a) detecting 5 cases where the (only) translation provided by the bilingual dictionary is not adequate in a given context, (b) adding the lexical selection module to the language, and (c) writing effective lexical selection rules to exploit that context to select a better translation For further information and guidance on this task, you are encouraged to come to our IRC channel. up2015 |
User:Francis Tyers User:Mlforcada User:Unhammer User:Aida |
code | multi depend Make sure an Apertium language pair does not mess up (X)HTML formatting | (Depends on someone having performed the task 'Examples of files where an Apertium language pair messes up (X)HTML formatting' above). The task: (1) run the file through Apertium and try to identify where the tags are broken or lost: this is most likely to happen in a structural transfer step, so try to identify the rule where the tag is broken or lost. (2) Repair the rule: a conservative strategy is to make sure that all superblanks () are output and are in the same order as in the source file. This may involve introducing new simple blanks () and advancing the output of the superblanks coming from the source. (3) Test again. (4) Submit a patch to your mentor (or commit it if you have already gained developer access). For further information and guidance on this task, you are encouraged to come to our IRC channel. up2015 |
User:Mlforcada (alternative mentors welcome) |
quality | Examples of minimum files where an Apertium language pair messes up wordprocessor (ODT, RTF) formatting | Sometimes, an Apertium language pair takes a valid ODT or RTF source file but delivers an invalid ODT or RTF target file, regardless of translation quality. This can usually be blamed on incorrect handling of superblanks in structural transfer rules. The task: (1) select a language pair (2) Install Apertium locally from the Subversion repository; install the language pair; make sure that it works (3) download a series of ODT or RTF files for testing purposes. Make sure they are opened using LibreOffice/OpenOffice.org (4) translate the valid files with the language pair (5) check if the translated files are also valid ODT or RTF files; select those that aren't (6) find the first source of non-validity and study it, and strip the source file until you just have a small (valid!) source file with some text around the minimum possible example of problematic tags; save each such file and describe the error. For further information and guidance on this task, you are encouraged to come to our IRC channel. up2015 |
User:Mlforcada (alternative mentors welcome) |
code | multi depend Make sure an Apertium language pair does not mess up wordprocessor (ODT, RTF) formatting | (Depends on someone having performed the task 'Examples of files where an Apertium language pair messes up wordprocessor formatting' above). The task: (1) run the file through Apertium and try to identify where the tags are broken or lost: this is most likely to happen in a structural transfer step, so try to identify the rule where the tag is broken or lost. (2) Repair the rule: a conservative strategy is to make sure that all superblanks () are output and are in the same order as in the source file. This may involve introducing new simple blanks () and advancing the output of the superblanks coming from the source. (3) Test again. (4) Submit a patch to your mentor (or commit it if you have already gained developer access). For further information and guidance on this task, you are encouraged to come to our IRC channel. up2015 |
User:Mlforcada (alternative mentors welcome) |
code | multi Start a language pair involving Interlingua | Start a new language pair involving Interlingua using the Apertium new language HOWTO. Interlingua is the second most used "artificial" language (after Esperanto). As Interlingua is basically a Romance language, you can use a Romance language as the other language, and Romance-language dictionaries and rules may be easily adapted. Include at least 50 very frequent words (including some grammatical words) and at least one noun-phrase transfer rule in the ia→X direction. For further information and guidance on this task, you are encouraged to come to our IRC channel. up2015 |
User:Mlforcada Youssefsan (will reach out also to the interlingua community) |
research | Document materials for a language not yet on our wiki | Document materials for a language not yet on our wiki. This should look something like the page on Aromanian—i.e., all available dictionaries, grammars, corpora, machine translators, etc., print or digital, where available, whether Free, etc., as well as some scholarly articles regarding the language, especially if about computational resources. For further information and guidance on this task, you are encouraged to come to our IRC channel. |
User:Firespeaker User:Francis Tyers User:Raveesh User:Aida User:Unhammer |
research | Gujarati Parallel Corpus and Alignment | Collect some parallel corpus for guj-hin, run GIZA++ and verify the alignments. | User:Raveesh |
research | Urdu-Sindhi Bilingual Dictionary | Add words to the bilingual dictionary for Urdu-Sindhi | User:Raveesh
research | Hindi-Sindhi Bilingual Dictionary | Create a bilingual dictionary for Hindi-Sindhi (with at least 20 words in each lexical category, such as nouns, verbs, adjectives, adverbs, conjunctions, etc.) | User:Raveesh
research | Hindi-Gujarati Bilingual Dictionary | create a small bilingual dictionary for Hindi-Gujarati | User:Raveesh |
research | Gujarati morphology | Define some morphological paradigms of Gujarati nouns or verbs (or any other categories) and provide some Gujarati words (around 50) belonging to those paradigms. up2015 | User:Raveesh User:Vin-ivar
research | Marathi evaluation | Manually tag 500 random Marathi words (based on the monodix) for evaluation up2015 | User:Vin-ivar |
research | Swedish tagging evaluation | Run a 500 word Wikipedia page through the Swedish tagger (languages/apertium-swe), and correct the mistakes it makes up2015 | User:Unhammer |
research | Tag and align Albanian--Macedonian corpus | Take an Albanian--Macedonian corpus, for example SETimes, tag it using the apertium-sq-mk pair, and word-align it using GIZA++. For further information and guidance on this task, you are encouraged to come to our IRC channel. |
User:Francis Tyers User:Sereni |
research | Tag and align Albanian--Serbo-Croatian corpus | Take an Albanian--Serbo-Croatian corpus, for example SETimes, tag it using the apertium-sq-sh pair, and word-align it using GIZA++. For further information and guidance on this task, you are encouraged to come to our IRC channel. |
User:Francis Tyers User:Sereni |
research | Tag and align Albanian--Bulgarian corpus | Take an Albanian--Bulgarian corpus, for example SETimes, tag it using the apertium-sq-bg pair, and word-align it using GIZA++. For further information and guidance on this task, you are encouraged to come to our IRC channel. |
User:Francis Tyers User:Sereni |
research | Tag and align Albanian--English corpus | Take an Albanian--English corpus, for example SETimes, tag it using the apertium-sq-en pair, and word-align it using GIZA++. For further information and guidance on this task, you are encouraged to come to our IRC channel. |
User:Francis Tyers User:Sereni |
research | Tag and align Danish--Norwegian corpus | Take a Danish--Norwegian corpus, for example OpenSubtitles (da-nb only), tag it using the apertium-dan-nor pair, and word-align it using GIZA++. For further information and guidance on this task, you are encouraged to come to our IRC channel. up2015 |
User:Unhammer |
research | Tag and align Swedish--Norwegian corpus | Take a Swedish--Norwegian corpus, for example OpenSubtitles (sv-nb only), tag it using the apertium-swe-nor pair, and word-align it using GIZA++. For further information and guidance on this task, you are encouraged to come to our IRC channel. up2015 |
User:Unhammer |
research | Tag and align Macedonian--Serbo-Croatian corpus | Take a Macedonian--Serbo-Croatian corpus, for example SETimes, tag it using the apertium-mk-sh pair, and word-align it using GIZA++. For further information and guidance on this task, you are encouraged to come to our IRC channel. up2015 |
User:Francis Tyers User:Sereni |
research | Tag and align Macedonian--English corpus | Take a Macedonian--English corpus, for example SETimes, tag it using the apertium-mk-en pair, and word-align it using GIZA++. For further information and guidance on this task, you are encouraged to come to our IRC channel. up2015 |
User:Francis Tyers User:Sereni |
research | Tag and align Serbo-Croatian--Bulgarian corpus | Take a Serbo-Croatian--Bulgarian corpus, for example SETimes, tag it using the apertium-sh-bg pair, and word-align it using GIZA++. For further information and guidance on this task, you are encouraged to come to our IRC channel. up2015 |
User:Francis Tyers User:Sereni |
research | Tag and align Serbo-Croatian--English corpus | Take a Serbo-Croatian--English corpus, for example SETimes, tag it using the apertium-sh-en pair, and word-align it using GIZA++. For further information and guidance on this task, you are encouraged to come to our IRC channel. up2015 |
User:Francis Tyers User:Sereni |
research | Tag and align Bulgarian--English corpus | Take a Bulgarian--English corpus, for example SETimes, tag it using the apertium-bg-en pair, and word-align it using GIZA++. For further information and guidance on this task, you are encouraged to come to our IRC channel. up2015 |
User:Francis Tyers User:Sereni |
code | Write a program to extract Greek noun inflections | Write a program to extract Greek inflection information for nouns from Wiktionary, see Category:Greek nouns For further information and guidance on this task, you are encouraged to come to our IRC channel. |
User:Francis Tyers |
code | Write a program to extract Greek verb inflections | Write a program to extract Greek inflection information for verbs from Wiktionary, see Category:Greek verbs For further information and guidance on this task, you are encouraged to come to our IRC channel. |
User:Francis Tyers |
code | Write a program to extract Greek adjective inflections | Write a program to extract Greek inflection information for adjectives from Wiktionary, see Category:Greek adjectives For further information and guidance on this task, you are encouraged to come to our IRC channel. |
User:Francis Tyers |
code | Write a program to convert the Giellatekno Faroese CG to Apertium tags | Write a program which converts the tagset of the Giellatekno Faroese constraint grammar. For further information and guidance on this task, you are encouraged to come to our IRC channel. |
User:Francis Tyers User:Trondtr |
quality | Import nouns from azmorph into apertium-aze | Take the nouns (excluding proper nouns) from https://svn.code.sf.net/p/apertium/svn/branches/azmorph and put them into lexc format in https://svn.code.sf.net/p/apertium/svn/incubator/apertium-aze. For further information and guidance on this task, you are encouraged to come to our IRC channel. |
User:Firespeaker User:Francis Tyers |
quality | Import misc categories from azmorph into apertium-aze | Take the categories that aren't nouns, proper nouns, adjectives, adverbs, and verbs from https://svn.code.sf.net/p/apertium/svn/branches/azmorph and put them into lexc format in https://svn.code.sf.net/p/apertium/svn/incubator/apertium-aze. For further information and guidance on this task, you are encouraged to come to our IRC channel. |
User:Firespeaker User:Francis Tyers |
research | Build a clean Kazakh--English sentence-aligned bilingual corpus for testing purposes using official information from Kazakh websites (minimum 50 bilingual sentences). | Download and align the Kazakh and English versions of the same page, divide them into sentences, and build two plain text files (eng.FILENAME.txt and kaz.FILENAME.txt) with one sentence per line so that they correspond to each other. For further information and guidance on this task, you are encouraged to come to our IRC channel. up2015 |
User:mlforcada User:Sereni User:Firespeaker User:Aida |
research | Build a clean Kazakh--Russian sentence-aligned bilingual corpus for testing purposes using official information from Kazakh websites (minimum 50 bilingual sentences). | Download and align the Kazakh and Russian versions of the same page, divide them into sentences, and build two plain text files (kaz.FILENAME.txt and rus.FILENAME.txt) with one sentence per line so that they correspond to each other. For further information and guidance on this task, you are encouraged to come to our IRC channel. up2015 |
User:mlforcada User:Sereni User:Firespeaker User:Aida |
code | Make a script to generate a table on the wiki of all transducers for a language family | Make a script to go with the other wiki-tools scripts that finds all the apertium single-language transducers for each language in a given family and writes a table describing them to the wiki. The table should be in roughly the same format as that on the Turkic languages or Celtic languages pages, and the script can be based off some of the other scripts. | User:Firespeaker |
code | Combine available wiki-tools scripts into a script that writes a complete language family page | Write a script that generates mostly complete language family pages given dixtable, langtable, and udhrtable, etc. You'll need to combine, and perhaps make more abstract, the existing wiki-tools scripts. | User:Firespeaker |
documentation | Manually spell-check running text in an apertium language of your choice | Take 500 words from a public source of user contributed content (such as a forum or a comments section of a website) in a language supported by Apertium (other than English) and manually correct all orthographical and typographical errors. Allow for some variation in terms of what is proper spelling, such as regional differences, etc. (e.g., in English, both "color" and "colour" are correct, but "colur" isn't). If you've found fewer than 20 errors, do this for another 500 words (and so on) until you've identified at least 20 errors. Submit a link to the source(s) you used, and a list of only the words you've corrected (one entry per line like "computre,computer" in a text file). For further information and guidance on this task, you are encouraged to come to our IRC channel. |
User:Firespeaker User:Francis Tyers User:Ksnmi |
quality | Check the performance of an Apertium spell checker in an apertium language of your choice | Take 500 words from a public source of user contributed content (such as a forum or a comments section of a website) in a language supported by Apertium that you know (other than English) and put it through one of our spell checkers (libreoffice, MS Word, firefox, command line voikko, or the website if that task has been done already). Then make a list of all the words it marked wrong, and for each word indicate whether it is (1) a word that is misspelled (provide the correctly spelled form), (2) a word that is spelled correctly, (3) a form from another language that is never used in the language you are checking. Allow for some variation in terms of what is proper spelling, such as regional differences, etc. (e.g., in English, both "color" and "colour" are correct, but "colur" isn't). If you've found fewer than 20 words that fit the first two categories, do this for another 500 words (and so on) until you've identified at least 20 words of types (1) and (2). Submit a link to the source(s) you used, and a list of only the words the spell checker corrected (one entry per line like (1) "computre,computer", (2) "Computer,CORRECT" (3) "計算機,FOREIGN", in a text file). For further information and guidance on this task, you are encouraged to come to our IRC channel. |
User:Firespeaker User:Francis Tyers User:Ksnmi |
research, documentation | Categorise 5 twol rules | Choose 5 rules from a twol file for a well-developed hfst pair. For each rule, state what kind of process it is (insertion, deletion, symbol change), and whether it's phonologically conditioned or morphologically conditioned. If it's a phonologically conditioned symbol change, write whether one character is changing to another, or whether the rule is part of a one-to-many or many-to-one correspondence. Write your findings on the apertium wiki at Examples_of_twol_rules/Language (replacing "Language" with the name of the language). For further information and guidance on this task, you are encouraged to come to our IRC channel. |
User:Firespeaker |
Data mangling
Category | Title | Description | Mentors |
---|---|---|---|
code | multi Dictionary conversion | Write a conversion module for an existing dictionary for apertium-dixtools. For further information and guidance on this task, you are encouraged to come to our IRC channel. |
User:Firespeaker |
code | multi Dictionary conversion in python | Write a conversion module for an existing free bilingual dictionary to lttoolbox format using Python. For further information and guidance on this task, you are encouraged to come to our IRC channel. |
User:Firespeaker |
code | Write a program to extract Faroese noun inflections | Write a program to extract Faroese inflection information for nouns from Wiktionary, see Category:Faroese nouns For further information and guidance on this task, you are encouraged to come to our IRC channel. |
User:Francis Tyers User:vin-ivar |
code | Write a program to extract Faroese verb inflections | Write a program to extract Faroese inflection information for verbs from Wiktionary, see Category:Faroese verbs For further information and guidance on this task, you are encouraged to come to our IRC channel. |
User:Francis Tyers User:vin-ivar |
code | Write a program to extract Faroese adjective inflections | Write a program to extract Faroese inflection information for adjectives from Wiktionary, see Category:Faroese adjectives For further information and guidance on this task, you are encouraged to come to our IRC channel. |
User:Francis Tyers User:vin-ivar |
code | Bilingual dictionary from word alignments script | Write a script which takes GIZA++ alignments and outputs a .dix file. The script should be able to reduce the number of tags, and also have some heuristics to test if a word is too-frequently aligned. For further information and guidance on this task, you are encouraged to come to our IRC channel. |
User:Francis Tyers User:Ksnmi |
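A sketch of the overall shape, assuming symmetrised alignments in the common "Pharaoh" format (0-0 1-2 ...) plus two tagged, tokenised corpora (tokens like perro&lt;n&gt;&lt;m&gt;), rather than raw GIZA++ A3 output; a crude frequency threshold stands in for the "too-frequently aligned" heuristic the task asks for.
<pre>
# Sketch: turn word alignments into candidate bidix entries.  Assumes three
# parallel files: source tokens, target tokens (one sentence per line, tokens
# separated by spaces, each token like "lemma<n><f>") and Pharaoh-style
# alignments ("0-0 1-2 ...").  Rare pairs are dropped by a simple threshold.
from collections import Counter

def read_lines(path):
    with open(path, encoding="utf-8") as f:
        return [line.split() for line in f]

def aligned_pairs(src_path, trg_path, align_path):
    counts = Counter()
    for src, trg, al in zip(read_lines(src_path), read_lines(trg_path),
                            read_lines(align_path)):
        for link in al:
            i, j = map(int, link.split("-"))
            if i < len(src) and j < len(trg):
                counts[(src[i], trg[j])] += 1
    return counts

def as_dix_entry(src_tok, trg_tok):
    def to_xml(tok):  # "perro<n><m>" -> 'perro<s n="n"/><s n="m"/>'
        lemma, _, tags = tok.partition("<")
        tags = ("<" + tags) if tags else ""
        return lemma + "".join('<s n="%s"/>' % t
                               for t in tags.strip("<>").split("><") if t)
    return "    <e><p><l>%s</l><r>%s</r></p></e>" % (to_xml(src_tok), to_xml(trg_tok))

if __name__ == "__main__":
    counts = aligned_pairs("corpus.src", "corpus.trg", "corpus.align")
    for (s, t), n in counts.most_common():
        if n >= 3:  # crude frequency threshold standing in for better heuristics
            print(as_dix_entry(s, t))
</pre>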
code | multi Scraper for free forum content | Write a script to scrape/capture all freely available content for a forum or forum category and dump it to an xml corpus file or text file. For further information and guidance on this task, you are encouraged to come to our IRC channel. |
User:Firespeaker User:Ksnmi |
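A sketch of the crawling-and-dumping shape using requests and BeautifulSoup; the URL and CSS selector are placeholders, since every forum's HTML is different, and the forum's licence and robots.txt must allow scraping.
<pre>
# Sketch: crawl a handful of forum thread pages and dump the post texts into a
# tiny XML corpus.  BASE_URL and the CSS selectors are placeholders: adapt them
# to the forum you are actually scraping (and respect its robots.txt / licence).
import xml.etree.ElementTree as ET

import requests
from bs4 import BeautifulSoup

BASE_URL = "https://forum.example.com/threads/{}"  # placeholder
THREAD_IDS = [101, 102, 103]                        # placeholder

def posts_in_thread(thread_id):
    html = requests.get(BASE_URL.format(thread_id), timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    for node in soup.select("div.post-content"):    # placeholder selector
        text = node.get_text(" ", strip=True)
        if text:
            yield text

def build_corpus(path):
    corpus = ET.Element("corpus")
    for tid in THREAD_IDS:
        thread = ET.SubElement(corpus, "thread", id=str(tid))
        for text in posts_in_thread(tid):
            ET.SubElement(thread, "post").text = text
    ET.ElementTree(corpus).write(path, encoding="utf-8", xml_declaration=True)

if __name__ == "__main__":
    build_corpus("forum-corpus.xml")
</pre>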
research | multi scrape a freely available dictionary using tesseract | Use tesseract to scrape a freely available dictionary that exists in some image format (pdf, djvu, etc.). Be sure to scrape grammatical information if available, as well as stems (e.g., some dictionaries might provide entries like АЗНА·Х, where the stem is азна), and all possible translations. Ideally it should dump into something resembling bidix format, but if there's no grammatical information and no way to guess at it, some flat machine-readable format is fine. For further information and guidance on this task, you are encouraged to come to our IRC channel. |
User:Firespeaker User:Francis Tyers User:Ksnmi |
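A sketch of the OCR half using the pytesseract wrapper; the traineddata name and the entry regex are placeholders, since each dictionary lays out entries differently and needs its own parsing of grammar labels and senses.
<pre>
# Sketch: OCR scanned dictionary pages with tesseract and pull out entry lines.
# The "rus" traineddata name and the ENTRY regex are placeholders; real
# dictionaries need per-dictionary parsing of grammar labels and senses.
import glob
import re

import pytesseract
from PIL import Image

ENTRY = re.compile(r"^(?P<head>[^\s,]+)\s*,?\s*(?P<rest>.+)$")  # placeholder

def ocr_pages(pattern="pages/*.png", lang="rus"):
    for path in sorted(glob.glob(pattern)):
        yield pytesseract.image_to_string(Image.open(path), lang=lang)

def entries(text):
    for line in text.splitlines():
        m = ENTRY.match(line.strip())
        if m:
            yield m.group("head"), m.group("rest")

if __name__ == "__main__":
    for page in ocr_pages():
        for head, rest in entries(page):
            print("%s\t%s" % (head, rest))
</pre>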
code | script to generate dictionary from IDS data | Write a script that takes two lg_id codes, scrapes those dictionaries at IDS, matches entries, and outputs a dictionary in bidix format For further information and guidance on this task, you are encouraged to come to our IRC channel. |
User:Francis Tyers User:Firespeaker User:Ksnmi |
code | Script to convert rapidwords dictionary to apertium bidix | Write a script (preferably in python3) that converts an arbitrary dictionary from rapidwords.net to apertium bidix format. Keep in mind that rapidwords dictionaries may contain more than two languages, while apertium dictionaries may only contain two languages, so the script should take an argument allowing the user to specify which languages to extract. Ideally, there should also be an argument that lists the languages available. For further information and guidance on this task, you are encouraged to come to our IRC channel. |
User:Firespeaker |
code | Script to convert simple bilingual dictionary entries to lttoolbox-style entries | Write a simple converter for lists of bilingual dictionary entries (one per line) so that one can use the shorthand notation perro.n.m:dog.n to generate lttoolbox-style entries of the form <e><l>perro . You may start from [5] if you wish. For further information and guidance on this task, you are encouraged to come to our IRC channel. up2015 |
User:mlforcada User:Raveesh |
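A sketch of the conversion itself, assuming the shorthand really is lemma.tag.tag:lemma.tag as in the example above; XML escaping, multiwords and translation restrictions (LR/RL) are deliberately ignored.
<pre>
# Sketch: convert shorthand lines like "perro.n.m:dog.n" into lttoolbox
# bilingual-dictionary entries.  Multiwords, LR/RL restrictions and XML
# escaping are deliberately left out.
import sys

def side(spec):  # "perro.n.m" -> 'perro<s n="n"/><s n="m"/>'
    lemma, *tags = spec.split(".")
    return lemma + "".join('<s n="%s"/>' % t for t in tags)

def entry(line):  # "perro.n.m:dog.n" -> "<e><p><l>...</l><r>...</r></p></e>"
    left, right = line.strip().split(":", 1)
    return "<e><p><l>%s</l><r>%s</r></p></e>" % (side(left), side(right))

if __name__ == "__main__":
    for line in sys.stdin:
        if line.strip() and not line.startswith("#"):
            print(entry(line))
</pre>
For example, piping the line perro.n.m:dog.n through the script prints the corresponding &lt;e&gt; element ready to paste into a bidix section.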
code | multi Convert one part-of-speech from SALDO to Apertium .dix format | Take the SALDO lexicon of Swedish and convert one of the classes of parts-of-speech to Apertium's lttoolbox format. (Nouns and verbs already done, see swe/dev.) For further information and guidance on this task, you are encouraged to come to our IRC channel. up2015 |
Francis Tyers, Unhammer, User:Putti |
Misc
Category | Title | Description | Mentors |
---|---|---|---|
documentation | Installation instructions for missing GNU/Linux distributions or versions | Adapt installation instructions for a particular GNU/Linux or Unix-like distribution if the existing instructions in the Apertium wiki do not work or have bugs of some kind. Prepare it in your user space in the Apertium wiki. It may be uploaded to the main wiki when approved. For further information and guidance on this task, you are encouraged to come to our IRC channel. up2015 |
User:Mlforcada User:Firespeaker Wei En (alternative mentors welcome) |
documentation | Installing Apertium in lightweight GNU/Linux distributions | Give instructions on how to install Apertium in one of the small or lightweight GNU/Linux distributions such as Damn Small Linux, in the style of the description for Apertium on SliTaz, so that it may be used on older machines. For further information and guidance on this task, you are encouraged to come to our IRC channel. up2015 |
User:Mlforcada User:Bech Youssefsan Wei En (alternative mentors welcome) |
documentation | Video guide to installation | Prepare a screencast or video about installing Apertium; make sure it uses a format that may be viewed with Free software. When approved by your mentor, upload it to Youtube, making sure that you use the HTML5 format which may be viewed by modern browsers without having to use proprietary plugins such as Adobe Flash. An example may be found here. For further information and guidance on this task, you are encouraged to come to our IRC channel. up2015 |
User:Mlforcada User:Firespeaker Wei En (alternative mentors welcome) |
documentation | Apertium in 5 slides | Write a 5-slide HTML presentation (only needing a modern browser to be viewed, and ready to be effectively "karaoked" by someone else in 5 minutes or less: you can prove this with a screencast) in the language in which you write most fluently, which describes Apertium, how it works, and what makes it different from other machine translation systems. For further information and guidance on this task, you are encouraged to come to our IRC channel. up2015 |
User:Mlforcada User:Firespeaker Wei En (alternative mentors welcome) |
documentation | Improved "Become a language-pair developer" document | Read the document Become_a_language_pair_developer_for_Apertium and think of ways to improve it (don't do this if you have not done any of the language pair tasks). Send comments to your mentor and/or prepare it in your user space in the Apertium wiki. There will be a chance to change the document later in the Apertium Wiki. For further information and guidance on this task, you are encouraged to come to our IRC channel. |
User:Mlforcada User:Bech User:Firespeaker |
documentation | An entry test for Apertium | Write 20 multiple-choice questions about Apertium. Each question will give 3 options of which only one is true, so that we can build an "Apertium exam" for future GSoC/GCI/developers. Optionally, add an explanation for the correct answer. For further information and guidance on this task, you are encouraged to come to our IRC channel. |
User:Mlforcada |
code | Apertium development on Windows | The Apertium on Windows guide is severely outdated; developers tend to use a VirtualBox (users have a nice GUI). But some developers might want to use their Windows tools and environment. Go through the guide to install Apertium on Windows, updating the guide where things have changed. For further information and guidance on this task, you are encouraged to come to our IRC channel. |
User:Francis Tyers |
code | Light Apertium bootable ISO for small machines | Using Damn Small Linux or SliTaz or a similar lightweight GNU/Linux, produce the minimum-possible bootable live ISO or live USB image that contains the OS, minimum editing facilities, Apertium, and a language pair of your choice. Make sure no package that is not strictly necessary for Apertium to run is included. For further information and guidance on this task, you are encouraged to come to our IRC channel. |
User:Mlforcada User:Firespeaker Wei En (alternative mentors welcome) |
code | Apertium in XLIFF workflows | Write a shell script and (if possible, using the filter definition files found in the documentation) a filter that takes an XLIFF file such as the ones representing a computer-aided translation job and populates it with translations of all segments that are not yet translated, marking them clearly as machine-translated. For further information and guidance on this task, you are encouraged to come to our IRC channel. |
User:Mlforcada User:Espla User:Fsanchez (alternative mentors welcome) |
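A sketch of the script half in Python, assuming XLIFF 1.2, plain-text segments and an installed apertium command; marking filled segments with state="needs-review-translation" is one possible convention for "clearly machine-translated", not a requirement of the task.
<pre>
# Sketch: fill untranslated <trans-unit> elements of an XLIFF 1.2 file with
# Apertium output, marking them so a human translator can spot them.
# Assumes an installed "apertium" command and segments without inline tags.
import subprocess
import sys
import xml.etree.ElementTree as ET

NS = "urn:oasis:names:tc:xliff:document:1.2"
ET.register_namespace("", NS)

def translate(text, pair):
    return subprocess.run(["apertium", pair], input=text, text=True,
                          capture_output=True, check=True).stdout.strip()

def fill(xliff_in, xliff_out, pair):
    tree = ET.parse(xliff_in)
    for unit in tree.iter("{%s}trans-unit" % NS):
        target = unit.find("{%s}target" % NS)
        if target is not None and (target.text or "").strip():
            continue  # already translated
        source = unit.find("{%s}source" % NS)
        if target is None:
            target = ET.SubElement(unit, "{%s}target" % NS)
        target.text = translate(source.text or "", pair)
        target.set("state", "needs-review-translation")  # flag as MT output
    tree.write(xliff_out, encoding="utf-8", xml_declaration=True)

if __name__ == "__main__":
    fill(sys.argv[1], sys.argv[2], sys.argv[3])  # e.g. in.xlf out.xlf en-es
</pre>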
quality | Examples of minimum files where an Apertium language pair messes up (X)HTML formatting | Sometimes, an Apertium language pair takes a valid HTML/XHTML source file but delivers an invalid HTML/XHTML target file, regardless of translation quality. This can usually be blamed on incorrect handling of superblanks in structural transfer rules. The task: (1) select a language pair (2) Install Apertium locally from the Subversion repository; install the language pair; make sure that it works (3) download a series of HTML/XHTML files for testing purposes. Make sure they are valid using an HTML/XHTML validator (4) translate the valid files with the language pair (5) check if the translated files are also valid HTML/XHTML files; select those that aren't (6) find the first source of non-validity and study it, and strip the source file until you just have a small (valid!) source file with some text around the minimum possible example of problematic tags; save each such file and describe the error. For further information and guidance on this task, you are encouraged to come to our IRC channel. |
User:Mlforcada (alternative mentors welcome) |
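One way to automate the translate-and-check loop of the task above for XHTML input, sketched with apertium's html format option and a crude XML well-formedness check; a real (X)HTML validator is stricter and should still be run on the survivors.
<pre>
# Sketch: translate a batch of XHTML files and flag those whose output is no
# longer well-formed XML.  This only catches well-formedness problems, not full
# (X)HTML validity; run the remaining files through a proper validator as well.
import glob
import subprocess
import sys
import xml.etree.ElementTree as ET

def translate_html(path, pair):
    with open(path, encoding="utf-8") as f:
        return subprocess.run(["apertium", "-f", "html", pair], stdin=f,
                              capture_output=True, text=True, check=True).stdout

def main(pair, pattern="*.html"):
    for path in sorted(glob.glob(pattern)):
        out = translate_html(path, pair)
        try:
            ET.fromstring(out)
        except ET.ParseError as err:
            print("BROKEN %s: %s" % (path, err))

if __name__ == "__main__":
    main(sys.argv[1])  # e.g. python3 check_html.py en-es
</pre>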
code | Write a transliteration plugin for MediaWiki | Write a MediaWiki plugin similar in functionality (and perhaps implementation) to the way the Kazakh-language Wikipedia's orthography changing system works (documented last year here). It should be able to be directed to use any arbitrary mode from an apertium mode file installed in a pre-specified path on a server. For further information and guidance on this task, you are encouraged to come to our IRC channel. |
User:Firespeaker |
research | multi train tesseract on a language with no available tesseract data | Train tesseract (the OCR software) on a language that it hasn't previously been trained on. We're especially interested in languages with some coverage in apertium. We can provide images of text to train on. For further information and guidance on this task, you are encouraged to come to our IRC channel. up2015 |
User:Firespeaker, User:Unhammer |
research | using language transducers for predictive text on Android | Investigate what it would take to add some sort of plugin to existing Android predictive text / keyboard framework(s?) that would allow lttoolbox (or hfst? or libvoikko?) transducers to be used for predicting text and/or gesture typing (swipe typing). Document your findings on the apertium wiki. For further information and guidance on this task, you are encouraged to come to our IRC channel. |
User:Firespeaker |
research | research gesture typing back end for Android | Research and document on apertium's wiki how recent versions of Android's built-in keyboard interface to a spelling dictionary to guess words with gesture typing. You should state in some combination of broad and specific terms what steps would be needed to connect this to a custom back end, e.g. how it could call some other program that looked up words for a given language (e.g., a keyboard layout which currently does not have an Android-supported gesture keyboard). For further information and guidance on this task, you are encouraged to come to our IRC channel. |
User:Firespeaker |
research | multi identify 75 substitutions for conversion from colloquial Finnish to book Finnish | Colloquial Finnish can be written and pronounced differently to book Finnish (e.g. "ei oo" = "ei ole"; "mä oon" = "minä olen"). The objective of this task is to come up with 75 examples of differences between colloquial Finnish and book Finnish. For further information and guidance on this task, you are encouraged to come to our IRC channel. |
User:Francis Tyers User:Inariksit |
research | multi Disambiguate 500 words of Russian text. | The objective of this task is to disambiguate by hand 500 words of text in Russian. You can find a Wikipedia article you are interested in, or you can be assigned one. You will be given the output of a morphological analyser for Russian, and your task is to select the most adequate analysis in context. For further information and guidance on this task, you are encouraged to come to our IRC channel. |
User:Francis Tyers User:Beboppinbobby User:Sereni |
research, quality | improvements to lexc plugin for vim | A vim syntax definition file for lexc is presented on the following wiki page: Apertium-specific conventions for lexc#Syntax highlighting in vim. This plugin works, but it has some issues: (1) comments on LEXICON lines are not highlighted as comments, (2) editing lines with comments (or similar) can be really slow, (3) the lexicon a form points at is not highlighted distinctly from the form (e.g., in the line «асқабақ:асқабақ N1 ; ! "pumpkin"», N1 should be highlighted somehow). Modify or rewrite the plugin to fix these issues. For further information and guidance on this task, you are encouraged to come to our IRC channel. |
User:Firespeaker User:vin-ivar User:TommiPirinen |
code | make reproducible builds for core tools | Normally, when you compile software on different machines, the byte-for-byte output will differ, making it hard to verify that the code hasn't been tampered with. With a reproducible build, the output is byte-for-byte equal even though built on different machines. Using https://gitian.org, create reproducible builds of the latest releases of lttoolbox / apertium / apertium-lex-tools / vislcg3. up2015 | User:Unhammer |
code | test and clean up the wx-utf8 script | The script converts text written in WX notation to Devanagari. It should be bug-free, but someone needs to test it with unusual words and fix any bugs. up2015 | User:vin-ivar
code | make improvements to the wx-utf8 script | Add support for other encoding standards and other Indic scripts to the Python script to make it a generic multi-way X-Y transliterator. up2015 | User:vin-ivar |
quality, code | multi fix any open ticket | Fix any open ticket in any of our issues trackers: main, html-tools, begiak. When you claim this task, let your mentor know which issue you plan to work on. | User:Firespeaker User:Unhammer User:Sushain |