Task ideas for Google Code-in
Revision as of 12:06, 21 October 2016
This is the task ideas page for Google Code-in. Here you can find ideas for interesting tasks that will improve your knowledge of Apertium and help you get into the world of open-source development.
The people column lists the people you should contact for further information. Each task is estimated at a maximum of 2 hours for an experienced developer; however:
- this does not include the time needed to install and set up Apertium;
- since the estimate assumes an experienced developer, you may find that you spend more time on a task because of the learning curve.
Categories:
- code: Tasks related to writing or refactoring code
- documentation: Tasks related to creating/editing documents and helping others learn more
- research: Tasks related to community management, outreach/marketing, or studying problems and recommending solutions
- quality: Tasks related to testing and ensuring code is of high quality
- interface: Tasks related to user experience research or user interface design and interaction
You can find descriptions of some of the mentors here: List_of_Apertium_mentors.
Task ideas
- Fix a memory leak in matxin-transfer
- Tag text in Apertium format
- Convert Chukchi lexicon to HFST/lexc
  - Nouns
  - Numerals
  - Adjectives
- Make a (web) viewer for parallel treebanks (also for viewing diff annotation for same sentence)
- Write a script to convert a UD treebank for a given language to a format suitable for training the perceptron tagger
- Train the perceptron tagger for a language
- Design an annotation tool for disambiguation
- Design an annotation tool for adding dependencies
- Train lexical selection rules from a large parallel corpus for a language pair
- Document how to set up the experiments for weighted transfer rules
- Improve or port apertium-hun, possibly building on existing resources
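For the UD-treebank conversion task above, a minimal sketch of the idea: read CoNLL-U token rows and emit one tagged line per sentence. The `^form/lemma<tag>$` output shape and the function name `conllu_to_stream` are illustrative assumptions here, not the perceptron tagger's confirmed training format; check the tagger's documentation for the exact format it expects.

```python
# Sketch: convert CoNLL-U input into a simple per-sentence tagged stream.
# NOTE: the ^form/lemma<upos>$ output format is an assumption for
# illustration, not the tagger's verified training format.

def conllu_to_stream(conllu_text):
    """Yield one space-separated tagged line per CoNLL-U sentence."""
    sent = []
    for line in conllu_text.splitlines():
        line = line.strip()
        if not line:                      # blank line ends a sentence
            if sent:
                yield " ".join(sent)
                sent = []
            continue
        if line.startswith("#"):          # sentence-level comment
            continue
        cols = line.split("\t")
        if len(cols) != 10 or not cols[0].isdigit():
            continue                      # skip multiword-token / empty-node rows
        form, lemma, upos = cols[1], cols[2], cols[3]
        sent.append("^%s/%s<%s>$" % (form, lemma, upos.lower()))
    if sent:                              # flush a final sentence with no trailing blank
        yield " ".join(sent)

sample = "\n".join([
    "# sent_id = 1",
    "1\tDogs\tdog\tNOUN\tNNS\t_\t2\tnsubj\t_\t_",
    "2\tbark\tbark\tVERB\tVBP\t_\t0\troot\t_\t_",
    "",
])
print(next(conllu_to_stream(sample)))  # → ^Dogs/dog<noun>$ ^bark/bark<verb>$
```

A real conversion script would also need to map UD feature bundles (the FEATS column) onto the tag set used by the language pair, which this sketch ignores.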