Task ideas for Google Code-in (2013)
This is the task ideas page for Google Code-in 2013 (http://www.google-melange.com/gci/homepage/google/gci2013). Here you can find ideas for interesting tasks that will improve your knowledge of Apertium and help you get into the world of open-source development.
For current GCI task ideas, see Task ideas for Google Code-in
The mentors column lists the people you should get in contact with to request further information. Each task is estimated to take an experienced developer at most two hours; however:
- this does not include the time taken to install or set up Apertium.
- this is the time an experienced developer is expected to need; you may find that you spend more time on a task because of the learning curve.
Categories:
- code: Tasks related to writing or refactoring code
- documentation: Tasks related to creating/editing documents and helping others learn more
- research: Tasks related to community management, outreach/marketing, or studying problems and recommending solutions
- quality: Tasks related to testing and ensuring code is of high quality.
- interface: Tasks related to user experience research or user interface design and interaction
You can find descriptions of some of the mentors here: List_of_Apertium_mentors.
Task list
Category | Title | Description | Mentors |
---|---|---|---|
code, quality | multi Improve the quality of a language pair XX-YY by adding 50 words to its vocabulary | Add words to language pair XX-YY and test that the new vocabulary works. Read more... | User:Mlforcada User:ilnar.salimzyan User:Xavivars User:Bech Jimregan User:Unhammer User:Fsanchez User:Nikant Fulup User:Japerez User:tunedal User:Juanpabl Youssefsan User:Firespeaker |
code, quality | multi Add/correct one structural transfer rule to an existing language pair | Add or correct a structural transfer rule to an existing language pair and test that it works. Read more... | User:Mlforcada User:ilnar.salimzyan User:Unhammer User:Nikant Fulup User:Juanpabl |
code, quality | multi Write 10 lexical selection rules for a language pair already set up with lexical selection | Add 10 lexical selection rules to improve the lexical selection quality of a pair and test them to ensure that they work. Read more... | User:Mlforcada, User:Francis Tyers User:ilnar.salimzyan User:Unhammer User:Fsanchez User:Nikant User:Japerez User:Firespeaker (more mentors welcome) |
code | multi Set up a language pair to use lexical selection and write 5 rules | First set up a language pair to use the new lexical selection module (this will involve changing configure scripts, makefile and modes file). Then write 5 lexical selection rules. Read more... | User:Mlforcada, User:Francis Tyers User:Unhammer Fulup (more mentors welcome) |
code, quality | multi Write 10 constraint grammar rules to repair part-of-speech tagging errors | Find some tagging errors and write 10 constraint grammar rules to fix the errors. Read more... | User:Mlforcada, User:Francis Tyers User:ilnar.salimzyan User:Unhammer Fulup (more mentors welcome) |
code | multi Set up a language pair such that it uses constraint grammar for part-of-speech tagging | Find a language pair that does not yet use constraint grammar, and set it up to use constraint grammar. After doing this, find some tagging errors and write five rules for resolving them. Read more... | User:Mlforcada, User:Francis Tyers User:Unhammer |
code | multi Dictionary conversion | Write a conversion module for an existing dictionary for apertium-dixtools. | User:Firespeaker |
code | multi Dictionary conversion in python | Write a conversion module for an existing free bilingual dictionary to lttoolbox format using Python. | User:Firespeaker |
code | Localised available languages function in apertium-apy | Make a new function for apertium-apy: it takes a language code as input, and as output gives the list of available pairs and their translations in the language specified by that code. You will probably need to know JavaScript and Python. | User:Firespeaker User:Unhammer User:Francis Tyers |
code | Language detection in apertium-apy | Make a new function for apertium-apy, that allows the language of some input text to be identified. This function should return a dict of languages and probabilities. For this task you will also need to train models for the language identifier. | User:Firespeaker User:Unhammer User:Francis Tyers |
code | SSL in apertium-apy | Make apertium-apy optionally use SSL. (If you put simple-html on an ssl domain, new browsers won't let you do plaintext/non-ssl ajax). | User:Firespeaker User:Unhammer User:Francis Tyers |
code | libvoikko support for apertium-apy | Write a function for apertium-apy that checks the spelling of an input string and for each word returns whether the word is correct, and if unknown returns suggestions. Whether segmentation is done by the client or by apertium-apy will have to be figured out. You will also need to add scanning for spelling modes to the initialisation section. Try to find a sensible way to structure the requests and returned data with JSON. Add a switch to allow someone to turn off support for this (use argparse set_false). | User:Firespeaker User:Francis Tyers |
code | *-morph and *-gener modes for apertium-apy | Write a function each for apertium-apy that does morphological analysis and morphological generation. You'll also need to add scanning for such modes to the initialisation section. Try to find a sensible way to structure the requests and returned data with JSON. Add a switch to allow someone to turn off support for this (use argparse set_false). | User:Firespeaker User:Francis Tyers User:Unhammer |
code | performance tracking in apertium-apy | Add a way for apertium-apy to keep track of the number of words in the input and the time between sending input to a pipeline and receiving output, for the last n (e.g., 100) requests, and write a function to return the average words per second over some m < n (e.g., 10) requests (a rough sketch appears after the task list). | User:Firespeaker User:Francis Tyers User:Unhammer |
code | apertium-apy gateway | Write an intermediary server that takes apertium-apy requests and forwards them to one server from a list of apertium-apy server:port entries (round-robin style or similar); a sketch appears after the task list. | User:Firespeaker User:Francis Tyers User:Unhammer |
code | apertium-apy gateway server pool management | write a function, called every n requests, that polls the server pool and reorders the list of available servers based on speed (i.e., fastest servers first) | User:Firespeaker User:Francis Tyers User:Unhammer |
documentation, quality | test and document init scripts for apertium-apy | Try setting up apertium-apy to run on startup using Upstart (Ubuntu) using this script, check that it actually starts up on startup and restarts when you kill it. Document how you did it on the wiki | User:Firespeaker User:Francis Tyers User:Unhammer |
documentation, quality | test and document init scripts for apertium-apy | Try setting up apertium-apy to run on startup using systemd (Fedora, Arch Linux, SUSE) using this script, check that it actually starts up on startup and restarts when you kill it. Document how you did it on the wiki | User:Firespeaker User:Francis Tyers User:Unhammer |
documentation, quality | test and document init scripts for apertium-apy | Try setting up apertium-apy to run on startup using inittab e.g. like this, check that it actually starts up on startup and restarts when you kill it. Document how you did it on the wiki | User:Firespeaker User:Francis Tyers User:Unhammer |
code, documentation | cronjob to detect a "hang" for apertium-apy | Write a script that tries to translate something via a local apertium-apy, and if it doesn't respond nicely within a certain amount of time (e.g. curl times out and exits with non-zero status, or the translation is wrong/empty), then "initctl restart apertium-apy" (Upstart) or "systemctl restart apertium-apy" (systemd); post the contents of the script on the wiki and how to set it up. | User:Firespeaker User:Francis Tyers User:Unhammer |
code | make apertium-apy use one lock per pipeline | make apertium-apy use one lock per pipeline, since we don't need to wait for mk-en just because sme-nob is running. | User:Firespeaker User:Francis Tyers User:Unhammer |
code | make voikkospell understand apertium stream format input | Make voikkospell understand apertium stream format input, e.g. ^word/analysis1/analysis2$; voikkospell should only interpret the 'word' part as the form to be spellchecked (a parsing sketch appears after the task list). | User:Francis Tyers User:Firespeaker |
code | make voikkospell return output in apertium stream format | make voikkospell return output suggestions in apertium stream format, e.g. ^correctword$ or ^incorrectword/correct1/correct2$ | User:Francis Tyers User:Firespeaker |
code | libvoikko support for OS X | Make a spell server for OS X's system-wide spell checker to use arbitrary languages through libvoikko. See https://developer.apple.com/library/mac/documentation/Cocoa/Conceptual/SpellCheck/Tasks/CreatingSpellServer.html#//apple_ref/doc/uid/20000770-BAJFBAAH for more information | User:Francis Tyers User:Firespeaker |
documentation | document: setting up libreoffice voikko on Ubuntu/Debian | document how to get libreoffice voikko working with a language on Ubuntu and Debian | User:Francis Tyers User:Firespeaker |
documentation | document: setting up libreoffice voikko on Fedora | document how to get libreoffice voikko working with a language on Fedora | User:Francis Tyers User:Firespeaker |
documentation | document: setting up libreoffice voikko on Windows | document how to get libreoffice voikko working with a language on Windows | User:Francis Tyers User:Firespeaker |
documentation | document: setting up libreoffice voikko on OS X | document how to get libreoffice voikko working with a language on OS X | User:Francis Tyers User:Firespeaker |
documentation | document how to set up libenchant to work with libvoikko | Libenchant is a spellchecking wrapper. Set it up to work with libvoikko, a spellchecking backend, and document how you did it. You may want to use a spellchecking module available in apertium for testing. | User:Francis Tyers User:Firespeaker |
code, interface | geriaoueg hover functionality | firefox/iceweasel plugin which, when enabled, allows one to hover over a word and get a pop-up; interface only. Should be something like [1] or [2] . | User:Francis Tyers User:Firespeaker |
code, interface | geriaoueg hover functionality | chrome/chromium plugin which, when enabled, allows one to hover over a word and get a pop-up; interface only. Should be something like [3] or [4] . | User:Francis Tyers User:Firespeaker User:Japerez |
code | geriaoueg language/pair selection | firefox/iceweasel plugin which queries apertium API for available languages and allows the user to set the language pair in preferences | User:Francis Tyers User:Firespeaker |
code | geriaoueg language/pair selection | chrome/chromium plugin which queries apertium API for available languages and allows the user to set the language pair in preferences | User:Francis Tyers User:Firespeaker User:Japerez |
code | geriaoueg lookup code | firefox/iceweasel plugin which queries apertium API for a word by sending a context (±n words) and the position of the word in the context and gets translation for language pair xxx-yyy | User:Francis Tyers user:Firespeaker |
code | geriaoueg lookup code | chrome/chromium plugin which queries apertium API for a word by sending a context (±n words) and the position of the word in the context and gets translation for language pair xxx-yyy | User:Francis Tyers user:Firespeaker User:Japerez |
code | apertium-apy translation-per-word mode | apertium-apy function/mode that runs analyser (-morph), then returns output of biltrans for each analysis; will need to decide whether it makes sense to return once, as a list, or return multiple times, once for each analysis (probably the former) | User:Francis Tyers User:Firespeaker User:Unhammer |
code | apertium-apy mode for geriaoueg (biltrans in context) | apertium-apy function that accepts a context (e.g., ±n ~words around word) and a position in the context of a word, gets biltrans output on entire context, and returns translation for the word | User:Francis Tyers User:Firespeaker User:Unhammer |
quality | make apertium-quality work with python3.3 on all platforms | migrate apertium-quality away from distribute to newer setuptools so it installs correctly in more recent versions of Python (known incompatible: python3.3 on OS X; known compatible: MacPorts python3.2) | User:Francis Tyers User:Firespeaker |
code | How much of a given sentence pair is explained by Apertium? | Write (in some scripting language of your choice) a command-line program that takes an Apertium language pair, a source-language sentence S, and a target-language sentence T, and outputs the set of pairs of subsegments (s,t) such that s is a subsegment of S, t is a subsegment of T, and t is the Apertium translation of s or vice-versa (a subsegment is a sequence of whole words). A rough sketch appears after the task list. | User:Mlforcada User:Espla User:Fsanchez User:Japerez |
quality | multi Compare Apertium with another MT system and improve it | This task aims at improving an Apertium language pair for which a web-accessible MT system also exists, particularly if that system is (approximately) rule-based, such as Lucy, Reverso, Systran or SDL Free Translation: (1) Install the Apertium language pair, ideally such that the source language is a language you know (L₂) and the target language a language you use every day (L₁). (2) Collect a corpus of text (newspaper, Wikipedia), segment it into sentences (using e.g. libsegment-java or a similar processor and an SRX segmentation rule file borrowed from e.g. OmegaT) and put each sentence on a line. (3) Run the corpus through Apertium and through the other system. (4) Select those sentences where both outputs are very similar (e.g., 90% coincident) and decide which one is better. (5) Where the other system's output is better than Apertium's, think of what modification could be made for Apertium to produce the same output, and make 3 such modifications. | User:Mlforcada Jimregan User:Japerez (alternative mentors welcome) |
documentation | Check that the Apertium guide for Windows users still works | We have an Apertium guide for Windows users, to help them install on Windows. Check that it works for a recent/current version of Windows, and if not, report any bugs you find. | User:Francis Tyers |
documentation | Installation instructions for missing GNU/Linux distributions or versions | Adapt installation instructions for a particular GNU/Linux or Unix-like distribution if the existing instructions in the Apertium wiki do not work or have bugs of some kind. Prepare it in your user space in the Apertium wiki. It may be uploaded to the main wiki when approved. | User:Mlforcada User:Firespeaker (alternative mentors welcome) |
documentation | Installing Apertium in lightweight GNU/Linux distributions | Give instructions on how to install Apertium in one of the small or lightweight GNU/Linux distributions such as Damn Small Linux or SliTaz, so that it may be used on older machines | User:Mlforcada User:Bech Youssefsan (alternative mentors welcome) |
documentation | multi What's difficult about this language pair? | For a language pair that is not in trunk or staging, and whose two languages you know well, write a document describing the main problems that Apertium developers would encounter when developing that language pair (for that, you need to know very well how Apertium works). Note that there may be two such documents, one for A→B and the other for B→A. Prepare it in your user space in the Apertium wiki. It may be uploaded to the main wiki when approved. | User:Mlforcada Jimregan Youssefsan (alternative mentors welcome) |
documentation | Video guide to installation | Prepare a screencast or video about installing Apertium; make sure it uses a format that may be viewed with Free software. When approved by your mentor, upload it to youtube, making sure that you use the HTML5 format which may be viewed by modern browsers without having to use proprietary plugins such as Adobe Flash. | User:Mlforcada User:Firespeaker (alternative mentors welcome) |
documentation | Apertium in 5 slides | Write a 5-slide HTML presentation (only needing a modern browser to be viewed and ready to be effectively "karaoked" by someone else in 5 minutes or less: you can prove this with a screencast) in the language in which you write most fluently, which describes Apertium, how it works, and what makes it different from other machine translation systems. | User:Mlforcada User:Firespeaker User:Japerez (alternative mentors welcome) |
documentation | Improved "Become a language-pair developer" document | Read the document Become_a_language_pair_developer_for_Apertium and think of ways to improve it (don't do this if you have not done any of the language pair tasks). Send comments to your mentor and/or repare it in your user space in the Apertium wiki. There will be a chance to change the document later in the Apertium Wiki. | User:Mlforcada User:Bech User:Firespeaker |
documentation | An entry test for Apertium | Write 20 multiple-choice questions about Apertium. Each question will give 3 options of which only one is true, so that we can build an "Apertium exam" for future GSoC/GCI/developers. Optionally, add an explanation for the correct answer. | User:Mlforcada User:Japerez |
research | multi Write a contrastive grammar | Using a grammar book/resource document 10 ways in which the grammar of two languages differ, with no fewer than 3 examples of each difference. Put it on the wiki under Language1_and_Language2/Contrastive_grammar. See Farsi_and_English/Pending_tests for an example of a contrastive grammar that a previous GCI student made. | User:Francis Tyers User:Firespeaker |
research | multi Hand annotate 250 words of running text. | Use apertium annotatrix to hand-annotate 250 words of running text from Wikipedia for a language of your choice. | User:Francis Tyers |
research | The most frequent Romance-to-Romance transfer rules | Study the .t1x transfer rule files of Romance language pairs and distill 5-10 rules that are common to all of them, perhaps by rewriting them into some equivalent form | User:Mlforcada |
research | multi Tag and align Macedonian--Bulgarian corpus | Take a Macedonian--Bulgarian corpus, for example SETimes, tag it using the apertium-mk-bg pair, and word-align it using GIZA++. | User:Francis Tyers |
code | Write a program to extract Bulgarian inflections | Write a program to extract Bulgarian inflection information for nouns from Wiktionary, see Category:Bulgarian nouns | User:Francis Tyers |
quality | multi Improve the quality of a language pair by allowing for alternative translations | Improve the quality of a language pair by (a) detecting 5 cases where the (only) translation provided by the bilingual dictionary is not adequate in a given context, (b) adding the lexical selection module to the language, and (c) writing effective lexical selection rules to exploit that context to select a better translation | User:Francis Tyers User:Mlforcada User:Unhammer |
quality, code | Get bible aligner working (or rewrite it) | trunk/apertium-tools/bible_aligner.py - Should take two bible translations and output a tmx file with one verse per entry. There is a standard-ish plain-text bible translation format that we have bible translations in, and we have files that contain the names of verses of various languages mapped to English verse names | User:Firespeaker |
research | tesseract interface for apertium languages | Find out what it would take to integrate apertium or voikkospell into tesseract. Document thoroughly available options on the wiki. | User:Firespeaker User:Francis Tyers |
interface | Abstract the formatting for the simple-html interface. | The simple-html interface should be easily customisable so that people can make it look how they want. The task is to abstract the formatting and make one or more new stylesheets to change the appearance. This is basically making a way of "skinning" the interface. | User:Francis Tyers User:Japerez |
interface | simple-html spell-checker interface | Add an enablable spell-checker module to the simple-html interface. Get fancy with jquery/etc. so that e.g., misspelled words are underlined in red and recommendations for each word are given in some sort of drop-down menu. Feel free to implement a dummy function for testing spelling to test the interface until the "simple-html spell-checker code" task is complete. | User:Francis Tyers User:Firespeaker User:Japerez |
code | simple-html spell-checker code | Add code to the simple-html interface that allows spell checking to be performed. Should send entire string, and be able to match each returned result to its appropriate input word. Should also update as new words are typed. | User:Francis Tyers User:Firespeaker User:Japerez |
interface | simple-html morphological analysis/generation interface | Add an enablable morphology module to the simple-html interface. Should accept text and display analysis (to be gotten via code in another task) or accept analyses and return text. Functionality similar to [5], but make interface nicer, and integratable into simple-html | User:Francis Tyers User:Firespeaker User:Japerez |
code | simple-html morphological analysis/generation code | Add code to simple-html to query morphological analysis/generation function of apertium-apy and return results for interface to deal with accordingly | User:Francis Tyers User:Firespeaker User:Unhammer User:Japerez |
interface | simple-html interface behaviour for language guessing | Based on results of language detection function, make simple-html highlight in the menu the n (e.g., 3) most probable languages, and select the most probable. | User:Francis Tyers User:Firespeaker User:Japerez |
code | simple-html function for language guessing | Make simple-html interface not use 2.9MB javascript module for language detection/identification. Instead it should query apertium-apy with text to get a list of languages with probabilities. | User:Francis Tyers User:Firespeaker User:Japerez |
interface? | Update the Apertium guide for Windows users with new language pairs | Make sure that the Apertium guide for Windows users and the Apertium Windows installer is up to date with all the new language pairs. | User:Francis Tyers |
code | Write a program to extract Faroese noun inflections | Write a program to extract Faroese inflection information for nouns from Wiktionary, see Category:Faroese nouns | User:Francis Tyers |
code | Write a program to extract Faroese verb inflections | Write a program to extract Faroese inflection information for verbs from Wiktionary, see Category:Faroese verbs | User:Francis Tyers |
code | Write a program to extract Faroese adjective inflections | Write a program to extract Faroese inflection information for adjectives from Wiktionary, see Category:Faroese adjectives | User:Francis Tyers |
code | Light Apertium bootable ISO for small machines | Using Damn Small Linux or SliTaz or a similar lightweight GNU/Linux, produce the minimum-possible bootable live ISO or live USB image that contains the OS, minimum editing facilities, Apertium, and a language pair of your choice. Make sure no package that is not strictly necessary for Apertium to run is included. | User:Mlforcada User:Firespeaker (alternative mentors welcome) |
code | Apertium in XLIFF workflows | Write a shell script and (if possible, using the filter definition files found in the documentation) a filter that takes an XLIFF file such as the ones representing a computer-aided translation job and populates it with translations of all segments that are not yet translated, marking them clearly as machine-translated. | User:Mlforcada User:Espla User:Fsanchez User:Japerez (alternative mentors welcome) |
quality | Examples of minimum files where an Apertium language pair messes up (X)HTML formatting | Sometimes, an Apertium language pair takes a valid HTML/XHTML source file but delivers an invalid HTML/XHTML target file, regardless of translation quality. This can usually be blamed on incorrect handling of superblanks in structural transfer rules. The task: (1) select a language pair (2) Install Apertium locally from the Subversion repository; install the language pair; make sure that it works (3) download a series of HTML/XHTML files for testing purposes. Make sure they are valid using an HTML/XHTML validator (4) translate the valid files with the language pair (5) check if the translated files are also valid HTML/XHTML files; select those that aren't (6) find the first source of non-validity and study it, and strip the source file until you just have a small (valid!) source file with some text around the minimum possible example of problematic tags; save each such file and describe the error. | User:Mlforcada (alternative mentors welcome) |
code | multi depend Make sure an Apertium language pair does not mess up (X)HTML formatting | (Depends on someone having performed the task 'Examples of files where an Apertium language pair messes up (X)HTML formatting' above). The task: (1) run the file through Apertium and try to identify where the tags are broken or lost: this is most likely to happen in a structural transfer step; try to identify the rule where the tag is broken or lost (2) repair the rule: a conservative strategy is to make sure that all superblanks () are output and are in the same order as in the source file. This may involve introducing new simple blanks () and advancing the output of the superblanks coming from the source. (3) test again (4) Submit a patch to your mentor (or commit it if you have already gained developer access) | User:Mlforcada (alternative mentors welcome) |
quality | Examples of minimum files where an Apertium language pair messes up wordprocessor (ODT, RTF) formatting | Sometimes, an Apertium language pair takes a valid ODT or RTF source file but delivers an invalid ODT or RTF target file, regardless of translation quality. This can usually be blamed on incorrect handling of superblanks in structural transfer rules. The task: (1) select a language pair (2) Install Apertium locally from the Subversion repository; install the language pair; make sure that it works (3) download a series of ODT or RTF files for testing purposes. Make sure they open correctly in LibreOffice/OpenOffice.org (4) translate the valid files with the language pair (5) check if the translated files are also valid ODT or RTF files; select those that aren't (6) find the first source of non-validity and study it, and strip the source file until you just have a small (valid!) source file with some text around the minimum possible example of problematic tags; save each such file and describe the error. | User:Mlforcada (alternative mentors welcome) |
code | multi depend Make sure an Apertium language pair does not mess up wordprocessor (ODT, RTF) formatting | (Depends on someone having performed the task 'Examples of files where an Apertium language pair messes up wordprocessor formatting' above). The task: (1) run the file through Apertium and try to identify where the tags are broken or lost: this is most likely to happen in a structural transfer step; try to identify the rule where the tag is broken or lost (2) repair the rule: a conservative strategy is to make sure that all superblanks () are output and are in the same order as in the source file. This may involve introducing new simple blanks () and advancing the output of the superblanks coming from the source. (3) test again (4) Submit a patch to your mentor (or commit it if you have already gained developer access) | User:Mlforcada (alternative mentors welcome) |
code | multi Start a language pair involving Interlingua | Start a new language pair involving Interlingua using the Apertium new language HOWTO (Interlingua is the second most used "artificial" language, after Esperanto). As Interlingua is basically a Romance language, you can use a Romance language as the other language, and Romance-language dictionaries and rules may be easily adapted. Include at least 50 very frequent words (including some grammatical words) and at least one noun-phrase transfer rule in the ia→X direction. | User:Mlforcada Youssefsan (will reach out also to the interlingua community) |
code | Generating 'machine translation memories' | Write a shell script and (using the filter definition files found in the documentation) a filter that takes a plain text file, segments it into sentences using the program segment and an SRX specification (which can be borrowed from OmegaT) and writes a TMX file in which each segment is paired with its Apertium translation, ready to be used with OmegaT as a "machine translation memory" | User:Mlforcada User:Espla User:Fsanchez User:Japerez User:Firespeaker (alternative mentors welcome) |
code | scraper for all wiktionary pages in a category | a script that returns the urls of all pages in a wiktionary category recursively (e.g., http://en.wiktionary.org/wiki/Category:Bashkir_nouns should also include pages from http://en.wiktionary.org/wiki/Category:Bashkir_proper_nouns ); a sketch using the MediaWiki API appears after the task list | User:Firespeaker User:Francis Tyers |
code | scraper of wiktionary translations between language x and y | a script that for a given wiktionary page (e.g., http://en.wiktionary.org/wiki/key ) returns all available translations between two specified languages, with part of speech and meaning/sense for each | User:Firespeaker User:Francis Tyers |
code | better wikipedia extractor script | Make a single script that performs all the steps listed at Wikipedia Extractor. That is, it should take a wikipedia dump file as input and output a file that is for all intents and purposes identical to what is output by the last step listed on the wiki. There should be no intermediate files stored anywhere, and it should not use any more memory than absolutely necessary, but feel free to use as much of the existing code as you need. You may wish to consult guampa's [much-improved] fork of the WikiExtractor script at [6], though it doesn't do everything itself either. | User:Firespeaker User:Francis Tyers |
research | Document materials for a language not yet on our wiki | Document materials for a language not yet on our wiki. This should look something like the page on Aromanian—i.e., all available dictionaries, grammars, corpora, machine translators, etc., print or digital, where available, whether Free, etc., as well as some scholarly articles regarding the language, especially if about computational resources. | User:Firespeaker User:Francis Tyers |
research | Tag and align Albanian--Macedonian corpus | Take an Albanian--Macedonian corpus, for example SETimes, tag it using the apertium-sq-mk pair, and word-align it using GIZA++. | User:Francis Tyers |
research | Tag and align Albanian--Serbo-Croatian corpus | Take an Albanian--Serbo-Croatian corpus, for example SETimes, tag it using the apertium-sq-sh pair, and word-align it using GIZA++. | User:Francis Tyers |
research | Tag and align Albanian--Bulgarian corpus | Take an Albanian--Bulgarian corpus, for example SETimes, tag it using the apertium-sq-bg pair, and word-align it using GIZA++. | User:Francis Tyers |
research | Tag and align Albanian--English corpus | Take an Albanian--English corpus, for example SETimes, tag it using the apertium-sq-en pair, and word-align it using GIZA++. | User:Francis Tyers |
research | Tag and align Macedonian--Serbo-Croatian corpus | Take a Macedonian--Serbo-Croatian corpus, for example SETimes, tag it using the apertium-mk-sh pair, and word-align it using GIZA++. | User:Francis Tyers |
research | Tag and align Macedonian--English corpus | Take a Macedonian--English corpus, for example SETimes, tag it using the apertium-mk-en pair, and word-align it using GIZA++. | User:Francis Tyers |
research | Tag and align Serbo-Croatian--Bulgarian corpus | Take a Serbo-Croatian--Bulgarian corpus, for example SETimes, tag it using the apertium-sh-bg pair, and word-align it using GIZA++. | User:Francis Tyers |
research | Tag and align Serbo-Croatian--English corpus | Take a Serbo-Croatian--English corpus, for example SETimes, tag it using the apertium-sh-en pair, and word-align it using GIZA++. | User:Francis Tyers |
research | Tag and align Bulgarian--English corpus | Take a Bulgarian--English corpus, for example SETimes, tag it using the apertium-bg-en pair, and word-align it using GIZA++. | User:Francis Tyers |
code | Write a program to extract Greek noun inflections | Write a program to extract Greek inflection information for nouns from Wiktionary, see Category:Greek nouns | User:Francis Tyers |
code | Write a program to extract Greek verb inflections | Write a program to extract Greek inflection information for verbs from Wiktionary, see Category:Greek verbs | User:Francis Tyers |
code | Write a program to extract Greek adjective inflections | Write a program to extract Greek inflection information for adjectives from Wiktionary, see Category:Greek adjectives | User:Francis Tyers |
code | Write a program to convert the Giellatekno Faroese CG to Apertium tags | Write a program which converts the tagset of the Giellatekno Faroese constraint grammar. | User:Francis Tyers User:Trondtr |
research | Categorise Russian nouns | Categorise 150 nouns by inflectional endings for the Russian and Ukrainian pair. | User:Francis Tyers |
research | Categorise Russian adjectives | Categorise 100 adjectives by inflectional endings for the Russian and Ukrainian pair. | User:Francis Tyers User:Trondtr |
research | Categorise Russian verbs | Categorise 150 verbs by inflectional endings for the Russian and Ukrainian pair. | User:Francis Tyers User:Trondtr |
code | Syntax tree visualisation using GNU bison | Write a program which reads a grammar using bison, parses a sentence and outputs the syntax tree as text, or graphViz or something. Some example bison code can be found here. | User:Francis Tyers User:Mlforcada |
documentation | Document how to install WikiBhasha with MediaWiki | WikiBhasha is an extension for MediaWiki to allow translation of content using Microsoft's translator. As the first step to getting it to work with Apertium, we'd like to find out how to install it. | User:Francis Tyers |
code | Apertium plugin for WikiBhasha | Make a plugin for WikiBhasha that can be used to translate content using the apertium API (or apertium-apy), with a way to specify the API url to use in a configuration option. | User:Francis Tyers User:Firespeaker |
code | Make WikiBhasha take content from any language's wikipedia | Modify the code of WikiBhasha so that it can use ("collect") content from an arbitrary language's wikipedia. Currently it only takes data from the English-language wikipedia. | User:Francis Tyers User:Firespeaker |
code | Bilingual dictionary from word alignments script | Write a script which takes GIZA++ alignments and outputs a .dix file. The script should be able to reduce the number of tags, and also have some heuristics to test if a word is too-frequently aligned. | User:Francis Tyers |
code | multi Scraper for free forum content | Write a script to scrape/capture all freely available content for a forum or forum category and dump it to an xml corpus file or text file. | User:Firespeaker |
research | Investigate how orthographic modes on kk.wikipedia.org are implemented | The Kazakh-language wikipedia has a menu at the top for selecting alphabet (Кирил, Latın, توتە - for Cyrillic-, Latin-, and Arabic-script modes). This appears to be some sort of plugin that transliterates the text on the fly. Find out what it is and how it works, and then document it somewhere on the wiki. If this has already been documented elsewhere, point a link to that, but you still should summarise in your own words what exactly it is. | User:Firespeaker |
code | Write a transliteration plugin for mediawiki | Write a plugin similar in functionality (and perhaps implementation) to the way the Kazakh-language wikipedia's orthography changing system works. It should be able to be directed to use any arbitrary mode from an apertium mode file installed in a pre-specified path on a server. | User:Firespeaker |
code | Implement the intersection operator in lttoolbox | Write a method for lttoolbox which can intersect two Transducer classes. | Francis Tyers |
code | Intersection of ATT format transducers | Write a python program to intersect two transducers in ATT format. One transducer will be a morphological analyser and the other a bilingual dictionary. The bilingual dictionary should be considered to be a set of prefixes. | Francis Tyers |
quality | Generalise phenny/begiak git plugin | Rename the module to git (instead of github), and test it to make sure it's general enough for at least three common git services (should already be supported, but double check) | User:Firespeaker |
code | phenny/begiak git plugin commit info function | Add a function to get the status of a commit by reponame and name (similar to what the svn module does), and then find out why commit 6a54157b89aee88511a260a849f104ae546e3a65 in turkiccorpora resulted in the following output, and fix it: Something went wrong: dict_keys(['commits', 'user', 'canon_url', 'repository', 'truncated']) | User:Firespeaker |
code | phenny/begiak git plugin recent function | Find out why the recent function (begiak: recent) returns "urllib.error.HTTPError: HTTP Error 401: UNAUTHORIZED (file "/usr/lib/python3.1/urllib/request.py", line 476, in http_error_default)" for one of the repos and fix it so it returns status instead. | User:Firespeaker |
code | phenny/begiak git plugin status | Add a function that lets anyone (not just admin) get the status of the git event server. | User:Firespeaker |
documentation | Document phenny/begiak git plugin | Document the module: how to use it with each service it supports, and the various ways the module can be interacted with (by administrators and anyone) | User:Firespeaker |
code, quality | phenny/begiak svn plugin info function | Find out why the info function ("begiak info [repo] [rev]") doesn't work and fix it. | User:Firespeaker |
research | multi train tesseract on a language with no available tesseract data | Train tesseract (the OCR software) on a language that it hasn't previously been trained on. We're especially interested in languages with some coverage in apertium. We can provide images of text to train on. | User:Firespeaker User:Francis Tyers |
research | multi scrape a freely available dictionary using tesseract | Use tesseract to scrape a freely available dictionary that exists in some image format (pdf, djvu, etc.). Be sure to scrape grammatical information if available, as well as stems (e.g., some dictionaries might provide entries like АЗНА·Х, where the stem is азна), and all possible translations. Ideally it should dump into something resembling bidix format, but if there's no grammatical information and no way to guess at it, some flat machine-readable format is fine. | User:Firespeaker User:Francis Tyers |
code | make scraper plugin for azadliq.org | Using the directions at Writing a scraper, make an RFERL scraper for azadliq.org, as a file that loops through content like the existing scrp-* files do, plus a class to be included in the scraper_classes file. | User:Firespeaker |
documentation | enhance documentation on RFERL scraper to make it read like a HOWTO guide | Enhance the documentation at Writing a scraper to read/flow more like a HOWTO guide, keeping the current documentation (cleaning/reorganising as needed) as more of a reference guide. It should include information about what needs to be done for RFERL content and non-RFERL content (more general) | User:Firespeaker |
code | Write an aligner for UDHR | Write a script to align two translations of the UDHR (final destination: trunk/apertium-tools/udhr_aligner.py). It should take two of the xml-formatted UDHR translations available from http://www.unicode.org/udhr/index_by_name.html as input and output the aligned texts as a tmx file with one article per entry (a TMX-writing sketch appears after the task list). | User:Firespeaker |
quality | Import nouns from azmorph into apertium-aze | Take the nouns (excluding proper nouns) from https://svn.code.sf.net/p/apertium/svn/branches/azmorph and put them into lexc format in https://svn.code.sf.net/p/apertium/svn/incubator/apertium-aze. | User:Firespeaker User:Francis Tyers |
quality | Import adjectives from azmorph into apertium-aze | Take the adjectives from https://svn.code.sf.net/p/apertium/svn/branches/azmorph and put them into lexc format in https://svn.code.sf.net/p/apertium/svn/incubator/apertium-aze. | User:Firespeaker User:Francis Tyers |
quality | Import adverbs from azmorph into apertium-aze | Take the adverbs from https://svn.code.sf.net/p/apertium/svn/branches/azmorph and put them into lexc format in https://svn.code.sf.net/p/apertium/svn/incubator/apertium-aze. | User:Firespeaker User:Francis Tyers |
quality | Import verbs from azmorph into apertium-aze | Take the verbs from https://svn.code.sf.net/p/apertium/svn/branches/azmorph and put them into lexc format in https://svn.code.sf.net/p/apertium/svn/incubator/apertium-aze. | User:Firespeaker User:Francis Tyers |
quality | Import misc categories from azmorph into apertium-aze | Take the categories that aren't nouns, proper nouns, adjectives, adverbs, and verbs from https://svn.code.sf.net/p/apertium/svn/branches/azmorph and put them into lexc format in https://svn.code.sf.net/p/apertium/svn/incubator/apertium-aze. | User:Firespeaker User:Francis Tyers |
code | script to generate dictionary from IDS data | Write a script that takes two lg_id codes, scrapes those dictionaries at IDS, matches entries, and outputs a dictionary in bidix format | User:Francis Tyers User:Firespeaker |
code | write a browser interface for concordancer | Make an html/javascript interface/front-end for spectie's concordancer, meant for local use. It should have a box to specify the local path of a corpus, as well as a search box. Results should be returned without reloading the page. Ideally, you should use something like bottlepy for this, and convert the concordancer for use as a library that the server can load. Make it streamlined both in look and speed (no bloat, please), and make it load results dynamically (via AJAX or similar). | User:Francis Tyers User:Firespeaker |
code | make concordancer work with output of analyser | Allow spectie's concordancer to accept an optional apertium mode and directory (implement via argparse). When it has these, it should run the corpus through that apertium mode and search against the resulting tags and lemmas as well as the surface forms. E.g., the form алдым might have the analysis ^алдым/алд<n><px1sg><nom>/ал<v><tv><ifi><p1><sg>$ via an apertium mode, so a search for "px1sg" should bring up this word. | User:Francis Tyers User:Firespeaker |
code | regex searching in concordancer | Support searches of regexes, e.g. ".*[кгқғ][ае]н$", in spectie's concordancer. | User:Francis Tyers User:Firespeaker |
code | scraper for all text urls from kumukia.ru/adabiat | Write a scraper that gets the urls of all texts at kumukia.ru/adabiat . It should look into all categories on the navigation bar at the left, and get all urls to texts from each target page. | User:Firespeaker User:Francis Tyers |
code | scraper for all article urls from kumukia.ru/cat-qumuq.html | Write a scraper that recursively gets the urls of all the articles in the categories linked to at kumukia.ru/cat-qumuq.html . It should load each page in each category, and get the link to each article (at the text "далее..." = "further..."). | User:Firespeaker User:Francis Tyers |
code | write a scraper plugin for kumukia.ru/adabiat texts | Write a scraper plugin that extracts the raw text from the texts linked to from kumukia.ru/adabiat . | User:Firespeaker User:Francis Tyers |
code | write a scraper plugin for kumukia.ru/cat-qumuq.html articles | Write a scraper plugin that extracts the raw text from the articles linked to from kumukia.ru/cat-qumuq.html . | User:Firespeaker User:Francis Tyers |
research | apache forwarding to apertium-apy | Find out how to get apache to forward an arbitrary url (e.g., http://example.com/UrlForAPI) to apertium-apy, which is a stand-alone service that runs on an arbitrary port. Document in detail on the wiki. | User:Firespeaker User:Francis Tyers |
quality | fix pairviewer's 2- and 3-letter code conflation problems | pairviewer doesn't always conflate languages that have two codes. E.g. sv/swe, nb/nob, de/deu, da/dan, uk/ukr, et/est, nl/nld, he/heb, ar/ara, eus/eu are each two separate nodes, but should instead each be collapsed into one node. Figure out why this isn't happening and fix it. Also, implement an algorithm to generate 2-to-3-letter mappings for available languages based on having the identical language name in languages.json instead of loading the huge list from codes.json; try to make this as processor- and memory-efficient as possible (a sketch appears after the task list). | User:Firespeaker |
code | add language searching to pairviewer | Add some way to search for language (by name or code) to pairviewer. When your search is matched, it should highlight the node in some clever way (e.g., how OS X dims everything but what you searched for). | User:Firespeaker |
quality | come up with better colours for pairviewer | Pairviewer currently has a set of colours that's semantically annoying. Ideally we would have something clearer. The main idea is that the darker colours represent more-worked-on pairs, and "good" colours (e.g., green) represent more production-ready pairs. The current set of colour scales, instead of relying solely on darkness of colour within each scale, rely on hue also. Make it so that these scales internally rely more exclusively on darkness. Try a few different variants and run them by your mentor, who will have final say as to what's best. (And if you can leave a few variants around that can be switched out easily in the code, that would be good too.) You can merge the 1-9 and 10-99 categories if you want (so you'll only need 4 shades per hue); anything under 100 stems is a very small language pair, and we don't need any more detail than that. | User:Firespeaker |
code | map support for pairviewer ("pairmapper") | Write a version of pairviewer that instead of connecting floating nodes, connects nodes on a map. I.e., it should plot the nodes on an interactive world map (only for languages whose coordinates are provided, in e.g. GeoJSON format), and then connect them with straight lines (as opposed to the current curved lines). Use an open map framework, like Leaflet, Polymaps, or OpenLayers | User:Firespeaker |
code | coordinates for Caucasian languages | Using the map Caucasus-ethnic_en.svg, write a file in GeoJSON (or similar) format that can be loaded by pairmapper (or, e.g., converted to kml and loaded in google maps). The file should contain points that are a geographic "center" (locus) for where each language on that map is spoken. Exclude Kurdish (since it's off the map), and keep Azeri in Azerbaycan and Iran separate. You can use a capital city for bigger, national languages if you'd like (think Paris as a locus for French). A minimal GeoJSON sketch appears after the task list. | User:Firespeaker |
code | coordinates for Central Asian languages | Using the map Central_Asia_Ethnic_en.svg, write a file in GeoJSON (or similar) format that can be loaded by pairmapper (or, e.g., converted to kml and loaded in google maps). The file should contain points that are a geographic "center" (locus) for where each Turkic language on that map (and also Tajik) is spoken. You can use a capital city for bigger, national languages if you'd like (think Paris as a locus for French). | User:Firespeaker |
code | coordinates for Mongolic languages | Using the map Linguistic map of the Mongolic languages.png, write a file in GeoJSON (or similar) format that can be loaded by pairmapper (or, e.g., converted to kml and loaded in google maps). The file should contain points that are a geographic "center" (locus) for where each Mongolic language on that map is spoken. Use the term "Khalkha" (iso 639-3 khk) for "Mongolisch", and find a better map for Buryat. You can use a capital city for bigger, national languages if you'd like (think Paris as a locus for French). | User:Firespeaker |
code | draw languages as areas for pairmapper | Make a map interface that loads data (in e.g. GeoJSON or KML format) specifying areas where languages are spoken, as well as a single-point locus for the language, and displays the areas on the map (something like the way the states are displayed here) with a node with language code (like for pairviewer) at the locus. This should be able to be integrated into pairmapper, the planned map version of pairviewer. | User:Firespeaker |
code | georeference language areas for Tatar, Bashqort, and Chuvash | Using the maps listed here, try to define rough areas for where Tatar, Bashqort, and Chuvash are spoken. These areas should be specified in a file in GeoJSON (or similar) format for use by pairmapper's languages-as-areas plugin. Try to be fairly accurate and detailed. Maps to consult include Tatarsbashkirs1989ru, NarodaCCCP | User:Firespeaker |
code | georeference language areas for North Caucasus Turkic languages | Using the map Caucasus-ethnic_en.svg, write a file in GeoJSON (or similar) format for use by pairmapper's languages-as-areas plugin. The file should contain specifications for the area(s) the following languages are spoken in: Kumyk, Nogay, Karachay, Balkar. There should be a certain level of detail (e.g., don't just make a shape matching Kazakhstan for Kazakh) and accuracy (i.e., don't just put a square over Kazakhstan and call it the area for Kazakh). | User:Firespeaker |
code | georeference language areas for IE and Mongolic Caucasus-area languages | Using the map Caucasus-ethnic_en.svg, write a file in GeoJSON (or similar) format for use by pairmapper's languages-as-areas plugin. The file should contain specifications for the area(s) the following languages are spoken in: Ossetian, Armenian, Kalmyk. There should be a certain level of detail (e.g., don't just make a shape matching Kazakhstan for Kazakh) and accuracy (i.e., don't just put a square over Kazakhstan and call it the area for Kazakh). | User:Firespeaker |
code | georeference language areas for North Caucasus languages | Using the map Caucasus-ethnic_en.svg, write a file in GeoJSON (or similar) format for use by pairmapper's languages-as-areas plugin. The file should contain specifications for the area(s) the following languages are spoken in: Avar, Chechen, Abkhaz, Georgian. There should be a certain level of detail (e.g., don't just make a shape matching Kazakhstan for Kazakh) and accuracy (i.e., don't just put a square over Kazakhstan and call it the area for Kazakh). | User:Firespeaker |
code | georeference language areas for misc Caucasus-area languages | Using the maps Caucasus-ethnic_en.svg and Lezgin_map, write a file in GeoJSON (or similar) format for use by pairmapper's languages-as-areas plugin. The file should contain specifications for the area(s) the following languages are spoken in: Lezgi, Azeri (Azerbaycan), Azeri (Iran), Ingush. There should be a certain level of detail (e.g., don't just make a shape matching Kazakhstan for Kazakh) and accuracy (i.e., don't just put a square over Kazakhstan and call it the area for Kazakh). | User:Firespeaker |
code | georeference language areas for Central Asian languages: Kazakh | Using the map Central_Asia_Ethnic_en.svg, write a file in GeoJSON (or similar) format for use by pairmapper's languages-as-areas plugin. The file should contain specifications for the areas Kazakh is spoken in, with a certain level of detail (e.g., don't just make a shape matching Kazakhstan for Kazakh) and accuracy (i.e., don't just put a square over Kazakhstan and call it the area for Kazakh). | User:Firespeaker |
code | georeference language areas for Central Asian languages: Karakalpak and Kyrgyz | Using the map Central_Asia_Ethnic_en.svg, write a file in GeoJSON (or similar) format for use by pairmapper's languages-as-areas plugin. The file should contain specifications for the areas Karakalpak and Kyrgyz are spoken in, with a certain level of detail (e.g., don't just make a shape matching Kazakhstan for Kazakh) and accuracy (i.e., don't just put a square over Kazakhstan and call it the area for Kazakh). | User:Firespeaker |
code | georeference language areas for Central Asian languages: Uzbek and Uyghur | Using the map Central_Asia_Ethnic_en.svg, write a file in GeoJSON (or similar) format for use by pairmapper's languages-as-areas plugin. The file should contain specifications for the areas Uzbek and Uyghur are spoken in, with a certain level of detail (e.g., don't just make a shape matching Kazakhstan for Kazakh) and accuracy (i.e., don't just put a square over Kazakhstan and call it the area for Kazakh). | User:Firespeaker |
code | georeference language areas for Central Asian languages: Tajik and Turkmen | Using the map Central_Asia_Ethnic_en.svg, write a file in GeoJSON (or similar) format for use by pairmapper's languages-as-areas plugin. The file should contain specifications for the areas Tajik and Turkmen are spoken in, with a certain level of detail (e.g., don't just make a shape matching Kazakhstan for Kazakh) and accuracy (i.e., don't just put a square over Kazakhstan and call it the area for Kazakh). | User:Firespeaker |
code | georeference areas Russian is spoken | Assume areas in Central Asia with any sort of measurable Russian population speak Russian. Use the following maps to create a file in GeoJSON (or similar) format for use by pairmapper's languages-as-areas plugin: Kazakhstan_European_2012_Rus, Ethnicrussians1989ru, Lenguas_eslavas_orientales, NarodaCCCP. Try to cover all the areas where Russian is spoken at least as a major language. | User:Firespeaker |
code | georeference areas Ukrainian and Belorussian are spoken | Use the following maps to create a file in GeoJSON (or similar) format for use by pairmapper's languages-as-areas plugin that defines where Belorussian and Ukrainian are spoken: Lenguas_eslavas_orientales, NarodaCCCP. | User:Firespeaker |
quality, code | split nor into nob and nno in pairviewer | Currently in pairviewer, nor is displayed as a language separately from nob and nno. However, the nor pair actually consists of both an nob and an nno component. Figure out a way for pairviewer (or pairsOut.py / get_all_lang_pairs.py) to detect this split. So instead of having swe-nor, there would be swe-nob and swe-nno displayed (connected seamlessly with other nob-* and nno-* pairs), though the paths between the nodes would each still give information about the swe-nor pair. Implement a solution, trying to make sure it's future-proof (i.e., will work with similar sorts of things in the future). | User:Firespeaker User:Francis Tyers User:Unhammer |
quality, code | add support to pairviewer for regional and alternate orthographic modes | Currently in pairviewer, there is no way to detect or display modes like zh_TW. Add support to pairsOut.py / get_all_lang_pairs.py to detect pairs containing abbreviations like this, as well as alternate orthographic modes in pairs (e.g. uzb_Latn and uzb_Cyrl). Also, figure out a way to display these nicely in the pairviewer's front-end. Get creative. I can imagine something like zh_CN and zh_TW nodes that are in some fixed relation to zho (think Mickey Mouse configuration?). Run some ideas by your mentor and implement what's decided on. | User:Firespeaker User:Francis Tyers |
research | using language transducers for predictive text on Android | Investigate what it would take to add some sort of plugin to existing Android predictive text / keyboard framework(s?) that would allow the use of lttoolbox (or hfst? or libvoikko stuff?) transducers to be used to predict text and/or guess swipes (in "swype" or similar). Document your findings on the apertium wiki. | User:Firespeaker |
research | custom predictive text keyboards for Android | Research and document on apertium's wiki the steps needed to design an application for Android that could load arbitrarily defined / pre-specified keyboard layouts (e.g., say I want to make custom keyboard layouts for Kumyk and Guaraní, and load either one into the same program) as well as either an lttoolbox-format transducer or a file easily generated from one that could be paired with a keyboard layout and used to predict text in that language. | User:Firespeaker |
code | phenny/begiak ethnologue plugin | Make a phenny plugin for use in begiak that looks up language names and codes on ethnologue and returns basic info matching the search: language name, iso 639-3 code, where spoken, total number of speakers, and url to main page on ethnologue (note that irc text is character-limited, so it should be concise, e.g. "12,500,000 native speakers" could be abbreviated as just "12.5M L1"). This should work kind of like the .wik and .iso639 plugins (feel free to steal code from those—note that the .wik/.awik plugins chop text (rather well) to fit in a single irc message). | User:Firespeaker |
quality, code | phenny/begiak url module localisation improvements | Phenny/begiak has a module that detects pasted urls and reports the title of the page. This doesn't work well with non-UTF8-encoded webpages. Fix this so that the titles get properly converted to UTF8 and display as intended. Some titles to test on include Ìàäèåâà - Àâàðñêèé ÿçûê, Ïîèñê â êîðïóñå. Íàöèîíàëüíûé êîðïóñ ðóññêîãî ÿçûêà, Óêðà¿íñüêà ïðàâäà, ×óâàøñêàÿ ðåñïóáëèêàíñêàÿ ãàçåòà «Õûïàð», ÀÎÒ :: Òåõíîëîãèè :: Ðóññêàÿ ìîðôîëîãèÿ | User:Firespeaker User:Francis Tyers User:Trondtr |
quality, code | phenny/begiak mediawiki plugin(s) support for subsections | Add support to the .wik and .awik phenny/begiak modules for subsections. Currently if you do something like ".wik French language#Phonology" it brings up random text from the French_language article; instead it should find the Phonology section and return the first bit of text from it (the same as with just an article). | User:Firespeaker |
code | monodix support for stem-counting script | Add support to the dix stem-counter for monodix format. It currently only works on bidix format. For monodix format, it should count <e> elements that specify lm="something". | User:Firespeaker |
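Example sketches
For the apertium-apy performance-tracking task, the rolling words-per-second bookkeeping could look roughly like the sketch below. The class and method names are made up for illustration; apertium-apy itself may organise this differently.
```python
import collections

class PipelineTimer:
    """Keep (word_count, seconds) for the last n requests of one pipeline."""

    def __init__(self, n=100):
        self.records = collections.deque(maxlen=n)

    def record(self, text, seconds_taken):
        # Count whitespace-separated tokens as a rough word count.
        self.records.append((len(text.split()), seconds_taken))

    def words_per_second(self, m=10):
        """Average speed over the most recent m (<= n) requests."""
        recent = list(self.records)[-m:]
        words = sum(w for w, _ in recent)
        seconds = sum(s for _, s in recent)
        return words / seconds if seconds else 0.0
```
To use it, take the time before sending input to the pipeline and after receiving the output, then pass the input text and the elapsed seconds to record(); words_per_second() gives the rolling average.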
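The apertium-apy gateway task is essentially a round-robin forwarder. A minimal standard-library sketch follows; the backend addresses, ports, and the /translate path are assumptions for illustration, and a real gateway would also forward POST requests and error responses.
```python
import itertools
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical pool of apertium-apy instances to rotate through.
SERVERS = itertools.cycle(['http://localhost:2737', 'http://localhost:2738'])

class GatewayHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Forward the whole request path (e.g. /translate?q=...&langpair=...)
        # to the next backend in the cycle and relay its response body.
        backend = next(SERVERS)
        with urllib.request.urlopen(backend + self.path) as response:
            body = response.read()
        self.send_response(200)
        self.send_header('Content-Type', 'application/json')
        self.end_headers()
        self.wfile.write(body)

if __name__ == '__main__':
    HTTPServer(('', 2740), GatewayHandler).serve_forever()
```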
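For the voikkospell stream-format tasks, the input side amounts to pulling the surface form out of each ^surface/analysis1/analysis2$ lexical unit. A minimal parsing sketch, with an illustrative regular expression rather than anything taken from voikkospell:
```python
import re
import sys

# A lexical unit in the Apertium stream format looks like
# ^surface/analysis1/analysis2$; only the surface form is spellchecked.
LU_RE = re.compile(r'\^([^/$]+)(?:/[^$]*)?\$')

def surface_forms(stream_text):
    """Yield the surface form of every lexical unit in the stream."""
    for match in LU_RE.finditer(stream_text):
        yield match.group(1)

if __name__ == '__main__':
    for form in surface_forms(sys.stdin.read()):
        print(form)
```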
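The "How much of a given sentence pair is explained by Apertium?" task can be approached by enumerating subsegments of the source sentence, translating each one, and checking whether the result is a subsegment of the target sentence. The sketch below covers one direction only and stubs the translation call with the apertium command-line tool; the pair name and the maximum subsegment length are placeholders.
```python
import subprocess

def translate(text, pair='en-es'):
    # Pipe one subsegment through the locally installed Apertium pair.
    proc = subprocess.Popen(['apertium', pair], stdin=subprocess.PIPE,
                            stdout=subprocess.PIPE, universal_newlines=True)
    out, _ = proc.communicate(text)
    return out.strip()

def subsegments(words, max_len=5):
    """All contiguous sequences of whole words, up to max_len words long."""
    for i in range(len(words)):
        for j in range(i + 1, min(i + max_len, len(words)) + 1):
            yield ' '.join(words[i:j])

def explained_pairs(source_sentence, target_sentence, pair='en-es'):
    src, tgt = source_sentence.split(), target_sentence.split()
    tgt_subs = set(subsegments(tgt))
    pairs = []
    for s in subsegments(src):
        t = translate(s, pair)
        if t in tgt_subs:
            pairs.append((s, t))
    return pairs
```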
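The recursive Wiktionary category scraper can be built on the MediaWiki API's list=categorymembers query rather than on screen-scraping. A sketch follows; error handling, rate limiting, and protection against category cycles are left out.
```python
import json
import urllib.parse
import urllib.request

API = 'https://en.wiktionary.org/w/api.php'

def category_members(category):
    """Yield page URLs in a Wiktionary category, recursing into subcategories."""
    params = {'action': 'query', 'list': 'categorymembers', 'format': 'json',
              'cmtitle': category, 'cmlimit': '500', 'cmtype': 'page|subcat'}
    while True:
        url = API + '?' + urllib.parse.urlencode(params)
        with urllib.request.urlopen(url) as response:
            data = json.loads(response.read().decode('utf-8'))
        for member in data['query']['categorymembers']:
            title = member['title']
            if title.startswith('Category:'):
                # Recurse into subcategories such as Category:Bashkir proper nouns.
                for page_url in category_members(title):
                    yield page_url
            else:
                yield 'https://en.wiktionary.org/wiki/' + urllib.parse.quote(title)
        if 'continue' in data:
            params.update(data['continue'])   # follow API paging
        else:
            break

if __name__ == '__main__':
    for page in category_members('Category:Bashkir nouns'):
        print(page)
```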
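Both the Bible aligner and the UDHR aligner end in TMX output. The fragment below shows one way to write aligned entries with ElementTree; the header attributes follow TMX 1.4 as far as I recall it, so double-check them against the spec, and the function name is only illustrative.
```python
import xml.etree.ElementTree as ET

def write_tmx(pairs, src_lang, tgt_lang, path):
    """pairs: iterable of (source_text, target_text) tuples, one per verse/article."""
    tmx = ET.Element('tmx', version='1.4')
    ET.SubElement(tmx, 'header', {
        'creationtool': 'udhr_aligner', 'creationtoolversion': '0.1',
        'segtype': 'block', 'o-tmf': 'plaintext', 'adminlang': 'en',
        'srclang': src_lang, 'datatype': 'plaintext'})
    body = ET.SubElement(tmx, 'body')
    for src_text, tgt_text in pairs:
        tu = ET.SubElement(body, 'tu')
        for lang, text in ((src_lang, src_text), (tgt_lang, tgt_text)):
            tuv = ET.SubElement(tu, 'tuv', {'xml:lang': lang})
            ET.SubElement(tuv, 'seg').text = text
    ET.ElementTree(tmx).write(path, encoding='utf-8', xml_declaration=True)
```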
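For the pairviewer conflation task, the 2-to-3-letter mapping can be rebuilt from languages.json itself by grouping codes that share an identical language name, rather than loading all of codes.json. The shape assumed for languages.json below (a flat code-to-name object) is a guess, so adapt the loading step to the real file.
```python
import collections
import json

def build_code_map(languages_path='languages.json'):
    """Map every code to a canonical code, conflating 2- and 3-letter codes
    that carry the identical language name (e.g. sv and swe -> swe)."""
    with open(languages_path) as f:
        names = json.load(f)   # assumed shape: {"sv": "Swedish", "swe": "Swedish", ...}
    by_name = collections.defaultdict(list)
    for code, name in names.items():
        by_name[name].append(code)
    canonical = {}
    for codes in by_name.values():
        target = max(codes, key=len)   # prefer the 3-letter code as the node id
        for code in codes:
            canonical[code] = target
    return canonical
```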
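All of the coordinate tasks boil down to producing a small GeoJSON FeatureCollection with one Point per language locus, which pairmapper (or any map framework) can load. A minimal sketch follows; the language code, name, and coordinates shown are placeholders, not an answer to any particular task.
```python
import json

# One Feature per language locus; GeoJSON coordinates are [longitude, latitude].
features = [
    {'type': 'Feature',
     'properties': {'code': 'xxx', 'name': 'Some language'},
     'geometry': {'type': 'Point', 'coordinates': [47.5, 43.2]}},
]

collection = {'type': 'FeatureCollection', 'features': features}

with open('language_loci.geojson', 'w') as f:
    json.dump(collection, f, ensure_ascii=False, indent=2)
```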