Talk:Ideas for Google Summer of Code
So, was your organization part of the Google Summer of Code last year too?
- Nope, but we're hoping to be included this year -- Francis Tyers 02:45, 16 March 2008 (UTC)
From old Projects page
Writing extensions to Apertium could be the ideal undergraduate (major) project. Here are some suggestions, along with brief outlines for how you might go about starting it.
A word compounder for Germanic languages
Most Germanic languages have compound words. We can analyse these compounds using LRLM (see Agglutination and compounds), but we cannot generate them without having them in the dictionary (a laborious task). The idea of this project is to create a post-generation module that takes a series of words, e.g. in Afrikaans:
vlote bestorming fase ("naval assault phase")
and turn them into compounds:
We don't want to compound all words, but it might be a good idea to compound those which have been seen before. There are many large wordlists of compound words that could be used for this. If a compound isn't found in the wordlists, some kind of heuristic could be used. We'd probably only want to compound words that are >= 5 characters long.
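A minimal sketch of such a post-generation compounder, assuming a wordlist of attested compounds (the wordlist entry below is invented for illustration; real Germanic compounding may also insert linking morphemes, which this sketch ignores):

```python
MIN_LEN = 5  # only join words at least this many characters long

def compound(words, known_compounds):
    """Greedily join runs of adjacent words (longest match first)
    when the joined form is attested in the wordlist."""
    out = []
    i = 0
    while i < len(words):
        joined = None
        for j in range(len(words), i + 1, -1):  # try the longest span first
            parts = words[i:j]
            if all(len(w) >= MIN_LEN for w in parts):
                candidate = "".join(parts)
                if candidate in known_compounds:
                    joined = candidate
                    i = j
                    break
        if joined is None:
            out.append(words[i])
            i += 1
        else:
            out.append(joined)
    return out

# Hypothetical wordlist entry:
known = {"vlotebestorming"}
compound(["vlote", "bestorming", "fase"], known)
# → ["vlotebestorming", "fase"]  ("fase" is under MIN_LEN, so it is left alone)
```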
Automatic accent and diacritic insertion
One of the problems in machine-translating text in real-time chat environments (and in general) is the lack of accents and diacritic marks. This makes machine translation hard, because without the acute accent (´), traducción becomes the unknown word traduccion.
There is a need for a module for Apertium which would automatically restore the accents/diacritics on words that lack them.
- Simard, Michel (1998). "Automatic Insertion of Accents in French Texts". Proceedings of EMNLP-3. Granada, Spain.
- Mihalcea, Rada F. (2002). "Diacritics Restoration: Learning from Letters versus Learning from Words". Lecture Notes in Computer Science 2276/2002, pp. 96--113.
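As a very rough sketch of the simplest approach (a context-free unigram lookup, much weaker than the letter- and word-level models in the papers above), one could map each stripped form to its most frequent accented form in a corpus; the words and counts below are invented:

```python
import unicodedata
from collections import defaultdict

def strip_diacritics(word):
    """Remove combining marks: 'traducción' -> 'traduccion'."""
    decomposed = unicodedata.normalize("NFD", word)
    return "".join(c for c in decomposed if not unicodedata.combining(c))

def build_restorer(accented_counts):
    """Map each stripped form to its most frequent corpus form.
    `accented_counts` maps word -> corpus frequency."""
    best, best_count = {}, defaultdict(int)
    for word, count in accented_counts.items():
        key = strip_diacritics(word)
        if count > best_count[key]:
            best_count[key] = count
            best[key] = word
    return best

restore = build_restorer({"traducción": 120, "traduccion": 3})
restore["traduccion"]
# → "traducción"
```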
|Porting read more...||4. Entry level||Port Apertium to Windows complete with nice installers and all that jazz. Apertium currently compiles on Windows (see Apertium on Windows)||While we all might use GNU/Linux, there are a lot of people out there who don't; some of them use Microsoft's Windows. It would be nice for these people to be able to use Apertium too.||C++, autotools, experience in programming on Windows.|
|Tree-based transfer read more...||1. Very hard||Create a new XML-based transfer language for tree-based transfer and a prototype implementation, and transfer rules for an existing language pair.||Apertium currently works on finite-state chunking, which works well, but is problematic for less-closely related languages and for getting the final few percent in closely-related languages. A tree-based transfer would allow us to work on real syntactic constituents, and probably simplify many existing pairs. There are some existing non-free implementations. ||XML, Knowledge of parsing, implementation language largely free.|
|Interfaces||4. Entry level||Create plugins or extensions for popular free software applications to include support for translation using Apertium. We'd expect at least Firefox and Evolution (or Thunderbird), but to start with something easier we have half-finished plugins for Pidgin and XChat that could use some love. The more the better! Further ideas on plugins page||Apertium currently runs as a stand-alone translator. It would be great if it were integrated in other free software applications. For example, instead of copy/pasting text out of your email, you could just click a button and have it translated in place. This should use a local installation with optional fallback to the webservice.||Depends on the application chosen, but probably Java, C, C++, Python or Perl.|
|2. Hard||Writing a C++ wrapper around Markus Forsberg's Extract tool (version 2.0) as a library to allow it to be used with Apertium paradigms and TSX files / Constraint grammars as input into its paradigms and constraints.||One of the things that takes a lot of time when creating a new language pair is constructing the monodices. The extract tool can greatly reduce the time this takes by matching lemmas to paradigms based on distribution in a corpus.||Haskell, C++, XML|
|VM for the transfer module read more...||3. Medium||VM for the current transfer architecture of Apertium and for future transfers, pure C++||Define an instruction set for a virtual machine that processes transfer code, then implement a prototype in Python, then port it to C++. The rationale behind this is that XML tree-walking is quite slow and CPU-intensive. In modern (3 or more stage) pairs, transfer takes up most of the CPU. There are other options, like Bytecode for transfer, but we would like something that does not require external libraries and is adapted specifically for Apertium.||Python, C/C++, XML, XSLT, code optimisation, JIT techniques, etc.||Sortiz|
|Linguistically-driven bilingual-phrase filtering for inferring transfer rules||3. Medium||Re-working apertium-transfer-training-tools to filter the set of bilingual phrases automatically obtained from a word-aligned sentence pair by using linguistic criteria.||Apertium-transfer-training-tools is a cool piece of software that generates shallow-transfer rules from aligned parallel corpora. It could greatly speed up the creation of new language pairs by generating rules that would otherwise have to be written by human linguists.||C++, general knowledge of GIZA++, Perl considered a plus.||Jimregan|
|Context-dependent lexicalised categories for inferring transfer rules||2. Hard||Re-working apertium-transfer-training-tools to use context-dependent lexicalised categories in the inference of shallow-transfer rules.||Apertium-transfer-training-tools generates shallow-transfer rules from aligned parallel corpora. It uses a small set of lexicalised categories: categories that are usually involved in lexical changes, such as prepositions, pronouns or auxiliary verbs. Lexicalised categories differ from other categories in that their lemmas are taken into account in the generation of rules.||C++, general knowledge of GIZA++, XML.||Jimregan|
|Corpus-assisted dictionary expansion||4. Entry level||Semi-automatic bilingual word equivalence retrieval from a bitext and a monolingual word list.||Improve an existing Python script to retrieve the best translations (suggestions) of a word (typically an unknown word) given a particular parallel text corpus. Perhaps combine the result with automatic paradigm guessing (also suggestions) to improve the productivity of the lexical work for most contributors.||Python, C/C++, AWK, Bash, perhaps web interface in PHP, Python, Ruby on Rails||Sortiz, Jimregan|
|Detect 'hidden' unknown words read more...||3. Medium||The part-of-speech tagger of Apertium can be modified to work out the likelihood of each 'tag' in a certain context; this can be used to detect missing entries in the dictionary.||Apertium dictionaries may have incomplete entries, that is, surface forms (lexical units as they appear in running texts) for which the dictionary does not provide all the possible lexical forms (consisting of lemma, part-of-speech and morphological inflection information). As surface forms for which there is at least one lexical form cannot be considered unknown words, it is difficult to know whether all lexical forms for a given surface form have been included in the dictionaries or not. This feature would detect possible missing lexical forms for those surface forms in the dictionaries.||C++ if you plan to modify the part-of-speech tagger; whatever you like if rewriting it from scratch.||Felipe Sánchez-Martínez|
|Improvements to target-language tagger training read more...||2. Hard||Modify apertium-tagger-training-tools so that it can deal with n-stage transfer rules when segmenting the input source-language text, and apply a k-best Viterbi pruning approach that does not require computing the a-priori likelihood of every disambiguation path before pruning.||apertium-tagger-training-tools is a program for doing target-language tagger training, meaning it improves POS-tagging performance specifically for the translation task, achieving results with unsupervised training comparable to supervised training. This task would also require switching the default Perl-based language model to either IRSTLM or RandLM (or both!).||C++, XML, XSLT||Felipe Sánchez-Martínez|
|Hybrid MT||2. Hard||Building Apertium-Marclator rule-based/corpus-based hybrids||Both the rule-based machine translation system Apertium and the corpus-based machine translation system Marclator do some kind of chunking of the input as well as use a relatively straightforward left-to-right machine translation strategy. This has been explored before but there are other ways to organize hybridization which should be explored (the mentor is happy to discuss). Hybridization may make it easier to adapt Apertium to a particular corpus by using chunk pairs derived from it.||Knowledge of Java, C++, and scripting languages, and appreciation for research-like coding projects||Mlforcada, Jimregan|
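The Extract-tool task above hinges on matching lemmas to paradigms by their distribution in a corpus. A hypothetical toy version of that scoring (paradigm names, suffixes and corpus counts are all invented for illustration) might look like:

```python
def score_paradigm(stem, suffixes, corpus_counts):
    """Fraction of the paradigm's generated forms attested in the corpus."""
    attested = sum(1 for s in suffixes if corpus_counts.get(stem + s, 0) > 0)
    return attested / len(suffixes)

def best_paradigm(stem, paradigms, corpus_counts):
    """Pick the paradigm whose generated forms are best attested."""
    return max(paradigms,
               key=lambda name: score_paradigm(stem, paradigms[name],
                                               corpus_counts))

# Invented English-like paradigms and corpus counts:
paradigms = {
    "verb-like": ["", "s", "ed", "ing"],
    "noun-like": ["", "s"],
}
counts = {"walk": 10, "walked": 7, "walking": 5}
best_paradigm("walk", paradigms, counts)
# → "verb-like"  (3/4 of its forms are attested, vs 1/2 for "noun-like")
```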
|Improve integration of lttoolbox in libvoikko and libhfst read more...||3. Medium||Dictionaries from lttoolbox can now be used for spellchecking directly with libvoikko (see Spell checking). The idea of this project is to improve the integration: fix bugs, and look at ways of codifying "standard"/"sub-standard" forms in our dictionaries.||Spell checkers can be useful, especially for languages other than English. They are one of the "must have" items of language technology. If we can re-use Apertium data for this purpose it will help both the project (by making the creation of new language pairs more rewarding) and the language communities (by making more useful software).||XML, C++.||Francis Tyers|
|Regular expressions in lt-tmxproc read more...||2. Hard||Adding regex support to lt-tmxproc would maximise the amount of translations we can get from an available TMX.||lt-tmxproc already includes some limited support for making translation units in a TMX file into something of a template, but only for digits. Gintrowicz and Jassem describe an interesting idea, using user-definable regular expressions, to turn items such as dates into templates. lttoolbox already has support for a subset of regular expressions; add a mechanism to allow the user to make use of this, and to include these regular expressions in processing.||C++, Knowledge of FSTs||Jimregan|
|Quality control framework||3. Medium||Write a unified testing framework for released language pairs in Apertium. The system should be able to track both regressions with respect to previous versions, and quality checks with respect to previous quality evaluations.||We are gradually improving our quality control, with (semi-)automated tests, but these are done on the Wiki on an ad-hoc basis. Having a unified testing framework would allow us to be able to more easily track quality improvements over all language pairs, and more easily deal with regressions. See ||PHP or Python||Francis Tyers|
|Tree-based reordering read more...||2. Hard||Currently we have a problem with very distantly related languages that have long-distance constituent reordering. Some languages have dependency parsers which create graphs of words.||The purpose of this task would be to create a module that comes before or after apertium-transfer which reorders the graph.||XML, C++||Francis Tyers|
|Dictionary induction from wikis||3. Medium||Extract dictionaries from linguistic wikis||Wiki dictionaries and encyclopedias (e.g. omegawiki, wiktionary, wikipedia, dbpedia) contain information (e.g. bilingual equivalences, morphological features, conjugations) that could be exploited to speed up the development of dictionaries for Apertium. This task aims at automatically building dictionaries by extracting different pieces of information from wiki structures such as interlingual links, infoboxes and/or from dbpedia RDF datasets.||MySQL, mediawiki syntax, perl, maybe C++ or Java; Java, Scala, RDF, and DBpedia to use DBpedia extraction||Atoral, Jimregan (dbpedia extraction only)|
|Inferring transfer rules with active learning||2. Hard||Re-working apertium-transfer-training-tools to get more general rules. The right level of generalisation can be achieved by asking non-expert users to validate examples.||Apertium-transfer-training-tools generates shallow-transfer rules from aligned parallel corpora. The corpus-extracted rules provide translation quality comparable with that of the expert-written ones. However, human-written rules are more general and easier to maintain. The purpose of this task is to reduce the number of inferred rules while keeping translation quality. Gaps in the information elicited from the parallel corpus are filled with knowledge from non-expert users to achieve the right degree of generalisation.||C++, general knowledge of GIZA++, XML.||Víctor M. Sánchez-Cartagena|
|Active learning to choose among paradigms which share surface forms||2. Hard||Developing an active learning algorithm to choose the most appropriate paradigm (in a monolingual dictionary) for a new word. Previous research work allows us to reduce the problem to a set of paradigms which share surface forms but not lexical forms.||Current research on the addition of entries to a monolingual dictionary by non-expert users has partially solved the problem of choosing the best paradigm for a new word. However, when different paradigms share all the surface forms the problem becomes harder. In such cases, it is necessary to ask the users to validate sentences in which the word to be added acts as different lexical forms. Such sentences may be extracted from a monolingual corpus.||XML, Java, Web technologies.||Víctor M. Sánchez-Cartagena, Miquel Esplà-Gomis|
|Optimising paradigms||3. Medium||Developing a tool to simplify the paradigms in monolingual dictionaries in order to minimise redundancy.||The collaborative improvement of dictionaries can cause some degree of redundancy in paradigms (it is frequent to find paradigms generating a very similar set of lexical forms). Although apertium-dixtools includes an option to remove identical paradigms, this task consists of implementing a tool to reduce the redundancy in paradigms by generating all the lexical forms and restructuring the paradigms.||XML, Java.||Miquel Esplà-Gomis|
|Corpus-based distinction between verb/pronoun combinations in Romance languages||2. Hard||Write a module which learns to distinguish between different translations of pronouns/verbs in Romance languages, e.g. "se come bien", "él se come", etc.||Some constructions are ambiguous in Romance languages (e.g. Spanish). One of these is verb/pronoun combinations; however, with any given verb, some combinations are more likely than others.||C++, XML|
|Corpus-based definiteness transfer||3. Medium||Develop a program that uses information from corpora to improve the transfer of definiteness between language pairs which have it.||Languages treat definiteness differently, some using it more than others. In Apertium we typically just transfer it as is, but this has problems. read more...||C++, XML, Python, linguistics||Francis Tyers|
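To give a flavour of the lt-tmxproc templating idea above, a user-defined regular expression can turn volatile items such as dates into numbered placeholders, so that one translation unit matches many inputs. The date pattern and placeholder syntax here are invented for illustration, not lt-tmxproc's actual format:

```python
import re

DATE = re.compile(r"\b\d{1,2}/\d{1,2}/\d{4}\b")  # e.g. 12/03/2010

def templatise(segment):
    """Replace each date with a numbered placeholder; return the
    template plus the captured values for later re-insertion."""
    values = []
    def repl(match):
        values.append(match.group(0))
        return f"<{len(values)}>"
    return DATE.sub(repl, segment), values

templatise("Delivered on 12/03/2010, signed 14/03/2010.")
# → ("Delivered on <1>, signed <2>.", ["12/03/2010", "14/03/2010"])
```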
Old further reading
- Automated lexical extraction
- Forsberg, M., Hammarström, H. and Ranta, A. (2006). "Morphological Lexicon Extraction from Raw Text Data". FinTAL 2006, LNAI 4139, pp. 488--499.
- Support for agglutinative languages
- Beesley, K. R. and Karttunen, L. (2000). "Finite-State Non-Concatenative Morphotactics". SIGPHON-2000, Proceedings of the Fifth Workshop of the ACL Special Interest Group in Computational Phonology, pp. 1--12.