Ideas for Google Summer of Code
Revision as of 10:01, 10 March 2008

This is the ideas page for Google Summer of Code. Here you can find ideas for interesting projects that would make Apertium more useful for people and improve or expand our functionality. If you have an idea, please add it below. If you think you could mentor someone in a particular area, or simply have interests or ideas for one, add your name to "Interested parties" using ~~~

You might also want to take a look at some open bugs.

Difficulty = 1 (Very Hard) ... 4 (Easy)
{|
! Task !! Difficulty !! Description !! Rationale !! Requirements !! Interested parties
|-
| '''Improve interoperability''' || 3. Medium || Either modify Apertium to accept different formats, modify the other tools to accept the Apertium format, or write some kind of generic "glue" code that converts between them. || There is a lot of great free software that could be used with the Apertium engine. For example, the Stuttgart FST (SFST)[1] tools for morphological analysis/generation could be used in place of lttoolbox, and the constraint grammars from VISL[2] could be used in place of apertium-tagger. Unfortunately these, along with many other tools, have incompatible input/output formats. || C, C++, XML || Francis Tyers, Jimregan
|-
| '''Accent and diacritic restoration''' || 3. Medium || Create an optional module to restore diacritics and accents on input text, and integrate it into the Apertium pipeline (a minimal illustrative sketch of one possible approach is given below the table). || Many languages use diacritics and accents in normal writing, and Apertium is designed to use these; however, in some settings, for example instant messaging, IRC and web searches, they are often left out or untyped. This causes problems because, for the engine, ''traduccion'' is not the same as ''traducción''. || C, C++, XML, familiarity with linguistic issues || Francis Tyers
|-
| '''Handling texts without accents or diacritics''' || 3. Medium || Modify the linguistic data in an Apertium language-pair package so that it can accept text without accents or diacritics (or only partially diacriticized text). This task may constitute an alternative solution to the problem in the previous task. || See: Accent and diacritic restoration || Perl or Python, familiarity with linguistic issues. || Mlforcada, Jimregan
|-
| '''Porting''' || 3. Medium || Port Apertium to Windows and Mac OS X, complete with nice installers and all that jazz. Apertium currently compiles on Windows (see Apertium on Windows), but we'd like to see it compile with a free tool-chain (MinGW, etc.). || While we all might use GNU/Linux, there are a lot of people out there who don't; some of them use Microsoft's Windows, others use Mac OS. It would be nice for these people to be able to use Apertium too. || C++, autotools, experience in programming on Windows or Mac. See bugs: #27 and #32 || Francis Tyers, Jimregan
|-
| '''Lexical selection''' || 1. Very High || Write a prototype lexical selection module for Apertium using a combination of rule-based and statistical approaches, or perhaps a purely statistical approach (a toy sketch of the statistical side is given below the table). || Lexical selection is the task of choosing a sense (meaning) for a word out of a number of possible senses (related to word sense disambiguation). When languages are close, they often share semantic ambiguity; when they are further apart, they do not. For example, Spanish "estación" can be either "station", "season" or "resort" in English, and lexical selection is the task of choosing the right one. || C++, XML, good knowledge of statistics. || Jimregan, Fsanchez
|-
| '''Interfaces''' || 4. Low || Create plugins or extensions for popular free software applications to include support for translation using Apertium. We'd expect at least OpenOffice and Firefox, but to start with something easier we have half-finished plugins for Gaim and XChat that could use some love. The more the better! || Apertium currently runs as a stand-alone translator. It would be great if it were integrated into other free software applications, so that, for example, instead of copy/pasting text out of your email, you could just click a button and have it translated in place. || Depends on the application chosen, but probably C, C++, Python or Perl. || Francis Tyers, Jimregan
|-
| '''Sliding window based part-of-speech tagging''' || 2. High || Writing a complete drop-in replacement for the Apertium part-of-speech tagger based on the sliding-window part-of-speech tagger of Sánchez-Villamil et al. (2004) [http://www.dlsi.ua.es/~mlf/docum/sanchezvillamil04p.pdf] and Sánchez-Villamil et al. (2005) [http://www.dlsi.ua.es/~mlf/docum/sanchezvillamil05p.pdf] (Apertium currently uses hidden Markov models). The specification file should be as similar as possible to the one used now (a toy sketch of the sliding-window idea is given below the table). || The taggers described are very intuitive, may easily be turned into a compact set of finite-state rules (no need to handle probabilities after training), and may be trained in an unsupervised manner. Depending on the language, the sliding window of words to be analyzed may be configured to suit it. || C or C++, basic knowledge of the grammar of the language(s) involved || Mlforcada
|-
| '''Linguistically-driven filtering of the bilingual phrases used to infer shallow-transfer rules''' || 3. Medium || Re-working apertium-transfer-training-tools to filter the set of bilingual phrases automatically obtained from a word-aligned sentence pair by using linguistic criteria. This may also involve porting it to use Unicode. || Apertium-transfer-training-tools is a cool piece of software that generates shallow-transfer rules from aligned parallel corpora. It could greatly speed up the creation of new language pairs by generating rules that would otherwise have to be written by human linguists. || C++, general knowledge of GIZA++, Perl considered a plus. || Fsanchez
|-
| '''Use of context-dependent lexicalized categories in the inference of shallow-transfer rules''' || 2. High || Re-working apertium-transfer-training-tools to use context-dependent lexicalized categories in the inference of shallow-transfer rules. This may also involve porting it to use Unicode. || Apertium-transfer-training-tools generates shallow-transfer rules from aligned parallel corpora. It uses a small set of lexicalized categories, i.e. categories that are usually involved in lexical changes, such as prepositions, pronouns or auxiliary verbs. Lexicalized categories differ from the other categories in that their lemmas are taken into account in the generation of rules. || C++, general knowledge of GIZA++, XML. || Fsanchez
|-
| '''Automated lexical extraction''' || 2. High || Writing a C++ wrapper around Markus Forsberg's Extract tool (version 2.0) as a library, to allow it to be used with Apertium paradigms and TSX files as input to its paradigms and constraints. || One of the things that takes a lot of time when creating a new language pair is constructing the monodices. The Extract tool can greatly reduce the time this takes by matching lemmas to paradigms based on their distribution in a corpus. || Haskell, C++, XML || Francis Tyers
|-
| '''Generating grammar checkers''' || 3. Medium || The data that come with Apertium (morphological analysers) could be used to create grammar checkers. This task would be to work on an automatic converter from Apertium formats to other popular grammar checker formats, or alternatively to work on a standalone grammar checker, maybe using something like LanguageTool. || Grammar checkers can be useful, even more so for languages other than English. They are one of the "must have" items of language technology. If we can re-use Apertium data for this purpose it will help both the project (by making creating new language pairs more rewarding) and the language communities (by making more useful software). || XML, whatever programming language and natural language are used for testing. || Francis Tyers, Jimregan
|-
| '''Support for agglutinative languages''' || 2. High || Propose a new dictionary format that is suited to languages with agglutinative morphology, and modify the morphological compiler/analyser accordingly. || Our dictionary format isn't particularly suited to agglutinative languages or to those with complex morphologies. There are many languages of these types in the world, so it would be good to support them better. See also: Agglutination || C++, XML, knowledge of a language with these features (e.g. Finnish, Basque, Turkish, Estonian, Aymara, etc.) || Sortiz
|-
| '''Complex multiwords''' || 2. High || Write a bidirectional module for specifying complex multiword units, for example ''dirección general'' and ''zračna luka''. See Multiwords for more information. || Although in the Romance languages it is not a big problem, as soon as you get to languages with cases (e.g. Serbo-Croatian, Slovenian, German, etc.) the problem arises that you can't define a multiword of the form adj nom, because the adjective has a lot of inflection. || C, C++, XML || Francis Tyers, Jimregan
|-
| '''Adopt a language pair''' || 4. Low || Take on an orphaned language pair and bring it up to release-quality results. What this quality will be will depend on the language pair adopted, and will need to be discussed with the prospective mentor. This will involve writing linguistic data, including morphological rules and transfer rules, which are specified in a declarative language. || Apertium has a few language pairs (e.g. sv-da, sh-mk, en-af, cy-en, etc.) that are orphaned: they don't have active maintainers. A lot of these pairs already have a lot of work put into them, and just need another few months to get them to release quality. || XML, a scripting language (Python, Perl), good knowledge of the language pair adopted. || Francis Tyers, Jimregan
|-
| '''Word compounder and de-compounder''' || 4. Low || Write a de-compounder and compounder for Apertium (a toy sketch of dictionary-based compound splitting is given below the table). || Many languages in the world have compound words; in Europe, for example, German, Dutch, Danish, etc. These are often very low-frequency or completely novel, and as such do not exist in our dictionaries. If we had some software to split them into their constituent parts we might be able to translate them, and so improve accuracy on our pairs with these languages. See also bug #13, and the page Compounds. || C, C++, XML || Francis Tyers
|-
| '''Post-edition tool''' || 3. Medium || Make a post-edition tool to speed up the revision of Apertium translations. It would likely include at least support for spelling and grammar checking, along with user-specified rules for fixing translations (search and replace, etc.). || After translating with Apertium, revision work has to be done before a translation can be considered "adequate". An intelligent post-edition environment would help with this task. In this environment, typical mistakes in the translation process that can be detected automatically (for example unknown words and homographs) could be highlighted so that they are taken into consideration during post-edition, and other typical mistakes could be defined so that the post-editor is advised to check for them. || XML, PHP, Python, C, C++, whichever programming language. || Sortiz
|-
| '''Lexical insertion tool''' || 3. Medium || Improve the current web-based dictionary application for inserting new word pairs into the Apertium dictionaries. This would involve improving both the functionality and the efficiency of the software. || Currently people have to edit XML in order to add words to the dictionaries. We have a web application, written in Python, that does a lot of this work, but it still lacks some functionality, for example multiwords and complete support for the new dictionary format. What is more, it is quite slow and memory-intensive. || Python || Francis Tyers
|-
| '''Generating spell checkers''' || 4. Low || The data that come with Apertium (morphological analysers) could be used to create spell checkers. This task would be to work on an automatic converter from Apertium formats to other popular spell checker formats, maybe using something like ispell, myspell, hunspell, youspell, etc. || Spell checkers can be useful, especially before translating. They are one of the basic items of language technology. If we can re-use Apertium data for this purpose it will help both the project (by making creating new language pairs more rewarding) and the language communities (by making more useful software). This will be particularly useful for minority languages that have an Apertium translator but no free spell checker, and also for using spell checking tools as a controlled-language tool. || XML, whatever programming language and natural language are used for testing. || Sortiz
|-
| '''Detect 'hidden' unknown words''' || 3. Medium || The lexical disambiguator of Apertium could be modified to display the likelihood of each 'tag' being applied, and so become aware of missing entries in the dictionary. || Apertium dictionaries are missing some quite frequent entries simply because the surface form of an entry is not present with all of its lexical forms (e.g. "running" may be in as a verb, but not as a noun: "he is in the running ..."), so it cannot be detected as an unknown word. This feature would detect 'possible' unknown words and 'possible' mis-designs of lexical disambiguators. || C++ if modifying the lexical disambiguator is better, or whatever if writing from scratch. || Sortiz
|}
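
The following is a minimal sketch, in Python, of one way the accent and diacritic restoration module could work: build a table from de-accented forms to attested accented forms and restore diacritics only where the mapping is unambiguous. The wordlist, function names and example words are invented for illustration and are not part of Apertium; a real module would derive its table from the language pair's own dictionaries.

<pre>
# Toy diacritic restorer: restore accents only where unambiguous.
# A real module would sit in the Apertium pipeline and use the language
# pair's own dictionaries; here a small hard-coded wordlist stands in.
import unicodedata
from collections import defaultdict

def strip_diacritics(word):
    """Return the word with all combining marks removed."""
    decomposed = unicodedata.normalize('NFD', word)
    return ''.join(c for c in decomposed if not unicodedata.combining(c))

def build_restoration_map(accented_words):
    """Map each de-accented form to the set of attested accented forms."""
    table = defaultdict(set)
    for w in accented_words:
        table[strip_diacritics(w)].add(w)
    return table

def restore(tokens, table):
    """Restore diacritics where exactly one accented form is attested."""
    out = []
    for tok in tokens:
        candidates = table.get(strip_diacritics(tok), set())
        out.append(next(iter(candidates)) if len(candidates) == 1 else tok)
    return out

wordlist = ['traducción', 'camión', 'casa']    # would come from a dictionary dump
table = build_restoration_map(wordlist)
print(restore(['traduccion', 'casa'], table))  # ['traducción', 'casa']
</pre>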
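
A toy sketch, again in Python, of the statistical side of lexical selection: score each candidate translation of an ambiguous word against the surrounding source-language words using co-occurrence counts that a real system would estimate from a corpus. The counts, word lists and function names below are invented for illustration.

<pre>
# Toy statistical lexical selector: pick the translation whose typical
# context words best match the words around the ambiguous source word.
# The co-occurrence counts would be estimated from a corpus in a real
# system; everything below is invented for illustration.

def select(translations, context, cooc_counts):
    """Return the best-scoring translation; fall back to the first one."""
    def score(t):
        counts = cooc_counts.get(t, {})
        return sum(counts.get(w, 0) for w in context)
    best = max(translations, key=score)
    return best if score(best) > 0 else translations[0]

# Spanish "estación" -> English "station", "season" or "resort".
translations = ['station', 'season', 'resort']
cooc_counts = {
    'station': {'tren': 12, 'billete': 7},
    'season':  {'lluvias': 9, 'invierno': 5},
    'resort':  {'vacaciones': 8},
}
print(select(translations, ['la', 'estación', 'de', 'tren'], cooc_counts))     # station
print(select(translations, ['la', 'estación', 'de', 'lluvias'], cooc_counts))  # season
</pre>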
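
A toy sketch of the sliding-window tagging idea, using a window of one word to each side: the tag of each word is chosen from scores indexed by the ambiguity classes of its neighbours, rather than by decoding a hidden Markov model. The window size, tagset and weights here are invented; a real implementation would follow the cited papers of Sánchez-Villamil et al. and the existing tagger specification format.

<pre>
# Toy sliding-window tagger with a window of one word to each side.
# Each token carries its ambiguity class (the set of tags allowed by
# the morphological analyser); training would estimate, for every
# (left class, own class, right class) triple, a weight for each tag
# of the middle word. The weights below are invented.

BOUNDARY = frozenset(['SENT'])  # dummy class used outside the sentence

def tag_sentence(classes, weights):
    """classes: one frozenset of candidate tags per token.
    weights: dict (left, own, right) -> dict tag -> weight.
    Unseen contexts fall back to a deterministic member of the class."""
    padded = [BOUNDARY] + list(classes) + [BOUNDARY]
    tags = []
    for i, own in enumerate(classes, start=1):
        context = (padded[i - 1], own, padded[i + 1])
        w = weights.get(context, {})
        tags.append(max(sorted(own), key=lambda t: w.get(t, 0)))
    return tags

# "the running man": 'running' is ambiguous between adjective and verb.
classes = [frozenset(['DET']), frozenset(['ADJ', 'VERB']), frozenset(['NOUN'])]
weights = {
    (frozenset(['DET']), frozenset(['ADJ', 'VERB']), frozenset(['NOUN'])): {'ADJ': 5, 'VERB': 1},
}
print(tag_sentence(classes, weights))  # ['DET', 'ADJ', 'NOUN']
</pre>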
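
Finally, a toy sketch of dictionary-based de-compounding: recursively split a word into parts that are all found in the lexicon. The lexicon and minimum part length are invented; a real module would consult the pair's monolingual dictionary and handle linking morphemes and alternative splits.

<pre>
# Toy de-compounder: split a word into parts that are all in the lexicon.
# A real module would use the pair's monolingual dictionary, prefer
# splits with fewer parts, and handle linking morphemes (e.g. German -s-);
# the lexicon and minimum part length here are invented.

def split_compound(word, lexicon, min_part=3):
    """Return a list of known parts covering the word, or None."""
    if word in lexicon:
        return [word]
    for i in range(min_part, len(word) - min_part + 1):
        head, rest = word[:i], word[i:]
        if head in lexicon:
            tail = split_compound(rest, lexicon, min_part)
            if tail is not None:
                return [head] + tail
    return None

lexicon = {'ord', 'bog', 'hus'}            # tiny Danish-flavoured toy lexicon
print(split_compound('ordbog', lexicon))   # ['ord', 'bog'] ("dictionary")
print(split_compound('postbil', lexicon))  # None (parts not in the lexicon)
</pre>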

Notes

  1. Free software, GPL licensed
  2. Free software, GPL incompatible (MPL variant)

Further reading

Accent and diacritic restoration
  • Simard, Michel (1998). "Automatic Insertion of Accents in French Texts". Proceedings of EMNLP-3. Granada, Spain.
  • Mihalcea, Rada F. (2002). "Diacritics Restoration: Learning from Letters versus Learning from Words". Lecture Notes in Computer Science 2276/2002, pp. 96--113.
  • De Pauw, G., Wagacha, P.W. and de Schryver, G.M. (2007). "Automatic diacritic restoration for resource-scarce languages". Proceedings of Text, Speech and Dialogue, Tenth International Conference. pp. 170--179.
Lexical selection
Sliding-window based part-of-speech tagging
Automated lexical extraction
Support for agglutinative languages
Transfer rule learning
  • Sánchez-Martínez, F. and Forcada, M.L. (2007) "Automatic induction of shallow-transfer rules for open-source machine translation", in Proceedings of TMI 2007, pp.181-190 (paper, poster)
Compounding and de-compounding