Ideas for Google Summer of Code
{{IdeaSummary
| name = Develop a morphological analyser
| difficulty = Entry level
| skills = XML
| description = Write a morphological analyser and generator for a language that does not yet have one
| rationale = A key part of an Apertium machine translation system is a morphological analyser and generator. The objective of this task is to create an analyser for a language that does not yet have one.
| mentors = [[User:Francis Tyers|Francis Tyers]]
| more = /Morphological analyser
}}

{{IdeaSummary
| name = User-friendly lexical selection training
| difficulty = Medium
| skills = Python, C++, shell scripting
| description = Make training and inference of lexical selection rules a more user-friendly process
| rationale = Our lexical selection module allows for inferring rules from corpora and word alignments, but the procedure is currently a bit messy, with various scripts involved that require lots of manual tweaking, and many third party tools to be installed. The goal of this task is to make the procedure as user-friendly as possible, so that ideally only a simple config file would be needed, and a driver script would take care of the rest.
| mentors = [[User:Unhammer|Unhammer]], [[User:Mlforcada|Mikel Forcada]]
| more = /User-friendly lexical selection training
}}
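As a sketch of the intended end state, a whole training run might be driven by a single config file along these lines (the file name, keys and values here are invented purely for illustration; no such format exists yet):

```yaml
# Hypothetical lexical-selection training config (illustrative only):
# one file from which a driver script runs the whole
# corpus -> alignment -> rule-inference pipeline.
pair: eng-spa
corpus:
  source: corpus/eng.txt
  target: corpus/spa.txt
aligner: fast_align        # third-party tools declared here, not invoked by hand
steps:
  - tag
  - align
  - extract-candidates
  - score-rules
output: apertium-eng-spa.eng-spa.lrx
```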

{{IdeaSummary
| name = Robust tokenisation in lttoolbox
| difficulty = Medium
| skills = C++, XML, Python
| description = Improve the longest-match left-to-right tokenisation strategy in [[lttoolbox]] to be fully Unicode compliant.
| rationale = One of the most frustrating things about working with Apertium on texts "in the wild" is the way tokenisation works. If a letter is not specified in the alphabet, it is treated as whitespace, so unknown words get split apart and you can end up with output like ^G$ö^k$ı^rmak$, which is terrible for further processing.
| mentors = [[User:Francis Tyers|Francis Tyers]], [[User:TommiPirinen|Flammie]]
| more = /Robust tokenisation
}}
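The failure mode is easy to reproduce with a toy model of tokenisation (a simplified sketch for illustration, not lttoolbox's actual longest-match algorithm): any character missing from the alphabet behaves as a delimiter.

```python
# Toy tokeniser illustrating the alphabet problem: characters outside
# the declared alphabet act as delimiters, splitting unknown words apart.
# This is a simplified sketch, not lttoolbox's actual implementation.

def tokenise(text, alphabet):
    """Split text into tokens; characters not in `alphabet` are delimiters."""
    tokens, current = [], []
    for ch in text:
        if ch in alphabet:
            current.append(ch)
        else:
            if current:
                tokens.append("".join(current))
                current = []
    if current:
        tokens.append("".join(current))
    return tokens

ascii_only = set("abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ")

# With an ASCII-only alphabet, "Gökırmak" is broken into fragments:
print(tokenise("Gökırmak", ascii_only))               # ['G', 'k', 'rmak']
# With the extra letters declared, it survives as one token:
print(tokenise("Gökırmak", ascii_only | set("öı")))   # ['Gökırmak']
```

A fully Unicode-compliant tokeniser would make the second behaviour the default, without requiring every letter to be listed by hand.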

{{IdeaSummary
| name = apertium-separable language-pair integration
| difficulty = Medium
| skills = XML, a scripting language (Python, Perl), some knowledge of linguistics and/or at least one relevant natural language
| description = Choose a language you can identify as having a good number of "multiwords" in the lexicon. Modify its language pairs in Apertium to use the [[Apertium-separable]] module to process the multiwords, and clean up the dictionaries accordingly.
| rationale = Apertium-separable is a newly developed module for processing lexical items with discontiguous dependencies, an area where Apertium has traditionally fallen short. Despite all the module has to offer, it has only been put to use in small test cases and hasn't been integrated into any translation pair's development cycle.
| mentors = [[User:Francis Tyers|Francis Tyers]], [[User:Firespeaker]]
| more = /Apertium separable
}}
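To illustrate the kind of discontiguous pattern involved, consider an English verb-particle multiword in the Apertium stream format. The sketch below is purely conceptual: the function and its merging behaviour are invented to show the idea, not the module's actual implementation.

```python
import re

# Conceptual sketch (illustrative only): merge a discontiguous
# verb + particle in the Apertium stream format into a single
# multiword lexical unit, the kind of job apertium-separable does.

def merge_separable(stream, verb, particle):
    """Merge e.g. ^take<vblex>...$ ... ^out<adv>$ into ^take# out<vblex>...$."""
    pattern = re.compile(
        r"\^" + re.escape(verb) + r"(<[^$]*)\$(.*?)\^"
        + re.escape(particle) + r"<[^$]*\$")
    # \1 = the verb's tags, \2 = the words between verb and particle
    return pattern.sub(r"^" + verb + r"# " + particle + r"\1$\2", stream)

stream = "^take<vblex><pres>$ ^the<det>$ ^trash<n>$ ^out<adv>$"
print(merge_separable(stream, "take", "out"))
# ^take# out<vblex><pres>$ ^the<det>$ ^trash<n>$
```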

{{IdeaSummary
| name = UD and Apertium integration
| difficulty = Entry level
| skills = Python, JavaScript, HTML, (C++)
| description = Create a range of tools for making Apertium compatible with Universal Dependencies
| rationale = Universal Dependencies is a fast-growing project aimed at creating a unified annotation scheme for treebanks, covering both parts of speech and morphological features. Its annotated corpora could be extremely useful to Apertium for training translation models. In turn, Apertium's rule-based morphological descriptions could be useful to software that relies on Universal Dependencies.
| mentors = [[User:Francis Tyers]] [[User:Firespeaker]]
| more = /UD and Apertium integration
}}
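One such tool would be a converter between Apertium analyses and UD annotations. A minimal sketch is below; the mapping tables are invented for illustration and are far from a complete correspondence.

```python
# Hypothetical sketch: convert an Apertium morphological analysis
# (e.g. "dog<n><pl>") to UD-style lemma, UPOS and features.
# The mapping tables below are illustrative, not an official correspondence.

APERTIUM_TO_UPOS = {"n": "NOUN", "vblex": "VERB", "adj": "ADJ", "det": "DET"}
APERTIUM_TO_FEATS = {"pl": ("Number", "Plur"), "sg": ("Number", "Sing"),
                     "pres": ("Tense", "Pres")}

def to_ud(analysis):
    """Return (lemma, UPOS, feature dict) for one Apertium analysis string."""
    lemma, _, rest = analysis.partition("<")
    tags = rest.rstrip(">").split("><") if rest else []
    upos = APERTIUM_TO_UPOS.get(tags[0], "X") if tags else "X"
    feats = dict(APERTIUM_TO_FEATS[t] for t in tags[1:] if t in APERTIUM_TO_FEATS)
    return lemma, upos, feats

print(to_ud("dog<n><pl>"))  # ('dog', 'NOUN', {'Number': 'Plur'})
```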

{{IdeaSummary
| name = rule visualization tools
| difficulty = Medium
| skills = Python and/or JavaScript, XML
| description = Make tools to help visualize the effect of various rules
| rationale = Rule writers currently have little insight into which rules fire on a given input and how they interact; see https://github.com/Jakespringer/dapertium for an example of the kind of tool envisaged.
| mentors = ???
| more = /Visualization tools
}}

{{IdeaSummary
| name = dictionary induction from wikis
| difficulty = Medium
| skills = MySQL, MediaWiki syntax, Perl, possibly C++ or Java; Java, Scala and RDF if using the DBpedia extraction framework
| description = Extract dictionaries from linguistic wikis
| rationale = Wiki dictionaries and encyclopedias (e.g. OmegaWiki, Wiktionary, Wikipedia, DBpedia) contain information (e.g. bilingual equivalences, morphological features, conjugations) that could be exploited to speed up the development of dictionaries for Apertium. This task aims at automatically building dictionaries by extracting different pieces of information from wiki structures such as interlingual links and infoboxes, and/or from DBpedia RDF datasets.
| mentors = ???
| more = /Dictionary induction from wikis
}}
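As a sketch of one extraction source: English Wiktionary marks translations with templates like {{t|es|perro}} or {{t+|es|perro}}. A toy harvester for bilingual equivalences might look like this (the regex only covers the simplest cases; real extraction needs much more robust wikitext parsing):

```python
import re

# Toy sketch: harvest bilingual equivalences from Wiktionary-style
# translation templates such as {{t|es|perro|m}} and {{t+|es|perro}}.
# Real extraction needs a proper wikitext parser, not a regex.

wikitext = """
* Spanish: {{t+|es|perro|m}}, {{t|es|can|m}}
* Basque: {{t|eu|txakur}}
"""

def extract_translations(text, headword):
    """Return (headword, language code, translation) triples."""
    return [(headword, lang, word)
            for lang, word in re.findall(r"\{\{t\+?\|([a-z-]+)\|([^|}]+)", text)]

print(extract_translations(wikitext, "dog"))
# [('dog', 'es', 'perro'), ('dog', 'es', 'can'), ('dog', 'eu', 'txakur')]
```

Triples like these could then be filtered and converted into entries for an Apertium bilingual dictionary.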

{{IdeaSummary
| name = unit testing framework
| difficulty = Medium
| skills = Perl
| description = Adapt https://github.com/TinoDidriksen/regtest for general Apertium use and deploy it across our language pairs
| rationale = We are gradually improving our quality control, with (semi-)automated tests, but these are done on the Wiki on an ad-hoc basis. Having a unified testing framework would allow us to be able to more easily track quality improvements over all language pairs, and more easily deal with regressions.
| mentors = ???
| more = /Unit testing
}}
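The core of a regtest-style framework is comparing pipeline output against stored gold translations. A minimal sketch, with a stub standing in for a real Apertium pipeline:

```python
# Minimal sketch of regression testing for translation quality:
# each test pairs an input with the expected output, and failures are
# reported. A real harness would pipe the input through an actual
# Apertium pipeline (e.g. `apertium en-es`) instead of the stub below.

def translate(text):
    # Stand-in for an Apertium pipeline; unknown words come back starred.
    return {"hello": "hola", "dog": "perro"}.get(text, "*" + text)

GOLD = [("hello", "hola"), ("dog", "perro"), ("cat", "gato")]

def run_regression(gold, translate_fn):
    """Return (input, expected, actual) for every test that fails."""
    return [(src, expected, got)
            for src, expected in gold
            if (got := translate_fn(src)) != expected]

for src, expected, got in run_regression(GOLD, translate):
    print(f"REGRESSION: {src!r}: expected {expected!r}, got {got!r}")
```

Tracking these results over time, per language pair, is what would let regressions be caught as soon as they appear.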



[[Category:Development]]

Revision as of 21:42, 18 January 2021

This page has not been updated for GSoC 2021 yet. Some of these projects were completed in 2020, and none have been adjusted for 2021, which allows only half the working hours of previous years.

This is the ideas page for Google Summer of Code. Here you can find ideas for interesting projects that would make Apertium more useful for people and improve or expand our functionality. If you have an idea, please add it below; if you think you could mentor someone in a particular area, add your name to "Interested mentors" using ~~~

The page is intended as an overview of the kind of projects we have in mind. If one of them particularly piques your interest, please come and discuss with us on #apertium on irc.freenode.net, mail the mailing list, or draw attention to yourself in some other way.

Note that, if you have an idea that isn't mentioned here, we would be very interested to hear about it.

If you're a student trying to propose a topic, the recommended way is to request a wiki account and then go to

http://wiki.apertium.org/wiki/User:[[your username]]/GSoC2020Proposal

and click the "create" button near the top of the page. It's also nice to include [[Category:GSoC_2020_student_proposals]] to help organize submitted proposals.

Ideas

Python API for Apertium

  • Difficulty: Medium
  • Size: unknown
  • Required skills:
    C++, Python
  • Description:
    Update the Python API for Apertium to expose all Apertium modes and test with all major OSes
  • Rationale:
    The current Python API misses out on a lot of functionality, such as phonemisation and segmentation, and doesn't work on some platforms, such as Debian.
  • Mentors:
    Francis Tyers
  • read more...


Web API extensions

  • Difficulty: Medium
  • Size: unknown
  • Required skills:
    Python
  • Description:
    Update the web API for Apertium to expose all Apertium modes
  • Rationale:
    The current web API misses out on a lot of functionality, such as phonemisation and segmentation.
  • Mentors:
    Francis Tyers
  • read more...


Develop a morphological analyser

  • Difficulty: Entry level
  • Size: unknown
  • Required skills:
    XML
  • Description:
    Write a morphological analyser and generator for a language that does not yet have one
  • Rationale:
    A key part of an Apertium machine translation system is a morphological analyser and generator. The objective of this task is to create an analyser for a language that does not yet have one.
  • Mentors:
    Francis Tyers
  • read more...
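Such an analyser is typically written as a monolingual dictionary in lttoolbox's XML format. A minimal sketch of the kind of entry involved (a toy fragment, not a complete dictionary):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Minimal sketch of an lttoolbox monolingual dictionary (monodix): -->
<!-- analyses "houses" as house<n><pl> and can generate it back. -->
<dictionary>
  <alphabet>abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ</alphabet>
  <sdefs>
    <sdef n="n"  c="Noun"/>
    <sdef n="sg" c="Singular"/>
    <sdef n="pl" c="Plural"/>
  </sdefs>
  <pardefs>
    <pardef n="house__n">
      <e><p><l></l><r><s n="n"/><s n="sg"/></r></p></e>
      <e><p><l>s</l><r><s n="n"/><s n="pl"/></r></p></e>
    </pardef>
  </pardefs>
  <section id="main" type="standard">
    <e lm="house"><i>house</i><par n="house__n"/></e>
  </section>
</dictionary>
```

Most of the work in this task is linguistic: writing paradigms like `house__n` for the language's inflection classes and populating the lexicon with stems that point at them.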


Support for Enhanced Dependencies in UD Annotatrix

  • Difficulty: Medium
  • Size: unknown
  • Required skills:
    NodeJS
  • Description:
    UD Annotatrix is an annotation interface for Universal Dependencies, but does not yet support all of its functionality, such as enhanced dependencies.
  • Rationale:
  • Mentors:
    Francis Tyers
  • read more...

