Difference between revisions of "User:Skh/Application GSoC 2010"
Revision as of 13:02, 8 April 2010
Improving multiword support in Apertium
About me
Name
Sonja Krause-Harder
Contact information
- E-mail: krauseha@gmail.com
- IRC: skh on freenode
- Sourceforge: skh
- Apertium wiki: Skh
List your skills and give evidence of your qualifications.
I am studying computational linguistics and Indo-European studies at the University of Erlangen. I'm in my second year of a three-year undergraduate program. My courses so far include formal languages, data structures and algorithms, morphological analysis (with JSLIM, see http://www.linguistik.uni-erlangen.de/clue/en/research/jslim.html) and linguistics.
Before I started studying I worked for seven years at SuSE Linux / Novell as a Linux packager and software developer. I maintained RPM packages related to Java development (Eclipse, Tomcat, the Jakarta project) as well as the Apache web server, and I helped develop internally used tools.
During the initial launch of the openSUSE project I was involved in concept discussions and community relations, presenting the project externally at conferences and internally to other departments at Novell, in order to improve collaboration between the openSUSE community and SuSE / Novell R&D.
Examples of my work:
- A tool to transliterate Devanagari into IAST or Harvard-Kyoto transliteration:
http://www.linguistik.uni-erlangen.de/~sakrause/transliterate
- SWAMP: a workflow management system used internally at SuSE; I worked on the workflow definition language and the core workflow engine.
http://swamp.sf.net
Language skills
- native: German, near-native: English
- some: French, Czech
- little: Italian, Spanish, Dutch, Icelandic
- ancient: Sanskrit, Ancient Greek, some Latin
Any non-Summer-of-Code plans for the Summer
Summer term at my university finishes on July 24th, so until then I have some class work to do. Should I be accepted into the program, I will pause the student programming job (20 hours/week) which I have held since I started studying, and spend at least those 20 hours/week on my Google Summer of Code project. After July 24th I have no plans other than GSoC.
Motivation
Why is it you are interested in machine translation?
I have been interested in languages for a long time, and I had already been working as a programmer, so my decision to study computational linguistics was a logical conclusion. Machine translation appeals to me because doing it successfully requires both current research and real-world engineering methods, along with careful consideration of efficiency.
Also, while there are already translation tools which work well for specific subject areas and languages, there is still pioneering work to be done, especially for languages that don't have many speakers.
Why is it that you are interested in the Apertium project?
It is one of the few projects in the field of natural language processing that are completely open source. I really believe in open source, and in my opinion machine translation, as a result of linguistic research, should be accessible and usable by as many people as possible, not only by those who can afford expensive proprietary translation tools.
I like the architecture: small Unix tools in a chain, each doing one thing only, which can be combined differently for different language pairs.
There is a considerable variety of languages already in the project, and the project is very much alive. I've already received lots of help on IRC and the mailing list, and I feel that Apertium is a mentoring organization that is willing to help its students and is interested in good, usable results from its GSoC projects.
Project: Improving multiword support in Apertium
Natural languages can have lexical units which consist of two or more separate words and which, as a unit following certain composition rules, have a meaning that cannot be inferred from the meanings of their constituent parts. To handle these lexical units in Apertium, the concept of multiwords is used. Because the ways in which languages use multiword constructs are so varied, only some cases can be handled with the current dictionary syntax and implementation in Apertium. This project aims at extending multiword support in Apertium so that two more major types of multiwords can be handled.
Supported multiword constructs
Three kinds of multiword lexical units are already supported, as explained in apertium-documentation. These are:
- Multiwords without inflection: short phrases used like adverbs (Example: English at the moment)
- Compound multiwords: two words contracted into one for phonetic or orthographic reasons (Examples: Spanish del < de el, English isn't < is not)
- Multiwords with inner inflection: groups of two or more words where one is inflected and the others remain unchanged (Example: English record player)
record player → record players :) A better example might be took away < take away - Francis Tyers 10:23, 8 April 2010 (UTC)
Missing multiword constructs
I would like to add support for the following kind of multiwords to Apertium:
Complex multiwords (adj-noun)
These multiwords consist of two inflected words, typically an adjective and a noun, which agree with each other in gender, number and case. They are formally indistinguishable from any other adjective-noun pair, but their meaning can't be inferred from the constituent words, and they therefore need a separate entry in the dictionary.
Currently, multiwords of this type can be handled in the monolingual dictionaries by explicitly defining the adj-noun combination in all its conjugated forms. This works fine in languages with little inflection (see the example for dirección general on the Multiwords wiki page), but gets increasingly unwieldy when a language inflects with more variation, like the Slavic or some Germanic languages.
More examples:
- Polish Baba Jaga (proper noun), see apertium/incubator/apertium-en-pl/pldic/pldix-np-ant-mw-lex.xml
- German gelbe Rübe ("carrot")
- Serbian zračna luka ("airport"), see Multiwords
To process this type of multiword, I propose a definition similar to this one in the multiword dictionary:
pair:
  left:     gelbe(adj-f-$num-$case) Rübe(n-f-$num-$case)
  right:    gelbe Rübe(np-f-$num-$case)
  agree-on: $num, $case
This is of course pseudocode; please see the discussion part of this page for how this could be expressed in a way similar to the existing Apertium syntax.
The monodix will only have entries for the adjective gelb and the noun Rübe on their own. The bidix will have an entry for gelbe Rübe as defined in the multiword dictionary.
The multiword module shall accept a disambiguated stream similar to this:
^gelbe/gelb<adj><f><pl><nomgendatacc>$^Rüben/Rübe<n><f><pl><nomgendatacc>$
and output something similar to this:
^gelbe Rüben/gelbe Rübe<np><f><pl><nomgendatacc>$
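A minimal sketch of how this recognition step could look, assuming the Apertium stream convention ^surface/lemma<tags>$ shown above; the function name, the regex-based matching, and the restriction to a single hard-coded entry are illustrative assumptions, not a proposed implementation:

```python
import re

# Hypothetical sketch: merge the adjective-noun multiword "gelbe Rübe"
# into a single lexical unit when the number and case tags agree.
MW_PATTERN = re.compile(
    r"\^(\S+)/gelb<adj><f><(\w+)><(\w+)>\$"   # adjective: capture number, case
    r"\^(\S+)/Rübe<n><f><(\w+)><(\w+)>\$"     # noun: capture number, case
)

def merge_gelbe_ruebe(stream: str) -> str:
    def repl(m):
        adj_surf, adj_num, adj_case, n_surf, n_num, n_case = m.groups()
        if adj_num == n_num and adj_case == n_case:   # agreement check
            return f"^{adj_surf} {n_surf}/gelbe Rübe<np><f><{adj_num}><{adj_case}>$"
        return m.group(0)  # no agreement: leave the two units untouched
    return MW_PATTERN.sub(repl, stream)
```

A real module would of course read the entries and their agreement constraints from the multiword dictionary instead of hard-coding them.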
Related, but more complicated example:
- adj-noun with additional words inside it: Polish Umawiające się Strony ("contract parties"), where Umawiające agrees with Strony and się is invariable
Discontiguous multiwords (verb ... particle)
These multiwords consist of a verb and a particle which do not stand next to each other in the sentence. What can stand between the verb and the particle depends on the language; some languages are stricter than others. For some, phrase-based rules are necessary; others have position-based rules.
Example:
- to make something up
dictionary example TBD
stream example TBD
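Until the dictionary and stream formats are fixed, the recognition idea can be sketched on plain token lists; the bounded-gap rule, the window size, and the function name below are illustrative assumptions, not existing Apertium behaviour:

```python
# Hypothetical sketch: recognize the discontiguous multiword "make ... up"
# in a flat token list, allowing a bounded number of intervening tokens.
def find_phrasal_verb(tokens, verb="make", particle="up", max_gap=4):
    """Return (verb_index, particle_index) of the first match, or None."""
    for i, tok in enumerate(tokens):
        if tok == verb:
            # search a bounded window to the right for the particle
            for j in range(i + 1, min(i + 1 + max_gap + 1, len(tokens))):
                if tokens[j] == particle:
                    return (i, j)
    return None
```

A position-based rule like this bounded window is only one of the options mentioned above; phrase-based rules would need a richer representation of the material between verb and particle.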
Ambiguity
It is possible to construct many ambiguous cases with multiwords, especially with those of the discontiguous type.
As an example, consider the following sentences:
- Unambiguous because of word order: The man threw off the dog who bites his ear off. (1)
- Nested, unambiguous because of punctuation: The man threw the dog, who bites his ear off, off. (2)
- Two candidate verbs to which the particle may belong, unambiguous because of punctuation: The man threw the dog, who bites his ear, off. (3)
- Ambiguous: The man threw the dog biting his ear off. (4)
Sentences like (1) are what I refer to as "simple sentences" in the work plan; these should be recognized and generated correctly. (2) and (3) may theoretically be recognized; (4) is inherently ambiguous and would need to be analysed in different ways, passing the ambiguity on to another module that uses other approaches to resolve it.
Also, for the more difficult cases it might be best to only attempt to recognize them, but not to generate them correctly, so not all transformations will be bidirectional.
Cases (2) to (4) are most likely also beyond the scope of this project, but may be worked upon at the end if there's time left.
Additional types of multiwords
- cases where a particle verb needs to be reordered so that the particle comes first (because the finite verb must be in sentence-second position)
- separable verbs
- multiwords that are complex and discontiguous at the same time
- verb constructions with auxiliary verbs (French passé composé, German perfect)
The multiword module should be extended in the future so that these and other additional types of multiwords can be handled. They are beyond the main scope of this project, though; some of them might be included if there is time left at the end.
Formalisms
Formally, a finite state transducer can handle neither complex multiwords as defined above (explicitly listing all possible inflected forms works just fine, of course), because it would need to match tags and copy them to the right side, nor discontiguous multiwords, because it would need to keep track of what stands between the verb and its particle. Constructs like these can't be part of a regular language, and therefore can't be matched by finite automata.
In the field of regular expressions, however, patterns like these are typically handled by extensions to the "regular" expression engine (which then isn't regular any more, of course). I therefore plan to first research existing regular expression engines (sed, Perl/PCRE) to find out how they have solved the problem. I would like to avoid reinventing the wheel and to find a solution that is robust and efficient.
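As a small illustration of such an extension, a backreference (available in Perl/PCRE, and in Python's re module shown here) can enforce tag agreement, which no plain finite automaton can do over an unbounded tag inventory; the tag strings are illustrative:

```python
import re

# \1 forces the noun's tag to equal the adjective's captured tag --
# a backreference, which takes the matcher beyond regular languages.
AGREE = re.compile(r"<adj><(\w+)>.*?<n><\1>")

def tags_agree(fragment: str) -> bool:
    # True if an adjective's tag is later repeated on a noun
    return AGREE.search(fragment) is not None
```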
Integration into the Apertium pipeline
Analysis of these two new types of multiwords should happen in a separate module. Initially I would like to run it after the POS tagger so that it can work on a disambiguated data stream. It is possible, however, that in some cases the POS tagger destroys a multiword by tagging one of its constituent words incorrectly. It might therefore be desirable, at some point, to extend the multiword module to work on the data stream that still contains ambiguous analyses from the morphological analyzer. This too is most likely beyond the scope of this project.
Timeline
All tasks include regression tests and user-level documentation in the form of cookbook style examples in the wiki.
- Now: read code, work on any language pair (en-de because I know it, nl-de was suggested on IRC) to get acquainted with the system and the work of a language pair maintainer, research regular expression engines
- Community bonding phase: Discuss and define format of the multiword dictionary, turn examples into test cases for regression tests
- Week 1 and 2: Create new tool, parse dictionary.
- Week 3 and 4: read the disambiguated stream with the help of existing libraries
- Deliverable #1: working binary that can parse the multiword dictionary and read in a disambiguated data stream
- Week 5 and 6: recognize and generate complex multiwords of type adj-noun
- Week 7 and 8: recognize discontiguous multiwords in unambiguous simple sentences
- Deliverable #2: working binary that can recognize and generate complex multiwords of type adj-noun and recognize discontiguous multiwords in unambiguous simple sentences
- Week 9 and 10: generate discontiguous multiwords in simple sentences
- Week 11 and 12: clean up and prepare for release
- Project completed
Additional tasks, if there is time left at the end of the coding phase:
- handle ambiguous cases of discontiguous multiwords
- optionally process streams with ambiguous analyses still present
- handle additional types of multiwords
- work with language maintainers to use the new multiword types in existing language pairs
Reasons why Google and Apertium should sponsor it
Enhanced multiword support will make Apertium usable for more languages. As it is now, some multiword constructs can only be implemented with workarounds in the dictionary, and some, like separable verbs, not at all. Support for these will improve translation quality for many languages. Also, a logical and documented way to describe these multiwords and handle them in the engine will make the work of language-pair maintainers easier. This will lead to more language pairs and increase the scope and impact of the Apertium project.
A description of how and who it will benefit in society
The variety of languages currently spoken is an important part of cultural diversity. Still, people need to communicate and to have access to written information that is only available in some languages -- textbooks, manuals, news. Usable, open-source machine translation for a broad range of languages will be a real help in people's lives.