Machine translation courses for the languages of Russia / Section 0

From Apertium
Session 0: Overview

This section gives a brief overview of rule-based machine translation and shows how the free/open-source machine translation platform Apertium works.

There are two fundamentally different kinds of machine translation:

  • Rule-based machine translation (RBMT), also called symbolic machine translation; Apertium belongs to this kind, and this course is devoted to a subtype of rule-based machine translation.
  • Corpus-based machine translation; here, to translate new sentences, the translator consults collections of previously translated sentences.

Corpus-based MT can, in brief, be divided into two main subgroups: one based on statistics and one based on examples. In principle, statistical machine translation works as follows: a collection of previously translated sentences (a parallel corpus) is taken, and it is counted which words occur together most frequently. Each pair of co-occurring words is assigned a probability. When translating a new sentence, the translator considers all the words that have been assigned probabilities, combines those probabilities, generates several candidate translations, and then chooses the candidate with the highest probability. The first statistical MT systems only took co-occurrences of single words into account, but newer systems can also handle co-occurrences of contiguous sequences of words (phrases) and hierarchical trees.
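The core idea described above can be sketched in a few lines of Python. The phrase table below is invented purely for illustration; a real system would estimate these probabilities from a parallel corpus.

```python
from math import prod

# Invented toy phrase table: each source phrase maps to candidate
# target phrases with probabilities (hypothetical values).
phrase_table = {
    "la chica": [("das Mädchen", 0.8), ("die Mädchen", 0.2)],
    "los gatos": [("Katzen", 0.7), ("die Katzen", 0.3)],
}

def best_translation(phrases):
    # For each source phrase pick the most probable target phrase,
    # then combine the probabilities into one hypothesis score.
    choices = [max(phrase_table[p], key=lambda c: c[1]) for p in phrases]
    words = " ".join(t for t, _ in choices)
    score = prod(p for _, p in choices)
    return words, score

words, score = best_translation(["la chica", "los gatos"])
print(words, round(score, 2))  # das Mädchen Katzen 0.56
```

A real decoder would score many competing segmentations and phrase choices rather than greedily picking one per phrase, but the principle of combining probabilities and keeping the highest-scoring hypothesis is the same.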

By contrast, example-based machine translation can be thought of as translation by analogy. It still uses a parallel corpus, but instead of assigning probabilities to words, it tries to learn by example. For example, given the sentence pairs (A la chica le gustan los gatos (es) → Das Mädchen mag Katzen (de)) and (A la chica le gustan los elefantes → Das Mädchen mag Elefanten), it might produce the translation template (A la chica le gustan X → Das Mädchen mag X). When translating a new sentence, the parts are looked up and substituted.
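The template-learning step can be sketched as follows. This toy version only handles pairs of equal-length sentences that differ in exactly one word, and the bilingual entry perros → Hunde is invented for the usage example.

```python
def learn_template(pair1, pair2):
    # Align two sentence pairs word by word and replace the
    # differing word on each side with a slot "X".
    (src1, tgt1), (src2, tgt2) = pair1, pair2
    src_tpl = ["X" if a != b else a for a, b in zip(src1.split(), src2.split())]
    tgt_tpl = ["X" if a != b else a for a, b in zip(tgt1.split(), tgt2.split())]
    return " ".join(src_tpl), " ".join(tgt_tpl)

def translate(sentence, src_tpl, tgt_tpl, word_dict):
    # Match the sentence against the source template, capture the
    # word in the slot, and substitute its translation on the target side.
    src_words, sent_words = src_tpl.split(), sentence.split()
    if len(src_words) != len(sent_words):
        return None
    slot = None
    for a, b in zip(src_words, sent_words):
        if a == "X":
            slot = b
        elif a != b:
            return None
    return tgt_tpl.replace("X", word_dict[slot])

src_tpl, tgt_tpl = learn_template(
    ("A la chica le gustan los gatos", "Das Mädchen mag Katzen"),
    ("A la chica le gustan los elefantes", "Das Mädchen mag Elefanten"),
)
print(translate("A la chica le gustan los perros", src_tpl, tgt_tpl,
                {"perros": "Hunde"}))  # Das Mädchen mag Hunde
```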

Automatically applying a large translation memory to a text may also be considered a form of corpus-based machine translation. In practice, the line between statistical and example-based MT is blurrier. Both rule-based and corpus-based methods have advantages and disadvantages. Corpus-based methods may produce translations which sound more fluent, but whose meaning is less faithfully reproduced. Rule-based systems tend to produce translations which are less fluent, but which better preserve the source language meaning.

Rule-based and corpus-based systems can also be combined in various ways into hybrid systems. For example, one might build a hybrid system that uses an example-based component to find equivalences, and falls back to a rule-based component when no pattern is found.
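The backoff idea is simple to sketch. Both the example memory and the word list below are invented stand-ins; a real hybrid would plug in full example-based and rule-based pipelines.

```python
# Invented translation memory of whole sentences (example-based component).
examples = {"buenos días": "guten Morgen"}

def rule_based(sentence):
    # Stand-in for a full rule-based pipeline: a word-for-word
    # lookup with an invented one-entry dictionary.
    word_dict = {"gracias": "danke"}
    return " ".join(word_dict.get(w, w) for w in sentence.split())

def hybrid_translate(sentence):
    # Try the example-based lookup first; back off to rules
    # when no stored pattern matches.
    return examples.get(sentence) or rule_based(sentence)

print(hybrid_translate("buenos días"))  # guten Morgen
print(hybrid_translate("gracias"))      # danke
```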

Types of machine translation systems

Direct

Direct, or word-for-word, machine translation works by reading words in the source language one at a time and looking them up in a bilingual word list of surface forms. Words may also be deleted (left out), and one word may be translated as one or more words. No grammatical analysis is done, so even simple errors, such as agreement in gender and number between a determiner and its head noun, will remain in the target language output.

EXAMPLE 1
Heinrich köpeğine bir parça et verir.
<< CHUVASH HERE >>
Генрих сетö яй кусöк аслас понлы.
Heinrich antoi lihapalan koiralleen.
Генрих даёт кусок мяса своей собаке.
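A direct system for the Turkish→Russian sentence above might look like the sketch below. The word list is invented for illustration (a real direct system would need every inflected surface form listed), and the output shows exactly the kind of agreement and word-order errors the paragraph describes.

```python
# Invented bilingual word list of surface forms, Turkish -> Russian.
word_list = {
    "Heinrich": "Генрих",
    "köpeğine": "собаке",
    "bir": "один",
    "parça": "кусок",
    "et": "мясо",
    "verir": "даёт",
}

def direct_translate(sentence):
    # Look up each word one at a time; unknown words pass through
    # unchanged. No grammatical analysis: case, agreement and
    # word-order errors remain in the output.
    words = sentence.rstrip(".").split()
    return " ".join(word_list.get(w, w) for w in words) + "."

print(direct_translate("Heinrich köpeğine bir parça et verir."))
# Генрих собаке один кусок мясо даёт.
```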

Transfer

Transfer-based machine translation works by first converting the source language to a language-dependent intermediate representation, and then rules are applied to this intermediate representation in order to change the structure of the source language to the structure of the target language. The translation is generated from this representation using both bilingual dictionaries and grammatical rules.

There can be differences in the level of abstraction of the intermediate representation. We can distinguish two broad groups, shallow transfer, and deep transfer. In shallow-transfer MT the intermediate representation is usually either based on morphology or shallow syntax. In deep-transfer MT the intermediate representation usually includes some kind of parse tree or graph structure (see images on the right).

EXAMPLE 2

Transfer-based MT usually works as follows: the original text is first analysed and disambiguated morphologically (and, in the case of deep transfer, syntactically) in order to obtain the source language intermediate representation. The transfer process then converts this representation (still in the source language) into a representation at the same level of abstraction in the target language. From the target language representation, the target language text is generated.
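For a single word, the three stages can be sketched as below. All of the entries (the analysis, the bilingual dictionary, the transfer rule, and the generation table) are invented for illustration; real systems store thousands of such entries and rules.

```python
# 1. Analysis: surface form -> (lemma, morphological tags). Invented entry.
analyses = {"gatos": ("gato", ["n", "m", "pl"])}

# 2. Lexical transfer: bilingual dictionary of lemmas. Invented entry.
bidix = {"gato": "Katze"}

def transfer_tags(tags):
    # 3. Structural transfer rule: Spanish "gato" is masculine,
    # German "Katze" is feminine, so swap the gender tag.
    return ["f" if t == "m" else t for t in tags]

# 4. Generation: (target lemma, target tags) -> surface form. Invented entry.
generation = {("Katze", ("n", "f", "pl")): "Katzen"}

def translate_word(surface):
    lemma, tags = analyses[surface]            # analyse
    tgt_lemma = bidix[lemma]                   # lexical transfer
    tgt_tags = tuple(transfer_tags(tags))      # structural transfer
    return generation[(tgt_lemma, tgt_tags)]   # generate

print(translate_word("gatos"))  # Katzen
```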

Interlingual

In transfer-based machine translation, rules are written on a pair-by-pair basis, making them specific to a language pair. In the interlingua approach, the intermediate representation is entirely language independent. There are a number of benefits to this approach, but also disadvantages. The benefit is that in order to add a new language to an existing MT system, it is only necessary to write an analyser and a generator for the new language, not transfer rules between the new language and all the existing languages. The drawback is that it is very hard to define an interlingua which can truly represent all nuances of all natural languages, and in practice, interlingua systems are only used for limited translation domains.

Problems in machine translation

Analysis

Form does not entirely determine content.

This is also called the problem of ambiguity. The problem is that many sentences in natural language can have more than one interpretation, and these interpretations may be translated differently in different languages. Consider the following example:

EXAMPLE 3
(An example of syntactic ambiguity in Russian or Chuvash is needed here; the simpler, the better.)
Вот друг Саша, которого я вчера встретил.

Synthesis

Content does not entirely determine form.

This is the problem that, in any given language, there is usually more than one way of expressing the same meaning.

EXAMPLE 4
Эсир мӗнле пурӑнатӑр?
Мӗнле пурнӑҫсем?
Мӗнле халсем?
Мӗнле еҫсем?

All of these questions call for the same answer ("how are you?"), but they may be more or less frequently used, or emphasise different things. In Apertium, for a given input sentence, one output sentence is produced. It is up to the designer of the translation system to choose which translation they want the system to produce. We often recommend the most literal translation possible, as this reduces the need for transfer rules.

Transfer

The same content is represented differently in different languages.

Languages have different ways of expressing the same meaning. These ways are often incompatible between languages. Consider the following examples expressing the same content:

EXAMPLE 5

In Apertium, rules are applied which convert source language structure to target language structure using sequences of lexical forms as an intermediate representation. For further information see: Session 5: Structural transfer basics.
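As an illustration of the intermediate representation involved, here is a minimal Python sketch (not part of Apertium itself) that parses a simplified Apertium-style lexical form, such as ^gato<n><m><pl>$, into a lemma and a list of tags. Transfer rules pattern-match over sequences of such units.

```python
import re

def parse_lexical_form(unit):
    # A simplified lexical form looks like ^lemma<tag1><tag2>...$
    # (e.g. a disambiguated unit in the Apertium stream format).
    m = re.fullmatch(r"\^([^<]+)((?:<[^>]+>)*)\$", unit)
    if m is None:
        raise ValueError(f"not a lexical form: {unit!r}")
    lemma = m.group(1)
    tags = re.findall(r"<([^>]+)>", m.group(2))
    return lemma, tags

print(parse_lexical_form("^gato<n><m><pl>$"))  # ('gato', ['n', 'm', 'pl'])
```

This ignores several features of the real stream format (multiwords, superblanks, ambiguous analyses separated by "/"), but shows the lemma-plus-tags shape that transfer rules operate on.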

Description

Representing knowledge about the translation process in machine-readable form.

The final problem is that of description. In order to build a machine translation system it is necessary for people with knowledge of both languages to sit down and codify that knowledge in a form explicit and declarative enough for the machine to be able to process it.

While translation is often an unconscious process (we translate without reflecting on the rules that we use to translate), the machine has no such unconscious knowledge and must be told exactly what operations to perform. If these operations rely on information that the machine does not have, or cannot have, then machine translation will not be possible.

But for many sentences, this information is not necessary:

EXAMPLE 6

Practice

Installation

For guidance on installing Apertium, HFST and Constraint Grammar, see the handout.

Usage

To use Apertium, first open up a terminal. Now cd into the directory of the language pair you want to test.

$ cd apertium-aa-bb

You can test it with the following command:

$ echo "Text that you want to translate" | apertium -d . aa-bb

For example, from Turkish to Kyrgyz:

$ echo "En güzel kız evime geldi." | apertium -d . tr-ky
Эң жакшынакай кыз үйүмө келди. 

Directory layout

Below is a table which gives a description of the main data files that can be found in a typical language pair, and links to the sessions where they are described.

File | Type | Description | Session(s)
apertium-tr-ky.ky.lexc | Dictionary | Kyrgyz morphotactic dictionary, used for analysis and generation | 1, 2
apertium-tr-ky.ky.twol | Phonological rules | Kyrgyz morphophonological rules | 1, 2
apertium-tr-ky.tr.lexc | Dictionary | Turkish morphotactic dictionary, used for analysis and generation | 1, 2
apertium-tr-ky.tr.twol | Phonological rules | Turkish morphophonological rules | 1, 2
apertium-tr-ky.tr-ky.dix | Dictionary | Turkish–Kyrgyz bilingual dictionary, used for lexical transfer | 4
apertium-tr-ky.tr.rlx | Tagging rules | Turkish constraint grammar, used for morphological disambiguation | 3
apertium-tr-ky.ky.rlx | Tagging rules | Kyrgyz constraint grammar, used for morphological disambiguation | 3
apertium-tr-ky.tr-ky.t1x | Transfer rules | Turkish→Kyrgyz first-level rule file, for structural transfer | 5, 6
apertium-tr-ky.ky-tr.t1x | Transfer rules | Kyrgyz→Turkish first-level rule file, for structural transfer | 5, 6