Anaphora resolution module
Revision as of 12:38, 26 August 2019

Here you will find the documentation for the Anaphora Resolution module created during the Google Summer of Code 2019. ([http://wiki.apertium.org/wiki/User:Khannatanmai Proposal])

Repo: https://github.com/apertium/apertium-anaphora (Final Module)

Repo: https://github.com/khannatanmai/apertium-anaphora (used during GSoC)

== What is Anaphora Resolution? ==

Anaphora Resolution is the problem of resolving references to earlier items in discourse.

* '''Anaphor:''' A linguistic unit that refers to an earlier linguistic unit in discourse.
* '''Antecedent:''' The linguistic unit that the anaphor refers to.

The most common form of this is Pronominal Anaphora, where the anaphor is a pronoun and the antecedent is a noun.

For example,

'''''Jessica''' is in the sixth grade and this is '''her''' father.''

Here, "her" is the anaphor, and its antecedent is "Jessica".

== Anaphora Resolution in Machine Translation ==

Anaphora Resolution is required in Machine Translation to produce correct and fluent translations. Since different languages encode information differently, resolving the antecedent of the anaphors in text becomes essential in several language pairs.

For example,

* Spanish -> English
*: ''La chica comió su manzana''
*: Translation: ''The girl ate his/her/its apple''
*: Resolved Anaphora: ''The girl ate her apple''
* Add more examples

== Anaphora Resolution in Apertium ==

Anaphora Resolution happens in two stages in the pipeline: in the Anaphora Resolution module and in the Transfer stage.

We find the antecedent and attach it to the anaphor in the Anaphora Resolution module and select the correct pronoun in the Transfer stage.

=== Anaphora Resolution Module ===

In the Apertium pipeline, Anaphora Resolution happens after the Lexical Selection module, right before Transfer.

The output from the Lexical Selection module is analysed, and for each anaphor, the context is processed and the perceived antecedent is attached to the Lexical Unit of the anaphor. It is then sent to Transfer.

Take the input sentence ''Els grups del Parlament han mostrat aquest dimarts el seu suport al batle d'Alaró''.

The input to the Anaphora Resolution Module is:

<pre>
^El<det><def><m><pl>/The<det><def><m><pl>$ ^grup<n><m><pl>/group<n><pl>$ ^de<pr>/of<pr>/from<pr>$ 
^el<det><def><m><sg>/the<det><def><m><sg>$ ^Parlament<n><m><sg>/Parliament<n><sg>$ 
^haver<vbhaver><pri><p3><pl>/have<vbhaver><pri><p3><pl>$ 
^mostrar<vblex><pp><m><sg>/show<vblex><pp><m><sg>/display<vblex><pp><m><sg>$ 
^aquest<det><dem><m><sg>/this<det><dem><m><sg>$ ^dimarts<n><m><sp>/Tuesday<n><ND>$ 
^el seu<det><pos><m><sg>/his<det><pos><m><sg>$ ^suport<n><m><sg>/support<n><sg>$ 
^a<pr>/at<pr>/in<pr>/to<pr>$ ^el<det><def><m><sg>/the<det><def><m><sg>$ 
^*batle/*batle$ ^de<pr>/of<pr>/from<pr>$ ^*Alaró/*Alaró$^.<sent>/.<sent>$
</pre>

The output is as follows:

<pre>
^El<det><def><m><pl>/The<det><def><m><pl>/$ ^grup<n><m><pl>/group<n><pl>/$ ^de<pr>/of<pr>/from<pr>/$ 
^el<det><def><m><sg>/the<det><def><m><sg>/$ ^Parlament<n><m><sg>/Parliament<n><sg>/$ 
^haver<vbhaver><pri><p3><pl>/have<vbhaver><pri><p3><pl>/$ 
^mostrar<vblex><pp><m><sg>/show<vblex><pp><m><sg>/display<vblex><pp><m><sg>/$ 
^aquest<det><dem><m><sg>/this<det><dem><m><sg>/$ ^dimarts<n><m><sp>/Tuesday<n><ND>/$ 
^el seu<det><pos><m><sg>/his<det><pos><m><sg>/group<n><pl>$ ^suport<n><m><sg>/support<n><sg>/$ 
^a<pr>/at<pr>/in<pr>/to<pr>/$ ^el<det><def><m><sg>/the<det><def><m><sg>/$ 
^*batle/*batle/$ ^de<pr>/of<pr>/from<pr>/$ ^*Alaró/*Alaró/$^.<sent>/.<sent>/$
</pre>

So we can see that the anaphor ''el seu'' (a possessive determiner)

<pre>
^el seu<det><pos><m><sg>/his<det><pos><m><sg>$
</pre>

gets modified to

<pre>
^el seu<det><pos><m><sg>/his<det><pos><m><sg>/group<n><pl>$
</pre>

as we attach its antecedent ''group'' to it.

This is then passed on to Transfer.
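The attachment step amounts to appending the antecedent's lemma and tags as an extra <code>/</code>-separated field inside the anaphor's lexical unit. A minimal Python sketch (a hypothetical helper for illustration, not the module's actual C++ code):

```python
def attach_antecedent(lu: str, antecedent: str) -> str:
    """Append the antecedent as an extra /-separated field
    inside a ^...$ lexical unit (illustrative sketch)."""
    assert lu.startswith("^") and lu.endswith("$")
    return lu[:-1] + "/" + antecedent + "$"

lu = "^el seu<det><pos><m><sg>/his<det><pos><m><sg>$"
print(attach_antecedent(lu, "group<n><pl>"))
# ^el seu<det><pos><m><sg>/his<det><pos><m><sg>/group<n><pl>$
```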

=== Transfer ===

Since Apertium did not originally deal with Anaphora Resolution, it used to output a default translation: ''his'' in the above example.

Now, the Anaphora Resolution Module attaches the antecedent to the LU, which we can use to select the correct anaphor using a macro in the transfer rules of the language pair (t1x).

These rules represent logic similar to:

* if the antecedent is plural, change ''his'' to ''their''.
* if the antecedent is feminine, change ''his'' to ''her''.
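That selection logic can be sketched in Python (illustrative only; the real logic lives in the pair's t1x macros, and the tag names checked here are assumptions):

```python
def select_possessive(antecedent_tags):
    """Pick the English possessive from the attached antecedent's tags
    (sketch of the t1x macro logic; 'pl' and 'f' tag names are assumed)."""
    if "pl" in antecedent_tags:   # plural antecedent
        return "their"
    if "f" in antecedent_tags:    # feminine antecedent
        return "her"
    return "his"                  # the old default

print(select_possessive(["n", "pl"]))  # their
```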

== How does it work? ==

Anaphora Resolution is usually done either using parse trees or using machine learning. However, to obtain accurate parse trees or accurate results from an ML algorithm, one needs a lot of data.

Apertium deals largely with low-resource languages: parse trees aren't available during translation, and the language pairs usually don't have enough parallel data to train ML algorithms that give accurate results.

The algorithm this module uses to resolve anaphora needs neither parse trees nor training data. '''It uses saliency scores to select an antecedent in the context.'''

=== Mitkov's Antecedent Indicators ===

In this algorithm, every time we encounter an anaphor, we collect a list of all possible antecedents in the current sentence and the last 3 sentences.

Then, using a set of indicators, we give each potential antecedent a positive or negative score. These indicators are chosen based on knowledge of the language pair and statistical analysis.

Some of these indicators can be language-pair specific, so they are fully customisable via the .arx files.

Here are some common indicators:

'''Boosting Indicators''' (given a positive score)

* First NPs
* Referential Distance: potential antecedents closer to the anaphor are more likely to be the antecedent.

'''Impeding Indicators''' (given a negative score)

* Indefiniteness: indefinite NPs are penalised.
* Prepositional NPs: NPs which are part of a PP are penalised.

After this is done, the '''highest-scored potential antecedent is chosen as the final antecedent''' and attached to the anaphor.

Reference: [https://link.springer.com/content/pdf/10.1023%2FA%3A1011184828072.pdf Multilingual Anaphora Resolution, Ruslan Mitkov]
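In outline, the selection procedure looks like this (an illustrative Python sketch, not the module's implementation; the indicator functions and candidate fields below are made up for the example):

```python
def resolve(anaphor, candidates, indicators):
    """Sum every indicator's score for each candidate antecedent;
    the highest-scored candidate wins."""
    return max(candidates, key=lambda c: sum(f(c, anaphor) for f in indicators))

# Toy indicators for illustration only:
boost_first_np = lambda c, a: 1 if c.get("first_np") else 0   # boosting
penalise_pp = lambda c, a: -1 if c.get("in_pp") else 0        # impeding

candidates = [
    {"lemma": "groups", "first_np": True},
    {"lemma": "parliament", "in_pp": True},
]
best = resolve({"lemma": "their"}, candidates, [boost_first_np, penalise_pp])
print(best["lemma"])  # groups
```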

== How to Add this Module to a Language Pair ==

First, clone the repo https://github.com/apertium/apertium-anaphora and follow the instructions in the [https://github.com/apertium/apertium-anaphora/blob/master/README README] to install the module.

You can use the Anaphora Resolution module with your language pair in two steps: creating an arx file for the Anaphora Module and modifying the transfer rules (t1x). Here's how to do this:

=== Creating an arx file for a language pair ===

<code>
apertium-xxx-yyy.xxx-yyy.arx
</code>

The Anaphora Resolution Module itself is language-agnostic, but any module which does this well needs to be tailored to specific language pairs. That is the function of this file.

You will define language-specific patterns to detect for the antecedent indicators: prepositional phrases, indefinite noun phrases, etc. These are called markables in the arx file.

'''NOTE:''' The AR Module processes input as a stream. Each time it encounters an anaphor, '''it looks at the current sentence (up to and including the anaphor) and the last three sentences in the context, and that's where the pattern matching happens.''' Keep that in mind when you define the patterns you want it to detect.

As mentioned earlier, based on these patterns, you can give the antecedents which fit these patterns positive or negative scores.

==== How do I know what patterns I should detect? ====

All the patterns mentioned here are based on '''statistical and linguistic intuition''' about which linguistic units are more likely or less likely to be the antecedents of an anaphor.

For example, we detect prepositional phrases (PPs) because a noun which is part of a PP is '''less likely''' to be the antecedent of an anaphor, so we subtract 1 from its score.

''The groups of parliament will meet to share their concern.''

Here ''of parliament'' is a prepositional phrase, and hence ''parliament'' gets 1 subtracted from its score, as it is less likely to be the antecedent of the later anaphor ''their''.

==== Compulsory Indicators ====

There are two indicators that are always used '''in addition to the user-defined indicators:''' First NP and Referential Distance.

* '''First NP:''' gives a +1 score to the first noun in each sentence.

This is based on the intuition that first NPs are usually subjects, which are more likely to be antecedents.

* '''Referential Distance:''' gives a +2 score to a noun in the same sentence as the anaphor; +1 to a noun in the previous sentence; 0 to a noun in the sentence before that; and -1 to a noun in the sentence before that.

This is based on the intuition that the antecedent of an anaphor usually lies close to the anaphor.
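The two compulsory indicators can be sketched directly from the scores above (hypothetical Python helpers, not the module's code):

```python
def first_np(is_first_noun: bool) -> int:
    """+1 for the first noun of each sentence."""
    return 1 if is_first_noun else 0

def referential_distance(candidate_sent: int, anaphor_sent: int) -> int:
    """+2 same sentence, +1 one sentence back, 0 two back, -1 three back."""
    return {0: 2, 1: 1, 2: 0, 3: -1}[anaphor_sent - candidate_sent]

print(referential_distance(2, 3))  # noun in the previous sentence -> 1
```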

==== User Defined Indicators ====

Through the arx files you can, using your own linguistic intuition, define more indicators for antecedents and assign them scores. For now, these indicators are applied when looking for an antecedent for every anaphor; in the future we plan to let you add indicators conditional on specific anaphors.

An arx file consists of these sections:

===== def-parameters =====

Here you define what the module will identify as anaphors and as antecedents. All antecedents will be given scores in the module and all anaphors will have an antecedent attached to them in the module's output.

<pre>
<section-parameters>
  <anaphor>
    <parameter-item has-tags="det pos"/>
    <parameter-item has-tags="prn"/>
  </anaphor>

  <antecedent>
    <parameter-item has-tags="n"/>
    <parameter-item has-tags="np"/>
  </antecedent>

  <delimiter>
    <parameter-item has-tags="sent"/>
  </delimiter>
</section-parameters>
</pre>

In this example we tell the module that our anaphors could be pronouns or possessive determiners, and that our antecedents could be nouns.

You also define the delimiter here: the tag <sent>, which marks the ends of sentences and is used to keep a context of the current and last three sentences. Make sure this is defined in your arx file.
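The context window the delimiter enables can be sketched like this (a hypothetical Python sketch; the module itself is written in C++):

```python
from collections import deque

class Context:
    """Keep the current sentence plus the last three,
    closing a sentence whenever the <sent> delimiter appears."""
    def __init__(self):
        self.previous = deque(maxlen=3)  # at most three full sentences
        self.current = []

    def add(self, lu: str):
        self.current.append(lu)
        if "<sent>" in lu:               # the delimiter ends a sentence
            self.previous.append(self.current)
            self.current = []

ctx = Context()
for lu in ["a", ".<sent>", "b", ".<sent>", "c", ".<sent>", "d", ".<sent>", "e"]:
    ctx.add(lu)
print(len(ctx.previous), ctx.current)  # 3 ['e']
```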

===== def-cats =====

As in transfer, here you define the categories that you will later use to detect the patterns for the markables.

<pre>
<section-def-cats>
  <def-cat n="det">
    <cat-item has-tags="det"/>
    <cat-item has-tags="det pos"/>
  </def-cat>

  <def-cat n="adj">
    <cat-item has-tags="adj"/>
  </def-cat>

  <def-cat n="nom">
    <cat-item has-tags="n"/>
  </def-cat>
  ...
</section-def-cats>
</pre>

===== def-markables =====

Here you define the markables and, for each markable, all the patterns you want detected; for example, all the syntactic patterns that form prepositional phrases.

<pre>
<section-markables>
  <markable n="PP">
    <pattern>
      <pattern-item n="prep"/>
      <pattern-item n="det"/>
      <pattern-item n="nom"/>
    </pattern>
    <pattern>
      <pattern-item n="prep"/>
      <pattern-item n="nom"/>
    </pattern>
    <pattern>
      <pattern-item n="prep"/>
      <pattern-item n="det"/>
      <pattern-item n="adj"/>
      <pattern-item n="nom"/>
    </pattern>
    ...
    <score n="-1"/>
  </markable>
</section-markables>
</pre>


With each markable you can also specify an integer score, which is used when calculating the antecedent scores.

If you set the score to -1, every antecedent which is part of that markable gets 1 subtracted from its score when the algorithm calculates the final scores to find the antecedent of an anaphor.
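Applying a markable's score then amounts to the following (an illustrative Python sketch; the function and variable names are made up):

```python
def apply_markable(scores, members, markable_score=-1):
    """Add the markable's score (here -1, as for the PP markable above)
    to every candidate antecedent that is part of a matched pattern."""
    for lemma in members:
        scores[lemma] = scores.get(lemma, 0) + markable_score
    return scores

print(apply_markable({"parliament": 1, "groups": 2}, {"parliament"}))
# {'parliament': 0, 'groups': 2}
```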

'''Example File: [https://github.com/apertium/apertium-eng-spa/blob/anaphora-transfer/apertium-eng-spa.spa-eng.arx apertium-eng-spa.spa-eng.arx]'''

'''NOTE:''' All indicators are applied to all possible antecedents to give the final score of each antecedent. As mentioned earlier, the highest-scored antecedent is then chosen as the final antecedent.

=== Modifying Transfer Rules (t1x) ===

Once the antecedent is attached to an anaphor, we can modify it in transfer to produce the correct translation in the target language.

To do this, you can write a macro which changes the output based on the attached antecedent '''(side="ref")'''.

'''Example: Changes made to [https://github.com/apertium/apertium-eng-spa/pull/13/files#diff-8bbb1dad5ab918be99ea69cb408f413c apertium-eng-spa.spa-eng.t1x]'''

== How to Use this module ==

First, follow the instructions in the [https://github.com/apertium/apertium-anaphora/blob/master/README README] to install the module on your system.

USAGE:
<pre>
apertium-anaphora [-z] arx_file [input [output]]
</pre>

Options:
* -z / --null-flushing : null-flushing output on \0
* -h / --help : shows this message
* -d / --debug : debug mode

By default, the module takes its input from stdin and gives the output to stdout.

'''Important:''' In the [[pipeline]], the Anaphora Resolution module '''has to''' be after Lexical Selection and right before Transfer.

USAGE in Pipeline (for Spanish-English):
<pre>
apertium-deshtml < input.txt | lt-proc spa-eng.automorf.bin | apertium-tagger -g $2 spa-eng.prob |
apertium-pretransfer | lt-proc -b spa-eng.autobil.bin | lrx-proc -m spa-eng.autolex.bin |
apertium-anaphora apertium-eng-spa.spa-eng.arx | apertium-transfer -b apertium-eng-spa.spa-eng.t1x spa-eng.t1x.bin |
apertium-interchunk apertium-eng-spa.spa-eng.t2x spa-eng.t2x.bin | apertium-postchunk apertium-eng-spa.spa-eng.t3x spa-eng.t3x.bin |
lt-proc -g spa-eng.autogen.bin | lt-proc -p spa-eng.autopgen.bin | apertium-retxt
</pre>

== Evaluation of the module ==

The Anaphora Resolution module was tested on multiple languages with some basic indicators. The results below come from a manual evaluation.

=== Spanish - English ===

Spanish has a possessive determiner ''su'', which can translate to ''his/her/its'' in English, so we need to resolve it as an anaphor.

The Anaphora Resolution Module was run on a corpus of 1000 sentences from Europarl, using this [https://github.com/apertium/apertium-eng-spa/blob/anaphora-transfer/apertium-eng-spa.spa-eng.arx arx file].

Out of these 1000 sentences, 258 sentences had at least one possessive determiner. The translations of these sentences with and without the Anaphora Resolution module in the pipeline were evaluated comparatively. The results are as follows:

==== Results ====

* No Change, Correct: Anaphora Resolution didn't change the anaphor and it is correct.
* No Change, Incorrect: Anaphora Resolution didn't change the anaphor and it is incorrect, i.e. it should have changed.
* Change, Correct: Anaphora Resolution changed the anaphor and it is now correct (was incorrect earlier).
* Change, Incorrect: Anaphora Resolution changed the anaphor and it is now incorrect (was correct earlier).

{| class="wikitable"
|-
! colspan="2" | No Change
! colspan="2" | Change
|-
| Correct
| Incorrect
| Correct
| Incorrect
|-
| '''33'''
| '''53'''
| '''32'''
| '''2'''
|}

Number of anaphors translated correctly without the Anaphora Resolution module and with:

{| class="wikitable"
|-
! Total 3rd Person Anaphors
! Without Anaphora Resolution
! With Anaphora Resolution
|-
|
| Correct
| Correct
|-
| '''120'''
| '''35'''
| '''65'''
|}

'''Accuracy of Anaphora Resolution with the module on Spa-Eng: 54.17%''' (65/120)

'''Accuracy of Anaphora Resolution without the module on Spa-Eng: 29.17%''' (35/120)

'''Note''': Out of the 258 sentences, 120 had third person pronouns. The rest had first or second person pronouns, which were already being translated correctly and are largely out of the scope of this module.

==== Observations ====

* A lot of the errors are made because the tagger gives the singular tag to group nouns such as ''Parliament, Commission, Group''. If this is fixed, the results should improve significantly.
* Since the module only outputs ''his/her/their'' right now, the examples with ''its'' haven't been resolved. Adding this would improve the results as well.
* '''The indicators one uses are corpus dependent'''. This corpus contains dialogue, hence we added an impeding indicator for patterns such as <NP> <comma>, as that NP is usually the addressee.

For detailed observations, refer to the [https://drive.google.com/file/d/18MSisDqrq0DDAHhzTkuBcsj9INURJu50/view?usp=sharing Complete Evaluation].

=== Catalan - Italian ===

A corpus was created from a freely available journal, and random paragraphs were analysed.

In total, 108 cases of anaphora for the 3rd person possessive determiner, translating from Catalan to Italian, were analysed. What matters in this case is the number of the referent, not its gender. Without anaphora resolution, the referent is always assumed to be singular.

==== Results ====

* No Change, Correct: Anaphora Resolution didn't change the anaphor and it is correct.
* No Change, Incorrect: Anaphora Resolution didn't change the anaphor and it is incorrect, i.e. it should have changed.
* Change, Correct: Anaphora Resolution changed the anaphor and it is now correct (was incorrect earlier).
* Change, Incorrect: Anaphora Resolution changed the anaphor and it is now incorrect (was correct earlier).

{| class="wikitable"
|-
! colspan="2" | No Change
! colspan="2" | Change
|-
| Correct
| Incorrect
| Correct
| Incorrect
|-
| '''76'''
| '''13'''
| '''5'''
| '''14'''
|}

Number of anaphors translated correctly without the Anaphora Resolution module and with:

{| class="wikitable"
|-
! Total 3rd Person Anaphors
! Without Anaphora Resolution
! With Anaphora Resolution
|-
|
| Correct
| Correct
|-
| '''108'''
| '''90'''
| '''81'''
|}

==== Observations ====

* In this corpus, just choosing singular gives correct translations in 90/108 examples, so the anaphors aren't evenly spread out.
* While the Anaphora Resolution module gives worse results here, the configuration can be tuned to give much better results for this corpus.

For detailed observations, refer to the [https://drive.google.com/file/d/18MSisDqrq0DDAHhzTkuBcsj9INURJu50/view?usp=sharing Complete Evaluation] and go to '''Catalan-Italian'''.