Anaphora resolution module

Here you will find the documentation for the Anaphora Resolution module created during the Google Summer of Code 2019 (Proposal: http://wiki.apertium.org/wiki/User:Khannatanmai).

Repo: https://github.com/apertium/apertium-anaphora (Final Module)

Repo: https://github.com/khannatanmai/apertium-anaphora (used during GSoC)

What is Anaphora Resolution?

Anaphora Resolution is the process of resolving references to earlier items in discourse.

Anaphor: A linguistic unit that refers to an earlier linguistic unit in discourse.
Antecedent: The linguistic unit that the anaphor refers to.

The most common form of this is Pronominal Anaphora, where the anaphor is a pronoun and the antecedent is a noun.

For example,

Jessica is in the sixth grade and this is her father.

Here, "her" is the anaphor, and its antecedent is "Jessica".

Anaphora Resolution in Machine Translation

Anaphora Resolution is required in Machine Translation to produce correct and fluent translations. Since different languages encode information differently, resolving the antecedent of the anaphors in text becomes essential in several language pairs.

For example,

  • Spanish -> English
La chica comió su manzana
Translation: The girl ate his/her/its apple
Resolved Anaphora: The girl ate her apple
  • Add more examples

Anaphora Resolution in Apertium

Anaphora Resolution happens in two stages in the pipeline: in the Anaphora Resolution module and in the Transfer stage.

We find the antecedent and attach it to the anaphor in the Anaphora Resolution module and select the correct pronoun in the Transfer stage.

Anaphora Resolution Module

In the Apertium pipeline, Anaphora Resolution happens after the Lexical Selection module, right before Transfer.

The output from the Lexical Selection module is analysed, and for each anaphor, the context is processed and the perceived antecedent is attached to the Lexical Unit of the anaphor. It is then sent to Transfer.

If the input sentence is Els grups del Parlament han mostrat aquest dimarts el seu suport al batle d'Alaró

The input to the Anaphora Resolution Module is:

^El<det><def><m><pl>/The<det><def><m><pl>$ ^grup<n><m><pl>/group<n><pl>$ ^de<pr>/of<pr>/from<pr>$ 
^el<det><def><m><sg>/the<det><def><m><sg>$ ^Parlament<n><m><sg>/Parliament<n><sg>$ 
^haver<vbhaver><pri><p3><pl>/have<vbhaver><pri><p3><pl>$ 
^mostrar<vblex><pp><m><sg>/show<vblex><pp><m><sg>/display<vblex><pp><m><sg>$ 
^aquest<det><dem><m><sg>/this<det><dem><m><sg>$ ^dimarts<n><m><sp>/Tuesday<n><ND>$ 
^el seu<det><pos><m><sg>/his<det><pos><m><sg>$ ^suport<n><m><sg>/support<n><sg>$ 
^a<pr>/at<pr>/in<pr>/to<pr>$ ^el<det><def><m><sg>/the<det><def><m><sg>$ 
^*batle/*batle$ ^de<pr>/of<pr>/from<pr>$ ^*Alaró/*Alaró$^.<sent>/.<sent>$

The output is as follows:

^El<det><def><m><pl>/The<det><def><m><pl>/$ ^grup<n><m><pl>/group<n><pl>/$ ^de<pr>/of<pr>/from<pr>/$ 
^el<det><def><m><sg>/the<det><def><m><sg>/$ ^Parlament<n><m><sg>/Parliament<n><sg>/$ 
^haver<vbhaver><pri><p3><pl>/have<vbhaver><pri><p3><pl>/$ 
^mostrar<vblex><pp><m><sg>/show<vblex><pp><m><sg>/display<vblex><pp><m><sg>/$ 
^aquest<det><dem><m><sg>/this<det><dem><m><sg>/$ ^dimarts<n><m><sp>/Tuesday<n><ND>/$ 
^el seu<det><pos><m><sg>/his<det><pos><m><sg>/group<n><pl>$ ^suport<n><m><sg>/support<n><sg>/$ 
^a<pr>/at<pr>/in<pr>/to<pr>/$ ^el<det><def><m><sg>/the<det><def><m><sg>/$ 
^*batle/*batle/$ ^de<pr>/of<pr>/from<pr>/$ ^*Alaró/*Alaró/$^.<sent>/.<sent>/$

So we can see that the anaphor el seu (a possessive determiner)

^el seu<det><pos><m><sg>/his<det><pos><m><sg>$

gets modified to

^el seu<det><pos><m><sg>/his<det><pos><m><sg>/group<n><pl>$

as we attach its antecedent, group, to it. (Note that in the output every lexical unit gains an extra field after the translations; it stays empty for LUs that are not anaphors.)

This is then passed on to Transfer.

Transfer

Since Apertium originally didn't deal with anaphora resolution, it used a default translation ("his" in the above example).

Now the Anaphora Resolution module attaches the antecedent to the anaphor's LU, which we can use to select the correct pronoun with a macro in the transfer rules of the language pair (t1x).

These rules represent logic similar to:

  • if the antecedent is plural, change his to their.
  • if the antecedent is feminine, change his to her.

How does it work?

Anaphora Resolution is usually done either using parse trees or using machine learning. However, obtaining accurate parse trees, or accurate results from an ML algorithm, requires a lot of data.

Apertium deals largely with low-resource languages: parse trees aren't available during translation, and the language pairs usually don't have enough parallel data to train ML algorithms that give accurate results.

The algorithm this module uses to resolve anaphora needs neither parse trees nor training data. It uses saliency scores to select an antecedent from the context.

Mitkov's Antecedent Indicators

In this algorithm, every time we encounter an anaphor, we collect a list of all possible antecedents in the current sentence and the last 3 sentences.

Then using some indicators, we give each potential antecedent a positive or a negative score. These indicators are chosen based on a knowledge of the language pair and statistical analysis.

Some of these indicators can be language-pair specific, so they are completely customisable via the .arx files.

Here are some common indicators:

Boosting Indicators (given a positive score)

  • First NPs
  • Referential Distance: potential antecedents closer to the anaphor are given a higher score, as they are more likely to be the antecedent.

Impeding Indicators (given a negative score)

  • Indefiniteness: Indefinite NPs are penalised
  • Prepositional NPs: NPs which are part of a PP are penalised.

After this is done, the highest scored potential antecedent is chosen as the final antecedent and attached to the anaphor.

Reference: Multilingual Anaphora Resolution, Ruslan Mitkov

How to Add this Module to a Language Pair

First, clone the repo: https://github.com/apertium/apertium-anaphora and follow the instructions in the README to install the module.

You can use the Anaphora Resolution module with your language pair in two steps: Creating an arx file for the Anaphora Module and modifying transfer rules (t1x). Here's how to do this:

Creating an arx file for a language pair

apertium-xxx-yyy.xxx-yyy.arx

The Anaphora Resolution module itself is language-agnostic, but any module that does this needs to be tailored to specific language pairs. That is the function of this file.

You will define language-specific patterns for the antecedent indicators (prepositional phrases, indefinite noun phrases, etc.). These are called markables in the arx file.

NOTE: The Anaphora Resolution module processes input as a stream. Each time it encounters an anaphor, it looks at the current sentence (up to and including the anaphor) and the last three sentences of context, and that is where the pattern matching happens. Keep this in mind when you define the patterns you want it to detect.

As mentioned earlier, based on these patterns, you can give the antecedents which fit these patterns positive or negative scores.

How do I know what patterns I should detect?

All the patterns mentioned here are based on statistical and linguistic intuition about which linguistic units are more likely or less likely to be the antecedents of an anaphor.

For example, the reason we detect prepositional phrases (PPs) is that a noun which is part of a PP is less likely to be the antecedent of an anaphor, so we subtract 1 from its score.

The groups of parliament will meet to share their concern.

Here of parliament is a prepositional phrase, so parliament gets 1 subtracted from its score: it is less likely to be the antecedent of the later anaphor, their.

Compulsory Indicators

There are two indicators that are always used in addition to the user defined indicators: First NP and Referential Distance.

  • First NP: Gives a +1 score to the first noun in each sentence.

This is based on the intuition that first NPs are usually subjects, which are more likely to be antecedents.

  • Referential Distance: Gives a +2 score to a noun in the same sentence as the anaphor, +1 to a noun in the previous sentence, 0 to a noun two sentences back, and -1 to a noun three sentences back.

This is based on the intuition that the antecedent of an anaphor is usually closer to the anaphor.
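
As a rough worked example, assume that only these two compulsory indicators and a user-defined PP penalty of -1 (as in the previous section) apply to the sentence from before:

The groups of parliament will meet to share their concern.

groups:      First NP +1, same sentence as "their" +2          = +3
parliament:  same sentence as "their" +2, inside a PP -1       = +1

groups scores higher, so it is chosen as the antecedent of their.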

User Defined Indicators

Through the arx files you can, using your own linguistic intuition, define more indicators for antecedents and give them scores. By default an indicator is applied when looking for an antecedent for any anaphor, but a score can also be restricted to a particular anaphor-antecedent parameter (see the scores section below).

An arx file consists of these sections:

def-parameters

Here you define what the module will identify as anaphors and as antecedents. All antecedents will be given scores in the module and all anaphors will have an antecedent attached to them in the module's output.

You can define multiple types of antecedent-anaphor pairs, so that your scores don't all have to apply to every kind of anaphor. You can also use exclude-tags in a parameter-item to list tags that must not match. You can define them like this:

<section-parameters>
	<delimiter>
		<parameter-item has-tags="sent"/>
	</delimiter>

	<def-parameter n="detpos">
		<anaphor>
			<parameter-item has-tags="det pos"/>
		</anaphor>

		<antecedent>
			<parameter-item has-tags="n" exclude-tags="rel"/>
			<parameter-item has-tags="np" exclude-tags="rel"/>
		</antecedent>
	</def-parameter>

	<def-parameter n="verbal">
		<anaphor>
			<parameter-item has-tags="v"/>
		</anaphor>

		<antecedent>
			<parameter-item has-tags="n"/>
			<parameter-item has-tags="np"/>
			<parameter-item has-tags="prn" lemma="it" />
		</antecedent>
	</def-parameter>

</section-parameters>

In this example we tell the module that, for the parameter "detpos", the anaphors are possessive determiners and the antecedents can be nouns or proper nouns (excluding those carrying the rel tag). We also define another parameter, "verbal", where the anaphor is a verb and the antecedents can be nouns, proper nouns, or the pronoun it.

You also define the delimiter here: the tag <sent>, which marks the end of a sentence and is used to keep a context of the current and previous three sentences. Make sure this is defined in your arx file.

def-cats

As in transfer, here you define the categories that you will later use to detect the patterns for the markables.

<section-def-cats>
		
	<def-cat n="det">
		<cat-item has-tags="det"/>
		<cat-item has-tags="det pos"/>
	</def-cat>

	<def-cat n="adj">
		<cat-item has-tags="adj"/>
	</def-cat>

	<def-cat n="nom">
		<cat-item has-tags="n"/>
	</def-cat>
.
.
.

</section-def-cats>

def-markables

Here you define the markables and all the patterns that you want detected for each markable; for example, all the syntactic patterns that form prepositional phrases.

<section-markables>
	<markable n="PP">
		<pattern>
			<pattern-item n="prep"/>
			<pattern-item n="det"/>
			<pattern-item n="nom"/>
		</pattern>
		<pattern>
			<pattern-item n="prep"/>
			<pattern-item n="nom"/>
		</pattern>
		<pattern>
			<pattern-item n="prep"/>
			<pattern-item n="det"/>
			<pattern-item n="adj"/>
			<pattern-item n="nom"/>
		</pattern>
                .
                .
                .
                <score n="-1"/>
	</markable>

        <markable n="IP">
	        <pattern>
			<pattern-item n="ind"/>
			<pattern-item n="nom"/>
		</pattern>
		<pattern>
			<pattern-item n="ind"/>
			<pattern-item n="adj"/>
			<pattern-item n="nom"/>
		</pattern>

		<score n="-1"/> <!-- This gives a -1 score to any antecedent that is part of a Indefinite Noun Phrase -->
        </markable>

	<markable n="PNP">
		<pattern>
			<pattern-item n="np"/>
		</pattern>

		<score n="1"/> <!-- This gives a +1 score to any antecedent that is part of a Proper Noun -->
	</markable>

	<markable n="AdNP"> <!-- NP that is being addressed , for example, "Madam President, ..."-->
		<pattern>
			<pattern-item n="nom"/>
			<pattern-item n="com"/>
		</pattern>

		<pattern>
			<pattern-item n="nom"/>
			<pattern-item n="nom"/>
			<pattern-item n="com"/>
		</pattern>

		<score n="-2"/>
	</markable>
	
	<markable n="Cop"> 
		<pattern>
			<pattern-item n="nom"/>
			<pattern-item n="cop"/>
			<pattern-item n="anaphor"/>
		</pattern>

		<score n="-5" parameter="detpos"/>
	</markable>
</section-markables>


With each markable you can also specify an integer score, which is used when calculating the final scores.

If you put the score as -1, every antecedent which is a part of that markable gets 1 subtracted from its score when the algorithm is calculating the final score to find the antecedent of an anaphor.

If you don't mention a parameter with the score, the markable's score is applied to all detected antecedents. If you mention a parameter, as in the markable "Cop", the score is only applied when Anaphora Resolution is being done for that anaphor-antecedent pair (i.e. only if the anaphor is a possessive determiner).

Example File: https://github.com/apertium/apertium-anaphora/blob/master/tests/apertium-eng-spa.spa-eng.arx

Modifying Transfer Rules (t1x)

Once the antecedent is attached to an anaphor, we can modify the anaphor in transfer to produce the correct translation in the target language.

To do this, you can write a macro which changes the output based on the attached antecedent (side="ref").

Example: Changes made to apertium-eng-spa.spa-eng.t1x: https://github.com/apertium/apertium-eng-spa/pull/13/files#diff-8bbb1dad5ab918be99ea69cb408f413c
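
The linked change is pair-specific; as a minimal illustrative sketch (not the code from the pull request), a t1x macro using side="ref" might look like the following. The macro name resolve_pos_det and the attribute names nbr and gen are assumptions and must match the def-attrs of your pair.

<def-macro n="resolve_pos_det" npar="1">
	<choose>
		<when>
			<!-- antecedent attached by apertium-anaphora is plural: output "their" -->
			<test>
				<equal>
					<clip pos="1" side="ref" part="nbr"/>
					<lit-tag v="pl"/>
				</equal>
			</test>
			<let>
				<clip pos="1" side="tl" part="lem"/>
				<lit v="their"/>
			</let>
		</when>
		<when>
			<!-- antecedent is feminine: output "her" -->
			<test>
				<equal>
					<clip pos="1" side="ref" part="gen"/>
					<lit-tag v="f"/>
				</equal>
			</test>
			<let>
				<clip pos="1" side="tl" part="lem"/>
				<lit v="her"/>
			</let>
		</when>
	</choose>
</def-macro>

If no <when> matches, the default translation is kept. The macro would then be called from the rule that matches the possessive determiner, e.g. <call-macro n="resolve_pos_det"><with-param pos="1"/></call-macro>.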

Cataphora / when the "antecedent" is after the target

There is no support in apertium-anaphora for cataphora. Fortunately, most of these are close to the target (citation needed), so you can probably deal with them in transfer rules, though see https://github.com/apertium/apertium-anaphora/issues/28 .

How to Use this module

First, follow the instructions in the README to install the module in your system.

USAGE:

apertium-anaphora [-z] arx_file [input [output]]

Options:

  • -z / --null-flushing : null-flushing output on \0
  • -h / --help : shows this message
  • -d / --debug: Debug mode

By default, the module takes its input from stdin and gives the output to stdout.

Important: In the pipeline the Anaphora Resolution module has to be after Lexical Selection and right before Transfer.

USAGE in Pipeline (For Spanish-English):

apertium-deshtml < input.txt | lt-proc spa-eng.automorf.bin | apertium-tagger -g $2 spa-eng.prob |
apertium-pretransfer | lt-proc -b spa-eng.autobil.bin | lrx-proc -m spa-eng.autolex.bin |
apertium-anaphora apertium-eng-spa.spa-eng.arx | apertium-transfer -b apertium-eng-spa.spa-eng.t1x spa-eng.t1x.bin |
apertium-interchunk apertium-eng-spa.spa-eng.t2x spa-eng.t2x.bin | apertium-postchunk apertium-eng-spa.spa-eng.t3x spa-eng.t3x.bin |
lt-proc -g spa-eng.autogen.bin | lt-proc -p spa-eng.autopgen.bin | apertium-retxt

Evaluation of the module

The Anaphora Resolution module was tested on multiple language pairs with some basic indicators. The results of the evaluation, which was done manually, are presented below.

Spanish - English

Spanish has a possessive determiner su, which can translate to his/her/its in English, so we need to resolve it as an anaphor.

The Anaphora Resolution module was run on a corpus of 1,000 sentences from Europarl, using this arx file: https://github.com/apertium/apertium-eng-spa/blob/anaphora-transfer/apertium-eng-spa.spa-eng.arx

Out of these 1000 sentences, 258 sentences had at least one possessive determiner. The translations of these sentences with and without the Anaphora Resolution module in the pipeline were evaluated comparatively. The results are as follows:

Results

  • No Change, Correct: Anaphora Resolution didn't change the anaphor and it is correct.
  • No Change, Incorrect: Anaphora Resolution didn't change the anaphor, and it is incorrect, i.e. it should have changed.
  • Change, Correct: Anaphora Resolution changed the anaphor and it is now correct (was incorrect earlier).
  • Change, Incorrect: Anaphora Resolution changed the anaphor and it is now incorrect. (was correct earlier)

                 No Change                Change
            Correct   Incorrect     Correct   Incorrect
              33          53          32          2

Number of anaphors translated correctly without the Anaphora Resolution module and with:

Total 3rd Person Anaphors    Correct without Anaphora Resolution    Correct with Anaphora Resolution
          120                                 35                                    65

Accuracy of Anaphora Resolution with the module on Spa-Eng: 54.17% (65/120)

Accuracy of Anaphora Resolution without the module on Spa-Eng: 29.17% (35/120)

Note: Out of 258 sentences, 120 sentences had third person pronouns. The rest had first or second person pronouns which were anyway being translated correctly and are largely out of the scope of this module.

Observations

  • A lot of the errors are made because the tagger gives the singular tag to group nouns such as Parliament, Commission, Group. If this is fixed, the results should improve significantly.
  • Since the module only outputs his/her/their right now, all the examples with its haven't been resolved. Adding this would improve the results as well.
  • The indicators one uses are corpus-dependent. This corpus contains dialogue, so we added an impeding indicator for patterns such as <NP> <comma>, as that NP is usually the addressee.

For detailed observations, refer to the Complete Evaluation: https://drive.google.com/file/d/18MSisDqrq0DDAHhzTkuBcsj9INURJu50/view?usp=sharing

Catalan - Italian

A corpus was created from a freely available journal, and random paragraphs were analysed.

In total, 108 cases of the 3rd person possessive determiner in Catalan being translated into Italian were analysed. What matters in this case is the number of the referent, not its gender. Without anaphora resolution, the referent is always assumed to be singular.

Results

  • No Change, Correct: Anaphora Resolution didn't change the anaphor and it is correct.
  • No Change, Incorrect: Anaphora Resolution didn't change the anaphor, and it is incorrect, i.e. it should have changed.
  • Change, Correct: Anaphora Resolution changed the anaphor and it is now correct (was incorrect earlier).
  • Change, Incorrect: Anaphora Resolution changed the anaphor and it is now incorrect. (was correct earlier)

                 No Change                Change
            Correct   Incorrect     Correct   Incorrect
              76          13           5          14

Number of anaphors translated correctly without the Anaphora Resolution module and with:

Total 3rd Person Anaphors    Correct without Anaphora Resolution    Correct with Anaphora Resolution
          108                                 90                                    81

Observations

  • In this corpus, just choosing singular gives correct translations in 90/108 examples, so the anaphor types are not evenly distributed.
  • While the Anaphora Resolution module gives worse results here, the configurations can be tuned to give much better results for this corpus.

For detailed observations, refer to the Complete Evaluation (https://drive.google.com/file/d/18MSisDqrq0DDAHhzTkuBcsj9INURJu50/view?usp=sharing) and go to Catalan-Italian.