Chunking

From Apertium
Revision as of 13:51, 14 September 2008

Short intro

	<jacobn>	But really I have a big problem about all this "shallow transfer".
	<spectie>	shallow transfer = no parse trees
	<spectie>	basically
	<jimregan2>	yep
	<jacobn>	HOW is reordering of the phrase then going to happen!!
	<jimregan2>	we use chunking
	<jimregan2>	first we reorder words in the chunk, then we reorder chunks
	<jacobn>	Pls tell me 'bout it or point to a web page
	<jimregan2>	it's easy enough
	<jimregan2>	first, we match phrase patterns
	<jimregan2>	adj+noun
	<jimregan2>	adj+adj+noun
	<jimregan2>	from these, we make a 'pseudo lemma', with a tag containing the type - normally 'SN' (noun phrase) or 'SV' (verb phrase)
	<jimregan2>	then, we translate based on these pseudo words
	<jimregan2>	breaking the language down to its bare essentials, basically
	<jimregan2>	at the moment, I'm taking the 'hard wired' parts of the English to Spanish chunker, and adapting it for French
	<jimregan2>	changing 'más' to 'plus' in a macro, etc.
	<spectie>	but the chunks cannot be recursive

Longer intro

Our rules are based on source-language patterns; we need to use chunking for language pairs such as English-Esperanto, so the first task is to identify those patterns.

the man sees the girl

Chunking:

SN(the man) SV(sees) SN(the girl)

(Normally in English those are 'NP' and 'VP', for 'noun phrase' and 'verb phrase' respectively, but we'll stick to the established convention in Apertium.)

Two rules are needed to make those chunks; further chunking rules can match 'the tall man', 'my favourite Spanish friend', 'the prettiest Polish girl', etc. as SN, and 'was going', 'had been going', 'must have been going' as SV. We first consider these patterns separately, but tag the chunks with whatever information will be useful later.
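The pattern matching described above can be sketched as a toy chunker. This is a minimal Python sketch, not Apertium's actual transfer engine (which is driven by XML rule files); the rule table and the (lemma, tags) word representation are invented for illustration:

```python
# Toy chunker: each word is (lemma, [tags]); a rule is a tag pattern,
# the pseudo lemma to use, and the phrase type (SN or SV).
# Longer patterns are listed first so the greedy match prefers them.
CHUNK_RULES = [
    (["det", "adj", "n"], "det_adj_nom", "SN"),
    (["det", "n"], "det_nom", "SN"),
    (["prn"], "prnpers", "SN"),
    (["vblex"], "verb", "SV"),
]

def chunk(words):
    """Greedy longest-match chunking over a list of (lemma, tags) words."""
    chunks, i = [], 0
    while i < len(words):
        for pattern, pseudo_lemma, phrase_type in CHUNK_RULES:
            if [w[1][0] for w in words[i:i + len(pattern)]] == pattern:
                chunks.append((pseudo_lemma, phrase_type,
                               words[i:i + len(pattern)]))
                i += len(pattern)
                break
        else:
            # No rule matched: pass the word through as its own chunk.
            chunks.append((words[i][0], "?", [words[i]]))
            i += 1
    return chunks

sent = [("the", ["det"]), ("man", ["n"]),
        ("see", ["vblex"]),
        ("the", ["det"]), ("girl", ["n"])]
for pseudo, ptype, ws in chunk(sent):
    print(pseudo, ptype, [w[0] for w in ws])
# det_nom SN ['the', 'man']
# verb SV ['see']
# det_nom SN ['the', 'girl']
```

This gives the SN(the man) SV(sees) SN(the girl) analysis from the example, with each chunk carrying a pseudo lemma and a phrase type.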

The chunks are normally given a 'pseudo lemma' that matches the pattern that matched them ('the man' and 'my friend' will be put in a chunk called 'det_nom', etc.). The first tag added is the phrase type; after that come whatever tags are needed by the next set of rules. Essentially, we're treating phrase chunks in the same way that the morphological analyser treats lexemes ('surface forms').

So, taking 'big cat', we would get: ^adj_nom<SN><sg><CD>{^granda<ad><2><3>$ ^kato<n><2><3>$}$

(The numbers in the lemma tags mean 'take the information from chunk tag number #'; 'CD' means 'Case to be Determined' - it's not an established tag the way GD and ND are, but it's the logical one to use.)

So, with a simple SN SV SN, we can have a rule that outputs the same chunks in the same order, but changes the 'CD' of SN number 1 to 'nom' and of SN number 2 to 'acc'. All very simple.
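Such a rule can be sketched as follows, assuming the toy representation of a chunk as (pseudo-lemma, tag list) rather than Apertium's real interchunk XML format; the function name and data shapes are invented for illustration:

```python
def rule_sn_sv_sn(chunks):
    """If the pattern is SN SV SN, resolve each noun phrase's 'CD'
    (case to be determined) tag: first SN -> 'nom', second SN -> 'acc'."""
    if [tags[0] for _, tags in chunks] == ["SN", "SV", "SN"]:
        subj, verb, obj = chunks
        chunks = [
            (subj[0], ["nom" if t == "CD" else t for t in subj[1]]),
            verb,
            (obj[0], ["acc" if t == "CD" else t for t in obj[1]]),
        ]
    return chunks

sentence = [("adj_nom", ["SN", "sg", "CD"]),
            ("verb", ["SV", "past"]),
            ("adj_nom", ["SN", "sg", "CD"])]
print(rule_sn_sv_sn(sentence))
# The first SN now ends in 'nom', the second in 'acc';
# the chunks stay in the same order.
```

Note that only the chunk tags change; the words inside each chunk are untouched until unchunking, when the placeholder tags pick up the new values.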


Now, a note.

The next kind of thing we should think about is the type of sentence part that goes like this:

'the man you saw', 'the man the girl saw'

I don't know if we have to change word order here - probably not - but the nominative and accusative are SNs 2 and 1 respectively.

But think about this:

'the man my brother became'

Adding accusative here is wrong, so what can we do about it? Not much. Maybe in this specific instance, sure, but in general we can only take the common cases and hope for the best. There's been plenty of work on statistical parsing, subject identification, etc., but it's still not much better than picking the common cases and hoping for the best.

This is why we always tell people to have their translations checked by a native speaker :)

Example

I saw a signal

becomes after tagger disambiguation

^prpers<prn><subj><p1><mf><sg>$ 
^see<vblex><past>$ 
^a<det><ind><sg>$ 
^signal<n><sg>$.

which is transferred and chunked into

^prnpers<SN><p1><mf><sg>{^prpers<prn><subj><2><3><4>$}$ 
^verb<SV><past>{^vidi<vblex><past>$}$ 
^nom<SN><sg><nom>{^signalo<n><2><3><4>$}$.

and transformed by rule SN SV SN<nom> -> SN SV SN<acc>

^prnpers<SN><p1><mf><sg>{^prpers<prn><subj><2><3><4>$}$
^verb<SV><past>{^vidi<vblex><past>$}$
^nom<SN><sg><acc>{^signalo<n><2><3><4>$}$.

Note how the chunk now has the tags nom<SN><sg><acc>, and therefore ^signalo<n><2><3><4>$ gets these tags when unchunked:

^prpers<prn><subj><p1><mf><sg>$ 
^vidi<vblex><past>$ 
^signalo<n><sg><acc>$. 
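The placeholder-tag mechanism in this last step can be sketched as follows; this is an assumed simplification of what the unchunking stage does, with an invented function name and data shapes:

```python
def unchunk(chunk_tags, word_tags):
    """Replace numeric placeholder tags in a chunk-internal word with
    the chunk tag they index (chunk tags are 1-indexed, so tag '2'
    copies the chunk's second tag)."""
    out = []
    for t in word_tags:
        if t.isdigit():
            idx = int(t) - 1
            if idx < len(chunk_tags):
                out.append(chunk_tags[idx])
            # A reference past the chunk's last tag yields nothing.
        else:
            out.append(t)
    return out

# ^nom<SN><sg><acc>{^signalo<n><2><3><4>$}$ from the example above:
print(unchunk(["SN", "sg", "acc"], ["n", "2", "3", "4"]))
# ['n', 'sg', 'acc']  -- i.e. ^signalo<n><sg><acc>$; the '4' points
# past the chunk's three tags and is dropped.
```

Tag 2 picks up <sg> and tag 3 picks up <acc> from the chunk, which is exactly how ^signalo<n><2><3><4>$ becomes ^signalo<n><sg><acc>$ in the output above.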

See also

External links
