User:Mlforcada/Sandbox/basque

Revision as of 09:36, 19 November 2008

How to improve Apertium-eu-es 0.3

These are some notes on how to improve apertium-eu-es 0.3 so that it performs better for assimilation purposes and is easier for future developers to maintain.

Lexical coverage

Lexical coverage may be improved in different ways:

Regular vocabulary

  • Collecting large corpora of Basque news text and searching them for unknown words (as was done for version 0.3).
  • Using possible new vocabulary from the new version of Matxin (extracting it and converting it to our format).
  • Using existing vocabulary (especially multiword lexical units, or MWLUs) in the current dictionaries of apertium-eu-es, in particular tagging and activating untagged MWLUs.
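The first step above (searching a corpus for unknown words) can be sketched as a small script. It assumes the usual Apertium convention that the morphological analyser emits ^surface/*surface$ for words it does not know; the sample string and tokenisation below are illustrative, not real analyser output:

```python
import re
from collections import Counter

# Illustrative output of the morphological analyser on a tiny corpus.
# Unknown words carry a '*' at the start of their analysis.
analysed = ("^gizona/gizon<n><abs><sg>$ ^Tuscaloosatik/*Tuscaloosatik$ "
            "^etxea/etxe<n><abs><sg>$ ^Tuscaloosatik/*Tuscaloosatik$")

# Collect every lexical unit and count the unknown ones.
units = re.findall(r"\^([^/]+)/([^$]+)\$", analysed)
unknown = Counter(surface for surface, analysis in units
                  if analysis.startswith("*"))

# The most frequent unknown words are the best dictionary candidates.
for word, freq in unknown.most_common():
    print(word, freq)
```

Sorting by frequency matters: with a large news corpus, adding the top few thousand unknowns gives the biggest coverage gain per entry.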

Proper names

  • Including large lists of proper names (place-name gazetteers, person names, etc.).
  • Using some kind of guesser for proper names so that we don't have to include them in the dictionary; apertium-cy-en already uses such a guesser. We can look at endings. For instance, an entry like
       <e>
         <re>[A-Z]([a-z]*)</re>
         <p>
           <l>tik</l>
           <r><s n="np"/><s n="top"/></r>
         </p>
       </e>

could detect a place name such as Tuscaloosa if the text contains Tuscaloosatik, using a regular-expression entry (we would also need entries for other endings such as etik and dik).
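The guessing idea behind that entry can be sketched outside the dictionary formalism. The ending list below (tik, etik, dik) comes from the paragraph above; the function name, tag string, and matching logic are illustrative:

```python
import re

# Ablative endings mentioned above; a real guesser would need a fuller list.
# Longest endings first, so 'etik' is tried before its suffix 'tik'.
ENDINGS = ("etik", "tik", "dik")

def guess_toponym(token):
    """Return (stem, tags) if token looks like a capitalised place name
    with an ablative ending, mirroring the <re>[A-Z]([a-z]*)</re> entry
    sketched above; return None otherwise."""
    for ending in ENDINGS:
        m = re.fullmatch(r"([A-Z][a-z]*)" + ending, token)
        if m:
            return (m.group(1), "<np><top>")
    return None

print(guess_toponym("Tuscaloosatik"))    # capitalised + 'tik' ending: a match
print(guess_toponym("gizonarengatik"))   # lowercase initial: no match, None
```

As in the .dix entry, capitalisation does the heavy lifting: only tokens starting with an uppercase letter are ever guessed, so ordinary inflected common nouns are left to the regular dictionary.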


Structural transfer

Verb chunks

We need to have paradigms for the potential ("ezan") and other verb structures. Perhaps we can use information in Matxin for this and other analytical verb forms.

Having "verb chunks" (when they are continuous, which is sometimes not the case with negation) could allow us to generate correct Spanish word order for some short sentences using interchunk operations (for instance, NP-erg NP-abs VP-nor-nork --> NP-erg VP-nor-nork NP-abs).
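That reordering can be sketched as an interchunk rule over chunk category sequences. The category names follow the example above; the data structure, rule dispatch, and the sample Basque texts are illustrative:

```python
# Each chunk is (category, text); this three-chunk sequence mirrors the
# SOV pattern in the example above (texts are illustrative).
chunks = [("NP-erg", "gizonak"),
          ("NP-abs", "etxea"),
          ("VP-nor-nork", "ikusi du")]

def reorder(chunks):
    """Apply the interchunk rule NP-erg NP-abs VP-nor-nork ->
    NP-erg VP-nor-nork NP-abs; leave other sequences unchanged."""
    cats = [cat for cat, _ in chunks]
    if cats == ["NP-erg", "NP-abs", "VP-nor-nork"]:
        return [chunks[0], chunks[2], chunks[1]]
    return chunks

print([cat for cat, _ in reorder(chunks)])
# -> ['NP-erg', 'VP-nor-nork', 'NP-abs'] (subject-verb-object, Spanish order)
```

The point of the rule is that it only sees three chunk categories, no matter how long each NP is internally; that is exactly the factoring that chunking buys us.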

Noun phrases and prepositional phrases

Naming conventions

The naming of NP and PP pseudo-lemmas should be systematized. If these pseudolemmas are not used by t2x, they could be arbitrarily long and descriptive.

For instance, we have D_n_pr_d_n<SN> for "gizonaren etxea" but Det_nom<SN> for "gizona".

Regarding pseudolemmas, I think we have to review what to use as chunk categories and what to use as chunk pseudolemmas. In a quick look at the current .t2x file I have seen cases where we match pseudolemmas when categories could equally have been used, with no "lexicalization".

What should constitute a chunk?

The idea behind Apertium 2.0 was to curtail the proliferation of long, flat patterns by defining chunks as building bricks, or factors, for a later interchunk operation. This does not increase the computational power of our structural transfer, but it allows long Apertium 1 patterns to be factored into shorter chunks plus interchunk operations.

Having long chunks extends the range of interchunk operations, but at the cost of writing many long chunk rules. Focussing on those long chunks that appear frequently in corpora could be a compromise.

Having short chunks and rich interchunk operations makes the description of chunks simpler but reduces the range (length) of structural transfer operations.

We have to find a way to reconcile both, writing our interchunk operations in the most general way, so that they can operate both on short NP/PP chunks and on frequent but long NP/PP chunks. This could also help reduce the size of the structural transfer files (large files cause problems with the current interpreter).
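What "general" means here can be sketched with the genitive case: the same interchunk step should produce the same Spanish output whether "gizonaren etxea" arrives as one complex chunk or as two simple chunks. Chunk names echo this section's examples; the tuples, the attachment logic, and the Spanish surface strings are all illustrative:

```python
# Two analyses of "gizonaren etxea": one complex chunk vs. two simple chunks.
# Each chunk is (category, pseudolemma, target text); all values illustrative.
complex_chunks = [("SN", "D_n_pr_d_n", "la casa del hombre")]
simple_chunks = [("SN-gen", "Det_nom_gen", "del hombre"),
                 ("SN", "Det_nom", "la casa")]

def realise(chunks):
    """Generic interchunk step: attach a genitive SN-gen chunk after the
    SN head that follows it; complex chunks pass through untouched."""
    out, i = [], 0
    while i < len(chunks):
        cat, lemma, text = chunks[i]
        if cat == "SN-gen" and i + 1 < len(chunks) and chunks[i + 1][0] == "SN":
            out.append(chunks[i + 1][2] + " " + text)  # head noun + genitive
            i += 2
        else:
            out.append(text)
            i += 1
    return " ".join(out)

print(realise(complex_chunks))  # la casa del hombre
print(realise(simple_chunks))   # la casa del hombre
```

If the interchunk rules are written at this level of generality, long chunks become an optimisation for frequent patterns rather than a requirement, which keeps the transfer files small.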

For instance, currently we have "complex" chunks such as D_n_pr_d_n<SN> "gizonaren etxea" (genitive structure treated at the chunk stage without interchunk operation) but "simple" chunks followed by an interchunk operation in cases such as: