Documentation and TODOs for apertium-sme-nob's sme morphological analyser from Giellatekno.


==Description==
{{TOCD}}


The sme morphological analyser is a trimmed and tag-changed version of the one in Giellatekno.

===Trimming===

Trimming happens using the same HFST method as in the Turkic pairs etc. [[Automatically_trimming_a_monodix#Compounds_vs_trimming_in_HFST|Compounds are not handled correctly by this method]].

===Tagset changes===

The tagset in apertium-sme-nob is closer to Apertium's tagset, and thus a bit different from the one in Giellatekno.


* We remove Err/Orth and Usage tags (+Use/Sub, etc.)
* We remove any derivational analyses that aren't yet handled by transfer/bidix
** https://victorio.uit.no/langtech/trunk/langs/sme/tools/mt/apertium/filters/remove-derivation-strings-modifications.nob.regex
** https://victorio.uit.no/langtech/trunk/langs/sme/src/filters/remove-illegal-derivation-strings.regex
** [[Northern Sámi and Norwegian/Derivations|More on derivations in sme-nob]]
* We reorder some tags
** https://victorio.uit.no/langtech/trunk/langs/sme/src/filters/reorder-subpos-tags.sme.regex
* We change the format of tags, so +N becomes <n>, +Der/PassL becomes <der_passl>, etc. (see the sketch after this list)
* We change certain tags
** https://victorio.uit.no/langtech/trunk/langs/sme/tools/mt/apertium/tagsets/apertium.nob.relabel
** https://victorio.uit.no/langtech/trunk/langs/sme/tools/mt/apertium/tagsets/apertium.postproc.relabel
** https://victorio.uit.no/langtech/trunk/langs/sme/tools/mt/apertium/tagsets/modify-tags.nob.regex
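
The tag renaming in the last two points is mechanical, so a small sketch can make it concrete. This is only an illustration, not code from the pair (the pair does the renaming with the relabel/regex files linked above, applied to the analyser with HFST tools); the function name is made up, and the rule is simply extrapolated from the two examples +N becomes <n> and +Der/PassL becomes <der_passl>.

<pre>
# Illustration only, not from the pair: extrapolates the tag renaming
# convention (+N becomes <n>, +Der/PassL becomes <der_passl>) as
# "drop the leading '+', turn '/' into '_', lowercase, wrap in <>".
def apertiumise_tag(gt_tag):
    return "<" + gt_tag.lstrip("+").replace("/", "_").lower() + ">"

assert apertiumise_tag("+N") == "<n>"
assert apertiumise_tag("+Der/PassL") == "<der_passl>"
</pre>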

===Updating the lexc when giellatekno/bidix changes===
We keep the lexc file up to date with the bidix and the Giellatekno entries using the Python script '''update-morph/update-lexc.py''' and a per-user configuration file based on '''update-morph/langs.cfg.in'''. The configuration file specifies which -lex.txt source files should be copied as-is, which should be trimmed, and which POS tags the trimming should be restricted to. For trimming, the script loads the compiled bidix FST ('''sme-nob.autobil.bin''') and, for each line in the files to be trimmed, checks whether the lemma (plus any POS tags) can be analysed with that FST. So if noun-sme-lex.txt has
<pre>
beron GAHPIR ;
beroštupmi:berošt UPMI ;
beroštus#riidu:beroštus#rij'du ALBMI ;
beroštus#vuostálasvuohta+CmpN/SgG+CmpN/DefPlGen:beroštus#vuostálasvuoh'ta LUONDU ;
</pre>
and the config says to append <code><N></code> when trimming nouns, it will try sending <code>^beron<N>$ ^beroštupmi<N>$ ^beroštusvuostálasvuohta<N>$</code> through sme-nob.autobil.bin; if beron gives a match, its line is included, if beroštupmi doesn't, its line is excluded, and so on. (If the bidix actually specified <code>^beron<N><Actor>$</code>, beron would still be included since it is a partial match; this is not perfect, but it saves a lot of trouble.)
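
As a minimal sketch of that check (not the actual update-lexc.py code, which loads the FST itself), one way to ask the compiled bidix whether a lemma survives trimming is to pipe <code>^lemma<N>$</code> through <code>lt-proc -b</code> and look for the <code>@</code> that biltrans puts in front of untranslatable words. The tag <code><N></code> and the file name sme-nob.autobil.bin come from the example above; everything else is made up for illustration.

<pre>
# Sketch only (Python 3): approximate the trimming check by querying the
# compiled bidix with lt-proc -b; '@' on the target side means "no match".
import subprocess

def survives_trimming(lemma, tags="<N>", autobil="sme-nob.autobil.bin"):
    lu = "^" + lemma + tags + "$"
    out = subprocess.run(["lt-proc", "-b", autobil],
                         input=lu, capture_output=True, text=True).stdout
    return "/@" not in out

# Very simplified lemma extraction from lexc lines like the ones above:
# take the upper side, drop '#' (compound joins) and any +Cmp... tags.
for line in ["beron GAHPIR ;", "beroštupmi:berošt UPMI ;"]:
    lemma = line.split(":")[0].split()[0].split("+")[0].replace("#", "")
    print(lemma, survives_trimming(lemma))
</pre>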

So to add new words to the lexc:
# the word has to be in giellatekno's lexc
# the word has to be in bidix ('''apertium-sme-nob.sme-nob.dix''') with a translation
# the bidix has to be compiled (<code>make sme-nob.autobil.bin</code>)
# and then you can run <code>/usr/bin/python2.6 update-morph/update-lexc.py --config=update-morph/my-langs.cfg</code>
#* you create update-morph/my-langs.cfg by copying update-morph/langs.cfg.in and editing the SRC line to point to where you checked out the sme morphology from Giellatekno svn
# and then you can run <code>make</code> to compile the analyser

For simple copy-pasting, the last three steps are:
<pre>make sme-nob.autobil.bin
/usr/bin/python2.6 update-morph/update-lexc.py --config=update-morph/my-langs.cfg
make</pre>
(given that /usr/bin/python2.6 is your python2 version, and your personal copy of langs.cfg.in is stored as update-morph/my-langs.cfg)


==TODO==

===Misc===
* add entries from bidix that are missing from the analyser
** Missing [http://codepad.org/Dusebd68 nouns], [http://apertium.codepad.org/6Kr6H7RO adverbs], [http://codepad.org/7Hadok6S adjectives]


* regex for URLs (we don't want telefonkatalogen.no => *telefonkatalogen.nå)


* regex for acronyms like "GSoC:as" (tokenisation-dependent...)


* <code>8632: SUBSTITUTE:TV_IV (V TV) (V IV) FAUXV (0 ("lávet"));</code> -- this should be analysed as both, and disambiguated


===Typos===
I've seen "odda" in many places (it should be "ođđa"); can we just add these forms to the analyser? (Would be cool to have charlifter/Diacritic Restoration, but until then…)

===Dashes===
lexc handles dashes by adding them literally (as part of the lemma); doing that with a bidix pardef would be very messy (also, wouldn't it give issues with lemma-matching in CG?).


===Multiwords===

Add simple multiwords and fixed expressions to the analyser.

* dasa lassin => i tillegg (til det)
* dán áigge => for tiden
* mun ieš => meg selv
* bures boahtin => velkommen
* Buorre beaivi => God dag
* leat guollebivddus => å fiske
* maid ban dainna => hva i all verden
* jagis jahkái => fra år til år
* oaidnaleapmái => 'see you'
* ovdamearkka => for eksempel
* Mo manná? => Hvordan går det?
* ja nu ain => og så videre

(Some of these MWEs might be very Apertium-specific, but in that case we just keep our own file and append the entries with update-lexc.sh.)
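
As a rough illustration of what adding such a fixed expression could look like on the lexc side (not actual entries from the pair): a space inside a lemma has to be escaped with <code>%</code> in lexc, and the continuation lexicon name below is made up; real entries would also need the right tags.

<pre>
# Sketch only: build lexc-style lines for fixed expressions, escaping the
# space with '%'. ADV_FIXED is a made-up continuation lexicon name.
def lexc_mwe_line(mwe, contlex="ADV_FIXED"):
    return mwe.replace(" ", "% ") + " " + contlex + " ;"

for mwe in ["dasa lassin", "dán áigge", "bures boahtin"]:
    print(lexc_mwe_line(mwe))
# dasa% lassin ADV_FIXED ;
# dán% áigge ADV_FIXED ;
# bures% boahtin ADV_FIXED ;
</pre>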

Also, oktavuohta.N.Sg.Loc turns into an MWE preposition in nob:

* oktavuođas => i forbindelse med

It would be a lot simpler for transfer if such a fixed expression were analysed as oktavuođas.Po in the first place.