Documentation and TODOs for apertium-sme-nob's sme morphological analyser from Giellatekno.
==Description==
{{TOCD}}
The sme morphological analyser is a trimmed and tag-changed version of the one in Giellatekno. Its source files in the pair are:

* '''apertium-sme-nob.sme.lexc''' (lexicon)
* '''apertium-sme-nob.sme.twol''' (two-level morphology)

The twol file is a plain copy of twol-sme.txt. The lexc file is a concatenation of sme-lex.txt and the various POS-sme-lex.txt files in gt/sme/src (e.g. verb-sme-lex.txt, adj-sme-lex.txt); however, for each of those POS files, the apertium lexc file only contains lines where the lemma exists in bidix (see [[#Updating the lexc when giellatekno/bidix changes|below]] for how this is kept up to date).

===Trimming===
Trimming happens using the same HFST method as in the Turkic pairs etc. [[Automatically_trimming_a_monodix#Compounds_vs_trimming_in_HFST|Compounds are not handled correctly by this method]].
===Tagset changes===
The tagset in apertium-sme-nob is closer to Apertium's tagset, and thus a bit different from the one in Giellatekno:

* We remove Err/Orth/Usage tags (+Use/Sub, etc.)
* We remove any derivational analyses that aren't yet handled by transfer/bidix
** https://victorio.uit.no/langtech/trunk/langs/sme/tools/mt/apertium/filters/remove-derivation-strings-modifications.nob.regex
** https://victorio.uit.no/langtech/trunk/langs/sme/src/filters/remove-illegal-derivation-strings.regex
* We reorder some tags
** https://victorio.uit.no/langtech/trunk/langs/sme/src/filters/reorder-subpos-tags.sme.regex
* We change the format of tags, so +N becomes <code><n></code>, +Der/PassL becomes <code><der_passl></code>, etc.
* We change certain tags
** https://victorio.uit.no/langtech/trunk/langs/sme/tools/mt/apertium/tagsets/apertium.nob.relabel
** https://victorio.uit.no/langtech/trunk/langs/sme/tools/mt/apertium/tagsets/apertium.postproc.relabel
** https://victorio.uit.no/langtech/trunk/langs/sme/tools/mt/apertium/tagsets/modify-tags.nob.regex

We don't use the regex rules (gt/common/src/*.xfst) to remove Der2 and Use/Sub tags (though we may start doing this later). Tag fixes happen using a few two-level rules and similar that are composed onto the analyser:

# '''[http://apertium.svn.sourceforge.net/viewvc/apertium/staging/apertium-sme-nob/xfst2apertium.useless.twol?revision=HEAD xfst2apertium.useless.twol]'''
#* This file is composed first
#* It also removes the - from split compound lemmas so that they may be looked up in bidix
# '''[http://apertium.svn.sourceforge.net/viewvc/apertium/staging/apertium-sme-nob/xfst2apertium.hashtags.twol?revision=HEAD xfst2apertium.hashtags.twol]'''
#* This file is composed second
#* It removes the #-mark between those compounds that are lexicalised (non-dynamic)
#* It also ensures the +G3 tag occurs ''after'' the +N tag, a common error in the lexc files
# '''[http://apertium.svn.sourceforge.net/viewvc/apertium/staging/apertium-sme-nob/xfst2apertium.relabel?revision=HEAD xfst2apertium.relabel]'''
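To give a concrete feel for the tag-format change, here is a minimal Python sketch of the +Tag to <code><tag></code> conversion. This is purely illustrative: in the pair the conversion is done by the filter/relabel files listed above, composed onto the transducer, and the example analyses are just ordinary dictionary words chosen for the sketch.

<pre>
# Illustrative only: the real relabelling is done by the filter/relabel
# files listed above, composed onto the HFST transducer, not by Python.

def gt_to_apertium(analysis):
    """Convert a Giellatekno-style analysis such as 'viessu+N+Sg+Nom'
    into Apertium-style 'viessu<n><sg><nom>'."""
    lemma, _, tagstring = analysis.partition('+')
    tags = tagstring.split('+') if tagstring else []
    # +N -> <n>, +Der/PassL -> <der_passl>, etc.
    return lemma + ''.join('<%s>' % t.replace('/', '_').lower() for t in tags)

print(gt_to_apertium('viessu+N+Sg+Nom'))              # viessu<n><sg><nom>
print(gt_to_apertium('bivdit+V+TV+Der/PassL+V+Inf'))  # bivdit<v><tv><der_passl><v><inf>
</pre>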
===Updating the lexc when giellatekno/bidix changes===
We keep the lexc file up to date with the bidix and the Giellatekno entries using the python script '''update-morph/update-lexc.py''' and a per-user configuration file based on '''update-morph/langs.cfg.in'''. The configuration file says which -lex.txt source files are to be plain copied, which are to be trimmed, and which POS tags (if any) to restrict the trimming to. For trimming, the script loads the compiled bidix FST ('''sme-nob.autobil.bin''') and, for each line in the files that are to be trimmed, checks whether the lemma (plus possible POS tags) can be analysed with that FST. So if noun-sme-lex.txt has
<pre>
beron GAHPIR ;
beroštupmi:berošt UPMI ;
beroštus#riidu:beroštus#rij'du ALBMI ;
beroštus#vuostálasvuohta+CmpN/SgG+CmpN/DefPlGen:beroštus#vuostálasvuoh'ta LUONDU ;
</pre>
and the config says to append <code><N></code> when trimming nouns, it will try sending <code>^beron<N>$ ^beroštupmi<N>$ ^beroštusvuostálasvuohta<N>$</code> through sme-nob.autobil.bin; if beron gave a match, that line will be included, and if beroštupmi didn't, it will be excluded, etc. (If the bidix actually specified <code>^beron<N><Actor>$</code>, the line would still be included, since it's a partial match; it's not perfect, but it saves a lot of trouble.)
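The bidix check can be approximated outside the script too. Below is a minimal Python 3 sketch of the idea (this is not the actual update-lexc.py; it shells out to lttoolbox's <code>lt-proc -b</code> instead of loading the FST directly, and assumes lt-proc is on your PATH). With <code>-b</code>, lexical units that have no bidix translation come back marked with <code>@</code>:

<pre>
# Minimal sketch (not the real update-lexc.py): keep a lexc line only if
# the lemma gets a translation from the compiled bidix.
# Assumes lt-proc is on PATH and sme-nob.autobil.bin has been compiled.
import subprocess

def has_translation(lemma, tags='<N>', autobil='sme-nob.autobil.bin'):
    unit = '^%s%s$\n' % (lemma, tags)
    out = subprocess.run(['lt-proc', '-b', autobil],
                         input=unit, capture_output=True, text=True).stdout
    # lt-proc -b marks entries missing from bidix with '@' after the slash
    return '/@' not in out

for lemma in ['beron', 'beroštupmi', 'beroštusriidu', 'beroštusvuostálasvuohta']:
    print(lemma, has_translation(lemma))
</pre>

The real script of course has to extract the lemma from each lexc line first (dropping the part after the colon, the continuation lexicon and the # compound marks, as described above).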
So to add new words to the lexc:

# the word has to be in Giellatekno's lexc
# the word has to be in bidix ('''apertium-sme-nob.sme-nob.dix''') with a translation
# the bidix has to be compiled (<code>make sme-nob.autobil.bin</code>)
# then you can run <code>/usr/bin/python2.6 update-morph/update-lexc.py --config=update-morph/my-langs.cfg</code>
#* you create update-morph/my-langs.cfg by copying update-morph/langs.cfg.in and editing the SRC line to point to where you checked out the sme morphology from Giellatekno svn
# and then run <code>make</code> to compile the analyser

For simple copy-pasting, the last three steps are:

<pre>
make sme-nob.autobil.bin &&
/usr/bin/python2.6 update-morph/update-lexc.py --config=update-morph/my-langs.cfg &&
make
</pre>

(given that /usr/bin/python2.6 is your python2 version, and that your personal copy of langs.cfg.in is stored as update-morph/my-langs.cfg)
==TODO==
===regex vs twol===
* Investigate if we can use at least some of the xfst scripts from giellatekno instead of xfst2apertium.useless.twol.
===Misc===
* add entries from bidix that are missing from the analyser
* regex for URLs (we don't want telefonkatalogen.no => *telefonkatalogen.nå)
* regex for acronyms like "GsoC:as" (tokenisation-dependent...)
* 8632: <code>SUBSTITUTE:TV_IV (V TV) (V IV) FAUXV (0 ("lávet"));</code> -- this should be analysed as both, and disambiguated
===Typos===
I've seen "odda" in many places (for "ođđa"); can we just add these to the analyser? (It would be cool to have charlifter/Diacritic Restoration, but until then…)
* a list of high-frequency typos where the correction has an analysis
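One way to bootstrap such a list: take frequent unanalysed tokens, generate diacritic-restored candidates (d→đ, s→š, c→č, z→ž, a→á, …) and keep those the analyser accepts. A rough Python 3 sketch, assuming <code>hfst-lookup</code> is installed and the compiled analyser is available as an HFST transducer (the file name below is an assumption, not necessarily what the pair's Makefile produces):

<pre>
# Rough sketch: find typo candidates whose diacritic-restored form is
# accepted by the analyser.  The analyser file name is an assumption;
# hfst-lookup prints '+?' for words it cannot analyse.
import subprocess
from itertools import product

ANALYSER = 'sme-nob.automorf.hfst'
RESTORE = {'d': 'đ', 's': 'š', 'c': 'č', 'z': 'ž', 'a': 'á', 't': 'ŧ', 'n': 'ŋ'}

def has_analysis(word):
    out = subprocess.run(['hfst-lookup', ANALYSER],
                         input=word + '\n', capture_output=True, text=True).stdout
    return '+?' not in out

def candidates(typo):
    # all ways of restoring diacritics, e.g. odda -> ođđa, ođda, odđa, ...
    options = [(c, RESTORE[c]) if c in RESTORE else (c,) for c in typo]
    return {''.join(p) for p in product(*options)} - {typo}

for typo in ['odda']:   # in practice: frequent tokens the analyser rejects
    restored = [c for c in candidates(typo) if has_analysis(c)]
    if restored:
        print(typo, '=>', restored)
</pre>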
===Dashes===
lexc handles dashes by adding them literally (like a lemma); doing that with a bidix pardef would be very messy (also, doesn't it give issues with lemma-matching in CG?).
===Multiwords===
Add simple multiwords and fixed expressions to the analyser, e.g.:

* dasa lassin => i tillegg (til det)
* dán áigge => for tiden
* mun ieš => meg selv
* bures boahtin => velkommen
* Buorre beaivi => God dag
* leat guollebivddus => å fiske
* maid ban dainna => hva i all verden
* jagis jahkái => fra år til år
* oaidnaleapmái => 'see you'
* ovdamearkka => for eksempel
* Mo manná? => Hvordan går det?
* ja nu ain => og så videre

(Some of these MWEs might be very Apertium-specific, but in that case we just keep our own file and append the entries with update-lexc.sh.)

Also, oktavuohta.N.Sg.Loc turns into an MWE preposition in nob:

* oktavuođas => i forbindelse med

It'd be a lot simpler for transfer to just analyse such a fixed expression as oktavuođas.Po in the first place.