Northern Sámi and Norwegian/smemorf
apertium-sme-nob TODOs for the sme morphological analyser from Giellatekno.
Description
The sme morphological analyser is a trimmed version of the one in Giellatekno. It's all contained in the files
- apertium-sme-nob.sme.lexc (lexicon)
- apertium-sme-nob.sme.twol (two-level morphology)
The twol file is a plain copy of twol-sme.txt. The lexc file is a concatenation of sme-lex.txt and the various POS-sme-lex.txt files in gt/sme/src (e.g. verb-sme-lex.txt, adj-sme-lex.txt). However, for each of those POS-files, the apertium lexc file only contains lines where the lemma exists in bidix.
We keep the lexc file up to date with the bidix and the Giellatekno entries using the Python script update-morph/update-lexc.py and a configuration file based on update-morph/langs.cfg.in. The configuration file specifies which -lex.txt source files are copied verbatim, which are trimmed, and which POS tags to restrict the trimming to. For trimming, the script loads the compiled bidix FST (sme-nob.autobil.bin) and, for each line in the files to be trimmed, checks whether the lemma (plus any POS tags) can be analysed by that FST. So if noun-sme-lex.txt has
beron GAHPIR ;
beroštupmi:berošt UPMI ;
beroštus#riidu:beroštus#rij'du ALBMI ;
beroštus#vuostálasvuohta+CmpN/SgG+CmpN/DefPlGen:beroštus#vuostálasvuoh'ta LUONDU ;
and the config says to append <N> when trimming nouns, it will try sending ^beron<N>$, ^beroštupmi<N>$ and ^beroštusvuostálasvuohta<N>$ through sme-nob.autobil.bin. If beron gives a match, that line is included; if beroštupmi doesn't, it is excluded, and so on. (If the bidix actually specified ^beron<N><Actor>$, the line would still be included, since partial matches count; not perfect, but it saves a lot of trouble.)
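The partial-match behaviour described above can be sketched as follows. This is a minimal illustration, not the real update-lexc.py: a plain Python set of bidix analysis strings stands in for the compiled FST sme-nob.autobil.bin.

```python
# Sketch of the trimming check (assumption: a set of bidix entry strings
# stands in for lookups against the compiled sme-nob.autobil.bin FST).

def passes_trim(lemma, tag, bidix_entries):
    """True if ^lemma<tag>... matches some bidix entry. A prefix match is
    enough, so ^beron<N><Actor>$ in the bidix matches the query ^beron<N>."""
    query = "^{}<{}>".format(lemma, tag)
    return any(entry.startswith(query) for entry in bidix_entries)

bidix = {"^beron<N><Actor>$"}
print(passes_trim("beron", "N", bidix))       # True: partial match counts
print(passes_trim("beroštupmi", "N", bidix))  # False: line gets excluded
```

The real script additionally handles compound lemmas (stripping the # border before lookup, as in beroštusvuostálasvuohta above).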
Misc
- add entries from bidix that are missing from the analyser
- Missing nouns, adverbs, adjectives
- regex for URLs (we don't want telefonkatalogen.no => *telefonkatalogen.nå)
- regex for acronyms like "GsoC:as" (tokenisation dependent...)
8632: SUBSTITUTE:TV_IV (V TV) (V IV) FAUXV (0 ("lávet"));
-- this should be analysed as both, and disambiguated
Typos
I've seen "odda" in many places (for "ođđa"); can we just add these to the analyser? (It would be cool to have charlifter/diacritic restoration, but until then…)
- a list of high-frequency typos where the correction has an analysis
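Collecting that list could look something like the sketch below. Both the typo map and the "analysable" set are illustrative stand-ins, not real project data or the real analyser lookup.

```python
# Hypothetical sketch: keep only those high-frequency typos whose
# corrected form the analyser already knows, so each kept pair can be
# turned into an extra analyser entry.

def typo_entries(typos, analysable):
    """Return (typo, correction) pairs worth adding to the analyser."""
    return [(t, c) for t, c in sorted(typos.items()) if c in analysable]

observed = {"odda": "ođđa"}   # observed typo -> intended form
known = {"ođđa"}              # stand-in for "lemma has an analysis"
print(typo_entries(observed, known))  # [('odda', 'ođđa')]
```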
Dashes
lexc handles dashes by adding them literally (like a lemma); doing that with a bidix pardef would be very messy (and doesn't it cause issues with lemma-matching in CG?). Currently we remove the dashes in dev/xfst2apertium.useless.twol (and in certain cases re-add them in transfer as the tag <dash>), but perhaps we could add a tag there instead…
Compounding
Ensure compounding is only tried if there is no other solution.
Most general solution: use a weighted transducer, and give the compound border (i.e. the dynamic compounding border, the R lexicon) a non-zero weight.
However, we have a CG rule that removes compounds if there are other readings, so we're OK for now.
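If we did go the weighted route, one hypothetical way to do it in HFST lexc is to attach a weight to the entry that introduces the dynamic compound border, so that under shortest-path lookup a lexicalised analysis always beats a dynamic compound. The lexicon names below are assumptions, not the actual sme source:

```
! Hypothetical sketch (assumed lexicon names): penalise the dynamic
! compound border so lexicalised readings win under weighted lookup.
LEXICON R
+Cmp#: NounRoot "weight: 10.0" ;
```

This would only matter once the whole pipeline compiles and runs the analyser as a weighted FST; as the text notes, the CG rule already covers the common case.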
Multiwords
Add simple multiwords and fixed expressions to the analyser.
- dasa lassin => i tillegg (til det)
- dán áigge => for tiden
- mun ieš => meg selv
- bures boahtin => velkommen
- Buorre beaivi => God dag
- leat guollebivddus => å fiske
- maid ban dainna => hva i all verden
- jagis jahkái => fra år til år
- oaidnaleapmái => 'see you'
- ovdamearkka => for eksempel
- Mo manná? => Hvordan går det?
- ja nu ain => og så videre
(Some of these MWEs might be very Apertium-specific, but in that case we just keep our own file and append the entries with update-lexc.sh.)
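In lexc, a space inside an entry is escaped with %, so a fixed expression can be a single entry. A hypothetical sketch for one of the items above (the lexicon name and tag are assumptions about the sme lexc layout):

```
! Hypothetical sketch: % escapes the space, making the whole
! expression one analyser entry. Lexicon name/tags are assumed.
LEXICON Adverbs
dasa% lassin+Adv:dasa% lassin # ;
```

Whether such entries survive tokenisation as one unit depends on the rest of the pipeline, as with the "GsoC:as" acronym case above.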
Also, oktavuohta.N.Sg.Loc turns into an MWE preposition in nob:
- oktavuođas => i forbindelse med
It would be a lot simpler for transfer if such a fixed expression were analysed as oktavuođas.Po in the first place.
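Analysing the frozen form directly as a postposition could be a one-line lexc addition, sketched below; the lexicon name is an assumption, and the entry would have to outcompete (or coexist with) the regular N.Sg.Loc reading in disambiguation:

```
! Hypothetical sketch: give the frozen locative its own Po reading.
LEXICON Postpositions
oktavuođas+Po:oktavuođas # ;
```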