Matxin
Matxin is a free software machine translation engine related to Apertium. It allows for deeper transfer than is possible in Apertium. The linguistic data available under a free licence is only a fraction of the data used in the papers describing the system, so the translations produced by the released pair are naturally worse than the results reported in those papers.
Contact
Questions and comments about Matxin can be sent to its mailing list, matxin-devel, or to the apertium-stuff list.
Prerequisites
- BerkeleyDB — sudo apt-get install libdb4.6++-dev
- libpcre3 — sudo apt-get install libpcre3-dev
Install the following libraries in <prefix>:
- libcfg+ — http://platon.sk/upload/_projects/00003/libcfg+-0.6.2.tar.gz
- libomlet (from SVN) — svn co http://devel.cpl.upc.edu/freeling/svn/latest/omlet
- libfries (from SVN) — svn co http://devel.cpl.upc.edu/freeling/svn/latest/fries
- FreeLing (from SVN) — svn co http://devel.cpl.upc.edu/freeling/svn/latest/freeling
- If you're installing into a prefix, you'll need to set two environment variables when configuring: CPPFLAGS=-I<prefix>/include LDFLAGS=-L<prefix>/lib ./configure --prefix=<prefix>
- lttoolbox (from SVN) — svn co https://apertium.svn.sourceforge.net/svnroot/apertium/trunk/lttoolbox — use version 3.1.1 as a minimum; versions 3.1.0 and lower cause data errors and error messages in Matxin due to a missing string terminator.
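For example, building FreeLing from the SVN checkout into <prefix> typically follows the usual autotools pattern. A minimal sketch (the exact steps can differ between versions, and you may first need to regenerate the build files, e.g. with autoreconf -fi):
$ svn co http://devel.cpl.upc.edu/freeling/svn/latest/freeling
$ cd freeling
$ CPPFLAGS=-I<prefix>/include LDFLAGS=-L<prefix>/lib ./configure --prefix=<prefix>
$ make
$ make install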
Building
- Checkout
$ svn co http://matxin.svn.sourceforge.net/svnroot/matxin
Then do the usual:
$ ./configure --prefix=<prefix>
$ make
After you've got it built, do:
$ su
# export LD_LIBRARY_PATH=/usr/local/lib
# export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig
# make install
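The lines above assume the default /usr/local prefix. If you configured with a different <prefix>, the equivalent is roughly the following (a sketch; adjust the paths to your prefix, and su is only needed if the prefix is not writable by your user):
$ export LD_LIBRARY_PATH=<prefix>/lib
$ export PKG_CONFIG_PATH=<prefix>/lib/pkgconfig
$ make install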
Executing
If you have not specified a prefix, MATXIN_DIR defaults to /usr/local; the programs themselves are installed in /usr/local/bin, so cd there to run the tests below.
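For example, with the default prefix:
$ export MATXIN_DIR=/usr/local
$ cd /usr/local/bin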
Bundled with Matxin there's a script called Matxin_translator
which calls all the necessary modules and connects them with UNIX pipes. This is the recommended way of running Matxin to get translations, although at the moment it does not work in the form given here.
$ echo "Esto es una prueba" | ./Matxin_translator -f $MATXIN_DIR/share/matxin/config/es-eu.cfg
There is a program txt-deformat with the calling sequence txt-deformat format-file input-file. It creates an XML file from a normal text input file and can be used before ./Analyzer.
txt-deformat is an HTML format processor. Data should be passed through this processor before being piped to ./Analyzer. The program takes input in the form of an HTML document and produces output suitable for processing with lt-proc. HTML tags and other format information are enclosed in brackets so that lt-proc treats them as whitespace between words.
Calling it with -h or --help displays help information. For example, the following shows how the word "gener" is analysed:
echo "gener" | ./txt-deformat | ./Analyzer -f $MATXIN_DIR/share/matxin/config/es-eu.cfg
For advanced uses, you can run each part of the pipe separately and save the output to temporary files to feed into the next modules. At the moment this is the method of choice.
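For example, to save the analyser output to a temporary file and feed it to the next module separately (a sketch; the file names are arbitrary):
$ echo "Esto es una prueba" | ./Analyzer -f $MATXIN_DIR/share/matxin/config/es-eu.cfg > /tmp/step1.xml
$ cat /tmp/step1.xml | ./LT -f $MATXIN_DIR/share/matxin/config/es-eu.cfg > /tmp/step2.xml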
Spanish-Basque
<prefix> is typically /usr/local
$ export MATXIN_DIR=<prefix>
$ echo "Esto es una prueba" | \
./Analyzer -f $MATXIN_DIR/share/matxin/config/es-eu.cfg | \
./LT -f $MATXIN_DIR/share/matxin/config/es-eu.cfg | \
./ST_intra -f $MATXIN_DIR/share/matxin/config/es-eu.cfg | \
./ST_inter --inter 1 -f $MATXIN_DIR/share/matxin/config/es-eu.cfg | \
./ST_prep -f $MATXIN_DIR/share/matxin/config/es-eu.cfg | \
./ST_inter --inter 2 -f $MATXIN_DIR/share/matxin/config/es-eu.cfg | \
./ST_verb -f $MATXIN_DIR/share/matxin/config/es-eu.cfg | \
./ST_inter --inter 3 -f $MATXIN_DIR/share/matxin/config/es-eu.cfg | \
./SG_inter -f $MATXIN_DIR/share/matxin/config/es-eu.cfg | \
./SG_intra -f $MATXIN_DIR/share/matxin/config/es-eu.cfg | \
./MG -f $MATXIN_DIR/share/matxin/config/es-eu.cfg | \
./reFormat
Da proba bat hau
English-Basque
Adapted for English-Basque, the above example looks like this:
$ cat src/matxinallen.sh
src/Analyzer -f $MATXIN_DIR/share/matxin/config/en-eu.cfg | \
src/LT -f $MATXIN_DIR/share/matxin/config/en-eu.cfg | \
src/ST_intra -f $MATXIN_DIR/share/matxin/config/en-eu.cfg | \
src/ST_inter --inter 1 -f $MATXIN_DIR/share/matxin/config/en-eu.cfg | \
src/ST_prep -f $MATXIN_DIR/share/matxin/config/en-eu.cfg | \
src/ST_inter --inter 2 -f $MATXIN_DIR/share/matxin/config/en-eu.cfg | \
src/ST_verb -f $MATXIN_DIR/share/matxin/config/en-eu.cfg | \
src/ST_inter --inter 3 -f $MATXIN_DIR/share/matxin/config/en-eu.cfg | \
src/SG_inter -f $MATXIN_DIR/share/matxin/config/en-eu.cfg | \
src/SG_intra -f $MATXIN_DIR/share/matxin/config/en-eu.cfg | \
src/MG -f $MATXIN_DIR/share/matxin/config/en-eu.cfg | \
src/reFormat
$ echo "This is a test" | sh src/matxin_allen.sh
Hau proba da
$ echo "How are you?" | sh src/matxin_allen.sh
Nola zu da?
$ echo "Otto plays football and tennis" | sh src/matxin_allen.sh
Otto-ak jokatzen du futbola tenis-a eta
Speed
Between 25 and 30 words per second.
Troubleshooting
- libcfg+
If you get the following error:
ld: ../src/cfg+.o: relocation R_X86_64_32 against `a local symbol' can not be used when making a shared object; recompile with -fPIC
Delete the directory and start from scratch; this time, when you call make, call it as make CFLAGS=-fPIC.
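In full, the rebuild looks roughly like this (a sketch, using the tarball from the Prerequisites section):
$ rm -rf libcfg+-0.6.2
$ tar xzf libcfg+-0.6.2.tar.gz
$ cd libcfg+-0.6.2
$ ./configure --prefix=<prefix>
$ make CFLAGS=-fPIC
$ make install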
- Matxin
If you get the error:
g++ -DHAVE_CONFIG_H -I. -I.. -I/usr/local/include -I/usr/local/include/lttoolbox-2.0 -I/usr/include/libxml2 -g -O2 -ansi -march=i686 -O3 -fno-pic -fomit-frame-pointer -MT Analyzer.o -MD -MP -MF .deps/Analyzer.Tpo -c -o Analyzer.o Analyzer.C
--->Analyzer.C:10:22: error: freeling.h: No such file or directory
In file included from Analyzer.C:9:
config.h: In constructor 'config::config(char**)':
config.h:413: warning: deprecated conversion from string constant to 'char*'
Analyzer.C: In function 'void PrintResults(std::list<sentence, std::allocator<sentence> >&, const config&, int&)':
Analyzer.C:123: error: aggregate 'std::ofstream log_file' has incomplete type and cannot be defined
Analyzer.C:126: error: incomplete type 'std::ofstream' used in nested name s...
Then change the header files in src/Analyzer.C to:
//#include "freeling.h"
#include "util.h"
#include "tokenizer.h"
#include "splitter.h"
#include "maco.h"
#include "nec.h"
#include "senses.h"
#include "tagger.h"
#include "hmm_tagger.h"
#include "relax_tagger.h"
#include "chart_parser.h"
#include "maco_options.h"
#include "dependencies.h"
If you run into the following compile problem:
Analyzer.C: In function ‘int main(int, char**)’:
Analyzer.C:226: error: no matching function for call to ‘hmm_tagger::hmm_tagger(std::string, char*&, int&, int&)’
/home/fran/local/include/hmm_tagger.h:108: note: candidates are: hmm_tagger::hmm_tagger(const std::string&, const std::string&, bool)
/home/fran/local/include/hmm_tagger.h:84: note: hmm_tagger::hmm_tagger(const hmm_tagger&)
Analyzer.C:230: error: no matching function for call to ‘relax_tagger::relax_tagger(char*&, int&, double&, double&, int&, int&)’
/home/fran/local/include/relax_tagger.h:74: note: candidates are: relax_tagger::relax_tagger(const std::string&, int, double, double, bool)
/home/fran/local/include/relax_tagger.h:51: note: relax_tagger::relax_tagger(const relax_tagger&)
Analyzer.C:236: error: no matching function for call to ‘senses::senses(char*&, int&)’
/home/fran/local/include/senses.h:52: note: candidates are: senses::senses(const std::string&)
/home/fran/local/include/senses.h:45: note: senses::senses(const senses&)
Make the following changes in the file src/Analyzer.C:
  if (cfg.TAGGER_which == HMM)
-   tagger = new hmm_tagger(cfg.Lang, cfg.TAGGER_HMMFile, cfg.TAGGER_Retokenize, cfg.TAGGER_ForceSelect);
+   tagger = new hmm_tagger(string(cfg.Lang), string(cfg.TAGGER_HMMFile), false);
  else if (cfg.TAGGER_which == RELAX)
-   tagger = new relax_tagger(cfg.TAGGER_RelaxFile, cfg.TAGGER_RelaxMaxIter,
+   tagger = new relax_tagger(string(cfg.TAGGER_RelaxFile), cfg.TAGGER_RelaxMaxIter,
                              cfg.TAGGER_RelaxScaleFactor, cfg.TAGGER_RelaxEpsilon,
-                             cfg.TAGGER_Retokenize, cfg.TAGGER_ForceSelect);
+                             false);
  if (cfg.NEC_NEClassification)
    neclass = new nec("NP", cfg.NEC_FilePrefix);
  if (cfg.SENSE_SenseAnnotation!=NONE)
-   sens = new senses(cfg.SENSE_SenseFile, cfg.SENSE_DuplicateAnalysis);
+   sens = new senses(string(cfg.SENSE_SenseFile)); //, cfg.SENSE_DuplicateAnalysis);
Then probably there will be issues with actually running Matxin.
If you get the error:
config.h:33:29: error: freeling/traces.h: No such file or directory
Then change the header files in src/config.h to:
//#include "freeling/traces.h"
#include "traces.h"
If you get this error:
$ echo "Esto es una prueba" | ./Analyzer -f /home/fran/local/share/matxin/config/es-eu.cfg Constraint Grammar '/home/fran/local//share/matxin/freeling/es/constr_gram.dat'. Line 2. Syntax error: Unexpected 'SETS' found. Constraint Grammar '/home/fran/local//share/matxin/freeling/es/constr_gram.dat'. Line 7. Syntax error: Unexpected 'DetFem' found. Constraint Grammar '/home/fran/local//share/matxin/freeling/es/constr_gram.dat'. Line 10. Syntax error: Unexpected 'VerbPron' found.
You can change the tagger from RelaxCG to HMM: edit the file <prefix>/share/matxin/config/es-eu.cfg and change:
#### Tagger options
#Tagger=relax
Tagger=hmm
Then there might be a problem in the dependency grammar:
$ echo "Esto es una prueba" | ./Analyzer -f /home/fran/local/share/matxin/config/es-eu.cfg DEPENDENCIES: Error reading dependencies from '/home/fran/local//share/matxin/freeling/es/dep/dependences.dat'. Unregistered function d:sn.tonto
The easiest thing to do here is to just remove references to the stuff it complains about:
cat <prefix>/share/matxin/freeling/es/dep/dependences.dat | grep -v d:grup-sp.lemma > newdep
cat newdep | grep -v d\.class > newdep2
cat newdep2 | grep -v d:sn.tonto > <prefix>/share/matxin/freeling/es/dep/dependences.dat
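Equivalently, the three filters can be combined into a single grep call (a sketch; it writes to a temporary file first so the original is not truncated while it is still being read):
$ grep -v -e 'd:grup-sp.lemma' -e 'd\.class' -e 'd:sn.tonto' <prefix>/share/matxin/freeling/es/dep/dependences.dat > /tmp/dependences.dat
$ mv /tmp/dependences.dat <prefix>/share/matxin/freeling/es/dep/dependences.dat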
Error in db
If you get:
- SEMDB: Error 13 while opening database /usr/local/share/matxin/freeling/es/dep/../senses16.db
rebuild senses16.db from source:
- cat senses16.src | indexdict senses16.db
- (remove senses16.db before the rebuild)
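In full, a sketch (the directory comes from the error message above; senses16.src is assumed to be in the same directory):
$ cd /usr/local/share/matxin/freeling/es
$ rm senses16.db
$ cat senses16.src | indexdict senses16.db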
Error when reading XML files
If reading XML files does not work and you get an error like ERROR: invalid document: found <corpus i> when <corpus> was expected..., do the following in src/XML_reader.cc:
1. add the following subroutine after line 43:
wstring mystows(string const &str) {
  wchar_t* result = new wchar_t[str.size()+1];
  size_t retval = mbstowcs(result, str.c_str(), str.size());
  result[retval] = L'\0';
  wstring result2 = result;
  delete[] result;
  return result2;
}
2. replace all occurrences of XMLParseUtil::stows with mystows, for example with the sed one-liner below.
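A sketch using sed (it edits the file in place, so keep a backup copy first):
$ sed -i 's/XMLParseUtil::stows/mystows/g' src/XML_reader.cc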
Version 3.1.1 of lttoolbox does not have this error any more.
Results of the individual steps:
--------------------Step1
en@anonymous:/usr/local/bin$ echo "Esto es una prueba" | ./Analyzer -f $MATXIN_DIR/share/matxin/config/es-eu.cfg
<?xml version='1.0' encoding='UTF-8' ?> <corpus> <SENTENCE ord='1' alloc='0'> <CHUNK ord='2' alloc='5' type='grup-verb' si='top'> <NODE ord='2' alloc='5' form='es' lem='ser' mi='VSIP3S0'> </NODE> <CHUNK ord='1' alloc='0' type='sn' si='subj'> <NODE ord='1' alloc='0' form='Esto' lem='este' mi='PD0NS000'> </NODE> </CHUNK> <CHUNK ord='3' alloc='8' type='sn' si='att'> <NODE ord='4' alloc='12' form='prueba' lem='prueba' mi='NCFS000'> <NODE ord='3' alloc='8' form='una' lem='uno' mi='DI0FS0'> </NODE> </NODE> </CHUNK> </CHUNK> </SENTENCE> </corpus>
---------------------Step2
[glabaka@siuc05 bin]$ cat /tmp/x | ./LT -f $MATXIN_DIR/share/matxin/config/es-eu.cfg
<?xml version='1.0' encoding='UTF-8'?> <corpus > <SENTENCE ref='1' alloc='0'> <CHUNK ref='2' type='adi-kat' alloc='5' si='top'> <NODE ref='2' alloc='5' UpCase='none' lem='_izan_' mi='VSIP3S0' pos='[ADI][SIN]'> </NODE> <CHUNK ref='1' type='is' alloc='0' si='subj'> <NODE ref='1' alloc='0' UpCase='none' lem='hau' pos='[DET][ERKARR]'> </NODE> </CHUNK> <CHUNK ref='3' type='is' alloc='8' si='att'> <NODE ref='4' alloc='12' UpCase='none' lem='proba' pos='[IZE][ARR]' mi='[NUMS]' sem='[BIZ-]'> <NODE ref='3' alloc='8' UpCase='none' lem='bat' pos='[DET][DZH]' vpost='IZO'> </NODE> </NODE> </CHUNK> </CHUNK> </SENTENCE> </corpus>
----------- step3
<?xml version='1.0' encoding='UTF-8' ?> <corpus > <SENTENCE ref='1' alloc='0'> <CHUNK ref='2' type='adi-kat' alloc='5' si='top'> <NODE ref='2' alloc='5' UpCase='none' lem='_izan_' mi='VSIP3S0' pos='[ADI][SIN]'> </NODE> <CHUNK ref='1' type='is' alloc='0' si='subj'> <NODE ref='1' alloc='0' UpCase='none' lem='hau' pos='[DET][ERKARR]'> </NODE> </CHUNK> <CHUNK ref='3' type='is' alloc='8' si='att'> <NODE ref='4' alloc='12' UpCase='none' lem='proba' pos='[IZE][ARR]' mi='[NUMS]' sem='[BIZ-]'> <NODE ref='3' alloc='8' UpCase='none' lem='bat' pos='[DET][DZH]' vpost='IZO'> </NODE> </NODE> </CHUNK> </CHUNK> </SENTENCE> </corpus>
-------------STEP4
<?xml version='1.0' encoding='UTF-8' ?> <corpus > <SENTENCE ref='1' alloc='0'> <CHUNK ref='2' type='adi-kat' alloc='5' si='top' length='1' trans='DU' cas='[ABS]'> <NODE ref='2' alloc='5' UpCase='none' lem='_izan_' mi='VSIP3S0' pos='[ADI][SIN]'> </NODE> <CHUNK ref='1' type='is' alloc='0' si='subj' length='1' cas='[ERG]'> <NODE ref='1' alloc='0' UpCase='none' lem='hau' pos='[DET][ERKARR]'> </NODE> </CHUNK> <CHUNK ref='3' type='is' alloc='8' si='att' length='2' cas='[ABS]'> <NODE ref='4' alloc='12' UpCase='none' lem='proba' pos='[IZE][ARR]' mi='[NUMS]' sem='[BIZ-]'> <NODE ref='3' alloc='8' UpCase='none' lem='bat' pos='[DET][DZH]' vpost='IZO'> </NODE> </NODE> </CHUNK> </CHUNK> </SENTENCE> </corpus>
-------------STEP5
<?xml version='1.0' encoding='UTF-8' ?> <corpus > <SENTENCE ref='1' alloc='0'> <CHUNK ref='2' type='adi-kat' alloc='5' si='top' length='1' trans='DU' cas='[ABS]'> <NODE ref='2' alloc='5' UpCase='none' lem='_izan_' mi='VSIP3S0' pos='[ADI][SIN]'> </NODE> <CHUNK ref='1' type='is' alloc='0' si='subj' length='1' cas='[ERG]'> <NODE ref='1' alloc='0' UpCase='none' lem='hau' pos='[DET][ERKARR]'> </NODE> </CHUNK> <CHUNK ref='3' type='is' alloc='8' si='att' length='2' cas='[ABS]'> <NODE ref='4' alloc='12' UpCase='none' lem='proba' pos='[IZE][ARR]' mi='[NUMS]' sem='[BIZ-]'> <NODE ref='3' alloc='8' UpCase='none' lem='bat' pos='[DET][DZH]' vpost='IZO'> </NODE> </NODE> </CHUNK> </CHUNK> </SENTENCE> </corpus>
-------------STEP6
<?xml version='1.0' encoding='UTF-8' ?> <corpus > <SENTENCE ref='1' alloc='0'> <CHUNK ref='2' type='adi-kat' alloc='5' si='top' cas='[ABS]' trans='DU' length='1'> <NODE ref='2' alloc='5' lem='izan' pos='[NAG]' mi='[ADT][A1][NR_HU]'> </NODE> <CHUNK ref='1' type='is' alloc='0' si='subj' cas='[ERG]' length='1'> <NODE ref='1' alloc='0' UpCase='none' lem='hau' pos='[DET][ERKARR]'> </NODE> </CHUNK> <CHUNK ref='3' type='is' alloc='8' si='att' cas='[ABS]' length='2'> <NODE ref='4' alloc='12' UpCase='none' lem='proba' pos='[IZE][ARR]' mi='[NUMS]' sem='[BIZ-]'> <NODE ref='3' alloc='8' UpCase='none' lem='bat' pos='[DET][DZH]' vpost='IZO'> </NODE> </NODE> </CHUNK> </CHUNK> </SENTENCE> </corpus>
-------------STEP7
<?xml version='1.0' encoding='UTF-8' ?> <corpus > <SENTENCE ref='1' alloc='0'> <CHUNK ref='2' type='adi-kat' alloc='5' si='top' cas='[ABS]' trans='DU' length='1'> <NODE ref='2' alloc='5' lem='izan' pos='[NAG]' mi='[ADT][A1][NR_HU]'> </NODE> <CHUNK ref='1' type='is' alloc='0' si='subj' length='1' cas='[ERG]'> <NODE ref='1' alloc='0' UpCase='none' lem='hau' pos='[DET][ERKARR]'> </NODE> </CHUNK> <CHUNK ref='3' type='is' alloc='8' si='att' cas='[ABS]' length='2'> <NODE ref='4' alloc='12' UpCase='none' lem='proba' pos='[IZE][ARR]' mi='[NUMS]' sem='[BIZ-]'> <NODE ref='3' alloc='8' UpCase='none' lem='bat' pos='[DET][DZH]' vpost='IZO'> </NODE> </NODE> </CHUNK> </CHUNK> </SENTENCE> </corpus>
-------------STEP8
<?xml version='1.0' encoding='UTF-8'?> <corpus > <SENTENCE ord='1' ref='1' alloc='0'> <CHUNK ord='2' ref='2' type='adi-kat' alloc='5' si='top' cas='[ABS]' trans='DU' length='1'> <NODE ref='2' alloc='5' lem='izan' pos='[NAG]' mi='[ADT][A1][NR_HU]'> </NODE> <CHUNK ord='0' ref='1' type='is' alloc='0' si='subj' length='1' cas='[ERG]'> <NODE ref='1' alloc='0' UpCase='none' lem='hau' pos='[DET][ERKARR]'> </NODE> </CHUNK> <CHUNK ord='1' ref='3' type='is' alloc='8' si='att' cas='[ABS]' length='2'> <NODE ref='4' alloc='12' UpCase='none' lem='proba' pos='[IZE][ARR]' mi='[NUMS]' sem='[BIZ-]'> <NODE ref='3' alloc='8' UpCase='none' lem='bat' pos='[DET][DZH]' vpost='IZO'> </NODE> </NODE> </CHUNK> </CHUNK> </SENTENCE> </corpus>
-------------STEP9
<?xml version='1.0' encoding='UTF-8' ?> <corpus > <SENTENCE ord='1' ref='1' alloc='0'> <CHUNK ord='2' ref='2' type='adi-kat' alloc='5' si='top' cas='[ABS]' trans='DU' length='1'> <NODE ord='0' ref='2' alloc='5' lem='izan' pos='[NAG]' mi='[ADT][A1][NR_HU]'> </NODE> <CHUNK ord='0' ref='1' type='is' alloc='0' si='subj' length='1' cas='[ERG]'> <NODE ord='0' ref='1' alloc='0' UpCase='none' lem='hau' pos='[DET][ERKARR]'> </NODE> </CHUNK> <CHUNK ord='1' ref='3' type='is' alloc='8' si='att' cas='[ABS]' length='2'> <NODE ord='0' ref='4' alloc='12' UpCase='none' lem='proba' pos='[IZE][ARR]' mi='[NUMS]' sem='[BIZ-]'> <NODE ord='1' ref='3' alloc='8' UpCase='none' lem='bat' pos='[DET][DZH]' vpost='IZO'> </NODE> </NODE> </CHUNK> </CHUNK> </SENTENCE> </corpus>
-------------- step10
<?xml version='1.0' encoding='UTF-8'?> <corpus > <SENTENCE ord='1' ref='1' alloc='0'> <CHUNK ord='2' ref='2' type='adi-kat' alloc='5' si='top' cas='[ABS]' trans='DU' length='1'> <NODE form='da' ref ='2' alloc ='5' ord='0' lem='izan' pos='[NAG]' mi='[ADT][A1][NR_HU]'> </NODE> <CHUNK ord='0' ref='1' type='is' alloc='0' si='subj' length='1' cas='[ERG]'> <NODE form='hau' ref ='1' alloc ='0' ord='0' UpCase='none' lem='hau' pos='[DET][ERKARR]'> </NODE> </CHUNK> <CHUNK ord='1' ref='3' type='is' alloc='8' si='att' cas='[ABS]' length='2'> <NODE form='proba' ref ='4' alloc ='12' ord='0' UpCase='none' lem='proba' pos='[IZE][ARR]' mi='[NUMS]' sem='[BIZ-]'> <NODE form='bat' ref ='3' alloc ='8' ord='1' UpCase='none' lem='bat' pos='[DET][DZH]' vpost='IZO'> </NODE> </NODE> </CHUNK> </CHUNK> </SENTENCE> </corpus>
-------------STEP11
Hau proba bat da
Documentation
- Descripción del sistema de traducción es-eu Matxin (in Spanish)
- Documentation of Matxin (in English)