Matxin
Matxin is a free-software machine translation engine related to Apertium. It allows for deeper transfer than Apertium does. The linguistic data available under a free licence is only a fraction of the data used in the papers describing the system, so translations from the freely available language pair will naturally be worse than the results reported in those papers.
Contact
Questions and comments about Matxin can be sent to the matxin-devel mailing list, or to the apertium-stuff list.
Prerequisites
- Berkeley DB — sudo apt-get install libdb4.6++-dev
- libpcre3 — sudo apt-get install libpcre3-dev
Install the following libraries in <prefix>:
- libcfg+ — http://platon.sk/upload/_projects/00003/libcfg+-0.6.2.tar.gz
- libomlet — https://lafarga.cpl.upc.edu/frs/download.php/130/libomlet-0.97.tar.gz
- libfries — https://lafarga.cpl.upc.edu/frs/download.php/129/libfries-0.95.tar.gz
- FreeLing (from SVN) —
svn co http://devel.cpl.upc.edu/freeling/svn/latest/freeling
- If you're installing into a prefix, you'll need to set two environment variables when running configure:
CPPFLAGS=-I<prefix>/include LDFLAGS=-L<prefix>/lib ./configure --prefix=<prefix>
- lttoolbox (from SVN) —
svn co https://apertium.svn.sourceforge.net/svnroot/apertium/trunk/lttoolbox
Building
- Checkout
$ svn co http://matxin.svn.sourceforge.net/svnroot/matxin
Then do the usual:
$ ./configure --prefix=<prefix>
$ make
After you've got it built, do:
$ su
# export LD_LIBRARY_PATH=/usr/local/lib
# export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig
# make install
Executing
If you have not specified a prefix, MATXIN_DIR defaults to /usr/local, and the binaries are installed in /usr/local/bin; cd there to run the tests.
Bundled with Matxin there's a script called Matxin_translator
which calls all the necessary modules and interconnects them using UNIX pipes. This is the recommended way of running Matxin for getting translations.
$ echo "Esto es una prueba" | ./Matxin_translator -f $MATXIN_DIR/share/matxin/config/es-eu.cfg
For advanced uses, you can run each part of the pipe separately and save the output to temporary files for feeding the next modules.
<prefix> is typically /usr/local
$ export MATXIN_DIR=<prefix>
$ echo "Esto es una prueba" | \
  ./Analyzer -f $MATXIN_DIR/share/matxin/config/es-eu.cfg | \
  ./LT -f $MATXIN_DIR/share/matxin/config/es-eu.cfg | \
  ./ST_inter --inter 1 -f $MATXIN_DIR/share/matxin/config/es-eu.cfg | \
  ./ST_prep -f $MATXIN_DIR/share/matxin/config/es-eu.cfg | \
  ./ST_inter --inter 2 -f $MATXIN_DIR/share/matxin/config/es-eu.cfg | \
  ./ST_verb -f $MATXIN_DIR/share/matxin/config/es-eu.cfg | \
  ./ST_inter --inter 3 -f $MATXIN_DIR/share/matxin/config/es-eu.cfg | \
  ./SG_inter -f $MATXIN_DIR/share/matxin/config/es-eu.cfg | \
  ./SG_intra -f $MATXIN_DIR/share/matxin/config/es-eu.cfg | \
  ./MG -f $MATXIN_DIR/share/matxin/config/es-eu.cfg | \
  ./reFormat
Da proba bat hau
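The idea of running stages one at a time and feeding the next module from a temporary file can be sketched with stand-in commands (tr and sed here merely play the role of the Matxin modules; the /tmp file names are arbitrary):

```shell
# Sketch of a staged pipeline: each stage reads the previous stage's
# output from a file.  'tr' and 'sed' stand in for the real modules.
echo "Esto es una prueba" > /tmp/step0.txt
tr 'a-z' 'A-Z' < /tmp/step0.txt > /tmp/step1.txt   # stage 1 (e.g. ./Analyzer)
sed 's/ /_/g'  < /tmp/step1.txt > /tmp/step2.txt   # stage 2 (e.g. ./LT)
cat /tmp/step2.txt                                 # prints: ESTO_ES_UNA_PRUEBA
```

Saving the intermediates this way makes it easy to inspect where in the pipeline a translation goes wrong.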
Speed
Matxin translates at between 25 and 30 words per second.
Troubleshooting
- libfries
If you get an error about "strlen was not declared in this scope", add #include <string.h> to the file src/include/fries/language.h.
If you get an error about "set_union was not declared in this scope", add #include <algorithm> to the file src/libfries/RGF.cc.
Sometimes, people get the error:
configure:2668: error: C++ compiler cannot create executables
Install libpcre3-dev and try again.
- libomlet
If you get an error about "exit was not declared in this scope", add #include <stdlib.h> to the file src/libomlet/adaboost.cc.
- FreeLing
If you get an error about "exit was not declared in this scope", add #include <stdlib.h> to the files src/utilities/indexdict.cc, src/libmorfo/accents.cc, src/libmorfo/accents_modules.cc, src/libmorfo/dictionary.cc, src/libmorfo/tagger.cc, src/libmorfo/punts.cc, src/libmorfo/maco_options.cc, src/libmorfo/splitter.cc, src/libmorfo/suffixes.cc, src/libmorfo/senses.cc and src/libmorfo/hmm_tagger.cc.
If you get an error about "strlen was not declared in this scope", add #include <string.h> to the files src/libmorfo/automat.cc, src/libmorfo/dates.cc, src/libmorfo/locutions.cc, src/libmorfo/maco.cc, src/libmorfo/np.cc, src/libmorfo/nec.cc, src/libmorfo/numbers.cc, src/libmorfo/numbers_modules.cc, src/libmorfo/quantities.cc, src/libmorfo/tokenizer.cc and src/libmorfo/dates_modules.cc.
If you get an error about "memcpy was not declared in this scope", add #include <string.h> to the files src/include/traces.h, src/libmorfo/dictionary.cc, src/libmorfo/traces.cc, src/libmorfo/senses.cc and src/libmorfo/feature_extractor/fex.cc.
If you get an error about "set_union was not declared in this scope", add #include <algorithm> to the file src/libmorfo/feature_extractor/RGF.cc.
To add the following lines
#include <stdlib.h>
#include <string.h>
#include <algorithm>
at the top of every .cc file in the FreeLing-1.5 directory, you can use the following commands:
pasquale@dell:~/stuff/matxin/FreeLing-1.5$ ./configure ..
pasquale@dell:~/stuff/matxin/FreeLing-1.5$ find . -type f -name "*.cc" | awk '{ print "echo \"#include <stdlib.h>\n#include <string.h>\n\ #include <algorithm>\n\" > " $1 ".new && cat " $1 " >> " $1 ".new && mv " $1 ".new " $1 }' > k
pasquale@dell:~/stuff/matxin/FreeLing-1.5$ sh k
pasquale@dell:~/stuff/matxin/FreeLing-1.5$ make ..
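Equivalently, a small loop prepends the three includes without generating an intermediate script. This is a sketch: it is demonstrated on a throwaway temp directory rather than the real FreeLing-1.5 tree, so adjust the directory before using it for real.

```shell
# Prepend the three includes to every .cc file under a directory.
# Shown on a temp directory with one example file.
dir=$(mktemp -d)
printf 'int main() { return 0; }\n' > "$dir/example.cc"
find "$dir" -type f -name '*.cc' | while read -r f; do
  { printf '#include <stdlib.h>\n#include <string.h>\n#include <algorithm>\n'
    cat "$f"; } > "$f.new" && mv "$f.new" "$f"
done
head -n 1 "$dir/example.cc"   # prints: #include <stdlib.h>
```

Writing to a .new file and then moving it over the original avoids truncating a file while it is still being read.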
If you get the error:
In file included from analyzer.cc:72:
config.h:32:18: error: cfg+.h: No such file or directory
Run make as:
make CXXFLAGS=-I<prefix>/include
- Matxin
If you get the error:
g++ -DHAVE_CONFIG_H -I. -I.. -I/usr/local/include -I/usr/local/include/lttoolbox-2.0 -I/usr/include/libxml2 -g -O2 -ansi -march=i686 -O3 -fno-pic -fomit-frame-pointer -MT Analyzer.o -MD -MP -MF .deps/Analyzer.Tpo -c -o Analyzer.o Analyzer.C
Analyzer.C:10:22: error: freeling.h: No such file or directory
In file included from Analyzer.C:9:
config.h: In constructor 'config::config(char**)':
config.h:413: warning: deprecated conversion from string constant to 'char*'
Analyzer.C: In function 'void PrintResults(std::list<sentence, std::allocator<sentence> >&, const config&, int&)':
Analyzer.C:123: error: aggregate 'std::ofstream log_file' has incomplete type and cannot be defined
Analyzer.C:126: error: incomplete type 'std::ofstream' used in nested name s...
Then change the header files in src/Analyzer.C
to:
//#include "freeling.h"
#include "util.h"
#include "tokenizer.h"
#include "splitter.h"
#include "maco.h"
#include "nec.h"
#include "senses.h"
#include "tagger.h"
#include "hmm_tagger.h"
#include "relax_tagger.h"
#include "chart_parser.h"
#include "maco_options.h"
#include "dependencies.h"
If you run into the following compile problem:
Analyzer.C: In function ‘int main(int, char**)’:
Analyzer.C:226: error: no matching function for call to ‘hmm_tagger::hmm_tagger(std::string, char*&, int&, int&)’
/home/fran/local/include/hmm_tagger.h:108: note: candidates are: hmm_tagger::hmm_tagger(const std::string&, const std::string&, bool)
/home/fran/local/include/hmm_tagger.h:84: note: hmm_tagger::hmm_tagger(const hmm_tagger&)
Analyzer.C:230: error: no matching function for call to ‘relax_tagger::relax_tagger(char*&, int&, double&, double&, int&, int&)’
/home/fran/local/include/relax_tagger.h:74: note: candidates are: relax_tagger::relax_tagger(const std::string&, int, double, double, bool)
/home/fran/local/include/relax_tagger.h:51: note: relax_tagger::relax_tagger(const relax_tagger&)
Analyzer.C:236: error: no matching function for call to ‘senses::senses(char*&, int&)’
/home/fran/local/include/senses.h:52: note: candidates are: senses::senses(const std::string&)
/home/fran/local/include/senses.h:45: note: senses::senses(const senses&)
Make the following changes in the file src/Analyzer.C
:
 if (cfg.TAGGER_which == HMM)
-    tagger = new hmm_tagger(cfg.Lang, cfg.TAGGER_HMMFile, cfg.TAGGER_Retokenize, cfg.TAGGER_ForceSelect);
+    tagger = new hmm_tagger(string(cfg.Lang), string(cfg.TAGGER_HMMFile), false);
 else if (cfg.TAGGER_which == RELAX)
-    tagger = new relax_tagger(cfg.TAGGER_RelaxFile, cfg.TAGGER_RelaxMaxIter,
+    tagger = new relax_tagger(string(cfg.TAGGER_RelaxFile), cfg.TAGGER_RelaxMaxIter,
                               cfg.TAGGER_RelaxScaleFactor, cfg.TAGGER_RelaxEpsilon,
-                              cfg.TAGGER_Retokenize, cfg.TAGGER_ForceSelect);
+                              false);

 if (cfg.NEC_NEClassification)
     neclass = new nec("NP", cfg.NEC_FilePrefix);

 if (cfg.SENSE_SenseAnnotation!=NONE)
-    sens = new senses(cfg.SENSE_SenseFile, cfg.SENSE_DuplicateAnalysis);
+    sens = new senses(string(cfg.SENSE_SenseFile)); //, cfg.SENSE_DuplicateAnalysis);
Even after it compiles, there will probably be issues with actually running Matxin.
If you get the error:
config.h:33:29: error: freeling/traces.h: No such file or directory
Then change the header files in src/config.h
to:
//#include "freeling/traces.h"
#include "traces.h"
If you get this error:
$ echo "Esto es una prueba" | ./Analyzer -f /home/fran/local/share/matxin/config/es-eu.cfg
Constraint Grammar '/home/fran/local//share/matxin/freeling/es/constr_gram.dat'. Line 2. Syntax error: Unexpected 'SETS' found.
Constraint Grammar '/home/fran/local//share/matxin/freeling/es/constr_gram.dat'. Line 7. Syntax error: Unexpected 'DetFem' found.
Constraint Grammar '/home/fran/local//share/matxin/freeling/es/constr_gram.dat'. Line 10. Syntax error: Unexpected 'VerbPron' found.
You can change the tagger from RelaxCG to HMM: edit the file <prefix>/share/matxin/config/es-eu.cfg and change:
#### Tagger options
#Tagger=relax
Tagger=hmm
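If you prefer to script this edit, a sed one-liner works. This is a sketch using GNU sed's -i flag and a throwaway copy of the file; check the exact Tagger line in your cfg before running it against the real es-eu.cfg.

```shell
# Switch the tagger from relax to hmm in a config file (GNU sed).
cfg=$(mktemp)
printf '#### Tagger options\nTagger=relax\n' > "$cfg"
sed -i 's/^Tagger=relax/Tagger=hmm/' "$cfg"
grep '^Tagger=' "$cfg"   # prints: Tagger=hmm
```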
Then there might be a problem in the dependency grammar:
$ echo "Esto es una prueba" | ./Analyzer -f /home/fran/local/share/matxin/config/es-eu.cfg
DEPENDENCIES: Error reading dependencies from '/home/fran/local//share/matxin/freeling/es/dep/dependences.dat'. Unregistered function d:sn.tonto
The easiest thing to do here is to just remove references to the stuff it complains about:
cat <prefix>/share/matxin/freeling/es/dep/dependences.dat | grep -v d:grup-sp.lemma > newdep
cat newdep | grep -v d\.class > newdep2
cat newdep2 | grep -v d:sn.tonto > <prefix>/share/matxin/freeling/es/dep/dependences.dat
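The three passes can be collapsed into a single grep with multiple -e patterns. A sketch, shown on a throwaway file with made-up rule lines rather than the real dependences.dat:

```shell
# Remove all three offending rule types from a grammar file in one pass.
f=$(mktemp)
printf 'rule-to-keep\nsome d:sn.tonto rule\nanother rule\nsome d:grup-sp.lemma rule\n' > "$f"
grep -v -e 'd:grup-sp\.lemma' -e 'd\.class' -e 'd:sn\.tonto' "$f" > "$f.new" && mv "$f.new" "$f"
cat "$f"   # prints: rule-to-keep, then: another rule
```

Note that redirecting grep's output straight back into the input file would truncate it first; hence the .new-then-mv step.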
Error in db
If you get:
- SEMDB: Error 13 while opening database /usr/local/share/matxin/freeling/es/dep/../senses16.db
rebuild senses16.db from source (remove the old senses16.db first):
cat senses16.src | indexdict senses16.db
Error when reading XML files
If reading XML files does not work and you get an error like ERROR: invalid document: found <corpus i> when <corpus> was expected..., do the following in src/XML_reader.cc:
1. Add the following function after line 43:
wstring mystows(string const &str) {
  wchar_t* result = new wchar_t[str.size()+1];
  size_t retval = mbstowcs(result, str.c_str(), str.size());
  result[retval] = L'\0';
  wstring result2 = result;
  delete[] result;
  return result2;
}
2. Replace all occurrences of XMLParseUtil::stows with mystows.
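The replacement in step 2 can be done mechanically with sed. A sketch using GNU sed's in-place -i flag, demonstrated on a throwaway file standing in for src/XML_reader.cc (back the real file up first):

```shell
# Replace every XMLParseUtil::stows call with mystows.
f=$(mktemp)
printf 'wstring w = XMLParseUtil::stows(name);\n' > "$f"
sed -i 's/XMLParseUtil::stows/mystows/g' "$f"
cat "$f"   # prints: wstring w = mystows(name);
```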
Results of the individual steps:
-------------------- Step 1
en@anonymous:/usr/local/bin$ echo "Esto es una prueba" | ./Analyzer -f $MATXIN_DIR/share/matxin/config/es-eu.cfg
<?xml version='1.0' encoding='UTF-8' ?> <corpus> <SENTENCE ord='1' alloc='0'> <CHUNK ord='2' alloc='5' type='grup-verb' si='top'> <NODE ord='2' alloc='5' form='es' lem='ser' mi='VSIP3S0'> </NODE> <CHUNK ord='1' alloc='0' type='sn' si='subj'> <NODE ord='1' alloc='0' form='Esto' lem='este' mi='PD0NS000'> </NODE> </CHUNK> <CHUNK ord='3' alloc='8' type='sn' si='att'> <NODE ord='4' alloc='12' form='prueba' lem='prueba' mi='NCFS000'> <NODE ord='3' alloc='8' form='una' lem='uno' mi='DI0FS0'> </NODE> </NODE> </CHUNK> </CHUNK> </SENTENCE> </corpus>
-------------------- Step 2
[glabaka@siuc05 bin]$ cat /tmp/x | ./LT -f $MATXIN_DIR/share/matxin/config/es-eu.cfg
<?xml version='1.0' encoding='UTF-8'?> <corpus > <SENTENCE ref='1' alloc='0'> <CHUNK ref='2' type='adi-kat' alloc='5' si='top'> <NODE ref='2' alloc='5' UpCase='none' lem='_izan_' mi='VSIP3S0' pos='[ADI][SIN]'> </NODE> <CHUNK ref='1' type='is' alloc='0' si='subj'> <NODE ref='1' alloc='0' UpCase='none' lem='hau' pos='[DET][ERKARR]'> </NODE> </CHUNK> <CHUNK ref='3' type='is' alloc='8' si='att'> <NODE ref='4' alloc='12' UpCase='none' lem='proba' pos='[IZE][ARR]' mi='[NUMS]' sem='[BIZ-]'> <NODE ref='3' alloc='8' UpCase='none' lem='bat' pos='[DET][DZH]' vpost='IZO'> </NODE> </NODE> </CHUNK> </CHUNK> </SENTENCE> </corpus>
-------------------- Step 3
<?xml version='1.0' encoding='UTF-8' ?> <corpus > <SENTENCE ref='1' alloc='0'> <CHUNK ref='2' type='adi-kat' alloc='5' si='top'> <NODE ref='2' alloc='5' UpCase='none' lem='_izan_' mi='VSIP3S0' pos='[ADI][SIN]'> </NODE> <CHUNK ref='1' type='is' alloc='0' si='subj'> <NODE ref='1' alloc='0' UpCase='none' lem='hau' pos='[DET][ERKARR]'> </NODE> </CHUNK> <CHUNK ref='3' type='is' alloc='8' si='att'> <NODE ref='4' alloc='12' UpCase='none' lem='proba' pos='[IZE][ARR]' mi='[NUMS]' sem='[BIZ-]'> <NODE ref='3' alloc='8' UpCase='none' lem='bat' pos='[DET][DZH]' vpost='IZO'> </NODE> </NODE> </CHUNK> </CHUNK> </SENTENCE> </corpus>
-------------------- Step 4
<?xml version='1.0' encoding='UTF-8' ?> <corpus > <SENTENCE ref='1' alloc='0'> <CHUNK ref='2' type='adi-kat' alloc='5' si='top' length='1' trans='DU' cas='[ABS]'> <NODE ref='2' alloc='5' UpCase='none' lem='_izan_' mi='VSIP3S0' pos='[ADI][SIN]'> </NODE> <CHUNK ref='1' type='is' alloc='0' si='subj' length='1' cas='[ERG]'> <NODE ref='1' alloc='0' UpCase='none' lem='hau' pos='[DET][ERKARR]'> </NODE> </CHUNK> <CHUNK ref='3' type='is' alloc='8' si='att' length='2' cas='[ABS]'> <NODE ref='4' alloc='12' UpCase='none' lem='proba' pos='[IZE][ARR]' mi='[NUMS]' sem='[BIZ-]'> <NODE ref='3' alloc='8' UpCase='none' lem='bat' pos='[DET][DZH]' vpost='IZO'> </NODE> </NODE> </CHUNK> </CHUNK> </SENTENCE> </corpus>
-------------------- Step 5
<?xml version='1.0' encoding='UTF-8' ?> <corpus > <SENTENCE ref='1' alloc='0'> <CHUNK ref='2' type='adi-kat' alloc='5' si='top' length='1' trans='DU' cas='[ABS]'> <NODE ref='2' alloc='5' UpCase='none' lem='_izan_' mi='VSIP3S0' pos='[ADI][SIN]'> </NODE> <CHUNK ref='1' type='is' alloc='0' si='subj' length='1' cas='[ERG]'> <NODE ref='1' alloc='0' UpCase='none' lem='hau' pos='[DET][ERKARR]'> </NODE> </CHUNK> <CHUNK ref='3' type='is' alloc='8' si='att' length='2' cas='[ABS]'> <NODE ref='4' alloc='12' UpCase='none' lem='proba' pos='[IZE][ARR]' mi='[NUMS]' sem='[BIZ-]'> <NODE ref='3' alloc='8' UpCase='none' lem='bat' pos='[DET][DZH]' vpost='IZO'> </NODE> </NODE> </CHUNK> </CHUNK> </SENTENCE> </corpus>
-------------------- Step 6
<?xml version='1.0' encoding='UTF-8' ?> <corpus > <SENTENCE ref='1' alloc='0'> <CHUNK ref='2' type='adi-kat' alloc='5' si='top' cas='[ABS]' trans='DU' length='1'> <NODE ref='2' alloc='5' lem='izan' pos='[NAG]' mi='[ADT][A1][NR_HU]'> </NODE> <CHUNK ref='1' type='is' alloc='0' si='subj' cas='[ERG]' length='1'> <NODE ref='1' alloc='0' UpCase='none' lem='hau' pos='[DET][ERKARR]'> </NODE> </CHUNK> <CHUNK ref='3' type='is' alloc='8' si='att' cas='[ABS]' length='2'> <NODE ref='4' alloc='12' UpCase='none' lem='proba' pos='[IZE][ARR]' mi='[NUMS]' sem='[BIZ-]'> <NODE ref='3' alloc='8' UpCase='none' lem='bat' pos='[DET][DZH]' vpost='IZO'> </NODE> </NODE> </CHUNK> </CHUNK> </SENTENCE> </corpus>
-------------------- Step 7
<?xml version='1.0' encoding='UTF-8' ?> <corpus > <SENTENCE ref='1' alloc='0'> <CHUNK ref='2' type='adi-kat' alloc='5' si='top' cas='[ABS]' trans='DU' length='1'> <NODE ref='2' alloc='5' lem='izan' pos='[NAG]' mi='[ADT][A1][NR_HU]'> </NODE> <CHUNK ref='1' type='is' alloc='0' si='subj' length='1' cas='[ERG]'> <NODE ref='1' alloc='0' UpCase='none' lem='hau' pos='[DET][ERKARR]'> </NODE> </CHUNK> <CHUNK ref='3' type='is' alloc='8' si='att' cas='[ABS]' length='2'> <NODE ref='4' alloc='12' UpCase='none' lem='proba' pos='[IZE][ARR]' mi='[NUMS]' sem='[BIZ-]'> <NODE ref='3' alloc='8' UpCase='none' lem='bat' pos='[DET][DZH]' vpost='IZO'> </NODE> </NODE> </CHUNK> </CHUNK> </SENTENCE> </corpus>
-------------------- Step 8
<?xml version='1.0' encoding='UTF-8'?> <corpus > <SENTENCE ord='1' ref='1' alloc='0'> <CHUNK ord='2' ref='2' type='adi-kat' alloc='5' si='top' cas='[ABS]' trans='DU' length='1'> <NODE ref='2' alloc='5' lem='izan' pos='[NAG]' mi='[ADT][A1][NR_HU]'> </NODE> <CHUNK ord='0' ref='1' type='is' alloc='0' si='subj' length='1' cas='[ERG]'> <NODE ref='1' alloc='0' UpCase='none' lem='hau' pos='[DET][ERKARR]'> </NODE> </CHUNK> <CHUNK ord='1' ref='3' type='is' alloc='8' si='att' cas='[ABS]' length='2'> <NODE ref='4' alloc='12' UpCase='none' lem='proba' pos='[IZE][ARR]' mi='[NUMS]' sem='[BIZ-]'> <NODE ref='3' alloc='8' UpCase='none' lem='bat' pos='[DET][DZH]' vpost='IZO'> </NODE> </NODE> </CHUNK> </CHUNK> </SENTENCE> </corpus>
-------------------- Step 9
<?xml version='1.0' encoding='UTF-8' ?> <corpus > <SENTENCE ord='1' ref='1' alloc='0'> <CHUNK ord='2' ref='2' type='adi-kat' alloc='5' si='top' cas='[ABS]' trans='DU' length='1'> <NODE ord='0' ref='2' alloc='5' lem='izan' pos='[NAG]' mi='[ADT][A1][NR_HU]'> </NODE> <CHUNK ord='0' ref='1' type='is' alloc='0' si='subj' length='1' cas='[ERG]'> <NODE ord='0' ref='1' alloc='0' UpCase='none' lem='hau' pos='[DET][ERKARR]'> </NODE> </CHUNK> <CHUNK ord='1' ref='3' type='is' alloc='8' si='att' cas='[ABS]' length='2'> <NODE ord='0' ref='4' alloc='12' UpCase='none' lem='proba' pos='[IZE][ARR]' mi='[NUMS]' sem='[BIZ-]'> <NODE ord='1' ref='3' alloc='8' UpCase='none' lem='bat' pos='[DET][DZH]' vpost='IZO'> </NODE> </NODE> </CHUNK> </CHUNK> </SENTENCE> </corpus>
-------------------- Step 10
<?xml version='1.0' encoding='UTF-8'?> <corpus > <SENTENCE ord='1' ref='1' alloc='0'> <CHUNK ord='2' ref='2' type='adi-kat' alloc='5' si='top' cas='[ABS]' trans='DU' length='1'> <NODE form='da' ref ='2' alloc ='5' ord='0' lem='izan' pos='[NAG]' mi='[ADT][A1][NR_HU]'> </NODE> <CHUNK ord='0' ref='1' type='is' alloc='0' si='subj' length='1' cas='[ERG]'> <NODE form='hau' ref ='1' alloc ='0' ord='0' UpCase='none' lem='hau' pos='[DET][ERKARR]'> </NODE> </CHUNK> <CHUNK ord='1' ref='3' type='is' alloc='8' si='att' cas='[ABS]' length='2'> <NODE form='proba' ref ='4' alloc ='12' ord='0' UpCase='none' lem='proba' pos='[IZE][ARR]' mi='[NUMS]' sem='[BIZ-]'> <NODE form='bat' ref ='3' alloc ='8' ord='1' UpCase='none' lem='bat' pos='[DET][DZH]' vpost='IZO'> </NODE> </NODE> </CHUNK> </CHUNK> </SENTENCE> </corpus>
-------------------- Step 11
Hau proba bat da
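When comparing steps, it can help to pull just the lemmas out of an intermediate XML dump. A quick grep/sed sketch (not part of Matxin; shown on a shortened, hypothetical NODE line):

```shell
# Extract the lem='...' attribute values from one line of step output.
echo "<NODE ref='2' lem='izan'> <NODE ref='1' lem='hau'> <NODE lem='proba'>" \
  | grep -o "lem='[^']*'" \
  | sed "s/lem='\([^']*\)'/\1/"
# prints izan, hau and proba, one per line
```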
Documentation
- Descripción del sistema de traducción es-eu Matxin (in Spanish)
- Documentation of Matxin (in English)