Apertium-neural

Apertium was originally developed to offer a free/open-source framework for creating RBMT systems. It was modelled on existing systems, but targeted at related languages, trying to do one thing well.

What might an Apertium NMT system for lesser-resourced and marginalised languages look like?

Thoughts:

* Trains without a GPU or large compute
* Optimised for small corpora (under 100k parallel sentences)
* Includes linguistic tricks (one possible reading sketched below)
* C++, autotools
* Works with existing tools (formatters, APY, etc.)
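
One concrete reading of "linguistic tricks" (an assumption; the page does not spell it out) is factored input: feeding the network lemmas and morphological tags from Apertium's existing analysers rather than raw surface forms, so that inflected forms of one lemma share statistics on a small corpus. A minimal sketch of splitting an Apertium stream-format lexical unit into such factors:

<pre>
// Hypothetical factoring of an Apertium stream-format lexical unit,
// e.g. "^beers/beer<n><pl>$", into a lemma and its tags.
#include <iostream>
#include <string>
#include <vector>

struct Factors {
  std::string lemma;
  std::vector<std::string> tags;
};

// Parse "^surface/lemma<tag1><tag2>$". First analysis only; a real
// tool would also handle ambiguous units like "^x/a<n>/b<v>$".
Factors factor(const std::string& lu) {
  Factors f;
  size_t slash = lu.find('/');
  size_t tag0 = lu.find('<', slash);
  size_t end = (tag0 == std::string::npos) ? lu.size() - 1 : tag0;
  f.lemma = lu.substr(slash + 1, end - slash - 1);
  for (size_t i = tag0; i != std::string::npos; i = lu.find('<', i + 1)) {
    size_t j = lu.find('>', i);
    f.tags.push_back(lu.substr(i + 1, j - i - 1));
  }
  return f;
}

int main() {
  Factors f = factor("^beers/beer<n><pl>$");
  std::cout << f.lemma;                          // beer
  for (auto& t : f.tags) std::cout << ' ' << t;  // n pl
  std::cout << std::endl;
}
</pre>

The factors could then be embedded separately and concatenated at the input layer, a known way to cut data sparsity in low-resource NMT.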

Pipeline(?):

<pre>
apertium-destxt | apertium-preprocess | apertium-encode | apertium-decode | apertium-retxt
</pre>
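
The stage names apertium-encode and apertium-decode are proposals here, not existing tools. If each stage is a plain Unix filter in the style of the existing tools, apertium-destxt and apertium-retxt keep handling formatting exactly as they do in the RBMT pipeline. A minimal, hypothetical main loop for one such stage (the one-token-per-line vocabulary format is also an assumption):

<pre>
// Hypothetical skeleton for a pipeline stage such as apertium-encode:
// a filter mapping whitespace-separated tokens on stdin to vocabulary
// indices on stdout.
#include <fstream>
#include <iostream>
#include <string>
#include <unordered_map>

int main(int argc, char** argv) {
  if (argc != 2) {
    std::cerr << "usage: " << argv[0] << " <vocab-file>" << std::endl;
    return 1;
  }
  // Assumed vocabulary format: one token per line, line number = index.
  std::unordered_map<std::string, int> vocab;
  std::ifstream vf(argv[1]);
  std::string tok;
  int idx = 0;
  while (std::getline(vf, tok)) vocab[tok] = idx++;

  // Unknown tokens map to a reserved index (here: vocab.size()).
  while (std::cin >> tok) {
    auto it = vocab.find(tok);
    std::cout << (it != vocab.end() ? it->second : (int)vocab.size()) << ' ';
  }
  std::cout << std::endl;
  return 0;
}
</pre>

Because the stages talk plain text over pipes, each one can be developed and tested in isolation with ordinary shell tools.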

Backend:

* Own-built(?)
* DyNet (forked? a sketch follows below)
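
For a sense of what the DyNet option might involve, here is a minimal sketch of one training step against the DyNet 2.x C++ API; the model shape, the dimensions, and the toy token sequence are placeholders, not design decisions:

<pre>
// Minimal DyNet training step: embed a token sequence, run it through
// an LSTM encoder, and (as a stand-in objective) predict the final
// token from the final hidden state. All dimensions are placeholders.
#include <dynet/dynet.h>
#include <dynet/expr.h>
#include <dynet/lstm.h>
#include <dynet/training.h>
#include <vector>

using namespace dynet;

int main(int argc, char** argv) {
  initialize(argc, argv);  // CPU by default: no GPU required

  const unsigned VOCAB = 2000, EMB = 64, HID = 128;
  ParameterCollection model;
  SimpleSGDTrainer trainer(model);
  LookupParameter embed = model.add_lookup_parameters(VOCAB, {EMB});
  LSTMBuilder encoder(/*layers=*/1, EMB, HID, model);
  Parameter W = model.add_parameters({VOCAB, HID});
  Parameter b = model.add_parameters({VOCAB});

  // Toy input: token indices as the encode stage would emit them.
  std::vector<unsigned> sent = {5, 42, 7, 1};

  ComputationGraph cg;
  encoder.new_graph(cg);
  encoder.start_new_sequence();
  Expression h;
  for (unsigned w : sent) h = encoder.add_input(lookup(cg, embed, w));

  Expression logits = parameter(cg, W) * h + parameter(cg, b);
  Expression loss = pickneglogsoftmax(logits, sent.back());

  cg.forward(loss);
  cg.backward(loss);
  trainer.update();
  return 0;
}
</pre>

DyNet fits the "trains without a GPU" constraint, since it is CPU-first with a C++ core; a fork would presumably be about trimming it to what the pipeline needs and wiring it into an autotools build.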

==Refs==

* R.P. Ñeco, M.L. Forcada, "Asynchronous translations with recurrent neural nets", in Proceedings of ICNN'97 (Houston, Texas, 8–12 June 1997), vol. 4, pp. 2535–2540.

[[Category:Ideas]]