
Task ideas for Google Code-in/Evaluation of translation of an existing pair


Latest revision as of 17:00, 4 December 2019

Given a language pair, evaluate the translation quality on a corpus of 50 sentences.

You must evaluate each sentence in three ways:

  1. Overall translation quality (0-5)
  2. Fluency (0-5): how well-formed the target sentence is in the target language.
  3. Adequacy/Fidelity (0-5): how well the translation captures the meaning of the original sentence.
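The per-sentence scores above can be tallied with a short script; this is a minimal sketch, and the record format (one tuple of quality, fluency, adequacy per sentence) is an assumption, not something the task prescribes:

```python
# Sketch: average the three 0-5 evaluation scores over all rated sentences.
# The (quality, fluency, adequacy) tuple format is an assumption for illustration.

def average_scores(ratings):
    """Return mean (quality, fluency, adequacy) over all rated sentences."""
    if not ratings:
        raise ValueError("no ratings given")
    for quality, fluency, adequacy in ratings:
        if not all(0 <= s <= 5 for s in (quality, fluency, adequacy)):
            raise ValueError("scores must be in the range 0-5")
    n = len(ratings)
    # zip(*ratings) groups each score column together for summing.
    return tuple(sum(column) / n for column in zip(*ratings))

# Example with three rated sentences:
ratings = [(4, 5, 3), (2, 3, 2), (5, 5, 4)]
quality, fluency, adequacy = average_scores(ratings)
```

Averages like these give a quick summary of the pair's overall behaviour across the corpus, alongside the individual sentence scores.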

Ask for a corpus if you claim this task.
