== Introduction ==
   
This section contains some introductory points about me.
   
   
 
*'''Name''' : Akshay Minocha
*'''E-mail address''' : akshayminocha5@gmail.com | akshay.minocha@students.iiit.ac.in
   
*'''Other information that may be useful to contact you''': nick on the #apertium channel: '''''ksnmi'''''
   
*'''Why is it you are interested in machine translation?'''
 
**I'm interested in language, and machine translation is one way of engaging with how language is used and how it changes. I have been working on understanding the methods involved in the translation process, both theoretically and by building MT systems.
   
*'''Why is it that you are interested in the Apertium project?'''
 
**This project on "non-standard text input" has everything I love to work on: informal data from Twitter/IRC/etc., noise removal, building FSTs, analysing data and, at the end, building machine translation systems. I believe the approach I have in mind can be standardised for many source languages. For the time being I'm sticking to English, but changing languages is easy as well as interesting. Also, the translation quality we work on should remain intact when we give the results back to the community; this is an important step. It is also the kind of project whose implementation will, in the end, help translation on all the language pairs in Apertium.
   
*'''Which of the published tasks are you interested in? What do you plan to do?'''
 
**I initially want to start with English and Spanish as the source languages, since plenty of informal social-media data is available for them. After completing this task we can use this pair as a standard for improving translation quality for other languages too.
   
*'''Include a proposal, including'''
 
**'''Reasons why Google and Apertium should sponsor it''' - I'd love to work on this project with the proposed mentors. The project is important because the open MT community should also embrace the change in how people use language, in the form of the now-popular non-standard text. This will extend our reach to many more users and will no doubt increase the practical effectiveness of the translation task.
 
   
*'''And a detailed work plan (including, if possible, a brief schedule with milestones and deliverables). Include time needed to think, to program, to document and to disseminate.'''
Link to the workplan:
#Draft version at the moment (13th March, 2014)
 
   
== Coding Challenges ==
 
   
=== Analysing the issues in non-standard data ===

For the coding challenge posted on the Ideas page of this project, I created a random set of 50 non-standard tweets from a corpus collected earlier and analysed them individually to see what goes wrong while performing the translation task. Some general trends were visible in the non-standard input. <br/> Details of the analysis can be found in the following spreadsheet: [https://docs.google.com/spreadsheet/ccc?key=0ApJ82JmDw6DHdDBad1ZXay1LZDhQckpxcXZmQTl1VVE#gid=2 TranslationAnalysisSheet] <br/> There you will find notes on the authenticity of the tweets (collected for an earlier project, hence the year 2011), the translation produced by Apertium and my comment on each of the translations.

=== The Extended word reduction task ''(Mailing list)'' ===
 
At the moment this works for English, using a wordlist generated from the English dictionary. <br/> The dictionary can be replaced by any other word list and the output will adapt accordingly. <br/> Sample input -> <br/> Helllooo''\n''i''\n''completely''\n''loooooove''\n''youuu''\n''!!!''\n''nooooo''\n''doubt''\n''about''\n''that''\n''!!!!!!!!''\n'';)''\n''<br/> Output (at the end of the processing) -> <br/> ^Helllooo/Hello$''\n''^i/i$''\n''^completely/completely$''\n''^loooooove/love$''\n''^youuu/you$''\n''^!!!/!!!$''\n''^nooooo/no$''\n''^doubt/doubt$''\n''^about/about$''\n''^that/that$''\n''^!!!!!!!!/!!!!!!!!$''\n''^;)/;)$''\n'' <br/> A minimal sketch of this reduction step is given below.
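The following is a minimal sketch, in Python, of how this reduction step can work against a plain wordlist. The file name ''english_words.txt'' is a placeholder for any one-word-per-line list; tokens are read one per line from standard input, as in the sample above, and unknown tokens (punctuation, emoticons) pass through unchanged.

<pre>
import sys
from itertools import groupby, product


def load_wordlist(path):
    with open(path, encoding="utf-8") as fh:
        return {line.strip().lower() for line in fh if line.strip()}


def candidates(token):
    # For every run of repeated characters, try keeping one or two of them.
    runs = [(char, len(list(group))) for char, group in groupby(token)]
    options = [[char * n for n in (1, 2) if n <= length]
               for char, length in runs]
    return ("".join(combo) for combo in product(*options))


def reduce_token(token, wordlist):
    if token.lower() in wordlist:
        return token
    # Prefer the longest reduced form that is an actual word.
    for cand in sorted(candidates(token), key=len, reverse=True):
        if cand.lower() in wordlist:
            return cand
    return token  # punctuation, emoticons and unknown words pass through


if __name__ == "__main__":
    words = load_wordlist("english_words.txt")  # placeholder wordlist file
    for line in sys.stdin:
        token = line.rstrip("\n")
        if token:
            print("^{}/{}$".format(token, reduce_token(token, words)))
</pre>

Preferring the longest surviving candidate keeps double letters where the dictionary supports them ("Helllooo" -> "Hello") while collapsing pure stretching ("loooooove" -> "love").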
   
=== Corpus Creation ===
   
'''Separate task on Corpus Creation for English''' ->
I created several types of non-standard corpora for the purpose of analysis, and the above set of 50 tweets was taken from random parts of these.
 
*With special symbols -> The number of such tweets was high and the list of emoticons extracted from them was considerable; I ended up finding around 545 of the most frequently used emoticons (the list of emoticons from the Twitter dataset can be found here: [http://web.iiit.ac.in/~akshay.minocha/emoticons_list_non_standard.txt Emoticons_NON_Standard]). <br/> ''Number of Posts'' -> 475,179 <br/> ''Link'' -> [https://www.dropbox.com/s/lg3uizuefw978tr/emoticon_tweets Emoticon_dataset]
 
*Abbreviations -> These are words which are not in the dictionary but which are used on social platforms, especially Twitter, where users face a tight character limit. <br/> Around 100 of the most common abbreviations from tweets collected over a period of time are listed at the following link -> [http://web.iiit.ac.in/~akshay.minocha/abbreviations_english.txt Abbreviations_english] <br/> ''Number of Posts'' -> 94,290 <br/> ''Link'' -> [https://www.dropbox.com/s/3cvvw7oewvvm0gs/abbreviations_non_standard_english abbreviations_english_dataset]
 
*Repetitive or extended words and punctuators -> Using a simple algorithm, I separated out these occurrences. Generating a word list shows the trend in how these words are used, and also helps us standardise them for further processing (a rough sketch of this separation is given below). <br/> ''Number of Posts'' -> 411,404 <br/> ''Link'' -> [https://www.dropbox.com/s/yoe24xobmf4uyjo/extended_words_non_standard Extended_words_dataset]
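A rough sketch of how this separation can be reproduced is shown below, assuming one tweet per line on standard input and the two lists linked above saved locally (the file names ''emoticons.txt'' and ''abbreviations.txt'' are placeholders, not the actual file names). A tweet may of course land in more than one subset.

<pre>
import re
import sys


def load_list(path):
    with open(path, encoding="utf-8") as fh:
        return {line.strip() for line in fh if line.strip()}


emoticons = load_list("emoticons.txt")                     # placeholder path
abbreviations = {a.lower() for a in load_list("abbreviations.txt")}
extended = re.compile(r"(.)\1{2,}")  # any character repeated three or more times

buckets = {"emoticon": [], "abbreviation": [], "extended": []}
for tweet in sys.stdin:
    tokens = tweet.split()
    if any(tok in emoticons for tok in tokens):
        buckets["emoticon"].append(tweet)
    if any(tok.lower() in abbreviations for tok in tokens):
        buckets["abbreviation"].append(tweet)
    if any(extended.search(tok) for tok in tokens):
        buckets["extended"].append(tweet)

# One output file per category, e.g. emoticon_tweets.txt
for name, posts in buckets.items():
    with open(name + "_tweets.txt", "w", encoding="utf-8") as out:
        out.writelines(posts)
</pre>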
 
   
 
   
== Non Standard features in the Text ==
I analysed the most common categories of non-standard text occurrences and have summed them up below. The prototype, described later, will show how I plan to use these modules. For some of them the order won't be important, as we aim to make the whole structure work regardless of the input language.

*'''Use of content-specific terms ->'''
**Such as RT (retweet), @username mentions and hashtags in the case of Twitter. These have to be ignored, and we should understand that they do not affect the translation quality much. Their use at random positions, however, does affect the machine translation systems further along the processing pipeline. Links are also present in most of the tweets.
*'''Handling Links (important) ->'''
**Not only for non-standard input but also for normal standard input, this needs to be taken into account in Apertium at the moment. <br/> Suggestion -> links are currently not being ignored; unknown words inside them are marked with a *(unknown), and this should be noted and corrected, as machine-translating a link changes its purpose. <br/> For example, an en->es translation of http://en.wikipedia.org/wiki/Red_Bull could come out as http://en.wikipedia.org/wiki/Rojo_Toro; the current translation by Apertium is http://en.wikipedia.org/wiki/Rojo_Bull, which is incorrect. Either way, the translated link would redirect us to an undesirable page. A possible way of protecting links is sketched below.
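One possible way of protecting links, sketched here outside the Apertium pipeline itself: URLs are swapped for opaque placeholders before translation and restored afterwards. The placeholder shape ("QQURL0QQ") is arbitrary; the only assumption is that the translator passes it through unchanged.

<pre>
import re

URL_RE = re.compile(r"https?://\S+")


def protect_urls(text):
    """Replace every URL with a placeholder and remember the originals."""
    urls = []

    def stash(match):
        urls.append(match.group(0))
        return "QQURL{}QQ".format(len(urls) - 1)

    return URL_RE.sub(stash, text), urls


def restore_urls(text, urls):
    for i, url in enumerate(urls):
        text = text.replace("QQURL{}QQ".format(i), url)
    return text


protected, saved = protect_urls("Read this http://en.wikipedia.org/wiki/Red_Bull now")
# ... run `protected` through the en-es pipeline here ...
print(restore_urls(protected, saved))
</pre>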
 
   
*'''Use of Emoticons ->'''
**People use emoticons very frequently in posts, and these have to be handled. <br/> Analysing the symbols present in the set of tweets, I found that the most commonly occurring emoticons are the following -> [http://web.iiit.ac.in/~akshay.minocha/emoticons_list_non_standard.txt Emoticons most commonly used (546)] (already mentioned above). <br/> '''Solution''' -> <br/> If we do not want the expression to be lost in translation, these can be kept as they are; otherwise, if Apertium treats them as punctuation, we should remove them. <br/> Since the popular ones include letters and words as well, we WON'T be using regular expressions, which would limit our reach; a simple list-based check is sketched below. <br/>
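A small sketch of that list-based check (no regular expressions), assuming the frequency list linked above has been saved locally under the same name; depending on the choice made above, matching tokens are either kept verbatim or dropped.

<pre>
def load_emoticons(path="emoticons_list_non_standard.txt"):  # local copy of the list above
    with open(path, encoding="utf-8") as fh:
        return {line.strip() for line in fh if line.strip()}


def handle_emoticons(tokens, emoticons, keep=True):
    """Keep or drop the tokens that appear in the emoticon list."""
    if keep:
        return list(tokens)
    return [tok for tok in tokens if tok not in emoticons]


emoticons = load_emoticons()
print(handle_emoticons("i completely love you ;)".split(), emoticons, keep=False))
</pre>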
 
   
*'''Use of Repetitive or Extended Words ->'''
**This is the most commonly occurring issue in non-standard text. <br/> The task given by Francis earlier on the mailing list was to standardise such words against a dictionary; the extended word reduction task described above does exactly this, and the same sample input and output apply here. <br/> Our final aim is to reduce these words in a similar fashion and then match them. <br/> It should be noted that abbreviations and acronyms should also be added to the dictionary externally. In many cases a repetition such as "uuuu" should standardise to "you", i.e. "uuuu" -> "u" -> "you"; hence abbreviation processing should always come after this step, preferably at the end.
**Punctuation repetition is not a problem for us, since Apertium handles '''!!!''' in the same way as '''!'''.
 
   
*'''Handling of Hashtags ->'''
 
**'''Cases in Hashtags ->'''
 
***Words are separated by Capitals <br/> For example, #ForLife -> For Life
 
***Words are not separated by Capitals <br/> For example, #Fridayafterthenext
 
**'''Solution''' - <br/> Hashtag disambiguation can be done in either of two ways: we can break the hashtag into separate words by repeated look-ups in the dictionary, or by using FSTs; I think the latter will be much easier. <br/> It is important to separate the words mentioned in hashtags: hashtags are supposed to convey the emotion or the summary of the tweet, and so they are frequently not in context with their grammatical surroundings.
 
**So the words in a hashtag should be represented as a 'lone sentence'. <br/> For example, "Today comes monday again, #whereismyextrasunday" -> <br/> "Today comes monday again." "Where is my extra Sunday?" <br/> A sketch covering both hashtag cases is given below.
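The sketch below covers both cases: CamelCase hashtags are split on capital letters, and run-together lowercase hashtags are segmented by a naive greedy longest-match against a wordlist. An FST or a probabilistic model would do the second job better; ''english_words.txt'' is again a placeholder wordlist.

<pre>
import re


def load_wordlist(path="english_words.txt"):  # placeholder wordlist file
    with open(path, encoding="utf-8") as fh:
        return {line.strip().lower() for line in fh if line.strip()}


def split_hashtag(tag, wordlist):
    body = tag.lstrip("#")
    # Case 1: words separated by capitals, e.g. #ForLife -> "For Life"
    parts = re.findall(r"[A-Z][a-z]+|[a-z]+|\d+", body)
    if len(parts) > 1:
        return " ".join(parts)
    # Case 2: no capitals, e.g. #whereismyextrasunday -> greedy dictionary match
    words, rest = [], body.lower()
    while rest:
        for end in range(len(rest), 0, -1):
            if rest[:end] in wordlist or end == 1:
                words.append(rest[:end])
                rest = rest[end:]
                break
    return " ".join(words)


wordlist = load_wordlist()
print(split_hashtag("#ForLife", wordlist))
print(split_hashtag("#whereismyextrasunday", wordlist))  # quality depends on the wordlist
</pre>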
 
   
*'''Abbreviations and Acronyms ->'''
**By matching the most frequently occurring non-dictionary words in the tweets, I came up with a list of abbreviations. <br/> These are -> [http://web.iiit.ac.in/~akshay.minocha/abbreviations_english.txt English_abbreviations_list_non_standard] <br/> The solution for improving translation in the presence of these is simple: once we know the full form, we simply swap it in as the final step of the processing towards standard input (a sketch is given below). <br/> Single-character representations such as r -> are, u -> you, 2 -> to are also included. The list can be extended by further analysis of the data. <br/>
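A minimal sketch of that final substitution step, assuming the list has been stored as a two-column tab-separated file (the name ''abbreviations_english.tsv'' and the format are assumptions, not the actual layout of the linked file).

<pre>
def load_abbreviations(path="abbreviations_english.tsv"):  # assumed two-column TSV
    table = {}
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            if line.strip():
                short, full = line.rstrip("\n").split("\t", 1)
                table[short.lower()] = full
    return table


def expand_abbreviations(tokens, table):
    """Swap every known short form for its full form, leaving the rest alone."""
    return [table.get(tok.lower(), tok) for tok in tokens]


table = load_abbreviations()
print(" ".join(expand_abbreviations("c u 2morrow".split(), table)))
</pre>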
 
   
*'''Spelling mistakes ->'''
 
**These include deliberate misspellings as well as errors that arise from vowel dropping. <br/> The Levenshtein distance between two strings is defined as the minimum number of edits needed to transform one string into the other, with the allowed edit operations being insertion, deletion or substitution of a single character. This works well and is implemented, for example, by the PyEnchant library for Python: <br/> >> d = enchant.request_dict("en_US") <br/> >> d.suggest("Helo") <br/> ['He lo', 'He-lo', 'Hello', 'Helot', 'Help', 'Halo', 'Hell', 'Held', 'Helm', 'Hero', "He'll"] <br/> but it was somewhat inaccurate because it does not consider the "transposition" operation, which is covered at http://norvig.com/spell-correct.html. Peter Norvig shows there how easily a spelling-correction script can be built from a large standard corpus for a particular language. <br/> Building a spelling corrector for a language is easy by either of the above routes, and it addresses both problems. <br/> Alternatively, from a large bag of words we can probabilistically find the most likely spelling for the word. A sketch of a distance measure that does include transpositions is given below.
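As a sketch of a distance measure that does take transpositions into account, the Damerau-Levenshtein (optimal string alignment) variant is shown below; candidates from a wordlist can then be ranked by this distance, with ties broken by corpus frequency as suggested above.

<pre>
def damerau_levenshtein(a, b):
    """Edit distance allowing insertion, deletion, substitution and transposition."""
    d = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        d[i][0] = i
    for j in range(len(b) + 1):
        d[0][j] = j
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
            if (i > 1 and j > 1 and a[i - 1] == b[j - 2]
                    and a[i - 2] == b[j - 1]):
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)  # transposition
    return d[len(a)][len(b)]


def correct(word, wordlist):
    """Pick the wordlist entry closest to the misspelt word (brute force)."""
    return min(wordlist, key=lambda w: damerau_levenshtein(word, w))


print(damerau_levenshtein("helo", "hello"))   # 1 (insertion)
print(damerau_levenshtein("hlelo", "hello"))  # 1 (transposition)
</pre>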
 
   
*'''Apostrophe correction ->'''
 
**For some words we can easily predict whether an apostrophe is missing, <br/> for example theyll -> they’ll <br/> or im -> i’m, <br/> but ambiguity exists in words like <br/> hell -> he’ll or hell? <br/> shell -> she’ll or shell? <br/> Here the apostrophe changes the entire sense, as these are two completely different words. <br/> This can be handled with the prediction mechanism discussed above, where trigram probabilities from a standard corpus are compared and the more likely variant is chosen (a small sketch follows below). <br/> List of apostrophe occurrences from a standard corpus collected by me earlier -> [http://web.iiit.ac.in/~akshay.minocha/apostrophe_list.txt List of apostrophe occurrences_standard_English]
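A toy sketch of that idea, assuming trigram counts have already been collected from a standard reference corpus (the counts below are invented purely for illustration):

<pre>
from collections import Counter


def choose_variant(prev_word, candidates, next_word, trigram_counts):
    """Pick the candidate whose trigram with the surrounding words is most frequent."""
    def score(cand):
        return trigram_counts[(prev_word.lower(), cand.lower(), next_word.lower())]
    return max(candidates, key=score)


# Invented counts standing in for a model built from a real corpus:
trigram_counts = Counter({("maybe", "he'll", "come"): 42,
                          ("maybe", "hell", "come"): 1})
print(choose_variant("maybe", ["hell", "he'll"], "come", trigram_counts))  # he'll
</pre>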
 
 
*'''Spacing and hyphen variation & optional hyphen -> '''
 
**Since we are proposing a proper mechanism to figure out a solution, one way is to create a reference corpus (either what Apertium is currently using, or something we can build quickly using the technique described in my paper: Feed Corpus: An Ever Growing Up-To-Date Corpus, Minocha, Akshay and Reddy, Siva and Kilgarriff, Adam, ACL SIGWAC, 2013). <br/> With this we can use a trigram-based model (or a higher-order n-gram) to predict the most probable word form; we can also train on the reference corpus to predict the word. <br/> After creating the standard text, the only way to verify our level of success is to compare our system against other available machine translation systems such as Moses, train them on different sets and measure our accuracy. <br/>
 
== Literature Review ==

There are many sites <ref> http://transl8it.com/ </ref><ref> http://www.lingo2word.com/translate.php </ref><ref> http://www.dtxtrapp.com/ </ref> on the internet that offer SMS English to English translation services. However, the technology behind these sites is simple and uses straight dictionary substitution, with no language model or any other approach to help them disambiguate between possible word substitutions.<ref> Raghunathan, Karthik, and Stefan Krawczyk. CS224N: Investigating SMS text normalization using statistical machine translation. Technical Report, 2009. </ref> <br/>

*There have been a few attempts to improve the machine translation task for non-standard data. One piece of preliminary research is <ref> Jehl, Laura Elisabeth. "Machine translation for Twitter." (2010). </ref>, where a comparison between the linguistic characteristics of Europarl data and Twitter data is made. The methodology suggested relies heavily on in-domain data to improve quality in the later steps. The evaluation shows an improvement of 0.57% in BLEU score on a set of 600 sentences. A major suggestion from this research is that hashtags, @usernames and URLs should not be treated like regular words; this was the mistake we were making earlier, and it did not help the translation task much. They also follow the technique of putting XML markup in the source text and handling it like superblanks. <br/> '''Issues in this case -'''
**Working on building a bilingual resource from in-domain data.
**Other sources of non-standard data do not seem to get a significant improvement.
**The BLEU score improvement is marginal.

*This is a standard piece of research<ref> Sproat, Richard, et al. "Normalization of non-standard words." Computer Speech & Language 15.3 (2001): 287-333. </ref> on Non-Standard Words (NSWs). It suggests that non-standard words are more ambiguous than ordinary words in terms of pronunciation and interpretation, and that in many applications it is desirable to "normalize" text by replacing the NSWs with the contextually appropriate ordinary word or sequence of words. They have categorised numbers, abbreviations, other markup and URLs, and handled capitalisation, etc. A very interesting tree-based abbreviation model is suggested in this research, which can give us ideas for improving our current abbreviation model or simply be another addition to it; this includes suggestions for vowel dropping, shortened words and first-syllable usage. <br/> The issue with most of this research is its limitation to a particular language, in this case English. They have standardised the most common features of English, leaving scope for a lot of improvement. In our processing we are not, at the moment, considering any specific markup conventions within the pipeline, but this paper shows some promising work on that which can be useful for developers and other users who want to analyse the data in more detail. Such a convention can be added easily after conducting experiments and seeing the results.

*This <ref> Pennell, Deana, and Yang Liu. "A Character-Level Machine Translation Approach for Normalization of SMS Abbreviations." IJCNLP. 2011. </ref> is a completely different approach, where the authors try to solve the problem by proposing a character-level machine translation approach. The issue here is accuracy: they used the Jazzy spell checker<ref> Mindaugas Idzelis. 2005. Jazzy: The java open source spell checker. </ref> as a baseline and compared their system with previous such research. A further issue is the large amount of resources used up in training and tuning the MT system; such a system would also be complicated to include at runtime with Apertium.

*This research <ref> Lopez, Adam, and Matt Post. "Beyond bitext: Five open problems in machine translation." </ref> is more idea-centric; the authors argue that MT research is far from complete and that we face many challenges. With our project we aim to target these problems specifically: translation of informal text and translation of low-resource language pairs are the ones that concern Apertium, and us, the most.

*This research <ref> Lo, Chi-kiu, and Dekai Wu. "Can informal genres be better translated by tuning on automatic semantic metrics." Proceedings of the 14th Machine Translation Summit (MT Summit XIV) (2013). </ref> identifies the difficulties that web-forum data and other informal genres cause not only for the translation community but also for people working on semantic role labelling, and probably many more who rely on data analytics. <br/> They report that systems tuned on the MEANT semantic metric performed significantly better than systems tuned on BLEU and TER. With our module, the error analysis suggested would further improve such a system: because of a significant rise in the number of known words, and better grammar and word sense, the semantic parser used there would perform better.

*This research<ref> Pennell, Deana L., and Yang Liu. "Normalization of informal text." Computer Speech & Language 28.1 (2014): 256-277. </ref> proposes an approach very similar to ours, but they have focussed mainly on abbreviated word re-modelling and expansion, implemented as a character-based translation model.

*Inspired by this research<ref> S. Bangalore, V. Murdock, G. Riccardi. Bootstrapping bilingual data using consensus translation for a multilingual instant messaging system, 19th International Conference on Computational Linguistics, Taipei, Taiwan (2002), pp. 1–7. </ref>, in which Srinivas Bangalore suggests a method of bootstrapping from data on chat forums and other informal sources, we can build up abbreviation resources for a particular language. The way we can proceed with this task in Apertium, as I had suggested before, is to first take in a small list of abbreviations and then use it to suggest which other frequent words in the data might also be abbreviations. This resource can be verified and then included when building up the system for the particular language.

== Conclusion ==

The project is important because non-standard text is not handled by many MT systems, and because we have to keep up with the way language is used today in order to convey the meaning intact to speakers of another language.
 
   
 
== References ==

<references/>

[[Category:GSoC 2014 Student proposals|Ksnmi]]
