=Introduction=

==Project Information==

This is one of the 9 Google Summer of Code projects accepted for Apertium (see http://socghop.appspot.com/org/home/google/gsoc2009/apertium).

'''Student:''' Víctor Manuel Sánchez Cartagena, University of Alicante, Spain

'''Mentor:''' Juan Antonio Pérez-Ortiz, Transducens Group, University of Alicante, Spain
==Introduction==

Apertium is currently a very useful translation platform, and it will hopefully become even more useful in the future as new language pairs are added.

However, if another application wants to take advantage of Apertium, Apertium needs to be installed on the same machine. Although installing Apertium is not a very difficult task, the linguistic data are updated frequently, so the installation has to be updated often too. Moreover, communication between an external application and Apertium is not easy to code, because Apertium only reads input text from standard input.

There is another option: the simple web service currently located at http://www.apertium.org/. However, it has two major problems:

* Its features are quite limited: it cannot list the available language pairs, and it only accepts HTTP GET and POST parameters.
* Since it starts a new Apertium instance for each request, it consumes a lot of computer resources, making scalability difficult, especially when there is only a single server.

So, the aim of this project is to build an application wrapper for Apertium with a public web service API (both REST and SOAP) that allows third-party programmers to access it from their desktop or web applications and request the same operations that can be done with a local installation. The key feature of this application is scalability: it is intended to handle high loads by scheduling and prioritizing pending translations according to the server-side resources available. The environments considered can be static, where a fixed number of servers is available, or dynamic, as in elastic cloud computing services. When working in dynamic mode, new servers will be added automatically as the load rises. The availability of a highly scalable web service for Apertium will catalyze the worldwide use and adoption of the platform in many translation contexts.
==Technical challenges and application features==

One of the first challenges to overcome is the design of an easy-to-use API: a difficult API would stop many developers from integrating Apertium into their applications, so it is worth studying other popular translation APIs. We plan to offer both REST and SOAP interfaces to give developers as many options as possible. However, the REST web service will not be totally RESTful, since it will accept translation requests over HTTP POST in order to overcome HTTP GET length limits.
Currently, Apertium's scalability is strongly limited by the fact that it cannot run as a daemon and has to be launched from scratch every time a translation is required. In a web service environment, continuously launching and terminating Apertium processes would put a very heavy load on the operating system. In fact, in preliminary experiments I found that, on a common desktop system, processing more than 10 simultaneous translation requests of around 10,000 words each makes the system run out of resources if an Apertium instance is launched for each request. However, if we spread the requests among a pair of daemons using a queue, the system keeps responding and all the translations are completed in less time.
I have made a very simple implementation of an Apertium daemon for testing purposes. It is a small program that launches Apertium and opens a pipe attached to its standard input. Since the pipe is never closed, the Apertium process never dies. Different translation requests are separated in the input stream by special XML tags. However, this is not very useful yet because, sometimes, Apertium does not output short translations until it receives a new request. This happens because information is stored in buffers, and they are only flushed when they are full or when the pipeline processes finish (which never happens in daemon mode). Overcoming this problem will probably involve changing the Apertium core.
But the most difficult challenge is designing a highly scalable and reliable system that distributes translation requests between Apertium daemons hosted on different servers (there will probably be more than one daemon per server), and starts or shuts down daemons on demand. A daemon works with only one language pair, because changing the pair would mean instantiating pipeline processes with different dictionaries. In addition, in elastic cloud computing environments (where servers are requested and released on demand) we also need to know when to stop using a server or when to allocate a new one. So, it is necessary to use load balancing features like priority activation, priority queuing, etc.
There are a lot of fast open source load balancing systems, but most of them are strongly oriented towards web applications, so they only implement simple load balancing algorithms based on the amount of traffic already assigned to each server, server response time, etc. However, we need to take into account a different kind of information in order to forward each request to the right server, namely the language pairs of the daemons available on each one. And, since most of the time of a request is spent inside the Apertium daemon, it is better to implement a new load balancing system able to deal with our specific requirements, as in the sketch below.
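To illustrate the idea, here is a minimal sketch of pair-aware server selection, choosing among the registered servers one that already runs a daemon for the requested pair. The class, field and method names are hypothetical, and the load measure is deliberately crude; this is not the actual implementation.

<pre>
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

/** Pair-aware server selection: prefer a server that already runs a daemon
 *  for the requested language pair, breaking ties by a crude load measure. */
public class PairAwareBalancer {

    public static class Server {
        final String name;
        final Set<String> runningPairs;   // e.g. "es-ca"
        int pendingRequests;              // crude load measure

        Server(String name, Set<String> runningPairs) {
            this.name = name;
            this.runningPairs = runningPairs;
        }
    }

    private final List<Server> servers = new ArrayList<Server>();

    public void register(Server server) {
        servers.add(server);
    }

    /** Returns the chosen server, or null if no server runs the pair yet
     *  (in that case the placement controller should start a new daemon). */
    public Server choose(String pair) {
        Server best = null;
        for (Server server : servers) {
            if (!server.runningPairs.contains(pair)) {
                continue;
            }
            if (best == null || server.pendingRequests < best.pendingRequests) {
                best = server;
            }
        }
        return best;
    }
}
</pre>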
The Java platform has good built-in support for priority queues (see http://java.sun.com/javase/6/docs/api/java/util/AbstractQueue.html and its subclasses) and for communication between servers (with the RMI protocol), so using it would be a good option. Additionally, there are completely open source Java implementations, and the Apache Axis2 web services engine supports both SOAP and REST web services.
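For instance, pending translations could be held in a java.util.concurrent.PriorityBlockingQueue so that worker threads attached to the daemons always take the most urgent request first. The Request class below is a hypothetical illustration, not part of any existing code.

<pre>
import java.util.concurrent.PriorityBlockingQueue;

/** Pending translations ordered by priority (a smaller value means more urgent),
 *  so that worker threads always take the most urgent request first. */
public class TranslationQueue {

    public static class Request implements Comparable<Request> {
        final String pair;      // e.g. "es-ca"
        final String text;
        final int priority;

        Request(String pair, String text, int priority) {
            this.pair = pair;
            this.text = text;
            this.priority = priority;
        }

        public int compareTo(Request other) {
            return Integer.compare(this.priority, other.priority);
        }
    }

    private final PriorityBlockingQueue<Request> pending =
            new PriorityBlockingQueue<Request>();

    /** Called by the web service layer when a new request arrives. */
    public void submit(Request request) {
        pending.put(request);
    }

    /** Called by a worker thread attached to a daemon; blocks until a request is available. */
    public Request takeNext() throws InterruptedException {
        return pending.take();
    }
}
</pre>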
Security must be a very important feature of the system. Applications should register in order to obtain reliable access to the API, and connections from unregistered clients will be limited to a fixed number per IP address.
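As a very rough sketch of the per-IP limit, the snippet below counts requests per client IP within a time window; the quota, window length and class name are purely illustrative assumptions.

<pre>
import java.util.HashMap;
import java.util.Map;

/** Very simple per-IP admission control for unregistered clients:
 *  at most MAX_REQUESTS requests per IP within each time window. */
public class IpRateLimiter {

    private static final int MAX_REQUESTS = 100;          // illustrative quota
    private static final long WINDOW_MILLIS = 60 * 1000;  // 1-minute window

    private final Map<String, Integer> counters = new HashMap<String, Integer>();
    private long windowStart = System.currentTimeMillis();

    /** Returns true if the request from this IP should be accepted. */
    public synchronized boolean allow(String ip) {
        long now = System.currentTimeMillis();
        if (now - windowStart > WINDOW_MILLIS) {   // start a new counting window
            counters.clear();
            windowStart = now;
        }
        Integer count = counters.get(ip);
        int newCount = (count == null) ? 1 : count + 1;
        counters.put(ip, newCount);
        return newCount <= MAX_REQUESTS;
    }
}
</pre>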
==Working plan==

Community bonding period: Study queue and load balancing algorithms and their possible implementations on the Java platform. Study RMI, different ways of daemonizing Apertium, and the Axis2 web services engine.

* Week 1: Implement daemon mode.
* Week 2: "
* Week 3: Test the Apertium daemon. Check that it is fault-tolerant and as fast as expected.
* Week 4: Define the API and implement some methods, without load balancing or on-demand daemon management.

Deliverable #1: Some of the API methods allow translating with Apertium using a fixed number of daemons and a single computer.

* Week 5: Implement a protocol for communication between servers.
* Week 6: Design and implement the load balancing and daemon management algorithm.
* Week 7: "
* Week 8: Implement all API methods.

Deliverable #2: API fully implemented; dynamic daemon management with a fixed number of servers.

* Week 9: Implement dynamic server management for an elastic cloud hosting environment. Amazon EC2 is probably the best option, but Eucalyptus could also be an alternative, as it is open source and its interface is compatible with Amazon EC2.
* Week 10: "
* Week 11: Testing, evaluation and full documentation. Ensure that the API is well documented and that any external developer can easily integrate Apertium into their application.
* Week 12: "
* Week 13: Extra time for schedule slips.

Project completed. Final deliverable: A highly scalable web service application working in both dynamic and static environments, with customizable load balancing.
==Student skills and experience==

Last September I finished my degree in Computer Engineering at the University of Alicante, and I am now studying for a postgraduate diploma in Application Development with Java Enterprise Technology. Next November I will start a doctorate programme in Computing Applications.

I have some experience in open source projects:

* ANTLArbol is a tool that builds parse trees from an execution of a parser/translator written in Java with the ANTLR tool. It was my degree dissertation and is now used by Compiler Design students at the University of Alicante to debug their compilers and translators. More information: http://code.google.com/p/antlrarbol/
* I am currently working for the Transducens group at the University of Alicante, Spain, developing an open-source web project related to social translation around Apertium. We plan to release an early prototype in the coming weeks. As a result of this work, I have learnt a lot about Apertium's design and its limitations, and I have detected the need for a highly scalable web service around Apertium.

==See also==

* [[Apertium going SOA]]
=Development status=

==1st week==

Apertium now works as a daemon. The same instance can process many translation requests, which are separated in the input stream by a superblank containing a special comment.

The null flush option (-z) is implemented in all modules except the deformatters and reformatters. So, the daemon outputs each translation as soon as it is available, but the deformatter and reformatter are invoked once per translation. The overhead of invoking them is quite small, and this way the same daemon can be used to translate inputs in different formats. A sketch of how a client talks to such a daemon is shown below.
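A minimal sketch of the client side of this protocol, assuming the pipeline for the requested pair is already running with -z passed to every module: each request ends with a null byte, and the pipeline flushes its buffers and echoes a null byte when the corresponding translation is complete. The "pipeline-command" placeholder and the class name are hypothetical; the real command depends on the pair's modes.

<pre>
import java.io.*;
import java.nio.charset.StandardCharsets;

/** Talks to a long-lived Apertium pipeline that has null flush (-z) enabled:
 *  each request ends with a '\0' byte, and the pipeline flushes its buffers
 *  and emits a '\0' when the corresponding translation is complete. */
public class NullFlushClient {

    private final OutputStream toDaemon;
    private final InputStream fromDaemon;

    public NullFlushClient(Process daemon) {
        this.toDaemon = daemon.getOutputStream();
        this.fromDaemon = daemon.getInputStream();
    }

    public String translate(String text) throws IOException {
        // Send the request followed by the null byte that triggers the flush.
        toDaemon.write(text.getBytes(StandardCharsets.UTF_8));
        toDaemon.write(0);
        toDaemon.flush();

        // Read the translation up to (but not including) the next null byte.
        ByteArrayOutputStream result = new ByteArrayOutputStream();
        int b;
        while ((b = fromDaemon.read()) > 0) {
            result.write(b);
        }
        return result.toString("UTF-8");
    }

    public static void main(String[] args) throws IOException {
        // "pipeline-command" is a placeholder for the actual es-ca pipeline
        // with -z passed to every module.
        Process daemon = new ProcessBuilder("/bin/sh", "-c", "pipeline-command").start();
        System.out.println(new NullFlushClient(daemon).translate("Esto es una prueba"));
    }
}
</pre>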
The project is split into two subprojects:

* ApertiumServerWS is the request router. It processes web service requests and sends them via RMI to the right ApertiumServerGSOC instance. It also acts as the placement controller, telling each ApertiumServerGSOC which language pairs it should work with. If this module turns out to be a bottleneck, we can run more than one instance and share the placement algorithm object via Terracotta (http://www.terracotta.org/).
* ApertiumServerGSOC is a set of Apertium daemons running on the same machine. It processes translation requests sent via RMI. The request router also asks each ApertiumServerGSOC for a list of running daemons, and tells it to start or stop daemons. A hedged sketch of the RMI interface between the two subprojects is shown at the end of this section.

At the time of writing, ApertiumServerWS can only work with one ApertiumServerGSOC instance, and it only allocates one daemon, for the pair es-ca.
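As an illustration, the RMI contract between the router and each ApertiumServerGSOC instance could look roughly like the interface below. The interface and method names are assumptions made for this sketch, not the actual code in SVN.

<pre>
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.util.List;

/** Hypothetical RMI contract exposed by an ApertiumServerGSOC instance
 *  (a set of Apertium daemons on one machine) to the request router. */
public interface ApertiumTranslationServer extends Remote {

    /** Language pairs this machine has installed and can run daemons for. */
    List<String> getSupportedPairs() throws RemoteException;

    /** Language pairs that currently have a running daemon. */
    List<String> getRunningDaemons() throws RemoteException;

    /** Asks the machine to start or stop the daemon for a given pair. */
    void startDaemon(String pair) throws RemoteException;
    void stopDaemon(String pair) throws RemoteException;

    /** Translates the text with the daemon for the given pair. */
    String translate(String pair, String text) throws RemoteException;
}
</pre>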
==2nd week==

* Updated the API to match the Google AJAX Language API (only the translation API, not language detection), and implemented the batch processing interface too. More information: http://code.google.com/intl/es/apis/ajaxlanguage/documentation/reference.html#_intro_fonje . Also added an API method to list the available language pairs.
* Added a simple web interface to test JSON parsing.
* Updated the communication protocol. Now, when an instance of ApertiumServerGSOC starts, it registers with the request router; the router asks it for its supported pairs and updates its list of servers. When the router receives a translation request, it sends it to a server that has a daemon for the requested language pair (parameters like server load are not taken into account). If there is no server with a suitable daemon, the router asks one server to create one. This protocol is not very useful yet, since it does not work well in high-load situations (it would be necessary to allocate more daemons for the same pair) or when there are requests for many different language pairs.
* Tested with JMeter: 500 sequential translation requests, i.e. a request is sent as soon as the response to the previous one has been received. The source text is in Spanish, 1884 characters long, and is translated into Catalan. The test file is available in SVN, inside the ApertiumServerWS project.
Processing each request takes an average of 102 ms:

[[Image:vApertiumServerTestGraphDaemon.jpg]]

If we change the ApertiumServerGSOC code so that it invokes the whole Apertium pipeline for each request, the average time is 901 ms:

[[Image:vApertiumServerTestGraphNoDaemon.jpg]]

Of course, this is a special case where all requests are directed to the same daemon. If the requests involved more language pairs than the maximum number of running daemons, daemons would be stopped and started many times, so the difference between the two approaches would not be so big. However, having more than one daemon for the same language pair could make the first test even faster.
==3rd week==

* Tested the application with long inputs (about 2 MiB of text).
* It is now possible to know the CPU and memory consumption of each daemon.
* Implemented a proof of concept of null flush in the deformatters and reformatters, but this feature has not been tested enough yet.
==4th week==

* Tested null flush in the deformatters and reformatters. This feature will be disabled: the fact that flex stores its input in memory buffers makes a reliable implementation very difficult. I will deal with it in the future if I have enough time.
* Implemented null flush in Constraint Grammar. There were some difficulties because it reads Unicode input with the ICU library, whose I/O functions always report EOF when they read a '\0'. Patch submitted to and accepted by the VISLCG3 project.
* All the stable pairs can now work as a daemon.
==1st deliverable==

* The JSON API allows translating and listing the available language pairs.
* All stable pairs are available.
* A daemon is created for each pair (this behavior will change in the future).
* You can launch more than one server, but the load balancing algorithm will only take the first one into account.
==5th week==

* Fixed a bug in apertium-interchunk related to null flush. Committed the patch.
* Ran some load tests and reached some interesting conclusions:
** A single daemon consumes all the CPU capacity of a server, at least on computers with 2 or fewer CPUs.
** The average translation time of requests processed by a single daemon (if it is the only daemon running on the computer) is smaller than the average time when the requests are spread among several daemons on the same computer.
** There is no reason to run more than one daemon for the same language pair on the same machine. Admission control procedures (not implemented yet) will allow changing daemon priorities.
* Studied some papers about application placement:
** [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.81.1564&rep=rep1&type=pdf A Scalable Application Placement Controller for Enterprise Data Centers]
** [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.80.7874&rep=rep1&type=pdf Dynamic Placement for clustered web applications]
** [http://personals.ac.upc.edu/dcarrera/papers/NOMS2008.pdf Utility-based Placement of Dynamic Web Applications with Fairness Goals]
** [http://www.cs.ncl.ac.uk/publications/trs/papers/1126.pdf Concurrent Management of Composite Services According to Response Time SLAs]
==6th week==

* Started implementing the placement system described in [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.81.1564&rep=rep1&type=pdf A Scalable Application Placement Controller for Enterprise Data Centers].
** Implemented the [http://en.wikipedia.org/wiki/Ford-Fulkerson_algorithm Ford-Fulkerson algorithm] to solve the [http://en.wikipedia.org/wiki/Maximum_flow_problem maximum flow] problem.
** Implemented the [http://en.wikipedia.org/wiki/Bellman-Ford_algorithm Bellman-Ford algorithm]. A combination of the Ford-Fulkerson and Bellman-Ford algorithms can solve the [http://en.wikipedia.org/wiki/Minimum_cost_flow_problem#Relation_to_other_problems minimum cost maximum flow] problem (a minimal Bellman-Ford sketch is shown after this list).
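For reference, here is a self-contained Bellman-Ford sketch over a simple edge list. In a minimum cost maximum flow computation it would be run on the residual graph with edge costs to find the cheapest augmenting path, but the graph representation and names below are only illustrative and are not taken from the project code.

<pre>
import java.util.Arrays;

/** Bellman-Ford: shortest distances from a source vertex in a graph that may
 *  have negative edge weights (but no negative cycles reachable from the source). */
public class BellmanFord {

    /** edges[e] = {from, to}; weights[e] is the cost of edge e. */
    public static double[] shortestPaths(int vertexCount, int[][] edges,
                                         double[] weights, int source) {
        double[] dist = new double[vertexCount];
        Arrays.fill(dist, Double.POSITIVE_INFINITY);
        dist[source] = 0.0;

        // Relax every edge |V| - 1 times.
        for (int pass = 0; pass < vertexCount - 1; pass++) {
            for (int e = 0; e < edges.length; e++) {
                int u = edges[e][0];
                int v = edges[e][1];
                if (dist[u] + weights[e] < dist[v]) {
                    dist[v] = dist[u] + weights[e];
                }
            }
        }
        return dist;
    }

    public static void main(String[] args) {
        // Tiny example: 0 -> 1 costs 4, 0 -> 2 costs 1, 2 -> 1 costs 2.
        int[][] edges = { {0, 1}, {0, 2}, {2, 1} };
        double[] weights = { 4.0, 1.0, 2.0 };
        // Prints [0.0, 3.0, 1.0]: the cheapest path to vertex 1 goes through vertex 2.
        System.out.println(Arrays.toString(shortestPaths(3, edges, weights, 0)));
    }
}
</pre>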
==Steps to test the application==

Check out ApertiumServerWS and ApertiumServerGSOC from my branch.

Build both projects with the ant scripts. When building ApertiumServerWS, set the property j2ee.platform.classpath to the servlet-api.jar in your web server's "lib" directory. For example, I compile the application with the following command:

<pre>
ant -Dj2ee.platform.classpath=/home/user/software/apache-tomcat-6.0.18/lib/servlet-api.jar
</pre>

Install Apertium with the script "installApertiumAndPairs.sh" that can be found in the "deploy" directory inside ApertiumServerGSOC. The meaning of the different parameters is explained in the script source code.

Start rmiregistry on port 1098 with the command "rmiregistry 1098".

Deploy the ApertiumServerWS war file in your favorite web server.

Run ApertiumServerGSOC with the script "run-apertium-server.sh" that can be found in the "deploy" directory.

Now browse the index.jsp page of the ApertiumServerWS web application. If something goes wrong, you can check the logs at /tmp.
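Once deployed, a third-party application can call the JSON API over HTTP. The snippet below is only an illustration: the endpoint path and parameter names are assumed here to follow the Google AJAX Language API style mentioned above, and may differ from the actual deployment.

<pre>
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;

/** Hypothetical client call to the deployed translation web service. */
public class TranslateClient {
    public static void main(String[] args) throws Exception {
        // Assumed endpoint and parameters (Google AJAX Language API style).
        String base = "http://localhost:8080/ApertiumServerWS/translate";
        String query = "q=" + URLEncoder.encode("Esto es una prueba", "UTF-8")
                     + "&langpair=" + URLEncoder.encode("es|ca", "UTF-8");

        HttpURLConnection conn =
                (HttpURLConnection) new URL(base + "?" + query).openConnection();
        BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), "UTF-8"));

        // The response is expected to be a JSON document containing the translation.
        String line;
        while ((line = in.readLine()) != null) {
            System.out.println(line);
        }
        in.close();
    }
}
</pre>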
[[Category:Development]]