=Highly scalable web service architecture for Apertium=

==Project Information==

This is one of the nine Google Summer of Code projects accepted into Apertium (see http://socghop.appspot.com/org/home/google/gsoc2009/apertium).

Student: Víctor Manuel Sánchez Cartagena

Mentor: Juan Antonio Pérez-Ortiz, from the Transducens Group, University of Alicante.

==Introduction==

Apertium is already a very useful translation platform, and it will become even more useful as new language pairs are added.

However, an application that wants to take advantage of Apertium's power must have it installed on the same machine. Although installing Apertium is not difficult, its linguistic data are updated frequently, so the installation must be kept up to date as well. Moreover, communication between an external application and Apertium is awkward to code, because Apertium only reads input text from standard input.

Another option is to use the web service located at http://www.apertium.org/, but it has two major problems:

* Its features are quite limited: it cannot list the available language pairs, and it only accepts HTTP GET and POST parameters.

* Since it starts a new Apertium instance for each request, it consumes a lot of computer resources, which makes scaling difficult, especially when there is only a single server.

The aim of this task is therefore to build an application wrapper for Apertium with a public (REST) web service API that lets programmers access it from their desktop or web applications and perform the same operations available with a local installation. The key feature of this application is scalability. It is intended to support high loads by scheduling and prioritizing pending translations according to the server-side resources available, i.e. load balancing. Environments can be static, with a fixed number of servers, or dynamic, as in cloud hosting services. When working in dynamic mode, new servers are added automatically as the load rises. The availability of a highly scalable web service for Apertium will catalyze the worldwide use and adoption of the platform in many translation contexts.

==Technical challenges and application features==

One of the first challenges to overcome is the design of an easy-to-use API. A difficult API would stop many developers from integrating Apertium into their applications, so it is a good idea to study the APIs of other popular services. We are going to use REST, because it is simpler and more scalable than SOAP. However, the web service will not be totally RESTful, since it will accept translation requests over HTTP POST to overcome the length limits of HTTP GET.

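A hypothetical shape for such an API is sketched below; the endpoint names, host, and parameter names are illustrative, not a published specification:

```
GET /listPairs HTTP/1.1
Host: translator.example.org

POST /translate HTTP/1.1
Host: translator.example.org
Content-Type: application/x-www-form-urlencoded

langpair=en|es&q=Hello+world
```

Sending the text in the POST body avoids the URL length limits mentioned above, at the cost of not being strictly RESTful.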
Apertium's scalability is also strongly limited by the fact that it cannot run as a daemon: it has to be launched from scratch every time a translation is requested. In a web service environment, the operating system's continuous launching and terminating of Apertium processes would cause very heavy overhead. In fact, in preliminary experiments on a common desktop system, I found that processing more than 10 simultaneous translation requests of around 10,000 words each makes the system run out of resources if an Apertium instance is launched for each request. However, if the requests are spread across a pair of daemons using a queue, the system keeps responding and it takes less time to perform all the translations.

I have made a very simple implementation of an Apertium daemon for testing purposes. It is a simple program that launches Apertium and opens a pipe attached to its standard input. Since the pipe is never closed, the Apertium process never dies. Different translation requests are delimited in the input stream by special XML tags. However, it is not very useful because Apertium sometimes does not output short translations until it receives further requests. This happens because it stores information in buffers, which are only flushed when they are full or when the pipeline processes finish (which never happens in daemon mode). Overcoming this problem probably involves changing the Apertium core.

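A minimal sketch of the wrapper described above could look as follows. The class name, the `<request>` marker tag, and the assumption that the pipeline is started as `apertium <pair>` are illustrative choices, not part of Apertium itself:

```java
import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.InputStreamReader;
import java.io.OutputStreamWriter;

/** Sketch of a daemon wrapper: keeps one Apertium pipeline alive behind a pipe. */
public class ApertiumDaemon {
    private final Process pipeline;
    private final BufferedWriter toApertium;
    private final BufferedReader fromApertium;

    public ApertiumDaemon(String pair) throws Exception {
        // Launch the pipeline once; because the pipe to its standard input
        // is never closed, the process stays alive between requests.
        pipeline = new ProcessBuilder("apertium", pair).start();
        toApertium = new BufferedWriter(
                new OutputStreamWriter(pipeline.getOutputStream()));
        fromApertium = new BufferedReader(
                new InputStreamReader(pipeline.getInputStream()));
    }

    /** Frames one request with marker tags so replies can be told apart. */
    static String frame(String text) {
        return "<request>" + text + "</request>\n";
    }

    /** Writes one framed request. Note that flushing our side of the pipe
        does not help with the buffering problem described above: Apertium's
        own internal buffers may still hold the translation back. */
    public void submit(String text) throws Exception {
        toApertium.write(frame(text));
        toApertium.flush();
    }
}
```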
There is a task on the Apertium wiki (http://wiki.apertium.org/wiki/Ideas_for_Google_Summer_of_Code) called "Daemon mode" that would solve this problem. As it is not known whether any student will take on that task, I have included it in my schedule. If it is finally done by another student, the remaining time will be spent implementing SOAP web service support.

The most difficult challenge, however, is designing a highly scalable and reliable system that distributes translation requests among the Apertium daemons running on different servers (there will probably be more than one daemon per server) and starts or shuts down daemons on demand. A daemon works with only one language pair, because changing the pair means instantiating pipeline processes with different dictionaries. In addition, in dynamic environments (e.g. cloud hosting, where we can obtain more servers on demand) we also need to know when it is time to stop using a server or to allocate a new one. It is therefore necessary to use load balancing features such as priority activation, priority queuing, etc.

There are many fast open source load balancing systems, but most of them are strongly oriented towards web applications, so they only implement simple load balancing algorithms based on the amount of traffic already assigned to each server, server response time, and so on. We, however, need to take other information into account to forward each request to the right server, such as the language pairs of the daemons available on each one. And since most of the time of a request is spent inside the Apertium daemon, it is better to implement a new load balancing system able to deal with our specific requirements.

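The pair-aware selection step could be sketched as below. The class names, the use of queued-request count as the load measure, and the pair notation are assumptions for illustration, not a fixed design:

```java
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

/** One record per running daemon: which pair it serves and its current load. */
class DaemonInfo {
    final String server;
    final String pair;      // e.g. "en-es"
    final int queuedRequests; // crude load measure for this sketch

    DaemonInfo(String server, String pair, int queuedRequests) {
        this.server = server;
        this.pair = pair;
        this.queuedRequests = queuedRequests;
    }
}

public class PairAwareBalancer {
    /**
     * Unlike a generic HTTP load balancer, the choice is constrained by
     * language pair first, and only then by load: among the daemons that
     * serve the requested pair, pick the least loaded one.
     */
    public static Optional<DaemonInfo> pick(List<DaemonInfo> daemons, String pair) {
        return daemons.stream()
                .filter(d -> d.pair.equals(pair))
                .min(Comparator.comparingInt(d -> d.queuedRequests));
    }
}
```

If no daemon serves the requested pair, the empty result would signal the daemon manager to start one, tying this step to the on-demand daemon management described above.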
The Java platform has good built-in support for priority queues (see http://java.sun.com/javase/6/docs/api/java/util/AbstractQueue.html and its subclasses) and for communication between servers (with the RMI protocol), so using it would be a good option. Additionally, there are completely open source Java implementations, and the Apache Axis2 web services engine supports both SOAP and REST web services. However, if the mentor or the organization prefers another technology, I will agree to use it.

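For instance, pending translations could be prioritized with `java.util.PriorityQueue`, one of the `AbstractQueue` subclasses mentioned above. The request fields and the two-level priority scheme here are illustrative assumptions:

```java
import java.util.Comparator;
import java.util.PriorityQueue;

/** A pending translation; a lower priority value means it is served first. */
class TranslationRequest {
    final String text;
    final int priority; // e.g. 0 for registered clients, 1 for anonymous ones

    TranslationRequest(String text, int priority) {
        this.text = text;
        this.priority = priority;
    }
}

public class RequestQueue {
    // PriorityQueue keeps requests ordered by priority, so higher-priority
    // clients are served first even under heavy load.
    private final PriorityQueue<TranslationRequest> queue =
            new PriorityQueue<>(Comparator.comparingInt(r -> r.priority));

    public void enqueue(TranslationRequest r) { queue.add(r); }

    /** Returns the highest-priority pending request, or null if empty. */
    public TranslationRequest next() { return queue.poll(); }
}
```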
Security must be an important feature of the system. Applications should register to obtain reliable access to the API, and connections from unregistered clients will be limited to a fixed number per IP address.

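The per-IP cap for unregistered clients could be sketched as a simple windowed counter; the class name, quota semantics, and reset strategy are illustrative assumptions:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

/** Per-IP request counter; unregistered clients are cut off past a fixed cap. */
public class IpRateLimiter {
    private final int maxRequests;
    private final Map<String, AtomicInteger> counts = new ConcurrentHashMap<>();

    public IpRateLimiter(int maxRequests) {
        this.maxRequests = maxRequests;
    }

    /** Returns true while the client is still under its per-window quota. */
    public boolean allow(String ip) {
        int n = counts.computeIfAbsent(ip, k -> new AtomicInteger())
                      .incrementAndGet();
        return n <= maxRequests;
    }

    /** Called periodically (e.g. once a minute) to start a new window. */
    public void resetWindow() {
        counts.clear();
    }
}
```

Registered applications would simply bypass this check (or get a much higher cap) after presenting their credentials.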
==Working plan==

Community bonding period: Study queue and load balancing algorithms and their possible implementations on the Java platform. Study RMI, different ways of daemonizing Apertium, and the Axis2 web services engine.

Week 1: Implement daemon mode.

Week 2: "

Week 3: Test the Apertium daemon. Check that it is fault-tolerant and as fast as expected.

Week 4: Define the API and implement some methods, without load balancing or on-demand daemon management.

Deliverable #1: Some of the API methods allow translation with Apertium using a fixed number of daemons on a single computer.

Week 5: Implement a protocol for communication between servers.

Week 6: Design and implement the load balancing and daemon management algorithm. It must be customizable.

Week 7: "

Week 8: Implement all API methods.

Deliverable #2: API fully implemented; dynamic daemon management with a fixed number of servers.

Week 9: Implement dynamic server management for a cloud hosting environment. Eucalyptus would be a good option, as it is open source and its interface is compatible with Amazon EC2.

Week 10: "

Week 11: Testing, evaluation and full documentation. Ensure that the API is well documented and that any external developer can easily integrate Apertium into his/her application.

Week 12: "

Week 13: Extra time for schedule slips.

Project completed. Final deliverable: a highly scalable web service application working in both dynamic and static environments, with customizable load balancing.

==Student skills and experience==

Last September I finished a degree in Computer Engineering at the University of Alicante, and I am now studying for a postgraduate diploma in Application Development with Java Enterprise Technology. Next November I will start a doctoral programme in Computing Applications.

I have some experience with open source projects:

* ANTLRArbol is a tool that builds parse trees from an execution of a parser/translator written in Java with the ANTLR tool. It was my degree dissertation and is now used by Compiler Design students at the University of Alicante to debug their compilers and translators. More information: http://code.google.com/p/antlrarbol/

* I am currently working for the Transducens group at the University of Alicante, developing an open source web project about social translation built around Apertium. We plan to release an early prototype in the next few weeks. As a result of this work, I have learnt a lot about Apertium's design and its limitations, and I have seen the need for a highly scalable web service around Apertium.

[[Category:Development]]