Apertium-apy/load balancing

   OrderedDict(
       (
           (ServerHost, ServerPort), #('http://localhost/', 2738)
           (AggregateServerScore, { #12.55266488 - Sum of values in following dict (weight them somehow - check numbers to see how)
               '/list.*': MovingAverageResponseTime, # (how many? - exponential moving average maybe?)
               '/analyze': MovingAverageResponseTime/ResponseLength,
               '/translate': MovingAverageResponseTime/ResponseLength,
               '/generate': MovingAverageResponseTime/ResponseLength
           })
       ), ...
   )
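
A minimal Python sketch of how that structure might look, assuming an exponential moving average answers the "how many?" question above; the ServerPool class, the alpha value and the method names are illustrative, not APY's actual code:

   from collections import OrderedDict

   class ServerPool:
       """(host, port) -> per-path moving averages plus the aggregate score."""

       def __init__(self, servers, alpha=0.3):
           self.alpha = alpha  # EMA smoothing factor; 0.3 is an arbitrary assumption
           self.pool = OrderedDict(
               ((host, port), {'aggregate': 0.0, 'paths': {}})
               for (host, port) in servers
           )

       def record(self, server, path, response_time, response_length=1):
           """Update one path's moving average and the server's aggregate score."""
           entry = self.pool[server]
           # '/analyze', '/translate' and '/generate' are normalised by response
           # length; callers can pass response_length=1 for '/list.*'.
           sample = response_time / max(response_length, 1)
           old = entry['paths'].get(path)
           entry['paths'][path] = (sample if old is None
                                   else self.alpha * sample + (1 - self.alpha) * old)
           # Plain sum for now - "weight them somehow" is still an open question.
           entry['aggregate'] = sum(entry['paths'].values())
           # Keep the pool sorted by aggregate score, fastest first.
           self.pool = OrderedDict(sorted(self.pool.items(),
                                          key=lambda kv: kv[1]['aggregate']))

       def fastest(self):
           """Return the (host, port) with the lowest aggregate score."""
           return next(iter(self.pool))

Re-sorting the whole OrderedDict on every update is O(n log n); with a handful of servers that is cheap, otherwise a heap or lazy re-sort would avoid it.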

The plan for a "Fastest" paradigm load balancer (Random, RoundRobin, and LeastConnections already exist; WeightedRandom is in development):

  1. On gateway start, call each server's '/list' endpoints and initialize the server pool while checking for valid responses - drop any servers that don't respond properly
  2. For each request, inform the balancer when the request starts and ends
  3. If the request is on the list of acceptable benchmark URLs, update the corresponding moving average and aggregate score, and re-sort the server pool on aggregate score
  4. Periodically, re-call each server's '/list' endpoints and drop all existing data (at what interval?)
  5. When the request handler asks for a server, return the server which has the LOWEST AggregateServerScore - this will form a negative feedback loop (a sketch of steps 2, 3 and 5 follows below)
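
A rough sketch of how steps 2, 3 and 5 could fit together, reusing the ServerPool sketch above; FastestBalancer, BENCHMARK_PATHS and the method names are hypothetical, not APY's real handler API:

   import time

   # Benchmark URLs for step 3; this particular tuple is an assumption.
   BENCHMARK_PATHS = ('/list', '/analyze', '/translate', '/generate')

   class FastestBalancer:
       def __init__(self, pool):
           self.pool = pool  # a ServerPool as sketched above

       def get_server(self):
           # Step 5: hand out the server with the LOWEST aggregate score;
           # it then slows down as it takes traffic (negative feedback).
           return self.pool.fastest()

       def request_started(self, server, path):
           # Step 2: the request handler reports that a request has started.
           return time.time()

       def request_finished(self, server, path, started, status, length):
           # Step 2: ...and that it has finished.
           if status >= 400:
               return  # error responses are covered under "Notes to self"
           if any(path.startswith(p) for p in BENCHMARK_PATHS):
               # Step 3: update the moving average and re-sort on aggregate score.
               self.pool.record(server, path, time.time() - started, length)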

Future: integrate the least-connections and fastest load balancers... maybe even query servers for their current load status (is their integrity a concern?)
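
Purely as a sketch of what that combination might look like (the blend between open connections and the response-time score, and the connections mapping, are assumptions):

   def pick_combined(pool, connections, weight=1.0):
       """Pick the server minimising the fastest score scaled by open connections."""
       # 'connections' is assumed to map (host, port) -> currently open requests.
       return min(pool.pool,
                  key=lambda s: pool.pool[s]['aggregate'] * (1 + weight * connections.get(s, 0)))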

Notes to self:

  1. Ignore requests whose response is a 4xx HTTP code
  2. If a server returns a 5xx for a '/list' at any time, set its score to float('inf') (effectively removing it from the pool); let it re-enter the server pool only when periodic testing occurs
  3. Check whether the pool is ever empty; if so, raise a critical error and return 503 for all requests (a sketch of these notes follows below)
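
A sketch of how these three notes might translate into code, again assuming the ServerPool sketch from above; the exception name and function signatures are illustrative only:

   class EmptyPoolError(Exception):
       """No healthy servers left; callers should answer every request with 503."""

   def handle_response(pool, server, path, status, elapsed, length):
       if status >= 500 and path.startswith('/list'):
           # Note 2: a 5xx on '/list' pushes the server to float('inf'),
           # removing it until the periodic '/list' check resets its data.
           pool.pool[server]['aggregate'] = float('inf')
           return
       if status >= 400:
           # Note 1: 4xx (and other error) responses say nothing about speed.
           return
       pool.record(server, path, elapsed, length)

   def pick_healthy(pool):
       # Note 3: if every server is gone or marked dead, escalate and 503.
       for server, entry in pool.pool.items():  # kept sorted, fastest first
           if entry['aggregate'] != float('inf'):
               return server
       raise EmptyPoolError("server pool is empty - return HTTP 503")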