Apertium-apy/load balancing

   # Server pool schema (sketch): maps (host, port) to an aggregate score
   # plus per-endpoint moving averages; the values shown are examples.
   from collections import OrderedDict

   OrderedDict([
       (
           (ServerHost, ServerPort),  # e.g. ('http://localhost/', 2738)
           (AggregateServerScore, {   # e.g. 12.55266488 - sum of the values in the dict below (weight them somehow - check numbers to see how)
               '/list.*':    MovingAverageResponseTime,  # averaged over how many requests? - exponential moving average maybe?
               '/analyze':   MovingAverageResponseTime / ResponseLength,
               '/translate': MovingAverageResponseTime / ResponseLength,
               '/generate':  MovingAverageResponseTime / ResponseLength,
           })
       ),
       ...
   ])
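
A minimal sketch of how the aggregate score could be computed from the per-endpoint dict, assuming equal weights for now (the weighting above is still undecided; URL_WEIGHTS and aggregate_score are illustrative names, not existing APY code):

   # Hypothetical helper: collapse the per-endpoint moving averages into one
   # score. Equal weights are placeholders - the real weighting is still TBD.
   URL_WEIGHTS = {'/list.*': 1.0, '/analyze': 1.0, '/translate': 1.0, '/generate': 1.0}

   def aggregate_score(url_averages):
       """Weighted sum of the per-endpoint moving averages; lower is better."""
       return sum(URL_WEIGHTS[url] * avg for url, avg in url_averages.items())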

The plan for a "Fastest"-paradigm load balancer (Random, RoundRobin, and LeastConnections already exist; WeightedRandom is in development):

  1. On gateway start, call each server's '/list' endpoints and initialize the server pool while checking for valid responses; drop any servers that don't respond properly
  2. For each request, inform the balancer at request start and end
  3. If the request is on the list of acceptable benchmark URLs, update the corresponding moving average and aggregate score, then re-sort the server pool by aggregate score (see the sketch after this list)
  4. Periodically, call each server's '/list' endpoints again and drop all existing data (at what interval?)
  5. When the request handler asks for a server, return the server with the LOWEST AggregateServerScore; this will form a negative feedback loop
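
A sketch of steps 3 and 5, assuming an exponential moving average; ALPHA, update_average, and pick_server are illustrative names, and the pool is the OrderedDict sketched above:

   ALPHA = 0.1  # assumed EMA smoothing factor - tune against real traffic

   def update_average(old_avg, new_sample):
       """Exponential moving average: recent samples dominate (step 3)."""
       if old_avg is None:
           return new_sample  # the first sample seeds the average
       return ALPHA * new_sample + (1 - ALPHA) * old_avg

   def pick_server(server_pool):
       """Return the (host, port) key with the lowest aggregate score (step 5)."""
       return min(server_pool, key=lambda server: server_pool[server][0])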

Future: integrate the least-connections and fastest load balancers... maybe even query servers for their current load status (is their integrity a concern?)

Notes to self:

  1. Ignore requests whose response is a 4xx HTTP code
  2. If a server returns a 5xx for a '/list' at any time, set its aggregate score to float('inf') (effectively removing it from the pool); let it re-enter the server pool only when periodic testing occurs
  3. Check whether the pool is ever empty; if so, raise a critical error and return 503 for all requests (see the sketch after this list)
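
A sketch of how these notes could translate to code; record_response and pick_server_or_503 are hypothetical names, and the pool is the OrderedDict sketched above:

   import logging

   def record_response(server_pool, server, url, status):
       """Apply notes 1 and 2 after each response."""
       if 400 <= status < 500:
           return  # note 1: client errors say nothing about server health
       if status >= 500 and url.startswith('/list'):
           # note 2: a score of inf knocks the server out of the pool
           # until the next round of periodic testing restores it
           _score, averages = server_pool[server]
           server_pool[server] = (float('inf'), averages)

   def pick_server_or_503(server_pool):
       """Note 3: an empty (or fully dead) pool is a critical error."""
       live = {s: v for s, v in server_pool.items() if v[0] != float('inf')}
       if not live:
           logging.critical('server pool is empty; answering 503 to all requests')
           return None  # caller responds with HTTP 503
       return min(live, key=lambda s: live[s][0])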