Scaling Servers

While much of the early PERT's focus has been on the performance of individual TCP connections, one is frequently interested in the aggregate performance of a server that handles many requests/connections in parallel. Performance aspects include:

  • Serving large numbers of clients simultaneously
  • Maintaining quick response times
  • Maintaining good throughput

When tuning a server to serve tens of thousands of simultaneous users, factors such as InterruptCoalescence, scheduling, application and threading performance, and sufficiently large TCP listen backlogs become important.
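As a minimal sketch of one of the knobs mentioned above, the snippet below creates a listening socket with a deep TCP accept backlog. If the backlog is too small, bursts of new connections are dropped before accept() ever sees them; on Linux, the effective value is also capped by the net.core.somaxconn sysctl, so both usually need raising together. The backlog value here is illustrative, not a recommendation.

```python
import socket

# Illustrative backlog size; the kernel may silently cap it
# (on Linux, at net.core.somaxconn).
BACKLOG = 10000

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
srv.listen(BACKLOG)          # request a deep accept queue
host, port = srv.getsockname()
print(f"listening on {host}:{port} with backlog {BACKLOG}")
srv.close()
```

On a production server one would also inspect and raise the relevant sysctls (e.g. `sysctl net.core.somaxconn` on Linux) rather than relying on the application-side value alone.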

An important approach to server scaling consists of "scaling out" to multiple servers that share the load. This can be done, for example, with cluster technology based on Load Balancers, or through the use of Content Distribution Networks (CDNs).
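The "scaling out" idea can be sketched in a few lines: a front end spreads incoming requests over a pool of back-end servers, here with simple round-robin selection. The hostnames are invented for illustration; real load balancers additionally track back-end health and load.

```python
import itertools

# Hypothetical back-end pool; names are placeholders.
servers = ["app1.example.org", "app2.example.org", "app3.example.org"]
_rotation = itertools.cycle(servers)

def pick_backend():
    """Return the next back-end server in round-robin order."""
    return next(_rotation)

# Each new request is directed to the next server in the cycle.
for _ in range(4):
    print(pick_backend())
```

A CDN takes the same idea further by placing the replica servers close to the clients, so that both load and network latency are reduced.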

Related Case Studies

  • ApacheScaling - Scaling Apache 2.x beyond 20,000 concurrent downloads


-- SimonLeinen - 02 Nov 2008 - 17 Dec 2008
-- PekkaSavola - 02 Oct 2008
