NginX and Apache, but no memcached

on August 31, 2009 in Linux with 7 comments

I’ve been reading a few other blogs about how some people have implemented NginX as an accelerator for their Apache-based websites.

NginX outperforms Apache on small- to mid-range servers when it comes to static file handling, largely because it is event-driven.

The downside of NginX is that PHP can only be used via FastCGI. Most how-to’s explain how to set up PHP FastCGI with NginX over TCP, but this adds extra overhead and slows PHP to a crawl. A better solution is to use UNIX sockets instead, which is explained well in Till’s blog.
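For illustration, a minimal sketch of that setup; the socket path, user, and document root are my own example values, not taken from Till’s post:

# Spawn PHP as a FastCGI daemon on a local UNIX socket (no TCP involved).
spawn-fcgi -s /var/run/php-fastcgi.sock -u www-data -g www-data -f /usr/bin/php-cgi

# nginx.conf: hand .php requests to that socket.
location ~ \.php$ {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME /var/www$fastcgi_script_name;
    fastcgi_pass unix:/var/run/php-fastcgi.sock;
}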

But even over UNIX sockets, the PHP FastCGI and NginX combination cannot handle PHP requests as fast as Apache can. For this reason, NginX makes a great accelerator for static files while Apache deals with all the PHP requests. Even with the extra TCP overhead between NginX and Apache, this makes for quite a speedy combination.
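A sketch of that split, assuming Apache has been moved to port 8080 on the same machine (the port, paths, and file-type match are assumptions):

server {
    listen 80;
    root /var/www;

    # NginX serves everything static straight from disk...
    location / {
    }

    # ...and proxies PHP requests to the Apache back-end over TCP.
    location ~ \.php$ {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}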

Thinking logically, some people figured that serving static files from RAM instead of the hard drive must make things even faster. But that really depends…

Using Memcached

Serving static files from RAM is accomplished using the memcached daemon and the NginX memcached module. Various how-to’s describe the procedure along these lines:

  • A script, run at boot time via cron or some other method, loads the static files from the hard drive into memcached (RAM).
  • NginX is configured to fetch a requested file from memcached and, if that fails, to load it directly from the hard drive instead (see the sketch after this list).
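A minimal sketch of the NginX side, assuming files were stored under their bare URI as the key and live under /var/www on disk (both assumptions):

location / {
    set $memcached_key $uri;
    # The memcached module only speaks TCP (see below).
    memcached_pass 127.0.0.1:11211;
    default_type text/html;
    # Key not in RAM (404) or memcached unreachable (502):
    # fall back to the copy on disk.
    error_page 404 502 = @disk;
}

location @disk {
    root /var/www;
}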

But unfortunately the NginX memcached module is only capable of TCP communication. If memcached is located on the same server, it would make more sense to use UNIX sockets instead (just like PHP FastCGI over UNIX sockets).

See, in my case there was so much overhead that NginX slowed to less than 8,000 requests per second. To put this in perspective, Apache was capable of serving 18,898 static 100-byte files per second on the same server.

So using memcached is only worth it if you are caching files that are spread over several back-end servers, or content that is dynamic in nature (i.e., PHP output that changes very little). It is not worth it if these files are local to NginX, especially given NginX’s highly efficient caching methods.

Speedy NginX

Now, how much faster is NginX compared to Apache when it comes to static files anyway? Well, I have some non-definitive numbers for you, obtained on a low-range server:

Server:

Intel(R) Core(TM)2 Duo CPU E6750 @ 2.66GHz, 2x750GB Seagate Barracuda 7200.11 32M RAID-1, 4GB RAM, Debian Lenny (5.0) i386 (2.6.24-7 Ubuntu-based, back-ported kernel)

Test parameters:

ab -k -c5 -t10000 http://<server>/100.html

(-k enables HTTP keep-alive, -c5 uses five concurrent connections, and -t10000 sets a time limit in seconds; when -t is given, ab also caps the run at 50,000 requests internally, which is why both tests below complete exactly 50,000.)

Apache:

Server Software:        Apache/2.2.9
Server Hostname:        <server>
Server Port:            80

Document Path:          /100.html
Document Length:        100 bytes

Concurrency Level:      5
Time taken for tests:   2.646 seconds
Complete requests:      50000
Failed requests:        0
Write errors:           0
Keep-Alive requests:    49508
Total transferred:      22577856 bytes
HTML transferred:       5000000 bytes
Requests per second:    18898.08 [#/sec] (mean)
Time per request:       0.265 [ms] (mean)
Time per request:       0.053 [ms] (mean, across all concurrent requests)
Transfer rate:          8333.56 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       0
Processing:     0    0   0.5      0      49
Waiting:        0    0   0.5      0      49
Total:          0    0   0.5      0      49

Percentage of the requests served within a certain time (ms)
  50%      0
  66%      0
  75%      0
  80%      0
  90%      0
  95%      0
  98%      1
  99%      1
 100%     49 (longest request)

NginX:

Server Software:        nginx/0.7.61
Server Hostname:        <server>
Server Port:            80

Document Path:          /100.html
Document Length:        100 bytes

Concurrency Level:      5
Time taken for tests:   1.928 seconds
Complete requests:      50000
Failed requests:        0
Write errors:           0
Keep-Alive requests:    49502
Total transferred:      19397510 bytes
HTML transferred:       5000000 bytes
Requests per second:    25937.28 [#/sec] (mean)
Time per request:       0.193 [ms] (mean)
Time per request:       0.039 [ms] (mean, across all concurrent requests)
Transfer rate:          9826.54 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       0
Processing:     0    0   0.1      0       5
Waiting:        0    0   0.1      0       5
Total:          0    0   0.1      0       5

Percentage of the requests served within a certain time (ms)
  50%      0
  66%      0
  75%      0
  80%      0
  90%      0
  95%      0
  98%      0
  99%      0
 100%      5 (longest request)

This is the setup I use on all the web servers I deploy: NginX as the front-end, and a number of Apache servers at the back-end to deal with non-static files, including PHP.

7 comments

  1. posted on Aug 06, 2010 at 6:25 PM

    Now I use nginx for all my websites and it works great.
    I use the nginx cache, and nginx serves static files very fast.

  2. posted on Feb 25, 2010 at 11:10 PM

    […] you may have read in one of my previous blog entries, specifically “NginX and Apache, but no memcached”, I prefer to use NginX as the front-end serving static files, and Apache as a back-end dealing with […]

  3. Anonymous
    posted on Sep 01, 2009 at 1:03 AM

    Could you post some tips to optimise Apache for high load?

    I get ~9,000-10,000 req/sec. I have the old Kimsufi XL (Pentium D @ 3GHz, 2GB DDR). Is the gap from me to your 18k simply hardware-related?

    • posted on Sep 01, 2009 at 11:31 AM

      Did you use the same test parameters I used, and a 100-byte test file? Hardware would account for some of the difference, yes. More RAM = happier Apache.

      One trick is to use “ulimit -s 512” in Apache’s init script (/etc/init.d/apache2) to reduce the stack size. It doesn’t need gobs of stack space and will leave more RAM for spawning additional child processes.
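      For example, near the top of the init script (the exact placement is a suggestion; it just has to run before Apache is started):

      # /etc/init.d/apache2
      # Shrink the per-child stack from the distribution default (often 8MB)
      # to 512KB, leaving more memory for additional child processes.
      ulimit -s 512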

      Also limit the use of .htaccess files. Ideally you move all the .htaccess directives into the site configuration file and set “AllowOverride” to “None”. Otherwise Apache will check for (and process) a .htaccess file in every directory along the path each time a file is requested.

      Same goes for symlink checks – they add extra overhead for Apache on every request.
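      A sketch of both settings in a <Directory> block (the path is an example):

      <Directory /var/www/>
          # No per-request .htaccess scanning; directives live here instead.
          AllowOverride None
          # With FollowSymLinks set, Apache skips the symlink check it would
          # otherwise have to run on each path component.
          Options FollowSymLinks
      </Directory>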

      In the output I gave above, you can also see a difference between Apache’s and NginX’s “Total transferred”. Less to send = more time to do other things (especially since it depends on how fast the remote side can receive it). So strip additional fields, such as server information, unneeded cookies, etc., if possible. Enabling gzip compression costs some CPU cycles, but it also helps – in fact, NginX had gzip enabled when I did the above test… I just noticed that :)
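      A sketch of header trimming and compression on both sides (the MIME-type list is an example):

      # Apache (apache2.conf): send a minimal Server header.
      ServerTokens Prod

      # NginX (nginx.conf): hide the version string and compress text output.
      server_tokens off;
      gzip on;
      gzip_types text/plain text/css application/x-javascript;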

      • Anonymous
        posted on Sep 23, 2009 at 12:07 PM

        My server uses mpm-itk because it hosts several users. mem_cache doesn’t work 100% with PHP scripts under mpm-itk, so I use a disk_cache. It also has PHP as a module, so more RAM per process.

        Logging is set up well on my system. The difference between having logging enabled (access logs and error logs) and not having it enabled is less than 1%.

        Can’t set AllowOverride to None because it’s hosting other users who need to be able to configure Apache with it. I am considering disabling symlinks… and while writing this I have now done so (for most folders).

        ulimit -s 512 didn’t offer much help, but I have it anyway.

        Any other suggestions?

        • posted on Sep 24, 2009 at 7:35 PM

          I’m not that familiar with mpm-itk (although it does look quite
          interesting!). One thing I did notice on their website is that it
          doesn’t fork, so indeed decreasing the stack size (with
          ulimit -s 512) would have done little to improve things.

          By default, most Apache installations also enable mod_status, which
          can add a bit of overhead on top of each request. I’ve found this
          overhead to be insignificant, but that may be a different story on your
          server. You could disable this module if it’s enabled on your install
          (“a2dismod status”) and see whether that improves things.

          Also make sure you have “HostnameLookups” set to “Off”. With today’s
          log viewers, etc., it really isn’t needed. This can be set in the main
          configuration (“/etc/apache2/apache2.conf”) or per file/directory.

          Do you have mod_deflate enabled? It will reduce the amount of data
          sent over the network (and the CPU-versus-traffic trade-off usually
          favors saving traffic). Look into the mod_deflate configuration (e.g.,
          “/etc/apache2/mods-available/deflate.conf”) and check which MIME types
          are automatically compressed. You could possibly tweak this to add
          additional file formats, provided the result remains HTTP-conformant.
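          A sketch of what such a deflate.conf might contain (the MIME-type
          list is an example, not the Debian default verbatim):

          <IfModule mod_deflate.c>
              # Compress common text formats; images and other binary formats
              # are usually pre-compressed and are left alone.
              AddOutputFilterByType DEFLATE text/html text/plain text/xml
              AddOutputFilterByType DEFLATE text/css application/x-javascript
          </IfModule>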

          Also – and this takes a bit more time to do properly – tweak the
          “StartServers” and other related settings for the worker/prefork
          module. This really depends on your server hardware (memory, CPU, etc.).
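          For prefork on Apache 2.2 that means something along these lines (the
          numbers are placeholders; size them to your RAM and per-child usage):

          <IfModule mpm_prefork_module>
              StartServers          10
              MinSpareServers       10
              MaxSpareServers       20
              MaxClients           150
              MaxRequestsPerChild 1000
          </IfModule>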

          • Anonymous
            posted on Sep 24, 2009 at 11:35 PM

            It does do some forking, as it’s based on mpm-prefork, but I always have plenty of servers running to handle incoming requests.

            I have no mods that I don’t need. mod_status is disabled.

            Hostname lookups are definitely off, but either way that shouldn’t impact a localhost-based ab run.

            mod_deflate is an option, but the main traffic drivers I have already compress (gzip) within their PHP code; this includes the CSS/JS. This doesn’t apply to other people’s hosted sites, but I tend to recommend a few things to the heavy users anyway – caching for one. For images I don’t see much advantage in using compression.

            StartServers… or more importantly, MinSpareServers and MaxSpareServers, are set fairly high.

            I have now installed a Varnish HTTP proxy in front and provided some ‘available to everyone’ PHP scripts designed to detect high load and send cache-control headers as required. Hopefully they will be used.
