Always Get Better

Posts Tagged ‘apache’

Setting up WordPress with nginx and FastCGI

Monday, January 30th, 2012

All web site owners should feel a burning need for speed. Studies have shown that visitors who wait more than 2 or 3 seconds for content to appear are likely to leave before the page finishes loading. This is particularly bad if you’re trying to run a web site that relies on visitors to generate some kind of income – content is king but speed keeps the king’s coffers flowing.

If your website isn’t the fastest it can be, you can take some comfort in the fact that the majority of the “top” web sites also suffer from page load times pushing up into the 10 second range (have you BEEN to Amazon lately?). But do take the time to download YSlow today and use its suggestions to start making radical improvements.

I’ve been very interested in web server performance because it is the first leg of the web page’s journey to the end user. The speed of execution at the server level is capable of making or breaking the user’s experience by controlling the amount of ‘lag time’ between the web page request and visible activity in the web browser. We want our server to send page data as immediately as possible so the browser can begin rendering it and downloading supporting files.

Not long ago, I described my web stack and explained why I moved away from the “safe” Apache server solution in favour of nginx. Since nginx doesn’t have a PHP module, I had to use PHP’s FastCGI (PHP-FPM) server with nginx as a reverse proxy. Additionally, I used memcached to store sessions rather than writing to disk.

Here are the configuration steps I took to realize this stack:

1. Memcached Sessions
Using memcached for sessions gives me slightly better performance on my Rackspace VM because reading and writing in memory is hugely faster than reading and writing to a virtualized disk. I went into a lot more detail about this last April when I wrote about how to use memcached as a session handler in PHP.

2. PHP FastCGI
The newest Ubuntu distributions have a package php5-fpm that installs PHP5 FastCGI and an init.d script for it. Once installed, you can tweak your php.ini settings to suit, depending on your system’s configuration. (Maybe we can get into this another time.)
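Whether PHP-FPM listens on a TCP port or a Unix socket is controlled by its pool configuration. This sketch shows typical Ubuntu-era defaults; the file path, listen address and pool sizes are examples and vary by system:

```ini
; /etc/php5/fpm/pool.d/www.conf (location varies by distribution)
[www]
; address that nginx will forward PHP requests to
listen = 127.0.0.1:9000
user = www-data
group = www-data
; static pool sizing; tune to available RAM
pm = static
pm.max_children = 8
```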

3. Nginx
Once PHP FPM was installed, I created a site entry that would pass PHP requests forward to the FastCGI server, while serving other files directly. Since the majority of my static content (css, javascript, images) has already been moved to a content delivery network, nginx has very little actual work to do.

server {
    listen 80;
    access_log /var/log/nginx/sitename-access.log;
    error_log /var/log/nginx/sitename-error.log;

    # serve static files
    location / {
        root /www/;
        index index.php index.html index.htm;

        # this serves static files that exist without
        # running other rewrite tests
        if (-f $request_filename) {
            expires 30d;
            break;
        }

        # this sends all non-existing file or directory requests to index.php
        if (!-e $request_filename) {
            rewrite ^(.+)$ /index.php?q=$1 last;
        }
    }

    # pass PHP requests to the FastCGI server
    location ~ \.php$ {
        # assumes PHP-FPM is listening on its default local TCP port
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME /www/$fastcgi_script_name;
        include fastcgi_params;
    }
}

The fastcgi_param setting controls which script is executed, based upon the root path of the site being accessed. All of the request parameters are passed through to PHP, and once the configuration was up and running I didn’t miss Apache one little bit.

My next step will be to put a Varnish server in front of nginx. Since the majority of my site traffic comes from search engine results, where the visitor isn’t logged in and doesn’t need personalized content, Varnish can step in and serve a fully cached version of my pages from memory far faster than FastCGI can render the WordPress code. I’ll experiment with this setup in the coming months and post my results.

Performance Tuning Apache

Tuesday, April 12th, 2011

One of my favourite aspects of the cloud is the ease with which we can create new VMs to test our wacky architecture theories. It’s so easy (and cheap!) to spin up a small server cluster for some serious load testing, and then destroy it again when done.

If nothing else, it provides a safety net and teaches you how to squeeze every ounce of performance out of big and small server instances. Let’s examine ways in which we can make our dynamic Apache settings much faster.

Turn Off Modules You’re Not Using
This should be fairly obvious, but Apache ships with a number of modules which can affect performance but which most of us never need. Check your /etc/apache/mods-enabled folder to see what can be removed.
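On Debian or Ubuntu, a quick way to audit and trim modules is a sketch like this (the module names are examples only; what is safe to disable depends entirely on your site):

```
# list currently enabled modules
ls /etc/apache2/mods-enabled

# disable modules you don't use (examples only)
a2dismod status autoindex cgi
service apache2 restart
```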

Never Trust Defaults
The default Apache settings are optimized for a website serving static files only. Booorrring! Never be afraid to question what you see in the configuration files; the more you understand about the inner workings of the system, the better you will be able to improve its performance.

RAM is good, Swap is Bad
Running out of physical memory (RAM) and hitting the hard drive’s swap space is bad, especially in the Virtual Machine world. When this happens your performance will nose dive; your machine may even crash. The simplest solution is to increase the amount of RAM available to your server, but if that is too costly or impossible, read on.

Kill the KeepAlive
Whenever a request is made to the web server, it keeps the network connection open for a small amount of time (often 15 seconds). During that time, if the visitor’s web browser needs to get another file, it goes through the same connection thereby avoiding wasting time re-connecting to your server. The problem is the open connection will use up space in your connection pool so if your site is under heavy load new visitors will get queued up and may experience slowdowns trying to access your content.

If Apache is your front-end web server, set the KeepAliveTimeout to 2 seconds. This will keep the number of requests fluid even under heavy load.

If your server is behind a reverse proxy or load balancer like nginx or HAProxy where KeepAlives are not honoured, turn this setting off entirely.
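Taken together, the two cases above look like this in Apache configuration (a sketch; adjust to your own setup):

```apache
# Apache serving browsers directly: keep connections, but only briefly
KeepAlive On
KeepAliveTimeout 2
MaxKeepAliveRequests 100

# Apache behind a proxy that doesn't honour keep-alives:
# KeepAlive Off
```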

Don’t Serve Static Files
Apache is a memory hog. Since each hit to the server is relatively heavy in terms of threads and memory, we are in better shape when we serve non-changing static content like images, stylesheets and javascript from a lightweight event-driven server like nginx or lighttpd, or even an in-memory cache like Varnish (bonus points for using a CDN to serve static files, avoiding the hit to your server entirely).

Turn off HostnameLookups
This should already be done by default in your Apache configuration; if it isn’t, do it now. When HostnameLookups is on, Apache checks every incoming request’s IP address for its host name. This can dramatically increase your latency, and isn’t healthy for DNS servers either.

Disable AllowOverride
It is tempting to set AllowOverride to All in order to give your .htaccess files free rein to do as they please. The downside of this directive is that for every request Apache must check the requested directory and every one of its parents up to the document root for .htaccess commands. Apache recommends setting AllowOverride to None globally, then enabling overrides only for directories whose rules can’t be moved into the site configuration.
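In configuration terms, the global default plus a targeted exception might look like this sketch (the directory paths are examples):

```apache
# disable .htaccess processing everywhere by default
<Directory />
    AllowOverride None
</Directory>

# re-enable only where the rules genuinely can't live in this file
<Directory /www/uploads>
    AllowOverride FileInfo
</Directory>
```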

Memcached as Session Handler

Saturday, April 9th, 2011

By default PHP loads and saves sessions to disk. Disk storage has a few problems:

1. Slow IO: Reading from disk is one of the most expensive operations an application can perform, aside from reading across a network.
2. Scale: If we add a second server, neither machine will be aware of sessions on the other.

Enter Memcached
I hinted at Memcached before as a content cache that can improve application performance by preventing trips to the database. Memcached is also perfect for storing session data, and has been supported in PHP for quite some time.

Why use memcached rather than file-based sessions? Memcached stores all of its data as key-value pairs in RAM – it never hits the hard drive, which makes it F-A-S-T. In multi-server setups, PHP can grab a persistent connection to the memcache server and share all sessions between multiple nodes.

Before beginning, you’ll need to have the Memcached server running. I won’t get into the details for building and installing the program as it is different in each environment, but there are good guides on the memcached site. On Ubuntu it’s as easy as aptitude install memcached. Most package managers have a memcached installation available.

Installing memcache for PHP is hard (not!). Here’s how you do it:

pecl install memcache

Careful, it’s memcache, without the ‘d’ at the end. Why is this the case? It’s a long story – let’s save the history lesson for another day.

When prompted to install session handling, answer ‘Yes’.
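If the installer doesn’t add it for you, the extension also needs to be enabled in php.ini (the extension line is standard; where the file lives varies by system):

```ini
extension=memcache.so
```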

Using memcache for sessions is as easy as changing the session handler settings in PHP.ini:
session.save_handler = memcache
session.save_path = "tcp://localhost:11211" (assuming memcached is set up to use the default port)

Now restart apache (or nginx, or whatever) and watch as your sessions are turbo-charged.

Cheap File Replication: Synchronizing Web Assets with fsniper

Sunday, November 14th, 2010

A while ago I wrote about how I was using nginx to serve static files rather than letting the more memory-intensive Apache handle the load for files that don’t need its processing capabilities. The basic premise is that nginx is the web-facing daemon and handles static files directly from the file system, while shipping any other request off to Apache on another port.

What if Apache is on a different server entirely? Unless you have the luxury of a NAS device, your options are:

1. Maintain a copy of the site’s assets separate from the web site
There are two problems with this approach: maintainability and synchronization. You’ll have to remember to deploy any content changes separately from the rest of the site, which is counter-intuitive and opens your process up to human error. Worse, user-generated content stays on the Apache server and would be inaccessible to nginx.

2. Use a replicating network file system like GlusterFS
Network-based replication systems are advanced and provide amazing redundancy. Any changes you make to one server can be replicated to the others very quickly, so any user generated content will be available to your content servers, and you only have to deploy your web site once.

The downside is that many network file system solutions are optimized for larger (>50MB) file sizes. If you rely on your content server for small files (images, css, js), read performance may decline as your traffic increases. For high availability systems where it is critical for each server to have a full set of up-to-date files, this is probably the best solution.

3. Use an rsync-based solution
This is the method I’ve chosen to look at here. It’s important that my content server is updated as fast as possible, and I would like to know that when I perform disaster recovery or make backups of my web site the files will be reasonably up to date. If a single file takes a few seconds to appear on any of my servers, it isn’t a huge deal (I’m just running WordPress).

The Delivery Mechanism
rsync is fast and installed by default on most servers. Pair it with ssh and use password-less login keys, and you have an easy solution for script-able file replication. The only missing piece is the “trigger” – whenever the filesystem is updated, we need to run our update script in order to replicate to our content server.

incrond is one possible solution: whenever a directory is updated, incrond can run our update script. The problem is that it does not act upon file updates recursively. fsniper is our solution.

The process flow should look like this.
1. When the content directory is updated (via site upload or user file upload), fsniper initiates our update script.
2. Update script connects to the content server via ssh, and issues an rsync command between our content directory and the server’s content directory.
3. Hourly (or whatever), initiate an rsync command from the content server to any web servers – this will keep all the nodes fairly up-to-date for backup and disaster recovery purposes.
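The update script from step 2 might look something like this sketch, where the host name, user and key path are all hypothetical. rsync’s -a preserves permissions and timestamps, -z compresses data over the wire, and --delete removes files that no longer exist at the source:

```
#!/bin/sh
# sync-content.sh: invoked by fsniper whenever the content directory changes
SRC="/var/www/content/"
DEST="deploy@content-server:/var/www/content/"   # hypothetical host and user

# a password-less ssh key lets this run unattended
rsync -az --delete -e "ssh -i /home/deploy/.ssh/sync_key" "$SRC" "$DEST"
```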

Python file upload

Friday, January 2nd, 2009

I recently needed to handle file uploads from a Flash form post using CGI and Python. I made two discoveries:

  1. Python’s CGI library ignores query string variables on POST requests.
  2. After you’ve done it once, working with POST variables whether uploaded files or otherwise is dead simple!

Here is the file I came up with:

import cgi, sys, os

UPLOAD_DIR = "/home/user/uploads"

postVars = cgi.FieldStorage()

if postVars.has_key("myFile"):
    fileitem = postVars["myFile"]

    # If myFile doesn't contain a file, exit
    if not fileitem.file:
        sys.exit()

    # Strip file extension
    (name, ext) = os.path.splitext(fileitem.filename)

    # If a binary file, ensure write flags are binary
    if ext == ".jpg" or ext == ".png" or ext == ".gif":
        ioFlag = "wb"
    else:
        ioFlag = "w"

    # Save file data to disk stream, one chunk at a time
    fileObj = file(os.path.join(UPLOAD_DIR, fileitem.filename), ioFlag)
    while 1:
        chunk = fileitem.file.read(65536)
        if not chunk:
            break
        fileObj.write(chunk)
    fileObj.close()
Bonus points for checking for and creating a new directory to store the uploaded file in, if needed.
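For those bonus points, the directory check can be done in a couple of lines with os.makedirs (in newer Python you could pass exist_ok=True rather than checking first):

```python
import os

def ensure_dir(path):
    # create the upload directory (and any parents) if it doesn't exist yet
    if not os.path.isdir(path):
        os.makedirs(path)
    return path
```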

Drupal Stuck at Database Configuration

Monday, November 24th, 2008

When configuring Drupal 6.6 on a Windows XP/Apache/MySQL box, I ran into an issue whereby I would enter the database information on the Database Configuration screen and press the button to advance, only to be constantly redirected back to the Database Configuration screen.

The Drupal community indicates this is a problem with permissions – Drupal needs to be able to write to your site’s settings.php file.  All permissions appeared to be correct in my setup but I was still unable to continue.

The solution was to edit the settings.php file, putting in my database information manually.  Just look for this line:

$db_url = 'mysql://username:password@localhost/databasename';

And change the username, password and databasename parts.

Then return to the Database Configuration screen, enter the information again and continue.  The correct database information will be read from the settings file and the configuration will continue to the next step.

Happy hunting!

“Connection to Server Reset” when Installing Drupal

Friday, May 2nd, 2008

Has anyone else had this issue?

When I try to install Drupal on a Windows 2003 Apache server, I get a pause and then “Connection Reset” error under Firefox.  If I then try to install it using Internet Explorer, the installation process comes up immediately and works without a hitch.

I still can’t seem to get the admin ‘Modules’ page to load at all – PHP is crashing.  That is an entirely separate issue as far as I can tell.