Always Get Better

Posts Tagged ‘linux’

Memcached as Session Handler

Saturday, April 9th, 2011

By default PHP loads and saves sessions to disk. Disk storage has a few problems:

1. Slow IO: Reading from disk is one of the most expensive operations an application can perform, aside from reading across a network.
2. Scale: If we add a second server, neither machine will be aware of sessions on the other.

Enter Memcached
I hinted at Memcached before as a content cache that can improve application performance by preventing trips to the database. Memcached is also perfect for storing session data, and has been supported in PHP for quite some time.

Why use memcached rather than file-based sessions? Memcache stores all of its data using key-value pairs in RAM – it does not ever hit the hard drive, which makes it F-A-S-T. In multi-server setups, PHP can grab a persistent connection to the memcache server and share all sessions between multiple nodes.

Installation
Before beginning, you’ll need to have the Memcached server running. I won’t get into the details for building and installing the program as it is different in each environment, but there are good guides on the memcached site. On Ubuntu it’s as easy as aptitude install memcached. Most package managers have a memcached installation available.
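
On Ubuntu the package starts the daemon for you; either way, a quick sanity check against the default port (11211) looks something like this (a sketch – adjust the cache size and user to taste):

    # confirm memcached is answering on the default port
    printf 'stats\r\nquit\r\n' | nc 127.0.0.1 11211 | head -n 5

    # if it isn't running, start it by hand: daemonize, 64MB cache, default port
    # (add -u <user> if you are starting it as root)
    memcached -d -m 64 -p 11211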

Installing memcache for PHP is hard (not!). Here’s how you do it:


pecl install memcache

Careful, it’s memcache, without the ‘d’ at the end. Why is this the case? It’s a long story – let’s save the history lesson for another day.

When prompted to install session handling, answer ‘Yes’.
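
Depending on your distribution, pecl may not enable the extension for you. If PHP doesn’t pick it up, add it to php.ini yourself and confirm it loaded – the path below is only an example, use wherever your php.ini lives:

    # enable the extension (file location varies by distribution)
    echo "extension=memcache.so" | sudo tee -a /etc/php5/apache2/php.ini

    # confirm PHP can see it
    php -m | grep memcache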

Usage
Using memcache for sessions is as easy as changing the session handler settings in php.ini:

session.save_handler = memcache
session.save_path = "tcp://127.0.0.1:11211"

(This assumes memcached is listening on its default port, 11211.)

Now restart Apache (or nginx, or whatever you run) and watch as your sessions are turbo-charged.
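
If you want proof that sessions really are landing in memcache, write one from the command line and then ask memcached for it. This sketch assumes the memcache extension stores each session under its raw session ID, and uses a made-up ID for readability:

    # write a session value through the memcache handler with a known session ID
    php -r 'ini_set("session.save_handler", "memcache");
            ini_set("session.save_path", "tcp://127.0.0.1:11211");
            session_id("testsession123");
            session_start();
            $_SESSION["user"] = "dave";'

    # fetch the same key straight from memcached
    printf 'get testsession123\r\nquit\r\n' | nc 127.0.0.1 11211

The extension also accepts a comma-separated list of servers in session.save_path, which is what lets several web nodes share one session pool.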

Cheap File Replication: Synchronizing Web Assets with fsniper

Sunday, November 14th, 2010

A while ago I wrote about how I was using nginx to serve static files rather than letting the more memory-intensive Apache handle the load for files that don’t need its processing capabilities. The basic premise is that nginx is the web-facing daemon and handles static files directly from the file system, while shipping any other request off to Apache on another port.
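
For context, a stripped-down version of that setup might look something like the following – the document root and the Apache port are placeholders:

    server {
        listen 80;
        root /var/www/example.com/public;   # static files live here

        # serve common static asset types straight off the disk
        location ~* \.(?:css|js|png|jpe?g|gif|ico)$ {
            expires 30d;
        }

        # everything else is handed off to Apache on another port
        location / {
            proxy_pass http://127.0.0.1:8080;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }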

What if Apache is on a different server entirely? Unless you have the luxury of a NAS device, your options are:

1. Maintain a copy of the site’s assets separate from the web site
There are two problems with this approach: maintainability and synchronization. You’ll have to remember to deploy any content changes separately from the rest of the site, which is counter-intuitive and opens your process up to human error. And user-generated content stays on the Apache server, where it is inaccessible to nginx.

2. Use a replicating network file system like GlusterFS
Network-based replication systems are advanced and provide amazing redundancy. Any changes you make to one server can be replicated to the others very quickly, so any user generated content will be available to your content servers, and you only have to deploy your web site once.

The downside is that many network file system solutions are optimized for larger (>50 MB) files. If you rely on your content server for small files (images, CSS, JS), read performance may suffer as your traffic grows. For high-availability systems where it is critical for each server to have a full, up-to-date set of files, this is probably the best solution.

3. Use an rsync-based solution
This is the method I’ve chosen to look at here. It’s important that my content server is updated as fast as possible, and I would like to know that when I perform disaster recovery or make backups of my web site the files will be reasonably up to date. If a single file takes a few seconds to appear on any of my servers, it isn’t a huge deal (I’m just running WordPress).

The Delivery Mechanism
rsync is fast and installed by default on most servers. Pair it with ssh and passwordless login keys, and you have an easy solution for scriptable file replication. The only missing piece is the “trigger” – whenever the filesystem is updated, we need to run our update script to replicate to the content server.
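
As a rough sketch (the hostname, user and paths here are made up), the update script boils down to a single rsync-over-ssh call:

    #!/bin/bash
    # push-content.sh – replicate local content up to the content server
    # assumes a passwordless SSH key for the "deploy" user is already in place

    SRC="/var/www/example.com/content/"                 # trailing slash: sync the contents, not the directory
    DEST="deploy@static.example.com:/var/www/content/"

    # -a preserves permissions and timestamps, -z compresses in transit,
    # --delete mirrors removals so the content server doesn't accumulate cruft
    rsync -az --delete -e ssh "$SRC" "$DEST"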

incrond is one possible solution – whenever a watched directory is updated, incrond can run our update script. The problem is that it does not act on file updates recursively. fsniper is our solution.

The process flow looks like this (the fsniper configuration is sketched after the list):
1. When the content directory is updated (via a site deploy or a user file upload), fsniper initiates our update script.
2. The update script connects to the content server over ssh and issues an rsync command between our content directory and the server’s content directory.
3. Hourly (or on whatever schedule suits you), initiate an rsync from the content server back out to the web servers – this keeps all the nodes reasonably up to date for backup and disaster recovery purposes.
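
The fsniper side is little more than “run the push script whenever anything under the content directory changes”. Something along these lines – the paths and script name are placeholders, and you should check the option names against the example config that ships with your version of fsniper:

    # ~/.config/fsniper/config
    watch {
        /var/www/example.com/content {
            # match any file; %% is replaced with the path of the file that changed
            # (the push script ignores the argument and just re-syncs the whole tree)
            * {
                handler = /usr/local/bin/push-content.sh %%
            }
        }
    }

And the hourly reverse sync in step 3 is just a cron entry on the content server, for example:

    # crontab on the content server: push the canonical copy back out to a web node every hour
    0 * * * * rsync -az -e ssh /var/www/content/ deploy@web1.example.com:/var/www/example.com/content/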

Life in Linux

Friday, August 6th, 2010

So I wiped my hard drive and installed Ubuntu. After struggling with the decision to switch from Windows for some time, I finally resolved to move.

So far the results have been very good. My system boots up and is ready to use in less than a minute, there is no lag loading and switching programs, and everything I need for my day-to-day programming is available much more readily than it was with the other operating system.

The most striking difference is the amount of disk space I now have available. With all of my software, work projects, and operating system overhead, Windows left 80GB free on my 285GB drive. With all of my projects, code libraries, files and operating system installed, Ubuntu uses just 6.7GB, leaving 97% of the drive free. I am blown away by how much less clutter I have now.

I haven’t tried to do very much with Mono yet; we’ll see how it works when I try making improvements to my SiteAssistant project. I’ve been reading about Mono’s Winforms capabilities and so far am impressed by the possibilities. We’ll see how well it works with my fairly simple project; with any luck I may have found a cross-platform .NET solution with this one. Maybe the Winforms explorations will be a good topic for a future post.

Not missing Office yet, either. My Quicken financial software has been running perfectly under Wine, and all of my files appear to have made the move intact. I still own licenses to all my software, so in those rare instances when I really need it I can install Windows in VirtualBox and fill up some of that hard drive space I’ve earned.

Google Launches Its Own Programming Language

Thursday, November 12th, 2009

Google has taken another step toward world domination with the launch of an experimental new programming language aptly named “Go”. Go promises to pick up where C and Python left off, providing programmers with a new garbage-collected, low-level language suited to efficient server programming.

I spent the better part of last night looking for a way to get Go’s toolchain to run under Windows using Cygwin. Unfortunately the tools won’t build there; even if they could be built, they would produce binaries that would be unusable under Cygwin/Windows due to their low-level nature.

If you are a Windows programmer hoping to give Go a try, your best bet is to download the andLinux distribution – a native Linux distribution that runs alongside Windows much like a virtual machine. Once you have set it up, head to the Installing Go page for instructions on getting started with the new language.
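
For the curious, installing Go on Linux at the time meant pulling the source with Mercurial and building the toolchain yourself. Roughly, per the instructions of the day (treat this as a sketch; the 386 tools are used here because andLinux is 32-bit):

    # where the source lives, what we're building for, and where the tools go
    export GOROOT=$HOME/go
    export GOOS=linux
    export GOARCH=386
    export GOBIN=$HOME/bin
    mkdir -p $GOBIN && export PATH=$GOBIN:$PATH

    # fetch the release branch and build everything
    hg clone -r release https://go.googlecode.com/hg/ $GOROOT
    cd $GOROOT/src && ./all.bash

    # compile, link and run a program with the 386 toolchain
    8g hello.go && 8l hello.8 && ./8.out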