How to survive a Slashdotting

Q I wrote an article a few months back that got mentioned on Slashdot, and, because I hosted the site myself, my humble server got pounded by thousands of visitors in just a few hours. Back then I used Apache 1.3 with MySQL 3.23. I've now got another article ready to upload that I think will also be a big hit, but this time I want to be prepared - what can I do to ensure maximum throughput from my server? Since the first article I have upgraded to Apache 2 and MySQL 4 (both compiled by hand), plus PHP 4.3, and am now using a dual 2.0GHz box with 1GB of RAM. The rest of the system is a pretty basic CentOS (the freebie Red Hat Enterprise) install. If possible I would rather not upgrade the hardware further!

A Slashdotting can be difficult to prepare for without some sort of load testing, and much of the advice I'll give depends on the type of content you're hosting. Obviously, the more static you make your pages, the more hits your server will be able to handle; you may even want to create a separate low-bandwidth version for the Slashdot crowd. Mounting your filesystem (especially if it's ext3) with the noatime option will minimise disk overhead, as your system will no longer update the 'last accessed' time every time a page is opened:

/dev/sda5 /var/www/html	ext3	defaults,noatime	1	2

In Apache itself it's well worth turning keepalives off. That reduces the number of simultaneously open connections, but introduces some latency into page loading, especially if your pages contain many images. This is a tradeoff you will have to test, but usually turning keepalives off is beneficial. You mentioned that you have a custom-compiled version of Apache: be sure to turn your MaxClients setting up quite a bit. By default it's capped at 256 in the Apache source (in Apache 2's prefork MPM the cap is the ServerLimit directive) and you'll probably need something substantially higher. Still in Apache, you could try a compression module such as mod_gzip (or mod_deflate, its Apache 2 equivalent). This is only really useful if the bottleneck is your bandwidth rather than your CPU or disks, as it compresses outgoing data at the cost of extra CPU utilisation.
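Pulling those suggestions together, the relevant httpd.conf directives look something like this (the figure of 1024 is only an illustration - test what your RAM can actually sustain):

```
# Drop persistent connections so each child process is freed quickly
KeepAlive Off

# Apache 2 prefork: ServerLimit must be raised before MaxClients
# can go beyond the compiled-in default of 256
ServerLimit 1024
MaxClients  1024
```

Remember that each prefork child takes a chunk of memory, so set MaxClients too high and you'll push the box into swap, which is far worse than queueing a few requests.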

On the kernel side you could set the net.ipv4.tcp_keepalive_time and net.ipv4.tcp_fin_timeout tunables to something more suitable; I've had good results with a fin_timeout of 30 seconds and a keepalive of 20 minutes. You can modify these using the sysctl command. You could also consider tuning bdflush, especially if you're using a 2.4 kernel - there are many options there for optimising your memory (and swap) usage. Given the specs of the server you've mentioned I doubt you're using IDE hard drives, but if you are, make sure that DMA is turned on. Check your IDE drive's performance by running:
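As a sketch, those two settings could go in /etc/sysctl.conf (values are in seconds, so 20 minutes is 1200) and be loaded with sysctl -p:

```
# Reap sockets stuck in FIN-WAIT-2 after 30 seconds
net.ipv4.tcp_fin_timeout = 30
# Send the first TCP keepalive probe after 20 minutes of idle time
net.ipv4.tcp_keepalive_time = 1200
```

Putting them in sysctl.conf rather than running sysctl -w by hand means the settings survive a reboot.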

hdparm -Tt /dev/hda
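If DMA turns out to be off, it can be enabled (and the setting verified) with hdparm as well - this assumes your first IDE disk really is /dev/hda:

```
hdparm -d1 /dev/hda   # enable DMA for the drive
hdparm -d /dev/hda    # confirm: should report 'using_dma = 1 (on)'
```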

An acceptable speed is about 400MB/sec for cached reads and 20-30MB/sec for disk reads. I've seen a similar configuration on similar hardware handle 200,000 unique connections per hour, serving around 300KB each time, so with a little planning you should be successful. Good luck!
