
PHP: XCache performance testing

Aside from APC, as far as I know XCache is the second most popular PHP caching optimizer. So I manually compiled and installed XCache on my VM, configured the WordPress W3 Total Cache plugin to use the XCache optimizer, and ran the same benchmark tests that I did when APC was enabled.

After a few tests, the total throughput was around 24-25 requests per second, slightly slower than APC. However, unlike APC, I noticed that with XCache the overall server load was lower (peaking at about 3.3), and the I/O system activity also appeared to be lower than with APC.

Concurrency Level: 5
Time taken for tests: 40.740110 seconds
Complete requests: 1000
Failed requests: 0
Write errors: 0
Non-2xx responses: 1000
Total transferred: 351000 bytes
HTML transferred: 0 bytes
Requests per second: 24.55 [#/sec] (mean)
Time per request: 203.701 [ms] (mean)
Time per request: 40.740 [ms] (mean, across all concurrent requests)
Transfer rate: 8.39 [Kbytes/sec] received

Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 6 134.1 0 3000
Processing: 99 196 25.6 200 297
Waiting: 98 196 25.6 199 297
Total: 99 202 136.9 200 3209

Percentage of the requests served within a certain time (ms)
50% 200
66% 209
75% 214
80% 216
90% 222
95% 227
98% 234
99% 241
100% 3209 (longest request)
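None of ab's summary numbers are independent; they are all derived from the totals. As a quick sanity check of the arithmetic behind the XCache run above (plain Ruby, using only the figures already shown):

```ruby
# Figures taken from the ab report above
total_time  = 40.740110 # seconds
requests    = 1000
concurrency = 5

# "Time per request (across all concurrent requests)" = total time / requests, in ms
per_request_all = total_time / requests * 1000.0
# "Time per request (mean)" as seen by one client = that value times the concurrency
per_request_mean = per_request_all * concurrency
# "Requests per second (mean)"
rps = requests / total_time

puts "across all: #{per_request_all.round(3)} ms"   # 40.74 ms
puts "mean:       #{per_request_mean.round(3)} ms"  # 203.701 ms
puts "req/sec:    #{rps.round(2)}"                  # 24.55
```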


Awesome Applications: 

Apache stress testing

As I didn't have anything much better to do on a Sunday afternoon, I wanted to get some benchmarks on the Apache VM that's hosting my blog. I've used the ab Apache benchmarking utility in the past to simulate high load on a server, but I hadn't used it to benchmark Apache in detail.

My VM has a single shared Core i5-2415M 2.30GHz CPU with 1.5 GB of RAM allocated to it.

I ran my benchmarks using a total of 1000 requests, with 5 concurrent requests at a time.

ab -n 1000 -c 5

With just the mod_pagespeed Apache module enabled:

Time taken for tests: 154.687976 seconds
Complete requests: 1000
Failed requests: 0
Write errors: 0
Non-2xx responses: 1000
Total transferred: 351000 bytes
HTML transferred: 0 bytes
Requests per second: 6.46 [#/sec] (mean)
Time per request: 773.440 [ms] (mean)
Time per request: 154.688 [ms] (mean, across all concurrent requests)
Transfer rate: 2.21 [Kbytes/sec] received

Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.1 0 3
Processing: 328 772 46.4 772 1040
Waiting: 327 771 46.4 772 1040
Total: 328 772 46.4 772 1040

With mod_pagespeed and APC enabled:

Time taken for tests: 41.355400 seconds
Complete requests: 1000
Failed requests: 0
Write errors: 0
Non-2xx responses: 1000
Total transferred: 351000 bytes
HTML transferred: 0 bytes
Requests per second: 24.18 [#/sec] (mean)
Time per request: 206.777 [ms] (mean)
Time per request: 41.355 [ms] (mean, across all concurrent requests)
Transfer rate: 8.27 [Kbytes/sec] received

Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 6 134.1 0 3000
Processing: 88 199 28.4 202 459
Waiting: 88 199 28.4 201 459
Total: 88 205 137.2 202 3208

With the WordPress W3 Total Cache plugin configured with Page, Database, Object, and Browser cache enabled (using the APC caching method), plus mod_pagespeed:

Time taken for tests: 37.750269 seconds
Complete requests: 1000
Failed requests: 0
Write errors: 0
Non-2xx responses: 1000
Total transferred: 351000 bytes
HTML transferred: 0 bytes
Requests per second: 26.49 [#/sec] (mean)
Time per request: 188.751 [ms] (mean)
Time per request: 37.750 [ms] (mean, across all concurrent requests)
Transfer rate: 9.06 [Kbytes/sec] received

Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 5 133.9 0 2996
Processing: 74 181 26.6 185 315
Waiting: 74 181 26.6 184 314
Total: 74 187 136.4 185 3178

As you can see, APC is the one caching method that makes a huge difference. Without APC, the server handled just 6.46 requests per second and the load average peaked at about 12, while with the default APC configuration enabled, throughput rose to 24.18 requests per second, with the load average peaking at about 3. Adding the W3 Total Cache WordPress plugin helped performance slightly more, from 24.18 to 26.49 requests per second (load was about the same, including I/O activity). One interesting thing I noticed is that with caching (that is, APC) enabled, the I/O usage spiked considerably. Most notably, MySQL was the high-CPU process during the benchmarks. Since the caching is memory-based, at this point it appears that the bottleneck in the web application is MySQL.
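For what it's worth, the relative improvements work out roughly as follows (a quick back-of-the-envelope calculation over the ab figures above):

```ruby
# Requests per second from the three ab runs above
baseline = 6.46   # mod_pagespeed only
apc      = 24.18  # mod_pagespeed + APC
w3tc     = 26.49  # mod_pagespeed + APC + W3 Total Cache

apc_speedup = apc / baseline          # roughly 3.74x faster
w3tc_gain   = (w3tc / apc - 1) * 100  # roughly 9.6% on top of APC

puts "APC speedup: #{apc_speedup.round(2)}x"
puts "W3TC gain:   #{w3tc_gain.round(1)}%"
```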



Apache: Installing mod_pagespeed on CentOS 6


Installing the rpm directly fails with a missing dependency on the at package:

rpm -ivh mod-pagespeed-stable_current_x86_64.rpm
warning: mod-pagespeed-stable_current_x86_64.rpm: Header V4 DSA/SHA1 Signature, key ID 7fac5991: NOKEY
error: Failed dependencies:
	at is needed by mod-pagespeed-stable-


Using yum localinstall instead resolves the missing dependency automatically:

yum localinstall mod-pagespeed-stable_current_x86_64.rpm



Apache: RedirectMatch

For the longest time, I've been using mod_rewrite for any type of URL redirect that requires any sort of pattern matching.

A few days ago I migrated my Gallery web app from to and, to my surprise, I learned that mod_alias, in addition to its Redirect directive, also provides the RedirectMatch directive, which is essentially Redirect with regular expression support.

I was able to easily set up the simple redirect using RedirectMatch instead of using mod_rewrite.

RedirectMatch 301 ^/photos(/)?$
RedirectMatch 301 /photos/(.*)$1
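The redirect targets in the two rules above were stripped out of the original post; for illustration only, here is what the pair would look like with a hypothetical destination (gallery.example.com is a stand-in, not the real URL):

```apache
# Redirect the bare /photos (with or without trailing slash) to the new gallery root
RedirectMatch 301 ^/photos(/)?$ http://gallery.example.com/
# Redirect deeper paths, preserving everything after /photos/
RedirectMatch 301 ^/photos/(.*)$ http://gallery.example.com/$1
```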


ZFS on Linux: Nagios check_zfs plugin

To monitor my ZFS pool, of course I'm using Nagios, duh. Nagios Exchange provides a check_zfs plugin written in Perl.

Although the plugin was originally designed for Solaris and FreeBSD systems, I got it to work under my Linux system with very little modification. The code can be found in my SysAdmin-Scripts git repo on GitHub.

# su - nagios -c "/usr/local/nagios/libexec/check_zfs backups 3"
OK ZPOOL backups : ONLINE {Size:464G Used:11.1G Avail:453G Cap:2%}



ZFS on Linux: Storage setup

For my media storage, I'm using a 500GB 5400 RPM USB drive. Since my Linux ZFS backup server is a virtual machine under VirtualBox, for the VM to be able to access the entire USB drive, the VirtualBox Extension Pack add-on needs to be installed.

The VirtualBox Extension Pack for all versions can be found on the following web site . It is important that the installed Extension Pack is the same version as VirtualBox.

VirtualBox about

VBoxManage extpack install Oracle_VM_VirtualBox_Extension_Pack-4.1.12.vbox-extpack

Additionally, it is important that the user that VirtualBox runs under is a member of the vboxusers group.

groups tony
tony : tony adm cdrom sudo dip plugdev lpadmin sambashare
sudo usermod -G adm,cdrom,sudo,dip,plugdev,lpadmin,sambashare,vboxusers tony
groups tony
tony : tony adm cdrom sudo dip plugdev lpadmin sambashare vboxusers

Since my computer is already using two other 500GB external USB drives, I had to properly identify the drive that I wanted to use for my ZFS data. This was a really simple process (I don't give a flying fuck about sharing my drive's serial).

sudo hdparm -I /dev/sdd|grep Serial
Serial Number: J2260051H80D8C
Transport: Serial, ATA8-AST, SATA 1.0a, SATA II Extensions, SATA Rev 2.5, SATA Rev 2.6; Revision: ATA8-AST T13 Project D1697 Revision 0b

Now that I know the serial number of the USB drive, I can configure my VirtualBox Linux ZFS server VM to automatically use the drive.
VirtualBox drive configuration

At this point I'm able to use the 500 GB hard drive as /dev/sdb under my Linux ZFS server and use it to create ZFS pools and file systems.

zpool create backups /dev/sdb
zfs create backups/dhcp

Since I haven't used ZFS on Linux extensively before, I'm manually mounting my ZFS pool after a reboot.

# df -h
Filesystem Size Used Avail Use% Mounted on
3.5G 1.6G 1.8G 47% /
tmpfs 1.5G 0 1.5G 0% /dev/shm
/dev/sda1 485M 67M 393M 15% /boot
# zpool import
pool: backups
id: 15563678275580781179
state: ONLINE
action: The pool can be imported using its name or numeric identifier.

backups ONLINE
# zpool import backups
# df -h
Filesystem Size Used Avail Use% Mounted on
3.5G 1.6G 1.8G 47% /
tmpfs 1.5G 0 1.5G 0% /dev/shm
/dev/sda1 485M 67M 393M 15% /boot
backups 446G 128K 446G 1% /backups
backups/afs 447G 975M 446G 1% /backups/afs
backups/afs2 447G 750M 446G 1% /backups/afs2
backups/bashninja 448G 1.4G 446G 1% /backups/bashninja
backups/debian 449G 2.5G 446G 1% /backups/debian
backups/dhcp 451G 4.4G 446G 1% /backups/dhcp
backups/macbookair 446G 128K 446G 1% /backups/macbookair
backups/monitor 447G 880M 446G 1% /backups/monitor
backups/monitor2 446G 128K 446G 1% /backups/monitor2
446G 128K 446G 1% /backups/
backups/rubysecurity 447G 372M 446G 1% /backups/rubysecurity
backups/solaris 446G 128K 446G 1% /backups/solaris
backups/ubuntu 446G 128K 446G 1% /backups/ubuntu



ZFS on Linux: Installation

Attending the ZFS Administration talk at SCALE 11x a couple of weeks ago got me interested in trying ZFS on Linux. Given that the speaker said he uses ZFS on Linux on his production machines, I figured ZFS on Linux may finally be ready for everyday use. So I'm currently looking into using the ZFS snapshot feature for my personal local file backups.

For my Linux ZFS backup server, I'm using the latest CentOS 6. Below are the steps I took to get ZFS on Linux working.

yum install automake make gcc kernel-devel kernel-headers zlib zlib-devel libuuid libuuid-devel

Since the ZFS modules get built using dkms, the latest dkms package will be needed. This can be downloaded from Dell's website at

rpm -ivh dkms-

Now, the spl-modules-dkms-X rpms need to be installed.

rpm -ivh spl*.rpm

After the spl-modules-dkms-X rpms have been installed, the ZFS rpm packages can now be finally installed.

rpm -ivh zfs*.rpm

One thing that confused me was that after all the rpm's were installed, the zfs and zpool binaries were nowhere to be found on my system. According to the documentation, the zfs-* rpm install should have built the kernel modules and installed them into my running kernel; however, this didn't appear to be the case.
Instead I had to do the following:

cd /usr/src/zfs-0.6.0
make install

After the install completed, both the zfs and zpool utilities were available and ready to use.



Installing the Nagios Service Check Acceptor

One of the cool things that Nagios supports is the ability to do passive checks. That is, instead of Nagios actively checking a client machine for errors, the client is able to send error notifications to Nagios. This can be accomplished using the Nagios Service Check Acceptor.

Installing the plugin is a straightforward process. The following steps are the ones I took to get it working under CentOS 6 (Nagios server) and CentOS 5 (client).

Install dependencies:

yum install libmcrypt libmcrypt-devel

Download the latest stable version (I tend to stick with stable versions, unless it's absolutely necessary to run development versions), then configure and compile.

tar -xvf nsca-2.7.2.tar.gz
cd nsca-2.7.2
./configure

*** Configuration summary for nsca 2.7.2 07-03-2007 ***:

General Options:
NSCA port: 5667
NSCA user: nagios
NSCA group: nagios

make all

Copy xinet.d sample config file and nsca.cfg file.

cp sample-config/nsca.cfg /usr/local/nagios/etc/
cp sample-config/nsca.xinetd /etc/xinetd.d/nsca

Update /etc/xinetd.d/nsca (where only_from is set to the client IP that will be passively checked):

# default: on
# description: NSCA
service nsca
{
        flags           = REUSE
        socket_type     = stream
        wait            = no
        user            = nagios
        group           = nagcmd
        server          = /usr/local/nagios/bin/nsca
        server_args     = -c /usr/local/nagios/etc/nsca.cfg --inetd
        log_on_failure  += USERID
        disable         = no
        only_from       =
}

Restart xinetd

service xinetd restart

Verify that it's running

netstat -anp|grep 5667
tcp 0 0 :::5667 :::* LISTEN 30008/xinetd

Add firewall rule

iptables -A INPUT -p tcp -m tcp --dport 5667 -s -j ACCEPT

Set password and update decryption type in /usr/local/nagios/etc/nsca.cfg

Finally, update the permissions so no one can read the password.

chmod 400 /usr/local/nagios/etc/nsca.cfg
chown nagios.nagios /usr/local/nagios/etc/nsca.cfg

Now let's configure the client machine. The same dependencies also need to be installed on the client system. I also went ahead and downloaded and compiled nsca. (In theory, I could have just copied over the send_nsca binary that was compiled on the Nagios server, since both are x64 Linux systems.)
Once compiled, copy the send_nsca binary and update its permissions.

cp src/send_nsca /usr/local/nagios/bin/
chown nagios.nagios /usr/local/nagios/bin/send_nsca
chmod 4710 /usr/local/nagios/bin/send_nsca

Copy the sample send_nsca.cfg config file and update the encryption settings; these must match those on the nsca server.

cp sample-config/send_nsca.cfg /usr/local/nagios/etc/

Finally, update the permissions so no one can read the password.

chown nagios.nagios /usr/local/nagios/etc/send_nsca.cfg
chmod 400 /usr/local/nagios/etc/send_nsca.cfg

Now you can use the following test script to test the settings.

CFG="/usr/local/nagios/etc/send_nsca.cfg"
CMD="rubyninja;test;3;UNKNOWN - just an nsca test"
/bin/echo "$CMD" | /usr/local/nagios/bin/send_nsca -H $nagiosserveriphere -d ';' -c $CFG

In my case:

# su - nagios -c 'bash /usr/local/nagios/libexec/test_nsca'
1 data packet(s) sent to host successfully.

The server successfully received the passive check.

Feb 22 20:46:39 monitor nagios: Warning:  Passive check result was received for service 'test' on host 'rubyninja', but the service could not be found!

Last words: the only problem I ran into was getting xinetd to properly load the newly available nsca service.

xinetd[3499]: Started working: 0 available services
nsca[3615]: Handling the connection...
nsca[3615]: Could not send init packet to client

Fix: This was because the sample nsca.xinetd file had nagios as the group setting. I simply had to update it to nagcmd.
I suspect this is because of the permissions set on the Nagios command file nagios.cmd, which is the interface for external commands sent to the Nagios server.



Logging iptables rules

When debugging certain custom firewall rules, it can sometimes be extremely useful to log the rule's activity.
For example, the following rule logs all inbound traffic hitting the INPUT chain. The log entries will be available via dmesg or syslog.

iptables -A INPUT -j LOG --log-prefix " iptables INPUT "
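On a busy box, logging every packet like this can flood syslog; the LOG rule can be rate-limited with the limit match (the 5/min figure below is an arbitrary example, not from the original post):

```
# Log at most 5 matching packets per minute
iptables -A INPUT -m limit --limit 5/min -j LOG --log-prefix " iptables INPUT "
```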


Custom Nagios mdadm monitoring: check_mdadm-raid

Simple Nagios mdadm monitoring plugin.

#!/usr/bin/env ruby

# Tony Baltazar. root[@]

# Nagios exit codes
OK = 0
WARNING = 1
CRITICAL = 2
UNKNOWN = 3

# Note to self, mdadm exit status:
#0 The array is functioning normally.
#1 The array has at least one failed device.
#2 The array has multiple failed devices such that it is unusable.
#4 There was an error while trying to get information about the device.

raid_device = '/dev/md0'

raid_output = %x[sudo mdadm --detail #{raid_device}].lines.to_a

# Pull the "State : ..." line out of the mdadm output
state_line = raid_output.grep(/\sState\s:\s/).first.to_s
raid_state = state_line.sub(/^.*State\s:\s/, '').strip

if raid_state.empty?
  print "Unable to get RAID status!"
  exit UNKNOWN
end

if /^(clean(, checking)?|active)$/.match(raid_state)
  print "RAID OK: #{raid_state}"
  exit OK
elsif /degraded/i.match(raid_state)
  print "WARNING RAID: #{raid_state}"
  exit WARNING
elsif /fail/i.match(raid_state)
  print "CRITICAL RAID: #{raid_state}"
  exit CRITICAL
else
  print "UNKNOWN RAID detected: #{raid_state}"
  exit UNKNOWN
end



