
Hide Server and Web Application Information

First and foremost, this is security through obscurity. It provides no real security improvement on its own; at best it annoys, and hopefully deters, automated bots and script kiddies from doing any future damage. By no means should it be the only defensive mechanism for any site!

This site runs on Drupal, which by default returns Drupal-specific HTTP headers on every request. I wanted to disable these completely at the server level, along with the PHP and Apache information that is part of a typical stock LAMP configuration.

First, we start with PHP. The X-Powered-By header can be removed by turning off the expose_php option in php.ini:

expose_php = Off
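
After reloading PHP (or the web server), a quick header check confirms the change; this is just an illustrative check, so substitute your own URL:

# No output means the X-Powered-By header is gone
curl -sI https://www.example.org/ | grep -i '^X-Powered-By'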

Next, update the default Server header set by Apache:

ServerTokens Prod

Finally, it's time to remove the Drupal-specific X-Generator and X-Drupal-Cache headers.
Using Apache via the mod_headers module:

<IfModule mod_headers.c>
     Header unset X-Generator
     Header unset X-Drupal-Cache
</IfModule>

Using Nginx via the headers-more module:

more_clear_headers 'x-generator';
more_clear_headers 'x-drupal-cache';
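
Note that more_clear_headers comes from the third-party headers-more module, which has to be built into Nginx (or loaded as a dynamic module); it is not part of stock Nginx. A single call can clear several headers at once, and the stock server_tokens directive hides the Nginx version in the Server header, roughly Nginx's counterpart to ServerTokens Prod:

server_tokens off;
more_clear_headers 'X-Generator' 'X-Drupal-Cache';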

Why stop there when I can set custom headers? So as a joke, I want to tell the world that my sites are powered by Unicorns and that I'm the hacker behind them. Doing so is dead simple.

Nginx

add_header              X-Powered-By "Unicorns";
add_header              X-hacker "Alpha01";

Varnish (vcl_deliver)

set resp.http.X-Powered-By = "Unicorns";
set resp.http.X-hacker = "Alpha01";
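
Varnish can also strip the Drupal headers in the same vcl_deliver routine, if you'd rather handle everything at the cache layer instead of in Apache or Nginx; a minimal sketch:

unset resp.http.X-Generator;
unset resp.http.X-Drupal-Cache;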

Now, let's view my new HTTP headers:

alpha03:~ tony$ curl -I https://www.rubysecurity.org
HTTP/1.1 200 OK
Date: Thu, 24 Nov 2016 03:53:40 GMT
Content-Type: text/html; charset=utf-8
Connection: keep-alive
Set-Cookie: __cfduid=d8fbf8e2a27fac74e782224db3fd3c86c1479959620; expires=Fri, 24-Nov-17 03:53:40 GMT; path=/; domain=.rubysecurity.org; HttpOnly
Strict-Transport-Security: max-age=63072000; includeSubdomains; preload
Expires: Sun, 19 Nov 1978 05:00:00 GMT
Cache-Control: public, max-age=300
X-Content-Type-Options: nosniff
Content-Language: en
X-Frame-Options: SAMEORIGIN
Last-Modified: Tue, 22 Nov 2016 06:55:27 GMT
Vary: Cookie,Accept-Encoding
Front-End-Https: on
X-Powered-By: Unicorns
X-hacker: Alpha01
Server: cloudflare-nginx
CF-RAY: 3069ea499a6320ba-LAX

References:
http://php.net/manual/en/ini.core.php#ini.expose-php
http://httpd.apache.org/docs/2.4/mod/core.html#servertokens
https://www.drupal.org/node/982034#comment-4719282
http://nginx.org/en/docs/http/ngx_http_headers_module.html


Packt Publishing Free E-Books crawler

I'm a big fan of Packt Publishing and have purchased quite a few books from them. So when I first heard a couple of months back that they were going to give out free e-books every day, my jaw literally dropped. https://www.packtpub.com/packt/offers/free-learning

I've normally been checking the site manually every day for books that I might be interested in reading. The problem with this is that there have been days when I missed out on free books I would've loved to read. So I wrote a short script that notifies me when there's a free book available that I might be interested in. I would've loved it if Packt Publishing provided an RSS feed so I could easily get notifications of their free books. That said, I really can't complain, since they're already kind enough to give the world free e-books to spread knowledge.

https://github.com/alpha01/Packt-Publishing-Free-Learning


Grepping for PHP system-level command functions

grep --color -r -E -e '(escapeshellarg|escapeshellcmd|exec|passthru|proc_close|proc_get_status|proc_nice|proc_open|proc_terminate|shell_exec|system)(\s+)?\(' ./
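
The pattern matches calls to PHP's command-execution and process-control functions (exec, system, shell_exec, proc_open, and friends), allowing optional whitespace before the opening parenthesis. For example, a hypothetical file like the following would be flagged:

<?php
// test.php: both of these lines match the grep pattern above
$uptime = shell_exec('uptime');
system('ls -la /tmp');
?>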


Running my own Git server: GitList

For the longest time I've been wanting to streamline updates to my sites, i.e. implement good software deployment techniques and procedures. To be specific, start using Git for source code management and Jenkins to deploy. No, I'm not drinking the whole Agile Kool-Aid. After all, it's 2015, and people who still use FTP/SFTP to push out changes to their sites really should be practicing more sustainable long-term procedures. Setting up a Git server is really simple. See https://www.rubysecurity.org/ansible-git

Git workflow:
I prefer to communicate with Git only over SSH and not HTTPS. Since I don't use the default SSH port, the initial repository clone looks like this:

git clone ssh://$GIT-USER@$GIT-SERVER:$SSH-PORT/home/git/$REPO
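
To avoid typing the user and port on every clone or remote operation, a host alias in ~/.ssh/config works too; a sketch with placeholder values:

Host gitserver
    HostName git.example.org
    User git
    Port 2222

With that in place, the clone shortens to git clone gitserver:/home/git/$REPO.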

GitHub has become the de facto Git hosting provider. I think much of its success, aside from the fact that Git is an amazing piece of software, is GitHub's polished web user interface. While Git ships with a daemon that provides a visual look at repositories, it's definitely not pretty. I wanted a local GitHub-like interface for my private Git repos, so I decided to use GitList. GitList is fairly minimalistic. Requiring just PHP and mod_rewrite, it lets you browse your repositories, view files under different revisions, and see commit history and diffs. Configuring GitList is really easy.

git clone https://github.com/klaussilveira/gitlist.git
cd gitlist
chmod 777 cache
mv config.ini-example config.ini

Then update config.ini to point to the location where the Git repositories are stored on the server. On my server, they're located in /home/git.

repositories[] = '/home/git/';

Lastly, configure the web server's virtual host. Since I use Apache, mine looks like this:

<VirtualHost 192.168.1.16:443>
        ServerName git.rubyninja.org
        ServerAlias git.rubyninja.org

        DocumentRoot /var/www/gitlist

        <Directory "/var/www/gitlist">
                AllowOverride All
                AuthType Basic
                AuthName "Git Repos"
                AuthUserFile /home/svn/.htpasswd
                Require valid-user
        </Directory>

        SSLEngine on
        SSLCertificateFile /etc/httpd/certs/svn.rubyninja.org.crt
        SSLCertificateKeyFile /etc/httpd/certs/svn.rubyninja.org.key
        SSLCACertificateFile /etc/httpd/certs/rubyninjaCA.crt

        ErrorLog logs/git_ssl_error_log
        CustomLog logs/git_ssl_request_log \
                "%t %h %{SSL_PROTOCOL}x %{SSL_CIPHER}x \"%r\" %b"
</VirtualHost>



Server Move and Upgrades!

My little corner of the internet has a new home. My old $29.99 8 GB RAM, 3.40 GHz Intel Core i3 dedicated server was simply not enough to handle my server needs; apparently OVH doesn't even offer that service anymore. So I hopped over to their mid-tier dedicated server branch, which they call So you Start, and opted for their $49.00 SYS-IP-2 service. My new server's specs are as follows:

  • 2.66 GHz+ Intel Xeon W3520 (4 cores/ 8 threads)
  • 32 GB ECC
  • 2 x 2 TB SATA drives (Software RAID)

I would've loved the drives to be SAS and the RAID to be hardware based, but it's definitely not a deal breaker, and at just $49.99 a month, there's not much to complain about.

CentOS 6 to CentOS 7 upgrade:
My server migration was fairly straightforward for the most part. I opted to re-create the KVM hypervisor and its guests from scratch, mainly because I wanted to upgrade both the host and all of the guests from CentOS 6 to CentOS 7. This is where I encountered my first problem: my VMs rely on custom NAT PREROUTING/POSTROUTING iptables rules to be able to talk to each other and to the internet, and CentOS 7 defaults to firewalld. So instead of trying to rewrite my firewall rules to be compatible with firewalld, I decided to keep CentOS 6 on the host operating system and only upgrade the guest VMs to CentOS 7.
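
The actual rules aren't reproduced here, but the general shape, assuming a NAT-ed libvirt bridge and illustrative addresses, is a DNAT rule in PREROUTING to forward inbound traffic to a guest and a MASQUERADE rule in POSTROUTING so guests can reach the internet:

# Forward inbound HTTP on the host's public IP to the web guest (addresses are made up)
iptables -t nat -A PREROUTING -d 203.0.113.10 -p tcp --dport 80 -j DNAT --to-destination 192.168.122.10:80
# Let guests on the virtual bridge reach the internet via the host's public address
iptables -t nat -A POSTROUTING -s 192.168.122.0/24 ! -d 192.168.122.0/24 -j MASQUERADE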

On a side note, my previous guest VMs originally used the raw image format (with default cache settings) for their storage, and by god, what a difference switching to native block storage via LVM makes. I/O performance on my old server was terrible: the I/O wait percentage was roughly 6%, and now it's less than 1%. Even with the software RAID, I/O performance is much better on my new server.
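
For anyone making the same switch, the migration itself is straightforward: carve out a logical volume at least as large as the existing raw image, copy the image onto it with qemu-img, and point the guest's disk at the block device. A rough sketch with made-up volume group and guest names:

# Create a logical volume and copy the existing raw image onto it (names and sizes are illustrative)
lvcreate -L 20G -n webvm vg_guests
qemu-img convert -f raw -O raw /var/lib/libvirt/images/webvm.img /dev/vg_guests/webvm
# Then edit the guest definition (virsh edit webvm) so the disk source points at
# /dev/vg_guests/webvm as a block device with cache='none' instead of the file-backed image.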

PHP 5.3 to 5.6 upgrade:
Since I don't have anything heavily customized on any of my sites, the PHP version upgrade was practically painless.

Apache 2.2 to 2.4 upgrade:
Luckily, upgrading Apache wasn't a big hassle. For anyone considering upgrading from 2.2 to 2.4, it's definitely worth checking out the official upgrade documentation, since dropping the old 2.2 configs onto a 2.4 environment won't work out of the box. In my case, all of my sites were returning 403 Forbidden responses and none of my .htaccess files were being read by Apache. The fix was really simple: adding the following to the affected <Directory> blocks.


        AllowOverride All
        Require all granted

I must say, I really like Apache 2.4's new authorization syntax. What used to be a three-line configuration is now a single line, and much more human readable.
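
For comparison, this is the typical 2.2 "allow everyone" stanza that the single Require directive replaces (the 2.2 form shown here is the common default, not necessarily the exact lines from my old configs):

# Apache 2.2
Order allow,deny
Allow from all

# Apache 2.4
Require all granted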

Future Upgrade Plans:
I didn't tackle this during the server migration, but I'm definitely going to upgrade to Varnish 4 and switch to PHP FastCGI via php-fpm and mod_proxy_fcgi.


Securing the WordPress Admin Dashboard

The primary reason I wanted to add SSL support to www.rubyninja.org is that I want all my /wp-admin traffic to be served securely.

Configuring WordPress to force the login page and all wp-admin traffic to be served over SSL is simply a matter of defining the FORCE_SSL_LOGIN and FORCE_SSL_ADMIN constants in wp-config.php:

define( 'FORCE_SSL_LOGIN', true );
define( 'FORCE_SSL_ADMIN', true );


PHP: XCache performance testing

As far as I know, XCache is the second most popular PHP opcode cache after APC. So I manually compiled and installed XCache on my www.rubyninja.org VM, configured the WordPress W3 Total Cache plugin to use the XCache backend, and ran the same benchmark tests that I ran when APC was enabled.
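
For reference, enabling XCache after compiling it came down to loading the extension and giving the opcode cache some memory in php.ini; the snippet below is a minimal sketch based on XCache's sample xcache.ini, with illustrative values rather than my exact configuration:

extension = xcache.so
xcache.size = 64M
xcache.count = 2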

After a few tests, the result was around 24-25 requests per second, slightly slower than APC. However, unlike with APC, I noticed that with XCache the overall server load was lower (peaking at about 3.3), and I/O activity also appeared to be lower.

Concurrency Level: 5
Time taken for tests: 40.740110 seconds
Complete requests: 1000
Failed requests: 0
Write errors: 0
Non-2xx responses: 1000
Total transferred: 351000 bytes
HTML transferred: 0 bytes
Requests per second: 24.55 [#/sec] (mean)
Time per request: 203.701 [ms] (mean)
Time per request: 40.740 [ms] (mean, across all concurrent requests)
Transfer rate: 8.39 [Kbytes/sec] received

Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 6 134.1 0 3000
Processing: 99 196 25.6 200 297
Waiting: 98 196 25.6 199 297
Total: 99 202 136.9 200 3209

Percentage of the requests served within a certain time (ms)
50% 200
66% 209
75% 214
80% 216
90% 222
95% 227
98% 234
99% 241
100% 3209 (longest request)


Apache stress testing

As I didn't have anything much better to do on a Sunday afternoon, I wanted to get some benchmarks on the Apache VM that hosts my blog, www.rubyninja.org. I've used the ab Apache benchmarking utility in the past to simulate high load on a server, but I hadn't used it to benchmark Apache in detail.

My VM has a single shared Core i5-2415M 2.30GHz CPU with 1.5 GB of RAM allocated to it.

I ran my benchmarks using a total of 1000 requests, with 5 concurrent requests at a time.

ab -n 1000 -c 5 http://www.rubyninja.org/index.php

Results:
With just the mod_pagespeed Apache module enabled:

Time taken for tests: 154.687976 seconds
Complete requests: 1000
Failed requests: 0
Write errors: 0
Non-2xx responses: 1000
Total transferred: 351000 bytes
HTML transferred: 0 bytes
Requests per second: 6.46 [#/sec] (mean)
Time per request: 773.440 [ms] (mean)
Time per request: 154.688 [ms] (mean, across all concurrent requests)
Transfer rate: 2.21 [Kbytes/sec] received

Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.1 0 3
Processing: 328 772 46.4 772 1040
Waiting: 327 771 46.4 772 1040
Total: 328 772 46.4 772 1040

With mod_pagespeed and APC enabled:

Time taken for tests: 41.355400 seconds
Complete requests: 1000
Failed requests: 0
Write errors: 0
Non-2xx responses: 1000
Total transferred: 351000 bytes
HTML transferred: 0 bytes
Requests per second: 24.18 [#/sec] (mean)
Time per request: 206.777 [ms] (mean)
Time per request: 41.355 [ms] (mean, across all concurrent requests)
Transfer rate: 8.27 [Kbytes/sec] received

Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 6 134.1 0 3000
Processing: 88 199 28.4 202 459
Waiting: 88 199 28.4 201 459
Total: 88 205 137.2 202 3208

With mod_pagespeed plus the WordPress W3 Total Cache plugin configured with Page, Database, Object, and Browser caching, using the APC caching method:

Time taken for tests: 37.750269 seconds
Complete requests: 1000
Failed requests: 0
Write errors: 0
Non-2xx responses: 1000
Total transferred: 351000 bytes
HTML transferred: 0 bytes
Requests per second: 26.49 [#/sec] (mean)
Time per request: 188.751 [ms] (mean)
Time per request: 37.750 [ms] (mean, across all concurrent requests)
Transfer rate: 9.06 [Kbytes/sec] received

Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 5 133.9 0 2996
Processing: 74 181 26.6 185 315
Waiting: 74 181 26.6 184 314
Total: 74 187 136.4 185 3178

As you can see, APC is the one caching method that makes a huge difference. Without APC, the server handled just 6.46 requests per second and the load average peaked at about 12, while with the default APC configuration enabled it handled 24.18 requests per second, with the load average peaking at about 3. Adding the W3 Total Cache WordPress plugin helped performance slightly more, from 24.18 requests per second to 26.49 requests per second (load was about the same, including I/O activity). One interesting thing I noticed is that with APC caching enabled, I/O usage spiked considerably. Most notably, MySQL was the process with the highest CPU usage during the benchmarks. Since the caching is memory based, at this point it appears that the bottleneck in the web application is MySQL.


PHP memory_limit stress testing

I wrote this code a couple of years ago and found it very useful when troubleshooting PHP memory_limit settings. The script essentially keeps growing one huge array until PHP runs out of memory.

<?php
// Show errors so the fatal "Allowed memory size exhausted" message is visible.
ini_set('display_errors', true);

while (1) {
        echo 'Hello' . nl2br("\n");
        $array = array(1, 2);
        while (1) {
                // Double the array on every pass until memory_limit is hit.
                $tmp = $array;
                $array = array_merge($array, $tmp);
                // Print current usage; nl2br() adds <br /> when viewed in a browser.
                echo memory_get_usage() . nl2br("\n");
                flush();
                sleep(1);
        }
}
?>
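
Assuming the script is saved as something like memtest.php (the filename is arbitrary), the limit can be overridden per run from the CLI to watch exactly where it dies:

# Run with a 64 MB limit; usage numbers grow until PHP hits the limit and aborts
php -d memory_limit=64M memtest.php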


Setting phpMyAdmin to display a single database

Edit config.inc.php


$cfg['Servers'][$i]['only_db'] = 'databasename';
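
If I recall correctly, only_db also accepts an array, so several databases can be whitelisted while everything else stays hidden; for example:

$cfg['Servers'][$i]['only_db'] = array('databasename', 'anotherdatabase');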
