rubysecurity.org

Anecdotes from a Linux Systems Administrator. /root

Latest Posts


March 17, 2013

Apache: RedirectMatch

by Alpha01

For the longest time, I’ve been using mod_rewrite for any URL redirect that requires pattern matching.

A few days ago I migrated my Gallery web app from https://www.rubysecurity.org/photos to http://photos.antoniobaltazar.com, and I learned that mod_alias, in addition to its Redirect directive, also provides RedirectMatch, which is essentially Redirect with regular expression support.

I was able to easily set up the simple redirect using RedirectMatch instead of mod_rewrite.

RedirectMatch 301 ^/photos(/)?$ http://photos.antoniobaltazar.com
RedirectMatch 301 /photos/(.*) http://photos.antoniobaltazar.com/$1
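For comparison, the equivalent rules with mod_rewrite would look something like this (a sketch, assuming the rules live in the server/vhost config, where patterns match the full URL path):

RewriteEngine On
RewriteRule ^/photos/?$ http://photos.antoniobaltazar.com/ [R=301,L]
RewriteRule ^/photos/(.*)$ http://photos.antoniobaltazar.com/$1 [R=301,L]

Two RedirectMatch one-liners versus three mod_rewrite directives; for simple redirects, mod_alias is the cleaner tool.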
Tags: [ apache ]
March 16, 2013

ZFS on Linux: Nagios check_zfs plugin

by Alpha01

To monitor my ZFS pool, of course I’m using Nagios, duh. Nagios Exchange provides a check_zfs plugin written in Perl: http://exchange.nagios.org/directory/Plugins/Operating-Systems/Solaris/check_zfs/details

Although the plugin was originally designed for Solaris and FreeBSD systems, I got it to work under my Linux system with very little modification. The code can be found in my SysAdmin-Scripts repo on my GitHub account.

[root@backup ~]# su - nagios -c "/usr/local/nagios/libexec/check_zfs backups 3"
OK ZPOOL backups : ONLINE {Size:464G Used:11.1G Avail:453G Cap:2%} <sdb:ONLINE>
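For reference, wiring the plugin into Nagios takes a command and a service definition along these lines (a sketch based on my setup; the generic-service template is an assumption, and the arguments are the same pool name and threshold used in the manual run above):

define command {
        command_name    check_zfs
        command_line    $USER1$/check_zfs $ARG1$ $ARG2$
}

define service {
        use                     generic-service
        host_name               backup
        service_description     ZFS pool backups
        check_command           check_zfs!backups!3
}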
Tags: [ perl nagios zfs ]
March 15, 2013

ZFS on Linux: Storage setup

by Alpha01

For my media storage, I’m using a 500GB 5400 RPM USB drive. Since my Linux ZFS backup server is a virtual machine under VirtualBox, the VirtualBox Extension Pack add-on needs to be installed in order for the VM to access the entire USB drive.

The VirtualBox Extension Pack for all versions can be found on the VirtualBox website. It is important that the installed Extension Pack matches the version of VirtualBox itself:

[Screenshot: VirtualBox About dialog showing the installed version]

wget http://download.virtualbox.org/virtualbox/4.1.12/Oracle_VM_VirtualBox_Extension_Pack-4.1.12.vbox-extpack
VBoxManage extpack install Oracle_VM_VirtualBox_Extension_Pack-4.1.12.vbox-extpack
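To confirm the Extension Pack was registered, the installed extension packs can be listed:

VBoxManage list extpacks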

Additionally, it is important that the user VirtualBox runs under is a member of the vboxusers group.

groups tony
tony : tony adm cdrom sudo dip plugdev lpadmin sambashare
sudo usermod -aG vboxusers tony
groups tony
tony : tony adm cdrom sudo dip plugdev lpadmin sambashare vboxusers

Since my computer already uses two other 500GB external USB drives, I had to properly identify the drive I wanted to use for my ZFS data. This was a really simple process (and no, I don’t mind sharing my drive’s serial).

sudo hdparm -I /dev/sdd|grep Serial
  Serial Number:      J2260051H80D8C
  Transport:          Serial, ATA8-AST, SATA 1.0a, SATA II Extensions, SATA Rev 2.5, SATA Rev 2.6; Revision: ATA8-AST T13 Project D1697 Revision 0b
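Alternatively, the symlinks udev creates under /dev/disk/by-id embed the drive’s serial number, which is another quick way to match a device node to a physical drive:

ls -l /dev/disk/by-id/ | grep sdd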

Now that I know the serial number of the USB drive, I can configure my VirtualBox Linux ZFS server VM to automatically use the drive.

[Screenshot: VirtualBox storage settings attaching the USB drive to the VM]
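For what it’s worth, the same USB attachment can also be scripted instead of done through the GUI. A sketch with VBoxManage, assuming the VM is named backup and using the serial number found above:

VBoxManage usbfilter add 0 --target backup --name "ZFS USB drive" --serialnumber J2260051H80D8C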

At this point I’m ready to use the 500GB drive, which appears as /dev/sdb under my Linux ZFS server, to create ZFS pools and file systems.

zpool create backups /dev/sdb
zfs create backups/dhcp
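A quick sanity check that the pool and the new file system were created:

zpool status backups
zfs list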

Since I haven’t used ZFS on Linux extensively before, I’m manually importing my ZFS pool after each reboot.

[root@backup ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root
                      3.5G  1.6G  1.8G  47% /
tmpfs                 1.5G     0  1.5G   0% /dev/shm
/dev/sda1             485M   67M  393M  15% /boot
[root@backup ~]# zpool import
   pool: backups
     id: 15563678275580781179
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 config:

  backups     ONLINE
    sdb       ONLINE
[root@backup ~]# zpool import backups
[root@backup ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root
                      3.5G  1.6G  1.8G  47% /
tmpfs                 1.5G     0  1.5G   0% /dev/shm
/dev/sda1             485M   67M  393M  15% /boot
backups               446G  128K  446G   1% /backups
backups/afs           447G  975M  446G   1% /backups/afs
backups/afs2          447G  750M  446G   1% /backups/afs2
backups/bashninja     448G  1.4G  446G   1% /backups/bashninja
backups/debian        449G  2.5G  446G   1% /backups/debian
backups/dhcp          451G  4.4G  446G   1% /backups/dhcp
backups/macbookair    446G  128K  446G   1% /backups/macbookair
backups/monitor       447G  880M  446G   1% /backups/monitor
backups/monitor2      446G  128K  446G   1% /backups/monitor2
backups/rubyninja.net
                      446G  128K  446G   1% /backups/rubyninja.net
backups/rubysecurity  447G  372M  446G   1% /backups/rubysecurity
backups/solaris       446G  128K  446G   1% /backups/solaris
backups/ubuntu        446G  128K  446G   1% /backups/ubuntu
Tags: [ ubuntu centos virtualbox zfs ]
March 14, 2013

ZFS on Linux: Installation

by Alpha01

Attending the ZFS Administration talk at SCALE 11x a couple of weeks ago got me interested in trying ZFS on Linux. The speaker mentioned that he runs ZFS on Linux on his production machines, which made me think it may finally be ready for everyday use. So I’m currently looking into using the ZFS snapshot feature for my personal local file backups.

For my Linux ZFS backup server, I’m using the latest CentOS 6. Below are the steps I took to get ZFS on Linux working.

yum install automake make gcc kernel-devel kernel-headers zlib zlib-devel libuuid libuuid-devel

Since the ZFS modules get built using dkms, the latest dkms package will be needed. This can be downloaded from Dell’s website at http://linux.dell.com/dkms/

wget http://linux.dell.com/dkms/permalink/dkms-2.2.0.3-1.noarch.rpm
rpm -ivh dkms-2.2.0.3-1.noarch.rpm
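Once the rpm is installed, dkms status can be used to confirm dkms is working; it lists whatever modules are currently registered, and will show the spl and zfs modules once those packages are added:

dkms status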

Now, the spl-modules-dkms-X rpms need to be installed.

wget http://archive.zfsonlinux.org/downloads/zfsonlinux/spl/spl-0.6.0-rc14.src.rpm
wget http://archive.zfsonlinux.org/downloads/zfsonlinux/spl/spl-modules-0.6.0-rc14.src.rpm
wget http://archive.zfsonlinux.org/downloads/zfsonlinux/spl/spl-modules-dkms-0.6.0-rc14.noarch.rpm
rpm -ivh spl*.rpm

After the spl-modules-dkms-X rpms have been installed, the ZFS rpm packages can finally be installed.

wget http://archive.zfsonlinux.org/downloads/zfsonlinux/zfs/zfs-0.6.0-rc14.src.rpm
wget http://archive.zfsonlinux.org/downloads/zfsonlinux/zfs/zfs-modules-0.6.0-rc14.src.rpm
wget http://archive.zfsonlinux.org/downloads/zfsonlinux/zfs/zfs-modules-dkms-0.6.0-rc14.noarch.rpm
rpm -ivh zfs*.rpm

One thing that confused me was that after all the rpms were installed, the zfs and zpool binaries were nowhere to be found on my system. According to the documentation, the zfs-* rpm install should have built the kernel modules and installed them for my running kernel; however, this didn’t appear to be the case. Instead, I had to do the following:

cd /usr/src/zfs-0.6.0
make
make install

After the install completed, both the zfs and zpool utilities were available and ready to use.
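To double-check that the kernel modules actually made it into the running kernel:

modprobe zfs
lsmod | grep -E 'zfs|spl'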


Tags: [ zfs centos ]
February 23, 2013

Installing the Nagios Service Check Acceptor

by Alpha01

One of the cool things that Nagios supports is the ability to do passive checks. That is, instead of Nagios actively checking a client machine for errors, the client sends its check results to Nagios. This can be accomplished using the Nagios Service Check Acceptor (NSCA).

Installing the plugin is a straightforward process. The following are the steps I took to get it working under CentOS 6 (Nagios server) and CentOS 5 (client).

Install dependencies:

yum install libmcrypt libmcrypt-devel

Download the latest stable version (I tend to stick with stable versions unless it’s absolutely necessary to run development versions), then configure and compile:

wget http://prdownloads.sourceforge.net/sourceforge/nagios/nsca-2.7.2.tar.gz
tar -xvf nsca-2.7.2.tar.gz
cd nsca-2.7.2
./configure
[...]
*** Configuration summary for nsca 2.7.2 07-03-2007 ***:

 General Options:
 -------------------------
 NSCA port:  5667
 NSCA user:  nagios
 NSCA group: nagios

make all

Copy the sample xinetd config file and the nsca.cfg file:

cp sample-config/nsca.cfg /usr/local/nagios/etc/
cp sample-config/nsca.xinetd /etc/xinetd.d/nsca

Update /etc/xinetd.d/nsca (where 10.10.1.20 is the IP of the client that will be passively checked) and make sure the service is enabled:

# default: on
# description: NSCA
service nsca
{
        flags           = REUSE
        socket_type     = stream
        wait            = no
        user            = nagios
        group           = nagcmd
        server          = /usr/local/nagios/bin/nsca
        server_args     = -c /usr/local/nagios/etc/nsca.cfg --inetd
        log_on_failure  += USERID
        disable         = no
        only_from       = 10.10.1.20
}

Restart xinetd:

service xinetd restart

Verify xinetd is running:

netstat -anp|grep 5667
tcp        0      0 :::5667                     :::*                        LISTEN      30008/xinetd 

Add a firewall rule:

iptables -A INPUT -p tcp -m tcp --dport 5667 -s 10.10.1.20 -j ACCEPT
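On CentOS, the rule can be saved so it survives a reboot (assuming the stock iptables init script is in use):

service iptables save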

Finally, set the password and update the decryption method in /usr/local/nagios/etc/nsca.cfg, and update the permissions so no one else can read the password:

chmod 400 /usr/local/nagios/etc/nsca.cfg
chown nagios.nagios /usr/local/nagios/etc/nsca.cfg

Now let’s configure the client machine. The same dependencies also need to be installed on the client system. I also went ahead and downloaded and compiled nsca. (In theory I could have just copied over the send_nsca binary that was compiled on the Nagios server, since both are x64 Linux systems.) Once compiled, copy the send_nsca binary and update its permissions:

cp src/send_nsca /usr/local/nagios/bin/
chown nagios.nagios /usr/local/nagios/bin/send_nsca
chmod 4710 /usr/local/nagios/bin/send_nsca

Copy the sample send_nsca.cfg config file and update the encryption settings; these must match the settings on the nsca server:

cp sample-config/send_nsca.cfg /usr/local/nagios/etc/

Finally, update the permissions so no one can read the password.

chown nagios.nagios /usr/local/nagios/etc/send_nsca.cfg 
chmod 400 /usr/local/nagios/etc/send_nsca.cfg 

Now you can use the following script to test the setup.

#!/bin/bash
# Send a test passive check result to the Nagios server.
# Format: host;service;return_code;plugin_output
NAGIOS_SERVER="nagios-server-ip-here"   # replace with your Nagios server's address
CFG="/usr/local/nagios/etc/send_nsca.cfg"
CMD="rubyninja;test;3;UNKNOWN - just an nsca test"

/bin/echo "$CMD" | /usr/local/nagios/bin/send_nsca -H "$NAGIOS_SERVER" -d ';' -c "$CFG"

In my case:

[root@rubyninja ~]# su - nagios -c 'bash /usr/local/nagios/libexec/test_nsca'
1 data packet(s) sent to host successfully.

The server successfully received the passive check:

Feb 22 20:46:39 monitor nagios: Warning:  Passive check result was received for service 'test' on host 'rubyninja', but the service could not be found!
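That warning is expected until a matching service is defined on the Nagios server. A minimal sketch of a passive-only service definition (the generic-service template and the check_dummy fallback command are assumptions from a typical setup):

define service {
        use                      generic-service
        host_name                rubyninja
        service_description      test
        check_command            check_dummy!0
        active_checks_enabled    0
        passive_checks_enabled   1
}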

One last thing: the only problem I ran into was getting xinetd to properly load the newly available nsca service.

xinetd[3499]: Started working: 0 available services
nsca[3615]: Handling the connection...
nsca[3615]: Could not send init packet to client

Fix: This was because the sample nsca.xinetd file had nagios as the group setting. I simply had to update it to nagcmd. I suspect this is because of the permissions set on the Nagios command file nagios.cmd, which is the interface for external commands sent to the Nagios server.

Tags: [ nagios centos ]