rubysecurity.org

Cloud Architect / DevOps Engineer / SRE / Developer | /root

Latest Posts


March 14, 2013

ZFS on Linux: Installation

by Alpha01

Attending the ZFS Administration talk at SCALE 11x a couple of weeks ago got me interested in trying ZFS on Linux. The speaker mentioned that he runs ZFS on Linux on his production machines, which made me think that ZFS on Linux may finally be ready for everyday use. So I'm currently looking into using the ZFS snapshot feature for my personal local file backups.

For my Linux ZFS backup server, I’m using the latest CentOS 6. Below are the steps I took to get ZFS on Linux working.

yum install automake make gcc kernel-devel kernel-headers zlib zlib-devel libuuid libuuid-devel

Since the ZFS modules get built using DKMS, the latest dkms package is needed. It can be downloaded from Dell's website at http://linux.dell.com/dkms/

wget http://linux.dell.com/dkms/permalink/dkms-2.2.0.3-1.noarch.rpm
rpm -ivh dkms-2.2.0.3-1.noarch.rpm
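To double check that DKMS is ready to go, its status can be queried; the output will simply be empty until the spl and zfs modules get registered:

dkms status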

Now, the spl-modules-dkms-X rpms need to be installed.

wget http://archive.zfsonlinux.org/downloads/zfsonlinux/spl/spl-0.6.0-rc14.src.rpm
wget http://archive.zfsonlinux.org/downloads/zfsonlinux/spl/spl-modules-0.6.0-rc14.src.rpm
wget http://archive.zfsonlinux.org/downloads/zfsonlinux/spl/spl-modules-dkms-0.6.0-rc14.noarch.rpm
rpm -ivh spl*.rpm

After the spl-modules-dkms-X rpms have been installed, the ZFS rpm packages can finally be installed.

wget http://archive.zfsonlinux.org/downloads/zfsonlinux/zfs/zfs-0.6.0-rc14.src.rpm
wget http://archive.zfsonlinux.org/downloads/zfsonlinux/zfs/zfs-modules-0.6.0-rc14.src.rpm
wget http://archive.zfsonlinux.org/downloads/zfsonlinux/zfs/zfs-modules-dkms-0.6.0-rc14.noarch.rpm
rpm -ivh zfs*.rpm

One thing that confused me was that after all of the rpms were installed, the zfs and zpool binaries were nowhere on my system. According to the documentation, the zfs-* rpm process should have built the kernel modules and installed them against my running kernel, however this didn't look to be the case. Instead I had to do the following:

cd /usr/src/zfs-0.6.0
make
make install

After the install completed, both the zfs and zpool utilities were available and ready to use.
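From here, setting up a pool and taking snapshots for backups is straightforward. Something along these lines should work; the device and dataset names below are just placeholders:

zpool create backup /dev/sdb
zfs create backup/documents
zfs snapshot backup/documents@2013-03-14
zfs list -t snapshot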


Tags: [ zfs centos ]
February 23, 2013

Installing the Nagios Service Check Acceptor

by Alpha01

One of the cool things that Nagios supports is the ability to do passive checks. That is, instead of Nagios actively checking a client machine for errors, the client is able to send error notifications to Nagios. This can be accomplished using the Nagios Service Check Acceptor (NSCA).

Installing the plugin is a straightforward process. The following are the steps I took to get it working under CentOS 6 (Nagios server) and CentOS 5 (client).

Install dependencies:

yum install libmcrypt libmcrypt-devel

Download the latest stable version (I tend to stick with stable versions unless it's absolutely necessary to run a development version), then configure and compile.

wget http://prdownloads.sourceforge.net/sourceforge/nagios/nsca-2.7.2.tar.gz
tar -xvf nsca-2.7.2.tar.gz
cd nsca-2.7.2
./configure
[...]
*** Configuration summary for nsca 2.7.2 07-03-2007 ***:

 General Options:
 -------------------------
 NSCA port:  5667
 NSCA user:  nagios
 NSCA group: nagios

make all

Copy the sample xinetd config file and the nsca.cfg file.

cp sample-config/nsca.cfg /usr/local/nagios/etc/
cp sample-config/nsca.xinetd /etc/xinetd.d/nsca

Update /etc/xinetd.d/nsca (where 10.10.1.20 is the client IP that will be passively checked):

# default: on
# description: NSCA
service nsca
{
        flags           = REUSE
        socket_type     = stream
        wait            = no
        user            = nagios
        group           = nagcmd
        server          = /usr/local/nagios/bin/nsca
        server_args     = -c /usr/local/nagios/etc/nsca.cfg --inetd
        log_on_failure  += USERID
        disable         = no
        only_from       = 10.10.1.20
}

Restart xinet.d

service xinetd restart

Verify xinetd is running

netstat -anp|grep 5667
tcp        0      0 :::5667                     :::*                        LISTEN      30008/xinetd 

Add firewall rule

iptables -A INPUT -p tcp -m tcp --dport 5667 -s 10.10.1.20 -j ACCEPT

Finally, set the password and update the decryption type in /usr/local/nagios/etc/nsca.cfg, and update the permissions so no one else can read the password.

chmod 400 /usr/local/nagios/etc/nsca.cfg
chown nagios.nagios /usr/local/nagios/etc/nsca.cfg
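For reference, the relevant directives in nsca.cfg look something like the following (the password and method shown are only placeholders):

# /usr/local/nagios/etc/nsca.cfg
password=SuperSecretPassword
decryption_method=3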

Now let's configure the client machine. The same dependencies also need to be installed on the client system. I also went ahead and downloaded and compiled nsca. (In theory I could have just copied over the send_nsca binary that was compiled on the Nagios server, since both are x64 Linux systems.) Once compiled, copy the send_nsca binary and update its permissions.

cp src/send_nsca /usr/local/nagios/bin/
chown nagios.nagios /usr/local/nagios/bin/send_nsca
chmod 4710 /usr/local/nagios/bin/send_nsca

Copy the sample send_nsca.cfg config file and update the encryption settings; these must match the ones on the NSCA server.

cp sample-config/send_nsca.cfg /usr/local/nagios/etc/
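The client-side settings mirror the server's (again, placeholder values):

# /usr/local/nagios/etc/send_nsca.cfg
password=SuperSecretPassword
encryption_method=3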

Finally, update the permissions so no one can read the password.

chown nagios.nagios /usr/local/nagios/etc/send_nsca.cfg 
chmod 400 /usr/local/nagios/etc/send_nsca.cfg 

Now you can use the following script to test the settings.

#!/bin/bash
# Set this to your Nagios server's address before running.
NAGIOS_SERVER="nagios.example.com"
CFG="/usr/local/nagios/etc/send_nsca.cfg"
# Format: host;service;return_code;plugin_output
CMD="rubyninja;test;3;UNKNOWN - just an nsca test"

/bin/echo "$CMD" | /usr/local/nagios/bin/send_nsca -H "$NAGIOS_SERVER" -d ';' -c "$CFG"

In my case:

[root@rubyninja ~]# su - nagios -c 'bash /usr/local/nagios/libexec/test_nsca'
1 data packet(s) sent to host successfully.

Server successfully received the passive check.

Feb 22 20:46:39 monitor nagios: Warning:  Passive check result was received for service 'test' on host 'rubyninja', but the service could not be found!
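That warning is expected until a matching passive service is defined on the Nagios server. A minimal sketch of such a definition, assuming a generic-service template and a check_dummy command already exist, would look something like this:

define service {
        use                     generic-service
        host_name               rubyninja
        service_description     test
        active_checks_enabled   0
        passive_checks_enabled  1
        check_command           check_dummy!3
}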

Lastly, the only problem I ran into was getting xinetd to load the newly available nsca service properly.

xinetd[3499]: Started working: 0 available services
nsca[3615]: Handling the connection...
nsca[3615]: Could not send init packet to client

Fix: This was because the sample nsca.xinetd file had nagios as the group setting. I simply had to update it to nagcmd. I suspect this is because of the permissions set on the Nagios command file nagios.cmd, which is the interface for external commands sent to the Nagios server.

Tags: [ nagios centos ]
February 13, 2013

Logging iptables rules

by Alpha01

When debugging certain custom firewall rules, it can sometimes be extremely useful to log the rule's activity. For example, the following rule logs all inbound firewall activity. The logs will be available via dmesg or syslog.

iptables -A INPUT -j LOG --log-prefix " iptables INPUT "
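In practice it's usually better to log selectively and rate-limit the entries so the kernel log doesn't get flooded. Something like the following works; the port and prefix are just an illustration:

iptables -A INPUT -p tcp --dport 22 -m limit --limit 5/min -j LOG --log-prefix " iptables SSH "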
Tags: [ iptables networking ]
February 12, 2013

Custom Nagios mdadm monitoring: check_mdadm-raid

by Alpha01

A simple Nagios plugin for monitoring mdadm RAID arrays.

#!/usr/bin/env ruby

# Tony Baltazar, Feb 2012. root[@]rubyninja.org

OK = 0
WARNING = 1
CRITICAL = 2
UNKNOWN = 3

# Note to self, mdadm exit status:
#0 The array is functioning normally.
#1 The array has at least one failed device.
#2 The array has multiple failed devices such that it is unusable.
#4 There was an error while trying to get information about the device.

raid_device = '/dev/md0'

# Grab the "State :" line from mdadm's detailed output.
get_raid_output = %x[sudo mdadm --detail #{raid_device}].lines.to_a
state_line = get_raid_output.grep(/\sState\s:\s/).first
raid_state = state_line.to_s.split(':', 2)[1].to_s.strip

if raid_state.empty?
 print "Unable to get RAID status!"
 exit UNKNOWN
end

if /^(clean(, checking)?|active)$/.match(raid_state) 
 print "RAID OK: #{raid_state}"
 exit OK
elsif /degraded/i.match(raid_state)
 print "WARNING RAID: #{raid_state}"
 exit WARNING
elsif /fail/i.match(raid_state)
 print "CRITICAL RAID: #{raid_state}"
 exit CRITICAL
else
 print "UNKNOWN RAID detected: #{raid_state}"
 exit UNKNOWN
end
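Since the plugin shells out to sudo mdadm, the nagios user also needs a matching sudoers entry; something along these lines should do (the mdadm path and device may differ on your system):

# via visudo
Defaults:nagios !requiretty
nagios ALL=(root) NOPASSWD: /sbin/mdadm --detail /dev/md0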
Tags: [ nagios ruby ]
February 6, 2013

Monitoring computer's temperature with lm_sensors

by Alpha01

One of the primary reasons I use SSD drives in both of the Mac Minis that I use as hypervisors (besides speed) is that, compared to regular hard drives, SSDs consume far less power and, more importantly, generate less heat. Before I put SSD drives in these machines, the fan noise both of them made during the middle of summer was pretty noticeable compared to any other time of the year.

Although at the time I did little research into proactively monitoring the temperature of my machines, thanks to the Nagios book that I'm currently reading I've now learned about lm-sensors, a tool for monitoring hardware temperatures in Linux.

Installing lm-sensors on Ubuntu Server 12.04 is really simple.

sudo apt-get install libsensors4 libsensors4-dev lm-sensors

Since lm-sensors requires low-level hooks to monitor hardware temperatures, it comes with the utility sensors-detect, which can be used to automatically detect and load the appropriate kernel modules for lm-sensors to work on the respective piece of hardware.

tony@mini02:~$ sudo sensors-detect 
# sensors-detect revision 5984 (2011-07-10 21:22:53 +0200)
# System: Apple Inc. Macmini5,1
# Board: Apple Inc. Mac-8ED6AF5B48C039E1

This program will help you determine which kernel modules you need
to load to use lm_sensors most effectively. It is generally safe
and recommended to accept the default answers to all questions,
unless you know what you're doing.

Some south bridges, CPUs or memory controllers contain embedded sensors.
Do you want to scan for them? This is totally safe. (YES/no): YES
[...]

In the case of my mid 2011 Apple Mac Minis, it was only able to use the coretemp module. File /etc/modules:

# Generated by sensors-detect on Sat Feb  2 21:22:20 2013
# Chip drivers
coretemp

After the module has been added, it's just a matter of loading the newly configured modules.

[....]
Do you want to add these lines automatically to /etc/modules? (yes/NO)yes
Successful!

Monitoring programs won't work until the needed modules are
loaded. You may want to run 'service module-init-tools start'
to load them.

Unloading i2c-dev... OK
Unloading i2c-i801... OK
Unloading cpuid... OK

tony@mini02:~$ sudo service module-init-tools start
module-init-tools stop/waiting
tony@mini02:~$ 

Now that the appropriate kernel modules have been loaded, I have everything needed to check the temperature.

tony@mini02:~$ sensors
coretemp-isa-0000
Adapter: ISA adapter
Physical id 0:  +49.0°C  (high = +86.0°C, crit = +100.0°C)
Core 0:         +48.0°C  (high = +86.0°C, crit = +100.0°C)
Core 1:         +50.0°C  (high = +86.0°C, crit = +100.0°C)

applesmc-isa-0300
Adapter: ISA adapter
Exhaust  :   1801 RPM  (min = 1800 RPM)
TA0P:         +36.0°C  
TA0p:         +36.0°C  
TA1P:         +34.8°C  
TA1p:         +34.8°C  
TC0C:         +47.0°C  
TC0D:         +44.8°C  
TC0E:         +57.5°C  
TC0F:         +58.5°C  
TC0G:         +94.0°C  
TC0J:          +0.8°C  
TC0P:         +42.5°C  
TC0c:         +47.0°C  
TC0d:         +44.8°C  
TC0p:         +42.5°C  
TC1C:         +50.0°C  
TC1c:         +50.0°C  
TCFC:          +0.2°C  
TCGC:         +49.0°C  
TCGc:         +49.0°C  
TCPG:         +98.0°C  
TCSC:         +50.0°C  
TCSc:         +50.0°C  
TCTD:        +255.5°C  
TCXC:         +49.5°C  

Of course, I just had to write a Nagios plugin to monitor them:

#!/usr/bin/env perl
use strict;
# Tony Baltazar. root[@]rubyninja.org

use constant OK => 0;
use constant WARNING => 1;
use constant CRITICAL => 2;
use constant UNKNOWN => 3;

my %THRESHOLDS = (OK => 70, WARNING => 75, CRITICAL => 86);

# Sample output
#Physical id 0:  +55.0°C  (high = +86.0°C, crit = +100.0°C)
#Core 0:         +54.0°C  (high = +86.0°C, crit = +100.0°C)
#Core 1:         +55.0°C  (high = +86.0°C, crit = +100.0°C)
my @get_current_heat = split "\n", `sensors 2>/dev/null|grep -E -e '(Physical id 0|Core [0-1])'`;


my $counter = 0;
my $output_string = '';

for my $heat_usage_per_core (@get_current_heat) {
    $heat_usage_per_core =~ /(.*):\s+\+([0-9]{1,3})/;
    my $core = $1;
    my $temp = $2;


    # Check the most severe threshold first, so a temperature that falls in the
    # gap between the OK and WARNING thresholds is still reported as OK.
    if ($temp >= $THRESHOLDS{CRITICAL}) {
        print "CRITICAL! $core temperature: $temp\n";
        exit(CRITICAL);
    } elsif ($temp >= $THRESHOLDS{WARNING}) {
        print "WARNING! $core temperature: $temp\n";
        exit(WARNING);
    } else {
        $output_string .= "$core - temperature : $temp" . 'C | ';
        $counter++;
    }
}

if ($counter > 0 && $counter == scalar(@get_current_heat)) {
    print $output_string;
    exit(OK);
} else {
    print "Unable to get all CPU temperatures.\n";
    exit(UNKNOWN);
}
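To wire the plugin into Nagios, a command definition along these lines would work (the plugin path and file name are just placeholders):

define command {
        command_name    check_cpu_temp
        command_line    /usr/local/nagios/libexec/check_cpu_temp.pl
}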
Tags: [ perl ubuntu monitoring nagios ]