Nuking GPT partition table

Error:

WARNING: GPT (GUID Partition Table) detected on '/dev/sdb'! The util fdisk doesn't support GPT. Use GNU Parted.

Fix:

parted /dev/sdb
mklabel msdos
quit
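
Note that mklabel destroys the existing partition table, so anything on the disk should be considered gone. To confirm the disk now carries an msdos label instead of GPT, printing the partition table should do it:

parted /dev/sdb print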


Black background in all desktops after Ubuntu 13.10 upgrade

So I just upgraded my Dell XPS 13 laptop from Ubuntu 13.04 to 13.10, and immediately the first thing I noticed was that all of my desktops had a black background, and manually changing the background wallpaper had no effect. Turns out this is a common problem. In my case it turned out to be related to GNOME, which I found rather interesting given that a GNOME-specific setting would cause this in Unity.
Fix:

gsettings set org.gnome.settings-daemon.plugins.background active true
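
If you want to sanity-check the key before (or after) flipping it, the same setting can be read back with gsettings:

gsettings get org.gnome.settings-daemon.plugins.background active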

Reference: http://askubuntu.com/questions/287571/desktop-shows-a-white-or-black-bac...


Monitoring TFTPd server

So I just spent the last two hours of my life trying to figure out why PXE booting was not working on my home network. Turns out the root cause was completely my fault, since I forgot to add a firewall rule on my DHCP/PXE server to allow incoming UDP connections on port 69.

Fix:

iptables -A INPUT -p udp -m udp --dport 69 -j ACCEPT

As with just about any other service, this service can be monitored using Nagios. Originally, I had problems using the check_tftp.pl and check_tftp plugins that are available on the Nagios Exchange, mainly because of the way I have set up my machines.

check_tftp This plugin was useless in my environment, because all it does is send a status command to the TFTP server. Since I'm using the BSD tftp client, status commands sent to any host always show up as connected, regardless.
http://exchange.nagios.org/directory/Plugins/Network-Protocols/TFTP/chec...

check_tftp.pl This plugin did not work in my environment either, mainly because it uses Net::TFTP, which, unlike the tftp client application, does not support specifying a custom reverse connection port (or port range). By default, when a client connects to a TFTP server, the server dynamically chooses a random non-standard port to connect back to the client machine and proceed with the TFTP download. My Nagios machine (like all of my machines) is set to drop all incoming packets except those destined for specific ports and related/established connections.
http://exchange.nagios.org/directory/Plugins/Network-Protocols/TFTP/chec...

I wrote a simple Nagios plugin that monitors TFTP. All it does is download a non-empty test file called test.txt.

#!/usr/bin/perl -w

# Tony Baltazar. root[@]rubyninja.org

use strict;
use Getopt::Long;




my %options;
GetOptions(\%options, "host|H:s", "port|p:i", "rport|R:s", "file|f:s", "help|h");


if ($options{help}) {
	usage();
	exit 0;
} elsif ($options{host} && $options{port} && $options{file}) {
	# Work out of /tmp so the downloaded test file is easy to clean up afterwards.
	chdir('/tmp');

	# If --rport was given, pin the reverse (data) connection to that single port
	# so it can be explicitly allowed through the firewall.
	my $cmd_str = ( $options{rport} ?  "/usr/bin/tftp -R $options{rport}:$options{rport} $options{host} $options{port} -c get $options{file}" : "/usr/bin/tftp $options{host} $options{port} -c get $options{file}");

	# Run the tftp get; the Nagios state is keyed off the exit status and the file size.
	my $cmd = `$cmd_str`;
	if ($? != 0) {
		print "CRITICAL: $cmd";
		system("rm -f /tmp/$options{file}");
		exit 2;
	} else {
		if (! -z "/tmp/$options{file}" ) {
			print "TFTP is ok.\n$cmd";
			system("rm -f /tmp/$options{file}");
			exit 0;
		} else {
			print "WARNING: $cmd";
			system("rm -f /tmp/$options{file}");
			exit 1;
		}
	}

} else {
	usage();
}



sub usage {
print <<EOF;
Usage: $0 [--host=<host> --port=<port> --file=<file>]

   --host | -H  : TFTP server.
   --port | -p  : TFTP Port.
   --file | -f  : Test file that will be downloaded.
   --help | -h  : This help message.

Optionally,
   --rport | -R : Explicitly force the reverse originating connection's port.

EOF
}

https://github.com/alpha01/SysAdmin-Scripts/blob/master/nagios-plugins/c...

Seeing the plugin in action:
Assuming we're using UDP port 1069 to allow the TFTP server (192.168.1.2) to connect back to the Nagios monitoring machine.

[[email protected] libexec]# iptables -L -n |grep "Chain INPUT"
Chain INPUT (policy DROP)
[[email protected] libexec]# iptables-save|grep 1069
-A INPUT -s 192.168.1.2/32 -p udp -m udp --dport 1069 -j ACCEPT

Firewall not allowing TFTP to connect back using port 1066.

[[email protected] libexec]# su - nagios -c '/usr/local/nagios/libexec/check_tftp.pl -H 192.168.1.2 -p 69 -R 1066 -f test.txt'
CRITICAL: Transfer timed out.

Downloading a non-existent file from the TFTP server.

[[email protected] tmp]# su - nagios -c '/usr/local/nagios/libexec/check_tftp.pl -H 192.168.1.2 -p 69 -R 1069 -f test.txtFAKESHIT'
WARNING: Error code 1: File not found

Successful connection and transfer.

[[email protected] tmp]# su - nagios -c '/usr/local/nagios/libexec/check_tftp.pl -H 192.168.1.2 -p 69 -R 1069 -f test.txt'
TFTP is ok.
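
For completeness, hooking the plugin into Nagios is just the usual command and service definition pair. Something along these lines should do it (the object names, host name, and hard-coded port/file values are only placeholders for my setup):

define command {
    command_name    check_tftp
    command_line    $USER1$/check_tftp.pl -H $HOSTADDRESS$ -p 69 -R 1069 -f test.txt
}

define service {
    use                     generic-service
    host_name               pxe01
    service_description     TFTP
    check_command           check_tftp
}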


Chef encountered an error attempting to create the client

So I'm finally starting to keep up with modern times and have started to learn Chef more in depth. My goal is to completely automate and easily manage all of the virtual machine instances running on my home network.

Upon attempting to bootstrap my very first node, I received the following error:

ubuntu Creating a new client identity for ubuntu01 using the validator key.
ubuntu
ubuntu ===================================================================
ubuntu Chef encountered an error attempting to create the client "ubuntu01"
ubuntu ===================================================================
ubuntu
ubuntu
ubuntu Resource Not Found:
ubuntu -------------------
ubuntu The server returned a HTTP 404. This usually indicates that your chef_server_url is incorrect.
ubuntu
ubuntu
ubuntu
ubuntu Relevant Config Settings:
ubuntu -------------------------
ubuntu chef_server_url "https://chef.rubyninja.org:443"
ubuntu
ubuntu
ubuntu
ubuntu [2013-09-15T22:25:28-07:00] FATAL: Stacktrace dumped to /var/chef/cache/chef-stacktrace.out
ubuntu Chef Client failed. 0 resources updated
ubuntu [2013-09-15T22:25:28-07:00] FATAL: Chef::Exceptions::ChildConvergeError: Chef run process exited unsuccessfully (exit code 1)

This essentially means that the node is not able to communicate with the Chef server. In my case, it turned out that the ubuntu01 machine was not using my local DNS servers, so the chef.rubyninja.org lookup from the machine was failing.
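
A quick way to confirm this kind of failure is to check name resolution from the node itself before re-running the bootstrap, for example:

cat /etc/resolv.conf
nslookup chef.rubyninja.org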


Can't locate local/lib.pm CPAN error on Ubuntu 12.04

So the default Perl installation that ships with Ubuntu 12.04 LTS does not include local::lib, which is necessary if you want to use CPAN.

Error:

Can't locate local/lib.pm in @INC (@INC contains: /home/tony/perl5/lib/perl5 /etc/perl /usr/local/lib/perl/5.14.2 /usr/local/share/perl/5.14.2 /usr/lib/perl5 /usr/share/perl5 /usr/lib/perl/5.14 /usr/share/perl/5.14 /usr/local/lib/site_perl /home/tony) at /usr/share/perl/5.14/CPAN/FirstTime.pm line 1300.

Fix:

sudo apt-get install liblocal-lib-perl
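
To verify the module is now visible to Perl (just a sanity check, with no side effects):

perl -e 'require local::lib; print "local::lib found\n"'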

Reference: http://stackoverflow.com/questions/16702642/cant-locate-local-lib-pm-in-...


Reverse DNS in BIND 9.8

I use BIND on my home network, and given the vast number of virtual machines I have online, I've always found myself wanting to easily look up which machine is using which IP address without having to ssh into the actual VM or check the zone file. Configuring reverse DNS in BIND 9.8 is actually a dead simple process.
First, a separate zone file for PTR records needs to be created; I named mine db.192.168.1.255.
Note: since my network address space is 192.168.1, the PTR records use the network address written backwards, followed by in-addr.arpa.

$TTL 3h
@       IN SOA  ns1.rubyninja.org. dnsadmin.rubysecurity.org. (
                                        2013090701      ; serial
                                        3h              ; refresh after 3 hours
                                        1h              ; retry after 1 hour
                                        1w              ; expire after 1 week
                                        1H )            ; negative caching TTL of 1 hour

              IN      NS      ns1.rubyninja.org.
              IN      NS      ns2.rubyninja.org.

14.1.168.192.in-addr.arpa.      IN      PTR     email.rubyninja.org.

Lastly, the zone entry needs to be added to the master named.conf file. Mine looks like this:

zone "1.168.192.in-addr.arpa" IN {
        type master;
        file "etc/zones/db.192.168.1.255";
        allow-query { any; };
};
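
Before reloading, both the configuration and the new zone file can be checked for syntax errors; the paths below match my layout above, so adjust accordingly:

named-checkconf
named-checkzone 1.168.192.in-addr.arpa etc/zones/db.192.168.1.255
rndc reload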

After reloading BIND, you can verify that reverse DNS works by using the utility of your choice, e.g. dig, host, nslookup, etc.

alpha03:~ tony$ nslookup 192.168.1.14
Server:		192.168.1.10
Address:	192.168.1.10#53

14.1.168.192.in-addr.arpa	name = email.rubyninja.org.


Exporting data in MySQL to an XML file

So I just started reading a new MySQL administration book and am already learning cool things that I didn't know previously. One cool feature MySQL supports that I wasn't aware of is the ability to export and import data to and from XML files.

For example, the following will export the City table from the world database to an XML file.

[email protected]:~# mysql --xml -e 'SELECT * from world.City' > city.xml
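
The resulting file uses the standard mysql --xml result set layout, roughly like the sketch below (the values are just illustrative rows from the sample world database), which is also the row/field structure that LOAD XML expects by default:

<?xml version="1.0"?>
<resultset statement="SELECT * from world.City">
  <row>
	<field name="ID">1</field>
	<field name="Name">Kabul</field>
	<field name="CountryCode">AFG</field>
	<field name="District">Kabol</field>
	<field name="Population">1780000</field>
  </row>
  ...
</resultset>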

Importing the data from the XML file itself can be accomplished using the LOAD XML statement.

([email protected]) [world] LOAD XML INFILE 'city.xml' INTO TABLE City;
Query OK, 4079 rows affected, 14 warnings (0.81 sec)
Records: 4079  Deleted: 0  Skipped: 0  Warnings: 14

Additional information: http://dev.mysql.com/doc/refman/5.5/en/load-xml.html


ZFS on Linux: Kernel updates

Just as I would expect, updating the kernels of both the machine that runs VirtualBox (and its virtual machines) and the ZFS-enabled Linux virtual machine completed with absolutely no issues. I was originally more concerned about updating the host VirtualBox machine's kernel, given that I had never done this with the additional VirtualBox Extension Pack add-on installed. On the other hand, I wasn't too concerned about the ZFS kernel module, given that it was installed as a dkms kernel module rpm. Regardless of what people think about dkms modules, as a sysadmin who has worked with Linux systems that use them (proprietary ones included), it's certainly a relief knowing that little or no additional work is needed to rebuild the respective module after updating to a newer kernel.
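
As a quick sanity check after a kernel update, dkms can show whether the module was rebuilt for the running kernel (module names will vary; on my setup it's the zfs and spl dkms packages):

dkms status
uname -r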


lsof alternative in Solaris

(Warning: output is pretty ugly)

pfiles /proc/*
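
Running it against /proc/* dumps every process on the system; if you only care about a single process, pointing pfiles at one PID keeps the output somewhat manageable (the PID below is just an example):

pfiles 1234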


User creations in Solaris 11

To my surprise, Solaris 11 does not create a new user's home directory by default.

Errors:

[email protected]:~# su - testuser
su: No directory!
[email protected]:~# pwck
[....]
testuser:x:106:10::/export/home/testuser:/usr/bin/bash
Login directory not found

Fix:

[email protected]:~# useradd -m testuser
80 blocks

In the process, I learned something new about the su command. In Linux, when switching from root to a limited user, I used to do the following:

[[email protected] ~]# su tony -

What I did not know was that the above command will indeed load up tony's PATH, but it will also append root's PATH to the end of it, which is kind of scary. The command I actually wanted to use was `su - username`; luckily, this trailing-dash behavior is not supported in Solaris 11.

[email protected]:/# su testuser -
bash: /root/.bashrc: Permission denied

