Problems installing Chrome on OpenSuSE 13.1

Error:

linux-5n99:/home/tony/Downloads # rpm -ivh google-chrome-stable_current_x86_64.rpm
warning: google-chrome-stable_current_x86_64.rpm: Header V4 DSA/SHA1 Signature, key ID 7fac5991: NOKEY
error: Failed dependencies:
lsb >= 4.0 is needed by google-chrome-stable-31.0.1650.63-1.x86_64

Fix:

linux-5n99:/home/tony/Downloads # yast --install lsb
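The NOKEY warning is separate from the dependency error; it can be cleared by importing Google's Linux package signing key before installing:

rpm --import https://dl.google.com/linux/linux_signing_key.pub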


Password protecting single user mode

I was surprised to find out how easy it is to password protect runlevel 1, aka single user mode, in RHEL/CentOS.

Simply update the SINGLE variable in the file /etc/sysconfig/init

SINGLE=/sbin/sulogin
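If you'd rather script the change, a one-liner along these lines should work (assuming the stock /etc/sysconfig/init that ships with RHEL/CentOS 6):

sed -i 's|^SINGLE=.*|SINGLE=/sbin/sulogin|' /etc/sysconfig/init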

Single User mode password protected

If the root password cannot be retrieved or reset, then the only remaining option is to boot into a rescue environment, assuming disk encryption hasn't been enabled.

Password protecting GRUB in RHEL/CentOS

Specifying a password to modify GRUB during the boot start-up phase can be set during the install, but it can also be added or modified manually after the installation.

Using the grub-md5-crypt utility, you can generate an MD5-hashed password (some security being better than no security).

[root@server ~]# grub-md5-crypt
Password:
Retype password:
$1$/dvPV1$ngGsOO21eHj2lzEk7wg9d0

Now it's just a matter of adding the following entry to /boot/grub/grub.conf:

password --md5 $1$/dvPV1$ngGsOO21eHj2lzEk7wg9d0
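For context, the password directive goes in the global section of grub.conf, above the first title entry. A minimal sketch (the title and kernel/initrd paths are illustrative, not from my actual config):

default=0
timeout=5
password --md5 $1$/dvPV1$ngGsOO21eHj2lzEk7wg9d0
title CentOS (2.6.32-358.el6.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-358.el6.x86_64 ro root=/dev/mapper/vg_root
        initrd /initramfs-2.6.32-358.el6.x86_64.img

With this in place, GRUB will refuse interactive edits (the 'e' and 'c' commands) until 'p' is pressed and the password is entered.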

Restart, and voilà.


Varnish WordPress Performance Testing

Thanks to my new job, I've been working a lot with Varnish. Man, Varnish is one kick-ass HTTP web accelerator! A few months back I ran a few Apache performance tests on my WordPress site with different layers of caching enabled:
https://www.rubysecurity.org/php_xcache
https://www.rubysecurity.org/apache_stress-testing

So now I wanted to see how the results may differ using Varnish.

Configuration:
At a bare minimum, Varnish needs to be configured to remove the cookies set by WordPress in order to make the content cacheable.

sub vcl_recv {
  # Drop any cookies sent to WordPress.
  if (!(req.url ~ "wp-(login|admin)") && req.http.host ~ "rubyninja.org") {
    unset req.http.cookie;
  }
}

sub vcl_fetch {
  # Drop any cookies WordPress tries to send back to the client.
  if (!(req.url ~ "wp-(login|admin)") && req.http.host ~ "rubyninja.org") {
    unset beresp.http.set-cookie;
  }
}

With this configuration enabled, I ran the identical ab tests I previously used to benchmark my server.

[root@server ~]# ab -n 1000 -c 5 http://www.rubyninja.org/index.php
This is ApacheBench, Version 2.0.40-dev <$Revision: 1.146 $> apache-2.0
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Copyright 2006 The Apache Software Foundation, http://www.apache.org/

Benchmarking www.rubyninja.org (be patient)
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Completed 500 requests
Completed 600 requests
Completed 700 requests
Completed 800 requests
Completed 900 requests
Finished 1000 requests

Server Software: Apache/2.2.15
Server Hostname: www.rubyninja.org
Server Port: 80

Document Path: /index.php
Document Length: 0 bytes

Concurrency Level: 5
Time taken for tests: 39.528015 seconds
Complete requests: 1000
Failed requests: 0
Write errors: 0
Non-2xx responses: 1001
Total transferred: 374139 bytes
HTML transferred: 0 bytes
Requests per second: 25.30 [#/sec] (mean)
Time per request: 197.640 [ms] (mean)
Time per request: 39.528 [ms] (mean, across all concurrent requests)
Transfer rate: 9.23 [Kbytes/sec] received

Connection Times (ms)
min mean[+/-sd] median max
Connect: 87 100 134.1 93 3092
Processing: 88 95 10.7 91 220
Waiting: 87 94 10.6 90 218
Total: 177 196 134.4 186 3183

Percentage of the requests served within a certain time (ms)
50% 186
66% 192
75% 196
80% 199
90% 207
95% 217
98% 234
99% 251
100% 3183 (longest request)

The requests per second handled by the web server weren't much different from what I got with the caching layers I already had enabled: the WordPress W3 Total Cache plugin (with Page, Database, Object, and Browser caching backed by APC) plus mod_pagespeed.

However, the huge difference is that none of the requests ever reached the Apache backend: Varnish cached the content and served every request directly. Varnish is just fucking awesome.
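One way to verify that the backend stayed idle is to compare Varnish's hit/miss counters before and after the run, or simply watch Apache's access log sit still during the test. The counter names below are Varnish 3's (newer releases prefix them with MAIN.), and the log path assumes a RHEL-style Apache:

varnishstat -1 | egrep "cache_hit|cache_miss"
tail -f /var/log/httpd/access_log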


Leaving Gmail and Google Apps: Part I

Since I'm paying for essentially unmanaged dedicated hosting so I can run my own mail server, I opted to consolidate my personal web applications onto the same physical box. This is why I created a KVM guest that is solely used for my web traffic. One of the main challenges I face is that I only have one public IP address. This means that all of my KVM guests have been configured using the default NAT networking.

For all HTTP traffic I'm using Varnish as the proxy and caching server, and for HTTPS traffic I'm using Nginx.
[Diagram: HTTP/HTTPS proxy architecture]

The first thing that broke under this new architecture was the mod_access IP restrictions I previously had in place on my sites. This is because the Apache backend sees all requests as originating from the Varnish and Nginx proxies. Luckily, both Varnish and Nginx have really simple access control mechanisms built in.

For example, in Varnish I can create a list of IPs that I can use to either block or grant access to certain URLs.

acl admin {
  "localhost";
  "MyPublicIPAddress";
}

sub vcl_recv {
  # Only allow access to the admin ACL
  if (req.url ~ "^/secureshit" && req.http.host ~ "rubysecurity.org") {
    if (client.ip ~ admin) {
      return(pass);
    } else {
      error 403 "Not allowed in admin area.";
    }
  }
}

The equivalent ACL in Nginx:

location /secureshit {
  allow MyPublicIPAddress;
  deny all;
  proxy_pass https://www.rubysecurity.org/secureshit;
}
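One side effect of this setup is that Apache's access logs now show the proxy's address for every request. If the backend still needs real client IPs, the usual fix is an X-Forwarded-For header; a minimal sketch in Varnish 3 style VCL:

sub vcl_recv {
  # Hand the real client IP to the Apache backend.
  remove req.http.X-Forwarded-For;
  set req.http.X-Forwarded-For = client.ip;
}

Apache's LogFormat then needs %{X-Forwarded-For}i in place of %h to actually log it.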


Leaving Gmail and Google Apps

Ever since finding out about Google's involvement in PRISM a few months back, I've been wanting to completely ditch their Gmail and Google Docs services for good. Having used those services for such a long time, and more importantly them being free (as in beer), deciding which new platform to use as the replacement was my first challenge. All email services will be managed solely by me, so my first task was to find a reliable and, perhaps more importantly, really cheap unmanaged dedicated hosting provider.

I opted to go with OVH. Having first heard of OVH in a Linux Journal advertisement, I became completely sold on their $29.99 a month dedicated hosting offering. Even better, OVH is not an American company, and they themselves have dealt with plenty of scrutiny for hosting WikiLeaks. The really cheap $29.99 dedicated hosting plan does have a catch: the hardware is not enterprise-quality server hardware, but rather desktop hardware. The disk is not RAIDed, and the memory is non-ECC. Since I don't expect my server to be under much heavy load, I really don't see this as a problem. Additionally, using my home Nagios monitoring server and the help of NRPE, I'm going to monitor just about everything on the dedicated machine.

My current hypothetical architecture:

Install and configure KVM on the system, and run three virtual machine instances.
The host machine will have the following:

  • Varnish, to proxy all HTTP traffic.
  • Nginx, to proxy all HTTPS traffic.
  • Proxy POP/IMAP mail traffic using iptables; if this doesn't work as I expect, I might look into using HAProxy instead (see the sketch after this list).

1). VM 1: http

  • Apache with PHP/Ruby; all of my web apps will be running on this VM (including this site itself).

2). VM 2: database

  • MySQL
  • PostgreSQL

3). VM 3: email

  • Postfix for sending mail
  • Dovecot for receiving mail
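For the mail proxying idea mentioned above, a rough sketch of the iptables approach (the mail VM address 192.168.122.4 is hypothetical; libvirt's default NAT network is 192.168.122.0/24):

# Forward POP3/IMAP from the host's public interface to the mail VM,
# and allow the forwarded traffic through.
iptables -t nat -A PREROUTING -p tcp -m multiport --dports 110,143 -j DNAT --to-destination 192.168.122.4
iptables -A FORWARD -p tcp -d 192.168.122.4 -m multiport --dports 110,143 -j ACCEPT

The same pattern with ports 995 and 993 would cover the SSL variants.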

Enabling SMART on a hard drive

Error:

[root@server ~]# smartctl -H /dev/sdb
smartctl 5.43 2012-06-30 r3573 [x86_64-linux-2.6.32-358.23.2.el6.x86_64] (local build)
Copyright (C) 2002-12 by Bruce Allen, http://smartmontools.sourceforge.net

SMART Disabled. Use option -s with argument 'on' to enable it.

Fix:

[root@server ~]# smartctl -s on /dev/sdb
smartctl 5.43 2012-06-30 r3573 [x86_64-linux-2.6.32-358.23.2.el6.x86_64] (local build)
Copyright (C) 2002-12 by Bruce Allen, http://smartmontools.sourceforge.net

=== START OF ENABLE/DISABLE COMMANDS SECTION ===
SMART Enabled.
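To keep the drive monitored going forward, smartd can watch it; a line like the following in /etc/smartd.conf should do it (-a turns on the default set of checks):

/dev/sdb -a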


Nuking GPT partition table

Error:

WARNING: GPT (GUID Partition Table) detected on '/dev/sdb'! The util fdisk doesn't support GPT. Use GNU Parted.

Fix:

parted /dev/sdb
mklabel msdos
quit
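The same thing can be done non-interactively with parted's -s (script) flag. Either way, writing a new label destroys the existing partition table, so make sure /dev/sdb is the right disk:

parted -s /dev/sdb mklabel msdos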


Black background in all desktops after Ubuntu 13.10 upgrade

So I just upgraded my Dell XPS 13 laptop from Ubuntu 13.04 to 13.10, and immediately the first thing I noticed was that all of my desktops had a black background, and manually changing the background wallpaper had no effect. It turns out this is a common problem. In my case it was related to GNOME, which I found rather interesting given that a GNOME-specific setting can cause this in Unity.
Fix:

gsettings set org.gnome.settings-daemon.plugins.background active true

Reference: http://askubuntu.com/questions/287571/desktop-shows-a-white-or-black-bac...


Monitoring TFTPd server

So I just spent the last two hours of my life trying to figure out why PXE booting was not working on my home network. It turned out the root cause was completely my fault, since I forgot to add a firewall rule on my DHCP/PXE server to allow incoming UDP connections on port 69.

Fix:

iptables -A INPUT -p udp -m udp --dport 69 -j ACCEPT

As with just about any other service, this one can be monitored using Nagios. Originally, I had problems using the check_tftp.pl and check_tftp plugins available on the Nagios Exchange, mainly because of the way my machines are set up.

check_tftp: This plugin was useless in my environment because all it does is send a status command to the TFTP server. Since I'm using the BSD tftp client, a status command sent to any host will always report it as connected, regardless.
http://exchange.nagios.org/directory/Plugins/Network-Protocols/TFTP/chec...

check_tftp.pl: This plugin also didn't work in my environment, mainly because it uses Net::TFTP. Unlike the tftp client application, Net::TFTP does not support specifying a custom reverse connection port (or port range). By default, when a client connects, the TFTP server dynamically chooses a random non-standard port to connect back to the client machine and proceed with the download. My Nagios machine (like all of my machines) is set to drop all incoming packets except on specific ports and for related/established connections.
http://exchange.nagios.org/directory/Plugins/Network-Protocols/TFTP/chec...

So I wrote a simple Nagios plugin that monitors TFTP. All it does is download a non-empty test file (test.txt in my case).

#!/usr/bin/perl -w

# Tony Baltazar. root[@]rubyninja.org

use strict;
use Getopt::Long;

# Parse command-line options; --rport is optional, the rest are required.
my %options;
GetOptions(\%options, "host|H:s", "port|p:i", "rport|R:s", "file|f:s", "help|h");


if ($options{help}) {
	usage();
	exit 0;
} elsif ($options{host} && $options{port} && $options{file}) {
	# Work out of /tmp so the downloaded test file is easy to clean up.
	chdir('/tmp');

	# Build the tftp command; -R pins the local (reverse) port so the
	# firewall rule on the Nagios box can match it.
	my $cmd_str = ( $options{rport} ? "/usr/bin/tftp -R $options{rport}:$options{rport} $options{host} $options{port} -c get $options{file}" : "/usr/bin/tftp $options{host} $options{port} -c get $options{file}");

	my $cmd = `$cmd_str`;
	if ($? != 0) {
		# tftp itself failed (e.g. the transfer timed out).
		print "CRITICAL: $cmd";
		system("rm -f /tmp/$options{file}");
		exit 2;
	} else {
		if ( -s "/tmp/$options{file}" ) {
			# The file downloaded and is non-empty.
			print "TFTP is ok.\n$cmd";
			system("rm -f /tmp/$options{file}");
			exit 0;
		} else {
			# tftp exited cleanly but the file is missing or empty
			# (e.g. "File not found" errors).
			print "WARNING: $cmd";
			system("rm -f /tmp/$options{file}");
			exit 1;
		}
	}

} else {
	usage();
}



sub usage {
print <<EOF;
Usage: $0 [--host=<host> --port=<port> --file=<file>]

   --host | -H  : TFTP server.
   --port | -p  : TFTP Port.
   --file | -f  : Test file that will be downloaded.
   --help | -h  : This help message.

Optionally,
   --rport | -R : Explicitly force the reverse originating connection's port.

EOF
}

https://github.com/alpha01/SysAdmin-Scripts/blob/master/nagios-plugins/c...

Seeing the plugin in action, assuming UDP port 1069 is open so the TFTP server (192.168.1.2) can connect back to the Nagios monitoring machine:

[root@server libexec]# iptables -L -n | grep "Chain INPUT"
Chain INPUT (policy DROP)
[root@server libexec]# iptables-save | grep 1069
-A INPUT -s 192.168.1.2/32 -p udp -m udp --dport 1069 -j ACCEPT

Firewall not allowing TFTP to connect back using port 1066.

[root@server libexec]# su - nagios -c '/usr/local/nagios/libexec/check_tftp.pl -H 192.168.1.2 -p 69 -R 1066 -f test.txt'
CRITICAL: Transfer timed out.

Downloading a non-existing file from the TFTP server.

[root@server tmp]# su - nagios -c '/usr/local/nagios/libexec/check_tftp.pl -H 192.168.1.2 -p 69 -R 1069 -f test.txtFAKESHIT'
WARNING: Error code 1: File not found

Successful connection and transfer.

[root@server tmp]# su - nagios -c '/usr/local/nagios/libexec/check_tftp.pl -H 192.168.1.2 -p 69 -R 1069 -f test.txt'
TFTP is ok.
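To wire the plugin into Nagios, the command and service definitions would look something like this (the object names here are illustrative, not from my actual config):

define command {
    command_name  check_tftp_get
    command_line  $USER1$/check_tftp.pl -H $HOSTADDRESS$ -p 69 -R 1069 -f test.txt
}

define service {
    use                   generic-service
    host_name             pxeserver
    service_description   TFTP
    check_command         check_tftp_get
}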

