
Accessing KVM Guest Using Virtual Serial Console

For the longest time after creating my KVM guest virtual machines, I only used virt-manager whenever I needed any kind of remote, non-SSH console access. It wasn't until now that I finally decided to start using the serial console feature of KVM, and I have to say, I kind of regret procrastinating on this, because the feature is really convenient.

Enabling serial console access to a guest VM is a relatively easy process.
In CentOS, it's simply a matter of adding the following kernel parameter to GRUB_CMDLINE_LINUX in /etc/default/grub:

console=ttyS0
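For example, the resulting line in /etc/default/grub could end up looking something like this (the other parameters here are placeholders; keep whatever your system already has and just append console=ttyS0):

GRUB_CMDLINE_LINUX="crashkernel=auto rhgb quiet console=ttyS0"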

Then rebuild the GRUB menu and reboot:

grub2-mkconfig -o /boot/grub2/grub.cfg

Afterwards, from the host system, you should be able to use virsh console to attach to the guest VM.
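For example (the guest name here is just a placeholder for whatever virsh list shows):

virsh console myguest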

The only caveat with connecting to a guest using the virtual serial console is exiting the console. In my case, the way to log off the console connection was the Ctrl+5 key combination. This disconnection quirk reminded me of the good old days when I actually worked on physical servers and used IPMI's serial-over-LAN feature and its associated unique key combination to properly close the serial connection.

Resources:
https://www.certdepot.net/rhel7-access-virtual-machines-console/
https://superuser.com/questions/637669/how-to-exit-a-virsh-console-connection


Log Varnish/proxy and Local Access Separately in Apache

I use Varnish on all of my websites, with Apache as the backend web server. All Varnish traffic that hits my sites originates from the internet, while all access from my local home network hits Apache directly (accomplished using local BIND authoritative servers).

For the longest time, I'd been logging all direct Apache traffic and traffic originating from Varnish to the same Apache access log file. It turns out, segmenting the access logs is a very easy task. This can be accomplished with the help of environment variables in Apache, using SetEnvIf.

For example, my Varnish server's local IP is 192.168.1.150, and SetEnvIf can use Remote_Addr (the IP address of the client making the request) as part of its condition. So in my case, I can check whether the originating request came from my Varnish server's 192.168.1.150 address and, if so, set the is_proxied environment variable. Afterwards, I can use the is_proxied environment variable to tell Apache where to log that request to.

Inside my VirtualHost directive, the log configuration looks like this:

        # Mark requests coming from the Varnish server
        SetEnvIf Remote_Addr "192.168.1.150" is_proxied=1

        ErrorLog /var/log/httpd/antoniobaltazar.com/error.log

        # Proxied (Varnish) requests use the custom "cloudflare" format; direct requests use combined
        CustomLog /var/log/httpd/antoniobaltazar.com/access.log cloudflare env=is_proxied
        CustomLog /var/log/httpd/antoniobaltazar.com/access.log combined env=!is_proxied

Unfortunately, we can't use this same technique to split the error logs, as ErrorLog does not support conditional logging via environment variables.
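A side note on the cloudflare format referenced above: it's a custom LogFormat defined elsewhere in my Apache config. If you don't already have one, a sketch along these lines should work (the name and exact fields are just an assumption; the important bit is logging X-Forwarded-For so the real client IP shows up behind the proxy):

# Hypothetical "combined"-style format that records X-Forwarded-For as the client address
LogFormat "%{X-Forwarded-For}i %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" cloudflare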


Nagios SSL Certificate Expiration Check

So, a while back I demonstrated a way to set up an automated SSL certificate expiration monitoring solution.
Well, it turns out the check_http Nagios plugin has built-in support for monitoring SSL certificate expiration as well. This is accomplished using the -C / --certificate option.

Example check on a local expired Let's Encrypt Certificate:

[[email protected] plugins]# ./check_http -t 10 -H www.rubysecurity.org -I 192.168.1.61 -C 10
SSL CRITICAL - Certificate 'www.rubysecurity.org' expired on 2018-07-25 18:39 -0700/PDT.

check_http help doc:

-C, --certificate=INTEGER[,INTEGER]
    Minimum number of days a certificate has to be valid. Port defaults to 443
    (when this option is used the URL is not checked.)

CHECK CERTIFICATE: check_http -H www.verisign.com -C 30,14

 When the certificate of 'www.verisign.com' is valid for more than 30 days,
 a STATE_OK is returned. When the certificate is still valid, but for less than
 30 days, but more than 14 days, a STATE_WARNING is returned.
 A STATE_CRITICAL will be returned when certificate expires in less than 14 days
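To wire this into Nagios itself, a minimal command and service definition might look like the following (the host name, command name, and thresholds are placeholders; adapt them to your own object configs):

define command {
    command_name    check_ssl_cert_expiry
    command_line    $USER1$/check_http -H $ARG1$ -C $ARG2$
}

define service {
    use                     generic-service
    host_name               web
    service_description     SSL Certificate Expiration
    check_command           check_ssl_cert_expiry!www.rubysecurity.org!30,14
}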


Log into a Docker Container as root

docker exec -u 0 -it mycontainer bash
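The -u 0 flag runs the command as UID 0 (root) and -it allocates an interactive TTY. If the image doesn't ship with bash, sh is the usual fallback:

docker exec -u 0 -it mycontainer sh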


Ubuntu 18.04 LTS + Systemd + Netplan = Annoyance

Unless it's something that is supposed to help improve my workflow, I really hate change; especially when it involves changing something that worked perfectly fine.

I upgraded (fresh install) from Ubuntu Server LTS 12.04 to 18.04. Among the changes is the addition of systemd, which I don't mind to be honest, as I see it as a necessary evil. What shocked me was that the old traditional Debian networking configuration does not work anymore; instead, networking is handled by a new utility called Netplan. Using Netplan for normal static networking configurations is not terrible; however, in my use case, I needed to be able to create a new virtual interface for the shared KVM bridge networking config needed by my guest VMs.

After about 30 minutes of trial and error (and not being able to find any useful documentation), I opted to revert to the old legacy networking configuration. The only problem is that reverting to my old 12.04 networking config was not quite as easy as simply copying over the old interfaces file.

1. Remove all of the configs in /etc/netplan/:

rm /etc/netplan/*.yaml

2. Install the ifupdown utility:

sudo apt install ifupdown

3. Now, populate your /etc/network/interfaces config. This is how mine looks (where eno1 is my physical interface):

# ifupdown has been replaced by netplan(5) on this system.  See
# /etc/netplan for current configuration.
# To re-enable ifupdown on this system, you can run:
#    sudo apt install ifupdown
#
auto lo
iface lo inet loopback

auto br0
iface br0 inet static
        address 192.168.1.25
        netmask 255.255.255.0
        dns-nameservers 8.8.8.8 192.168.1.10 192.168.1.11
        gateway 192.168.1.1
        # set static route for LAN
        post-up route add -net 192.168.0.0 netmask 255.255.255.0 gw 192.168.1.1
        bridge_ports eno1
        bridge_stp off
        bridge_fd 0
        bridge_maxwait 0
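With the interfaces file in place, the legacy networking service (provided by the ifupdown package) can be restarted to apply the config; a reboot works too:

sudo systemctl restart networking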

After restarting the network service, my new shared interface was successfully created with the proper IP address and routing; however, DNS was not configured. This is because DNS configuration now seems to have its own dedicated tool called systemd-resolved. Getting my static DNS configured and working on top of this half-assed legacy networking setup with systemd-resolved is a two-step process:

1. Update the file /etc/systemd/resolved.conf with the corresponding DNS configuration. In my case it looks like this:

[Resolve]
DNS=192.168.1.10
DNS=192.168.1.11
DNS=8.8.8.8
Domains=rubyninja.org

2. Then finally restart the systemd-resolved service.

systemctl restart systemd-resolved

You can verify the DNS config using

systemd-resolve --status

It wasn't as easy as I first imagined, but that said, this was the only inconvenience during my entire 12.04 to 18.04 upgrade.


Homelab Updates!

It's been well over a month since I finally decided to retire both of my Apple Mac Minis in favor of a single (for the time being), quieter, and more powerful Intel NUC.

Migrating my existing KVM and VirtualBox VMs over to my new KVM server was a really easy process. If doing the import manually, it's just a matter of selecting the existing vdi and qcow2 images as the source disks when creating the guest VMs on the server. In my case, however, I also had to update each VM's MAC address, given that all of my VMs are configured to get their respective fixed IP addresses via my isc-dhcpd server.
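As a rough sketch of what the manual import looks like from the command line (the names, sizes, paths, and MAC address here are hypothetical; virt-manager's import wizard accomplishes the same thing):

virt-install --name web --memory 2048 --vcpus 2 \
  --disk path=/var/lib/libvirt/images/web.qcow2 \
  --import --os-variant centos7.0 \
  --network bridge=br0,mac=52:54:00:aa:bb:cc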

This was somewhat of a fresh start, so I nuked a bunch of unused VMs that I had lingering around for testing purposes, and only kept what I really need for now. At the time of this writing, these are the active VMs I use on my homelab:
proxy - Reverse proxy Varnish and Nginx (SSL termination)
dhcp - ISC-dhcpd and PXE server
database - MySQL and PostgreSQL server
monitor - Nagios, Graphite/Grafana
web - Apache
ns1 - Master BIND server
ns2 - Slave BIND server
git - GitLab and Subversion
ansible - Ansible and Puppet Configuration Management
build - Jenkins
logs - ELK stack

Future Plans:
I have lots of future plans for my homelab, like upgrading my BIND DNS servers to a new version and rolling out DNSSEC on my local network, upgrading my dhcp server (running a really old version of Debian), and rolling out 389 Directory Server (I have a love/hate relationship with OpenLDAP). And these are just to name a few!


Annoying Ansible Gotcha

Ansible is by far my favorite configuration management tool; however, it certainly has its own unique quirks and annoyances. For a start, I prefer Ansible's YAML/Jinja approach over Puppet's and Chef's custom DSL configurations.

Today I ran into an interesting YAML parsing quirk. It turns out that if you use the colon ':' character inside a string anywhere in your playbooks, Ansible can fail to parse it properly.

Example playbook:

---
- hosts: 127.0.0.1
  tasks:
    - lineinfile: dest=/etc/sudoers regexp='^testuser ALL=' state=present line="testuser ALL=(ALL) NOPASSWD: TEST_PROGRAM" state=present

Running the playbook triggers the following error:

ERROR! Syntax Error while loading YAML.


The error appears to have been in '/etc/ansible/one_off_playbooks/example.yml': line 4, column 104, but may
be elsewhere in the file depending on the exact syntax problem.

The offending line appears to be:

  tasks:
    - lineinfile: dest=/etc/sudoers regexp='^testuser ALL=' state=present line="testuser ALL=(ALL) NOPASSWD: TEST_PROGRAM" state=present
                                                                                                       ^ here

Fix:
This is a known issue (https://github.com/ansible/ansible/issues/1341), and the easiest workaround is to force the colon ':' character to be evaluated by the Jinja templating engine:

{{':'}}
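Applied to the playbook above, the task becomes:

---
- hosts: 127.0.0.1
  tasks:
    - lineinfile: dest=/etc/sudoers regexp='^testuser ALL=' line="testuser ALL=(ALL) NOPASSWD{{':'}} TEST_PROGRAM" state=present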

The hilarious part is that it doesn't look like this stupid quirk is ever going to be fixed.


Working with Ruby obfuscated code: Finding all classes available in a module

As a follow-up to my HashiCorp Rocks! blog post: up until now, I had never directly worked with any obfuscated code. HashiCorp obfuscates their VMware Fusion and Workstation commercial Vagrant plugins.

Like Vagrant, the plugins themselves are written in Ruby.

alpha03:lib tony$ file vagrant-vmware-fusion.rb
vagrant-vmware-fusion.rb: ASCII text, with very long lines, with CRLF, LF line terminators

However, if you try to read the source, all you'll see is a bunch of encoded text. My Vagrant plugin has some functionality that only works after a certain action gets executed by the proprietary plugins, which is why I needed to know the exact name of that particular action (class name), exactly as it's defined inside the VMware Fusion and Workstation plugins. This was a serious problem because I can't read their source code!

Luckily, this wasn't as difficult as it seemed. Finding the classes (or methods, though in my case I didn't need to) available in a Ruby module is a fairly simple process. To my luck, somebody had already asked and answered this question on StackOverflow.

In my case, the first step was to find the name of the actual module itself. I found the easiest way to get the name of an obfuscated module is to intentionally make it throw an exception. In doing that, I found that the module namespaces I'd be searching were HashiCorp::VagrantVMwarefusion and HashiCorp::VagrantVMwareworkstation.

Once I knew the module names, I was able to use Ruby to see what additional modules exist within a particular module's namespace. I accomplished that using the following:

 
t = HashiCorp::VagrantVMwareworkstation.constants.select { |c|
  HashiCorp::VagrantVMwareworkstation.const_get(c).is_a? Module
}
puts t

The above sample code spat out a bunch of modules inside HashiCorp::VagrantVMwareworkstation, but since I know the Vagrant plugin API and its coding standards/practices, I was able to verify that the module I was searching for is HashiCorp::VagrantVMwareworkstation::Action. Once again, looking at the plugin API and other examples, I knew that this is where the class I was looking for is defined, so I used the following to get the corresponding class names within HashiCorp::VagrantVMwareworkstation::Action:

p = HashiCorp::VagrantVMwareworkstation::Action.constants.select { |c|
  HashiCorp::VagrantVMwareworkstation::Action.const_get(c).is_a? Class
}
puts p

I repeated the above steps for HashiCorp::VagrantVMwarefusion and was also able to find the corresponding class name defined inside its obfuscated Ruby code.

In the end I was able to get the classes HashiCorp::VagrantVMwareworkstation::Action::Suspend and HashiCorp::VagrantVMwarefusion::Action::Suspend, and everything worked as expected.


aws cli AuthFailure gotcha

It's probably been about 5-6 years since the last time I used the AWS command-line tools. The other day I signed up for the AWS Free Tier to familiarize myself with the aws command-line tools once again. I created a test user and granted it full access to my AWS account. Easy and simple. Well, not so fast: initially, my user account failed to authenticate properly. I tried re-generating new API keys and even created other accounts with different types of access, but the problem persisted.

Problem: I was seeing the following:

[[email protected] ~]# aws ec2 describe-regions

An error occurred (AuthFailure) when calling the DescribeRegions operation: AWS was not able to validate the provided access credentials

I verified my AWS API credentials stored in ~/.aws/credentials to ensure everything was correct. Yet, the fucking problem persisted.

Fix:
It turned out that the clock on the system I was trying to use the aws cli tools on was completely wrong! Once the time was fixed, I was able to authenticate my account and use the aws cli tool.
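For reference, on a systemd-based box this is roughly how I'd check and fix the clock (assuming an NTP server is reachable):

timedatectl status
sudo timedatectl set-ntp true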


Using Python to Output Pretty Looking JSON

Every once in a while, there are occasions when I have a giant glob of JSON data that I want to be able to read easily. Normally, I opted to use http://jsonlint.com/. The problem is that whenever I use http://jsonlint.com/, I always have to be sure the JSON data doesn't include anything confidential. I was happy to learn that you can easily use the json library in Python to accomplish essentially the same thing.

For example:

alpha03:~ tony$ cat test.json
{"employees":[{"firstName":"John", "lastName":"Doe"},{"firstName":"Anna", "lastName":"Smith"},{"firstName":"Peter", "lastName":"Jones"}]} 
alpha03:~ tony$ python -m json.tool < test.json
{
    "employees": [
        {
            "firstName": "John",
            "lastName": "Doe"
        },
        {
            "firstName": "Anna",
            "lastName": "Smith"
        },
        {
            "firstName": "Peter",
            "lastName": "Jones"
        }
    ]
}
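The same thing can be done from inside a script using json.dumps and its indent parameter; a minimal sketch using the test.json file from above:

import json

# Read the JSON blob and re-emit it pretty-printed
with open("test.json") as f:
    data = json.load(f)

print(json.dumps(data, indent=4, sort_keys=True))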

