November 4, 2018
Nagios SSL Certificate Expiration Check
by Alpha01
So, a while back I demonstrated a way to set up an automated SSL certificate expiration monitoring solution.
Well, it turns out the check_http Nagios plugin has built-in support to monitor SSL certificate expiration as well. This is accomplished using the -C / --certificate option.
Example check on a local expired Let’s Encrypt Certificate:
[root@monitor plugins]# ./check_http -t 10 -H www.rubysecurity.org -I 192.168.1.61 -C 10
SSL CRITICAL - Certificate 'www.rubysecurity.org' expired on 2018-07-25 18:39 -0700/PDT.
check_http help doc:
-C, --certificate=INTEGER[,INTEGER]
Minimum number of days a certificate has to be valid. Port defaults to 443
(when this option is used the URL is not checked.)
CHECK CERTIFICATE: check_http -H www.verisign.com -C 30,14
When the certificate of 'www.verisign.com' is valid for more than 30 days,
a STATE_OK is returned. When the certificate is still valid, but for less than
30 days, but more than 14 days, a STATE_WARNING is returned.
A STATE_CRITICAL will be returned when certificate expires in less than 14 days
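To run this as a standing Nagios check, the plugin can be wrapped in a command and service definition. The following is only a rough sketch; the object names and the generic-service template are assumptions, not taken from my actual config:

define command {
    command_name    check_ssl_cert
    command_line    $USER1$/check_http -H $ARG1$ -I $HOSTADDRESS$ -C $ARG2$,$ARG3$
}

define service {
    use                     generic-service
    host_name               www.rubysecurity.org
    service_description     SSL Certificate Expiration
    check_command           check_ssl_cert!www.rubysecurity.org!30!14
}

This warns when the certificate has less than 30 days left and goes critical under 14 days, matching the help-doc example above.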
Tags: [nagios]
November 4, 2018
Log Varnish/proxy and Local Access Separately in Apache
by Alpha01
I use Varnish on all of my web sites, with Apache as the backend web server. All traffic that reaches my sites from the internet goes through Varnish, while all access from my local home network hits Apache directly (accomplished using local BIND authoritative servers).
For the longest time, I’ve been logging all direct Apache traffic and traffic originating from Varnish to the same Apache access log. It turns out segmenting the access logs is a very easy task. This can be accomplished with the help of environment variables in Apache, using SetEnvIf.
For example, my Varnish server’s local IP is 192.168.1.150, and SetEnvIf can use Remote_Addr (the IP address of the client making the request) as part of its condition. So in my case, I can check whether the originating request came from my Varnish server’s 192.168.1.150 address, and if so, set the is_proxied environment variable. Afterwards, I can use the is_proxied environment variable to tell Apache where to log that access request.
Inside my VirtualHost directive, the log configuration looks like this:
SetEnvIf Remote_Addr "192.168.1.150" is_proxied=1
ErrorLog /var/log/httpd/antoniobaltazar.com/error.log
CustomLog /var/log/httpd/antoniobaltazar.com/access.log cloudflare env=is_proxied
CustomLog /var/log/httpd/antoniobaltazar.com/access-local.log combined
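Note that cloudflare in the first CustomLog line is just the name of a custom log format, which has to be defined somewhere with a LogFormat directive. As a rough sketch (the format string below is illustrative, not my actual definition), it could look something like this:

LogFormat "%{CF-Connecting-IP}i %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" cloudflare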
Unfortunately, we can’t use this same technique to log the error logs separately as ErrorLog does not support this.
Tags: [apache, varnish]
October 28, 2018
Ubuntu 18.04 LTS + Systemd + Netplan = Annoyance
by Alpha01
Unless it’s something that is supposed to help improve my workflow, I really hate change; especially if it involves changing something that worked perfectly fine.
I upgraded (via a fresh install) from Ubuntu Server 12.04 LTS to 18.04. Among the changes is the addition of systemd, which I don’t mind to be honest, as I see it as a necessary evil. I was shocked to see that the old traditional Debian networking configuration no longer works. Instead, networking is handled by a new utility called Netplan. Using Netplan for normal static networking configurations is not terrible; however, in my use case I needed to be able to create a new virtual bridge interface for the shared KVM networking config needed by my guest VMs.
After about 30 minutes of trial and error (and not being able to find any useful documentation), I opted to revert to the old legacy networking configuration. The only problem is that reverting to my old 12.04 networking config was not quite as easy as simply copying over the old interfaces file. So I had to do the following:
1). Remove all of the configs in /etc/netplan/
sudo rm /etc/netplan/*.yaml
2). Install the ifupdown utility
sudo apt install ifupdown
Now, populate your /etc/network/interfaces config. This is how mine looks (where eno1 is my physical interface):
# ifupdown has been replaced by netplan(5) on this system. See
# /etc/netplan for current configuration.
# To re-enable ifupdown on this system, you can run:
# sudo apt install ifupdown
#
auto lo
iface lo inet loopback
auto br0
iface br0 inet static
address 192.168.1.25
netmask 255.255.255.0
dns-nameservers 8.8.8.8 192.168.1.10 192.168.1.11
gateway 192.168.1.1
# set static route for LAN
post-up route add -net 192.168.0.0 netmask 255.255.255.0 gw 192.168.1.1
bridge_ports eno1
bridge_stp off
bridge_fd 0
bridge_maxwait 0
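To apply the new config, the networking service needs to be restarted. With ifupdown installed, this is typically done with the following (a sketch; the networking service comes from the ifupdown package):

sudo systemctl restart networking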
After restarting the network service, my new shared interface was successfully created with the proper IP address and routing; however, DNS was not configured. This is because DNS configuration now seems to have its own dedicated tool called systemd-resolved. So, getting my static DNS configured and working on this half-assed legacy networking configuration using systemd-resolved is a two-step process:
1). Update the file /etc/systemd/resolved.conf with the corresponding DNS configuration. In my case it looks like this:
[Resolve]
DNS=192.168.1.10
DNS=192.168.1.11
DNS=8.8.8.8
Domains=rubyninja.org
2). Then finally restart the systemd-resolved service.
systemctl restart systemd-resolved
You can verify the DNS config using:
systemd-resolve --status
It wasn’t as easy as I first imagined, but that said, this was the only inconvenience during my entire 12.04 to 18.04 upgrade.
Tags: [ubuntu, systemd, networking]
October 28, 2018
Log into a Docker Container as root
by Alpha01
docker exec -u 0 -it mycontainer bash
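The -u 0 flag runs the shell as UID 0 (root), regardless of the user the image defaults to, and -it allocates an interactive TTY. To double-check you really are root once inside (assuming the image ships coreutils):

id -u    # prints 0 when running as root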
Tags: [docker, security]
October 12, 2018
Homelab Updates!
by Alpha01
It’s been well over a month since I finally decided to retire both of my Apple Mac Minis in favor of a single (for the time being), quieter, and more powerful Intel NUC.
Migrating over my existing KVM and VirtualBox VMs to my new KVM server was a really easy process. If doing the import manually, it’s just a matter of selecting the existing vdi and qcow2 images as the source disks when creating the guest VMs on the server. In my case, however, I also had to update the new MAC addresses, given that all of my VMs are configured to get their respective fixed IP addresses via my isc-dhcpd server.
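For context, a fixed IP reservation in isc-dhcpd is just a host block keyed off the guest’s MAC address, so each new MAC has to be reflected there. A minimal sketch (the host name, MAC, and IP below are made up for illustration):

host web {
    hardware ethernet 52:54:00:aa:bb:cc;
    fixed-address 192.168.1.20;
}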
Since this was somewhat of a fresh start, I nuked a bunch of unused VMs that I had lingering around for testing purposes, and only kept what I really need for now. At the time of this writing, these are the active VMs that I use in my homelab:
- proxy - Reverse proxy Varnish and Nginx (SSL termination)
- dhcp - ISC-dhcpd and PXE server
- database - MySQL and PostgreSQL server
- monitor - Nagios, Graphite/Grafana
- web - Apache
- ns1 - Master BIND server
- ns2 - Slave BIND server
- git - GitLab and Subversion
- ansible - Ansible and Puppet Configuration Management
- build - Jenkins
- logs - ELK stack
Future Plans
I have lots of future plans for my homelab, like upgrading my BIND DNS servers to a newer version and rolling out DNSSEC on my local network, upgrading my dhcp server (it’s running a really old version of Debian), and rolling out 389 Directory Server (I have a love/hate relationship with OpenLDAP). These are just a few!
Tags: [ubuntu, kvm]