
RIP Nagios

It's the end of an era, at least for me using Nagios, or Nagios Core to be exact. Unless you've been living under a rock, Prometheus has become the de facto tool when it comes to system monitoring. Professionally, I stopped using Nagios a few years ago, but I still kept a Nagios server running in my homelab for internal monitoring alongside Prometheus. What kept me from fully dumping Nagios was having to migrate some of my custom alerts. This weekend I finally decided to give Nagios its final blow and migrate my custom alerts to Prometheus. With the help of the awesome Blackbox exporter, I was able to easily port over my custom HTTP and DNS alerts to Prometheus.
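For reference, this isn't my exact configuration, but HTTP and DNS checks in the Blackbox exporter are defined as modules in blackbox.yml, roughly like this (the module names and the record being queried are made up for illustration):

```yaml
modules:
  http_2xx:
    prober: http
    timeout: 5s
  dns_internal:
    prober: dns
    timeout: 5s
    dns:
      query_name: "homelab.example.com"
      query_type: "A"
```

Prometheus then scrapes the exporter with the target and module passed as URL parameters, and the probe result is exposed as the probe_success metric.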

Like Nagios, I feel Prometheus also has a steep learning curve. Overall, though, the benefits Prometheus brings, like integration with cloud-native infrastructure, definitely outweigh the drawbacks of this awesome monitoring tool.
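To give a flavor of what the migrated checks look like on the Prometheus side, here is a minimal hedged sketch of an alerting rule driven by Blackbox probes (the group name, duration, and labels are illustrative, not my actual rules):

```yaml
groups:
  - name: blackbox.rules
    rules:
      - alert: ProbeFailed
        expr: probe_success == 0
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "Blackbox probe {{ $labels.instance }} failed"
```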
Prometheus Alerts


Updating BIND DNS records using Ansible

This is a follow-up to my post Configure BIND to support DDNS updates.
Now that I'm able to dynamically update DNS records, this is where Ansible comes in. Ansible is hands down my favorite orchestration/automation tool, so I chose to use it to update my local DNS records going forward.

I'll be using the community.general.nsupdate module.

I structured my records in my nameserver's corresponding Ansible group_vars using the following structure:

  - zone: DNS-NAME
    records:
      - record: (@ for $ORIGIN, or a normal record name)
        ttl: TTL-VALUE
        state: (present or absent)
        type: DNS-TYPE
        value: VALUE-OF-DNS-RECORD


all_dns_records:
  - zone: ""
    records:
      - record: "@"
        ttl: "10800"
        state: "present"
        type: "A"
        value: ""
      - record: "shit"
        ttl: "10800"
        state: "present"
        type: "A"
        value: ""
  - zone: ""
    records:
      - record: "@"
        ttl: "10800"
        state: "present"
        type: "A"
        value: ""
      - record: "test"
        ttl: "10800"
        state: "present"
        type: "A"
        value: ""

Deployment Ansible playbook:

- hosts:
  tasks:
    - name: Get algorithm from vault
      set_fact:
        vault_algorithm: "{{ lookup('community.general.hashi_vault', 'secret/systems/bind:algorithm') }}"
      delegate_to: localhost

    - name: Get rndckey from vault
      set_fact:
        vault_rndckey: "{{ lookup('community.general.hashi_vault', 'secret/systems/bind:rndckey') }}"
      delegate_to: localhost

    - name: Sync $ORIGIN records
      community.general.nsupdate:
        key_name: "rndckey"
        key_secret: "{{ vault_rndckey }}"
        key_algorithm: "{{ vault_algorithm }}"
        server: ""
        port: "53"
        protocol: "tcp"
        ttl: "{{ item.1.ttl }}"
        record: "{{ item.0.zone }}."
        state: "{{ item.1.state }}"
        type: "{{ item.1.type }}"
        value: "{{ item.1.value }}"
      when: item.1.record == "@"
      with_subelements:
        - "{{ all_dns_records }}"
        - records
      notify: Sync zone files
      delegate_to: localhost

    - name: Sync DNS records
      community.general.nsupdate:
        key_name: "rndckey"
        key_secret: "{{ vault_rndckey }}"
        key_algorithm: "{{ vault_algorithm }}"
        server: ""
        port: "53"
        protocol: "tcp"
        zone: "{{ item.0.zone }}"
        ttl: "{{ item.1.ttl }}"
        record: "{{ item.1.record }}"
        state: "{{ item.1.state }}"
        type: "{{ item.1.type }}"
        value: "{{ item.1.value }}"
      when: item.1.record != "@"
      with_subelements:
        - "{{ all_dns_records }}"
        - records
      notify: Sync zone files
      delegate_to: localhost

    - name: Check master config
      command: named-checkconf /var/named/chroot/etc/named.conf
      changed_when: false

    - name: Check zone config
      command: "named-checkzone {{ item }} /var/named/chroot/etc/zones/db.{{ item }}"
      with_items:
        - "{{ all_dns_records | map(attribute='zone') | list }}"
      changed_when: false

  handlers:
    - name: Sync zone files
      command: rndc -c /var/named/chroot/etc/rndc.conf sync -clean

A breakdown of my DNS deployment playbook:
1). Grabs the dynamic DNS update keys from HashiCorp Vault.
2). Syncs all of the @ $ORIGIN records for every zone.
3). Syncs all of the remaining records.
4). For good measure, though not strictly necessary: checks the named.conf file.
5). For good measure, though not strictly necessary: checks each individual zone file.
6). Forces the dynamic changes to be written to disk.

Given that my environment only has roughly a couple of dozen DNS records, this structure works fine for me. That said, my group_vars file with all my DNS records is almost 600 lines long, and a playbook run takes around 1-2 minutes to complete. If I were in an environment with thousands of DNS records, the approach I described here might not be the most efficient.


PHP 7.4 with Remi's RPM Repository

Containerizing all my web applications has been on my to-do list for some years now. Until then, I shall continue to run some of my apps in a traditional VM shared environment.

Remi's RPM Repository is the best RPM-based repository if you want to easily run the latest upstream version of PHP. One of the benefits of using this repository in a shared environment is the ability to easily run multiple versions of PHP side by side. My sites had been on PHP 7.2 until a few minutes ago. PHP 7.2 is officially end of life and no longer maintained, so being a good internet citizen I needed to upgrade to the latest PHP 7.4.

Upgrading to PHP 7.4 is extremely easy (assuming your app is not using any legacy functionality that was removed or changed). Since I already had PHP 7.2 running, I simply queried for all php72 packages installed on my system, then installed their php74 counterparts.

for package in $(rpm -qa --queryformat "%{NAME}\n"|grep php72 |sed 's/php72/php74/g'); do yum install -y $package; done
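The sed substitution is what maps each installed 7.2 package name onto its 7.4 counterpart before it is fed to yum. For example (the package name here is just an illustration):

```shell
# map a php72 package name to its php74 equivalent
echo "php72-php-fpm" | sed 's/php72/php74/g'
# prints: php74-php-fpm
```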

All of the different PHP configurations can be found under /etc/opt/remi. Once all the packages were installed, I ported over all my custom PHP ini and FPM settings. In addition, I had to change the FPM pool's default listening port, for example in /etc/opt/remi/php74/php-fpm.d/www.conf:

listen =

This avoids a port collision with the PHP-FPM pool already running for 7.2.

Afterwards, I'm able to start my new PHP 7.4 FPM node pool.

systemctl enable php74-php-fpm
systemctl start php74-php-fpm

The last step is simply updating my site's Apache configuration to point to the new PHP 7.4 FPM port.

<VirtualHost *:80>
    DocumentRoot /www/


    SetEnvIf Remote_Addr "" is_proxied=1

    ErrorLog /var/log/httpd/
    CustomLog /var/log/httpd/ cloudflare env=is_proxied
    CustomLog /var/log/httpd/ combined

    ProxyPassMatch ^/(.*\.php)$ fcgi://$1
    ProxyTimeout 120
</VirtualHost>



Configure BIND to support DDNS updates

I use BIND on my home network for both authoritative and forwarding name resolution. In it, I have a few private DNS zones that I use for testing and for my homelab setup. Previously, when I wanted to make DNS changes to my main homelab zone, I would just SSH into my master nameserver, update the zone file, and reload. While this has worked great for me over the last 10+ years that I've been running BIND, it obviously doesn't follow good DevOps practices.

If you're in a typical BIND environment where you're already using rndc to administer your server, then you're almost there.

BIND Configuration
1). Create a Secret Key Transaction Authentication (TSIG) key (where ddnskey. is the name of the key).
Approach A: Using dnssec-keygen

mkdir ddns
dnssec-keygen -a hmac-md5 -b 512 -n HOST -r /dev/urandom ddnskey.

The above command will create two Kddnskey.* files: one ending in *.private, the other in *.key.

Approach B: Using tsig-keygen

tsig-keygen -a hmac-md5 ddnskey.

Either approach is fine; for this example I opted for dnssec-keygen, since I'll be using the created key file to test a dynamic DNS update.

2). Update named.conf file.
Include the newly created key configuration:

key "ddnskey." {
        algorithm      "hmac-md5";
        secret          "PRIVATEKEYHERE==";
};

Now, it's just a matter of setting the allow-update configuration to allow updates using our newly created key.

zone "" IN {
        type master;
        file "etc/zones/";
        allow-transfer { trusted-servers; };
        allow-query { any; };
        allow-update { key rndckey; };
};

zone "" IN {
        type master;
        file "etc/zones/";
        allow-transfer { trusted-servers; };
        allow-query { any; };
        allow-update { key "ddnskey."; };
};

It is worth mentioning that BIND also includes the update-policy option for finer-grained control over the types of updates we want to allow.
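For example, a hedged sketch (the zone name and file path here are placeholders, not my actual zone) that only lets the key touch A and TXT records within the zone:

```
zone "example.internal" IN {
        type master;
        file "etc/zones/db.example.internal";
        update-policy { grant ddnskey. zonesub A TXT; };
};
```

Note that update-policy and allow-update are mutually exclusive within a zone.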

3). Testing
Using the nsupdate tool (part of bind-utils), we can easily test an update to verify the setup works as expected.

$ nsupdate -d -k Kddnskey.+157+06602.key
Creating key...
> server
> zone
> update add 3600 A
> send



389 Directory Server GUI with Cockpit

So I have a 389 Directory Server up and running. The next step to ease administration was to find a GUI. My first logical approach was to use Apache Directory Studio; however, I'm trying to keep the number of non-ARM applications on my shiny Apple M1 MacBook Pro to an absolute minimum, so I opted not to install Apache Directory Studio. At least not yet. Luckily, I learned that Red Hat has a Webmin equivalent called Cockpit that comes with built-in support for 389 Directory Server management.

The Cockpit application was already installed on my base RHEL 8 system, so I simply copied over my fullchain Let's Encrypt SSL certificate to /etc/cockpit/ws-certs.d/ssl.cert and restarted the service.

systemctl restart cockpit

Then it was just a matter of updating firewalld

firewall-cmd --add-port=9090/tcp
firewall-cmd --permanent --add-port=9090/tcp
firewall-cmd --reload

The Cockpit application runs on port 9090 by default.
Cockpit web interface



Creating an LDAP read-only service account

So now that I have an LDAP server up and running, I can finally start creating LDAP clients to authenticate against it. But before I can start configuring applications or even adding normal LDAP users, I first need a read-only service account for clients to bind with.

1). Creating the service account

dsidm localhost user create \
--uid binduser \
--uidNumber 1001 \
--gidNumber 1001 \
--cn binduser \
--displayName binduser

2). Create a password for the service account

dsidm localhost account reset_password uid=binduser,ou=people,dc=rubyninja,dc=org

3). To modify/add permissions for the binduser service account, I created a file called binduser.ldif with the following contents:

dn: ou=people,dc=rubyninja,dc=org
changetype: modify
add: aci
aci: (targetattr="*") (version 3.0; acl "Allow uid=binduser reading to everything";
 allow (search, read) userdn = "ldap:///uid=binduser,ou=people,dc=rubyninja,dc=org";)

Apply the changes

ldapmodify -H ldaps://localhost -D "cn=Directory Manager" -W -x -f binduser.ldif

NOTE: A fair warning: although I've worked with LDAP and have some experience with it (at one point one of my job responsibilities was managing an enterprise OpenLDAP infrastructure), LDAP is not quite my forté, so in no way, shape, or form are these best practices! This is just a mere POC for my homelab.



Deploying a 389 Directory Server

So it's been roughly nine months since I posted a useful technical article on this site. What better way to get back into it than to write up the 389 Directory Server LDAP instance I just deployed in my homelab.

Red Hat recently announced that RHEL would be available at no cost for developer and personal testing use (with limits, of course), so this was the perfect occasion for me to start using RHEL 8.

1). Disable SELinux (yes, I know. I should do better..)

sudo setenforce 0

2). Update firewall

firewall-cmd --permanent --add-port={389/tcp,636/tcp,9830/tcp}
firewall-cmd --reload
firewall-cmd --list-all

3). Install epel repo

yum install
yum module install 389-directory-server:stable/default

4). Create the LDAP instance configuration file (instance.inf)

[general]
config_version = 2

[backend-userroot]
sample_entries = yes
suffix = dc=rubyninja,dc=org

5). Create 389 DS instance

dscreate from-file instance.inf

6). Create ~/.dsrc config

[localhost]
# Note that '/' is replaced to '%%2f'.
uri = ldapi://%%2fvar%%2frun%%2fslapd-localhost.socket
basedn = dc=rubyninja,dc=org
binddn = cn=Directory Manager

7). Afterwards, I'm able to verify my installation

# dsctl localhost status
Instance "localhost" is running

8). Since I kept the default settings when I created the 389 DS instance, my instance received the name "localhost", hence why my ~/.dsrc config also has the instance configured as "localhost".
The corresponding systemd service is dirsrv@localhost, with the config files stored in /etc/dirsrv/slapd-localhost

systemctl status dirsrv@localhost

ls -l /etc/dirsrv/slapd-localhost/

SSL Configuration
By default the 389 DS setup uses self-signed certificates. The following is what I used to install my own certificate, signed by my local CA, in its place.

1). Create the private root CA key and self-signed cert

openssl genrsa -out rootCA.key 4096
openssl req -x509 -new -nodes -key rootCA.key -sha256 -days 4096 -out rootCA.pem

2). I created the following script to easily generate a certificate key-pair signed by my custom local CA.


[[ ! -d "./certs" ]] && mkdir certs

cat \
/etc/pki/tls/openssl.cnf \
- \
<<-CONFIG > certs/ca-selfsign-ssl.cnf

[ san ]
subjectAltName="${SAN:[email protected]}"
CONFIG

# generate client key
openssl genrsa -out certs/ssl.key 4096

# generate csr
openssl req \
-sha256 \
-new \
-key certs/ssl.key \
-reqexts san \
-extensions san \
-subj "/" \
-config certs/ca-selfsign-ssl.cnf \
-out certs/ssl.csr

# sign cert
openssl x509 -req -in certs/ssl.csr -CA rootCA.pem -CAkey rootCA.key -CAcreateserial -days 2048 -sha256 -extensions san -extfile certs/ca-selfsign-ssl.cnf -out certs/ssl.crt
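Before importing a signed cert into the NSS database, it's worth sanity-checking that the leaf really chains back to the root CA with openssl verify. A self-contained sketch using throwaway keys and paths (none of these file names are from the script above):

```shell
# create a throwaway CA
openssl genrsa -out /tmp/demoCA.key 2048
openssl req -x509 -new -nodes -key /tmp/demoCA.key -sha256 -days 30 \
    -subj "/CN=demo-ca" -out /tmp/demoCA.pem

# create a leaf key + CSR, then sign the CSR with the CA
openssl genrsa -out /tmp/demo.key 2048
openssl req -new -key /tmp/demo.key -subj "/CN=demo-leaf" -out /tmp/demo.csr
openssl x509 -req -in /tmp/demo.csr -CA /tmp/demoCA.pem -CAkey /tmp/demoCA.key \
    -CAcreateserial -days 30 -sha256 -out /tmp/demo.crt

# verify the chain; prints "/tmp/demo.crt: OK" on success
openssl verify -CAfile /tmp/demoCA.pem /tmp/demo.crt
```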

3). Then I used the certutil utility to view the names and attributes of the default SSL certs.

# certutil -L -d /etc/dirsrv/slapd-localhost/ -f /etc/dirsrv/slapd-localhost/pwdfile.txt

Certificate Nickname Trust Attributes

ca_cert CT,,
Server-Cert u,u,u

4). Once I made note of the names and attributes of the default SSL certificates, I first needed to delete them before replacing them with my custom SSL certs:

certutil -D -d /etc/dirsrv/slapd-localhost/ -n Server-Cert -f /etc/dirsrv/slapd-localhost/pwdfile.txt
certutil -D -d /etc/dirsrv/slapd-localhost/ -n Self-Signed-CA -f /etc/dirsrv/slapd-localhost/pwdfile.txt

Adding new SSL certs:

certutil -A -d /etc/dirsrv/slapd-localhost/ -n "ca_cert" -t "CT,," -i rootCA.pem -f /etc/dirsrv/slapd-localhost/pwdfile.txt
certutil -A -d /etc/dirsrv/slapd-localhost/ -n "Server-Cert" -t ",," -i ssl/ssl.crt -f /etc/dirsrv/slapd-localhost/pwdfile.txt

5). While the certutil utility manages signed public and CA certificates, private keys are managed by the pk12util utility.
However, before we use this tool, we must convert the X.509 private key to PKCS#12 format.

openssl pkcs12 -export -out certs/ssl.pfx -inkey certs/ssl.key -in certs/ssl.crt -certfile /root/ssl/rootCA.pem

Afterwards, we can add it to our LDAP SSL database.

pk12util -d /etc/dirsrv/slapd-localhost/ -i certs/ssl.pfx

6). Lastly, restart the service

systemctl restart dirsrv@localhost




Exclude comments and empty lines from file

Every so often there's the need to view a configuration file (usually a large one) while excluding all comments and empty lines:

egrep -v '^(#|$)' your-config-file.cfg
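A quick demonstration against a throwaway file (the file name and contents are made up for illustration):

```shell
# build a small sample config with comments and blank lines
cat > /tmp/sample.cfg <<'EOF'
# main settings
port 8080

# verbose logging
log_level debug
EOF

# keep only the meaningful lines
egrep -v '^(#|$)' /tmp/sample.cfg
# prints:
# port 8080
# log_level debug
```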


Running Ubuntu Server on an Intel NUC 10th i7

Late last year, I purchased a secondary Intel NUC 8th gen i3 for my homelab. My main goal was to use this secondary NUC primarily to learn Mesos and Kubernetes more in depth. Little did I know that the dual-core i3 in the NUC was not powerful enough to run a simple ten-node DC/OS cluster, let alone another Kubernetes cluster on the same machine. So I decided to wait until the new 10th generation i7 Intel NUCs were released so I could upgrade.

The upgrade itself was not as easy as I first imagined. Both the RAM and the hard drive were swapped from the old 8th gen NUC to the new 10th gen NUC. Ubuntu started up successfully and all the memory was properly recognized on the new machine; however, networking was not working. My first thought was that since Linux was now running on new hardware, I needed to remove the old NIC's udev configuration. I soon realized that in the post-systemd world, we no longer need to do this. After a quick Google search, I found a Reddit post that outlined my exact problem.

I was shocked to learn that the new 10th gen NUC's network card is so new that its driver isn't even in the latest Ubuntu Server LTS! Luckily, compiling and loading the newer e1000e driver was a really easy task. The only caveat was that I had to go into the UEFI BIOS, disable Secure Boot, and allow 3rd party modules; otherwise the new kernel module would fail to load.

After a few hours of usage, the performance difference is night and day. The new 10th gen hex-core i7 completely blows the 8th gen dual-core i3 out of the water.


Send Email from a Shell Script Using Gmail’s SMTP

In my previous post, I enabled my local mail server to relay all outgoing mail to Google's SMTP servers. However, if you want to completely bypass using any sort of MTA, then you only need to configure your Mail User Agent client to use Gmail's SMTP settings directly.

In Linux, I've always used the mailx utility to send out email messages from the command line or from a shell script. By default, mailx uses the local mail server to send out messages, but configuring it to use a custom SMTP server is extremely easy.

Inside a shell script, the configuration would look like this:

to="[email protected]"
from="[email protected]"
email_config="-S smtp-use-starttls \
-S ssl-verify=ignore \
-S smtp-auth=login \
-S smtp=smtp:// \
-S from=$from \
-S [email protected] \
-S nss-config-dir=/etc/pki/nssdb"

echo "Test email from mailx" | mail -s "TEST" $email_config "$to"

To have the same settings available to mailx for interactive command-line use, simply set them in ~/.mail.rc:

set smtp-use-starttls
set ssl-verify=ignore
set smtp=smtp://
set smtp-auth=login
set [email protected]
set smtp-auth-password=ULTRASECUREPASSWORDHERE
set from="[email protected]"
set nss-config-dir=/etc/pki/nssdb


