Updating BIND DNS records using Ansible

This is a follow-up to my previous post, Configure BIND to support DDNS updates.
Now that I'm able to dynamically update DNS records, this is where Ansible comes in. Ansible is hands down my favorite orchestration/automation tool, so I chose to use it to update my local DNS records going forward.

I'll be using the community.general.nsupdate module.

I structured my records in my nameserver's corresponding Ansible group_vars using the following structure:

all_dns_records:
  - zone: DNS-NAME
    records:
      - record: (@ for $ORIGIN or a normal record name)
        ttl: TTL-VALUE
        state: (present or absent)
        type: DNS-TYPE
        value: VALUE-OF-DNS-RECORD

Example

---
all_dns_records:
  - zone: "rubyninja.org"
    records:
      - record: "@"
        ttl: "10800"
        state: "present"
        type: "A"
        value: "192.168.1.63"
      - record: "shit"
        ttl: "10800"
        state: "present"
        type: "A"
        value: "192.168.1.64"
  - zone: "alpha.org"
    records:
      - record: "@"
        ttl: "10800"
        state: "present"
        type: "A"
        value: "192.168.1.63"
      - record: "test"
        ttl: "10800"
        state: "present"
        type: "A"
        value: "192.168.1.64"
[...]

Deployment Ansible playbook:

---
- hosts: ns1.rubyninja.org
  pre_tasks:
    - name: Get algorithm from vault
      ansible.builtin.set_fact:
        vault_algorithm: "{{ lookup('community.general.hashi_vault', 'secret/systems/bind:algorithm') }}"
      delegate_to: localhost

    - name: Get rndckey from vault
      ansible.builtin.set_fact:
        vault_rndckey: "{{ lookup('community.general.hashi_vault', 'secret/systems/bind:rndckey') }}"
      delegate_to: localhost

  tasks:
    - name: Sync $ORIGIN records
      community.general.nsupdate:
        key_name: "rndckey"
        key_secret: "{{ vault_rndckey }}"
        key_algorithm: "{{ vault_algorithm }}"
        server: "ns1.rubyninja.org"
        port: "53"
        protocol: "tcp"
        ttl: "{{ item.1.ttl }}"
        record: "{{ item.0.zone }}."
        state: "{{ item.1.state }}"
        type: "{{ item.1.type }}"
        value: "{{ item.1.value }}"
      when: item.1.record == "@"
      with_subelements:
        - "{{ all_dns_records }}"
        - records
      notify: Sync zone files
      delegate_to: localhost

    - name: Sync DNS records
      community.general.nsupdate:
        key_name: "rndckey"
        key_secret: "{{ vault_rndckey }}"
        key_algorithm: "{{ vault_algorithm }}"
        server: "ns1.rubyninja.org"
        port: "53"
        protocol: "tcp"
        zone: "{{ item.0.zone }}"
        ttl: "{{ item.1.ttl }}"
        record: "{{ item.1.record }}"
        state: "{{ item.1.state }}"
        type: "{{ item.1.type }}"
        value: "{{ item.1.value }}"
      when: item.1.record != "@"
      with_subelements:
        - "{{ all_dns_records }}"
        - records
      notify: Sync zone files
      delegate_to: localhost

  post_tasks:
    - name: Check master config
      command: named-checkconf /var/named/chroot/etc/named.conf
      delegate_to: ns1.rubyninja.org
      changed_when: false

    - name: Check zone config
      command: "named-checkzone {{ item }} /var/named/chroot/etc/zones/db.{{ item }}"
      with_items:
        - "{{ all_dns_records | map(attribute='zone') | list }}"
      delegate_to: ns1.rubyninja.org
      changed_when: false

  handlers:
    - name: Sync zone files
      command: rndc -c /var/named/chroot/etc/rndc.conf sync -clean
      delegate_to: ns1.rubyninja.org

A breakdown of my DNS deployment playbook:
1). Grabs the dynamic DNS update keys from HashiCorp Vault.
2). Syncs all of the @ ($ORIGIN) records for every zone.
3). Syncs all of the remaining records.
4). For good measure, though not strictly necessary: checks the named.conf file.
5). For good measure, though not strictly necessary: checks each individual zone file.
6). Forces dynamic changes to be applied to disk.
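
For reference, applying changes is then just a normal playbook run (the playbook and inventory file names below are placeholders for my actual files):

ansible-playbook -i inventory dns.yml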

Given that I only have roughly a couple dozen DNS records in my environment, this structure for DNS records works well for me. That said, my group_vars file with all my DNS records is almost 600 lines long, and a playbook run takes around 1-2 minutes to complete. In an environment with thousands of DNS records, the approach I described here might not be the most efficient.

PHP 7.4 with Remi's RPM Repository

Containerizing all my web applications has been on my to-do list for some years now. Until then, I shall continue to run some of my apps in a traditional VM shared environment.

Remi's RPM Repository is the best RPM-based repository if you want to easily run the latest upstream version of PHP. One of the benefits of using this repository in a shared environment is the ability to easily run multiple versions of PHP. My sites had been running PHP 7.2 until a few minutes ago. PHP 7.2 is officially deprecated and no longer maintained, so being a good internet citizen I needed to upgrade to the latest PHP 7.4.

Upgrading to PHP 7.4 is extremely easy (assuming your app is not using any legacy functionality that was removed or changed). Since I already had PHP 7.2 running, I simply queried for all php72 packages installed on my system, then installed their php74 counterparts.

for package in $(rpm -qa --queryformat "%{NAME}\n"|grep php72 |sed 's/php72/php74/g'); do yum install -y $package; done
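
As a quick sanity check (purely a verification step on my part), the package counts for both versions should match once the loop finishes:

rpm -qa 'php72*' | wc -l
rpm -qa 'php74*' | wc -l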

All of the different PHP configurations can be found under /etc/opt/remi. Once all the packages were installed, I ported over all my custom PHP ini and FPM settings. In addition, I had to change the new FPM pool's default listening port, for example in /etc/opt/remi/php74/php-fpm.d/www.conf:

listen = 127.0.0.1:9002

This is to avoid a port collision with the already running PHP-FPM pool that is being used by PHP 7.2.

Afterwards, I'm able to start my new PHP 7.4 FPM pool.

systemctl enable php74-php-fpm
systemctl start php74-php-fpm
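
At this point both FPM pools should be up. A quick way to confirm that each pool is listening on its own port:

ss -tlnp | grep php-fpm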

The last step is simply updating my site's Apache configuration to point to the new PHP 7.4 FPM port.

<VirtualHost *:80>
    DocumentRoot /www/shit.alpha01.org

    ServerName shit.alpha01.org
    ServerAlias www.shit.alpha01.org

    SetEnvIf Remote_Addr "192.168.1.150" is_proxied=1

    ErrorLog /var/log/httpd/shit.alpha01.org/error.log
    CustomLog /var/log/httpd/shit.alpha01.org/access.log cloudflare env=is_proxied
    CustomLog /var/log/httpd/shit.alpha01.org/access-local.log combined

    ProxyPassMatch ^/(.*\.php)$ fcgi://127.0.0.1:9002/www/shit.alpha01.org/$1
    ProxyTimeout 120
</VirtualHost>
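
Before reloading Apache, it's worth validating the configuration first. On a RHEL/CentOS-style setup this looks something like:

apachectl configtest
systemctl reload httpd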

Configure BIND to support DDNS updates

I use BIND on my home network for both authoritative and forwarding name resolution. In it I have a few private DNS zones I use for testing and for my homelab setup. The main primary DNS zone I use for my homelab is rubyninja.org. Previously, when I wanted to make DNS changes, I would just SSH into my master nameserver, update the zone file, and reload. While this has worked great for me over the 10+ years that I've been running BIND, it obviously doesn't follow good DevOps practices.

If you're in a normal BIND environment where you're already using rndc to administer your server, then you're already most of the way there.

BIND Configuration
1). Create a Secret Key Transaction Authentication (TSIG) key (where ddnskey. is the name of the key).
Approach A: Using dnssec-keygen

mkdir ddns
dnssec-keygen -a hmac-md5 -b 512 -n HOST -r /dev/urandom ddnskey.

The above command will create two Kddnskey.* files, one ending in *.private and the other in *.key.

Approach B: Using tsig-keygen

tsig-keygen -a hmac-md5 ddnskey.

Either approach is fine; for this example I opted to use dnssec-keygen, since I'll be using the created key file to test a dynamic DNS update.

2). Update the named.conf file.
Include the newly created key configuration:

key "ddnskey." {
        algorithm      "hmac-md5";
        secret          "PRIVATEKEYHERE==";
};

Now, it's just a matter of setting the allow-update configuration to allow updates using our newly created key.

zone "rubyninja.org." IN {
        type master;
        file "etc/zones/db.rubyninja.org";
        allow-transfer { trusted-servers; };
        allow-query { any; };
        allow-update { key rndckey; };
};

zone "k8s.rubyninja.org." IN {
        type master;
        file "etc/zones/db.k8s.rubyninja.org";
        allow-transfer { trusted-servers; };
        allow-query { any; };
        allow-update { key "ddnskey."; };
};

It is worth noting that BIND also includes the update-policy option for finer-grained control over the types of updates we want to allow.
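
After editing named.conf, it's a good idea to validate and apply the configuration before testing, using the same chrooted paths as the rest of my setup:

named-checkconf /var/named/chroot/etc/named.conf
rndc -c /var/named/chroot/etc/rndc.conf reconfig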

3). Testing
Using the nsupdate tool (part of bind-utils), we can easily test an update to verify the setup works as expected.

$ nsupdate -d -k Kddnskey.+157+06602.key
Creating key...
> server ns1.rubyninja.org
> zone k8s.rubyninja.org.
> update add tonytest.k8s.rubyninja.org. 3600 A 192.168.1.25
> send
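
After the send, a quick dig against the nameserver should return the newly added record:

dig @ns1.rubyninja.org tonytest.k8s.rubyninja.org A +short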

Resources:
https://docs.netgate.com/pfsense/en/latest/recipes/bind-rfc2136.html
https://www.thegeekdiary.com/how-to-use-rndc-command-command-line-admini...

389 Directory Server GUI with Cockpit

So I have a 389 Directory Server up and running. The next step to ease administration was to find a GUI. My first logical approach was to use Apache Directory Studio; however, I'm trying to keep the number of non-ARM applications on my shiny Apple M1 MacBook Pro to an absolute minimum, so I opted not to install Apache Directory Studio. At least not yet. Luckily, I learned that Red Hat has a Webmin equivalent called Cockpit that comes with built-in support for 389 Directory Server management.

The Cockpit application was already installed on my base RHEL 8 system, so I simply copied over my full-chain Let's Encrypt SSL certificate to /etc/cockpit/ws-certs.d/ssl.cert and restarted the service.

systemctl restart cockpit

Then it was just a matter of updating firewalld:

firewall-cmd --add-port=9090/tcp
firewall-cmd --permanent --add-port=9090/tcp
firewall-cmd --reload
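
A quick check that the port is now open (verification only):

firewall-cmd --list-ports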

The Cockpit application runs by default on port 9090.
Cockpit web interface

Creating an LDAP read-only service account

So now that I have an LDAP server up and running, I can finally start creating LDAP clients to authenticate against my ldap.rubyninja.org server. But before I can start configuring applications or even adding normal LDAP users, I first need a read-only service account that clients can bind with.

1). Creating the service account

dsidm localhost user create \
--uid binduser \
--uidNumber 1001 \
--gidNumber 1001 \
--cn binduser \
--displayName binduser

2). Create a password for the service account

dsidm localhost account reset_password uid=binduser,ou=people,dc=rubyninja,dc=org

3). To modify/add permissions for the binduser service account, I created a file called binduser.ldif with the following contents:

dn: ou=people,dc=rubyninja,dc=org
changetype: modify
add: aci
aci: (targetattr="*") (version 3.0; acl "Allow uid=binduser reading to everything";
 allow (search, read) userdn = "ldap:///uid=binduser,ou=people,dc=rubyninja,dc=org";)

Apply the changes

ldapmodify -H ldaps://localhost -D "cn=Directory Manager" -W -x -f binduser.ldif
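
To sanity-check the new account, a simple bind-and-search as binduser should now succeed, for example:

ldapsearch -H ldaps://localhost -x -D "uid=binduser,ou=people,dc=rubyninja,dc=org" -W \
  -b "dc=rubyninja,dc=org" "(objectClass=*)" dn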

NOTE: A fair warning: although I've worked with LDAP and have some experience with it (at one point one of my job responsibilities was even managing an enterprise OpenLDAP infrastructure), LDAP is not quite my forte, so in no way, shape, or form are these best practices! This is just a mere POC for my homelab.

Resources:
https://access.redhat.com/documentation/en-us/red_hat_directory_server/10/html/administration_guide/defining_bind_rules#granting_access_to_authenticated_users

Deploying a 389 Directory Server

So it's been roughly nine months since I created a useful technical post on this site. What better way to return than by posting about the 389 Directory Server LDAP deployment I just did on my homelab.

Ever since Red Hat announced that RHEL was going to be no-cost for personal developer and testing use (with limits, of course), I'd been wanting to try it. This was the perfect occasion for me to start using RHEL 8.

Install
1). Disable SELinux (yes, I know. I should do better..)

sudo setenforce 0

2). Update firewall

firewall-cmd --permanent --add-port={389/tcp,636/tcp,9830/tcp}
firewall-cmd --reload
firewall-cmd --list-all

3). Install epel repo

yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm
yum module install 389-directory-server:stable/default

4). Create the LDAP instance configuration file, instance.inf:

[general]
config_version = 2

[slapd]
root_password = MY_SUPER_ULTRA_SECURE_PASSWORD_HERE

[backend-userroot]
sample_entries = yes
suffix = dc=rubyninja,dc=org

5). Create 389 DS instance

dscreate from-file instance.inf

6). Create ~/.dsrc config

[localhost]
# Note that '/' is replaced with '%%2f'.
uri = ldapi://%%2fvar%%2frun%%2fslapd-localhost.socket
basedn = dc=rubyninja,dc=org
binddn = cn=Directory Manager

7). Afterwards, I'm able to verify my installation:

[root@ldap ldap]# dsctl localhost status
Instance "localhost" is running

8). Since I kept the default settings when I created the 389 DS instance, my instance received the name "localhost", hence why my ~/.dsrc config also references the instance as "localhost".
The corresponding systemd service is dirsrv@localhost.service, with the config files stored in /etc/dirsrv/slapd-localhost.

systemctl status dirsrv@localhost

ls -l /etc/dirsrv/slapd-localhost/

SSL Configuration
By default the 389 DS setup uses self-signed certificates. The following is how I installed my own self-signed cert for ldap.rubyninja.org.

1). Create the private root CA key and self-signed CA certificate

openssl genrsa -out rootCA.key 4096
openssl req -x509 -new -nodes -key rootCA.key -sha256 -days 4096 -out rootCA.pem
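
To double-check the resulting CA certificate (purely a verification step):

openssl x509 -in rootCA.pem -noout -subject -issuer -dates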

2). I created the following script to easily generate a certificate key pair signed by my custom local CA.

#!/bin/bash
SAN="DNS:ldap.rubyninja.org,DNS:login.rubyninja.org"

[[ ! -d "./certs" ]] && mkdir certs

cat \
/etc/pki/tls/openssl.cnf \
- \
<<-CONFIG > certs/ca-selfsign-ssl.cnf

[ san ]
subjectAltName="${SAN}"
CONFIG

# generate client key
openssl genrsa -out certs/ssl.key 4096

# generate csr
openssl req \
-sha256 \
-new \
-key certs/ssl.key \
-reqexts san \
-extensions san \
-subj "/CN=ldap.rubyninja.org" \
-config certs/ca-selfsign-ssl.cnf \
-out certs/ssl.csr


# sign cert
openssl x509 -req \
-in certs/ssl.csr \
-CA rootCA.pem \
-CAkey rootCA.key \
-CAcreateserial \
-days 2048 \
-sha256 \
-extensions san \
-extfile certs/ca-selfsign-ssl.cnf \
-out certs/ssl.crt
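
A quick way to confirm the SAN entries actually made it into the signed certificate:

openssl x509 -in certs/ssl.crt -noout -text | grep -A1 'Subject Alternative Name'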

3). Then I used the certutil utility to view the names and attributes of the default SSL certs.

[root@ldap]# certutil -L -d /etc/dirsrv/slapd-localhost/ -f /etc/dirsrv/slapd-localhost/pwdfile.txt

Certificate Nickname                              Trust Attributes
                                                  SSL,S/MIME,JAR/XPI

ca_cert                                           CT,,
Server-Cert                                       u,u,u

4). Once I had made note of the names and attributes of the SSL certificates, I needed to delete them before replacing them with my custom SSL certs.
Deletion:

certutil -D -d /etc/dirsrv/slapd-localhost/ -n Server-Cert -f /etc/dirsrv/slapd-localhost/pwdfile.txt
certutil -D -d /etc/dirsrv/slapd-localhost/ -n Self-Signed-CA -f /etc/dirsrv/slapd-localhost/pwdfile.txt

Adding new SSL certs:

certutil -A -d /etc/dirsrv/slapd-localhost/ -n "ca_cert" -t "CT,," -i rootCA.pem -f /etc/dirsrv/slapd-localhost/pwdfile.txt
certutil -A -d /etc/dirsrv/slapd-localhost/ -n "Server-Cert" -t ",," -i ssl/ssl.crt -f /etc/dirsrv/slapd-localhost/pwdfile.txt

5). While the certutil utility manages signed public and CA certificates, private SSL keys are managed by the pk12util utility.
However, before we use this tool, we must convert the X.509 private key and certificate into PKCS#12 format.

openssl pkcs12 -export -out certs/ssl.pfx -inkey certs/ssl.key -in certs/ssl.crt -certfile /root/ssl/rootCA.pem

Afterwards, we can add it to our LDAP certificate database.

pk12util -d /etc/dirsrv/slapd-localhost/ -i certs/ssl.pfx

6). Lastly, restart the service

systemctl restart dirsrv@localhost
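
To verify the new certificate is actually the one being served over LDAPS (assuming the CA file is still at ./rootCA.pem):

echo | openssl s_client -connect ldap.rubyninja.org:636 -CAfile rootCA.pem 2>/dev/null | openssl x509 -noout -subject -dates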

Resources:
https://directory.fedoraproject.org/docs/389ds/howto/quickstart.html#set...
https://directory.fedoraproject.org/docs/389ds/howto/howto-install-389.html
https://directory.fedoraproject.org/docs/389ds/howto/howto-ssl-archive.html
https://support.globalsign.com/ssl/ssl-certificates-installation/convert...

Exclude comments and empty lines from file

Every so often there's the need to view a configuration file (usually a large one), and you want an easy way to exclude all comments and empty lines:

egrep -v '^(#|$)' your-config-file.cfg
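
If the file also contains indented comments, a slightly broader variant that skips leading whitespace works too:

grep -vE '^[[:space:]]*(#|$)' your-config-file.cfg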

Running Ubuntu Server on an Intel NUC 10th i7

Late last year, I purchased a secondary 8th gen i3 Intel NUC for my homelab. My main goal was to use this secondary NUC primarily to learn Mesos and Kubernetes more in depth. Little did I know that the dual-core i3 in the NUC was not truly powerful enough to run a simple ten-node DC/OS cluster, let alone another Kubernetes cluster on the same machine. So I decided to wait until the new 10th generation i7 Intel NUCs were released so I could upgrade.

The upgrade itself was not as easy as I first imagined. Both the RAM and hard drive were swapped from the old 8th gen NUC to the new 10th gen NUC. Ubuntu started up successfully and all the memory was properly recognized on the new machine, however networking was not working. My first thought was that since Linux was now running on new hardware, I needed to remove the old NIC's udev configuration. I soon realized that in the post-systemd world, we no longer need to do this. After a quick Google search, I found a Reddit post that outlined my exact problem: https://www.reddit.com/r/intelnuc/comments/eox6k1/caution_new_frost_canyon_nucs_have_an_integrated/

I was shocked to learn that the new 10th gen NUC's network card is so new that its driver isn't even in the latest Ubuntu Server LTS! Luckily, compiling and loading the newer e1000e driver was a really easy task. The only caveat was that I had to go into the UEFI BIOS, disable Secure Boot, and allow 3rd-party modules; otherwise the new kernel module would fail to load.
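
Once the module was built and installed, a couple of quick checks confirm the driver is present and loaded:

modinfo e1000e | grep ^version
lsmod | grep e1000e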

After a few hours of usage, the performance difference is night and day. The new 10th gen hex-core i7 completely blows the 8th gen dual-core i3 out of the water.

Send Email from a Shell Script Using Gmail’s SMTP

In my previous post, I enabled my local mail server to relay all outgoing mail to Google's SMTP servers. However, if you want to completely bypass using any sort of MTA, then you only need to configure your mail user agent client to use Gmail's SMTP settings directly.

In Linux, I've always used the mailx utility to send out email messages from the command line or from a shell script. By default, mailx uses the local mail server to send out messages, but configuring it to use a custom SMTP server is extremely easy.

Inside a shell script, the configuration would look like this:

to="[email protected]"
from="[email protected]"
email_config="
-S smtp-use-starttls \
-S ssl-verify=ignore \
-S smtp-auth=login \
-S smtp=smtp://smtp.gmail.com:587 \
-S from=$from \
-S smtp-auth-user=[email protected] \
-S smtp-auth-password=ULTRASECUREPASSWORDHERE \
-S nss-config-dir=/etc/pki/nssdb \
$to"

echo "Test email from mailx" | mail -s "TEST" $email_config

To have the mail settings used by mailx from the command line, simply set them in ~/.mail.rc:

set smtp-use-starttls
set ssl-verify=ignore
set smtp=smtp://smtp.gmail.com:587
set smtp-auth=login
set smtp-auth-user=[email protected]
set smtp-auth-password=ULTRASECUREPASSWORDHERE
set from="[email protected]"
set nss-config-dir=/etc/pki/nssdb
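
With that in place, sending from the command line is as simple as (the recipient address below is a placeholder):

echo "Test email from mailx" | mailx -s "TEST" recipient@example.com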

References:
https://www.systutorials.com/1411/sending-email-from-mailx-command-in-linux-using-gmails-smtp/

Configuring Postfix to use Gmail

Configuring Postfix to use Gmail as the outgoing SMTP relay endpoint is a relatively simple process. In my case, I'm not using an @gmail.com account. Rather, since all of my domains use G Suite, I've created a special dedicated email account that I'll be using to send out email from.

Before you start configuring Postfix, it is important that you enable "Less secure app access" on the Gmail account that you will be using to send outgoing messages.

I’m using CentOS 7.x as my mail server OS. These were the steps I used to configure Postfix.

1. Install necessary packages:

yum install postfix mailx cyrus-sasl cyrus-sasl-plain

2. Create the /etc/postfix/sasl_passwd file with your authentication credentials:

[smtp.gmail.com]:587    [email protected]:mypassword

3. Update file permissions to lockdown access to our newly created authentication config file:

chmod 600 /etc/postfix/sasl_passwd

4. Use the postmap command to compile and hash the contents of sasl_passwd:

postmap /etc/postfix/sasl_passwd
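
The hashed map can be queried directly to confirm postmap generated it correctly:

postmap -q "[smtp.gmail.com]:587" hash:/etc/postfix/sasl_passwd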

5. Update /etc/postfix/main.cf:

relayhost = [smtp.gmail.com]:587
smtp_use_tls = yes
smtp_sasl_auth_enable = yes
smtp_sasl_security_options =
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_tls_CAfile = /etc/ssl/certs/ca-bundle.crt

6. Finally, enable and restart postfix:

systemctl enable postfix
systemctl restart postfix
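
To confirm the relay works end to end, send a test message and keep an eye on the mail queue and log (the recipient address below is a placeholder):

echo "Postfix relay test" | mailx -s "Relay test" recipient@example.com
postqueue -p
tail -f /var/log/maillog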

Lastly, although it's not needed to get a working Postfix-to-Gmail SMTP config, I would recommend enabling outgoing throttling. Otherwise, Google might temporarily suspend your account from sending messages!

Additional /etc/postfix/main.cf update:

smtp_destination_concurrency_limit = 2
smtp_destination_rate_delay = 10s
smtp_extra_recipient_limit = 5

In my case, I configured Postfix to only handle two concurrent relay connections, wait at least 10 seconds between sends, and limit each queued message to 5 recipients.

NOTE: As I mentioned, since I'm not using an @gmail.com account, I had to add an SPF DNS record so that outgoing emails pass all of Google's spam tests.

DNS txt record:

v=spf1 include:_spf.google.com ~all
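
To confirm the record is being served (substitute your own domain):

dig +short TXT yourdomain.com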

Example received email header from a message sent through the new Postfix-to-Gmail SMTP configuration:
Passing Gmail Email Header

To conclude, it is important to remember that this Postfix configuration will overwrite whatever "From" address is set by your mail user agent (as the above email header image demonstrates).

Resources:
https://www.howtoforge.com/tutorial/configure-postfix-to-use-gmail-as-a-mail-relay
https://wiki.deimos.fr/Postfix:_limit_outgoing_mail_throttling.html
