Send Email from a Shell Script Using Gmail’s SMTP

In my previous post, I enabled my local mail server to relay all outgoing mail to Google's SMTP servers. However, if you want to completely bypass using any sort of MTA, you only need to configure your Mail User Agent client to use Gmail's SMTP settings directly.

In Linux, I've always used the mailx utility to send out email messages from the command line or from a shell script. By default, mailx uses the local mail server to send out messages, but configuring it to use a custom SMTP server is extremely easy.

Inside a shell script, the configuration would look like this:

to="[email protected]"
from="[email protected]"
email_config="
-S smtp-use-starttls \
-S ssl-verify=ignore \
-S smtp-auth=login \
-S smtp=smtp://smtp.gmail.com:587 \
-S from=$from \
-S [email protected] \
-S smtp-auth-password=ULTRASECUREPASSWORDHERE \
-S ssl-verify=ignore \
-S nss-config-dir=/etc/pki/nssdb \
$to"

echo "Test email from mailx" | mail -s "TEST" $email_config

To have these mail settings used by mailx from the command line, simply set them in ~/.mail.rc:

set smtp-use-starttls
set ssl-verify=ignore
set smtp=smtp://smtp.gmail.com:587
set smtp-auth=login
set smtp-auth-user=user@gmail.com
set smtp-auth-password=ULTRASECUREPASSWORDHERE
set from="user@gmail.com"
set nss-config-dir=/etc/pki/nssdb
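
With those settings in place, sending a message from the command line becomes a one-liner (the recipient address below is a placeholder):

echo "Test email from mailx" | mail -s "TEST" recipient@example.com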

References:
https://www.systutorials.com/1411/sending-email-from-mailx-command-in-linux-using-gmails-smtp/

Configuring Postfix to use Gmail

Configuring Postfix to use Gmail as the outgoing SMTP relay endpoint is a relatively simple process. In my case, I'm not using an @gmail.com account. Rather, since all of my domains use G Suite, I've created a special dedicated email account that I'll use to send outgoing email.

Before you start configuring Postfix, it is important to enable "Less secure app access" on the Gmail account that you will be using to send outgoing messages.

I’m using CentOS 7.x as my mail server OS. These were the steps I used to configure Postfix.

1. Install necessary packages:

yum install postfix mailx cyrus-sasl cyrus-sasl-plain

2. Create the /etc/postfix/sasl_passwd file with your authentication credentials:

[smtp.gmail.com]:587    user@gmail.com:mypassword

3. Update file permissions to lock down access to our newly created authentication config file:

chmod 600 /etc/postfix/sasl_passwd

4. Use the postmap command to compile and hash the contents of sasl_passwd:

postmap /etc/postfix/sasl_passwd
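
Note that postmap writes a compiled /etc/postfix/sasl_passwd.db file alongside the original (you may want to chmod 600 that file as well). You can verify the lookup table was built correctly by querying it; the output should echo back the placeholder credentials:

postmap -q "[smtp.gmail.com]:587" hash:/etc/postfix/sasl_passwd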

5. Update /etc/postfix/main.cf:

relayhost = [smtp.gmail.com]:587
smtp_use_tls = yes
smtp_sasl_auth_enable = yes
smtp_sasl_security_options =
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_tls_CAfile = /etc/ssl/certs/ca-bundle.crt

6. Finally, enable and restart postfix:

systemctl enable postfix
systemctl restart postfix
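
At this point, the relay can be tested with the mailx utility installed in step 1; on CentOS, delivery status shows up in /var/log/maillog (the recipient address is a placeholder):

echo "Postfix relay test" | mail -s "TEST" recipient@example.com
tail /var/log/maillog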

Lastly, although it's not needed to get a working Postfix-to-Gmail SMTP config, I would recommend enabling outgoing throttling. Otherwise, Google might temporarily suspend your account from sending messages!

Additional /etc/postfix/main.cf update:

smtp_destination_concurrency_limit = 2
smtp_destination_rate_delay = 10s
smtp_extra_recipient_limit = 5

In my case, I configured Postfix to only handle two concurrent relay connections, wait at least 10 seconds between deliveries, and set the recipient limit to 5 (per queued message session).

NOTE: As I mentioned, since I'm not using an @gmail.com account, I had to add an SPF DNS record so that the outgoing emails pass all of Google's spam tests.

DNS txt record:

v=spf1 include:_spf.google.com ~all
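
Once the record has propagated, you can confirm it with dig (substitute your own domain):

dig +short TXT example.com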

Example received email header sent from the new Postfix-to-Gmail SMTP configuration:
[Image: passing Gmail email header]

To conclude, it is important to remember that this Postfix configuration will overwrite whatever "From" address is set by your mail user agent (as the above email header image demonstrates).

Resources:
https://www.howtoforge.com/tutorial/configure-postfix-to-use-gmail-as-a-mail-relay
https://wiki.deimos.fr/Postfix:_limit_outgoing_mail_throttling.html

Automatically Disable Different Jenkins Projects at Build Time

I use Jenkins as my CI tool for all my personal projects. My current Jenkins build plans are fairly simple and not particularly complex (though I do plan to eventually start using Jenkins Pipelines for my build jobs in the near future), given that most of my personal projects are WordPress and Drupal sites.

My current configuration consists of two basic Freestyle Jenkins projects: one for my staging build/job and the other for my production build/job. Each time my staging Freestyle project builds, it automatically creates a git tag, which is later used by my production Freestyle project, where it's pulled, built, and deployed from. This means that at no point do I want my production Freestyle project to build whenever the corresponding staging Freestyle project fails (for example, due to failing unit tests).

The Groovy Postbuild Plugin gives you the ability to modify Jenkins itself. In my case, I want to disable the production Freestyle project whenever my staging Freestyle project fails.

In this example, my production project build/job is called rubysecurity.org.

import jenkins.*
import jenkins.model.*

String production_project = "rubysecurity.org";

try {
  // If this staging build finished in any state worse than SUCCESS,
  // disable the production job and flag it on the build's summary page.
  if (manager.build.result.isWorseThan(hudson.model.Result.SUCCESS)) {
    Jenkins.instance.getItem(production_project).disable();
    manager.listener.logger.println("Disabled ${production_project} build plan!");
    manager.createSummary("warning.gif").appendText("No production builds will be available on ${production_project} until the errors here are fixed!", false, false, false, "red")
  } else {
    // Staging succeeded, so make sure the production job is enabled again.
    Jenkins.instance.getItem(production_project).enable();
    manager.listener.logger.println("Enabled ${production_project} build plan.");
  }
} catch (Exception ex) {
  manager.listener.logger.println("Error toggling ${production_project}: " + ex.getMessage());
}

The example Groovy Postbuild script ensures the production build/job rubysecurity.org is enabled if the staging job finishes without any errors; otherwise, rubysecurity.org is disabled and a custom error message is displayed on the failing staging build/job.

Example error:
[Image: Jenkins build error summary]

References:
https://stackoverflow.com/questions/8661349/disable-jenkins-job-from-ano...

Accessing KVM Guest Using Virtual Serial Console

For the longest time after creating my KVM guest virtual machines, I've only used virt-manager for any sort of remote access that wasn't a direct SSH connection. It wasn't until now that I finally decided to start using the serial console feature of KVM, and I have to say, I kind of regret procrastinating on this, because this feature is really convenient.

Enabling serial console access to a guest VM is a relatively easy process.
In CentOS, it's simply a matter of adding the following kernel parameter to GRUB_CMDLINE_LINUX in /etc/default/grub:

console=ttyS0

After adding the console kernel parameter with the value of our virtual serial console's device file, we have to build a new GRUB menu and reboot:

grub2-mkconfig -o /boot/grub2/grub.cfg

Afterwards, from the host system, you should be able to virsh console onto the guest VM.
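
For example, assuming a guest VM named web (a hypothetical name; virsh list shows the available guests):

virsh console web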

The only caveat with connecting to a guest using the virtual serial console is exiting the console. In my case, the way to log off the console connection was the Ctrl+5 key combination. This disconnection quirk reminded me of the good old days when I actually worked on physical servers and used IPMI's serial-over-network feature and its associated unique key combination to properly close the serial connection.

Resources:
https://www.certdepot.net/rhel7-access-virtual-machines-console/
https://superuser.com/questions/637669/how-to-exit-a-virsh-console-connection

Log Varnish/proxy and Local Access Separately in Apache

I use Varnish on all of my web sites, with Apache as the backend web server. All Varnish traffic that hits my sites originates from the internet, while all access from my local home network hits Apache directly (accomplished using local BIND authoritative servers).

For the longest time, I've been logging all direct Apache traffic and traffic originating from Varnish to the same Apache access file. It turns out segmenting the access logs is a very easy task. This can be accomplished with the help of environment variables in Apache, using SetEnvIf.

For example, my Varnish server's local IP is 192.168.1.150, and SetEnvIf can use Remote_Addr (the IP address of the client making the request) as part of its set condition. So in my case, I can check if the originating request came from my Varnish server's 192.168.1.150 address, and if so, set the is_proxied environment variable. Afterwards, I can use the is_proxied environment variable to tell Apache where to log that access request to.

Inside my VirtualHost directive, the log configuration looks like this:

        SetEnvIf Remote_Addr "192.168.1.150" is_proxied=1

        ErrorLog /var/log/httpd/antoniobaltazar.com/error.log

        CustomLog /var/log/httpd/antoniobaltazar.com/access.log cloudflare env=is_proxied
        CustomLog /var/log/httpd/antoniobaltazar.com/access-local.log combined
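
The cloudflare log format referenced above is a custom LogFormat defined elsewhere in my Apache configuration. As a rough sketch, assuming you want the proxied log to record the original client IP passed along by the proxy in a header such as X-Forwarded-For, it might look something like this:

LogFormat "%{X-Forwarded-For}i %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" cloudflare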

Unfortunately, we can't use this same technique to log the error logs separately, as ErrorLog does not support it.

Nagios SSL Certificate Expiration Check

So, a while back I demonstrated a way to set up an automated SSL certificate expiration monitoring solution. Well, it turns out the check_http Nagios plugin has built-in support for monitoring SSL certificate expiration as well. This is accomplished using the -C / --certificate option.

Example check on a local expired Let's Encrypt Certificate:

[root@server plugins]# ./check_http -t 10 -H www.rubysecurity.org -I 192.168.1.61 -C 10
SSL CRITICAL - Certificate 'www.rubysecurity.org' expired on 2018-07-25 18:39 -0700/PDT.

check_http help doc:

-C, --certificate=INTEGER[,INTEGER]
    Minimum number of days a certificate has to be valid. Port defaults to 443
    (when this option is used the URL is not checked.)

CHECK CERTIFICATE: check_http -H www.verisign.com -C 30,14

 When the certificate of 'www.verisign.com' is valid for more than 30 days,
 a STATE_OK is returned. When the certificate is still valid, but for less than
 30 days, but more than 14 days, a STATE_WARNING is returned.
 A STATE_CRITICAL will be returned when certificate expires in less than 14 days
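
As a minimal sketch of how this could be wired into Nagios object definitions (the command and service names here are hypothetical, and the generic-service template is assumed to exist in your config):

define command {
    command_name    check_ssl_cert_expiry
    command_line    $USER1$/check_http -H $ARG1$ -C $ARG2$,$ARG3$
}

define service {
    use                     generic-service
    host_name               web
    service_description     SSL cert expiration www.rubysecurity.org
    check_command           check_ssl_cert_expiry!www.rubysecurity.org!30!14
}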

Log into a Docker Container as root

docker exec -u 0 -it mycontainer bash
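
To confirm the session really is running as root (mycontainer being a placeholder name), you can run id the same way:

docker exec -u 0 -it mycontainer id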

Ubuntu 18.04 LTS + Systemd + Netplan = Annoyance

Unless it's something that is supposed to help improve workflow, I really hate change, especially if the change involves replacing something that worked perfectly fine.

I upgraded (fresh install) from Ubuntu Server 12.04 LTS to 18.04. Among the changes is the addition of systemd, which I don't mind, to be honest, as I see it as a necessary evil. However, I was shocked to see that the old traditional Debian networking configuration no longer works. Instead, networking is handled by a new utility called Netplan. Using Netplan for normal static networking configurations is not terrible; however, in my use case, I needed to be able to create a new virtual interface for the shared KVM bridge networking config needed for my guest VMs.

After about 30 minutes of trial and error (I wasn't able to find any useful documentation), I opted to revert to the old legacy networking configuration. The only problem is that reverting to my old 12.04 networking config was not quite as easy as simply copying over the old interfaces file. I had to do the following:

1. Remove all of the configs in /etc/netplan/:

rm /etc/netplan/*.yaml

2. Install the ifupdown utility:

sudo apt install ifupdown

3. Populate your /etc/network/interfaces config. This is how mine looks (where eno1 is my physical interface):

# ifupdown has been replaced by netplan(5) on this system.  See
# /etc/netplan for current configuration.
# To re-enable ifupdown on this system, you can run:
#    sudo apt install ifupdown
#
auto lo
iface lo inet loopback

auto br0
iface br0 inet static
        address 192.168.1.25
        netmask 255.255.255.0
        dns-nameservers 8.8.8.8 192.168.1.10 192.168.1.11
        gateway 192.168.1.1
        # set static route for LAN
        post-up route add -net 192.168.0.0 netmask 255.255.255.0 gw 192.168.1.1
        bridge_ports eno1
        bridge_stp off
        bridge_fd 0
        bridge_maxwait 0
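
With the interfaces file populated, restart networking (the ifupdown package provides the legacy networking service):

sudo systemctl restart networking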

After restarting the network service, my new shared interface was successfully created with the proper IP address and routing; however, DNS was not configured. This is because DNS configuration now has its own dedicated tool called systemd-resolved. So, to get my static DNS configured and working on this half-assed legacy networking setup, I used systemd-resolved, which is a two-step process:

1. Update the file /etc/systemd/resolved.conf with the corresponding DNS configuration; in my case it looks like this:

[Resolve]
DNS=192.168.1.10
DNS=192.168.1.11
DNS=8.8.8.8
Domains=rubyninja.org

2. Then finally restart the systemd-resolved service.

systemctl restart systemd-resolved

You can verify the DNS config using:

systemd-resolve --status

It wasn't as easy as I first imagined, but that said, this was the only inconvenience during my entire 12.04 to 18.04 upgrade.

Homelab Updates!

It's been well over a month since I finally decided to retire both of my Apple Mac Minis in favor of a single (for the time being), quieter, and more powerful Intel NUC.

Migrating my existing KVM and VirtualBox VMs over to my new KVM server was a really easy process. If doing the import manually, it's just a matter of selecting the existing vdi and qcow2 images as the source disks when creating the guest VMs on the server. In my case, however, I also had to update the MAC addresses, given that all of my VMs are configured to get their respective fixed IP addresses via my isc-dhcpd server.

This was somewhat of a fresh start, so I nuked a bunch of unused VMs that I had lingering around for testing purposes and only kept what I really need for now. At the time of this writing, these are the active VMs I use on my homelab:
proxy - Reverse proxy Varnish and Nginx (SSL termination)
dhcp - ISC-dhcpd and PXE server
database - MySQL and PostgreSQL server
monitor - Nagios, Graphite/Grafana
web - Apache
ns1 - Master BIND server
ns2 - Slave BIND server
git - GitLab and Subversion
ansible - Ansible and Puppet Configuration Management
build - Jenkins
logs - ELK stack

Future Plans:
I have lots of future plans for my homelab, like upgrading my BIND DNS servers to a new version and rolling out DNSSEC on my local network, upgrading my DHCP server (running a really old version of Debian), and rolling out 389 Directory Server (I have a love/hate relationship with OpenLDAP). These are just a few!

Annoying Ansible Gotcha

Ansible is by far my favorite configuration management tool; however, it certainly has its own unique quirks and annoyances. To start, I prefer Ansible's YAML/Jinja approach over Puppet's and Chef's custom DSL configurations.

Today I ran into an interesting YAML parsing quirk. It turns out that if you use a colon ':' followed by a space inside a string anywhere in your playbooks, YAML treats it as a key/value separator and Ansible will fail to parse the line properly.

Example playbook:

---
- hosts: 127.0.0.1
  tasks:
    - lineinfile: dest=/etc/sudoers regexp='^testuser ALL=' state=present line="testuser ALL=(ALL) NOPASSWD: TEST_PROGRAM" state=present

Running the playbook triggers the following error:

ERROR! Syntax Error while loading YAML.


The error appears to have been in '/etc/ansible/one_off_playbooks/example.yml': line 4, column 104, but may
be elsewhere in the file depending on the exact syntax problem.

The offending line appears to be:

  tasks:
    - lineinfile: dest=/etc/sudoers regexp='^testuser ALL=' state=present line="testuser ALL=(ALL) NOPASSWD: TEST_PROGRAM" state=present
                                                                                                       ^ here

Fix:
This is a known issue (https://github.com/ansible/ansible/issues/1341), and the easiest workaround is to force the colon ':' character to be evaluated by the Jinja templating engine:

{{':'}}
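
Applied to the example task above, the working version looks like this:

---
- hosts: 127.0.0.1
  tasks:
    - lineinfile: dest=/etc/sudoers regexp='^testuser ALL=' line="testuser ALL=(ALL) NOPASSWD{{':'}} TEST_PROGRAM" state=present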

The hilarious part of this is that it doesn't look like this stupid quirk is ever going to be fixed.
