August 6, 2022
Moved antoniobaltazar.com to GitHub Pages
by Alpha01
Since I recently shut down one of my Intel NUC homelab servers due to space constraints, and going forward I’ll be using public clouds for any testing that requires extensive computing, I was forced to migrate my portfolio off my Kubernetes platform. I’ve been in a GitHub Pages honeymoon, so it was my first choice to move the site to. Since the portfolio site is a simple Node app, the containerized app was already a complete static site. The only dynamic aspect of the application is the custom Gulp automation used to compile the Sass assets.
The only change I made to get the site to easily publish to GitHub Pages was restructuring the site files under the _site directory. By default, this is the directory used by the configure-pages, upload-pages-artifact, and deploy-pages actions. GitHub has awesome documentation; using their examples, I was able to quickly write a workflow to build the Node site and publish it to GitHub Pages. It was an extremely easy process! The difficult part was all my fault.
A while back, I consolidated my Blog, Photos, and Collection WordPress sites under the domain antoniobaltazar.com. This presented a problem: I can’t simply update DNS and point antoniobaltazar.com to GitHub Pages, because that would break access to my other WordPress sites. I use Cloudflare’s free DNS hosting, so I have very limited access to their Page Rules rewrite features. This meant I would still need to keep antoniobaltazar.com DNS pointing to my current infrastructure and handle the redirects there, rather than at the DNS level. Fortunately, setting up the redirects at the Varnish (HTTP, port 80) and Nginx (HTTPS, port 443) application level was extremely easy.
Nginx
On the Nginx side of things, I didn’t have to change anything in my configuration. Nginx serves as an SSL termination proxy in my environment, so all incoming antoniobaltazar.com HTTPS requests are automatically forwarded to my HTTP Varnish backend.
location / {
    proxy_pass http://my-varnish-backend/;

    ### Set headers ###
    proxy_set_header Accept-Encoding "";
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

    ### Most PHP, Python, Rails, Java apps can use this header ###
    #proxy_set_header X-Forwarded-Proto https;
    # This is better
    proxy_set_header X-Forwarded-Proto $scheme;

    add_header Front-End-Https on;
    add_header X-Powered-By "Unicorns";
    add_header X-hacker "Alpha01";

    client_max_body_size 10m;

    # These headers are only accessible internally
    #more_clear_headers 'x-generator';
    #more_clear_headers 'x-drupal-cache';

    # We expect the downstream servers to redirect to the right hostname, so don't do any rewrites here.
    proxy_redirect off;
}
Varnish
It’s in Varnish where all the magic happens. In the configuration, I’m simply redirecting all antoniobaltazar.com requests that don’t belong to my WordPress sites to the GitHub Pages location https://alpha01.github.io/antoniobaltazar.com.
sub vcl_recv {
    # Portfolio is now hosted on GitHub Pages
    if (req.http.host ~ "(?i)^(www.)?antoniobaltazar\.(com|org)" && (req.url !~ "/(blog|collection|photos)")) {
        set req.http.host = "https://alpha01.github.io";
        return (synth(750, req.http.host + "/antoniobaltazar.com" + req.url));
    }
}
sub vcl_synth {
    # Take care of custom redirects (synthetic responses from vcl_recv land here)
    if (resp.status == 750) {
        set resp.http.Location = resp.reason;
        set resp.status = 301;
        return (deliver);
    }
}
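A quick way to sanity-check the redirect from the outside (just a spot check; header casing and exact output will vary):

# Plain-HTTP requests hit Varnish directly on port 80
curl -sI http://antoniobaltazar.com/ | grep -i '^location'
# Expected, per the VCL above: Location: https://alpha01.github.io/antoniobaltazar.com/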
Not a pretty solution, but it does the job.
Tags: [github, varnish, nginx, jekyll]
August 5, 2022
Goodbye Drupal, Hello Jekyll
by Alpha01
Ever since the Drupal project announced that support for Drupal 7 was going to drop on November 1, 2023, I’ve been dreading the fact that I was going to be forced to upgrade to Drupal 9. This is mainly because I’m using some modules that I know are no longer actively developed, and I’ve made some changes to my theme that I know for certain will not be compatible with the new version of Drupal.
I’ve looked into static site generators like Harp and Surge in the past, but I didn’t seem to find a proper workflow with them to replace Drupal, or WordPress for that matter. While I’m not new to the GitHub Pages world, I am new to its built-in integration with Jekyll. Recently, I had to use this feature for a work project, and I must say the GitHub Pages built-in Jekyll integration is awesome! Jekyll is a fantastic piece of software. The documentation is comprehensive and easy to follow.
So I decided to migrate this site from Drupal 7 to Jekyll. The migration process was relatively painless. The Jekyll project has tons of exporters, including one for Drupal 7. The only caveat (though expected) was that the exported posts were using the content type and taxonomies set for Drupal. In Jekyll these are handled differently, so after a few global search-and-replace tasks, I was able to easily transform the exported data into usable, valid Jekyll content (roughly the front matter shape sketched below). As far as theming is concerned, I tried to follow a similar look and feel to the previous Drupal 7 site. Overall, as a person with decent (though at times limited, since JavaScript is not my forte) web development skills, customizing Jekyll has been a straightforward process. I would even say it’s easier than Drupal development!
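For illustration, this is roughly the front matter each migrated post needed to end up with; the layout name is the common Jekyll default and the values here are taken from this post, but the exact fields depend on the theme:

---
layout: post
title: "Goodbye Drupal, Hello Jekyll"
date: 2022-08-05
tags: [jekyll, github]
---

The post body itself is plain Markdown.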
Perhaps the only drawback of the built-in GitHub Pages and Jekyll integration is that you are limited to a fixed set of plugins and themes. This became evident when I was creating a custom pagination page: GitHub Pages uses an older version of the pagination plugin that isn’t compatible with the newer v2 version that most examples found when Google searching are written for! That said, nothing prevents you from using GitHub Actions to build and publish a Jekyll (or any other static site generator) site regardless of plugins or themes.
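For what it’s worth, the older jekyll-paginate v1 plugin that GitHub Pages bundles is configured entirely from _config.yml along these lines (the per-page count and path are arbitrary examples, not this site’s settings):

# _config.yml
plugins:
  - jekyll-paginate
paginate: 5
paginate_path: "/blog/page:num/"

The v2 plugin (jekyll-paginate-v2) is configured differently, via a pagination: block, which is why most newer examples don’t work on the stock GitHub Pages build.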
This site is now hosted on GitHub Pages, instead of on my homelab infrastructure.
Tags: [jekyll, github]
February 9, 2022
Goodbye CentOS 8, Hello Rocky Linux
by Alpha01
I’m over two months late to the deadline: support for CentOS 8 stopped on December 31, 2021, and the project is now focusing on the CentOS Stream 8 rolling-release distro.
Instead of converting my CentOS 8 system to Stream, I opted for the popular approach of just dumping CentOS in favor of Rocky Linux. The migration process itself was super easy. Note that my original CentOS 8 system was NOT running the latest version of CentOS prior to its end of life.
[root@mail ~]# cat /etc/centos-release
CentOS Linux release 8.4.2105
[root@mail ~]# sudo dnf update
CentOS Linux 8 - AppStream 219 B/s | 38 B 00:00
Error: Failed to download metadata for repo 'appstream': Cannot prepare internal mirrorlist: No URLs in mirrorlist
Instead of updating the repos to point to the CentOS vault archive repositories, I wanted to just try the migration from my running 8.4.2105 version. After all, this particular system is just a Postfix mail server, and if the migration botched it completely, I can easily recreate the mail server using my Ansible automation.
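For context, the script lives in the rocky-tools repository linked in the resources below; grabbing it looks roughly like this (the clone location is my own choice):

# Fetch the official migrate2rocky script
git clone https://github.com/rocky-linux/rocky-tools.git
cd rocky-tools/migrate2rocky
chmod +x migrate2rocky.sh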
The update process is just a matter of running the migrate2rocky.sh migration shell script.
[root@mail migrate2rocky]# ./migrate2rocky.sh -r
migrate2rocky - Begin logging at Tue 08 Feb 2022 04:20:57 PM PST.
Removing dnf cache
Preparing to migrate CentOS Linux 8 to Rocky Linux 8.
[...]
Done, please reboot your system.
A log of this installation can be found at /var/log/migrate2rocky.log
After a few minutes of applying the changes, and then some package updates, the migration script ended without any errors. After rebooting the system, I was able to SSH in normally, verify that the system was in a working state, and confirm Postfix was still working. The migration worked!
[root@mail ~]# cat /etc/redhat-release
Rocky Linux release 8.5 (Green Obsidian)
[root@mail ~]# cat /etc/rocky-release
Rocky Linux release 8.5 (Green Obsidian)
[root@mail ~]# cat /etc/rocky-release-upstream
Derived from Red Hat Enterprise Linux 8.5
I even used Ansible to verify which distribution and version it detected after the migration:
[root@mail ~]# ansible mail -m setup -a 'filter=ansible_distribution'
mail.rubyninja.org | SUCCESS => {
    "ansible_facts": {
        "ansible_distribution": "Rocky",
        "discovered_interpreter_python": "/usr/libexec/platform-python"
    },
    "changed": false
}
[root@mail ~]# ansible mail -m setup -a 'filter=ansible_distribution_version'
mail.rubyninja.org | SUCCESS => {
    "ansible_facts": {
        "ansible_distribution_version": "8.5",
        "discovered_interpreter_python": "/usr/libexec/platform-python"
    },
    "changed": false
}
The beauty of binary-compatible Linux distributions is that even though the updates come from newly repackaged packages, once applied they work flawlessly.
Ironically, over a decade ago I had to do a similar migration, from White Box Enterprise Linux to CentOS 3. At the time, I simply had to update the repo URLs, refresh yum, and pull down the latest updates from the CentOS 3 repos. All of which worked beautifully.
Resources
- https://github.com/rocky-linux/rocky-tools/tree/main/migrate2rocky
- https://www.cyberciti.biz/howto/migrate-from-centos-8-to-rocky-linux-conversion
Tags: [centos, rhel]
January 1, 2022
#100DaysOfCode Go
by Alpha01
It’s been well over 10 years since I’ve learned a new programming language. While I’ve flirted with JavaScript to a certain point, I never truly made an effort to learn it, given how horrifying that language is (there, I said it). My programming journey began with simple Bash shell scripting, and then Ruby in college. I won’t include Microsoft Visual Basic, which I took a couple of courses in, because quite frankly I don’t remember much of it. When I got my second job as a Linux sysadmin in early 2008, the focus shifted towards needing to learn PHP and Perl; so I did. Around that same time, seeing the popularity of Python, I also decided to learn Python. Throughout my tech career, I’ve extensively used Bash, Ruby, PHP, Perl, and Python in one way or another. So much so that I’m definitely comfortable using any of them, depending on the problem I want to solve, hence I’ve included them on my resumé.
Now, on January 1, 2022, as stated in my New Year’s resolution, I’ve made it an actual goal to learn Go in depth. It’s quite amazing to see how the tech industry has really embraced Go as one of its de facto languages. My background is mostly Linux DevOps, and for the past two years I’ve been working extensively with Kubernetes. Being involved in the Kubernetes world, I feel somewhat constrained by the fact that I’m not well versed in the Go programming language. That’s why I want to learn this powerful programming language.
I love the idea of the #100DaysOfCode challenge, as well as its community aspect. So I’ve decided that for the next 100 days, I’ll be learning Go. I’m going to be using Twitter for daily updates, as well as a weekly post on this blog, to keep myself accountable. All of my code will be on https://github.com/alpha01/100DaysOfCode-Go. I already have plans for some practical projects, such as writing a custom Kubernetes controller for a CRD using https://github.com/kubernetes/sample-controller.
For the study material, I’ll be reading the book The Go Programming Language and using the Udemy courses Learn How To Code: Google’s Go (golang) Programming Language and Go: The Complete Developer’s Guide (Golang).
Happy New Year, and happy Go hacking!
Tags: [go]
June 7, 2021
RIP Nagios
by Alpha01
It’s the end of an era, at least for me using Nagios, or Nagios Core to be exact. Unless you’ve been living under a rock, Prometheus has become the de facto tool when it comes to systems monitoring. While I stopped using Nagios professionally a few years ago, I still kept a Nagios server running in my homelab for internal monitoring alongside Prometheus. What kept me from fully dumping Nagios was having to migrate some of my custom alerts. However, this weekend I finally decided to give Nagios its final blow and migrate my custom alerts to Prometheus. With the help of the awesome Blackbox exporter, I was able to easily port over my custom HTTP and DNS alerts to Prometheus.
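As an illustration, this is a minimal sketch of the kind of Prometheus scrape job that drives HTTP probes through the Blackbox exporter; the exporter address and probe target below are placeholders, not my actual configuration:

# prometheus.yml (fragment)
scrape_configs:
  - job_name: 'blackbox-http'
    metrics_path: /probe
    params:
      module: [http_2xx]              # probe module defined in the Blackbox exporter config
    static_configs:
      - targets:
          - https://www.rubyninja.org # hypothetical site to probe
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target
      - source_labels: [__param_target]
        target_label: instance
      - target_label: __address__
        replacement: blackbox-exporter:9115   # assumed Blackbox exporter host:port

Alerting on the probe results then comes down to rules over metrics like probe_success, with a similar job using a dns module for the DNS checks.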
Like Nagios, I feel Prometheus also has a steep learning curve. However, overall I feel the benefits Prometheus brings, like its integration with cloud-native infrastructure, definitely outweigh the drawbacks of this awesome monitoring tool.
Tags: [prometheus, nagios]