
Running Ubuntu Server on an Intel NUC 10th i7

Late last year, I purchased a secondary Intel NUC 8th gen i3 for my homelab. My main goal was to use this secondary NUC primarily to learn Mesos and Kubernetes more in depth. Little did I know that the dual core i3 on the NUC was not powerful enough to run a simple ten node DC/OS cluster, let alone another Kubernetes cluster on the same machine. So I decided to wait until the new i7 10th generation Intel NUCs were released so I could upgrade.

The upgrade itself was not as easy as I first imagined. Both the RAM and hard drive were swapped from the old 8th gen NUC to the new 10th gen NUC. Ubuntu started up successfully and all the memory was properly recognized on the new machine, however networking was not working. My first thought was that since Linux was now running on new hardware, I needed to remove the old NIC's udev configuration. I soon realized that in the post-systemd world, we no longer need to do this. After a quick Google search, I found a Reddit post that outlined my exact problem: https://www.reddit.com/r/intelnuc/comments/eox6k1/caution_new_frost_canyon_nucs_have_an_integrated/

I was shocked to learn that the new 10th gen NUC's network card is so new that its driver isn't even in the latest Ubuntu Server LTS! Luckily, compiling and loading the newer e1000e driver was a really easy task. The only caveat was that I had to go into the UEFI BIOS to disable Secure Boot and allow 3rd party modules, otherwise the new kernel module would fail to load.
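For reference, building and loading the driver looks roughly like this (a sketch; the exact tarball version and download location from Intel's driver page will vary):

tar xzf e1000e-3.6.0.tar.gz
cd e1000e-3.6.0/src
sudo make install
sudo modprobe e1000e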

After a few hours of usage, the performance difference is complete night and day. The new 10th gen hex core i7 completely blows the 8th gen dual core i3 out of the water.


Ubuntu 18.04 LTS + Systemd + Netplan = Annoyance

Unless it's something that is supposed to help improve my workflow, I really hate change; especially when it involves changing something that worked perfectly fine.

I upgraded (fresh install) from Ubuntu Server LTS 12.04 to 18.04. Among the changes was the addition of systemd, which I don't mind to be honest, as I see it as a necessary evil. But I was shocked to see that the old traditional Debian networking configuration no longer works. Instead, networking is handled by a new utility called Netplan. Using Netplan for normal static networking configurations is not terrible, however in my use-case I needed to be able to create a new virtual interface for the shared KVM bridge networking config needed by my guest VMs.
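For what it's worth, Netplan does have a bridges: stanza. A minimal sketch of what the equivalent bridge config might look like (untested on my end; I wasn't able to find useful documentation for this at the time):

network:
  version: 2
  renderer: networkd
  ethernets:
    eno1:
      dhcp4: no
  bridges:
    br0:
      interfaces: [eno1]
      addresses: [192.168.1.25/24]
      gateway4: 192.168.1.1
      nameservers:
        addresses: [8.8.8.8, 192.168.1.10, 192.168.1.11]
      routes:
        - to: 192.168.0.0/24
          via: 192.168.1.1
      parameters:
        stp: false
        forward-delay: 0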

After about 30 minutes of trial and error (and not being able to find any useful documentation), I opted to revert to the old legacy networking configuration. The only problem is that reverting to my old 12.04 networking config was not quite as easy as simply copying over the old interfaces file. I had to do the following:

1. Remove all of the configs in /etc/netplan/

sudo rm /etc/netplan/*.yaml

2. Install the ifupdown utility

sudo apt install ifupdown

3. Populate your /etc/network/interfaces config. This is how mine looks (where eno1 is my physical interface):

# ifupdown has been replaced by netplan(5) on this system.  See
# /etc/netplan for current configuration.
# To re-enable ifupdown on this system, you can run:
#    sudo apt install ifupdown
#
auto lo
iface lo inet loopback

auto br0
iface br0 inet static
         address 192.168.1.25
         netmask 255.255.255.0
         dns-nameservers 8.8.8.8 192.168.1.10 192.168.1.11
         gateway 192.168.1.1
         # set static route for LAN
         post-up route add -net 192.168.0.0 netmask 255.255.255.0 gw 192.168.1.1
         bridge_ports eno1
         bridge_stp off
         bridge_fd 0
         bridge_maxwait 0
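
Note that the bridge_* options are handled by the bridge-utils package, so install it if it isn't already present. With ifupdown in place, restarting the legacy networking service applies the config:

sudo apt install bridge-utils
sudo systemctl restart networking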

After restarting the network service, my new shared interface was successfully created with the proper IP address and routing, however DNS was not configured. This is because DNS configuration now has its own dedicated tool, systemd-resolved. So, getting my static DNS configured and working on top of this half-ass legacy networking setup is a two step process:

1. Update the file /etc/systemd/resolved.conf with the corresponding DNS configuration. In my case it looks like this:

[Resolve]
DNS=192.168.1.10
DNS=192.168.1.11
DNS=8.8.8.8
Domains=rubyninja.org

2. Finally, restart the systemd-resolved service.

systemctl restart systemd-resolved

You can verify the DNS config using:

systemd-resolve --status

It wasn't as easy as I first imagined, but that said, this was the only inconvenience during my entire 12.04 to 18.04 upgrade.


Homelab Updates!

It's been well over a month since I finally decided to retire both of my Apple Mac Minis in favor of a single (for the time being), quieter, and more powerful Intel NUC.

Migrating my existing KVM and VirtualBox VMs over to my new KVM server was a really easy process. If doing the import manually, it's just a matter of selecting the existing vdi and qcow2 images as the source disks when creating the guest VMs on the server. In my case, however, I also had to update each guest's new MAC address, given that all of my VMs are configured to get their respective fixed IP addresses via my isc-dhcpd server.
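For reference, a manual import with virt-install looks roughly like this (hypothetical VM name, disk path, and MAC; the MAC should be the one registered in the dhcpd config):

virt-install --name web --memory 2048 --vcpus 2 \
  --disk /var/lib/libvirt/images/web.qcow2,format=qcow2 \
  --import --os-variant ubuntu18.04 \
  --network bridge=br0,mac=52:54:00:aa:bb:cc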

This was somewhat of a fresh start, so I nuked a bunch of unused VMs that I had lingering around for testing purposes, and only kept what I really need for now. At the time of this writing, these are the active VMs on my homelab:
  • proxy - Reverse proxy Varnish and Nginx (SSL termination)
  • dhcp - ISC-dhcpd and PXE server
  • database - MySQL and PostgreSQL server
  • monitor - Nagios, Graphite/Grafana
  • web - Apache
  • ns1 - Master BIND server
  • ns2 - Slave BIND server
  • git - GitLab and Subversion
  • ansible - Ansible and Puppet Configuration Management
  • build - Jenkins
  • logs - ELK stack

Future Plans:
I have lots of future plans for my homelab, like upgrading my BIND DNS servers to a new version and rolling out DNSSEC on my local network, upgrading my dhcp server (running a really old version of Debian), and rolling out 389 Directory Server (I have a love/hate relationship with OpenLDAP). And those are just a few!


Restoring access to Fedora after Ubuntu upgrade

I have a quad-boot OS installation on my primary laptop:

  • Ubuntu (primary OS)
  • Kali
  • Fedora
  • Windows 7

I decided to upgrade my Ubuntu install to the latest 15.04. As soon as the upgrade completed and the machine rebooted, I noticed the GRUB menu was no longer displaying my Fedora 21 environment. The problem was that I had installed Fedora under an LVM partition, while the other OSes weren't using LVM.

Restoring boot access to Fedora was fairly simple.

First, I had to install the lvm2 package in Ubuntu so it's able to view and configure the LVM volumes:

$ sudo apt-get install lvm2

Then I had to activate the Volume Group.

$ sudo vgchange -a y
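
At this point the Fedora logical volumes should be visible to Ubuntu; a quick sanity check:

$ sudo lvscan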

After activating the Volume Group, I was able to verify that Ubuntu could see my Fedora 21 install:

$ sudo os-prober
/dev/sda1:Windows 7 (loader):Windows:chain
/dev/sda6:Debian GNU/Linux (Kali Linux 1.0):Debian:linux
/dev/mapper/fedora-root:Fedora release 21 (Twenty One):Fedora:linux

So the last step was to generate a new grub config.

$ sudo grub-mkconfig -o /boot/grub/grub.cfg


Installing system-config-kickstart on Ubuntu

So, system-config-kickstart fails to start after the initial install.

Error:

$ system-config-kickstart
Traceback (most recent call last):
  File "/usr/share/system-config-kickstart/system-config-kickstart.py", line 92, in
    kickstartGui.kickstartGui(file)
  File "/usr/share/system-config-kickstart/kickstartGui.py", line 131, in __init__
    self.X_class = xconfig.xconfig(xml, self.kickstartData)
  File "/usr/share/system-config-kickstart/xconfig.py", line 80, in __init__
    self.fill_driver_list()
  File "/usr/share/system-config-kickstart/xconfig.py", line 115, in fill_driver_list
    raise RuntimeError, (_("Could not read video driver database"))
RuntimeError: Could not read video driver database

Fix:
Downgrade the hwdata package.

# apt-get remove hwdata
# wget ftp://mirror.ovh.net/mirrors/ftp.debian.org/debian/pool/main/h/hwdata/hw...
# dpkg -i hwdata_0.234-1_all.deb
# apt-mark hold hwdata
# apt-get install system-config-kickstart

This is a known bug in Ubuntu that has yet to be fixed:
https://bugs.launchpad.net/ubuntu/+source/system-config-kickstart/+bug/1260107
https://bugs.launchpad.net/ubuntu/+source/system-config-kickstart/+bug/1236315


Black background in all desktops after Ubuntu 13.10 upgrade

So I just upgraded my Dell XPS 13 laptop from Ubuntu 13.04 to 13.10, and immediately the first thing I noticed was that all of my desktops had a black background, and manually changing the background wallpaper had no effect. Turns out this is a common problem. In my case it turned out to be related to Gnome, which I found rather interesting given that a Gnome-specific setting would cause this in Unity.
Fix:

gsettings set org.gnome.settings-daemon.plugins.background active true

Reference: http://askubuntu.com/questions/287571/desktop-shows-a-white-or-black-bac...


Chef encountered an error attempting to create the client

So I'm finally starting to keep up with modern times and have started learning Chef more in depth. My goal is to completely automate and easily manage all of the virtual machine instances running on my home network.

Upon attempting to bootstrap my very first node, I received the following error:

ubuntu Creating a new client identity for ubuntu01 using the validator key.
ubuntu
ubuntu ===================================================================
ubuntu Chef encountered an error attempting to create the client "ubuntu01"
ubuntu ===================================================================
ubuntu
ubuntu
ubuntu Resource Not Found:
ubuntu -------------------
ubuntu The server returned a HTTP 404. This usually indicates that your chef_server_url is incorrect.
ubuntu
ubuntu
ubuntu
ubuntu Relevant Config Settings:
ubuntu -------------------------
ubuntu chef_server_url "https://chef.rubyninja.org:443"
ubuntu
ubuntu
ubuntu
ubuntu [2013-09-15T22:25:28-07:00] FATAL: Stacktrace dumped to /var/chef/cache/chef-stacktrace.out
ubuntu Chef Client failed. 0 resources updated
ubuntu [2013-09-15T22:25:28-07:00] FATAL: Chef::Exceptions::ChildConvergeError: Chef run process exited unsuccessfully (exit code 1)

This essentially means that the node is not able to communicate with the Chef server. In my case, it turned out that the ubuntu01 machine was not using my local DNS servers, so the chef.rubyninja.org lookup from the machine was failing.
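A quick way to confirm this from the node (assuming dig is available; my local DNS servers are 192.168.1.10 and 192.168.1.11):

dig +short chef.rubyninja.org

If that returns nothing, point the node at the local DNS servers (e.g. nameserver 192.168.1.10 in /etc/resolv.conf) and re-run the bootstrap.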


Can't locate local/lib.pm CPAN error on Ubuntu 12.04

So the default Perl installation that ships with Ubuntu 12.04 LTS does not include local::lib, which is necessary if you want to use CPAN.

Error:

Can't locate local/lib.pm in @INC (@INC contains: /home/tony/perl5/lib/perl5 /etc/perl /usr/local/lib/perl/5.14.2 /usr/local/share/perl/5.14.2 /usr/lib/perl5 /usr/share/perl5 /usr/lib/perl/5.14 /usr/share/perl/5.14 /usr/local/lib/site_perl /home/tony) at /usr/share/perl/5.14/CPAN/FirstTime.pm line 1300.

Fix:

sudo apt-get install liblocal-lib-perl

Reference: http://stackoverflow.com/questions/16702642/cant-locate-local-lib-pm-in-...


ZFS on Linux: Kernel updates

Just as I expected, updating the kernels of both the machine running VirtualBox and the ZFS-enabled Linux virtual machine running on it completed with absolutely no issues. I was originally more concerned about updating the host VirtualBox machine's kernel, given that I had never done this with the additional VirtualBox Extension Pack add-on installed. On the other hand, I wasn't too concerned about the ZFS kernel module, given that it was installed as part of a dkms kernel module rpm. Regardless of what people think about dkms modules, as a sysadmin who has worked with Linux systems that use them (proprietary ones included), it's certainly a relief knowing that little or no additional work is needed to rebuild the respective module after updating to a newer kernel.
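
After booting into the new kernel, a quick dkms status will confirm that the zfs and spl modules were rebuilt against it:

dkms status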


ZFS on Linux: Storage setup

For my media storage, I'm using a 500GB 5400 RPM USB drive. Since my Linux ZFS backup server is a virtual machine under VirtualBox, the VirtualBox Extension Pack add-on needs to be installed in order for the VM to be able to access the entire USB drive.

The VirtualBox Extension Pack for all versions can be found at http://download.virtualbox.org/virtualbox/ . It is important that the installed Extension Pack matches the version of VirtualBox itself.



[Screenshot: VirtualBox About dialog showing the installed version]

wget http://download.virtualbox.org/virtualbox/4.1.12/Oracle_VM_VirtualBox_Ex...
VBoxManage extpack install Oracle_VM_VirtualBox_Extension_Pack-4.1.12.vbox-extpack

Additionally, it is also important that the user VirtualBox runs under is a member of the vboxusers group.

groups tony
tony : tony adm cdrom sudo dip plugdev lpadmin sambashare
sudo usermod -G adm,cdrom,sudo,dip,plugdev,lpadmin,sambashare,vboxusers tony
groups tony
tony : tony adm cdrom sudo dip plugdev lpadmin sambashare vboxusers
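
Note: the same can be accomplished without re-listing the existing groups by using usermod's append flag:

sudo usermod -aG vboxusers tony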

Since my computer is already using two other 500GB external USB drives, I had to properly identify the drive that I wanted to use for my ZFS data. This was a really simple process (I don't give a flying fuck about sharing my drive's serial):

sudo hdparm -I /dev/sdd|grep Serial
Serial Number: J2260051H80D8C
Transport: Serial, ATA8-AST, SATA 1.0a, SATA II Extensions, SATA Rev 2.5, SATA Rev 2.6; Revision: ATA8-AST T13 Project D1697 Revision 0b

Now that I know the serial number of the USB drive, I can configure my VirtualBox Linux ZFS server VM to automatically use the drive.
[Screenshot: VirtualBox drive configuration]
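
I did this through the GUI, but for reference the CLI equivalent is roughly the following (the VM name is hypothetical; the serial is the one from hdparm above):

VBoxManage usbfilter add 0 --target "zfs-server" --name "backup drive" --serialnumber J2260051H80D8C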

At this point I'm about to use the 500 GB hard drive as /dev/sdb under my Linux ZFS server and use it to create ZFS pools and file systems.

zpool create backups /dev/sdb
zfs create backups/dhcp

Since I haven't used ZFS on Linux extensively before, I'm manually importing my ZFS pool after each reboot:

# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root
                      3.5G  1.6G  1.8G  47% /
tmpfs                 1.5G     0  1.5G   0% /dev/shm
/dev/sda1             485M   67M  393M  15% /boot
# zpool import
   pool: backups
     id: 15563678275580781179
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 config:

        backups     ONLINE
          sdb       ONLINE
# zpool import backups
# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root
                      3.5G  1.6G  1.8G  47% /
tmpfs                 1.5G     0  1.5G   0% /dev/shm
/dev/sda1             485M   67M  393M  15% /boot
backups               446G  128K  446G   1% /backups
backups/afs           447G  975M  446G   1% /backups/afs
backups/afs2          447G  750M  446G   1% /backups/afs2
backups/bashninja     448G  1.4G  446G   1% /backups/bashninja
backups/debian        449G  2.5G  446G   1% /backups/debian
backups/dhcp          451G  4.4G  446G   1% /backups/dhcp
backups/macbookair    446G  128K  446G   1% /backups/macbookair
backups/monitor       447G  880M  446G   1% /backups/monitor
backups/monitor2      446G  128K  446G   1% /backups/monitor2
backups/rubyninja.net
                      446G  128K  446G   1% /backups/rubyninja.net
backups/rubysecurity  447G  372M  446G   1% /backups/rubysecurity
backups/solaris       446G  128K  446G   1% /backups/solaris
backups/ubuntu        446G  128K  446G   1% /backups/ubuntu

