
ZFS on Linux: Kernel updates

Just as I expected, updating both the kernel of the host machine running VirtualBox and the kernel of the ZFS-enabled Linux virtual machine completed with absolutely no issues. I was originally more concerned about updating the VirtualBox host's kernel, since I had never done that before with the additional VirtualBox Extension Pack add-on installed. On the other hand, I wasn't too concerned about the ZFS kernel module, given that it was installed as part of a dkms kernel module rpm. Regardless of what people think about dkms modules, as a sysadmin who has worked with Linux systems that use them (proprietary ones included), it's certainly a relief knowing that little or no additional work is needed to rebuild the module after updating to a newer kernel.


ZFS on Linux: Stability Issues

So far I've had one stability issue on my backup virtual machine. Though I can't really blame ZFS for crashing my VM; instead, I believe this was a consequence of the VM running out of memory due to a large number of rsync jobs and the heavy I/O they caused on the ZFS drive.

After increasing the dedicated memory on my backup VM from 512 MB to 3.5 GB, and updating my rsync jobs to run with low process and I/O priority, I have yet to experience any more problems.

nice -n 19 ionice -c 3
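Here's how that prefix might look in practice; a hypothetical crontab entry (the schedule, host, and paths are placeholders, not from my actual setup):

```
# Hypothetical crontab entry: nightly rsync at the lowest CPU ("nice -n 19")
# and I/O ("ionice -c 3", the idle class) priority. Host and paths are placeholders.
30 2 * * * nice -n 19 ionice -c 3 rsync -a backuphost:/etc/dhcp/ /backups/dhcp/
```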


ZFS on Linux: Nagios check_zfs plugin

To monitor my ZFS pool, of course I'm using Nagios, duh. Nagios Exchange provides a check_zfs plugin written in Perl. http://exchange.nagios.org/directory/Plugins/Operating-Systems/Solaris/c...

Although the plugin was originally designed for Solaris and FreeBSD systems, I got it to work under my Linux system with very little modification. The code can be found in my SysAdmin-Scripts git repo on GitHub: https://github.com/alpha01/SysAdmin-Scripts/blob/master/nagios-plugins/c...

# su - nagios -c "/usr/local/nagios/libexec/check_zfs backups 3"
OK ZPOOL backups : ONLINE {Size:464G Used:11.1G Avail:453G Cap:2%}
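On the Nagios side, hooking the plugin in takes just a command definition (a sketch; the object names here are my own, and I'm assuming the pool name and a verbosity level are the two arguments, as in the run above):

```
# commands.cfg (sketch)
define command {
    command_name  check_zfs
    command_line  /usr/local/nagios/libexec/check_zfs $ARG1$ $ARG2$
}

# services.cfg (sketch; host_name is a placeholder)
define service {
    use                  generic-service
    host_name            backup-vm
    service_description  ZFS pool backups
    check_command        check_zfs!backups!3
}
```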


ZFS on Linux: Storage setup

For my media storage, I'm using a 500GB 5400 RPM USB drive. Since my Linux ZFS backup server is a virtual machine under VirtualBox, in order for the VM to access the entire USB drive, the VirtualBox Extension Pack add-on needs to be installed.

The VirtualBox Extension Pack for all versions can be found at http://download.virtualbox.org/virtualbox/ . It is important that the installed Extension Pack matches the version of VirtualBox itself.



[Image: VirtualBox About dialog showing the installed version]

wget http://download.virtualbox.org/virtualbox/4.1.12/Oracle_VM_VirtualBox_Ex...
VBoxManage extpack install Oracle_VM_VirtualBox_Extension_Pack-4.1.12.vbox-extpack

Additionally, it is also important that the user VirtualBox runs as is a member of the vboxusers group.

groups tony
tony : tony adm cdrom sudo dip plugdev lpadmin sambashare
sudo usermod -G adm,cdrom,sudo,dip,plugdev,lpadmin,sambashare,vboxusers tony
groups tony
tony : tony adm cdrom sudo dip plugdev lpadmin sambashare vboxusers

Since my computer already uses two other 500GB external USB drives, I had to properly identify the drive that I wanted to use for my ZFS data. This was a really simple process (I don't give a flying fuck about sharing my drive's serial).

sudo hdparm -I /dev/sdd|grep Serial
Serial Number: J2260051H80D8C
Transport: Serial, ATA8-AST, SATA 1.0a, SATA II Extensions, SATA Rev 2.5, SATA Rev 2.6; Revision: ATA8-AST T13 Project D1697 Revision 0b

Now that I know the serial number of the USB drive, I can configure my VirtualBox Linux ZFS server VM to automatically use the drive.
[Image: VirtualBox drive configuration for the VM]

At this point I'm ready to use the 500 GB hard drive as /dev/sdb on my Linux ZFS server, and use it to create ZFS pools and file systems.

zpool create backups /dev/sdb
zfs create backups/dhcp
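Since the whole point of this pool is backups, the snapshot workflow I'm planning to lean on looks roughly like this (a sketch against the dataset created above; the snapshot name is just an example):

```
# Take a cheap, read-only, point-in-time snapshot of the dataset.
zfs snapshot backups/dhcp@2013-03-20

# List snapshots; they only consume space as the live data diverges.
zfs list -t snapshot

# Roll the dataset back to the snapshot if a backup run goes bad.
zfs rollback backups/dhcp@2013-03-20
```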

Since I haven't used ZFS on Linux extensively before, I'm manually importing (and thus mounting) my ZFS pool after each reboot.

# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root
3.5G 1.6G 1.8G 47% /
tmpfs 1.5G 0 1.5G 0% /dev/shm
/dev/sda1 485M 67M 393M 15% /boot
# zpool import
pool: backups
id: 15563678275580781179
state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:

backups ONLINE
sdb ONLINE
# zpool import backups
# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root
3.5G 1.6G 1.8G 47% /
tmpfs 1.5G 0 1.5G 0% /dev/shm
/dev/sda1 485M 67M 393M 15% /boot
backups 446G 128K 446G 1% /backups
backups/afs 447G 975M 446G 1% /backups/afs
backups/afs2 447G 750M 446G 1% /backups/afs2
backups/bashninja 448G 1.4G 446G 1% /backups/bashninja
backups/debian 449G 2.5G 446G 1% /backups/debian
backups/dhcp 451G 4.4G 446G 1% /backups/dhcp
backups/macbookair 446G 128K 446G 1% /backups/macbookair
backups/monitor 447G 880M 446G 1% /backups/monitor
backups/monitor2 446G 128K 446G 1% /backups/monitor2
backups/rubyninja.net
446G 128K 446G 1% /backups/rubyninja.net
backups/rubysecurity 447G 372M 446G 1% /backups/rubysecurity
backups/solaris 446G 128K 446G 1% /backups/solaris
backups/ubuntu 446G 128K 446G 1% /backups/ubuntu
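Until I trust automatic mounting, a crude way to bring the pool back up at boot is a couple of lines appended to /etc/rc.local (a sketch; the zfs packages may also provide an init script that handles this properly):

```
# /etc/rc.local (sketch): import the backups pool and mount its datasets.
zpool import backups
zfs mount -a
```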


ZFS on Linux: Installation

Attending the ZFS Administration talk at SCALE 11x a couple of weeks ago got me interested in trying ZFS on Linux. The speaker mentioned that he uses ZFS on Linux on his production machines, which made me think that ZFS on Linux may finally be ready for everyday use. So I'm currently looking into using the ZFS snapshot feature for my personal local file backups.

For my Linux ZFS backup server, I'm using the latest CentOS 6. Below are the steps I took to get ZFS on Linux working.

yum install automake make gcc kernel-devel kernel-headers zlib zlib-devel libuuid libuuid-devel

Since the ZFS modules get built using dkms, the latest dkms package is needed. It can be downloaded from Dell's website at http://linux.dell.com/dkms/

wget http://linux.dell.com/dkms/permalink/dkms-2.2.0.3-1.noarch.rpm
rpm -ivh dkms-2.2.0.3-1.noarch.rpm

Now, the spl-modules-dkms-X rpms need to be installed.

wget http://archive.zfsonlinux.org/downloads/zfsonlinux/spl/spl-0.6.0-rc14.sr...
wget http://archive.zfsonlinux.org/downloads/zfsonlinux/spl/spl-modules-0.6.0...
wget http://archive.zfsonlinux.org/downloads/zfsonlinux/spl/spl-modules-dkms-...
rpm -ivh spl*.rpm

After the spl-modules-dkms-X rpms have been installed, the ZFS rpm packages can now be finally installed.

wget http://archive.zfsonlinux.org/downloads/zfsonlinux/zfs/zfs-0.6.0-rc14.sr...
wget http://archive.zfsonlinux.org/downloads/zfsonlinux/zfs/zfs-modules-0.6.0...
wget http://archive.zfsonlinux.org/downloads/zfsonlinux/zfs/zfs-modules-dkms-...
rpm -ivh zfs*.rpm

One thing that confused me was that after all the rpms were installed, the zfs and zpool binaries were nowhere to be found on my system. According to the documentation ( http://zfsonlinux.org/zfs-building-srpm.html ), the zfs-* rpm process should have built the kernel modules and installed them against my running kernel; however, this didn't appear to be the case.
Instead I had to do the following:

cd /usr/src/zfs-0.6.0
make
make install

After the install completed, both the zfs and zpool utilities were available and ready to use.


Creating large files in Solaris for testing purposes

In the Linux world, I use the dd utility to create files that need to be a certain size. Even though it works perfectly fine, it's kind of annoying figuring out the output file's final size, since that size is the product of the block size ("bs") and the number of blocks ("count").

For example, the following dd command creates a 300 MB file called 300mb-test-file. Each block is 1000 bytes, and I want a total of 300,000 blocks.
Formula: ( (1000 x 300000) / 1000000 ) = 300 MB

$ dd if=/dev/zero of=300mb-test-file bs=1000 count=300000
300000+0 records in
300000+0 records out
300000000 bytes (300 MB) copied, 2.0363 s, 147 MB/s
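The bs x count arithmetic above can also be done by the shell instead of by hand. A tiny helper (the function name dd_count is my own) that computes the count needed for a target size in MB at a given block size:

```shell
# Compute the dd "count" needed for a target size in MB (decimal megabytes,
# as dd reports them) at a given block size in bytes: count = MB * 1000000 / bs.
dd_count() {
    echo $(( $1 * 1000000 / $2 ))
}

dd_count 300 1000    # 300 MB at bs=1000 -> prints 300000
```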

Luckily, in the Solaris world this can be easily accomplished using the mkfile utility, without doing any conversion.
I used the mkfile utility to easily create test disk files to experiment with ZFS.

# mkfile 300m testdisk1
# mkfile 300m testdisk2
# ln -s /root/testdisk1 /dev/dsk/testdisk1
# ln -s /root/testdisk2 /dev/dsk/testdisk2
# zpool create tonytestpool mirror testdisk1 testdisk2
# zpool status tonytestpool
pool: tonytestpool
state: ONLINE
scan: none requested
config:

NAME STATE READ WRITE CKSUM
tonytestpool ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
testdisk1 ONLINE 0 0 0
testdisk2 ONLINE 0 0 0

errors: No known data errors
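Linux has no mkfile, but a rough stand-in is easy to sketch with dd (the helper name mkfile_mb is my own; mkfile's "m" suffix means mebibytes, hence bs=1048576):

```shell
# Rough Linux stand-in for Solaris mkfile: write SIZE_MB MiB of zeros
# to FILENAME. Usage: mkfile_mb SIZE_MB FILENAME
mkfile_mb() {
    dd if=/dev/zero of="$2" bs=1048576 count="$1" 2>/dev/null
}

mkfile_mb 300 testdisk1    # creates a 300 MiB file, like "mkfile 300m"
```

(For sparse files, the behavior of mkfile -n, truncate -s would be the closer equivalent.)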
