Cheatsheet – ZFS commands

http://thegeekdiary.com/solaris-zfs-command-line-reference-cheat-sheet/

Pool Related Commands

# zpool create datapool c0t0d0
    Create a basic pool named datapool
# zpool create -f datapool c0t0d0
    Force the creation of a pool
# zpool create -m /data datapool c0t0d0
    Create a pool with a different mount point than the default
# zpool create datapool raidz c3t0d0 c3t1d0 c3t2d0
    Create a RAID-Z vdev pool
# zpool add datapool raidz c4t0d0 c4t1d0 c4t2d0
    Add a RAID-Z vdev to pool datapool
# zpool create datapool raidz1 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0
    Create a RAID-Z1 pool
# zpool create datapool raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0
    Create a RAID-Z2 pool
# zpool create datapool mirror c0t0d0 c0t5d0
    Mirror c0t0d0 to c0t5d0
# zpool create datapool mirror c0t0d0 c0t5d0 mirror c0t2d0 c0t4d0
    Disk c0t0d0 is mirrored with c0t5d0, and disk c0t2d0 is mirrored with c0t4d0
# zpool add datapool mirror c3t0d0 c3t1d0
    Add a new mirrored vdev to datapool
# zpool add datapool spare c1t3d0
    Add spare device c1t3d0 to datapool
# zpool create -n geekpool c1t3d0
    Do a dry run of pool creation
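A quick worked example combining the commands above (disk names are placeholders; substitute your own devices): dry-run the layout first, create the mirrored pool, then attach a hot spare and confirm the result:

# zpool create -n datapool mirror c0t0d0 c0t5d0
# zpool create datapool mirror c0t0d0 c0t5d0
# zpool add datapool spare c1t3d0
# zpool status datapool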

Show Pool Information

# zpool status -x
    Show pool status, but only for pools with errors (healthy systems report "all pools are healthy")
# zpool status -v datapool
    Show individual pool status in verbose mode
# zpool list
    Show all the pools
# zpool list -o name,size
    Show particular properties of all the pools (here, name and size)
# zpool list -Ho name
    Show all pool names in scripted mode, without headers

File-system/Volume related commands

# zfs create datapool/fs1
    Create file system fs1 under datapool
# zfs create -V 1gb datapool/vol01
    Create a 1 GB volume (block device) in datapool
# zfs destroy -r datapool
    Destroy datapool and all datasets under it
# zfs destroy -fr datapool/data
    Destroy the file system or volume (data) and all related snapshots

Set ZFS file system properties

# zfs set quota=1G datapool/fs1
    Set a quota of 1 GB on file system fs1
# zfs set reservation=1G datapool/fs1
    Set a reservation of 1 GB on file system fs1
# zfs set mountpoint=legacy datapool/fs1
    Disable ZFS auto-mounting and enable mounting through /etc/vfstab (see the legacy mount example under Mount/Umount Related Commands)
# zfs set sharenfs=on datapool/fs1
    Share fs1 as NFS
# zfs set compression=on datapool/fs1
    Enable compression on fs1
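After setting properties, it is worth confirming what actually took effect; zfs get accepts a comma-separated list of properties (names as used above):

# zfs get quota,reservation,compression,sharenfs datapool/fs1
    Show each property's current value and its source (local, inherited, or default)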

Show file system info

# zfs list
    List all ZFS file systems
# zfs get all datapool
    List all properties of a ZFS file system

Mount/Umount Related Commands

# zfs set mountpoint=/data datapool/fs1
    Set the mount point of file system fs1 to /data
# zfs mount datapool/fs1
    Mount the fs1 file system
# zfs umount datapool/fs1
    Unmount ZFS file system fs1
# zfs mount -a
    Mount all ZFS file systems
# zfs umount -a
    Unmount all ZFS file systems
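For a file system set to mountpoint=legacy (see the properties section above), mounting is handled by the OS like any other file system. A minimal Solaris /etc/vfstab entry might look like this (the /data mount point is just an example):

datapool/fs1   -   /data   zfs   -   yes   -

The fields are: device to mount, device to fsck, mount point, FS type, fsck pass, mount at boot, and mount options.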

ZFS I/O performance

# zpool iostat 2
    Display ZFS I/O statistics every 2 seconds
# zpool iostat -v 2
    Display detailed ZFS I/O statistics every 2 seconds

ZFS maintenance commands

# zpool scrub datapool
    Run a scrub on all file systems under datapool
# zpool offline -t datapool c0t0d0
    Temporarily offline a disk (until the next reboot)
# zpool online datapool c0t0d0
    Online a disk and clear its error count
# zpool clear datapool
    Clear error counts without having to online a disk
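A typical maintenance pass combines a scrub with the status commands from earlier; the status output reports scrub progress and any checksum errors found:

# zpool scrub datapool
# zpool status -v datapool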

Import/Export Commands

# zpool import
    List pools available for import
# zpool import -a
    Import all pools found in the search directories
# zpool import -d
    Search for pools with block devices not located in /dev/dsk
# zpool import -d /zfs datapool
    Search for a pool with block devices created in /zfs
# zpool import oldpool newpool
    Import a pool originally named oldpool under the new name newpool
# zpool import 3987837483
    Import a pool using its pool ID
# zpool export datapool
    Export (deport) a ZFS pool named datapool
# zpool export -f datapool
    Force the unmount and export of a ZFS pool
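A common use of export/import is moving a pool between hosts: export it on the old system, move or re-attach the disks, then import it on the new one (hostnames here are illustrative):

node01# zpool export datapool
node02# zpool import datapool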

Snapshot Commands

# zfs snapshot datapool/fs1@12jan2014
    Create a snapshot named 12jan2014 of the fs1 file system
# zfs list -t snapshot
    List snapshots
# zfs rollback -r datapool/fs1@10jan2014
    Roll back to 10jan2014 (recursively destroying intermediate snapshots)
# zfs rollback -rf datapool/fs1@10jan2014
    Roll back, forcing unmount and remount of the file system
# zfs destroy datapool/fs1@10jan2014
    Destroy the snapshot created earlier
# zfs send datapool/fs1@oct2013 > /geekpool/fs1/oct2013.bak
    Take a backup of a ZFS snapshot locally
# zfs receive anotherpool/fs1 < /geekpool/fs1/oct2013.bak
    Restore from the snapshot backup taken above
# zfs send datapool/fs1@oct2013 | zfs receive anotherpool/fs1
    Combine the send and receive operations in one pipeline
# zfs send datapool/fs1@oct2013 | ssh node02 "zfs receive testpool/testfs"
    Send the snapshot to a remote system, node02
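For repeated backups, resending a full stream is wasteful. zfs send -i sends only the difference between two snapshots; a sketch using the snapshot names from the examples above (the receiving side must already hold the older snapshot):

# zfs send -i datapool/fs1@oct2013 datapool/fs1@nov2013 | ssh node02 "zfs receive testpool/testfs"
    Send only the blocks that changed between oct2013 and nov2013 to node02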

Clone Commands

# zfs clone datapool/fs1@10jan2014 datapool/fs1clone
    Clone an existing snapshot (the clone target must be a dataset name within a pool, not a path)
# zfs destroy datapool/fs1clone
    Destroy the clone

ZFS filesystem

What is it?

Deduplication is the process of eliminating duplicate copies of data. Dedup is generally file-level, block-level, or byte-level. Chunks of data (files, blocks, or byte ranges) are checksummed using a hash function that uniquely identifies the data with very high probability. When using a secure hash like SHA256, the probability of a hash collision is about 2^-256, on the order of 10^-77. For reference, this is 50 orders of magnitude less likely than an undetected, uncorrected ECC memory error on the most reliable hardware you can buy.

Chunks of data are remembered in a table of some sort that maps the data’s checksum to its storage location and reference count. When you store another copy of existing data, instead of allocating new space on disk, the dedup code just increments the reference count on the existing data. When data is highly replicated, which is typical of backup servers, virtual machine images, and source code repositories, deduplication can reduce space consumption not just by percentages, but by multiples.
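In ZFS, dedup is a per-dataset property. A minimal sketch of enabling it and checking the result (pool and dataset names follow the cheatsheet above):

# zfs set dedup=on datapool/fs1
    New writes to fs1 are deduplicated from this point on
# zpool get dedupratio datapool
    Show the pool-wide deduplication ratio achieved so far

The dedup table is consulted on every write, so keeping it in RAM matters; enabling dedup is a space/memory trade-off rather than a free win.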

Installing zfs on Ubuntu server 12.04

sudo apt-get -y install python-software-properties
sudo add-apt-repository ppa:zfs-native/stable
sudo apt-get update
sudo apt-cache search zfs
sudo apt-get install ubuntu-zfs

Note: this might take some time as it compiles the kernel module for your kernel

Run the zfs and zpool commands to make sure they work

sudo zfs
sudo zpool

No extra configuration is needed for ZFS on Ubuntu.

The zpool command configures ZFS storage pools; the zfs command configures ZFS file systems.

zpool list shows the total bytes of storage available in the pool (raw capacity).

zfs list shows the total bytes of storage available to the file system, after
redundancy is taken into account.

du shows the total bytes of storage used by a directory, after compression
and dedup are taken into account.

"ls -l" shows the logical (apparent) size of a file, before compression, dedup,
thin provisioning, and sparseness are taken into account; use "ls -s" or du to
see the blocks actually allocated.
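A quick way to compare these views on a live system (pool, dataset, and paths here are illustrative):

zpool list datapool
zfs list datapool/fs1
du -sh /data
ls -ls /data/file.img

In the ls -ls output, the size column is the logical length, while the leading blocks column (from -s) reflects what is actually allocated on disk.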

Link for reference:
https://blogs.oracle.com/bonwick/entry/zfs_dedup

Linux lvm – Logical Volume Manager

The Linux Logical Volume Manager (LVM) is a mechanism for virtualizing disks. It can create “virtual” disk partitions out of one or more physical hard drives, allowing you to grow, shrink, or move those partitions from drive to drive as your needs change.

The coolest part of Logical Volume Management is the ability to resize disks without powering off the machine or even interrupting service. Disks can be added and the volume groups can be extended onto the disks. This can be used in conjunction with software or hardware RAID as well.

Physical volumes are your physical disks or disk partitions, such as /dev/hda or /dev/hdb1. You combine multiple physical volumes into volume groups.

Volume groups are composed of physical volumes, and from them you create logical volumes which you can create/resize/remove and use. You can consider a volume group a “virtual partition” comprised of an arbitrary number of physical volumes. Ex: (VG1 = /dev/sda1 + /dev/sdb3 + /dev/sdc1)

Logical volumes are the volumes that you’ll ultimately end up mounting on your system. They can be added, removed, and resized on the fly. Since these are contained in volume groups, they can be bigger than any single physical volume you might have (i.e. four 5 GB drives can be combined into one 20 GB volume group, and you can then create two 10 GB logical volumes; see the sketch below).

PVLM: Physical volumes -> Volume group -> Logical volume -> Mount on filesystem 
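A minimal sketch of that 4 × 5 GB example (device names are placeholders). Note that a little space in each volume group goes to metadata, so the second volume can simply claim whatever is left with -l 100%FREE:

pvcreate /dev/sdb /dev/sdc /dev/sdd /dev/sde
vgcreate vg20 /dev/sdb /dev/sdc /dev/sdd /dev/sde
lvcreate -n lv1 --size 10G vg20
lvcreate -n lv2 -l 100%FREE vg20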

Create Physical volumes

pvcreate /dev/sda1
pvcreate /dev/sdb
pvcreate /dev/sdc2

Display attributes of a physical volume

pvdisplay
pvs

Create Volume group

vgcreate datavg /dev/sda1 /dev/sdb

add more physical volume to volume group

vgextend datavg /dev/sdc2

Display Volume group information

vgdisplay
vgs

Create Logical volume

lvcreate -n backup --size 500G datavg

This creates a logical volume named backup, 500 GB in size, in the datavg volume group.

Display Logical volume information

lvdisplay
lvs

Mount logical volume into filesystem

mkfs.ext4 /dev/datavg/backup
mkdir /srv/backup
mount /dev/datavg/backup /srv/backup

Append to /etc/fstab so the volume mounts at boot:

/dev/datavg/backup /srv/backup    ext4    defaults        0       2

A backup of the volume group metadata is kept at /etc/lvm/backup/datavg

Resize Logical volume while the system is online

# extend /dev/datavg/backup by 200 GB, using free space in the volume group
lvextend -L +200G /dev/datavg/backup

# or: extend by the amount of free space on physical volume /dev/sdc2
lvextend /dev/datavg/backup /dev/sdc2

# after either lvextend, grow the ext4 filesystem to fill the volume
resize2fs -p /dev/datavg/backup


Watch the resize progress with df -h
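On newer LVM versions, lvextend can run the filesystem resize for you via -r (--resizefs), assuming the matching filesystem tools are installed:

lvextend -r -L +200G /dev/datavg/backup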