ZFS Command


ZFS

ZFS :- The Zettabyte File System was introduced in the Solaris 10 release. Sun Microsystems spent many years and a large amount of money developing this combined filesystem and volume manager. ZFS has many useful features that traditional volume managers such as SVM and VxVM lack:

ZFS supports multi-terabyte disks, which SVM does not.

ZFS is a 128-bit filesystem, whereas UFS is 64-bit.

Advantages :-

1. Zpool capacity of 256 zettabytes

2. ZFS snapshots, clones, and sending/receiving of snapshots

3. Lightweight filesystem creation

4. Encryption

5. Software RAID

6. Data integrity

7. Integrated volume management (no additional volume manager needed)

Disadvantages :-

1. No way to reduce zpool capacity

2. Resilvering takes a long time in a RAID-Z pool.

(1) To create a simple zpool :-

# zpool create <zpool_name> <device_name>

# zpool create szpool c1t2d0

# zpool list szpool

NAME SIZE ALLOC FREE CAP HEALTH ALTROOT

szpool 89M 97K 88.9M 0% ONLINE -
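The `zpool list` fields shown above can also be consumed from a script. Below is a minimal sketch, assuming the pool names used in these examples; the sample output is hard-coded in a function so the parsing can be shown without a live system (in real use, pipe `zpool list -H -o name,health` in instead, and note the DEGRADED entry is invented for illustration):

```shell
#!/bin/sh
# Flag any pool whose HEALTH column is not ONLINE.
# Stand-in for `zpool list -H -o name,health` (tab-separated, no header);
# the DEGRADED entry is illustrative.
sample_zpool_list() {
  printf 'szpool\tONLINE\nmzpool\tONLINE\nrzpool\tDEGRADED\n'
}

sample_zpool_list | while IFS="$(printf '\t')" read -r name health; do
  [ "$health" = "ONLINE" ] || echo "WARNING: pool $name is $health"
done
```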


(2) To create a mirror zpool :-
# zpool create mzpool mirror c1t5d0 c1t6d0

# zpool list mzpool

NAME SIZE ALLOC FREE CAP HEALTH ALTROOT

mzpool 89M 97K 88.9M 0% ONLINE -

(3) To create a raidz zpool :-


# zpool create rzpool raidz c1t2d0 c1t1d0 c1t8d0

# zpool list rzpool

NAME SIZE ALLOC FREE CAP HEALTH ALTROOT

rzpool 266M 176K 266M 0% ONLINE -

(4) In this task, we are going to see how to create a new dataset under a zpool. This is like creating a new volume in VxVM :-

To create a ZFS dataset: you can see that after creating the volume, the dataset is automatically mounted on /szpool/vol1, and ZFS doesn't require any vfstab entry for it :-
bash-3.00# zfs create szpool/vol1

bash-3.00# zfs list |grep szpool

szpool 105K 56.9M 21K /szpool

szpool/vol1 21K 56.9M 21K /szpool/vol1

(5) To set a manual mount point: if you want to set a specific mount point for a ZFS dataset, use the command below :-
bash-3.00# zfs set mountpoint=/ora_vol1 szpool/vol1

bash-3.00# zfs list |grep szpool

szpool 115K 56.9M 22K /szpool

szpool/vol1 21K 56.9M 21K /ora_vol1

bash-3.00# df -h /ora_vol1
Filesystem size used avail capacity Mounted on

szpool/vol1 57M 21K 57M 1% /ora_vol1

(6) To share a dataset (a volume in VxVM, SVM etc.) through NFS: we can share the ZFS dataset by changing the sharenfs attribute :-
bash-3.00# zfs get sharenfs szpool/vol1

NAME PROPERTY VALUE SOURCE

szpool/vol1 sharenfs off default

bash-3.00# zfs set sharenfs=on szpool/vol1

bash-3.00# zfs get sharenfs szpool/vol1

NAME PROPERTY VALUE SOURCE

szpool/vol1 sharenfs on local

(7) To compress a dataset (a volume in VxVM, SVM etc.): ZFS has a built-in compression option, off by default. You can enable it using the zfs set command :-

bash-3.00# zfs get compression szpool/vol1

NAME PROPERTY VALUE SOURCE

szpool/vol1 compression off default

bash-3.00# zfs set compression=on szpool/vol1

bash-3.00# zfs get compression szpool/vol1

NAME PROPERTY VALUE SOURCE

szpool/vol1 compression on local
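Once compression is on, the read-only `compressratio` property reports the achieved ratio as a value like "1.85x". A small sketch for turning that into percent space saved; the ratio values used below are illustrative, not taken from this document:

```shell
#!/bin/sh
# Convert a `zfs get compressratio` value such as "1.85x" into the
# percentage of space saved: 100 * (1 - 1/ratio).
ratio_saved() {
  awk -v r="$1" 'BEGIN { printf "%.0f\n", 100 * (1 - 1 / (r + 0)) }'
}

ratio_saved 2.00x    # a 2.00x ratio means half the space was saved
```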

(8) To create a dataset under a dataset (a sub-volume within a volume) :-
bash-3.00# zfs create szpool/vol1/oraarch

bash-3.00# zfs list |grep ora


szpool/vol1 42K 56.9M 21K /ora_vol1

szpool/vol1/oraarch 21K 56.9M 21K /ora_vol1/oraarch

Difference between quota and reservation :-

You can use the quota property to set a limit on the amount of space a file system can use. In addition, you can use the reservation property to guarantee that some amount of space is available to a file system. Both properties apply to the dataset they are set on and all descendants of that dataset.
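When scripting around these properties, the size strings that `zfs list` and `zfs get` print (20M, 56.9M, 4.81G) usually need converting to bytes first. Here is a small helper, offered as a sketch rather than anything ZFS itself provides:

```shell
#!/bin/sh
# to_bytes: convert a ZFS-style size string (K/M/G/T suffix, binary
# multiples) to bytes, for comparing quota and reservation values.
to_bytes() {
  awk -v s="$1" 'BEGIN {
    n = s + 0                      # numeric prefix, e.g. 4.81
    u = substr(s, length(s), 1)    # last character is the unit
    if (u == "K") n *= 1024
    else if (u == "M") n *= 1024 ^ 2
    else if (u == "G") n *= 1024 ^ 3
    else if (u == "T") n *= 1024 ^ 4
    printf "%.0f\n", n
  }'
}

to_bytes 20M    # 20971520
```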

(9) Setting reservation to dataset :-


bash-3.00# zfs set reservation=20M szpool/vol1/oraarch

bash-3.00# zfs get reservation szpool/vol1/oraarch

NAME PROPERTY VALUE SOURCE

szpool/vol1/oraarch reservation 20M local

bash-3.00# zfs list |grep ora

szpool/vol1 20.0M 36.9M 23K /ora_vol1

szpool/vol1/oraarch 21K 56.9M 21K /ora_vol1/oraarch

(10) As you can see above, 20M is reserved for oraarch and this space can't be used by any other dataset. Setting a quota on a dataset :-
bash-3.00# zfs get quota szpool/vol1/oraarch

NAME PROPERTY VALUE SOURCE

szpool/vol1/oraarch quota none default

bash-3.00# zfs set quota=20M szpool/vol1/oraarch

bash-3.00# zfs get quota szpool/vol1/oraarch

NAME PROPERTY VALUE SOURCE


szpool/vol1/oraarch quota 20M local

bash-3.00# zfs list |grep ora

szpool/vol1 20.0M 36.9M 23K /ora_vol1

szpool/vol1/oraarch 21K 20.0M 21K /ora_vol1/oraarch
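A script can combine `zfs list` output with the quota to warn before a dataset fills up. A sketch follows; the tab-separated sample input stands in for `zfs list -H -o name,used,quota`, and the 18M usage figure is invented for illustration:

```shell
#!/bin/sh
# Print datasets that are using more than 80% of their quota.
# Reads lines shaped like `zfs list -H -o name,used,quota`.
quota_report() {
  awk -F'\t' '
    function bytes(s,  n, u) {
      n = s + 0; u = substr(s, length(s), 1)
      if (u == "K") n *= 1024
      else if (u == "M") n *= 1024 ^ 2
      else if (u == "G") n *= 1024 ^ 3
      return n
    }
    $3 != "none" && bytes($2) > 0.8 * bytes($3) {
      printf "%s is at %.0f%% of quota\n", $1, 100 * bytes($2) / bytes($3)
    }'
}

printf 'szpool/vol1\t21K\tnone\nszpool/vol1/oraarch\t18M\t20M\n' | quota_report
```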

(11) To check the zpool status :-


bash-3.00# zpool status

pool: szpool

state: ONLINE

scrub: none requested

config:

NAME STATE READ WRITE CKSUM

szpool ONLINE 0 0 0

c1t3d0 ONLINE 0 0 0

errors: No known data errors

pool: rzpool

state: ONLINE

scrub: none requested

config:

NAME STATE READ WRITE CKSUM

rzpool ONLINE 0 0 0

raidz1-0 ONLINE 0 0 0

c1t2d0 ONLINE 0 0 0

c1t1d0 ONLINE 0 0 0

c1t8d0 ONLINE 0 0 0

errors: No known data errors


pool: mzpool

state: ONLINE

scrub: none requested

config:

NAME STATE READ WRITE CKSUM

mzpool ONLINE 0 0 0

mirror-0 ONLINE 0 0 0

c1t5d0 ONLINE 0 0 0

c1t6d0 ONLINE 0 0 0

errors: No known data errors
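Non-zero READ/WRITE/CKSUM counters in this output are worth catching from cron. Below is a sketch that scans `zpool status` text for device rows with any error count; the sample lines (including the 3 checksum errors) are made up for illustration:

```shell
#!/bin/sh
# Print device names from `zpool status` whose READ, WRITE or CKSUM
# counters are non-zero. Device rows have 5 fields:
#   NAME STATE READ WRITE CKSUM
status_errors() {
  awk 'NF == 5 && $3 ~ /^[0-9]+$/ && ($3 + $4 + $5) > 0 { print $1 }'
}

printf '  c1t2d0  ONLINE  0 0 3\n  c1t1d0  ONLINE  0 0 0\n' | status_errors
```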

(12) smcwebserver remote access :-

To access the web-based ZFS admin portal, use the following link: "https://system-name:6789/zfs"

If you are not getting the webpage on your server, start the smcwebserver using the following command.

# /usr/sbin/smcwebserver start

If it is disabled, enable the service with the following command

# /usr/sbin/smcwebserver enable

Sometimes the smcwebserver cannot be accessed remotely. In this case, follow the steps below to enable remote access.

bash-3.00# svccfg -s svc:/system/webconsole setprop options/tcp_listen=true

bash-3.00# svcadm refresh svc:/system/webconsole

bash-3.00# svcs -a |grep web

disabled Apr_01 svc:/application/management/webmin:default


online 1:01:02 svc:/system/webconsole:console

bash-3.00# /usr/sbin/smcwebserver restart

Restarting Oracle Java(TM) Web Console Version 3.1 ...
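The `svcs` state can also be checked from a script before restarting. A sketch that extracts the state column for the webconsole service; the sample lines mirror the `svcs -a | grep web` output shown above:

```shell
#!/bin/sh
# Report the SMF state of the webconsole service, parsing output
# shaped like `svcs -a | grep web` (columns: STATE STIME FMRI).
webconsole_state() {
  awk '$3 ~ /system\/webconsole/ { print $1 }'
}

printf 'disabled Apr_01 svc:/application/management/webmin:default\nonline 1:01:02 svc:/system/webconsole:console\n' | webconsole_state
```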

(13) If you want to create a pool with a different mount point, use the following command :-

# zpool create -m /export/zfs home c1t0d0

In the next example, szpool is an existing two-way mirror that is transformed into a three-way mirror by attaching the new device, c2t1d0, to the existing device, c1t1d0.

# zpool attach szpool c1t1d0 c2t1d0

To remove the device again:

# zpool detach szpool c2t1d0

(14) To set auto-replace property on :-


# zpool set autoreplace=on wrkpool

To check property value

# zpool get autoreplace wrkpool

NAME PROPERTY VALUE SOURCE

wrkpool autoreplace on local

(15) Creating Emulated Volumes :-


# zfs create -V 5gb datapool/vol

To activate ZFS emulated volume as swap,

# swap -a /dev/zvol/dsk/datapool/vol

To change the volume size (here from 5 GB down to 2 GB):

# zfs set volsize=2g datapool/vol


(16) Creating ZFS Alternate Root Pools :-
# zpool create -R /mnt alt_pool c0t0d0

Here we create a pool named alt_pool with /mnt as its alternate root

# zfs list alt_pool

NAME USED AVAIL REFER MOUNTPOINT

alt_pool 32.5K 33.5G 8K /mnt/alt_pool

Importing Alternate Root Pools:

# zpool import -R /mnt alt_pool

# zpool list alt_pool

NAME SIZE USED AVAIL CAP HEALTH ALTROOT

alt_pool 33.8G 68.0K 33.7G 0% ONLINE /mnt

# zfs list alt_pool

NAME USED AVAIL REFER MOUNTPOINT

alt_pool 32.5K 33.5G 8K /mnt/alt_pool

(17) To check the pool integrity (like fsck in UFS) :-


# zpool scrub datapool

i.e. the pool name here is datapool

# zpool status -x

all pools are healthy
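`zpool status -x` prints exactly "all pools are healthy" when nothing is wrong, which makes it easy to alert on. A sketch of the check follows; the function takes the captured output as an argument so it can be shown without a live pool:

```shell
#!/bin/sh
# Alert unless the captured `zpool status -x` output says all is well.
# In real use: check_health "$(zpool status -x)"
check_health() {
  if [ "$1" = "all pools are healthy" ]; then
    echo "OK"
  else
    echo "ALERT: $1"
  fi
}

check_health "all pools are healthy"
```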

(18) To check the pool with detailed errors :-


# zpool status -v datapool
(19) Taking a Device Offline :-
# zpool offline datapool c0t0d0

bringing device 'c0t0d0' offline

# zpool online datapool c0t0d0

bringing device 'c0t0d0' online

(20) Replacing Devices :-


# zpool replace datapool c0t0d0 c0t0d1

In the above example, the previous device, c0t0d0, is replaced by c0t0d1

(21) IOSTAT :-
# zpool iostat

capacity operations bandwidth

pool used avail read write read write

---------- ----- ----- ----- ----- ----- -----

datapool 100G 20.0G 1.2M 102K 1.2M 3.45K

dozer 12.3G 67.7G 132K 15.2K 32.1K 1.20K
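The bandwidth columns carry K/M suffixes, so picking out the busiest pool takes a little unit handling. A sketch over the two sample rows above (column 6 is read bandwidth):

```shell
#!/bin/sh
# From `zpool iostat` data rows (pool used avail rops wops rbw wbw),
# print the pool with the highest read bandwidth.
busiest_pool() {
  awk 'function b(s,  n, u) {
         n = s + 0; u = substr(s, length(s), 1)
         if (u == "K") n *= 1024
         else if (u == "M") n *= 1024 ^ 2
         else if (u == "G") n *= 1024 ^ 3
         return n
       }
       b($6) > best { best = b($6); name = $1 }
       END { print name }'
}

printf 'datapool 100G 20.0G 1.2M 102K 1.2M 3.45K\ndozer 12.3G 67.7G 132K 15.2K 32.1K 1.20K\n' | busiest_pool
```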

(22) Exporting a Pool :-


# zpool export datapool

cannot unmount ’/export/home/eschrock’: Device busy

# zpool export -f datapool

Determining Available Pools to Import

# zpool import

pool: datapool
id: 3824973938571987430916523081746329

Importing Pools

# zpool import datapool

To delete dataset

# zfs destroy datapool/home/tabriz

(23) To rename dataset :-


# zfs rename datapool/home/kustarz datapool/home/kustarz_old

We mention the existing dataset name first, followed by the new name we want to give it.

To list all datasets under datapool/home/oracle3

# zfs list -r datapool/home/oracle3

NAME USED AVAIL REFER MOUNTPOINT

datapool/home/oracle3 26.0K 4.81G 10.0K /datapool/home/oracle3

datapool/home/oracle3/projects 16K 4.81G 9.0K /datapool/home/oracle3/projects

datapool/home/oracle3/projects/fs1 8K 4.81G 8K /datapool/home/oracle3/projects/fs1

datapool/home/oracle3/projects/fs2 8K 4.81G 8K /datapool/home/oracle3/projects/fs2

(24) Legacy Mount Points :-

# zfs set mountpoint=legacy datapool/home/eschrock

With a legacy mount point, ZFS no longer mounts the filesystem automatically. We need to make an entry in vfstab to mount the FS
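A vfstab entry for this dataset might look like the following (standard Solaris vfstab fields; the fsck device and fsck pass fields stay "-" for ZFS):

```
#device to mount       device to fsck  mount point  FS type  fsck pass  mount at boot  mount options
datapool/home/eschrock -               /mnt         zfs      -          yes            -
```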

If you want to mount it manually, use the following command

# mount -F zfs datapool/home/eschrock /mnt

The -a option can be used to mount all ZFS managed filesystems. Legacy
managed filesystems are not mounted.

# zfs mount -a

You can also share/unshare all ZFS filesystems on the system:

# zfs share -a

# zfs unshare datapool/home/tabriz

# zfs unshare -a

If the sharenfs property is off, then ZFS does not attempt to share or
unshare the filesystem at any time.

This allows the filesystem to be administered through traditional means


such as the /etc/dfs/dfstab file.

(25) Backing Up and Restoring ZFS Data :-

(zfs backup and zfs restore are the original names of these commands; later releases renamed them zfs send and zfs receive.)

# zfs backup datapool/web1@111505 > /dev/rmt/0

# zfs restore datapool/test2@today < /dev/rmt/0

# zfs rename datapool/test datapool/test.old

# zfs rename datapool/test2 datapool/test

# zfs rollback datapool/web1@111505

cannot rollback to 'datapool/web1@111505': more recent snapshots exist

use '-r' to force deletion of the following snapshots:

datapool/web1@now

# zfs rollback -r datapool/web1@111505

# zfs restore datapool/web1 < /dev/rmt/0

During the incremental restore process, the filesystem is unmounted and


cannot be accessed.

Remote Replication of a ZFS File System


# zfs backup datapool/sphere1@today | ssh newsys zfs restore sandbox/restfs@today

restoring backup of datapool/sphere1@today

into sandbox/restfs@today …

restored 17.8Kb backup in 1 seconds (17.8Kb/sec)

# zfs send -I pool/fs@snap1 pool/clone@snapA > /snaps/fsclonesnap-I
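A daily replication along these lines can be scripted. Below is a sketch: the host newsys and the target sandbox/restfs follow the example above, the commands are written with the later send/receive names, and the actual transfer lines are commented out so the date-stamped snapshot naming can be shown on its own:

```shell
#!/bin/sh
# Build a date-stamped snapshot name for daily replication of
# datapool/sphere1 (names follow the example in the text).
snap_name() {
  echo "datapool/sphere1@$(date +%Y%m%d)"
}

snap_name
# zfs snapshot "$(snap_name)"
# zfs send "$(snap_name)" | ssh newsys zfs receive sandbox/restfs
```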

(26) ZFS Snapshots and Clones :-


The following example creates a snapshot of datapool/home/ahrens that is
named friday.

# zfs snapshot datapool/home/ahrens@friday

# zfs destroy datapool/home/ahrens@friday

# zfs rename datapool/home/sphere1s@111205 datapool/home/sphere1s@today

Displaying and Accessing ZFS Snapshots

# ls /home/ahrens/.zfs/snapshot

tuesday wednesday thursday friday

Snapshots can be listed as follows:

# zfs list -t snapshot

NAME USED AVAIL REFER MOUNTPOINT

pool/home/ahrens@tuesday 13.3M – 2.13G –

# zfs send -Rv wrkpool@0311 > /net/remote-system/rpool/snaps/wrkpool.0311

sending from @ to wrkpool@0311

sending from @ to wrkpool/swap@0311

sending from @ to wrkpool/dump@0311

sending from @ to wrkpool/ROOT@0311


sending from @ to wrkpool/ROOT/zfsnv109BE@zfsnv1092BE

sending from @zfsnv1092BE to wrkpool/ROOT/zfsnv109BE@0311

sending from @ to wrkpool/ROOT/zfsnv1092BE@0311

(27) ZFS Clones :-


# zfs clone pool/ws/gate@yesterday pool/home/ahrens/bug123

The following example creates a cloned work space from the

projects/newproject@today snapshot for a temporary user as

projects/teamA/tempuser and then sets properties on the cloned work


space.

# zfs snapshot projects/newproject@today

# zfs clone projects/newproject@today projects/teamA/tempuser

# zfs set sharenfs=on projects/teamA/tempuser

# zfs set quota=5G projects/teamA/tempuser

Destroying a Clone

ZFS clones are destroyed with the zfs destroy command.

# zfs destroy pool/home/ahrens/bug123

Clones must be destroyed before the parent snapshot can be destroyed.
