IT notes

ZFS swap

To add more swap on a ZFS system, create a zvol with properties suited for swap:

# zfs create -V 64G -o org.freebsd:swap=on -o checksum=off -o compression=off -o dedup=off -o sync=disabled -o primarycache=none tank/swap2

Enable it:

# swapon /dev/zvol/tank/swap2

To resize it you could first disable it:

# swapoff /dev/zvol/tank/swap2

Then destroy it:

# zfs destroy tank/swap2
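Instead of destroying and recreating the zvol, it can also be grown in place; a sketch, assuming a new size of 128G (swap must be off while the volume is resized):

```shell
# Disable swap on the zvol before changing its size
swapoff /dev/zvol/tank/swap2
# Grow the zvol to the new size (128G is an example value)
zfs set volsize=128G tank/swap2
# Re-enable swap on the resized device
swapon /dev/zvol/tank/swap2
```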

ZFS send receive

To send a ZFS dataset as a non-root user, on the origin server allow users in group wheel to snapshot and send:

# zfs allow -g wheel send,snapshot,hold tank/foo

On the receiver, create the dataset and allow users in group wheel to receive:

# zfs create -o mountpoint=/foo tank/foo
# zfs allow -g wheel compression,mountpoint,create,mount,receive tank/foo
# umount /foo

On the origin server create the snapshot to send (a snapshot needs a name after @):

# zfs snapshot -r tank/foo@snap1

On the origin server send the dataset by using:
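The send step is typically a pipeline like the following; a sketch, where @snap1, user and receiver are placeholders for the snapshot name, the wheel-group user and the receiving host:

```shell
# Stream the snapshot over SSH into the dataset prepared on the receiver;
# -F lets the receive overwrite the freshly created empty dataset
zfs send tank/foo@snap1 | ssh user@receiver zfs receive -F tank/foo
```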


MooseFS

Create a small distributed file system using 2 servers, one acting as a master/chunkserver and the other only as a chunkserver.

Install the required packages:

# pkg install moosefs3-cgi moosefs3-cgiserv moosefs3-chunkserver moosefs3-client

For the master also add:

# pkg install moosefs3-master

On all servers add an entry for the mfsmaster to your /etc/hosts:

X.X.X.X mfsmaster

Before starting the master, create the file /var/mfs/metadata.mfs containing:

MFSM NEW

Create a pool to share:
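Seeding the metadata file and starting the master at boot can be sketched as follows; the mfs user/group and the mfsmaster rc service name are assumptions based on the moosefs3 FreeBSD ports:

```shell
# Seed the empty metadata file the master expects on first start
echo 'MFSM NEW' > /var/mfs/metadata.mfs
chown mfs:mfs /var/mfs/metadata.mfs
# Enable and start the master service (assumed rc script name)
sysrc mfsmaster_enable=YES
service mfsmaster start
```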


DELL PERC 6/i

If using a DELL PERC 6/i Integrated controller, configure each disk as a single-drive RAID-0 volume:

> mfiutil show config
mfi0 Configuration: 8 arrays, 8 volumes, 1 spares
    array 0 of 2 drives:
        drive 1 ( 1863G) ONLINE <ST2000DM006-2DM1 CC26 serial=Z4Z8BX6V> SATA
        drive 6 ( 1863G) ONLINE <ST2000DM006-2DM1 CC26 serial=Z4Z8BY35> SATA
    array 1 of 1 drives:
        drive 0 (  466G) ONLINE <SEAGATE ST500NM0001 PS07 serial=Z1M17ZJ0> SCSI-6
    array 2 of 1 drives:
        drive 2 (  466G) ONLINE <SEAGATE ST500NM0001 0002 serial=Z1M0DAVE> SCSI-6
    array 3 of 1 drives:
        drive 3 (  932G) ONLINE <SEAGATE ST1000NM0023 0004 serial=Z1W54HJV> SCSI-6
    array 4 of 1 drives:
        drive 4 (  466G) ONLINE <SEAGATE ST3500620SS MS04 serial=9QM5L94N> SAS
    array 5 of 1 drives:
        drive 5 (  466G) ONLINE <SEAGATE ST3500620SS MS04 serial=9QM5L9C4> SAS
    array 6 of 1 drives:
        drive 8 (  466G) ONLINE <SEAGATE ST3500620SS MS04 serial=9QM5L8DD> SAS
    array 7 of 1 drives:
        drive 9 (  466G) ONLINE <SEAGATE ST3500620SS MS04 serial=9QM5L824> SAS
    volume mfid0 (1863G) RAID-1 64K OPTIMAL <raid1> spans:
        array 0
    volume mfid1 (465G) RAID-0 64K OPTIMAL <r0> spans:
        array 1
    volume mfid2 (465G) RAID-0 64K OPTIMAL <r2> spans:
        array 2
    volume mfid3 (931G) RAID-0 64K OPTIMAL <r3> spans:
        array 3
    volume mfid4 (465G) RAID-0 64K OPTIMAL <r4> spans:
        array 4
    volume mfid5 (465G) RAID-0 64K OPTIMAL <r5> spans:
        array 5
    volume mfid6 (465G) RAID-0 64K OPTIMAL <r8> spans:
        array 6
    volume mfid7 (465G) RAID-0 64K OPTIMAL <r9> spans:
        array 7
    dedicated spare 7 ( 1863G) HOT SPARE <ST2000DM006-2DM1 CC26 serial=Z4Z8BPTG> SATA backs:
        array 0

This configuration has one RAID-1 volume built from three disks (two active drives plus a hot spare); every other disk is its own single-drive RAID-0 volume.
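The single-drive RAID-0 volumes above can be created with mfiutil's jbod mode, which builds one RAID-0 volume per listed drive; a sketch using the non-RAID-1 drive numbers from the output above:

```shell
# Create one single-drive RAID-0 volume for each listed physical drive
mfiutil create jbod 0 2 3 4 5 8 9
```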


FreeBSD ZFS disk image

Use VirtualBox to install FreeBSD using UFS. After FreeBSD is installed, update your sources and build a custom world and kernel based on your needs for the new image:

# cd /usr/src
# make -j4 buildworld buildkernel

Adjust -j4 to the number of CPU cores.

Use this script to create the image:

$ mkdir /raw && cd /raw
$ fetch --no-verify-peer https://raw.
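The -j value can be taken from the machine instead of hard-coding it; a small sketch using FreeBSD's hw.ncpu sysctl:

```shell
# Build with one make job per CPU core reported by the kernel
cd /usr/src
make -j"$(sysctl -n hw.ncpu)" buildworld buildkernel
```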