# -*- coding: latin-1 -*-

4K disks with boot, swap, and zfs partitions on UEFI
====================================================

# Copyright © 2016, 2020, Trond Endrestøl
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this
#    list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
#    this list of conditions and the following disclaimer in the documentation
#    and/or other materials provided with the distribution.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR
# ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
# (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
# ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
# SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

WORK IN PROGRESS! DO NOT USE UNLESS YOU KNOW WHAT YOU ARE DOING!

See below for disks with pure data pools.

For PATA/SATA use ada0, ada1, ada2, etc.
For SCSI/SAS use da0, da1, da2, etc.

Create ZFS partitions a few megabytes shorter than your original hard
drives to accommodate future replacement drives lacking a couple of
disk blocks, unless you're using enterprise grade hard drives.
[Is 100 MiB too much?]
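The 100 MiB margin mentioned above is easy to sanity-check in plain POSIX sh and awk; the free-segment line below is made up for illustration:

```shell
#!/bin/sh
# 100 MiB expressed in 512-byte sectors: 100 * 1024 * 1024 / 512 = 204800.
MARGIN=$((100 * 1024 * 1024 / 512))
echo "margin: ${MARGIN} sectors"

# The ZROOTSIZE recipes below run awk over the free segment reported by
# 'gpart show'; here the same awk step runs on a canned free-space line
# from a hypothetical disk:
FREE="  8390696  1945134432  - free -  (928G)"
ZROOTSIZE=$(echo "$FREE" | awk -v m="$MARGIN" '{print $2 - m}')
echo "zroot size: ${ZROOTSIZE} sectors"
```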
When using SSDs, you should probably align your boot, swap, and ZFS
partitions on 1 MiB or 2 MiB boundaries to account for the
read-erase-write block size employed by SSDs and other flash memory.
Replace -a 4K with -a 1M in the commands below.

Try to organize large pools into multiple raidz{1,2,3} vdevs, i.e.
RAID 5+0, 6+0, "7+0", using as few drives as possible per vdev. This
helps minimize the time needed for resilvering when replacing drives.
Three drives per raidz1 vdev, six drives per raidz2 vdev, and eleven
drives per raidz3 vdev should be sufficient to use 2/3 of the storage
capacity for data and the remaining 1/3 for redundancy/parity.
[I guess nine drives should be sufficient per raidz3 vdev, but Matt
Ahrens mentioned eleven drives in his blog post.] However, ZFS places
no restriction on the number of drives per vdev.

When using external disk shelves, use disk #0 from each shelf for the
first vdev, then disk #1 from each shelf for the second vdev, then
disk #2 from each shelf for the third vdev, etc. This way a fault
within a single shelf costs only one drive per vdev, enabling the
system to limp along using the remaining shelves and drives.

When using raidz1, you should really consider configuring spare
drives. Provide as many spares as you have vdev groups, and spread
them across the disk shelves. Place at least two spares per shelf,
maybe even three, just in case a whole shelf disappears.

Make sure you know how many drives are visible from the perspective of
the firmware and the boot loaders when attempting to load the
operating system from a zpool with many members. A mirrored
configuration for the boot pool and/or root pool should suffice in
most cases.
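The 2/3 data, 1/3 parity claim above can be checked with a little shell arithmetic (integer percentages only; actual usable space is further reduced by ZFS metadata):

```shell
#!/bin/sh
# Rough data fraction per vdev layout: (total disks - parity disks) / total.
usable() { echo $(( ($1 - $2) * 100 / $1 )); }   # $1=disks per vdev, $2=parity

echo "raidz1, 3 disks:  $(usable 3 1)% data"
echo "raidz2, 6 disks:  $(usable 6 2)% data"
echo "raidz3, 9 disks:  $(usable 9 3)% data"
echo "raidz3, 11 disks: $(usable 11 3)% data"
```

The first three layouts all land at 66% data (the 2/3 mentioned above); eleven-disk raidz3 keeps a little more, about 72%.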
You may verify that ashift has indeed been set to 12 using commands like:

zdb -e zroot | grep ashift
zdb -e zdata | grep ashift

Make a backup of the GPT for each drive:

gpart backup ada0 > gpart.ada0.txt

Print these text files and keep them near the server. That way you can
at least recreate the files describing the partitioning, and
subsequently recreate the partitioning. Excess whitespace can be
omitted while typing. You can later restore the partitioning and
labels:

gpart restore -l ada0 < gpart.ada0.txt

To quote http://www.freebsd.org/doc/handbook/bsdinstall-partitioning.html,
and pay special attention to the last sentence:

  Tip: Proper sector alignment provides the best performance, and
  making partition sizes even multiples of 4K bytes helps to ensure
  alignment on drives with either 512-byte or 4K-byte sectors.
  Generally, using partition sizes that are even multiples of 1M or 1G
  is the easiest way to make sure every partition starts at an even
  multiple of 4K. One exception: at present, the freebsd-boot
  partition should be no larger than 512K due to boot code limitations.

End quote.

Note: A freebsd-boot partition is not the same as an efi (boot) partition.

Protip: Consider adding the hostname to the pool names. E.g.:

Hostname: hostname.some.domain
Poolname: hostname_zroot
Poolname: hostname_zdata

Protip 2: If you boot between multiple operating systems, you can
either use multiple EFI System Partitions (ESPs) or a common ESP.
Install the least cooperative OS first, e.g. Microsoft Windows. Make
sure neither operating system overwrites the commonly named bootloader
/EFI/BOOT/BOOTX64.EFI. Make extra copies of each OS's bootloader.

Consider using a boot manager instead of, or in addition to, the EFI
firmware's boot menu. rEFInd serves such a purpose. See
http://www.rodsbooks.com/refind/ for more information. GRUB 2 is an
alternative boot manager. See http://www.gnu.org/software/grub/ for
more information.
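A quick way to check the handbook's alignment advice: on a drive with 512-byte sectors, a partition boundary is 4K-aligned when its start sector is divisible by 8, and 1M-aligned when divisible by 2048. A small sketch with example sector numbers:

```shell
#!/bin/sh
# Report whether a start sector (in 512-byte sectors) falls on a boundary.
# 4K boundary => divisible by 8; 1M boundary => divisible by 2048.
aligned() { [ $(($1 % $2)) -eq 0 ] && echo yes || echo no; }

echo "start 40   on 4K: $(aligned 40 8)"       # 40 = 5 * 8
echo "start 34   on 4K: $(aligned 34 8)"       # 34 is the first usable GPT sector
echo "start 2048 on 1M: $(aligned 2048 2048)"
```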
Protip 3: Browsing or reading the UEFI specification is highly
recommended. It's currently at version 2.8A. See
https://www.uefi.org/specifications for more information. Read as a
minimum chapter 1 (Introduction) and chapter 2 (Overview) up to
section 2.2.2 (Runtime Services), the entire chapter 5 (GUID Partition
Table (GPT) Disk Layout), and the entire section 13.3 (File System
Format). This amounts to about 38 pages to read.

Protip 4: You are better off creating your own ESP of a decent size,
say 64 MiB. The minimum for FAT32 is 32 MiB with 65525 clusters and
512 byte block size, or 256 MiB with 65525 clusters and 4K block size.
In the future /boot/loader.efi will replace the use of
/boot/boot1.efi. The former is already well over 500K, and I highly
recommend keeping a couple of spare copies of the bootloader on the
ESP. An ESP of 800K is way too small to accommodate even two copies,
let alone three. E.g.:

gpart add -a 4K -s 128M -t efi -l esp0 ada0
newfs_msdos -A -F 32 -L ESP0 -c 1 /dev/gpt/esp0
mkdir -p /esp0
mount_msdosfs -m 644 -M 755 /dev/gpt/esp0 /esp0
mkdir -p /esp0/EFI/BOOT
mkdir -p /esp0/EFI/FreeBSD
cp -p /boot/loader.efi /esp0/EFI/BOOT/BOOTx64.EFI
cp -p /boot/loader.efi /esp0/EFI/BOOT/BOOTx64.old.EFI
cp -p /boot/loader.efi /esp0/EFI/BOOT/BOOTx64.good.EFI
cp -p /boot/boot1.efi /esp0/EFI/FreeBSD/boot1.efi
cp -p /boot/boot1.efi /esp0/EFI/FreeBSD/boot1.old.efi
cp -p /boot/boot1.efi /esp0/EFI/FreeBSD/boot1.good.efi
cp -p /boot/loader.efi /esp0/EFI/FreeBSD/loader.efi
cp -p /boot/loader.efi /es0/EFI/FreeBSD/loader.old.efi
cp -p /boot/loader.efi /esp0/EFI/FreeBSD/loader.good.efi
echo BOOTx64.EFI > /esp0/EFI/BOOT/STARTUP.NSH
umount /esp0

Use rEFInd or GRUB to give you a choice of loader.efi, loader.old.efi,
or loader.good.efi at boot time. Copy *.efi to *.old.efi prior to
manually installing a new version of the bootloader. Copy *.efi to
*.good.efi only when new functionality is needed, e.g. new pool
features, and the newest bootloader is verified to work correctly.

Protip 5: Consider using a dedicated disk, or two, for swap, and
possibly dump partitions. This eases the partitioning of pure ZFS
disks, rootpools and datapools alike.

Single-disk configuration:
--------------------------

hostname hostname.some.domain

kldload zfs
sysctl vfs.zfs.min_auto_ashift=12

gpart create -s gpt ada0

gpart add -a 4K -s 800K -t efi -l esp0 ada0
gpart add -a 4K -s 4G -t freebsd-swap -l swap0 ada0
gpart add -a 4K -t freebsd-zfs -l zroot0 ada0
!!
!! OR
!!
!! Subtract 100 MiB, aka 204800 * 512B, from the remaining capacity:
!! ZROOTSIZE=`gpart show ada0 | tail -2 | head -1 | awk '{print $2-204800}'`
!! gpart add -a 4K -s $ZROOTSIZE -t freebsd-zfs -l zroot0 ada0

gpart bootcode -b /boot/pmbr -p /boot/boot1.efifat -i 1 ada0

swapon /dev/gpt/swap0

zpool create -o autoexpand=on -o autoreplace=on -o cachefile=/tmp/zpool.cache -o failmode=continue -O mountpoint=legacy zroot gpt/zroot0

Maybe you should consider setting copies=2 for select vital
filesystems, or even from the top file system and down (it's an
inherited property). E.g.:

zpool create -o autoexpand=on -o autoreplace=on -o cachefile=/tmp/zpool.cache -o failmode=continue -O copies=2 -O mountpoint=legacy zroot gpt/zroot0

zpool export zroot
zpool import -d /dev/gpt -o cachefile=/tmp/zpool.cache zroot

Mirrored configuration using two disks:
---------------------------------------

hostname hostname.some.domain

kldload zfs
sysctl vfs.zfs.min_auto_ashift=12

gpart create -s gpt ada0
gpart create -s gpt ada1

gpart add -a 4K -s 800K -t efi -l esp0 ada0
gpart add -a 4K -s 800K -t efi -l esp1 ada1

gpart add -a 4K -s 4G -t freebsd-swap -l swap0 ada0
gpart add -a 4K -s 4G -t freebsd-swap -l swap1 ada1

gpart add -a 4K -t freebsd-zfs -l zroot0 ada0
gpart add -a 4K -t freebsd-zfs -l zroot1 ada1
!!
!! OR
!!
!! Subtract 100 MiB, aka 204800 * 512B, from the remaining capacity:
!! ZFSROOTSIZE=`gpart show ada0 | tail -2 | head -1 | awk '{print $2-204800}'`
!! gpart add -a 4K -s ${ZFSROOTSIZE} -t freebsd-zfs -l zroot0 ada0
!! gpart add -a 4K -s ${ZFSROOTSIZE} -t freebsd-zfs -l zroot1 ada1

gpart bootcode -b /boot/pmbr -p /boot/boot1.efifat -i 1 ada0
gpart bootcode -b /boot/pmbr -p /boot/boot1.efifat -i 1 ada1

swapon /dev/gpt/swap0
swapon /dev/gpt/swap1

zpool create -o autoexpand=on -o autoreplace=on -o cachefile=/tmp/zpool.cache -o failmode=continue -O mountpoint=legacy zroot mirror gpt/zroot0 gpt/zroot1

zpool export zroot
zpool import -d /dev/gpt -o cachefile=/tmp/zpool.cache zroot

raidz-1 configuration using three disks:
----------------------------------------

hostname hostname.some.domain

kldload zfs
sysctl vfs.zfs.min_auto_ashift=12

gpart create -s gpt ada0
gpart create -s gpt ada1
gpart create -s gpt ada2

gpart add -a 4K -s 800K -t efi -l esp0 ada0
gpart add -a 4K -s 800K -t efi -l esp1 ada1
gpart add -a 4K -s 800K -t efi -l esp2 ada2

gpart add -a 4K -s 4G -t freebsd-swap -l swap0 ada0
gpart add -a 4K -s 4G -t freebsd-swap -l swap1 ada1
gpart add -a 4K -s 4G -t freebsd-swap -l swap2 ada2

gpart add -a 4K -t freebsd-zfs -l zroot0 ada0
gpart add -a 4K -t freebsd-zfs -l zroot1 ada1
gpart add -a 4K -t freebsd-zfs -l zroot2 ada2
!!
!! OR
!!
!! Subtract 100 MiB, aka 204800 * 512B, from the remaining capacity:
!! ZFSROOTSIZE=`gpart show ada0 | tail -2 | head -1 | awk '{print $2-204800}'`
!! gpart add -a 4K -s ${ZFSROOTSIZE} -t freebsd-zfs -l zroot0 ada0
!! gpart add -a 4K -s ${ZFSROOTSIZE} -t freebsd-zfs -l zroot1 ada1
!! gpart add -a 4K -s ${ZFSROOTSIZE} -t freebsd-zfs -l zroot2 ada2

gpart bootcode -b /boot/pmbr -p /boot/boot1.efifat -i 1 ada0
gpart bootcode -b /boot/pmbr -p /boot/boot1.efifat -i 1 ada1
gpart bootcode -b /boot/pmbr -p /boot/boot1.efifat -i 1 ada2

swapon /dev/gpt/swap0
swapon /dev/gpt/swap1
swapon /dev/gpt/swap2

zpool create -o autoexpand=on -o autoreplace=on -o cachefile=/tmp/zpool.cache -o failmode=continue -O mountpoint=legacy zroot raidz1 gpt/zroot0 gpt/zroot1 gpt/zroot2

zpool export zroot
zpool import -d /dev/gpt -o cachefile=/tmp/zpool.cache zroot

raidz-2 configuration using six disks:
--------------------------------------

hostname hostname.some.domain

kldload zfs
sysctl vfs.zfs.min_auto_ashift=12

gpart create -s gpt ada0
gpart create -s gpt ada1
gpart create -s gpt ada2
gpart create -s gpt ada3
gpart create -s gpt ada4
gpart create -s gpt ada5

gpart add -a 4K -s 800K -t efi -l esp0 ada0
gpart add -a 4K -s 800K -t efi -l esp1 ada1
gpart add -a 4K -s 800K -t efi -l esp2 ada2
gpart add -a 4K -s 800K -t efi -l esp3 ada3
gpart add -a 4K -s 800K -t efi -l esp4 ada4
gpart add -a 4K -s 800K -t efi -l esp5 ada5

gpart add -a 4K -s 4G -t freebsd-swap -l swap0 ada0
gpart add -a 4K -s 4G -t freebsd-swap -l swap1 ada1
gpart add -a 4K -s 4G -t freebsd-swap -l swap2 ada2
gpart add -a 4K -s 4G -t freebsd-swap -l swap3 ada3
gpart add -a 4K -s 4G -t freebsd-swap -l swap4 ada4
gpart add -a 4K -s 4G -t freebsd-swap -l swap5 ada5

gpart add -a 4K -t freebsd-zfs -l zroot0 ada0
gpart add -a 4K -t freebsd-zfs -l zroot1 ada1
gpart add -a 4K -t freebsd-zfs -l zroot2 ada2
gpart add -a 4K -t freebsd-zfs -l zroot3 ada3
gpart add -a 4K -t freebsd-zfs -l zroot4 ada4
gpart add -a 4K -t freebsd-zfs -l zroot5 ada5
!!
!! OR
!!
!! Subtract 100 MiB, aka 204800 * 512B, from the remaining capacity:
!! ZFSROOTSIZE=`gpart show ada0 | tail -2 | head -1 | awk '{print $2-204800}'`
!! gpart add -a 4K -s ${ZFSROOTSIZE} -t freebsd-zfs -l zroot0 ada0
!! gpart add -a 4K -s ${ZFSROOTSIZE} -t freebsd-zfs -l zroot1 ada1
!! gpart add -a 4K -s ${ZFSROOTSIZE} -t freebsd-zfs -l zroot2 ada2
!! gpart add -a 4K -s ${ZFSROOTSIZE} -t freebsd-zfs -l zroot3 ada3
!! gpart add -a 4K -s ${ZFSROOTSIZE} -t freebsd-zfs -l zroot4 ada4
!! gpart add -a 4K -s ${ZFSROOTSIZE} -t freebsd-zfs -l zroot5 ada5

gpart bootcode -b /boot/pmbr -p /boot/boot1.efifat -i 1 ada0
gpart bootcode -b /boot/pmbr -p /boot/boot1.efifat -i 1 ada1
gpart bootcode -b /boot/pmbr -p /boot/boot1.efifat -i 1 ada2
gpart bootcode -b /boot/pmbr -p /boot/boot1.efifat -i 1 ada3
gpart bootcode -b /boot/pmbr -p /boot/boot1.efifat -i 1 ada4
gpart bootcode -b /boot/pmbr -p /boot/boot1.efifat -i 1 ada5

swapon /dev/gpt/swap0
swapon /dev/gpt/swap1
swapon /dev/gpt/swap2
swapon /dev/gpt/swap3
swapon /dev/gpt/swap4
swapon /dev/gpt/swap5

zpool create -o autoexpand=on -o autoreplace=on -o cachefile=/tmp/zpool.cache -o failmode=continue -O mountpoint=legacy zroot raidz2 gpt/zroot0 gpt/zroot1 gpt/zroot2 gpt/zroot3 gpt/zroot4 gpt/zroot5

zpool export zroot
zpool import -d /dev/gpt -o cachefile=/tmp/zpool.cache zroot

raidz-3 configuration using eleven disks:
-----------------------------------------

hostname hostname.some.domain

kldload zfs
sysctl vfs.zfs.min_auto_ashift=12

gpart create -s gpt ada0
gpart create -s gpt ada1
gpart create -s gpt ada2
gpart create -s gpt ada3
gpart create -s gpt ada4
gpart create -s gpt ada5
gpart create -s gpt ada6
gpart create -s gpt ada7
gpart create -s gpt ada8
gpart create -s gpt ada9
gpart create -s gpt ada10

gpart add -a 4K -s 800K -t efi -l esp0 ada0
gpart add -a 4K -s 800K -t efi -l esp1 ada1
gpart add -a 4K -s 800K -t efi -l esp2 ada2
gpart add -a 4K -s 800K -t efi -l esp3 ada3
gpart add -a 4K -s 800K -t efi -l esp4 ada4
gpart add -a 4K -s 800K -t efi -l esp5 ada5
gpart add -a 4K -s 800K -t efi -l esp6 ada6
gpart add -a 4K -s 800K -t efi -l esp7 ada7
gpart add -a 4K -s 800K -t efi -l esp8 ada8
gpart add -a 4K -s 800K -t efi -l esp9 ada9
gpart add -a 4K -s 800K -t efi -l esp10 ada10

gpart add -a 4K -s 4G -t freebsd-swap -l swap0 ada0
gpart add -a 4K -s 4G -t freebsd-swap -l swap1 ada1
gpart add -a 4K -s 4G -t freebsd-swap -l swap2 ada2
gpart add -a 4K -s 4G -t freebsd-swap -l swap3 ada3
gpart add -a 4K -s 4G -t freebsd-swap -l swap4 ada4
gpart add -a 4K -s 4G -t freebsd-swap -l swap5 ada5
gpart add -a 4K -s 4G -t freebsd-swap -l swap6 ada6
gpart add -a 4K -s 4G -t freebsd-swap -l swap7 ada7
gpart add -a 4K -s 4G -t freebsd-swap -l swap8 ada8
gpart add -a 4K -s 4G -t freebsd-swap -l swap9 ada9
gpart add -a 4K -s 4G -t freebsd-swap -l swap10 ada10

gpart add -a 4K -t freebsd-zfs -l zroot0 ada0
gpart add -a 4K -t freebsd-zfs -l zroot1 ada1
gpart add -a 4K -t freebsd-zfs -l zroot2 ada2
gpart add -a 4K -t freebsd-zfs -l zroot3 ada3
gpart add -a 4K -t freebsd-zfs -l zroot4 ada4
gpart add -a 4K -t freebsd-zfs -l zroot5 ada5
gpart add -a 4K -t freebsd-zfs -l zroot6 ada6
gpart add -a 4K -t freebsd-zfs -l zroot7 ada7
gpart add -a 4K -t freebsd-zfs -l zroot8 ada8
gpart add -a 4K -t freebsd-zfs -l zroot9 ada9
gpart add -a 4K -t freebsd-zfs -l zroot10 ada10
!!
!! OR
!!
!! Subtract 100 MiB, aka 204800 * 512B, from the remaining capacity:
!! ZFSROOTSIZE=`gpart show ada0 | tail -2 | head -1 | awk '{print $2-204800}'`
!! gpart add -a 4K -s ${ZFSROOTSIZE} -t freebsd-zfs -l zroot0 ada0
!! gpart add -a 4K -s ${ZFSROOTSIZE} -t freebsd-zfs -l zroot1 ada1
!! gpart add -a 4K -s ${ZFSROOTSIZE} -t freebsd-zfs -l zroot2 ada2
!! gpart add -a 4K -s ${ZFSROOTSIZE} -t freebsd-zfs -l zroot3 ada3
!! gpart add -a 4K -s ${ZFSROOTSIZE} -t freebsd-zfs -l zroot4 ada4
!! gpart add -a 4K -s ${ZFSROOTSIZE} -t freebsd-zfs -l zroot5 ada5
!! gpart add -a 4K -s ${ZFSROOTSIZE} -t freebsd-zfs -l zroot6 ada6
!! gpart add -a 4K -s ${ZFSROOTSIZE} -t freebsd-zfs -l zroot7 ada7
!! gpart add -a 4K -s ${ZFSROOTSIZE} -t freebsd-zfs -l zroot8 ada8
!! gpart add -a 4K -s ${ZFSROOTSIZE} -t freebsd-zfs -l zroot9 ada9
!! gpart add -a 4K -s ${ZFSROOTSIZE} -t freebsd-zfs -l zroot10 ada10

gpart bootcode -b /boot/pmbr -p /boot/boot1.efifat -i 1 ada0
gpart bootcode -b /boot/pmbr -p /boot/boot1.efifat -i 1 ada1
gpart bootcode -b /boot/pmbr -p /boot/boot1.efifat -i 1 ada2
gpart bootcode -b /boot/pmbr -p /boot/boot1.efifat -i 1 ada3
gpart bootcode -b /boot/pmbr -p /boot/boot1.efifat -i 1 ada4
gpart bootcode -b /boot/pmbr -p /boot/boot1.efifat -i 1 ada5
gpart bootcode -b /boot/pmbr -p /boot/boot1.efifat -i 1 ada6
gpart bootcode -b /boot/pmbr -p /boot/boot1.efifat -i 1 ada7
gpart bootcode -b /boot/pmbr -p /boot/boot1.efifat -i 1 ada8
gpart bootcode -b /boot/pmbr -p /boot/boot1.efifat -i 1 ada9
gpart bootcode -b /boot/pmbr -p /boot/boot1.efifat -i 1 ada10

swapon /dev/gpt/swap0
swapon /dev/gpt/swap1
swapon /dev/gpt/swap2
swapon /dev/gpt/swap3
swapon /dev/gpt/swap4
swapon /dev/gpt/swap5
swapon /dev/gpt/swap6
swapon /dev/gpt/swap7
swapon /dev/gpt/swap8
swapon /dev/gpt/swap9
swapon /dev/gpt/swap10

zpool create -o autoexpand=on -o autoreplace=on -o cachefile=/tmp/zpool.cache -o failmode=continue -O mountpoint=legacy zroot raidz3 gpt/zroot0 gpt/zroot1 gpt/zroot2 gpt/zroot3 gpt/zroot4 gpt/zroot5 gpt/zroot6 gpt/zroot7 gpt/zroot8 gpt/zroot9 gpt/zroot10

zpool export zroot
zpool import -d /dev/gpt -o cachefile=/tmp/zpool.cache zroot

4K disks with only zfs partitions
=================================

For PATA/SATA use ada0, ada1, ada2, etc.
For SCSI/SAS use da0, da1, da2, etc.

Single-disk configuration:
--------------------------

hostname hostname.some.domain

kldload zfs
sysctl vfs.zfs.min_auto_ashift=12

gpart create -s gpt ada0

gpart add -a 4K -t freebsd-zfs -l zdata0 ada0
!!
!! OR
!!
!! Subtract roughly 100 MiB, i.e. 204800 + 2144 = 206944 sectors of 512B, from the total capacity:
!! ZFSDATASIZE=`gpart show ada0 | head -1 | awk '{print $3-206944}'`
!! gpart add -a 4K -s ${ZFSDATASIZE} -t freebsd-zfs -l zdata0 ada0

zpool create -o autoexpand=on -o autoreplace=on -o cachefile=/tmp/zpool.cache -o failmode=continue -O mountpoint=legacy zdata gpt/zdata0

zpool export zdata
zpool import -d /dev/gpt -o cachefile=/tmp/zpool.cache zdata

[Maybe you should consider setting copies=2 for select vital
filesystems, or even from the top file system and down (it's an
inherited property).]

Mirrored configuration using two disks:
---------------------------------------

hostname hostname.some.domain

kldload zfs
sysctl vfs.zfs.min_auto_ashift=12

gpart create -s gpt ada0
gpart create -s gpt ada1

gpart add -a 4K -t freebsd-zfs -l zdata0 ada0
gpart add -a 4K -t freebsd-zfs -l zdata1 ada1
!!
!! OR
!!
!! Subtract roughly 100 MiB, i.e. 204800 + 2144 = 206944 sectors of 512B, from the total capacity:
!! ZFSDATASIZE=`gpart show ada0 | head -1 | awk '{print $3-206944}'`
!! gpart add -a 4K -s ${ZFSDATASIZE} -t freebsd-zfs -l zdata0 ada0
!! gpart add -a 4K -s ${ZFSDATASIZE} -t freebsd-zfs -l zdata1 ada1

zpool create -o autoexpand=on -o autoreplace=on -o cachefile=/tmp/zpool.cache -o failmode=continue -O mountpoint=legacy zdata mirror gpt/zdata0 gpt/zdata1

zpool export zdata
zpool import -d /dev/gpt -o cachefile=/tmp/zpool.cache zdata

raidz-1 configuration using three disks:
----------------------------------------

hostname hostname.some.domain

kldload zfs
sysctl vfs.zfs.min_auto_ashift=12

gpart create -s gpt ada0
gpart create -s gpt ada1
gpart create -s gpt ada2

gpart add -a 4K -t freebsd-zfs -l zdata0 ada0
gpart add -a 4K -t freebsd-zfs -l zdata1 ada1
gpart add -a 4K -t freebsd-zfs -l zdata2 ada2
!!
!! OR
!!
!! Subtract roughly 100 MiB, i.e. 204800 + 2144 = 206944 sectors of 512B, from the total capacity:
!! ZFSDATASIZE=`gpart show ada0 | head -1 | awk '{print $3-206944}'`
!! gpart add -a 4K -s ${ZFSDATASIZE} -t freebsd-zfs -l zdata0 ada0
!! gpart add -a 4K -s ${ZFSDATASIZE} -t freebsd-zfs -l zdata1 ada1
!! gpart add -a 4K -s ${ZFSDATASIZE} -t freebsd-zfs -l zdata2 ada2

zpool create -o autoexpand=on -o autoreplace=on -o cachefile=/tmp/zpool.cache -o failmode=continue -O mountpoint=legacy zdata raidz1 gpt/zdata0 gpt/zdata1 gpt/zdata2

zpool export zdata
zpool import -d /dev/gpt -o cachefile=/tmp/zpool.cache zdata

raidz-2 configuration using six disks:
--------------------------------------

hostname hostname.some.domain

kldload zfs
sysctl vfs.zfs.min_auto_ashift=12

gpart create -s gpt ada0
gpart create -s gpt ada1
gpart create -s gpt ada2
gpart create -s gpt ada3
gpart create -s gpt ada4
gpart create -s gpt ada5

gpart add -a 4K -t freebsd-zfs -l zdata0 ada0
gpart add -a 4K -t freebsd-zfs -l zdata1 ada1
gpart add -a 4K -t freebsd-zfs -l zdata2 ada2
gpart add -a 4K -t freebsd-zfs -l zdata3 ada3
gpart add -a 4K -t freebsd-zfs -l zdata4 ada4
gpart add -a 4K -t freebsd-zfs -l zdata5 ada5
!!
!! OR
!!
!! Subtract roughly 100 MiB, i.e. 204800 + 2144 = 206944 sectors of 512B, from the total capacity:
!! ZFSDATASIZE=`gpart show ada0 | head -1 | awk '{print $3-206944}'`
!! gpart add -a 4K -s ${ZFSDATASIZE} -t freebsd-zfs -l zdata0 ada0
!! gpart add -a 4K -s ${ZFSDATASIZE} -t freebsd-zfs -l zdata1 ada1
!! gpart add -a 4K -s ${ZFSDATASIZE} -t freebsd-zfs -l zdata2 ada2
!! gpart add -a 4K -s ${ZFSDATASIZE} -t freebsd-zfs -l zdata3 ada3
!! gpart add -a 4K -s ${ZFSDATASIZE} -t freebsd-zfs -l zdata4 ada4
!! gpart add -a 4K -s ${ZFSDATASIZE} -t freebsd-zfs -l zdata5 ada5

zpool create -o autoexpand=on -o autoreplace=on -o cachefile=/tmp/zpool.cache -o failmode=continue -O mountpoint=legacy zdata raidz2 gpt/zdata0 gpt/zdata1 gpt/zdata2 gpt/zdata3 gpt/zdata4 gpt/zdata5

zpool export zdata
zpool import -d /dev/gpt -o cachefile=/tmp/zpool.cache zdata

raidz-3 configuration using eleven disks:
-----------------------------------------

hostname hostname.some.domain

kldload zfs
sysctl vfs.zfs.min_auto_ashift=12

gpart create -s gpt ada0
gpart create -s gpt ada1
gpart create -s gpt ada2
gpart create -s gpt ada3
gpart create -s gpt ada4
gpart create -s gpt ada5
gpart create -s gpt ada6
gpart create -s gpt ada7
gpart create -s gpt ada8
gpart create -s gpt ada9
gpart create -s gpt ada10

gpart add -a 4K -t freebsd-zfs -l zdata0 ada0
gpart add -a 4K -t freebsd-zfs -l zdata1 ada1
gpart add -a 4K -t freebsd-zfs -l zdata2 ada2
gpart add -a 4K -t freebsd-zfs -l zdata3 ada3
gpart add -a 4K -t freebsd-zfs -l zdata4 ada4
gpart add -a 4K -t freebsd-zfs -l zdata5 ada5
gpart add -a 4K -t freebsd-zfs -l zdata6 ada6
gpart add -a 4K -t freebsd-zfs -l zdata7 ada7
gpart add -a 4K -t freebsd-zfs -l zdata8 ada8
gpart add -a 4K -t freebsd-zfs -l zdata9 ada9
gpart add -a 4K -t freebsd-zfs -l zdata10 ada10
!!
!! OR
!!
!! Subtract roughly 100 MiB, i.e. 204800 + 2144 = 206944 sectors of 512B, from the total capacity:
!! ZFSDATASIZE=`gpart show ada0 | head -1 | awk '{print $3-206944}'`
!! gpart add -a 4K -s ${ZFSDATASIZE} -t freebsd-zfs -l zdata0 ada0
!! gpart add -a 4K -s ${ZFSDATASIZE} -t freebsd-zfs -l zdata1 ada1
!! gpart add -a 4K -s ${ZFSDATASIZE} -t freebsd-zfs -l zdata2 ada2
!! gpart add -a 4K -s ${ZFSDATASIZE} -t freebsd-zfs -l zdata3 ada3
!! gpart add -a 4K -s ${ZFSDATASIZE} -t freebsd-zfs -l zdata4 ada4
!! gpart add -a 4K -s ${ZFSDATASIZE} -t freebsd-zfs -l zdata5 ada5
!! gpart add -a 4K -s ${ZFSDATASIZE} -t freebsd-zfs -l zdata6 ada6
!! gpart add -a 4K -s ${ZFSDATASIZE} -t freebsd-zfs -l zdata7 ada7
!! gpart add -a 4K -s ${ZFSDATASIZE} -t freebsd-zfs -l zdata8 ada8
!! gpart add -a 4K -s ${ZFSDATASIZE} -t freebsd-zfs -l zdata9 ada9
!! gpart add -a 4K -s ${ZFSDATASIZE} -t freebsd-zfs -l zdata10 ada10

zpool create -o autoexpand=on -o autoreplace=on -o cachefile=/tmp/zpool.cache -o failmode=continue -O mountpoint=legacy zdata raidz3 gpt/zdata0 gpt/zdata1 gpt/zdata2 gpt/zdata3 gpt/zdata4 gpt/zdata5 gpt/zdata6 gpt/zdata7 gpt/zdata8 gpt/zdata9 gpt/zdata10

zpool export zdata
zpool import -d /dev/gpt -o cachefile=/tmp/zpool.cache zdata
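The ZFSDATASIZE pipelines above can be dry-run against a canned 'gpart show' header line before touching real disks. The disk size here is hypothetical; the first line of 'gpart show' carries the total usable size in its third field:

```shell
#!/bin/sh
# Stand-in for 'gpart show ada0' emitting only the header line of a
# hypothetical disk; no real disks are touched.
fake_gpart_show() { echo "=>        40  1953525088  ada0  GPT  (932G)"; }

# Same pipeline as in the recipes above: total size minus 206944 sectors.
ZFSDATASIZE=$(fake_gpart_show | head -1 | awk '{print $3-206944}')
echo "ZFSDATASIZE=${ZFSDATASIZE}"
```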