<?xml version="1.0" encoding="UTF-8"?><rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>SCHiLLER.iM</title><description>Software development blog featuring posts and code snippets</description><link>https://schiller.im/</link><language>en-us</language><atom:link href="https://schiller.im/rss.xml" rel="self" type="application/rss+xml"/><lastBuildDate>Sun, 07 Dec 2025 20:43:59 GMT</lastBuildDate><generator>Astro</generator><item><title>Installing Docker on Ubuntu 24.04 with Dedicated ZFS Mirror Pool</title><link>https://schiller.im/posts/ubuntu-2404-docker-zfs/</link><guid isPermaLink="true">https://schiller.im/posts/ubuntu-2404-docker-zfs/</guid><description>Set up Docker with a dedicated encrypted ZFS mirror pool for optimal isolation, performance, and data integrity.</description><pubDate>Fri, 28 Nov 2025 00:00:00 GMT</pubDate><content:encoded>{/* cspell:ignore acltype autotrim ashift canmount dnodesize fsname keyformat keylocation mountpoint posixacl */}
{/* cspell:ignore  recordsize relatime usermod xattr zcontainer zroot */}

{/* prettier-ignore */}
&lt;div tabindex=&quot;0&quot; class=&quot;collapse-arrow bg-base-100 border-base-300 collapse border&quot;&gt;
  &lt;div class=&quot;collapse-title font-semibold&quot;&gt;Table of Contents&lt;/div&gt;
  &lt;div class=&quot;collapse-content text-sm&quot;&gt;
    1. [Prerequisites](#prerequisites)
    2. [Install Docker Engine](#install-docker-engine)
    3. [Create Dedicated ZFS Pool for Docker](#create-dedicated-zfs-pool-for-docker)
    4. [Configure Docker Daemon](#configure-docker-daemon)
    5. [Verify Installation](#verify-installation)
    6. [Managing the Docker Pool](#managing-the-docker-pool)
  &lt;/div&gt;
&lt;/div&gt;

## Prerequisites

System requirements and assumptions for this guide.

- Ubuntu 24.04 LTS with ZFS support
- Root access or sudo privileges
- Two identical disks for the mirror (e.g., `/dev/sdb` and `/dev/sdc`)
- ZFS encryption key directory set up at `/etc/zfs/keys/`

This guide creates a dedicated encrypted ZFS mirror pool named `zcontainer` specifically for Docker containers and
images, providing complete isolation from the system pool.

## Install Docker Engine

Follow the
[official Docker installation guide for Ubuntu](https://docs.docker.com/engine/install/ubuntu/#installation-methods) to
install Docker Engine using the apt repository method.

After installation, add your user to the docker group:

```bash
sudo usermod -aG docker $USER
```

Log out and back in for group membership to take effect.

## Create Dedicated ZFS Pool for Docker

Create a dedicated encrypted ZFS mirror pool optimized for Docker workloads.

This guide uses the same encryption key as your root pool (`/etc/zfs/keys/zroot.key`) for consistency and simplified
key management.

### Identify Container Disks

Identify the two disks you&apos;ll use for the mirror using persistent disk-by-id paths:

```bash
# List available disks with their by-id paths
ls -lh /dev/disk/by-id/ | grep -v part

# Set your disk paths (replace with your actual disk IDs)
export CONTAINER_DISK1=/dev/disk/by-id/ata-Samsung_SSD_870_EVO_1TB_XXXXXXXXXXXXX
export CONTAINER_DISK2=/dev/disk/by-id/ata-Samsung_SSD_870_EVO_1TB_YYYYYYYYYYYYY
```

**Warning:** The disks will be completely wiped. Verify you&apos;re using the correct disks before proceeding.

### Stop Docker Service

If Docker is already installed, stop it before creating the pool:

```bash
sudo systemctl stop docker
sudo systemctl stop containerd
```

### Create ZFS Mirror Pool

Create the encrypted mirror pool with optimized settings:

```bash
sudo zpool create -f \
  -m none \
  -O acltype=posixacl \
  -o ashift=12 \
  -O atime=off \
  -o autotrim=on \
  -o cachefile=none \
  -O canmount=off \
  -O compression=zstd \
  -O dnodesize=auto \
  -O encryption=aes-256-gcm \
  -O keyformat=passphrase \
  -O keylocation=file:///etc/zfs/keys/zroot.key \
  -O normalization=formD \
  -O relatime=off \
  -O xattr=sa \
  zcontainer mirror ${CONTAINER_DISK1} ${CONTAINER_DISK2}
```
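`ashift` is a base-2 exponent, so `ashift=12` fixes the pool&apos;s sector size at 2^12 = 4096 bytes, matching the 4K physical sectors of modern SSDs. Note that `ashift` cannot be changed after pool creation. A quick sanity check of the arithmetic:

```bash
# ashift is a base-2 exponent: ashift=12 means 4096-byte sectors,
# matching the 4K physical sectors of modern SSDs.
echo $((2 ** 12))   # prints 4096
```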

### Create Docker Dataset

Create the dataset that Docker will use:

```bash
sudo zfs create \
  -o mountpoint=/var/lib/docker \
  -o recordsize=128K \
  zcontainer/DOCKER
```

### Verify Pool Creation

```bash
# Check pool status
sudo zpool status zcontainer

# Check pool properties
sudo zpool get all zcontainer

# Check dataset properties
sudo zfs get all zcontainer/DOCKER

# Verify mountpoint
mount | grep docker
```

Expected output for mountpoint: `zcontainer/DOCKER on /var/lib/docker type zfs (rw,...)`

## Configure Docker Daemon

Configure Docker daemon with ZFS storage driver and logging.

### Create Docker Daemon Configuration

```bash
sudo mkdir -p /etc/docker

sudo tee /etc/docker/daemon.json &gt; /dev/null &lt;&lt; &apos;EOF&apos;
{
  &quot;storage-driver&quot;: &quot;zfs&quot;,
  &quot;storage-opts&quot;: [
    &quot;zfs.fsname=zcontainer/DOCKER&quot;
  ],
  &quot;log-driver&quot;: &quot;journald&quot;,
  &quot;log-opts&quot;: {
    &quot;tag&quot;: &quot;{{.Name}}&quot;
  }
}
EOF
```

Configuration breakdown:

**Storage Driver:**

- `storage-driver: zfs` - Use native ZFS storage driver instead of overlay2
- `zfs.fsname=zcontainer/DOCKER` - ZFS dataset for container storage

**Logging:**

- `log-driver: journald` - Send container logs to systemd journal
- `tag: {{.Name}}` - Tag journal entries with container name for easy filtering

**Benefits of this configuration:**

- Dedicated pool isolates Docker from system storage
- Native ZFS snapshots for containers and images
- Full encryption at rest (AES-256-GCM)
- Transparent compression (ZSTD) reduces storage usage
- Mirror redundancy protects against disk failure
- No overlay filesystem overhead
- Copy-on-write efficiency for container layers
- Easy backup via ZFS send/receive

### Start Docker Service

```bash
sudo systemctl daemon-reload
sudo systemctl enable --now docker
sudo systemctl enable --now containerd
```

## Verify Installation

Validate Docker installation and ZFS integration.

### Check Docker Service Status

```bash
sudo systemctl status docker
```

Should show: `Active: active (running)`

### Verify ZFS Storage Driver

```bash
docker info | grep -A 10 &quot;Storage Driver&quot;
```

Expected output:

```
 Storage Driver: zfs
  ZFS Dataset: zcontainer/DOCKER
  Parent Dataset: zcontainer/DOCKER
  ZFS Filesystem: zcontainer/DOCKER
```</content:encoded><category>ubuntu</category><category>docker</category><category>zfs</category><author>marc@schiller.im (Marc Schiller)</author></item><item><title>Ubuntu 24.04 ZFS: Post-Install Optimization</title><link>https://schiller.im/posts/ubuntu-2404-zfs-post-install-optimization/</link><guid isPermaLink="true">https://schiller.im/posts/ubuntu-2404-zfs-post-install-optimization/</guid><description>ZFS optimization, kernel tuning, and system monitoring configuration for Ubuntu 24.04.</description><pubDate>Sun, 26 Oct 2025 00:00:01 GMT</pubDate><content:encoded>{/* cspell:ignore arter autoclean centisecs dentries devicescan dkms drivedb fastopen fwupd hugepages intvl kbytes */}
{/* cspell:ignore logfile mailutils metaslab msmtp msmtprc netdev noprefetch nvme onecheck oneshot qdisc rcvbuf */}
{/* cspell:ignore recordsize rmem smartd smartmontools smtps starttls tempaddr udevadm usecs vdev wmem writeback */}
{/* cspell:ignore zroot zvol */}

Continues from [installation guide](/posts/installing-ubuntu-2404-encrypted-zfs-zfsbootmenu/).

Prerequisites: System installed, rebooted, and accessible with sudo.

{/* prettier-ignore */}
&lt;div tabindex=&quot;0&quot; class=&quot;collapse-arrow bg-base-100 border-base-300 collapse border&quot;&gt;
  &lt;div class=&quot;collapse-title font-semibold&quot;&gt;Table of Contents&lt;/div&gt;
  &lt;div class=&quot;collapse-content text-sm&quot;&gt;
    1. [Install Additional System Packages](#install-additional-system-packages)
    2. [Upgrade to Latest OpenZFS](#upgrade-to-latest-openzfs)
    3. [ZFS Module Parameters](#zfs-module-parameters)
    4. [System Kernel Parameters](#system-kernel-parameters)
    5. [Storage I/O Scheduler Optimization](#storage-io-scheduler-optimization)
    6. [Email Configuration for System Notifications](#email-configuration-for-system-notifications)
    7. [Automatic Security Updates](#automatic-security-updates)
    8. [Drive Health Monitoring with SMART](#drive-health-monitoring-with-smart)
  &lt;/div&gt;
&lt;/div&gt;

## Install Additional System Packages

Essential utilities for system management, monitoring, and remote access.

```bash
sudo apt update
sudo apt install --yes \
  fwupd \
  htop \
  linux-tools-generic-hwe-24.04 \
  openssh-server \
  tmux \
  ubuntu-server-minimal \
  ubuntu-standard \
  vim
```

## Upgrade to Latest OpenZFS

Install the latest OpenZFS version from the arter97 PPA for improved features and performance.

```bash
sudo add-apt-repository ppa:arter97/zfs
sudo apt update
sudo apt install --yes zfs-dkms
sudo reboot
```

Verify and upgrade pool:

```bash
zfs --version
zpool version

sudo zpool set compatibility=off zroot
sudo zpool upgrade zroot
zpool status zroot
zpool get all zroot | grep feature
```

## ZFS Module Parameters

Configure ZFS kernel module settings for optimal memory usage, caching, and I/O performance.

Values are tuned for 64 GB of RAM and a large-file workload. Scale `zfs_arc_max` to ~75% of total RAM and `zfs_arc_min`
to ~12-15%.

```bash
sudo tee /etc/modprobe.d/zfs.conf &gt; /dev/null &lt;&lt; &apos;EOF&apos;
# ARC: 48GB max, 8GB min
options zfs zfs_arc_max=51539607552
options zfs zfs_arc_min=8589934592

# L2ARC for M.2 cache
options zfs l2arc_write_max=268435456
options zfs l2arc_write_boost=536870912
options zfs l2arc_headroom=4
options zfs l2arc_feed_again=1
options zfs l2arc_noprefetch=0

# Prefetch
options zfs zfs_prefetch_disable=0
options zfs zfs_txg_timeout=5
options zfs metaslab_preload_enabled=1

# Write performance
options zfs zfs_dirty_data_max=17179869184
options zfs zfs_dirty_data_max_percent=50
options zfs zfs_dirty_data_sync_percent=20

# I/O scheduling
options zfs zfs_vdev_async_read_max_active=4
options zfs zfs_vdev_async_write_max_active=4
options zfs zfs_vdev_sync_read_max_active=10
options zfs zfs_vdev_sync_write_max_active=10
options zfs zfs_vdev_scrub_max_active=2

# I/O aggregation
options zfs zfs_vdev_read_gap_limit=32768
options zfs zfs_vdev_aggregation_limit=1048576
options zfs zfs_vdev_write_gap_limit=32768

# Record size
options zfs zfs_max_recordsize=1048576
EOF
```
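The ARC byte values in the file above follow directly from that rule of thumb; a quick derivation for the 64 GB example (75% and 12.5%):

```bash
# Derive the ARC bounds above from total RAM (64 GiB example)
total=$((64 * 1024 ** 3))                 # 68719476736 bytes
echo zfs_arc_max=$((total * 75 / 100))    # prints zfs_arc_max=51539607552 (48 GiB)
echo zfs_arc_min=$((total * 125 / 1000))  # prints zfs_arc_min=8589934592 (8 GiB)
```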

Apply changes &amp; reboot:

```bash
sudo update-initramfs -u
sudo reboot
```

## System Kernel Parameters

Configure sysctl parameters to optimize system performance. Memory settings minimize swap usage and control dirty page
writeback to work efficiently with ZFS&apos;s ARC cache. Network parameters increase buffer sizes and enable fair queuing
for better throughput. TCP settings enable BBR congestion control, TCP Fast Open, and optimize keepalive timers for
modern networks. Routing configuration enables IPv4/IPv6 forwarding and disables privacy extensions for stable
addressing.

### Memory and VM Settings

```bash
sudo tee /etc/sysctl.d/10-memory-vm.conf &gt; /dev/null &lt;&lt; &apos;EOF&apos;
vm.swappiness=0
vm.page-cluster=0
vm.vfs_cache_pressure=50
vm.dirty_background_ratio=5
vm.dirty_ratio=10
vm.dirty_expire_centisecs=6000
vm.min_free_kbytes=262144
EOF
```

### Network Settings

```bash
sudo tee /etc/sysctl.d/20-network-core.conf &gt; /dev/null &lt;&lt; &apos;EOF&apos;
net.core.rmem_max=67108864
net.core.wmem_max=67108864
net.core.rmem_default=1048576
net.core.wmem_default=1048576
net.core.netdev_max_backlog=4096
net.core.default_qdisc=fq
EOF
```

### TCP Settings

```bash
sudo tee /etc/sysctl.d/30-network-tcp.conf &gt; /dev/null &lt;&lt; &apos;EOF&apos;
# tcp_rmem/tcp_wmem apply to TCP over both IPv4 and IPv6
net.ipv4.tcp_rmem=4096 131072 67108864
net.ipv4.tcp_wmem=4096 65536 67108864
net.ipv4.tcp_congestion_control=bbr
net.ipv4.tcp_no_metrics_save=1
net.ipv4.tcp_slow_start_after_idle=0
net.ipv4.tcp_fastopen=3
net.ipv4.tcp_mtu_probing=1
net.ipv4.tcp_fin_timeout=15
net.ipv4.tcp_keepalive_time=600
net.ipv4.tcp_keepalive_intvl=30
net.ipv4.tcp_keepalive_probes=3
net.ipv6.neigh.default.gc_thresh1=1024
net.ipv6.neigh.default.gc_thresh2=2048
net.ipv6.neigh.default.gc_thresh3=4096
EOF
```

### Routing Settings

```bash
sudo tee /etc/sysctl.d/40-network-routing.conf &gt; /dev/null &lt;&lt; &apos;EOF&apos;
net.ipv4.ip_forward=1
net.ipv6.conf.all.forwarding=1
net.ipv6.conf.all.use_tempaddr=0
net.ipv6.conf.default.use_tempaddr=0
EOF

sudo sysctl --system
```

## Storage I/O Scheduler Optimization

Configure optimal I/O schedulers and queue settings for SSDs, HDDs, and NVMe drives.

```bash
sudo tee /etc/udev/rules.d/10-storage-scheduler.rules &gt; /dev/null &lt;&lt; &apos;EOF&apos;
ACTION==&quot;add|change&quot;, KERNEL==&quot;sd[a-z]*&quot;, ATTR{queue/rotational}==&quot;0&quot;, ATTR{queue/scheduler}=&quot;mq-deadline&quot;
ACTION==&quot;add|change&quot;, KERNEL==&quot;sd[a-z]*&quot;, ATTR{queue/rotational}==&quot;0&quot;, ATTR{queue/nr_requests}=&quot;64&quot;
ACTION==&quot;add|change&quot;, KERNEL==&quot;sd[a-z]*&quot;, ATTR{queue/rotational}==&quot;0&quot;, ATTR{queue/read_ahead_kb}=&quot;128&quot;
ACTION==&quot;add|change&quot;, KERNEL==&quot;sd[a-z]*&quot;, ATTR{queue/rotational}==&quot;0&quot;, ATTR{queue/add_random}=&quot;0&quot;

ACTION==&quot;add|change&quot;, KERNEL==&quot;sd[a-z]*&quot;, ATTR{queue/rotational}==&quot;1&quot;, ATTR{queue/scheduler}=&quot;mq-deadline&quot;
ACTION==&quot;add|change&quot;, KERNEL==&quot;sd[a-z]*&quot;, ATTR{queue/rotational}==&quot;1&quot;, ATTR{queue/nr_requests}=&quot;128&quot;
ACTION==&quot;add|change&quot;, KERNEL==&quot;sd[a-z]*&quot;, ATTR{queue/rotational}==&quot;1&quot;, ATTR{queue/read_ahead_kb}=&quot;4096&quot;
ACTION==&quot;add|change&quot;, KERNEL==&quot;sd[a-z]*&quot;, ATTR{queue/rotational}==&quot;1&quot;, ATTR{queue/add_random}=&quot;1&quot;

ACTION==&quot;add|change&quot;, KERNEL==&quot;nvme[0-9]*n[0-9]*&quot;, ATTR{queue/scheduler}=&quot;none&quot;
ACTION==&quot;add|change&quot;, KERNEL==&quot;nvme[0-9]*n[0-9]*&quot;, ATTR{queue/nr_requests}=&quot;32&quot;
ACTION==&quot;add|change&quot;, KERNEL==&quot;nvme[0-9]*n[0-9]*&quot;, ATTR{queue/read_ahead_kb}=&quot;128&quot;
ACTION==&quot;add|change&quot;, KERNEL==&quot;nvme[0-9]*n[0-9]*&quot;, ATTR{queue/add_random}=&quot;0&quot;
EOF

sudo udevadm control --reload-rules
sudo udevadm trigger
sudo reboot
```
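To confirm a rule took effect after the reboot, read the scheduler back from sysfs - the active scheduler is the bracketed entry. A small parsing sketch, using a sample string in place of the sysfs read:

```bash
# The active scheduler appears in brackets, e.g.
#   cat /sys/block/sda/queue/scheduler   prints: none [mq-deadline] kyber
# Extract the bracketed entry (the sample stands in for the sysfs read):
sample="none [mq-deadline] kyber"
active=$(echo "$sample" | grep -o '\[[^]]*\]' | tr -d '[]')
echo $active   # prints mq-deadline
```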

## Email Configuration for System Notifications

Set up SMTP email delivery for system alerts from ZFS, SMART, and unattended-upgrades.

### Install and Configure

```bash
sudo apt install msmtp msmtp-mta mailutils
```

Edit `/etc/msmtprc`:

```bash
defaults
auth           on
tls            on
tls_starttls   off
tls_trust_file /etc/ssl/certs/ca-certificates.crt
logfile        /var/log/msmtp.log

account        smtp
host           smtp.example.com
port           465
from           server@example.com
user           username
password       password

account default : smtp
```

Set permissions:

```bash
sudo chmod 644 /etc/msmtprc
sudo chown root:root /etc/msmtprc
sudo touch /var/log/msmtp.log
sudo chmod 666 /var/log/msmtp.log
```

Note that `/etc/msmtprc` contains a plaintext password, so `644` leaves it readable by every local user; tighten it to `640` with a dedicated group (or use msmtp&apos;s `passwordeval`) if untrusted users have shell access.

Edit `/etc/mail.rc`:

```properties
set sendmail=&quot;/usr/bin/msmtp -t&quot;
```

### Send Test Email

```bash
echo &quot;Test email from $(hostname)&quot; | mail -s &quot;Test: msmtp configuration&quot; you@example.com
sudo tail -f /var/log/msmtp.log
```

## Automatic Security Updates

Enable automatic installation of security updates with email notifications.

### Install and Configure

```bash
sudo apt install unattended-upgrades
sudo dpkg-reconfigure --priority=low unattended-upgrades
```

Edit `/etc/apt/apt.conf.d/50unattended-upgrades`:

```bash
Unattended-Upgrade::Allowed-Origins {
};

Unattended-Upgrade::Origins-Pattern {
  &quot;origin=*&quot;;
};

Unattended-Upgrade::Remove-Unused-Kernel-Packages &quot;true&quot;;
Unattended-Upgrade::Remove-New-Unused-Dependencies &quot;true&quot;;
Unattended-Upgrade::Remove-Unused-Dependencies &quot;true&quot;;

Unattended-Upgrade::Sender &quot;server@example.com&quot;;
Unattended-Upgrade::Mail &quot;you@example.com&quot;;
```
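Because `Origins-Pattern` matches every origin (`origin=*`), packages from third-party sources such as the arter97 ZFS PPA are upgraded automatically as well. If you would rather apply ZFS updates by hand, a hypothetical blacklist (package names are examples):

```bash
Unattended-Upgrade::Package-Blacklist {
  &quot;zfs-dkms&quot;;
  &quot;zfs-initramfs&quot;;
};
```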

Edit `/etc/apt/apt.conf.d/20auto-upgrades`:

```bash
APT::Periodic::Update-Package-Lists &quot;1&quot;;
APT::Periodic::Download-Upgradeable-Packages &quot;1&quot;;
APT::Periodic::AutocleanInterval &quot;7&quot;;
APT::Periodic::Unattended-Upgrade &quot;1&quot;;
```

### Enable Services

```bash
sudo systemctl enable unattended-upgrades
sudo systemctl start unattended-upgrades
sudo systemctl enable apt-daily-upgrade.timer
sudo systemctl start apt-daily-upgrade.timer
sudo unattended-upgrades --dry-run --debug
sudo systemctl status unattended-upgrades
```

## Drive Health Monitoring with SMART

Configure automated drive health monitoring and testing with SMART email alerts.

### Install and Configure

```bash
sudo apt install --yes nvme-cli smartmontools
sudo /usr/sbin/update-smart-drivedb
```

Edit `/etc/smartd.conf`:

```bash
# Explicit per-device entries must come first: smartd ignores every
# configuration line after the first DEVICESCAN directive. Device names
# below are examples; adjust them to your hardware.
/dev/nvme0 -d nvme -a -o on -S on -n standby,q -s (S/../.././02|L/../../6/03) -W 4,50,70 -m you@example.com
/dev/sda -d sat -a -o on -S on -n standby,q -s (S/../.././02|L/../../6/03) -W 4,40,55 -m you@example.com

# Catch-all for any remaining drives:
DEVICESCAN -a -o on -S on -n standby,q -s (S/../.././02|L/../../6/03) -m you@example.com
```

Configuration breakdown:

- `-a`: Monitor all SMART attributes and report failures
- `-o on`: Enable automatic offline testing (runs background tests when idle)
- `-S on`: Enable SMART attribute autosave (preserves attribute history across power cycles)
- `-n standby,q`: Skip checks if drive is in standby/sleep mode to avoid unnecessary wake-ups
- `-s (S/../.././02|L/../../6/03)`: Test schedule using cron-like syntax
  - `S/../.././02`: Short self-test daily at 2:00 AM
  - `L/../../6/03`: Long self-test weekly on Saturday (day 6) at 3:00 AM
- `-m you@example.com`: Send email alerts for SMART failures and test results
- `-d sat`: Specify SATA device type for proper SMART command interpretation
- `-d nvme`: Specify NVMe device type for NVMe-specific health monitoring
- `-W 4,40,55`: Temperature monitoring with difference, warning, and critical thresholds
  - `4`: Alert if temperature changes by 4°C or more
  - `40°C` (SATA) / `50°C` (NVMe): Warning threshold
  - `55°C` (SATA) / `70°C` (NVMe): Critical threshold
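The schedule regex can be sanity-checked outside smartd: the daemon matches it against a timestamp string of the form `T/MM/DD/d/HH`. A quick bash sketch (the sample timestamp is illustrative):

```bash
# smartd matches the -s regex against a timestamp of the form T/MM/DD/d/HH:
# test type, month, day of month, day of week (Mon=1 .. Sun=7), hour.
stamp=S/12/07/7/02   # short test, Dec 7 (a Sunday), 02:00
if [[ $stamp =~ ^S/../.././02$ ]]; then
  echo match   # prints match: the daily short test fires at 02:00
fi
```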

### Enable Service

```bash
sudo smartd -d -q onecheck
sudo systemctl enable smartd
sudo systemctl start smartd
sudo systemctl status smartd
```</content:encoded><category>ubuntu</category><category>zfs</category><category>configuration</category><category>performance</category><author>marc@schiller.im (Marc Schiller)</author></item><item><title>Installing Ubuntu 24.04 on Encrypted ZFS with ZFSBootMenu</title><link>https://schiller.im/posts/installing-ubuntu-2404-encrypted-zfs-zfsbootmenu/</link><guid isPermaLink="true">https://schiller.im/posts/installing-ubuntu-2404-encrypted-zfs-zfsbootmenu/</guid><description>Ubuntu 24.04 installation on encrypted ZFS mirror with ZFSBootMenu and redundant mdraid ESP.</description><pubDate>Sun, 26 Oct 2025 00:00:00 GMT</pubDate><content:encoded>{/* cspell:ignore acltype adduser ashift automount autotrim blkdiscard blkid bootable bootfs canmount debootstrap */}
{/* cspell:ignore devpts dnodesize dosfstools efibootmgr efivarfs efivars ethernets fstype gdisk gsettings guids */}
{/* cspell:ignore homehost hostid keyformat keylocation keysource labelclear logbias mdraid mdstat mountpoint */}
{/* cspell:ignore netplan networkd noauto nvme openzfs partprobe plugdev posixacl recordsize relatime resolv sgdisk */}
{/* cspell:ignore smartctl sysfs tzdata udevadm uefi usergroups usermod vfat wipefs xattr zfsbootmenu zfsutils */}
{/* cspell:ignore zgenhostid zroot */}

{/* prettier-ignore */}
&lt;div tabindex=&quot;0&quot; class=&quot;collapse-arrow bg-base-100 border-base-300 collapse border&quot;&gt;
  &lt;div class=&quot;collapse-title font-semibold&quot;&gt;Table of Contents&lt;/div&gt;
  &lt;div class=&quot;collapse-content text-sm&quot;&gt;
    1. [Prerequisites](#prerequisites)
    2. [Live Environment Setup](#live-environment-setup)
    3. [Disk Preparation](#disk-preparation)
    4. [Redundant ESP with mdraid](#redundant-esp-with-mdraid)
    5. [ZFS OS Pool Creation](#zfs-os-pool-creation)
    6. [ZFS Filesystem Creation](#zfs-filesystem-creation)
    7. [Ubuntu Installation](#ubuntu-installation)
    8. [ZFS Configuration](#zfs-configuration)
    9. [ZFSBootMenu Setup](#zfsbootmenu-setup)
    10. [Final Steps](#final-steps)
    11. [Appendix 1: Recovery Mode](#appendix-1-recovery-mode)
    12. [Appendix 2: Replacing a Faulted Drive](#appendix-2-replacing-a-faulted-drive)
    13. [Appendix 3: Unlocking Other Pools with the Same Key](#appendix-3-unlocking-other-pools-with-the-same-key)
  &lt;/div&gt;
&lt;/div&gt;

## Prerequisites

Hardware and network requirements needed before starting the installation.

- UEFI boot
- x86_64 architecture
- Network access

## Live Environment Setup

Boot Ubuntu Live USB, configure remote access, and prepare the installation environment.

Download [Ubuntu Desktop 24.04 Live image](https://ubuntu.com/download/desktop/) and boot in EFI mode.

### Optional: Remote Access

```bash
passwd ubuntu
sudo apt update &amp;&amp; sudo apt install --yes openssh-server
ip addr show
# ssh ubuntu@&lt;ip-address&gt;
```

### Prepare Environment

```bash
gsettings set org.gnome.desktop.media-handling automount false
sudo -i
apt update
apt install --yes debootstrap gdisk zfsutils-linux
systemctl stop zed
```

### Generate Host ID

```bash
zgenhostid -f
```

## Disk Preparation

Identify target disks and completely wipe them for clean installation.

### Identify Disks by ID

```bash
ls -la /dev/disk/by-id/ | grep -v part
lsblk -o NAME,SIZE,MODEL,SERIAL
```

**Use `/dev/disk/by-id/*` paths** - persistent across reboots unlike `/dev/sdX`.

### Set Disk Variables

```bash
export OS_DISK1=&quot;/dev/disk/by-id/nvme-Force_MP510_1919820500012769305E&quot;
export OS_DISK2=&quot;/dev/disk/by-id/nvme-WD_BLACK_SN770_250GB_2346FX400125&quot;
```

### Clear Disks

**WARNING:** Destroys all data. Verify disk variables before proceeding.

```bash
zpool labelclear -f $OS_DISK1 2&gt;/dev/null || true
zpool labelclear -f $OS_DISK2 2&gt;/dev/null || true

umount /boot/efi 2&gt;/dev/null || true
mdadm --stop /dev/md127 2&gt;/dev/null || true
mdadm --stop /dev/md/esp 2&gt;/dev/null || true

mdadm --zero-superblock --force ${OS_DISK1}-part1 2&gt;/dev/null || true
mdadm --zero-superblock --force ${OS_DISK2}-part1 2&gt;/dev/null || true

wipefs -a $OS_DISK1
wipefs -a $OS_DISK2

blkdiscard -f $OS_DISK1 2&gt;/dev/null || true
blkdiscard -f $OS_DISK2 2&gt;/dev/null || true

sgdisk --zap-all $OS_DISK1
sgdisk --zap-all $OS_DISK2

lsblk -o NAME,SIZE,FSTYPE,LABEL $OS_DISK1 $OS_DISK2
```

## Redundant ESP with mdraid

Create mirrored EFI System Partition using mdraid for boot redundancy.

### Create EFI System Partitions

```bash
OS_DISKS=&quot;$OS_DISK1 $OS_DISK2&quot;
for disk in $OS_DISKS; do
    sgdisk -n &quot;1:1m:+512m&quot; -t &quot;1:ef00&quot; &quot;$disk&quot;
done
```

### Create mdraid Array for ESP

```bash
mdadm --create --verbose --level 1 --metadata 1.0 --homehost any --raid-devices 2 /dev/md/esp \
  ${OS_DISK1}-part1 ${OS_DISK2}-part1
mdadm --assemble --scan
mdadm --detail --scan &gt;&gt; /etc/mdadm.conf
```

Metadata 1.0 writes RAID metadata to the end of the partition rather than the beginning. This is critical for ESP
mirroring because UEFI firmware reads from the partition start expecting a valid FAT filesystem. With metadata at the
end, each individual partition appears as a valid standalone EFI partition to the firmware, enabling the system to boot
from either disk if one fails. Newer metadata formats (1.1, 1.2) write to the beginning and would prevent firmware from
recognizing the partitions as bootable.

### Create ZFS Partitions

```bash
for disk in $OS_DISKS; do
    sgdisk -n &quot;2:0:-8m&quot; -t &quot;2:bf00&quot; &quot;$disk&quot;
done
partprobe
```

## ZFS OS Pool Creation

Create encrypted ZFS mirror pool with optimal settings for SSDs.

Pool uses `ashift=12` (4K sectors) for modern SSDs.

### Prepare Encryption Key

```bash
echo &apos;password&apos; &gt; /etc/zfs/zroot.key
chmod 000 /etc/zfs/zroot.key
```

Replace `password` with your desired encryption password. This password will be required every time you boot -
ZFSBootMenu will prompt you to enter it to unlock the encrypted pool before the system can start. Choose a strong
password and remember it, as losing it means permanent data loss.
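If you prefer a generated secret over a hand-picked one, a sketch of drawing one from `/dev/urandom` (you must retype whatever you choose at every boot, so a memorable passphrase is often more practical):

```bash
# Draw random bytes and keep 24 alphanumeric characters
passphrase=$(head -c 1000 /dev/urandom | tr -dc A-Za-z0-9 | head -c 24)
echo ${#passphrase}   # prints 24
```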

### Create Pool

```bash
zpool create -f \
  -m none \
  -O acltype=posixacl \
  -o ashift=12 \
  -O atime=off \
  -o autotrim=on \
  -o cachefile=none \
  -O canmount=off \
  -o compatibility=openzfs-2.2-linux \
  -O compression=zstd \
  -O dnodesize=auto \
  -O encryption=aes-256-gcm \
  -O keyformat=passphrase \
  -O keylocation=file:///etc/zfs/zroot.key \
  -O normalization=formD \
  -O recordsize=16K \
  -O relatime=off \
  -O xattr=sa \
  zroot mirror ${OS_DISK1}-part2 ${OS_DISK2}-part2
```

## ZFS Filesystem Creation

Create ZFS datasets for root, system directories, and user data with optimized properties.

### Root Filesystem

```bash
zfs create -o canmount=off -o mountpoint=none zroot/ROOT
zfs create -o canmount=noauto -o mountpoint=/ zroot/ROOT/ubuntu
```

### Keystore

```bash
zfs create -o mountpoint=/etc/zfs/keys zroot/keystore
```

### System Directories

```bash
zfs create -o mountpoint=/var zroot/ROOT/ubuntu/var
zfs create -o mountpoint=/var/cache -o recordsize=128K -o sync=disabled \
  zroot/ROOT/ubuntu/var/cache

zfs create -o mountpoint=/var/lib -o recordsize=8K zroot/ROOT/ubuntu/var/lib
zfs create -o mountpoint=/var/log -o recordsize=128K -o logbias=throughput \
  zroot/ROOT/ubuntu/var/log

zfs create -o mountpoint=/tmp -o recordsize=32K -o compression=lz4 -o devices=off -o exec=off \
  -o setuid=off -o sync=disabled zroot/ROOT/ubuntu/tmp

zfs create -o mountpoint=/var/tmp -o recordsize=32K -o compression=lz4 -o devices=off -o exec=off \
  -o setuid=off -o sync=disabled zroot/ROOT/ubuntu/var/tmp
```

### User Data

```bash
zfs create -o mountpoint=/home -o recordsize=128K zroot/USERDATA
zfs create -o mountpoint=/root zroot/USERDATA/root
zfs create zroot/USERDATA/user
```

### Finalize and Mount

```bash
zpool set bootfs=zroot/ROOT/ubuntu zroot
zfs set keylocation=file:///etc/zfs/keys/zroot.key zroot
zfs set org.zfsbootmenu:keysource=zroot/keystore zroot

zpool export zroot
zpool import -N -R /mnt zroot
zfs load-key -L prompt zroot

zfs mount zroot/ROOT/ubuntu
zfs mount zroot/keystore
zfs mount -a

udevadm trigger
```

### ZFS Dataset Properties Overview

| Dataset                     | canmount      | mountpoint    | recordsize    | compression   | Additional Properties                            |
| --------------------------- | ------------- | ------------- | ------------- | ------------- | ------------------------------------------------ |
| **zroot/ROOT**              | off           | none          | _(inherited)_ | _(inherited)_ | -                                                |
| **zroot/ROOT/ubuntu**       | noauto        | /             | _(inherited)_ | _(inherited)_ | -                                                |
| **zroot/keystore**          | _(inherited)_ | /etc/zfs/keys | _(inherited)_ | _(inherited)_ | readonly=on                                      |
| zroot/ROOT/ubuntu/var       | _(inherited)_ | /var          | _(inherited)_ | _(inherited)_ | -                                                |
| zroot/ROOT/ubuntu/var/cache | _(inherited)_ | /var/cache    | 128K          | _(inherited)_ | sync=disabled                                    |
| zroot/ROOT/ubuntu/var/lib   | _(inherited)_ | /var/lib      | 8K            | _(inherited)_ | -                                                |
| zroot/ROOT/ubuntu/var/log   | _(inherited)_ | /var/log      | 128K          | _(inherited)_ | logbias=throughput                               |
| zroot/ROOT/ubuntu/tmp       | _(inherited)_ | /tmp          | 32K           | lz4           | devices=off, exec=off, setuid=off, sync=disabled |
| zroot/ROOT/ubuntu/var/tmp   | _(inherited)_ | /var/tmp      | 32K           | lz4           | devices=off, exec=off, setuid=off, sync=disabled |
| **zroot/USERDATA**          | _(inherited)_ | /home         | 128K          | _(inherited)_ | -                                                |
| zroot/USERDATA/root         | _(inherited)_ | /root         | _(inherited)_ | _(inherited)_ | -                                                |
| zroot/USERDATA/user         | _(inherited)_ | /home/user    | _(inherited)_ | _(inherited)_ | -                                                |

## Ubuntu Installation

Install Ubuntu base system, configure hostname, users, and network settings.

### Install Base System

```bash
debootstrap noble /mnt
```

### Copy Configuration Files

```bash
cp /etc/hostid /mnt/etc/hostid
cp /etc/mdadm.conf /mnt/etc/
cp /etc/resolv.conf /mnt/etc/resolv.conf
cp /etc/zfs/zroot.key /mnt/etc/zfs/keys/zroot.key
```

### Chroot into New System

```bash
mount -t proc proc /mnt/proc
mount -t sysfs sys /mnt/sys
mount -B /dev /mnt/dev
mount -t devpts pts /mnt/dev/pts
chroot /mnt /bin/bash
```

### Configure System

```bash
echo &apos;hostname&apos; &gt; /etc/hostname
echo -e &apos;127.0.1.1\thostname&apos; &gt;&gt; /etc/hosts

passwd

cat &lt;&lt;EOF &gt; /etc/apt/sources.list
deb http://archive.ubuntu.com/ubuntu/ noble main restricted universe multiverse
deb http://archive.ubuntu.com/ubuntu/ noble-updates main restricted universe multiverse
deb http://archive.ubuntu.com/ubuntu/ noble-security main restricted universe multiverse
deb http://archive.ubuntu.com/ubuntu/ noble-backports main restricted universe multiverse
EOF

apt update
apt upgrade --yes
apt install --yes --no-install-recommends \
  console-setup \
  keyboard-configuration \
  linux-generic-hwe-24.04 \
  locales

dpkg-reconfigure locales tzdata keyboard-configuration console-setup
```

### Create User Account

```bash
adduser user
cp -a /etc/skel/. /home/user
chown -R user:user /home/user
usermod -aG adm,plugdev,sudo user
```

### Configure Network

```bash
cat &gt; /etc/netplan/01-enp4s0.yaml &lt;&lt; &apos;EOF&apos;
network:
  version: 2
  renderer: networkd
  ethernets:
    enp4s0:
      dhcp4: true
      dhcp6: true
      accept-ra: true
      ipv6-privacy: false
EOF

chmod 600 /etc/netplan/01-enp4s0.yaml
netplan apply
```

## ZFS Configuration

Install ZFS packages, enable services, and configure encryption keystore.

### Install ZFS Packages

```bash
apt install --yes dosfstools mdadm zfs-initramfs zfsutils-linux
```

### Enable Services

```bash
systemctl enable zfs.target
systemctl enable zfs-mount
systemctl enable zfs-import.target
```

### Secure Keystore

```bash
zfs set readonly=on zroot/keystore
```

### Configure Keystore Auto-Mount

To unlock other encrypted ZFS pools using the same key from `zroot/keystore`, configure systemd to ensure the keystore
is mounted before key-loading services run.

#### Set Keystore to Manual Mount

```bash
zfs set canmount=noauto zroot/keystore
```

This prevents ZFS from auto-mounting during pool import, avoiding a systemd race condition.

#### Create Keystore Mount Service

Create `/etc/systemd/system/zfs-mount-keystore.service`:

```bash
cat &gt; /etc/systemd/system/zfs-mount-keystore.service &lt;&lt; &apos;EOF&apos;
[Unit]
Description=Mount ZFS keystore dataset zroot/keystore at /etc/zfs/keys
DefaultDependencies=no
Before=local-fs.target
Requires=zfs-import.target
After=zfs-import.target

[Service]
Type=oneshot
ExecStart=/usr/sbin/zfs mount zroot/keystore
RemainAfterExit=yes

[Install]
WantedBy=local-fs.target
EOF
```

Enable the service:

```bash
systemctl daemon-reload
systemctl enable zfs-mount-keystore.service
```

### Configure initramfs

```bash
echo &quot;UMASK=0077&quot; &gt; /etc/initramfs-tools/conf.d/umask.conf
update-initramfs -c -k all
zfs set org.zfsbootmenu:commandline=&quot;quiet&quot; zroot/ROOT
```

## ZFSBootMenu Setup

Install and configure ZFSBootMenu bootloader with UEFI boot entries.

### Format and Mount ESP

```bash
mkfs.vfat -F 32 -n BOOT /dev/md/esp
mkdir -p /boot/efi
mount -t vfat /dev/md/esp /boot/efi/
```

### Add ESP to fstab

```bash
cat &lt;&lt; EOF &gt;&gt; /etc/fstab
$( blkid | grep BOOT | cut -d &apos; &apos; -f 4 ) /boot/efi vfat defaults 0 0
EOF
```
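The `blkid | grep | cut` pipeline above depends on the position of the UUID field in `blkid`'s output, which can vary.
A position-independent sketch, run here against a sample line (the device and UUID are placeholders):

```shell
# Sample blkid output line for the ESP (UUID is a placeholder)
sample='/dev/md127: LABEL="BOOT" UUID="1234-ABCD" TYPE="vfat"'

# Extract the UUID value itself instead of a whitespace-separated field
uuid=$(printf '%s\n' "$sample" | sed -n 's/.*UUID="\([^"]*\)".*/\1/p')
fstab_entry="UUID=${uuid} /boot/efi vfat defaults 0 0"
echo "$fstab_entry"
```

On the live system the sample line would come from `blkid /dev/md/esp`, and the entry would be appended to
`/etc/fstab` as above.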

### Install ZFSBootMenu

```bash
apt install --yes curl

mkdir -p /boot/efi/EFI/ZBM
curl -o /boot/efi/EFI/ZBM/VMLINUZ.EFI -L https://get.zfsbootmenu.org/efi
cp /boot/efi/EFI/ZBM/VMLINUZ.EFI /boot/efi/EFI/ZBM/VMLINUZ-BACKUP.EFI
```

### Create UEFI Boot Entries

```bash
mount -t efivarfs efivarfs /sys/firmware/efi/efivars
apt install --yes efibootmgr
```

```bash
efibootmgr -c -d &quot;$OS_DISK2&quot; -p 1 -L &quot;ZBM 2 (Backup)&quot; -l &apos;\EFI\ZBM\VMLINUZ-BACKUP.EFI&apos;
efibootmgr -c -d &quot;$OS_DISK2&quot; -p 1 -L &quot;ZBM 2&quot; -l &apos;\EFI\ZBM\VMLINUZ.EFI&apos;

efibootmgr -c -d &quot;$OS_DISK1&quot; -p 1 -L &quot;ZBM 1 (Backup)&quot; -l &apos;\EFI\ZBM\VMLINUZ-BACKUP.EFI&apos;
efibootmgr -c -d &quot;$OS_DISK1&quot; -p 1 -L &quot;ZBM 1&quot; -l &apos;\EFI\ZBM\VMLINUZ.EFI&apos;

efibootmgr -v
```

## Final Steps

Unmount filesystems, reboot, and verify the installation.

### Exit Chroot and Cleanup

```bash
exit
umount -n -R /mnt
zpool export zroot
```

### Reboot

```bash
reboot
```

### Post-Installation Verification

```bash
zpool status
zfs list
cat /proc/mdstat
ip addr show
ping -c3 google.com
systemctl status zfs-mount
```

### Pin GRUB Packages

```bash
tee /etc/apt/preferences.d/no-grub &lt;&lt; &apos;EOF&apos;
Package: grub* grub2*
Pin: release *
Pin-Priority: -1
EOF
```

GRUB is pinned with negative priority to prevent accidental installation. This system uses ZFSBootMenu as the
bootloader, and installing GRUB would conflict with it by overwriting EFI boot entries and potentially breaking the
boot process. Many Ubuntu packages and kernel updates attempt to install GRUB as a dependency, so this pin ensures the
system remains GRUB-free.

### Create Snapshot

Snapshot the fresh installation so the system can be rolled back to a known-good state later:

```bash
zfs snapshot zroot/ROOT/ubuntu@fresh-install
zfs snapshot zroot/ROOT/ubuntu/var@fresh-install
zfs snapshot zroot/ROOT/ubuntu/var/lib@fresh-install
zfs list -t snapshot
```

---

## Appendix 1: Recovery Mode

Boot from a Live USB and access the installed system for maintenance or recovery.

### Recovery Steps

```bash
sudo -i
apt update
apt install --yes zfsutils-linux

cat /proc/mdstat
mdadm --run /dev/md127

mkdir -p /mnt/boot/efi/
mount -t vfat /dev/md/esp /mnt/boot/efi/

zpool export -a
zpool import -f -N -R /mnt zroot
zfs load-key -L prompt zroot

zfs mount zroot/ROOT/ubuntu
zfs mount zroot/keystore
zfs mount -a

mount -t proc proc /mnt/proc
mount -t sysfs sys /mnt/sys
mount -B /dev /mnt/dev
mount -t devpts pts /mnt/dev/pts

chroot /mnt /bin/bash
```

### Exit Recovery

```bash
exit
umount -n -R /mnt
zpool export zroot
reboot
```

---

## Appendix 2: Replacing a Faulted Drive

Replace a failed drive in the ZFS mirror and the mdraid ESP array.

### Check Status

```bash
zpool status zroot
cat /proc/mdstat
mdadm --run /dev/md127
mdadm --detail /dev/md/esp
```

Example faulted output:

```
  pool: zroot
 state: DEGRADED
config:
    NAME                                             STATE
    zroot                                            DEGRADED
      mirror-0                                       DEGRADED
        nvme-Force_MP510_1919820500012769305E-part2  ONLINE
        nvme-WD_BLACK_SN770_250GB_2346FX400125-part2 FAULTED
```
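When scripting health checks, the pool state and any faulted mdraid member can be extracted from this kind of output.
A minimal sketch against sample text (device names are illustrative):

```shell
# Sample lines modeled on the degraded output above
zstatus='  pool: zroot
 state: DEGRADED'
mdstat='md127 : active raid1 sda1[0] sdb1[1](F)'

# Pool state is the second field of the "state:" line
state=$(printf '%s\n' "$zstatus" | awk '/^ *state:/ { print $2 }')
echo "$state"

# In /proc/mdstat, a trailing (F) marks a faulted member
faulted=$(printf '%s\n' "$mdstat" | grep -o '[a-z0-9]*\[[0-9]*\](F)')
echo "$faulted"
```

On a live system these would come from `zpool status zroot` and `cat /proc/mdstat`; any state other than `ONLINE`, or
any `(F)` member, warrants attention.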

### Replace Drive

```bash
shutdown -h now
```

Replace the drive, boot the system, then identify the new disk:

```bash
ls -la /dev/disk/by-id/ | grep -v part
lsblk -o NAME,SIZE,MODEL,SERIAL

export NEW_DISK=&quot;/dev/disk/by-id/nvme-NEW_DRIVE_SERIAL_HERE&quot;
export WORKING_DISK=&quot;/dev/disk/by-id/nvme-Force_MP510_1919820500012769305E&quot;
```

### Partition New Drive

```bash
sgdisk --replicate=&quot;$NEW_DISK&quot; &quot;$WORKING_DISK&quot;
sgdisk --randomize-guids &quot;$NEW_DISK&quot;
lsblk &quot;$NEW_DISK&quot;
```

### Replace in mdraid

```bash
mdadm --run /dev/md127
mdadm --manage /dev/md127 --add &quot;${NEW_DISK}-part1&quot;
watch cat /proc/mdstat
```

### Replace in ZFS Pool

```bash
export OLD_DISK=&quot;/dev/disk/by-id/nvme-WD_BLACK_SN770_250GB_2346FX400125&quot;
zpool replace zroot &quot;${OLD_DISK}-part2&quot; &quot;${NEW_DISK}-part2&quot;
watch zpool status zroot
```

### Create Boot Entries

```bash
mount -t efivarfs efivarfs /sys/firmware/efi/efivars 2&gt;/dev/null || true

efibootmgr -c -d &quot;$NEW_DISK&quot; -p 1 -L &quot;ZBM NEW (Backup)&quot; -l &apos;\EFI\ZBM\VMLINUZ-BACKUP.EFI&apos;
efibootmgr -c -d &quot;$NEW_DISK&quot; -p 1 -L &quot;ZBM NEW&quot; -l &apos;\EFI\ZBM\VMLINUZ.EFI&apos;
efibootmgr -v
```

### Verify

```bash
zpool status zroot
cat /proc/mdstat
mdadm --detail /dev/md/esp
zfs list
smartctl -a &quot;$NEW_DISK&quot;
efibootmgr -v
```

Expected output:

```
  pool: zroot
 state: ONLINE
config:
    NAME                                             STATE
    zroot                                            ONLINE
      mirror-0                                       ONLINE
        nvme-Force_MP510_1919820500012769305E-part2  ONLINE
        nvme-NEW_DRIVE_SERIAL_HERE-part2             ONLINE
```

---

## Appendix 3: Unlocking Other Pools with the Same Key

If you have additional encrypted ZFS pools (e.g., `tank-data`) that use the same key stored in `zroot/keystore`,
configure their key-load services to wait for the keystore mount.

**Note:** This configuration assumes you are using `zfs-zed` with the `zfs-mount-generator` for systemd integration,
which is the default setup on Ubuntu 24.04.

### Configure Key-Load Dependencies

For each additional encrypted pool, create a systemd drop-in to add dependencies:

```bash
systemctl edit zfs-load-key@tank-data.service
```

Add these lines:

```ini
[Unit]
Requires=zfs-mount-keystore.service
After=zfs-mount-keystore.service
```

Save and reload:

```bash
systemctl daemon-reload
```
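`systemctl edit` stores those lines as a drop-in file. Following systemd's drop-in convention, the equivalent file
created by hand would look like this (`override.conf` is the name `systemctl edit` uses by default):

```ini
# /etc/systemd/system/zfs-load-key@tank-data.service.d/override.conf
[Unit]
Requires=zfs-mount-keystore.service
After=zfs-mount-keystore.service
```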

### Boot Sequence

This ensures the correct boot order:

1. Pool is imported (ZFSBootMenu has already decrypted `zroot`)
2. `zfs-mount-keystore.service` mounts `zroot/keystore` at `/etc/zfs/keys`
3. `zfs-load-key@tank-data.service` starts only after keystore is available
4. `/etc/zfs/keys/zroot.key` exists when the key-load service runs

This prevents race conditions where the key-load service attempts to access keys before the keystore dataset is
mounted.</content:encoded><category>ubuntu</category><category>zfs</category><category>zbm</category><category>installation</category><category>encryption</category><author>marc@schiller.im (Marc Schiller)</author></item><item><title>No blurry fonts</title><link>https://schiller.im/snippets/shell/better-rendering/</link><guid isPermaLink="true">https://schiller.im/snippets/shell/better-rendering/</guid><pubDate>Thu, 02 Oct 2025 00:00:00 GMT</pubDate><content:encoded>```shell
FREETYPE_PROPERTIES=&quot;cff:no-stem-darkening=0 autofitter:no-stem-darkening=0&quot;
```</content:encoded><category>meta</category><author>marc@schiller.im (Marc Schiller)</author></item><item><title>Fiat Lux!</title><link>https://schiller.im/posts/fiat-lux/</link><guid isPermaLink="true">https://schiller.im/posts/fiat-lux/</guid><pubDate>Wed, 01 Oct 2025 00:00:00 GMT</pubDate><content:encoded>🔥</content:encoded><category>meta</category><author>marc@schiller.im (Marc Schiller)</author></item></channel></rss>