Add Second Disk to Existing TrueNAS Pool in Mirror Configuration (RAID1)

We’re going to add a mirror disk to an existing TrueNAS ZFS pool.

The Reason

We are using the TrueNAS homelab server that we built some time ago to provide a shared storage solution for Kubernetes.

When we built the TrueNAS server, we went for the most basic and least expensive ZFS pool with a single disk. It worked well but did not provide any redundancy. While we didn’t store any important data in Kubernetes at the time, we do now: Elasticsearch logs, WordPress MySQL databases, Prometheus metrics, and more.

We’ve purchased a second hard drive of the same size, and we want to use it as a mirror disk, also known as RAID1, to ensure that no data is lost in case of a single drive failure.

The Plan

We currently have a single disk /dev/ada1 attached to the system. This disk is used for the ZFS storage pool called homelab-hdd.

We have a second disk, /dev/da0, to which we want to mirror the homelab-hdd pool.

Disclaimer

Obviously, there’s no warranty. Back up your data now.

Add Disk to Existing TrueNAS Storage Pool

Check the status of the existing storage pool and write down the gptid of the existing disk:

# zpool status homelab-hdd
  pool: homelab-hdd
 state: ONLINE
  scan: none requested
config:

	NAME                                          STATE     READ WRITE CKSUM
	homelab-hdd                                   ONLINE       0     0     0
	  gptid/00ed0daf-a2c8-11eb-9602-e0db55aed2e8  ONLINE       0     0     0
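If you prefer not to copy the gptid by hand, it can be pulled out of the zpool status output with a little awk. A convenience sketch, demonstrated here against a captured sample of the output above; on the TrueNAS box you would pipe `zpool status homelab-hdd` in instead:

```shell
# Extract the gptid/... device name(s) from `zpool status` output.
# Demonstrated against a captured sample so the parsing is clear.
zpool_sample='  NAME                                          STATE     READ WRITE CKSUM
  homelab-hdd                                   ONLINE       0     0     0
    gptid/00ed0daf-a2c8-11eb-9602-e0db55aed2e8  ONLINE       0     0     0'

# The gptid lines are the ones whose first field starts with "gptid/".
existing_gptid=$(printf '%s\n' "$zpool_sample" | awk '$1 ~ /^gptid\// { print $1 }')
echo "$existing_gptid"
```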

Wipe the new drive and create a GPT partition table on it (gpart destroy -F erases any existing partitioning):

# gpart destroy -F /dev/da0
# gpart create -s gpt /dev/da0

Create a 2 GB swap partition on the new drive, matching the layout TrueNAS created on the existing disk:

# gpart add -b 128 -t freebsd-swap -s 2G /dev/da0
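The -b 128 option places the partition at sector 128, and with 512-byte sectors the 2 GiB swap works out to 4194304 sectors, ending at sector 4194431 (the same end value gpart list reports below for both swap partitions). The arithmetic as a quick sanity check:

```shell
# Sanity-check the swap partition geometry: 2 GiB at 512 bytes/sector,
# starting at sector 128 (matching `gpart add -b 128 -s 2G`).
sector_size=512
swap_bytes=$((2 * 1024 * 1024 * 1024))
swap_sectors=$((swap_bytes / sector_size))
swap_start=128
swap_end=$((swap_start + swap_sectors - 1))
echo "swap: $swap_sectors sectors, sectors $swap_start-$swap_end"
```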

Create a ZFS partition:

# gpart add -t freebsd-zfs /dev/da0

Write down the ID of the new data partition we’ve just created (look for the rawuuid under da0p2):

# gpart list

[...output truncated...]

Geom name: ada1
modified: false
state: OK
fwheads: 16
fwsectors: 63
last: 625142407
first: 40
entries: 128
scheme: GPT
Providers:
1. Name: ada1p1
   Mediasize: 2147483648 (2.0G)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 65536
   Mode: r1w1e1
   efimedia: HD(1,GPT,00c6b02e-a2c8-11eb-9602-e0db55aed2e8,0x80,0x400000)
   rawuuid: 00c6b02e-a2c8-11eb-9602-e0db55aed2e8
   rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 2147483648
   offset: 65536
   type: freebsd-swap
   index: 1
   end: 4194431
   start: 128
2. Name: ada1p2
   Mediasize: 317925363712 (296G)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 2147549184
   Mode: r1w1e2
   efimedia: HD(2,GPT,00ed0daf-a2c8-11eb-9602-e0db55aed2e8,0x400080,0x2502ea08)
   rawuuid: 00ed0daf-a2c8-11eb-9602-e0db55aed2e8
   rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 317925363712
   offset: 2147549184
   type: freebsd-zfs
   index: 2
   end: 625142407
   start: 4194432
Consumers:
1. Name: ada1
   Mediasize: 320072933376 (298G)
   Sectorsize: 512
   Mode: r2w2e5


Geom name: da0
modified: false
state: OK
fwheads: 255
fwsectors: 63
last: 625142407
first: 40
entries: 128
scheme: GPT
Providers:
1. Name: da0p1
   Mediasize: 2147483648 (2.0G)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 65536
   Mode: r0w0e0
   efimedia: HD(1,GPT,6709abf3-5540-11ec-8dd5-e0db55aed2e8,0x80,0x400000)
   rawuuid: 6709abf3-5540-11ec-8dd5-e0db55aed2e8
   rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 2147483648
   offset: 65536
   type: freebsd-swap
   index: 1
   end: 4194431
   start: 128
2. Name: da0p2
   Mediasize: 317925363712 (296G)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 2147549184
   Mode: r0w0e0
   efimedia: HD(2,GPT,6a5eb019-5540-11ec-8dd5-e0db55aed2e8,0x400080,0x2502ea08)
   rawuuid: 6a5eb019-5540-11ec-8dd5-e0db55aed2e8
   rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 317925363712
   offset: 2147549184
   type: freebsd-zfs
   index: 2
   end: 625142407
   start: 4194432
Consumers:
1. Name: da0
   Mediasize: 320072932352 (298G)
   Sectorsize: 512
   Mode: r0w0e0
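Since the gpart list output is long, the rawuuid of the new data partition can also be fished out with awk. A convenience sketch against a captured fragment of the output above; on the TrueNAS box you would pipe `gpart list da0` in instead:

```shell
# Print the rawuuid that follows a given "Name:" entry in `gpart list`
# output. Demonstrated on a captured fragment of the output above.
gpart_sample='2. Name: da0p2
   Mediasize: 317925363712 (296G)
   rawuuid: 6a5eb019-5540-11ec-8dd5-e0db55aed2e8
   type: freebsd-zfs'

new_rawuuid=$(printf '%s\n' "$gpart_sample" | awk '
  /Name: da0p2$/              { in_part = 1 }          # start of the da0p2 entry
  in_part && $1 == "rawuuid:" { print $2; exit }       # first rawuuid after it
')
echo "$new_rawuuid"
```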

Attach the new partition to the existing pool, mirroring all data from the old drive to the new one. The syntax is as follows:

# zpool attach StoragePoolName /dev/gptid/[gptid_of_existing_disk] /dev/gptid/[gptid_of_new_partition]

In our case:

# zpool attach homelab-hdd \
  /dev/gptid/00ed0daf-a2c8-11eb-9602-e0db55aed2e8 \
  /dev/gptid/6a5eb019-5540-11ec-8dd5-e0db55aed2e8

Depending on the amount of data that needs copying, this process may take some time. You can monitor the resilver progress with zpool status:

# zpool status homelab-hdd
  pool: homelab-hdd
 state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
	continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Sat Dec  4 19:29:14 2021
	48.7G scanned at 13.9M/s, 42.2G issued at 12.1M/s, 48.7G total
	37.1G resilvered, 86.67% done, 00:09:10 to go
config:

	NAME                                            STATE     READ WRITE CKSUM
	homelab-hdd                                     ONLINE       0     0     0
	  mirror-0                                      ONLINE       0     0     0
	    gptid/00ed0daf-a2c8-11eb-9602-e0db55aed2e8  ONLINE       0     0     0
	    gptid/6a5eb019-5540-11ec-8dd5-e0db55aed2e8  ONLINE       0     0     0  (resilvering)

errors: No known data errors
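Rather than re-running zpool status by hand, you can poll until the resilver finishes. A minimal sketch; the check simply greps for the “resilver in progress” line seen in the status output above:

```shell
# Succeeds (exit 0) while the status text on stdin reports an active
# resilver; used to poll until the mirror has finished syncing.
resilver_in_progress() {
  grep -q 'resilver in progress'
}

# On the TrueNAS box (sketch, not run here):
#   while zpool status homelab-hdd | resilver_in_progress; do sleep 60; done
#   echo "resilver complete"

# Demonstrate the check against a captured status line:
printf '%s\n' '  scan: resilver in progress since Sat Dec  4 19:29:14 2021' \
  | resilver_in_progress && echo "still resilvering"
```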
