One of the common problems with ZFS pools is the inability to shrink a pool. This typically surfaces when you try to replace a failing “1TB” disk with another “1TB” disk, and the replacement fails:
# zpool create tank raidz /dev/da{1,2,3}
# zpool replace tank da1 da4
cannot replace da1 with da4: device is too small
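You can compare the raw sizes with diskinfo(8). The byte counts below are made up for illustration, but they show the kind of off-by-a-few-sectors mismatch that triggers the error:

# diskinfo da1 da4
da1	512	1153433600	2252800	[...]
da4	512	1153432576	2252798	[...]

Here da4 is only two sectors (1024 bytes) smaller than da1, yet that is enough to make the replace fail.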
Even if both drives are sold as “1TB”, they might differ by a few sectors. The only workaround for now is to make sure ZFS does not use the full disk when creating the pool, leaving some margin for “shrinking” if needed. One way to accomplish this is to use partitions (or slices) as ZFS vdevs. This is not recommended, because ZFS will only enable the disk’s write cache when it is given the whole disk.
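For comparison, the partition-based approach would look roughly like this on FreeBSD (the 953m partition size is illustrative, leaving a similar margin on the 1.1G disks used below):

# foreach i ( 1 2 3 )
gpart create -s gpt da$i
gpart add -t freebsd-zfs -s 953m da$i
end
# zpool create tank raidz /dev/da{1,2,3}p1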
Here is my solution: create your pool with sparse files that are just smaller than your disks. How much smaller is up to you, but that margin is what you gain: a replacement disk only needs to be as large as the sparse files, not as large as the original disks. Since the files are sparse, they don’t actually take up much space on the root disk. As soon as the pool is created, replace each sparse file with the corresponding real disk. Make sure autoexpand is set to off; otherwise the pool will be expanded to fill the entire disks, ruining our effort to keep the pool just slightly smaller.
# geom disk list
[...]
Geom name: da1
Providers:
1. Name: da1
   Mediasize: 1153433600 (1.1G)
   Sectorsize: 512
[...]
# foreach i ( 1 2 3 )
dd if=/dev/zero of=sparse-file-$i.bin bs=512 count=1 seek=`echo 1000000000/512 - 1 | bc`
end
1+0 records in
1+0 records out
512 bytes transferred in 0.000030 secs (17043521 bytes/sec)
1+0 records in
1+0 records out
512 bytes transferred in 0.000031 secs (16519105 bytes/sec)
1+0 records in
1+0 records out
512 bytes transferred in 0.000030 secs (17043521 bytes/sec)
#
# zpool create tank raidz /root/sparse-file-{1,2,3}.bin
# zpool set autoexpand=off tank
# foreach i ( 1 2 3 )
zpool replace tank sparse-file-$i.bin /dev/da$i
end
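When the last replace has resilvered, ZFS detaches the file vdevs automatically, so the sparse files can be deleted. A quick sanity check before cleaning up might look like this (output abbreviated):

# zpool get autoexpand tank
NAME  PROPERTY    VALUE   SOURCE
tank  autoexpand  off     local
# rm /root/sparse-file-{1,2,3}.bin

The pool now only uses 1000000000 bytes per disk, so a failing da1 can be replaced with any disk of at least that size, and the zpool replace from the start of this post succeeds.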
Will says:
Hi, I used your guide as a basis for some FreeNAS work I was doing so my reason for posting is two-fold.
Firstly, thanks for your guide. I’ve attributed your post but thought you might like to know it helped someone 🙂
Secondly, a moderator mentioned he has seen some problems with what I was doing and I wondered if you had heard anything similar. My searching/forum-ing has come up blank.
This is the post btw: http://forums.freenas.org/index.php?threads/creating-a-degraded-raidz-via-sparse-files.18276/
2014-02-28, 7:58
Niobos says:
I’ve read your thread, but since “cyberjock” doesn’t give any info on what might be wrong, I can’t give an answer either.
2014-02-28, 8:15
As linked in my post, doing ZFS on full disks is the recommended way. And I don’t see anything wrong with taking the intermediate step with files to make the array “shrinkable”.
I do agree that creating a degraded array is somewhat strange. What are you trying to accomplish with that?
Will says:
I’m using a degraded array as a temporary measure whilst I juggle HDDs / money. Buying 4 HDDs all in one go is expensive, so I was spreading it out whilst I sorted out the systems the data is coming from. It’s certainly not a long-term solution 🙂
My expectation is that whilst the array is degraded I can still lose 1 disk without losing everything, and then once I have a 4th disk available I can repair the degraded array and have 2 disks’ worth of redundancy.
All that being said, if it went belly-up, it would be irritating and cost me time. But it shouldn’t cost me all my data. I still have the source disks with the data on them.
2014-03-02, 0:47
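For reference, the degraded-raidz trick from the linked thread boils down to using one more sparse file as a stand-in for the missing disk, then offlining it before any data is written. Roughly, with an illustrative file name and sizes matching the example above (this combines nicely with the shrink-margin trick):

# truncate -s 1000000000 /root/fake-disk.bin
# zpool create tank raidz2 da1 da2 da3 /root/fake-disk.bin
# zpool offline tank /root/fake-disk.bin
# rm /root/fake-disk.bin
# zpool status tank | grep state
 state: DEGRADED

Once the fourth disk arrives, the pool is repaired with zpool replace tank /root/fake-disk.bin da4, restoring the full raidz2 redundancy.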