I posted a few days ago asking how to set up my storage for Proxmox on my Lenovo M90q, which I had since settled. Or so I thought. The Lenovo has space for two NVMe drives and one SATA SSD.
There seems to be a general consensus that you shouldn’t use consumer SSDs (even NAS SSDs like the WD Red) for ZFS, since it produces lots of writes which in turn wear out the SSDs quickly.
There is conflicting information out there, though: some say it’s fine and you’ll only see a few GB of writes per day, while others warn of several TB of writes per day.
I plan on using Proxmox as a hypervisor for homelab use, with one or two VMs running Docker, Nextcloud, Jellyfin, Arr-Stack, TubeArchivist, PiHole and such. Static data (files, videos, music) will not be stored on ZFS, only the VM images themselves.
I did some research and found a few SSDs with good write endurance (see table below) and settled on two WD Red SN700 2TB drives in a ZFS mirror; they are rated at 2500 TBW (a rough lifetime estimate follows after the table). For file storage, I’ll just use a Samsung 870 EVO with 4TB and 2400 TBW.
| SSD | Capacity | TBW (TB) | Price (€) |
| --- | --- | --- | --- |
| 980 PRO | 1 TB | 600 | 68 |
| 980 PRO | 2 TB | 1200 | 128 |
| SN700 | 500 GB | 1000 | 48 |
| SN700 | 1 TB | 2000 | 70 |
| SN700 | 2 TB | 2500 | 141 |
| 870 EVO | 2 TB | 1200 | 117 |
| 870 EVO | 4 TB | 2400 | 216 |
| SA500 | 2 TB | 1300 | 137 |
| SA500 | 4 TB | 2500 | 325 |
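To put those TBW ratings in perspective, here is the quick back-of-the-envelope math I did. The daily write rates are just assumptions meant to bracket the range people report (from “a few GB” to “several TB” per day), not measurements from my own setup:

```python
# Rough SSD lifetime estimate from the rated TBW and an assumed daily write rate.
# The write rates below are assumptions spanning the claims I've read,
# not measurements from my Proxmox host.

tbw_ratings_tb = {
    "WD Red SN700 2TB": 2500,
    "Samsung 870 EVO 4TB": 2400,
}

assumed_writes_gb_per_day = [10, 50, 500, 2000]  # "a few GB" up to "several TB"

for drive, tbw in tbw_ratings_tb.items():
    for gb_per_day in assumed_writes_gb_per_day:
        years = tbw * 1000 / gb_per_day / 365
        print(f"{drive}: {gb_per_day} GB/day -> ~{years:.1f} years to reach {tbw} TBW")
```

Even the pessimistic assumption of 2 TB per day gives the SN700 2TB roughly three and a half years before it hits its rating, and at a few GB per day it’s a non-issue. What I can’t judge is which of those write rates my VMs will actually produce.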
Is that good enough? Would you rather recommend enterprise-grade SSDs? And if so, which M.2 NVMe models would you recommend? Or should I just stick with ext4 as the file system, losing data integrity checks and the ability to take snapshots?
I’d love to hear your thoughts about this, thanks!
I’m kinda repeating things already said here, but there are a couple of points I wanted to highlight…
Monitor the SMART health: enterprise and consumer drives both fail, and it’s good to know in advance (rough monitoring sketch after this list).
Plan for failure: something will go wrong… might be a drive failure, might be you wiping it by accident… just do backups.
Use redundancy: several cheapo rubbish drives in a RAID / ZFS / BTRFS pool are always better than one “good” drive on its own.
Main point: build something and destroy it to see what happens, before you build your “final” setup - experience is always better than theory.
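On the SMART point, you don’t need anything fancy. Here’s a minimal sketch of the idea; it assumes smartmontools is installed, uses /dev/nvme0 as a placeholder device path, and reads the JSON fields smartctl reports for NVMe drives — adjust all of that for your own setup:

```python
# Minimal SMART health check sketch.
# Assumes smartmontools is installed; /dev/nvme0 is a placeholder device path.
import json
import subprocess

DEVICE = "/dev/nvme0"  # adjust to your drive

result = subprocess.run(
    ["smartctl", "--json", "-a", DEVICE],
    capture_output=True, text=True, check=False,
)
report = json.loads(result.stdout)

passed = report.get("smart_status", {}).get("passed")
nvme = report.get("nvme_smart_health_information_log", {})

print(f"{DEVICE}: overall health passed = {passed}")
print(f"  percentage used:    {nvme.get('percentage_used')} %")
print(f"  data units written: {nvme.get('data_units_written')}")
print(f"  media errors:       {nvme.get('media_errors')}")
```

Run it from cron and mail yourself the output, or just enable smartd alerts (Proxmox ships smartmontools anyway); the point is to see the wear and error counters trending before the drive actually dies.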
I built my own NAS and was going to use ZFS until I messed around with it… for me, BTRFS just made more sense because of my skills, the tools I use, etc… so I know I can repair it.
And test your backups 🎃
I’m currently playing around in VMs even before I order my hard drives, just to see what I can do. Next up is simulating a root drive failure and practicing how to replace the drive. I also want to test rolling back from snapshots.
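To make that drill repeatable I’ve sketched it out roughly like this. The pool name rpool and the dataset/snapshot names are just assumptions for my test VM (adjust to whatever zpool status and zfs list actually show), and the disk replacement line is left commented out because the device names depend on the VM:

```python
# Sketch of the failure/rollback drill for the test VM.
# Pool, dataset and snapshot names are assumptions -- check your own
# `zpool status` / `zfs list` output and adjust.
import subprocess

POOL = "rpool"                                  # assumed pool name
DATASET = f"{POOL}/data/vm-100-disk-0"          # hypothetical VM disk dataset
SNAPSHOT = f"{DATASET}@before-breaking-things"  # hypothetical snapshot name

def run(*cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Snapshot the VM disk, then deliberately break something inside the VM.
run("zfs", "snapshot", SNAPSHOT)

# 2. After detaching one virtual disk, the pool should report DEGRADED here.
run("zpool", "status", "-x")
run("zpool", "status", POOL)

# 3. Replace the "failed" disk with a fresh virtual one (example device names).
# run("zpool", "replace", POOL, "/dev/sdb", "/dev/sdc")

# 4. List the snapshots and roll the VM disk back to the one from step 1.
run("zfs", "list", "-t", "snapshot", "-r", DATASET)
run("zfs", "rollback", SNAPSHOT)
```

The root-pool replacement itself I still want to do by hand at least once, since getting the new disk to boot again is the part I’m least sure about.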
The data that I really need and can’t replace is redundant anyway: one copy on my PC, one on my external HDD, one on my NAS, and one on a system at my sister’s place. That’s four copies on several media (one of them cold), with one at another location. :)