Hi!

I used to have three RAID1 arrays:

2 × 4 TB SSD dedicated to personal data

2 × 6 TB HDD dedicated to “ISOs”, the eye-patched ones.

2 × 4 TB SSD for backups.

ext4 everywhere.

I ran this setup for years, maybe even 20 (with many different disks and sizes over time).

I decided it was time to be more efficient.

I removed the two HDDs, saving quite a lot of power, and switched the four SSDs to RAID5, then put Btrfs on top of that. Please note I am not using Btrfs’s RAID feature, but Linux mdadm software RAID (which has been rock solid for me for years) with Btrfs on top, as if on a single drive.
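
For anyone curious, the rough shape of the new setup is just a few commands (a sketch; device names and mount point are placeholders, not my real ones):

    # Build a 4-disk RAID5 md array from the four SSDs
    mdadm --create /dev/md0 --level=5 --raid-devices=4 \
        /dev/sda /dev/sdb /dev/sdc /dev/sdd

    # Put a single Btrfs filesystem on top of the md device
    mkfs.btrfs -L data /dev/md0
    mount /dev/md0 /mnt/data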

I chose MD not only for my very positive past experience, but especially because I love how easy it is to recover and restore from many kinds of failure.
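
For example, monitoring the array and swapping a dead disk is only a couple of commands (again a sketch with placeholder device names):

    # Check array health
    cat /proc/mdstat
    mdadm --detail /dev/md0

    # Replace a failed member: mark it failed, remove it, add the spare
    mdadm /dev/md0 --fail /dev/sdb --remove /dev/sdb
    mdadm /dev/md0 --add /dev/sde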

I chose not to try ZFS because I don’t feel comfortable using out-of-kernel drivers, and I dislike how RAM-hungry ZFS seems to be.

What do you guys think?

  • Justin@lemmy.jlh.name · 21 days ago

    Looks like a good setup to me. HDDs have a lot of downsides, so if you can afford the extra $20/TB, an all-flash array is super useful. mdadm is rock solid.

    The only issue I can think of is that you can’t expand this array as easily as you can with LVM or ZFS, so just watch out for that.
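
    To be fair, md can reshape onto an extra disk, it’s just a slow, manual process rather than a one-liner (rough sketch, placeholder names):

        # Add a new member and reshape the RAID5 across 5 disks
        mdadm /dev/md0 --add /dev/sdf
        mdadm --grow /dev/md0 --raid-devices=5

        # After the reshape completes, grow Btrfs into the new space
        btrfs filesystem resize max /mnt/data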

    • Shimitar@feddit.itOP · 21 days ago

      Good point on the expansion, but I am not too bothered about it, as I have always handled it by moving data around. It takes a while, but it leaves you with a set of disks that still hold the old data, and that saved my ass a few times in the past. Now I should be fine with good backups, but you never know.
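
      For the record, the move-data-around approach is basically one big copy onto the new array (sketch; mount points are assumptions):

          # Copy everything over, preserving hardlinks, ACLs and xattrs
          rsync -aHAX --info=progress2 /mnt/old-array/ /mnt/new-array/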