I recently repurposed an old HP Stream as a home server and successfully run Immich. I really like it, and even a small 500GB disk is far more than the 15GB Google offers.

My issue, though, is backup. I would only be comfortable if all the data were backed up to an off-site server (cloud). But the backup storage will probably cost about as much as paying for a service like Ente or similar, directly replacing Google Photos.

What am I missing? Where do you store your backup?

  • butitsnotme@lemmy.world · 1 year ago

    I back up to an external hard disk that I keep in a fireproof, water-resistant safe at home. Each service has its own LVM volume, which I snapshot and then back up with borg, all into one repository. The backup is triggered by a udev rule, so it happens automatically when I plug the drive in; the backup script uses ntfy.sh (running locally) to let me know when it is finished so I can put the drive back in the safe. I can share the script later, if anyone is interested.
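    The trigger mechanism can be sketched roughly like this (a sketch only; the UUID, paths, and unit name are placeholders, not necessarily this poster's exact files):

    ```
    # /etc/udev/rules.d/99-backup.rules (sketch; match your drive's filesystem UUID)
    ACTION=="add", SUBSYSTEM=="block", ENV{ID_FS_UUID}=="1234-ABCD", \
      RUN+="/usr/bin/systemctl start --no-block backup.service"

    # /etc/systemd/system/backup.service (sketch)
    [Service]
    Type=oneshot
    ExecStart=/etc/backups/run.sh
    ```

    Starting a oneshot systemd unit with --no-block is the usual pattern here, because udev kills long-running RUN+= processes, so the rule should only kick off the service, not run the backup itself.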

      • butitsnotme@lemmy.world · edited · 7 months ago

        I followed the guide found here, however with a few modifications.

        Notably, I did not encrypt the borg repository, and heavily modified the backup script.

        #!/bin/bash -ue
        
        # The udev rule is not terribly accurate and may trigger our service before
        # the kernel has finished probing partitions. Sleep for a bit to ensure
        # the kernel is done.
        #
        # This can be avoided by using a more precise udev rule, e.g. matching
        # a specific hardware path and partition.
        sleep 5
        
        #
        # Script configuration
        #
        
        # The backup partition is mounted there
        MOUNTPOINT=/mnt/external
        
        # This is the location of the Borg repository
        TARGET=$MOUNTPOINT/backups/backups.borg
        
        # Archive name schema
        DATE=$(date '+%Y-%m-%d-%H-%M-%S')-$(hostname)
        
        # This is the file that will later contain UUIDs of registered backup drives
        DISKS=/etc/backups/backup.disk
        
        # Find whether the connected block device is a backup drive
        for uuid in $(lsblk --noheadings --list --output uuid)
        do
                if grep --quiet --fixed-strings "$uuid" "$DISKS"; then
                        break
                fi
                uuid=
        done
        
        if [ -z "${uuid:-}" ]; then
                echo "No backup disk found, exiting"
                exit 0
        fi
        
        echo "Disk $uuid is a backup disk"
        partition_path=/dev/disk/by-uuid/$uuid
        # Mount file system if not already done. This assumes that if something is already
        # mounted at $MOUNTPOINT, it is the backup drive. It won't find the drive if
        # it was mounted somewhere else.
        mountpoint --quiet "$MOUNTPOINT" || mount "$partition_path" "$MOUNTPOINT"
        drive=$(lsblk --inverse --noheadings --list --paths --output name $partition_path | head --lines 1)
        echo "Drive path: $drive"
        
        # Log Borg version
        borg --version
        
        echo "Starting backup for $DATE"
        
        # Make sure all data is written before creating the snapshot
        sync
        
        
        # Options for borg create
        BORG_OPTS="--stats --one-file-system --compression lz4 --checkpoint-interval 86400"
        
        # No one can answer if Borg asks these questions, it is better to just fail quickly
        # instead of hanging.
        export BORG_RELOCATED_REPO_ACCESS_IS_OK=no
        export BORG_UNKNOWN_UNENCRYPTED_REPO_ACCESS_IS_OK=no
        
        
        #
        # Create backups
        #
        
        function backup () {
          local DISK="$1"
          local LABEL="$2"
          shift 2
        
          local SNAPSHOT="$DISK-snapshot"
          local SNAPSHOT_DIR="/mnt/snapshot/$DISK"
        
          local DIRS=""
          while (( "$#" )); do
            DIRS="$DIRS $SNAPSHOT_DIR/$1"
            shift
          done
        
          # Make and mount the snapshot volume
          mkdir -p $SNAPSHOT_DIR
          lvcreate --size 50G --snapshot --name $SNAPSHOT /dev/data/$DISK
          mount /dev/data/$SNAPSHOT $SNAPSHOT_DIR
        
          # Create the backup
          borg create $BORG_OPTS $TARGET::$DATE-$DISK $DIRS
        
        
          # Check the snapshot usage before removing it
          lvs
          umount $SNAPSHOT_DIR
          lvremove --yes /dev/data/$SNAPSHOT
        }
        
        # usage: backup <lvm volume> <snapshot name> <list of folders to backup>
        backup photos immich immich
        # Other backups listed here
        
        echo "Completed backup for $DATE"
        
        # Just to be completely paranoid
        sync
        
        if [ -f /etc/backups/autoeject ]; then
                umount $MOUNTPOINT
                udisksctl power-off -b $drive
        fi
        
        # Send a notification
        curl -H 'Title: Backup Complete' -d "Server backup for $DATE finished" 'http://10.30.0.1:28080/backups'
        

        Most of my services are stored on individual LVM volumes, all mounted under /mnt, so immich is completely self-contained under /mnt/photos/immich/. The last line of my script sends a notification to my phone using ntfy.

    • governorkeagan@lemdro.id · 1 year ago

      I would love to see your script! I’m in desperate need of a better backup strategy for my video projects.

    • roofuskit@lemmy.world · 1 year ago

      Fireproof safes only protect against temperatures high enough to combust paper. Inside a regular fireproof safe, temperatures will still probably get high enough to destroy a drive.

  • Tekhne@sh.itjust.works · 1 year ago

    I use Backblaze B2 for my backups. Storing about 2TB comes out to about $10/mo, which is on par with Google One pricing. However, I get the benefit of controlling my data, and I use it for much more than just photos (movies/shows, etc.).
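    The poster doesn't name a tool, but a common way to push a folder to B2 is rclone; a minimal sketch (the remote name, bucket, and paths are illustrative placeholders, not this poster's setup):

    ```shell
    # One-time interactive setup: creates a B2 remote using your key ID/app key
    rclone config

    # Sync the library to the bucket (remote, bucket, and path are placeholders)
    rclone sync /mnt/photos/immich b2remote:my-backup-bucket/immich \
      --transfers 8 --fast-list --progress
    ```

    rclone sync makes the destination match the source, so deletions propagate; rclone copy is the safer choice if you want the bucket to only ever gain files.
    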

    If you want a cheaper solution and have somewhere else you can store data off-site (e.g. a family member’s or friend’s house), you can probably use a Raspberry Pi to make a super cheap backup solution.

  • Father_Redbeard@lemmy.ml · 1 year ago

    I have my Immich library backed up to Backblaze B2 via Duplicacy. That job runs nightly. I also have a secondary sync to Nextcloud running on another server. That said, I need another off-prem backup and will likely run a monthly job to my parents’ house, either by manually copying to an external disk and taking it over, or by setting up a Pi or other low-power server and a VPN to do it remotely.

    • palitu@aussie.zone · 1 year ago

      So you just mount the Immich folder in the Duplicacy container? Or run it natively?

        • Father_Redbeard@lemmy.ml · 11 months ago

        Immich and Duplicacy both run on my unraid server. Duplicacy just watches the Immich pics folder and backs that up nightly.

  • Decronym@lemmy.decronym.xyz [bot] · edited · 7 months ago

    Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:

    Fewer Letters  More Letters
    LVM            (Linux) Logical Volume Manager for filesystem mapping
    RAID           Redundant Array of Independent Disks for mass storage
    VPN            Virtual Private Network
    ZFS            Solaris/Linux filesystem focusing on data integrity

    4 acronyms in this thread; the most compressed thread commented on today has 7 acronyms.

    [Thread #401 for this sub, first seen 4th Jan 2024, 18:05] [FAQ] [Full list] [Contact] [Source code]

  • rambos@lemm.ee · 1 year ago

    Backblaze B2 is $6 a month for 1TB, and the first 10GB is free. You pay proportionally (it cost me $2-3 over the last 7-8 months for the 20-150GB that accumulated over time). Keep in mind that you will spend more if you download your backup, but you should use cloud backup as a last resort anyway. I back up to a second local disk and also to B2 daily with Kopia. Fortunately I haven’t needed to restore; I just occasionally download small files from B2 to test the setup.

    It’s not just cheaper, I love it because I don’t have to deal with a Gshit company.
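    A minimal sketch of pointing Kopia at B2 (bucket name, keys, and path are placeholders, not this poster's actual setup):

    ```shell
    # One-time: create a repository in a B2 bucket. Kopia prompts for a
    # repository password, which encrypts the backups client-side.
    kopia repository create b2 --bucket=my-backups \
      --key-id=B2_KEY_ID --key=B2_APP_KEY

    # Daily (e.g. from cron or a systemd timer): snapshot the data directory
    kopia snapshot create /mnt/data
    ```
    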

  • oranki@sopuli.xyz · 1 year ago

    There was a good blog post about the real cost of storage, but I can’t find it now.

    The gist was that to store 1TB of data somewhat reliably, you probably need at least:

    • mirrored main storage 2TB
    • frequent/local backup space, also at least mirrored disks 2TB + more if using a versioned backup system
    • remote / cold storage backup space about the same as the frequent backups

    Which amounts to something like 6TB of disk for 1TB of actual data. In real life you’d probably use some other RAID level, at least for larger amounts, so it’s perhaps not as harsh, and compression can reduce the required backup space too.
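    The multiplier from the list above can be sketched as a quick back-of-the-envelope calculation:

    ```shell
    # Raw disk needed per 1TB of data, using the tiers above:
    # mirrored main storage + mirrored local backup + remote/cold copy
    data_tb=1
    main=$(( data_tb * 2 ))        # mirrored main storage
    local_bak=$(( data_tb * 2 ))   # mirrored local backup space
    remote=$(( data_tb * 2 ))      # remote storage, roughly the same again
    raw_tb=$(( main + local_bak + remote ))
    echo "${raw_tb}TB raw for ${data_tb}TB of data"   # prints: 6TB raw for 1TB of data
    ```
    
    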

    I have around 130G of data in Nextcloud, and the off-site borg repo for it is about 180G. Then there are local backups on a mirrored HDD; counting the ZFS snapshots that are not yet pruned, that’s maybe 200G of raw disk space. So 130G becomes 510G in my setup.

  • machinin@lemmy.world · 1 year ago

    I bought a Synology that I keep at my in-laws, then use Syncthing to keep my pictures backed up. I just started, so I don’t know how it will go long term.

    If anyone else has a better option than Syncthing for Linux to Synology, I would love to hear it.

  • supes@lemmy.csupes.page · edited · 1 year ago

    I use Backblaze on my Synology. I back up photos to it automatically with their built-in app on my phone, then every night I run encrypted backups. I could also set up an encrypted backup to my parents’ Synology.

    My backup is about 900GB and costs <$5/mo. That’s my music, pictures, movies, and TV shows. Obviously that will increase, but it’s well worth the nominal cost to have that much backup, encrypted and in the cloud.

      • supes@lemmy.csupes.page · 1 year ago

        I do. And since I’ve been slowly taking back control of all my online stuff as much as I can, I’m very happy with it. It gives me peace of mind that it’s secure and that I’m super unlikely to just lose it.

  • ninjan@lemmy.mildgrim.com · 1 year ago

    For me, the answer is that I need off-site backup anyway for things like important digital documents, passwords, and more. I trust a dedicated storage provider far more than Google/Apple/Microsoft, which all have a financial interest in understanding me and my patterns to better sell additional services to me. So I use Dropbox, but if you’re more technically inclined and have a lot of data, then something like Wasabi could make financial sense.

  • Telodzrum@lemmy.world · 1 year ago

    pCloud sells itself as a privacy-focused alternative to Dropbox, Google, iCloud, etc. They’re running a deal right now on lifetime accounts, too.

    • roofuskit@lemmy.world · 1 year ago

      Never trust a company selling lifetime accounts. It’s entirely unsustainable and eventually the other shoe always drops.

        • roofuskit@lemmy.world · edited · 1 year ago

          We’re talking about data storage, not software. There are real everyday costs (maintenance, replacement, power, etc.) involved in reliably storing data.

          I share the sentiment that you should be able to buy software.

          Paying for data storage with a single lifetime payment is like buying one square foot of storage space in someone’s apartment for a flat fee and expecting it to actually be there forever.

  • DeltaTangoLima@reddrefuge.com · edited · 1 year ago

    I have a 2N+C backup strategy. I have two NASes, and I use rclone to backup my data from one NAS to the other, and then backup (with encryption) my data to Amazon S3. I have a policy on that bucket in S3 that shoves all files into Glacier Deep Archive at day 0, so I pay the cheapest rate possible.
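    A day-0 Deep Archive transition like that can be sketched as a lifecycle rule (the bucket name and rule ID are placeholders, not this poster's configuration):

    ```shell
    # Sketch: transition all objects in the bucket to Glacier Deep Archive
    # immediately after upload, so storage is billed at the cheapest tier
    aws s3api put-bucket-lifecycle-configuration --bucket my-photo-backups \
      --lifecycle-configuration '{
        "Rules": [{
          "ID": "to-deep-archive",
          "Status": "Enabled",
          "Filter": {"Prefix": ""},
          "Transitions": [{"Days": 0, "StorageClass": "DEEP_ARCHIVE"}]
        }]
      }'
    ```
    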

    For example, I’m storing just shy of 400GB of personal photos and videos in one particular bucket, and that’s costing me about $0.77USD per month. Pennies.

    Yes, it’ll cost me a lot more to pull it out, and yes, it’ll take a day or two to get it back. But it’s an insurance policy I can rely on, and a (future) price I’m willing to pay should the dire day (losing both NASes, or worse) ever arrive.

    Why Amazon S3? I’m in Australia, and that means local access is important to me. We’re pretty far from most other places around the world. It means I can target my nearest AWS region with my rclone jobs and there’s less latency. Backblaze is a great alternative, but I’m not in the US or Europe. Admittedly, I haven’t tested this theory, but I’m willing to bet that in-country speeds are still a lot quicker than any CDN that might help get me into B2.

    Also, something others haven’t yet mentioned: per Immich’s guidance on their repo (disclaimer right at the top), do NOT rely on Immich as your sole backup. Immich is under very active development, and breaking changes are a real possibility right now.

    So, I use Syncthing to also back up all my photos and videos to my NAS, and that’s also backed up to the other NAS and S3. That’s why I have nearly 400GB of photos and videos - it’s effectively double my actual library size. But, again, at less than a buck a month to store all that, I don’t really mind double-handling the data, for the peace of mind I get.