And you can even export it there.
Did you upgrade to the latest bios version?
Factory reset the bios?
Check for option pins on the motherboard?
How did Bazzite install with secure boot turned on?
The problem is that I want failover to work if a site goes offline; that happens quite a bit with the private ISPs where I live, and instead of waiting for the connection to be restored my idea was that Kubernetes would see the failed node and replace it.
Most data will be transferred locally (with node affinity) and only on failure would the pods spread out. The problem that remained with this was storage, which is why I’m here looking for options.
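To make the node affinity part concrete, this is roughly what I have in mind: a preferred (not required) rule so pods stay on their home site but can still be rescheduled elsewhere when that site drops off. The zone label value `site-a` is just a placeholder for however you label nodes per location.

```yaml
# Sketch: prefer the local site, but allow rescheduling anywhere
# if that site's node goes offline. The zone label value is an assumption.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      affinity:
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              preference:
                matchExpressions:
                  - key: topology.kubernetes.io/zone
                    operator: In
                    values:
                      - site-a
      containers:
        - name: app
          image: nginx:stable
```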
Thanks for the info!
I’ll try Rook-Ceph; Ceph has been recommended quite a lot now, but my NVMe drives sadly don’t have PLP. Afaict that should still work because not all nodes will face power loss at the same time.
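For anyone finding this later, the minimal Rook CephCluster I plan to start from looks roughly like this; the namespace, Ceph image tag and device filter are just placeholders for my setup, not a recommendation.

```yaml
# Rough sketch of a minimal Rook CephCluster, not a tuned config.
# The deviceFilter is an assumption; adjust it to match your drives.
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    image: quay.io/ceph/ceph:v18
  dataDirHostPath: /var/lib/rook
  mon:
    count: 3
    allowMultiplePerNode: false
  storage:
    useAllNodes: true
    useAllDevices: false
    deviceFilter: "^nvme"
```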
I’d rather start with the hardware I have and upgrade as necessary, backups are always running for emergency cases and I can’t afford to replace all hard drives.
I’ll join Home Operations and see what info I can find
It’s fine if the bottleneck is upload/download speed, there’s no easy way around that.
The other problems like high latency or using more bandwidth than is required are more my fear. Maybe a local read cache or something like that could be a solution too, but that’s why I’m asking what is in use and what works vs what is better reserved for dedicated networks.
Ceph (and Longhorn) want “10 Gbps network bandwidth between nodes” while I’ll have around 1 Gbit/s between nodes, or even lower.
What’s your experience with Garage?
I heard that Ceph lives and dies by the network hardware. Is a slow internet connection even usable when the docs want 10 Gbit/s networking between nodes?
They both support k8s: JuiceFS with either just a hostPath (not what I’d use) or the JuiceFS CSI Driver, and Linstor has an operator which uses DRBD and provides it too.
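As a rough sketch of the CSI route, a JuiceFS StorageClass would look something like this; the secret name and namespace are placeholders, and the referenced Secret holds the filesystem name, metadata engine URL and object storage credentials.

```yaml
# Sketch of a StorageClass for the JuiceFS CSI Driver.
# juicefs-secret / kube-system are placeholder names for the Secret
# that contains the metaurl and object storage credentials.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: juicefs-sc
provisioner: csi.juicefs.com
reclaimPolicy: Retain
parameters:
  csi.storage.k8s.io/provisioner-secret-name: juicefs-secret
  csi.storage.k8s.io/provisioner-secret-namespace: kube-system
  csi.storage.k8s.io/node-publish-secret-name: juicefs-secret
  csi.storage.k8s.io/node-publish-secret-namespace: kube-system
```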
If you know of storage classes which are useful for this deployment (or just ones you want to talk about in general) then go on. From what I’m seeing in this thread I’ll probably have to deploy all options that seem reasonable and test them myself anyways.
Well, if that is the case then I will have to try them all, but I’m hoping the general behaviour is at least similar to what others see so that I can start with a good option.
I want the failover to work in case of internet or power outage, not local cluster node failure. Multiple clusters would make configuration and failover across locations difficult, or am I wrong?
I mean storage backends as in the provisioner; I will use local storage on the nodes with either LVM or just storage on a filesystem.
I already set up a cluster and tried Linstor; I’m searching for experiences with the options because I don’t want to test them all.
I currently manage all the servers with a NixOS repository but am looking for better failover.
Burns just a little bit
I’m talking about software RAID, for example btrfs.
Most RAID levels are for redundancy, not speed, and software RAID doesn’t need drives of the same size.
If one drive dies, it’s got a copy on another drive.
Do you want JBOD or a RAID? Cause dealing with disk failure is a feature of RAID
I hate arch users btw
MetalLB sounds like what you need. Basically, you give it a range in your subnet (excluded from DHCP/your router!) and it assigns those IPs to your LoadBalancer services; it advertises that IP over ARP or BGP, which makes automatic failover work.
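A minimal sketch of the layer 2 setup; the address range here is just an example, pick one your DHCP server/router will never hand out.

```yaml
# Minimal MetalLB layer 2 sketch; the address range is an example.
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: homelab-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.1.240-192.168.1.250
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: homelab-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - homelab-pool
```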
You do back up important data, right?
Backup important data