I mean, yes, I could. But I’m committed to the #selfhosted life where I spend hours building unnecessarily complicated systems to make my life easier in small ways.
I’m starting to think my commitment to the Apple ecosystem and my desire for self-hosting are at odds.
The process for this is to get an ESP32 with Bluetooth and Wi-Fi, pair it to the scale over Bluetooth, keep it powered on within range of the scale, and then the data flows into HA?
I have the opposite experience of this. All of my local services run as Docker containers inside a single LXC. I don’t like that it’s conceptually messy, but in practice it’s easy to manage. What I love about it is the simplicity of backing up or moving the entire LXC between servers.
I’ve not had any drama with things breaking across Proxmox updates. The only non-GUI thing I need to do during the process is adding two lines to the LXC conf to get Tailscale working correctly.
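For anyone curious, the two lines in question are the usual TUN passthrough ones from Tailscale’s Proxmox guidance. A minimal sketch, run on the Proxmox host with an illustrative container ID of 101:

```sh
# On the Proxmox host (container ID 101 is illustrative).
# These are the standard /dev/net/tun passthrough lines Tailscale needs inside an LXC.
cat >> /etc/pve/lxc/101.conf <<'EOF'
lxc.cgroup2.devices.allow: c 10:200 rwm
lxc.mount.entry: /dev/net/tun dev/net/tun none bind,create=file
EOF
# Stop and start the container so the new config is applied:
pct stop 101 && pct start 101
```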
It’s mind-bogglingly convenient, especially compared to the before times. Consider donating to them if you can.
No one’s mentioned Forgejo yet? Solid git and artifact repository.
+1 for the Seiko 5s. Love me a SNZG07J1
There’s lots of ways to skin this particular cat. My current approach is a low-powered Synology (J series?) for mass storage, then 1-litre PCs running Proxmox for my compute, using their NVMe for storage, all backed up to the Synology.
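A rough sketch of how the backup leg can be wired up, assuming an NFS share on the Synology and illustrative names and addresses (not my exact setup):

```sh
# On the Proxmox host: register the Synology NFS export as backup storage,
# then back a container up to it. Names, IDs and the IP are illustrative.
pvesm add nfs synology-backup --server 192.168.1.20 --export /volume1/proxmox --content backup
vzdump 101 --storage synology-backup --mode snapshot --compress zstd
```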
Two good points here, OP. Type `docker image ls` to see all the images you currently have locally; you’ll possibly be surprised how many. All the ones tagged `<none>` are old versions.
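If you want to see (and clean up) just those dangling ones, something like this works:

```sh
docker image ls                          # everything cached locally
docker image ls --filter dangling=true   # only the untagged <none> leftovers
docker image prune                       # remove the dangling ones (prompts first)
```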
If you’re already using GitHub, it includes a package registry you could push retagged images to; or, for something more self-hosty, a local instance of Forgejo would be a good option.
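For example, retagging and pushing to GitHub’s registry looks roughly like this (names are placeholders; a Forgejo instance’s built-in container registry works the same way with its own hostname):

```sh
docker tag myapp:latest ghcr.io/<your-user>/myapp:latest
docker login ghcr.io     # token needs the write:packages scope
docker push ghcr.io/<your-user>/myapp:latest
```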
Build anything small into a container on your laptop, push it to Docker Hub or the GitHub package registry, then host it on fly.io for free.
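A minimal sketch of that flow, assuming flyctl is installed and with placeholder image and app names:

```sh
docker build -t ghcr.io/<your-user>/tinyapp:latest .
docker push ghcr.io/<your-user>/tinyapp:latest
fly launch --image ghcr.io/<your-user>/tinyapp:latest   # first run: creates the app and fly.toml
fly deploy --image ghcr.io/<your-user>/tinyapp:latest   # subsequent deploys
```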
Great write-up, thanks. For video learners, Wolfgang does a good step-by-step on YouTube.
I’d love you to check back later with your conclusions.
Guide to Self Hosting LLMs with Ollama.
ollama run llama3.2
If it’s an M1, you def can and it will work great. With Ollama.
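Roughly what getting started looks like (the model name is just an example):

```sh
brew install ollama            # or grab the installer from ollama.com
ollama pull llama3.2           # download the model
ollama run llama3.2            # chat in the terminal
# Ollama also serves a local HTTP API on port 11434:
curl http://localhost:11434/api/generate -d '{"model": "llama3.2", "prompt": "Hello"}'
```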
Thanks, I ended up going with Garage, but it has the same issue. I assumed I could just specify some buckets with their keys in the docker-compose or garage.toml, but no: they have to be created through the API or command line.
This is correct. I’d already installed the minio CLI, but when I came back and read this I tried it out, and yes: once Garage is running in the container, you can `alias garage="docker exec -ti <container name> /garage"` so you can do the CLI things like `garage bucket info test-bucket` or whatever. The `--help` for the `garage` command is pretty great, which is good since the docs don’t write it up much.
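For anyone landing here later, the bucket and key setup goes roughly like this through that alias (names are illustrative, and the exact subcommands vary a bit between Garage versions, so check `garage --help`):

```sh
garage bucket create test-bucket
garage key create app-key                                   # prints the key ID and secret
garage bucket allow --read --write test-bucket --key app-key
garage bucket info test-bucket
```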
Thanks. I ended up going with Garage (in Docker) and installed the minio client CLI for these tasks.
One I’m writing. I use the host file system (as I have a strong preference for simple) for its storage, but I’m interested in adding Litestream to replicate the database to AWS.
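Litestream’s CLI makes that pretty painless; a minimal sketch with an illustrative database path and bucket name:

```sh
# Continuously replicate the SQLite file to S3 (path and bucket are illustrative;
# AWS credentials come from the usual environment variables).
litestream replicate /data/app.db s3://my-backup-bucket/app.db
# And to restore onto a fresh host:
litestream restore -o /data/app.db s3://my-backup-bucket/app.db
```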
Love the effort you’ve put into this question. You’ve clearly done some quality research and thinking.
When I asked myself this same question a couple of years ago, I ended up just buying a second-hand Synology NAS to use alongside my mini-PC. That would meet your criteria, and it avoids the reliability risk (I’m not sure of what magnitude) of using disks connected over USB. It’s more proprietary than I’d like, but it’s battle-tested and has been reliable for me.
I like data, I like tech, I like investing large amounts of time and energy to self-host things that muggles would not bother with.