You cannot install addons from the UI, but you can manually install them. Addons are just Docker containers that get configured automatically.
FAT32 doesn't support Unix file permissions, so when you mount the disk Linux has to assign a default ownership, which is usually root. And this is the issue you are facing.
You are confusing the disk permissions with the filesystem permissions. The udev rule you wrote gives you permission to write to the disk (in other words, you can format it or rewrite the whole content), but it doesn't give you permission on the files stored inside, because those live at a higher abstraction level.
If you use this computer in interactive mode (in other words, if you usually sit in front of it and plug the disk in on demand), my suggestion is to remove that line from /etc/fstab and let the Ubuntu desktop environment mount the external hard drive for the currently logged-in user.
If you use this computer as a server with the USB disk always connected (likely, since you mention Jellyfin), you need to modify the fstab line to specify which user should get permission on the files written to the disk.
You can see the full list of options at https://www.kernel.org/doc/Documentation/filesystems/vfat.txt
You either want uid=Mongostein (assuming that's your username on your computer too) to assign ownership of all the files to yourself, or umask=000 to give everyone full permissions on the files and directories while ownership remains with root. You should prefer the second option if Jellyfin runs as a different user, while the first one is better if there are other users on your computer who shouldn't access your external disk.
To summarize, the line in /etc/fstab should be one of these two:
LABEL=drivename /mnt/drivename/ auto rw,user,exec,nofail,x-gvfs-show,dev,auto,umask=000 0 0
LABEL=drivename /mnt/drivename/ auto rw,user,exec,nofail,x-gvfs-show,dev,auto,uid=Mongostein 0 0
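After editing /etc/fstab you can test it without rebooting; roughly something like this should work (using the same mount point as in the examples above):
sudo umount /mnt/drivename
sudo mount -a
ls -l /mnt/drivename
If the ownership or permissions on the files are still wrong, adjust the options and run mount -a again.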
There is no need to add a udev rule to make the device writable by your user. If you have a full Ubuntu setup the external drive should appear in Nautilus as soon as you attach it, and it can be mounted and unmounted from the UI.
If it doesn't work you can add a line to /etc/fstab like
/dev/sdb1 /mnt/mydisk auto noauto,user,uid=yourname 0 0
Double check the man page for the right syntax (I'm going by memory), but what you are saying here is that any user can mount this device, it shouldn't be mounted automatically on boot, and the files there are owned by the user "yourname". The issue with this approach is that the device name changes depending on what you have connected; udev should also add some symlinks containing the device ID, which are more stable.
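If you want something more stable than /dev/sdb1, you can look at the symlinks udev creates and reference the disk by ID or label instead (the label and paths below are just placeholders, use whatever shows up on your system):
ls -l /dev/disk/by-id/ /dev/disk/by-label/
LABEL=mydisk /mnt/mydisk auto noauto,user,uid=yourname 0 0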
I got a TerraMaster NAS and I'm super happy with it: https://www.terra-master.com/global/f4-5067.html
The main reason to choose it is that it is just a PC in the form factor of a NAS. You can simply boot it from a pendrive and install your favourite operating system. I had a QNAP before, and while it was great to start with, self-hosting wasn't the best experience on their OS.
It is a small form factor, it should have low power consumption (I've never measured it to confirm) and it supports both NVMe and SATA drives. Currently I have an NVMe drive for the OS and two SATA drives for storage. The CPU is powerful enough to run Home Assistant, a VPN, Pi-hole, CommaFeed, and a bunch of other Docker images. I just plan to increase the RAM soonish because the stock amount feels a little constrained.
I did some experiments in the past. The nicest option I could find was enabling the WebDAV API on the hosting side (it was an option in cPanel if I recall correctly, but there are likely other ways to do it). This allows using the webserver as a remote read/write filesystem. Then you can use rclone to transfer files; the nice part is that rclone supports client-side encryption, so you don't have to worry too much about other people accessing your files.
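As a rough sketch of how that looks in practice (the remote name and paths are made up; the WebDAV remote, and the crypt remote on top of it, are created interactively with rclone config):
rclone config
rclone sync ~/backup webhost-crypt:backup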
Could it be that the domain name has both IPv4 and IPv6 addresses and, depending on the network, you reach one or the other? WireGuard can work on both protocols, but in my experience it doesn't try both to see which one works (like browsers do). So if on the first try the DNS resolves to the "wrong" IP version, WireGuard cannot connect and doesn't fall back to the alternative.
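You can quickly check whether both record types exist for the name with something like this (vpn.example.com is just a placeholder for your domain):
dig +short vpn.example.com A
dig +short vpn.example.com AAAA
If both return an address, try hardcoding the IPv4 (or IPv6) address in the Endpoint line of the WireGuard config and see if the problem goes away.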
QNAP sells extension units: https://www.qnap.com/en/product/tr-004
They usually connect over USB (at least the home-grade devices), but my understanding is that they are not seen as a single block device, so the NAS has access to the individual drives as if they were internal.
Back in the days when I was fixing a lot of computers for friends and relatives, my Linux Swiss army knife was https://www.system-rescue.org/
Very lightweight, but with a full set of recovery tools. I've tried it recently and it still lives up to expectations.
I've also used a fair amount of https://clonezilla.org/ to (re)store images of freshly installed OSes (mostly Windows XP and 7, to give you an idea of the timeframe) for people who I knew would mess them up sooner rather than later.
A lot of technical aspects here, but IMHO the biggest drawback is liability. You'd be offering free storage connected to the internet to a group of "random tech nerds". Do you trust all of them to use it properly? Are you really sure that none of them will store and distribute illegal stuff with it? Do you know them in person, so you can point the police to them in case they come knocking at your door?
Yes, you can do it on your server with a simple iptables rule.
I'm a little rusty, but something like this should work:
iptables -t nat -A PREROUTING -d [your IP] -p tcp --dport 11500 -j DNAT --to-destination [your IP]:443
You can find more information by searching for "iptables dnat". What you are saying here is: in the PREROUTING chain of the nat table (i.e. before we decide what to do with the packet), TCP connections to my IP on port 11500 must be forwarded to my IP on port 443.
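You can verify the rule is in place with:
iptables -t nat -L PREROUTING -n -v
Keep in mind that rules added this way are not persistent across reboots; depending on your distribution you need something like the iptables-persistent package (or whatever firewall tool you already use) to save them.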
To automatically unlock encrypted drives I followed the approach described in https://michael.stapelberg.ch/posts/2023-10-25-my-all-flash-zfs-network-storage-build/#auto-crypto-unlock
The password is split: half is stored on the server itself and half in a file on the web. During boot the server retrieves the second half via HTTP, concatenates the two halves and uses the result to unlock the drive. This way I can always remove the online half and block the automatic decryption.
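A minimal sketch of what that unlock step can look like, assuming a LUKS volume and made-up names for the device, the mapper name and the URL:
# half of the passphrase lives on the server, the other half is fetched over the network
LOCAL_HALF=$(cat /etc/keys/half1)
REMOTE_HALF=$(curl -fsS https://example.com/half2)
printf '%s%s' "$LOCAL_HALF" "$REMOTE_HALF" | cryptsetup open /dev/sdb1 storage
# if the online half has been removed, curl fails and the drive stays locked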
Another approach that I've considered was to store the decryption keys on a USB drive connected with a long extension cable. The idea is that someone who steals your server likely won't bother grabbing the cables too.
TPM is a different beast I haven't studied yet, but my understanding is that it protects you in case someone steals your drives or tries to read them from another computer. But as long as they are in your server it will always decrypt them automatically. Therefore you delegate the safety of your data to all the software that starts at boot: your photos may still be fully encrypted at rest, so a thief cannot get them off the disk directly, but if you have an open SMB share they can just boot your stolen server and copy them out from there.
Not anymore, it supports TXT records now.
You can use the flag
--add-host myname=host-gateway
and in your container "myname" will resolve to the IP of your host.
Documentation at: https://docs.docker.com/reference/cli/docker/container/run/#add-host
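A quick way to see it in action (the image and command are just an example):
docker run --rm --add-host myname=host-gateway alpine ping -c 1 myname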
I tried a few and eventually settled on commafeed. It has categories, can be executed from a single docker image (in other words, can run without the hassle of an external database), and the responsive UI works well both on pc and phone.
I remember a blog post (I cannot find it right now) where the person split the decryption password in two: half stored on the server itself and half on a different HTTP server, and an init script downloaded the second half to decrypt the drive. There is a small window of time, between when you realize the server has been stolen and when you take the other half of the password offline, during which an attacker could decrypt your data. But if you want to protect against random thieves this should be safe enough, as long as the two servers are in different locations and not likely to be stolen together.
TPM addresses a slightly different threat model: if you dispose of the HD, or if someone takes it out of your computer, it is fully encrypted and safe. But if someone steals your whole server, it can boot and decrypt the drive, so you have to trust that you have good passwords and protection for each service you run. Depending on what you want to protect against, this is either a great solution or suboptimal.
I use Backblaze and rclone to encrypt and sync. It was the cheapest and most flexible solution when I checked a few years ago and I haven't found any reason to change so far.
I use rclone, which is essentially rsync for cloud services. It supports encryption out of the box.
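A minimal example, assuming you have already created a regular remote for your provider and a crypt remote called "secret" on top of it with rclone config:
rclone sync /home/me/photos secret:photos
rclone ls secret:photos
Files are encrypted client side before the upload, and names are decrypted transparently when you go through the crypt remote.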
I use https://mycorrhiza.wiki/ . It is not very fancy, but it is a single executable file and stores pages in a git repository, so no database is needed and exporting is as simple as reading some files.
A NAS is essentially a small computer made for connecting a lot of storage, with a fancy OS that can be configured from a browser.
So the real question between a NAS and a custom build is how much time you want to spend being a sysadmin. A NAS mostly works out of the box: you can configure it to auto-update and to notify you only when something important happens, while with a custom build everything is completely on your own. Are you already familiar with some Linux distribution? How much do you want to learn?
Once you answer the previous question, the next is about power. To store files on the network you don't need a big CPU; on the contrary, you may want something small that doesn't cost too much in electricity. But you mentioned you want to stream video. If you need transcoding (for example because you have a Chromecast that only accepts video in a specific format) you need something more powerful. If you stream only to computers there is no need for transcoding, because they can digest any format, so anything will work.
After this you need to decide how much space you need, and what type. NVMe is faster, but spinning HDs were still more reliable (and cheaper per TB) last time I checked. Also, do you want some kind of RAID? RAID1 is the bare minimum to protect you from a disk failure, but you need twice as many disks to store the same amount of data. RAID5 is more efficient, but you need at least 3 disks. That said, remember that RAID is not backup. You still need a backup for important stuff.
My honest suggestion is to start experimenting with your Raspberry Pi and see what you need. It will likely already cover most of your needs: just attach an external HD and configure Samba shares. I don't do any automated backups, but I know that Syncthing and Syncthing-Fork are very widely used tools. On Linux you can very easily use rsync in a crontab.
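For example, a nightly crontab entry could look something like this (the paths are just placeholders):
0 3 * * * rsync -a --delete /home/me/ /mnt/usbdisk/backup/home/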
If you want an operating system that offers an out-of-the-box experience more similar to a commercial NAS, you can check FreeNAS. I personally started with a QNAP and was happy for years, but after starting to self-host some stuff I wanted more flexibility, so I switched to a TerraMaster where I installed plain Debian. I'm happy with it, but it definitely requires more knowledge and patience to configure and administer.