
  • 8 Posts
  • 570 Comments
Joined 2 years ago
Cake day: July 1st, 2023



  • Trying to set that up to try out, but I can’t get it to see/use my config.yaml.

    /srv/filebrowser-new/data/config.yaml

    volumes:
      - /srv/filebrowser-new/data:/config
    environment:
      - FILEBROWSER_CONFIG="/config/config.yaml"

    It says ‘/config/config.yaml’ doesn’t exist and won’t start. Same thing if I mount the config file directly instead of just its folder.

    If I remove the env var, it changes to “could not open config file ‘config.yaml’, using default settings” and at least starts. From there I can ‘ls -l’ through docker exec and see that my config is mounted exactly where it’s supposed to be (‘/config/config.yaml’) and has 777 perms, but Filebrowser insists it doesn’t exist…

    My config is just the example for now.

    I don’t understand what I could possibly be doing wrong.

    /edit: three hours of messing around and I figured it out:

    • FILEBROWSER_CONFIG="/config/config.yaml"

    Must not have quotation marks. Removed them and now it’s working.
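
    For anyone hitting the same wall, here’s a minimal compose fragment with the fix applied. The paths are the ones from above; the service and image names are assumptions, so adjust to your setup:

```yaml
services:
  filebrowser:
    image: filebrowser/filebrowser   # assumed image name
    volumes:
      - /srv/filebrowser-new/data:/config
    environment:
      # No quotation marks around the value: compose passes list-style env
      # vars literally, so the quotes become part of the path and the
      # file lookup fails.
      - FILEBROWSER_CONFIG=/config/config.yaml
```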


  • Decided to do some more reading on this topic. TIL:

    TCP, the more common protocol, requires at least one side to have a port forwarded through their NAT to the client, so the other side can make a connection to that open port.

    uTP, on the other hand, can ‘holepunch’ by sending a packet to a known IP, which opens a port through the sending client’s NAT specifically for that IP. That port can then be used to send and receive by either side until it closes due to inactivity.

    So, torrent clients can use uTP holepunching to open a port without requiring manual forwarding, then advertise that open port to public trackers. Client A will try to connect to an IP+port it got from the tracker and get ignored (because the recipient’s NAT isn’t expecting data from that IP and drops the packets). Then, when client B decides to connect to client A, A’s port will already be open and accepting data from B’s IP, thus establishing a connection.

    This is slower than a direct connection because both clients need to be made aware of each other and attempt to connect at reasonably similar times. It also requires public trackers with peer exchange (PEX) enabled, and the torrents cannot be flagged as private.
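
    The sequence above can be sketched with plain UDP sockets. Note this is a localhost simulation, so the NAT behaviour only happens in the comments — on a real network each peer would sit behind its own NAT and the first packet really would be dropped:

```python
import socket

# Two "peers" on localhost. In reality each would be behind its own NAT.
a = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
b = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
a.bind(("127.0.0.1", 0))
b.bind(("127.0.0.1", 0))
a_addr, b_addr = a.getsockname(), b.getsockname()

# Step 1: A sends a packet toward B. Behind real NATs, B's NAT would drop
# this (unexpected source), but it opens a mapping in A's NAT for B's
# address — the "holepunch".
a.sendto(b"punch", b_addr)

# Step 2: B sends to A. A's NAT now has a mapping for B's address, so the
# packet gets through and two-way traffic can flow.
b.sendto(b"hello from B", a_addr)
a.settimeout(2)
data, addr = a.recvfrom(1024)
print(data)
a.close()
b.close()
```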



  • FolderSync selectively syncs files/folders from my phone back to my server via SSH. Some folders are on a schedule, some monitor for changes and sync immediately; most are one-way, some are two-way (files added on the server sync back to the phone, as well as the phone uploading data to the server). There’s even one that automatically drops files into paperless-ngx’s consume folder for automatic document importing.

    From there, BorgBackup makes a daily backup of the data, keeping historical backups for years with absolutely incredible efficiency. I currently have 21 backups of ~550 GB each; Borg stores all of this in 447 GB of total disk space.
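
    A quick back-of-the-envelope on those numbers shows why Borg’s deduplication is so striking:

```python
# Back-of-the-envelope on the figures above: 21 backups of ~550 GB each,
# stored in 447 GB total.
backups = 21
per_backup_gb = 550
stored_gb = 447

logical_gb = backups * per_backup_gb   # total logical data across backups
ratio = logical_gb / stored_gb         # effective space reduction
print(f"{logical_gb} GB logical -> {stored_gb} GB on disk ({ratio:.1f}x)")
```

    In other words: roughly 11.5 TB of logical backup history squeezed into less than the size of a single backup.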



  • Once the flapper lifts, it won’t close again until the tank empties completely. If the toilet clogs and you flush too many times instead of breaking out the plunger right away, sometimes the water can’t overflow out of the bowl fast enough to let the tank drain fully, so it just endlessly flows. It doesn’t happen with all toilets, but it’s still good to know for when your toilet full of turds just won’t stop dumping water on the floor.


  • The circumstances that led you to any particular decision are predetermined at the time you’re making it, simply because those circumstances have already happened prior to the decision at hand; but that doesn’t mean you don’t have the free will to make that decision in the moment.

    To extend on that a little: if you could make the same person face the same decision multiple times under identical circumstances, I don’t believe you’d get identical results every time. It might not be an even distribution between the possible choices, but it wouldn’t be a consistent answer either. The human element introduces too much chaos for that kind of uniformity.



  • Without authentication, it’s possible to randomly generate UUIDs and use them to retrieve media from a Jellyfin server. That’s about the only actually concerning issue on that list, and it’s incredibly minor IMO.

    With authentication, users (i.e., the people you have trusted to access your server) can potentially attack each other by changing each other’s settings and viewing each other’s watch history/favorites/etc.

    That’s it. These issues aren’t even worth talking about for 99.9% of Jellyfin users.

    Should they be fixed? Sure, eventually. But these issues aren’t cause to yell about how insecure Jellyfin is in every single conversation, or to try to scare everyone off of hosting it publicly. Stop spreading FUD.



  • Yeah; Emby was originally called MediaBrowser and was a free, open-source project. MediaBrowser’s developers decided to move to a closed-source, paid model to establish some more consistent income and support their dedicated developers. Thus Emby was born.

    Some users were really unhappy with this decision and forked MediaBrowser’s last free release to create Jellyfin. Its development has been quite a bit slower, but it’s made some significant strides in recent years, and it’s a more and more attractive option.

    One of my biggest reasons for sticking with Emby (besides already having a lifetime Premiere license) is the dedicated clients available on more platforms. The Xbox One is my primary streaming device besides Android: Emby has a dedicated Xbox client you can install that takes full advantage of the hardware (more content direct-plays, HEVC video for example), whereas with Jellyfin you’ve got to use the web browser, which is cumbersome and forces the server to transcode media a lot more.


  • In the case of Plex, it’s not 100% self-hosted. There’s a dependence on Plex’s public infrastructure for user management/authentication. They also help bypass NAT by proxying connections through their servers, so you don’t have to set up port forwarding and can even easily escape double-NAT situations.

    I can understand paying for that convenience, but the cost keeps rising while previously free features continue to get locked behind paywalls.

    Tbh, requiring users to authenticate with plex.tv was enough for me to look elsewhere. The biggest reason for me to self-host is to remove dependency on public services.






  • Most of my web services are behind my VPN, but there are a couple I expose publicly for friends/family to use: things like Emby, Ombi, and some generic file sharing with File Browser.

    One of these has a long custom path set up in nginx which, instead of proxying to the named service, asks for HTTP basic auth credentials. Use the correct host+path, then provide the correct user+pass, and you’ll be served an OpenVPN configuration file that includes an encrypted private key. Decrypt that and you’ve got backdoor VPN access.
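
    A sketch of that kind of nginx setup. The path, credentials file, and directory here are all hypothetical placeholders, not the real values:

```nginx
# Hypothetical long, unguessable path; nothing links to it and it is not
# proxied to any backend service.
location /replace-with-a-long-random-path/ {
    auth_basic           "restricted";
    auth_basic_user_file /etc/nginx/.htpasswd;  # htpasswd-format credentials

    # Serve a static directory containing the OpenVPN profile; the private
    # key inside the profile is itself encrypted with a separate passphrase.
    alias /srv/private/vpn-drop/;
}
```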


  • I keep Vaultwarden behind a VPN so it’s not exposed directly to the net. You don’t need a constant connection to the server; that’s only needed to add or change vault items.

    This does require some planning, though; it’s easy to lock yourself out of your accounts while you’re away if you don’t incorporate a backdoor of some kind to let yourself in in an emergency (losing your device while away from home, for example).

    My normal VPN connection requires a private key and a password that’s stored in my vault to decrypt it. I’ve set up a method for retrieving a backup set of keys using a series of usernames, emails, passwords, and undocumented paths (these are the only passwords I actually memorize), allowing me to reach Vaultwarden, where I can retrieve my vault with the data needed to log in to everything else properly.