Y’all, this is gonna be super broad, and I apologize for that, but I’m pretty new to all this and am looking for advice and guidance because I’m pretty overwhelmed at the moment. Any help is very, very appreciated.

For the last ~3 years, I’ve been running a basic home server on an old computer. Right now, it is hosting HomeAssistant, Frigate NVR, their various dependencies, and other things I use (such as zigbee2mqtt, zwave-js-ui, node-red, mosquitto, vscode, etc).

This old server has been my “learning playground” for the last few years, as it was my very first home server and my first foray into Linux. That said, it’s obviously got some shortcomings in terms of basic setup (it’s probably not secure, it’s definitely messy, some things don’t work as I’d like, etc). It’s currently on its way out (the motherboard is slowly kicking the bucket on me), so it’s time to replace it, and I kind of want to start over and do it “right” this time. Not completely from scratch - I’ve got hundreds of automations in Home Assistant and Node-RED, for instance, that I don’t want to completely re-write, so I intend to export/import those as needed. At this point, I think this is where I’m hung up: paralyzed by a fear of doing it “wrong” and winding up with an inefficient, insecure mess.

I want the new server to be much more robust in terms of capability, and I have a handful of things I’d really love to do: Pi-hole (though I need to buy a new router for this, so that has to come later on, unless it’d save a bunch of headache doing it from the get-go), NAS, media server (Plex/Jellyfin), *arr stuff, as well as plenty of new things I’d love to self-host, like Trilium Notes, Tandoor or Mealie, Grocy, and backups of local PCs/phones/etc (Nextcloud?)… obviously this part is impossible to completely cover, but I suspect the hardware (list below) should be capable?

I would love to put all my security cameras on their own subnet or vlan or something to keep them more secure.

I need everything to be fully but securely accessible from outside the network. I’ve recently set up nginx for this on my current server and it works well, though I probably didn’t do it 100% “right.” Is something like Tailscale something I should look to use in conjunction with that? In place of? Not at all?

I’ve also looked at something like Authelia for SSO, which would probably be convenient but also probably isn’t entirely necessary.

Currently considering Proxmox, but then again, TrueNAS would be helpful for the storage aspect of all this. Can/should you run TrueNAS inside Proxmox? Should I be looking elsewhere entirely?

Here’s the hardware for the recently-retired gaming PC I’ll be using:
https://pcpartpicker.com/list/chV3jH
Also various SSDs and HDDs.

I’m in this weird place where I don’t have too much room to play around because I want to get all my home automation and security stuff back up as quickly as possible, but I don’t want to screw this all up.

Again, any help/advice/input at all is super, super appreciated.

  • LufyCZ@lemmy.world · 9 points · 11 months ago

    Just fyi - running TrueNAS with ZFS as a VM under Proxmox is a recipe for disaster, ask me how I know.

    ZFS needs direct drive access. With VMs, the hypervisor virtualizes the storage adapter before it gets passed through, which can mess things up.

    What you’d need to do is buy a SATA/SAS HBA card and pass the whole card through to the VM; then you can safely use a VM.
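    For reference, passing a whole controller through on Proxmox is only a couple of commands (the VM ID and PCI address below are made-up examples - check lspci for your own - and IOMMU/VT-d has to be enabled in BIOS and on the kernel command line first):

    ```shell
    # Find the PCI address of the SATA/SAS controller
    lspci -nn | grep -i -e sata -e sas

    # Pass the whole controller through to VM 100 (hypothetical VM ID)
    qm set 100 -hostpci0 0000:03:00.0
    ```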

    • Malice@lemmy.dbzer0.com (OP) · 5 points · 11 months ago

      The more replies like this I get, the more I’m inclined to set up a second computer with just TrueNAS and let it do nothing but handle that. I assume that, then, would be usable by the server running proxmox with all its containers and whatnots.

      Thank you for the input!

      • LufyCZ@lemmy.world · 2 points · 11 months ago

        If you want to learn zfs a bit better though, you can just stick with Proxmox. It supports it, you just don’t get the nice UI that TrueNAS provides, meaning you’ve got to configure everything manually, through config files and the terminal.
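        To give a feel for the manual route, creating a mirrored pool from the shell looks roughly like this (pool, dataset, and disk names are placeholders; use the /dev/disk/by-id paths of your real disks):

        ```shell
        # Create a mirrored pool from two disks
        zpool create tank mirror /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2

        # Create a dataset with compression enabled, then check pool health
        zfs create -o compression=lz4 tank/media
        zpool status tank
        ```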

  • ninjan@lemmy.mildgrim.com · 7 points · 11 months ago

    My best advice is to take advantage of the fact that your old setup hasn’t died yet. I.e. start now and set up Proxmox, because it’s vastly superior to TrueNAS for the more general type of hardware you have, and then run a more focused NAS project like OpenMediaVault in a Proxmox VM.

    My recommendation, from experience, would be to set up a VM for anything touching hardware directly, like a NAS or Jellyfin (if you want GPU-assisted transcoding), and I personally find it smoothest to run all my Docker containers from one dedicated Docker VM. LXCs are popular with some, but I strongly dislike how you set hardware allocations for them, and running all Docker containers in one LXC is just worse than doing it in a VM. My future approach will be to move to a more dedicated container setup as opposed to the VM-focused Proxmox, but that is another topic.

    I also strongly recommend using portainer or similar to get a good overview of your containers and centralize configuration management.
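    If you do go with Portainer, the standard install from their docs is a single container next to everything else (ports and image tag follow their current documented defaults; adjust to taste):

    ```shell
    # Run Portainer CE with access to the local Docker socket
    docker volume create portainer_data
    docker run -d \
      --name portainer \
      --restart=always \
      -p 9443:9443 \
      -v /var/run/docker.sock:/var/run/docker.sock \
      -v portainer_data:/data \
      portainer/portainer-ce:latest
    ```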

    As for external access, all I can say is do be careful. Direct internet exposure is likely a really bad idea unless you know what you’re doing and trust the project you expose. Hiding access behind a VPN is fairly easy if your router has a VPN server built in. And a WireGuard-based option (which Netbird, Tailscale, etc. all build on) or something like Cloudflare Tunnels is great if not.

    As for authentication it’s pretty tricky but well worth it and imo needed if you want to expose stuff to friends/family. I recommend Authentik over other alternatives.

    • Malice@lemmy.dbzer0.com (OP) · 2 points · 11 months ago

      I like the advice to use a VM for anything specifically touching hardware. I think I’ll run with that. Thank you! External access is tricky, I know, and doing it securely and safely is really paramount for me. This is the one thing that’s keeping me from just “jumping in” with things. I don’t want to mess that part up.

      • ninjan@lemmy.mildgrim.com · 2 points · 11 months ago

        Well, the good part there is that you can build everything for internal use and then add external access and security later. While VLAN segmentation and an overall secure / zero-trust architecture is of course great, it’s very overkill for a selfhosted environment unless there’s an additional purpose, like learning for work, or you find it fun. The important thing really is the shell protection - that nothing gets in. All the other stuff is to limit potential damage if someone gets in (and in the corporate world it’s not “if”, it’s “when”, because with hundreds of users you always have people being sloppy with their passwords, MFA, devices, etc.). That’s where secure architecture is important, not in the homelab.

        • Malice@lemmy.dbzer0.com (OP) · 2 points · 11 months ago

          That is true that the most important part is just to keep the outside… out. I’d love to learn more intricate/advanced network setups and security too. I do work in IT, and knowing this stuff certainly wouldn’t be bad on my resume, and I’ve actually always been interested in learning it regardless. But perhaps you make a good point that I can secure it from the outside and get things functional, and then work on further optimization down the line. Makes things a little less daunting, haha.

      • ninjan@lemmy.mildgrim.com · 1 point · 11 months ago

        There are absolutely no issues whatsoever with passing through hardware directly to a VM. And virtualized is good, because we don’t want to “waste” a whole machine on just a file server. Sure, dedicated NAS hardware has some upsides in terms of ease of use, but you also pay an, imo, ridiculous premium for that ease. I run my OMV NAS as a VM on 2 cores and 8 GB of RAM (with four hard drives), but you can make do perfectly fine on 1 core and 2 GB RAM if you want and don’t have too many devices attached / don’t do too many IOPS-intensive tasks.

  • teawrecks@sopuli.xyz · 5 points · 11 months ago

    “I need everything to be fully but securely accessible from outside the network”

    I wouldn’t be able to sleep at night. Who is going to need to access it from outside the network? Is it good enough for you to set up a VPN?

    The more stuff visible on the internet, the more you have to play IT to keep it safe. Personally, I don’t have time for that. The safest and easiest system to maintain is one where possible connections are minimized.

    • Malice@lemmy.dbzer0.com (OP) · 3 points · 11 months ago

      I sometimes travel for work, as an example, and need to be able to access things to take care of things while I’m away and the girlfriend is home, or when she’s with me and someone else is watching the place (I have a dog that needs petsat). I definitely have the time to tinker with it. Patience may be another thing, though, lol.

      • Linuturk@lemmy.world · 9 points · 11 months ago

        Tailscale would allow you access to everything inside your network without having it publicly accessible. I highly recommend that since you are new to security.
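        For a sense of how little setup that is, getting a Linux box onto a tailnet with Tailscale’s standard install script is basically:

        ```shell
        # Install Tailscale, then authenticate this machine to your tailnet
        curl -fsSL https://tailscale.com/install.sh | sh
        sudo tailscale up
        ```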

        • teawrecks@sopuli.xyz · 1 point · 11 months ago

          It’s not clear to me how tailscale does this without being a VPN of some kind. Is it just masking your IP and otherwise just forwarding packets to your open ports? Maybe also auto blocking suspicious behavior if they’re clearly scanning or probing for vulnerabilities?

          • lowdude@discuss.tchncs.de · 1 point · 11 months ago

            That’s exactly what it is. I haven’t looked into it too much, but as far as I know its main advantage is simplifying the setup process, which in turn reduces the chances of a misconfigured VPN.

  • Monkey With A Shell@lemmy.socdojo.com · 5 points · edited · 11 months ago

    The right way is the way that works best for your own use case. I like a 3-box setup - firewall, hypervisor, NAS - with a switch in between. Lets you set up VLANs to your heart’s content, manage flows from an external point (virtual firewalls are fine, but if it’s the authoritative DNS/DHCP for your net, it gets a bit chicken-and-egg when it’s inside a VM host), and store the actual data like vids/pics/docs on a NAS that has just that one job of storing the files - less chance of borking it up that way.

    • Malice@lemmy.dbzer0.com (OP) · 2 points · 11 months ago

      I might be able to scrounge together another physical server to use strictly as a NAS, that isn’t a bad idea. Thank you for the suggestion!

  • atzanteol@sh.itjust.works · 5 points · 11 months ago

    As a general rule: One system, one service. That system can be metal, vm, or container. Keeping things isolated makes maintenance much easier. Though sometimes it makes sense to break the rules. Just do so for the right reasons and not out of laziness.

    Your file server should be its own hardware. Don’t make that system do anything else. Keeping it simple means it will be reliable.

    Proxmox is great for managing VMs. You could start with one server and add more to a cluster as needed.

    It’s easy enough to set up WireGuard for roaming systems that you really should. Make a VM for your VPN endpoint and off you go.
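    A minimal sketch of that endpoint VM, assuming wg-quick - every key, address, and port here is a placeholder, not a recommendation:

    ```shell
    # Generate a key pair for the endpoint (each roaming device gets its own)
    umask 077
    wg genkey | tee server.key | wg pubkey > server.pub

    # Minimal wg-quick config; fill in real keys before use
    cat > /etc/wireguard/wg0.conf <<'EOF'
    [Interface]
    Address = 10.8.0.1/24
    ListenPort = 51820
    PrivateKey = <contents of server.key>

    [Peer]
    # one [Peer] section per roaming device
    PublicKey = <client public key>
    AllowedIPs = 10.8.0.2/32
    EOF

    # Bring the tunnel up now and on every boot
    systemctl enable --now wg-quick@wg0
    ```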

    I’m a big fan of automation. Look into Ansible and Terraform. At least consider Ansible for updating all your systems easily - that way you’re more likely to do it often.
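    As a taste of the payoff, once an inventory file exists, patching every Debian/Ubuntu box is one ad-hoc command (the “homelab” group name and inventory path are hypothetical):

    ```shell
    # Upgrade packages on every host in the homelab group, with sudo (-b)
    ansible homelab -i inventory.ini -b \
      -m ansible.builtin.apt -a "upgrade=dist update_cache=yes"
    ```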

  • paf@jlai.lu · 3 points · 11 months ago

    If z2m, zwavejs, … are installed from the add-on store of HA, all you have to do is create a full backup of HA, and all your automations will be saved and restored automatically.

    • Malice@lemmy.dbzer0.com (OP) · 1 point · 11 months ago

      I am running HA in a container, so that’s not an option, unfortunately. If I’m being honest, though, it’s probably not a bad idea to start fresh with HA and re-import individual automations one-by-one, because HA has a lot of “slop” left over from when I was first learning it and playing around with it.
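      For what it’s worth, with HA running as a plain container, the whole state lives in the bind-mounted config directory, so a manual backup can be as simple as this sketch (the container name and host path are examples from a typical compose setup, not your actual ones):

      ```shell
      # Stop HA briefly so the SQLite recorder database is consistent
      docker stop homeassistant
      tar -czf "ha-config-$(date +%F).tar.gz" -C /opt/homeassistant config
      docker start homeassistant
      ```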

  • BearOfaTime@lemm.ee · 3 points · edited · 11 months ago

    Not sure why you need a new router for Pi-hole. If your machines all point to the Pi-hole for DNS, it works. The router has almost nothing to do with what provides DNS, other than maybe having its DHCP config hand out the Pi-hole as the DNS server.

    Even then, you can set up the Pi-hole to be both DHCP and DNS (which helps for local name resolution anyway), and then just turn off DHCP in your router.

    As I understand it, Tailscale and Nginx fulfill the same requirements. I lean toward TS myself; I like how administration works, and how it’s a virtual network instead of an in-bound VPN. This means devices just see each other on this network, regardless of the physical network to which they’re connected. This makes it easy to use the same local-network tools you normally use. For example, you can use just one sync tool, rather than one inside the LAN and one that can span the internet. You can map shares right across a virtual network as if it were a LAN. TS also enables you to access devices that can’t run TS, such as printers, routers, access points, etc, by enabling its Subnet Router.
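    The Subnet Router mode mentioned above is only a couple of commands on the advertising machine (the 192.168.1.0/24 range is an example; the route also has to be approved afterwards in the Tailscale admin console):

    ```shell
    # Allow the box to forward traffic for the LAN
    echo 'net.ipv4.ip_forward = 1' | sudo tee -a /etc/sysctl.d/99-tailscale.conf
    sudo sysctl -p /etc/sysctl.d/99-tailscale.conf

    # Advertise the LAN to the rest of the tailnet
    sudo tailscale up --advertise-routes=192.168.1.0/24
    ```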

    Tailscale also has a couple features (Funnel and Share) which enable you to (respectively), provide internet access to specific resources for anyone, or enable foreign Tailscale networks to access specific resources.

    I see Proxmox and TrueNAS as essentially the same kind of thing - they’re both hypervisors (virtualization hosts), with TrueNAS adding NAS capability. So I can’t think of a use-case for running one on the other. (TrueNAS has some docs around virtualizing it; I assume the use-case is a test lab. I wouldn’t think running TrueNAS, or any NAS, virtualized is an optimal choice, but hey, what do I know?)

    While I haven’t explored both deeply, I lean toward TrueNAS, but that’s because I need a NAS solution and a hypervisor, and I’ve seen similar solutions spec’d many times for businesses - I’ve seen it work well. Plus TrueNAS as a company seems to know what they’re doing, they have a strong commercial arm with an array of hardware options. This tells me they are very invested in making True work well, and they do a lot of testing to ensure it works, at least on their hardware. Having multiple hardware products requires both an extensive test group and support organization.

    Proxmox seems equivalent, except they do just the software part, as far as I’ve seen.

    Two similar products for different, but similar/overlapping use-cases.

    Best advice I have is to make a list of Functional Requirements, abstract/high-level needs, such as “need external access to network for management”. Don’t think about specific solutions, just make the list of requirements. Then map those Functional requirements to System requirements. This is often a one-to-many mapping, as it often takes multiple System requirements to address a single functional requirement.

    For example, that “external access” requirement could map out to a VPN system requirement, but also to an access control requirement like SSO, and then also to user management definitions.

    You don’t have to be that detailed, but it’s good to at least have the Functional-to-System mapping so you always know why you did something.

    • Malice@lemmy.dbzer0.com (OP) · 1 point · 11 months ago

      You make a very good argument for Tailscale, and I think I’ll definitely be looking deeper into that.

      I like your suggestion to map out functional requirements, and then go from there. I think I’ll go ahead and start working on a decent map for that.

      As far as the new router for pi-hole… my super-great, wonderful, most awesome ISP (I hope the sarcasm is evident, haha; the provider is AT&T) dictates that I use their specific modem/router (not optional), and they also do not allow me to change DHCP on that mandated hardware. So my best option, so far as I’ve seen, is to use the ISP’s box in pass-through with a better router behind it that I can actually set up to use pi-hole.

      Thank you for your thoughts and suggestions! I’m going to take a deeper look at Tailscale and get started properly mapping high-level needs/wants out, with options for each.

      • BearOfaTime@lemm.ee · 2 points · 11 months ago

        Lol, sarcasm received, loud n clear!

        Yea, they all suck that way. I still use my own router for wifi. It’s just routing, and your own router will know which way to the internet, unless there’s something I don’t understand about your internet connection. See my other comment below.

        Yea, requirements mapping like this is standard stuff in the business world, usually handled by people like Technical Business/Systems Analysts. Typically they start with Business/Functional Requirements, hammered out in conversations with the organization that needs those functions. Those are mapped into System Requirements. This is the stage where you can start looking at solutions, vendor systems, etc, for systems that meet those requirements.

        System Requirements get mapped into Technical Requirements - these are very specific: CPU, memory, networking, access control, monitor size, every nitpicky detail you can imagine, including every firewall rule, IP address, and interface config. The System and Technical docs tend to be 100+ and several hundred lines in Excel respectively, as the Tech Requirements turn into your change management submissions. They’re the actual changes required to make a system functional.

      • terminhell@lemmy.dbzer0.com · 2 points · 11 months ago

        Ya don’t need AT&T’s modem. Some copypasta I’ve put together:

        If it’s fiber, you don’t need the modem day-to-day - you’ll only need it again once every few months.

        Things you’ll need:

        1. your own router
        2. cheap 4 port switch (1gig pref)

        Setup: Connect the WAN line from the GPON box (the little fiber converter they installed on the wall near the modem) to any port on the 4-port switch. Then run a cable from the switch to the GPON port of the modem (usually a red or green port). Make sure the modem fully syncs. Once this happens, you can move the cable from the modem to your own router’s WAN port. Done! Allow the router a few moments to sync as well.

        Now, every once in a while they’ll send a line-refresh signal that will break this, as will a power outage. In such cases, just plug their modem back in, move the cable back to the modem’s GPON port, wait for sync, then move the cable back to your router.

        Bonus: Hook up all this to a battery backup and you’ll have Internet even during power outages, at least for a while.

        • BearOfaTime@lemm.ee · 3 points · 11 months ago

          Since their modem is handing out DHCP addresses, is there any reason why you couldn’t just connect that cable to your router’s internet port, and configure it for DHCP on that interface? Then the provider would always see their modem, and you’d still have functional routing that you control.

          Since consumer routers have a dedicated interface for this, you don’t have to make routing tables to tell it which way to the internet, it already knows it’s all out that interface.

          Just make sure your router uses a different private address range for your network than the one handed out by the modem.

          So your router should get DHCP and DNS settings from the modem, and will know it’s the first hop to the internet.

          I do this to create test networks at home (my cable modem has multiple ethernet ports), using cheap consumer wifi routers. By using the internet port to connect, I can do some minimal isolation just by using different address ranges, not configuring DNS on those boxes, and disabling DNS on my router.

          • Malice@lemmy.dbzer0.com (OP) · 2 points · 11 months ago

            Their modem is my router; it’s both. That’s why I need a new one, to do exactly as you’re describing (is my understanding, although another post here suggests otherwise).

            • BearOfaTime@lemm.ee · 1 point · 11 months ago

              You should still be able to run your own router with it treating their router as the next hop.

        • Malice@lemmy.dbzer0.com (OP) · 1 point · 11 months ago

          Huh, this is interesting, I’ll have to take another look into this. Thanks for the lead!
          And I do have a UPS, and it is, indeed, pretty glorious that my internet, security cameras, and server all stay online for a good bit of time after an outage, and don’t even flinch when the power is only out briefly. Convenience and peace of mind. Well worth a UPS.

  • VelociCatTurd@lemmy.world · 2 points · 11 months ago

    I will provide a word of advice since you mentioned messiness. My original server was just one physical host onto which I would install new stuff. And then I started realizing that I would forget about stuff, or that if I removed something later there might still be lingering related files or dependencies. Now I run all my apps in Docker containers and use docker-compose for every single one. No more messiness or extra dependencies. If I try out an app and don’t like it - boom, container deleted, end of story.

    Extra benefit is that I have less to back up. I only need to back up the docker-compose files themselves and whatever persistent volumes are mounted to each container.
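    To illustrate how small that backup really is, here’s a self-contained sketch (the layout - one folder per app under a compose root - is a hypothetical example, created on the fly for the demo):

    ```shell
    # Demo layout: one directory per app, each with its docker-compose.yml
    COMPOSE_ROOT="$(mktemp -d)"
    mkdir -p "$COMPOSE_ROOT/example-app"
    printf 'services:\n  app:\n    image: hello-world\n' \
      > "$COMPOSE_ROOT/example-app/docker-compose.yml"

    # One archive captures every compose file, preserving directory structure
    BACKUP_DIR="$(mktemp -d)"
    tar -czf "$BACKUP_DIR/compose-backup.tar.gz" -C "$COMPOSE_ROOT" .
    tar -tzf "$BACKUP_DIR/compose-backup.tar.gz"
    ```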

    • Malice@lemmy.dbzer0.com (OP) · 1 point · 11 months ago

      I forgot to mention, I do use docker-compose for (almost) all the stuff I’m currently using and, yes, it’s pretty great for keeping things, well… containerized, haha. Clean, organized, and easy to tinker with something and completely ditch it if it doesn’t work out.

      Thanks for the input!

  • ratman150@sh.itjust.works · 2 points · edited · 11 months ago

    I’ll freely admit to skimming a bit, but yes, Proxmox can run TrueNAS inside of it. Proxmox is powerful but might be a little frustrating to learn at first. For example, by default Proxmox expects to use the boot drive for itself, and it’s not immediately clear how to change that to use that disk for other things.

    The Noctua NH-D15 is overkill for that CPU btw, unless you’re doing an overclock, which I wouldn’t recommend for server use. What are your plans for the 1060? If using Proxmox, you’ll want to get one of the “G” series AMD CPUs so that Proxmox binds to the APU, and then you should be able to do GPU passthrough on the 1060.

    • Malice@lemmy.dbzer0.com (OP) · 1 point · 11 months ago

      I’d planned on using the GPU for things like video transcoding (which I know it’s probably way overkill for). Perhaps something like Stable Diffusion to play around with down the line? I’m not entirely sure. I do know that, since the CPU isn’t a G series, the GPU will need to be plugged in at least if/when I need to put a monitor on it. Laziness suggests I’ll likely just end up leaving it in there, lol. As far as the NH-D15, yeah, that’s outrageously overkill, I know, and I may very well slap the stock cooler on it and sell the NH-D15.

      Thank you!

      • ratman150@sh.itjust.works · 2 points · 11 months ago

        I have a Proxmox box with an R5 4600G; even under extreme loads the stock cooler is fine. Honestly, once Proxmox is set up you don’t need a GPU. The video output of Proxmox is just a terminal (Debian), so as long as things are running normally you can do everything through the web interface even without the GPU. I do highly recommend a second GPU (either a G-series CPU or a cheap GPU) if you want to try Proxmox GPU passthrough. I’ve done it and can say it is extremely difficult to get working reliably with just a single GPU.

        • Malice@lemmy.dbzer0.com (OP) · 1 point · 11 months ago

          Yeah, I’d definitely considered the fact that I can probably just take the GPU out as soon as proxmox is set up. The only thing I’d leave it for is for transcoding, which may or may not be something I even need to/want to bother with.

  • OminousOrange@lemmy.ca · 2 points · 11 months ago

    For ease of setup and use, I’ve found Twingate to be great for outside access to my network.

  • Decronym@lemmy.decronym.xyz (bot) · +1/−1 · edited · 11 months ago

    Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:

    DNS: Domain Name Service/System
    HA: Home Assistant automation software; also High Availability
    IP: Internet Protocol
    LXC: Linux Containers
    NAS: Network-Attached Storage
    PiHole: Network-wide ad-blocker (DNS sinkhole)
    SSO: Single Sign-On
    VPN: Virtual Private Network

    8 acronyms in this thread; the most compressed thread commented on today has 16 acronyms.

    [Thread #453 for this sub, first seen 25th Jan 2024, 21:15]