• 24 Posts
  • 2.11K Comments
Joined 2 years ago
Cake day: June 16th, 2023


  • TCB13@lemmy.world to Linux@lemmy.ml · Incus 6.8 has been released
    3 days ago

    Well… If you’re running a modern version of Proxmox, then you’re already running LXC containers, so why not move to Incus, which is made by the same people?

    Proxmox (…) They start off with stock Debian and work up from there which is the way many distros work.

    Proxmox has been using Ubuntu’s kernel for a while now.

    Now, if Proxmox becomes toxic

    Proxmox is already toxic: it requires a paid license for the stable version and updates. Furthermore, the Proxmox team has been found to withhold important security updates from non-stable (non-paying) users for weeks.

    My little company has a lot of VMware customers and I am rather busy moving them over. I picked Proxmox (Hyper-V? No thanks) about 18 months ago when the Broadcom thing came about and did my own home system first and then rather a lot of testing.

    If you’re expecting the same type of reliability from Proxmox that you’ve had from VMware, you’re going to have a very hard time soon. I hope not, but I also know how Proxmox works.

    I’ve run Proxmox since 2009 and, until very recently, professionally, in datacenters, with multiple clusters of around 10-15 nodes each, which means I’ve been around for all of Proxmox’s wins and fails. I saw the rise and fall of OpenVZ, the subsequent and painful move to LXC, and the SLES/RHEL compatibility issues.

    While Proxmox works most of the time and their paid support is decent, I would never recommend it to anyone since Incus became a thing. The Proxmox PVE kernel has a lot of quirks: for starters, it is built on Ubuntu’s kernel – which is already a dumpster fire of hacks waiting for someone upstream to implement things properly so they can backport them and ditch their own implementations – and on top of that it is typically an older version, mangled and twisted by the extra-feature garbage added on top.

    I got burned countless times by Proxmox’s kernel: broken drivers, waiting months for fixes already available upstream or for them to fix their own bugs. As practical examples, at some point OpenVPN was broken under Proxmox’s kernel, and the Realtek networking driver has probably spent more time broken than working. ZFS support was introduced only to bring kernel panics. Upgrading Proxmox is always a shot in the dark: half of the time you get a half-broken system that is able to boot and pass a few tests but will randomly fail a few days later.

    Proxmox’s startup is slow, slower than any other solution – it even includes management daemons that are there just to ensure that other daemons are running. Most of the built-in daemons are so poorly written and tied together that they don’t even start properly with the system on the first try.

    Why keep dragging along all of the Proxmox overhead and potential issues when you can run a clean shop with Incus, actually made by the same people who make LXC?
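
    Since the pitch here is moving from Proxmox’s LXC containers to Incus, below is a rough sketch of what driving Incus can look like, simply calling the incus CLI from Python via subprocess. The image alias and the container name "web1" are only examples, and this assumes the incus client is installed and initialised – it’s an illustration, not a migration recipe.

        import subprocess

        def incus(*args: str) -> str:
            """Run an incus CLI command and return its stdout."""
            result = subprocess.run(
                ["incus", *args], capture_output=True, text=True, check=True
            )
            return result.stdout

        # Launch a Debian 12 container (the name "web1" is just an example).
        incus("launch", "images:debian/12", "web1")

        # List instances and run a command inside the new container.
        print(incus("list"))
        print(incus("exec", "web1", "--", "uname", "-a"))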


  • Kinda Scenario 1 is the standard way: firewall at the perimeter with separately isolated networks for DMZ, LAN & Wifi

    What you’re describing is close to scenario 1, but not purely scenario 1. It is a mix of public and private traffic on a single IP address and a single firewall, the kind of setup a lot of people use because they can’t have two separate public IP addresses running side by side on their connection.

    The advantage of that setup (the actual scenario 1) is that it greatly reduces the attack surface by NOT exposing your home network’s public IP to whatever you’re hosting and by not relying on the same firewall for both. Even if your entire hosting stack gets hacked, there’s no way the attacker can get into your home network, because they’re two separate networks.

    Scenario 1 describes having two public IPs and a switch right after the ISP ONT: one cable goes to the home firewall/router and another to the server (or to another router/firewall). Much more isolated. It isn’t a simple DMZ; it’s literally the same as having a separate internet connection for each thing.
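
    To make the isolation concrete, here is a tiny Python sketch of the addressing in that two-IP layout; all addresses are made-up example values, not anything from the thread.

        import ipaddress

        # Hypothetical addressing for the two segments (example values only).
        home_lan = ipaddress.ip_network("192.168.1.0/24")    # behind the home firewall/router
        hosting_net = ipaddress.ip_network("10.0.50.0/24")   # behind the server / second firewall

        # Each segment gets its own public IP via the switch after the ONT.
        home_public = ipaddress.ip_address("203.0.113.10")
        hosting_public = ipaddress.ip_address("203.0.113.11")

        # The point of scenario 1: the segments share no addressing and no firewall,
        # so a compromised hosting stack has no path into the home LAN.
        assert not home_lan.overlaps(hosting_net)
        assert home_public != hosting_public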


  • I’m curious, are there documented attacks that could’ve been prevented by this?

    From my understanding, CPU pinning shouldn’t be used that much: the host scheduler is aware that your VM threads are linked and will schedule child threads together. If you pin cores to VMs, you block the host scheduler from making smart choices about scheduling. This is mostly only an issue if your CPU is under constraint, i.e. it is being asked to perform more work than it can handle at once. Pinning doesn’t mean dedicated either: the host scheduler will still schedule non-VM work onto your pinned cores.

    I’m under the impression that CPU pinning is an old approach from a time before CPU schedulers were as sophisticated and did not handle VM threads in a smart manner. This is not the case anymore, and there might even be a negative performance impact from it.
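
    For reference, the pinning being debated is normally configured in the hypervisor (libvirt’s vcpupin, for instance), but the underlying mechanism is plain CPU affinity. Here is a minimal, Linux-only Python sketch of the idea, with arbitrary example core numbers:

        import os

        pid = os.getpid()

        # By default the kernel scheduler may place this process on any online CPU.
        print("default affinity:", os.sched_getaffinity(pid))

        # Pinning restricts the process (and its threads) to a fixed set of cores,
        # e.g. cores 0 and 1 here -- roughly what vCPU pinning does for a VM's threads.
        os.sched_setaffinity(pid, {0, 1})
        print("pinned affinity:", os.sched_getaffinity(pid))

        # As argued above, this also stops the scheduler from moving the work elsewhere
        # when those cores are busy, so it can hurt rather than help.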



  • the more complicated it gets the more likely you are to either screw up unintentionally, or get annoyed at it, and do something dumb on purpose, even though you totally were going to fix it later. (…) Pick the one that makes sense, is easy for you to deploy and maintain

    This is an interesting piece of advice.

    Anyway, maybe I wasn’t clear enough: I’m not looking to pick a setup. I’ve been doing 2.B. for a very long time, and I work in tech and know my way around. Just gauging what others are doing and maybe finding a few blind spots :).

    Thanks.