• 0 Posts
  • 56 Comments
Joined 11 months ago
Cake day: January 26th, 2025

  • Software compatibility is a problem on X as well, so I’m extrapolating. I don’t expect the situation to get better though. I’ve managed software that caused fucking kernel panics unless it ran on Gnome. The support window for this type of software is extremely narrow and some vendors will tell you to go pound sand unless you run exactly what they want.

    I’m no longer working with either educational or research IT, so at least it’s someone else’s problem.

    As for ThinLinc, their customers have been asking what their plan is for the past decade, but to quote them: ”Fundamentally, Wayland is not compatible with remote desktops in its core design.” (And that was made clear by everyone back in 2008.)

    Edit: tangentially related, the only reasonable way to run VNC against Wayland now is to use the tightly coupled VNC server within the compositor (you want intel on window placements, redraws and such; blindly encoding the framebuffer is just bad). If you want to build a system on top of that, you need to integrate with every compositor separately, even though they all support ”VNC” in some capacity. The result is that vendors will go for the common denominator, which is running in a VM and grabbing the framebuffer from the hypervisor. The user experience is absolute hot garbage compared to TigerVNC on X.
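
    To make that concrete, here’s a toy Python sketch (not real VNC code, all numbers made up) of why compositor-level damage tracking wins: with redraw intel you only encode a tiny dirty rectangle, while a blind framebuffer grab has to diff or re-encode the entire screen.

        # Toy illustration: per-pixel "damage" vs. full-frame encoding.
        import numpy as np

        H, W = 1080, 1920
        prev = np.zeros((H, W, 3), dtype=np.uint8)   # last transmitted frame
        curr = prev.copy()
        curr[100:130, 200:520] = 255                 # e.g. a small text redraw

        damage = np.any(prev != curr, axis=2)        # which pixels changed
        ys, xs = np.nonzero(damage)
        y0, y1 = ys.min(), ys.max() + 1
        x0, x1 = xs.min(), xs.max() + 1
        dirty = (y1 - y0) * (x1 - x0)
        print(f"dirty rect: {dirty} px of {H * W} px "
              f"({100 * dirty / (H * W):.2f}% of the frame)")
        # A compositor-integrated server ships just curr[y0:y1, x0:x1];
        # a hypervisor framebuffer grab has no such mask and encodes it all.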


  • It’s great that most showstoppers are fixed now. Seventeen years later.

    But I’ll bite: viable software-rendered and/or hardware-accelerated remote desktop support with load balancing and multiple users per server (headless and GPU-less). So far - maybe possible. But then you need to allow different users to select different desktop environments (due to either user preferences or actual business requirements). All this may be technically possible, but the architecture of Wayland makes it very hard to implement and support in practice. And if you do get it going, the hard focus on GPU acceleration yields an extreme cost increase, as you now need to buy expensive Nvidia GPUs for VDI, with even more expensive licenses on top. Every frame can’t be perfect over a WAN link.

    This is trivial with X; multiple commercially supported solutions exist, see for example ThinLinc. It is deployable in literally ten minutes, battle tested, and works well. I know of multiple institutional users actively selecting X in current greenfield deployments because of this, rolling out to thousands of users in well-funded, high-profile projects.
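
    For a flavour of what ”trivial with X” means, here’s a rough sketch (assuming the classic TigerVNC vncserver wrapper; the display numbers and DE commands are just examples, and a real deployment would run one session per user account rather than this loop):

        # Illustrative only: one headless X session per display, each free
        # to start a different desktop environment via ~/.vnc/xstartup.
        import subprocess
        from pathlib import Path

        SESSIONS = {1: "startplasma-x11", 2: "gnome-session", 3: "startxfce4"}

        xstartup = Path.home() / ".vnc" / "xstartup"
        xstartup.parent.mkdir(exist_ok=True)

        for display, desktop in SESSIONS.items():
            # The DE is a per-session choice, not a server-wide one.
            xstartup.write_text(f"#!/bin/sh\nexec {desktop}\n")
            xstartup.chmod(0o755)
            subprocess.run(["vncserver", f":{display}",
                            "-geometry", "1920x1080"], check=True)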

    As for the KDE showstopper list - that’s exactly my point. I can’t put my showstoppers in a single place; I need to report to KDE, Gnome and wlroots and then track all of them. That’s the huge architectural flaw here. We can barely get commercial vendors to interact with a single project, and the Wayland architecture requires them to interact with a shitton of issue trackers and different APIs (apparently also dbus). Suddenly you have a CAD suite that only works on KDE and some FEM software that only runs on a particular version of Gnome, with a user who wants both running at the same time. I don’t care how well KDE works. I care that users can run the software they need; the desktop environment is just a tool for that. The fragmentation between compositors really fucks this up by coupling applications to the desktop environment. Eventually, this will focus commercial efforts on the biggest commercial desktop environment (i.e. whatever RHEL uses), leaving the rest behind.

    (Fun story: one of my colleagues using Wayland had a post-it note reading ”DO NOT TURN OFF” on his monitor the entire pandemic - his VNC session died if the DisplayPort link went down.)



    It’s hilarious that all of this was foreseen 17 years ago by basically everyone, and here is a nice list making exactly those points. I’ve never seen a better-structured ”told ya so” in my life.

    The point isn’t whether the features are there or not, but how horrendously fragmented the ecosystem is. Implementing anything against that mess of an API surface would be insane for any open source project to attempt, even ignoring that the compositors are still moving targets.

    (Also, holy shit, the Gnome people really want everyone to use dbus for everything.)

    Edit: 17 years. Seventeen years. This is what we got. While the list describes the status quo, it’s telling that it took 17 years to implement most of the features expected of a display server back in the last millennium. Most features, but not all.



  • enumerator4829@sh.itjust.works to Linux@lemmy.ml · KDE Going all-in on a Wayland future · edited 12 days ago

    Because instead of just using a common, well defined API, every developer is supposed to ”work together with Wayland compositors”, of which there are many, none of which are at feature parity with X. Working with the (at least) three major compositors is far too much work for most projects, if you can even get them on board.

    Every compositor must reimplement everything previously covered by third party software, or at least define and reimplement APIs to cover that functionality. We have been screaming about this obvious design fuckup since Wayland was first introduced, but nooo, every frame is perfect.

    Take a look at https://arcan-fe.com/ for what a properly architected display server could look like instead of the mess we currently have with Wayland. I’m holding off on moving to Wayland for many reasons, and it wouldn’t surprise me if Arcan becomes mature and fully usable before Wayland. If I get to place a bet on either Wayland or a few guys in a basement with a proper architecture, I know what I’ll put my money on.




  • For anyone working on or around stages:

    Most sane production companies standardise on over-under. Even if you find some other method superior (nothing is), you’ll get thrown out headfirst if you don’t follow the standard. Having a tech fuck around with a non-compliant cable during a changeover is far too risky.

    It should be noted that there are special cases. For example, thicccc cables (e.g. a 24-channel analogue multicore) that have their own dedicated cases often go down in a figure-eight instead - easier to pull out, and you can use a smaller case. Thank god for digital audio.

    (Also, when a cable is coiled over-under correctly, you can throw it and it will land straight, without any internal stresses winding it up like a spring.)


    Here I am, running separate Tailscale instances and a separate reverse proxy for like 15 different services, and that’s just one VM… All in all, probably 20-25 Tailscale instances on a single physical machine.

    Don’t think about Tailscale like a normal VPN. Just put it everywhere. Put it directly on your endpoints, don’t route. Then lock down all your services to the tailnet and shut down any open ports to the internet.
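
    As a minimal sketch of that lockdown (assuming the tailscale CLI is installed; tailscale ip -4 is how the node reports its 100.x.y.z address): bind the service to the tailnet address only, so it never listens on a public interface.

        # Serve only on the Tailscale interface; outside the tailnet,
        # clients can't even complete a TCP handshake.
        import subprocess
        from http.server import HTTPServer, SimpleHTTPRequestHandler

        ts_ip = subprocess.run(["tailscale", "ip", "-4"],
                               capture_output=True, text=True,
                               check=True).stdout.strip().splitlines()[0]

        HTTPServer((ts_ip, 8080), SimpleHTTPRequestHandler).serve_forever()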


    My NAS will stay on bare metal forever. Complications there are something I really don’t want. Passthrough of drives/PCIe devices works fine for most things, but I won’t use it for ZFS.

    As for services, I really hate using Docker images with a burning passion. I don’t trust anyone else to keep container images secure - I want the security updates directly from my distribution’s repositories, I want them fully automated, and I want that inside any containers too. Having NixOS build and launch containers with systemd-nspawn solves some of it. The actual Docker daemon isn’t getting anywhere near my systems, but I do have one or two OCI images running. I’ll probably migrate to small per-service VMs once I get new hardware up and running.

    Additionally, I’ve never found a source of container images I feel I can trust long term. When I grab a package from Debian or RHEL, I know that package will keep working, without any major changes to functionality or config, until I upgrade to the next major version. A container? How long will it get updates? How frequently? Will the config format, environment variables or mount points change? Will a threat actor assume control of the image? (Oh look, all the distros actually enforce GPG signatures in their repos!)
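
    To illustrate that last point, a small sketch of the check apt itself performs on every repository index (assuming a Debian-ish host with debian-archive-keyring installed; the mirror URL is just an example):

        # Every Debian repo index is signed; gpgv is what apt uses
        # under the hood to refuse anything that doesn't verify.
        import subprocess
        import urllib.request

        BASE = "https://deb.debian.org/debian/dists/stable/"
        for name in ("Release", "Release.gpg"):
            urllib.request.urlretrieve(BASE + name, name)

        subprocess.run(["gpgv", "--keyring",
                        "/usr/share/keyrings/debian-archive-keyring.gpg",
                        "Release.gpg", "Release"], check=True)
        print("repository index signature verified")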

    So, what keeps me on bare metal? Keeping my ZFS pools safe. And then just keeping away from the OCI ecosystem in general, the grass is far greener inside the normal package repositories.



  • SQLite is one of the very few open source projects with a reasonable plan for monetisation.

    • Do you want to use one of the proprietary extensions? Fork up a few thousand. No biggie.
    • Do you operate in a regulated industry (aviation) and need access to the 100% coverage test suite along with a paper trail? Fork up ”Call us”.
    • Is your company insisting that you only use licensed or supported software? Well, you can apparently pay them for a licence to their public domain software.

    Basically, squeeze regulated industries, hard.
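
    For contrast, the public-domain core they’re monetising around already ships in every Python install; a trivial sketch (the tier descriptions are my paraphrase):

        # The free core via Python's bundled binding: no licence,
        # no vendor, no account.
        import sqlite3

        con = sqlite3.connect(":memory:")
        con.execute("CREATE TABLE pricing (tier TEXT, cost TEXT)")
        con.executemany("INSERT INTO pricing VALUES (?, ?)",
                        [("core", "free, public domain"),
                         ("proprietary extensions", "a few thousand"),
                         ("test suite + paper trail", "call us")])
        for tier, cost in con.execute("SELECT tier, cost FROM pricing"):
            print(tier, "->", cost)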

    I’m all for open source, but at some point developers should stop acting surprised when people use their work at the edges of the licence terms (looking at you, Mongo, Redis and Hashicorp). As for developers running projects in their free time: maybe start setting boundaries? The only reason companies aren’t paying is that they know they can get away with withholding money, because some sucker will step up and do it for free, ”for the greater good”. Stop letting them have it for free.

    Looks like Red Hat is kinda going in this direction (pay to get a paper trail saying a CVE number is patched), and it has basically always squeezed regulated industries. Say what you want about that strategy; it’s at least financially viable long term. (Again, looking at you, Hashicorp, Redis, Mongo, Minio and friends.)



  • It’s 2025. Any internet connected machine on any EOL OS or without updates applied in a timely manner should get nuked from orbit.

    And that goes for all Linux and Android users out there too. Update your bloody phones.

    I have a Windows 10 machine with firewalls, updates and antivirus all turned off, for a single specific software. Works fine, and will keep working fine for a long time, but that installation will never again see a route to the internet.


    I know it’s possible to run music production on Linux; in fact, it’s better than ever.

    But:

    • OP explicitly asks for keeping his Cakewalk and Ableton files working.
    • OP has a small child and just wants a working music production machine with minimal fuss and time investment.
    • Like 95% of people doing any kind of music production (outside of our Linux bubble) will have an iLok-licensed favourite plugin somewhere. I’ve never seen a professional without several.

    Please stop recommending Linux to people who aren’t ready for it yet. Find the people who are, get them over. The rest will follow.




  • For music production on a hobby level? Linux is not what you want.

    VST availability is abysmal. For a DAW, you can choose between Reaper and Ardour. Both are reasonably good, but without decent third-party VSTs you’ll suffer. You won’t get iLok working, you won’t get any commercial plugins working, and your old project files won’t open.

    Now, if you are exclusively working with Airwindows plugins (look it up!) in Reaper, you could get away with a Linux migration. Cakewalk and Ableton? Not a chance in hell.

    Go buy a cheap used 16GB M1 Mac Mini - music production stuff ”just works” there. Given your config, it looks like that could be within budget. Or upgrade your old machine to Windows 11; pick your poison.


  • Fine, take the structured approach to ”Linux”:

    • 3-5 years of university studies with a well-designed curriculum, including operating systems basics, networking, security, data structures and compilers. This will give you the basics you need to delve further into ”Linux”.
    • Add MIT’s ”Missing Semester” online course. This will get you more proficient in practice.
    • Go grab a Red Hat certification (or don’t - it’s not worth the paper it’s printed on). This will ensure you have a paper certifying you are sufficiently indoctrinated. It’s also a structured course in Linux.
    • Go do stuff with your newly acquired knowledge and gradually build up your competences.

    If that investment seems a bit steep, take only the last step: build a homelab and take a structured approach to any interesting subjects you encounter along the way.