• 0 Posts
  • 37 Comments
Joined 1 year ago
Cake day: July 2nd, 2023





  • Depends what you want to play it on. In my house we have:

    • 3 laptops
    • 2 tablets
    • 2 mobile phones (1 Android, 1 iPhone)
    • 1 TV

    Not all of these devices support local music storage, and it’s a pain to sync files between them. With Jellyfin the complete library sits in one location with a consistent interface. It can also be made available remotely if I choose.


  • Ok, I missed which sub I was in, sorry. There is a Linux desktop Jellyfin app, but I haven’t used it myself. In my own case I am running Jellyfin on Linux. I use various clients, including the web browser (laptop), Android and Roku (TV), and find it works really well. In the past I tried the ‘connect directly to the server’ route with XBMC (as Kodi was called then) and it never worked well, with similar issues to those described in other comments.




  • We’re going to need to know as a minimum:

    • Linux distribution and version
    • Jellyfin install method and version
    • what you have already tried - it’s not clear where all those flags are coming from

    I would also support the comments here recommending that you use Docker. Only a small number of Linux distributions and versions have a fully supported distribution-package install of Jellyfin, and even then what you need to do varies from one to the next. Every Linux distribution and version supports Docker, and the process is essentially the same for all of them.
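
    To make that concrete, a minimal Docker Compose file for Jellyfin might look something like the sketch below. The image name, ports and the /config and /cache mounts come from the official jellyfin/jellyfin image; the host paths and the read-only media mount are placeholders you would adjust for your own library.

      services:
        jellyfin:
          image: jellyfin/jellyfin:latest
          container_name: jellyfin
          ports:
            - "8096:8096"              # web UI and API
          volumes:
            - ./config:/config         # Jellyfin configuration
            - ./cache:/cache           # transcoding cache
            - /path/to/media:/media:ro # your library (placeholder path)
          restart: unless-stopped

    Bring it up with docker compose up -d, then finish the setup wizard at http://localhost:8096.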


  • Ok, aside from Android, I’ve yet to see any serious usage of SELinux in the real world, and I’ve been working on cloud tech for years. Acknowledged issues such as complexity aside, it’s really just that much less relevant in a modern, single-purpose environment such as Docker/Kubernetes/cloud functions/etc.



  • GitLab just doesn’t compare in my view:

    To begin with, you have three different major versions to work with:

    • Self-Hosted open source
    • SaaS open source
    • Enterprise SaaS

    Each of these has different features and limitations, but all three share the same documentation - a recipe for confusion if ever I saw one. Some of what’s documented only applies to the enterprise SaaS as run by GitLab themselves and isn’t available to customers.

    Whilst in theory it should be possible to build a GitLab pipeline equivalent to a GitHub Actions workflow, in production these invariably seem to metastasize through includes until they are tens or hundreds of thousands of lines long. Yes, I’m speaking from production experience across multiple organisations. Things that you would think were obvious and straightforward, especially coming from GitHub Actions, seem difficult or impossible. An example:

    I wanted to set up a GitHub Actions workflow for a little Golang app: on push to any branch, run the tests and make a release build available, retaining artefacts for a week; on merging to main, make a release build available with artefacts retained indefinitely. It took me a couple of hours when I’d never done this before, but it was all more or less as one would expect. I tried to do the equivalent in GitLab’s free SaaS and gave up after a day and a half - testing and building were fine, but it seems you’re expected to use a third-party artefact store. Yes, you could argue that this is outside GitLab’s remit, although given that its major competitor supports it, that seems a strange position. In any case, you would expect it to be clearly documented; it isn’t, or at least wasn’t six months ago.
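
    For reference, the GitHub Actions side of that looks roughly like the sketch below. The app name, Go version and the release action are illustrative rather than taken from my actual setup, and since uploaded artifacts can’t be kept forever, the main-branch build is published as a release instead:

      name: build
      on:
        push:
          branches: ["**"]

      permissions:
        contents: write          # needed to create releases from main

      jobs:
        test-and-build:
          runs-on: ubuntu-latest
          steps:
            - uses: actions/checkout@v4
            - uses: actions/setup-go@v5
              with:
                go-version: "1.22"
            - name: Run tests
              run: go test ./...
            - name: Build
              run: go build -o myapp .
            # Feature branches: keep the build for a week
            - name: Upload branch artifact
              if: github.ref != 'refs/heads/main'
              uses: actions/upload-artifact@v4
              with:
                name: myapp-${{ github.sha }}
                path: myapp
                retention-days: 7
            # Main: attach the build to a release so it sticks around
            - name: Publish release build
              if: github.ref == 'refs/heads/main'
              uses: softprops/action-gh-release@v2
              with:
                tag_name: build-${{ github.run_number }}
                files: myapp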




  • Coming from what looks to me like a different perspective from many of the commenters here (disclosure: I am a professional platform engineer):

    If you are already scripting your setups, then yes, you should absolutely learn and use Ansible. The key reasons are that it is robust, explicit, and repeatable - whether that means the same host multiple times or multiple hosts. I have lost count of the number of pet Bash scripts I have encountered in various shops, many of them created by quite talented people. They all had problems. Some typical ones:

    • Issue: most Bash scripts are written without dependency checks.
      Example: ‘Of course everyone will have GNU coreutils installed, it’s part of every Linux distro’ - then someone runs the script on a Mac.
    • Issue: ‘We need to hand this action off to a command-line tool, that’s obvious.’
      Example: the script fails if the tool isn’t available, and errors from the tool aren’t handled unless they’re exactly what was expected.
    • Issue: ‘Of course people will realise they need to run this from an environment prepared in this exact (undocumented) way.’
      Example: someone runs the script in a different environment.
    • Issue: ‘Of course people will be running this on x86_64/AMD64, all these third-party binaries are available for that.’
      Example: someone runs it on ARM.
    • Issue: ‘Of course people will know what to do if the script fails midway through.’
      Example: people re-run the script after a partial failure and it’s a mess.

    The thing about Ansible is that it can be modular (if you want), and you can use other people’s code, but fundamentally it runs one step at a time (see the playbook sketch after this list). For each step you will know:

    • Are dependencies met?
    • Did that step succeed or fail (in realtime!)?
    • (If it failed) what was the error?
    • (Assuming you have written sane Ansible) you can re-run your playbook at any time to get the ‘same’ result. No worries about being left in an indeterminate state
    • (To an extent) It is self-documenting
    • Host architecture doesn’t really matter
    • Target architecture/OS is specified and clear
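
    As an illustration, here is a minimal playbook sketch. It assumes a hypothetical setup - installing and configuring nginx on a Debian-family host group called ‘webservers’ - so the package, paths and template are placeholders, but the shape is what matters: every task is named, reports success or failure as it runs, and is safe to re-run.

      # site.yml - hypothetical minimal playbook
      - name: Configure web hosts
        hosts: webservers               # placeholder inventory group
        become: true

        tasks:
          - name: Ensure nginx is installed
            ansible.builtin.apt:
              name: nginx
              state: present
              update_cache: true

          - name: Deploy site configuration
            ansible.builtin.template:
              src: templates/site.conf.j2        # placeholder template
              dest: /etc/nginx/conf.d/site.conf
            notify: Restart nginx

          - name: Ensure nginx is enabled and running
            ansible.builtin.service:
              name: nginx
              state: started
              enabled: true

        handlers:
          - name: Restart nginx
            ansible.builtin.service:
              name: nginx
              state: restarted

    Running ansible-playbook site.yml a second time against the same hosts changes nothing unless something has drifted, which is exactly the property the Bash scripts above tend to lack.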