• Blue_Morpho@lemmy.world
    23 hours ago

    Writing in Rust or “an efficient language” does nothing for RAM bloat. The problem is using third-party libraries and frameworks. For example, a JavaScript interpreter uses around 400 KB. The JavaScript problem is developers importing a 1 GB library to compare a string.

    You’d have the same bloat if you wrote in assembly.

    • WittyShizard@lemmy.world
      23 hours ago

      Maybe you’re confusing memory (RAM) with storage? Because I converted some backend processing services from nodejs to rust, and it’s almost laughable how little RAM the rust counterparts used.

      Just running a nodejs service took a couple hundred MB of RAM, IIRC, while the rust services could run at below 10 MB.

      But I’m guessing that if you went the route of compiling an actual binary from the nodejs service, with Bun or Deno, you could achieve some savings in RAM & storage either way.

      • Blue_Morpho@lemmy.world
        21 hours ago

        Because I converted some backend processing services from nodejs to rust,

        You converted only the functions you needed and only included the functions you needed. You did not convert the entire node.js codebase and then include the entire library. That’s the problem I’m describing. A few years ago I toyed with JavaScript to make an LCARS-style wall home automation panel. The overhead of what other people had published was absurd. I did what you did: I took out only the functions I needed, rewrote them, and reduced my program from gigabytes to megabytes, even though it was still all JavaScript.

          • squaresinger@lemmy.world
            19 hours ago

            On the one hand, tree shaking is often not used, even in large corporate projects.

            On the other hand, tree shaking is much less effective than what a good compiler does. Tree shaking only works on a per-module basis, while compilers can optimize down to individual lines of code: unused functions aren’t included in the binary, and even variables that can be optimized out never make it in.
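            A hypothetical sketch of that granularity difference (the function names are made up): a tree shaker can drop `unusedExport` because nothing imports it, but once `format` is kept, everything inside it ships, including branches no caller ever reaches; a whole-program compiler could prove such a branch dead and strip it.

            ```javascript
            // Imagine this is a module inside node_modules. Tree shaking works at
            // export granularity: unreferenced exports can be dropped, but the body
            // of any export that IS referenced ships in full.
            function format(value, opts = {}) {
              if (opts.verbose) {
                // Suppose no call site ever passes verbose: true. A tree shaker
                // still bundles this branch; a whole-program compiler with
                // dead-code elimination could prove it unreachable and remove it.
                return `value=${JSON.stringify(value)} (type: ${typeof value})`;
              }
              return String(value);
            }

            function unusedExport(x) {
              // Never imported anywhere: this one, tree shaking CAN remove.
              return x * 2;
            }

            console.log(format(42));
            ```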

            But the biggest issue (and one that tree shaking can’t really help with either) is that, due to the weak standard library of JS, a ton of very simple things get implemented in lots of different ways. It’s not uncommon for a decently sized project (including all of its dependencies) to contain a dozen or so implementations of a padding function or some other small helper.

            And since all of them are used somewhere in the dependency tree, none of them can be optimized out.
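            The duplication is easy to picture with a hypothetical sketch (both helpers and their “packages” are invented): two dependencies each ship their own left-pad, and both stay reachable from the dependency tree, so neither copy can be shaken out, even though String.prototype.padStart has covered this case since ES2017.

            ```javascript
            // dep-a's private helper (imagine node_modules/dep-a)
            function depALeftPad(str, len, ch) {
              str = String(str);
              while (str.length < len) str = ch + str;
              return str;
            }

            // dep-b's independently written equivalent (node_modules/dep-b)
            function depBPad(value, width, fill) {
              const s = String(value);
              return s.length >= width ? s : fill.repeat(width - s.length) + s;
            }

            // The app reaches each dependency once, so neither helper is dead
            // code and a tree shaker has to keep both copies.
            console.log(depALeftPad("7", 3, "0")); // "007"
            console.log(depBPad("7", 3, "0"));     // "007"
            ```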

            That’s not really a problem of the runtime or the language itself, but since the language and its environment are quite tightly coupled, it is a big problem when developing in JS.

            • Caveman@lemmy.world
              7 hours ago

              “Mature ecosystem” it’s called in JS land.

              I wish nodejs or ecmascript would have just done the Go thing and included a legit standard library.

    • Ephera@lemmy.ml
      22 hours ago

      This isn’t Reddit. You don’t need to talk in absolutes.

      Similar to WittyShizard, my experience is very different: our Rust application uses 1200 dependencies and, I think, around 50 MB of RAM. We had a Kotlin application beforehand, which used around 300 dependencies and 1 GB of RAM, I believe. I would expect a JavaScript application of similar complexity to use a similar amount of RAM or more.

      And more efficient languages do have an effect on RAM usage, for example:

      • Not using garbage collection means objects generally get cleared from RAM quicker.
      • Iterating over substrings or list elements is likely to be implemented more efficiently; for example, Rust has string slices and explicit .iter() + .collect().
      • People in the ecosystem will want to use the language for use-cases where efficiency is important and then help optimize libraries.
      • You’ve even got stupid shit: for example, in garbage-collected languages it has traditionally been considered best practice that, if you’re doing async, you use immutable data types and always create a copy of them when you want to update them. That uses a ton of RAM for stupid reasons.
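      The copy-on-update pattern from that last point, sketched in JavaScript (the state shape is illustrative): every “update” allocates a whole new object, and under load those short-lived copies are exactly what the garbage collector has to churn through.

      ```javascript
      // Immutable-update style common in GC languages (e.g. Redux reducers):
      // state is never mutated, each change allocates a brand-new object.
      const state0 = { count: 0, items: ["a", "b"] };

      // "Updating" copies every top-level field into a fresh allocation.
      const state1 = { ...state0, count: state0.count + 1 };

      // The original is untouched, which is safe for concurrent readers,
      // but two full copies now exist until the GC reclaims the old one.
      console.log(state0.count); // 0
      console.log(state1.count); // 1
      console.log(state0 === state1); // false: distinct allocations
      ```

      Note the spread is shallow: state1.items is the very same array as state0.items; deeply immutable libraries copy (or structurally share) nested data as well.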
      • Blue_Morpho@lemmy.world
        21 hours ago

        This isn’t Reddit. You don’t need to talk in absolutes.

        I haven’t posted anything on Reddit in years. There’s no need to start off a post with insults.

        re: garbage collection

        I wrote Java back in 1997 and the programs used a few megabytes. Garbage collection doesn’t in itself require significantly more RAM, because it only delays the freeing of memory that would have been allocated in a non-garbage-collected language anyway. Syntactic sugar like iterators does not in general save gigabytes of RAM.

        The OP isn’t talking about 500 KB apps now requiring 1 MB. The article talks about formerly 85 KB apps now taking GBs of RAM.

        • Ephera@lemmy.ml
          20 hours ago

          I don’t know what part of that is supposed to be an insult.

          And the article may have talked of such stark differences, but I didn’t. I’m just saying that the resource usage is noticeably lower.