I’ve seen a few articles claiming that, instead of hating AI, the real quiet programmers, young and old, are loving it and have a renewed sense of purpose coding with LLM helpers (this article was also hating on Ed Zitron, which makes sense given its angle).

Is this total bullshit? I have to admit, even though it makes me ill, I’ve used LLMs a few times to help me learn simple code syntax quickly (I’m an absolute noob who’s wanted my whole life to learn code but can’t grasp it very well). But yes, a lot of the time it’s wrong.

  • Quibblekrust@thelemmy.club · 19 hours ago

    I don’t see how it could be more efficient to have [a junior developer write] something that you then have to review and make sure actually works over just writing the code yourself…

    • iglou@programming.dev · 13 hours ago
      1. A junior dev won’t be a junior dev their whole career, and code reviews also educate them
      2. You can’t trust the quality of a junior’s work, but you can trust that they are able to understand the project and their role in it. LLMs are by definition unable to think and understand. Just pretty good at pretending they are. Which leads to the third point:
      3. When you “vibe code”, you don’t “just” have to review the produced code, you also have to constantly tell the LLM what you want it to do. And fight with it when it fucks up.
      • Pup Biru@aussie.zone · 12 hours ago

        if the only point of hiring junior devs were to skill them up so they’d be useful in the future, nobody would hire junior devs

        LLMs aren’t the brain: they’re exactly what they are… a fancy auto complete…

        type a function header, let it fill the body… as long as you’re descriptive enough and the function is simple enough to understand (as all well structured code should be) it usually gets it pretty right: it’s somewhat of a substitute for libraries, but not for your own structure
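        As a minimal sketch of what that workflow looks like (the function here is invented for illustration): a signature and docstring this descriptive are usually enough context for an autocomplete-style LLM to produce a body close to the one below.

```python
# Hypothetical example: a descriptive signature and docstring give an
# LLM autocomplete enough context to fill in a simple body like this one.
def count_words(text: str) -> dict[str, int]:
    """Return a mapping of each whitespace-separated word to its count."""
    counts: dict[str, int] = {}
    for word in text.split():
        counts[word] = counts.get(word, 0) + 1
    return counts
```

        The point is that the human still writes the structure (the signature, the contract); only the mechanical body is delegated.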

        let it generate unit tests: doesn’t matter if it gets it wrong because the test will fail; it’ll write a pretty solid test suite using edge cases you may have forgotten
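        As a sketch of the kind of edge-case coverage meant here (both the function and the suite are invented for illustration):

```python
# Hypothetical function under test.
def clamp(value: int, low: int, high: int) -> int:
    """Constrain value to the inclusive range [low, high]."""
    return max(low, min(high, value))

# The sort of boundary cases a generated suite tends to enumerate:
def test_clamp():
    assert clamp(5, 0, 10) == 5     # in range
    assert clamp(-1, 0, 10) == 0    # below the range
    assert clamp(11, 0, 10) == 10   # above the range
    assert clamp(0, 0, 10) == 0     # exactly at low
    assert clamp(10, 0, 10) == 10   # exactly at high
```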

        fill lines of data based on other data structures: it can transform text quicker than you can write regex and i’ve never had it fail at this

        let it name functions based on a description… you can’t think of the words, but an LLM has a very wide vocabulary and - whilst not knowledge - does have a pretty good handle on synonyms and summary etc

        there’s loads of things LLMs are good for, but unless you’re just learning something new and you know your code will be garbage anyway, none of those things replace your brain: just repetitive crap you probably hate to start with, because you could explain it to a non-programmer and they could carry out the tasks

        • iglou@programming.dev · 10 hours ago

          if the only point of hiring junior devs were to skill them up so they’d be useful in the future, nobody would hire junior devs

          I never said that, and a single review already will make a junior dev better off the bat

          LLMs aren’t the brain: they’re exactly what they are… a fancy auto complete

          I agree, but then you say…

          type a function header, let if fill the body… as long as you’re descriptive enough and the function is simple enough to understand (as all well structured code should be) it usually gets it pretty right: it’s somewhat of a substitute for libraries, but not for your own structure

          …which says the opposite. Implementing a function isn’t a job for a “fancy autocomplete”, it’s a job for a brain. Unless all you do is reinvent the wheel, in which case, yeah, it can generate a decent wheel for you.

          let it generate unit tests: doesn’t matter if it gets it wrong because the test will fail; it’ll write a pretty solid test suite using edge cases you may have forgotten

          Fuck no. If it gets the test wrong, it won’t necessarily fail. It might very well pass even when it should fail, and that’s something you won’t know unless you review every single line it spits out. That’s one of the worst areas to use an LLM.
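          A minimal, invented sketch of that failure mode: a generated test that encodes the same wrong assumption as the buggy code will pass and hide the bug.

```python
# Hypothetical buggy implementation: divides by 10 instead of 100.
def apply_discount(price: float, percent: float) -> float:
    return price - price * percent / 10

# A generated test that mirrors the bug passes silently...
def test_apply_discount():
    assert apply_discount(100.0, 10.0) == 0.0  # should be 90.0 for a 10% discount

test_apply_discount()  # no failure raised, so the bug goes undetected
```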

          fill lines of data based on other data structures: it can transform text quicker than you can write regex and i’ve never had it fail at this

          I’m not sure what you mean by that.

          let it name functions based on a description… you can’t think of the words, but an LLM has a very wide vocabulary and - whilst not knowledge - does have a pretty good handle on synonyms and summary etc

          I agree with that, naming or even documenting is a good way to use an LLM. With supervision of course, but an imprecise name or documentation is not critical.

          • Quibblekrust@thelemmy.club · 9 hours ago

            fill lines of data based on other data structures: it can transform text quicker than you can write regex and i’ve never had it fail at this

            I’m not sure what you mean by that.

            Not speaking for them, but I use LLMs for this. You have lines of repetitive code, and you realize you need to swap the order of things within each line. You could brute force it, or you could write a regex search/replace. Instead, you tell the LLM to do it and it saves a lot of time.

            Swapping the order of things is just one example. It can change capitalization, insert values, or generate endless amounts of mock data.
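            For a concrete (invented) example, the regex version of such a swap looks like this; handing the same instruction to an LLM skips writing the pattern:

```python
import re

# Swap "value, key" lines into "key: value" form with a regex search/replace.
lines = ["42, answer", "7, dwarves"]
swapped = [re.sub(r"^(\w+), (\w+)$", r"\2: \1", line) for line in lines]
# swapped == ["answer: 42", "dwarves: 7"]
```

            For two lines the regex is trivial; the time saving shows up when the lines are messy or the transform is hard to express as a pattern.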

              • Quibblekrust@thelemmy.club · 8 hours ago

                I was tasked once with writing a front-end for an API that didn’t exist yet, but I had a model. I could have written a loop that generated “Person Man 1”, “Person Man 2”, etc. with all of the associated fields, but instead I gave the LLM my class definition and it spat out 50 people with unique names, phone numbers, emails, and everything. It made it easy to test the paging and especially the filtering. It also took like 30 seconds to ask for and receive.
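                The brute-force loop described above might look like this (the Person class is a hypothetical stand-in for the actual model):

```python
from dataclasses import dataclass

# Hypothetical stand-in for the real class definition.
@dataclass
class Person:
    name: str
    email: str
    phone: str

# The brute-force alternative: 50 records, but every one looks alike,
# which makes name filtering and search hard to exercise realistically.
people = [
    Person(name=f"Person Man {i}",
           email=f"person{i}@example.com",
           phone=f"555-01{i:02d}")
    for i in range(1, 51)
]
```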

                I originally asked it to make punny names based on celebrities, and it said “I can’t do that.” ☹️

        • leftzero@lemmy.dbzer0.com · 12 hours ago

          They’ll never be able to learn, though.

          An LLM is merely a statistical model of its training material: very well indexed, but extremely lossy, compression.

          It will always be outdated. It can never become familiar with your codebase and coding practices. And it’ll always be extremely unreliable, because it’s just a text generator without any semblance of comprehension about what the texts it generates actually mean.

          All it’ll ever be able to do is reproduce the standards as they were when its training model was captured.

          If we are to compare it to a junior developer, it’d be someone who suffered a traumatic brain injury just after leaving college, which prevents them from ever learning anything new, makes them unaware that they can’t learn, and incapable of realising when they don’t know something, makes them unable to reason or comprehend what they are saying, and causes them to suffer from verbal diarrhoea and excessive sycophancy.

          Now, such a tragically brain damaged individual might look like the ideal worker to the average CEO, but I definitely wouldn’t want them anywhere near my code.