• i_stole_ur_taco@lemmy.ca
    7 months ago

    It’s a little worrisome, actually. Professionally written software still needs a human to verify things are correct, consistent, and safe, but the tasks we used to foist off on more junior developers are being increasingly done by AI.

    Part of that is fine - offloading minor documentation updates and “trivial” tasks to AI is easy to do and review while remaining productive. But it comes at the expense of the next generation of junior developers being deprived of tasks that are valuable for them to gain experience to work towards a more senior level.

    If companies lean too hard into that, we’re going to have serious problems when this generation of developers starts retiring and the next generation is understaffed, underpopulated, and probably underpaid.

    • frog 🐸@beehaw.org
      7 months ago

      AI is also going to run into a wall because it needs continual updates with more human-made data, but the supply of all that is going to dry up once the humans who create new content have been driven out of business.

      It’s almost like AIs have been developed and promoted by people who have no ability to think about anything but their profits for the next 12 months.

      • greenskye@lemm.ee
        7 months ago

        I just tend to think of it as the further enshittification of life. I’m not even that old and it’s super obvious how poorly most companies are actually run these days, including my own. It’s not that we’re doing more with less, it’s a global reduction in standards and expectations. Issues that used to be solved in a day now bounce between a dozen different departments staffed with either a handful of extremely overworked people, complete newbies, or clueless contractors. AI is just going to further cement the shitty new standard both inside and outside the company.

        • frog 🐸@beehaw.org
          7 months ago

          Yep. Life does just seem… permanently enshittified now. I honestly don’t see it ever getting better, either. AI will just ensure it carries on.

      • HobbitFoot @thelemmy.club
        7 months ago

        It looks like we are already at the point with some AI where we can correct the output instead of adding new input. Microsoft is using LinkedIn to help get professional input for free.

        • frog 🐸@beehaw.org
          7 months ago

          But this is the point: the AIs will always need input from some source or another. Consider using AI to generate search results. Those will need to be updated with new information and knowledge, because an AI that can only answer questions related to things known before 2023 will very quickly become obsolete. So it must be updated. But AIs do not know what is going on in the world. They have no sensory capacity of their own, and so their inputs require data that is ultimately, at some point in the process, created by a human who does have the sensory capacity to observe what is happening in the world and write it down. And if the AI simply takes that writing without compensating the human, then the human will stop writing, because they will have had to get a different job to pay for food, rent, etc.

          No amount of “we can train AIs on AI-generated content” is going to fix the fundamental problem that the world is not static and AIs don’t have the capacity to observe what is changing. They will always be reliant on humans. Taking human input without paying for it disincentivises humans from producing content, and this will eventually create problems for the AI.
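
          The failure mode of training on self-generated data can be sketched with a toy simulation (this is a deliberately simplified illustration, not how any real model is actually trained): treat the “model” as a fitted Gaussian, and at each generation fit it only to samples drawn from the previous generation’s fit. Estimation noise compounds, and the fitted distribution tends to narrow over generations instead of tracking the original data.

          ```python
          # Toy sketch of a self-consuming training loop ("model collapse"):
          # each generation is trained only on the previous generation's output.
          import random
          import statistics

          def self_training_loop(n_samples=30, n_generations=500, seed=0):
              rng = random.Random(seed)
              # Generation 0: "human-made" data from a standard normal distribution.
              data = [rng.gauss(0.0, 1.0) for _ in range(n_samples)]
              initial_std = statistics.stdev(data)
              for _ in range(n_generations):
                  # Fit a Gaussian "model" to the current training set...
                  mu = statistics.mean(data)
                  sigma = statistics.stdev(data)
                  # ...then train the next generation only on that model's output.
                  data = [rng.gauss(mu, sigma) for _ in range(n_samples)]
              return initial_std, statistics.stdev(data)

          if __name__ == "__main__":
              before, after = self_training_loop()
              print(f"spread of the original human data: {before:.3f}")
              print(f"spread after 500 self-trained generations: {after:.3f}")
          ```

          With no fresh outside data entering the loop, the variance of the fitted distribution drifts downward across generations, which is the statistical version of the point above: without ongoing human input, the model stops reflecting the world.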

          • pbjamm@beehaw.org
            7 months ago

            “we can train AIs on AI-generated content”

            and 20yrs from now polydactylism will be the new human beauty standard

            • frog 🐸@beehaw.org
              7 months ago

              The scales of the two are nowhere near comparable. A human can’t steal and regurgitate so much content that they put millions of other humans out of work.

    • burningmatches@feddit.uk
      7 months ago

      It’s the same in many fields. Trainees learn by doing the easy, repetitive work that can now be automated.

      • frog 🐸@beehaw.org
        7 months ago

        Yep. I used to be an accountant, and that’s how trainees learn in that field too. The company I worked at had a fairly even split between clients with manual and computerised records, and trainees always spent the first year or so almost exclusively working on manual records, because that was how you learned to recognise when something had gone wrong in the computerised records, which would always look “right” at first glance.

      • supersquirrel@sopuli.xyz
        6 months ago

        I get sooooo much schadenfreude from programmers smugly acting like their jobs aren’t going to be obliterated by AI… because the AI won’t be able to do the job correctly, as if that matters in this late stage of collapse and end-state capitalism.

        Y’all (programmers and tech people) cheered this on and facilitated the ruling class destroying countless decent, good careers, and now it is everybody else’s turn to laugh at programmers as they go from having one of the few non-dysfunctional careers left to being worthless ChatGPT prompt monkeys who can never convince management they are valuable and not just a subpar, expensive alternative to “AI”.

        This is going to be awful, but that doesn’t mean I can’t find the silver linings!

        Maybe if programming wasn’t full of overconfident, naive, libertarian-adjacent people, y’all could have stopped this by unionizing, but again… just check Hacker News and all the bootlicking for the ruling class there to see why that didn’t happen lol.