My title might be a bit hyperbolic, but stuff like this worries me. I love to read, and I love reading on a Kindle. This has been going on for a while, but it has now reached absurd levels.

  • TheTrueLinuxDev@beehaw.org · 2 years ago

    I thought about bringing up technical writing, then I realized that even that job may not be safe within the next five years, considering the promising development of spiking neural networks. At this point I would probably suggest to your daughter that she reconsider her chosen field and try to enter biology or some other stable job.

    • Valmond@beehaw.org · 2 years ago

      And work with AI, not against it. I mean, if AI can quickly produce a filler chapter that can then be tweaked, more time can be spent on making it all come together, etc. Or so I figure.

      • potpie@beehaw.org · 2 years ago

        That’s a really good point. Use the AI to bridge gaps and for short segments. Probably a good way to get around some writer’s block.
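
        For example, a minimal sketch of what that kind of workflow could look like, assuming the OpenAI Python client (the model name, example scenes, and prompt wording are just placeholders):

        ```python
        # Sketch: have a chat model draft a short bridging passage between two
        # scenes, then hand the result to the author for editing. Assumes the
        # OpenAI Python client; model name and example scenes are placeholders.
        from openai import OpenAI

        client = OpenAI()  # reads OPENAI_API_KEY from the environment

        scene_a = "Mara leaves the harbor at dawn, still angry about the letter."
        scene_b = "Three days later she is in the capital, asking for an audience."

        prompt = (
            "Write a rough connecting passage of 150-200 words that bridges these scenes.\n"
            f"Scene A ends with: {scene_a}\n"
            f"Scene B begins with: {scene_b}\n"
            "Keep the tone plain; this is a draft the author will rewrite."
        )

        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[{"role": "user", "content": prompt}],
        )

        # The author edits this draft rather than publishing it verbatim.
        print(response.choices[0].message.content)
        ```

        The point being that the model only produces raw filler; the human still does the stitching and rewriting.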

        • Valmond@beehaw.org · 2 years ago

          Yeah for sure, but someone good at biology can surely handle AI, while other writers might not.

          • jmp242@sopuli.xyz · 2 years ago

            This seems way too STEM-biased, imho. Interacting with ChatGPT isn’t really a technical skill, and editing prose certainly isn’t. I think writers, especially creative writers, would be way ahead on prompts (basically an outline) and on massaging the output into a more cohesive whole. Good writers can probably also discriminate between powerful prose and the overblown, pompous language that GPT sometimes outputs.

            The other thing is that I would hope good writers would never have a filler chapter. I don’t like needlessly padded content of any type, and if I notice it, my ranking of the content goes down.

    • Baggins@beehaw.org · 2 years ago

      Been there, done that. She has her own mind, so I’ll just have to get on board.

      Kids, eh?

    • tanglisha [she/her]@beehaw.org · 2 years ago

      I dunno, people have been trying to automate technical writing for at least 30 years, and the results have been mostly garbage. I’m not sure an LLM is going to understand what’s going on any better than the folks doing this work now; it tends to involve lengthy discussions.

      • TheTrueLinuxDev@beehaw.org · 2 years ago

        There is active research on world models working alongside LLMs. The general idea is that the LLM is used for generating text, while the world model provides more context so the LLM can understand the world.

        • tanglisha [she/her]@beehaw.org · 2 years ago

          When you say “the world”, what do you mean? If it means the actual world, I don’t understand how that would help with technical writing. Plenty of people can get around in the real world but struggle to use Excel.

          • TheTrueLinuxDev@beehaw.org · 2 years ago

            As in the actual world: providing context about the physics of things, logical association and evaluation, and so on. It’s basically something that’s supposed to help the LLM get closer to understanding the “world” rather than just spewing out whatever its training dataset gave it. It does have a direct implication for technical writing, because with a stronger understanding of the things you want to write about, an LLM with a world model could basically auto-fill that.

            This is something researchers are working on pretty much all hands on deck to create.
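
            Very roughly, the shape of the idea looks something like this toy sketch, where a tiny key-value “world model” stands in for the real thing and the LLM is only asked to turn retrieved facts into prose (every name, number, and prompt here is made up for illustration; the actual research systems are far more involved):

            ```python
            # Toy illustration: the "world model" holds structured facts, and the
            # LLM only turns the facts it is handed into prose, instead of relying
            # purely on its training data. All names and numbers are invented.
            from openai import OpenAI

            client = OpenAI()

            # Stand-in world model: structured knowledge about the thing being documented.
            WORLD_MODEL = {
                "max_payload_kg": 12.5,
                "operating_temp_c": (-10, 45),
                "requires_calibration": True,
            }

            def facts_for(topic: str) -> str:
                """Collect the relevant facts from the world model for a given topic."""
                lines = [f"{key} = {value}" for key, value in WORLD_MODEL.items()]
                return f"Known facts about {topic}:\n" + "\n".join(lines)

            def draft_section(topic: str) -> str:
                """Ask the LLM to write prose grounded only in the supplied facts."""
                prompt = (
                    f"{facts_for(topic)}\n\n"
                    "Write a short technical-manual paragraph using only these facts. "
                    "Do not invent numbers that are not listed."
                )
                response = client.chat.completions.create(
                    model="gpt-4o-mini",  # placeholder model name
                    messages=[{"role": "user", "content": prompt}],
                )
                return response.choices[0].message.content

            print(draft_section("the lifting arm"))
            ```

            The hard research part is learning that world model instead of hard-coding it, but the division of labor is the same idea: facts and constraints come from the world model, wording comes from the LLM.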

            One example of the research involving this: