• Perspectivist@feddit.uk · 1 day ago

      You can ask it much more complex questions than you can Google, and you can ask follow-up questions too.

        • stray@pawb.social · 1 day ago

          That’s also true of traditional searches, because the resulting webpages can just be whatever bullshit someone wrote. The only thing you know for sure is that they said it. You still have to use your brain to assess the trustworthiness of the info.

          • megopie@beehaw.org · 1 day ago

            They get things wrong at a far higher rate than most of the websites that tend to end up at the top of a web search, and they get things wrong in weird ways that won’t stand out to users the way a shitty website will. These probabilistic text generators are much better at seeming like they have the correct answer than at actually providing it.

        • Chozo@fedia.io · 1 day ago

          For what it’s worth, ChatGPT has gotten better at citing its sources, so it’s easier to fact-check it.

          • rozodru@piefed.social · 1 day ago

            It’s true that it has gotten better with sources. Remembering the context of the conversation, however? Much worse. But I can see the direction OpenAI is trying to take it: short, one-off responses/solutions with little follow-up.

            It is better than Claude though. Claude will just make stuff up or say EVERYTHING is a “known issue” when it isn’t.

        • Perspectivist@feddit.uk · 1 day ago

          No, but you can see if the answer makes sense and then fact-check it using Google if you need to. Which still doesn’t give you a 100% guarantee either.

    • Hirom@beehaw.org · 1 day ago

      Yep, using ChatGPT is a way to increase one’s environmental footprint.

      And the energy cost doesn’t appear to be fully passed on to users yet, as OpenAI isn’t profitable. There are even free LLM services. So users don’t have an incentive to prefer less polluting alternatives, such as classic search engines.

      • megopie@beehaw.org · 1 day ago

        It’s crazy how much money they are losing, and that’s with most of their compute being provided by Microsoft at cost, if not for free, in exchange for the use of their models in Microsoft products.

        Both they and Anthropic talk about their business as if they’re a software as a service company, but most SAS doesn’t get more expensive to run the more users there are, not to mention their conversion rate of free users to payed users is abysmal. Like, it’s an unsalvageable train wreck of a business model, I don’t see ether surviving more than a year unless they radically change their business models.