Brin’s “We definitely messed up”, delivered at an AI “hackathon” event on 2 March, followed a slew of social media posts showing Gemini’s image generation tool depicting a variety of historical figures – including popes, founding fathers of the US and, most excruciatingly, German second world war soldiers – as people of colour.

  • Daxtron2@startrek.website · 10 months ago

    It is a pretty silly scenario, lol. I personally don’t really care, but I can understand why they implemented the safeguard, and also why it’s overly aggressive and needs more tuning.

      • Kichae@lemmy.ca · 10 months ago

        If you create an image generator that always returns clean-cut white men whenever you ask it to produce a “doctor” or a “businessman”, but only ever spits out black women when you ask for a picture of someone cleaning, your PR department is going to have a bad time.

      • entropicdrift@lemmy.sdf.org · 10 months ago

        Corporations making AI tools available to the general public are under a ton of scrutiny right now and are kinda in a “damned if you do, damned if you don’t” situation. At the other extreme, if they left it completely uncensored, the big controversial story would be that pedophiles are generating images of child porn or some other equally heinous shit.

        These are the inevitable growing pains of a new industry with a ton of hype and PR behind it.

        • maynarkh@feddit.nl · 10 months ago

          TBH it’s just a byproduct of the “everything is a service, nothing is a product” age of the industry. Because Google runs these tools as services rather than selling them as products, it gets held responsible for whatever random people do with them.