• 37 Posts
  • 23 Comments
Joined 2 years ago
Cake day: June 12th, 2023

  • pips@lemmy.film to Lemmy.World Announcements@lemmy.world · Lemmy World outages
    English · 23 upvotes, 1 downvote · 1 year ago

    I get you. There’s good and bad in law enforcement, especially when it comes to tech and social media. On the one hand, there’s pretty serious crime happening online that needs to be stopped. On the other, wild invasions of privacy. There’s no easy answer at this point and governments obviously won’t police themselves.

  • You’re making a hasty generalization here

    I’m really not, though I’ll readily admit I’m simplifying things. An LLM can only create from what it’s been given. I suppose it could generate a string of characters and assign a definition to it, but that’s not really intentional creation. There are many similarities between how a human generates something and how an LLM does, but to argue they’re the same radically oversimplifies how humans work. We can program an LLM, but we literally do not have the capability to replicate a human brain.

    For example, can you tell me what emotions the LLM had when it produced the output it did? Did its physical condition have any effect? What about its past — not just what it has learned, but how it was treated? What is its motivation? A human response to anything involving creativity factors in many things we aren’t even consciously aware of, and those are things an LLM doesn’t have.

    The study you’re citing is from Google, so there’s likely some bias and selective reporting. That said, we were talking about creativity, not regurgitating facts or analyzing data. I think it’s universally accepted that as the tech gets better, it’s preferable to have a computer make the first attempt at a diagnosis, especially for a scan or large data analysis, and then have a human confirm.

    For the remix example, don’t forget that samples get attribution. Artists credit what they sampled and get called out when they don’t. I’m actually unclear on whether an LLM can even cite how it derived its output, since the coders haven’t revealed whether there’s some sort of derivation log.