• utopiah@lemmy.ml
    12 days ago

    Language models on their own do indeed have lots of limitations, however there is a lot of potential in coupling them with other types of expert systems.

    Absolutely, I even have a dedicated section “Trying to insure combinatoriality/compositionality” in my notes on the topic https://fabien.benetou.fr/Content/SelfHostingArtificialIntelligence

    Still, while keeping this in mind, we must also remain mindful of what each system can actually do, and not conflate that with what we WANT it to do but that it cannot do yet, and might never be able to.

    • ☆ Yσɠƚԋσʂ ☆@lemmy.ml
      11 days ago

      Sure, we have to be realistic about the capabilities of different systems. The thing is that we don’t know what the actual limitations are yet. In the past few years we’ve seen huge progress in making language models more efficient and more capable.

      My expectation is that language models, and the whole GPT algorithm, will end up being a building block in more sophisticated systems. We’re already seeing research shift from simply making models bigger to having models do reasoning about the output. I suspect that we’ll start seeing people rediscovering a lot of symbolic logic research that was done back in the 80s.
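      A minimal sketch of what coupling a language model with a symbolic component could look like: a stand-in generator plays the role of the model proposing candidate answers, and a small symbolic evaluator acts as the verifier that checks each proposal before it is accepted. The function names and the stubbed proposals are purely illustrative assumptions, not any real system's API.

```python
import ast
import operator

# Supported binary operators for the tiny symbolic evaluator.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub, ast.Mult: operator.mul}

def symbolic_eval(expr: str) -> int:
    """Evaluate a simple arithmetic expression with a real parser,
    standing in for the 'expert system' side of the coupling."""
    def walk(node):
        if isinstance(node, ast.Constant) and isinstance(node.value, int):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        raise ValueError("unsupported expression")
    return walk(ast.parse(expr, mode="eval").body)

def fake_model_proposals(question: str):
    """Hypothetical stand-in for a language model: yields guesses,
    some wrong, one right. A real system would call an actual model here."""
    yield "41"   # plausible-looking but wrong
    yield "43"   # also wrong
    yield "42"   # correct

def answer(question: str, expr: str):
    truth = symbolic_eval(expr)  # the symbolic side can check ground truth
    for guess in fake_model_proposals(question):
        # Accept only proposals the symbolic verifier confirms.
        if guess.isdigit() and int(guess) == truth:
            return guess
    return None

print(answer("What is 6 * 7?", "6 * 7"))  # -> 42
```

      The point of the sketch is the division of labour: the generator is free to be fallible, because nothing it says is trusted until the symbolic component has checked it.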

      The overall point here is that we don’t know what the limits of this tech are, and the only way to find out is to continue researching it, and trying new things. So, it’s clearly not a waste of resources to pursue this. What makes this the most important race isn’t what it’s delivered so far, but what it has potential to deliver.

      If we can make AI systems that are capable of doing reasoning tasks in a sufficiently useful fashion, that would be a game changer, because it would allow automating tasks that fundamentally could not be automated before. It’s also worth noting that reasoning isn’t a binary thing where it’s either correct or wrong. Humans are notorious for making logical errors, and most can’t do formal logic to save their lives. Yet most humans can reason about the tasks they need to complete in their daily lives sufficiently well to function. We should apply the same standard to AI systems: the system just needs to function well enough to accomplish tasks within the domain it’s being used in.