• huginn · 11 months ago

    Unless you want to call the predictive text on your keyboard a mind, you really can’t call an LLM a mind. It is nothing more than a linear progression from that, and it has been mathematically proven not to show any form of emergent behavior.

    • Kogasa@programming.dev · 11 months ago

      No such thing has been “mathematically proven.” The emergent behavior of ML models is their notable characteristic. The whole point is that their ability to do anything is emergent behavior.

      • huginn · 11 months ago

        Here’s a white paper explicitly proving:

        1. No emergent properties (illusory due to bad measures)
        2. Predictable linear progress with model size

        https://arxiv.org/abs/2304.15004

        The field changes fast; I understand it is hard to keep up.

        • Kogasa@programming.dev · 11 months ago

          Sure, if you define “emergent abilities” just so. It’s obvious from context that this is not what I described.

          • huginn · 11 months ago

            Their paper uses industry-standard definitions.

            • Kogasa@programming.dev · 11 months ago

              Their paper uses terminology that makes sense in context. It’s not a definition of “emergent behavior.”

    • MxM111@kbin.social · 11 months ago

      I do not think it is a “linear” progression; an ANN is by definition nonlinear. Nor do I think anything has been “mathematically proven”. If I am wrong, please provide a link.

      • huginn · 11 months ago

        Sure thing: here’s a white paper explicitly proving:

        1. No emergent properties (illusory due to bad measures)
        2. Predictable linear progress with model size

        https://arxiv.org/abs/2304.15004

        • MxM111@kbin.social · 11 months ago

          Thank you. This paper, though, does not state that there are no emergent abilities. It only states that one can introduce a metric with respect to which the emergent ability behaves smoothly rather than in a threshold-like way. While interesting, that only suggests that things like intelligence are smooth functions, but so what? Other metrics show exponential or threshold dependence, and which metric is right depends only on how one will use it. And there is no law that emergent properties have to be threshold-like. Quite the opposite: in nearly all the examples of emergence I know from physics, the emergence appears gradually.

    • General_Effort@lemmy.world · 11 months ago

      It is obvious that you do not know what either “mathematical proof” or “emergence” means. Unfortunately, you are misrepresenting the facts.

      I don’t mean to criticize your religious (or philosophical) convictions. There is a reason people mostly try to keep faith and science separate.

      • huginn · 11 months ago

        Here’s a white paper explicitly proving:

        1. No emergent properties (illusory due to bad measures)
        2. Predictable linear progress with model size

        https://arxiv.org/abs/2304.15004

        The field changes fast; I understand it is hard to keep up.

          • huginn · 11 months ago

            1. Emergence is the whole being greater than the sum of its parts. That’s the original meaning of emergent properties, which is laid out in the first paragraph of the article. It’s the scholarly usage as well, and what the claims of observed emergence are using as the base of their claim.

            2. The article very explicitly demonstrated that only about 10% of the measures for LLMs displayed any emergence, and that the illusory emergence was the result of overly rigid metrics. Swapping to edit distance as an approximately continuous metric makes the sharp spikes disappear, for obvious reasons: without a sharp yes/no cutoff, the linear progression reappears. It was always there, merely masked by flawed statistics.

            If you can’t be bothered to read, here’s an easy-to-understand video by one of the authors: https://www.youtube.com/watch?v=ypKwNrmuuPM
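The metric-choice argument above can be sketched numerically. This is a toy illustration only, not taken from the thread or the paper: assume, hypothetically, that a model’s per-token accuracy improves linearly with log model size. An all-or-nothing exact-match score on a multi-token answer then appears to jump suddenly at a large scale, while a partial-credit, edit-distance-like score stays smooth the whole way.

```python
# Toy sketch of the "emergence is a metric artifact" argument (arXiv:2304.15004).
# Assumption (hypothetical): per-token accuracy p rises linearly with log10 of
# the parameter count. Exact match on a 30-token answer scores p**30, which
# looks like a sharp "emergent" jump; a token-level partial-credit score of p
# itself stays smooth and linear.

ANSWER_LEN = 30  # hypothetical answer length in tokens


def per_token_accuracy(log_params: float) -> float:
    """Toy assumption: accuracy grows linearly in log10(params), clamped to [0, 1]."""
    return min(1.0, max(0.0, 0.1 * log_params))


for log_n in range(6, 11):            # models from 1e6 to 1e10 parameters
    p = per_token_accuracy(log_n)
    exact_match = p ** ANSWER_LEN     # harsh all-or-nothing metric
    partial_credit = p                # edit-distance-like smooth metric
    print(f"1e{log_n} params: token acc={partial_credit:.2f}, "
          f"exact match={exact_match:.6f}")
```

Under these made-up numbers, the partial-credit score climbs steadily from 0.60 to 1.00, while exact match stays near zero until the very largest model, mimicking how a rigid metric can manufacture an apparent threshold from smooth underlying progress.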