I’m rather curious to see how the EU’s privacy laws are going to handle this.

(Original article is from Fortune, but Yahoo Finance doesn’t have a paywall)

  • Veraticus@lib.lgbt · 10 months ago

    The difference is that LLMs don’t “remember” anything because they don’t “know” anything. They don’t know facts, or English, or that reality exists; they have no internal truths, only a mathematical model of word weights. You can’t ask one to forget information because it holds no information.

    This is obviously quite different from asking a human to forget something: we can identify the information in our brains; it exists there. We simply have no conscious control over whether we remember it.

    The fact that LLMs employ neural networks doesn’t make them like humans or like brains at all.
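The “mathematical model of word weights” claim can be sketched with a toy next-token model (a deliberately crude illustration, not how a real transformer works; the corpus and names here are invented for the example). The model stores only co-occurrence statistics, never a retrievable fact:

```python
from collections import Counter, defaultdict

# Toy "language model": bigram co-occurrence counts. Real LLMs learn
# dense neural weights, but the point is the same -- statistics about
# word sequences, not stored facts that could be looked up or deleted.
corpus = "the cat sat on the mat the cat ate the fish".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(prev):
    # Return the statistically most likely continuation.
    return counts[prev].most_common(1)[0][0]

print(next_word("the"))  # -> "cat": most frequent follower of "the"
```

There is no entry anywhere in `counts` that means “cats exist” or “mats are sat on”; asking this model to “forget” a fact is a category error, which is the commenter’s point scaled down.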

    • SpiderShoeCult@sopuli.xyz · 10 months ago

      I never implied they “remembered”; I asked how you interpret human remembering, since you likened it to a database, which science says it is not. Nor did I make any claims about AI knowing things; you inferred that yourself. I also did not claim they possess any human-like traits. I honestly do not care to speculate.

      The modelling statement speaks to how LLMs came to be and to the intentions of their programmers, and it serves to illustrate my point about how the brain functions.

      My question remains unanswered.

      • Veraticus@lib.lgbt · 10 months ago

        I said:

        No, you knowing your old phone number is closer to how a database knows things than how LLMs know things.

        Which is true. Human memory is more like a database than an LLM’s “memory” is. You have knowledge in your brain that you can consult; a database has data that it can consult. Memory is not a database, but in this sense they are similar: both exist and contain information in some form that can be acted upon.

        LLMs do not have any database, no memories, and contain no knowledge. They are fundamentally different from how humans know anything, and it’s pretty accurate to say LLMs “know” nothing at all.
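The contrast being drawn can be made concrete with a toy sketch (illustrative names only; this is not a claim about how brains or any real LLM are implemented). In a database-like store, a fact is an identifiable record, and “forgetting” it is a well-defined operation; in a trained model there is no single entry to delete:

```python
# Database-like storage: each fact is an identifiable record.
phone_book = {"alice": "555-0100", "bob": "555-0199"}
print(phone_book["alice"])    # the fact is directly retrievable

# "Forgetting" is a well-defined operation on that record...
del phone_book["alice"]
print("alice" in phone_book)  # -> False: the fact is cleanly gone

# ...whereas a trained model holds only blended numeric weights.
# No one weight "is" Alice's number, so there is no record to delete.
weights = [0.12, -0.7, 0.33]  # stand-in for millions of parameters
```

This is the practical bite of the EU question in the original post: a right-to-erasure request maps cleanly onto the first structure and not at all onto the second.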

        • SpiderShoeCult@sopuli.xyz · 10 months ago

          Leaving aside LLMs, the brain is not a database. There is no specific place you can point to and say “there resides the word for orange.” If there were, it would be highly inefficient to assign a spot somewhere for each bit of information (again, not talking about software here, still the brain). And you would then be able to isolate that place, cut it out, and actually make somebody forget both the word and the notion (since we link words with meaning: say “orange” and you think of the fruit, the colour, or perhaps a carrot). If we had a database organized into tables, with “orange” a member of a “colours” table and of another table, “orange things,” then deleting the member “orange” would leave you unable to recognize that carrots nowadays are orange.
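That hypothetical can be sketched directly (purely illustrative names; Python sets stand in for the imagined database tables):

```python
# If word-knowledge lived in tables, deleting one row would
# surgically excise one association -- the commenter's hypothetical.
colours = {"orange", "red", "blue"}
orange_things = {"orange", "carrot", "pumpkin"}

colours.discard("orange")         # "cut out" the colour entry
print("orange" in colours)        # -> False: cleanly forgotten
print("carrot" in orange_things)  # -> True: but now unclassifiable
# as orange, since the colour row is gone. Real brains don't fail
# this way: associations degrade along pathways, not row by row.
```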

          Instead, what happens (for example in people who have a stroke, or who suffer from epilepsy, a misfiring of neurons) is a tip-of-the-tongue phenomenon: they know what they want to say and can recognize notions, but the pathway to the specific word is interrupted and causes a miss, presumably because the brain tries to go down the path it has taken many times for that notion and is blocked. They don’t lose the ability to say their phone number; they might lose the ability to say “four,” and some simply resort to describing the notion instead, say, “the fruit that makes breakfast juice.” Of course, if the damage is severe enough to wipe out a large number of neurons, you lose larger numbers of words.

          Downsides - you cannot learn things instantly, as you could if the brain were a database; that’s why practice makes perfect. You remember your childhood phone number because you repeated it so many times that the link between the relevant neurons became strong enough.

          Upsides - there is more learning capacity if you relate notions and words to each other rather than, for lack of a better term, hardcoding them. Again, not talking about software here.

          Also leads to some funky things like a pencil sharpener being called literally a pencil eater in Danish.

          • Veraticus@lib.lgbt · 10 months ago

            I never said the brain (or memory) was a database. I said it was more like a database than what LLMs have, which is nothing.

            • SpiderShoeCult@sopuli.xyz · 10 months ago

              And human beings are more like a fungus (eukaryotes, saprophytes) than an LLM is; that doesn’t mean we’re mushrooms.

              However, the human brain is more like an LLM than a database, because the LLM was modelled after the human brain. It’s also very similar in the way that nobody actually can tell precisely how it works, for some reason it just does.

              Now, I wouldn’t worry about the philosophical implications for the nature of consciousness and such; we’re a long way off, and we’ll find a way of screwing it up.

              I do question why people are so vehement about pointing out what we “have” and how special we are. Nobody sane is saying LLMs are human consciousness 2.0, so why act threatened?

              • Veraticus@lib.lgbt · 10 months ago

                Lol what the fuck? We know exactly how LLMs work. It’s not magic, and it’s nothing like a human brain. They’re literally word-frequency algorithms. There’s nothing special about them, and I’m the opposite of threatened; I think it’s absurd that people who patently don’t understand them are weighing in on this debate and disagreeing with me, when their position can best be described as ignorant.

                • SpiderShoeCult@sopuli.xyz · 10 months ago

                  I’m just going to leave this here.

                  some random article

                  A quote from the article I found especially interesting:

                  “As a result, no one on Earth fully understands the inner workings of LLMs. Researchers are working to gain a better understanding, but this is a slow process that will take years—perhaps decades—to complete.”

                  Quite an interesting read, and I’m sure you can find others if you want to and try hard enough.

                  • Veraticus@lib.lgbt · 10 months ago

                    This is a somewhat sensationalist and, frankly, uninteresting way to describe neural networks. Obviously it would take years of analysis to understand the weights of each individual node and what they accomplish (if that is even understandable in a way that would make sense to people without very advanced math degrees). But that doesn’t mean we don’t understand the model or what it does. We can and we do.

                    You have misunderstood this article if what you took from it is this:

                    It’s also very similar in the way that nobody actually can tell precisely how it works, for some reason it just does.

                    We do understand how it works as an overall system. Inspecting the individual nodes is as irrelevant to understanding an LLM as cataloguing the trees in a forest is to learning the name of the city the forest is adjacent to.