• bitjunkie@lemmy.world · +8/-1 · 3 days ago

    I’m not sure that “not bullshitting” should be a strict criterion for AGI, if whether it’s been achieved is gauged by its capacity to mimic human thought.

    • finitebanjo@lemmy.world · +17/-2 · 3 days ago

      LLMs aren’t bullshitting. They can’t lie, because they have no concepts at all. To the machine, the words are all just numerical values with no meaning at all.
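      To make “words are just numerical values” concrete, here’s a toy sketch. The vocabulary, IDs, and vectors are all made up; real models use learned tokenizers and embeddings with thousands of dimensions:

```python
# Toy illustration, not a real model: to an LLM, a word is just an ID
# pointing at a row of learned floating-point numbers (an embedding).
toy_vocab = {"pie": 4812, "cake": 9304, "flour": 773}

toy_embeddings = {
    4812: [0.12, -0.48, 0.91],  # "pie"
    9304: [0.10, -0.45, 0.88],  # "cake" -- numerically close to "pie"
    773:  [-0.63, 0.22, 0.05],  # "flour"
}

def encode(text):
    """Turn a space-separated string into token IDs, like a tokenizer would."""
    return [toy_vocab[w] for w in text.split()]

print(encode("pie cake"))  # prints [4812, 9304]
```

      Nothing in those numbers “means” pie; the model only ever sees IDs and vectors.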

      • 11111one11111@lemmy.world · +11/-2 · edited · 3 days ago

        Just for the sake of playing a stoner-epiphany style of devil’s advocate: how does that differ from how actual logical arguments are proven? Hell, why stop there. Is there a single thing in the universe that can’t be broken down into a mathematical equation for physics or chemistry? I’m curious how different the process of a more advanced LLM or AGI model processing data is compared to a severe-case savant memorizing libraries of books using their homemade mathematical algorithms. I know it’s a leap and I could be wrong, but I thought I’ve heard that some of the Rain Man tier of savants actually process every experience in a mathematical language.

        Like I said in the beginning, this is straight-up bong-rips philosophy, and I haven’t looked up any of the shit I brought up.

        I will say tho, I genuinely think the whole LLM shit is without a doubt one of the most amazing advances in technology since the internet. With that being said, I also agree that it has a niche it will be useful in and isolated to. The problem is that everyone and their slutty mother investing in LLMs are using them for everything they are not useful for, and we won’t see any effective use of AI services until all the current idiots realize they poured hundreds of millions of dollars into something that can’t perform any more independently than a 3-year-old.

        • finitebanjo@lemmy.world · +4/-2 · edited · 3 days ago

          First of all, I’m about to give the extremely dumbed-down explanation, but there are actual academics covering this topic right now, usually using keywords like AI “emergent behavior” and “overfitting”. More specifically, about how emergent behavior doesn’t really exist in certain model archetypes, and how overfitting increases accuracy but effectively makes the model more robotic and useless. There are also studies of how humans think.

          Anyways, humans don’t assign numerical values to words and phrases for the purpose of making a statistical model of a response to a statistical-model input.

          Humans suck at math.

          Humans store data in a much messier, unorganized way, and retrieve it by tracing stacks of related concepts back to the root, or fail to memorize the data altogether. The values are incredibly diverse and have many attributes to them. Humans do not hallucinate entire documentation or describe company policies that don’t exist to customers, because we understand the branching complexity and nuance of each individual word and phrase. For a human to describe procedures or creatures that do not exist, we would have to be lying for some perceived benefit such as entertainment, unlike an LLM, which meant that shit it said but just doesn’t know any better. Just doesn’t know, period.

          Maybe an LLM could approach that at some scale if each word had its own model with massively more data, but given the diminishing returns displayed so far as we feed in more and more processing power, that would take more money and electricity than has ever existed on earth. In fact, that aligns pretty well with OpenAI’s statement that it could make an AGI if it had trillions of dollars to spend and years to spend it. (They’re probably underestimating the costs by orders of magnitude.)

          • naught101@lemmy.world · +1 · 2 days ago

            emergent behavior doesn’t really exist in certain model archetypes

            Hey, would you have a reference for this? I’d love to read it. Does it apply to deep neural nets? And/or recurrent NNs?

            • finitebanjo@lemmy.world · +1 · edited · 2 days ago

              There is this 2023 study from Stanford which argues that AI likely does not have emergent abilities: LINK

              And there is this 2020 study by… OpenAI… which states that the error rate is predictable from three factors, and that AI cannot cross below that line or approach a 0% error rate without exponentially increasing costs several iterations beyond current models, lending to the idea that they’re predictable to a fault: LINK

              There is another paper by DeepMind in 2022 that comes to the conclusion that even at infinite scale the loss can never drop below an irreducible error of 1.69: LINK
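              For reference, the shape of that result can be sketched with a Chinchilla-style scaling-law fit, L(N, D) = E + A/N^α + B/D^β. The coefficients below are the commonly cited published fits, used here only as an illustration of the floor, not as exact numbers:

```python
def scaling_loss(n_params, n_tokens,
                 E=1.69, A=406.4, B=410.7, alpha=0.34, beta=0.28):
    """Predicted training loss: an irreducible floor E plus two terms
    that shrink as parameter count (N) and training tokens (D) grow."""
    return E + A / n_params**alpha + B / n_tokens**beta

# Scaling up shrinks the loss, but it only ever approaches E = 1.69.
print(scaling_loss(1e12, 1e13) > 1.69)  # prints True
```

              No finite amount of parameters or data drives the first term to zero faster than the floor E allows, which is the “irreducible error” the paper describes.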

              This all lends weight to the idea that AI lacks the same emergent behavior found in human language.

          • 11111one11111@lemmy.world · +5/-1 · 3 days ago

            So that doesn’t really address the concept I’m questioning. You’re leaning hard into the fact that the computer is using numbers in place of words, but I’m saying: why is that any different than assigning your native language to a book written in a foreign language? The vernacular, language, formula, or code being used to formulate a thought shouldn’t determine whether something was a legitimate thought.

            I think the gap between our reasoning is a perfect example of why I think FUTURE models could bridge that difference (wanna be real clear, this is an entirely hypothetical assumption that LLMs will continue improving).

            What I mean is, you can give 100 people the same problem and come out with 100 different cognitive pathways being used to come to a right or wrong solution.

            When I was learning to play the trumpet in middle school, and later learned the guitar and drums, I was told I did not play instruments like most musicians. Use that term super fuckin’ loosely, I am very bad lol. The reason was that I do not have an ear for music: I can’t listen and tell you something is in tune or out of tune by hearing a song played, but I could tune the instrument just fine if an in-tune note is played for me to match. My instructor explained that I was someone who read music the way others read words, except instead of words I read the notes as numbers, especially once I got older and learned the guitar. I knew how to read music at that point, but to this day I can’t learn a new song unless I read the guitar tabs, which are literal numbers on a guitar fretboard instead of an actual scale.

            I know I’m making huge leaps here and I’m not really trying to prove any point. I just feel strongly that at our most basic core, a human’s understanding of their existence is derived from “I think, therefore I am,” which in itself is nothing more than an electrochemical reaction between neurons that either release something or receive something. We are nothing more than a series of PLC commands on a CNC machine. No matter how advanced we are capable of being, we are nothing but a complex series of on and off switches that theoretically could be emulated into operating on an infinite string of commands spelled out by 1’s and 0’s.

            Im sorry, my brother prolly got me way too much weed for Xmas.

            • finitebanjo@lemmy.world · +1/-2 · edited · 3 days ago

              98% and 98% are identical values, but the machine can use the same value to describe two separate words’ accuracy.

              It doesn’t have languages. It’s not emulating concepts. It’s emulating statistical averages.

              “pie” to us is a delicious dessert with a variety of possible fillings.

              “pie” to an llm is 32%. “cake” is also 32%. An LLM might say Cake when it should be Pie, because it doesn’t know what either of those things are aside from their placement next to terms like flour, sugar, and butter.
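              A minimal sketch of what that looks like under the hood (the words and scores here are invented; a real model computes a softmax over its entire vocabulary):

```python
import math

# Hypothetical raw scores for the next token after "flour, sugar, butter and".
logits = {"pie": 2.1, "cake": 2.1, "bread": 1.3, "the": 0.2}

def softmax(scores):
    """Convert raw scores into probabilities that sum to 1."""
    total = sum(math.exp(v) for v in scores.values())
    return {k: math.exp(v) / total for k, v in scores.items()}

probs = softmax(logits)
# "pie" and "cake" come out with identical probabilities, so the model
# has no basis to prefer one beyond these numbers.
print(probs["pie"] == probs["cake"])  # prints True
```

              From the model’s side there is nothing else behind either word: picking one over the other is just sampling from that distribution.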

              • 11111one11111@lemmy.world · +2/-1 · 3 days ago

                So by your logic a child locked in a room with no understanding of language is not capable of thought? All of your reasoning for why computers aren’t generating thoughts mirrors actual psychological case studies taught in the abnormal psychology course I took in high school back in 2005. You don’t even have to go that far into the abnormal portion of it, either. I’ve never sat in on my buddy’s daughter’s “classes,” but she is 4 years old now and on the autism spectrum. She is doing wonderfully since she started with the special-ed preschool program she’s in, but at 4 years old she still cannot speak and she is still in diapers. I’m not saying this to say she’s really bad or far along on the spectrum; I’m using this example because it’s exactly what you are outlining. She isn’t a dumb kid by any means. She’s 100x more athletic and coordinated than any other kid I’ve seen her age. What my buddy was told (and once he told me, I noticed it immediately) is that autistic babies don’t have the ability to mimic what the humans around them are doing. I’m talking not even the littlest thing, like learning how to smile or laugh by seeing a parent smiling at them. It was so tough on my dude, watching him work like it meant life or death trying to get his daughter to wave back when she was a baby, cuz it was the first test they told him they would do to try and diagnose why his daughter wasn’t developing like other kids.

                Fuck, my bad, I went full-tailspin tangent there, but what I mean to say is: who are we to determine what defines a generated independent thought, when the industry of doctors, educators, and philosophers hasn’t gotten all that far in understanding our own cognizant existence past “I think, therefore I am”?

                People like my buddy’s daughter could go their entire life as a burden of the state, incapable of caring for themselves, and some will never learn to talk well enough to give any insight into the thoughts being processed behind their curtains. So why is the argument always pointing toward the need for language to prove thought and existence?

                Like I said in my other comment, I’m not trying to prove or argue any specific point. This shit is just wildly interesting to me. I worked for years in a low-income nursing home that catered to residents who were considered burdens of the state after NY closed the doors on psychological institutions everywhere, which pushed anyone under 45 y/o to the streets and anyone over 45 into nursing homes. So there were so many, excuse the crass term but it’s what they were, brain-dead former drug addicts or brain-dead Alzheimer’s residents, all of whom spent the last decades of their life mumbling, incoherent, and staring off into space with no one home. Were they still humans capable of generative intelligence cuz every 12 days they’d reach a hand up and scratch their nose?

                • finitebanjo@lemmy.world · +1 · edited · 3 days ago

                  IDK what you dudes aren’t understanding, tbh. To the LLM every word is a fungible statistic. To the human every word is unique. It’s not a child; its hardware and programming are worlds apart.

        • lad@programming.dev · +2/-1 · 3 days ago

          I’d say the difference between nature boiling down to maths and LLMs boiling down to maths is that in LLMs it’s not the knowledge itself that is abstracted, it’s language. This makes it both more believable to us humans, because we’re wired to use language, and less suitable for actually achieving something, because it’s just language all the way down.

          Would be nice if it gets us something in the long run, but I wouldn’t get my hopes up.

          • 11111one11111@lemmy.world · +4 · 3 days ago

            I’m super stoked now to follow this, and to also follow the progress being made mapping the neurological pathways of the human brain. I wanna say I saw an article on Lemmy recently where they mapped the entire network of neurons in either an insect or a mouse, I can’t remember. So I’m gonna assume like 3-5 years until we can map out human brains and know exactly what is firing off which brain cells as someone is doing puzzles in real time.

            I think it would be so crazy cool if we get to a point where the understanding of our cognitive processes is so detailed that scientists are left with nothing but faith as their only way of defining the difference between a computer processing information and a person. Obviously the subsequent dark ages that follow will suck after all people of science snap and revert into becoming idiot priests. But that’s a risk I’m willing to take. 🤣🤣🍻

            • lad@programming.dev · +1 · 3 days ago

              Maybe a rat-brain project? I think the mapping of a human may take longer, but yeah, once it happens interesting times are on the horizon.

              • 11111one11111@lemmy.world · +1 · 2 days ago

                For fucking reals. Sign me up to get scanned while taking every drug imaginable!!! I would love to see, for example, why cocaine for me with my ADHD has like none of the effects other people get. My buddy rips a line: “I’M ON TOP OF THE WORLD.” I rip a line: “fuck, I should prolly do my taxes.”