• kakes@sh.itjust.works · 1 year ago

    I think we need to shift our paradigm one or two more times before we can start seriously talking about AGI. Current transformer models are impressive, but they're much better suited to modeling language than to what I would call "cognition".
    I think we're close, but I don't think we'll get there just by scaling or refining current technology.

    • thantik@lemmy.world · 1 year ago

      Hell, honestly, LLMs are already smarter than half of the people I know.