- cross-posted to:
- hackernews@lemmy.smeargle.fans
There is a discussion on Hacker News, but feel free to comment here as well.
[Double reply to avoid editing my earlier comment]
From the HN thread:
I think that the first sentence is accurate, but I disagree with the second one.
Probabilistic likelihood alone is not enough to create a good illusion of understanding or intelligence. Relying on it leads to situations like the one in the OP, where the bot outputs nonsense when given an unexpected prompt.
To avoid that, the model would need some symbolic (or semantic, or conceptual) layer(s), and would have to handle the concepts conveyed by the tokens, not just the tokens themselves. But that is already closer to intelligence than to probabilistic likelihood.
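To make the point concrete, here is a minimal sketch of what pure likelihood-based generation looks like. It assumes a toy bigram table with made-up tokens and probabilities (all hypothetical, just for illustration); the model picks the next token purely by probability, with no concept layer to fall back on, so an out-of-distribution prompt immediately produces nonsense:

```python
import random

# Toy "language model": next-token probabilities derived purely from
# co-occurrence statistics, with no notion of what the tokens mean.
# The tokens and probabilities here are hypothetical, for illustration only.
bigram_probs = {
    "the": {"cat": 0.5, "dog": 0.4, "flurble": 0.1},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"sat": 0.4, "ran": 0.6},
    "sat": {"down": 1.0},
    "ran": {"away": 1.0},
}

def next_token(token: str) -> str:
    """Sample the next token by likelihood alone."""
    dist = bigram_probs.get(token)
    if dist is None:
        # Unexpected prompt: there is no conceptual layer to reason with,
        # so the model can only fail or emit something arbitrary.
        return "<nonsense>"
    tokens, weights = zip(*dist.items())
    return random.choices(tokens, weights=weights)[0]

print(next_token("the"))      # plausible-looking continuation
print(next_token("flurble"))  # out-of-distribution -> "<nonsense>"
```

A real LLM is vastly bigger and conditions on long contexts, but the failure mode in the OP is the same in spirit: when the prompt falls outside what the statistics cover, likelihood alone has nothing to anchor the output to.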