I hear people saying things like “ChatGPT is basically just fancy predictive text”. I’m certainly not in the “it’s sentient!” camp, but it seems pretty obvious that a lot more is going on than just predicting the most likely next word.

Even if it’s predicting word by word within a bunch of constraints and structures inferred from the question / prompt, that’s pretty interesting. Tbh, I’m more impressed by ChatGPT’s ability to appear to “understand” my prompts than I am by the quality of the output. Even though its writing is generally a mix of bland, obvious and inaccurate, it mostly does provide a plausible response to whatever I’ve asked / said.
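For what it’s worth, the bare “predict the next word” idea can be sketched in a few lines. This is a toy bigram model, nothing like the real transformer underneath ChatGPT (which conditions on the whole prompt via learned attention, not a lookup table), and all the words and probabilities here are invented for illustration:

```python
import random

# Toy "language model": for each word, a made-up probability
# distribution over which word comes next.
BIGRAMS = {
    "the": {"cat": 0.5, "dog": 0.3, "moon": 0.2},
    "cat": {"sat": 0.6, "ran": 0.4},
    "dog": {"ran": 0.7, "sat": 0.3},
    "moon": {"rose": 1.0},
    "sat": {}, "ran": {}, "rose": {},
}

def generate(start, max_words=5, rng=None):
    """Generate text one word at a time by repeatedly sampling
    from the next-word distribution for the latest word."""
    rng = rng or random.Random(0)
    words = [start]
    for _ in range(max_words):
        dist = BIGRAMS.get(words[-1], {})
        if not dist:  # no continuation known: stop generating
            break
        choices, probs = zip(*dist.items())
        words.append(rng.choices(choices, weights=probs, k=1)[0])
    return words

print(" ".join(generate("the")))
```

The interesting part of real LLMs is that the “table” is replaced by a network that computes the next-word distribution from the entire preceding context, which is where the apparent understanding of prompts comes from.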

Anyone feel like providing an ELI5 explanation of how it works? Or any good links to articles / videos?

  • huginn
    11 months ago

    Sure: I get that they’re not exactly the same. The ChatGPT issue is orders of magnitude more removed from humanity than a dog, but it’s a daily example of anthropomorphic bias that is relatable and easy to understand. Just was using it as an example.