Context: Jest is a JavaScript testing library, and mocking is something you do in tests to avoid calling real production services. The AI understood both terms in a non-programming context.
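For anyone unfamiliar with the term, here is a minimal sketch of what "mocking" means in testing. This is a hand-rolled stand-in rather than Jest's real API (Jest provides this via `jest.fn()` and `jest.mock()`); the names `fetchUser` and `greetUser` are made up for illustration.

```javascript
// A tiny stand-in for jest.fn(): records every call and returns a canned value,
// so the test never touches a real service.
function mockFn(returnValue) {
  const fn = (...args) => {
    fn.calls.push(args); // remember the arguments for later assertions
    return returnValue;
  };
  fn.calls = [];
  return fn;
}

// Production code: would normally call a real API over the network.
function greetUser(api, id) {
  const user = api.fetchUser(id);
  return `Hello, ${user.name}!`;
}

// In the test, we inject a mock API instead of the real one.
const api = { fetchUser: mockFn({ name: "Ada" }) };
const greeting = greetUser(api, 42);

console.log(greeting);            // Hello, Ada!
console.log(api.fetchUser.calls); // [ [ 42 ] ]
```

The point is that the code under test runs unchanged while the dependency is a fake: the test can check both the result and that the dependency was called with the expected arguments.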

  • Ephera@lemmy.ml · +163 · 3 months ago

    Man, it really is like an extremely dense but dedicated intern. Does not question for a moment why it’s supposed to make fun of an interval, but delivers a complete essay.

    Just make sure to never say “let’s eat Grandpa” around an AI or it’ll have half the leg chomped down before you can clarify that a comma is missing.

    • Aggravationstation@feddit.uk · +42 · 3 months ago

      Yeah, I didn’t think about that, but if someone said to an AI-powered robot “Hey, can you shred my reports?” as they leave work, they could easily come back in the morning to find it tearing their junior staff into strips, like “Morning boss, almost done”.

          • Hazzard@lemm.ee · +4 · 3 months ago

            Yeah, this is the problem with frankensteining two systems together. Giving an LLM a prompt, and giving it a module that can interpret images for it, leads to this.

            The image parser goes “a crossword, with the following hints”, when what the AI needs to do the job is an actual understanding of the grid. If one singular system understood both images and text, it could hypothetically understand the task well enough to fetch the information it needed from the image. But LLMs aren’t really an approach to any true “intelligence”, so they’ll forever be unable to do that as one piece.

          • stebo@lemmy.dbzer0.com · +2/−1 · edited · 3 months ago

            Well, to be fair, this isn’t what ChatGPT is designed for. It can interpret images and give information/advice/whatever, but not solve crossword puzzles entirely.

              • stebo@lemmy.dbzer0.com · +4 · edited · 3 months ago

                There’s a difference between helping to solve puzzles and actually solving them.

                You have to be more specific:

                • Ptsf@lemmy.world · +2 · 3 months ago

                  I later did ask it to just be helpful, specifically requesting that it give me some possible words that fit the 5-letter possibility for #1. It repeated “floor it” lol.

  • arandomthought@sh.itjust.works · +55/−2 · 3 months ago

    As much as I’m still skeptical about “AI taking all of our jobs” anytime soon, interactions like these still blow my mind…

    • pelespirit@sh.itjust.works · +6/−1 · 3 months ago

      It already has taken tons of graphic design jobs. Also, look up Graphite by IBM; it’s advertising to companies to hire them to do their code, which would be taking coding jobs. I kind of think people don’t understand how far this is going to be taken by these corporations. If half the populace loses their jobs, they just don’t care. I really don’t get who they think is going to buy all of these products and services if no one has jobs, but hey, their following quarter might be better.

  • Aggravationstation@feddit.uk · +31/−1 · 3 months ago

    That first one reminds me of a part of HHGTTG where I think Ford starts counting in front of a computer to intimidate it, because it’s like walking up to a human and chanting “blood, blood, blood”.

  • gamermanh@lemmy.dbzer0.com · +20 · 3 months ago

    ChatGPT has gotten scary good both at entirely misunderstanding me in almost the exact way my ND ass does to NTs AND at how well it responds to “no, silly, and please remember this in future”.

    I don’t use it super often, maybe every 6 months or so, but it’s gotten crazy good at remembering all my little nuances when I have it do shit for me (nothing like research, mostly “restructure data from format A to format B pls”).

  • JATth@lemmy.world · +10/−1 · 3 months ago

    The point where AI hallucinations become useful is the point where I’ll raise my eyebrows. This is not one of those.

    • JustARegularNerd@lemmy.world · +3 · 3 months ago

      In this case, though, it’s not a hallucination; there’s nothing false in that response, it just completely misinterpreted what the user was asking.

  • nroth@lemmy.world · +5/−1 · 3 months ago

    I would have interpreted this the same way the AI did, FWIW. Then again, I don’t do frontend stuff, and I run when I see TypeScript in my hobby projects because it’s such a pain.