How reliable is AI like ChatGPT at giving you code that you request?

  • asteroidrainfall@kbin.social · 1 year ago

    Outside of producing one simple WebPack configuration, I haven’t had good experiences using ChatGPT. It often causes me more trouble than it helps. I’ve tried multiple times to get it to write Bash scripts, and every time it gives me code that looks nice but is just broken. It’s not syntactically incorrect; it’s functionally incorrect.

    For example, it told me that you can pass arrays as function arguments in Bash, which you can’t do. Or it gave me a script that interpolated variables directly into a URL string passed to curl, which won’t work because the values won’t be URL-encoded. (A quick sketch of both pitfalls is below.)
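
    To make those two pitfalls concrete, here is a minimal Bash sketch (the function name, file names, and URL are just illustrative, not from the original script): the array has to be expanded into positional parameters to get it into the function, and curl’s --get/--data-urlencode flags handle the URL encoding that plain string interpolation skips.

    ```bash
    #!/usr/bin/env bash

    # Bash functions don't receive arrays as single arguments; the array is
    # flattened into positional parameters, so the function rebuilds it.
    print_items() {
      local items=("$@")
      local item
      for item in "${items[@]}"; do
        printf '%s\n' "$item"
      done
    }

    files=("a file.txt" "b.txt")
    print_items "${files[@]}"

    # Interpolating a variable straight into the URL leaves it unencoded;
    # --get with --data-urlencode appends it as a properly encoded query string.
    query="two words & symbols"
    curl --get --data-urlencode "q=${query}" "https://example.com/search"
    ```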

    When I do use it, I end up spending more time trying to fix the code it gives me. Which, I guess, has the benefit that I learn something in the process (I didn’t know about either of the examples above until ChatGPT gave me the bad code).

    The thing that convinced me that AI won’t take over the programming side of software engineering was when I asked ChatGPT to help me with some date-time bugs. It just kept making up native JavaScript API functions and couldn’t work out how to parse a UTC timestamp to determine a date-time’s timezone, among other issues. The day that AI can solve software issues around date-times or currencies is the day we’ll all be out of a job.

    Edit:
    I guess you could sum it up like this: using ChatGPT is like pair-programming with an overly confident CS grad.

    • experbia@kbin.social · 1 year ago

      “pair-programming with an overly confident CS grad”

      I love this, and agree. I’ve always said that, for any task, it’s like you’re working with an eager-to-please intern with ADHD.