How reliable is AI like ChatGPT in giving you code that you request?

  • experbia@kbin.social

    I find ChatGPT to be less useful for code and more useful for generating boilerplate in the ‘configuration’ realm: Ansible playbooks or tasks, Nginx configs, Dockerfiles, docker-compose files, etc. Well-bounded things with an abundance of clear documentation.

    I generate a lot of first-draft Dockerfiles and docker-compose files through ChatGPT now from a short description of what I want. It’s always worth reviewing the output, because sometimes it just invents things that look like a Dockerfile, but it can save a lot of the boring boilerplate writing of volumes and networks and depends_ons and the obvious env vars you need to override.
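As an illustration, a prompt like “compose file for a web app with a Postgres database” tends to come back as a first draft along these lines. Every name, port, and credential here is an invented placeholder to be reviewed and replaced, exactly the kind of boilerplate the comment above describes:

```yaml
# Hypothetical first-draft docker-compose file; all values are placeholders.
services:
  app:
    build: .
    ports:
      - "8080:8080"
    environment:
      # placeholder connection string; point at your real database
      - DATABASE_URL=postgres://app:app@db:5432/app
    depends_on:
      - db
  db:
    image: postgres:16
    environment:
      - POSTGRES_USER=app
      - POSTGRES_PASSWORD=app   # override before any real deployment
    volumes:
      - db-data:/var/lib/postgresql/data
volumes:
  db-data:
```

The structure (services, depends_on, named volumes) is usually right; it’s the specifics, like image tags and env vars, that need the human review pass.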

    I do use Codeium in my VS Code instance, though. It’s like a free, more ethical GitHub Copilot, and I’ve been really, really happy with it. Not so much for writing a whole program; I use it much more as a kind of super-autocomplete.

    I’ll go into a class, find a method that needs a change, and just type a comment like the following, and it will basically spit out the authentication logic, which I give a quick review.

    // check the request authentication header against the user service to verify we're allowed to do this
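What the tool actually produces varies run to run; the sketch below is a hypothetical example of the kind of completion such a comment might expand into. The `userService`, its `verifyToken` method, and the `Request` shape are invented stand-ins, not a real API:

```typescript
// Hypothetical completion for the comment above. userService.verifyToken
// is an invented stand-in for a real user service call.
type Request = { headers: Record<string, string> };

const userService = {
  // Stubbed out so the example runs: accept one hard-coded token.
  verifyToken(token: string): boolean {
    return token === "valid-token";
  },
};

function isAuthorized(req: Request): boolean {
  // check the request authentication header against the user service
  const header = req.headers["authorization"];
  if (!header || !header.startsWith("Bearer ")) {
    return false;
  }
  const token = header.slice("Bearer ".length);
  return userService.verifyToken(token);
}
```

This is also why the quick review matters: the generated branch conditions (missing header, wrong scheme, bad token) are exactly where these tools tend to miss a case.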
    
    

    It’s also an amazing “static” debugger - I can highlight particularly convoluted segments of math or recursion or iteration and ask it to explain them. Then I can ask follow-up questions like “Is there any scenario in which totalFound remains at 0?” and it will tell me yes or no and why it thinks so, which is really nice. I tend to save it for cases where I’m already reasonably certain the code is correct and just want it checked. Now, instead of breaking out paper and pen to reason it out, I can ask for a second opinion, and if it has no doubts, my paranoid mind is put at ease a bit.
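To make the “static debugger” workflow concrete, here is an invented example of the sort of loop you might highlight and then interrogate with the totalFound question. The function, names, and data are illustrative only, not from the original post:

```typescript
// Hypothetical snippet to highlight and ask questions about:
// count how many values across nested groups exceed a threshold.
function countAbove(groups: number[][], threshold: number): number {
  let totalFound = 0;
  for (const group of groups) {
    for (const value of group) {
      if (value > threshold) {
        totalFound++;
      }
    }
  }
  // Answer to "can totalFound remain at 0?": yes, whenever no value
  // exceeds the threshold, including the empty-input edge case.
  return totalFound;
}
```

An explanation tool walking through this would ideally surface both paths: the ordinary “nothing matched” case and the empty-array edge case, which is the kind of second opinion being described.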

    I’ve been unimpressed with the ability of any of these “AI” systems to spit out larger volumes of good code. They’re more like eager-to-please little interns with ADHD: they’ll spit out the first answer that comes to mind even if it’s wrong, and they fall into all kinds of well-known development pitfalls.