I’m trying to feel more comfortable using random GitHub projects, basically.

  • unknowing8343@discuss.tchncs.de (OP) · edited · 4 months ago

    To be clear, I don’t care whether the solution is AI-based or not.

    I guess I framed it that way because AI seems well suited to grasping the likely purpose of code in seconds or minutes, without you having to review it yourself. I don’t know how a non-AI tool could do better at such a task.

    Edit: so many people are against the idea. Have you guys used GitHub Copilot? It understands the context of your repo to help you write the next thing, right? Well, what if you apply the same idea to reviewing third-party repos for malicious or unexpected behaviour? Doesn’t seem too weird to me.

    • TootSweet@lemmy.world · edited · 4 months ago

      AI is quite fit for the task of understanding what might be the purpose of code

      Disagree.

      I don’t know how some non-AI tool could be better for such task.

      ClamAV has been filling a somewhat similar use case for a long time, and I don’t think I’ve ever heard anyone call it “AI”.

      I guess Bayesian filters like the ones email providers use for spam could be considered “AI” (old-school AI, though, not the kind of stuff that’s such a bubble now), and they may possibly be applicable to your use case.
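      To make that concrete, here is a toy naive Bayes classifier in Python, the old-school statistical kind of “AI” I mean. It is a minimal sketch with made-up training data, not any real spam filter’s implementation:

      ```python
      import math
      from collections import Counter

      def train(samples):
          """samples: list of (label, text). Returns per-class token counts."""
          counts = {}
          for label, text in samples:
              counts.setdefault(label, Counter()).update(text.lower().split())
          return counts

      def classify(counts, text, alpha=1.0):
          """Pick the class with the highest Laplace-smoothed log-likelihood
          (uniform prior over classes, for simplicity)."""
          vocab = set().union(*counts.values())
          best_label, best_score = None, float("-inf")
          for label, c in counts.items():
              total = sum(c.values())
              score = sum(
                  math.log((c[tok] + alpha) / (total + alpha * len(vocab)))
                  for tok in text.lower().split()
              )
              if score > best_score:
                  best_label, best_score = label, score
          return best_label

      # Hypothetical training data
      samples = [
          ("spam", "free money click now"),
          ("spam", "win free prize now"),
          ("ham", "meeting notes attached"),
          ("ham", "lunch meeting tomorrow"),
      ]
      model = train(samples)
      print(classify(model, "free prize"))        # → spam
      print(classify(model, "meeting tomorrow"))  # → ham
      ```

      The same token-counting idea could in principle be pointed at code diffs instead of emails, though tokenizing source code usefully is its own problem.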

      • lemmyvore@feddit.nl · 4 months ago

        Bayesian filters are statistical; they have nothing to do with machine learning.

        • 31337@sh.itjust.works · 4 months ago

          If you’re talking about naive Bayes filtering, it most definitely is an ML model. Modern spam filters use more complex ML models (or at least Yahoo Mail did ~15 years ago; I saw a lecture where John Langford talked a little bit about it). Statistical ML is an “AI” field, and things like anomaly detection are also usually ML models.

        • TootSweet@lemmy.world · edited · 4 months ago

          The A* algorithm doesn’t have anything to do with machine learning either, but the first time I ever learned about it was in a computer science class in college called something like “Introduction To Artificial Intelligence”.
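          For reference, here is a textbook A* sketch in Python (Manhattan-distance heuristic on a toy grid), the kind of thing those courses covered, with no machine learning anywhere:

          ```python
          import heapq

          def astar(grid, start, goal):
              """grid: list of strings, '#' = wall.
              Returns shortest path length or None."""
              rows, cols = len(grid), len(grid[0])
              h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
              open_heap = [(h(start), 0, start)]  # (f = g + h, g, node)
              best_g = {start: 0}
              while open_heap:
                  f, g, (r, c) = heapq.heappop(open_heap)
                  if (r, c) == goal:
                      return g
                  if g > best_g.get((r, c), float("inf")):
                      continue  # stale heap entry
                  for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                      nr, nc = r + dr, c + dc
                      if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] != "#":
                          ng = g + 1
                          if ng < best_g.get((nr, nc), float("inf")):
                              best_g[(nr, nc)] = ng
                              heapq.heappush(open_heap, (ng + h((nr, nc)), ng, (nr, nc)))
              return None

          grid = ["....",
                  ".##.",
                  "...."]
          print(astar(grid, (0, 0), (2, 3)))  # → 5
          ```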

          But the term “AI” certainly means something very different nowadays, during this cringey bubble, than it did back in 2004 or 2005 or whenever that was.

          Today “AI” is basically synonymous with “BS”. Lol.

    • Shareni@programming.dev · 4 months ago

      AI is quite fit for the task of understanding

      Sure, and parrots are amazing at spotting fallacies like cherry picking…

    • FizzyOrange@programming.dev · 4 months ago

      Don’t listen to the idiots downvoting you. This is absolutely a good task for AI. I suspect current AI isn’t quite clever enough to detect this sort of thing reliably unless it is very blatant malicious code, but a lot of malicious code is fairly blatant if you have the time to actually read an entire codebase in detail, which of course AI can do and humans can’t.

      For example, the extra . that disabled a test in xz? I think current AI could easily highlight it as wrong. It probably couldn’t yet figure out that it was malicious rather than a mistake, though.
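      A hedged sketch of the mechanism in Python (this is not the actual xz build code, which sabotaged a compile-time feature check in the build system): a build-style probe that enables a protection only if a test snippet compiles, so a single stray character disables it silently:

      ```python
      def feature_enabled(probe_source):
          """Pretend build-system probe: the feature is turned on only
          if the probe snippet is syntactically valid."""
          try:
              compile(probe_source, "<probe>", "exec")
              return True
          except SyntaxError:
              return False  # probe failed: feature quietly disabled

      clean_probe = "x = 1\n"
      sabotaged_probe = ".x = 1\n"  # one extra character

      print(feature_enabled(clean_probe))      # → True
      print(feature_enabled(sabotaged_probe))  # → False, with no error raised
      ```

      The point is that the failure mode looks identical to a legitimately unsupported feature, which is why it slipped past human review.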

      • thesmokingman@programming.dev · 4 months ago

        I mean anything is a good fit for future, science fiction AI if we imagine hard enough.

        What you describe as “blatant malicious code” probably only covers things like very specific C&C domains or instruction sequences. We already have very efficient string-matching tools for those, though, and they don’t burn power at an atrocious rate.
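        For illustration, the kind of string matching meant here can be as simple as this sketch (the indicator list is hypothetical; real scanners like ClamAV use large curated signature databases):

        ```python
        # Hypothetical indicators of compromise (IOCs)
        KNOWN_BAD = {"evil-c2.example", "exfil.invalid"}

        def find_iocs(text):
            """Return the sorted list of known indicators found in text."""
            return sorted(ioc for ioc in KNOWN_BAD if ioc in text)

        source = 'conn = connect("evil-c2.example", 443)'
        print(find_iocs(source))        # → ['evil-c2.example']
        print(find_iocs("print('hi')")) # → []
        ```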

        You’ve given us an example so PoC||GTFO. Major code AI tools like Copilot struggle to explain test files with a variety of styles, skips, and comments, so I think you have your work cut out for you.

        • FizzyOrange@programming.dev · 4 months ago

          We already have very efficient string matching tools for those, though

          How is a string matching tool going to find a single .?

          You’ve given us an example so PoC||GTFO

          🙄

          • thesmokingman@programming.dev · 4 months ago

            A single character, per your definition, is not blatant malicious code. Stop moving the goalposts.

            It’s clear you don’t understand the space and you don’t seem to have any interest in acting in good faith based on your other comments so good luck.

            • FizzyOrange@programming.dev · 4 months ago

              I’m not moving any goalposts. The addition of the . was very blatant. They literally just added a syntax error. It went undetected because humans don’t have the stamina to exhaustively do code review down to that level. Computers (even AI) don’t have that issue.

              You are clearly out of your depth here.