Python security developer-in-residence decries use of bots that ‘cannot understand code’

Software vulnerability submissions generated by AI models have ushered in a “new era of slop security reports for open source” – and the devs maintaining these projects wish bug hunters would rely less on results produced by machine learning assistants.

Seth Larson, security developer-in-residence at the Python Software Foundation, raised the issue in a blog post last week, urging those reporting bugs not to use AI systems for bug hunting.

“Recently I’ve noticed an uptick in extremely low-quality, spammy, and LLM-hallucinated security reports to open source projects,” he wrote, pointing to similar findings from the Curl project in January. “These reports appear at first glance to be potentially legitimate and thus require time to refute.”

Larson argued that low-quality reports should be treated as if they’re malicious.

As if to underscore the persistence of these concerns, a Curl project bug report posted on December 8 shows that nearly a year after maintainer Daniel Stenberg raised the issue, he’s still confronted by “AI slop” – and wasting his time arguing with a bug submitter who may be partially or entirely automated.

In response to the bug report, Stenberg wrote:

We receive AI slop like this regularly and at volume. You contribute to [the] unnecessary load of Curl maintainers and I refuse to take that lightly and I am determined to act swiftly against it. Now and going forward.

You submitted what seems to be an obvious AI slop ‘report’ where you say there is a security problem, probably because an AI tricked you into believing this. You then waste our time by not telling us that an AI did this for you and you then continue the discussion with even more crap responses – seemingly also generated by AI.

Spammy, low-grade online content existed long before chatbots, but generative AI models have made it easier to produce the stuff. The result is pollution in journalism, web search, and of course social media.

For open source projects, AI-assisted bug reports are particularly pernicious because they require consideration and evaluation from security engineers – many of them volunteers – who are already pressed for time.

Larson told The Register that while he sees relatively few low-quality AI bug reports – fewer than ten each month – they represent the proverbial canary in the coal mine.

“Whatever happens to Python or pip is likely to eventually happen to more projects or more frequently,” he warned. “I am concerned mostly about maintainers that are handling this in isolation. If they don’t know that AI-generated reports are commonplace, they might not be able to recognize what’s happening before wasting tons of time on a false report. Wasting precious volunteer time doing something you don’t love and in the end for nothing is the surest way to burn out maintainers or drive them away from security work.”

Larson argued that the open source community needs to get ahead of this trend to mitigate potential damage.

“I am hesitant to say that ‘more tech’ is what will solve the problem,” he said. “I think open source security needs some fundamental changes. It can’t keep falling onto a small number of maintainers to do the work, and we need more normalization and visibility into these types of open source contributions.

“We should be answering the question: ‘how do we get more trusted individuals involved in open source?’ Funding for staffing is one answer – such as my own grant through Alpha-Omega – and involvement from donated employment time is another.”

While the open source community mulls how to respond, Larson asks that bug submitters not submit reports unless they’ve been verified by a human – and don’t use AI, because “these systems today cannot understand code.” He also urges platforms that accept vulnerability reports on behalf of maintainers to take steps to limit automated or abusive security report creation.

    • DdCno1@beehaw.org (OP) · 7 hours ago

      Do you really think that had AI been available to apparatchiks in Communist countries, they wouldn’t have used it to advance their careers?

      The problem isn’t capitalism, it’s human nature, regardless of the system. Incentivize behavior that benefits the individual (even if only in the short term) but not society as a whole, and people will engage in it. It doesn’t matter whether there’s a democratically elected leader, a monarch, or a first party secretary at the helm of the nation.

      • forrgott@lemm.ee · 2 hours ago

        This type of parasitic, even sociopathic behavior is directly rewarded in capitalism, though. Kinda figure that’s all they meant.

        Also, if capitalism is anywhere, it’s everywhere. Or this is at least true as long as the United States is in the picture…

        • DdCno1@beehaw.org (OP) · 8 minutes ago

          Do you blame capitalism and America for bad weather too - or when you stub your toe in the morning?

          Capitalism is a product of human nature; nobody designed it that way. When people attempt to design better systems from the ground up, far worse human behavior ends up being directly rewarded. Seriously, do you have any idea how much more disgustingly selfish and self-centered people are under economic and political systems that are supposedly better?

          If you look at the most democratic nations on Earth - the ones with the best-functioning institutions, the best education, the most innovation, the least inequality - you’ll find nations that are fiercely capitalist, with a strong mercantile tradition dating back centuries. These people were capitalists before the term was first coined, and they selfishly wanted the state to protect their investments, so they created strong institutions for that purpose. They had no idea that these institutions would end up doing so much more, spreading and maintaining wealth far beyond the small elite they were supposed to serve while slowly moving power away from them. The many smaller educated merchants, who educated themselves only because they selfishly wanted more prosperity, ended up being an amazing nucleus of a well-formed civil society, which is the backbone of every single successful free country.

          Forget about America for a second, or pie-in-the-sky ideas that failed spectacularly any time they came in contact with the basic reality of human nature. This is what works: stumble into a system that accidentally rewards selfish human behavior in such a way that everyone ends up benefiting from it. The problem from the perspective of ideologues is that this isn’t glamorous; there are no dashing revolutionaries applying catchy slogans with the butts of their rifles. It’s slow, incredibly difficult to replicate, and requires rewarding the “wrong” kind of people for the longest time. There’s no trickling down or other such nonsense, but rather the slow collective realization that the same system that protects investments and the free exchange of goods and services can do a rather excellent job of protecting and increasing civil rights. It was neither linear nor planned, and the resulting societies are by no means perfect, but they are the best we have managed to achieve as a species so far - so consider learning from them how they were able to make capitalism work.

          Sorry for the uncalled-for wall of text, but I’m increasingly tired of people here blaming capitalism for everything. It comes across as performative, even downright intellectually lazy. I get that this is a left-leaning place, to say the least, and there’s a reason I’m here too: I identify with many typical left political positions - but certainly not all of them, and most definitely not those that have failed historically and don’t hold up to the most basic scrutiny.