Cybercrime Forums Push Back on AI Slop

AI slop – Misryoum reports cybercrime communities are irritated by low-quality AI posts and fear they undermine reputation and human interaction.
Cybercrime forums are not just adapting to generative AI; they are starting to complain about it.
In what Misryoum has been tracking as a growing undercurrent online, some users inside underground hacking and scam communities say they are seeing an influx of low-effort, AI-generated content that clutters discussions instead of improving them. The frustration sounds familiar to anyone who has browsed the internet after the latest AI wave, but in this case it is coming from people who built their online credibility on skill, experience, and reputation.
This tension is showing up in direct complaints about “AI garbage,” thin “bullet-point” explanations, and the feeling that new posts are being used to game attention rather than contribute genuinely useful knowledge. Across these forums, the pushback isn’t only about quality; it’s also about who gets to participate and how.
Misryoum notes that as generative tools became easier to use, many people began posting content with minimal edits, which can quickly dilute a community’s signal-to-noise ratio. When threads stop reflecting real human effort, members who value earned reputations often respond.
Meanwhile, researchers studying how low-level cybercriminals use AI have found that the number of AI-related discussions rose rapidly after generative AI tools went mainstream, and so did the backlash. They analyzed tens of thousands of AI-related conversations across cybercrime forums in the period following the major generative AI launches, observing repeated themes: complaints about low-quality output, concerns that AI summarization may reduce forum traffic, and skepticism that AI-generated explanations accurately reflect real expertise.
One of the sharpest dynamics, according to Misryoum, is social. These spaces function as more than marketplaces for fraud. Members build trust, trade on reputation, and rely on human signals when deciding who to collaborate with. When AI-generated posts appear to be used as shortcuts to visibility, it can undermine that unwritten system of credibility.
Misryoum also highlights the irony at the center of the current debate: underground communities have long experimented with new tools, including AI, to speed up work and lower barriers for would-be attackers. But when AI becomes associated with sloppy content, it creates backlash from the same people who might otherwise be willing to use the technology.
This matters because it suggests generative AI’s impact on cybercrime won’t be one-directional. Even in the shadows, communities are negotiating what counts as value and what counts as noise, shaping how AI is adopted or rejected at the grassroots level.
In the end, the most consistent takeaway in Misryoum’s reporting is that the “human interaction” angle is the point of friction. For many forum users, AI posts are less a helpful shortcut and more a threat to the social fabric that makes these spaces function.