The integration of artificial intelligence into cybercrime has been a persistent concern for security professionals, with many anticipating a rapid evolution in the sophistication and automation of attacks. However, a recent analysis by IBM's security research division suggests that the era of widespread, AI-powered ransomware may be developing more slowly than predicted. While threat actors are undoubtedly experimenting with AI tools, their application in ransomware operations appears to be in a nascent, "slopoly" phase—characterized by slow adoption and limited, tactical use rather than a revolutionary overhaul of their attack chains. This measured pace provides a critical window for defenders to bolster their security postures against the more advanced AI threats that are expected to materialize in the future.
Currently, IBM's threat intelligence indicates that cybercriminals are primarily leveraging AI for efficiency gains in the preparatory and supporting stages of an attack, rather than in the core encryption and extortion mechanisms. These auxiliary uses include crafting more convincing, grammatically correct, and localized phishing lures, generating malicious code snippets, and automating reconnaissance of potential targets. This shifts the economics of cybercrime, lowering the barrier to entry for less-skilled actors and increasing the scale at which established groups can operate. The core ransomware payloads themselves, however, have not yet demonstrated a significant leap in autonomous evasion or targeting capabilities directly attributable to generative AI.
Several factors contribute to this slower-than-expected adoption. First, the current ransomware-as-a-service (RaaS) ecosystem is already highly effective and profitable, reducing the immediate incentive for major operational changes. Second, integrating sophisticated AI into malware presents technical challenges, including the need for stable command-and-control infrastructure to host models and the risk of AI "hallucinations" producing unreliable code. Finally, there is an inherent operational security risk for threat actors; using public AI APIs can leave forensic traces, while building private models requires significant resources and expertise that may not be readily available within typical cybercriminal enterprises.
Looking ahead, the trajectory points toward increasing AI integration. Security experts warn that the current auxiliary uses will likely evolve into more direct applications, such as AI agents that autonomously navigate a network, identify critical assets, and execute tailored encryption without human intervention. This could lead to faster, more targeted, and more damaging attacks. The current "slopoly" phase is not a reprieve but a preparation period. Organizations must use this time to implement foundational security practices: robust patch management, strict enforcement of multi-factor authentication, comprehensive employee training on next-generation phishing, and the adoption of zero-trust architectures. Proactive threat hunting and investments in AI-powered defensive tools will be essential to counter the asymmetric advantage that AI will eventually grant to attackers.
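To make the "proactive threat hunting" recommendation concrete, the sketch below shows one classic ransomware-detection heuristic: polling a directory tree and alerting when a burst of file modifications writes high-entropy (encrypted-looking) data. This is an illustrative toy, not IBM's methodology or any product's implementation; the watched path and all thresholds are hypothetical values that a real deployment would tune and combine with many other signals.

```python
import math
import os
import time
from collections import deque

# Hypothetical tuning values for illustration only.
ENTROPY_THRESHOLD = 7.5   # bits/byte; encrypted data approaches 8.0
BURST_WINDOW_SECS = 10    # look-back window for modification bursts
BURST_FILE_COUNT = 50     # high-entropy writes in the window before alerting
SAMPLE_BYTES = 4096       # read only the head of each file
WATCH_ROOT = "/srv/shared"  # hypothetical path to monitor

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte; near 8.0 for encrypted/compressed data."""
    if not data:
        return 0.0
    counts = [0] * 256
    for b in data:
        counts[b] += 1
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in counts if c)

def scan_once(root: str, seen_mtimes: dict, recent: deque) -> list[str]:
    """One polling pass: note files whose mtime changed and whose new
    contents look encrypted; alert when such writes arrive in a burst."""
    alerts = []
    now = time.time()
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                mtime = os.path.getmtime(path)
            except OSError:
                continue
            if seen_mtimes.get(path) == mtime:
                continue  # unchanged since last pass
            seen_mtimes[path] = mtime
            try:
                with open(path, "rb") as fh:
                    head = fh.read(SAMPLE_BYTES)
            except OSError:
                continue
            if shannon_entropy(head) >= ENTROPY_THRESHOLD:
                recent.append(now)
    # Drop burst entries that have aged out of the window.
    while recent and now - recent[0] > BURST_WINDOW_SECS:
        recent.popleft()
    if len(recent) >= BURST_FILE_COUNT:
        alerts.append(
            f"{len(recent)} high-entropy writes in {BURST_WINDOW_SECS}s under {root}"
        )
        recent.clear()  # reset so one burst yields one alert
    return alerts

if __name__ == "__main__":
    seen: dict = {}
    burst: deque = deque()
    # Prime the mtime cache so pre-existing files don't trip the detector.
    for dirpath, _, filenames in os.walk(WATCH_ROOT):
        for name in filenames:
            p = os.path.join(dirpath, name)
            try:
                seen[p] = os.path.getmtime(p)
            except OSError:
                pass
    while True:
        for alert in scan_once(WATCH_ROOT, seen, burst):
            print("ALERT:", alert)
        time.sleep(2)
```

The design choice here mirrors what the article anticipates: an autonomous, AI-driven encryptor would still have to rewrite many files quickly with random-looking ciphertext, so rate-plus-entropy signals remain useful even as attacker tooling evolves. Production tools pair heuristics like this with EDR telemetry, honeypot "canary" files, and automated isolation rather than a simple console alert.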



