Editor’s Note: As Generative AI becomes increasingly embedded in enterprise systems and security workflows, it is no longer just a technological innovation—it’s a cybersecurity paradox. This article draws on the European Commission’s Generative AI Outlook Report – Exploring the Intersection of Technology, Society, and Policy (JRC142598) to examine how GenAI is reshaping both the tools of defense and the tactics of attackers.
The article investigates the dual role GenAI plays in modern cybersecurity: enabling sophisticated threat detection and penetration testing on one hand, while simultaneously lowering the barrier to entry for adversaries who exploit its capabilities to launch convincing phishing campaigns, generate malicious code, and poison AI models from within. It examines new challenges in securing AI supply chains, tracking data provenance, and assessing risk across AI-integrated environments.
For CISOs, threat analysts, and infosec professionals, this piece offers a timely narrative on how GenAI is altering the strategic assumptions of digital defense. It’s not just about adopting new tools—it’s about rethinking what security even means when intelligence can be both artificial and adversarial.
Industry News – Artificial Intelligence Beat
Model Poisoning and Malware: GenAI’s Double-Edged Sword
ComplexDiscovery Staff
Generative artificial intelligence, once the subject of experimental labs and speculative fiction, is now a central force in digital transformation, and cybersecurity professionals are finding themselves on unfamiliar ground. As GenAI tools become more integrated into enterprise environments, they are reshaping threat models, expanding attack surfaces, and introducing vulnerabilities that cannot be fully addressed with traditional controls. A recent report from the European Commission’s Joint Research Centre (JRC), titled Generative AI Outlook, dives deep into this unfolding reality, urging cybersecurity leaders to recognize and respond to a paradigm shift in digital risk.
GenAI differs from previous waves of innovation in one key respect: it enables both offense and defense in the cyber domain. On one hand, its capabilities can be harnessed for threat detection, anomaly analysis, and even educational training simulations. On the other hand, malicious actors are now using these same capabilities to scale and refine their attacks. The arms race between defenders and adversaries is not new, but GenAI has collapsed the expertise gap. With simple natural language prompts, attackers can now automate phishing schemes, generate polymorphic malware, and create convincing synthetic identities, often without needing deep technical skills.
This democratization of offensive tools has profound implications. Whereas traditional cybersecurity threats often rely on known signatures or behaviors, GenAI-generated threats can mimic human language and behavior with astonishing nuance. Social engineering, for instance, no longer requires trial and error by a human scammer. A language model, properly instructed, can craft targeted spear-phishing emails, deepfake voicemail messages, or even real-time video impersonations. What once took hours of reconnaissance and manual finesse can now be produced in minutes by automated agents.
Yet it isn’t just the output of GenAI that introduces risk—it’s the infrastructure behind it. The JRC report outlines how the supply chains of GenAI systems are inherently more complex and opaque than those of traditional software. Many of these models rely on third-party datasets harvested from the open internet, often with questionable provenance. Unlike code libraries with versioning and signed releases, datasets used in model training are rarely tracked with similar rigor. Attackers are already exploiting this weak point by injecting malicious samples into open-source data repositories, a tactic known as data poisoning. If such poisoned data makes its way into a training corpus, the resulting model may produce biased, unsafe, or even dangerous outputs—outputs that could misinform security decisions or suggest insecure configurations.
The models themselves are not immune. A more insidious threat emerges from what researchers describe as model poisoning, in which the internal weights or decision logic of a machine learning system are subtly altered. This form of tampering is particularly difficult to detect and may only manifest under specific conditions or prompts. For organizations deploying third-party AI models, the question is no longer just about trust—it is about traceability and verification. Without access to the original training data or model architecture, companies may be blind to the very threats they are attempting to mitigate.
Despite these challenges, the same report recognizes GenAI’s promise as a powerful asset in the defensive toolkit. AI-powered systems are increasingly being used to triage incident alerts, interpret system logs, and flag potential anomalies in real-time. These capabilities are invaluable, especially given the chronic talent shortages in cybersecurity. Where one analyst might take hours to correlate disparate events across network logs, an AI model can do it in seconds, surfacing threats that would otherwise remain hidden.
Moreover, GenAI has opened new frontiers in penetration testing and red teaming. By simulating attacker behavior, these models can identify weaknesses in systems before they are exploited in the wild. Early research has shown promising results in automating certain aspects of vulnerability discovery and exploit generation, although the most sophisticated attacks still require human insight and judgment. Nonetheless, as these tools mature, organizations may soon find themselves relying on AI not just to respond to attacks but to anticipate and prevent them.
That said, the integration of GenAI into cybersecurity workflows demands a cautious approach. The report underscores the importance of aligning AI tools with human oversight. As models become more agentic—capable of making decisions and initiating actions autonomously—the need for robust governance, transparent auditing, and ethical guardrails becomes even more pressing. Left unchecked, an overreliance on GenAI could erode situational awareness, create false confidence, or introduce new failure modes that are not yet fully understood.
Ultimately, the future of cybersecurity in the GenAI era will not be determined solely by technology, but by leadership. Security teams must adapt not only their tools but also their mindsets, treating AI not as a silver bullet but as a complex ecosystem of capabilities and risks. This means asking tough questions about data provenance, model explainability, and decision accountability—questions that go beyond compliance checklists and into the core of responsible security practice.
The GenAI revolution is not on the horizon. It is already here, embedded in systems and shaping operations across industries. For cybersecurity leaders, the challenge is no longer whether to respond but how. Will your defenses evolve faster than the threats? Or will you find your security posture rendered obsolete by the very intelligence it seeks to control?
News Sources
- Abendroth Dias, K., Arias Cabarcos, P., Bacco, F.M., Bassani, E., Bertoletti, A. et al., Generative AI Outlook Report – Exploring the Intersection of Technology, Society and Policy, Navajas Cawood, E., Vespe, M., Kotsev, A. and van Bavel, R. (editors), Publications Office of the European Union, Luxembourg, 2025, https://publications.jrc.ec.europa.eu/repository/handle/JRC142598.
- Data at Risk: The Governance Challenge of Generative (ComplexDiscovery)
- Legal Tech in the Loop: Generative AI and the New Frontiers of Responsibility (ComplexDiscovery)
- JRC Publications Repository
Assisted by GAI and LLM Technologies
Additional Reading
- The LockBit Breach: Unmasking the Underworld of Ransomware Operations
- The TeleMessage Breach: A Cautionary Tale of Compliance Versus Security
- Inside CyberCX’s 2025 DFIR Report: MFA Failures and Espionage Risks Revealed
Source: ComplexDiscovery OÜ
The post Model Poisoning and Malware: GenAI’s Double-Edged Sword appeared first on ComplexDiscovery.