Cyber resilience in an era of AI-enabled offense

Organizations that fail to manage an artificial intelligence (AI)-fueled cyber offense may expose their intellectual property, operations and customer trust to risks that boards and shareholders will not accept.

Cybersecurity is shifting faster than at any point in the last decade. AI is changing how attacks are executed, how vulnerabilities are identified, and how adversaries scale their campaigns. What used to be a human-driven threat landscape is now more automated, more adaptive and far harder to defend. Traditional defenses cannot keep up with threats that learn, iterate and operate at machine speed.

What has been revealed signals the next wave

Anthropic recently disclosed that a state-sponsored threat group used an AI system to coordinate simultaneous cyber espionage intrusions across global companies and government agencies.¹ The AI assisted with vulnerability identification, live exploitation, lateral movement and even troubleshooting obstacles during the attack. Although this was not a fully agentic operation, it accelerated the intrusion cycle and showed how quickly attackers can scale once AI enters the loop. Similar disclosures from OpenAI and Microsoft documenting state-affiliated misuse of large language models,² ³ along with assessments and guidance from the UK National Cyber Security Centre and CISA,⁴ ⁵ reinforce this trajectory.

This follows a predictable pattern. Attackers test new techniques. Defenders respond. The validated techniques become more automated over time. The takeaway is clear: organizations must close persistent gaps before the next wave of automation matures.

This acceleration aligns with the NAVI characteristics from the 2025 EY Global Risk Transformation Study. Risks are nonlinear, accelerated, volatile and interconnected.
AI-enabled threats capture all four.

The threat landscape has fundamentally changed

Yesterday's intrusions relied on human expertise, manual reconnaissance and long planning cycles. Today's attackers use AI inside live environments to craft malicious code, refine payloads, generate synthetic communications and solve technical problems in real time. There are already observed cases of malware invoking language models to generate evasive commands on the fly.

Tomorrow, we will see semiautonomous intrusion ecosystems operating continuously across cloud, identity, data and applications. These systems will test defenses, shift tactics based on detection and maintain persistence without constant human oversight. The traditional assumption that threats move linearly is no longer valid.

Ayan Roy, Cybersecurity Competency Leader - EY

Threats to AI and threats through AI

Organizations adopting AI face two interconnected categories of risk.

Threats to AI target the data, models and pipelines that power AI systems. Compromised training data, stolen models or manipulated workflows can distort behavior and undermine key decisions.

Threats through AI occur when attackers use AI as an enabler. AI with elevated privileges can be manipulated into unintended actions. Attackers use AI to map environments, identify weaknesses and refine exploitation paths. Synthetic audio, video and text make social engineering more convincing. Public-facing AI applications can leak sensitive data if probed.

Weak controls around AI create opportunities for misuse. Meanwhile, attackers who use AI increase pressure on organizations to secure their own AI environments. Resilience requires a unified view of both.

Why traditional controls struggle now

Foundational controls remain essential, but they were built for slower attack cycles. Identity governance, network segmentation, secure development and monitoring still matter.