top of page

AI and Managing Cyber Risk

AI and Managing Cyber Risk

8/23/25, 7:00 PM

AI Frameworks and Knowledge Bases

In 2025, cybersecurity has entered a new era. Artificial intelligence is now both a force multiplier for defenders and a powerful tool for attackers. Security teams increasingly depend on AI to triage alerts, execute real-time detection and response, and eliminate threats and vulnerabilities. At the same time, threat actors leverage generative models for convincingly cloned voices, multilingual phishing, and rapid social engineering at scale, and threat assessments continuously report a rise in AI-assisted impersonation and fraud. The cybersecurity landscape is evolving.

On the defense side, the playbook is maturing from “try AI” to “operationalize AI securely.” Organizations are building on community knowledge bases, such as MITRE’s ATLAS, which maps tactics like model theft, data poisoning, and adversarial prompts across the AI lifecycle. They are also treating the OWASP Top 10 for LLM applications as a baseline for hardening chatbots, copilots, and agentic workflows. The result is more systematic threat modeling for AI components and more consistent pre-deployment testing for LLM-specific flaws such as prompt injection and insecure output handling.

Governance has accelerated to match. The EU’s Artificial Intelligence Act became law in July 2024, with staged obligations that begin in August 2025 and fully apply to most high-risk uses by August 2026. Importantly for risk leaders, the Act codifies human oversight for high-risk systems and requires ongoing monitoring rather than one-off assessments. That combination pushes programs beyond checklists toward continuous assurance. Industry-specific rules are tightening as well. In financial services, the Digital Operational Resilience Act (DORA) took effect in January 2025, standardizing digital resilience requirements and formalizing third-party risk management. Many organizations are now using these regulations to guide integrated AI global security practices.

Secure-by-design principles now apply to the entire machine learning (ML) stack. Joint guidance from CISA and the UK’s NCSC, “Guidelines for Secure AI System Development,” provides lifecycle controls spanning design through operations for both in-house and third-party AI systems. For DevOps teams, these guidelines help with integration, securing data pipelines, implementing red-teaming, and strengthening everyday development practices. As AI-specific attack patterns become well-documented, traditional risk frameworks require adaptation. Defenders should assume exposure to LLM-related threats such as prompt injection, data leakage, model denial-of-service, training data poisoning, and supply chain compromise. These risks must be explicitly tested for during development and revalidated in production.

In the United States, federal policy shifted in early 2025 when Executive Order 14110 (2023) was rescinded; however, many of the practices it encouraged remain in place. NIST’s AI Risk Management Framework (AI RMF) guides understanding, addressing, and framing AI risk. Meanwhile, CISA has been filling operational gaps with guidance, most recently on securing data used to train and run AI systems so programs can treat data integrity as a core security concern.

Finally, provenance and integrity are becoming part of day-to-day controls. Adoption of the C2PA content-credentials standard is expanding across major platforms, giving security and fraud teams more signals to verify whether media is synthetic or altered. While emerging, it’s a useful tool alongside process safeguards like call-back verification and multi-party approvals for high-risk actions.

In conclusion, AI can make risk management faster, more predictive, and more consistent, but only if it’s managed with the same discipline as any other critical system. Align your program to recognized frameworks and standards (AI RMF, ISO/IEC 42001), implement lifecycle controls, and rehearse AI-specific failure modes (OWASP LLM Top 10, MITRE ATLAS). Combine continuous monitoring with human oversight for consequential actions, and reinforce data integrity wherever AI touches sensitive processes. Done right, this approach strengthens cybersecurity and builds resilience without sacrificing trust.

For a more detailed discussion about AI and Managing Cyber-Risk, schedule a meeting with us and learn how we can deliver solutions that meet your needs and exceed your expectations.

bottom of page