The Future for AI Risk and Security (2025–2030)
Presented by Mark Maybury, VP of Commercialization, Lockheed Martin, and Marc Zissman, Associate Head, Cyber Security and Information Sciences Division, MIT Lincoln Laboratory
AI is no longer a future technology—it’s embedded in the core of modern enterprise. The next challenge isn’t adoption, but assurance: how do we secure intelligent systems that can think, act, and make decisions at machine speed?
At the ACSC 15th Annual Member Conference on November 6, two of the field’s most respected innovators will open the day with a forward-looking presentation designed to frame a full day of NDA-covered, member-only working sessions. Their discussion will set the stage for senior CISOs, risk officers, and legal counsel to collaborate on a shared action agenda for AI risk governance and security.
In this milestone session, Mark Maybury and Marc Zissman will examine how the accelerating convergence of AI, automation, and autonomy is reshaping the threat landscape. They will trace the evolution from traditional machine learning to agentic systems and world models, and discuss what those advances mean for security architecture, human trust, and governance.
“We’re entering a phase where AI doesn’t just inform decisions—it can make them,” says Maybury. “That demands a new model for governance, transparency, and control.”
Emerging Challenges—and Opportunities
The session will tackle some of the most pressing issues shaping the AI security agenda, including:
- How the expansion of agentic and autonomous systems increases attack surfaces
- The rise of AI-driven attacks that can target confidentiality, integrity, and availability
- New forms of cognitive and operational risk emerging from overreliance on generative tools
- The need for traceable, auditable, and secure AI models to meet regulatory and ethical standards
Maybury and Zissman will explore both the vulnerabilities and the potential of AI as a force multiplier for defense, resilience, and national security.
Why the Next Five Years Will Define AI Security
Between 2025 and 2030, global AI investment is projected to exceed $400 billion per year. Despite this surge, studies show that only a small fraction of enterprise AI projects achieve measurable success. Understanding why, and how to build trustworthy systems that scale safely, will be central to the future of cybersecurity leadership.
As AI becomes embedded in every process and platform, leaders will need to address not just technical risks, but also the human, organizational, and geopolitical dimensions of intelligent systems.
This opening session will provide the context, foresight, and direction to guide ACSC’s ongoing executive collaborations—helping shape how organizations govern and secure AI over the next decade.
