AI Security Autonomy Framework
Working with AI security and risk experts from MIT Lincoln Laboratory, the ACSC developed this AI Security Autonomy Framework with feedback from its membership and other industry thought leaders. The document is meant to help members assess the maturity of their AI security programs and decide how much should be entrusted to automation.

Dec 3 · 1 min read


Applying the NIST Cybersecurity Framework (CSF) to AI
ACSC's AI Co-Chairs, Mark Maybury (Lockheed Martin) and Marc Zissman (MIT Lincoln Laboratory), presented an overview at the Annual Member Conference of how the five functions of the NIST Cybersecurity Framework might be applied to AI capabilities. Their presentation framed the future of AI security and risk.

Dec 3 · 1 min read


Terms of Art(ificial Intelligence)
ACSC's AI Co-Chairs, Mark Maybury (Lockheed Martin) and Marc Zissman (MIT Lincoln Laboratory), presented the following AI definitions at the Annual Member Conference to align all participants on a common lexicon for the day's discussions. Their presentation framed the future of AI security and risk.

Dec 3 · 1 min read


GenAI Threats to Confidentiality, Integrity and Availability
ACSC's AI Co-Chairs, Mark Maybury (Lockheed Martin) and Marc Zissman (MIT Lincoln Laboratory), presented the following description of threats to Generative AI, aligned to the CIA (Confidentiality, Integrity, and Availability) framework, during a presentation that framed the future of AI security and risk.

Dec 3 · 1 min read


Winning the Competition for Trusted AI: What Members Will Learn at the ACSC 15th Annual Member Conference
This Thursday, November 6, the Advanced Cyber Security Center (ACSC) will host its 15th Annual Member Conference at the Federal Reserve Bank of Boston: a full day of keynotes, panels, and interactive working sessions designed to help members tackle one of the most urgent challenges of our time, building trusted AI systems. A Focus on Action and Application in AI and Security. This year's conference theme, "Winning the Competition for Trusted AI: A Risk and Security Agenda"...

Nov 4 · 2 min read


Why Every Organization Should Revisit Insider Risk Programs
The most damaging security incidents rarely start with an external threat. They start with an insider (sometimes malicious, often careless) who already has access. The results can be catastrophic. Google's self-driving car unit lost 9.7 gigabytes of intellectual property when a lead engineer walked out with 14,000 files on a removable device. Boeing spent $17 million remediating a breach caused by an employee who emailed sensitive data to his spouse for help formatting a spreadsheet...

Oct 20 · 3 min read


Cloud Security Posture Management: Strategies, Talent and AI
Approach CSPM the way you approach traditional vulnerability management: with the same discipline and practices. Cloud...

Sep 30, 2024 · 1 min read


Insider Risk Programs: Accelerating Around Human Factors
Key Themes: Human factors first. Insider risk programs are shifting from an IT focus to organizing around human behavior, reflected in cyber...

Sep 10, 2024 · 1 min read
