
Coalition for Secure AI Marks First Anniversary with New Principles for Agentic Systems and Defender Frameworks
Global Participation Expands as the Coalition Releases Essential AI Guidance
Boston, MA – 17 July 2025 – The Coalition for Secure AI (CoSAI), an OASIS Open Project, celebrates its first anniversary since launching at the Aspen Security Forum in 2024. Over the past year, CoSAI has grown into the industry’s leading collaborative ecosystem for AI security, expanding from its initial founding sponsors to more than 45 partner organizations worldwide. Its mission to enhance trust and security in AI development and deployment has resonated widely, attracting premier sponsors EY, Google, IBM, Microsoft, NVIDIA, Palo Alto Networks, PayPal, Protect AI, Snyk, Trend Micro, and Zscaler. Through multiple workstreams, the coalition has produced practical frameworks and research addressing real-world challenges in securing AI systems. Central to CoSAI’s impact this year are two recent releases: the “Principles for Secure-by-Design Agentic Systems,” which establishes three core principles for autonomous AI, and the “Preparing Defenders of AI Systems” whitepaper.
Security Principles Help Safeguard Agentic AI Systems
CoSAI’s Technical Steering Committee (TSC) has released the “Principles for Secure-by-Design Agentic Systems,” a foundational document aimed at helping technical practitioners address the unique security challenges posed by autonomous AI.
The principles offer practical guidance on balancing operational agility with robust security controls, establishing that secure agentic systems should be:
- Human-governed and Accountable: architected for meaningful control with clear accountability, constrained by well-defined authority boundaries aligned with risk tolerance, and subject to risk-based controls ensuring alignment with expected business outcomes.
- Bounded and Resilient: built with strict purpose-specific entitlements, robust defensive measures including AI-specific protections, and continuous validation with predictable failure modes.
- Transparent and Verifiable: supported by secure AI supply chain controls, comprehensive telemetry of all system activities, and real-time monitoring capabilities for oversight and incident response.
An accompanying blog post provides additional context on the principles and how they can be applied in real-world environments.
“As agentic AI systems become more embedded in organizations’ operations, we need frameworks to secure them,” said David LaBianca, Project Governing Board co-chair at CoSAI. “These principles provide a technical foundation for organizations to adopt AI responsibly and securely.”
New Defender Frameworks Help Organizations Operationalize AI Security
CoSAI has also published a new landscape paper, “Preparing Defenders of AI Systems,” developed through its workstream on Preparing Defenders for a Changing Cybersecurity Landscape. The paper provides practical, defender-focused guidance on applying AI security frameworks, prioritizing investments, and enhancing protection strategies for AI systems in real-world environments.
A companion blog post offers additional insights on how this evolving resource bridges high-level frameworks with practical implementation and will continue adapting as AI threats and technologies advance.
“This paper provides defenders with specific guidance on how security frameworks must be adapted to mitigate risks in the AI transformation, pinpointing gaps in current approaches and prioritizing critical investments,” said Josiah Hagen of Trend Micro and Vinay Bansal of Cisco, CoSAI’s Workstream 2 Leads. “As security practices are aligned with AI adoption realities, organizations are empowered to make informed decisions and protect their assets while ensuring innovation doesn’t outpace defenders. This exemplifies CoSAI’s commitment to connecting emerging threats to AI systems with practical security solutions.”
These foundational outputs from CoSAI’s first year set the stage for even greater impact ahead.
Looking Ahead: Building a Secure AI Future
As CoSAI enters its second year, the coalition is positioned to further accelerate AI security innovation through expanded research initiatives, practical tool development, and increased global engagement. With active workstreams producing actionable guidance and a growing community of practitioners, CoSAI continues to drive adoption of secure-by-design AI systems across industries. Its commitment to open source collaboration and standardization remains central to establishing trust in AI technologies. Realizing this vision requires continued collaboration across the AI security community.
Get Involved
Technical contributors, researchers, and organizations are invited to join CoSAI’s open source community and help shape the future of secure AI. To learn more about how to get involved, contact join@oasis-open.org.
One year in: What CoSAI members are saying about our impact
Premier Sponsors:
- Google:
“It’s been great to see CoSAI grow with so many new partners and instrumental frameworks since we first introduced it last year. Google is proud to have been a co-founder for this initiative and we look forward to seeing more work from CoSAI’s workstreams, specifically across critical areas like agentic security.”
– Heather Adkins, VP of security engineering, Google
- IBM:
“From establishing critical work streams to launching innovative initiatives around Security Principles of Agentic AI, AI model signing and attestation, and MCP Security, CoSAI has built real momentum in securing AI at scale—all in just one year. It’s been rewarding to co-chair the Technical Steering Committee and collaborate with this talented, cross-industry community to tackle the evolving challenges of AI security and help shape industry standards.”
– J.R. Rao, IBM Fellow and CTO, Security Research, IBM
- NVIDIA:
“As AI becomes increasingly integral to critical infrastructure and enterprise operations, security must be foundational at every stage of development and deployment. As an industry enabler of AI for both hardware and software, NVIDIA is proud to support CoSAI’s collaborative efforts to advance practical, open standards across industries to democratize and scale AI for the entire market.”
– Daniel Rohrer, Vice President of Software Product Security, NVIDIA
- Palo Alto Networks:
“As public and private organizations increasingly integrate advanced and agentic AI models into critical networks, the development of industry-driven AI security frameworks, such as CoSAI’s ‘Principles for Secure-By-Design Agentic Systems,’ will be vital for the security of our digital ecosystem. CoSAI’s initiatives over the past year are commendable, and we eagerly anticipate continuing our contributions to their mission.”
– Munish Khetrapal, VP of Cloud Management, Palo Alto Networks
- Trend Micro:
“As AI continues to reshape how businesses operate, we see tremendous value in collaboration that drives open standards and innovation across the industry. Over the past year, our work with CoSAI has reflected a shared commitment to raising the bar for security. We’re proud to stand alongside CoSAI in helping lead the way to a more secure and resilient digital future.”
– Kevin Simzer, COO at Trend Micro
General Sponsors:
- Adversa AI:
“At Adversa AI – an Agentic AI Security startup – we are proud to be a CoSAI sponsor and a co-lead of the Agentic AI Security workstream. As pioneers of AI security and continuous AI red teaming, we believe Agentic AI demands a new security paradigm—one that goes beyond traditional guardrails to test cognition, autonomy, and context. CoSAI’s Agentic AI Security Principles mark a pivotal step forward, and we’re committed to shaping the future of secure Agentic AI systems.”
– Alex Polyakov, Co-Founder of Adversa AI
- Aim Security:
“CoSAI is building what the industry urgently needs: clarity and collaboration in securing AI systems. As pioneers in AI security, we at Aim are excited to work alongside this diverse community to help define the future of AI defense – for agentic systems and beyond.”
– Matan Getz, CEO and Co-Founder, Aim Security
- Amazon:
“The first year of CoSAI highlights how industry collaboration can advance AI security. As a founding member, Amazon supports the coalition’s mission to develop open standards and frameworks that benefit the entire AI ecosystem. Together, we look forward to strengthening the foundation of secure AI.”
– Matt Saner, Sr. Manager, Security Specialist Solution Architecture; CoSAI Governing Board and Executive Steering Committee member
- Anthropic:
“Safe and secure AI development has been core to our mission from the start. As AI models become more autonomous, CoSAI’s work is increasingly vital for ensuring that AI systems remain secure, trustworthy, and beneficial for humanity. We’re proud to continue this important work alongside other industry leaders.”
– Jason Clinton, Chief Information Security Officer, Anthropic
- Cisco:
“As AI systems become more agentic and interconnected, securing them is now more important than ever. During the last year, CoSAI’s workstreams helped empower defenders and innovators alike to advance AI with integrity, trust, and resilience. We’re proud to help shape industry frameworks with this global coalition; uniting leaders across disciplines to safeguard the future of AI. Together, we’re ensuring that security is foundational to every phase of AI’s evolution.”
– Omar Santos, Distinguished Engineer, Advanced AI Research and Development, Security and Trust, Cisco
- Cohere:
“We’re proud to support CoSAI and collaborate with industry peers to ensure AI systems are developed and deployed securely. Over the last year, these collective efforts have built an important foundation that helps drive innovation while protecting against emerging threats. Our shared commitment to secure-by-design principles is increasingly important as AI adoption accelerates.”
– Prutha Parikh, Head of Security, Cohere
- Fr0ntierX:
“CoSAI has united a global community around one of the most critical opportunities of our era: advancing safe, responsible, and innovative AI. At Fr0ntierX, we’re proud to contribute to this mission by helping build an infrastructure foundation rooted in trust, interoperability, and privacy. As AI continues to evolve, we remain committed to ensuring that innovation goes hand in hand with alignment and meaningful user control.”
– Jonathan Begg, CEO, Fr0ntierX
- GenLab:
“Over the past year, CoSAI has brought clarity to securing AI across the AI supply chain. The Six Critical Controls give leaders something concrete to work from, and GenLab has been proud to support that work from the start. As AI adoption accelerates, these frameworks are going to be essential—not just for safety, but for trust across sovereign systems.”
– Daniel Riedel, Founder & CEO, GenLab Venture Studios
- HiddenLayer:
“As one of the earliest members of CoSAI, HiddenLayer recognized the urgency of securing AI from the outset. CoSAI’s work over the past year has provided much-needed clarity in a rapidly evolving space, offering actionable frameworks that empower organizations to operationalize AI security and governance. Its mission has reinforced our belief that trust must be embedded into AI systems by design. As threats become more advanced and the AI attack surface expands, our continued collaboration with CoSAI remains essential to ensuring that AI innovations are safe and secure.”
– Malcolm Harkins, Chief Security & Trust Officer, HiddenLayer
- Intel:
“CoSAI’s first year has been marked by strong momentum—from the release of landscape papers of technical workstreams to the timely initiation of the Agentic AI Systems workstream. These milestones reflect the coalition’s ability to anticipate and act on emerging security needs. At Intel, we’re proud to partner with CoSAI members to ensure that secure-by-design principles are embedded early in the AI system design and deployment.”
– Dhinesh Manoharan, VP Product Security & GM of INT31, Intel
- Lasso Security:
“At Lasso, we believe secure-by-design must be the foundation of AI innovation. CoSAI has played a critical role in turning complex AI security challenges into practical, actionable guidance—from agentic systems to defender frameworks. As proud contributors to this effort, we’ve seen firsthand how CoSAI is helping shape a more trustworthy AI future and laying the groundwork for secure, enterprise-grade solutions.”
– Elad Schulman, CEO & Co-Founder, Lasso
- Opal Security:
“Opal is a proud early supporter of CoSAI—because Opal helps customers track agents and other NHIs in our platform, we’re deeply invested in securing AI’s future. CoSAI assembles leading minds to set standards for a world in which every employee calls on multiple agents. Opal is honored to contribute to this organization and learn from luminaries at trailblazing member organizations. We look forward to future consensus-building, standards setting, and insights.”
– Umaimah Khan, CEO, Opal Security
- Operant:
“In a world where AI is rapidly reshaping everything from infrastructure to decision-making, collaboration is our best defense. I’m proud to have joined the board of Coalition for Secure AI as it brought together industry leaders, researchers, and policymakers under one roof, filling a major gap in the evolution of Responsible AI that is now more urgent than ever. CoSAI represents the kind of cross-industry partnership that will shape how we build a more secure and trustworthy AI ecosystem for everyone. A secure AI future is only possible if we build it together.”
– Priyanka Tembey, CTO, Operant AI
- TrojAI:
“CoSAI’s collaborative and transparent approach to make AI safer and more secure for everyone closely reflects TrojAI’s own mission. We’re proud to support this important initiative and celebrate its first year of progress. As AI adoption increases, we believe that security will be integral to the sustainable growth of AI. CoSAI’s efforts to develop best practices and unified methodologies are invaluable for secure AI development and deployment.”
– Lee Weiner, CEO, TrojAI
- VE3:
“We joined CoSAI right at the beginning because its mission aligned with our belief that AI must be built securely, responsibly, and transparently. CoSAI’s insights and frameworks, like the critical controls, have deeply influenced how we approach AI security and governance at VE3. From shaping internal practices to launching our own AI safety, security and governance whitepaper, CoSAI’s work has been instrumental for us. As AI systems grow more complex and autonomous, this partnership becomes more vital, and we’re honored to be part of CoSAI’s journey.”
– Manish Garg, Managing Director, VE3
- Wiz:
“AI’s growth echoes the early cloud era, when innovation outpaced security and the industry had to close the gap together. At Wiz, we believe that securing AI takes more than technology — it requires collaboration among industry leaders. Over the past year, CoSAI has driven these critical conversations, and Wiz is proud to stand with this coalition as new AI security challenges emerge, from autonomous AI agents to MCP.”
– Alon Schindel, VP of AI & Threat Research, Wiz
About CoSAI
The Coalition for Secure AI (CoSAI) is a global, multi-stakeholder initiative dedicated to advancing the security of AI systems. CoSAI brings together experts from industry, government, and academia to develop practical guidance, promote secure-by-design practices, and close critical gaps in AI system defense. Through its workstreams and open collaboration model, CoSAI supports the responsible development and deployment of AI technologies worldwide.
CoSAI operates under OASIS Open, the international standards and open source consortium. www.coalitionforsecureai.org
About OASIS Open
One of the most respected, nonprofit open source and open standards bodies in the world, OASIS advances the fair, transparent development of open source software and standards through the power of global collaboration and community. OASIS is the home for worldwide standards in AI, emergency management, identity, IoT, cybersecurity, blockchain, privacy, cryptography, cloud computing, urban mobility, and other content technologies. Many OASIS standards go on to be ratified by de jure bodies and referenced in international policies and government procurement. www.oasis-open.org
Media Inquiries:
communications@oasis-open.org