Developing a framework for secure third-party access to frontier AI
Think tank: RUSI
Author(s): Dr Louise Marie Hurel; Elijah Glantz; Daniel Cuthbert
April 7, 2026
This report from UK think tank RUSI presents a framework for securely enabling third-party evaluation of frontier AI models, addressing the need for robust safety and security assurance in the defence and security sector as AI capabilities rapidly advance. By mapping the risks of model access and proposing actionable mitigations, it helps stakeholders balance innovation with protection against emerging threats.
Key Recommendations
Do not let security concerns impede safety-critical evaluation: Ensure that third-party assessments can proceed without unnecessary barriers, supporting transparency and accountability.
Harmonise language and access tiers: Adopt a shared taxonomy for model access levels (black-box, grey-box, white-box) to standardise communication and expectations across developers, evaluators, and policymakers.
Operationalise secure access through shared standards and practices: Develop and implement common security controls, including technical, procedural, and contractual measures, grounded in principles like least privilege, data minimisation, and time-bound access.
Build feedback loops for continuous improvement: Establish mechanisms for ongoing learning, incident reporting, and periodic review of risk frameworks to adapt to evolving threats and regulatory requirements.
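The access tiers and control principles above can be sketched in code. This is an illustrative sketch only: the tier names (black-box, grey-box, white-box) come from the report's taxonomy, but the `AccessGrant` structure, its field names, and the example values are assumptions introduced here, not the report's specification.

```python
from dataclasses import dataclass, field
from datetime import timedelta
from enum import Enum

class AccessTier(Enum):
    """Model access levels named in the report's shared taxonomy."""
    BLACK_BOX = "black-box"   # query-only access, e.g. via an API
    GREY_BOX = "grey-box"     # partial internals, e.g. logits or activations
    WHITE_BOX = "white-box"   # full access to weights and code

@dataclass
class AccessGrant:
    """Hypothetical access grant applying the report's least-privilege,
    data-minimisation and time-bound principles; fields are assumptions."""
    evaluator: str
    tier: AccessTier
    permitted_artefacts: list = field(default_factory=list)  # data minimisation
    duration: timedelta = timedelta(days=30)                 # time-bound access

    def is_least_privilege(self, required_tier: AccessTier) -> bool:
        # Grant no more access than the evaluation actually requires.
        order = [AccessTier.BLACK_BOX, AccessTier.GREY_BOX, AccessTier.WHITE_BOX]
        return order.index(self.tier) <= order.index(required_tier)

grant = AccessGrant("external-evaluator", AccessTier.GREY_BOX,
                    permitted_artefacts=["logits"], duration=timedelta(days=14))
print(grant.is_least_privilege(AccessTier.WHITE_BOX))  # grey-box grant for a white-box need: True
```

Encoding grants as explicit, time-bound records of this kind is one way the procedural and contractual controls the report calls for could be made auditable in practice.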
The paper introduces a threat taxonomy and an Access–Risk Matrix, providing practical tools for identifying, assessing and mitigating security risks associated with third-party access to sensitive AI models. It calls for a multistakeholder governance framework to ensure that secure access becomes the foundation for safe innovation, not a constraint.
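An Access–Risk Matrix of the kind the paper describes might be operationalised as a lookup from (access tier, asset) pairs to risk ratings and mitigations. The cell values and mitigation lists below are hypothetical illustrations, not the report's actual matrix; only the tier names come from its taxonomy.

```python
# Hypothetical Access-Risk Matrix: tier names follow the report's taxonomy,
# but the assets, risk ratings and mitigations are illustrative assumptions.
RISK_MATRIX = {
    ("black-box", "model weights"): "low",
    ("grey-box",  "model weights"): "medium",
    ("white-box", "model weights"): "high",
    ("black-box", "training data"): "low",
    ("grey-box",  "training data"): "medium",
    ("white-box", "training data"): "high",
}

def required_mitigations(tier: str, asset: str) -> list:
    """Map a matrix cell to example mitigations (illustrative only)."""
    risk = RISK_MATRIX.get((tier, asset), "unassessed")
    return {
        "low": ["rate limiting", "logging"],
        "medium": ["secure research environment", "NDA", "logging"],
        "high": ["air-gapped environment", "vetted personnel", "time-bound access"],
    }.get(risk, ["conduct risk assessment before granting access"])

print(required_mitigations("white-box", "model weights"))
# ['air-gapped environment', 'vetted personnel', 'time-bound access']
```

The point of such a lookup is that risk decisions become explicit and reviewable, which supports the periodic review and feedback loops recommended above.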
This approach is essential for defence and security professionals seeking to harness frontier AI while safeguarding against intellectual property theft, model manipulation and weaponisation by adversaries.