Advancing Security and Accountability in Agentic AI Systems
In response to the National Institute of Standards and Technology’s (NIST) Request for Information (RFI) on agentic AI systems, BBB National Programs’ Privacy Initiatives team submitted comments outlining key security considerations for these emerging technologies.
Agentic AI systems, which may incorporate planning, tool use, memory, and the ability to autonomously execute tasks within defined boundaries, present significant opportunities for innovation and operational efficiency across sectors. At the same time, these systems introduce new governance, security, and accountability considerations that warrant thoughtful, risk-based evaluation.
In its submission, BBB National Programs made four key recommendations to help strengthen security and trust in agentic AI systems.
1. Recognize the Role of Independent Accountability Mechanisms
First, policymakers should recognize and prioritize the role of independent third-party accountability mechanisms in verifying the responsible use of agentic AI systems. Independent oversight can play a pivotal role in strengthening governance frameworks, improving system security, and reinforcing public trust in increasingly autonomous technologies.
2. Encourage Risk-Based, Tiered Oversight
Second, oversight frameworks should be risk-based and tiered. Aligning safeguards and documentation requirements with the real-world impact of a system’s deployment allows organizations and regulators to focus resources where risks are greatest while enabling responsible innovation in lower-risk contexts.
3. Translate Governance Principles into Operational Standards
Third, high-level AI governance principles should be translated into operational and measurable standards. Practical guidance on issues such as tool-use controls, memory governance, human-in-the-loop thresholds, and incident response procedures can help ensure that agentic AI systems are secure by design.
4. Promote Continuous Monitoring and Supply Chain Accountability
Finally, effective governance must include adaptive monitoring and clearly defined responsibilities across the AI supply chain. Continuous oversight, including post-deployment review, helps ensure that accountability evolves alongside increasingly capable and autonomous AI systems.
Independent Accountability as a Complement to Government Enforcement
Independent accountability mechanisms are not intended to replace government enforcement. Rather, they serve as complementary governance tools that strengthen compliance incentives, reduce regulatory uncertainty, and enhance confidence among consumers and businesses alike.

As agentic AI systems become more deeply integrated into commercial and societal infrastructure, recognizing independent, risk-based accountability mechanisms can help operationalize security principles in practice while supporting responsible innovation.
BBB National Programs is well positioned to contribute to the development of third-party accountability mechanisms tailored to agentic AI systems. Drawing on decades of experience administering certification programs, dispute resolution systems, and co-regulatory frameworks, the organization has developed practical expertise in translating emerging policy expectations into measurable and enforceable standards.