Agentic AI systems—AI systems that can pursue complex goals with limited direct supervision—are likely to be broadly useful if we can integrate them responsibly into our society.
While such systems have substantial potential to help people more efficiently and effectively achieve their own goals, they also create risks of harm.
In this white paper, we suggest a definition of agentic AI systems and the parties in the agentic AI system life-cycle, and highlight the importance of agreeing on a set of baseline responsibilities and safety best practices for each of these parties.
As our primary contribution, we offer an initial set of practices for keeping agents’ operations safe and accountable, which we hope can serve as building blocks in the development of agreed baseline best practices.
We enumerate the questions and uncertainties around operationalizing each of these practices that must be addressed before such practices can be codified.
We then highlight categories of indirect impacts from the wide-scale adoption of agentic AI systems, which are likely to necessitate additional governance frameworks.
Chapter 1 sets the stage for the comprehensive discussion on governing agentic AI systems. It introduces the concept of agentic AI, systems capable of independently pursuing complex goals with limited supervision.
The chapter outlines the dual nature of these systems, acknowledging both their potential to significantly benefit society and the inherent risks they pose.
The introduction serves as a primer for the detailed exploration of definitions, benefits, risks, and governance strategies in subsequent chapters.
The rise of increasingly agentic AI systems marks a significant evolution in the field of artificial intelligence.
These systems, characterized by their ability to adapt and pursue complex objectives autonomously, promise to expand the capabilities and applications of AI significantly.
However, this advancement also introduces new technical and social challenges.
Agentic AI systems differ from more limited AI technologies, such as image generation or language models like GPT-4, in their ability to perform a broader range of actions autonomously and reliably.
This shift towards more independent and goal-driven AI systems could vastly increase their usefulness in various domains but also raises concerns about their safety and ethical implications.
In light of these developments, this white paper aims to define agentic AI systems, identify the key parties involved in their lifecycle, and emphasize the need for establishing baseline responsibilities and safety practices.
The primary contribution of this paper is to propose initial practices for ensuring the safe and accountable operation of agentic AI systems.
These practices are intended to serve as building blocks for developing agreed-upon baseline best practices in the field.
Furthermore, the paper acknowledges the presence of uncertainties and questions around operationalizing these practices, which must be addressed before they can be formalized and widely adopted.
It also highlights the indirect impacts of the widespread adoption of agentic AI systems, such as shifts in labor markets and societal structures, necessitating comprehensive governance frameworks.
The introduction underscores the urgency and importance of addressing the challenges posed by agentic AI systems.
It sets the context for the detailed exploration of these issues in the following chapters, laying the groundwork for a holistic understanding of the subject.
Chapter 1 introduces the concept of agentic AI systems, emphasizing their potential advantages and the risks they pose.
It provides an overview of the paper's objectives, including defining these systems, highlighting the need for baseline safety and accountability practices, and addressing the broader societal impacts of their adoption.
This introduction serves as a foundation for the in-depth analysis and discussions in subsequent chapters, underscoring the importance of responsible integration of agentic AI systems into society.
Here we define key terms and concepts related to agentic AI systems, providing a clear framework for understanding the subsequent discussions on governance, risks, and best practices.
This chapter is crucial for establishing a common language and set of concepts that are referenced throughout the paper.
2.1 Agenticness, Agentic AI Systems, and “Agents”
Agentic AI systems are characterized by their ability to autonomously pursue complex goals over extended periods without detailed instructions.
Unlike more rudimentary AI systems, these agents can operate in diverse environments and respond adaptively to unforeseen situations.
The degree of an AI system's agenticness can be assessed along several dimensions:
Goal Complexity: The range and challenge of goals an AI system can achieve. For instance, an AI capable of analytical reasoning across multiple domains exhibits higher goal complexity than one restricted to basic classification tasks.
Environmental Complexity: The complexity of the environment in which the system can operate successfully. A system capable of functioning across various domains or in multi-stakeholder scenarios demonstrates higher environmental complexity.
Adaptability: The system's ability to handle novel or unexpected circumstances effectively.
Independent Execution: The extent to which the system can function autonomously without human intervention.
It is important to note that agenticness is a matter of degree rather than a binary classification, and that it does not imply consciousness or self-motivation.
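The four dimensions above can be pictured as a simple profile. The following is a minimal, purely illustrative sketch: the dimension names come from the text, but the 0.0–1.0 scoring scale, the averaging scheme, and the example values are assumptions made here for illustration, not anything proposed in the paper.

```python
from dataclasses import dataclass

@dataclass
class AgenticnessProfile:
    """Illustrative profile over the four dimensions named in the text."""
    goal_complexity: float          # range/challenge of achievable goals
    environmental_complexity: float # breadth of environments handled
    adaptability: float             # handling of novel circumstances
    independent_execution: float    # autonomy from human intervention

    def degree(self) -> float:
        """Collapse the dimensions into one (illustrative) degree score."""
        return (self.goal_complexity + self.environmental_complexity
                + self.adaptability + self.independent_execution) / 4

# A narrow classifier sits low on every dimension; a general-purpose
# assistant that plans and acts over time sits higher. Values are made up.
classifier = AgenticnessProfile(0.1, 0.2, 0.1, 0.3)
assistant = AgenticnessProfile(0.7, 0.6, 0.5, 0.6)
```

This framing makes the "property, not classification" point concrete: systems occupy a continuum of scores rather than falling into an "agent"/"non-agent" binary.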
2.2 The Human Parties in the AI Agent Life-cycle
Understanding the roles of human actors in the lifecycle of agentic AI systems is essential for allocating responsibilities and ensuring effective governance.
The primary parties involved are:
Model Developers: They develop the underlying AI model that drives the agentic system, setting its fundamental capabilities and behavior.
System Deployers: These actors build and operate the system utilizing the model, including its integration with other tools and user interfaces. They often have domain-specific knowledge and may adjust the AI system for particular use cases.
Users: Users are individuals or entities employing the agentic AI system, providing it with specific goals and possibly overseeing its operation.
Sometimes, these roles may overlap or be shared among different entities, adding complexity to the governance and accountability structures.
Chapter 2 lays the groundwork for understanding agentic AI systems by defining key concepts and identifying the human actors involved in their lifecycle.
By elucidating the multidimensional nature of agenticness and the roles of model developers, system deployers, and users, the chapter provides a necessary foundation for discussing the governance and ethical challenges in later sections.
This framework is crucial for comprehending the complexities involved in integrating agentic AI systems into society responsibly.
Potential Benefits of Agentic AI Systems
Chapter 3 of the white paper delves into the potential benefits that agentic AI systems could bring to society.
It examines how these advanced systems might enhance various aspects of human activity and discusses the implications of their agentic capabilities.
The benefits are explored in two main facets: agenticness as a helpful property and agenticness as an impact multiplier.
3.1 Agenticness as a Helpful Property
Agentic AI systems, when designed safely and with proper governance, can offer numerous advantages:
Higher Quality and Reliable Outputs: Agentic AI systems, like those capable of autonomous internet browsing and iterative learning, can provide more accurate and reliable information, especially in dynamic or evolving contexts.
Efficient Use of Users’ Time: By autonomously performing complex tasks, agentic AI systems can save significant time for users, allowing for smoother, more efficient task completion.
Improved User Preference Solicitation: These systems can interactively and intuitively solicit user preferences, enhancing user experience and effectiveness.
Scalability: Agentic AI systems can enable a single user to perform tasks at a scale or complexity that would otherwise be unattainable, potentially revolutionizing fields like healthcare, where AI can assist in patient care and diagnostics.
3.2 Agenticness as an Impact Multiplier
Agentic AI systems have the potential to greatly amplify the impact of AI technology across various sectors:
Systemic Impacts in Society: The deployment of agentic AI systems could lead to systemic changes, much as the introduction of general-purpose technologies such as the steam engine and electricity did.
Economic and Societal Well-being: There is potential for agentic AI systems to significantly boost economic productivity and contribute to various non-economic measures of societal well-being, such as health and education.
Transformation of Work: These systems could fundamentally alter the nature of work, possibly leading to a more leisure-oriented society, though this is speculative and not without its risks.
In Chapter 3, the discussion highlights the significant potential benefits of agentic AI systems.
These benefits range from improving the efficiency and quality of tasks to potentially transforming economic and societal structures.
The chapter underscores the transformative power of agentic AI systems, comparing them to past general-purpose technologies in terms of their potential impact on society.
While acknowledging the positive prospects, the chapter also hints at the need for caution and responsible integration of these systems into society, considering the profound changes they might bring.
The benefits discussed provide a strong argument for the continued development and integration of agentic AI systems in various sectors, underlining the importance of the governance practices explored in other chapters.
Practices for Keeping Agentic AI Systems Safe and Accountable
In Chapter 4, the white paper shifts focus to the practices necessary for ensuring the safe and accountable operation of agentic AI systems.
Recognizing the potential risks associated with these advanced technologies, the chapter outlines a series of best practices aimed at mitigating these risks.
It emphasizes a multi-layered defense strategy, highlighting the evolving nature of these practices as AI technology progresses.
4.1 Evaluating Suitability for the Task
This section underscores the importance of carefully assessing whether a given AI model is suitable for a specific use case.
It involves evaluating the AI system's reliability across expected deployment conditions and understanding the scope of potential harm it could enable.
The nascent field of agentic AI system evaluation presents unique challenges, especially in predicting system behavior under unanticipated conditions.
4.2 Constraining the Action-Space and Requiring Approval
The paper suggests limiting the range of actions that agentic AI systems can autonomously execute. This involves requiring human approval for critical decisions, especially where there is a risk of significant harm. Balancing the need for safety with the efficiency of AI systems is highlighted as a key challenge.
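The approval-gating idea described above can be sketched as a small policy layer: low-risk actions execute automatically, enumerated high-risk actions require explicit human sign-off, and anything outside the defined action space is refused. This is a hedged illustration only; the action names, risk tiers, and the `approve` callback are hypothetical and not drawn from the paper.

```python
# Actions the deployer has judged safe to run autonomously (illustrative).
LOW_RISK_ACTIONS = {"search_web", "read_file", "draft_email"}
# Actions with significant potential for harm, requiring human approval.
HIGH_RISK_ACTIONS = {"send_email", "execute_payment", "delete_file"}

def gate_action(action: str, approve) -> str:
    """Return 'executed', 'pending_approval', or 'blocked' for a proposed action."""
    if action in LOW_RISK_ACTIONS:
        return "executed"
    if action in HIGH_RISK_ACTIONS:
        # Critical decisions proceed only with explicit human sign-off.
        return "executed" if approve(action) else "pending_approval"
    # Anything outside the constrained action space is refused outright.
    return "blocked"
```

The safety/efficiency tension the paper highlights shows up directly in how the risk tiers are drawn: widening `LOW_RISK_ACTIONS` speeds the agent up but shrinks the human checkpoint.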
4.3 Setting Agents’ Default Behaviors
Designing AI systems with default behaviors that minimize risk and align with common-sense user preferences is recommended. This includes building in mechanisms for the systems to be aware of their uncertainties and seek user clarification when needed.
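One way to picture such a default is a confidence check that routes low-confidence interpretations of the user's goal back to the user instead of acting on a guess. The threshold value and return structure below are assumptions for illustration, not a mechanism from the paper.

```python
# Illustrative threshold: below this confidence, the agent asks rather than acts.
CONFIDENCE_THRESHOLD = 0.8

def decide(interpretation: str, confidence: float):
    """Default to clarification when the agent is unsure of the user's intent."""
    if confidence < CONFIDENCE_THRESHOLD:
        # Risk-minimizing default: surface the uncertainty to the user.
        return ("ask_user", f"Did you mean: {interpretation}?")
    return ("act", interpretation)
```

Example: `decide("book the 9am flight", 0.95)` acts immediately, while `decide("book the 9am flight", 0.5)` returns a clarifying question, matching the common-sense preference that an agent check before committing to an ambiguous goal.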
4.4 Legibility of Agent Activity
Ensuring transparency in the decision-making processes of AI agents is crucial. This involves making the agents’ reasoning processes visible to users, enabling them to understand, monitor, and, if necessary, intervene in the AI’s actions.
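A minimal sketch of legibility, under the assumption that the agent emits its reasoning and actions as structured log entries a user can review: the entry fields and rendering format below are illustrative choices, not the paper's specification.

```python
def log_step(log: list, step_type: str, content: str) -> None:
    """Append one structured entry ('reasoning' or 'action') to the trace."""
    log.append({"type": step_type, "content": content})

def render(log: list) -> str:
    """Render the trace so a user can review, and intervene on, the agent's plan."""
    return "\n".join(f"[{entry['type']}] {entry['content']}" for entry in log)

# Hypothetical trace; the scenario and values are made up for illustration.
trace = []
log_step(trace, "reasoning", "User wants a refund; policy allows it under 30 days.")
log_step(trace, "action", "issue_refund(order_id='A123')")
```

Because the trace separates reasoning from actions, a user (or an automated monitor) can spot a faulty inference before the corresponding action is taken.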
4.5 Automatic Monitoring
The chapter discusses the use of secondary AI systems to monitor the primary agentic AI's reasoning and actions. This approach is meant to augment human oversight but comes with challenges related to privacy and the potential for overextension of monitoring functions.
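As a toy illustration of the monitoring pattern: a secondary check screens each proposed action before execution and escalates anything suspicious to human oversight. In practice the monitor would itself likely be an AI model; the keyword screen and red-flag list here are stand-in assumptions.

```python
# Illustrative red-flag patterns a deployer might screen for.
RED_FLAGS = ("wire transfer", "disable logging", "credentials")

def monitor(proposed_action: str) -> str:
    """Secondary check over the primary agent's proposed action."""
    lowered = proposed_action.lower()
    if any(flag in lowered for flag in RED_FLAGS):
        return "escalate"  # pause the action and route it to human oversight
    return "allow"
```

The privacy tension noted above is visible even in this sketch: the monitor must read every proposed action, so its scope and data handling need the same governance scrutiny as the primary agent.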
4.6 Attributability
Creating mechanisms for tracing actions back to specific AI instances is proposed to enhance accountability and deter misuse. This could involve unique identifiers for AI systems, particularly in high-stakes interactions.
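The unique-identifier idea can be sketched as follows: each agent instance is assigned an ID at creation, and every action record it produces carries that ID plus the responsible deployer, so actions can later be traced back. The record format and field names are assumptions for illustration.

```python
import uuid

class AgentInstance:
    """Illustrative agent wrapper that stamps every action with its identity."""

    def __init__(self, deployer: str):
        self.instance_id = str(uuid.uuid4())  # unique per agent instance
        self.deployer = deployer              # party operating this instance

    def act(self, action: str) -> dict:
        """Produce a traceable record for an action this instance takes."""
        return {"instance_id": self.instance_id,
                "deployer": self.deployer,
                "action": action}

# Hypothetical deployer name, for illustration only.
agent = AgentInstance(deployer="example-deployer")
record = agent.act("post_listing")
```

In a high-stakes interaction, a counterparty receiving `record` can verify which instance, operated by which deployer, took the action, which is the accountability property the paper is after.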
4.7 Interruptibility and Maintaining Control
Finally, the importance of being able to interrupt or shut down AI systems is discussed. This includes designing systems with the capability for graceful fallbacks during interruptions and ensuring that these fallbacks are robust enough to handle the complexities of agentic AI operations.
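A minimal sketch of interruptibility with a graceful fallback, under the assumption that the agent checks a stop signal between steps: when interrupted it runs a cleanup step (e.g. saving state) rather than halting mid-task. The step structure and fallback behavior are illustrative.

```python
import threading

def run_agent(steps, stop_event: threading.Event):
    """Execute steps, checking for interruption before each one."""
    completed = []
    for step in steps:
        if stop_event.is_set():
            # Graceful fallback instead of an abrupt mid-task halt.
            completed.append("fallback: saved state and released resources")
            break
        completed.append(step)
    return completed

stop = threading.Event()
stop.set()  # operator interrupts before the run begins
interrupted = run_agent(["step1", "step2"], stop)

uninterrupted = run_agent(["step1", "step2"], threading.Event())
```

The design choice worth noting is that the fallback is part of the agent's own control loop: robustness of the shutdown path has to be engineered and tested just like the agent's primary behavior.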
Chapter 4 of the white paper presents a comprehensive set of practices aimed at ensuring the safety and accountability of agentic AI systems.
These practices span from evaluating the suitability of AI systems for specific tasks to ensuring their interruptibility and control.
The chapter underscores the importance of transparency, user involvement, and the careful design of AI systems to minimize risks.
It recognizes the challenges in operationalizing these practices, especially as AI systems become more advanced.
The proposed practices serve as a foundation for developing robust governance frameworks, highlighting the need for continuous adaptation and collaboration among various stakeholders in the AI ecosystem.
Indirect Impacts from Agentic AI Systems
Chapter 5 explores the broader, indirect impacts of widespread agentic AI system adoption on society.
Recognizing that the influence of these systems extends beyond their immediate functionalities, the chapter categorizes potential societal shifts and challenges that could arise from the collective use of agentic AI.
It highlights the necessity for proactive measures to mitigate these impacts, involving a combination of industry-wide collaborations and broader societal interventions.
5.1 Adoption Races
The competitive advantage offered by agentic AI systems may lead to rapid adoption rates, potentially at the expense of thorough vetting for reliability and safety.
This "race" to adopt such systems could result in widespread use of untested and potentially unsafe AI technologies, especially in high-stakes domains.
The risk of overreliance on these systems, despite their unreliability in certain rare but critical situations, is a significant concern.
5.2 Labor Displacement and Differential Adoption Rates
Agentic AI systems are likely to have profound effects on the labor market. By automating and augmenting tasks, these systems could redefine what constitutes "routine" work, potentially displacing a significant number of jobs.
Conversely, they might also create opportunities for upskilling and new job creation.
The differential impact on various sectors and the potential widening of the digital divide are crucial considerations.
5.3 Shifting Offense-Defense Balances
Certain tasks might be more susceptible to automation by agentic AI systems than others, potentially disrupting existing harm mitigation equilibria in society.
For instance, if cyber-attack capabilities are automated more effectively than defense mechanisms, it could lead to heightened vulnerabilities.
Understanding and anticipating these shifts is critical for maintaining societal security and balance.
5.4 Correlated Failures
The risk of simultaneous or similar failures among AI systems, due to shared algorithms or data sources, is a concern.
Such "algorithmic monoculture" could lead to systemic vulnerabilities and biases, amplifying the impact of any single failure across multiple systems and sectors.
Ensuring diversity in AI development and robust fallback mechanisms is essential to guard against these correlated failures.
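The fallback idea can be illustrated with a simple routing sketch: requests go to a primary provider, and on failure fall through to an independently built alternative, so a single shared-model failure does not propagate through the whole pipeline. The provider functions and failure behavior below are hypothetical.

```python
def answer_with_fallback(query: str, providers) -> str:
    """Try each provider in order; raise only if all of them fail."""
    last_error = None
    for provider in providers:
        try:
            return provider(query)
        except RuntimeError as err:  # treat a provider outage as retriable
            last_error = err
    raise RuntimeError(f"all providers failed: {last_error}")

def provider_a(query: str) -> str:
    raise RuntimeError("provider A outage")  # simulated correlated-failure event

def provider_b(query: str) -> str:
    return f"answer from provider B: {query}"

result = answer_with_fallback("status?", [provider_a, provider_b])
```

The sketch only helps if `provider_b` is genuinely independent of `provider_a` (different model, training data, and infrastructure); a fallback sharing the same algorithmic monoculture would likely fail at the same time.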
Chapter 5 delves into the complex indirect impacts that could arise from the widespread adoption of agentic AI systems.
These impacts include competitive pressures leading to premature adoption, potential labor market disruptions, shifts in societal harm mitigation balances, and the risk of correlated failures due to similar AI algorithms and data sources.
The chapter emphasizes the need for comprehensive strategies and proactive measures to address these challenges.
It calls for a collaborative approach involving various stakeholders, including policymakers, to ensure that the benefits of agentic AI systems are realized while minimizing potential negative consequences on a societal scale.
In the concluding chapter of the white paper, the discussion synthesizes the insights gained from the examination of agentic AI systems.
It reiterates the growing presence and influence of these advanced AI systems and underscores the need for significant measures to ensure their safe and reliable operation.
The chapter calls for a collaborative effort among scholars, practitioners, and policymakers to develop and refine best practices for agentic AI governance.
As we stand on the cusp of a new era marked by increasingly agentic AI systems, the white paper concludes with a reflection on the urgency and complexity of the challenges ahead.
It acknowledges that while the proposed practices and frameworks provide a starting point, they are not comprehensive solutions.
The dynamic nature of AI technology means that governance strategies will need to evolve continuously to keep pace with technological advancements.
The paper emphasizes that determining the responsibilities for implementing these practices, and ensuring their effectiveness and affordability, will be a collective endeavor.
It calls for ongoing dialogue and cooperation among various stakeholders in the AI ecosystem.
This collaborative approach is vital for addressing the multi-faceted challenges posed by agentic AI systems, ranging from technical and ethical issues to broader societal impacts.
Moreover, the conclusion acknowledges that reaching agreement on best practices is not a one-time effort but an iterative process.
As AI capabilities advance rapidly, society will need to regularly reassess and update best practices to mitigate new risks and leverage emerging opportunities.
The white paper concludes with a hopeful note, envisioning a future where agentic AI systems are integrated into society in a way that maximizes their potential benefits while minimizing risks.
This vision is contingent on the proactive and responsible engagement of all parties involved in AI development, deployment, and governance.
Chapter 6 concludes the white paper by summarizing the key points discussed and highlighting the need for ongoing efforts to govern agentic AI systems effectively.
It calls for a collaborative and adaptive approach to developing governance frameworks, reflecting the evolving nature of AI technology.
The chapter underscores the importance of continuous dialogue and reassessment of best practices among stakeholders to ensure the safe, ethical, and beneficial integration of agentic AI systems into society.
This conclusion serves as a call to action for responsible and forward-thinking governance in the rapidly advancing field of AI.
Comprehensive Analysis and Review of the White Paper on Agentic AI Systems
In-depth Definition of Agentic AI: The paper provides a clear and comprehensive definition of agentic AI systems, aiding in the understanding of their capabilities and scope. This foundational clarity is essential for informed discussion and policy-making.
Identification of Key Stakeholders: By outlining the roles of model developers, system deployers, and users, the paper effectively delineates responsibilities, which is crucial for accountability in AI governance.
Highlighting the Potential Benefits: The discussion of benefits, such as efficiency and scalability, illustrates the positive impact agentic AI systems could have on various sectors, fostering a balanced perspective on AI advancements.
Emphasis on Safety and Accountability: The focus on developing practices for safety and accountability addresses the primary concerns associated with advanced AI systems, promoting a responsible approach to AI development.
Addressing Societal Impacts: The paper thoughtfully considers indirect societal impacts, such as labor displacement and offense-defense balances, demonstrating a holistic view of AI's role in society.
Detailing Governance Practices: The detailed exploration of governance practices, like evaluating task suitability and setting default behaviors, provides practical guidance for managing agentic AI systems.
Consideration of Ethical Implications: Ethical considerations, such as user privacy in automatic monitoring, are given due attention, underscoring the importance of ethics in AI development.
Focus on Transparency: Advocating for the legibility of agent activity aligns with the growing demand for transparent AI systems, which is key for user trust and understanding.
Adaptive Governance Approach: Acknowledging the need for evolving governance practices reflects an understanding of the dynamic nature of AI technology.
Collaborative Call to Action: The conclusion’s emphasis on collaborative efforts among various stakeholders encourages a unified approach to addressing the challenges of agentic AI systems.
Potential for Over-Reliance on AI: The paper mentions the risk of over-reliance on AI systems, particularly in critical domains, which could lead to neglect of human judgment and expertise.
Risk of Accelerated Adoption: The mention of adoption races highlights the danger of rapid, unchecked AI adoption, possibly leading to widespread use of immature or unsafe AI technologies.
Labor Market Disruptions: The potential for significant job displacement due to AI automation is a major concern, signaling the need for proactive measures to address workforce transitions.
Challenge in Operationalizing Best Practices: The paper acknowledges difficulties in operationalizing proposed practices, indicating potential hurdles in practical implementation and enforcement.
Complexity in Balancing Safety and Innovation: The need to balance safety with the efficiency and capabilities of AI systems presents a complex challenge, as overly stringent controls could stifle innovation.
Source document here: https://cdn.openai.com/papers/practices-for-governing-agentic-ai-systems.pdf
Yonadav Shavit∗ Sandhini Agarwal∗ Miles Brundage∗ Steven Adler Cullen O’Keefe Rosie Campbell Teddy Lee Pamela Mishkin Tyna Eloundou Alan Hickey Katarina Slama Lama Ahmad Paul McMillan Alex Beutel Alexandre Passos David G. Robinson