Rise of the KnowMeBots: Promoting the Two Dimensions of AI Agency

Abstract

“We believe that AI will be about individual empowerment and agency at a scale that we’ve never seen before, and that will elevate humanity. . .”[1] – Sam Altman, CEO, OpenAI (2023)

To date, public policy debates over artificial intelligence (AI)—from the EU AI Act, to the Biden Administration’s Executive Order on AI, to the Bletchley Park Declaration—have focused on limiting harms from the largest vertically integrated Institutional AI providers. Typically, this involves creating greater transparency, oversight, and “guardrails” for the riskiest proposed use cases. While important, this “AI Accountability” agenda remains incomplete on its own.

Today, no standardized ways, or designated intermediaries, exist for humans—on their own terms—to connect with, query, and elicit desired actions from third-party computational systems. Under what could be called an “AI Agency” agenda, however, individuals would use advanced digital technology to actively promote their own best interests. In particular, ordinary people could employ trusted personal AI (PAI) agents to engage directly with the online world. This would include interrogating and contesting consequential decisions made by Institutional AIs and other computational systems. Where AI Accountability focuses on installing guardrails against the riskiest uses, AI Agency creates functional merge lanes in between, enabling greater competition, innovation, and consumer choice.

By definition, an agent is able to (1) make decisions and take actions and (2) do so on behalf of someone else. This paper describes two interrelated dimensions of the legal concept of agency: capabilities and relationship. What OpenAI researchers call “agenticity” refers to the ability of an advanced AI system to interact with and make a multitude of decisions in complex environments. This paper introduces the second dimension of “agentiality,” which refers to the forms of relationship that allow an AI to authentically represent its principal. This dimension roughly correlates to the “tetradic” user alignment proposed in a recent paper by Google DeepMind, which posits balancing relationship values among the user, the AI assistant, the developer, and society. True agency, then, requires both the advanced capabilities to do things for us (agenticity) and the legitimate relationships to fully represent us (agentiality). This paper’s premise is that both dimensions of agency must be closely aligned if we are to harness advanced computational systems in ways that enhance and promote our human autonomy.

The paper explores one crucial aspect of each agency dimension. First, to better promote the advanced capabilities of agenticity—along with greater competition, innovation, and human choice—vertical AI interoperability must provide a technical means for our own PAIs to connect with and influence decisions rendered for us by larger Institutional AIs. The paper relies on a layered framework based on the Levels of Conceptual Interoperability Model (LCIM) as a way to enhance agenticity.

Second, to better promote authentic relationships with PAIs, the paper discusses the notion of trusted intermediation, including Net fiduciaries, as a design element for the agentiality dimension. Unless acting under established duties of care and loyalty, PAIs risk becoming “double agents”—claiming to represent the principal while in fact working covertly on behalf of others.

Lastly, the paper proposes next steps for enacting a robust two-dimensional AI Agency agenda, spanning the standards-making, corporate policy, and public policy realms. Elements of these proposed action items are also discussed in the author’s book Reweaving the Web, with his proposed trust-based Web overlay called GliaNet.


