Read More
Want to learn more about what we've been working on? Here are some resources to dive deeper.
Developing standards and protocols for AI agents that are secure, transparent, and loyal to users' interests
Stanford's Digital Economy Lab and Consumer Reports (CR) are joining forces to create a forward-looking Initiative dedicated to developing and governing consumer-authorized AI agents. By bringing together market leaders, technology pioneers, and policy experts, this coalition will tackle the complex challenge of ensuring that AI agents act securely, transparently, and in the user's best interests. The Initiative will focus on defining standards, prototyping next-generation software protocols, and demonstrating real-world applications in a sandbox environment—ultimately creating a more dynamic, competitive marketplace driven by "loyal by design" AI.
Creating new standards for AI agent security, transparency, and user loyalty
Developing next-generation protocols for secure AI agent interactions
Testing AI agents in sandbox environments to ensure they meet safety requirements
Prioritizing user control, privacy, and security in AI agent design
The Initiative is led by Professor Alex 'Sandy' Pentland, with essential funding and resources provided by Project Liberty Institute, the Stanford Digital Economy Lab, and Code-X.
We're collaborating on new extensions to OAuth, OpenID, and other protocols to ensure safe delegation of permissions, robust authentication, and reliable user consent—balancing security, usability, and long-term viability.
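To illustrate the kind of delegation flow this protocol work concerns, the sketch below assembles an OAuth 2.0 Token Exchange (RFC 8693) request body that an AI agent might send to a token endpoint to act on a user's behalf. This is a minimal illustration, not the Initiative's actual design; the scopes and token placeholders are hypothetical.

```python
from urllib.parse import urlencode

# OAuth 2.0 Token Exchange (RFC 8693): the agent presents the user's token
# as the "subject" (evidence of user consent) and its own credential as the
# "actor" (its identity), so the issued token records delegated, auditable
# authority. All concrete values below are hypothetical placeholders.
params = {
    "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
    "subject_token": "<user-access-token>",       # proves the user authorized this
    "subject_token_type": "urn:ietf:params:oauth:token-type:access_token",
    "actor_token": "<agent-credential>",          # identifies the acting agent
    "actor_token_type": "urn:ietf:params:oauth:token-type:access_token",
    "scope": "purchases:read purchases:propose",  # narrowly delegated rights
}

# Form-encode the parameters for the POST to the authorization server's
# token endpoint (the HTTP call itself is omitted here).
body = urlencode(params)
```

Because the resulting token carries both the user (subject) and the agent (actor), downstream services can verify not just *that* an action was authorized but *who* is acting and on whose behalf, which is the kind of reliable consent and auditability the protocol extensions aim to standardize.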
Our team is developing and piloting AI agents in a test environment to identify best practices for agent autonomy and verify that agents meet security, privacy, and ethical requirements.
Members explore high-impact applications such as AI-driven financial assistants, identity management, and consumer advocacy tools, documenting success stories and lessons learned.
Drawing on Stanford's leadership in AI and economic research—and Consumer Reports' proven capacity for policy, standard-setting, and consumer advocacy—we produce guidelines and publications that inform both industry and policymakers.
Open Source Commitment: The Initiative develops and uses open-source software, with the intention that all software be released under the MIT License, ensuring broad accessibility and adoption.
Building frameworks for equitable data sharing and value creation in AI-driven ecosystems.
Developing methods to help web services protect themselves from unwanted bots while enabling consumer-authorized AI agents to safely interact with websites and services.
Unlocking e-commerce and financial use cases for AI agents with enhanced security, scrutiny, and KYC protocols to protect against fraud and mistakes.
Legal work on liability and risks associated with AI agent deployments, addressing duty of loyalty, principal-agent problems, and policy recommendations.
Developing robust tools for data confidentiality, privacy-preserving identity, and security controls as AI agents manage more of users' digital footprints.
All workstreams are open to participation by corporate stakeholders, and the outputs of every workstream will be shared publicly.
For more information or to explore how you can support this effort, please contact:
For specific inquiries, please contact:
Administrative: Christie Ko, Executive Director, Stanford Digital Economy Lab
Technical: Tobin South, Research Lead, Stanford & MIT