AI systems that act on behalf of users—whether navigating benefits, triaging legal questions, or providing emotional support—are rapidly moving from prototypes to deployment. These “loyal agents” and “AI companions” raise a shared set of questions: loyal to whom, safe for whom, and accountable how?
- What loyalty means in practice: What it takes for multi-step AI systems—those that take actions, call tools, and interact with users without constant supervision—to remain loyal to users across settings from healthcare workflows and public-service delivery to emotionally salient, sometimes therapy-adjacent companionship products.
- From speech to service: When AI systems present as helpers or companions, consumer-protection, product-liability, child-safety, and mental-health frameworks become central. The Character.AI litigation illustrates how negligence and design-choice scrutiny may shape regulation.
- Design obligations and safety-by-design: Auditability, human-oversight patterns, age gating, dependency mitigation, disclosure, escalation to human support, and interaction with standards like the NIST AI Risk Management Framework.
- Enforcement and liability: State Attorneys General are emerging as key actors using UDAP, child-protection, and privacy law; litigation is functioning as de facto governance for companion AI.
LOCATION: Paul Brest Hall, Munger Graduate Residence
DATE: April 15, 2026
TIME: 1:00–2:00 pm
Michael Atleson
Ashleigh Golden
Robert Morris
Laura Protzmann