Thinking Machine

Human-AI Governance (HAIG)

A Trust-Utility Approach

Human-AI Governance (HAIG): A Trust-Utility Approach (2025) by Zeynep Engin

This paper introduces the Human-AI Governance (HAIG) framework, a dimensional approach to analyzing and governing the evolving trust dynamics of human-AI relationships. Traditional categorical frameworks (e.g., "human-in-the-loop") fail to capture the nuanced transitions as AI systems progress from tools to partners; HAIG instead proposes viewing AI agency along continua rather than fixed categories. This is particularly pertinent for foundation models with emergent capabilities and for multi-agent systems that exhibit autonomous goal-setting behaviors.

HAIG posits that agency redistributes in complex patterns best represented as positions along these continua, accommodating both gradual shifts and significant step changes. The framework operates at three levels: dimensions (Decision Authority Distribution, Process Autonomy, and Accountability Configuration), continua (the gradual shifts possible along each dimension), and thresholds (critical points at which governance must adapt).

Central to HAIG is its trust-utility orientation: rather than taking a purely risk-based or principle-based approach to governance, it prioritizes maintaining appropriate trust relationships that maximize utility while ensuring sufficient safeguards.
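To make the three-level structure concrete, here is a minimal sketch of how the framework's dimensions, continua, and thresholds might be represented in code. This is not from the paper: the 0-to-1 scaling, the threshold values, and every name below are illustrative assumptions, not part of HAIG itself.

```python
# Illustrative sketch only: HAIG defines its dimensions conceptually;
# the 0-1 scaling and threshold values here are assumptions.
from dataclasses import dataclass

@dataclass
class HAIGProfile:
    """Position of a human-AI system along HAIG's three continua
    (0.0 = fully human-held, 1.0 = fully AI-held)."""
    decision_authority: float   # Decision Authority Distribution
    process_autonomy: float     # Process Autonomy
    accountability: float       # Accountability Configuration

# Hypothetical threshold points at which governance adaptation is required.
THRESHOLDS = {
    "decision_authority": 0.5,  # e.g., AI begins making consequential choices
    "process_autonomy": 0.7,    # e.g., autonomous goal-setting emerges
    "accountability": 0.4,      # e.g., responsibility becomes hard to trace
}

def crossed_thresholds(profile: HAIGProfile) -> list[str]:
    """Return the dimensions whose continuum position has crossed its
    governance-adaptation threshold."""
    return [
        dim for dim, limit in THRESHOLDS.items()
        if getattr(profile, dim) >= limit
    ]

# A system that has drifted from tool-like toward partner-like behavior:
system = HAIGProfile(decision_authority=0.6, process_autonomy=0.75, accountability=0.3)
print(crossed_thresholds(system))  # ['decision_authority', 'process_autonomy']
```

The point of the dimensional representation is visible here: the same system can sit at different positions on each continuum, and governance responses attach to threshold crossings per dimension rather than to a single categorical label for the whole system.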
