AI Systems (Operational Governance)
Layer-2 domain pack: descriptive invariants about metrics, cross-boundary effects, feedback, and comparator tiering in AI deployments.
Layer-2
descriptive
grounded
L2-AI-01 — Model Metrics Compress Reality
Statement
Model evaluation metrics compress diverse behaviors into tractable scores that can distort optimization.
Primitives
P10 P4 P6 P5
Composites
C3 C4
Notes
Benchmarks are comparators; optimizing them can miss deployment context.
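A minimal, hypothetical sketch of the compression effect (all names and numbers are illustrative, not from the pack): a benchmark that scores only a subset of behaviors rewards pouring effort into that subset, so the comparator improves much faster than overall quality.

```python
# Hypothetical illustration: a "true" objective over many behaviors,
# and a benchmark that compresses them by scoring only a subset.
BEHAVIORS = 10
SCORED = 3  # the benchmark observes only the first 3 behaviors

def true_quality(weights):
    # Overall quality averages across all behaviors.
    return sum(weights) / len(weights)

def benchmark_score(weights):
    # The comparator averages only the scored subset.
    return sum(weights[:SCORED]) / SCORED

def optimize_for_benchmark(weights, budget=1.0):
    # Pour the entire improvement budget into the scored behaviors.
    out = list(weights)
    for i in range(SCORED):
        out[i] += budget / SCORED
    return out

base = [0.5] * BEHAVIORS
tuned = optimize_for_benchmark(base)

print(benchmark_score(tuned) - benchmark_score(base))  # large gain on the comparator
print(true_quality(tuned) - true_quality(base))        # much smaller gain in reality
```

The gap between the two deltas is the distortion: the same optimization step looks like a large win on the benchmark and a modest one against the full behavior set.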
L2-AI-02 — Effects Can Cross Product and Organizational Boundaries
Statement
AI system effects can manifest outside the product boundary and outside internal ledgers.
Primitives
P1 P5 P9 P6
Composites
C5
Notes
Misalignment between system boundaries and accounting boundaries renders downstream effects invisible to internal ledgers.
L2-AI-03 — Comparators Drift Under Incentives
Statement
Operational comparators tend to drift toward what is easiest to measure and most rewarded.
Primitives
P7 P10 P6
Composites
C11 C3
Notes
Hidden comparators become sacred; metric capture becomes governance reality.
L2-AI-04 — Local Optimization Can Alter System-Level Outcomes
Statement
Subsystem optimization can shift system-level outcomes through misaligned objectives and coupling.
Primitives
P1 P6 P10 P7
Composites
C10
Notes
Local KPI wins can degrade global behavior.
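A hypothetical two-subsystem sketch of the coupling effect (capacities and rates are made up for illustration): subsystem A is scored on what it emits, but B's capacity is finite, so A's local optimum creates a global backlog.

```python
# Hypothetical coupled subsystems: A pushes work downstream to B,
# which can only absorb a fixed amount per period.
CAPACITY_B = 50

def a_kpi(push_rate):
    # A's KPI counts only what A emits, not what B can absorb.
    return push_rate

def global_backlog(push_rate):
    # Work that exceeds B's capacity piles up system-wide.
    return max(0, push_rate - CAPACITY_B)

rates = range(0, 101, 10)
best_for_a = max(rates, key=a_kpi)                               # A's local optimum
best_globally = max(r for r in rates if global_backlog(r) == 0)  # highest backlog-free rate

print(best_for_a, global_backlog(best_for_a))        # local KPI win, global backlog
print(best_globally, global_backlog(best_globally))  # lower KPI, no backlog
```

The misaligned objective is visible in the gap between the two optima: A's KPI keeps rewarding pushes past the point where the system as a whole starts accumulating backlog.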
L2-AI-05 — Feedback Delays Distort Risk Perception
Statement
System effects may surface only after a delay, biasing governance attention toward what is immediately visible.
Primitives
P6 P10 P5
Composites
C9 C10
Notes
Delayed feedback increases oscillation and misattribution.
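The oscillation claim can be shown with a toy correction loop (parameters are illustrative assumptions, not from the pack): when the loop corrects a deviation using a stale reading, it keeps correcting after the problem has already moved, overshoots the target, and swings back.

```python
# Hypothetical governance loop: each step applies a correction
# proportional to a reading that is `delay` steps old.
def simulate(delay, gain=0.5, steps=30):
    level = 1.0       # deviation from the target of 0.0
    history = [level]
    for t in range(steps):
        observed = history[max(0, t - delay)]  # possibly stale reading
        level -= gain * observed               # correction based on that reading
        history.append(level)
    return history

immediate = simulate(delay=0)  # decays smoothly toward the target
delayed = simulate(delay=2)    # overshoots past zero and oscillates

print(min(immediate) >= 0.0)  # never overshoots
print(min(delayed) < 0.0)     # overshoots the target
```

With no delay the loop decays monotonically; the same gain with a two-step delay overshoots and oscillates, which is the distortion the invariant describes.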
L2-AI-06 — Attribution Simplifies Systemic Causality
Statement
Outcomes are often attributed to individuals or single causes despite recursive system structure.
Primitives
P6 P7 P10
Composites
C8 C4
Notes
Blame narratives replace loop-level diagnosis.
Use this pack to map real artifacts (policies, configs, incidents) into the spine. Then run a gap-check: what grounding effects are missing?