The current discourse surrounding Large Language Model (LLM) performance remains mired in qualitative descriptors. Terms like "reasoning," "understanding," and "emergent behavior" lack the formal rigor required for precision engineering and high-stakes capital allocation. To move beyond heuristic-based evaluation, we must transition to a framework grounded in the conservation laws of information.