The Optimization Trap
Section I · THE FORD CONTRADICTIONS · Henry Ford · Volume I
The Mechanism
Optimization narrows focus. Every efficiency gain makes the system better at producing a specific output. When the output stops being what the market wants, the dashboard continues to improve: unit costs fall, production speed rises, defect rates decline. The numbers are real. They are measuring the wrong thing. The trap cannot be detected from inside because the metrics were designed for the old reality, and the old reality is the only lens the organization has.
The Story
Between 1921 and 1926, the Model T's market share dropped from 56% to 34%. During those same five years, every internal metric Ford tracked was improving. Unit costs went down. Production speed went up. The dashboard said the machine was getting better. Sloan at GM had built a different category entirely: cars as identity, as aspiration, as annual renewal. Ford's production system, optimized beyond anything the world had seen for one specific car, could not produce anything else without being rebuilt from the ground up. Ford measured production efficiency. He should have been measuring whether anyone still wanted what he was producing.
Application Scenarios
The product team celebrating declining cost-per-acquisition while net promoter scores silently erode.
Run this diagnostic quarterly: take your five most-tracked internal metrics and, for each, name the external behavior it assumes. CPA assumes the acquired users retain. Conversion rate assumes the converted users are satisfied. Revenue-per-user assumes the revenue comes from value delivered rather than switching costs exploited. Now check each assumption against an independent external signal. If you find a gap between your optimized internal number and the external behavior it claims to represent, you have found the Model T dashboard. Specifically: pull your NPS or CSAT trend on the same graph as your CPA trend. If they are moving in opposite directions, your acquisition machine is filling a leaking bucket and the dashboard is celebrating the fill rate.
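The divergence check above can be sketched in a few lines. This is a minimal illustration with hypothetical numbers, not real data: it compares the direction of an internal metric you optimize (here CPA, where falling is "better") against the external signal it assumes (here NPS, where falling means erosion), and flags the Model T dashboard pattern when they move in opposite directions.

```python
# Hypothetical quarterly figures; substitute your own exports.

def trend(series):
    """Sign of the overall move: +1 rising, -1 falling, 0 flat."""
    delta = series[-1] - series[0]
    return (delta > 0) - (delta < 0)

def diverging(internal, external, internal_good_when="down"):
    """True when the internal metric 'improves' while the external
    signal it depends on is declining."""
    target = -1 if internal_good_when == "down" else 1
    internal_improving = trend(internal) == target
    external_declining = trend(external) == -1
    return internal_improving and external_declining

cpa = [120, 110, 95, 88]   # cost per acquisition, falling = "better"
nps = [41, 38, 33, 29]     # net promoter score, falling = eroding

if diverging(cpa, nps, internal_good_when="down"):
    print("Model T dashboard: CPA improving while NPS erodes.")
```

The same two-function check works for any of the five metric/assumption pairs the diagnostic produces; only the direction flag changes.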
The SaaS company with improving gross margins and declining logo retention.
The margin improvement probably came from cutting customer success headcount or reducing onboarding investment. Those cuts improved the number the board watches while degrading the thing that produces next year's revenue. The structural test: which of your current metrics would still look good six months after the customers stopped loving the product? Those are the metrics that can lie to you. Flag them. Pair each one with a lagging indicator that cannot improve unless the underlying reality is healthy. Gross margin paired with net dollar retention. ACV paired with time-to-value. If the leading indicator improves while the lagging indicator stalls, someone has optimized the speedometer without checking whether the car is still on the road.
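The pairing test can be run mechanically. The sketch below assumes illustrative numbers and the pairings named above (gross margin with net dollar retention, ACV with time-to-value); a direction flag handles lagging indicators where lower is better, such as time-to-value in days.

```python
# Illustrative figures only; the pairings mirror the examples above.

def improving(series):
    """True when the series ends higher than it starts."""
    return series[-1] > series[0]

def flag_gamed_metrics(pairs):
    """Return names of leading metrics that improved while their
    paired lagging indicator stalled or declined."""
    flagged = []
    for name, (leading, lagging, lag_lower_is_better) in pairs.items():
        lag = [-x for x in lagging] if lag_lower_is_better else lagging
        if improving(leading) and not improving(lag):
            flagged.append(name)
    return flagged

pairs = {
    "gross margin / net dollar retention":
        ([0.61, 0.64, 0.68], [1.09, 1.03, 0.96], False),
    "ACV / time-to-value (days)":
        ([24_000, 27_000, 31_000], [14, 18, 23], True),
}

for name in flag_gamed_metrics(pairs):
    print(f"Speedometer optimized, car off the road: {name}")
```

Both hypothetical pairs get flagged here: margin rises while retention falls, and ACV rises while time-to-value lengthens.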
Critical Warning
Schedule a quarterly external review. Someone outside your organization, without access to your internal metrics, evaluates whether the market still wants what your metrics are optimizing for. Every quarter that your metrics improve while your market share declines, you are Ford in 1924. The dashboard is lying to you, and it is lying with real numbers.