Notes from the field — opinions, not slogans.
Why we built 174 Solutions, what the assessment actually measures, why the AI-startup playbook fails the L&D buyer, and how land-and-expand works for AI literacy. This is the launch batch; more will arrive as the work teaches us new things.
Why we built 174 Solutions
The mid-market AI literacy gap, why the existing market doesn't serve it, and what 174 exists to do about it.
What the assessment actually measures
Why three dimensions and not four. Why we score 0–100 instead of stars. Why the assessment ends at a report URL rather than a sales call. The methodology behind the 10-minute organizational AI literacy assessment.
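To make the scoring idea concrete, here is a minimal sketch of a three-dimension, 0–100 composite score. The dimension names, the unweighted mean, and the rounding are all assumptions for illustration, not 174's actual rubric; the essay itself explains the real methodology.

```python
# Illustrative sketch only: dimension names and the unweighted-mean
# aggregation below are hypothetical, not 174's actual rubric.
from dataclasses import dataclass


@dataclass
class DimensionScore:
    name: str
    score: float  # each dimension scored 0-100


def composite(dimensions: list[DimensionScore]) -> float:
    """Unweighted mean of dimension scores, clamped to 0-100."""
    if not dimensions:
        raise ValueError("at least one dimension required")
    mean = sum(d.score for d in dimensions) / len(dimensions)
    return max(0.0, min(100.0, round(mean, 1)))


# Hypothetical dimensions standing in for the real three.
result = composite([
    DimensionScore("tool fluency", 72),
    DimensionScore("judgment", 58),
    DimensionScore("workflow integration", 64),
])
print(result)  # 64.7
```

A single 0–100 number like this is easy to compare across cohorts and over time, which is one plausible reason to prefer it to a star rating.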
The AI-startup playbook is wrong for L&D
A short tour of the visual and copy template that defines most AI-vendor sites — gradients, exclamation marks, productivity claims — and why none of it lands with the buyer who actually controls L&D budgets.
Land-and-expand for AI literacy
Why a 1-seat pilot beats procurement marathons. How to size cohort one. What to measure at week 8 to greenlight expansion. The GTM logic of the 174 program.
The most useful thing on this site is still the assessment.
The essays describe the methodology. The assessment is the methodology applied to your organization. Ten minutes, three dimensions, leadership-shareable report.