From the intake form to the 180-day follow-up — one learner record, four Kirkpatrick levels, generated overnight, not assembled by hand.
Seven chapters on how workforce programs, accelerators, and L&D teams move from satisfaction surveys to cascade evidence. Same architecture from intake to 180-day retention.
Each chapter is published as a stand-alone PDF and as a sub-thread inside this book. Each is self-contained; the whole book reads in roughly ninety minutes.
Kirkpatrick named four levels in 1959. They were always a cascade — Reaction depends on delivery, Learning depends on Reaction, Behavior depends on Learning, Results depend on Behavior. What broke is not the model. What broke is the data architecture beneath it.
The funder email arrives Tuesday. Not about satisfaction scores — those have been answered at 4.3 out of 5. The funder is asking whether participants applied the skills on the job, whether behavior changed at 90 days, whether the cohort produced outcomes worth renewing.
The question is pure Level 3 and Level 4, which is exactly where the answer gets stuck. Level 1 data lives in the post-training survey tool. Level 2 lives in the LMS. Level 3 data was never collected because the 90-day follow-up was "planned for next cycle." Level 4 data lives in the HRIS with no connection back to training. This is the Cascade Break — the four-level cascade gets measured as four disconnected events because no persistent learner identity links the levels across the tools that produce them.
The math is unforgiving. Of the 90% of programs that send a Level 1 survey, only 12% will ever produce a Level 4 results claim that connects back to the same learners. The drop-off happens at Level 3 — where the persistent ID was supposed to carry forward and didn't.
The training lifecycle has four operational stages — enrollment, training, completion, and employment — and they map cleanly onto the five-stage spine that runs through every book in this library.
Read across: at enrollment, Effective Data flows into a Kirkpatrick framework. By training start, the framework is anchored in a Data Dictionary tied to one persistent learner ID. From there, every pulse check, every mentor observation, and every 180-day follow-up becomes a Transformation that feeds the Reports — six funder-ready outputs, generated the morning the cycle closes.
One learner_id from the intake form to the 180-day employer confirmation. SurveyMonkey, Google Forms, and most LMS platforms do not issue this ID by default; links are built after the fact through email matching, which fails when participants use a work email for the pre-survey and a personal email for the post. Sopact Sense assigns the ID at first contact, and every subsequent instrument inherits it automatically. Level 3 stops being a stretch goal.
Most programs collect an intake form and a pre-survey — then lose track of learners until something goes wrong or graduation arrives. The work is not collection. The work is making collection structured at the moment of intake so that Level 2, Level 3, and Level 4 can be computed against the same person ninety, one-eighty, three-sixty-five days later.
Four data buckets cover what a Kirkpatrick-ready intake form needs. Each one fails for a different reason in standard stacks; each one is where Sopact Sense puts a persistent learner_id behind the value at the moment it's captured.
The intake form is in Google Forms. The pre-assessment is in Typeform. The mentor's interview notes are in a shared doc nobody checks. Three artifacts about the same learner, and no shared key between them. When the funder asks at month six who's employed, the coordinator is reconstructing a record that should have existed since day one. A shared key is also what makes intake scoring standardizable across all coordinators, automatically.
The Kirkpatrick Model has been the global standard for training evaluation since 1959. The four levels are conceptually simple. Their power is in the causal chain — each level's outcome depends on the previous level's measurement.
The Kirkpatricks formalized this reverse design in the 2016 New World Kirkpatrick Model, and it is not controversial; it is structurally correct. Level 4 results exist only if Level 3 behaviors happen. Level 3 behaviors exist only if Level 2 capability exists. If you design Level 1 first without knowing the Level 4 target, every downstream level is hope rather than plan.
65% of training evaluations stall between Level 2 and Level 3. The pre-test was in the LMS. The post-survey is in SurveyMonkey. The 90-day follow-up is going out to whoever opens a bulk email — and matching by email address fails when learners use work email for pre and personal email for post. The ceiling isn't theoretical. It's structural. The fix is the ID, not the survey.
A persistent learner ID at enrollment is the single architectural change that makes the cascade hold. Without it, every level becomes a separate snapshot with no causal link to the next one. Pre-post deltas become statistically meaningless. Mentor observations float free. The 180-day employment record never connects back to the intake form.
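To make that concrete, here is a minimal sketch of the identity layer in Python. Everything in it is illustrative: the registry, the function names, and the field names are assumptions for this page, not Sopact Sense's implementation.

```python
import uuid

REGISTRY: dict[str, str] = {}  # intake email -> persistent learner_id

def issue_learner_id(intake_email: str) -> str:
    """Mint the persistent ID once, at first contact (intake)."""
    return REGISTRY.setdefault(intake_email, f"lrn-{uuid.uuid4().hex[:8]}")

def submit_instrument(learner_id: str, instrument: str, payload: dict) -> dict:
    """Every later instrument inherits the ID; no re-matching by email."""
    return {"learner_id": learner_id, "instrument": instrument, **payload}

lid = issue_learner_id("darnell@work.example")
pre = submit_instrument(lid, "pre_survey", {"confidence_skill_x": 2})
post = submit_instrument(lid, "post_survey", {"confidence_skill_x": 4})
assert pre["learner_id"] == post["learner_id"]  # the join key exists by construction
```

The last line is the whole argument: the pre and post records share a key by construction, so nothing downstream ever depends on an email address matching.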
| Field | Instrument | Kirkpatrick level | Cadence / scale |
|---|---|---|---|
| learner_id | Issued at intake form | — | Persistent · forever |
| confidence_skill_x | Pre-survey · post-survey · 90-day | L2 + L3 | 1–5, locked scale across waves |
| skill_demo_score | Rubric scored by instructor | L2 | Week 1 baseline + Week 8 post |
| behavior_applied_x | 30/60/90-day self-report + mentor | L3 | Triangulated · learner + mentor |
| employment_status | HRIS or 180-day confirmation | L4 | 30 / 90 / 180 / 365 days |
| barrier_open_end | Intake + Week 4 pulse | L1 + L3 context | AI-coded by theme, linked to ID |
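Rendered as a single record, the dictionary above might look like the hypothetical Python schema below. Sopact Sense does not expose this class; it is only a way to see the shape of one learner's compounding record.

```python
from dataclasses import dataclass, field

@dataclass
class LearnerRecord:
    learner_id: str                                                    # issued at intake, never reissued
    confidence_skill_x: dict[str, int] = field(default_factory=dict)   # wave -> 1-5, locked scale
    skill_demo_score: dict[str, float] = field(default_factory=dict)   # "week1" / "week8" rubric scores
    behavior_applied_x: dict[str, int] = field(default_factory=dict)   # "day30_self", "day60_mentor"
    employment_status: dict[int, str] = field(default_factory=dict)    # 30/90/180/365 -> status
    barrier_themes: list[str] = field(default_factory=list)            # AI-coded from open-ends

record = LearnerRecord(learner_id="lrn-0001")
record.confidence_skill_x["pre"] = 2         # L2 baseline, Day 0
record.confidence_skill_x["post"] = 4        # L2 after Week 8
record.employment_status[180] = "retained"   # L4 closes against the same ID
```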
Below: a working Level 2 + Level 3 evaluation for a single learner. Pre-assessment from Day 0, post-assessment from Week 8, mentor observation from Day 60, employer confirmation from Day 90 — all on the same persistent ID. Every score cites the instrument that produced it. The funder can audit any row in seconds.
Every row above shares the same learner_id. The Week 4 confidence-drop alert (recorded in the Dictionary) was what triggered the mentor intervention. Without it, Darnell would have been in the 35% who dropped out before Week 8, and the Level 3 and Level 4 columns above would be empty.
The reports below are not dashboards. They are publication-ready outputs, each one synthesized from the connected learner record. The pre-assessment sets the baseline. The mid-program pulse catches at-risk learners. The 30/90/180-day follow-ups link Level 3 and 4. By the morning the cohort closes, the funder report is sitting in the coordinator's queue — same-day export, not a three-week scramble.
- Aggregate skill gains and confidence deltas across all active learners. Who's improving, who's plateauing, where coordinators should focus. *End of each cohort.*
- Who is trending down, what the signal is, which coordinator owns the follow-up. Flagged the week it happens, not at graduation. *Continuous · as signals arrive.*
- Who has checked in at 30/60/90/180 days, what's missing, who needs outreach, before a deadline becomes a gap in the data. *Per milestone.*
- Actual employment outcomes compared with what learners projected at enrollment. AI synthesizes narrative + follow-up into thematic patterns. *At every follow-up milestone.*
- Where outcomes diverge by demographic, geography, or funding source, with evidence to act on before the next cohort opens. *Before next cycle.*
- Board-ready narrative with placement rates, skill gains, and next-cycle recommendations. Ready the morning the cycle closes. *Funder report · same day.*

Because every follow-up instrument attaches to the same learner_id, the WIOA outcome report is not a project. It is a query: "Who enrolled, what did they learn, where did they place, are they still there at 180 days, how does their wage compare to intake." The data integrity that used to take a phone-call campaign is a default output of collection itself.
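Here is what "the report is a query" can look like in practice: a sketch in pandas, assuming the cascade already sits in flat tables keyed on learner_id. Column names (prior_wage, retained_180, and so on) are illustrative, not the WIOA specification.

```python
import pandas as pd

intake = pd.DataFrame({"learner_id": ["a", "b"], "prior_wage": [15.0, 18.0]})
outcomes = pd.DataFrame({
    "learner_id": ["a", "b"],
    "placed": [True, True],
    "retained_180": [True, False],
    "wage_at_hire": [22.0, 20.0],
})

report = intake.merge(outcomes, on="learner_id")  # the persistent ID is the join
report["wage_gain"] = report["wage_at_hire"] - report["prior_wage"]

print(f"placement rate: {report['placed'].mean():.0%}")
print(f"180-day retention: {report['retained_180'].mean():.0%}")
print(f"median wage gain: ${report['wage_gain'].median():.2f}/hr")
```

One merge, three aggregates. The only reason this is a five-line query instead of a three-week project is that both tables carry the same key from day one.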
127 learners across four active cohorts. Web-Dev Spring, Welding Vocational, Healthcare CNA, Customer-Service Bootcamp. WIOA Title I funding plus a state workforce grant. Here is what the same data architecture produces across four very different training shapes.
127 learners enroll across four cohorts in the same two-week window. Each one fills out the intake form, the pre-assessment, and the mentor interview rubric. learner_id is issued at form submission. The same ID will appear on the Week 8 post-survey, the 30-day follow-up, and the 180-day employer confirmation.
A confidence pulse goes out to every learner. Priya's confidence drops 34% between Week 2 and Week 4. The system pattern-matches against her intake-stated barrier (childcare gaps) and the alert routes to her coordinator. Intervention happens in Week 5, not at the Week 8 satisfaction survey when it's too late.
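A rule like Priya's alert can be expressed in a few lines. The 30% threshold and the first-barrier heuristic below are assumptions for illustration, not Sopact's detection logic.

```python
def confidence_alert(pulse: dict[str, float], intake_barriers: list[str],
                     drop_threshold: float = 0.30) -> dict | None:
    """Flag a learner whose confidence fell sharply between pulse waves."""
    drop = (pulse["week2"] - pulse["week4"]) / pulse["week2"]
    if drop < drop_threshold:
        return None  # holding steady; no alert
    return {
        "drop_pct": round(drop * 100),
        "likely_barrier": intake_barriers[0] if intake_barriers else "unknown",
        "route_to": "coordinator",  # a person, not a dashboard
    }

print(confidence_alert({"week2": 4.1, "week4": 2.7}, ["childcare gaps"]))
# {'drop_pct': 34, 'likely_barrier': 'childcare gaps', 'route_to': 'coordinator'}
```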
Same 20-item assessment from Day 0 runs again at Week 8. The L2 delta is computable per learner — not group-level averages. Mentor observation against the same four named behaviors is captured the same week. The cohort summary report generates the next morning.
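The per-learner delta is trivial precisely because both waves carry the same ID. A sketch, assuming the locked 20-item assessment from Day 0 and Week 8:

```python
# Correct items out of 20 on the locked assessment; keys are the persistent ID.
pre_scores = {"lrn-0001": 11, "lrn-0002": 9}     # Day 0
post_scores = {"lrn-0001": 17, "lrn-0002": 13}   # Week 8, identical items

l2_delta = {
    lid: post_scores[lid] - pre_scores[lid]
    for lid in pre_scores.keys() & post_scores.keys()  # pairs exist by construction
}
print(l2_delta)  # per learner, e.g. {'lrn-0001': 6, 'lrn-0002': 4}
```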
Self-report on the four named behaviors at Day 30. Manager or mentor scores the same behaviors at Day 60. 90-day employer-confirmation survey at Day 90. All three instruments inherit the learner_id automatically. Agreement and disagreement between learner and mentor surface as themes — not anomalies.
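Triangulation is the same pairing move applied to Level 3: two scorers, one set of named behaviors, one ID. The behavior names and the two-point gap rule below are invented for illustration.

```python
BEHAVIORS = ["scopes_ticket", "writes_tests", "asks_for_review", "documents_fix"]

self_report = {"scopes_ticket": 4, "writes_tests": 4, "asks_for_review": 5, "documents_fix": 3}   # Day 30
mentor_score = {"scopes_ticket": 4, "writes_tests": 2, "asks_for_review": 4, "documents_fix": 3}  # Day 60

disagreements = {
    b: {"self": self_report[b], "mentor": mentor_score[b]}
    for b in BEHAVIORS
    if abs(self_report[b] - mentor_score[b]) >= 2  # a 2-point gap surfaces as a theme
}
print(disagreements)  # {'writes_tests': {'self': 4, 'mentor': 2}}
```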
HRIS confirmation or a structured 180-day survey closes the L4 loop. Wage at hire compared to intake-reported prior wage. Retention compared to cohort baseline. The WIOA outcome report and the funder impact summary generate the same morning — both grounded in the same 127 learner records, no manual reconciliation.
A coding bootcamp and a corporate sales-training program share the same data architecture but pull on different fields. Here is what the same compounding record looks like across five very different training shapes.
WIOA-funded coding, healthcare, or skilled-trades cohort. 8–24 weeks. Lean Data follow-up to 180 days. Equity audit by funding source.
Entrepreneurship, women-in-tech, leadership fellowship. Mentor-paired, with 360° feedback. Long-tail alumni outcomes matter as much as completion.
Multi-cohort leadership-development program. Phillips ROI layer for CFO. Manager 360° at 90 days. Promotion and team-retention as L4.
Quarterly cohorts on new product or methodology. CRM-linked L4 — quota, pipeline velocity, win rate. Fastest L4 attribution of any archetype.
Training programs run by grantees of a foundation — youth workforce, public-health worker training, agricultural extension. The same learner_id architecture lets the foundation roll up across grantees, comparing cohorts on equitable outcomes rather than activity counts. Sits adjacent to Book 03 (Grant Intelligence).
The Sopact Intelligence Library is one architecture, six industry guides. What you read here applies sideways. The same five-stage spine runs through every book — only the lifecycle and the field names change.
The five-stage spine in full — Effective Data, Framework, Dictionary, Transformation, Reports. The architecture this chapter sits on.
Same compounding architecture, applied to foundations. Foundations funding grantee-run training programs sit at the seam of Books 02 and 04.
For impact funds and ESG teams. investee_id instead of learner_id; Five Dimensions instead of Kirkpatrick. Same architecture across DD → LP reporting.
Reviewer rubrics with observable anchors, bias detection, citation trails. The pattern under intake scoring in this book, generalized to any application review.
Everything in this chapter runs on Sopact Sense — the data origin platform with a persistent ID at first contact. Skill files are the small Markdown recipes that turn the platform into a pre/post pairer, a mentor-observation tracker, or a funder-report composer. We don't distribute them; we co-author them with you in the first 60 minutes.
Data origin platform — not a downstream aggregator. Persistent IDs at intake, structured collection across the cascade, AI analysis at submit, six reports per cohort overnight.
Four skill files cover most workforce-training work. Co-written with your team and your funder rubric — not generic templates.
A skill file is small: one or two pages of Markdown. The platform gives the skill a learner_id, a named-behavior rubric, and every wave of follow-up data. With every cohort, the platform gets sharper at predicting who will place and who will not. Your selection criteria improve every cycle because the system can finally see what worked.
Before Chapter 02 (where we open Level 1 reaction surveys themselves), here is the compressed version of what changes when the cascade holds instead of breaks.
Kirkpatrick's four levels have been the global standard since 1959. The cascade is the architecture beneath them — and that's what most stacks don't deliver.
A persistent learner_id at enrollment is the single architectural change that turns L3 and L4 from stretch goals into default outputs.
The New World Kirkpatrick logic is structurally correct: name the organizational result first, design the cascade backward, measure forward.
Identical items, locked scales, retrospective pre-test for response-shift bias. Pair on the ID at submit — not by email three months later.
Self-report alone is biased. Manager or mentor observation against the same named behaviors converts L3 from claim into evidence.
Six reports per cohort generate overnight when the cascade is intact. WIOA outcomes are a query — not a phone-call campaign.
Not the end of training. Not the close of the cycle. The middle of a record that runs from intake through the 180-day retention check — and onward.
"Satisfaction is a snapshot. The cascade is a chain. When the ID holds, Level 4 stops being a stretch goal."