The Sopact Intelligence Library
Book 05 · Industry Guide

Training
Intelligence.

From the intake form to the 180-day follow-up — one learner record, four Kirkpatrick levels, generated overnight, not assembled by hand.

— CHAPTER ONE
The Cascade.
Architecture · Kirkpatrick L1–L4 · The persistent learner record
L1 Reaction 90% measure · L2 Learning 83% measure · L3 Behavior 35% reach it · L4 Results 12% reach it
ONE learner_id CARRIES THE CASCADE FROM L1 TO L4
A SOPACT INTELLIGENCE LIBRARY GUIDE · Vol. 05
— TABLE OF CONTENTS

In this book.

Seven chapters on how workforce programs, accelerators, and L&D teams move from satisfaction surveys to cascade evidence. Same architecture from intake to 180-day retention.

Book 05 · Chapters
01
The Cascade.
YOU ARE HERE
02
Reaction & Feedback — Level 1 Done Right
12 MIN
03
Learning & Assessment — Pre/Post That Holds
14 MIN
04
Behavior & Mentor Observation
15 MIN
05
Results & Employment Evidence
13 MIN
06
Equity Audit & Cohort Patterns
12 MIN
07
The Funder Report — Same-Day Export
11 MIN

Each chapter is published as a stand-alone PDF and as a sub-thread inside this book; each is self-contained, and the whole book reads in roughly ninety minutes.

The library
BOOK 01
Beyond the Survey
Foundations — the five-stage spine under everything else.
BOOK 02
Application Management
Reviewer workflows, scoring, fairness — any application pipeline.
BOOK 03
Grant Intelligence
Foundations, family offices, community grant-making.
BOOK 04
Impact Intelligence
Impact funds, ESG, supply chain DD, LP reporting.
BOOK 05 · CURRENT
Training Intelligence
Workforce programs, accelerators, L&D, cohort outcomes.
BOOK 06
Nonprofit Programs
Multi-program service delivery, partners, board & funder narrative.
CHAPTER 01
01
— BOOK 05 · CHAPTER ONE
TRAINING INTELLIGENCE

The
Cascade.

Kirkpatrick named four levels in 1959. They were always a cascade — Reaction depends on delivery, Learning depends on Reaction, Behavior depends on Learning, Results depend on Behavior. What broke is not the model. What broke is the data architecture beneath it.

What you'll learn
  • Why 90% of programs measure Level 1, 35% reach Level 3, and only 12% reach Level 4 — and how a persistent learner_id changes the math.
  • How to design the cascade in reverse from a Level 4 funder question, then measure forward.
  • What separates a training feedback survey from training assessment from training effectiveness — three terms most teams treat as synonyms.
  • How a cohort of 127 learners across four programs produces six funder-ready reports the morning after graduation.
Read time
15
minutes · 16 pages · ~22 visuals

Skill files referenced
learner-baseline-scorer.md pre-post-pairer.md mentor-observation-rubric.md funder-report-composer.md
§ 1.1 · WHY
— SECTION 1.1

The Cascade Break.

The funder email arrives Tuesday. Not about satisfaction scores — those have been answered at 4.3 out of 5. The funder is asking whether participants applied the skills on the job, whether behavior changed at 90 days, whether the cohort produced outcomes worth renewing.

The question is pure Level 3 and Level 4, which is exactly where the answer gets stuck. Level 1 data lives in the post-training survey tool. Level 2 lives in the LMS. Level 3 data was never collected because the 90-day follow-up was "planned for next cycle." Level 4 data lives in the HRIS with no connection back to training. This is the Cascade Break — the four-level cascade gets measured as four disconnected events because no persistent learner identity links the levels across the tools that produce them.

— THE KIRKPATRICK FUNNEL · WHAT MOST PROGRAMS ACTUALLY MEASURE
L1 · Reaction 90% → L2 · Learning 83% → L3 · Behavior 35% → L4 · Results 12%
THE CASCADE BREAK: no learner_id linking pre to follow-up

The math is unforgiving. Of the 90% of programs that send a Level 1 survey, only 12% will ever produce a Level 4 results claim that connects back to the same learners. The drop-off happens at Level 3 — where the persistent ID was supposed to carry forward and didn't.

The Kirkpatrick Model has been the global standard for training evaluation since 1959 and isn't broken. What breaks is the infrastructure beneath it. Levels 3 and 4 become narrative claims rather than measurements — and funders increasingly know the difference. — BOOK 05 · CH 01 · §1.1
§ 1.2 · BIG IDEA
— SECTION 1.2

Four stages. One learner. Five colors.

The training lifecycle has four operational stages — enrollment, training, completion, and employment — and they map cleanly onto the five-stage spine that runs through every book in this library.

Read across: at enrollment, Effective Data flows into a Kirkpatrick framework. By training start, the framework is anchored in a Data Dictionary tied to one persistent learner ID. From there, every pulse check, every mentor observation, and every 180-day follow-up becomes a Transformation that feeds the Reports — six funder-ready outputs, generated the morning the cycle closes.

— LIFECYCLE STAGE × SPINE STAGE

S1 · DATA
Intake form · pre-assessment · stated barriers
ENROLLMENT
S2 · FRAMEWORK
Kirkpatrick L1–L4 rubric · named observable behaviors
ENROLLMENT
S3 · DICTIONARY
Shared question library · locked scales · persistent learner_id
TRAINING START
S4 · TRANSFORM
Pre/post pair · mentor observation · at-risk pulse
TRAINING + COMPLETION
S5 · REPORTS
6 funder reports · equity audit · placement at 180 days
COMPLETION + EMPLOYMENT

One persistent learner_id from the intake form to the 180-day employer confirmation.

SurveyMonkey, Google Forms, and most LMS platforms do not issue this ID by default — links are built after the fact through email matching, which fails when participants use work email for pre and personal email for post. Sopact Sense assigns the ID at first contact and every subsequent instrument inherits it automatically. Level 3 stops being a stretch goal.
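A minimal sketch of the pattern (not Sopact Sense's actual API): issue one ID at first contact, register every known contact point under it, and let every later instrument resolve through the registry instead of matching email addresses after the fact. All names here are hypothetical.

```python
import uuid

class LearnerRegistry:
    """Issue one persistent ID at first contact; later instruments inherit it."""

    def __init__(self):
        self._by_contact = {}  # any known contact point -> learner_id

    def enroll(self, *contacts):
        """Called once, at intake. Registers every contact point under one ID."""
        learner_id = f"L-{uuid.uuid4().hex[:8]}"
        for c in contacts:
            self._by_contact[c.lower()] = learner_id
        return learner_id

    def attach(self, response, contact):
        """Stamp a response with the existing ID -- no after-the-fact matching."""
        response["learner_id"] = self._by_contact[contact.lower()]
        return response

registry = LearnerRegistry()
# Work email AND personal email registered at intake:
lid = registry.enroll("darnell@employer.com", "darnell@gmail.com")

pre = registry.attach({"wave": "pre", "score": 41}, "darnell@employer.com")
post = registry.attach({"wave": "post", "score": 86}, "darnell@gmail.com")
assert pre["learner_id"] == post["learner_id"]  # pairing survives the email switch
```

The point of the sketch: the pre/post link is decided at enrollment, so the work-email-versus-personal-email failure mode described above cannot occur.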

§ 1.3 · STAGE 1 · DATA
S1 · DATA — EVERY LEARNER BASELINED ON DAY ONE

What an intake form actually has to capture.

Most programs collect an intake form and a pre-survey, then lose track of learners until something goes wrong or graduation arrives. The work is not collection. The work is making collection structured at the moment of intake so that Level 2, Level 3, and Level 4 can be computed against the same person 90, 180, or 365 days later.

Four data buckets cover what a Kirkpatrick-ready intake form needs. Each one fails for a different reason in standard stacks; each one is where Sopact Sense puts a persistent learner_id behind the value at the moment it's captured.

Application + Demographics STRUCTURED

  • Name, contact, persistent learner_id issued
  • Demographics (for equity audit at the end)
  • Funding source · cohort tag · enrollment date
  • WIOA / state-program tracking fields
  • Geography, prior education, employment history

Pre-Assessment (L2 baseline) PAIRED

  • Knowledge items — locked, identical at post
  • Skill rubric — same competency anchors
  • Confidence rating — 1–5 scale, locked
  • Stated career goal · employment target
  • Item IDs match the post-survey item IDs

Stated Barriers QUALITATIVE

  • Transportation · childcare · housing stability
  • Open-end: "What concerns you about completing this program?"
  • Hours available per week
  • Mental health / disability accommodations
  • Language preference for follow-up

Mentor / Coordinator Notes CONTEXT

  • Interview observations, scored on rubric
  • Risk flags raised at intake
  • Existing employer relationship (if any)
  • Recommended cohort placement
  • Linked to same learner_id as the form

Three Spreadsheets

A failure pattern named.

The intake form is in Google Forms. The pre-assessment is in Typeform. The mentor's interview notes are in a shared doc nobody checks. Three artifacts about the same learner, and no shared key between them. When the funder asks at month six who's employed, the coordinator is reconstructing a record that should have existed since day one. A shared learner_id standardizes intake scoring across all coordinators automatically.

§ 1.4 · STAGE 2 · FRAMEWORK
S2 · FRAMEWORK — KIRKPATRICK L1–L4 + NEW WORLD REVERSE DESIGN

The four levels, the cascade between them.

The Kirkpatrick Model has been the global standard for training evaluation since 1959. The four levels are conceptually simple. Their power is in the causal chain — each level's outcome depends on the previous level's measurement.

LEVEL 1
Reaction "Did participants find the training engaging and relevant?" — post-survey within 24h.
LEVEL 2
Learning "Did skills and confidence actually increase?" — pre/post pair, identical items, same ID.
LEVEL 3
Behavior "Are learners applying the skills on the job?" — 30/60/90 days, mentor observation.
LEVEL 4
Results "Did employment and retention outcomes improve?" — 6–12 months, linked to HRIS.

— THE NEW WORLD KIRKPATRICK MODEL · DESIGN IN REVERSE
MEASURE FORWARD
L1 → L2 → L3 → L4
Reaction first, Results last. Same order as 1959.
DESIGN BACKWARD
L4 → L3 → L2 → L1
Name the organizational result first. Everything upstream serves it.

The Kirkpatricks added this in 2016. Reverse-design is not controversial — it is structurally correct. Level 4 results exist only if Level 3 behaviors happen. Level 3 behaviors exist only if Level 2 capability exists. If you design Level 1 first without knowing the Level 4 target, every downstream level is hope rather than plan.


The Kirkpatrick Ceiling

A failure pattern named.

65% of training evaluations stall between Level 2 and Level 3. The pre-test was in the LMS. The post-survey is in SurveyMonkey. The 90-day follow-up is going out to whoever opens a bulk email — and matching by email address fails when learners use work email for pre and personal email for post. The ceiling isn't theoretical. It's structural. The fix is the ID, not the survey.

§ 1.5 · STAGE 3 · DICTIONARY
S3 · DICTIONARY — ONE ID. 180 DAYS. EVERY INSTRUMENT.

The record that links Level 1 to Level 4.

A persistent learner ID at enrollment is the single architectural change that makes the cascade hold. Without it, every level becomes a separate snapshot with no causal link to the next one. Pre-post deltas become statistically meaningless. Mentor observations float free. The 180-day employment record never connects back to the intake form.

— learner_id: COH3-042 · DARNELL WASHINGTON · CODING COHORT 3

180-DAY ARC
Day 0 · Intake + pre-L2 · ID issued, 14 baseline fields
Week 4 · Confidence pulse · −34% confidence, at-risk alert fired
Week 8 · Post-L2 + L1 + mentor · L2 delta computed; cleared, placed
+30 days · L3 self-report · Applying 3 of 4 skills, mentor confirms
+90 days · L3 final, employer confirm · Retained at hire, promotion track
+180 days · L4, WIOA outcome · Retention confirmed, wage +27% vs intake
ONE learner_id: COH3-042 · CARRIED THROUGH ALL 6 INSTRUMENTS

— SHARED QUESTION LIBRARY · ABRIDGED

Field · Instrument · Kirkpatrick level · Cadence
learner_id · Issued at intake form · spans L1–L4 · Persistent, forever
confidence_skill_x · Pre-survey, post-survey, 90-day · L2 + L3 · 1–5, locked scale across waves
skill_demo_score · Rubric scored by instructor · L2 · Week 1 baseline + Week 8 post
behavior_applied_x · 30/60/90-day self-report + mentor · L3 · Triangulated, learner + mentor
employment_status · HRIS or 180-day confirmation · L4 · 30 / 90 / 180 / 365 days
barrier_open_end · Intake + Week 4 pulse · L1 + L3 context · AI-coded by theme, linked to ID
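What "locked scale" means operationally can be shown in code. The entry below is a hypothetical dictionary record mirroring the confidence_skill_x row above; the structure and field names are illustrative, not Sopact Sense's schema.

```python
# Hypothetical dictionary entry: one definition, reused verbatim by every wave.
DICTIONARY = {
    "confidence_skill_x": {
        "scale": (1, 5),                 # locked: identical at pre, post, 90-day
        "waves": ["pre", "post", "90d"],
        "kirkpatrick": "L2+L3",
    },
}

def validate(wave, field, value):
    """Reject any response that breaks the locked definition."""
    spec = DICTIONARY[field]
    if wave not in spec["waves"]:
        raise ValueError(f"{field} is not collected at wave {wave!r}")
    lo, hi = spec["scale"]
    if not lo <= value <= hi:
        raise ValueError(f"{field} must be {lo}-{hi}, got {value}")
    return value

validate("pre", "confidence_skill_x", 2)   # ok at baseline
validate("90d", "confidence_skill_x", 4)   # ok -- same item ID, same scale
```

Because every wave validates against the same entry, a pre/post delta on this field is comparing like with like by construction.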
§ 1.6 · STAGE 4 · TRANSFORM
S4 · TRANSFORM — PRE/POST WITHOUT RECONCILIATION

The delta is a query — not a CSV merge.

Below: a working evaluation for a single learner, spanning Levels 1 through 4. Pre-assessment from Day 0, post-assessment from Week 8, mentor observation from Day 60, employer confirmation from Day 90, all on the same persistent ID. Every score cites the instrument that produced it. The funder can audit any row in seconds.
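As a rough illustration of "the delta is a query": once both waves carry the persistent ID, pairing is one inner join and the per-learner delta is one derived column. This pandas sketch uses made-up rows; COH3-042's +45-point delta matches the card below.

```python
import pandas as pd

# Pre and post waves land in one store, both stamped with learner_id at submit.
pre = pd.DataFrame({"learner_id": ["COH3-041", "COH3-042"],
                    "knowledge": [55, 41]})
post = pd.DataFrame({"learner_id": ["COH3-042", "COH3-041"],
                     "knowledge": [86, 70]})

# The "reconciliation" is a single inner join on the persistent key.
paired = pre.merge(post, on="learner_id", suffixes=("_pre", "_post"))
paired["delta_pp"] = paired["knowledge_post"] - paired["knowledge_pre"]
# COH3-042: 86 - 41 = +45 points, with no email matching and no CSV merge.
```

The join is order-independent: the rows arrive in different orders in the two waves and still pair correctly, which is exactly what email-based matching cannot guarantee.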

Darnell Washington · Coding Cohort 3 · L1–L4 Evaluation

learner_id · COH3-042   ·   Cohort: Web-Dev Spring '26 ·   Funder: WIOA Title I
L1–L4
CASCADE COMPLETE
L1 · Reaction post-survey · day 56
POST 4.4/5 Course rated 4.4/5 overall; mentor pairing rated 4.8/5. Open-end: "The check-in at week 4 caught me before I dropped out — I wouldn't be here without it."
4.4/5
L2 · Knowledge paired pre/post
PRE 41% · POST 86% Same 20-item assessment, identical rubric, locked scale. Pre-test score 41% (Day 0), post-test 86% (Week 8). +45-point delta computable because both instruments inherited learner_id.
+45pp
L2 · Confidence retrospective pre + post
PRE 2/5 → POST 4/5 Retrospective pre-test corrects for response-shift bias — Day 0 self-rating revised from 4/5 to 2/5 after Week 8 calibration. Real delta is +2 points, not +0.
+2.0/5
L3 · Self-report 30-day follow-up
SELF +30D Applying 3 of 4 named behaviors on the job: structured code reviews, daily stand-ups, ticket scoping. Has not yet led a design discussion (4th behavior). Mentor confirms via separate survey.
3/4
L3 · Mentor obs. triangulated · day 60
MENTOR +60D Same rubric, same behaviors, scored by direct manager. Agreement on 3 behaviors; mentor sees one behavior the learner under-rated. Triangulation surfaces strengths the learner missed.
4/4
L4 · Employment HRIS confirm · 90d
HRIS +90D Employed at hiring partner since Day 38. Retained at 90-day mark. Wage at hire $52K (+27% vs intake-reported prior wage). On track for promotion review at 180 days.
+90d

Every row above shares the same learner_id. The Week 4 confidence-drop alert (recorded in the Dictionary) was what triggered the mentor intervention. Without it, Darnell would likely have dropped out before Week 8, and the Level 3 and Level 4 rows above would be empty.


— CASCADE COVERAGE · TRADITIONAL VS COMPOUNDING

L1 · Reaction (satisfaction): 90% traditional → 96% compounding
L2 · Learning (pre/post delta): 83% → 95%
L3 · Behavior (on-the-job): 35% → 88%
L4 · Results (retention · wage): 12% → 78%
§ 1.7 · STAGE 5 · REPORTS
S5 · REPORTS — SIX REPORTS · THE MORNING THE CYCLE CLOSES

The funder report writes itself.

The reports below are not dashboards. They are publication-ready outputs, each one synthesized from the connected learner record. The pre-assessment sets the baseline. The mid-program pulse catches at-risk learners. The 30/90/180-day follow-ups link Level 3 and 4. By the morning the cohort closes, the funder report is sitting in the coordinator's queue — same-day export, not a three-week scramble.

01

Learner Progress Report

Aggregate skill gains and confidence deltas across all active learners. Who's improving, who's plateauing, where coordinators should focus.

END OF EACH COHORT
02

At-Risk Alert

Who is trending down, what the signal is, which coordinator owns the follow-up. Flagged the week it happens, not at graduation.

CONTINUOUS · AS SIGNALS ARRIVE
03

Follow-Up Completion Tracker

Who has checked in at 30/60/90/180 days, what's missing, who needs outreach — before a deadline becomes a gap in the data.

PER MILESTONE
04

Promise vs Placement

Actual employment outcomes compared with what learners projected at enrollment. AI synthesizes narrative + follow-up into thematic patterns.

AT EVERY FOLLOW-UP MILESTONE
05

Equity Audit

Where outcomes diverge by demographic, geography, or funding source — with evidence to act on before the next cohort opens.

BEFORE NEXT CYCLE
06

Funder Impact Summary

Board-ready narrative with placement rates, skill gains, and next-cycle recommendations. Ready the morning the cycle closes.

FUNDER REPORT · SAME DAY

The 180-day retention report assembles itself in the background.

Because every follow-up instrument attaches to the same learner_id, the WIOA outcome report is not a project. It is a query. "Who enrolled, what did they learn, where did they place, are they still there at 180 days, how does their wage compare to intake." The data integrity that used to take a phone-call campaign is a default output of collection itself.
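The claim that the outcome report "is a query" can be sketched in plain Python. The field names below (prior_wage, employed_180d) are hypothetical stand-ins for WIOA outcome fields, and the rows are invented for illustration.

```python
# One record per instrument, every record keyed by the same learner_id.
intake = {"A": {"prior_wage": 41000}, "B": {"prior_wage": 38000}}
outcome = {"A": {"employed_180d": True, "wage": 52000},
           "B": {"employed_180d": False, "wage": None}}

def outcome_report(intake, outcome):
    """The 180-day report as a join on learner_id, not a phone-call campaign."""
    rows = []
    for lid, base in intake.items():
        o = outcome[lid]
        gain = (round((o["wage"] - base["prior_wage"]) / base["prior_wage"] * 100)
                if o["wage"] else None)
        rows.append({"learner_id": lid,
                     "retained_180d": o["employed_180d"],
                     "wage_gain_pct": gain})
    return rows

report = outcome_report(intake, outcome)
retention = sum(r["retained_180d"] for r in report) / len(report)
# Learner A: $41K -> $52K is a +27% wage gain; cohort retention here is 50%.
```

Everything in the report derives from records that already exist; the only prerequisite is that intake and outcome share the key.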

§ 1.8 · WORKED EXAMPLE
— SECTION 1.8

Regional Workforce Partnership.

127 learners across four active cohorts. Web-Dev Spring, Welding Vocational, Healthcare CNA, Customer-Service Bootcamp. WIOA Title I funding plus a state workforce grant. Here is what the same data architecture produces across four very different training shapes.

127
ACTIVE LEARNERS · 4 COHORTS
L1–L4
FULL KIRKPATRICK COVERAGE
6
REPORTS PER COHORT · AUTO
0
LEARNERS LOST INTAKE → 180D
1

Day 0 — Intake form & persistent learner_id

127 learners enroll across four cohorts in the same two-week window. Each one fills out the intake form, the pre-assessment, and the mentor interview rubric. learner_id is issued at form submission. The same ID will appear on the Week 8 post-survey, the 30-day follow-up, and the 180-day employer confirmation.

2

Week 4 — Pulse check, at-risk alert

A confidence pulse goes out to every learner. Priya's confidence drops 34% between Week 2 and Week 4. The system pattern-matches against her intake-stated barrier (childcare gaps) and the alert routes to her coordinator. Intervention happens in Week 5, not at the Week 8 satisfaction survey when it's too late.
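The at-risk rule described above can be sketched as a simple threshold check. The 30% drop threshold is an assumption for illustration, not a documented Sopact default; real alerting would also weigh intake-stated barriers.

```python
def at_risk(pulses, drop_threshold=0.30):
    """Flag learners whose confidence fell by more than the threshold
    between the two most recent pulses. Threshold is a hypothetical default."""
    alerts = []
    for learner_id, scores in pulses.items():
        prev, curr = scores[-2], scores[-1]
        if prev and (prev - curr) / prev > drop_threshold:
            alerts.append(learner_id)
    return alerts

pulses = {
    "COH3-042": [4.1, 2.7],  # a -34% drop, like the Week 4 case above -> alert
    "COH3-017": [3.8, 3.9],  # improving -> no alert
}
assert at_risk(pulses) == ["COH3-042"]
```

The alert fires the week the signal arrives, which is what makes a Week 5 intervention possible instead of a Week 8 post-mortem.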

3

Week 8 — Post-survey + L1 + L2 final

Same 20-item assessment from Day 0 runs again at Week 8. The L2 delta is computable per learner — not group-level averages. Mentor observation against the same four named behaviors is captured the same week. The cohort summary report generates the next morning.

4

Day 30 / 60 / 90 — L3 evidence triangulated

Self-report on the four named behaviors at Day 30. Manager or mentor scores the same behaviors at Day 60. 90-day employer-confirmation survey at Day 90. All three instruments inherit the learner_id automatically. Agreement and disagreement between learner and mentor surface as themes — not anomalies.
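Triangulation at Level 3 reduces to comparing two score sets on the same behavior keys. A minimal sketch, with hypothetical behavior names; 1 means the behavior was observed, 0 means it was not.

```python
def triangulate(self_report, mentor_report):
    """Compare learner and mentor scores on the same named behaviors."""
    verdicts = {}
    for behavior, s in self_report.items():
        m = mentor_report[behavior]
        verdicts[behavior] = ("agree" if s == m
                              else "mentor_higher" if m > s
                              else "self_higher")
    return verdicts

self_report = {"code_review": 1, "standup": 1, "ticket_scoping": 1, "design_lead": 0}
mentor_obs = {"code_review": 1, "standup": 1, "ticket_scoping": 1, "design_lead": 1}

result = triangulate(self_report, mentor_obs)
# design_lead -> "mentor_higher": the mentor sees a behavior the learner
# under-rated, which surfaces as a theme rather than an anomaly.
```

Agreement converts self-report into evidence; disagreement in either direction is itself a finding worth a coordinator's attention.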

5

Day 180 — WIOA outcome & funder report

HRIS confirmation or a structured 180-day survey closes the L4 loop. Wage at hire compared to intake-reported prior wage. Retention compared to cohort baseline. The WIOA outcome report and the funder impact summary generate the same morning — both grounded in the same 127 learner records, no manual reconciliation.

§ 1.9 · GALLERY
— SECTION 1.9

Same architecture. Five training shapes.

A coding bootcamp and a corporate sales-training program share the same data architecture but pull on different fields. Here is what the same compounding record looks like across five very different training shapes.

§ 1.10 · SIBLING BOOKS
— SECTION 1.10

This chapter doesn't stand alone.

The Sopact Intelligence Library is one architecture, six industry guides. What you read here applies sideways. The same five-stage spine runs through every book — only the lifecycle and the field names change.

— BOOK 01 · FOUNDATIONS

Beyond the Survey

The five-stage spine in full — Effective Data, Framework, Dictionary, Transformation, Reports. The architecture this chapter sits on.

→ Start here if the spine is new
— BOOK 03 · GRANT INTELLIGENCE

The Lifecycle

Same compounding architecture, applied to foundations. Foundations funding grantee-run training programs sit at the seam of Books 03 and 05.

→ For foundations & grantmakers
— BOOK 04 · IMPACT INTELLIGENCE

The Compounding

For impact funds and ESG teams. Investee_id instead of learner_id; Five Dimensions instead of Kirkpatrick. Same architecture across DD → LP reporting.

→ For impact funds & ESG
— BOOK 02 · APPLICATION MANAGEMENT

Scoring & Fairness

Reviewer rubrics with observable anchors, bias detection, citation trails. The pattern under intake scoring in this book, generalized to any application review.

→ For any review workflow
§ 1.11 · SOPACT SENSE + SKILLS
— SECTION 1.11

The platform and the skill files.

Everything in this chapter runs on Sopact Sense — the data origin platform with a persistent ID at first contact. Skill files are the small Markdown recipes that turn the platform into a pre/post pairer, a mentor-observation tracker, or a funder-report composer. We don't distribute them; we co-author them with you in the first 60 minutes.

— THE PLATFORM

Sopact Sense

Data origin platform — not a downstream aggregator. Persistent IDs at intake, structured collection across the cascade, AI analysis at submit, six reports per cohort overnight.

  • Intake forms with structured field extraction
  • Kirkpatrick L1–L4 rubric anchored to your program
  • Persistent learner_id from Day 0 → 180-day retention
  • Pre/post pairing without CSV reconciliation
  • Six funder-ready reports, generated overnight
Reads your LMS, CMS, HRIS. Read-only. Coordinators stop being the integration layer.
— THE SKILL FILES

Co-authored at onboarding.

Four skill files cover most workforce-training work. Co-written with your team and your funder rubric — not generic templates.

learner-baseline-scorer.md
Reads intake forms, applies your assessment rubric identically to every learner, issues the persistent ID.
pre-post-pairer.md
Pairs pre-assessment to post-assessment on the same locked items. Computes per-learner deltas + retrospective pre-test.
mentor-observation-rubric.md
Triangulates self-report and mentor observation on the same named behaviors at Day 30/60/90.
funder-report-composer.md
Rolls up L1–L4 evidence into a board-ready funder narrative with citations. WIOA-format ready.

Why this compounds.

A skill file is small — one or two pages of Markdown. The platform gives the skill a learner_id, a named-behavior rubric, and every wave of follow-up data. With every cohort, the platform gets sharper at predicting who will place and who will not. Your selection criteria improve every cycle because the system can finally see what worked.

§ 1.12 · RECAP
— SECTION 1.12

Six things to take away.

Before Chapter 02 (where we open Level 1 reaction surveys themselves), here is the compressed version of what changes when the cascade holds instead of breaks.

01

The model is not broken.

Kirkpatrick's four levels have been the global standard since 1959. The cascade is the architecture beneath them — and that's what most stacks don't deliver.

02

One ID. Intake to 180 days.

A persistent learner_id at enrollment is the single architectural change that turns L3 and L4 from stretch goals into default outputs.

03

Design backward from L4.

The New World Kirkpatrick logic is structurally correct: name the organizational result first, design the cascade backward, measure forward.

04

Pre/post is a query, not a merge.

Identical items, locked scales, retrospective pre-test for response-shift bias. Pair on the ID at submit — not by email three months later.

05

Triangulate at L3.

Self-report alone is biased. Manager or mentor observation against the same named behaviors converts L3 from claim into evidence.

06

The funder report writes itself.

Six reports per cohort generate overnight when the cascade is intact. WIOA outcomes are a query — not a phone-call campaign.


— UP NEXT · CHAPTER 02

Reaction & Feedback —
Level 1 Done Right.

Chapter 02 opens the training feedback survey. We walk through what L1 actually measures, where the smile-sheet ends and instrument design begins, why locked scales matter across waves, and how a 25-question pre/post/follow-up bank gets built from one source library — not three Google Forms.

— END OF CHAPTER 01
— THE SOPACT INTELLIGENCE LIBRARY

Graduation is the middle of the record.

Not the end of training. Not the close of the cycle. The middle of a record that runs from intake through the 180-day retention check — and onward.

THE LIBRARY · SIX VOLUMES
BOOK 01
Beyond the
Survey
Foundations
BOOK 03
Grant
Intelligence
Industry guide
BOOK 04
Impact
Intelligence
Industry guide
BOOK 05 · NOW
Training
Intelligence
Industry guide
BOOK 02
Application
Management
Industry guide
BOOK 06
Nonprofit
Programs
Industry guide

"Satisfaction is a snapshot. The cascade is a chain. When the ID holds, Level 4 stops being a stretch goal."

THE SOPACT INTELLIGENCE LIBRARY · 2026