The Sopact Intelligence Library
Book 01 of 06 · The foundational field guide

Beyond
the Survey.

Stakeholder intelligence in the AI era. One persistent record per stakeholder, from first contact through long-term outcome — and the methodology, collection channels, and AI suite that make it work.

STAKEHOLDER · STK-04287 · 5 YEARS ON ONE RECORD
Y1 · APR · Application essay, 1,847 words
Y1 · SEP · Onboarding confidence 2.1
Y2 · FEB · Mid-program confidence 3.4
Y2 · JUN · Completion confidence 4.6
Y2 · DEC · 6-mo follow-up: employed
Y5 · AUG · Alumni · same ID, same row

One ID, issued once, carried for years. Cross-cycle analysis is a single query — not a multi-week reconciliation project.
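What "a single query" means in practice: with one persistent ID, the five-year trajectory above is one SELECT. A minimal sketch, assuming a hypothetical responses table in a SQL store (table and column names are illustrative, not Sopact's schema):

```python
# One persistent stakeholder ID makes cross-cycle analysis a lookup, not a
# reconciliation project: no cross-file matching, no fuzzy name de-duplication.
import sqlite3

conn = sqlite3.connect("responses.db")    # any SQL warehouse behaves the same
trajectory = conn.execute(
    """
    SELECT wave, metric, value
    FROM responses                         -- hypothetical long-format table
    WHERE stakeholder_id = ?               -- the ID issued once, at first contact
    ORDER BY collected_at
    """,
    ("STK-04287",),
).fetchall()

for wave, metric, value in trajectory:
    print(wave, metric, value)             # Y1 APR … Y5 AUG, one row per touchpoint
```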

By Unmesh Sheth · Founder & CEO, Sopact · Updated May 2026
The opening problem

Most enterprise data
dies at collection.

The numbers your team reports on came from closed-ended fields — Likert scales, dropdowns, multiple-choice. The open-text answers, the document uploads, the interview transcripts that explain why the numbers moved sit unread in an export. That's roughly 95% of what stakeholders actually told you.

STAKEHOLDER DATA · BY WHAT HAPPENS TO IT
What you collected: 100%
What you reported on: 5%
What you actually analyzed: 5%
The other 95%: unread · filed · archived · lost

Not because nobody wanted to read it. Because traditional survey tools couldn't — and the consultants who could charged for weeks per cycle. Book 01 is about what changes when that constraint lifts.

The trap underneath the trend

Same essay.
Same rubric.
Three different scores.

Generative AI on its own is a fluent improviser. Hand Claude, GPT, or Gemini the same applicant essay and the same rubric three times — you'll get three different scores, three different reasonings, three different citations. For a foundation awarding a grant or a panel admitting a fellow, that's not a feature. It's a liability.

RUN 01 · TUE 9:14am · 4.2 · "Strong narrative arc, modest evidence on quantitative outcomes." · cited ¶ 2, 4
RUN 02 · WED 2:47pm · 3.8 · "Reasonable structure, evidence reads thin in middle sections." · cited ¶ 1, 5
RUN 03 · THU 4:02pm · 4.5 · "Compelling case, well-supported claim on cohort outcomes." · cited ¶ 3, 6
SCORE VARIANCE · ±0.7
CITATIONS MATCHED · 0 of 3
ACCEPTABLE FOR ADMIT? · No

The opposite of this — same input, same output, every time — is the architecture Book 01 walks you through. Stop here, or turn the page for the answer.

The architectural answer

The model reads.
Sopact locks the answer.

Sopact Sense is a thin deterministic layer above the generative models — Claude, OpenAI, Gemini. The model still does the reading: analyzing essays, extracting themes, identifying patterns. But the rubric is frozen. The output schema is typed. Citations are required for every score. Run the same record through it a hundred times, get the same answer a hundred times.
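A minimal sketch of those three pins, with a hypothetical `call_model` function standing in for whichever model does the reading (all names and fields here are illustrative, not Sopact's API):

```python
# The deterministic wrapper: a frozen rubric, a typed output schema, and a
# hard rule that a score without citations is rejected.
from dataclasses import dataclass

RUBRIC_V1 = "Score 1-5 on evidence quality; cite the paragraphs relied on."  # pin 1: frozen, never edited per run

@dataclass(frozen=True)
class ScoredEssay:                  # pin 3: typed schema, output shape locked
    essay_id: str
    score: float
    citations: tuple[str, ...]      # e.g. ("¶2", "¶4")

def score_essay(essay_id: str, text: str, call_model) -> ScoredEssay:
    raw = call_model(rubric=RUBRIC_V1, text=text)    # the model still does the reading
    result = ScoredEssay(essay_id, float(raw["score"]), tuple(raw["citations"]))
    if not result.citations:
        raise ValueError("no score without source")  # pin 2: required citations
    return result
```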

OUTPUT · DETERMINISTIC · 4.2, same score, 100 times · 3 citations attached
SOPACT ADVANCE LAYER · THE INTELLIGENCE ENGINE
Pin 1 · Frozen rubric: same criteria every run
Pin 2 · Required citations: no score without source
Pin 3 · Typed schema: output shape locked
GENERATIVE MODEL · DOES THE READING · Claude · OpenAI · Gemini

DETERMINISTIC · Same input, same answer, every time. Enterprise-acceptable.
PERSISTENT · One ID per stakeholder, carried for years. Cross-cycle by query.
COMPOSABLE · Substrate for agents, dashboards, and reports — not a survey tool.

The product surface · 12 engines, 2 sides

Twelve engines.
One substrate.

Sopact Sense is the substrate. Twelve specialized intelligence engines run on top of it — six on each side. Stakeholder Intelligence handles programs that work with people (applicants, scholars, trainees). Partner Intelligence handles programs that work with organizations (investees, grantees, suppliers). Same record format, different relationship.

ENGINE 01 · SIX SUB-ENGINES

Stakeholder Intelligence

Programs that work with the same people across cycles.

01 · Application: apply → admit
02 · Scholarship: award → alumni
03 · Award · Competition: submit → judge
04 · Training: pre → mid → post
05 · Survey: collect → analyze
06 · Longitudinal: year 1 → year 5
ENGINE 02 · SIX SUB-ENGINES

Partner Intelligence

Organizations that work with other organizations.

01 · Impact: onboard → year 5
02 · Portfolio: DD → exit
03 · Grant: apply → report
04 · ESG: screen → monitor
05 · Supplier: onboard → audit
06 · Cohort: cohort 1 → N
ALREADY ON SOPACT SENSE

Carnegie Mellon · higher ed  ·  PSM Foundation · philanthropy  ·  Boys to Men Tucson · youth program

THE PROPRIETARY CORE

The Intelligent Suite: Cell · Row · Column · Grid — the four scopes of AI analysis that power every engine. Chapter 04.

The six chapters · the foundational field guide

Six chapters.
One methodology.

Book 01 is sequenced. Each chapter builds on the last. The 5-stage methodology spine — Data · Framework · Dictionary · Transformation · Reports — runs through every chapter, every book in the library.

THE 5-STAGE METHODOLOGY SPINE
DATA → FRAMEWORK → DICTIONARY → TRANSFORMATION → REPORTS
01
Workflow

The 5-stage spine, 9 vocabulary terms, and the lifecycle Sopact Sense runs underneath every engine. Read this first.

22 pages · 15 min
02
Data Design

Mixed-method · longitudinal · pre/post. Designing for the field — offline, skip logic, multi-language as three independent layers.

17 pages · 12 min
03
Data Collection

Four channels — online · offline · documents · transcripts — feeding one persistent ID. 40+ languages. Native, not bolted on.

16 pages · 12 min
04
Intelligent Suite

Cell · Row · Column · Grid. The proprietary core. Plus the four canonical report types — each with a live URL example.

18 pages · 16 min
05
Actionable Insight

The intelligence engine vs the actionable layer. MCP, BI, warehouse, Claude. Dashboards built in minutes, not weeks.

16 pages · 14 min
06
Application Management

The bridge chapter. Both engines applied to one full lifecycle — 480 applications, 6 stages, 6 days. On-ramp to Book 06.

16 pages · 14 min
How to navigate this book

Three reading paths.
Pick yours by time.

Each chapter stands on its own. Each chapter also leans forward into the next. Three paths through the book cover three real situations.

PATH 01 · ORIENTATION · 12 min

"I need to know what this is."

Read this introduction. That's it. You'll have the framing, the 12 engines, the methodology spine, and a sense of where to drill in later.

Intro only
8 pages
PATH 02 · PLANNING A ROUND · 45 min

"I'm scoping my next measurement cycle."

Intro + Ch 01 (methodology) + Ch 02 (design) + Ch 03 (collection). You'll know what to measure, how to design it, and which channels to set up.

Intro + Ch 01–03
63 pages
PATH 03 · BUILDING INFRASTRUCTURE · 90 min

"I'm replacing our reporting stack."

Read everything. Pay special attention to Ch 04 (the Intelligent Suite + the four canonical reports with live URLs) and Ch 05 (the actionable layer — MCP, BI, warehouse, Claude integration).

Intro + Ch 01–06
113 pages
AFTER BOOK 01

Five companion industry books — Nonprofit Programs · Grant Intelligence · Impact Intelligence · Training Intelligence · Application Management — take this methodology and apply it to specific domains. Ch 06 is the on-ramp to the last of these.

What you'll be able to do after reading

Five things you'll do
differently next cycle.

1
Design measurement that works with AI, not against it.

Frozen rubrics, typed schemas, mandatory citations. The architecture that makes deterministic scoring possible. Ch 01 + 02.

2
Run all four collection channels with one persistent ID.

Online · offline · documents · transcripts. 40+ languages. ID issued once, carried for years. Ch 03.

3
Generate canonical reports without rebuilding them every cycle.

Four canonical shapes — pre/post · correlation · panel · portfolio. Multi-language. Live URLs. Ch 04.

4
Hand off to your BI / AI stack via MCP + REST + warehouse.

Tableau · Power BI · Looker · Snowflake · Claude via MCP. Same audited API everywhere. Ch 05.

5
Apply the methodology to one full domain — end-to-end.

Application management, 480 applicants, 6 stages, 6 days. Equity audit included. Ch 06.

OR · IF YOU'D RATHER SEE IT ON YOUR OWN DATA

Bring a real cycle. 60 minutes is enough.

One application round. One cohort. One portfolio quarter. We walk through how it would live as one persistent record per stakeholder, what the Intelligent Suite would extract, and what your team would do differently next cycle.

BOOK A DISCOVERY CALL →
With Unmesh Sheth, founder & CEO.
A clear next step, or none.

Or just keep reading. Chapter 01 starts on the next page.

The Sopact Intelligence Library
Book 01 of 06
Beyond
the Survey.
The modern impact playbook — how teams move from point-in-time surveys to context-driven intelligence, built for the AI era.
CH 01 Workflow — the operating system for impact data
Series Map
Where this chapter sits

A six-book library
for the AI era.

This is the foundational book — the methodology every industry guide is built on. Five industry guides follow it, each applying the same spine to a different world.

Chapters in this book

00 · Introduction · written last: the why behind the spine
01 · Workflow · the operating system for impact data
02 · Data Design · mixed-method · longitudinal · pre/post · next
03 · Data Collection · online · offline · documents · transcripts · coming
04 · Intelligent Suite · AI · BI · warehouse working as one · coming
05 · Actionable Insight · from dashboards to decisions · coming

The library

Book 01 · this book
Beyond the Survey
The foundational field guide — methodology for the AI era.
Book 02 · industry guide
Nonprofit Programs
One unified intelligence layer across many programs.
Book 03 · industry guide
Grant Intelligence
For program officers and foundation teams.
Book 04 · industry guide
Impact Intelligence
Portfolio outcomes with 5 Dimensions and IRIS+.
Book 05 · industry guide
Training Intelligence
Learner outcomes from enrollment to wage gain.
Book 06 · industry guide
Application Management
Pitch comps, fellowships, scholarships, accelerators.
Chapter 01

Workflow.

The operating system for impact data.
What you'll learn
  • Why workflow beats tools in the AI era
  • The 5-stage spine every program shares
  • Nine real workflows you can copy
  • How Skills compound the work
Time to read
18 min
≈ 27 pages · 9 case examples
§ 1.1 · Why workflow

Why workflow,
in the age of AI?

AI without a workflow is a clever intern with no desk. It produces brilliant one-offs, forgets last week, and never builds a body of evidence. Workflow is what turns AI from a gadget into a system.

Without workflow

Tool-stack chaos

SurveyMonkey · Airtable · Submittable · Tableau · Otter · ChatGPT · Excel · Google Forms

Eight apps, six copy-pastes, two heroic spreadsheets. Knowledge lives in the person who happens to remember where things are.

With workflow

One spine, every program

01 Data → 02 Framework → 03 Dictionary → 04 Transform → 05 Reports · AI runs at every stage

The same spine, every time. Every program — grant cycle, investee quarter, training cohort — runs through it. AI works inside the workflow, not around it.

The teams winning with AI right now aren't the ones with the best models. They're the ones whose data has a place to land.

§ 1.2 · Vocabulary

Six words.
One sharp meaning each.

Most impact teams use data, workflow, framework, dictionary, transformation, and reports interchangeably. They aren't. The rest of this book — and every industry guide that follows — depends on the difference.

TERM 01

Data

The raw inputs you collect.

An application PDF. An interview transcript. A pre-survey response. A quarterly financial. A field photo. A 30-day follow-up call.

TERM 02

Workflow

The sequence of phases a program moves through.

Three phases. Linear. App Review → Onboarding & Logic Model → Outcome Loop. Each phase produces context that feeds the next.

TERM 03

Framework

The theory of why your program works.

Theory of Change. Logic Model. Logframe. Kirkpatrick L1–L4. IRIS+ / 5 Dimensions for impact investing. Choose one; it shapes everything else.

TERM 04

Data Dictionary

The signed shared vocabulary.

Often the Logic Model itself — what both parties agreed to at interview. Becomes the scoring template for every check-in that follows.

TERM 05

Transformation

The AI work inside each phase.

Scoring 500 essays overnight. Tagging quotes to Logic Model outcomes. Joining a follow-up survey to the original baseline by persistent ID.

TERM 06

Reports

The automated intelligence outputs.

Six per cycle, in every solution. Generated overnight — not assembled over three weeks. Operational, funder, portfolio. Same data, three audiences.

THE ONE THAT BINDS THE REST

Context.

The accumulating intelligence that survives across phases. Document Intelligence + Stakeholder Voice + Risk Intelligence — growing from 5% at first intake to 95% at multi-year exit. Every workflow phase deposits more context. It doesn't reset.

Next page: how that 5%-to-95% compounding actually looks.

§ 1.3 · Context

Context doesn't reset.
Every stage makes the next one smarter.

Every other tool resets between phases — new documents, new staff, new spreadsheet, starting from zero. The workflow keeps the full record forward. What you knew at intake becomes the floor for what you know at exit.

STAGE 01 · Intake · Beginning
STAGE 02 · Onboarding · Building
STAGE 03 · Program period · Deep intel
STAGE 04 · Year 2+ / exit · Full picture

TRACK A · Document Intelligence
Intake: Application read, scored against rubric
Onboarding: Interview synthesized with the app
Program period: Every check-in & progress report read
Exit: Full lifecycle narrative, on demand

TRACK B · Stakeholder Voice
Intake: Not yet captured
Onboarding: Baseline survey deployed
Program period: Pulse surveys, AI-coded each cycle
Exit: Longitudinal sentiment + outcomes

TRACK C · Risk Intelligence
Intake: Bias / inconsistency flags at review
Onboarding: Commitments tracked, baseline set
Program period: Signals updated every reporting cycle
Exit: Predictive patterns from full lifecycle

CONTEXT KNOWN · % of full record: 5% → 30% → 65% → 95%

Why this matters

Workflow phases (App Review → Onboarding → Outcome Loop) are the steps. Context is what survives between them. Without context, every phase is a cold start. With context, your fifth cohort benefits from everything the first four taught you.

§ 1.4 · The spine

The workflow spine.

Every effective impact workflow — no matter the program — moves through these five stages. Memorize them. The rest of this chapter is detail under each.

01

Effective Data

Documents, transcripts, surveys, external — collected on purpose.

02

Framework

ToC · Logic Model · Logframe — the theory of why this works.

03

Data Dictionary

Field-level spec — definitions, calculations, SROI, ownership.

04

Transformation

Raw input becomes shaped, tagged, joined, ready-to-analyze.

05

Reports

Operational, funder, portfolio — decisions, not decoration.


Stage colors are consistent everywhere in this book. When you see coral, think Effective Data. Blue, think Reports.

§ 1.5 · Stage 01
Stage 01 · Effective Data

More than survey data.

A survey alone tells you what people answered. Effective data tells you what's actually happening — by pulling from four sources, not one.

Documents

Applications, proposals, agreements, attachments, PDFs of past reports. The richest source most teams ignore.

Interviews & transcripts

Onboarding calls, exit interviews, focus groups. Where context, nuance, and the real story live.

Survey data

Pre/post, longitudinal, pulse. Necessary, but never sufficient on its own. Joins back to a unique stakeholder ID.

External sources

Census, BLS, IRIS+ benchmarks, peer datasets. Anchors your numbers against the world outside your program.

The whole is more than the sum

One application form + one onboarding transcript + one mid-program survey + one labor-market benchmark, read together, produce a story none of them tells alone. That joined story is your real dataset.

§ 1.6 · Stage 02
Stage 02 · Framework

Pick a theory.
Stick to it.

A framework is a theory of why your program works. Without one, every interview, every survey question, every dashboard is a guess. Three frameworks cover 95% of impact work.

FRAMEWORK A

Theory of Change

Inputs → Activities → Outputs → Outcomes → Impact

Use when: you need to communicate causal logic to funders, board, or new staff. Best for big-picture programs.

FRAMEWORK B

Logic Model

Inputs → Activities → Outputs → Short-Term Outcomes

Use when: you need an operational picture — staff and activities to outputs to short-term outcomes. Default for training programs.

FRAMEWORK C

Logframe

Goal · Indicators · Outputs · Assumptions

Use when: a funder or government grant requires it (USAID, EU). The grid format forces you to name your assumptions.


Choose one framework and commit. Mixing causes more confusion than clarity. Impact investors layer in 5 Dimensions of Impact and IRIS+ as portfolio lenses — that's a Book 04 conversation.

§ 1.7 · Stage 03
Stage 03 · Data Dictionary

The spec sheet
for every metric.

A framework says what matters. A data dictionary says how it's measured, in what unit, from what source, with what calculation. Without one, two analysts will produce two different numbers from the same data.

Field name · Definition · Source · Calculation · Owner
applicant_id · Unique stakeholder identifier, joins every dataset · application form · UUID, generated · Ops
readiness_score · 0–100 score of program-fit at intake, rubric-based · app + transcript · weighted rubric · PO
sroi_per_grantee · Social Return on Investment per grantee, in USD · survey + external · Σ(value) ÷ spend · M&E
employment_status_t6 · Employment status, 6 months post-program, categorical · follow-up survey · direct capture · M&E
narrative_theme · AI-extracted theme tag from open-text responses · survey + transcript · AI extraction · M&E
1. Every field has a single owner. No collective ownership.
2. Every calculated field shows the formula. No hidden math.
3. A unique applicant_id ties every row across every source.
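One way to make those three rules enforceable rather than aspirational is to hold the dictionary as a structured spec and validate it on every change. A minimal sketch, reusing the fields from the table above (the structure is illustrative, not a Sopact format):

```python
# The data dictionary as a machine-checkable spec: one owner per field,
# a visible formula for every calculated field, and a mandatory join key.
DICTIONARY = {
    "applicant_id":         {"source": "application form",    "calc": "UUID, generated",    "owner": "Ops"},
    "readiness_score":      {"source": "app + transcript",    "calc": "weighted rubric",    "owner": "PO"},
    "sroi_per_grantee":     {"source": "survey + external",   "calc": "sum(value) / spend", "owner": "M&E"},
    "employment_status_t6": {"source": "follow-up survey",    "calc": "direct capture",     "owner": "M&E"},
    "narrative_theme":      {"source": "survey + transcript", "calc": "AI extraction",      "owner": "M&E"},
}

def validate(dictionary: dict) -> None:
    assert "applicant_id" in dictionary, "missing the join key that ties every row together"
    for name, spec in dictionary.items():
        assert spec.get("owner"), f"{name}: every field needs a single owner"
        assert spec.get("calc"), f"{name}: every field needs a visible calculation"

validate(DICTIONARY)   # run on every change so the dictionary can't drift silently
```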

§ 1.8 · Stage 04
Stage 04 · Transformation

Where raw input
becomes shaped output.

Transformation is the unglamorous step that decides whether your dashboards tell the truth. AI does the heavy lifting — but the more interesting question is what tells the AI what to look for. Three sources, fully composable.

Raw input
// open-text response, applicant #1042
"i had been laid off from my warehouse job last spring and the program helped me get a CDL. now i drive for a regional carrier, making about 1100 a week which is much better, my kids are doing okay."
↓ transform
Shaped output
applicant_id: 1042
prior_status: laid_off
credential: CDL
employed_t6: TRUE
weekly_wage: $1,100
sentiment: positive
themes: income_gain · family_stability
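The shaped output is just a typed record. A sketch of the target schema the transformation step fills in, with field names taken from the example above (the extraction call itself is whatever model runs inside the pipeline; this only pins the shape):

```python
# Target shape for the transformation step: open text in, typed record out.
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class ShapedResponse:
    applicant_id: int
    prior_status: str              # e.g. "laid_off"
    credential: str | None         # e.g. "CDL"
    employed_t6: bool              # employed at 6 months?
    weekly_wage: int | None        # USD; None if not stated
    sentiment: str                 # "positive" | "neutral" | "negative"
    themes: list[str] = field(default_factory=list)

# The record the example response above should always produce:
record = ShapedResponse(
    applicant_id=1042, prior_status="laid_off", credential="CDL",
    employed_t6=True, weekly_wage=1100, sentiment="positive",
    themes=["income_gain", "family_stability"],
)
```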

What teaches the AI what to extract?

Three sources · composable
MODE A
default

Auto-discovery

No Skills. No playbook. Sopact reads your records, infers the structure, and writes the Data Dictionary itself.

  • Reads docs, transcripts, surveys
  • Detects entities, dates, amounts, themes
  • Proposes fields + taxonomy
  • You confirm → it becomes the dictionary
MODE B
shipped

Built-in Skills

Prepackaged playbooks that come with Sopact Sense. Already know the common impact patterns. No setup needed.

  • impact-framework-builder
  • sroi-calculator
  • iris-plus-mapper
  • funder-report-composer
MODE C
custom

External Skills

Your own markdown files. Encode your team's specific rubrics, scoring rules, or domain language. Composable with the built-ins.

  • Plain-English prompts
  • Your rubrics, your scoring weights
  • Versioned in your team's repo
  • Override or extend built-ins

Pick one mode, mix two, or layer all three — Sopact converges them into one Data Dictionary, then runs the work through the Intelligent Suite below.

WHAT DOES THE WORK

The Intelligent Suite — Sopact's name for transformation in practice.
SCOPES · what the AI operates on: Cell (one value) · Row (one record) · Column (one field) · Grid (whole table)
OPERATIONS · what the AI does: Rubric (score it) · Join (link sources) · Compare (vs. baseline) · Summarize (aggregate)

Example: score every grantee essay (cell × rubric) → join scores to the Logic Model (grid × join) → compare to last cohort (grid × compare).
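Read as code, a scope selects the slice of the table the AI sees and an operation acts on that slice. A toy sketch of the composition, not the product API (function and table names are invented for illustration):

```python
# Scope × operation as a composable pair.
table = {
    "G-01": {"essay": "…", "score": None},
    "G-02": {"essay": "…", "score": None},
}

def scope_cell(table, row_id=None, col=None):    # one value
    return [table[row_id][col]]

def scope_grid(table, row_id=None, col=None):    # whole table
    return list(table.values())

def op_rubric(values):                           # stand-in for the scoring layer
    return [f"scored({v!r})" for v in values]

def run(scope, op, table, **where):
    return op(scope(table, **where))

print(run(scope_cell, op_rubric, table, row_id="G-01", col="essay"))  # cell × rubric
# grid × join and grid × compare compose the same way: swap the scope and the op.
```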

§ 1.9 · Stage 05
Stage 05 · Reports

Decisions, not
decoration.

A report is good if it changes what someone does on Monday. Three audiences, three different reports — but all built from the same dictionary, same transformations.

REPORT · A

Operational

Pipeline: 128 · Stalled: 11 · Apps received, last 8 weeks

For program staff. Pipeline counts, time-in-stage, blockers. Updates weekly.

REPORT · B

Funder

CRUCIAL OUTCOME · 73% grantees employed at 6 mo · SROI $4.20 per $1 invested

For funders & board. Outcomes, SROI, story snippets. Quarterly or annual.

REPORT · C

Portfolio

impact → return → 7 cohort companies

For portfolio leads. Cross-cohort or cross-program patterns. Monthly.

BEYOND THE AUDIENCE CUT

The Intelligent Suite also produces report types.

The same data, queried different ways. Each report type surfaces a different kind of intelligence — depending on what's interesting in the context this cycle.

Discrepancy

What the grantee promised vs. what was delivered. Sourced to commitment text.

Unusual insights

Patterns the dashboard doesn't show — outliers, weak signals worth investigating.

Risk

Early-warning signal: a grantee or cohort drifting from baseline. Routes to the right human.

Trend · compliance · cohort comparison · SROI · narrative · equity audit … and more — the Suite generates what the context asks for.

If your report doesn't change what gets funded, hired, or stopped, it's a document. Not a report.

§ 1.10 · Worked example

One grantee.
Four moments. Same record.

Walk a single grantee through their Grant Intelligence workflow. Watch a 30-minute onboarding call become a comprehensive impact report — and watch every metric in that final report trace back to a commitment the grantee made themselves.

GRANT MANAGEMENT WORKFLOW
The three phases we'll cross
PHASE 1 App Review → PHASE 2 Onboarding (steps 1 & 2) → PHASE 3 Outcome Loop (steps 3 & 4)
PHASE 2 · ONBOARDING
1

Onboarding call → Theory of Change

A 30-minute call with the grantee, auto-transcribed and joined to grantee_id. AI extracts every statement, classifies each into the ToC layers, and links the source quote back to its timestamp.

Data · Transformation · Framework
[00:04:21] Grantee: "We hire formerly incarcerated folks, ten this year, train them on welding, then place them with a partner shop…"
Inputs: 3 quotes · Activities: 5 quotes · Outputs: 2 quotes · Outcomes: 1 quote · ⚠ Impact: 0 (gap) → ToC populated, evidence-linked
PHASE 2 · LATER IN THE CALL
2

ToC × IRIS+ × IMP → Data Dictionary

The drafted ToC gets enriched: each outcome mapped to IRIS+ standardized indicators, each impact assessed against the IMP 5 Dimensions (Who · What · How Much · Contribution · Risk). The result is signed at the end of the call — and becomes the contract for every report to come.

Framework · Data Dictionary
FROM STEP 1 · ToC: 5 layers, evidenced
CATALOG · IRIS+: standard indicators
METHOD · IMP 5 Dimensions: Who / What / How Much / Contribution / Risk
SIGNED · Data Dictionary:
placements_t6 · IRIS+ OI4108
welder_wage_gain · IMP "How Much"
recidivism_t12 · IMP "Contribution"
+ 14 more fields, all sourced
PHASE 3 · EVERY QUARTER
3

Multi-source data → Intelligent Suite

Every quarter, three streams arrive: the grantee's metric survey, their financial documents, and the social audit report. The Intelligent Suite runs the work — cell × rubric on the survey, grid × join against the Dictionary, grid × compare against last quarter.

Data ×3 · Intelligent Suite
Quarterly survey: metrics from grantee
Financial docs: spend, runway
Social audit: 3rd-party verification
INTELLIGENT SUITE · scopes: cell · grid · ops: rubric · join · compare
→ Reconciled data · 3 anomalies flagged · 1 risk signal raised
PHASE 3 · END OF CYCLE
4

Reconciled data → Comprehensive report

The final report is aligned to the ToC structure from Step 1. Every metric traces back to a commitment in the Data Dictionary from Step 2. Every quote sources to a timestamp in the Step 1 transcript. Board-ready the morning the cycle closes.

Intelligent Suite · Reports
COMPREHENSIVE IMPACT REPORT · Q3 2026
Inputs: $320k spend · 2 partner shops · sourced from financial doc
Activities: 12 hires · 480 training hours · survey + audit
Outputs: 10 placements · IRIS+ OI4108 verified
Outcomes: +$1,100/wk wage gain · IMP "How Much"
Impact: 12-mo recidivism 14% (vs. 44% baseline)
Every row · sourced · auditable

The thread

Every metric in Step 4 traces back to a commitment in Step 2's Dictionary, which was built from Step 1's ToC. Same grantee_id, same record, same context — accumulating, never resetting.

§ 1.11 · Gallery

Five industries.
One spine.

Each industry has its own workflow — three phases, distinct stakeholders, different cadences. But every one runs the same spine, and every one builds context that doesn't reset. Here's how that plays out across the five worlds Sopact serves.

What every workflow has in common

PATTERN 01
3 phases

Intake → Onboarding → Continuous loop. Linear. Each phase deposits context.

PATTERN 02
1 persistent ID

One join key per stakeholder, from first touch to final exit. Never re-introduce.

PATTERN 03
5%→95%

Context accumulates across all 4 stages. No resets between phases.

PATTERN 04
6 reports

Generated automatically per cycle. Operational, funder, portfolio. Same data.

Each industry below is the foundational view. Full coverage lives in its own Industry Guide — Books 02 through 06 of this library.

§ 1.11.1 · Grant Intelligence
Workflow 01 of 05

Grant Intelligence.

Stop reading grant reports. Start understanding your grantees. The spine runs from first application through multi-year renewal — and the grantee record never resets.

Lead framework: Logic Model · Persistent ID: grantee_id · Reports / cycle: 6, automated
PHASE 01
Application Review
Every application read & scored against your rubric, with citation trails. Bias detection per reviewer.
  • Data: applications, LOIs, budgets, attachments
  • Framework: scoring rubric + Five Dimensions screen
  • Output: ranked shortlist + bias audit
PHASE 02
Interview & Logic Model
At interview, the Logic Model is built with the grantee — and signed. It becomes the Data Dictionary for every check-in to come.
  • Data: interview transcript + app context
  • Framework: Logic Model (theory + indicators)
  • Output: signed Logic Model = Data Dictionary
PHASE 03
Outcome Loop
Every check-in, progress report, and follow-up scored against the Logic Model commitments. Six reports per cycle, generated overnight.
  • Data: progress reports, beneficiary surveys, follow-ups
  • Transformation: AI scores vs. signed commitments
  • Output: 6 grant intelligence reports + board narrative
SPOTLIGHT — INSIDE PHASE 02

How the spine plays out at the interview.

This is where most tools go dark. Here, the 5 spine elements all activate at once — and the Data Dictionary is born.

Data
Interview transcript

30-min call, auto-transcribed. Joined to applicant_id.

Framework
Logic Model

Inputs → Activities → Outputs → Outcomes → Impact.

Dictionary
Signed agreement

Both parties' shared vocabulary. The scoring template.

Transform
AI extracts commitments

Every measurable promise tagged, gaps flagged.

Reports
Onboarding summary

Grantee-facing recap + reviewer prep packet.

CONTEXT KNOWN · about the grantee, by stage: Application 5% → Interview & Award 30% → Grant Period 65% → Renewal + Year 2+ 95%
The win

The Logic Model signed at interview becomes the scoring template for every check-in. Your board asks "what did the grant produce?" — the answer is already generated, sourced to the commitments the grantee made themselves.

Full deep-dive: sopact.com/solutions/grant-intelligence — and Book 03 of this library.

§ 1.11.2 · Impact Intelligence
Workflow 02 of 05

Impact Intelligence.

Your LP report is three weeks away. Or overnight. The spine carries the investee record from first DD document through year-seven exit.

Lead framework: IRIS+ · 5 Dimensions · Persistent ID: investee_id · Reports / quarter: 6, automated
PHASE 01
DD Intelligence
Every DD document — pitch decks, theses, financials, ToC — read, structured, and scored before the IC meets.
  • Data: 50–200 docs per investee
  • Framework: 5 Dimensions · IRIS+ · ESG screen
  • Output: scored assessment with citation trail
PHASE 02
Living TOC
Each investee's Theory of Change becomes living — quarterly updates reconcile against the DD baseline. Commitments tracked, not re-discovered.
  • Data: founder interviews + DD context
  • Dictionary: Living TOC + IRIS+ indicator map
  • Output: shared vocabulary, signed before Q1
PHASE 03
Quarterly Loop
LP narratives write themselves from the accumulated record. Every claim traced to source. Every trend grounded in longitudinal data.
  • Data: quarterly metrics + Lean Data surveys
  • Transformation: AI cross-references vs. TOC
  • Output: 6 LP-ready reports per investee
SPOTLIGHT — INSIDE PHASE 01

Reading 200 DD documents before the IC meeting.

The phase where most analysts give up halfway. Here the spine reads every page and scores against your 5 Dimensions rubric.

Data
DD package

Pitch decks, theses, financials, ToC, audits.

Framework
5 Dimensions + IRIS+

Who · What · How Much · Contribution · Risk.

Dictionary
Investee scorecard

Every indicator mapped to source page + IRIS+ code.

Transform
AI extracts claims

Structured assessment, inconsistencies flagged.

Reports
IC prep brief

Thesis validation, open questions, recommended actions.

CONTEXT KNOWN · about the investee, by stage: Due Diligence 5% → Onboarding 25% → Quarterly Loop 60% → Year 2–7 + Exit 95%
The win

The risk signal that was in the Q2 narrative on page 7 — the one nobody got to — gets flagged the day it lands. By year seven you have a complete impact record, ready for fund close-out reports and the next fundraise.

Full deep-dive: sopact.com/solutions/impact-intelligence — and Book 04 of this library.

§ 1.11.3 · Training Intelligence
Workflow 03 of 05

Training Intelligence.

Your funders ask for outcomes. Your data ends at graduation. The spine connects intake → mid-program signals → placement → 180-day retention into one learner record.

Lead framework: Kirkpatrick L1–L4 · Persistent ID: learner_id · Reports / cohort: 6, automated
PHASE 01
Enrollment
Every learner baselined on day one. Intake, pre-assessment, barriers — all linked to one persistent Learner ID.
  • Data: intake forms, pre-assessments, interviews
  • Framework: Kirkpatrick L1 baseline
  • Output: one learner record, not three spreadsheets
PHASE 02
Training
Mid-program pulse checks auto-deployed. Confidence dips and engagement gaps flagged in real time. At-risk learners caught before dropout.
  • Data: pulse surveys, mentor obs., employer feedback
  • Framework: Kirkpatrick L2 + L3
  • Output: at-risk alerts linked to baseline context
PHASE 03
Outcomes
30/90/180-day follow-ups automated. Placement data connected to the original enrollment record. The funder report writes itself.
  • Data: HRIS pulls, follow-up calls, wage data
  • Framework: Kirkpatrick L4 + WIOA reporting
  • Output: 6 reports + funder-ready narrative
SPOTLIGHT — INSIDE PHASE 02

Catching the at-risk learner in week 4.

Priya's confidence dropped 34% between week 2 and week 4. Most programs find this out at the week-8 exit survey. Here, the spine catches it as it happens.

Data
Pulse check + mentor note

Weekly micro-survey, linked to learner_id.

Framework
Kirkpatrick L2 / L3

Learning + behavior change, measured continuously.

Dictionary
Learner record

Baseline confidence + week-by-week delta.

Transform
Trend detection

Drop ≥ 25% triggers coordinator alert with context.

Reports
At-risk alert

Routed to mentor with full learner history attached.

CONTEXT KNOWN · about the learner, by stage: Enrollment 5% → Mid-program 25% → Completion 60% → 180-day retention 95%
The win

Your LMS tracks completions. Your survey tool measures satisfaction. Neither proves the program worked. Here, every signal — enrollment through 180-day retention — sits on one learner record. Funder-ready the morning the cycle closes.

Full deep-dive: sopact.com/solutions/training-intelligence — and Book 05 of this library.

§ 1.11.4 · Nonprofit Programs
Workflow 04 of 05

Nonprofit Programs.

Your program data is in seven different systems. Or one. The spine unifies surveys, documents, interviews, and offline collection into a single intelligence layer — across every partner and chapter.

Lead framework: ToC / Logic Model / Logframe · Persistent ID: participant_id · Languages: 40+ supported
PHASE 01
Program Design
Build the Logic Model from interviews — not consultant workshops. Both your team and partners align on what gets measured before a single survey goes out.
  • Data: interview transcripts, program docs
  • Framework: Theory of Change · Logic Model · Logframe
  • Output: living TOC + shared Data Dictionary
PHASE 02
Unified Collection
Every participant gets a persistent unique ID at first contact. Surveys (on/offline), transcripts, documents, financial reports — all linked automatically.
  • Data: online + offline surveys, docs, transcripts
  • Dictionary: indicators mapped per partner context
  • Output: clean linked data, ready the moment it arrives
PHASE 03
Continuous Intel
AI codes qualitative responses, extracts themes, flags anomalies, cross-references against the TOC. Reports in any language. Learning daily — not reporting annually.
  • Data: ongoing data + all prior context
  • Transformation: qual + quant unified, AI-coded
  • Output: 6 reports + multi-language outputs
SPOTLIGHT — INSIDE PHASE 02

One ID across surveys, transcripts, documents, and offline.

80% of an M&E team's time goes to data cleanup. Persistent unique IDs at source eliminate the problem entirely.

Data
Multi-source intake

Online + offline + transcripts + PDFs + photos.

Framework
TOC indicator map

Each indicator mapped to partner-specific context.

Dictionary
One participant record

Every data point joined to a persistent ID at source.

Transform
Intelligent Cell scoring

Qual + quant unified. 1,000 responses coded in 4 min.

Reports
Multi-language outputs

Collect in Swahili, analyze in English, report in Portuguese.

CONTEXT KNOWN · across all partners, by stage: Partner Intake 5% → Program Launch 25% → Continuous Cycles 60% → Multi-Year Learning 95%
The win

The qualitative bottleneck disappears. 1,000 open-ended responses coded in four minutes — work that used to take three months of consultant time. Your M&E lead writes prompts in plain English. No consultant, no SQL, no waiting.

Full deep-dive: sopact.com/solutions/nonprofit-programs — and Book 02 of this library.

§ 1.11.5 · Application Management
Workflow 05 of 05

Application Management.

Stop recruiting judges. Start making decisions. The spine reviews every applicant, scores against your rubric, and tracks them from application through alumni outcomes — all on one persistent ID.

Lead framework: Custom rubric + ToC · Persistent ID: applicant_id · Reports / cycle: 6, automated
PHASE 01
Application Review
All 500 applications scored against your rubric overnight. Essays read, not skimmed. Bias detection per reviewer. Citation trails on every score.
  • Data: applications, essays, decks, attachments
  • Framework: custom rubric + evidence tiers
  • Output: ranked shortlist + bias audit
PHASE 02
Onboarding
Baseline surveys deployed to selected cohort. Commitments tracked. Application context carries forward — no re-introduction at week one.
  • Data: onboarding interviews, baseline surveys
  • Dictionary: program-fit + commitments record
  • Output: cohort baseline, individually linked
PHASE 03
Program + Alumni
Milestone surveys AI-coded. Longitudinal outcome evidence. "What happened to the fellows we selected?" — answer is already generated.
  • Data: milestone surveys, employer/grad outcomes
  • Transformation: AI patterns across cohorts
  • Output: alumni outcome report + predictive selection
SPOTLIGHT — INSIDE PHASE 01

500 essays read overnight — not 60.

The shortlist used to be the first 40 applications your team had time to read. Here, every essay actually gets analyzed — and reviewer bias is audited.

Data
Full submission

Application, essays, decks, attachments — all pages.

Framework
Your rubric

Custom-scored — no generic taxonomy forced on you.

Dictionary
Scored record

Every score citation-linked to the source passage.

Transform
AI scoring + bias check

Essay NLP, reviewer calibration, demographic patterns.

Reports
Ranked shortlist

Plus the bias audit your board will ask for.

CONTEXT KNOWN · about the applicant, by stage: App Review 5% → Onboarding 30% → Program 65% → Alumni + Cycle 2+ 95%
USE CASES COVERED IN BOOK 06

This same spine runs underneath pitch competitions, fellowships, scholarships, accelerators, and community grants — each gets a full treatment with rubrics, scoring patterns, and alumni tracking specifics in Book 06 (Application Management).

The win

Three days from setup to live. Applications opened Monday — by Tuesday morning your reviewers see a scored shortlist with citation trails. The candidate who would have been #447 unread gets seen.

Full deep-dive: sopact.com/solutions/application-review-software — and Book 06 of this library.

§ 1.12 · The accelerant

How Sopact Sense
compounds the work.

The spine is the idea. Sopact Sense is where it runs. Skills are the built-in playbooks — Mode B from Stage 04 — that make each phase move faster than auto-discovery alone, and that compound across cycles.

THE PLATFORM

Sopact Sense

Where the 5-stage spine actually runs — collection, AI, warehouse, BI, all in one.

  • Effective Data: collects forms, transcripts, surveys, documents.
  • Framework: ToC / Logic Model / Logframe authoring built in.
  • Dictionary: field-level definitions with calculations and lineage.
  • Transformation: AI tagging, joins, themes, sentiment — at scale.
  • Reports: operational, funder, portfolio dashboards.
THE ACCELERANT

Skills

Prepackaged playbooks (markdown files) that teach Sense the language of your domain.

Skills sit inside Sense and turn on automatically when a task matches. A few examples:

  • impact-framework-builder: Drafts ToC / Logic Model / Logframe from interview transcripts.
  • sroi-calculator: Computes Social Return on Investment per program or grantee.
  • iris-plus-mapper: Maps custom indicators to the IRIS+ catalog automatically.
  • funder-report-composer: Drafts funder-ready narrative from your dictionary and dashboards.

These built-in Skills run inside Sopact Sense — Sopact maintains them and keeps them current. Your team's own custom Skills (Mode C from Stage 04) compose on top.

Why this compounds

Each skill encodes a battle-tested playbook. As more programs run through Sense, the skills sharpen. Your team isn't starting from a blank page — they're starting from the best version of last quarter's work.

§ 1.13 · Recap

Six lessons
to carry forward.

1

Workflow beats tools.

AI is most useful inside a system that already knows where data belongs. The system is the spine; the AI sits inside it.

2

Context doesn't reset.

5% at intake. 95% at multi-year exit. Every phase deposits more onto the same record — document intelligence, stakeholder voice, risk signals. Nothing starts from zero.

3

Five spine elements. Every program.

Effective Data → Framework → Dictionary → Transformation → Reports. Every phase runs them. Every industry uses them. Memorize the order.

4

The dictionary is the contract.

If two people compute the same number two different ways, you don't have data — you have argument fuel. The signed Logic Model ends the argument.

5

Transformation is where AI earns its keep.

Clean → tag → join → summarize. Same order, every time. AI compresses weeks of manual coding into hours.

6

A report that doesn't change a decision is a document.

Six reports per cycle. Three audiences — operational, funder, portfolio. All drawn from the same dictionary, all on the same evidence base.

UP NEXT · CHAPTER 02

Data Design

Mixed-method · longitudinal · pre/post. How to design what you collect so the spine has something to work with.

The Sopact Intelligence Library

Six books.
One spine.
Built for the AI era.

You just finished Chapter 1 of the foundational book. The other five are industry guides — the same spine applied to your world. Pick the one that matches yours.

BOOK 01 · Beyond the Survey · You are here
BOOK 02 · Nonprofit Programs · Industry guide
BOOK 03 · Grant Management · Industry guide
BOOK 04 · Impact Investment · Industry guide
BOOK 05 · Workforce Training · Industry guide
BOOK 06 · Application Management · Industry guide

The teams winning with AI right now aren't the ones with the best models. They're the ones whose data has a place to land.

Sopact · published with care · v1.0 · 2026
sopact.com
The Sopact Intelligence Library
Book 01 of 06 · Chapter 02

Data
Design.

The methodology of what to collect — mixed-method, longitudinal, pre/post — and the design choices that make multi-language, offline, skip-logic collection produce reports your funder will actually read.

DESIGN (what to collect) → 3 LENSES (mixed / longitudinal / pre-post) → THE FIELD (offline · multi-language) → SKIP LOGIC (smart flows) → REPORTS (in any language)
By Unmesh Sheth · Sopact
§ 2.0 · Where this chapter sits

Designing for the
whole record.

Chapter 01 gave you the spine. This chapter is about feeding it well — deciding what to measure, when to measure it, and how to make every form, every interview, and every document arrive clean.

Chapters in Beyond the Survey

00 · Introduction · 8 pages
01 · Workflow · 22 pages
02 · Data Design · you are here
03 · Data Collection · next chapter
04 · Intelligent Suite · ~18 pages
05 · Actionable Insight · ~18 pages

The library

Book 01 · this book
Beyond the Survey
The foundational field guide — methodology for the AI era.
Book 02 · industry guide
Nonprofit Programs
One unified intelligence layer across many programs.
Book 03 · industry guide
Grant Intelligence
For program officers and foundation teams.
Book 04 · industry guide
Impact Intelligence
Portfolio outcomes with 5 Dimensions and IRIS+.
Book 05 · industry guide
Training Intelligence
Learner outcomes from enrollment to wage gain.
Book 06 · industry guide
Application Management
Pitch comps, fellowships, scholarships, accelerators.
CHAPTER · 02

Data
Design.

Three design lenses, every choice made before a respondent ever touches a form. Plus the field considerations — offline, skip logic, multi-language — that decide whether your report covers everyone or just the easy half.

What you'll learn
  • 01. Why design beats collection-in-the-moment
  • 02. The three lenses: mixed-method · longitudinal · pre/post
  • 03. Designing for the field — offline, skip logic, multi-language
  • 04. A fellowship measurement plan, worked end-to-end
Time to read
14 min
17 pages · 28 illustrations
§ 2.1 · Why design

Most reports fail
at the design phase.

By the time you're staring at an export trying to make sense of 450 rows, the report has already been decided — for better or worse. The architecture of clean reporting is built upstream, in the design choices nobody saw you make.

The "we'll figure it out" path
survey v1 · survey v2 · interview · follow-up · post-event · retro
Six instruments, no shared spine. Pre-program in one tool, follow-up in another. Match by hand later. Headers don't reconcile. Six weeks of cleanup on the back end.
The "designed first" path
T0 baseline → T1 midpoint → T2 endpoint → T3 +6mo · one persistent participant_id · same fields · same calc · same dictionary → report writes itself at T3
Four touchpoints, one ID, one Dictionary. Every wave links automatically. The report exists as the architecture, ready the morning the last response closes. No reconstruction.

Report quality is decided upstream. No amount of editorial polish after collection can recover evidence the architecture never captured.

§ 2.2 · Three lenses

Every measurement plan,
three design lenses.

Before you pick instruments, you make three choices. They aren't methods — they're lenses. Every credible measurement design is some combination of them.

LENS 01

Mixed-method

Numbers + stories in the same record. Numbers tell you what changed. Stories tell you why and for whom.

LENS 02

Longitudinal

Same people, multiple moments. The only way to detect real change rather than a snapshot of who happened to be in the room.

LENS 03

Pre / post

Two waves, one delta. The cleanest way to measure short-term change — as long as you don't overclaim what caused it.

Most credible designs use two lenses at once. Pre/post + mixed-method is the working baseline. Longitudinal + mixed-method is the gold standard. All three together is what the next ten pages teach you.

§ 2.3 · Mixed-method
Lens 01 · Mixed-method

Numbers tell you what.
Stories tell you why.

Quantitative data answers "how much changed?" Qualitative answers "what changed and for whom?" Neither alone is sufficient. The trick is keeping both in the same record — joined by participant ID — so a single response can be read both as a number and as a quote.

QUANT · Likert scales · rubric scores · counts / rates · demographics
QUAL · open responses · transcripts · case notes · field photos
→ the joined story, on participant_id
EXAMPLE
Confidence × test score

Rubric score (quant) joined to "tell us about a moment when you felt stuck" (qual). Was the score real or rote?

EXAMPLE
Wage gain × narrative

$1,100/wk after placement (quant) joined to "how has work changed your week?" (qual). Wage gain ≠ life gain — these tell you both.

EXAMPLE
NPS × open-ended

Promoter score = 9 (quant) joined to "what would you change?" (qual). The number is the headline; the quote is the meaning.

§ 2.4 · Longitudinal
Lens 02 · Longitudinal

Same people.
Multiple moments.

Most "impact" data is a snapshot. Longitudinal design tracks the same individuals across time — and only that lets you separate program effect from "who happened to be in the room this week."

T0 · Baseline (enrollment): survey + interview · demographics + goals
T1 · Mid-program (week 6): pulse check · at-risk signals
T2 · Endpoint (graduation): exit survey · capstone reflection
T3 · +6 months (post-program): retention check · wage / placement
One participant_id across all four waves.

The attrition trap

Longitudinal designs lose people. Half your T0 cohort may not respond at T3. Plan for it — track who drops, look at what their T0 records show, and report your findings with non-response transparency.

§ 2.5 · Pre / post
Lens 03 · Pre / post

Two waves.
One honest delta.

The cheapest credible measurement is a pre/post survey of the same people on the same instrument. It's the ground floor of measurement — but the floor is important, and most teams skip it.

PRE · T0 · 2.8 baseline mean · 5-point confidence scale, 47 respondents
12 weeks → DELTA +1.4
POST · T1 · 4.2 post-program mean · same instrument, same 47 participants

DESIGN DISCIPLINE

Three rules to keep pre/post honest

  1. Same instrument at T0 and T1. Changed wording = different measurement.
  2. Same individuals. Cohort-level pre means matched to cohort-level post means is not pre/post — it's two cross-sections.
  3. Don't claim causation without a comparison group. You measured change; you didn't prove the program caused it.
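Rule 2 turned into code: the delta is computed per individual, joined on participant_id, never as a difference of two group means. A small pandas sketch with made-up numbers (column names are illustrative):

```python
# Paired pre/post: join T0 and T1 on participant_id, average per-person deltas.
# Averaging two cross-sections separately would hide who changed and who dropped.
import pandas as pd

t0 = pd.DataFrame({"participant_id": [1, 2, 3], "confidence": [2.5, 3.0, 2.9]})
t1 = pd.DataFrame({"participant_id": [1, 2, 3], "confidence": [4.0, 4.1, 4.5]})

paired = t0.merge(t1, on="participant_id", suffixes=("_t0", "_t1"))
paired["delta"] = paired["confidence_t1"] - paired["confidence_t0"]

print(paired["delta"].mean())    # the honest headline number
print(len(t0) - len(paired))     # unmatched participants: report them, don't hide them
```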
§ 2.6 · Designing for the field

Field conditions decide
who's in your data.

The lenses tell you what to measure. The field tells you whether anyone can answer. Three design choices, made before a single form goes out, decide whether your sample reflects your program — or just the easy half.

CHOICE 01

Offline

Will your respondents always have connectivity? Rural communities, field staff in low-bandwidth settings, refugee camps, school visits — connectivity can't be assumed.

  • Capture on device, sync when connected
  • Photos + voice memos as data, not attachments
  • Persistent ID survives the offline session
CHOICE 02

Skip logic

Will respondents see questions that don't apply? A 60-question form becomes a 12-question form for any individual respondent if you branch correctly — and completion rates triple. A minimal sketch of branching-as-data follows this list.

  • AND/OR conditions on any prior answer
  • Show/hide sections, not just questions
  • Validation rules per branch path
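The sketch promised above, with made-up field names: each section declares the condition under which it shows, so a long form collapses to the sections that apply.

```python
# Skip logic as data: AND/OR conditions over prior answers decide visibility.
answers = {"employed": False, "has_children": True, "region": "rural"}

RULES = {
    "childcare_barriers": lambda a: a["has_children"] and not a["employed"],      # AND
    "transport_barriers": lambda a: a["region"] == "rural" or not a["employed"],  # OR
    "wage_details":       lambda a: a["employed"],
}

visible = [section for section, show in RULES.items() if show(answers)]
print(visible)   # -> ['childcare_barriers', 'transport_barriers']
```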
CHOICE 03

Multi-language

Will your stakeholders speak the language of your instrument? A form that requires English filters out half your population without you realizing.

  • Collect in any language · 40+ supported
  • AI analysis in source language
  • Reports in funder's language

All three are design choices — not collection mechanics. Decide them before you build your first form. The next page goes deep on the most under-considered: multi-language, end-to-end.

§ 2.7 · Multi-language end-to-end

Collect in Swahili.
Analyze in English.
Report in Portuguese.

Most teams accept English as the limiting factor and shrink their sample to match. Designed correctly, language doesn't have to be a constraint at all: your instrument, your AI prompts, and your output reports can each live in different languages without anyone translating by hand.

THREE INDEPENDENT LAYERS · COMPOSE ANY WAY
1

Collection

The form is rendered in the respondent's preferred language. Skip logic, validation messages, and even error states all translate.

EN ·
"What barriers did you face?"
SW ·
"Ulikabili changamoto gani?"
FR ·
"Quels obstacles avez-vous rencontrés?"
2

Prompts

AI prompts read responses in their original language — not after a lossy translation step. Themes and codes come out structured.

PROMPT (any UI lang)
Extract barrier themes from the response in its source language. Tag with: transport · childcare · stigma · cost · info.
RESPONSE (Swahili) →
tag: childcare · transport
3

Reports

The funder reads it in their language. The community partner reads it in theirs. The same evidence, generated three times from the same dataset.

REPORT v1 (EN) ·
"Cohort confidence rose 38%…"
REPORT v2 (PT) ·
"A confiança da coorte subiu 38%…"
REPORT v3 (FR) ·
"La confiance de la cohorte a augmenté 38%…"

The end-to-end flow

Swahili response → analyzed in source → English structured tags → Portuguese funder report

40+ languages supported on collection. Any combination on output. The participant never sees a translated question they didn't ask for. The funder never reads a translated quote without source-language verification.

§ 2.8 · Worked example

A fellowship
measurement plan.
From design choice to final report.

A global fellowship: 80 fellows, 18 countries, 6 working languages, 12 months of programming. Watch how the three lenses + three field choices compose into one coherent measurement plan.

COHORT · 80 fellows · global
COUNTRIES · 18 · across 4 continents
LANGUAGES · 6 · EN · ES · FR · PT · SW · AR
PROGRAM · 12 mo · +6 mo follow-up
01
Lens · Longitudinal + Mixed-method

Four waves (T0/T1/T2/T3). Each wave = rubric + interview, joined on fellow_id.

T0 intake · T1 month-4 · T2 month-12 · T3 +6mo
02
Lens · Pre/post on confidence rubric

Same 8-dimension instrument at T0 and T2. The delta is the headline.

8-dim rubric · 1–5 scale · same instrument
03
Field · Offline-first for South Sudan + Yemen cohort members

Captured on phone, synced when connected. Photos + voice memos as supporting data.

8 of 80 fellows · offline-capable
04
Field · Skip logic by region + program track

Six tracks × five regions = 30 paths. No fellow sees more than 14 questions.

60Q form → ≤14Q per fellow
05
Field · Multi-language collection + reporting

Collect in 6 languages. Analyze in source. Reports: EN for board, FR for francophone partners, AR for regional convening.

6 collect langs · 3 report langs
The result

One measurement plan, every design choice made before recruitment opens. At T2 + 6 months, the final report writes itself from the accumulated record — in English for the board, French for partners, Arabic for the regional convening. Every quote in every report sources back to the language it was given in.

§ 2.9 · Gallery

Five design archetypes.
One spine.

Most measurement plans match one of five archetypes — or a combination of two. Recognizing yours short-cuts the design phase from weeks to hours.

When each archetype fits

Pre/post cohort — your default when the program has a clear start and end, and you can measure the same individuals twice. Most workforce-training, fellowship, and skills programs land here.
Longitudinal w/ attrition — when you need to see effects months after the program ends. Plan for attrition by design: at T3 you'll have 60–80% of T0.
Qual-primary mixed — participatory, ethnographic, or community-led evaluations where stories carry more weight than scales. Numbers exist as triangulation, not headline.
Treatment + control — when you need to claim causation. Higher rigor, higher cost. Most impact investors live here.

The next two pages walk through the two most common archetypes in detail. The others get full treatment in their domain-specific books.

§ 2.9.1 · Pre/post cohort
Archetype 01 of 05

Pre / post cohort.

The most common archetype for cohort-based programs. Same individuals, same instrument, two waves. The delta is the headline; the qual is the explanation.

Best for
Cohort programs · 3–12 mo
Sample
20–200 participants
Min waves
2 (T0 + T1)
THE SHAPE
T0 · Day 1: Rubric ×8 · Open: goals · Demographics
T1 · Final day: Rubric ×8 · Open: change · Capstone reflection
12 weeks · same instrument · same fellows → Δ confidence · Δ skill · Δ network
Design rules
  • Identical instrument at T0 and T1. Any wording change invalidates the comparison.
  • Same individuals. Cohort-mean change ≠ individual change. Link by participant_id.
  • Mix quant + qual at both waves. Quant gives you the delta; qual gives you the why.
Watch-outs
  • Don't claim causation without a comparison group. You measured change.
  • Response shift bias — what "3" means at T0 may differ from "3" at T1. Anchor with examples.
  • Selection effects — your sample is who finished, not who started. Report both.
§ 2.9.2 · Longitudinal w/ attrition
Archetype 02 of 05

Longitudinal · w/ attrition.

When you need to see post-program effects — wage gain at +6mo, retention at +12mo, civic engagement at +24mo — you design for the long view. And you design for the fact that not everyone will reply.

Best for
Programs with downstream outcomes
Waves
3–5 (T0 → T3+)
Expected attrition
20–40% by T3
THE SHAPE · 4 WAVES, NARROWING SAMPLE
T0 Intake: 100 → T1 Mid (mo 6): 92 → T2 End (mo 12): 78 → T3 +6mo: 62 · sample shrinks · design for it · report it
The attrition discipline
  • Track who drops, not just who stays. Compare T3 respondents' T0 records to T3 non-respondents' (a sketch follows this list).
  • Plan capture for the long view — at T0, get an alternate contact and an opt-in for +6 / +12 / +24 month outreach.
  • Report response rate per wave in the final report. Funders trust transparency.
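The first discipline, sketched in pandas: compare the baseline records of wave-T3 responders and non-responders, so the final report can say whom the follow-up sample still represents (data and column names invented for illustration):

```python
# Attrition check: did the people who answered at T3 look different at T0?
import pandas as pd

t0 = pd.DataFrame({
    "participant_id": range(1, 101),
    "baseline_confidence": [2.0 + (i % 5) * 0.5 for i in range(100)],
})
t3_responders = set(range(1, 63))     # 62 of 100 replied at +6 months

t0["responded_t3"] = t0["participant_id"].isin(t3_responders)
print(t0.groupby("responded_t3")["baseline_confidence"].mean())
# If the two groups' baselines diverge, say so: the T3 sample is not the cohort.
```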
What you gain
  • Sustained-effect evidence — the kind funders renew on, not the kind they doubt.
  • Sleeper effects show up at T3 that weren't visible at T1.
  • A reusable cohort — every renewal cycle starts smarter than the last.
§ 2.10 · The accelerant

How Sopact Sense
handles all of this.

The design lenses and field choices in this chapter are methodology. Sopact Sense is where they get implemented — without three different tools, three different spreadsheets, or three different consultants.

THE PLATFORM

Sopact Sense

The data design choices in this chapter all run inside one platform. Contacts, Forms, and Relationships keep every wave linked to the same participant.

  • Contacts
    CRM-style cohort with unique IDs assigned at first contact.
  • Forms · 12 question types · skip logic
    AND/OR conditions, advanced validation, save-progress for long forms.
  • Relationships
    Every form ties to Contacts. T0 + T1 + T2 + T3 all link automatically.
  • Multi-language · 40+
    Forms rendered, responses captured, AI analysis in source language.
  • Offline + sync
    Mobile capture, syncs when connected, persistent ID survives.
THE ACCELERANT

Skills

Prepackaged playbooks for data-design decisions. They shorten the time from blank page to a measurement plan that funders trust.

  • { } study-design-advisor
    Recommends the lens combo for your context.
  • { } pre-post-validator
    Checks T0 and T1 instruments are truly identical.
  • { } cohort-balancer
    Flags demographic imbalances at intake.
  • { } instrument-translator
    Renders the form in 40+ languages with branch-aware skip logic preserved.

These built-in Skills run inside Sopact Sense. Your team's custom Skills compose on top.

Why this compounds

The first cohort teaches Sense your instrument. The second cohort starts with the validated version. By cohort five your team isn't designing the measurement plan from scratch — they're starting from the best version of last cohort's plan.

15
§ 2.11 · Recap + Up Next
Chapter 02 · §2.11

Six lessons
to carry forward.

1

Report quality is decided upstream.

No editorial polish recovers evidence the architecture never captured. Design before you collect.

2

Three lenses. Compose, don't pick one.

Mixed-method + longitudinal + pre/post. Most credible designs use at least two; the gold standard uses all three.

3

Numbers tell what. Stories tell why.

Neither alone is sufficient. Keep both in the same record, joined by participant_id, so a single response reads both as a number and as a quote.

4

The field decides who's in your data.

Offline · skip logic · multi-language — three design choices, made before a respondent touches the form. Get them right or your sample reflects the easy half only.

5

Language is three layers, not one.

Collect in source · analyze in source · report in audience language. Decouple them and you reach everyone without translation lag.

6

Pick an archetype before you build.

Five archetypes cover most measurement plans. Match yours and short-cut design from weeks to hours.

UP NEXT
Chapter 03 · Collection

You've designed the plan. Now you collect — online, offline, from documents, from transcripts — and keep all four channels joined on one participant_id.

03
16
End of Chapter 02
END OF CHAPTER 02 · BOOK 01

Six books.
One spine.
Built for the AI era.

Design done. Collection next. Then transformation. Then reports. Pick the industry guide that matches your world — or continue straight to Chapter 03.

BOOK 01
Beyond
the Survey
You are here
BOOK 03
Grant
Management
Industry guide
BOOK 04
Impact
Investment
Industry guide
BOOK 05
Workforce
Training
Industry guide
BOOK 05
Nonprofit
Programs
Industry guide
BOOK 06
Application
Management
Industry guide

"Report quality is decided upstream. The design phase is where the report already exists — or doesn't."

THE SOPACT INTELLIGENCE LIBRARY · 2026
17
The Sopact Intelligence Library
Book 01 of 06 · Chapter 03

Data
Collection.

Four channels — online, offline, documents, transcripts — and the one architectural choice that joins them into a single participant record instead of four disconnected exports.

ONLINE (web forms) · OFFLINE (mobile + sync) · DOCUMENTS (PDFs · OCR) · TRANSCRIPTS (auto · speakers) → ONE stakeholder_id
By Unmesh Sheth · Sopact
§ 3.0 · Where this chapter sits
Where this chapter sits

From design
to data flowing in.

Chapter 02 told you what to measure. This chapter is the mechanics of getting it in — including the three channels traditional survey tools ignore.

Chapters in Beyond the Survey

00 · Introduction · 8 pages
01 · Workflow · 22 pages
02 · Data Design · 17 pages
03 · Data Collection · you are here
04 · Intelligent Suite · next chapter
05 · Actionable Insight · ~18 pages

The library

Book 01 · this book
Beyond the Survey
The foundational field guide — methodology for the AI era.
Book 03 · industry guide
Grant Intelligence
For program officers and foundation teams.
Book 04 · industry guide
Impact Intelligence
Portfolio outcomes with 5 Dimensions and IRIS+.
Book 05 · industry guide
Training Intelligence
Learner outcomes from enrollment to wage gain.
Book 05 · industry guide
Nonprofit Programs
One unified intelligence layer across many programs.
Book 06 · industry guide
Application Management
Pitch comps, fellowships, scholarships, accelerators.
2
CHAPTER · 03

Data
Collection.

Surveys are one channel. Real collection has four — and pretending the other three don't exist is why "impact data" is usually missing the most important parts of what people actually said.

What you'll learn
  • 01.Why "survey" is the smallest of four channels
  • 02.The four channels — online · offline · documents · transcripts
  • 03.How unique-link-per-respondent joins them all
  • 04.One cohort, four channels, one record — end-to-end
Time to read
12 min
16 pages · 22 illustrations
3
§ 3.1 · Why "survey" is too small
Chapter 03 · §3.1

Most impact data
isn't in the survey.

Application essays, exit interviews, financial documents, partner audits, field photos, voice memos from rural visits. These are the most evidence-rich parts of any program — and traditional survey tools can't accept any of them.

What a survey tool sees
Q1
Rate confidence 1–5
Q2
Select your demographic
Q3
Open text · 200 char max

A flat schema of typed cells. Anything else is "out of scope."

What's actually in the program
📝
Web form
scales + open-ended
📱
Mobile offline
photos + voice
📄
Application PDFs
essays · recs · transcripts
🎙
Exit interviews
transcripts + tags
📊
Quarterly metrics
structured CSVs
🗂
Social audits
3rd-party PDFs

Six input shapes, one record. Each format is data — not exhaust.

If your tool can only handle questions and answers, you're collecting maybe 30% of what your program actually produces.

4
§ 3.2 · Four channels
Chapter 03 · §3.2

Four channels.
One stakeholder ID.

The unlock isn't accepting more formats — it's keeping them all linked to the same person. A unique stakeholder ID assigned at first contact survives across every channel that comes after.

ONLINE (web forms · unique links) · OFFLINE (mobile · sync · photos) · DOCUMENTS (PDFs · OCR · extraction) · TRANSCRIPTS (auto · speakers · timestamps) → ONE PERSON · stakeholder_id (contacts × forms × relationships)

The architectural choice: persistent ID from first contact.

Same person fills a web form in March, gets interviewed in June, submits a PDF in September. All three land on the same record. No reconciliation, no VLOOKUPs, no consultant gluing exports together.

5
§ 3.3 · Online
Channel 01 Online

Web forms,
but unique-link.

Online surveys are familiar — the catch is what most tools do wrong: one generic URL for the whole cohort. A unique link per respondent is the difference between "we got 200 responses" and "we know which 200 people they were and what each of them said the last time too."

Generic URL · the old way
survey.example.com/q3-feedback
5 responses · who's who?

Identity collected inside the form (if at all). Email retypes. Duplicates pile up. Pre/post linkage by hand.

Unique link · designed
sense.app/f/q3?id=p_a7f3
a7f3 b2c1 c9e2 d4a8 e1f5

Identity in the URL. Form pre-fills what's known. Respondent can edit later via the same link. Pre/post linkage is a calculation, not a project.
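
A minimal sketch of what issuing identity-in-the-URL takes. The URL shape mirrors the example above; the token format and in-memory registry are illustrative stand-ins:

PYTHON · unique-link issuing
import secrets

link_registry: dict[str, str] = {}   # token → contact_id; a real system persists this

def issue_link(contact_id: str, form: str = "q3") -> str:
    token = "p_" + secrets.token_hex(2)      # e.g. p_a7f3; use longer tokens in production
    link_registry[token] = contact_id        # the same link resolves at every later wave
    return f"https://sense.app/f/{form}?id={token}"

print(issue_link("c_001"))                   # prints something like https://sense.app/f/q3?id=p_93af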

EMBED
Iframe into any LMS, website, or partner portal. Same unique-link logic.
SAVE-PROGRESS
Long applications resume where the respondent left off. Days later, on any device.
SUBMISSION ALERT
Email triggered on submit with full payload — route to staff or downstream system.
6
§ 3.4 · Offline
Channel 02 Offline

Capture now.
Sync when connected.

The respondents you most need to hear from often have the worst connectivity: rural farmers, refugee settlements, field staff on partner visits. Mobile offline-first capture is the difference between hearing them and writing them out of your data.

IN THE FIELD · NO BARS: survey → local storage · queue (47 responses) → SYNC when device reconnects → CLOUD · UNIFIED RECORD: 47 records linked to stakeholder_id automatically · AI runs on photos + voice + text
PHOTO
Camera-roll photo attached to a response. AI describes contents at sync time. Evidence, not exhibit.
VOICE
Press-hold to record a 30s voice memo in any language. Transcribed + tagged after sync.
GPS / TIMESTAMP
Optional location + timestamp on each response. Useful for field-monitor accountability.
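
The offline-first loop is small enough to sketch: capture appends to a local queue, sync flushes it when connectivity returns. post_to_cloud is a stand-in for the real upload call:

PYTHON · offline queue + sync
import json, pathlib

QUEUE = pathlib.Path("pending_responses.jsonl")   # survives app restarts

def capture(response: dict) -> None:
    with QUEUE.open("a") as f:                    # append-only local store
        f.write(json.dumps(response) + "\n")

def sync(post_to_cloud) -> int:
    if not QUEUE.exists():
        return 0
    lines = QUEUE.read_text().splitlines()
    for line in lines:
        post_to_cloud(json.loads(line))           # each record carries its stakeholder_id
    QUEUE.unlink()                                # clear only after a full, successful flush
    return len(lines)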
7
§ 3.5 · Documents
Channel 03 Documents

PDFs are
data, not attachments.

Application essays, financial statements, social audits, grantee reports — these arrive as documents. Treated as "attachments" they sit at the bottom of the record unread. Treated as data, every page becomes searchable evidence with a citation you can click.

PDF input
Sustainability Report 2025
… committed to net-zero emissions by 2035 through a 40% renewable energy mix by year-end 2026 …
… diversity on the board grew from 28% to 41% women-identifying members …
page 12 of 47
extract page-level cite
Structured output
net_zero_year
2035 p.12
renewable_pct_target
40% p.12
board_diversity_pct
41% p.12
prior_year_diversity
28% p.12

Every value clicks back to the page it came from. No "trust me" extracts.

Common extracts
  • Numbers (spend, runway, headcount) with units
  • Claims + commitments, tagged by framework
  • Demographics from rec letters or essays
  • Compliance items checked against checklists
What survives
  • Page-level citation per extracted value
  • Original source quote, in source language
  • Confidence score on each extraction
  • Human-override path when AI gets it wrong
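
The list above implies a per-value record shape. A sketch; these field names are illustrative, not the platform's schema:

PYTHON · per-value extraction record
from dataclasses import dataclass

@dataclass
class Extraction:
    field: str                         # e.g. "net_zero_year"
    value: str                         # e.g. "2035"
    page: int                          # page-level citation; every value clicks back
    source_quote: str                  # original sentence, kept in source language
    confidence: float                  # 0–1 score from the extractor
    overridden_by: str | None = None   # human-override path when AI gets it wrong

row = Extraction("net_zero_year", "2035", 12,
                 "… committed to net-zero emissions by 2035 …", 0.94)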
8
§ 3.6 · Transcripts
Channel 04 Transcripts

Interviews become
queryable.

A 30-minute exit interview used to be "we'll listen to it later." Now it's auto-transcribed with speaker labels and timestamps before the call ends — and every line is joined to the same participant record as their survey.

Auto-transcript · timestamped
[00:02:14] Interviewer: Walk me through the moment you realized the program was working for you.
[00:02:24] Participant: Probably week 6. I was helping a peer debug an API call and I didn't have to look anything up.
[00:04:08] Interviewer: What changed for you outside of the technical skills?
[00:04:18] Participant: My partner could finally not ask "did you fix anything today?" like it was a joke.
[00:08:42] Participant: Confidence in interviews is real now. Not faked.
Structured output
THEMES (3)
technical confidence peer recognition family validation
EVIDENCE QUOTES (3)
  • "didn't have to look anything up" 02:24
  • "partner could finally not ask…" 04:18
  • "confidence in interviews is real" 08:42
JOIN KEY
participant_id = p_a7f3
SPEAKER LABELS
AI distinguishes interviewer from interviewee. Multi-party calls are split per speaker.
TIMESTAMP JOIN
Every claim in a report clicks back to the second of the recording it came from.
SOURCE LANGUAGE
Interview in Swahili? Transcribe in Swahili, tag in English, report in Portuguese.
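
Parsing that format is mechanical. A sketch that turns timestamped lines into rows already carrying the join key:

PYTHON · transcript line parser
import re

LINE = re.compile(r"\[(\d{2}:\d{2}:\d{2})\]\s+(\w+):\s+(.*)")

def parse(transcript: str, participant_id: str) -> list[dict]:
    rows = []
    for raw in transcript.splitlines():
        if m := LINE.match(raw.strip()):
            ts, speaker, text = m.groups()
            rows.append({"participant_id": participant_id,   # same key as the survey
                         "timestamp": ts, "speaker": speaker, "text": text})
    return rows

quotes = parse("[00:08:42] Participant: Confidence in interviews is real now.", "p_a7f3")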
9
§ 3.7 · Worked example
Chapter 03 · §3.7

One cohort.
Four channels.
One participant record.

A coding bootcamp cohort, 60 learners, 14 weeks. Watch how each of the four channels delivers different data — and how all four land on the same record without anyone joining them by hand.

01
ONLINE · WEEK 0
Intake form (unique link per learner)

Demographics · goals · prior experience · accommodations needed

60 / 60 responses
stakeholder_id assigned
02
DOCUMENTS · WEEK 0–1
Application portfolio + rec letters

PDFs extracted into structured fields · joined on stakeholder_id automatically

180 PDFs read
page-cited evidence
03
OFFLINE · WEEKS 1–14
Weekly mobile pulse-checks

14 quick check-ins per learner · sync on commute · early at-risk signals

~840 pulses
99% sync rate
04
TRANSCRIPTS · WEEK 14
Exit interviews · 30 min each

Auto-transcribed with speaker labels · themed in real-time · joined on id

54 interviews
~1620 quote-tags
The result · one record per learner, four channels deep

Day-1 demographics from the form. Application evidence from the PDFs. Weekly pulse data from the phone. Closing reflections from the interview. Same stakeholder_id, four data shapes, zero reconciliation. The cohort report writes itself the morning week 14 ends.
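
"Zero reconciliation" in data terms: four channel exports, one key, one merge chain. File names are illustrative; the point is that every frame already carries stakeholder_id:

PYTHON · four channels, one record
import pandas as pd

intake   = pd.read_csv("intake.csv")        # online form, week 0
docs     = pd.read_csv("documents.csv")     # fields extracted from the PDFs
pulses   = pd.read_csv("pulses.csv")        # weekly offline check-ins, aggregated
exit_int = pd.read_csv("exit_themes.csv")   # themed transcript quotes

record = (intake
          .merge(docs,     on="stakeholder_id", how="left")
          .merge(pulses,   on="stakeholder_id", how="left")
          .merge(exit_int, on="stakeholder_id", how="left"))
# One row per learner, four channels deep; no VLOOKUPs, no consultant glue.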

10
§ 3.8 · Collection patterns
Chapter 03 · §3.8

Five patterns,
by program type.

Different programs lean on different channel mixes. Recognizing your pattern short-cuts the architecture phase from weeks to hours.

Two patterns in detail

Workforce training (next page) — online intake + weekly offline pulse + exit transcript. Heaviest on volume of small data points across many weeks. Pulse data is the differentiator versus old-school pre/post-only.
Application-driven (page after) — document-heavy intake (essays, recs, financials) + structured online review forms. Each applicant generates 4–6 documents, all joined to one applicant_id.

The remaining three patterns get full treatment in their domain books.

11
§ 3.8.1 · Workforce training
Pattern 01 of 05

Workforce training.

Online intake at week 0, mobile pulses every week, document-light, transcript at the end. Continuous signal — not just a two-wave snapshot.

Cohort size
30–80 learners
Cadence
weekly pulse
Primary channels
Online + Offline
01
Online intake
  • Unique-link form, week 0
  • Demographics + goals + prior skill
  • Accommodations + language preference
  • stakeholder_id assigned here
02
Mobile pulse
  • 30-second weekly check-in
  • Confidence + blocker + 1 photo
  • Captured offline, syncs on commute
  • Voice memo optional, in source lang
03
Capstone artifact
  • Project PDF or repo link
  • Extracted: stack, complexity, themes
  • Linked to same stakeholder_id
  • Reviewer rubric joined on submit
04
Exit interview
  • 30-min auto-transcribed call
  • Speaker-labeled, time-coded
  • Themed against pulse history
  • Joined to T0 record automatically
05
+6mo follow-up
  • Same unique link as week 0
  • Wage / placement / retention
  • Open: "what's changed since?"
  • Pre/post + longitudinal in one shot
The win

Pulse data surfaces at-risk learners by week 3. Capstone evidence is read, not skimmed. Exit interviews are queryable by theme. +6mo response rate is 77% — because the same unique link still works.

12
§ 3.8.2 · Application-driven
Pattern 02 of 05

Application-driven.

Scholarships, accelerators, fellowships, pitch competitions. Document-heavy intake, structured rubric reviews, decision-supporting reports.

Volume
100–2000 applicants
Docs / app
3–8 PDFs
Primary channels
Documents + Online
01
Application portal
  • Save-progress online form
  • Document uploads inline
  • Skip logic by application track
  • Multi-language form rendering
02
Document extraction
  • Essay themes + sentiment
  • Rec letter signal extraction
  • Financials → structured numbers
  • All joined to applicant_id
03
AI-pre-scored brief
  • One-page summary per applicant
  • Rubric-aligned scoring with citations
  • Outliers flagged for panel attention
  • Time per app: 15 min → 3 min
04
Panel review grid
  • Sortable, citation-backed
  • Multi-reviewer rubric blending
  • Decision audit trail
  • Equity-audit-ready
05
Decisions + report
  • Accept / waitlist / decline tagging
  • Rationale captured per decision
  • Panel-ready evidence report
  • Cohort onboarding ready immediately
The win

500 scholarship applications reviewed in two days instead of three weeks. Every decision auditable, every score citation-backed. Selected cohort flows straight into the pattern-01 workforce-training channel mix without re-entering data.

13
§ 3.9 · The accelerant
Chapter 03 · §3.9

How Sopact Sense
handles all four channels.

Four channels could mean four tools. In Sopact Sense it's one — built around Contacts, Forms, and Relationships, with Skills handling the channel-specific work that traditional tools can't.

THE PLATFORM

Sopact Sense

Four channels, one platform. Contacts hold the unique IDs. Forms handle the structured input. Relationships keep documents and transcripts joined to the right person.

  • Online · web forms with unique links
    Embed, save-progress, submission alerts, 12 question types, validation.
  • Offline · mobile capture with sync
    Local storage, photos, voice memos, GPS, 99%+ sync rates.
  • Documents · PDF extraction
    OCR, structured field extraction, page-level citation, confidence scoring.
  • Transcripts · auto-transcribe
    Speaker labels, timestamps, theming, source-language preservation.
  • Relationships keep all four joined
    One stakeholder_id, four channel feeds, zero reconciliation.
THE ACCELERANT

Skills

Prepackaged playbooks for the channel-specific moves that take a lot of configuration to get right the first time — and that need zero configuration on every subsequent cohort.

  • { } unique-link-router
    Generates per-respondent URLs and pre-fills known fields.
  • { } offline-sync-monitor
    Tracks sync state across field devices; flags missing data.
  • { } document-extractor
    Pulls structured fields from PDFs with page-level citations.
  • { } transcript-importer
    Brings audio/video into the record with speaker labels and themed quotes.

These Skills run inside Sopact Sense. They aren't shipped as standalone files.

Why this compounds

Cohort 1's transcripts teach Sense your theming vocabulary. Cohort 2 inherits that vocabulary and adds nuance. By cohort 5 your team is starting from the best transcript pipeline you've ever had — not configuring channel mechanics from scratch.

14
§ 3.10 · Recap + Up Next
Chapter 03 · §3.10

Five lessons
to carry forward.

1

"Survey" is the smallest of four channels.

Online forms are one part. Documents, transcripts, and offline mobile capture cover the other 70% of what your program actually produces.

2

Persistent ID is the architectural choice.

Unique stakeholder_id from first contact survives every channel that comes after. Pre/post becomes a calculation, not a project.

3

Documents and transcripts are data, not attachments.

Every PDF becomes structured fields with page citations. Every interview becomes themed quotes with timestamps. Both join on stakeholder_id.

4

Offline-first or you lose your hardest-to-reach.

Rural, field-staff, low-bandwidth participants are the ones funders most want evidence on. Mobile capture + sync makes them part of your data, not absent from it.

5

Pattern-match before you architect.

Five patterns cover most programs. Find yours, lift the channel mix, short-cut weeks of design work.

UP NEXT
Chapter 04 · Intelligent Suite

Four channels of data arrive on one record. Now: the AI features that analyze them — cell, row, column, grid — and the four canonical report types they produce.

04
15
End of Chapter 03
END OF CHAPTER 03 · BOOK 01

Six books.
One spine.
Built for the AI era.

Collection done across all four channels. Transformation next — where the Intelligent Suite turns this record into reports your funder will read.

BOOK 01
Beyond
the Survey
You are here
BOOK 03
Grant
Management
Industry guide
BOOK 04
Impact
Investment
Industry guide
BOOK 05
Workforce
Training
Industry guide
BOOK 05
Nonprofit
Programs
Industry guide
BOOK 06
Application
Management
Industry guide

"Four channels. One stakeholder ID. Same record growing across every form, every document, every interview."

THE SOPACT INTELLIGENCE LIBRARY · 2026
16
The Sopact Intelligence Library
Book 01 of 06 · Chapter 04

Intelligent
Suite.

Four AI features that turn one clean dataset into four canonical report types — designer-quality, multi-language, ready the moment the last response closes.

CELL · ROW · COLUMN · GRID
By Unmesh Sheth · Sopact
§ 4.0 · Where this chapter sits
Where this chapter sits

From clean data
to decision-ready reports.

Chapter 03 brought data in cleanly from four channels. This chapter is the AI layer that turns it into reports nobody has to rebuild — including the four canonical report types every credible program uses.

Chapters in Beyond the Survey

00 · Introduction · 8 pages
01 · Workflow · 22 pages
02 · Data Design · 17 pages
03 · Data Collection · 16 pages
04 · Intelligent Suite · you are here
05 · Actionable Insight · next chapter

The library

Book 01 · this book
Beyond the Survey
The foundational field guide — methodology for the AI era.
Book 03 · industry guide
Grant Intelligence
For program officers and foundation teams.
Book 04 · industry guide
Impact Intelligence
Portfolio outcomes with 5 Dimensions and IRIS+.
Book 05 · industry guide
Training Intelligence
Learner outcomes from enrollment to wage gain.
Book 05 · industry guide
Nonprofit Programs
One unified intelligence layer across many programs.
Book 06 · industry guide
Application Management
Pitch comps, fellowships, scholarships, accelerators.
2
CHAPTER · 04

Intelligent
Suite.

Four named AI features. Four canonical report types. One platform that runs all of it on the dataset you already collected cleanly — and writes the report in whatever language your funder reads.

What you'll learn
  • 01.Why one stack beats stitched-together tools
  • 02.The four features — Cell · Row · Column · Grid
  • 03.Prompt-craft — Constraints, Emphasis, Task, Context
  • 04.The four canonical report types — with live examples
Time to read
16 min
18 pages · 30 illustrations
3
§ 4.1 · One stack vs. stitched
Chapter 04 · §4.1

Four tools, four weeks.
One stack, four hours.

The traditional stack: SurveyMonkey for collection, NVivo for qual coding, Excel for cross-tabs, Canva for the deck. Four tools, four exports, four reconciliations. Every cycle starts over.

The stitched-tools stack
SurveyMonkey
collection
→ CSV export
NVivo / Dedoose
qual coding · 2 wks
→ XLSX export
Excel
VLOOKUP reconciliation
→ pivot table
Canva / Figma
visual design
→ stale PDF
Per cycle: 4–6 weeks · $15–35k consultant rebuild · zero reproducibility next cycle.
The Intelligent Suite
Intelligent Cell
extract from single open-ended response or document
Intelligent Row
per-respondent report card
Intelligent Column
correlations + patterns across all rows
Intelligent Grid
full designer-quality report, multi-language
Per cycle: minutes to hours · no marginal cost · reproducible from cohort to cohort.

The stitched stack carries the same reconstruction cost every cycle. The Intelligent Suite does the architectural work once.

4
§ 4.2 · The Suite
Chapter 04 · §4.2

Four features.
One dataset.

The Suite operates on four scopes — a single cell, a single row, a single column, or the entire grid. Each scope answers a different question. Run them in any combination on the same clean dataset.

participant_id · confidence_t0 · confidence_t1 · open_response · theme
p_a7f3 · 2 · 4 · "hard at first" · …
p_b2c1 · 3 · 4 · … · …
p_c9e2 · 2 · 5 · … · …
p_d4a8 · 1 · 3 · … · …
p_e1f5 · 3 · 5 · … · …
CELL (one value) · ROW → · COLUMN ↓ · GRID (all of it)
CELL
one value, one prompt
ROW
one person, one card
COLUMN
one field, all rows
GRID
whole dataset, whole report
5
§ 4.3 · Intelligent Cell
Feature 01 Intelligent Cell

One value,
one prompt.

The simplest feature in the Suite. Look at one cell — usually an open-ended response, a quote, a paragraph of a document — run a prompt against it, write the result into an adjacent cell. Multiply by 500 rows, it costs you minutes.

INPUT CELL · open_response
"Honestly the first few weeks were brutal. I kept getting stuck on async functions. But by week 8 I shipped a project and it actually felt like I could."
237 chars · participant_id = p_a7f3
prompt
OUTPUT CELLS · adjacent columns
confidence_score
4 / 5
sentiment
positive shift
themes
struggle → mastery
USE CASE 01
Score confidence from open text

Likert-equivalent from narrative.

USE CASE 02
Extract themes from interview

Tags from a transcript chunk.

USE CASE 03
Pull a number from a PDF page

Spend value with page citation.
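
The Cell pattern, stripped to its loop. score_cell is a stand-in for whatever model call you use; the discipline is the frozen prompt and the typed output, not the loop:

PYTHON · cell-scope scoring loop
import pandas as pd

df = pd.read_csv("responses.csv")            # participant_id, open_response

PROMPT = ("Score this response on technical confidence. "
          "Score 1–5 only. No explanation. No null.")     # frozen: identical every run

def score_cell(text: str) -> int:
    # Stand-in: call your model with PROMPT + text and parse an int 1–5.
    return 3

df["confidence_score"] = df["open_response"].map(score_cell)
df.to_csv("responses_scored.csv", index=False)   # result lands adjacent, joined by id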

6
§ 4.4 · Intelligent Row
Feature 02 Intelligent Row

One person,
one report card.

Take all the data you have on one person — across forms, interviews, documents, waves — and generate a one-page report card. For 500 applicants, that's 500 cards. For a cohort review, that's a personalized brief per participant.

One row · all data for p_a7f3
participant_idp_a7f3
cohortspring-2026
demographicsF · 24 · first-gen
confidence_t02 / 5
confidence_t14 / 5
capstone_score87 / 100
themesstruggle→mastery
exit_interview30 min transcript
placement_t3jr. SWE · $87k
Generated card
PARTICIPANT BRIEF

Maya · cohort spring-2026

+2
confidence shift
87
capstone score

Maya entered with self-rated confidence of 2/5. Her exit interview names week 8 as the inflection — shipping her first real project. By T3 she'd landed a junior SWE role at $87k.

Quote: "by week 8 it felt like I could" 02:24

Runs 500 times to generate 500 personalized briefs. The panel works from these, not from raw exports.

7
§ 4.5 · Intelligent Column
Feature 03 Intelligent Column

One field,
all rows.

Pick one or more columns. Ask a question that spans the whole cohort. Get a pattern, a correlation, a cluster — backed by the rows that contributed to it.

Two columns selected
COLUMN · X
test_score
quant · 0–100
COLUMN · Y
confidence_text
qual · open-ended

"Do high test scores predict high confidence — or are they independent?"

Output · participant-level scatter
Scatter · test_score (x) → confidence (y) · quadrants: low/high · high/high · low/low · high/low ⚠ · r = 0.62 · positive · not absolute
PATTERN READ

Strong positive correlation (r = 0.62) but seven outliers in the high-test / low-confidence quadrant — usually first-gen students who score but doubt. Worth a program intervention.
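
The same read in a few lines of pandas: Pearson r plus the quadrant filter that surfaces the outliers (thresholds mirror the §4.9.2 example):

PYTHON · correlation + quadrant read
import pandas as pd

df = pd.read_csv("cohort.csv")               # test_score (0–100), confidence (1–5)

r = df["test_score"].corr(df["confidence"])  # Pearson r across all rows

# The decision-relevant rows sit in the high-test / low-confidence quadrant:
outliers = df[(df["test_score"] >= 80) & (df["confidence"] <= 2)]
print(f"r = {r:.2f} · {len(outliers)} high-score / low-confidence outliers")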

8
§ 4.6 · Intelligent Grid
Feature 04 Intelligent Grid

The whole grid,
one designer report.

Point the Grid at your entire dataset, hand it a prompt that describes the audience and the language, and it generates the full report — narrative, visualizations, evidence drill-down, public shareable link. Change the prompt to change the language, the audience, the framing.

WHOLE DATASET · grid
all rows · all cols
PROMPT (in French) →
Générez un rapport d'impact pour notre conseil d'administration en français. Mettez en évidence l'évolution de la confiance, les thèmes qualitatifs, et la performance par démographique. (EN: Generate an impact report for our board, in French. Highlight confidence movement, qualitative themes, and performance by demographic.)
Generated report · live URL
Rapport d'impact · Spring 2026
CONFIANCE Δ
+2.4
PLACÉS · T3
87%
pré (gris) → post (vert)

"Les apprenants signalent que la 8e semaine marque l'inflection point — moment où l'effort cède à la maîtrise…"

→ sense.app/r/abc-fr · cliquer pour la version EN ou ES

Change the prompt, change the language. Same dataset, same evidence, but the report ships in French for the board, English for the funder, and Portuguese for the regional partner — generated three times from the same Grid.

9
§ 4.7 · Prompt-craft
Chapter 04 · §4.7

A good prompt
has four parts.

Across Cell, Row, Column, and Grid — every Intelligent Suite call is a prompt. The same four characteristics distinguish prompts that produce decision-ready output from prompts that produce confident nonsense.

C

Constraints

What the output must not do. Limits, formats, scopes. The hard rails.

e.g. "Score 1–5 only. No explanation. No null."
E

Emphasis

Where to look hardest. What matters most in this response.

e.g. "Focus on the inflection moment, not the average tone."
T

Task

The action itself, in a single verb. Extract. Score. Cluster. Summarize.

e.g. "Score this response on technical confidence."
C

Context

What this response is and who said it. Program type. Wave. Rubric. Persona.

e.g. "12-week coding bootcamp · learner exit reflection · T1."
CETC ASSEMBLED · ONE PROMPT

"In a 12-week coding bootcamp learner exit reflection at T1, score the response on technical confidence. Focus on the inflection moment, not the average tone. Score 1–5 only. No explanation. No null."

10
§ 4.8 · One dataset, four ways
Chapter 04 · §4.8

One bootcamp dataset.
Four ways.

60 learners, 14 weeks, four channels of clean data. Same dataset, four different Intelligent Suite calls — each answering a different question a different stakeholder is asking.

FEATURE 01
Cell
"Can you score every open-ended response on technical confidence?"
Run on each row's open_response column · 60 cells written
confidence_score column populated · 1–5 scale · joined to participant_id
FEATURE 02
Row
"Build a one-page brief per learner for the placement team."
Pulls demographics, skill delta, capstone score, exit-interview quote · per row
60 personalized briefs · placement team works from these, not the raw export
FEATURE 03
Column
"Does demographic correlate with confidence Δ — and where are the outliers?"
Cross-tab demographic_t0 × (confidence_t1 – confidence_t0) · participant scatter
Pattern + 7 outliers flagged · first-gen learners with high score / low confidence
FEATURE 04
Grid
"Write the funder report. EN for the foundation, ES for the regional partner."
Whole dataset → designer-quality multi-language report with evidence drill-down
Two live URLs · sense.app/r/abc-en + /abc-es · same evidence, two languages
The result

One clean dataset answers four different questions, for four different audiences — without leaving the platform. No export, no consultant, no rebuild. Next cohort starts from the same recipe.

11
§ 4.9 · 4 canonical report types
Chapter 04 · §4.9

Four report types.
One architecture.

Most credible impact reports fit one of four canonical shapes. Each comes from the same clean-data architecture: persistent participant IDs · analysis at collection · live-URL delivery. The next four pages walk through real examples, each one openable in a browser.

SHARED BACKBONE · ALL FOUR REPORTS
01
Persistent IDs
Every response links back to the participant from the first form. No reconciliation.
02
Analysis at collection
Open responses themed as they arrive. No coding phase, no NVivo, no analyst.
03
Live URL delivery
No static PDF. Every value drills back to its source. Updates as data arrives.
12
§ 4.9.1 · Workforce pre/post
Report type 01 of 04

Workforce · pre/post.

A 47-person Girls Code cohort runs pre- and post-assessments across six skill dimensions with confidence tracking throughout. The program director needs one report to send to her foundation funder.

Audience
Foundation funder
Cohort
47 learners · 12 wks
Built with
Intelligent Grid
What's inside
  • Skill delta tables across six rubric dimensions — per participant + cohort average
  • Confidence movement from baseline to post-program with distribution chart
  • Demographic breakdown by age + prior experience, structured at intake
  • Qualitative themes from post-program reflections, ranked by frequency
GIRLS CODE · COHORT REPORT

Spring 2026 · Skill change report

SKILL Δ AVG
+1.7
CONF Δ AVG
+2.4
6 skill dimensions · pre (grey) → post (coral)
"Learners moved most on JavaScript fundamentals + version control. Confidence in interviews lagged technical confidence by 4 weeks…"
LIVE REPORT · NO LOGIN
sense.sopact.com/ig/d81465e6-9c72-4ee9-bf8b-08ca519f1259
Open report →
13
§ 4.9.2 · Correlation · qual + quant
Report type 02 of 04

Correlation · qual + quant.

Do high test scores predict high confidence — or are they independent dimensions? One analysis links a quantitative rubric score to AI-extracted confidence from open-ended responses, producing a participant-level scatter.

Audience
Program improvement team
Method
Cell + Column
Built with
Intelligent Suite
SCORES × CONFIDENCE

Participant-level pattern

Scatter · test_score (x) → confidence (y) · r = 0.62 · positive · ⚠ 7 outliers high/low
"Seven learners scored ≥80 but reported confidence ≤2 — usually first-gen students. Worth an intervention."
What's inside
  • Cross-dimensional correlation between quant rubric score + AI-extracted confidence
  • Participant-level scatter showing the actual distribution, not just averages
  • Four clusters — high/high, high/low, low/low, outliers
  • Plain-language read of what the pattern means for program design
LIVE REPORT · NO LOGIN
sense.sopact.com/ig/81461672-74ca-47a7-94de-1ddb77487b42
Open report →
14
§ 4.9.3 · Application panel
Report type 03 of 04

Application panel · 500 apps.

500 scholarship applications, 15 minutes per app the old way. An AI-scored brief per applicant cuts review time to three minutes — with citations linking every score back to the source sentence a panel can audit.

Audience
Review panel
Volume
500 applications
Built with
Cell + Row
What's inside
  • One-page brief per applicant — essay themes, rec quality, rubric alignment
  • Sortable grid the whole panel works from together
  • Score distribution + flagged outliers for panel discussion
  • Review time · 15 minutes down to 3 per application
PANEL GRID · 500 APPS

Sortable applicant briefs

applicant · score · themes · flag
a_001 · M.Chen · 87 · STEM·grit
a_002 · J.Diaz · 84 · arts·civic
a_003 · S.Patel · 61 · STEM · ⚠ rec
a_004 · K.Owusu · 91 · civic·grit
… 495 more
Each row drills to a brief. Each brief sources to a citation. Audit trail intact.
LIVE REPORT · NO LOGIN
sense.sopact.com/ig/bcc5a5a7-7b31-4bf3-8b1b-2c0d665da248
Open grid →
15
§ 4.9.4 · ESG portfolio
Report type 04 of 04

ESG portfolio · PDFs to dashboard.

Every portfolio company submits a sustainability disclosure PDF. One dashboard reads all of them, scores each against the framework, and aggregates the results into a consistent picture for investors and the board.

Audience
Investors + board
Input
PDFs per company
Built with
Document intelligence
PORTFOLIO DASHBOARD

Sustainability across the portfolio

FRAMEWORK SCORE · 8 COMPANIES: Acme 82 · Birdco 68 · Cypress 91 · Delphi 42 · Elara 74 · Foxglove 28 · Gyre 81
Two companies fall below the threshold. Every score links to its source PDF page.
What's inside
  • PDFs read automatically — scores, gaps, claims pulled per company
  • Per-company gap analysis against the framework with evidence citations
  • One cross-portfolio view — every company compared together
  • Ready-to-share dashboard — no separate analytics tool needed
LIVE ANALYSIS · NO LOGIN
sense.sopact.com/ir/1a2dccdb-6ea4-5dbb-8ce6-c2d48977221a
Open analysis →
16
§ 4.10 · The accelerant + BI
Chapter 04 · §4.10

Inside Sense.
Out to your stack.

The Intelligent Suite runs inside Sopact Sense. Skills automate the work that used to be a four-tool project. And the cleaned, structured output pushes out to Tableau, Power BI, Looker, or Snowflake on demand.

THE PLATFORM

Sopact Sense

Four Intelligent Suite features run on the same clean dataset that your Contacts + Forms + Relationships produced. No imports. No exports for AI.

  • Intelligent Cell
    Per-cell prompt · result lands adjacent
  • Intelligent Row
    Per-respondent report card · scales to 500+
  • Intelligent Column
    Patterns + correlations across all rows
  • Intelligent Grid
    Full designer report · public live URL · multi-language
  • BI + warehouse outbound
    Tableau · Power BI · Looker · Snowflake
THE ACCELERANT

Skills

Prepackaged playbooks for prompt design, correlation hunting, report composition, and methodology validation. The Suite gets faster every cohort.

  • { } prompt-engineer
    Drafts CETC-shaped prompts for your specific instruments.
  • { } correlation-finder
    Surfaces non-obvious patterns across columns + flags outliers.
  • { } report-builder
    Generates audience-tuned Grid reports — multi-language, citation-backed.
  • { } methodology-validator
    Checks every claim back to its evidence; prevents overclaim.

Why this compounds

Cohort 1's prompts teach Sense your vocabulary. Cohort 2 starts from validated prompts. By cohort 5 your team isn't writing prompts — they're curating the best version of last cohort's. And the same report your board sees in Sense pushes to Tableau the same week.

17
§ 4.11 · Recap · End of Chapter 04
Chapter 04 · Recap

Six lessons.
And a library
still ahead.

1
One stack beats stitched tools.

Survey + qual-coding + Excel + Canva = 4–6 weeks. Suite = minutes.

2
Four features, four scopes.

Cell · Row · Column · Grid — pick your scope, point your prompt.

3
Prompts have four parts.

Constraints · Emphasis · Task · Context. Skip one, lose quality.

4
One dataset, many reports.

Funder · panel · board · partner — same data, different prompts.

5
Multi-language is one prompt away.

EN board, FR partner, ES regional — same Grid, three URLs.

6
Four canonical shapes.

Pre/post · correlation · panel · portfolio — pick yours, lift the recipe.

END OF CHAPTER 04 · BOOK 01 · UP NEXT · CHAPTER 05 · ACTIONABLE INSIGHT
BOOK 01
Beyond
the Survey
You are here
BOOK 03
Grant
Management
BOOK 04
Impact
Investment
BOOK 05
Workforce
Training
BOOK 05
Nonprofit
Programs
BOOK 06
Application
Management

"Four features. Four canonical report types. One clean dataset that doesn't need a four-week consultant phase to become a board memo."

THE SOPACT INTELLIGENCE LIBRARY · 2026
18
The Sopact Intelligence Library
Book 01 of 06 · Chapter 05

Actionable
Insight.

Sopact Sense is the intelligence engine. Your stack — BI, warehouse, MCP, GenAI tools like Claude — is the actionable layer. Two engines, one operating system for impact data. Custom dashboards in minutes, not weeks.

INTELLIGENCE ENGINE (Sopact Sense · clean data · canonical reports · hours after last response) → CLEAN STRUCTURED DATA → ACTIONABLE LAYER (your stack + AI · BI · warehouse · MCP · Claude · minutes per question)
By Unmesh Sheth · Sopact
§ 5.0 · Where this chapter sits
Where this chapter sits

From canonical reports
to custom anything.

Chapter 04 gave you four canonical report types straight out of Sense. This chapter is how everything downstream of those reports gets built — by your team, in your stack, in minutes.

Chapters in Beyond the Survey

00 · Introduction · 8 pages
01 · Workflow · 22 pages
02 · Data Design · 17 pages
03 · Data Collection · 16 pages
04 · Intelligent Suite · 18 pages
05 · Actionable Insight · you are here

The library

Book 01 · this book
Beyond the Survey
The foundational field guide — methodology for the AI era.
Book 03 · industry guide
Grant Intelligence
For program officers and foundation teams.
Book 04 · industry guide
Impact Intelligence
Portfolio outcomes with 5 Dimensions and IRIS+.
Book 05 · industry guide
Training Intelligence
Learner outcomes from enrollment to wage gain.
Book 05 · industry guide
Nonprofit Programs
One unified intelligence layer across many programs.
Book 06 · industry guide
Application Management
Pitch comps, fellowships, scholarships, accelerators.
2
CHAPTER · 05

Actionable
Insight.

Sense produces stakeholder intelligence — clean, structured, multi-language, live. What you do with it is the actionable layer, and it's bigger than any single tool. It's your BI, your warehouse, your AI agents, your automations — all working from the same source of truth.

What you'll learn
  • 01.The two engines — Stakeholder Intelligence vs Actionable Insight
  • 02.Export · BI · warehouse — the outbound layer
  • 03.MCP, API + GenAI — custom dashboards in minutes with Claude
  • 04.Three worked examples that extend the Ch 04 reports
Time to read
14 min
16 pages · 26 illustrations
3
§ 5.1 · The two engines
Chapter 05 · §5.1

Two engines.
One operating system.

Most teams confuse the two. They want their survey tool to also be their dashboard tool, their warehouse, their decision engine. It can't — and shouldn't. Each engine has a job. Together they form how impact data actually moves through your organization.

ENGINE 01 · BUILT BY SOPACT

Stakeholder Intelligence

One platform: Sopact Sense.

DOES ONE JOB · DOES IT WELL
  • Captures four channels into one clean dataset
  • Analyzes qual + quant on collection (not weeks later)
  • Generates canonical reports (pre/post · correlation · panel · portfolio)
  • Publishes in any language, as a live URL
BUILT FOR
Standard reporting
SPEED
Hours · reproducible
CLEAN
STRUCTURED
DATA
ENGINE 02 · BUILT BY YOUR TEAM

Actionable Insight

Your stack + AI agents.

DOES MANY JOBS · ADAPTS
  • Exports to CSV · XLS · Google Sheets · Zapier flows
  • Connects to Tableau · Power BI · Looker Studio · Snowflake
  • Exposes via MCP + API — readable by Claude + AI agents
  • Unifies with external sources — Salesforce · Stripe · government data
BUILT FOR
Unique, evolving needs
SPEED
Minutes · ad-hoc

One engine produces stakeholder intelligence. The other turns it into any action your team needs. Sequential, not competitive.

4
§ 5.2 · The handoff
Chapter 05 · §5.2

A clean handoff,
and the fan-out begins.

The intelligence engine produces a single artifact: a clean, structured, audit-ready dataset (and the canonical report alongside it). From that one artifact, the actionable layer fans out — to BI dashboards, automations, AI agents, and ad-hoc custom views. One source of truth, many destinations.

INTELLIGENCE ENGINE (Sopact Sense · produces live URL reports + structured dataset) → handoff: ONE dataset + report → THE ACTIONABLE LAYER: BI · WAREHOUSE (Tableau · Power BI · Snowflake) · MCP · API · GenAI (Claude · custom agents) · EXPORT · AUTOMATION (CSV · Sheets · Zapier · Slack)
What stays in Sense

Collection, cleaning, qual+quant analysis, the four canonical reports. The work that should be standardized.

What moves to the actionable layer

Custom dashboards, decision automations, ad-hoc Claude analyses, BI joins with your CRM and warehouse. The work that should evolve.

5
§ 5.3 · Outbound · table stakes
Chapter 05 · §5.3

Outbound, four ways.
Table stakes.

Before the BI and AI layers, the simplest exits. Every data grid in Sense flows out as a file or a triggered event. No "data download as a service" fee.

CHANNEL 01

CSV · XLS

One click on any data grid → flat file in your downloads folder.

USE WHEN · ad-hoc analysis · email to a collaborator · feed an old spreadsheet workflow
CHANNEL 02

Google Sheets

Live sync. Sheet updates as new responses arrive — no manual export.

USE WHEN · team works in Sheets · ops dashboards · joining with manually-entered data
CHANNEL 03

Zapier

Trigger flows on every submission · pipe to Slack, Notion, Salesforce, 6,000+ apps.

USE WHEN · route to CRM · alert staff · log to records system · no-code automations
{ }
CHANNEL 04

API · Webhooks

Programmatic access · webhook on submit · full payload to your service.

USE WHEN · custom integrations · feed your warehouse · build product features on Sense data
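
What a receiving service looks like, as a minimal Flask sketch. Payload field names are illustrative; the point is that stakeholder_id arrives with every submission:

PYTHON · webhook receiver
from flask import Flask, request

app = Flask(__name__)

@app.route("/sense-webhook", methods=["POST"])
def on_submit():
    payload = request.get_json(force=True)
    sid = payload["stakeholder_id"]          # join key, intact downstream
    store(sid, payload.get("answers", {}))   # route to warehouse / CRM / records system
    return "", 204

def store(sid: str, answers: dict) -> None:
    print(f"storing {len(answers)} answers for {sid}")   # stand-in for real persistence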

Every grid in Sense ships out four ways — and each path keeps the stakeholder_id intact so downstream systems can join cleanly. The next page is what most people mean when they say "actionable": BI and warehouse.

6
§ 5.4 · BI + warehouse
Chapter 05 · §5.4

Your BI team already
has dashboards.

Most enterprises run on Tableau, Power BI, or Looker — and behind them, a Snowflake / BigQuery / Redshift warehouse. Sense feeds straight into that stack. Your impact data joins your finance data, your CRM data, your program data — in the dashboards your team already opens every Monday.

SOPACT Sense (clean dataset) → TABLEAU (native connector · joins Salesforce + ops data) · POWER BI (direct connection · Microsoft ecosystem) · LOOKER · GOOGLE BI (Looker Studio embeds · BigQuery joins) · DATA WAREHOUSE (Snowflake · Redshift · BigQuery · scheduled sync)
WHAT THIS UNLOCKS · 01
Cross-functional joins

Impact data joins finance, CRM, and ops data in the same dashboard.

WHAT THIS UNLOCKS · 02
Existing dashboards extend

No "impact reporting tool" — impact rows land in the views you already use.

WHAT THIS UNLOCKS · 03
Enterprise governance

Warehouse access controls + audit logs apply to impact data automatically.

7
§ 5.5 · MCP + API
Chapter 05 · §5.5

MCP-native.
Your data, readable by AI.

MCP — the Model Context Protocol — is the emerging standard for how AI agents read data from external tools. Sense speaks it. Which means Claude (and any MCP-compatible agent) can read your impact data the same way your dashboards do, over the same secure API.

SENSE · MCP SERVER (exposes clean dataset + canonical reports · scoped · audited · tokenized · same REST + GraphQL API as dashboards) → MCP-COMPATIBLE CLIENTS (Claude desktop · API · Code · custom AI agents · your internal tooling · ChatGPT + others via MCP adapter) · your services (REST · GraphQL) · SAME API → BI tools + warehouse + Zapier
WHY MCP MATTERS

AI agents don't want CSV exports. They want a live API they can query conversationally. MCP is how Claude reads your Sense data, asks follow-up questions, and builds custom views — all without leaving the chat.

SECURITY & SCOPE

Every MCP token is scoped: which records, which fields, which actions. Same governance as a dashboard share. Audited per query.
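
"Same secure API" in concrete terms. A sketch with a hypothetical endpoint and a scoped bearer token; the real paths, scopes, and field names belong to Sense:

PYTHON · scoped API read (hypothetical endpoint)
import requests

TOKEN = "scoped-token-here"                 # scoped: which records, which fields, which actions

resp = requests.get(
    "https://sense.example/api/cohorts/spring-2026/records",   # illustrative URL
    headers={"Authorization": f"Bearer {TOKEN}"},
    params={"fields": "confidence_t0,confidence_t1"},
    timeout=30,
)
resp.raise_for_status()
rows = resp.json()    # the same rows a dashboard, or an MCP client, would see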

8
§ 5.6 · GenAI dashboards
Chapter 05 · §5.6 · the killer combo

Claude + MCP +
your data.
Build in minutes.

The most powerful part of the actionable layer doesn't come from a BI tool. It comes from pairing your clean Sense data with a GenAI agent — most teams use Claude — over MCP. You ask in plain English. The dashboard materializes. The next question reshapes it.

CLAUDE · WITH SENSE MCP CONNECTED
From the Spring 2026 cohort, show me confidence delta broken down by demographic — and overlay placement rate at +6 months.
CLAUDE
Reading Spring 2026 cohort via Sense MCP… joining on participant_id… building chart. Done.
→ dashboard ready · 1.4s
Filter to first-gen learners only. And add the confidence quotes.
MCP
READ
DASHBOARD · GENERATED LIVE

Spring '26 · confidence × placement

conf Δ · placement % · by demographic
First-gen · +2.4 · 84%
URM · +2.1 · 89%
Continuing · +1.6 · 91%
Career-change · +2.7 · 87%
→ first-gen overdelivers on confidence, lags slightly on placement
→ click any bar to drill to the 12 underlying responses
90s
TIME FROM QUESTION TO DASHBOARD
Two prompts, one drill-down. No BI ticket, no consultant.
previous tool stack: ~3 days
9
§ 5.7 · Data unification
Chapter 05 · §5.7

Sense is one source.
The full picture
needs several.

The actionable layer's superpower isn't building dashboards faster — it's joining sources. Sense brings clean stakeholder data. Your CRM brings relationship history. Stripe brings transactions. Government datasets bring context. Unified, they become evidence; siloed, they stay anecdote.

SOPACT SENSE (stakeholder records) + SALESFORCE / CRM (relationship history) + STRIPE / FINANCE (transactions, payments) + LINKEDIN / INDEED (wage, placement data) + GOV · CENSUS · BLS (demographic context) → UNIFIED on stakeholder_id (five sources, one row) → FULL-PICTURE VIEW: confidence Δ × wage gain × demographic × engagement

The join key is everything. Sense's stakeholder_id is the spine that every external source attaches to — Salesforce's contact_id, Stripe's customer_id, your warehouse's user_uuid. Match once at the join, unify forever.
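
"Match once at the join," sketched: a crosswalk table maps each external key to stakeholder_id once, and every later merge goes through it. File and column names are illustrative:

PYTHON · crosswalk unification
import pandas as pd

crosswalk = pd.read_csv("crosswalk.csv")    # stakeholder_id, contact_id, customer_id

sense  = pd.read_csv("sense_records.csv")   # stakeholder_id, confidence_delta, …
stripe = pd.read_csv("stripe.csv")          # customer_id, lifetime_value, …

unified = (sense
           .merge(crosswalk, on="stakeholder_id")         # attach the external keys once
           .merge(stripe, on="customer_id", how="left"))  # then join each source through them
# Confidence Δ sits next to transactions, per person: evidence, not anecdote.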

10
§ 5.8 · Example 1 · workforce + wage
Worked example 01 of 03 · extends Ch 04 §4.9.1

Girls Code + wage data.

The Ch 04 cohort report already shows confidence delta + skill gain. Adding alumni wage data unlocks "did the program move incomes?" — the question every workforce funder eventually asks.

Adds
LinkedIn / Indeed wages
Via
MCP + Claude
Time to build
~15 minutes
PREVIOUS · CH 04 §4.9.1
Cohort report
SKILL Δ · 6 dim
conf Δ +2.4 · skill Δ +1.7
47 learners · 12 wks
+
ADD · VIA MCP
Wage data
CLAUDE · prompt
"Pull current titles + wages for the 47 alumni from LinkedIn, joined on email."
→ 42 / 47 matched
+ T-6mo wage column
=
NEW · BUILT IN CLAUDE
Confidence × wage gain
conf Δ → $ wage gain · r = 0.71 · positive
$18k median wage gain
+ drill to alumni list
The result

Sense's cohort report kept the canonical "skill + confidence" structure intact. Claude — with the Sense MCP connected — pulled LinkedIn wages on demand and built the confidence × wage scatter that the foundation funder actually wanted to see. 15 minutes from question to share-ready link.

11
§ 5.9 · Example 2 · ESG + emissions
Worked example 02 of 03 · extends Ch 04 §4.9.4

ESG portfolio + emissions actuals.

Ch 04's ESG dashboard reads each company's claims from sustainability PDFs. Joining the warehouse's emissions actuals in Snowflake produces the chart every board chair now asks for: claim vs. reality.

Adds
Snowflake emissions rows
Via
Warehouse join + Power BI
Time to build
~30 minutes
PREVIOUS · CH 04 §4.9.4
PDF claims
claims · 8 cos · PDF-extracted: Acme 82 · Bird 68 · Cypr 91 · Delp 42
scored from disclosures
+
ADD · SNOWFLAKE
Emissions actuals
SQL · join on company_id
SELECT s.score,
  wh.scope1_actual,
  wh.scope2_actual
FROM sense_esg s
JOIN warehouse.emissions wh
  ON s.cid = wh.cid;
→ 8 / 8 matched · monthly
=
NEW · POWER BI
Claim vs. actual
claim score (purple) vs. actual (coral) · Acme · Bird · Cypr · Delp
⚠ Delphi · 50% gap
→ drill to evidence
The result

The PDF-claims scoring from Sense stays the source of one half of the chart. Snowflake's emissions actuals are the other half. The join — on company_id — is one SQL statement. Power BI's existing portfolio dashboard now shows the gap, every month, automatically.

12
§ 5.10 · Example 3 · application + alumni
Worked example 03 of 03 · extends Ch 04 §4.9.3

Application panel + alumni outcomes.

Ch 04's scholarship panel scores 500 applicants in days. Joining the alumni team's outcomes log (kept in Google Sheets) reveals which application traits predict alumni success. The selection rubric updates itself.

Adds
Alumni outcomes log
Via
Sheets sync + Tableau
Time to build
~2 hours · once
PREVIOUS · CH 04 §4.9.3
500 applicants
applicant · score
a_001 · Chen · 87
a_002 · Diaz · 84
a_003 · Patel · 61
… +497
AI brief + rubric per app
+
ADD · GOOGLE SHEETS
Alumni outcomes
SHEET · alumni-log
app_id · outcome_5yr
a_001 · founder
a_002 · academic
a_003 · founder
… maintained by alumni team
→ live sync · 312 alumni tracked
=
NEW · TABLEAU
Predictive rubric
trait → alumni-outcome lift
civic-grit theme · +18%
STEM + arts mix · +12%
high rec-quality · +9%
low rec-quality · −6%
rubric self-updates · annually
The result

Sense's per-applicant briefs stay the source of truth at intake. The alumni team's Sheet (their tool, their workflow) joins on application_id. Tableau builds the predictive overlay. Next year's rubric incorporates what the data has been quietly teaching for five years. The selection process gets smarter without anyone re-training it manually.

13
§ 5.11 · The accelerant
Chapter 05 · §5.11

Sense ships clean data.
Skills ship the bridge.

The actionable layer needs a bridge. Skills in Sense generate that bridge — the MCP exposures, the BI connectors, the unified joins, the Claude-ready prompts. So your team isn't writing integration boilerplate; they're working on the question that matters.

THE INTELLIGENCE ENGINE

Sopact Sense

Stays the same job: capture clean, analyze on collection, produce the canonical reports. Everything in this chapter consumes what Sense produces.

  • Exports · 4 channels
    CSV · XLS · Google Sheets · Zapier triggers on every grid.
  • BI · native connectors
    Tableau · Power BI · Looker Studio · Snowflake outbound.
  • MCP server · scoped
    Claude + custom agents · same audited API as your dashboards.
  • REST + GraphQL API
    Programmatic access for product engineers + ops automations.
THE ACCELERANT

Skills

Prepackaged playbooks for the actionable layer. They take the boilerplate out of integrations so your team works on the question, not the wiring.

  • { } bi-bridge
    Wires Sense as a live source to Tableau / Power BI / Looker.
  • { } mcp-exposer
    Generates scoped MCP tokens for Claude + AI agents · per-record access.
  • { } data-unifier
    Joins Sense's stakeholder_id with Salesforce, Stripe, warehouse keys.
  • { } claude-co-pilot
    Drafts MCP-ready prompts for ad-hoc dashboards built in Claude.

Why this compounds

Cohort 1 teaches Sense your join keys, your BI vocabulary, your Claude prompts. Cohort 2 inherits all three. By cohort 5, your team's "intelligence + actionable" loop runs faster every quarter — because both engines have been quietly learning from each other the whole time.

14
§ 5.12 · Recap + Up Next
Chapter 05 · §5.12

Six lessons
to carry forward.

1
Two engines, one OS.

Stakeholder Intelligence (Sense) and Actionable Insight (your stack + AI). Sequential, not competitive.

2
Four exits, table stakes.

CSV · Sheets · Zapier · API. Every grid ships out, every record stays joined.

3
BI + warehouse · massive.

Tableau, Power BI, Looker, Snowflake. Impact rows land in dashboards your team already opens.

4
MCP makes it AI-readable.

Claude + other agents read your data via the same secure API as your BI tools.

5
Dashboards in minutes.

Claude + MCP + clean Sense data = ad-hoc views built conversationally, not by ticket.

6
Unification is the multiplier.

Sense + CRM + warehouse + LinkedIn + government data — joined on stakeholder_id, none siloed.

UP NEXT
Chapter 06 · Application Management

The book closes with a full lifecycle worked example — application intake to onboarding — applying both engines to one domain. Also the teaser for Book 06.

06
15
End of Chapter 05
END OF CHAPTER 05 · BOOK 01

Two engines.
One operating system.
Built for the AI era.

Sense produces stakeholder intelligence. Your stack + AI produces actionable insight. The handoff between them is one clean dataset, and the work above runs from there.

BOOK 01
Beyond
the Survey
You are here
BOOK 03
Grant
Management
Industry guide
BOOK 04
Impact
Investment
Industry guide
BOOK 05
Workforce
Training
Industry guide
BOOK 05
Nonprofit
Programs
Industry guide
BOOK 06
Application
Management
Industry guide

"One engine produces stakeholder intelligence. The other turns it into any action your team needs. Sequential, not competitive."

THE SOPACT INTELLIGENCE LIBRARY · 2026
16
The Sopact Intelligence Library
Book 01 of 06 · Chapter 06 · the bridge

Application
Management.

The bridge chapter. Every method from Chapters 01–05 — collection, the Intelligent Suite, both engines — applied to one full domain. Then forward into Book 06.

THE APPLICATION LIFECYCLE · 6 STAGES: 01 INTAKE → 02 SCREEN → 03 SCORE → 04 COMMITTEE → 05 DECIDE → 06 ONBOARD · applicant_id assigned at intake · survives all six stages
By Unmesh Sheth · Sopact
§ 6.0 · Where this chapter sits
Where this chapter sits

Five methods,
one domain.

Chapter 06 closes Book 01 the way a good closing chapter should — not by introducing more methodology, but by walking everything you've learned through one full lifecycle. Then opens the door to Book 06.

Chapters in Beyond the Survey

00 · Introduction · 8 pages
01 · Workflow · 22 pages
02 · Data Design · 17 pages
03 · Data Collection · 16 pages
04 · Intelligent Suite · 18 pages
05 · Actionable Insight · 16 pages
06 · Application Management · you are here · last

The library

Book 01 · this book
Beyond the Survey
The foundational field guide — methodology for the AI era.
Book 03 · industry guide
Grant Intelligence
For program officers and foundation teams.
Book 04 · industry guide
Impact Intelligence
Portfolio outcomes with 5 Dimensions and IRIS+.
Book 05 · industry guide
Training Intelligence
Learner outcomes from enrollment to wage gain.
Book 05 · industry guide
Nonprofit Programs
One unified intelligence layer across many programs.
Book 06 · industry guide · next
Application Management
Pitch comps, fellowships, scholarships, accelerators. This chapter is the on-ramp.
2
CHAPTER · 06 · BOOK 01 CLOSING

Application
Management.

Scholarships, fellowships, accelerators, pitch competitions, RFPs. They all run the same six-stage lifecycle — and they all benefit from both engines: Sense for the intelligence, the actionable layer for the equity dashboards, the Salesforce push, the alumni feedback loop.

What you'll learn
  • 01.The 6-stage application lifecycle, mapped to the 5-stage methodology spine
  • 02.How Ch 03's channels and Ch 04's Suite handle stages 1–4
  • 03.The committee packet, the audit trail, the equity review
  • 04.Worked example: a small foundation, 480 applications, end-to-end
  • 05.Both engines from Ch 05 — applied to this one domain
Time to read
14 min
16 pages · 28 illustrations
3
§ 6.1 · The lifecycle
Chapter 06 · §6.1

Every application program
runs the same six stages.

Different domains call them different things — "shortlisting" vs. "screening," "interview" vs. "panel" — but the structure is consistent. Recognize the structure, and you can lift the methodology unchanged across every program type.

01
INTAKE
Application open
  • Online form + uploads
  • Save-progress
  • Multi-language
  • applicant_id assigned
02
SCREEN
Eligibility filter
  • Hard rules pass/fail
  • Document completeness
  • Incompletes flagged
  • Volume → 60–70%
03
SCORE
Rubric scoring
  • AI brief per applicant
  • Essay theme extraction
  • Rec-letter signal
  • Reviewer rubric blend
04
COMMITTEE
Panel review
  • Committee packet
  • Citation drill-down
  • Cross-reviewer blend
  • Discussion-ready
05
DECIDE
Award + audit
  • Accept / waitlist / decline
  • Rationale logged
  • Equity review
  • Notifications routed
06
ONBOARD
Cohort handoff
  • Selected → cohort flow
  • applicant_id ↔ participant_id
  • Data carries forward
  • Re-applicants linked

One ID across all six stages.

applicant_id is assigned at intake and survives every stage. The same applicant who submits in March is the same record being scored in April, discussed in May, awarded in June, and onboarded in July. No CSV gluing between stages.

4
§ 6.2 · Mapping to the methodology
Chapter 06 · §6.2

Six lifecycle stages.
Onto five
methodology stages.

The 5-stage methodology spine from Chapter 01 — Data · Framework · Dictionary · Transformation · Reports — isn't separate from the lifecycle. It's the plumbing underneath. Every application stage uses one or two of the five.

APPLICATION LIFECYCLE · 6 STAGES
01 · INTAKE
02 · SCREEN
03 · SCORE
04 · COMMITTEE
05 · DECIDE
06 · ONBOARD
DATA
forms + uploads at intake/screen
FRAMEWORK
rubric · eligibility rules
DICTIONARY
themes from essays + recs
TRANSFORMATION
Suite · briefs · scoring
REPORTS
packets · decisions · onboarding
METHODOLOGY SPINE · 5 STAGES · CHAPTER 01

Stages aren't inventions. They're how your program already works. What changes is that every stage now produces structured, queryable data — not a folder of PDFs nobody opens.
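
As a sketch of the mapping above — a plain lookup, with the stage groupings taken from the diagram; nothing here is a product API:

```python
# Which spine stage(s) each lifecycle stage draws on, per the diagram.
SPINE_FOR_STAGE = {
    "intake":    ["data"],                          # forms + uploads
    "screen":    ["data", "framework"],             # completeness + eligibility rules
    "score":     ["dictionary", "transformation"],  # themes -> briefs (rubric lives in framework)
    "committee": ["transformation", "reports"],     # Grid -> packet
    "decide":    ["reports"],                       # decisions + audit logged
    "onboard":   ["reports"],                       # handoff artifacts
}

for stage, spine in SPINE_FOR_STAGE.items():
    print(f"{stage:>9} -> {' + '.join(spine)}")
```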

§ 6.3 · Stage 1 · Intake
Stage 01 · applying Ch 03 · channels

Intake is
two channels.

Online form for structured fields, document uploads for narrative and recs. Both arrive joined to the same applicant_id. The most evidence-rich parts of any application — essays, recommendations, transcripts — are documents, and they're treated as data from the moment they hit the queue.

ONLINE FORM · UNIQUE LINK
demographics · goals · prior experience · skip logic · save-progress · multi-language
DOCUMENT UPLOADS
essays · rec letters · transcripts · financials · extracted to structured fields · page-cited
ASSIGNED HERE
applicant_id persists 6 stages
STRUCTURED FIELDS
demographics · scores · status
EXTRACTED EVIDENCE
themes · claims · page cites
SOURCE FILES
original PDFs · audit-ready
INTAKE RULE 01
Save-progress is non-negotiable

Long applications without save-progress see 30%+ dropout. With it, dropout falls to 8%.

INTAKE RULE 02
One identity from minute one

applicant_id assigned before document uploads. Every file lands on the right record.

INTAKE RULE 03
Equity fields collected up front

Demographics structured at intake so equity audits can run on any decision later.

§ 6.4 · Stages 2–3 · Screen + Score
Stage 02 + Stage 03 · applying Ch 04 · Cell + Row

Screen is rules.
Score is Cell + Row.

Screening is hard rules — eligibility, completeness, exclusion criteria. Cheap, automated. Scoring is where the Intelligent Suite earns its keep: Intelligent Cell reads every essay paragraph for themes and confidence; Intelligent Row assembles the one-page applicant brief the panel will read.

Stage 02 · Screen · auto-filter

Hard rules · pass/fail

18+ at deadline
Residency confirmed
Two recommendations on file
Essay min word count
Transcript uploaded
Missing financial disclosure
VOLUME
480 → 318
66% pass to scoring · ineligibles auto-notified with reason
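
Stage 02 is cheap precisely because it's pure rules. A minimal sketch, assuming a plain dict per applicant; the rule names mirror the checklist above and the thresholds are illustrative:

```python
# Hard-rule screening: every failure gets a reason, so ineligibles
# can be auto-notified with a reason-coded message.
from datetime import date

ESSAY_MIN_WORDS = 500  # illustrative threshold

def screen(app: dict, deadline: date) -> tuple[bool, list[str]]:
    """Return (eligible, reasons-for-rejection); empty reasons == pass."""
    reasons = []
    if (deadline - app["dob"]).days < 18 * 365:  # approximate; ignores leap days
        reasons.append("under 18 at deadline")
    if not app.get("residency_confirmed"):
        reasons.append("residency not confirmed")
    if len(app.get("recommendations", [])) < 2:
        reasons.append("fewer than two recommendations on file")
    if len(app.get("essay", "").split()) < ESSAY_MIN_WORDS:
        reasons.append("essay under minimum word count")
    if "transcript" not in app.get("uploads", []):
        reasons.append("transcript missing")
    if "financial_disclosure" not in app.get("uploads", []):
        reasons.append("financial disclosure missing")
    return (not reasons, reasons)

ok, why = screen(
    {"dob": date(2004, 5, 1), "residency_confirmed": True,
     "recommendations": ["r1", "r2"], "essay": "word " * 600,
     "uploads": ["transcript", "financial_disclosure"]},
    deadline=date(2026, 3, 1),
)
print(ok, why)
```
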
Stage 03 · Score · Cell + Row from Ch 04

Themes per essay · brief per applicant

STEP 01 · INTELLIGENT CELL
~318 essays
For each essay: themes · sentiment · grit-signal · originality. CETC prompt with the rubric in Context, theme list in Constraints.
STEP 02 · INTELLIGENT ROW
~318 briefs
Per applicant: one-page brief joining intake + cell-extracted themes + rec quality + rubric score, with citations on every claim.
REVIEWER TIME PER APPLICANT
15 min → 3 min
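
The pattern behind that scoring step is worth seeing in miniature: frozen rubric, typed output shape, and no score without a citation. In this sketch the model call is stubbed — call_model stands in for whichever LLM the deployment uses, and the validation layer is the point:

```python
# Deterministic wrapper around a generative reader: the rubric is frozen,
# the output schema is locked, and citations are required before a score counts.
import json

RUBRIC = {"grit": "evidence of persistence", "civic": "community impact"}  # frozen per cycle
SCHEMA_KEYS = {"score", "themes", "citations"}  # typed shape, locked

def call_model(essay: str, rubric: dict) -> str:
    # Stub: a real deployment would send a CETC prompt (rubric in Context,
    # theme list in Constraints) to the underlying model.
    return json.dumps({"score": 4.2, "themes": ["grit"], "citations": ["p.2 para 4"]})

def score_essay(essay: str) -> dict:
    raw = json.loads(call_model(essay, RUBRIC))
    if set(raw) != SCHEMA_KEYS:
        raise ValueError(f"output shape not locked: {sorted(raw)}")
    if not raw["citations"]:
        raise ValueError("no score without a source citation")
    return raw

print(score_essay("…applicant essay text…"))
```
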
§ 6.5 · Stage 4 · Committee packet
Stage 04 applying Ch 04 · Grid

The committee packet
writes itself.

Across the whole shortlist, the Intelligent Grid assembles one document a 6-person panel can actually work from: sortable, citation-backed, equity-aware, with the discussion-worthy outliers already flagged. No 47-slide deck to maintain.

PANEL PACKET · SPRING 2026 SCHOLARSHIP

90 finalists · ranked + flagged

SHORTLIST
90
RUBRIC AVG
76
FLAGGED
8
APPLICANT          SCORE   THEMES          FLAG
a_004 · Owusu      91      civic · grit
a_017 · Reyes      89      STEM · arts
a_032 · Tran       88      civic · grit    ⚠ equity
a_001 · Chen       87      STEM · grit
a_055 · Khan       86      arts · civic
… 85 more
→ click any row · brief opens with citations
What the packet contains
  • Ranked finalist grid · sortable by any rubric dimension
  • Citation-backed briefs · click to drill into source PDFs
  • Theme distribution · what the cohort emphasized at intake
  • Equity summary · demographic spread of the shortlist
  • Outlier flags · 8 applicants worth panel attention
  • Live URL · 6 panelists, same packet, async-friendly

No deck. No spreadsheet. No re-keying. The Grid produces a packet that's already the decision artifact — the 6-person panel walks in pre-read, the meeting argues the flagged 8 instead of re-reading the obvious 90.
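
Packet assembly itself is mechanical once the briefs exist: rank, flag, emit one sortable structure. A minimal sketch — the flag logic here (a large gap between reviewer and AI scores) is illustrative, since the product's actual outlier criteria aren't specified in this chapter:

```python
# Rank the shortlist and surface the rows a panel should argue about.
briefs = [
    {"applicant": "a_004 · Owusu", "score": 91, "themes": ["civic", "grit"], "ai_score": 90},
    {"applicant": "a_032 · Tran",  "score": 88, "themes": ["civic", "grit"], "ai_score": 79},
]

def flag(brief: dict) -> str:
    # Illustrative outlier rule: reviewer and AI scores disagree sharply.
    return "⚠ review" if abs(brief["score"] - brief["ai_score"]) >= 8 else ""

packet = sorted(briefs, key=lambda b: b["score"], reverse=True)
for b in packet:
    print(f'{b["applicant"]:<14} {b["score"]:>3}  {", ".join(b["themes"]):<12} {flag(b)}')
```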

§ 6.6 · Stages 5–6 · Decide + Onboard
Stage 05 + Stage 06 · decision + audit · onboarding handoff

Decisions logged.
Equity audited.

Stage 5 isn't "press the accept button." It's capture the rationale, run the equity audit, route the notifications. Stage 6 hands the selected cohort to whatever program comes next — without re-entering data.

Stage 05 · Decide + audit trail

Three things logged per decision

01 · OUTCOME
accept · waitlist · decline · per applicant
02 · RATIONALE
2–3 sentence reason · stored in the record · tied to applicant_id
03 · CITATIONS
which brief sections + which PDF pages supported the decision
EQUITY AUDIT · ACCEPT RATE × DEMOGRAPHIC
First-gen · 28% · n=42
URM · 29% · n=44
Continuing · 26% · n=39
Career-chg · 27% · n=25
spread within 3 points · no demographic systematically under-selected
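
The audit above is a one-function computation once each decision carries a demographic tag. A minimal sketch, with the 3-point spread threshold mirroring the figure:

```python
# Accept-rate × demographic breakdown: compute per-group rates,
# measure the spread, flag if any group is drifting out of band.
from collections import defaultdict

def equity_audit(decisions: list[dict], max_spread_pts: float = 3.0) -> dict:
    """decisions: [{"group": "First-gen", "accepted": True}, ...]"""
    totals, accepts = defaultdict(int), defaultdict(int)
    for d in decisions:
        totals[d["group"]] += 1
        accepts[d["group"]] += d["accepted"]
    rates = {g: 100 * accepts[g] / totals[g] for g in totals}
    spread = max(rates.values()) - min(rates.values())
    return {"rates": rates, "spread_pts": spread, "flag": spread > max_spread_pts}

demo = ([{"group": "First-gen", "accepted": i < 28} for i in range(100)]
        + [{"group": "URM", "accepted": i < 29} for i in range(100)])
print(equity_audit(demo))  # spread 1.0 pt -> no flag
```
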
STAGE 06 · ONBOARDING HANDOFF

Selected cohort flows forward

The 150 accepted applicants don't re-enter anything. applicant_id is mapped to participant_id for the program — every essay, rec letter, demographic field, and prior wave answer is already there.

FLOWS INTO
workforce-training pattern from Chapter 03 §3.8.1
DATA CARRIED
demographics · goals · intake confidence · prior-experience scores
RE-APPLICANTS
prior cycle's applicant_id linked · history visible to panel
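
The handoff is an ID mapping, not a migration. A minimal sketch — identifier formats are illustrative:

```python
# Stage 06: map applicant_id to a program participant_id and carry
# fields forward by reference, with zero re-entry.
def onboard(accepted: list[dict], cycle: str) -> dict:
    """Return {participant_id: applicant record} for the new cohort."""
    roster = {}
    for n, rec in enumerate(accepted, start=1):
        pid = f"p_{cycle}_{n:03d}"
        rec["participant_id"] = pid  # applicant_id stays on the record
        roster[pid] = rec            # demographics, goals, essays carry over
    return roster

cohort = onboard([{"applicant_id": "a_004", "goals": "STEM teaching"}], cycle="2026")
print(cohort["p_2026_001"]["applicant_id"])  # a_004 — same record, new context
```
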
§ 6.7 · Worked example · 480 apps
Chapter 06 · §6.7

480 applications.
Six stages. One small foundation.

A 3-person foundation team reviews scholarship applications once a year. Their old process: 4 weeks, a 47-tab spreadsheet, a 9-person panel reading PDFs until 11pm. The new process — every stage from this chapter — closes the review in 6 days.

01
INTAKE · 6 WEEKS
Online form + 4 documents per applicant
~1,920 documents queued and joined to applicant_id automatically · save-progress kept dropoff under 9%
480 in
02
SCREEN · DAY 1
Hard-rule eligibility check
Auto-pass/fail · ineligibles get reason-coded email automatically · staff intervenes on edge cases only
→ 318 advance
03
SCORE · DAYS 2–3
Intelligent Cell + Row produce 318 briefs
Themes extracted from essays · rec quality scored · rubric blend computed · all citations preserved
→ ranked
04
COMMITTEE · DAYS 4–5
Live URL packet · 6-person panel async
Panel reads pre-meeting, meets for 2 hours, argues only the 8 flagged outliers · async votes on the 82 clear cases
→ 90 finalists
05
DECIDE · DAY 6
150 awards · rationale + equity audit logged
Decisions tagged per applicant · equity dashboard auto-runs · acceptance rates within 3 points across demographics
→ 150 awarded
06
ONBOARD · WEEK 2
Cohort flows into the workforce-training pattern
applicant_id → participant_id · zero data re-entry · weekly pulse-checks begin
→ 150 in cohort
The result · 4 weeks → 6 days · same rigor, audit-ready

Reviewer time per application dropped from 15 minutes to 3. The 9-person panel compressed to 6 (and met for 2 hours instead of 4). Every decision has a citation trail. The equity audit ran by itself. And the foundation kept every dollar of decision authority it had before — they just spent it on the 8 hard cases instead of the 472 obvious ones.

§ 6.8 · Both engines, one domain
Chapter 06 · §6.8 · synthesis with Ch 05

Both engines.
One application program.

Chapter 05 named the two engines: Stakeholder Intelligence (Sense) and Actionable Insight (your stack + AI). Application management is the cleanest place to see both in motion at once — one engine produces the packet, the other turns it into the dashboards, automations, and downstream workflows that make the program move.

ENGINE 01 · STAKEHOLDER INTELLIGENCE

Sopact Sense · stages 1–5

  • Intake · online form + document upload · applicant_id
  • Screen · hard rules, auto pass/fail
  • Score · Intelligent Cell + Row produce briefs
  • Committee packet · Intelligent Grid · live URL
  • Decision logging · rationale + citations stored
Produces: clean applicant dataset · committee packet · decisions log · equity audit table
CLEAN STRUCTURED DATA
ENGINE 02 · ACTIONABLE INSIGHT

Your stack + AI · across stages

  • Salesforce sync · accepted applicants → CRM via Zapier
  • Tableau dashboard · equity audit cross-cycle
  • Claude + MCP · ad-hoc panel-prep questions answered in minutes
  • Slack alerts · staff pinged on flagged outliers
  • Alumni outcomes loop · 5-yr Sheets log feeds next rubric
Produces: CRM records · BI dashboards · staff workflows · predictive scoring overlays

The Intelligence Engine standardizes what stays the same every cycle. The Actionable Layer customizes what changes. Both running on one applicant_id is the architecture.
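
To make the first Engine-02 item concrete: accepted applicants reach the CRM through an automation webhook. A minimal sketch using Python's standard library — the endpoint URL is a placeholder, though Zapier catch hooks do accept this POST-JSON shape:

```python
# Push a decision record to an automation webhook; the automation tool
# (e.g., Zapier) then maps the payload to Salesforce fields.
import json
import urllib.request

WEBHOOK_URL = "https://hooks.example.com/catch/accepted"  # placeholder endpoint

def push_to_crm(record: dict) -> int:
    req = urllib.request.Request(
        WEBHOOK_URL,
        data=json.dumps(record).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

# push_to_crm({"applicant_id": "a_004", "decision": "accept"})
```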

§ 6.9 · The accelerant
Chapter 06 · §6.9

Sense holds the lifecycle.
Skills do the heavy lifting.

Four Skills handle the application-management-specific moves that take a lot of configuration the first time and almost none thereafter. Next cycle starts from the recipe, not from scratch.

THE PLATFORM

Sopact Sense

Same platform that ran Chapters 03–05 — now configured for the application lifecycle. Contacts = applicants · Forms = intake · Relationships = documents.

  • Intake forms · uploads · multi-language
    All four Ch 03 channels live here.
  • Screening rules · eligibility logic
    Configurable per program · auto-notify ineligibles.
  • Suite-driven scoring
    Cell + Row from Ch 04 on essays + recs.
  • Committee packet · live URL
    Sortable, drillable, panelist-shareable.
  • Decision logging · audit trail
    Rationale, citations, equity table preserved.
THE ACCELERANT

Skills

Prepackaged playbooks for the application-management moves. They turn rubric design, equity auditing, packet composition, and rationale capture from a project into a configuration.

  • { } rubric-scorer
    Drafts CETC prompts for each rubric dimension and runs them on essays + recs.
  • { } equity-auditor
    Runs accept-rate × demographic breakdown with confidence bands · flags drift.
  • { } committee-packet-composer
    Generates the panel-ready Grid · outlier-flagged · share-link in one step.
  • { } decision-rationale-logger
    Captures structured rationale + citations · keeps the audit trail watertight.

Why this compounds

Cycle 1 teaches Sense your rubric vocabulary and outlier patterns. Cycle 2 inherits both — and adds the alumni-outcome loop from Ch 05's §5.10. By cycle 5, your scholarship process is selecting the applicants your last five cohorts have quietly been teaching it to pick.

§ 6.10 · Chapter recap
Chapter 06 · §6.10

Six lessons
from the bridge.

1
Six stages, every program.

Intake → Screen → Score → Committee → Decide → Onboard. Lift the structure unchanged.

2
One ID for the whole journey.

applicant_id from minute one, mapped to participant_id at onboarding. No re-entry.

3
Cell + Row replace the consultant.

Themes from essays · briefs per applicant · 15 min → 3 min per reviewer.

4
The packet is a live URL.

Sortable, citation-backed, async-friendly. The panel argues outliers, not the obvious.

5
Equity audit runs by itself.

Accept rates by demographic logged every cycle · drift flagged before it hardens.

6
Both engines, one program.

Sense holds the lifecycle. Your stack + Claude handle the dashboards, joins, and automations.

BOOK 01 · THESIS

Stop forcing your survey tool to be your dashboard, your warehouse, and your decision engine. Let Sense be the intelligence engine. Let your stack — with Claude in the loop — be the actionable layer. Two engines. One operating system. Built for the AI era.

End of Book 01 · the journey
Beyond the Survey · the journey

Six chapters.
One method.

A look back at what Book 01 covered — so the next time someone hands you a new program to measure, you have the whole map in one place.

01
Workflow

The 5-stage methodology spine — Data · Framework · Dictionary · Transformation · Reports — and the 9 vocabulary terms behind every later chapter.

02
Data Design

Mixed-method, longitudinal, pre/post. Designing for the field — offline, skip logic, multi-language as three independent layers.

03
Data Collection

Four channels — online · offline · documents · transcripts — feeding one stakeholder_id. Persistence is the architectural choice.

04
Intelligent Suite

Cell · Row · Column · Grid. CETC prompt-craft. The four canonical report types — each with a live URL example.

05
Actionable Insight

The two engines named. Export · BI · MCP · Claude — and three worked examples extending the Ch 04 reports with external data.

06
Application Management

Both engines applied to one full lifecycle. 480 applications, 6 stages, 6 days. The on-ramp to Book 06.

DATA FRAMEWORK DICTIONARY TRANSFORMATION REPORTS

The 5-stage spine that runs through every chapter, every book in the library.

§ 6.11 · Where to go next
Chapter 06 · §6.11 · what's next

Five industry books
follow this one.
Pick yours.

Book 01 was the foundation. The next five take this methodology and apply it to the specific domains most teams actually work in — and Book 06 in particular picks up where this chapter ends.

DIRECT SEQUEL · BOOK 06

Application Management

Chapter 06 introduced the lifecycle. Book 06 goes deep — fellowships, scholarships, accelerators, pitch competitions, RFP responses. Equity auditing in depth. Cross-cycle predictive scoring. Re-applicant management.

06
BOOK 03 · INDUSTRY
Grant Intelligence

Foundation program officer view. Grantee onboarding, mid-cycle pulse, renewal recommendations. The actionable layer talks to your grants management system.

BOOK 04 · INDUSTRY
Impact Intelligence

Portfolio outcomes against IRIS+ and the 5 Dimensions of Impact. Document intelligence on disclosures. Warehouse joins for emissions actuals (extending Ch 05 §5.9).

BOOK 05 · INDUSTRY
Training Intelligence

Pre/post, cohort tracking, wage-gain follow-up. The Girls Code worked example from Ch 04 §4.9.1 and Ch 05 §5.8 — expanded to a full playbook.

BOOK 05 · INDUSTRY
Nonprofit Programs

One intelligence layer across many programs. Shared stakeholders. Cross-program reporting. The hardest reporting problem in the sector — finally tractable.

End of Book 01 · Beyond the Survey
END OF BOOK 01 · BEYOND THE SURVEY

Two engines.
One operating system.
A method for the AI era.

Sopact Sense holds the intelligence. Your stack + Claude hold the action. Together they replace the four-tool stack most teams have inherited — and the four-week consultant rebuild that came with it.

BOOK 01
Beyond
the Survey
Complete
BOOK 03
Grant
Intelligence
Industry guide
BOOK 04
Impact
Intelligence
Industry guide
BOOK 05
Training
Intelligence
Industry guide
BOOK 05
Nonprofit
Programs
Industry guide
BOOK 06
Application
Management
Direct sequel

"One engine produces stakeholder intelligence. The other turns it into any action your team needs. Sequential, not competitive."

THE SOPACT INTELLIGENCE LIBRARY · 2026