Four AI features that turn one clean dataset into four canonical report types — designer-quality, multi-language, ready the moment the last response closes.
By Unmesh Sheth · Sopact
§ 4.0 · Where this chapter sits
Where this chapter sits
From clean data to decision-ready reports.
Chapter 03 brought data in cleanly from four channels. This chapter is the AI
layer that turns it into reports nobody has to rebuild — including the four
canonical report types every credible program uses.
Chapters in Beyond the Survey
00 · Introduction · 8 pages
01 · Workflow · 22 pages
02 · Data Design · 17 pages
03 · Data Collection · 16 pages
04 · Intelligent Suite · you are here
05 · Actionable Insight · next chapter
The library
Book 01 · this book
Beyond the Survey
The foundational field guide — methodology for the AI era.
One unified intelligence layer across many programs.
2
CHAPTER · 04
Intelligent Suite.
Four named AI features. Four canonical report types. One platform that runs
all of it on the dataset you already collected cleanly — and writes the
report in whatever language your funder reads.
04 · The four canonical report types — with live examples
Time to read
16 min
18 pages · 30 illustrations
3
§ 4.1 · One stack vs. stitched
Chapter 04 · §4.1
Four tools, four weeks. One stack, four hours.
The traditional stack: SurveyMonkey for collection, NVivo for qual coding,
Excel for cross-tabs, Canva for the deck. Four tools, four exports, four
reconciliations. Every cycle starts over.
The stitched-tools stack
SurveyMonkey
collection
→ CSV export
NVivo / Dedoose
qual coding · 2 wks
→ XLSX export
Excel
VLOOKUP reconciliation
→ pivot table
Canva / Figma
visual design
→ stale PDF
Per cycle: 4–6 weeks · $15–35k consultant rebuild · zero reproducibility next cycle.
The Intelligent Suite
Intelligent Cell
extract from single open-ended response or document
Intelligent Row
per-respondent report card
Intelligent Column
correlations + patterns across all rows
Intelligent Grid
full designer-quality report, multi-language
Per cycle: minutes to hours · no marginal cost · reproducible from cohort to cohort.
The stitched stack carries the same reconstruction cost every cycle.
The Intelligent Suite does the architectural work once.
4
§ 4.2 · The Suite
Chapter 04 · §4.2
Four features. One dataset.
The Suite operates on four scopes — a single cell, a single row, a single
column, or the entire grid. Each scope answers a different question. Run
them in any combination on the same clean dataset.
CELL
one value, one prompt
ROW
one person, one card
COLUMN
one field, all rows
GRID
whole dataset, whole report
5
§ 4.3 · Intelligent Cell
Feature 01 · Intelligent Cell
One value, one prompt.
The simplest feature in the Suite. Look at one cell — usually an open-ended
response, a quote, a paragraph of a document — run a prompt against it, write
the result into an adjacent cell. Multiply that by 500 rows and it still costs minutes.
INPUT CELL · open_response
"Honestly the first few weeks were brutal. I kept getting stuck on
async functions. But by week 8 I shipped a project and it actually felt
like I could."
237 chars · participant_id = p_a7f3
OUTPUT CELLS · adjacent columns
confidence_score
4 / 5
sentiment
positive shift
themes
struggle → mastery
USE CASE 01
Score confidence from open text
Likert-equivalent from narrative.
USE CASE 02
Extract themes from interview
Tags from a transcript chunk.
USE CASE 03
Pull a number from a PDF page
Spend value with page citation.
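The per-cell pattern above can be sketched in a few lines. This is a minimal illustration, not Sopact's implementation: in the product the scoring is an LLM prompt call, so `score_confidence` below is a deliberately crude keyword stand-in that only shows the shape of the loop — one prompt per cell, result written into an adjacent column.

```python
def score_confidence(text: str) -> int:
    """Placeholder for the per-cell prompt call (returns 1-5).
    A real Intelligent Cell call would send `text` plus a prompt to an LLM."""
    positive = ("shipped", "felt like I could", "mastered")
    negative = ("stuck", "brutal", "lost")
    score = 3
    score += sum(1 for p in positive if p in text)
    score -= sum(1 for n in negative if n in text)
    return max(1, min(5, score))  # clamp to the 1-5 rubric

rows = [
    {"participant_id": "p_a7f3",
     "open_response": ("Honestly the first few weeks were brutal. I kept getting "
                       "stuck on async functions. But by week 8 I shipped a project "
                       "and it actually felt like I could.")},
]

# One pass over the grid: each cell's result lands in an adjacent column.
for row in rows:
    row["confidence_score"] = score_confidence(row["open_response"])
```

The same loop runs unchanged over 500 rows; only the scoring function — the prompt — carries the intelligence.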
6
§ 4.4 · Intelligent Row
Feature 02 · Intelligent Row
One person, one report card.
Take all the data you have on one person — across forms, interviews, documents,
waves — and generate a one-page report card. For 500 applicants, that's 500 cards.
For a cohort review, that's a personalized brief per participant.
One row · all data for p_a7f3
participant_id
p_a7f3
cohort
spring-2026
demographics
F · 24 · first-gen
confidence_t0
2 / 5
confidence_t1
4 / 5
capstone_score
87 / 100
themes
struggle→mastery
exit_interview
30 min transcript
placement_t3
jr. SWE · $87k
Generated card
PARTICIPANT BRIEF
Maya · cohort spring-2026
+2
confidence shift
87
capstone score
Maya entered with self-rated confidence of 2/5. Her exit interview names
week 8 as the inflection — shipping her first real project. By T3 she'd
landed a junior SWE role at $87k.
Quote: "by week 8 it felt like I could" · 02:24
Runs 500 times to generate 500 personalized briefs. The panel works from these,
not from raw exports.
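The row-to-brief step can be sketched as follows. This is a hedged illustration: in the product an LLM writes the narrative from the joined row, whereas here a plain template shows only the data flow. Field names mirror the example row above; `participant_brief` is a hypothetical helper.

```python
row = {
    "participant_id": "p_a7f3", "name": "Maya", "cohort": "spring-2026",
    "confidence_t0": 2, "confidence_t1": 4, "capstone_score": 87,
    "themes": "struggle -> mastery", "placement_t3": "jr. SWE, $87k",
}

def participant_brief(r: dict) -> str:
    """Render one participant's joined row as a one-page card."""
    shift = r["confidence_t1"] - r["confidence_t0"]
    return (
        f"PARTICIPANT BRIEF\n"
        f"{r['name']} · cohort {r['cohort']}\n"
        f"confidence shift: +{shift} · capstone: {r['capstone_score']}\n"
        f"themes: {r['themes']} · placement (T3): {r['placement_t3']}"
    )

print(participant_brief(row))
```

Mapping the same function over 500 rows yields 500 cards; the template never changes, only the row.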
7
§ 4.5 · Intelligent Column
Feature 03 · Intelligent Column
One field, all rows.
Pick one or more columns. Ask a question that spans the whole cohort. Get a
pattern, a correlation, a cluster — backed by the rows that contributed to it.
Two columns selected
COLUMN · X
test_score
quant · 0–100
COLUMN · Y
confidence_text
qual · open-ended
"Do high test scores predict high confidence — or are they independent?"
Output · participant-level scatter
PATTERN READ
Strong positive correlation (r = 0.62) but seven outliers in the
high-test / low-confidence quadrant — usually first-gen students who score
but doubt. Worth a program intervention.
8
§ 4.6 · Intelligent Grid
Feature 04 · Intelligent Grid
The whole grid, one designer report.
Point the Grid at your entire dataset, hand it a prompt that describes the
audience and the language, and it generates the full report — narrative,
visualizations, evidence drill-down, public shareable link. Change the
prompt to change the language, the audience, the framing.
WHOLE DATASET · grid
PROMPT (in French) →
Générez un rapport d'impact pour notre conseil d'administration en français.
Mettez en évidence l'évolution de la confiance, les thèmes qualitatifs, et la performance par démographique.
Generated report · live URL
Rapport d'impact · Spring 2026
CONFIANCE Δ
+2.4
PLACÉS · T3
87%
"Les apprenants signalent que la 8e semaine marque le point d'inflexion — moment où l'effort cède à la maîtrise…"
→ sense.app/r/abc-fr · cliquer pour la version EN ou ES
Change the prompt, change the language.
Same dataset, same evidence, but the report ships in French for the board,
English for the funder, and Portuguese for the regional partner — generated
three times from the same Grid.
9
§ 4.7 · Prompt-craft
Chapter 04 · §4.7
A good prompt has four parts.
Across Cell, Row, Column, and Grid — every Intelligent Suite call is a prompt.
The same four characteristics distinguish prompts that produce decision-ready
output from prompts that produce confident nonsense.
C
Constraints
What the output must not do. Limits, formats, scopes. The hard rails.
e.g. "Score 1–5 only. No explanation. No null."
E
Emphasis
Where to look hardest. What matters most in this response.
e.g. "Focus on the inflection moment, not the average tone."
T
Task
The action itself, in a single verb. Extract. Score. Cluster. Summarize.
e.g. "Score this response on technical confidence."
C
Context
What this response is and who said it. Program type. Wave. Rubric. Persona.
e.g. "12-week coding bootcamp · learner exit reflection · T1."
CETC ASSEMBLED · ONE PROMPT
"In a 12-week coding bootcamp learner exit reflection at T1,
score the response on technical confidence.
Focus on the inflection moment, not the average tone.
Score 1–5 only. No explanation. No null."
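The four parts assemble mechanically, which is what makes them reusable from cohort to cohort. A minimal sketch, assuming a simple string join in the order the assembled example uses (context, task, emphasis, constraints) — the class name `CETCPrompt` is illustrative, not a Sopact API:

```python
from dataclasses import dataclass

@dataclass
class CETCPrompt:
    context: str      # what this response is and who said it
    task: str         # the action itself, in a single verb
    emphasis: str     # where to look hardest
    constraints: str  # hard rails on the output

    def assemble(self) -> str:
        # Context first, constraints last, matching the assembled example.
        return " ".join([self.context, self.task, self.emphasis, self.constraints])

prompt = CETCPrompt(
    context="In a 12-week coding bootcamp learner exit reflection at T1,",
    task="score the response on technical confidence.",
    emphasis="Focus on the inflection moment, not the average tone.",
    constraints="Score 1-5 only. No explanation. No null.",
).assemble()

print(prompt)
```

Because each part is a named field, next cohort's team edits one slot — say, the wave in `context` — and reruns the same call.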
10
§ 4.8 · One dataset, four ways
Chapter 04 · §4.8
One bootcamp dataset. Four ways.
60 learners, 14 weeks, four channels of clean data. Same dataset, four
different Intelligent Suite calls — each answering a different question
a different stakeholder is asking.
FEATURE 01
Cell
"Can you score every open-ended response on technical confidence?"
Run on each row's open_response column · 60 cells written
confidence_score column populated · 1–5 scale · joined to participant_id
FEATURE 02
Row
"Build a one-page brief per learner for the placement team."
Run per row · 60 one-page briefs, joined across forms + interviews
FEATURE 03
Column
"Do high test scores predict high confidence — or are they independent?"
Pattern + 7 outliers flagged · first-gen learners with high score / low confidence
FEATURE 04
Grid
"Write the funder report. EN for the foundation, ES for the regional partner."
Whole dataset → designer-quality multi-language report with evidence drill-down
Two live URLs · sense.app/r/abc-en + /abc-es · same evidence, two languages
The result
One clean dataset answers four different questions, for four different
audiences — without leaving the platform. No export, no consultant, no rebuild.
Next cohort starts from the same recipe.
11
§ 4.9 · Four canonical report types
Chapter 04 · §4.9
Four report types. One architecture.
Most credible impact reports fit one of four canonical shapes. Each comes
from the same clean-data architecture: persistent participant IDs · analysis
at collection · live-URL delivery. The next four pages walk through real
examples, each one openable in a browser.
SHARED BACKBONE · ALL FOUR REPORTS
01
Persistent IDs
Every response links back to the participant from the first form. No reconciliation.
02
Analysis at collection
Open responses themed as they arrive. No coding phase, no NVivo, no analyst.
03
Live URL delivery
No static PDF. Every value drills back to its source. Updates as data arrives.
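The persistent-ID backbone is what makes the "no reconciliation" claim concrete: because every wave carries the same participant_id, later waves join by key instead of by fuzzy name-matching. A minimal sketch with plain dicts (field names illustrative):

```python
# Two waves of data, both keyed on the persistent participant_id.
intake    = {"p_a7f3": {"cohort": "spring-2026", "confidence_t0": 2}}
exit_wave = {"p_a7f3": {"confidence_t1": 4, "capstone_score": 87}}

# Exact-key join: no VLOOKUP, no manual matching step.
merged = {
    pid: {**intake.get(pid, {}), **exit_wave.get(pid, {})}
    for pid in intake.keys() | exit_wave.keys()
}

print(merged["p_a7f3"])
```

The same join works for any number of waves and channels, which is why the four report types can share one architecture.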
⇆
Workforce pre/post
p. 13 · foundation funder
/×
Correlation · qual + quant
p. 14 · program improvement
▦
Application panel
p. 15 · review panel
⚘
ESG portfolio
p. 16 · investors + board
12
§ 4.9.1 · Workforce pre/post
Report type 01 of 04
Workforce · pre/post.
A 47-person Girls Code cohort runs pre- and post-assessments across six
skill dimensions with confidence tracking throughout. The program director
needs one report to send to her foundation funder.
Audience
Foundation funder
Cohort
47 learners · 12 wks
Built with
Intelligent Grid
What's inside
Skill delta tables across six rubric dimensions — per participant + cohort average
Confidence movement from baseline to post-program with distribution chart
Demographic breakdown by age + prior experience, structured at intake
Qualitative themes from post-program reflections, ranked by frequency
GIRLS CODE · COHORT REPORT
Spring 2026 · Skill change report
SKILL Δ AVG
+1.7
CONF Δ AVG
+2.4
"Learners moved most on JavaScript fundamentals + version control. Confidence
in interviews lagged technical confidence by 4 weeks…"
§ 4.9.2 · Correlation qual + quant
Report type 02 of 04
Correlation · qual + quant.
Do high test scores predict high confidence — or are they independent
dimensions? One analysis links a quantitative rubric score to AI-extracted
confidence from open-ended responses, producing a participant-level scatter.
Audience
Program improvement team
Method
Cell + Column
Built with
Intelligent Suite
SCORES × CONFIDENCE
Participant-level pattern
"Seven learners scored ≥80 but reported confidence ≤2 — usually first-gen students. Worth an intervention."
What's inside
Cross-dimensional correlation between quant rubric score + AI-extracted confidence
Participant-level scatter showing the actual distribution, not just averages
Four clusters — high/high, high/low, low/low, outliers
Plain-language read of what the pattern means for program design
§ 4.9.3 · Application panel
Report type 03 of 04
Application panel.
500 scholarship applications, 15 minutes per app the old way. An AI-scored
brief per applicant cuts review time to three minutes — with citations
linking every score back to the source sentence a panel can audit.
§ 4.9.4 · ESG portfolio
Report type 04 of 04
ESG portfolio.
Every portfolio company submits a sustainability disclosure PDF. One
dashboard reads all of them, scores each against the framework, and
aggregates the results into a consistent picture for investors and the
board.
Audience
Investors + board
Input
PDFs per company
Built with
Document intelligence
PORTFOLIO DASHBOARD
Sustainability across the portfolio
Two companies fall below the threshold. Every score links to its source PDF page.
What's inside
PDFs read automatically — scores, gaps, claims pulled per company
Per-company gap analysis against the framework with evidence citations
One cross-portfolio view — every company compared together
Ready-to-share dashboard — no separate analytics tool needed
The Intelligent Suite runs inside Sopact Sense. Skills automate the work
that used to be a four-tool project. And the cleaned, structured output
pushes out to Tableau, Power BI, Looker, or Snowflake on demand.
THE PLATFORM
Sopact Sense
Four Intelligent Suite features run on the same clean dataset that your
Contacts + Forms + Relationships produced. No imports. No exports for AI.
Intelligent Cell
Per-cell prompt · result lands adjacent
Intelligent Row
Per-respondent report card · scales to 500+
Intelligent Column
Patterns + correlations across all rows
Intelligent Grid
Full designer report · public live URL · multi-language
BI + warehouse outbound
Tableau · Power BI · Looker · Snowflake
THE ACCELERANT
Skills
Prepackaged playbooks for prompt design, correlation hunting, report
composition, and methodology validation. The Suite gets faster every
cohort.
{ } prompt-engineer
Drafts CETC-shaped prompts for your specific instruments.
{ } correlation-finder
Surfaces non-obvious patterns across columns + flags outliers.
Checks every claim back to its evidence; prevents overclaim.
↑
Why this compounds
Cohort 1's prompts teach Sense your vocabulary. Cohort 2 starts from
validated prompts. By cohort 5 your team isn't writing prompts — they're
curating the best version of last cohort's. And the same report your
board sees in Sense pushes to Tableau the same week.