The Sopact Intelligence Library
Book 01 of 06 · Chapter 04

Intelligent
Suite.

Four AI features that turn one clean dataset into four canonical report types — designer-quality, multi-language, ready the moment the last response closes.

CELL ROW COLUMN GRID
By Unmesh Sheth · Sopact
§ 4.0 · Where this chapter sits
Where this chapter sits

From clean data
to decision-ready reports.

Chapter 03 brought data in cleanly from four channels. This chapter is the AI layer that turns it into reports nobody has to rebuild — including the four canonical report types every credible program uses.

Chapters in Beyond the Survey

00 · Introduction · 8 pages
01 · Workflow · 22 pages
02 · Data Design · 17 pages
03 · Data Collection · 16 pages
04 · Intelligent Suite · you are here
05 · Actionable Insight · next chapter

The library

Book 01 · this book
Beyond the Survey
The foundational field guide — methodology for the AI era.
Book 02 · industry guide
Application Management
Pitch comps, fellowships, scholarships, accelerators.
Book 03 · industry guide
Grant Intelligence
For program officers and foundation teams.
Book 04 · industry guide
Impact Intelligence
Portfolio outcomes with 5 Dimensions and IRIS+.
Book 05 · industry guide
Training Intelligence
Learner outcomes from enrollment to wage gain.
Book 06 · industry guide
Nonprofit Programs
One unified intelligence layer across many programs.
CHAPTER · 04

Intelligent
Suite.

Four named AI features. Four canonical report types. One platform that runs all of it on the dataset you already collected cleanly — and writes the report in whatever language your funder reads.

What you'll learn
  • 01 · Why one stack beats stitched-together tools
  • 02 · The four features — Cell · Row · Column · Grid
  • 03 · Prompt-craft — Constraints, Emphasis, Task, Context
  • 04 · The four canonical report types — with live examples
Time to read
16 min
18 pages · 30 illustrations
§ 4.1 · One stack vs. stitched
Chapter 04 · §4.1

Three tools, four weeks.
One stack, four hours.

The traditional stack: SurveyMonkey for collection, NVivo for qual coding, Excel for cross-tabs, Canva for the deck. Four tools, four exports, four reconciliations. Every cycle starts over.

The stitched-tools stack
SurveyMonkey
collection
→ CSV export
NVivo / Dedoose
qual coding · 2 wks
→ XLSX export
Excel
VLOOKUP reconciliation
→ pivot table
Canva / Figma
visual design
→ stale PDF
Per cycle: 4–6 weeks · $15–35k consultant rebuild · zero reproducibility next cycle.
The Intelligent Suite
Intelligent Cell
extract from single open-ended response or document
Intelligent Row
per-respondent report card
Intelligent Column
correlations + patterns across all rows
Intelligent Grid
full designer-quality report, multi-language
Per cycle: minutes to hours · no marginal cost · reproducible from cohort to cohort.

The stitched stack carries the same reconstruction cost every cycle. The Intelligent Suite does the architectural work once.

§ 4.2 · The Suite
Chapter 04 · §4.2

Four features.
One dataset.

The Suite operates on four scopes — a single cell, a single row, a single column, or the entire grid. Each scope answers a different question. Run them in any combination on the same clean dataset.

participant_id · confidence_t0 · confidence_t1 · open_response · theme
p_a7f3 · 2 · 4 · "hard at first" · …
p_b2c1 · 3 · 4 · … · …
p_c9e2 · 2 · 5 · … · …
p_d4a8 · 1 · 3 · … · …
p_e1f5 · 3 · 5 · … · …
CELL → one value · ROW → across one row · COLUMN ↓ down one field · GRID → all of it
CELL
one value, one prompt
ROW
one person, one card
COLUMN
one field, all rows
GRID
whole dataset, whole report
§ 4.3 · Intelligent Cell
Feature 01 Intelligent Cell

One value,
one prompt.

The simplest feature in the Suite. Look at one cell — usually an open-ended response, a quote, a paragraph of a document — run a prompt against it, write the result into an adjacent cell. Multiply by 500 rows and it still costs you minutes.

INPUT CELL · open_response
"Honestly the first few weeks were brutal. I kept getting stuck on async functions. But by week 8 I shipped a project and it actually felt like I could."
151 chars · participant_id = p_a7f3
prompt
OUTPUT CELLS · adjacent columns
confidence_score
4 / 5
sentiment
positive shift
themes
struggle → mastery
USE CASE 01
Score confidence from open text

Likert-equivalent from narrative.

USE CASE 02
Extract themes from interview

Tags from a transcript chunk.

USE CASE 03
Pull a number from a PDF page

Spend value with page citation.
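The Cell pattern above can be sketched in a few lines of Python — one prompt per cell, result written to an adjacent column. This is a minimal sketch, not Sopact's API: `call_llm`, the prompt template, and the hard-coded score are stand-ins for whatever model call you actually use.

```python
def call_llm(prompt: str) -> str:
    # Stub: a real implementation would call your model API here.
    return "4"

def score_open_responses(rows: list[dict], prompt_template: str) -> list[dict]:
    # One prompt per cell; the result lands in an adjacent column,
    # still joined to the same participant_id.
    for row in rows:
        prompt = prompt_template.format(response=row["open_response"])
        row["confidence_score"] = int(call_llm(prompt))
    return rows

rows = [{"participant_id": "p_a7f3",
         "open_response": "by week 8 I shipped a project"}]
scored = score_open_responses(
    rows, "Score 1-5 only. No explanation. Response: {response}")
```

The loop is the whole trick: the same constrained prompt runs once per row, so 500 responses cost 500 cheap calls, not an analyst-week.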

§ 4.4 · Intelligent Row
Feature 02 Intelligent Row

One person,
one report card.

Take all the data you have on one person — across forms, interviews, documents, waves — and generate a one-page report card. For 500 applicants, that's 500 cards. For a cohort review, that's a personalized brief per participant.

One row · all data for p_a7f3
participant_id · p_a7f3
cohort · spring-2026
demographics · F · 24 · first-gen
confidence_t0 · 2 / 5
confidence_t1 · 4 / 5
capstone_score · 87 / 100
themes · struggle → mastery
exit_interview · 30 min transcript
placement_t3 · jr. SWE · $87k
Generated card
PARTICIPANT BRIEF

Maya · cohort spring-2026

+2
confidence shift
87
capstone score

Maya entered with self-rated confidence of 2/5. Her exit interview names week 8 as the inflection — shipping her first real project. By T3 she'd landed a junior SWE role at $87k.

Quote: "by week 8 it felt like I could" 02:24

Runs 500 times to generate 500 personalized briefs. The panel works from these, not from raw exports.
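The Row pattern can be sketched the same way: every field for one participant, already joined on the persistent ID, folded into one brief. `build_brief` and its field names are illustrative, not a Sopact interface.

```python
def build_brief(row: dict) -> str:
    # One report card per participant, built from fields already
    # joined on the persistent participant_id — no reconciliation.
    delta = row["confidence_t1"] - row["confidence_t0"]
    return (
        f"PARTICIPANT BRIEF · {row['participant_id']} · {row['cohort']}\n"
        f"confidence shift {delta:+d} · capstone {row['capstone_score']}\n"
        f'quote: "{row["quote"]}"'
    )

cohort = [{"participant_id": "p_a7f3", "cohort": "spring-2026",
           "confidence_t0": 2, "confidence_t1": 4, "capstone_score": 87,
           "quote": "by week 8 it felt like I could"}]
briefs = [build_brief(r) for r in cohort]  # 500 rows → 500 briefs
```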

§ 4.5 · Intelligent Column
Feature 03 Intelligent Column

One field,
all rows.

Pick one or more columns. Ask a question that spans the whole cohort. Get a pattern, a correlation, a cluster — backed by the rows that contributed to it.

Two columns selected
COLUMN · X
test_score
quant · 0–100
COLUMN · Y
confidence_text
qual · open-ended

"Do high test scores predict high confidence — or are they independent?"

Output · participant-level scatter
Scatter · test_score (x) vs. confidence (y) · quadrants low/high · high/high · low/low · high/low
⚠ r = 0.62 · positive · not absolute
PATTERN READ

Strong positive correlation (r = 0.62) but seven outliers in the high-test / low-confidence quadrant — usually first-gen students who score but doubt. Worth a program intervention.
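Under the hood, that quadrant read is ordinary correlation plus a filter. A sketch with NumPy, on made-up numbers rather than the chapter's cohort:

```python
import numpy as np

# Illustrative data, not the chapter's cohort: a quant rubric column
# and a 1-5 confidence column extracted from open text by Cell.
test_score = np.array([91, 85, 78, 88, 55, 62, 83, 90])
confidence = np.array([5, 4, 4, 2, 2, 3, 4, 5])

r = np.corrcoef(test_score, confidence)[0, 1]  # Pearson r
# Flag the high-score / low-confidence quadrant for intervention.
outliers = np.where((test_score >= 80) & (confidence <= 2))[0]
```

The correlation summarizes the cohort; the `outliers` index list is what makes the pattern actionable, because each index drills back to a participant row.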

§ 4.6 · Intelligent Grid
Feature 04 Intelligent Grid

The whole grid,
one designer report.

Point the Grid at your entire dataset, hand it a prompt that describes the audience and the language, and it generates the full report — narrative, visualizations, evidence drill-down, public shareable link. Change the prompt to change the language, the audience, the framing.

WHOLE DATASET · grid
all rows · all cols
PROMPT (in French) →
Générez un rapport d'impact pour notre conseil d'administration en français. Mettez en évidence l'évolution de la confiance, les thèmes qualitatifs, et la performance par groupe démographique.
Generated report · live URL
Rapport d'impact · Spring 2026
CONFIANCE Δ
+2.4
PLACÉS · T3
87%
pré (gris) → post (vert)

"Les apprenants signalent que la 8e semaine marque le point d'inflexion — le moment où l'effort cède à la maîtrise…"

→ sense.app/r/abc-fr · cliquer pour la version EN ou ES

Change the prompt, change the language. Same dataset, same evidence, but the report ships in French for the board, English for the funder, and Portuguese for the regional partner — generated three times from the same Grid.
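That loop — same dataset, one prompt per audience and language — can be sketched as a dictionary of prompts mapped through one call. `generate_report` and the example.org URLs are placeholders for the actual Grid call and the live links it returns.

```python
def generate_report(dataset: dict, prompt: str, lang: str) -> str:
    # Stub for the Grid call; a real call would return a live report URL.
    return f"https://example.org/r/{dataset['cohort']}-{lang}"

dataset = {"cohort": "spring-2026"}
prompts = {
    "fr": "Générez un rapport d'impact pour le conseil d'administration.",
    "en": "Generate an impact report for the foundation funder.",
    "es": "Genere un informe de impacto para el socio regional.",
}
# Same evidence, three audiences — one report URL per language.
report_urls = {lang: generate_report(dataset, p, lang)
               for lang, p in prompts.items()}
```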

§ 4.7 · Prompt-craft
Chapter 04 · §4.7

A good prompt
has four parts.

Across Cell, Row, Column, and Grid — every Intelligent Suite call is a prompt. The same four characteristics distinguish prompts that produce decision-ready output from prompts that produce confident nonsense.

C

Constraints

What the output must not do. Limits, formats, scopes. The hard rails.

e.g. "Score 1–5 only. No explanation. No null."
E

Emphasis

Where to look hardest. What matters most in this response.

e.g. "Focus on the inflection moment, not the average tone."
T

Task

The action itself, in a single verb. Extract. Score. Cluster. Summarize.

e.g. "Score this response on technical confidence."
C

Context

What this response is and who said it. Program type. Wave. Rubric. Persona.

e.g. "12-week coding bootcamp · learner exit reflection · T1."
CETC ASSEMBLED · ONE PROMPT

"In a 12-week coding bootcamp learner exit reflection at T1, score the response on technical confidence. Focus on the inflection moment, not the average tone. Score 1–5 only. No explanation. No null."
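The four parts can be made mechanical. A minimal sketch, assuming a hypothetical `CETCPrompt` helper rather than any real Sopact API — the assembly order mirrors the example above: context first, then task, emphasis, and the hard constraints last.

```python
from dataclasses import dataclass

@dataclass
class CETCPrompt:
    constraints: str  # hard rails: limits, formats, scopes
    emphasis: str     # where to look hardest
    task: str         # the action itself, in a single verb
    context: str      # what this response is and who said it

    def assemble(self) -> str:
        # Context → task → emphasis → constraints, as one prompt string.
        return (f"In {self.context}, {self.task} "
                f"{self.emphasis} {self.constraints}")

prompt = CETCPrompt(
    constraints="Score 1-5 only. No explanation. No null.",
    emphasis="Focus on the inflection moment, not the average tone.",
    task="score the response on technical confidence.",
    context="a 12-week coding bootcamp learner exit reflection at T1",
)
assembled = prompt.assemble()
```

Keeping the four parts as separate fields is what makes prompts reusable: next cohort, you swap the context and keep the validated constraints.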

§ 4.8 · One dataset, four ways
Chapter 04 · §4.8

One bootcamp dataset.
Four ways.

60 learners, 14 weeks, four channels of clean data. Same dataset, four different Intelligent Suite calls — each answering a different question a different stakeholder is asking.

FEATURE 01
Cell
"Can you score every open-ended response on technical confidence?"
Run on each row's open_response column · 60 cells written
confidence_score column populated · 1–5 scale · joined to participant_id
FEATURE 02
Row
"Build a one-page brief per learner for the placement team."
Pulls demographics, skill delta, capstone score, exit-interview quote · per row
60 personalized briefs · placement team works from these, not the raw export
FEATURE 03
Column
"Does demographic correlate with confidence Δ — and where are the outliers?"
Cross-tab demographic_t0 × (confidence_t1 – confidence_t0) · participant scatter
Pattern + 7 outliers flagged · first-gen learners with high score / low confidence
FEATURE 04
Grid
"Write the funder report. EN for the foundation, ES for the regional partner."
Whole dataset → designer-quality multi-language report with evidence drill-down
Two live URLs · sense.app/r/abc-en + /abc-es · same evidence, two languages
The result

One clean dataset answers four different questions, for four different audiences — without leaving the platform. No export, no consultant, no rebuild. Next cohort starts from the same recipe.

§ 4.9 · 4 canonical report types
Chapter 04 · §4.9

Four report types.
One architecture.

Most credible impact reports fit one of four canonical shapes. Each comes from the same clean-data architecture: persistent participant IDs · analysis at collection · live-URL delivery. The next four pages walk through real examples, each one openable in a browser.

SHARED BACKBONE · ALL FOUR REPORTS
01
Persistent IDs
Every response links back to the participant from the first form. No reconciliation.
02
Analysis at collection
Open responses themed as they arrive. No coding phase, no NVivo, no analyst.
03
Live URL delivery
No static PDF. Every value drills back to its source. Updates as data arrives.
§ 4.9.1 · Workforce pre/post
Report type 01 of 04

Workforce · pre/post.

A 47-person Girls Code cohort runs pre- and post-assessments across six skill dimensions with confidence tracking throughout. The program director needs one report to send to her foundation funder.

Audience
Foundation funder
Cohort
47 learners · 12 wks
Built with
Intelligent Grid
What's inside
  • Skill delta tables across six rubric dimensions — per participant + cohort average
  • Confidence movement from baseline to post-program with distribution chart
  • Demographic breakdown by age + prior experience, structured at intake
  • Qualitative themes from post-program reflections, ranked by frequency
GIRLS CODE · COHORT REPORT

Spring 2026 · Skill change report

SKILL Δ AVG
+1.7
CONF Δ AVG
+2.4
6 skill dimensions · pre (grey) → post (coral)
"Learners moved most on JavaScript fundamentals + version control. Confidence in interviews lagged technical confidence by 4 weeks…"
LIVE REPORT · NO LOGIN
sense.sopact.com/ig/d81465e6-9c72-4ee9-bf8b-08ca519f1259
Open report →
§ 4.9.2 · Correlation · qual + quant
Report type 02 of 04

Correlation · qual + quant.

Do high test scores predict high confidence — or are they independent dimensions? One analysis links a quantitative rubric score to AI-extracted confidence from open-ended responses, producing a participant-level scatter.

Audience
Program improvement team
Method
Cell + Column
Built with
Intelligent Suite
SCORES × CONFIDENCE

Participant-level pattern

Scatter · test_score (x) vs. confidence (y) · r = 0.62 · positive · ⚠ 7 outliers high/low
"Seven learners scored ≥80 but reported confidence ≤2 — usually first-gen students. Worth an intervention."
What's inside
  • Cross-dimensional correlation between quant rubric score + AI-extracted confidence
  • Participant-level scatter showing the actual distribution, not just averages
  • Four clusters — high/high, high/low, low/low, outliers
  • Plain-language read of what the pattern means for program design
LIVE REPORT · NO LOGIN
sense.sopact.com/ig/81461672-74ca-47a7-94de-1ddb77487b42
Open report →
§ 4.9.3 · Application panel
Report type 03 of 04

Application panel · 500 apps.

500 scholarship applications, 15 minutes per app the old way. An AI-scored brief per applicant cuts review time to three minutes — with citations linking every score back to the source sentence a panel can audit.

Audience
Review panel
Volume
500 applications
Built with
Cell + Row
What's inside
  • One-page brief per applicant — essay themes, rec quality, rubric alignment
  • Sortable grid the whole panel works from together
  • Score distribution + flagged outliers for panel discussion
  • Review time · 15 minutes down to 3 per application
PANEL GRID · 500 APPS

Sortable applicant briefs

applicant         score  themes        flag
a_001 · M. Chen     87   STEM · grit
a_002 · J. Diaz     84   arts · civic
a_003 · S. Patel    61   STEM          ⚠ rec
a_004 · K. Owusu    91   civic · grit
… 495 more
Each row drills to a brief. Each brief sources to a citation. Audit trail intact.
LIVE REPORT · NO LOGIN
sense.sopact.com/ig/bcc5a5a7-7b31-4bf3-8b1b-2c0d665da248
Open grid →
§ 4.9.4 · ESG portfolio
Report type 04 of 04

ESG portfolio · PDFs to dashboard.

Every portfolio company submits a sustainability disclosure PDF. One dashboard reads all of them, scores each against the framework, and aggregates the results into a consistent picture for investors and the board.

Audience
Investors + board
Input
PDFs per company
Built with
Document intelligence
PORTFOLIO DASHBOARD

Sustainability across the portfolio

FRAMEWORK SCORE · 8 COMPANIES
Acme 82 · Birdco 68 · Cypress 91 · Delphi 42 · Elara 74 · Foxglove 28 · Gyre 81 · …
Two companies fall below the threshold. Every score links to its source PDF page.
What's inside
  • PDFs read automatically — scores, gaps, claims pulled per company
  • Per-company gap analysis against the framework with evidence citations
  • One cross-portfolio view — every company compared together
  • Ready-to-share dashboard — no separate analytics tool needed
LIVE ANALYSIS · NO LOGIN
sense.sopact.com/ir/1a2dccdb-6ea4-5dbb-8ce6-c2d48977221a
Open analysis →
§ 4.10 · The accelerant + BI
Chapter 04 · §4.10

Inside Sense.
Out to your stack.

The Intelligent Suite runs inside Sopact Sense. Skills automate the work that used to be a four-tool project. And the cleaned, structured output pushes out to Tableau, Power BI, Looker, or Snowflake on demand.

THE PLATFORM

Sopact Sense

Four Intelligent Suite features run on the same clean dataset that your Contacts + Forms + Relationships produced. No imports. No exports for AI.

  • Intelligent Cell
    Per-cell prompt · result lands adjacent
  • Intelligent Row
    Per-respondent report card · scales to 500+
  • Intelligent Column
    Patterns + correlations across all rows
  • Intelligent Grid
    Full designer report · public live URL · multi-language
  • BI + warehouse outbound
    Tableau · Power BI · Looker · Snowflake
THE ACCELERANT

Skills

Prepackaged playbooks for prompt design, correlation hunting, report composition, and methodology validation. The Suite gets faster every cohort.

  • { } prompt-engineer
    Drafts CETC-shaped prompts for your specific instruments.
  • { } correlation-finder
    Surfaces non-obvious patterns across columns + flags outliers.
  • { } report-builder
    Generates audience-tuned Grid reports — multi-language, citation-backed.
  • { } methodology-validator
    Checks every claim back to its evidence; prevents overclaim.

Why this compounds

Cohort 1's prompts teach Sense your vocabulary. Cohort 2 starts from validated prompts. By cohort 5 your team isn't writing prompts — they're curating the best version of last cohort's. And the same report your board sees in Sense pushes to Tableau the same week.

§ 4.11 · Recap · End of Chapter 04
Chapter 04 · Recap

Six lessons.
And a library
still ahead.

1
One stack beats stitched tools.

Survey + qual-coding + Excel + Canva = 4–6 weeks. Suite = minutes.

2
Four features, four scopes.

Cell · Row · Column · Grid — pick your scope, point your prompt.

3
Prompts have four parts.

Constraints · Emphasis · Task · Context. Skip one, lose quality.

4
One dataset, many reports.

Funder · panel · board · partner — same data, different prompts.

5
Multi-language is one prompt away.

EN board, FR partner, ES regional — same Grid, three URLs.

6
Four canonical shapes.

Pre/post · correlation · panel · portfolio — pick yours, lift the recipe.

END OF CHAPTER 04 · BOOK 01 · UP NEXT · CHAPTER 05 · ACTIONABLE INSIGHT
BOOK 01
Beyond
the Survey
You are here
BOOK 02
Application
Management
BOOK 03
Grant
Intelligence
BOOK 04
Impact
Intelligence
BOOK 05
Training
Intelligence
BOOK 06
Nonprofit
Programs

"Four features. Four canonical report types. One clean dataset that doesn't need a four-week consultant phase to become a board memo."

THE SOPACT INTELLIGENCE LIBRARY · 2026