The Sopact Intelligence Library
Book 03 of 06 · Chapter 01 · the opener
Grant Intelligence

The
Lifecycle.

Stop reading grant reports. Start understanding grantees. The full pre-award to post-award arc — applied to one persistent grantee record. Then six intelligence reports the night the cycle closes.

THE GRANT LIFECYCLE · 6 STAGES: 01 INTAKE → 02 REVIEW → 03 AWARD → 04 PLAN → 05 TRACK → 06 RENEW
grantee_id assigned at intake · survives every renewal cycle
By Unmesh Sheth · Sopact
§ 1.0 · Where this chapter sits
Where this chapter sits

One method,
one domain.

Book 03 opens here. The 5-stage spine you learned in Beyond the Survey is the same — only now every stage carries grant-specific weight. The application is the data. The interview is the framework. The renewal is the report card.

Chapters in Grant Intelligence

01 · The Lifecycle · you are here
02 · Application Review · next
03 · Logic Model at Interview · coming
04 · Post-Award Tracking · coming
05 · Six Reports · coming
06 · Renewal & Cohorts · coming
07 · In the Wild · closer

The library

Book 01 · foundational
Beyond the Survey
The foundational field guide — methodology for the AI era.
Book 02 · industry guide
Application Management
Pitch comps, fellowships, scholarships, accelerators.
Book 03 · this book
Grant Intelligence
For program officers and foundation teams.
Book 04 · industry guide
Impact Intelligence
Portfolio outcomes with 5 Dimensions and IRIS+.
Book 05 · industry guide
Training Intelligence
Learner outcomes from enrollment to wage gain.
Book 06 · industry guide
Nonprofit Programs
One unified intelligence layer across many programs.
CHAPTER · 01 · BOOK 03 OPENING

The
Lifecycle.

Grantmaking isn't six tools loosely linked by a spreadsheet. It's one persistent record that compounds — from the day an applicant fills in the LOI to the day a board member asks "what did this program actually produce?"

What you'll learn
  • How the grant lifecycle maps onto the 5-stage methodology spine
  • Why the Logic Model belongs at interview, not as a static template
  • How citation-backed AI scoring replaces 30 days of reviewer reading
  • How six intelligence reports get written the night the cycle closes
Time to read: 15 minutes · 16 pages
Best read after Beyond the Survey · Chapter 01
Builds directly on the 5-stage spine
§ 1.1 · Why grant intelligence
Chapter 01 · §1.1

Six weeks to build it.
Four hours to read it.

That ratio — six weeks of staff time producing a report that gets four hours of board attention — is what one program officer calls The Compliance Ceiling. The data exists. The effort exists. The architecture that turns effort into intelligence does not.

Every grant cycle generates hundreds of documents — LOIs, proposals, budgets, progress reports, site visits, board memos. None of them were designed to connect to each other. None were designed to outlive the cycle that produced them. So the foundation collects more and learns less.

BEFORE · A TYPICAL CYCLE

347 applications. 5 reviewers. 30 days.

  • Each reviewer reads at their own pace and standard
  • Reviewer #3 scores 15% above mean — nobody notices
  • Budget claims contradict narrative claims — unflagged
  • Interview commitments end up in a Google Doc nobody opens
  • 6 progress reports late — found at board meeting
  • Board report assembled by hand from fragments
Time to board memo: 6 weeks
AFTER · GRANT INTELLIGENCE

347 applications. Scored overnight.

  • Every page of every attachment read against your rubric
  • Reviewer bias flagged before final rankings
  • Budget vs. narrative contradictions surfaced automatically
  • Logic Model built at interview · signed by both parties
  • Late submissions flagged the day they're due
  • Six board-ready reports generated the night the cycle closes
Time to board memo: overnight
Foundant tells you whether grants were processed correctly. The board is asking what the grants produced. Those are two different questions — and only one of them has an answer your current platform can find.
— The Compliance Ceiling · Chapter 1
§ 1.2 · The big idea
Chapter 01 · §1.2

Six lifecycle stages
map onto five spine stages.

The grant cycle has six visible stages — intake, review, award, plan, track, renew. The methodology spine has five — data, framework, dictionary, transformation, reports. They aren't separate. The spine is the plumbing underneath every stage of the cycle.

GRANT LIFECYCLE · 6 STAGES
01 · INTAKE → 02 · REVIEW → 03 · AWARD → 04 · PLAN → 05 · TRACK → 06 · RENEW
↓ runs on ↓
METHODOLOGY SPINE · 5 STAGES
DATA → FRAMEWORK → DICTIONARY → TRANSFORM → REPORTS

DATA · Applications, budgets, attachments, interviews, progress reports, site visits
FRAMEWORK · Rubric for review · Logic Model at interview · Theory of Change
DICTIONARY · applicant_id → grantee_id arc · Indicators · Outcome commitments
TRANSFORM · AI scoring · bias detection · progress-vs-promise · theme extraction
REPORTS · Six board-ready intelligence reports · auto-generated · cycle close

One record. Six stages. Every renewal cycle.

The grantee_id is assigned at LOI and never resets. Year 1 questions keep their Year 1 answers. Year 3 questions get added without breaking the history. No CSV gluing between cycles.

§ 1.3 · Effective data in grants
Stage 01 · Data · EVERY DOCUMENT THAT TOUCHES A GRANT

A grant is not a survey.
It's a document trail.

The first decision in grant intelligence is structural: treat every document as data. Not just the application form fields. The LOI narrative, the uploaded budget, the audit report, the interview transcript, the quarterly progress report, the site-visit notes. All of it is signal. All of it scores.

PRE-AWARD

Application package

LOI, full proposal, project narrative, budget, organizational documents, letters of support, prior-year financials.

LOI.pdf budget.xlsx narrative.docx
AT AWARD

Interview & Logic Model

90-minute onboarding call transcript, signed Logic Model, evaluation and learning plan, baseline data submission.

interview.txt logic_model baseline.csv
POST-AWARD

Recurring reports

Quarterly metrics, semi-annual narratives, mid-cycle reports, site-visit notes, audit findings, financial actuals.

Q1_report.pdf site_visit.docx audit.pdf
STAKEHOLDER

Beneficiary voice

Beneficiary surveys, 30-day follow-up calls, photo evidence, third-party evaluation reports, longitudinal outcome surveys.

survey.csv interview.mp3 eval_report.pdf
CONCEPT · THE ANCHOR DEFICIT

A rubric criterion names what to assess. An evidence anchor specifies what observable text constitutes each scoring level. Without anchors, "demonstrates community need" means three different things to three reviewers — and twelve different things to twelve. Anchoring is the prerequisite step that every other platform skips.

§ 1.4 · Framework at interview
Stage 02 · Framework · BUILT IN A 90-MIN CALL · NOT FROM A TEMPLATE

The Logic Model
is a conversation.

Most platforms ship Logic Model templates that grantees fill out alone. They arrive back as boilerplate. The Logic Model that matters is the one built during the onboarding interview — application context on one side, the grantee's voice on the other, and a signed document that becomes the baseline for everything that follows.

HOW THE LOGIC MODEL GETS BUILT
INPUT 1
Application

Narrative, budget, theory of change as written.

+
INPUT 2
Interview

90-min call · transcript captured automatically.

=
OUTPUT
Signed Logic Model

Scoring template for every check-in.

WHAT THE LOGIC MODEL CONTAINS · 5 LINKED COLUMNS
Inputs

Staff · budget · materials

Activities

Workshops · coaching · case management

Outputs

120 enrolled · 80 completed

Outcomes

Skill gain · job placement

Impact

Wage gain · 3-year retention

⚠ The Commitment Orphan

When commitments are made at interview but never connect to the system that will ask grantees to report against them, you get late progress reports that reference nothing. Logic Model at interview eliminates the orphan — every check-in inherits the commitments it should measure.

§ 1.5 · Shared vocabulary
Stage 03 · Dictionary · ONE ID FROM LOI TO YEAR-3 RENEWAL

One record.
Many cycles.

The hardest problem in multi-year grantmaking isn't collecting data. It's keeping the data comparable across cycles. Year 1 grantees were asked one set of questions. Year 3 grantees get refined ones. Most platforms force a choice — preserve history or evolve the schema. The persistent record does both, additively.

A GRANTEE'S RECORD · ONE PERSISTENT ID
grantee_id = gr_4f8a2c · persistent across the entire arc
1 · LOI · Mar '25 → 2 · PROPOSAL · May '25 → 3 · AWARD · Jul '25 → 4 · INTERVIEW · Aug '25 → 5 · PROGRESS · Q1–Q12 → 6 · RENEWAL · Year 3+
3-YEAR JOURNEY · ONE RECORD · NO RE-ENTRY
DATA DICTIONARY · SHARED VOCABULARY
FIELD · DEFINITION · VERSIONING
grantee_id · Persistent unique identifier, assigned at LOI · never changes
unique_served · Unique individuals served, de-duplicated by client_id · v2 · 2026
outcome_commit · Stated outcome from signed Logic Model · per cycle
progress_pct · Reported vs. committed, 0–100%, auto-calculated · per quarter
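The dictionary above can be sketched as a typed record. This is a minimal illustration, not Sopact's actual schema — the class and field names simply mirror the table, and the clamped percentage formula is an assumption about how "reported vs. committed" would be auto-calculated.

```python
from dataclasses import dataclass, field

@dataclass
class GranteeRecord:
    """One persistent record per grantee (illustrative field names only)."""
    grantee_id: str                                     # assigned at LOI, never changes
    outcome_commit: dict = field(default_factory=dict)  # committed value per indicator, per cycle
    reported: dict = field(default_factory=dict)        # reported value per indicator, per quarter

    def progress_pct(self, indicator: str) -> float:
        """Reported vs. committed, auto-calculated and clamped to 0-100%."""
        committed = self.outcome_commit.get(indicator)
        if not committed:
            return 0.0
        pct = 100.0 * self.reported.get(indicator, 0) / committed
        return min(pct, 100.0)

# Hypothetical indicator values; only the gr_4f8a2c ID comes from the chapter.
rec = GranteeRecord("gr_4f8a2c",
                    outcome_commit={"placements": 80},
                    reported={"placements": 52})
print(round(rec.progress_pct("placements")))  # 65
```

Because the schema evolves additively, a Year-3 indicator is just a new key in `outcome_commit` — the Year-1 history on the same `grantee_id` is untouched.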
§ 1.6 · Transformation
Stage 04 · Transform · AI READS · CITES · CALIBRATES · FLAGS

Score with
evidence. Not
with vibes.

Every score in grant intelligence comes with a citation — the exact passage from the application that produced the rating. Reviewers shift from cold-reading to verifying. The committee debates outliers, not the obvious.

CommunityBridge Coalition
grantee_id · gr_4f8a2c · application v.1
82 / 100 · ADVANCE
Community Need
18 / 20
"...serving 2,400 unhoused residents in three Census tracts where housing cost burden exceeds 50% (HUD CHAS 2023)..." (p. 4)
Theory of Change
16 / 20
"...case management → stable housing within 90 days → income stability at 12 months..." (p. 6) · Logic chain present but mid-tier outcome gaps
Budget Alignment
12 / 20
⚠ Inconsistency flagged: Narrative claims "5 case managers" (p. 8); budget line shows 3.5 FTE (budget.xlsx row 14)
Org Capacity
20 / 20
990 filing 2024 shows 8 years of consecutive funding cycles · audit clean · staff retention 87% (annual report p. 12)
Evidence & Eval
16 / 20
"...partnered with university to evaluate 2023 cohort outcomes..." (p. 9) · external eval cited, methodology not disclosed
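The scorecard's core rule — no score without a citation — can be expressed as a small data structure. A minimal sketch: the record shape, field names, and the evidence-mandatory check are assumptions for illustration, not Sopact's internal format.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CriterionScore:
    criterion: str
    points: int
    max_points: int
    citation: str   # the exact passage that produced the rating
    source: str     # e.g. "proposal p. 4" or "budget.xlsx row 14"

def total(scores):
    """Sum a scorecard; a score with no citation is treated as invalid."""
    if any(not s.citation.strip() for s in scores):
        raise ValueError("every score must carry a citation")
    return sum(s.points for s in scores), sum(s.max_points for s in scores)

# Two criteria from the CommunityBridge example above, abbreviated.
scores = [
    CriterionScore("Community Need", 18, 20,
                   "serving 2,400 unhoused residents in three Census tracts...", "proposal p. 4"),
    CriterionScore("Budget Alignment", 12, 20,
                   "narrative claims 5 case managers; budget shows 3.5 FTE", "budget.xlsx row 14"),
]
print(total(scores))  # (30, 40)
```

The point of the structure is the workflow it forces: a reviewer verifying a score opens the cited passage instead of re-reading the whole application.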
REVIEWER BIAS · MEAN SCORE BY REVIEWER · COHORT N=347
Reviewer 1 · 64.2
Reviewer 2 · 66.1
Reviewer 3 · 80.4 ⚠
Reviewer 4 · 63.7
Reviewer 5 · 65.4
⚠ Reviewer 3 scoring 15.2% above cohort mean · calibration recommended before final ranking
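A basic version of this bias check is easy to sketch from the mean scores above. The deviation formula and the 10% flagging threshold here are assumptions for illustration — not Sopact's actual calibration method, which may normalize per rubric criterion or application pool.

```python
from statistics import mean

def flag_reviewer_bias(reviewer_means, threshold_pct=10.0):
    """Flag reviewers whose mean score deviates from the cohort mean
    by more than threshold_pct; returns {reviewer: signed deviation %}."""
    cohort = mean(reviewer_means.values())
    return {
        reviewer: round(100.0 * (m - cohort) / cohort, 1)
        for reviewer, m in reviewer_means.items()
        if abs(m - cohort) / cohort * 100.0 > threshold_pct
    }

means = {"Reviewer 1": 64.2, "Reviewer 2": 66.1, "Reviewer 3": 80.4,
         "Reviewer 4": 63.7, "Reviewer 5": 65.4}
print(flag_reviewer_bias(means))  # only Reviewer 3 is flagged
```

Run before final ranking, a flag like this triggers calibration — re-scoring a shared sample of applications — rather than silently letting one reviewer's scale dominate the advance list.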
§ 1.7 · Six reports
Stage 05 · Reports · GENERATED THE NIGHT THE CYCLE CLOSES

Six reports.
Every cycle.
Every grantee.

When the spine produces clean data at every stage, the reports stop being assembly projects. They become outputs — generated automatically the night the cycle closes. The program officer's job shifts from building the report to reading it.

01 · Portfolio · Portfolio Health

Aggregate outcomes across all grantees and cohorts. Which cohorts are delivering, plateauing, or at risk — segmented by program area.

02 · Compliance · Missing Data Alert

Who hasn't reported, what's incomplete, when it was due. Flagged the day a submission goes late — not the week the board deck is due.

03 · Outcome · Progress vs. Promise

Actual outcomes compared against Logic Model commitments. AI synthesizes 60+ narratives into thematic patterns the board can read in 4 hours.

04 · Renewal · Renewal Summary

Every active grantee's status in one view. Auto-generated across all check-ins. The Year-1 grantee record sits next to the Year-3 renewal candidate.

05 · Equity · Fairness Audit

Scoring patterns by reviewer, demographic, and geography. Identifies where reviewer bias may have influenced decisions — before the board sees them.

06 · Board · Board Report

Executive program summary with top performers, risks, and renewal recommendations. Evidence-backed narrative pulled from the persistent record.

The work moves from assembly to reading

Three weeks of board-deck assembly. Replaced by overnight generation, sortable filters, and a live URL the board can open before the meeting. Your program officers stop being report-builders and start being report-readers.

§ 1.8 · Worked example
Chapter 01 · §1.8

A community foundation,
one spring cycle.

A regional community foundation. $1.8M across three program areas. 347 LOIs. 4 weeks from intake close to board meeting. Watch the same lifecycle in motion — one persistent record per applicant, from first form to year-3 renewal.

347 · LOIs received · across 3 programs
31 · Awards made · over $1.8M total
3 · Year term · 6 reporting milestones
6 · Intelligence reports · auto-generated
1 · Week 1 · LOIs land · scored overnight

347 LOIs close on Friday. Sopact reads every page of every attachment over the weekend — applications scored against the foundation's anchored rubric with citation evidence per criterion. Monday morning: 210 clear non-advances, 97 borderline, 40 clear advances. Reviewers focus on the 97.

Output: Ranked pool with citation trails · bias report flagging Reviewer 3 · 23 budget-vs-narrative inconsistencies surfaced
2 · Week 3 · Committee · packet is a live URL

Committee gets a sortable, filterable, citation-linked packet — not a PDF. Members read in advance. The meeting itself debates 14 borderline cases. The other 83 advances are accepted by exception. Decision rationales logged inline, tied to applicant_id.

Output: 31 awards · 12 waitlist · 304 declines with feedback · all decisions carry citation lineage forward
3 · Month 2 · Onboarding interview · Logic Model signed

90-minute Zoom call with each new grantee. Transcript auto-captured. Application context (theory of change, proposed metrics, budget questions) on one side; grantee voice on the other. Out comes a signed Logic Model — the scoring template for every check-in for the next 36 months.

Output: 31 signed Logic Models · 6 milestone definitions per grantee · baseline data captured before implementation begins
4 · Quarter 3 · Progress reports auto-scored against commitments

Q1 progress reports land. Sopact reads each one against that grantee's Logic Model. Quantitative metrics matched to indicators. Narrative scored against stated outcomes. Progress vs. Promise assembled automatically. 6 late reports flagged the day they're due — not at board-week.

Output: Per-grantee progress dashboard · early-warning flags on 4 underperformers · staffing surfaced as cross-portfolio barrier
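The early-warning flags in this step can be sketched as a portfolio-wide sweep over commitments vs. reports. A minimal sketch under stated assumptions: the grantee IDs other than gr_4f8a2c are hypothetical, and the 60% floor is an invented threshold, not a Sopact default.

```python
def early_warning(commitments, reported, floor_pct=60.0):
    """Return grantees whose quarter-to-date progress falls below
    floor_pct of their Logic Model commitment."""
    flags = {}
    for gid, committed in commitments.items():
        pct = 100.0 * reported.get(gid, 0) / committed
        if pct < floor_pct:
            flags[gid] = round(pct, 1)
    return flags

# Committed vs. reported placements; two IDs below are hypothetical.
commitments = {"gr_4f8a2c": 80, "gr_9b1d3e": 120, "gr_2c7f5a": 60}
reported    = {"gr_4f8a2c": 52, "gr_9b1d3e": 40, "gr_2c7f5a": 58}
print(early_warning(commitments, reported))  # {'gr_9b1d3e': 33.3}
```

Because every check-in inherits its grantee's Logic Model commitments, the sweep runs the day reports land — which is how underperformers surface in quarter 3 instead of at board week.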
5 · Year 3 · Renewal decisions · selection-to-outcome linkage

Renewal cycle opens. Each renewal candidate's Year-3 outcome record sits next to their original Year-1 application — same grantee_id. The board can finally answer: which application characteristics predicted strong outcomes? The rubric calibrates itself on the next cycle's anchors.

Output: Predictive selection signal · renewal recommendations · rubric weights updated for next cycle based on outcome correlation
§ 1.9 · In the wild
Chapter 01 · §1.9

Five grant programs.
One spine.

Every foundation shape uses the same 5-stage spine. Only the rubric, the cadence, and the renewal arc change. Read the archetype closest to yours — the rest will still rhyme.

Family foundation · low volume, high context
20–80 grants · multi-year · relationship-driven
The spine here. Small cohort. Rubric is loose by design. The signal lives in interview depth, not application volume. Logic Model built collaboratively — often co-authored with the grantee.
Where intelligence wins. Portfolio rollup that the family principal can read in 20 minutes. Cross-grantee patterns surfaced across 5+ years. Renewal decisions backed by outcome trajectory, not memory of last quarter's call.
Community foundation · multi-program portfolio
100–500 grants · 3–8 program areas · annual cycle
The spine here. Different rubric per program area — health, education, climate, arts. Same grantee_id arc across all. Reviewer pools sometimes overlap across programs; bias detection runs per program and across them.
Where intelligence wins. Cross-program insights — which grantees are receiving from multiple programs, which program areas produce the strongest outcomes per dollar. Board sees one dashboard across 4 programs, not 4 separate decks.
Corporate giving · CSR & employee programs
300–1,000 grants · matching gifts · volunteer hours
The spine here. Multi-program complexity — community grants, employee scholarships, volunteer awards, matching gifts. Each program has its own application form but they all share the same stakeholder ID chain. Benevity-style platforms handle disbursement; intelligence layer runs on top.
Where intelligence wins. Board ESG reporting that ties program outputs back to corporate strategy. Cross-program rollup showing total community impact, not 6 separate reports per CSR vertical.
Government / public agency · regulated grantmaking
High audit · multi-year · formal compliance
The spine here. Strong audit trail required at every stage. Every score change requires timestamped rationale. PII redaction and evidence packs for public hearings. Compliance is structural, not theatrical.
Where intelligence wins. Persistent record IS the audit trail. Every rubric application, every score change, every renewal decision lineage is reproducible. Compliance officers stop assembling packets and start verifying them.
Multi-program nonprofit · grants alongside programs
Re-grant + direct service · shared stakeholders
The spine here. Some grantees are also participants in your direct programs. The same stakeholder may appear as a sub-grantee, a workshop attendee, and a survey respondent. One ID chain holds all three roles.
Where intelligence wins. The integration most platforms can't do: a stakeholder's grant outcomes and their direct-service outcomes on one record. Cross-program intelligence ties funded work to delivered work.
§ 1.10 · The library
Chapter 01 · §1.10

Adjacent books
in the library.

Grant Intelligence is one of the library's five industry guides. Every book sits on the same 5-stage spine and shares the same persistent-record architecture. Read the foundational book first if you haven't.

Book 01 · Foundation
Beyond the Survey
The methodology spine. Six chapters that teach you to think about impact data as a workflow, not a survey. Required reading before any industry guide.
Book 04 · Industry guide
Impact Intelligence
Portfolio outcomes against IRIS+ and the 5 Dimensions of Impact. Document intelligence on ESG disclosures. Closely adjacent to Book 03 — many foundations and impact investors share LPs and grantees.
Book 05 · Industry guide
Training Intelligence
Pre/post measurement, cohort tracking, wage-gain follow-up. Useful for grantmakers whose portfolio includes workforce programs — the beneficiary-level outcome data flows back into your grant intelligence.
Book 02 · Industry guide
Application Management
Scholarships, fellowships, accelerators, pitch competitions. The same lifecycle architecture as grant intelligence — just a different rubric and a different downstream cohort.
§ 1.11 · Sopact Sense + Skills
Chapter 01 · §1.11

The operating system,
and the playbooks.

The methodology in this chapter isn't theoretical — it runs end-to-end on Sopact Sense, and the grant-specific Skills compound it. Two cycles in, the platform knows your rubric vocabulary. Five cycles in, it knows your portfolio.

Sopact Sense

The platform · where the 5-stage spine actually runs end-to-end

  • Stage 1 · Data · Contacts, Forms, Documents — every application, attachment, transcript, and progress report on one record.
  • Stage 2 · Framework · Logic Model builder · interview-driven · scoring rubric becomes the indicator dictionary.
  • Stage 3 · Dictionary · Persistent grantee_id · indicator versioning · commitment tracking across cycles.
  • Stage 4 · Transform · Intelligent Cell / Row / Column / Grid — scoring with citations, bias detection, theme extraction.
  • Stage 5 · Reports · Six intelligence reports — generated the night the cycle closes, no consultant required.

Skills

Pre-packaged playbooks that turn grant management expertise into platform behavior

  • { } application-rubric-scorer
    Anchors your rubric criteria to evidence and runs the scoring with citation trails per dimension.
  • { } logic-model-builder
    Reads the application + interview transcript, drafts the signed Logic Model, locks the indicator dictionary.
  • { } progress-vs-promise
    Reads every progress report against Logic Model commitments · early-warning flags · cross-portfolio themes.
  • { } board-report-composer
    Generates the six intelligence reports · pulls from the persistent record · ready the night cycle closes.

Skills run inside Sopact Sense. They aren't shipped as standalone files — they come with the platform, kept current by Sopact.

Why this compounds

Cycle 1 teaches Sense your rubric vocabulary and your reviewer patterns. Cycle 2 inherits both — and adds the Logic Model baselines from the new awards. By cycle 5, your grantmaking selects the applicants your last five cohorts have quietly been pointing you toward.

§ 1.12 · Recap
Chapter 01 · §1.12

Six takeaways
from the lifecycle.

1 · Every document is data.

LOI, narrative, budget, audit, interview, progress report — all of it scores.

2 · Logic Model at interview.

The framework gets built in the 90-min call. Not as a template before. Not as notes after.

3 · One grantee_id, every cycle.

LOI through year-3 renewal — the record never resets. Schema evolves additively.

4 · Score with evidence.

Every AI score cites the passage that produced it. Reviewers verify; the committee debates outliers.

5 · Six reports, automatically.

Portfolio, Missing Data, Progress vs. Promise, Renewal, Fairness, Board — all overnight.

6 · Don't rip and replace.

Sopact adds the intelligence layer on top of Foundant, Submittable, or Fluxx. Disbursement stays where it is.

UP NEXT · CHAPTER 02

Application Review.

The Anchor Deficit in depth. How to write evidence-anchored rubrics. How AI scores 500 applications overnight with citations. How bias detection actually works inside a panel review.

End of Chapter 01 · Grant Intelligence

Stop reading reports.
Start understanding grantees.

One persistent record. Six lifecycle stages. Five methodology stages. The architecture is what produces the intelligence — and now you have it.

BOOK 01 · Beyond the Survey · Foundation
BOOK 02 · Application Management · Industry guide
BOOK 03 · Grant Intelligence · you are here
BOOK 04 · Impact Intelligence · Industry guide
BOOK 05 · Training Intelligence · Industry guide
BOOK 06 · Nonprofit Programs · Industry guide

"The award letter is the handoff, not the end of the record. Every cycle compounds the next one."

THE SOPACT INTELLIGENCE LIBRARY · 2026