From the first DD document to year-seven exit — one investee record that compounds, not resets.
Seven chapters on how impact funds, foundations, and ESG teams move from one-off scoring to scoring that compounds. Same data architecture from due diligence through exit.
Each chapter is published as a stand-alone PDF and as a sub-thread inside this book. The whole book reads in roughly two hours. The chapters are self-contained.
A pitch deck read once at IC and filed forever. A founder interview summarized into a memo and reopened never. Quarterly updates that never connect back to the investment thesis. The framework is not the problem. The architecture is.
The LP deck is due Monday. The IC meeting is Thursday. There are 40 investee folders, none of them have been read since DD, and the analyst who did that DD left six months ago. This is the part nobody includes in the pitch.
Every impact-fund team has lived the same week. Open the DD folder from fourteen months ago. Re-read 80 pages of Q1, Q2, Q3 narrative submissions that nobody read in sequence. Merge metrics into the LP template, except the column names changed between Q1 and Q3 and two investees still use the old version. Find — Friday afternoon — a risk signal that was in the Q2 narrative on page 7, six months ago, when there was still time to act on it.
The problem is not effort. Impact teams are some of the most disciplined people in finance. The problem is that every tool in the standard stack treats due diligence, onboarding, and reporting as separate activities — and resets context at each stage. The intelligence generated during a 90-minute founder interview vanishes the moment capital is deployed.
The investment lifecycle has three operational stages — DD, onboarding, and the quarterly loop — and they map cleanly onto the five-stage spine that runs through every book in this library.
The spine is the data architecture. The lifecycle is the workflow. Read across: at DD, Effective Data flows into a Five-Dimensions framework. At onboarding, the framework is anchored in a Data Dictionary tied to one persistent investee ID. From there, every quarterly submission becomes a Transformation that feeds the Reports — six per investee, every quarter, with the Year-7 exit memo already assembling itself in the background.
investee_id from the first DD document to the year-seven exit memo. The architectural requirement is not new metrics. It is a single record that survives every analyst handoff, every quarterly cycle, every template change. Sopact Sense issues an ID at the first uploaded document and uses it forever. The score is one signal. The compounded record is the answer.
Fifty to two hundred documents per investee, none of them designed to connect to what comes next. The work is not collection. The work is making collection structured at the moment of intake.
Pacific Community Ventures' DD guide describes three common approaches funds use: narrative descriptions of expected impact, structured DDQ questionnaires, and quantitative scoring tools. Each captures a different shape of evidence. None of them, on their own, connect to monitoring. The four buckets below are how Sopact Sense organizes the DD intake so that every field is queryable from day one.
Sixteen numbers in a deck — workshops delivered, hours logged, dollars disbursed, participation rate. The CFO nods, approves the next quarter's budget, and changes nothing about how capital gets allocated. The team calls this performance measurement. It is not. It is a faithful record of what happened, dressed in the vocabulary of performance. The fix is making outcome data — not activity counts — the default at intake.
The Five Dimensions of Impact are the consensus framework — What, Who, How Much, Contribution, Risk. Every impact fund references them. Almost none has the data infrastructure to score against all five with cross-portfolio consistency.
The framework was built between 2016 and 2020 by the Impact Management Project, with more than 2,000 organizations contributing. Its power is in the consensus, not the novelty — it gave the field a shared question structure so that different funds could describe impact in commensurable terms. What it does not do, by itself, is force a fund to actually score against the dimensions instead of organizing report sections under them.
Adopting the Five Dimensions as headers in the LP report rather than as a scoring architecture. The dimensions organize how content is presented; they do not change how investees are selected, monitored, or compared. Section headers are free. A rubric anchored to all five dimensions, with evidence citations and weights aligned to the fund's thesis, is infrastructure.
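That distinction can be made concrete. Below is an illustrative sketch, not Sopact's actual schema, of the Five Dimensions as a scoring structure rather than section headers: each dimension carries a thesis-aligned weight and a citation back to the evidence supporting its score. Every weight, score, and citation here is invented.

```python
# Five Dimensions as a rubric-with-evidence, not as report headers.
# Weights are aligned to the fund's thesis and must sum to 1.0;
# every score carries a citation back to its source passage.
RUBRIC = {
    "what":         {"weight": 0.25, "score": 4, "cite": "DD pack p.3, outcome definition"},
    "who":          {"weight": 0.20, "score": 3, "cite": "Lean Data survey, baseline"},
    "how_much":     {"weight": 0.20, "score": 4, "cite": "Q-report, scale table"},
    "contribution": {"weight": 0.20, "score": 3, "cite": "Founder interview, 00:41:12"},
    "risk":         {"weight": 0.15, "score": 2, "cite": "DD memo p.11, participation risk"},
}

# Section headers are free; a weighted, citable score is infrastructure.
assert abs(sum(d["weight"] for d in RUBRIC.values()) - 1.0) < 1e-9
weighted_score = sum(d["weight"] * d["score"] for d in RUBRIC.values())
print(round(weighted_score, 2))  # composite DD score, auditable per dimension
```

Asking "why a 3 on Contribution?" then resolves to the `cite` field, not to a number someone typed in a spreadsheet.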
The second framework artifact, built during onboarding, is the Living Theory of Change. Most TOCs are drawn once during DD and never updated. A living TOC is a hypothesis that evolves with evidence — built collaboratively from the DD documents plus a 90-minute investee interview, and revised every quarter as new submissions arrive.
Persistent IDs at first contact are the architectural requirement. The Who at intake has to be the same Who at follow-up. Without it, Duration is reconstruction, Risk is a paragraph that never returns, and the LP report rebuilds itself from scratch every cycle.
| Field | Source | IRIS+ / Dim | Cadence |
|---|---|---|---|
| investee_id | Issued at first DD upload | — | Persistent · forever |
| outcome_definition | Founder interview · ToC | D1 · What | Set at onboarding · review Y2, Y5 |
| beneficiary_demographic | Lean Data survey | D2 · Who · PI2364 | Baseline + every Q4 |
| scale_per_quarter | Investee Q-report | D3 · Scale · OI1234 | Every quarter · auto-reconciled |
| contribution_narrative | Stakeholder voice survey | D4 · Contribution | Year 1, Year 3, Year 5 |
| risk_category_flag | AI parse of Q-narrative | D5 · Risk | Every submission · pattern-matched |
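One way to picture the dictionary above as a compounding record: a minimal sketch, assuming nothing about Sopact Sense's internals, in which every quarterly submission appends to the same `investee_id` instead of opening a new file. Field names follow the table; everything else is illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class InvesteeRecord:
    """One persistent record per investee, keyed by investee_id."""
    investee_id: str                   # issued at first DD upload, never reissued
    outcome_definition: str = ""       # D1 · What — set at onboarding
    beneficiary_demographic: dict = field(default_factory=dict)  # D2 · Who
    quarterly_submissions: list = field(default_factory=list)    # D3/D5 — appended, never reset

    def add_quarter(self, quarter: str, scale: int, narrative: str) -> None:
        # Quarterly data appends to the same record instead of starting a new file.
        self.quarterly_submissions.append(
            {"quarter": quarter, "scale_per_quarter": scale, "narrative": narrative}
        )

rec = InvesteeRecord(
    investee_id="REGN-014",
    outcome_definition="Regenerative yield gains for outgrower farmers",
)
rec.add_quarter("2024-Q1", scale=1240, narrative="Cooperative payments on schedule.")
rec.add_quarter("2024-Q2", scale=1315, narrative="Delayed cooperative payments to outgrowers.")
assert len(rec.quarterly_submissions) == 2  # the record compounds; it never resets
```

The design choice the table encodes is visible here: a template change renames a key inside one submission, but the `investee_id` spine and the accumulated list survive it.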
Below: a working DD assessment for a fictional clean-energy investee. Five Dimensions scored from the DD pack, each criterion linked back to the source passage that supports it. Ask "why a 3 on Contribution?" and the answer is a sentence from the founder interview, not a number someone typed in a spreadsheet.
Every citation tag links to the exact source paragraph in the document pack. The IC reviewer can audit any score in seconds — and the score itself becomes the baseline against which every Q-narrative is reconciled for the next seven years.
The reports below are not dashboards. They are publication-ready narratives, each one synthesized from the connected record. The DD scores set the baseline. The quarterly submissions accumulate against it. By 8am after the quarter closes, six reports per investee are sitting in the team's queue — every claim traced to source, every trend grounded in longitudinal data.
- **DD + every quarter:** Structured assessment across all five rubric dimensions with evidence-linked scores and trend indicators vs DD baseline.
- **On document upload:** Auto-flagged data gaps, contradictions between claims and evidence, emerging risk patterns matched to DD risk taxonomy.
- **Before each IC:** Everything the investment committee needs: thesis validation, key metrics, open questions, recommended actions.
- **Quarterly · annually:** Publication-ready impact narrative for LP reports — synthesized from the full investee record with source citations.
- **Annually · on demand:** Multi-year trajectory per indicator. Compounding progress, emerging concerns, cohort-level signal across the portfolio.
- **At exit · fund close:** Complete impact record from entry to exit. Ready for LP close-out, case studies, and future-fund fundraising.

Because every quarterly submission attaches to the same investee_id, the exit memo is not a project. It is a query. "What was committed at DD, what was delivered each quarter, what changed, and why." The audit-grade chain — commitment to verified outcome — is already there the day the term sheet expires.
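"The exit memo is a query" can be sketched in a few lines. The records and field names here are invented; the point is only that when every submission carries the same `investee_id`, the DD-to-exit chain is a filter plus a sort, not a reconstruction project.

```python
# Toy store of submissions across the fund. In a real system this would
# be a database table; a list of dicts is enough to show the query shape.
submissions = [
    {"investee_id": "REGN-014", "quarter": "2019-Q4", "stage": "dd",
     "note": "Commitment: 1,000 outgrower farmers by Year 3"},
    {"investee_id": "REGN-014", "quarter": "2021-Q2", "stage": "quarterly",
     "note": "1,120 outgrowers verified"},
    {"investee_id": "OTHR-003", "quarter": "2021-Q2", "stage": "quarterly",
     "note": "Unrelated investee"},
]

def exit_memo(records: list[dict], investee_id: str) -> list[dict]:
    """Everything ever attached to one investee_id, in chronological order."""
    chain = [r for r in records if r["investee_id"] == investee_id]
    return sorted(chain, key=lambda r: r["quarter"])

memo = exit_memo(submissions, "REGN-014")
# Commitment-at-DD through verified-outcome, already in sequence.
assert [r["stage"] for r in memo] == ["dd", "quarterly"]
```

When the record resets at each stage, this same memo requires re-reading every folder by hand; when it compounds, the chain is one filter.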
47 entities. 31 investees and 16 supply-chain partners. Agriculture, infrastructure, education, clean water. A seven-year fund running its second cycle on the compounding record. Here is what year zero through year seven actually looks like.
Sustainability Fund VII opens its first DD on a regenerative-agriculture investee. The fund's rubric — anchored to all five dimensions, weighted by thesis — is generated from the partnership manager's stated focus. AI reads the 138-page pack overnight, scores each criterion with citation trails, and routes the file to two analysts for review.
Capital deploys. The 90-minute founder interview is recorded and synthesized with the DD pack. Both parties sign off on a 14-indicator Data Dictionary aligned to IRIS+. The investee_id (REGN-014) is now the spine of the record.
The Q6 narrative mentions "delayed cooperative payments to outgrower farmers." AI pattern-matches against the DD risk taxonomy — flagged as participation risk, identified at DD memo p.11. The IC sees it eight weeks before metrics confirm a drop in farmer retention.
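A toy sketch of that pattern-match step. A production system would use NLP rather than keyword lookup, and the taxonomy below is invented; the sketch only shows the data flow from DD risk taxonomy to quarterly-narrative flag.

```python
# DD risk taxonomy: risk categories mapped to signal phrases identified
# during due diligence. All categories and phrases here are illustrative.
RISK_TAXONOMY = {
    "participation": ["delayed cooperative payments", "farmer retention", "dropout"],
    "supply":        ["input shortage", "logistics delay"],
}

def flag_risks(narrative: str) -> list[str]:
    """Return every risk category whose signal phrases appear in a narrative."""
    text = narrative.lower()
    return [category for category, phrases in RISK_TAXONOMY.items()
            if any(phrase in text for phrase in phrases)]

q6 = "Q6 update: delayed cooperative payments to outgrower farmers."
assert flag_risks(q6) == ["participation"]  # flagged before metrics confirm the drop
```

The value is not the matcher; it is that the taxonomy built at DD is still attached to the record at Q6, so the narrative has something to be matched against.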
Across 20 quarters of REGN-014 plus 46 other entities, the LP report rolls up automatically. Every claim is traced to a source paragraph; every cross-portfolio trend is grounded in longitudinal data. Three analyst-days of work, not three weeks.
REGN-014 exits via secondary sale. The exit impact summary — commitments at DD vs verified outcomes across seven years — is already assembled. It becomes a case study in the Fund VIII fundraise deck. The Living ToC, originally drafted in Year 0, has been revised six times. Year 8 starts with a stronger thesis than Year 0 ended on.
Impact funds are not a monoculture. A climate-tech VC and a corporate ESG team share the same data architecture but pull on different fields. Here is what the same compounding record looks like in five different organizations.
Series A/B specialists in distributed energy, agtech, mobility. SDG 7 / 13 weighted. Founder interviews carry most of the Contribution evidence.
Microfinance, embedded finance, SME lending. Lean Data stakeholder voice is core. Persistent customer IDs matter for Depth and Duration.
Program-Related Investments alongside grant-making. Cross-references with Book 03 grantees. ToC is the bridge between grant and investment dollars.
IFC-style with AIMM-class architecture. Multi-billion AUM. Connects front-end diagnostics to ex-post evaluation natively. Big rubrics, longer cycles.
Procurement teams under CSDDD pressure. 200–500 supplier records, worker-voice surveys at scale, corrective-action follow-through. The compounding record turns into the audit-grade effectiveness chain — commitment at intake to mid-cycle evidence to corrective action to re-verification.
The Sopact Intelligence Library is one architecture, six industry guides. What you read here applies sideways. The same five-stage spine runs through every book — only the lifecycle and the field names change.
The five-stage spine in full — Effective Data, Framework, Dictionary, Transformation, Reports. The architecture this chapter sits on.
Same compounding architecture, applied to foundations: grantee_id instead of investee_id; Logic Model instead of Living ToC.
Persistent participant_id from intake through 18-month follow-up. The Lean Data pattern that funds in Book 04 expect to see at portfolio companies.
Reviewer rubrics with observable anchors, bias detection, citation trails. The pattern under DD scoring in this book, generalized to any application pipeline.
Everything in this chapter runs on Sopact Sense — the data origin platform with a persistent ID at first contact. Skill files are the small Markdown recipes that turn the platform into a Five-Dimensions scorer, a Living ToC builder, or an LP-narrative composer. We don't distribute them; we author them with you in the first 60 minutes.
Data origin platform — not a downstream aggregator. Persistent IDs at first DD upload, structured collection at intake, AI analysis at submit, six reports per investee overnight.
Four skill files cover most impact-fund work. We co-write them with your team and your rubric — not generic templates.
A skill file is small — usually one or two pages of Markdown. The platform gives the skill an investee_id, a Living ToC, and 24 quarters of submissions to read against. Every quarter the platform gets smarter. The skill file stays the same. Sopact's job is to make the architecture invisible enough that the framework finally runs as code, not as decoration.
Before Chapter 02 (where we open the DD documents themselves), here is the compressed version of what changes when the record compounds instead of resets.
Every fund references the Five Dimensions. The gap is between framework adoption and framework operation. Architecture closes the gap.
A persistent investee_id at first DD upload is the single architectural change that makes Duration, Risk, and Contribution operationally tractable.
Dimensions as report section headers are decorative. Dimensions as a rubric with citations and weights aligned to thesis are infrastructure.
The Living Theory of Change is built at onboarding from DD pack + founder interview — and revised every quarter as submissions arrive.
Six reports per investee per quarter generate overnight when the record compounds. LP narratives are a roll-up, not a rebuild.
By Year 7 the audit-grade chain — commitment to verified outcome — already exists. Fund VIII fundraising starts with evidence, not anecdotes.
Not the end of a file. Not the close of an analysis. The start of the record that will outlive the analyst, the cycle, and the fund.
"The score is a snapshot. The record compounds. By the time the audit arrives, the chain is already there."