Example Report
Assessment Report Template
Below are two mock assessment reports — one for a Product & Tech Team and one for a Professional Services firm — showing the structure and depth of a typical engagement output.
Approach
The assessment ran over three weeks following a structured discovery & planning methodology. Our approach combines qualitative interviews, process mapping, and quantitative flow metrics to build a complete picture of how teams work today — and where the highest-impact improvements sit.
Week 1 — Team Discovery
- 1-to-1 interviews with all 6 squad leads and 3 engineering managers
- Shadow sessions observing refinement, stand-ups, and retros
- PDLC artefact review (backlogs, PRDs, acceptance criteria)
Week 2 — Leadership & AI Readiness
- Interviews with VP Product, CTO, and Head of Design
- AI readiness workshops — tool landscape, risk appetite, current usage
- Survey across all product & engineering (44 respondents)
Week 3 — Analysis & Report
- Process flow mapping: idea → production for all squads
- Maturity scoring across 6 dimensions
- Findings synthesis, root-cause analysis, and roadmap design
Current State Flow
We mapped the end-to-end product delivery flow from idea intake through to production release and feedback. Bottlenecks and pain points are highlighted in the flow below.
Findings & Root Causes
Inconsistent delivery practices
Each squad runs a different variant of Scrum. No shared definition of done, no common backlog hygiene standards.
Slow cycle time (11-day avg)
Work sits in "In Review" or "Ready for QA" for 3–4 days on average before progressing.
AI adoption near zero
Two engineers use Copilot informally. No team-level AI tooling, no policy, no structured evaluation of use cases.
Discovery disconnected from delivery
Product managers run discovery in isolation. There is no dual-track cadence, so discovery outputs land in delivery as large, under-defined epics.
Limited metrics & visibility
No team-level dashboards. Leadership relies on weekly status updates written manually by squad leads.
Release gating by ops
All releases go through a single ops engineer who runs manual deployment checklists. Max 1 release/week per squad.
Maturity Assessment
Each dimension is scored 1–5 based on interview evidence, artefact review, and survey data. The spider chart shows the current-state baseline; the target state (dashed) represents a realistic six-month goal.
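The scoring mechanics can be sketched as a simple aggregation: each dimension collects several 1–5 ratings (one per evidence source), which are averaged into the baseline plotted on the chart. The dimension names and sample ratings below are illustrative assumptions, not the client's actual data.

```python
from statistics import mean

# Illustrative dimension names (assumed, not taken from the report).
DIMENSIONS = [
    "Ways of Working", "Delivery Flow", "AI Adoption",
    "Discovery Practice", "Metrics & Visibility", "Release Engineering",
]

def score_dimensions(ratings: dict[str, list[int]]) -> dict[str, float]:
    """Average the 1-5 ratings per dimension, rounded to one decimal place."""
    return {dim: round(mean(ratings[dim]), 1) for dim in DIMENSIONS}

# Hypothetical ratings: one score per evidence source
# (interviews, artefact review, survey).
ratings = {
    "Ways of Working":      [2, 3, 2],
    "Delivery Flow":        [2, 2, 1],
    "AI Adoption":          [1, 1, 1],
    "Discovery Practice":   [2, 2, 3],
    "Metrics & Visibility": [1, 2, 2],
    "Release Engineering":  [1, 1, 2],
}
baseline = score_dimensions(ratings)
```

Keeping one number per evidence source makes disagreements visible before they are averaged away, which is useful when interview evidence and survey data diverge.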
Recommendations
Prioritised by impact and feasibility. Quick wins first, then structural changes.
Standardise definition of done & backlog hygiene
Align all 6 squads on a shared DoD, story format, and refinement checklist. Estimated: 1–2 sprints.
Introduce WIP limits and flow metrics dashboards
Configure Jira boards with WIP limits and deploy a shared dashboard (cycle time, throughput, age). Estimated: 1 sprint.
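The three dashboard metrics named above reduce to simple date arithmetic over per-item start and completion dates, such as an export from Jira. This is a minimal sketch; the field names and sample data are assumptions, not a real Jira API.

```python
from datetime import date

# Hypothetical work items: started/done dates as exported from a board.
items = [
    {"key": "SQ-101", "started": date(2024, 5, 1), "done": date(2024, 5, 12)},
    {"key": "SQ-102", "started": date(2024, 5, 3), "done": date(2024, 5, 13)},
    {"key": "SQ-103", "started": date(2024, 5, 6), "done": None},  # still in flight
]

def cycle_time_days(items) -> float:
    """Average days from start to completion, over finished items only."""
    durations = [(i["done"] - i["started"]).days for i in items if i["done"]]
    return sum(durations) / len(durations)

def throughput(items, since: date, until: date) -> int:
    """Number of items finished inside the reporting window."""
    return sum(1 for i in items if i["done"] and since <= i["done"] <= until)

def work_item_age_days(items, today: date) -> dict[str, int]:
    """Age of each unfinished item: the early-warning signal WIP limits protect."""
    return {i["key"]: (today - i["started"]).days for i in items if not i["done"]}
```

Publishing these three numbers per squad replaces the manually written weekly status updates with figures leadership can read at a glance.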
Deploy AI coding assistant across all squads
Roll out GitHub Copilot with usage policy, governance guardrails, and squad-level AI champions. Estimated: 2–3 sprints.
Establish dual-track discovery cadence
Create discovery playbook; reallocate PM capacity (target 1 PM : 2 squads max); introduce weekly discovery sync. Estimated: 2–4 sprints.
Automate CI/CD and remove release gating
Build pipeline automation, feature flags, and automated smoke tests to enable continuous deployment. Estimated: 4–6 sprints.
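Feature flags are the piece that makes removing the ops gate safe: code ships dark, then is enabled per cohort without a new deployment. A minimal sketch of percentage rollout follows; the flag names and rollout rule are illustrative assumptions, not a specific flag product.

```python
import hashlib

# Hypothetical flag registry: each flag can be off, on for everyone,
# or on for a deterministic percentage of users.
FLAGS = {
    "new-checkout": {"enabled": True, "rollout_pct": 25},  # gradual rollout
    "ai-review":    {"enabled": False, "rollout_pct": 0},  # shipped dark
}

def is_enabled(flag: str, user_id: str) -> bool:
    """Deterministically bucket a user into 0-99 and compare to the rollout %."""
    cfg = FLAGS.get(flag)
    if not cfg or not cfg["enabled"]:
        return False
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < cfg["rollout_pct"]
```

Hashing the flag name together with the user ID keeps each user's bucket stable per flag, so a user who sees a feature keeps seeing it as the rollout percentage grows.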
AI-assisted QA and document generation
Implement AI-powered test generation, PRD drafting, and acceptance criteria writing across the PDLC. Estimated: 4–8 sprints.
AI Implementation Approach
AI is embedded across the roadmap — not as a separate initiative, but woven into each phase of ways-of-working improvement. The approach follows a crawl → walk → run model aligned to team readiness.
Foundation & Governance
Establish AI usage policy and security guardrails. Deploy Copilot to 2 pilot squads. Train all engineers on prompt patterns and safe use. Appoint squad-level AI champions.
Expand & Integrate
Roll Copilot to all 6 squads. Introduce AI-assisted code review and PR summarisation. Begin AI-generated acceptance criteria pilot. Launch fortnightly AI upskilling sessions.
Advanced Use Cases
Deploy AI-assisted test generation in CI pipeline. Automate PRD and technical spec drafting. Implement AI-powered discovery insights (user research summarisation, competitor analysis). Measure adoption metrics and iterate.
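One way the acceptance-criteria pilot might be framed: structured story context in, Given/When/Then criteria out. The prompt wording below is an assumption, and the model call itself is deliberately left out so the scaffolding stays provider-agnostic.

```python
# Sketch of prompt construction for AI-generated acceptance criteria.
# The template text is hypothetical; plug the resulting prompt into
# whichever LLM API the governance policy approves.
def build_criteria_prompt(story: dict) -> str:
    return (
        "You are helping a product squad write acceptance criteria.\n"
        f"User story: {story['summary']}\n"
        f"Context: {story['context']}\n"
        "Write 3-5 acceptance criteria in Given/When/Then format. "
        "Flag any ambiguity in the story rather than guessing."
    )

# Hypothetical story taken from a refinement session.
story = {
    "summary": "As a customer, I can save my cart and resume later.",
    "context": "Carts currently expire after 30 minutes; logged-in users only.",
}
prompt = build_criteria_prompt(story)
```

Asking the model to flag ambiguity rather than guess keeps the human refinement conversation in the loop, which matters while trust in the tooling is still being built.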
Implementation Roadmap
Phased delivery over 8 weeks (4 × 2-week sprints), structured to demonstrate value early and build momentum. This aligns with the sample roadmap template.
| Workstream | Sprint 1 W1–W2 | Sprint 2 W3–W4 | Sprint 3 W5–W6 | Sprint 4 W7–W8 |
|---|---|---|---|---|
| Ways of Working & PDLC | Standardise DoD, backlog hygiene, squad ceremonies | Dual-track discovery cadence; refinement playbook rollout | Cross-squad retros; continuous improvement habits | |
| AI Enablement | AI policy & governance; Copilot pilot (2 squads) | Copilot full rollout; AI code review pilot | AI test generation; PRD drafting assistant | Measure & iterate; advanced use cases |
| Metrics & Visibility | Jira standardisation; WIP limits | Flow metrics dashboards; team health checks | Leadership reporting; quarterly review rhythm | |
| Coaching & Upskilling | Squad lead coaching; AI literacy sessions | Wave 1 team coaching; AI prompt workshops | Ongoing 1-to-1 coaching; community of practice; train-the-trainer | |
→ View the full interactive roadmap to see how phases, workstreams, and sprints are structured.
Want a report like this for your organisation?
Every assessment is tailored to your context — your teams, your industry, your goals. A short conversation is all it takes to get started.