Example Report

Assessment Report Template

Below are two mock assessment reports — one for a Product & Tech Team and one for a Professional Services firm — showing the structure and depth of a typical engagement output.


Mock Client

Acme Digital

Mid-size SaaS company · 6 product squads · ~80 engineers · Series B

01

Approach

The assessment ran over three weeks following a structured discovery & planning methodology. Our approach combines qualitative interviews, process mapping, and quantitative flow metrics to build a complete picture of how teams work today — and where the highest-impact improvements sit.

Week 1 — Team Discovery

  • 1-to-1 interviews with all 6 squad leads and 3 engineering managers
  • Shadow sessions observing refinement, stand-ups, and retros
  • PDLC artefact review (backlogs, PRDs, acceptance criteria)

Week 2 — Leadership & AI Readiness

  • Interviews with VP Product, CTO, and Head of Design
  • AI readiness workshops — tool landscape, risk appetite, current usage
  • Survey across all product & engineering (44 respondents)

Week 3 — Analysis & Report

  • Process flow mapping: idea → production for all squads
  • Maturity scoring across 6 dimensions
  • Findings synthesis, root-cause analysis, and roadmap design

02

Current State Flow

We mapped the end-to-end product delivery flow from idea intake through to production release and feedback. Bottlenecks and pain points are highlighted in the flow below.

  • 💡 Idea Intake: ad-hoc Slack requests, stakeholder emails, no single backlog
  • 📋 Prioritisation: no clear framework; HiPPO-driven (highest-paid person's opinion), inconsistent across squads. [Bottleneck]
  • ✏️ Refinement: varies by squad from 30 min to 2 hrs; inconsistent acceptance criteria
  • 🔨 Build & QA: avg cycle time 11 days; manual testing dominant
  • 🚀 Release: manual deploy process; 1 release/week, gated by the ops team. [Bottleneck]
  • 📊 Feedback: no structured feedback loop back to discovery

03

Findings & Root Causes

Inconsistent delivery practices

Each squad runs a different variant of Scrum. No shared definition of done, no common backlog hygiene standards.

Root cause: No PDLC playbook or squad-level operating model — each lead improvised as the team grew.

Slow cycle time (11-day avg)

Work sits in "In Review" or "Ready for QA" for 3–4 days on average before progressing.

Root cause: Manual testing bottleneck + no WIP limits + PR review round-trips averaging 2.3 cycles.
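The 3–4-day queue claim is measurable directly from the Jira changelog: sum the gaps between an issue entering a queue status and its next transition. A minimal sketch with invented events; the row layout is an assumption about the export format, not Acme's actual data:

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Illustrative changelog rows: (issue_key, status_entered, timestamp).
# In practice these come from the Jira issue changelog export.
events = [
    ("ACME-101", "In Progress",  datetime(2024, 3, 1, 9)),
    ("ACME-101", "In Review",    datetime(2024, 3, 4, 14)),
    ("ACME-101", "Ready for QA", datetime(2024, 3, 8, 10)),
    ("ACME-101", "Done",         datetime(2024, 3, 11, 16)),
]

def queue_time(events, status):
    """Time each issue spent in `status`, from entering it to the next transition."""
    per_issue = defaultdict(list)
    for key, st, ts in events:
        per_issue[key].append((ts, st))
    waits = {}
    for key, transitions in per_issue.items():
        transitions.sort()  # order by timestamp
        for (ts, st), (next_ts, _) in zip(transitions, transitions[1:]):
            if st == status:
                waits[key] = waits.get(key, timedelta()) + (next_ts - ts)
    return waits

print(queue_time(events, "In Review"))
```

Run per status and per squad, this turns an interview anecdote into a trend that can be tracked as the WIP-limit and review-process changes land.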

AI adoption near zero

Two engineers use Copilot informally. No team-level AI tooling, no policy, no structured evaluation of use cases.

Root cause: Leadership unsure where to start; no governance framework; perceived IP/security risk with no clear guidelines.

Discovery disconnected from delivery

Product managers run discovery in isolation. No dual-track cadence — discoveries land as large, undefined epics.

Root cause: No discovery playbook. PM capacity stretched across too many squads (1 PM : 3 squads).

Limited metrics & visibility

No team-level dashboards. Leadership relies on weekly status updates written manually by squad leads.

Root cause: Jira configured inconsistently across squads; no data engineering capacity to build reporting.

Release gating by ops

All releases go through a single ops engineer who runs manual deployment checklists. Max 1 release/week per squad.

Root cause: No CI/CD pipeline maturity; deployment process never automated beyond basic scripts.

04

Maturity Assessment

Each dimension is scored 1–5 based on interview evidence, artefact review, and survey data. The spider chart shows the current state baseline; target state (dashed) represents a realistic 6-month goal.

Current-state scores:

  • Discovery: 2.2
  • Delivery: 2.8
  • AI Adoption: 1.4
  • Ways of Working: 2.5
  • Metrics & Visibility: 1.8
  • Playbooks & Standards: 1.6
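For illustration, dimension scores like these can be produced as a weighted blend of 1–5 rubric ratings from each evidence source. A minimal sketch; the weights and ratings below are invented for the example, not Acme's actual assessment data:

```python
# Weighted blend of 1–5 rubric ratings per evidence source.
# All numbers here are illustrative, not real assessment data.
WEIGHTS = {"interviews": 0.5, "artefacts": 0.3, "survey": 0.2}

ratings = {
    "AI Adoption": {"interviews": 1.0, "artefacts": 1.5, "survey": 2.0},
    "Delivery":    {"interviews": 3.0, "artefacts": 2.5, "survey": 2.8},
}

def dimension_score(evidence):
    """Weighted average of per-source ratings for one dimension."""
    return sum(WEIGHTS[src] * rating for src, rating in evidence.items())

scores = {dim: dimension_score(ev) for dim, ev in ratings.items()}
for dim, score in scores.items():
    print(f"{dim}: {score:.1f}")
```

Scoring against a fixed rubric, rather than a single interviewer's gut feel, is what makes the 6-month re-score comparable to the baseline.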

05

Recommendations

Prioritised by impact and feasibility. Quick wins first, then structural changes.

Quick Win

Standardise definition of done & backlog hygiene

Align all 6 squads on a shared DoD, story format, and refinement checklist. Estimated: 1–2 sprints.

Quick Win

Introduce WIP limits and flow metrics dashboards

Configure Jira boards with WIP limits and deploy a shared dashboard (cycle time, throughput, age). Estimated: 1 sprint.
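The three dashboard metrics named here are simple to compute once start/finish timestamps exist; a minimal sketch with invented dates (a real version would read items from the Jira API rather than a hard-coded list):

```python
from datetime import date, timedelta

# Illustrative work items as (started, finished); finished=None means in flight.
items = [
    (date(2024, 3, 1), date(2024, 3, 12)),
    (date(2024, 3, 4), date(2024, 3, 9)),
    (date(2024, 3, 10), None),
]
today = date(2024, 3, 18)

done = [(s, f) for s, f in items if f is not None]

# Cycle time: elapsed days from start to finish, averaged over finished items.
avg_cycle_time = sum((f - s).days for s, f in done) / len(done)

# Throughput: items finished in the last two-week sprint window.
throughput = sum(1 for _, f in done if f >= today - timedelta(days=14))

# Age: how long each in-flight item has been open (the WIP-limit early warning).
ages = [(today - s).days for s, f in items if f is None]

print(avg_cycle_time, throughput, ages)
```

Work-item age is the leading indicator of the three: it flags stuck work before it ever shows up in the cycle-time average.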

Medium Term

Deploy AI coding assistant across all squads

Roll out GitHub Copilot with usage policy, governance guardrails, and squad-level AI champions. Estimated: 2–3 sprints.

Medium Term

Establish dual-track discovery cadence

Create discovery playbook; reallocate PM capacity (target 1 PM : 2 squads max); introduce weekly discovery sync. Estimated: 2–4 sprints.

Strategic

Automate CI/CD and remove release gating

Build pipeline automation, feature flags, and automated smoke tests to enable continuous deployment. Estimated: 4–6 sprints.
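A common mechanism behind removing release gating is the feature flag: code deploys continuously but stays dark until a flag turns it on for a given squad. A minimal in-process sketch; the flag names and rollout table are invented, and a production setup would use a flag service or shared config store so flags flip without a redeploy:

```python
# In-process feature-flag table; illustrative only. Real rollouts typically use
# a flag service or config store so flags can change without a redeploy.
FLAGS = {
    "new-checkout": {"squad-a", "squad-b"},  # enabled for two squads
    "ai-search": set(),                      # deployed, but dark everywhere
}

def is_enabled(flag, squad):
    """A flag is on only if this squad is in its rollout set."""
    return squad in FLAGS.get(flag, set())

def checkout(squad):
    # The new path ships behind the flag; the legacy path stays as the fallback.
    if is_enabled("new-checkout", squad):
        return "new checkout flow"
    return "legacy checkout flow"

print(checkout("squad-a"))
print(checkout("squad-c"))
```

Decoupling deploy from release this way is what lets the automated pipeline run on every merge while squads still control when users see a change.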

Strategic

AI-assisted QA and document generation

Implement AI-powered test generation, PRD drafting, and acceptance criteria writing across the PDLC. Estimated: 4–8 sprints.

06

AI Implementation Approach

AI is embedded across the roadmap — not as a separate initiative, but woven into each phase of ways-of-working improvement. The approach follows a crawl → walk → run model aligned to team readiness.

Crawl

Foundation & Governance

Establish AI usage policy and security guardrails. Deploy Copilot to 2 pilot squads. Train all engineers on prompt patterns and safe use. Appoint squad-level AI champions.

Sprints 1–2 · Aligned to Roadmap Phase 1
Walk

Expand & Integrate

Roll Copilot to all 6 squads. Introduce AI-assisted code review and PR summarisation. Begin AI-generated acceptance criteria pilot. Launch fortnightly AI upskilling sessions.

Sprints 2–3 · Aligned to Roadmap Phase 2
Run

Advanced Use Cases

Deploy AI-assisted test generation in CI pipeline. Automate PRD and technical spec drafting. Implement AI-powered discovery insights (user research summarisation, competitor analysis). Measure adoption metrics and iterate.

Sprints 3–4 · Aligned to Roadmap Phases 2–3

07

Implementation Roadmap

Phased delivery over 8 weeks (4 × 2-week sprints), structured to show value early and build momentum. This aligns to the sample roadmap template.

Sprint cadence: Sprint 1 (W1–W2) · Sprint 2 (W3–W4) · Sprint 3 (W5–W6) · Sprint 4 (W7–W8)

Ways of Working & PDLC
  • Standardise DoD, backlog hygiene, squad ceremonies
  • Dual-track discovery cadence; refinement playbook rollout
  • Cross-squad retros; continuous improvement habits

AI Enablement
  • AI policy & governance; Copilot pilot (2 squads)
  • Copilot full rollout; AI code review pilot
  • AI test generation; PRD drafting assistant
  • Measure & iterate; advanced use cases

Metrics & Visibility
  • Jira standardisation; WIP limits
  • Flow metrics dashboards; team health checks
  • Leadership reporting; quarterly review rhythm

Coaching & Upskilling
  • Squad lead coaching; AI literacy sessions
  • Wave 1 team coaching; AI prompt workshops
  • Ongoing 1-to-1 coaching; community of practice; train-the-trainer

View the full interactive roadmap to see how phases, workstreams, and sprints are structured.

Want a report like this for your organisation?

Every assessment is tailored to your context — your teams, your industry, your goals. A short conversation is all it takes to get started.