Reducing Workload and Improving Output Quality
Problem
A fast-growing health-tech company found itself in an enviable position: demand for its services was outpacing its ability to deliver. The market opportunity was there, but a single bottleneck in its internal workflow was preventing the company from capitalising on it.
At the heart of the problem was a highly manual clinical process. Before a final assessment could be completed, skilled clinicians — already scarce and difficult to recruit — were required to work through more than 30 pages of forms and meeting notes, synthesising the information into a structured report. Compounding the challenge, each report had to be formatted to a specific template determined by the client's source, age, and profile, adding further complexity to an already time-consuming task.
Competitors facing the same constraint had attempted to solve it by building out large administrative teams to support their clinicians. This approach, however, introduced new problems: higher error rates, greater operational overhead, and — critically — a process that still wasn't fast enough to unlock the growth the market was ready to support.
The company needed a fundamentally different solution.
Solution
To understand the problem in full, we conducted a series of interviews with team members across the business, mapping the end-to-end workflow and identifying where time was being lost at each stage. This gave us three things: a clear picture of the underlying problem, a way to prioritise the highest-impact areas, and a baseline against which we could measure the results of any intervention.
Armed with those insights, I designed a solution that embedded AI directly into the report-writing workflow. The system began by transcribing client meetings automatically, removing the need for manual note-taking. When a clinician came to write a report, the system would identify and surface the correct template based on the client's profile, pull through the relevant client information from the CRM, and present AI-generated summaries of the meeting transcriptions alongside the key sections of the client's forms and notes — all within a single, structured view.
Rather than asking clinicians to hunt for information across multiple sources, everything they needed to complete the assessment was consolidated in one place, ready for their review and clinical judgement.
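The consolidation step described above can be sketched in a few lines of Python. This is an illustrative outline only, not the production system: the template-selection rule, field names, and data sources are all assumptions standing in for the company's real profile logic and CRM integration.

```python
from dataclasses import dataclass

@dataclass
class ClientProfile:
    source: str  # hypothetical values, e.g. "insurer" or "self_referred"
    age: int

def select_template(profile: ClientProfile) -> str:
    """Pick a report template from the client's source and age band.
    The mapping here is purely illustrative."""
    band = "child" if profile.age < 18 else "adult"
    return f"{profile.source}_{band}_template"

def build_report_view(profile: ClientProfile, crm_record: dict,
                      meeting_summary: str, form_highlights: list) -> dict:
    """Consolidate everything a clinician needs into one structured view,
    ready for their review and clinical judgement."""
    return {
        "template": select_template(profile),
        "client": crm_record,
        "meeting_summary": meeting_summary,
        "form_highlights": form_highlights,
    }

view = build_report_view(
    ClientProfile(source="insurer", age=34),
    crm_record={"name": "Jane Doe"},
    meeting_summary="AI-generated summary of the meeting transcript...",
    form_highlights=["Section 2: history", "Section 7: goals"],
)
print(view["template"])  # insurer_adult_template
```

The design choice worth noting is that the system assembles and presents information but makes no clinical decisions; the clinician remains the final author of the assessment.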
Result
For the transcription element, I chose to integrate an established third-party AI provider rather than build a bespoke solution from the outset. This allowed the team to move quickly, prove the concept in a real environment, and de-risk the investment before committing to a full in-house build.
The initial implementation did not go to plan. The AI's summarisation was too aggressive, compressing information to a degree that caused critical clinical detail to be lost. Rather than abandoning the approach, we treated this as a learning opportunity — dialling back the extent of summarisation and finding the right balance between brevity and accuracy. The revised implementation performed significantly better.
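One way to dial back over-aggressive summarisation is to bound the summary's length relative to the source and instruct the model to preserve clinical specifics verbatim. The sketch below assumes a generic LLM call (`model_call` is a stand-in for any provider's API) and an illustrative `compression` parameter; it is not the vendor's actual interface.

```python
def summarise(text: str, model_call, compression: float = 0.3) -> str:
    """Request a summary whose length is bounded relative to the source,
    so critical detail isn't lost to over-compression.
    `model_call` is a placeholder for any LLM API taking a prompt string."""
    target_words = max(50, int(len(text.split()) * compression))
    prompt = (
        f"Summarise the following clinical notes in roughly {target_words} words. "
        "Preserve all clinical details, dates, and measurements verbatim.\n\n"
        + text
    )
    return model_call(prompt)
```

Raising `compression` after the first trial is the kind of adjustment described above: the revised balance keeps summaries short enough to save time while retaining the detail clinicians need.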
Even with some elements of the toolset still in development, the results were striking. The number of manual steps in the report-writing process was meaningfully reduced, and the overall time required to produce a report fell dramatically. Beyond the headline efficiency gains, an unexpected benefit emerged: neurodiverse members of the clinical team reported that the tool substantially reduced the cognitive load that had previously made aspects of the process particularly challenging for them.