Last reviewed on December 17, 2025 by @Faraz Malik

1. Overview

Project purpose

MedPrompt is a tool that expedites document processing in primary care clinics by automating document summarisation, SNOMED code detection, outcome task inference and NHS ID detection.

By leveraging vision-text-to-text LLMs and phrase embedding models, MedPrompt works in tandem with trained coders, saving them substantial time by completing the bulk of each document-processing task before it reaches them.
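The exact role of the phrase embedding models is not specified here; purely as an illustration, the sketch below assumes SNOMED code detection works by embedding extracted phrases and candidate SNOMED terms (via the OpenAI embeddings endpoint) and ranking the terms by cosine similarity. The term list, model name and detectSnomedCodes helper are hypothetical, not MedPrompt's actual implementation.

```ts
import OpenAI from "openai";

const client = new OpenAI();

// Cosine similarity between two equal-length vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Hypothetical SNOMED candidates; a real deployment would index far more.
const snomedTerms = [
  { code: "73211009", term: "Diabetes mellitus" },
  { code: "38341003", term: "Hypertensive disorder" },
];

async function detectSnomedCodes(phrase: string) {
  // Embed the extracted phrase and the candidate SNOMED terms in one batch.
  const { data } = await client.embeddings.create({
    model: "text-embedding-3-small", // assumed embedding model
    input: [phrase, ...snomedTerms.map((t) => t.term)],
  });
  const [phraseVec, ...termVecs] = data.map((d) => d.embedding);

  // Rank the SNOMED terms by similarity to the phrase.
  return snomedTerms
    .map((t, i) => ({ ...t, score: cosine(phraseVec, termVecs[i]) }))
    .sort((a, b) => b.score - a.score);
}
```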

1.1 Architecture

1.1.1 App-level

MedPrompt High-level architecture.svg

There are 4 components to the MedPrompt app:

The deployment integrates with the OpenAI API to obtain query responses from vision-text-to-text LLMs.
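As a minimal sketch of this integration, assuming the processor calls OpenAI's Chat Completions endpoint with a vision-capable model (the model name, prompt and summariseDocument helper are illustrative, not MedPrompt's actual implementation):

```ts
import OpenAI from "openai";

const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

// Send a scanned document page plus an instruction to a vision-capable model
// and return the model's text response.
async function summariseDocument(pageBase64: string): Promise<string> {
  const response = await client.chat.completions.create({
    model: "gpt-4o", // assumed vision-text-to-text model
    messages: [
      {
        role: "user",
        content: [
          { type: "text", text: "Summarise this clinic document and list any outcome tasks." },
          { type: "image_url", image_url: { url: `data:image/png;base64,${pageBase64}` } },
        ],
      },
    ],
  });
  return response.choices[0].message.content ?? "";
}
```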

1.1.2 Deployment-level

MedPrompt Deployment-level architecture.svg

The frontend is deployed on the app subdomain. There is a single, shared identity server deployment. Each clinic has a dedicated medprompt-processor server.

The user authenticates with the identity server from the frontend, retrieving credentials and the path for their clinic. The frontend then uses these to connect to the appropriate medprompt-processor deployment.

Note that the clinic servers and the identity server share the same server subdomain so that credentials can be shared across servers within browser security constraints.
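A minimal sketch of this flow from the frontend's perspective, assuming the identity server sets a credential cookie scoped to the shared server subdomain and returns the clinic path as JSON; the endpoint paths, field names and subdomain layout below are assumptions for illustration only:

```ts
// Hypothetical host: the identity server and clinic processors share one
// server subdomain, so a credential cookie set by the identity server is
// also sent with requests to the clinic's medprompt-processor deployment.
const SERVER_ORIGIN = "https://server.medprompt.example";

interface LoginResponse {
  clinicPath: string; // e.g. "/clinics/example-clinic" (assumed shape)
}

// Authenticate against the identity server; the browser stores the
// credential cookie because requests are made with `credentials: "include"`.
async function login(username: string, password: string): Promise<LoginResponse> {
  const res = await fetch(`${SERVER_ORIGIN}/identity/login`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    credentials: "include",
    body: JSON.stringify({ username, password }),
  });
  if (!res.ok) throw new Error(`Login failed: ${res.status}`);
  return res.json();
}

// Call the clinic's dedicated processor using the returned clinic path;
// the shared cookie is attached automatically on the same subdomain.
async function fetchPendingDocuments(clinicPath: string) {
  const res = await fetch(`${SERVER_ORIGIN}${clinicPath}/documents/pending`, {
    credentials: "include",
  });
  if (!res.ok) throw new Error(`Request failed: ${res.status}`);
  return res.json();
}
```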

1.2 Core features