Healthcare sector · RAG, PostgreSQL, vector search, LLM integration
AI Knowledge Assistant for Healthcare Laboratory Partners
A midsized healthcare company needed a better way for partners and trained specialists to identify relevant laboratory tests based on patient indicators and clinical context. I helped develop a RAG-based assistant grounded entirely in the company's own lab information — purpose-built for a knowledge-heavy, trust-sensitive environment where speed, clarity, and accuracy matter.
Challenge
Laboratory partners and trained medical specialists needed to navigate complex, proprietary documentation to identify the right tests for a given set of patient indicators. The existing approach required manual search through dense reference material — slow, error-prone, and dependent on individual expertise rather than reliable, systematic access to the company's knowledge base.
Solution
A RAG-based knowledge assistant grounded entirely in the company's own laboratory reference data. Users query by symptom profile or clinical context and receive source-grounded test suggestions, each traceable to the underlying documentation. Answers draw only on the company's own lab information, not generic web content, which constrains output to what the documentation actually supports and sharply limits hallucination. The assistant was designed for trained professionals: it surfaces options and cites sources, but leaves clinical judgement with the user.
Technical Approach
Full RAG pipeline built from scratch: documents chunked and tokenised, embeddings generated and stored in PostgreSQL with vector search capabilities, semantic retrieval on each query, relevant chunks injected into LLM context, and responses grounded with source citations. Every suggestion is linked back to its source document, so trained users can verify and trust the output. The system deliberately keeps humans in the loop — it accelerates decision support without replacing professional judgement.
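The pipeline above can be sketched in a few lines. This is a simplified, in-memory illustration only: the word-count "embedding", the document names, and all function names are hypothetical stand-ins, and the production system instead used a real embedding model with vectors stored and searched in PostgreSQL. The LLM call itself is omitted; the sketch shows chunking, semantic retrieval, and source-tagged context injection.

```python
from collections import Counter
import math

def chunk(text, size=40):
    # Split a document into fixed-size word chunks (the real pipeline also
    # tokenises against the embedding model's context limits).
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def embed(text):
    # Stand-in embedding: a word-frequency vector. In production this is an
    # embedding model, with vectors stored in PostgreSQL for vector search.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[w] * b[w] for w in a if w in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, index, k=2):
    # Semantic retrieval: rank stored chunks by similarity to the query,
    # keeping each chunk's source citation attached.
    q = embed(query)
    return sorted(index, key=lambda row: cosine(q, row["embedding"]),
                  reverse=True)[:k]

def build_prompt(question, hits):
    # Inject the retrieved chunks into the LLM context, tagged with their
    # sources so the model's answer can cite them.
    context = "\n".join(f"[{h['source']}] {h['text']}" for h in hits)
    return ("Answer using only the sources below and cite them.\n"
            f"{context}\n\nQuestion: {question}")

# Indexing: chunk each document, embed each chunk, store with its source.
docs = {
    "iron-panel.md": "Ferritin and transferrin saturation help assess iron deficiency anaemia.",
    "thyroid.md": "TSH and free T4 are first-line tests for suspected thyroid dysfunction.",
}
index = [{"source": src, "text": c, "embedding": embed(c)}
         for src, text in docs.items() for c in chunk(text)]

hits = retrieve("suspected iron deficiency", index, k=1)
prompt = build_prompt("Which tests fit suspected iron deficiency?", hits)
```

Because every retrieved chunk carries its `source`, the generated answer can be traced back to the underlying document, which is what lets trained users verify the output rather than take it on faith.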
Impact
Partners and specialists can now identify relevant laboratory tests in seconds rather than minutes. The tool reduces reliance on individual expertise and provides consistent, auditable access to the company's knowledge base — improving both speed and confidence in a domain where accuracy has real clinical consequences.
Why This Matters
Deploying AI in healthcare is not the same as deploying it anywhere else. This project required thinking carefully about what responsible adoption looks like in practice: grounding outputs in authoritative, proprietary sources; making reasoning transparent and verifiable; and keeping human judgement central to the workflow. That discipline — building AI that organisations can actually trust — is what makes adoption possible in regulated, high-stakes environments. It is the same discipline that matters when selling frontier AI at enterprise scale.