Chat et al.
Clinical decision support, grounded in guidelines.

Guideline-linked answers to assist your decisions

A retrieval-augmented system that surfaces relevant guideline passages and structured clinical knowledge, designed to support clinical reasoning with traceable sources.

Traceable sources

Responses are linked to specific guideline passages, so you can verify where information comes from.

Structured knowledge

Clinical guidelines are parsed into a knowledge graph that captures entities, thresholds, drug classes, and decision pathways.

Contextual retrieval

The system retrieves relevant sections based on your query context, reducing time spent searching through documents.

How it works

From guidelines to grounded answers

A retrieval-augmented pipeline that combines semantic search with structured clinical knowledge.

1. Guideline ingestion

Clinical guidelines (e.g., ESC, AHA) are parsed from source documents, preserving structure, tables, and figures.
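As an illustration, the ingestion step could be sketched as a parser that splits a plain-text guideline export into numbered sections. The heading format, function name, and example text below are hypothetical; real ingestion of PDF sources with tables and figures would need a dedicated parser.

```python
import re

def parse_sections(text: str) -> dict[str, str]:
    """Split a plain-text guideline export into numbered sections.

    Assumes hypothetical headings like '3.1 Blood pressure targets'
    on their own line; everything until the next heading belongs to
    the current section.
    """
    sections: dict[str, str] = {}
    current = None
    for line in text.splitlines():
        m = re.match(r"^(\d+(?:\.\d+)*)\s+(.+)$", line)
        if m:
            current = f"{m.group(1)} {m.group(2)}"
            sections[current] = ""
        elif current:
            sections[current] += line.strip() + " "
    return {k: v.strip() for k, v in sections.items()}
```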

2. Knowledge graph construction

Entities—conditions, drugs, thresholds, contraindications—are extracted and linked into a graph that captures clinical relationships.
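A minimal sketch of such a graph is a store of (subject, relation, object) triples with a lookup by relation. The class, relation names, and clinical values below are illustrative only, not taken from any actual guideline.

```python
from collections import defaultdict

class ClinicalGraph:
    """Minimal triple store: (subject, relation, object).

    Illustrative sketch; entity and relation names are hypothetical.
    """
    def __init__(self):
        self._out = defaultdict(list)  # subject -> [(relation, object)]

    def add(self, subj: str, rel: str, obj: str) -> None:
        self._out[subj].append((rel, obj))

    def related(self, subj: str, rel: str) -> list[str]:
        # All objects linked to subj by the given relation.
        return [o for r, o in self._out[subj] if r == rel]

# Example triples (clinical content hypothetical):
kg = ClinicalGraph()
kg.add("hypertension", "first_line", "ACE inhibitor")
kg.add("hypertension", "target", "BP < 130/80 mmHg")
kg.add("ACE inhibitor", "contraindicated_in", "pregnancy")
```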

3. Vector indexing

Text passages are embedded and stored in a vector database for semantic similarity search.
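The shape of this step can be sketched with a toy index: here a bag-of-words count vector stands in for a real embedding model, and cosine similarity ranks passages. All passage text is illustrative.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Stand-in for a real sentence-embedding model: a simple
    # bag-of-words vector, enough to illustrate the index shape.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical indexed passages:
passages = [
    "Target blood pressure below 130/80 mmHg in most adults.",
    "Statins are first-line therapy for LDL lowering.",
]
index = [(p, embed(p)) for p in passages]

def search(query: str, k: int = 1) -> list[str]:
    """Return the k passages most similar to the query."""
    q = embed(query)
    ranked = sorted(index, key=lambda pe: cosine(q, pe[1]), reverse=True)
    return [p for p, _ in ranked[:k]]
```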

4. Contextual retrieval

Given a query, the system retrieves relevant passages from both the vector index and the knowledge graph.
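One simple way to merge the two result streams, sketched below, is to interleave ranked vector hits with graph facts while dropping duplicates. The actual fusion strategy is not specified in this document; this merge policy is an assumption.

```python
def combine(vector_hits: list[str], graph_facts: list[str], k: int = 4) -> list[str]:
    """Interleave semantic-search passages with knowledge-graph facts,
    dropping duplicates while preserving each source's ranking.
    Illustrative merge policy only."""
    merged: list[str] = []
    for pair in zip(vector_hits, graph_facts):
        for item in pair:
            if item not in merged:
                merged.append(item)
    # Append whatever one source had beyond the other's length.
    for leftover in vector_hits + graph_facts:
        if leftover not in merged:
            merged.append(leftover)
    return merged[:k]
```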

5. Response generation

A language model synthesizes retrieved content into a coherent answer, citing source passages.
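Grounded generation of this kind typically hinges on the prompt: each retrieved passage is given a citable index the model must reference. The prompt wording below is a hypothetical sketch, not the system's actual template.

```python
def build_prompt(question: str, passages: list[str]) -> str:
    """Assemble a grounding prompt in which each retrieved passage
    gets a bracketed index the model can cite as [n].
    Prompt wording is illustrative only."""
    numbered = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return (
        "Answer using only the passages below and cite them as [n].\n\n"
        f"{numbered}\n\n"
        f"Question: {question}\n"
        "Answer:"
    )
```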

What the knowledge graph captures

  • Clinical entities (diseases, drugs, procedures)
  • Recommendation classes and evidence levels
  • Therapeutic thresholds and target values
  • Contraindications and drug interactions
  • Decision pathways and treatment algorithms

Evaluation snapshot

Performance on Italian specialty exams

We evaluated Chat et al. alongside other AI systems on the 2025 SSM exam to measure retrieval and reasoning accuracy.

2025 SSM Exam

Percentage of questions answered correctly

Chat et al.: 89%
OpenEvidence: 88%
ChatGPT: 82%
Gemini: 74%

Methodology

All systems were tested on the complete 2025 SSM exam question set under identical conditions. Score = correct answers / total questions.
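The scoring rule above is a single ratio; as a sketch, with hypothetical counts (the total number of 2025 SSM questions is not stated here):

```python
def pass_rate(correct: int, total: int) -> float:
    """Score used above: correct answers / total questions.
    The counts passed in are illustrative."""
    if total <= 0:
        raise ValueError("total must be positive")
    return correct / total
```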


Design principles

How we approach clinical decision support.

Support, not replacement

The system is designed as a reasoning aid for clinicians, not an autonomous decision-maker. Final judgment remains with you.

Maintained knowledge base

Guidelines are periodically re-ingested to reflect updates and new evidence as they become available.

Transparent limitations

We acknowledge that retrieval may miss relevant context and that LLM-generated text can contain errors. Always verify critical information.

Data handling

Queries are not used for model training. Conversations are encrypted and handled according to privacy best practices.

Challenges in clinical information access

Guidelines evolve frequently. In busy clinical settings, locating the right passage across PDFs and apps takes time that isn't always available.

Scattered sources

Protocols, updates, and reference tables are spread across multiple documents and platforms.

Time constraints

Clinical decisions often can't wait for lengthy searches and cross-referencing.

Evolving evidence

New guidelines and consensus statements are published regularly and can be easy to miss.

Fragmented tools

Risk calculators, tables, and figures often live in separate applications.

Limited consultation

Not every decision can wait for a colleague's input, especially during off-hours or high-volume shifts.

Documentation overhead

Translating guideline recommendations into structured notes requires additional steps.

Want to learn more?

Leave your email and we'll share updates on the project.