These documents govern your use of MilestonesIQ. They are initial drafts prepared for review by qualified legal counsel. For questions, contact [email protected].
Effective April 19, 2026 · Last updated April 19, 2026
MilestonesIQ uses artificial intelligence to help Program Directors and faculty identify patterns in trainee milestone data and generate narrative performance summaries. Programs and trainees have a right to understand how these tools work, what their limitations are, and what safeguards are in place.
Milestone Risk Scoring. The Platform uses a weighted scoring model to analyze a trainee's milestone trajectory across all six ACGME competency domains — Patient Care (PC), Medical Knowledge (MK), Systems-Based Practice (SBP), Practice-Based Learning and Improvement (PBLI), Professionalism (PRO), and Interpersonal and Communication Skills (ICS). The model computes a risk score based on the rate of milestone progression, the gap between current scores and expected benchmarks for the trainee's PGY level, and the pattern of scores over time. Risk scores are reported with a confidence interval rather than as a binary pass/fail flag.
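To make the three inputs concrete, here is an illustrative sketch of how such a weighted model could combine progression rate, benchmark gap, and score variability. The weights, formula, and uncertainty band below are hypothetical; the actual MilestonesIQ model is proprietary and is not published here.

```python
from statistics import mean, stdev

# Hypothetical weights -- for illustration only, not the Platform's actual model.
W_PROGRESSION = 0.4   # weight on stalled or regressing milestone progression
W_GAP = 0.4           # weight on the gap to the PGY-level benchmark
W_VARIABILITY = 0.2   # weight on instability in the score pattern over time

def risk_score(scores: list[float], benchmark: float) -> tuple[float, tuple[float, float]]:
    """Toy risk score over a chronological series of milestone ratings.

    Returns (score, (low, high)); the interval conveys uncertainty
    rather than a binary pass/fail flag.
    """
    # Rate of progression: mean per-period change (negative = regression).
    deltas = [b - a for a, b in zip(scores, scores[1:])]
    progression = mean(deltas) if deltas else 0.0
    # Benchmark gap: how far the latest rating sits below expectation.
    gap = max(0.0, benchmark - scores[-1])
    # Pattern over time: variability of the trajectory.
    variability = stdev(scores) if len(scores) > 1 else 0.0
    score = (W_PROGRESSION * max(0.0, -progression)
             + W_GAP * gap
             + W_VARIABILITY * variability)
    margin = 0.5 * variability  # crude uncertainty band for illustration
    return score, (max(0.0, score - margin), score + margin)
```

Under this sketch, a trainee progressing steadily toward benchmark yields a low score with a narrow band, while a regressing trajectory below benchmark yields a higher one.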
AI-Generated Narrative Summaries. When a Program Director requests an AI summary, the Platform sends de-identified performance data to a large language model (currently Anthropic Claude) and receives a narrative summary written in ACGME milestone language. The summary describes the trainee's strengths, areas for development, and suggested focus areas.
The AI does not make decisions — it identifies patterns and drafts summaries. Every output requires a Program Director's review and disposition before any action is taken. The AI does not recommend remediation, probation, or dismissal. It does not access patient records or clinical notes. It does not use trainee data to train its underlying models (prohibited by our data processing agreement with Anthropic).
Every AI-generated risk flag and narrative summary requires a Program Director to record one of three dispositions before the summary is finalized:
This disposition requirement is enforced by the Platform's workflow. The disposition and the PD's identity are recorded in an immutable audit log. No adverse action may be taken based solely on an AI-generated output.
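The enforcement described above can be sketched as a simple workflow guard: finalization fails unless a Program Director has recorded a disposition, and each disposition is appended to a log. The disposition labels and class names below are hypothetical illustrations, not the Platform's actual terms or code.

```python
from datetime import datetime, timezone
from enum import Enum

# Hypothetical labels -- the Statement requires one of three dispositions,
# but these particular names are illustrative.
class Disposition(Enum):
    CONCUR = "concur"
    CONCUR_WITH_EDITS = "concur_with_edits"
    REJECT = "reject"

class AISummary:
    """Sketch of workflow enforcement: no finalization without a PD disposition."""

    def __init__(self, trainee_id: str, text: str):
        self.trainee_id = trainee_id
        self.text = text
        self.disposition: Disposition | None = None
        self.audit_log: list[dict] = []  # append-only in a real system

    def record_disposition(self, pd_identity: str, disposition: Disposition) -> None:
        # Record both the disposition and the PD's identity, timestamped.
        self.disposition = disposition
        self.audit_log.append({
            "pd": pd_identity,
            "disposition": disposition.value,
            "at": datetime.now(timezone.utc).isoformat(),
        })

    def finalize(self) -> str:
        if self.disposition is None:
            raise PermissionError("A Program Director disposition is required before finalization.")
        return self.text
```

The point of the sketch is the ordering guarantee: the summary object cannot reach a finalized state through any code path that skips the disposition step.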
MilestonesIQ includes a built-in bias audit layer that analyzes AI flag rates and summary language across demographic groups within a cohort. When the system detects a statistically notable pattern — for example, if trainees of a particular background are flagged at a higher rate than their performance data would predict — it surfaces a warning to the Program Director.
The bias audit layer is a screening tool, not a guarantee. Program Directors are encouraged to treat bias audit alerts as prompts for reflection and conversation, not as definitive findings.
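One standard way to detect a "statistically notable pattern" in flag rates is a two-proportion z-test across subgroups. The sketch below assumes that approach for illustration; the Statement does not specify which test the Platform actually uses.

```python
from math import sqrt, erf

def flag_rate_alert(flags_a: int, n_a: int, flags_b: int, n_b: int,
                    alpha: float = 0.05) -> bool:
    """Two-proportion z-test on AI flag rates for two cohort subgroups.

    Returns True when the difference in flag rates is statistically
    notable at level alpha -- a prompt for reflection, not a finding.
    """
    p_a, p_b = flags_a / n_a, flags_b / n_b
    pooled = (flags_a + flags_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    if se == 0:
        return False  # no variation (e.g., zero flags in both groups)
    z = (p_a - p_b) / se
    # Two-sided p-value via the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p_value < alpha
```

A small sample caveat applies: with typical residency cohort sizes, such a test has limited power, which is one reason alerts of this kind should be treated as screening prompts rather than definitive findings.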
The AI risk scoring engine uses milestone scores, EPA (Entrustable Professional Activity) observation records, ITE (In-Training Examination) scores (if uploaded), and rotation evaluation quantitative scores. The AI narrative summary generator uses milestone scores, ILP (Individualized Learning Plan) goal progress, and aggregate procedure log counts.
The AI does not use trainee demographic information (name, gender, race, ethnicity, national origin) as inputs to risk scoring or summary generation. Demographic data is used only in the bias audit layer.
MilestonesIQ's narrative summary feature uses Anthropic Claude. The risk scoring engine is a proprietary weighted model developed by MilestonesIQ. Neither model has been validated as a medical device, and neither is intended to constitute clinical decision support as defined by the FDA.
Trainees have the right to know when AI has been used (all AI summaries are clearly labeled), see the PD's disposition on any AI summary, contest an AI output through their Program Director, and opt out of AI summary generation (contact your Program Director or [email protected]).
If you have questions about our AI features, believe an AI output has caused harm, or wish to report a potential bias issue, please contact:
AI Accountability: [email protected] | General: [email protected]
We will respond within five (5) business days.
This AI Transparency Statement is an initial draft prepared for review by qualified legal counsel. It is not a substitute for advice from a licensed attorney.