The Numbers Behind the Crisis
Across primary care and every medical subspecialty, the question of whether training programs are producing physicians who are genuinely ready for unsupervised, independent practice has become one of the most urgent conversations in American medicine.
The AAMC projects a shortage of between 20,000 and 40,000 primary care physicians by 2036. Physician attrition from clinical practice jumped from 3.5% to 4.9% between 2013 and 2019. Pediatric subspecialty workforce pipelines are under particular strain — applications to general pediatrics residency declined by approximately 12% over five years, according to the Council of Pediatric Subspecialties.
These workforce pressures create an almost irresistible pull toward shortening training durations. But shortening training without a rigorous, real-time mechanism to verify readiness does not solve the problem — it transfers it downstream, into the hospitals, clinics, and ICUs where patients are most vulnerable.
In April 2026, the American Board of Pediatrics made this tension official policy: a new competency-based model that creates a two-year clinically oriented fellowship pathway across all 15 ABP subspecialties — including neonatology — with the earliest implementation anticipated for fellows entering training in July 2028. The ABP's own press release acknowledged the core challenge directly: "A growing body of evidence indicates that the current training paradigm...does not consistently ensure readiness for unsupervised clinical practice."
Primary care and pediatric subspecialties face projected shortfalls that will affect every community in America. Training more physicians faster is not the answer if those physicians are not ready.
The ABP's move to a two-year fellowship pathway means programs must now demonstrate readiness within a shorter window. The margin for missing a struggling trainee has never been smaller.
When readiness gaps go undetected in training, they surface in practice — in the NICU, the pediatric ICU, the subspecialty clinic. The cost is measured in patient outcomes, not just board scores.
Competency-Based Medical Education (CBME) — built on ACGME milestones, Entrustable Professional Activities (EPAs), and Clinical Competency Committee (CCC) review — was designed precisely to answer the readiness question. In theory, it is the most powerful framework ever developed for GME. In practice, the data it generates is chronically underused.
ACGME milestone assessments generate substantial data across six core competency domains every six months. But 52.6% of programs cite logistics and tracking as their primary challenge — not lack of data, but inability to act on it. The data exists. The intelligence to interpret it does not.
Written faculty evaluations contain the richest signal about trainee development — and they are the least systematically analyzed. Program directors lack the time and tools to synthesize dozens of narrative comments into a coherent developmental picture before each CCC meeting.
Research consistently shows that trainees in difficulty are identified late — often not until a crisis. Early warning signals exist in the milestone data, but they require pattern recognition across time, evaluators, and competency domains that no spreadsheet or LMS can provide.
A 2025 study in the Journal of Medical Education (Borges et al.) found that faculty and program directors report negative perceptions of milestone data quality, difficulty interpreting what the data means for individual trainees, and frustration that milestone assessments are "not always helpful." This is not a failure of CBME as a philosophy — it is a failure of the infrastructure built to support it. The framework is sound. The tools have not kept pace.
On April 16, 2026, the American Board of Pediatrics announced a fundamental restructuring of subspecialty fellowship training — and placed competency verification at the center of every program's accountability.
The new ABP model creates a two-year, clinically oriented fellowship pathway across all 15 core pediatric subspecialties. Fellows who demonstrate readiness for unsupervised practice complete training in two years. An optional third year remains available for scholarship and advanced training — but it is no longer the default.
To make this work, the ABP is requiring a more comprehensive assessment system: workplace-based micro-assessments completed at the point of care, 360-degree evaluations from the full care team, longitudinal tracking of EPA progress, and rigorous Clinical Competency Committee review and attestation.
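To make longitudinal EPA tracking concrete, the sketch below shows one minimal record shape for point-of-care micro-assessments reviewed at CCC attestation. The field names, the five-point entrustment scale, and the example data are hypothetical illustrations, not an ABP or MilestonesIQ specification.

```python
# Illustrative sketch only: a minimal data model for longitudinal
# EPA tracking. Field names and the entrustment scale are
# hypothetical assumptions, not an ABP specification.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class EpaAssessment:
    epa_id: str            # e.g., "EPA-3"
    assessor: str          # any member of the care team (360-degree input)
    entrustment: int       # 1 (observe only) .. 5 (unsupervised practice)
    observed_on: date
    context: str = ""      # point-of-care micro-assessment note

@dataclass
class FellowRecord:
    fellow: str
    assessments: list = field(default_factory=list)

    def trajectory(self, epa_id):
        """Entrustment levels for one EPA, oldest first, so a CCC
        can review longitudinal progress at attestation."""
        return [a.entrustment
                for a in sorted(self.assessments, key=lambda a: a.observed_on)
                if a.epa_id == epa_id]

record = FellowRecord("Fellow X")
record.assessments += [
    EpaAssessment("EPA-3", "RN Lee", 2, date(2028, 9, 1)),
    EpaAssessment("EPA-3", "Dr. Kim", 4, date(2029, 3, 1)),
]
print(record.trajectory("EPA-3"))  # [2, 4]
```

Because each assessment is timestamped and attributed, the same records can back both 360-degree review and the audit trail a board attestation requires.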
The ABP explicitly noted that "technology, including mobile apps, can support this new approach to assessment" — and that they are actively reviewing vendor proposals. The infrastructure for competency-based assessment is no longer optional. It is the requirement.
The people responsible for ensuring trainee readiness are themselves operating at the edge of their capacity — managing clinical, scholarly, and administrative obligations that leave little room for the deep, data-driven analysis that CBME demands.
Survey data compiled across multiple institutions shows that approximately 50–55% of residency program directors report emotional exhaustion. Program directors who spend 10 or more hours per week on accreditation, documentation, and scheduling tasks have burnout rates 10–20 percentage points higher than those spending fewer than five hours on the same tasks.
The typical program director is simultaneously a practicing clinician, a researcher with scholarly obligations, an educator responsible for curriculum design and faculty development, and an administrator accountable to ACGME, their institution, and their specialty board. Only 20–30% of faculty in education-heavy roles report feeling "appropriately recognized" for this work.
Into this environment, CBME adds a new expectation: synthesize milestone data across dozens of evaluators, identify developmental trajectories, convene Clinical Competency Committees, write individualized learning plans (ILPs), and document performance improvement plans — all while maintaining clinical productivity. The expectation is reasonable. The infrastructure to support it has not existed.
Figure: Self-reported time allocation among clinician-educators with major teaching roles. Source: multi-institutional faculty survey data, 2025.
The core problem: The data needed to identify struggling trainees early exists within the ACGME milestone system. But extracting actionable insight from that data — across time, evaluators, and competency domains — requires analytical capacity that no individual program director has the bandwidth to provide alone.
MilestonesIQ is not a data repository. It is a clinical intelligence platform — purpose-built to transform ACGME milestone data, EPA assessments, and narrative evaluations into the actionable developmental insight that program leaders need to identify, support, and produce strong clinical physicians.
Visual trajectory graphs across all six ACGME competency domains reveal flattening or declining performance before it becomes a crisis. Risk scores are confidence-interval-based, not binary flags.
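One way to see what "confidence-interval-based, not binary" means in practice: fit a trend to a trainee's milestone ratings and flag the trajectory only when even the optimistic end of the confidence interval falls short of expected growth. The sketch below is an illustrative simplification, assuming a five-point milestone scale and a hypothetical growth threshold; it is not the MilestonesIQ model.

```python
# Illustrative sketch only: flagging a flattening milestone
# trajectory via an OLS slope and its approximate 95% CI.
# Thresholds and function names are hypothetical assumptions.
import statistics

def slope_with_ci(ratings, z=1.96):
    """OLS slope of milestone ratings over successive review
    periods, with an approximate 95% confidence interval."""
    n = len(ratings)
    xs = list(range(n))
    x_mean = statistics.mean(xs)
    y_mean = statistics.mean(ratings)
    sxx = sum((x - x_mean) ** 2 for x in xs)
    slope = sum((x - x_mean) * (y - y_mean)
                for x, y in zip(xs, ratings)) / sxx
    intercept = y_mean - slope * x_mean
    residuals = [y - (intercept + slope * x) for x, y in zip(xs, ratings)]
    se = (sum(r * r for r in residuals) / (n - 2) / sxx) ** 0.5
    return slope, slope - z * se, slope + z * se

def flag_flattening(ratings, expected_growth=0.25):
    """Flag when even the upper CI bound on growth per review
    period falls below the expected rate -- i.e., the trajectory
    has flattened, with statistical support rather than a
    single-point binary cutoff."""
    _, _, upper = slope_with_ci(ratings)
    return upper < expected_growth

# A trainee whose ratings stall at 2.5 is flagged; steady growth is not:
print(flag_flattening([2.0, 2.5, 2.5, 2.5, 2.5]))  # True
print(flag_flattening([1.0, 1.5, 2.0, 2.5, 3.0]))  # False
```

The design point is that the flag carries uncertainty with it: a noisy but plausibly improving record is not flagged, while a confidently stalled one is.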
Natural language processing synthesizes written faculty evaluations using ACGME milestone language, surfacing patterns across dozens of evaluators that no individual reader could identify manually.
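The core idea of cross-evaluator synthesis can be illustrated with a deliberately simple keyword pass: group narrative comments by competency domain and count how many distinct evaluators raised the same concern. A real NLP pipeline would be far richer; the domain keyword lists and names below are hypothetical stand-ins.

```python
# Illustrative sketch only: surfacing a pattern no single reader
# would see -- multiple evaluators independently flagging the same
# competency domain. Keyword lists are hypothetical assumptions,
# not an ACGME vocabulary.
from collections import defaultdict

DOMAIN_KEYWORDS = {
    "Patient Care": ["management plan", "procedural", "clinical judgment"],
    "Medical Knowledge": ["knowledge gap", "fund of knowledge"],
    "Professionalism": ["unprofessional", "accountability"],
}

def evaluators_flagging(comments):
    """comments: list of (evaluator, text) pairs. Returns, per
    domain, the set of distinct evaluators whose comments match."""
    hits = defaultdict(set)
    for evaluator, text in comments:
        lowered = text.lower()
        for domain, keywords in DOMAIN_KEYWORDS.items():
            if any(k in lowered for k in keywords):
                hits[domain].add(evaluator)
    return hits

comments = [
    ("Dr. A", "Struggles to build a management plan independently."),
    ("Dr. B", "Management plan often incomplete on rounds."),
    ("Dr. C", "Noticeable fund of knowledge gaps in cardiology."),
]
# Two distinct evaluators converge on Patient Care concerns:
print(len(evaluators_flagging(comments)["Patient Care"]))  # 2
```

The value is in the convergence count: one critical comment is noise, but the same concern from independent evaluators across rotations is a signal worth bringing to the CCC.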
Every trainee — resident or fellow, in any specialty — has a dedicated growth portal with milestone tracking, ILP management, and structured mentorship tools. Not just for those in difficulty.
AI-drafted, milestone-linked ILP goals that are editable, time-bound, and tied directly to ACGME competency data. Trainees acknowledge goals digitally, creating a shared accountability record.
Built for the ABP's new competency-based model. Longitudinal EPA tracking, workplace-based assessment logging, and CCC attestation workflows — ready for the July 2028 implementation.
Every AI flag requires PD disposition. Immutable audit logs, version-controlled model parameters, and FERPA-compliant documentation protect programs, trainees, and institutions.
"The goal is not to produce more data. The goal is to produce stronger doctors."
— The founding principle of MilestonesIQ
MilestonesIQ was not designed in a boardroom. It was designed in the spaces between patient rounds, CCC meetings, and fellowship interviews — by clinician-educators who have lived every challenge this platform is built to solve.
Our team is led by an active neonatologist-intensivist with direct experience in high-acuity neonatal care. We understand what it means to balance patient care, scholarly work, and educational leadership simultaneously.
MilestonesIQ was conceived and built by a fellowship program director who has sat in CCC meetings with incomplete data, written ILPs for trainees who needed support months earlier, and navigated the new ABP competency-based training requirements firsthand.
Every feature is grounded in ACGME milestone frameworks, EPA entrustment theory, and the evidence base for competency-based medical education — not generic performance management software adapted for medicine.
Work With Us
Whether you are a program director exploring a pilot, an institution evaluating GME technology, or a researcher interested in collaboration, we would like to hear from you.
MilestonesIQ is a universal growth and development platform for all GME trainees — residents, fellows, and clinical trainees across every medical specialty. We exist at the intersection of competency science, clinical education, and intelligent technology, with a single purpose: to produce stronger doctors and healthier patients.