Artificial Intelligence
AI offers primary care both promise and peril. It promises to relieve physicians of their administrative burdens. Yet it also presents perils: potentially furthering inequity, causing patient harm, and threatening the role of primary care physicians and their scope of practice.
Promise
AI promises to help physicians address their three existential threats: administrative burden, an unsustainable payment model, and a narrowing scope of practice. The growing list of potential applications of AI in primary care includes:
- Chart review and documentation
- Risk prediction and intervention
- Diagnostics
- Population health management
- Device integration
- Medical advice/triage
- Risk-adjusted paneling/resourcing
- Practice management
- Digital health coaching
- Clinical decision making
Peril
AI also presents potential perils to patients, physicians, and practices. Perils of AI in primary care include:
- Profit-driven design
- Hallucinations
- Health inequity
- Lack of clear accountability
- Improper training data: selection bias, implicit bias
- Lack of explainability

Profit-driven Design
Primary care cannot allow AI to follow the same path as EHRs. Early on, when physicians were leading, EHRs appeared to be on track. But that leadership was supplanted by others, leading to “meaningful use” and EHRs that are now central to administrative burden and physician burnout. Primary care must lead AI innovation to ensure this does not happen again. AI must work for us, not against us.

Narrowing the Scope of Practice
Under profit-driven design, artificial intelligence could be applied to replace some, if not all, of the role of a primary care provider interfacing directly with patients to evaluate, diagnose, and even treat them.

AI Innovation Labs
Physicians can consider a growing set of categories of AI innovation that are proving helpful in fixing primary care, for example:
- AI-powered, voice-enabled assistants for documentation (see the sketch below)
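
As one concrete illustration, a documentation assistant typically chains speech-to-text with a note-drafting model while keeping the physician as the final author. The sketch below is a minimal, hypothetical pipeline: `transcribe` and `draft_note` are stand-ins for whatever speech recognition and language-model services a vendor actually provides, not any specific product's API.

```python
from dataclasses import dataclass


@dataclass
class DraftNote:
    """A drafted SOAP note awaiting physician review."""
    subjective: str
    objective: str
    assessment: str
    plan: str
    physician_signed: bool = False


def transcribe(audio_path: str) -> str:
    # Stand-in for a real speech-to-text service call on the visit recording.
    return "Patient reports three days of cough and low-grade fever..."


def draft_note(transcript: str) -> DraftNote:
    # Stand-in for a language-model call that drafts a note from the
    # transcript; a real system would prompt a model and parse its output.
    return DraftNote(
        subjective=transcript,
        objective="Vitals reviewed; lungs clear to auscultation.",
        assessment="Likely viral upper respiratory infection.",
        plan="Supportive care; return if symptoms worsen.",
    )


def document_visit(audio_path: str) -> DraftNote:
    note = draft_note(transcribe(audio_path))
    # The draft is never filed automatically: it is queued for physician
    # review and sign-off, keeping AI in a supporting role.
    assert not note.physician_signed
    return note


print(document_visit("visit_2024_03_01.wav").assessment)
```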
Principles
1. Preserve and Enhance Primary Care
As the patient-physician dyad is expanded to a triad with AI, the patient-physician relationship must, at a minimum, be preserved and, ideally, enhanced. When AI/ML is applied to primary care, it must enhance the 4 C’s of primary care (first contact, comprehensiveness, continuity, and coordination) and expand primary care’s capacity and capability to provide longitudinal care that achieves the quintuple aim.

2. Maximize Transparency
AI/ML solutions must provide transparency to the physician and other users so that the solution’s efficacy and safety can be evaluated. Companies must be transparent about the data used to train their models. Companies should also provide clear, understandable information describing how the AI/ML solution makes predictions: ideally for each individual inference, but at a minimum as a conceptual model of the decision-making, including the importance of the data leveraged for the inference.
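
What per-inference transparency can look like in the simplest case: for a linear (logistic-regression) risk model, each input's contribution to a prediction is just its coefficient times its value, so the solution can report exactly which data drove the inference. The weights, feature names, and patient values below are illustrative assumptions; more complex models need dedicated explanation methods, but the reporting obligation is the same.

```python
import numpy as np

# Illustrative only: an assumed logistic-regression risk model, used to
# show how per-inference importance can be reported to the physician.
feature_names = ["age", "systolic_bp", "a1c", "prior_admissions"]
coef = np.array([0.03, 0.02, 0.40, 0.55])   # assumed model weights
intercept = -9.0

def explain_inference(x: np.ndarray) -> None:
    logit = intercept + coef @ x
    risk = 1.0 / (1.0 + np.exp(-logit))
    print(f"predicted risk: {risk:.2f}")
    # For a linear model, each feature's contribution to the logit is
    # exactly coefficient * value, reportable per inference.
    contributions = coef * x
    for name, c in sorted(zip(feature_names, contributions),
                          key=lambda p: -abs(p[1])):
        print(f"  {name:>17}: {c:+.2f}")

explain_inference(np.array([67.0, 142.0, 8.1, 2.0]))
```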

3. Address Implicit Bias
Companies providing AI/ML solutions must address implicit bias in their design. We understand implicit bias cannot always be completely eliminated; still, companies should have standard processes in place to identify implicit bias and to keep their AI/ML models from learning those same biases. In addition, when applicable, companies should have processes for monitoring for differential outcomes, particularly those that affect vulnerable patient populations.
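
One concrete form such monitoring can take: compute an error rate, such as the false-negative rate of a risk-flagging model, separately for each patient subgroup and alert when the gap exceeds a threshold. The subgroups, records, and threshold below are illustrative assumptions, not a prescribed standard.

```python
from collections import defaultdict

# Illustrative records: (subgroup, model_flagged, actually_had_event).
# Real monitoring would pull these from production predictions and outcomes.
records = [
    ("group_a", True, True), ("group_a", False, True), ("group_a", True, False),
    ("group_a", True, True), ("group_b", False, True), ("group_b", False, True),
    ("group_b", True, True), ("group_b", False, False),
]

def false_negative_rates(records):
    missed, positives = defaultdict(int), defaultdict(int)
    for group, flagged, had_event in records:
        if had_event:
            positives[group] += 1
            if not flagged:
                missed[group] += 1
    return {g: missed[g] / positives[g] for g in positives}

rates = false_negative_rates(records)   # FNR per subgroup
gap = max(rates.values()) - min(rates.values())
print(rates)
if gap > 0.10:   # threshold is an assumption, set by governance policy
    print(f"differential-outcome alert: FNR gap of {gap:.2f} across groups")
```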

4. Maximize Training Data Diversity
To maximize the generalizability of AI/ML solutions, training data must be diverse and representative of the populations cared for by family medicine. Companies must provide clear documentation of the diversity of their training data, and should work to increase that diversity so as not to deepen existing health inequities or create new ones.
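
Such documentation can be made concrete by comparing the demographic makeup of the training data against a reference population served by the practice, for instance with total variation distance. The categories and shares below are illustrative assumptions, not real data.

```python
# Illustrative comparison of training-data makeup vs. a reference population.
training_share = {"group_a": 0.72, "group_b": 0.14, "group_c": 0.09, "group_d": 0.05}
population_share = {"group_a": 0.58, "group_b": 0.19, "group_c": 0.15, "group_d": 0.08}

# Total variation distance: half the sum of absolute share differences.
# 0 means the training data mirrors the population; 1 means no overlap.
tvd = 0.5 * sum(abs(training_share[g] - population_share[g])
                for g in population_share)
print(f"representation gap (TVD): {tvd:.2f}")

for g in population_share:
    gap = training_share[g] - population_share[g]
    print(f"  {g}: {'over' if gap > 0 else 'under'}-represented by {abs(gap):.0%}")
```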

5. Respect the Privacy of Patients and Users
AI/ML requires large volumes of data for training. It is critical that patients and physicians can trust companies to maintain the confidentiality of their data. Companies must provide clear policies on how they collect, store, use, and share data from patients and end users. Companies must obtain consent before collecting any identifiable data, and the consent must clearly state how the data will be used or shared.

6. Take a Systems View in Design
An AI/ML solution will be a component in a larger work system, and therefore must be designed as an integrated part of that system. This means the company must understand how the AI/ML solution will be used within a workflow and must take a user-centered design approach. Since the vast majority of AI/ML solutions in health care will not be autonomous, the company must understand and leverage the latest science on human/AI interaction as well as quality assurance.

7. Take Accountability
If an AI/ML solution is going to take a prominent role in health care, the company must take accountability for assuring the solution is safe. Solutions designed for use in direct patient care must undergo evaluation as rigorous as that for any other medical intervention. We also believe that companies should take on liability where appropriate, with appropriateness determined by a risk-based model that accounts for the role played by AI/ML and the situation in which it is applied. A good starting point for such a risk model is the Food & Drug Administration (FDA) and Office of the National Coordinator for Health Information Technology (ONC) framework for Software as a Medical Device.
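
For reference, the risk stratification behind the FDA's Software as a Medical Device (SaMD) guidance, developed with the International Medical Device Regulators Forum (IMDRF), assigns one of four risk categories from two factors: the significance of the information the software provides and the seriousness of the patient's situation. A minimal encoding of that published table:

```python
# IMDRF SaMD risk categories (I = lowest risk, IV = highest), keyed by
# (significance of the information, state of the healthcare situation).
SAMD_CATEGORY = {
    ("treat_or_diagnose", "critical"):    "IV",
    ("treat_or_diagnose", "serious"):     "III",
    ("treat_or_diagnose", "non_serious"): "II",
    ("drive_management",  "critical"):    "III",
    ("drive_management",  "serious"):     "II",
    ("drive_management",  "non_serious"): "I",
    ("inform_management", "critical"):    "II",
    ("inform_management", "serious"):     "I",
    ("inform_management", "non_serious"): "I",
}

# Example: a tool that merely informs management of a non-serious condition
# (say, drafting visit documentation) sits in the lowest-risk category,
# while one that diagnoses a critical condition sits in the highest.
print(SAMD_CATEGORY[("inform_management", "non_serious")])  # I
print(SAMD_CATEGORY[("treat_or_diagnose", "critical")])     # IV
```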

8. Design for Trustworthiness
Maintaining the trust of physicians and patients is critical to a successful future for AI/ML in health care. Companies must implement policies and procedures that ensure the above principles are appropriately addressed. Companies must strive for the highest levels of safety, reliability, and correctness in their AI/ML solutions, and should consider how to maximize trust with physicians and patients throughout the entire product lifecycle. AI/ML will continue its rapid advancement, so companies must continually adopt state-of-the-art best practices.