MALINDA PEEPLES, MS, RN, CDCES, FADCES,
The diabetes care and education specialist (DCES) has an unprecedented opportunity right now to play a leadership role in their places of practice to transform health care. Artificial intelligence (AI) is here, and we each have a responsibility to bring AI to health care in a way that can help make health care healthy. Diabetes, as a largely self-managed chronic condition, lends itself to the application of AI to help make diabetes care, education, and support available continuously, on demand.
As health care providers, we may think that the technology experts will provide us with the tools we need. At ADCES24 in New Orleans, the authors presented a panel that illustrated that the DCES is integral to the development and workflow integration of AI. The DCES must ensure that AI incorporates evidence-based interventions that address health equity needs and health outcomes. In this article, we summarize some key observations and experiences that we hope will motivate you to learn more about AI and how you can become involved in this revolution.
A quick reminder of a way to systematically think about the value and role of the DCES in any innovation is to remind ourselves of the Quintuple Aim (Figure 1) from the Institute for Healthcare Improvement. Health care innovation should address these 5 aims with an equal focus1:
improving the patient experience
improving population health
improving the well-being of the care team
reducing health care costs
advancing health equity for all people, especially those most vulnerable.
“AI” is a term first used by John McCarthy in 1956 and further defined in 2004 by IBM. AI is technology that enables computers and machines to simulate human intelligence and problem-solving capabilities.2 There are several types of AI; for a more in-depth understanding, refer to our Handout of AI Resources attached as Appendix A and use it to further enhance your AI knowledge.
According to the US Government Accountability Office, AI clinical and administrative applications already exist in health care. Examples are displayed in Figure 2.3 In this article, we address clinical applications that are being used by our authors.
Where does AI fit into clinical care? The “Digital Horizon” framework recently published by Mayo Clinic4 gives us a nice way to look at our evolving technology-enabled care models. The care models in Table 1 range from face-to-face care to apps and tools that are used independently by people with chronic disease, virtual self-management support provided by nonprescribing providers, and virtual diabetes care or treatment provided by prescribing providers—essentially, anywhere health care is taking place.
Each of the care models listed in Table 1 can integrate AI functionality into their products and processes, but it must be done with caution and with the Quintuple Aim in mind. In the rest of this article, we focus on the patient and provider experience and health equity in the use of AI in diabetes cardiometabolic care.
Health equity is a state in which everyone has a fair and just opportunity to attain their highest level of health.5 Attaining the highest level of health requires eliminating health disparities and creating accessible social, physical, and economic environments that promote health and well-being for all. Addressing health disparities is necessary to address health inequities, improve overall quality of care, promote population health, and reduce costs. Studies suggest that disparities cost an estimated $93 billion in excess medical costs and $42 billion in lost productivity per year, in addition to economic losses due to premature death. Taking a deeper look at diabetes, it is estimated that more than 5% of spending on diabetes ($15.6 billion), one of the costliest diseases in this country, is linked to unnecessary spending associated with disparities.6
“The underlying problem of health care spending is health inequity.”
—Pierre Theodore, MD, vice president of health disparities, Johnson & Johnson Global Public Health
Unfortunately, most of the economic burden for racial and ethnic disparities is borne by the minoritized population groups in the following areas7:
Premature deaths: Black/African American population (69%).
Economic burden: Native Hawaiian/Pacific Islander ($23 225) and American Indian/Alaska Native ($12 351) populations had the highest economic burden per person. Most of the economic burden was attributed to premature deaths for Native Hawaiian/Pacific Islander (90%), Black/African American (77%), and American Indian/Alaska Native (74%) populations.
Excess medical costs: Asian population (55%); lost labor market productivity: Hispanic/Latino population (43%).
As the largest employer and funder of health care in the United States, the federal government has a major role to play in the development and use of AI and in mitigating its risks. On October 30, 2023, the Biden Administration released Executive Order 14110 on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. This executive order establishes a governmentwide effort to guide responsible AI development and deployment through federal agency leadership, industry regulation, and engagement with international partners.
Executive Order 14110 states,
Artificial Intelligence (AI) holds extraordinary potential for promise and peril. Responsible AI use has the potential to help solve urgent challenges while making our world more prosperous, productive, innovative, and secure. At the same time, irresponsible use could disempower workers, stifle competition, and pose risks to national security. Harnessing AI for good and realizing its benefits requires mitigating its substantial risks. This endeavor demands a societywide effort that includes government, the private sector, academia, and civil society. Executive Order (E.O.) 14110 outlined eight overarching policy areas: 1. Safety and security; 2. Innovation and competition; 3. Worker support; 4. Consideration of AI bias and civil rights; 5. Consumer protection; 6. Privacy; 7. Federal use of AI; and 8. International leadership.8
“Of all the forms of inequality, injustice in health care is the most shocking and inhumane.”
—Martin Luther King, Jr
AI has power, promise, and the potential to reduce health disparities if implemented with fairness and justice and deployed with equity in mind. To do this well, developers and DCESs must be aware of the “AI iceberg” and find ways to effectively intervene. As with a real iceberg, parts of the AI iceberg are visible and well known, while others lie hidden below the surface. These can include the following.
Computational Bias
AI systems can exhibit various forms of computational bias arising from different stages of the AI development and deployment process. For example, this can include the following:
training data bias: sampling bias (lack of diversity), historical bias (data reflect historical inequalities or prejudices), and label bias (eg, a biased labeling process in sentiment analysis);
algorithmic bias: objective function bias, feature selection bias, and bias in model architecture;
deployment bias: user interaction bias (the way users interact with the AI system introduces bias), feedback loop bias (the AI system’s outputs influence future inputs in a way that perpetuates or amplifies bias), and bias in real-world contexts (the deployment context of the AI system introduces bias);
measurement and evaluation bias: performance metric bias and benchmark data set bias (eg, common computer vision benchmarks might have fewer images from certain geographic regions).
Confirmation Bias
As with face-to-face diabetes care, there can be bias from human-AI interaction. When users tend to accept AI recommendations that align with their existing beliefs and disregard those that do not, this can reinforce their preexisting biases.
Automation Bias
This occurs when users overrely on AI outputs, assuming they are more accurate or impartial than they are. If the AI is flawed, this can lead to biased decisions.
Addressing each of these computational biases involves careful consideration at all stages of AI development, including diverse and representative data collection, fairness-aware algorithm design, and ongoing monitoring and evaluation in deployment.7
Health System Bias
In addition to mitigating the aforementioned biases, it is critical to keep in mind the current state of our health system and the prevailing biases that already exist in our clinics and practices. We must therefore develop a plan to mitigate the systemic biases that already affect access to services, quality of care, and reproductive and preventive care. These include racial and ethnic, gender, socioeconomic, geographic, LGBTQ+, disability, and age biases. Unless these biases are kept in mind and addressed at every opportunity, AI can widen health disparities instead of delivering on its promise to improve diabetes and cardiometabolic care. AI has power and promise, but it needs to be deployed intelligently.
“Artificial intelligence has potential, promise, and power, but we must keep humans on top of and in the loop to enhance health equity.”
—Magon Saunders, DHSC, MS, RDN, LD, FADCES, FAC-COR-1 (2024)
The DCES has a critical role as we use these AI technologies to enhance diabetes and cardiometabolic care. The following 4 steps can guide us to participate in the development and use of equitable AI.
Practice responsible AI: Engage diverse staff, be ethical, values-focused, fair, and inclusive. Work to reduce all the aforementioned biases and always keep humans in the loop.
Explainability and interpretability: Be able to explain and interpret AI data and processes to various audiences, especially our clients, using lay terms.
Bake in equity: Build equity into every step of the AI process and use checklists and checks and balances to ensure that the process is unbiased. Inclusivity and equity should be the guiding principles in every aspect of AI implementation. By considering equity in everything, we can ensure a more just and balanced health care system. Our diverse clients with diabetes and the DCES should be encouraged to get involved in this stream of work. DCESs must be at the research, clinical trial, and AI tables.
Build self-efficacy: As a DCES, our goal is to help our patients to better self-manage their diabetes and to build self-efficacy. Therefore, we must empower our clients and use AI to help them better manage their diabetes and other medical conditions.
Taking all 4 of these steps will go a long way in building equity into the AI process.
Best equity practices for the DCES in AI implementation in diabetes and cardiometabolic care include:
Work with advocates: Ensure equity practices and policies are considered throughout the AI process.
Build diverse expert teams: Assemble multidisciplinary teams with diverse expertise, including data scientists, clinicians, ethicists, and social scientists, to inform, develop, and evaluate AI systems. Diversifying the AI workforce will help to mitigate some of these risks.
Standardize protocols: Ensure that everyone is treated the same and that there is no discrepancy in treatment.
Continue to work to improve diabetes self-management and patient efficacy: AI is a “shiny new toy” that has the potential to improve diabetes and cardiometabolic care, but adding this to a broken, expensive health care system will not achieve the outcomes we seek unless we double down on our efforts to build equity into our practices and fix the areas that are broken.
Humanize the AI process: Keep humans in and on top of the AI loop.
Provide continuous training and professional development: Provide ongoing education and training for health care providers on AI technologies to ensure they are proficient in using and interpreting AI tools.
Ensure that all AI interactions are ethical: Given the extensive incentives in this space, when incentives drive the AI system, ethics will suffer.8-11
It is important to remember that equity is in the minutiae, so we must “bake” equity into all AI policies and practices to harness AI’s power and promise. Health equity is a team sport; the team should look like the America we serve. Order matters; amazing, positive AI advances will not help much in a dysfunctional society where trust, physical and mental health, and so on have broken down. We must continue to work to build trust and to improve whole-person care for the clients that we serve. Most importantly, in the race to build more powerful AI, we cannot forget the people with diabetes, the technology’s users, and the most vulnerable in our society. We must strive to build humane technologies, share understanding, and support fairness and justice.
Finally, as DCESs, we must put ethics before incentives. Current AI incentives can create a dangerous spiral and support harmful technology. Unless we consider equity, ethics will suffer. Lastly, we cannot lose sight of why we came to diabetes care. Let’s keep our clients, their self-efficacy, and equity front and center!
Although many of us may think that AI is the “new thing,” as noted earlier, the term dates back to 1956, with IBM offering its modern definition in 2004. Indeed, AI is already being used in diabetes-cardiometabolic solutions, in part, due to the following:
There are massive amounts of data from devices.
Computational power on large-scale data is available.
Algorithms for metabolic management have been developed and validated.
Care can be delivered at individual and population levels.
National standards for medical and education care have been established.
AI is already in use in diabetes care and education, including in the following ways:
automated insulin delivery;
EMR data extraction, discovery, and smart phrases;
continuous glucose monitors (CGM) and data visualization;
connected insulin pens, insulin dosage algorithms;
diabetes coaching apps;
retinopathy and neuropathy diagnosis and care;
meal planning.
Approaches for integrating the patient-generated health data from these diabetes technologies into clinical practice include the following:
use of information to trigger a visit, call, or notification;
ability to provide educational resources within the solution—“just in time”;
2-way messaging to answer questions, provide guidance and support, and so on;
consolidation and analysis of data to suggest or perform an action or enter into the electronic health record.12
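To make the first of these patterns concrete, the logic behind a data-triggered notification can be sketched in a few lines. The thresholds, names, and alert wording below are purely illustrative assumptions, not taken from any specific product, guideline, or standard of care:

```python
# Hypothetical sketch: flag glucose readings that should trigger outreach.
# Thresholds and action text are illustrative only, not clinical guidance.

TARGET_LOW_MG_DL = 70    # below this, flag for immediate follow-up (assumed)
TARGET_HIGH_MG_DL = 250  # above this, flag for clinician review (assumed)

def triage_cgm_readings(readings_mg_dl):
    """Return (index, value, action) tuples for out-of-range readings."""
    alerts = []
    for i, value in enumerate(readings_mg_dl):
        if value < TARGET_LOW_MG_DL:
            alerts.append((i, value, "notify care team: possible hypoglycemia"))
        elif value > TARGET_HIGH_MG_DL:
            alerts.append((i, value, "schedule DCES follow-up: hyperglycemia"))
    return alerts

readings = [110, 62, 145, 280, 95]
for index, value, action in triage_cgm_readings(readings):
    print(f"reading #{index}: {value} mg/dL -> {action}")
```

In a deployed solution, the flagged readings would feed the visit, call, or notification workflow described above rather than printing to a screen.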
As DCESs, we can contribute to both the development and implementation of an AI-enabled product in the technology-enabled care model. Specifically, the DCES can ensure equitable care and integration into the clinical workflow and demonstrate the added value of AI as a clinical tool.
To illustrate how the DCES is integral to the use of AI in clinical care, we present a synthetic wound care solution based on an aggregate of current wound care solutions in clinical use, enhanced by AI. Machine learning is applied to a vast data set of wound care encounters. Very large numbers of encounters (in the millions) are required to ensure relevant output. Furthermore, to avoid sampling bias, these encounters need to include individual patients of varying ages from a wide range of ethnic groups, with a variety of skin tones and visit and wound data.
The data set inputs include:
wound size and depth
color as assessed by AI image analysis
disease state and comorbidities
age of wound at encounter
patient health markers
glucose values
relevant lab values
standards of care in wound care.
The application of data processing and machine learning to this large data set was used to develop a wound treatment and prediction algorithm that identifies the likelihood of wound closure or maintenance that can then be applied in ongoing clinical care for future patients. With the addition of each new encounter, AI can continue to learn, and the wound treatment and prediction algorithm improves in its ability to predict outcome and direct care.
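To give a sense of the kind of model that might sit behind such an algorithm, the sketch below trains a toy logistic regression on a handful of synthetic encounters. Everything here is an illustrative assumption of ours—the features, the data, and the model choice; a real solution would use far richer inputs and, as noted above, millions of diverse encounters:

```python
import math

# Toy sketch of a wound-closure prediction model (logistic regression).
# Features, weights, and data are synthetic and illustrative only.

def predict(weights, bias, features):
    """Logistic regression: estimated probability of wound closure."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

def train(data, labels, lr=0.1, epochs=2000):
    """Fit weights by stochastic gradient descent on cross-entropy loss."""
    n = len(data[0])
    weights, bias = [0.0] * n, 0.0
    for _ in range(epochs):
        for features, y in zip(data, labels):
            err = predict(weights, bias, features) - y
            bias -= lr * err
            weights = [w - lr * err * x for w, x in zip(weights, features)]
    return weights, bias

# Synthetic encounters: [wound size (cm^2), wound age (weeks), mean glucose / 100]
data = [[1.0, 2, 1.1], [0.5, 1, 1.0], [4.0, 10, 2.2], [3.5, 8, 1.9],
        [0.8, 3, 1.2], [5.0, 12, 2.5]]
labels = [1, 1, 0, 0, 1, 0]  # 1 = wound closed, 0 = did not close

weights, bias = train(data, labels)
p = predict(weights, bias, [0.7, 2, 1.1])  # a small, recent wound
print(f"predicted closure probability: {p:.2f}")
```

The “continues to learn” behavior described above corresponds to periodically retraining on each new encounter, so the model’s predictions improve as the data set grows.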
The care team applies the wound treatment and prediction algorithm via HIPAA-compliant AI processing of patient-generated health data to create tailored, data-driven care plans that uniquely guide the care of each person.
This example wound care solution includes the additional uses of AI to create the following:
Standardized documentation is created using an AI scribe feature (eg, SOAP note generation).
Patient education assignments are automated based on smart phrases or key word triggers within a patient’s care note.
Automated referrals are made, including DCES follow-up for glycemic management based on smart phrases or reaching set threshold levels from connected glucose data (CGM or blood glucose monitoring data) based on information determined by the wound treatment and prediction algorithm.
Messaging follow-up is sent using natural language processing after a clinical visit to coordinate care, provide support, schedule follow-up visits, and support people in self-managing their condition.
Chat summarization is created to guide messaging, clinical care, self-management, and education requirements.
Nutrition education needs assessment is made based on food intake data from AI analysis of food recall information collected during an in-person visit or logged food journal reports to create tailored meal plans (note: meal planning example is included in the next section of this article).
This AI-driven wound care solution illustrates only 1 example of the value of the DCES in the application of AI in patient care. With the clinician as the expert, health care can take advantage of the power of AI. The vast knowledge, training, and clinical expertise of the DCES enables the application of AI to evaluate large amounts of patient clinical information to not only create a product solution but also to automate the provision of appropriate clinical care. AI has the potential to improve care and workflow and maximize clinician hours. With the DCES included in the process, the solution is more likely to be relevant, include best equity practices, and improve clinical workflow.
Another example of the use of AI by DCESs is large language models (LLMs). LLMs are advanced AI models that process and generate human language.13
These models are trained on extensive amounts of data from books, web pages, and articles, which allows them to answer questions, create content, and write essays with some level of reasoning. LLMs are a type of generative AI specifically designed to mimic human text and create content, images, audio, or code. Examples of LLMs include OpenAI’s ChatGPT, Google’s Gemini, Meta’s LLaMA, and Microsoft’s Copilot, which integrates OpenAI models into its productivity tools. They can help streamline research, education material development, and patient education strategies. This article provides insights on how these AI tools can be leveraged in a clinical setting while highlighting their practical applications and limitations.
Today’s LLMs can generate text and content, assist with literature search, answer complex questions, create images, develop scientific professional presentations and patient presentations, and even translate languages, which makes them incredibly versatile for diverse tasks.
Utilization of an LLM begins when the user enters a query or a prompt. The query may be a question, such as, “What is the scientific evidence of intermittent fasting and type 2 diabetes in adults?,” or a task, such as “Create a 10-slide presentation about intermittent fasting and type 2 diabetes for adults. Include definition of intermittent fasting; types of intermittent fasting; scientific research on intermittent fasting; benefits and contraindications” (see Table 2). The response to the text-typed question included 3 accurate references and conclusions but missed a key finding from one study. The outline provided in response to the text-typed task was clear and effective, offering a time-saving resource for clinicians.
To create a graphic, use the task, “Create a graphic for a presentation titled: Intermittent fasting and type 2 diabetes in adults.” Choose different LLMs to see which one offers the most suitable result. Microsoft Copilot can assist in creating visual aids, such as infographics that simplify complex information into digestible graphics.
By understanding how each tool functions, DCESs can choose the right LLM to meet their specific needs in diabetes care and education.
LLMs can be utilized to generate detailed and personalized meal plans. The ability of ChatGPT to create accurate, personalized, and varied meal plans for persons with obesity, cardiovascular disease, and type 2 diabetes depends significantly on the input provided. Prompts should include specific nutritional rules, including target energy intake and nutritional content (ie, fat, carbohydrate, protein, iron, vitamin C), to generate accurate meal plans. The knowledge-based data yielded 99% nutrient accuracy, whereas ChatGPT-based recommendations achieved 91% mean nutrient accuracy. Including specific energy targets in the prompts increased meal plan precision, and ChatGPT’s capacity for variety made 7-day meal plans more appealing to patients. Registered dietitians and DCESs should review these plans to confirm nutritional accuracy, cultural suitability, and cost-effectiveness.14
Table 3 shows the result by ChatGPT when prompted by LD to generate a 1-day meal for a Colombian woman from Medellín with type 2 diabetes, specifying 1500 calories; 50 g of carbohydrate per meal; 15 g of protein at breakfast, 25 g at lunch, and 25 g at dinner; fewer than 15 g of saturated fat daily; and 20 g of dietary fiber. LD also requested that all nutrients be indicated for each food item and that the menu be printed in English and Spanish. Although most instructions were followed, the initial output did not include caloric values, dietary fiber totaled 42 g, protein at breakfast was slightly below target, and the food options were somewhat generic. A follow-up prompt to include the caloric value of foods and increase breakfast protein from 10 g to 15 g successfully addressed these issues.
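Because prompt precision drives output quality, a prompt like the one above can be templated so that the same nutritional parameters are specified consistently every time. The helper below is a hypothetical sketch of ours; the parameter names and template wording are illustrative, not from any published tool:

```python
# Illustrative helper for assembling a detailed meal-plan prompt.
# The template and parameter names are hypothetical examples.

def build_meal_plan_prompt(culture, condition, calories, carbs_per_meal_g,
                           protein_g, max_sat_fat_g, fiber_g, languages):
    """Assemble a meal-plan prompt with explicit nutritional targets."""
    protein_text = ", ".join(f"{g} g at {meal}" for meal, g in protein_g.items())
    return (
        f"Create a 1-day meal plan for a {culture} adult with {condition}. "
        f"Target {calories} calories; {carbs_per_meal_g} g of carbohydrate per meal; "
        f"protein: {protein_text}; fewer than {max_sat_fat_g} g of saturated fat daily; "
        f"{fiber_g} g of dietary fiber. Indicate all nutrients for each food item. "
        f"Print the menu in {' and '.join(languages)}."
    )

prompt = build_meal_plan_prompt(
    culture="Colombian (Medellin)", condition="type 2 diabetes",
    calories=1500, carbs_per_meal_g=50,
    protein_g={"breakfast": 15, "lunch": 25, "dinner": 25},
    max_sat_fat_g=15, fiber_g=20, languages=["English", "Spanish"],
)
print(prompt)
```

Keeping the targets in one reusable template also makes it easier to issue the kind of corrective follow-up prompt described above when the first output misses a requirement.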
LLMs can assist in creating a wide variety of educational resources tailored for diabetes care, including responses to commonly asked questions. Microsoft’s Copilot also adds the ability to create custom images that can be used for patient education, social media posts, and slide decks. Here is how AI can be applied to different educational resources:
Patient FAQs: LLMs can be used to prepare responses to frequent patient questions. For example, if a patient asks about the impact of low- and no-calorie sweeteners on blood glucose levels or asks what they should eat after being diagnosed with metabolic dysfunction-associated steatotic liver disease, the DCES can create an AI-generated response that is evidence-based, patient-friendly, and culturally relevant.15
Recipe creation: DCESs can generate sample vegetarian, Mediterranean, or culturally specific recipes that align with diabetes management goals by providing an LLM with details about the patient’s dietary preferences, budget, or restrictions. An example query might be, “Provide a vegetarian lasagna recipe without garlic using spinach that costs $3 per serving.”
Slide decks and infographics: Microsoft Copilot and similar tools allow users to create visually appealing, informative slides or infographics. For example, a user could enter a prompt like, “Generate a slide on the benefits of fiber in blood glucose management,” and quickly receive engaging visuals to support patient education sessions.
Language: For patients who speak different languages, LLMs can help translate and create educational materials in various languages. Have native speakers and CDCESs review the educational materials for accuracy before sharing them with patients.
Titles: When you need imaginative titles for articles, classes, or webinars, LLMs can serve as a creative assistant to generate fresh, engaging options.
LLMs are prone to hallucinations, which are outputs that are fabricated, inconsistent, or different from the user’s intent.16 For example, a user prompts the LLM to provide references on sarcopenia in men over the age of 50. When checking the references, some appear to have incorrect titles, and others have the correct authors but incorrect journal names. This is a case of hallucination.
To minimize hallucinations:
ensure specific and detailed input (prompts);
cross-check all references when doing research;
review and revise all output.
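Part of the cross-checking step can be systematized. The sketch below flags AI-supplied citations whose titles do not appear in a verified list; the data and the matching rule are illustrative assumptions only, and real verification still means looking each reference up in PubMed or the journal itself:

```python
# Illustrative check of AI-generated citations against a verified list.
# The verified set and citations here are made up; in practice the set
# would come from an actual database lookup (eg, a PubMed title search).

verified_titles = {
    "sarcopenia and resistance training in older men",
    "protein intake and muscle mass in aging",
}

def flag_possible_hallucinations(citations):
    """Return citations whose titles are not in the verified set."""
    return [c for c in citations
            if c["title"].strip().lower() not in verified_titles]

citations = [
    {"title": "Sarcopenia and resistance training in older men",
     "journal": "J Hypothetical Res"},
    {"title": "A fabricated-sounding study that does not exist",
     "journal": "J Hypothetical Res"},
]
for c in flag_possible_hallucinations(citations):
    print("verify manually:", c["title"])
```

A title match alone is not proof of accuracy—as noted above, hallucinated references can pair correct authors with the wrong journal—so flagged and unflagged entries alike still warrant human review.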
In summary, AI tools, especially LLMs, offer transformative potential for DCESs working in diabetes-cardiometabolic care by helping them gather research, develop patient education materials, and provide language-appropriate resources. Using platforms like ChatGPT, Gemini, and Copilot can enhance productivity, empower patient-centered education, and support ongoing professional knowledge building. However, because AI can produce inaccurate results, users should always verify AI-generated information with reliable sources to ensure accuracy.
By thoughtfully integrating AI solutions into their practice, DCESs can increase access to their services and continue to improve diabetes care and education while making complex information more accessible and engaging for people with diabetes.
The ADCES24 panel that spoke during one of the General Sessions concluded with a challenge to the DCES: increase your understanding of AI, look for ways to get involved with its use in your practice, and work with the association to learn how to move to the next generation of DCES practice. In a companion article, Macleod discusses short-term action steps and longer-term plans to put in place to lead in building the bridge to optimal diabetes-cardiometabolic care leveraging AI.
Malinda Peeples https://orcid.org/0000-0003-4097-1584
Janice Macleod https://orcid.org/0000-0001-8689-4938
Institute for Healthcare Improvement. Quintuple Aim. Accessed November 11, 2024. https://www.ihi.org/resources/Pages/Publications/quintuple-aim-for-health-care-improvement.aspx
IBM. What is artificial intelligence (AI)? Published August 9, 2024. Accessed November 11, 2024. https://www.ibm.com/topics/artificial-intelligence
Government Accountability Office. Artificial intelligence in healthcare. Accessed November 14, 2024. https://www.gao.gov/products/gao-21-7sp
Philpot LM, Dugani SB, Singla A, DeZutter M, Ebbert JO. Digital Care Horizon: a framework for extending health care through digital transformation. Mayo Clin Proc Digit Health. 2023;1(3):210-216. doi:10.1016/j.mcpdig.2023.05.005
Office of Disease Prevention and Health Promotion, Office of the Assistant Secretary for Health, Office of the Secretary, U.S. Department of Health and Human Services. Health equity in Healthy People 2030. Published 2020. Accessed November 13, 2024.
Deloitte Insights. US health care can’t afford health inequities. Inequities in the US health system cost approximately $320 billion today and could eclipse $1 trillion in annual spending by 2040 if left unaddressed. Published June 22, 2022. Accessed February 24, 2025. https://www2.deloitte.com/us/en/insights/industry/health-care/economic-cost-of-health-disparities.html
IBM. What is AI bias? Published December 22, 2023. Accessed November 13, 2024. https://www.ibm.com/think/topics/ai-bias
Mullan I. Health equity and ethical considerations in using artificial intelligence in public health and medicine. Prev Chronic Dis. 2024;22(2):E64. doi:10.5888/pcd21.240245
Mann H. Do all AI systems need to be explainable? Stanford Social Innovation Review. Published November 15, 2023. Accessed November 14, 2024. https://ssir.org/articles/entry/do_ai_systems_need_to_be_explainable
Chen Y, Clayton EW, Novak LL, Anders S, Malin B. Humancentered design to address biases in artificial intelligence. J Med Internet Res. 2023;25:e43251. doi:10.2196/43251
Dankwa-Mullan I, Scheufele EL, Matheny M, et al. A proposed framework on integrating health equity and racial justice into the artificial intelligence development lifecycle. J Health Care Poor Underserved. 2021;32(2):300-317.
Klutka J, Ackerly N, Magda AJ. Artificial Intelligence in Higher Education: Current Uses and Future Applications. Learning House; 2018.
Boscardin CK, Gin B, Golde PB, Hauer KE. ChatGPT and generative artificial intelligence for medical education: potential impact and opportunity. Acad Med. 2024;99(1):22-27. doi:10.1097/ACM.0000000000005439
Papastratis I, Stergioulas A, Konstantinidis D, Daras P, Dimitropoulos K. Can ChatGPT provide appropriate meal plans for NCD patients? [published correction appears in Nutrition. 2024;128:112532. doi:10.1016/j.nut.2024.112532.] Nutrition. 2024;121:112291. doi:10.1016/j.nut.2023.112291
Ponzo V, Goitre I, Favaro E, et al. Is ChatGPT an effective tool for providing dietary advice? Nutrients. 2024;16(4):469. doi:10.3390/nu16040469
Liu F, Liu Y, Shi L, et al. Exploring and evaluating hallucinations in LLM-powered code generation. arXiv. 2024. https://arxiv.org/abs/2404.00971
ADCES 2024 Panel Presentation: Employ Artificial Intelligence to Advance Diabetes-Cardiometabolic Care
Moderator/Speakers: Malinda Peeples, MS, RN, CDCES, FADCES; Lorena Drago, MS, RD, CDN, CDCES; LaurieAnn Scher, MS, RD, CDCES, FADCES; Magon Saunders, DHSc, MS, RDN, LD, FADCES, FAC-COR-1; Janice MacLeod, MA, RD, CDCES, FADCES
Action steps to take now to build AI acumen:
Follow, listen, learn, join the conversation. Many of these recommended AI experts are not thinking about the role of the DCES. That is our job. We can add valuable perspective and discover and promote opportunities for DCESs to be at the table by providing thoughtful responses to posts.
Dr. Bertalan Mesko: MedicalFuturist.com; https://www.linkedin.com/in/bertalanmesko/
Dr. Eric Topol: https://www.linkedin.com/in/eric-topol-md-b83a7317/
Dr. Harvey Castro: harveycastromd.info; linkedin.com/in/harveycastromd
Tom Lawry: https://www.tomlawry.com/; https://www.linkedin.com/in/tomlawry/
Sergei Polevikov: AI Health Uncut: sergeiAI.substack.com; linkedin.com/in/sergeiai
Read the Coalition for Health AI Blueprint for Trustworthy AI: https://www.coalitionforhealthai.org/papers/blueprint-for-trustworthy-ai_V1.0.pdf
Read: Lawry T. Hacking health care: how AI and the intelligence revolution will reboot an ailing system. Routledge; 2023
Read: American Medical Association. Augmented intelligence in medicine. Accessed February 28, 2024. https://www.ama-assn.org/practice-management/digital/augmented-intelligence-medicine.
Explore AI training and certification offerings at the American Board of Artificial Intelligence in Medicine: https://abaim.org/
Review National Academy of Medicine: Code of Conduct for AI in health, healthcare and biomedical science: https://nam.edu/programs/value-science-driven-health-care/health-care-artificial-intelligence-code-of-conduct/
Keep up with publications on AI in healthcare by following Jan Berger from GE Health on LinkedIn who summarizes and provides link to relevant articles: https://lnkd.in/eR7qichj
Follow the Consumer Technology Association on LinkedIn: https://www.linkedin.com/company/consumer-technology-association/ and read their Standards documents on AI
National AI Policy and Regulatory Framework
Artificial Intelligence in Health Care: Practices for Identifying and Managing Bias (ANSI/CTA-2116)
Make a list of every repetitive task you do over and over each day as you provide consultations, teach classes, and document your visits. How could AI help? Use the ChatGPT prompt guides and cheat sheets here to learn how to refine your prompts to optimize output:
ChatGPT Prompt Cheat Sheet by the Medical Futurist, Dr. Bertalan Mesko: https://www.linkedin.com/in/bertalanmesko/recent-activity/all/
ChatGPT Cheat Sheet by Dr. Harvey Castro (AI expert and medical doctor): https://www.linkedin.com/posts/harveycastromd_chatgpthealthcare-thegptpodcast-harveycastromd-activity-7090635508002811904-3hxw/
Make a list of what you could do to help clients, if freed from the rote, repetitive tasks. Read this related LinkedIn post: https://www.linkedin.com/pulse/healthy-eating-diabetes-person-list-foods-avoid-janice/
Lean in. Embrace the change. Be part of the solution.
Federal Resources on AI
Executive Order 14110: Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. Retrieved from 2023-24283.pdf (govinfo.gov)
Peer-Reviewed Resources & Panel Presentation
Alanazi A. Clinicians’ views on using artificial intelligence in healthcare: opportunities, challenges, and beyond. Cureus. 2023;15(9):e45255. doi:10.7759/cureus.45255
Hoang YN, Chen YL, Ho DKN, et al. Consistency and accuracy of artificial intelligence for providing nutritional information. JAMA Netw Open. 2023;6(12):e2350367.
Klutka J, Ackerly N, Magda AJ. Artificial Intelligence in Higher Education: Current Uses and Future Applications. Learning House; 2018.
McGowan A, Gui Y, Dobbs M, et al. ChatGPT and Bard exhibit spontaneous citation fabrication during psychiatry literature search. Psychiatry Res. 2023;326:115334. doi:10.1016/j.psychres.2023.115334
Park W, Seo SW, Kang N, et al. Artificial intelligence in health care: current applications and issues. J Korean Med Sci. 2020;35(42):e379. doi:10.3346/jkms.2020.35.e379
Philpot LM, Dugani SB, Singla A, DeZutter M, Ebbert JO. Digital Care Horizon: a framework for extending health care through digital transformation. Mayo Clin Proc Digit Health. 2023;1(3):210-216. doi:10.1016/j.mcpdig.2023.05.005
Sng GGR, Tung JYM, Lim DYZ, Bee YM. Potential and pitfalls of ChatGPT and natural-language artificial intelligence models for diabetes education. Diabetes Care. 2023;46(5):e103-e105. doi:10.2337/dc23-0197
Other Gray Literature
Johnson & Johnson (2024). Artificial intelligence is helping revolutionize healthcare as we know it. jnj.com.
Springer (2022). Artificial Intelligence in Healthcare: Recent Applications and Developments. SpringerLink.
The Conference Board (2023). Explainability in AI: The Key to Trustworthy AI Decisions. conference-board.org.