Decoding Data: Empowering Health Care Providers and Patients with Tech Literacy

Opinion by: Carrie Boericke


Artificial intelligence has suddenly been integrated into virtually every part of our lives. It’s drawing us into conversations with the companies we visit online. It’s being used to impersonate political figures, to embarrass celebrities, and often to do our work by drafting emails, blogs, and speeches. I’ve even had to alter my university course on healthcare informatics because students were using AI to complete their homework.

The stakes are even higher for the use of data in health care. That’s why it’s important for everyone to have a basic grounding in artificial intelligence and the algorithms used to make important decisions about care. Understanding how these tools are trained and where they can go wrong is crucial to using them safely and fairly.

As part of a Public Health Informatics & Technology (PHIT) program designed to train students in data science, analysis, and visualization skills, I teach an introductory course on the use of data in health care, including the perils and promise of artificial intelligence. The program is open to health care professionals and others looking to gain skills in this area, and it provides important perspective for anyone living in a world increasingly touched by AI.

By now, most people have heard of ChatGPT, the large language model that is trained on vast amounts of data to enable it to generate human-like text. Large language models are already being used in health care for things like medical transcription or assisting with patient communication.

The integration of data and technology in health care extends far beyond text generation. Artificial intelligence is increasingly playing a pivotal role in things like detecting cancerous polyps on a colonoscopy. Data analysis can also delve into medical records to identify people at risk for heart disease or osteoporosis, create personalized cancer treatments for a specific tumor, or help predict a disease outbreak in a community.

Paired with medical devices, data analytics can do things like track blood oxygen levels. A pulse oximeter attached to a finger, toe, or earlobe measures the oxygen in your blood, providing important information about how well your lungs and heart are working. While it is not always obvious how these technologies work, it is important to understand their capabilities and limitations. Bias can creep into systems in unexpected ways.

For instance, pulse oximeters have been used for decades to collect data to inform patient care. Only last year, after questions mounted during the COVID pandemic about their accuracy in patients with darker skin, did the Food and Drug Administration begin to rethink its guidance on their use. In other cases, existing biases can be inadvertently built into algorithms, leading to disparate health outcomes for certain groups, often communities of color. For example, an algorithm designed to predict when a woman who had previously delivered a baby by C-section could safely go through a vaginal delivery was found to overestimate the risk of a subsequent vaginal delivery for Black and Hispanic women. So, for more than a decade, doctors followed this algorithm and performed numerous unneeded C-sections on women of color.

As we find more medical uses for AI, it’s important to make sure that biases like these don’t become ingrained in models used to make life-changing decisions. Insisting on “explainable AI” is one way to ensure we can spot biases like this more quickly. Explainable AI forces machine learning algorithms to show their work, so that human users have a better understanding of the source of the results and can evaluate their reliability.

Everyone, especially health care workers, should have this basic understanding of AI. We should not accept a world where all our data goes into a black box we don’t understand, rendering decisions that affect people’s health and health care. Other protections against undesired and biased results from AI include building diverse teams and monitoring algorithms for “drift” as circumstances and populations change. It is urgent that more people in all areas of health care know how data is used in this field, known as healthcare informatics. People already in health care will benefit from understanding how AI impacts our lives and health, and for those hoping to start careers in health, public health informatics can provide a starting point that will open doors and minds.

Carrie Boericke is program manager for the Public Health Informatics & Technology (PHIT) Workforce program at Dominican University New York, supported by a grant from the Office of the National Coordinator for Health Information Technology (ONC). She teaches Introduction to Public Health Informatics and Technology.
