Introducing “Health Communication AI”: The Next Iteration of Opinion Leader for the Age of Artificial Intelligence

Authors: Amelia Burke-Garcia, PhD, AM, Director, Center for Health Communication Science and Program Area Director, Digital Strategy & Outreach, NORC at the University of Chicago; and Rebecca Soskin Hicks, MD, NORC at the University of Chicago.
Today, we find ourselves in one of the most challenging communication environments ever faced – one characterized by a fragmented information ecosystem driven in large part by mis- and disinformation, a loss of institutional trust, and a widening digital divide.
The emergence of artificial intelligence (AI) and large language models (LLMs) is exacerbating these issues.
AI-generated misinformation increases the spread of, and exposure to, misleading health and medical information, posing a major challenge to health and well-being (Monteith et al., 2023). Such misinformation may include factual errors, fabricated sources, and dangerous advice – all of which can impact lives.
It is not all bad news, though.
Early research suggests that AI models can deliver accurate health information – and do so empathetically. Some models have been shown to surpass general human emotional awareness, scoring near the maximum possible score on the Levels of Emotional Awareness Scale (LEAS) (Elyoseph et al., 2023), and work by Ayers et al. (2023) and Liu et al. (2023) suggests that, in some cases, patients may prefer empathetic AI-authored responses to physician-authored ones.
We posit that while AI models are contributing to this problem of misinformation, they can also be part of the solution.
To realize the potential that lies in AI solutions for health communication, we need to invest in a new scientific agenda we are calling “Health Communication AI,” which we define as:
“An approach that blends the authenticity of social media influencers with AI’s technological scale capabilities informed by accurate and up-to-date health- and health communication-related expertise” (Burke-Garcia & Soskin Hicks, in press).
This idea is predicated on an understanding of the role of opinion leaders in health promotion programs and uses Burke-Garcia’s (2019) work as a blueprint.
Researchers have been examining opinion leadership and health promotion for decades (Becker, 1970; Centola, 2021; Dearing, 2009; Rogers, 1962; Rogers, 2003; Valente, 2012; Valente & Pumpuang, 2007; Burke-Garcia et al., in press). Central to opinion leaders’ role is the prima facie credibility they hold with their communities (Valente & Pumpuang, 2007; Burke-Garcia, 2017). This credibility stems from both their trustworthiness and expertise as perceived by their communities (Hovland et al., 1953) and from the emotional intensity and intimacy between them and their communities (Gatignon & Robertson, 1986; Granovetter, 1973; Burke-Garcia, 2017).
Burke-Garcia’s (2019) work builds on this, positing that social media influencers are modern-day opinion leaders, as they have that same prima facie credibility and trust with their followers.
The promise of “Health Communication AI” lies in the potential for AI to establish this same prima facie credibility with individuals. Recent research at Google demonstrated that its LLM outperformed primary care physicians on measures of empathy, perceived honesty, and accuracy in digital diagnostic conversations, as rated by both patient actors and specialist physicians (Tu et al., 2024).
Perhaps most powerful, however, is the reality that AI can do this at scale.
Currently, much of the communication of reliable health information depends on human interaction, whether face-to-face or digital. This approach does not scale effectively to address the problem of health misinformation.
Human health communicators simply cannot interact quickly or comprehensively enough to engage with the millions of users participating in health-related digital conversations each day. Worldwide, users search for health content on Google Search at a rate of 70,000 queries per minute, and up to 90% of Americans regularly search for health information on social media (Bishop, 2019). Additionally, human engagement on social media is often driven by high emotion and disagreement, making it challenging to maintain empathetic interactions (Berger, 2011; Messing & Weisel, 2017).
“Health Communication AI” addresses these issues, making it the natural next step in the evolution of opinion leadership.
Achieving the vision of “Health Communication AI” requires several foundational shifts. First, the fields of public health and medicine need to embrace AI solutions as tools to support health communication. We need to work with technology innovators to ensure that model design and training are informed by unbiased domain and health communication expertise.
The recent challenges experienced with the World Health Organization’s (WHO) newly launched Smart AI Resource Assistant for Health (S.A.R.A.H.) (Gil, 2024) demonstrate the importance of developing health-related AI systems through a robust scientific agenda, with rigorous testing to establish safety prior to launch.
AI’s dual role – as both a driver of health misinformation and a potential means of disseminating credible, timely health information in tailored and empathetic ways – underscores the urgent need to leverage its capabilities responsibly.
As we stand at this technological crossroads, the time to act is now.
References
Ayers, J. W., Poliak, A., Dredze, M., Leas, E. C., Zhu, Z., Kelley, J. B., Faix, D. J., Goodman, A. M., Longhurst, C. A., Hogarth, M., & Smith, D. M. (2023). Comparing physician and artificial intelligence chatbot responses to patient questions posted to a public social media forum. JAMA Internal Medicine, 183(6), 589-596.
Becker, M. H. (1970). Sociometric location and innovativeness: Reformulation and extension of the diffusion model. American Sociological Review, 35(2), 267-282.
Berger, J. (2011). Arousal increases social transmission of information. Psychological Science, 22(7), 891-893.
Bishop, M. (2019). Healthcare Social Media for Consumer Informatics. In: Edmunds, M., Hass, C., Holve, E. (eds) Consumer Informatics and Digital Health. Springer, Cham.
Burke-Garcia, A. (2017). Opinion leaders for health: formative research with bloggers about health information dissemination (Doctoral dissertation, George Mason University).
Burke-Garcia, A. (2019). Influencing Health: A Comprehensive Guide to Working with Online Influencers (1st ed.). Productivity Press.
Burke-Garcia, A., Johnson-Turbes, A., Afanaseva, D., Zhao, X., Valente, T., & Rivera-Sanchez, E. (in press). Supporting Mental Health and Coping for Historically Marginalized Groups Amid The COVID-19 Pandemic: The Power of Social Media Influencers in the How Right Now Campaign. In Ahmed, R., Mao, Y., & Jain, P. (Eds.), The Palgrave Handbook of Communication and Health Disparities. Springer.
Burke-Garcia, A., & Soskin Hicks, R. (in press). Scaling the idea of opinion leadership to address health misinformation: The case for “Health Communication AI”. Journal of Health Communication.
Centola, D. (2021). Change: How to make big things happen. Little, Brown Spark.
Dearing, J. W. (2009). Applying diffusion of innovation theory to intervention development. Research on Social Work Practice, 19(5), 503-518.
Elyoseph, Z., Hadar-Shoval, D., Asraf, K., & Lvovsky, M. (2023). ChatGPT outperforms humans in emotional awareness evaluations. Frontiers in Psychology, 14, 1199058.
Gatignon, H., & Robertson, T. S. (1986). An exchange theory model of interpersonal communication. Advances in Consumer Research, 13, 534-538.
Gil, B. (2024). The WHO’s new AI health chatbot is already making some mistakes. Quartz. Retrieved April 22, 2024, from https://qz.com/who-sarah-ai-bot-1851419782
Granovetter, M. S. (1973). The strength of weak ties. American Journal of Sociology, 78(6), 1360-1380.
Hovland, C. I., Janis, I. L., & Kelley, H. H. (1953). Communication and persuasion. New Haven, CT: Yale University Press.
Liu, S., McCoy, A. B., Wright, A. P., Carew, B., Genkins, J. Z., Huang, S. S., Peterson, J. F., Steitz, B., & Wright, A. (2023). Leveraging large language models for generating responses to patient messages. medRxiv, 2023-07.
Messing, S., & Weisel, R. (2017). Partisan conflict and congressional outreach. Pew Research Center Report. https://www.pewresearch.org/politics/2017/02/23/partisan-conflict-and-congressional-outreach/
Monteith, S., Glenn, T., Geddes, J. R., Whybrow, P. C., Achtyes, E., & Bauer, M. (2023). Artificial intelligence and increasing misinformation. The British Journal of Psychiatry, 1-3.
Rogers, E. M. (1962). Diffusion of innovations. New York: Free Press of Glencoe.
Rogers, E. M. (2003). Diffusion of innovations (5th ed.). New York: Free Press.
Tu, T., Palepu, A., Schaekermann, M., Saab, K., Freyberg, J., Tanno, R., Wang, A., Li, B., Amin, M., Tomasev, N., Azizi, S., Singhal, K., Cheng, Y., Hou, L., Webson, A., Kulkarni, K., Mahdavi, S. S., Semturs, C., Gottweis, J., Barral, J., Chou, K., Corrado, G. S., Matias, Y., Karthikesalingam, A., & Natarajan, V. (2024). Towards Conversational Diagnostic AI. arXiv preprint arXiv:2401.05654.
Valente, T. W. (2012). Network interventions. Science, 337(6090), 49-53. https://doi.org/10.1126/science.1217330
Valente, T. W., & Pumpuang, P. (2007). Identifying opinion leaders to promote behavior change. Health Education & Behavior, 34(6), 881-896.