Medical Humanities and Artificial Intelligence: Boundaries, Methodologies, Practice

What does the spike in attention to AI mean for the medical humanities? Louise Hatherall considers some possible answers – and raises further questions.

Do you believe in the human heart? I don’t mean simply the organ, obviously. I’m speaking in the poetic sense. The human heart. Do you think there is such a thing? Something that makes each of us special and individual?

Klara and the Sun, Kazuo Ishiguro

It can feel as if stories about artificial intelligence (AI) are everywhere. It seems impossible to open a newspaper (or a blog!) without seeing a claim that AI will save the world, end the world, or at the very least change it in some way. Health has become a key site of focus for AI development, with AI-based systems heralded as likely to deliver huge leaps in patient care, improved health outcomes, and more manageable workloads for staff. This promise has consequently attracted significant funding: in June 2023, for example, the UK Government announced £21 million to accelerate the deployment of AI technologies within the NHS.

Despite its recent ubiquity in the media, AI itself is not new: Russell and Norvig (2021) trace its origins back to 1943. The recent spike in investment and attention has been spurred by a combination of significantly increased computing power and access to large troves of data. What does this spike in attention mean for the medical humanities? This post perhaps raises more questions than it answers but, in doing so, will hopefully show why AI is a site of growing importance for medical humanities and social science researchers. It briefly explores three focal points: definitions, methodologies, and best practice.

First, it’s important to be clear about what we mean when we talk about AI.

Image: Alan Warburton / © BBC / Better Images of AI / Virtual Human / CC-BY 4.0

Definitions, boundaries

There is no universally agreed definition of AI. Some definitions focus on whether a system embedding AI can demonstrate human characteristics (utilising concepts like the well-known Imitation Game developed by Alan Turing). Others take a rationalist approach, requiring a system to have some external goal it works toward (Turner 2018). AI can also be described as ‘weak’ (automating simple tasks, such as recall) or ‘strong’ (mimicking human intelligence); ‘narrow’ (focused on specific tasks, using predefined models) or ‘general’ (able to perform human-like intelligent tasks, with self-learning capabilities); and ‘predictive’ (identifying patterns imperceptible to humans to predict a likely outcome) or ‘generative’ (producing creative works, such as text or images, in response to a user-supplied ‘prompt’).

Many of these definitions and categorisations rest on measuring AI against human intelligence and abilities. For medical humanities scholars, amongst others, this might present a tricky measuring stick. For example, what exactly is human intelligence? Is demonstrating facets of human intelligence equivalent to actually being intelligent? And, particularly in the context of health, should we consider broader definitions, such as the emotional intelligence needed to communicate issues of health, illness, and disease? These definitions can also be criticised for obscuring the very human elements which go into AI: that is, the developers, designers, deployers, and users who shape the way these systems work in practice. Some of this challenge may be disciplinary; Yeung (2018), writing in the context of algorithms, points out that disciplines can sometimes (though not always) talk at cross-purposes. Those working in the social sciences or humanities see algorithms as socio-technical assemblages made up of software, hardware, and humans, whereas computer scientists may understand them as technical instruments.

The challenge of these definitions is that they may obscure and de-prioritise the human experience within a set of technological concepts. None of them, for example, captures the humans whose lives these systems will increasingly act on and shape. Nor do they easily capture the complex social, human, and cultural contexts in which these systems will be deployed. Much as the medical humanities have ‘animated’ clinical and research spaces, shining vital light onto situated accounts of health, illness, and suffering (Fitzgerald and Callard 2016), so too might they do this for contexts where AI and health intersect.

New sites, old methods?

Image: Yasmin Dwiputri & Data Hazards Project / Better Images of AI / AI across industries / CC-BY 4.0

Medical humanities methodologies might also be leveraged to capture both the enduring and the changing human experience of health in this vast social, cultural, and technological landscape. Attitudes toward the use of intelligent systems in health are complex, messy, and sometimes contradictory. A recent report by the Ada Lovelace Institute (Reed, Colom, and Modhvadia 2023) exploring public attitudes toward AI identified health as an area where such systems are particularly welcomed, but also one of significant concern. Navigating these tensions requires more than purely technical solutions: a system that could significantly improve human health will not benefit patients if it goes unused because people do not trust it. Medical humanities methodologies have the capacity to go beyond the technical, addressing social, ethical, and cultural questions about how, when, and why we embrace or reject new technologies.

AI might also shift the sites of healthcare, with potential attendant shifts in how people understand and experience health, illness, and disease. Healthcare is already increasingly mediated through technology, from virtual wards to a growing number of mobile phone apps which promise triage, diagnosis, and treatment from home. Exploring these shifts, and understanding how they might – or might not – change experiences of health, is a rich, and growing, ground for exploration.

On a grander scale, humanities methodologies offer a rich array of ways to explore and understand what we might want the future of AI and health to look like. Tools from Science and Technology Studies (STS) offer one way to capture these futures in the context of the emerging – and shifting – AI landscape. The Harvard STS Research Platform highlights the power of “tracing the links between artistic imaginations and other forms of social life” to garner additional insights into socio-technical imaginaries. Healthcare and AI are not short of such artistic expressions: one need only look at the medical pods in Neill Blomkamp’s 2013 film Elysium, or read Kazuo Ishiguro’s 2021 novel Klara and the Sun, to see the ways in which future health and care are conceptualised.

Image: Rens Dimmendaal & Banjong Raksaphakdee / Better Images of AI / Medicines / CC-BY 4.0

Wider impacts

Beyond boundaries, definitions, and current and future grounds for exploration, AI is an increasingly important site for medical humanities scholars for very practical reasons. As AI becomes embedded in health systems, new forms of best practice and guidance are needed; the absence of professional guidelines to date has attracted criticism (Smith, Downer and Ives 2023). The medical humanities, with their robust and varied methodological background, can – and should – play a vital role in developing these guidelines. It is widely recognised that there is an urgent need to capture diverse patient voices to understand and shape how AI is used in healthcare, and a sharp focus on situated experiences will be vital to this endeavour.

AI promises to improve human health, and it demonstrates great potential to do so. But it also poses broader questions about the intersections of health, illness, and disease. This is a relatively short piece, and so has touched on only a few areas of particular interest at the crossroads between the medical humanities and AI. Fitzgerald and Callard (2016) called for the medical humanities to engage consequentially in the research practices of biomedicine. The emerging sites of AI in health present ripe opportunities to do so: to centre human accounts of illness, health, and intervention within this burgeoning techno-health landscape.

About the author

Louise Hatherall is a socio-legal research fellow at the Centre for Biomedicine, Self and Society at the University of Edinburgh. Her research interests are in law, health technologies, patents, and the public, and she is particularly interested in empirical work exploring where these intersect. She has examined these convergences in her previous research analysing civil society patent challenges, and in her current work on the Trustworthy Autonomous Systems: Making Systems Answer project. The latter explores issues of trustworthiness and responsibility in relation to autonomous systems, with a particular interest in developing empirically grounded approaches to regulation. You can connect with her on LinkedIn or via email (lhathera@ed.ac.uk).

About MedHums 101

Our ‘MedHums 101’ series explores key concepts, debates, and historical points in the critical medical humanities for those new to the field. View the full ‘MedHums 101’ series.

References

Fitzgerald, Des, and Felicity Callard. 2016. “Entangling the Medical Humanities.” In The Edinburgh Companion to the Critical Medical Humanities, edited by Anne Whitehead and Angela Woods, 35–49. Edinburgh: Edinburgh University Press.

Reed, Octavia, Anna Colom, and Roshni Modhvadia. 2023. “What Do the Public Think about AI? Understanding Public Attitudes and Involving the Public in Decision-Making about Artificial Intelligence.” The Ada Lovelace Institute, October 2023.

Russell, Stuart, and Peter Norvig. 2021. Artificial Intelligence: A Modern Approach. 4th ed. Harlow: Pearson.

Smith, Helen, John Downer, and Jonathan Ives. 2023. “Clinicians and AI Use: Where Is the Professional Guidance?” Journal of Medical Ethics, published online first, 22 August 2023. doi: 10.1136/jme-2022-108831.

Turner, Jacob. 2018. Robot Rules: Regulating Artificial Intelligence. Cham: Palgrave Macmillan.

Yeung, Karen. 2018. “Algorithmic Regulation: A Critical Interrogation.” Regulation & Governance 12 (4): 505–523.
