Can you start by telling us a bit about your background and your expertise in the ethical implications of AI in healthcare?
My background is in the ethics of emerging technologies. I have written about genetics, biotechnology and, of course, artificial intelligence. Recently, my focus has been on the gains associated with further integrating biotechnology with digital technologies, and this is where AI has become a major influence. But at the heart of everything I do are questions about the ethical implications of emerging technology. Some of my recent work has examined the ethical implications of brain implants, for example. I have always been interested in the boundary between therapy and enhancement - when does technology take us beyond mere repair into an entirely new range of capabilities? My research investigates the ethical implications of that boundary, and it is part of my wider interest in how we think about our relationship with technology more generally: how should humanity use technology? What are the limits of that use? What are the concerns we have about it? And how can it allow us to evolve? Fundamentally, these are questions about how humanity can flourish and what makes a good life.
How do you think AI is transforming the healthcare industry? What are some of the most significant benefits and challenges that come with this transformation?
We must recognize that the healthcare sector, if it is to serve an increasingly aging population with a growing number of medical challenges, is entirely reliant on the technological efficiencies of artificial intelligence. If we want a healthcare service that is available to everybody at the point of need, rather than dependent on private healthcare, then the only way we can meet this need is by achieving greater efficiencies within the system. And those efficiencies operate on three distinct levels.
One is human resource efficiency: how can we ensure that we have a workforce that is able to function effectively? Another is technological efficiency: how can we make sure that the infrastructure that underpins healthcare is optimal for delivery? But the third, and for me the most critical, is this: how can we ensure there are data efficiencies within the system? What we have seen over the last 15 years, with the rise of data-driven economies, is that the companies that hold the data are the ones best able to provide the best service.
However, if we increasingly rely on third-party companies to provide the infrastructure that drives healthcare innovation, then the risk to public healthcare is huge: its capability progressively diminishes as more of that data comes to be owned by third parties to whom we are increasingly beholden for any kind of efficiency at all.
So, with all of this in mind, the role of artificial intelligence - which is still very much treated as a panacea for society - is huge, and we need to ensure that public health provision can take a leading role in those developments. This technology is already having dramatic consequences for healthcare and creating a lot of optimism for its future.
What are some of the most pressing ethical concerns surrounding the use of AI in healthcare? How do you think these concerns can be addressed?
Some of the more prominent conversations are about the disruption to the workforce. For example, in 2020, the World Health Organization launched its first digital healthcare worker, an AI chatbot driven through a semi-realistic avatar with which you could have a conversation about your condition. It was quite a limited trial, but they wanted to see if it could supplement or even replace some of the existing healthcare provision.
You could argue that such examples potentially reduce the important nuances of good healthcare practice and risk losing some crucial elements, like the human touch. But you could also argue that, for many people, the current system of healthcare provision is not working very effectively, so it is worth exploring whether such tools could be a more effective way of delivering care.
A notable example is obesity. Many parts of the world are experiencing an obesity crisis, and countries are resorting to drugs to address this problem, a model that many clinicians are concerned about. The general predisposition towards finding a drug to treat a problem is something that people are quite anxious about as a behavioural expectation. Yet we have spent half a century driving treatments through pharmaceutical solutions because of their efficiencies, and we are now seeing this potential for addressing obesity. And whilst there is a lot of complexity to this subject, an appetite-suppressing drug seems like an innovative idea. However, we might also apply AI to understand the complex behavioural characteristics of problematic eating, and this could help many people with their weight loss without needing to resort to drugs.
More generally, the ethical implications associated with AI in healthcare relate to the impact on the healthcare sector itself. AI will undoubtedly transform the workforce in a huge way, but that does not necessarily mean a reduction. The World Economic Forum published a report a couple of years ago which argued that around eighty-five million jobs are in jeopardy because of AI, but around ninety-seven million jobs are likely to be created by it. So, on balance, it is likely that we will have more jobs rather than fewer. But what you need to do is make sure that you can transition people from their existing work into AI-based work. And that is often the hard part when you look at the economics of the labour force, as transitioning people into new skills is a really challenging task, and the fair treatment of employees is paramount in that process. These are critical risks that are fundamental to how governments must start to think about the implications of AI, because if AI crashes your economy, you have got lots of other problems as well.
Going back to the point about looking at this from a macro perspective, you must address that first; the primary responsibility is to ensure the stability of the economy, which is intimately tied to the prospects of healthcare. People who have no jobs and no income have extremely limited health prospects. Once you have thought about that, you then must look at the specific applications of AI and whether they undermine a range of health or ethical concerns. We have a reasonable proposition for that in Europe now with the EU AI Act, which has demarcated various levels of ethical concern. It has a graded system for assessing what level of ethical concern is appropriate to a given application and puts in place specific prohibitions on things that we must not do.
For example, an AI system should not discriminate between people based on their demographic circumstances. If an AI is unmonitored and unmanaged, so that you cannot see clearly what impacts it is having on the people whose lives are affected by its use, then it would fail to meet the ethical standards of the EU AI Act. That is crucial.
One of the things that is critical, but often overlooked, is how we consider informal experiences of healthcare provision. We look at the health profession as if it represents the whole problem, when a lot of what is taking place happens well outside of that sphere. Many people are using health-related digital systems that have an entire range of implications for their own health and wellbeing, but that also allow companies to derive a full range of health insights from the data. Platforms like Spotify gather health-related information through non-health applications, and that, in my mind, is potentially the biggest part of the health data picture: data that we need, but that is being exploited by commercial companies without any real insight from the user. The same is true of things like Google and Strava exercise applications, which are now full of health data that we simply click 'accept' to and go along with, but which of course underpins the business model of those organizations. So, when we think about the ethics of the future health applications of AI, we must look far beyond what the healthcare system is looking at from a governance perspective, because in my mind, that is the bigger part of the puzzle.
How do you think healthcare organizations can balance the need to collect and analyse substantial amounts of patient data with the need to protect patient privacy and ensure data security?
One of the key cornerstones of good practice is patient involvement in all these discussions. We see examples of that within the scientific profession, and we need to apply those insights and best practices to health AI. If you look at journals such as the British Medical Journal, there is an increasing expectation that authors include patients within their research process. This is very well established now and has emerged as a critical component of ethical practice.
When you start to think about potential AI applications, we should be mindful to make sure that the public are involved in those conversations. So, the first starting point is to ask whether the research community is engaging with the public about prospective applications, both to find out the concerns that people may have and to help design solutions that are in keeping with those ethical and moral concerns. If you can use AI to draw upon data to develop earlier insights into cancer diagnosis, I think most potential or existing cancer patients would say that sounds brilliant. But it is still important to engage the public on these matters, because it is often their data that is required to achieve it.
What are some of the potential risks and consequences of data breaches or misuse of patient data in AI-powered healthcare systems?
There are two key risks associated with the use of data. The first is that the modelling may be ineffective. The presumption - and it is a misleading idea - is that AI is necessarily going to be smarter than the human. So one risk is around how we model AI and how we engage with it, from the perspective of the governance of healthcare; getting that wrong is a huge risk, because nobody yet knows what the optimal model is.
The second risk is one of pace. There are very few clinicians within the NHS who are interacting with AI patient data, so we have not even begun the process of working out what the nature of the clinician's interaction with AI data should be when it comes to patient care. The NHS is also still quite primitive in its use of digital technology, and this needs to change. If you go into most hospitals, the IT infrastructure is far less effective than the latest technology, and it is not all down to cost either. For example, there have been various interesting articles in the BMJ recently about whether doctors should be allowed to use WhatsApp during their clinical observations, and that tells you the level of the conversation we are at. It is incredibly early stage and extremely far behind where we should be in the digital conversation in healthcare. I would say that the immediate risks, in fact, are less to do with deployment and more to do with not being able to critically and rapidly engage with the pace of technological change in this sector. It has been quite well recognized that not only is the NHS far behind digitally, but it is also at risk of falling further behind if it does not utilize AI effectively.
Who should be held accountable when AI systems make mistakes or cause harm to patients, and how can we ensure that accountability in these situations?
I would urge that we look at accountability from the perspective of the formal governance of healthcare and all other health-related data together, because not only can we gain greater insights into how best to develop healthcare by looking at those two elements, but doing so also reveals the complexity of this problem. If you look at what healthcare tries to do, and what it has tried to do especially over the last 30 years, we have had a strong focus on preventive medicine. We want to make sure that people do not establish habits that lead to healthcare burdens, but the only way you can do that is by reaching them in aspects of their lives that sit outside of healthcare.
Where is the NHS finding opportunities to talk to patients about healthcare and health risk prevention or reduction? When interaction with patients is minimal, it must look elsewhere, which means it must grapple with all the health-related data that exists within social media to maximise those opportunities.
When it comes to how the NHS deploys specific AI-based applications, the emphasis will have to be specific, and genetic counselling is a great comparison here. When the prospect of harnessing insights from genetic information became a reality twenty or so years ago, it was never the case that you could simply give someone a genetic test and expect that to be beneficial without further support in understanding what to do with that information. In many cases, the information created new burdens that required support to think through. We now have quite established and rigorous systems of genetic counselling to help people come to terms with the implications of what that data means about them, and we may need a kind of AI counselling to help people work through the implications of health insights that arise through a similar process. That is a conversation that still has not begun, and it is perhaps a service that should be delivered in combination with healthcare professionals, both to diminish the burden on the NHS and to recognize that there are distinct consequences to people interacting with AI that will have an impact on the healthcare system as well.
How do you think we can ensure that the benefits of AI in healthcare are equitably distributed and accessible to all patients, regardless of their background or socioeconomic status?
I think the first thing to recognize is that we do not have that already, in a pre-AI world. Often, the immediate response people have to AI is that it could create greater inequalities, which is a common concern about new technology. I believe that AI will deliver greater equality and greater justice in healthcare, but this requires vigilance and monitoring, as well as an understanding of the complexities of injustice. The current healthcare system cannot deliver healthcare justice, and we need to look critically at what is not working presently and appreciate where that can be improved by technology. There is a growing body of research illuminating the degree of bias that exists within the profession, not just at the clinical level, but at the bureaucratic level as well.
There is a good chance that AI improves those situations, because it will help us identify those injustices by analysing data far more adequately. At present, a lot of this work depends on quite a limited human resource with the capacity to do that research and generate those insights. The optimist in me thinks that we can design an awareness of those potential risks into the system, so the AI can be informed by them and both monitor and adjust for them.
If you think about what AI could do - AI designed with these sorts of questions and concerns in mind - it would be able to reveal such differences and flag them as potential bias. Following through from the methodology that such research develops, it would be able to test far more effectively whether the questions are working. I am optimistic that, with our awareness of those ethical concerns about injustice in mind, AI has the capacity to deliver a far more efficient and effective way of monitoring for them, because at present we are not particularly good at that.
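To give a purely illustrative sense of what such automated bias monitoring could look like, here is a minimal sketch in Python. It is not drawn from any deployed NHS or EU system; the field names (group, referred) and the 0.2 threshold are hypothetical assumptions, chosen only to show the general technique of comparing outcome rates across demographic groups and flagging large gaps for human review.

```python
from collections import defaultdict

# Hypothetical records: each pairs a patient's demographic group
# with a binary model outcome (e.g. referred for screening or not).
decisions = [
    {"group": "A", "referred": 1}, {"group": "A", "referred": 1},
    {"group": "A", "referred": 0}, {"group": "B", "referred": 1},
    {"group": "B", "referred": 0}, {"group": "B", "referred": 0},
]

def referral_rates(records):
    """Compute the per-group rate of positive outcomes."""
    totals, positives = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        positives[r["group"]] += r["referred"]
    return {g: positives[g] / totals[g] for g in totals}

def flag_disparity(rates, threshold=0.2):
    """Flag for human review if the gap between the highest and
    lowest group rates exceeds the chosen threshold."""
    gap = max(rates.values()) - min(rates.values())
    return gap > threshold, gap

rates = referral_rates(decisions)
flagged, gap = flag_disparity(rates)
print(rates)  # e.g. {'A': 0.67, 'B': 0.33}
print(f"Disparity gap: {gap:.2f}, review needed: {flagged}")
```

A real system would of course need careful statistical treatment of small samples and clinically meaningful thresholds, and, in the spirit of the graded oversight described above, a flag like this should trigger human investigation rather than automatic correction.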
In conclusion, is there anything else you would like to add about the ethical implications of AI in healthcare, or any concluding thoughts you would like to share with our readers?
Well, I think the first thing I would say is that everybody needs to be part of this journey. A lot of work is needed to make sure that people are part of the transformation of society that is happening through AI.
I speak to many people about their use of ChatGPT, which, of course, is the most common application out there. Most people are still using it like Google, but the best way to use it is not like Google at all. There is a huge amount of work to do to help people understand what is possible with even that application, and it is just one of thousands of applications now available. So there is a huge chunk of work for society to do to make sure that the population understands what can be done with these technologies to improve people's lives.
My advice would be to make sure you complete your training with a degree of AI literacy that will allow you to bring that innovation into your sector. If you are going into biotechnology, make sure you graduate with skills around AI integration in biotechnology, which will both allow you to make more useful contributions to your sector and prepare you for its future. Artificial intelligence will not replace your job, but people who know how to use artificial intelligence will replace those who do not.
The final thing I would add is: do not just look to your narrow sector for guidance; look at what is happening across your entire industry and beyond. More generally, discover the companies that are doing the best work ethically and, more importantly, the companies that are developing AI for the right reasons, with the best values underpinning that work.