EDQM On Air podcast

EDQM On Air is a podcast on public health brought to you by the European Directorate for the Quality of Medicines & HealthCare (EDQM) of the Council of Europe. Tune in for expert insights, engaging discussions and inspiring stories to discover how the EDQM, together with its stakeholders, is safeguarding public health through its work in medicines, pharmaceutical care, substances of human origin and consumer health.


AI in public health – Innovations, ethics and the road ahead, with Eric Sutherland

Artificial intelligence (AI) is a source of great hope in healthcare. Properly applied, it has the potential to analyse images and data faster and more accurately than human beings. Its ability to perform administrative tasks can also free up medical professionals to dedicate more time to their patients. But AI comes with certain risks. What data is being used?

Can AI respect privacy when handling personal health data? Can we really entrust our health to machines? The EDQM’s 60th anniversary conference, held on 11-12 June 2024, featured talks and discussions on a wide range of health topics, including from Eric Sutherland, Senior Health Economist at the Organisation for Economic Co-operation and Development (OECD). In this episode, Eric shares his insights into the developing field of AI in healthcare, addressing topics such as digital health, policy guidance and responsible analytics.


Transcription

Presenter – Welcome to EDQM On Air, a podcast on public health brought to you by the European Directorate for the Quality of Medicines & HealthCare of the Council of Europe. We hope you enjoy this episode and we invite you to stay tuned to learn how the EDQM and its stakeholders work together for better health for all.

On June 11th and 12th, 2024, the EDQM hosted its 60th anniversary conference to celebrate 60 years of protecting public health. We had the honour and privilege of organising presentations and roundtable discussions featuring a wide range of speakers, from representatives of patient associations to officials from international health organisations like the OECD, the World Health Organization, the European Medicines Agency, and many more. We also had the opportunity to have some of those speakers come on our podcast as guests, where they shared their unique perspectives and insights into the world of public health. This episode features Eric Sutherland, Senior Health Economist at the OECD, who shared his insights into the developing field of artificial intelligence in healthcare. We hope you enjoy the discussion, and thank you for tuning in. So I’m here with Eric Sutherland, who is a senior health economist leading the work of the Organisation for Economic Co-operation and Development, or OECD, in digital health, bringing together policy guidance for digital tools, integrated data, and responsible analytics, including artificial intelligence. Thank you, Eric, for being here with me.

Eric Sutherland – Absolute pleasure. I am thrilled to be here today.

Presenter – Perfect. So the process of integrating AI in healthcare is still in its early phases, but there’s already incredible results. And therefore, there’s a great deal of hope as well. So what sort of positive developments or benefits can we expect to see in the future?

Eric Sutherland – From an AI perspective, it really is remarkable, the opportunities that we see. There’s a lot of press around the ability of AI to better look at images, to detect obscure anomalies that would indicate a potential issue. There was a case last year in Israel where a person had such an anomaly on his scan: the AI system alerted the doctor to the anomaly, the doctor informed the patient, the patient went to the emergency room and avoided having a brain aneurysm. A life saved because of AI. That happened literally last year and was well publicised. But there are also other things AI can do. It’s not just some of the fancier things, so to speak. There are huge opportunities for AI to help with the administration of doctors and their practices, with the nursing cadre, with health professionals at large, because 30 to 50% of the work of a lot of health professionals is frankly administrative: taking notes, recording things, doing bookings, doing billings, sending out prescriptions – frankly, a lot of things that could be aided and/or automated through the use of tools like artificial intelligence. And I like to think of it this way: sadly, in the last several years, with the implementation of digital tools and technology, a patient’s visit to the doctor oftentimes involves the patient staring at the back of a screen. We can actually get back to a place where the patient and the provider are interacting directly, and AI systems are managing and monitoring the conversation to record what’s happening and create a record of the discussion – one that helps facilitate and simplify the administration for the health professional afterwards, and creates a real positive experience for both the patient and the provider.

Presenter – All right, amazing. Yeah, to me it seems, I mean, AI is exploding in all the fields that we see. It’s crazy, the exponential growth and everything. So if that is applied to healthcare, I’m imagining we can expect the same sort of exponential growth in the capabilities of healthcare systems and providers.

Eric Sutherland – Absolutely. Absolutely. One thing I do want to clarify in this is – and I totally agree about the exponential growth of it – frankly, many other industries have already experienced that exponential growth. I look at banking, I look at aviation, where they’ve already meaningfully integrated AI into their practices, such that they are actually realising its benefits. In health, we are relatively slow to adopt these new tools and technologies, and rightfully so, because of our concerns about the associated risks. Having said that, I worry that we are so worried about the risks that we lose the potential opportunities we could have if we were to approach AI – but approach AI with caution – as opposed to being so cautious as to not approach AI at all.

Presenter – Okay. So continuing from the thread that you just started about the need for caution, besides the positive side, there’s also potentially negative developments in this field. So what would you say are some of the most significant risks we might face?

Eric Sutherland – So from a risk perspective, I would call out three: two that you probably expect me to say and one that I’m hoping you might not. The first and most obvious one, the one that you hear about most often, is the risk to privacy, because AI by its nature requires large amounts of data about people in order to train its algorithms to find the patterns that create the insights that help provide better care for individuals and better population health for communities, find cures for various illnesses and find better pathways for more efficient care practices. All of those require personal data. And so there is a concern that, with all of this data sitting there, is my privacy at risk? And there have been real cases where data were gathered in a way that was frankly sloppy and, as a result, yeah, privacy was at risk. This is where what we need to do from an AI perspective is embrace the fact that data are really the fuel of AI. Without data, there are no artificial intelligence algorithms. So what we really need to focus on is: what are the appropriate data governance practices to effectively manage, monitor and, dare I say, steward the data, so they are used responsibly and effectively while you’re protecting them from privacy breaches and ensuring they are used at the same time? It’s finding that “and” between privacy and use. And sadly, I think in many cases we’re so worried about privacy that we forget about the “and use” part of it. And that’s where I think we really need to go. So privacy is a risk, but I think there is a remedy that we need to really think about: what do I need from a privacy perspective? What is sufficient privacy to get to a yes? What are the controls I need to have in place to make sure that privacy is not breached after I have provided access to the data? So that’s one. Two is around bias.
There are real concerns that, because we’re again using data from populations, when we use data sets, are they sufficiently representative to train the algorithms that are being generated? The easiest case of this was early AI systems that were trained on the skin of Caucasians. When they tried to apply those algorithms to people who are not Caucasian, shockingly, the systems didn’t really perform. But that’s not necessarily a challenge of the bias itself. Bias is a concern, but bias is, frankly, a statistical concept. It is about having a clear understanding of the demographics of the data set that was used to train the system. And, given the person that’s sitting in front of me, are their demographics aligned with the demographics of the system’s training? If so, then bias is less of a concern, because the system is well trained for that individual. If it’s not a match, that doesn’t necessarily mean that you say, no, I’m not going to use it. It means use it with extreme caution. So bias is not a reason not to do AI, but it is a reason to be cautious with its application. That’s the second area of risk I would call out. The third actually relates to that, because it ties more directly to inequities. One of the major worries that I have is that AI, as it’s being developed right now, is being developed in pockets by various hospitals that are doing their best with the solutions and resources they have. But sadly, they’re not doing as much collaboration as I would like, so they keep on building the same solution, hospital after hospital after hospital. They’re developing in silos. And the problem with developing in silos is that you’re going to create a substandard solution, because you’re only training the system on the data that you actually have, as opposed to pulling larger data sets – again, with the right privacy protections in place.
But my second worry is that the people who have access to those AI innovations are only those people who have access to those health facilities. So, for example, if I’m living in rural Germany and I don’t live in an urban centre, I may not have access to the AI solutions that are developed by the large urban centres in Germany – and Germany is one case out of many. But it’s also about the availability and the scalability of the systems, and being clear that, when we’re building AI systems, our aim is a level of equitability. Because the real risk is that, since these technical solutions are only going to be available to people who live in centres that can afford building and fostering those innovations, we’re going to expand the digital divides that we have built up over the last 20 years – as I like to frame it, they will expand into untraversable digital canyons. And so the trust of people who do not live in the urban centres where these solutions are available will go down, which will have negative implications for the future use of said tools and data, and may create broader social issues in terms of impairing trust with the public.

Presenter – All right, I see. Thank you. I want to ask a more personal question, I would say, actually about AI itself as an entity, let’s say, as a mechanism. The way I’m imagining it is that AI is going to come in as a helper to doctors, right? And that’s a very important part of healthcare, that there needs to be trust from the patient towards the caregiving entity, whether that is a private practitioner or a hospital or anything else. So how would you address this sort of concern of AI potentially taking over the decision-making capacity of the healthcare provider? Because, I mean, I can’t speak on behalf of the whole world, but me personally, I wouldn’t like to be treated by a “robot doctor”, you know? So is there some sort of provision for this concern, of keeping human doctors fully in charge of the actual treatment of patients?

Eric Sutherland – So you are not alone. Many, many, many people have expressed that they do not want their care to be determined by a computer. The notion of humans being in the loop and humans being accountable for care is absolutely one of the core principles of the way we want to advance responsible artificial intelligence. Even the term “artificial intelligence” – I’d argue the major problem with it is that it anthropomorphises. It makes the system sound like it’s human. It is not. AI is math. It is not magic. And because it is math, having a person who can interpret the AI solution and what it’s actually saying for the provision of care is absolutely essential. In that way, if we could go back in time, I would prefer to call it augmented intelligence as opposed to artificial intelligence, because it’s exactly as you described.

Presenter – It gives me that feeling too, you know – “artificial intelligence” gives me the feeling of something or someone being there, taking the place of my doctor.

Eric Sutherland – Exactly. So that is very much not the direction I would champion that we move forward in. Doctors are going to continue to be in the loop. Nurses are going to continue to be in the loop. People should be more empowered to achieve their own health outcomes, because they are empowered with their data, but always with the careful guidance of health professionals. My worry – and in the last two days at this conference, several people have talked to me about the same worry – is that, because AI solutions will be built, will they erode the qualification requirements for health professionals? Because the AI solutions will be so good that you don’t actually need health professionals as qualified as we used to have. I think that is a real risk: I could see, economically, people arguing, oh, I can hire a doctor for cheaper and I just trust the AI system. But frankly, then what we are failing to do is keep the person who is receiving care at the centre; their safety and their health are really the things that are most important. So we are always going to need qualified health professionals partnering with AI – even “partnering” is the wrong word: leveraging the technology that is now being provided through this augmented intelligence tool in their provision of care. That’s where we need to be going. And we need to make sure that we do not degrade those qualification requirements. In fact, what we need to do is encourage health professionals to embrace the use of this new tool; this is not unlike how useful the stethoscope was when it first came out. This is the 21st-century version of a stethoscope. How do you use it most effectively and responsibly in your practice, integrate it into your care, have it as part of your workflow, use it as you would?

Presenter – But you’re still the person who has the last say about how care is working for the patient. Okay, you really relieved a big part of my AI anxiety right there. Excellent. One last thing I want to bring up: recently, on 17 May 2024, the Council of Europe adopted the first international treaty on artificial intelligence. So how do you think this development is going to influence the future of AI in healthcare specifically?

Eric Sutherland – Thank you for that. I think it is fantastic that the Council of Europe has adopted this treaty. I’m not deeply familiar with it; it is something, however, that I am more than intrigued by. I think it is incredibly important. I did have a look through the treaty earlier today, as a matter of fact, and it was very reminiscent of work that the OECD did back in 2019, when we published our international recommendation on AI principles. Even when I looked through the text of the Council of Europe treaty, I believe the definition they used for artificial intelligence was actually the one adopted from the OECD. So there’s already a relationship there. Many of the principles in the treaty are the same as, or very closely related to, the principles we have under the OECD’s AI work. Importantly, I think the Council of Europe have gone a step further with the treaty, which I was particularly excited by, because they went into not just what the principles were, but started to describe the foundational environment that needs to exist for AI to thrive in a way that is responsible, equitable, sustainable and scalable. For example, they included areas like digital literacy. So it’s really embracing the fact that improving the education of individuals, providers, the entire public and policymakers themselves is going to be essential. If we’re going to unlock the power of this tool, we will need that type of literacy in the public. If I can use the strange analogy that I often use in this space: I see the potential of AI as a fundamental public good, and a public good has the potential to create benefits and the potential to create some challenges and harms. The analogy I use goes back literally 140 years, to when electricity first became commonplace.
And when electricity became commonplace, a group of electrical engineers got together and said: the value of this is too important for us to argue over things like the wires, the outlets and the distribution systems. We will recognise that innovators are going to innovate, and that’s going to be competitive; consumers are going to consume, and that’ll be competitive; but the distribution needs to be safe. And that is what evolved into the IEEE for electricity. They recognised the distribution needed to be safe, set up guidelines around what safety meant, put in the right mechanisms to constantly learn from issues that came up and improve the safety recommendations, and clearly informed the innovators: we’re not going to allow your innovation to be distributed unless you meet our safety standards. So my question, when I use that as an analogy from an AI perspective, is: what does AI need that can be learned from what electricity did 140 years ago? And if I extend that analogy with a counterfactual, I want to look at the internet, which has a large competitive space in creation and a large competitive space in consumption, but whose distribution we really didn’t regulate all that well. And arguably, it’s certainly creating some benefits, but also some significant harms now, and our ability to learn from those harms is hampered. So I’m hoping that we learn from that mistake, and from the successes we’ve had in the past, to create a level playing field that achieves general public value from this enormous new capability that we’re building.

Presenter – Absolutely. Yeah. It’s sort of setting an even playing field for the development of all of this.

Eric Sutherland – Absolutely.

Presenter – Well, Eric, thank you very much for your time. And thank you for being here with me on the podcast. And also on behalf of the EDQM, thank you very much for attending the conference and for your insights and your presentation, which was fantastic. So yeah, we really appreciate it.

Eric Sutherland – Thank you. I will say happy birthday to the EDQM. It is an honour to be here. I learned so much in the last day and a half. I made connections with people, and we’re going to be continuing that work after this session. This has been enormously helpful for me, and I hope that my contribution was useful for the EDQM; I’m very happy to help support its work going forward.

Presenter – Fantastic. Excellent. Thank you so much.

Eric Sutherland – Thank you.

Presenter – If you enjoyed this episode, please subscribe to our show on your podcast platform of choice to make sure you don’t miss out on new releases. Thanks again for tuning in, and we hope to see you in the next one.

 

Duration: 20 min 35 s – 24 January 2025


 


