How Do Patients Feel About AI?

Supporters want to implement it as fast as possible, while those scared of it want to postpone it as long as possible. But one party is left out of the conversation - the patient.


The medical community is divided into two camps when it comes to AI in medicine - those who fully support it and those who fear it. Supporters want to implement it as fast as possible, since it will help them do a better job. Those who fear it want to put the process off as long as possible.

But one party is left out of this discussion - the patient.

The goal of implementing AI in medicine is to benefit patients. However, we fail to ask them what they think and how they feel about it. Luckily, a research article was published in npj Digital Medicine in September examining how patients feel about AI being adopted in medicine.

Patient apprehensions about the use of artificial intelligence in healthcare - npj Digital Medicine

Before diving in, let’s establish who was included in the study. Most of the participants had an education level higher than a high school degree, and 20% of them had work experience related to technology or computer science. Interestingly, about half of them worked in healthcare or health science. However, none of them had prior experience with AI.

Definitely a sample with the potential to understand the impact of AI on their lives. This was also the general finding - participants were largely “enthusiastic about AI improving their care”. They correctly identified that AI is still an emerging technology in medicine, but also that it aligns well with medicine’s goal - treating as many patients as possible.

There were some specifics I want to examine separately, though.

Safety and risk

One of the concerns identified in the study, and rightfully so, was the safety and risk of healthcare AI.

One part of the concern relates to medical decision-making. The point of, for example, a decision support system is to improve medical decision-making - not to replace doctors, but to help them. The main point the participants in the study stressed was that clinicians should act as a safeguard for AI, just as they’re a safeguard of their patients’ health. Clinicians will definitely need an excellent understanding of AI-based systems, just as they need an understanding of human physiology and pathophysiology.

This will likely be a key factor in how doctors work with new technologies. At the end of the day, clinicians should be in charge of patients’ health, regardless of which tools they use. At the same time, this is an additional argument that technology shouldn’t replace doctors, as patients want and trust human interaction more.

The second part of the concern was technology itself and what happens when there’s a system-level crash or mass technological failure. There’s a brilliant quote from one of the participants in the study:

I have some background in electronics, and one thing you can guarantee with electronics is they will fail. Might not be now, might never happen in 10, 20 years. The way things are made, ‘cause I’ve actually worked in the industry of making medical equipment, it’s all about using the cheapest method to get the end result. Well, electronics fail. They just do.

Along with patients’ trust, this is another reason why healthcare - and doctors in particular - shouldn’t fully rely on technology.

Data integrity

Connected to the safety and risk of AI-assisted care is data integrity. I found this quite interesting and surprising. Participants reported that when they got a look at their medical data, it wasn’t always correct. Logically, AI shouldn’t be trained on flawed data, which adds another layer of complexity to implementing any AI-based system in healthcare. In theory, patients should thus be informed before their data is included in any AI training, so they can check that everything is correct - a complex and time-consuming process.

On the other hand, we can’t expect AI to be massively adopted in healthcare tomorrow, and we can assume that data quality is improving in the meantime. One reason is certainly that more and more young clinicians with stronger technological skills are entering healthcare.

An excellent point the study also made was about existing biases in healthcare datasets. I won’t waste any more words and will simply paste this quote from the study, which sums it up perfectly:

Prejudices that people can have, like it could absorb those or it could be taught to work against them, like a lot of people who are overweight have said that their providers assume that that’s the cause and ignore doing other tests or pursuing other avenues, and if an AI wasn’t going to make the assumption that that was what was the problem, then that would be good, but if it was learning from people around it that it should make that assumption, then it would perpetuate the problem.

Choice and cost

Both concerns above are also connected to the study’s finding that patients would feel better if they had a choice: “They felt that patients should have the right to choose to have an AI tool used in their care and be able to opt out of AI involvement if they felt strongly.”

I agree that patients should keep the choice of whether an AI tool is used in their care. You can already refuse treatment, so I don’t see why it should be any different for AI. On the other hand, why wouldn’t you want to receive care with something that improves it?

Lastly, and rightfully so, some patients were concerned about the additional healthcare costs of AI tools and whether insurance would cover them.

I think we all agree that AI can make healthcare more efficient and thus perhaps less expensive. The valid concern is about the high development and deployment costs. Another side of this: what if an AI algorithm recommends a treatment that’s too expensive for a patient to afford? That’s not a huge problem in EU countries, but what about the USA, with its different health insurance system?

Fraud is also an interesting angle on AI algorithms - what if some of them were intentionally designed to recommend more expensive treatments? It’s borderline conspiracy theory, but I thought it was an intriguing point in the study.


Definitely lots of new insights - I honestly wouldn’t have thought of half of these if I hadn’t read the study. It gave me a different perspective, from outside this digital health bubble where every new idea seems spectacular.

I think the results of such studies will change over time, as younger people with more technological background become the majority of patients. Maybe the perspective will be different then. All in all, each of us should learn more about technology, and especially AI, as it’s bound to affect everyone’s lives in the coming years - not just doctors’ and patients’.