How Do You Make a Better Decision Support System?

Decision support systems have been around for a long time, but not coupled with AI. A team of researchers is proposing guidelines to make AI-driven DSS better.


Decision support systems (DSS) have been around for quite some time. They are “intended to improve healthcare delivery by enhancing medical decisions with targeted clinical knowledge, patient information, and other health information” (source). The first ones came into use in the 1970s but were poorly integrated and of limited use. The point of a DSS is to bring all of a patient’s information together in one place, so that the physician can combine it with their own knowledge to treat the patient better.

These days they’re already put to good use. But a new breed of DSS, coupled with AI, is emerging rapidly. However, the clinical benefits of these AI-driven systems have yet to be demonstrated.

This is exactly the case the authors make in a new article in Nature Medicine, which proposes new “guidelines to bridge the development-to-implementation gap in clinical artificial intelligence”.

The point is to make AI more human…not to replace humans, but to shift the focus away from purely technical aspects and toward the interaction with human users.

Hence, bridging algorithm development to bedside application while keeping humans at the center of the design and evaluation process is a complicated task, and current guidance is incomplete. - The DECIDE-AI Steering Group

The guidelines are called Developmental and Exploratory Clinical Investigation of Decision-support systems driven by Artificial Intelligence, or DECIDE-AI for short. I’m always fascinated by how they come up with names that make sense both as an acronym and in full length. It also reminds me of the AI clinical trial guidelines CONSORT-AI and SPIRIT-AI, published in September 2020. DECIDE-AI is something of a clinical extension of them (read the issue here).

Artificial Intelligence In Clinical Trials
New standards for designing trials that include AI were introduced in top journals. CONSORT-AI and SPIRIT-AI are the start of a new era in clinical trials.

So, why do we need these guidelines?

The article makes some key and interesting points.

“Human decision-making processes are complex and subject to many biases.” Humans won’t simply follow the recommendations a DSS gives, which is why a DSS has to be carefully tested for efficacy. The proposal is to evaluate the system’s impact on clinical decisions at an early stage, rather than jumping straight to large-scale clinical trials. And, perhaps most important of all, it should be tested in the target environment where it will be used - the clinical setting.

“It is crucially important to test the safety profile of new algorithms not only in silico but also when used to influence human decisions.” So yes, one can test the DSS itself on the computer where it lives…but this isn’t nearly enough. It’s as if I coded an app, tested it quickly on my phone and uploaded it to the App Store expecting it to solve the problems of millions. It may work for me, but I still can’t know whether it’s actually beneficial and user-friendly for others.

“The evaluation of human factors (ergonomics) should happen as early as possible and needs iterative evaluation–design cycles.” This is similar to the previous point. Throughout the development and design of a DSS, one has to keep the end user in mind. Making corrections based on clinicians’ feedback early keeps development cost-effective and the end product user-friendly.

“Large-scale clinical trials are complex and expensive endeavours that require careful preparation.” A very similar principle applies to any clinical trial featuring AI. Abraham Lincoln’s quote describes it best: “Give me six hours to chop down a tree and I will spend the first four sharpening the axe.” One of the best ways to prepare is to run a small-scale, targeted trial first, so that the eventual large-scale trial can be as efficient as possible.

It’s fascinating to look at the development of any product for clinical use through the lens of creating commercial products. There are many similarities:

  1. Test the demand for your idea.
  2. Develop it according to what your users need and want.
  3. Launch it to the public.

It’s very much a bottom-up process, and it results in better efficiency than the alternative. Of course, with clinical tools the consequences fall on human health. But in principle, starting small, figuring things out and then scaling up is the way to go pretty much anywhere.

For more details, read the full article below.

DECIDE-AI: new reporting guidelines to bridge the development-to-implementation gap in clinical artificial intelligence
As an increasing number of clinical decision-support systems driven by artificial intelligence progress from development to implementation, better guidance on the reporting of human factors and early-stage clinical evaluation is needed.

An example of a decision support system is one for managing diabetes, about which I wrote in issue #30. Have a read if you’re interested.

Blood Glucose Management With Tech
Diabetes prevalence is already a pandemic and it will just get worse. Doctors alone don’t stand a chance against ever growing numbers of diabetics. Tech might be the answer.