In a small hospice room, a patient lies slightly drowsy from medication. Like all hospice patients, they know their time is coming to an end, even though they have been in hospice care for only a week. The experience has been a good one, and the burden on their family has been eased. Still, the patient wonders whether entering hospice care earlier might have made the transition to the end of life smoother and less rushed, and the experience less stressful for their family. What can be done to ensure that end-of-life care begins at the right time, so that its quality is as high as possible?
Hospice care is typically suggested only when a patient is expected to have six months or less to live. It is a decision the patient and family make together, often based on input from a physician or another healthcare professional. Yet many patients end up making the decision too late. At the hospice where I volunteer, many patients stay fewer than two or three weeks. While the timing is personal to each patient, it raises the question: is that enough time for a quality end of life with proper care?
Often, conversations about end-of-life planning don’t happen early enough to have maximum benefit. Healthcare providers can only make an educated estimate of how long a patient will live; this is called prognostic uncertainty, and it is one reason patients may not seek out care early enough. Another is optimism bias, the tendency to assume positive outcomes are more likely and negative ones less likely. This can lead a healthcare provider to underestimate the severity of a patient’s illness and overestimate the time the patient has left. A study of the effects of delayed initiation of end-of-life care found that early initiation could reduce the administration of unnecessary medications, minimize laboratory and radiological investigations, and avoid procedures that carry complications with little benefit. Of the 107 terminally ill patients in the study diagnosed with treatment futility, 64 (59.8%) began end-of-life care early, while 43 (40.2%) started it later. The early-initiation group had more antibiotic-free days and fewer surgical interventions (Choudhuri et al.). Since treatment was futile in both groups, delayed initiation meant more unnecessary treatment for patients. In addition, 30.2% of late initiations were attributed to prognostic dilemma. Addressing the timing of end-of-life care is therefore critical.
So, how can prognostic uncertainty and other diagnostic problems be reduced? Recent developments in the use of AI in this field could be the answer. The idea is to use AI-based “nudges” as decision-support tools that alert clinicians to patients at high risk of short-term mortality, prompting them to consider discussing end-of-life planning. There are two machine-learning-based nudges, one of which has already been commercialized. Machine learning refers to training computer algorithms to make accurate predictions from historical data. These tools draw on Electronic Health Records (EHRs) to produce a more precise estimate of short-term mortality, generating a list of patients at high risk of 30-day or 180-day mortality. This development can improve early access to end-of-life care for patients with cancer and foster more end-of-life planning conversations. A study comparing the AI nudge with usual care in patients with cancer found a significant increase in “serious illness conversation” (SIC) rates in the nudge group compared with the control period (4.4% versus 1.3%) (Manz et al.).
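In rough outline, such a mortality-prediction nudge fits a model to historical patient records and then scores current patients against it. The toy sketch below is purely illustrative: the features, weights, patient names, and 0.5 threshold are all invented for this example, and real tools use far richer EHR inputs and more sophisticated models. It trains a simple logistic-regression model on synthetic records and flags patients whose predicted 180-day mortality risk crosses the threshold:

```python
import math
import random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def make_record():
    """One synthetic 'historical' patient: features -> 180-day outcome."""
    age = random.uniform(4.0, 10.0)        # age in decades (40-100 yrs)
    comorbidities = random.randint(0, 6)   # count of chronic conditions
    admissions = random.randint(0, 4)      # recent hospital admissions
    # Assumed ground-truth risk relationship, invented for this sketch
    p = sigmoid(-8.0 + 0.6 * age + 0.5 * comorbidities + 0.8 * admissions)
    died = 1 if random.random() < p else 0
    return (age, comorbidities, admissions), died

history = [make_record() for _ in range(2000)]

# Fit logistic regression with plain stochastic gradient descent
w, b, lr = [0.0, 0.0, 0.0], 0.0, 0.01
for _ in range(100):
    for x, y in history:
        p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
        err = p - y
        w = [wi - lr * err * xi for wi, xi in zip(w, x)]
        b -= lr * err

def mortality_risk(patient):
    """Predicted probability of death within 180 days."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, patient)) + b)

# Score current patients; high-risk ones trigger a conversation "nudge"
current = {"A": (9.2, 5, 3), "B": (5.0, 0, 0), "C": (7.5, 4, 2)}
flagged = sorted(pid for pid, feats in current.items()
                 if mortality_risk(feats) > 0.5)
print("nudge clinician for patients:", flagged)
```

Even in this toy version, the model only ranks risk; deciding whether and how to have the conversation remains with the clinician, which is exactly the "support tool, not decision-maker" framing discussed below.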
However, as with most AI tools, the algorithm’s predictions carry uncertainty and can be inaccurate. It is also important to note that the nudge is a support tool: clinicians remain responsible for the final decision. Behavioral studies have found that as the tool is used over time, providers pay less attention to its alerts (alert fatigue), decreasing its impact. They also found that specialists with fewer patients were generally more responsive to the reminders than general oncologists with larger caseloads.
As with many AI tools, the question of how ethical it is to rely on algorithmic predictions arises. This AI operates as a “black box,” meaning it cannot explain how it makes its decisions. There is also a risk that patients, families, and clinicians over-trust AI and view its outputs as more objective and valid than they really are. The AI could then steer families toward a decision they don’t fully understand (because it does not explain its rationale), potentially undermining their autonomy. AI is also not free of its own biases. Some models may replicate the biases of their training datasets or exhibit others, such as label bias, which can arise when data is not diverse or representative and the same data is interpreted differently. Another concern is that the patient and family may not even know that AI influenced the clinician’s decision to start a conversation about the end of life. While this technology could significantly improve the quality of end-of-life care for families and patients, it also raises important questions about autonomy and informed decision-making. Should AI tools be more widely adopted to improve end-of-life communication, or do the risks of over-reliance and of violating the principles of end-of-life care outweigh the benefits? Or can we make the AI accurate and reliable enough that it enhances the quality of end-of-life care, while remaining cautious about its use?
Sources:
https://www.ncbi.nlm.nih.gov/books/NBK599854/
https://pmc.ncbi.nlm.nih.gov/articles/PMC7435104/
