Project Description
The development and efficacy testing of a holistic, personalized, electronically integrated clinical decision support system for left ventricular assist device (LVAD) candidates will help ensure that heart failure (HF) patients receive tailored treatments that lead to optimal, values-based outcomes. Our study uses an AI/machine learning system that predicts personalized risks from big data. Specifically, it applies the most advanced personalized risk prediction and decision support technologies available to ensure that evidence about cardiac outcomes is used by both patients and clinicians in shared decision making, leading to more informed and value-concordant health decisions. This personalized approach to clinical decision making addresses the urgent need to better identify and respond to the specific and dynamic needs of patients seeking treatment for advanced HF.
We will do this by updating a validated online risk prediction and communication tool, the HeartMate 3 Risk Score calculator developed by Dr. Mandeep Mehra and colleagues at Brigham and Women's Hospital, and integrating it with our efficacy-tested LVAD decision aid, Deciding Together.
This five-year project builds on six years of research on the development, implementation, and dissemination of LVAD decision support and a decade of research on accurate risk prediction models for LVAD.
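To illustrate the kind of integration described above, the sketch below shows how a risk calculator's output might feed patient-facing decision aid text. This is a minimal illustration only: the predictors, coefficients, function names, and message wording are hypothetical placeholders, not the published HeartMate 3 Risk Score model or the actual Deciding Together content.

```python
import math
from dataclasses import dataclass


@dataclass
class PatientProfile:
    """Hypothetical preoperative inputs; the real HeartMate 3 Risk Score
    uses its own published predictor set."""
    age_years: float
    bun_mg_dl: float            # blood urea nitrogen
    prior_cardiac_surgery: bool


def predict_one_year_risk(p: PatientProfile) -> float:
    """Return an illustrative one-year mortality probability from a
    logistic-regression-style score. Coefficients are placeholders,
    not the published HeartMate 3 Risk Score weights."""
    linear = (
        -4.0
        + 0.03 * p.age_years
        + 0.02 * p.bun_mg_dl
        + 0.40 * (1 if p.prior_cardiac_surgery else 0)
    )
    return 1.0 / (1.0 + math.exp(-linear))


def decision_aid_message(risk: float) -> str:
    """Translate a probability into the kind of plain-language frequency
    framing a patient-facing decision aid might display."""
    per_100 = round(risk * 100)
    return (
        f"Out of 100 patients like you who receive an LVAD, about {per_100} "
        f"would not survive the first year, and about {100 - per_100} would."
    )


if __name__ == "__main__":
    patient = PatientProfile(age_years=62, bun_mg_dl=28, prior_cardiac_surgery=True)
    print(decision_aid_message(predict_one_year_risk(patient)))
```

In the integrated system, the validated calculator's published predictors and weights would stand in for the placeholder score, and the messaging would follow the risk communication formats efficacy-tested in Deciding Together.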
Supported by: R01 HS027784, Agency for Healthcare Research and Quality
Publications
Kostick-Quenet, Kristin M., Benjamin Lang, Natalie Dorfman, Jerry Estep, Mandeep R. Mehra, Arvind Bhimaraj, Andrew Civitello, Ulrich Jorde, Barry Trachtenberg, Nir Uriel, Holland Kaplan, Eleanor Gilmore-Szott, Robert Volk, Mahwash Kassi, and J. S. Blumenthal-Barby. “Patients’ and Physicians’ Beliefs and Attitudes Towards Integrating Personalized Risk Estimates into Patient Education About Left Ventricular Assist Device Therapy.” Patient Education and Counseling 122 (2024): 108157. https://doi.org/10.1016/j.pec.2024.108157.
Kostick-Quenet, Kristin M., Benjamin Lang, Jared Smith, Meghan Hurley, and J. S. Blumenthal-Barby. “Trust Criteria for Artificial Intelligence in Health: Normative and Epistemic Considerations.” Journal of Medical Ethics (November 18, 2023). https://doi.org/10.1136/jme-2023-109338.
Lang, Benjamin H., J. S. Blumenthal-Barby, and Sven Nyholm. “Responsibility Gaps and Black Box Healthcare AI: Shared Responsibilization as a Solution.” Digital Society 2, 52 (November 16, 2023). https://doi.org/10.1007/s44206-023-00073-z.
Hurley, Meghan E., Benjamin H. Lang, and Jared N. Smith. “Therapeutic Artificial Intelligence: Does Agential Status Matter?” The American Journal of Bioethics 23, no. 5 (May 4, 2023): 33–35. https://doi.org/10.1080/15265161.2023.2191037.
Kostick-Quenet, Kristin M., Benjamin Lang, Natalie Dorfman, and J. S. Blumenthal-Barby. “A Call for Behavioral Science in Embedded Bioethics.” Perspectives in Biology and Medicine 65, no. 4 (September 2022): 672–79. https://doi.org/10.1353/pbm.2022.0059.
Blumenthal-Barby, J. S., Benjamin Lang, Natalie Dorfman, Holland Kaplan, W. B. Hooper, and Kristin Kostick-Quenet. “Research on the Clinical Translation of Health Care Machine Learning: Ethicists’ Experiences on Lessons Learned.” American Journal of Bioethics 22, no. 5 (2022): 1–3. https://doi.org/10.1080/15265161.2022.2059199.
Kostick, K. M., G. Cohen, S. Gerke, B. Lo, J. Antaki, F. Movahedi, H. Njah, L. Schoen, J. Estep, and J. S. Blumenthal-Barby. “Mitigating Racial Bias in Machine Learning.” Journal of Law, Medicine & Ethics 50, no. 1 (special issue, 2022): 92–100. https://doi.org/10.1017/jme.2022.13.
Kostick, K. M., and J. S. Blumenthal-Barby. “Avoiding ‘Toxic Knowledge’: The Importance of Framing Personalized Risk Information in Clinical Decision-Making.” Personalized Medicine 18, no. 2 (2021): 91–95. https://doi.org/10.2217/pme-2020-0174.