Explainable Artificial Intelligence For Psychiatry Consultation For Medically Hospitalized Patients
Posted Date: May 4, 2024
- Investigator: David Karol
- Specialties:
- Type of Study: Observational/Survey
Background: Explainable artificial intelligence (XAI) using machine learning tools improves biomedical applications, including radiology and pathology interpretation. XAI has been identified as a key attribute of a human-centered AI paradigm, one that enables trust among the AI system, the decision maker, and the public. With feedback from consultation-liaison psychiatrists, we will apply XAI to guide psychiatric consultation in the acute care setting, with the goal of improving health outcomes.
Methods: Prospective cohort design with multiple iterations as the XAI system is refined by psychiatrist input. Statistical analysis will compare consultation completion (primary outcome), length of stay, emergency psychotropic use, restraint use, one-to-one observation, and patient/staff satisfaction.
Specific Aims: Aim 1 is to establish the feasibility of using XAI to identify patient candidates for psychiatric consultation. Aim 2 is to establish the preliminary effectiveness of our model (compared to usual care) on the outcomes described above.
Impact: This study will assist in rapidly and accurately identifying acutely ill patients, improving outcomes for patients with co-occurring psychiatric conditions, who are known to experience significant health disparities.
Future Research: This study will provide data for an R34 application (PAR-22-082) to address factors affecting mental health outcomes using advanced computational and predictive analytic approaches.
Criteria:
Eligible participants include inpatients admitted to University of Cincinnati Medical Center, excluding psychiatry units.
Keywords:
Proactive, Machine Learning
For More Information:
David Karol
513-584-7107
david.karol@uc.edu