As the exponentially increasing power of computer systems allows for countless innovations in diagnosis and treatment, Artificial Intelligence (AI) will continue to revolutionise healthcare. There is, however, a barrier that AI must overcome to be truly effective, and oddly, it is a very human one: trust. AI, a faceless automaton composed of lines of code and presented to us on screens, has the potential to communicate with people empathetically, in a way that builds trust and understanding. But first, people need to trust AI enough to begin using and interacting with it.
Technology is pushing the boundaries of what we trust with our health information, particularly around mental health and symptom reporting. In 2015, a group of researchers at the Institute for Creative Technologies at the University of Southern California created “Ellie”, a pseudo-therapist AI. Ellie uses facial recognition software, eye tracking, speech analysis, and a range of other technologies to build a deeper understanding of the patient. Presented as a reactively animated on-screen persona, Ellie asks participants questions and adjusts her body language according to the answers given: she smiles when the participant smiles, and she leans in when the participant leans away, all to reactively build rapport and trust. Yet, for all her human qualities, the artificiality of the experience removes the direct fear of judgement from the user. In trials screening US military veterans for PTSD, participants proved more willing to disclose mental health symptoms to Ellie than on standard trauma assessment forms. Whilst never used in isolation, this kind of smart, empathetic technology could be invaluable: a diagnostic aid that patients trust enough to share their sensitive information with.
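To make that perceive-and-respond loop concrete, here is a minimal sketch of the kind of rule-based rapport logic described above. The cue names, gestures, and thresholds are illustrative assumptions for this article, not Ellie's actual implementation.

```python
# Hypothetical sketch of the perceive -> respond loop described above.
# Cue names, gestures, and thresholds are illustrative assumptions;
# they are not the ICT team's actual code.

from dataclasses import dataclass


@dataclass
class PerceivedState:
    """Nonverbal cues a multimodal sensing stack might report per frame."""
    smiling: bool
    leaning_away: bool
    gaze_on_screen: bool
    speech_rate: float  # words per second, from speech analysis


def choose_response(state: PerceivedState) -> list[str]:
    """Map perceived cues to avatar behaviours intended to build rapport."""
    actions = []
    if state.smiling:
        actions.append("smile")            # mirror positive affect
    if state.leaning_away:
        actions.append("lean_in")          # re-engage a withdrawing participant
    if not state.gaze_on_screen:
        actions.append("pause_question")   # wait until attention returns
    if state.speech_rate < 0.5:
        actions.append("encouraging_nod")  # prompt a hesitant speaker
    return actions or ["neutral_listen"]


if __name__ == "__main__":
    frame = PerceivedState(smiling=False, leaning_away=True,
                           gaze_on_screen=True, speech_rate=0.3)
    print(choose_response(frame))  # ['lean_in', 'encouraging_nod']
```

A production system would replace these hand-written rules with continuously updated models of the participant's affect, but the core idea is the same: sensed nonverbal cues drive the avatar's responsive body language in real time.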