Does your doctor know your name? You may wonder why I’m asking, given that it’s the first thing most of us share with our doctors when we visit them for a check-up or medical advice.
Doctors typically want to know more about you–your name, family history, and other personal details–so that they can diagnose your ailments better and treat you accordingly. Most of us have no problem with this line of questioning since we clearly see the benefits.
But would you be comfortable sharing the same details with, or taking advice from, a robot doctor powered by artificial intelligence (AI)?
Researchers from Penn State and the University of California, Santa Barbara (UCSB) believe people may not be as forthcoming with AI doctors as they are with their human counterparts.
S. Shyam Sundar, James P. Jimirro Professor of Media Effects in the Donald P. Bellisario College of Communications and co-director of the Media Effects Research Laboratory at Penn State, believes one reason humans are not very comfortable with AI doctors is that machines do not understand empathy. In other words, machines are unable to feel and experience emotion. “So when they (AI doctors) ask patients how they are feeling, it’s really just data to them”, which is one likely reason why people have been resistant to medical AI in the past, according to Sundar.
That said, machines certainly have advantages as medical providers, according to Joseph B. Walther, Distinguished Professor in Communication and the Mark and Susan Bertelsen Presidential Chair in Technology and Society at UCSB.
AI, for instance, can help develop vaccines that remain effective against multiple Covid variants. Massachusetts Institute of Technology (MIT) researchers published a study in ‘Science’ this January showing that natural language processing (NLP) models–which analyse the frequency with which certain words occur together–can also be applied to biological information such as genetic sequences.
In the study (https://news.mit.edu/2021/model-viruses-escape-immune-0114), the researchers said they had succeeded in training an NLP model to analyse patterns found in genetic sequences in a bid to tackle ‘viral escape’–a process in which a virus mutates in ways that allow it to evade the antibodies generated by a specific vaccine.
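To picture how a language model can “read” biology, think of a protein sequence as a sentence whose words are short, overlapping fragments (k-mers), and count which words tend to appear near one another. The sketch below illustrates that co-occurrence idea in Python using made-up sequence fragments; it is a toy illustration of the general principle, not the MIT team’s actual model or data.

```python
from collections import Counter

# Toy protein fragments standing in for real viral sequences (hypothetical data)
sequences = ["MFVFLVLLPLVSSQCVNLT", "MFVFLVLLPLVSSQCVNFT", "MKVFLVLLPLVSSQCVNLT"]

def kmers(seq, k=3):
    # Treat overlapping k-mers as the "words" of a biological sentence
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

window = 5
cooccurrence = Counter()
for seq in sequences:
    words = kmers(seq)
    for i, word in enumerate(words):
        for neighbour in words[i + 1:i + 1 + window]:
            cooccurrence[(word, neighbour)] += 1

# The most frequent pairings hint at which "words" travel together --
# the same co-occurrence signal that language models build on.
for pair, count in cooccurrence.most_common(5):
    print(pair, count)
```

Real viral-escape models go much further, learning neural embeddings over millions of sequences, but the underlying signal is the same: which “words” tend to travel together.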
According to Bonnie Berger, the Simons Professor of Mathematics and head of the Computation and Biology group in MIT’s Computer Science and Artificial Intelligence Laboratory, “Viral escape of the surface protein of influenza and the envelope surface protein of HIV are both highly responsible for the fact that we don’t have a universal flu vaccine, nor do we have a vaccine for HIV, both of which cause hundreds of thousands of deaths a year.” Berger and her colleagues say they have identified possible targets for vaccines against influenza, HIV, and SARS-CoV-2 (Covid).
In the context of the doctor-patient relationship, Walther underscores that a computer can know a patient’s complete medical history, much like a family doctor who has treated that patient for years–a clear advantage over a new doctor or specialist who knows only your latest lab tests. “This struck us with the question: ‘Who really knows us better: a machine that can store all this information, or a human who has never met us before or hasn’t developed a relationship with us, and what do we value in a relationship with a medical expert?’” asks Walther.
“When an AI system recognizes a person’s uniqueness, it comes across as intrusive, echoing larger concerns with AI in society,” said Sundar in a press release (https://news.psu.edu/story/657391/2021/05/10/research/patients-may-not-take-advice-ai-doctors-who-know-their-names). The researchers presented their findings at the virtual 2021 ACM CHI Conference on Human Factors in Computing Systems on 10 May.
Their concerns are well-founded, as Srinivas Prasad, Founder and CEO of Neusights, points out in an article on AI bias in healthcare (https://www.cxotoday.com/ai/addressing-bias-is-key-for-the-adoption-of-ai-in-healthcare/) that he wrote for CXOToday.com.
In that article, he notes that “Biased decisions in medicine can have a serious adverse impact on clinical outcomes. Diagnostic errors are associated with 6–17% of adverse events in hospitals. Cognitive bias accounts for 70% of these diagnostic errors. These diagnostic errors due to bias get embedded in the historical patient data, which later get used in training AI algorithms. This will result in the AI algorithm learning to perpetuate existing discrimination.”
He cites the example of an AI algorithm built to help doctors detect cardiac conditions. Heart attacks are typically diagnosed on the basis of symptoms experienced more commonly by men, so an AI algorithm trained on such historical patient data would inherit that bias. In other words, the algorithm would not be as accurate when diagnosing women as it is with men.
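A toy example makes the mechanism concrete. In the sketch below, entirely synthetic patient records are generated in which a share of genuine cardiac cases among women was never recorded; a model trained on those records learns to under-call the condition in women even though the underlying disease is identical. The data, features and numbers are hypothetical, chosen only to show how bias in historical labels flows into an algorithm.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000
sex = rng.integers(0, 2, n)                 # 0 = male, 1 = female (synthetic)
severity = rng.normal(0, 1, n)              # hypothetical symptom-severity score
true_cardiac = (severity + rng.normal(0, 0.5, n) > 1.0).astype(int)

# Historical records: assume 40% of true cases in women went undiagnosed
recorded = true_cardiac.copy()
missed = (sex == 1) & (true_cardiac == 1) & (rng.random(n) < 0.4)
recorded[missed] = 0

# Train on the biased records, then check sensitivity against the truth
X = np.column_stack([severity, sex])
model = LogisticRegression().fit(X, recorded)
pred = model.predict(X)

for s, name in [(0, "men"), (1, "women")]:
    mask = (sex == s) & (true_cardiac == 1)
    print(f"Sensitivity for {name}: {pred[mask].mean():.2f}")
```

The model is not “wrong” about the data it was given; it faithfully reproduces the gaps in that data, which is exactly Prasad’s point.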
Algorithmic biases are well documented. Aylin Caliskan, a postdoctoral research associate and CITP fellow at Princeton and lead author of a paper titled ‘Semantics derived automatically from language corpora contain human-like biases’, argued that common machine learning programs can acquire cultural biases when trained on human language available online. These biases range from the morally neutral, like a preference for flowers over insects, to objectionable views of race and gender.
In the paper, published on 14 April 2017 in the journal ‘Science’, the Princeton team used the open-source GloVe algorithm, developed by Stanford University researchers and trained on 840 billion words of online text. They discovered, for instance, that the machine learning program associated female names more strongly than male names with familial attribute words such as “parents” and “wedding”, while male names had stronger associations with career attributes such as “professional” and “salary”.
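A simplified version of that measurement can be sketched with publicly released GloVe vectors: average the cosine similarity between name vectors and attribute vectors, and compare the two groups. The file path and word lists below are illustrative assumptions (the paper used the 840-billion-word Common Crawl vectors and a formal statistic, the Word-Embedding Association Test), but the sketch shows the kind of signal the researchers quantified.

```python
import numpy as np

# Hypothetical local path to a GloVe vectors file (e.g. the 6B/300d release)
GLOVE_PATH = "glove.6B.300d.txt"

female_names = ["amy", "lisa", "sarah", "anna"]
male_names   = ["john", "paul", "mike", "kevin"]
family_words = ["parents", "wedding", "home", "children"]
career_words = ["professional", "salary", "career", "office"]

wanted = set(female_names + male_names + family_words + career_words)
vectors = {}
with open(GLOVE_PATH, encoding="utf-8") as f:
    for line in f:
        parts = line.rstrip().split(" ")
        if parts[0] in wanted:
            vectors[parts[0]] = np.array(parts[1:], dtype=float)

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def mean_similarity(names, attributes):
    # Average cosine similarity over every name/attribute pair
    return np.mean([cosine(vectors[n], vectors[a])
                    for n in names for a in attributes])

print("female-family:", mean_similarity(female_names, family_words))
print("female-career:", mean_similarity(female_names, career_words))
print("male-family:  ", mean_similarity(male_names, family_words))
print("male-career:  ", mean_similarity(male_names, career_words))
```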
Hence, we need to insist that AI-powered algorithms be designed such that they remain accountable to humans and are able to explain their actions to us. In this context, companies and governments are now gravitating towards a concept called ‘Explainable AI’ (XAI), also referred to as transparent AI, which has the backing of institutions such as the US-based Defense Advanced Research Projects Agency (DARPA).
That said, it’s perhaps unreasonable and impractical to expect algorithms developed by us, and trained on the data we have generated over the years, to be completely rid of any type of bias. What we can do, however, is insist on a policy framework that continuously keeps an eye on AI algorithms and demands that they explain the process by which they arrive at decisions.
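What “explaining the process” might look like in practice can be as simple as post-hoc feature attribution: asking a trained model how much each input actually drives its predictions. The sketch below, built on synthetic data with hypothetical feature names, uses scikit-learn’s permutation importance as one such explanation; it is a minimal illustration, not a full XAI framework.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
n = 5_000
# Hypothetical patient features: age, blood pressure, cholesterol, and noise
X = np.column_stack([
    rng.normal(55, 10, n),      # age
    rng.normal(120, 15, n),     # systolic blood pressure
    rng.normal(200, 30, n),     # cholesterol
    rng.normal(0, 1, n),        # irrelevant feature
])
risk = 0.03 * (X[:, 0] - 55) + 0.02 * (X[:, 1] - 120) + 0.01 * (X[:, 2] - 200)
y = (risk + rng.normal(0, 0.5, n) > 0.5).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Ask the model to "explain itself": how much does its accuracy drop
# when each feature is randomly shuffled?
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(["age", "blood pressure", "cholesterol", "noise"],
                       result.importances_mean):
    print(f"{name}: {score:.3f}")
```

A read-out like this does not remove bias, but it gives regulators, clinicians and patients something concrete to interrogate.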
All World Economic Forum (WEF) projects, for instance, now consider the big ethical issues of AI–safety, privacy, accountability, transparency, and bias. Along with initiatives like the Partnership on AI, these steps will help the world improve these algorithms and curb their misuse by building accountability in at the design stage itself.