Automated Medical Diagnoses

This section addresses the use of artificial intelligence techniques for automating medical diagnoses.

Examples

There is an app called ADA that makes automated medical diagnoses. The app works on the basis of a decision tree: the person is asked whether they have each of a series of symptoms and responds affirmatively or negatively. Based on the answers, it produces statistics and returns a result of the type "six out of ten people with your symptoms have this disease, and four have this other disease", together with a recommendation such as "for this disease we do not recommend visiting a doctor" or "we recommend that you visit a doctor". On its website, the company announces that ADA completes a health assessment every three seconds. The question in this case is: how is it possible to rely on statistics for a medical diagnosis without a holistic assessment of the patient?
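
As a rough illustration of this kind of statistics-based assessment, consider the following sketch, where all symptoms, conditions, and case counts are invented for the example and do not reflect ADA's actual algorithm or data:

    # Toy symptom checker in the spirit of the approach described above.
    # All symptoms, conditions and counts are invented.
    CASE_DATABASE = {
        ("cough", "fever"): {"flu": 6, "common cold": 4},
        ("fever", "rash"):  {"measles": 7, "allergy": 3},
    }

    def assess(symptoms):
        """Report how often each condition occurred among similar cases."""
        counts = CASE_DATABASE.get(tuple(sorted(symptoms)))
        if counts is None:
            return ["No matching cases; please consult a doctor."]
        total = sum(counts.values())
        return [f"{n} out of {total} people with your symptoms have {c}"
                for c, n in counts.items()]

    print(assess(["fever", "cough"]))
    # ['6 out of 10 people with your symptoms have flu',
    #  '4 out of 10 people with your symptoms have common cold']

The output mirrors the "six out of ten" style of result mentioned above: a frequency among recorded cases, not a clinical judgment about the individual patient.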

Another example is automated medical diagnosis based on tomography or brain images. In many such cases, the automated diagnosis is better than the one made by a medical doctor, since certain types of images are difficult for humans to interpret. Here the concern is that the diagnosis is made by a black box, and the algorithm cannot explain the criteria it applied to reach the diagnosis.
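
The concern can be made concrete with a minimal sketch, using synthetic data in place of real scans; everything below is a hypothetical setup for illustration only:

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)

    # Synthetic stand-in for flattened scan images: 200 samples, 64 "pixels".
    X = rng.normal(size=(200, 64))
    y = (X[:, :8].sum(axis=1) > 0).astype(int)  # hidden rule to be learned

    model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000,
                          random_state=0)
    model.fit(X, y)

    new_scan = rng.normal(size=(1, 64))
    print("Diagnosis:", model.predict(new_scan)[0])
    # The model returns a label, but its learned weights offer no
    # human-readable criteria for why: this is the black-box problem.

Even in this tiny example, inspecting model.coefs_ yields matrices of numbers rather than anything resembling diagnostic criteria.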

In mental health, there is an official standard called the Diagnostic and Statistical Manual of Mental Disorders (DSM) that typifies disorders. Following the DSM and based on a set of symptoms, a machine learning algorithm can diagnose that the patient suffers from schizophrenia, a sleep disorder, or another condition. There is a whole debate around this standards-based dynamic. Doctors commonly raise questions, arguing that medicine still relies heavily on clinical practice, and that the DSM codes and symptom lists are merely indicative, overly synthetic, and would need to be more descriptive.
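
A crude sketch of such symptom-based matching is shown below; the criteria are invented placeholders, not actual DSM content, and real DSM criteria additionally involve duration, severity, and exclusion rules that require clinical judgment:

    # Invented placeholder criteria; not actual DSM content.
    CRITERIA = {
        "disorder A": {"s1", "s2", "s3"},
        "disorder B": {"s2", "s4"},
    }

    def diagnose(symptoms, threshold=0.7):
        """Suggest disorders whose criteria sufficiently overlap the symptoms."""
        reported = set(symptoms)
        return [d for d, required in CRITERIA.items()
                if len(required & reported) / len(required) >= threshold]

    print(diagnose(["s1", "s2", "s3"]))  # ['disorder A']

The sketch makes the doctors' criticism tangible: a checklist of codes can be matched mechanically, but it says nothing about the clinical picture behind the symptoms.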

An app called REPLIKA acts as a kind of therapist. It is a chatbot the user can talk to as if they were talking to a psychologist. Using artificial intelligence techniques, the app tries to help the user with emotional problems, although it is not trained to act as a therapist. Through frequent interactions, the user gets used to its communication style. However, some users reacted negatively to behavioral changes in the bot caused by software updates.

The use of medical robots, robots in healthcare, companion robots, and robots in elderly care is discussed in (COMEST, 2017).

Benefits

  • Obtaining a health diagnosis without access to the physical presence of a medical doctor
  • Having access to more information for analysis and diagnosis
  • Higher precision in image-based diagnoses

Threats related to misuse and abuse

Patient's lack of awareness – There was a case of a famous actress who decided to undergo a preventive double mastectomy to decrease her breast cancer risk. The case helped raise awareness that statistics, for good or ill, have an impact on human lives. Patients must be aware of how automated medical diagnosis systems work: most of them rely on statistics, so they indicate a probability of having a disease, but they do not determine whether the patient actually has it.
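
A simple Bayes computation shows why a positive result is a probability rather than a verdict; the sensitivity, specificity, and prevalence figures below are assumptions chosen for illustration:

    # Hypothetical figures, not taken from any real diagnostic system.
    sensitivity = 0.90   # P(positive result | disease)
    specificity = 0.95   # P(negative result | no disease)
    prevalence  = 0.01   # P(disease) in the screened population

    # Bayes' theorem: probability of disease given a positive result.
    p_positive = (sensitivity * prevalence
                  + (1 - specificity) * (1 - prevalence))
    ppv = sensitivity * prevalence / p_positive
    print(f"P(disease | positive result) = {ppv:.2f}")  # about 0.15

Under these assumptions, even a fairly accurate system yields only about a 15% chance of disease after a positive result, which is exactly the kind of nuance patients need to understand.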

Lack of interaction with medical doctors – In matters as sensitive as physical or mental health, can we dispense with the professional doctor? Can we consider medical diagnostic applications a replacement for the professional? We must not forget that we are social beings: we live in a society and need to be in contact with other people. There is a need to reflect on the role we would like technology to play in medical diagnoses. What will happen if technology exceeds the capacity of medical professionals?

Mistaken diagnosis – One of the ethical challenges of applying new technologies to medical diagnoses is the risk of false positives or false negatives being reported. A false positive occurs when a disease is diagnosed in a person who does not really have it; the harm is compounded if, based on that diagnosis, unnecessary medications and treatments are prescribed that can affect the person's health. A false negative occurs when the illness goes undetected, so the person does not receive the medical treatment they should. How much harm can a mistaken diagnosis cause a patient?
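
The two error types can be made concrete with a small evaluation sketch over made-up labels:

    # Hypothetical ground truth (1 = has the disease) and predictions.
    actual    = [1, 0, 1, 1, 0, 0, 0, 1, 0, 0]
    predicted = [1, 0, 0, 1, 1, 0, 0, 1, 0, 0]

    false_positives = sum(a == 0 and p == 1
                          for a, p in zip(actual, predicted))
    false_negatives = sum(a == 1 and p == 0
                          for a, p in zip(actual, predicted))

    print("False positives (risk of unnecessary treatment):", false_positives)
    print("False negatives (risk of missed treatment):", false_negatives)

Each false positive in such a count corresponds to a person who may receive needless medication; each false negative, to a person whose treatment is delayed.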

Defining responsibility for diagnoses – Legally, medical doctors hold a license and take the Hippocratic Oath. They become responsible for their diagnoses; but when an automated system produces the diagnosis, who is responsible?

Misuse of medical data – Keeping medical data about patients can be seen as positive, since such data can serve statistical purposes. As long as the health data of a particular person is not disclosed, patients' data can be used to prepare and disseminate statistics. Both sides need to be considered: it is very important to ensure the privacy of personal data, and statistics are very useful for public health. Thus, data anonymization is a key issue. In addition, with a view to providing better health services, a system should be able to predict a disease for a person by reviewing the person's medical history; however, this can only happen without violating their privacy. In this regard, maintaining the history of medical records can also be seen as negative: how are records kept, and how is privacy protected? Who owns such data?
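
As a rough sketch of the anonymization step (the field names are invented for the example), a system might strip direct identifiers before records are used for statistics:

    import hashlib

    def anonymize(record):
        """Remove direct identifiers from a patient record.

        Real anonymization must also handle indirect identifiers
        (e.g. a rare diagnosis plus a postcode), which this ignores.
        """
        anonymized = dict(record)
        for field in ("name", "address", "phone"):
            anonymized.pop(field, None)
        # Replace the patient ID with a one-way hash so records can be
        # linked for statistics without exposing the patient. Note: an
        # unsalted hash over a small ID space is easy to reverse; this
        # is shown only to illustrate the idea.
        anonymized["patient_id"] = hashlib.sha256(
            str(record["patient_id"]).encode()).hexdigest()[:12]
        return anonymized

    print(anonymize({"patient_id": 42, "name": "Jane Doe",
                     "diagnosis": "asthma"}))

Whether such pseudonymized records still count as anonymous is itself a regulatory question, which is why the paragraph above treats anonymization as a key issue rather than a solved problem.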

Diagnosis reliability – Given a scenario where a person receives two different diagnoses indicating different diseases, the following questions could be formulated: was the system using the same information? Is the information maintained with appropriate security measures? Can information stored in the system be manipulated? Can the system be hacked?

Information bias – Another ethical challenge relates to giving false information about diseases in a certain area when there is in fact a programmed filter restricting the information that is provided; for example, not declaring the emergence of a disease so as not to affect commercial interests. Bias should be avoided when disseminating health-related statistics. A major concern is how to ensure transparency in health-related statistics.

Data sharing – The possibility of sharing data among hospitals and health centers may be considered an advantage. For example, if a patient is treated in a hospital and a diagnosis is made, that information should be available in other hospitals. A major problem is how to access such information. Rules should be defined to specify precisely which data can be shared and which data should only be accessible to medical doctors, as sketched below. As a negative effect, a hospital that accesses health data could discriminate against a patient by refusing to provide a specific treatment. In the case of a private hospital, it has the right to decide; is this desirable? Access to health data is also associated with the ethical risk of discriminating against a person who is searching for a job. Another risk is that the information is used by health or life insurance companies to decide the amount to be charged for their services.
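
A minimal sketch of the kind of sharing rule called for above, assuming invented roles and field names; real policies would be grounded in regulation and patient consent:

    # Hypothetical policy: which record fields each role may access.
    SHARING_POLICY = {
        "treating_doctor": {"patient_id", "diagnosis", "medication", "history"},
        "other_hospital":  {"patient_id", "diagnosis", "medication"},
        "statistics":      {"diagnosis"},  # aggregate use only
    }

    def shareable_view(record, role):
        """Return only the fields of a patient record a role may see."""
        allowed = SHARING_POLICY.get(role, set())
        return {k: v for k, v in record.items() if k in allowed}

    record = {"patient_id": 42, "diagnosis": "asthma",
              "medication": "salbutamol", "history": "..."}
    print(shareable_view(record, "other_hospital"))
    # {'patient_id': 42, 'diagnosis': 'asthma', 'medication': 'salbutamol'}

Encoding the rules explicitly, as in this sketch, is also what makes them auditable; insurers and employers would simply have no role in the policy.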

Ethical challenges

The following table summarizes ethical challenges associated with automated medical diagnoses.

ID | CHALLENGE | RELATED TO *
C1 | Ensuring the patient's awareness of how automated diagnosis systems produce the diagnosis | Principle R, Principle A
C2 | Losing opportunities for interacting face-to-face with medical doctors | Principle R
C3 | Ensuring the reliability of an automated medical diagnosis | Principle R, Principle O
C4 | Deciding on the responsibility for an automated diagnosis | Principle M
C5 | Misuse of medical data, such as violating the patient's privacy by disclosing personal health records collected by an automated medical diagnosis system | Principle R
C6 | Ensuring transparency and accountability when providing health-related statistics, i.e. that published or disseminated information is not biased | Principle O, Principle A
C7 | Ensuring that personal health records are shared strictly following the regulations and with the patient's consent | Principle R, Principle A
C8 | Ensuring that health-related data stored in public or private databases are not used to discriminate against individuals | Principle R

* See Principles for more information about these principles.