Sample CON and Rebuttal
1. Critics might claim that AI systems in healthcare could contain systematic biases that discriminate against certain groups. This is indeed true, and the main reason for this issue is likely unrepresentative training data. Training data is essential in developing AI models because AI learns to make decisions and spot patterns based on it. However, those datasets are sometimes biased or contain prejudices because of the viewpoints of their creators or prevailing social norms. Specifically, when datasets overrepresent particular gender, racial, or age groups, AI-driven services may reflect those biases and produce skewed results or perform less accurately for certain individuals. For instance, BSR (2023) states that biases within AI models can lead to misdiagnosis of medical conditions in people of color or produce less effective treatment plans for them. Based on this evidence, biased AI tools might perform less accurately at identifying, for example, melanoma in Black patients, potentially delaying diagnosis or treatment until the condition becomes life-threatening. Therefore, systematic biases within AI systems can disadvantage certain communities, posing a threat to their right to life, and serious action must be taken to resolve this issue.
Although this concern is legitimate, it can be effectively addressed; it is a matter of careful implementation. Systematic biases in AI models can be reduced if the training data is diverse and represents a wide range of demographic groups and characteristics. In other words, if the training data includes different races, genders, age groups, and socioeconomic backgrounds, AI systems could work equally accurately for everyone thanks to the high-quality, diverse dataset. To demonstrate how this works, Daneshjou et al. (2022) conducted a study in which three AI algorithms were used to distinguish benign from malignant skin diseases. They found that all three performed less accurately on darker skin due to systematic biases within the training data. However, after two of the algorithms were retrained on diverse datasets that included different skin types and diseases, they performed accurately on both light and dark skin. This experiment highlights that training AI on diverse datasets is essential for mitigating bias. Thus, it is possible to address systematic biases in AI by training models on inclusive datasets.
2. While some research claims that AI systems can optimize healthcare processes and offer patients better medical service, AI services pose many risks of discrimination against certain groups. Gaumond and Regis (2023) argued that AI could reduce the number of medical errors in the healthcare industry and improve the quality of diagnoses. According to their logic, AI can handle time-consuming tasks such as writing basic prescriptions while staff gain the opportunity to spend more time with patients. This leads to a better allocation of doctors' time and allows them to assist more patients. That matters because doctors can then pay more attention to the needs of patients from lower socioeconomic backgrounds, which are often dismissed. Hence, integrating AI into healthcare may help more people receive medical assistance.
However, these basic time-consuming tasks are a significant factor in the quality of medical services: writing prescriptions requires considerable expertise and careful case review for every patient. AI draws on limited databases and cannot devise the most suitable treatment plan for every patient. According to BSR (2023), it tends to overgeneralize medical cases based on historical statistics and data lacking representation. Consequently, AI-assisted medical research and clinical trials using this imbalanced data can have disproportionately worse effects on the health of underrepresented communities. For example, the symptoms of Black people and people with disabilities may be interpreted through stereotypes or even ignored. In this way, AI violates the human right to quality medical care and the right to health, especially among members of the most vulnerable groups, whose treatment plans are not given sufficient attention. Therefore, optimizing healthcare processes through AI seems like a good idea, but on a practical level, it can threaten patients' freedom from discrimination.