Transcript

Module 4 – Unit 9 : Bias and Discrimination with AI

{00:00 – 00:13}

Welcome to this unit on Bias and Discrimination with AI.


By the end of this unit, you will be able to explain bias and discrimination concerns relating to AI for
health.

{00:13 – 01:32}

The data sets used to train AI models can be biased. Many can exclude:

• Girls and women;
• Ethnic minorities;
• Elderly people;
• Rural communities;
• And disadvantaged groups.

AI is usually biased towards the majority data set – that is, the populations for which there are the most data.

This means that, in unequal societies, AI may be biased towards the majority and place a minority
population at a disadvantage – which could increase the risk of vulnerable people being harmed.
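
To make this concrete, here is a minimal Python sketch (not part of the WHO material) simulating that effect. The group sizes, the single "biomarker" feature and the outcome thresholds are all invented for illustration; the point is that one model trained on pooled data dominated by a majority group can look accurate overall while performing noticeably worse for the underrepresented group.

    # Illustrative sketch only: majority-dominated training data can hide
    # poor performance for a minority group behind a good overall accuracy.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(0)

    def make_group(n, threshold):
        """Simulate one group: a single biomarker x, positive outcome when x > threshold."""
        x = rng.normal(size=(n, 1))
        y = (x[:, 0] > threshold).astype(int)
        return x, y

    # The majority group (95% of the data) and the minority group (5%) have
    # different relationships between the biomarker and the outcome
    # (a stand-in for a different health profile).
    x_maj, y_maj = make_group(9500, threshold=0.0)
    x_min, y_min = make_group(500, threshold=1.0)

    X = np.vstack([x_maj, x_min])
    y = np.concatenate([y_maj, y_min])

    model = LogisticRegression().fit(X, y)   # one model trained on the pooled data

    print("overall accuracy :", accuracy_score(y, model.predict(X)))
    print("majority accuracy:", accuracy_score(y_maj, model.predict(x_maj)))
    print("minority accuracy:", accuracy_score(y_min, model.predict(x_min)))
    # Typical result: overall and majority accuracy are high (around 0.95),
    # while minority accuracy is markedly lower, even though the same model
    # is applied to both groups.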

When these systematic biases are integrated into AI, they become normative biases and can exacerbate
and cement existing disparities in health care.

If existing bias and discrimination in health care provision are captured in the data used to train AI
models, then this bias becomes part of the recommendations made by AI-guided technologies.

This can result in recommendations that are irrelevant or inaccurate for the populations excluded
from the data.

And it means that an AI technology trained for use in one context will be ineffective when used in a
different context.

{01:32 – 02:34}

For example, AI models trained with data to detect skin cancer in lighter-skinned populations are not
accurate or effective for detecting skin cancer in people of colour.

Data biases could also affect the use of AI for drug development.

For example, if an AI technology is based on a racially homogenous dataset, then biomarkers that the
technology identifies – and that are responsive to a therapy – may be appropriate only for the race or
gender of the dataset and not for a more diverse population.

In such cases, an approved drug may not be effective for the excluded population or may even be
harmful to their health and well-being.

Data biases are also linked to the digital divide.

For example, because women in low- to middle-income countries are less likely to use a mobile phone,
they contribute less data to the data sets used to train AI and are less likely to benefit from the resulting services.

{02:34 – 03:21}

Another cause of bias is unbalanced collection of data, even where the digital divide is not a factor.

For example, genetic data tend to be collected disproportionately from people of European descent.

Furthermore, experimental and clinical studies tend to involve male experimental models or male
subjects, resulting in neglect of sex-specific biological differences.

Biases can also emerge when certain individuals or communities choose not to provide data, whether
through lack of trust, language barriers, or cost.

For example, data on certain population subsets may be difficult to collect if collection requires
expensive devices, such as wearable monitors.

{03:21 – 05:07}

Biases in datasets often depend on who funds and who designs an AI technology.

AI-based technologies have tended to be developed by one demographic group and gender, increasing
the likelihood of certain biases in the design.

For example, the first releases of Apple's HealthKit, which enabled specialized tracking of some
health risks, did not include a menstrual cycle tracker.

Bias can also arise from insufficient diversity of the people who label data or validate an algorithm.

To reduce bias, people with diverse ethnic and social backgrounds should be actively involved in AI
development. A diverse team is necessary to recognize flaws in the design or functionality of the AI and
to validate algorithms to ensure they are free of bias.
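
As one illustration of what such validation can involve (a minimal sketch only, not part of the WHO material; the group labels, the toy predictions and the per_group_rates helper are invented for this example), an algorithm's error rates can be compared across population groups rather than reported as a single overall figure:

    # Illustrative bias check: compare true-positive and false-positive rates per group.
    from collections import defaultdict

    def per_group_rates(groups, y_true, y_pred):
        """Return true-positive and false-positive rates for each group."""
        counts = defaultdict(lambda: {"tp": 0, "fn": 0, "fp": 0, "tn": 0})
        for g, t, p in zip(groups, y_true, y_pred):
            if t == 1:
                counts[g]["tp" if p == 1 else "fn"] += 1
            else:
                counts[g]["fp" if p == 1 else "tn"] += 1
        rates = {}
        for g, c in counts.items():
            tpr = c["tp"] / (c["tp"] + c["fn"]) if (c["tp"] + c["fn"]) else float("nan")
            fpr = c["fp"] / (c["fp"] + c["tn"]) if (c["fp"] + c["tn"]) else float("nan")
            rates[g] = {"TPR": round(tpr, 2), "FPR": round(fpr, 2)}
        return rates

    # Toy data: the model detects most cases in group "A" but misses cases in group "B".
    groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
    y_true = [1, 0, 1, 0, 1, 1, 0, 0]
    y_pred = [1, 0, 1, 0, 1, 0, 0, 1]
    print(per_group_rates(groups, y_true, y_pred))
    # A large gap in TPR or FPR between groups is a signal that the algorithm
    # may disadvantage the group with the worse rates.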

Bias can be due to the origin of the data with which AI is designed and trained.

It may not be possible to collect representative data if an AI technology is initially trained with data from
local populations whose health profile differs from that of the populations in which the technology is
eventually used.

AI is often trained with local data to which a company or research organization has access, but sold
globally with no consideration of the inadequacy of the training data.

If an AI technology is trained in one country and used in another, the technology could
discriminate, be ineffective or provide an incorrect diagnosis or prediction for a population with
different characteristics – such as race, ethnicity or body type.

{05:07 – 06:01}

Bias can also be introduced during implementation of systems in real-world settings.

If the design of the AI system hasn’t given due consideration to the diversity of the target population
(whether age, disability, co-morbidities or poverty), the technology may not be effective for these
populations.

As AI is designed predominantly in high-income countries, there may be significant misunderstanding of
how it should be deployed in low- to middle-income countries, including of its potentially discriminatory
impact (or worse) or of the fact that it cannot be used for certain populations.

You have now completed Unit 9 of Module 4: Ethical Challenges.

In the next unit, we will look at AI and Cybersecurity.

