Ethics Inf Technol (2011) 13:199–226
DOI 10.1007/s10676-010-9242-6
A framework for the ethical impact assessment
of information technology
David Wright
Published online: 8 July 2010
© Springer Science+Business Media B.V. 2010
Abstract This paper proposes a framework for an ethical
impact assessment which can be performed in regard to any
policy, service, project or programme involving information technology. The framework is structured on the four
principles posited by Beauchamp and Childress together
with a separate section on privacy and data protection. The
framework identifies key social values and ethical issues,
provides some brief explanatory contextual information
which is then followed by a set of questions aimed at the
technology developer or policy-maker to facilitate consideration of ethical issues, in consultation with stakeholders, which may arise in their undertaking. In addition,
the framework includes a set of ethical tools and procedural
practices which can be employed as part of the ethical
impact assessment. Although the framework has been
developed within a European context, it could be applied
equally well beyond European borders.
Keywords Ethical impact assessment · Ethical issues · Ethical tools · Respect for autonomy · Nonmaleficence · Beneficence · Justice
Introduction
Objective
The objective of this paper is to propose an ethical impact
assessment framework that could be used by those developing
new technologies, services, projects, policies or programmes
as a way to ensure ethical implications are adequately examined by stakeholders before deployment and so that mitigating
measures can be taken as necessary. The framework could be
used in many different contexts, wherever the decision-maker
perceives a need to take the ethical considerations of stakeholders into account.
Here are some examples of where an ethical impact
assessment could help or could have helped project managers or policy-makers identify ethical issues before
deploying a technology or service:
Google introduced its Buzz social network in February
2010 without adequate consideration of the ethical or
privacy impacts. Google developed Buzz as a rival to
Facebook by creating instant and automatic social
networks for users of its Gmail service. The snag was
that it did not ask users whether they wanted a social
network composed of the people whom they e-mailed,
no matter how frequently. As a New York Times
reporter observed, "E-mail, it turns out, can hold many secrets, from the names of personal physicians and illicit lovers to the identities of whistle-blowers and antigovernment activists."1 Surprised by the firestorm of criticism, Google had to make changes to Buzz within a few days of its introduction. If it had carried out an ethical impact assessment in advance of making Buzz operational, it might have avoided the flak.
1 Helft (2010).
Is it ethically acceptable to electronically tag those with
incipient dementia who may go wandering from
assisted living facilities? While it may be ethically
correct not to hold such people as virtual prisoners
within the confines of a residence, is it ethically
acceptable to keep them under constant surveillance?
Even if they consented to be tagged, can their consent be regarded as informed? Whose view should be
accepted if, in moments of lucidity, the senior citizen
did not want to be tagged, while his or her adult
children did want him or her to be tagged?
The UK government is introducing an electronic health
record scheme for the entire population of the country
on the basis of implied consent: patients are assumed
to agree to the creation of a record unless they refuse.
How ethically acceptable is the notion of implied
consent?
Following the attempt at blowing up an aircraft on its
way to Detroit at Christmas in 2009, the US, UK and
some other countries introduced full body scanners at
airports, which may or may not be successful in
detecting liquid explosives. While such scanners may
enhance security, they do so at the expense of the
passengers' privacy. Which is the ethically correct
choice?
These and many other examples indicate the utility for
the technology developer, policy-maker or project manager
in carrying out an ethical impact assessment in consultation
with stakeholders before the technology is deployed. One
of the objectives of the ethical impact assessment is to
engage stakeholders in order to identify, discuss and find
ways of dealing with ethical issues arising from the
development of new technologies, services, projects or
whatever. Stakeholders may have some information or
ideas or views or values which the project manager had not
previously considered. They may be able to suggest alternative courses of action to achieve the desired objectives.
They may be able to suggest some safeguards which would
minimise the ethical risks that might otherwise explode
after a technology or project is launched. By consulting
stakeholders before launch, the project manager may be
able to lower his liability and avoid some nasty surprises.
As a minimum, the policy-maker or project manager will
earn some good will by consulting stakeholders who might
otherwise be among his chief critics.
Method
The ethical impact assessment framework proposed in this
paper draws on data selected, collected and analysed from
various sources. Among those sources, Hofmann for one
has commented that "no general method for assessing the moral implications of (health) technology has been established".2 The framework proposed here offers a solution.
2 Hofmann refers specifically to health technology, but his observation may well be applicable to any technology. Hofmann (2005, p. 288).
The need for an ethical impact assessment framework also
seems apparent by virtue of the fact that the relevance of
ethical principles and social values may be influenced by
the context in which they are considered. The idea or need
to consider ethics in context is not new. For example, in his 1985 essay, "What is Computer Ethics?", Moor observed that "A typical problem in computer ethics arises because there is a policy vacuum about how computer technology should be used… A central task of computer ethics is to determine what we should do in such cases, i.e., to formulate policies to guide our actions."3 He added "Computer ethics is not a fixed set of rules which one shellacs and hangs on the wall. Nor is computer ethics the rote application of ethical principles to a value-free technology. Computer ethics requires us to think anew about the nature of computer technology and our values." An ethical impact assessment would be a way of addressing Moor's concerns.
Helen Nissenbaum, author of the influential essay "Privacy as contextual integrity", argued along somewhat
the same lines. She presented a model of informational
privacy in terms of contextual integrity, namely, that in
determining privacy threats, one needs to take into account
the nature of a situation or context: what is appropriate in
one context can be a violation of privacy in another context.4 Again, given the need to consider ethical issues in
context, an ethical impact assessment would be more
appropriate than prescriptive rules.
If a prescriptive ethical guidance is problematic because
contextual factors influence the ethics, then a better
approach would be to ask questions, which is what the
European Commission and others do, and which is the
approach adopted here too.5 Those making proposals for
funding under the Commission's Framework Programmes of research and technological development must respond to a set of ethical questions (e.g., "Does the proposal involve tracking the location or observation of people?"). Questions aimed at identifying issues also feature in the privacy
impact assessment models in countries such as Canada6
and the UK.7 Scholars such as Gary Marx have also formulated sets of questions aimed at uncovering ethical
issues.8 In preparing the ethical impact assessment framework presented in this paper, the author drew on the
approach and questions presented by these and other
sources.
3 Moor (1985).
4 Nissenbaum (2004).
5 http://cordis.europa.eu/fp7/ethics_en.html#ethics_cl
6 Treasury Board of Canada Secretariat 2002.
7 [UK] Information Commissioner's Office (ICO) 2009.
8 Marx (1998). Van Gorp also proposed a list of questions that helps researchers doing research in technological fields to identify ethical aspects of their research. Van Gorp (2009).
Target audience
The ethical impact assessment proposed in this paper is
primarily aimed at those who are developing or intend to
develop an information technology project, policy or programme that may have ethical implications. More specifically, this would include industry players when they are
developing a new technology or planning a new service as
well as policy-makers and regulatory authorities when they
are considering a new policy or regulation. In addition, the
ethical impact assessment framework should be of interest
to civil society organisations, so that when they become
aware of proposals or plans for new technologies, they can
advocate the frameworks use and their involvement in the
decision-making process. Other stakeholders, such as academics, may find the ethical impact assessment framework
of interest too and, as a consequence, may be able to
suggest improvements or to analyse its use. It might also be
of interest to the media as background to any stories they
prepare on the introduction of a new technology or service,
which in turn will help raise the awareness of the public
and other stakeholders about the associated ethical issues.
Nominally, an ethical impact assessment of a new or
emerging technology should target stakeholders interested
in or affected by the outcome. In the first instance, the
policy-maker or technology developer or project manager
should identify the stakeholders he or she thinks relevant,
but in most cases he or she should be open to or even
encourage other stakeholders to contribute to the assessment.9 To ensure those participating in an ethical impact
assessment are truly representative of the relevant stakeholder groups, the technology developer or policy maker
may need to make some special efforts to engage the relevant stakeholders in order to avoid something akin to
regulatory capture.
9 Dekker says ethical reflection in technology assessment requires an engagement of experts from different disciplines for two reasons: "Firstly, the technical, economical, legal and social aspects are deeply cross-correlated with the ethical reflection. And secondly, participating in such interdisciplinary discussions enables an ethical reflection which keeps in touch with the real world." See Dekker (2004).
How the paper is structured
This paper contains five main parts, namely, this introduction, a section on ethical principles and issues, a section on ethical tools, a section on procedural aspects and the conclusions. Not only does the paper advocate use of ethical impact assessments, but it also provides a structure for undertaking such assessments, i.e., it identifies key ethical principles and issues that should be addressed in an assessment as well as ethical tools that can be used in undertaking an assessment.
Some of the principles and issues are also values, while other issues are related to tactics, policies or regulations adopted by decision-makers in pursuit of values (like data protection). The word "issues" has been used here because both the values and policies can be debated. Indeed, one should expect that in the ethical impact assessment of a new technology, they will be debated. It is why an ethical impact assessment that engages stakeholders in the debate is necessary. Nevertheless, the identification of values and policy design are two different needs, although the former supports the latter and they may be served by the same ethical impact assessment framework. For example, in this framework under the principle of respect for autonomy, dignity is a social value (indeed, a fundamental right) while informed consent is a matter of policy. However, in particular situations, say with regard to the consequences of the application of a new technology, stakeholders could debate whether dignity is being respected or whether consent has truly been informed.
The ethical tools can be used to engender debate over the extent to which social values are respected by a new technology (or whatever) and what might be the ethical implications arising from the application of a new technology.
The section on procedural aspects or practices relates to the process of undertaking the assessment, stakeholder engagement and consultation, risk assessment, accountability, third-party review and audit, providing information, responding to complaints and good practice. It also briefly presents a step-by-step procedure or guidelines for undertaking an ethical impact assessment. There is a close relationship between the ethical tools and some of the procedural aspects. Employing the ethical tools is a way of engaging stakeholders. Providing more information and responding to complaints are also ways of engaging or interacting with stakeholders.
Previous studies and the role that IT plays
The construction of an ethical impact assessment framework, as proposed in this paper, draws on various sources
with regard to values, different types of impact assessment
and the role that IT plays.
With specific regard to values, it draws on those stated
in the EU Reform Treaty, signed by Heads of State and
Government at the European Council in Lisbon on 13
December 2007, such as human dignity, freedom, democracy, human rights protection, pluralism, non-discrimination, tolerance, justice, solidarity and gender equality.10
These values are also stated in the Charter of Fundamental
Rights of the European Union,11 and constitute the key
frame for design and implementation of all EU policies.12
The values set out in these texts could serve as an ethical
guidance. In fact, it has been adopted here as the baseline
for identifying the key values or ethical principles or issues
that must be taken into account in the development of new
technologies, etc. Other important policies dealing with
ethical issues or touching upon such issues are also
mentioned.
With regard to impact assessment, the paper draws on
the work that scholars, experts and policy-makers have
done, especially over the last 30 years or so. There are
various types of impact assessments, including
environmental impact assessments (which includes the notion of the precautionary principle which was given its impetus at the UN Rio Conference or "Earth Summit" in 1992),
risk assessment, which changed from purely technical analysis to an assessment involving stakeholders, those interested in or affected by a risk,13
technology assessment,14
regulatory impact assessment or simply impact assessment,15
privacy impact assessment,
etc.
In the context of other writings on IT, ethics and impact appraisal, this paper can be situated or should be positioned as the logical descendent of these antecedents.
All of these impact assessments, at least in their more progressive manifestations, have in common a recognition of the need to involve stakeholders in the assessment process. Authors such as Moor, Dekker, Skorupinski and Ott, Palm and Hansson, Beekman et al. have seen a relationship between participatory technology assessment and ethics. Stakeholder engagement and ethical consideration are key features of the ethical impact assessment framework proposed here too.
The notion of examining the ethical impacts of information technology has been gaining traction ever since Moor published the article cited above more than a quarter of century ago. Of more recent provenance is the work done by Skorupinski and Ott who argued that technology assessment (TA) is "not detachable" (their descriptor) from ethical questions for several reasons, among which are
TA is generally regarded as a policy instrument, to render responsible decisions possible in the realm of scientific and technological development.
Certain central aspects in the concepts of TA lead to ethical questions. These include, for instance, the function as an early warning system, which would highlight undesirable consequences or the aspect of how manipulation in TA arrangements can be avoided.
The judgement to regard a certain technological option as preferable in contrast to others as the result of a TA arrangement is not possible without reference to norms and values.16
They pointed out that technology assessment has several functions, which underscore the relationship between TA and ethics as well as the need to engage stakeholders, including the public, in the assessment process:
One of the key functions of TA is "early warning". A warning presupposes an altruistic or at least a well-meaning attitude. The attempt to prevent something undesirable occurring requires a value judgement about what is undesirable. The persons who make that attempt cannot at the same time be neutral observers.
Another function of TA is counselling. Giving advice is not possible without having made value judgements on which course of action should be preferred. The notion of counselling as one of the tasks of TA leads to several ethical and conceptual questions. Assessing risks is not possible without reference to norms and values.17
More recently, Beekman et al. view the ethical assessment of the application of new technologies as complementary to rather than an alternative to scientific risk assessments and economic cost-benefit assessments. Taken together, they say, these ethical, scientific and economic assessments should provide a sound basis for socio-political decision-making.18
10 http://eur-lex.europa.eu/JOHtml.do?uri=OJ:C:2007:306:SOM:EN:HTML
11 http://www.europarl.europa.eu/charter/pdf/text_en.pdf
12 European Commission 2007.
13 For a state-of-the-art review, see Renn (2008).
14 Technology assessments as an instrument for counselling political decision-makers were given a major impetus with the establishment of the Office for Technology Assessment (OTA) by the US Congress in 1972. Similar organisations were subsequently established in Europe, both at the Member State level (e.g., the Danish Board of Technology) and at the European level (e.g., the European Parliament's office of Science and Technology Options Assessment (STOA)). STOA is a member of the European Parliamentary Technology Assessment Network (EPTA). Other EPTA members are the national parliamentary technology assessment bodies of Denmark, Finland, France, Germany, Greece, Italy, the Netherlands and the United Kingdom.
15 For a good overview of developments in this area, see Kirkpatrick and Parker (2007).
16 Skorupinski and Ott (2002, p. 97).
17 Skorupinski and Ott, p. 98.
18 Beekman et al. (2006, p. 13).
And more recent still, Palm and Hansson concur with
the view that the primary task of an ethical technology
assessment is to identify potential ethical issues associated
with a new technology.19 They offered a preliminary
check-list of ethical issues, i.e., including:
Dissemination and use of information
Control, influence and power
Impact on social contact patterns
Privacy
Sustainability
Human reproduction
Gender, minorities and justice
International relations
Impact on human values.
Most of these issues can also be found in this paper.
The collection of essays brought together by Paul Sollie and Marcus Düwell in their book Evaluating New Technologies advanced even further the state of the methodological art of ethical assessment of new technologies.20 In their introductory chapter, the editors note that "Although technology is easily one of the most permeating and consequential features of modern society, surprisingly, an ethics of technology is still in its infancy. Important reasons for this underdevelopment of a methodology for morally evaluating technology development are related to its complex, uncertain, dynamic, and large-scale character that seems to resist human control."21
Regarding the role that IT plays, in conducting an ethical impact assessment of a new technology, one should not
treat the technology as a kind of black box. "Technologies always help to shape human actions and interpretations on the basis of which (moral) decisions are made", comments Verbeek. "When technologies are always influencing human actions, we had better try and give this influence a desirable and morally justifiable form."22 Technologies are neither neutral nor value-free. Hofmann agrees: "Technology expresses and influences the norms and values of its social context."23 Orlikowski and Iacono rightly say that because IT artefacts are designed, constructed, and used by people, they are shaped by the interests, values, and assumptions of a wide variety of communities of developers, investors, users, etc.24
19 Palm and Hansson (2006). An extensive set of criteria, some of which are ethical, for assessing emerging technologies can be found in Kuzma et al. (2008). Kuzma et al. also use a question approach for assessing emerging technologies.
20 Sollie and Düwell (2009).
21 Sollie and Düwell, p. 4.
22 Verbeek (2009, p. 67, 71).
23 Hofmann, p. 289. He observes (p. 288) that there appears to be broad agreement among scholars that technology is value-laden.
24 Orlikowski and Iacono (2001, p. 131).
They reviewed 188 articles published over 10 years in
the journal Information Systems Research (ISR) and found
a broad array of conceptualizations of IT artefacts.25 They
make the point that IT artefacts are not static or unchanging, but dynamic: different features are developed and users adapt the artefact for new and different uses. Given the context-specificity of IT artefacts, "there is no single, one-size-fits-all conceptualization of technology that will work for all studies".26 Furthermore, they say, the tendency to take IT artefacts for granted in IS studies "has limited our ability as researchers to understand many of their critical implications, both intended and unintended, for individuals, groups, organisations and society."27 While it may be impossible to foresee all of the
ethical and other consequences of an emerging technology,
nevertheless, an ethical impact assessment, involving different stakeholders from different disciplines and backgrounds, may be a good way of avoiding the traps
discerned by Orlikowski and Iaconoi.e., of not seeing the
context specificity of a technology and of not examining its
critical implications for individuals, groups, organisations
and society.
25 Orlikowski and Iacono, p. 130.
26 Orlikowski and Iacono, p. 131.
27 Orlikowski and Iacono, p. 133.
In addition, we must recognise that the (ethical) complexity of a technology multiplies as it converges with
other technologies. The Internet was originally conceived
as a way for scientists to exchange documents, but has
changed beyond recognition as it has brought together and
absorbed new broadband technologies, high speed
servers, a multiplicity of low-cost, high-performance user
devices, the vast storage capacity of cloud computing,
GPS, networking sensors and actuators, ambient intelligence, the so-called Internet of Things and so on. In less
than the time span of a single generation, the Internet has
gone from something few people had even heard of to a
point where broadband access to it is increasingly and
widely described as a fundamental right. We can assume
that even the US Defense Advanced Research Projects
Agency (DARPA) could not have imagined the immeasurable benefits, nor the dangers of a virtually ubiquitous
Internet: the reductions in privacy, the proliferation of ID
theft, child grooming, spam, cybercrime and cyberterrorism, nor the extent to which our society and economy are
underpinned by what has become a critical infrastructure.
In considering whether the architecture of an IT system
matters in and of itself in terms of impacts, the Internet (or
rather the World Wide Web) provides a case in point. Just
as the architecture of the Internet has changed, and continues to change as we progress towards Web 2.0, Web
3.0 and the semantic Web, we can see that the architecture
of this colossal IT system does matter. However, what the
impacts might be will depend on the specifics of the
architecture. It might be possible to minimise or eliminate
any negative impacts by modifying the architecture, as
Web 2.0 is regarded as a solution of sorts to the existing
WWW.
What I might regard as negative in the architecture of,
let's say, a national IT system for electronic health records
may well differ from what the designers think. This is
clearly why it is useful (necessary) to engage all relevant
stakeholders to discuss the consequences, to minimise
information asymmetries and for all stakeholders, especially the proponents of the architecture, system, project,
technology or whatever to engage with their peers with an
open mind and a willingness to address problems and to
recognise that it will most likely be in their own interests to
do so at an early stage, rather than when the system or
architecture is installed and when there may be significant
antipathy on the part of other stakeholders.
Thus, an ethical impact assessment must not only focus
on the ethics of a technology, but on the technology itself,
its values, how it is perceived and how it is used or might
be used in the future, not only by itself but as a component
in a larger technological framework.
Ethical principles
The framework is structured on the four principles posited by Beauchamp and Childress28 together with a separate section on privacy and data protection. Under these major principles are some values and/or issues followed by some brief explanatory text and a set of questions aimed at the technology developer or policy-maker to facilitate a consideration of the ethical issues which may arise in their undertaking. Values and issues are clustered together because of their relation to the overarching principles and because they will generate debate among stakeholders. For example, everyone would subscribe to the shared value of dignity, but dignity could also become an issue in particular contexts, i.e., does an emerging technology respect the dignity of the individual? Is dignity compromised? What is meant by dignity in the given context?
The framework draws on various sources (see the References) in compiling these questions. No doubt more issues and questions could be added, and some questions could be framed differently, and if so, that's fine. To some extent, the issues and questions set out here should be regarded as indicative, rather than comprehensive.
Respect for autonomy (right to liberty)
According to Beauchamp and Childress, "Personal autonomy is, at a minimum, self-rule that is free from both controlling interference by others and from limitations, such as inadequate understanding, that prevent meaningful choice. The autonomous individual acts freely in accordance with a self-chosen plan… A person of diminished autonomy, by contrast, is in some respects controlled by others or incapable of deliberating or acting on the basis of his or her desire and plans… Virtually all theories of autonomy agree that two conditions are essential for autonomy: (1) liberty (independence from controlling influences) and (2) agency (capacity for intentional action)."29
Autonomy, equated with liberty, is a right enshrined in Article 6 of the European Charter of Fundamental Rights as well as Article 3 of the UN's Universal Declaration of Human Rights of 10 December 1948.30
Questions
Does the technology or project curtail a person's right to liberty and security in any way? If so, what measures could be taken to avoid such curtailment?
Does the project recognise and respect the right of persons with disabilities to benefit from measures designed to ensure their independence, social and occupational integration and participation in the life of the community?
Will the project use a technology to constrain a person or curtail their freedom of movement or association? If so, what is the justification?
Does the person have a meaningful choice, i.e., are some alternatives so costly that they are not really viable alternatives? If not, what could be done to provide real choice?
28 Beauchamp and Childress (2001).
29 Beauchamp and Childress, p. 58.
30 www.un.org/Overview/rights.html
Dignity
Dignity is a key value, as evidenced by its being the subject of Article 1 ("Human dignity is inviolable. It must be respected and protected.") of the Charter of Fundamental Rights as well as Article 25 which specifically refers to the rights of the elderly ("The Union recognises and respects the rights of the elderly to lead a life of dignity and independence and to participate in social and cultural life.").
Dignity also features in Article 1 of the UN's Universal Declaration of Human Rights, which states that "All human beings are born free and equal in dignity and
rights." Article 1 of the Charter of Fundamental Rights
provides that dignity is to be not only respected, but also
protected. This means that public authorities are
required not only to refrain from tampering or interfering
with an individual's private sphere, but also to take steps in
order to bring about the conditions allowing individuals to
live with dignity.
"Dignity means that citizens should be enabled to live in dignity and security and be free of exploitation and physical or mental abuse", according to Boddy. "Citizens should be able to participate actively in the formulation and implementation of policies that directly affect their well-being. They should be treated fairly regardless of age, gender, racial or ethnic background, disability or other status, and be valued independently of their economic contribution."31
The fact that some citizens need to be in assisted living
residences does not mean that they have lost their entitlement to their fundamental rights and dignity. The LOCOMOTION report rightly makes this point: Clients should
be enabled to enjoy human rights and fundamental freedoms when residing in any shelter, care or treatment
facility, including full respect for their dignity, beliefs,
needs and privacy and for the right to make decisions about
their continuing care and the quality of their lives.32
Respect for the dignity of senior citizens can be manifested in different ways, including in the use of devices by or for senior citizens, i.e., as far as possible devices should not make users feel different from others or make them appear to be something less than the rest of us.33
31 Boddy (2004, p. 39). LOCOMOTION was a project funded by the European Commission's Fifth Framework Programme (FP5).
32 Boddy, p. 40.
33 Boddy, p. 48.
Questions
Will the technology or project be developed and
implemented in a way that recognises and respects the
right of citizens to lead a life of dignity and independence and to participate in social and cultural life? If not,
what changes can be made?
Is such a recognition explicitly articulated in statements
to those involved in or affected by the project?
Does the technology compromise or violate human
dignity? For example, in the instance of body scanners,
can citizens decline to be scanned or, if not, what
measures can be put in place to minimise or avoid
compromising their dignity?
Does the project require citizens to use a technology that
marks them in some way as cognitively or physically
disabled? If so, can the technology be designed in a way
so that it does not make them stand out in a crowd?
Does the project or service or application involve
implants? If so, does it accord with the opinion of the
European Group on Ethics (EGE)?34
Informed consent
It has been said that consent must be meaningful: "Give us your data or we won't serve you" is not meaningful consent.35
The EU Directive on clinical trials (2001/20/EC) provides good guidance on informed consent. It says that a
person gives informed consent to take part in a trial only if
his decision:
is given freely after that person is informed of the
nature, significance, implications and risks of the trial
and either:
is evidenced in writing, dated and signed, or otherwise
marked, by that person so as to indicate his consent, or
if the person is unable to sign or to mark a document,
his consent is given orally in the presence of at least
one witness and recorded in writing.
The Directive says the following conditions apply to the
giving of informed consent by a capable adult:
The subject (end user) has had an interview with the
investigator, or another member of the investigating
team, in which he has been given the opportunity to
understand the objectives, risks and inconveniences of
the trial (research activity) and the conditions under
which it is to be conducted.
The subject has been informed of his right to withdraw
from the trial at any time.
The subject has given his informed consent to taking
part in the trial.
The subject may, without being subject to any resulting
detriment, withdraw from the trial at any time.
The subject has been provided with a contact point
where he may obtain further information about the trial.
The Directive says that in the case of other persons
incapable of giving their consent, such as persons with
dementia, psychiatric patients, etc., inclusion in clinical trials should be on an even more restrictive basis. Medicinal
products for trial may be administered to all such individuals
only when there are grounds for assuming that the direct
benefit to the patient outweighs the risks. Moreover, in such
cases, the written consent of the patient's legal representative, given in co-operation with the treating doctor, is necessary before participation in any such clinical trial.
34 For ethical considerations re implants, see the European Group on Ethics in Science and New Technologies (EGE) 2005.
35 Goldberg et al. (2001).
The posture of the Directive toward informed consent is
not only relevant in clinical trials, but also in trials and
applications of information technology too, in instances
where persons might use a particular technology of their own
free will or might be obliged to use it in a situation where they
cannot give informed consent (for example, because they
suffer from dementia). Informed consent is also addressed in
Article 7 of the EU Data Protection Directive: "Member States shall provide that personal data may be processed only if: (a) the data subject has unambiguously given his consent…". Many online services should obtain informed consent
with regard to the collection and use of personal data.
Questions
Will the project obtain the free and informed consent of those persons to be involved in or affected by the project? If not, why not?
Will the person be informed of the nature, significance, implications and risks of the project or technology?
Will such consent be evidenced in writing, dated and signed, or otherwise marked, by that person so as to indicate his consent?
If the person is unable to sign or to mark a document so as to indicate his consent, can his consent be given orally in the presence of at least one witness and recorded in writing?
Does the consent outline the use for which data are to be collected, how the data are to be collected, instructions on how to obtain a copy of the data, a description of the mechanism to correct any erroneous data, and details of who will have access to the data?
If the individual is not able to give informed consent (because, for example, the person suffers from dementia) to participate in a project or to use of a technology, will the project representatives consult with close relatives, a guardian with powers over the person's welfare or professional carers? Will written consent be obtained from the patient's legal representative and his doctor?
Will the person have an interview with a project representative in which he will be informed of the objectives, risks and inconveniences of the project or research activity and the conditions under which the project is to be conducted?
Will the person be informed of his right to withdraw from the project or trial at any time, without being subject to any resulting detriment or the foreseeable consequences of declining to participate or withdrawing?
Will the project ensure that persons involved in the project give their informed consent, not only in relation to the aims of the project, but also in relation to the process of the research, i.e., how data will be collected and by whom, where it will be collected, and what happens to the results?
Are persons involved in or affected by the project able to withdraw from the project and to withdraw their data at any time right up until publication?
Does the project or service collect information from children? How are their rights protected?
Is consent given truly voluntary? For example, does the person need to give consent in order to get a service to which there is no alternative?
Does the person have to deliberately and consciously opt out in order not to receive the service?
Nonmaleficence (avoiding harm)
Beauchamp and Childress say that "The principle of nonmaleficence asserts an obligation not to inflict harm on others" and that "Nonmaleficence only requires intentionally refraining from actions that cause harm. Rules of nonmaleficence, therefore, take the form of 'Do not do X'."36 Under this broad principle, this framework includes several ethical values and issues, as follows.
36 Beauchamp and Childress 2001, p. 113 and p. 115.
Safety
Article 38 of the Charter of Fundamental Rights deals with
consumer protection: "Union policies shall ensure a high level of consumer protection." It is the subject of Article 153 of the EC Treaty: "In order to promote the interests of consumers and to ensure a high level of consumer protection, the Community shall contribute to protecting the health, safety and economic interests of consumers, as well as to promoting their right to information, education and to organise themselves in order to safeguard their interests."
Consumer protection at European level is also provided by
(amongst others) Directive 93/13 on unfair terms in consumer contracts, Directive 97/7 on consumer protection in
respect of distance contracts and the Directive on liability
for defective products (85/374/EEC).
Questions
Is there any risk that the technology or project may cause
any physical or psychological harm to consumers? If so,
what measures can be adopted to avoid or mitigate the
risk?
Have any independent studies already been carried out
or, if not, are any planned which will address the safety
of the technology or service or trials? If so, will they be
made public?
To what extent is scientific or other objective evidence
used in making decisions about specific products,
processes or trials?
Does the technology or project affect consumer
protection?
Will the project take any measures to ensure that persons
involved in or affected by the project will be protected
from harm in the sense that they will not be exposed to
any risks other than those they might meet in normal
everyday life?
Can the information generated by the project be used in
such a way as to cause unwarranted harm or disadvantage to a person or a group?
Does the project comply with the spirit of consumer
legislation (e.g., Directive 93/13 on unfair terms in
consumer contracts, Directive 97/7 on consumer protection in respect of distance contracts and the Directive on
liability for defective products (85/374/EEC))?
Social solidarity, inclusion and exclusion
The European Council's Lisbon Strategy adopted the notion of e-inclusion which refers to the actions "to realise an inclusive information society, that is, an information society for all".37 To achieve this objective, which is a manifestation of the value of social solidarity, Europe must tackle the root causes of exclusion and e-exclusion. There are various reasons why some people are excluded from the Information Society, but cost and knowledge are among the principal ones.
37 European Council resolution on e-Inclusion 2001.
Questions
Has the project taken any steps to reach out to the e-excluded (i.e., those excluded from use of the Internet)? If not, what steps (if any) could be taken?
Does the project or policy have any effects on the inclusion or exclusion of any groups?
Are there offline alternatives to online services?
Is there a wide range of perspectives and expertise involved in decision-making for the project?
How many and what kinds of opportunities do stakeholders and citizens have to bring up value concerns?
Isolation and substitution of human contact
Isolation is the objective condition of having too few and too poor social ties, of not being in any relevant social network. New forms of communication, from phone calls to e-mails, instant messaging, Web meetings, social networking, wireless personal area networks and so on, help to alleviate, if not overcome, isolation. By the same token, however, new communication tools may become a substitution for face-to-face contact and could, thereby, make social isolation worse. Palm and Hansson rightly observe that even if communication is facilitated, "it is not self-evident that this will bring people together. There is a tendency for electronically mediated contacts to substitute face-to-face contacts."38 Moreover, many senior citizens and the disabled are already isolated because new technologies and services are not affordable or are otherwise inaccessible. In any event, the availability of new communication technologies may diminish the interest in going outside the home, which would only compound the reduction in face-to-face contacts.
38 Palm and Hansson, p. 552.
Questions
Will the project use a technology which could replace or substitute for human contact? What will be the impact on those affected?
Is there a risk that a technology or service may lead to greater social isolation of individuals? If so, what measures could be adopted to avoid that?
Is there a risk that use of the technology will be seen as stigmatising, e.g., in distinguishing the user from other people?
Discrimination and social sorting
Article 21 of the European Charter of Fundamental Rights
prohibits "Any discrimination based on any ground such as sex, race, colour, ethnic or social origin, genetic features, language, religion or belief, political or any other opinion, membership of a national minority, property, birth, disability, age or sexual orientation."
Discrimination occurs, not only in employment but also
in access to goods and services such as banking, education,
transport and health. Aiming to guarantee equal treatment
in these areas, the European Commission proposed legislation on anti-discrimination outside the field of employment in the summer of 2008. The European Parliament
adopted the Directive on 2 April 2009.
Profiling technologies have raised a host of ethical, legal
and other issues including privacy, equality, due process,
security and liability. Profiling technologies make possible
a far-reaching monitoring of an individuals behaviour and
preferences. Profiling technologies are by their very nature
discriminatory tools. They allow unparalleled kinds of
social sorting39 and segmentation which could have unfair
effects. The people profiled may have to pay higher prices,
could miss out on important offers or opportunities, and
may run increased risks because catering to their needs is
less profitable. In most cases, they will not be aware of this,
since profiling practices are mostly invisible and the profiles themselves protected by intellectual property or trade
secret. This poses a threat to the equality of and solidarity
of citizens.40
39 Social sorting is a process of classifying people and populations according to varying criteria, to determine who should be targeted for special treatment, suspicion, eligibility, inclusion, access and so on. See Lyon (2003, p. 20).
40 For more on profiling and social sorting, see Hildebrandt and Gutwirth (2008) as well as Lyon, op. cit.
Questions
Does the project or service use profiling technologies?
Does the project or service facilitate social sorting?
Could the project be perceived as discriminating against
any groups? If so, what measures could be taken to
ensure this does not happen?
Will some groups have to pay more for certain services
(e.g., insurance) than other groups?
Beneficence
Beauchamp and Childress say "Morality requires not only that we treat persons autonomously and refrain from harming them, but also that we contribute to their welfare. Such beneficial actions fall under the heading of beneficence… principles of beneficence potentially demand more than the principle of nonmaleficence because agents must take positive steps to help others, not merely refrain from harmful acts." They cite two principles of beneficence: "Positive beneficence requires agents to provide benefits. Utility requires that agents balance benefits and drawbacks to produce the best overall results."41
41 Beauchamp and Childress, p. 165.
Questions
Will the project provide a benefit to individuals? If so,
how will individuals benefit from the project (or use of
the technology or service)?
Who benefits from the project and in what way?
Will the project improve personal safety, increase
dignity, independence or a sense of freedom?
Does the project serve broad community goals and/or
values or only the goals of the data collector? What are
these, and how are they served?
Are there alternative, less privacy intrusive or less costly
means of achieving the objectives of the project?
What are the consequences of not proceeding with
development of the project?
Does the project or technology or service facilitate the
self-expression of users?
Universal service
Universal service means "an obligation imposed on one or more operators of electronic communications networks and/or services to provide a minimum set of services to all users, regardless of their geographical location within the national territory, at an affordable price".42 Universal service is broader than basic telephony service. Now the notion of universal service in Europe encompasses broadband and Internet access for all. The European Commission and various Member States have recognised that it makes economic and social sense to extend broadband Internet access to all citizens. It is also the ethically correct thing to do. They have made commitments with specific deadlines to achieving this objective.43 Finland has recently made broadband access to the Internet a basic right.44
42 European Parliament and Council 2002.
43 On 28 January 2009, the European Commission announced its aim to achieve 100 per cent high-speed Internet coverage for all citizens by 2010. See European Commission 2009. http://europa.eu/rapid/pressReleasesAction.do?reference=MEMO/09/35
44 Johnson (2009).
Questions
Will the project or service be made available to all
citizens? When and how will this be done?
Will training be provided to those who do not (yet) have
computer skills or knowledge of the Internet? Who
should provide the training and under what conditions?
Will the service cost the same for users who live in
remote or rural areas as for users who live in urban
areas? How should a cost differential be paid?
Accessibility
With some exceptions, industry is reluctant to factor the
needs of the disabled and senior citizens into their design of
technologies and services and to adopt a design-for-all
approach. The accessibility (user-friendliness) of devices
and services are prerequisites for the e-inclusion of citizens
in the Information Society. Markets tend to overlook the
needs of senior citizens and the disabled: there are few
guidelines, voluntary or mandatory standards and related
regulatory frameworks.45
45 European Commission 2007.
Others have said commitment to accessibility is widespread throughout the ICT industry, that there is a strong
willingness on the part of software and hardware vendors
to create accessible products; however, vendors' ability to
develop and deploy accessible products is held back by the
need to comply with multiple standards. Thus, there needs
to be greater convergence between the accessibility
standards in force in different areas, such as Europe and the US, so that vendors can develop products that can be
marketed and sold worldwide.46
Although the initiatives of some in the private sector to
improve accessibility are welcome, overall, there is still a
far from adequate supply of affordable, accessible ICTs.47
According to the European Commission, a lack of accessibility persists in many areas, including websites, digital
television, phones, emergency services and public information terminals. New barriers to accessibility are
appearing, often because of market failures, even though
the markets for accessible products and services are worth
many billions of euros. With 15 per cent of the EU population suffering some form of disability, they represent a
mass market.
Questions
Does the new technology or service or application
expect a certain level of knowledge of computers and the
Internet that some people may not have?
Could the technology or service be designed in a way
that makes it accessible and easy to use for more people,
e.g., senior citizens and/or citizens with disabilities?
Are some services being transferred to the Internet only,
so that a service is effectively no longer available to
people who do not (know how to) use computers or the
Internet? What alternatives exist for such people?
Questions
Is the project or technology or service being designed
taking into account values such as human well being,
dignity, justice, welfare, human rights, trust, autonomy
and privacy?
Have the technologists and engineers discussed their
project with ethicists and other experts from the social
sciences to ensure value sensitive design?
Does the new technology, service or application
empower users?
Sustainability
Sustainability, as used here, refers to a condition whereby a
project or service can be sustained, can continue into the
future, either because it can generate the financial return
necessary for doing so or that it has external support (e.g.,
government funding) which is not likely to go away in the
foreseeable future. In addition to economic and social
sustainability, more conventional understandings of sustainability should also be considered, i.e., decisions made
today should be defensible in relation to coming generations and the depletion of natural resources. Often new
technological products can be improved, for instance,
through the use of more recyclable materials.50
Questions
Is the project, technology or service economically or
socially sustainable? If not, and if the technology or
service or project appears to offer benefits, what could be
done to make it sustainable?
Should a service provided by means of a research project
continue once the research funding comes to an end?
Does the technology have obsolescence built in? If so,
can it be justified?
Has the project manager or technology developer
discussed their products with environmentalists with a
view to determining how their products can be recycled
or how their products can be designed to minimise
impact on the environment?
Value sensitive design
Some experts have argued that technology is not neutral
with respect to values. Among those that argue in favour of
Value Sensitive Design, Flanagan, Howe and Nissenbaum
say that the design of technologies bears directly and systematically on the realisation, or suppression, of particular
configurations of social, ethical and political values.48 They
also observe that the values of members of a design team,
even those who have not had a say in top level decisions,
often shape a project in significant ways as it moves
through the design process. Beliefs and commitments, and
ethnic, economic, and disciplinary training and education,
may frame their perspectives, preferences, and design
tendencies, resulting eventually in features that affect the
values embodied in particular systems.49
209
Justice
Beauchamp and Childress draw a distinction between the
terms justice and distributive justice as follows:
The terms fairness, desert (what is deserved), and
entitlement have been used by various philosophers in
attempts to explicate justice. These accounts interpret
46
See the statement by Oracle: Oracle Welcomes New EU Policy
on e-Inclusion. http://www.oracle.com/global/eu/public-policy/
fs/new-e-inclusion-policy.html
47
European Commission 2007, p. 4.
48
Flanagan et al. (2008).
49
Flanagan, et al., p. 335.
50
Palm and Hansson, p. 553. See also Anke van Gorp who also
includes sustainability in his checklist of ethical issues and in this
sense. van Gorp, op. cit., p. 41.
123
210
D. Wright
justice as fair, equitable, and appropriate treatment in light of what is due or owed to persons. Standards of justice are needed whenever persons are due benefits or burdens because of their particular properties or circumstances, such as being productive or having been harmed by another person's acts. A holder of a valid claim based in justice has a right, and therefore is due something. An injustice thus involves a wrongful act or omission that denies people benefits to which they have a right or distributes burdens unfairly.

The term distributive justice refers to fair, equitable, and appropriate distributions determined by justified norms that structure the terms of social cooperation. Its scope includes policies that allot diverse benefits and burdens, such as property, resources, taxation, privileges, and opportunities. Distributive justice refers broadly to the distribution of all rights and responsibilities in society, including, for example, civil and political rights.51

Equality and fairness (social justice)

One commentator has distinguished between equality and fairness, thusly:

The terms justice and fairness are often used interchangeably. Taken in its broader sense, justice is action in accordance with the requirements of some law. Some maintain that justice consists of rules common to all humanity that emerge out of some sort of consensus. This sort of justice is often thought of as something higher than a society's legal system. It is in those cases where an action seems to violate some universal rule of conduct that we are likely to call it unjust. In its narrower sense, justice is fairness. It is action that pays due regard to the proper interests, property and safety of one's fellows. While justice in the broader sense is often thought of as transcendental, justice as fairness is more context-bound. Parties concerned with fairness typically strive to work out something comfortable and adopt procedures that resemble rules of a game. They work to ensure that people receive their fair share of benefits and burdens and adhere to a system of fair play.

The principles of justice and fairness can be thought of as rules of fair play for issues of social justice. Whether they turn out to be grounded in universal laws or ones that are more context-bound, these principles determine the way in which the various types of justice are carried out.

The principles of equity, equality, and need are most relevant in the context of distributive justice, but might play a role in a variety of social justice issues. These principles all appeal to the notion of desert, the idea that fair treatment is a matter of giving people what they deserve.52

Questions

Has the project identified all vulnerable groups that may be affected by its undertaking?
Is the project equitable in its treatment of all groups in society? If not, how could it be made more equitable?
Does the project confer benefits on some groups but not on others? If so, how is it justified in doing so?
Do some groups have to pay more than other groups for the same service?
Is there a fair and just system for addressing project or technology failures with appropriate compensation to affected stakeholders?
Will the service or technology be made widely available or will it be restricted to only the wealthy, powerful or technologically sophisticated?53
Does the project or policy apply to all people or only to those less powerful or unable to resist?
If there are means of resisting the provision of personal information, are these means equally available or are they restricted to the most privileged?54
Are there negative effects on those beyond the person involved in the project or trials and, if so, can they be adequately mediated?
If persons are treated differently, is there a rationale for differential applications, which is clear and justifiable?
Will any information gained be used in a way that could cause harm or disadvantage to the person to whom it pertains? For example, could an insurance company use the information to increase the premiums charged or to refuse cover?
51 Beauchamp and Childress 2001, p. 226.
52 Maiese (2003).
53 Marx, p. 174.
54 Marx, p. 174.

Privacy and data protection

Privacy is guaranteed as a right in the European Charter of Fundamental Rights, the European Convention on Human Rights and the UN's Universal Declaration of Human Rights, as well as in the EU's Data Protection Directive (95/46/EC), the e-Privacy Directive (2002/58/EC), etc.
Article 12 of the Universal Declaration of Human Rights says "No one shall be subjected to arbitrary interference with his privacy, family, home or correspondence." Article 8 of the Council of Europe's Convention for the Protection of Human Rights and Fundamental Freedoms, as amended by Protocol No. 11, Rome, 4.XI.1950, addresses the right to respect for private and family life.55
The 1980 OECD Guidelines on the Protection of Privacy and Transborder Flows of Personal Data and the EU's Data Protection Directive (95/46/EC) identify a set of fair information practices or principles which are important in any consideration of ethical issues that might arise in matters affecting privacy and data protection.

The complexities and intricacies of issues relating to privacy and data protection have received huge attention from policy-makers, regulators, academia, the mass media and many other stakeholders, including ethicists. "Privacy is now recognized by many computer ethicists as requiring more attention than it has previously received in moral theory. In part this is due to reconceptualizations of the private and public sphere brought about by the use of computer technology, which has resulted in inadequacies in existing moral theory about privacy."56
Although privacy in the sense of protection of personal
data has received lots of attention in the computer age,
privacy extends beyond computers and data protection.
Some years ago, Roger Clarke identified four dimensions
of privacy:
privacy of the person;
privacy of personal behaviour;
privacy of personal communications; and
privacy of personal data.57
All four of these dimensions are referenced in the pages
that follow.
Collection limitation (data minimisation) and retention

The OECD guidelines say there should be limits to the collection of personal data and any such data should be obtained by lawful and fair means and, where appropriate, with the knowledge or consent of the data subject.58

Data retention concerns the storage of call detail records of telephony and Internet traffic and transaction data, the phone calls made and received, e-mails sent and received and websites visited. These data provide an idea of who stays in contact with whom, when and how frequently. Further identifying information could be added as well as location data. The content of calls or e-mail is not (supposed to be) retained indefinitely. The Data Retention Directive (2006/24/EC) obliges service providers to retain call data for at least 6 months and up to 2 years. Such data may be viewed by law enforcement authorities.59

Questions

How will the project determine what constitutes the minimum amount of personal data to be collected?
Who will determine what constitutes the minimum amount of personal data to be collected?
Will any data be collected which is not necessary for fulfilling the stated purpose of the project?
Is information collected in ways of which the data subject is unaware?
Is information collected against the wishes of the person?
For how long will the information be retained?
Will the information be deleted when it is no longer needed for the purpose for which it was collected?
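By way of illustration only, the following minimal sketch shows one way a project might operationalise the last two questions above, purging call-detail records once a configured retention period has elapsed. The record fields and the 183-day period are hypothetical assumptions introduced here, not requirements drawn from the OECD guidelines or the Data Retention Directive.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Illustrative limit; the Data Retention Directive lets Member States
# choose a period of between 6 months and 2 years.
RETENTION_PERIOD = timedelta(days=183)

@dataclass
class CallRecord:          # hypothetical call-detail record
    caller: str
    callee: str
    started_at: datetime   # timezone-aware timestamp of the call

def purge_expired(records, now=None):
    """Keep only records still within the retention period; older
    records are dropped, i.e., deleted once no longer needed."""
    now = now or datetime.now(timezone.utc)
    return [r for r in records if now - r.started_at <= RETENTION_PERIOD]
```

A scheduled job calling such a routine would give a concrete, auditable answer to the question of when retained information is actually deleted.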
Data quality

The OECD guidelines say that personal data should be accurate, complete and kept up-to-date. Similarly, Article 6 of the EU's Data Protection Directive says that personal data must be accurate and, where necessary, kept up to date.

Questions

What measures will be put in place to ensure the quality of the information gathered?
What assurances exist that the information collected is true and accurate?
Has the information been collected from others than the person to whom it pertains?
If the information collected is not accurate, what consequences might ensue?

55 http://conventions.coe.int/treaty/en/Treaties/Html/005.htm
56 Brey (2000). Previous to this, Moor commented that "From the point of view of ethical theory, privacy is a curious value. On the one hand, it seems to be something of very great importance and something vital to defend, and, on the other hand, privacy seems to be a matter of individual preference, culturally relative, and difficult to justify in general." He goes on to argue that privacy has both instrumental value (that which is good because it leads to something else which is good) and intrinsic value (that which is good in itself). Moor (1997).
57 Clarke (2007).
58 The Guidelines don't specify or define what "where appropriate" means.
59 European Parliament and Council 2006.
Purpose specification

The OECD guidelines say that the purposes for which personal data are collected should be specified not later than at the time of data collection. Similarly, Article 6 of the EU's Data Protection Directive says that personal data must be collected for specified, explicit and legitimate purposes and not further processed in a way incompatible with those purposes.

Questions

Regarding the project, technology or service, are individuals aware that personal information is being (is to be) collected, who seeks it, and why?
Has the purpose of collecting personal data been clearly specified?
Has the project given individuals a full explanation of the purpose of the project or technology in a way that is clear and understandable?
Has the person been informed of the purpose of the research, its expected duration and the procedures by means of which the data is being (will be) collected?
Is there an appropriate balance between the importance of the project's objectives and the cost of the means?
How have the goals of the data collection been legitimated?
Is there a clear link between the information collected and the goal sought?60
Confidentiality, security and protection of data

One of the principles in the OECD guidelines deals with security safeguards and states that "Personal data should be protected by reasonable security safeguards against such risks as loss or unauthorised access, destruction, use, modification or disclosure of data." Similarly, Article 17 of the Data Protection Directive provides that the controller must implement appropriate technical and organizational measures to protect personal data against accidental or unlawful destruction or accidental loss, alteration, unauthorized disclosure or access, in particular where the processing involves the transmission of data over a network, and against all other unlawful forms of processing.

Questions

Has the project taken measures to ensure protection of personal data, e.g., by means of encryption and/or access control? If so, what are they?
Who will have access to any personal data collected for the project or service?
What safeguards will be put in place to ensure that those who have access to the information treat the information in confidence?
Many service providers who provide service via the telephone say that conversations are monitored for training or quality control purposes. Will that happen in this project or service? What happens (will happen) to such recorded conversations?
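As an illustration of the kind of safeguard the preceding questions envisage, the sketch below encrypts a personal-data value before storage and gates decryption behind a simple role check. It assumes the third-party Python "cryptography" package (Fernet symmetric encryption); the role names and the use of a single in-memory key are illustrative assumptions only, not a prescription for any particular project.

```python
from cryptography.fernet import Fernet

AUTHORISED_ROLES = {"data_controller", "data_processor"}  # assumed roles

key = Fernet.generate_key()   # in practice the key would live in a key store
fernet = Fernet(key)

def store_personal_data(value: str) -> bytes:
    """Encrypt a personal-data value before it is written to storage."""
    return fernet.encrypt(value.encode("utf-8"))

def read_personal_data(token: bytes, role: str) -> str:
    """Decrypt only for callers whose role is authorised to see the data."""
    if role not in AUTHORISED_ROLES:
        raise PermissionError(f"role {role!r} may not access personal data")
    return fernet.decrypt(token).decode("utf-8")
```

Even so simple a separation of encryption from authorised access makes it easier to answer, in an assessment, who can see the information and under what conditions.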
Use limitation

The OECD guidelines state that personal data should not be disclosed, made available or otherwise used for purposes other than those specified except with the consent of the data subject or by the authority of law. Similarly, Article 6 of the EU's Data Protection Directive says that personal data must be adequate, relevant and not excessive in relation to the purposes for which they are collected and/or further processed.

Questions

Is the personal information used for the purposes given for its collection, and do the data stay with the original collector, or do they migrate elsewhere?
Is the personal data collected used for profit without permission from or benefit to the person who provided it?61
Who will have access to or use of the data collected?
Will the data be transferred to or shared with others?

60 Marx, p. 174.
61 Marx, p. 174.

Transparency (openness)
Transparency is a precondition to public trust and confidence. A lack of transparency risks undermining support
for or interest in a technology or service.
The OECD guidelines contain an openness principle
which states that There should be a general policy of
openness about developments, practices and policies with
respect to personal data. Means should be readily available
for establishing the existence and nature of personal data,
and the main purposes of their use, as well as the identity
and usual residence of the data controller.
While the Data Protection Directive does not explicitly
mention openness in this way, recital 63 does say that data
protection supervisory authorities must help to ensure
transparency of processing in the Member States within
whose jurisdiction they fall.
Vedder and Custers have opined that "With the growing speed of the information and communication networks, two characteristics of the Internet are further enlarged. First, as the number of content providers and the ease of uploading information further increases, assessing the true nature of sources and intermediaries of information becomes more difficult. Second, as the technologies involved become more sophisticated and complicated, the processes of interaction become less transparent."62
Philip Brey comments that "It is part of the job of computer ethics to make computer technology and its uses transparent, in a way that reveals its morally relevant features."63 He proposes an approach which he calls "disclosive computer ethics", which is concerned with disclosing and evaluating embedded normativity in computer systems, applications and practices.
62 Vedder and Custers (2009, p. 25).
63 Brey, op. cit., p. 126.

Questions

If a new database is to be created or an existing database extended, has the data controller informed the data protection supervisory authority?
Has the data controller made known publicly that he has or intends to develop a new database, the purpose of the database, how the database will be used and what opportunities exist for persons to rectify inaccurate personal information?
If a database is breached or if the data controller has lost any data, has he informed the persons whose data have been compromised and/or the data protection authority?
What activities will be carried out in order to promote awareness of the project, technology or service?
Will such activities be targeted at those interested in or affected by the project, technology or service?
Has an analysis been made of who are the relevant stakeholders?
Are studies about the pros and cons of the project or technology available to the public?

Individual participation and access to data

The OECD guidelines contain an individual participation principle which states that "An individual should have the right (a) to obtain from a data controller, or otherwise, confirmation of whether or not the data controller has data relating to him; (b) to have communicated to him, data relating to him within a reasonable time, at a charge, if any, that is not excessive, in a reasonable manner, and in a form that is readily intelligible to him; (c) to be given reasons if a request is denied, and to be able to challenge such denial; and (d) to challenge data relating to him and, if the challenge is successful, to have the data erased, rectified, completed or amended."

Similarly, Article 12 (Right of access) of the Data Protection Directive says that Member States shall guarantee every data subject the right to obtain from the controller:

(a) without constraint at reasonable intervals and without excessive delay or expense: confirmation as to whether or not data relating to him are being processed and information at least as to the purposes of the processing, the categories of data concerned, and the recipients or categories of recipients to whom the data are disclosed; communication to him in an intelligible form of the data undergoing processing and of any available information as to their source; knowledge of the logic involved in any automatic processing of data concerning him at least in the case of the automated decisions referred to in Article 15 (1);
(b) as appropriate the rectification, erasure or blocking of data the processing of which does not comply with the provisions of this Directive, in particular because of the incomplete or inaccurate nature of the data;
(c) notification to third parties to whom the data have been disclosed of any rectification, erasure or blocking carried out in compliance with (b), unless this proves impossible or involves a disproportionate effort.

Questions

Have measures been put in place to facilitate the person's access to his or her personal data?
Is there a charge for access to data and, if so, how has that charge been determined?
Is the charge stated on the website of the project or service?
Will the charge be perceived as reasonable by those whose data are collected and by the data protection supervisory authority?
How long should it usually take to respond to requests for access to personal data and to provide such data?
Can the person whose data are collected easily rectify errors in those data? What procedures are in place for doing so?
Anonymity
According to the ISO/IEC 15408 standard on evaluation
criteria for IT security, anonymity ensures that a subject
may use a resource or service without disclosing his or her
identity.64
64 International Organization for Standardization 1999.
The OECD guidelines note that The precise dividing
line between personal data in the sense of information
relating to identified or identifiable individuals and anonymous data may be difficult to draw and must be left to the
regulation of each Member country.
Article 6 of the e-Privacy Directive (2002/58/EC) says
that traffic data relating to subscribers and users processed
and stored by the provider of a public communications
network or publicly available electronic communications
service must be erased or made anonymous when they are
no longer needed for the purpose of the transmission of a
communication. This also applies to all location data processed for the purpose of the conveyance of a communication on an electronic communications network.
The Article 29 Data Protection Working Party (which
represents the data protection authorities of the EU Member States) has considered anonymity to be an important
safeguard for the right to privacy and recommended:
The ability to choose to remain anonymous is
essential if individuals are to preserve the same protection for their privacy online as they currently enjoy
offline.
Anonymity is not appropriate in all circumstances
Legal restrictions which may be imposed by governments on the right to remain anonymous, or on the
technical means of doing so (e.g., availability of
encryption products) should always be proportionate
and limited to what is necessary to protect a specific
public interest in a democratic society
The sending of e-mail, the passive browsing of World
Wide Web sites, and the purchase of most goods and
services over the Internet should all be possible
anonymously.
Some controls over individuals contributing content
to online public fora are needed, but a requirement
for individuals to identify themselves is in many
cases disproportionate and impractical. Other solutions are to be preferred.
Anonymous means to access the Internet (e.g., public
Internet kiosks, prepaid access cards) and anonymous
means of payment are two essential elements for true
online anonymity.65
In its later opinion on search engines, the Article 29 Working Party said that search engine providers must delete or anonymise (in an irreversible and efficient way) personal data once they are no longer necessary for the purpose for which they were collected. It called upon search engines to develop appropriate anonymisation schemes. It also said it did not see a basis for a retention period beyond 6 months.66

65 Article 29 Data Protection Working Party 1997. http://ec.europa.eu/justice_home/fsj/privacy/workinggroup/wpdocs/1997_en.htm
66 Article 29 Working Party 2008.
Questions
Has the project taken steps to ensure that persons cannot
be identified from the data to be collected?
Have pseudonyms or codes been used to replace any
data that could identify the individual?
Is there a possibility that data from different sources
could be aggregated or matched in a way that undermines the person's anonymity?
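The following sketch illustrates pseudonymisation as raised in the second question above: a direct identifier is replaced by a keyed hash so that records can still be linked internally without exposing the identity. The secret key and the field names are hypothetical and, as the last question implies, such pseudonyms reduce but do not eliminate the risk of re-identification when data from different sources are matched.

```python
import hashlib
import hmac
import secrets

PSEUDONYM_KEY = secrets.token_bytes(32)   # would normally be held separately

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier (e.g., an e-mail address) with a keyed
    hash, so records can be linked without revealing who they concern."""
    return hmac.new(PSEUDONYM_KEY, identifier.encode("utf-8"),
                    hashlib.sha256).hexdigest()

record = {"email": "alice@example.com", "calls_made": 17}
record["subject_id"] = pseudonymise(record.pop("email"))
```

Whether such a measure counts as anonymisation or merely pseudonymisation is itself a question the assessment would need to address.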
Privacy of personal communications: monitoring and
location tracking
Clarke (op. cit.) explains privacy of personal communications by saying that "Individuals claim an interest in being able to communicate among themselves, using various media, without routine monitoring of their communications by other persons or organisations." This includes what is sometimes referred to as "interception privacy".
For many decades, technology has existed for intercepting and monitoring communications and tracking an individual's movements. The technology has become increasingly sophisticated, and even the user's technology (e.g., mobile phones) makes it easy to pinpoint where someone is making a call. There are laws, of course, against monitoring communications without the consent of the user unless it is legally authorised, e.g., by a court-authorised warrant.
Article 5 of the EUs e-Privacy Directive states that
Member States shall ensure the confidentiality of
communications and the related traffic data by means
of a public communications network and publicly
available electronic communications services,
through national legislation. In particular, they shall
prohibit listening, tapping, storage or other kinds of
interception or surveillance of communications and
the related traffic data by persons other than users,
without the consent of the users concerned, except
when legally authorised to do so in accordance with
Article 15(1). This paragraph shall not prevent technical storage which is necessary for the conveyance
of a communication without prejudice to the principle
of confidentiality.
In essence, it means that interception or surveillance of
communications can only take place when legally
authorised.
The same Directive also addresses location data, defined as any data processed in an electronic communications network, indicating the geographic position of the terminal
equipment of a user of a publicly available electronic
communications service. Article 9 prohibits the processing of location data unless it is made anonymous, or with
the consent of the users. The service provider must inform
the users, prior to obtaining their consent, of the type of
location data which will be processed, of the purposes and
duration of the processing.
Questions
Does the project monitor or record a person's communications? If so, is it with the person's consent?
Does the project involve observation or monitoring of
individuals or tracking their movements or whereabouts?
If so, is it with their consent?
If the project or other action involves interception of
private communications, has such interception been
properly authorised (e.g., has a warrant been obtained
from a judge)?
Privacy of personal behaviour

Privacy of personal behaviour, explains Clarke (op. cit.), relates to all aspects of behaviour, but especially to sensitive matters, such as sexual preferences and habits, political activities and religious practices, both in private and in public places. It includes what is sometimes referred to as "media privacy".

In the UK (especially), it's been said (by former Information Commissioner Richard Thomas) that we are "sleepwalking into a surveillance society", and there can be no doubt about it in view of the thousands of CCTV cameras that festoon our streets, shopping malls, subways, airports and so on. CCTV cameras and other surveillance and dataveillance technologies record our behaviour and activity.

Surveillance is not only about catching terrorists or criminals or owners who allow their dogs to foul the pavement, but it is also about monitoring senior citizens afflicted with dementia or the disabled to ensure they do not harm themselves or others.

Questions

Does the project involve surveillance of individuals or groups of people? If so, what is the legal basis of such surveillance?
Have any signs or other notifications been made to alert people to the presence of CCTV cameras or other surveillance devices?
How long will images or data be retained?
How will such images or data be used or erased?
Who will authorise the surveillance practice, whether in public places such as city streets or banks or in assisted living residences?
What measures will be put in place to avoid abuses where, for example, surveillants watch others engaged in behaviour that generally accepted social norms would regard as intimate or private?

Privacy of the person

According to Clarke (op. cit.), privacy of the person, sometimes referred to as "bodily privacy", is concerned with the integrity of the individual's body. As examples of issues, he cites compulsory immunisation, blood transfusion without consent, compulsory provision of samples of body fluids and body tissue and compulsory sterilisation. We could add examples such as body searches (e.g., at customs and immigration), body scanning at airports, requirements to provide fingerprints or eye scans upon entering countries such as the United States, and so on.

Questions

Does the project or the service or policy or program involve body searches or body scanning?
Does the project involve biometrics, e.g., taking fingerprints or eye scans?
Is the individual informed in advance of such requirements?
How long will such data be retained and who will have access to such data?
Have third parties been consulted with regard to the necessity of such data collection?
Have less privacy-intrusive alternatives been considered?

Ethical tools (value appraisal techniques)

This section identifies various ethical tools which can be used by decision-makers to engage stakeholders in considering the principles, values and issues contained in the previous section.

Although the European Union has increasingly placed emphasis on involving the general public in regulatory processes with respect to modern technologies, Beekman et al. are of the view that the tools needed to effectively take ethical concerns into consideration, and to satisfactorily involve the general public, are not fully developed or described. What is needed, they say, is a comprehensive, transparent and democratic procedure that gives all ethical arguments fair and balanced consideration.67 Ethical tools, as they go on to say, are a way of doing so.

67 Beekman (2006).

Ethical tools refer to practical methods designed to improve ethical deliberation by capturing all ethically
relevant aspects of an issue.68 The tools can be used to
include ethical issues in public consultation and involvement; to support systematic reflection upon ethical issues in
decision-making; and to support explicit communication
about values.69 They are designed to facilitate ethical
assessments and decision-making, but not to replace ethical
judgement.70
Beekman et al. rightly argue that It is unlikely that a
single tool will suffice for a full assessment of the whole
range of divergent ethical issues involved in the introduction and application of new technologies. Thus, they
developed a toolbox, in which particular tools are more
applicable for certain purposes and/or in certain contexts.71
In a separate paper, Beekman and Brom argue that if the
issues at stake and technology have societal impacts, lay
perspectives need to be taken into account. Instruments to
facilitate broadening the debate need to be comprehensive,
transparent and democratic tools that give all arguments
fair and balanced consideration.72 The use of ethical tools
contributes to improved transparency in governance
throughout the European Union73 (or anywhere, for that
matter).
Various tools exist to help determine whether a project
raises ethical issues. A set of questions such as those given
in the preceding section is one such tool. Other tools are
given in the following pages. An important distinction can
be made between tools that are more procedural, i.e.,
prescribe a certain method of how to trigger ethical
responses among public groups, and those tools that are
more substantive, i.e., provide some ethical content as
input for further analysis.74
Consultations and surveys
Consultations and surveys are frequently used by policymakers to gather the views of stakeholders before
implementing policies. Typically, in a consultation, the
government will pose a set of questions posted on its
website and invite comments from interested stakeholders.
Stakeholders may have the opportunity not only to respond
to the questions, but also to prepare papers in which they
elaborate their views on the policy issue at stake. Consultations have the virtue that they are open and transparent.
Anyone can respond to the questions and, if they wish, to
send in a letter or paper. They are transparent too in that the
government will publish the results of the consultation on
their website, so that one can see who responded and how
(although in some cases of commercial or competitive
sensitivity, the stakeholder can request that its views not be
published). The snag is that the response rate is usually
quite low and confined to those who are aware of the
consultation and have a vested interest (even if their vested
interest is acting on behalf of civil society organisations
and/or the public) in the outcome of the deliberation.
Furthermore, the policy-maker cannot be assured that the
outcome of the consultation genuinely represents a cross-section of the public.
Hence, policy-makers and the private sector sometimes
resort to surveys that are intended to provide a reflection of
the publics views of a particular issue (within plus or
minus three per cent). The snag with surveys is that they do
not necessarily reflect informed views and usually they do
not provide an opportunity for a detailed or nuanced
response. Survey questionnaires are designed to elicit
responses that can be easily quantified statistically. Thus,
the questions are relatively simple so that the response is
either "yes", "no" or "don't know", or multiple choice, in which
case the choice is limited to those contained in the
questionnaire.
While consultations and surveys are useful tools, they
are dangerous if the policy-maker were to rely solely on
them as inputs in making a policy decision. Additional
tools are needed.
68 Beekman et al., p. 14.
69 Beekman and Brom (2007, pp. 3-4).
70 Beekman et al., p. 21.
71 Beekman et al., p. 6. Although Rowe and Frewer do not focus specifically on ethical tools, nevertheless, they do provide a long list of different mechanisms for engaging stakeholders, including the public, some of which could be used to facilitate an ethical impact assessment. See Rowe and Frewer (2005). Also of interest in this regard are Essays 9 & 10 in Chap. 8 in Renn, op. cit., pp. 273-352. Renn says, "A combination of analytic and deliberative instruments (or stakeholders and the public) is instrumental in reducing complexity, necessary for handling uncertainty and mandatory for dealing with ambiguity. Uncertainty and ambiguity cannot be resolved by expertise only" (p. 350). The two essays are useful guidance for ethical impact assessment as well as risk governance.
72 Beekman and Brom, p. 6.
73 Beekman et al., p. 46.
74 Beekman et al., p. 20.
Expert workshops
The European Commission, European agencies (such as
ENISA75) and many other organisations convene expert
workshops or stakeholder panels, often to complement
consultations and sometimes surveys. Ideally, such workshops bring together representatives from various stakeholder groups to discuss issues. The workshops often
consist of a mixture of presentations by those representatives and discussions on one or two or, at least, a limited
number of issues, which can be addressed in the course of a
one or 2-day meeting. Sometimes, just a single workshop is held; at other times, there may be more, say, three, over a period of 6 months or so. At still other times, the convened experts may agree to work collaboratively on a report in between the workshops. Usually, the workshops result in a report, which is posted on the host organisation's website.

75 ENISA is the acronym of the European Network and Information Security Agency. www.enisa.europa.eu.
The success of the workshop depends very much on the
chairperson of the workshop and how the meeting is
structured and, to some extent, the chemistry that develops
between the participants. Often the time for discussion is
derailed by too many presentations. The principal benefit of
an expert workshop is that it allows more in-depth, face-to-face discussion by a range of different stakeholders than,
say, a consultation or a survey. If the experts convened for
a workshop such as those convened by ENISA are tasked
with preparing a report, there is another important advantage, which is that they produce a consensus report, i.e.,
there is an opportunity for stakeholders to learn from each
other and to reach a shared view. The principal disadvantage is that, despite inviting representatives from different
stakeholder groups, the host organisation may still not get a
representative view of the ethical considerations of a cross-section of individual stakeholders (as distinct from stakeholder groups).
Checklists of questions
A checklist of principles, issues and questions, as provided
in the preceding section, is itself an ethical tool. Stakeholders can use the checklist as a way of appraising the
ethical sufficiency of a (proposed) design or decision.
Not all experts or ethicists favour a checklist of questions because they fear that responding to such questions
will become routinised or that somehow they will lead to a
dumbing down of thoughtful consideration of the issues
at stake. While that is a risk, nevertheless questions do
seem a useful way of provoking consideration of the issues
at stake by those undertaking new projects or designing
new technologies or services. In any event, other measures
such as ethical reviews or audits by a committee of independent ethicists will surely spot a too-casual response to
the questions.
Van Gorp proposes a list of questions to help researchers
doing research in technological fields to identify ethical
aspects of their research.76 It is difficult if not impossible
to make a complete checklist of ethical issues that is valid
for researchers in all technological research. New research
might bring forth new ethical issues that are not foreseeable. A checklist can therefore never guarantee that all
ethical issues will be identified. The checklist can, however, make sure that ethical issues that are foreseeable are
indeed identified. The checklist is only a tool to quickly
identify ethical issues. If ethical issues are identified then a
thorough ethical analysis should be made.
This is an important point. The checklist should not be
used simply to answer the questions. The answers should
form the basis for discussion among stakeholders. Thus, if
the answer to the question Has the project taken any steps
to reach out to the e-excluded (i.e., those excluded from use
of the Internet)? is No, then the stakeholders should
consider whether, given the context, it is an ethically satisfactory answer. If the context involves a company
developing a computer game targeted at a market of young
and highly skilled users, then it may be difficult for
stakeholders to hold the company as being ethically deficient. In a different context, for example, involving the
development of electronic tools for e-voting in communities, the consideration might be quite different. Thus, the
contextual factors are important to take into account when
considering the responses to the questions.
Ethical matrix
The ethical matrix and the two following tools (ethical
Delphi and consensus conference) were discussed in the
report by Beekman et al. of their ethical tools project which
was funded by the European Commission under its Fifth
Framework Programme. The consortium considered a
variety of ethical tools, but particularly focused on these
three. The descriptions for these three tools have been
extracted from their report.77
The ethical matrix applies a number of prima facie
principles to a set of specified interest groups. The three
principles used in the standard version are respect for well-being, autonomy and fairness, and together they form the
columns of the ethical matrix. The rows consist of the
interest groups (i.e., affected parties) that are relevant to
the issue in question. These might include different groups
of people, such as consumers and food producers. Users
can apply the ethical matrix to map ethical issues. When
making a judgement or forming an opinion, the ethical
matrix can be used as a structured approach for reflecting
on competing ethical impacts. The aim of the ethical matrix
is to help users identify ethical issues raised by the use of
novel technologies and to arrive at intellectually defensible
decisions. However, the ethical matrix does not prescribe
any particular decisions.
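Because the ethical matrix is essentially a grid of interest groups set against prima facie principles, it can be represented very simply in software. The sketch below is an illustrative rendering only; the interest groups and the recorded concern are invented for the example, and, as noted above, the matrix maps issues rather than prescribing decisions.

```python
# Rows are the affected interest groups, columns the three prima facie
# principles, and each cell holds the concerns raised for that pairing.
PRINCIPLES = ("well-being", "autonomy", "fairness")

ethical_matrix = {
    "users of the service": {p: [] for p in PRINCIPLES},
    "non-users affected by it": {p: [] for p in PRINCIPLES},
}

def record_concern(group: str, principle: str, concern: str) -> None:
    """Add a stakeholder concern to the relevant cell of the matrix."""
    ethical_matrix[group][principle].append(concern)

record_concern("users of the service", "autonomy",
               "consent is assumed rather than asked for")

for group, cells in ethical_matrix.items():
    for principle, concerns in cells.items():
        if concerns:
            print(f"{group} / {principle}: {'; '.join(concerns)}")
```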
76 van Gorp, op. cit.
77 See Beekman et al., p. 21, pp. 28-29. The ethical matrix concept was developed by Ben Mepham. See Mepham (2005).

Ethical Delphi

The ethical Delphi is an iterative process for exchanging views and arguments between experts. The method is
structured around the notion of a virtual committee where
the exchange of ideas is conducted anonymously and
remotely through a series of opinion exchanges (in the
form of rounds). The ethical Delphi is used to map the
ethical considerations that experts believe are pertinent and
significant. It indicates the extent of agreement as well as
drawing out divergence in expert opinion on a given topic.
The ethical Delphi can be used to characterise and map the
ethical issues raised by the use of novel technologies. One
of the benefits of the ethical Delphi is the combination of
scoring and reasoned arguments where it is possible to
see the importance of an issue (using a Likert scale) and the
relevant arguments.
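The scoring side of an ethical Delphi round can likewise be illustrated with a short sketch: each anonymous expert rates the importance of an issue on a 1-5 Likert scale, and the facilitator reports the median rating together with the interquartile range as a rough indicator of convergence or divergence. The issues and scores shown are invented for illustration and do not come from any actual Delphi exercise.

```python
from statistics import median, quantiles

round_scores = {                      # issue -> anonymous expert ratings (1-5)
    "location tracking of patients": [5, 4, 5, 4, 5],
    "secondary use of call records": [2, 5, 3, 1, 4],
}

for issue, scores in round_scores.items():
    q1, _, q3 = quantiles(scores, n=4)   # quartiles of the ratings
    spread = q3 - q1                     # a large spread signals divergence
    print(f"{issue}: median {median(scores)}, interquartile range {spread:.1f}")
```

Reporting the spread alongside the median preserves the combination of scoring and reasoned argument that the tool is meant to provide, rather than forcing premature consensus.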
Consensus conferences
The participatory consensus conference was initially
developed by the Danish Board of Technology and represents a further development from the original consensus
conferences arranged by the US Office of Technology
Assessment (OTA). The aim of the OTA conferences was
to expose expert views and to reach consensus among
experts regarding a given topic. Consensus is still (in most
cases) an aim, but instead of striving for consensus among
experts, consensus is sought among laypersons. The reason
given for the importance of involving laypersons in such
conferences is typically to give citizens the opportunity to
influence decisions having an impact on their lives, to
affect the public debate or to overcome limitations in
expert knowledge. Laypersons should be entitled to choose the type of experts they want to invite to, and question at, the consensus conference.

Citizen panels

A variant on the consensus conference is the citizen panel. Skorupinski and Ott argue that "The model of consensus conferences needs further advancement, especially in regard to the questioning of experts. The rigid form of lay people questioning experts should be replaced by a more dialogic modus." In this respect, they say, the model of citizen panels seems to be superior to consensus conferences.78 Citizen panels are groups of randomly selected citizens who are asked to compose a set of policy recommendations on a specific issue. The objective is to provide citizens with the opportunity to learn about the technical and political facets of a given issue and to enable them to discuss and evaluate these options and their likely consequences according to their own set of values and preferences. Citizens are informed about the potential options and the corresponding consequences before they are asked to evaluate these options. Citizen panels require a large investment of time and money and are not suitable for all types of problems and all contexts. If the problem is highly technical, it may be impossible to bring citizens up to the necessary level of understanding.

78 Skorupinski and Ott (2002, p. 119).

Procedural aspects or practices

This section contains procedural aspects or practices which should feature in an ethical impact assessment. They serve as a complement to the ethical tools mentioned in the previous section. To help decision-makers in their consideration of the utility and relevance of these procedural aspects, a set of questions follows each of them. In some instances, there is no one correct answer to the questions. The applicability and relevance of some questions may depend on the context and on the willingness of the decision-maker to employ these practices. For example, to the question "Is there a process for engaging stakeholders?", the decision-maker might say yes. If there is, that's fine. However, the decision-maker might say no, and his or her response might be equally valid, because he or she does not believe the project or new technology raises any ethical issues that need to be considered by stakeholders. And he or she could well be right. If they are wrong, however, they may be held accountable and suffer certain liabilities.

Process: consulting and engaging stakeholders

An ethical impact assessment should not consist of questions only. A process for engaging and consulting with stakeholders should be put in place to help policy-makers, technology developers and project managers in ensuring that ethical issues are identified, discussed and dealt with, preferably as early in the project development as possible.

There are various reasons why project managers should engage stakeholders and undertake a consultation when developing new technologies or projects. For one thing, Article 41 of the Charter of Fundamental Rights of the European Union, entitled the Right to good administration, makes clear that this right includes "the right of every person to be heard, before any individual measure which would affect him or her adversely is taken", which suggests that consultation with stakeholders is not only desirable but necessary.

But there are other reasons too. Stakeholders may bring new information which the project manager might not have considered and may have some good suggestions for resolving complex issues.79 Also, technology development is often too complex to be fully understood by a single
agent, as Sollie and others have pointed out.80 Palm and Hansson state that "It would be delusive to believe that technology developers are conscious of all the effects of their products. In many cases, negative side effects come as a surprise to technology developers themselves. If they could have anticipated the negative consequences, they would, in the vast majority of the cases, have done their best to avoid them out of social concern or for commercial reasons, or both."81 Furthermore, by engaging stakeholders, project managers may avoid subsequent criticism about
a lack of consultation. Engaging stakeholders before the
project is implemented may be a useful way of testing the
waters, of gauging the public's reaction to the project. In any event, "A central premise of democratic government, the existence of an informed electorate, implies a free flow of information."82 Even if participation does not
increase support for a decision, it may clear up misunderstandings about the nature of a controversy and the views
of various participants. And it may contribute generally to
building trust in the process, with benefits for dealing with
similar issues in the future.83
The process of identifying, discussing and dealing with
ethical issues should be ongoing throughout the project and
perhaps even after it has been implemented, if only because
new ethical issues may arise that were not evident at the
outset of the project development. Moor has made this
point: Because new technology allows us to perform
activities in new ways, situations may arise in which we do
not have adequate policies in place to guide us. Ethical
problems can be generated at any point, says Moor, but
the number of ethical problems will be greater as the revolution progresses.84
The process of engaging stakeholders in consideration of
ethical issues that may arise from the development of a new
technology or the new use of an existing technology or a new
policy or programme is arguably as important as the result.
The policy-maker or technology developer can use some or
all of the ethical tools mentioned in the preceding section to
facilitate the process. He or she can also use the procedural
practices mentioned in this section to lend more credibility to
the process. While stakeholders can make a substantial
contribution to the decision-making process, at the end of the
day, however, it is the policy-maker or technology developer
who must take a decision whether to proceed with the technology or to modify it or to build some safeguards into its use in order to accommodate the concerns raised by stakeholders. It is the policy-maker or technology developer alone who will be held accountable for the decision.

79 Stern and Fineberg (1996).
80 Sollie (2007, p. 302). Moor 2005, op. cit., p. 118, also supports better collaboration among ethicists, scientists, social scientists and technologists.
81 Palm and Hansson, p. 547.
82 US National Research Council 1989, p. 9.
83 Stern and Fineberg, pp. 23-24.
84 Moor (2005). In his paper, Moor proposes the following hypothesis, which he calls Moor's Law: "As technological revolutions increase their social impact, ethical problems increase."
Palm and Hansson caution that the search for consensus in controversial issues should not be overemphasized
since it may lead to the closure of issues at a too early
stage. In ethical TA, conflicts and different opinions should
be highlighted rather than evened out. They also urge that
the assessment should seek to identify all relevant
stakeholders, i.e., a broad spectrum of agents and therefore
also a broad spectrum of responsibilities. They see the
task of an ethical assessment as being to delineate and
analyze the issues and point out the alternative approaches
for the final analysis that are available.85
It would make life easier, undoubtedly, if the stakeholders reach a consensus about how to deal with the
ethical considerations raised and if the decision-maker
agreed with the consensus. In real life, that does not always
happen, so the decision-maker will need to decide which
considerations are given greatest weight and to explain
why he or she took that decision. The decision-maker
should make clear to stakeholders when he or she first
reaches out to them what the rules of the game will be, how
and by whom that ultimate decision will be made.
When a decision-maker ends up disagreeing with the
results of the consultation processes, this calls for explicit
argument, as Beekman et al. point out. It does not follow
that the decision-makers should always follow the results
of the use of ethical tools. Ethical tools are not decision-making machines for ethics. However, when such a situation occurs, the great advantage of ethical tools is that they
force the decision-maker to state why he or she prefers a
different conclusion.86
Questions
Has the policy-maker or technology developer developed a process for identifying and considering ethical
issues?
Will the project engage in consultations with stakeholders? If so, when?
Have all relevant stakeholders (i.e., those affected by or
with an interest in the technology or project) been
identified?
Have they been invited to participate in a consultation
and/or to provide their views on the project or
technology?
Is the process by means of which decisions are made
clearly articulated to stakeholders?
How many and what kinds of opportunities do stakeholders and citizens have to bring up concerns about values or non-technical impacts?
How long will the consultation last? Will there be sufficient time for stakeholders to conduct any research which they may need to do in order to represent their views to the project manager?
How will conflicting views of stakeholders be taken into account or resolved? Are some stakeholders (e.g., industry) given more weight than others (e.g., civil society organisations)?
Has the project manager made known to the public the options (and the pros and cons of each option) available with regard to the development or deployment of the project, technology, service, etc.?
Is there a process in place for considering ethical issues at later stages in the project or technology development that may not have been considered at the outset?

85 Palm and Hansson, pp. 550-551.
86 Beekman et al., p. 26.
Risk assessment, uncertainty and unintended consequences
Much has been written about risk assessment over the last
few decades. One of the best guides is Ortwin Renn's recent book on Risk Governance.87 While risk experts,
such as Renn, have considered how to deal with uncertainty, uncertainty is a concept scarcely scrutinised in
ethics in general and ethics of technology in particular,
according to Paul Sollie.88 He says the uncertainty arising
from the unpredictable, unforeseen and unanticipated nature of technology development has many causes, one of
which is that technology designed for specific purposes
often ends up being used for completely different activities.
He notes that uncertainty is not simply the absence of
knowledge. Uncertainty can prevail even in situations
where a lot of information is available. New information
does not necessarily increase certainty, but might also
augment uncertainty by revealing the presence of uncertainties that were previously unknown or understated.
87 Renn, op. cit.
88 Sollie 2007, op. cit., p. 295.
89 European Commission 2000.

The European Commission's Communication on the precautionary principle89 aims to build a common understanding of how to assess, appraise, manage and communicate risks that science is not yet able to evaluate adequately. It says the precautionary principle should be considered within a structured approach of risk assessment, management and communication. Decision-makers need to be aware of the scientific uncertainties, but judging what is an acceptable level of risk for society is an eminently political responsibility.

The Commission says the decision-making procedure should be transparent and involve all interested parties at the earliest possible stage in the study of various risk management options once the results of the scientific evaluation and/or risk assessment are available. Where action is deemed necessary, measures based on the precautionary principle should be, inter alia:
proportional to the chosen level of protection,
non-discriminatory in their application,
consistent with similar measures already taken,
based on an examination of the potential benefits and
costs of action or lack of action (including, where
appropriate and feasible, an economic cost/benefit
analysis),
subject to review, in the light of new scientific data, and
capable of assigning responsibility for producing the
scientific evidence necessary for a more comprehensive
risk assessment.
Questions

Has the project performed a risk assessment of the
technology to be used or service supplied?
Has the project considered less privacy-intrusive
alternatives?
Has the project considered the possibility of unintended
consequences of a technology or service? For example, a
revolving door may keep out the cold, but may make it
impossible for a person in a wheelchair to enter a
building.90
Has the project identified ways of eliminating or
mitigating those risks?
Is there a human review of machine-generated results?
Can the technology or service be used for purposes other
than that for which they have been designed?
Is there a risk that the project or service or application
will create an unwanted precedent?
Is there a risk that the project may have a negative effect
on those who are implementing the service or application
as well as on those who are subject to the application?
Have different types of risks been considered, i.e.,
political, social, economic, technological, environmental, as well as risks to individuals?
Are some risks foreseen, but difficult to quantify?
Are there uncertainties about use of the technology and
its long-term consequences?
How will the project distribute any costs or risks? Will
some stakeholders bear greater risks than others?
90 Verbeek, p. 72, uses this example.
What are possible applications and consequences of the
new technologies or services?
Who is affected and to what extent?
What status do stakeholder values and opinions have and
how are these integrated into an ethical analysis?
If the project or technology is complex and responsibility is distributed, can mechanisms be created to ensure
accountability?
Are there means for discovering violations and penalties
to encourage responsible behaviour by those promoting
or undertaking the project?
If personal data are transferred outside the European
Union, what measures will be put in place to ensure
accountability to the requirements of the Data Protection
Directive?
Accountability
The Data Protection Directive says the data controller
should be accountable for complying with the principles
stated in the Directive.
In the development of new technologies and services,
however, many of the actors and stakeholders involved
(in their development) only have a very restricted insight
into the opportunities and risks involved. Moreover, many
of them have restricted means to respond. For instance,
engineers are involved in the first phases (of research and
development), but have limited influence on the introduction of new technologies into the market/society. End users
may have effect on how the new technologies are introduced into society and how the new technologies are
actually used. However, end users have restricted means to
influence research, development and production of new
technologies.91 Vedder and Custers argue that it is
undesirable to assign all responsibilities to just one group
of stakeholders. Instead, they argue in favour of joint
responsibilities. Instead of creating gaps in the responsibilities, i.e., parts of the research and development process where nobody is responsible, this may create joint
responsibilities. We consider overlapping responsibilities
an advantage rather than a drawback in these cases.92
René von Schomberg also argues along these lines. He
claims that the idea of role responsibility cannot be used any
longer in the complex society in which we live. No one person
has an overview of all consequences of a technological
development and therefore he argues for an ethics of knowledge policy and knowledge assessment and says that citizens
should be involved in the assessment and policy-making.93
Questions
Does the project make clear who will be responsible for
any consequences of the project?
Who is responsible for identifying and addressing
positive and negative consequences of the project or
technology or service?
Does the project make clear where responsibility lies for
liability, equality, property, privacy, autonomy, accountability, etc.?
91 Vedder and Custers, p. 30.
92 Ibid., p. 32.
93 von Schomberg (2007).
Third-party ethical review, evaluation and audit
The final phase of the privacy impact assessment (PIA) methodology recommended by the UK's Information Commissioner's Office (ICO) is the review and audit
phase, the purpose of which is to ensure that the design
features arising from the PIA are implemented, and are
effective. Implementation of an ethical impact assessment
could take a leaf out of the ICO PIA manual in this regard.
An ethical review and audit by a third party would ensure
that an ethical impact assessment has been effectively
carried out. As mentioned in the introduction, a third-party
review and/or an audit is a way of ensuring that responses
to the questions are not merely perfunctory.
Unless an organisation appoints an independent ethical
review panel, there will be a lacuna in ethical impact
assessments and, in particular, a review of the adequacy of
such assessments. Although the European Commission has
established the European Group on Ethics in Science and
New Technologies (EGE)94 and Member States have
similarly independent ethics committees,95 these committees do not have a mandate to perform an ethical audit of
individual organisations. Rather they are appointed to
provide advice on issues of ethical importance, which are
either referred to them (by the Commission, for example)
or that they initiate themselves. Nevertheless, a review and
audit of ethical assessments by an independent third-party
would obviously confer considerable credibility on any
reviews undertaken by individual projects.
94 Article 2 of the mandate given to the EGE states: "The task of the EGE shall be to advise the Commission on ethical questions relating to sciences and new technologies, either at the request of the Commission or on its own initiative. The Parliament and the Council may draw the Commission's attention to questions which they consider to be of major ethical importance. The Commission shall, when seeking the opinion of the EGE, set a time limit within which an opinion shall be given." http://ec.europa.eu/european_group_ethics/mandate/index_en.htm
95 http://ec.europa.eu/european_group_ethics/link/index_en.htm#4
Questions
Have the project, its objectives and its procedures for the treatment of ethical issues been reviewed by independent evaluators to ensure that ethical issues have been adequately considered?
Has the decision-maker considered evaluation of its
ethical impact assessment with a view to improving the
process of conducting such an assessment?
If the project involves the development and deployment
of complex technologies, an ethical impact assessment
may need to be ongoing or, at least, conducted again
(perhaps several times again). When does the project
manager envisage submitting its ethical impact assessment to a review by an independent third party?
Good practice
Examples of good practice in ethical assessments may be
strategically important from a policy point of view in the
sense that they might encourage other organisations to
undertake similar assessments, which might also be an
objective of policy-makers. Examples of good practice are
also practically important in the sense they provide guidance on how to undertake ethical assessments. The utility
of good practices depends on how well information about
such good practices is disseminated and how easy it is for
project managers to find relevant good practices.
Questions
Would the project, technology or service be generally
regarded as an example of ethical good practice?
Will the technology or project inspire public trust and
confidence?
Have the designers or proponents of the project examined other relevant good practices?
Providing more information and responding
to complaints
An important consideration in undertaking an ethical
impact assessment is to provide (proactively) information
to stakeholders. The results of an ethical impact assessment
should be communicated as widely as possible. "The choice and design of future technologies should not be restricted to a well-educated and articulated elite."96 It is also important that the project manager respond to complaints about either the way the ethical assessment has been conducted or the way in which a particular ethical issue has been considered. The name and contact details of the person responsible for the conduct of the ethical impact assessment should be made publicly available (for example, on the project manager's website). An illustrative sketch of how responses to the questions below might be recorded appears after the list of questions.
Questions
What steps will the project manager take to make
relevant information available to relevant stakeholders
as soon as possible?
Are relevant stakeholders aware of the findings of ethical
assessments and how they were generated?
Has the project instituted a procedure whereby persons
can lodge their complaints if they feel that they have
been mistreated by the project?
Are there procedures for challenging the results, or for
entering alternative data or interpretations into the record?
If an individual has been treated unfairly and procedures
violated, are there appropriate means of redress?
If anyone objects to the project, does the project make clear
whom they can contact to make known their objection?
Have the contact details been published or posted on the
relevant website where a person may obtain further
information about the ethical impact assessment?
96
Palm and Hansson, p. 550.
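Purely by way of illustration, the short Python sketch below shows one hypothetical way in which responses to the questions listed above could be recorded, so that answers are substantive rather than perfunctory and can later be checked by an independent reviewer. The framework itself prescribes no software; the selection of questions, the field names ("answer", "evidence", "reviewed_by_third_party") and the structure are assumptions made solely for this sketch.

# Hypothetical sketch: recording responses to the framework's questions so that
# an independent reviewer can verify they have been answered substantively.

complaints_questions = [
    "What steps will the project manager take to make relevant information "
    "available to relevant stakeholders as soon as possible?",
    "Has the project instituted a procedure whereby persons can lodge complaints "
    "if they feel that they have been mistreated by the project?",
    "Are there procedures for challenging the results, or for entering "
    "alternative data or interpretations into the record?",
]

# One record per question; to be completed in consultation with stakeholders.
responses = [
    {
        "question": q,
        "answer": "",                   # free-text response from the project manager
        "evidence": [],                 # e.g. links to published contact details or procedures
        "reviewed_by_third_party": False,
    }
    for q in complaints_questions
]

# An auditor can then flag questions that have not received a substantive answer.
unanswered = [r["question"] for r in responses if not r["answer"].strip()]
print(f"{len(unanswered)} of {len(responses)} questions still unanswered")

A fuller record might also capture dissenting views and dates, but the point is simply that responses should be kept in a form that a third party can audit.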
Guidelines on integration (synthesis)
This paper has provided guidelines on identifying ethical
impacts, perspectives and boundaries; now it is time to
offer guidelines on integration or synthesis for a structured
approach to conducting an ethical impact assessment. In
doing so, two models in particular have influenced these
guidelines. The first comes from Skorupinski and Ott's paper on "Technology assessment and ethics". They present a comprehensive concept for participatory and discoursive TA in 12 modules or steps.97 The second model comes from the privacy impact assessment (PIA) manual published by the UK Information Commissioner's Office (ICO). From these two models, and taking into account the foregoing sections of this paper, we can distil key guidelines as follows (an illustrative sketch of the resulting checklist appears after the list):
97
Skorupinski and Ott (2002, pp. 117–120).
The organisation proposing a technology with ethical
implications should prepare a briefing paper for stakeholders which describes the technology, the ethical
issues foreseen, who will benefit from the technology
and who might bear the consequences, and possible
ways of addressing the ethical issues. The briefing
paper should state what the rules of the game will be,
i.e., it should indicate the process (the plan) to be
followed and the timeframe for conducting the ethical
impact assessment.
The organisation invites relevant stakeholders, including the public, to participate in the assessment of the
ethical impacts of the technology. A neutral facilitator
should manage the process. A variety of ethical tools,
such as those described above, should be employed.
The organisation should be honest about its willingness
to take on-board recommendations. If it will proceed
with its plans for the technology, no matter what the
outcome of the ethical impact assessment might be,
then it should at least say so and bear the consequences
to its credibility and public trust.
All participants should be treated equally, their views
respected and reflected in the ethical impact assessment. Information asymmetries should be avoided.
Participants should have equal access to information
and to independent experts.
Led by the facilitator, the participants should consider
the ethical principles and issues and associated questions provided in the second section of this paper. As
mentioned there, these issues and questions are
intended to be indicative, not comprehensive, so
participants may raise additional issues and questions
which should also be addressed in the course of a
specific ethical impact assessment.
The facilitator should seek consensus, but not at any
price. Dissenting views should be reflected in the final
report, which should describe how the process was
conducted and should spell out how a final decision on
the technology was taken and what the consequences of
that decision are expected to be. The report should be
published on the organisation's website (unless there
are legitimate commercially competitive or security
reasons for not doing so, in which case the role for
independent third-party evaluation becomes even more
critical).
The process for the ethical impact assessment should be
reviewed by an independent evaluator whose findings
should be published. The extent to which recommendations are implemented should be audited. If new
information subsequently comes to light that changes
the basis of the recommendations, the process should be
repeated if necessary.
As far as possible, the conduct of an adequate ethical
impact assessment should be tied to a decision on
funding the technology development.
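The guidelines above are procedural rather than computational but, purely as an illustration, they can be organised as a simple checklist. The minimal Python sketch below shows one way a project manager might track whether each step has been carried out; the class names (EIAStep, EthicalImpactAssessment), the example project and the step descriptions paraphrased from the list above are assumptions of this sketch, not part of the framework.

# Illustrative sketch only: the guidelines above expressed as a trackable checklist.
from dataclasses import dataclass, field
from typing import List


@dataclass
class EIAStep:
    name: str            # step paraphrased from the guidelines above
    responsible: str     # who carries it out
    completed: bool = False
    notes: str = ""      # e.g. dissenting views, outcome of the independent review


@dataclass
class EthicalImpactAssessment:
    project: str
    steps: List[EIAStep] = field(default_factory=list)

    def outstanding(self) -> List[str]:
        """Names of steps that have not yet been completed."""
        return [s.name for s in self.steps if not s.completed]


# Hypothetical example: an assessment of an assisted-living tagging service.
eia = EthicalImpactAssessment(
    project="Assisted-living tagging service (illustrative example)",
    steps=[
        EIAStep("Prepare and publish a briefing paper (technology, issues, beneficiaries, plan, timeframe)",
                "proposing organisation"),
        EIAStep("Invite relevant stakeholders, including the public, to participate",
                "proposing organisation"),
        EIAStep("Facilitated deliberation using ethical tools, with equal access to information",
                "neutral facilitator"),
        EIAStep("Publish the final report, including dissenting views",
                "proposing organisation"),
        EIAStep("Independent review of the process and audit of implemented recommendations",
                "third-party evaluator"),
        EIAStep("Tie the adequacy of the assessment to the funding decision",
                "funder / decision-maker"),
    ],
)

print(eia.outstanding())   # at the outset, every step is outstanding

Nothing in the framework depends on such tooling; the sketch simply makes explicit that each step produces something that can be checked and audited.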
Conclusions
This paper has proposed an ethical impact assessment
framework that could be used by those developing new technologies, services, projects, policies or programmes as a
way to ensure that their ethical implications are adequately
examined by stakeholders before possible deployment and
so that mitigating measures can be taken as necessary.
The paper argues that an ethical impact assessment of new and emerging technologies is needed because technologies are neither neutral nor value-free. Technologies, how
they are configured and used, reflect the interests and
values of their developers and owners. Over time, other
stakeholders, including users, may become developers too
by creating new applications for the technology or by
adapting the technology for uses unforeseen when the
technology was originally developed. An ethical impact
assessment is also needed because ethical considerations
are often context-dependent. What may be ethically
acceptable in one context may not be acceptable in another
context.
It is very difficult to identify impacts resulting from the
interaction of the technical and social because the impacts
will depend on the contextual factors, as Nissenbaum and
others have said. It may be that over time, as we gain more
experience in the use of ethical impact assessments, we
will be able to spot similar impacts arising in similar situations. One might need to perform a detailed ethical
impact assessment the first time, but more abbreviated
EIAs might be possible as time goes on where, for example, new projects use similar or identical technologies.
Generally, however, we should not adopt any kind of formula or make assumptions about the impacts arising from
the interaction of the technical and social. One must
examine each case on its own merits.
It is in the interests of policy-makers, technology developers and project managers to conduct an ethical impact
assessment, involving stakeholders interested in or affected
by the technology, as early in the development cycle as
possible in order to minimise ethical risks that may arise once
the technology is launched. The paper gave some examples
at the outset of instances that could profit or could have
profited from an ethical impact assessment. In some sense, an
ethical impact assessment, like a privacy impact assessment,
can be regarded as a form of risk management, i.e., the
purpose of conducting the exercise is to avoid any nasty
fallout from consumers or policy-makers who might feel that
the technology as implemented works to the detriment of
generally accepted social values.98
The framework proposed here consists of a set of ethical
principles, values and issues followed by a set of questions
the aim of which is to facilitate ethical consideration of the
98
Verbeek indirectly offers at least two reasons supporting an ethical impact assessment: "Two forms of designer responsibility can be distinguished here. First, designers can anticipate the impact, side-effects and mediating roles of the technology they are designing. On the basis of such anticipations, they could adapt the original design, or refrain from the design at all. Second, designers can also take a more radical step and deliberately design technologies in terms of their mediating roles. In that case, they explicitly design behavior-influencing or moralizing technologies: designers then inscribe desirable mediating effects in technologies." Verbeek, p. 70.
new technology. The framework is supported by ethical
tools (or value appraisal techniques) and procedural aspects
or practices. The ethical tools will help the technology
developer to get a better idea of how the technology is
perceived ethically by stakeholders and what measures
could be adopted to ensure that the technology is ethically
acceptable or what alternatives might be at his or her disposition. The procedural aspects are aimed at ensuring the
ethical impact assessment is conducted in a way that
engages stakeholders, ensures the transparency of the
whole process and provides for independent evaluation and
audit.
The key to a successful ethical impact assessment is
finding a way to engage stakeholders effectively. While
some decision-makers may think engaging stakeholders is a
hassle or risks delaying development, the benefits of
engaging stakeholders are numerous and should outweigh
any such concerns. Stakeholders may have information, ideas, views or values which the project manager had not previously considered. They may be able to suggest alternative courses of action to achieve the desired objectives.
They may be able to suggest some safeguards which would
minimise the ethical risks that might otherwise explode after
a technology or service is launched. By engaging stakeholders, the technology developer has a better chance of
minimising liability and avoiding subsequent criticisms and,
possibly, costly retrofits downstream.
While consulting and engaging stakeholders is important, ultimately in most cases the decision-maker (the technology developer or policy-maker) will need to take
the final decision about whether or how to proceed. If he or
she takes a decision at variance with the generally accepted
ethical considerations of stakeholders, he or she may (will)
need to explain his or her reasons for doing so.
The ethical impact assessment framework proposed here
builds on work by other researchers and policy-makers.
Even if the exact words, "an ethical impact assessment", have not been used previously, others have seen the need for something like it. Verbeek, for example, has emphasised that "Technologies are morally significant; they help human beings to do ethics, by informing our moral decisions and by giving shape to our actions. In order to deal adequately with the moral relevance of technology, therefore, the ethics of technology should broaden its scope. Rather than approaching ethics and technology as belonging to two radically separated domains, the interwoven character of both should be central."99 Palm and Hansson
noted that new technologies often give rise to previously
unknown ethical problems and argued in favour of a continuous dialogue and repeated assessments as preferable to
99
Verbeek, op. cit.
one single large-scale assessment since moral implications
may arise at all stages of technological development.100
Furthermore, they add, "Predicting the future of a technology is a vain undertaking with low chances of success. Ethical technology assessment should therefore avoid crystal ball ambitions. The ambition should not be to see as far as possible into the future, but to investigate continuously the ethical implications of what is known about the technology under development."
Building on the work of these and other experts, the
framework proposed here offers a new and structured
approach to assessing the ethical legitimacy of new technology. While models and methodologies exist for undertaking privacy impact assessments, environmental impact
assessments, policy and programmatic impact assessments,
technology assessments, regulatory impact assessments and
so on, that has not been the case for ethical impact assessments. Furthermore, the framework can be applied not only
to new and emerging technologies, but also to products,
services, policies and programmes; indeed, virtually any undertaking that is likely to raise ethical concerns.
Although it has not been within the scope of this paper,
the author believes there could be a case for integrating an
ethical impact assessment and privacy impact assessment.
Privacy and data protection raise ethical issues, although
ethical impact assessment addresses issues beyond simply
those of privacy and data protection. Nevertheless, there
would seem to be value in further research exploring the
possibility of developing an integrated privacy and ethical
impact assessment.
Acknowledgments The author acknowledges with thanks the
thoughtful and detailed comments of the three anonymous reviewers
as well as those of Guido van Steendam, professor of ethics at the
University of Leuven, which have helped improve this paper. This
paper is based in part on work undertaken by the author in two projects funded under the European Commission's Seventh Framework
Programme: SENIOR (Social Ethical and Privacy Needs in ICT for
older People: a Dialogue Roadmap, grant agreement no. 216820) and
PRESCIENT (Privacy and emerging fields of science and technology:
Towards a common framework for privacy and ethical assessment,
grant agreement no. 244779). The views in this paper are those of the
author alone and are in no way intended to reflect those of the
European Commission.
100
Palm and Hansson, op. cit., pp. 547–548, p. 550. Moor (2005, p. 118) makes a similar point: "We can foresee only so far into the future … We cannot anticipate every ethical issue that will arise from the developing technology … our ethical understanding of developing technology will never be complete. Nevertheless, we can do much to unpack the potential consequences of new technology. We have to do as much as we can while realizing applied ethics is a dynamic enterprise that continually requires reassessment of the situation." See also Brey, op. cit.
References
Article 29 Data Protection Working Party, Recommendation 3/97:
Anonymity on the Internet (WP 6), Adopted on 3 December
1997. http://ec.europa.eu/justice_home/fsj/privacy/.
Article 29 Working Party, Opinion on data protection issues related to
search engines, 00737/EN, WP 148, Adopted on 4 April 2008.
http://ec.europa.eu/justice_home/fsj/privacy/workinggroup/
wpdocs/2008_en.htm.
Beauchamp, T. L., & Childress, J. F. (2001). Principles of biomedical
ethics (5th ed.). New York: Oxford University Press.
Beekman, V., et al. (2006). Ethical bio-technology assessment tools
for agriculture and food production, Final Report of the Ethical
Bio-TA Tools project, LEI, The Hague, February. http://
www.ethicaltools.info.
Beekman, V., & Brom, F. W. A. (2007). Ethical tools to support
systematic public deliberations about the ethical aspects of
agricultural biotechnologies. Journal of Agricultural and Environmental Ethics, 20(1), 3–12.
Boddy, Dr Ken, LOCOMOTION Ethical Study Report, Deliverable D
3.3, Final Version, September 2004. http://cordis.europa.eu/
search/index.cfm?fuseaction=proj.document&PJ_LANG=EN&
PJ_RCN=6099060&pid=37&q=6AF6FCCDA9FE6C99B48B10
861AFEBDDA&type=sim.
Brey, P. (2000). Method in computer ethics: Towards a multi-level
interdisciplinary approach. Ethics and Information Technology,
2(2), 125–129.
Clarke, R. (2007). Introduction to dataveillance and information
privacy, and definitions of terms, Aug. http://www.rogerclarke.
com/DV/Intro.html.
Dekker, M. (2004). The role of ethics in interdisciplinary technology
assessment. Poiesis & Praxis, 2(2–3), 139–156.
European Commission, Ageing well in the Information Society,
Action Plan on Information and Communication Technologies
and Ageing, An i2010 Initiative, Communication from the
Commission to the European Parliament, the Council, the
European Economic and Social Committee and the Committee
of the Regions, COM (2007) 332 final, Brussels, 14 June 2007.
European Commission, Communication on the precautionary principle, COM (2000)1, Brussels, 2 Feb 2000.
European Commission, Commission earmarks €1bn for investment in
broadbandFrequently Asked Questions, Press release, MEMO/
09/35, Brussels, 28 January 2009. http://europa.eu/rapid/press
ReleasesAction.do?reference=MEMO/09/35.
European Commission, The European Research Area: New Perspectives, Green Paper, COM(2007) 161 final, Brussels, 4 Apr 2007.
European Commission, European i2010 initiative on e-Inclusion: To
be part of the information society, Communication from the
Commission to the European Parliament, the Council, the
European Economic and Social Committee and the Committee
of the Regions, COM (2007) 694 final, Brussels, 8 Nov 2007.
European Council resolution on e-Inclusion, exploiting the opportunities of the information society for social inclusion, 2001/C 292/
02, OJ 18 Oct 2001.
European Group on Ethics in Science and New Technologies (EGE),
Opinion No. 20 on Ethical Aspects of ICT Implants in the
Human Body, Adopted on 16 March 2005.
European Parliament and Council, Directive 2001/20/EC of 4 April
2001 on the approximation of the laws, regulations and
administrative provisions of the Member States relating to the
implementation of good clinical practice in the conduct of
clinical trials on medicinal products for human use, OJ L 121/34,
Brussels, 1 May 2001.
European Parliament and Council, Directive 2002/22/EC of 7 March
2002 on universal service and users' rights relating to electronic
communications networks and services (Universal Service
Directive), Official Journal L 108 of 24 April 2002.
European Parliament and Council, Directive 2006/24/EC on the
retention of data generated or processed in connection with the
provision of publicly available electronic communications
services or of public communications networks and amending
Directive 2002/58/EC, 15 March 2006.
European Parliament and Council, Directive 95/46/EC of 24 October
1995 on the protection of individuals with regard to the
processing of personal data and on the free movement of such
data, OJ L281/31 of 23 Nov 1995.
Flanagan, M., Howe, D. C., & Nissenbaum, H. (2008). Embodying
values in technology: theory and practice. In J. van den Hoven &
J. Weckert (Eds.), Information technology and moral philosophy
(pp. 322–353). Cambridge: Cambridge University Press.
Goldberg, I., Hill, A., & Shostack, A. (2001). Trust, ethics, and
privacy. Boston University Law Review, 81, 101–116.
Helft, M. (2010). Critics say Google invades privacy with new
service. The New York Times, 12 Feb. http://www.nytimes.
com/2010/02/13/technology/internet/13google.html.
Hildebrandt, M., & Gutwirth, S. (2008). Profiling the European
Citizen. Dordrecht: Springer.
Hofmann, B. (2005). On value-judgements and ethics in health
technology assessment. Poiesis & Praxis, 3(4), 277–295.
International Organization for Standardization, ISO/IEC 15408,
Information technology – Security techniques – Evaluation criteria for IT security, First edition, International Organization for
Standardization, Geneva, 1999.
Johnson, B. (2009). Finland makes broadband access a legal right.
The Guardian, 14 Oct. http://www.guardian.co.uk/technology/
2009/oct/14/finland-broadband.
Kirkpatrick, C., & Parker, D. (Eds.). (2007). Regulatory impact
assessment: Towards better regulation? Cheltenham, UK:
Edward Elgar.
Kuzma, J., et al. (2008). An integrated approach to oversight assessment
for emerging technologies. Risk Analysis, 28(5), 1197–1219.
Lyon, D. (2003). Surveillance as social sorting: privacy, risk, and
digital discrimination. London: Routledge.
Maiese, M. (2003) Principles of Justice and Fairness, Beyond
Intractability.org, July. http://www.beyondintractability.org/
essay/principles_of_justice/.
Marx, G. T. (1998). Ethics for the new surveillance. The Information
Society, 14, 171–185.
Mepham, T. B. (2005). Bioethics: An introduction for the biosciences.
Oxford: Oxford University Press.
Moor, J. H. (1985). What is Computer Ethics? In T. W. Bynum (Ed.),
Computers & Ethics (pp. 266–275). Oxford: Blackwell.
Moor, J. H. (1997). Towards a theory of privacy in the information
age. Computers and Society, 27, 27–32.
Moor, J. H. (2005). Why we need better ethics for emerging
technologies. Ethics and Information Technology, 7(3), 111–119.
Nissenbaum, H. (2004). Privacy as contextual integrity. Washington
Law Review, 79(1), 101–139.
Organisation for Economic Co-operation and Development (OECD),
Guidelines on the Transborder Flows of Personal Data, Paris, 23
Sept 1980. http://www.oecd.org/document/18/0,3343,en_2649_
34255_1815186_1_1_1_1,00.html.
Orlikowski, W. J., & Iacono, C. S. (2001). Research commentary:
Desperately seeking the "IT" in IT research – a call to theorizing the IT artifact. Information Systems Research, 12(2), 121–134.
Palm, E., & Hansson, S. O. (2006). The case for ethical technology
assessment (eTA). Technological Forecasting & Social Change,
73, 543–558.
Renn, O. (2008). Risk governance: coping with uncertainty in a
complex world. London: Earthscan.
Rowe, G., & Frewer, L. J. (2005). A Typology of Public Engagement
Mechanisms. Science, Technology & Human Values, 30(2), 251–290. http://sth.sagepub.com/cgi/content/abstract/30/2/251.
Skorupinski, B., & Ott, K. (2002). Technology assessment and ethics.
Poiesis & Praxis, 1, 95–122.
Sollie, P. (2007). Ethics, technology development and uncertainty: an
outline for any future ethics of technology. Journal of Information Communications & Ethics in Society, 5(4), 293–306.
Sollie, P., & Duwell, M. (2009). Evaluating new technologies:
Methodological problems for the ethical assessment of technology developments. Dordrecht: Springer.
Stern, P. C., & Fineberg, H. V. (Eds.). (1996). Understanding risk:
Informing decisions in a democratic society. Washington, DC:
Committee on Risk Characterization, National Research Council, National Academy Press.
Treasury Board of Canada Secretariat, Privacy Impact Assessment
Guidelines: A framework to Manage Privacy Risks, Ottawa, 31
Aug 2002.
UK Information Commissioner's Office (ICO), Privacy Impact
Assessment Handbook, Version 2.0, June 2009. http://www.
ico.gov.uk/for_organisations/topic_specific_guides/pia_handbook.
aspx.
US National Research Council, Committee on Risk Perception and
Communications, Improving Risk Communication, National
Academy Press, Washington, D.C., 1989. http://www.nap.edu/
openbook.php?record_id=1189&page=R1.
Van Gorp, A. (2009). Ethics in and during technological research; An
addition to IT ethics and science ethics. In P. Sollie & M. Duwell
(Eds.), Evaluating new technologies (pp. 35–50). Dordrecht:
Springer.
Vedder, A., & Custers, B. (2009). Whose responsibility is it anyway?
Dealing with the consequences of new technologies. In P. Sollie
& M. Duwell (Eds.), Evaluating new technologies. Dordrecht:
Springer.
Verbeek, P.-P. (2009). The moral relevance of technological artifacts.
In P. Sollie & M. Duwell (Eds.), Evaluating new technologies:
methodological problems for the ethical assessment of technology developments (pp. 63–79). Dordrecht: Springer.
von Schomberg, R. (2007). From the ethics of technology towards an
ethics of knowledge policy & knowledge assessment. Working
document from the European Commission Services, Jan.