Transcript #1

Abeba Birhane is a PhD student at University College Dublin who studies embodied cognitive science and AI ethics. Their work focuses on how technology impacts and shapes cognition, though not equally for all: more privileged individuals have more control over technology's influence. Birhane's paper at the Black in AI workshop examined "algorithmic injustices", or how those furthest from the stereotypical white cisgender male face the greatest negative impacts from technologies such as hiring and housing algorithms. Their concept of "relational ethics" centers the perspectives of disproportionately impacted groups to address problems emerging from technology development and implementation.


Participant #1:

Alright everyone, I am on the line with Abeba Birhane. Abeba is a PhD student at University College Dublin. Abeba, welcome to the TWIML AI podcast.

Thank you so much for having me, Sam. I'm really excited about this conversation.

We had an opportunity to meet in person, after a long while interacting on Twitter, at the most recent NeurIPS conference, in particular the Black in AI workshop, where you not only presented your paper Algorithmic Injustices: Towards a Relational Ethics, but won best paper there. So I'm looking forward to digging into that and some other topics. But before we do, I would love to hear you share a little bit about your background. And I will mention, for folks hearing the sirens in the background: while I said that you are from University College Dublin, you happen to be in New York right now at the AIES conference, held in association with AAAI. As folks might know, it's hard to avoid sirens and construction in New York City, so just consider that mood ambience. So, your background? How did you get started working in AI ethics?

My background is in cognitive science, and particularly a part of cognitive science called embodied cognitive science, which has its roots in cybernetics and systems thinking. The idea is to focus on the social, the cultural, and the historical, and to view cognition in continuity with the world and its historical background, as opposed to the traditional approach, which treats cognition as something located in the brain, something formalizable, something that can be computed. So that's my background. Even during my master's I leaned towards the AI side of cognitive science, and the more I delved into it, the more I was attracted to the ethics side, to injustices, to the social issues. So the more the PhD goes on, the more I find myself on the ethics side.

Was there a particular point where you realized you were really excited about the ethics part in particular, or did it just evolve for you?

I think it just evolved. When I started out, at the end of my master's and the start of the PhD, my idea was that we have this relatively new school of thinking, embodied cognitive science, which I like very much because it emphasizes ambiguities, messiness, and contingencies, as opposed to drawing clean boundaries. I like the idea of redefining cognition as something relational, something inherently social, something that is continually impacted and influenced by other people and the technologies we use. So the technology aspect was my interest. Initially the idea was: yes, technology constitutes an aspect of our cognition. You have the famous '98 thesis by Andy Clark and David Chalmers, "The Extended Mind", where they claimed the iPhone is an extension of your mind, so you can think of it that way, and I was advancing the same line of thought. But the more I delved into it, the more I saw that yes, digital technology, whether it's ubiquitous computing such as face recognition systems on the street, or your phone, whatever, does continually shape and reshape our cognition and what it means to exist in the world. But what became more and more clear to me is that not everybody is impacted equally. The more privileged you are, the more control you have over what can influence you and what you can avoid.
So that's where I became more and more involved with the ethics of computation and its impact on cognition.

The notion of privilege is something that flows throughout the work you presented at Black in AI, the Algorithmic Injustices paper, and this idea, this construct, of relational ethics. What is relational ethics, and what are you getting at with it?

Yeah. So relational ethics is actually not a new thing. A lot of people have theorized about it and written about it, but the way I'm approaching it, the way I'm using it, springs from a frustration: for many folks who talk about AI ethics or fairness or justice, most of it comes down to constructing some neat formulation of fairness, a mathematical calculation of who should be included and who should be excluded, what kind of data we need, that sort of thing. So for me, relational ethics says: leave that for a little bit, zoom out, and see the bigger picture. Instead of using technology to solve the problems that emerge from technology itself, which means centering technology, let's instead center people, especially the people who are disproportionately impacted by the limitations and problems that arise with the development and implementation of technology. There is robust research, call it AI fairness or algorithmic injustice, and the pattern is that the more you are at the bottom of the intersectional level, that is, the further away you are from the stereotypical white cisgender male, the bigger the negative impacts on you, whether it's classification or categorization, or being scored and ranked by hiring algorithms, or looking for housing, or anything like that. The more you move away from that stereotypical category, the status quo, the heavier the impact is on you. So the idea of relational ethics is to think from that perspective, to take that as a starting point. So these are the groups...
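To make concrete the kind of "neat formulation of fairness" Birhane is pushing back against, here is a minimal sketch of one such mathematical fairness metric, the demographic parity gap, applied to hiring-style decisions. The function name, data, and group labels are illustrative assumptions, not anything taken from the conversation.

```python
# A minimal, hypothetical sketch of one common mathematical fairness metric:
# the demographic parity gap between groups. All data here is made up.

def demographic_parity_gap(decisions, groups):
    """Largest difference in positive-decision rates between any two groups.

    decisions: 0/1 outcomes (e.g., 1 = shortlisted by a hiring algorithm)
    groups:    group labels, aligned element-wise with decisions
    """
    counts = {}
    for decision, group in zip(decisions, groups):
        positives, total = counts.get(group, (0, 0))
        counts[group] = (positives + decision, total + 1)
    rates = [positives / total for positives, total in counts.values()]
    return max(rates) - min(rates)

# Hypothetical outcomes for two demographic groups, "a" and "b".
decisions = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_gap(decisions, groups))  # 0.5: group "a" is favored
```

A gap of 0.5 means group "a" receives positive decisions at a rate 50 percentage points higher than group "b". Relational ethics, as described above, asks us to start not from this calculation but from the perspective of the group on the losing end of it.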
