
MODULE 3

Pragna S
Falsifiability and Incremental Deployment

• Falsifiability: Define testable requirements to reveal potential failures (a minimal test sketch follows this list).

o Example: Conduct safety tests for autonomous vehicles to confirm collision avoidance.

• Incremental Testing and Deployment: Start with small, safe testing contexts and
expand gradually. Be ready to halt or adjust if issues arise.

• Trustworthiness through Testing: Ensure AI aligns with ethical principles.

o Example: Test autonomous vehicles in controlled environments before public release.

• Simulations and Real-World Testing: Use simulations to test AI in complex scenarios, but recognize their limitations. Real-world testing is crucial to capture unpredictable factors.

o Example: Wildlife patrol AI adjusted after real-world testing revealed challenges.

• Cycle of Testing and Adjustment: Define and test falsifiable requirements, use results
to incrementally improve trustworthiness, and refine assumptions as testing expands
to broader contexts.
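
One way to make such a requirement falsifiable is to encode it as an automated test that can fail. Below is a minimal sketch, assuming a hypothetical stopping_distance() stand-in for a vehicle simulator; the 30 m threshold and the simple physics are illustrative, not a real safety standard.

```python
# Falsifiable requirement (illustrative): "At 50 km/h, the vehicle must
# come to a full stop within 30 m of the braking command."

def stopping_distance(speed_kmh: float, friction: float = 0.7,
                      reaction_s: float = 0.5) -> float:
    """Reaction distance plus braking distance from a simple physics model."""
    speed_ms = speed_kmh / 3.6
    reaction_m = speed_ms * reaction_s
    braking_m = speed_ms ** 2 / (2 * 9.81 * friction)
    return reaction_m + braking_m

def test_stops_within_30m_at_50kmh():
    # If this assertion ever fails, the requirement is falsified and
    # deployment should halt until the cause is understood.
    assert stopping_distance(50.0) <= 30.0

if __name__ == "__main__":
    test_stops_within_30m_at_50kmh()
    print(f"requirement holds: {stopping_distance(50.0):.1f} m to stop")
```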

Safeguards Against the Manipulation of Predictors in AI for Social Good (AI4SG)

• Purpose: Reduce the risks of data manipulation and reliance on non-causal indicators
to ensure fair, effective, and trustworthy AI4SG interventions.

• Manipulation of Data Inputs: People might change their behavior or alter data to
achieve favorable predictions, reducing accuracy and fairness.

o Example: Teachers might boost math grades if they know higher scores lower
students' risk ratings.

• Reliance on Non-Causal Indicators: Using indicators that correlate with, but do not
cause, certain outcomes may lead to interventions that address only symptoms
rather than root causes.

o Example: Predicting corporate fraud based solely on observable behaviors might ignore underlying governance issues (a feature-screening sketch follows this section).
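
A simple safeguard is to screen candidate predictors against an explicit policy before training, rejecting inputs that data subjects can cheaply manipulate or that are merely correlational. A minimal sketch; the feature names and policy annotations are hypothetical.

```python
# Each candidate predictor is annotated with whether data subjects can
# easily manipulate it and whether a causal link to the outcome is
# plausible. Only features passing both checks reach the model.
FEATURE_POLICY = {
    # name:              (manipulable_by_subject, plausibly_causal)
    "math_grade":        (True,  True),   # teachers could inflate it
    "attendance_rate":   (False, True),
    "zip_code":          (False, False),  # correlate, not a cause
    "household_income":  (False, True),
}

def screen_features(policy):
    """Keep only features that are hard to game and plausibly causal."""
    return [name for name, (manipulable, causal) in policy.items()
            if not manipulable and causal]

print(screen_features(FEATURE_POLICY))
# -> ['attendance_rate', 'household_income']
```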

Receiver-Contextualized Intervention in AI for Social Good (AI4SG)

• Purpose: Ensure AI-driven interventions respect user autonomy and are tailored to
individual needs and contexts, balancing effectiveness with non-intrusiveness.
• User Characteristics: Design interventions that consider users' specific needs,
capacities, and preferences.

o Example: A reminder system for individuals with cognitive disabilities that minimizes disruption.

• Coordination Methods: Foster a collaborative approach between the user and AI system.

o Example: Interactive software for patients that allows snoozing or delaying prompts.

• Purpose of Intervention: Make interventions purposeful and aligned with the user’s
goals.

o Example: A wildlife patrol app suggesting routes that prioritize user needs.

• Effect of the Intervention: Evaluate and adjust the impact on user experience for
beneficial and minimally disruptive outcomes.

o Example: Allow patrol officers to ignore or modify challenging routes.

• Option of Ignoring or Modifying Interventions (Optionality): Ensure users can decline or alter AI recommendations (see the sketch after this list).

o Example: AI-based scheduling tools that allow users to skip or reschedule prompts.
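
Optionality can be built in at the data-model level, so that every AI-initiated prompt carries snooze and dismiss affordances by default. A minimal sketch with hypothetical names:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Prompt:
    """An AI-initiated intervention the receiver can always override."""
    message: str
    due: datetime
    dismissed: bool = False

    def snooze(self, minutes: int = 10) -> None:
        # The user, not the system, decides when to be interrupted again.
        self.due += timedelta(minutes=minutes)

    def dismiss(self) -> None:
        # Declining a recommendation is always a valid response.
        self.dismissed = True

reminder = Prompt("Time to take your medication", datetime.now())
reminder.snooze(30)   # delay the intervention
reminder.dismiss()    # or opt out of it entirely
```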

Receiver-Contextualized Explanation and Transparent Purposes

• Explainability of AI Systems: AI systems should explain how they work and what
outcomes they produce, tailored to the specific person or group receiving the
explanation.

o Example: An AI-powered educational tool that uses simpler terms and examples matching the student’s grade level.

• Levels of Abstraction (LoA): The complexity of the explanation should match the
receiver’s needs and the goal of the explanation.

o Example: Engineers might need detailed explanations, while users might need
a high-level understanding.

• Purpose of the Explanation: Focus on the purpose of the system, whether it's to
justify a decision, inform a user about how a decision was made, or help them act on
that decision.

• Transparency in Goals: The goal of the AI system should be clear to the receiver.

o Example: Staff should know the goal behind an AI system monitoring hygiene in a hospital.

• Trust and Adoption: Clear and understandable explanations build trust in AI systems, leading to greater acceptance and adoption (a minimal sketch of audience-tiered explanations follows).
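
Tailoring explanations to the receiver can be as simple as keeping one explanation per level of abstraction and dispatching on the audience. A minimal sketch; the audiences and wording are illustrative, not a real system's output.

```python
# One loan decision, explained at different levels of abstraction.
EXPLANATIONS = {
    "engineer": ("Denied by gradient-boosted model; top features were "
                 "debt_to_income=0.62 and credit_history=8 months."),
    "end_user": ("Your application was declined mainly because your "
                 "current debt is high relative to your income."),
    "regulator": ("Decision driven by debt-to-income ratio and credit "
                  "history length; protected attributes were excluded."),
}

def explain(audience: str) -> str:
    """Return the explanation matched to the receiver's needs."""
    return EXPLANATIONS.get(audience, EXPLANATIONS["end_user"])

print(explain("end_user"))
```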

Privacy Protection and Data Subject Consent

• Respect Consent Levels: Honor established consent thresholds when processing personal data.

o Example: An AI health app that asks for permission to access step count and
location.

• Informed Consent When Possible: Aim to secure informed consent for non-urgent
data uses.

• Dynamic Consent: Allow individuals to adjust their privacy settings dynamically (see the sketch after this list).

• Transparency and Communication: Clearly inform individuals when their data is being used in AI applications.

• Avoid "Take It or Leave It" Approaches: Give users more control over their data
rather than forcing them to accept all terms.

• Ethically Questionable Use: Be cautious about using publicly available data for
purposes that individuals may not have explicitly agreed to.

• Balance Privacy with Utility: Strive to find a balance between respecting privacy and
achieving social good goals.
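
Dynamic consent implies that permissions are revocable data, checked at the moment of each use, rather than a one-time agreement. A minimal sketch with hypothetical purpose names:

```python
class ConsentRegistry:
    """Per-purpose permissions the data subject can change at any time."""

    def __init__(self):
        self._granted = set()

    def grant(self, purpose: str) -> None:
        self._granted.add(purpose)

    def revoke(self, purpose: str) -> None:
        self._granted.discard(purpose)

    def allows(self, purpose: str) -> bool:
        # Every processing step checks consent at the moment of use.
        return purpose in self._granted

consent = ConsentRegistry()
consent.grant("step_count")      # user opts in to activity tracking
consent.revoke("step_count")     # and may withdraw that consent later
assert not consent.allows("step_count")
```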

Situational Fairness

• Remove Irrelevant Factors: Exclude variables and proxies from datasets that are
ethically significant but irrelevant to the specific AI outcome.

o Example: In a hiring process, AI should ignore irrelevant factors like race, gender, or religion (a column-dropping sketch follows this section).

• Avoid Reinforcing Biases: Be careful not to use biased data that can lead to
reinforcing unfair outcomes.

• Preserve Important Context: Keep meaningful context that helps ensure decisions
are fair and inclusive.

• Ensure Inclusiveness and Safety: Include ethically significant variables if they help
make the AI more inclusive, support user safety, or fulfill other ethical goals.

• Account for User Sensitivity: Ensure AI respects users' unique characteristics and
interactions, particularly in sensitive areas like healthcare or educational tools.
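
In practice this often means an explicit deny-list applied to the training data: protected attributes and their known proxies are dropped, while ethically significant variables that serve inclusiveness or safety are deliberately kept. A minimal sketch using pandas; the column names are hypothetical.

```python
import pandas as pd

applicants = pd.DataFrame({
    "skill_score": [82, 74, 91],
    "race":        ["A", "B", "C"],               # protected, job-irrelevant
    "zip_code":    ["10001", "60601", "94110"],   # known proxy for race
    "needs_ramp":  [False, True, False],          # kept: supports inclusion
})

PROTECTED_AND_PROXIES = ["race", "zip_code"]

# Exclude ethically significant but outcome-irrelevant variables, while
# preserving context (accessibility needs) that keeps decisions inclusive.
training_data = applicants.drop(columns=PROTECTED_AND_PROXIES)
print(training_data.columns.tolist())   # ['skill_score', 'needs_ramp']
```
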
Human-Friendly Semanticisation

• Support Human Meaning-Making: AI should help people understand and make sense of things, rather than defining all meanings itself.

o Example: A virtual assistant in a hospital that reminds patients to take their medications in a respectful way.

• Focus on Human Empowerment: Allow people to keep control over meanings that
involve empathy, values, or subjective interpretation.

• Use AI to Simplify Routine Tasks: Delegate repetitive tasks to AI to free up time for
humans to focus on meaningful interactions.

• Foster Trust and Autonomy: Ensure AI helps people make sense of information in
ways that respect human autonomy.

• Distinguish AI's Role: Clearly define tasks suitable for AI and reserve more subjective,
empathetic, or interpretive roles for humans.

By following these simplified principles, we can ensure that AI is developed and used
ethically and responsibly, benefiting society as a whole.

……………………………………………………………………………………………………………………………………………

Moving from Principles to Practices

• OECD Standard on AI: Established in 2019 by 36 member countries to promote robust, safe, fair, and trustworthy AI systems through five principles.

• Ethical AI Documents: Over 70 documents in three years with varying priorities. Emerging global consensus on principles: transparency, fairness, responsibility, and privacy.

• Challenges: Transitioning principles into actionable guidance is complex. Risks include 'ethics bluewashing' (tokenism) and 'ethics shirking' (avoidance).

What Ethical AI Means

• Creating Systems: Ethical AI means creating systems that are fair, safe, and
transparent.

o Example: Facial recognition technology can be used for good (like finding
missing persons) but also has risks (bias against certain racial groups). Ethical
AI ensures such risks are minimized.

Methodology

• Typology Design: Aligns ethical principles (beneficence, justice, explicability) with development stages.

• Literature Review: Comprehensive review of 425+ sources for relevance and
actionability.

• System Requirements Translation: A gradual process of translating abstract ethical norms into actionable system requirements (a small mapping sketch follows this section).

o Example: Tesla’s self-driving cars must follow ethical principles like "non-maleficence" (don’t harm people). For instance, how does the car decide between swerving into traffic or hitting a pedestrian?
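
The translation step can be made concrete by pairing each abstract norm with at least one checkable system requirement. A minimal sketch; the requirements shown are illustrative, not a complete specification.

```python
# Abstract ethical principle -> testable system requirement (illustrative).
NORM_TO_REQUIREMENT = {
    "non-maleficence": ("Emergency braking engages within 200 ms of an "
                        "obstacle detected at or above 90% confidence."),
    "justice":         ("Approval rates across demographic groups differ "
                        "by no more than 5 percentage points on audit data."),
    "explicability":   ("Every automated decision stores the top three "
                        "features that influenced it, in plain language."),
}

for norm, requirement in NORM_TO_REQUIREMENT.items():
    print(f"{norm}: {requirement}")
```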

Key Insights and Trends

• Over-Reliance on Explicability: Excessive focus on transparency overshadows other principles like beneficence and justice.

• Individual Focus Over Collective Interests: Tools prioritize individual protection (e.g.,
privacy) but lack focus on societal impacts.

• Lack of Usability: Tools are not user-friendly, limiting adoption in real-world projects.

o Example: An AI that denies loans might explain its decision but still be biased against specific demographics, failing fairness (a parity-audit sketch follows).
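
The gap between explaining a decision and making it fair can be checked directly by comparing outcome rates across groups instead of only inspecting explanations. A minimal sketch of a demographic-parity audit; the records and any acceptable gap are illustrative.

```python
# Loan decisions paired with the applicant's group, used for auditing only.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", False), ("group_b", False), ("group_b", True),
]

def approval_rates(records):
    """Fraction of applications approved, per group."""
    totals, approved = {}, {}
    for group, ok in records:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + int(ok)
    return {g: approved[g] / totals[g] for g in totals}

rates = approval_rates(decisions)
gap = max(rates.values()) - min(rates.values())
print(rates, f"parity gap = {gap:.2f}")
# A large gap flags potential bias even when each decision is 'explained'.
```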

Addressing Ethical Gaps

• Balanced Tool Distribution: Need tools for all ethical principles across development
stages.

• Humanistic Perspective: Incorporate social awareness, education, and transparent system design.

• Impact Assessment: Enhance tools to assess societal and individual impacts of ML systems.

o Example: In healthcare AI, prioritizing individual privacy (e.g., anonymizing patient data) is essential, but developers must also ensure the AI benefits society by identifying public health trends.

Proposed Way Forward

• Collaborative and Multidisciplinary Approach: Involve developers, policymakers, and ethicists to integrate diverse perspectives.

• Regular Reflection and Agile Ethics: Foster iterative development with foresight into
ethical implications.

• Pro-Ethical Business Models: Develop incentives and structures to balance ethical investments with commercial goals.

o Example: Voice assistants like Alexa must consider language and cultural differences to avoid misinterpreting commands in different regions.

Limitations and Challenges

• Typology and Research Limitations: Broad focus excluded sector-specific tools and
oversimplified development stages.

• Governance Uncertainty: Lack of clarity on how tools will impact the regulation of
ML systems.

o Example: Predictive policing AI systems, if not carefully designed, may disproportionately target specific communities, leading to mistrust.

Conclusion

• Historical Context: Ethical considerations in AI have long been recognized (e.g., Turing, Wiener).

• Challenges: Short-term commercial pressures risk undermining ethical AI adoption. Immature tools and lack of competitive incentives hinder progress.

• Urgent Need for Ethical AI: Avoid biased or harmful AI systems and maximize
benefits for societal good.

o Example: Biased hiring algorithms create workplace discrimination. Ethical design ensures fairness in employment decisions.

By addressing these points, we can ensure that AI is developed and used ethically and
responsibly, benefiting society as a whole.
