
Wikimedia Foundation Community Affairs Committee/2023-12-07 Conversation with Trustees

From Meta, a Wikimedia project coordination wiki

The Community Affairs Committee – a committee of the Wikimedia Foundation Board of Trustees – hosted a Conversation with the Trustees on 7 December from 19:00-20:30 UTC. This conversation was an opportunity for community members to speak directly with the trustees about their work. The Board of Trustees is a volunteer body of movement leaders and external experts in charge of guiding the Wikimedia Foundation and ensuring its accountability.

How to participate


This conversation was held on Zoom with a live YouTube stream. The call lasted 90 minutes and consisted of updates as well as open Q&A and conversation.

Agenda

  • Board updates: Sister Projects Task Force, AffCom Strategy, what is next for the Community Affairs Committee
  • Product and Technology updates
  • Developments in AI: the affiliate approach and the Foundation's current work
  • Talking:2024 - an invite to talk about the future of Wikimedia with Foundation senior leadership and trustees

Call notes


Board updates

  • At the last Board meeting of the calendar year, the Board appointed Lorenzo as liaison to the MCDC, replacing Shani.
  • The Board selection process will start in 2024. Because some trustees have terms that are ending, a separate working group within the Board has been created to liaise with the Elections Committee and avoid conflicts of interest. Its members are Dariusz, Nat, Esra’a, and Kathy, from the Governance Committee.
  • This was the last meeting for Audit Committee Chair Tanya; she will stay on as an advisory member as new trustee Kathy Collins steps in. Kathy has already attended WikiCon North America to get to know the communities.
  • Trustees attended regional / thematic events like WikiIndaba, WikiWomenCamp and more. Opportunity for trustees to get to know what volunteers are doing, how to support them, learn from volunteer event organizers, and share information that might be useful to volunteers.
  • The Board liaisons to AffCom are currently supporting AffCom by helping them interview AffCom candidates. When we are finished, we will make a recommendation to the current AffCom members who are not running. We had 32 candidates, many more than we expected!
  • The Sister Projects Task Force is looking at how to open, close, and evaluate sister projects.
  • What’s coming up from the Community Affairs Committee:
    • Continue with the Conversations with the Trustees; continue to visit different conferences and strategic events; continue to support the MCDC; continue to support the Sister Projects Task Force; continue to support senior leadership with everything connected to the community. This is a channel for the Board to continue building bridges between the Foundation and the communities.

Product and Technology updates

  • This annual plan establishes Product and Technology as the largest focus area in terms of both staffing and budget. It covers essential work such as maintenance, upgrades, and fixes, as well as strategic work on improvements. Strategic work includes:
    • Improvements to the New Pages Patrol / PageTriage software, including fixing workflow-breaking bugs, updating deprecated code, and rewriting the New Pages Feed using Vue.js, which will make future fixes and improvements easier to make.
    • “Edit Check”, which will automatically warn newer editors when they have added information without a reference, encourage them to add one, and then check whether their reference comes from a dubious source.
    • Bringing anti-vandalism bots to smaller language wikis that don’t currently have them, so that patrollers can focus on more complex problems.
    • Dark mode, which has been a longstanding request in the Community Wishlist, as a way to make reading and editing Wikipedia more comfortable and accessible visually.
    • Other examples include patrolling on Android, the Watchlist on iOS, upgrades to the Commons Upload Wizard, and more.
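The Edit Check behavior described above (warning editors who add information without a reference) can be illustrated with a toy heuristic. This is not the real implementation, which lives in MediaWiki's editing software; the function name and word-count threshold below are invented for illustration:

```python
import re

# Toy sketch of the Edit Check idea (NOT the real implementation):
# flag an edit that adds a substantial amount of prose without any
# <ref> citation tag.
def needs_reference_prompt(added_text: str, min_words: int = 10) -> bool:
    """Return True if the added text is long enough to plausibly need a
    citation but contains no <ref> tag."""
    has_ref = re.search(r"<ref[\s>]", added_text) is not None
    word_count = len(added_text.split())
    return word_count >= min_words and not has_ref

# An unreferenced factual sentence would trigger the prompt...
print(needs_reference_prompt("The city was founded in 1847 by settlers from the east coast."))
# ...while a referenced (or very short) addition would not.
print(needs_reference_prompt("The city was founded in 1847.<ref>Smith 2001</ref>"))
```

The real feature works inside the editor and also tries to judge the quality of the supplied source; this sketch only captures the first step, detecting a missing citation.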

Talking:2024

  • A tour of conversations between Foundation trustees, leadership, and community members. The goal is to listen and learn together. The big existential questions facing our movement can only be resolved by sharing, learning together, and coming up with solutions.
  • Trustees and Foundation leaders have been talking to a cross-section of community members.
  • Over 40 conversations have taken place to date, setting the stage for multi-year planning. More information and signup are here: https://meta.wikimedia.org/wiki/Wikimedia_Foundation_Community_Affairs_Committee/Talking:_2024
  • Notes are taken from each of the conversations, and themes are being aggregated as we go. The Board of Trustees, Foundation senior leadership, the Endowment Board, and the MCDC are meeting in March; the hope is to bring some of the outputs of these conversations into those sessions, and to communicate some of the key themes out to volunteers around February. The themes heard so far are in line with what we have been hearing: product and tech, external trends, work with affiliates, and AI. These are the topics that will continue to be part of ongoing strategic planning.

AI

  • Wikimedia Poland experience:
    • Lots of media attention and requests for interviews. Using the occasion to talk about the Wikimedia model in general and the opportunity AI presents.
    • Will be creating an FAQ to support people in responding to external requests about Wikimedia and AI.
    • Internal perspective: the chapter is experimenting with AI tools, trying out assistants in beta for grantwriting, creating social media posts, and enhancing brainstorming. Within the Polish community, some editors are unsure whether they want to engage, but some are very enthusiastic. Planning webinars to help the community learn and share how to use AI tools in their Wikimedia work.
  • Foundation work:
    • Production machine learning models are any models that are live in the world, powering things that are being used. Three main areas:
      • Building the infrastructure that runs the models: moving from ORES to a new production infrastructure called Lift Wing, which handles 600 requests per second.
      • Model transparency through model cards: a set of public documentation about each model and how to use it, with human rights reviews, providing for community feedback and governance of the model.
      • Using AI to help existing users: chatting, asking questions, generating SPARQL queries, and so on.
    • Case study: the Android app article descriptions pilot, which recommends potential short descriptions to add to articles. The model looks at an article’s first paragraphs and its descriptions in other languages, synthesizing and simplifying according to community norms. The work started with external researchers who had existing processes for this without AI support, and the pilot was built on Cloud Services. It will be deployed on the ML platform in the coming months, in 25 language editions.
    • Future Audiences is a small part of the annual plan devoted to quick experiments to understand how people search for and create knowledge online. The team built a ChatGPT plugin; so far, we have not seen knowledge search move from Wikipedia to ChatGPT, and knowing that content comes from Wikipedia increases users’ trust in it. We have an opportunity to double down as a platform providing human-generated knowledge, perhaps leveraging AI to make verification simpler and faster. The next experiment builds on that: a plugin that lets people reading information on a different platform (a news source, for example) highlight suspicious claims, search for those claims within Wikipedia, and return information on-screen. AI-augmented search.
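Lift Wing models are served over HTTP through the Wikimedia API gateway. As a minimal sketch of what a client-side prediction call looks like, assuming the publicly documented endpoint pattern and an illustrative model name (verify both against current api.wikimedia.org documentation before relying on them):

```python
import json

# Sketch of constructing a Lift Wing prediction request via the public
# API gateway. The endpoint pattern and the model name "enwiki-damaging"
# are assumptions drawn from public docs, not guarantees.
API_BASE = "https://api.wikimedia.org/service/lw/inference/v1/models"

def build_predict_request(model_name: str, rev_id: int) -> tuple[str, str]:
    """Return the URL and JSON body for scoring one revision."""
    url = f"{API_BASE}/{model_name}:predict"
    body = json.dumps({"rev_id": rev_id})
    return url, body

# Ask the (assumed) English Wikipedia "damaging" model about revision 12345;
# the request itself would be sent as an HTTP POST with any HTTP client.
url, body = build_predict_request("enwiki-damaging", 12345)
print(url)
print(body)
```

Keeping request construction separate from transport, as here, makes the call easy to test offline and to point at a different model or gateway.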

Q&A


Is Sindhi Wikipedia included in the 25 pilot languages for the android app?

In the future will AI chatbots be allowed to edit Wikipedia? What might that look like?

  • It is hard to imagine a human knowledge creation project that doesn’t center humans at the heart of it. What counts as reliability? What kinds of information are neutrally presented? These are human questions, and the reason people love and trust Wikipedia is that it is a human-curated knowledge store. AI can make some tasks easier and more intuitive, but replacing humans would make this a fundamentally different project from Wikipedia.

What is the Foundation / movement doing to prevent harmful AI-generated content from being added to Wikimedia projects?

  • There is wide consensus that ChatGPT would not be a good Wikipedian. The first line of defense against AI-generated mis- and disinformation is existing community standards around reliable sources. There are some danger points: casual editors may be deceived by an AI-generated science website, for example. People have always tried to put bad information into Wikipedia, and combating that is on us as a community more than anything else.

Are the machine learning and research teams planning to train or host our own LLMs soon?

  • Our focus is on the features and products that make the experience on all wiki projects better. Sometimes that means a small model which we can host; sometimes those are larger models. We are actively finding small models and deploying them, as described in the Android work.
  • We want to focus on what the features are that make things better for our projects. When there are needs for something, we can build against those needs. But everything is based on specific use cases. So we’re not aiming for a big branded Wikipedia LLM but rather things that are pragmatically useful.

How does the Foundation plan to respond to AI reuse without attribution or compensation?

  • We come from the free software movement, and people taking this gift and using it to train AIs actually fits within our mission. But attribution is critical and an important part of our values. Even though AI models might not be plagiarizing, they are not citing their sources. How we enforce that is a technical and legal question: the AI doesn’t know where it got things from, so it will be a challenge.
  • Because the models don’t know where they got the information, attribution realistically will not be output-based; it may come in a more general sense from the developer of a model. We submitted a comment to the US Copyright Office on generative AI. If we push too hard on attribution, the people who are reusing our content will increase their emphasis on their uses being fair use, which we also support. This could set up a tension between our stance on fair use and the people reusing our content based on fair use. We are working through many channels on getting attribution.
  • The Enterprise API (our commercial API) will make our content more digestible and will allow search engines to more easily know where the information came from.
  • Sustainability, equity, and transparency are the leading principles for how we work with AI. How can we find ways of encouraging the people who have contributed so much to keep doing so, and find new people interested in contributing to the Commons? The goal is to make continual knowledge input into these systems sustainable.

How is the English fundraising campaign going so far? What has the community collaboration process looked like so far?

  • Revenue is on track to hit targets by the end of December.
  • Collaboration has been a major focus since July. There is a collaboration page on-wiki, and staff have talked to people at Wikimania and WikiCon North America. Collaboration will continue throughout December. Messaging on AI is being submitted by volunteers.
  • This year’s donor campaign is also piloting encouraging donors to edit via the Thank You page. Donors have been signing up and creating accounts: 1,400 accounts have been created through this process, and 11% of those accounts went on to make an unreverted edit within 24 hours.

Should chapters bearing the Wikimedia name and trademark be accountable to the values of the movement? Does the Foundation hold chapters accountable to this and, if so, how does that happen?

  • All Wikimedia organizations including Foundation and all affiliates should uphold Wikimedia movement values.
  • When groups apply to be chapters, the first requirement is that they have a mission that supports the values. Foundation staff and AffCom review all potential affiliates to make sure they will be able to uphold the values.
  • Accountability is mostly done through members. Chapters are independent organizations, which allows them to be autonomous, with their own governance structures. Chapter members elect chapter boards, and the board is responsible for the chapter’s activities and their alignment with the values.
  • If there are issues that are not solved by the chapter’s governance structure, AffCom and Foundation staff can support the chapter in returning to compliance. If that fails, they can remove chapter status.

How does the Wikimedia Foundation ensure accountable use of grant funds and ensure that affiliates are complying with fair employment standards and treating their workers fairly?

  • Affiliates are independent organizations, which means they handle their own employment practices.
  • Each country’s laws are different, and the Foundation often doesn’t have expertise in a given country’s local laws. However, if there are repeated concerns that an affiliate is not following governance best practices, not complying with the law, or not fulfilling the intent of a grant, the Foundation and AffCom will look into it and, if they determine that the complaints are founded, can offer support to that affiliate.
  • Foundation is a major grantor and also a steward of donor money, and is accountable to many stakeholders. The Foundation will take steps to make sure that the grants are spent responsibly and in accordance with the goal of the grant.
  • The Foundation has regular contact with grantees and can request an external audit if needed, by someone who knows the local laws and context.

How do you feel about relaxing the privacy restrictions that prevent the Foundation from leading a letter writing campaign in support of the jailed Arabic Wikipedians?

  • Privacy is only one of the principles we think about when we consider advocacy or letter-writing campaigns. A more important principle is the safety of the people involved, including the people we are trying to help through the advocacy as well as others affiliated or connected with Wikimedia. We owe it to everyone who contributes to think about their safety whenever we take action. We consider safety carefully, speak with experts, often get second and third opinions, and come up with a course of action based on a multitude of opinions.
  • This is all discussed at length in a response posted on Wikimedia-l. It goes into detail on how we do safety reviews and how the Human Rights Team responds with advocacy campaigns.

Why is the proposal for the next Board election giving shortlisting power to the affiliates?

  • The aim is to reduce the burden on candidates and voters.
  • We are discussing this and other ways to shortlist candidates. We think it makes sense for affiliates to play this important role, but we are open to feedback about other ways to shortlist candidates or otherwise reduce the burden on voters and candidates. Thoughts and suggestions can be emailed to askcac(_AT_)wikimedia.org