User:Alywhi/Report

Wikipedia Advising Report

Managing the impact of generative artificial intelligence tools on the Wikipedia online community is proving to be as difficult as it is important. Members of the Wikipedia community and the Wikimedia Foundation (WMF) have a duty to uphold the Foundation’s mission statement, which emphasizes engaging, accurate, and accessible content. AI threatens every part of that mission: it undermines people’s ability to engage in producing content and, at its current state of development, carries a substantial risk of inaccurate, low-quality, and ineffective content. Having spent the past seven weeks engaging both with Wikipedia and with research on online communities, I feel I can bring a fresh yet informed perspective to this conundrum.

The Wikimedia Foundation ought to consider the proposed guidelines in “Wikipedia:Using neural network language models on Wikipedia”. These essentially state that editors may not use artificial intelligence to write content or find sources, but may use these tools for advice on structure and grammar. Users must disclose any use of AI and explicitly state what it was used for.

A large part of Wikipedia is the engagement of its community members in contributing to the site, and AI poses a threat to that engagement and commitment. Contributions made by AI can discourage regular Wikipedia contributors by making them feel irrelevant or useless. They are neither, and it is imperative that Wikipedia retain the engagement and commitment of its human contributors.

There are many ways to motivate people to contribute to an online community, including extrinsic incentives. While Wikipedia is a nonprofit and likely could not offer financial or physical rewards, there are other options. As Willer suggests in “Groups Reward Individual Sacrifice: The Status Solution to the Collective Action Problem,” communities can reward individuals with status-related incentives, such as a badge on their profile, a specially colored name tag, or a variety of other virtual rewards. However, these can feel patronizing and may fill the community with users who are only seeking status.

As the “Motivation crowding theory” Wikipedia article explains, the theory suggests that offering rewards or punishments for certain behaviors may decrease the number of users acting from intrinsic motivation. On a platform such as Wikipedia, genuine passion and interest are imperative for retaining effective and knowledgeable contributors. The best way to prevent AI-related demotivation, therefore, is to assure the Wikipedia community that work is being done to keep AI from contributing in meaningful ways.

As with any online community, moderation is key to Wikipedia’s success, and consistent, strong moderation is at risk from generative AI tools. “Reddit Rules! Characterizing an Ecosystem of Governance” by Fiesler et al. highlights the importance of both formal rules and informal social norms in managing and monitoring user behavior. Wikipedia needs written rules about AI use, but users also need to understand why the community does not tolerate it. Alongside its description of each rule, Wikipedia can include the benefits of following the rule and the harm that may come from violating it. To foster a community that establishes social norms against AI use, that community must be thoroughly informed.

Wikipedia needs to create a united front with each of its communities in establishing strict, consistent rules surrounding AI use, and it should enforce those rules uniformly. While moderation and rules are key to combating AI threats, “Rules and Rule-Making in the Five Largest Wikipedias” by Hwang and Shaw shows that complex rules can deter new editors, who may feel overwhelmed and discouraged. To support user engagement and retention, rules need to be clear and easily understandable. Rules should deter bad behavior, not all behavior.

The Wikimedia Foundation must address the impact of generative AI tools on its community. It can begin by creating clear, enforceable rules, such as those in the proposed guideline “Wikipedia:Using neural network language models on Wikipedia”, and applying them consistently across all Wikipedia communities. Wikipedia needs to take a strong stance against the use of generative AI tools within its community and ensure the community is confident in its moderation and monitoring of this issue. Regulating AI use within the Wikipedia community is imperative to uphold the mission statement and ensure the legitimacy and longevity of the community itself.


References

Fiesler, Casey, Jialun Jiang, Joshua McCann, Kyle Frye, and Jed Brubaker. 2018. “Reddit Rules! Characterizing an Ecosystem of Governance.” Proceedings of the International AAAI Conference on Web and Social Media 12 (1). https://doi.org/10.1609/icwsm.v12i1.15033.

Hwang, Sohyeon, and Aaron Shaw. 2022. “Rules and Rule-Making in the Five Largest Wikipedias.” Proceedings of the International AAAI Conference on Web and Social Media 16 (May): 347–57.

"Motivation crowding theory." n.d. English Wikipedia.

“Wikipedia:Using neural network language models on Wikipedia.” n.d. English Wikipedia.

Willer, Robb. 2009. “Groups Reward Individual Sacrifice: The Status Solution to the Collective Action Problem.” American Sociological Review 74 (1): 23–43.

Xu, Yu. 2018. “The Ecological Dynamics of Organizational Change: Density Dependence in the Rate of Weibo Adoption by Populations of News Organizations.” International Journal of Communication 12 (0): 26.